(event tonight) something analogous to the natural intelligence of our universe

WORK-IN-PROGRESS SHARING
DATE: Thursday 4 December 2025
EVENT: Deep Assignments #03
LOCATION: Apiary Studios, London
///
Diffeomorphism is a project I trace back to 2021, when I trained a StyleGAN3 model on every photo I’d taken. It’s a small model, small enough to train on my own PC, and my entire lifetime catalogue held more kinds of image than it could represent. Some images that came out of it looked slightly familiar. Others were more abstract, their textures and shapes still holding a definite connection to reality.
Through these glitches, I felt a deeper understanding of the nature of this AI model itself emerge. I’ve since been exploring its character, gently contemplating what the proto-perception of AI has to reveal about how intelligence emerges in the universe.
I hacked and reworked this model, keeping the bits normally cropped out, animating it by injecting chaotic data at various points in its process. As the field moved on, I found that images I’d previously discarded began to seem more beautiful and significant than those I was initially drawn towards. I wrote software to work these into undulating landscapes of pattern, texture, and vaguely familiar forms. I found them hypnotic.
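For those who want to see the machinery, here is a minimal sketch of the kind of intervention I mean, using PyTorch forward hooks on a generator from the NVlabs stylegan3 codebase. The layer selection, the noise source and the strength are illustrative assumptions, not my exact pipeline, and unpickling a snapshot requires the stylegan3 repo on your Python path:

```python
import pickle
import torch

# Load a trained generator snapshot (stylegan3 pickles expect the repo's
# dnnlib/torch_utils modules to be importable).
with open('network-snapshot.pkl', 'rb') as f:
    G = pickle.load(f)['G_ema'].cuda()

def chaos_hook(strength):
    # A forward hook that returns a tensor replaces the layer's output,
    # so every pass through this layer gets nudged by fresh noise.
    def hook(module, inputs, output):
        return output + strength * torch.randn_like(output)
    return hook

# Attach to mid-network synthesis layers; stylegan3 names them like
# 'L4_148_512', varying with the model configuration (an assumption here).
handles = [layer.register_forward_hook(chaos_hook(0.1))
           for name, layer in G.synthesis.named_children()
           if name.startswith('L4')]

z = torch.randn([1, G.z_dim]).cuda()
w = G.mapping(z, None)   # latent -> intermediate latent
img = G.synthesis(w)     # activations perturbed on the way through

for h in handles:
    h.remove()
```

Driving that strength with a slowly evolving signal rather than fresh Gaussian noise each frame is one way to get slow, undulating motion.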
When we mould a medium to represent the familiar, the medium itself becomes visible through its flaws and limitations: the crackle of the record, the colourless eyes of marble statues, the blocky auras of jpeg compression. Likewise, we can see generative AI through its mistakes and its clichés. But what is the medium that these flaws reveal? I don’t think the answer is bytes, or artificial neurons, or other digital abstractions, but a process of emergent learning, where a web of pieces is incrementally nudged until it holds a representation of a world. StyleGAN3 doesn’t use text at all. It simply tries to replicate what makes the photos I’ve taken look the way they look. In the time I’ve spent with this emergent proto-perception, I’ve felt like I’m witnessing something analogous to the natural intelligence of our universe.
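That incremental nudging is, concretely, adversarial training. Here is a toy sketch of the idea, nothing like StyleGAN3’s real architecture or training loop, with stand-in networks and stand-in data throughout:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy sketch: a generator G learns to turn noise into images that a
# discriminator D can't tell apart from real photos. Sizes and
# architectures are placeholders, not StyleGAN3's.
z_dim, img_dim = 64, 32 * 32 * 3
G = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU(), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.rand(16, img_dim) * 2 - 1   # stand-in for a batch of photos
    fake = G(torch.randn(16, z_dim))

    # Nudge D's weights towards telling photos from fabrications
    # (the non-saturating logistic loss used by the StyleGAN family).
    d_loss = F.softplus(D(fake.detach())).mean() + F.softplus(-D(real)).mean()
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Nudge G's weights towards images D mistakes for photos.
    g_loss = F.softplus(-D(fake)).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Neither network is ever told what a photo is; each step just nudges both webs of weights slightly, and at scale a representation of the catalogue emerges.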
This evening (tonight, or already the past, by the time you read this), I’m sharing a work-in-progress of slow-moving forms I’ve generated from the world that emerged in this model I trained four years ago. Seventeen minutes, with a score composed by me and featuring vocals from Adriana Minu. It is a personal meditation on this medium of emergent knowledge, flawed yet eerie, and a mirror through which we might catch a glimpse of one of the building blocks of our intelligent universe.
All this while ‘slop’ is word of the year. Our relationship with media and craft is changing for sure. I don’t know what it will become. The trend towards the dull, the shallow and the relentless was already well underway: content lacking the spark of intention that might be traced back to another human soul. That doesn’t mean this moment isn’t profoundly different. In any case, the saturation means we desensitize quicker than ever to images that were remarkable to encounter just a few moments ago. This new work is also an invitation to re-see. In this strange moment of political distrust, insecurity and loneliness, as we reckon with the power over our lives we’ve given to privately owned digital infrastructure, it’s easy to forget quite how fucking remarkable the past five years have been.
Tim
London, 4 Dec 2025
P.S. No plans in the short term to publish the work online. Drop me a message if you’d like to watch it and I’ll see if I can organize something.

