AI research


WHAT
WHEN
Mar, 2025
About
I’m looking into the inner workings of artificial intelligence to study the “function” of silica – the material prism of the project – in its most accelerated form. Computation is founded on silicon, a substance crucial to the fabrication of the microchips at the heart of data processing. The current explosive development of this technology is driven by AI’s growing demand for processing power, making silicon, in a sense, both the medium and the engine of technological acceleration.
I’ve been working with the Department of Computing at Goldsmiths to get a better understanding of what lies beneath the surface of how AI is made and trained, so I can work with it at the level of practice.
In physics, diffusion is the random movement of particles from regions of high concentration to regions of low concentration until they reach a uniform distribution. This offers an important analogue for AI image generation, where everything is a question of noise: a diffusion model is trained by repeatedly adding noise to an image and then learning to reverse the process, removing the noise step by step to recover the original.
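As a minimal sketch of that training loop, assuming a DDPM-style noise schedule and a toy denoiser (the names here are illustrative, not any specific library’s API):

```python
import torch

# Illustrative DDPM-style training step: corrupt an image with noise,
# then train the model to predict that noise so it can be removed again.
T = 1000                                      # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)         # noise added at each step
alphas_bar = torch.cumprod(1 - betas, dim=0)  # cumulative signal retention

def training_step(model, image):
    t = torch.randint(0, T, (1,))             # pick a random timestep
    noise = torch.randn_like(image)           # Gaussian noise
    signal = alphas_bar[t].sqrt()
    spread = (1 - alphas_bar[t]).sqrt()
    noisy = signal * image + spread * noise   # forward process: diffuse the image
    predicted = model(noisy, t)               # the model predicts the added noise
    return ((predicted - noise) ** 2).mean()  # loss: learn to reverse the diffusion
```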
But this idea of a ‘reverse-engineered’ image in generative AI is confusing: surely, it’s all still noise! Once the pixels constituting the original image have dispersed into random constellations, any re-ordering of pixels is simply another noise pattern, even if we recognise some of them as passable representations of our world.
An AI-generated image begins as noise and is shaped by pre-existing text–image pairings in a multimodal model. These pairings are derived from large web crawls such as Common Crawl, in which images are matched with alt-text or nearby descriptions scraped from across the internet. The complexity of these associations effectively sets the boundary of what the model can imagine. These AI phantoms rely entirely on an automated scrape of common contributions to the internet and their associated annotations, which amounts to another kind of noise, or diffusion.
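Generation then runs the learned process in reverse: start from pure noise and steer each denoising step with a text embedding derived from those scraped pairings. A rough sketch under the same assumptions as above, with `embed_text` standing in for a CLIP-style text encoder (hypothetical here):

```python
@torch.no_grad()
def generate(model, embed_text, prompt, shape=(1, 3, 64, 64)):
    cond = embed_text(prompt)  # embedding learned from scraped text–image pairs
    x = torch.randn(shape)     # the image begins as pure noise
    for t in reversed(range(T)):
        eps = model(x, torch.tensor([t]), cond)  # predict noise, conditioned on text
        alpha, ab = 1 - betas[t], alphas_bar[t]
        x = (x - betas[t] / (1 - ab).sqrt() * eps) / alpha.sqrt()  # strip one step of noise
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)  # re-inject sampling noise
    return x  # another ordering of pixels; we may read it as an image
```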
Diffusion increases entropy.
Entropy is a measure of disorder; a uniform distribution is the state of maximum disorder.
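In Shannon’s terms the aphorism can be made precise: the entropy of a distribution over states peaks exactly when the distribution is uniform, which is where diffusion drives a system.

```latex
H(p) = -\sum_{i=1}^{n} p_i \log p_i,
\qquad
\max_{p}\, H(p) = \log n
\quad \text{attained at } p_i = \tfrac{1}{n}.
```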
THANKS TO
Irini Kalaitzidi
Artist working with creative coding and machine learning, Department of Computing, Goldsmiths
Prof Larisa Soldatova
Professor in Data Science and Director of Research, Department of Computing, Goldsmiths


