In this work I revisit my 2018 creation "Neural Glitch 1541266710"
by making a state-of-the-art Stable Diffusion model attempt to reconstruct
my earlier work. These new prompt-driven models operate very differently
from the way GANs worked. Most importantly, the model itself was not trained by
me, but is what I call a "public latent space" - it has been trained on
hundreds of millions of images, which gives it the almost universal capability
to produce and reproduce nearly any image.
In a two-step process, I first use an algorithmic search to discover
a text prompt that captures the semantic and aesthetic essence of the
original work. In the second step, the Stable Diffusion model is given this text
prompt and seeded with the original image, which results in this work.
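For those curious about the mechanics, the following is a minimal sketch of such a pipeline, assuming the open_clip and diffusers libraries. The image file name and candidate prompts are hypothetical placeholders, and a simple exhaustive CLIP-similarity scoring stands in for the actual algorithmic prompt search, which could equally be evolutionary or gradient-based.

```python
# Sketch: rank candidate prompts with CLIP, then run img2img with diffusers.
import torch
import open_clip
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Step 1: score candidate prompts by CLIP similarity to the original image.
# (Exhaustive scoring over a placeholder pool; a real search would be larger.)
clip_model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
clip_model = clip_model.to(device).eval()

original = Image.open("neural_glitch_1541266710.png").convert("RGB")  # hypothetical file
candidates = [  # placeholder prompts; a real search would generate and mutate these
    "a glitched neural portrait, tangled painterly distortions",
    "an abstract face emerging from corrupted network weights",
]

with torch.no_grad():
    img_feat = clip_model.encode_image(preprocess(original).unsqueeze(0).to(device))
    txt_feat = clip_model.encode_text(tokenizer(candidates).to(device))
    img_feat /= img_feat.norm(dim=-1, keepdim=True)
    txt_feat /= txt_feat.norm(dim=-1, keepdim=True)
    scores = (img_feat @ txt_feat.T).squeeze(0)
best_prompt = candidates[scores.argmax().item()]

# Step 2: give Stable Diffusion the found prompt, seeded with the original image.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)
# strength controls how far the diffusion process drifts from the seed image
result = pipe(prompt=best_prompt, image=original, strength=0.6).images[0]
result.save("reconstruction.png")
```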
Whilst the outcome clearly shows more detail and also captures some of the
original work's meaning, one could also say that it failed at the task, since
it was unable to reproduce the convolutional aesthetic of the source. One
possible explanation might be that whilst these new models have been trained
on almost all kinds of imagery, they appear not to have been exposed to any
early AI art.