Ant Scott: Our Faulty and Chunky (digital) Machines

interview by Miguel Leal*

x #06

Chroma - 1984, 2006; Fuji Crystal Archive prints from hi-res digital images

Ant Scott (beflix) lives and works in Bournemouth, England. He is an artist engaged with digital media and so-called glitch art, often using software errors and unwanted data to produce images. Influenced by console games and their raw material, his work is at once nostalgic and detached in relation to technological processes, in both old and new media. In recent years his work has also incorporated photographic processes.
He is one of the authors of Glitchbrowser ( —  2006), a collaborative artwork developed with Dimitre Lima and Iman Moradi. He is also co-editor, with Iman Moradi (who also collaborates in this issue), of the forthcoming book 'Glitch: Designing Imperfection' (Mark Batty Publisher — October 2008).

More info at



This interview is an on-going (never finished) e-mail conversation between Miguel Leal ( and Ant Scott (April-May 2008).

As an extra we have included in this issue a new project — GL:QU — by Ant Scott. It can be accessed here >>



Miguel Leal: How did you become engaged with digital media, and particularly with 'glitch art'?

Ant Scott: Several incidents from my childhood spring to mind... I was given a chunky red LED digital watch for my birthday in 1979. You pressed the button and the display came on for three seconds. I remember the day, standing in the garden in the blazing sun, and I asked an older friend something like - 'Do you think when the date changes at midnight the display goes weird for a bit?'. I was disappointed to discover that nothing weird happened at all, so I guess that was a non-glitch, but I had hoped for one!
Then there was the Little Professor, an electronic toy from Texas Instruments which tested you on mental arithmetic. It was very boring, but became more fun when you rolled the batteries, momentarily breaking the circuit and creating all sorts of interesting nonsense on the display.
I came across proper glitch art with the early home computers of the 1980s. For example, I got to grips with writing some assembler code to display the memory as coloured pixels and scroll through it, which was fascinating and beautiful. It felt like I was doing something terribly subversive that no-one else knew about. Of course I didn't think 'this is glitch art', it was just a bit of fun, but I never forgot it.
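The memory-viewer trick Scott describes can be pictured in modern terms. The sketch below is a hypothetical Python analogue (not his original 1980s assembler): it treats a run of raw bytes as pixels by mapping each byte to a small palette colour, and "scrolling" becomes nothing more than changing the offset into the data.

```python
# Illustrative analogue of the memory-as-pixels trick: map raw bytes to
# palette colours. A sketch only, not Scott's original assembler code.

# A tiny 4-colour palette, in the spirit of early home computer screen modes
PALETTE = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (255, 255, 255)]

def bytes_to_pixels(data: bytes, width: int):
    """Map each byte to a palette colour and chop the result into rows."""
    pixels = [PALETTE[b % len(PALETTE)] for b in data]
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]

def scroll_view(data: bytes, width: int, height: int, offset: int):
    """'Scrolling' through memory is just moving the window offset."""
    window = data[offset:offset + width * height]
    return bytes_to_pixels(window, width)
```

Any structured data fed in this way (program code, fonts, sound samples) produces its own characteristic visual texture, which is presumably what made the original experiment so fascinating.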
Fast-forward twenty years to 2001, I decided to start a glitch art blog on the basis of two things: First, it seemed odd that no-one else had a site about glitches. Second, I had recently picked up a book called The Computer In Art by Jasia Reichardt from a second-hand bookshop in Bristol, and I found the examples of straight-edged austere computer art from the 1960s very inspiring.

But were you trained in computer art or programming in some way? Or is your approach to digital media mainly based on an intuitive relation to machine behaviour, something in-between those incidents you describe and an understanding of its aesthetic potential?

A nice thing about glitches is that it's not necessary to write code - it's about having fun trying to break things and seeing what you get. However, if you can program then you can use that to visualise data, or modify some existing software, such as a video game. Yes, I can program, but my approach to digital media is to simply use it as a container where the output appears. It's just the frame.
Apart from mapping the colours to a different palette, I don't really use digital media and processes at all. In fact I'm trying hard not to use them at all these days. Can I get something glitchy and computery-looking without a computer? I've pretty much settled on traditional photographic methods as a way to approach this, but it needs more experimentation. Because you obviously can't get such complicated glitchy visual patterns when working with tangible media (bits of cut-out paper, etc.), I'm forced to examine what exactly it is about visual glitch that appeals, and to try to distil that. The aim is to get a combination of the computer's aesthetic potential and an analogue low-tech feel. Perhaps the most successful experiment in that vein so far has been the flat-panel photographic prints from 2005.

You are certainly trying to delegate a substantial part of the production process to the machine itself. Are you mainly interested in the ‘becoming surface’ of those glitches — i.e. the images — or are you also fascinated by the lower part of the iceberg — i.e. the unconscious technological processes?

Yes, it is true that the computer does most of the heavy lifting. I don't get any pleasure whatsoever from drawing or painting, so I have to find other ways to make the images I want. Computers are good at this, but I have succumbed to an unfortunate desire to use some analogue processes too, because it feels more like 'proper' art, and there's more scope for accidental discoveries.
So it's not just about strictly technological processes for me. I think it's about setting up situations where interesting outcomes can occur, and then filtering out the good stuff. There's always a layer of indirection between my intentions and the results. A classic example of this is letting paint drips run down a canvas. Setting up a computer program to crash is much the same kind of thing. It's the image itself that interests me more than the process, which I guess isn't a very fashionable thing to say.

Generatives BX413 - 2005
Silver gelatin luminogram produced using the "flat panel" technique
Ilford RC paper, 17.8 × 12.7 cm

We have here another interesting subject: the relation between old and new media. Do you think old media are potential triggers for the use and understanding of so-called new media? To what extent do the flat-panel prints work as mediators between the old and the new, creating an uncanny effect?

In 1990 I accompanied a friend on a photo shoot he was doing for a portfolio. We came across a river with lots of white, swirly pollution at the sides of the bank, which I thought looked great. I was obsessed with fractals at the time and it looked a bit like the zoomed-in Mandelbrot set. I asked him to take a photo of it for me. When I got the print back, it actually looked like an electron microscope scan of some metal. It certainly looked man-made - a grey, machined object, not organic. But then I noticed a tiny bit of colour - it was a small daisy with a little green stalk and a white-and-yellow flower, as daisies tend to be. This accidental inclusion made the image much more special to me because it looked uncanny, to use your word. Certainly, an uncanny effect is a good outcome in my eyes.
What can I say about old and new media that hasn't already been said? I am seriously out of my depth here! In my own experience, as I alluded to earlier, I think I can say that playing with new (digital) media has inspired me to investigate old media, because I became dissatisfied with the aesthetic results of purely digital media, even though I enjoy the speed and power and ease of using it. Those flat-panel prints are a good example. As digital animations on a computer screen, they were unremarkable. But when I took up darkroom classes, I immediately thought it was worth putting the two things together to make long exposures directly off the computer's screen whilst the animation was running. It transformed the quality of the images. It actually made them much worse - blurry and uneven in tone, with plenty of dust speckles thrown in for good measure. But I've never had a better response to a piece of work - suddenly, the images had some emotion in them, even though they were still produced mechanically. So the conclusion I drew from that experiment was that having some analogue, old media involved in the mix was potentially sufficient to make the transition from mere aesthetics to art. Can you engineer a process to evoke emotions? Yes, absolutely. Supermarkets and advertising agencies do it all the time, so why not artists?

Your answer brings us back to an earlier question: does the engineering of processes to evoke emotions, as you put it, mean a particular focus on the formal aspects of the images, on the way they are visually perceived?

Yes, I believe it's the visual perception of an image that matters most, based on my own limited experiences and feedback. People who've bought some of my glitch prints really don't seem to mind that they're computer-generated digital prints - instead they say things like 'oh, that looks like reflections in water', for example. People aren't dumb - they know an oil painting is worth more than a digital print, but when it comes to having images in their environment that please them, it's the actual image that counts. As for ascribing value to art, who it's by, quality, size, and what it's made from are the main determining factors. I just don't know where 'how it's made' fits in to any of this, except in the digital arts field, where novelty of ideas appears to be the currency?

Are you somehow engaged with a metaphysical understanding of technology and its processes? Do you think technology is transcendent to the user, at least when it mumbles and produces nothing more than noise?

I did come up with a taxonomy of glitches a while back in an attempt to generalise the processes as I saw them. It was fun to think about, but not very useful as a means of generating ideas or art, because this tends to come about from getting your hands dirty, playing with things at the grass roots level. So I wouldn't say I was engaged with a meta-viewpoint, but it's at the back of my mind. For example, with the book, we had to get a wide variety of techniques included (because it's digital art!), so it was important to take a detached view and be able to spot when two techniques were essentially the same even though the specific tools might have been quite different.

 Now just hold on while I look up transcendent :-)
… Surpassing others; preeminent or supreme. Lying beyond the ordinary range of perception.

OK, so - technology being transcendent to the user? No. Technology is below the user. We make technology. We are God to technology's Adam and Eve, to use a wholly inappropriate religious metaphor there for no particular reason. I can't imagine anyone would intimate that technology surpasses users.
But perhaps it's more interesting to use the other meaning of transcendent - I think we can say that technology lies beyond the perception of users. When I'm sitting in an aeroplane, I'm thankfully unaware of all the bugs in the code that control whether the plane stays up or falls down. And I don't imagine the technology has to be just mumbling away like TV white noise for it to be unnoticed. Humans are too busy to notice technology at the best of times, but this is a good thing I reckon, it's progress when we can just ignore it. However, I don't think for one moment that we are ignorant of the fact that this technology exists all around us, it's just that we don't need to think about it most of the time. I hope I haven't contradicted myself. We know the technology is there, intellectually, but we don't (need to) perceive it.
If you knew all the ways in which faulty technology controls your life, you'd be too frightened to go out the front door. Heard of the cars whose management systems go haywire when it rains? Get a bit of light drizzle, and next thing you know, you're being run over by a tonne of metal. Nice bit of transcendence for you there!

How would you define 'technological unconscious'?

Technology isn't conscious, so it must be unconscious, hence the 'technological unconscious' is just technology. If my laptop (a battered workhorse IBM Thinkpad, not one of those posh arty Apple doodads) ever became conscious, I'd hit it with the heel of my shoe until it became unconscious. I like my technology unthinking and subservient.
What are you getting at with this question?

I know this was a 'tricky' question, but I wanted to test the concept — or at least the expression — as it is, without prior comments or introductions. With the concept of 'technological unconscious' I am precisely trying to underline all the technological processes which lie 'beyond the perception of users', to use your own expression; something that follows the classical image of the iceberg, with its gigantic hidden mass of ice under the water.
Don't you think technology can follow its own line of indeterminism, creating accidents and surprising events? Don't you think machines can sometimes become hysterical, i.e. unforeseeable in their reactions?

I don't buy into any notion of anthropomorphising technology.
Yes, technology can create accidents and surprising events. But it's only surprising in the sense that we as humans find it too complicated to follow every step of what's happening. If you play noughts and crosses (tic tac toe) a million times, nothing surprising will happen in the game unless you're a bit thick. But if you play chess, surprises can happen, because the complexity of the game lies just a little way beyond our brain's ability to completely assimilate it. Now take a typical piece of software with a few million lines of code...
Incidentally, for some terrifying bedtime reading, see The Risks Digest. This is the bad side of computers creating surprising events!

As you know, technology in itself involves risk and unpredictability. We could even say that each technology demands further developments to limit the risk of accidents (look at the car industry...). Do you really think it's only a question of probability?

 That's a good point - new technology is required to fix the holes in the previous layer. I wouldn't use the term 'probability' though, since that implies a random process is underlying it. It's all totally deterministic, but complicated. In fact, you've reminded me that at a talk several years ago, I proposed that glitches be defined as outcomes of complicated-yet-deterministic processes. It's just when the system gets very complicated, any hidden bugs will tend to produce glitches that look as if they've happened randomly. But the reality will simply be that a number of unrelated factors came together at the same time in such a way as to reveal the bug. Now, OK, those external factors might be random, but the point I want to make is that the machine/software/technology acts deterministically upon whatever inputs are provided to it.

Haven't you ever experienced a real TILT with the machines and programs you play with?

TILT? Isn't that what you do on a pinball game..?

Yes. Normally a TILT is a reaction of the machine against the aggressive movements of the player, which are seen as a transgression of the rules.

OK, I haven't experienced machines turning against me! After all, a machine doesn't know what the rules are in the first place. I was more scared of computers when I was a kid, before I fully realized you couldn't break them, unless you overclock the CPU or something hardware-based like that.

Let's turn back to your own work and your current projects. Some very recent images at your website seem to follow the thread of your glitches but they point also to different production processes...

Those are experiments in entirely artificial glitch aesthetics, made with only a graphics editor. I was trying out ideas using just grey plus another colour of equal perceived brightness - depending on your monitor settings, you might see the two colours appear to swap between foreground and background - a sort of optical illusion. The reason for using these colours plus an artificial glitch was to test if the amount of glitchiness amplified the effect. This was inspired from hearing an interview with Bridget Riley, who noticed that by using wavy lines, the boundary between adjacent colours could be lengthened, which in turn made the optical illusion stronger.

At first glance, it seems you are now far away from your first glitches...

That first type of visual glitch is an area that I think I quite fully explored between 2001 and 2005. That is a long time! I kept a blog with hundreds of pictures, before whittling them down to 25 of the best representatives. Then I got into the flat-panel photographic prints - this is something that I will return to, but I want to wait a couple of years for OLED monitors to appear and become affordable. Then there was the on-going book project with Iman Moradi, which meant I was more focused on hundreds of other people's glitches rather than my own! With the book coming out in October, I will do some more art again in the future, but currently I don't really feel the need to.
Is there any point in doing the same sort of stuff forever? One thing leads to another, and I firmly believe in the value of publishing your experiments whether successful or not, getting it out of your system, and clearing a mental space to move on to the next thing. All you can do is be yourself, and over the long term, a body of work should build up which will have your fingerprint on it, even if you end up a long way from where you started.


* Miguel Leal is one of the editors of this e-zine.