## Computer Art and the Theory of Computation: Chapter 1: The Blooming

What I’d like to do in a series of posts is explore the relevance of the *theory of computation* to *computer art*. Both of those terms, however, need a little unpacking before we talk about their relation.

Let’s start with *computer art*. Dominic Lopes, in *A Philosophy of Computer Art*, makes a useful distinction between *digital art* and *computer art*. *Digital art*, according to Lopes, can refer to just about any art that is or was digitized: scanned paintings, online fiction, digital art videos, or digital audio recordings. *Digital art* is therefore not a single form of art; it spans forms as different as fiction and painting. To call something digital art is merely to say that the art’s representation is or was, at some point, digital. It doesn’t imply that computers are necessary or even desirable to view and appreciate the work.

The term *computer art*, by contrast, describes art in which the computer is crucial as medium. What does Lopes mean by “medium”? He says “a technology is an artistic medium for a work just in case its use in the display or making of the work is relevant to its appreciation” (p. 15). We don’t need to see most paintings, texts, videos, or audio recordings on computers to display or appreciate them; the art’s being digital is irrelevant to most digital art. In computer art, by contrast, the art’s being digital is crucial to its production, display, and appreciation.

Lopes also argues that whereas *digital art* is simply not a single form of art, *computer art* should be thought of as a new form of art. He thinks of a form of art as a kind of art with shared properties that are important to the art’s appreciation. So far so good. But he identifies the crucial property that works of computer art share as interactivity, which he defines such that the user’s actions change the display of the work itself.

I think all but one of the above ideas by Lopes are quite useful. The problem is that there are non-interactive works of computer art. For instance, generative computer art is often not interactive. It often is different each time you view it, because it’s generated at the time of viewing, but sometimes it requires no interaction at all. Such work should be classified as computer art. The computer is crucial to its production, display, and appreciation.

Lopes’s book is quite useful in a number of ways. It’s the first book by a professional philosopher toward a philosophy of computer art. It shows us how a philosophy of computer art might look and proceed. But it is, in the end, a philosophy of *interactive* computer art, which is a more limited thing than a philosophy that can also accommodate *non-interactive* computer art.

Now, why did a professional philosopher writing a philosophy of computer art fail to account for non-interactive computer art in his philosophy? Well, to write a more comprehensive philosophy requires an appreciation of the importance of programmability. For it is programmability, not interactivity, that distinguishes computer art from other arts. And it is at this point that we begin to glimpse that some understanding of the theory of computation might be relevant to an understanding and appreciation of computer art.

I’ll return to this point in another chapter. I’ve given you some idea of what I mean by computer art. Now let’s have a look at the theory of computation.

The theory of computation was inaugurated in the work of the mathematician and logician Alan Turing in 1936, with his world-shaking paper entitled “On Computable Numbers, with an Application to the Entscheidungsproblem”. This is one of the great intellectual documents of the twentieth century. In this paper, Turing invented the modern computer. He introduced us to what we now call the *Turing machine*, which is an abstract idea, a mathematization, an imaginary object that has all the theoretical capabilities of modern computers. As we know, lots of things have changed since then in how we think about computers. However, the Turing machine has not changed significantly, and it is still the bedrock of how we think about computers. It is still crucial to our ability to think about the limits of the capabilities of computers.

And that is precisely what the theory of computation addresses: the limits of the capabilities of computers. Not so much today’s limits, but theoretical limits. The theory of computation shows us what is *theoretically* possible with computers and what is theoretically impossible. “Theoretically impossible” does not mean “probably won’t happen”. It means “will absolutely never ever (not ever) happen as long as the premises of the theory are true”.

Since we are dealing with matters of art, let’s first get a sense of the poetry of the theory of computation. It’s a little-appreciated fact that Turing devised the Turing machine in order to show that there are some things it will never be capable of doing. That is, he devised the modern computer not to usher in the age of computers and the immense capabilities of computers, but to show that there are some things that no computer will ever do. That’s beautiful. The theory of computation arises not so much from attempts to create behemoths of computation as to understand the theoretical limits of the capabilities of computing devices.

If you wish to prove that there are some things that no computer will ever do, or you suspect that such things do exist, as did Turing—and he had good reason for this suspicion because of the earlier work of Kurt Gödel—then how would you go about proving it? One way would be to come up with a computer that can do anything any conceivable computer can do, and then show that there are things it can’t possibly do. That’s precisely how Turing did it.
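Turing’s actual construction concerns computable numbers, but the self-reference at its heart survives in the modern “halting problem” form of the argument. Here is a sketch in Python; all the names (`make_troublemaker`, `always_yes`) are invented for illustration, not anything from Turing’s paper. The idea: given *any* candidate decider, we can mechanically build a program on which that decider must be wrong.

```python
def make_troublemaker(halts):
    """Given any candidate decider halts(program, input), build a
    program on which the decider must give the wrong answer."""
    def troublemaker(program):
        if halts(program, program):
            while True:      # decider said "halts" -> loop forever
                pass
        return               # decider said "loops" -> halt at once
    return troublemaker

# Any concrete guesser is refuted. Try the decider that always says "halts":
always_yes = lambda program, data: True
t = make_troublemaker(always_yes)
# always_yes claims t(t) halts, but t(t) would in fact loop forever.
# The decider that always says "loops" fails symmetrically: it claims
# t(t) loops, but t(t) returns immediately. No decider escapes.
```

The trick is that `troublemaker` is built *from* the decider it defeats, so no single decider can be right about every program, including the ones constructed against it.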

Why was he more interested in showing that there are some things no computer will ever do than in inventing the theoretical modern computer? Well, in his paper, he solves one of the most famous mathematical problems of his day. That was closer to the intellectual focus of his activities as a mathematician and logician, which is what he was. The famous problem he solved was called the Entscheidungsproblem, or the decision problem. Essentially, the problem, posed by David Hilbert in 1928, was to demonstrate the existence or non-existence of an algorithm that could decide, for any mathematical/logical proposition, whether it is provable. More specifically, the problem was to demonstrate the existence or non-existence of an algorithm which, given any formal system and any proposition in that formal system, determines whether the proposition can be derived in that system. Turing showed that no such algorithm can exist.

At the time, one of the pressing problems of the day was basically whether mathematics was over and done with. If it was theoretically possible to build a computer that could settle any mathematical/logical proposition, then mathematics was finished as a serious intellectual enterprise. Just build the machines and let them do the work. The only possibly serious intellectual work left in mathematics would be meta-mathematical.

However, Turing showed that such an algorithm simply cannot exist. This result was congruent with Kurt Gödel’s earlier work, published in 1931, which demonstrated the existence, in any sufficiently powerful formal system, of true but unprovable propositions, so-called undecidable propositions. In other words, Gödel showed that there are always going to be propositions that are true but unprovable. Consequently, after Gödel’s work, it seemed likely that no algorithm could possibly exist which could decide every well-formed proposition.

We glimpse the poetics of the theory of computation in noting that its historical antecedent was this work by Gödel on unprovable truths and the necessary incompleteness of knowledge. The theory of computation needed the work of Gödel to exist before it could bloom into the world. And let us be clear about the nature of the unprovable truths adduced by Gödel. They are not garden-variety axioms. Garden-variety axioms, such as the parallel postulate in geometry, are *independent*. That is, we are free to assume the proposition itself or some form of the negation of the proposition. The so-called *undecidable* propositions adduced by Gödel are true propositions. We are not at liberty to assume their negation as we are in the case of independent axioms. They are (necessarily) true but unprovably true. And if we then throw in such a proposition as an axiom, there will necessarily always be more of them. Not only do they preclude the possibility of being able to prove every true proposition, since they are unprovable, but more of them cannot be avoided, regardless of how many of them we throw into the system as axioms.

Sufficiently rich and interesting formal systems are necessarily incomplete, then, in the sense that they are by their nature incapable of settling the truth or falsity of all the propositions they can express.

So the theory of computation begins just after humanity becomes capable of accommodating a deep notion of unprovable truth. Of course, unprovable truth is by no means a new concept! We have sensed for millennia that there are unprovable truths. But we have only recently been able to accommodate them in sensible epistemologies and mathematical analysis. Unprovable truths are fundamental to poetry and art, also. We know all too well that reason has its limits.

The more we know about ourselves, the more we come to acknowledge and understand our own limitations. It’s really only when we can acknowledge and understand our own limitations that we can begin to do something about them. The first design for a so-called Turing-complete computer—that is, a computer that has all the theoretical capabilities of a Turing machine—pre-dated Turing by a hundred years: Charles Babbage’s Analytical Engine. But Babbage was never able to create his computer. It was not only that he could not manufacture the parts in the way he wanted; he also lacked the theory of computation that Turing created. A great theory goes a long way. The Turing machine, as we shall see, is simplicity itself. Children can understand it. We think of computers as intimidatingly complex machines, but their operation becomes much more understandable as Turing machines.
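To give a taste of that simplicity, here is a minimal Turing machine simulator in Python. The rule format and the little bit-flipping machine are my own illustration, chosen for brevity, not anything from Turing’s paper: a machine is just a table of rules, each saying what to write, which way to move, and which state to enter next.

```python
def run_turing_machine(rules, tape, state="start", max_steps=1000):
    """rules maps (state, symbol) -> (new_symbol, move, new_state);
    move is -1 (left) or +1 (right). The machine halts when no rule
    applies to the current (state, symbol) pair."""
    tape = dict(enumerate(tape))  # sparse tape; unwritten cells are blank
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, " ")
        if (state, symbol) not in rules:
            break  # no applicable rule: halt
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip()

# Example machine: flip every bit, halting at the first blank cell.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
}
print(run_turing_machine(flip, "10110"))  # -> 01001
```

That is the whole model: a tape, a head, a state, and a finite rule table. Everything a modern computer can compute, a machine of this shape can compute too, given enough tape and steps.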

We could have had computers without the theory of computation, but we wouldn’t have understood them as deeply as we do, wouldn’t have any sense of their theoretical limitations—concerning both what they can and can’t do. And we wouldn’t have been able to develop the technology as we have, because we simply wouldn’t have understood computers as deeply as we do now, wouldn’t have been able to think about them as productively as we can with the aid of a comprehensive theory. Try to put a man on the moon without Newtonian mechanics. It might be doable, but who would want to be on that ship? Try to develop the age of computers without an elegant theory to understand them with? No thank you. That sounds more like the age of the perpetual blue screen.

Gödel’s incompleteness theorems are not logically prior to Turing’s work. In other words, Turing’s work does not logically depend on Gödel’s work—in fact, incompleteness can be deduced from Turing’s work, as computer scientists sometimes point out. But it was Gödel’s work that inspired Turing’s work, not only in the ways already noted but even in its use of Georg Cantor’s diagonal argument. Gödel’s work was the historical antecedent of Turing’s work. Gödel’s work established a world view that had the requisite epistemological complexity for Turing to launch a theory of computation whose epistemological capabilities may well encompass thought itself.
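Cantor’s diagonal argument itself is compact enough to sketch in a few lines of Python. The enumeration below is a hypothetical example of my own, chosen only to show the mechanism: given any claimed enumeration of infinite 0/1 sequences, flipping the diagonal produces a sequence the enumeration necessarily missed.

```python
def diagonal_flip(enumeration):
    """Given enumeration(n) -> the n-th infinite 0/1 sequence (itself a
    function from position k to a bit), build the diagonal sequence that
    differs from the n-th entry at position n."""
    return lambda n: 1 - enumeration(n)(n)

# Hypothetical enumeration: the n-th sequence is constantly n % 2.
enum = lambda n: (lambda k: n % 2)
d = diagonal_flip(enum)

# d disagrees with every enumerated sequence at its own index,
# so d was never in the enumeration:
assert all(d(n) != enum(n)(n) for n in range(100))
```

The same self-referential move powers Gödel’s construction and Turing’s: build an object guaranteed to differ from (or defeat) every item on any purported exhaustive list.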

The theory of computation does not begin as a manifesto declaring the great capabilities of computers, unlike the beginnings of various art movements. Instead, it begins by establishing that computers cannot and simply will never ever solve certain problems. That is the main news of the manifesto; it means that mathematics is not over, which had been a legitimate issue for several years. Were computers going to take over mathematics, basically? Well, no. That was very interesting news. You don’t often get such news in the form of mathematical proofs. News that stays news. The other news in the manifesto is almost incidental: oh, by the way, here is a mathematization of all conceivable machines—here is the universal machine, the machine that can compute anything that any conceivable machine can compute.

Turing’s famous paper laid the foundation for the theory of computation. He put the idea of the computer and the algorithm in profound relation with Gödel’s epistemologically significant work. He laid the philosophical foundation for the theory of computation, establishing that it does indeed have important limitations, epistemologically, and he also provided us with an extraordinarily robust mathematization of the computer in the form of the Turing Machine.

Turing’s paper is significant in the history of mathematics. We see now that the development of the computer and the theory of computation occurs after several hundred years of work on the “crisis of foundations” in mathematics and represents a significant harvest or bounty from that research. At least since the early eighteenth century, when Bishop Berkeley famously likened Newton’s fluxions, the infinitesimal quantities of the calculus, to “the ghosts of departed quantities”, and especially since the nineteenth-century birth pains of non-Euclidean geometry, mathematicians had understood that the foundations of mathematics were informal, possibly contradictory, and needed to be formalized in order to provide philosophical and logical justification and logical guidelines in the development of mathematics from first principles.

There’s a straight line from that work to the work of Frege, Cantor, and Gödel. And thence to Turing. The theory of computation, it turns out, needed all that work to have been done before it could bloom. It needed the philosophical perspective and the tools of symbolic logic afforded by that work. Because the theory of computation is not simply a theory of widgets and doodad machines. At least since the time of Leibniz in the seventeenth century, the quest to develop computing devices has been understood as a quest to develop aids to reason and, more generally, to the processes of thought.

The Turing Machine and the theory of computation provide us with machines that operate, very likely, at the atomic level of thought and mind. Their development comes after centuries of work on the philosophical foundations of mathematics and logic. Not that this foundation is flawless: after all, it is necessarily incomplete and perhaps only relatively consistent, rather than absolutely consistent. But it is good enough to give us the theory of computation and a new age of computers that begins with a fascinatingly humble but far-reaching paper entitled “On Computable Numbers, with an Application to the Entscheidungsproblem” by Alan Turing.

It changes our ideas about who and what we are. Computer art, without it, would be utterly different. Just as would the world in so many ways.

**********************

Here are links to the blog posts, so far, in Computer Art and the Theory of Computation:

Chapter 1: The Blooming

Chapter 2: Greenberg, Modernism, Computation and Computer Art

Chapter 3: Programmability

Chapter X: Evolution and the Universal Machine

Fascinating articles, Jim. I look forward to seeing where you proceed with the relevance of theory of computation to computer art. I’ve been thinking about interactivity recently, so let me consider your following statement:

I think I agree with you, but let me play devil’s advocate for a bit. Let’s think of some cases:

1) a) A writer uses a word processor and spell-checker to write a novel. b) A couple of months later it is published as a paperback.

2) a) An artist uses commercially available software and a Wacom tablet to draw a comic book (as in: The DC Comics Guide to Digitally Drawing Comics, Freddie E Williams II). b) The comic book is printed, and is indistinguishable from conventionally drawn comics.

3) a) An artist uses a fractal art generator to create an image that they would not have been able to create otherwise. b) The image is printed out and exhibited.

4) a) An art student uses a digital camera to take a number of pictures for a photography class. b) A digital picture display (which cycles through the images sequentially) is used to exhibit them.

5) a) An artist uses Processing to create a program that repeatedly draws sequences of complex images from primitives (circles, polygons, etc.). b) The program is exhibited on a computer screen, and the shapes and colors of the images vary (using a sensor) based on how close the viewer is.

6) a) A poet programs a scriptable tool, like JanusNode, that is able to generate poetry. b) Another poet writes scripts for JanusNode to generate a specific type of poetry. c) Some JanusNode poems are printed out and exhibited.

Instead of getting tied down by terms, I’d like to cluster some of these cases.

First cluster: 1a, 2a, 4a, 5a.

The writer with the spell-checker, the digital comic-book artist, the digital photographer, and the shape animator, are all using computers to make their task easier.

The task could be done without the computer (although there might be more spelling mistakes, the drawings might take longer, even a photorealistic painting wouldn’t look completely like a photograph, and the shape animations might require a movie camera).

The artwork is “born digital”: created using computer inputs and computational algorithms, and stored on a computer.

There seems to be an ordering in terms of how vital the computer is to the task. Spell-checking is purely a time-saving extra, programs for drawing make the job much easier (it is easier to edit if you get something wrong), animation is drudgery without a computer, and a child can do with digital photographs what an illustrator would take months to accomplish.

Second cluster: 3a, 6a.

The fractal artist and the poetry generator programmer are both using computers to make the task possible, rather than easier.

This is possibly on a continuum with the first cluster, since theoretically an artist could draw fractal art by hand, or create specifications for oulipo-like poetry generation algorithms, but without a computer this is hardly practical.

The fractal image and the poetry generator are both “born digital” as in the first cluster.

Third cluster: 1b, 2b, 3b, 6c

The artwork, which was created with the help of a computer, is instantiated in non-computational form.

The book, comic book, and poem are generally indistinguishable from versions that were not created with the help of computers. The fractal image is more of an exception, since fractal images are often obviously computer-generated.

Fourth cluster: 4b, 5b, 6b

A human interacts with the digital picture, shape animation, or poetry generator.

Ordering is by increasing levels of interaction (Digital picture: passive observation. Shape animation: simple proxemic input. Poetry generator: engagement with programmability of computational artifact.)

I’m not going to try to say what is and what isn’t computer art, and I’m not familiar enough with Lopes’s ideas to critique them. But in my opinion, any definitive analysis of computer art should consider the following:

how the artist uses the computer is important: e.g. using a spell-checker vs. developing a tool for poetry generation scripts (clusters 1 and 2). Part of this is the type of computational power used; this may be what you’re getting at in discussing programmability. Part of it is what we have come to expect from a computer; to a society that has never seen computers, novels written with a word processor may indeed be computer art.

cluster 3 (non-computational media that transmit artwork authored with computers) suggests that even if the authoring process is highly computational, the audience experience need not be. In this case, all computational work has finished by the time the audience reaches the product; any further processing occurs solely in the mind of the audience member.

even computationally-mediated audience experiences (cluster 4) represent a continuum of experiences with computation. In some cases it is passively observing the results of simple animations, in other cases reactions to simple inputs, in other cases engagement with fairly sophisticated scripting languages.

In other words, interactivity occurs in at least three different ways. It may be OK to have an umbrella term for all of these, but the distinctions are important.

Thanks, Edde. Instead of saying “Such work should be classified as computer art. The computer is crucial to its production and its display”, I should have added that the computer is also crucial to the appreciation of the work.

The idea I’m trying to capture in computer art is art in which the computer is crucial as medium. And that’s art in which the computer is crucial to the appreciation of the work and its production. So:

1 is not computer art, 2 is not, 3 is, 4 is not, 5 and 6 are.