Posts Tagged ‘algorithm’
These were created on invitation to make a work related to self-portraiture for Scenes of Selves, Occasions for Ruses, a group exhibition at the Surrey Art Gallery. The curator saw an earlier dbCinema piece I did called The Club that incinemates the faces of my favorite North American politicians, businessmen, and psychopaths. He asked me to do related work with photos of myself rather than Jeffrey Dahmer, Paul Wolfowitz, Russell Williams, George Bush, and the rest of that psychotic, murderous crew. Which seemed like a remarkably strong opportunity to at least make an idiot of myself.
Let me show you the ‘trailers’ to the two resulting videos. What I’d like to show you are slideshows made of screenshots from the two videos. The videos are made of dbCinemations/collages of 53 images of me from the day I was born to my current grizzled state at 53 years of age. The Surrey show will run from September 15, 2012 (the opening is from 7:30-9:30pm) till December 16, 2012. The show was curated by Jordan Strom.
The first trailer is at http://vispo.com/dbcinema/selfportrait2/index.htm?n=1 . The video of which these screenshots are composed used two dbCinema brushes. One of the brushes ‘paints’ a letter from my name each frame. The other brush paints a circle each frame. Each of the brushes (usually) paints a different photo. So we see two simultaneous photos of me being drawn. The man and the baby. Etc. A brush paints a given photo for several seconds and then paints a different photo. The slideshow is composed of 47 still images.
The second trailer is at http://vispo.com/dbcinema/selfportrait3/index.htm?n=1 . The video used one dbCinema brush: a Flash brush. In other words, the brush was a SWF turned into a mask. The shape of the brush was a curving, undulating, rotating, translated line. Each frame of the video, dbCinema rendered one brush stroke, one rendering of the brush image; the curving line’s paint was sampled from photos of me. The brush would sample from a photo for several seconds before moving on to another photo. What we’re looking at here is not the video but 17 screenshots from the video.
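The core idea both trailers share — a shaped brush whose ‘paint’ is sampled from a source photo — can be sketched in a few lines. This is my own illustrative code, not dbCinema’s actual Director source; the circular mask stands in for whatever shape the brush happens to be (letter, circle, undulating line):

```python
def stamp_circle(canvas, source, cx, cy, radius):
    """One 'brush stroke': copy pixels from `source` onto `canvas`,
    but only inside a circular mask centered at (cx, cy). The brush
    shape supplies the mask; the source photo supplies the paint."""
    h, w = len(canvas), len(canvas[0])
    for y in range(max(0, cy - radius), min(h, cy + radius + 1)):
        for x in range(max(0, cx - radius), min(w, cx + radius + 1)):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                canvas[y][x] = source[y][x]

# toy demo: 'paint' from an all-1s source photo onto a blank canvas
canvas = [[0] * 8 for _ in range(8)]
source = [[1] * 8 for _ in range(8)]
stamp_circle(canvas, source, 4, 4, 2)
```

Repeating such stamps each frame, while periodically swapping the source photo, gives the accreting collage effect the videos show.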
In the main, the man does not cohere. No coherent person emerges from this process of forcibly joining / collaging / synthesizing / remixing these 53 photos of me. It doesn’t magically tell me who I have always been. Or does it? Or if not, what does it suggest? You could say “If you don’t know who you’ve always been, no piece of art is going to clue you in.” Well I do kinda know. On the other hand, I do seem to tell myself a lot of stories.
It seems what the self-portrait does for me mainly is to problematize the notion of the existence of a person whom I have always been. The images in the video are messy. Like birth mess. Perhaps that’s part of our discomfort in life. We’re always in the midst of our own birth mess. And death stink. As Bob Dylan once observed, “He not busy being born is busy dying.”
I used Camtasia 8 to create this video. I’ve used the voice-over capabilities of Camtasia before to create videos that talk about what’s on the screen, but this is the first time I’ve been able to use the webcam with it. Still a few bugs, though, it seems: at times the voice and the video fall noticeably out of sync.
Still, you get the idea. I’m a big fan of Joe Keenan’s MOMENT and am glad I finally did a video on it.
Computer Art and the Theory of Computation: Chapter 2: Greenberg, Modernism, Computation and Computer Art
In a short but influential piece of writing by Clement Greenberg called Modernist Painting written in 1960—and revised periodically until 1982—the art critic remarked that “The essence of Modernism lies, as I see it, in the use of characteristic methods of a discipline to criticize the discipline itself, not in order to subvert it but in order to entrench it more firmly in its area of competence.” Such sweeping generalizations are always problematical, of course. But I want to use the Greenberg quote to tell you an equally problematical story about the birth of the theory of computation and, thereby, computer art. Humor me. It’s Clement Greenberg. Come on.
The work I’ve mentioned by Gödel and Turing happened in the thirties, toward the end of modernism, which was roughly from 1900 till 1945, the end of World War II. So it’s work of late modernism.
Let’s grant Greenberg clemency concerning his conceit, for the moment, that the “essence”—itself a word left over from previous eras—of modernism, of the art and culture of that era, at least in the west, involved a drive to a kind of productive self-referentiality or consciousness of the art itself within the art itself. What work could possibly be more exemplary of that inclination than the work by Gödel and Turing that I’ve mentioned?
What I’d like to do in a series of posts is explore the relevance of the theory of computation to computer art. Both of those terms, however, need a little unpacking/explanation before talking about their relations.
Let’s start with computer art. Dominic Lopes, in A Philosophy of Computer Art, makes a useful distinction between digital art and computer art. Digital art, according to Lopes, can refer to just about any art that is or was digitized. Such as scanned paintings, online fiction, digital art videos, or digital audio recordings. Digital art is not a single form of art, just as fiction and painting are different forms of art. To call something digital art is merely to say that the art’s representation is or was, at some point, digital. It doesn’t imply that computers are necessary or even desirable to view and appreciate the work.
Whereas the term computer art is much better suited to describing art in which the computer is crucial as medium. What does he mean by “medium”? He says “a technology is an artistic medium for a work just in case its use in the display or making of the work is relevant to its appreciation” (p. 15). We don’t need to see most paintings, texts, videos or audio recordings on computers to display or appreciate them. The art’s being digital is irrelevant to most digital art. Whereas, in computer art, the art’s being digital is crucial to its production, display and appreciation.
Lopes also argues that whereas digital art is simply not a single form of art, computer art should be thought of as a new form of art. He thinks of a form of art as a kind of art with shared properties such that those properties are important to the art’s appreciation. He defines interactivity as meaning that the user’s actions change the display of the work itself. So far so good. But he identifies interactivity as the crucial property that works of computer art share.
I think all but one of the above ideas by Lopes are quite useful. The problem is that there are non-interactive works of computer art. For instance, generative computer art is often not interactive. It often is different each time you view it, because it’s generated at the time of viewing, but sometimes it requires no interaction at all. Such work should be classified as computer art. The computer is crucial to its production, display, and appreciation.
Aleph Null makes color music. Colors are tones. Musical notes are tones. Music is tones moving in time. Aleph Null makes changing color tones move in time. There is no audio.
Aleph Null is an instrument of color music. This is about how to play it. It’ll play on its own. But it profits immensely from a human player interceding continually. It’s interactive online art.
Color music in Aleph Null has a simple structure. There is a central color. It’s the main color. All the other colors are within a certain distance from the central color. That distance is called the color range.
Here’s how to change the central color.
- Click the Aleph Null logo at top left or press the ‘1’ key to make the controls visible.
- Press the ‘2’ key or click the input box labelled ‘central color’ to make the central color color-picker visible.
- Click around in both parts of the color-picker to see how it works. The current central color is displayed in the central color input box.
The colors Aleph Null uses are all random distances from the central color and these distances do not exceed the color range value. The lower the color range value, the closer all the colors are to the central color. The higher the color range value, the greater the range of colors that Aleph Null will use. If the color range is set to 0, Aleph Null only uses one color: the central color. If the color range is 255, any color might be used.
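The central-color/color-range scheme can be sketched as follows. This is an illustrative reconstruction, not Aleph Null’s actual source; it assumes the “distance” is a per-channel offset, which is one plausible reading of the description above:

```python
import random

def random_color_near(central, color_range):
    """Pick a color whose channels each lie within `color_range`
    of the corresponding channel of the central color, clamped
    to the valid 0-255 range."""
    return tuple(
        max(0, min(255, c + random.randint(-color_range, color_range)))
        for c in central
    )

central = (180, 40, 220)  # the central color (R, G, B)

# color_range = 0: only the central color itself is ever used
only_color = random_color_near(central, 0)

# a larger range permits colors farther from the central color
swatch = [random_color_near(central, 64) for _ in range(8)]
```

With the range at 255, any channel value is reachable from any central color, which matches the behavior described: any color might be used.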
Aleph Null is best viewed by the light of a full moon. Or near full moon. Same with the set of stills I made. I mean they do like a bit of darkness.
If you’re using a PC, I’d recommend Chrome to view Aleph Null. At least on my machine, Chrome provides the smoothest performance. Firefox provides a similarly high framerate, but is a bit jerky from time to time. Internet Explorer kind of sucks. On the Mac, Chrome, Firefox, and Safari seem to be fine.
The Club is a moving-image digital collaging of 57 images of selected North American politicians, businessmen, and psychopaths from the eighties till the present. There’s also a linked slideshow of some stills from the video.
The politicians are conservatives who have blasted away both at home and abroad. Via deregulation, the shock doctrine, and explicitly military means. The businessmen are CEOs who are mostly now behind bars, or have been. The psychopaths include (Ex-Colonel) Russell Williams who, until the time of his arrest for two sex murders, headed CFB Trenton, the largest military air-base in Canada.
So it’s a bit of a Dorian Gray piece. But they are each other’s deformities.
Here’s what Andy Warhole said about The Club: “they look like some kind of Auschwitz-Chernobyl mutant legacy, and maybe they are — this is like morphing, blocpix, mr. potatohead, and various slice-n-dice technologies… but not them — this is new — and of course i love your politics ”
Much of the work I’ve done with dbCinema, the graphic synthesizer I wrote in Adobe Director, has been toward beauty. This is quite different. But The Club was still made with dbCinema. There’s other work I’ve done with dbCinema here.
Jörg Piringer is a sound poet and poet-programmer currently living in Vienna, Austria. He really knows what he’s doing with the programming, having a master’s degree in Computer Science. And his sound work, both in live performance and in synthesis with auditory and visual processing, is quite remarkable. I saw him in Nottingham and Paris, and was very impressed on both occasions.
He’s just released a new piece, a video called Unicode. It’s 33:17 long, and simply displays Unicode characters. Each character is displayed for about 0.04 seconds. The video displays 49,571 characters.
It’s a video, but it’s a conceptual piece. The characters in this video are all symbols and each makes but the briefest appearance. A cast of thousands; Bar and Yeace.
Wikipedia describes Unicode thus:
Unicode is a computing industry standard for the consistent encoding, representation and handling of text expressed in most of the world’s writing systems. Developed in conjunction with the Universal Character Set standard and published in book form as The Unicode Standard, the latest version of Unicode consists of a repertoire of more than 109,000 characters covering 93 scripts, a set of code charts for visual reference, an encoding methodology and set of standard character encodings, an enumeration of character properties such as upper and lower case, a set of reference data computer files, and a number of related items, such as character properties, rules for normalization, decomposition, collation, rendering, and bidirectional display order (for the correct display of text containing both right-to-left scripts, such as Arabic and Hebrew, and left-to-right scripts). As of 2011, the most recent major revision of Unicode is Unicode 6.0.
Piringer’s Unicode simply shows us symbols but, to me, it illustrates how our notion of language has been expanded to not only the multi-lingual but also to include code. Not only do we see many of the world’s scripts but a good deal of abstract symbols of code.
By the way, his web site at joerg.piringer.net is well worth checking out.
The graphics in the first Slidvid 3 slideshow are old ones; they’re screenshots from a generative, interactive Shockwave piece I wrote called A Pen. I’ve had the screenshots on my site for quite a while, but not in a slideshow. The experience of them in a slideshow is more to my liking. Less work for the viewer. More options for the viewer and the presenter. And just a classier presentation.
The graphics in this slideshow were made with a virtual pen that has four nibs. The ‘ink’ of each nib is a lettristic animation that leaves trails as the pen moves the nibs/animations around the screen. Think of the nibs as being attached to the pen by long loose springs. When you click and drag the mouse in the Shockwave piece (not the slideshow), the nibs eventually catch up with you. And you can adjust things like the size and opacity of each nib. Hence the sort of graphics you see in this post. The project A Pen consists of both the interactive Shockwave piece and also the slideshow of screenshots taken from the Shockwave piece in action.
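The lagging, catching-up motion of the nibs is classic spring-follower behavior. A minimal sketch of one nib chasing the pen, per animation frame, might look like this (the spring and damping constants are my own assumptions, not values from the Shockwave piece):

```python
def spring_step(pos, vel, target, k=0.1, damping=0.8):
    """Advance a nib one frame: accelerate toward the pen's position
    (the spring pull), then bleed off some velocity (damping), so the
    nib overshoots, oscillates, and eventually settles on the pen."""
    ax = (target[0] - pos[0]) * k
    ay = (target[1] - pos[1]) * k
    vel = ((vel[0] + ax) * damping, (vel[1] + ay) * damping)
    pos = (pos[0] + vel[0], pos[1] + vel[1])
    return pos, vel

# a nib starting at the origin chases a pen held at (100, 0)
pos, vel = (0.0, 0.0), (0.0, 0.0)
for _ in range(200):
    pos, vel = spring_step(pos, vel, (100.0, 0.0))
# after enough frames the nib has (nearly) caught up with the pen
```

Drawing the nib’s animation at `pos` each frame, without clearing the screen, produces the trailing strokes described above; a looser spring (smaller `k`) makes the lag, and so the trails, longer.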
Gregory Chatonsky is a French/Canadian artist who has created a significant body of net art. Here are a couple of pieces of his I found that still work and are compelling:
“The Revolution Took Place in New York is a fictional story generated in real time from an internet source. A text generator gives shape to an infinite novel bearing close resemblance to the work “Projet pour une révolution à New York” written by Robbe-Grillet in 1970: Ben Saïd walks on the streets of the American metropolis and plots something. Some words are associated to video fragments, others to sounds gleaned on the network and others are automatically translated into images using Google. The structured association of these heterogeneous elements generates a narrative flow simultaneous with the network flow.”
Each time I’ve opened this piece, it’s been different. What surprised and charmed me most was how the narrative made sense, often, and kept me interested in where it was going. That is very unusual indeed in generative works. I’m referring to the text itself. But the way the text goes with the images was also, often, quite interesting.
About a year ago, John Cayley made a post on NetPoetic entitled “An Edge of Chaos”. In it he delimits a constraint-based networked-writing process: “Write into the Google search field with text delimited by quote marks until the sequence of words is not found. Record this sequence….”
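Cayley’s constraint is itself a small algorithm: grow the quoted phrase one word at a time until the search comes back empty, then record the phrase. Here is a sketch of that loop; `hit_count` is a toy stand-in for an actual Google query (which Google’s API terms make awkward to automate), searching a tiny in-memory corpus instead:

```python
def first_unfound_prefix(words, hit_count):
    """Follow Cayley's constraint: extend the quoted phrase word by
    word until the search returns no results, then record it."""
    phrase = []
    for word in words:
        phrase.append(word)
        if hit_count(" ".join(phrase)) == 0:
            return " ".join(phrase)
    return None  # every prefix of the line was found somewhere

# toy stand-in for the search engine: substring search over a corpus
corpus = ["the edge of chaos", "the edge of town"]
def hit_count(query):
    return sum(query in doc for doc in corpus)

line = first_unfound_prefix("the edge of chaos sings".split(), hit_count)
```

Against the real web, each such recorded sequence is a phrase that (at that moment) no indexed page contains — the raw material for the no-results poem described below.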
A couple of weeks ago, I woke up with the idea of making a poem composed entirely of lines that returned no search results. “Wow”, I thought to myself, “what a great idea”. I had forgotten it was John’s idea.
If this situation had occurred in 2014 (for example), and on waking I had told the idea to my girlfriend, perhaps the instant-speech-checking algorithmically-networked microphone next to our bed might have immediately alerted me to my potential plagiarism. As it is, my memory had to slowly percolate John’s prescient precedent to the surface of my mind like a splinter.
Neuronal latency in the 21st century data avalanche is a vestigial design flaw that needs to be technologically cauterized.
Imagine that (while typing / while speaking), footnotes, bibliographies and source attributions immediately auto-generate, links sprout around text, and areas of uniqueness are spontaneously (and perhaps effortlessly) patented. The race to network becomes a race to brand segments of communication, to demarcate phrases of language, to colonize conjunctions of text in the same way attributions of authorship emerged from the book.
A writer becomes a sewer (sic pun) of uniqueness. Instead of quotation marks, a new grammar of overlapping links allows the subtlety of appropriated text’s multiple inheritances to Xanadu off towards diverse sources. Instead of Flarf, context-specific algorithmic-grammars differentiate between semantically meaningful units of language and word-salad collage-spew net-wrack.
Dystopic singularity theories aside, an era of instantaneous as-you-type network-search is arriving. Google Instant is just one stride in the sprint toward word-processing software that automatically checks writing for repetition and rewards writing that is both meaningful and unique.