Archive for the ‘Jim Andrews’ Category

Digital Poetry in Digital Literacy

Poetry has been associated with the teaching of literacy for a long time, because poetry, in some ways, is the cherry on top of literacy. In poetry we see something approaching our full humanity expressed in the technology of writing. Writing is a complex, subtle, highly expressive technology. Poetry is typically considered the highest form of writing because that’s where we learn how to feel with language. Language in poetry carries human feeling, emotion, attitude, the tone of the inner voice, as well as thought. Poetry pushes the capabilities of language, tests it, throws it off a cliff, retrieves it, does it all again.

Computing environments have changed our typical reading and writing environments a great deal. We now typically read and write not only language but also images, sound, video, and code/programming. Also, the texts we read are often now interactive. Programming responds to what we write. All this changes what it means to be literate in the contemporary world. Just as poetry, for at least hundreds of years, has been the apogee of literacy, so too with digital poetry in digital literacy.

My first experiences with using technology artistically go back to my radio days in the 80s. I’d like to write about the dawn, for me, of understanding something about using technology artistically. Because it’s relevant now to our digital experience and to digital poetry/literature.

I produced a literary radio show each week for six years in the 80s. At first, what I did was tape poets and fiction writers reading, and aired that. Sometimes I would do a bit of production on the material.

But then I heard a life-changing tape from Tellus. It was their #11 issue, The Sound of Radio, and it featured work by Gregory Whitehead, Susan Stone, Jay Allison, Helen Thorington and others. It was miles beyond what I was producing. It was interesting radio art. I was just putting work for print onto tape/radio. The Tellus tape was audio writing. This was art in its own right. Especially in the case of Whitehead and Stone, it was poetry not first written for the page, but created in almost a new language of poetry, with recorded sound and radio in mind from beginning to end.

It wasn’t simply that it was impressive technically, as produced audio. The point is that, as interesting poetry to listen to, as recorded sound or as radio, this was far more interesting than listening to poets read their print poems. Some of them described themselves as audio writers. Whitehead did a tape called Writing On Air; another was called Disorder Speech. These writers took radio and recorded sound seriously as artistic, writerly, poetic media. It was literary inscription in sound, on tape, in radio. And it opened up great vistas to me in the realm of poetry and language.

I started corresponding with and reading essays by Whitehead about radio art and the art of sound. Not only was Whitehead producing fantastic audio–he was writing about the poetics of radio art brilliantly!

I began to realize that creating exciting art for a particular medium was not the same as simply making art developed for one medium available in a different medium. Why is that?

Art that understands and uses the special properties of its medium is not a weak echo of some other medium. The radio I’d been producing was not the art itself. It was providing an inferior experience of the books that the authors were flogging. The books were the art itself.

If you’re not channeling the energy that flows through the special properties of the medium, those channels will work against you because energy flows through them whether you channel it or not. If you’re not channeling it, the attention it gets—just by virtue of the nature of the medium—is noise distracting the audience from whatever channels you are using.

For instance, reading text on a monitor is harder than reading text in a book because the medium is refreshing the image 60 times per second. And if there’s stuff that’s moving, that competes for attention. One way to use that energy is animation.

This topic, the value of dialing in the special properties of the medium, is sometimes called media specificity; it’s associated primarily with the writings of the American art critic Clement Greenberg, but the way I think of it predates my knowledge of Greenberg and is more associated with Gregory Whitehead and Marshall McLuhan. My friend Jeremy Owen Turner tells me that thinking on the matter goes back to Kant.

So if we ask what the relevance of digital poetry is, say—and by that, I don’t simply mean digitized poetry but poetry where the computer is crucial both for the production and appreciation of the work—we can say that it’s important to digital literacy, to being fully literate in the digital.

Digital literacy is not only knowing how to google the information you want and how to check to see if it’s accurate information–though that’s important to being digitally literate, as opposed to being an easy mark for misinformation and scams.

It’s also important to get a feel for how emotion and affect can be involved in interactivity. And how video and text can work together. And how sound and text and visuals can work together intellectually and emotionally. An important part of our contemporary computing experience is multimedia, the experience of several media at once. Multimedia poetry is intermedial, it relates the media, it makes them work together as one integrated experience. That is part of digital literacy too.

Poetry is where/how we learn to feel with language. Digital poetry is where/how we learn to feel with our expanded/changed language we experience in computing environments, our intermedial language, our interarts language, our new media language that is a confluence of language, image, sound, and interactivity.

While the digital can give us print and video and sound, etc—they’re all just coded in zeros and ones—digital art is more than a bunch of old media tacked together. It’s a new art form in itself. It isn’t simply that it’s uniquely multimedial or even intermedial, though that’s an important part of it. And it isn’t simply that it’s interactive, though that’s important too. And it isn’t simply that it’s programmable. In his book A Philosophy of Computer Art, Dominic Lopes proposes—as many others have—that computer art is, in fact, a brand new form of art. And if that’s true, then simply digitizing other forms of art does not suffice to experience computer art—which is art in which the computer is crucial for both the production and appreciation of the art. It’s art in which the computer is crucial as the medium.

Marshall McLuhan said that technologies are extensions of our senses. The telescope and microscope let us see things we can’t see with the naked eye. Telescopes and microscopes extend our sight into the large and small. Telephones extend our hearing and voice over great distances. Technologies extend senses, our bodies, our capabilities. Computers extend our memory and our cognitive abilities. We can know things with a google that otherwise would take us considerable research.

Computers extend our senses, bodies, and abilities/capabilities, but it’s digital poetry and other digital art (computer art) that extends our humanity throughout our new dimensions. Without computer art, the extensions of us we acquire via the digital are as claws without feeling. Digital art gets the blood flowing through our new abilities, gets the feelings going. Then we understand how interactivity involves our feelings, whether we knew it or not. We begin to be able to think and feel at once with computers, through intermedial, interactive, interestingly programmed computer art.

Digital art also gets our digital shit detectors working. We can sense better the truly human, the fully human, the true. As opposed to accepting ads and such as expressions of truth.

Oppen Do Down–first Web Audio API piece

In my previous post, I made notes about my reading of and preliminary understanding of Chris Wilson’s article on precision event scheduling in the Web Audio API–in preparation to create my first Web Audio API piece. I’ve created it. I’d like to share it with you and talk about it and the programming of it.

Oppen Do Down, an interactive audio piece

The piece is called Oppen Do Down. I first created it in the year 2000 with Director. It was viewable on the web via Shockwave, a Flash-like plugin–sort of Flash’s big brother. But hardly any contemporary browsers support the Shockwave plugin anymore–or any other plugins, for that matter–the trend is toward web apps that don’t use plugins at all but, instead, rely on newish native web technologies such as the Web Audio API, which requires no plugins to be installed before being able to view the content. The old Director version is still on my site, but nobody can view it anymore cuz of the above. I will, however, eventually release a bunch of downloadable desktop programs of my interactive audio work.

You can see the Director version of Oppen Do Down in a video I put together not long ago on Nio, Jig-Sound, and my other heap-based interactive audio work.

I sang/recorded/mixed the sounds in Oppen Do Down myself in 2000 using an old multi-track piece of recording software called Cakewalk. First I recorded a track of me snapping my fingers. Then I played that back over headphones, looping, while I recorded a looping vocal track. Then I’d play it back. If I liked it I’d keep it. Then I’d play the finger snapping and the vocal track back over headphones while I recorded another vocal track. Repeat that for, oh, probably about 60 or 70 tracks. Then I’d pick a few tracks to mix down into a loop. Most of the sounds in Oppen Do Down are multi-track.

As you can hear if you play Oppen Do Down, the sounds are synchronized. You click words to toggle their sounds on/off. The programming needs to be able to download a bunch of sound files, play them on command, and keep the ones that are playing synchronized. As you turn sounds on, the sounds are layered.

As it turns out, the programming of Oppen Do Down was easier in the Web Audio API than it was in Director. The reason for that is all to do with the relative deluxeness of the Web Audio API versus Director’s less featureful audio API.

Maybe the most powerful feature of the Web Audio API that Director didn’t offer is the high-performance clock. It’s high-performance in two ways. It has terrific resolution, apparently. It’s accurate to greater precision than 1 millisecond; you can use it to schedule events right down to the level of the individual sound sample, if you need that sort of accuracy. And the Web Audio API does indeed support getting your hands on the very data of the individual samples, if you need that sort of resolution. But the second way in which the high-performance clock is high-performance is that it stops for nothing. Which isn’t how it normally works with timers and clocks programmers use. They’re usually not the highest-priority processes in the computer, so they can get bumped by what the operating system or even the browser construes as more important processes. Which can result in inaccuracies. Often these inaccuracies are not big enough to notice. But in Oppen Do Down and pretty much all other rhythmic music, we need accurate rhythmic timing.

Director didn’t offer such a high-performance clock. What it had was the ability to insert cue-points into sounds. And define a callback handler that could execute when a cue-point was passed. That was how you could stay in touch with the actual physical state of the audio, in Director. The Web Audio API doesn’t let you insert cue-points in sounds, but you don’t need to. You can schedule events, like the playing of sounds, to happen in the time coordinate system of the high performance clock.

This makes synchronization more or less a piece of cake in the Web Audio API. Because you can look at the clock any time you want with great accuracy (AudioContext.currentTime is how you access the clock) and you can schedule sounds to start playing at time t and they indeed start exactly at time t. And the scheduling strategy Chris Wilson advocates, which I talked about in my previous post, whereby you schedule events a little in advance of the time they need to happen, works really well.
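To sketch the idea (this is my own minimal version, not the actual Oppen Do Down code; the helper names are mine), scheduling a sound to begin exactly on the next loop boundary looks something like this:

```javascript
// Sketch of scheduling against AudioContext.currentTime. The names
// (nextLoopBoundary, startOnNextBoundary) are my own, for illustration.

// Pure helper: the time (in seconds, in the currentTime coordinate
// system) of the next loop boundary at or after `now`.
function nextLoopBoundary(now, loopDuration) {
  return Math.ceil(now / loopDuration) * loopDuration;
}

// Browser-side: start a decoded AudioBuffer precisely on that boundary.
function startOnNextBoundary(ctx, buffer) {
  const src = ctx.createBufferSource();
  src.buffer = buffer;
  src.loop = true;
  src.connect(ctx.destination);
  // start() takes an absolute time in seconds; the sound begins exactly
  // then, regardless of how busy the main thread happens to be.
  src.start(nextLoopBoundary(ctx.currentTime, buffer.duration));
  return src;
}
```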

There are other features the Web Audio API has that Director didn’t. But, then, Director was actually started in 1987, whereas the Web Audio API has only been around for a few years as of this date in 2018. You can synthesize sounds in the browser, though that isn’t my interest; I’m more interested in recording vocal and other sounds and doing things with those recorded sounds. You can also process live input from the microphone, or from video, or from a remote stream. And you can create filters. And probably other things I don’t know anything about, at this point.

Anyway, Oppen Do Down links to two JavaScript files. One, oppen.js, is for this particular app and its particular interface. The other one, sounds.js, is the important one for understanding sound in Oppen Do Down. The sounds.js file defines the Sounds constructor, from top to bottom of sounds.js. In oppen.js, we create an instance of it:

gSounds=new Sounds(['1.wav','2.wav','3.wav','4.wav','5.wav','6.wav']);

Actually there are 14 sounds, not 6, but just to make it prettier on this page I deleted the extra 8. I used wav files in my Director work. I was happy to see that the Web Audio API could use them. They are uncompressed audio files. Also, unlike mp3 files, they do not pose problems for seamless looping; mp3 files insert silence at the ends of files. I hate mp3 files for that very reason. Well, I don’t hate them. I just show them the symbol of the cross when I see them.

The gSounds object will download the sounds 1.wav, etc, and will store those sounds, and offers an API for playing them.

‘soundsAreLoaded’ is a function in oppen.js that gets called when all the sounds have been downloaded and are ready to be played.

gSounds adds each sound (1.wav, 2.wav, … 14.wav) via its ‘add’ method, which creates an instance of the Sound (not Sounds) constructor for each sound. The newly created Sound object then downloads its sound and, when it’s downloaded, the ‘makeAvailable’ function puts the Sound object in the pAvailableSounds array.

When all the sounds have been downloaded, the gSounds object runs a function that notifies subscribers that the sounds are ready to be played. At that point, the program makes the screen clickable; the listener has to click the screen to initiate play.
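A stripped-down sketch of that load-and-notify pattern might look like this; the names here are my own, not the ones in sounds.js:

```javascript
// Sketch of "notify subscribers when all sounds are loaded".
// Names (makeLoadTracker, loadSound) are my own, for illustration.

// Returns a function to call once per loaded sound; when all `total`
// sounds have reported in, every subscriber function is called.
function makeLoadTracker(total, subscribers) {
  let loaded = 0;
  return function soundLoaded() {
    loaded += 1;
    if (loaded === total) {
      subscribers.forEach(fn => fn());
    }
    return loaded;
  };
}

// Browser-side: download and decode one sound file, then report in.
function loadSound(ctx, url, onDecoded) {
  fetch(url)
    .then(resp => resp.arrayBuffer())
    .then(data => ctx.decodeAudioData(data))
    .then(onDecoded); // e.g. store the buffer, then call the tracker
}
```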

It’s important that no sounds are played until the user clicks the screen. If it’s done this way, the program will work OK in iOS. iOS will not play any sound until the user clicks the screen. After that, iOS releases its death grip on the audio and sounds can be played. Apparently, at that point, if you’re using the Web Audio API, you can even play sounds that aren’t triggered by a user click. As, of course, you should be able to, unless Apple is trying to kill the browser as a delivery system for interactive multimedia.

I’ve tested Oppen Do Down on Android, the iPad, the iPhone, and on Windows under Chrome, Edge, Firefox and Opera. Under OSX, I’ve tested it with Chrome, Safari and Firefox. It runs on them all. The Web Audio API seems to be well-supported on all the systems I’ve tried it on.

When, after the sounds are loaded, the user clicks the screen to begin playing with Oppen Do Down, we find the sound we want to play initially. Its name is ‘1’. It’s the sound associated with the word ‘badly’. We turn the word ‘badly’ blue and we play sound ‘1’. We also make the opening screen invisible and display the main screen of Oppen Do Down (which is held in the div with id=’container’).

var badly=gSounds.getSound('1');

The method that plays the sounds is, of course, crucial to the program.

It also checks to see if the web worker thread is working. This separate thread is used, as in Chris Wilson’s metronome program, to continually set a timer that times out just before sounds stop playing, so sounds can be scheduled to play. If the web worker isn’t working, the method starts it working. Then it plays the ‘1’ sound.

Just before ‘1’ finishes playing–actually, pLookAhead milliseconds before it finishes, where pLookAhead is currently set to 25–the web worker’s timer times out and it sends the main thread a message to that effect. The main thread then calls the ‘scheduler’ function to schedule the playing of sounds which will start playing in pLookAhead milliseconds.

If the listener did nothing else, this process would repeat indefinitely. Play the sound. The worker thread’s timer ticks just before the sound finishes, and then sounds are scheduled to play.

But, of course, the listener clicks on words to start/stop sounds. When the listener clicks on a word to start the associated sound, the method that plays the sounds checks to see how far into the playing of a loop we are. And it starts the new sound so that it’s synchronized with the playing sound. Even if there are no sounds playing, the web worker is busy ticking and sending messages at the right time. So that new sounds can be started at the right time.
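Assuming, as in Oppen Do Down, that all the loops share one duration, a sketch of that synchronized start could look like this (my own names, not the actual code):

```javascript
// Sketch: start a new sound mid-loop so it lines up with sounds that
// are already playing. Assumes every loop has the same duration.
// The names here are my own, for illustration.

// Pure helper: how far (in seconds) we currently are into the loop.
function offsetIntoLoop(now, loopDuration) {
  return now % loopDuration;
}

// Browser-side: begin the new sound immediately, but `offset` seconds
// into its buffer, so it is phase-aligned with the playing loops.
function startInSync(ctx, buffer) {
  const src = ctx.createBufferSource();
  src.buffer = buffer;
  src.loop = true;
  src.connect(ctx.destination);
  // start(when, offset): play now, from partway through the buffer.
  src.start(ctx.currentTime, offsetIntoLoop(ctx.currentTime, buffer.duration));
  return src;
}
```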

Anyway, that’s a sketch of how the programming in Oppen Do Down works.

Chris Joseph gave me some good feedback. He noticed that as he added sounds to the mix, the volume increased and distortion set in after about 3 or 4 sounds were playing. He suggested that I put in a volume control to control the distortion. He further suggested that each sound have a gain node and there also be a master gain node, so that the volume of each sound could be adjusted.

The idea is that as the listener adds sounds, the volume remains constant. Which is what the ‘adjustVolumes’ function is about. It works well.
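I’ll sketch one common way to do that sort of thing. I’m not claiming this is the actual adjustVolumes code, just the general idea of per-sound gain nodes scaled by how many sounds are playing; the names and the equal-power formula are my own choices:

```javascript
// Hypothetical sketch of keeping overall volume roughly constant as
// sounds are added: each sound gets a GainNode, all feeding a master
// GainNode. Not the actual adjustVolumes code from Oppen Do Down.

// Pure helper: equal-power gain per sound when n sounds are playing.
function gainPerSound(n) {
  return n > 0 ? 1 / Math.sqrt(n) : 1;
}

// Browser-side: apply that gain to every playing sound's GainNode.
function adjustVolumesSketch(ctx, gainNodes) {
  const g = gainPerSound(gainNodes.length);
  gainNodes.forEach(node => node.gain.setValueAtTime(g, ctx.currentTime));
}
```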

I am happy with my first experiment with the Web Audio API. Onward and upward.

However, it’s hard to be happy with some of the uses that the Web Audio API is being put to. The same is true of the Canvas API and the WebRTC API. And these, to me, are the three most exciting new web technologies. But, of course, when new, interesting, powerful tools arise on the web, the forces of dullness will conspire to use them in evil ways. These are precisely the three technologies being used to ‘fingerprint’ and track users on the web. This is the sort of crap that makes everything a security threat these days.

Event Scheduling in the Web Audio API

This is the first of a two-part essay on event scheduling in the Web Audio API and an interactive audio piece I wrote (and sang) called Oppen Do Down. There’s a link to part two at the bottom.

I’ve been reading about the Web Audio API concerning synchronization of layers and sequences of sounds. Concerning sound files, specifically. So that I can work with heaps of rhythmic music.

A heap is the term I use to describe a bunch of audio files that can be interactively layered and sequenced as in Nio and Jig Sound, which I wrote in Director, in Lingo. The music remains synchronized as the sound icons are interactively layered and sequenced. The challenge of this sort of programming is coming up with a way to schedule the playing of the sound files so as to maintain synchronization even when the user rearranges the sound icons. When I wrote Nio in 2000, I wrote an essay on how I did it in Nio; this essay became part of the Director documentation on audio programming. The approach to event scheduling I took in Nio is similar to the recommended strategy in the Web Audio API.

Concerning the Web Audio API, first, I tried basically the simplest approach. I wanted to see if I could get seamless looping of equal-duration layered sounds simply by waiting for a sound’s ‘end’ event. When the ‘end’ event occurred concerning a specific one of the several sounds, I played the sounds again. This actually worked seamlessly in Chrome, Opera and Edge on my PC. But not in Firefox. Given the failure of Firefox to support this sort of strategy, some other strategy is required.

The best doc I’ve encountered is A Tale of Two Clocks–Scheduling Web Audio With Precision by Chris Wilson of Google. I see that Chris Wilson is also one of the editors of the W3C spec on the Web Audio API. So the approach to event scheduling he describes in his article is probably not idiosyncratic; it’s probably what the architects of the Web Audio API had in mind. The article advocates a particular approach or strategy to event scheduling in the Web Audio API. I looked closely at the metronome he wrote to demonstrate the approach he advances in the article. The sounds in that program are synthesized. They’re not sound files. Chris Wilson answered my email to him in which I asked him if the same approach would work for scheduling the playing of sound files. He said the same approach would work there.

Basically Wilson’s strategy is this.

First, create a web worker thread. This will work in conjunction with the main thread. Part of the strategy is to use this separate thread, which doesn’t have any big computation in it, for a setTimeout timer X whose callback Xc regularly calls a schedule function Xcs, when needed, to schedule events. X has to be set to time out sufficiently in advance of when sounds need to start that they can start seamlessly. Just how many milliseconds in advance it needs to be set will have to be figured out with trial and error.

But it’s desirable that the scheduling be done as late as feasibly possible, also. If user interaction necessitates recalculation and resetting of events and other structures, probably we want to do that as infrequently as possible, which means doing the scheduling as late as possible. As late as possible. And as early as necessary.

When we set a setTimeout timer to time out in x milliseconds, it doesn’t necessarily execute its callback in x milliseconds. If the thread or the system is busy, the callback can be delayed by 10 to 50 ms. Which is more inaccuracy than rhythmic timing will permit. That is one reason why timer X needs to time out before events need to be scheduled. Cuz if you set it to time out too close to when events need to be scheduled, it might end up timing out after events need to be scheduled, which won’t do—you’d have audible gaps.

Another reason why events may need to be scheduled in advance of when they need to happen is some browsers—such as Firefox—may require some time to get it together to play a sound. As I noted at the beginning, Firefox doesn’t support seamless looping via just starting sounds when they end. That means either that the end event’s callback happens quite a long time after the sound ends (improbable) or sounds require a bit of prep by Firefox before they can be played, in some situations.

So we need to schedule events a little before those events have to happen. We regularly set a timer X (using setTimeout or setInterval) to timeout in our web worker thread. When it does, it posts a message to the main thread saying it’s time to see if events need scheduling.
If some sounds do need to be scheduled to start, we schedule them now, in the main thread.
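Here is a minimal sketch of that arrangement, in the spirit of Chris Wilson’s metronome; the names and the numbers (the lookahead, the hypothetical ticker.js worker file) are my own placeholders:

```javascript
// Minimal sketch of the two-thread lookahead pattern from Chris
// Wilson's "A Tale of Two Clocks". Names and numbers are my own;
// 'ticker.js' is a hypothetical worker that posts a message regularly.

// Pure helper: does the next event fall inside the scheduling window?
function needsScheduling(nextEventTime, now, scheduleAheadSec) {
  return nextEventTime < now + scheduleAheadSec;
}

// Main-thread side (browser only): the worker just ticks; on each tick
// we schedule any events that fall within the lookahead window.
function startScheduler(ctx, scheduleEvent, getNextEventTime) {
  const scheduleAheadSec = 0.1; // schedule 100 ms ahead (tunable)
  const worker = new Worker('ticker.js'); // ticks every ~25 ms
  worker.onmessage = function () {
    while (needsScheduling(getNextEventTime(), ctx.currentTime, scheduleAheadSec)) {
      scheduleEvent(); // calls source.start(t) for the next event time
    }
  };
  worker.postMessage('start');
}
```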

But to understand that process, it’s important to understand the AudioContext’s currentTime property. It’s measured in seconds from a 0 value when audio processing in the program begins. This is a high-precision clock. Regardless of how busy the system is, this clock keeps accurate time. Also, when you pause the program’s execution with the debugger, currentTime keeps changing. currentTime stops for nothing! The moral of the story is we want to schedule events that need rhythmic accuracy with currentTime.

That can be done with the .start(when, offset, duration) method. The ‘when’ parameter “should be specified in the same time coordinate system as the AudioContext’s currentTime attribute.” If we schedule events in that time coordinate system, we should be golden, concerning synchronization, as long as we allow for browsers such as Firefox needing enough prep time to play sounds. How much time do such browsers require? Well, I’ll find out in trials, when I get my code running.

The approach Chris Wilson recommends to event scheduling is similar to the approach I took in Nio and Jig Sound, which I programmed in Lingo. Again, it was necessary to schedule the playing of sounds in advance of the time when they needed to be played. And, again, that scheduling needed to be done as late as possible but as early as necessary. Also, it was important to not rely solely on timers but to ground the scheduling in the physical state of the audio. In the Web Audio API, that’s available via the AudioContext’s currentTime property. In Lingo, it was available by inserting a cuePoint in a sound and reacting to an event being triggered when that cuePoint was passed. In Nio and Jig-Sound, I used one and only one silent sound that contained a cuePoint to synchronize everything. That cuePoint let me ground the event scheduling in a kind of absolute time, physical time, which is what the Web Audio API currentTime gives us also.

Part 2: Oppen Do Down–First Web Audio Piece

Chris Joseph: Amazing Net Art from the Frontier

I’ve been following Chris Joseph‘s work as a net artist since the late 1990s when he was living in Montréal–he’s a Brit/Canadian living now in London. He was on Webartery, a listserv I started in 1997; there was great discussion and activity in net art on Webartery, and Chris was an important part of it then, too. I visit his page of links to his art and writing several times a year to see what he’s up to.

I recently wrote a review of Sprinkled Speech, an interactive poem of Chris’s, the text of which is by our late mutual friend Randy Adams.

More recently–like yesterday–I visited #RiseTogether, shown below, which I’d somehow missed before. This is a 2014 piece by Chris. We see a map, the #RiseTogether hashtag, a red line and a short text describing issues, problems, possibilities, groups, etc. Every few seconds, the screen refreshes with a new map, red line, and description.

Chris Joseph’s #RiseTogether

I sent Chris an email about it:

Hey Chris,

I was looking at

I see you're using Google maps.

What's with the red line?

What is #RiseTogether ? 

The language after "#RiseTogether"--where does that come from?


Chris’s response was so interesting and illuminating I thought I’d post it here. Chris responded:

Hi Jim,

Originally this phrase, as a hashtag, was used by the Occupy Wall Street anti-capitalism movement, but I think since then it has been adopted/co-opted by many other movements including (US) football teams. The starting article and the text source for this piece was . 

It was one of three anti-capitalist pieces I did around that time, which was pretty much at the beginning of my investigating what could be done outside of Adobe Flash, along with and . And thematically these hark back to one of my first net art pieces, which isn't linked up on my art page at the moment, 

The red line was for a few reasons, I think. Firstly to add some visual interest, and additional randomisation, into what would be a fairly static looking piece otherwise.  But I find the minimalism of a line quite interesting, as the viewer is asked to actively interpret the meaning of that line. For me it's a dividing line - between haves and have nots, or the 1% and 99%, or any of those binary divisions that the protesters tend to use. Or it could suggest a crossing out - perhaps (positively) of a defunct economic philosophy, or (negatively) of the opportunities of a geographical area as a result of that economic philosophy. 

All three of those pieces have a monochromatic base, but only two have the red, which feels quite angry, or reminiscent of blood, of which there was quite a bit in the anti-capitalist protests.

I used the same technique again in this piece: - but here the lines are much more descriptive, as an indication of the supposed 'plague vectors'. 

Chris Joseph

globalCompositeOperation in Net Art

Ted, Jim and globalCompositeOperation

Ted Warnell and I have been corresponding about net art since 1996 or 97. We’ve both been creating net art using the HTML 5 canvas for about the last 6 years; we show each other what we’re doing and talk about canvas-related JavaScript via email. He lives in Alberta and I live in British Columbia.

Ted’s canvas-based stills and animations can be seen at My canvas-based work includes Aleph Null versions 1.0 and 2.0 at, respectively, and

One of the things we’ve talked about several times is globalCompositeOperation—which has got to be a candidate for longest-name-for-a-JavaScript-system-variable. The string value you give this variable determines “the type of compositing operation to apply when drawing new shapes”. Or, as another reference puts it:

“The globalCompositeOperation property sets or returns how a source (new) image is drawn onto a destination (existing) image.

Source image = drawings you are about to place onto the canvas.

Destination image = drawings that are already placed onto the canvas.”

The reason we’ve talked about this variable and its effects is because globalCompositeOperation turns out to be important to all sorts of things in creating animations and stills that you wouldn’t necessarily guess it had anything to do with. It’s one of those things that keeps on popping up too much to be coincidental. The moral of the story seems to be that globalCompositeOperation is an important, fundamental tool in creating animations or stills with the canvas.

In this article, we’d like to show you what we’ve found it useful for. We’ll show you the art works and how we used globalCompositeOperation in them to do what we did with it.

Ted’s uses of globalCompositeOperation tend to be in the creation of effects. Mine have been for masking, fading to transparency, and saving a canvas to png or jpg.

Digital Compositing

“Compositing” is an interesting word. It’s got “compose” and “composite” in it. “Compositing” is composing by combining images into composite images.

Keep in mind that each pixel of a digital image has four channels or components. The first three are color components. A pixel has a ‘red’ value, a ‘green’ value, and a ‘blue’ value. These are integers between 0 and 255. These combine to create a single color. The fourth channel or component is called the alpha channel. That’s a number between 0 and 1. It determines the opacity of the pixel. If a pixel’s alpha channel has a value of 1, the pixel is fully opaque. If it has a value of 0, the pixel is totally transparent. It can have intermediary values that give the pixel an intermediary opacity.
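For instance, the canvas exposes its pixels (via getImageData) as one flat array with those four channels per pixel, though there the alpha channel is stored as 0 to 255 rather than 0 to 1. A little sketch of reading one pixel, with my own helper name:

```javascript
// Sketch: reading one pixel's four channels (r, g, b, a) out of the
// flat RGBA array that canvas ImageData uses. Helper name is my own.
// Note: ImageData stores alpha as 0-255; dividing by 255 gives the
// 0-to-1 opacity described above.
function getPixel(data, width, x, y) {
  const i = (y * width + x) * 4; // 4 channels per pixel
  return {
    r: data[i],
    g: data[i + 1],
    b: data[i + 2],
    a: data[i + 3] / 255 // opacity as a number between 0 and 1
  };
}
```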

The default value of globalCompositeOperation is “source-over”. When that’s the value, when you paste a source image into a destination canvas, you get what you’d expect: the source is placed overtop of the destination.
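Under the hood, “source-over” is the Porter-Duff “over” operator. Here is a per-pixel sketch of the arithmetic, with non-premultiplied channels; the function name is mine, for illustration:

```javascript
// Sketch of the Porter-Duff "over" compositing that "source-over"
// performs per pixel. Color channels r, g, b are 0-255; alpha a is
// 0 to 1, as described above. Function name is my own.
function sourceOver(src, dst) {
  // Resulting opacity: source plus whatever destination shows through.
  const outA = src.a + dst.a * (1 - src.a);
  if (outA === 0) return { r: 0, g: 0, b: 0, a: 0 };
  // Each color channel is a weighted mix of source over destination.
  const blend = c =>
    (src[c] * src.a + dst[c] * dst.a * (1 - src.a)) / outA;
  return { r: blend('r'), g: blend('g'), b: blend('b'), a: outA };
}
```

A fully opaque source simply replaces the destination, which is the everyday “paste on top” behavior.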

There are 26 possible values for globalCompositeOperation. The first 8 of the options, shown below, are for compositing via the alpha channel. Most of the remaining 18 are blend modes. You may be familiar with blend modes in Photoshop; they determine how the colors of two layers combine and include values such as “multiply”, “screen”, “darken”, “lighten” and so on. Blend modes operate on the color channels of the two layers.

But the first 8 values shown below operate on the alpha channels of the two images. They don’t change the colors. They determine what shows up in the result, not what color it is. The first 8 values in the below diagram can be thought of as a kind of Venn diagram of image compositing. There’s the blue square (destination) and the red circle (source). There are 3 sections to that diagram:

  • A: the top left part of the blue square that doesn’t intersect with the red circle;
  • B: the section where the square and circle intersect;
  • C: and the bottom right section of the red circle that doesn’t intersect with the blue square.

Section A can be blue or be invisible; section B can be blue, red, or invisible; section C can be red or invisible. That makes for 12 possibilities, but some of them, such as when everything is invisible, are of no use. When the useless possibilities are eliminated, we’re left with the first 8 shown below. These possibilities form the basic Venn logic of image compositing. You see this diagram not only in JavaScript documentation but in image-compositing documentation for other languages as well.
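You can reproduce that diagram yourself. Here is a sketch, with hypothetical names, of the recipe: draw the blue square (destination), set one of the 8 alpha-compositing values, then draw the red circle (source), once per value.

```javascript
// The 8 alpha-compositing values for globalCompositeOperation.
const ALPHA_COMPOSITE_OPS = [
  'source-over', 'source-in', 'source-out', 'source-atop',
  'destination-over', 'destination-in', 'destination-out', 'destination-atop'
];

// Draw the blue square (destination), set the operation, then draw
// the red circle (source). Call once per op to reproduce the diagram.
function drawCompositeDemo(ctx, op) {
  ctx.clearRect(0, 0, 120, 120);
  ctx.fillStyle = 'blue';
  ctx.fillRect(10, 10, 60, 60);           // destination: blue square
  ctx.globalCompositeOperation = op;
  ctx.fillStyle = 'red';
  ctx.beginPath();
  ctx.arc(70, 70, 35, 0, 2 * Math.PI);    // source: red circle
  ctx.fill();
  ctx.globalCompositeOperation = 'source-over';  // restore the default
}
```

In the browser you would call drawCompositeDemo once per value, each on its own small canvas, to see all 8 results side by side.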

The first 8 values for globalCompositeOperation operate on the alpha channels of the source (red) and destination (blue)

What is “compositing”? We read the following definition at Wikipedia:

Compositing is the combining of visual elements from separate sources into single images, often to create the illusion that all those elements are parts of the same scene. Live-action shooting for compositing is variously called “chroma key”, “blue screen”, “green screen” and other names. Today, most, though not all, compositing is achieved through digital image manipulation. Pre-digital compositing techniques, however, go back as far as the trick films of Georges Méliès in the late 19th century; and some are still in use. All compositing involves the replacement of selected parts of an image with other material, usually, but not always, from another image. In the digital method of compositing, software commands designate a narrowly defined color as the part of an image to be replaced. Then the software replaces every pixel within the designated color range with a pixel from another image, aligned to appear as part of the original. For example, one could record a television weather presenter positioned in front of a plain blue or green background, while compositing software replaces only the designated blue or green color with weather maps.

Whether the compositing is operating on the alpha or the color channels, compositing is about combining images via their color and/or alpha channels.

Different browsers treat some of the values of globalCompositeOperation differently, which can make for dev headaches and gnashing of teeth, but, for the most part, globalCompositeOperation works OK cross-browser and cross-platform.

Jim Andrews: Masking (source-atop)

Masking is when you fill a shape, such as a letter, with an image. The shape is said to mask the image; the mask hides part of the image. Masking was crucial to an earlier piece of software I wrote called dbCinema, a graphic synthesizer I wrote in Lingo, the language of Adobe Director. The main idea was of brushes/shapes that sampled from images and used the samples as a kind of ‘paint’. My more recent piece Aleph Null 2.0, written in JavaScript, can do some masking, such as the sort of thing you see in SimiLily—and I’ll be developing more of that sort of thing in Aleph Null.

Let’s look at a simple example. You see it below; the source code is easy to view. There’s a 300×350 canvas with a red border. We draw an ‘H’ on the canvas. We fill it with any color–red in this case. Then we set globalCompositeOperation = ‘source-atop’. Then we draw a bitmap of a Kandinsky painting into the canvas, but the only part of the Kandinsky that we see is the part that fills the ‘H’. That’s because when you set globalCompositeOperation = ‘source-atop’ and then draw into the canvas, it only draws on pixels that were already on the canvas. The documentation states it this way:

“source-atop displays the source image on top of the destination image. The part of the source image that is outside the destination image is not shown.”

In other words, first you draw on the canvas to create the “destination” image (the ‘H’). Then you set globalCompositeOperation = ‘source-atop’. Then you draw the “source” image on the canvas (the Kandinsky).

Masking with globalCompositeOperation = ‘source-atop’

The most relevant code in the above example is shown below:

function drawIt(oldValue) {
  context.font = 'bold 400px Arial';
  context.fillStyle = 'red';
  context.fillText('H', 0, 320);
  // The above three lines set the text font to bold
  // 400px Arial and draw a red 'H' at (0,320).
  // This is the destination.
  // (0,320) is the bottom left of the 'H'.
  context.globalCompositeOperation = 'source-atop';
  context.drawImage(newImg, -100, -100);
  // newImg is the rectangular Kandinsky image.
  context.globalCompositeOperation = oldValue;
  // Sets globalCompositeOperation back to what it was.
}

In our example, the destination ‘H’ is fully opaque. However, if the destination is only partially opaque, so too will the result be partially opaque. The opacity of the mask determines the opacity of the result. In another example of that, the mask, or destination, is an ellipse that grows transparent toward its edge. The source image, once again, is a fully opaque Kandinsky-like image.
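A minimal sketch of that sort of soft mask, assuming an already-loaded image img (the function name softMaskFill is mine, for illustration): the radial gradient gives the destination its partially transparent edge, and ‘source-atop’ makes the composited image inherit that opacity.

```javascript
// Draw a soft-edged disc mask, then fill it with an image.
// Pixels near the mask's edge are partially transparent, so the
// composited image fades out toward the edge as well.
function softMaskFill(ctx, img, cx, cy, r) {
  const grad = ctx.createRadialGradient(cx, cy, 0, cx, cy, r);
  grad.addColorStop(0, 'rgba(0,0,0,1)');   // opaque center
  grad.addColorStop(1, 'rgba(0,0,0,0)');   // transparent edge
  ctx.fillStyle = grad;
  ctx.beginPath();
  ctx.arc(cx, cy, r, 0, 2 * Math.PI);
  ctx.fill();                              // destination: soft-edged disc
  ctx.globalCompositeOperation = 'source-atop';
  ctx.drawImage(img, cx - r, cy - r, 2 * r, 2 * r);  // source image
  ctx.globalCompositeOperation = 'source-over';      // restore the default
}
```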

You can see some of Aleph Null’s masking ability if you click the Bowie Brush, shown below. It fills random polygons with images of the late, great David Bowie.

The Bowie Brush in Aleph Null fills random polygons with images of David Bowie

Ted Warnell: Albers by Numbers, February, 2017

Overview: Poem by Nari works are dynamically generated, autoactive and alinear, visual and code poetry from the cyberstream. Poem by Nari is Ted Warnell and friends. Following are four Poem by Nari works that demonstrate use of some of the HTML5 canvas globalCompositeOperation(s) documented in this article.

These works are tested and found to be working as intended on a PC running the following browsers: Google Chrome, Firefox, Firefox Developer Edition, Opera, Internet Explorer, Safari, and on an Android tablet. Additional browser specific notes are included below.

Experimental. Albers by Numbers is one of a series of homages to German-American artist Josef Albers. The Poem by Nari series is loosely based on the Albers series “Homage to the Square”.

This work is accomplished in part by a complex interaction of stylesheet mixBlendMode(s) between the foreground and background canvases. All available mixBlendMode(s) are employed via a dedicated random selection function, x_BMX.

Interesting to me is how the work evolves from a single mass of randomly generated numeric digits to the Albers square-in-square motif. This emergence happens over a period of time, approximately one minute, and in a sense parallels emergence of the Albers series, which happened for Albers over a lifetime.

Note to IE and Safari users: works but not as intended.

Ted Warnell: Acid Rain Cloud 3, February 2017

Experimental. Another work from a series exploring a) acid, b) rain, c) clouds, d) all of the above.

globalCompositeOperation(s) “source-over” and “xor” are used here in combination with randomized color and get & putImageData functions. The result is a continually shifting vision of what d) all of the above, above, might look like.

Interesting to me here is the ever-changing “barcode” effect in the lower half of the work – possibly the “rain” in this? Over time, that rain will turn from a strong black and white downpour to a gentle gray mist. This is globalCompositeOperation “xor” at work.

Note to Safari users: works but not as intended.

Ted Warnell: An Alinear Rembrandt, April 2017

An Alinear Rembrandt

Christ image is digitized from Rembrandt’s “Christ On The Cross”.

Not an experiment. The statement is clear, it’s Christ on the cross.

This fully-realized work brings together globalCompositeOperation(s) “source-over” and “lighter” in combination with gif image files, globalAlpha, linear gradients, standard and dedicated random functions, get & putImageData functions, and a Poem by Nari custom grid definition function. And of course, timing is everything.

Of interest to readers will be the flashing sky and flickering Christ. These effects are accomplished by linear gradient masks, gif image file redraws, and the aforementioned globalCompositeOperation(s).

Of interest to me, it’s Christ on the cross.

Ted Warnell: Pinwheels, April 2017

More experimentation. This work is for Mary & Ryan Maki, Canada

Full screen, variable canvas rotations, and globalCompositeOperation(s) “source-over” and “xor” with randomized color. “source-over” is default and is responsible for the vivid, solid colors in this work, while “xor” provides the muted, soft-edge color blends.

Pinwheels… I’m going to be a grandpa again.

Note to Safari users: does not work with Safari browser.



Fade to Transparency (destination-out)

The fader slider in Aleph Null

Aleph Null 2.0 has a fader slider. The greater the value of the fader slider, the quicker the screen fades to the current background color. This is implemented by periodically drawing a nearly-transparent fill of the background color over the whole canvas. The greater the value of the fader slider, the more frequent the drawing of that fill over the whole canvas.
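A sketch of that technique follows. The function names (fadeToBackground, fadeIntervalMs) and the slider-to-interval mapping are my own illustration, not Aleph Null’s actual code.

```javascript
// Wash the whole canvas with a nearly-transparent coat of the
// background color; whatever was drawn sinks toward the background.
function fadeToBackground(ctx, bgColor, coatAlpha) {
  ctx.globalAlpha = coatAlpha;  // e.g. 0.03: a very thin coat
  ctx.fillStyle = bgColor;
  ctx.fillRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  ctx.globalAlpha = 1;          // reset for normal drawing
}

// The greater the fader slider value, the shorter the interval between
// coats. Hypothetical mapping: slider 0..100 -> 1000 ms down to 10 ms.
function fadeIntervalMs(sliderValue) {
  return 1000 - sliderValue * 9.9;
}

// Usage in the browser (not run here):
// setInterval(() => fadeToBackground(ctx, '#000000', 0.03),
//             fadeIntervalMs(slider.value));
```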

That works well when there is just one canvas, when there is no notion of layers of canvases. Once you introduce layers, you have to be able to fade a layer to transparency, not to a background color, so that you can see what’s on lower layers. I’m attempting to implement layers at the moment in Aleph Null. So I have to be able to fade a canvas to transparency.

So, then, how do you fade a canvas to transparency?

As Blindman67 explains, “…you can avoid the colour channels and fade only the alpha channel by using the global composite operation “destination-out”. This will fade out the rendering by reducing the pixels’ alpha.” Each pixel has four channels: the red, the blue, the green, and the alpha channels; the alpha channel determines opacity. The code is like this:

ctx.globalAlpha = 0.01; // fade rate
ctx.globalCompositeOperation = "destination-out";
ctx.fillRect(0, 0, ctx.canvas.width, ctx.canvas.height);
ctx.globalCompositeOperation = "source-over";
ctx.globalAlpha = 1; // reset alpha

You do the above every frame, or every second frame, or every third frame, etc, depending on how quickly you want it to fade to transparency. Another parameter with which you control the speed of the fade is ctx.globalAlpha, which is always a number between 0 and 1. The higher it is, the closer to fully opaque the result will be on a canvas draw operation.
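Here’s one way to sketch that loop with requestAnimationFrame, plus a little helper that shows why globalAlpha controls the speed: each “destination-out” wash multiplies every destination pixel’s alpha by (1 − fadeRate). The names and the every-third-frame pacing are mine, for illustration.

```javascript
// After n washes at a given fadeRate, an initially opaque pixel's
// remaining opacity is (1 - fadeRate) ** n.
function remainingOpacity(fadeRate, washes) {
  return Math.pow(1 - fadeRate, washes);
}

// Run the wash every third frame via requestAnimationFrame.
function startFade(ctx, fadeRate) {
  let frame = 0;
  function tick() {
    if (frame % 3 === 0) {
      ctx.globalAlpha = fadeRate;
      ctx.globalCompositeOperation = 'destination-out';
      ctx.fillRect(0, 0, ctx.canvas.width, ctx.canvas.height);
      ctx.globalCompositeOperation = 'source-over';
      ctx.globalAlpha = 1;
    }
    frame++;
    requestAnimationFrame(tick);
  }
  requestAnimationFrame(tick);
}
```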

Blindman67 develops an interesting example of a fade to transparency. You can see that it must be fading to transparency because the background color is dynamic, constantly changing.

Note that the ctx.fillStyle color isn’t really important because we’re fading the alpha, not the color channels. ctx.fillStyle isn’t even specified in the above code. When globalCompositeOperation = ‘destination-out’, the color values of the destination pixels remain unchanged. What changes is the alpha value of the destination pixels. The alpha values of the source pixels get subtracted from the alpha values of the destination pixels.

The performance of fading this way should be very good, versus mucking with the color channels, because you’re changing less information; you’re only changing the alpha channel of each pixel, not the three color channels.

I massaged the Blindman67 example into something simpler. There’s a fade function:

function fade() {
  gCtx1.globalAlpha = 0.15; // fade rate
  gCtx1.globalCompositeOperation = "destination-out";
  gCtx1.fillRect(0, 0, gCtx1.canvas.width, gCtx1.canvas.height);
  gCtx1.globalCompositeOperation = "source-over";
  gCtx1.globalAlpha = 1; // reset alpha
}

But compare the fade function with the code above it from Blindman67. It’s the very same idea.

Above, we see an example much like the one I wrote.

Finally, on this topic, I’m currently wondering about the best way to implement canvas layers. Clearly, the compositing possibilities create a situation where, at least in some cases, you don’t need multiple visible canvases; you can composite with offscreen canvases and use only one visible canvas. Whether this is better in general, and what the performance issues are, is currently unclear to me. There also exists at least one platform, namely concretejs, that supports canvas layers.

Save Canvas to Image File (destination-over)

globalCompositeOperation = ‘destination-over’ allows you to slip an image into the background of another image. The source image is written underneath the destination image.

It turns out that’s precisely what is needed to fix some bad browser behavior when you save a canvas to an image file, as we’ll see.

If you want to save a canvas to an image file, the simplest way to do it, at least on Chrome and Firefox, is to right-click (PC) or Control+click (Mac). You are presented with a menu that allows you to “Save As…” or, on some browsers, “Copy Image”. The problem is that some browsers insert a background into this image that probably isn’t the same color as the background on the canvas.

On the PC, Chrome inserts a black background. Other browsers may insert other colors, or the right color, or no color at all. One solution to this problem is to create a button that runs some JavaScript that inserts the right background color. This is a job for globalCompositeOperation = ‘destination-over’ because it allows you to create a background with the source image.

The “save” button in Aleph Null

You can see the solution I’ve created, shown above. The controls contain a “save” button which, when clicked, copies a png-type image into a new tab, if permitted to do so. You may have to permit it by clicking on a red circle near the URL at the top of the browser. Once the image is in the new tab, right-click (PC) or Ctrl+click (Mac) and select “Save As…”.

The code is basically this sort of thing:

var canvas = document.getElementById('canvas');
var context = canvas.getContext('2d');
// We assume the canvas already has the destination image on it.
var oldGlobalComposite = context.globalCompositeOperation;
context.globalCompositeOperation = 'destination-over';
// backgroundColor is a string representing the desired background color.
context.fillStyle = backgroundColor;
context.fillRect(0, 0, canvas.width, canvas.height);
context.globalCompositeOperation = oldGlobalComposite;
var data = canvas.toDataURL('image/png');
window.open(data); // open the image in a new tab (popups permitting)

The toDataURL command can also create the image as a jpg or webp.

In your animations with the HTML 5 canvas, will globalCompositeOperation be of any use? The answer is that if you are combining images at all, doing any compositing at all, globalCompositeOperation is probably relevant to your task and may make it much easier.

Colour Music in Aleph Null 2.0

I’m working on Aleph Null 2.0. You can view what I have so far online. If you’re familiar with version 1.0, you can see that what 2.0 creates looks different from what 1.0 creates. I’ve learned a lot about the HTML5 canvas. Here are some recent screenshots from Aleph Null 2.0.


Image Masking with the HTML5 Canvas

Image masking with the HTML5 canvas is easier than I thought it might be. This shows you the main idea and two examples.

If you’d like to cut to the chase, as they say, look at this example and its source code. The circular outline is an image. The Kandinsky painting is a rectangular image that is made to fill the circular outline. We see the result below:

The Kandinsky painting fills a blurry circle.


The key is the setting of the canvas’s globalCompositeOperation property. If, like me, you had seen any documentation for this property at all, you might have thought that it only concerned color blending, like the color blending options for a layer in Photoshop (the options usually include things like ‘normal’, ‘dissolve’, ‘darken’, ‘multiply’, ‘color burn’, etc). But, actually, globalCompositeOperation is more powerful than that. It’s for compositing images. Image masking is simply an example of compositing. Studying the possibilities of globalCompositeOperation would be interesting; we’re just going to use a couple of settings in this article. The definition of “compositing” we read via Googling the term includes this:

“Compositing is the combining of visual elements from separate sources into single images, often to create the illusion that all those elements are parts of the same scene.”

We’re going to use the “source-atop” setting of globalCompositeOperation. The default value, by the way, is “source-over”.

The basic idea is that if you want image F to fill image A, you draw image A on a fresh canvas. Then you set  globalCompositeOperation to “source-atop”. Then you draw image F on the canvas. When you do that, the pixels in the canvas retain whatever opacity/alpha value they have. So, for instance, any totally transparent pixels remain totally transparent. Any pixels that are partially transparent remain partially transparent. Image F is drawn into the canvas, but F does not affect the opacity/alpha values of the canvas.
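The three steps above can be sketched as a function (the name and arguments are mine, for illustration):

```javascript
// Fill image A (the mask) with image F, per the steps above.
function fillShapeWithImage(ctx, maskImg, fillImg, w, h) {
  ctx.clearRect(0, 0, w, h);                     // start from a fresh canvas
  ctx.drawImage(maskImg, 0, 0, w, h);            // 1. draw image A
  ctx.globalCompositeOperation = 'source-atop';  // 2. respect A's alpha values
  ctx.drawImage(fillImg, 0, 0, w, h);            // 3. draw F; it shows only where A is opaque
  ctx.globalCompositeOperation = 'source-over';  // restore the default
}
```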

Here is an example where a Kandinsky painting is made to fill some canvas text:

Click the image and then view the source code.


I’m working on some brushes for Aleph Null 2.0 that are a lot like the brushes in dbCinema: the brushes ‘paint’ samples of images.

New Work by Ted Warnell

Ted Warnell, as many of you know, is a Canadian net artist originally from Vancouver, long since living in Alberta, who has been producing net art as programmed visual poetry since the ’90s, which is about how long we’ve been in correspondence with one another. Ted was very active on Webartery, an email list in the ’90s that many of the writerly net artists were involved in. We’ve stayed in touch over the years, though we’ve never met in the same room. We have, however, met ‘face-to-face’ via video chat.

He’s still creating interesting net art. In the nineties and oughts, his materials were largely bitmaps, HTML, CSS, and a little JavaScript. Most of his works were stills, or series thereof. Since about 2013, he’s been creating net works using the HTML5 canvas tag that consist entirely of JavaScript. The canvas tag lets us program animations and stills on HTML pages without needing any plugins such as Flash or Unity. Ted has never liked plugins, so the canvas tag works well for him for a variety of reasons. He has created a lot of very interesting canvas-based, programmed animations and stills.

I’m always happy to get a note from Ted showing me new work he’s done. Since we both are using the canvas, we talk about the programming issues it involves and also the sorts of art we’re making. Below is an email Ted sent me recently after I asked him how he would describe the ‘look’ or ‘looks’ he’s been creating with his canvas work. If you have a good look at his work, you see that it does indeed exhibit looks you will remember and identify as Warnellian.

hey jim,

further to earlier thoughts about your query re “looks” in my work (and assuming that you’re still interested by this subject), here is something that has been bubbling up over the past week or so

any look in my work comes mainly from the processes used in creation of the work – so, it’s not a deliberate or even a conscious thing, the look, but rather, it just is – mainly, but not entirely, of course – subject, too, is at least partly responsible for the way these things look

have been thinking this past week that what is deliberate and conscious is my interest in the tension between and balance of order and chaos, by which i mean mathematics (especially geometry, visual math) and chance (random, unpredictable) – i’m exploring these things and that tension/balance in almost all of my works – you, too, explore and incorporate these things into many of your works including most strikingly in aleph null, and also in globebop and others

so here are some thoughts about order/chaos and balance/tension in no particular order:

works using these things function best when the balance is right – then the tension is strong – and then the work also is “right” and strong

it is not a requirement that both of these things are apparent (visible or immediately evident) in a work – there are some notable examples of works that seem to be all one or the other, though that may be more illusion than reality – works of jackson pollock seem to be all chaos but still balance with a behind-the-scenes intelligence, order – works by andrew wyeth on the other hand seem to be all about order and control, but look closely at the brushstrokes that make all of that detail and you’ll see that many of these are pure chance – brilliant stuff, really

an artist whose work intrigues me much of late is quebecer claude tousignant – i’m sure you know of him – he is perhaps best known for his many “target” paintings of concentric rings – tousignant himself referred to these as “monochromatic transformers” and “gongs” – you can find lots of his works at google images

the reason tousignant is so interesting to me (again) at this time is because while i can see that his paintings “work”, i cannot for the life of me see where he is doing anything even remotely relating to order/chaos or the balance/tension of same – his works seem to me to be truly all order/order with no opposite i would consider necessary for balance and/or to make (required) tension – his works defy me and i’d love to understand how he’s doing it 🙂

anyway, serious respect, more power, and many more years to the wonderful monsieur tousignant

Look Again –

is a new (this week) autointeractive work created with claude tousignant and his target paintings in mind

in this work are three broad rings, perfectly ordered geometric circles, each in the same randomly selected single PbN primary color – the space between and surrounding these rings is filled with a randomly generated (60/sec), randomly spun alphanumeric text in black and white, and also gray thanks to xor compositing – alinear chaos – as the work progresses, the three rings are gradually overcome by those relentless spinning texts – the outermost ring is all but obliterated while the middle ring is chipped away bit by bit until only a very thin inner crust of the ring remains – the third innermost ring, tho, is entirely unaffected

as the work continues to evolve, ghostlike apparitions of the missing outer and middle ring become more and more pronounced… because… within the chaos, new rings in ever-sharper black and white are beginning to emerge – this has the effect of clearly defining (in gray and tinted gray) the shape of the original color rings – even as order is continually attacked and destroyed by chaos, chaos is simultaneously rebuilding the order – so nothing is actually gained or lost… the work is simply transformed – a functioning “monochromatic transformer”, as tousignant might see it

that’s the tension and balance i’m talking about – the look you were asking about likely has something to do with autointeraction, alinearity, and most likely by my attempt to render visible order/chaos and balance/tension in every work i do

your attempt in aleph null (it now seems to me) might be in the form of progressive linearity on an alinear path – and well done


PS, “Look Again” is a rework of my earlier work, “Poem by Numbers 77” from march 2015 –

which work is a progression of “Poem by Numbers 52” from april 2013 –

which work was about learning canvas coding for circular motion

Poem by Numbers works usually (not always) are about coding research and development – moreso than concept development, which comes in later works like “Look Again”

other artists have “Untitled XX” works – i have “Poem by Numbers XX”

Google Image Search API


Google image search parameters

Here are some useful documents if, as a developer, you want to use the Google Image Search API.

I used the Google Image Search API in an earlier piece called dbCinema, but this piece was done in Adobe Director. Since then, I’ve retooled to HTML5. So I looked into using the Image Search API with HTML5.

First, find the official Google documentation of the Google Image Search API. It’s all there. Note that it’s “deprecated”. It won’t be free for developers for very much longer. Soon they will charge $5/1000 queries. But the documentation I have put together does not use a key of any kind.

Perhaps the main thing to have a look at in the official Google documentation is the sample HTML file. It’s in the section titled “The ‘Hello World’ of Image Search”. This automatically does a search for “Subaru STI” and displays the results. But wait. There is a bug in the sample file, so that if you copy the code and paste it into a new HTML file, it doesn’t work. I expect this is simply to introduce the seemingly mandatory dysfunction almost invariably present in contemporary programming documentation. Unbelievable. I have corrected the bug in my copy, which is almost exactly the same as “The ‘Hello World’ of Image Search” except it gets rid of “/image-search/v1/” in a couple of places.

After you look at that, look at my search example. Type something in and then press the Enter or Return key. It will then do a Google image search and display at most 64 images. 64 is the max you can get per query. The source code is very much based on the official Google example. The image size is set to “medium” and the porn filter is turned on. Strange but true.

Finally, have a look at the parameters example, which shows you how to control Google Image Search parameters. The source code is the same as the previous example except we see a bunch of dropdown menus. Additionally, there is an extra function in the source code named setRestriction, which is called when the user selects a new value from a dropdown menu.

There is a dropdown menu for all the controllable Image Search parameters except for the sitesearch restriction, which is simple enough if you understand the others.

Anyway, that ought to give you what you need to get up and running with the Google Image Search API.


Off planet teleporter



Teleportation to the bottom of the ocean


I’ve been working on a new piece called Teleporter. The original version is here. The idea is it’s a teleporter. You click the Teleport button and it takes you somewhere random on the planet. Usually on the planet. It uses the Google Maps API. It takes you to a random panorama. In the new version, 4% of the time you see a random panorama made by my students; they were supposed to explore the teleporter or teleportation literally or figuratively. So the new version is a mix of Google street view panoramas and custom street view panoramas.

I’m teaching a course in mobile app development at Emily Carr U of Art and Design in Vancouver, Canada. I wrote Teleporter to show the students some stuff with Google Maps. I’d shown them Geoguessr, which is a simple but fun piece of work with Google Maps. I realized it was simple enough that I could probably write a related thing. I wrote something that generates a random latitude and longitude. Then I asked Google to give me the closest panorama to that location.
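A sketch of that random-location step (my reconstruction, not the actual Teleporter code): note that picking the latitude uniformly in degrees would oversample the poles, since the bands of latitude shrink toward them, so this version draws the latitude from an arcsine distribution instead.

```javascript
// Pick a uniformly distributed random point on the globe.
function randomLocation() {
  // Math.asin of a uniform value in [-1, 1] gives an area-uniform latitude.
  const lat = Math.asin(2 * Math.random() - 1) * 180 / Math.PI;
  const lng = 360 * Math.random() - 180;
  return { lat, lng };
}
```

With the Google Maps API, you would then ask the street view service for the panorama nearest that { lat, lng }.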

So that worked fine, the students liked it, and I put a link to it on Facebook. A friend of mine, Bill Mullan, shared my link to Teleporter. Then a friend of his started a discussion about Teleporter online. A couple of days later I got an email from Adele Peters, who wanted to do a phone interview with me about Teleporter for an article she wanted to write. So we did. Her article came out a couple of days later; the same day, articles appeared in a UK publication and some other online magazines. Articles quickly followed from various places, including a digital art site from Paris. This resulted in tens of thousands of visitors to Teleporter.

Meanwhile, I decided to create a group project in the classroom out of Teleporter. The morning cohort was to build the Android app version of Teleporter, and the afternoon cohort the iOS version. That is wrapping up now. We should have an app soon. You can see the web version so far. It’s like the original version, mostly, except for a few things. The interface is more ‘app like’. Also, in the new version you see a student panorama 4% of the time. It’s meant to explore and develop the teleporter/teleportation theme. And there’s a Back button. The students designed the interface.

I want to mention a technical thing, because I didn’t see any documentation on it online; perhaps it will help some poor developer who, like me, is trying to do something with a combination of Google streetview panoramas and custom streetview panoramas. I found a bug: once the user viewed a student panorama, a custom panorama, then thereafter, when they viewed a Google panorama and tried to use the links to travel along a path, they would be taken back to the previous student custom streetview panorama.

The solution was the following JavaScript code:

if (panorama.panoProvider) {
  // In this case, the previous panorama was a student panorama.
  // We delete panorama.panoProvider or it causes navigation problems:
  // if it is present, then when the user goes to use the links in the
  // Google panorama, they simply get taken to the student panorama.
  delete panorama.panoProvider;
}

As you eventually figure out, when you create a custom streetview panorama, you need to declare a custom panorama provider. You end up with a property of your panorama object named panoProvider. But this has to be deleted if you then want the pano to use a Google streetview panorama, or you get the bug I was experiencing.

Anyway, onward and upward.


Breaking Bad as Sittrag

Whatever else it is, tragedy is a dramatic form, a type of drama for the stage or film or TV, etc. Certain dramatic works of art are tragedies. Tragedy has been regarded as the pinnacle of dramatic art for about 2,500 years in the western world. It’s typically dated back to the Oresteia by Aeschylus. There has been fascinating conjecture that Greek tragic drama originated in religious ritual.

Tragedy is not philosophy, but the phrase ‘tragic vision’ is associated with the form. Just what that is varies considerably. Tragedy isn’t inevitably as Aristotle says it is in The Poetics, of course. But a or the ‘tragic vision’ has typically been associated with our most profound dramatic art, our most probing drama into, well, the meaning of life.

Tragedy often involves a victory of the spirit in the face of great worldly loss. People endure, in tragedy. Usually they go down. It’s the end for them. One of the things long associated with the tragic vision is ‘anagnorisis’, or ‘recognition’. The vanilla meaning is the key moment in the play of insight, usually by the protagonist, into the situation. Another is ‘catharsis’. It can be and has been interpreted to mean many things, but it’s usually associated with the purging and purification of pity and terror/fear in the audience, ie, the drama leads them to catharsis, to an appreciation of the tragic vision of the drama or the fate of the hero/heroine. It’s sometimes associated with insight into ‘the human condition’ or something sufficiently vague. I expect that it often evades some formulas while partially satisfying others. Our own experience is often like that, whether it’s cathartic or otherwise.

I expect that the writers of Breaking Bad have been more than a little aware of tragedy in the writing. How could they not? It’s basically the faith of most dramatic artists. They believe in people, typically, and they believe in their art and the art of tragedy as the great expression of their faith in the value of life and the capacity of people to, well, be heroic even as they go down. Not necessarily as martyrs but perhaps true to their own priorities and values about what’s important in this life.

In any case, the key insight or recognition in Breaking Bad is when Walt finally admits to himself and Skyler that he did it for himself. It’s a moment of insight into himself and his own life. And his life with Skyler and the family. He is finally revealed to himself and also open to his wife to whom he has been lying since the series began. That seems like a significant victory, in the drama. He can finally admit to her and to himself what he has been hiding all his life.

And the catharsis, well, that’s ongoing, isn’t it. It’s when it all comes together for you, whenever that is.

The great White west: Breaking Bad as Western

Breaking Bad is a kind of contemporary western. In various ways. Of course there’s the New Mexico landscape. Breaking Bad uses that landscape cinematographically to romance the story. The romance of the western. Great open spaces. Freedom. Lots of heat and danger, risk.

If you’d wondered ‘why all those car ads?’, especially in the finale but also throughout the series, consider this. Cowboys got their hosses. Cars, in Breaking Bad, do all the work of hosses in westerns. That’s why the car advertisers eat it up. For instance, when Walt’s black Chrysler SRT8 takes a bullet in “Ozymandias”, he doesn’t just lose a car. He’s on the way down after that. That black car symbolized the power of the evil drug kingpin he had become.

But there are other more interesting elements of the western in Breaking Bad. Westerns give their heroes and villains special powers. Sort of like superheroes but not quite. Sort of like the powers of fighters in kung fu movies who fly and so on. But not quite. Western heroes can kill a lot of bad guys in a shootout, and/or they have great marksmanship, or they’re as tough as a grizzly bear or whatever.

Walter White can kill everyone with science, cleverness, and lots of guts. Gus Fring kills all of Don Eladio’s henchmen with a bottle of booze and a lot of guts. Walt blows up Tuco’s lair with fulminated mercury and a lot of guts. These are all improbable events. But the improbability is masked with science, realism, and good storytelling. We *want* Gus to win against overwhelming odds when he kills Don Eladio and all. We suspend our disbelief cuz we want exactly that outcome.

Emily Nussbaum, in the New Yorker, objects to the improbability in the finale (spoiler alert) of Uncle Jack giving a damn that Walt says Jack is partners with Jesse. Very true. It does seem out of character. But we also want him to go get Jesse. Our objection to the improbability and out-of-characterness of his action is overridden by our desire to see Jesse involved in the finale.

Westerns are rarely strictly realistic. BB is also sort of like a comic book at times.

Like in “Face Off” when Gus gets killed. He walks out of the room that has just exploded like nothing happened, straightens his tie–and then we see half his face has been blown off. He looks like something out of a comic book or a slasher movie, at that point. Then he falls down and dies. The unrealistic nature of it jars a little bit with Breaking Bad’s realism, but our objection is offset by the frisson of the emergence of the death’s head and devil from the villainous Gus Fring. He is suddenly what he is. He has hidden in plain sight for so long.

Suspension of disbelief is all about suspending our disbelief cuz we want to. Not cuz we’re asked to.

First Remainder Series by Joseph F. Keppler

Apologies for the long absence. In the interim, I got married to the lovely Natalie Funk. And bought a condo in Metrotown in Vancouver. And have been teaching mobile app development. And will soon be teaching mobile web development and motion graphics at the Emily Carr U of Art and Design. It’s been a time of a lot of change and, additionally, a lot of retooling. I’ve been learning mobile development this and mobile development that. Lots of new tricks for this old dog.

I put a couple of things together last week that I’d like to show you. I published seven visual poems by Joe Keppler back in 2008. I always liked them and thought them special, but since I published them, I’ve given them deeper thought–and have written something that gets at what, to me, is so remarkable about these poems.

I also recoded Joe’s visual poems into HTML that displays well on mobile devices. I’ve been reading about “responsive web design” recently in preparation for teaching a course on mobile web development. Basically, “responsive web design” is about making web pages that work well on a very wide range of display devices, from big TVs down to smartphones. Joe’s poems were excellent practice in responsive design because they are simple to varying degrees but take up the whole page. Recoding these pages into contemporary HTML has helped me a great deal with my understanding of contemporary web design.
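The core move in responsive design, choosing a layout from the display width, is done declaratively on real pages with CSS media queries. But the idea can be sketched as a plain function. This is just a toy model; the breakpoint values below are my own illustrative assumptions, not anything from Joe’s pages or the course.

```javascript
// A toy model of responsive layout selection. Real pages do this
// declaratively with CSS media queries; the breakpoints here
// (600px, 1024px) are illustrative assumptions.

function layoutFor(widthPx) {
  if (widthPx < 600) return 'single-column'; // smartphones
  if (widthPx < 1024) return 'two-column';   // tablets
  return 'full-grid';                        // laptops, desktops, big TVs
}

console.log(layoutFor(375));  // → single-column
console.log(layoutFor(768));  // → two-column
console.log(layoutFor(1920)); // → full-grid
```

The CSS equivalent would be a couple of `@media (max-width: …)` rules; the point either way is that one document adapts its presentation to the device, rather than being designed for a single page size.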

Two Self-Portraits

These were created on invitation to make a work related to self-portraiture for Scenes of Selves, Occasions for Ruses, a group exhibition at the Surrey Art Gallery. The curator saw an earlier dbCinema piece I did called The Club that incinemates the faces of my favorite North American politicians, businessmen, and psychopaths. He asked me to do related work with photos of myself rather than Jeffrey Dahmer, Paul Wolfowitz, Russell Williams, George Bush, and the rest of that psychotic, murderous crew. Which seemed like a remarkably strong opportunity to at least make an idiot of myself.

Let me show you the ‘trailers’ to the two resulting videos. What I’d like to show you are slideshows made of screenshots from the two videos. The videos are made of dbCinemations/collages of 53 images of me from the day I was born to my current grizzled state at 53 years of age. The Surrey show runs from September 15, 2012 (the opening is from 7:30-9:30pm) till December 16, 2012. The show was curated by Jordan Strom.

The first trailer is at index.htm?n=1. The video of which these screenshots are composed used two dbCinema brushes. One of the brushes ‘paints’ a letter from my name each frame. The other brush paints a circle each frame. Each of the brushes (usually) paints a different photo. So we see two simultaneous photos of me being drawn. The man and the baby. Etc. A brush paints a given photo for several seconds and then paints a different photo. The slideshow is composed of 47 still images.

The second trailer is at index.htm?n=1. The video used one dbCinema brush: a Flash brush. In other words, the brush was a SWF turned into a mask. The shape of the brush was a curving, undulating, rotating, translated line. Each frame of the video, dbCinema rendered one brush stroke, one rendering of the brush image; the curving line’s paint was sampled from photos of me. The brush would sample from a photo for several seconds before moving on to another photo. What we’re looking at here is not the video but 17 screenshots from the video.
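The brush mechanic described above (stamp a shape each frame, switch source photos every few seconds) can be sketched in a few lines. This is my own toy reconstruction, not dbCinema’s actual code; the frame rate and seconds-per-photo values are assumptions for illustration.

```javascript
// A toy reconstruction (not dbCinema's actual code) of the brush
// scheduling described above: each frame the brush stamps its shape,
// and every few seconds it moves on to sampling a different photo.

const PHOTOS = 53;            // the 53 photos
const FPS = 30;               // assumed frame rate
const SECONDS_PER_PHOTO = 4;  // assumed dwell time per photo

// Which photo the brush samples its 'paint' from on a given frame.
function photoForFrame(frame) {
  return Math.floor(frame / (FPS * SECONDS_PER_PHOTO)) % PHOTOS;
}

console.log(photoForFrame(0));   // → 0 (first photo)
console.log(photoForFrame(150)); // → 1 (a few seconds in, next photo)
```

With two brushes, each running its own schedule offset from the other, you get the effect described: two photos being painted simultaneously, the man and the baby.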

In the main, the man does not cohere. No coherent person emerges from this process of forcibly joining / collaging / synthesizing / remixing these 53 photos of me. It doesn’t magically tell me who I have always been. Or does it? Or if not, what does it suggest? You could say “If you don’t know who you’ve always been, no piece of art is going to clue you in.” Well I do kinda know. On the other hand, I do seem to tell myself a lot of stories.

It seems what the self-portrait does for me mainly is to problematize the notion of the existence of a person whom I have always been. The images in the video are messy. Like birth mess. Perhaps that’s part of our discomfort in life. We’re always in the midst of our own birth mess. And death stink. As Bob Dylan once observed, “He not busy being born is busy dying.”

Dreaming Methods Labs

Dreaming Methods Labs features 6 leading-edge digital fiction works developed using a spectrum of technologies and in collaboration with some fantastic writers/artists including Kate Pullinger, Chris Joseph, Jim Andrews, Judi Alston, Martyn Bedford, Lynda Williams, Matt Wright, Jacob Welby and Mez Breeze. The site also offers completely free source code for developing your own digital fiction works and links to highly recommended resources across the web.

Joe Keenan’s MOMENT

Joe Keenan's MOMENT in Internet Explorer

I put together a twenty minute video talking about a fantastic piece of digital poetry by Joe Keenan from the late nineties called MOMENT. Check it out: MOMENT, written in JavaScript for browsers, is a work of visual interactive code poetry. It’s one of the great unacknowledged works for the net.

I used Camtasia 8 to create this video. I’ve used the voice-over capabilities of Camtasia before to create videos that talk about what’s on the screen, but this is the first time I’ve been able to use the webcam with it. Still a few bugs, though, it seems: at times the voice and the video are noticeably out of sync.

Still, you get the idea. I’m a big fan of Joe Keenan’s MOMENT and am glad I finally did a video on it.

Color music

Thomas Wilfred and his art of light

Just a brief note to say something about color music. Cuz I’ve spoken of Aleph Null, a project of mine, as a work of color music.

My friend Jeremy Turner in Vancouver recently pointed out the work of Thomas Wilfred (1889-1968) to me. It wasn’t a surprise to me that somebody was doing color music back in 1917–because that sort of thing was going on, what with Theosophy and the work of people such as Kandinsky. “Synesthesia was [a] topic of intensive scientific investigation in the late 19th century and early 20th century” (Wikipedia). The idea of ‘color music’ is not a new one, certainly.

But I bring up Thomas Wilfred’s work because his understanding of ‘color music’ is especially interesting. His work was visual. It wasn’t organically linked to audio. So why did he call it color music, then, if it didn’t involve music or sound? Well, because the machines he created were like musical instruments. One played them like one played musical instruments. Musical instruments, when played, create patterned sound and we enjoy the patterned sounds of music. Wilfred’s machines, when played, produced patterned, colored light shows that were meant to be enjoyed in the same sort of way that music is enjoyed. Music is quite abstract, when there are no lyrics. It is just sound without any obvious ‘meaning’. Wilfred’s machines produced patterned light waves and color without any obvious meaning.

Read the rest of this entry »

Exotic functions

The strong lines in this scrawly curve are via the Lily function

In my generative 2d art such as Aleph Null and dbCinema, a virtual ‘brush’ moves around the screen ‘painting’. So I have need of functions that aren’t particularly predictable but buzz around the screen–and stay on screen. Ideally, they’d look like a human scrawl. Like the graphics in this article.

What I’d like to do in this article is illustrate how to use and/or create some exotic functions in your own programming work that could help you achieve a look that isn’t spirographic, i.e., too orderly to be of much interest.

There’s a math theorem (essentially Fourier’s) that says that any curve whatsoever, hand drawn or whatever, can be represented as accurately as you please with trigonometric functions. Trig functions, in the right hands, can be very expressive. Not spirographic or predictably cyclic. They can be sinuous and right there with us on the mind’s tangents. Anyone who thinks that any curve expressed by trig functions lacks the hand’s humanity just has no idea what is possible with trig functions, has no sense of the theory at all, or just hasn’t seen any good applications. Or didn’t know it when they saw it.

It’s important to note that both sin(t) and cos(t) have a maximum value of 1 and a minimum value of -1. That makes them easy to scale to take up as much or as little of the screen as we like, as we’ll see.
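As a sketch of the kind of function the article describes, here’s a parametric curve built from weighted sums of sines with incommensurate frequencies. Because each sin() and cos() term lies in [-1, 1], and the weights sum to 1, the point is easy to rescale so it always stays on screen. The particular frequencies, weights, and canvas size are my own illustrative choices, not taken from Aleph Null or dbCinema.

```javascript
// A sketch of a scrawly, non-'spirographic' curve: weighted sums of
// sines with incommensurate frequencies, so the path never quite
// repeats. Frequencies, weights, and canvas size are illustrative.

const W = 800, H = 600; // assumed canvas size

function scrawl(t) {
  // Each term is in [-1, 1]; the weights sum to 1, so x and y stay
  // in [-1, 1] too, which makes them easy to scale to the screen.
  const x = 0.6 * Math.sin(t) + 0.4 * Math.sin(Math.SQRT2 * t);
  const y = 0.6 * Math.cos(1.3 * t) + 0.4 * Math.sin(Math.E * t / 2);
  // Map [-1, 1] onto [0, W] and [0, H].
  return { x: (x + 1) / 2 * W, y: (y + 1) / 2 * H };
}

// Sample the curve: every point stays on screen because of the scaling.
const points = [];
for (let t = 0; t < 200; t += 0.05) points.push(scrawl(t));
const onScreen = points.every(p => p.x >= 0 && p.x <= W && p.y >= 0 && p.y <= H);
console.log(onScreen); // → true
```

Because sqrt(2) and e are incommensurate with the other frequencies, the curve doesn’t close up into a neat spirograph loop; drawing line segments between successive points gives a wandering scrawl that never leaves the canvas.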

Read the rest of this entry »

Chapter X: Evolution and the Universal Machine

Having recently been trying to be less a fossil concerning knowledge of evolution, I’ve watched all sorts of truly excellent documentaries available online. In several of them, it was said that Darwin’s idea of evolution through natural selection is the best idea anyone’s ever had. Because it’s been so powerfully explanatory and has all the marks of great ideas in its simplicity and audacious, unexpected and absolutely revolutionary character.

Uh huh. Ya it’s definitely a good one, that’s for sure. But I’ll tell you an idea that I think is right up there but is nowhere near as widely understood, perhaps permanently so. It’s Turing’s idea of the universal machine. Turing invented the modern computer. This was not at all an engineering feat. It was a mathematical and conceptual feat, because Turing’s machine is abstract, it’s a mathematization of a computer, it’s a theoretical construction.

What puts it in the Darwin range of supreme brilliance are several factors. First and foremost, it shows us what is almost certainly a sufficient (though not a necessary) model of mind. There is no proof, and probably never will be, that there exist thought processes of which humans are capable and computers are not. This is a source of extreme consternation for many people–much as Darwin’s ideas were and, in some quarters, still are.

The reason such proof will likely never be forthcoming is that it would involve demonstrating that the brain or the mind is capable of things that a Turing machine is not–and a Turing machine is a universal machine in the sense that it can perform any computation that can be described algorithmically, in finitely many steps.

Turing has given us a theoretical model not only of all possible computing machines, which launched the age of computing, but a device capable of thought at, as it were, the atomic level of thought. I don’t really see that there is any reasonable alternative to the idea that our brains must function as information processing machines. The universality of Turing’s machine is what allows it to encompass even our own brains.
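For readers who haven’t seen one, a Turing machine can be made concrete in a few lines of code: a tape, a head, a state, and a table of rules. The machine below is my own toy example (it inverts a binary string); nothing here is from Turing’s paper, it’s just the abstraction made runnable.

```javascript
// A minimal Turing machine simulator. The machine itself is a toy
// example: it inverts a binary string, flipping 0s and 1s.

function runTuringMachine(rules, input, state, haltState) {
  const tape = input.split('');
  let head = 0;
  while (state !== haltState) {
    const symbol = tape[head] === undefined ? '_' : tape[head]; // '_' = blank
    const rule = rules[state + ',' + symbol]; // look up (state, symbol)
    if (!rule) break;          // no applicable rule: halt
    tape[head] = rule.write;   // write a symbol
    head += rule.move;         // move the head: -1 left, +1 right, 0 stay
    state = rule.next;         // change state
  }
  return tape.join('').replace(/_+$/, ''); // drop trailing blanks
}

// One state suffices: scan right, flipping each bit, halt on blank.
const flipRules = {
  'scan,0': { write: '1', move: 1, next: 'scan' },
  'scan,1': { write: '0', move: 1, next: 'scan' },
  'scan,_': { write: '_', move: 0, next: 'halt' },
};

console.log(runTuringMachine(flipRules, '1011', 'scan', 'halt')); // → 0100
```

The universality Turing proved is that one fixed machine of this kind can simulate the rule table of any other, given that table on its tape; the simulator above is, loosely speaking, playing that universal role in software.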

Additionally, another reason to rank Turing’s idea very high is that, mathematically, it is extraordinarily beautiful, drawing, as it does, on Gödel’s marvelous ideas and also those of Georg Cantor. Turing’s ideas are apparently the culmination of some of the most beautiful mathematics ever devised.

Darwin’s ideas place us in the context of “deep history”, that is, within the long history of the planet. And they put us in familial relation with every living thing on the planet in a shared tree of life. And they show how the diversity of life on our planet can theoretically emerge via evolution and natural selection.

Darwin’s ideas outline a process that operates in history to generate the tree of life. Turing’s ideas outline a process that can generate all the levels of cognition in all the critters thought of and unthought. Darwin gives us the contemporary tree of life; Turing gives us the contemporary tree of knowledge.


Here are links to the blog posts, so far, in Computer Art and the Theory of Computation:

Chapter 1: The Blooming
Chapter 2: Greenberg, Modernism, Computation and Computer Art
Chapter 3: Programmability
Chapter X: Evolution and the Universal Machine

Why I am a Net Artist

My homepage is pretty much my life’s work, such as it is. Most of what I have created is available for free on the site. No, I haven’t zactly got rich on it. I’ve been publishing since 1996. It’s my “book,” in the sense that I haven’t published any books but think of myself primarily as a writer, and of the site as my main work. It’s been an adventure in creating and publishing interactive, multimedia poetry, among other things. So I thought I’d write about that adventure for The Journal of Electronic Publishing and its issue on digital poetry. Specifically, I thought I’d try to explain why I chose the net as my main artistic medium.

Read the rest of this entry »