Archive for the ‘Jim Andrews’ Category

globalCompositeOperation in Net Art

Ted, Jim and globalCompositeOperation

Ted Warnell and I have been corresponding about net art since 1996 or ’97. We’ve both been creating net art using the HTML5 canvas for about the last 6 years; we show each other what we’re doing and talk about canvas-related JavaScript via email. He lives in Alberta and I live in British Columbia.

Ted’s canvas-based stills and animations can be seen at his site. My canvas-based work includes Aleph Null versions 1.0 and 2.0.

One of the things we’ve talked about several times is globalCompositeOperation—which has got to be a candidate for longest-name-for-a-JavaScript-system-variable. The string value you give this variable determines “the type of compositing operation to apply when drawing new shapes”. Or, as another reference puts it:

“The globalCompositeOperation property sets or returns how a source (new) image is drawn onto a destination (existing) image.

Source image = drawings you are about to place onto the canvas.

Destination image = drawings that are already placed onto the canvas.”

The reason we’ve talked about this variable and its effects is that globalCompositeOperation turns out to be important to all sorts of things in creating animations and stills that you wouldn’t necessarily guess it had anything to do with. It’s one of those things that keeps popping up too often to be coincidental. The moral of the story seems to be that globalCompositeOperation is an important, fundamental tool in creating animations or stills with the canvas.

In this article, we’d like to show you what we’ve found it useful for. We’ll show you the art works and how we used globalCompositeOperation in them.

Ted’s uses of globalCompositeOperation tend to be in the creation of effects. Mine have been for masking, fading to transparency, and saving a canvas to png or jpg.

Digital Compositing

“Compositing” is an interesting word. It’s got “compose” and “composite” in it. “Compositing” is composing by combining images into composite images.

Keep in mind that each pixel of a digital image has four channels or components. The first three are color components. A pixel has a ‘red’ value, a ‘green’ value, and a ‘blue’ value. These are integers between 0 and 255. These combine to create a single color. The fourth channel or component is called the alpha channel. That’s a number between 0 and 1. It determines the opacity of the pixel. If a pixel’s alpha channel has a value of 1, the pixel is fully opaque. If it has a value of 0, the pixel is totally transparent. It can have intermediate values that give the pixel an intermediate opacity.

The default value of globalCompositeOperation is “source-over”. With that value, when you draw a source image onto a destination canvas, you get what you’d expect: the source is placed overtop of the destination.

There are 26 possible values for globalCompositeOperation. The first 8 of the options, shown below, are for compositing via the alpha channel. Most of the remaining 18 are blend modes (the rest are values such as “lighter”, “copy”, and “xor”). You may be familiar with blend modes in Photoshop; they determine how the colors of two layers combine and include values such as “multiply”, “screen”, “darken”, “lighten” and so on. Blend modes operate on the color channels of the two layers.
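
Blend modes work per pixel on the color channels. Here is a pure-JavaScript sketch of what a blend mode like “multiply” does to two fully opaque pixels; the function name blendMultiply is my own for illustration, and real browsers do this (plus alpha handling) internally:

```javascript
// Sketch of the "multiply" blend mode on two fully opaque pixels.
// The 0-255 channels are treated as fractions of 255, so multiplying
// by white (255) leaves a color unchanged and by black (0) gives black.
function blendMultiply(src, dst) {
  // src and dst are [r, g, b] arrays with 0-255 channels.
  return src.map(function (s, i) {
    return Math.round((s * dst[i]) / 255);
  });
}

blendMultiply([255, 0, 128], [255, 255, 255]); // -> [255, 0, 128]
blendMultiply([255, 0, 128], [0, 0, 0]);       // -> [0, 0, 0]
```

Multiplying always darkens (or preserves) each channel, which is why “multiply” layers look like stacked transparencies.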

But the first 8 values shown below operate on the alpha channels of the two images. They don’t change the colors. They determine what shows up in the result, not what color it is. The first 8 values in the below diagram can be thought of as a kind of Venn diagram of image compositing. There’s the blue square (destination) and the red circle (source). There are 3 sections to that diagram:

  • A: the top left part of the blue square that doesn’t intersect with the red circle;
  • B: the section where the square and circle intersect;
  • C: and the bottom right section of the red circle that doesn’t intersect with the blue square.

Section A can be blue or invisible; section B can be blue, red, or invisible; section C can be red or invisible. That makes for 2 × 3 × 2 = 12 possibilities, but some of those 12 possibilities, such as when everything is invisible, are of no use. When the useless possibilities are eliminated, we’re left with the first 8 shown below. These possibilities form the basic Venn logic of image compositing. You see this diagram not only in JavaScript documentation but in image-compositing documentation for other languages as well.

The first 8 values for globalCompositeOperation operate on the alpha channels of the source (red) and destination (blue)
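
The Venn logic above can be written out as a small lookup table: for each of the 8 alpha-compositing values, which of the three sections survive, and from which image? This table is my own illustrative summary of the diagram, not browser code; the browser does the real work per pixel via the Porter-Duff equations.

```javascript
// For each compositing value: what shows in section A (destination only),
// B (the overlap), and C (source only)? null means invisible.
var sections = {
  'source-over':      { A: 'dst', B: 'src', C: 'src' },
  'source-in':        { A: null,  B: 'src', C: null  },
  'source-out':       { A: null,  B: null,  C: 'src' },
  'source-atop':      { A: 'dst', B: 'src', C: null  },
  'destination-over': { A: 'dst', B: 'dst', C: 'src' },
  'destination-in':   { A: null,  B: 'dst', C: null  },
  'destination-out':  { A: 'dst', B: null,  C: null  },
  'destination-atop': { A: null,  B: 'dst', C: 'src' }
};
```

Reading the table, you can see why ‘source-atop’ masks (the source shows only inside the destination) and why ‘destination-out’ erases (only the destination outside the source survives).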

What is “compositing”? We read the following definition at Wikipedia:

Compositing is the combining of visual elements from separate sources into single images, often to create the illusion that all those elements are parts of the same scene. Live-action shooting for compositing is variously called “chroma key”, “blue screen”, “green screen” and other names. Today, most, though not all, compositing is achieved through digital image manipulation. Pre-digital compositing techniques, however, go back as far as the trick films of Georges Méliès in the late 19th century; and some are still in use. All compositing involves the replacement of selected parts of an image with other material, usually, but not always, from another image. In the digital method of compositing, software commands designate a narrowly defined color as the part of an image to be replaced. Then the software replaces every pixel within the designated color range with a pixel from another image, aligned to appear as part of the original. For example, one could record a television weather presenter positioned in front of a plain blue or green background, while compositing software replaces only the designated blue or green color with weather maps.

Whether the compositing is operating on the alpha or the color channels, compositing is about combining images via their color and/or alpha channels.

Different browsers treat some of the values of globalCompositeOperation differently, which can make for dev headaches and gnashing of teeth but, for the most part, globalCompositeOperation works OK cross-browser and cross-platform.

Jim Andrews: Masking (source-atop)

Masking is when you fill a shape, such as a letter, with an image. The shape is said to mask the image; the mask hides part of the image. Masking was crucial to an earlier piece of software I wrote called dbCinema, a graphic synthesizer I wrote in Lingo, the language of Adobe Director. The main idea was of brushes/shapes that sampled from images and used the samples as a kind of ‘paint’. My more recent piece Aleph Null 2.0, written in JavaScript, can do some masking, such as the sort of thing you see in SimiLily—and I’ll be developing more of that sort of thing in Aleph Null.

Let’s look at a simple example. You see it below. You can also see a standalone copy of it, where it’s easier to view the source code. There’s a 300×350 canvas with a red border. We draw an ‘H’ on the canvas. We fill it with any color–red in this case. Then we set globalCompositeOperation = ‘source-atop’. Then we draw a bitmap of a Kandinsky painting into the canvas, but the only part of the Kandinsky that we see fills the ‘H’. That’s because when you set globalCompositeOperation = ‘source-atop’ and then draw an image, it only draws on pixels that were already painted on the canvas. One reference states it this way:

“source-atop displays the source image on top of the destination image. The part of the source image that is outside the destination image is not shown.”

In other words, first you draw on the canvas to create the “destination” image (the ‘H’). Then you set globalCompositeOperation = ‘source-atop’. Then you draw the “source” image on the canvas (the Kandinsky).

Masking with globalCompositeOperation = ‘source-atop’

The most relevant code in the above example is shown below:

function drawIt(oldValue) {
context.font = 'bold 400px Arial';
context.fillStyle = 'red';
context.fillText('H', 0,320);
// The above three lines set the text font to bold,
// 400px, Arial, set the fill color to red, and draw
// a red 'H' at (0,320). This is the destination.
// (0,320) is the bottom left of the 'H'.
context.globalCompositeOperation = 'source-atop';
context.drawImage(newImg, -100,-100);
// newImg is the rectangular Kandinsky image.
context.globalCompositeOperation = oldValue;
// Sets globalCompositeOperation back to what it was.
}

In our example, the destination ‘H’ is fully opaque. However, if the destination is only partially opaque, so too will the result be partially opaque. The opacity of the mask determines the opacity of the result. You can see an example of that in a second demo: the mask, or destination, is an ellipse that grows transparent toward its edge. The source image, once again, is a fully opaque Kandinsky-like image.
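
The claim that the mask’s opacity determines the result’s opacity falls out of the Porter-Duff math for ‘source-atop’. Here’s a single-pixel sketch in plain JavaScript; the function is my own, simplified to one color channel:

```javascript
// Per-pixel sketch of 'source-atop'. Alphas are 0-1; the color is a
// single 0-255 channel for simplicity. Note the result's alpha is
// exactly the destination's alpha -- the mask controls opacity --
// while the color mixes source over destination by the source alpha.
function sourceAtop(srcColor, srcAlpha, dstColor, dstAlpha) {
  return {
    alpha: dstAlpha,
    color: Math.round(srcAlpha * srcColor + (1 - srcAlpha) * dstColor)
  };
}

// An opaque source drawn onto a half-transparent mask pixel yields
// the source color at the mask's 0.5 opacity:
sourceAtop(200, 1, 50, 0.5); // -> { alpha: 0.5, color: 200 }
```

So a fully opaque Kandinsky drawn atop a soft-edged ellipse inherits the ellipse’s fade toward transparency.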

You can see some of Aleph Null’s masking ability if you click the Bowie Brush, shown below. It fills random polygons with images of the late, great David Bowie.

The Bowie Brush in Aleph Null fills random polygons with images of David Bowie

Ted Warnell: Albers by Numbers, February 2017

Overview: Poem by Nari works are dynamically generated, autoactive and alinear, visual and code poetry from the cyberstream. Poem by Nari is Ted Warnell and friends. Following are four Poem by Nari works that demonstrate use of some of the HTML5 canvas globalCompositeOperation(s) documented in this article.

These works are tested and found to be working as intended on a PC running the following browsers: Google Chrome, Firefox, Firefox Developer Edition, Opera, Internet Explorer, Safari; and on an Android tablet. Additional browser-specific notes are included below.

Experimental. Albers by Numbers is one from a series of homages to German-American artist Josef Albers. The Poem by Nari series is loosely based on the Albers series “Homage to the Square”.

This work is accomplished in part by a complex interaction of stylesheet mixBlendMode(s) between the foreground and background canvases. All available mixBlendMode(s) are employed via a dedicated random selection function, x_BMX.

Interesting to me is how the work evolves from a single mass of randomly generated numeric digits to the Albers square-in-square motif. This emergence happens over a period of time, approximately one minute, and in a sense parallels emergence of the Albers series, which happened for Albers over a lifetime.

Note to IE and Safari users: works but not as intended.

Ted Warnell: Acid Rain Cloud 3, February 2017

Experimental. Another work from a series exploring a) acid, b) rain, c) clouds, d) all of the above.

globalCompositeOperation(s) “source-over” and “xor” are used here in combination with randomized color and get & putImageData functions. The result is a continually shifting vision of what d) all of the above, above, might look like.

Interesting to me here is that ever changing “barcode” effect in the lower half of the work – possibly the “rain” in this? Over time, that rain will turn from a strong black and white downpour to a gentle gray mist. This is globalCompositeOperation “xor” at work.

Note to Safari users: works but not as intended.

Ted Warnell: An Alinear Rembrandt, April 2017

An Alinear Rembrandt

Christ image is digitized from Rembrandt’s “Christ On The Cross”.

Not an experiment. The statement is clear, it’s Christ on the cross.

This fully-realized work brings together globalCompositeOperation(s) “source-over” and “lighter” in combination with gif image files, globalAlpha, linear gradients, standard and dedicated random functions, get & putImageData functions, and a Poem by Nari custom grid definition function. And of course, timing is everything.

Of interest to readers will be the flashing sky and flickering Christ. These effects are accomplished by linear gradient masks, gif image file redraws, and the aforementioned globalCompositeOperation(s).

Of interest to me, it’s Christ on the cross.

Ted Warnell: Pinwheels, April 2017

More experimentation. This work is for Mary & Ryan Maki, Canada

Full screen, variable canvas rotations, and globalCompositeOperation(s) “source-over” and “xor” with randomized color. “source-over” is default and is responsible for the vivid, solid colors in this work, while “xor” provides the muted, soft-edge color blends.

Pinwheels… I’m going to be a grandpa again.

Note to Safari users: does not work with Safari browser.



Fade to Transparency (destination-out)

The fader slider in Aleph Null

Aleph Null 2.0 has a fader slider. The greater the value of the fader slider, the quicker the screen fades to the current background color. This is implemented by periodically drawing a nearly-transparent fill of the background color over the whole canvas. The greater the value of the fader slider, the more frequent the drawing of that fill over the whole canvas.

That works well when there is just one canvas, when there is no notion of layers of canvases. Once you introduce layers, you have to be able to fade a layer to transparency, not to a background color, so that you can see what’s on lower layers. I’m attempting to implement layers at the moment in Aleph Null. So I have to be able to fade a canvas to transparency.

So, then, how do you fade a canvas to transparency?

As Blindman67 explains, “…you can avoid the colour channels and fade only the alpha channel by using the global composite operation ‘destination-out’. This will fade out the rendering by reducing the pixels’ alpha.” Each pixel has four channels (red, green, blue, and alpha); the alpha channel determines opacity. The code is like this:

ctx.globalAlpha = 0.01; // fade rate
ctx.globalCompositeOperation = "destination-out";
ctx.fillRect(0, 0, canvas.width, canvas.height); // fade the whole canvas
ctx.globalCompositeOperation = "source-over";
ctx.globalAlpha = 1; // reset alpha

You do the above every frame, or every second frame, or every third frame, etc, depending on how quickly you want it to fade to transparency. Another parameter with which you control the speed of the fade is ctx.globalAlpha, which is always a number between 0 and 1. The higher it is, the closer to fully opaque the result will be on a canvas draw operation.

Blindman67 develops an interesting example of a fade to transparency. You can see that it must be fading to transparency because the background color is dynamic: it’s constantly changing.

Note that the ctx.fillStyle color isn’t really important because we’re fading the alpha, not the color channels. ctx.fillStyle isn’t even specified in the above code. When globalCompositeOperation = ‘destination-out’, the color values of the destination pixels remain unchanged. What changes is the alpha value of the destination pixels: each destination alpha is scaled by one minus the source alpha (with a fully opaque destination, that amounts to subtracting the source alpha from it).

The performance of fading this way should be very good, versus mucking with the color channels, because you’re changing less information; you’re only changing the alpha channel of each pixel, not the three color channels.
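
One detail worth knowing: because each ‘destination-out’ pass scales every destination alpha by (1 − globalAlpha), the fade is geometric rather than linear. A quick sketch (the helper name is mine; the rates match the snippets in this section):

```javascript
// After n fade passes, a pixel's alpha is startAlpha * (1 - rate)^n.
// So the fade never quite reaches zero in theory, but alpha values
// quantize to 0 in practice, and the decay looks smooth on screen.
function alphaAfterFrames(startAlpha, fadeRate, frames) {
  return startAlpha * Math.pow(1 - fadeRate, frames);
}

// A fully opaque pixel faded at 0.15 per frame:
alphaAfterFrames(1, 0.15, 1);  // -> 0.85
alphaAfterFrames(1, 0.15, 10); // ~0.197, i.e. mostly transparent
```

This is why a small globalAlpha like 0.01 gives a long, gentle fade while 0.15 clears the canvas within a second or so at 60fps.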

I massaged the Blindman67 example into something simpler. There’s a fade function:

function fade() {
gCtx1.globalAlpha = 0.15; // fade rate
gCtx1.globalCompositeOperation = "destination-out";
// gCanvas1 is the canvas that gCtx1 draws to.
gCtx1.fillRect(0, 0, gCanvas1.width, gCanvas1.height);
gCtx1.globalCompositeOperation = "source-over";
gCtx1.globalAlpha = 1; // reset alpha
}

But compare the fade function with the code above it from Blindman67. It’s the very same idea.

Above, we see an example much like the one I wrote.

Finally, on this topic, I’m currently wondering about the best way to implement layers with canvases. The compositing possibilities mean that, at least in some cases, you don’t need multiple visible canvases; you can composite with offscreen canvases and use only one visible canvas. Whether this is better in general, and what the performance issues are, is currently unclear to me. There also exists at least one platform, namely concretejs, that supports canvas layers.

Save Canvas to Image File (destination-over)

globalCompositeOperation = ‘destination-over’ allows you to slip an image into the background of another image. The source image is written underneath the destination image.

It turns out that’s precisely what is needed to fix some bad browser behavior when you save a canvas to an image file, as we’ll see.

If you want to save a canvas to an image file, the simplest way to do it, at least on Chrome and Firefox, is to right-click (PC) or Control+click (Mac). You are presented with a menu that allows you to “Save As…” or, on some browsers, “Copy Image”. The problem is that some browsers insert a background into this image that probably isn’t the same color as the background on the canvas.

On the PC, Chrome inserts a black background. Other browsers may insert other colors, or the right color, or no color at all. One solution to this problem is to create a button that runs some JavaScript that inserts the right background color. This is a job for globalCompositeOperation = ‘destination-over’ because it allows you to create a background with the source image.

The “save” button in Aleph Null

You can see the solution I’ve created in Aleph Null, shown above. The controls contain a “save” button which, when clicked, copies a png-type image into a new tab, if permitted to do so. You may have to permit it by clicking on a red circle near the URL at the top of the browser. Once the image is in the new tab, right-click (PC) or Ctrl+click (Mac) and select “Save As…”.

The code is basically this sort of thing:

var canvas=document.getElementById('canvas');
var context=canvas.getContext('2d');
// We assume the canvas already has the destination image on it.
var oldGlobalComposite=context.globalCompositeOperation;
context.globalCompositeOperation='destination-over';
// backgroundColor is a string representing the desired background color.
context.fillStyle=backgroundColor;
// Slip the background in underneath the existing image:
context.fillRect(0,0,canvas.width,canvas.height);
context.globalCompositeOperation=oldGlobalComposite;
var data=canvas.toDataURL('image/png');
window.open(data); // open the image in a new tab

The toDataURL command can also create the image as a jpg or webp.

In your animations with the HTML 5 canvas, will globalCompositeOperation be of any use? The answer is that if you are combining images at all, doing any compositing at all, globalCompositeOperation is probably relevant to your task and may make it much easier.

Colour Music in Aleph Null 2.0

I’m working on Aleph Null 2.0, and you can view what I have so far. If you’re familiar with version 1.0, you can see that what 2.0 creates looks different from what 1.0 creates. I’ve learned a lot about the HTML5 canvas. Here are some recent screenshots from Aleph Null 2.0.


Image Masking with the HTML5 Canvas

Image masking with the HTML5 canvas is easier than I thought it might be. This shows you the main idea and two examples.

If you’d like to cut to the chase, as they say, look at this example and its source code. The circular outline is an image. The Kandinsky painting is a rectangular image that is made to fill the circular outline. We see the result below:

The Kandinsky painting fills a blurry circle.


The key is the setting for the canvas’s globalCompositeOperation property. If, like me, you had seen any documentation for this property at all, you might have thought that it only concerned color blending, like the color-blending options in Photoshop for a layer (the options usually include things like ‘normal’, ‘dissolve’, ‘darken’, ‘multiply’, ‘color burn’, etc). But, actually, globalCompositeOperation is more powerful than that. It’s for compositing images, and image masking is simply an example of compositing. Studying the possibilities of globalCompositeOperation would be interesting. We’re just going to use a couple of settings in this article. The definition of “compositing” we read via Googling the term includes this:

“Compositing is the combining of visual elements from separate sources into single images, often to create the illusion that all those elements are parts of the same scene.”

We’re going to use the “source-atop” setting of globalCompositeOperation. The default value, by the way, is “source-over”.

The basic idea is that if you want image F to fill image A, you draw image A on a fresh canvas. Then you set  globalCompositeOperation to “source-atop”. Then you draw image F on the canvas. When you do that, the pixels in the canvas retain whatever opacity/alpha value they have. So, for instance, any totally transparent pixels remain totally transparent. Any pixels that are partially transparent remain partially transparent. Image F is drawn into the canvas, but F does not affect the opacity/alpha values of the canvas.

Here is an example where a Kandinsky painting is made to fill some canvas text:

Click the image and then view the source code.


I’m working on some brushes for Aleph Null 2.0 that are a lot like the brushes in dbCinema: the brushes ‘paint’ samples of images.

New Work by Ted Warnell

Ted Warnell, as many of you know, is a Canadian net artist originally from Vancouver, long since living in Alberta, who has been producing net art as programmed visual poetry since the ’90s, which is about how long we’ve been in correspondence with one another. Ted was for some time very active on Webartery, an email list in the ’90s that many of the writerly net artists were involved in. We’ve stayed in touch over the years, though we’ve never met in the same room. We have, however, met ‘face-to-face’ via video chat.

He’s still creating interesting net art. In the nineties and oughts, his materials were largely bitmaps, HTML, CSS, and a little JavaScript. Most of his works were stills, or series thereof. Since about 2013, he’s been creating net works using the HTML5 canvas tag that consist entirely of JavaScript. The canvas tag lets us program animations and stills on HTML pages without needing any plugins such as Flash or Unity. Ted has never liked plugins, so the canvas tag works well for him for a variety of reasons. He has created a lot of very interesting canvas-based, programmed animations and stills.

I’m always happy to get a note from Ted showing me new work he’s done. Since we both are using the canvas, we talk about the programming issues it involves and also the sorts of art we’re making. Below is an email Ted sent me recently after I asked him how he would describe the ‘look’ or ‘looks’ he’s been creating with his canvas work. If you have a good look at his work, you see that it does indeed exhibit looks you will remember and identify as Warnellian.

hey jim,

further to earlier thoughts about your query re “looks” in my work (and assuming that you’re still interested by this subject), here is something that has been bubbling up over the past week or so

any look in my work comes mainly from the processes used in creation of the work – so, it’s not a deliberate or even a conscious thing, the look, but rather, it just is – mainly, but not entirely, of course – subject, too, is at least partly responsible for the way these things look

have been thinking this past week that what is deliberate and conscious is my interest in the tension between and balance of order and chaos, by which i mean mathematics (especially geometry, visual math) and chance (random, unpredictable) – i’m exploring these things and that tension/balance in almost all of my works – you, too, explore and incorporate these things into many of your works including most strikingly in aleph null, and also in globebop and others

so here are some thoughts about order/chaos and balance/tension in no particular order:

works using these things function best when the balance is right – then the tension is strong – and then the work also is “right” and strong

it is not a requirement that both of these things are apparent (visible or immediately evident) in a work – there are some notable examples of works that seem to be all one or the other, though that may be more illusion than reality – works of jackson pollock seem to be all chaos but still balance with a behind-the-scenes intelligence, order – works by andrew wyeth on the other hand seem to be all about order and control, but look closely at the brushstrokes that make all of that detail and you’ll see that many of these are pure chance – brilliant stuff, really

an artist whose work intrigues me much of late is quebecer claude tousignant – i’m sure you know of him – he is perhaps best known for his many “target” paintings of concentric rings – tousignant himself referred to these as “monochromatic transformers” and “gongs” – you can find lots of his works at google images

the reason tousignant is so interesting to me (again) at this time is because while i can see that his paintings “work”, i cannot for the life of me see where he is doing anything even remotely relating to order/chaos or the balance/tension of same – his works seem to me to be truly all order/order with no opposite i would consider necessary for balance and/or to make (required) tension – his works defy me and i’d love to understand how he’s doing it 🙂

anyway, serious respect, more power, and many more years to the wonderful monsieur tousignant

Look Again –

is a new (this week) autointeractive work created with claude tousignant and his target paintings in mind

in this work are three broad rings, perfectly ordered geometric circles, each in the same randomly selected single PbN primary color – the space between and surrounding these rings is filled with a randomly generated (60/sec), randomly spun alphanumeric text in black and white, and also gray thanks to xor compositing – alinear chaos – as the work progresses, the three rings are gradually overcome by those relentless spinning texts – the outermost ring is all but obliterated while the middle ring is chipped away bit by bit until only a very thin inner crust of the ring remains – the third innermost ring, tho, is entirely unaffected

as the work continues to evolve, ghostlike apparitions of the missing outer and middle ring become more and more pronounced… because… within the chaos, new rings in ever-sharper black and white are beginning to emerge – this has the effect of clearly defining (in gray and tinted gray) the shape of the original color rings – even as order is continually attacked and destroyed by chaos, chaos is simultaneously rebuilding the order – so nothing is actually gained or lost… the work is simply transformed – a functioning “monochromatic transformer”, as tousignant might see it

that’s the tension and balance i’m talking about – the look you were asking about likely has something to do with autointeraction, alinearity, and most likely by my attempt to render visible order/chaos and balance/tension in every work i do

your attempt in aleph null (it now seems to me) might be in the form of progressive linearity on an alinear path – and well done


PS, “Look Again” is a rework of my earlier work, “Poem by Numbers 77” from march 2015 –

which work is a progression of “Poem by Numbers 52” from april 2013 –

which work was about learning canvas coding for circular motion

Poem by Numbers works usually (not always) are about coding research and development – moreso than concept development, which comes in later works like “Look Again”

other artists have “Untitled XX” works – i have “Poem by Numbers XX”

Google Image Search API


Google image search parameters

Here are some useful documents if, as a developer, you want to use the Google Image Search API.

I used the Google Image Search API in an earlier piece called dbCinema, but this piece was done in Adobe Director. Since then, I’ve retooled to HTML5. So I looked into using the Image Search API with HTML5.

First, look at the official Google documentation of the Google Image Search API. It’s all there. Note that it’s “deprecated”. It won’t be free for very much longer, for developers. Soon they will charge $5/1000 queries. But the documentation I have put together does not use a key of any kind.

Perhaps the main thing to have a look at in the official Google documentation is the sample HTML file. It’s in the section titled “The ‘Hello World’ of Image Search”. This automatically does a search for “Subaru STI” and displays the results. But wait. There is a bug in the sample file so that if you copy the code and paste it into a new HTML file, it doesn’t work. I expect this is simply to introduce the seemingly mandatory dysfunction almost invariably present in contemporary programming documentation. Unbelievable. I have corrected the bug in my version, which is almost exactly the same as “The ‘Hello World’ of Image Search” except it gets rid of “/image-search/v1/” in a couple of places.

After you look at that, look at my basic example. Type something in and then press the Enter or Return key. It will then do a Google image search and display at most 64 images. 64 is the max you can get per query. The source code is very much based on the official Google example. The image size is set to “medium” and the porn filter is turned on. Strange but true.

Finally, have a look at the parameters example, which shows you how to control Google Image Search parameters. The source code is the same as the previous example except we see a bunch of dropdown menus. Additionally, there is an extra function in the source code named setRestriction which is called when the user selects a new value from the dropdown menus.

There is a dropdown menu for all the controllable Image Search parameters except for the sitesearch restriction, which is simple enough if you understand the others.

Anyway, that ought to give you what you need to get up and running with the Google Image Search API.


Off planet teleporter



Teleportation to the bottom of the ocean


I’ve been working on a new piece called Teleporter. The original version is here. The idea is it’s a teleporter. You click the Teleport button and it takes you somewhere random on the planet. Usually on the planet. It uses the Google Maps API. It takes you to a random panorama. In the new version, 4% of the time you see a random panorama made by my students; they were supposed to explore the teleporter or teleportation literally or figuratively. So the new version is a mix of Google street view panoramas and custom street view panoramas.

I’m teaching a course in mobile app development at Emily Carr U of Art and Design in Vancouver Canada. I wrote Teleporter to show the students some stuff with Google Maps. I’d shown them Geoguessr, which is a simple but fun piece of work with Google Maps.  I realized it was simple enough I could probably write a related thing. I wrote something that generates a random latitude and longitude. Then I asked Google to give me the closest panorama to that location.
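
The random latitude/longitude step can be sketched like this. This is my own illustration, not Teleporter’s actual code; note that picking the latitude uniformly in degrees would cluster points toward the poles, so this version distributes points uniformly over the globe:

```javascript
// Generate a random point on the planet, uniform over the sphere's
// surface. Longitude is uniform in [-180, 180); latitude is derived
// via asin so that high latitudes aren't oversampled.
function randomLatLng() {
  var lng = Math.random() * 360 - 180;
  var lat = Math.asin(Math.random() * 2 - 1) * 180 / Math.PI;
  return { lat: lat, lng: lng };
}
```

You’d then hand the result to something like the Maps API’s StreetViewService.getPanorama with a generous search radius, since most random points (oceans, deserts) have no panorama nearby.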

So that worked fine, the students liked it, and I put a link to it on Facebook. A friend of mine, Bill Mullan, shared my link to Teleporter. Then a friend of his started a discussion about Teleporter. A couple of days later I got an email from Adele Peters, who wanted to do a phone interview with me about Teleporter for an article she wanted to write. So we did. Her article came out a couple of days later; the same day, articles appeared in a UK magazine and some other online magazines. Articles quickly followed from various places, including a digital art site from Paris. This resulted in tens of thousands of visitors to Teleporter.

Meanwhile, I decided to create a group project in the classroom out of Teleporter. The morning cohort was to build the Android app version of Teleporter, and the afternoon cohort the iOS version. That is wrapping up now. We should have an app soon. You can see the web version so far. It’s like the original version, mostly, except for a few things. The interface is more ‘app like’. Also, in the new version you see a student panorama 4% of the time. It’s meant to explore and develop the teleporter/teleportation theme. And there’s a Back button. The students designed the interface.

I want to mention a technical thing, because I didn’t see any documentation on it online; perhaps it will help some poor developer who, like me, is trying to do something with a combination of Google street view panoramas and custom street view panoramas. I ran into a bug: once the user viewed a student panorama, a custom panorama, then thereafter, when they viewed a Google panorama and tried to use the links to travel along a path, they would be taken back to the previous custom student street view panorama.

The solution was the following JavaScript code:

if (panorama.panoProvider) {
  // In this case, the previous panorama was a student panorama.
  // We delete panorama.panoProvider or it causes navigation problems:
  // if it is present, then when the user goes to use the links in the
  // Google panorama, they simply get taken to the student panorama.
  delete panorama.panoProvider;
}

As you eventually figure out, when you create a custom street view panorama, you need to declare a custom panorama provider. You end up with a property of your panorama object named panoProvider. But this property has to be deleted if you then want the pano to display a Google street view panorama, or you get the bug I was experiencing.
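The pattern can be sketched with a plain object standing in for a real google.maps.StreetViewPanorama (the function name and the stand-in object are my assumptions, not Teleporter’s actual code):

```javascript
// Before switching to a Google panorama, clear any leftover custom
// panoProvider; otherwise link navigation snaps back to the custom pano.
function showGooglePano(panorama, panoId) {
  if (panorama.panoProvider) {
    delete panorama.panoProvider;
  }
  panorama.pano = panoId; // stand-in for panorama.setPano(panoId)
}
```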

Anyway, onward and upward.


Breaking Bad as Sittrag

Whatever else it is, tragedy is a dramatic form, a type of drama for the stage or film or TV etc. Certain dramatic works of art are tragedies. Tragedy has been regarded as the pinnacle of dramatic art for about 2,500 years in the western world. It’s typically dated back to the Oresteia by Aeschylus. There has been fascinating conjecture about the origins of Greek tragic drama in, it’s thought, religious ritual.

Tragedy is not philosophy, but the phrase ‘tragic vision’ is associated with the form. Just what that is varies considerably. Tragedy isn’t inevitably as Aristotle says it is in The Poetics, of course. But a or the ‘tragic vision’ has typically been associated with our most profound dramatic art, our most probing drama into, well, the meaning of life.

Tragedy often involves a victory of the spirit in the face of great worldly loss. People endure, in tragedy. Usually they go down. It’s the end for them. One of the concepts long associated with the tragic vision is ‘anagnorisis’, or ‘recognition’. The vanilla meaning is the key moment of insight in the play, usually the protagonist’s insight into the situation. Another is ‘catharsis’. It can be and has been interpreted to mean many things, but it’s usually associated with the purging and purification of pity and terror/fear in the audience, i.e., the drama leads them to catharsis, to an appreciation of the tragic vision of the drama or the fate of the hero/heroine. It’s sometimes associated with insight into ‘the human condition’ or something sufficiently vague. I expect that it often evades some formulas while partially satisfying others. Our own experience is often like that, whether it’s cathartic or otherwise.

I expect that the writers of Breaking Bad have been more than a little aware of tragedy in the writing. How could they not? It’s basically the faith of most dramatic artists. They believe in people, typically, and they believe in their art and the art of tragedy as the great expression of their faith in the value of life and the capacity of people to, well, be heroic even as they go down. Not necessarily as martyrs but perhaps true to their own priorities and values about what’s important in this life.

In any case, the key insight or recognition in Breaking Bad is when Walt finally admits to himself and Skyler that he did it for himself. It’s a moment of insight into himself and his own life. And his life with Skyler and the family. He is finally revealed to himself and also open to his wife to whom he has been lying since the series began. That seems like a significant victory, in the drama. He can finally admit to her and to himself what he has been hiding all his life.

And the catharsis, well, that’s ongoing, isn’t it. It’s when it all comes together for you, whenever that is.

The great White west: Breaking Bad as Western

Breaking Bad is a kind of contemporary western. In various ways. Of course there’s the New Mexico landscape. Breaking Bad uses that landscape cinematographically to romance the story. The romance of the western. Great open spaces. Freedom. Lots of heat and danger, risk.

If you’d wondered ‘why all those car ads?’ especially in the finale but also lots of them throughout the series, consider this. Cowboys got their hosses. Cars, in Breaking Bad, do all the work of hosses in westerns. That’s why the car advertisers eat it up. For instance, when Walt’s black Chrysler SRT8 takes a bullet in “Ozymandias”, he doesn’t just lose a car. He’s on the way down after that. That black car symbolized the power of the evil drug kingpin he had become.

But there are other more interesting elements of the western in Breaking Bad. Westerns give their heroes and villains special powers. Sort of like super heroes but not quite. Sort of like the powers of fighters in kung fu movies who fly and so on. But not quite. Western heroes can kill a lot of bad guys in a shootout and/or they have great marksmanship or they are as tough as a grizzly bear or whatever.

Walter White can kill everyone with science, cleverness, and lots of guts. Gus Fring kills all of Don Eladio’s henchmen with a bottle of booze and a lot of guts. Walt blows up Tuco’s lair with fulminated mercury and a lot of guts. These are all improbable events. But the improbability is masked with science, realism, and good storytelling. We *want* Gus to win against overwhelming odds when he kills Don Eladio and all. We suspend our disbelief cuz we want exactly that outcome.

Emily Nussbaum, in the New Yorker, objects to the improbability in the finale (spoiler alert) of Uncle Jack giving a damn that Walt says Jack is partners with Jesse. Very true. It does seem out of character. But we also want him to go get Jesse. Our objection to the improbability and out of characterness of his action is mollified by our desire to get Jesse involved in the finale.

Westerns are rarely strictly realistic. BB is also sort of like a comic book at times.

Like in “Face Off” when Gus gets killed. He walks out of the room that has just exploded like nothing happened, straightens his tie–and then we see half his face has been blown off. He looks like something out of a comic book or a slasher movie, at that point. Then he falls down and dies. The unrealistic nature of it jars a little bit with Breaking Bad’s realism, but our objection is offset by the frisson of the emergence of the death’s head and devil from the villainous Gus Fring. He is suddenly what he is. He has hidden in plain sight for so long.

Suspension of disbelief is all about suspending our disbelief cuz we want to. Not cuz we’re asked to.

First Remainder Series by Joseph F. Keppler

Apologies for the long absence. In the interim, I got married to the lovely Natalie Funk. And bought a condo in Metrotown in Vancouver. And have been teaching mobile app development. And will soon be teaching mobile web development and motion graphics at the Emily Carr U of Art and Design. It’s been a time of a lot of change and, additionally, a lot of retooling. I’ve been learning mobile development this and mobile development that. Lots of new tricks for this old dog.

I put a couple of things together last week that I’d like to show you. I published seven visual poems by Joe Keppler back in 2008. I always liked them and thought them special, but since I published them, I’ve given them deeper thought–and wrote something that gets at what, to me, is so remarkable about these poems.

I also recoded Joe’s visual poems into HTML that displays well on mobile devices. I’ve been reading about “responsive web design” recently in preparation for teaching a course on mobile web development. Basically, “responsive web design” is about making web pages that work well on really a very wide range of display devices from big TVs down to smartphones. Joe’s poems were excellent practice in responsive design because they are varying degrees of simple but take up the whole page. Recoding these pages into contemporary HTML has helped me a great deal with my understanding of contemporary web design.

Two Self-Portraits

These were created on invitation to make a work related to self-portraiture for Scenes of Selves, Occasions for Ruses, a group exhibition at the Surrey Art Gallery. The curator saw an earlier dbCinema piece I did called The Club that incinemates the faces of my favorite North American politicians, businessmen, and psychopaths. He asked me to do related work with photos of myself rather than Jeffrey Dahmer, Paul Wolfowitz, Russell Williams, George Bush, and the rest of that psychotic, murderous crew. Which seemed like a remarkably strong opportunity to at least make an idiot of myself.

Let me show you the ‘trailers’ to the two resulting videos. What I’d like to show you are slideshows made of screenshots from the two videos. The videos are made of dbCinemations/collages of 53 images of me from the day I was born to my current grizzled state at 53 years of age.  The Surrey show will run from September 15 (the opening is from 7:30-9:30pm), 2012 till December 16, 2012. The show was curated by Jordan Strom.

The first trailer is at index.htm?n=1 . The video of which these screenshots are composed used two dbCinema brushes. One of the brushes ‘paints’ a letter from my name each frame. The other brush paints a circle each frame. Each of the brushes (usually) paints a different photo. So we see two simultaneous photos of me being drawn. The man and the baby. Etc. A brush paints a given photo for several seconds and then paints a different photo. The slideshow is composed of 47 still images.

The second trailer is at index.htm?n=1 . The video used one dbCinema brush: a Flash brush. In other words, the brush was a SWF turned into a mask. The shape of the brush was a curving, undulating, rotating, translated line. Each frame of the video, dbCinema rendered one brush stroke, one rendering of the brush image; the curving line’s paint was sampled from photos of me. The brush would sample from a photo for several seconds before moving on to another photo. What we’re looking at here is not the video but 17 screenshots from the video.

In the main, the man does not cohere. No coherent person emerges from this process of forcibly joining / collaging / synthesizing / remixing these 53 photos of me. It doesn’t magically tell me who I have always been. Or does it? Or if not, what does it suggest? You could say “If you don’t know who you’ve always been, no piece of art is going to clue you in.” Well I do kinda know. On the other hand, I do seem to tell myself a lot of stories.

It seems what the self-portrait does for me mainly is to problematize the notion of the existence of a person whom I have always been. The images in the video are messy. Like birth mess. Perhaps that’s part of our discomfort in life. We’re always in the midst of our own birth mess. And death stink. As Bob Dylan once observed, “He not busy being born is busy dying.”

Dreaming Methods Labs

Dreaming Methods Labs features 6 leading-edge digital fiction works developed using a spectrum of technologies and in collaboration with some fantastic writers/artists including Kate Pullinger, Chris Joseph, Jim Andrews, Judi Alston, Martyn Bedford, Lynda Williams, Matt Wright, Jacob Welby and Mez Breeze. The site also offers completely free source code for developing your own digital fiction works and links to highly recommended resources across the web.

Joe Keenan’s MOMENT

Joe Keenan's MOMENT in Internet Explorer

I put together a twenty minute video talking about a fantastic piece of digital poetry by Joe Keenan from the late nineties called MOMENT. Check it out: MOMENT, written in JavaScript for browsers, is a work of visual interactive code poetry. It’s one of the great unacknowledged works for the net.

I used Camtasia 8 to create this video. I’ve used the voice-over capabilities of Camtasia before to create videos that talk about what’s on the screen, but this is the first time I’ve been able to use the webcam with it. Still a few bugs, though, it seems: at times the voice and video fall noticeably out of sync.

Still, you get the idea. I’m a big fan of Joe Keenan’s MOMENT and am glad I finally did a video on it.

Color music

Thomas Wilfred and his art of light

Just a brief note to say something about color music. Cuz I’ve spoken of Aleph Null, a project of mine, as one of color music.

My friend Jeremy Turner in Vancouver recently pointed out the work of Thomas Wilfred (1889-1968) to me. It wasn’t a surprise to me that somebody was doing color music back in 1917–because that sort of thing was going on, what with Theosophy and the work of people such as Kandinsky. “Synesthesia was [a] topic of intensive scientific investigation in the late 19th century and early 20th century” (Wikipedia). The idea of ‘color music’ is not a new one, certainly.

But I bring up Thomas Wilfred’s work because his understanding of ‘color music’ is especially interesting. His work was visual. It wasn’t organically linked to audio. So why did he call it color music, then, if it didn’t involve music or sound? Well, because the machines he created were like musical instruments. One played them like one played musical instruments. Musical instruments, when played, create patterned sound and we enjoy the patterned sounds of music. Wilfred’s machines, when played, produced patterned, colored light shows that were meant to be enjoyed in the same sort of way that music is enjoyed. Music is quite abstract, when there are no lyrics. It is just sound without any obvious ‘meaning’. Wilfred’s machines produced patterned light waves and color without any obvious meaning.

Read the rest of this entry »

Exotic functions

The strong lines in this scrawly curve are via the Lily function

In my generative 2d art such as Aleph Null and dbCinema, a virtual ‘brush’ moves around the screen ‘painting’. So I have need of functions that aren’t particularly predictable but buzz around the screen–and stay on screen. Ideally, they’d look like a human scrawl. Like the graphics in this article.

What I’d like to do in this article is illustrate how to use and/or create some exotic functions in your own programming work that could help you achieve a look that isn’t spirographic, i.e., too orderly to be of much interest.

There’s a math theorem (essentially the idea behind Fourier series) that says that any curve whatsoever–hand drawn or whatever–can be represented as accurately as you please with trigonometric functions. Trig functions, in the right hands, can be very expressive. Not spirographic or predictably cyclic. They can be sinuous and right there with us on the mind’s tangents. Anyone who thinks that any curve expressed by trig functions lacks the hand’s humanity just has no idea what is possible with trig functions, has no sense of the theory at all, or just hasn’t seen any good applications. Or didn’t know it when they saw it.

It’s important to note that both sin(t) and cos(t) have a maximum value of 1 and a minimum value of -1. That makes them easy to scale to take up as much or as little of the screen as we like, as we’ll see.
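As a minimal illustration (my own sketch, not code from Aleph Null or dbCinema), here is sin and cos scaled from their natural -1 to 1 range onto the full canvas. Mixing two incommensurate frequencies per axis keeps the curve from closing into a tidy spirograph loop:

```javascript
// Map trig output (range -1..1) onto canvas coordinates (0..width, 0..height).
// Averaging two waves with incommensurate frequencies stays in -1..1 but
// wanders less predictably than a single sin or cos.
function scrawlPoint(t, width, height) {
  var x = (Math.sin(t) + Math.sin(2.718 * t)) / 2;  // still within -1..1
  var y = (Math.cos(t) + Math.cos(3.141 * t)) / 2;
  return {
    x: (x + 1) / 2 * width,   // scale -1..1 up to 0..width
    y: (y + 1) / 2 * height   // scale -1..1 up to 0..height
  };
}
```

Feed scrawlPoint a steadily increasing t each animation frame and the brush buzzes around the screen while always staying on it.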

Read the rest of this entry »

Chapter X: Evolution and the Universal Machine

Having recently been trying to be less a fossil concerning knowledge of evolution, I’ve watched all sorts of truly excellent documentaries available online. In several of them, it was said that Darwin’s idea of evolution through natural selection is the best idea anyone’s ever had. Because it’s been so powerfully explanatory and has all the marks of great ideas in its simplicity and audacious, unexpected and absolutely revolutionary character.

Uh huh. Ya it’s definitely a good one, that’s for sure. But I’ll tell you an idea that I think is right up there but is nowhere near as widely understood, perhaps permanently so. It’s Turing’s idea of the universal machine. Turing invented the modern computer. This was not at all an engineering feat. It was a mathematical and conceptual feat, because Turing’s machine is abstract, it’s a mathematization of a computer, it’s a theoretical construction.

What puts it in the Darwin range of supreme brilliance are several factors. First and foremost, it shows us what is almost certainly a sufficient (though not a necessary) model of mind. There is no proof, and probably never will be, that there exist thought processes of which humans are capable and computers are not. This is a source of extreme consternation for many people–very like Darwin’s ideas were and, in some quarters, still are.

The reason such proof will likely never be forthcoming is that it would involve demonstrating that the brain or the mind is capable of things that a Turing machine is not–and a Turing machine is a universal machine in the sense that it can perform any computation that can be specified algorithmically, in finitely many steps.

Turing has given us a theoretical model not only of all possible computing machines, which launched the age of computing, but a device capable of thought at, as it were, the atomic level of thought. I don’t really see that there is any reasonable alternative to the idea that our brains must function as information processing machines. The universality of Turing’s machine is what allows it to encompass even our own brains.

Additionally, another reason to rank Turing’s idea very high is that, mathematically, it is extraordinarily beautiful, drawing, as it does, on Gödel’s marvelous ideas and also those of Georg Cantor. Turing’s ideas are apparently the culmination of some of the most beautiful mathematics ever devised.

Darwin’s ideas place us in the context of “deep history”, that is, within the long history of the planet. And they put us in familial relation with every living thing on the planet in a shared tree of life. And they show how the diversity of life on our planet can theoretically emerge via evolution and natural selection.

Darwin’s ideas outline a process that operates in history to generate the tree of life. Turing’s ideas outline a process that can generate all the levels of cognition in all the critters thought of and unthought. Darwin gives us the contemporary tree of life; Turing gives us the contemporary tree of knowledge.


Here are links to the blog posts, so far, in Computer Art and the Theory of Computation:

Chapter 1: The Blooming
Chapter 2: Greenberg, Modernism, Computation and Computer Art
Chapter 3: Programmability
Chapter X: Evolution and the Universal Machine

Why I am a Net Artist homepage is pretty much my life’s work, such as it is. Most of what I have created is available for free on the site. No, I haven’t zactly got rich on it. I’ve been publishing since 1996. It’s my “book.” In the sense that I haven’t published any books but think of myself primarily as a writer and as my main work. It’s been an adventure in creating and publishing interactive, multimedia poetry, among other things. So I thought I’d write about that adventure for The Journal of Electronic Publishing and its issue on digital poetry. Specifically, I thought I’d try to explain why I chose the net as my main artistic medium.

Read the rest of this entry »

Chapter 3: Programmability

I said in chapter 1 that it’s programmability, not interactivity (or anything else) that is the crucial matter to consider in computer art. I want to explain and explore that claim in this chapter.

What makes computer art computer art? We’ve seen that there is a great deal of art that appears on computers that could as well appear on a page or on a TV, on a canvas or on an album. I’m calling that art digital art; computers are not crucial to the display or appreciation of it.

The idea I want to capture in the notion of ‘computer art’ is art in which computers are crucial for the production, display and appreciation of the art, art which takes advantage of the special properties of computers, art which cannot be translated into other media without fundamentally altering the work into something quite different than what it was on the computer, art in which the computer is crucial as medium.

Read the rest of this entry »

Computer Art and the Theory of Computation: Chapter 2: Greenberg, Modernism, Computation and Computer Art

In a short but influential piece of writing by Clement Greenberg called Modernist Painting written in 1960—and revised periodically until 1982—the art critic remarked that “The essence of Modernism lies, as I see it, in the use of characteristic methods of a discipline to criticize the discipline itself, not in order to subvert it but in order to entrench it more firmly in its area of competence.” Such sweeping generalizations are always problematical, of course. But I want to use the Greenberg quote to tell you an equally problematical story about the birth of the theory of computation and, thereby, computer art. Humor me. It’s Clement Greenberg. Come on.

The work I’ve mentioned by Gödel and Turing happened in the thirties, toward the end of modernism, which was roughly from 1900 till 1945, the end of World War II. So it’s work of late modernism.

Let’s grant Greenberg clemency concerning his conceit, for the moment, that the “essence”—itself a word left over from previous eras—of modernism, of the art and culture of that era, at least in the west, involved a drive to a kind of productive self-referentiality or consciousness of the art itself within the art itself. What work could possibly be more exemplary of that inclination than the work by Gödel and Turing that I’ve mentioned?

Read the rest of this entry »

Computer Art and the Theory of Computation: Chapter 1: The Blooming

Alan Turing inaugurated the theory of computation in 1936 with the most humble but powerful manifesto of all time

What I’d like to do in a series of posts is explore the relevance of the theory of computation to computer art. Both of those terms, however, need a little unpacking/explanation before talking about their relations.

Let’s start with computer art. Dominic Lopes, in A Philosophy of Computer Art, makes a useful distinction between digital art and computer art. Digital art, according to Lopes, can refer to just about any art that is or was digitized. Such as scanned paintings, online fiction, digital art videos, or digital audio recordings. Digital art is not a single form of art, just as fiction and painting are different forms of art. To call something digital art is merely to say that the art’s representation is or was, at some point, digital. It doesn’t imply that computers are necessary or even desirable to view and appreciate the work.

Whereas the term computer art is much better suited to describing art in which the computer is crucial as medium. What does he mean by “medium”? He says “a technology is an artistic medium for a work just in case its use in the display or making of the work is relevant to its appreciation” (p. 15). We don’t need to see most paintings, texts, videos or audio recordings on computers to display or appreciate them. The art’s being digital is irrelevant to most digital art. Whereas, in computer art, the art’s being digital is crucial to its production, display and appreciation.

Lopes also argues that whereas digital art is simply not a single form of art, computer art should be thought of as a new form of art. He thinks of a form of art as being a kind of art with shared properties such that those properties are important to the art’s appreciation. He defines interactivity as being such that the user’s actions change the display of the work itself. So far so good. But he identifies the crucial property that works of computer art share as being interactivity.

I think all but one of the above ideas by Lopes are quite useful. The problem is that there are non-interactive works of computer art. For instance, generative computer art is often not interactive. It often is different each time you view it, because it’s generated at the time of viewing, but sometimes it requires no interaction at all. Such work should be classified as computer art. The computer is crucial to its production, display, and appreciation.

Read the rest of this entry »

Aleph Null Color Music

Aleph Null makes color music. Colors are tones. Musical notes are tones. Music is tones moving in time. Aleph Null makes changing color tones move in time. There is no audio.

Aleph Null is an instrument of color music. This is about how to play it. It’ll play on its own. But it profits immensely from a human player interceding continually. It’s interactive online art.

Color music in Aleph Null has a simple structure. There is a central color. It’s the main color. All the other colors are within a certain distance from the central color. That distance is called the color range.

Here’s how to change the central color.

  1. Click the Aleph Null logo at top left or press the ‘1’ key to make the controls visible.
  2. Press the ‘2’ key or click the input box labelled ‘central color’ to make the central color color-picker visible.
  3. Click around in both parts of the color-picker to see how it works. The current central color is displayed in the central color input box.

The colors Aleph Null uses are all random distances from the central color and these distances do not exceed the color range value. The lower the color range value, the closer all the colors are to the central color. The higher the color range value, the greater the range of colors that Aleph Null will use. If the color range is set to 0, Aleph Null only uses one color: the central color. If the color range is 255, any color might be used.
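In code, the central color / color range structure might look something like this (a sketch under my own assumption of per-channel distance; this is not Aleph Null’s actual source):

```javascript
// Pick a random color whose per-channel distance from the central color
// does not exceed colorRange. colorRange 0 yields only the central color;
// colorRange 255 can reach any color.
function randomNearbyColor(central, colorRange) {
  function jitter(channel) {
    var offset = (Math.random() * 2 - 1) * colorRange; // -colorRange..colorRange
    return Math.min(255, Math.max(0, Math.round(channel + offset)));
  }
  return { r: jitter(central.r), g: jitter(central.g), b: jitter(central.b) };
}
```

With colorRange at 0 this always returns the central color exactly; as colorRange grows toward 255, the palette spreads out around the central color.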

Read the rest of this entry »