High-Resolution HTML5 Canvases

You can make your canvases as high-res as the user’s computer can stand. I’ve recently used canvases of 12000×6750 pixels to create bitmaps that, when printed at 200dpi, which is sufficient for very high quality, yield a 60″×34″ print.

It’s very simple to do. The key is this: the CSS width and height values of a <canvas> determine only the size at which the canvas appears on the screen. The width and height attributes of the <canvas>, on the other hand, determine the size of the canvas you work with in your JavaScript code.

In the code below, the canvas appears on the screen at 80×300, but if you use toDataURL to capture a JPG or PNG image of the canvas, the image will be 1000×500.

<canvas width="1000" height="500" style="width:80px;height:300px;border:1px solid black;"></canvas>
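
The exported bitmap uses the attribute dimensions, not the CSS dimensions, so the print-size arithmetic is simple division. A quick sketch (the helper function here is mine, just to illustrate the arithmetic; only toDataURL is part of the actual canvas API):

```javascript
// Print size, in inches, of a bitmap printed at a given dpi.
// E.g. a 12000×6750 canvas printed at 200dpi comes out 60″ × 33.75″.
function printSizeInches(widthPx, heightPx, dpi) {
  return { width: widthPx / dpi, height: heightPx / dpi };
}

// In the browser, capturing the canvas gives you the full-resolution
// image (1000×500 in the example above), regardless of the CSS size:
//   var dataURL = document.querySelector('canvas').toDataURL('image/png');
```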

Aleph Null 3.1 and later versions support creating high-res work, as you can see in a tutorial video on the matter.

Digital Poetry in Digital Literacy

Poetry has been associated with the teaching of literacy for a long time. Because poetry, in some ways, is the cherry on the top of literacy. In poetry we see something approaching our full humanity expressed in the technology of writing. Writing is a complex, subtle, highly expressive technology. Poetry is typically considered the highest form of writing because that’s where we learn how to feel with language. Language in poetry carries human feeling, emotion, attitude, the tone of the inner voice, as well as thought. Poetry pushes the capabilities of language, tests it, throws it off a cliff, retrieves it, does it all again.

Computing environments have changed our typical reading and writing environments a great deal. We now typically read and write not only language but also images, sound, video, and code/programming. Also, the texts we read are often now interactive. Programming responds to what we write. All this changes what it means to be literate in the contemporary world. Just as poetry, for at least hundreds of years, has been the apogee of literacy, so too with digital poetry in digital literacy.

My first experiences with using technology artistically go back to my radio days in the 80s. I’d like to write about the dawn, for me, of understanding something about using technology artistically. Because it’s relevant now to our digital experience and to digital poetry/literature.

I produced a literary radio show each week for six years in the 80s. At first, what I did was tape poets and fiction writers reading, and aired that. Sometimes I would do a bit of production on the material.

But then I heard a life-changing tape from Tellus. It was their #11 issue, The Sound of Radio, and it featured work by Gregory Whitehead, Susan Stone, Jay Allison, Helen Thorington and others. It was miles beyond what I was producing. It was interesting radio art. I was just putting work for print onto tape/radio. The Tellus tape was audio writing. This was art in its own right. Especially in the case of Whitehead and Stone, it was poetry not first written for the page, but created in almost a new language of poetry, with recorded sound and radio in mind from beginning to end.

It wasn’t simply that it was impressive technically, as produced audio. The point is that, as interesting poetry to listen to, as recorded sound or as radio, this was far more interesting than listening to poets read their print poems. Some of them described themselves as audio writers. Whitehead did a tape called Writing On Air; another was called Disorder Speech. These writers took radio and recorded sound seriously as artistic, writerly, poetic media. It was literary inscription in sound, on tape, in radio. And it opened up great vistas to me in the realm of poetry and language.

I started corresponding with and reading essays by Whitehead about radio art and the art of sound. Not only was Whitehead producing fantastic audio–he was writing about the poetics of radio art brilliantly!

I began to realize that creating exciting art for a particular medium was not the same as simply making art developed for one medium available in a different medium. Why is that?

Art that understands and uses the special properties of its medium is not a weak echo of some other medium. The radio I’d been producing was not the art itself. It was providing an inferior experience of the books that the authors were flogging. The books were the art itself.

If you’re not channeling the energy that flows through the special properties of the medium, those channels will work against you because energy flows through them whether you channel it or not. If you’re not channeling it, the attention it gets—just by virtue of the nature of the medium—is noise distracting the audience from whatever channels you are using.

For instance, reading text on a monitor is harder than reading text in a book because the medium is refreshing the image 60 times per second. And if there’s stuff that’s moving, that competes for attention. One way to use that energy is animation.

This topic, the value of dialing in the special properties of the medium, is sometimes called media specificity; it’s associated primarily with the writings of the USAmerican art critic Clement Greenberg, but the way I think of it predates my knowledge of Greenberg and is more associated with Gregory Whitehead and Marshall McLuhan. My friend Jeremy Owen Turner tells me that thinking on the matter goes back to Kant.

So if we ask what the relevance of digital poetry is, say—and by that, I don’t simply mean digitized poetry but poetry where the computer is crucial both for the production and appreciation of the work—we can say that it’s important to digital literacy, to being fully literate in the digital.

Digital literacy is not only knowing how to google the information you want, and how to check to see if it’s accurate information–though that’s important to being digitally literate, as opposed to being an easy mark for misinformation and scams.

It’s also important to get a feel for how emotion and affect can be involved in interactivity. And how video and text can work together. And how sound and text and visuals can work together intellectually and emotionally. An important part of our contemporary computing experience is multimedia, the experience of several media at once. Multimedia poetry is intermedial, it relates the media, it makes them work together as one integrated experience. That is part of digital literacy too.

Poetry is where/how we learn to feel with language. Digital poetry is where/how we learn to feel with our expanded/changed language we experience in computing environments, our intermedial language, our interarts language, our new media language that is a confluence of language, image, sound, and interactivity.

While the digital can give us print and video and sound, etc—they’re all just coded in zeros and ones—digital art is more than a bunch of old media tacked together. It’s a new art form in itself. It isn’t simply that it’s uniquely multimedial or even intermedial, though that’s an important part of it. And it isn’t simply that it’s interactive, though that’s important too. And it isn’t simply that it’s programmable. In his book A Philosophy of Computer Art, Dominic Lopes proposes—as many others have—that computer art is, in fact, a brand new form of art. And if that’s true, then simply digitizing other forms of art does not suffice to experience computer art—which is art in which the computer is crucial for both the production and appreciation of the art. It’s art in which the computer is crucial as the medium.

Marshall McLuhan said that technologies are extensions of our senses. The telescope and microscope let us see things we can’t see with the naked eye. Telescopes and microscopes extend our sight into the large and small. Telephones extend our hearing and voice over great distances. Technologies extend senses, our bodies, our capabilities. Computers extend our memory and our cognitive abilities. We can know things with a google that otherwise would take us considerable research.

Computers extend our senses, bodies, and abilities/capabilities, but it’s digital poetry and other digital art (computer art) that extends our humanity throughout our new dimensions. Without computer art, the extensions of us we acquire via the digital are as claws without feeling. Digital art gets the blood flowing through our new abilities, gets the feelings going. Then we understand how interactivity involves our feelings, whether we knew it or not. We begin to be able to think and feel at once with computers, through intermedial, interactive, interestingly programmed computer art.

Digital art also gets our digital shit detectors working. We can sense better the truly human, the fully human, the true. As opposed to accepting ads and such as expressions of truth.

Godel and Philosophy

It’s inspiring when math/logic leaps from the empyrean to the inner life. We see such a leap in Godel’s work: he actually showed that there exist truths that are not provable, truths that are true for no reason, thereby bringing the quotidian in more fruitful relation with the empyrean.

That there are truths which are not provable is something that we had intuited and even acknowledged for a very long time. Some things are true but not simply hard to prove–they’re impossible to prove. Yet definitely true. Courts of law acknowledge that proof, in matters of law, must extend only to the elimination of reasonable doubt. Not to fully demonstrable absolute proof. Because it can be and often is impossible to prove beyond all doubt what is true.

Previous to Godel’s work, meta-mathematics, i.e., reasoning about mathematics and logic, had had some dramatic results. Such as the revelation that relatively consistent non-Euclidean geometries were possible. This came as a bolt of lightning not only to mathematics but to philosophy. Because even the great Kant–as well as many another philosopher–had provided, when pressed for an example of the existence of a-priori truth, the parallel postulate or the logically equivalent notion that the sum of the angles of a triangle is 180 degrees. But neither of these ideas is true in non-Euclidean geometries. In spherical geometry, for example, there are no parallel lines, and the sum of the angles of a triangle exceeds 180 degrees–a triangle with three right angles, for instance, has an angle sum of 270 degrees. And this rocked the idea that a-priori truth exists at all, cuz the prime exemplars had been axioms of Euclidean geometry.

Godel’s work in meta-mathematics–which is now simply called logic–was at least as lightning-boltish as non-Euclidean geometry had been in the 1800s.

When I studied math as an undergraduate, the most beautiful, profound work I studied was that of Georg Cantor concerning set theory and, in particular, infinite sets. Cantor actually proved things worth knowing about the infinite. And he did so in some of the most beautiful proofs you will ever encounter. Stunning work. He developed something called “the diagonal argument”. I won’t go into it, but it’s really killer-diller–and Godel uses that argument in his own proofs! He draws on this profound work by Cantor in his own incredible proof.

And, in turn, Turing, in the paper that laid the groundwork for the computer age, also uses Cantor’s diagonal argument–and acknowledges Godel’s work as having helped him on his road.

You see, the poetics of computer art has this rich philosophical, mathematical history among its parents. It has this in its genes. This history is important to computer art for very many reasons. But one of those reasons is to understand where computer art comes from.

“Oh! Blessed rage for order, pale Ramon,
The maker’s rage to order words of the sea,
Words of the fragrant portals, dimly-starred,
And of ourselves and of our origins,
In ghostlier demarcations, keener sounds.”
from The Idea of Order at Key West, Wallace Stevens

Leibniz and Computing

If you are interested in the history of the philosophical/logical/poetical dimensions of computing, you will be interested in Leibniz, the greater contemporary of Newton who independently created calculus. Martin Davis’s astonishing history of just this dimension of the history of computing, titled Engines of Logic, provides fascinating insight into the life and work of Leibniz and its relation with the history of computing. Leibniz is often thought of as the granddaddy of computing.

You will also be interested in Godel’s work. The incompleteness theorems. For these suggested an answer, in 1931, to a question posed at the turn of the century by Hilbert, a question that is important in the development of the theory of computation. And Turing used Godel’s work in his 1936 paper on the ‘decision problem’ to answer the question and, almost incidentally, introduce the notion of the Turing machine, the theoretical model of a computing device that is still used today.

I picked up an excellent book: Gottfried Wilhelm Leibniz: Philosophical Writings, edited by G.H.R. Parkinson. It’s actually readable and understandable, neither of which is true of another translation I have of this work by Leibniz.

Anyway, what I wanted to point out tonight is that in the brilliant introduction by Parkinson to this volume, he says that Leibniz thought of what he called the “principle of sufficient reason” as one of his guiding lights.

“Leibniz said many times that reasoning is based on two great principles–that of identity or contradiction, and that of sufficient reason….The principle of sufficient reason says that every truth can be proved…”

Now, it is just this which Godel demonstrates is false. Godel proved the necessary existence, in sufficiently powerful formal systems, of what he called “undecidable propositions”, namely ones that are true but not provable within the system. These are ideas that are definitely true but definitely not provable. That makes them true for no reason.

It’s the necessary existence of these sorts of propositions that complicates the entire structure of human knowledge. Leibniz’s principle of sufficient reason is what he needs to lay the foundation for the possibility of machines that reason as far and wide as it is possible to reason, verily and forsooth into a perfectly known and understood universe.

If everything that is true is provable, and if devising proofs can be reduced to a mechanical procedure, then there is no impediment, except possibly time and an infinity of theorems, to a machine generating all the proofs of all the theorems.

Mathematics becomes, in such a world, something that we can leave to a machine.

But Godel showed that not everything that is true is provable (his example was a lot like the proposition “This proposition is not provable.”). If not everything that’s true is provable, then knowledge, perforce, must always be incomplete. Everything that’s true can’t be completely exhausted via a theorem-proving machine.

Anyway, it’s interesting that Leibniz’s “principle of sufficient reason”, from back in the 17th century, is so strongly related to Godel’s work.

Me and AI

Have been thinking about my art and AI. I don’t use AI, as you may know–if one takes as fundamental in AI that it learns. Which seems reasonable as a necessary condition for AI.

There will be terrific works of art in which AI is beautiful and crucial. But there will be many more where it is an inconsequential fashion statement. It’s funny that programmed art is so affected by dev fashion. For the sake of strong work, it’s important not to let programmer fashion dictate how we pursue excellence.

AI is not a silver bullet cure for creating great generative art or computer art more broadly. AI has great promise, but sometimes it’s preferable to use other approaches than AI.

There’s currently an AI gold-rush going on. I have seen a previous gold-rush: the dotcom gold-rush of 1996-2000. It’s in the nature of gold-rushes that people flock to it, misunderstand it, and create silly work with it that nonetheless is praised.

For many years, I have created programmed, generative, computer art, a type of art that is often associated with AI techniques.

The Trumpling characters (and other visual projects) that I am able to create, as you may have noted, have about them a diversity/range and quality that challenges more than a few art AIs. As art. As character. As expressive. As intriguing. As fascist chimera / Don Conway at , for instance.

The thing is this: it takes me some doing to learn how to create those. Both in the coding/JavaScript–and then in the artistic use of Aleph Null in generating the visuals, the ‘playing’ of the instrument, as it were, cinematically. That takes constant upgrades and other additions to the source code, so that I can explore in new ways, continually. Or stop for a while and explore what is already present in the controls, the instrument.

Some of the algorithms I’ve developed will be developed further; my work is the creation of a “graphic synthesizer”–a term I believe I invented–a multi-brushed, multi-layered, multi-filled brushstroke where brushes have replaceable nibs and many many parameters are exposed to granular controls. dbCinema was also a “graphic synthesizer” and a “langu(im)age processor” (another term I made up). I started dbCinema around 2005. I started Aleph Null in 2011. It’s 2019 now. I’ve been creating graphic synthesizers for some time now.

If I understand correctly, what AI has to offer in this situation is strong animation of the parameters. Its learning would be in creating better and better animations without cease. Well, no, not really. Not ‘without cease’. It could be cyclic. And probably is.

An AI is as good as its training data–and what is done with the training data: what images are grouped together, and how they’re grouped together in their position and so on.

Instead of using AI, my strategy is this:

  1. Create an instrument of generative art that allows me and other users of the tool to learn how to create strong art with Aleph Null. There is learning going on, but it’s by humans.
  2. Expose the most artistically crucial parameters (in the architecture below) in interactive controls–to get human decisions operating on some of those parameters–especially my own decisions–that is, Aleph Null and dbCinema are instruments that one plays.
  3. A control is allowed only if you can see the difference when you crank on it.
  4. The architecture: a ‘brush + nib’ paradigm, and layers, in an animation of frames.
  5. A brushstroke: a shape mask to give the stroke its shape + a fill of the resulting shape. Any shape. An animated shape mask, possibly, so the shape changes + dynamic somewhat random fills chosen from/sampled from a folder of images–or a folder of videos, eventually. There are text nibs, also, so that a brushstroke can be a letter or word or longer string of text which is possibly filled with samples of images.
  6. The paint that a brush uses can be of different types: a folder of images; a folder of videos; a complex, dynamic gradient; a color. A brush fills itself with paint from its paint palette (the brush samples from its paint source) and then renders at least one brushstroke per frame.
  7. Each brush has a path. Can be random, or exotic-function-generated. Can be a mouse path–or finger path.
  8. A brush is placed in and often moved around in a layer. Can be moved from layer to layer.
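
To make the architecture concrete, here’s a minimal sketch of it as data. This is my illustration only–the names and structure are invented for the sketch and are not Aleph Null’s actual code:

```javascript
// Hypothetical sketch of the brush + nib + layer architecture.
// A nib shapes the stroke; a paint source fills it; a path moves the brush.
function makeBrush(nib, paintSource, pathFn) {
  return {
    nib,                // shape mask: 'circle', a letter, an animated mask...
    paintSource,        // 'color' | 'gradient' | 'imageFolder' | 'videoFolder'
    pathFn,             // (frame) => {x, y} position of the brush
    layer: 0,           // brushes live in layers, can move between them
    stroke(frame) {     // at least one brushstroke per frame
      var pos = this.pathFn(frame);
      return { x: pos.x, y: pos.y, nib: this.nib, paint: this.paintSource };
    }
  };
}

// Example: a circular nib filled from a folder of images,
// moving on a simple sine path.
var brush = makeBrush('circle', 'imageFolder',
  function (f) { return { x: f * 2, y: Math.round(100 * Math.sin(f / 10)) }; });
```

The point of the sketch is the separation of concerns: nib (shape), paint source (fill), and path (motion) are independent parameters, which is what makes exposing them as granular controls possible.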

Where could AI help Aleph Null? One could either concentrate on making Aleph Null more autonomous or use/create AI that acts as a kind of assistant to the human player of the instrument. 

If the former, i.e., if one concentrates on creating/using AI that makes Aleph Null more autonomous as an art machine–more autonomous from human input–then usually that requires an evaluation function, something that evaluates the quality of an image created by Aleph Null or used by Aleph Null, in order to ‘learn’ how to create quality work. Good data on which to base an evaluation function is difficult to come by. You could use the number of ‘likes’ an image acquires, for instance, if you can get that data from Facebook or wherever. Getting your audience to rate things is another way, which usually doesn’t work very well. 

My strategy, instead of this sort of AI, will be to create ‘gallery mode’. Aleph Null won’t be displayed in galleries as an interactive piece until ‘gallery mode’ has been implemented. There’ll be ‘gallery mode’ and ‘interactive mode’. Currently, Aleph Null is always in ‘interactive mode’. One of the pillars of ‘gallery mode’ is the ability to save configurations. If you like the way Aleph Null is looking, at any time, you can save that configuration. And you can ‘play’ it later, recall it. And you can create ‘playlists’ that string together different saved configurations. We normally think of a playlist as a sequence of songs to be played. This is much the same thing, only one is playing a sequence of Aleph Null configurations.

A configuration is a brushSet, i.e., a set of brushes that are configured in such and such a way.

Playlists will allow Aleph Null to display varietously without the gallery viewer having to interact with Aleph Null. Currently, in ‘interactive mode’, the only way Aleph Null will display varietously is if you get in there and change it yourself. 

When you save a configuration, you also assign it a duration to play. So that when you play a playlist, which is a sequence of configurations, each configuration plays for a certain duration before transitioning to the next configuration.
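
The playlist mechanics might be sketched like this–again, a hypothetical illustration, not the actual implementation:

```javascript
// A playlist is a sequence of saved configurations (brushSets), each
// with a duration (seconds) to play before the next one takes over.
// Given elapsed time, find the configuration that should be playing;
// the playlist loops when it runs out.
function currentConfig(playlist, elapsedSeconds) {
  var total = playlist.reduce(function (sum, c) { return sum + c.duration; }, 0);
  var t = elapsedSeconds % total;
  for (var i = 0; i < playlist.length; i++) {
    if (t < playlist[i].duration) return playlist[i];
    t -= playlist[i].duration;
  }
}

var playlist = [
  { name: 'blue brushSet', duration: 30 },
  { name: 'text nibs',     duration: 45 },
  { name: 'video fills',   duration: 60 }
];
// 40 seconds in, the second configuration is playing.
```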

When Aleph Null is displayed in a gallery, by default, it will be in ‘gallery mode’. It will remain in gallery mode, displaying a playlist of configurations, until the viewer clicks/touches Aleph Null. Then Aleph Null changes to ‘interactive mode’, i.e., it accepts input from the viewer and doesn’t play the playlist anymore. It automatically reverts to ‘gallery mode’ when it has not had any user input for a few minutes.

This idea of saving configurations and being able to play playlists, which are sequences of saved configurations/brushSets, is something I implemented in the desktop version of dbCinema. And this seems more supportive of creating quality art than an AI evaluation-learning model. Better because humans are saving things they like rather than software guessing/inferring what is likable.

Anyway, years ago, I decided that I probably wouldn’t be using AI cuz I want to spend my time really making art and art-making software. One can spend a great deal of time programming a very small detail of an AI system. My work is not in AI; it’s in art creation. The only possibility for me of incorporating AI into my work is if I can use it as a web service, i.e., I send an AI service some data and get the AI to respond to the data. Rather than me having to write AI code.

But, so far, I think my approach gives me better results than what I’d get going an AI route. The proof is in the pudding.


Some correspondence with my pal Ted Warnell

Here is some correspondence between myself and the marvelous net artist Ted Warnell.

Oppen Do Down–first Web Audio API piece

In my previous post, I made notes about my reading of and preliminary understanding of Chris Wilson’s article on precision event scheduling in the Web Audio API–in preparation to create my first Web Audio API piece. I’ve created it. I’d like to share it with you and talk about it and the programming of it.

Oppen Do Down, an interactive audio piece

The piece is called Oppen Do Down. I first created it in the year 2000 with Director. It was viewable on the web via Shockwave, a Flash-like plugin–sort of Flash’s big brother. But hardly any contemporary browsers support the Shockwave plugin anymore–or any other plugins, for that matter–the trend is toward web apps that don’t use plugins at all but, instead, rely on newish native web technologies such as the Web Audio API, which requires no plugins to be installed before being able to view the content. The old Director version is still on my site, but nobody can view it anymore cuz of the above. I will, however, eventually release a bunch of downloadable desktop programs of my interactive audio work.

You can see the Director version of Oppen Do Down in a video I put together not long ago on Nio, Jig-Sound, and my other heap-based interactive audio work.

I sang/recorded/mixed the sounds in Oppen Do Down myself in 2000 using an old multi-track piece of recording software called Cakewalk. First I recorded a track of me snapping my fingers. Then I played that back over headphones, looping, while I recorded a looping vocal track. Then I’d play it back. If I liked it I’d keep it. Then I’d play the finger snapping and the vocal track back over headphones while I recorded another vocal track. Repeat that for, oh, probably about 60 or 70 tracks. Then I’d pick a few tracks to mix down into a loop. Most of the sounds in Oppen Do Down are multi-track.

As you can hear if you play Oppen Do Down, the sounds are synchronized. You click words to toggle their sounds on/off. The programming needs to be able to download a bunch of sound files, play them on command, and keep the ones that are playing synchronized. As you turn sounds on, the sounds are layered.

As it turns out, the programming of Oppen Do Down was easier in the Web Audio API than it was in Director. The reason for that is all to do with the relative deluxeness of the Web Audio API versus Director’s less featureful audio API.

Maybe the most powerful feature of the Web Audio API that Director didn’t offer is the high-performance clock. It’s high-performance in two ways. It has terrific resolution, apparently. It’s accurate to greater precision than 1 millisecond; you can use it to schedule events right down to the level of the individual sound sample, if you need that sort of accuracy. And the Web Audio API does indeed support getting your hands on the very data of the individual samples, if you need that sort of resolution. But the second way in which the high-performance clock is high-performance is that it stops for nothing. Which isn’t how it normally works with timers and clocks programmers use. They’re usually not the highest-priority processes in the computer, so they can get bumped by what the operating system or even the browser construes as more important processes. Which can result in inaccuracies. Often these inaccuracies are not big enough to notice. But in Oppen Do Down and pretty much all other rhythmic music, we need accurate rhythmic timing.

Director didn’t offer such a high-performance clock. What it had was the ability to insert cue-points into sounds. And define a callback handler that could execute when a cue-point was passed. That was how you could stay in touch with the actual physical state of the audio, in Director. The Web Audio API doesn’t let you insert cue-points in sounds, but you don’t need to. You can schedule events, like the playing of sounds, to happen in the time coordinate system of the high performance clock.

This makes synchronization more or less a piece of cake in the Web Audio API. Because you can look at the clock any time you want with great accuracy (AudioContext.currentTime is how you access the clock) and you can schedule sounds to start playing at time t and they indeed start exactly at time t. And the scheduling strategy Chris Wilson advocates, which I talked about in my previous post, whereby you schedule events a little in advance of the time they need to happen, works really well.
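
Here’s a miniature sketch of that look-ahead strategy, with the clock stubbed so the logic is runnable on its own. In the real thing, currentTime would come from AudioContext.currentTime and scheduleSound would call source.start(t):

```javascript
// Look-ahead scheduling: on each timer tick, schedule every sound whose
// start time falls within the next lookAhead window, so the audio clock
// never catches up to an unscheduled beat.
var lookAhead = 0.025; // seconds (25 ms, as in Oppen Do Down's pLookAhead)

function scheduler(currentTime, nextStartTime, loopDuration, scheduleSound) {
  while (nextStartTime < currentTime + lookAhead) {
    scheduleSound(nextStartTime); // real code: source.start(nextStartTime)
    nextStartTime += loopDuration;
  }
  return nextStartTime; // where the next loop iteration will begin
}

// With a stubbed clock at t = 3.99 s and 4-second loops starting at t = 4:
var scheduled = [];
var next = scheduler(3.99, 4, 4, function (t) { scheduled.push(t); });
// scheduled is [4]; next is 8
```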

There are other features the Web Audio API has that Director didn’t. But, then, Director was actually started in 1987, whereas the Web Audio API has only been around for a few years as of this date in 2018. You can synthesize sounds in the browser, though that isn’t my interest; I’m more interested in recording vocal and other sounds and doing things with those recorded sounds. You can also process live input from the microphone, or from video, or from a remote stream. And you can create filters. And probably other things I don’t know anything about, at this point.

Anyway, Oppen Do Down links to two JavaScript files. One, oppen.js, is for this particular app and its particular interface. The other one, sounds.js, is the important one for understanding sound in Oppen Do Down. The sounds.js file defines the Sounds constructor, from top to bottom of sounds.js. In oppen.js, we create an instance of it:

gSounds=new Sounds(['1.wav','2.wav','3.wav','4.wav','5.wav','6.wav']);

Actually there are 14 sounds, not 6, but just to make it prettier on this page I deleted the extra 8. I used wav files in my Director work. I was happy to see that the Web Audio API could use them. They are uncompressed audio files. Also, unlike mp3 files, they do not pose problems for seamless looping; mp3 files insert silence at the ends of files. I hate mp3 files for that very reason. Well, I don’t hate them. I just show them the symbol of the cross when I see them.

The gSounds object will download the sounds 1.wav, etc, and will store those sounds, and offers an API for playing them.

‘soundsAreLoaded’ is a function in oppen.js that gets called when all the sounds have been downloaded and are ready to be played.

gSounds adds each sound (1.wav, 2.wav, … 14.wav) via its ‘add’ method, which creates an instance of the Sound (not Sounds) constructor for each sound. The newly created Sound object then downloads its sound and, when it’s downloaded, the ‘makeAvailable’ function puts the Sound object in the pAvailableSounds array.

When all the sounds have been downloaded, the gSounds object runs a function that notifies subscribers that the sounds are ready to be played. At that point, the program makes the screen clickable; the listener has to click the screen to initiate play.
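
Stripped of the actual audio downloading and decoding, the load-and-notify pattern looks roughly like this. The names here are mine (except where noted), and the real sounds.js is of course more involved:

```javascript
// Sketch: download a list of sounds, collect each as it becomes
// available, and notify a subscriber once all of them are ready.
function loadSounds(urls, download, onAllLoaded) {
  var availableSounds = [];
  urls.forEach(function (url) {
    download(url, function (sound) {    // real code: XHR + decodeAudioData
      availableSounds.push(sound);      // cf. 'makeAvailable' in sounds.js
      if (availableSounds.length === urls.length) {
        onAllLoaded(availableSounds);   // now the screen becomes clickable
      }
    });
  });
}

// Stub download that "decodes" instantly, for illustration:
var loadedNames = [];
loadSounds(['1.wav', '2.wav'],
  function (url, cb) { cb({ name: url }); },
  function (sounds) {
    sounds.forEach(function (s) { loadedNames.push(s.name); });
  });
// loadedNames is ['1.wav', '2.wav']
```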

It’s important that no sounds are played until the user clicks the screen. If it’s done this way, the program will work OK in iOS. iOS will not play any sound until the user clicks the screen. After that, iOS releases its death grip on the audio and sounds can be played. Apparently, at that point, if you’re using the Web Audio API, you can even play sounds that aren’t triggered by a user click. As, of course, you should be able to, unless Apple is trying to kill the browser as a delivery system for interactive multimedia.

I’ve tested Oppen Do Down on Android, the iPad, the iPhone, and on Windows under Chrome, Edge, Firefox and Opera. Under OSX, I’ve tested it with Chrome, Safari and Firefox. It runs on them all. The Web Audio API seems to be well-supported on all the systems I’ve tried it on.

When, after the sounds are loaded, the user clicks the screen to begin playing with Oppen Do Down, we find the sound we want to play initially. Its name is ‘1’. It’s the sound associated with the word ‘badly’. We turn the word ‘badly’ blue and we play sound ‘1’. We also make the opening screen invisible and display the main screen of Oppen Do Down (which is held in the div with id=’container’).

var badly=gSounds.getSound('1');

The ‘’ method is, of course, crucial to the program cuz it plays the sounds.

It also checks to see if the web worker thread is working. This separate thread is used, as in Chris Wilson’s metronome program, to continually set a timer that times out just before sounds stop playing, so sounds can be scheduled to play. If the web worker isn’t working, ‘’ starts it working. Then it plays the ‘1’ sound.

Just before ‘1’ finishes playing–actually, pLookAhead milliseconds before it finishes (pLookAhead is currently set to 25)–the web worker’s timer times out and it sends the main thread a message to that effect. The main thread then calls the ‘scheduler’ function to schedule the playing of sounds, which will start playing in pLookAhead milliseconds.

If the listener did nothing else, this process would repeat indefinitely. Play the sound. The worker thread’s timer ticks just before the sound finishes, and then sounds are scheduled to play.

But, of course, the listener clicks on words to start/stop sounds. When the listener clicks on a word to start the associated sound, ‘’ checks to see how far into the playing of a loop we are. And it starts the new sound so that it’s synchronized with the playing sound. Even if there are no sounds playing, the web worker is busy ticking and sending messages at the right time. So that new sounds can be started at the right time.
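The synchronization arithmetic can be sketched as a pure function (my names, and a simplification: it assumes a newly clicked sound joins at the next loop boundary):

```javascript
// Sketch: given the AudioContext clock (currentTime, in seconds), when the
// loop started, and the loop's duration, compute the next loop boundary at
// which a newly clicked sound should be scheduled to start.
function nextLoopBoundary(currentTime, loopStartTime, loopDuration) {
  var elapsed = currentTime - loopStartTime;
  var loopsDone = Math.ceil(elapsed / loopDuration);
  return loopStartTime + loopsDone * loopDuration;
}

// e.g. if the loop started at 2.0s, lasts 4.0s, and it's now 7.5s,
// the new sound should be scheduled for 10.0s, in sync with the loop.
```

Because the result is expressed in AudioContext time, the worker’s tick only has to arrive somewhat before that boundary for the sound to start exactly on it.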

Anyway, that’s a sketch of how the programming in Oppen Do Down works.

Chris Joseph gave me some good feedback. He noticed that as he added sounds to the mix, the volume increased and distortion set in after about 3 or 4 sounds were playing. He suggested that I put in a volume control to control the distortion. He further suggested that each sound have a gain node and there also be a master gain node, so that the volume of each sound could be adjusted.

The idea is that as the listener adds sounds, the volume remains constant. Which is what the ‘adjustVolumes’ function is about. It works well.
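A sketch of the arithmetic (this is not the actual ‘adjustVolumes’ code; the node wiring in the comments is an assumed, illustrative arrangement):

```javascript
// Sketch: with n sounds playing, give each sound's GainNode a gain of 1/n
// so the summed signal stays at roughly constant loudness.
function perSoundGain(activeSoundCount) {
  return activeSoundCount > 0 ? 1 / activeSoundCount : 1;
}

// Assumed browser wiring: one GainNode per sound, all feeding a master GainNode.
// var masterGain = audioCtx.createGain();
// masterGain.connect(audioCtx.destination);
// soundSource.connect(soundGain);
// soundGain.connect(masterGain);
// soundGain.gain.value = perSoundGain(numberOfSoundsPlaying);
```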

I am happy with my first experiment with the Web Audio API. Onward and upward.

However, it’s hard to be happy with some of the uses that the Web Audio API is being put to. The same is true of the Canvas API and the WebRTC API. And these, to me, are the three most exciting new web technologies. But, of course, when new, interesting, powerful tools arise on the web, the forces of dullness will conspire to use them in evil ways. These are precisely the three technologies being used to ‘fingerprint’ and track users on the web. This is the sort of crap that makes everything a security threat these days.

Event Scheduling in the Web Audio API

This is the first of a two-part essay on event scheduling in the Web Audio API and an interactive audio piece I wrote (and sang) called Oppen Do Down. There’s a link to part two at the bottom.

I’ve been reading about the Web Audio API concerning synchronization of layers and sequences of sounds. Concerning sound files, specifically. So that I can work with heaps of rhythmic music.

A heap is the term I use to describe a bunch of audio files that can be interactively layered and sequenced as in Nio and Jig Sound, which I wrote in Director, in Lingo. The music remains synchronized as the sound icons are interactively layered and sequenced. The challenge of this sort of programming is coming up with a way to schedule the playing of the sound files so as to maintain synchronization even when the user rearranges the sound icons. When I wrote Nio in 2000, I wrote an essay on how I did it in Nio; this essay became part of the Director documentation on audio programming. The approach to event scheduling I took in Nio is similar to the recommended strategy in the Web Audio API.

First, I tried basically the simplest approach in the Web Audio API. I wanted to see if I could get seamless looping of equal-duration layered sounds simply by waiting for a sound’s ‘end’ event. When the ‘end’ event occurred for a specific one of the several sounds, I played all the sounds again. This actually worked seamlessly in Chrome, Opera and Edge on my PC. But not in Firefox. Given the failure of Firefox to support this sort of strategy, some other strategy is required.

The best doc I’ve encountered is A Tale of Two Clocks–Scheduling Web Audio With Precision by Chris Wilson of Google. I see that Chris Wilson is also one of the editors of the W3C spec on the Web Audio API. So the approach to event scheduling he describes in his article is probably not idiosyncratic; it’s probably what the architects of the Web Audio API had in mind. The article advocates a particular approach or strategy to event scheduling in the Web Audio API. I looked closely at the metronome he wrote to demonstrate the approach he advances in the article. The sounds in that program are synthesized. They’re not sound files. Chris Wilson answered my email to him in which I asked him if the same approach would work for scheduling the playing of sound files. He said the same approach would work there.

Basically Wilson’s strategy is this.

First, create a web worker thread. This will work in conjunction with the main thread. Part of the strategy is to use this separate thread that doesn’t have any big computation in it for a setTimeout timer X whose callback Xc regularly calls a schedule function Xcs, when needed, to schedule events. X has to be set to timeout sufficiently in advance of when sounds need to start that they can start seamlessly. Just how many milliseconds in advance it needs to be set will have to be figured out with trial and error.
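A minimal sketch of the scheme, with the worker reduced to a metronome tick (names and the 25ms interval are assumed; the actual values get tuned by trial and error):

```javascript
// worker.js -- the entire worker thread: tick every `interval` ms.
// var interval = 25;
// setInterval(function () { postMessage('tick'); }, interval);

// Main thread: on each tick, schedule any sound due to start soon.
// var worker = new Worker('worker.js');
// worker.onmessage = function (e) {
//   if (e.data === 'tick') scheduler();
// };

// The scheduler's core test can be isolated as a pure function:
// is a sound whose start time is startAt (seconds, on the AudioContext
// clock) inside the look-ahead window [now, now + lookAhead)?
function dueForScheduling(startAt, now, lookAhead) {
  return startAt >= now && startAt < now + lookAhead;
}
```

Keeping the worker this dumb is the point: nothing in that thread can get busy enough to delay the tick much.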

But it’s also desirable that the scheduling be done as late as feasible. If user interaction necessitates recalculation and resetting of events and other structures, we probably want to do that as infrequently as possible, which means doing the scheduling as late as possible. As late as possible. And as early as necessary.

When we set a setTimeout timer to time out in x milliseconds, it doesn’t necessarily execute its callback in x milliseconds. If the thread or the system is busy, that can be delayed by 10 to 50 ms. Which is more inaccuracy than rhythmic timing will permit. That is one reason why timeout X needs to timeout before events need to be scheduled. Cuz if you set it to timeout too closely to when events need to be scheduled, it might end up timing out after events need to be scheduled, which won’t do—you’d have audible gaps.

Another reason why events may need to be scheduled in advance of when they need to happen is that some browsers—such as Firefox—may require some time to get it together to play a sound. As I noted at the beginning, Firefox doesn’t support seamless looping via just starting sounds when they end. That means either that the end event’s callback happens quite a long time after the sound ends (improbable) or that, in some situations, sounds require a bit of prep by Firefox before they can be played.

So we need to schedule events a little before those events have to happen. We regularly set a timer X (using setTimeout or setInterval) to timeout in our web worker thread. When it does, it posts a message to the main thread saying it’s time to see if events need scheduling.
If some sounds do need to be scheduled to start, we schedule them now, in the main thread.

But to understand that process, it’s important to understand the AudioContext’s currentTime property. It’s measured in seconds from a 0 value when audio processing in the program begins. This is a high-precision clock. Regardless of how busy the system is, this clock keeps accurate time. Also, when you pause the program’s execution with the debugger, currentTime keeps changing. currentTime stops for nothing! The moral of the story is we want to schedule events that need rhythmic accuracy with currentTime.

That can be done with the .start(when, offset, duration) method. The ‘when’ parameter “should be specified in the same time coordinate system as the AudioContext’s currentTime attribute.” If we schedule events in that time coordinate system, we should be golden, concerning synchronization, as long as we allow for browsers such as Firefox needing enough prep time to play sounds. How much time do such browsers require? Well, I’ll find out in trials, when I get my code running.
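In code, the scheduling call looks roughly like this (variable names are assumed; the clamping helper is my own illustrative addition):

```javascript
// Sketch of scheduling a decoded buffer at a computed time:
// var source = audioCtx.createBufferSource();
// source.buffer = decodedBuffer;
// source.connect(audioCtx.destination);
// source.start(when); // 'when' in AudioContext.currentTime seconds

// One practical guard: a start time already in the past plays immediately,
// but it's cleaner to clamp it to 'now' explicitly.
function safeStartTime(when, now) {
  return Math.max(when, now);
}
```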

The approach Chris Wilson recommends to event scheduling is similar to the approach I took in Nio and Jig Sound, which I programmed in Lingo. Again, it was necessary to schedule the playing of sounds in advance of the time when they needed to be played. And, again, that scheduling needed to be done as late as possible but as early as necessary. Also, it was important to not rely solely on timers but to ground the scheduling in the physical state of the audio. In the Web Audio API, that’s available via the AudioContext’s currentTime property. In Lingo, it was available by inserting a cuePoint in a sound and reacting to an event being triggered when that cuePoint was passed. In Nio and Jig-Sound, I used one and only one silent sound that contained a cuePoint to synchronize everything. That cuePoint let me ground the event scheduling in a kind of absolute time, physical time, which is what the Web Audio API currentTime gives us also.

Part 2: Oppen Do Down–First Web Audio Piece

Chris Joseph: Amazing Net Art from the Frontier

I’ve been following Chris Joseph‘s work as a net artist since the late 1990’s when he was living in Montréal–he’s a Brit/Canadian living now in London. He was on Webartery, a listserv I started in 1997; there was great discussion and activity in net art on Webartery, and Chris was an important part of it then, too. I visit his page of links to his art and writing several times a year to see what he’s up to.

I recently wrote a review of Sprinkled Speech, an interactive poem of Chris’s, the text of which is by our late mutual friend Randy Adams.

More recently–like yesterday–I visited #RiseTogether, shown below, which I’d somehow missed before. This is a 2014 piece by Chris. We see a map, the #RiseTogether hashtag, a red line and a short text describing issues, problems, possibilities, groups, etc. Every few seconds, the screen refreshes with a new map, red line, and description.

Chris Joseph’s #RiseTogether

I sent Chris an email about it:

Hey Chris,

I was looking at

I see you're using Google maps.

What's with the red line?

What is #RiseTogether ? 

The language after "#RiseTogether"--where does that come from?


Chris’s response was so interesting and illuminating I thought I’d post it here. Chris responded:

Hi Jim,

Originally this phrase, as a hashtag, was used by the Occupy Wall Street anti-capitalism movement, but I think since then it has been adopted/co-opted by many other movements including (US) football teams. The starting article and the text source for this piece was . 

It was one of three anti-capitalist pieces I did around that time, which was pretty much at the beginning of my investigating what could be done outside of Adobe Flash, along with and . And thematically these hark back to one of my first net art pieces, which isn't linked up on my art page at the moment, 

The red line was for a few reasons, I think. Firstly to add some visual interest, and additional randomisation, into what would be a fairly static looking piece otherwise.  But I find the minimalism of a line quite interesting, as the viewer is asked to actively interpret the meaning of that line. For me it's a dividing line - between haves and have nots, or the 1% and 99%, or any of those binary divisions that the protesters tend to use. Or it could suggest a crossing out - perhaps (positively) of a defunct economic philosophy, or (negatively) of the opportunities of a geographical area as a result of that economic philosophy. 

All three of those pieces have a monochromatic base, but only two have the red, which feels quite angry, or reminiscent of blood, of which there was quite a bit in the anti-capitalist protests.

I used the same technique again in this piece: - but here the lines are much more descriptive, as an indication of the supposed 'plague vectors'. 

Chris Joseph

globalCompositeOperation in Net Art

Ted, Jim and globalCompositeOperation

Ted Warnell and I have been corresponding about net art since 1996 or 97. We’ve both been creating net art using the HTML 5 canvas for about the last 6 years; we show each other what we’re doing and talk about canvas-related JavaScript via email. He lives in Alberta and I live in British Columbia.

Ted’s canvas-based stills and animations can be seen at My canvas-based work includes Aleph Null versions 1.0 and 2.0 at, respectively, and

One of the things we’ve talked about several times is globalCompositeOperation—which has got to be a candidate for longest-name-for-a-JavaScript-system-variable. The string value you give this variable determines “the type of compositing operation to apply when drawing new shapes” ( Or, as puts it:

“The globalCompositeOperation property sets or returns how a source (new) image is drawn onto a destination (existing) image.

Source image = drawings you are about to place onto the canvas.

Destination image = drawings that are already placed onto the canvas.”

The reason we’ve talked about this variable and its effects is that globalCompositeOperation turns out to be important to all sorts of things in creating animations and stills that you wouldn’t necessarily guess it had anything to do with. It’s one of those things that pops up too often to be coincidental. The moral of the story seems to be that globalCompositeOperation is an important, fundamental tool in creating animations or stills with the canvas.

In this article, we’d like to show you what we’ve found it useful for. We’ll show you the art works and how we used globalCompositeOperation in them to do what we did with it.

Ted’s uses of globalCompositeOperation tend to be in the creation of effects. Mine have been for masking, fading to transparency, and saving a canvas to png or jpg.

Digital Compositing

“Compositing” is an interesting word. It’s got “compose” and “composite” in it. “Compositing” is composing by combining images into composite images.

Keep in mind that each pixel of a digital image has four channels or components. The first three are color components. A pixel has a ‘red’ value, a ‘green’ value, and a ‘blue’ value. These are integers between 0 and 255. These combine to create a single color. The fourth channel or component is called the alpha channel. That’s a number between 0 and 1. It determines the opacity of the pixel. If a pixel’s alpha channel has a value of 1, the pixel is fully opaque. If it has a value of 0, the pixel is totally transparent. It can have intermediary values that give the pixel an intermediary opacity.
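You can inspect those four channels directly with getImageData. One caveat: in the canvas’s ImageData representation, all four values are stored as bytes (0 to 255), so the 0-to-1 alpha discussed above is the alpha byte divided by 255. A sketch (pixelAt is my helper, not a canvas API):

```javascript
// Sketch: read the r, g, b bytes and normalized alpha of the pixel at (x, y)
// from an ImageData object. ImageData stores pixels row by row, four bytes
// per pixel, so pixel (x, y) begins at index (y * width + x) * 4.
function pixelAt(imageData, x, y) {
  var i = (y * imageData.width + x) * 4;
  var d = imageData.data;
  return { r: d[i], g: d[i + 1], b: d[i + 2], alpha: d[i + 3] / 255 };
}

// Browser usage (assumed):
// var imgData = context.getImageData(0, 0, canvas.width, canvas.height);
// var p = pixelAt(imgData, 10, 20);
```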

The default value of globalCompositeOperation is “source-over”. When that’s the value, when you paste a source image into a destination canvas, you get what you’d expect: the source is placed overtop of the destination.

There are 26 possible values for globalCompositeOperation which are described at The first 8 of the options, shown below, are for compositing via the alpha channel. The remaining 18 are blend modes. You may be familiar with blend modes in Photoshop; they determine how the colors of two layers combine and include values such as “multiply”, “screen”, “darken”, “lighten” and so on. Blend modes operate on the color channels of the two layers.

But the first 8 values shown below operate on the alpha channels of the two images. They don’t change the colors. They determine what shows up in the result, not what color it is. The first 8 values in the below diagram can be thought of as a kind of Venn diagram of image compositing. There’s the blue square (destination) and the red circle (source). There are 3 sections to that diagram:

  • A: the top left part of the blue square that doesn’t intersect with the red circle;
  • B: the section where the square and circle intersect;
  • C: and the bottom right section of the red circle that doesn’t intersect with the blue square.

Section A can be blue or be invisible; section B can be blue, red, or invisible; section C can be red or invisible. That makes for 12 possibilities, but some of those 12 possibilities, such as when everything is invisible, are of no use. When the useless possibilities are eliminated, we’re left with the first 8 shown below. These possibilities form the basic sort of Venn logic of image compositing. You see this diagram not only with regard to JavaScript but in image compositing regarding other languages.

The first 8 values for globalCompositeOperation operate on the alpha channels of the source (red) and destination (blue)

What is “compositing”? We read the following definition at Wikipedia:

Compositing is the combining of visual elements from separate sources into single images, often to create the illusion that all those elements are parts of the same scene. Live-action shooting for compositing is variously called “chroma key”, “blue screen”, “green screen” and other names. Today, most, though not all, compositing is achieved through digital image manipulation. Pre-digital compositing techniques, however, go back as far as the trick films of Georges Méliès in the late 19th century; and some are still in use. All compositing involves the replacement of selected parts of an image with other material, usually, but not always, from another image. In the digital method of compositing, software commands designate a narrowly defined color as the part of an image to be replaced. Then the software replaces every pixel within the designated color range with a pixel from another image, aligned to appear as part of the original. For example, one could record a television weather presenter positioned in front of a plain blue or green background, while compositing software replaces only the designated blue or green color with weather maps.

Whether the compositing is operating on the alpha or the color channels, compositing is about combining images via their color and/or alpha channels.

As we see at, different browsers treat some of the values of globalCompositeOperation differently, which can make for dev headaches and gnashing of teeth but, for the most part, globalCompositeOperation works OK cross-browser and cross-platform.

Jim Andrews: Masking (source-atop)

Masking is when you fill a shape, such as a letter, with an image. The shape is said to mask the image; the mask hides part of the image. Masking was crucial to an earlier piece of software I wrote called dbCinema, a graphic synthesizer I wrote in Lingo, the language of Adobe Director. The main idea was of brushes/shapes that sampled from images and used the samples as a kind of ‘paint’. My more recent piece Aleph Null 2.0, written in JavaScript, can do some masking, such as the sort of thing you see in SimiLily—and I’ll be developing more of that sort of thing in Aleph Null.

Let’s look at a simple example. You see it below. You can also see a copy of it at, where it’s easier to view the source code. There’s a 300×350 canvas with a red border. We draw an ‘H’ on the canvas. We fill it with any color–red in this case. Then we set globalCompositeOperation = ‘source-atop’. Then we draw a bitmap of a Kandinsky painting into the canvas, but the only part of the Kandinsky that we see fills the ‘H’. Because when you set globalCompositeOperation = ‘source-atop’ and you then draw into an image, it only draws on pixels that were already on the canvas. states it this way:

“source-atop displays the source image on top of the destination image. The part of the source image that is outside the destination image is not shown.”

In other words, first you draw on the canvas to create the “destination” image (the ‘H’). Then you set globalCompositeOperation = ‘source-atop’. Then you draw the “source” image on the canvas (the Kandinsky).

Masking with globalCompositeOperation = ‘source-atop’

The most relevant code in the above example is shown below:

function drawIt(oldValue) {
context.font = 'bold 400px Arial';
context.fillStyle = 'red';
context.fillText('H', 0,320);
// The above three lines set the text font to bold
// 400px Arial, set the fill color to red, and draw
// a red 'H' at (0,320). This is the destination.
// (0,320) is the bottom left of the 'H'.
context.globalCompositeOperation = 'source-atop';
context.drawImage(newImg, -100,-100);
// newImg is the rectangular Kandinsky image.
context.globalCompositeOperation = oldValue;
// Sets globalCompositeOperation back to what it was.
}

In our example, the destination ‘H’ is fully opaque. However, if the destination is only partially opaque, so too will the result be partially opaque. The opacity of the mask determines the opacity of the result. You can see an example of that at The mask, or destination, is an ellipse that grows transparent toward its edge. The source image, once again, is a fully opaque Kandinsky-like image.

You can see some of Aleph Null’s masking ability if you click the Bowie Brush, shown below. It fills random polygons with images of the late, great David Bowie.

The Bowie Brush in Aleph Null fills random polygons with images of David Bowie

Ted Warnell: Albers by Numbers, February, 2017

Overview: Poem by Nari works are dynamically generated, autoactive and alinear, visual and code poetry from the cyberstream. Poem by Nari is Ted Warnell and friends. Following are four Poem by Nari works that demonstrate use of some of the HTML5 canvas globalCompositeOperation(s) documented in this article.

These works are tested and found to be working as intended on a PC running the following browsers: Google Chrome, Firefox, Firefox Developer Edition, Opera, Internet Explorer, Safari, and on an Android tablet. Additional browser specific notes are included below.

Experimental. Albers by Numbers is one of a series of homages to German-American artist Josef Albers. The Poem by Nari series is loosely based on the Albers series “Homage to the Square”.

This work is accomplished in part by a complex interaction of stylesheet mixBlendMode(s) between the foreground and background canvases. All available mixBlendMode(s) are employed via a dedicated random selection function, x_BMX.

Interesting to me is how the work evolves from a single mass of randomly generated numeric digits to the Albers square-in-square motif. This emergence happens over a period of time, approximately one minute, and in a sense parallels emergence of the Albers series, which happened for Albers over a lifetime.

Note to IE and Safari users: works but not as intended.

Ted Warnell: Acid Rain Cloud 3, February 2017

Experimental. Another work from a series exploring a) acid, b) rain, c) clouds, d) all of the above.

globalCompositeOperation(s) “source-over” and “xor” are used here in combination with randomized color and get & putImageData functions. The result is a continually shifting vision of what d) all of the above, above, might look like.

Interesting to me here is the ever-changing “barcode” effect in the lower half of the work – possibly the “rain” in this? Over time, that rain will turn from a strong black and white downpour to a gentle gray mist. This is globalCompositeOperation “xor” at work.

Note to Safari users: works but not as intended.

Ted Warnell: An Alinear Rembrandt, April 2017

An Alinear Rembrandt

Christ image is digitized from Rembrandt’s “Christ On The Cross”.

Not an experiment. The statement is clear, it’s Christ on the cross.

This fully-realized work brings together globalCompositeOperation(s) “source-over” and “lighter” in combination with gif image files, globalAlpha, linear gradients, standard and dedicated random functions, get & putImageData functions, and a Poem by Nari custom grid definition function. And of course, timing is everything.

Of interest to readers will be the flashing sky and flickering Christ. These effects are accomplished by linear gradient masks, gif image file redraws, and the aforementioned globalCompositeOperation(s).

Of interest to me, it’s Christ on the cross.

Ted Warnell: Pinwheels, April 2017

More experimentation. This work is for Mary & Ryan Maki, Canada

Full screen, variable canvas rotations, and globalCompositeOperation(s) “source-over” and “xor” with randomized color. “source-over” is default and is responsible for the vivid, solid colors in this work, while “xor” provides the muted, soft-edge color blends.

Pinwheels… I’m going to be a grandpa again.

Note to Safari users: does not work with Safari browser.



Fade to Transparency (destination-out)

The fader slider in Aleph Null

Aleph Null 2.0 has a fader slider. The greater the value of the fader slider, the quicker the screen fades to the current background color. This is implemented by periodically drawing a nearly-transparent fill of the background color over the whole canvas. The greater the value of the fader slider, the more frequent the drawing of that fill over the whole canvas.
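In code, the idea looks roughly like this (a sketch with assumed names, not Aleph Null’s actual source):

```javascript
// Sketch: called periodically; paints a nearly transparent coat of the
// background color over the whole canvas, so older drawing fades away.
// The more often it's called (and the higher 'rate'), the faster the fade.
function fadeToBackground(ctx, backgroundColor, rate) {
  ctx.globalAlpha = rate;            // e.g. 0.05
  ctx.fillStyle = backgroundColor;
  ctx.fillRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  ctx.globalAlpha = 1;               // restore full opacity for other drawing
}
```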

That works well when there is just one canvas, when there is no notion of layers of canvases. Once you introduce layers, you have to be able to fade a layer to transparency, not to a background color, so that you can see what’s on lower layers. I’m attempting to implement layers at the moment in Aleph Null. So I have to be able to fade a canvas to transparency.

So, then, how do you fade a canvas to transparency?

As Blindman67 explains at, “…you can avoid the colour channels and fade only the alpha channel by using the global composite operation “destination-out”. This will fade out the rendering by reducing the pixels’ alpha.” Each pixel has four channels: the red, the green, the blue, and the alpha channels; the alpha channel determines opacity. The code is like this:

ctx.globalAlpha = 0.01; // fade rate
ctx.globalCompositeOperation = "destination-out";
ctx.fillRect(0, 0, ctx.canvas.width, ctx.canvas.height); // subtract alpha over the whole canvas
ctx.globalCompositeOperation = "source-over"; // restore the default
ctx.globalAlpha = 1; // reset alpha

You do the above every frame, or every second frame, or every third frame, etc, depending on how quickly you want it to fade to transparency. Another parameter with which you control the speed of the fade is ctx.globalAlpha, which is always a number between 0 and 1. The higher it is, the closer to fully opaque the result will be on a canvas draw operation.
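A sketch of that cadence (the names and the animation loop are illustrative assumptions, not Aleph Null’s actual code):

```javascript
// Pure helper: fade on every Nth frame.
function shouldFade(frameCount, every) {
  return frameCount % every === 0;
}

// Illustrative browser loop:
// var frame = 0;
// function animate() {
//   frame++;
//   if (shouldFade(frame, 3)) fade(); // fade every third frame
//   drawNewStuff();
//   requestAnimationFrame(animate);
// }
// requestAnimationFrame(animate);
```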

Blindman67 develops an interesting example of a fade to transparency in You can see that it must be fading to transparency because the background color is dynamic, is constantly changing.

Note that the ctx.fillStyle color isn’t really important because we’re fading the alpha, not the color channels. ctx.fillStyle isn’t even specified in the above code. When globalCompositeOperation = ‘destination-out’, the color values of the destination pixels remain unchanged. What changes is the alpha value of the destination pixels. The alpha values of the source pixels get subtracted from the alpha values of the destination pixels.

The performance of fading this way should be very good, versus mucking with the color channels, because you’re changing less information; you’re only changing the alpha channel of each pixel, not the three color channels.

I massaged the Blindman67 example into something simpler at There’s a fade function:

function fade() {
gCtx1.globalAlpha = 0.15; // fade rate
gCtx1.globalCompositeOperation = "destination-out";
gCtx1.fillRect(0, 0, gCtx1.canvas.width, gCtx1.canvas.height);
gCtx1.globalCompositeOperation = "source-over";
gCtx1.globalAlpha = 1; // reset alpha
}

But compare the fade function with the code above it from Blindman67. It’s the very same idea.

Above, we see an example much like what I wrote at

Finally, on this topic, I’m currently wondering about the best way to implement layers of canvases. Clearly the compositing possibilities create a situation where, at least in some situations, you don’t need multiple visible canvases; you can composite with offscreen canvases and use only one visible canvas. Whether this is better in general, and what the performance issues are, is currently unclear to me. There also exists at least one platform, namely concretejs, that supports canvas layers.

Save Canvas to Image File (destination-over)

globalCompositeOperation = ‘destination-over’ allows you to slip an image into the background of another image. The source image is written underneath the destination image.

It turns out that’s precisely what is needed to fix some bad browser behavior when you save a canvas to an image file, as we’ll see.

If you want to save a canvas to an image file, the simplest way to do it, at least on Chrome and Firefox, is to right-click (PC) or Control+click (Mac) the canvas. You are presented with a menu that allows you to “Save As…” or, on some browsers, “Copy Image”. The problem is that some browsers insert a background into this image that probably isn’t the same color as the background on the canvas.

On the PC, Chrome inserts a black background. Other browsers may insert other colors, or the right color, or no color at all. One solution to this problem is to create a button that runs some JavaScript that inserts the right background color. This is a job for globalCompositeOperation = ‘destination-over’ because it allows you to create a background with the source image.

The “save” button in Aleph Null

You can see the solution I’ve created at, shown above. The controls contain a “save” button which, when clicked, copies a png-type image into a new tab, if permitted to do so. You may have to permit it by clicking on a red circle near the URL at the top of the browser. Once the image is in the new tab, right-click (PC) or Ctrl+click (Mac) and select “Save As…”.

The code is basically this sort of thing:

var canvas=document.getElementById('canvas');
var context=canvas.getContext('2d');
// We assume the canvas already has the destination image on it.
var oldGlobalComposite=context.globalCompositeOperation;
context.globalCompositeOperation='destination-over';
// backgroundColor is a string representing the desired background color.
context.fillStyle=backgroundColor;
context.fillRect(0,0,canvas.width,canvas.height);
context.globalCompositeOperation=oldGlobalComposite;
var data=canvas.toDataURL('image/png');
window.open(data);

The toDataURL command can also create the image as a jpg or webp.

In your animations with the HTML 5 canvas, will globalCompositeOperation be of any use? The answer is that if you are combining images at all, doing any compositing at all, globalCompositeOperation is probably relevant to your task and may make it much easier.

Colour Music in Aleph Null 2.0

I’m working on Aleph Null 2.0. You can view what I have so far at . If you’re familiar with version 1.0, you can see that what 2.0 creates looks different from what 1.0 creates. I’ve learned a lot about the HTML5 canvas. Here are some recent screenshots from Aleph Null 2.0.


Image Masking with the HTML5 Canvas

Image masking with the HTML5 canvas is easier than I thought it might be. This shows you the main idea and two examples.

If you’d like to cut to the chase, as they say, look at this example and its source code. The circular outline is an image. The Kandinsky painting is a rectangular image that is made to fill the circular outline. We see the result below:

The Kandinsky painting fills a blurry circle.

The key is the setting for the canvas’s globalCompositeOperation property. If, like me, you had seen any documentation for this property at all, you might have thought that it only concerned color blending, like the color blending options in Photoshop for a layer (the options usually include things like ‘normal’, ‘dissolve’, ‘darken’, ‘multiply’, ‘color burn’, etc). But, actually, globalCompositeOperation is more powerful than that. It’s for compositing images. Image masking is simply an example of compositing. Studying the possibilities of globalCompositeOperation would be interesting. We’re just going to use a couple of settings in this article. The definition we read of “compositing” via Googling the term includes this:

“Compositing is the combining of visual elements from separate sources into single images, often to create the illusion that all those elements are parts of the same scene.”

We’re going to use the “source-atop” setting of globalCompositeOperation. The default value, by the way, is “source-over”.

The basic idea is that if you want image F to fill image A, you draw image A on a fresh canvas. Then you set  globalCompositeOperation to “source-atop”. Then you draw image F on the canvas. When you do that, the pixels in the canvas retain whatever opacity/alpha value they have. So, for instance, any totally transparent pixels remain totally transparent. Any pixels that are partially transparent remain partially transparent. Image F is drawn into the canvas, but F does not affect the opacity/alpha values of the canvas.
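The recipe just described, as a sketch (maskFill is my name for it; the linked example pages do essentially this inline):

```javascript
// Sketch: fill the opaque/semi-opaque pixels of maskImage with fillImage.
function maskFill(ctx, maskImage, fillImage) {
  ctx.drawImage(maskImage, 0, 0);               // image A: establishes the alpha
  var old = ctx.globalCompositeOperation;
  ctx.globalCompositeOperation = 'source-atop'; // draw only where A is visible
  ctx.drawImage(fillImage, 0, 0);               // image F: the fill
  ctx.globalCompositeOperation = old;           // restore the previous mode
}
```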

Here is an example where a Kandinsky painting is made to fill some canvas text:

Click the image and then view the source code.

I’m working on some brushes for Aleph Null 2.0 that are a lot like the brushes in dbCinema: the brushes ‘paint’ samples of images.

New Work by Ted Warnell

Ted Warnell, as many of you know, is a Canadian net artist, originally from Vancouver but long since living in Alberta, who has been producing net art as programmed visual poetry since the ’90s, which is about how long we’ve been in correspondence with one another. Ted was very active for some time on Webartery, an email list in the ’90s that many of the writerly net artists were involved in. We’ve stayed in touch over the years, though we’ve never met in the same room. We have, however, met ‘face-to-face’ via video chat.

He’s still creating interesting net art. In the nineties and oughts, his materials were largely bitmaps, HTML, CSS, and a little JavaScript. Most of his works were stills, or series thereof. Since about 2013, he’s been creating net works using the HTML5 canvas tag that consist entirely of JavaScript. The canvas tag lets us program animations and stills on HTML pages without needing any plugins such as Flash or Unity. Ted has never liked plugins, so the canvas tag works well for him for a variety of reasons. Ted has created a lot of very interesting canvas-based, programmed animations and stills at .

I’m always happy to get a note from Ted showing me new work he’s done. Since we both are using the canvas, we talk about the programming issues it involves and also the sorts of art we’re making. Below is an email Ted sent me recently after I asked him how he would describe the ‘look’ or ‘looks’ he’s been creating with his canvas work. If you have a good look at , you see that his work does indeed exhibit looks you will remember and identify as Warnellian.

hey jim,

further to earlier thoughts about your query re “looks” in my work (and assuming that you’re still interested by this subject), here is something that has been bubbling up over the past week or so

any look in my work comes mainly from the processes used in creation of the work – so, it’s not a deliberate or even a conscious thing, the look, but rather, it just is – mainly, but not entirely, of course – subject, too, is at least partly responsible for the way these things look

have been thinking this past week that what is deliberate and conscious is my interest in the tension between and balance of order and chaos, by which i mean mathematics (especially geometry, visual math) and chance (random, unpredictable) – i’m exploring these things and that tension/balance in almost all of my works – you, too, explore and incorporate these things into many of your works including most strikingly in aleph null, and also in globebop and others

so here are some thoughts about order/chaos and balance/tension in no particular order:

works using these things function best when the balance is right – then the tension is strong – and then the work also is “right” and strong

it is not a requirement that both of these things are apparent (visible or immediately evident) in a work – there are some notable examples of works that seem to be all one or the other, though that may be more illusion than reality – works of jackson pollock seem to be all chaos but still balance with a behind-the-scenes intelligence, order – works by andrew wyeth on the other hand seem to be all about order and control, but look closely at the brushstrokes that make all of that detail and you’ll see that many of these are pure chance – brilliant stuff, really

an artist whose work intrigues me much of late is quebecer claude tousignant – i’m sure you know of him – he is perhaps best known for his many “target” paintings of concentric rings – tousignant himself referred to these as “monochromatic transformers” and “gongs” – you can find lots of his works at google images

the reason tousignant is so interesting to me (again) at this time is because while i can see that his paintings “work”, i cannot for the life of me see where he is doing anything even remotely relating to order/chaos or the balance/tension of same – his works seem to me to be truly all order/order with no opposite i would consider necessary for balance and/or to make (required) tension – his works defy me and i’d love to understand how he’s doing it 🙂

anyway, serious respect, more power, and many more years to the wonderful monsieur tousignant

Look Again –

is a new (this week) autointeractive work created with claude tousignant and his target paintings in mind

in this work are three broad rings, perfectly ordered geometric circles, each in the same randomly selected single PbN primary color – the space between and surrounding these rings is filled with a randomly generated (60/sec), randomly spun alphanumeric text in black and white, and also gray thanks to xor compositing – alinear chaos – as the work progresses, the three rings are gradually overcome by those relentless spinning texts – the outermost ring is all but obliterated while the middle ring is chipped away bit by bit until only a very thin inner crust of the ring remains – the third innermost ring, tho, is entirely unaffected

as the work continues to evolve, ghostlike apparitions of the missing outer and middle ring become more and more pronounced… because… within the chaos, new rings in ever-sharper black and white are beginning to emerge – this has the effect of clearly defining (in gray and tinted gray) the shape of the original color rings – even as order is continually attacked and destroyed by chaos, chaos is simultaneously rebuilding the order – so nothing is actually gained or lost… the work is simply transformed – a functioning “monochromatic transformer”, as tousignant might see it

that’s the tension and balance i’m talking about – the look you were asking about likely has something to do with autointeraction, alinearity, and most likely by my attempt to render visible order/chaos and balance/tension in every work i do

your attempt in aleph null (it now seems to me) might be in the form of progressive linearity on an alinear path – and well done


PS, “Look Again” is a rework of my earlier work, “Poem by Numbers 77” from march 2015 –

which work is a progression of “Poem by Numbers 52” from april 2013 –

which work was about learning canvas coding for circular motion

Poem by Numbers works usually (not always) are about coding research and development – moreso than concept development, which comes in later works like “Look Again”

other artists have “Untitled XX” works – i have “Poem by Numbers XX”

Swiss Screamscape

For Netarterian contemplation, a project of the International Institute for Screamscape Studies:

For the holiday listening pleasure of Netarterians, four radio programs beautifully produced by a talented group of young producers at WBUR, including Conor Gillies. Lots of subtle mixage, montage and quiet experimentation with how to tell intricate stories without the infantilizing host hand-holding that pulls down so much of public radio in the USA:

Crazy Horse One-Eight

Commissioned for the 2014 Radio Dreamlands project, produced by the UK-based Radio Arts.

For information on broadcast or other rights, contact GW: gregorywhitehead(at)

Google Image Search API


Google image search parameters

Here are some useful documents if, as a developer, you want to use the Google Image Search API.

I used the Google Image Search API in an earlier piece called dbCinema, but this piece was done in Adobe Director. Since then, I’ve retooled to HTML5. So I looked into using the Image Search API with HTML5.

First, the official Google documentation of the Google Image Search API is at . It’s all there. Note that it’s “deprecated”: it won’t be free for developers much longer. Soon Google will charge $5 per 1000 queries. But the documentation I have put together does not use a key of any kind.

Perhaps the main thing to look at in the official Google documentation is the sample HTML file, in the section titled “The ‘Hello World’ of Image Search”. It automatically does a search for “Subaru STI” and displays the results. But wait: there is a bug in the sample file, so if you copy the code and paste it into a new HTML file, it doesn’t work. I expect this is simply to introduce the seemingly mandatory dysfunction almost invariably present in contemporary programming documentation. Unbelievable. I have corrected the bug in , which is almost exactly the same as “The ‘Hello World’ of Image Search” except it gets rid of “/image-search/v1/” in a couple of places.

After you look at that, look at . Type something in and then press the Enter or Return key. It will then do a Google image search and display at most 64 images; 64 is the max you can get per query. The source code is very much based on the official Google example. The image size is set to “medium” and the porn filter is turned on. Strange but true.

Finally, have a look at . This example shows you how to control the Google Image Search parameters. The source code is the same as the previous example except we see a bunch of dropdown menus. Additionally, there is an extra function in the source code, named setRestriction, which is called when the user selects a new value from one of the dropdown menus.

There is a dropdown menu for each of the controllable Image Search parameters except the sitesearch restriction, which is simple enough to add if you understand the others.
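To make the moving parts concrete, here is a hedged sketch of how the restriction calls fit together. The constant and method names (setRestriction, RESTRICT_IMAGESIZE, RESTRICT_SAFESEARCH, and so on) follow my reading of the old AJAX Search API documentation, so treat them as assumptions and check the official docs. The searcher and api parameters stand in for what new google.search.ImageSearch() and the google.search.ImageSearch namespace give you in the browser, which also keeps the function testable outside one:

```javascript
// A sketch, not this article's actual source: apply the "medium image size"
// and "safe search on" restrictions described above, then run a query.
// `searcher` stands in for a google.search.ImageSearch instance; `api` for
// the namespace holding its RESTRICT_* constants (names assumed from the
// deprecated AJAX Search API docs).
function runImageSearch(searcher, api, query, onResults) {
  searcher.setRestriction(api.RESTRICT_IMAGESIZE, api.IMAGESIZE_MEDIUM);
  searcher.setRestriction(api.RESTRICT_SAFESEARCH, api.SAFESEARCH_MODERATE);
  // The completion callback fires once a page of results is ready.
  searcher.setSearchCompleteCallback(null, function () {
    onResults(searcher.results || []);
  });
  searcher.execute(query);
}
```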

Anyway, that ought to give you what you need to get up and running with the Google Image Search API.


Off planet teleporter


Teleportation to the bottom of the ocean

I’ve been working on a new piece called Teleporter. The original version is here. The idea is that it’s a teleporter: you click the Teleport button and it takes you somewhere random on the planet. Usually on the planet. It uses the Google Maps API and takes you to a random panorama. In the new version, 4% of the time you see a random panorama made by my students; they were asked to explore the teleporter or teleportation literally or figuratively. So the new version is a mix of Google street view panoramas and custom street view panoramas.

I’m teaching a course in mobile app development at Emily Carr University of Art and Design in Vancouver, Canada. I wrote Teleporter to show the students some things you can do with Google Maps. I’d shown them Geoguessr, a simple but fun piece of work with Google Maps, and realized it was simple enough that I could probably write something related. I wrote code that generates a random latitude and longitude, then asked Google to give me the closest panorama to that location.
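The generator itself isn’t shown here, so the following is a sketch of one way to do it (the function name is mine). One wrinkle worth noting: picking the latitude uniformly between -90 and 90 would over-sample the polar regions, since lines of latitude shrink toward the poles; drawing it through an arcsine keeps the points uniform over the sphere’s surface.

```javascript
// A sketch of generating a random point on the globe (names are mine).
// Longitude is uniform in [-180, 180). Latitude is drawn via asin so that
// points are uniform over the sphere's surface rather than bunched at
// the poles.
function randomLatLng() {
  const lat = Math.asin(2 * Math.random() - 1) * 180 / Math.PI;
  const lng = Math.random() * 360 - 180;
  return { lat: lat, lng: lng };
}
```

With the Google Maps API, you would then ask the street view service for the nearest panorama to the generated point.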

So that worked fine, the students liked it, and I put a link to it on Facebook. A friend of mine, Bill Mullan, shared my link to Teleporter. Then a friend of his started a discussion about Teleporter on . A couple of days later I got an email from Adele Peters of who wanted to do a phone interview with me about Teleporter for an article she wanted to write. So we did. Her article came out a couple of days later; the same day, articles appeared in , from the UK, and some other online magazines. Articles quickly followed from various places such as  and, a digital art site from Paris. This resulted in tens of thousands of visitors to Teleporter.

Meanwhile, I decided to create a group project in the classroom out of Teleporter. The morning cohort was to build the Android app version of Teleporter, and the afternoon cohort the iOS version. That is wrapping up now. We should have an app soon. You can see the web version so far. It’s like the original version, mostly, except for a few things. The interface is more ‘app like’. Also, in the new version you see a student panorama 4% of the time. It’s meant to explore and develop the teleporter/teleportation theme. And there’s a Back button. The students designed the interface.

I want to mention a technical thing, because I didn’t see any documentation on it online, so perhaps it will help some poor developer who, like me, is trying to do something with a combination of Google street view panoramas and custom street view panoramas. I ran into a bug: once the user had viewed a student panorama, that is, a custom panorama, then thereafter, when they viewed a Google panorama and tried to use the links to travel along a path, they would be taken back to the previous student custom street view panorama.

The solution was the following JavaScript code:

if (panorama.panoProvider) {
  // In this case, the previous panorama was a student panorama.
  // We delete panorama.panoProvider or it causes navigation problems:
  // if it is present, then when the user goes to use the links in the
  // Google panorama, they simply get taken back to the student panorama.
  delete panorama.panoProvider;
}

As you eventually figure out, when you create a custom street view panorama, you need to register a custom panorama provider. You end up with a property of your panorama object named panoProvider. But this property has to be deleted if you then want the pano to show a Google street view panorama, or you get the bug I was experiencing.

Anyway, onward and upward.


Echolocations of the Self

Though humble in format, Christine Hume’s recently published chapbook Hum offers readers a deeply polyphonous enquiry into hums and humming that begins inside her own voice, body and childhood (including the breaking/wiring of her “jutting” jaw), then roams through philosophical and poetic territories that include everything from high school bleachers (hummer central) to Zug Island, and on to the Erinyes and Winnie-the-Pooh.

Contrary to certain fashionable academic philosophies that carry a false ring for anyone who actually works with voices creatively, Hume understands how the voice begins in the ear. Finding one’s own frequency amidst the din of the mother radio and other similarly dense signals requires a secretive gathering of one’s own strange and severe harmonies, a process that may become riddled with noise and interference, all of which then becomes embodied, in both life and text, through the endless echolocation of the self.

Below, a sequence of excerpts from this beautiful and brave little book, in the counter-vibrational zones of adolescent resistance against family suppression, dislocation and trauma (images added):