Author Archive

The Climate Baby Dilemma

For a growing number of young people, the climate crisis is affecting decisions about whether or not to have kids.

Duration: 44 min
Production year: 2021

https://gem.cbc.ca/the-climate-baby-dilemma/s01e01

High-Resolution HTML5 Canvases

You can make your canvases as high-res as the user's computer can stand. I've recently used canvases of 12000×6750 pixels to create bitmaps that, printed at 200dpi (which is sufficient for very high quality), yield a 60″x34″ print.

It’s very simple to do. The key is this. The CSS width and height values of a <canvas> determine only the size the canvas appears on the screen. The width and height attributes of the <canvas>, on the other hand, determine the size of the canvas that you work with in your JavaScript code. 

In the below code, the canvas appears on the screen to be 80×300, but if you use toDataURL to capture a jpg or png image of the canvas, it will be 1000×500.

<canvas width="1000" height="500" style="width:80px;height:300px;border:1px solid black;"></canvas>
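
Here's a minimal sketch of how you might use a canvas like the one above (the id "can" and the drawing are just for illustration): you draw in the full 1000×500 coordinate space even though the element displays at 80×300, and toDataURL captures the full-size bitmap.

var canvas = document.getElementById('can'); // assumes the canvas above has id="can"
var ctx = canvas.getContext('2d');
ctx.fillStyle = 'navy';
ctx.fillRect(0, 0, canvas.width, canvas.height); // fills 1000x500, not 80x300
ctx.fillStyle = 'white';
ctx.font = '200px Arial';
ctx.fillText('Hi-res', 50, 300);
var dataURL = canvas.toDataURL('image/png'); // a 1000x500 png, regardless of the CSS size
window.open(dataURL);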

Aleph Null 3.1 and later versions support creating high-res work, as you can see in a tutorial video on the matter.

Godel and Philosophy

It’s inspiring when math/logic leaps from the empyrean to the inner life. We see such a leap in Godel’s work: he actually showed that there exist truths that are not provable, truths that are true for no reason, thereby bringing the quotidian in more fruitful relation with the empyrean.

That there are truths which are not provable is something that we had intuited and even acknowledged for a very long time. Some things are true but not simply hard to prove–they’re impossible to prove. Yet definitely true. Courts of law acknowledge that proof, in matters of law, must extend only to the elimination of reasonable doubt. Not to fully demonstrable absolute proof. Because it can be and often is impossible to prove beyond all doubt what is true.

Prior to Godel's work, meta-mathematics, i.e., reasoning about mathematics and logic, had already produced some dramatic results, such as the revelation that relatively consistent non-Euclidean geometries were possible. This came as a bolt of lightning not only to mathematics but to philosophy, because even the great Kant–as well as many another philosopher–had offered, when pressed for an example of a-priori truth, the parallel postulate or the logically equivalent notion that the sum of the angles of a triangle is 180 degrees. But neither of these ideas is true in non-Euclidean geometries. In spherical geometry, for example, there are no parallel lines, and the sum of the angles of a triangle exceeds 180 degrees–a triangle with three right angles has an angle sum of 270 degrees. And this rocked the idea that a-priori truth exists at all, cuz the prime exemplars had been axioms of Euclidean geometry.

Godel’s work in meta-mathematics–which is now simply called logic–was at least as lightning-boltish as non-Euclidean geometry had been in the 1800’s.

When I studied math as an undergraduate, the most beautiful, profound work I studied was that of Georg Cantor concerning set theory and, in particular, infinite sets. Cantor actually proved things worth knowing about the infinite. And he did so in some of the most beautiful proofs you will ever encounter. Stunning work. He developed something called “the diagonal argument”. I won’t go into it, but it’s really killer-diller–and Godel uses that argument in his own proofs! He draws on this profound work by Cantor in his own incredible proof.

And, in turn, Turing, in the paper that laid the groundwork for the computer age, also uses Cantor's diagonal argument–and acknowledges Godel's work as having helped him on his road.

You see, the poetics of computer art has this rich philosophical, mathematical history among its parents. It has this in its genes. That history is important to computer art for very many reasons, but one of them is understanding where the art comes from.

“Oh! Blessed rage for order, pale Ramon,
The maker’s rage to order words of the sea,
Words of the fragrant portals, dimly-starred,
And of ourselves and of our origins,
In ghostlier demarcations, keener sounds.”
from The Idea of Order at Key West, Wallace Stevens

Leibniz and Computing

If you are interested in the history of the philosophical/logical/poetical dimensions of computing, you will be interested in Leibniz, the greater contemporary of Newton who independently created calculus. Martin Davis's astonishing history of just this dimension of computing, titled Engines of Logic, provides fascinating insight into the life and work of Leibniz and their relation to the history of computing. Leibniz is often thought of as the granddaddy of computing.

You will also be interested in Godel's work. The incompleteness theorems. For these suggested an answer in 1930 to a question Hilbert had posed in the early decades of the century, a question that is important in the development of the theory of computation. And Turing used Godel's work in his 1936 paper on the 'decision problem' to answer the question and, almost incidentally, introduce the notion of the Turing machine, the theoretical model of a computing device that is still used today.

I picked up an excellent book: Gottfried Wilhelm Leibniz: Philosophical Writings, edited by G.H.R. Parkinson. It's actually readable and understandable, neither of which is true of another translation I have of this work by Leibniz.

Anyway, what I wanted to point out tonight is that in the brilliant introduction by Parkinson to this volume, he says that Leibniz thought of what he called the “principle of sufficient reason” as one of his guiding lights.

“Leibniz said many times that reasoning is based on two great principles–that of identity or contradiction, and that of sufficient reason….The principle of sufficient reason says that every truth can be proved…”

Now, it is just this that Godel demonstrates to be false. Godel proved the necessary existence, in sufficiently powerful formal systems, of what he called "undecidable propositions", namely ones that are true but not provable. They're definitely true but definitely not provable–true, in that sense, for no reason.

It's the necessary existence of these sorts of propositions that complicates the entire structure of human knowledge. Leibniz's principle of sufficient reason was what he needed to lay the foundation for the possibility of machines that reason as far and wide as it is possible to reason, verily and forsooth into a perfectly known and understood universe.

If everything that is true is provable, and if devising proofs can be reduced to a mechanical procedure, then there is no impediment, except possibly time and an infinity of theorems, to a machine generating all the proofs of all the theorems.

Mathematics becomes, in such a world, something that we can leave to a machine.

But Godel showed that not everything that is true is provable (his example was a lot like the proposition “This proposition is not provable.”). If not everything that’s true is provable, then knowledge, perforce, must always be incomplete. Everything that’s true can’t be completely exhausted via a theorem-proving machine.

Anyway, it’s interesting that Leibniz’s “principle of sufficient reason” back in the 17th century, is so strongly related to Godel’s work.

Me and AI

I've been thinking about my art and AI. I don't use AI, as you may know–if one takes it as fundamental to AI that it learns, which seems reasonable as a necessary condition for AI.

There will be terrific works of art in which AI is beautiful and crucial. But there will be many more where it is an inconsequential fashion statement. It's funny that programmed art is so affected by dev fashion. For the sake of strong work, it's important not to let programmer fashion dictate how we pursue excellence.

AI is not a silver bullet for creating great generative art or computer art more broadly. AI has great promise, but sometimes approaches other than AI are preferable.

There's currently an AI gold-rush going on. I have seen a previous gold-rush: the dotcom gold-rush of 1996-2000. It's in the nature of gold-rushes that people flock to them, misunderstand them, and create silly work that is nonetheless praised.

For many years, I have created programmed, generative, computer art, a type of art that is often associated with AI techniques.

The Trumpling characters (and other visual projects) that I am able to create have about them, as you may have noted, a diversity/range and quality that challenges more than a few art AIs. As art. As character. As expressive, intriguing work. As fascist chimera / Don Conway at http://vispo.com/aleph4/images/jim_andrews/aleph/slidvid12 , for instance.

The thing is this: it takes me some doing to learn how to create those, both in the coding/JavaScript and then in the artistic use of Aleph Null in generating the visuals, the 'playing' of the instrument, as it were, cinematically. That takes constant upgrades and other additions to the source code, so that I can explore in new ways, continually. Or stop for a while and explore what is already present in the controls, the instrument.

Some of the algorithms I’ve developed will be developed further; my work is the creation of a “graphic synthesizer”–a term I believe I invented–a multi-brushed, multi-layered, multi-filled brushstroke where brushes have replaceable nibs and many many parameters are exposed to granular controls. dbCinema was also a “graphic synthesizer” and a “langu(im)age processor” (another term I made up). I started dbCinema around 2005. I started Aleph Null in 2011. It’s 2019 now. I’ve been creating graphic synthesizers for some time now.

If I understand correctly, what AI has to offer in this situation is strong animation of the parameters. Its learning would be in creating better and better animations without cease. Well, no, not really. Not 'without cease'. It could be cyclic. And probably is.

It's as good as the training data–and what is done with the training data: which images are grouped together, and how they're grouped and positioned, and so on.

Instead of using AI, my strategy is this:

  1. Create an instrument of generative art that allows me and other users of the tool to learn how to create strong art with Aleph Null. There is learning going on, but it’s by humans.
  2. Expose the most artistically crucial parameters (in the below architecture) in interactive controls–to get human decisions operating on some of those parameters–especially my own decisions–that is, Aleph Null and dbCinema are instruments that one plays.
  3. A control is allowed only if you can see the difference when you crank on it.
  4. The architecture: a 'brush + nib' paradigm, and layers, in an animation of frames (a rough sketch in code follows this list).
  5. A brushstroke: a shape mask to give the brushstroke its shape + a fill of the resulting shape. Any shape. An animated shape mask, possibly, so the shape changes + dynamic, somewhat random fills chosen/sampled from a folder of images–or a folder of videos, eventually. There are text nibs, also, so that a brushstroke can be a letter or word or longer string of text which is possibly filled with samples of images.
  6. The paint that a brush uses can be of different types: a folder of images; a folder of videos; a complex, dynamic gradient; a color. A brush fills itself with paint from its paint palette (the brush samples from its paint source) and then renders at least one brushstroke per frame.
  7. Each brush has a path. Can be random, or exotic-function-generated. Can be a mouse path–or finger path.
  8. A brush is placed in and often moved around in a layer. Can be moved from layer to layer.
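
To make the architecture concrete, here is a rough, much-simplified sketch in JavaScript of the brush + nib + layer idea. This is not Aleph Null's actual source; the names (Layer, Brush, nib, paint, path) and the details are illustrative only.

function Layer(width, height) {
  // Each layer is its own offscreen canvas, composited into the visible canvas each frame.
  this.canvas = document.createElement('canvas');
  this.canvas.width = width;
  this.canvas.height = height;
  this.ctx = this.canvas.getContext('2d');
}

function Brush(layer, nib, paint, path) {
  this.layer = layer; // the layer the brush draws into
  this.nib = nib;     // returns a Path2D shape mask for a given frame
  this.paint = paint; // fills the current shape: image sample, gradient, or color
  this.path = path;   // returns the brush's position for a given frame
}

Brush.prototype.stroke = function (frame) {
  // At least one brushstroke per frame: position the nib, clip to its shape, fill with paint.
  var ctx = this.layer.ctx;
  var pos = this.path(frame);
  ctx.save();
  ctx.translate(pos.x, pos.y);
  ctx.clip(this.nib(frame)); // the shape mask
  this.paint(ctx, frame);    // the fill, sampled from the brush's paint source
  ctx.restore();
};

Each animation frame, every brush renders its stroke(s) into its layer, and the layers are then drawn, bottom to top, into the visible canvas.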

Where could AI help Aleph Null? One could either concentrate on making Aleph Null more autonomous or use/create AI that acts as a kind of assistant to the human player of the instrument. 

If the former, i.e., if one concentrates on creating/using AI that makes Aleph Null more autonomous as an art machine–more autonomous from human input–then usually that requires an evaluation function, something that evaluates the quality of an image created by Aleph Null or used by Aleph Null, in order to ‘learn’ how to create quality work. Good data on which to base an evaluation function is difficult to come by. You could use the number of ‘likes’ an image acquires, for instance, if you can get that data from Facebook or wherever. Getting your audience to rate things is another way, which usually doesn’t work very well. 

My strategy, instead of this sort of AI, will be to create ‘gallery mode’. Aleph Null won’t be displayed in galleries as an interactive piece until ‘gallery mode’ has been implemented. There’ll be ‘gallery mode’ and ‘interactive mode’. Currently, Aleph Null is always in ‘interactive mode’. One of the pillars of ‘gallery mode’ is the ability to save configurations. If you like the way Aleph Null is looking, at any time, you can save that configuration. And you can ‘play’ it later, recall it. And you can create ‘playlists’ that string together different saved configurations. We normally think of a playlist as a sequence of songs to be played. This is much the same thing, only one is playing a sequence of Aleph Null configurations.

A configuration is a brushSet, i.e, a set of brushes that are configured in such and such a way. 

Playlists will allow Aleph Null to display varietously without the gallery viewer having to interact with Aleph Null. Currently, in ‘interactive mode’, the only way Aleph Null will display varietously is if you get in there and change it yourself. 

When you save a configuration, you also assign it a duration to play. So that when you play a playlist, which is a sequence of configurations, each configuration plays for a certain duration before transitioning to the next configuration.

When Aleph Null is displayed in a gallery, by default, it will be in ‘gallery mode’. It will remain in gallery mode, displaying a playlist of configurations, until the viewer clicks/touches Aleph Null. Then Aleph Null changes to ‘interactive mode’, i.e., it accepts input from the viewer and doesn’t play the playlist anymore. It automatically reverts to ‘gallery mode’ when it has not had any user input for a few minutes.
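
Here's a rough sketch, in JavaScript, of how a playlist of saved configurations and the gallery/interactive switch might hang together. This is not Aleph Null's actual code; applyConfiguration, the configuration objects, and the three-minute idle timeout are assumptions for illustration.

function applyConfiguration(config) {
  // Stub: in the real program this would restore a saved brushSet.
  console.log('now playing configuration: ' + config.name);
}

// A playlist is a sequence of saved configurations, each with a duration in seconds.
var playlist = [
  { config: { name: 'configuration A' }, duration: 45 },
  { config: { name: 'configuration B' }, duration: 90 }
];

var playlistTimer = null;
var idleTimer = null;

function enterGalleryMode() {
  playNext(0);
}

function playNext(i) {
  var item = playlist[i % playlist.length];
  applyConfiguration(item.config);
  // Move on to the next configuration after this one's duration.
  playlistTimer = setTimeout(function () { playNext(i + 1); }, item.duration * 1000);
}

function enterInteractiveMode() {
  clearTimeout(playlistTimer); // stop the playlist; the viewer is now playing the instrument
  clearTimeout(idleTimer);
  // Revert to gallery mode after a few minutes without input.
  idleTimer = setTimeout(enterGalleryMode, 3 * 60 * 1000);
}

// Any click or touch puts Aleph Null in interactive mode and resets the idle timer.
document.addEventListener('pointerdown', enterInteractiveMode);

enterGalleryMode(); // gallery mode is the default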

This idea of saving configurations and being able to play playlists, which are sequences of saved configurations/brushSets, is something I implemented in the desktop version of dbCinema. And this seems more supportive of creating quality art than an AI evaluation-learning model. Better because humans are saving things they like rather than software guessing/inferring what is likable.

Anyway, years ago, I decided that I probably wouldn't be using AI cuz I want to spend my time really making art and art-making software. One can spend a great deal of time programming a very small detail of an AI system. My work is not in AI; it's in art creation. The only possibility for me of incorporating AI into my work is if I can use it as a web service, i.e., I send an AI service some data and get the AI to respond to the data. Rather than me having to write AI code.

But, so far, I think my approach gives me better results than what I’d get going an AI route. The proof is in the pudding.

 

Some correspondence with my pal Ted Warnell

Here is some correspondence between me and the marvelous net artist Ted Warnell.

Oppen Do Down–first Web Audio API piece

In my previous post, I made notes about my reading of and preliminary understanding of Chris Wilson’s article on precision event scheduling in the Web Audio API–in preparation to create my first Web Audio API piece. I’ve created it. I’d like to share it with you and talk about it and the programming of it.

Oppen Do Down, an interactive audio piece

The piece is called Oppen Do Down. I first created it in the year 2000 with Director. It was viewable on the web via Shockwave, a Flash-like plugin–sort of Flash’s big brother. But hardly any contemporary browsers support the Shockwave plugin anymore–or any other plugins, for that matter–the trend is toward web apps that don’t use plugins at all but, instead, rely on newish native web technologies such as the Web Audio API, which requires no plugins to be installed before being able to view the content. The old Director version is still on my site, but nobody can view it anymore cuz of the above. I will, however, eventually release a bunch of downloadable desktop programs of my interactive audio work.

You can see the Director version of Oppen Do Down in a video I put together not long ago on Nio, Jig-Sound, and my other heap-based interactive audio work.

I sang/recorded/mixed the sounds in Oppen Do Down myself in 2000 using an old multi-track piece of recording software called Cakewalk. First I recorded a track of me snapping my fingers. Then I played that back over headphones, looping, while I recorded a looping vocal track. Then I’d play it back. If I liked it I’d keep it. Then I’d play the finger snapping and the vocal track back over headphones while I recorded another vocal track. Repeat that for, oh, probably about 60 or 70 tracks. Then I’d pick a few tracks to mix down into a loop. Most of the sounds in Oppen Do Down are multi-track.

As you can hear if you play Oppen Do Down, the sounds are synchronized. You click words to toggle their sounds on/off. The programming needs to be able to download a bunch of sound files, play them on command, and keep the ones that are playing synchronized. As you turn sounds on, the sounds are layered.

As it turns out, the programming of Oppen Do Down was easier in the Web Audio API than it was in Director. The reason for that is all to do with the relative deluxeness of the Web Audio API versus Director’s less featureful audio API.

Maybe the most powerful feature of the Web Audio API that Director didn’t offer is the high-performance clock. It’s high-performance in two ways. It has terrific resolution, apparently. It’s accurate to greater precision than 1 millisecond; you can use it to schedule events right down to the level of the individual sound sample, if you need that sort of accuracy. And the Web Audio API does indeed support getting your hands on the very data of the individual samples, if you need that sort of resolution. But the second way in which the high-performance clock is high-performance is that it stops for nothing. Which isn’t how it normally works with timers and clocks programmers use. They’re usually not the highest-priority processes in the computer, so they can get bumped by what the operating system or even the browser construes as more important processes. Which can result in inaccuracies. Often these inaccuracies are not big enough to notice. But in Oppen Do Down and pretty much all other rhythmic music, we need accurate rhythmic timing.

Director didn’t offer such a high-performance clock. What it had was the ability to insert cue-points into sounds. And define a callback handler that could execute when a cue-point was passed. That was how you could stay in touch with the actual physical state of the audio, in Director. The Web Audio API doesn’t let you insert cue-points in sounds, but you don’t need to. You can schedule events, like the playing of sounds, to happen in the time coordinate system of the high performance clock.

This makes synchronization more or less a piece of cake in the Web Audio API. Because you can look at the clock any time you want with great accuracy (AudioContext.currentTime is how you access the clock) and you can schedule sounds to start playing at time t and they indeed start exactly at time t. And the scheduling strategy Chris Wilson advocates, which I talked about in my previous post, whereby you schedule events a little in advance of the time they need to happen, works really well.
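
For example, this sort of thing (a sketch, not code from Oppen Do Down; 'buffer' stands for an AudioBuffer you've already downloaded and decoded):

// Schedule a sound against AudioContext.currentTime.
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();

function playAt(buffer, when) {
  var source = audioCtx.createBufferSource();
  source.buffer = buffer;
  source.connect(audioCtx.destination);
  source.start(when); // 'when' is in the currentTime coordinate system, in seconds
}

// e.g. start a sound exactly a tenth of a second from now:
// playAt(buffer, audioCtx.currentTime + 0.1);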

There are other features the Web Audio API has that Director didn’t. But, then, Director was actually started in 1987, whereas the Web Audio API has only been around for a few years as of this date in 2018. You can synthesize sounds in the browser, though that isn’t my interest; I’m more interested in recording vocal and other sounds and doing things with those recorded sounds. You can also process live input from the microphone, or from video, or from a remote stream. And you can create filters. And probably other things I don’t know anything about, at this point.

Anyway, Oppen Do Down links to two JavaScript files. One, oppen.js, is for this particular app and its particular interface. The other one, sounds.js, is the important one for understanding sound in Oppen Do Down. The sounds.js file defines the Sounds constructor, from top to bottom of sounds.js. In oppen.js, we create an instance of it:

gSounds = new Sounds(['1.wav','2.wav','3.wav','4.wav','5.wav','6.wav']);
gSounds.notifyMeWhenReady(soundsAreLoaded);

Actually there are 14 sounds, not 6, but just to make it prettier on this page I deleted the extra 8. I used wav files in my Director work. I was happy to see that the Web Audio API could use them. They are uncompressed audio files. Also, unlike mp3 files, they do not pose problems for seamless looping; mp3 files insert silence at the ends of files. I hate mp3 files for that very reason. Well, I don’t hate them. I just show them the symbol of the cross when I see them.

The gSounds object will download the sounds 1.wav, etc, and will store those sounds, and offers an API for playing them.

‘soundsAreLoaded’ is a function in oppen.js that gets called when all the sounds have been downloaded and are ready to be played.

gSounds adds each sound (1.wav, 2.wav, … 14.wav) via its ‘add’ method, which creates an instance of the Sound (not Sounds) constructor for each sound. The newly created Sound object then downloads its sound and, when it’s downloaded, the ‘makeAvailable’ function puts the Sound object in the pAvailableSounds array.

When all the sounds have been downloaded, the gSounds object runs a function that notifies subscribers that the sounds are ready to be played. At that point, the program makes the screen clickable; the listener has to click the screen to initiate play.
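
The download-and-decode step itself is the standard Web Audio recipe: fetch the file, decode it into an AudioBuffer, and stash the buffer so it's ready to play. Here's a sketch of roughly what each Sound object does (not the actual sounds.js code; the fetch-based loading and the callback are assumptions):

// Sketch of downloading and decoding one sound file.
function loadSound(audioCtx, url, onLoaded) {
  fetch(url)
    .then(function (response) { return response.arrayBuffer(); })
    .then(function (arrayBuffer) { return audioCtx.decodeAudioData(arrayBuffer); })
    .then(function (audioBuffer) {
      onLoaded(audioBuffer); // e.g. makeAvailable: store the decoded buffer in pAvailableSounds
    });
}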

It’s important that no sounds are played until the user clicks the screen. If it’s done this way, the program will work OK in iOS. iOS will not play any sound until the user clicks the screen. After that, iOS releases its death grip on the audio and sounds can be played. Apparently, at that point, if you’re using the Web Audio API, you can even play sounds that aren’t triggered by a user click. As, of course, you should be able to, unless Apple is trying to kill the browser as a delivery system for interactive multimedia.

I’ve tested Oppen Do Down on Android, the iPad, the iPhone, and on Windows under Chrome, Edge, Firefox and Opera. Under OSX, I’ve tested it with Chrome, Safari and Firefox. It runs on them all. The Web Audio API seems to be well-supported on all the systems I’ve tried it on.

When, after the sounds are loaded, the user clicks the screen to begin playing with Oppen Do Down, we find the sound we want to play initially. Its name is ‘1’. It’s the sound associated with the word ‘badly’. We turn the word ‘badly’ blue and we play sound ‘1’. We also make the opening screen invisible and display the main screen of Oppen Do Down (which is held in the div with id=’container’).

var badly=gSounds.getSound('1'); // the sound associated with the word 'badly'
document.getElementById('1').style.color="#639cff"; // turn the word 'badly' blue
gSounds.play(badly); // play sound '1'
document.getElementById('openingScreen').style.display='none'; // hide the opening screen
document.getElementById('container').style.display='block'; // show the main screen

The ‘gSounds.play’ method is, of course, crucial to the program cuz it plays the sounds.

It also checks to see if the web worker thread is working. This separate thread is used, as in Chris Wilson’s metronome program, to continually set a timer that times out just before sounds stop playing, so sounds can be scheduled to play. If the web worker isn’t working, ‘gSounds.play’ starts it working. Then it plays the ‘1’ sound.

Just before ‘1’ finishes playing–actually, pLookAhead milliseconds before it finishes–which is currently set to 25–the web worker’s timer times out and it sends the main thread a message to that effect. The main thread then calls the ‘scheduler’ function to schedule the playing of sounds which will start playing in pLookAhead milliseconds.

If the listener did nothing else, this process would repeat indefinitely. Play the sound. The worker thread’s timer ticks just before the sound finishes, and then sounds are scheduled to play.

But, of course, the listener clicks on words to start/stop sounds. When the listener clicks on a word to start the associated sound, ‘gSounds.play’ checks to see how far into the playing of a loop we are. And it starts the new sound so that it’s synchronized with the playing sound. Even if there are no sounds playing, the web worker is busy ticking and sending messages at the right time. So that new sounds can be started at the right time.
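
The synchronization arithmetic is roughly this: since the loops are all the same length, you work out from AudioContext.currentTime how far into the current loop cycle you are, and start the new sound so that it lands in step with what's already playing. A sketch (not the actual Oppen Do Down code; loopStartTime and loopDuration are assumptions, and the real code schedules via the worker's tick):

// Start a newly toggled-on sound in sync with the loop that's already playing.
// loopStartTime: the currentTime value at which the current loop cycle began.
// loopDuration: the length of the loops, in seconds.
function startInSync(audioCtx, buffer, loopStartTime, loopDuration) {
  var timeIntoLoop = (audioCtx.currentTime - loopStartTime) % loopDuration;
  var source = audioCtx.createBufferSource();
  source.buffer = buffer;
  source.loop = true;
  source.connect(audioCtx.destination);
  // Start now, but offset into the buffer by how far the loop has already played.
  source.start(audioCtx.currentTime, timeIntoLoop);
  return source;
}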

Anyway, that’s a sketch of how the programming in Oppen Do Down works.

Chris Joseph gave me some good feedback. He noticed that as he added sounds to the mix, the volume increased and distortion set in after about 3 or 4 sounds were playing. He suggested that I put in a volume control to control the distortion. He further suggested that each sound have a gain node and there also be a master gain node, so that the volume of each sound could be adjusted.

The idea is that as the listener adds sounds, the volume remains constant. Which is what the ‘adjustVolumes’ function is about. It works well.
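
Here's a sketch of that gain-node arrangement (not the actual code; the 1/n scaling is just one simple way an adjustVolumes function might keep the overall level roughly constant):

// Each sound gets its own GainNode, all feeding a master GainNode.
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var masterGain = audioCtx.createGain();
masterGain.connect(audioCtx.destination);

function makeSoundChain(buffer) {
  var source = audioCtx.createBufferSource();
  var gain = audioCtx.createGain();
  source.buffer = buffer;
  source.loop = true;
  source.connect(gain);
  gain.connect(masterGain);
  return { source: source, gain: gain };
}

function adjustVolumes(playingChains) {
  // Scale each sound by 1/n so the summed level stays roughly constant as sounds are added.
  var n = Math.max(playingChains.length, 1);
  playingChains.forEach(function (chain) {
    chain.gain.gain.value = 1 / n;
  });
}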

I am happy with my first experiment with the Web Audio API. Onward and upward.

However, it’s hard to be happy with some of the uses that the Web Audio API is being put to. The same is true of the Canvas API and the WebRTC API. And these, to me, are the three most exciting new web technologies. But, of course, when new, interesting, powerful tools arise on the web, the forces of dullness will conspire to use them in evil ways. These are precisely the three technologies being used to ‘fingerprint‘ and track users on the web. This is the sort of crap that makes everything a security threat these days.

Event Scheduling in the Web Audio API

This is the first of a two-part essay on event scheduling in the Web Audio API and an interactive audio piece I wrote (and sang) called Oppen Do Down. There’s a link to part two at the bottom.


I’ve been reading about the Web Audio API concerning synchronization of layers and sequences of sounds. Concerning sound files, specifically. So that I can work with heaps of rhythmic music.

A heap is the term I use to describe a bunch of audio files that can be interactively layered and sequenced as in Nio and Jig Sound, which I wrote in Director, in Lingo. The music remains synchronized as the sound icons are interactively layered and sequenced. The challenge of this sort of programming is coming up with a way to schedule the playing of the sound files so as to maintain synchronization even when the user rearranges the sound icons. When I wrote Nio in 2000, I wrote an essay on how I did it in Nio; this essay became part of the Director documentation on audio programming. The approach to event scheduling I took in Nio is similar to the recommended strategy in the Web Audio API.

Concerning the Web Audio API, first, I tried basically the simplest approach. I wanted to see if I could get seamless looping of equal-duration layered sounds simply by waiting for a sound's 'ended' event. When the 'ended' event occurred for a specific one of the several sounds, I played the sounds again. This actually worked seamlessly in Chrome, Opera and Edge on my PC. But not in Firefox. Given the failure of Firefox to support this sort of strategy, some other strategy is required.

The best doc I’ve encountered is A Tale of Two Clocks–Scheduling Web Audio With Precision by Chris Wilson of Google. I see that Chris Wilson is also one of the editors of the W3C spec on the Web Audio API. So the approach to event scheduling he describes in his article is probably not idiosyncratic; it’s probably what the architects of the Web Audio API had in mind. The article advocates a particular approach or strategy to event scheduling in the Web Audio API. I looked closely at the metronome he wrote to demonstrate the approach he advances in the article. The sounds in that program are synthesized. They’re not sound files. Chris Wilson answered my email to him in which I asked him if the same approach would work for scheduling the playing of sound files. He said the same approach would work there.

Basically Wilson’s strategy is this.

First, create a web worker thread. This will work in conjunction with the main thread. Part of the strategy is to use this separate thread that doesn’t have any big computation in it for a setTimeout timer X whose callback Xc regularly calls a schedule function Xcs, when needed, to schedule events. X has to be set to timeout sufficiently in advance of when sounds need to start that they can start seamlessly. Just how many milliseconds in advance it needs to be set will have to be figured out with trial and error.

But it’s desirable that the scheduling be done as late as feasibly possible, also. If user interaction necessitates recalculation and resetting of events and other structures, probably we want to do that as infrequently as possible, which means doing the scheduling as late as possible. As late as possible. And as early as necessary.

When we set a setTimeout timer to time out in x milliseconds, it doesn’t necessarily execute its callback in x milliseconds. If the thread or the system is busy, that can be delayed by 10 to 50 ms. Which is more inaccuracy than rhythmic timing will permit. That is one reason why timeout X needs to timeout before events need to be scheduled. Cuz if you set it to timeout too closely to when events need to be scheduled, it might end up timing out after events need to be scheduled, which won’t do—you’d have audible gaps.

Another reason why events may need to be scheduled in advance of when they need to happen is some browsers—such as Firefox—may require some time to get it together to play a sound. As I noted at the beginning, Firefox doesn’t support seamless looping via just starting sounds when they end. That means either that the end event’s callback happens quite a long time after the sound ends (improbable) or sounds require a bit of prep by Firefox before they can be played, in some situations.

So we need to schedule events a little before those events have to happen. We regularly set a timer X (using setTimeout or setInterval) to timeout in our web worker thread. When it does, it posts a message to the main thread saying it’s time to see if events need scheduling.
If some sounds do need to be scheduled to start, we schedule them now, in the main thread.

But to understand that process, it’s important to understand the AudioContext’s currentTime property. It’s measured in seconds from a 0 value when audio processing in the program begins. This is a high-precision clock. Regardless of how busy the system is, this clock keeps accurate time. Also, when you pause the program’s execution with the debugger, currentTime keeps changing. currentTime stops for nothing! The moral of the story is we want to schedule events that need rhythmic accuracy with currentTime.

That can be done with the .start(when, offset, duration) method. The ‘when’ parameter “should be specified in the same time coordinate system as the AudioContext’s currentTime attribute.” If we schedule events in that time coordinate system, we should be golden, concerning synchronization, as long as we allow for browsers such as Firefox needing enough prep time to play sounds. How much time do such browsers require? Well, I’ll find out in trials, when I get my code running.
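
Put together, Wilson's pattern looks roughly like this (a sketch of the idea, not his code; the 25ms tick, the 100ms lookahead window, the worker.js file and the scheduleSound stub are illustrative assumptions):

// In the worker (worker.js): tick regularly and tell the main thread.
// setInterval(function () { postMessage('tick'); }, 25);

// In the main thread: on each tick, schedule whatever falls inside the lookahead window.
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var scheduleAheadTime = 0.1; // seconds of lookahead
var loopDuration = 2.0;      // example: two-second loops
var nextEventTime = 0;       // when, in currentTime terms, the next sound is due

function scheduleSound(when) {
  // Stub: in a real program, create a buffer source and call source.start(when).
}

var worker = new Worker('worker.js');
worker.onmessage = function () {
  while (nextEventTime < audioCtx.currentTime + scheduleAheadTime) {
    scheduleSound(nextEventTime);
    nextEventTime += loopDuration; // advance to the next due time
  }
};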

The approach Chris Wilson recommends to event scheduling is similar to the approach I took in Nio and Jig Sound, which I programmed in Lingo. Again, it was necessary to schedule the playing of sounds in advance of the time when they needed to be played. And, again, that scheduling needed to be done as late as possible but as early as necessary. Also, it was important to not rely solely on timers but to ground the scheduling in the physical state of the audio. In the Web Audio API, that’s available via the AudioContext’s currentTime property. In Lingo, it was available by inserting a cuePoint in a sound and reacting to an event being triggered when that cuePoint was passed. In Nio and Jig-Sound, I used one and only one silent sound that contained a cuePoint to synchronize everything. That cuePoint let me ground the event scheduling in a kind of absolute time, physical time, which is what the Web Audio API currentTime gives us also.

Part 2: Oppen Do Down–First Web Audio Piece

Chris Joseph: Amazing Net Art from the Frontier

I’ve been following Chris Joseph‘s work as a net artist since the late 1990’s when he was living in Montréal–he’s a Brit/Canadian living now in London. He was on Webartery, a listserv I started in 1997; there was great discussion and activity in net art on Webartery, and Chris was an important part of it then, too. I visit his page of links to his art and writing several times a year to see what he’s up to.

I recently wrote a review of Sprinkled Speech, an interactive poem of Chris’s, the text of which is by our late mutual friend Randy Adams.

More recently–like yesterday–I visited #RiseTogether, shown below, which I’d somehow missed before. This is a 2014 piece by Chris. We see a map, the #RiseTogether hash tag, a red line and a short text describing issues, problems, possibilities, groups, etc. Every few seconds, the screen refreshes with a new map, red line, and description.

Chris Joseph’s #RiseTogether

I sent Chris an email about it:

Hey Chris,

I was looking at http://babel.391.org/remix_runran/2014/risetogether.html

I see you're using Google maps.

What's with the red line?

What is #RiseTogether ? 

The language after "#RiseTogether"--where does that come from?

ja

Chris’s response was so interesting and illuminating I thought I’d post it here. Chris responded:

Hi Jim,

Originally this phrase, as a hashtag, was used by the Occupy Wall Street anti-capitalism movement, but I think since then it has been adopted/co-opted by many other movements including (US) football teams. The starting article and the text source for this piece was http://occupywallstreet.net/story/what-way-forward-popular-movement-2014 . 

It was one of three anti-capitalist pieces I did around that time, which was pretty much at the beginning of my investigating what could be done outside of Adobe Flash, along with http://babel.391.org/remix_runran/2014/capitalist-manifesto.html and http://babel.391.org/remix_runran/2014/thedaywefightback.html . And thematically these hark back to one of my first net art pieces, which isn't linked up on my art page at the moment, http://chrisjose.ph/quebec/ 

The red line was for a few reasons, I think. Firstly to add some visual interest, and additional randomisation, into what would be a fairly static looking piece otherwise. But I find the minimalism of a line quite interesting, as the viewer is asked to actively interpret the meaning of that line. For me it's a dividing line - between haves and have nots, or the 1% and 99%, or any of those binary divisions that the protesters tend to use. Or it could suggest a crossing out - perhaps (positively) of a defunct economic philosophy, or (negatively) of the opportunities of a geographical area as a result of that economic philosophy. 

All three of those pieces have a monochromatic base, but only two have the red, which feels quite angry, or reminiscent of blood, of which there was quite a bit in the anti-capitalist protests.

I used the same technique again in this piece: http://babel.391.org/remix_runran/2015/plague-vectors.html - but here the lines are much more descriptive, as an indication of the supposed 'plague vectors'. 

----------------------
Chris Joseph
@cj391
chrisjoseph.org

globalCompositeOperation in Net Art

Ted, Jim and globalCompositeOperation

Ted Warnell and I have been corresponding about net art since 1996 or 97. We've both been creating net art using the HTML 5 canvas for about the last 6 years; we show each other what we're doing and talk about canvas-related JavaScript via email. He lives in Alberta and I live in British Columbia.

Ted’s canvas-based stills and animations can be seen at warnell.com/mona. My canvas-based work includes Aleph Null versions 1.0 and 2.0 at, respectively, vispo.com/aleph and vispo.com/alephTouch.

One of the things we’ve talked about several times is globalCompositeOperation—which has got to be a candidate for longest-name-for-a-JavaScript-system-variable. The string value you give this variable determines “the type of compositing operation to apply when drawing new shapes” (mozilla.org). Or, as w3schools.com puts it:

“The globalCompositeOperation property sets or returns how a source (new) image is drawn onto a destination (existing) image.

Source image = drawings you are about to place onto the canvas.

Destination image = drawings that are already placed onto the canvas.”

The reason we’ve talked about this variable and its effects is because globalCompositeOperation turns out to be important to all sorts of things in creating animations and stills that you wouldn’t necessarily guess it had anything to do with. It’s one of those things that keeps on popping up too much to be coincidental. The moral of the story seems to be that globalCompositeOperation is an important, fundamental tool in creating animations or stills with the canvas.

In this article, we’d like to show you what we’ve found it useful for. We’ll show you the art works and how we used globalCompositeOperation in them to do what we did with it.

Ted’s uses of globalCompositeOperation tend to be in the creation of effects. Mine have been for masking, fading to transparency, and saving a canvas to png or jpg.

Digital Compositing

"Compositing" is an interesting word. It's got "compose" and "composite" in it. "Compositing" is composing by combining images into composite images.

Keep in mind that each pixel of a digital image has four channels or components. The first three are color components. A pixel has a 'red' value, a 'green' value, and a 'blue' value. These are integers between 0 and 255. They combine to create a single color. The fourth channel or component is called the alpha channel. It determines the opacity of the pixel, and we can think of it as a number between 0 and 1 (in canvas pixel data it's stored, like the color channels, as an integer from 0 to 255). If a pixel's alpha value is 1, the pixel is fully opaque. If it's 0, the pixel is totally transparent. Intermediate values give the pixel intermediate opacity.

The default value of globalCompositeOperation is “source-over”. When that’s the value, when you paste a source image into a destination canvas, you get what you’d expect: the source is placed overtop of the destination.

There are 26 possible values for globalCompositeOperation, which are described at mozilla.org. The first 8 of the options, shown below, are for compositing via the alpha channel. Most of the remaining values are blend modes, along with a few further compositing values such as "lighter", "copy" and "xor". You may be familiar with blend modes in Photoshop; they determine how the colors of two layers combine and include values such as "multiply", "screen", "darken", "lighten" and so on. Blend modes operate on the color channels of the two layers.

But the first 8 values shown below operate on the alpha channels of the two images. They don’t change the colors. They determine what shows up in the result, not what color it is. The first 8 values in the below diagram can be thought of as a kind of Venn diagram of image compositing. There’s the blue square (destination) and the red circle (source). There are 3 sections to that diagram:

  • A: the top left part of the blue square that doesn’t intersect with the red circle;
  • B: the section where the square and circle intersect;
  • C: and the bottom right section of the red circle that doesn’t intersect with the blue square.

Section A can be blue or be invisible; section B can be blue, red, or invisible; section C can be red or invisible. That makes for 12 possibilities, but some of those 12 possibilities, such as when everything is invisible, are of no use. When the useless possibilities are eliminated, we’re left with the first 8 shown below. These possibilities form the basic sort of Venn logic of image compositing. You see this diagram not only with regard to JavaScript but in image compositing regarding other languages.

The first 8 values for globalCompositeOperation operate on the alpha channels of the source (red) and destination (blue)
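
If you want to try the 8 values yourself, here's a minimal sketch of that square-and-circle demo (the canvas id and the colors are just for illustration):

// Blue square (destination), red circle (source), composited with one of the 8 alpha values.
var canvas = document.getElementById('canvas');
var ctx = canvas.getContext('2d');

ctx.fillStyle = 'blue'; // the destination: a blue square
ctx.fillRect(20, 20, 100, 100);

ctx.globalCompositeOperation = 'source-in'; // try any of the 8 values here

ctx.fillStyle = 'red'; // the source: a red circle
ctx.beginPath();
ctx.arc(120, 120, 60, 0, 2 * Math.PI);
ctx.fill();

ctx.globalCompositeOperation = 'source-over'; // restore the default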

What is “compositing”? We read the following definition at Wikipedia:

Compositing is the combining of visual elements from separate sources into single images, often to create the illusion that all those elements are parts of the same scene. Live-action shooting for compositing is variously called “chroma key”, “blue screen”, “green screen” and other names. Today, most, though not all, compositing is achieved through digital image manipulation. Pre-digital compositing techniques, however, go back as far as the trick films of Georges Méliès in the late 19th century; and some are still in use. All compositing involves the replacement of selected parts of an image with other material, usually, but not always, from another image. In the digital method of compositing, software commands designate a narrowly defined color as the part of an image to be replaced. Then the software replaces every pixel within the designated color range with a pixel from another image, aligned to appear as part of the original. For example, one could record a television weather presenter positioned in front of a plain blue or green background, while compositing software replaces only the designated blue or green color with weather maps.

Whether the compositing is operating on the alpha or the color channels, compositing is about combining images via their color and/or alpha channels.

As we see at rekim.com, different browsers treat some of the values of globalCompositeOperation differently, which can make for dev headaches and gnashing of teeth but, for the most part, globalCompositeOperation works OK cross-browser and cross-platform.

Jim Andrews: Masking (source-atop)

Masking is when you fill a shape, such as a letter, with an image. The shape is said to mask the image; the mask hides part of the image. Masking was crucial to an earlier piece of software I wrote called dbCinema, a graphic synthesizer I wrote in Lingo, the language of Adobe Director. The main idea was of brushes/shapes that sampled from images and used the samples as a kind of ‘paint’. My more recent piece Aleph Null 2.0, written in JavaScript, can do some masking, such as the sort of thing you see in SimiLily—and I’ll be developing more of that sort of thing in Aleph Null.

Let's look at a simple example. You see it below. You can also see a copy of it at vispo.com/alephTouch/test/masking7.html, where it's easier to view the source code. There's a 300×350 canvas with a red border. We draw an 'H' on the canvas. We fill it with any color–red in this case. Then we set globalCompositeOperation = 'source-atop'. Then we draw a bitmap of a Kandinsky painting into the canvas, but the only part of the Kandinsky that we see is the part that fills the 'H'. Because when you set globalCompositeOperation = 'source-atop' and then draw an image onto the canvas, it only draws where the canvas already has non-transparent pixels.

W3schools.com states it this way:

“source-atop displays the source image on top of the destination image. The part of the source image that is outside the destination image is not shown.”

In other words, first you draw on the canvas to create the “destination” image (the ‘H’). Then you set globalCompositeOperation = ‘source-atop’. Then you draw the “source” image on the canvas (the Kandinsky).

Masking with globalCompositeOperation = ‘source-atop’

The most relevant code in the above example is shown below:


function drawIt(oldValue) {
  context.font = 'bold 400px Arial';
  context.fillStyle = 'red';
  context.fillText('H', 0, 320);
  // The above three lines set the text font to bold,
  // 400px, Arial, red, and draw a red 'H' at (0,320).
  // This is the destination.
  // (0,320) is the bottom left of the 'H'.
  context.globalCompositeOperation = 'source-atop';
  context.drawImage(newImg, -100, -100);
  // newImg is the rectangular Kandinsky image.
  context.globalCompositeOperation = oldValue;
  // Sets globalCompositeOperation back to what it was.
}

In our example, the destination ‘H’ is fully opaque. However, if the destination is only partially opaque, so too will the result be partially opaque. The opacity of the mask determines the opacity of the result. You can see an example of that at vispo.com/alephTouch/test/masking.html. The mask, or destination, is an ellipse that grows transparent toward its edge. The source image, once again, is a fully opaque Kandinsky-like image.

You can see some of Aleph Null’s masking ability if you click the Bowie Brush, shown below. It fills random polygons with images of the late, great David Bowie.

The Bowie Brush in Aleph Null fills random polygons with images of David Bowie

Ted Warnell: Albers by Numbers, February, 2017

Overview: Poem by Nari works are dynamically generated, autoactive and alinear, visual and code poetry from the cyberstream. Poem by Nari is Ted Warnell and friends. Following are four Poem by Nari works that demonstrate use of some of the HTML5 canvas globalCompositeOperation(s) documented in this article.

These works are tested and found to be working as intended on a PC running the following browsers: Google Chrome, Firefox, Firefox Developer Edition, Opera, Internet Explorer, Safari, and on an Android tablet. Additional browser specific notes are included below.

Experimental. Albers by Numbers is one from a series of homages to German-American artist Josef Albers. The Poem by Nari series is loosely based on the Albers series "Homage to the Square".

This work is accomplished in part by a complex interaction of stylesheet mixBlendMode(s) between the foreground and background canvases. All available mixBlendMode(s) are employed via a dedicated random selection function, x_BMX.

Interesting to me is how the work evolves from a single mass of randomly generated numeric digits to the Albers square-in-square motif. This emergence happens over a period of time, approximately one minute, and in a sense parallels emergence of the Albers series, which happened for Albers over a lifetime.

Note to IE and Safari users: works but not as intended.

Ted Warnell: Acid Rain Cloud 3, February 2017

Experimental. Another work from a series exploring a) acid, b) rain, c) clouds, d) all of the above.

globalCompositeOperation(s) “source-over” and “xor” are used here in combination with randomized color and get & putImageData functions. The result is a continually shifting vision of what d) all of the above, above, might look like.

Interesting to me here is that ever changing “barcode” effect in the lower half of the work – possibly the “rain” in this? Over time, that rain will turn from a strong black and white downpour to a gentle gray mist. This is globalCompositeOperation “xor” at work.

Note to Safari users: works but not as intended.

Ted Warnell: An Alinear Rembrandt, April 2017

An Alinear Rembrandt
warnell.com/mona/aa_rem.htm

Christ image is digitized from Rembrandt’s “Christ On The Cross”.

Not an experiment. The statement is clear, it’s Christ on the cross.

This fully-realized work brings together globalCompositeOperation(s) “source-over” and “lighter” in combination with gif image files, globalAlpha, linear gradients, standard and dedicated random functions, get & putImageData functions, and a Poem by Nari custom grid definition function. And of course, timing is everything.

Of interest to readers will be the flashing sky and flickering Christ. These effects are accomplished by linear gradient masks, gif image file redraws, and the aforementioned globalCompositeOperation(s).

Of interest to me, it’s Christ on the cross.

Ted Warnell: Pinwheels, April 2017

More experimentation. This work is for Mary & Ryan Maki, Canada

Full screen, variable canvas rotations, and globalCompositeOperation(s) “source-over” and “xor” with randomized color. “source-over” is default and is responsible for the vivid, solid colors in this work, while “xor” provides the muted, soft-edge color blends.

Pinwheels… I’m going to be a grandpa again.

Note to Safari users: does not work with Safari browser.

 

 

Fade to Transparency (destination-out)

The fader slider in Aleph Null

Aleph Null 2.0 has a fader slider. The greater the value of the fader slider, the quicker the screen fades to the current background color. This is implemented by periodically drawing a nearly-transparent fill of the background color over the whole canvas. The greater the value of the fader slider, the more frequent the drawing of that fill over the whole canvas.

That works well when there is just one canvas, when there is no notion of layers of canvases. Once you introduce layers, you have to be able to fade a layer to transparency, not to a background color, so that you can see what’s on lower layers. I’m attempting to implement layers at the moment in Aleph Null. So I have to be able to fade a canvas to transparency.

So, then, how do you fade a canvas to transparency?

As Blindman67 explains at cpume.com, “…you can avoid the colour channels and fade only the alpha channel by using the global composite operation “destination-out” This will fade out the rendering by reducing the pixels’ alpha.” Each pixel has four channels: the red, the blue, the green, and the alpha channels; the alpha channel determines opacity. The code is like this:

ctx.globalAlpha = 0.01; // fade rate
ctx.globalCompositeOperation = "destination-out";
ctx.fillRect(0, 0, w, h);
ctx.globalCompositeOperation = "source-over";
ctx.globalAlpha = 1; // reset alpha

You do the above every frame, or every second frame, or every third frame, etc, depending on how quickly you want it to fade to transparency. Another parameter with which you control the speed of the fade is ctx.globalAlpha, which is always a number between 0 and 1. The higher it is, the closer to fully opaque the result will be on a canvas draw operation.

Blindman67 develops an interesting example of a fade to transparency in vispo.com/alephTouch/test/fade1.htm. You can see that it must be fading to transparency because the background color is dynamic, is constantly changing.

Note that the ctx.fillStyle color isn't really important because we're fading the alpha, not the color channels. ctx.fillStyle isn't even specified in the above code. When globalCompositeOperation = 'destination-out', the color values of the destination pixels remain unchanged. What changes is the alpha value of the destination pixels: each destination pixel's alpha is reduced in proportion to the source pixel's alpha (the destination alpha is multiplied by one minus the source alpha).

The performance of fading this way should be very good, versus mucking with the color channels, because you’re changing less information; you’re only changing the alpha channel of each pixel, not the three color channels.

I massaged the Blindman67 example into something simpler at vispo.com/alephTouch/test/fade2.htm. There’s a fade function:


function fade() {
  gCtx1.globalAlpha = 0.15; // fade rate
  gCtx1.globalCompositeOperation = "destination-out";
  gCtx1.fillRect(0, 0, gCanW, gCanH);
  gCtx1.globalCompositeOperation = "source-over";
  gCtx1.globalAlpha = 1; // reset alpha
}

But compare the fade function with the code above it from Blindman67. It’s the very same idea.

Above, we see an example much like what I wrote at vispo.com/alephTouch/test/fade2.htm
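
In an animation, you'd call a fade function like the one above every frame, or every Nth frame, from your animation loop. A sketch (fadeEveryNFrames is just an illustrative knob):

// Call the fade every Nth animation frame; raise fadeEveryNFrames for a slower fade.
var frameCount = 0;
var fadeEveryNFrames = 2;

function animate() {
  frameCount++;
  if (frameCount % fadeEveryNFrames === 0) {
    fade(); // the destination-out fill shown above
  }
  // ...draw this frame's brushstrokes here...
  requestAnimationFrame(animate);
}
requestAnimationFrame(animate);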

Finally, on this topic, I'm currently wondering about the best way to implement layers concerning canvases. Clearly the compositing possibilities create a situation where, at least in some situations, you don't need multiple visible canvases; you can composite with offscreen canvases and use only one visible canvas. Whether this is better in general, and what the performance issues are, is currently unclear to me. There also exists at least one platform, namely concretejs, that supports canvas layers.

Save Canvas to Image File (destination-over)

globalCompositeOperation = ‘destination-over’ allows you to slip an image into the background of another image. The source image is written underneath the destination image.

It turns out that’s precisely what is needed to fix some bad browser behavior when you save a canvas to an image file, as we’ll see.

If you want to save a canvas to an image file, the simplest way to do it, at least on Chrome and Firefox, is to right-click (PC) or Control+click (Mac). You are presented with a menu that allows you to "Save As…" or, on some browsers, "Copy Image". The problem is that some browsers insert a background into this image that probably isn't the same color as the background on the canvas.

On the PC, Chrome inserts a black background. Other browsers may insert other colors, or the right color, or no color at all. One solution to this problem is to create a button that runs some JavaScript that inserts the right background color. This is a job for globalCompositeOperation = ‘destination-over’ because it allows you to create a background with the source image.

The “save” button in Aleph Null

You can see the solution I’ve created at vispo.com/alephTouch/an.html, shown above. The controls contain a “save” button which, when clicked, copies a png-type image into a new tab, if permitted to do so. You may have to permit it by clicking on a red circle near the URL at the top of the browser. Once the image is in the new tab, right-click (PC) or Ctrl+click (Mac) and select “Save As…”.

The code is basically this sort of thing:


var canvas=document.getElementById('canvas');
var context=canvas.getContext('2d');
// We assume the canvas already has the destination image on it.
var oldGlobalComposite=context.globalCompositeOperation;
context.globalCompositeOperation='destination-over';
context.fillStyle=backgroundColor;
// backgroundColor is a string representing the desired background color.
context.fillRect(0,0,canvas.width,canvas.height);
context.globalCompositeOperation=oldGlobalComposite;
var data=canvas.toDataURL('image/png');
window.open(data);

The toDataURL command can also create the image as a jpg or webp.
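
For example (the second argument, a number between 0 and 1, sets the quality for lossy formats; webp support varies by browser):

var jpgData = canvas.toDataURL('image/jpeg', 0.92);
var webpData = canvas.toDataURL('image/webp', 0.92);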

In your animations with the HTML 5 canvas, will globalCompositeOperation be of any use? The answer is that if you are combining images at all, doing any compositing at all, globalCompositeOperation is probably relevant to your task and may make it much easier.

Colour Music in Aleph Null 2.0

I’m working on Aleph Null 2.0. You can view what I have so far at vispo.com/alephTouch/an.html . If you’re familiar with version 1.0, you can see that what 2.0 creates looks different from what 1.0 creates. I’ve learned a lot about the HTML5 canvas. Here are some recent screenshots from Aleph Null 2.0.

 

Image Masking with the HTML5 Canvas

Image masking with the HTML5 canvas is easier than I thought it might be. This shows you the main idea and two examples.

If you’d like to cut to the chase, as they say, look at this example and its source code. The circular outline is an image. The Kandinsky painting is a rectangular image that is made to fill the circular outline. We see the result below:

The Kandinsky painting fills a blurry circle.

The key is the setting for the canvas's globalCompositeOperation property. If, like me, you had seen any documentation for this property at all, you might have thought that it only concerned color blending, like the color blending options in Photoshop for a layer (the options usually include things like 'normal', 'dissolve', 'darken', 'multiply', 'color burn', etc). But, actually, globalCompositeOperation is more powerful than that. It's for compositing images. Image masking is simply an example of compositing. Studying the possibilities of globalCompositeOperation would be interesting. We're just going to use a couple of settings in this article. The definition we read of "compositing" via Googling the term includes this:

“Compositing is the combining of visual elements from separate sources into single images, often to create the illusion that all those elements are parts of the same scene.”

We’re going to use the “source-atop” setting of globalCompositeOperation. The default value, by the way, is “source-over”.

The basic idea is that if you want image F to fill image A, you draw image A on a fresh canvas. Then you set globalCompositeOperation to “source-atop”. Then you draw image F on the canvas. When you do that, the pixels in the canvas retain whatever opacity/alpha values they already have. So, for instance, any totally transparent pixels remain totally transparent, and any partially transparent pixels remain partially transparent. Image F is drawn into the canvas, but F does not affect the opacity/alpha values of the canvas.
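
A minimal sketch of that sequence, assuming maskImage and fillImage are Image objects of my own that have already loaded (maskImage being the circular outline with transparent pixels, fillImage the rectangular Kandinsky):

var canvas=document.getElementById('canvas');
var context=canvas.getContext('2d');
// Draw the mask image (image A) first; its transparent pixels define the mask.
context.drawImage(maskImage,0,0,canvas.width,canvas.height);
// 'source-atop' draws new pixels only where the canvas already has non-transparent
// pixels, and it preserves the canvas's existing alpha values.
context.globalCompositeOperation='source-atop';
// Draw the fill image (image F); it appears only inside the opaque parts of the mask.
context.drawImage(fillImage,0,0,canvas.width,canvas.height);
// Restore the default compositing mode for any later drawing.
context.globalCompositeOperation='source-over';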

Here is an example where a Kandinsky painting is made to fill some canvas text:

Click the image and then view the source code.

I’m working on some brushes for Aleph Null 2.0 that are a lot like the brushes in dbCinema: the brushes ‘paint’ samples of images.

New Work by Ted Warnell

Ted Warnell, as many of you know, is a Canadian net artist, originally from Vancouver and long since living in Alberta, who has been producing net art as programmed visual poetry at warnell.com since the 90’s. That’s about how long we’ve been in correspondence with one another. Ted was very active for some time on Webartery, an email list from the 90’s that many of the writerly net artists were involved in. We’ve stayed in touch over the years, though we’ve never met in the same room. We have, however, met ‘face-to-face’ via video chat.

He’s still creating interesting net art. In the nineties and oughts, his materials were largely bitmaps, HTML, CSS, and a little JavaScript. Most of his works were stills, or series thereof. Since about 2013, he’s been creating net works using the HTML5 canvas tag that consist entirely of JavaScript. The canvas tag lets us program animations and stills on HTML pages without needing any plugins such as Flash or Unity. Ted has never liked plugins, so the canvas tag works well for him for a variety of reasons. Ted has created a lot of very interesting canvas-based, programmed animations and stills at warnell.com/mona.

I’m always happy to get a note from Ted showing me new work he’s done. Since we both are using the canvas, we talk about the programming issues it involves and also about the sorts of art we’re making. Below is an email Ted sent me recently after I asked him how he would describe the ‘look’ or ‘looks’ he’s been creating with his canvas work. If you have a good look at warnell.com/mona, you’ll see that his work does indeed exhibit looks you will remember and identify as Warnellian.


hey jim,

further to earlier thoughts about your query re “looks” in my work (and assuming that you’re still interested by this subject), here is something that has been bubbling up over the past week or so

any look in my work comes mainly from the processes used in creation of the work – so, it’s not a deliberate or even a conscious thing, the look, but rather, it just is – mainly, but not entirely, of course – subject, too, is at least partly responsible for the way these things look

have been thinking this past week that what is deliberate and conscious is my interest in the tension between and balance of order and chaos, by which i mean mathematics (especially geometry, visual math) and chance (random, unpredictable) – i’m exploring these things and that tension/balance in almost all of my works – you, too, explore and incorporate these things into many of your works including most strikingly in aleph null, and also in globebop and others

so here are some thoughts about order/chaos and balance/tension in no particular order:

works using these things function best when the balance is right – then the tension is strong – and then the work also is “right” and strong

it is not a requirement that both of these things are apparent (visible or immediately evident) in a work – there are some notable examples of works that seem to be all one or the other, though that may be more illusion than reality – works of jackson pollock seem to be all chaos but still balance with a behind-the-scenes intelligence, order – works by andrew wyeth on the other hand seem to be all about order and control, but look closely at the brushstrokes that make all of that detail and you’ll see that many of these are pure chance – brilliant stuff, really

an artist whose work intrigues me much of late is quebecer claude tousignant – i’m sure you know of him – he is perhaps best known for his many “target” paintings of concentric rings – tousignant himself referred to these as “monochromatic transformers” and “gongs” – you can find lots of his works at google images

the reason tousignant is so interesting to me (again) at this time is because while i can see that his paintings “work”, i cannot for the life of me see where he is doing anything even remotely relating to order/chaos or the balance/tension of same – his works seem to me to be truly all order/order with no opposite i would consider necessary for balance and/or to make (required) tension – his works defy me and i’d love to understand how he’s doing it 🙂

anyway, serious respect, more power, and many more years to the wonderful monsieur tousignant

Look Again – warnell.com/mona/look.htm

is a new (this week) autointeractive work created with claude tousignant and his target paintings in mind

in this work are three broad rings, perfectly ordered geometric circles, each in the same randomly selected single PbN primary color – the space between and surrounding these rings is filled with a randomly generated (60/sec), randomly spun alphanumeric text in black and white, and also gray thanks to xor compositing – alinear chaos – as the work progresses, the three rings are gradually overcome by those relentless spinning texts – the outermost ring is all but obliterated while the middle ring is chipped away bit by bit until only a very thin inner crust of the ring remains – the third innermost ring, tho, is entirely unaffected

as the work continues to evolve, ghostlike apparitions of the missing outer and middle ring become more and more pronounced… because… within the chaos, new rings in ever-sharper black and white are beginning to emerge – this has the effect of clearly defining (in gray and tinted gray) the shape of the original color rings – even as order is continually attacked and destroyed by chaos, chaos is simultaneously rebuilding the order – so nothing is actually gained or lost… the work is simply transformed – a functioning “monochromatic transformer”, as tousignant might see it

that’s the tension and balance i’m talking about – the look you were asking about likely has something to do with autointeraction, alinearity, and most likely by my attempt to render visible order/chaos and balance/tension in every work i do

your attempt in aleph null (it now seems to me) might be in the form of progressive linearity on an alinear path – and well done

ted

PS, “Look Again” is a rework of my earlier work, “Poem by Numbers 77” from march 2015 – warnell.com/mona/pbnum77.htm

which work is a progression of “Poem by Numbers 52” from april 2013 – warnell.com/mona/pbnum52.htm

which work was about learning canvas coding for circular motion

Poem by Numbers works usually (not always) are about coding research and development – moreso than concept development, which comes in later works like “Look Again”

other artists have “Untitled XX” works – i have “Poem by Numbers XX”

Google Image Search API

Google image search parameters

Here are some useful documents if, as a developer,  you want to use the Google Image Search API.

I used the Google Image Search API in an earlier piece called dbCinema, but this piece was done in Adobe Director. Since then, I’ve retooled to HTML5. So I looked into using the Image Search API with HTML5.

First, the official Google documentation of the Google Image Search API is at developers.google.com/image-search/v1/devguide. It’s all there. Note that it’s “deprecated”: for developers, it won’t be free for much longer, as Google will soon charge $5 per 1000 queries. But the documentation I have put together does not use a key of any kind.

Perhaps the main thing to have a look at in the official Google documentation is the sample HTML file. It’s in the section titled “The ‘Hello World’ of Image Search”. This automatically does a search for “Subaru STI” and displays the results. But wait. There is a bug in the sample file so that if you copy the code and paste it into a new HTML file, it doesn’t work. I expect this is simply to introduce the seemingly mandatory dysfunction almost invariably present in contemporary programming documentation. Unbelievable. I have corrected the bug in vispo.com/typewriter/Google_Image_Search2.htm, which is almost exactly the same as “The ‘Hello World’ of Image Search” except it gets rid of “/image-search/v1/” in a couple of places.

After you look at that, look at vispo.com/typewriter/Google_Image_Search3.htm. Type something in and then press the Enter or Return key. It will then do a Google image search and display at most 64 images. 64 is the max you can get per query. The source code is very much based on the official Google example. The image size is set to “medium” and the porn filter is turned on. Strange but true.

Finally, have a look at vispo.com/typewriter/Google_Image_Search4.htm. This example shows you how to control Google Image Search parameters. The source code is the same as the previous example except we see a bunch of dropdown menus. Additionally, there is an extra function in the source code named setRestriction which is called when the user selects a new value from the dropdown menus.
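
To give a sense of the pattern those pages use, here is a rough sketch from memory of the deprecated API’s devguide; the exact constant names (RESTRICT_IMAGESIZE, IMAGESIZE_MEDIUM, and so on) should be checked against the official documentation, and the page is assumed to load Google’s jsapi loader script first:

google.load('search', '1'); // load the (deprecated) Search API via the jsapi loader
google.setOnLoadCallback(function() {
    var imageSearch = new google.search.ImageSearch();
    // Restrict results to medium-sized images, as in my example pages.
    imageSearch.setRestriction(google.search.ImageSearch.RESTRICT_IMAGESIZE,
                               google.search.ImageSearch.IMAGESIZE_MEDIUM);
    // The callback runs when results arrive; it reads imageSearch.results.
    imageSearch.setSearchCompleteCallback(this, function() {
        for (var i = 0; i < imageSearch.results.length; i++) {
            var img = document.createElement('img');
            img.src = imageSearch.results[i].tbUrl; // thumbnail URL of the result
            document.body.appendChild(img);
        }
    });
    imageSearch.execute('Subaru STI'); // run the query
});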

There is a dropdown menu for all the controllable Image Search parameters except for the sitesearch restriction, which is simple enough if you understand the others.

Anyway, that ought to give you what you need to get up and running with the Google Image Search API.

Teleporter

Off planet teleporter

 

Teleportation to the bottom of the ocean

I’ve been working on a new piece called Teleporter. The original version is here. The idea is it’s a teleporter: you click the Teleport button and it takes you somewhere random on the planet. Usually on the planet. It uses the Google Maps API and takes you to a random panorama. In the new version, 4% of the time you see a panorama made by my students; they were supposed to explore the teleporter or teleportation, literally or figuratively. So the new version is a mix of Google street view panoramas and custom street view panoramas.

I’m teaching a course in mobile app development at Emily Carr U of Art and Design in Vancouver, Canada. I wrote Teleporter to show the students some stuff with Google Maps. I’d shown them Geoguessr, which is a simple but fun piece of work with Google Maps, and I realized it was simple enough that I could probably write something related. I wrote something that generates a random latitude and longitude and then asks Google for the closest panorama to that location.
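
A minimal sketch of that step, assuming the Google Maps JavaScript API is already loaded on the page; the 100 km radius and the showPanorama helper are mine, not Teleporter’s actual code:

function teleport() {
    // Generate a random latitude and longitude anywhere on the globe.
    var lat = -90 + 180 * Math.random();
    var lng = -180 + 360 * Math.random();
    var location = new google.maps.LatLng(lat, lng);
    // Ask Google for the nearest Street View panorama within 100 km of that point.
    var service = new google.maps.StreetViewService();
    service.getPanorama({location: location, radius: 100000}, function(data, status) {
        if (status === google.maps.StreetViewStatus.OK) {
            showPanorama(data.location.pano); // hypothetical helper: display this panorama
        } else {
            teleport(); // no panorama near this point, so try again
        }
    });
}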

So that worked fine, the students liked it, and I put a link to it on Facebook. A friend of mine, Bill Mullan, shared my link to Teleporter. Then a friend of his started a discussion about Teleporter on metafilter.com. A couple of days later I got an email from Adele Peters of FastCoExist.com, who wanted to do a phone interview with me about Teleporter for an article she wanted to write. So we did. Her article came out a couple of days later; the same day, articles appeared in vice.com, dazeddigital.com from the UK, and some other online magazines. Articles quickly followed from various places such as programmableweb.com and digitalarti.com, a digital art site from Paris. This resulted in tens of thousands of visitors to Teleporter.

Meanwhile, I decided to create a group project in the classroom out of Teleporter. The morning cohort was to build the Android app version of Teleporter, and the afternoon cohort the iOS version. That is wrapping up now; we should have an app soon. You can see the web version so far. It’s mostly like the original version, except for a few things. The interface is more ‘app like’. Also, in the new version you see a student panorama 4% of the time; those panoramas are meant to explore and develop the teleporter/teleportation theme. And there’s a Back button. The students designed the interface.

I want to mention a technical thing, because I didn’t see any documentation on it online; perhaps it will help some poor developer who, like me, is trying to do something with a combination of Google street view panoramas and custom street view panoramas. I found that a bug was happening: once the user had viewed a student panorama, a custom panorama, then thereafter, when they viewed a Google panorama and tried to use the links to travel along a path, they would be taken back to the previous custom student street view panorama.

The solution was the following JavaScript code:

if (panorama.panoProvider) {
    // In this case, the previous panorama was a student panorama.
    // We delete panorama.panoProvider or it causes navigation problems:
    // if it is present, then when the user goes to use the links in the
    // Google panorama, they simply get taken to the student panorama.
    delete panorama.panoProvider;
}

As you eventually figure out, when you create a custom street view panorama, you need to declare a custom panorama provider. You end up with a property of your panorama object named panoProvider. But this has to be deleted if you then want the pano to display a Google street view panorama, or you get the bug I was experiencing.
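
For context, here is roughly the pattern that creates that property in the first place. This is a hedged sketch: getStudentPanoramaData is a function of my own (not part of the Maps API) that builds a StreetViewPanoramaData object for a given student pano id, and googlePanoId stands for an id obtained from the StreetViewService.

// Register a custom provider so the panorama can display student panoramas.
panorama.registerPanoProvider(function(panoId) {
    return getStudentPanoramaData(panoId); // hypothetical: returns StreetViewPanoramaData
});
// Later, before switching back to a Google panorama, remove the provider
// (this is the fix above) and then set the Google pano id:
if (panorama.panoProvider) {
    delete panorama.panoProvider;
}
panorama.setPano(googlePanoId);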

Anyway, onward and upward.

 

Breaking Bad as Sittrag

Whatever else it is, tragedy is a dramatic form, a type of drama for the stage or film or TV etc. Certain dramatic works of art are tragedies. Tragedy has been regarded as the pinnacle of dramatic art for about 2,500 years in the western world. It’s typically dated back to the ancient Greek tragedians, such as Aeschylus and his Oresteia. There has been fascinating conjecture that Greek tragic drama originated in religious ritual.

Tragedy is not philosophy, but the phrase ‘tragic vision’ is associated with the form. Just what that is varies considerably. Tragedy isn’t inevitably as Aristotle says it is in The Poetics, of course. But a or the ‘tragic vision’ has typically been associated with our most profound dramatic art, our most probing drama into, well, the meaning of life.

Tragedy often involves a victory of the spirit in the face of great worldly loss. People endure, in tragedy. Usually they go down. It’s the end for them. One of the things long associated with the tragic vision is ‘anagnorisis’ or ‘recognition’. The vanilla meaning is the key moment of insight in the play, usually the protagonist’s insight into the situation. Another is ‘catharsis’. It can be and has been interpreted as many things, but it’s usually associated with the purging and purification of pity and terror/fear in the audience, ie, the drama leads them to catharsis, to an appreciation of the tragic vision of the drama or the fate of the hero/heroine. It’s sometimes associated with insight into ‘the human condition’ or something sufficiently vague. I expect that it often evades some formulas while partially satisfying others. Our own experience is often like that, whether it’s cathartic or otherwise.

I expect that the writers of Breaking Bad have been more than a little aware of tragedy in the writing. How could they not? It’s basically the faith of most dramatic artists. They believe in people, typically, and they believe in their art and the art of tragedy as the great expression of their faith in the value of life and the capacity of people to, well, be heroic even as they go down. Not necessarily as martyrs but perhaps true to their own priorities and values about what’s important in this life.

In any case, the key insight or recognition in Breaking Bad is when Walt finally admits to himself and Skyler that he did it for himself. It’s a moment of insight into himself and his own life. And his life with Skyler and the family. He is finally revealed to himself and also open to his wife to whom he has been lying since the series began. That seems like a significant victory, in the drama. He can finally admit to her and to himself what he has been hiding all his life.

And the catharsis, well, that’s ongoing, isn’t it. It’s when it all comes together for you, whenever that is.

The great White west: Breaking Bad as Western

Breaking Bad is a kind of contemporary western. In various ways. Of course there’s the New Mexico landscape. Breaking Bad uses that landscape cinematographically to romance the story. The romance of the western. Great open spaces. Freedom. Lots of heat and danger, risk.

If you’d wondered ‘why all those car ads?’ especially in the finale but also lots of them throughout the series, consider this. Cowboys got their hosses. Cars, in Breaking Bad, do all the work of hosses in westerns. That’s why the car advertisers eat it up. For instance, when Walt’s black Chrysler SRT8 takes a bullet in “Ozymandias”, he doesn’t just lose a car. He’s on the way down after that. That black car symbolized the power of the evil drug kingpin he had become.

But there are other more interesting elements of the western in Breaking Bad. Westerns give their heroes and villains special powers. Sort of like super heroes but not quite. Sort of like the powers of fighters in kung fu movies who fly and so on. But not quite. Western heroes can kill a lot of bad guys in a shootout and/or they have great marksmanship or they are as tough as a grizzly bear or whatever.

Walter White can kill everyone with science, cleverness, and lots of guts. Gus Fring kills all of Don Eladio’s henchmen with a bottle of booze and a lot of guts. Walt blows up Tuco’s lair with fulminated mercury and a lot of guts. These are all improbable events. But the improbability is masked with science, realism, and good storytelling. We *want* Gus to win against overwhelming odds when he kills Don Eladio and all. We suspend our disbelief cuz we want exactly that outcome.

Emily Nussbaum, in the New Yorker, objects to the improbability in the finale (spoiler alert) of Uncle Jack giving a damn that Walt says Jack is partners with Jesse. Very true. It does seem out of character. But we also want him to go get Jesse. Our objection to the improbability and out of characterness of his action is mollified by our desire to get Jesse involved in the finale.

Westerns are rarely strictly realistic. BB also is sort of like a comic book at times.

Like in “Face Off” when Gus gets killed. He walks out of the room that has just exploded like nothing happened, straightens his tie–and then we see half his face has been blown off. He looks like something out of a comic book or a slasher movie, at that point. Then he falls down and dies. The unrealistic nature of it jars a little bit with Breaking Bad’s realism, but our objection is offset by the frisson of the emergence of the death’s head and devil from the villainous Gus Fring. He is suddenly what he is. He has hidden in plain sight for so long.

Suspension of disbelief is all about suspending our disbelief cuz we want to. Not cuz we’re asked to.

First Remainder Series by Joseph F. Keppler

Apologies for the long absence. In the interim, I got married to the lovely Natalie Funk. And bought a condo in Metrotown in Vancouver. And have been teaching mobile app development. And will soon be teaching mobile web development and motion graphics at the Emily Carr U of Art and Design. It’s been a time of a lot of change and, additionally, a lot of retooling. I’ve been learning mobile development this and mobile development that. Lots of new tricks for this old dog.

I put a couple of things together last week that I’d like to show you. I published seven visual poems by Joe Keppler back in 2008. I always liked them and thought them special, but since I published them, I’ve given them deeper thought–and written something that gets at what, to me, is so remarkable about these poems.

I also recoded Joe’s visual poems into HTML that displays well on mobile devices. I’ve been reading about “responsive web design” recently in preparation for teaching a course on mobile web development. Basically, “responsive web design” is about making web pages that work well on a very wide range of display devices, from big TVs down to smartphones. Joe’s poems were excellent practice in responsive design because they are simple, to varying degrees, but take up the whole page. Recoding these pages into contemporary HTML has helped me a great deal with my understanding of contemporary web design.

Two Self-Portraits

These were created on invitation to make a work related to self-portraiture for Scenes of Selves, Occasions for Ruses, a group exhibition at the Surrey Art Gallery. The curator saw an earlier dbCinema piece I did called The Club that incinemates the faces of my favorite North American politicians, businessmen, and psychopaths. He asked me to do related work with photos of myself rather than of Jeffrey Dahmer, Paul Wolfowitz, Russell Williams, George Bush, and the rest of that psychotic, murderous crew. Which seemed like a remarkably strong opportunity to at least make an idiot of myself.

Let me show you the ‘trailers’ to the two resulting videos. What I’d like to show you are slideshows made of screenshots from the two videos. The videos are made of dbCinemations/collages of 53 images of me from the day I was born to my current grizzled state at 53 years of age. The Surrey show runs from September 15, 2012 (the opening is from 7:30-9:30pm) till December 16, 2012. The show was curated by Jordan Strom.

The first trailer is at http://vispo.com/dbcinema/selfportrait2/index.htm?n=1. The video of which these screenshots are composed used two dbCinema brushes. One of the brushes ‘paints’ a letter from my name each frame. The other brush paints a circle each frame. Each of the brushes (usually) paints a different photo. So we see two simultaneous photos of me being drawn. The man and the baby. Etc. A brush paints a given photo for several seconds and then paints a different photo. The slideshow is composed of 47 still images.

The second trailer is at http://vispo.com/dbcinema/selfportrait3/index.htm?n=1. The video used one dbCinema brush: a Flash brush. In other words, the brush was a SWF turned into a mask. The shape of the brush was a curving, undulating, rotating, translated line. Each frame of the video, dbCinema rendered one brush stroke, one rendering of the brush image; the curving line’s paint was sampled from photos of me. The brush would sample from a photo for several seconds before moving on to another photo. What we’re looking at here is not the video but 17 screenshots from the video.

In the main, the man does not cohere. No coherent person emerges from this process of forcibly joining / collaging / synthesizing / remixing these 53 photos of me. It doesn’t magically tell me who I have always been. Or does it? Or if not, what does it suggest? You could say “If you don’t know who you’ve always been, no piece of art is going to clue you in.” Well I do kinda know. On the other hand, I do seem to tell myself a lot of stories.

It seems what the self-portrait does for me mainly is to problematize the notion of the existence of a person whom I have always been. The images in the video are messy. Like birth mess. Perhaps that’s part of our discomfort in life. We’re always in the midst of our own birth mess. And death stink. As Bob Dylan once observed, “He not busy being born is busy dying.”

Joe Keenan’s MOMENT

Joe Keenan's MOMENT in Internet Explorer

I put together a twenty minute video talking about a fantastic piece of digital poetry by Joe Keenan from the late nineties called MOMENT. Check it out: http://vispo.com/keenan/4. MOMENT, written in JavaScript for browsers, is a work of visual interactive code poetry. It’s one of the great unacknowledged works for the net.

I used Camtasia 8 to create this video. I’ve used the voice-over capabilities of Camtasia before to create videos that talk about what’s on the screen, but this is the first time I’ve been able to use the webcam with it. Still a few bugs, though, it seems: at times the voice and the video are noticeably out of sync.

Still, you get the idea. I’m a big fan of Joe Keenan’s MOMENT and am glad I finally did a video on it.