HTML 5 has been publicized as an open replacement for Adobe’s proprietary Flash. In truth, HTML 5 is far less featureful than Flash concerning audio, video, imaging, text and much else. And there are currently no tools available for non-programmers to work comfortably in HTML 5. It will take HTML 6 or 7, some years off, perhaps a decade, for HTML to approach the current featurefulness of Flash. But it’s coming along.
The most notable thing about HTML 5 is the <canvas> tag, which provides the ability to do interesting graphical operations. There are various programmerly commands available to draw stuff. HTML 5 also introduces an <audio> tag and a few audio commands, but nothing with the sophistication of Flash’s audio capabilities.
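To give a flavor of those programmerly drawing commands, here is a minimal sketch of a canvas page (the element id, dimensions, colors and coordinates are arbitrary examples, not from any of the pieces discussed below):

```html
<!-- Minimal HTML 5 canvas sketch: draw a filled circle and a line of text. -->
<canvas id="sketch" width="300" height="150"></canvas>
<script>
  var canvas = document.getElementById('sketch');
  var ctx = canvas.getContext('2d');       // the 2D drawing context
  ctx.fillStyle = 'steelblue';
  ctx.beginPath();
  ctx.arc(150, 75, 40, 0, Math.PI * 2);    // a circle at the canvas center
  ctx.fill();
  ctx.fillStyle = 'black';
  ctx.fillText('hello, canvas', 10, 20);   // simple text drawing
</script>
```

Everything is done by issuing commands to the drawing context in JavaScript; there is no timeline, no symbol library, none of the designer-facing tooling Flash authors are used to, which is part of why non-programmers can’t yet work comfortably in HTML 5.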
What we’re going to do is have a look at four recent pieces that use HTML 5 in interesting ways. And that work. Yes, some HTML 5 works. When new programming possibilities are introduced to a mass audience, you can bet there are going to be more than a few blue screens. I’ve only had one today looking at new HTML 5 work. But not from any of the below pieces. These pieces ran well and were very rewarding to view.
The most interesting one, from an artistic perspective, is Arcade Fire’s interactive music video of their song “We Used to Wait” from their album The Suburbs, which won the Grammy for album of the year in 2011. The HTML 5 piece is called The Wilderness Downtown. This is quite impressive, really, both from a technical and artistic point of view. And it goes along perfectly with the suburbs, if that’s where you’re from. I’ve seen online videos that use multiple browser windows for video before, such as in the work of Peter Horvath, but The Wilderness Downtown is also quite sophisticated in other ways. The programmed birds, for instance, and the way they move between windows. And alight on what you have drawn in the interactive writing piece. And the way they use Google Earth. Very strong work indeed. And, o yes, the music is pretty darn good too. Moreover, the touches I’ve mentioned are not gratuitous whiz-bang programming effects, but tie into a vision of the suburban experience that Arcade Fire has developed so very beautifully.
Gregory Chatonsky is a French/Canadian artist who has created a significant body of net art. Here are a couple of pieces of his I found that still work and are compelling:
“The Revolution Took Place in New York is a fictional story generated in real time from an internet source. A text generator gives shape to an infinite novel bearing close resemblance to the work “Projet pour une révolution à New York” written by Robbe-Grillet in 1970: Ben Saïd walks on the streets of the American metropolis and plots something. Some words are associated to video fragments, others to sounds gleaned on the network and others are automatically translated into images using Google. The structured association of these heterogeneous elements generates a narrative flow simultaneous with the network flow.”
Each time I’ve opened this piece, it’s been different. What surprised and charmed me most about this piece was how the narrative made sense, often, and kept me interested in where it was going. That is very unusual indeed in generative works. I’m referring to the text itself. But the way the text goes with the images was also, often, quite interesting.
About a year ago, John Cayley made a post on NetPoetic entitled “An Edge of Chaos”. In it he delimits a constraint-based networked-writing process: “Write into the Google search field with text delimited by quote marks until the sequence of words is not found. Record this sequence….”
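Cayley’s constraint is procedural enough to sketch in code. In this hypothetical Python sketch, `hit_count` stands in for querying a search engine with a quoted phrase (the actual process uses the Google search field by hand, and any real hit-count API is an assumption here):

```python
def first_unfound_sequence(words, hit_count):
    """Extend a quoted phrase word by word until the search engine reports
    zero results, then record that sequence (Cayley's "Edge of Chaos" constraint)."""
    phrase = []
    for word in words:
        phrase.append(word)
        # Quote marks force an exact-phrase search, as in the constraint.
        if hit_count('"' + ' '.join(phrase) + '"') == 0:
            return ' '.join(phrase)
    return None  # every prefix of the word stream was found somewhere

# Stand-in for a real search query: pretend only phrases of three words
# or fewer are found on the network.
fake_hits = lambda q: 0 if len(q.split()) > 3 else 12000

print(first_unfound_sequence(['my', 'memory', 'percolates', 'splinters'], fake_hits))
# prints the first quoted prefix the fake engine cannot find
```

The interesting artistic decision sits outside the loop: which word stream to feed it, and what to do with the unfound sequences once recorded.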
A couple of weeks ago, I woke up with the idea of making a poem composed entirely of lines that returned no search results. “Wow”, I thought to myself, “what a great idea”. I had forgotten it was John’s idea.
If this situation occurred in 2014 (for example), and on waking I told the idea to my girlfriend, perhaps the instant-speech-checking algorithmically-networked microphone next to our bed might have immediately alerted me to my potential plagiarism. As it is, my memory had to slowly percolate John’s prescient precedent to the surface of my mind like a splinter.
Neuronal latency in the 21st century data avalanche is a vestigial design flaw that needs to be technologically cauterized.
Imagine that (while typing / while speaking), footnotes, bibliographies and source attributions immediately auto-generate, links sprout around text, and areas of uniqueness are spontaneously (and perhaps effortlessly) patented. The race to network becomes a race to brand segments of communication, to demarcate phrases of language, to colonize conjunctions of text in the same way attributions of authorship emerged from the book.
A writer becomes a sewer (sic pun) of uniqueness. Instead of quotation marks, a new grammar of overlapping links allows the subtlety of appropriated text’s multiple inheritances to Xanadu off towards diverse sources. Instead of Flarf, context-specific algorithmic-grammars differentiate between semantically meaningful units of language and word-salad collage-spew net-wrack.
Dystopic singularity theories aside, an era of instantaneous as-you-type network-search is arriving. Google Instant is just one stride in the sprint toward word-processing software that automatically checks writing for repetition and rewards writing that is both meaningful and unique.
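The kernel of such repetition-checking software can be sketched, very speculatively, as an n-gram comparison: each phrase-sized window of a draft is tested against a reference corpus of already-published language. Everything here (function names, the choice of 4-word windows) is my own illustrative assumption, not a description of any real product:

```python
def ngrams(text, n=4):
    """All n-word windows of a text, as a set of strings."""
    words = text.split()
    return {' '.join(words[i:i + n]) for i in range(len(words) - n + 1)}

def flag_repetitions(draft, corpus, n=4):
    """Return the draft's n-grams that already occur in the reference corpus,
    i.e. the phrases an as-you-type checker would flag as unoriginal."""
    return ngrams(draft, n) & ngrams(corpus, n)

corpus = 'we used to wait for letters in the mail'
draft = 'for letters in the mail again'
print(sorted(flag_repetitions(draft, corpus)))
# prints ['for letters in the', 'letters in the mail']
```

A real system would of course query the network rather than a local corpus, and would need to decide, as Flarf never did, which repeated phrases are quotation, which are idiom, and which are theft.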