Here are some useful documents if, as a developer, you want to use the Google Image Search API.
I used the Google Image Search API in an earlier piece called dbCinema, but this piece was done in Adobe Director. Since then, I’ve retooled to HTML5. So I looked into using the Image Search API with HTML5.
First, the official Google documentation of the Google Image Search API is at developers.google.com/image-search/v1/devguide. It’s all there. Note that the API is “deprecated”: it won’t be free for developers much longer; soon Google will charge $5 per 1000 queries. The documentation I have put together, however, does not use a key of any kind.
Perhaps the main thing to have a look at in the official Google documentation is the sample HTML file. It’s in the section titled “The ‘Hello World’ of Image Search”. This automatically does a search for “Subaru STI” and displays the results. But wait. There is a bug in the sample file: if you copy the code and paste it into a new HTML file, it doesn’t work. I expect this is simply to introduce the seemingly mandatory dysfunction almost invariably present in contemporary programming documentation. Unbelievable. I have corrected the bug in vispo.com/typewriter/Google_Image_Search2.htm, which is almost exactly the same as “The ‘Hello World’ of Image Search” except that it gets rid of “/image-search/v1/” in a couple of places.
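To make the corrected pattern concrete, here is a minimal sketch of how the sample’s search-complete callback turns results into thumbnails. The loader calls appear as comments, since the deprecated service no longer answers; the result property names (tbUrl, titleNoFormatting) follow the old Image Search API, and the stubbed results array is invented here purely for illustration.

```javascript
// Turn an array of Image Search result objects into thumbnail <img> tags.
function resultsToHtml(results) {
  return results.map(function (r) {
    return '<img src="' + r.tbUrl + '" alt="' + r.titleNoFormatting + '">';
  }).join('\n');
}

// In the corrected HTML page, the flow is roughly:
//   google.load('search', '1');
//   var imageSearch = new google.search.ImageSearch();
//   imageSearch.setSearchCompleteCallback(this, function () {
//     document.getElementById('content').innerHTML =
//         resultsToHtml(imageSearch.results);
//   });
//   imageSearch.execute('Subaru STI');

// Stub standing in for imageSearch.results:
var stubResults = [
  { tbUrl: 'http://example.com/thumb.jpg', titleNoFormatting: 'Subaru STI' }
];
console.log(resultsToHtml(stubResults));
```

The key point is that the search is asynchronous: you hand the API a callback, call execute, and build your HTML when the callback fires.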
After you look at that, look at vispo.com/typewriter/Google_Image_Search3.htm. Type something in and then press the Enter or Return key. It will then do a Google image search and display at most 64 images. 64 is the max you can get per query. The source code is very much based on the official Google example. The image size is set to “medium” and the porn filter is turned on. Strange but true.
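As an aside on where that 64 comes from: the deprecated service returned at most 8 results per request, and the start offset topped out at 56, i.e. 8 pages of 8. Here is a sketch that builds the eight request URLs using the service’s REST form; the endpoint and parameter names are from the old documentation (an assumption on my part — the vispo.com pages above use the JavaScript loader instead), and the service no longer answers, so this is just to show the arithmetic.

```javascript
// Build the eight page-request URLs for one query: 8 pages x 8 results
// is the 64-image ceiling of the deprecated Image Search service.
function imageSearchUrls(query) {
  var base = 'https://ajax.googleapis.com/ajax/services/search/images';
  var urls = [];
  for (var start = 0; start < 64; start += 8) { // pages 0..7
    urls.push(base + '?v=1.0&rsz=8' +
              '&q=' + encodeURIComponent(query) +
              '&start=' + start);
  }
  return urls;
}

var pageUrls = imageSearchUrls('typewriter');
console.log(pageUrls.length); // 8 pages of 8 results = 64 images max
```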
Finally, have a look at vispo.com/typewriter/Google_Image_Search4.htm. This example shows you how to control Google Image Search parameters. The source code is the same as the previous example except that it adds a set of dropdown menus, along with an extra function named setRestriction, which is called when the user selects a new value from one of the dropdowns.
There is a dropdown menu for all the controllable Image Search parameters except for the sitesearch restriction, which is simple enough if you understand the others.
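For a sense of how such a setRestriction-style helper might hang together, here is a hypothetical pure-function version that maps dropdown selections onto the old API’s parameter names (imgsz, imgtype, imgc, as_filetype, as_sitesearch). The live page would instead call imageSearch.setRestriction with the API’s constants, as sketched in the comment; the helper and its menu-value names are mine, not taken from the vispo.com source.

```javascript
// Hypothetical helper: collect dropdown selections into the old
// Image Search API's restriction parameters. The real page calls, e.g.:
//   imageSearch.setRestriction(
//       google.search.ImageSearch.RESTRICT_IMAGESIZE,
//       google.search.ImageSearch.IMAGESIZE_MEDIUM);
function buildRestrictions(menus) {
  var r = {};
  if (menus.size)     r.imgsz         = menus.size;     // 'icon', 'medium', ...
  if (menus.type)     r.imgtype       = menus.type;     // 'face', 'photo', 'clipart', ...
  if (menus.color)    r.imgc          = menus.color;    // 'color' or 'mono'
  if (menus.filetype) r.as_filetype   = menus.filetype; // 'jpg', 'png', 'gif', ...
  if (menus.site)     r.as_sitesearch = menus.site;     // e.g. 'vispo.com'
  return r;
}

console.log(buildRestrictions({ size: 'medium', site: 'vispo.com' }));
```

The sitesearch restriction is just one more entry in the same mapping, which is why it needs no dropdown of its own.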
Anyway, that ought to give you what you need to get up and running with the Google Image Search API.
HTML 5 has been publicized as an open-source replacement for Adobe’s proprietary Flash. In truth, HTML 5 is far less featureful than Flash concerning audio, video, imaging, text and much else. And there are currently no tools that let non-programmers work comfortably in HTML 5. It will take HTML 6 or 7, some years off, perhaps a decade, for HTML to approach the current featurefulness of Flash. But it’s coming along.
The most notable thing about HTML 5 is the <canvas> tag, which provides the ability to do interesting graphical operations. There are various programmerly commands available to draw stuff. HTML 5 also introduces a few audio commands, but nothing with the sophistication of Flash’s audio capabilities.
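For a taste of those drawing commands, here is a small sketch. In a browser you would get a real context via canvas.getContext('2d'); the stub context below merely records the calls so the example runs anywhere, but the same sequence draws a rectangle, a circle and some text on a real canvas.

```javascript
// A few of the <canvas> 2D drawing commands in sequence.
function drawScene(ctx) {
  ctx.fillStyle = '#336699';
  ctx.fillRect(0, 0, 300, 150);          // blue background
  ctx.beginPath();
  ctx.arc(150, 75, 40, 0, 2 * Math.PI);  // a circle in the middle
  ctx.fillStyle = '#ffffff';
  ctx.fill();
  ctx.font = '16px sans-serif';
  ctx.fillStyle = '#000000';
  ctx.fillText('HTML 5', 120, 80);       // text drawn onto the canvas
}

// Stub 2D context: records each method call it receives, so the
// sketch can run outside a browser.
function makeStubContext(log) {
  var stub = {}; // fillStyle and font become plain properties when assigned
  ['fillRect', 'beginPath', 'arc', 'fill', 'fillText'].forEach(function (name) {
    stub[name] = function () { log.push(name); };
  });
  return stub;
}

var calls = [];
drawScene(makeStubContext(calls));
console.log(calls.join(', ')); // fillRect, beginPath, arc, fill, fillText
```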
What we’re going to do is have a look at four recent pieces that use HTML 5 in interesting ways. And that work. Yes, some HTML 5 works. When new programming possibilities are introduced to a mass audience, you can bet there are going to be more than a few blue screens. I’ve only had one today while looking at new HTML 5 work, and not from any of the pieces below. These pieces ran well and were very rewarding to view.
The most interesting one, from an artistic perspective, is Arcade Fire’s interactive music video for their song “We Used to Wait” from their album The Suburbs, which won the Grammy for album of the year in 2011. The HTML 5 piece is called The Wilderness Downtown. This is quite impressive, really, from both a technical and an artistic point of view. And it goes along perfectly with the suburbs, if that’s where you’re from. I’ve seen online videos that use multiple browser windows for video before, such as in the work of Peter Horvath, but The Wilderness Downtown is also quite sophisticated in other ways. The programmed birds, for instance, and the way they move between windows. And alight on what you have drawn in the interactive writing piece. And the way they use Google Earth. Very strong work indeed. And, o yes, the music is pretty darn good too. Moreover, the touches I’ve mentioned are not gratuitous whiz-bang programming effects, but tie into a vision of the suburban experience that Arcade Fire has developed so very beautifully.
Gregory Chatonsky is a French/Canadian artist who has created a significant body of net art. Here are a couple of pieces of his I found that still work and are compelling:
“The Revolution Took Place in New York is a fictional story generated in real time from an internet source. A text generator gives shape to an infinite novel bearing close resemblance to the work “Projet pour une révolution à New York” written by Robbe-Grillet in 1970: Ben Saïd walks on the streets of the American metropolis and plots something. Some words are associated to video fragments, others to sounds gleaned on the network and others are automatically translated into images using Google. The structured association of these heterogeneous elements generates a narrative flow simultaneous with the network flow.”
Each time I’ve opened this piece, it’s been different. What surprised and charmed me most was how the narrative often made sense and kept me interested in where it was going. That is very unusual indeed in generative works. I’m referring to the text itself. But the way the text goes with the images was also, often, quite interesting.
About a year ago, John Cayley made a post on NetPoetic entitled “An Edge of Chaos”. In it he delimits a constraint-based networked-writing process: “Write into the Google search field with text delimited by quote marks until the sequence of words is not found. Record this sequence….”
A couple of weeks ago, I woke up with the idea of making a poem composed entirely of lines that returned no search results. “Wow”, I thought to myself, “what a great idea”. I had forgotten it was John’s idea.
If this situation had occurred in 2014 (for example), and on waking I had told the idea to my girlfriend, perhaps the instant-speech-checking, algorithmically networked microphone next to our bed would have immediately alerted me to my potential plagiarism. As it is, my memory had to slowly percolate John’s prescient precedent to the surface of my mind like a splinter.
Neuronal latency in the 21st century data avalanche is a vestigial design flaw that needs to be technologically cauterized.
Imagine that (while typing / while speaking), footnotes, bibliographies and source attributions immediately auto-generate, links sprout around text, and areas of uniqueness are spontaneously (and perhaps effortlessly) patented. The race to network becomes a race to brand segments of communication, to demarcate phrases of language, to colonize conjunctions of text in the same way attributions of authorship emerged from the book.
A writer becomes a sewer (sic pun) of uniqueness. Instead of quotation marks, a new grammar of overlapping links allows the subtlety of appropriated text’s multiple inheritances to Xanadu off towards diverse sources. Instead of Flarf, context-specific algorithmic grammars differentiate between semantically meaningful units of language and word-salad collage-spew net-wrack.
Dystopic singularity theories aside, an era of instantaneous, as-you-type network search is arriving. Google Instant is just one stride in the sprint toward word-processing software that automatically checks writing for repetition and rewards writing that is both meaningful and unique.