The googler is an application that runs “in the background” of a space, on a screen. It listens to the conversations in that space (via speech-recognition software) and serves up google image search results based on the words or sentence fragments it hears. Because the information comes from google, and the meaning that can be derived from searching for isolated words or fragments is limited, the application rarely manages to represent what is actually the subject of conversation. On the occasions when it does, it is very exciting (at least for everybody I have shown it to).
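That core loop can be sketched roughly as below. `recognize_speech` and `fetch_google_image` are hypothetical placeholders standing in for the speech-recognition and image-search components (neither is named in the post), stubbed out here so the loop logic itself is runnable:

```python
def recognize_speech():
    # Placeholder: a real version would yield transcribed text
    # from a microphone as the conversation unfolds.
    yield "we were just talking about volcanoes in iceland"

def fetch_google_image(query):
    # Placeholder: a real version would hit an image-search endpoint
    # and return a result to display on screen.
    return f"image-for:{query}"

def googler_loop():
    """Naively search an image for every word heard."""
    shown = []
    for fragment in recognize_speech():
        for word in fragment.split():
            shown.append(fetch_google_image(word))
    return shown

print(googler_loop()[:2])
```

Searching on every single word, as this naive version does, is exactly what makes the output mostly miss the subject of conversation.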
But the stream of images is nonstop, and not responsive to people’s requests or confusions. At times it is unclear what the program is doing; at other times it produces something seemingly meaningful. Such moments provide the sense of serendipity and coincidence you get when you overhear somebody talking about the very thing you were just saying . . . or when somebody appears just as you were thinking of them.
My intention was to provide this sort of serendipity through google search, while highlighting how ubiquitous it has become to turn to google for supporting facts, data, or information during conversations about pretty much anything.
There have been a lot of discussions at ITP lately about the significance of everybody turning to google for their answers. Sites such as “let me google it for you,” which let you send somebody a google search in response to a question, snidely suggest: what is the point of asking when you can just google it? In a sense, such sites suggest that google will always be the best source of answers to any of your questions. Well, kind of. Is there a danger or shortcoming in trusting google’s interpretation of, and responses to, our questions? What is the value of the constant “let me look that up”s and “can somebody google that?”s ubiquitous in today’s conversations? This program serves up google images corresponding to a conversation, so users get constant google feedback without having to conduct the search themselves. The images illustrate the conversation insofar as google manages to provide meaningful and relevant content.
In this first iteration, the googler is an ambient presence in a conversation, google-interpreting the content and attempting to provide google solutions or responses to it.
[wpvideo GQeFQAHx w=400]
For the next step, I want to exclude stop words and to search for synonyms in addition to the words themselves, mimicking the kinds of decisions we make when we choose our search phrases. I also want it to use N-gram probabilities to predict the likely Nth word to follow, and to search for that image, in an attempt to predict, or at least synchronously ‘speak’ with, the speaker. Finally, I want the searching to be done for several words at once, so that the images arrive in a constant flow: an array of several images at a time that slowly fade from the screen as the conversation moves on.
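The first two of those ideas can be sketched quickly. Everything here is an illustrative assumption, not the project’s actual code: the stop-word list and training sentence are stand-ins, and the predictor is the simplest possible N-gram model (a bigram, N=2) trained on sample text; synonym lookup is omitted.

```python
import re
from collections import Counter, defaultdict

# Illustrative stop-word list; a real one would be far longer.
STOP_WORDS = {"the", "a", "an", "and", "or", "but", "of", "to",
              "in", "on", "is", "it", "that", "this", "was"}

def search_terms(fragment):
    """Keep only the content words worth sending to image search."""
    words = re.findall(r"[a-z']+", fragment.lower())
    return [w for w in words if w not in STOP_WORDS]

def train_bigrams(corpus):
    """Count which word follows which in sample text."""
    model = defaultdict(Counter)
    tokens = re.findall(r"[a-z']+", corpus.lower())
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Most probable next word, or None if the word is unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat slept")
print(search_terms("the cat sat on the mat"))  # stop words culled
print(predict_next(model, "the"))              # most likely follower
```

A real version would train the model on a large corpus and search for the predicted word’s image before the speaker says it, which is what would let the googler appear to ‘speak’ in sync.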