Sketches for visual possibilities; the video link is the closest:
Looking Eye – lookingeye.org
Looking Eye presents the image that an eye receives and the image it creates. It attempts to explore the difference between the seen and the assumed. The viewer is able to manipulate the eye movement, and in that way engages with the language of the eyes, something we all employ but seldom acknowledge or understand.
Using eye-tracking technology, the video is captured along with data on the eye's movement around the scene, and presented as a collage of what the eye lands on, inviting the viewer to infer what is perceived, or to re-envision the eye movement.
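A minimal sketch of the collage idea, assuming gaze data arrives as (x, y) pixel points and frames as NumPy arrays; the patch size, data layout, and function names are all hypothetical, not the project's actual pipeline:

```python
# Sketch: build a collage of gaze-centered crops from video frames.
# Gaze points and frames here are hypothetical stand-ins for real
# eye-tracker output and camera footage.
import numpy as np

PATCH = 32  # half-size of the square crop around each gaze point

def gaze_patch(frame, x, y, half=PATCH):
    """Crop a square patch centered on the gaze point, clamped to the frame."""
    h, w = frame.shape[:2]
    x0, x1 = max(0, x - half), min(w, x + half)
    y0, y1 = max(0, y - half), min(h, y + half)
    return frame[y0:y1, x0:x1]

def collage(frames, gaze, cols=4, half=PATCH):
    """Tile fixed-size gaze patches left-to-right, top-to-bottom."""
    tiles = []
    for frame, (x, y) in zip(frames, gaze):
        p = gaze_patch(frame, x, y, half)
        # pad edge crops so every tile is the same size
        tile = np.zeros((2 * half, 2 * half) + frame.shape[2:], frame.dtype)
        tile[:p.shape[0], :p.shape[1]] = p
        tiles.append(tile)
    # drop any incomplete final row so every row stacks cleanly
    rows = [np.hstack(tiles[i:i + cols])
            for i in range(0, len(tiles), cols)
            if len(tiles[i:i + cols]) == cols]
    return np.vstack(rows) if rows else np.zeros((0, 0), frames[0].dtype)
```

The result is the "collage of what appears": only the small regions the eye actually visited, stripped of their surroundings.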
The language of the eyes, both in direct communication and in the way we use it to inform ourselves, is a huge topic. At first I was interested in addressing communication with the eyes, but I have discovered many interesting ideas and texts about using the eyes to study mental patterns, thought, and culture. There is a lot that can be revealed through the eyes, and explained through this language we are all familiar with.
Sloooow start. I have been overwhelmed by the different decisions and tasks; here are the conflicts that paralyze me:
The piece was originally supposed to be all live and dynamic, but I am really interested in the push toward a story arc.
I don’t know how to reconcile the two. I can’t let go of the live and dynamic, because it is the only way that eye tracking as editing works for me. I don’t see the point otherwise, though I am sure one could be found. Thinking of it this way, I should stick to the live idea and see what room emerges for eye tracking in the viewing, or for editing based on a story arc.
I have been trying to make it live on a small Windows tablet that is internet-enabled and has two USB ports: lamer than, but similar to, the Raspberry Pi idea, while the Raspberry Pi remains unavailable. I also heard about some good examples of eye tracking using only the camera video feeds from cell phones with two-way cameras, and have been working on that. And I am still hacking my Bluetooth cameras, because they are small and could work via a phone. I met with somebody who may be able to help me do it, but the process so far has been slow.
In this realm I’m torn over the meaning of the interaction. I am starting with a mouse-based version: moving the mouse is like moving the eyes, superseding the camera feed’s eye position. So if the video is showing a certain eye position, the viewer can change it. I think this will be good enough for now, but I want the viewer to be able to edit the footage, to store pieces of what is collected, and to arrange them.
And to combine it with the eye tracker that is attached to the computer.
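The mouse-supersedes-gaze interaction could be sketched as a simple arbitration rule; the timeout value and function names below are assumptions for illustration, not a committed design:

```python
# Sketch: let the viewer's mouse supersede the recorded eye position.
# If the mouse moved recently, the piece centers on the mouse;
# otherwise it falls back to the eye tracker's recorded gaze point.
# The timeout is a hypothetical tuning value.

MOUSE_TIMEOUT = 2.0  # seconds of mouse stillness before gaze takes over

def display_position(gaze_xy, mouse_xy, last_mouse_move_t, now,
                     timeout=MOUSE_TIMEOUT):
    """Return the (x, y) point the piece should center on right now."""
    if mouse_xy is not None and (now - last_mouse_move_t) < timeout:
        return mouse_xy   # viewer is steering: mouse wins
    return gaze_xy        # viewer idle: recorded eye movement plays
```

The nice property of this rule is that the live footage never stops: the viewer borrows control rather than taking it, and the recorded eye movement resumes as soon as they let go.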
Then to do one interview, where the eye-tracking data is synced after the fact with the (wide-angle) footage.
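Syncing after the fact could amount to matching each video frame with the gaze sample nearest in time; this sketch assumes both streams share a clock (or have been offset-corrected), and the data layout is hypothetical:

```python
# Sketch: align gaze samples to video frames after the fact, by
# nearest timestamp. Assumes both streams are sorted and share a
# common clock; the (t, x, y) layout is a hypothetical format.
import bisect

def sync_gaze_to_frames(frame_times, gaze_samples):
    """For each frame timestamp, pick the gaze sample nearest in time.

    frame_times: sorted list of frame timestamps (seconds)
    gaze_samples: sorted list of (t, x, y) gaze tuples
    Returns one (x, y) per frame.
    """
    gaze_times = [t for t, _, _ in gaze_samples]
    out = []
    for ft in frame_times:
        i = bisect.bisect_left(gaze_times, ft)
        # compare the neighbors on either side of the insertion point
        candidates = [j for j in (i - 1, i) if 0 <= j < len(gaze_samples)]
        best = min(candidates, key=lambda j: abs(gaze_times[j] - ft))
        out.append(gaze_samples[best][1:])
    return out
```

Because eye trackers typically sample faster than video frame rates, nearest-timestamp matching is usually good enough for overlaying gaze on wide-angle footage.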