Nicholas Sagan :: artworks and experiments :: nicholassagan@gmail.com

More Computer Vision

Coming on the heels of last semester’s successful project development, I find myself pushing the possibilities of a user-controlled, room-sized instrument. This time around I am using a second camera to create an XYZ system in which a person can move three-dimensionally to control sound variables. In this design, the video feeds from both cameras are fed to a PC with a frame-grabber card and then interpreted in the program Eyecon. The data that Eyecon collects (such as an object’s position or velocity within the grid) is sent to a Mac running Max 5 patches via OSC (Open Sound Control), a protocol that enables the transfer of data between networked machines over their IP addresses. Magical stuff, really…CNMAT at Berkeley is responsible for much of this…
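Under the hood, an OSC message is a small packet (usually UDP) containing a null-padded address pattern, a type-tag string, and big-endian arguments. Eyecon and Max handle all of this for you, but as a rough sketch of what the PC-to-Mac hand-off looks like on the wire, here is a minimal encoder in Python. The address `/tracker/xyz`, the port 9000, and the example coordinates are hypothetical, not the installation’s actual settings:

```python
import socket
import struct

def _pad(s: str) -> bytes:
    # OSC strings are null-terminated and zero-padded to a multiple of 4 bytes
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args: float) -> bytes:
    """Encode an OSC 1.0 message whose arguments are all float32."""
    msg = _pad(address)                 # address pattern, e.g. "/tracker/xyz"
    msg += _pad("," + "f" * len(args))  # type-tag string: one 'f' per float
    for a in args:
        msg += struct.pack(">f", a)     # big-endian 32-bit float
    return msg

# Send one tracked position over UDP (address, port, and values are made up)
packet = osc_message("/tracker/xyz", 1.5, 2.0, 0.75)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 9000))  # in the installation: the Mac's IP
sock.close()
```

On the Mac side, a Max patch listening with a `udpreceive` object on the same port would see the message and route it by its address pattern.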

Now, the variables of the sound really depend on what type of sound is hooked into the Mac side of the system. (Just to be clear: the PC gathers data -> sends it to the Mac via OSC -> the Mac uses the data to control sound.) The programming in Max will either use an on/off-type relay, in which certain locations in the XYZ grid trigger different sounds…which is essentially what I did with the silhouettes patcher last semester…OR (and this is the goal) have it control a synthesizer (such as granular synthesis) for real-time sound creation and manipulation. Moving away from the Z-axis camera (the horizontally-mounted one) will affect the volume level, and moving through the XY plane (ceiling-mounted camera) will control things like pitch, tone, and other funky modulations.
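To make that mapping concrete, here is a toy sketch of how the tracked XYZ position could be scaled into synth parameters. This is not the actual Max patch; the room dimensions, pitch range, and linear scaling are all made-up assumptions for illustration:

```python
def map_position_to_sound(x, y, z,
                          room=(4.0, 4.0, 3.0),
                          pitch_range=(110.0, 880.0)):
    """Scale a tracked XYZ position into (volume, pitch, timbre).

    x, y come from the ceiling-mounted camera; z is distance from the
    horizontally-mounted camera. `room` is a hypothetical tracking
    volume in metres; `pitch_range` is a hypothetical range in Hz.
    """
    w, d, h = room
    # Clamp the position into the tracking volume
    x = min(max(x, 0.0), w)
    y = min(max(y, 0.0), d)
    z = min(max(z, 0.0), h)
    volume = 1.0 - z / h                # moving away from the Z camera = quieter
    lo, hi = pitch_range
    pitch = lo + (x / w) * (hi - lo)    # X sweeps the pitch range
    timbre = y / d                      # Y as a 0..1 modulation amount
    return volume, pitch, timbre
```

Standing right at the Z-axis camera in one corner (`map_position_to_sound(0, 0, 0)`) gives full volume at the bottom of the pitch range; walking to the far corner sweeps the pitch up while the volume fades out.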
