Nicholas Sagan :: artworks and experiments :: nicholassagan@gmail.com

DGP Status Report

One thing that sets this semester’s progress apart from the last is where I started out. Last semester was where my understanding of computer vision began. While that whole semester was driven toward developing a single method of computer-vision control of sound, it was only one of many ways to do so. That is why this semester began with a bit of a head start: I had a much clearer understanding of the capabilities of computer vision and more freedom to explore what is possible.

I began by investigating another computer-vision program, EyeCon, and worked toward getting that system to communicate with Max/MSP. Running the two systems on two separate computers affords a little more flexibility and shows me the potential of this approach. Using a network protocol, Open Sound Control (OSC), to pass information between systems allows a great distance to exist between machines.
But in my case they are in the same room. With the system set up this way I’ve been able to tweak the sensitivity of certain variables. At first a single camera was able to detect the point where a “line” was broken. Since the camera was (and still is) mounted on the ceiling, this “line” is essentially a vertical plane or barrier. The point of breakage corresponded to a keyboard note in a set range. Exploring the sensitivity of the detection methods was the primary goal of this element of the project.
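As a rough sketch of what actually travels between the two machines, here is a minimal Python example of packing a single break-point value into an OSC message. The address `/eyecon/line1/break` and the value are made-up illustrations, not EyeCon’s actual output; the byte layout (null-padded address, `,f` type tag, big-endian float32) follows the OSC 1.0 specification.

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-pad to a multiple of 4 bytes, as OSC requires (at least one NUL)."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Encode a single-float OSC message: padded address, type tag ',f', float32."""
    return (osc_pad(address.encode()) +
            osc_pad(b",f") +
            struct.pack(">f", value))

# Hypothetical address for the point where the "line" is broken:
packet = osc_message("/eyecon/line1/break", 0.42)
# The packet would then go out over UDP with socket.sendto(...)
```

In practice Max/MSP receives this on the other end through its `udpreceive` object, so none of the byte-level work has to be done by hand; the sketch just shows why the two machines only need a network path between them, not physical proximity.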

Moving further, I was able to add a method of detection that changes the volume of the tone or note played by measuring a) how far one is from the camera and b) how large a given control object is. As an example of the latter, the sound becomes louder as I spread my arms open, making the “object” larger. Each method of control, volume and frequency, sends out a set of numbers via OSC that can then be interpreted by Max/MSP, which again has been one of the main goals.
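The mapping behind both controls is a simple range conversion. This sketch uses invented numbers (a 320-pixel frame, a blob area in pixels, a C3–C5 note range) just to show the shape of it:

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map value from [in_lo, in_hi] to [out_lo, out_hi], clamped at the edges."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

break_x = 160    # hypothetical break position along the camera's "line"
area = 10250     # hypothetical pixel area of the tracked "object"

note = round(scale(break_x, 0, 320, 48, 72))   # keyboard note in a set range (C3..C5)
volume = scale(area, 500, 20000, 0.0, 1.0)     # larger object -> louder
```

Clamping matters here: camera data is noisy, and values outside the expected range should pin to the nearest edge of the musical range rather than produce notes off the keyboard.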

I am now in the process of properly scaling that data to a level of sensitivity appropriate to each of the various sound generators in Max/MSP. One of these is a simple oscillator-style synthesizer; using the two object-detection methods mentioned above, I was able to annoy people in the next room by “revving my engine.” This sound generator in Max was also a beneficial stepping-stone because the EyeCon synth is very, and I mean very, basic. What I am aiming to do requires something with a little more girth. The Max synth is able to move beyond a ‘do-re-mi-fa-sol-la-ti-do’ kind of mode into a ‘doremifasolati’ kind of slur, which is pretty cool once it gets loud.
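The difference between the stepped mode and the slur comes down to whether the note number gets rounded before being turned into a frequency. A small sketch, assuming equal temperament and a hypothetical normalized camera value:

```python
def midi_to_hz(note: float) -> float:
    """Equal-tempered pitch: A4 (MIDI 69) = 440 Hz; fractional notes give in-between pitches."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

position = 0.37              # normalized control value from the camera (hypothetical)
note = 48 + position * 24    # sweep two octaves upward from C3

stepped = midi_to_hz(round(note))   # quantized 'do-re-mi' mode
slurred = midi_to_hz(note)          # continuous glissando, the slur
```

Feeding the fractional value straight into the oscillator’s frequency input is what lets the pitch glide continuously as the body moves, instead of clicking from semitone to semitone.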

Other sound and noise generators I plan to work with are the granular synthesis object and the noise object, which can control various frequencies within a white-noise player. Conceptually, both of these objects make sense because I’d like to keep exploring the manipulation of time (as in space-time) as performance, and white noise is what the cosmic background radiation sounds like. This is how they fit into my line of inquiry.
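To give a feel for what “controlling frequencies within white noise” means, here is a minimal stand-in for Max’s noise source plus a filter: a white-noise buffer run through a one-pole lowpass, where the coefficient `a` plays the role of the control value coming from the camera. This is an illustration of the idea only, not how the Max objects are implemented.

```python
import random

def white_noise(n: int, seed: int = 1) -> list:
    """Uniform white noise in [-1, 1]; seeded so the sketch is reproducible."""
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n)]

def one_pole_lowpass(samples, a=0.05):
    """y[n] = y[n-1] + a * (x[n] - y[n-1]); smaller a passes less high-frequency energy."""
    y, out = 0.0, []
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

noise = white_noise(44100)          # one second of noise at 44.1 kHz
shaped = one_pole_lowpass(noise)    # "darker" noise, highs attenuated
```

Sweeping `a` from a gesture would brighten or darken the hiss in real time, which is the kind of control the camera data could drive.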

As I work through each of these, I find that this work is by nature a performative process. It’s not really something that can be programmed and then turned on to do what it will. It must be activated first by the developer, myself, to make sure it’s doing what I want, and then by whoever is going to experience it. Working toward the latter is the challenge that waits within my thesis project. However, I am not eliminating the possibility that my thesis could be a performance, because it certainly looks as if the presentation of this semester’s project will be.
