.::..Nicholas Sagan..::. …:.::..artworks and experiments…::.:…nicholassagan@gmail.com

DGP Update: Final working ideas for the semester

Finally got around to doing another update on my progress.  It has been slow, admittedly, but that’s what happens during the “evaluation” phase.  It does feel good to get some solid ‘performative’ documentation done, to at least show how the damn system works.  To describe what is going on here, it helps to break it down into two sets of control functions (the outcomes the control variables produce) and two cameras (the control surfaces, or interfaces), for a total of four performative variations.

The first set of control functions is a simple oscillator, built in the Max environment.  The camera views from EyeCon (an overhead shot and a table-grid shot) detect variables such as object size, direction of movement (arrow), and object position along the center line.  The size of the object correlates to the volume of the generated tone.  The frequency (in half-step intervals) is determined by the point of contact between the object and the center line: the further up the line, the higher the pitch, and the wider your arms are (or the closer you are to the camera), the louder the sound.
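The actual patch lives in Max, but the mapping described above can be sketched in pseudocode terms.  This is my own illustration, not the patch itself; the function names, the base pitch, and the size range are all assumptions:

```python
BASE_FREQ = 220.0  # assumed base pitch (A3); the real patch may differ

def pitch_from_position(pos, num_steps=24):
    # pos: 0.0 (bottom of the center line) .. 1.0 (top)
    semitone = round(pos * num_steps)          # quantize to half-step intervals
    return BASE_FREQ * 2 ** (semitone / 12.0)  # equal temperament: further up = higher

def volume_from_size(size, min_size=0.05, max_size=0.6):
    # size: detected object size as a fraction of the frame;
    # wider arms / closer to the camera -> larger size -> louder
    size = max(min_size, min(size, max_size))
    return (size - min_size) / (max_size - min_size)  # normalized gain, 0.0 .. 1.0
```

In this sketch the bottom of the line gives the base pitch and the top gives two octaves up, with everything snapped to semitones on the way.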

This also works for the table-top grid setup.  As an object (a hand, in this case) moves across the field of view, it and its shadow are detected to determine volume.  Since this “theater” is smaller than the overhead view, the range of volume is a little more limited, though a better lighting position could cure that ailment.  What I didn’t do is put markers on the center line that correlate to specific pitches or intervals.  However, the setup makes up for that “lack of musicality” with its wider range of notes…

Some issues with this version: the object-size detection field is a bit touchy.  That is, sometimes it doesn’t like the big moving object and will either latch onto some inanimate object or lose it completely, rendering the volume control mute.  There are a few different methods for determining the correct sensitivity thresholds, but I have yet to find a good balance between them.  At this point there is too steep a ramp between the small and large sizes, creating a slight pop and crackle when moving between them.  Still works pretty well for “evaluative” purposes, though.
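One standard cure for that pop and crackle is to slew-limit the control value, so a sudden jump in detected size turns into a short ramp in volume.  A minimal sketch of the idea (my own, not anything currently in the patch; `max_step` is an assumed tuning knob):

```python
class SlewLimiter:
    """Limit how fast a control value may change per update,
    smoothing sudden jumps (e.g. when size detection latches on or drops out)."""

    def __init__(self, max_step=0.05, start=0.0):
        self.max_step = max_step  # largest allowed change per tick
        self.value = start

    def step(self, target):
        delta = target - self.value
        # clamp the change to +/- max_step per tick
        delta = max(-self.max_step, min(delta, self.max_step))
        self.value += delta
        return self.value
```

With `max_step=0.1`, a jump from silence to full volume spreads over ten updates instead of one, which is usually enough to hide the click.  In Max the same job is typically done with a `line~`-style ramp.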

The second set of controls is what is really fun to play with, and it goes by the name of granular synthesis.  If you haven’t come across it before, it basically gives you a number of control variables over a sound sample.  For example, there is a control that lets me pick a set duration of the sample; a control for playback speed; another for pitch; another for volume; another for cues; and so on.  You can also do many of these things with the sfplay~ object, but it takes just a few more patch cords.  Anyway, this second set of OSC controls from EyeCon is set up to control pitch, direction of playback, and position in the sample.
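For anyone who hasn’t met granular synthesis before: the sample is chopped into short windowed “grains,” and each grain can be read at its own pitch, position, and direction.  Here is a toy grain reader in that spirit, purely my own sketch of the concept and not the Max patch or EyeCon’s output; all names and defaults are assumptions:

```python
import math

def granulate(sample, grain_len, position, pitch=1.0, reverse=False):
    """Read one grain (list of floats) from `sample`.
    position: 0.0..1.0 start point in the sample (the 'cue')
    pitch:    playback-rate multiplier (2.0 = up an octave)
    reverse:  play the grain backwards"""
    start = int(position * (len(sample) - 1))
    step = pitch if not reverse else -pitch
    grain = []
    idx = float(start)
    for n in range(grain_len):
        i = int(idx) % len(sample)  # wrap reads around the sample
        # Hann window so each grain fades in and out without clicks
        w = 0.5 - 0.5 * math.cos(2 * math.pi * n / grain_len)
        grain.append(sample[i] * w)
        idx += step
    return grain
```

The three parameters EyeCon drives here (pitch, direction, position) map directly onto `pitch`, `reverse`, and `position`; a real granulator just fires many such grains per second, overlapped.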

I’ve also been working on another set of controls within EyeCon that can track two or more objects within a space and locate a sound based on their positions…but more on that later.  Here are some screen shots for now…
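I won’t guess at how that patch will actually work, but the simplest version of locating a sound by position is equal-power stereo panning from an object’s horizontal coordinate.  A sketch of that one idea, as an assumption about the easy case only:

```python
import math

def equal_power_pan(x):
    """x: horizontal position, 0.0 (hard left) .. 1.0 (hard right).
    Returns (left_gain, right_gain) with roughly constant perceived power."""
    theta = x * math.pi / 2
    return math.cos(theta), math.sin(theta)
```

At center (`x = 0.5`) both channels sit at about 0.707 rather than 0.5, which avoids the volume dip of naive linear panning.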

