
Experimental Astrocartography Update

VISIT WAKING THE INVISIBLE

The following text is my progress report for an installation I am working on.  It will consist of a computer vision system used to create a sonically interactive environment.  The structure of the space will be a representation of a map of dark matter developed in 2007 by cosmologists at Caltech.

In the production of my work, delays in acquiring materials pushed the timeline into a much tighter form.  However, not having EyesWeb and the framegrabber card set up within the first couple of weeks was only a slight setback.  The holding pattern I was put into when the PC did not arrive as soon as we had hoped was not completely detrimental.  Using the existing Mac to explore the possibilities Max/MSP provides, I was able to determine what is actually necessary for the final work I intend to be the outcome of this project.

By experimenting with and tweaking various patches that were pre-made and distributed by other artists and programmers, I was able to settle on a system around which to structure the technical side of the piece.  The first patches I tooled around with were the centroid and blob tracking patches.  Both can be used to process the dynamic qualities of the fiber arrays when they are interacted with.

There are two modes that I hope to continue to develop.  The first will treat each bundle of fibers as a separate ‘blob’ or ‘centroid’ and track its center as a series of XY coordinate outputs, which can be visually represented using a pict slider in Max/MSP.  That pict slider is one of the tools I used in a previous Max patch to track the movement of a sound through an XY system.  In this project I hope to use that XY data to trigger certain sonic events, not necessarily to locate them.  The triggers can be something like: ‘if x and y <= 5, play sound 1; if x and y >= 80, play sound 2,’ and so on.
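To make that threshold idea concrete, here is a minimal Python-style sketch of the logic.  The real piece does this inside a Max/MSP patch; the coordinate ranges and sound names below are placeholders, not values from the installation.

```python
# Hypothetical sketch of the XY trigger logic described above.
# Thresholds and cue names are placeholders, not values from the patch.

def trigger_for(x, y):
    """Map a tracked centroid position (0-100 on each axis) to a sound cue."""
    if x <= 5 and y <= 5:
        return "sound_1"   # bundle pushed toward one corner of the grid
    if x >= 80 and y >= 80:
        return "sound_2"   # bundle pushed toward the opposite corner
    return None            # no defined event; the ambient layer keeps playing

# Example: a centroid reported at (83, 91) would fire "sound_2".
print(trigger_for(83, 91))
```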

Another trigger method, most likely used in conjunction with the first (within the same mode), could track the number of centroids within the camera’s field of view and respond to how many remain “visible”.  For example, if someone steps directly between the camera and the light clusters, all of them would effectively be blocked.  A viewer moving through the center of the piece would not only activate the triggers bound to the locations of the bundles, but also block out sections toward the far end of the field of view, occluding a certain number of centroids and triggering other sounds.
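A rough sketch of that second trigger method, again in Python for illustration only; the counts, thresholds and cue names are made up rather than taken from the actual patch.

```python
# Hypothetical sketch of the occlusion-count trigger: react to how many
# centroids the camera can still "see".  All numbers here are placeholders.

def occlusion_cue(visible_centroids, total_centroids=150):
    """Choose a cue based on how many light clusters are currently blocked."""
    blocked = total_centroids - visible_centroids
    if blocked == 0:
        return None            # nothing occluded, no extra sound
    if blocked < 10:
        return "sound_near"    # viewer brushing past a few bundles
    return "sound_deep"        # viewer standing between camera and many clusters

print(occlusion_cue(visible_centroids=135))  # -> "sound_deep"
```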

This first mode of computer vision will use blob and centroid tracking as a method for triggering more defined sonic events, such as solar flares, pulsars and singing black holes.  These events will only happen if the viewer chooses to interact with the piece because, as I mentioned before, this set of triggers is bound to the movement of the clusters of light, not to their stillness.

The second mode, which I hope I’ve built up enough suspense for, will drive the second level of sonic interactivity.  The method of computer vision I want to use here is the silhouettes patch.  The first component of this patch uses background subtraction and frame differencing.  The patch takes a snapshot of the camera’s view and then inserts a small form, at this early stage a circle, somewhere within that XY grid.  Using the frame differencing component, the patch can detect when a form moves into the circle that was set with the background subtraction function.  Once this patch is connected to a sound file, the “finding” of a form, i.e. the viewer, within that circle can trigger a sound from a predetermined library.
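For illustration, here is a rough Python/OpenCV equivalent of that idea: grab a background frame, define a small circular region in the view, and watch it with frame differencing.  The installation itself uses the Max/MSP silhouettes patch; the camera index, circle position and thresholds below are assumptions.

```python
# Illustrative stand-in for the silhouettes idea: background subtraction
# plus frame differencing inside one circular "hot spot".

import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # camera index is an assumption
_, background = cap.read()
background = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)

# Circular region inserted into the XY grid of the camera view.
mask = np.zeros(background.shape, dtype=np.uint8)
cv2.circle(mask, (320, 240), 40, 255, -1)      # placeholder center and radius

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, background)       # background subtraction
    _, changed = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
    # How much of the circle is now covered by a new form (e.g. a viewer)?
    hits = cv2.countNonZero(cv2.bitwise_and(changed, mask))
    if hits > 500:                             # arbitrary trigger threshold
        print("form detected in circle -> play sound from library")
```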

It is my goal to run these systems individually on two computers and have them speak to each other using OSC, so that the sound library and Reason software located on one computer can be shared.  If I have one computer set up as the audio output generator, I can use a single quad set of studio monitors.  So: two computers with one camera each that talk to each other, with one of those computers connected to four speakers.  All controlled by the movement of the stars…
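As a sketch of that hand-off, the sending side could look something like the following, using the python-osc library as a stand-in for whichever environment does the vision processing.  The IP address, port and OSC address pattern are placeholders; the patch on the audio computer would simply listen for the same address.

```python
# Hypothetical OSC hand-off from the vision computer to the audio computer.

from pythonosc.udp_client import SimpleUDPClient

audio_computer = SimpleUDPClient("192.168.1.2", 8000)   # assumed IP and port

def send_trigger(cue_name):
    """Tell the audio computer which sound in the library to play."""
    audio_computer.send_message("/waking/trigger", cue_name)

send_trigger("sound_2")   # e.g. fired when a bundle crosses a threshold
```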

Space and Place Solo Installation and Directed Graduate Projects Final

Title: Waking the Invisible

Location: 916 S Wabash, Installation Lab Room 207

Open Dates: December 10th-17th

Project Description

I hope that the spectacle and experience of this piece will reawaken the natural philosophical inquisitiveness of humanity’s inner child, to recreate that first aesthetic experience of looking to the stars on a crisp, dark night.

This installation is a sculptural representation of dark matter.  While dark matter itself will not be directly represented, clusters of fiber optic lighting will define its outlined structure.  The viewer will be invited to experience the piece both as a sculptural element and as a space that acts as a musical instrument of sorts.  General movements of silhouettes will govern the nature of the ambient sound, a library of samples taken from radio astronomy.  Direct interaction with the form of the clusters will trigger more defined audio events.

However, while a viewer has the ability to walk through the piece, it will not be permanently disrupted.  No matter what is done to affect the immediate reality of the space (physical, meta, etc.), the representation of the vast scale of what dark matter is and does will always revert to its original, natural form.  The scale of the actions of a single person will be smoothed out by time and gravity as the fibers and sounds fall back into equilibrium, at least until the next viewer engages the space.

While the technical aspects of this are important, they serve as instruments for getting the viewer to turn on to the idea that the scale of the human condition is being set against a cosmological constant.  This is meant to be an exploration of our relationship to the physical world in immediate, tactile, empirical and aesthetic ways, set against deeply cosmological ideas.  The two layers of audio will also help to make the distinction of scales: human vs. cosmological, the unknowable of the soul against the unknowable of the cosmos.  The question is not how to place the viewer into the role of dark matter, but how to understand the analogy among the elements within the piece.

As the viewer enters the space, the first form of sonic reaction will be subtle and ambient.  Those low-volume events, in the form of rumbling and static (cosmic background radiation), will be more durational and controllable by movements around the piece.  If the viewer chooses to interact more directly with the form, they will trigger the more specific sonic events, which would include various radio transmission bursts from astronauts, pulsars, planets and other celestial bodies.

Technical Description

The main form of this installation is created using approximately 500’ of fiber optic lighting arranged into approximately 150 bundles of 50 fibers.  Each bundle is attached to a ceiling grid at one bundle per two square feet and hung at various lengths between 1’ and 6’ in order to fill the cubic space of the room.  The math works out to close to 8,000 individual points of light within a 26’ x 18’ x 9’ space.  Each bundle will be lit with an array of white LEDs, which are also the only light source in the black-boxed room.

The audio content will be generated by a computer vision system powered by Max/MSP and Reason.  The sound library is stored in Reason and “played” via Max.  The system will detect the movement of viewers’ silhouettes against the stars that lie across the field of view.  Each cluster of stars will also act as a reference point for the computer vision system, so that both the viewers’ movement and their interaction with the clusters will trigger sonic events.  The former will function as a trigger for ambient sound while the latter will initiate more defined sonic events.

The programming and sound output are managed through either one or two Mac Minis out to a four-channel audio interface and then to four studio monitors.

Schedule

October 28th-November 5th: Order materials, draft proposal, create PR materials, develop programming (ongoing), and develop digital models

November 5th: Proposal, Equipment List and PR Materials DUE

November 6th-December 5th: Develop programming, models, and components

December 5th: Move into space and begin installation

December 10th: Soft opening

December 11th: Hard opening

December 17th: Closing and Critiques (Directed Graduate Projects in the afternoon and Space and Place in the evening)
