Sound Flower Pickens

We started the day with a quick jaunt to Pensacola Beach and Fort Pickens via the Scenic Highway. This is something we had wanted to do during our stay but had never quite found the time. On our way there we spotted the miniature Taj Mahal that we had been told about. Somehow in our minds we had pictured it bigger, but the earnest kitsch quality of it was still beautiful.

The sand here is indeed as white as people had told us. Like snow really (and more on that later). We had thought that the Fort was built by the Spanish, but in fact it was built by the US government and was used mostly to ward off Confederates. We don’t know the full history of the Fort yet because it was super cold today and we were way too chilly to read all the panels.

We rushed back to the lab for noon because we had scheduled a bowl recording session with Noah and Anastasia. Noah was ready with a series of bowls that he had made and filled with water to achieve certain tones. We collected various implements to bang on the bowls and recorded them on their own, as a collective, and as a collective with the Epic Walk composition by Simon Overstall. All of it was recorded in Ambisonic format using the Rode 360 microphone. We listened to portions of it later on using headphones (binaural) and it was delightful. Definitely led to some ideas about interactivity and sound.

We couldn’t do a full walk today but decided that it might be nice if the progressive scan ended on a colourful note so we went out on campus and collected a variety of small flowers, among them some cherry blossoms just starting to bloom — a fitting last gift to circle us back to Vancouver. We decided to let the scanners go for as long as we could and to do another trip to Fort Pickens in the meantime.

The second time around we had a bit more time and it was slightly warmer, so we strolled through the Fort batteries and made our way onto the beach, past the bird nesting grounds, where we are told we should leave the birds alone because they need space to forage, loaf, and court. These seem to us like exactly the kind of values we should keep in mind for humans — let us all create new human habitats that optimize for foraging, loafing, and courting. Life would be better for all.

Coming back, we decided to keep driving along the spit, but about a third of the way in we realized it would take us way too long to get back to the University to pack up our studio. We had to do a U-turn! No problem. We have lots of experience with those by now. But as it turns out, the sand here not only looks like snow but also behaves a little bit like snow. As we tried to edge Loblolly the car back onto the road from the sand-flanked shoulder, we heard the sound of spinning wheels. Stuck in the sand in Pensacola on a quite deserted stretch of road. Maria tried to push the car from both sides to no avail. A car passed and stopped! Sean (a local man) had some straps that he used to link our cars together and try to pull us out. But the force of the pull snapped the straps without any kind of progress for Loblolly. We were about to try again with a slightly different technique when Dave and Donna from Kentucky stopped with their large pickup truck. They had a chain and a hitch and were able to pull us out without any problem. Meeting our embarrassment at being stuck in the sand, they said “it happens”. We thought they were just being polite southerners, but in fact, apparently it does happen regularly here. In any case, we were charmed by the super helpful people who stopped and never made us feel stupid about any of it. We have no photo documentation of this adventure, unfortunately.

Back at the University we took a look at the last few snippets from the progressive scan and the Camellia scan, printed a couple with the laser printer, and pasted them in the notebook. We regretfully stopped both scans and started to pack up everything, leaving little traces of our presence here and there as bouquets, still lifes, and best-of vegetation. It has been a beautiful time here and we will miss the studio, the people, the walks, the bayou. More on the learnings in a later blog post.


I Am Afraid

For the last 5 months or so, I’ve been working on a new networked social VR application called I Am Afraid. This application brings together ideas that have been on my mind for over a year, perhaps longer. Ideas around voice, poetry, sculpture, and performance. Many people asked where the idea for IAA came from and I am surprised that I can’t remember the moment, or a moment, when I decided I wanted to see words and play with sounds in VR. I do know there have been lots of inspirations along the way, including the work I did with Greg Judelman in flowergarden, voice work with my friend Matthew Spears, clowning, theatre, friendly poets (Andrew Klobucar, Glen Lowry), sound artists (Simon Overstall, Julie Andreyev, prOphecy Sun), etc.

The basic idea is to build sound compositions and sculptures out of textual and abstract objects that have embedded recorded sounds. When you are in the environment, you can speak words and have them appear as textual objects, and utter sounds that appear as abstract objects. Both kinds of objects contain the sound of your voice, which can be played back by intersecting with them and replayed in a variety of ways. The textual objects can be played in parts, at any speed and in either direction, using granular synthesis. The abstract sound objects can be looped. Paths through the objects can be recorded and looped, so layered soundscapes can be built up. The objects can also be interacted with in other ways, like shaking and moving, which alters the sound quality. Other actions are also planned, fleshing out a longstanding idea around a sonification engine based on the physicality of interaction with words.
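To give a rough sense of the granular playback described above — a simplified sketch, not the actual IAA implementation (the function and its parameters are invented for illustration) — a recorded voice buffer can be scrubbed in parts, at any speed or direction, by summing short overlapping windowed grains read from a moving position in the source:

```python
import numpy as np

def granular_scrub(buffer, position, speed=1.0, grain_size=2048,
                   hop=512, n_grains=8):
    """Resynthesize audio around `position` (in samples) by summing
    overlapping Hann-windowed grains. Negative `speed` scrubs backwards."""
    window = np.hanning(grain_size)
    out = np.zeros(hop * (n_grains - 1) + grain_size)
    for i in range(n_grains):
        # Each grain reads further along (or back through) the source,
        # while the output always advances by one hop per grain.
        start = int(position + i * hop * speed)
        start = max(0, min(start, len(buffer) - grain_size))
        out[i * hop:i * hop + grain_size] += buffer[start:start + grain_size] * window
    return out
```

Replaying the same stretch of buffer at `speed=0` freezes a syllable in place, while `speed=-1.0` plays it in reverse — the kind of manipulation that makes a spoken word behave like a sculptural material.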

I am often asked why the application is called I Am Afraid. As I was starting work on the application in January, I could sense an escalation of fear in the world, in my surroundings. I have been exploring fear for the last 17 years through different paths, including meditation and art. One of the features of fear is that when we feel it, when it grips us, we start talking to ourselves. This is a bit of a trap, because we get more and more removed from what is actually going on. One of the goals of IAA is to externalize that discursiveness and be playful with the words and sounds. It can be a way to lighten up, see things more clearly, and shift the internal dialogue. And it’s fun.

I used the application during my TEDxECUAD talk last March, which was about fear and technology. I’ve also used it in a performance at AR in Action, a conference at NYU, at the beginning of June. It’s a great environment for performance (solo or group), exploration, and composition. I’ll be working on it for some time to come, adding features and (hopefully soon) porting it to Augmented Reality.


ObjectACTs Residency : Day 5

On Day 5 we did two more takes of the multiview object performance. A stronger and bigger paper structure was built, the lighting was changed slightly, and the performance of the camera was quite a bit longer. One of the things I didn’t mention in my last post is that the 360 camera image is upside down, because the camera is attached to the rig from the bottom and hung by four wires from the ceiling. When we viewed the footage from the Thursday test on the GearVR (upside down), it was surprisingly interesting and not as disturbing as you would think. The camera shake was interesting too, helping to enter into the perspective of the observing, scrutinizing camera. Still, we will be reversing the camera footage to properly assess the differences between the two views.

In some ways the takes on Thursday were a bit better, because the lighter paper structure put up more of an even fight with the camera, which made the camera a little less shaky. In Thursday’s takes we also had less integration between the bystanders and the object actors. Two unexpected things happened during Friday’s takes: during take 1, the 360 camera knocked over one of the lights, and during take 2, the camera itself became detached from the rig and fell (only from about 2 inches off the ground, thankfully).

Here are a few pics and a video from the performance.

Paper Structure



ObjectACTs Residency : Day 2

On day two we spent some time discussing how we might create a performance that would include the perspective of multiple actors, including those non-human and non-personified.

Situation Rooms by Rimini Protokoll


The example of Situation Rooms by Rimini Protokoll came up. In this theatre work, participants (~20) wearing headphones and carrying iPads are directed to perform specific actions on a set made up of several different rooms. The participants are separated, and their actions are synchronized so that they sometimes interact with one another. The topic of the story is arms dealing. A detailed description of the rooms can be found in the Ruhr Triennale catalogue.


Kim showed us some of the environments she created using Roller Coaster Tycoon editor.

Image made with RCT


She explained the modelling of the terrain as “scooping up dirt”, which had a really nice resonance with the object clumps we had been discussing. I love the floating islands and wondered if we could somehow fit the concept of a roller coaster into the project to get around the fact that we can’t export from the RCT editor.

We also tested the Structure Sensor to see if we could get workable scans of some of the heart trinkets that Catherine brought to Vancouver. It turns out the objects were hard to scan because of their small size and material properties (too reflective and transparent). Still, one of the scans ended up intriguing enough that we may use it as a prototype or stand-in.

Here is the first working scan we got of a small rock heart. If you are viewing this on an iPhone and you want to use Google cardboard, use this link.
