Maria Lantin | performance

I Am Afraid

For the last 5 months or so, I’ve been working on a new networked social VR application called I Am Afraid. This application brings together ideas that have been on my mind for over a year, perhaps longer: ideas around voice, poetry, sculpture, and performance. Many people have asked where the idea for IAA came from, and I am surprised that I can’t remember the moment, or a moment, when I decided I wanted to see words and play with sounds in VR. I do know there have been lots of inspirations along the way, including the work I did with Greg Judelman in flowergarden, voice work with my friend Matthew Spears, clowning, theatre, friendly poets (Andrew Klobucar, Glen Lowry), sound artists (Simon Overstall, Julie Andreyev, prOphecy Sun), etc.

The basic idea is to build sound compositions and sculptures out of textual and abstract objects that have embedded recorded sounds. When you are in the environment, you can speak words and have them appear as textual objects, or utter sounds that appear as abstract objects. Both kinds of objects contain the sound of your voice, and intersecting with an object plays its sound back. Textual objects can be played in parts, at any speed and in either direction, using granular synthesis; abstract sound objects can be looped. Paths through the objects can be recorded and looped, and in this way layered soundscapes can be created. The objects can also be interacted with in other ways, such as shaking and moving, which alters the sound quality. More actions are planned, fleshing out a longstanding idea for a sonification engine based on the physicality of interacting with words.
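The granular playback idea can be sketched in a few lines. This is only an illustration of the technique, not the actual IAA implementation: assume a recording is a flat list of audio samples, and that `granular_playback` (a hypothetical helper) steps a short grain window through it at a chosen speed and direction.

```python
def granular_playback(samples, grain_size=2048, speed=1.0, direction=1):
    """Yield grains (short slices) of a recorded buffer.

    The playback position advances roughly grain_size * speed samples
    per grain, moving forwards (direction=1) or backwards (direction=-1).
    Speeds below 1.0 produce overlapping grains, stretching the sound.
    """
    pos = 0 if direction > 0 else len(samples) - grain_size
    # Guard against a zero step so very low speeds still advance.
    step = max(1, int(grain_size * speed)) * (1 if direction > 0 else -1)
    while 0 <= pos <= len(samples) - grain_size:
        yield samples[pos:pos + grain_size]
        pos += step
```

In a real engine each grain would also get an amplitude envelope before mixing, so grain boundaries don’t click; the sketch just shows how speed and direction fall out of how the grain window is stepped through the buffer.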

I am often asked why the application is called I Am Afraid. As I was starting work on the application in January, I could sense an escalation of fear in the world, in my surroundings. I have been exploring fear for the last 17 years through different paths, including meditation and art. One of the features of fear is that when we feel it, when it grips us, we start talking to ourselves. This is a bit of a trap, because we get more and more removed from what is actually going on. One of the goals of IAA is to externalize that discursiveness and be playful with the words and sounds. It can be a way to lighten up, see things more clearly, and shift the internal dialogue. And it’s fun.

I used the application during my TEDxECUAD talk last March, which was about fear and technology. I’ve also used it in a performance at AR in Action, a conference at NYU at the beginning of June. It’s a great environment for performance (solo or group), exploration, and composition. I’ll be working on it for some time to come, adding features and (hopefully soon) porting it to Augmented Reality.


ObjectACTs Residency : Day 5

On Day 5 we did two more takes of the multiview object performance. A stronger and bigger paper structure was built, the lighting was changed slightly, and the performance of the camera was quite a bit longer. One of the things I didn’t mention in my last post is that the 360 camera image is upside down, because the camera is attached to the rig from the bottom and hung by four wires from the ceiling. When we viewed the footage from the Thursday test on the GearVR (upside down), it was surprisingly interesting and not as disturbing as you would think. The camera shake was interesting too, helping us enter into the perspective of the observing, scrutinizing camera. Still, we will be reversing the camera footage to properly assess the differences between the two views.
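Reversing the footage is simpler than it might seem: for an equirectangular 360 frame, a camera mounted upside down (a 180° roll) maps pitch to its negative and yaw to its negative, which amounts to rotating the whole image 180°. A minimal sketch, treating a frame as a nested list of pixel rows; `rotate_180` is a hypothetical helper, not part of any actual pipeline we used:

```python
def rotate_180(frame):
    """Rotate an image 180 degrees: reverse the row order, then
    reverse each row. For equirectangular 360 video this corrects
    footage from a camera mounted upside down (a 180-degree roll)."""
    return [row[::-1] for row in frame[::-1]]
```

Applying it twice returns the original frame, which is a quick sanity check that the operation is its own inverse. In practice a tool like a video editor or a command-line filter would do this per frame; the point is only that no resampling or warping is needed.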

In some ways the takes on Thursday were a bit better, because the lighter paper structure put up more of an even fight with the camera, which made the camera a little less shaky. In Thursday’s takes we also had less integration between the bystanders and the object actors. Two unexpected things happened during Friday’s takes. During take 1, the 360 camera fell onto one of the lights, and during take 2, the camera itself became detached from the rig and fell (only from about 2 inches off the ground, thankfully).

Here are a few pics and a video from the performance.

Paper Structure



ObjectACTs Residency : Day 2

On day two we spent some time discussing how we might create a performance that would include the perspective of multiple actors, including those non-human and non-personified.

Situation Rooms by Rimini Protokoll

The example of Situation Rooms by Rimini Protokoll came up. In this theatre work, around 20 participants wearing headphones and carrying iPads are directed to perform specific actions on a set made up of several different rooms. The participants are separated, and their actions are synchronized so that they sometimes interact with one another. The topic of the story is arms dealing. A detailed description of the rooms can be found in the Ruhrtriennale catalogue.

Kim showed us some of the environments she created using Roller Coaster Tycoon editor.

Image made with RCT

She explained the modelling of the terrain as “scooping up dirt,” which had a really nice resonance with the object clumps we had been discussing. I love the floating islands, and wondered if we could somehow fit the concept of a roller coaster into the project to get around the fact that we can’t export from the RCT editor.

We also tested the Structure Sensor to see if we could get workable scans of some of the heart trinkets that Catherine brought to Vancouver. It turns out the objects were hard to scan because of their small size and material properties (too reflective and transparent). Still, one of the scans ended up intriguing enough that we may use it as a prototype or stand-in.

Here is the first working scan we got of a small rock heart. If you are viewing this on an iPhone and you want to use Google Cardboard, use this link.
