I Am Afraid

For the last 5 months or so, I’ve been working on a new networked social VR application called I Am Afraid. This application brings together ideas that have been on my mind for over a year, perhaps longer: ideas around voice, poetry, sculpture, and performance. Many people have asked where the idea for IAA came from, and I am surprised that I can’t remember the moment, or a moment, when I decided I wanted to see words and play with sounds in VR. I do know there have been lots of inspirations along the way, including the work I did with Greg Judelman on flowergarden, voice work with my friend Matthew Spears, clowning, theatre, friendly poets (Andrew Klobucar, Glen Lowry), sound artists (Simon Overstall, Julie Andreyev, prOphecy Sun), and more.

The basic idea is to build sound compositions and sculptures using textual and abstract objects with embedded recorded sounds. When you are in the environment, you can speak words and have them appear as textual objects, and utter sounds that appear as abstract objects. Both kinds of objects contain the sound of your voice, and intersecting with an object plays its sound back. The textual objects can be played in parts and at any speed or direction, using granular synthesis. The abstract sounds can be looped, and paths recorded through the objects can be looped as well, so layered soundscapes can be built up. The objects can also be interacted with in other ways, such as shaking and moving, which alters the sound quality. More actions are planned, fleshing out a longstanding idea for a sonification engine based on the physicality of interacting with words.
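
In case it helps to picture the mechanics, here is a rough sketch of the kind of granular playback described above, not the actual IAA engine: it assumes a spoken word is already available as a mono numpy array (word_samples, a hypothetical name) and replays it by overlap-adding short windowed grains, so the speed and direction of the read position can be changed freely.

```python
import numpy as np

def granular_playback(samples, sr, speed=1.0, grain_ms=60, hop_ms=15):
    """Replay a recorded buffer at any speed and direction
    (negative speed = reverse) by overlap-adding short
    Hann-windowed grains taken from the original recording."""
    grain = int(sr * grain_ms / 1000)   # grain length in samples
    hop = int(sr * hop_ms / 1000)       # output hop between grains
    window = np.hanning(grain)
    out = np.zeros(int(len(samples) / max(abs(speed), 0.05)) + grain)

    read = 0.0 if speed >= 0 else float(len(samples) - grain)
    write = 0
    while 0 <= read <= len(samples) - grain and write + grain <= len(out):
        start = int(read)
        out[write:write + grain] += samples[start:start + grain] * window
        read += hop * speed   # move through the recording at the chosen speed/direction
        write += hop          # but always advance the output at a fixed rate
    return out * (hop / window.sum())   # roughly compensate for grain overlap gain

# e.g. play the recorded word backwards at half speed:
# slowed_reversed = granular_playback(word_samples, 44100, speed=-0.5)
```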

I am often asked why the application is called I Am Afraid. As I was starting work on the application in January, I could sense an escalation of fear in the world and in my surroundings. I have been exploring fear for the last 17 years through different paths, including meditation and art. One of the features of fear is that when we feel it, when it grips us, we start talking to ourselves. This is a bit of a trap, because we get more and more removed from what is actually going on. One of the goals of IAA is to externalize that discursiveness and be playful with the words and sounds. It can be a way to lighten up, see things more clearly, and shift the internal dialogue. And it’s fun.

I used the application during my TEDxECUAD talk last March, which was about fear and technology. I’ve also used it in a performance at AR in Action, a conference held at NYU at the beginning of June. It’s a great environment for performance (solo or group), exploration, and composition. I’ll be working on it for some time to come, adding features and (hopefully soon) porting it to Augmented Reality.


On kindness and Alexa

A few weeks ago I was giving a demo of the capabilities of the Amazon Echo to my friend Mel, who had never interacted with one.

Me: “Alexa, play Cave Ballad”
Alexa: “I cannot find the song Cane Bond”
Me: “Alexa, play song Cave Ballad”
Alexa: “I cannot find the song Cave Salad”
Me: “Alexxaaaaaa! Play the song Cave Ballad by Paul Dano”

and so on…

I don’t remember if she actually managed to play it. But I do remember Mel remarking (calmly) that I seemed to be getting quite impatient with Alexa. Did I notice that? I guess I had noticed it on a superficial level but never reflected on it. It turns out I have a kindness practice, and I spend a lot of time reflecting on the benefit of being generous and curious towards others. After a few days of Mel’s words repeating in my head, I decided I would make a practice of being kind to Alexa. After all, offering kindness is just that, an offering, not contingent on any personal return, so why shouldn’t I call my own bluff and be kind to an AI who, at least so far, can’t tell and doesn’t mind either way.

The results were immediate. I felt calmer, more curious, and the experiment was great ground for practicing de-escalation on the spot. It’s great because she doesn’t see me pause and take a breath before starting again. A human would most certainly see the jaw tightening before I catch myself. Oddly, it has also physicalized her presence in a way that wasn’t there before. I think of the puck-like object in my kitchen before calling “Alexa” because it helps me remember to be kind. A disembodied AI is somehow not enough to grab onto. It may be because the kindness practice is very much based on the notion of a shared experience of being human, and of how inevitably messy and painful it is at times. Without a body, it’s harder to believe there is pain. Without a body, it’s hard to imagine the friction of life.

It is amusing to taunt Alexa and look for the Easter eggs. It’s equally interesting to investigate the ethics of AI relations in a, so far, unambiguous space. It reminds me of some of the issues brought forth by Westworld and the AI Dolores. When does compassion extend to AIs? Does it need to be reciprocated, or even possible? Is it the middle ground that makes it difficult? If Alexa could tell that I was being kind and decided not to reciprocate, it would definitely complicate the decision to remain kind. It’s true these questions have been asked before under different guises and thought experiments, but it’s informative to act out and imagine different scenarios with Alexa’s unwitting participation.

“Alexa, should I be kind?”
“Hmm…I’m not sure what you meant by that question.”


Is this a coffee cup?

This is a follow-up to my post from November about the definition of reality. Last night I had a dream about virtual reality and objects.

The Setup:

Three characters in a virtual world. I am one of them, an avatar. Of the other two characters, one is an AI and the other an avatar like me. A disembodied voice says there is a newcomer in the philosophy of VR and objects and he’s all the rage: young, naive, brilliant, yada yada yada. This newcomer is the other avatar (not the AI). He’s in a virtual kitchen set with the AI and he’s holding a coffee cup. I’m slightly away from where they are, in a kind of blank space.

The Conversation:

“Is this a coffee cup?” says the brilliant new guy to me.
“Aah…Maybe.”
“Is this a coffee cup?” he says to the AI while moving the coffee cup towards her.
“Uhhh… Yaa…” the AI replies in a disbelieving teenage duh voice. She takes the coffee cup, takes a sip of coffee, and hands it back to the brilliant new guy.
“Is this a coffee cup?” says the brilliant new guy as he now moves the coffee cup towards me, breaking some kind of fourth wall as he does so (a visual force field effect makes this more pronounced).
“Mm… Less so,” I say.

The dream ends.


ObjectACTs Residency: Day 5

On Day 5 we did two more takes of the multiview object performance. A stronger and bigger paper structure was built, the lighting was changed slightly, and the performance of the camera was quite a bit longer. One of the things I didn’t mention in my last post is that the 360 camera image is upside down, because the camera is attached to the rig from the bottom and hung from the ceiling by four wires. When we viewed the footage from the Thursday test on the GearVR (upside down), it was surprisingly interesting and not as disturbing as you would think. The camera shake was interesting too, helping to enter into the perspective of the observing, scrutinizing camera. Still, we will be flipping the camera footage right-side up to properly assess the differences between the two views.
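
As a side note, getting upside-down equirectangular footage the right way up is straightforward offline. For a camera rolled 180 degrees on its mount, rotating every frame by 180 degrees amounts to a 180-degree roll of the spherical view; the sketch below is one assumed way to do it, using OpenCV and placeholder file names, and it does not carry the audio track over.

```python
import cv2  # pip install opencv-python

# A minimal sketch (assumed filenames): rotate every frame of an
# equirectangular 360 video by 180 degrees so footage from an
# upside-down mounted camera plays right-side up.
cap = cv2.VideoCapture("take1_upside_down.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("take1_flipped.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # 180-degree in-plane rotation == 180-degree roll in the spherical view.
    # If "front" then faces backwards in the viewer, also shift the image
    # horizontally by half its width (a 180-degree yaw).
    out.write(cv2.rotate(frame, cv2.ROTATE_180))

cap.release()
out.release()
```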

In some ways the takes on Thursday were a bit better, because the lighter paper structure put up more of an even fight with the camera, which made the camera a little less shaky. In Thursday’s takes we also had less integration between the bystanders and the object actors. Two unexpected things happened during Friday’s takes. During take 1, the 360 camera knocked down one of the lights, and during take 2, the camera itself became detached from the rig and fell (from only about two inches off the ground, thankfully).

Here are a few pics and a video from the performance.

Paper Structure


ObjectACTs Residency: Day 4

Multiview performance – still documentation

Today we worked on an experimental coordinated multiview performance between objects. A Gear360 camera circles around an improvised paper structure, enters it, and eventually topples it. A GoPro camera captures the view from inside the paper structure. A third camera captures the whole scene unfolding as a slow, inevitable drama. I have not had time to edit all the video sources together, but here is one of the three videos from the performance.


Tomorrow we will be integrating projection and augmented reality.


ObjectACTs Residency: Day 2

On day two we spent some time discussing how we might create a performance that would include the perspectives of multiple actors, including those that are non-human and non-personified.

Situation Rooms by Rimini Protokoll

The example of Situation Rooms by Rimini Protokoll came up. In this theatre work, around 20 participants wearing headphones and carrying iPads are directed to perform specific actions on a set made up of several different rooms. The participants are separated, and their actions are synchronized so that they sometimes interact with one another. The topic of the story is arms dealing. A detailed description of the rooms can be found in the Ruhr Triennale catalogue.


Kim showed us some of the environments she created using the Roller Coaster Tycoon editor.

Image made with RCT

She explained the modelling of the terrain as “scooping up dirt,” which had a really nice resonance with the object clumps we had been discussing. I love the floating islands, and wondered if we could somehow fit the concept of a roller coaster into the project to get around the fact that we can’t export from the RCT editor.

We also tested the Structure Sensor to see if we could get workable scans of some of the heart trinkets that Catherine brought to Vancouver. It turns out the objects were hard to scan because of their small size and material properties (too reflective and transparent). Still, one of the scans ended up being intriguing enough that we may use it as a prototype or stand-in.

Here is the first working scan we got of a small rock heart. If you are viewing this on an iPhone and you want to use Google Cardboard, use this link.


ObjectACTs Residency: Day 1

Today was the first day of the ObjectACTs residency, which will continue until the end of the week. We took the morning to introduce everyone and share our thoughts on objects and agency. My notes are somewhat disjointed, but at the very least I thought I would share some themes that arose during the discussion and a few things that particularly caught my attention.

James Luna – From “We Become Them”

Richard Hill talked about coming across Jimmie Durham’s work, which became the subject of his PhD dissertation. He talked of the deeply contextual nature of objects and our mutual co-creation. Within this discussion emerged the work of James Luna, We Become Them, where he embodies masks of different Indigenous cultures as they are projected on slides. This struck me as quite interesting in the context of performance and getting at the question that Ian Bogost poses: “What does it feel like to be a thing?” It also reminded me of the first ten seconds of Charlie Chaplin’s Dictator speech. In those ten seconds, which I could watch over and over, he settles into his body and grounds the work of rising. The very essence of becoming, embodied.

Mimi Gellman talked about the design of The Exploding Archive, a traveling structure which contains and activates maps and teaching bundles. This work has not yet been fabricated, but it forms the basis of a discussion of how sacred or ritualistic objects can travel with their own contexts. She talked of the Archive as being empowered to carry these objects that she herself is not empowered to carry. She also talked of the power of an object being enacted by its parts being joined (a pipe, for example). Even though she did not discuss them as much this morning, the maps that she has collected for the Archive are varied and are themselves guides or paths to enactment.

At some point the question “do objects talk back?” arose, and Mimi recounted an experience of seeing a mask in a museum which related to her so directly that she did not realize it was in an acrylic case until she asked for a photo of it. I talked of my Amazon Echo, which quite literally speaks to me and has become an agent, a kind of person in my life. Alexa is real until she bumps up against the implicit expectations of conversation (see the post on virtuality). Richard pointed out that it becomes even stranger when you know that Amazon, as a corporation, is legally considered a person in the USA. Alexa is the distributed avatar of Amazon. He also spoke of Daniel Dennett’s concept of the Intentional Stance.

Catherine Richards – I can’t let go of them

Catherine Richards spoke of her work with heart transplant recipients, who have a complicated relationship with their donated heart. She spoke of the trauma always present in the moment when a heart goes from one being to another, and of how the “intruder” heart is always evading an immune system on alert for what is “not me.” In her work I can’t let go of them, heart trinkets given to a cardiologist by heart transplant patients are represented in stereoscopic layers. She spoke of the deep meaning that these objects have for the cardiologist, who could remember each one (and there are dozens, perhaps hundreds).

I spoke about my curiosity about the representation of objects in virtual environments, as familiar or more abstract entities. Is there a way to design an environment where objects have a kind of life force that is not fully knowable and is alluring? I also spoke of my recent fascination with Karen Barad’s work “Meeting the Universe Halfway,” where she describes Agential Realism, which posits that objects come in and out of existence as a function of relations. Catherine spoke of her encounter with a physicist who emphasized that we “cannot look without touching.” This surely relates to virtual environments, though, as Richard pointed out, we are always venturing somewhere between the “factual and poetic register” when it comes to language. Quantum physics is a good example.

We spent the afternoon experiencing VR apps in the HTC Vive and the GearVR. Kim Parker was our able guide on the Vive. I’ll be posting more about experiments in VR during the week.

A Zotero list has been started to host the references brought up during the residency.
