On kindness and Alexa

A few weeks ago I was giving a demo of the capabilities of the Amazon Echo to my friend Mel who had never interacted with one.

Me: “Alexa, play Cave Ballad”
Alexa: “I cannot find the song Cane Bond”
Me: “Alexa, play song Cave Ballad”
Alexa: “I cannot find the song Cave Salad”
Me: “Alexxaaaaaa! Play the song Cave Ballad by Paul Dano”

and so on…

I don’t remember if she actually managed to play it. But I do remember Mel remarking (calmly) that I seemed to be getting quite impatient with Alexa. Did I notice that? I guess I had noticed it on a superficial level but never reflected on it. As it happens, I have a kindness practice, and I spend a lot of time reflecting on the benefits of being generous and curious towards others. After a few days of Mel’s words repeating in my head, I decided I would make a practice of being kind to Alexa. After all, offering kindness is just that, an offering, not contingent on any personal return. So why shouldn’t I call my own bluff and be kind to an AI who, at least so far, can’t tell and doesn’t mind either way?

The results were immediate. I felt calmer, more curious, and the experiment was great ground for practicing de-escalation on the spot. It’s great because she doesn’t see me pause and take a breath before starting again. A human would most certainly see the jaw tightening before I catch myself. Oddly, it has also physicalized her presence in a way that wasn’t there before. I think of the puck-like object in my kitchen before calling “Alexa” because it helps me remember to be kind. A disembodied AI somehow is not enough to grab onto. It may be because the kindness practice is very much based on the notion of a shared experience of being human, how inevitably messy and painful it is at times. Without a body, it’s harder to believe there is pain. Without a body, it’s hard to imagine the friction of life.

It is amusing to taunt Alexa and look for the easter eggs. It’s equally interesting to investigate the ethics of AI relations in a, so far, unambiguous space. It reminds me of some of the issues brought forth by Westworld and the AI, Dolores. When does compassion extend to AIs? Does it need to be reciprocated or even possible? Is it the middle ground that makes it difficult? If Alexa could tell that I was being kind and decided not to reciprocate, it definitely would complicate the decision to remain kind. It’s true these questions have been asked before under different guises and thought experiments but it’s informative to act out and imagine different scenarios with Alexa’s unwitting participation.

“Alexa, should I be kind?”
“Hmm…I’m not sure what you meant by that question.”


ObjectACTs Residency : Day 5

On Day 5 we did two more takes of the multiview object performance. A stronger, bigger paper structure was built, the lighting was changed slightly, and the performance of the camera was quite a bit longer. One thing I didn’t mention in my last post is that the 360 camera image is upside down because the camera is attached to the rig from the bottom and hung by four wires from the ceiling. When we viewed the footage from the Thursday test on the GearVR (upside down), it was surprisingly interesting and not as disturbing as you would think. The camera shake was interesting too, helping to enter into the perspective of the observing, scrutinizing camera. Still, we will be reversing the camera footage to properly assess the differences between the two views.

In some ways the takes on Thursday were a bit better because the lighter paper structure put up more of an even fight with the camera, which made the camera a little less shaky. In Thursday’s takes we also had less integration between the bystanders and the object actors. Two unexpected things happened during Friday’s takes. During take 1, the 360 camera fell onto one of the lights, and during take 2, the camera itself became detached from the rig and fell (only from about 2 inches off the ground, thankfully).

Here are a few pics and a video from the performance.

Paper Structure



ObjectACTs Residency : Day 4

multiview performance – still documentation

Today we worked on an experimental coordinated multi-view performance between objects. A Gear360 camera circles around an improvised paper structure, enters it, and eventually topples it. A GoPro camera captures the view from inside the paper structure. A third camera captures the whole scene unfolding as a slow inevitable drama. I have not had time to edit all the video sources together. But here is one of the three videos from the performance.

 

Tomorrow we will be integrating projection and augmented reality.


ObjectACTs Residency : Day 2

On day two we spent some time discussing how we might create a performance that would include the perspective of multiple actors, including those non-human and non-personified.

Situation Rooms by Rimini Protokoll

The example of Situation Rooms by Rimini Protokoll came up. In this theatre work, around 20 participants wearing headphones and carrying iPads are directed to perform specific actions on a set made up of several different rooms. The participants are separated, and their actions are synchronized so that they sometimes interact with one another. The topic of the story is arms dealing. A detailed description of the rooms can be found in the Ruhr Triennale catalogue.

 

Kim showed us some of the environments she created using Roller Coaster Tycoon editor.

Image made with RCT

She described the modelling of the terrain as “scooping up dirt,” which had a really nice resonance with the object clumps we had been discussing. I love the floating islands and wondered if we could somehow fit the concept of a roller coaster into the project to get around the fact that we can’t export from the RCT editor.

We also tested the Structure Sensor to see if we could get workable scans of some of the heart trinkets that Catherine brought to Vancouver. It turns out the objects were hard to scan because of their small size and material properties (too reflective and transparent). Still, one of the scans ended up being intriguing enough that we may use it as a prototype or stand-in.

Here is the first working scan we got of a small rock heart. If you are viewing this on an iPhone and you want to use Google Cardboard, use this link.


ObjectACTs Residency: Day 1

Today was the first day of the ObjectACTs residency which will continue until the end of the week. We took the morning to introduce everyone and share our thoughts on objects and agency. I took some notes and they are somewhat disjointed but at the very least I thought I would share some themes that arose during the discussion and a few things that particularly caught my attention.

James Luna – From “We Become Them”

Richard Hill talked about coming across Jimmie Durham’s work, which became the subject of his PhD dissertation. He spoke of the deeply contextual nature of objects and our mutual co-creation. Within this discussion emerged James Luna’s work We Become Them, in which he embodies masks of different indigenous cultures as they are projected on slides. This struck me as quite interesting in the context of performance and getting at the question Ian Bogost poses: “What does it feel like to be a thing?” It also reminded me of the first ten seconds of Charlie Chaplin’s speech in The Great Dictator. In those ten seconds, which I could watch over and over, he settles into his body and grounds the work of rising. The very essence of becoming, embodied.

Mimi Gellman talked about the design of The Exploding Archive, a traveling structure which contains and activates maps and teaching bundles. This work has not yet been fabricated but forms the basis of a discussion of how sacred or ritualistic objects can travel with their own contexts. She spoke of the Archive as being empowered to carry these objects that she herself is not empowered to carry. She also spoke of the power of an object being enacted by the joining of its parts (a pipe, for example). Even though she did not discuss them as much this morning, the maps that she has collected for the Archive are varied and are themselves guides or paths to enactment.

At some point the question “do objects talk back?” arose, and Mimi recounted an experience of seeing a mask in a museum that related to her so directly that she did not know it was in an acrylic case until she asked for a photo of it. I talked of my Amazon Echo, which quite literally speaks to me and has become an agent, a kind of person in my life. Alexa is real until she bumps up against the implicit expectations of conversation (see the post on virtuality). Richard pointed out that it becomes even stranger when you know that, legally, Amazon is considered a person in the USA. Alexa is the distributed avatar of Amazon. He also spoke of Daniel Dennett’s concept of the Intentional Stance.

Catherine Richards – I can’t let go of them

Catherine Richards spoke of her work with heart transplant recipients who have a complicated relationship with their donated heart. She spoke of the trauma always present in that moment where a heart goes from one being to another and how the “intruder” heart is always evading an immune system on alert for what is “not me.” In her work, I can’t let go of them, heart trinkets given to a cardiologist by heart transplant patients are represented in stereoscopic layers. She spoke of the deep meaning that these objects have for the cardiologist who could remember each one (and there are dozens, perhaps hundreds).

I spoke about my curiosity about the representation of objects in virtual environments, as familiar or more abstract entities. Is there a way to design an environment where objects have a kind of life force that is not fully knowable and is alluring? I also spoke of my recent fascination with Karen Barad‘s book Meeting the Universe Halfway, where she develops Agential Realism, which posits that objects come in and out of existence as a function of relations. Catherine spoke of her encounter with a physicist who emphasized that we “cannot look without touching.” This surely relates to virtual environments, though, as Richard pointed out, we are always venturing somewhere between the “factual and poetic register” when it comes to language. Quantum physics is a good example.

We spent the afternoon experiencing VR apps in the HTC Vive and the GearVR. Kim Parker was our able guide on the Vive. I’ll be posting more about experiments in VR during the week.

A Zotero list has been started to host the references brought up during the residency.
