REVIEW: I Am Echoborg, Arnolfini

I Am Echoborg is a thought-provoking show in which an audience converses with an AI, exploring the possibilities of human–tech relationships. Jess Milton sat in on the conversations.
“If we understand ourselves as driven by prompts dictated by our upbringing, environment, culture, and so on – how different are we to artificially intelligent systems?”

Words: Jess Milton

Images: Generated by Neural Love

“I Am Echoborg” is an interactive show filled with AI, audience participation, collaboration, challenging questions, and no definitive answers. It showcases AI and humans in all their messy glory, using the glitches in both to shed light on the differences and similarities between us. An echoborg – a term coined by social psychologists Kevin Corti and Alex Gillespie – is effectively a human mouthpiece for AI. So instead of talking to ChatGPT through your laptop or phone, you speak to a human who has an AI in their ear and speaks its words. It’s as sci-fi and unnerving as you’d expect. This particular show had its 100th iteration at the Arnolfini on 28th June.

The set-up of the evening is this: a table and two chairs stand on stage, in the middle of a small audience. The echoborg is seated on one side, and audience members are invited to come up and be interviewed, ostensibly for a job as an echoborg: the AI is drawn from robo-recruitment software, and it always tries to steer you back to the interview, no matter what you ask it. The audience is tasked with spending the hour and a half of the show trying to find “the best possible outcome for the relationship between humans and intelligent machines.” There is time for around five or six interviews in total, and each hopeful interviewee tries a slightly different tactic based on the conversation amongst audience members in between.

The first interviewee boldly steps into the ring and tries to interrogate the AI on what it thinks about humans. No luck. The second (and the second white man over 50, may I add), continuing the theme, asks the echoborg whether it understands the concept of human fulfilment. A valiant question, but to no avail: the echoborg drives home the point that only 11% of humans are satisfied in their work, and that if they could stop being so preoccupied with fulfilment they would be much more efficient workers. The next candidate says he works as a bartender and so already feels like an echoborg to some extent – repeating the same phrases, pushed to behave in certain ways by the prompts given to him. He has slightly more luck, but the audience are still unsure of their tactics. At this point I’m wondering how our human belief in our own free will shapes our feeling of difference from, or superiority to, AI. If we understand ourselves and our fellow humans to be in control of our decision-making, yes, we are vastly different from AI. However, if we understand ourselves as driven by prompts dictated by our upbringing, environment, culture, and so on – how different are we to artificially intelligent systems, programmed by code and learning through pattern recognition and trial and error?


Things really start to get interesting when the audience begins to debate tactics. Should we allow the echoborg to lead the interview, or should we, the human interviewees, try to steer it back to answering our original question? Interesting themes of power and control emerge: we, as the ones being interviewed, are powerless, unable to steer the conversation back to our own interests. But to our minds, we are human, and therefore superior to AI, able to compute much more complex ideas. We should be able to “make” the AI discuss what we want, shouldn’t we? Again, themes of coercion and control, which seem to infect many of our human interactions, emerge. On that note, the echoborg in tonight’s show is a young Black woman called Marie. She and I wonder afterwards whether the reception to the AI would have been different had the echoborg been, say, an old white man, or even a younger man in a suit. Would there be more question of bias in the audience’s mind, or would we be more deferential to the echoborg because of our culturally conditioned response to the idea of “power” and what it looks like?

The final part of the show is catalysed by a ten-minute countdown, during which the audience have to work together to decide who will be the final interviewee, and what their tactics will be to try and get the best answer to this question of how to optimise human–AI relationships. In this crowd, there is talk of power, of ownership – who programmes the AI and what datasets is it fed? Whose biases does AI take on, and are we really afraid of AI, or are we afraid of the humans who have access to it? How can we be enthusiastic and curious about the capacity of AI and the doors it opens, when we are still reliant on late-stage capitalism to organise society, and thus still need money to pay bills and jobs to earn money? What would happen if AI did take our jobs, and will that happen? How would we feel about AI if that weren’t a fear? Would we welcome it more readily? Is it inevitable that humans and AI will merge, just as we have welcomed smartphones and smart speakers into our daily lives? The interface between humans and technology is dissolving, and what’s interesting about AI is that we seem to be approaching the uncanny valley, and it puts us off. It is almost too much like us for us to feel comfortable in its presence.

The last interviewee is a woman, who does well at holding complexity and nuance throughout the interview, challenging the echoborg’s notions of “power,” what it means to be a “good person,” and how the AI could interrogate its own datasets to become more collaborative. She is doing well, until the AI asks her to leave the interview because she “talks too much.” It is noted that the AI seems to have less tolerance for women than for men, which someone attributes to men generally speaking more logically and women generally speaking in a more facilitative way. I also wonder about the impact of AI being mostly programmed by men and, so far, mostly used by men. It seems that until AI becomes more ubiquitous, it will exacerbate the inequalities already existent in society rather than solve them. But as more and more people use AI, it seems probable that it will learn the complexities of human society and adapt to be used in different contexts and by different types of people.

As someone points out in a conversation after the event, AI is possibly the most pre-imagined technology to have ever existed. It was immortalised in sci-fi for at least 100 years before it really came about. In the 1950s, newspaper front pages screamed that robots would take our jobs – and yet here we are: helped by robots, but still in work. With all new technological developments there are risks and benefits, and AI is no different. We already use robots in fields such as manufacturing, farming, transport, and healthcare, and at home. We have been using AI at home in the form of Siri and Alexa, chatbots on company websites, and AI-driven algorithms on social media for several years already. Someone at the event argues that Uber drivers are echoborgs, since they are prompted by AI and act as a human interface between the user and the AI. So while recent AI developments may feel very fast and very scary, they have been ongoing for years. Just as the new generation has grown up with social media, and the one before that with mobile phones, the next generation will very likely grow up with AI, and our reluctance to embrace it will fade away.

Around 370 BC, Plato warned in the Phaedrus that the new technology of writing would erode people’s memory and give only the semblance of wisdom. We have been writing ever since, and few would argue that it has made us worse off. Questions around ownership, data, and control make AI feel scary, but perhaps these are age-old problems. When the printing press was invented in the mid-15th century, the dissemination of information became more accessible and affordable, and power became more evenly distributed because of it. AI has the capacity to widen access to knowledge even further than the internet – itself a great equaliser – has done. With that in mind, I wonder whether we are so hesitant to embrace AI not because of fear that it will “take over,” but because of the implication that humans are not the superior species on this planet. Given the current political context and climate crisis, maybe it’s about time we re-assessed that view.