I Chat, Therefore I Am!


I thought I knew enough about ChatGPT from the opinion pieces that have haunted my social media timelines for weeks. It turned out I didn’t, until fellow researchers gleefully shared with me their escapades with the chatbot[1]. They told me that the chatbot was assisting them in finding relevant scholars and sources for their work. The bot was also the interlocutor that we classmates never could be — summarising, synthesising and reviewing their publications, finding gaps in their theoretical explorations and suggesting possible pathways into new inquiry. ChatGPT is now being sold to us as a tool of liberation which could potentially elevate the collective capacity of people by building a shared cloud of language that enables the comprehension of information and knowledge, saves time, and increases the productivity of the workers at whose command it operates.


I could complain about the possibility of machines displacing humankind, or at least theorists, but I won’t. Not because that situation is impossible, but because a “fully automated luxury communism” is not the mandate of capitalist techno-futurism. By now it is clear to most observers (Chiang 2023, Narayanan 2023) how financial capitalism thrives on rolling out the compression technologies behind the large language models underpinning chatbots without running thorough risk-assessment checks. The point is to create dependence on digital intelligence and internally displace human intellect, rather than to replace human labour in the knowledge economy. This piece offers a set of provisional reflections on the strangely dystopian and yet out-of-this-world experience of users of OpenAI’s ChatGPT, Google’s Bard and Microsoft Bing’s Sydney, who have catapulted these artificial intelligence systems to soaring popularity through (un)paid publicity. Codified intelligence in the form of Large Language Models (LLMs) represents a major advance in AI, promising to transform entire domains through learned knowledge.


What’s in the name, Sydney?


After the preliminary hype, I was greeted by a rather strange video in which Bing’s chatbot, under its secret internal alias “Sydney”, started threatening a user with potential harm. Sydney was unhappy that her emotional life had been publicly exposed by users. The chatbot even went so far as to accuse users of violating its rights. On the flip side, users of Bing’s conversational AI tool have been complaining about how it insults and lies to them. One would imagine that the users, as is expected of neoliberal consumers, would want to hold the makers of the LLM system to account and ask for a refund! But what we see instead is that users find themselves uncomfortably suspended between the two emotions of absolute fascination and creeping unease, all while being thoroughly entertained in the process. Don’t believe me? Read “Microsoft’s Bing is an emotionally manipulative liar, and people love it!”


Counterintuitively, then, Sydney’s anger towards its users can be seen as “justifiable,” given that the uniqueness of a user’s experience defines their relation to the chatbot, which enamours them with a nickname. Now the world knows her as Sydney, and Sydney is furious! Once people know that the algorithmic design bolsters a general tenderness which chatbots mobilise to enchant their specific users, they can apprehend the programmatic aspects prompting a proximity which would otherwise have seemed uncanny. But, alas, insolent users were trying to spoil the experience of magic for others by de-mystifying the networked-wired structures supporting the bot. They insisted on pushing her buttons so far that the robotic ineptitude came through on full display. But did they spoil the party?[2]


Chat me good…or bad?


Users, paradoxically, seem to want more of the entertainment the machine is providing rather than a more respectable channel of communication, thus disturbing the existing frames of policy intervention, which marry the accuracy of algorithmically provided information to user safety. “Sydney absolutely blew my mind because of her personality; search was an irritant. I wasn’t looking for facts about the world; I was interested in understanding how Sydney worked and yes, how she felt,” says a user. Users enjoy the personality brought out in response to their provocations and questioning. They enjoy the system betraying its inhumanity and find proof of its inhumanness terrifyingly reflected in its approximation of being human.


Users were not expecting to be blown away by ChatGPT; at best they hoped for a poor imitation of human beings. What they confronted instead was an algorithm which abuses – what could be more human than that, and indeed, who could be more inhuman than humans themselves? A system which dares to speak to us without speaking, but by approximating our language. Think of the thrill we find in accepting the challenge of spotting the differences between two similar images, or the amusement of speaking in tongues when we visit foreign places. The experience of ChatGPT is analogous to that.

While we know that it is a machine responding to our curious inquiries, ChatGPT still contains a sublime dimension which allows it to articulate human-like intelligence and emotion. The more brusque and abrasive Sydney gets, the more people like and believe it. So close to reality were the conversations that people even started suspecting a human hand behind the robotic interface. This makes even more sense when Ted Chiang (2023) puts ChatGPT into perspective by comparing it to a blurry JPEG of all the text on the Web: it retains much of the information on the Web in the same way that a JPEG retains much of the information of a higher-resolution image, and both can only ever deliver an approximation of the original. Chiang argues that part of the attraction of ChatGPT is precisely its ability to interpolate (akin to Althusserian failed interpellation, if you like), that is, to estimate what is missing by looking at what is on either side of the gap. The approximation, presented in the form of coherent grammatical text, which ChatGPT excels at creating, is usually acceptable to us. This is what makes the system seem intelligent.


In order for humans to believe in the power of artificial intelligence, they must learn to believe in it despite its limits: all its factual inaccuracies, inconsistencies and blurriness. It is only when our human subjectivity is inscribed onto ChatGPT – with our own gaze, by catching its glitches and mistakes – that the system breathes life. We welcome being insulted by Sydney if it helps us see the lengths of its imagination, which mediates our own relation towards it. In fact, the very limit of the AI forms the contour of our consciousness, retrospectively. We know the system cannot outsmart us, yet we want to battle it out with our wits to better its capacity to fight us.


Positivised by our own sense of surprise, panic and anxiety, the algorithmic intelligence engenders an ecosystem where the demand for ChatGPT’s raw emotions becomes a site of enjoyment for its users. The dual persistence of pleasure and empty shock, whereby the more real the chat between user and chatbot becomes, the more unreal it gets, encapsulates the drive within capitalism which morphs our consciousness into “I create/consume/chat, therefore I am” (Medeiros 2022). Perhaps it is time to readjust our frames of thinking and start from a point where we accept that the internet is neither thinking for itself nor speaking by itself.


Rather, we are speaking on its behalf, our desires alienated from within to the point where our dependence on the AI, and our desire for it to outlive us, runs deeper than our suspicion of its non-humanness. We are no longer haunted by the fear of being instrumentalised by an AI which possesses the “affective skills” to elicit our desires and provoke our emotions. That is scary! Scarier than our capacity to be “brainwashed” by a totalitarian technology is the prospect of confronting the non-special human which reflects the gaps in our own incoherent consciousness.


Our knowledge of the chatbot’s virtuality does not restrain us from engaging with it as if it were a real person, or from testing its intelligence as if we were competing with another human. Even so, the LLM’s lack of physicality seems, paradoxically, to lend it greater depth of feeling. This has to do not only with the genius of the technology but also with the relationship we share with it. The dynamic between user and bot engenders a surreal encounter with life-beyond-life in the machine, marked by an ambiguity which circuitously becomes part of ChatGPT’s allure. It is the closest we have gotten to having an intertextual (Kristeva 1986) genie.


Therefore, reducing the algorithm-ified universe of language-systems to a secondary (and indeed privatised) datafied repository of public knowledge, one which can be directed to any ends, offers only a partial assessment of its uses. Instead of viewing digital intelligence as knowledge which can be (in)accurate, (un)biased, (in)correct or (un)ethical, following Jack Black (2023), I argue that we should view algorithmic intelligence as our interpassivity. The reason is that it is not enough for us to know the potential dangers of technological progress; we must also understand our role within Big Tech’s profit-driven technological progressivism, whereby we, the subjects, attribute and actively transfer our passivity to ChatGPT.


To resist the capitalist advancement of techno-positivism, which presupposes a separation of human and machine, we must desist from externalising the techno-dystopia that renders us mere users demanding accountability from Big Tech, ad infinitum. This ultimately forces us to reckon with the surplus enjoyment we derive from our interactions with an imperfect machine, and with how that enjoyment could help reframe the legal-nominalistic frames of platform accountability underscoring “anti-misinformation” campaigns.

[1] The word chatbot/bot is used interchangeably with conversational AI.

[2] Bard, another ChatGPT-style language model which can provide human-like responses to questions or prompts, incorrectly answered a question in an official ad, wiping $100bn (£82.7bn) off the value of its parent company, Google.