What happened to Microsoft’s “Clippy”? The creepy paper clip that used to hang around your Word document and try to help you “write a letter” or “create a list”? Sometimes it would ask if “you wanted this document in all caps,” weirdly urging you to yell at your audience.
Not only did Clippy provide useless suggestions, but it was all too humanlike, according to the software designer Alan Cooper:
Clippy’s design, the eminent software designer Alan Cooper later said, was based on a “tragic misunderstanding” of research indicating that people might interact better with computers that seemed to have emotions. Users certainly had emotions about Clippy: they hated him.
I guess anthropomorphizing wasn’t a good strategy: AI should be presented as a cold machine, digital and eyeless. Ditch the facial features for simple language on a screen. Essentially, what you see with ChatGPT.
There have been efforts lately, though, to make AI more human. OpenAI recently gave ChatGPT audio capabilities, with five different voices that sound like regular people. Google’s DeepMind sees AI becoming our next personal assistants, companions even, that carry out tasks you set for them by calling on other software and other people to get stuff done.
“It’s a very, very profound moment in the history of technology that I think many people underestimate,” said Mustafa Suleyman, cofounder of DeepMind, last September.
Ok then.
Too human or not too human, that tends to be the question. Do we want our AI to be smiling back at us, or spitting out cold, calculated information while having as many human-like qualities as your coffee machine? I’m not sure, but I’m starting to feel like this question deflects from a more pertinent one: how might our own humanness be redefined through an engagement with this technology, especially with current AI tools like ChatGPT? Screw the robots, what about us?
To answer such a question, my mind returns to the definition of language, since AI tools like ChatGPT are essentially language-making machines, and language has always been talked about as an essential tool for human development. Early twentieth-century philosophers like Ernst Cassirer saw language as a symbolic form that played a central role in mediating human experience and constructing meaning. For Cassirer, language is part of a broader system of symbolic activities, like art, myth or religion, driving us towards defining and creating meaning in the world. We grasp onto language, and it is there that our world begins to reveal itself to us.
But it was the philosopher Martin Heidegger who brought this connection between language and being in the world into focus. Calling language the “house of being,” he believed that by examining language and its structures one could gain insight into the nature of being itself. For Heidegger, there’s an intimate relationship between language and our engagement with the fundamental question of what it means to be human, whereby words help reveal the very nature of our existence.
Ok, but what the hell does this have to do with Clippy?
Clippy didn’t use language to examine its own existence, and I doubt ChatGPT does either, at least for now. The need for AI to create words and meaning comes from other motivations, be it deep learning, LLMs, or a user prompting the AI for quick information.
But it’s this new interplay between robots and humans, where language gets exchanged, that might remind us that words and phrases are helping us shape our understanding of being. Sit long enough through a conversation with ChatGPT and naturally you’ll begin to wonder about the boundaries between human and non-human, agency and co-agency, thinking and non-thinking.
Wait, did I just come up with this argumentative essay on Formula One racing, or was it the machine? Where did I end and ChatGPT begin?
The conversation about how we should define what it means to be human slowly bubbles to the surface as one watches ChatGPT string together effective sentences on nearly any topic. Its efforts to get dangerously close to our neurological capabilities force this discussion.
Maybe Mustafa is right: we are underestimating this whole AI thing.
For me, this is how ChatGPT is affecting language itself: illuminating the notion that words and phrases aren’t there simply to help us make meaning about the world, but to let us wrestle with the assumptions we make about our own existence, about what it means to be and operate as human.
I got to this point not just through my own experience with ChatGPT but through watching my students use it as well. I would ask them to use ChatGPT for certain writing activities, and many hated the experience, complaining about the robot “generating the ideas for them” and not being able to “write for themselves.” I think they felt that ChatGPT was encroaching on something sacred, something that should be innate inside of them and only them. But out of this frustration came a classroom conversation on being and what it meant to be human. Does thinking truly reside only in the brain, and if not, isn’t ChatGPT just another tool to help us think? Do we always find our agency solely within ourselves, or are we constantly using objects in the world, like ChatGPT, to help us construct this idea of agency? The conversation wasn’t really about AI becoming human, but about what makes us human.
So what does it mean to be? I’m not sure we know exactly, and I wouldn’t say language has brought us to some definitive answer. Neither has AI, of course, but it’s given us some glimpses. For example, nowadays one can find a handful of Clippy memes that make light of Clippy’s constant failure to help its users. A group of MIT researchers actually analyzed a large sample of these memes and concluded that what Clippy was missing all along, the “natural” interpersonal skills humans use to adapt to certain situations, is what makes him funny today. Clippy is a throwback to when tech was allowed to make mistakes and be slightly crappy, and we were all cool with that for a while. It couldn’t perfectly target us with ads or mimic a human’s voice. It was a time when tech, in all its imperfections, seemed a lot more human.
Clippy had an ancestor, Microsoft Bob, who was supposed to be a friendly assistant in Windows 3.1. People hated Bob. Microsoft seems to have ideas that it will refuse entirely to give up on, ever. Remember Tay?
Thanks for subscribing to ltRRtl, Brent. I enjoyed reading this piece. You highlight a key issue. I believe that part of the reason it’s so easy for kids to perceive bots as humans is that kids have been dehumanized in classrooms. Is it possible to read a textbook that you know contains authoritative answers you’ll be tested on as a human with interests and purposes of your own? I remember my first experiences with OpenAI’s GPT-3.5. It seemed mechanical and stylistically strange to me from the first chat and hasn’t really shown itself to be more than a digital tool. I do think your point is well made. Teachers have a real opportunity here to talk about what it means to be human in a classroom. Many academic norms come under threat when such talks begin. Let the games begin.