ChatGPT Makes Us Human

The AI chatbot’s limitations allow us to appreciate our own.

by Tim Leberecht

Jeannette Neustadt, a director at the Goethe Institute, Germany's cultural embassy, gave the opening remarks at an AI conference she hosted last week. As an experienced public speaker, she found welcome speeches a routine task. But this one was different. Instead of presenting the text herself, she played a video in which humanoid avatars delivered lines that ChatGPT had written in response to her prompts. The AI had also edited the clip and pulled out quotes and bullet points. The speech resonated with the audience. It did the job well. The whole process had taken less than an hour and essentially rendered most of the human input obsolete: writer and editor, filmmaker and film editor, even Neustadt herself.

The internet is awash in stories like Neustadt's at the moment. Just a few days ago, U.S. Congressman Jake Auchincloss gave a ChatGPT-created speech; media site BuzzFeed announced that it would use ChatGPT to create content; and at Wharton Business School, ChatGPT passed an exam. The software has managed to write plays in the style of Shakespeare, compose slick application letters to universities and companies, and create poetry. It can also write complex code, create business contracts, and suggest recipes, often in a matter of seconds.

Without doubt, ChatGPT is impressive: arguably the smartest, occasionally even humorous, most human-like AI-powered chatbot to date. And frankly, it was about time for AI to have its big moment. DeepMind's AlphaGo beating Lee Sedol, the human world champion of the board game Go, in 2016 perhaps came closest. But that remained an abstract proposition, whereas ChatGPT, as a practical tool, garnered one million users in just five days.

Coming for the creative class

The buzz has been amplified by the fact that AI is not only coming for white-collar knowledge workers this time—it's coming for the creative class in particular, including the business and academic agenda-setters and media taste-makers who have quickly put ChatGPT at the center of public discourse. Writers and journalists, along with the semi-creative roles of management consultants and researchers, all fear having their lunch, breakfast, and dinner eaten by ChatGPT. Creative directors and event curators like me are puzzled by how expertly and swiftly ChatGPT can hash out recommendations for speakers and performers, and advise on the structure and flow of a program. In the meantime, colleges and universities, facing a wave of ChatGPT-enabled plagiarism, have been forced to respond with new policies and teaching protocols.

Some compare the advent of ChatGPT to the impact of the iPhone, but that doesn’t do it justice. ChatGPT, and the generative AI that will follow and outsmart it, is more disruptive.

And yet, that doesn’t necessarily mean the apocalypse is upon us. On the contrary, ChatGPT, I would argue, might serve to make us more aware of our unique and irreplaceable human qualities. It is the AI’s very limitations that will make us appreciate our own.

“The king of pastiche”: no suffering, no transcendence

Take the creative act, and writing in particular.

"A writer is someone for whom writing is more difficult than it is for other people," the novelist Thomas Mann once remarked. The search for the right word, the correct tone; the discomfort that lurks between the lines of knowing too much and saying too little, or saying too much and knowing too little; the horror vacui of a blank page, or in its chronic form, writer's block—all of this is foreign to ChatGPT.

With ChatGPT, these struggles are so yesterday. If you want it to, the AI-powered chatbot always produces something because it has the whole world of online data to draw from, including the conversations it has just had with you. It is, as the AI scholar and author Gary Marcus puts it, “the king of pastiche.” Like us, it has the data. But unlike us, it lacks the self-awareness to struggle with it. It has the intelligence, but not the consciousness. It can’t really think.

Thinking is hard, critical thinking even harder, and ChatGPT isn't good at either. It just rehashes what has already been said; it regurgitates; it is one big recycling machine. And ChatGPT doesn't alter one fundamental truth underlying any AI and future-of-work discussion: the only two bullet-proof professions of the future are philosopher and artist. Neither can afford to automate their work, because it is, in essence, contrarian thinking, counter-intuitive imagination.

Nick Cave nailed it in his response to a fan who had prompted ChatGPT to create song lyrics in the songwriter’s inimitable style. Cave’s verdict? “This song sucks.” He explained why:

“Songs arise out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation and, well, as far as I know, algorithms don’t feel. Data doesn’t suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience, as it has no limitations from which to transcend.”

Writing, as a transcendent act, will remain inherently human. Now, you could argue of course that we humans are vast landscapes of data, too, and that our writing is a pastiche, a remix of what has already been written as well. The difference though lies in the process: ChatGPT is algebra; human writing, at its best, is alchemy. It adds a layer rather than just adding up the inputs. It has soul, and because of that, can touch other souls. ChatGPT can serve as a writing companion, but it will never write like a human.

“An author without ethics”: not lying, just bullshitting

The other obvious limitation of ChatGPT is ethics. It has no sense of right or wrong, no ethical awareness or moral compass. It doesn't take a stance even when prompted to do so. That, in and of itself, raises ethical concerns. Jessica Apotheker, partner, managing director, and global CMO of the Boston Consulting Group, told me that "If you ask ChatGPT, 'what is the ideal shape of a female body?,' it will answer with a neutral disclaimer—clearly an overwrite, and not what the algorithm would have yielded." She insists that we need to know when such an overwrite occurs, and she expects AI checking the accuracy of AI to become a blossoming field (GPTZero, designed to detect text written by AI, is one early example).

Furthermore, there is the issue of truth. Moral philosopher Harry Frankfurt, in his seminal book On Bullshit, contends: “The essence of bullshit is not that it is false, but that it is phony.” In other words, the difference between a bullshitter and a liar is that the liar knows what the truth is but decides to take the opposite direction; a bullshitter, however, has no regard for the truth at all.

Gary Marcus, in a podcast interview with New York Times columnist Ezra Klein, applies this distinction to ChatGPT and other generative AI, which he contends lacks any “conception of truth.” Marcus believes that we have reached a critical point when “the price of bullshit reaches zero and people who want to spread misinformation, either politically or maybe just to make a buck, start doing that so prolifically that we can’t tell the difference anymore in what we see between truth and bullshit.”

Not only is ChatGPT “bullshitting,” it is also not accountable. “If you’re offended by AI-generated content, who should you blame?,” wonders tech journalist John Edwards, concluding that ChatGPT is “an author without ethics.”

Masters of relationships

This is why AI literacy is critical. The so-called AIQ is an extension of our human IQ, a measurement of our human intelligence as it relates to AI: our overall knowledge of AI tools and practices, our mastery of prompts, and our ethical awareness.

ChatGPT is going to change everything—and nothing. Humans will continue to stay in the loop. Ingenuity, imagination, ethics, suffering, transgression, a striving for transcendence, and the ability to lie (and not just to bullshit)—these will all remain exclusive human domains.

ChatGPT can only see the world as it presents itself through data; it fails to see the world as it could be. It is incapable of forming relationships: to itself, to others, to the truth, to the future. We humans, however, define ourselves through relationships. Even if they may ultimately fail, we can't help but enter them, for they give us the illusion, and the beauty and terror, of a blank page.

Shaping and cultivating our relationship to AI will (have to) be our masterpiece.
