AI Will Leave Us No Choice but to Create a Parallel World
AI is here to stay. Are we?
In a recent interview, Mustafa Suleyman, co-founder of DeepMind, the company on a mission to build Artificial General Intelligence (AGI), warned of the risks of AI—its “omni-use” nature and “asymmetrical effects,” as well as its inherent rogue tendencies and those of AI-powered humans—and demanded a “containment plan.” At the DLD AI Summit in Munich last week, Mozilla’s Mark Surman made the case for open-source AI as a potential alternative to the imminent threat of our world being colonized by the owners of walled-garden, black-box LLMs. On the same DLD stage, Palantir’s Markus Loeffler pointed to the military’s use of Generative AI, bluntly claiming that without it, “we wouldn’t even be sitting here in peace, and Ukraine would have already fallen to Russia.” Make no mistake, the “AI arms race” is an arms race.
On a more civilian note, there are plenty of well-documented cases of Generative AI leading to breakthroughs in science (e.g., designing new proteins), helping to scale up mental health services, potentially detecting and curing cancer, and more. Maybe AI could indeed enable humanity to realize its full potential.
“The future is the past. It’s a trap.”
But there are also good reasons—access to only a limited pool of codified human expression, a bias toward the English language, a strict optimization mindset, “generative inbreeding,” and other flaws—to assume that Generative AI will impoverish our cultural flourishing. Futurist Matt Klein, author of the ZINE newsletter, points to an exponential “hall of mirrors” effect and warns of a “recursive culture” in an “age of average,” to borrow the title of Alex Murrell’s essay. “That we’re so bullish on AI, something dependent upon the past, to help us chart our future is ironic,” Klein writes. “At this rate: The future is the past. It’s a trap.”
In the workplace, cost-saving job automation will affect middle-tier knowledge workers—the backbone of our societies—more profoundly than any other segment of the workforce. Not every journalist will become a moderator, not every teacher a psychedelic healer, and not every accountant a life coach.
We assure ourselves that certain human qualities are so unique that they cannot be automated, and that AI will never be able to replace human empathy, intuition, or creativity. But the thing is: Why would it want to? It will create its own mirror-world reality that doesn’t have to emulate us in order to marginalize us. Our clinging to inherent human traits may be nothing but a nostalgic, human-centered bias. It is plausible that Generative AI-enabled organizations will simply compress all those unique human qualities into quantifiable tasks that they can optimize and perform just well enough to cut costs and boost productivity.
If that happens, then Generative AI will become the razor blade of ruthless capitalism, further accelerating a world shaped by efficiency, extraction, and exponential growth, with even higher dividends for the few and increasingly severe disadvantages for the rest of us. Think of it as the ultimate optimization machine, seeking to optimize whatever can be quantified—and whatever it can’t optimize will ultimately lose its right to exist.
We must ringfence what must never be optimized.
Is it possible, in such a scenario, to build a very different kind of system—one that honors life and the human spirit in their most diverse forms and leaves space for intimacy, tenderness, and imagination? Is it possible to imagine using Generative AI not just to optimize us, but to make us better humans?
The House of Beautiful Business has launched various initiatives to explore this. One of them is our “AI for Life” group, an ongoing interdisciplinary conversation among members of our community to develop a vision for AI that is truly life-centered. In addition, we are conducting and publishing research, supporting self-organized efforts by our community such as the Q’s Collective work on AI ethics, and hosting events on AI, such as “Can AI Heal Us?” in New York. One of the speakers at that event, neuroscientist and learning expert Dr. Sará King, champions the development of “loving-awareness”—experiencing the world from a place of self-awareness and self-love—through AI-based “contemplative technology” that aids the development of contemplative practices. Practices that heal us and help us heal others.
The overarching goal must be to develop a new hybrid culture with AI that is worth living in. The only way to get there is to embrace AI as a co-creator of human culture without allowing it to co-opt all of human culture. The world of relentless optimization is unstoppable. So we must insist on a parallel world that appreciates—and ringfences—what must not be optimized, under any circumstances. Not a world that is analog, Luddite, escapist. And not the metaverse! But a world that combines AI with the humanities and the arts to keep the mysteries alive and even create new ones. Whatever remains mysterious—mystical—can be saved from optimization. Whatever we can save from optimization will remain beautiful.
Tim Leberecht