It’s Time to Fire Your CEO
The “crazy ones” are at it again.
Dev Patnaik, CEO and founder of consultancy Jump Associates, recently surprised Forbes readers with a provocative argument: The true heir to Steve Jobs isn’t Tim Cook, Elon Musk, or Sundar Pichai, but billionaire pop queen Taylor Swift. Meanwhile, all eyes are on Sam Altman and the debacle over at OpenAI.
There are surely more twists in the road ahead but, at this point, it looks as though, in firing Altman, the OpenAI board has cut off its nose to spite its face. Taking on a charismatic CEO is a risky move. Apple learned this the hard way when it fired Steve Jobs and then hired him back, more powerful than ever. Altman was OpenAI’s chief ringleader, cheerleader, and fundraiser—he leaves pretty big shoes to fill. Judging by his instant hiring at Microsoft, we can expect his stature and influence to keep growing.
Losing the Plot–and the Future
Some of the Altman drama may have structural roots. OpenAI’s board has an unusual governance mandate: Board members are not obligated to maximize shareholder value, but to ensure that OpenAI’s technology is used to benefit society. You could argue, as Azeem Azhar does, that the board honored this mandate when it attempted to rein in its fickle CEO, a self-described “AI centrist” turned de facto AI accelerationist.
Sadly, the board’s actions were both poorly executed and short-sighted. The employee revolt underway is remarkable: 700 of OpenAI’s 770 employees have signed a petition demanding that Altman be reinstated as CEO, among them board member and OpenAI cofounder Ilya Sutskever, who says he regrets his role in the ousting. Tech journalist Casey Newton is spot on when he observes that “AI safety folks have been made to look like laughingstocks.” Newton thinks the Altman fiasco will spell the end of ethical resistance to new tech, paving the way for tech giants to floor the gas in pursuit of profit.
Are We Romanticizing Human Leadership?
It’s possible that all this leadership drama doesn’t actually have that much to do with the technology at stake. In fact, it’s hard not to wonder whether the Altman spectacle—be it tragedy or farce—has a lot more to do with the people in the wings. We humans have a thing for drama. We’ve proven this to ourselves for millennia, dating back to Ancient Greece, Sanskrit theatre, and Indigenous storytelling.
If this is the case, are we really so well poised to helm AI powerhouses in the first place? Would a more rational and objective leader have caused the chaos of last week? Could AI leadership protect companies and their shareholders from the sway of a powerful personality, a concern referred to as key-person risk?
Maybe the most interesting question emerging from this debacle is about the nature of good leadership. We tend to imagine great leaders as paragons of the finest human qualities: They are smart, brave, noble, diplomatic, empathetic. We tend to think of these qualities as uniquely human—characteristics that define what we’re capable of as a species and the best things we aspire to. But is there a small chance—is there any chance—that a machine could simulate these characteristics and become the kind of leader we respect?
We recently met with Niels Van Quaquebeke, a German leadership and organizational behavior professor who’s made a name for himself by rattling traditionalists in his field. (We’ll get a taste of his iconoclasm when he speaks at our festival in Tangier next year.) He and his colleague Fabiola H. Gerpott just shocked the business world with a study that challenges the notion that the C-suite is safe from an AI takeover. Their research suggests that we romanticize human leadership and that AI might actually be better at the job, capable of heightened astuteness, emotional intelligence, articulateness, and, most importantly, objectivity and consistency.
What about charisma, you ask? Conventional wisdom suggests that you can’t inspire people without some degree of brio or charisma, and that this sort of inspiration is essential for good work. But maybe we romanticize charisma, too. New research indicates that charisma could be understood and recreated by AI. Then again, maybe charisma’s value as a leadership quality is overdue for some interrogation—whether the leader in question is human or not. Charisma sounds like a virtue but, in practice, it can be more of a burden than a benefit.
Van Quaquebeke and Gerpott don’t rule out the role of human leaders altogether. They conclude that human leaders will likely always be around, but that the makeup of their direct reports will change. “They won’t be leading the humans within an organization, but leading the machines that lead the humans,” their report states.
Of course, this prediction opens a whole other can of worms: How will humans feel about answering to AI bosses? According to current studies, not so good. Automation is typically associated with lower-value work, which can reflect poorly on someone’s status in an organization and erode their self-esteem. But these risks are a matter of perception and interpretation, both of which could gradually change as we see more and more AI in leadership roles.
So should CEOs be worried?
Not if you look at the big winner of the OpenAI drama: Microsoft. CEO Satya Nadella has played his cards well, exploiting the debacle to “acquihire” top tech talent (Altman and potentially most of his team). Would an AI leader have acted so swiftly and strategically?
Or maybe we can take enough reassurance from Martin Weiss, CEO of Hubert Burda Media: “AI will be too smart to want the job of a CEO.”
Tim Leberecht & Martha Schabas