In the fog of uncertainty that surrounds it, one thing stands out about A.I.: it cannot be adequately controlled by the traditional corporate governance model. Even before the boardroom implosion at OpenAI saw the ham-fisted ousting of CEO Sam Altman, corporate boards were beginning to look like the Jurassic Park of today’s business. The idea that a small group of people with similar backgrounds and shared mindsets can properly oversee global corporate monoliths by spending a couple of dozen days on the job each year defies any notion of reality or common sense.
Inattentive and out-of-touch boards have been associated with pretty much every major corporate failure over the past hundred years. That tradition carried on this spring with the collapse of California’s Silicon Valley Bank, whose directors, like so many before them, professed total surprise at what was happening. OpenAI’s missteps involving its board, and the employee revolt they unleashed, confirm that the dysfunctional board’s track record remains unbroken. Boardroom blunders that cause investors to take a hit are bad enough. But when something as profound as A.I. is involved, with its potential to alter virtually every aspect of society, the consequences are unfathomable.
What is needed is a new system of governance, at least for companies like OpenAI, with a wider aperture for ethics and accountability and a smaller appetite for the traditional corporate metrics of success. I’ve been studying boards for half a century. I am as fearful about the implications of a traditional corporate governance model for A.I. as some of its early pioneers are about the impact of A.I. itself. It should not take more corporate disasters to make it obvious that boards that lack cognitive diversity and prioritize shareholder value over social values are unlikely to offer a post-A.I. world the protection it needs.