AI Doomsayers Blamed In OpenAI's Undoing
OpenAI has gone from ruling the world of artificial intelligence with ChatGPT to chaos, its chief executive ousted seemingly for advancing too fast and too far with the risky technology.
The exit of Sam Altman set in motion a series of events that saw the upstart company's biggest investor, Microsoft, swoop in to hire the toppled CEO and begin a process of building an OpenAI clone in the Redmond, Washington-based tech giant.
In some ways the outcome of the weekend saga is hardly a surprise, with many wondering how the board members could have been naive enough to think they could outmaneuver Altman.
Silicon Valley was left aghast by Altman's firing, with the investor community and OpenAI's own staff furious that the four-person board had stood in the way of the race into the AI age.
"We are not happy about it. We want stability here," said Ryan Steelberg, CEO of Veritone, a company that helps firms develop artificial intelligence.
Instead of OpenAI becoming the new Apple or Google, its harshest critics see a deeply troubled startup that fell victim to the pearl-clutching of an incompetent board.
"We reached this point because minuscule risks have been hysterically amplified by the exotic thinking of sci-fi mindsets, and clickbait journalism from the press," veteran venture capitalist Vinod Khosla, an early investor in OpenAI, wrote in The Information.
Other observers warned that the drama in San Francisco proved that AI was too vital to be left in the hands of its creators.
"This is an important reminder that as brilliant as the designers of tech like AI -- scientists or engineers -- are, they are still just fallible people," said Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights.
"That is why it is important not to just defer to them on a technology that everyone agrees has significant risks even as it promises tremendous benefits," he added.
Gary Marcus, a respected AI expert, said OpenAI's civil war "highlights the fact that we can't really trust the companies to self-regulate AI where even their own internal governance can be deeply conflicted."
Government regulation, notably by the tougher-minded European Union, was needed more than ever, he added.
OpenAI was created in 2015 with the goal of serving as a counterweight to Google, which was by far the leader in developing AI technologies that mimic the workings of the human brain.
Though nothing is known for sure, assumptions are rife that Altman's increasing efforts to monetize the company's flagship GPT-4 model, while keeping its inner workings secret, had become too problematic for the company's board.
Already, several senior OpenAI staff had left the company in 2021 to found rival Anthropic over concerns that Altman was moving ahead too recklessly.
Many are shocked that the board had the power it did, or was naive enough to think it could actually use it.
Three of those board members are thought to have ties to the effective altruism movement, which frets over the risks of AI but which critics say is cut off from reality.
Whatever their beliefs, by Monday the board was overseeing a company in name only, with virtually the entire staff pledging to quit for Altman's project at Microsoft if the board refused to resign.
"Unless OpenAI can block those departures, OpenAI is pretty much done at this point," analyst Rob Enderle told AFP.
This could mean a history-making victory for Microsoft, which has already seen its share price reach record levels over its ties to OpenAI.
"This is like the best possible scenario for them, and the OpenAI board I am sure is kicking itself. They were clearly detached from reality," said Carolina Milanesi, analyst at Creative Strategies.
© Copyright AFP 2024. All rights reserved.