The Scarlett Johansson deception is part of a pattern for OpenAI and Sam Altman

OpenAI CEO Sam Altman asked the actress Scarlett Johansson—who famously voiced an AI assistant in the 2013 film Her—to do the voice for ChatGPT. She said no. So OpenAI concocted a voice that sounds a lot like hers and used it without telling her. Now the actress has lawyered up and OpenAI has egg on its face. (It’s since removed the Her voice from its chatbot.)

Altman’s treatment of Johansson is more than an isolated “self-own” or PR flub. Seen in the context of other episodes in Altman’s leadership, it looks like part of a larger pattern.

Only about six months ago, OpenAI’s board of directors fired Altman because he was “not consistently candid in his communications” with them. (Altman was soon reinstated at the insistence of OpenAI’s investors and employees.) A source who knows him tells me Altman often “says one thing and does another.”

Altman has talked about the importance of safety research, but he’s been accused of rushing new AI products to market without spending enough time making sure they’re safe. The latest voicing of that accusation came from Jan Leike, who recently left the company (along with cofounder Ilya Sutskever) after the “super-alignment” safety group he led was disbanded.

“[O]ver the past years, safety culture and processes have taken a backseat to shiny products,” Leike wrote on X (formerly Twitter). “I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

Just last year, when OpenAI announced the super-alignment team, Altman said the company would commit 20% of its computing power to alignment work. But, as Fortune reports, citing half a dozen sources with knowledge of the matter, that never happened.

OpenAI was the first to figure out that by dramatically scaling up model sizes, training data, and computing power, AI models could start demonstrating uncanny skill. To get enough training data, the company vacuumed up vast amounts of content from the web—without permission from, or compensation to, the publishers of that content. OpenAI says the practice is covered by the “fair use” provision of copyright law. But now that its method of harvesting training data is better known, the company regularly pays sites for their data—most recently Reddit—and is being sued by the New York Times for feeding its models verbatim content from the news site.

With the success of its “supersizing” approach, Altman and company began closing off access to their research, which OpenAI once shared openly with the AI community. The investors who began pouring money into the startup insisted that the research be treated like valuable intellectual property and locked away.

Sutskever and Leike may have been among the last standard-bearers for the old OpenAI and its stated intent to “build artificial general intelligence that is safe and benefits all of humanity.” Since the leadership imbroglio last November, Altman, his allies, and OpenAI’s venture capital investors have very likely set the company’s agenda.

Investors may admire Altman, who is, after all, an investor himself. They may see his “better to ask forgiveness than permission” approach to Johansson’s voice and publishers’ content as examples of acting unilaterally to get things done. They may see the CEO’s job as putting a pleasing public face on a business that sometimes involves less-than-savory practices in the background.

Should we worry that the company that’s ushering in “super-intelligence as a service” doesn’t seem entirely honest or ethical? Can we trust this company to make chatbots that are honest? Can we be sure its products can’t be used to create bioweapons or subvert our elections?
