OpenAI has added retired U.S. Army General and former National Security Agency Director Paul Nakasone to its board of directors. It’s the latest move at the AI firm, which has been dealing with continued reshuffling since CEO Sam Altman was temporarily ousted last fall, including a number of recent high-profile departures.
Nakasone will also join the OpenAI board’s Safety and Security Committee, a new group that it says is “responsible for making recommendations to the full board on critical safety and security decisions for all OpenAI projects and operations.”
Here’s what to know about Nakasone:
Nakasone was a career Army officer
His interest in the digital age was sparked in the post-9/11 era, according to a 2020 profile in Wired. He served in both command and staff positions across all levels of the U.S. Army, assigned to cyber units domestically and in Korea, Iraq, and Afghanistan.
He was a Trump appointee
In 2018, former President Donald Trump tapped Nakasone to lead the NSA and U.S. Cyber Command. Nakasone came into the role as morale at the agency was reportedly suffering amid a series of leaks regarding its secret hacking tools.
Much of Nakasone’s time spent leading Cyber Command involved countering foreign efforts to meddle in American elections. He created a so-called Russia Small Group, consisting of experts within Cyber Command and the NSA, to home in specifically on Russia’s attempts at U.S. election interference.
Nakasone ended up being the longest-serving leader of U.S. Cyber Command. Air Force General Timothy Haugh succeeded him in February.
He’s well-respected in D.C.
Nakasone has long been widely respected throughout the cybersecurity and military communities. “There’s nobody in the security community, broadly, who’s more respected,” Democratic Senator Mark Warner of Virginia told Axios.
That Washington experience will likely be tremendously beneficial to OpenAI as the company works to gain public trust in its ability to safely build toward its stated goal of superintelligence.
Nakasone also arrives at a time when OpenAI is under heightened scrutiny over the safety of its AI systems and the safeguards it has in place. That concern was amplified recently, after a handful of current and former employees signed a public letter warning that the technology poses risks to humanity. “AI companies have strong financial incentives to avoid effective oversight,” the letter reads, “and we do not believe bespoke structures of corporate governance are sufficient to change this.”
Cofounder Ilya Sutskever, who helped lead a safety team that worked to ensure artificial general intelligence didn’t turn on humans, left the company in May. Jan Leike, the team’s other leader, also quit, sharing a lengthy thread on X that criticized the company and its leadership.
“Artificial intelligence has the potential to have huge positive impacts on people’s lives, but it can only meet this potential if these innovations are securely built and deployed,” OpenAI board chair Bret Taylor said in a statement. “General Nakasone’s unparalleled experience in areas like cybersecurity will help guide OpenAI in achieving its mission of ensuring artificial general intelligence benefits all of humanity.”