Why OpenAI Is Fueling the Arms Race It Once Warned Against

Two new books outline four reasons for OpenAI’s pivot away from its noncommercial origins. 

Illustration: Ard Su for Bloomberg

On June 11, 2020, more than two years before the launch of ChatGPT brought generative artificial intelligence to the mainstream, OpenAI released its first commercial product — an application programming interface (API) that let companies build features on top of what was then its most powerful AI system, GPT-3.

Suddenly, developers could use OpenAI’s technology to spit out sonnets, social media posts and code, just as millions of users would do in late 2022 through a more intuitive chatbot interface. The product was not open source but rather intended to be a moneymaker. It was introduced earlier than some employees wanted in part because of (untrue) rumors that Google was about to put out its own AI model. And when it launched, OpenAI did not yet have a formal trust and safety team in place to address misuses of the technology, nor did it have clear rules for acceptable uses.