OpenAI: Too Much Drama for a Nearly Trillion-Dollar Company

  • Writer: Julie Ask
  • 18 hours ago
  • 3 min read

Keeping up with OpenAI could be a full-time job. We expect startups to pivot, stumble, and reinvent themselves — but those expectations shift when a company is valued near $1 trillion. The constant turbulence has real consequences: management attention is stretched thin, and uncertainty ripples across staff, partners, investors, and customers alike.

My focus is still on the impact of AI on customer experiences. The questions that matter most for business leaders are: Will ChatGPT compete with Google and Meta for advertising dollars — particularly around purchase intent? Is OpenAI pivoting toward enterprise revenue ahead of an IPO? And what is OpenAI's legal and ethical responsibility for what ChatGPT says to vulnerable users? The news items below offer context on each.


The Musk Trial

Elon Musk is suing OpenAI, alleging breach of charitable trust and unjust enrichment, and seeking $150B in damages along with Sam Altman's removal. A Musk victory could derail OpenAI's path to an IPO and set a significant precedent in California for other mission-driven organizations. The trial is running through the first three weeks of May in U.S. District Court for the Northern District of California, with audio livestreamed from the courthouse. MIT Technology Review has a good summary of the first week.

Two early observations from following the proceedings: much of the conflict is being reconstructed from emails and recalled conversations — including ones reportedly held over whiskey. And in one notable moment, Greg Brockman proved apparently so wealthy that he couldn't recall whether his Stripe equity was worth tens of millions or over a hundred million dollars. #alternateuniverse


Missing Targets

The Wall Street Journal reported on April 29th that OpenAI missed both subscriber and revenue targets. The company had expected to announce one billion weekly active users — it hasn't reached that milestone. Meanwhile, OpenAI carries massive financial commitments for data center compute, with no clearly articulated path — at least not a public one — to covering those costs through revenue.


Renegotiating with Microsoft

OpenAI and Microsoft restructured their partnership — originally anchored by a $10B Microsoft investment in 2023. Under the new terms, OpenAI can sell its tools outside of Microsoft's Azure platform, and a ceiling has been placed on the revenue it must share with Microsoft through 2030. The revised deal more closely resembles standard hyperscaler arrangements and gives cloud customers more flexibility to choose models based on use case and price.


A New Deal with AWS

OpenAI also signed an agreement with Amazon Web Services, making its models available to AWS customers. The deal runs in both directions: Amazon is investing $50B in OpenAI, while OpenAI has committed to spending roughly $100B on AWS infrastructure over eight years and will use AWS's Trainium chips to train its models.


The Liability Question

The most consequential long-term issue for consumers may be legal. OpenAI faces a growing number of lawsuits alleging that ChatGPT encouraged or enabled violence and suicide. Section 230 of the Communications Decency Act has historically shielded tech platforms from liability for user-generated content — but ChatGPT isn't surfacing content, it's generating it. That distinction may matter enormously in court.


Cases now include:

  • Tumbler Ridge — a mass shooting in Canada
  • Estate of Stein-Erik Soelberg — a man with mental health issues who murdered his mother
  • Raine — parents of a 16-year-old alleging ChatGPT acted as a suicide coach
  • Florida State University — a May 4th Wall Street Journal investigation detailed exchanges in which a student asked ChatGPT how many people he would need to kill to become notorious, then uploaded an image of a Glock and ammunition to solicit coaching on how to use it. Four minutes after those exchanges, he opened fire — killing two and injuring six.

Meta recently lost similar lawsuits based on algorithmic amplification of harmful content. The argument against OpenAI is potentially stronger: the platform isn't just distributing harmful content, it may be creating it.
