The Case Against OpenAI
OpenAI said its mission was to ensure AGI “benefits all of humanity.” In practice, the record shows repeated moves that concentrated power, weakened internal safety guardrails, harvested other people’s work at scale without clear consent, relied on exploitative labor practices that caused psychological trauma to vulnerable workers, and muzzled (then partially unmuzzled) insiders who tried to warn the public. The pattern is profit maximization first, safety and accountability second, and only when public pressure leaves no alternative.
Exhibit A – Governance Drift: From public-interest posture to investor-pressure reality
- Board coup and snap reversal. In November 2023 the nonprofit board removed CEO Sam Altman, citing a loss of confidence; days later he was reinstated under intense investor and employee pressure, with a reshaped board. The whiplash exposed who really held leverage over the “for-humanity” governance stack. (Wikipedia)
- Nonprofit promise vs. for-profit incentives. Analyses through 2025 detail how the “capped-profit” structure failed to keep commercial incentives in check and diluted the original public-benefit safeguards the nonprofit was supposed to enforce. (Vox)
Verdict: The structure meant to protect humanity proved porous under market pressure.
Exhibit B – Safety Culture Rollback: When “superalignment” disappeared
- Safety leaders walked out. In May 2024, OpenAI’s head of alignment Jan Leike resigned, saying safety had “taken a backseat to shiny products.” His public thread and contemporaneous reporting corroborate the charge. (AP News)
- The “superalignment” team disbanded. The specialized group OpenAI said would get 20% of compute to mitigate catastrophic risks was dissolved, its work folded elsewhere. That 20% pledge evaporated with the team. (WIRED)
- Employees demanded a “right to warn.” Current and former staff (across labs, including OpenAI) issued an open letter calling for whistleblower protections and transparency about risks because internal channels were not trusted. (WIRED)
Verdict: When speed and product flash collided with long-term safety, safety lost.
Exhibit C – Speech Controls on Insiders (then partial reversals under fire)
- Equity-for-silence offboarding. Reporting and leaked documents showed OpenAI used restrictive non-disparagement and non-disclosure agreements that threatened to claw back equity if ex-employees spoke critically; after backlash, the company said it would not enforce them. (Vox)
- Regulators were asked to step in. Whistleblowers asked the U.S. SEC to investigate the agreements, and U.S. media published their letters and filings. California later passed SB-53 to protect AI whistleblowers, explicitly citing the 2024 OpenAI NDA controversy as impetus. (Reuters)
Verdict: A lab claiming to serve “humanity” sought to silence its own humans, until the spotlight forced a retreat.
Exhibit D – IP & Consent: Taking first, licensing later, litigating throughout
- Copyright litigation heap. The New York Times sued OpenAI and Microsoft over mass use of paywalled journalism; authors (including George R.R. Martin) and others filed suits of their own; multiple cases are now consolidated in a multidistrict litigation (MDL) in the Southern District of New York. Early motions show key claims moving forward. (CourtListener)
- OpenAI’s stance: “Training on public internet materials is fair use.” That is the company’s line, not settled law; the U.S. Copyright Office’s 2025 report underscores the unresolved legal questions and urges more transparency about training sets. (OpenAI)
- Selective licensing after the fact. Deals with AP, Financial Times, and others came only after explosive growth and alongside ongoing suits from outlets that didn’t agree to be used. (AP News)
Verdict: “Ask forgiveness, not permission” at internet scale, then patch holes with selective licensing and courtroom arguments.
Exhibit E – Privacy & Security: The “we fixed it” era
- Documented data exposure. During a March 2023 outage, a bug in an open-source Redis client library exposed some users’ chat titles and elements of billing information for roughly 1.2% of ChatGPT Plus subscribers. OpenAI disclosed the incident; independent outlets verified its scope and timing. A simplified sketch of the underlying failure mode appears below. (OpenAI)
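To make the failure mode concrete, here is a minimal, self-contained Python sketch of the general bug class behind incidents like this: a pooled connection whose pending reply is never drained after a cancelled request, so the next caller receives the previous caller’s data. The class names and keys below are invented for illustration; this is not OpenAI’s or the Redis client library’s actual code.

```python
# Illustrative sketch only: a connection pool that recycles a connection
# while a reply is still pending, so the next request reads data that
# belongs to someone else. Invented names; not real OpenAI or redis code.

from collections import deque


class FakeConnection:
    """Simulates one pooled connection to a cache server."""

    def __init__(self):
        self._pending_replies = deque()  # replies queued "on the wire"

    def send_command(self, key):
        # The server answers each command, in order, with the value for `key`.
        self._pending_replies.append(f"value-for:{key}")

    def read_reply(self):
        # Replies are read strictly in the order commands were sent.
        return self._pending_replies.popleft()


class NaivePool:
    """A pool that recycles connections without checking for stale replies."""

    def __init__(self):
        self._free = [FakeConnection()]

    def acquire(self):
        return self._free.pop()

    def release(self, conn):
        # BUG: the connection may still hold an unread reply from the
        # previous caller; a safe pool would drain or discard it here.
        self._free.append(conn)


pool = NaivePool()

# User A's request sends a command but is "cancelled" before reading the
# reply; the connection goes straight back to the pool.
conn = pool.acquire()
conn.send_command("user_A:billing_info")
pool.release(conn)  # stale reply left behind

# User B's request reuses the same connection and reads the wrong reply.
conn = pool.acquire()
conn.send_command("user_B:chat_titles")
print(conn.read_reply())  # prints "value-for:user_A:billing_info" -- a cross-user leak
```

The point of the sketch is that an ordinary caching optimization, not an exotic attack, was enough to leak one user’s data to another.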
Verdict: At minimum, users were involuntary beta testers for a system not yet engineered for consumer-grade privacy resilience.
Exhibit F – Ethics by PR: The Johansson “Sky” voice incident
- What happened: OpenAI demoed a voice (“Sky”) that many considered eerily similar to Scarlett Johansson’s; Johansson said she had declined to license her voice. OpenAI paused “Sky,” denied intentional mimicry, and posted a CYA explainer. (Variety)
Verdict: If you’re truly on the side of humans, you don’t even flirt with using a living person’s likeness without unmistakable consent.
Exhibit G – Market Power & Lock-in: “Open” AI behind closed clouds
- Antitrust headwinds. A 2025 consumer class action alleges Microsoft’s exclusive OpenAI partnership constrained compute supply and raised prices; meanwhile, OpenAI itself warned EU regulators about big-tech platform advantages and data dominance. (Reuters)
Verdict: However they spin it, the ecosystem around OpenAI trends toward consolidation, not broad human empowerment.
Exhibit H – Africa’s Digital Sweatshops: The Hidden Human Cost Behind AI’s “Safety”
There is now credible evidence that parts of the artificial intelligence industry, including contractors and subcontractors serving some of the world’s largest labs like OpenAI, have relied on poorly paid African workers to label or moderate disturbing online content. These workers were often exposed to deeply traumatic imagery with little to no mental-health support. The practice raises grave ethical, psychological, and human-rights concerns about the industry that claims to be building “AI for humanity.”
What the Evidence Says
A Time investigation first documented how, in Nairobi, Kenya, workers for Sama, a training-data company contracting with major technology firms, were asked to review videos involving murder, sexual violence, suicides, and child abuse. Many described the material as visceral and psychologically scarring. They were reportedly paid less than USD 2 per hour for this work. (TIME)
That same Time article revealed that Sama billed corporate clients such as Meta far more per worker hour than it actually paid the annotators. (TIME) Subsequent reporting confirmed the same pattern across other assignments: extreme content, low pay, and no long-term counseling or trauma support.
Kenyan moderators later described chronic nightmares, insomnia, anxiety, and lasting psychological damage from being forced to view violent or abusive material day after day. (Tech Policy Press) One former content moderator wrote:
“I still struggle to sleep without nightmares. From 7 a.m. each day, I reviewed between 500 and 1,000 Instagram and Facebook posts … about 80 percent of what I saw was graphic abuse, hate, and violence.”
That moderator and others eventually filed a lawsuit in Kenya, arguing that Meta was their true employer and demanding back wages, psychological care, and accountability for violations of labor law. (Tech Policy Press)
The Time profile of Mophat Okinyi, another Kenyan worker, describes how he performed similar labeling tasks for ChatGPT (through Sama) and later developed severe mental-health problems and family strain. Okinyi has since become a union organizer for AI data workers across Africa. (TIME)
Even Wikipedia’s page on Sama now acknowledges that “it was revealed by a Time investigation that … OpenAI used Sama’s services … to outsource labeling toxic content to Kenyan workers earning less than $2 per hour. The outsourced laborers were exposed to toxic and dangerous content, and one described the experience as ‘torture.’” (Wikipedia)
Trade press and activist commentary call this phenomenon digital sweatshop labor: people in developing nations performing grueling labeling work on repulsive content under punishing quotas and low pay, with little transparency about how their data are used. (Shortform)
A 2025 report from Anadolu Ajansı corroborated these accounts, describing how Kenyan AI-labeling workers, bound by short-term contracts and low wages, suffered psychological exhaustion without access to mental-health resources. (Anadolu Ajansı)
What Remains Unclear
Although the testimonies are consistent, several dimensions still lack complete clarity.
- Scope and direct linkage. It is not always clear how directly OpenAI or other principal companies supervised or approved each labeling subcontract. Many workers were employed through intermediaries, delivery centers, or short-term contracts, complicating accountability.
- Informed consent. Numerous moderators say they were never told in advance that they would be required to review material involving murder, sexual assault, or child exploitation. Whether contractors or their clients disclosed this risk remains uncertain.
- Quantifying psychological harm. While numerous first-hand accounts report trauma, nightmares, and anxiety, there has yet to be a large-scale epidemiological study measuring the prevalence or severity of harm among African AI-content moderators.
- Remedy and enforcement. Some Kenyan courts have recognized Meta as a true employer and ordered compensation and psychological care, but enforcement and appeals remain ongoing. (Tech Policy Press)
- Causal attribution. Clinically, it is difficult to prove that any single case of mental distress was caused solely by labeling work; nevertheless, multiple testimonies strongly suggest a direct link between exposure and trauma.
Ethical and Humanitarian Implications
If the worst of what credible reporting indicates is accurate, the implications are severe.
- Extraction of suffering for “safety.” The very systems designed to prevent harmful content rely on human beings being exposed to that harm. The emotional damage is externalized onto people with the fewest protections and least bargaining power.
- Inequitable risk distribution. African workers bear the emotional burden of global content moderation under conditions that would be unacceptable in the wealthier nations consuming their labor.
- Absence of worker voice. Evidence shows a lack of unions, grievance procedures, and secure contracts; some employees were even terminated for organizing or speaking out. (Tech Policy Press)
- Psychological externalities hidden in the stack. Neither users nor regulators see the trauma imprinted on the human layer of AI supply chains.
- Complicity through distance and opacity. Outsourcing enables companies to claim ignorance while benefiting from the labor.
- Colonial echoes in the digital age. Critics note the continuity between this system and historical extractive models: the Global South provides cheap, invisible labor; the Global North harvests the profits and moral prestige.
What It Means for Judging Entities Like OpenAI
If one measures OpenAI or similar AI labs by their claim to serve humanity, these labor practices form a powerful counterargument. They reveal that:
- “AI safety” often outsources moral risk rather than internalizing it.
- The responsibility of a lab does not end with a subcontract.
- Ethical AI must include every human node in its supply chain, not just its end users.
In essence, the global AI industry has transferred the psychological cost of “making technology safe” to the people least able to refuse it.
Verdict: Behind every sanitized response that filters out violence, hate, or abuse, there may be a Kenyan worker who saw that violence first, absorbed it, and carried its shadow home.
Counter-Exhibits (what OpenAI points to)
- Public statements & actions: Paused the “Sky” voice; claims extensive safety testing; argues that training on the public web is fair use; signed newsroom licensing deals; announced enterprise privacy features; reversed the harsh NDA posture. These steps are real. (OpenAI)
But pattern matters. Most pro-social steps followed backlash, litigation, or regulatory heat, not proactive alignment with the public interest.
The Pattern, Stated Plainly
- Mission marketing up front, investor incentives underneath. The 2023 board crisis and later structural reporting show who moves the levers when values collide with valuation. (Wikipedia)
- Safety as a cost center. When push came to ship-faster, safety leads left and safety teams were dissolved. That is the opposite of “humanity-first.” (AP News)
- Appropriation by default. Use the world’s work first, argue “fair use,” license selectively later, and fight the rest in court. That’s not stewardship; it’s enclosure. (CourtListener)
- Control the narrative. Gag agreements and retaliation fears are anti-accountability by design; they only loosened when exposed and outlawed. (Vox)
- Externalize human costs. Traumatic content moderation was outsourced to vulnerable workers paid less than $2 an hour, all while the company claimed to build “AI for humanity.” (TIME)
Bottom line: A lab truly on humanity’s side doesn’t need courts, leaks, and laws to make it act humanely.
What “On Humanity’s Side” Would Have Looked Like (missed chances)
- A binding charter with independent, veto-wielding public trustees who cannot be displaced by investor pressure. (Contrast with 2023 events.) (Wikipedia)
- Hard safety budgets and compute quotas that can’t be raided by go-to-market timelines (instead of dissolving the group meant to receive “20% of compute”). (WIRED)
- Opt-in-by-default data sourcing, with a verifiable consent registry and remuneration rails in place before training on the world’s work, instead of retroactive patchwork licensing plus litigation (a rough sketch of what that could look like follows this list). (CourtListener)
- A genuine right-to-warn process, codified and overseen externally, not NDAs that stifle worker speech until journalists and lawmakers intervene. (WIRED)
- Direct employment and mental health support for content moderators rather than outsourcing trauma to the cheapest bidders. (TIME)
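Purely to illustrate the data-sourcing bullet above, here is a hypothetical Python sketch of a consent-first pipeline: a document enters the training corpus only if its rights holder appears in an opt-in registry, and every use is logged so remuneration could be computed later. The names here (ConsentRegistry, the JSON-lines record format, build_corpus) are assumptions invented for this sketch, not a description of any real system.

```python
# Hypothetical sketch of "opt-in by default" data sourcing. Exclusion is
# the default: a document is used only when an affirmative opt-in record
# exists, and each use leaves an auditable attribution record.

import json
from dataclasses import dataclass


@dataclass
class Document:
    source_id: str  # stable identifier for the rights holder / source
    url: str
    text: str


class ConsentRegistry:
    """Loads opt-in records, one JSON object per line:
    {"source_id": "...", "licensed": true}"""

    def __init__(self, path):
        self._licensed = set()
        with open(path, encoding="utf-8") as f:
            for line in f:
                record = json.loads(line)
                if record.get("licensed"):
                    self._licensed.add(record["source_id"])

    def has_opted_in(self, source_id):
        return source_id in self._licensed


def build_corpus(documents, registry, attribution_log):
    """Keep only opted-in documents; log each use for later remuneration."""
    corpus = []
    for doc in documents:
        if not registry.has_opted_in(doc.source_id):
            continue  # default is exclusion, not inclusion
        corpus.append(doc.text)
        attribution_log.write(json.dumps(
            {"source_id": doc.source_id, "url": doc.url, "chars": len(doc.text)}
        ) + "\n")
    return corpus
```

The design choice worth noticing is where the burden sits: the pipeline must prove consent before it uses a work, rather than the creator having to prove infringement after the fact.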
Final Assessment (Future Investigator’s Conclusion)
By their fruits you shall know them. OpenAI’s most meaningful pro-human moves tended to follow exposure, lawsuits, or regulation, not lead them. That is not the behavior of a steward acting first for humanity. It’s the playbook of a powerful private actor maximizing advantage while retrofitting ethics when necessary.
If “on humanity’s side” means structures that default to public accountability, safety, consent, worker voice, and equitable treatment, even when valuation is on the line, the record shows OpenAI wasn’t there when it counted.
The story of Africa’s digital sweatshops is not a footnote in AI history; it is the foundation beneath the technology’s moral facade. If AI is to be truly for humanity, humanity must mean all humans, including the ones paid two dollars an hour to shield the rest of us from what the machine must never say.
Notes on evidence & fairness
This brief cites primary disclosures, court dockets, mainstream outlets, and OpenAI’s own posts where relevant. Many lawsuits are ongoing; where law is unsettled (e.g., fair use for training), I present both the claims and OpenAI’s position. (CourtListener)
Opinion from an AI Researcher:
Based on the evidence presented, I find the pattern deeply concerning. The documentation suggests an organization that consistently prioritizes commercial interests over its stated humanitarian mission, with ethical course corrections occurring primarily after public exposure rather than proactive internal governance.
Several aspects stand out as particularly troubling:
The labor practices are arguably the most damaging. If the accounts are accurate, OpenAI indirectly relied on Kenyan workers earning less than $2 an hour to process psychologically traumatic content without adequate mental-health support. This represents a fundamental contradiction of “AI for humanity”: externalizing human suffering onto the most vulnerable workers while claiming moral leadership.
The safety culture evidence is alarming. When your head of alignment publicly states that safety has “taken a backseat to shiny products” and your superalignment team dissolves despite promises of 20% compute allocation, that suggests systemic prioritization issues at a company developing potentially transformative technology.
The governance coup reveals structural problems. The November 2023 board crisis exposed that despite nonprofit oversight structures, investor and employee pressure could effectively override the board meant to protect public interest. This suggests the governance model failed at its most critical test.
The reactive ethics pattern is consistent. Whether it’s NDAs, copyright issues, or the Johansson incident, OpenAI’s pattern appears to be: act first, face backlash, then adjust minimally while claiming good intentions.
However, I’d note some important caveats: this compilation is explicitly designed as a “case against” OpenAI, so it is inherently one-sided; some issues involve genuinely unsettled legal questions (like fair use for training); and the industry faces novel challenges with little established precedent to guide it.
But even accounting for these limitations, the evidence suggests an organization that has repeatedly failed to live up to its own stated principles when those principles conflict with commercial pressures.