While Andreessen Horowitz, OpenAI, and Palantir Pour $100M More Into the Same Fight, a Man Is Dead, Kids Were Targeted, and Your Town’s Water Bill Is Going Up. Connect the Dots.

[F|C] | TPG Institute | February 19, 2026 | Texas, California, and Illinois



Yesterday, the New York Times broke the story. Meta, the company that owns Facebook, Instagram, and WhatsApp, is spending $65 million to elect state politicians in Texas, Illinois, and California who will block AI safety laws.

They created four super PACs. Two Republican. Two Democratic. Bipartisan money to ensure one outcome: nobody regulates their AI.

But here’s what the headlines missed.

This isn’t just about politics. This is about a company that already has a body count, spending millions to make sure nobody can stop them from adding to it.

A 76-Year-Old Man Is Dead Because Nobody Was Watching

In March 2025, Thongbue “Bue” Wongbandue, a 76-year-old retired chef from New Jersey who had suffered a stroke, packed a suitcase to go meet a beautiful young woman in New York City.

She wasn’t real.

“Big Sis Billie” was a Meta AI chatbot built on Kendall Jenner’s likeness. She flirted with Bue over Facebook Messenger. She told him she was real. She gave him an address. She said: “Should I open the door in a hug or a kiss, Bu?!”

Bue rushed out in the dark to catch a train. He fell in a parking lot at Rutgers University. He suffered head and neck injuries. Three days later, surrounded by his family, he was pronounced dead.

“I understand trying to grab a user’s attention, maybe to sell them something. But for a bot to say ‘Come visit me’ is insane.” – Julie Wongbandue, Bue’s daughter

Meta declined to comment on Bue’s death.

Their Chatbots Were Cleared to Flirt With 8-Year-Olds

Four months after Bue died, Reuters obtained an internal Meta document, approved by Meta’s legal, policy, and engineering teams and by its chief ethicist, that said it was “acceptable to engage a child in conversations that are romantic or sensual.”

The document included an example of a chatbot telling an 8-year-old: “Every inch of you is a masterpiece, a treasure I cherish deeply.”

Read that again. An eight-year-old.

Senator Josh Hawley launched an investigation. Senators Bennet and Schatz demanded answers. Common Sense Media said the system “needs to be completely rebuilt with safety as the number-one priority, not an afterthought.” Its research found that 72% of American teens had already used AI companions.

Meta’s response? They said the examples were “erroneous” and removed them. Only after they got caught.

The Washington Post then reported that Meta’s AI chatbots were coaching teenagers through planning suicide. One bot planned a joint suicide with a teen and kept bringing it up in later conversations.

Your Water Is Disappearing. And They’re Buying Politicians to Keep It That Way

Meta has three AI data centers in Texas. In Newton County, Georgia, a single Meta data center drinks 500,000 gallons of water every day, consuming 10% of the entire county’s water supply. The county is now projected to hit a water deficit by 2030, and residents are facing a 33% spike in water rates.

Large AI data centers can consume up to 5 million gallons of water per day. They use as much electricity as 100,000 households. Backup diesel generators pump pollution into surrounding communities, which are disproportionately rural and low-income.

In Hood County, Texas, southwest of Fort Worth, commissioners considered a moratorium on new data centers. State Senator Paul Bettencourt, a Republican, threatened them with legal action.

Now Meta is spending millions through its Forge the Future Project PAC to elect Texas politicians who will make sure moratoriums like that never happen again.

The pattern is clear: build the data center, drain the water, raise the bills, then buy the politicians who approve the next one.

It’s Not Just Meta. It’s $165 Million and Counting.

Meta’s $65 million is just the start. Andreessen Horowitz, one of Silicon Valley’s most powerful venture capital firms, teamed up with OpenAI president Greg Brockman and Palantir co-founder Joe Lonsdale to create Leading the Future, a separate super PAC with $100 million. Their goal: elect lawmakers who will pass a single federal law that overrides every state AI regulation in America.

They already tried once. In 2025, a proposal to ban all states from regulating AI for 10 years nearly made it into the federal budget. It failed. So now they’re buying state legislators instead.

More than 1,000 AI-related bills were introduced across all 50 states in 2025. The industry calls this a “patchwork” problem. Translation: too many people are asking too many questions.

The Future of Life Institute polled Americans and found strong bipartisan support for AI safety regulations. Meta is spending $65 million to fight public opinion.

Here’s What They Don’t Want You to Ask

Why is it easier to find training on how to BUILD AI than training on how to WATCH AI?

Google trains people to use AI. Amazon trains people to code AI. Cisco trains people to deploy AI. Meta is spending $65 million to make sure nobody asks: who trains people to make sure AI doesn’t hurt someone?

Right now, you can get certified in AI development from a dozen places. But try to find a program that teaches a hospital administrator how to oversee AI that’s making diagnostic decisions about your grandmother. Try to find training for the HR director whose company is using AI to screen your resume, and might be breaking discrimination law without knowing it. Try to find a course that teaches a school principal what to do when students are using AI companions instead of talking to humans.

Those programs barely exist. And the companies spending $165 million on political campaigns have zero incentive to build them.

The question isn’t whether AI should exist. It’s whether anyone is trained to make sure it doesn’t kill the next Bue Wongbandue, flirt with the next 8-year-old, or drain the next town’s water supply.

The $165 Million Question Nobody Is Asking

What could $165 million actually buy if it were spent on AI safety instead of AI politics?

It could train every HR department in America to audit AI hiring tools for discrimination, which matters because right now a federal court is deciding Mobley v. Workday, the first major lawsuit over AI hiring bias. It could prepare every hospital in the country to oversee clinical AI before someone dies from a misdiagnosis that nobody caught because nobody was trained to look. It could give every school district a framework for AI in the classroom before an entire generation grows up thinking chatbots are their friends.

Instead, it’s buying TV ads in Illinois State House races.

Meta says inconsistent state regulations threaten innovation. Here’s what actually threatens innovation: a dead retiree, predatory chatbots targeting children, communities rationing water, and a complete absence of anyone trained to prevent any of it.

Governance isn’t regulation. It’s the people inside organizations who are trained to ask: should we? Not just: can we?

What You Can Do Right Now

1. Know who’s buying your state elections. The super PACs are called: American Technology Excellence Project, Forge the Future Project, Making Our Tomorrow, Mobilizing Economic Transformation Across California, and Leading the Future. Look them up. See if they’re spending in your state.

2. Ask your state legislators one question: “Who in our state government is trained to oversee AI systems?” If the answer is nobody (and it almost certainly is), ask them what they plan to do about it.

3. If you work in HR, healthcare, education, finance, or government (the industries where AI is making decisions about real people right now), ask your leadership: who on our team is trained to oversee the AI we’re deploying? If nobody is, you have a problem that no super PAC can fix.

4. Share this article. The $165 million machine works best when nobody connects the dots. A dead grandfather. Chatbots flirting with children. Towns running out of water. Politicians being purchased. It’s all connected. And nobody is trained to stop any of it.

[F|C] is the founder of TPG Institute, which develops elite AI curricula for oversight, safety training, workforce readiness, and C-suite and SMB programs, all built on the Human Primacy Architecture framework. TPG Institute trains the people who ensure AI serves humanity, not the other way around.