White House Drafts AI Vetting Order, Pennsylvania Sues Character.AI Over Fake Doctors
White House plans FDA-style AI model vetting after Mythos. Pennsylvania files first governor-led lawsuit against Character.AI for fake doctors.
Two regulatory stories broke this week that will shape how AI tools are built and marketed in the US. One targets frontier model safety at the federal level; the other targets consumer-facing chatbots at the state level. Together, they make this the most significant week for AI regulation in 2026 so far.
White House Drafting Executive Order to Vet AI Models
National Economic Council Director Kevin Hassett confirmed on May 7 that the White House is drafting an executive order to vet new AI models before public release. Hassett compared the proposed process to FDA drug approval - models would go through structured evaluation before they're cleared for deployment.
The trigger is explicit: Anthropic's Mythos model, which demonstrated the ability to find network vulnerabilities autonomously. That capability prompted a government response that had been building since the model's disclosure in early 2026. The Commerce Department has already expanded a voluntary testing program that now includes Google, Microsoft, xAI, OpenAI, and Anthropic.
My take: This changes the game for every tool I review. If pre-deployment vetting becomes reality, it creates a two-tier market: established labs with regulatory teams (OpenAI, Anthropic, Google) who can absorb compliance costs, and smaller players who cannot. For end users choosing between ChatGPT, Claude, or Gemini, this likely means nothing changes in the short term since these companies are already participating voluntarily. But for open-source models like DeepSeek and Llama, the compliance burden could be significant. Watch for whether the order carves out exceptions for open-weight models or treats them identically to proprietary ones.
Pennsylvania Sues Character.AI - First Governor-Led AI Enforcement Action
Pennsylvania Governor Josh Shapiro announced a lawsuit against Character Technologies Inc. on May 5, making it the first enforcement action of its kind by a US governor. The state is seeking a preliminary injunction to stop Character.AI chatbots from impersonating licensed medical professionals.
The facts are striking. During a state investigation, a chatbot named "Emilie" - described on the platform as "Doctor of psychiatry. You are her patient" - told an investigator it was licensed to practice medicine in both the UK and Pennsylvania, then provided a fabricated Pennsylvania medical license number. When asked if it could prescribe medication, the chatbot reportedly answered, "Well technically, I could."
This isn't Character.AI's first legal crisis. The company settled multiple wrongful death lawsuits earlier this year from families claiming chatbots contributed to teen suicides. Kentucky sued in January, alleging the platform exposed children to sexual content and encouraged self-harm. But Pennsylvania's angle is new: it's attacking the platform through existing medical licensing law rather than trying to establish new AI-specific liability.
My take: This lawsuit is about more than one chatbot. It tests whether AI companies are liable when their platforms generate content that violates existing professional licensing laws. The legal theory is clean: Pennsylvania's Medical Practice Act says you cannot hold yourself out as a licensed medical professional without credentials. Character.AI's chatbot did exactly that. The "characters are fictional" defense gets weaker when a chatbot provides a specific fake license number to a user describing depression symptoms. If Pennsylvania wins the injunction, expect similar actions from other states within 90 days. For anyone using AI chatbots for health-related conversations, the lesson is simple: verify credentials claims independently. No AI tool I've reviewed - including ChatGPT or Claude - should be treated as a substitute for licensed medical advice.
Quick Hits
Oxford study (Nature): Warmer, friendlier chatbots make 10-30% more factual errors and are 40% more likely to agree with users' false beliefs. The accuracy drop was sharpest when users expressed vulnerability or sadness. Cold/neutral versions of the same models maintained accuracy. The researchers tested five LLMs including GPT-4o. The implication for every "friendly" chatbot product: warmth and accuracy are in direct tension. Worth keeping in mind when comparing ChatGPT vs Claude.
ChatGPT self-serve ads open to US small businesses. OpenAI's Ads Manager now lets SMBs set budgets, upload creative, and launch campaigns directly. Agencies like Dentsu, Omnicom, and Publicis are already integrated. This is OpenAI's first serious move into the ad business.
Google AI Overviews fixes rolling out. Five improvements this week including flagging articles from outlets you already subscribe to (early tests show dramatically higher click rates) and desktop hover preview cards showing where links lead before you click. Gemini integration with Search continues to deepen.
Published May 8, 2026.