World Today News
Trump Meets Anthropic After Blacklisting Claude

April 19, 2026 | Lucas Fernandez, World Editor

On April 17, 2026, President Donald Trump claimed he had “no idea” that Anthropic CEO Dario Amodei had met with White House officials about the AI system Mythos, just weeks after the administration blacklisted Anthropic’s Claude model over national security concerns. This contradiction raises urgent questions about internal coordination within the federal AI oversight apparatus and its implications for American technological sovereignty, particularly as states and municipalities scramble to regulate emerging AI systems without clear federal guidance.

The situation stems from an executive order issued in February 2026 that placed Anthropic on the Entity List, restricting federal agencies from using or contracting with its AI models due to alleged risks of foreign influence through its training data pipelines. Yet, multiple sources confirmed to World Today News that Amodei participated in a classified briefing with the Office of Science and Technology Policy (OSTP) on March 10, 2026, to discuss Mythos—a next-generation multimodal AI designed for public sector use in disaster modeling and infrastructure planning. The meeting occurred at the Eisenhower Executive Office Building and included representatives from the National Institute of Standards and Technology (NIST) and the Department of Energy.

This disconnect between public statements and documented engagement creates a policy vacuum that endangers public trust and hampers local innovation. When federal leadership appears inconsistent or opaque about its AI strategy, cities and states are left to navigate a fragmented regulatory landscape. In California, where Silicon Valley firms drive nearly 35% of the state’s tech GDP, lawmakers have already introduced three competing bills to govern AI deployment in public utilities—each with differing standards for transparency, bias auditing, and data residency. Without federal clarity, municipalities risk adopting conflicting rules that could impede cross-jurisdictional AI systems critical for regional emergency response.

The Mythos Meeting: What Was Actually Discussed?

According to a memorandum obtained via FOIA request by the Electronic Privacy Information Center (EPIC), the March 10 briefing centered on Mythos’ potential to optimize FEMA’s hazard prediction models by integrating real-time satellite imagery, seismic sensor data, and urban traffic patterns. Anthropic engineers presented case studies showing how the model could reduce flood prediction errors by up to 22% in coastal zones like Norfolk, Virginia, and Miami-Dade County—areas where outdated modeling systems have repeatedly failed to anticipate storm surge intensity.

Critically, the document notes that Amodei emphasized that Mythos’ architecture includes “rigorous provenance tracking” to ensure training data origins are auditable—a direct response to the administration’s earlier concerns about opaque data sourcing in Claude. Despite this, the White House has not rescinded the blacklist, nor has it publicly explained why it maintains the restriction even as it engages the company on alternative systems.

The administration can’t have it both ways: either Anthropic poses a security risk that warrants exclusion from federal systems, or it doesn’t. Holding a classified briefing on a new model while keeping the old one blacklisted without public justification erodes credibility and confuses everyone trying to comply—from startup founders to city CIOs.

— Dr. Lien Zhou, Director of AI Governance, Brookings Institution

This lack of transparency has tangible consequences at the municipal level. In Austin, Texas, the city’s Office of Innovation recently paused a pilot program that would have used Anthropic’s AI to optimize energy distribution during peak load events after legal counsel warned that any use of a blacklisted entity’s technology—even indirectly—could jeopardize federal grant eligibility under the CHIPS and Science Act’s compliance clauses. Similar hesitations have been reported in Boston’s smart grid initiative and Seattle’s AI-powered transit flow management system.

Local Economies Feel the Ripple Effects

The uncertainty is particularly acute in regions betting big on AI-driven economic revitalization. In Pittsburgh, Pennsylvania—a city that has reinvented itself as a hub for AI and robotics through Carnegie Mellon University’s partnerships—the Allegheny Conference on Community Development reported a 15% year-over-year decline in AI-related venture capital in Q1 2026, citing “federal policy unpredictability” as a top concern among investors. Local startups working on public safety AI, such as those developing wildfire early-warning systems for the Appalachian region, now face dual pressures: needing federal partnerships to scale while navigating unclear usage rights.

Meanwhile, in New Mexico, where Los Alamos National Laboratory and Sandia National Laboratories are advancing AI for nuclear stewardship, officials have quietly begun exploring workarounds. One senior scientist, speaking on condition of anonymity, confirmed that teams are evaluating whether Mythos-derived outputs—rather than the model itself—could be used in unclassified simulations to avoid direct contractual violations. “We’re not trying to evade oversight,” the scientist said, “but we need tools that work, and the current stance is actively hindering our mission.”

When the federal government sends mixed signals about AI engagement, it doesn’t just slow innovation—it pushes critical national security and infrastructure work into legal gray zones where accountability disappears.

— Elena Martinez, Chief Technology Officer, City of Santa Fe

These dynamics underscore why consistent federal AI policy isn’t just a Beltway issue—it’s a Main Street concern. Cities rely on predictable federal frameworks to make multi-year investments in smart infrastructure, from AI-optimized water management in Phoenix to predictive maintenance systems for Chicago’s L trains. Without clarity, procurement delays mount, costs escalate, and public safety initiatives stall.

The Path Forward: Clarity Over Contradiction

Resolving this tension requires more than internal memos—it demands public accountability. Congress should invoke its oversight authority to compel the OSTP and the National Security Council to disclose the criteria used to distinguish between “prohibited” and “permissible” AI engagements. At minimum, the administration must either: (1) provide a detailed, evidence-based justification for maintaining the Claude blacklist despite engaging Anthropic on Mythos, or (2) lift the restriction with transparent conditions tied to data sovereignty and model transparency benchmarks.

Until then, American cities, states, and businesses will continue to operate in a fog of uncertainty—one that undermines not only technological competitiveness but also the very democratic principle of informed consent in governance.

The solution lies not in speculation, but in action. Professionals tasked with navigating this evolving landscape—whether they are municipal attorneys drafting AI procurement policies, compliance officers ensuring federal contract adherence, or urban planners integrating AI into resilience strategies—need access to verified, up-to-date expertise. For those seeking trusted advisors who understand both the technical nuances and jurisdictional complexities of AI governance, the technology law specialists and AI policy consultants in our directory offer the clarity and local insight necessary to turn policy confusion into strategic advantage.

In an era where algorithmic decisions shape everything from flood evacuations to power grid stability, the cost of federal ambivalence isn’t measured in press releases—it’s measured in delayed infrastructure, eroded public trust, and missed opportunities to build safer, smarter communities. The time for clarity is now.
