As scores of tech-funded EAs spread across key policy nodes in Washington, they’re triggering a culture clash — landing in the city’s incremental, detail-oriented culture with a fervor more akin to that of religious converts than of policy professionals.
Regulators in Washington usually dwell in a world of practical disputes, like how AI could promote racial profiling, spread disinformation, undermine copyright or displace workers. But EAs, energized by a uniquely Northern Californian mix of awe and fear at the pace of technology, dwell in an existential realm.
“The EA people stand out as talking about a whole different topic, in a whole different style,” said Robin Hanson, an economist at George Mason University and former effective altruist. “They’re giving pretty abstract arguments about a pretty abstract concern, and they’re ratcheting up the stakes to the max.”
From their newfound perches on Capitol Hill, in federal agencies and at key think tanks, EAs are pressing lawmakers, agency officials and seasoned policy professionals to support sweeping laws that would “align” AI with human goals and values.
Virtually all the policies that EAs and their allies are pushing — new reporting rules for advanced AI models, licensing requirements for AI firms, restrictions on open-source models, crackdowns on the mixing of AI with biotechnology, or even a complete “pause” on “giant” AI experiments — are in furtherance of that goal.
“This shouldn’t be grouped in the same sort of vein as saying, ‘Well, this is just another tech issue. We’ve dealt with tech issues for a really long time, we have time to deal with this.’ Because we really don’t,” said Emilia Javorsky, director of the futures program at the Future of Life Institute — an organization founded by EA luminaries and funded in part by a foundation financed by tech billionaire Elon Musk, who calls EA a “close match” to his philosophy.
“If we don’t start drawing the lines now, the genie’s out of the bottle — and it will be almost impossible to put it back in,” Javorsky warned.
The prophets of the AI apocalypse are boosted by an avalanche of tech dollars, much of it flowing through Open Philanthropy — a major funder of effective altruist causes, founded and financed by billionaire Facebook co-founder Dustin Moskovitz and his wife, Cari Tuna, which has pumped hundreds of millions of dollars into influential think tanks and programs that place staffers in key congressional offices and at federal agencies.
“It’s an epic infiltration,” said one biosecurity researcher in Washington, granted anonymity to avoid blowback from EA-linked funders.
EAs are particularly fixated on the possibility that future AI systems could combine with gene synthesis tools and other technologies to create bioweapons that kill billions of people — a fixation that has given more traditional AI and biosecurity researchers a front-row seat as Silicon Valley’s hot new philosophy spreads across Washington.
Many of those researchers claim that EA’s billionaire backers — who often possess close personal and financial ties to companies like OpenAI and Anthropic — are trying to distract Washington from examining AI’s real-world impact, including its tendency to promote racial or gender bias, undermine privacy and weaken copyright protections.
They also worry that EA’s tech industry funders are acting in their self-interest, working to wall off leading AI firms from competition by promoting rules that, in the name of “AI safety,” lock down access to the technology.
“Many [EAs] do think that fewer players who are more carefully watched is safer, from their point of view,” said Hanson. “So they are not that eager to reduce concentration in this industry, or the centralization of power in this industry.”
The generally white and privileged backgrounds of EA adherents have also prompted suspicion in Washington, particularly among Black lawmakers concerned about how existing AI systems can harm marginalized communities.
“I don’t mean to create stereotypes of tech bros, but we know that this is not an area that often selects for diversity of America,” Sen. Cory Booker (D-N.J.) told POLITICO in September.
“This idea that we’re going to somehow get to a point where we’re going to be living in a Terminator nightmare — yeah, I’m concerned about those existential things,” Booker said. “But the immediacy of what we’ve already been using — most Americans don’t realize that AI is already out there, from resumé selection to what ads I’m seeing on my phone.”
Despite those concerns, the sheer amount of money being funneled into Washington by Open Philanthropy and other EA-linked groups has given the movement significant leverage over the capital’s AI and biosecurity debate.
“The money is overwhelmingly lopsided,” said Hanson, referring to support for AI-specific policy fellows and staff members.
AI and biosecurity staffers funded by Open Philanthropy are embedded in congressional offices at the forefront of potential AI rules, including all three of the Senate offices tapped by Majority Leader Chuck Schumer to investigate the technology. And the more than half-dozen skeptical AI and biosecurity researchers who spoke with POLITICO say the dense network of Capitol Hill and agency staffers — financed by hundreds of millions of EA dollars — is skewing how policymakers discuss AI safety, which otherwise remains a relatively niche field in Washington.
One AI and biosecurity researcher in Washington said lawmakers and other policy professionals are being pushed toward a focus on existential AI risks by sheer force of repetition.
“It’s more just the object permanence of having that messaging constantly in your face,” said the researcher, who was also granted anonymity to avoid losing funding.