The GOP bill that would unleash AI is getting closer to passing. AI Now's Amba Kak says the plan keeps getting worse.
A wide swath of groups and lawmakers from across the political spectrum have raised alarms about the proposal to ban state-level regulations of AI for the next five years.
One of the most consequential pieces of legislation on AI is on the verge of passing in Washington this week, but it’s hardly getting the attention or coverage it deserves.
A proposed 10-year-long moratorium on any state regulation of AI was shoehorned into the tax reconciliation bill, the big and supposedly beautiful one. And it’s looking closer than ever to getting passed, with Senate Republicans hoping to push through the package by the 4th of July.
The regulation ban was reduced to five years over the weekend, in a move that’s being billed as a compromise to win over holdout Republican senators. But it was strengthened in other ways, too.
The proposal directly targets efforts to create guardrails for the use of AI as it is deployed in an increasingly wide array of industries. This is not just theoretical; already there are many efforts to rein in the technology that would be jeopardized should the moratorium pass.
Just this year for example, state lawmakers filed some 1,000 bills related to AI. At least 75 new measures to regulate AI have already been enacted. These include laws that forbid health insurers from letting AI systems make final decisions on claims denials (Arizona), require political ads made with AI to include a disclaimer (Michigan), limit data collection for certain types of AI profiling (Montana) and mandate that employers conduct audits of AI tools used in employment decisions for bias (New York City).
All of these laws, as well as others that would likely be drafted as the technology advances and is better understood, are under threat.
The moratorium drew an initial flurry of coverage when it was spotted in the House’s version of the budget bill in May. But it has gotten lost amid the constant churn of louder and more dramatic news stories recently. Part of this is the underhanded way it is being proposed: in the weeds of a budget reconciliation bill, which means it will only require 51 votes, compared to the 60-vote threshold required to break a filibuster with regular legislation.
It’s not just liberals in blue bubbles who are opposed to the moratorium either. More than 260 state-level elected officials from both parties sent a letter to Congress earlier this month expressing “strong opposition” to the plan. In a similar letter, a bipartisan coalition of attorneys general from some 40 states, territories and Washington, D.C., warned that the impact of the moratorium would be “sweeping and wholly destructive of reasonable state efforts to prevent known harms associated with AI.”
It’s very rare these days to see officials as far apart politically as Letitia James and Kris Kobach agree on anything — let alone sign the same statement. And many experts and advocacy groups are also concerned.
For this week’s newsletter, I spoke with Amba Kak, the executive director of the AI Now Institute in New York. Kak is a lawyer and a former Rhodes Scholar who served as a senior advisor on AI at the Federal Trade Commission under Commissioner Lina Khan.
Kak testified eloquently against the regulation moratorium in front of the House last month. Her prepared remarks, an extensively footnoted report of 10 pages, are a worthwhile primer for anyone interested in the Big Questions about AI, its harms and the market forces that are influencing these discussions.
We spoke about the rapid speed of AI’s development, how AI systems are already causing issues in key ways, and why the new “compromise” forged over the weekend by Republican Senators to supposedly soften the ban is actually the most expansive version of the proposal yet.
Update July 1: The amendment was struck from the budget bill on Tuesday nearly unanimously by the Senate, 99-1. This means that the provision will not be passed through the budget reconciliation process. But Kak says that Republicans are likely to try to push through a version of the regulation ban as a standalone bill in the near future.
Eli Rosenberg: You noted in your testimony that it took Facebook about eight years to hit one billion users, while OpenAI will likely exceed that threshold by the end of the year, just three years after launching ChatGPT. The industry is growing rapidly. What kind of potential harms do you believe we face if AI goes largely unregulated for the next five years?
Amba Kak: The message that this moratorium is sending out is really unconscionable on any timescale. It is saying that state lawmakers cannot act to protect their constituents from AI-related harms, at a time when these harms are no longer theoretical, whether that’s the versions of AI our children are interacting with, the harms faced by workers across industries, or privacy and security flaws. One lesson from the past decade of social media is that it is close to impossible to play catch-up and regulate the tech industry once these corrosive and harmful business models have already entrenched themselves.
That's what we risk in this present moment. The speed of rollout and adoption by employers in every sector, but also consumers interfacing directly with this tech across the country, is really quite staggering, and that’s what makes time really of the essence here.
ER: Can you give some examples of some of these risks?
AK: We are all at the receiving end of AI mediating our lives and work today, whether we choose to opt into these technologies or not. This is AI being used on us, not just by us.
Most concerning [to me] are the risks proliferating against those who are least able to fend for themselves: children, seniors, medical patients, and low-income people. We are already seeing a massive uptick in AI-related voice scams and AI companions, to which seniors and children are particularly vulnerable.

Low-income populations have been subject to faulty, error-prone AI systems in social services for at least ten years now. Inscrutable AI-enabled systems cut in-home care for 4,000 disabled people in Arkansas despite critical underlying medical conditions; tens of thousands of people in Michigan, for example, were wrongfully accused of unemployment fraud some years ago due to the use of an automated system.
And then of course, workers across many sectors are being devalued and replaced. Tennessee and California, as just one example, have both enacted laws that protect artists against the unauthorized use of their likeness; those are the kinds of laws that [could] effectively be wiped off the books by this law.
ER: The measure has drawn opposition from lawmakers from both parties. Some opinion polls have shown high levels of support for regulating AI among the public. How do you understand why such an unpopular and almost non sequitur of a provision is getting pushed through in a budget reconciliation bill?
AK: It speaks to the might of the lobbying power of the AI industry. It's not the first time that Republicans have pushed for federal preemption in the tech sector, but the speed at which this has moved ahead is unique. The only constituency that has anything to gain from this is big tech, which is really damning.
ER: Is the latest version of the provision any better?
AK: We should clear the air on whether the 10-year to five-year reduction is really the compromise it’s being reported to be. Both senators [Ted] Cruz and [Marsha] Blackburn have been out front positioning this latest [proposal] as a narrower version of the moratorium, with exceptions for kids’ online safety laws and for copyright laws. But when you read the fine print, it basically says any law, including these general laws, will be covered by the moratorium if it imposes an “undue or disproportionate burden” on AI system developers.
If that door is open a crack, an army of [tech lobbyists and lawyers] are going to argue that any law is an undue burden, whether that's copyright laws, or protecting the rights of publicity, or child online safety regulations. So this is a massive loophole, which is really a trap, and not a compromise at all. To put it succinctly, this version is quite honestly the most sprawling that we've seen because it goes back on the one exception that this moratorium had.
ER: A lot of people are finding AI tools helpful in their work or personal lives. How could this affect them?
AK: The conversation isn’t ‘Is ChatGPT useful or not?’ It’s really ‘Is this unaccountable power in the AI industry as a whole good for society?’ The way in which you and I are playing with AI today — we’re playing with it as a shiny toy. But it’s really a micro vs. macro thing.
What we’re encouraging people to say is, ‘It’s okay if you enjoy playing with ChatGPT. But you might still think it’s a problem that Sam Altman, Jeff Bezos, or Mark Zuckerberg are pushing AI as the answer to everything in ways that deepen their already deep pockets or further their market positions.’
What it is ushering in is a wholesale rewiring of our economic and social foundations. We’re already seeing how AI is being pushed as a way to replace and devalue labor, or to present tech as a silver bullet for [other] social challenges. Tech CEOs tell us AI will cure cancer and solve climate change, whatever that means, which is then being used to justify defunding federal research. It’s a handful of companies that are going to reap the rewards of this, while for the most part, the rest of us, the general public, are being disempowered and devalued.
ER: I’m wondering if the way the U.S. has dealt with the tech sector — with a notably soft touch — is analogous to the way other big industries were treated historically. Have there been similar proposals in U.S. history to give other nascent industries, with benefits but also risks like AI, moratoriums on regulation?
AK: No, this is unprecedented in every respect. And this sense of AI exceptionalism doesn’t come from nowhere: it’s a product of relentless AI industry lobbying. They constantly say that we’re on the precipice of a historical transformation through this transformational technology, but one that we could lose in this supposed arms race with China. Nobody knows precisely what that means, but any kind of friction — basic guardrails, protections for workers, consumers, or children — is caricatured as an attack on our national interests. As if the interests of AI companies were the same as those of the public and the nation.
The "but what if China wins" bogeyman is used strategically by companies to push back against real arguments. Our lawmakers need to be focused on how this race delivers victories to the American people, not handouts to big tech companies.
ER: Do you think the issue has gotten the attention it deserves?
AK: This moratorium doesn’t make any sense in this moment, and it's something that we should all be raising our voices against. I think that people are finally realizing this might actually pass, even though it seems so absurd. There's so much momentum behind it, even as the opposition is heightened. I was at my neighborhood coffee shop yesterday, and someone was like, ‘Oh, I read that these AI companies are going to have no regulation for 10 years. That's crazy.’
ER: Thanks for your time, Amba.
Has your life or work been impacted by AI? Please feel free to reach out. We here at Hard Reset are interested in telling the story of how AI is upending industries, particularly from the perspective of workers. You can reach me at @elirosenberg.30 on Signal.