In 2024, artificial intelligence (AI) legislation is making waves across the United States, Puerto Rico, the Virgin Islands, and Washington, D.C. With over 450 AI-related bills tracked in 23 categories, three significant trends have emerged: consumer protection, the regulation of deepfakes, and the use of AI by government agencies. This surge in legislative activity reflects growing concerns about the implications of AI technologies and the need for effective oversight.
Consumer Protection and Transparency
One of the most prominent trends is the push for consumer protection and transparency in AI development and usage. Lawmakers have introduced more than 100 bills aimed at ensuring the responsible use of AI in various sectors, including education, healthcare, and finance. Notably, Colorado has spearheaded the effort by passing the United States’ first comprehensive AI law, known as SB 205. This groundbreaking legislation mandates that AI developers avoid algorithmic discrimination, ensuring that their systems do not result in unlawful differential treatment of individuals based on protected characteristics such as age, disability, or ethnicity.
The law also sets out specific requirements for consumer protections and risk management. To refine the measure before its implementation in 2026, Colorado has established an AI task force charged with recommending modifications.
Utah has joined the ranks with SB 149, which requires businesses to disclose their use of generative AI technology. Violations can lead to fines of up to $2,500, with the potential for additional civil penalties. Transparency is a recurring theme, as it is vital for fostering trust between consumers and AI companies.
Although California’s Governor Gavin Newsom vetoed a bill that aimed to impose additional requirements on large AI models (SB 1047), the state has enacted AB 2013, which focuses on training data transparency. Starting in 2026, developers must publicly disclose information about the data used to train their generative AI systems, enhancing consumer awareness and accountability.
Deepfakes and Legislative Responses
The rapid rise of deepfake technology, which enables the creation of manipulated audio, images, and videos, has provoked serious concerns, prompting more than 40 new laws across at least half of U.S. states. This wave of legislation primarily targets the use of deepfakes in creating sexually explicit materials without consent, especially when minors are involved.
Florida has taken a bold step by establishing a criminal offense for creating computer-generated child pornography. Washington has amended its child pornography laws to cover digitally fabricated content and offers civil recourse for victims of non-consensual intimate imagery. Indiana has likewise expanded its revenge porn laws to encompass AI-generated content, demonstrating a proactive approach to combating harmful uses of the technology.
Furthermore, states like Arizona and Utah are addressing the intersection of deepfakes and political communication, where manipulated media can mislead voters and undermine electoral processes. Arizona’s H 2394 allows candidates to sue for “digital impersonation” and mandates that any deepfake used in political advertising within 90 days of an election be disclosed. Utah’s legislation goes further, requiring transparency in any political ad that employs deepfake technology so that voters are informed about the authenticity of the content they encounter.
Government Use of AI
Another developing legislative trend concerns government agencies’ own use of AI technologies. With over 150 bills introduced on this issue in 2024, state agencies are already using AI to optimize public services, streamline operations, and enhance citizen engagement. However, the potential for bias and discrimination has raised concerns and drawn the attention of lawmakers.
States like Connecticut, Delaware, Maryland, Vermont, and West Virginia are leading by example, requiring state agencies to inventory and review their AI applications. The aim is to assess how these technologies affect service delivery and to identify potential biases or unfair impacts. Many of these states have also mandated impact assessments to ensure that AI systems are ethical, trustworthy, and ultimately beneficial to the public.
As discussions about government use of AI evolve, it becomes increasingly important to equip legislators with the tools they need to address these concerns while still harnessing the technology’s potential to improve public services.
The Path Forward
As legislative sessions resume in 2025, the focus on AI and how it intersects with consumer rights, political integrity, and ethical governance will remain a priority. With technology evolving rapidly, the landscape of AI legislation is expected to change continuously, necessitating ongoing dialogue among lawmakers, the technology sector, and civil society.
As these discussions unfold, it is crucial for all stakeholders to engage transparently and responsibly, ensuring AI serves humanity well. At its best, this technology has the power to transform lives for the better; at its worst, it can threaten privacy and democratic values. By placing the spotlight on evolving regulations and consumer protections, we can collectively navigate the complexities while fostering innovation.
For more resources on AI legislation and related topics, you can visit the National Conference of State Legislatures’ AI Policy Toolkit. As we move forward into a future increasingly shaped by AI, the ongoing monitoring and development of these laws will be essential for ensuring a safe and equitable landscape for all.