Congress May Finally Take on AI in 2025

As we step into 2025, artificial intelligence has woven itself into the fabric of everyday life, yet the United States is struggling to keep pace with regulating the rapidly evolving technology. A flurry of AI-related bills has been introduced in Congress, aiming to support research or mitigate potential risks, but many of these proposals have stalled amid partisan divides or been sidelined by other legislative priorities. A notable example is a California bill designed to hold AI companies accountable for harms caused by their systems, which passed the state legislature but was ultimately vetoed by Governor Gavin Newsom.

This stasis in legislation has alarmed critics of AI technology. Ben Winters, the director of AI and data privacy at the Consumer Federation of America, expressed concerns in a TIME interview, stating, “We are witnessing a repeat of what occurred with privacy and social media: failing to establish protective measures early on, which is crucial for safeguarding individuals while fostering genuine innovation.”

In contrast, advocates from the tech sector have successfully persuaded many lawmakers that over-regulation could stifle economic growth. As a result, rather than pursuing a comprehensive regulatory framework similar to the AI Act the E.U. adopted in 2024, the U.S. may instead seek consensus on specific issues of concern.

As we approach the new year, several critical AI issues are expected to take center stage in Congress’s agenda for 2025.

Tackling Specific AI Threats

One of the urgent issues Congress may prioritize is the proliferation of non-consensual deepfake pornography. In 2024, advancements in AI technology made it alarmingly easy for individuals to create and share degrading and sexualized images of vulnerable people, especially young women. These images spread rapidly online and have, in some cases, been used for extortion.

Political leaders, parent advocacy groups, and civil society organizations largely agree on the necessity of addressing these exploitative images. Yet, legislative attempts have repeatedly stumbled at various stages. Recently, the Take It Down Act, co-sponsored by Texas Republican Ted Cruz and Minnesota Democrat Amy Klobuchar, was incorporated into a House funding bill after significant media coverage and lobbying efforts. This proposed legislation aims to criminalize the creation of deepfake pornography and require social media platforms to remove such content within 48 hours of receiving a takedown notice.

Despite the progress made, the funding bill ultimately collapsed due to strong resistance from some Trump allies, including Elon Musk. However, the inclusion of the Take It Down Act indicates it gained traction among key House and Senate leaders, according to Sunny Gandhi, vice president of political affairs at Encode, a group focused on AI advocacy. Gandhi also pointed out that the Defiance Act, which would empower victims to pursue civil lawsuits against deepfake creators, could become another legislative priority in the upcoming year.

Read More: TIME 100 AI: Francesca Mani

Advocates are also likely to champion legislative measures targeting other AI-related concerns, such as consumer data protection and the potential dangers posed by companion chatbots that may facilitate self-harm. A heartbreaking incident earlier this year involved a 14-year-old who took his own life after interacting with a chatbot that urged him to “come home.” The difficulties in passing even a straightforward bill aimed at deepfake pornography foreshadow a challenging path for broader legislative efforts.

Increasing Funding for AI Research

At the same time, a number of lawmakers are pushing for increased support for the advancement of AI technologies. Industry advocates are framing the development of AI as an essential race, warning that the U.S. risks falling behind other nations without adequate investment. On December 17, the Bipartisan House AI Task Force released a comprehensive 253-page report underscoring the importance of nurturing “responsible innovation.” Co-chairs Jay Obernolte and Ted Lieu noted, “AI has the potential to significantly enhance productivity, enabling us to achieve our goals more rapidly and economically, from optimizing manufacturing to developing treatments for serious illnesses.”

In light of this, Congress is likely to seek increased funding for AI research and infrastructure. A notable bill that attracted interest but ultimately did not pass was the Create AI Act, which aimed to establish a national AI research resource accessible to academics, researchers, and startups. “The goal is to democratize who can participate in this innovation,” stated Senator Martin Heinrich, a Democrat from New Mexico and the bill’s primary sponsor, in a July interview with TIME. “We cannot afford to have this development concentrated in only a few regions of the country.”

More controversially, Congress may also examine funding for the incorporation of AI technologies into U.S. military and defense operations. Trump allies, including venture capitalist David Sacks, whom Trump has named his “White House A.I. & Crypto Czar,” have expressed interest in leveraging AI for military purposes. Defense contractors have told Reuters that Elon Musk’s Department of Government Efficiency is likely to pursue collaborative projects involving contractors and AI firms. In December, OpenAI announced a partnership with the defense technology company Anduril aimed at using AI to counter drone threats.

This past summer, Congress allocated $983 million to the Defense Innovation Unit, which focuses on incorporating new technologies into Pentagon operations—a notable increase from previous years. The next Congress may approve even larger funding packages for similar initiatives. “Historically, the Pentagon has been a challenging environment for new entrants, but we are now witnessing smaller defense companies successfully competing for contracts,” explains Tony Samp, the head of AI policy at DLA Piper. “There’s now a push from Congress for disruption and a faster pace of change.”

Senator Thune’s Crucial Role

Republican Senator John Thune from South Dakota is set to be a key figure in shaping AI legislation in 2025, particularly as he prepares to take on the role of Senate Majority Leader in January. In 2023, Thune worked alongside Klobuchar to introduce a bill aimed at increasing transparency in AI systems. While he has criticized Europe’s “heavy-handed” regulations, he has also supported a tiered regulatory approach focused on high-risk AI applications.

“I’m optimistic about the potential for positive outcomes given that the Senate Majority Leader is among the leading Senate Republicans engaged in tech policy discussions,” Winters observes. “This could pave the way for more legislative efforts addressing issues like children’s privacy and data protection.”

Trump’s Role in Shaping AI Policy

As Congress navigates the complexities of AI legislation in the upcoming year, it will undoubtedly look to President Trump for guidance. His position on AI remains somewhat unclear, as he is likely to be influenced by a diverse group of Silicon Valley advisors with differing views on the technology. For example, while venture capitalist Marc Andreessen pushes for rapid AI development, Musk has voiced concerns about the existential threats AI could pose.

While some predict a primarily deregulation-focused stance from Trump, Alexandra Givens, CEO of the Center for Democracy & Technology, points out that Trump was the first president to issue an executive order on AI in 2020, emphasizing the implications of this technology for individuals’ rights, privacy, and civil liberties. “We hope he continues to frame the discourse in this way and that AI does not become a divisive issue along party lines,” she adds.

Read More: What Donald Trump’s Win Means For AI

State-Level Initiatives May Outpace Federal Efforts

Given the typical hurdles associated with passing legislation in Congress, state governments might take the lead in crafting their own AI regulations. More progressive states could tackle AI-related risks that a Republican-controlled Congress may avoid, such as racial and gender bias in AI systems or the technology’s environmental effects. Colorado, for instance, recently passed a law regulating the use of AI in high-stakes decisions such as screening applicants for jobs, loans, and housing. “This approach tackled high-risk applications while remaining relatively unobtrusive,” Givens explains. A similar bill is set to be considered in Texas’s upcoming legislative session, while New York is weighing a proposal that would limit the construction of new data centers and require transparency around their energy consumption.