
Image caption: From left to right, Punchbowl News reporter Andrew Desiderio, Punchbowl News CEO Anna Palmer, and Sens. Todd Young (R-Ind.) and Mark Warner (D-Va.)

Image credit: Will Kirk / Johns Hopkins University

Tech policy

AI regulation necessary to address potential risks, key senators say

Sens. Mark Warner and Todd Young advocate for artificial intelligence policies during a Hopkins Bloomberg Center discussion, while Hopkins experts caution against stifling innovation

Rapid developments in artificial intelligence hold the potential to change how we live, work, learn, and play. Without legislative oversight, however, AI advancements pose risks that can range from violations of intellectual property and privacy rights to election fraud and disruption of our financial markets. What policies are needed?

U.S. Sens. Mark Warner (D-Va.) and Todd Young (R-Ind.), who have strong track records in bipartisan technology legislation, discussed AI policy during a wide-ranging conversation at the Johns Hopkins University Bloomberg Center in Washington, D.C., on Thursday morning. Punchbowl News founder and CEO Anna Palmer and senior congressional reporter Andrew Desiderio co-moderated the discussion, the first in The Bridge event series, which will explore bipartisan policy issues.

Video credit: Punchbowl News

Afterward, two leading AI experts from Johns Hopkins University—Rama Chellappa and K.T. Ramesh, interim co-directors of the university's new Data Science and AI Institute and professors in the Whiting School of Engineering—shared their thoughts, advocating for legislative oversight that strikes the right balance by protecting citizens and our democracy without stymying AI growth and innovation.

Both Warner and Young agreed that AI is a bipartisan issue that affects people on both sides of the political aisle. They also agreed that legislators must get up to speed on AI, learning the fundamentals of how it works so they can make policy decisions that keep pace with technological advancements.

"I don't think it involves coming up with a whole bunch of new laws," Young said. "There will be some of those, … [but more so] we're going to ensure that those laws that we've already passed are suitable for an AI-enabled world."

Warner said he believes policies are needed to protect society from two imminent threats: AI-generated deepfakes that mislead voters and AI tools that can disrupt financial markets.

"We've seen the power of fakes and how this can happen at speed and scale which is unprecedented," Warner said. He specifically referenced a recent fake robocall in which a voice purporting to be President Biden discouraged Democrats from voting in the New Hampshire primary, in addition to the many deepfakes deployed across Europe. Among examples are the fabricated audio clip of Slovakian politician Michal Šimečka discussing plans to rig the election and raise beer prices, and the fake audio of U.K. politician Keir Starmer berating his staff.


Image caption: Anna Palmer, Rama Chellappa, and K.T. Ramesh

Image credit: Will Kirk / Johns Hopkins University

To protect financial markets, Warner said he hopes Congress will revisit laws and policies that hinge on intent, a requirement that makes it hard to prosecute or deter AI-based financial fraud. "Traditionally, financial manipulation means you've got to show intent," he said, adding that intent gets tricky in the arena of AI.

Young brought up intellectual property rights—and the need to protect those creating the data and content that feed the large language models of generative AI. These models wouldn't work "without the input of scholars, rank-and-file individuals, artists, and others who invest their sweat, their blood, their tears, their talent, [and] their treasure in producing these models," Young said. One possible solution, he added, is to place watermarks on web pages that allow authors and artists to "seek compensation for its utilization." Discussions on these issues and potential solutions can get complicated, Young indicated, but are presently underway in judiciary and legislative committees.

To implement protections, Congress must act swiftly—and surpass its dismal efforts to regulate social media, Warner said. "We've done nothing on social media, [despite uncovering misuses that range from] bipartisan-based Russian interference to the enormous challenge that our kids have [with social media]," he said. "Our record stinks." Warner said he believes it's "long overdue time to reform Section 230," the law that shields online platforms from liability for content posted by their users, a protection critics say allows extremist viewpoints and inaccurate information that threaten our democracy to spread with little accountability.

Both Chellappa and Ramesh agreed that action from Congress is needed. "Our competitive advantage as a country has always been in [technology and product] design," Ramesh said. "There's a risk that if we don't jump into this fast enough and put the guardrails in place, [we'll lose out]."

Added Chellappa: "What happened with the election in New Hampshire is just the tip of the iceberg, and that's what I worry about." He said he advocates working with our allies on matters of national defense and security, while working globally on issues like health care because "everyone wants to have healthy citizens, [and] AI is going to play a role in it all."

AI isn't only a bipartisan issue but is essentially "an issue of every human," Ramesh said. "We all are drinking from that same data well. If someone is poisoning the data well, that's what we all get, so we have to monitor and track what goes in and by whom."

The key point, Chellappa said, is that "regulation is needed, but let's not overregulate because we still don't know what AI is capable of … and what good things it can do.

"It's like a precocious child," Chellappa added. "Sometimes very intelligent children do some stupid things, but parents are needed to tell the kid, 'Don't do that.'"

Image caption: The crowd gathered at the Hopkins Bloomberg Center for The Bridge: The future of AI policy

Image credit: Will Kirk / Johns Hopkins University