Why Big Tech Wants To Help Congress Regulate AI

(TNS) — Members of Congress want to regulate artificial intelligence, and Big Tech is tracking their every move and lobbying hard.

Senate Majority Leader Charles E. Schumer (D-N.Y.) launched a major push on AI regulation late last month, promising hearings and a series of expert forums with his colleagues that will bring top AI minds to Washington and, eventually, lead to legislation.

In the coming months, members of Congress — few of whom have technical backgrounds — will have to choose between a tough regulatory framework for artificial intelligence and a system more responsive to tech interests. Democratic and Republican lawmakers alike face the daunting task of getting up to speed on a technology evolving so rapidly that even experts disagree on how to regulate it.


Members of Congress from California's Silicon Valley region, including Reps. Zoe Lofgren (D-San Jose), Ro Khanna (D-Fremont) and Anna G. Eshoo (D-Menlo Park), find themselves in a unique position. All three are Democrats who support the idea of regulating tech companies.

But those companies are the economic engine of the trio's districts, and many of their constituents work in the industry. Move too slowly and they could alienate their party's national base, especially unions that fear AI will eliminate jobs. Move too quickly and they could damage their hometown economies and make powerful enemies in the process.

Tech interests, most notably OpenAI, the nonprofit (with a for-profit subsidiary) that created ChatGPT, have gone on the offensive in Washington, advocating for rules to keep the technology from becoming an existential threat to humans. In the first quarter of 2023, 123 companies, universities and trade associations spent a total of $94 million lobbying the federal government on issues including artificial intelligence, according to an analysis by the money-in-politics watchdog OpenSecrets.


OpenAI Chief Executive Sam Altman, 38, recently met with at least 100 lawmakers in Washington, D.C., and OpenAI is looking to hire a lead lobbyist for Congress.

An OpenAI spokesperson did not respond to requests for comment on the company's position on federal regulation of artificial intelligence. But in written testimony to a Senate Judiciary subcommittee, Altman wrote that regulation of artificial intelligence is "essential" and that OpenAI is "eager to help policymakers as they determine how to facilitate regulation that balances incentivizing safety while ensuring that people are able to access the technology's benefits."

His ideas were general: AI companies should adhere to "an appropriate set of safety requirements," which could include a government-run licensing or registration system. OpenAI is "actively engaging with policymakers around the world to help them understand our tools and discuss regulatory options," he said.

Marietje Schaake, international policy fellow at Stanford's Institute for Human-Centered Artificial Intelligence and a former member of the European Parliament, said members of Congress should bring a healthy skepticism to what they hear from tech interests about AI regulation. "Any advice coming from key stakeholders like companies needs to be looked at through that lens: What does it mean for their bottom line?" she said. "What does it mean for their benefit? The messenger's interest in getting things their way is baked in."

Schaake worries that when Altman and others warn of the existential threat of artificial intelligence, they focus lawmakers' attention on the horizon rather than the present. If legislators fixate on fears that AI will end humanity, they may overlook more immediate, less dramatic concerns.

Big Tech's power players have developed a "regulation fetish" that "strikes many people in Washington as an attempt to get out ahead of the social consequences of what they're doing," a source working on congressional AI efforts, who requested anonymity to speak candidly about private conversations, told The Times. "Unfortunately, there is a dangerous dynamic of members of Congress deferring to the technical experts."

The source said admiration for Altman comes up often in conversations with Washington insiders.

"I have members of Congress in Washington saying, 'Well, Sam seems like a good guy,'" the source said.

Members of Congress insist they will write legislation independent of tech interests. In a Capitol hearing room, lawmakers are the ones asking the questions, and industry experts are the ones answering them. Lofgren, the ranking member of the House Science, Space and Technology Committee, told The Times that tech interests were not driving her approach to regulating artificial intelligence.

"I will say this: No tech company is in the business of pressuring me," Lofgren said. "And I haven't heard from other members of Congress about it."

Robin Swanson, a tech regulation advocate who has pushed for statewide privacy legislation, praised Lofgren's Silicon Valley colleagues Eshoo and Khanna for their proactive policy stances on the issue.

Schumer emphasized that he does not want tech companies to write the rules.

"Individuals in the private sector can't do the job of protecting our country," Schumer said in announcing his views on the Senate's approach to artificial intelligence. "While many developers have good intentions, there will always be bad actors, unscrupulous companies, foreign enemies trying to harm us. To do this. That is why we are here today. I believe Congress needs to join the AI ​​revolution.

But tech interests employ skilled lobbyists who can swing policy outcomes with small changes, Swanson said. In lobbying over the California Consumer Privacy Act, tech industry interests pushed to change just seven words of the long and complicated law.

Swanson said that changing those seven words would have defeated the purpose of the law.

"They have to have very smart, intelligent people who are technologically savvy enough to know where to hide the products," he told The Times. "So we need equally informed people who really care about privacy and security on our behalf."

Lofgren didn't seem worried. She believes her colleagues need to better understand the technology before they start writing policy, and she counseled patience in the process.

"I don't think we're in a position to decide what to do," Lofgren said. "And I think it's important to have an idea of ​​what you're going to do before you do it. This technology is changing rapidly. So we don't have an infinite amount of time to do something potentially stupid."

Congress will take the lead in regulating AI, he said.

"It's our duty to write what we have to write," Lofgren said.

Congressional offices themselves appear to be dabbling in AI technology. The House recently imposed new restrictions on how staff may use large language models such as ChatGPT: offices may use only ChatGPT Plus, a paid premium version that includes additional privacy features. Axios reported the new rules last week.

Khanna, who embraces his identity as a "Silicon Valley congressman," told The Times that lawmakers should work with AI developers in his district to craft safe legislation.

"Startups in my district are at the forefront of innovative AI research and development, tackling complex problems and making strides to improve lives and protect our planet," Khanna said. "Congress must craft smart laws to regulate AI ethics and safety that don't stifle innovation."

In rushing to regulate AI, lawmakers risk overlooking some of the technology's less obvious dangers. Machine learning algorithms are often biased, explained Eric Rice, a co-founder of USC's Center for Artificial Intelligence in Society. A few years ago, researchers found that a widely used health risk prediction algorithm was racially biased: black patients received lower risk scores than comparably ill white patients.

Rice said that in their discussions of artificial intelligence, lawmakers should consider how the technology could affect fairness and justice.

"We want to make sure we don't use AI systems that harm black people or harm women or people from rural communities," Rice said. "I think that's one piece of the regulatory puzzle."

While tech companies innovate quickly, Congress moves slowly on almost everything. The Senate votes less often than it used to, and legislation takes longer to pass. In the House, the far-right Freedom Caucus has undermined Republican leaders' control of the legislative agenda.

Analysts are not convinced that political polarization, in Congress or among Americans, will stall the AI talks. The debate does not yet seem very partisan, in part because most people don't know where they stand: a Morning Consult poll found that 10% of Americans consider generative AI products "very trustworthy," 11% consider them "not trustworthy at all," and the rest are somewhere in between or unsure. Moreover, lawmakers are already working across the aisle: Rep. Ted Lieu (D-Torrance) and Republican Rep. Ken Buck of Colorado have introduced a bill with Eshoo that would create a national commission on artificial intelligence.

President Biden has signaled interest in protecting Americans from the potential harms of AI. Biden, a self-described pro-union president, must also answer to unions across the country that fear the technology will take away workers' jobs. White House officials met with labor leaders on Monday to discuss the issue, concluding that "government and employers must work with unions to figure out how to effectively mitigate risks and potential harms to workers."

Democrats and Republicans alike support rules requiring companies to label their AI creations, according to the Morning Consult poll. They also agree on banning the use of artificial intelligence in political ads. Overall, 57% of Democrats and 50% of Republicans believe the development of AI technology should be "heavily regulated" by the government.

But "regulation" itself is a meaningless term, Shaw points out. Some regulations interfere with markets, while others facilitate markets; Some regulations benefit big business, while others hurt them.

"Talking about being for or against regulation tells us nothing, because regulation — and this is something senators in particular need to be very aware of — can take you anywhere," Shake said.

Now that lawmakers broadly agree on the need for regulation, they must agree on the details. Most of the proposals coming out of Congress are abstract: lawmakers want to create commissions and fund studies. To draft actual regulations, they will have to get specific: Do they want a licensing system, as Altman suggests? Will they tighten data privacy laws to limit the data on which algorithms can be trained?

"Congress is slow," Swanson said. "And Congress has the potential to be a wolf in sheep's clothing." For this reason, he added, legislators should be transparent in the decision-making process.

Schumer's Senate forums are not expected to begin until September, and a bill won't come before then. Even then, political disagreements could slow the legislative process.

Meanwhile, the market for generative AI software is expected to grow tenfold over the next five years.

© 2023 Los Angeles Times. Distributed by Tribune Content Agency, LLC.
