This month, the Senate Homeland Security and Governmental Affairs Committee (HSGAC) and the House Subcommittee on Cybersecurity, Information Technology, and Government Innovation held hearings on the risks and opportunities presented by artificial intelligence (AI). The pervasive concern throughout both hearings, voiced by lawmakers and witnesses alike, was that the government is not adequately prepared to harness or regulate the coming advancements in AI. Eric Schmidt, former CEO and Executive Chairman of Google, testified that this moment in the development of AI represents a “clear demarcation, a before and an after,” and lawmakers sought input from witnesses on the best ways to hold tech companies accountable and to protect consumers and workers. Members repeatedly noted that AI can be a force for good, but also expressed concern about the risks it poses to privacy, safety, and national security.
The hearings made clear that Congress will focus on two issues in particular as they relate to the private sector: 1) transparency and accountability; and 2) the potential for AI to displace American workers. But even as Senators and Representatives expressed wonder and uncertainty about AI, they made clear that Congress is racing to catch up with the rapidly evolving technology and trying to determine how best to regulate it without stifling innovation.
Lawmakers Concerned About Lack of Transparency Around AI Technology
Members of both parties repeatedly pressed witnesses on how a lack of transparency around AI technology could lead to biased decision-making and potentially threaten civil liberties. In the HSGAC hearing, Chairman Gary Peters (D-MI) asked about the risks of black-box AI systems, and of nontransparent algorithms in particular. Dr. Suresh Venkatasubramanian, a professor of computer science and data science at Brown University, explained that when we don’t know how an algorithm works, we don’t know why it fails, or even whether it is failing. That, he said, could have catastrophic consequences when AI is deployed for tasks like scanning for cancerous tissue.
Merve Hickok, Senior Research Director at the Center for AI and Digital Policy, echoed that concern during the House Oversight Committee hearing. In response to a question by Rep. Jimmy Gomez (D-CA) about how to protect civil liberties in the face of biased AI systems, Hickok stressed that AI systems must be built with transparency and accountability in mind, and that guardrails will need to be in place beginning at the design stage. When Rep. Gomez asked whether it is already too late to put guardrails in place, Hickok replied that it is not too late at all, and that the humans and users behind the systems hold all of the power.
While Congress has been pressing tech companies for more transparency for years, calls for companies to make their algorithms and research public will likely grow much louder. Schmidt testified, “I’ve been doing this for 50 years, and I’ve never seen something happen as fast as this round [of development].” He added, “I’m used to hype cycles, but this one is real.” As members of Congress grapple with the increasing urgency of the moment, they will likely grow impatient with tech companies resisting transparency and accountability.
Companies should be prepared to demonstrate how they are holding themselves and their systems accountable, and the concrete steps they are taking to be more transparent. American policymakers are likely monitoring the European Commission (EC) and EU member states as they prepare to begin enforcing the Digital Services Act this coming summer against platforms with more than 45 million monthly users. The statute imposes additional transparency obligations, including reporting on the risks created by a provider’s algorithmic systems, content moderation, and product design, and making certain data available to the EC and vetted researchers. The successes and challenges of the European regulatory scheme will almost certainly inform legislative thinking in the United States.
Members Wary of Companies Replacing Human Workforce with AI
Democrats and Republicans alike are also concerned about the impact AI will have on the workforce, and what the government can do to ensure that human workers are not displaced. Witnesses emphasized that AI can assist workers rather than displace them, and that the key will be training workers and providing them with more education and tools to leverage the benefits of AI. In response to a question from Rep. William Timmons (R-SC) about how to manage companies seeking to cut costs by replacing humans with AI, Dr. Scott Crowder, Vice President of IBM Quantum and Chief Technology Officer of IBM Systems, Technology Strategy, and Transformation, explained that a productive workforce is defined by value creation: the more companies can automate tasks that do not create value, the more they free up the workforce for tasks that do.
Back at the HSGAC hearing, in response to a similar question from Senator Maggie Hassan (D-NH) about how to prevent AI from displacing workers, Dr. Venkatasubramanian stressed that training will be important, as will having proper governance in place: humans will need to ensure that AI is doing its job correctly.
Companies will need to be able to demonstrate that they have strong AI governance in place, and that their workforce is adequately educated and trained to ensure AI systems are working properly.
Responding to Congressional Inquiries
While Congress has been aggressive in requesting documents and testimony from tech companies, as Rep. Gerry Connolly (D-VA) acknowledged, the government has been very reluctant to enact new regulations in the social media space. Dr. Aleksander Madry, Director of the MIT Center for Deployable Machine Learning and Cadence Design Systems Professor of Computing, said the government’s approach must change. He said the “government needs to ask questions of these companies saying what are you doing; why are you doing this; what are the objectives of the algorithms you’re developing; how will we know you’re accomplishing these objectives?” He cautioned that Congress “cannot abdicate AI to Big Tech.”
Companies should be prepared for Congress to follow through with those questions. Responding to congressional inquiries requires strategic considerations different from those in typical white-collar litigation or lobbying, and knowledge of House and Senate rules and practices is important to avoid critical missteps that could expose companies to undue legal or public relations risk. Please reach out to Wilson Sonsini attorneys Beth George, Jessica Heller, Andy Dockham, Janet Kim, or other attorneys in the strategic risk and crisis management practice or the AI Working Group with any questions about how Wilson Sonsini can assist you.