On October 30, 2023, President Biden announced a sweeping new Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO). The EO signals an “all-hands-on-deck” approach, with roles for agencies across the federal government, proposed requirements and guidance that will apply both to companies that offer artificial intelligence (AI)-related services and to companies that consume such services, and still-unfolding implications for the legal operation of such businesses.
Highlights of the EO for providers and consumers of AI products and services follow, along with our ten top takeaways for private sector investors and companies:
Highlights
Ten Top Takeaways for AI Builders, AI Investors, and AI Users
In sum: keep watching this space. Affected companies should carefully monitor the implementation of this executive order and any follow-on actions by agencies under the EO.
Wilson Sonsini Goodrich & Rosati routinely helps companies navigate complex privacy, data, and national security issues in developing policy sectors. For more information or advice concerning your compliance efforts related to AI, please contact Joshua Gruenspecht, Maneesha Mithal, Scott McKinney, Jess Cheng, Barath Chari, Manja Sachet, Seth Cowell, Nikhil Goyal, Kara Millard, Rosalind Schonwald, or any member of the firm’s national security practice, privacy and cybersecurity practice, or artificial intelligence and machine learning working group.
[1] More specifically defined as “an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters [. . .]”
[2] In January 2023, NIST released an Artificial Intelligence Risk Management Framework intended to serve as a resource for organizations designing, developing, deploying, or using AI systems, helping them manage risks and promote trustworthy and responsible development and use of AI. See our previous alert for more details.
[3] As one example, in an effort to slow China’s development of advanced AI technologies, the DoC recently issued an array of semiconductor- and supercomputer-related export controls. See recent client alerts here and here for a discussion of these export controls. As another example, see our recent client alert here on proposed outbound investment rules that would restrict U.S. support for AI innovation in China.