Notes From the Field: AI Virtual Summit: New AI Regulation in the EU and US: What To Expect and How To Prepare

Perkins Coie presented at Digital Hollywood's "AI Bill of Rights, Ethics & the Law" Summit, a one-day virtual conference that seeks to advance the conversation around the establishment of a national regulatory policy for artificial intelligence (AI). The October 19 event highlighted the tension between unleashing a once-in-a-generation burst of innovation and safeguarding against the dangers and risks inherent in complex, still-developing technologies.

Over the course of the summit, panelists discussed a wide range of topics, including government regulation versus industry self-regulation, generative AI and intellectual property (IP) rights, human interaction with AI, and how to balance the benefits and risks of deepfakes.

Marc Martin moderated the panel "US and EU Regulation of AI: What To Expect and How To Prepare." The panelists included Cass Matthews from Microsoft's Office of Responsible AI and Benoit Barre, a partner at Le16 Law in Paris.

Below are a few of the key questions and responses that stood out from the broader discussion.

Is there a need for government regulation of AI in the first place?

From the U.S. perspective, the panelists noted that existing regulations (e.g., antidiscrimination and consumer protection laws) may already apply to uses of AI technology, so regulators may not need to start from scratch. At the same time, however, there are novel aspects of AI that likely call for new regulations. Regardless of any regulations adopted by governments, the panelists noted that it remains important for companies to maintain their own internal governance systems around the responsible use of AI.

From the EU perspective, the panelists observed that applying existing rules to AI may be difficult given the technology's complexity, and even where existing rules do apply, they may be applied in divergent and fragmented ways, particularly with 27 member states independently trying to regulate AI. Accordingly, one of the main reasons the EU is working to adopt its AI Act is to establish a common set of rules that builds trust with users of AI technology and provides legal predictability to companies and investors. The panelists also noted that the AI Act would ensure that AI technology respects certain EU core values, another motivating factor behind the efforts to adopt the act.

Speaking of the AI Act, what is the status of this proposed legislation?

The panelists explained that key discussions among the EU Commission, EU Parliament, and EU Council (known as trilogues) began in June 2023 and are still underway. At the heart of these discussions, according to the panelists, is whether the AI Act should adopt an approach that is purely risk-based or more technology-based (which would regulate not only AI systems but also their underlying AI models). Generally speaking, most member-state governments (which are represented in the EU Council) want to retain the AI Act's risk-based approach, while the EU Parliament (whose members represent the people of each member state and thus tend to be more citizen-centric) wants to ensure individuals are protected against potential risks stemming from particular kinds of AI models.

From the U.S. perspective, the panelists encouraged the EU to avoid undoing the risk-based framework of the AI Act. If the proposed law nonetheless moves toward regulating AI models directly, the panelists recommended that it apply only to advanced foundation models, and they suggested that a compute-based threshold may be one way to distinguish between AI models that should be subject to the AI Act and those that should not.

Although the number of remaining trilogues is uncertain, EU lawmakers are expected to reach a final agreement by the end of 2023 (or, at the latest, before the next EU Parliament elections in June 2024). The AI Act will enter into force once its final text is published in the Official Journal of the European Union, and its requirements will begin to apply to those subject to them two years after that.

What issues should governments focus on as they grapple with AI regulations?

The panelists agreed that it is important for the definitions in the AI Act (and in any forthcoming AI regulations more generally) to be properly scoped so that they do not inadvertently cover other technologies. Relatedly, the panelists explained how properly scoped definitions can help ensure regulations are "future-proof," that is, still applicable to AI even as the technology develops further. The panelists discussed how certain aspects of the AI Act are designed to account for this concern. For instance, the AI Office, which the AI Act contemplates establishing, could play a role in designing and implementing regulations and other secondary legislation to account for changes in AI technology down the road.

What does the AI Act mean for the United States? How can companies prepare for the AI Act?

The panelists noted that, like the General Data Protection Regulation (GDPR), the AI Act is likely to have global consequences, as it is set to become the first comprehensive AI regulation, but this is not necessarily a cause for concern in the United States. The panelists observed that the AI Act gets some important issues right: its focus on a risk-based approach, the importance it places on internal governance, the requirements it imposes on companies to maintain robust risk and impact assessments, and the transparency requirements it calls for to ensure individuals understand where and how AI is being used. At the same time, however, the panelists cautioned that the AI Act's obligations should not be so onerous, nor its scope so broad, that the law effectively applies to all technology.

The panelists also explained that, like the GDPR, the AI Act is intended to apply extraterritorially, so a company will need to comply with the law once it is in effect, regardless of where the company is located. To be ready, companies should start early to build the internal tools and processes they will need for compliance, according to the panelists.

Should the United States create a new regulatory agency for AI?

The panelists noted that the United States likely could benefit from a licensing process for certain AI systems, particularly those based on highly capable AI models, to ensure that the right safeguards are in place. Accordingly, the panelists suggested that the United States could move in the direction of establishing a new regulatory agency for AI, which could oversee such a licensing process, among other duties.

Follow us on social media @PerkinsCoieLLP, and if you have any questions or comments, contact us here. We invite you to learn more about our Digital Media & Entertainment, Gaming & Sports industry group and check out our podcast: Innovation Unlocked: The Future of Entertainment.

Blog series

Age of Disruption

We live in a disruptive age, with ever-accelerating advances in technology largely fueling the disruption that permeates almost every aspect of our lives. We created the Age of Disruption blog with the goal of exploring the emerging technologies reshaping society and the business and legal considerations that they raise.

View the blog