Legal landscape of AI regulations in the European Union and Switzerland

The rapid and widespread adoption of artificial intelligence (AI) systems, and of generative AI in particular, has been one of the major topics discussed over the past few years by the public and, to some extent, by legal scholars and legislators.

Although this technology is still recent, successful use cases can be found across all industries, allowing businesses to create new opportunities or to streamline and simplify their existing processes. However, companies must understand and manage the legal risks to which they are exposed when they use AI systems in their activities. This may be challenging, as national legislators have not yet caught up with this new technology and may be tempted to adopt diverging regulatory approaches, complicating the international legal landscape that companies will have to navigate.

In this article, we provide a short overview of the EU and Swiss regulatory landscape governing the use of AI systems, as well as of the other common legal risks that arise from their use (intellectual property, data protection, contractual liability).

European Union

Following a legislative process launched by the European Commission on 21 April 2021, the European Commission, the European Parliament and the Council reached a provisional agreement on 8 December 2023 on the Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (the “EU AI Act”).

With the EU AI Act, EU lawmakers aim to establish comprehensive rules for AI systems, governing their development, placing on the market, putting into service and use within the European Union in conformity with the Union's values. In particular, the EU AI Act seeks to guarantee the protection of health, safety, fundamental rights, democracy, the rule of law and the environment against the detrimental impacts of AI systems in the Union.

The EU AI Act will apply to providers that develop, place on the market or put into service AI systems within the EU, as well as to users of such AI systems within the EU, regardless of whether these providers or users are established in the EU. This extraterritorial effect means that, similarly to the General Data Protection Regulation (“GDPR”), foreign companies will have to comply with the EU AI Act if their activities fall within its scope of application. For these reasons, the EU AI Act emerges as the first legislative initiative of its kind and may influence AI policies in other jurisdictions.

Central to the EU AI Act is its risk-based regulatory approach, which aligns the level of regulatory intervention with the potential societal harm posed by an AI system. This approach categorizes AI systems based on the severity of the risk they present, imposing stricter requirements on those with a greater capacity to cause harm.

  • Prohibited AI practices (art. 5 EU AI Act) are those that pose a risk deemed unacceptable and that are outright banned in the EU (e.g., cognitive behavioural manipulation, untargeted scraping of facial images from the internet or CCTV footage, emotion recognition in the workplace and educational institutions, social scoring, biometric categorisation to infer sensitive data, certain cases of predictive policing targeting individuals, etc.).
  • High-risk AI systems (art. 6 EU AI Act) include AI systems that are used in, or are themselves, products subject to EU product safety legislation, as well as AI systems listed in Annex III of the EU AI Act, which concern domains considered particularly sensitive (e.g., machinery, medical devices, vehicles, biometric identification, critical infrastructure, employment, education, etc.). High-risk AI systems are subject to various requirements, and their providers and users are subject to specific obligations (art. 6 to 51 EU AI Act).
  • Limited-risk AI systems (art. 52 EU AI Act) are subject to light transparency obligations, such as disclosing that content was AI-generated so that users can make informed decisions about its further use.
  • Low- or minimal-risk AI systems are those that do not fall within the above categories; they are not subject to specific obligations under the EU AI Act, although they are encouraged to adhere to voluntary codes of conduct (art. 69 EU AI Act).

Violations of the EU AI Act may result in fines set either as a percentage of the offending company's global annual turnover in the previous financial year or as a predetermined amount, whichever is higher. The provisional agreement on the EU AI Act notably provides for the following:

  • €35 million or 7% of the global annual turnover for violations of the prohibited AI practices;
  • €15 million or 3% of the global annual turnover for violations of the EU AI Act's obligations; and
  • €7.5 million or 1.5% of the global annual turnover for the supply of incorrect information.

For example, a company with a global annual turnover of €1 billion that engages in a prohibited AI practice could face a fine of up to €70 million, since 7% of its turnover exceeds the €35 million threshold. Because of the extraterritorial effect of the EU AI Act and the potential penalties associated with non-compliance, companies should already start preparing to comply with it.

Switzerland

There is currently no regulation specific to AI systems in the Swiss legal system. However, on 22 November 2023, the Federal Council instructed the Federal Department of the Environment, Transport, Energy and Communications ("DETEC") to prepare a report on possible regulatory approaches to AI systems by the end of 2024 and to involve all federal agencies responsible for the legal areas affected. This analysis should create the basis for a concrete legislative mandate for an AI regulatory proposal in 2025.

The Federal Council indicated that the overview will notably focus on the following elements:

  • Compatibility with the EU AI Act and the Council of Europe's AI Convention: the analysis will build on existing Swiss law and identify possible regulatory approaches for Switzerland that are compatible with the EU AI Act and the Council of Europe's AI Convention.
  • Compatibility with fundamental rights: the analysis will examine the regulatory requirements with a particular focus on compliance with fundamental rights.
  • Technical standards, financial and institutional implications: the implications of the different regulatory approaches will be taken into account in the overview.
  • Interdisciplinary cooperation: the analysis will involve careful legal, economic and European policy assessments and will require interdisciplinary cooperation across all departments.

The approach taken by the Federal Council already highlights the role the EU AI Act will play in shaping AI policies beyond the EU, as a stated goal of the Federal Council is to adopt an approach that is compatible with the EU AI Act.

Other legal risks

Even in the absence of specific AI regulations, companies must in any case consider the other common legal risks associated with the use of AI systems, in particular those related to contractual liability, intellectual property rights and data protection. The following non-exhaustive examples can be mentioned:

  • IP rights risks:
    • Materials used to train a generative AI system may be protected by copyright, and their use may infringe the rights of third-party rights holders.
    • Trade secrets could be inadvertently disclosed if they are entered into a generative AI system.
    • Output generated by the user may not be eligible for copyright protection.
  • Contractual risks:
    • Acceptance by the user of unfavourable contractual terms imposed by the AI system provider.
    • Breach of contractual obligations towards third parties (e.g., by entering a third party's confidential information into the AI system without its authorization).
  • Data protection risks: when the AI system processes personal data, local data protection law requirements still apply (e.g., the GDPR, the Swiss Federal Act on Data Protection, etc.).

Actions to take

Companies that develop, use or intend to use AI systems can already take appropriate measures today to address the current legal risks and to prepare for the upcoming EU AI Act. In particular, companies should consider the following actions:

  • Performing a risk assessment for the use of AI tools;
  • Drafting internal policies on the use of AI systems;
  • Reviewing and updating standard agreement templates (incl. employment agreements, service agreements), taking into account the impact of the use of AI systems;
  • Updating their data privacy documentation;
  • Delivering practical training to employees on the appropriate use of AI systems;
  • Preparing for and negotiating regulatory approvals, as the case may be.

Conclusion

In conclusion, although lawmakers have yet to catch up with the pace at which AI systems are being adopted and new uses discovered, companies should stay abreast of the incoming EU AI Act and address the current legal risks associated with the use and development of AI systems.