Regulation meets innovation: how can AI be handled smartly?


Artificial intelligence is currently establishing itself in all areas of our lives. However, its rapid spread also brings risks. These can be countered with legislation, but the technology is developing so quickly that regulation itself becomes a risk. A possible Swiss AI law should therefore take into account the experience with the EU AI Act.

Manuel Kugler: Programme Manager Data & AI / Advanced Manufacturing

What we can learn from the EU AI Act: five points for Switzerland

  1. Technology overtakes laws
    AI is developing faster than legislative processes can keep up, so regulation must remain flexible.
  2. Complex risks must be addressed
    Fraud, disinformation and shifts in the labour market are increasing and require clear guidelines.
  3. The EU AI Act shows the problems of rigid regulation
    When the bill was drafted, key technologies such as ChatGPT were practically absent from the market, and the draft had to be reworked.
  4. A sector-specific approach is recommended for Switzerland
    Broad, all-encompassing regulation increases complexity and the risk of overregulation. Better: focus on individual sectors and align with international standards.
  5. Time pressure jeopardises the quality of standards
    The accelerated development of technical standards in the EU has drawn criticism: without consensus, practical feasibility is at risk.

Record pace of AI adoption

After just three years, more than half of the Swiss population is already using generative AI tools such as ChatGPT. Never before has a technology spread so quickly in Switzerland as artificial intelligence. The Swiss economy also has high hopes for it. Generative AI could increase GDP by up to 11 per cent in the coming years.

However, this rapid adoption of AI also brings risks, such as increased fraud through AI-generated content and challenges in the labour market. These negative effects can be countered with regulation. But drawing up new laws takes time, and meanwhile the technology keeps evolving.

Why AI regulation is not a sprint

With the AI Act, the European Commission brought the world's first comprehensive AI regulation into force on 1 August 2024. It classifies and regulates AI systems according to risk levels. Its development took several years. One of the stumbling blocks was the speed at which AI is developing. Systems such as ChatGPT, which was launched in November 2022, were barely covered in the first draft of the law. The Commission had to go back to the drawing board.

Drafting the AI Act was an enormous effort, and it does not end there. To apply the Act, so-called harmonised standards are also needed: technical standards that translate the legislation's requirements into concrete terms. Developing such standards is a consensus-based process that takes time, and time is exactly what the Commission is running short of, because the AI Act's entry into force set binding deadlines.

The race against time jeopardises consensus

The technical committee JTC 21 Artificial Intelligence has been tasked with drafting these standards. Because it is behind schedule, the Commission is putting pressure on the European standardisation organisations (SDOs) to which JTC 21 belongs: the European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC). The technical boards of both SDOs have responded and recently decided to shorten the usual development time so that the standards are available by the end of 2026.

This decision was sharply criticised by the members of JTC 21, who believe it jeopardises the principle of consensus and makes the development of broad-based harmonised standards impossible.

Important findings for Switzerland

The experience with the EU AI Act should serve as a lesson. Switzerland should not attempt to regulate artificial intelligence comprehensively but, as far as possible, on a sector-specific basis. This reduces complexity and effort, and lowers the risk of being overtaken by ongoing developments.

Any Swiss AI law should be based on established international standards. It makes little sense to enact regulation that cannot be implemented in practice. We should not put ourselves under unnecessary pressure and risk regulation undermining the very principles it is supposed to protect: transparency, inclusion and trust.

Contact us

Manuel Kugler

Data & AI Programme Manager / Advanced Manufacturing

Disclaimer

This article was written by the author and does not represent the official opinion of the Swiss Academy of Engineering Sciences SATW.