AI Policy Summit 2023: Shaping the Future of Global AI Governance

Artificial Intelligence

A lot of existing regulation, e.g. the General Data Protection Regulation (GDPR), affects AI in one way or another. However, there is currently no global AI policy that would ensure that the adoption of AI is aligned with human rights, democratic values and the rule of law. Therefore, the question remains: What AI policy do we want? Answers were discussed at the 4th ETH AI Policy Summit on 3 and 4 November 2023.

A global discussion 

Elliott Ash, Assistant Professor of Law, Economics, and Data Science at ETH Zurich's Center for Law & Economics, and Ayisha Piotti, Director of AI Policy at the Center for Law & Economics (ETH Zurich) and Managing Partner at RegHorizon, opened the fourth AI Policy Summit 2023. Over the two days, they welcomed over 1000 participants from 109 countries and 300 AI experts – online as well as live at the Audimax at ETH Zurich. With 52 speakers from around the world – policymakers, civil society, the private sector and academia – a truly multidisciplinary dialogue was achieved.

Gabriela Ramos, Assistant Director General at UNESCO, addressed the challenges in the application of AI systems. Despite being a policy person, she pointed out that the discussion should not focus on regulation alone: innovation and creative progress should not be hindered by extensive regulation, she argued. To this end, incentives, investments, subsidies and other tools should help ensure that AI delivers for good. Michael Hengartner, President of the ETH Board, underlined the importance of AI for Switzerland and made clear that AI affects every decision-maker.

What is in scope? 

Since the release of ChatGPT – and its companion chatbots based on large language models – if not before, stakeholders from academia, government and industry have increasingly been calling for more substantial regulation of AI. According to Roger Dubach, Ambassador and Deputy Director of the Directorate of International Law at the Federal Department of Foreign Affairs (FDFA), reasons to regulate AI include ensuring that the application of AI is in line with our value system, driving for technological leadership, guiding the interaction between humans and machines, and empowering new applications.

However, the challenge of phrasing a suitable definition of AI remains unsolved. One approach is to avoid a concrete definition altogether. Another – which the EU chose for its AI Act – is to work with a fuzzy one, to be clarified in court. A third is an exhaustive definition. ChatGPT demonstrated the pitfall of this last approach: before the chatbot's release, generative AI and large language models received little attention in terms of regulation, so an exhaustive definition drafted then would have missed them.

The underlying cause of this debate was elaborated on by Paul Nemitz from the European Commission. It lies in the collision of two worlds: that of lawyers and that of engineers. Engineers write code for computers, i.e., dumb machines that do not understand anything but simply follow clear instructions. Therefore, their definitions need to be very exact. Laws, on the other hand, are written not for machines but for humans, who can think for themselves. To work out a reasonable AI regulation, engineers and lawyers need to discuss with and understand each other better.

Building bridges to understand the big picture 

According to Nemitz, AI is so important that it deserves its own law – “binding law” being among the most noble words of a democracy. To work out such a law, bridges between democracy and the technological world are required. This topic recurred throughout the main conference. While such bridges are still largely missing today, the European Commission in Brussels is trying to establish them by engaging technologists in the democratic discussion.

Jessica Montgomery, Executive Director of the Accelerate Programme for Scientific Discovery at the University of Cambridge, reflected on the British developments and referred to the AI Safety Summit hosted by the UK two days earlier. One theme that came through very strongly was what AI should and should not do. Until recently, the conviction was that AI should be used for tasks that humans cannot do well – not for art or poetry. But the rise of generative AI changed everything. Bridges are needed to address the fractures between science and policy, and the disconnect between public concerns and what AI is actually being used for.

A more holistic approach to the adoption of AI was presented by Hiroki Habuka from the Wadhwani Center for AI and Advanced Technologies, Japan. In his country, he said, the main aim is to understand the big picture and to also consider the risks of not using AI. In Japan, a major goal is to transform all governance systems into agile, multistakeholder processes.

Towards a global common ground 

Dubach recalled that in 2019, an interdepartmental working group of the Swiss government concluded that existing laws were still solid and that attention only needed to be paid to the 17 sectors that had been analysed. Today, there are discussions about different horizontal regulations in different parts of the world. For a country like Switzerland, which has to deal with several international jurisdictions, this is a big challenge.

That is also the reason why Switzerland is heavily engaged in the Committee on Artificial Intelligence (CAI) within the Council of Europe. The CAI involves not only European countries but also countries such as the United States of America, Canada and Japan. The goal is to establish common ground through a high-level convention. A consolidated working draft was published earlier this year.