Is democracy threatened by artificial intelligence?


This article explores the disruptive potential of AI algorithms and the challenges posed by the increasing complexity of digitalization. It discusses the impact on political decision-making and voting behavior, the need for unbiased AI assistants, and concerns about data manipulation and algorithmic bias. Is there a threat to democracies? The article emphasizes the importance of regulation, citizen education, and international governance.

Team Jorge: this is the subject of a recent article in the Guardian that puts technology in the dock for being used to influence democratic processes. This is not the first time. One certainly remembers the articles about Cambridge Analytica's role in the 2016 US election and the Brexit vote; whether the allegations are true is for the courts to decide. What needs to be emphasized here is that it is technically possible to collect vast amounts of data and let artificial intelligence (AI) algorithms disrupt the democratic system.

At the end of 2022, the ChatGPT phenomenon flooded the news. ChatGPT is an AI that can, among other things, prepare arguments as if they came from a human. Google with its Bard, China's Baidu with its Ernie, and many others are trying to do better. The pace at which new AI tools emerge has become dizzying. Do these ever more powerful algorithms, combined with ever larger masses of data, have the potential to generate even greater disruption than our economies and societies have experienced over the last decade?

To address this question, we need to highlight an important phenomenon that has emerged over the past two decades: the increasing complexity of the products and services around us, brought about by digitalization. A smartphone today, for example, has many more functions (GPS, music, search engine, e-mail, games, etc.) than the phone of the beginning of the century, and services are delivered through a multitude of tools: the smartphone, the smart watch, the computer, the car. The technological complexity of products and their supply chains, the countless new services that radically change established habits of use, the mutual interconnections between these new products, services and value chains, and the new types of business models all lead inexorably to more complex relationships between corporations and individuals. Examples such as the liberalization of the electricity market, the electronic medical record, the regulation of cryptocurrencies, data security, ownership and privacy, artificial intelligence (Regulatory framework proposal on artificial intelligence | Shaping Europe's digital future (europa.eu)) or the taxation of robots (Xavier Oberson (Univ. of Geneva), Taxing Robots: Helping the Economy to Adapt to the Use of Artificial Intelligence, Edward Elgar Publishing, 2019) are telling. The average person loses understanding, control and bargaining power in the welter of interfaces she has to face. Moreover, these new types of relationships are often not covered by legislation, because they evolve so rapidly: existing laws were designed for another world, one in which digitalization and the resulting complexity did not exist. Political decision-making in this context is difficult. Arguments are becoming ever more multidimensional, mixing social, legal, political, technological and other issues, and it becomes difficult for citizens to find their way through this labyrinth.
Yet these very same citizens are the ones who should, in one way or another, decide on each of these aspects of their everyday life.

In a country of direct democracy like Switzerland, how will this affect the way citizens vote when questions like the above (and many others, since these are only a few examples) are at stake in the democratic process? Will the vote be based on a simplified argument that leaves out important parameters, or on the issue as a whole, which is often confusing? The question becomes even more complicated in democratic countries where elections are held every four or five years: each citizen's decision should be based on an understanding of all the positions of the candidates, each of which is extremely difficult to grasp. If a single issue is increasingly hard to understand, what about a multitude of often interrelated issues? The question is what is optimal and what is likely to happen in the future: the adoption of simplified argumentation (the most common case today), or a full and thorough description of every question each candidate must answer, an approach that is correct but too complex to be effective?

To resolve this dilemma, an approach that it would be reasonable to expect, sooner or later, is the development of intelligent AI assistants capable of mapping the consequences of policy decisions, taking into account all the parameters affecting a given issue. However, the neutrality of such tools must be questioned. AI algorithms draw conclusions from the large amounts of data provided to them: AI learns from this data, and it only answers the specific questions it is asked. If the data and the questions are biased, the answers will be biased too.
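The point that an assistant's answers simply mirror its training data can be illustrated with a deliberately naive toy sketch (all corpora and topic names below are hypothetical, and real AI systems are vastly more complex than this majority-vote caricature):

```python
from collections import Counter

def train_assistant(corpus):
    """A deliberately naive 'AI assistant': for each policy topic,
    it simply adopts the majority stance found in its training corpus."""
    by_topic = {}
    for topic, stance in corpus:
        by_topic.setdefault(topic, []).append(stance)
    return {t: Counter(s).most_common(1)[0][0] for t, s in by_topic.items()}

# Hypothetical training data: the same policy question, two corpora.
balanced_corpus = [("energy", "pro")] * 50 + [("energy", "con")] * 50
skewed_corpus = [("energy", "pro")] * 90 + [("energy", "con")] * 10

# The 'advice' merely reflects whichever corpus the assistant was fed.
print(train_assistant(skewed_corpus))
```

However simplistic, the mechanism is the same one that matters at scale: whoever curates the corpus implicitly curates the advice.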

Let us assume that the latter problem is circumvented. Suppose that dedicated, perfect or near-perfect AI tools could advise citizens with perfect or near-perfect transparency on the issues being voted on or on the multitude of positions of the candidates running for office. What comes next?

We can now recall the Cambridge Analytica case. Again, whether and how this happened is not for me to judge. What is clear by now is that it is technically possible for a family of AI algorithms to extract (or extrapolate) the wishes, fears and hopes of individuals, this time based on a full, fair and transparent analysis of all the consequences.

Is it not straightforward to conclude that combining AI algorithms that map the impact of political decisions with algorithms that understand the deep will of individuals makes it possible to credibly infer how an individual will vote? There is no need to comment further on the threat such a development poses to democracies. Yes, this is a dystopian, Cassandrian prediction. It sounds like science fiction.
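Why this combination is so potent can be shown in a back-of-the-envelope sketch (every number and dimension name below is hypothetical): if one algorithm scores a policy along a few dimensions and another has profiled how much an individual cares about each dimension, a single weighted sum already yields a vote prediction.

```python
# Hypothetical outputs of the two families of algorithms discussed above:
# how a policy impacts each dimension, and how much a voter weighs each one.
policy_impact = {"economy": +0.6, "privacy": -0.8, "environment": +0.2}
voter_profile = {"economy": 0.3, "privacy": 0.9, "environment": 0.1}

def predict_vote(impact, profile):
    """Weigh each policy dimension by how much the voter cares about it;
    a positive total suggests a 'yes' vote, a negative one a 'no'."""
    score = sum(impact[d] * profile[d] for d in impact)
    return "yes" if score > 0 else "no"

print(predict_vote(policy_impact, voter_profile))  # -> "no" for this profile
```

Real systems would use far richer models, but the structure of the inference, preferences times consequences, is exactly this simple.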

The thoughts outlined here were born during the preparation of my book "Social Classes and Political Order in the Age of Data" (Social Classes and Political Order in the Age of Data - Cambridge Scholars Publishing), which was completed in early 2022. At that time, practically a year earlier, ChatGPT was by no means as well known to the general public as it is today, and the Team Jorge case was largely unknown. I imagined that AI tools to assist individuals would arrive, at the fastest, no earlier than 2025. In discussions with friends, the comment was that expecting such tools was close to science fiction. Weeks after I wrote this, I realized that the technology is already here:

Can AI predict how you'll vote in the next election? Study proves artificial intelligence can respond to complex survey questions like a real human -- ScienceDaily

Is this a degradation of democracy or a positive development? Depending on how our societies react, it may well be a degradation (in particular amid a global tendency towards democratic backsliding: The world's most, and least, democratic countries in 2022 | The Economist), one with unforeseen negative impacts, certainly on the daily life of the next generation and most likely on ours. This is an open question!

What can our societies do to avert this dystopian future? Should we regulate AI and data? This is what I proposed in my book, cited above. Discussions and efforts are now underway. Are they enough? And if so, are they fast enough? Even if the answer to the first question were positive, I very much doubt that international governance, as it stands, can act faster than the technology evolves. In fact, I am convinced that creating an ad hoc international organization with decision-making authority over data regulation (or empowering an existing one) is one of the few credible solutions that would allow regulation to catch up with the pace of technological change. Yet even then, a key element would still be missing: informing and educating citizens about the power, strengths and weaknesses of AI and the associated data. The recent resignation of Geoffrey Hinton from Google in order to "speak freely about technology's 'dangers'" (Google AI pioneer says he quit to speak freely about technology's 'dangers' | Reuters) is a sign of the importance of the question. This is where our political authorities, our universities and our academies must play an active role as a priority. Today.