Apertus was developed under the Swiss AI Initiative by ETH Zurich and EPFL, and is presented as an example of compliant, transparent AI. As an ethicist, what is your first reaction to a model that publishes its source code, training data and training methods in full?
Apertus takes a visionary step towards open and transparent LLMs. But it is only a first step: AI systems like Apertus do not simply reflect reality; they structure what counts as knowledge, whose voices are legible, and which perspectives scale. Instead of the conventional question “Is the model biased?”, we should start asking “Who has the authority to define, interpret, and contest what we see as the informational substrate of society?” In this context, neutrality is no longer a passive stance – it becomes an active design choice. Especially for Switzerland, with its long tradition of defining itself as neutral, this is an important shift.
In the case of Apertus, publishing source code, training data, and methods signals Switzerland’s strong commitment to procedural openness. Ethically, this matters for two reasons:
Looking through a more fundamental moral lens, however, further critical questions arise: does publishing training data and methods per se resolve questions of legitimacy? Who decides what data counts as representative? Which languages, dialects, or knowledge systems are prioritised or excluded? What normative assumptions are baked into filtering and alignment?
These are not just technical questions but political ones too. Transparency can expose them, but it cannot settle them. In this sense, Apertus is less the endpoint of ethical AI than the beginning of a different kind of responsibility – once everything becomes visible, the challenge boils down to building the institutions and practices that can make that visibility meaningful.
You have argued that AI development teams tend to prioritise technical and economic goals, leaving ethical and societal concerns until later. What is the practical cost of this sequence?
In the case of Apertus, what we see is that stereotyping persists despite transparency. This reflects a broader sequencing problem in AI development: systems are optimised first for performance and efficiency, while questions of inclusion, representation, and societal impact are deferred, potentially externalising harms.
From my perspective, this ordering has a series of concrete and far-reaching consequences:
The real cost of this sequence is not biased outputs; it is the institutionalised delay built into a system where ethical considerations are always one step behind technological capability. A more sustainable approach would invert that sequence – not by slowing innovation, but by redefining what counts as “core functionality”. In that framing, inclusivity, representational balance, and societal impact are not add-ons; they are part of the system’s performance criteria from the start.
Yes – scale doesn’t just amplify existing problems; it changes their nature. Treating AI as if it were simply a faster human decision-maker misses what is at stake.
With advanced AI systems, we are no longer dealing with isolated decisions, but with decision infrastructures. These systems do not just act at scale; they standardise judgments across contexts, often invisibly. When decisions scale, responsibility must scale accordingly. The ethical challenge of AI is not just that it can be biased, but that it can institutionalise bias at a speed and scope that outpace our current ethical and legal imagination.
As Apertus shows, Switzerland has the institutional capacity to respond to this. Importantly, doing so will require moving from a case-by-case logic to a system-level governance mindset. The kind of response this points to, from my perspective, is not necessarily heavier regulation, but a different mindset. For example:
"If we’re not even sitting at the table, how do we make our voices heard?" You've raised this from your own experience. What would it take for AI development teams to change that?
If people are invited to the table but cannot influence what is being served, inclusion has not truly taken place. For the table to look meaningfully different, the issue is not simply who is invited, but how the table is structured – who sets the agenda, and whose knowledge counts. Without changing those conditions, diversity becomes symbolic rather than consequential. In the context of Apertus, I can see several structural shifts potentially taking place:
The underlying principle for initiating these structural changes should not merely be “adding voices”, but rather “redistributing knowledge authority”. For Switzerland, on the one hand, I see this as an opportunity – its tradition of pluralism and negotiated governance could translate into AI development processes that are more participatory and reflexive than those driven purely by scale or market logic. On the other hand, this requires treating diversity not as a pipeline problem but, at its core, as a question of institutional design.
Apertus is a foundation, not a finished product. Every adaptation is a potential point of ethical drift. Which governance mechanisms are essential to ensure that its values carry through to each new application?
Some ethical challenges may be intensified when we treat LLMs as foundation models, because doing so decentralises responsibility. While the base model can embody certain commitments, as Apertus does, every fine-tuning step is a potential point of drift. The ultimate question is therefore not how ethical the base model is, but how (and how far) its ethical properties travel.
This means that a foundation model is not just a technical base, but a normative starting point. If its ethical commitments are not actively carried through each layer of adaptation, they will not scale; they will erode. This is why we should establish practices that ensure these values carry through to each new application:
Here, again, I see a real opportunity for Switzerland – and a chance to make a genuine contribution to the world – by treating AI not just as a set of modular technical products, but as an integrated governance ecosystem in which each extension of the model remains connected to a shared set of principles that are operationalisable, rather than declared as “window dressing” or “ethical washing” slogans.
What this requires is a shift from “AI made in Switzerland” to “AI stewarded like Switzerland”. In the age of AI, neutrality is no longer about abstaining from power, but about how power is encoded. Switzerland has a unique opportunity to move from “neutrality” to “stewardship” – designing AI systems that do not merely optimise performance, but safeguard the integrity of knowledge as a public good, not a private asset.
Dr Ning Wang is a Research Group Leader at the University of Zurich, and a member of various working groups on AI Ethics at leading global institutions, such as the World Economic Forum (WEF) and the World Health Organisation (WHO). Her research addresses the ethical, social, legal, and regulatory challenges of disruptive technologies. She can be reached at ning.wang@uzh.ch.
| Role | Name |
|---|---|
| Text by | Esther Lombardini |
| Expertise | Ning Wang |