[Image: Professor Michèle Finck sits in a white conference room, looking into the camera with a friendly smile. She has shoulder-length brown hair and wears a black top with small white dots.]
ZEISS Beyond Talks

Interview with Professor Michèle Finck

Professor Michèle Finck works at the intersection of law and computer science. The Carl-Zeiss-Stiftung Professor for Law and Artificial Intelligence at the University of Tübingen in Germany and Co-Director of the Carl-Zeiss-Stiftung Institute for Artificial Intelligence and Law studies how legal structures can support innovation while protecting people and the public interest.1

For over 175 years, the people at ZEISS have asked the question: How can we challenge the limits of imagination? In celebration of that vision, ZEISS has partnered with thought leaders and great minds from around the globe for the ZEISS Beyond Talks, giving them center stage to speak about their own work, visions, passion, and issues that are affecting our world moving forward.

Could you briefly describe the current scope of your research?

At the University of Tübingen and the Carl-Zeiss-Stiftung Institute for Artificial Intelligence and Law, we explore questions at the intersection of law and computer science. We take an interdisciplinary approach to understanding the real-world impacts of technologies like AI.

Over the past two years, I’ve focused on the European Union’s Artificial Intelligence Act and various areas of EU data law and data protection, especially their relationship to AI.

What is the role of law in shaping innovation?

The law has long shaped innovation in pursuit of societal or environmental objectives. We might think, for instance, of the regulation of chemicals or food.

Law plays a role in managing the risks associated with certain kinds of innovation. It helps ensure that only products whose risks have been managed to a sufficient degree are actually released onto the market.

Historically, how has the law responded to technological innovation?

We can see two broad patterns. In some cases, technology emerges first and regulation follows. This is an oversimplification, but think of the early Internet, where legal responses largely came after widespread adoption. In other areas, the law anticipates technological transformation and prepares for it.

Automated driving is a good example: Several jurisdictions updated legal frameworks years in advance to both prepare for and facilitate the technology’s emergence. The EU Artificial Intelligence Act includes similar forward-looking elements – for instance, it anticipates general-purpose AI models with systemic risks and sets the rules accordingly.

What are the core objectives of the EU Artificial Intelligence Act, especially regarding general-purpose AI models?

One explicit objective of the AI Act is to minimize risks associated with AI in ways that build public trust. The Act introduces requirements for general-purpose AI models that could pose systemic risks, aiming to avert those risks before they materialize. At the same time, it includes instruments for regulatory experimentation designed to stimulate innovation, recognizing that good regulation should both protect society and encourage innovation.

Why are data and data protection so central to AI?

Data is essentially information, and it’s the fuel AI systems need to produce useful outputs. Large AI models rely on vast and varied datasets to learn patterns and generate results. While data protection law has a complex history, one core goal is to ensure that people have control over what happens to their personal information. That principle of control is crucial as AI systems process and generate insights from data at scale.

[Image: Professor Michèle Finck sits at a desk, typing on a laptop and looking into the camera with a friendly smile. She has shoulder-length brown hair and wears a black top with small white dots.]

The law helps make sure new technologies develop in ways that benefit society as a whole.

Prof. Michèle Finck, Carl-Zeiss-Stiftung Professor for Law and Artificial Intelligence, University of Tübingen

The European Union has taken a leading role in data regulation. What stands out about its approach?

The European Union has been at the forefront of regulating data in recent years. Beyond the General Data Protection Regulation (GDPR), which took effect in 2018, we’ve seen the adoption of the EU Data Act and the Data Governance Act.

The EU has taken on a global leadership role, regulating data more comprehensively than virtually any other jurisdiction. In doing so, it has set the benchmark for data protection worldwide.

What role does the law play in ensuring AI remains human-centered?

As with earlier waves of technological innovation, the law helps make sure new technologies develop in ways that benefit society as a whole, not just the companies that create them. It does this by managing risks and setting conditions under which products and services can enter the market, so innovation advances while protecting people and the public interest.

How can we involve people more closely in shaping AI-related law and policy?

There have been many initiatives to bring people closer to legislative processes. Two factors are critical. First, clear communication – explaining what’s at stake in terms that are understandable across demographics. Second, practical accessibility – making it genuinely easy for people to participate despite busy schedules and competing priorities.

What excites you most about the future of legal research and technology?

It’s exciting to see how interdisciplinary collaboration is becoming more common in legal research. Working across disciplines helps us better understand the real-world effects of law, especially in fast-moving areas like AI.

Looking ahead, I hope technology will make our working and personal lives easier by taking over tasks we’d rather not focus on. That way, we would have more time and energy for the things we truly care about.

In focus: Data protection

  • The core aim of data protection laws is to give people meaningful control over their personal data – who collects it, how it’s used, and for what purposes – while making organizations accountable. Principles like lawfulness, transparency, purpose limitation, data minimization, and security reduce risks and build public trust, which in turn supports responsible innovation.

  • Modern data protection frameworks apply a risk‑based approach that combines safeguards, including data protection impact assessments, transparency, human oversight, and robust data governance, with mechanisms that encourage safe innovation, such as regulatory sandboxes. This helps address both individual harms and broader, systemic risks, enabling AI to develop in a way that is trustworthy and socially beneficial.

  • The EU has set influential benchmarks through comprehensive rules like the General Data Protection Regulation (GDPR) and complementary data governance frameworks that enable the reuse and sharing of data in line with clear safeguards. These standards strengthen individual rights, clarify responsibilities, and often shape global practices because many organizations align with EU requirements across markets.




1 Interview edited for clarity.