What is the AI World Café?

The AI World Café is an interactive discussion format designed to bring together experts, researchers, and innovators in the Helmholtz AI community and beyond. Through dynamic, small-group conversations, participants exchange ideas, tackle pressing challenges, and explore new directions in AI research.

At this year’s HAICON25, the format brought together a variety of participants across 11 discussion tables, leading to valuable insights and collaborations. Wondering what they looked like? Find the reports on the discussion tables here.

By participating in the AI World Café, you will:

  • Engage in meaningful discussions on cutting-edge AI topics;

  • Expand your network within the Helmholtz AI community and beyond;

  • Gain new perspectives and insights to advance your research;

  • Shape the future of AI by contributing to innovative conversations.

Whether you’re looking to share your expertise, learn from others, or find potential collaborators, the AI World Café is the perfect platform.

Be a topic contributor

Each year, all participants have the opportunity to propose and lead a discussion topic at the AI World Café. Typically, as a host, you will:

  • Select an engaging topic related to AI research, applications, or challenges.

  • Facilitate three 20-minute discussion rounds, guiding participants through key questions and ideas.

  • Document key insights from your session to contribute to a collective summary of the event.

List of Café topics, 2025:

This table discusses advancing AI beyond text-based models by combining generative and predictive architectures to create systems with accurate physical intelligence. Such systems can adapt to dynamic environments, using generative capabilities to develop new designs and behaviors while predictive modules optimize real-world outcomes. We expect a new generation of AI-powered systems: agents, intelligent machines, and robots that learn rapidly and operate autonomously with enhanced flexibility and resilience. The discussion outlines these evolving systems’ key components, technological enablers, and potential applications, with an emphasis on practical deployment.

Host: Celso Ricardo Caldeira Rêgo, Karlsruhe Institute of Technology (KIT)

Agentic AI is taking over many facets of modern life and encroaching on scientific applications via projects and dedicated funding instruments. The purpose of this World Café discussion is to determine the state of agentic AI (interest, implementation, politics) at Helmholtz, and to evaluate whether we can identify a core working group of interested scientists across the Helmholtz centres.

Host: Sebastian Lobentanzer, Helmholtz Munich

Artificial intelligence (AI) offers powerful tools for the advancement of scientific discovery by identifying complex patterns that are often difficult for humans to detect. However, the performance and reliability of these models are inherently dependent on the expertise of those who develop and apply them. Consequently, issues such as data quality, biased datasets, and a lack of transparency in methodological reporting significantly impact the reproducibility of AI-driven research. This round will explore these challenges and invite participants to contribute ideas and potential solutions aimed at promoting reproducible practices in AI-driven science.

Resource: Is AI leading to a reproducibility crisis in science?

Host: Athar Khodabakhsh, Helmholtz-Zentrum Berlin (HZB)

The DFG Code of Conduct ‘Guidelines for Safeguarding Good Research Practice’ (link) currently lacks specific guidance on the use of (generative) AI tools. However, in September 2023, the DFG introduced supplementary guidelines for working with generative models for text and image creation (link). These guidelines emphasise key principles, including Transparency and Disclosure, Maintaining Responsibility, Authorship, and Review Process Restrictions.

But what does this mean in practice? Where do we draw the line between acceptable and notifiable AI usage? Should researchers disclose the use of seemingly innocuous tools like autocorrection, or more advanced applications like AI-assisted literature searches, text polishing, automatic literature reviews, text generation, or coding co-pilots? These tools are increasingly being used to improve efficiency and precision, but the boundaries of transparency and accountability remain unclear.

In this open discussion session, we invite you to join us in exploring the complexities of AI tool usage in research. We will examine the extent to which AI tool usage should be made transparent in various contexts, such as publications, software documentation, and grant applications. Come share your thoughts and help shape the conversation on responsible AI use in research.

Host: Susanne Wenzel, Forschungszentrum Jülich, Helmholtz AI

Artificial Intelligence is fundamentally reshaping the landscape of scientific inquiry, bringing about unprecedented opportunities alongside profound challenges to the very foundations of knowledge production. As AI tools scrutinize vast datasets and generate hypotheses at unparalleled scales, we grapple with opaque processes, the overwhelming deluge of research, and increasing hyper-specialization.

Inspired by the accompanying article, this round table discussion will delve into the “epistemological upheaval” driven by AI’s integration into science. We will explore concepts such as:

  • The redefined “Ends of Science”: Not a cessation, but a transformation of methodology and understanding.
  • Epistemic Overhangs & Underhangs: The gaps created when theories outpace verification or empirical findings lack causal explanations, and how AI might accelerate these.
  • The Challenge of Opacity: How do we evaluate scientific findings from systems operating beyond human comprehension?
  • The Promise of Mechanistic Interpretability: How can new tools help us “open and visualize” AI models to gain understandable explanations and bridge epistemic gaps?
  • The Future of the Scientific Method: What new tools, methods, and norms are needed to leverage AI effectively and responsibly for planetary-scale research?

Host: Oleg Filatov, Forschungszentrum Jülich, Helmholtz AI

Foundation models have been put forward as a possible one-size-fits-all solution in science as well, and several implementation projects are now underway across disciplines. With big money being spent and big words going around, we should ask ourselves: Is this salvation coming our way, or will it fall short of our expectations, leaving us with headaches rather than insights?

We want to discuss with you the implementation pathways of these large-scale models – beyond LLMs – their data situation, and the opportunities and limits of the tasks they might solve. How can we combine them with existing models? Who can benefit from their learned context? Should they be seen as working out-of-the-box or will only clever adaptation make them shine? Can we transfer insights from popular models such as those on language or images to scientific models? Under which circumstances should we instead prefer models trained for a specific task?

Hosts: Paul Keil, Tobias Weigel, Helmholtz-Zentrum Hereon, Helmholtz AI

As AI increasingly relies on data-hungry models, can synthetic data fill critical gaps in Earth observation and environmental modeling — and under what conditions does it help or harm?

Host: Vytautas Jancauskas, German Aerospace Center, Helmholtz AI

As AI picks up speed, concerns have emerged, and many are pondering where human intelligence is headed. Are we on the path to losing our brain plasticity? Is the brain like a muscle that weakens if it is not trained? While AI is becoming part of our everyday life, supporting us in tasks ranging from information retrieval to decision-making, critics argue it may be fostering dependency, diminishing critical thinking skills, and weakening our ability to process complex problems unaided.

This discussion aims to weigh both sides by sparking a philosophical and ethical conversation about our future.

Host: Maria Petrova-El Sayed, Forschungszentrum Jülich, Helmholtz AI

In machine learning, the ability to make reliable predictions is paramount. Yet, standard ML models and pipelines provide only point predictions without accounting for model confidence (or the lack thereof). Uncertainty in model outputs, especially when faced with out-of-distribution (OOD) data, is essential when deploying models in production. This session serves as an introduction to the concepts and techniques for quantifying uncertainty in machine learning models. We will explore the different sources of uncertainty and cover various methods for estimating these uncertainties effectively. By understanding and addressing uncertainty, particularly in the context of OOD data, practitioners can enhance the robustness of their models and foster greater confidence in model predictions.
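
For readers who would like a concrete starting point before the session, below is a minimal sketch of one common technique, a deep ensemble: train several identical networks with different random seeds and use the spread of their predictions as an uncertainty proxy. This is an illustration only, not the session's material; the toy data, model sizes, and in/out-of-distribution split are all assumptions.

```python
# Minimal deep-ensemble sketch for uncertainty estimation
# (illustrative only; toy data and model sizes are assumptions).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))                  # training inputs
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)  # noisy targets

# Train several identical models that differ only in their random seed.
ensemble = [
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                 random_state=seed).fit(X, y)
    for seed in range(5)
]

# Evaluate beyond the training range to mimic out-of-distribution inputs.
X_test = np.linspace(-6, 6, 200).reshape(-1, 1)
preds = np.stack([m.predict(X_test) for m in ensemble])
mean = preds.mean(axis=0)  # point prediction
std = preds.std(axis=0)    # ensemble disagreement = uncertainty proxy

in_dist = np.abs(X_test.ravel()) < 3
print("mean std in-distribution:    ", std[in_dist].mean())
print("mean std out-of-distribution:", std[~in_dist].mean())
```

On this toy problem, the ensemble's standard deviation typically grows for inputs outside the training range, which is exactly the kind of OOD signal the session addresses.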

See also Helmholtz-AI-Energy/HAICON25-Prologue-Day#10

Hosts: Steve Schmerler, Helmholtz-Zentrum Dresden-Rossendorf, Helmholtz AI; Jose Robledo, Forschungszentrum Jülich, Helmholtz AI