2nd TDW: Trusted AI

Trusted AI – The Future of Creating Ethical & Responsible AI Systems

Theme Development Workshop
13 September 2023 | 9:00–17:30 CEST

Identify common goals between academia and industry as well as other relevant stakeholders, and define promising approaches for European research and innovation in Artificial Intelligence.

The organising team cordially invites you to the second cross-cutting Theme Development Workshop "Trusted AI – The future of creating ethical and responsible AI systems", which will take place via Zoom on 13 September 2023. This is a joint workshop of the VISION CSA and the EU's six AI Networks of Excellence (NoEs) AI4Media, ELISE, ELSA, euROBIN, HumanE-AI-Net and TAILOR, as well as CLAIRE AISBL and ELLIS.

The workshop will be held online via Zoom with a mixed programme of presentations and in-depth discussions of specific sub-topics in smaller groups (breakout sessions). This gives participants the opportunity to discuss the importance and use of Trusted AI with selected experts and to contribute to the strategic research and innovation agenda for AI in Europe.

The deadline for applications has been postponed from 28 August to 31 August 2023.
Don’t miss your chance to be part of this discussion platform!


If you have questions, do not hesitate to reach the organising team via vision_ict48-dfki@dfki.de.

Part 1

9:00-9:15    Welcome & Objectives
9:15-9:35    Principles of Trusted AI
             – André Meyer-Vitali, DFKI
9:35-9:50    Role of the EU and orientation of EU policy making in relation to trustworthy, responsible and ethical AI
             – Antoine Alexandre André, DG CNECT
9:50-10:00   Coffee Break & Socialising
10:00-11:30  Parallel Breakout sessions (1-7)
11:30-12:30  Plenary presentation of key findings from the Breakout sessions
12:30-13:30  Lunch Break & Socialising

Part 2

13:30-13:45  Ethical AI
             – Meeri Haataja, Saidot
13:45-14:00  Responsible AI in the industry
             – Marc Steen, TNO
14:00-15:30  Parallel Breakout sessions (8-14)
15:30-15:45  Coffee Break & Socialising
15:45-16:45  Plenary presentation of key findings from the Breakout sessions
16:45-17:30  Closing & Socialising

Parallel Breakout Sessions

Morning Sessions 1-7

1. AI explainability for vision tasks
This session will discuss the present and future of AI explainability for visual data classifiers and other vision tasks, how explanations can be presented to the users, and what we can expect to understand from these explanations.

2. Ethical considerations and new challenges of Generative AI
This session aims to explore the risks and challenges raised by generative AI from an interdisciplinary perspective (legal, ethical, societal, technical, and cybersecurity).

3. Rigorous vs empirical AI privacy: Where is the middle ground for defining and evaluating privacy in complex algorithms?
This session will discuss the relevance of epsilon as a definitive measure of privacy loss in the context of complex algorithms implementing differential privacy and the proliferation of empirical measurements of privacy via attacks.

4. Monitoring progress in interpretable AI
How to measure the accuracy of machine learning predictors is well studied; monitoring progress in developing models interpretable by humans is less trivial. We propose to bring together an interdisciplinary group of experts (covering legal, regulatory and technical aspects) to outline the requirements for such monitoring and possible ways to approach this problem.

5. Causality and Trust
Causal models can improve the trustworthiness of AI systems (Causality for Trust, C4T). Beyond precision and accuracy, which are fundamental to trustworthiness in AI, causal models support transparency, reproducibility, fairness, robustness, privacy awareness, safety and accountability.

6. Robustness/Verification
This session looks at technologies to strengthen the secure use of AI technologies. There will be a discussion on certifiable robustness, resilience and recovery, and uncertainty and safety in decision-making. Finally, the relationships between robustness and privacy, explainability, and fairness will be discussed to rule out potential trade-offs or define suitable mitigation strategies.

7. AI/ML Benchmarking
This session reflects on ways and methodologies for evaluating AI/ML solutions in real-world conditions. The discussion will touch upon the best practices for defining meaningful benchmarks, the present and future of AI/ML benchmarking, reproducibility and specific ways and challenges of measuring aspects of systems’ trustworthiness on the road towards Creating Ethical & Responsible AI Systems.

Afternoon Sessions 8-14

8. AI Ethics: from principles to practice. Putting "ethical" and "responsible" AI into action
The session will focus on operationalizing the AI ethical guidelines and principles. It will reflect on the shifting approach from high-level ethical principles towards legally binding obligations (e.g. in the AI Act) and practical tools (e.g. the Human Rights Impact Assessments).

9. Meaningful Human Shared Control
Over the past years, we have seen a number of guidelines promoting ‘human-in/on/out of the loop’ approaches to ensure human control and oversight over AI systems. This topic explores the interplay between the dynamic transfer of tasks and ensuring long-term control over the socio-technical system.

10. Human Oversight and Explainability for AI
This session will look at architectures, mechanisms and methods capable of generating meaningful and evidence-based assurance necessary to secure and maintain the safety and security dimensions of AI systems.

11. Trusting Each Other
For collaborative decision-making (CDM), it is essential that each human and agent is aware of the others' points of view and understands that others possess mental states that might differ from one's own – which is known as a Theory of Mind (ToM).

12. Human-Aligned Video AI
Video AI holds the promise to explore what is unreachable, monitor what is imperceivable and protect what is most valuable. But what exactly defines human-aligned video-AI, how can it be made computable, and what determines its societal acceptance?

13. Trustworthiness in Robotics: at home, at work, and in the city
How can we ensure that humans trust a robot? What are the obstacles to building that trust – at home, at work, and in the city? How can trust be established with people with cognitive, physical or sensory impairments? The session will take inspiration from the guidelines of the High-Level Expert Group on AI, which recommend that AI systems meet a set of requirements to be deemed trustworthy.

14. Ethics in Games AI
Games are an application domain of AI research that is often overlooked when discussing responsible AI. This session aims to change that by discussing the unique challenges that arise in the games environment (e.g. the need for believable characters) while also satisfying ethical values.