AI Open Day 2023: Trustworthy AI – Humans vs. Algorithms

Thursday, 1 June 2023 | Prague | Czech Republic

Venue: Czech Institute of Informatics, Robotics and Cybernetics (CIIRC CTU)

Are you interested in Artificial Intelligence and want to learn more in an interactive way? Would you like to explore and learn about concrete demonstrations of some AI technologies?

Join the afternoon programme prepared in collaboration with the European Networks of AI Centres of Excellence and do not miss the second edition of the AI Open Day. If you want to see the atmosphere of the first such event, organised in Brussels in 2022, visit this site.

The event is organised as a hybrid event – you can visit the state-of-the-art facility of the RICAIP Testbed for Industry 4.0 at CIIRC CTU in person and see the interactive demonstrators. From 16:00 to 17:00 CEST, a roundtable discussion with leading AI experts will be held and live-streamed from the Testbed Control Room next door. Come to CIIRC CTU in Prague or simply watch the discussion here (streamed in cooperation with CLAIRE).

Gallery

Recording of the opening introduction to the European AI Ecosystem

This video presents the first part of the “AI Open Day” programme, hosted by CIIRC CTU in the state-of-the-art facility of the RICAIP Testbed for Industry 4.0. Members of the larger European AI ecosystem – the European Networks of AI Centres of Excellence (NoEs): AI4Media, ELISE, ELSA, euRobin, HumanE-AI-Net, and TAILOR – represented by their project coordinators, prominent figures of the EU AI scene, presented their scope of activities and their contribution to “AI made in Europe”.

Live-streaming: Roundtable Discussion

Watch the recording of the moderated discussion on current topics related to Trustworthy AI.

  • Yiannis Kompatsiaris CERTH-ITI, Greece | AI4media
  • Roman Barták Charles University, Czech Republic | TAILOR
  • Josef Šivic CIIRC CTU, Czech Republic | ELISE
  • Holger Hoos CLAIRE, Leiden University, RWTH Aachen University, Germany | VISION
  • Mario Fritz CISPA Helmholtz Center, Germany | ELSA
  • Sven Mayer LMU Munich, Germany | HumanE-AI-Net

Moderated by: Alžběta Solarczyk Krausová Institute of State and Law of the Czech Academy of Sciences, Czech Republic

Live – Real – Interactive

Join us in person for a half-day event. Learn more about the Artificial Intelligence that is around us or will be in the near future.

With live demonstrations prepared by AI research teams and a round-table with some of the main actors of the European AI community, come to experience and interact with experts on the pressing topic of “Trustworthy AI”.

  • Learn something new in a dynamic and entertaining way.
  • Join insightful discussions with leading AI personalities.
  • Get to know what stands behind “AI made in Europe”: Trustworthy, Human-centred & Ethical AI.

The event was designed for:

  • General public, especially university and high-school students
  • Industrial companies, innovators and start-up community
  • Researchers – even at an early stage of their careers
  • Policymakers

The European AI community, represented by the European Networks of AI Centres of Excellence (NoEs), would like to invite all AI fans, with a specific focus on students and the innovation industry, to discover some AI applications and join an open discussion with AI experts.

The event is organised by the VISION project communication team, led by CIIRC CTU, in close collaboration with four NoEs that have been working on aspects of trustworthy AI under the H2020-ICT-48-2020 call since 2021 and two further NoEs since 2022.

Programme

14:00-14:30 | Registration and arrival of the participants

14:30-15:00 | Presentation of VISION & the European AI ecosystem

Holger H. Hoos
| Professor of AI at RWTH Aachen University and Professor of Machine Learning at Universiteit Leiden, VISION Project Coordinator and chairman of the board of CLAIRE

15:00-16:00 | Interactive Demonstrators and Open Discussions

16:00-17:00 | Round-table Discussion – Live-streamed!

17:00-18:20 | Networking & Cocktail

Round-table | 16:00–17:00 CEST

Wherever you are, on-site or online, don’t miss a moderated discussion on current topics related to Trustworthy AI, such as:

  • What does generative AI mean, and how far does its development go in different areas?
  • The ethics of AI use, its impact on society, and the changing dynamics of knowledge and information production
  • Should we protect the inputs used to train algorithms and the outputs those algorithms generate?
  • How is transparency of inputs relevant to the transparency and explainability of AI, in the case of ChatGPT-4?
  • What are the conditions for trustworthiness in relation to intellectual-property protection?
  • What are the special challenges posed by Auto-GPT?

Ask your questions and discuss with the panellists through SLI.DO!

#AIOpenDay

Moderator:

Alžběta Solarczyk Krausová
Institute of State and Law of the Czech Academy of Sciences (Czech Republic)

Yiannis Kompatsiaris
CERTH-ITI (Greece)
AI4media

Roman Barták
Charles University (Czech Republic)
TAILOR

Josef Šivic
CIIRC CTU
(Czech Republic)
ELISE

Holger Hoos
CLAIRE
RWTH Aachen University
(Germany)
VISION

Mario Fritz
CISPA Helmholtz Center
(Germany)
ELSA

Sven Mayer
LMU Munich
(Germany)
HumanE-AI-Net

Selected Demonstrators

ELISE
Computer vision and Machine perception

A showcase of intelligent robotic systems that autonomously learn to perform complex tasks from instructional videos. We will demonstrate (i) how to teach a robot to perform dynamic manipulation of a tool (shovel, hammer, etc.) based on a YouTube video of a human manipulating a tool of the same category, and (ii) how to use a human demonstration to help the planner solve multi-step task-and-motion planning problems in complex environments.

RICAIP & CIIRC CTU for VISION
Robot playing Checkers

Test your skill in a game of checkers against a robot that is programmed to predict the state of the game board up to 9 moves ahead. You can experience interaction with an industrial robot and pit your checkers strength against artificial-intelligence algorithms by choosing between three difficulty levels.

This robotic cell demonstrates how Artificial Intelligence, Computer Vision and collaborative robotics can be combined to develop a Human-Robot Collaboration application.

It also shows that modern design approaches allow industrial robots to interact safely with humans, and thus to be used for teaching and education as well. Creating a collaborative robotic workplace for playing a game involves processing input signals from the outside world using available devices, such as a camera, a touch screen, and the robot's own sensors. Furthermore, the psychological perception of the game process by a human must be considered. When competing with a supposedly stronger robotic opponent, one expects fast decision-making, precise movements, and no possibility of deception. The implementation of the game algorithm must therefore be robust enough to calculate an optimal move quickly while complying with the defined game rules.
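A lookahead of this kind is classically implemented as depth-limited minimax search with alpha-beta pruning. The sketch below illustrates the idea on a deliberately tiny token-taking game; it is a hypothetical stand-in, not the demonstrator's actual checkers engine:

```python
def legal_moves(tokens):
    """In this toy game a move takes 1-3 tokens (a stand-in for the
    legal checkers moves a real engine would generate)."""
    return [t for t in (1, 2, 3) if t <= tokens]

def minimax(tokens, depth, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Depth-limited minimax with alpha-beta pruning.

    Scores are from the maximizing player's point of view; taking the
    last token wins, so an empty board means the side to move has lost.
    """
    if tokens == 0:
        return -1 if maximizing else 1
    if depth == 0:
        return 0  # search horizon reached: neutral heuristic estimate
    if maximizing:
        best = float("-inf")
        for m in legal_moves(tokens):
            best = max(best, minimax(tokens - m, depth - 1, False, alpha, beta))
            alpha = max(alpha, best)
            if beta <= alpha:
                break  # prune: the opponent will never allow this line
        return best
    best = float("inf")
    for m in legal_moves(tokens):
        best = min(best, minimax(tokens - m, depth - 1, True, alpha, beta))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best

def best_move(tokens, depth=9):
    """Choose the move whose 9-ply lookahead score is highest."""
    return max(legal_moves(tokens),
               key=lambda m: minimax(tokens - m, depth - 1, False))
```

The difficulty levels in the demonstrator could correspond to varying the search depth: the shallower the lookahead, the weaker the play.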

AI4Media
AI for Social Media and Against Disinformation

The demonstration will present AI-based tools and services used by journalists and fact-checking experts for digital content verification and disinformation detection, integrated into the Truly Media platform (a web-based platform for collaborative verification).

CIIRC CTU for VISION
Collaborative workplace of the future – Human-robot Collaboration in Assembly

A collaborative robotic workspace where a human and a robot solve simple assembly tasks together, with the robot reacting to the immediate needs of the operator. Several cameras observe the workspace to detect objects and human motions, the human is equipped with a microphone to instruct the robot, and a graphical tablet provides feedback about the current state of the workplace. In the first scenario, the robot serves the human as a helper, finding and fetching objects on command (e.g., “pass me the hammer”, “tidy up the table”). Human and robot engage in a dialogue to resolve ambiguities in perception. In the second scenario, the robot observes the human performing a task and acts on it by predicting the following steps and helping the operator accordingly (e.g., the robot recognises that the human is assembling a car and the next part is a wheel, so it fetches and prepares it for the operator).

Technology partner: Factorio Solutions

HumanE-AI-Net
Storyboarder

A tool for the automatic generation of textual stories with accompanying images, sounds and videos.

StoryBoarder binds together text-generation (GPT), image-generation (StableDiffusion) and text-to-speech (Coqui) models to provide a tool for content creators to quickly generate stories. Creators have full control over the parameters of the individual tools, while StoryBoarder ties them together into an easy-to-use whole with useful default settings and hints. The generated videos can then be streamed directly to video-streaming servers (Twitch or YouTube).

TAILOR
Robots you can trust

Charles University (a TAILOR partner), in cooperation with CIIRC CTU, will demonstrate technology for fully automated planning of the movement of mobile robots in a shared environment such as a warehouse. The presented planning and execution algorithms guarantee that robots do not collide and reach their destinations as soon as possible. We will use real robots to showcase the technology.
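One simple way to obtain such collision-free guarantees is prioritised planning with a space-time reservation table: agents are planned one after another, and each later agent treats the paths of earlier agents as moving obstacles. The grid-world sketch below illustrates the idea; it is a simplified stand-in, not the algorithm demonstrated at the event:

```python
from collections import deque

def plan_paths(grid, agents, horizon=50):
    """Prioritised multi-agent path planning on a 4-connected grid.

    Agents are planned one by one with a space-time BFS; cells reserved
    by earlier agents act as moving obstacles, so no two agents ever
    share a cell at the same step or swap cells head-on.
    """
    rows, cols = len(grid), len(grid[0])
    reserved = set()    # (r, c, t): cell occupied at time t
    edge_res = set()    # (r1, c1, r2, c2, t): move made during step t
    paths = []
    for (sr, sc), (gr, gc) in agents:
        start = (sr, sc, 0)
        parent = {start: None}
        queue = deque([start])
        end = None
        while queue:
            r, c, t = queue.popleft()
            # Accept the goal only if the agent can park there forever.
            if (r, c) == (gr, gc) and all(
                    (r, c, u) not in reserved for u in range(t, horizon)):
                end = (r, c, t)
                break
            if t + 1 >= horizon:
                continue
            for dr, dc in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                nxt = (nr, nc, t + 1)
                if not (0 <= nr < rows and 0 <= nc < cols):
                    continue
                if grid[nr][nc] == 1 or nxt in reserved or nxt in parent:
                    continue
                if (nr, nc, r, c, t) in edge_res:  # head-on swap
                    continue
                parent[nxt] = (r, c, t)
                queue.append(nxt)
        if end is None:
            raise RuntimeError("no conflict-free path within the horizon")
        path, node = [], end
        while node is not None:            # walk the parent chain back
            path.append(node[:2])
            node = parent[node]
        path.reverse()
        for t, (r, c) in enumerate(path):  # reserve the path...
            reserved.add((r, c, t))
            if t:
                pr, pc = path[t - 1]
                edge_res.add((pr, pc, r, c, t - 1))
        for u in range(len(path), horizon):  # ...and the parking spot
            reserved.add((gr, gc, u))
        paths.append(path)
    return paths
```

Because each agent plans around the reservations of its predecessors, the returned set of paths is collision-free by construction; production-grade multi-agent planners additionally optimise makespan or total travel time.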

CIIRC CTU for VISION
Pick-and-Place enriched with 3D reconstruction of objects

The application of computer vision provides the information needed to robustly pick objects using an inexpensive RGB camera. The Pick & Place robot processes camera data to detect paper packages of Lego bricks. It autonomously determines their exact location in 3D space and plans the robot’s trajectory. Based on the classification, it selects packages of different sizes and sorts them into boxes. Industry typically uses expensive 3D vision systems and encoders to understand the surroundings. The proposed solution uses just the RGB image, leveraging a vision technique that calibrates image space to real-world space via a homographic transformation using distinct markers. When the conveyor belt moves, an encoder attached to it provides the translation and speed information needed to pick the object further along the moving belt over time. A deep-learning algorithm detects, classifies and locates the objects in images, which are then tracked along the moving conveyor using the calibrated space and the speed/translation data. A state machine handles the detected objects in FIFO fashion and prepares the pick-and-place trajectory with respect to time.
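The marker-based calibration step can be sketched as follows: given at least four pixel-to-world correspondences from the distinct markers, a 3×3 homography is estimated with the direct linear transform (DLT) and then used to map any detected pixel into workspace coordinates. This is an illustrative sketch in plain NumPy, with made-up marker positions, not the demonstrator's code:

```python
import numpy as np

def fit_homography(img_pts, world_pts):
    """Estimate the 3x3 homography mapping image pixels to world
    coordinates from >= 4 correspondences (e.g. distinct markers on the
    conveyor), via the direct linear transform (DLT)."""
    A = []
    for (u, v), (x, y) in zip(img_pts, world_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    # The solution is the right-singular vector for the smallest
    # singular value of the stacked constraint matrix.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 3)

def pixel_to_world(H, u, v):
    """Apply the homography and de-homogenise the result."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

# Hypothetical calibration: the four image corners of a 640x480 camera
# view correspond to an 0.8 m x 0.6 m patch of the conveyor.
H = fit_homography(
    [(0, 0), (640, 0), (640, 480), (0, 480)],
    [(0.0, 0.0), (0.8, 0.0), (0.8, 0.6), (0.0, 0.6)],
)
```

A detected package centre in pixels can then be converted to metres on the belt, and the encoder's translation offset added to follow it as the conveyor moves.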

TAILOR
Sudoku Assistant – An AI-powered app to help
solve pen-and-paper Sudokus

The Sudoku Assistant is an AI assistant that can interpret, solve and explain pen-and-paper Sudokus scanned with a smartphone. It uses techniques from both machine learning and constraint programming, but goes one step further: it employs a new way of integrating the digit recognition and the reasoning more deeply, allowing it to solve significantly more scanned Sudokus correctly than previous approaches were able to.
Furthermore, it isn’t limited to merely solving Sudokus: using our state-of-the-art research on step-wise explanations for constraint satisfaction problems, it can guide the user through the solving process by giving small hints. This ensures that the AI system doesn’t ruin the fun by doing the solving for you, but can assist you when you are stuck.

The Sudoku Assistant hence demonstrates three concepts that are becoming increasingly important in AI research: the integration of learning and reasoning, explainable AI, and human-centred AI. And although the assistant is focused on Sudoku, the underlying techniques apply equally to other constraint-solving problems such as timetabling, scheduling and vehicle routing.
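The combination of systematic search with small, explainable steps can be illustrated in miniature: a backtracking solver plus a hint function that reports a "naked single", a cell with exactly one remaining candidate. This is only a toy sketch of the general idea, not the Sudoku Assistant's actual method:

```python
def candidates(grid, r, c):
    """Values not ruled out by the row, column and 3x3-box constraints."""
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return [v for v in range(1, 10) if v not in used]

def next_hint(grid):
    """Return (row, col, value) for a cell with exactly one candidate --
    the kind of small, explainable step a hint system can present."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                cand = candidates(grid, r, c)
                if len(cand) == 1:
                    return r, c, cand[0]
    return None

def solve(grid):
    """Backtracking search, branching on the most constrained cell."""
    empties = [(r, c) for r in range(9) for c in range(9) if grid[r][c] == 0]
    if not empties:
        return True
    r, c = min(empties, key=lambda rc: len(candidates(grid, *rc)))
    for v in candidates(grid, r, c):
        grid[r][c] = v
        if solve(grid):
            return True
        grid[r][c] = 0  # undo and try the next value
    return False
```

The real assistant goes much further, jointly reasoning over the digit-recognition probabilities and the Sudoku constraints, but the solver/hint split above mirrors the "assist rather than solve" design described here.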

RICAIP & CIIRC CTU for VISION
Delta robot in 5G environment

A delta robot is an example of a multi-axis positioning system with synchronised axes of motion. All axes are positionally controlled with feedback from absolute multi-turn encoders, while the interpolation of the robot's endpoint position takes place in the control PLC. This configuration provides full control over the processes performed by the robot, whether they relate to inverse kinematics, inverse robot dynamics, or the ability to run the robot's simulation model, the so-called digital twin, directly in the PLC. It is thus possible to detect deviations from the expected behaviour using machine-learning algorithms, or “just” to collect operational data and subject it to a more thorough subsequent analysis. The robot is also equipped with a handle for so-called manual guidance: in learning mode, the operator can demonstrate the desired movements by guiding the robot with a hand controller, and the robot thus learns the trajectories to be performed during normal operation.

Through the campus 5G SA network, the robot connects to an application server (edge server) and can thus use its high computing power for applications deploying neural networks, computer vision and other functionalities.
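The deviation detection described above can be illustrated with a minimal residual check between measured axis positions and the digital twin's predictions; the fixed-threshold rule below is a hypothetical simplification of what a learned anomaly model would do:

```python
def deviation_alarms(measured, predicted, threshold):
    """Flag time steps where any axis deviates from the digital twin's
    prediction by more than `threshold`.

    `measured` and `predicted` are sequences of per-axis position
    tuples sampled at the same rate; a real system would compare the
    residuals against a learned model rather than a fixed bound.
    """
    alarms = []
    for t, (m, p) in enumerate(zip(measured, predicted)):
        if max(abs(mi - pi) for mi, pi in zip(m, p)) > threshold:
            alarms.append(t)
    return alarms
```

In the demonstrator's setting, such residual checks would run against the twin simulated in the PLC, with flagged samples forwarded over the 5G link to the edge server for deeper analysis.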

Partners: T-Mobile, Siemens
