Joint Strategic Research Agenda (SRA)

The EU’s six Networks of Excellence Centres in AI and Robotics (NoEs) are providing a Joint Strategic Research Agenda (SRA).

The European Union’s aspirations for AI, Data and Robotics (ADR) that are “made in Europe” demand an ambitious approach to advancing European AI research and development. The EU’s six AI Networks of Excellence Centres in AI and Robotics (NoEs) – AI4Media, ELISE, ELSA, euROBIN, HUMANE-AI-Net, and TAILOR – are providing a framework for delivering these ambitions, by advancing the frontiers of AI, data and robotics research and its translation to real-world impact in different domains.

The joint SRA complements the individual Strategic Research Agendas (SRAs) that have already been published by AI4Media, ELISE, HUMANE-AI-Net and TAILOR.

High-level Purpose

  • Provide a light-touch overview of shared research themes from across the ICT-48 networks.
  • Convey that the Networks of Excellence are pursuing research and translation activities that have scientific and societal significance.
  • Illustrate the value of continued investment in Europe’s AI R&D ecosystem.

Research Challenges

Complementing the Strategic Research Agendas (SRAs) that have already been published by AI4Media, ELISE, HUMANE-AI-Net and TAILOR, the joint SRA provides an overview of the areas of research interest pursued across the networks. It highlights shared themes relating to eight main areas, plus one new area, each considered from the generative AI perspective:

  1. Building the technical foundations of safe and trustworthy ADR
    Generative AI affects all of the characteristics of trustworthy ADR, and there is a pressing need to develop generative AI systems that can deliver these characteristics.
  2. Integrating AI into deployed or embedded systems, including robots
    An opportunity to drive a fresh wave of AI deployment, enabled by a new type of interface with AI. In practice, generative AI makes each research topic more challenging – for example, trustworthiness, security, fairness, and robustness. Researchers, organisations using AI, and policymakers need practical strategies to tackle the limitations of today’s systems. Access to best practices will enable integration of foundation models while ensuring ethical standards.
  3. Enhancing human capabilities with collaborative AI and robotics
    New human-machine interfaces can enable collaboration. However, there is a risk that users mistake these systems’ language abilities for intelligence.
    Technology and design strategies are needed to manage the risk of users being given inaccurate information or being manipulated by these systems.
    Transparency standards can contribute to fostering trust and credibility while public literacy will help citizens feel more informed and confident.
  4. Accelerating research and innovation with AI and robotics
  5. Understanding interactions between ADR, social needs and socio-technical systems
    Concerns about social impact associated with technical progress: misinformation and elections; labour market concerns; psychological impact; risk of dual use; fairness and bias, and other societal and environmental risks.
    Power imbalance and dynamics from a multidisciplinary perspective, including economic, political, and legal but also in terms of infrastructure and funding.
    AI safety: how technical, legal, organisational, and other interventions can deliver safe and effective generative AI.
  6. Advancing fundamental theories, models, and methods
    Core AI methods and approaches provide the foundations upon which progress in generative AI is being built.
    Domain adaptation and fine-tuning of Large Models, neurosymbolic generative AI, as well as incorporation of external knowledge.
  7. Ensuring legal compliance of AI and robotics systems
    Generative AI’s compliance with existing regulatory frameworks.
    Exploring the applicability of obligations and transparency rules in different scenarios will bring legal certainty for AI developers, deployers and end-users.
    New challenges for existing copyright laws. Legal and moral concerns regarding large-scale exploitation of training data call for exploring explicit legal protection against unauthorized use of works for ML training, and the introduction of statutory licenses. In-depth analysis of these issues will improve fairness in copyright negotiations and bargaining power, and will stimulate creativity. Personality rights include the right to control the commercial use of a person’s appearance, artistic expression or voice.
  8. Advancing hardware for safe and energy efficient interaction between ADR technologies, humans and the environment
    Methods capable of training high-quality models using less computation, and of making inference possible on more resource-constrained devices.
  9. Building safe, reliable, and trustworthy foundation models (new)
    Ensuring foundation models reflect European values
    Creating an environment for AI R&D that benefits all in society
    Positioning Europe to lead the next wave of innovation in generative AI

Progress in each of these areas is delivered through NoE-convened research programmes that advance knowledge, understanding, and applications in specific areas, alongside NoE-led efforts to accelerate research, education, and knowledge transfer. Across their portfolios, NoE activities have engaged over 1,000 researchers and 100 industry organisations.
Their work has seeded an ecosystem of AI and robotics research and development activities that connect the EU’s ADR policy ambitions to real-world, on-the-ground benefits for citizens and businesses in local communities across Europe.

The NoEs’ achievements so far demonstrate that the diversity of ADR research in Europe can be a strength, shaping ADR deployment across multiple sectors to align with policy aspirations for ADR technologies.
However, amidst continuing international competition for ADR leadership – in terms of the ability to attract talent, shape research agendas, develop policy frameworks, and build novel AI-driven services and applications – increased investment in this European AI and robotics ecosystem is needed to deliver these aspirations.

Looking Ahead

The Strategic Research Agendas from each of the NoEs set out their visions for AI research and their plans to translate those research ambitions into technological progress. As the work of these networks develops, updates to these Strategic Research Agendas will provide insights into the frontiers of Europe’s AI capabilities.

International competition for AI research talent, for technical leadership, and for policy influence will continue to grow. Current debates about generative AI highlight what is at stake. In this environment, investment in the European AI landscape is crucial, if Europe is to have an influence on the pathways for technological development and the ways in which AI technologies shape society. Europe must be at the forefront of technological innovation – pursuing world-leading, excellent ADR research – while building an ecosystem that can translate that research to application, boosting the diverse strengths of local innovation ecosystems across the continent.

The NoEs’ first 24 months of operation have already generated collaborations, research insights, and practical case studies that are helping to advance AI R&D across Europe. These signals of success demonstrate how the NoEs are seeding an ecosystem that connects local capabilities to international priorities, drawing strength from the diversity of research interests and opportunities across the continent by connecting researchers and industry in local innovation environments that deliver on-the-ground benefits for citizens, businesses, and society. The breadth of these activities shows the opportunity for Europe to pursue an AI agenda that delivers real-world benefits for citizens and organisations. Sustained and increased investment in these networks is needed to reap the long-term benefits of the collaborations established via the ICT-48 programme.

Read the individual SRAs:

About the Networks of AI Excellence Centres (NoEs)

AI4Media provides a forum for researchers and practitioners with a focus on the media industry, responding to pressing concerns about the interaction between AI, the information environment, and wider society.

ELISE convenes leading researchers in machine learning, pursuing research to accelerate innovation and adoption of these technologies in ways that safely and effectively address real-world challenges.

HUMANE-AI-Net focuses on the development of AI systems that work alongside human users, leveraging AI capabilities to enhance human activities.

TAILOR brings together AI researchers with an interest in building the scientific foundations of trustworthy AI, integrating these methods across research and practice.

euROBIN gathers researchers dedicated to increasing the performance of robots through a holistic approach that combines increased cognitive capacities, enhanced learning performance, greater levels of interaction, and better suitability for users.

ELSA is spearheading research in foundational safe and secure AI methods, pursuing the development of robustness guarantees and certificates, privacy-preserving and robust collaborative learning, and human control mechanisms for the ethical and secure use of AI.