Currently open to scientific and industrial communities, the “digital common good” consists of an end-to-end methodology based on numerous open-source technology components. It aims to maintain the technological leadership of French companies by promoting the development of critical industrial applications with securely integrated trustworthy AI. It will bolster the competitiveness of national economic actors in the value chain of industrial and responsible AI. Confiance.ai has also announced the creation of a foundation to ensure its dissemination and sustainability.

On March 7th, 2024, at Confiance.ai Day, Confiance.ai programme founding members (Air Liquide, Airbus, Atos, Naval Group, Renault Group, Safran, Sopra Steria, Thales, Valeo, CEA, Inria, IRT Saint Exupéry and IRT SystemX) revealed the methodology and the catalogue of technological components developed in the past three years to increase trustworthiness in AI-based critical systems. Intended as an end-to-end guidebook for industries, the tool-based methods are a means to characterise and qualify the trustworthiness of a data-based intelligent system, in order to integrate it into industrial products and services. The methodology can be applied to any business.

Launched in 2021 and funded by France 2030, Confiance.ai is a cornerstone programme of the French national strategy for artificial intelligence, and a worldwide pioneer. Aimed at making France one of the leaders in industrial and responsible AI by developing a sovereign methodological and technological environment that is open, interoperable and durable, it furthers the integration of industrial (explainable, robust, etc.) and responsible (trustworthy, ethical, etc.) AI in strategic industries. Building on this momentum, it has created, in particular through several Calls for Expression of Interest (CEIs), a rich ecosystem of nearly fifty partners: laboratories and research institutes, start-ups, and large industrial groups. Furthermore, the programme is vital in implementing the AI Act in French and European sectors of industry, by working with industry-specific organisations.

With 2030 in mind, those involved in the programme are keen to maintain their leadership in industrial and responsible AI by pinpointing main upcoming technological barriers, and establishing a foundation to ensure dissemination, evolution, sustainability and use of the tool-based methods.

“France 2030 invested in Confiance.ai in 2021 with the aim of translating the excellence of our AI research into industrial leadership capability. The results are significant: the R&D projects that ensue enable us to build our industrial strategies under the best circumstances, but also to restore a climate of trust and acceptability around a technology that is structuring for our economy of tomorrow,” stated Bruno Bonnell, Secretary General for Investment, responsible for France 2030.

Opening the tool-based methods to the community

Programme members have been particularly successful in using a transversal approach within industries. Their end-to-end tool-based methods can address the same types of technological issues, regardless of context of application or industry.

During Confiance.ai Day, partners announced the opening of the tool-based methods and the open-source components to scientific and industrial communities. Assets can be accessed here: https://bok.confiance.ai/.

Components are divided into nine functional sets corresponding to specific engineering processes: end-to-end engineering, data lifecycle management, model and component lifecycle management, component evaluation, component rollout, operating system management, robustness, explainability, and uncertainty quantification.

Through the programme’s widespread adoption, partners aim to make their tool-based methods a global de facto standard.

Major industrial impact: a transition approach towards augmented engineering and integration of trustworthy AI

After three years in the making, the programme has enabled automotive, aeronautical, energy, defence and industry partners to rethink their engineering systems by factoring in data-based AI, and to further the use of AI-based functions in their critical systems.

Some examples of results in industrial use cases:

Programme partners put forth use cases in which to test applicability of the end-to-end tool-based methods. Testing was conducted directly within their engineering systems.

Air Liquide
Air Liquide used generative AI to improve the robustness and reliability of its automated bottle-counting models used for inventory purposes during adverse weather conditions (e.g. rain or snow). The company was able to reduce the number of counting errors during night shifts and to obtain precision rates higher than 98%. Thanks to data preprocessing that eliminated water droplets and snowflakes, and to night-to-day image transformation, the system processed data as if conditions were normal, with no additional training required. Optimal performance was the result of improved management of new data (study, visualisation, characterisation) and completion of training scenarios.

Thales
Thales is well aware of the need to review traditional engineering processes (algorithmic engineering, software engineering, systems engineering) given the need to integrate AI components into critical systems. The company became highly involved in establishing an end-to-end engineering methodology — a stringent and interdisciplinary approach compatible with business uses, whose design and validation would guarantee rollout, and safe and secure operating conditions.

This approach will ensure better flow across the entire AI-based critical system engineering chain. Take the use case “object of interest detection in aerial images”: Thales was able to verify the correctness of its algorithms, improve the quality of learning data through enrichment with synthetic images, and characterise, evaluate and monitor performance thanks to trust attributes and scores recommended by the end-to-end methodology. These initial steps are necessary for the industrial rollout of a learning component in a critical system.

Renault Group
Renault Group applied the Confiance.ai programme to a use case which entailed using an AI-based system to verify the quality of vehicle frame welding. Although feasibility had already been established, quality managers were reluctant to deploy the system in welding stations where quality was monitored by an operator, especially in cases of critical welding. The Confiance.ai programme’s methods and tools proved perfectly applicable; programme partners were greatly involved in using components to evaluate AI robustness, explainability and monitoring functions. This was the first time an end-to-end evaluation of the method was carried out. Confiance.ai tools and methods have come at a perfect time for Renault Group: its AI@Scale programme will provide the organisation and the human, material, software and methodological resources needed to accelerate and securely scale up AI across the group’s entire value chain.

Confiance.ai: driving development of a global trustworthy AI ecosystem

A pioneer in trustworthy AI, the Confiance.ai programme is driving the creation and leadership of a global ecosystem. The following are some examples:

  • Signature of Memorandum of Understanding (MoU):
    • In Quebec (Canada), in 2024, with Confiance IA, a programme that brings together private and public stakeholders to support industries in their need to industrialise and adopt robust, secure, sustainable, responsible and ethical artificial intelligence. Companies from different business fields study generic use cases to co-develop pre-competitive methods and tools to qualify and quantify the trust properties of the resulting AI. For over a year, the programme has collaborated with its French counterpart Confiance.ai. Further cooperation is likely in the coming months thanks to the sharing of use cases, methods and tools. The Computer Research Institute of Montreal (CRIM) is a trustee of the Confiance IA programme.
    • In Germany, in 2022, with VDE, one of the most important technological organisations in Europe, in order to create a future French-German responsible AI certification label.

An active participant in the recommendation of norms addressing the risks set forth in the AI Act, Confiance.ai and its tool-based methods help address the AI Act’s operational implementation. Regulatory requirements focus mainly on high-risk and systemic-risk systems, as well as on various trust aspects (robustness, explainability, maintaining human control, transparency, lack of bias, etc.). The Confiance.ai programme provides concrete elements — taxonomies, methodologies, technologies and tools — to further these regulatory goals.

“We are dealing with a particularly complex and demanding issue. Our results are in keeping with our goals, and remarkable in many respects. Take the human aspect. We have been able to get a hybrid group of people — industrials, scientists, data scientists, engineers — to work together. We have also overcome a great many science and technological challenges, more than we expected. And we have led numerous international initiatives. A truly global community focused on trustworthy AI is emerging”, explained Juliette Mattioli, Steering Committee President at Confiance.ai and Senior Artificial Intelligence Expert at Thales.

“We still need to overcome numerous scientific and technological challenges in order for France to maintain its competitive edge in the field; we are making a list. Technology transfer and research valorisation are priorities, as well as breaking scientific and technological barriers”, added Fabien Mangeant, Executive Committee President at Confiance.ai and Scientific Director Computing & Data Sciences Chair at Air Liquide.

2030 vision and outlook

Although the programme will end in late 2024, Confiance.ai partners are already looking ahead. They are focused on three main areas: sustainability, industrialisation and further exploration.

To start, ever-increasing advancements in AI reveal new barriers. Programme partners have identified several issues on which to base new R&T projects: hybrid AI, generative AI (e.g. LLMs), cybersecurity of AI-based critical systems, etc. Such projects will further enrich the tool-based methods for new fields of application.

Sustainability and dissemination of the tool-based methods are also to be considered. Partners are planning to create a foundation that will rally international members around a shared roadmap. It will also ensure the “digital common good” remains fully operational, and that feedback and improvements increase its level of maturity. Training opportunities, such as a Master’s programme in trustworthy AI co-designed with CentraleSupélec, will also drive maturity.

Finally, industrialisation of programme results will further boost maturity and ensure their use at a large scale in industrial engineering processes. The goal is to create, and make accessible, competitiveness tools that will take into account companies’ businesses, data and use cases.

“AI opens up extraordinary possibilities for society. From personalised healthcare to smart transportation and fighting climate change, AI has the potential to revolutionise many aspects of our lives. A revolution, however, requires trust. It is a prerequisite to its acceptance by society, and to the adoption of smart systems by citizens, companies and administrations. Growing public concern about the risks of highly advanced AI models, as well as the increasing number of initiatives regarding the international governance of AI, underscores the urgency. The Confiance.ai programme, vital to the national AI strategy, is a means for our industries to securely develop smart systems, and above all, to be prepared when the AI Act comes into effect,” commented Guillaume Avrin, Coordinator of the French National Strategy in AI.
