As part of France 2030, the French government has launched a Grand Challenge aimed at "Securing, certifying and making reliable systems based on artificial intelligence," led by the General Secretariat for Investment (SGPI) and funded by the Plan d'Investissement d'Avenir. AFNOR, the French standards association, has been mandated to "create the normative environment of trust accompanying the tools and processes for the certification of critical systems based on artificial intelligence." It recently published a strategic roadmap presenting six key areas for AI standardization.
Developing trusted AI is essential. To this end, the French government has committed €1.2 million, under the Future Investment Program (PIA) and the France Recovery Plan, to facilitate the creation of consensual and globally accepted standards.
Cédric O, Secretary of State for Digital Transition and Electronic Communications, declared:
“Trusted AI, i.e. AI for critical systems, is needed today in many fields such as autonomous cars, aeronautics or space.”
Launched by the SGPI, the Grand Challenge "Securing, making reliable and certifying systems based on artificial intelligence" aims to build the tools needed to guarantee trust in products and services incorporating AI; it also served as the technical framework for the European AI regulation proposal of 21 April 2021. The SGPI has developed an approach based on three pillars: research, applications and standardization.
This last pillar is entrusted to AFNOR, which brings together many players in the AI ecosystem, with the aim of creating synergies within France, with other countries through the International Organization for Standardization (ISO), and with other international consortia.
To structure the ecosystem, the association will set up a cooperation platform for French AI players, lead strategic standardization actions, and develop European and international cooperation.
A national lack of understanding of standardization
Not all French companies understand the strategic importance of standards, especially start-ups, SMEs and mid-sized companies (ETIs), which, being insufficiently integrated into standardization ecosystems, fail to grasp what is at stake. Economic players seem disinterested in standards, even though they are concerned about regulatory compliance.
Experts from the companies concerned contribute directly to drafting standards at the national, European and international levels; these standards will serve as technical support for European regulations.
This European regulation follows on from the Data Governance Act presented in November 2020, the GDPR in force since 2018, and the European Parliament's study of the role of AI in the Green Deal.
AFNOR’s roadmap
260 French AI players took part in the consultation conducted in the summer of 2021 to establish this AI standardization strategy. All companies in the ecosystem will be able to participate in the development of standards within the standardization committees.
Patrick Bezombes, chairman of the French standardization committee, affirms:
"The contribution is not reserved for large groups, quite the contrary. Start-ups and SMEs are an essential part of the ecosystem, and they must make their voices heard and give their point of view: the directions chosen will have a direct impact on them, right at the heart of their business."
The roadmap comprises six axes:
- Develop standards on trust
The priority characteristics to be standardized are security, safety, explainability, robustness, transparency and fairness (including non-discrimination). Each of these characteristics will be given a definition, a description of the concept, technical requirements, and the associated metrics and controls, with particular attention to security.
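To make the idea of "associated metrics" concrete, the sketch below computes a demographic-parity gap, one common fairness indicator of the kind such standards could formalize. The function name and data are illustrative assumptions, not part of the AFNOR roadmap.

```python
# Illustrative sketch: a demographic-parity gap, one possible fairness
# metric of the kind trust standards could formalize. Names and data
# are assumptions for the example, not normative values.

def demographic_parity_gap(outcomes, groups):
    """Absolute difference in positive-outcome rates between groups A and B.

    outcomes: list of 0/1 decisions produced by the AI system
    groups:   list of group labels ("A" or "B"), same length
    """
    rate = {}
    for g in ("A", "B"):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        rate[g] = sum(selected) / len(selected)
    return abs(rate["A"] - rate["B"])

outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(outcomes, groups)
print(f"parity gap: {gap:.2f}")  # rate A = 0.75, rate B = 0.25 -> gap 0.50
```

A standard would go further than such a raw number, specifying how the metric is defined, measured and controlled, which is precisely what this axis of the roadmap targets.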
- Develop standards on AI governance and management
All new AI applications carry risks: poor data quality, poor design, poor qualification. A risk analysis for AI-based systems is therefore essential, and companies will have to set up quality and risk management systems. Within the framework of ISO/IEC work, two standards are being developed:
– An AI quality management system: ISO/IEC 42001 (AI management system);
– An AI risk management system: ISO/IEC 23894 (AI risk management).
They could be imposed at the global level just like ISO 9001, which is now an international reference in quality management, and become harmonized European standards.
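As a minimal illustration of the kind of risk analysis such management standards formalize, the sketch below scores the three risks named above by likelihood and severity and ranks the resulting register. The scales and scores are arbitrary assumptions for the example.

```python
# Illustrative sketch of an AI risk register: score each identified risk
# by likelihood x severity, then rank. The 1-5 scales and the scores
# assigned here are assumptions, not values from any standard.

risks = [
    {"risk": "poor data quality",  "likelihood": 4, "severity": 3},
    {"risk": "poor design",        "likelihood": 2, "severity": 5},
    {"risk": "poor qualification", "likelihood": 3, "severity": 3},
]
for r in risks:
    r["score"] = r["likelihood"] * r["severity"]

# Highest-scoring risks first: these get mitigated first.
register = sorted(risks, key=lambda r: r["score"], reverse=True)
print(register[0]["risk"])  # "poor data quality" (score 12) tops the register
```

A standard such as ISO/IEC 23894 covers far more than scoring: identification, treatment, monitoring and documentation of risks across the AI lifecycle.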
- Develop standards on AI oversight and reporting
The role of humans, from the design of AI systems through to their use, is essential. It is therefore necessary to ensure that these systems are controllable: humans must be able to supervise them and take over at critical moments, when the AI leaves its nominal operating range.
Reporting processes will allow major incidents to be escalated and handled in real time, before they spread. In the event of incidents and accidents, audits will be conducted on the products and on the standards on which they are based.
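The oversight-and-takeover pattern described above can be sketched in a few lines: a supervisor passes the AI decision through only while the system stays inside its nominal range, and otherwise hands over to a human and logs the incident for reporting. All names and bounds are illustrative assumptions.

```python
# Minimal sketch of human oversight with takeover and incident reporting.
# The nominal range, function names and log format are assumptions for
# the example, not anything specified by the roadmap.

NOMINAL_RANGE = (0.0, 1.0)  # assumed nominal confidence range
incident_log = []           # reporting channel: incidents to escalate

def supervise(ai_confidence, ai_decision, human_decision):
    lo, hi = NOMINAL_RANGE
    if lo <= ai_confidence <= hi:
        return ai_decision
    # Out of nominal range: record the incident and hand over to the human.
    incident_log.append({"confidence": ai_confidence, "action": "human takeover"})
    return human_decision

print(supervise(0.9, "proceed", "stop"))  # in range -> AI decision "proceed"
print(supervise(1.7, "proceed", "stop"))  # out of range -> human decision "stop"
print(len(incident_log))                  # 1 incident recorded for escalation
```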
- Develop standards on the competencies of certification bodies
Certification bodies will need to recruit and train staff, and acquire the methods and tools to assess AI systems, in order to maintain confidence in increasingly technical products, processes and services. They will need to ensure that companies have processes in place to develop and qualify AI systems, and that products comply with regulatory and other requirements.
- Develop standardization of certain digital tools
Synthetic data produced by simulation supports the specification, design, training, testing, validation, qualification and auditing of AI systems. Simulation enables highly repeatable tests, making it easier to understand and explain certain behaviors of AI systems. As simulation is used more and more, new standards will be needed for the qualification of simulations and for the interoperability of simulations and objects (digital twins).
- Simplify access to and use of standards
A consultation platform will soon be opened and adjusted as needed.