The European Commission aims to facilitate and enhance cooperation on Artificial Intelligence (AI) across the EU, in order to boost Europe's competitiveness and ensure trust based on EU values. Following its strategy on AI for Europe, the Commission set up the High-Level Expert Group on AI (AI HLEG), which consists of 52 independent experts representing academia, industry and civil society. On 8 April 2019, the AI HLEG published its Ethics Guidelines for Trustworthy AI, which set out a non-binding EU framework paving the way for the ethical development of AI in Europe.
In other words, the guidelines constitute a roadmap for future rules and policy-making in the AI field, while promoting a "human-centric" approach that puts human well-being first, rather than treating the development of AI as an end in itself.
Among other measures, the AI HLEG proposes an Assessment List for evaluating whether AI systems meet the seven requirements of Trustworthy AI set out in the guidelines. The assessment list applies in particular to AI systems that interact directly with users, and is addressed primarily to developers of AI systems.
However, the proposed list is only a pilot version, which will be finalized following a feedback process involving stakeholders across the public and private sectors.
Based on all feedback received, the AI HLEG will propose a revised version of the assessment list to the Commission in early 2020.
As a reminder, the EU AI strategy aims to increase combined public and private investment in AI to €20 billion annually over the next decade, make more data available, foster talent and ensure trust. The EU's first-ever Digital Europe Programme will dedicate €2.5 billion to supporting the deployment of AI and the building up of additional capacities in this domain across Europe.
More details for FEDMA members will be provided in the next LAC newsletter.