
IFSEC Insider, formerly IFSEC Global, is the leading online community and news platform for security and fire safety professionals.
October 7, 2021


Artificial Intelligence

Position Paper on EC proposal for Regulation of AI released

Euralarm recently spoke with Gabriele Mazzini, lawyer of DG Connect, following the release of a Position Paper on the EC proposal for a Regulation on Artificial Intelligence.

First-ever legal framework on AI

The Artificial Intelligence Act is the result of a process of preparatory work that started in 2018, when a high-level expert group on artificial intelligence was appointed to advise the Commission. The overall aim of the proposal is to make the rules for the development and use of AI consistent across the EU and thereby ensure legal certainty. It also strives to encourage investment and innovation in AI and to build public trust that AI systems are used in ways that respect fundamental rights and European values.

In 2020 the Commission adopted a White Paper that was sent out for consultation and received more than 1,200 comments. That input helped inform the proposal for the harmonised regulatory framework on AI, which is based on the New Legislative Framework (NLF) approach.

The AI Act is the first-ever proposed legal framework on artificial intelligence.

Broad definition

The term AI was first used in the 1950s. Gabriele explains that, on the basis of a recent report by the Commission's Joint Research Centre, there are more than 50 definitions of AI, making the term extremely complex to pin down.

Considering that the great majority of EU countries are members of the OECD, as are a number of non-European countries, it was decided to take inspiration from the definition of AI adopted by the OECD in its Principles on Artificial Intelligence.

In the Artificial Intelligence Act, AI is defined in Article 3, supplemented by Annex I, which lists the software techniques and approaches covered: machine learning, expert and logic-based systems, and Bayesian or statistical approaches. A software product that uses these approaches and meets the definition in Article 3 will be considered AI for the purposes of the Act. The Act distinguishes three categories of AI uses: prohibited uses, high-risk uses, and systems posing transparency risks. The proposal does not regulate the technology as such, but specific uses of it. In general, the higher the risk, the stricter the rules (a risk-based approach).
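The risk-based approach described above can be sketched as a small classification routine. This is purely illustrative: the category names follow the article, but the example uses and the catch-all minimal-risk tier are assumptions made for the sketch, not the Act's actual enumeration, which is set out in its articles and annexes.

```python
from enum import Enum


class RiskCategory(Enum):
    PROHIBITED = "prohibited"      # banned outright, e.g. social scoring by governments
    HIGH_RISK = "high-risk"        # permitted subject to conformity assessment
    TRANSPARENCY = "transparency"  # permitted subject to disclosure obligations
    MINIMAL = "minimal"            # assumed catch-all: no new obligations under the Act


# Hypothetical use descriptors, for illustration only.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_USES = {"product_safety_component", "biometric_identification"}
TRANSPARENCY_USES = {"chatbot", "deepfake"}


def classify_use(use: str) -> RiskCategory:
    """Toy risk-based classification: the higher the risk, the stricter the rules."""
    if use in PROHIBITED_USES:
        return RiskCategory.PROHIBITED
    if use in HIGH_RISK_USES:
        return RiskCategory.HIGH_RISK
    if use in TRANSPARENCY_USES:
        return RiskCategory.TRANSPARENCY
    return RiskCategory.MINIMAL
```

The point the sketch captures is that the regulation attaches obligations to the use, not to the underlying technology: the same machine-learning technique can fall into any tier depending on how it is deployed.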

Prohibited uses of AI

Gabriele states that there are four main categories in regard to prohibited AI usage.

The first two relate to the manipulation of a person’s behaviour and the exploitation of a person’s vulnerabilities due to their age or physical or mental disability.

The third category covers uses that amount to social credit scoring by governments.

The fourth category relates to real-time remote biometric identification in publicly accessible spaces by law enforcement. An exception is made for certain time-limited public safety scenarios, such as serious criminal activities; it is up to the member states whether they want to make use of this exception. The AI Act is intended to apply as lex specialis with respect to the rules on the processing of biometric data contained in Article 10 of the Law Enforcement Directive. Other uses of remote biometric identification are already regulated by existing law, notably the GDPR.

High-risk classification

The AI Act also defines high-risk AI uses.

Gabriele explains that the AI Act considers an AI system high-risk if it is used as a safety component of a product that is covered by existing single market harmonisation legislation and the product is required to undergo a third-party conformity assessment.

These mandatory third-party conformity checks will incorporate the AI Act’s requirements once the legislation is passed. In addition, other specifically listed AI systems deployed in a number of sectors are also deemed to pose a high risk to safety or fundamental rights. The Commission can expand this list through a simplified process, without new legislation. The Act relies on member state regulators for enforcement and sanctions, with consistency ensured by a European-level board.

Fruitful cooperation with stakeholders

When asked how important the comments and suggestions of stakeholders have been for the drafting of the AI Act, Gabriele says that their contributions have been very important. It is crucial to consult and, as far as possible, maintain a consistent dialogue with stakeholders during the design of regulatory frameworks.

The proposal has now gone to the European Parliament and the Council, which will pursue their own debates. The parliamentary committees responsible for the preparatory work, as well as the Parliament itself, meet in public, so stakeholders can follow the discussions on the proposal and make sure that their voice is heard by the decision-makers.

