
SUPPORTING DIGITALISATION

OPINION: ARTIFICIAL INTELLIGENCE

Acting on AI

How to implement the EU’s AI Act without hindering competitiveness

Petra Hielkema

Chairperson, European Insurance & Occupational Pensions Authority (EIOPA)

It seems there is no limit to digital innovation. The speed and breadth of new ideas continue to astound, as do the changes they bring to every aspect of people’s lives. In insurance alone, artificial intelligence (AI) is widely used, from understanding policyholder preferences to claims handling and fraud detection, and today AI is very much part of the insurance value chain.

Not surprisingly, the digital transformation raises several questions for regulators and supervisors. First and foremost, how do we adapt our regulatory frameworks to match technological progress?

The European Commission’s ambitious digital agenda aims to address the risks generated by different aspects of digitalisation, seeking both to empower and to protect by ensuring that all digital players act responsibly and safely.

Furthermore, since technology is not restricted to any single sector, new legislation is increasingly cross-sectoral in nature. Examples include the Digital Markets Act, which seeks to curtail the growing market power of large digital platforms; the Data Act, which fosters the development of an innovative data economy; and the General Data Protection Regulation (GDPR), with which the EU pioneered rules that have inspired data protection legislation across the world.

The AI Act is also a landmark piece of legislation that seeks to establish new rules for the development, deployment and use of AI systems in the EU. It will apply to all sectors of the EU economy, including insurance.

Insurance: a highly regulated sector

EIOPA welcomes the legislative proposal and supports the objectives and principles of the AI Act to promote the ethical and trustworthy use of AI.

However, the AI Act faces the complex task of integrating its provisions into existing sectoral regulatory frameworks. Insurance is already a highly regulated sector, and the application of the AI Act is therefore likely to give rise to some friction.

Indeed, when insurance undertakings and intermediaries use AI today, they do not do so in an unregulated space. There are legally binding instruments at international, European and national level that apply to the use of AI in the insurance sector. These include EU primary law, as well as EU secondary law such as the Insurance Distribution Directive, the Solvency II framework or the GDPR.

In addition, the insurance sector has certain specific characteristics that deserve particular attention within any cross-sectoral legislative proposal. For example, actuaries play a role in the pricing and underwriting of insurance products that has no equivalent in other sectors. And certain data sets (for example, those relating to a person’s age or disability) that may not be used for pricing products in other sectors of the economy are permitted for insurance underwriting purposes.

EIOPA acknowledges the challenges arising from complex AI systems and the need to address them, and it stands ready to develop further guidance. However, we strongly believe that this should be done by building on the existing sectoral requirements for governance, risk management, conduct of business, and product oversight and governance.

“EIOPA supports the objectives and principles of the AI Act to promote ethical and trustworthy use of AI.”

Defining AI systems and levels of risk

The definition of AI systems in the Act also warrants further consideration. EIOPA is concerned that the current definition is too broad — notwithstanding the positive progress made in the text of the Council of the EU — and that it could potentially capture traditional mathematical models used in insurance, to the detriment of the sector. Take for example the models used to calculate the solvency capital requirements. These play an important role in ensuring the financial soundness and stability of insurance undertakings and are therefore already subject to supervisory approval. Given their prudential nature and the types of (non-personal) data sets used therein, they do not raise risks from a discrimination and fundamental rights perspective and should not, therefore, fall within the remit of the AI Act.

The question of high-risk AI use cases in the context of insurance also needs to be addressed. EIOPA’s view is that, at this stage, insurance AI use cases should not be categorised as high risk.

Within the Solvency II framework, notably in Delegated Regulation 2015/35, there are already detailed provisions in areas such as data quality, model validation, model calibration, documentation and record-keeping. These detailed requirements clearly overlap with those set out for high-risk AI applications in the AI Act, and such duplication should certainly be avoided.

EIOPA therefore calls for a narrowing of the definition of AI, focusing it on those systems, such as machine-learning approaches, that have features distinct from traditional mathematical models. In addition, the AI Act should explicitly clarify that models used for the calculation of capital requirements are not to be considered a high-risk use case.

“EIOPA’s view is that, at this stage, insurance AI use cases should not be categorised as high risk.”

Good governance

EIOPA also takes a particular interest in the governance of the AI Act. On the one hand, we welcome the fact that the Act recognises that the national competent authorities responsible for the supervision and enforcement of financial services legislation should be designated as competent authorities for the purpose of supervision.

On the other hand, EIOPA would like to see a greater role for European agencies such as itself in the new European Artificial Intelligence Board. We believe this is crucial given the Board’s tasks in further specifying the requirements of the AI Act. It is particularly relevant when considering the scope of activities included in Annex III of the AI Act, the annex that sets out which AI applications are required to comply with the Act. The participation of the relevant European agencies in the European Artificial Intelligence Board would foster better cooperation, information sharing and the monitoring of convergence and of a level playing field across Europe. EIOPA strongly believes it has an important role to play in representing the specific characteristics of the insurance sector, even more so if insurance AI use cases such as pricing and underwriting in life and health insurance are ultimately included in the Act’s list of high-risk AI use cases.

From EIOPA’s perspective, responsible use of AI by insurance undertakings and intermediaries is the best approach. The AI Act should certainly enable this, and EIOPA will continue to use its various supervisory and regulatory tools, and to work closely with stakeholders, to ensure that both the sector and policyholders can benefit fully from continuing technological advances.