October 27, 2020 – High-risk AI technologies are still rare, says the EU Parliament, but it already wants to create a legal framework for them: in a proposal for an EU regulation, it calls for operators of such systems to be obliged “to take out insurance based on the model of the insurance prescribed for motor vehicles”.
Legislators are responding to the increasing use of “intelligent” digital systems: last week, the EU Parliament (EP) adopted a corresponding resolution by a large majority.
The text, which was passed by 626 votes in favor, 25 against and 40 abstentions, contains a proposal for an EU regulation on liability for the use of artificial intelligence (AI).
“AI system” is defined here as a “system that is either software-based or embedded in hardware devices, and that displays behaviour simulating intelligence by collecting and processing data, analysing and interpreting its environment, and taking action, with some degree of autonomy, to achieve specific goals”.
“Greatest possible legal certainty throughout the chain of liability”
The EP argues that “the greatest possible legal certainty must be ensured throughout the chain of liability, including the manufacturer, the operator, the person concerned and other third parties”.
A balance must be struck between “protecting citizens on the one hand and encouraging companies to invest in innovation, especially in AI systems, on the other”.
Anyone who operates high-risk AI systems should have liability insurance
Given the “significant potential to cause harm”, “all operators of high-risk AI systems listed in the annex to the proposed regulation should have liability insurance,” said Parliament.
“High risk” means a “significant potential” of an autonomously operating AI system to cause personal injury or property damage to one or more persons “in a manner that is random and goes beyond what can reasonably be expected”.
Article 4: Strict liability for high-risk AI systems
1. The operator of a high-risk AI system is subject to strict liability for any personal injury or property damage caused by a physical or virtual activity, device or process driven by that AI system.
2. All high-risk AI systems and all critical sectors in which they are used should be listed in the annex to the regulation. […]
4. The front-end operator of a high-risk AI system shall ensure that operations of that AI system are covered by liability insurance that is adequate in relation to the amounts and extent of compensation provided for in Articles 5 and 6 of this Regulation. The back-end operator shall ensure that its services are covered by business liability or product liability insurance that is adequate in relation to the amounts and extent of compensation provided for in Articles 5 and 6 of this Regulation. If compulsory insurance of the front-end or back-end operator already in force under other Union or member-state law, or existing voluntary corporate insurance funds, can be regarded as covering the operation of the AI system or the service provided, the obligation to take out insurance under this Regulation is deemed fulfilled, as long as that existing compulsory insurance or voluntary corporate insurance fund covers the amounts and the extent of compensation provided for in Articles 5 and 6 of this Regulation.
Article 5: Amount of compensation for damage caused by high-risk AI systems
1. The operator of a high-risk AI system who is liable for personal injury or property damage under this Regulation is obliged to pay the following compensation:
a) up to a maximum of EUR 2 million in the event of the death of, or harm to the health or physical integrity of, an affected person as a result of the operation of a high-risk AI system;
b) up to a maximum of EUR 1 million in the event of significant immaterial harm resulting in verifiable economic loss, or of property damage, including where several items of property of an affected person were damaged by a single operation of a single high-risk AI system; where the affected person also holds a contractual liability claim against the operator, no compensation is to be paid under this Regulation if the total amount of the property damage or the significant immaterial harm is less than [500 EUR]*.
2. Where the total compensation to be paid to several persons who suffered personal injury or property damage caused by the same operation of the same high-risk AI system exceeds the maximum amounts specified in paragraph 1, the amounts payable to each individual are reduced proportionately so that the total compensation does not exceed those maximum amounts.
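The pro-rata mechanism in paragraph 2 can be illustrated with a short calculation (a minimal sketch; the function name and the claim figures are illustrative and not taken from the regulation):

```python
def cap_compensation(claims, cap=2_000_000):
    """Proportionately reduce individual claims so that their sum
    does not exceed the cap, as in the Article 5(2) mechanism."""
    total = sum(claims)
    if total <= cap:
        return claims  # cap not reached, claims paid in full
    factor = cap / total
    return [round(c * factor, 2) for c in claims]

# Three injury claims totalling EUR 4 million against the
# EUR 2 million cap of Article 5(1)(a): each claim is halved.
print(cap_compensation([2_000_000, 1_500_000, 500_000]))
# → [1000000.0, 750000.0, 250000.0]
```

Each claimant thus bears the shortfall in proportion to the size of their claim, rather than earlier claimants exhausting the cap.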
Liability for “other” AI systems
Article 8 of the draft contains rules for “other”, i.e. not high-risk, AI systems.
The operator of such an “other” AI system would accordingly be subject to “fault-based liability for any personal injury or property damage caused by a physical or virtual activity, device or process driven by the AI system”.
The draft regulation does not provide for compulsory insurance coverage in this context.
Stricter national legislation and consumer protection rules should, according to Parliament, remain unaffected by this regulation.
European Insurers Association skeptical
The European insurance federation Insurance Europe reacted to the proposal with skepticism: it considers the call for compulsory AI insurance premature. Director General Michaela Koller gave two reasons.
First, for compulsory insurance to work, there must be sufficient risk data, as in motor vehicle liability insurance, where underwriters can draw on decades of claims data. High-risk AI systems, by contrast, are still under development, and no comparable body of data exists.
Second, risks covered by compulsory insurance must be sufficiently similar in nature to be addressed with a “one size fits all” approach. Because AI is used in so many different ways, however, the risks it poses vary greatly from one situation to the next.
“Against this background,” says Koller, “we are of the opinion that Parliament’s call for compulsory insurance for high-risk AI would not work in practice.”
Commission to work with insurance sector
The EP also acknowledged the lack of risk data: combined with uncertainty about future developments, it makes it difficult for the insurance industry to bring adapted or new products to market.
Its conclusion: the EU Commission should “work closely with the insurance industry” to find out “how data and innovative models can be used to develop insurance policies that offer adequate coverage at an affordable price”.
“Improved access to data generated by new technologies and the best possible use of this data, combined with an obligation to provide adequately documented information” would, in the EP’s view, “make it easier for insurers to model emerging risks and encourage the development of more innovative protection modalities”.
In exceptional cases, state compensation funds should help
In exceptional cases, “for example in the event of a mass damage event in which the compensation significantly exceeds the maximum amounts set out in this regulation”, the EP believes that member states should be encouraged to set up, for a limited period, a special compensation fund to address such specific cases.
Special compensation funds could also be set up “to cover exceptional cases in which an AI system that has not yet been classified as a high-risk AI system, and is therefore not yet insured, causes personal injury or property damage”.
The resolution adopted by the EP, including the proposal for a regulation, can be retrieved from the European Parliament’s website.