Rethinking Liability Regimes for AI-Related Claims – Should Canada Follow Europe?

Daniel Dennett’s masterwork From Bacteria to Bach and Back: The Evolution of Minds is a fascinating account of the evolution of human minds from bottom-up natural selection to top-down intelligent design. Dennett points out that we are in the age of intelligent design (not to be confused with its namesake creationist propaganda), in which cultural evolution has become self-comprehending and ever more refined in its search methods. This is leading us into the age of post-intelligent design – of artificial intelligence and deep learning – which produces epistemological competences without comprehension.

Artificial intelligence (“AI”) systems know the what, but not the why, of what they do. The why is still traced back to the humans who create the system in the first place. But perhaps the more important socio-legal question is the how. As Dennett notes, creating something is no longer a guarantee of understanding it. We can now make things that do what we want them to do, but that are beyond our understanding (sometimes called “black box” science).

Towards the end of his almost-500-page tome, Dennett unequivocally advocates strict liability laws as a “much-needed design incentive: anyone who uses an AI system to make decisions that impact people’s lives and welfare, like users of other dangerous and powerful equipment, must be trained (and bonded, perhaps) and held to higher standards of accountability, so that it is always in their interests to be extra-scrupulously skeptical and probing in their interactions, lest they be taken in by their own devices.”

Dennett, of course, is no legal expert, and his suggestion is hardly unfamiliar to lawyers who understand liability regimes – but that does not take away from its relevance or importance. As human fault in or behind AI systems becomes increasingly futile to investigate, impossible to prove, or both, lawmakers around the world are veering towards liability regimes that are less concerned with fault.

European Union’s Proposed Directives on AI Systems and Product Liability

The most recent initiatives in this regard are the European Commission’s proposed Directives on (i) non-contractual civil liability for damage caused by AI systems (the “AI Directive”); and (ii) liability for defective products (the “Product Liability Directive”), both released on September 28, 2022.

AI Directive: Among the new rights the AI Directive seeks to give claimants harmed by AI systems is a presumption of a causal link between a defendant’s non-compliance with a duty of care and the output produced by an AI system (or its failure to produce an output) that gave rise to the damage. The defendant has the right to rebut the presumption.

For high-risk AI systems, the presumption will not apply if the defendant demonstrates that sufficient evidence and expertise are reasonably accessible to the claimant to prove the causal link. For non-high-risk AI systems, the presumption will apply only if a court considers it “excessively difficult” for the claimant to prove the causal link. Where the defendant uses the AI system in the course of a “personal non-professional activity,” the presumption will apply only if the “defendant has materially interfered with the conditions of the operation of the AI system or if the defendant was required and able to determine the conditions of operation of the AI system and failed to do so.” This is the first stage of the proposal. The second stage will involve a review of the effect of easing the burden of proof, including re-assessing the need to harmonize other elements of compensation claims, such as strict liability.

Product Liability Directive: The Product Liability Directive expressly confirms that AI systems and AI-enabled goods are “products.” Liability for defective products applies to all movables, including when they are integrated into other movables or installed in immovables – this catches software that stands alone or that is built into a device. The concept of defect covers cybersecurity vulnerabilities, connectivity issues and defects arising from software upgrades or updates. The Product Liability Directive also addresses the liability of economic operators who substantially modify a product, especially in the context of circular-economy business models. Such operators can be held liable, but may be exempted if they can prove that the damage relates to a part of the product not affected by the modification.

Compensation can be available when defective AI causes harm or damage, without the injured person having to prove the manufacturer’s fault (strict liability). Hardware manufacturers, software providers and providers of digital services can all be held liable for defective AI.

Canada’s Legal Landscape

In June 2022, the Canadian federal government introduced the Digital Charter Implementation Act, 2022 (Bill C-27), which includes the Artificial Intelligence and Data Act (“AIDA”). If passed, AIDA would be Canada’s first law regulating international and interprovincial trade and commerce in AI systems, especially to mitigate risks of harm and bias related to “high-impact artificial intelligence systems” – a term to be defined in regulations. AIDA provides for administrative monetary penalties (“AMPs”) for violating AIDA and its regulations, and fines for contravening its governance or transparency requirements. The purpose of the AMPs is to “promote compliance” and “not to punish.” AIDA also creates new criminal offences.

However, when it comes to non-contractual civil liability and compensation for damages, we must rely on tort claims under the common law (except in Quebec, a civil law jurisdiction governed by the Civil Code of Québec). This fault-based regime requires identifying the cause of the loss and imposing liability on the parties responsible for it. Typically, the tort of negligence is invoked for AI-related damage or harm. The constituent elements of a negligence claim are:

a. the defendant owed a duty of care to the plaintiff;

b. the defendant’s behaviour breached the applicable standard of care;

c. the plaintiff suffered compensable damages;

d. the damages were caused by the defendant’s breach; and

e. the damages are not too remote in law.

A duty of care is owed by manufacturers, distributors, software developers, retailers, resellers and other stakeholders in the AI supply chain.

In AI product liability claims, the burden of proof rests with the plaintiff to establish, on a balance of probabilities, that the AI system was defective, that the defect existed at the time the AI system left the defendant’s control, and that the defect caused or contributed to the plaintiff’s injury. The claim may relate to a manufacturing defect, a design defect, a failure to warn users of the AI system’s dangers, or economic loss for the cost of repairing a dangerously defective product. Canadian courts generally apply the “but for” test of causation: but for the actions of the defendant, the plaintiff would not have suffered the damage. Regulating product liability can fall within both federal and provincial jurisdiction, depending on the subject matter and the specific industry or sector.

Should Canada Follow Suit?

While Canadian tort law does not impose strict liability on manufacturers for defective products, Canadian courts have historically held them to a high standard of accountability. As a case in point, in Rowe v. Raleigh Industries of Canada, 2006 NLTD 191, the Supreme Court of Newfoundland and Labrador (Trial Division) observed: “Generally a high standard of care is imposed on manufacturers in cases of defect in manufacturing. This coupled with permitted inferences from circumstantial evidence has resulted in liability being imposed on manufacturers in the absence of precise evidence of how the manufacturing defect occurred. The proof of the presence of the defect and that the defect resulted in injury to the plaintiff permits the trial judge to draw an inference of negligence.”

Strict liability is not a novel concept in many European jurisdictions: if the plaintiff proves the causal link between defect and damage, the manufacturer is liable to pay compensation regardless of whether it was negligent or otherwise at fault. Closer to home, many U.S. states allow plaintiffs to claim under strict liability against manufacturers of products that are “unreasonably dangerous.” Given that the general principles of tort in Canada are governed by the common law, unless there is political will to legislate, strict liability would need to be embraced by the Canadian courts before it becomes an established principle. That does not seem to be on the horizon for now.

Either way, the government would do well to innovate with intermediary measures, such as a voluntary certification program for manufacturers and deployers of “high-impact” AI systems. Uncertified manufacturers, and certified manufacturers that breach the conditions of their certification, could attract strict liability for harm found to have been caused by their AI systems. The federal government has introduced certification programs in other areas of technology, such as cybersecurity (the CyberSecure Canada certification for small and medium-sized organizations), so such a program would not be a surprise.

AI systems are most often a complex combination of hardware, software, sensors, data, network features and more. The algorithmic function of an AI system (in simple terms, the “brain” behind its decision-making) does not qualify as a “product” under the current regime in Canada. Canada would do well to expand the meaning of “product” to include AI-enabled software, goods and services.

Just as privacy regimes developed worldwide – prompted primarily by the creation of huge databases of sensitive and other information about who we are, what we do and where we go – the law of AI systems, prompted by ubiquitous AI systems that make key decisions in our lives, is likely to develop worldwide, whether by regulation or by judicial fiat (or both). What is clear is that current laws did not anticipate AI systems or their powerful effects on our lives, and so the law must change.

Should you have any questions about AI-related claims, please contact a member of our Technology Group; we would be pleased to discuss them with you.