Artificial Intelligence in M&A Transactions: Ensuring Data Security and Compliance

Introduction

Artificial intelligence (“AI”) and associated algorithms increasingly underpin routine business functions and often form part of a company’s product or service offering. In June 2025, Microsoft found that 71% of small and medium-sized enterprises (“SMEs”) surveyed were actively using AI or generative AI for core operations; among digital-native firms, the rate reached 90%. Behind the scenes, AI models are responsible for making material business decisions, running equipment and offering services to third parties – including health-care decisions, dynamic pricing, forecasting, supply chain optimization, customer chats and routing, applicant screening, identity validation, fraud detection and marketing campaigns.

It follows that AI generally, and particularly as it relates to data security, is an important consideration when entering into most transactions, even when the target company is not developing a proprietary model. Best practice for businesses and counsel is to address AI and data security early on so they can identify, manage and mitigate AI-related risks as they arise.

This article discusses how AI introduces new data security and privacy considerations, and provides recommendations on structuring due diligence and drafting transaction agreements accordingly.

Expanding Risk Considerations

The integration of AI into core business functions requires consideration of all aspects of associated data security. In relation to AI, data security is a comprehensive concept consisting of information security such as data storage, transit and processing, security over data inputs and outputs, as well as system infrastructure and code integrity. Canada’s national cybersecurity agency has reported an increasing number of cybersecurity threats, affecting not only governments and critical infrastructure but also corporations and SMEs. The Business Development Bank of Canada estimates 73% of Canadian small businesses have experienced a cybersecurity incident.

Many businesses depend on extensive channels of third-party vendors and platforms outside of their direct control to process and store their confidential and/or sensitive information. Risks associated with such dependency are exacerbated if contractual and cybersecurity safeguards are weak and disproportionate to the degree of sensitivity associated with the type of data.

Further, AI introduces additional risk where companies depend on a third-party model that generates outputs or makes decisions based on false, incomplete or biased data. In many ways, this risk is similar to decisions made by humans, but the quantum and speed of AI decision-making compounds the risk of using flawed data.

A target’s use of AI tools, as well as how it responds to cyber incidents or breaches, can attract scrutiny from various stakeholders, including shareholders, regulators, customers, business partners and the broader public. The rise of AI-related class actions highlights an increasing focus on AI’s intersections with privacy, employment, competition and antitrust, and consumer protection laws.

Given this increasing complexity and scrutiny, it is important that a purchaser’s or investor’s due diligence accounts for AI and data security from the outset.

Structuring AI Due Diligence

Purchasers and investors should seek specifics about how the target and its vendors operate and use AI in practice beyond written policies. Ideally, AI and associated data security diligence lists include the following where applicable:

  • Complete inventory of AI models, including the type of AI (generative or otherwise) and all uses of and/or decisions being made based on such AI;
  • Information on all third-party licensors, suppliers and/or vendors to the target entity involving AI, including all related agreements and specifics about the data (e.g., what data is shared or exchanged, how data is used, data residency, who owns the inputs and outputs, etc.);
  • Data records, tracking and retention practices, including data origins and subsequent modifications;
  • Information about internal and third-party persons with viewing access, administrative access and how access is controlled;
  • Information technology (“IT”) and cybersecurity specifics (e.g., third-party audits and penetration test reports) as well as policies (e.g., response plans, data transfer and encryption protocols, etc.);
  • IT security certifications (e.g., NIST, SOC 2, ISO 27001, etc.);
  • Cyber incident or breach histories, including near misses and interactions with government regulators;
  • List of the target’s IT assets that access or use AI models (e.g., laptops, mobile devices, remote access systems, etc.);
  • Employee training materials on AI and data security; and
  • All other AI-related practices, policies and compliance programs (e.g., privacy policies, employee “acceptable use” and “bring your own device” policies, examples of actual employee use cases, etc.).

If the target builds or develops AI models, also include the following where applicable:

  • Inventory of all third-party vendor and code providers (including open source) and all related agreements/confidentiality clauses between them and the target;
  • Existing licences for third-party training datasets;
  • Information on all possible or actual sources of bias and how bias is mitigated, including in training datasets or the algorithm itself;
  • AI model road map to understand future updates and service offerings;
  • Privacy policies and practices, including information about users’ consent to the limited collection, use and disclosure of their data for the stated purpose by the target, and how the target documents withdrawals of consent;
  • Information about the target’s complaints process, including key personnel and policies on handling complaints/inquiries;
  • Information about ownership and liability for generative AI outputs; and
  • Additional IT and cybersecurity specifics, including information on data encryption, anonymization or de-identification (and protections against re-identification), backups and user verification methods (e.g., via multi-factor authentication).

It is also important to understand how the target’s insurance coverage may respond to risks arising from its use of AI and machine learning.

What to Seek in Transaction Agreements

Purchasers and investors will need to determine how important AI, and its associated risks, are to the overall transaction, and how those risks may be controlled and/or mitigated. As the transaction progresses, and once the parties understand the AI components and related risks, they should consider whether the transaction documents should reflect an adjustment to the purchase price, an increase in escrow/holdback amounts, representation and warranty insurance and/or indemnification clauses that specifically address AI-related risks.

1. Reps, Warranties and Indemnity Clauses

Counsel should include AI and data security representations and warranties, and potentially targeted indemnification clauses, in the transaction agreement that flow from the due diligence. These clauses give the interested parties a contractual basis on which to claim should an issue arise; equally, and perhaps more importantly, they can be used to flush out issues before closing. Indemnification clauses can be drafted to bolster the interested parties’ rights whether or not a claim arises from non-compliance with those representations.

AI-related representations and warranties can vary from simple and general in nature to more detailed and comprehensive. At a minimum, AI-focused representations and warranties should address the nature, ownership and/or licence rights to the AI model as well as inputs and outputs, use, data integrity (including addressing bias where appropriate) and security related to the AI. Interested parties should seek representations and warranties that are factual and focused on verifiable records, and may include that the target:

  • Maintains an IT and cybersecurity program with specific security standards and requirements (e.g., access restrictions, end-to-end encryption, etc.);
  • Discloses all material security incidents and provides any legally required notices to affected individuals, regulators and other persons;
  • Where appropriate, owns its data and AI outputs, and that no third party can use the target’s data inputs and outputs, or can do so only under a limited licence that:
    • prohibits use of the target’s data to train models available to other customers (i.e., a “closed” AI model);
    • requires prompt cyber incident and breach notification, regular audit rights and deletion on termination; and
    • if sublicensed, prescribes the same or better level of IT, cybersecurity and contractual protections as the main licence;
  • Has completed, at a minimum, all privacy impact assessments (“PIAs”) and transfer impact assessments where required, and has implemented the agreed recommendations; and
  • Has complied with all applicable laws in the jurisdictions in which it operates, and is obliged to notify the purchaser if regulatory proceedings are commenced against it in any jurisdiction.

2. Disclosure Schedules

The disclosure schedules should include a list and description of the following:

  • All AI inputs used in and material to the development, operation or improvement of any target AI, noting which AI inputs are owned or controlled by any other person; and
  • The contracts or other terms governing the target’s use of AI (including inputs and outputs).

Schedules should also distinguish between generative and non-generative AI.

Protecting Operations With Interim Covenants and Closing Obligations

If there is a period between signing and closing, use covenants in the purchase agreement to preserve business continuity as it relates to AI. In particular, an interested party will likely want to prevent the target from:

  • Launching new AI products or services without consent;
  • Disabling or downgrading key security controls; and
  • Amending existing vendor agreements and/or entering new agreements.

At closing, ensure the target provides an updated AI/data inventory and a list of outstanding regulatory matters, if any. Interested parties will also want to confirm that IT/cybersecurity certifications provided to the purchaser remain in force and have not expired or been revoked.

Post-Closing Risk Mitigation

Businesses can still take steps post-closing to mitigate their risk, even if doing so may not reduce exposure for pre-closing activities. Strategies include the following:

  • For a subject AI model or program, consider whether a substitute provider is available;
  • Create internal limitations on data inputs and segregate data accordingly; and
  • Conduct regular audits of third-party vendors and AI programs.

Looking Ahead

AI will continue to shape day-to-day operations for businesses, making data security, compliance and third-party vendor reliability central to M&A transactions. The smoothest deals will be those in which risks are mapped early, addressed openly and allocated with precision.

The Privacy & Data Security Group at Aird & Berlis LLP frequently advises on every aspect of complex privacy and data security matters, including transactions, commercial relationships, litigation, regulatory concerns and emerging technologies. Please contact the authors or a member of the group if you have any questions concerning any AI-related considerations in your contemplated or ongoing M&A transactions.