
Legally Mandated Security: Insights on Artificial Intelligence

This is the fourth and final instalment in a series on the subject of legally mandated security. I promised in previous articles to address the EU’s new Artificial Intelligence Act and NIST’s ideas on artificial intelligence. Below, I tackle both issues and also briefly address Canada’s proposed Artificial Intelligence and Data Act.

A Little About Artificial Intelligence (AI)

AI is a little like the weather: everybody talks about it but no one knows what to do about it or how to prepare for it. However, any discussion of AI has to centre on the legal and regulatory environment – both existing and proposed – and on such things as the societal impact of AI, the reliability and use of AI-generated data, the appropriate accountability framework and the management of AI systems. These elements overlap, of course, making thinking about AI difficult.

In short, AI is the capability of a machine to perform complex tasks commonly associated with intelligence (hint: people), including observing, reasoning, discovering meaning, generalizing from facts and observations, and learning from experience.

There are several flavours of AI. We have all bumped into them in our everyday lives if we use computers or smartphones. Among these is the Large Language Model (LLM), of which OpenAI’s ChatGPT is the best-known example. LLMs are models “trained” on huge text datasets to perform Natural Language Processing (NLP) tasks, including Natural Language Understanding (NLU). LLMs can summarize text, translate languages, infer information from context, respond to questions and generate new text. We increasingly encounter LLMs when we interact with chatbots, and virtual assistants like Siri, Alexa, Cortana and Google Assistant are incorporating them as well.
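To make this concrete, here is a minimal sketch of asking an LLM to summarize text, using OpenAI’s Python client as one illustration. The model name and prompt are assumptions of mine, and any comparable hosted or local model could be substituted.

```python
# Minimal sketch: asking an LLM to summarize a passage of text.
# Assumes the openai package is installed and an API key is set in
# the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice; any capable LLM would do
    messages=[
        {
            "role": "user",
            "content": "Summarize this paragraph in one sentence: ...",
        },
    ],
)
print(response.choices[0].message.content)
```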

There is also “generative” AI (GAI), which can produce text, images and other data by learning the patterns and structure of its training data and then generating new output of a similar nature.

Other kinds of AI will help you drive your car, avoid accidents, predict the weather, pilot an aircraft and even help you plan dinner. (Me: “Siri, what should I have for dinner?” Siri: “I don’t know. What do you want?”)

Whatever kind of AI is currently in use, and whatever the task, it falls into the category of “Narrow AI” or “Weak AI.” There is currently no Artificial General Intelligence (Strong AI) such as that of which Isaac Asimov wrote. Beyond that lies the possibility of Artificial Superintelligence, which, fortunately, also doesn’t exist yet. Such a superintelligence would outperform human beings in just about every way, and might even become self-aware.

Canadian AI Legislation Not Yet in Force

Through the Digital Charter Implementation Act, 2022, Canada’s federal government has proposed the Artificial Intelligence and Data Act (“AIDA”), which is not yet in force. If AIDA becomes law, it will take a risk-based approach to regulating the responsible design, development and deployment of AI systems in Canada’s private sector. AI systems will be required to be safe and non-discriminatory, and their developers and implementers will be accountable for their use. As a constitutional matter, AIDA will address international and interprovincial private sector activities, but each province may – and likely will – enact its own AI laws.

AIDA is still in flux and, at the time of writing this blog, is before the parliamentary Standing Committee on Industry and Technology for study and stakeholder input – most importantly, the input of the federal privacy commissioner, who has already called for AIDA to focus more on protecting fundamental privacy rights.

U.S. Enacts AI Legislation, President Gives Directive

The United States has enacted the National Artificial Intelligence Initiative Act of 2020, with the aim of encouraging AI research, development and education. Its scope is limited to the federal government’s own AI initiatives.

Additionally, President Joe Biden issued an Executive Order to promote safety in the use of AI, focusing on security, privacy, non-discrimination and innovation. Under this order, developers of the most powerful AI models must report their safety test results to the U.S. Department of Commerce. The order also mandates risk assessments and reporting whenever foreign enterprises or governments use U.S. cloud services to train their own AI models.

Europe’s AI Governance: A Comprehensive Framework

Europe is leading the world in regulation of AI, as part of its overall digital transformation strategy. That strategy aims to help businesses adopt digital technologies that benefit society. Those technologies include digital platforms, the Internet of Things, cloud computing and, centrally, AI. The hope is that these technologies will optimize production, be better for the environment, improve competition and bring benefits to consumers.

The EU’s Artificial Intelligence Act (the “AI Act”) is part of the European digital strategy. Like Canada’s proposed approach, it creates legal obligations for both AI providers and AI users that depend on the level of risk of the particular activity. Significantly, the AI Act deems certain risks “unacceptable” for AI: cognitive behavioural manipulation of people or of specific vulnerable groups, social scoring of the kind developed in China to control behaviour, biometric identification and classification of people, and real-time remote biometric identification systems.

However, certain uses of AI, even though classified as “high risk,” are permitted but highly regulated. For example, AI used in any product that is otherwise subject to EU product safety regulation is automatically regulated. Other uses of AI have to be registered: critical infrastructure management, education and training, employment and worker management, access to self-employment, access to public services and essential private services (e.g., health plans), law enforcement, immigration and legal interpretation tools.

As for “generative” AI, the AI Act mandates transparency requirements. A person using GAI would have to disclose that content was generated by AI. Additionally, the model would have to be designed to avoid generating illegal content, and its provider would have to publish summaries of the copyright-protected data on which the GAI was trained.
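To illustrate the disclosure obligation, here is a minimal sketch of how a deployer might attach a plain-language notice to generated content before publishing it. The wording and the helper function are my own assumptions, not text prescribed by the AI Act.

```python
# Minimal sketch: prefixing AI-generated content with a disclosure notice.
def with_ai_disclosure(generated_text: str) -> str:
    """Prefix AI-generated content with a plain-language disclosure."""
    disclosure = "Notice: the following content was generated by an AI system."
    return f"{disclosure}\n\n{generated_text}"

print(with_ai_disclosure("Here is a summary of the document you uploaded..."))
```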

Finally, “limited risk” AI will have to comply with lighter transparency requirements, sufficient to allow users to make informed decisions and to ensure that users know they are interacting with AI. This category includes AI systems that generate or manipulate images, audio or video content, such as so-called “deep fakes.”

NIST’s Role in AI Risk Management

NIST (the U.S.’s National Institute of Standards and Technology) has examined AI closely and has developed a risk management approach to dealing with it. The first arrow in NIST’s AI quiver is the NIST AI Risk Management Framework (“AI RMF”). The AI RMF aims to incorporate trustworthiness into the design, development, use and evaluation of AI. While it is a voluntary program, there is little doubt that it will become influential in the AI industry, just as the NIST Cybersecurity Framework and the NIST Digital Identity Guidelines have become the gold standard in their respective areas.

NIST has also set up the Trustworthiness and Responsible AI Resource Center, which provides practical resources for AI developers and users. NIST aims to promote international alignment on trust in AI.

NIST has expressed great concern about security threats and untrustworthy data arising from the use of AI. Most AI systems rely on huge amounts of data. If the data are untrustworthy or compromised by security threats, then the AI will not function correctly and, in particular, cannot be free of bias. NIST promotes protecting AI systems against biases that can cause harm and erode trust.
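As a simple illustration of the kind of data check this concern points toward, here is a minimal sketch that inspects whether any group is under-represented in a training dataset before a model is trained on it. The dataset, field names and threshold are all hypothetical.

```python
# Minimal sketch: checking group representation in training data
# before it is used to train a model.
from collections import Counter

# Hypothetical training records; a real dataset would have thousands of rows.
training_records = [
    {"applicant": 1, "region": "urban", "outcome": "approved"},
    {"applicant": 2, "region": "rural", "outcome": "denied"},
    {"applicant": 3, "region": "urban", "outcome": "approved"},
]

counts = Counter(record["region"] for record in training_records)
total = sum(counts.values())

for group, count in counts.items():
    share = count / total
    print(f"{group}: {count} records ({share:.0%})")
    if share < 0.10:  # illustrative threshold, not a value NIST prescribes
        print(f"  warning: '{group}' may be under-represented")
```

A check like this is trivial on its own, but it captures the point NIST is making: if the data going in are skewed, the model coming out will be too.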

Conclusion

AI is here to stay. Its benefits and dangers are obvious. Both international regulation and education of the public in AI will be needed to keep it from causing harm. If the correct balance of education, regulation and adoption is implemented, AI will be a real boon to society.

We’ll see.

That’s the end of this mini-series. Thanks for reading!
