3 OECD AI Safety Principles with Privacy Implications that Canadian Startups Need to Know
This project was one of the top submissions on the (Dec 2024) Writing Intensive course.
The 3 First Liabilities AI Developers will Be Sued For, and How to Get Ahead Using Privacy Law
By: Dylan Roberts
Executive Summary
What does the future hold for artificial intelligence (AI) liability?
According to privacy law, this: AI developers will need to
- Proactively secure its infrastructure,
- Explain its decisions, and
- Be held accountable for those decisions.
Privacy law will show them just how to do all that using well-established privacy principles. In Canada, the principles of security, transparency, and accountability have reigned supreme in the digital realm since the early 2000s. Given their importance in digital tech, and their overlap with other AI Safety principles - like those released by the Organisation for Economic Co-operation and Development (OECD) - AI developers should expect them to become the foundation for the first liability lawsuits. Learning what these principles entail, and how to adhere to them, will help shield developers from future payout
Disclaimers
This is not legal advice
This writing is in no way legal advice. It’s just a qualitative exploration of principles.
This discussion applies to all of Canada
These principles should apply to all provinces. Many sectors don’t need to abide by the Personal Information Protection and Electronic Documents Act (PIPEDA) because their province has legislation that is "substantially similar." That’s true for Alberta, British Columbia, and Quebec’s private-sector, as well as any bodies dealing with personal health information in Ontario, New Brunswick, Nova Scotia, and Newfoundland and Labrador. But, what makes these laws “substantially similar” is that they adhere to the same overarching principles. So, while the principles discussed here are rooted in PIPEDA, the analysis transfers to all provinces, including in the private and healthcare sectors.
How this post came about
This post was written as the final assignment for BlueDot Impact’s Writing Intensive.
Introduction
AI represents a dynamic and evolving field. To illustrate:
-
- In May 2024, the Organisation for Economic Co-operation and Development updated their AI Safety Principles (to which Canada is a signatory);
- In November 2024, Innovation, Science and Economic Development Canada launched the Canadian Artificial Intelligence Safety Institute; and
- In January 2025, Canada’s Treasury Board Secretariat will close submission for the fourth review of the Directive on Automated Decision-Making
Each of these milestones marks the nascency of an AI regulation ecosystem. In this ecosystem, AI developers will be enticed, or compelled, to adhere to standards. But, how to get ahead?
The 3 Insights
Thankfully, adherence to norms and standards is nothing new in the privacy sector. PIPEDA, Canada’s main private sector legal text, has been around for over 20 years. Its 10 Fair Information Principles have never changed.
Best of all, they inform what developers should expect from the OECD’s principles:
3 OECD AI Safety Principle | 3 PIPEDA Insights |
Robustness, Security, and Safety | Security measures will need to reflect the latest standards, regardless of industry norms, to avoid liability. |
Transparency and explainability | Protecting sensitive information will make it impossible to be transparent about some decisions to the public. |
Accountability | In the future, developers will need a designated person to assess, monitor, and intervene to secure their product. |
Let’s dig in.
#1: Robustness, Security, and Safety
The OECD’s robustness, security, and safety principle reads as follows:
AI actors, should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on an ongoing basis and adopt responsible business conduct to address risks related to AI systems. - OECD, AI Privacy Principles, “Security”
PIPEDA echoes this principle in their “security” principle. Organizations need to conduct privacy impact assessments and threat analyses. That means:
- Scanning existing data supply chains for vulnerabilities,
- Planning appropriate protections for new initiatives, and
- Staying on top of the threats and opportunities posed by emerging technologies.
The 2007 TJX breach illustrates the consequences of failing to meet these expectations.
A Security Case Study: The 2007 TJX Breach
In 2007, TJX got hacked. The hacker made copies of over 300 Canadian credit card numbers, names, addresses and telephone numbers. TJX’s reliance on an outdated internet encryption system. This system was already known to be inadequate since 2003—4 years ago. But, the retailer has already begun the process of installing a better encryption methodology—Wi-Fi Protected Access (WPA)—at a time when most competitors did not have these protections.
So: did it matter that TJX was ahead of their competitors?
No. The Office of the Privacy Commissioner declared that being ahead of competitors was irrelevant when considering whether protections are adequate.
The lesson for AI developers: security measures must reflect the latest standards to mitigate liability, irrespective of industry norms. These standards are rapidly evolving, and as of yet, there is no consensus on the ‘best’ approach to training, only newer ones.
(For example, Anthropic used artificial intelligence to make artificial intelligence smarter by training their model to value answers that adhere to a predetermined set of values. While this approach - called Constitutional AI - outperforms models that are trained on human feedback in safety tests, it is still imperfectS.)
So, as AI systems grow more complex, organizations must stay ahead of known vulnerabilities, such as jailbreaks, to limit their liability in case of a security breach.
#2: Transparency and explainability
Transparency and explainability are equally important principles in AI governance. The OECD advocates for providing accessible and meaningful information about data sources, processes, and decision-making logics so that affected individuals can understand AI outputs.
Specifically, this principle reads:
AI Actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context…
consistent with the state of the art…
plain and easy-to-understand information on the sources of data/input, factors, processes and/or logic that led to the prediction, content, recommendation or decision, to enable those affected by an AI system to understand the output.
- OECD, AI Privacy Principles, “Transparency and Explainability”
PIPEDA reinforces this principle by granting individuals the right to access their personal information. But, this right is limited when disclosure risks compromising third-party data, particularly when dealing with sensitive information.
The tension between transparency and privacy is especially salient in healthcare. Consider the following thought experiment.
Thought Experiment: AI Patient Triage
AIs are already being used to triage medical patients. But, we know that many AIs in healthcare are biased, deprioritizing those who need more help.
Now, imagine an AI puts someone last in line to see the doctor. The OECD principle suggests patients are entitled to an explanation. After all, being last in line can prolong their suffering; or, if we’re in a pandemic, it could kill them.
So: what happens if someone asks for an explanation as to why the AI put them last in line?
Maybe nothing. Privacy laws may block clinics and hospitals from disclosing meaningful explanations, since details about the AI’s reasoning risks revealing third-party, sensitive, personal health information.
But, here’s another problem:
If ever the AI prioritized someone for the wrong reasons, and someone was hurt because of it, then are AI developers on the hook?
TBD. The jury is still out on these issues, especially in Canada.
Regardless, developers will want to be prepared — one day, they may have to prove their innocence in Canada. Some academics have asked that the burden of proof switch to AI companies to prove they did everything they could to prevent harm. The EU’s new product liability directive is halfway there: plaintiffs don’t have to prove an AI caused harm if the evidence is too complicated or unavailable to do so.
To prepare for this eventuality, developers will want to be able to explain why and how their AI followed the Hippocratic oath. But to explain the AI’s decision, they will need to access users’ health data, either for a court of law’s examination or to improve interpretability and mitigate bias. While the burden of protecting sensitive information is heavier than letting the AI do its own thing, it will be worth the investment before scaling up.
#3: Accountability
The last principle to be mindful of is the OECD’s Accountability principle. This means taking a proactive and systemic approach to risk management regularly. Specifically, the principle reads:
AI actors, should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on an ongoing basis and adopt responsible business conduct to address risks related to AI systems. - OECD, AI Privacy Principles, “Accountability”
PIPEDA’s echoes this principle with its own. The ensuing mandate includes:
- Identifying your organization’s designated privacy official, and
- Communicating that person’s name or title internally and externally (e.g. on your website or in publications).
Moreover, that person has to have the institutional support to intervene on his/her privacy issues. That means having the leeway to conduct a privacy impact assessment and threat analysis of the organization’s personal information handling practices, including ongoing activities, new initiatives, and new technologies.
The future: AI compliance
Think of SOX compliance. This regime requires senior executives to sign off on detailed accounting statements every year. It arose in the early 2000s after a series of accounting scandals (e.g., Enron). People did not want to repeat the past, so they forced companies to be accountable for any practices that could reasonably hurt others in the future.
The same will happen when AI products face scandals for misuse or misalignment. Regulators will require company representatives to sign off on their safety practices. When reporting becomes a requirement, developers at the AI frontier will need specialists to complete the paperwork clearly, completely, and convincingly enough to leave them alone.
The future of AI compliance, then, will ask that these specialists do for AI what designated persons do under PIPEDA now. Like PIPEDA, that person will need to be able to assess, monitor, and intervene to secure the startup’s AI. To be effective, that person, and their team will need to conduct risk assessments to keep the company updated on the newest technologies.
Conclusion
The future of artificial intelligence will demand that organizations take proactive steps to bridge the gap between privacy law and AI. Just as privacy law provides a framework for navigating regulatory trends, companies must develop expertise that integrates privacy principles with AI systems. To succeed, they’ll need to prioritize transparency, accountability, and security, ensuring their AI aligns with both legal standards and societal expectations. Experts who can speak to this balancing act will be in very high demand, and very soon.