
Bill 194, Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024

While Bill 194 introduces some welcome upgrades to Ontario's cybersecurity and privacy legislation, it falls short of delivering in several key areas, particularly protecting employees' privacy.

This post intends to contribute to the public debate on what could have been a significant piece of legislation - Bill 194. It is not a summary of Bill 194. I am not a lawyer, and this is not a legal analysis. The post below draws on my experience as a privacy and data protection expert and my understanding of current standards and legislation. I will provide an overview of the bill's scope, goals, and provisions and assess its potential to enhance Ontario’s cybersecurity and respect the privacy of Ontarians.


Bill 194, Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024 (the Bill), was introduced in the Ontario Legislature for first reading and passed on May 13, 2024. It has been ordered for its Second Reading. Bill 194 has been introduced in the current context of the ongoing evolution of cybersecurity and privacy threats and the explosive growth of artificial intelligence. The Bill is, therefore, not surprising in what it is intended to address:

The Act addresses cyber security and artificial intelligence systems at public sector entities. Public sector entities are institutions within the meaning of the Freedom of Information and Protection of Privacy Act and the Municipal Freedom of Information and Protection of Privacy Act, children’s aid societies and school boards. (See explanatory note.)

It is worth noting that the Bill does not make amendments to the Municipal Freedom of Information and Protection of Privacy Act - MFIPPA (the sister act to the Freedom of Information and Protection of Privacy Act - FIPPA). Hopefully, this can be addressed as the Bill goes through the legislative process.

It must be said that if one of the government's objectives in the Bill were to improve cyber security and privacy protections for Ontarians, this would have been a golden opportunity to introduce private sector legislation - a made-in-Ontario solution that could supplement and extend the protections offered by Federal legislation and ensure that Ontarians have robust and equivalent protection in both the public and private sectors. In particular, the government of Ontario's failure to protect employees' privacy is a long-standing issue highlighted by the gaps in this legislation. I note that the current Federal private-sector privacy law is due to be superseded by the contents of Bill C-27, but that is not part of this post.

Employees in Ontario do not have legislation that protects their privacy in either the public or the private sector. Public sector privacy protections were removed in 1995, making Ontario unique among Canadian provinces in that it does not protect the privacy of civil servants at work. And because employment falls under provincial jurisdiction, Federal private-sector privacy legislation does not protect employees in Ontario either.

Ontario-based employees in the federal public sector or employed under the federal labour code (in entities like banks, for example) have privacy protection under federal legislation. Still, those are estimated to be fewer than 500,000 of Ontario's nearly 8 million employees, or slightly more than 6%. In the private sector, employees under collective agreements will have privacy protection based on arbitral jurisprudence and the specifics of their contract, but that accounts for less than 14% of private sector workers. I derived these numbers mainly from searching available Statistics Canada and other online sources.

TL;DR — employees in Ontario are the least likely to have privacy protection at work compared to other provinces or territories.

The Bill

The Bill has two significant elements. Schedule 1, “Enhancing Digital Security and Trust Act,” addresses cyber security issues, the use of artificial intelligence systems, the impact of digital technology on children, and some general provisions, all of which will be addressed below. Schedule 2, “Freedom of Information and Protection of Privacy Act,” amends the Freedom of Information and Protection of Privacy Act, RSO 1990, c F.31. Bill 194 is 51 pages long. From a content perspective, that is about 17 pages in English, with a matching section in French. If you think, "This seems a bit perfunctory, given the complicated nature of cyber security, digital protection of children, and privacy," you would be right. It seems to me that the entire bill could be summarized by saying that the government recognizes the importance of these issues and will, therefore, write and implement regulations sometime in the future to deal with them. "Just trust us and pass the bill." When you compare this to the 4 years of discussion that went into creating the 458-page EU Artificial Intelligence Act, it comes up short, literally and figuratively. Closer to home, Bill C-27, which includes the Artificial Intelligence and Data Act, is 148 pages (or 74 pages in English) but is accompanied by more than 100 civil society, industry, and expert submissions on the provisions and issues of the bill.

Schedule 1, Enhancing Digital Security and Trust Act

The following describes some of the more significant elements of this part of the Act: Definitions (s. 1), Cyber Security (s. 2 - 4), Use of Artificial Intelligence Systems (s. 5 - 8), Digital Technology Affecting Individuals Under Age 18 (s. 9 - 11), and some concluding general sections.


The Bill adds a definition of artificial intelligence that appears to be derived, at least in part, from the definition of an AI system in Article 3 of the EU Artificial Intelligence Act. (An easier-to-use reference than the official text can be found in the AI Act Explorer prepared by The Future of Life Institute.) It may be summarized as any system that infers from input how to generate outputs to accomplish explicit or implicit objectives. An AI chatbot is an example of a system that meets this definition. A sample of definitions that are included in the AI Act but not in this Act:

  • reasonably foreseeable misuse
  • safety component
  • training data
  • input data

It is good that the Bill includes procured services and systems as a "use" of artificial intelligence systems. Still, much of the success of this approach will be determined by the nature of the due diligence in Ontario Public Service (OPS) procurement requirements for AI and machine learning systems. Another positive inclusion is that the handling of digital information includes collection, use, retention or disclosure by a third party. This will help ensure that accountability remains with the originating government institution.

Cyber Security

This part of Bill 194 boils down to a requirement for the government to make regulations governing cyber security, including s. 2 (1):

  1. requiring public sector entities to develop and implement programs for ensuring cyber security;
  2. governing programs mentioned in clause (1), which may include prescribing elements to be included in the programs;
  3. requiring public sector entities to submit reports to the Minister or a specified individual in respect of incidents relating to cyber security, which may include different requirements in respect of different types of incidents;
  4. prescribing the form and frequency of reports.

In the absence of a public consultation on the content and purpose of the governing regulations, there is no assurance that the regulations that will be promulgated will meet diverse stakeholder needs, nor that they will be effective in producing the desired effect of protecting security. While section 3 allows the government to make regulations setting technical standards, the devil will be in the details here; there are boatloads of security standards to choose from. There also needs to be governance to ensure that the standards chosen are enforced. For example, I have been a consultant on several projects inside various Ministries, and it sometimes surprises information architects and project managers that there are Government of Ontario Information and Technology Standards (GO-ITS) to which their projects should adhere. There is nothing in the Bill to suggest that even if good standards are adopted, they will be enforced with any rigour.

Use of Artificial Intelligence Systems

This part of Bill 194, like the prior section, mainly sets out the authority for the government to make regulations to govern the use of AI systems without creating content that could be publicly reviewed or debated. I will note two particular gaps I feel should be addressed.

Developing an accountability framework

Section 5. (3) of the Bill states that each entity using artificial intelligence systems will develop and implement an accountability framework following the yet-to-be-published regulations. I will highlight what I believe to be two flaws with this approach.

First, there are no assurances in the Bill that marginalized or disadvantaged communities will provide input or be engaged in developing an Accountability Framework for an artificial intelligence system that may significantly impact their lives. Secondly, the approach in this Bill could lead to a proliferation of entity-specific Accountability Frameworks. This burdens both citizens, whose data may be processed in multiple artificial intelligence systems governed by different frameworks, and entities, which may be asked to develop and implement their own frameworks without the appropriate accountability expertise.

Rather than a proliferation of frameworks, creating a single Accountability Framework based on transparent, inclusive, and robust stakeholder engagement would be better.

Creating a risk framework

All that Bill 194 says on managing the risk of using artificial intelligence systems is, "A public sector entity to which this section applies shall take such steps as may be prescribed to manage risks associated with the use of the artificial intelligence system." This is woefully inadequate. The high-level risks and harms that artificial intelligence can create need to be articulated so that systems that may pose high risks to individuals or to Ontario as a whole can be identified, and those risks and harms avoided or mitigated. There is no identification of what might be termed unacceptable uses of AI systems, nor a way to determine whether a high-risk AI system - such as a system that collects biometric information about Ontarians and uses that as a basis for determining access to systems - is acceptable. (In my mind, such a system is inherently unacceptable.)

Digital Technology Affecting Individuals Under Age 18

This section mirrors the one above; it essentially boils down to allowing the government to make regulations that

  • set out how children's information may be collected, used, or disclosed
  • require reports about how children's information may be collected, used, or disclosed
  • may prohibit some processing of children's information

I have two broad comments here. The first is that I am somewhat relieved that the government is not trying to introduce broad systems of digital control or censorship in the name of protecting children. Such legislation is usually both overly broad and ineffective in its intended purpose. That isn't to say that there aren't real risks to students that could have been articulated, not least of which is students using easily available tools to create deep fake photos and videos of other students - creating real trauma and having real-world consequences.

My second comment is that many digital risks to students are also digital risks for their parents, including misinformation and other social harms. This legislation would have been a great opportunity, for example, to create a requirement for school boards to develop and provide curricula and training to support students in identifying misinformation through critical digital media training.


The last section of Bill 194 includes section 12, which states that nothing in the Act establishes a private law duty of care owed to any person. I'm not a lawyer, but when I looked up the phrase, it said, "A duty recognized by law to take reasonable care to avoid conduct that poses an unreasonable risk of harm to others." My only comment here is to note that despite the title of the bill, its writers have taken care to ensure that the heads of government institutions do not have a duty to take reasonable care to avoid the risk of harm (aside from the privacy safeguards requirement added in Schedule 2, which doesn't appear to me to be the same thing). It seems that where an individual's information, especially sensitive information, is collected under a legislative authority, the institution or head should have a duty of care for that individual's information. It may be that this is standard language in this kind of legislation, but it still leaves me a little perplexed. 🤷‍♂️

Schedule 2, Freedom of Information and Protection of Privacy Act

This schedule is, in some ways, simpler in that it provides amendments to an existing Act (FIPPA) and doesn't endlessly defer to yet-to-be-determined regulations. Schedule 2 adds a definition of "information practices" to FIPPA, which will help those responsible for building systems comply with FIPPA. Some worthwhile elements for reporting have been added. I will take particular note of two significant changes: requirements for privacy impact assessments (PIAs), and breach reporting and notification requirements.

Privacy Impact Assessments

This is a welcome addition to FIPPA. PIAs are a standard tool for identifying the risks to privacy in a system and recommending steps for their remediation. By standardizing the information required in a PIA, this legislation goes some distance toward raising the floor for privacy protection and enabling consistent expertise to develop across all of government. I look forward to any prescribed requirements. This is followed by a section on risk mitigation that directs government institutions to implement the recommendations of the PIA.

I would be remiss if I didn't point out the obvious gap between this and Schedule 1. There is no directive in Schedule 1 concerning impact assessments for AI systems, nor is there a direction to heads to mitigate identified risks.

Institutions are required to provide a copy of their PIAs to the Information and Privacy Commissioner if asked. This could be improved by making it a mandatory filing with the Commissioner. This wouldn't require the IPC to approve the PIA, but it would make the PIA available to the Commissioner promptly in case of a complaint or breach related to a system with a PIA.

Breach Reporting and Notice

Schedule 2 adds a Privacy Safeguards section to FIPPA. Specifically, the requirement is that "The head of an institution shall take steps that are reasonable in the circumstances to ensure that personal information in the custody or under the control of the institution is protected against theft, loss and unauthorized use or disclosure and to ensure that the records containing the personal information are protected against unauthorized copying, modification or disposal." This raises the question of why the requirement for privacy safeguards is only being added now, but suffice it to say I applaud it.

The requirement for privacy safeguards provides the underpinning for defining a breach as "any theft, loss or unauthorized use or disclosure of personal information in the custody or under the control of the institution if it is reasonable in the circumstances to believe that there is a real risk that a significant harm to an individual would result...". Such breaches will be reported to the Commissioner, whose budget will hopefully reflect this new obligation. The factors identified as determining whether there is a real risk of significant harm include:

  • the sensitivity of the personal information;
  • the probability of misuse;
  • the availability of steps that a person could take to
    • reduce the risk of harm
    • mitigate the risk of harm
  • directions or guidance from the Commissioner

With safeguards, breaches, and risks of harm defined, the last piece is the addition of a requirement to notify individuals if there has been a breach of their information. This is welcome but has consequences. In some circumstances, such a notification can be traumatic, or can require the individual to spend money in response. Where is the requirement to compensate the individual or help them mitigate the impact?

Order Making Power

It is worth noting that the amended FIPPA will give the Commissioner order-making power concerning privacy breaches. This is a new power for the Commissioner and, I suspect, a welcome one, bringing the Commissioner's privacy powers under FIPPA into alignment with her order-making powers for Freedom of Information issues.

Wrapping Up

This post was created within a day or two of Bill 194's First Reading. I look forward to other and deeper contributions to the debate in the days to come. In the meantime, I have these takeaways:

  • It is past time for Ontario to stop being a laggard in the protection of employee privacy, and the government should, at the very least, amend Bill 194 to give public sector employees the privacy protection and respect they deserve.
  • A private sector privacy bill could address employment privacy issues, putting it under the authority of the Commissioner with private sector order-making powers. Alternatively, elements of privacy protection for employees could also be addressed by adding to Ontario's Employment Standards Act.
  • The government should use Bill 194's second reading and committee review to ensure that there is a clear legislative articulation of:
    • the acceptable and unacceptable uses of artificial intelligence
    • how to identify, categorize, and mitigate individual and social risks associated with the use of artificial intelligence
  • If the government wants to ensure that digital technology doesn't harm children, it should start with digital media training and take steps to prevent children from using technology to bully other children.
  • Consider recognizing that the government has a duty of care when it processes sensitive personal information under a legislative authority that deprives individuals of the ability to refuse that processing.
  • Adding PIA requirements with breach notifications will raise the bar for institutions processing Ontarians' personal information. This may lead to some interesting changes or headlines in the short term, but the longer-term consequences should be good.

At the end of the day, the government appears to want to be able to take steps to address cybersecurity, children's data processing, and artificial intelligence through regulations. It will be interesting to see how, or if, the consultation process will significantly alter this approach. The public consultation is open until June 11th and can be found at https://www.ontariocanada.com/registry/view.do?postingId=47433&language=en