There are few topics more polarizing than the rise of artificial intelligence, particularly super-wealthy corporations like OpenAI, Anthropic, and others that have driven a massive build-out of data centres with all the concomitant effects. For some, these companies and their large language models herald the imminent arrival of a kind of alien super-intelligence that will utterly transform our lives; others pejoratively refer to the entire sector as nothing more than “fancy autocomplete.”
You may have no interest in LLMs whatsoever, but if you live in Ontario your government does. The province has introduced AI-driven software in health care to assist medical practitioners in notetaking and allowed the use of Microsoft’s Copilot by career civil servants. The use of AI in the public sector is all but certain to increase in the future, so it would be reassuring if Ontario’s early efforts were starting us off on the right foot.
Ontarians, alas, should not be reassured. A special report released by Auditor General Shelley Spence on Tuesday shows that public servants are regularly (almost exclusively, in fact) using unapproved AI websites without adequate controls, and the AI services procured by government for the broader public sector have numerous failings and weaknesses. Spence, however, offered a balanced assessment of AI use in government (and acknowledged that her office does use LLMs for the work that goes into her reports).
“AI is a tool that will improve efficiencies and delivering services. It’s going to take some baby steps to get there, to get it to be perfectly great,” Spence told reporters at Queen’s Park. “What we need to do when we’re testing: that we’re doing live demos, that we’re looking at security, that we’re putting those guardrails around what we’re doing.”
“AI is moving very quickly. Even our report itself is almost stale dated at this point, given how quickly technology is going,” Spence added.
(Full disclosure: TVO, as an agency of the government of Ontario, also has a policy around the use of generative AI; while I use AI-powered software to transcribe interviews, I’ve never used an LLM in writing or researching any of my columns.)
Spence’s report found some glaring, and in some cases hilarious, lapses. According to the report, Microsoft’s Copilot is the only AI chatbot approved by the ministry for use in the Ontario Public Service, but only six per cent of OPS AI use involves Copilot, with the balance going to much more popular services like ChatGPT and Claude. Nobody with even a passing familiarity with the relative popularity of competing LLMs will be surprised to learn that Copilot is faring poorly. But for some of us it’s legitimately very funny that, even when government workers are using government computers and are presumably aware of government policy, Microsoft’s product in this space is so mediocre the vast majority of OPS users can’t be bothered.
Underneath the humour, however, are real and serious concerns. The reason that Copilot is the only approved chatbot for the OPS is that Microsoft is the only provider willing or able to implement certain basic elements of control and security over the data fed into LLMs. The OPS handles sensitive information, including personal health information — not to mention more mundane individual and business records. OPS staff bypassing Copilot is not a harmless workaround, and Spence recommends that non-compliant AI services be blocked from government computers.
(Even Microsoft’s nominal guarantees should be viewed critically, as the Washington-based software giant recently acknowledged to the EU that it may be compelled by U.S. law to provide any records on its servers to U.S. agencies, regardless of whatever contractual language it has with foreign governments.)
One instinct, when learning about failures like this, is to demand that government simply throw the baby out with the bathwater and abandon AI altogether. That would be a mistake, and even in the context of glaring failures Spence doesn’t argue for that. Second perhaps only to software development itself, government is an obvious place where LLMs could play a massively beneficial role. Collecting, consolidating, and analysing information is a fundamental, core role of government, and LLMs can self-evidently play a role there. And that’s before we even get into the discussions about the future role of AI in medicine and education, which will remain the two biggest segments of Ontario’s budget.
So, as citizens we need to answer a simple question: how much do we want the government to be able to do with the information we provide to it? There might theoretically be any number of advantages for the government if we allowed it to feed all the information it already has into high-performing LLMs — an AI agent could proactively reach out to citizens to recommend services and benefits they didn’t even know they were eligible for — but it could just as easily become a dystopian surveillance nightmare. Siloing some, perhaps most, types of data might be “inefficient,” but could also be a basic guarantee of our privacy rights.
More broadly, Canada is just at the beginning of a debate about what we want “digital sovereignty” to mean in the 21st century. This discussion has been driven recently by the chaotic and antagonistic positions adopted by the Trump administration — and the dawning realization that our near-total reliance on U.S. firms for digital services leaves us vulnerable in ways we’re only now recognizing — but it was likely inevitable one way or another. The Canadian Shield Institute has been producing some of the most compelling work recently on this topic, and one takeaway is that if we want to take this issue seriously the solutions aren’t going to be simple. It’s not enough, for example, to have data centres located in Canada if we’re still reliant on U.S.-based providers subject to U.S. law in their core operations. Governments need to be thinking about legislation not just to affirm democratic oversight over this industry but, more importantly, to protect individual privacy rights from both government and non-government actors.
The AI era is still relatively young — ChatGPT launched to the public less than four years ago — so it’s not surprising that governments haven’t figured out AI policy in detail yet. The early evidence from Ontario isn’t reassuring, but there’s time yet for policymakers at every level to raise their game.