
ANALYSIS: How can Canada get better at catching scammers?

We must stop fighting a 21st century problem with 20th century tools
Written by Ritesh Kotak
Emerging technologies have made scams, such as the so-called grandparent scam, easier to carry out and harder to stop. (CP/Graham Hughes)

Scams are rapidly becoming more frequent and sophisticated, often leaving those targeted devastated and with little recourse. In my work, I hear from victims who have lost substantial amounts of money because they believed they were helping their grandson or paying taxes owed to the government. It’s truly heartbreaking.

Technology has become a weapon in the hands of savvy scammers. Advances in artificial intelligence, and the internet more broadly, have created new vectors for fraud that are scalable and more complex than anything we have seen before.

From sophisticated phishing campaigns that harvest credentials in bulk, to automated bots that scan networks for vulnerabilities, to deepfakes that can fool even the most tech-savvy experts, the balance of power has shifted. These technological advancements have created an uneven playing field, leaving individuals and businesses vulnerable.

It's time we use those same tools to our advantage.  

I’m often asked: why can’t police just catch the fraudsters? It’s easier said than done. Tackling fraud is like a never-ending game of whack-a-mole. Cybercrimes are often more complex than they seem. While victims may be local, the fraudsters are often overseas, and the data involved could be located anywhere in the world. From a legal perspective, law enforcement also has the added difficulty of proving a specific fraudster was behind a particular computer at the time of the incident.

As much as law enforcement would like to make victims whole, its primary role is to identify the fraudster and assist with the criminal component of the case — not to recover funds. Fraud investigations are notoriously complex, requiring significant time investments from dedicated detectives. Many agencies simply cannot spare the resources.

There are also legislative and structural challenges. Take a scenario where a grandparent gets a call from someone pretending to be their granddaughter asking for money, perhaps to use as bail money or to settle a utility bill to prevent her power from being cut. The calls and messages have been faked to look like they are coming from a real phone number or email address. The grandparent believes that the information is accurate and goes to the bank to make an immediate wire transfer. A day later, their actual granddaughter calls, and they realize that they were a victim of a grandparent scam.

One might assume that reporting the incident to the bank would trigger a coordinated response, ensuring that all necessary entities are made aware of the scam and can assist in freezing the funds, potentially retrieving the transfer, and attempting to trace the identity of the fraudster. This is not the case.

In fact, the victim will have to report the incident separately to their financial institution, their local police service, and the Canadian Anti-Fraud Centre. There is limited information sharing among these bodies, because law enforcement must obtain judicial authorization to access data. Meanwhile, the victim’s bank will have to ask the receiving bank to stop the transfer, but by then it is usually too late. For victims already under significant emotional distress, this fragmented process can be traumatizing and re-victimizing.

Law enforcement wants to help every victim, but when faced with an overwhelming number of Canadians being defrauded, police must make difficult decisions on which cases they can realistically pursue.

That’s because we’re fighting 21st century crime with 20th century frameworks. Our systems simply aren’t built to stop today’s fraudsters. We will need robust international cooperation, increased funding for domestic law enforcement agencies, and widespread public awareness campaigns to have any hope of combatting this ballooning cyberfraud problem. Today’s complex processes, lengthy delays, and fragmented institutions with ambiguous mandates leave victims feeling confused and unsupported. And these stolen funds are not just personal losses; they are often used to finance other illicit activities.

Emerging technologies such as artificial intelligence have added to this complexity, acting as a force multiplier in an already fraught landscape. Generative tools can now create realistic synthetic media that impersonates celebrities, elected officials, and family members. It is harder than ever to tell fact from fiction and real from fake. There are currently no legislated ethical standards for technology vendors, nor any mandatory labelling requirements for AI-generated content.

This is a complex issue that will require a victim-centric solution: one that makes fraud harder to commit, makes fraudsters easier to catch through international cooperation between vendors and governments, and prevents more scams through grassroots public-education campaigns.

The Canadian government has committed to creating a new financial crimes agency and has announced that legislation will be introduced in the spring of 2026. This is a vital step towards redesigning our security architecture at an institutional level and prioritizing victim restitution. Success shouldn’t be measured by the number of arrests alone; rather, we must consider the number of frauds reported and the amounts recovered for victims. In the interim, we must continue to educate the public on the ever-evolving landscape of fraud.

That’s a good start. But there also needs to be legislation specifically targeting online scams. It should empower law enforcement to compel service providers operating in Canada to remove fraudulent content, require the disclosure of information about fraudsters, and strengthen the protections and remedies available to victims. It should also streamline reporting between financial institutions, platforms, and law enforcement.

Going further, there should be legislation requiring that all AI-generated or synthetic media be clearly labelled when shared across platforms. While AI tools are often used to enhance images or videos in ways that may blur definitions, the law must adopt a common-sense standard: clear disclosure should be mandatory wherever content is created or manipulated by AI with the intent to mislead its audience.

Given the technological advancements and the sophistication of deception through AI-enabled impersonation, we must evolve from those 20th-century frameworks. Only then will we be able to deliver justice and make whole those who have lost the most.