During a trip to Brussels in June, Prime Minister Mark Carney pledged to forge stronger ties with the European Union on trade, innovation and regulation related to the digital economy. “(We) intend to enhance cooperation on artificial intelligence innovation … and to deepen research cooperation in strategic technology areas such as AI and quantum. We also intend to align our frameworks and standards in the regulatory field … to develop trustworthy AI systems and to establish interoperable digital identities and credentials,” he said, flanked by key EU officials.

It was a pivotal moment. For years, Canada’s tech ecosystem has been dominated by the U.S. giants next door, due to a combination of geography and the sheer size of the U.S. market. While there has been a deepening realization in recent years of the drawbacks of U.S. technological dominance, it took what Carney has called a “rupture” of the U.S.-Canada relationship to make Ottawa really consider its options.

Europe’s emergence as a key partner, however, brings with it a significant risk: that Ottawa becomes ensnared between the competing EU and U.S. approaches to tech governance, a limbo that could leave it vulnerable to economic retribution from the U.S. while risking its access to EU markets.

“Canada is stuck in the middle (of) the EU’s comprehensive digital policy governance versus a lack of it in the U.S., where powerful tech giants are setting the rules to meet their interests,” said Robert Fay, a senior fellow at the Centre for International Governance Innovation (CIGI), a Waterloo-based think-tank.

Canada has yet to determine its approach to digital regulation, despite the Carney government’s lofty plans to boost Canada’s AI and tech ecosystem — plans that include widespread AI adoption in the public and private sectors and new programs for deep tech such as quantum computing.

Canada’s position has always been “a bit uncomfortable” given the country’s economic integration with the U.S., coupled with a risk-averse nature that has seen it slant toward EU-style tech rules, said Michael Karanicolas, Dalhousie University’s James S. Palmer chair in public policy and law and former executive director of UCLA’s Institute for Technology, Law and Policy.

Teresa Scassa, a Canada research chair in information law and policy at the University of Ottawa, echoed these sentiments.

Canada has historically “straddled the gap” between the U.S. and EU, seeking a regulatory position that gives Canadian businesses access to both markets by meeting minimum standards for equivalency with EU laws while ensuring that “there’s not … so much regulation in place that our companies can’t compete,” she said. Finding that balance in recent months, however, has “become much harder” as the U.S. and EU have diverged even further on AI and data protection.

The U.S. has pursued a more laissez-faire approach, emphasizing deregulation and innovation. At the federal level, the U.S. lacks privacy regulations and has “nothing on AI,” said Antoine Guilmain, a partner and co-lead of national cybersecurity and data protection at law firm Gowling WLG (Canada) LLP. Meanwhile, the EU has implemented sweeping rules to govern Big Tech and safeguard consumer rights through landmark legislation like the Digital Services Act (DSA), Digital Markets Act (DMA), and the General Data Protection Regulation (GDPR), doling out multi-billion-dollar fines for transgressions along the way.

Canada, for its part, is sending “mixed messages on which direction it wants to take,” according to Fay.

Scassa said Ottawa is “figuring out how much room it has to manoeuvre.”

While Ottawa has warned against being overzealous on digital regulation — AI and digital innovation minister Evan Solomon has championed “light and tight regulation” — it has also inched closer to the EU on platform regulation and AI safety.

Canada’s past efforts at digital rulemaking and recent comments from Solomon could offer clues to what’s ahead.

In the past, Ottawa has attempted to modernize and implement new digital legislation. Bill C-27, first introduced in the Canadian House of Commons in 2022, was the government’s bid to reform and strengthen data and privacy rules and present new ones to regulate AI. That bill died in January when the former Trudeau government prorogued Parliament.

In some ways, Bill C-27 echoed the EU’s approach. “It wasn’t as comprehensive or rigid … but went further in the direction of what the EU was doing,” said Emily Laidlaw, an associate professor at the University of Calgary and a Canada Research Chair in cybersecurity law.

Canada’s proposal contained elements of the EU’s data protection laws and similarly carved out a path for greater enforcement measures, but was ultimately watered down, Scassa said. For instance, Canada’s bill included a right of erasure clause — which lets citizens request that an organization delete information it retains about them — but it was “fairly light and limited” compared to the EU’s, she said. Ottawa also retained clauses on data mobility, which allow people to request a transfer of their personal information, but left them to be fleshed out in future regulations.

At All In, a major artificial intelligence conference held in Montreal last week, Solomon hinted that new tech rules are coming.

“We are going to modernize Canada’s data and privacy laws. They are more than 25 years old. We’re going to include protections … (on) things like deepfakes and protection for children. We’re going to set clear standards for the use of data so innovators have clarity to unlock investment,” he said.

Scassa took that to mean the government will likely push forward a new version of a bill to reform Canada’s private sector data protection law, known as PIPEDA. She added that Bill C-27 was set to increase the enforcement powers of PIPEDA and hopes to “see something similar in any new bill.”

Yet it remains unclear whether Canada’s Privacy Act, which governs the federal public sector and is “also desperately in need of reform,” will see any changes, she said.

The federal government is navigating especially tricky waters when it comes to regulating AI.

Ninety-two per cent of Canadian business leaders want to see the federal government regulate AI “as soon as possible,” yet do so in a way that will cut red tape and incentivize enterprises to adopt AI, according to a September 2025 report from KPMG LLP.

Restrictions that are too tight could further incentivize Canadian companies to move to the U.S. and could be interpreted as a trade barrier by the Trump administration, with the associated risks that entails, Scassa said.

Those dangers were laid bare in June, when Ottawa was pressured to quash the digital services tax (DST) after Trump terminated trade discussions over Canada’s plan to tax U.S. tech giants.

The U.S. has also pressured the EU on its tech rulebook, with the House Judiciary Committee most recently holding a September hearing that took aim at the EU and U.K.’s online safety laws. Trump has fired warning shots at “countries that attack our incredible American Tech Companies” through digital taxes and regulations. “Unless these discriminatory actions are removed … I will impose substantial additional tariffs on that country’s exports to the USA,” he wrote on Truth Social.

“The dramatic shift in direction on AI-related issues in the U.S. is going to make straddling the two jurisdictions a painful exercise. Canada may have some hard choices ahead,” Scassa said.

The White House’s AI Action Plan, released over the summer, removed “onerous” regulations on AI development and deployment, earning praise from tech leaders such as Nvidia Corp. chief executive Jensen Huang. Brussels’ landmark EU AI Act, meanwhile, attempts to define, and regulate, AI systems. It prohibits certain practices outright and imposes heightened compliance requirements on so-called high-risk systems.

Implemented last year, the Act is set to largely come into force by next August — even in the face of criticism and growing calls to postpone full implementation. In July, scores of Europe’s top CEOs called on Brussels to delay its timeline by two years, citing “unclear, overlapping and increasingly complex EU regulations.”

Tech giants including ChatGPT-maker OpenAI Inc., Canada’s top AI startup Cohere Inc., and Google LLC have signed on to the EU’s Code of Practice — a voluntary framework to help industry comply with EU AI safety, transparency and copyright rules. Google warned that the code and the bloc’s AI Act risk slowing down Europe’s AI development and competitiveness.

Meta Platforms Inc., meanwhile, has refused to sign, arguing that the code introduces “legal uncertainties for model developers … (and) measures which go far beyond the scope of the AI Act.”

Solomon has said that Ottawa won’t reintroduce the Artificial Intelligence and Data Act (AIDA) that was proposed under Bill C-27. AIDA came under fire from all sides: Big Tech argued that its definitions were too broad and would hurt innovation, civil society groups criticized it for being too vague to be useful, and academics said that the bill was rushed and poorly thought out.

In the months ahead, Ottawa is unlikely to propose a new bill on AI regulation, Scassa said.

“Instead, I expect to see more emphasis on ‘soft’ law, meaning guidance, standards, certification, and so on.”

At the same time, while federal action remains important, provinces also have a major role to play, she said.

“The provinces have the power to set the rules for the use of AI-driven technologies in workplaces under provincial jurisdiction … (and) the power to regulate with respect to the use of AI systems in their own public sectors, including hospitals and universities.”

In the U.S., state lawmakers have already introduced several bills aimed at regulating AI, given the dearth of federal rules. California governor Gavin Newsom on Monday signed a set of safety-focused AI laws that will become the toughest yet in the country.

Randy Goebel, a co-founder and fellow at the Alberta Machine Intelligence Institute (Amii), one of Canada’s national AI institutes, argued that it’s more sensible to follow Europe on AI rulemaking. “The U.S. (is) too incoherent. They’ve tried to nail things down like AI safety, but it gets disrupted regularly because of strong and large tech company lobbyists,” he said.

The last few months have shown that the U.S. “has almost completely abandoned the European notion of AI trustworthiness and safety,” he said.

Still, it’s unmistakable that the EU is undergoing “growing pains of implementing something new on the world stage,” Laidlaw said. The EU can serve as a “sounding board” for Canada, but it doesn’t mean that Ottawa should copy its template and apply it at home, she said.

Ottawa could, however, find a sweet spot if it manages to carve out a middle-of-the-road approach.

“There’s a benefit to being between the U.S. and the EU — being between underregulation and overregulation,” Guilmain said.

Canada, he added, has always been in the business of picking and choosing the best approaches from other jurisdictions.

“That makes sense. We want to incentivize businesses to come to Canada by having guardrails and a clear legal framework,” he said. “But it doesn’t mean that we can apply the same regulatory pressures on par with the EU — one of the biggest markets in the world.”