A new IBM study shows a workforce shifting faster than the organizations that employ it. Across Canada, millions of workers are already adopting AI tools in their daily routines, often without approval from their employers. It is a quiet transformation happening at the desk level, and it reveals a widening divide between employee expectations and the systems companies have put in place to manage AI use.
According to the study, 79 percent of Canadian office workers use some form of AI to complete tasks, yet only one in four relies on enterprise versions of those tools. Most are experimenting with consumer applications, browser extensions, or personal accounts instead. The result is a growing layer of activity that sits outside corporate oversight. It is not malicious, but it is unmanaged, and the pace of adoption is reshaping the landscape of work.
Workers overwhelmingly see AI as a direct productivity booster. The survey found 97 percent believe AI improves their performance, and nearly 80 percent say it frees time for more strategic or creative tasks. Many report weekly time savings that range from one hour to as much as six. Faster execution, better accuracy, and improved workload management are among the top benefits. Employees clearly feel the gains, and they are not waiting for formal permission to access them.
The corporate side tells a different story. Only 29 percent of workers believe their employer uses AI to its potential, and confidence drops sharply among older workers, particularly those aged 45 to 54. In many organizations, governance models, training programs, and secure AI platforms are still in early stages. The gap is not technological so much as organizational. Workers have moved ahead, while many companies remain cautious or unprepared.
This divide carries real risk. IBM’s Cost of a Data Breach Report notes that shadow AI activity has added an average of $308,000 to the cost of a breach in the past year. Sensitive data handled through personal AI tools can expose organizations to security lapses, regulatory liabilities, and reputational damage. The tools may be powerful, but without safeguards they create vulnerabilities that are difficult to contain.