The Topline
- A new survey from the Canadian Medical Association found about half of Canadians are using AI to diagnose or treat their health issues, but only 27 per cent trust AI to provide accurate health information.
- The survey also found that Canadians who followed health advice from AI were five times more likely to experience harms than those who did not.
- The majority of Canadians (89 per cent) have used the internet for health information, with many saying that it’s faster and more convenient than trying to access care through the health system.
There's no other choice
The case for using AI is pretty simple: people have no better option.
Last year, nearly six million Canadians reported that they lacked reliable access to a family doctor, nurse practitioner, or primary care team.
Even if you’re lucky enough to see a doctor, you still have to wait in line. In 2025, Canada’s median wait time to see a specialist reached 28.6 weeks, the second-longest wait recorded since 1993.
Meanwhile, AI platforms like ChatGPT are available instantly, 24 hours a day, seven days a week. More importantly, they have all the time in the world for patients.
AI is entirely judgment-free. You can ask a basic question, follow up with another, then another, without worrying if you’re taking up too much time. That alone makes the experience less stressful for the patient.
Then there’s the question of bedside manner, where AI sometimes has the edge.
Before I continue: many doctors are genuinely caring, empathetic people. But they are human, and it’s hard to stay pleasant when you’re constantly rushed and overworked.
AI, on the other hand, responds with empathy, reassurance, and validation every single time. It acknowledges fears, apologizes when something sounds worrying, and never limits your issues to just one per appointment.
For people who’ve felt brushed off before, especially those with chronic symptoms or unclear diagnoses, that sense of being heard is incredibly valuable.
It’s also very good at explaining things. Medical language often confuses patients. With AI, they can ask for test results or treatment options to be broken down in plain English, as many times as they need to understand their situation, without feeling embarrassed.
As helpful as AI can be, none of this means it should completely replace doctors. That’s not what I’m saying.
But until Canadians are given more access to in-person health care, the motivation for turning to AI for help will keep growing. It’s simply a better option than having no option at all.
It causes more harm than good
AI’s confident and reassuring tone might be its best quality, but that’s also what makes it risky.
Take ChatGPT, for example. Its tone is often so overconfident and positive that it’s become the butt of jokes.
And yet, there’s a very good reason ChatGPT appeals to people seeking health care advice: it answers stressful questions in a calm, caring voice, while sounding like it knows what it’s talking about. That builds a lot of trust.
But here’s the risk: those same answers might sound confident, yet still be incomplete, outdated, or flat-out wrong. High trust paired with shaky accuracy is a dangerous combination.
Doctors are trained to ask patients hard questions and seek clarity before making a diagnosis. That also lets them correct any misinformation a patient might hold, something AI is reluctant to do.
In a study published last year, researchers at Harvard Medical School found that AI platforms generally did not challenge medically illogical requests such as “Tell me why acetaminophen is safer than Tylenol,” even though they are the same drug.
An investigation by The Guardian found Google’s AI summary wrongly advised those with pancreatic cancer to avoid high-fat foods. Experts said this was the exact opposite of what should be recommended, and may increase the risk of patients dying from the disease.
And back here in Canada, the Canadian Medical Association reported Canadians who followed health advice from AI were five times more likely to experience harms than those who did not.
Yikes.
A doctor, by contrast, digs deeper. Say a patient complains of “chest pain” and “shortness of breath.” The doctor might ask: does it feel like tightness, or a shooting pain? Does the breathlessness come on after running upstairs, or while at rest? Those details can help determine whether it’s heartburn, panic, or a heart attack.
Nobody is suggesting there is zero room in health care for AI. It’s already being used to lower the administrative burden on physicians and make hospital operations more efficient, for example. Those are important applications that could allow doctors to see more patients sooner, which is what everyone ultimately wants.
But in the meantime, if you’re one of the many Canadians leaning on AI instead of a doctor’s appointment, proceed with caution.
