AI Is Even Worse For Medical Help Than Good Old Dr Google
Explore the intricate dynamics of AI in healthcare, revealing startling insights and hidden perils in digital health advice.
In the digital age, where information is merely a click away, the allure of instant answers to health concerns is increasingly hard to resist. A survey by Asda Online Doctor has revealed a startling reliance on Artificial Intelligence (AI) for medical advice among the UK population, estimating that 2.3 million adults have sought guidance from AI platforms (HuffPost UK).
This phenomenon reflects a mixture of convenience, trust, and a dash of desperation, as long waiting times and a strained NHS nudge people towards alternative, digital avenues for health advice. One in seven UK adults turn to Google as their first source of medical advice, with 78.3% rating the search engine the most beneficial online tool for obtaining medical information. The reasons behind this shift are multifaceted, stemming from prolonged waits for doctor appointments and a growing trust in the digital realm to deliver swift answers to pressing health queries.
However, beneath the surface of this seemingly convenient solution lies a potential minefield of misinformation and unregulated advice. While the internet and AI provide rapid responses, they operate in an unregulated space where misinformation can spread unchecked. Duality Health stresses the importance of verifying the credibility and authenticity of online sources, as some platforms may be run by individuals with no medical credentials.
The question that looms large is: can AI be trusted with our health? A substantial 82% of people who have used AI for medical advice found the information beneficial, surpassing Google (78.3%), Instagram (81.4%), and TikTok (76.6%) in perceived helpfulness. However, the integrity of AI-generated information has been called into question. In a TikTok experiment, Dr Jeremy Faust found that AI could fabricate information, combining real journal and author names to cite non-existent studies, casting a shadow over the reliability of AI-generated advice.
Research conducted by experts at Asda Online Doctor, who used ChatGPT and Google Bard to seek advice for various medical symptoms, found that while 65.7% of the advice was deemed helpful, a concerning 22.8% was potentially harmful, particularly for conditions such as ovarian cancer, ectopic pregnancy, and HIV infection. This mix of helpful and potentially harmful advice underscores the precariousness of relying on AI for health-related guidance.
Dr Crystal Wyllie offers an essential perspective on the digital health advice conundrum: despite the convenience and anonymity that AI platforms offer, a trained medical professional remains the irreplaceable and safest source of medical advice. The nuanced understanding, empathetic communication, and years of training that medical professionals bring cannot be replicated by AI, particularly given the risk of misinformation.
While AI may offer a semblance of immediate relief or guidance, the potential risks and the critical importance of accurate health advice demand a cautious approach. The debate over AI's role in healthcare is not merely technological but deeply human, with stakes intrinsically tied to our health and well-being.