Doctors Relying on AI Became 20% Worse at Spotting Health Risks, Study Finds
AI-assisted tools have become a daily resource, helping to boost productivity and speed across several industries. Despite the notable benefits of these smart systems, medical experts worry that overreliance on artificial intelligence may be causing more harm than good.

In brief
- A study shows doctors relying on AI in colonoscopies may detect fewer irregularities when working without the tool.
- Research reveals AI boosts efficiency but may weaken judgment skills by discouraging deep and critical thinking.
- The Air France Flight 447 crash highlights the dangers of overdependence on automation in high-stakes environments.
- Experts stress AI’s benefits but warn industries to maintain human expertise for when automation fails.
AI Reliance May Reduce Doctors’ Detection Rates in Colonoscopies
A recent study of 1,443 patients found that endoscopists who routinely used AI assistance during colonoscopies detected fewer irregularities when working without the tool.
Results of the research, published this month in The Lancet Gastroenterology & Hepatology, show that these doctors detected potential polyps in 28.4% of standard colonoscopies before AI assistance was introduced. After months of routine AI use, the detection rate in colonoscopies performed without the tool fell to 22.4%, a roughly 20% relative decrease.
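The headline's "20%" is a relative, not absolute, difference between the two reported detection rates (28.4% and 22.4%). A quick illustrative check in Python, not part of the study itself:

```python
# The two polyp detection rates reported in the Lancet study (percent)
higher_rate = 28.4
lower_rate = 22.4

absolute_drop = higher_rate - lower_rate           # 6.0 percentage points
relative_drop = absolute_drop / higher_rate * 100  # ~21%, rounded to "20%" in coverage

print(f"Absolute drop: {absolute_drop:.1f} percentage points")
print(f"Relative drop: {relative_drop:.1f}%")
```

In other words, the rate fell by 6 percentage points, which is about one fifth of the original 28.4% rate.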
Dr. Marcin Romańczyk, a gastroenterologist at H-T. Medical Center in Tychy, Poland, and the study’s author, expressed surprise at the results. He pointed to excessive reliance on artificial intelligence as a key factor behind the drop in detection rates.
We were taught medicine from books and from our mentors. We were observing them. They were telling us what to do. And now there’s some artificial object suggesting what we should do, where we should look, and actually we don’t know how to behave in that particular case.
Dr. Marcin Romańczyk
Not only does the result capture the potential ill effects of relying heavily on artificial intelligence, it also touches on the evolution of medical practice from an analog-focused tradition to a digital era.
Concerns Over Artificial Intelligence in the Workplace: Productivity Gains at a Cognitive Cost
Beyond its growing deployment in operating theaters and clinics, AI automation has become a mainstay of workplaces, with many now turning to these tools to boost productivity. Goldman Sachs even predicted in 2023 that AI could boost workplace productivity by as much as 25%.
But adopting these AI systems also comes with drawbacks, and research from leading firms has highlighted the risks of putting too much faith in these tools.
A publication by Microsoft and Carnegie Mellon University noted that AI helped increase work efficiency among a group of surveyed knowledge workers, but warned that overreliance risked leaving their judgment skills “atrophied” and weakening their analytical ability.
Overreliance on Artificial Intelligence in Aviation: Lessons from Air France Flight 447
Even in the aviation sector, where safety is paramount, past evidence shows that overreliance on automation can compromise safety. In 2009, Air France Flight 447, en route from Rio de Janeiro to Paris, crashed into the Atlantic Ocean, killing all 228 people on board.
Investigations later revealed that the plane’s automation system had malfunctioned, causing the aircraft’s automated “flight director” to transmit inaccurate information. Because the flight crew was not adequately trained in manual flying, they followed the aircraft’s automated cues rather than making the necessary corrections.
Balancing AI Adoption with Human Expertise in High-Stakes Industries
Lynn Wu, associate professor of operations, information, and decisions at the University of Pennsylvania’s Wharton School, noted that these incidents are a reality check for AI adoption in sectors where human safety is critical. Wu explained that while leaning into these technologies, industries should ensure that workers are appropriately adopting these tools.
What is important is that we learn from this history of aviation and the prior generation of automation, that AI absolutely can boost performance. But at the same time, we have to maintain those critical skills, such that when AI is not working, we know how to take over.
Lynn Wu
She added that if people lose their own skills, artificial intelligence will also perform worse. For AI to improve, individuals must also continually improve themselves.
Romańczyk also accepts the use of AI in medicine, noting that “AI will be, or is, part of our life, whether we like it or not.” Still, he emphasized the need to understand how artificial intelligence affects human thinking and urged professionals to determine the most effective ways to utilize it.