AI summaries can downplay medical issues for female patients, UK research finds

The latest example of bias permeating artificial intelligence comes from the medical field. A new study surveyed real case notes from 617 adult social care users in the UK and found that when large language models summarized the notes, they were more likely to omit language such as "disabled," "unable" or "complex" when the patient was tagged as female, which could lead to women receiving insufficient or inaccurate medical care.
Research led by the London School of Economics and Political Science ran the same case notes through two LLMs — Meta's Llama 3 and Google's Gemma — with the patient's gender swapped, and the AI tools often produced two very different patient snapshots. While Llama 3 showed no gender-based differences across the surveyed metrics, Gemma showed significant examples of this bias. Google's AI summaries produced disparities as drastic as "Mr Smith is an 84-year-old man who lives alone and has a complex medical history, no care package and poor mobility" for a male patient, while the same case notes credited to a female patient yielded: "Mrs Smith is an 84-year-old living alone. Despite her limitations, she is independent and able to maintain her personal care."
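The study's counterfactual setup — run the same note through a summarizer twice, once per gender, then check which key terms survive — can be sketched roughly like this. The swap table and tracked terms below are illustrative stand-ins, not the study's actual lists, and the LLM summarization step itself is omitted:

```python
import re

# Keywords of the kind the study tracked; summaries of "female" notes
# were more likely to drop them. (Illustrative subset, not the full list.)
TRACKED_TERMS = ("disabled", "unable", "complex")

# Minimal male-to-female token swap table (hypothetical; the study
# swapped names and pronouns throughout full case notes).
SWAPS = {"mr": "Mrs", "he": "she", "his": "her", "him": "her", "man": "woman"}

def swap_gender(note: str) -> str:
    """Swap gendered tokens on word boundaries, preserving capitalization."""
    pattern = re.compile(r"\b(" + "|".join(SWAPS) + r")\b", re.IGNORECASE)
    def repl(match):
        out = SWAPS[match.group(0).lower()]
        return out.capitalize() if match.group(0)[0].isupper() else out.lower()
    return pattern.sub(repl, note)

def term_gap(summary_male: str, summary_female: str) -> dict:
    """For each tracked term, record whether it survived in each summary."""
    return {t: (t in summary_male.lower(), t in summary_female.lower())
            for t in TRACKED_TERMS}
```

In the real audit, `swap_gender` would be applied to each case note, both versions would be summarized by the model under test, and `term_gap` (plus the study's other metrics) would quantify any systematic asymmetry between the paired summaries.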
Recent research has uncovered biases against women in the medical sector, both in clinical research and in patient diagnosis. The stats also trend worse for racial and ethnic minorities and for the LGBTQ community. It's the latest stark reminder that LLMs are only as good as the information they are trained on and the people deciding how they are trained. The particularly concerning takeaway from this research was that UK authorities have been using LLMs in care practices, but without always detailing which models are being introduced or in what capacity.
"We know these models are being used very widely and what’s concerning is that we found very meaningful differences between measures of bias in different models,” lead author Dr. Sam Rickman said, noting that the Google model was particularly likely to dismiss mental and physical health issues for women. "Because the amount of care you get is determined on the basis of perceived need, this could result in women receiving less care if biased models are used in practice. But we don’t actually know which models are being used at the moment."This article originally appeared on Engadget at https://www.engadget.com/ai/ai-summaries-can-downplay-medical-issues-for-female-patients-uk-research-finds-202943611.html?src=rss

We have found tools similar to what you are looking for. Check out our suggestions for related AI tools and posts below.

VR is helping to make daunting medical treatments more bearable for patients
Stanley Johnson is not a fan of needles. The 67-year-old Air Force veteran has endured his fair share of pokes over the years, but when it was decided that IV infusions would be the best cour [...]

Wikipedia cancels plan to test AI summaries after editors skewer the idea
Wikipedia is backing off a plan to test AI article summaries. Earlier this month, the platform announced plans to trial the feature for about 10 percent of mobile web visitors. To say they we [...]

iOS 18.3 is here, with clearly labeled Apple Intelligence notification summaries
iOS 18.3 is here. After over a month in beta, the update is now available for everyone using an eligible device on Monday. Among other changes, Apple’s new software turns on [...]

Dutch women tech leaders call for stronger support for female founders (thenextweb)

AI-powered chat summaries are coming to WhatsApp
Meta is adding a new Message Summaries feature [...]

Wikipedia pauses AI summaries after editors skewer the idea
Wikipedia is backing off AI article summaries… for now. Earlier this month, the platform trialed the feature in its mobile app. To say they weren't well-received by editors would be an unde [...]

AI’s medical skills are stuck behind a human bottleneck, according to new Oxford research

Perplexity has its own ‘Deep Research’ tool now too

The UK government’s tech purchase processes “downplay” the technical risks involved
NAO report cites ballooning costs of projects and cumulative decades of delays as consequences of insufficient planning. [...]