National Cyber Warfare Foundation (NCWF)

In an Oxford study, LLMs correctly identified medical conditions 94.9% of the time when given test scenarios directly, vs. 34.5% when prompted by human subjects.


2025-06-14 21:24:32
milo
Developers , General News , Education

Nick Mokey / VentureBeat:

In an Oxford study, LLMs correctly identified medical conditions 94.9% of the time when given test scenarios directly, vs. 34.5% when prompted by human subjects  —  Headlines have been blaring it for years: Large language models (LLMs) can not only pass medical licensing exams but also outperform humans.




Source: TechMeme
Source Link: http://www.techmeme.com/250614/p13#a250614p13





Copyright 2012 through 2025 - National Cyber Warfare Foundation - All rights reserved worldwide.