Relevant now · act now · Care content & ethics · 10 March 2026
Large language models in healthcare risk spreading misinformation when trained on incorrect data.
The use of large language models (LLMs) in healthcare increases the risk of misinformation, particularly when models are trained on unverified online sources. This risk is compounded when developers do not disclose which databases were used to train these tools, making independent verification of the training data impossible.
Why this matters: This development may affect the quality and safety of care.
Source: The Lancet Digital Health · 10 February 2026