Google unveils MedLM, a family of healthcare-focused generative AI models

Google sees an opportunity to enlist generative AI models to help healthcare workers carry out their duties, or at least to offload more healthcare tasks onto those models.

The company today unveiled MedLM, a family of models fine-tuned for the healthcare and pharmaceutical industries. Available to Google Cloud customers in the U.S. (it’s in preview in select international markets), MedLM builds on Med-PaLM 2, a Google-developed model that performs at an “expert level” on dozens of medical exam questions. Allowlisted users can access MedLM through Vertex AI, Google’s fully managed AI development platform.

Currently, MedLM comes in two forms: a larger model intended for what Google describes as “complex tasks,” and a smaller, fine-tuned model best suited for “scaling across tasks.”
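For developers curious what access might look like in practice, here is a minimal sketch of calling a MedLM model through the Vertex AI Python SDK. The model ID (“medlm-large”), project ID, and prompt below are illustrative assumptions rather than details confirmed in Google’s announcement; the exact identifiers available depend on your allowlisted project and Google Cloud’s documentation.

```python
# A minimal sketch of querying a MedLM model via the Vertex AI Python SDK.
# The model ID, project ID, and prompt are illustrative assumptions;
# actual access requires an allowlisted Google Cloud project.
import vertexai
from vertexai.language_models import TextGenerationModel

# Initialize the SDK against a (hypothetical) allowlisted project.
vertexai.init(project="your-gcp-project-id", location="us-central1")

# Load the larger MedLM variant, aimed at "complex tasks."
model = TextGenerationModel.from_pretrained("medlm-large")

# A low temperature keeps the output more deterministic and conservative,
# which is generally preferable for healthcare-adjacent prompts.
response = model.predict(
    "Summarize the key differences between Type 1 and Type 2 diabetes.",
    temperature=0.2,
    max_output_tokens=256,
)
print(response.text)
```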

“Through piloting our tools with various organizations, we’ve learned that the most effective model for a given task varies depending on the use case,” Yossi Matias, Google’s vice president of engineering and research, said in a blog post shared with TechCrunch ahead of today’s announcement. “For example, summarizing conversations might be best handled by one model, and searching through medications might be better handled by another.”

According to Google, one of the first users of MedLM is HCA Healthcare, a for-profit hospital operator, which has been piloting the models with physicians to help draft patient notes at emergency department hospital sites. Another early tester, BenchSci, has built MedLM into its “evidence engine” for identifying, classifying and ranking novel biomarkers.

Every day, Matias said, Google is “working in close collaboration with practitioners, researchers, health and life science organizations and the people at the forefront of healthcare.”

Google, along with its chief rivals Microsoft and Amazon, is racing to corner the healthcare AI market, which could be worth tens of billions of dollars by 2032. Amazon’s recently launched AWS HealthScribe uses generative AI to transcribe, summarize and analyze notes from patient-doctor conversations. Microsoft, meanwhile, is piloting a range of AI-driven healthcare products, including “assistant” apps for clinicians underpinned by large language models.

But there are good reasons to be cautious about this kind of technology. Historically, AI in healthcare has met with mixed success.

Babylon Health, an AI startup backed by the U.K.’s National Health Service, has repeatedly come under fire for claiming that its technology can diagnose illnesses more accurately than doctors. And IBM was forced to sell its Watson Health division at a loss after technical problems soured major customer relationships.

Generative models, like those in Google’s MedLM family, are arguably far more sophisticated than their predecessors. But research has shown that such models aren’t especially accurate even on relatively basic healthcare questions.

In one study co-authored by a group of ophthalmologists, ChatGPT and Google’s Bard chatbot were asked questions about eye conditions and diseases, and the vast majority of answers from both tools were wildly inaccurate. ChatGPT-generated cancer treatment plans have been found to contain potentially fatal errors. And models like ChatGPT and Bard have answered questions about skin, lung capacity and kidney function with racist, debunked medical misinformation.

In October, the World Health Organization (WHO) warned about the risks of using generative AI in healthcare, noting that models may produce harmful, incorrect answers, spread misinformation about health issues, and reveal sensitive health or personal data. (Models trained on medical records could inadvertently leak them; models sometimes memorize training data and return portions of it when prompted.)

While the WHO said it was enthusiastic about the appropriate use of technologies, including generative AI, to support patients, researchers and scientists, it expressed concern that the caution normally exercised with any new technology isn’t being applied consistently to AI. “The rapid adoption of unproven systems may result in mistakes made by medical professionals, harm to patients, a decline in confidence in artificial intelligence, and ultimately endanger or postpone the potential long-term advantages and applications of such technologies worldwide,” the organization said.

Google has repeatedly insisted that it’s releasing its generative AI healthcare tools cautiously, and it isn’t changing that stance today.

“[We]’re focused on providing professionals with the means to use this technology safely and responsibly,” Matias said. “And we’re committed not only to helping others advance healthcare, but to making sure these benefits are accessible to everyone.”