This started as a thread on LinkedIn, but I doubt that will get anyone's eyes on it at the FDA, so I figured I'd do something more formal and sent Dr. Prasad an email. Here's what I wrote:
“Dear Dr. Prasad,
I’m sure you already know, but LLM technologies have been rapidly deployed and are being used across many different disciplines. I recently came across an alarming paper on LinkedIn. People are using LLM technologies in the healthcare industry, and after investigation, the authors found that information these models provide on women’s health had a failure rate of about 60%. This, I am afraid, may be only the tip of the iceberg.
I’m wondering what the FDA is doing to regulate these technologies, which are providing false health information. This is not to say that these technologies can’t have a positive impact on the healthcare industry if properly evaluated, but I believe stricter evaluation needs to be applied to LLM technologies in healthcare before they are released to the general population.
Kind regards,
Andre Zapico
CEO
likely llc
likelyllc.com
linkedin.com/in/andre-zapico
github.com/drezap
ME Information and Communication Engineering
University of Electronic Science and Technology of China
Stan Developer
mc-stan.org
BS Mathematical Sciences: Probabilistic Methods
BS Statistics
University of Michigan, Ann Arbor 2017”
And here’s a link to the paper: Gruber et al., “A Women’s Health Benchmark for Large Language Models,” https://arxiv.org/abs/2512.17028