[1] “Detecting and Mitigating Hallucinations in Large Language Models (LLMs) Using Reinforcement Learning in Healthcare”, JAPMI, vol. 1, no. 1, pp. 105–118, Aug. 2024, doi: 10.60087/Japmi.Vol.03.Issue.01.Id.011.