ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans
Researchers at Brigham and Women's Hospital found that cancer treatment plans generated by OpenAI's revolutionary chatbot were full of errors.
People really need to get it into their heads that AI can "hallucinate" random information, and that any deployment of an AI needs a qualified human overseeing it.
Exactly. It's stringing together text in a series of iterations, each step adding a new token that is merely consistent with what came before. It has no way to know whether that inference is factually correct.
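The loop that comment describes can be sketched in a few lines. This is a toy, hypothetical model (a hand-written probability table, not ChatGPT's actual architecture) meant only to show the key point: each token is picked for statistical consistency with the preceding tokens, and nothing in the loop ever checks the output against reality.

```python
import random

# Toy stand-in for a language model: maps the tokens generated so far to a
# probability distribution over possible next tokens. Real LLMs learn this
# mapping from training data; this table is invented purely for illustration.
TOY_MODEL = {
    (): {"the": 1.0},
    ("the",): {"drug": 0.6, "patient": 0.4},
    ("the", "drug"): {"cures": 0.5, "treats": 0.5},
    ("the", "patient"): {"improves": 1.0},
}

def generate(max_tokens=3, seed=0):
    """Autoregressive generation: sample one token at a time, each chosen
    only because it is plausible given the tokens before it. Note there is
    no step anywhere that verifies the statement being built is true."""
    rng = random.Random(seed)
    tokens = []
    for _ in range(max_tokens):
        dist = TOY_MODEL.get(tuple(tokens))
        if dist is None:  # model has no continuation for this context
            break
        words = list(dist)
        weights = [dist[w] for w in words]
        tokens.append(rng.choices(words, weights=weights)[0])
    return " ".join(tokens)
```

Whether the sampler emits "the drug cures" or "the drug treats" depends only on the random draw and the learned probabilities, which is exactly why a plausible-sounding but false treatment claim ("hallucination") can come out of the same mechanism as a true one.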