The Oxford Internet Institute is warning scientists about the tendency of the Large Language Models (LLMs) behind chatbots to hallucinate. Because LLMs are designed to produce helpful, convincing responses with no guarantee of accuracy or alignment with fact, they pose a direct threat to science and scientific truth.
The paper, published in Nature Human Behaviour, highlights that LLMs are often trained on online sources that can contain inaccurate information, yet users tend to treat them as human-like, trustworthy sources of knowledge. The result can be biased or partial representations of reality, with serious consequences for science and education.
The researchers stress the importance of verifying information produced by LLMs and urge the scientific community to use them as “zero-shot translators”: rather than treating the model as a source of knowledge, users supply it with appropriate data and ask it to transform that data into a conclusion or into code. The output can then be checked for factual accuracy against the input it was given.
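To make the distinction concrete, the sketch below contrasts the two usage patterns. It assumes a generic chat-completion client, represented here by a hypothetical ask_llm() function; the prompts, function names, and figures are illustrative only and are not taken from the paper.

```python
"""Contrast two ways of using an LLM, in the spirit of the
'zero-shot translator' idea: ask it to reshape data you supply,
rather than treating it as a source of facts."""


def ask_llm(prompt: str) -> str:
    # Placeholder: replace with a call to whatever chat-completion
    # client you actually use. Here we echo the prompt so the
    # example runs end to end without any external service.
    print("--- prompt sent to model ---")
    print(prompt)
    return "<model response would appear here>"


def knowledge_source_query() -> str:
    # Discouraged pattern: the model itself is treated as the source
    # of facts, so there is nothing to check its answer against.
    return ask_llm("What was the average global temperature anomaly in 2022?")


def zero_shot_translation(records: list[dict]) -> str:
    # Encouraged pattern: trusted data is supplied in the prompt and
    # the model is asked only to reshape it, so its output can be
    # verified line by line against `records`.
    table = "\n".join(f"{r['year']}: {r['anomaly_c']:+.2f} C" for r in records)
    prompt = (
        "Using ONLY the measurements below, write one sentence summarising "
        "the trend. Do not add any figures that are not in the data.\n\n"
        + table
    )
    return ask_llm(prompt)


if __name__ == "__main__":
    data = [  # made-up numbers for illustration, not real measurements
        {"year": 2020, "anomaly_c": 0.98},
        {"year": 2021, "anomaly_c": 0.85},
        {"year": 2022, "anomaly_c": 0.89},
    ]
    print(zero_shot_translation(data))
```

In the second pattern, every figure in the model's summary should already appear in the supplied records, which is what makes the output straightforward to verify.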
While LLMs may assist with scientific workflows, it is crucial that scientists use them responsibly and maintain clear expectations of what they can contribute. The Oxford Internet Institute’s warning underscores the need to handle these powerful tools with care so that science and scientific truth are not undermined by false content generated by LLMs.