May 5, 2024 10:08 am
GPT-4 found able to exploit one-day vulnerabilities when given CVE details

Researchers from the University of Illinois Urbana-Champaign have found that GPT-4, OpenAI's most advanced language model at the time, can exploit security vulnerabilities without human intervention. In a study posted to the arXiv preprint repository, Richard Fang, Rohan Bindu, Akul Gupta, and Daniel Kang demonstrated that GPT-4 can exploit so-called "one-day" vulnerabilities (flaws that have been publicly disclosed but not yet patched everywhere) by drawing on their Common Vulnerabilities and Exposures (CVE) descriptions.

The researchers compiled a benchmark of 15 critical-severity vulnerabilities drawn from the Common Vulnerabilities and Exposures (CVE) list and tested whether the models could exploit them. GPT-4 successfully exploited 87 percent of the vulnerabilities, while GPT-3.5 exploited none. The researchers attribute GPT-4's success to its access to the complete CVE descriptions of the vulnerabilities.
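The headline percentage follows directly from the benchmark size; a quick arithmetic sketch (the count of 13 successful exploits is inferred here from the reported 87 percent, not stated in this article):

```python
# Sanity check on the reported success rate: with a benchmark of
# 15 vulnerabilities, 13 successful exploits gives 13/15 = 86.7%,
# which rounds to the reported 87 percent.
total_vulns = 15
exploited = 13  # inferred from the 87% figure, an assumption
rate = exploited / total_vulns
print(f"{rate:.1%}")
assert round(rate * 100) == 87
```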

Based on these findings, the researchers note that security organizations might consider withholding detailed vulnerability reports as a mitigation strategy. To limit attackers' ability to weaponize freshly disclosed vulnerabilities with GPT-4, they recommend proactive measures such as applying security updates promptly, and they emphasize the importance of staying ahead of the threats posed by advances in language models.

Overall, the study highlights the offensive potential of advanced language models and underscores the need for organizations to harden their systems against AI-assisted cyber threats.