Check Point finds potential cybercrime scenarios in ChatGPT4
Check Point Research (CPR) has released an initial analysis of ChatGPT4, surfacing five scenarios in which threat actors can streamline malicious efforts, making their preparations faster and more precise. Unfortunately, in some instances, even non-technical actors can create harmful tools.
CPR is sharing five scenarios of potentially malicious use of ChatGPT4. These include C++ malware that collects PDF files and sends them to an FTP server; phishing, including impersonating a bank and sending emails to employees; a PHP reverse shell; and a Java program that downloads and executes PuTTY, which can then launch as a hidden PowerShell.
These scenarios sometimes empower non-technical individuals to create harmful tools, as if coding, constructing, and packaging were a simple recipe. Additionally, despite safeguards in ChatGPT4, some restrictions can be easily circumvented, enabling threat actors to achieve their objectives without much hindrance.
CPR warns of ChatGPT4's potential to accelerate cybercrime execution and will continue its analysis of the platform over the coming days.
"After finding several ways in which ChatGPT can be used by hackers, and actual cases where it was, we spent the last 24 hours examining whether anything changed with the newest version of ChatGPT. While the new platform has clearly improved on many levels, we can, however, report that there are potential scenarios where bad actors can accelerate cybercrime using ChatGPT4. ChatGPT4 can empower bad actors, even non-technical ones, with the tools to speed up and validate their activity," says Oded Vanunu, head of products vulnerabilities research at Check Point Software.
"Bad actors can also use ChatGPT4's quick responses to overcome technical challenges in developing malware. What we're seeing is that ChatGPT4 can serve both good and bad actors. Good actors can use ChatGPT to craft and stitch together code that is useful to society; but simultaneously, bad actors can use this AI technology for rapid execution of cybercrime. As AI plays a significant and growing role in both cyberattacks and defence, we expect this platform to be used by hackers as well, and we will spend the coming days working to better understand how."
Importantly, in February this year, CPR found an instance of cybercriminals using ChatGPT to "improve" the code of a basic Infostealer malware from 2019. Although that code was neither complicated nor difficult to create, ChatGPT nonetheless improved it.
CPR notes, "As part of its content policy, OpenAI created barriers and restrictions to stop malicious content creation on its platform. Several restrictions have been set within ChatGPT's user interface to prevent abuse of the models. For example, if you ask ChatGPT to write a phishing email impersonating a bank or to create malware, it will not generate it."
"However, we are seeing cybercriminals work their way around ChatGPT's restrictions, and there is active chatter in underground forums disclosing how to use the OpenAI API to bypass ChatGPT's barriers and limitations. This is done mostly by creating Telegram bots that use the API. These bots are advertised in hacking forums to increase their exposure," CPR researchers add.