Hackers are already hijacking ChatGPT to develop malware

ChatGPT is (also) revolutionizing cybercrime. The artificial intelligence developed by OpenAI has attracted great interest among hackers. On specialized forums, hackers of varying skill levels no longer hesitate to probe the tool's code-generation capabilities in depth, warn researchers at Check Point, a company specializing in cybersecurity.

Skills (still) required

Python, Java, C++, JavaScript, C#… ChatGPT can generate snippets of code from a single sentence, in seconds. This capability helps developers build certain complex functions faster, but it also benefits novices. With some basic knowledge and a general understanding of code, a user can compile (generate an executable file) or run the source code produced by the AI. In the field of cybercrime, hackers with only rudimentary skills could thus produce advanced malware.

On a forum popular with hackers, ChatGPT-related topics have been on the rise lately, according to Check Point. Several members have shared their exploits with the OpenAI tool, developing functions or pieces of malware. One user, for example, managed to create a stealer, a type of malware that locates files of interest on a victim's computer, copies them, and exports them to a remote server (via FTP).


Ransomware soon generated by AI?

Another hacker says he was able to produce multi-layered encryption software partly thanks to OpenAI's AI. That code could well be modified and repurposed, "potentially turning code into ransomware if scripting and syntax issues are fixed", say Check Point researchers. In their report, the experts also point to the use of ChatGPT to generate the tools needed to set up an e-commerce site on the Dark Net.

Although ChatGPT makes producing code quick and easy, the results are not always usable as-is. Many developers have underlined the sometimes poor quality of the generated content and the numerous runtime bugs it contains. In some cases, getting a program to run requires substantial modifications, which in turn demands at least a minimum knowledge of the language used.

“It’s only a matter of time before more sophisticated threat actors improve how they use AI-powered tools for malicious purposes”, concludes Check Point. At the end of 2022, many cybersecurity experts likewise identified the use of AI, in both cyberattack and cyberdefense, as a fundamental trend for 2023. The first days of the year seem to confirm it.
