Hackers in Russia are gathering on forums to share tips on how to access artificial intelligence (AI) developed by OpenAI, despite access being blocked in the country. As reported by Futura, these hackers are currently experimenting with ChatGPT to generate malicious code.

Experts at the cybersecurity firm Check Point have discovered various AI-generated scripts on Dark Web hacker forums; these are still rudimentary and do not yet pose a significant threat in terms of advanced cyberattacks. What caught the researchers' attention, however, were exchanges in which Russian hackers discussed how to access ChatGPT from Russia.

Opening a ChatGPT account with a Russian phone number

Hackers seeking ways to bypass the block in Russia

In a discussion thread about using ChatGPT to generate malware, hackers shared information on bypassing the geographic restrictions so they could put the AI to work writing malicious code. A Check Point spokesperson stated that circumventing these geolocation blocks is not difficult.

The hackers explained that they could access the service by buying existing user accounts, suggesting stolen credit card details as the means of payment. Creating an account, however, requires a phone number, and participants in the thread pointed to a Russian SMS verification service that can be used to get around the various blocking measures.

ChatGPT: hackers using it to create malware

Cybersecurity experts at Check Point have discovered that hackers are already using ChatGPT for malicious purposes. Since its public launch at the end of November 2022, ChatGPT has been touted as the next big thing on the internet.

The chatbot is known to be engaging and impressive, and users regularly push it to its limits. It has been shown to lie convincingly, and it has now also proved capable of working on the dark side.

Check Point Research has found that hackers are using OpenAI's AI to design malware. The researchers had previously tested ChatGPT themselves to build a full attack chain, starting with an enticing phishing email and ending with the injection of malicious code. The team's conclusion: if they had the idea, cybercriminals probably did too.

By analyzing large hacker communities, Check Point's experts have confirmed that the first cases of malicious use of ChatGPT are already underway.

The one reassuring point is that these are not the most experienced hackers, but rather cybercriminals without real development skills. As a result, the malware created so far is not particularly sophisticated; given ChatGPT's potential, however, the AI is likely to be used before long in the development of advanced hacking tools.

On hacker forums, participants are actively testing ChatGPT's ability to recreate malware. The lab's experts have dissected one ChatGPT-written script that searches a hard drive for common file formats, copies the matches, compresses them, and sends them to a server controlled by the attackers.
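Check Point did not publish the script itself, but the behavior it describes is easy to picture. The Python sketch below illustrates only the first two stages it mentions, finding common document formats and compressing them into an archive; the paths and extensions are hypothetical, and the exfiltration stage is deliberately omitted.

```python
import zipfile
from pathlib import Path

# Hypothetical parameters for illustration; Check Point did not publish
# the actual script, and the upload-to-server stage is left out.
TARGET_EXTENSIONS = {".pdf", ".docx", ".xlsx"}
SEARCH_ROOT = Path.home() / "Documents"
ARCHIVE_PATH = Path("collected_files.zip")

def collect_and_compress() -> None:
    # Walk the search root, pick out files with common document
    # extensions, and pack the matches into a single ZIP archive.
    with zipfile.ZipFile(ARCHIVE_PATH, "w", zipfile.ZIP_DEFLATED) as archive:
        for path in SEARCH_ROOT.rglob("*"):
            if path.is_file() and path.suffix.lower() in TARGET_EXTENSIONS:
                archive.write(path, arcname=path.relative_to(SEARCH_ROOT))

if __name__ == "__main__":
    collect_and_compress()
```

Taken in isolation, such a routine is indistinguishable from a backup utility; it is the final upload to an attacker-controlled server that turns it into a stealer.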

Hacker 'USDoD' shares multi-layered encryption tool on Dark Web forum, built with help of ChatGPT AI

Hackers hijack ChatGPT

In another example, Java code was used to download a network client and force it to run via the Windows administrator console; the script could then be used to fetch any malware. The posts go further still with a Python script capable of performing complex encryption operations. In principle, writing such code is neither good nor bad, but when it appears on a hacker forum, it is reasonable to assume that the encryption routine is destined to be integrated into ransomware.
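As the article notes, encryption code of this kind is neutral in itself. For a sense of how little effort "complex encryption operations" actually require, here is a minimal sketch using the third-party Python cryptography package (pip install cryptography); the file names are hypothetical.

```python
from cryptography.fernet import Fernet

# Generate a symmetric key; whoever holds it controls decryption.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt the contents of a file (hypothetical name).
with open("document.txt", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("document.txt.enc", "wb") as f:
    f.write(ciphertext)

# Decryption is just as simple when the key is available.
plaintext = fernet.decrypt(ciphertext)
```

The asymmetry is the point: the same few lines protect a backup or hold a victim's files hostage, depending entirely on who keeps the key.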

Researchers have also observed discussions and tests that misuse ChatGPT, not to generate malicious code, but to create an automated illegal trading platform for the Dark Web.

Overall, what emerges from these investigations is that cybercriminals are doing with ChatGPT exactly what everyone else is doing in every other field: discussing the AI, testing it, and gauging whether its output is good enough to be useful in their nefarious work.

Finally, like any computer program, the AI only does what it is asked to do. It does not necessarily do it well, as a study from Stanford University in the United States attests. Its findings show that when developers rely on an AI assistant to write code, the code tends to contain vulnerabilities that would not necessarily exist if a human had written it unaided.
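The Stanford findings are statistical rather than tied to any single snippet, but a classic example of the kind of flaw AI assistants have been observed to reproduce is SQL built by string interpolation. The hypothetical Python comparison below, against an assumed users table, shows the vulnerable pattern next to the parameterized query that avoids it.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable pattern: user input interpolated straight into SQL.
    # An input like "x' OR '1'='1" changes the meaning of the query.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchone()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer pattern: a parameterized query; the driver handles escaping.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchone()
```

Both functions return the same row for honest input; only the second remains safe when the input is hostile, which is precisely the distinction the study suggests AI-generated code often misses.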