Here's how ChatGPT and other AI models can aid cyberattacks

ChatGPT, which stands for Chat Generative Pre-trained Transformer, is a chatbot developed by OpenAI, an AI research laboratory founded in San Francisco in late 2015.

Launched in November 2022, ChatGPT has garnered a lot of attention, with screenshots of ChatGPT interactions – from writing poetry to answering programming questions – flooding social media. It is built on top of OpenAI's GPT-3.5 family of large language models (LLMs) and is fine-tuned with both supervised and reinforcement learning techniques, offering detailed, articulate responses across many domains.

But there are significant cybersecurity risks as well.

According to Check Point Research (CPR), a cybersecurity solutions provider, ChatGPT lowers the bar for code generation and can help less-skilled threat actors effortlessly launch cyber-attacks. To demonstrate this, CPR used ChatGPT and Codex to create a full infection flow, from a spear-phishing email to running a reverse shell. Codex is an AI-based system by OpenAI that translates natural language to code.
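To give a sense of what "translating natural language to code" looked like in practice, here is a minimal sketch using the legacy openai Python package (pre-1.0) and the code-davinci-002 Codex model available at the time; the prompt, model name, and parameters are illustrative assumptions, not the inputs CPR actually used.

```python
# Minimal sketch: asking Codex to turn a plain-English request into code.
# Assumes the legacy openai package (< 1.0) and an API key in OPENAI_API_KEY;
# the model name and prompt are illustrative, not CPR's actual inputs.
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

response = openai.Completion.create(
    model="code-davinci-002",  # Codex code-completion model of that era
    prompt="# Python function that checks whether a TCP port on a host is open\n",
    max_tokens=200,
    temperature=0,
)

# The completion text is the generated code.
print(response["choices"][0]["text"])
```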

According to Sergey Shykevich, Threat Intelligence Group Manager at Check Point Software, ChatGPT has the potential to significantly alter the cyber threat landscape. "Now anyone with minimal resources and zero knowledge in code, can easily exploit it to the detriment of his imagination."

Phishing

CPR asked ChatGPT to craft a plausible phishing email impersonating a hosting company and, with further iterations, refined it to ask the target to download an Excel document.

OpenAI does flag that such content may violate its content policy, but by then the output has already been generated and can be reused far and wide.

Malicious Code

The next step was to create malicious VBA code to embed in the Excel document, which took multiple iterations of back-and-forth prompting.

CPR researchers first asked ChatGPT to write a basic reverse shell using a placeholder IP address and port. They then added scanning tools, such as a port scanner and a check for whether a service is vulnerable to SQL injection, along with a sandbox-detection script.
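For context, the "port scanning" helper described above amounts to only a few lines of standard-library Python. The sketch below is an illustrative stand-in for what such generated code typically looks like, not the code CPR obtained; the host and ports are placeholders.

```python
# Illustrative sketch of a simple port check of the kind described above;
# the host and ports are placeholders, not values from the CPR research.
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for port in (22, 80, 443):
        print(port, "open" if is_port_open("192.0.2.10", port) else "closed")
```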

The AI was also able to bundle the standalone Python code and compile it into an executable that runs natively on any Windows machine. And while the VBA macro itself is basic, given good textual prompts, ChatGPT can output working malicious code.
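Bundling a standalone Python script into a native Windows executable is typically done with a packaging tool such as PyInstaller. The sketch below uses PyInstaller's documented programmatic entry point with a placeholder script name; it is an assumption about the tooling involved, not a detail confirmed by CPR.

```python
# Sketch of bundling a standalone Python script into a single Windows .exe
# with PyInstaller (pip install pyinstaller); example_script.py is a placeholder.
import PyInstaller.__main__

PyInstaller.__main__.run([
    "example_script.py",
    "--onefile",    # produce a single self-contained executable
    "--noconsole",  # no console window when the .exe runs
])
```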

Essentially, using just natural language and without writing a single line of code, ChatGPT allows anyone to create a phishing email with an attached Excel document that contains malicious VBA code that downloads a reverse shell to the target machine. No scripting required to develop the entire infection flow!

Lede photo by GuerrillaBuzz
