It is now possible to use a publicly available AI chatbot to build a full chain of infection, starting with a spear-phishing email written in completely convincing, human-like language and ending in a complete takeover of a company's computer systems.
Check Point's researchers recently created just such a plausible phishing email as a test. All they used was ChatGPT, a chatbot that applies deep learning techniques to generate text and conversations convincing enough to pass as the work of a real person.
The reality is that this impressive technology, developed by OpenAI and currently freely available online, carries many potential cybersecurity threats.
Here are just a few of them:
- Social Engineering: ChatGPT’s powerful language model can be used to generate realistic and compelling phishing messages, making it easier for attackers to trick victims into revealing sensitive information or downloading malware.
- Scams: ChatGPT’s text generation allows attackers to create fake advertisements, listings, and many other forms of deceptive material.
- Impersonation: ChatGPT can be used to create a convincing digital copy of someone’s writing style, allowing attackers to impersonate their target in a text-based environment, such as an email or text message.
- Attack Automation: ChatGPT can also be used to automate the creation of malicious messages and phishing emails, allowing attackers to launch large-scale attacks more efficiently.
- Spamming: The language model can be fine-tuned to produce large amounts of low-quality content that can be used in a variety of contexts, including as spam comments on social media or in spam email campaigns.
All five of the above are genuine threats to businesses and internet users alike, and they will only become more prevalent as OpenAI continues to train its model.
If the list convinced you, the technology has served its purpose, although in this case not with malicious intent.
All of the text from points one to five was actually written by ChatGPT with minimal changes for clarity. The tool is so powerful that it can convincingly identify and articulate its own inherent cybersecurity threats.
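The exercise is easy to repeat. The list above was presumably produced through the public chat interface, but the same question can be put to the model programmatically. The sketch below is only an illustration: it assumes the OpenAI Python SDK is installed, an `OPENAI_API_KEY` environment variable is set, and the model name is just an example.

```python
# Minimal sketch: ask the model to articulate its own potential for misuse,
# much as the list above was produced.
# Assumptions: OpenAI Python SDK installed, OPENAI_API_KEY set in the
# environment, and an illustrative model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any available chat model will do
    messages=[{
        "role": "user",
        "content": (
            "List five ways a large language model like you could be "
            "misused in cybercrime, one sentence each."
        ),
    }],
)

# The reply arrives in seconds and reads like it was written by an analyst.
print(response.choices[0].message.content)
```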
However, there are remedial actions that individuals and businesses can take, including new-school security awareness training. Cybercrime moves at the speed of light: a few years ago cybercriminals specialized in identity theft, but now they take over your company’s network, hack into your bank accounts and steal tens or hundreds of thousands of Rand.
A smart platform like ChatGPT may have been created with the best of intentions, but it only increases the burden on internet users to stay alert, trust their instincts and always be aware of the risks that come with clicking a link or opening an attachment.