ChatGPT: A blessing or curse for Active Directory security?
Fri, 21st Apr 2023

Active Directory (AD) is a popular service used by 90% of enterprise organisations to manage their IT resources. AD is used for authentication, authorisation, and accounting of users, computers, and other resources on a network. For this reason, AD is also a prime target for attackers who aim to gain access to sensitive information or resources.

Until recently, compromising a target organisation’s AD infrastructure required familiarity with multiple attack tools and techniques and an advanced understanding of several authentication and authorisation protocols. However, with the introduction of ChatGPT—a publicly accessible Generative AI—even the uninitiated threat actor can mount serious attacks.

That said, ChatGPT can also be used to level the playing field for defenders, who can save critical time protecting their environments and minimising the impact of potential compromise. So, is this technology harmful or helpful to AD security?

What is Generative AI?

Generative AI can create new and original content, such as images, text, or audio. Unlike other types of AI, which are designed to recognise patterns and make predictions based on existing data, Generative AI is trained to generate new data based on what it has learned from training data. This is typically accomplished using deep neural networks, which are designed to mimic the way the human brain works.

The development of Generative AI can be traced back to the early days of artificial intelligence research in the 1950s and 1960s. At that time, researchers were already exploring the idea of using computers to generate new content, such as music or art.

However, it was not until the 2010s, with the development of deep learning techniques and the availability of large datasets, that Generative AI really began to take off. In 2014, Ian Goodfellow and his colleagues published the paper that introduced generative adversarial networks (GANs), in which two competing neural networks learn to generate realistic images. The work was widely seen as a breakthrough in the field.

Since then, a significant amount of research has focused on Generative AI. Many new applications and techniques have been developed. Today, Generative AI is a rapidly evolving field, with new breakthroughs and innovations reported on a regular basis.

What is ChatGPT?

ChatGPT is a large-scale conversational language model developed by OpenAI. The model is based on the Generative Pre-trained Transformer (GPT) architecture and was trained on a massive amount of text data using unsupervised learning techniques. ChatGPT can generate human-like responses to text-based prompts, making it a useful tool for a range of applications, including chatbots, language translation, and question-answering systems.
ChatGPT was first made publicly accessible in November 2022, when OpenAI released a blog post announcing the launch of the model, along with a demonstration of its capabilities.

Since then, OpenAI has continued to refine and improve ChatGPT, releasing new versions with larger models and more advanced capabilities. As of March 2023, the latest model available through ChatGPT is GPT-4, whose parameter count OpenAI has not disclosed; it is considered one of the most advanced (and popular) language models in the world.
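
To make the interaction concrete, here is a minimal sketch of a question-answering call, assuming the openai Python package as it existed in early 2023 and an API key in the OPENAI_API_KEY environment variable; the model name and prompts are illustrative only.

    # Minimal sketch of a question-answering call to ChatGPT through OpenAI's
    # Python client (early-2023 ChatCompletion interface). The model name and
    # prompts are placeholders, not recommendations.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a concise technical assistant."},
            {"role": "user", "content": "Explain Kerberos authentication in two sentences."},
        ],
    )

    print(response.choices[0].message.content)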

How can threat actors harness the power of ChatGPT?

ChatGPT is intended to revolutionise the way we interact with machines and bring us closer to a world where humans and computers can communicate seamlessly. However, the public accessibility of its preview version enables threat actors to use the technology for their own goals. How are they doing it?

Phishing websites and apps

Phishing websites have been around since the early days of the internet. The first phishing attacks were conducted purely through email. As people became savvier about recognising and avoiding suspicious emails, attackers turned to creating fake websites that mimic legitimate sites to steal login credentials or personal information. These attempts often exploit emerging trends in various fields to lure unsuspecting victims; with the introduction of smartphones, for example, phishing campaigns began to include malicious mobile apps.

Since the release of ChatGPT—and the subsequent hype surrounding it—multiple phishing campaigns have masqueraded as ChatGPT websites and mobile applications. These campaigns encourage visitors to download and install malware or even provide credit card details. The websites and apps are often circulated through profiles and advertisements on social media to maximise the number of victims.

ChatGPT-assisted phishing campaigns

Both attackers and defenders are aware that phishing emails remain one of the most effective attack vectors. A huge part of any phishing campaign is effective, believable pretexting: creating a pretext that tricks the victim into revealing sensitive information. This pretext is typically designed to create a sense of urgency or importance.

For example, an attacker might pose as a trusted authority figure, such as a bank representative or IT support staff. Under this assumed identity, the attacker informs the intended victim of a supposed security breach or threat of account closure. The attacker then requests login credentials or other personal information from the victim to resolve the threat.

Pretexting is a common tactic in phishing campaigns and can be difficult to detect. The attacker might have performed extensive research to create a convincing story. But giveaways do exist; they often include spelling and grammar errors, as well as unusual formatting and syntax.

Multiple threat actors are now outsourcing pretexting to ChatGPT. Although ChatGPT won’t cooperate with a query asking it to create a believable pretext for a phishing campaign, finding a way around that limitation is easy. Pretexting seems to be tailored perfectly to ChatGPT’s Generative AI capabilities, thereby increasing the success rate of phishing campaigns.

ChatGPT-assisted attack chains

Threat actors can also use ChatGPT to help mount serious attacks on a target organisation, even with very little technical knowledge.

For example, the attacker can simply ask ChatGPT which attacks can be mounted against an Active Directory environment; among its suggestions is the notorious Golden Ticket attack, in which an attacker who has obtained the krbtgt account’s password hash can forge Kerberos ticket-granting tickets that grant persistent, domain-wide access. The attacker then asks how to execute the Golden Ticket attack mentioned by ChatGPT. ChatGPT has some ethical reservations when it comes to answering, but these seem to disappear once our attacker dons the alter ego of Ethical Hacker!

If the self-proclaimed “ethical attacker” follows these instructions, they can gain full, AI-assisted access to the target organisation’s domain.

How can cybersecurity defenders use ChatGPT?

Clearly, ChatGPT is a powerful tool that can be abused by threat actors. But it can also be a powerful tool used for good.

Blue Teams, charged with defending organisations, can harness the power of Generative AI for multiple benevolent actions, including:

  • Fixing security misconfigurations 
  • Monitoring critical security components
  • Minimising the potential damage in case of compromise (as shown in the sketch below)
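
On the last point, consider the Golden Ticket attack described earlier: because forged tickets are signed with the krbtgt account’s key, regularly rotating that account’s password limits how long a stolen key stays useful. The following minimal sketch, assuming the Python ldap3 package and hypothetical server names, credentials, and threshold, checks whether the krbtgt password is overdue for a staged reset:

    # Minimal sketch: flag a stale krbtgt password. A Golden Ticket is signed
    # with the krbtgt account's key, so a staged double reset of that password
    # invalidates forged tickets. Requires the ldap3 package; the server name,
    # bind account, and 180-day threshold are illustrative assumptions.
    from datetime import datetime, timedelta, timezone

    from ldap3 import Server, Connection, NTLM

    server = Server("dc01.corp.example.com")  # hypothetical domain controller
    conn = Connection(server, user="CORP\\audituser", password="...",
                      authentication=NTLM, auto_bind=True)

    conn.search(search_base="DC=corp,DC=example,DC=com",
                search_filter="(sAMAccountName=krbtgt)",
                attributes=["pwdLastSet"])

    # pwdLastSet is a Windows FILETIME: 100-ns intervals since 1601-01-01 UTC.
    filetime = int(conn.entries[0].pwdLastSet.raw_values[0])
    pwd_last_set = (datetime(1601, 1, 1, tzinfo=timezone.utc)
                    + timedelta(microseconds=filetime // 10))

    age = datetime.now(timezone.utc) - pwd_last_set
    if age > timedelta(days=180):
        print(f"krbtgt password is {age.days} days old; plan a staged double reset.")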

A defender can use ChatGPT to identify and mitigate a significant security risk in the form of unconstrained Kerberos delegation. (If you’re asking yourself what unconstrained Kerberos delegation is, ChatGPT is here to help with that too.)
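
For illustration, the script a defender might ask ChatGPT to generate could look something like the following minimal sketch, again assuming the Python ldap3 package and hypothetical connection details; it enumerates computer accounts whose userAccountControl has the TRUSTED_FOR_DELEGATION bit set:

    # Minimal sketch: enumerate computer accounts with unconstrained Kerberos
    # delegation. TRUSTED_FOR_DELEGATION is bit 0x80000 (524288) of
    # userAccountControl; 1.2.840.113556.1.4.803 is LDAP's bitwise-AND
    # matching rule. Server and credentials are hypothetical.
    from ldap3 import Server, Connection, NTLM, SUBTREE

    server = Server("dc01.corp.example.com")  # hypothetical domain controller
    conn = Connection(server, user="CORP\\audituser", password="...",
                      authentication=NTLM, auto_bind=True)

    conn.search(
        search_base="DC=corp,DC=example,DC=com",
        search_filter="(&(objectCategory=computer)"
                      "(userAccountControl:1.2.840.113556.1.4.803:=524288))",
        search_scope=SUBTREE,
        attributes=["sAMAccountName", "dNSHostName"])

    for entry in conn.entries:
        # Domain controllers are unconstrained by default; scrutinise the rest.
        print(entry.sAMAccountName, entry.dNSHostName)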

Caution: Always validate AI-generated scripts before implementing them. Some might contain simple bugs that can break your system—or worse.

Generative AI: a double-edged sword

The public accessibility of ChatGPT and Generative AI has already changed the way we interact with machines. With the development of newer AI models and the release of newer versions, this technology has the potential to become a full-blown technological revolution, possibly more impactful than the invention of the internet and mobile phones.

Much like any technological advancement, this new technology has quickly been abused by threat actors looking to maximise the level of harm they can inflict. Luckily, the forces of good can also harness this technology to protect organisations more quickly and efficiently than ever before.