Security and privacy challenges have evolved with the advent of Generative AI, prompting organisations to adopt stringent controls to prevent undesired data sharing.
Karina Mansfield and Damian Grace of Phriendly Phishing highlight the amplified threat of phishing and social engineering fuelled by Generative AI.
Social engineering attacks powered by Generative AI have grown markedly more sophisticated. With tools like ChatGPT, attackers can personalise their lures, making them more effective. Organisations must focus on broad-based awareness and education, combined with the deployment of AI and zero-trust frameworks, to counter these threats.
ChatGPT and other AI tools introduced over a year ago have come a long way since their inception. Their steady progress has built an enormous user base and dissolved language barriers. Mansfield and Grace note that "the one attacker to a victim in traditional phishing or vishing scams can now be scaled up, and deepfakes are getting too good to ignore as an impersonation issue". Cyber criminals, unfettered by red-tape constraints, forge ahead with ever more sophisticated phishing attempts, making employee readiness crucial to staving off these emerging threats.
Staff training can often clash with workloads, but bite-sized training incorporated into daily routines can bolster workforce readiness. Continuous learning, coupled with phishing simulations, helps prepare employees to counter advanced threats.
New technologies for creating deepfakes and audio clones pose significant identity fraud and impersonation threats. Fraudsters misusing this technology could enable widespread corporate fraud: spear phishing attacks that convincingly mimic prominent individuals or leadership teams can be used to extract financial data or sensitive information, or to commit outright theft. However, progress is being made in advanced detection methods.
The threat GenAI poses through social engineering is not restricted to email phishing. AI-driven chatbots used on interactive platforms, such as customer service and social media channels, present new vulnerabilities. Mansfield and Grace warn that these vulnerable chat systems "can be a vehicle for social engineering and phishing, the attacker using GenAI to mimic the language and style of the one being impersonated".
An optimism bias may lead individuals to be lax about their security, often believing themselves less susceptible to attack than their colleagues. Deepfakes or social engineering using unfamiliar personas can pose sizeable threats. The danger increases when senior executives, with whom staff have limited contact, are impersonated, creating scenarios where damaging business decisions could be made on the basis of false impressions.
They emphasise that AI cannot be relied upon entirely for detection, as attack techniques will only continue to improve. Instilling a positive culture of security and curiosity, alongside continuous cybersecurity education, remains the best defence. Phriendly Phishing's security awareness training and phishing simulations align with these principles, offering a flexible learning environment and an up-to-date, customisable curriculum.