SecurityBrief UK - Technology news for CISOs & cybersecurity decision-makers

AI & security to redefine company operations in 2025

Mon, 30th Dec 2024

Companies are bracing for significant changes in 2025 as they learn to address new challenges related to Artificial Intelligence (AI) and security.

Steve Barrett, Vice President of EMEA for Datadog, has highlighted the necessity for companies to acquire new skills that are vital for debugging AI products and services. "In 2025 we'll see DevOps teams develop new sets of skills that will help them debug AI products and services. They can obtain these skills using existing methods like clustering, which allows you to cluster - or group - events to spot trends," said Barrett.

Barrett explained that such skills would be crucial for improving the performance of AI systems, such as chatbots, by efficiently identifying and addressing inaccuracies. "For example, if you're analysing a chatbot's performance to identify inaccurate responses to questions, you can use clustering to group together the most common questions asked and then work out which questions are receiving wrong answers," he added.
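The clustering approach Barrett describes could be sketched roughly as follows. This is an illustrative, stdlib-only example: it groups chatbot questions by token overlap (Jaccard similarity) and reports which groups accumulate inaccurate answers. The sample questions, the accuracy flags, and the similarity threshold are all assumptions for demonstration; a production system would use embeddings and a proper clustering library.

```python
# Illustrative sketch: group similar chatbot questions, then check which
# groups accumulate inaccurate answers. Sample data and the 0.3 threshold
# are assumptions, not Datadog's actual method.

def tokens(text):
    return set(text.lower().strip("?").split())

def jaccard(a, b):
    return len(a & b) / len(a | b)

# (question, answer_was_inaccurate) pairs -- assumed sample logs.
logs = [
    ("How do I reset my password", False),
    ("How can I reset my account password", False),
    ("What is your refund policy", True),
    ("Can I get a refund on your policy", True),
    ("How do I change my password", False),
]

# Greedy clustering: attach each question to the first cluster whose
# representative is similar enough, otherwise start a new cluster.
clusters = []  # list of (representative_tokens, [log indices])
for i, (question, _) in enumerate(logs):
    toks = tokens(question)
    for rep, members in clusters:
        if jaccard(toks, rep) >= 0.3:
            members.append(i)
            break
    else:
        clusters.append((toks, [i]))

for rep, members in clusters:
    bad = sum(1 for i in members if logs[i][1])
    print(f"{len(members)} questions, {bad} inaccurate")
```

With this toy data, the password questions and the refund questions fall into separate groups, and the refund group shows the concentration of wrong answers, which is the trend-spotting effect Barrett refers to.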

As AI adoption accelerates, Barrett urges caution in the development of Large Language Models (LLMs), underscoring the importance of securing AI products to prevent misuse. "With AI adoption happening so rapidly, there is a risk that organisations will overlook the need to secure their Gen AI products," Barrett pointed out. Ensuring robust security measures are in place is critical to prevent LLMs from disseminating illegal or dangerous information.

He advises organisations to manage the data LLMs access, ensuring protective measures are in place to monitor the data being fed into them. "Organisations can ensure this by controlling what data the LLM has access to or having checks and balances in place that monitor the data it's being fed," Barrett noted. He suggests partitioning AI agents to keep them focused solely within organisational boundaries.
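One simple form these "checks and balances" could take is a gate in front of the model: an allowlist of approved data sources plus content checks that block sensitive material before it is fed to an LLM. The source names and patterns below are illustrative assumptions, not a description of any specific product.

```python
# Illustrative sketch of gating what data reaches an LLM: an allowlist
# of sources plus simple content checks. Source names and patterns are
# assumptions for demonstration only.
import re

ALLOWED_SOURCES = {"public_docs", "product_faq"}  # assumed allowlist
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like strings
    re.compile(r"(?i)api[_-]?key"),        # credential markers
]

def admit(document: dict) -> bool:
    """Return True only if the document may be fed to the model."""
    if document.get("source") not in ALLOWED_SOURCES:
        return False
    return not any(p.search(document.get("text", "")) for p in BLOCKED_PATTERNS)

docs = [
    {"source": "public_docs", "text": "Resetting your password takes two steps."},
    {"source": "hr_records", "text": "Employee 123-45-6789 salary data."},
    {"source": "public_docs", "text": "Set the API_KEY env var before deploying."},
]
admitted = [d for d in docs if admit(d)]
print(len(admitted))  # only the first document passes both checks
```

The same gate gives a natural place to log every admission decision, which supports the auditing and partitioning Barrett mentions.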

Barrett also warned of new threats in 2025 that may compromise LLM security, making data lineage a crucial component for tracking and verifying data integrity. "When it comes to new types of threats, companies need to be aware that LLMs are susceptible to being hacked and being fed false data and materials that will manipulate their responses," Barrett said, highlighting the role of data lineage.

Barrett suggests that these processes will enable teams to scrutinise the source and authenticity of LLM data. He proposes detailed investigations of data origins to ensure that new inputs to generative AI products are secure and accurate. "Data lineage will play a pivotal role in tracking the origins and movement of data across its lifecycle," he affirmed.
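In code, data lineage of this kind might look like a record that carries the data's origin plus a hash-stamped history of every transformation, so that out-of-band tampering is detectable. This is a minimal sketch under assumed field names, not a reference to any particular lineage tool.

```python
# Illustrative sketch of data lineage for LLM inputs: each record keeps
# the data's origin and a hashed history of transformations, so later
# tampering can be detected. Field names are assumptions.
import hashlib
from dataclasses import dataclass, field

def digest(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

@dataclass
class LineageRecord:
    origin: str                   # where the data first came from
    content: str
    steps: list = field(default_factory=list)  # (step_name, hash) history

    def __post_init__(self):
        self.steps.append(("ingested", digest(self.content)))

    def transform(self, step_name: str, new_content: str):
        """Record a legitimate transformation with its content hash."""
        self.content = new_content
        self.steps.append((step_name, digest(new_content)))

    def verify(self) -> bool:
        """Check the latest recorded hash still matches the content."""
        return self.steps[-1][1] == digest(self.content)

record = LineageRecord(origin="vendor_feed_v2", content="Raw source text")
record.transform("normalised", "raw source text")
assert record.verify()               # history and content agree

record.content = "manipulated text"  # out-of-band modification
assert not record.verify()           # lineage check exposes the change
```

Tracing each input back through such a history is one concrete way teams could scrutinise "the origins and movement of data across its lifecycle" before it reaches a generative AI product.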

Barrett's remarks highlight the anticipated challenges and necessary adaptations companies must navigate as they continue integrating AI technologies into their operations in 2025.
