
Kaspersky commits to EU ethical AI usage in cybersecurity


Kaspersky has committed to the responsible and ethical utilisation of artificial intelligence (AI) within cybersecurity by signing the European Commission's AI Pact.

The AI Pact is an initiative designed to prepare organisations for the implementation of the AI Act, the European Union's (EU) comprehensive legal framework on AI. By signing the pact, Kaspersky aligns itself with the EU's efforts to promote safe and ethical AI usage.

The EU AI Act aims to ensure that AI technologies adhere to safety and ethical standards and to address the risks they pose. The Act entered into force in 2024 and is expected to become fully applicable by mid-2026, with the AI Pact intended to guide companies through the regulatory transition.

Through its pledge, Kaspersky has outlined three core commitments for its use of AI technology. The first is to adopt an AI governance strategy that supports AI integration within the company and works towards future compliance with the AI Act.

The second is to map the AI systems Kaspersky uses or provides in areas the AI Act designates as high-risk. The third is to promote AI literacy and awareness among Kaspersky staff and others working with AI systems on its behalf, taking into account their technical knowledge and the context in which the systems are used.

Beyond the core commitments, Kaspersky intends to evaluate potential risks to individuals' rights arising from its use of AI systems. The company will also ensure that people interacting directly with AI systems are informed of this, and that its employees are aware of AI deployments in the workplace.

Eugene Kaspersky, Founder and Chief Executive Officer of Kaspersky, stated, "As we witness the rapid deployment of AI technologies, it's crucial to ensure that the drive for innovation is balanced with proper risk management."

"Having been an advocate for AI literacy and the sharing of knowledge about AI-related risks and threats for years, we're happy to join the ranks of organizations working to help companies responsibly and securely benefit from AI technologies. We'll be working to further advance transparent and ethical AI practices and contribute to building confidence in this technology."

Kaspersky's experience with AI technologies spans nearly two decades, focusing on enhancing security and privacy for its users and identifying cyberthreats. With AI systems increasingly prevalent, the firm is committed to disseminating knowledge gained from its AI Technology Research Centre to help organisations address risks associated with AI deployment.

To assist practitioners in securely implementing AI systems, Kaspersky experts have developed "Guidelines for Secure Development and Deployment of AI Systems," which were presented at the 2024 United Nations Internet Governance Forum. The company has also emphasised the ethical application of AI technologies and published principles for the ethical use of AI in cybersecurity, encouraging other cybersecurity providers to adopt similar guidelines.
