
BSI provides strict guidelines for ethical use of facial recognition

Tue, 14th May 2024

In response to growing global privacy concerns, facial recognition technology (FRT) is now the subject of strict new guidance intended to ensure its respectful and ethical use in society. This comes in the wake of the recent data breach at Outabox in Australia, where a compromised facial recognition system used across bars and clubs nationwide demonstrated a clear need for stronger privacy measures in the rapidly advancing digital age.

The UK's National Standards Body, the BSI (British Standards Institution), has published new guidance for the use of facial recognition technology. The publication follows recent BSI research indicating that 40% of people globally expect to be using biometric identification at airports by 2030. The rising prevalence of the technology has simultaneously highlighted ethical concerns, such as misidentification rates linked to racial or gender differences, as well as legal challenges arising from specific incidents of its use.

Facial recognition technology, increasingly powered by artificial intelligence, is becoming more widely used around the world for security purposes, as seen during significant events such as the Coronation of King Charles III and major sporting fixtures. The technology maps an individual's facial features from an image, creating a face template that can be matched against other images held in a database. The new standard developed by BSI aims to alleviate general unease by guiding organisations through its use and instilling public trust.
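To make the template-matching idea concrete, the sketch below shows, in simplified form, how a face template (a numeric embedding produced by a separate face-analysis model, which is assumed here rather than shown) might be compared against stored templates. This is purely illustrative and is not drawn from the BSI standard or any specific vendor's system; the function names and the threshold value are hypothetical.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score between two face templates (embedding vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_template(probe: np.ndarray, database: dict, threshold: float = 0.6):
    """Return the identity whose stored template best matches the probe,
    or None if no similarity score clears the illustrative threshold."""
    best_id, best_score = None, threshold
    for identity, template in database.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id
```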

Given that the facial recognition market is expected to be worth USD $13.4 billion globally by 2028, the new standard also emphasises the importance of regularly reviewing the ethics of artificial intelligence applications in FRT. It delivers best practice guidelines and establishes guardrails for unbiased and safe FRT use. Two usage scenarios, identification and verification, are outlined: the standard requires human intervention in identification scenarios, while setting guardrails for autonomous verification scenarios to mitigate bias risks stemming from false positives.
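As a rough illustration of the distinction the standard draws (again a sketch, not an implementation of the BSI guidance), verification is a one-to-one comparison against a single claimed identity, whereas identification is a one-to-many search that, under the approach described above, would surface candidates for a human reviewer rather than acting on the top match automatically. This builds on the hypothetical cosine_similarity helper sketched earlier.

```python
def verify(probe, claimed_template, threshold: float = 0.6) -> bool:
    """1:1 verification: confirm or reject a single claimed identity."""
    return cosine_similarity(probe, claimed_template) >= threshold

def identify(probe, database, threshold: float = 0.6) -> list:
    """1:N identification: return candidate (identity, score) pairs for
    human review, rather than acting on the top match automatically."""
    candidates = [(identity, cosine_similarity(probe, template))
                  for identity, template in database.items()]
    return sorted([c for c in candidates if c[1] >= threshold],
                  key=lambda c: c[1], reverse=True)
```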

Expressing his thoughts on the newly released guidance, Scott Steedman, Director-General, Standards, BSI, stated, "AI-enabled facial recognition tools have the potential to be a driving force for good and benefit society through their ability to detect and monitor potential security threats." The set of practices is designed to guide organisations through the ethical challenges associated with FRT use, foster public trust, uphold civil rights, and eliminate potential system bias and discrimination.

Voicing agreement, Dave Wilkinson, Director of Technical Services at the British Security Industry Association, said the framework aims to instil trust in FRT by ensuring its proper and proportionate use. It sets out key principles to cover the entire process, "from assessing the need to use it, to ensuring its continued operation remains fit for purpose and justified."

This proactive step within the technology industry marks an essential move towards safer and more ethical use of artificial intelligence and facial recognition, especially considering its growing prevalence in our daily lives. It promises a future where organisations can confidently utilise facial recognition technology, mindful of its implications, with essential safeguards designed to protect individual civil rights and privacy.
