
UK watchdog warns against AI for emotional analysis, dubs ‘immature’ biometrics a bias risk

The UK’s privacy watchdog has warned against using so-called “emotion analysis” technologies for anything more serious than kids’ party games. Applying such “immature” biometric tech carries a discrimination risk, it said, because these systems make pseudoscientific claims about being able to recognize people’s emotions by using AI to interpret biometric data.

Such AI systems ‘function’, if we can use the word, by claiming to read the tea leaves of one or more biometric signals (heart rate, eye movements, facial expression, skin moisture, gait, vocal tone and so on) and performing emotion detection or sentiment analysis to predict how the person is feeling, presumably after being trained on a mass of visual data of frowning faces, smiling faces and the like. You can immediately see the problem: no two people, and often no two emotional states, are the same, so trying to map individual facial expressions to absolute emotional states is pseudoscience by another name.
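To see why critics call this pseudoscience, consider a deliberately minimal sketch of the approach in Python, using scikit-learn. The features, training data and labels below are all invented for illustration, not taken from any real product; the pipeline is just supervised classification: label some expressions, fit a model, and declare its output an ‘emotion’.

```python
# A deliberately naive sketch of how an expression-to-emotion classifier
# is typically built. All features, data and labels below are invented for
# illustration; real systems use bigger models but share the same premise.
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical features extracted from a face image:
# [mouth_curvature, brow_height, eye_openness]
training_faces = [
    [0.9, 0.5, 0.6],   # labeled "happy": a broad smile
    [0.8, 0.4, 0.5],   # labeled "happy": a smile
    [-0.7, 0.2, 0.4],  # labeled "sad": a frown
    [-0.8, 0.1, 0.3],  # labeled "sad": a deep frown
]
training_labels = ["happy", "happy", "sad", "sad"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(training_faces, training_labels)

# A polite-but-miserable smile and a genuinely happy one can yield
# near-identical features, so the model returns the same label for both:
print(model.predict([[0.85, 0.45, 0.55]]))  # -> ['happy'], regardless of context
```

Whatever accuracy such a model achieves against its own labels, the flaw sits in the premise: the label “happy” describes the expression in the training data, not the inner state of the person wearing it.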

The watchdog’s deputy commissioner, Stephen Bonner, appears to agree that this high-tech nonsense must be stopped, saying today there’s no evidence that such technologies actually work as claimed (or that they ever will).

“Developments in the biometrics and emotion AI market are immature. They may not work yet, or indeed ever,” he warned in a statement. “While there are opportunities present, the risks are currently greater. At the ICO, we are concerned that incorrect analysis of data could result in assumptions and judgements about a person that are inaccurate and lead to discrimination.

“The only sustainable biometric deployments will be those that are fully functional, accountable and backed by science. As it stands, we are yet to see any emotion AI technology develop in a way that satisfies data protection requirements, and have more general questions about proportionality, fairness and transparency in this area.”

In a blog post accompanying Bonner’s shot across the bows of dodgy biometrics, the Information Commissioner’s Office (ICO) said organizations should assess public risks before deploying such tech, with a further warning that those that fail to act responsibly could face an investigation (and so could also be risking a penalty).

“The ICO will continue to scrutinise the market, identifying stakeholders who are seeking to create or deploy these technologies, and explaining the importance of enhanced data privacy and compliance, whilst encouraging trust and confidence in how these systems work,” added Bonner.

The watchdog has fuller biometrics guidance coming in the spring — which it said today will highlight the need for organizations to pay proper mind to data security — so Bonner’s warning offers a taster of more comprehensive steerage coming down the pipe in the next half year or so.

“Organisations that do not act responsibly, posing risks to vulnerable people, or fail to meet ICO expectations will be investigated,” the watchdog added.

Its blog post gives some examples of potentially concerning uses of biometrics, including AI tech being used to monitor the physical health of workers via wearable screening tools, or the use of visual and behavioural methods such as body position, speech, and eye and head movements to register students for exams.

“Emotion analysis relies on collecting, storing and processing a range of personal data, including subconscious behavioural or emotional responses, and in some cases, special category data. This kind of data use is far more risky than traditional biometric technologies that are used to verify or identify a person,” it continued. “The inability of algorithms which are not sufficiently developed to detect emotional cues, means there’s a risk of systemic bias, inaccuracy and even discrimination.”

It’s not the first time the ICO has raised concerns over rising use of biometric tech. Last year the then information commissioner, Elizabeth Denham, published an opinion expressing concern about the potentially “significant” impacts of inappropriate, reckless or excessive use of live facial recognition (LFR) technology, warning it could lead to ‘big brother’-style surveillance of the public.

However, that warning targeted a more specific technology (LFR). And the ICO’s Bonner told the Guardian this is the first time the regulator has issued a blanket warning on the ineffectiveness of a whole new technology, arguing it is justified by the harm that could be caused if companies made meaningful decisions based on meaningless data, per the newspaper’s report.

Where’s the biometrics regulation?

The ICO may be feeling moved to make more substantial interventions in this area because UK lawmakers aren’t being proactive when it comes to biometrics regulation.

An independent review of UK legislation in this area, published this summer, concluded the country urgently needs new laws to govern the use of biometric technologies — and called for the government to come forward with primary legislation.

However the government does not appear to have paid much mind to such urging or these various regulatory warnings. The data protection reform it presented earlier this year eschewed action to boost algorithmic transparency across the public sector, for example, while on biometrics specifically it offered only soft-touch measures aimed at clarifying the rules on police use of biometric data (talking about developing best practice standards and codes of conduct). That is a far cry from the comprehensive framework called for by the independent law review commissioned by the Ada Lovelace Institute.

In any case, the data reform bill remains on pause after a summer of domestic political turmoil that has led to two changes of prime minister in quick succession. A legislative rethink was also announced earlier this month by the (still in post) secretary of state for digital issues, Michelle Donelan — who used a recent Conservative Party conference speech to take aim at the EU’s General Data Protection Regulation (GDPR), aka the framework that was transposed into UK law back in 2018. She said the government would be “replacing” the GDPR with a bespoke British data protection system — but gave precious little detail on what exactly will be put in place of that foundational framework.

The GDPR regulates the processing of biometric data when it’s used to identify individuals, and also includes a right to human review of certain substantial algorithmic decisions. So if the government is intent on ripping up the current rulebook, it raises the question of how, or even whether, biometric technologies will be regulated in the UK in the future.

And that makes the ICO’s public pronouncements on the risks of pseudoscientific biometric AI systems all the more important. It’s also noteworthy that the regulator name-checks the Ada Lovelace Institute (which commissioned the aforementioned legal review) and the British Youth Council, both of which it says will be involved in a process of public dialogues intended to help shape its forthcoming ‘people-centric’ biometrics guidance.

“Supporting businesses and organisations at the development stage of biometrics products and services embeds a ‘privacy by design’ approach, thus reducing the risk factors and ensuring organisations are operating safely and lawfully,” the ICO added, in what could be interpreted as rather pointed remarks on government policy priorities.

The regulator’s concern about emotion analysis tech is not academic, either.

For example, a Manchester, UK-based company called Silent Talker was one of the entities involved in a consortium developing a highly controversial ‘AI lie detector’ technology — called iBorderCtrl — that was being pitched as a way to speed up immigration checks all the way back in 2017. Ironically enough, the iBorderCtrl project garnered EU R&D funding, even as critics accused the research project of automating discrimination.

It’s not clear what the status of the underlying ‘AI lie detector’ technology is now. The Manchester company involved in the ‘proof of concept’ project — which was also linked to research at Manchester Metropolitan University — was dissolved this summer, per Companies House records. But the iBorderCtrl project was also criticized on transparency grounds, and has faced a number of freedom of information actions seeking to lift the lid on the project and the consortium behind it — with, apparently, limited success.

In another example, UK health startup Babylon Health demonstrated an “emotion-scanning” AI embedded into its telehealth platform during a 2018 presentation, saying the tech scanned facial expressions in real time to generate an assessment of how the person was feeling, which could be presented to the clinician to potentially act on.

Its CEO, Ali Parsa, said at the time that the emotion-scanning tech had been built and implied it would be coming to market. However, the company later rowed back on the claim, saying the AI had only been used in pre-market testing and that development had been deprioritized in favor of other AI-powered features.

The ICO will surely be happy that Babylon had a rethink about claiming its software could use AI to perform remote emotion-scanning.

The ICO’s blog post goes on to cite other current examples where biometric tech, more broadly, is being used: in airports to streamline passenger journeys; by financial companies using live facial recognition for remote ID checks; and by companies using voice recognition for convenient account access, instead of having to remember passwords.

The regulator doesn’t make specific remarks on those cited use-cases, but it looks likely to keep a close eye on all applications of biometrics, given the high potential risks to people’s privacy and rights, with its closest attention reserved for uses of the tech that slip their chains and stray into the realms of science fiction.

The ICO’s blog post notes that its look into “biometrics futures” is a key part of its “horizon-scanning function”. Which is technocrat speak for ‘scrutiny of this type of AI tech being prioritized because it’s fast coming down the pipe at us all’.

“This work identifies the critical technologies and innovation that will impact privacy in the near future — its aim is to ensure that the ICO is prepared to confront the privacy challenges transformative technology can bring and ensure responsible innovation is encouraged,” it added.
