Automating the hostile environment: uncovering a secretive Home Office algorithm at the heart of immigration decision-making

October 17, 2024


Artificial intelligence decision-making systems have in recent years become a fixture of immigration enforcement and border control. This is despite the clear and proven harms they often inflict on individuals going through the immigration system. More widely, the harms of automated decision-making have become increasingly plain for all to see: from systems that encode bias and discrimination, as happened with an algorithm used to detect benefit fraud in the Netherlands, to the inaccurate software that had horrific consequences for sub-postmasters caught up in the Post Office Horizon scandal.

The first warning signs appeared in the immigration context as far back as 2020, when the Home Office agreed, under the threat of litigation, to withdraw a visa-streaming algorithm that discriminated between certain nationalities. This year, we at Privacy International have been investigating AI decision-making systems used by the Home Office. What we have found is that a number of highly opaque and secretive algorithms permeate immigration enforcement and play a role in decisions that can have life-changing consequences for the migrants subject to them. All of this happens without migrants being given any information about the existence of these algorithms or how they use their personal data.

The most concerning tool we have uncovered so far, given how far its use appears to extend across the immigration system, is called “Identify and Prioritise Immigration Cases” (known as IPIC). It automatically identifies migrants and recommends them for particular immigration decisions or enforcement action by the Home Office. It took a year of submitting freedom of information requests, and eventually complaining to the Information Commissioner’s Office (ICO), for the Home Office to disclose information about how this AI tool functions. Even now, despite having disclosed some information, the Home Office still refuses to provide us with explicit information about the actions and decisions on which the tool makes recommendations.

The basis for this refusal has consistently been that migrants would use the information to ‘game’ the system by submitting false information to secure favourable decisions. It is illogical to suggest that a system could be ‘gamed’ on the basis of high-level information about the nature of the recommendations the algorithm generates, as this does not explain how the tool processes information to arrive at them. But the assertion also does something more pernicious. It extends a wider narrative, pushed by successive governments, that migrants are abusing and gaming the immigration system, a narrative most recently encapsulated by the former Home Secretary, James Cleverly, claiming that suicidal migrants detained in the well-documented poor conditions of RAF Wethersfield were lying about their mental health.

Despite the obfuscatory approach the Home Office has taken to disclosure, it is clear from the internal documentation we have seen so far that the algorithm is used across the immigration system. Training materials provided to Home Office officials refer to the algorithm making recommendations about EU Settlement Scheme cases, the conditions to which individuals on immigration bail are subject, and deportations, referred to as ‘returns’.

These are all decisions that, if made incorrectly, can lead to individuals suffering catastrophic harm, and in these circumstances meaningful human review of the algorithm’s recommendations is more important than ever. But from what we have seen in the disclosure, the algorithm is designed in ways that push Home Office officials towards accepting its recommendations. For example, officials must provide an explanation if they reject a recommendation, but not if they accept it. Similarly, a rejected recommendation remains open to change for longer than an accepted one. In the face of punishing targets and casework backlogs, what is to stop officials from rubber-stamping recommendations because that is so much easier and less work than looking critically at a recommendation and rejecting it?


