Police officers in Durham will soon use artificial intelligence to determine whether a suspect should be kept in custody or released on bail.
The system, which is called the Harm Assessment Risk Tool (Hart), has been trained using Durham Constabulary data collected from 2008 to 2013, and will also consider a suspect’s gender and postcode.
It is designed to help officers assess how risky it would be to release suspects.
Hart will be used in an “advisory” capacity, with officers making the final decision.
It was developed alongside academics from the University of Cambridge and has been built to err on the side of caution to lower the risk of it recommending the early release of potentially dangerous suspects.
Hart is therefore more likely to classify somebody as medium or high risk, a tendency reflected in the results of tests conducted in 2013.
It was accurate 98 per cent of the time when it classified a suspect as “low-risk”, and 88 per cent of the time when it classified a suspect as “high-risk”.
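Hart’s actual model and thresholds have not been published, but the “err on the side of caution” design described above can be sketched generically: a classifier that lowers its risk cut-offs so borderline cases are escalated to a higher band rather than released. The function and threshold values below are hypothetical, purely for illustration.

```python
# Illustrative sketch only: Hart's real model and cut-offs are not public.
# It shows one generic way a risk tool can "err on the side of caution":
# borderline scores are pushed into a higher risk band.

def classify(score: float, cautious: bool = True) -> str:
    """Map a 0-1 risk score to a band. In cautious mode the cut-offs
    are lowered (hypothetical values), so uncertain cases are escalated
    rather than treated as low risk."""
    low_cut, high_cut = (0.25, 0.55) if cautious else (0.40, 0.70)
    if score < low_cut:
        return "low"
    if score < high_cut:
        return "medium"
    return "high"

# A borderline score of 0.3 is "low" under neutral thresholds
# but "medium" under cautious ones.
print(classify(0.3, cautious=False))  # low
print(classify(0.3, cautious=True))   # medium
```

The trade-off this sketch makes explicit is the one the article describes: lowering the cut-offs reduces the chance of releasing a dangerous suspect, at the cost of labelling more people medium or high risk than the evidence alone would warrant.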
“I imagine in the next two to three months we’ll probably make it a live tool to support officers’ decision making,” Sheena Urwin, the head of criminal justice at Durham Constabulary, told the BBC.
While the system could prove useful, there are fears that it could also be seriously flawed.
An investigation conducted last year by ProPublica into a separate algorithm, used by US authorities to predict how likely a suspect is to commit future crimes, also found serious problems.
According to the report, the algorithm was twice as likely to incorrectly flag black suspects as future criminals as it was white suspects, while white suspects were incorrectly classified as low-risk more often than black suspects.
“Could this disparity be explained by defendants’ prior crimes or the type of crimes they were arrested for? No,” reads the report.
Hart will not be able to accurately risk-assess suspects with a criminal history from beyond Durham Constabulary’s jurisdiction.
Its creators believe they have mitigated such risks, and an auditing system that explains how the tool reached each decision will also be available if required.