In a scenario reminiscent of “RoboCop”, Chinese researchers have developed an AI that can allegedly identify crimes and press charges against suspects.
According to the South China Morning Post, the AI was built and tested by the Shanghai Pudong People’s Procuratorate, the country’s largest district public prosecution agency. Based on a description of a suspected criminal case, it can file a charge with greater than 97 percent accuracy.
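The report does not describe how the Pudong system works internally. Purely as a hypothetical sketch of the general technique — treating charge prediction as text classification — here is a toy keyword-overlap classifier; all labels, keywords, and example sentences are invented for illustration:

```python
from collections import Counter

# Toy illustration only: the Pudong system's internals are not public.
# Each charge label is paired with hand-picked keywords; the classifier
# returns the charge whose keywords best overlap the case description.
TRAINING = {
    "theft": "stole wallet shop took property without consent",
    "fraud": "fake invoice deceived victim transferred money scam",
    "dangerous driving": "drunk driver speeding crashed vehicle injured",
}

def suggest_charge(description: str) -> str:
    """Return the charge whose keyword set best matches the description."""
    words = Counter(description.lower().split())

    def overlap(label: str) -> int:
        # Count how many description words match this label's keywords.
        return sum(words[w] for w in TRAINING[label].split())

    return max(TRAINING, key=overlap)

print(suggest_charge("suspect deceived the victim with a fake invoice"))
# prints "fraud"
```

A production system would of course use a trained statistical model over thousands of labelled case files rather than hand-written keywords, but the overall shape — text in, charge label out — is the same.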
A robot judge may sound like something out of a science fiction movie. The exponential rise of computing technology, the global craze for big data analytics and machine learning, and the massive amounts of data routinely acquired by the internet of things encompassing nearly everything around us are reshaping our environment at an unprecedented rate. I was reading a 2013 study by Carl Benedikt Frey and Michael Osborne of the University of Oxford on the future of work, specifically on how susceptible jobs are to computerization.
According to the study, which examined the influence of technology on 702 occupations, attorneys and judges sit roughly in the middle of the list of careers likely to be replaced by technology. Experts believe that, while breakthroughs in ‘Judge AI’ or ‘Judicial AI’ are still in their early stages, there are signs that it will become increasingly significant. For example, the UK-based AI-driven legal services chatbot ‘DoNotPay’ was launched in 2015 as a ‘robot lawyer’, assisted by IBM’s Watson computer.
According to a 2016 report in ‘The Guardian,’ the chatbot contested over 250,000 parking charges in London and New York and won 160,000 of them, free of charge. Similarly, a robot named Xiaofa stands in Beijing No 1 Intermediate People’s Court, offering legal advice and helping the general public grasp legal terms. While ‘AI assistants’ can support courts by predicting and drafting judgments, ‘robot judges’ could replace human judges and decide cases autonomously in fully automated court proceedings. So, can robots be effective judges?
Renowned AI author and speaker Terence Mauri certainly believes so: he claims robots will be able to detect physical and psychological signs of deception with 99.9 percent accuracy, and predicts that within 50 years robots will be widespread in civil and criminal courts in England and Wales. Since 2017, AI-enabled robot judges have been in use in China to hear specialized matters such as trade disputes, e-commerce liability claims, and copyright infringements. A robot judge has already handled millions of such cases.
However, rather than robots literally sitting in judges’ chairs, the AI system analyzes the uploaded material and reaches a decision based on law and fact. In America, a jury chatbot project has been in the works in Los Angeles, and other US courts are implementing online dispute resolution (ODR) initiatives to address a variety of problems. And no discussion of technology would be complete without Estonia, the world’s most sophisticated digital society.
The Estonian Ministry of Justice has tasked its chief data officer, Ott Velsberg, with creating an AI-enabled ‘robot judge’ to decide small-claims disputes of less than €7,000, which could help manage paperwork, support decision-making, and make court services considerably more efficient. Here too, the two parties upload documents and other pertinent evidence, and the AI issues a judgment that may be appealed to a human judge.
There are certainly benefits and drawbacks to using AI in the courtroom. A judge’s job is a difficult one, and the line between technology and people must be drawn carefully. In a 2018 study titled ‘Do Judges Need to Be Human? The Implications of Technology for Responsive Judging’, Tania Sourdin of the University of Newcastle, Australia, and Richard Cornes of the University of Essex noted: “The job of the human judge, however, is not solely that of a data processor. To limit judgment to such a notion would be to deny not just the judge’s humanity, but also the humanity of all individuals who appear before them.”
Again, a 2019 study published in the journal ‘Legal Studies’ casts doubt on the likelihood of advanced computer technology replacing judges. The research raises concerns regarding the ability of algorithmic techniques to completely penetrate this socio-legal environment and accurately imitate the activity of judging.
In December 2018, Justice Surya Kant, then Chief Justice of the Himachal Pradesh High Court and now a Supreme Court judge, expressed his concern: “If e-technology is allowed to overpower the judicial field without any ‘Lakshman Rekha,’ are we marching towards a stage where robots will be used in place of judicial officers?”
In a 2018 interview, US Supreme Court Chief Justice John Roberts was asked if he could envision a day “when sophisticated robots, powered by artificial intelligence, can aid with courtroom fact discovery or, more controversially, judicial decision making.” “It’s a day that’s here, and it’s placing a substantial pressure on how the judiciary goes about its business,” Justice Roberts answered. Was the case of Eric Loomis at the back of Justice Roberts’ mind?
In 2013, Eric Loomis was caught driving a car that had been used in a shooting. He was sentenced to six years in prison based at least in part on the recommendation of a private company’s secret proprietary software called COMPAS, whose algorithm draws on answers to a 137-item questionnaire; in 2016, the Wisconsin Supreme Court upheld the sentence.
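COMPAS’s actual items and weights are proprietary and have never been disclosed; that secrecy is precisely what Loomis challenged. Purely as a hypothetical sketch of how questionnaire-driven risk tools are typically shaped — a weighted sum over answers, bucketed into a risk band — with every feature name, weight, and threshold invented:

```python
# Hypothetical illustration: COMPAS's real 137 items and weights are secret.
# Invented feature weights, NOT the actual COMPAS model.
WEIGHTS = {
    "prior_arrests": 2.0,
    "age_under_25": 3.0,
    "unstable_housing": 1.5,
}

def risk_band(answers: dict) -> str:
    """Map questionnaire answers to a low/medium/high risk band."""
    score = sum(WEIGHTS[k] * float(v) for k, v in answers.items())
    if score < 3:
        return "low"
    if score < 6:
        return "medium"
    return "high"

print(risk_band({"prior_arrests": 1, "age_under_25": 1, "unstable_housing": 0}))
# prints "medium"
```

Even this toy version shows the due-process problem: without seeing the weights and thresholds, a defendant cannot contest why the tool placed them in a given band.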
Loomis filed a petition for certiorari, claiming that his constitutional right to due process had been violated, since neither he nor his attorneys were able to evaluate or contest the accuracy and scientific validity of the algorithm behind the recommendation.
The petition further claimed that the method violates due process rights by taking gender and race into account. The US Supreme Court, however, denied certiorari in June 2017, declining to hear the case. The case clearly raised serious problems, and it is still brought up in every serious discussion of an AI-powered courtroom. When an AI proposes what a judge should do, one must establish what cognitive biases are at work. Recent research by Joanna Bryson, a computer science professor at the University of Bath, indicates that such biases are entirely conceivable.
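Bryson’s research concerns bias absorbed by machine-learned models in general. One simple diagnostic auditors apply to risk tools is to compare flag rates across demographic groups — the “demographic parity” gap. A minimal sketch, with data invented for illustration:

```python
# Hedged sketch: one way auditors probe a scoring tool for bias is to
# compare high-risk-flag rates across groups. All records are invented.
def parity_gap(records, group_key, flagged_key):
    """Difference between the highest and lowest flag rate across groups."""
    rates = {}
    for g in {r[group_key] for r in records}:
        members = [r for r in records if r[group_key] == g]
        rates[g] = sum(r[flagged_key] for r in members) / len(members)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

cases = [
    {"group": "A", "flagged": 1}, {"group": "A", "flagged": 1},
    {"group": "A", "flagged": 0}, {"group": "B", "flagged": 1},
    {"group": "B", "flagged": 0}, {"group": "B", "flagged": 0},
]
print(parity_gap(cases, "group", "flagged"))  # group A flagged twice as often
```

A large gap does not by itself prove discrimination, but it is the kind of evidence a defendant would need access to the tool to produce — access that was denied in Loomis.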
Using a prediction algorithm in cases involving human sentences and verdicts is not the same as recommending which movie to watch next. The promise, and the risks, of AI-driven judgments are therefore immense.
The ‘PreCrime’ police department in Steven Spielberg’s 2002 film ‘Minority Report,’ set in Washington DC in 2054, is capable of anticipating future crimes through data mining and predictive analysis. But when one of its own officers is accused of a future crime, he sets out to prove his innocence. Is Spielberg’s picture a vision of the future of AI-powered courtrooms? And, finally, will that future be dismal or reforming?