
Secret Algorithms Threaten the Rule of Law

Source : MIT Technology Review

Date : June 1, 2017

Link : https://www.technologyreview.com/s/608011/secret-algorithms-threaten-the-rule-of-law/?set=608013

Sending people to jail because of the inexplicable, unchallengeable judgments of a secret computer program undermines our legal system.


======================================================================

Predicting and shaping what you will do next—whether as a shopper, worker, or voter—is big business for data-driven firms. But should their methods also inform judges and prosecutors? An ambitious program of predicting recidivism among convicts is bringing algorithmic risk assessments to American courthouses.


These assessments are an extension of a trend toward actuarial prediction instruments for recidivism risk. They may seem scientific, an injection of computational rationality into a criminal justice system riddled with discrimination and inefficiency. However, they are troubling for several reasons: many are secretly computed; they deny due process and intelligible explanations to defendants; and they promote a crabbed and inhumane vision of the role of punishment in society.


Let’s start with secrecy—a factor that has apparently alarmed even the Supreme Court in the case of the firm Northpointe’s COMPAS risk score. In Loomis v. Wisconsin, a judge rejected a plea deal and sentenced a defendant (Loomis) to a harsher punishment in part because a COMPAS risk score deemed him at higher-than-average risk of recidivating. Loomis appealed the sentence, arguing that neither he nor the judge could examine the formula for the risk assessment—it was a trade secret.

The state of Wisconsin countered that Northpointe required it to keep the algorithms confidential, to protect the firm’s intellectual property. And the Wisconsin Supreme Court upheld Loomis’s sentence, reasoning that the risk assessment was only one part of the rationale for the sentence. It wanted to continue to give judges the opportunity to take into account the COMPAS score as one part of their sentencing rationale, even if they had no idea how it was calculated.


Lawyers, academics, and activists are now questioning that reasoning. Judicial processes are, by and large, open to the public. Judges must give reasons for their most important actions, such as sentencing. When an algorithmic scoring process is kept secret, it is impossible to challenge key aspects of it. How is the algorithm weighting different data points, and why? Such inquiries are crucial to two core legal principles: due process, and the ability to meaningfully appeal an adverse decision.
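To make concrete what "weighting different data points" means, here is a minimal sketch of a transparent risk score: a simple logistic model whose feature names and coefficients are invented purely for illustration (COMPAS's actual inputs and formula are not public). With the weights visible, a defendant can trace exactly which factors move the score and by how much; with them secret, that inquiry is impossible.

```python
import math

# Hypothetical, illustrative weights -- NOT the COMPAS formula, which is secret.
WEIGHTS = {
    "prior_arrests": 0.35,         # each prior arrest raises the log-odds by 0.35
    "age_under_25": 0.60,          # indicator: defendant is under 25
    "employment_gap_years": 0.10,  # years without employment
}
INTERCEPT = -2.0

def risk_score(features: dict) -> float:
    """Return a recidivism 'risk' between 0 and 1 for a transparent linear model."""
    log_odds = INTERCEPT + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-log_odds))

# A defendant (or judge) can see how each input changes the output.
defendant = {"prior_arrests": 3, "age_under_25": 1, "employment_gap_years": 2}
print(f"score = {risk_score(defendant):.2f}")
for name in defendant:
    zeroed = dict(defendant, **{name: 0})
    print(f"  without {name}: {risk_score(zeroed):.2f}")
```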


Due process is an open-ended concept, but critical to legitimate legal systems. This basic constitutional principle gives defendants a right to understand what they are charged with, and what the evidence against them is. A secret risk assessment algorithm that offers a damning score is analogous to evidence offered by an anonymous expert, whom one cannot cross-examine. Any court aware of foundational rule of law principles, as well as Fifth and Fourteenth Amendment principles of notice and explanation for decisions, would be very wary of permitting a state to base sentences (even if only in part) on a secret algorithm.


Two forms of automation bias also menace the right to a meaningful appeal. Judges are all too likely to assume that quantitative methods are superior to ordinary verbal reasoning, and to reduce the task at hand (sentencing) to an application of the quantitative data available about recidivism risk. Both responses undermine the complexity and humane judgment necessary to sentencing.

Even worse, when companies offer commercial rationales for keeping their “secret sauce” out of the public eye, courts have been eager to protect the trade secrets of scoring firms. That tendency is troubling in private-sector contexts, since commercial torts may be committed with impunity thanks to the opacity of ranking and rating systems. Even in the context of voting, authorities have been sluggish about demanding software that is auditable and understandable by outsiders. Nevertheless, the case of criminal sentencing should be a bridge too far for conscientious judges—and that probably explains the U.S. Supreme Court’s interest in Loomis. Sending someone to jail thanks to the inexplicable, unchallengeable judgments of a secret computer program is too Black Mirror for even hardened defenders of corporate privileges.


Moreover, there are options between “complete algorithmic secrecy” and “complete public disclosure.” As I explained in 2010, “qualified transparency” is a well-established method of enabling certain experts to assess protected trade secrets (including firms’ code and data) in order to test a system’s quality, validity, and reliability. Think of a special master in a court case, or Sensitive Compartmented Information Facilities for intelligence agencies. At a bare minimum, governments should not use algorithms like the COMPAS score without some kind of external quality assurance enabled by qualified transparency.
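As an illustration of the kind of external quality assurance qualified transparency could enable, the sketch below shows two checks an expert reviewer might run against a vendor's scores under a protective order: calibration (do predicted risks match observed reoffense rates?) and false-positive rates across groups. The field names and functions are hypothetical; the point is that such tests need access to model outputs and validation data, not public disclosure of the formula.

```python
from collections import defaultdict

def calibration_report(records, n_bins=5):
    """Group cases by predicted risk and compare to observed reoffense rates.

    `records` is a list of dicts with hypothetical fields:
      'score'      -- model-predicted risk in [0, 1]
      'reoffended' -- 1 if the person actually reoffended, else 0
      'group'      -- a demographic label, used for error-rate comparison
    """
    bins = defaultdict(list)
    for r in records:
        bins[min(int(r["score"] * n_bins), n_bins - 1)].append(r)
    for b in sorted(bins):
        cases = bins[b]
        observed = sum(c["reoffended"] for c in cases) / len(cases)
        print(f"risk bin {b}: n={len(cases)}, observed reoffense rate = {observed:.2f}")

def false_positive_rates(records, threshold=0.5):
    """Rate of people flagged 'high risk' who did not reoffend, per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [false positives, non-reoffenders]
    for r in records:
        if r["reoffended"] == 0:
            counts[r["group"]][1] += 1
            if r["score"] >= threshold:
                counts[r["group"]][0] += 1
    return {g: fp / total for g, (fp, total) in counts.items() if total}
```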


But secrecy is not the only problem here. Assume that algorithmic risk assessment eventually becomes more public, with fully transparent formulae and data. There are still serious concerns about the use of “evidence-based sentencing,” as quantitative predictive analytics is often marketed in criminal justice contexts.


For example, legal scholar Sonja Starr has argued that what is really critical in the sentencing context is not just recidivism in itself, but the difference a longer prison term will make to the likelihood a convict will reoffend. Algorithmic risk assessment may eventually become very good at predicting reoffense, but what about a risk assessment of risk assessment itself—that is, the danger that a longer sentence for a “high risk” offender may become a self-fulfilling prophecy, given the criminogenic environment of many prisons?
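A small worked sketch of Starr's distinction, with invented numbers: a risk score estimates a defendant's baseline probability of reoffending, but the quantity a sentencing judge actually needs is the change in that probability caused by a longer sentence, which can even be positive if prison itself is criminogenic.

```python
# Invented, illustrative numbers -- not empirical estimates.
baseline_risk = 0.40                 # predicted P(reoffend) under a shorter sentence
effect_of_longer_sentence = +0.05    # change in P(reoffend) caused by a longer term

risk_after_longer_sentence = baseline_risk + effect_of_longer_sentence

# A risk score alone reports only the 0.40 baseline. The sentencing-relevant
# quantity is the causal difference: here the longer sentence raises expected
# reoffense from 0.40 to 0.45 -- the self-fulfilling prophecy the article warns of.
print(f"baseline: {baseline_risk:.2f}, after longer sentence: {risk_after_longer_sentence:.2f}")
```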


There is also value in narrative intelligibility in the ranking and rating of human beings. Companies are marketing analytics to predict not only the likelihood of criminal recidivism, but also the chances that any given person will be mentally ill, a bad employee, a failing student, a criminal, or a terrorist. Even if we can set aside the self-fulfilling prophecy concerns raised above, these assessments should be deployed only with utmost caution. Once used to advise police, DHS, teachers, or bosses, they are not mere opinions circulating in a free flow of ideas. Rather, they can have direct impact on persons’ livelihoods, liberty, and education. If they cannot be explained in a narratively intelligible way, perhaps they should not be used at all without the direct consent of the person they are evaluating.


This opinion may not sit well with those who see artificial intelligence as the next step in human evolution. Roboticist Hod Lipson memorably compared efforts to make advanced algorithmic information-processing understandable to humans to “explaining Shakespeare to a dog.” But this loaded metaphor conceals more than it reveals. At least for now, humans are in charge of governments, and can demand explanations for decisions in natural language, not computer code. Failing to do so in the criminal context risks ceding inherently governmental and legal functions to an unaccountable computational elite.
 