MODEL POLICY ON RISK ALGORITHMS AND ARTIFICIAL INTELLIGENCE IN CRIMINAL JUSTICE

Summary

This model policy encourages transparency by states that use risk assessment tools in the criminal justice system and seeks fairer, more equitable outcomes for those directly affected by those tools.

A. All risk algorithms or artificial intelligence programs used within the criminal justice system to predict or assign risk at any point in the criminal matter, defined as commencing at the time of arrest and concluding upon termination of probation or parole, shall meet the following requirements in order to be used within this jurisdiction:

1. They shall have been shown to be free of racial, gender, and other protected-class bias, defined as not causing a disparate impact in recommendations or results upon any race, gender, or other protected class as defined by the laws of this jurisdiction, when validated as required in section (2) according to national best practices.[i]

2. They shall be periodically and locally re-validated and revised in accordance with national best practices, and the report of such validation shall be a public record made available to the state legislature and the general public.[ii]

3. Information about any programs or algorithms, the risk factors they analyze, the data on which analysis of risk factors is based, the nature and mechanics of any validation process, and the results of any audits or tests to identify and eliminate bias, shall be a public record and subject to discovery in court proceedings.[iii]

4. Any process that culminates in the creation of a matrix or decision-making framework that assigns outcomes based on risk quantum or category shall be open to the public and shall comport with the basic requirements of due process generally applicable to administrative proceedings within this jurisdiction, and all records of such process shall be public records.[iv]

5. Assertions of trade secret privileges by owners of algorithms or artificial intelligence programs at any point in a criminal proceeding shall be prohibited.[v]

6. All users of such algorithms or artificial intelligence programs shall provide an annual statistical report to the appropriate governing entity, including basic statistical information concerning the effectiveness of the program or algorithm according to commonly accepted metrics and basic comparative data on bias against persons in protected classes.
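
As a non-binding illustration of the annual statistical report contemplated in paragraph (6), the sketch below computes one commonly accepted effectiveness metric and basic comparative bias data from a set of case records. The record fields, metric choices, and group labels are hypothetical assumptions for illustration only; the policy leaves the actual schema and metrics to the appropriate governing entity.

```python
# Illustrative only: a minimal sketch of the annual statistical report
# contemplated in paragraph (6). Field names, metrics, and group labels are
# hypothetical; the governing entity would define the actual schema and metrics.

from collections import defaultdict

def annual_report(records):
    """records: iterable of dicts with keys 'group' (protected-class label),
    'flagged_high_risk' (bool, the tool's recommendation), and
    'reoffended' (bool, the outcome observed during the follow-up period)."""
    tallies = defaultdict(lambda: {"n": 0, "flagged": 0, "neg": 0,
                                   "true_pos": 0, "false_pos": 0})
    for r in records:
        t = tallies[r["group"]]
        t["n"] += 1
        t["flagged"] += r["flagged_high_risk"]
        t["neg"] += not r["reoffended"]
        t["true_pos"] += r["flagged_high_risk"] and r["reoffended"]
        t["false_pos"] += r["flagged_high_risk"] and not r["reoffended"]

    report = {}
    for group, t in tallies.items():
        report[group] = {
            "cases": t["n"],
            # Effectiveness: share of high-risk flags that matched the outcome.
            "precision": t["true_pos"] / t["flagged"] if t["flagged"] else None,
            # Comparative bias data: how often each group is flagged, and how
            # often people who did not reoffend were nonetheless flagged.
            "flag_rate": t["flagged"] / t["n"],
            "false_positive_rate": t["false_pos"] / t["neg"] if t["neg"] else None,
        }
    return report
```

Comparing flag rates and false-positive rates across groups is one common way to present "basic comparative data on bias"; a governing entity could equally require calibration or error-rate-balance measures instead.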

[i] Pennsylvania Senate Bill 449 adopted language requiring the assessments to be "effective, accurate and free from racial or economic bias" and requiring a report to that effect prior to adoption of the instrument. http://www.legis.state.pa.us/CFDOCS/Legis/PN/Public/btCheck.cfm?txtType=PDF&sessYr=2017&sessInd=0&billBody=S&billTyp=B&billNbr=0449&pn=1424 Like the Pennsylvania law, this draft requires the validation report to address the question of bias. Rather than relying on liability, this draft imposes a non-liability disparate impact standard (disparate impact being settled law in the employment context) and prohibits the continued operation of such risk assessments or artificial intelligence programs when they fail to meet that standard. While one author argues for the extension of disparate impact tort liability as the solution in this area, he ultimately concedes that expanding liability to allow for such court challenges "will be difficult technically, difficult legally, and difficult politically." https://www.theatlantic.com/business/archive/2015/09/discrimination-algorithms-disparate-impact/403969/ See also: Barocas, Solon and Selbst, Andrew D., Big Data's Disparate Impact, 104 California Law Review 671 (2016). Available at SSRN: https://ssrn.com/abstract=2477899 or http://dx.doi.org/10.2139/ssrn.2477899
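
For concreteness, the disparate impact standard referenced in note [i] is often operationalized in the employment context through the "four-fifths rule," which compares selection rates across groups. The sketch below applies that screen to the rate at which a tool produces a favorable recommendation; the 0.8 threshold and the example counts are assumptions drawn from employment practice for illustration, not requirements of this model policy.

```python
# Illustrative only: the employment-law "four-fifths rule" as a screen for
# disparate impact, applied here to a tool's favorable recommendations
# (e.g., release rather than detention). The 0.8 threshold is the customary
# employment-context screen, not a threshold prescribed by this model policy.

def adverse_impact_ratios(favorable_by_group):
    """favorable_by_group: dict mapping group label -> (favorable_count, total_count).
    Returns each group's favorable-outcome rate divided by the highest group's rate."""
    rates = {g: fav / total for g, (fav, total) in favorable_by_group.items() if total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Example with hypothetical counts: a group whose ratio falls below 0.8 would
# warrant scrutiny under the four-fifths screen before the instrument could be
# validated as free of disparate impact.
ratios = adverse_impact_ratios({"group_a": (720, 1000), "group_b": (510, 1000)})
flagged = {g: r for g, r in ratios.items() if r < 0.8}
```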

[ii] Although annual revalidation is suggested, revalidation every 18-24 months is considered acceptable. See: Jones, D. (1996). Risk prediction in criminal justice. In A.T. Harland (Ed.), Choosing correctional options that work. London: Sage. See also Risk & Needs Assessments: What Defenders and Chief Defenders Need to Know at 5. http://www.nlada.org/sites/default/files/pictures/NLADA_Risk_and_Needs_Assessments-What_Public_Defenders_Need_to_Know.pdf

[iii] This language is drawn from Amendment 147 to legislation pending in the Massachusetts legislature, drafted by a group of Harvard- and MIT-based faculty and researchers: https://medium.com/berkman-klein-center/the-following-letter-signed-by-harvard-and-mit-based-faculty-staff-and-researchers-chelsea-7a0cf3e925e9 A similar alternative formulation is the recommendation of the AI Now Institute at New York University: "Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education (e.g "high stakes" domains) should no longer use "black box" AI and algorithmic systems. This includes the unreviewed or unvalidated use of pre-trained models, AI systems licensed from third party vendors, and algorithmic processes created in-house. The use of such systems by public agencies raises serious due process concerns, and at a minimum they should be available for public auditing, testing, and review, and subject to accountability standards." https://ainowinstitute.org/AI_Now_2017_Report.pdf

[iv] Generally, a decision-making framework is created, but there is seldom anything approaching due process or a public process surrounding its creation. A typical framework is here: https://chicagotonight.wttw.com/sites/default/files/article/file-attachments/PSA%20Decision%20Making%20Framework.pdf The framework takes the risk scores and then, in one case with the help of a "contractor" for the Foundation, sets the tolerances (e.g., low, medium, high) for each risk category, which in turn inform the recommended result, whether that be greater pre- or post-trial supervision, higher bail, longer sentences, etc. These tolerances and assigned responses embody substantive decisions about how much risk society should tolerate, where the cut-points lie, and how to respond to a person within a given risk grouping. In the case of the Arnold Foundation tool and many others, there is no due process or transparent process leading to the result, and the Foundation continues to assert the confidentiality of that process. See Brauneis, Robert and Goodman, Ellen P., Algorithmic Transparency for the Smart City (August 2, 2017). 20 Yale J. of Law & Tech. 103, 137-142 (2018); GWU Law School Public Law Research Paper; GWU Legal Studies Research Paper. Available at SSRN: https://ssrn.com/abstract=3012499 or http://dx.doi.org/10.2139/ssrn.3012499 These decisions are substantive: "Determining how much risk a society should tolerate — and then formalizing those answers inside decision-making frameworks — is a difficult political and moral question, not a primarily technical one. To date, however, this decision has generally not been a target of considered political or policy debate." Koepke, John Logan and Robinson, David G., Danger Ahead: Risk Assessment and the Future of Bail Reform (February 19, 2018). Washington Law Review, Forthcoming. Available at SSRN: https://ssrn.com/abstract=3041622 or http://dx.doi.org/10.2139/ssrn.3041622 These authors argue that, unlike today's practice, the process of setting the risk framework "must be a democratic one."
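
To make the description in note [iv] concrete, the sketch below shows the general shape of a score-to-outcome framework: cut-points that translate a raw risk score into a category, and a table of recommended responses for each category. The cut-points and responses here are entirely hypothetical; choosing them is exactly the substantive decision the model policy requires to be made through an open, due-process-compliant proceeding.

```python
# Illustrative only: a toy decision-making framework of the kind described in
# note [iv]. The cut-points and assigned responses are hypothetical; the policy's
# point is that selecting them is a substantive public choice, not a technical detail.

HYPOTHETICAL_CUT_POINTS = [      # (upper bound of score range, category)
    (3, "low"),
    (5, "medium"),
    (6, "high"),
]

HYPOTHETICAL_RESPONSES = {       # category -> recommended condition
    "low": "release on recognizance",
    "medium": "release with pretrial supervision",
    "high": "refer for detention hearing",
}

def recommend(score):
    """Map a raw risk score (here assumed to run 1-6) to a category and response."""
    for upper, category in HYPOTHETICAL_CUT_POINTS:
        if score <= upper:
            return category, HYPOTHETICAL_RESPONSES[category]
    raise ValueError("score outside the framework's range")

# Moving a single cut-point (e.g., treating a score of 5 as "high" rather than
# "medium") changes the recommended outcome for everyone at that score, which is
# why the policy treats the framework-setting process as a public, substantive one.
```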

[v] Wexler, Rebecca, Life, Liberty, and Trade Secrets: Intellectual Property in the Criminal Justice System (February 21, 2017). 70 Stanford Law Review 1343, 1429 (2018). Available at SSRN: https://ssrn.com/abstract=2920883 or http://dx.doi.org/10.2139/ssrn.2920883 (“A criminal trade secret privilege would almost certainly lead to overclaiming, abuse, and the exclusion of highly probative evidence; it would also project a message that the government values intellectual property holders more than those whose life or liberty is at stake. These harms are unnecessary because narrow criminal discovery and subpoena powers combined with protective orders should suffice to safeguard the interests of trade secret owners to the full extent reasonable.”)