This project analyzes how crime risk forecasting software used by police departments produces outcomes, directs decision-making, and is evaluated for accuracy and efficacy. By querying these epistemological structures, I suggest that the process by which these algorithms construct questions, answers, and actionable outputs generates political contestations that reveal and inform diverging visions of how a democratic society should understand crime, harm, and inequity. Liz Calhoun's work illuminates the stakes of these differing visions by analyzing where the lexicons of reformist developers and those of abolitionist and radical critics deploy the same rhetorical concepts in evaluating "crime forecasting" algorithms while meaning radically different things. I suggest that the polysemic nature of the terms "objectivity" and "bias" has mystified political reckonings with how policing algorithms work, and that we need a collectivist and directly participatory framework for intervening in how public data is converted into actionable information.