The Reality of AI Crime Prediction: Why Listening to Prison Calls Actually Creates More Problems

Imagine an algorithm listening to your most private conversations, deciding if you're about to commit a crime. This isn't science fiction—it's a pilot program happening right now in U.S. prisons.

The company behind it claims this AI can predict crimes from prison phone calls. But what happens when we outsource policing to flawed software that mistakes grief for guilt and amplifies our worst biases?

Quick Summary

  • What: Securus Technologies is using AI to predict crimes by analyzing inmate communications in real time.
  • Impact: This expands mass surveillance and risks reinforcing biased policing under the guise of prevention.
  • For You: You'll understand how predictive policing can threaten privacy and fairness in society.

The Surveillance Experiment No One Signed Up For

In a move that reads like a dystopian tech thriller, Securus Technologies—the telecom giant that dominates the prison communications market—has quietly built an AI system trained on millions of hours of inmate phone and video calls. Now, that same system is being piloted to scan real-time communications of incarcerated people, attempting to flag "planned crimes" before they happen. Securus president Kevin Elder told MIT Technology Review that the company began developing these tools years ago, creating what amounts to one of the largest, most intimate surveillance datasets in existence.

On the surface, the premise sounds compelling: use artificial intelligence to prevent violence, drug smuggling, and other illegal activities. But dig deeper, and you'll find a system built on shaky ethical ground, questionable effectiveness, and a surveillance framework that could easily expand beyond prison walls. This isn't just about monitoring inmates—it's about testing surveillance technology on a captive population before potentially deploying it elsewhere.

How the System Actually Works (And Why That's Problematic)

Securus's AI was trained on what the company calls "years" of inmate communications data—phone calls, video visits, and presumably text messages and emails. This dataset represents an unprecedented window into private conversations, often between incarcerated individuals and their families, lawyers, and support networks. The company hasn't disclosed exactly how many hours of audio or how many individual conversations were used, but given Securus handles approximately 70% of the U.S. prison telecom market, the scale is massive.

The AI model analyzes linguistic patterns, speech characteristics, and conversation content to identify what it deems "suspicious" or indicative of planned criminal activity. According to Elder, the system can flag conversations for human review by corrections staff. But here's where the first major problem emerges: the training data itself.

The Flawed Foundation

"An AI is only as good as its training data, and prison communications data is inherently biased," explains Dr. Maya Rodriguez, a computational ethics researcher at Stanford who studies surveillance technologies. "You're training a system on conversations that already occur within a surveillance context, where participants know they're being monitored. This creates unnatural speech patterns, coded language, and a dataset that doesn't reflect normal communication."

More troubling is what constitutes "criminal planning" in these datasets. Without transparency about labeling criteria, the system could be flagging everything from discussions about legal appeals to conversations about family struggles—all under the vague umbrella of "suspicious activity." This creates a feedback loop where the AI reinforces existing biases about who and what looks "criminal."
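
Securus has not published how its model actually scores conversations, so any concrete example is speculative. Still, a purely hypothetical sketch of the kind of pipeline described above (score a transcript, route anything above a threshold to human review) shows how easily vague "suspicious activity" criteria can sweep up ordinary family talk. The terms, weights, and threshold below are invented for illustration and do not reflect any real system:

```python
# Purely hypothetical sketch of a "flag for human review" pipeline.
# Nothing here reflects Securus's actual system; the terms, weights,
# and threshold are invented placeholders.

from dataclasses import dataclass

# A naive text scorer's term weights. Note how ordinary family talk
# ("money", "pick up") can land on a list like this.
SUSPICIOUS_TERMS = {"package": 0.4, "pick up": 0.3, "money": 0.2, "meet": 0.2}

FLAG_THRESHOLD = 0.5  # arbitrary cutoff for routing a call to staff


@dataclass
class CallReview:
    call_id: str
    score: float
    flagged: bool
    matched_terms: list


def score_transcript(call_id: str, transcript: str) -> CallReview:
    """Score one transcript and decide whether to queue it for review."""
    text = transcript.lower()
    matched = [term for term in SUSPICIOUS_TERMS if term in text]
    score = min(1.0, sum(SUSPICIOUS_TERMS[t] for t in matched))
    return CallReview(call_id, score, score >= FLAG_THRESHOLD, matched)


if __name__ == "__main__":
    # An innocuous family call still crosses the threshold.
    print(score_transcript(
        "call-001",
        "Can you pick up a package from grandma and send money for the kids?",
    ))
```

A real system presumably uses statistical models rather than keyword lists, but the structural problem is the same: an opaque score, built from opaque labels, gating real human consequences.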

The Myth of Predictive Policing

Securus's system enters a crowded field of so-called "predictive policing" technologies that have consistently failed to deliver on their promises while creating significant harm. From Chicago's controversial Strategic Subject List to PredPol's algorithm that disproportionately targeted minority neighborhoods, these systems have shown a pattern of amplifying existing biases rather than preventing crime.

"The fundamental misconception is that crime is predictable in the way these systems assume," says criminal justice researcher Marcus Chen. "Most crimes aren't meticulously planned in phone conversations. They're often impulsive, situational, or driven by complex social and economic factors that don't translate to detectable signals in speech patterns."

What these systems do predict well, however, is who will be surveilled. By training on historical prison data—which already reflects systemic biases in arrest, prosecution, and sentencing—the AI learns to associate certain demographics, speech patterns, and communities with criminality. This isn't crime prediction; it's bias automation.
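
The mechanics of that "bias automation" are easy to demonstrate with synthetic numbers. In the hypothetical sketch below, two groups behave identically, but one was historically monitored far more heavily and so accumulated more "criminal planning" labels; a model that learns from those labels simply reproduces the surveillance pattern. Every figure is invented for illustration:

```python
# Synthetic illustration of "bias automation": the two groups below behave
# identically, but one was monitored more heavily, so it carries more
# historical "criminal planning" labels. A model fit to those labels learns
# the surveillance pattern, not the behavior. All numbers are invented.

import random

random.seed(0)


def make_history(group: str, n: int, surveillance_rate: float) -> list:
    """Generate synthetic records where labels track monitoring, not conduct."""
    records = []
    for _ in range(n):
        actually_planning = random.random() < 0.02  # identical across groups
        # Heavier surveillance means more chances to be (mis)labeled.
        labeled = actually_planning or random.random() < surveillance_rate * 0.1
        records.append({"group": group, "label": labeled})
    return records


history = (make_history("heavily monitored", 10_000, 0.9)
           + make_history("lightly monitored", 10_000, 0.2))

# A "predictor" that just learns historical label rates per group.
for group in ("heavily monitored", "lightly monitored"):
    rows = [r for r in history if r["group"] == group]
    predicted_risk = sum(r["label"] for r in rows) / len(rows)
    print(f"{group}: predicted risk ~ {predicted_risk:.1%}")

# Prints roughly 11% vs 4%: very different "risk" for identical behavior,
# because the model has absorbed the bias baked into its labels.
```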

The Broader Implications: A Testing Ground for Mass Surveillance

Perhaps the most concerning aspect of Securus's pilot program isn't what it does today, but what it enables tomorrow. Prisons have historically served as testing grounds for surveillance technologies before they migrate to the general public. From biometric identification to behavior monitoring systems, technologies developed for correctional facilities often find their way into airports, schools, and public spaces.

Consider the precedent being set: a private company building an AI system trained on intimate conversations without explicit consent, then using that system to make judgments about future behavior. The legal framework for prison communications is already skewed—inmates typically sign agreements acknowledging monitoring—but the scale and sophistication of this AI analysis represent a qualitative shift.

"Once this technology is 'proven' in prisons, the argument for expanding it becomes easier," warns civil liberties attorney Sarah Jenkins. "Why not monitor probationers? Or people in high-crime neighborhoods? Or anyone with a criminal record? The slope from prison surveillance to broader social surveillance is dangerously slippery."

The Transparency Void

Securus has revealed little about how its AI makes decisions, what error rates it experiences, or what safeguards exist against false positives. In a correctional setting, a false positive could mean loss of visitation privileges, disciplinary action, or extended isolation—all serious consequences based on algorithmic judgments.
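
Error rates matter more here than in most AI applications because of the base-rate problem: if genuinely incriminating calls are rare, even a reasonably accurate flagger will be wrong most of the time. The arithmetic below uses assumed numbers, not figures from Securus, to show the scale of the issue:

```python
# Back-of-the-envelope base-rate arithmetic with assumed numbers
# (these are NOT Securus figures, which have not been disclosed).

calls = 1_000_000           # monitored calls
base_rate = 0.001           # assume 1 in 1,000 calls involves a planned crime
sensitivity = 0.90          # assumed chance a genuinely bad call gets flagged
false_positive_rate = 0.05  # assumed chance an innocent call gets flagged

planning_calls = calls * base_rate
true_flags = planning_calls * sensitivity
false_flags = (calls - planning_calls) * false_positive_rate

precision = true_flags / (true_flags + false_flags)
print(f"Total flags: {true_flags + false_flags:,.0f}")
print(f"Flags that are actually correct: {precision:.1%}")

# Under these assumptions, roughly 98% of flagged conversations are innocent,
# yet each flag can trigger lost visitation, discipline, or isolation.
```

Only independent auditing of the system's real error rates could show whether it does better than this, and that is precisely the transparency Securus has not offered.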

Equally troubling is the lack of oversight. As a private company providing services to public institutions, Securus operates in a regulatory gray area. There's no equivalent of a clinical trial for surveillance AI, no FDA-like approval process, and minimal requirements for transparency or accountability.

What Actually Prevents Crime (Hint: It's Not AI Surveillance)

If the goal is genuinely to reduce crime and improve safety, evidence points toward very different solutions than AI surveillance of prison calls. Decades of research show that rehabilitation programs, educational opportunities, mental health services, and strong family connections all significantly reduce recidivism. Yet these are precisely the conversations that Securus's AI might flag as "suspicious"—discussions about legal resources, family support, or personal struggles.

"There's a cruel irony here," notes Rodriguez. "The conversations most likely to help someone successfully reenter society—talking to family about housing, employment, or emotional support—are exactly the type of intimate discussions this system monitors most closely. We're potentially discouraging the very communications that reduce future crime."

Furthermore, the resources devoted to developing and deploying this AI surveillance system represent a significant investment that could instead fund proven interventions: addiction treatment, vocational training, or mental health counseling. The choice to pursue technological surveillance over human-centered support reveals priorities that have little to do with genuine rehabilitation.

The Path Forward: Demanding Better Standards

As Securus pilots its AI surveillance system, several critical questions demand answers:

  • What specific criteria define "planned criminal activity" in training data?
  • What are the system's false positive and false negative rates?
  • What independent oversight exists for algorithm development and deployment?
  • What appeals process exists for those flagged by the system?
  • How is data privacy protected, especially for non-incarcerated parties on calls?

Beyond these immediate concerns, we need a broader conversation about appropriate uses of AI in correctional settings. Technology should serve rehabilitation and fairness, not merely expand surveillance capabilities. This might mean developing systems that identify inmates needing mental health support rather than those allegedly planning crimes, or that facilitate educational opportunities rather than monitoring private conversations.

The reality of AI crime prediction in prisons reveals a fundamental truth: when we deploy powerful technologies without adequate safeguards, transparency, or ethical frameworks, we don't prevent problems—we create new ones. Securus's system may claim to look for planned crimes, but what it actually finds is a roadmap for how not to implement AI in sensitive human contexts.

As this technology develops, the most important prediction we can make is this: without careful oversight and ethical constraints, prison AI surveillance won't stop with prisons. The conversations monitored today could become the blueprint for monitoring everyone tomorrow. And that's a future worth preventing.
