The Promise of Predictive Policing Meets Prison Walls
When Securus Technologies president Kevin Elder announced his company had trained an AI model on years of inmate phone and video calls, he framed it as a breakthrough in crime prevention. The system, now being piloted to scan calls, texts, and emails from correctional facilities, represents the latest frontier in what's been called "predictive policing," the use of algorithms to identify potential criminal activity before it happens. On the surface, it sounds like a logical application of technology: analyze millions of conversations to find patterns that might indicate planned crimes, then alert authorities. But the reality of how these systems work, and what they actually accomplish, reveals a far more problematic picture.
How the System Actually Works (And Why That's the Problem)
Securus, which dominates the prison telecommunications market with contracts in over 3,000 correctional facilities, began building its AI tools by training models on what it calls "historical data": years of recorded inmate communications. The company hasn't disclosed the exact number of calls analyzed, but given its market position and the fact that inmates make approximately 600 million calls annually across the U.S., the training dataset likely encompasses tens of millions of conversations.
The fundamental assumption behind this approach is flawed from the start. These models aren't detecting "planned crimes" in any meaningful sense; they're identifying linguistic patterns that correlate with previously flagged conversations. The system looks for specific keywords, speech patterns, emotional tones, and contextual clues that match what human monitors have previously identified as suspicious. This creates an immediate bias problem: the AI learns from human judgments that are themselves subject to racial, cultural, and socioeconomic biases.
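Securus hasn't published its model architecture, so the following is only a minimal sketch in Python of the kind of pattern-matching flagger its public descriptions suggest; every keyword, weight, and threshold here is invented for illustration.

```python
# Hypothetical sketch of pattern-based call flagging. The keyword list,
# weights, and threshold are invented; they stand in for whatever patterns
# human monitors flagged in the historical training data.
import re

PATTERNS = {
    r"\bpackage\b": 0.4,
    r"\bmove (it|them)\b": 0.5,
    r"\bdon't say that on the phone\b": 0.9,
}
FLAG_THRESHOLD = 0.8  # arbitrary cutoff for illustration

def score_transcript(transcript: str) -> float:
    """Sum the weights of every hypothetical pattern found in a transcript."""
    text = transcript.lower()
    return sum(w for pattern, w in PATTERNS.items() if re.search(pattern, text))

def is_flagged(transcript: str) -> bool:
    return score_transcript(transcript) >= FLAG_THRESHOLD

# The same phrasing scores identically whether it refers to contraband or
# to a birthday present: the system matches surface patterns, not intent.
print(is_flagged("The package is at grandma's, move it before Friday"))  # True
print(is_flagged("How are the kids doing in school?"))                   # False
```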
The Bias Feedback Loop
Consider how this plays out in practice. If human monitors historically flagged conversations containing certain slang more frequently when spoken by Black inmates (whether due to explicit bias or cultural misunderstanding), the AI learns to associate that language with "suspicious" activity. When it then scans new conversations, it disproportionately flags similar patterns from similar demographics. Each flagged conversation becomes new training data, reinforcing the original bias in what researchers call a "bias feedback loop."
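A deliberately stylized simulation makes the loop concrete. In the sketch below, both groups make the same number of calls with identical content, and every rate and coefficient is made up; the only asymmetry is a one-point difference in the initial human flag rate, which retraining then amplifies round after round.

```python
# Toy simulation of a bias feedback loop. All numbers are invented and the
# update rule is deliberately simplified: each retrained model flags a group
# in proportion to how over-represented that group is among past flags.
CALLS_PER_GROUP = 1000   # hypothetical calls per group per retraining round

def simulate(rounds: int = 5, rate_a: float = 0.05, rate_b: float = 0.06):
    # Group B starts one percentage point higher purely from human bias;
    # underlying behavior in both groups is identical.
    for rnd in range(1, rounds + 1):
        flags_a = rate_a * CALLS_PER_GROUP
        flags_b = rate_b * CALLS_PER_GROUP
        share_b = flags_b / (flags_a + flags_b)   # group B's share of past flags
        # Each group makes 50% of all calls, so a flag share above 0.5 scales
        # group B's flag rate up and group A's down in the next round.
        rate_a *= 2 * (1 - share_b)
        rate_b *= 2 * share_b
        print(f"round {rnd}: flag rate A = {rate_a:.3f}, B = {rate_b:.3f}")

simulate()
# After five rounds the gap has widened dramatically even though both
# groups talk about exactly the same things.
```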
This isn't hypothetical. Studies of predictive policing algorithms in other contexts have shown they consistently over-police minority communities. A 2019 analysis of Chicago's predictive policing system found it targeted Black and Latino neighborhoods at rates disproportionate to actual crime statistics. When you transplant this technology into prisons, where Black Americans are incarcerated at nearly five times the rate of white Americans, you're essentially building racial bias into the surveillance infrastructure.
The Myth of Crime Prevention
Securus frames its technology as preventing crimes, but there's little evidence such systems actually achieve this goal. Instead, they create several dangerous outcomes:
- False Positives Overwhelm Systems: When AI flags thousands of conversations as "suspicious," human monitors can't possibly review them all (see the base-rate arithmetic after this list). This leads to either ignoring most flags (making the system useless) or implementing automated punishments based on unverified AI judgments.
- Chilling Effects on Rehabilitation: Inmates aware of AI surveillance will avoid discussing sensitive but legitimate topics (mental health struggles, family conflicts, reentry challenges) for fear of triggering the system. This undermines rehabilitation efforts that depend on honest communication.
- Creating New Crimes: The system doesn't prevent crimes so much as create new categories of "suspicious behavior" that can lead to disciplinary actions within the prison system, potentially extending sentences or restricting privileges.
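The false positive point is just base-rate arithmetic. The rates below are assumptions chosen for illustration, since no accuracy figures for Securus's system are public; even with a generously accurate classifier, when genuine criminal planning is rare, innocent conversations dominate the flag queue.

```python
# Back-of-the-envelope false positive arithmetic. Every rate here is an
# assumption for illustration; no real accuracy figures have been disclosed.
calls_scanned  = 10_000_000   # hypothetical calls scanned in a year
prevalence     = 0.001        # assume 1 in 1,000 calls involves real criminal planning
sensitivity    = 0.90         # assumed share of real planning the model catches
false_pos_rate = 0.05         # assumed share of innocent calls wrongly flagged

real_planning   = calls_scanned * prevalence
true_positives  = real_planning * sensitivity
false_positives = (calls_scanned - real_planning) * false_pos_rate
precision       = true_positives / (true_positives + false_positives)

print(f"total flags:         {true_positives + false_positives:,.0f}")
print(f"false positives:     {false_positives:,.0f}")
print(f"flags that are real: {precision:.1%}")
# With these assumptions, over 500,000 calls are flagged per year and
# roughly 98% of them are innocent.
```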
The Privacy Paradox in a Constitution-Free Zone
Prisons exist in a legal gray area when it comes to privacy rights. While the Fourth Amendment protects against unreasonable searches and seizures, courts have generally granted correctional facilities broad latitude to monitor communications for security purposes. This creates what legal scholars call a "privacy paradox": inmates have technically consented to monitoring by using prison communication systems, but they have no meaningful alternative if they want to maintain family connections.
Securus's AI surveillance takes this to a new level. Traditional human monitoring at least involved someone listening to conversations in context. AI systems analyze not just content but metadata, speech patterns, emotional tones, and relationship networks. They can identify when someone is speaking in code or using ambiguous language, but they can't distinguish between actual criminal planning and, say, an inmate using metaphors to discuss personal struggles.
The Data Exploitation Problem
There's another troubling dimension: data exploitation. Securus collects what may be the most emotionally vulnerable communications imaginable: conversations between incarcerated people and their families during moments of crisis, grief, and desperation. Using this data to train commercial AI systems raises profound ethical questions about consent and exploitation. These conversations weren't given with the understanding they'd become training data for surveillance algorithms; they were given to maintain human connections under difficult circumstances.
What Actually Works in Prison Security
If AI surveillance isn't the solution, what actually improves prison safety and reduces recidivism? Research points to several evidence-based approaches:
- Increased Human Contact: Facilities with more visitation and better communication options have lower rates of violence.
- Rehabilitation Programs: Education, job training, and therapy reduce reoffending far more effectively than increased surveillance.
- Staff Training: Well-trained correctional officers who build relationships with inmates are better at detecting genuine threats than algorithms scanning for keywords.
- Mental Health Services: Many incidents labeled as "planned crimes" are actually manifestations of untreated mental illness.
These approaches require investment in people rather than technology. They're less flashy than AI surveillance systems, but they actually address the root causes of prison violence and post-release criminal behavior.
The Path Forward: Regulation and Transparency
The deployment of AI surveillance in prisons is happening with minimal oversight. Most states don't have specific regulations governing algorithmic decision-making in correctional settings, and companies like Securus aren't required to disclose how their systems work or how accurate they are.
Several steps could mitigate the harms of these systems:
- Mandatory Audits: Independent third parties should regularly audit AI surveillance systems for racial, gender, and socioeconomic bias (a sketch of what such an audit could compute follows this list).
- Transparency Requirements: Companies should disclose accuracy rates, false positive rates, and the demographic breakdown of flagged conversations.
- Human Review Mandates: No disciplinary action should be taken based solely on AI flags without human review and contextual understanding.
- Data Protection: Inmate communications used for AI training should be anonymized and used only with explicit, informed consent.
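As a sketch of what the audit and transparency items above could look like in practice, the snippet below computes per-group flag rates and a simple disparity ratio from flag logs. The log schema, group labels, and any threshold for acceptable disparity are assumptions; a real audit would also need ground truth to measure false positive rates by group.

```python
# Minimal disparity audit over hypothetical flag logs. The 'group' and
# 'flagged' field names are an assumed schema, not a real data export.
from collections import defaultdict

def flag_rates_by_group(records):
    """Fraction of monitored calls flagged, per demographic group."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for rec in records:
        totals[rec["group"]] += 1
        flagged[rec["group"]] += int(rec["flagged"])
    return {g: flagged[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Highest group flag rate divided by the lowest; 1.0 means parity."""
    return max(rates.values()) / min(rates.values())

# Made-up example logs: group B ends up flagged twice as often as group A.
logs = [
    {"group": "A", "flagged": False}, {"group": "A", "flagged": False},
    {"group": "A", "flagged": True},  {"group": "A", "flagged": False},
    {"group": "B", "flagged": True},  {"group": "B", "flagged": False},
    {"group": "B", "flagged": True},  {"group": "B", "flagged": False},
]
rates = flag_rates_by_group(logs)
print(rates)                                             # {'A': 0.25, 'B': 0.5}
print(f"disparity ratio: {disparity_ratio(rates):.1f}")  # 2.0
```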
The Bottom Line: Technology Can't Solve Human Problems
The fundamental misconception behind AI crime prediction in prisons is that criminal behavior follows predictable patterns that machines can decode. The reality is that human behavior, especially in high-stress environments like prisons, is complex, contextual, and often contradictory. An algorithm might flag a conversation about "getting out" as suspicious, but it can't distinguish between planning an escape and discussing parole hopes with a family member.
Securus's system represents a dangerous trend: using AI as a technological fix for deeply human problems. Prison violence, recidivism, and rehabilitation challenges won't be solved by better surveillance algorithms. They'll be solved by addressing the conditions that create them: poverty, lack of opportunity, mental health crises, and systemic inequality.
As these AI surveillance systems expand from pilot programs to standard practice, we face a critical choice: Will we invest in technology that promises to predict crime but actually amplifies bias and undermines rights? Or will we invest in the human-centered approaches that actually make communities safer? The answer will determine not just the future of prison technology, but the fundamental fairness of our justice system.