The Truth About AI Crime Prediction: Why It's Not Actually About Preventing Crime

When Securus Technologies, the telecom giant serving over 3,500 correctional facilities, announced it was piloting an AI model to scan inmate calls, texts, and emails for signs of planned crimes, the pitch was seductively simple: use technology to prevent harm. President Kevin Elder framed it as a public safety tool, trained on years of inmate communications data. But a closer examination reveals a different story—one where the promise of prediction masks a system of pervasive surveillance, built on ethically dubious data, with accuracy rates that are anyone's guess. This isn't about stopping crime; it's about normalizing a new, automated layer of control.

What Securus Is Actually Building

Securus Technologies, owned by Aventiv Technologies, isn't just a phone company. It's a surveillance company. Its new AI initiative, developed over the last few years, involves training machine learning models on a vast historical archive of inmate phone and video calls. This data trove—collected from a captive population with severely limited privacy rights—forms the foundation of a system now being piloted to automatically flag communications for human review.

The company claims the AI looks for patterns and keywords indicative of planning for activities like violence, contraband smuggling, or witness intimidation. Once flagged, the calls or messages are sent to analysts at Securus's monitoring centers or directly to corrections officials. The selling point to prisons and jails is efficiency: automating the detection of threats in the estimated 70 million calls Securus processes each month.
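
Securus has not published how the flagging actually works, but the workflow it describes maps onto a familiar pattern: transcribe the call, match the text against a watch list, and push anything that hits into a human review queue. The sketch below is a minimal version of that pattern only; the keyword list, threshold logic, and queue are entirely invented, not Securus's implementation.

```python
import re
from collections import deque

# Hypothetical keyword patterns -- Securus has not disclosed its actual rules.
FLAG_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (
    r"\bpackage\b", r"\bdrop[- ]?off\b", r"\bdon't testify\b",
)]

review_queue = deque()  # stands in for the analysts' work queue

def scan_transcript(call_id: str, transcript: str) -> bool:
    """Flag a call if any pattern matches and route it to human review."""
    hits = [p.pattern for p in FLAG_PATTERNS if p.search(transcript)]
    if hits:
        review_queue.append({"call_id": call_id, "matched": hits})
        return True
    return False

# Ordinary speech trips the same wire as genuine planning:
scan_transcript("call-001", "Did grandma's package arrive? The drop-off is Tuesday.")
print(review_queue)  # one flagged call, no crime anywhere in sight
```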

The Flawed Foundation: Garbage In, Gospel Out

The central, glaring flaw in this system is its training data. AI models are only as good as the data they're fed, and Securus's model was trained on "years of inmates' phone and video calls." That foundation presents at least two insurmountable problems.

First, the data is inherently biased. It comes from a population under constant stress, where conversations are often coded, metaphorical, or laden with the unique jargon of incarceration. A model trained on this may learn to associate ordinary, frustrated speech with threat. Second, there is no verified "ground truth." How does Securus know which past calls actually led to a crime? Arrest records are imperfect proxies; most calls, even those discussing illegal activity, do not result in a documented crime. The model is likely learning from noisy, unverified labels, making its predictions statistically suspect.
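
The label problem can be made concrete with a toy simulation. None of the rates below are published figures; they are assumptions chosen only to show how quickly a proxy label (say, "a documented incident followed the call") swamps the rare true positives the model is supposed to learn.

```python
import random

random.seed(0)
N = 100_000  # simulated calls

# Assumed rates for illustration only -- none of these numbers are published.
TRUE_PLANNING_RATE = 0.002   # calls that genuinely involve planning a crime
PROXY_MISS_RATE = 0.6        # planning calls never tied to a documented incident
PROXY_FALSE_RATE = 0.01      # innocuous calls later linked to an incident anyway

mislabeled = 0
for _ in range(N):
    truly_planning = random.random() < TRUE_PLANNING_RATE
    if truly_planning:
        labeled_positive = random.random() > PROXY_MISS_RATE
    else:
        labeled_positive = random.random() < PROXY_FALSE_RATE
    if labeled_positive != truly_planning:
        mislabeled += 1

print(f"{mislabeled / N:.1%} of training labels are wrong")
# Under these assumptions, mislabeled examples outnumber the genuine
# planning calls the model is supposed to learn from.
```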

"This is a classic case of automating bias," says Dr. Erin Smith, a researcher in algorithmic fairness who studies carceral technologies. "You're taking the existing prejudices and failures of the prison system—who gets monitored, what gets flagged as suspicious—and baking them into an algorithm that then perpetuates them at scale. It gives a veneer of technological objectivity to deeply subjective judgments."

The Real Product Isn't Safety, It's Surveillance

Positioning this tool as a "crime prediction" system is a powerful marketing narrative, but it misrepresents the core function. The AI cannot predict the future. It performs pattern matching on speech and text, assigning a risk score based on historical correlations. This is anomaly detection and keyword flagging, not precognition.
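
In concrete terms, a "risk score" of this kind is a weighted tally of features that co-occurred with past incidents, not a statement about the future. A minimal sketch, with weights invented purely for illustration:

```python
# A "risk score" in this setting is a lookup of historical correlations,
# not a forecast. These weights are invented for illustration.
HISTORICAL_WEIGHTS = {
    "package": 1.5,   # co-occurred with past contraband incidents
    "tuesday": 0.2,
    "lawyer": -0.5,
}

def risk_score(transcript: str) -> float:
    tokens = transcript.lower().split()
    return sum(HISTORICAL_WEIGHTS.get(tok, 0.0) for tok in tokens)

score = risk_score("the package comes tuesday")
print(score)  # 1.7 -- a correlation tally, nothing about what will actually happen
```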

The actual product Securus is selling to corrections departments is mass surveillance automation. Prisons have always had the legal right to monitor inmate communications. The bottleneck has been human labor—listening to thousands of hours of calls. Securus's AI removes that bottleneck, enabling the continuous, real-time analysis of every single communication.

This shifts the paradigm from investigative monitoring (listening to calls related to a specific suspect or incident) to generalized surveillance (scanning everything, just in case). The economic incentive is clear: Securus can offer a "proactive security" add-on service to its core telecom contracts, creating a new revenue stream from the very data its infrastructure collects.

The Chilling Effect and the Expansion Beyond Prison Walls

The implications ripple outward. For inmates, knowing every word is parsed by an algorithm creates a profound chilling effect on communication with families, lawyers, and counselors. Vital discussions about legal appeals, mental health, or family issues may be stifled for fear of triggering a flag.

More alarming is the potential for mission creep. The technology and legal framework built here won't stay behind prison walls. Securus's parent company, Aventiv, also provides monitoring services for people on probation, parole, and pre-trial release. The logical next step is deploying this AI to scan the communications of hundreds of thousands of people in the community, under the same guise of "preventing crime." This would create a shadow surveillance network for people who have not been convicted of a crime, or who are serving sentences outside of prison.

Furthermore, the technical architecture—voice-to-text transcription, semantic analysis, network mapping of contacts—is directly transferable to other "high-risk" populations or scenarios, from airport security to protest monitoring, with minimal modification.
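
That transferability is easy to see in code: once the stages are written, retargeting the system is a matter of pointing it at a different feed. The sketch below is schematic; every function is a hypothetical stand-in, not Securus's implementation.

```python
from typing import Iterable

# Every function here is a stand-in; the point is that nothing in the
# pipeline is prison-specific -- only the feed of calls changes.
def transcribe(audio: bytes) -> str:
    return audio.decode("utf-8", errors="ignore")   # stand-in for speech-to-text

def semantic_flags(text: str) -> list[str]:
    return [w for w in text.lower().split() if w in {"package", "meet"}]

def map_contacts(metadata: dict) -> set[str]:
    return set(metadata.get("dialed_numbers", []))  # stand-in for network mapping

def surveillance_pipeline(feed: Iterable[tuple[bytes, dict]]):
    for audio, metadata in feed:
        text = transcribe(audio)
        yield {"flags": semantic_flags(text), "contacts": map_contacts(metadata)}

demo_feed = [(b"the package comes tuesday", {"dialed_numbers": ["555-0100"]})]
print(list(surveillance_pipeline(demo_feed)))
# Swap the feed -- prison calls, parolee check-ins, protest livestream audio --
# and the rest of the system runs unchanged.
```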

Accountability in a Black Box

Perhaps the most dangerous aspect is the lack of transparency and accountability. Securus has disclosed no details about the model's accuracy, false positive rate, or how it validates its alerts. Does it mistake discussions of a prison movie plot for a real riot plan? Does it flag a coded conversation about a birthday cake as a drug deal?
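
Even before asking what the model gets wrong, the arithmetic of scale makes the silence on false positives alarming. Take the roughly 70 million monthly calls cited above, and assume, purely for illustration, a 1% false positive rate, that 1 call in 10,000 involves genuine criminal planning, and that the model catches every one of them:

```python
calls_per_month = 70_000_000        # figure cited for Securus's monthly volume
false_positive_rate = 0.01          # assumed for illustration; not disclosed
prevalence = 1 / 10_000             # assumed share of calls with real planning
recall = 1.0                        # generously assume the model misses nothing

true_hits = calls_per_month * prevalence * recall
false_flags = calls_per_month * (1 - prevalence) * false_positive_rate
precision = true_hits / (true_hits + false_flags)

print(f"{false_flags:,.0f} innocent calls flagged per month")   # ~699,930
print(f"{precision:.1%} of flags are real")                     # ~1.0%
```

Under even these generous assumptions, roughly 99 out of every 100 flags would point at innocent conversations, and every one of those flags carries consequences for a real person.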

When a human monitor makes a mistake, there's a chain of responsibility. When an AI system flags a call, resulting in an inmate losing phone privileges, being placed in solitary confinement, or facing new charges, who is accountable? The company will likely point to the human "in the loop" who reviewed the alert, while the prison will point to the "objective" algorithm that brought it to their attention. The algorithm itself, and the corporation that owns it, remains insulated.

There are also no visible legal safeguards or industry standards governing this specific use of AI. Unlike facial recognition, where public backlash has spurred some regulation, the surveillance of inmate communications operates in a legal gray zone, leveraging the reduced privacy rights of incarcerated people.

The Path Forward Demands Scrutiny, Not Acceptance

The piloting of this system is a watershed moment, but not for the reason Securus states. It represents the quiet integration of predictive analytics into one of the most opaque and powerful systems in society: the carceral state.

Moving forward requires immediate action:

  • Demanding Transparency: Legislators and oversight bodies must require Securus and its clients to disclose the model's performance metrics, audit results, and the demographic breakdown of who gets flagged.
  • Asserting Rights: Legal challenges must test the boundaries of this surveillance, especially concerning communications with attorneys, which are supposed to be privileged.
  • Public Debate: The use of such systems should not be a procurement decision made quietly by prison administrators. It requires public debate about the kind of surveillance society we are building, starting with its most vulnerable members.

The truth about AI crime prediction in prisons is that it's a misnomer. It's a tool of control and efficiency, dressed in the clothing of prevention. By seeing it for what it actually is—an unregulated, biased, and expandable system of automated surveillance—we can start asking the hard questions before it becomes an inescapable feature of not just prison life, but of life itself.
