The Surveillance Promise That Fails to Deliver
When Securus Technologies president Kevin Elder announced his company was piloting an AI model trained on years of inmate phone and video calls, the promise was clear: technology could predict and prevent crimes before they happened. The telecom company, which handles communications for approximately 1.2 million incarcerated individuals across the United States, positioned itself at the forefront of what Elder called "proactive security." But a closer examination reveals a system built on flawed assumptions, questionable data, and a fundamental misunderstanding of both human behavior and artificial intelligence's limitations.
According to MIT Technology Review's investigation, Securus began building its AI tools with the explicit goal of scanning calls, texts, and emails for patterns that might indicate planned criminal activity. The company has access to what it describes as "millions of hours" of recorded conversations, a dataset of unprecedented scale and intimacy. Yet this very scale creates the system's first critical flaw: quantity of data doesn't equal quality of insight.
The Training Data Problem: Garbage In, Gospel Out
Securus's model was trained on historical prison communications, but this foundation is fundamentally compromised. Prison phone calls represent an artificial communication environment where participants know they're being monitored, often speak in coded language, and operate under extreme emotional and psychological stress. Training an AI on this distorted dataset creates what experts call "surveillance bias": the system learns patterns specific to monitored environments rather than genuine criminal planning.
"You're essentially teaching the AI to recognize the performance of being monitored," explains Dr. Anya Rodriguez, a computational linguist who studies prison communications. "When people know they're being recorded, they adapt their language, use euphemisms, or avoid certain topics altogether. An AI trained on this data learns to flag the wrong signals."
Consider the practical implications: a conversation about "visiting grandma" might be flagged as potential drug trafficking code, while actual criminal planning might use completely novel language the system has never encountered. The AI becomes excellent at recognizing what past monitored conversations looked like, not what future crimes will sound like.
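To make that failure mode concrete, here is a minimal, hypothetical sketch of a phrase-association flagger of the kind critics describe. The phrases, weights, and threshold are invented for illustration; nothing here reflects Securus's actual model, only the general pattern of learning surface cues from historical monitored calls.

```python
# Hypothetical illustration only: a naive phrase-association flagger.
# The "learned" weights stand in for patterns a model might absorb from
# historical monitored calls; none of this reflects any real system.

SUSPICIOUS_PHRASE_WEIGHTS = {
    "visiting grandma": 0.9,   # euphemism seen in past flagged calls
    "the package": 0.8,
    "my cousin's place": 0.6,
}
FLAG_THRESHOLD = 0.7

def flag_transcript(transcript: str) -> tuple[bool, float]:
    """Return (flagged, score) based on the highest-weighted learned phrase present."""
    text = transcript.lower()
    score = max(
        (weight for phrase, weight in SUSPICIOUS_PHRASE_WEIGHTS.items() if phrase in text),
        default=0.0,
    )
    return score >= FLAG_THRESHOLD, score

# A harmless family call trips the flag because it matches a learned euphemism...
print(flag_transcript("I can't wait to start visiting grandma again"))   # (True, 0.9)

# ...while genuinely novel coded language sails through unflagged.
print(flag_transcript("Tell him the library books are due on Thursday")) # (False, 0.0)
```

The toy code matters less than the asymmetry it exposes: the system confidently flags yesterday's euphemisms while having no purchase on tomorrow's.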
The Accuracy Myth: Numbers That Don't Add Up
Securus hasn't released specific accuracy metrics for its crime prediction model, but similar systems in law enforcement contexts tell a sobering story. COMPAS, a recidivism risk-assessment algorithm used to inform sentencing and parole decisions, was found by ProPublica to be nearly twice as likely to falsely flag Black defendants as future criminals compared to white defendants. When applied to the prison context, where Black Americans are incarcerated at nearly five times the rate of white Americans, the potential for algorithmic discrimination becomes staggering.
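A rough back-of-the-envelope calculation shows why even a seemingly modest error rate is alarming at this scale. The 1.2 million figure comes from Securus's reported reach; the base rate, detection rate, and false positive rate below are assumptions chosen purely to illustrate the arithmetic, not published performance numbers.

```python
# Illustrative arithmetic with assumed rates, not published Securus figures:
# when almost none of the monitored conversations involve real criminal
# planning, even an "accurate"-looking model buries staff in false alarms.

monitored_people = 1_200_000      # reported scale of Securus's reach
true_planning_rate = 0.001        # assume 0.1% are actually planning a crime
true_positive_rate = 0.80         # assume the model catches 80% of real cases
false_positive_rate = 0.05        # assume it wrongly flags 5% of innocent people

actually_planning = monitored_people * true_planning_rate                     # 1,200 people
true_alarms = actually_planning * true_positive_rate                          # 960 flags
false_alarms = (monitored_people - actually_planning) * false_positive_rate   # 59,940 flags

precision = true_alarms / (true_alarms + false_alarms)
print(f"Share of flagged people actually planning a crime: {precision:.1%}")
# Roughly 1.6% -- about 98 of every 100 flags land on someone innocent.
```

Under these assumptions the overwhelming majority of flagged conversations are innocent, which is exactly the base-rate problem human reviewers would then have to absorb.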
Human monitoring, while imperfect, possesses contextual understanding that AI fundamentally lacks. A human listener can distinguish between genuine threats and emotional venting, understand cultural and regional speech patterns, and recognize when someone is speaking metaphorically versus literally. AI systems, particularly those trained on limited datasets, lack this nuance.
"What we're seeing is the automation of bias," says civil rights attorney Marcus Chen. "These systems don't predict crime; they predict who the system already suspects. It's a feedback loop of suspicion that disproportionately impacts communities already over-policed."
The Dangerous Consequences of False Positives
When an AI system flags a conversation as "suspicious," the consequences for incarcerated individuals can be severe: extended solitary confinement, loss of visitation privileges, delayed parole hearings, or additional criminal charges. Unlike human monitoring, where corrections officers can exercise discretion, AI systems create an illusion of objectivity that is difficult to challenge.
"How do you appeal an algorithm's decision?" asks Rodriguez. "When a human officer makes a judgment call, there's at least a chain of reasoning you can examine. With AI, you get a confidence score and maybe some highlighted keywords. The 'black box' problem becomes a human rights problem."
The Privacy Paradox in a Captive Market
Incarcerated individuals have severely limited privacy rights, but that doesn't justify unlimited surveillance. Securus operates in what amounts to a captive market: most prisons and jails contract with a single telecom provider, leaving inmates with no alternative if they want to maintain family connections. This power imbalance raises ethical questions about consent and data usage that go largely unaddressed.
The company's terms of service, which inmates must accept to use communication services, grant broad permissions for monitoring and analysis. But meaningful consent is impossible when the alternative is complete isolation from loved ones. "It's consent under duress," Chen argues. "When your choices are 'agree to surveillance' or 'never speak to your children again,' that's not real consent."
A Better Path Forward: Technology That Actually Helps
The tragedy of Securus's approach isn't just its flaws; it's the missed opportunity. The same communications infrastructure and AI capabilities could be deployed to actually improve outcomes for incarcerated individuals and enhance public safety in meaningful ways.
Consider alternative applications (one possibility is sketched after this list):
- Rehabilitation support: AI could identify individuals expressing genuine interest in educational programs, job training, or substance abuse treatment and connect them with appropriate resources.
- Mental health monitoring: Systems could flag conversations indicating severe depression, suicidal ideation, or other mental health crises and alert appropriate professionals.
- Family connection strengthening: Technology could help maintain family bonds, a proven factor in reducing recidivism, by facilitating easier communication and visitation scheduling.
- Legal assistance: AI could help inmates navigate complex legal systems or identify when they're receiving inadequate legal representation.
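As a contrast to the flagger sketched earlier, here is an equally hypothetical sketch of the same pattern matching pointed at support rather than punishment. The categories, keywords, and staff roles are invented for illustration and are not part of any real Securus product.

```python
# Hypothetical sketch: the same kind of text matching, routed toward support
# instead of discipline. Categories, keywords, and destinations are invented.

SUPPORT_ROUTES = {
    "education":     ({"ged", "college course", "job training"}, "education coordinator"),
    "mental_health": ({"can't go on", "hopeless", "hurt myself"}, "crisis counselor"),
    "family":        ({"visitation", "call my kids", "parenting class"}, "family services"),
}

def route_for_support(transcript: str) -> list[str]:
    """Return the support staff who should follow up, based on what the person asked for."""
    text = transcript.lower()
    return [
        destination
        for _, (keywords, destination) in SUPPORT_ROUTES.items()
        if any(keyword in text for keyword in keywords)
    ]

print(route_for_support("I want to sign up for the college course and see my kids at visitation"))
# ['education coordinator', 'family services']
```

The underlying capability is identical to the surveillance use case; the difference is whether a match triggers a referral or a punishment.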
"We're using advanced technology to solve the wrong problem," Rodriguez observes. "Instead of asking 'How can we catch more people planning crimes?' we should be asking 'How can we help more people successfully reenter society?' The technology is the same; the values driving its application are completely different."
The Bottom Line: Surveillance Isn't Safety
Securus Technologies' AI crime prediction model represents a dangerous convergence of technological overreach and carceral logic. It promises safety through surveillance while delivering bias through algorithms. The system's fundamental flaw isn't technical; it's philosophical. It assumes that more monitoring equals more security, when decades of research show that genuine safety comes from addressing root causes: poverty, lack of opportunity, mental health issues, and broken social systems.
As this technology rolls out across more correctional facilities, we must ask harder questions than "Does it work?" We must ask "What world does it create?" and "Who bears the costs of its mistakes?" The reality about AI crime prediction is this: it's not a technological breakthrough in public safety. It's the digital reinforcement of existing inequalities, dressed in the respectable clothing of algorithms and data.
The most dangerous prediction these systems make isn't about future crimes. It's about our willingness to trade human judgment for automated suspicion, and our comfort with subjecting vulnerable populations to technological experiments they never consented to join.