
The False Promise of AI Security


This article was authored by Dallin Larson, a Research Intern at Libertas Institute.

AI weapon detection systems have a concerning track record of failure in public schools, jeopardizing not only students' privacy but their safety as well.

The Cost of False Security

Imagine that a student brings a large knife to your child's school and the AI weapon detection system fails to detect it; later, that student stabs your child's friend. Meanwhile, your child's lunchbox is searched because the same system mistakenly flagged a sandwich as a potential bomb.

These incidents actually occurred in a New York school. Utah has now entered the mix, with over 200 schools adopting AI weapon detection systems that consistently fall short of their developers' lofty promises.

Overpromised and Underperforming

Companies like Evolv, the maker of the Evolv Express system installed in the New York school where the stabbing occurred, have deployed their technology in over 800 schools across 40 states. They have won those contracts on the strength of weighty promises to parents that the reliability of their technology will make their children's schools much safer.

Evolv's chief executive, Peter George, said that the company's latest technology is capable of detecting "all the guns, all the bombs and all the large tactical knives." And yet when organizations like NCS4 tested the equipment, Evolv's technology failed to identify large knives 42% of the time. Meanwhile, some schools have reported false alarm rates as high as 60%.

[Photo: The Evolv Express weapons detection system flags a weapon that Dominick D'Orazio, an Evolv Technology account executive, wears on his hip while demonstrating the system, May 25, 2022, in New York. (AP Photo/Mary Altaffer)]

The Federal Trade Commission (FTC) has even taken legal action against Evolv for falsely claiming that its AI weapon detectors can identify weapons while ignoring harmless personal items. This highlights a concerning trend: companies specializing in AI weapon detection technology are blatantly overstating the capabilities of their products, leaving teachers, parents, and students to bear the consequences.

Invasion of Privacy

This has frightening implications for the privacy and mental wellbeing of our children as well. ZeroEyes, an AI weapon detection company serving many Utah schools, mistakenly identified a student's arm cast as a weapon at a Texas high school. The error triggered a school-wide lockdown, sent parents into a panic, and led to an unwarranted search of the student's belongings. Similarly, prop guns used in a New York high school production were incorrectly flagged, prompting a needless lockdown and police response.

The constant stream of false positives highlights the ineffectiveness of this so-called “novel technology” while fostering a culture of fear and panic in our schools. A member of the Department of Homeland Security School Safety Advisory Board was correct in stating that “Surveillance hurts kids, false senses of security hurts kids. False alarms, false positives for weapons and bombs hurts kids.”

Our students' safety at school is important. But the illusion of AI weapon detection does not improve their security; it only strips away their privacy. This is not a trade-off parents should be forced to make.