In an age where artificial intelligence (AI) and surveillance technology are rapidly advancing, it is crucial that we, as a society, take a step back and critically examine the potential consequences of their unchecked deployment. The recent incidents involving Flock Safety’s license plate-reading cameras serve as a stark reminder of the urgent need for legislation to protect the public from the dangers of invasive AI surveillance.

The stories of Jaclynn Gonzales, Michael Smith, and the Aurora mother and her children are just a few examples of the traumatic experiences that can result from the misuse of AI surveillance technology. These individuals were wrongfully detained, handcuffed, and held at gunpoint because of errors made by machines that law enforcement trusted to provide accurate information. The psychological impact of such experiences cannot be overstated, particularly when they disproportionately affect people of color and other marginalized communities.

It is deeply concerning that Flock Safety, a company whose technology is being used by the Fort Worth Police Department and other law enforcement agencies across the country, has refused to allow independent testing of its latest products. Third-party evaluations, such as those conducted by IPVM, are essential for ensuring transparency and accountability in the use of surveillance technology. Without such testing, the public and the police departments that rely on these systems are left in the dark about their limitations and potential for error.

While proponents of AI surveillance technology often point to anecdotal evidence of its effectiveness in solving crimes and keeping communities safe, it is crucial that we look beyond these success stories and consider the broader implications for privacy and civil liberties. The deployment of license plate-reading cameras and other forms of AI surveillance can create a chilling effect on free speech and freedom of movement, as individuals may feel constantly watched and monitored by an all-seeing, unaccountable system.

Moreover, the integration of facial recognition technology with AI surveillance systems raises even greater concerns. Although Flock Safety claims that its cameras do not currently employ facial recognition software, there is nothing to prevent law enforcement agencies from running such technology on the footage collected by these cameras at a later date. The potential for abuse and misuse of facial recognition technology is well documented (consider the cases of Atlanta attorneys arrested after being misidentified by AI scans of public footage), with studies showing that it is often less accurate when identifying people of color, leading to a higher risk of wrongful arrests and detentions.

AI Tech Is In Your Community With No Rules

As AI surveillance technology becomes more prevalent in our communities, it is essential that we have robust legislation in place to protect the public from its potential harms. This legislation should, at a minimum, require independent testing and evaluation of all AI surveillance systems before they are deployed, to ensure that they are accurate, reliable, and free from bias. It should also establish clear guidelines for the use of such technology by law enforcement agencies, including strict limits on the collection, retention, and sharing of personal data.

Furthermore, legislation should mandate transparency and accountability in the use of AI surveillance technology. Law enforcement agencies should be required to publicly disclose their use of such systems, along with detailed information about their capabilities, limitations, and potential for error. There should also be mechanisms in place for individuals to challenge the use of AI surveillance technology in their cases and to seek redress for any harms caused by its misuse.

Drawing inspiration from the EU AI Act, lawmakers in the United States and other countries could consider implementing a risk-based approach to regulating AI systems. This would involve classifying AI applications according to their potential for harm, with the most stringent regulations applied to “high-risk” systems that pose significant threats to individual rights, public safety, or democratic processes.

For example, AI systems used in critical infrastructure, law enforcement, education, employment, and access to essential services could be subject to mandatory requirements for transparency, accountability, and human oversight. Providers of these high-risk systems would need to conduct rigorous testing and evaluation to ensure accuracy, robustness, and fairness, and to mitigate potential biases and errors. They would also be required to provide clear instructions and documentation to downstream users, enabling them to deploy the systems responsibly and in compliance with applicable regulations.
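To make the tiered model concrete, here is a minimal sketch in Python of how such a classification might work. The tier names, domain list, and obligations below are simplified illustrations loosely modeled on the EU AI Act's approach, not the Act's actual legal categories:

```python
# Illustrative sketch only: these tiers and obligations are a simplified,
# hypothetical rendering of a risk-based framework, not legal text.

HIGH_RISK_DOMAINS = {
    "critical_infrastructure",
    "law_enforcement",
    "education",
    "employment",
    "essential_services",
}

TIER_OBLIGATIONS = {
    "unacceptable": ["prohibited from deployment"],
    "high": [
        "independent testing and evaluation for accuracy and bias",
        "mandatory human oversight",
        "transparency documentation for downstream users",
    ],
    "limited": ["disclose to users that they are interacting with an AI system"],
    "minimal": ["voluntary codes of practice"],
}

def classify_risk(domain: str, manipulates_behavior: bool = False) -> str:
    """Map an AI application's domain to a hypothetical regulatory tier."""
    if manipulates_behavior:
        return "unacceptable"
    if domain in HIGH_RISK_DOMAINS:
        return "high"
    if domain == "chatbot":
        return "limited"
    return "minimal"

tier = classify_risk("law_enforcement")
print(tier, TIER_OBLIGATIONS[tier])
```

A real statute would of course define these categories in legal language rather than code; the point is simply that a risk-based framework gives regulators and providers a clear decision procedure for which obligations apply to a given system.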

Time To Press Congress To Choose Privacy Over Corporations

In addition, lawmakers could consider specific provisions to address the challenges posed by general-purpose AI (GPAI) models, which can perform a wide range of tasks and be integrated into many downstream applications. Following the EU AI Act’s approach, providers of GPAI models could be required to adhere to certain obligations, such as publishing summaries of their training data, respecting copyright law, and cooperating with downstream users to ensure compliance.

For GPAI models that present systemic risks due to their scale and capabilities, additional requirements could be imposed, such as conducting adversarial testing, assessing and mitigating potential risks, and reporting serious incidents to regulatory authorities. The development of voluntary codes of practice, in collaboration with industry, academia, and civil society, could help to establish best practices and standards for the responsible development and deployment of GPAI models.
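As a rough illustration, the sketch below shows how a provider's obligations might scale once a model crosses a systemic-risk threshold. The obligation lists are simplified assumptions; the compute threshold reflects the figure cited in the EU AI Act (10^25 floating-point operations of training compute), but the rest is hypothetical:

```python
# Illustrative sketch: obligation lists are simplified assumptions modeled
# loosely on the EU AI Act's GPAI provisions.

BASE_GPAI_OBLIGATIONS = [
    "publish a summary of training data",
    "respect copyright law",
    "provide documentation to downstream users",
]

SYSTEMIC_RISK_OBLIGATIONS = [
    "conduct adversarial (red-team) testing",
    "assess and mitigate systemic risks",
    "report serious incidents to regulators",
]

# The EU AI Act presumes systemic risk above a training-compute threshold
# of 10**25 floating-point operations.
SYSTEMIC_RISK_FLOPS = 10**25

def gpai_obligations(training_flops: float) -> list[str]:
    """Return the obligations that would apply to a GPAI provider."""
    obligations = list(BASE_GPAI_OBLIGATIONS)
    if training_flops >= SYSTEMIC_RISK_FLOPS:
        obligations += SYSTEMIC_RISK_OBLIGATIONS
    return obligations

print(gpai_obligations(3e25))  # crosses the threshold: all six obligations apply
```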

By adopting a risk-based, tiered approach to AI regulation, inspired by the EU AI Act, lawmakers can create a framework that balances the need for innovation and progress with the imperative to protect individual rights, promote public trust, and safeguard democratic values in the face of rapidly advancing AI technologies.

The time for action is now. As AI surveillance technology continues to advance at a rapid pace, we cannot afford to wait until more lives are disrupted and more communities are harmed before we take steps to protect the public. We must sound the alarm with lawmakers at the state and federal levels, urging them to pass comprehensive legislation that puts the rights and well-being of individuals first.

In the words of Jay Stanley from the ACLU, “These capabilities are things that human beings in the entire history of the world have never seen before. They are brand new, they are very powerful, and we need to think very carefully about what they’re doing to our communities.” It is our collective responsibility to ensure that the deployment of AI surveillance technology is guided by principles of transparency, accountability, and respect for civil liberties. Only then can we harness the potential benefits of these powerful tools while minimizing their risks and protecting the rights of all individuals.

Where do we go from here?

The path forward is clear. We must demand that our elected officials take swift and decisive action to regulate the use of AI surveillance technology. This means reaching out to our state representatives, senators, and members of Congress, and making our voices heard. We must organize and mobilize our communities, building coalitions of concerned citizens, civil liberties organizations, and other stakeholders who share our commitment to protecting privacy and preventing the abuse of power.

We must also work to educate the public about the risks and implications of AI surveillance technology. Many people may not be aware of the extent to which these systems are already being used in their communities, or of the potential for misuse and abuse. By raising awareness and fostering a public dialogue about these issues, we can build the groundswell of support needed to drive meaningful change.

At the same time, we must continue to hold technology companies like Flock Safety accountable for their actions. We cannot allow these companies to operate with impunity, shielded from independent scrutiny and public oversight. We must demand that they prioritize transparency, accountability, and respect for civil liberties in the development and deployment of their products.

Ultimately, the fight to protect the public from invasive AI surveillance is a fight for the soul of our democracy. It is a fight to ensure that our fundamental rights and freedoms are not sacrificed in the name of public safety or corporate profits. It is a fight to ensure that we, as individuals and as a society, retain control over our own lives and destinies, free from the prying eyes of an all-seeing, unaccountable surveillance state.

The road ahead will not be easy, but it is a road we must travel. The stakes are simply too high to ignore the dangers posed by unchecked AI surveillance. We owe it to ourselves, to our children, and to future generations to take a stand now, before it is too late.

A Few Things To Think About

In the words of Edward Snowden, the whistleblower who exposed the extent of government surveillance programs, “The liberties of a people never were, nor ever will be, secure, when the transactions of their rulers may be concealed from them.” Let us heed these words, and let us work together to build a future in which the transactions of our rulers — and the tools they use to monitor and control us — are subject to the full light of public scrutiny and democratic accountability.

Let us seize this moment and work together to create a world in which the power of AI surveillance technology is harnessed for the good of all, rather than the benefit of a few. Let us stand up for our rights, our freedoms, and our democracy, and fight for a future in which the promise of technology is realized not as a tool of oppression, but as a means of empowerment and liberation for all.

Issen Alibris is the founder of AI Learning Labs, a blog and course platform where he shares AI news and teaches everyday people how to use AI.