The rise of AI and biometrics in surveillance technology
Almas Team
Artificial intelligence is rapidly being adopted around the world. The most controversial area is the use of AI surveillance tools to monitor and track people. Some schemes are lawful, others clearly violate human rights, and many fall somewhere in between, on murky ground. With the market for professional video surveillance expected to grow from $18.2 billion in 2018 to $19.9 billion in 2019, AI surveillance isn’t going to go away anytime soon.
What is AI?
In general terms, the goal of artificial intelligence is to “make machines intelligent” by automating or replicating behaviour that “enables an entity to function appropriately and with foresight in its environment,” according to computer scientist Nils Nilsson. AI is not one specific technology. Instead, it is more accurate to think of AI as an integrated system that incorporates information acquisition objectives, logical reasoning principles, and self-correction capacities. An important AI subfield is machine learning, which is a statistical process that analyses a large amount of information in order to discern a pattern to explain the current data and predict future uses.
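To make that definition more concrete, here is a minimal, purely illustrative sketch, with made-up numbers and no connection to any real surveillance product, of what it means for a statistical model to discern a pattern in existing data and then predict an outcome for a new case. It uses Python and the scikit-learn library:

```python
# Illustrative only: a toy "machine learning" example.
# A statistical model is fitted to past observations so it can
# discern a pattern and predict outcomes for new, unseen cases.
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: [hour_of_day, people_in_frame]
# and whether an operator flagged the scene as unusual (1) or not (0).
past_observations = [
    [2, 35], [3, 40], [14, 5], [15, 8],
    [1, 50], [13, 3], [23, 45], [12, 6],
]
operator_flags = [1, 1, 0, 0, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(past_observations, operator_flags)  # discern the pattern

new_scene = [[22, 38]]                        # a case the model has never seen
print(model.predict(new_scene))               # predict: flag it or not?
```

The point is simply that the ‘intelligence’ is statistical: the model generalises from past examples rather than following rules a human has written out explicitly.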
Who is watching?
Unsurprisingly, AI’s impact extends well beyond individual consumer choices. It is starting to transform basic patterns of governance, not only by providing governments with unprecedented capabilities to monitor their citizens and shape their choices but also by giving them the new capacity to disrupt elections, elevate false information, and delegitimize democratic discourse across borders.
It’s difficult to know what AI tools are being used and who is using them. The AI Global Surveillance (AIGS) Index is the first survey of its kind to examine the use of AI surveillance worldwide. The index compiled empirical data on AI surveillance use for 176 countries around the world. It does not distinguish between legitimate and unlawful uses of AI surveillance. Rather, the purpose of the research is to show how new surveillance capabilities are transforming the ability of governments to monitor and track individuals or systems. It specifically asks:
- Which countries are adopting AI surveillance technology?
- What specific types of AI surveillance are governments deploying?
- Which countries and companies are supplying this technology?
Key findings
The report found that AI surveillance technology is spreading at a fast rate. At least 75 out of 176 countries are actively using AI for surveillance purposes. The pool of countries using AI for surveillance is heterogeneous: all regions and political systems are represented. Not surprisingly, countries with authoritarian regimes are heavily investing in AI surveillance. Countries across the Gulf, East Asia and South/Central Asia are using analytical systems, facial recognition cameras and monitoring systems which use biometrics. The most common applications are:
- Smart city/safe city platforms (56 countries)
- Facial recognition systems (64 countries)
- Smart policing (52 countries)
China is a major driver of AI surveillance worldwide. Chinese companies including Huawei, Hikvision, Dahua and ZTE supply surveillance technology to 63 countries. Of these, 36 countries have signed up to China’s Belt and Road Initiative. Huawei alone is responsible for providing AI surveillance technology to at least fifty countries worldwide. The next largest non-Chinese supplier of AI surveillance tech is Japan’s NEC Corporation (fourteen countries).
Chinese product pitches are often accompanied by soft loans to encourage governments to purchase their equipment. These tactics are particularly relevant in countries like Kenya, Laos, Mongolia, Uganda, and Uzbekistan—which otherwise might not access this technology. This raises troubling questions about the extent to which the Chinese government is subsidising the purchase of advanced repressive technology.
US companies are also active in supplying surveillance technology worldwide. The most significant companies are IBM (11 countries), Palantir (9 countries) and Cisco (6 countries).
Democracy vs security
Even advanced democracies are struggling to balance security with civil liberty. In the US, Baltimore police secretly deployed aerial drones to carry out daily surveillance of the city’s residents. Baltimore police also deployed facial recognition cameras to monitor and arrest protestors during riots in 2018. In Arizona, on the US-Mexico border, Israeli defence contractor Elbit has built dozens of towers which can ‘spot people from 7.5 miles away’. Elbit perfected this technology on a smart fence separating Jerusalem from the West Bank.
In Marseille, local authorities have adopted thousands of CCTV cameras, defending the network as a means to create the ‘first safe-city of France and Europe’. In Valenciennes, Huawei gifted a showcase surveillance system to demonstrate the safe-city model. The package included high-definition CCTV and an intelligent command centre powered by algorithms which detect unusual movements and crowd formations.
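For the technically curious: we don’t know how any particular command centre works internally, but the kind of ‘unusual movement’ detection described above is typically an anomaly-detection problem. Below is a minimal, hypothetical sketch in Python using scikit-learn’s IsolationForest, with invented crowd-density and walking-speed figures, just to illustrate the general idea:

```python
# Illustrative only: a generic anomaly-detection sketch, not any vendor's system.
# An IsolationForest is trained on "normal" crowd measurements and then
# scores new measurements; outliers are flagged as unusual.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per minute of footage: [crowd_density, average_walking_speed]
normal_minutes = rng.normal(loc=[0.3, 1.4], scale=[0.05, 0.2], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_minutes)            # learn what "normal" looks like

# A sudden dense, fast-moving crowd versus a typical minute.
new_minutes = np.array([[0.9, 3.0], [0.31, 1.38]])
print(detector.predict(new_minutes))    # -1 = flagged as unusual, 1 = normal
```

A real system would work on far richer features extracted from video, but the principle is the same: learn what ‘normal’ looks like, then flag whatever deviates from it.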
State surveillance is not inherently unlawful, but in an age when many of us have little faith in government, the application of such systems raises big questions. Clearly, we all want to feel safe, in our homes, at work and in the wider world. This is a basic human need. Tracking tools help to prevent terrorism. They help deter dangerous acts and resolve incidents when they do occur. They have many, many positive applications which help to keep us all safe.
However, technology has changed the way governments carry out surveillance and what they choose to monitor. Because the government is ultimately the de facto power in any country, there is at present no legislation which can prevent the abuse or misuse of surveillance AI. It’s a case of ‘who watches the watchmen?’[1]
Why we must have conversations
AI and biometrics are not going to go away, and we need to have conversations, lots of them, about how we manage their impact on our lives. The general public are increasingly aware of the use of AI – for example, Amazon’s Alexa app – and many people are rightly concerned about how seemingly benign applications could be misused for surveillance. Disquieting questions are surfacing regarding the accuracy, fairness, methodological consistency, and prejudicial impact of advanced surveillance technologies. Governments have an obligation to provide better answers and fuller transparency.
Here at Almas, we know that technology does make the world a better place. We design, build and manufacture our own biometric fingerprint readers, and we are (just a little!) proud of them. We understand that security is important, but that it must always be balanced with the need for privacy. If you have a question about biometrics or CCTV, get in touch with our friendly team who will do their best to help you. Call us on 0333 567 6677 or 01 68 333 68 or drop us an email to [email protected].
This article has drawn directly from the work of Steven Feldstein, a non-resident fellow in Carnegie’s Democracy, Conflict and Governance Programme. If you would like to read Steven’s commanding and insightful AI report in full, head over here.
[1] Quis custodiet ipsos custodes? A Latin phrase found in the work of the Roman poet Juvenal.