The Nice Urban Supervision Centre
The Nice Urban Supervision Centre (USC) is the operational command controlling all the cameras of the city’s video surveillance network. It operates 24/7 and has three objectives:
Enhancing public security by deterring and preventing attacks on the safety of people and property. Video footage is notably used in criminal investigations and for the identification of offenders.
Preventing natural disasters and risks, conducting search and rescue operations, and protecting against fires. For example, CCTV is used to monitor riverbanks and sea coasts, providing both contextual and specific views of those areas without putting officers at risk. The images are also used by rescue services to identify the origin of natural phenomena, monitor their development and mobilise the most appropriate means of response.
Monitoring traffic and urban circulation with technologies such as predictive tools that anticipate traffic conditions and thermal cameras that detect people and obstacles to circulation; limiting uncivil behaviour such as double parking; and providing real-time video protection of the tramway network.
Considerations when developing and deploying facial recognition technologies
Given the complexity of this technology and the ubiquity of its potential uses, it is crucial to consider how to ensure the protection of fundamental rights and freedoms while responding to security needs. How can anonymity be preserved in public spaces? What forms of surveillance are acceptable without raising fears in society and negatively influencing people’s feelings of insecurity and unsafety? While technology offers a broad range of opportunities for the protection of public spaces, these solutions have to be developed in concordance with physical protection measures and must be designed to respect privacy regulations. Efus’ working group on Security & Innovation drafted a series of considerations for local and regional authorities:
Working towards a clear legal and regulatory framework: considering the speed and complexity of new developments in facial recognition technology, the European Union is planning to reassess existing legal frameworks, such as the GDPR, and to consider new legal requirements. In its white paper on artificial intelligence, the Commission outlines the aspects these requirements relate to: training data, keeping of records and data, information to be provided, robustness and accuracy, and human oversight. Sharing local experiences, problems encountered and lessons learnt at a European level can help anchor such requirements in the real needs of European cities and regions.
Assessing the impact on fundamental rights: given that facial recognition technologies affect a whole range of fundamental rights, it is important to assess these impacts, to different extents, at both the development and the deployment stages of the algorithms.
Evaluating necessity and proportionality: prior to deploying facial recognition technology, a city or region must develop a clear, evidence-based understanding of the local urban security situation. The information gathered during a safety audit can help frame considerations of necessity and proportionality in order to find the right balance between the benefits and the risks of using facial recognition. This includes evaluating which public spaces should be outfitted with the technology, for which reasons and to address which problems.
Monitoring facial recognition technology: when a law enforcement agency uses facial recognition software, it is paramount that agents verify the results. They should evaluate whether a match is accurate and decide on an appropriate response. The accuracy and efficiency of the software itself should be monitored by independent supervisory bodies.
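The verification principle above can be sketched in code. This is a hypothetical illustration only: the names, fields and threshold are assumptions, not taken from any real facial recognition product or from the procedures described in this article. The point it demonstrates is that software output is only ever a candidate for human review, never a trigger for automatic action.

```python
from dataclasses import dataclass

# Hypothetical sketch: the threshold value and data fields are assumptions
# chosen for illustration, not drawn from any real system.
REVIEW_THRESHOLD = 0.90  # below this, a candidate match is discarded outright


@dataclass
class Match:
    watchlist_id: str
    score: float  # similarity score reported by the software, in [0.0, 1.0]


def triage(match: Match) -> str:
    """Route every software match to a human operator; never act automatically.

    Returns "discard" for weak matches and "human_review" otherwise:
    even a high score remains a candidate until an agent confirms it.
    """
    if match.score < REVIEW_THRESHOLD:
        return "discard"
    return "human_review"


print(triage(Match("WL-042", 0.97)))  # human_review
print(triage(Match("WL-007", 0.55)))  # discard
```

A design note: the sketch deliberately has no branch that confirms an identification on its own, mirroring the requirement that agents evaluate every match and that independent bodies monitor the software's accuracy over time.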
A proper understanding of the technology: local authorities often rely on externally developed technology, in which case it can be hard to understand how the facial recognition software works and to evaluate it. In order to ensure that fundamental rights, such as the right to non-discrimination and data protection, are integrated not only in the deployment but also in the development of the technology, such considerations must be part of the procurement process (FRA, 2019).
Adequate police training: depending on the quality of the software used, law enforcement agencies may receive a large number of hits. Interactions with people who were matched to a face on a watchlist need to follow the same principles of respect as any other interaction. Again, awareness of the software’s potential fallibility and inaccuracy is important: a match does not necessarily mean that a person has been correctly identified or authenticated. Training law enforcement officers to handle such situations can help ensure calm and dignified interactions with the public.
The 2021 European Union Artificial Intelligence (AI) Act
In April 2021, the European Commission published a proposal for a regulatory framework on the use of AI. This framework was conceived as a response to insufficient existing legislation and sets out rules to enhance transparency and minimise risks to fundamental rights. The document focuses on high-risk AI systems, including, amongst others, the use of crime forecasting software and facial recognition in urban spaces. These high-risk uses can only be put in place if they fulfil a number of requirements, such as the use of high-quality datasets, the establishment of appropriate documentation to enhance traceability, the sharing of adequate information with users, and the design and implementation of appropriate human oversight measures.