AUSTRALIAN SECURITY INDUSTRY TAKES AN ETHICAL STAND ON AI
By Steve Cropper, Reputation Australia
Artificial Intelligence is being used in security technology designed to improve people’s safety and wellbeing and to speed up security processes. But could it be abused, and who is steering its ethical application?
Automatic Facial Recognition (AFR) is one of many data analysis technologies under the umbrella of Artificial Intelligence (AI), a branch of Computer Science.
It is an advanced tool used by the security industry, but there is no single global ethical framework for the safe use of AI.
Now, the Australian Security Industry Association Ltd (ASIAL) has begun a consultation process with security companies, regulators and other stakeholders aimed at developing its own ethical framework for responsible use of AFR in Australia.
AFR is used by a range of organisations across multiple sectors to recognise a face in a crowd or at a building entrance and match it against faces stored in a database.
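At its core, AFR typically works by converting each face image into a numeric "embedding" and comparing it against embeddings enrolled in a database. The Python sketch below illustrates only that matching step; the embedding size, the similarity threshold, the function names and the randomly generated "faces" are illustrative assumptions, not any vendor's actual implementation (a real system would obtain embeddings from a trained face-recognition model).

import numpy as np

# Illustrative sketch only: a real AFR system would obtain these embeddings
# from a trained face-recognition model, not from random numbers.
EMBEDDING_DIM = 512      # assumed embedding size
MATCH_THRESHOLD = 0.6    # assumed similarity cut-off

def cosine_similarity(a, b):
    # Similarity between two face embeddings (1.0 means identical direction).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(probe, database):
    # Return the best-matching identity, or None if nothing clears the threshold.
    best_id, best_score = None, MATCH_THRESHOLD
    for identity, stored in database.items():
        score = cosine_similarity(probe, stored)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Hypothetical watch-list of enrolled faces (names are placeholders).
rng = np.random.default_rng(0)
database = {"person_a": rng.normal(size=EMBEDDING_DIM),
            "person_b": rng.normal(size=EMBEDDING_DIM)}
probe = database["person_a"] + rng.normal(scale=0.1, size=EMBEDDING_DIM)
print(match_face(probe, database))  # prints "person_a"

Where that threshold sits matters in practice: set it too low and the system wrongly matches innocent people; set it too high and it misses genuine matches. That trade-off is one reason the bias testing and flaw reporting discussed later in this article are central to any ethical framework.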
There are obvious benefits for policing, border control and anti-terrorism, even for preventing shoplifting, and the benefits to the general public are also clear.
But could this technology be abused, infringe on people’s rights to privacy or unfairly ‘profile’ them?
The People’s Republic of China has a well-documented record of human rights violations and is known to use AI, specifically facial recognition, for population control.
China’s social credit system has been compared to Black Mirror, Big Brother and every other dystopian future sci-fi writers can dream up, but the reality is more complicated, and in some ways worse.
The system is built on surveillance tech like AFR and has expanded into all aspects of life, judging citizens’ behaviour and trustworthiness.
People caught jaywalking, failing to pay a fine or even playing music too loudly on the train can lose certain rights and access to goods and services.
There is also the more sinister use of the technology to profile ethnic and religious groups who are out of step or out of favour with the regime in Beijing.
Some ardent conspiracy theorists might suspect that security tech like that could be used in Australia to control citizens, but this is a long way from the common-sense reality.
And yet, Australia has no ethical reference for appropriate use of AI technology in security work, nor do most other countries.
In fact, the only readily available framework comes from the Organisation for Economic Co-operation and Development (OECD), which identifies five complementary values-based principles for the responsible stewardship of trustworthy AI:
1. AI should benefit people and the planet by driving inclusive growth, sustainable development and wellbeing.
2. AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards, e.g. enabling human intervention where necessary, to ensure a fair and just society.
3. There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
4. AI systems must function in a robust, secure and safe way throughout their life cycles, and potential risks should be continually assessed and managed.
5. Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
ASIAL has been researching how the issue of ethical AI is being approached in the UK and US with a view to developing a uniquely Australian ethical framework.
The association has been consulting widely with industry, legislators, regulators and ethicists and plans to release an Ethical AFR Framework for system designers, installers/integrators and end-users.
In the absence of any guidelines or legislation from Parliament, the industry must self-regulate and instil in its member companies a sense of ethical responsibility when applying technologies like Automatic Facial Recognition to keep people and property safe.
This is not a quick or easy process, but ASIAL CEO Bryan de Caires says it has to be done.
“It is appropriate that the industry address these issues now,” said Mr de Caires. “The technology is being incorporated into more and more surveillance and security systems in the private and public sectors, in retail, transport and more, so it is crucial that it is done the right way and protects people from danger but also protects the rights of law-abiding citizens.”
“Relevant training must be given to staff involved in AFR and the technology cannot be inherently biased along gender, ethnic or cultural lines. And we need a reliable reporting system if flaws in a system are detected that could lead to unethical use of the technology.”
Mr de Caires said that ethical storage and legal use of databases are two more key issues.
The industry is also concerned to ensure that the use of AFR is proportionate to its purpose: that the problem being solved justifies appropriate and ethical use of the technology.
It is likewise determined to establish that there is a lawful basis for processing personal data and to ask whether the same objective could be achieved by other, less intrusive measures.
The draft ethical framework for the responsible use of Automated Facial Recognition will be circulated in the coming months to members and stakeholders for comment.