FAIR PLAY
from Change Agent
by cxoinsightme
THE IMPORTANCE OF ETHICS IN AI AND WHY IT MATTERS
With AI rapidly moving from pilot projects into production, it has become imperative for enterprises to address the issue of ethics in this fast-evolving technology. Though AI has spawned many new use cases, it is also plagued by potential issues, including algorithmic bias. Ethical AI emphasises transparency and the development of fairer AI practices, with clearly stated guidelines on the legitimate uses of AI.
Recently, a 2020 report by the MIT Sloan Management Review and Boston Consulting Group found that worldwide spending on AI was expected to hit $50 billion that year and $110 billion annually by 2024. Needless to say, AI is essential across a vast array of industries, including e-commerce, healthcare, banking, retail, manufacturing, and more. As the use of AI extends beyond simply automating repetitive tasks to more sophisticated work such as strategic decision-making, the areas of ethical concern become more complex.
“AI presents three major areas of ethical concern for society: privacy and surveillance, bias and discrimination, and the role of human judgment,” says Stephen Gill, Academic Head of the School of Mathematical and Computer Sciences, Heriot-Watt University Dubai. “Debates about privacy safeguards and about how to overcome bias in algorithmic decision-making in sentencing, parole, and employment practices are by now familiar. Evidence has already emerged that algorithms tend to have embedded racial and gender biases. For example, a few smartphone applications have been called out for withholding financial services advertising from female users.”
Additionally, facial recognition technologies have been found to disproportionately misidentify people of colour. Therefore, ethical AI management is essential for decision-making and to ensure that such biases are not perpetuated, he says.
Zayed Abu Alhaj, Regional Vice President, Middle East, at Cloudera, says advances in AI mean that we have moved from building systems that make decisions based on human-defined rules to systems trained on data. This makes it possible to build AI systems with no ethical considerations built in: an unconstrained AI system will be optimised for whatever objective it is given, and nothing else.
For example, a system designed to approve loans may unfairly penalise particular demographics that are underrepresented in the training data. This clearly has a negative impact on members of those demographics and potentially on the service provider. It may also place the provider in violation of organisational or industry guidelines or, in some cases, even the law.
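To make the loan example concrete, here is a minimal Python sketch of the kind of fairness check a provider might run on a model's outputs. The groups, predictions, and the 80% threshold are purely illustrative assumptions, not a reference to any specific system mentioned here.

```python
# Minimal sketch (illustrative only): checking a loan-approval model's outcomes
# for disparity across demographic groups. Data and threshold are hypothetical.
from collections import defaultdict

# Hypothetical model outputs: (group, approved) pairs from a validation set.
predictions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: {"approved": 0, "total": 0})
for group, approved in predictions:
    counts[group]["total"] += 1
    counts[group]["approved"] += int(approved)

rates = {g: c["approved"] / c["total"] for g, c in counts.items()}
print("Approval rates by group:", rates)

# Disparate-impact style ratio: lowest group rate divided by highest.
ratio = min(rates.values()) / max(rates.values())
if ratio < 0.8:  # the "80% rule" of thumb; the right threshold is context-specific
    print(f"Warning: approval-rate ratio {ratio:.2f} suggests possible bias")
```

A check like this does not prove or disprove discrimination on its own, but it flags where a model's outcomes warrant human review before decisions reach customers.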
“While organisations are leveraging data and AI to create scalable solutions, they are also scaling their reputational, regulatory, and legal risks. Ethical AI systems should be designed carefully,” Alhaj says.
Kamal Nasrollahi, Global Director of Research at Milestone Systems, says ethical AI follows policies grounded in fundamental values, including personal rights, privacy, non-prejudice, and integrity. Traditionally, ethical AI has been discussed as part of a conversation about collecting and using data without bias. Ethical AI is vital because it helps ensure that AI is developed, sold, and used as a responsible technology that is fair, transparent, and accountable.
Sid Bhatia, Regional VP and General Manager, Middle East & Turkey at Dataiku, says we need to broaden the conversation beyond ethical AI to responsible AI as many negative outcomes in AI can come from the best of intentions. “Responsible AI must encompass both intentions and accountability. Responsible AI refers to the practice of designing and implementing such systems to enhance societal impact and minimise prejudicial and negative outcomes.”
How can organisations develop and use ethical AI?
Despite the benefits of AI and, in some cases, its indispensability to businesses, companies must address the ethical issues that come with storing massive amounts of data, especially data used to train AI models. Without these checks, companies risk their reputations and their customers’ loyalty. However, the issue is complicated, as there are no legal frameworks in place yet.
“Additionally, some companies are concerned that increased regulation of AI may inhibit innovation. The best way to find a balance is for companies to tailor AI solutions according to their values and needs. This includes assessing current data availability and the company’s technical capabilities. In addition, the use of AI largely depends on industry needs. For example, some companies may need to use AI for behavioural analysis, while others need it to achieve better audience targeting. Therefore, understanding industry needs is essential to monitoring the use of AI and lessening the likelihood of falling into ethical quandaries,” says Gill from Heriot-Watt University.
Finally, companies should have a team in place that works on developing ethical standards for the company. Such a team should include experts with a background in AI solution development, as well as software engineers and product managers, he adds.
Transparency is key, according to Alhaj from Cloudera. When algorithms make decisions, users may need to be made aware that AI is in charge. The public doesn’t know how algorithms work, so when the technology acts unexpectedly, it frustrates users. This could be addressed by explaining how the technology works and how machine learning engines get better at their tasks as they are fed more data; a chatbot that starts out relying on canned answers, for instance, becomes more precise over time.
“This doesn’t exonerate technology companies from applying ethics to development. When developing ethical AI systems, the most essential part is intent and diligence in evaluating models on an ongoing basis. Sometimes, even if everything is done to deliver ethical outcomes, the machine may still make predictions and assumptions that don’t abide by these rules. It’s not the machine’s fault. After all, machine learning systems are inherently dumb and require a human in the loop to ensure the model remains healthy, accurate, and free of bias,” he says.
Nasrollahi from Milestone says that deploying AI and then monitoring it with the intent of either retraining or replacing it, to constantly make it better, is a more responsible and ethically stronger approach to using AI. “At Milestone, we have adopted a concept for deployment, monitoring, and retraining called ModelOps. This responsible technology approach is a better training strategy to continuously improve AI.”
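As a rough sketch of what a deploy-monitor-retrain loop of this kind can look like in practice: this is not Milestone's actual ModelOps pipeline, and the model, synthetic data, and accuracy threshold below are assumptions made purely for illustration.

```python
# Minimal sketch of a ModelOps-style monitor/retrain loop; names, thresholds,
# and data here are illustrative assumptions, not a vendor's real pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def fetch_labelled_batch(n=200, drift=0.0):
    """Stand-in for pulling recent, labelled production data."""
    X = rng.normal(loc=drift, scale=1.0, size=(n, 3))
    y = (X.sum(axis=1) > drift * 3).astype(int)
    return X, y

# Initial training and "deployment".
X_train, y_train = fetch_labelled_batch()
model = LogisticRegression().fit(X_train, y_train)

ACCURACY_FLOOR = 0.85  # illustrative quality threshold

# Ongoing monitoring: evaluate on fresh data, retrain if quality degrades.
for month, drift in enumerate([0.0, 0.5, 1.5], start=1):
    X_new, y_new = fetch_labelled_batch(drift=drift)
    acc = accuracy_score(y_new, model.predict(X_new))
    print(f"month {month}: accuracy={acc:.2f}")
    if acc < ACCURACY_FLOOR:
        print("  accuracy below floor -> retraining on recent data")
        model = LogisticRegression().fit(X_new, y_new)
```

The point of the sketch is the loop itself: the model is never treated as finished, and a measurable quality floor decides when a human-sanctioned retrain or replacement happens.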
Bhatia from Dataiku sums up: “Responsible AI systems must be secure, but they also must be transparent. Non-technical people must be given the means to interrogate a result from an AI system, be it an automated action, a recommendation, or an alert. Decision makers and developers must be empowered with the correct tools and best-practice training to deliver technically sound and audit-ready AI systems. Integrating the elements of responsible AI requires taking astute action at every point in the development pipeline. Constant communication between stakeholders will also be necessary to flag any potential issues so all relevant parties can assess them against the responsibility framework.”
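As an illustration of the kind of per-decision explanation Bhatia describes, the sketch below prints, in plain language, which factors pushed a hypothetical scoring model towards or away from approval. The feature names, weights, and applicant values are invented for the example and do not come from any product mentioned in this article.

```python
# Minimal sketch of a per-decision explanation a non-technical reviewer could read.
# The model is a hypothetical linear scorer; all names and weights are made up.
FEATURE_WEIGHTS = {"income": 0.6, "existing_debt": -0.9, "years_employed": 0.3}
BIAS = -0.2

def explain_decision(applicant: dict) -> None:
    contributions = {f: FEATURE_WEIGHTS[f] * applicant[f] for f in FEATURE_WEIGHTS}
    score = BIAS + sum(contributions.values())
    decision = "approve" if score > 0 else "refer to a human reviewer"
    print(f"Decision: {decision} (score={score:.2f})")
    # Rank the factors so a reviewer can see what drove the result.
    for feature, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        direction = "raised" if value > 0 else "lowered"
        print(f"  {feature} {direction} the score by {abs(value):.2f}")

explain_decision({"income": 1.2, "existing_debt": 0.8, "years_employed": 0.5})
```

Even a simple readout like this gives stakeholders something concrete to interrogate and audit, which is harder to do when a system only emits a yes-or-no answer.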
NURTURING TECHNOLOGY INNOVATION
ORGANISATIONS IN THE MIDDLE EAST ARE INCREASINGLY WORKING WITH MOSCOW IT COMPANIES FOR ADVANCED TECHNOLOGIES SUCH AS VR PLATFORMS, SECURE FACE RECOGNITION, AND ROBOTS.
Moscow is the leader among Russian regions in providing IT innovations abroad. Exports of Moscow’s telecommunications, computer, and information services reached AED 18 billion in 2021, with an additional AED 4.5 billion from the export of rights to use intellectual property.
The Middle East is one of the key regions Moscow IT developers are actively working with. The UAE, Saudi Arabia, and Bahrain are already using software and products from Russian companies in their operations. For example, the Dubai Police purchased software from Elcomsoft’s Moscow team, which develops computer and mobile forensics products. The company creates a line of software for extracting information from mobile devices, recovering passwords for a wide range of applications, and restoring access to encrypted data. In addition, Cognitive Systems supplied robots to work with clients in the UAE.
Many Russian IT companies were also spotted in the Middle East during Expo 2020, including Pattern Digital Buro. “We became partners of the Russian pavilion at Dubai Expo and made an AR application for the organisers. On site, we also received an order from Arab partners for an AR case for an exhibition in Abu Dhabi at the stand of Abu Dhabi Maritime,” revealed the Pattern Digital Buro team.
Strengthening business ties between Moscow and the Middle East IT community
Many Moscow companies managed to find partners in the Middle East market thanks to the support of the Moscow Government, in particular the Moscow Export Center (MEC), an official structure designed to ensure the development of trade and economic relations with foreign partners. Furthermore, the MEC implements a wide range of projects to support businesses ready to collaborate with foreign markets.
Since 2018, more than 100 Moscow companies, with the support of the Moscow Export Center, have presented technological developments and IT solutions to audiences in the MENA region by participating in the Gitex exhibition. In addition, the joint exposition under the ‘Made in Moscow’ brand included companies that have already adapted their products to Middle East countries.
For example, Smart Engines, the developer of a secure face recognition system that works without transmitting data to third parties and is suitable for use in Islamic countries, has a recommendation letter from Muslim Tour. DroneSolutions, which produces unmanned aerial vehicles, released an all-weather model specifically for the UAE market, with engines adapted for flights in accordance with the region’s rules. ISS, the developer of a MaaS application for tracing city routes, is also actively negotiating with UAE city authorities.
Business missions have also made it possible to facilitate negotiations with partners from the Middle East. The Moscow Export Center organised business meetings for IT companies in Dubai, Riyadh, Cairo, Tehran, and other cities of the MENA region, as well as reverse business missions in Moscow with buyers from the United Arab Emirates, Saudi Arabia, and Bahrain. Software developers adapt their products and work out strategies for entering foreign markets, relying, among other things, on the GoGlobal accelerator, which the MEC is implementing together with the Foundation for the Development of Internet Initiatives.
In addition to local business support, the Moscow Export Center actively cooperates with foreign companies interested in partnership with the Russian capital.
“The Middle East partners interested in finding an IT company to collaborate with can contact the Moscow Export Center, which is open to direct interaction with foreign partners and has a wide pool of IT services exporters. MEC selects the most suitable offers in the market free of charge and provides complete and up-to-date information about suppliers from the capital of Russia,” said Vitaly Stepanov, Head of the Moscow Export Center.
The development of the IT industry is one of the key tasks for Moscow. To strengthen business ties with the Middle East region, the MEC will continue organising joint events, including exhibitions, onsite stands and business missions, and the launch of acceleration programs with a focus on the MENA region.