
A Brief Conversation: Speaking with Professor Niloufer Selvadurai


Nerissa Puth

Professor Niloufer Selvadurai is the Editor-in-Chief of the International Journal of Technology Policy and Law, Telecommunications Editor of the Australian Journal of Competition and Consumer Law, and Professor at Macquarie Law School, where she teaches Information Technology (‘IT’) law and Intellectual Property (‘IP’) law. With a vast portfolio across private practice and academia, she was part of the legal team acting for Optus in its bid to become the second telecommunications carrier in Australia, and has contributed her expertise widely to law reform inquiries.

I sat down with Professor Niloufer Selvadurai to find out about her exploration of the new frontiers of technology law research, her current areas of research, and her opinion on the unintended (or intended) consequences of Artificial Intelligence. The Brief would like to thank Professor Niloufer Selvadurai for her time, and her incredible insights on the necessary approach towards governing AI.

What brought you into academia and, more broadly, what attracted you to your research in how technological changes undermine the effectiveness of the law?

Initially, I practised as a solicitor. I started work at Allens and then Ashurst, both in the technology and media law units. I was involved in Optus’ bid to become the second telecommunications carrier in Australia. That introduced me to telecommunications and technology law. I thought there was a lot of potential in the area because many lawyers considered it to be rather too technical and not very interesting. So, I thought that it was a really fascinating area where I could add some value and do something new and original.

When I moved into academia in 2004, there was also next to nothing written on technology and telecommunications law from an academic point of view. It was a wide-open frontier, and to some extent it still is. People interested in IT law often move into practice because there are many opportunities. So [technology law and telecommunications law] is actually quite a small academic field.

One of the areas that you have explored is face recognition technology, which is really entering public discourse at the moment. In 2015, you published an article that referenced Lawrence Lessig. You noted that ‘as digital records lack the transience of human memory they form a compelling threat to privacy’. What intrusive potential does face recognition technology hold today? What does it mean for privacy?

Absolutely, great question! There is a spectrum of problems, but I’ll focus on how it threatens autonomy and psychological wellbeing. Digital records don’t have the transience of human memory, meaning that the data recorded in them doesn’t evolve with your life – they are static records. So, it really undermines an individual’s ability to control the information that they put out there about themselves. An individual’s choices to withhold, release and forget information are undermined by these huge databases.

Beyond face recognition technology, there is the bigger issue of AI governance and the data gathered by AI systems. There is the issue of privacy, but there is also a second issue – that is, the monetisation of data. Alongside losing control of our data and degrees of our privacy, we are also entering a really interesting paradigm where we are giving firms the capacity to monetise aspects of ourselves. When we browse, we routinely consent to providing information about our preferences and interests in exchange for accessing a particular digital platform or obtaining a service. That data is often collected, tabulated and sold to corporations. Though facial recognition technology is an important issue, it is part of a wider issue of exploitative data gathering practices.

I would imagine that there would be unintended consequences with facial recognition technologies and, more broadly, with AI.

Yes, absolutely – though I would also suggest that some of these consequences are intended. I think it also goes to the question of what you can get away with as a tech institution. Even when there are effective laws, there are huge problems of enforcement. So, you get pockets of compliance. More visible uses have higher levels of compliance, so airports which gather [data] through face recognition technology and police force activities are highly scrutinised. They have secure frameworks. But then you also have other data mining practices by private institutions that just go unchallenged. As you say, privacy is massively difficult to protect, and an emerging school of thought is viewing privacy as a luxury. ‘Digital natives’ who have grown up in a more open digital space tend to accept a loss of privacy, and even the monetising of their data by others. But ‘digital migrants’ who have grown up in an era where there was a high degree of privacy are highly uncomfortable with the privacy risks of digital spaces. It is an evolving area and it’ll be really interesting to see how the conversation about privacy and [data monetisation] evolves in the next few years.

That’s really interesting. As someone who is part of the generation that accepts the lack of privacy and data collection, I can understand the ‘norm’ and the assumed trust in exchanging my personal data under the presumption that it will be fine! It is also a reminder for the legal community to scrutinise technological evolutions more closely and bring in questions of ethics and responsibility.

Absolutely, it has entered the law reform discourse in a really formal way. The Australian government has just released an AI ethics framework, the OECD has released AI principles and the Human Rights Commission is conducting an AI inquiry at the moment.

A few years ago, when Mark Zuckerberg said that AI was the biggest technology [breakthrough], a lot of people imagined a robot making dinner. Now, they realise it is much more subtle and that all sorts of decisions – tax assessments, loan assessments, social welfare calculations – are being made using algorithms.

Do you think there is growing momentum among tech institutions, online intermediaries and academia to bring in some form of governance?

Yes, definitely, I think so. It has been driven by stakeholders’ interests such as those of end-users, investors and workers in tech companies – it is really stakeholder driven, rather than government driven. The stakeholder momentum is leading to a lot of questions being asked about the accountability, transparency and ethics of AI decision-making.

But although there has been a lot of law reform discourse, it hasn’t translated into a lot of legislative change. There have been many discussion papers and inquiries, but they haven’t led to comprehensive statutory changes. The European Union is the notable exception. The EU’s General Data Protection Regulation (‘GDPR’) is really innovative, and it creates many rights, like the right to be forgotten and the right to an explanation for AI-generated decisions. The EU is, and always has been, at the forefront of implementing tech policies and laws.

You analyse how technological change undermines the effectiveness of legal frameworks across the fields of IP, telecommunications and media, to name a few. What are your current major areas of research?

I developed a legal theory some years ago on how legal frameworks should be reformed to address technological disruptions. And over the years I have applied this model to disruptive technologies in telecommunications, the media sector and intellectual property issues. My current focus is AI. I am looking at how we can take a holistic approach and analyse how new digital developments relate to all our existing laws. That way we can create a consistent legal framework and avoid a mosaic of different, potentially conflicting, laws.

AI is a great new opportunity because nothing has been done. With copyright, the [approach to govern technological disruptions] was incremental. Various technologies, such as photocopying, computing and cloud, each led to small amendments to the Copyright Act 1968 (Cth), like a patch upon another patch. Now when you study IP, you’ll notice that the Copyright Act 1968 (Cth) is a maze of sections.

Fantastic. You did your PhD in Law at Macquarie University. What was your research question and how did you formulate it?

My research question addressed the convergence of telecommunications and broadcasting technologies by designing a new framework for ‘electronic communications’. I did that in 2003 and it still hasn’t happened in Australia, although it happened in the European Union in 2005.

Even in 2003, you could view television programs on your mobile phone, and you could stream services on your television. And yet, telecommunications had very light regulation because it was conceived as a one-to-one service, while television had high-intensity governance because it was conceived as a one-to-many service involving the public interest. This public/private dichotomy had dissolved, but regulation had not adapted. So that was my question, and it came out of being involved in Optus’ tender when I was at Ashurst.

A lot of students are increasingly interested in a career in IP and IT law. Do you also observe this increasing interest? What advice would you give them?

Well, I am very biased – I think everyone should do IP and IT. I advise Macquarie students that if you want a point of difference, especially if you want to move into CBD law firms, having a strong technology background is really valuable. IP is also fantastic, but there are a few more people with IP skills, as it is more accessible for law students and seems to be a natural transition for them. IT can seem overly technical and it puts people off. As a result, there is a bit of a shortage. If you walk into a law firm with a strong technology law portfolio – you have done IT, an LLB Honours project in IT, or voluntary work (not necessarily with a big corporate law firm, but even community-based work) – it helps set you apart. Partners and Senior Associates also often feel a little uneasy because they are not up to date, and they are really looking for junior solicitors who have a cutting-edge understanding of tech law issues. You can’t go wrong with doing IT law, and it is not as boring as it may seem!

We wrapped up our chat after 16 minutes – which shows you how much valuable information Niloufer has to impart in a short time. My chat with Niloufer was a reminder that not all is as dark and gloomy as popular culture often depicts AI to be. The efforts of academics, industries and stakeholders to introduce thoughtful and bottom-up AI governance make me, and hopefully you the reader, hopeful that AI could still help us solve our toughest challenges as intended – even if that is just having our dinner made by intelligent machines.
