Dinkar Jain: Know Your Artificial Intelligence

Visiting Professor Dinkar Jain, AI pioneer and former Head of Ads Artificial Intelligence & Ads Delivery at Facebook, teaches Management in the Age of Artificial Intelligence, a course in data science and in thinking like a data scientist. He recently sat down with Editor-in-Chief Pamela Gullard to discuss the future of AI and its impact on business and culture.

What do you find exciting about Artificial Intelligence?

Artificial intelligence is, essentially, our ability to program a machine with large datasets, as opposed to programming it with explicit instructions such as “If this then that” and “for i equals one to ten, do this.” The Internet took the cost of information dissemination basically to zero, lifting entire populations out of poverty by letting them compete in marketplaces with fast information flows. AI builds on that by enabling us—at zero marginal cost—to deliver certain types of services that rely on human judgment. High-quality healthcare, for instance, is expensive and difficult for most people to procure; no UCSF-quality doctor is likely to visit your remote village to read your x-ray. But what if you could get a similarly qualified machine to read your x-ray? We would be able to deliver high-quality healthcare at zero marginal cost. Artificial intelligence as a force can lift standards of living globally, overnight. To me that is very exciting.
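
The distinction drawn above, explicit instructions versus programming with data, can be sketched in a few lines of Python. Everything here (the fraud-flagging task, the dollar amounts, the midpoint “training” rule) is an invented illustration, not anything from the interview:

```python
# Two ways to "program" a machine to flag risky transactions.
# All numbers and the task itself are hypothetical.

# 1) Explicit instructions: a hand-written rule ("if this then that").
def flag_by_rule(amount):
    return amount > 1000  # a human chose this threshold

# 2) Programming with data: derive the threshold from labeled examples.
def learn_threshold(examples):
    # examples: list of (amount, was_fraud) pairs
    fraud = [a for a, y in examples if y]
    legit = [a for a, y in examples if not y]
    # put the boundary midway between the two classes (a toy "training" step)
    return (max(legit) + min(fraud)) / 2

history = [(120, False), (300, False), (2500, True), (4000, True)]
threshold = learn_threshold(history)  # 1400.0 for this toy history

def flag_by_data(amount):
    return amount > threshold

print(flag_by_rule(800), flag_by_data(800))    # False False
print(flag_by_rule(3000), flag_by_data(3000))  # True True
```

The point of the sketch is only the shape of the two approaches: in the first, the human writes the decision boundary; in the second, the data determines it, and a different history would yield a different machine.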

What should we all know about AI?

It’s very important for every student, every person, to understand your relationship with the AI algorithms that make decisions and micro-decisions to mediate the flow of information to you. If you assume that AI works magically, you won’t question the program’s objective, and you’ll let it play a role in your life in which you are not in control. Just that awareness goes a long way toward building a bit of a boundary between you and an AI system that is not you.

You don’t need to know the details of logistic regression or loss functions, but you do need to know how an algorithm is making its decisions. If the algorithm belongs to an e-commerce website, the company wants to maximize sales, while you may not want to spend all your pocket money on its recommended items.
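
The objective mismatch described here can be made concrete with a toy ranking sketch: the same catalog ordered under the seller’s objective versus the user’s. The catalog, prices, purchase probabilities, and ratings are all invented for illustration; real recommenders are far more elaborate:

```python
# Hypothetical catalog; every number is made up for illustration.
catalog = [
    {"name": "budget earbuds", "price": 20, "buy_prob": 0.30, "user_rating": 4.6},
    {"name": "premium headset", "price": 300, "buy_prob": 0.05, "user_rating": 4.1},
    {"name": "phone case", "price": 15, "buy_prob": 0.40, "user_rating": 3.2},
]

def rank_for_seller(items):
    # seller's objective: expected revenue = price * probability of purchase
    return sorted(items, key=lambda i: i["price"] * i["buy_prob"], reverse=True)

def rank_for_user(items):
    # user's objective: what people like you actually rate highly
    return sorted(items, key=lambda i: i["user_rating"], reverse=True)

print(rank_for_seller(catalog)[0]["name"])  # premium headset
print(rank_for_user(catalog)[0]["name"])    # budget earbuds
```

Same data, different objective function, different recommendation at the top of the page. Knowing which objective the algorithm was given is the part the user needs to understand.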

Also understand that these systems are programmed with datasets, and datasets have all kinds of fallacies, biases, and errors lurking in them. People rave about chatbots, but we have no proof that chatbots actually know anything; they’ve probably done little more than read the internet a few times over, which lets them ace a particular exam. They don’t understand satire, so they might end up recommending, as Alexa once did, that a child stick their finger and a penny into an electrical socket. Appreciating the possibilities and limitations of the technology is vital to being in control of it and to being a masterful user of it, which college students need in order to differentiate themselves in the job market and in the world.

Business leaders, similarly, need to know that almost every function of management is being transformed by artificial intelligence, and the question is how to make machines and humans play nicely with each other. Branding is about building a relationship with your customer, but today people are building relationships with algorithms. If Netflix recommends a bad movie, you’re going to curse it out for wasting your evening. The old-school business frameworks on topics such as accounting and channel strategy all need a fundamental rethink in a world increasingly run on datasets and AI systems.

What are the most important legal and ethical issues regarding AI?

A very important part of figuring out how to use technologies for social good is to create a compact between civil society and private actors, technologists, and futurists: these are the ground rules we all understand and accept in order to guide the conduct of this technology. The conditions for such a compact are broad public understanding, good communication among all the actors, and highly competent regulators who are deeply educated in the field.

When it comes to AI, we don’t have any of these conditions today.

We can’t wait forever for an institutional framework; we still have to introduce and practice the technology. That is happening, but as it does we need to identify, at least nominally, the ethical and legal problems that are cropping up.

Auditability and transparency are a very big problem. What data is being used by which algorithm to do what? There are no Sarbanes-Oxley-style disclosure requirements forcing algorithms to declare to the public, “This is how we’re making these decisions.” AI bots on social media, for example, are probably violating copyrights by retrieving and manipulating copyrighted materials.

We also need to consider issues of bias and fairness. AI systems are coded by datasets, and datasets are nothing but museums of the past. When Amazon was deciding where to offer same-day delivery, its AI essentially recreated the discriminatory real-estate redlining maps of the 1950s. Amazon almost certainly never set out to build those maps, but that’s what the AI system did. The past of every country is littered with social abuses, biases, inequality, and the trampling of minority rights. We must understand which datasets are being used to program which AI systems, and whether the recommendations they generate are biased.
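
A toy sketch of how a “museum of the past” propagates bias: a model that simply learns the majority historical decision per area will reproduce old patterns without ever being told to discriminate. The zip codes and outcomes below are invented, and this is a deliberately crude stand-in for a real learned model:

```python
from collections import Counter

# Hypothetical service-history records: (zip_code, was_served).
history = [
    ("94110", True), ("94110", True), ("94110", False),
    ("60620", False), ("60620", False), ("60620", True),
]

def train(records):
    # tally historical outcomes per area
    votes = {}
    for zip_code, served in records:
        votes.setdefault(zip_code, Counter())[served] += 1
    # "learn" the majority historical decision for each area
    return {z: c.most_common(1)[0][0] for z, c in votes.items()}

model = train(history)
print(model["94110"], model["60620"])  # True False
```

The model never sees demographics or intent, only where service happened in the past, yet it faithfully extends the old pattern into the future. That is the mechanism behind the same-day-delivery example: the bias lives in the training data, not in any explicit rule.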

Data cooperation is also critical. Back in the ’40s and ’50s, flying was not particularly safe. Even then, aircraft carried the black boxes that help determine the causes of crashes, but only after the FAA required airlines to share black-box data did the number of crashes decrease dramatically. Companies still try to hoard their data, which is often not in the best social interest, but we lack the legal constructs and social frameworks to change that behavior. The “six-month pause” that some advocate won’t help, because we can’t let the US fall behind in the geopolitical contest over this technology.

That’s where education is so important. I’d love every educational institution to figure out its AI literacy curriculum. I’d like AI to be the first question in a presidential debate next year. That would demonstrate our willingness as a society to take this head on. We are very far from it, but I think education and voter interest are all important ground conditions for both growth and control of AI.
