Seen & Heard: Talking w/ Zeynep Tufekci


UVM's 2024 Aiken Lecturer, the sociologist who the New York Times says "keeps getting the big things right," considers the power of artificial intelligence.

After a decade of study, pioneering sociologist Zeynep Tufekci says social media is “specifically designed to draw you in, and waste your time, and distort your thinking.” With the rise of artificial intelligence, the role and power of social media may shift radically. But in what ways? This may be the most urgent question of the next decade. Tufekci, the Henry G. Bryant Professor of Sociology and Public Affairs at Princeton University and a columnist for the New York Times, brings a deeply informed and pragmatic approach to finding answers.

You’re going to be speaking about lessons learned from a decade of observing social media. What are some of these lessons?

Tufekci: There are lessons from social media, and I’m interested in applying them to the big change happening right now: the rise of artificial intelligence. One lesson is not to jump too quickly to conclusions about the winners and losers. It’s important to consider the entirety of the system, from technology to social institutions, and their interactions. We need to be specific: How does the technology work? What are the human incentives? What are the structural dynamics? And we also should be very mindful that these interactions and dynamics don’t just happen; they depend on choices we make. There is no single, set path that AI will take.

I often hear people say, “technology is neutral. It's all about what you do with it.” Do you think the moral nature of AI depends on how we use it?

Tufekci: There’s a very common saying: “Oh, it's just a tool. It depends on whether we use it for good or bad.” While that may be true in some very abstract sense, it’s misleading, because particular tools don't have infinite sets of possibilities. Certain outcomes are more possible and more likely, and technologies have certain structures. So you can't just say, “Oh, here it is, and I can do anything with it.” You have to be cognizant of the question: Which direction is this tool pulling me? And which direction is society going to pull this technology because of the way society works? An open-ended, anything-can-happen scenario is misleading.

I would warn people against that. Your intentions matter greatly, but AI and other technologies are not things you can do anything with. Specific technologies, specific scientific advancements, have certain equilibriums that they facilitate, and certain equilibriums they don't. Take nuclear weapons as an example. There are only so many ways a world with nuclear weapons can continue, and one of those ways is not that we have a nuclear war every other year. That is not a viable path. You either have a world in which there is a significant barrier to their use, or you don't have a world. And there aren't really many alternatives to that, because of the nature of the technology. Nuclear war is not something you can “kind of” do.

And it’s similar when you look at artificial intelligence; it's not some abstraction. It is a very specific set of technologies: machine learning, reinforcement learning, a particular way of being trained. It's not some abstract intelligence. It's a very concrete application of a particular computational technology, which means it can do certain things but not others. And it has weaknesses and costs and tradeoffs, but they're really specific. And that's what we should talk about.

Who is this "we" and what choices should we be considering?

Tufekci: There are lots of actors who would like to be making those choices! My view is that, as a society, we should be making those choices based on democratic legitimacy and the public interest, not leaving them to a few companies making money from these technologies. That's what I argue. But that is not always what happens, because of a complicated dynamic: these companies make money, and they become very friendly with politicians who also want to use these tools. One of the lessons from social media is that many of these decisions have been made by very few unaccountable actors rather than by society as a whole through mechanisms of democratic legitimacy.

This makes me think about climate change. Some people think of climate change as the big problem and others think of climate change as just a symptom of an even bigger problem of—I don't know, it depends on who you talk to, right?—capitalism or greed or technological naivete or failed markets. So when you think about artificial intelligence, how big of an issue is it?

Tufekci: I understand what you're saying, and here's what I would say: you can posit a problem at multiple levels. You can, for example, argue that climate change is downstream of a particular type of capitalism and greed. You might be right. Other people could argue that socialist and communist countries were very polluting too. But, in some sense, it doesn't really matter, because you need to address climate change! So if your argument is that there is no way to address climate change without solving, say, capitalism, I would argue that we have made progress in lots of areas of life without necessarily solving the bigger problems within which they're embedded.

Perhaps what is most important is understanding the power of short-term interests. The short-term interest of powerful people, rather than the long-term interest of people in general, is the problem in both climate change and AI. It’s not a capitalism-only problem.

What do you recommend undergraduate students do about artificial intelligence?

Tufekci: I’d have a lot of recommendations if I ran the world, which, obviously, I don't! But I think the most important recommendation for undergraduates is to become involved, because people who sit it out are not getting heard, right? That's just not happening. So if you want to have a say in how the world works, you have to get involved in how the world works. And I know a lot of students are interested in doing that, but usually their concept of what that involves is various forms of activism.

Activism is very good. I was an activist myself in college and beyond, and it's something I've studied. But there are a lot of other ways to shape the world: becoming part of the political system, running for office and trying to directly influence policy, running NGOs, or running companies that come up with innovations. I would encourage undergraduates to keep their sense of the possible open.

One thing I tell my own students is to recognize that in the academy we have disciplines: sociology, computer science, this or that. They stand separated, in different departments, but that's not the way the world works! Of course, you have to get a degree in one major and maybe one minor, but it’s an advantage to learn broadly, with an open mind and curiosity, and then make those connections, because the world is not separated into neat disciplines that map onto historically defined majors.

In your work in the academy, and also in your personal life, how much time do you spend on social media and what do you do there?

Tufekci: This is not a good question for me, because I study social media! So I spend more time on it than I would if I were not studying it. It’s kind of like asking a pathologist, “How much time do you spend with microscopes?” A lot! But that's not a good indicator of what people should do. I think social media has specific uses for some topics, and there are some communities that exist there. But if I weren't studying it, I would spend a lot less time on it, because it is very specifically designed to draw you in, waste your time, and distort your thinking. Social media is tribalizing. It's an in-group, out-group pushing environment, just trying to keep you there. If I were not studying these things, I would limit my time on social media purely because I think it would make my thinking less useful. It would mislead me. It would distort my thinking and my emotions. Even when I'm studying it, because I need to understand something, I feel, “Oh gosh, I have to take a break,” because I am a person too. I start having certain inaccurate impressions about the world that I know are just coming from social media.

And I'm like, all right, I've got to go take a break from this and talk to people who aren't in these small groups! It’s not that there’s nothing useful in social media; there are genuine and helpful communities there. But it's important to realize that it is a tool designed to suck you into an in-group/out-group process and distort your thinking. So you need to approach it defensively. That doesn't mean don't use social media. There are good reasons to use it, like keeping in touch with people, and I enjoy that myself. But I think that defensive attitude is healthier.

Photo: Andy Duback
