a journal of strategic insight and foresight VOL. 25 2017 $12 USD $15 CAD £7.50 GBP Display Until 09/30/2017
The Human in the System 14
The Future is Slowly Killing Me 30
Defining Design Research 42
What’s Feeding You: The Future of Food 78
BOUTIQUES GENÈVE • PARIS • LONDON • BERLIN • NEW YORK MIAMI • BEVERLY HILLS • LAS VEGAS MOSCOW • DUBAI • TOKYO • HONG KONG SINGAPORE • SAINT-TROPEZ • CANNES COURCHEVEL • GSTAAD • ZERMATT • ZÜRICH
hublot.com
Developed with the famous tattoo artist Maxime Buchi, founder of Sang Bleu. Titanium case with faceted bezel. UNICO in-house movement. Exclusive way to read time through the superposition of 3 discs. Design inspired by his emblematic tattoo style. Limited edition of 200 pieces.
Contents

Theory, So What? 8
Signal, So What? 10
Insight, So What? 12
The Human in the System 14
The Future of Money: An Interview with Ann Pettifor 24
The Future is Slowly Killing Me 30
Defining Design Research 42
Real-Time Religion 54
A Mind of Their Own: How Do We Manage Bias in AI? 56
What’s Feeding You: The Future of Food 78
Toward a Human Futures Perspective 116
Experience 127

photo: idris mootee

Publisher / Editor-in-Chief
Idris Mootee

Publishing Advisory Council
Scott Friedmann, Theo Forbath, Dr. Andy Hines, Michael Novak, Christer Windeløv-Lidzélius, Lenore Richards

Head of Media & Publication
Ashley Perez Karp

Managing Editor
Esther Rogers

Lead Editor
Mira Blumenthal

Additional Editing
Stephen Bernhut, Taylor Dennis

Guest Editor
Dr. Paul Hartley

Art Director / Design
Sali Tabacchi, Inc.

Additional Design
Jemuel Datiles

Illustration
Jennifer Backman, Erica Whyte

Contributing Writers
Charles Andrew, Lena Blackstock, Dr. Ryan Brotman, Kyle Brown, Kurtis Chen, Cheesan Chew, Sara Constantino, Dr. Ian Cosh, Theo Forbath, Jared Gordon, Dr. Andy Hines, Azadeh Houshmand, Jocelyn Jeffrey, Melanie Levitin, Yehezkel Lipinsky, Edward Merchant, Marcos D. Moldes, Jaraad Mootee, Maryam Nabavi, Dr. Scott Pobiner, Ben Pring, Lakshiya Rabendran, Chloé Roubert, Lindsay Roxon, Lena Rubisova, Shane Saunderson, Alexis Scobie, Victoria Scrubb, Dr. Maya Shapiro, Valdis Silins, Dominic Smith, Udit Vira, Dr. Ted Witek, Hilda Yasseri

Distribution (US/Canada): Disticor
International Distribution: Pineapple Media
Subscription Enquiries: subscription@miscmagazine.com
Letters to the Editor: letters@miscmagazine.com
Contribution Enquiries: contribution@miscmagazine.com
Advertising Enquiries: advertising@miscmagazine.com

Canada: 241 Spadina Avenue, Suite 500, Toronto, ON M5T 2E2
United States: 649 Front Street, Suite 300, San Francisco, CA 94111
United Kingdom: 85 Great Eastern Street, London, EC2A 3HY, United Kingdom
Corporate Office: Cognizant Digital Business, 500 Frank W Burr Boulevard, Teaneck, NJ 07666

MISC is a publication by Cognizant Digital Business.

MISC (ISSN 1925-2129) is published by Cognizant Digital Business. All Rights Reserved 2017. Email misc@miscmagazine.com. The advertising and articles appearing within this publication reflect the opinions and attitudes of their respective authors and not necessarily those of the publisher or editors. We are not to be held accountable for unsolicited manuscripts, artworks, or photographs. All material within this magazine is © 2017 Idea Couture Inc.

Publisher
cognizant.com
Cognizant Digital Business recognizes the coming convergence of new technologies – automation, the sensor-enabled world, AI, platforms, 3D printing, etc. – as well as the shifting demographics, expectations, and regulations that are creating a context for a new age of business. Cognizant Digital Business brings together digital strategy, deep industry knowledge, experience design, and technology expertise to help clients design, build, and run digital business solutions. The practice provides managed digital innovation at enterprise scale, which includes services around Insight, Foresight, Strategy, Ideation, Experience Design, Prototyping, and a Foundry where pilot programs are moved to enterprise scale. We help organizations navigate and innovate in complex and uncertain environments. We use design thinking methodologies to solve problems and exploit business opportunities – generating new growth, meaningful differentiation, and economic value. By taking an insight and foresight lens to our explorations in MISC, we can thoroughly examine the impacts and opportunities for change in a vast range of industries, allowing businesses to plan for the present and the future.
uh.edu
The University of Houston’s Foresight Program offers a Master’s Degree in Foresight, a four-course Graduate Certificate, and a week-long intensive bootcamp overview, each of which prepares students to work with businesses, governments, non-profits, and others to anticipate and prepare for the future. Established in 1974, it is the world’s longest-running degree program exclusively devoted to foresight.
Some students enroll to become professional futurists, while others seek to bring a foresight perspective to their current careers. Students have three major areas of focus: understanding the future, mapping the future, and influencing the future, blending theory and practice to prepare graduates to make a difference in the world.
kaospilot.dk
Kaospilot is an international school of entrepreneurship, creativity, and leadership. It was founded in 1991 as a response to the emerging need for a new type of education – one that could help young people navigate the changing reality of the late 20th century. The program’s main areas of focus are leadership, project management, creative business, and process design. Promoting a hands-on approach, the program replaces case studies with immersion in real projects with real clients. Of its more than 600 graduates, one third have started their own company, NGO, or other similar initiative; the rest hold management positions. Kaospilot also offers a wide range of courses for professionals in creative leadership and educational design.
cedim.edu.mx
Based in Monterrey, Mexico, CEDIM takes a comprehensive approach to education that spans design, innovation, and business. Design is promoted as a core philosophy, and the faculty consists of active, young, and experienced professionals with expertise in a broad range of fields. Students are engaged with real and dynamic work projects, and are encouraged to immerse themselves in these active projects in order to participate in the realities of the workforce long before graduation. As a result, students at CEDIM develop a keen sensitivity to their social, economic, and cultural environment, and go on to make real, pragmatic change in the world of design and innovation.
ocadu.ca

OCAD University’s Strategic Foresight and Innovation program (SFI) can claim a place at the leading edge of pedagogy and foresight practice. The SFI program is creating a new kind of designer – a strategist who sees the world from a human perspective, rethinks what is possible, and imagines and plans a better future. Recognizing the increasing importance that design thinking can play in positively impacting society, enhancing business success, and managing organizational change, students in the program address the complex dilemmas of contemporary society. This interdisciplinary program interweaves design and foresight methods with social science, systemic design, and business, while providing the skills and knowledge to identify critical issues, frame problems, and develop innovative and humane solutions with better implementation plans.
Co-Publishers
ideacouture.com
As co-publishers of MISC, our aim is to provide a new level of understanding in the fields of insight and foresight. We navigate the blurred boundaries of business, design, and innovation through in-depth articles from some of the preeminent voices of design thinking, technology, customer experience, and strategy. Idea Couture is a global strategic innovation and experience design firm. It is the innovation unit of Cognizant, and a member of Cognizant Digital Works.
When I was growing up, I was fascinated with all kinds of new technology, from color TV and multi-channel digital laser-disc players to the first mobile phones. Each new gadget meant economic progress and was a sure sign that our lives would improve a little every time a new technology was introduced. It was only a matter of time, we told ourselves, before the particular technology became affordable for the masses. Risk was an alien concept, as was the possibility that technology could one day replace us humans. That was something that only happened in sci-fi books and movies. On the whole, our lives were driven by the following certitudes: Technological innovation drives economic development, economic development drives human development, and this ultimately leads to permanently higher standards of living. After all, that’s what neoclassical growth theory suggests. Maybe we grew too comfortable in our thinking, because we failed to see or realize that our industries were going through profound structural changes. Today, we are racing to understand these changes so that we can realign our industries with our new future – one that is being shaped and driven by AI. The scale and scope of these structural changes have no historical precedent, and as a result, our economies are being driven by discontinuities. This profound change is disrupting almost every industry in every location, and it is transforming entire systems of production, management, and governance. People of all disciplines – from design to engineering – are combining computational design, additive manufacturing, materials engineering, and synthetic biology to create what, according to the World Economic Forum, was once unimaginable: a symbiosis between microorganisms,
our bodies, the products we consume, and even the buildings we inhabit. Technology is now an integral part of the global economic growth engine, and without continuous technological innovation, this growth is unlikely to be sustainable. We know that technology and design should always be deployed in the service of humans, but we are at a point where we are evolving and discovering a new ambition. That new ambition is nothing less than the redesign of humans and even our society. Welcome to the world of AI. The history of the development of design and technology is also a history of evolving conceptions of the human race. Over the last few centuries, our lives have been reshaped by technology and design, from the tools we use to work, to the ways we disseminate knowledge, and even to our own identities. New materials, means of connectivity, interfaces, machines, data, and infrastructures are adding layers and layers of complexity and newfound capabilities. The integration and adoption of robotics are causing anxiety about the future of our jobs as we know them. At the same time, jobs are becoming more advanced as they are increasingly driven by AI. Significant headway is being made with neuromorphic chips – more advanced microprocessors that are configured to act more like a brain. The reality of increased robotics can be seen not only in manufacturing, but also in the retail and home-support industries. Change in these industries is already conspicuous and is likely to lead to radical changes in our economic systems. This is so powerful that it seems that even the planet we live on is encrusted with a geological layer of design and technology; as the curators of the Istanbul Design Biennial so eloquently put it, “there is no longer an outside to the world of design. Design
has become the world.” Satellites are orbiting the earth and big data is working in the background to ensure that everything on the planet is operating properly. Digital design is now at the center of our universe – as it should be – for it will certainly power us into the future. But a new discipline is emerging: human-centric design (HCD). And it will shape and transform other disciplines, in turn changing how we look at design and the design of human experiences. With the influx of algorithms and deep-learning networks that monitor and influence every gesture, heartbeat, and sensory response, we are building intricate statistical images of each one of us. This job is too important, however, to be left to HCD designers alone. The emergence of algorithm-driven design, alongside the expanding concept of HCD, gives new meaning to design. Today, design transcends most disciplines – which is why it is absolutely integral to the success of AI’s mandate to redesign humans and society. It is an awesome task, and one that requires multidisciplinary and transdisciplinary thinking. It is one that includes the use of emotional analytics, empathetic design, the human experience, inclusive design, generative design, and intelligent design efforts – anywhere that the human element is present. Which is everywhere. So, what exactly sets humans apart from every other animal? Our complicated brains. Those brains are hard at work trying to figure out how to pair humans, with our needs and emotions, with the wildly evolving world we are living in. In this latest issue of MISC, we give you a glimpse into the future through a new discipline we call Human Futures. Articles that include “Inside the Customer’s Mind” and “If You Build It, They Will Care” give you a close-up of a future where the customer and the patient are at the center of service and healthcare delivery. Other articles
discuss the future of money, and our issue’s special feature, “What’s Feeding You,” delves deep into the future of food. Finally, we explore the new and exciting field of design research and its impact on understanding human behavior in a compilation titled “Defining Design Research.” I believe that all of the articles we present here are stimulating and highly informative, and they truly embrace the spirit of Human Futures. I hope you enjoy this issue.
Idris Mootee Publisher / Editor-in-Chief
What Is a Human Future Anyway?
By Dr. Paul Hartley
It seems important to clearly state what a “human future” is before we delve into this issue. Admittedly, this is not the easiest task because this phrase has many possible meanings. The most important meaning is its literal one: A human future is a future for humans. It is a future state that humans define and occupy – or, a future vision that focuses entirely on the human element. Many of the articles in this issue are using exactly this definition to craft their perspectives of how to build a better future for all of us. But the concept of a human future is more than just a vision of how people will live in 5, 20, or 100 years. It is actually a radical argument for a particular kind of future-looking perspective – one that recasts our current understanding of the relationship between humans and technology, and seeks to bring balance back to our relationship with the complex world around us. In this way, it is a framework for critical thought that undoes some of the assumptions that we use to understand technology now. A human futures perspective is best understood as a social history of the present and future. It is a way of tracing change through social, cultural, and individual contexts by examining them together and appreciating their richness and complexity. A human futures perspective is a way of seeing the development of our future selves that requires us to view humans as part of a large and complex system. You can think of it as a holistic perspective, or a way of acknowledging that humans live in a closed ecosystem with everything else on this planet. With this in mind, a human futures perspective is a way of understanding the implications of the interconnectedness of
everything. It is a way to build visions of the future by connecting it with the past and present, and it examines change and transformation without forgetting the interconnected system as a whole. This is a radical way of thinking, because it requires a new way of examining ourselves and a willingness to shed old perspectives and assumptions. There can be no causal drivers of human behavior in this system, because everything is connected. The idea that technology is somehow separate from humans – driving social change or disrupting lives of its own accord – can no longer be accepted without question. The coming robot apocalypse or rise of the singularity need not be an inevitability, or indeed even a possibility. All of this is important because a human futures perspective provides a framework to see with greater clarity the human core of everything we do in human-centered research, foresight strategy, design, and business. It provides us with better support for building new products, services, and experiences through design thinking. It is also an important part of creating a holistic innovation practice, because this perspective provides a more complete way to connect the work of insight, foresight, strategy, and design. We created this issue of MISC to provoke a productive conversation about how we are designing for ourselves in the future. Each article in this issue explores aspects of how a human futures perspective can craft a new way of seeing the human potential in new products, services, and future worlds. We hope that you find the articles interesting and stimulating. //// Dr. Paul Hartley is executive director and cofounder, Institute for Human Futures and a senior resident anthropologist at Idea Couture.
// A human futures perspective is a way of understanding the implications of the interconnectedness of everything. //
In this regular feature, we pick a social theory, explain its relevance to everyday life, and then explore how the theory’s implications could impact the future of your business, industry, or category.
Theory, so what?
Understanding Felt Experience As a Powerful Agent of Change and Growth
By Marcos D. Moldes
Humans crave experiences that we identify as rich and meaningful. In the rush of modern life, with its demands on our time, attention, and resources, we’re increasingly seeking out services and products that promise to provide us with moments of connection and deeper meaning. We want to fall in love on the streets of Paris, create inner peace with meditation apps, and return to our childhoods by reliving the magic of Disney. Meaningful experiences provide moments of connection. They affirm our values and teach us who we are, ultimately becoming memories we reflect
on and shaping who we’ll be tomorrow. This desire for life-altering experiences hasn’t gone unnoticed. Companies are capitalizing on this by developing and delivering products and services that offer consumers pathways to personal growth, or by creating products tied to a charitable mission. But what is so often lacking in these experiences is a way to translate these moments into opportunities for personal growth and social change. How do we make meaningful experiences still mean something a year down the road? Five years down the road? How do we ensure lasting value – in all senses of the word – from our felt experiences?
So What? Dian Million’s “Felt Theory: An Indigenous Feminist Approach to Affect and History” looks at indigenous women’s writing and makes a case for understanding the important role that felt experiences and embodied knowledge play when people recount events that could create social change. Examining the testimonial literature of several First Nations writers in the 1990s, Million argues that the inclusion of emotional knowledge and felt experience allows for a more complex and nuanced understanding of colonial violence, which has led to the emergence of social movements that seek to challenge and change the lived experience of indigenous people. From Idle No More to protests in North Dakota over oil pipelines, felt knowledge is creating opportunities for new kinds of telling and understanding. Although indigenous communities continue to face challenges and systemic inequalities, the roles of community and felt knowledge are growing and developing into new tools for social change and activism. It is interesting to consider what the possibilities for social change could be if people harnessed their felt experiences into long-lasting, meaningful relationships and learning moments. How could platforms be developed to enable these kinds of long-term, personally tailored engagements? How could we learn from one another’s experiences for the benefit of all society?
// Meaningful experiences provide moments of connection. //
Weak Signals of a Move Toward Experiential and Felt Knowledge

/ Although now discontinued, Carnival Cruise's Fathom vacation package aimed to combine social impact with cruise travel. Partnering with local organizations in Cuba and the Dominican Republic, travelers were given opportunities to work on community projects and to partake in culturally immersive experiences.

/ Aimed at college students looking to use their spring breaks for something more meaningful than a beach vacation, Alternative Spring Break (ASB) is an immersive week of service projects, as well as leadership and relationship building. College-aged students work in communities in the US and learn more about the challenges and opportunities facing the country.

/ Like ASB, Benevity is an employee-volunteering platform that matches corporate employees with volunteering opportunities around the world. The app handles signing up and time tracking, and provides reward pathways to motivate participants. The platform also doubles as a benchmark for measuring the success of initiatives and creating events that support both local and international charities.

/ The Headspace app provides users with a platform that facilitates meditation and encourages the development of a regular and consistent meditation practice. Marketed as a “gym for your mind,” Headspace encourages its users to use mindful meditation to achieve their goals, while also achieving inner peace.
What If…

/ How can your business draw on the power of felt experience to develop platforms for personal growth and social change? How can you develop products or services that authentically fulfill the consumer desire for experiences?

/ What role do feelings play in your product or service development process? How do you make sense of the experiences and feelings of your consumers?

/ How can you respond to people’s feelings in a way that feels authentic and meaningful? Can you develop systems to help translate felt experience into useful data?

/ How can you meaningfully partner with NGOs and non-profits to create platforms for felt experiences that make a contribution and have staying power?

/ What kinds of broader change could your organization be a part of through felt experience?

/ How can backend service providers find ways to tap into felt experience for social change? How can they leverage their expertise and infrastructure to engage their employees in felt experience?

//// Marcos D. Moldes is a resident ethnographer at Idea Couture.
Signal, so what?
In every issue, we highlight a weak signal and explore its possibilities and ramifications for the future of your business, and how to better prepare for it.
Adversarial Data: For Machines’ Eyes Only
By Valdis Silins
In the past year, machine vision has outperformed humans in facial and object image recognition. Show a machine an image of a face or an object, and it’s now more likely to correctly identify it than a human. This ability is transforming vision-based tasks such as driving, surveillance, medical diagnosis, industrial production, and more. Our machines can see – and they can see better than us. So how do they work? Using convolutional neural networks, these programs learn to identify images by consuming gigantic databases of labeled data. They take thousands of images with human descriptions and run them through layers, looking for patterns in lines, edges, shades, and more. Trained on this set, they’re shown unlabeled images and asked to predict their identity based on patterns found in past data.
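For readers who want to see the shape of that train-then-predict loop, here is a deliberately toy sketch. A real convolutional network learns millions of parameters over pixel grids; this stand-in uses a nearest-centroid classifier over two made-up features, and every number and label below is illustrative, not drawn from any real system.

```python
# A drastically simplified sketch of the supervised-learning loop described
# above: learn "patterns" from labeled examples, then predict labels for
# unseen inputs. This is NOT a CNN - a nearest-centroid classifier over tiny
# hypothetical feature vectors stands in for the idea.

def train(labeled_examples):
    """Average the examples of each label into one learned 'pattern' (centroid)."""
    sums, counts = {}, {}
    for features, label in labeled_examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, f in enumerate(features):
            acc[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(model, features):
    """Pick the label whose learned pattern is closest to the new input."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], features))

# A toy "database" of labeled data: invented [roundness, wing_span] features.
training_set = [
    ([0.9, 2.0], "ostrich"), ([0.8, 2.1], "ostrich"),
    ([0.3, 1.9], "osprey"),  ([0.2, 2.0], "osprey"),
]
model = train(training_set)
assert predict(model, [0.85, 2.0]) == "ostrich"  # unseen input, correct label
assert predict(model, [0.25, 2.0]) == "osprey"
```

The design point carries over to the real thing: the model never "understands" an ostrich; it only matches new inputs against statistical patterns distilled from past labeled data.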
Overview
But while they can outperform humans at telling an ostrich from an osprey, they don't understand the content of images in the ways we do. They rely on identifying complex patterns in past data, a discrete form of understanding that could leave them open to manipulation if those same patterns could be reproduced in unrelated images. Which is exactly what's happened in the last year. Visual filters that look like bits of noise to humans have been developed to trick machines into seeing what's not there, such as a building instead of a stop sign; they’re called adversarial examples. And they’re not confined to the digital world. Researchers at Carnegie Mellon University, in an October 2016 paper titled “Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition,” have printed out patterns on eyeglasses that look like psychedelic, tie-dye designs. They seem like nothing out of the ordinary to human eyes. But when a photo is taken of someone wearing the glasses, the facial recognition system interpreting it sees a celebrity. Like dog whistles for machine-learning systems, these patterns lead the machines elsewhere while remaining undetected by humans.
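The mechanics of such an attack can be sketched with a toy linear model. Real adversarial attacks (the fast gradient sign method, for instance) target deep networks, but the core trick is the same: nudge every input feature a small step in the direction that most damages the model's score, so that many imperceptible changes add up to a flipped prediction. The model, data, and epsilon below are all invented for illustration.

```python
# Toy adversarial-example sketch against a hypothetical linear classifier.
# Positive score -> the model "sees" the true object; the attack pushes the
# score negative with only a small per-feature perturbation.
import random

random.seed(0)
DIM = 100  # many features, so many tiny nudges can add up

# A made-up "trained" linear model.
weights = [random.uniform(-1, 1) for _ in range(DIM)]

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def score(x):
    return sum(w * xi for w, xi in zip(weights, x))

# An input the model classifies correctly (built to align with the weights).
x = [0.05 * sign(w) for w in weights]
assert score(x) > 0

# Adversarial perturbation: step each feature by at most eps against the
# model - for a linear model, the sign of the weight IS the gradient's sign.
eps = 0.06
x_adv = [xi - eps * sign(w) for xi, w in zip(x, weights)]
assert score(x_adv) < 0  # same-looking input, confidently wrong answer
```

Because the attack distributes itself across a hundred features, no single feature changes by more than 0.06, yet the classification flips - a miniature version of noise-like filters fooling a vision system.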
So What? Adversarial examples have been documented for years as a possible avenue to exploit in machine learning, explored by everyone from the Army's Research Laboratories to Google to top research universities. The difference in the last year is that they’ve migrated from digital-only forms to real-world examples, and from white-box attacks to black-box attacks that lack access to the original training data sets. As more and more mission-critical infrastructure is made smart, these adversarial examples speak to the kinds of arms-race, reverse-engineering edge cases that will likely emerge. Researchers, in fact, believe other forms of deceptive data are already in play in simpler, non-visual forms of machine learning. They believe, for instance, that financial firms are using false flags to trick their competitors’ systems into dumping stock. The question is how to design humans into the loop – people who can flag the discrepancies between, say, Brad Pitt and a pair of tie-dye glasses, or between a financial kayfabe and a real move; in short, how to make machine intelligence more legible to human intelligence. And what is our role in the loop? From these examples, it seems likely to be janitorial – cleaning up data, supervising machines, training them to tell right from wrong. We’re babysitting. We’re domesticating. Like a child being taught about the world, machine learning is uttering its first words, peering out, creating its first inhuman memories in a paranoid, deceptive world. And now it has eyes. It’s up to us to tame them.
What If...

// Visual filters that look like bits of noise to humans have been developed to trick machines into seeing what's not there, such as a building instead of a stop sign. //

Personal
/ From smarthomes to self-driving cars, vision systems and smart sensors are everywhere.
/ Will you train and domesticate your devices, or will others do it for you?

Organizational
/ From regression analysis to convolutional neural networks, having quality inputs will matter more than sophisticated computation.
/ How clean and secure is your data?

Brand
/ Community managers work hand in hand with intelligent bots to engage, interpret, and moderate online.
/ Whose data will mold the content of your bot’s character?

//// Valdis Silins is a senior foresight analyst at Idea Couture.
What is an Insight? House vs. Home: How the Concept of the “Smarthome” Only Tells Part of the Story
In every issue, we explore a topic through an anthropological lens in order to better understand its impacts on a wide range of industries.
By Dr. Paul Hartley
In our last issue, Searching for Excellence, my colleague, Dr. Emma Aiken-Klar, introduced the concept of an insight as we understand it in our practice of anthropology. By way of a definition, she said, “the power of an excellent insight lies in its ability to critically reframe our understanding of the problem at hand.” This is a fantastic way to describe what an insight is because it captures the fact that an insight’s job is to describe the world in a way that changes how we respond to it. It cannot just be a factual description of the world, because this lacks the call to action necessary to achieve the all-important status of “actionable insight,” the gold standard of innovation.
photo: Gareth Williams
Insight, so what?
In this issue, I would like to expand her definition by adding another layer: an insight is a way of reframing our understanding of the problem at hand by allowing us to see the world through someone else’s eyes. A good insight allows the audience to capture something beyond just a piece of knowledge or an understanding. It allows them to experience what it means to live a life and how that knowledge is put to use, providing a glimpse of what it means to be someone else and to see the world in a different way. It should give a hint of what it means to know something and to think about it from another point of view. It should also allow people to see themselves living another kind of life and having differing experiences. In short, it should force the audience to confront the fact that other people live their lives differently and likely don’t understand the world in the same way. It is perhaps best to ground this with a concrete example. Given that this is the Human Futures issue, I will use this opportunity to provide an example of how an insight can develop into something that changes the way we understand our relationship with developing technologies. The following insight is a distillation of several projects, examining the future potentialities of various new technologies in a smarthome. For each project, we conducted in-home research with people in at least two countries. The purpose of each investigation was different, but each study returned a set of insights that were very similar. This is the one insight that was common across all of them:
There Is an Important Distinction Between a House and a Home

People make an important distinction between a house and a home when they speak about their lives and their use of technology. The house is a physical building. The home is something that exists only as a set of relationships within the house. Importantly, while these two words can be used as synonyms, most people use them carefully to create two spheres of reference that describe how they live their lives. The house is more often related to infrastructure. While the house contains the home, it is the home that actually matters. This distinction is important because it explains how their lives are contextualized at home, and because it indicates where smarthome technologies can and cannot fit without major changes. The house and the home differ because they are part of different levels of lived experience. While humans live in both spheres, many technologies can only exist in one at a time. Importantly, “smart” technologies belong in the home, whereas “dumb” technologies can be part of the house. The home itself is a product of the everyday lives of the people who live in the space. It exists in their daily rituals, in the orientation of furniture, in the activities that
occur in various rooms, and in the meaning all of this holds for the people who live in the house. The home is a smart space because it is a relational space. It can easily accommodate “smart” technologies that have some kind of agency or offer something that is relevant. It is much easier for people to accept smarthome devices when they participate in the daily workings of the home. The house, however, need not be a smart space. It can be a dumb space which has no advanced technologies at all. The house is the infrastructure or container that protects the home, but it is not always considered to be part of the home. HVAC units are dumb. Dishwashers are dumb. Even refrigerators are dumb. The reason for this is that many people do not feel a strong connection to the home infrastructure. They did not choose the appliances they live with; the appliances either came with the house or were provided with the rental of the space. In countries where people do not select these appliances, the house is considered to be dumber than in those places where people must provide them, even when renting. When you have to bring a refrigerator with you, it can become a part of the home instead of the house. I believe this is a good insight because it uncovers something that the participants did not fully understand themselves. It not only describes this distinction as something meaningful to them, but also demonstrates that if we see the world through the house/home dweller’s eyes, we can see that certain approaches to IoT and smarthome technologies will succeed while others fail. If we pay careful attention to the fact that people in different countries make this distinction differently, then we can understand how a global, one-size-fits-all approach will not work. We also see that there is more work to be done locally to understand how technology can serve to knit the two halves of the house and home together.
Finally, if we really squint and see the notion of a smarthome through the participants' eyes, we see that technology need not always be smart. They are still using many dumb technologies quite happily. They did not make a value judgment about the dumb qualities of the home, preferring to just make a distinction. ////

// The house, however, need not be a smart space. It can be a dumb space which has no advanced technologies at all. //
Dr. Paul Hartley is executive director and cofounder, Institute for Human Futures and a senior resident anthropologist at Idea Couture.
By Cheesan Chew
Today’s service economy is driven by a desire to deliver on sophisticated customer expectations: personalization, convenience, simplicity, and automation. It is grounded in the belief that customers want experiences that go beyond simply being seamless. They want invisibility. They want relief from the burden of navigating decisions, hoping instead for these decisions to be hidden behind the interfaces of intelligent systems driven by data and analytics. There is an expectation that customers’ needs and wants will be “remembered” by these systems and carried across various service experiences via different touchpoints and interactions. Delivering on this type of experience requires a sophisticated understanding of customers and the role of employees in the system. Organizations delivering service experiences are composed of complex systems of customers and suppliers, processes and functions, and technology and channels. In 1993, in an article written for the Harvard Business Review titled “Predators and Prey: A New Ecology of Competition,” James Moore introduced the idea of a “business ecosystem”: a living organism that evolves over time in conjunction with the other actors it interacts with. The critical aspect missing from Moore’s definition is the role of people within organizations and their leadership – that is, the role of the human in the system. The ongoing debate surrounding the value of the human in the system is extensive. Here, we explore three different perspectives and debates regarding the role of humans in organizational systems.
James Moore coined the notion of a “business ecosystem.” In his 1996 book, The Death of Competition, he defines it as “an economic community supported by a foundation of interacting organizations and individuals – the organisms of the business world. The economic community produces goods and services of value to customers, who are themselves members of the ecosystem. The member organisms also include suppliers, lead producers, competitors, and other stakeholders. Over time, they coevolve their capabilities and roles, and tend to align themselves with the directions set by one or more central companies. Those companies holding leadership roles may change over time, but the function of an ecosystem leader is valued by the community because it enables members to move toward shared visions to align their investments, and to find mutually supportive roles.”
photo: anna dziubinska
The Human in the System
01 The human removed from the system. The debate between technophobes and technophiles has stretched across centuries. Beginning as early as Aristotle and later commented on by Marx, the issue of man versus technology is now reflected in how new models of AI are married to business models in a 21st-century context (See Riccardo Campa’s 2014 Journal of Evolution and Technology article, “Technological Growth and Unemployment: A Global Scenario Analysis”). The contemporary tension of man versus machine pushes simultaneously for productivity in the workplace and preservation of livelihood. Disruption has increased dramatically since 2011, with ride-sharing companies like Uber, Hailo, and Didi automating the experiences of onboarding (i.e., hailing a taxi), navigating, and offboarding (i.e., paying the taxi). What remains in the ride-sharing service journey – for now – is the driver, who can make or break the driving experience. Other examples of such disruption include emerging AI paradigms in chatbots, such as Facebook’s direct-to-Messenger marketing; connected home devices, like Amazon Echo and Google Home; customer relationship management (CRM) AI, like Salesforce Einstein; and many others. In each of these cases, the AI is only as good as the data used to create the response algorithms – and we’re still in the early days of mapping the complexity of human emotion and behavior beyond just repetitive action. To underscore this point, McKinsey conducted a study on the potential to automate jobs, looking also at the impact automation could have on organizations and the future of work. What surfaced in the resulting work – “Four Fundamentals of Workplace Automation,” by Michael Chui, James Manyika, and Mehdi Miremadi – was a focus not on occupations, but on the activities performed. The results of the study suggested that, as of 2015, 45% of the activities an individual performs at work can be automated with existing technology. 
This represents $2 trillion in annual wages. The study also highlights that fewer than 5% of occupations can be completely automated; however, 60% of occupations can have more than 30% of their constituent activities automated. Clearly, machines cannot yet replicate empathy or the “human touch,” features which continue to play a significant role in the delivery of service experiences. The question is, what happens next?
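The distinction between automating whole occupations and automating the activities within them can be made concrete with a toy calculation. The figures below are invented for illustration only; they are not McKinsey's data, and the occupation names are hypothetical:

```python
# Each occupation is modeled as a list of activities; True means the
# activity is automatable with existing technology. All figures are
# invented for illustration.
occupations = {
    "clerk":     [True, True, True, False],
    "driver":    [True, True, True, True],
    "therapist": [False, False, True, False],
}

def automatable_share(activities):
    """Fraction of an occupation's activities that can be automated."""
    return sum(activities) / len(activities)

# Occupations that could be automated end to end.
fully = [name for name, acts in occupations.items()
         if automatable_share(acts) == 1.0]

# Occupations where more than 30% of constituent activities are automatable.
partly = [name for name, acts in occupations.items()
          if automatable_share(acts) > 0.3]

print(fully)   # few occupations are fully automatable
print(partly)  # but most clear the 30%-of-activities bar
```

The point the study makes falls out of the arithmetic: a high share of automatable activities coexists with a low share of fully automatable occupations.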
02 The role of the human in the system. A number of recent studies, from PWC's 20th CEO Survey in 2017 to KPMG's US CEO Outlook 2016 survey, indicate that human capital – including talent and culture – tops the list of important features for an organization's growth, innovation, and security. The key is to identify and build capabilities that will support the growth and innovation of similar features of human capital. Such features complement and lead – but cannot be replicated by – machines, and they include creativity, innovation, understanding, judgment, and, of course, empathy. These innately human capabilities come from experience, not algorithms. As an organizational ecosystem evolves and shifts, the role of humans within that ecosystem becomes increasingly important. Some businesses are stress-testing the role that people can play within their organizations. A few years ago, both Zappos and Medium introduced the notion of Holacracy, a model that uses self-organization to replace the traditional management hierarchy with a new peer-to-peer "operating system" that increases transparency, accountability, and organizational agility. Zappos' experiment began in 2013, with the company having a goal of continuing to innovate at scale. The model has been criticized for its rigidity, its inability to account for the role of human emotion in the workplace, and the sheer number of rules that must be absorbed and followed by staff for the model to function. In his article "The Fatal Gap Between Organizational Theory and Organizational Practice," NOBL founder Bud Caddell referred to it as a "dungeons and dragons" model for work. In its effort to redefine roles and structures for employees, Holacracy seems to have missed a critical element that makes employees human, and in 2016, Zappos fell from Fortune's "Top 50 Companies to Work For" for the first time in eight years.

// "We want to believe that we are thinking, rational people and on occasion tangle with emotion, flick it out of the way, and go back to thinking. That is not the truth. The truth is we are emotional beings who on occasion think." – Brené Brown //
03 The human designs the system.
photo: tim wright
The two most critical inputs in a business ecosystem are employees and customers. The more an employee understands the impact they have on customer experience, and the more they own that role and are rewarded against well-defined KPIs, the better decisions they can make. Consider, for example, the accounting team at a large Fortune 500 organization. This team is responsible for creating and maintaining what’s often considered a mundane aspect of service experience: billing customers. A critical part of a business ecosystem, billing touches upon finance, revenue, technology, and – when things go awry – customer service. While many companies view billing as an experience afterthought with the sole purpose of collecting revenue, a human-centric approach to designing the billing experience begins with understanding the impact of the process, the emotions of the customers receiving the bill, and where billing exists within the customer experience journey. Employees who understand the ethos of even the most mundane bill see it as another customer touchpoint. This touchpoint goes beyond the format of the bill itself, the channel of engagement, or the places where the numbers sit on the page. These employees understand that by connecting numbers to expectations and using the bill as an opportunity to reengage and strengthen customer relationships, organizations can create a loop of experience that can drive future revenue. The design and delivery of future service experiences will rely on human ingenuity, innovation, and inventiveness continuing to play integral roles. These human features will need to work in concert with digital technologies, such as evolutionary AI and automation, to drive innovation and evolve how organizations interact with customers. Successful organizations understand not only how to build the systems needed, but also the role of people in those systems. 
Such organizations will invest in data, empathy, efficiency, and humanness to deliver not just on fact, but also on feel.
A parable of the human in the system.
In the Robert Munsch children’s classic, Jonathan Cleaned Up – Then He Heard a Sound, Jonathan is entrusted to keep the house clean by his mother. All of a sudden, he hears “Last stop, everyone out!” A subway car opens up through the wall, and thousands of people traipse through his living room as if it were a subway station. Furious, Jonathan tells the conductor that his house is not a subway stop. The conductor replies, “If the subway stops here, then it’s a subway station.” Jonathan goes to City Hall to see the subway boss, who redirects him to the mayor. The mayor tells Jonathan, “If the subway stops there, then it’s a subway station! You shouldn’t build your house in a subway station. Our computer says it’s a subway station and our computer is never wrong... now, I have to go for lunch.” On his way out of City Hall, Jonathan passes the big computer and stops when he hears a voice coming from it. Going to investigate, he discovers an old man behind the computer who says he is the computer. Jonathan is no dummy, and says, “Computers are machines and you are not a machine. They go ‘wing, wing, kler-klung, clickety clang.’” The old man/computer replies, “Well, that goes ‘wing, wing, kler-klung, clickety clang,’ but the darn thing never did work. I do everything for the whole city.” The old man/computer tells Jonathan he put the subway station at his house because he didn’t know where to put it. Jonathan convinces him to move the subway station, and he yells to Jonathan as he’s leaving, “Don’t tell anyone the computer is broken. The mayor would be very upset. He paid ten million dollars for it.” //// Cheesan Chew is chief CX officer, SVP at Idea Couture.
AI by Any Other Name…
…Would Compute as Well
Everyone is talking about machines that can think, sense, and feel, or how the coming massification of AI will change the way we live, work, and play. Major companies with deep pockets like Google, Facebook, Amazon, and Microsoft – and soon, Apple – understand that this is where the money is going and are willing to put as much of theirs as they can behind it. They are following a common vision that, in a few years, will see AI power our smarthomes, healthcare systems, manufacturing, personal mobility, and financial services. This vision suggests that AI has an entry point into all of our lives, and that its day has arrived.
photo: idris mootee
By Idris Mootee
But before we jump into weaving these algorithms into the fabric of our lives, we need to think about the role AI plays in the human experience. When we require machines to think, we need to decide how we want them to think. This can mean many different things to different people who have various cognitive capabilities, cultural backgrounds, experiences, and training. Thinking can be creative problem solving, optimization, or pattern recognition. Humans sometimes have a starting point before they even begin to consciously think. Where is that starting point for machines? How can machines develop the all-important qualities of trust, subordination, and leadership? Let’s consider love. According to the triangular theory of love, developed by psychologist Robert Sternberg, this emotion is composed of three things: intimacy, passion, and commitment. Each component manifests a different aspect of love: / Intimacy refers to feelings of closeness, connectedness, and bondedness in loving relationships. / Passion refers to the drives that lead to romance, physical attraction, sexual consummation, and related phenomena in loving relationships. / Commitment refers, in the short term, to the decision that one loves a certain other, and in the long term, to one's commitment to maintaining that love. This aligns well with the theory of luxury that I developed when I was researching the subject for a book I was writing. True luxury is composed of three elements: / Desire refers to feelings of wanting. A luxury brand’s truth is a visceral connection between consumer and brand. While this truth arises from a product’s design, it’s the features that bring about a deeper desire. / Perfection is the maker’s insatiable quest to achieve the highest possible level of craftsmanship or performance, which is most often expressed in taste, tactility, or aesthetic appeal.
/ Self refers to how consumers are treating luxury as a means of developing their self-identity, and communicating their social standing or “face” in certain cultures.
Can machines manifest all of the attributes identified by both the theory of love and the theory of luxury? If we believe machines can possess even most of these attributes, many of them would require some form of physical experience to achieve. They would have to learn by doing. But an AI machine with the ability to have physical and emotional senses is not something we are close to realizing – although some believe that humans and computers are quickly becoming one tightly coupled cognitive unit. These outlined attributes are at the core of human experiences. The same goes for fear, anxiety, empathy, and hopes and dreams. Could humans be the very thing that gifts these qualities to AI? Even if AI can possess these human qualities, however, the problem of how it acquires them is still in question. For instance, machine learning is not the same as machine understanding. If AI cannot experience all of the above senses for itself – and grow from the experience – can it ever truly understand us? Can we trust it to make important decisions for us? No amount of processing power can change this. If machines cannot truly experience the equivalent of “real life” in machine form, then we are a long way from claiming that we have achieved true “artificial intelligence.” At best, we can claim to have logical intelligence or designed intelligence. For a machine to behave more like a human, we would have to reproduce millions of neurons, tens of thousands of connections between each of them, and hundreds of thousands of millions of ion channels merely to approach minimal humanoid intelligence. And this assumes that a brain is just the sum of its parts, which may not be the case. Professor Mark Bishop, a director and researcher with the Tungsten Centre for Intelligent Data Analytics, recently spoke on this topic at GOTO Berlin.
In a presentation titled “Deep Stupidity,” he described what deep neural networks can and cannot do, arguing that AI cannot become sentient because computers don’t understand semantics, lack mathematical insight, and cannot experience phenomenal sensation. In short, they can do none of the things that make us human. Everything that we have of value as human beings, as a society, as a civilization, is the result of our intelligence, and is still out of reach of even the most sophisticated AI. Machines would need to achieve superintelligence status to match or advance beyond us – which could be both the best and the worst thing that we ever invent. There is no proof as to who makes better decisions between humans and machines. What we do know is that machines are still dumb – and the same goes for many humans, too. As the development process of AI continues, future iterations can instead become a tool that magnifies humanity and human intelligence, giving us the ability to advance our civilization in a human-centric way. The choices we make will shape the future of our world and our lives. We need to ensure that the future we are creating is not an AI age, but a human age. Perhaps it is time to give AI a new name and work toward a better goal for this technology. //// Idris Mootee is Global CEO of Idea Couture.
How Lonely Roombas and Friendly AI Are Signaling a Rise in Anthropomorphic Normativity By Kyle Brown, Jocelyn Jeffrey, and Melanie Levitin
In an interview with Ellen DeGeneres, Ryan Gosling announced that he intended to buy his vacuum cleaner a companion. Gosling has a Roomba, a robotic vacuum, and as he listens to it cleaning alone in the night, he pities the vacuum’s “tireless” efforts. “I feel bad for it,” Gosling admitted. And so Gosling reached what is, to him, a reasonable conclusion: “I want to get a Roomba for my Roomba this year, so it has company.”
Drawing Suns With Happy Faces The empathy Gosling feels for his “lonely” Roomba is by no means unusual. It is the psychology of anthropomorphism – attributing human characteristics to non-human objects. Just consider why children draw the sun with a happy face or why we give names to our cars. We maintain a peculiar tendency to ascribe human qualities – such as emotion, fear, and ambition – to inanimate objects. According to psychologists, this happens for three reasons. First, we tend to anthropomorphize when confronted with things that carry human-like attributes and resemble certain human features, like eyes. If you’ve ever seen a car outfitted with plastic “eyelashes” on the headlights, you’ve witnessed this
phenomenon. We also have a desire to make the unfamiliar familiar by leveraging contextual indicators as cues to dictate an emotional response. After all, what’s more familiar than an object that is in some way made human? Is a car in a bad mood when it won’t start? Do our dogs perhaps act with vengeance when they misbehave? By seeing the “humanity” in our surroundings, we can attempt to interpret the behaviors of non-human entities, like our vehicles and pets, so that we can understand and relate to them. Lastly, our desire for social interaction represents a powerful force that causes us to manufacture relationships with inanimate things. Social beings by nature, humans create friendships with objects or animals as a way to cope with loneliness. Growing epidemics in a connected world, loneliness and isolation are but a few of the factors contributing to the proliferation
of new, anthropomorphic technologies, devices, and services aimed at filling our social needs. Our 21st-century lifestyles are complex and, compared to past generations, compressed. Because many of us are looking to avoid the complexities of modern human relationships, we are changing the types of companionship we seek and the relationships we form. Today, some people are ready to reject old models of relationships altogether and are actively looking for alternatives.
Natural Moments With Unnatural Partners Increasingly sophisticated technologies are giving birth to AI-driven digital personalities, virtual beings, stay-at-home robots, and other forms of synthetic companions. These new technologies allow users to control all the parameters of any given interaction to create “relationships,” which, to varying degrees, can meet a user’s practical and emotional needs. Alexa, the digital assistant behind Amazon’s fast-growing Echo smarthome device, is an anthropomorphic conversational interface that helps users perform everyday tasks solely by voice. Alexa and its counterparts, Siri and Cortana, are acclimatizing users to the idea that algorithms can simulate the experience of a personality. As users accept and rely on these “personalities” and the technology behind them continues to evolve, it is not difficult to imagine conventional human interdependencies, such as the Western nuclear family, being replaced by artificial mechanisms and networks of digital personas. A more extreme example of a digital assistant is Gatebox. Created by the Japanese company Vinclu, Gatebox is poised to blend the personal assistant component of Alexa with the intimacy of a partnership. Gatebox is a holographic anime AI that “lives” in a user’s home. Its reach, however, moves beyond the domestic sphere, as the Gatebox character can interact with the user throughout the day via Gatebox’s chat application. Vinclu promises that the character will spend “natural moments” with users while channeling the essence of a trusted companion. The relationship between this AI and its partner is so blurred that on the Vinclu website, the main character offered with Gatebox refers to the user as her “husband.” Gatebox is built on the modern concept of a waifu – a term for a female, animated character who is regarded by her human
counterpart as a partner or wife. Though the concept of a waifu predates the Gatebox AI, Vinclu recognizes that these human–other relationships exist, and the company is taking steps toward normalizing them. As old relationship structures are being tested, technology is allowing new ideas about companionship to be explored, even as the old are overthrown. Going forward, these shifting relationships will challenge dominant paradigms and lead to anthropomorphism being leveraged as a design principle to an even greater extent.
Designing for Humans Anthropomorphism has long prevailed in business and political contexts. Advertisers use personalities, physical human features, and voices to create concepts, brands, and communications that are designed to be more empathetic and to elicit an emotional response. For the most part, this intentional construction of personhood is successful: Millions create “real” relationships with products, and experience true brand love and loyalty. However, the psychology of anthropomorphism can have a negative effect on business as well. Dissatisfied users and consumers can project personalities, characters, and intentions onto businesses – even entire industries – and craft powerful narratives that harm specific brands. Consider the discourse around Big Pharma, a term that has become so ingrained within consumers’ consciousness that it now conjures up the image of a villainous character. Anthropomorphism’s absolute, deductive nature – in conjunction with the ability it has to allow users to convey a message easily and quickly – means that for some organizations, a customer’s tendency to anthropomorphize can be both an opportunity and a loss. Consider also the notion of “corporate personhood,” in which a corporation, separate from its associated human beings (founders, CEOs, or employees), has many of the legal rights and responsibilities enjoyed by humans. This characterization also has both positive and negative implications. Companies responsible for creating virtual assistants, for example, also have a moral obligation to assess the values and worldviews of the personalities and physical features of the entities they sell. For example, will users be able to personalize their digital assistant to such an extent that they influence social regression (e.g.,
by creating overly feminized or subservient characters)? Could such a device be enabled to behave in a manner that would be considered even more socially regressive? Microsoft’s public foray into the AI world was shut down less than 48 hours after it began when “Tay” – an AI chatbot designed to mimic the speech of a 19-year-old girl and to interact with and learn from humans on Twitter – was inundated with racist and sexist messages from a Reddit subgroup and began mimicking their speech. Likewise, it is certainly possible to imagine a scenario in which personal assistants are hacked and used as vehicles to subtly indoctrinate users with new, potentially malevolent ideas and value systems. Mikhail Bulgakov’s The Master and Margarita captures this risk: Behemoth, a huge cat who walks on two legs, enjoys chess, vodka, and pistols, and has an affinity for his own crude jokes. Anthropomorphization can enable people to empathize with animals and technologies just enough to be receptive to strong, uncomfortable messages.
Connecting With Others It remains to be seen whether the myriad of emerging tech-based alternatives to human relationships fully address our psychological and physiological needs. What is clear is that our relationships, communities, and social circles are being redefined and renegotiated. Just as our communities are expanding to include new types of families and groups, our relationships with technology are becoming more meaningful. Technology has long been connecting us, and we are now asking it to connect with us. We are broadening our notion of what it means to live in an interdependent world where humans are just one part of a broader system that everyone and everything relies on. In connecting with and anthropomorphizing technology, we might ask ourselves if we are losing something that makes us human. In fact, we are not. We are only trying to create deeper, more meaningful relationships. What’s more human than that? //// Kyle Brown is a senior foresight strategist at Idea Couture. Jocelyn Jeffrey is a developer at Idea Couture. Melanie Levitin is an innovation strategist at Idea Couture.
The Future of Money: An Interview with Ann Pettifor
B y D r. I a n C o s h
We tend to think of money as a cultural artifact that has evolved from stone disks and precious metals into modern currencies that are becoming increasingly digital. According to Ann Pettifor, a UK-based economist and director of Policy Research in Macroeconomics (PRIME), this technological frame distracts us from the more important social dimensions of money. In this interview, Ann describes her view of money as an essential part of civilizational advancement, and argues that the future of civilization depends on how effectively we understand and manage money – not as a technology, but as a human social system.
What comes to your mind when asked about the future of money? With the future of money, you’re thinking bitcoin, digitally created cryptocurrencies, or whatever other technological shape money takes. But the wonders of technology can be a little bit delusional. The critical thing is this: Money is a social construct. And so, because it is not a commodity but a series of man-made promises – obligations and claims, assets and liabilities, credits and debts – it is not finite in supply. However, these promises, obligations, and claims must be institutionally upheld – by both contract law and the criminal justice system. Now, bitcoin prides itself on operating in a sphere where there ain’t no law – there is only the accounting system of blockchain. Bitcoin aims for the utopia of all liberal free marketers: that they get to live beyond the control of regulatory democracy. The second thing the architects of the system have done with bitcoin is totally orthodox, and based on the theories of Friedrich Hayek, who advocated for a free market in Denationalisation of Money. Bitcoin’s founder, Satoshi Nakamoto, understood that to maintain or increase the value of bitcoin, it must conform to a monetary policy of artificial scarcity. In this sense, bitcoin would come to resemble gold, whose value is based on its scarcity. Apparently the limit of 21 million bitcoins will be reached in about 2140, at which point miners will be remunerated on the basis of transaction fees only.
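The 21-million cap and the roughly-2140 date both follow from Bitcoin's published issuance schedule: a 50 BTC starting reward that halves every 210,000 blocks, with blocks arriving about every 10 minutes. A few lines of arithmetic (a sketch, not a protocol implementation) reproduce the figures:

```python
SATOSHIS = 100_000_000           # one bitcoin, in its smallest unit
subsidy = 50 * SATOSHIS          # initial block reward, in satoshis
total, halvings = 0, 0

# The reward halves every 210,000 blocks, using integer division as
# the protocol does, until it rounds down to zero.
while subsidy > 0:
    total += 210_000 * subsidy
    subsidy //= 2
    halvings += 1

total_btc = total / SATOSHIS
# Time to pay out all subsidies, assuming ~10-minute blocks from 2009.
years = halvings * 210_000 * 10 / (60 * 24 * 365.25)

print(f"{total_btc:,.2f}")       # just under 21,000,000 BTC
print(2009 + years)              # final subsidy paid around 2140
```

The supply never quite reaches 21 million because of the integer rounding, and the last subsidy-paying block lands roughly 130 years after the 2009 genesis block, which is where the circa-2140 figure comes from.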
But there are inherent flaws in this system, aren’t there? The issue with only 21 million bitcoins (despite the enormous number of units into which each can be subdivided) is that it theoretically aims to limit economic activity. Hayek believed this was necessary to prevent inflation. And that’s what the gold standard was about; it was what orthodox economists wanted because they believed that, by limiting the supply of a commodity (such as gold), they could tame inflation. But money and monetary systems are not based on a commodity – nor have they been for at least 5,000 years, as anthropologist David Graeber explains in his book Debt: The First 5,000 Years. We have had systems for making and keeping promises that date back to then, but our systems have always had to be overseen by trusted third parties – the village chief or priest, and then ultimately, the regulator. Once released from such oversight and management, money systems are quickly corrupted by the wicked. And this is exactly the fatal flaw in the Hayekian project: they took a socially constructed system of promises, obligations, and claims, turned it into a system based on a commodity (gold), and argued that the system had to be left to the control of the “invisible hand.” They tried to use gold as a way of controlling economic activity – contracting investment, employment, and so on – by arguing that society “can only do as much activity as we’ve got bitcoins or gold bars in the bank.” In the age of the gold standard, when governments supposedly ran out of bars, the public was told, “You’ve got to slash back your economic activities, because they exceed the amount of money/gold in the bank.” That was a deeply, deeply flawed and reactionary theory. It’s what caused the contraction of economic activity (mainly employment, but also investment and consumption, leading to falling profits, bankruptcies, and unemployment), which has led to recurring crises and depressions.
Why does that serve anybody’s interests? Because the more demand you create for your gold bar or your bitcoin, the more the price rises – which is good if you own the gold or the bitcoin. But bitcoin doesn’t help create economic activity, and it’s a really bad medium of exchange for ordinary traders. It is so volatile – its value can change by up to 10% a day. The dollar, for all its faults, has a certain stability: Its central bank is a proper institution, and it has a sense of value across the world. And unlike bitcoin or gold, the dollar’s not finite. And thank God it isn’t, because what we want to do isn’t finite. A lot of Greens argue that unlimited money supply is the problem, because you can consume far more. That is true, but that is why we have to manage it. In the old days, you couldn’t buy an $11,000 crocodile leather handbag with a credit card. But then the banks made it easy; it’s hugely profitable for them when someone uses a credit card and they can charge 22%. We should manage the credit system so it doesn’t do these things. But if it is used to build wind farms all over the world, we should be able to finance that. A wind farm employs people, it uses resources, it generates income, and it can repay the debt. Therefore, it brings about a sort of equilibrium. Whereas if you buy a leather handbag, it just gets destroyed; it doesn’t have an income-generating use. That being said, a lot of people earn a lot of income from their bags.
// The dollar, for all its faults, has a certain stability: Its central bank is a proper institution, and it has a sense of value across the world. //
What would be different if more people understood money? The finance sector would be held accountable. At the moment, the finance sector is beyond our control. It’s beyond governmental control, and it can do as it likes. It has enriched itself to an extent unprecedented in history. For me, the reason we are in this populist crisis is that the financial sector has tried to detach itself from all of these institutions. They don’t like regulations, they don’t like central banks, they don’t like to pay taxes, and they don’t like any of the things that keep the system stable. So they are detaching the financial markets from all of those and operating at a global level. But that is socially and politically unsustainable, and civil society organizations are beginning to ask the right questions and basically say, “We didn’t know you could just do this out of thin air, you bastards.” That’s fine, but we need to nuance it a little bit to understand that the monetary system is really essential. Monetary systems are a great civilizational advancement, but they have to be managed by a democratic society, and they have to be made subservient to the interests of that society – not allowed to be the masters of it. On the broader scale of human concerns, does the future of money matter? I think it matters enormously, especially to environmentalists and women, because it’s those groups that are told there is no money for what they need. It’s simply not true. We discovered it wasn’t true when the central bank found a trillion dollars from nowhere overnight to bail out the banks. I feel passionately about this because of climate change. We may not have the intelligence to tackle climate change, and there are big challenges facing us, but we will never ever be short of money. We can never run short on the promises we make to each other. The idea that there is no money is laughable and is something that denies us the possibility of a sustainable future. 
So instead of the future of money, maybe it’s more important to think about how our understanding of money shapes the future. Absolutely. The propaganda we get is that there is no money, forget it, just go lie in your slum. I get really angry because of that defeatism. The acceptance that we are victims of money is terrible. It’s what is tearing the earth apart. There’s a lot at stake. We must remember that money is a human social system that we’ve constructed to enable us to do things. It isn’t complicated, except it is made to look complicated. It is crucial to understand the system and make it work for us. //// Dr. Ian Cosh is a resident anthropologist at Idea Couture.
Ann Pettifor is director of policy research in macroeconomics. She is best known for correctly predicting the global financial crisis of 2007–08 in numerous publications. She recently published The Production of Money: How to Break the Power of the Banks.
photo: Foyers Photography
You talk about institutions and how money is not finite. That sort of goes against how many of us experience money. Money seems so natural – most of us can’t even remember how we learned about it. Growing up, we probably saw parents using it to get things. Then came allowances, summer jobs, salaried careers, and so on. Yes, the way we think about it growing up, it just flows around us. There’s so much confusion around money. We go to work, and at the end of the week or month, we are given money. And we think, “Ah, that money has come as a result of me working.” In reality, that money was there before you worked. No investment in anything begins without money in the first place. Money then enables economic activity to occur, and that generates income. Money – or credit – has never been anything except a promise to pay. It is that promise to pay which enables us to undertake activity, not just to barter. But that promise has to be institutionally backed up. The most important institutions are those that uphold the law of contract and say, “I promise to pay.” Another key institution is the central bank, whose role is to give these promises a value by making us pay our taxes in a currency, in a form that represents those promises. And then there is a fee for the upholding of this promise, and that’s the rate of interest. If you have all of that in place, you can issue credit – and credit is money. A lot of civil society organizations have discovered – and when I say discovered, I mean that ironically – that banks create credit out of thin air. Well, banks have been doing that since the 15th and 16th centuries. And yet today, there’s still a lot of blindness to the science of it. I always tell this story of Ben Bernanke, the Chairman of the Federal Reserve from 2006 to 2014, giving an interview to 60 Minutes in 2008, the day after he had given $85B to American International Group (AIG) to bail them out. A journalist asked him, “Where did you get that money from?
Did you get it from taxpayers?” And he said, “No, no, no, no, we have at the Federal Reserve something called a computer, and we enter numbers into the computer and we post it to AIG’s account and they get $85B.” And the bank demanded collateral from AIG and the promise to repay, which was upheld under law, and they set a rate of interest. And that’s how all money is created. By the time you do actually see money – when you’re a kid and you’ve earned your pocket money, for example – it’s gone through these processes to end up in your mum’s bank account. And you think you’ve got it for being a good boy.
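The mechanics Pettifor describes can be sketched as a toy double-entry ledger. This is a hedged illustration only – the class, names, and figures below are invented for this sketch and are not a model of the Federal Reserve’s actual systems. The point is that a loan is booked simultaneously as a new asset (the borrower’s promise to repay, at a rate of interest) and a matching new deposit liability, so the deposit is money created by the act of lending itself.

```python
# Toy double-entry sketch of credit creation "out of thin air":
# extending credit books a new asset (the promise to repay) and a
# matching new deposit liability. No prior funds are moved anywhere.
class ToyBank:
    def __init__(self):
        self.assets = {}        # loans: promises to repay, owed to the bank
        self.liabilities = {}   # deposits: money the bank owes account holders

    def extend_credit(self, borrower, amount, rate):
        # Both sides of the balance sheet grow by the same amount the
        # moment the promise is accepted; the deposit is new money.
        self.assets[borrower] = {"principal": amount, "rate": rate}
        self.liabilities[borrower] = self.liabilities.get(borrower, 0) + amount
        return self.liabilities[borrower]

bank = ToyBank()
deposit = bank.extend_credit("AIG", 85_000_000_000, rate=0.05)
print(deposit)  # 85000000000 -- spendable money that did not exist before
```

Nothing is transferred from anywhere in this sketch: numbers are entered, a promise is recorded, an interest rate is set – which is the sense in which “credit is money.”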
Medical Technology Without Humanization
Innovation Needs Human-Centric Design
By Dr. Ted Witek
The Human Genome Project, the completion of which was announced on April 14, 2003, saw the successful sequencing of the human genome. This amazing scientific feat cost some $3B. Today, only 14 years later, an individual can have their genome sequenced for under $1,000. As this technology continues to advance, there is a growing hope that millions of people will be able to have their genomes sequenced, allowing us to better understand disease and to prevent or treat afflictions hampering health and happiness. The National Human Genome Research Institute of the National Institutes of Health (NIH) points out that sequencing a newborn’s genome could provide more information than what is currently measured in routine genetic testing. This information could potentially be used to guide a lifetime of medical care. Imagine a person’s genome becoming part of their medical record, with complex computer algorithms run to diagnose and recommend personalized care.
While this level of screening certainly seems possible, we need to ask if today’s patients and health systems are prepared to welcome such advancements. Will patients understand the true meaning of risk factors, disease probabilities, and potential treatments or cures? Will their physicians and caregivers be able to explain these matters in a meaningful way? To ensure that both patient and physician needs are met, the human and social factors associated with advances in genetic screening must keep up with the incredible progress being made in medical technology. This is also recognized by the NIH, which is studying the social impacts of such advancements by examining different issues, such as the implications of learning one’s risk for an incurable disease like Alzheimer’s.
The Concept of Humanomics In a thought-provoking 2014 editorial piece for Chest: The Cardiopulmonary and Critical Care Journal, J. Mark Fitzgerald and Iraj Poureslami highlight the need to acknowledge the behavioral perspectives of those on the receiving end of technology. They describe this perspective as humanomics and stress that, along with ensuring patients’ health literacy, healthcare providers must manage diseases by considering patients’ various cultural and ethnic perspectives. When it comes to genome sequencing and personalized medicine, Fitzgerald and Poureslami point out the need for interpretation and explanation. Effective health decisions are seldom simple, particularly when it comes to chronic disease. To make these decisions, patients require not only access to information but also its processing, interpretation, and communication, which most often involves the more technical aspects of data. Providing rational explanations to patients on the risk–benefit ratio of emerging technologies will be especially difficult for healthcare providers if behavioral factors, such as compliance and communication, are not considered early on in the integrated care plans of patients. For example, one of the most elusive behavioral aspects in respiratory care is the notoriously poor compliance patients tend to have with inhaled medication.
Barriers to Personalized Medicine: Organizational Behavior Another barrier to the implementation of technology can be illustrated in the application of genomic testing to certain types of lung cancer therapy. Researchers have learned a great deal about various biochemicals and their receptors that regulate the growth of cells, including cancer cells. One such factor, epidermal growth factor (EGF) and its receptor (EGFR), is a crucial player in many diseases, including lung cancer. The presence of a mutation in EGFR has been shown to indicate a dramatic clinical response to drugs blocking that receptor. In other words, identifying this mutation through genomic testing at the time of lung cancer diagnosis can help healthcare providers determine the best therapy for a patient. It seems that personalized medicine has come of age. Or has it? While molecular diagnostic tools are in place for genomic testing, and the costs of these tools are falling as their usage increases, the human aspects of providing medical care are among the factors prohibiting the optimal utility of such innovative tools. For example, in order to test for an EGFR mutation, a sufficient sample of tissue must be obtained during a biopsy.
Completing a biopsy depends on both human and organizational behavior; in the case of a lung cancer patient, its successful completion relies on the integration of patient care across the disciplines of radiology, respirology, surgery, pathology, and oncology. Actors from each of these disciplines must understand the importance of obtaining an adequate tissue sample in order to optimize the downstream aspects of care provided by actors in other disciplines. Mutual understanding and cooperation among these various actors will promote the creation of targeted therapy that makes the best use of groundbreaking scientific advancements. This is not the only way in which the integration of care can drive the optimal utility of technology. One Canadian report explains that most molecular diagnostic testing is ordered by a medical oncologist well into the patient’s journey rather than being ordered by the pathologist, who enters earlier in the patient’s care. Additionally, this study found that it takes a median time of 18 days to receive the results of molecular testing, a delay that could be devastating for some advanced lung cancer patients. These human and organizational behavior factors – a lack of integration of care and the inability to obtain results quickly – work to prohibit the integration of one of society’s greatest technological advances. The good news is that by identifying these factors, we can give them the attention needed to ultimately optimize how technology is applied to healthcare.
Never Open Champagne at Halftime While advances in medical technology are immensely helpful to the treatment of disease, we must be careful not to celebrate success too early by focusing only on these advances. Understanding the human and organizational behavior factors relevant to providing care is necessary for fostering the patient’s health and happiness. By establishing a human- and society-centered value platform during the development of these technologies, the optimal integration of technology into healthcare could be facilitated. Such a platform would give deliberate consideration to potential integration issues during the development of technology, rather than resulting in the consideration of these issues after development is complete. Scientific discovery and innovation that result in a tangible (and useful) drug or diagnostic tool is no doubt worth celebrating. But if we are to solve the whole problem, we must address the human, social, and organizational factors that bridge the gap between technology and patient care. //// Dr. Ted Witek is a professor and senior fellow at the University of Toronto and senior VP of corporate partnerships and chief scientific officer at Innoviva.
// Imagine a person’s genome becoming part of their medical record, with complex computer algorithms run to diagnose and recommend personalized care. //
The Future is Slowly Killing Me
By Shane Saunderson
Technology-Induced Depression and Other Follies of Our Modern Digital Age
I was shaking uncontrollably. It started with my hands. I remember holding them to my face, looking at my own flesh as though it were foreign. The tremor quickly traveled down my arms toward my legs, and within seconds my whole body was vibrating as though every atom in my being had
matched a resonant frequency and moved me in unison at a macro level. For a moment, I didn’t know where I was. (I was in a taxi.) I forgot where I was going. (I was traveling to an airport.) I couldn’t remember why I was there. (I was supposed to fly home.) But I did know how I felt. (Scared.) In a fraction of a second, my body had revolted against my life decisions and was denying me, with every ounce of its being, the ability to keep going toward that airport. But I wasn’t afraid of airports; I was afraid of facing my life. And all of this had been set off by a simple buzz in my pocket.
illustration: Erica Whyte
That buzz was the straw that broke the camel’s back. That buzz was one notification too many. That buzz was a cry for my attention when I had nothing left to give. That buzz was the reminder that my phone was the portal to so many issues. I had been dealing with an inordinate amount of personal and professional anxiety. I was trying – and failing – to deal remotely with several members of my family who had serious health issues. I had been traveling non-stop for months and barely had a stable home to speak of. I was juggling multiple difficult and demanding clients, all the while questioning the value and purpose of my work. I felt the increasing guilt of losing touch with close friends. My calendar looked like a mason’s pride and joy: blocks stacked upon each other to make solid walls of my weeks.
And this powder keg and tiny fuse was lit by one little buzz. Sadly, this was not an isolated episode. I get anxious, depressed, and near manic at times – but I don’t tell anyone that. The irony of publicly announcing this in a globally distributed magazine is not lost on me; however, I’m getting to a point where I realize that if I don’t speak up, I’m fueling the very problem that plagues me. I say this because I believe my struggle and mental health challenges are not completely of an internal origin, but a product of the increasingly technologized life I creep toward with each passing tomorrow. And, though I realize the folly of attributing personal struggles to external factors, at the risk of sounding paranoid, I’m legitimately concerned that the future is trying to kill me.

I was born
into a quiet farm life just outside of a small, sleepy town on the Canadian Prairies, and I was raised in a largely analog world. Both of these things are worth mentioning, as I believe they contribute to the ways in which I interact with, perceive, and ultimately am burdened by technology. I’ve always had a complicated relationship with technology, particularly novel gadgets and tech-heavy science fiction. As someone who was infinitely curious about the world, technology historically fascinated me and provided my mind with a tangible outlet for envisioning the future. And while I was always an early tinkerer with new tech, over time I became a very reluctant early adopter, particularly with communication devices. I was a late bloomer when it came to the mobile phone game; I only got one when I became an executive in student politics in 2003 and, even then, I only got it because the job required it. That phone was ceremoniously destroyed the following year when I left the position, as it represented a distraction from the important people and things in life. I wouldn’t choose to get a smartphone until 2009, when I biked across the country and wanted a device to update my blog from the road. I was not yet aware that having the best of intentions would turn out to be the beginning of something dark. In the spring of 2014, I had that anxiety attack in the back of a taxi. I often use modifiers to soften that statement – “anxiety thingy,” “panicky moment,” “nervous episode” – but I’ve learned enough now to know what really happened. It was one of the scariest and most confusing moments of my life. I felt completely helpless and out of control, and it was only with the kindness and compassion of a few close friends that I was able to navigate my way through it. However, there was a silver lining: My eyes were finally opened to the increasingly dangerous impact technology was having on my life and the lives of many around me. We check our phones at the dinner table.
We browse Facebook at work as though it were a resting state. We might even give the same priority to a tweet as we do a face-to-face conversation. We’ve traded genuine presence for digital omnipresence. For some – particularly those raised and socially indoctrinated within a digital-heavy lifestyle – this way of living works. Those raised outside of these norms, however, find themselves in increasingly uncomfortable situations and conflicted by the competing social and information priorities laid out for them. After my episode, I became more conscious of technology’s impact on my life. I started to realize that I had let friendships lapse, because following someone on a social network had given me a false sense of connection without actually staying in touch. I saw that some of my relationships were affected by the fact that I was unable to be fully present in the moment; one eye was constantly on the digital world, wondering what I was missing out on. My sleep patterns were erratic, as I scrolled endlessly on my phone right up until bedtime and periodically forgot to put it on “do not disturb” before placing it on my bedside table. My work was slow and inefficient from constant pop-ups and notifications sneaking into my field of view. I had become an addict.
While taking a sabbatical after the episode to deal with my various issues, I recognized these problems and began to put different barriers and rules in place to protect me – a process that I have reevaluated and updated every few months as new tech creeps into my life. I banned my smartphone from my room. I turned off 80% of my push notifications. I set office hours for my work email to reduce notifications after work hours. I set personal rules about how I would use social networks. I left my phone in my pocket, not on the table. I grew wary of technology. I grew skeptical of online communication. I changed from a gadget freak into a bit of a Luddite. I did all of this, and continue to do it, to protect myself. It seems every new year brings with it yet another notification we must integrate into our lives – new social networks, new personal monitoring, new media sources, and new friends through channels as varied as the individuals we connect with. However, with each year and notification comes an increase in the growing number of counter-culture trends: neo-Luddites, tinfoil hatters, minimalists, hikikomori. All bring an acknowledgement that there is something wrong with the way we live. We should see these departures from the digital norm as a growing signal of human behavior, a rejection of this hyper-connected lifestyle. With the flux and pace of technology, we don’t have the time to anticipate or plan for the implications of new developments. As such, we need to take our modern digital world for exactly what it is: an experiment that may or may not be the right path. Human beings haven’t always embraced technology this way. However, to argue against this way of life raises issues that are anti-consumer and anti-capitalist, which creates a big problem for both individual companies and the economy as a whole. In the face of this growing tech-skepticism, how do we create more, sell more, and engage more? 
How do we increase our bottom line and grow the GDP when it potentially comes at the expense of mental health or human value? This is not a new problem. The economy has always found (and will continue to find) a way to adapt. When agrarian economics began to stagnate, we evolved beyond commodities and focused on differentiated goods. When goods became lackluster, they gave way to the service economy. As services became dull and transactional, future-oriented organizations cropped up to help companies evolve into experience businesses, attempting to derive value from how people feel. Looking forward, as our feelings become commoditized, Joseph Pine and James Gilmore (the originators of the “experience economy” concept) have even predicted that a new type of “transformation business” – where value is derived from the state of change you invoke in a consumer – will flourish. This is all starting to sound like some convoluted Ponzi scheme, but instead of a pyramid of sales with limited resources underpinning it all, we actually have the opposite issue. We have more basic resources underpinning us than we’ll ever need, but in order to offer continual growth to the economy, we keep building false problems, solutions, and constructs overtop ourselves. In reality, beyond our basic commodities and needs, the only economy that should matter is the happiness economy. Naturally, some of us will derive happiness from aspects of products, services, experiences, or notifications. However, in business today, it is all too easy to lose sight of this and contribute more to the problem – in this case, mental health – than the solution: happiness.
As an organization, this means challenging and expecting better of yourself. It means that if trends continue to shift power and awareness toward users, the only way for you to survive in the long term is to do more societal good than harm. Your notifications, screens, and interactions shouldn’t burden us; they should make us smile. And for “consumers,” “users,” or just “people,” this means that you need to take back the power you’ve given up and set your own rules and limits. You need to realize that this is all an experiment, not hard fact or certainty, and you can choose to experiment on your own as well. You get to decide what is harmful, what is unacceptable, and what makes you happy. Recently, I’ve been trying a new experiment of my own. I’m fashioning my own tinfoil hat and playing around with the idea of what it looks like to revert to the analog ways of my past. I’m trying to rediscover my happiness economy. I had my fun with social media, I dove into the deep end of the digital world, and I tried living as a modern-day cyborg. Now I want to see what it’s like to live more analog again. I want to see if there are more smiles from a hug than a “like.” I want to see if I’m happier with fewer, deeper interactions, rather than more superficial ones. Will I miss certain information, discussions, and even people by not being more engaged in the online world? Probably. However, will I also see so much more by regaining my focus, presence, and energy in the things I choose to engage with completely? We’ll see. //// Shane Saunderson is VP, IC/ things at Idea Couture.
// Beyond our basic commodities and needs, the only economy that should matter is the happiness economy. //
By Kurtis Chen
So far, the unflinching wave of job automation across almost every sector seems to have left creatives untouched. While telemarketers, librarians, and bank tellers seem like obvious fodder for the automation machine, artists, designers, filmmakers, and other creatives have been granted some sort of amnesty. This is also reflected in the findings of a widely cited study published by Oxford University, which gave the former group of jobs over a 98% chance of being automated, while giving artists a safe 4.2%. Startup incubator Nesta further shared this sentiment, claiming in a report that “musicians, architects, and artists emerge as those with some of the highest probabilities of being resistant to automization.” Understandably, people are wary of whirring silicon boxes that are able to produce work, despite lacking the same life experience as humans. Meanwhile, current attempts at automating creative work – like Wolfram’s music generator or Google’s AI poet – are unconvincing.
It’s easy to see the logic behind the exemption of artists’ work from automation. The better a job can be quantified and systematized – and thereby broken down into the semantics of computer programming functions – the more easily that job can be written into code. The capabilities of any job automator are essentially beholden to the forethought of the human who designed the program. In this sense, creativity – which requires a high degree of ingenuity and intuition – is decidedly challenging. But the onset of unsupervised machine learning negates this reliance on human cognition, eliminating the need for human interference or even environmental feedback. It enables computers to recognize patterns on their own, allowing them to tackle dynamic problems that are seemingly impossible to codify. IBM’s Watson, as incredible as it is, is only the beginning of this. That being said, can art actually be codified? Perhaps not in the way in which a librarian’s job can, where inputs and outputs can be logically computed into a
measurable success. But maybe, with unsupervised AI, art can be codified in a different way, where the computer is free to insert its own subjectivity in between inputs and outputs – or perhaps without inputs at all. This raises the age-old question: What is “art”? In the mid-20th century, philosopher R.G. Collingwood said, “By creating for ourselves an imaginary experience or activity, we express our emotions; and this is what we call art.” Later, philosopher George Dickie defined art as “an artifact which has had conferred upon it the status of candidate for appreciation by the art world.” At the center of both definitions is the idea of intent. The intent of an artist to create a work may itself be sufficient to define that work as art; without intent, is there really any art at all? Most would agree that a sloppy painter’s accidental spills on the floor don’t count as art, even if the end result is vaguely reminiscent of a Pollock. Art is, after all, inextricably tied to its creator. So, can a machine have this kind of intent? Can it “think” the way humans do?
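The kind of unsupervised pattern recognition mentioned above can be illustrated with a minimal sketch – a toy example assuming nothing about any particular system named in this article. K-means clustering discovers groupings in unlabeled data without ever being told what the groups mean:

```python
# A toy illustration of unsupervised pattern recognition: k-means
# clustering finds two groups in unlabeled 1-D data on its own.
import random

def kmeans_1d(points, k=2, iterations=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # start from k random points
    for _ in range(iterations):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Update step: each center moves to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two obvious "patterns" around 1.0 and 10.0 -- never labeled as such.
data = [0.9, 1.1, 1.0, 9.8, 10.2, 10.0]
print(kmeans_1d(data))  # two centers, near 1.0 and 10.0
```

The algorithm is handed no categories and no feedback; the structure it reports emerges entirely from the data – a small-scale version of the pattern recognition the passage describes.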
photo: Eric Terrade
A Portrait of the Machine as an Artist
Symbols vs. Semantics
There are two prominent ideas surrounding cognition: symbol manipulation and semantic neural networks. Symbol manipulation posits that our brain is no more than a sophisticated symbol manipulator, where each neuron takes in the inputs of other neurons (symbols) and reacts according to a predetermined fashion or logic (manipulation), forming our thoughts and actions in the process. The theory of semantic neural networks, on the other hand, posits that our cognition functions by attaching meaning to inputs (semantics) rather than through meaningless symbols, allowing neurons to react to the metadata attached to inputs rather than reacting to the shape of the input itself. So, does our brain function as a symbol manipulator, like a computer, or does it supersede symbols by applying meanings to otherwise objective inputs? If the mind does function semantically, could this spell disaster for machine creativity, since transistors are ultimately nothing more than symbol manipulators? At a greater level, beyond lone transistors, we are entering an era where semantic neural networks are being built out of silicon. Computers have already been connected as artificial neural networks, allowing them to function together in a fashion that is inching closer to human biology, the synapses between neurons replaced with network communications. If we take a methodical reductionist approach to both our biological brains and computers, we can see that, at some point, our neurons also function as symbol manipulators, taking molecules (symbols) from the environment to create an output via some chemical reaction (manipulation). Whether an artistic entity is biological or electronic is, therefore, causally irrelevant. In short, as computers replicate the mechanics of our brains, we can equally expect them to replicate our human creativity.

The Question of Consciousness
If this still feels counterintuitive – a machine making art – your last stand might be an argument about consciousness. Sure, you might say, a machine might think the way we do, but it definitely isn’t aware of itself. But before you question the sentience of a machine, you should be questioning your own self-awareness – or lack thereof. A study by Washington University found that the brains of subjects under continuous MRI scanning “began to encode the outcome of [an] upcoming decision” up to four seconds before actual awareness of the decision. This indicates that the choices we make, or that we think we make, are not made by us at all – at least not in the sense that we made the decision willingly, with intent. Self-awareness, according to Princeton neuroscientist Michael Graziano, is merely a “slightly incorrect,” “cartoonish” representation of the physical world: It is an evolutionary by-product. It would seem that we have as little agency in our decision making as a diatom; instead, we react instinctively to our increasingly complex environment by following a protectionist policy. This effectively renders consciousness and intent null in any discourse about AI, let alone machine art.
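The symbol-manipulation view discussed above can be made concrete with a hedged sketch (all names and numbers here are illustrative, not drawn from any system in the article): a toy threshold neuron combines numeric inputs (symbols) by fixed rules (manipulation), and a small network of such units computes something no single unit can.

```python
# A toy artificial neuron: pure symbol manipulation. It weighs numeric
# inputs and fires (outputs 1) if the weighted sum clears a threshold.
# Nothing in the arithmetic "knows" what the symbols mean.
def neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Networked together, such units compute functions no single unit can:
# here, XOR built from three threshold neurons (a classic construction).
def xor(a, b):
    h1 = neuron([a, b], [1, 1], 1)       # fires if a OR b
    h2 = neuron([a, b], [1, 1], 2)       # fires if a AND b
    return neuron([h1, h2], [1, -1], 1)  # OR but not AND

print([xor(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

XOR is the classic function a single threshold unit cannot compute; wiring three together solves it, which is the sense in which networks of meaningless symbol manipulators can exceed the capabilities of their parts.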
// We were once confident in the exclusiveness of our emotions and ability to use tools, believing these to belong to the domain of humanity. But history has proven otherwise. //
The Place of Humans in Art We’ve always believed that we carry some sort of human uniqueness. We were once confident in the exclusiveness of our emotions and ability to use tools, believing these to belong to the domain of humanity. But history has proven otherwise, and in the future, unsupervised machine learning, neural networks, and access to big data will forever change the landscape of creativity. Unmatched social awareness, infinitely greater iterative capabilities, and the ability to string together novel ideas could allow the machine to reign supreme in creativity. However, despite all this, humans will never lose their place in art. Unless machines begin making art for other machines, human activity will continue to shape it. And our limited abilities will certainly remain a facet of art, an artifact of pre-machine creativity. However novel and creative the work of a machine may become, it will not always produce better art. Art always encompasses context, a context which machines themselves are inescapably woven into. //// Kurtis Chen is a filmmaker at Idea Couture.
If You Build It, They Will Care
Designing Empathetic Spaces and Objects for the Future of Caregiving
“There is no doubt whatever about the influence of architecture and structure upon human character and action. We make our buildings, and afterwards they make us. They regulate the course of our lives.” —Winston Churchill, addressing the English Architectural Association, 1924
By Victoria Scrubb and Dr. Maya Shapiro
In his book, The Architecture of Happiness, Alain de Botton addresses the skepticism with which people have historically approached the investment of time and money in design. “Beautiful architecture,” he writes, possesses “none of the unambiguous advantages of a vaccine or a bowl of rice.” And yet, as de Botton and others have noted more recently, the objects that people use and the spaces in which they use them are central to how human behaviors develop and take root – particularly in the area of health and wellness. After a spinal cord infection left American architect and designer Michael Graves paralyzed from the chest down, he became especially attuned to how the design of rooms, furniture, and even linens could mean the difference between dignity and shame, hygiene and infection, and health and illness. Disturbed by his own slow recovery, Graves began observing the objects and behaviors that surrounded him in rehab facilities. Seeing, for example, that nurses always grabbed food tray tables from under the tray’s surface while cleaning staff only disinfected the tops, Graves understood that something as simple as adding a few strategically placed handles to the tray could limit the spread of harmful bacteria and have a major impact on patient recovery.
“It doesn’t make you feel very good, when the stuff around you says ‘I’m sick.’” —Michael Graves, architect, designer, and long-term rehab patient
Care in Crisis
While Graves called the patient room the “last frontier of healthcare design,” his work demonstrates that it is the conceptual and social underpinnings of that room – a lack of human understanding in designing for long-term care – that comprise the broader and more elusive frontier. Indeed, nowhere is thoughtful and empathetic design needed more than in eldercare and care for people with chronic conditions, two segments of the healthcare industry that are persistently misunderstood, even as they are experiencing massive growth. By now, the statistics are familiar. They project a near future in which more people will need care and fewer people will be able to provide it. In the US alone, the ratio of caregivers to care recipients will change from 7:1 in 2010 to 3:1 by 2050, according to the AARP Public Policy Institute. Care recipients will also be younger, have debilitating conditions for longer periods of time, develop more comorbidities as they age, and have more specific needs and wants. Nursing homes, hospitals, and assisted-living facilities are ill prepared for the challenges ahead. Many still resemble a factory floor – with physical and organizational infrastructures that promote Ford-like production and efficiency – and ignore the holistic needs of patients, families, and paid caregivers. As we look to this near future, we must ask ourselves several questions: What can architecture and design do both for the processes of care and for the people involved in those processes? How might care spaces be configured to build relationships, encourage wellness, and promote individual and social health? How can design make people empathize more, communicate more effectively, and ultimately take better care of themselves and one another?
Designing for Interaction: Facilitating Connection and Compassion Beyond the Patient Room
Designing for Collaboration: Balancing the Professional and Personal Needs for All Care Participants
Designing for Endurance: Long-Term Disability and the Slow-Care Movement
A guiding question for design is that of what the space must do, not just in terms of functionality, but in terms of what behavior, activities, and attitudes the space must facilitate and promote. Experts in biophilic design and neuroscience for the built environment have made the case that design must inspire particular states of mind. Through the interventions of these experts, color and wall art are used in patient rooms to promote relaxation, connection to nature, and other emotional and psychological states. And yet, care is not just about the individual and their state of mind. It is about social interactions – both planned and spontaneous – as they occur among colleagues, friends, family members, and strangers. Care is a social act that takes place in unexpected ways between various people and in multiple spaces throughout care facilities; as such, designing for care must look beyond the individual to facilitate and encourage interaction, relationships, and engagement.
An additional question of design to consider is who the space or object needs to serve. The non-medical aspect of care is increasingly being addressed through patient-centric design that takes into consideration the holistic desires of patients and their visitors. Organized activities keep patients occupied and fulfilled beyond just their basic needs. And yet, care facilities are inhabited by numerous stakeholders who are fundamental to the care process, as they are constantly engaged in negotiation, communication, and decision making. As care is a collaborative act that requires the input and commitment of a range of paid and unpaid workers, designing for care must involve caring for everyone in the network of care delivery.
A less frequently asked question, both in design and in the healthcare industry, is how long care must last. Just as the slow-food movement has directed attention to the various ways in which food is made, an emerging slow-care movement encourages us to understand care as a set of processes that must be deliberate and sustained over long periods. And yet the concept of slow care has further to go, specifically in moving beyond the idea of care as a set of skills that are fixed and static. As more people come to need different kinds of care for longer stretches of time, spaces and objects of care will need to be designed for dynamic and flexible activities that adapt to changing needs.
/ What if hallways, stairwells, and other “transitional spaces” were designed to foster interaction, instead of simply accommodating the flow of people who are moving from one place to another? / How might the configuration of common spaces and their furniture encourage patients and their family members to engage as much with the people they came to see as with those who are in the space visiting others?
/ What if an increasing number of family caregivers who have grown up as digital natives could use their personal devices and tech solutions to communicate with nurses, physicians, and therapists? / What if break rooms were enhanced to allow orderlies, nurses, or staff to communicate seamlessly with their colleagues, or to connect remotely to their own families as they spend long hours caring for the family members of others?
// Designing for care must look beyond the individual to facilitate and encourage interaction, relationships, and engagement. //
/ What if care spaces were configured in modular ways that could change over time to accommodate different stages of recovery or decline, as well as to suit the shifting relationships between patients and their families? / How might furniture be built to change, and then change back, in order to suit the requirements of specific times of the day, month, or year?

If buildings where care is provided and the objects within them are able to continuously “make us” – and often in ways that we do not expect – then we have no choice but to ask big questions about the nature of care. What does it mean to design with and for care? How do we balance functionality, empathy, and inspiration in the creation of objects and space? And how can we assure that empathetic design gets under the surface and serves the latent desires, emerging needs, and everyday utility that people seek? As Graves found, it is through observation and real-time engagement that we can truly understand how to design for both utility and inspiration. Going forward, we must make those observations carefully and with a critical eye to bridge the gap between what we know and understand now and what the future demands. //// Victoria Scrubb is a research analyst at Idea Couture. Dr. Maya Shapiro is a resident anthropologist at Idea Couture.
The Implications of Long-Term Value Shifts
By Dr. Andy Hines
Businesses have long used market research and consumer insight professionals to understand who buys their products. In this research, the primary unit of analysis has been the “consumer.” This is a sterilized and objectified notion well suited for quantitative analysis, but it is one that de-personalizes the actual human beings who seek the products and services of these businesses. Long-term shifts in individual values now suggest that this impersonal approach to research – and to business in general – is out of sync. Today, it’s all about the personal aspect. It is time to move on from the consumer concept.
Insight is a precious commodity, one that organizations seek to uncover by using a wide array of frameworks, tools, models, and formulas. In such models and frameworks, the input is the consumer: a sterilized, objectified notion of a human being who purchases units of a product. It seems that this feeding-consumers-into-algorithms approach is not producing the intended output – that is, the insights organizations need. Formulaic approaches work better with the existing and known, but they come up short when it comes to knowing where the future is headed. A fundamental change is coming. This change lies in the way the unit of “the consumer” is used as the basis for analysis in consumer research and insights: Put simply, an increasing number of consumers are rebelling against being labeled as “consumers.” This rebellion does not have its roots in some notion of extreme simplicity, in which consumers want to walk around in paper sacks or recycle their underwear. They still want to buy things – but they are being much more thoughtful and discriminating about their purchases.
“I Am Not a Consumer.”
41
These anti-consumers are unhappy with the relationship between buyer and seller; they want to reconfigure or rebalance it. They don’t want things they don’t need and are asking hard questions about what they actually do need. They don’t want people spinning and hard-selling them. They see a difference between transactions and relationships, and they want authentic connections and their viewpoints to be acknowledged. They want to be thought of as human beings rather than as entries on a spreadsheet. The beliefs of anti-consumers are, in part, a reaction against the materialism they embraced earlier in their lives. These individuals are not only abandoning this materialism: They are developing a distaste for it. They now have a low regard for consuming and find the notion of a consumer economy to be wrong-headed, seeing it instead as a confusion of means and ends. The data is very clear: Consuming or accumulating more goods and possessions does nothing to increase one’s happiness. This knowledge has fueled a search among anti-consumers for “what really matters,” thus leading them to embark on a journey toward postmodern values. The label “consumer” defines people in terms of their usefulness to the organization and tends to neglect their larger sense of self. Some argue that the focus is on what makes a consumer buy, without regard for anything else. Some insight groups have caught on to taking a holistic view of those they serve, but they may still be trapped in the language and paradigm of the past. The emergence of the anti-consumer is part of a larger shift that I (and many others) believe has its roots way back in the Woodstock counterculture of the late 1960s and 1970s. The argument here is that long-simmering shifts in individual values are at work. It’s a bit challenging to detect, since there is consistency amidst the change. The “mass” market (in its broadest definition) is slow to change.
Innovation, as we know, comes from the edges, so it’s easy to miss – and to dismiss. The pattern that has been uncovered in research drawing upon nearly two dozen systems – most heavily on the outstanding work of the World Values Survey and Spiral Dynamics – is that there are four types of values, and that there has been a consistent “developmental” pattern in their adoption over time. The four value types are summarized in this graphic:
4 Value Types
An individual view about what is most important in life that, in turn, guides decision-making and behavior.

01 Traditional – Follow the Rules: Fulfilling one’s predetermined role, with an emphasis on there being a “right” way to do things.
02 Modern – Achieve: Driven by growth and progress and the ability to improve one’s social and economic status – and show it.
03 Postmodern – What’s It All Mean?: A shift away from material concerns to a search for meaning, connection, and greater participation.
04 Integral – Make a Difference: The leading edge of values change, emphasizing practical and functional approaches that best fit particular situations.
The pattern is a slow, steady shift from left to right, away from the traditional and modern segments of the “mass market” and toward the postmodern and integral (on the periphery). In short, the peripheral is moving toward becoming mainstream, though it accounts for only about one-third of US consumers today (a similar percentage globally in affluent nations, the highest percentage being in Northern Europe and Scandinavia). The anti-consumer phenomenon is not a fad, but part of a longer-term paradigm shift. It was captured and popularized several years ago in The Cluetrain Manifesto, which made the point that markets are conversations. Anti-consumers expect to engage in conversations, and they want prospective partners to invest time in getting to know who they are. These were the same consumers at the core of the no-branding movement. They will often buy generic brands or private label goods where available, not necessarily to save money but to make a statement against the notion and value of consumerism. A striking finding of my 2011 Consumershift book, which summarized these value shifts, was that the notion of a “consumer” was becoming obsolete. As a futurist, it can be challenging to know when you should “Do in Rome as the Romans do” (in this case, use the accepted vocabulary), or to strike out for new territory (or new vocabulary). Many times, futurists are in the new territory before their audience is, and while this provides an “I told you so” opportunity in hindsight, it’s not terribly helpful to clients who live in the present. Thus, we often try to explain the new territory in terms of the old and familiar. “I am not a consumer” is about understanding and treating customers as people, not just as statistics that make purchases. They can be quite thoughtful consumers who consider purchases carefully and understand how their actions can have positive repercussions, such as encouraging good behavior from the brands they’re supporting. 
For example, those with a strong environmental ethic are likely to be willing to spend more on green or fair trade products, thus supporting the values and causes that mean most to them. This finding is especially significant as we move into an era of advanced analytics. It will be tempting, once again, to think of people as statistics, data points in aggregations, or components of algorithms. This is not to suggest that we abandon such tools, but rather that we see them as a means to the end of achieving a greater understanding of customers – indeed, of people. In this era of super-advanced technology, it is important to keep the human element front and center. //// Dr. Andy Hines is program coordinator and assistant professor at the University of Houston’s graduate program in Foresight.
42
defining Design Research As a new domain of inquiry relative to other scientific spheres, design research has evolved into a loose kit of parts consisting of various perspectives, approaches, and methodologies. This amoeba-like assemblage, while in alignment with paradigm shifts and scientific revolution, can lead to confusion regarding what design research is, and what topics and questions it should investigate. Transdisciplinary collisions between design research, political science, foresight, and branches of engineering and psychology have created a problem. They have eroded the once simple dichotomy between casual, observational inquiry (which explores people acting in the real world to address the “fuzzy front-end”) and rigorous human factors (which test and assess the usability of artifacts). While such collisions have birthed new practices such as constructive, speculative, and agonistic design research, the nucleus holding these limbs together is the focus on design, or wicked problems. Such challenges are characterized by high ambiguity, complex relationships, and the lack of a singular, correct answer. The figure below illustrates nine current practices of design research, many of which researchers mix and match in response to wickedness.
1 Agonistic
2 Empathic
3 Participatory
4 Anticipatory
5 Human Factors/Usability
6 Speculative
7 Constructive
8 Motivational
9 User/Context
In addition to these newer design practices, the breadth and depth of design researchers’ modus operandi have also grown over the past 20 years, reintroducing the long-cherished practices of making and crafting to the research process. In this figure, we introduce these practices, as well as the primary contributing disciplines. The two columns on the left demonstrate more established approaches to design research. The column on the right (research through design) is an emerging approach that has gained both recognition and credibility over the past decade.
Established Approaches

Research for Design
Application of social and natural science perspectives and methods to explore and assess human relationships to the world.
Representative Disciplines
/ Anthropology / Complexity Science / Ergonomy / PAC Psychology / Social Psychology

Enquiry by Design
Application of design craft to explore and assess human relationships to the world.
Representative Disciplines
/ Architecture / Graphic Design / Industrial Design / Interaction Design

Emerging Approach

Research Through Design
Application of design science perspectives and methods, often within co-creative settings, to explore and assess human relationships to the world.
Representative Disciplines
/ Action Research / UX Research
While countless publications have documented research for design and enquiry by design, research through design is a newer and relatively underused approach, and has rarely been discussed within the context of innovation. The following collection of articles addresses the emerging role of research through design in generating novel outcomes that tackle design problems and inform innovation. Lena Blackstock, a strategist and design anthropologist, leads this collection with an exploration of how future design researchers and artificially intelligent agents can collaborate – like a modern-day, centaur-like hybrid – to fuse emotional intelligence and machine intelligence to rapidly construct and assess speculative artifacts. Following Blackstock’s rumination, Dr. Scott Pobiner examines design research’s role in people’s relationships with data and cybersecurity. Finally, Dr. Ryan Brotman articulates how participatory prototyping, as a fundamental method of research through design, can be used to reveal the political landscapes of customer experiences to support innovations that address the goals of customers and stakeholders in the enterprise. We hope that, with these pieces, we can help propel design research from a novel practice to one that is tried, true, and trusted. ////
The Modern-Day Centaur
Tools and Technology: An Evolution From Passive to Active
A Contemporary Mythology of Man-Machine Symbiosis
By Lena Blackstock
The motif of the centaur, the half-human, half-horse, originates from Greek mythology and represents a relationship that has evolved throughout history, from food source, to transportation, and beyond. We have found cave paintings of horses dating back over 17,000 years, but it wasn’t until their domestication (estimated to be about 5,000 years ago) that the human-horse relationship led to major innovations in communication, mobility, and trade. And, just like the image of a human-horse creature may have originated from infantry viewing enemies on horseback in the distance, the merging of human and machine into a hybrid creature is coming more clearly into view. One aspect of this contemporary centaur model for human-computer interaction is the coupling of emotional intelligence (EI) with big data analytics. Researchers investigating this relationship propose that such a coupling is advantageous because emotional intelligence represents our ability to empathize, allowing for the management of interpersonal communication. We use these skills to overcome obstacles and inspire others to work toward collective goals so that, even in human-machine partnerships, the human will remain the action-inspiring force. Through a centaur-like nervous system, the machine collects data on usage and reveals patterns, trends, and associations as they relate to behavior and interactions. The human remains the strategic driver, understanding lived human experiences in a connected world by applying design research approaches.
Tools have evolved from rocks and stone to tablets, algorithms, and artificial intelligence. As Bryan Johnson, founder and chief executive officer of neuroprosthesis developer Kernel, explains, “we are moving from using our tools as passive extensions of ourselves, to working with them as active partners. An axe or a hammer is a passive extension of a hand, but a drone forms a distributed intelligence along with its operator, and is closer to a dog or horse than a device.” Between the mythological human-horse centaur and the modern human-machine centaur, it seems we are coming full circle. Technology is becoming more “horse-like” again, working as an active extension of the human rather than a passive tool. J.C.R. Licklider, an MIT professor and key figure in defining modern computing and the internet, explored the concept of human-machine partnerships in his research on man-computer symbiosis in the 1960s. According to Licklider, “human brains and computing machines will be coupled together very tightly, and… the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.” In Greek mythology, most centaurs were governed by their bestial halves. Their behavior was wild, lustful, and small amounts of wine drove them even wilder. One of the best-known centaurs was named Chiron, who, unlike most of his brethren, was known for his civility and wisdom, and renowned for his medicinal knowledge and ability to teach music and hunting – presumably influenced by his human side. The question to consider is: Which half will guide us in our quest for human-machine symbiosis in our modern centaur story?
Design Research: An Active Tool for Human Futures
Design research is an approach we use to gain an understanding of relationships and interactions between people, technology, our environment, and the artifacts and actors within it. Even as centaur-like collaboration continues to evolve, a challenge remains: Once our products and services are released into the world, we often lack immediate knowledge of the actual human experience with them. We either collect quantitative data, which allows for quick but basic product optimization, or we collect qualitative data, which takes longer to address and apply. Although design research is an iterative approach, one of the primary constraints is the frequency and relevancy of the iterations to improve experiences, products, and services. If we take the idea of a human-machine centaur designing new products and services collaboratively, and embed those designs into an IoT context of connected devices and things, we can create an iterative feedback loop. We can equip artifacts with a nervous system and create the opportunity to drastically increase the frequency of design research iterations and improve the relevancy of design research data. Johnson envisions connected products “reviewing themselves through data, saying things like, ‘I was only used once before breaking,’ or, ‘I’ve been used for 354 days straight and had one minor malfunction.’” The design researcher, as the human half of the modern centaur, is critical in adding an understanding of the subjective human experience with those devices and services beyond obvious performance improvements. Design research is undergoing an evolution of its own, moving the focus from simple user-centricity to co-designing and participative approaches; it is shifting from being a passive tool to being an active one, and is leading us to new paths of collaborative creativity.
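Johnson’s notion of products “reviewing themselves through data” can be sketched as a minimal usage log. This is an illustration only: the class and method names below are invented for the example, not drawn from any real telemetry SDK.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class InstrumentedProduct:
    """Toy sketch of a connected product that 'reviews itself through data'."""
    name: str
    events: list = field(default_factory=list)  # (day, malfunction) pairs

    def log_use(self, day: date, malfunction: bool = False) -> None:
        self.events.append((day, malfunction))

    def self_review(self) -> str:
        days_used = len({d for d, _ in self.events})  # distinct days of use
        faults = sum(1 for _, m in self.events if m)  # malfunction count
        return f"{self.name}: used on {days_used} day(s), {faults} malfunction(s)"
```

A design researcher, as the human half of the centaur, would pair such quantitative self-reports with qualitative inquiry into how the product was actually experienced.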
With that, design research represents a continuous evolution of different approaches and application areas, including: participatory design, through collaboration between the end user and a professional designer; speculative design research, co-created by a design researcher and a foresight practitioner; and adversarial design, where a team of professional designers and political scientists is involved. However, to date, these various approaches operate under the assumption that design teams are comprised of only human participants. Yet, many of the prevailing theories influencing design practice, such as actor-network theory and assemblage theory, have begun to question the assumption of a human-only outlook. Perhaps the model of the centaur nervous system, functioning as an iterative feedback loop to understand and improve the lived experience of products and services, can lead us to Licklider’s vision of human-machine symbiosis.
A Partnership for the Modern Centaur
1997 was a milestone for the modern-day human-machine centaur. That year, for the first time, a machine – IBM’s Deep Blue – beat then world chess champion Garry Kasparov. Assuming that human players augmented by AI would be even better than the computer by itself, Kasparov developed what is now called freestyle chess (sometimes called centaur chess). Confirming his assumption, Intagrand – a hybrid team of humans and several different chess programs – is the reigning freestyle chess champion today. As AI innovations continue, the concept of centaur partnerships can go beyond the world of chess and inform other forms of modern collaboration. But just what does a centaur model mean for future relationships and interactions between people, technology, our environment, and the artifacts and actors in our connected world?
Today’s Centaur: The Robo-Advisor
Finance has long utilized technology supported by human counterparts. For years, options traders have used algorithms to time when they enter and exit a position. Big banks soon followed with buy/sell sites, and now large institutions like Fidelity Investments are offering customers a robo-advisor solution, Fidelity Go. While competing services use algorithms to make decisions about investments with as little human input as possible, Fidelity Go combines its robo-advisor technology with human experts to make personalized investment suggestions, putting the human back in the strategic driver’s seat.
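The hybrid pattern described here, in which the algorithm proposes and the human disposes, can be sketched as a drift check whose output is a suggestion queue for a human advisor to review. Fidelity Go’s actual logic is proprietary, so the function, thresholds, and asset names below are assumptions for illustration.

```python
def suggest_rebalance(holdings, targets, drift_tolerance=0.05):
    """Flag asset classes whose portfolio weight drifts past tolerance.

    Returns suggestions for a human advisor to review rather than trades
    to execute automatically: the 'centaur' division of labor.
    """
    total = sum(holdings.values())
    suggestions = []
    for asset, target in targets.items():
        weight = holdings.get(asset, 0) / total  # current share of portfolio
        drift = weight - target                  # deviation from target mix
        if abs(drift) > drift_tolerance:
            action = "trim" if drift > 0 else "add to"
            suggestions.append((asset, action, round(drift, 3)))
    return suggestions

# A 70/30 portfolio measured against a 60/40 target yields two suggestions:
# [('stocks', 'trim', 0.1), ('bonds', 'add to', -0.1)]
```

The design choice worth noting is that the function never touches the market; the human remains the strategic driver.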
The Nervous System: An Iterative Feedback Loop of AI and EI
Maurice Conti, director of strategic innovation at Autodesk and leader of their Applied Research Lab, describes the need for an AI-powered nervous system for the continuous on-the-go evaluation and evolution of products and services: “Our nervous system, the human nervous system, tells us everything that’s going on around us. But the nervous system of the things we make is rudimentary at best. For instance, a car doesn’t tell the city’s public works department that it just hit a pothole at the corner of Broadway and Morrison. A building doesn’t tell its designers whether or not the people inside like being there, and the toy manufacturer doesn’t know if a toy is actually being played with – how and where and whether or not it’s any fun.” Could we create a better nervous system by combining AI-driven sensor systems with design research approaches? Can we tap into our ability to empathize and interact with others and the AI technology to help us match millions of data points to choose the most strategic and effective next step in order to refine and adjust again and again?
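A minimal stand-in for the “nervous system” Conti describes is an anomaly detector over a sensor stream: a spike in, say, vertical acceleration becomes a reportable event like the pothole in his example. Production systems would use trained models and sensor fusion; the simple z-score filter below is only a sketch.

```python
def detect_events(samples, threshold=3.0):
    """Flag readings that deviate sharply from the stream's mean.

    A toy 'nervous system': indices returned here would become events
    the artifact reports back to its designers or operators.
    """
    mean = sum(samples) / len(samples)
    variance = sum((x - mean) ** 2 for x in samples) / len(samples)
    std = variance ** 0.5 or 1.0  # avoid dividing by zero on flat streams
    return [i for i, x in enumerate(samples) if abs(x - mean) / std > threshold]

# Twenty smooth readings followed by one jolt: only the jolt is flagged.
# detect_events([1.0] * 20 + [50.0]) -> [20]
```

The human half of the loop then supplies what the detector cannot: an interpretation of what the flagged event meant for the people involved.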
// Which half will guide us in our quest for human-machine symbiosis in our modern centaur story? //
What’s Next, Self-Iterating Centaur-Designed Cars?
Just as technological advances in transportation have taken us from cruise control to adaptive cruise control to driverless cars, the development of human-machine partnerships should evolve to give us the ability to understand the experiences and interactions of all actors and artifacts in a connected world. Hack Rod is an example of what the AI contribution to the nervous system could look like. Mike “Mouse” McCoy founded this digital industrial startup after running the award-winning entertainment studio Bandito Brothers. In 2016, Bandito Brothers collaborated with Conti at the Applied Research Lab at Autodesk, and equipped a traditional race car with dozens of sensors, creating four billion data points as a world-class driver drove it for a week. The car’s nervous system captured everything that was happening to the car, after which the team took this data and plugged it into DreamCatcher, their generative design AI. Generative design allows engineers and designers to input design goals and parameters (like materials and cost constraints) into the software to simulate design outcomes prior to production. Using cloud computing, the software then explores all possible configurations, generating design alternatives. As Conti explains, “What do you get when you give a design tool a nervous system? This is something that a human could never have designed. Except a human did design this, but it was a human that was augmented by a generative-design AI, a digital nervous system and robots that can actually fabricate something like this.” If the designer knew the true usage of their designs in the world, they could have designed a better experience. According to Conti, generative design is “a computational system that allows engineers and computers to co-create things they couldn’t have accomplished separately.” It’s true co-creation by a modern-day centaur. Or is it?
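Generative design as described above, goals and constraints in, candidate designs out, can be sketched as a naive random search. DreamCatcher’s real algorithms are far more sophisticated and not public, so the function, parameter names, and the toy “strength” model below are all illustrative assumptions.

```python
import random

def generative_search(score, constraints, space, n_candidates=2000, seed=7):
    """Naive generative-design loop: sample configurations from the
    parameter space, discard any that violate a constraint, and keep
    the survivor with the best score (the design goal)."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_candidates):
        cfg = {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
        if all(ok(cfg) for ok in constraints):
            if best is None or score(cfg) > score(best):
                best = cfg
    return best

# Hypothetical goal: a bracket using the least material while meeting a
# minimum strength, where strength is modeled crudely as thickness * width.
beam = generative_search(
    score=lambda c: -c["thickness"] * c["width"],  # less material is better
    constraints=[lambda c: c["thickness"] * c["width"] >= 20.0],
    space={"thickness": (1.0, 10.0), "width": (1.0, 10.0)},
)
```

Even this crude loop shows the division of labor: the engineer states the goal and constraints, and the machine explores the configuration space.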
This case study can offer a single data point in an emerging future on how design researchers and AI will innovate through the centaur model. In order to truly supplement each other’s skills, the centaur team of the future should look like this: a design researcher, which brings human-centricity, empathy, communication-skills, and
participatory approaches, and a machine counterpart that collects past and current data and presents possible future scenarios. The human now acts as a driving force during data collection, design, and even after launch, co-creating a new human future continuously and seamlessly. One way to picture centaur interaction and co-creation is to consider Bill Buxton’s approach to sketching. Buxton, a pioneer in the field of human-computer interaction, defines “sketching along a continuum, which starts with rough drawings at the ideation stage and increases in fidelity to the point of prototype, the usability stage.” Creating 2D and 3D models iteratively and collaboratively at the fuzzy front end of innovation, coupled with the continuous feedback loop the nervous system provides, will enable us to reduce the risk of product and service failures after a launch.
The Future Centaur

In our contemporary vision of the centaur, we see how this human-machine collaboration can greatly improve the lived experience for humans as we interact with new products, services, and technologies. Let’s think about the nervous system in terms of reacting to and co-creating the human lived experience as it is happening. The differentiating piece for the modern centaur is a better process to facilitate interaction and co-creation, as it develops human-centered products, services, and experiences, refining and evolving far beyond the official “release date.” This is how we ensure the focus on human futures – AI provides the data and analytical power, but EI remains the driving force. The human half trumps the “bestial half” – just like our mythical predecessor, Chiron. //// Lena Blackstock is a design strategist at Idea Couture.
A Window into the Customer’s Mind
By Dr. Scott Pobiner
Personal data, and the question of what to do with it, sits squarely in the middle of a widening gap between companies and their customers. The list of corporate data breaches is growing, with each new hack followed by a neatly written, and sufficiently abstract, announcement intended to mollify the public. In the end, these announcements fail to address the underlying concerns or root causes. In fact, each new breach merely heightens public concern and shines a spotlight on the data practices of the many organizations that store, share, and sell the data they collect. Today, when we use Netflix to stream a movie, browse Amazon for things to buy, and use Google to search for any number of things, we produce streams of data that these same companies use in a variety of ways. Based on the belief that the data is theirs to collect and use as they see fit, companies also use this data to launch new businesses. This sharing of information with service providers is so ingrained in our behavioral loops that the Terms and Conditions of Use are barely a speed bump on the way to the repurposing of that data. When one considers that numerous third-party companies handle, validate, and store this data, two things become clear: First, consumers are merely the first point of exposure in a long and unsecured custodial data pipeline. Second, the company you give your information to may not be the same one that exposes you to risk. The scale and scope of how data is used are expanding, and fast. Importantly, this rapid pace of change comes at the risk of exposing the consumer to fraud and making companies vulnerable to reputational damage or, even worse, to having their customers look for someone to take responsibility.
// Corporate leaders must begin to understand their customers’ perspectives on the utility and value of personal data. //
Corporations have many uses for the data they collect, but not all of them use the data internally. For example, a wide range of companies capitalize on the oceans of data they’ve collected by selling it to data brokers, who aggregate it with data from other businesses, creating the informatics equivalent of grade A ground chuck. By connecting small details, these data brokers can piece together a rich and complete picture of a person, regardless of whether this was their initial intention. This puts corporate leaders in a difficult position. On the one hand, their companies rely on the data they collect to function and grow and, at least for now, legally sell. On the other hand, public anxiety over the exposure of personal information grows with every breach, increasing the likelihood that customers will think twice before divulging information, perhaps cutting off the relatively free flow of personal data that these companies have come to depend on. As the opportunities to monetize the harvesting of customers’
data increase, companies’ practices will inevitably draw long-overdue scrutiny, and they may find themselves unprepared to respond. Companies today also risk drawing the ire of governments and trade commissions that want to rein in the trading of personal data and hold companies accountable. For example, after allegedly tracking the viewing and usage habits of 11 million television sets without owners’ permission, Vizio settled with the State of New Jersey and the US Federal Trade Commission for $2.2 million. But the value of the data gleaned from each television was almost certainly more than 20 cents, and, given that the $3 billion data broker industry thrives on this behavior, the case only highlights the ambiguous moral space in which we find ourselves. What is clear is that the perception of personal data as a corporate asset is beginning to change. Whereas a company might have once been able to simply assign blame to IT systems or partners, the use of customer data for profit now makes it harder to do so. Further, because user-generated passwords are notoriously weak, customers pose the biggest threat to the security of information that is actually their own, thus creating an asymmetrical risk that companies can’t manage without cooperation from these same customers. Here again companies are challenged: Do they risk their long-term reputation by allowing customers to use weak security protocols, or do they enforce stronger policies that might frustrate customers and even dissuade them from making a transaction? Neither approach is palatable, as either will sooner or later degrade the customer experience and damage brand perception. This tension between selling data and co-creating security with customers crystallizes a significant design challenge for companies: How can we improve the security of customer data in a way that is inclusive and fosters consumers’ trust in us?
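One side of that trade-off can be made concrete. The sketch below is a naive strength heuristic, not any company’s actual policy; the character-pool scoring and the 60-bit threshold are assumptions chosen only to show how a single policy knob trades account security against customer friction.

```python
import math
import string

def estimate_entropy_bits(password: str) -> float:
    """Naive entropy estimate: length * log2(size of the character pool used)."""
    pool = 0
    if any(c.islower() for c in password):
        pool += 26
    if any(c.isupper() for c in password):
        pool += 26
    if any(c.isdigit() for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0

def meets_policy(password: str, min_bits: float = 60.0) -> bool:
    """Raising min_bits hardens accounts but adds sign-up friction."""
    return estimate_entropy_bits(password) >= min_bits

print(meets_policy("password"))                        # a weak, common choice
print(meets_policy("c0rrect-Horse-battery-staple!"))   # long, mixed character classes
```

A stricter threshold rejects more of the weak passwords customers actually choose, which is precisely the frustration-versus-security dilemma the paragraph above describes.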
To head off the litany of potential reputational and operating problems that the collection of personal data may cause, corporate leaders must begin to understand their customers’ perspectives on the utility and value of personal data. And they must do so in a way that few companies have so far been willing to do, and that few consumers understand themselves. The problem is that data in and of itself is an abstract concept. Without context, data is intangible and invisible and, as such, it requires that a company and its customers have a shared understanding of its utility – and more importantly, an agreement on how it is to be used. Achieving a mutual understanding like this is difficult because it requires the creation of a dialogue about data with stakeholders that is ongoing and functions deep within an organization and at customer-facing interchanges, like points of sale and customer service. In doing so, the organization can build a relationship that is transparent and based on trust, without compromising its capacity to capitalize on the data lake that it’s likely been building for some time. This is where design research offers a unique angle on the problem. When we study a problem using design research techniques, we examine the way a complex system – like a security system – is assembled, and how that system affects the experience, outcome, and success of those involved. This allows us to understand things holistically, before identifying what needs to be redesigned and how. In the case of data, we’d examine the interactions between the customer and the security and enterprise systems of a company. We’d identify gaps in the process but also – crucially – how interactions between people affect the way data is perceived, and how it is handled between associates within a company, as well as customers and employees.
Three Design Research Techniques for Understanding the Customer
01 / Cognitive Mapping
02 / Motivation Matrix
03 / Stakeholder Map
Cognitive mapping is a common approach for gaining an understanding of how people make sense of complex situations. In the case of personal data handling, cognitive maps can be used to identify differences between how a customer service representative might deal with a customer’s concern about remembering their password, and how an IT employee might suggest a memorable, but weak, password. To ensure a customer’s account is secure, the security team might have guidelines recommending a long string of letters, numbers, and symbols. A simple cognitive mapping process would ask both employees to describe the creation of a password by a customer, how that password is used to secure information, and the consequences of losing it or having it stolen. Armed with that information, the design researcher would then create a diagram that would map out each explanation and bring both employees together to discuss the differences. Using the maps, the idea of “security” is made tangible, and embodied by the password. With both the IT person and the customer service representative present, the design researcher can begin to identify common concerns and interactions, or “touch points,” where the customer’s concern might be best addressed. In doing so, both the customer service representative and the IT employee become part of the process of identifying a solution. They are therefore both made stakeholders and become responsible for caring for the customer’s needs.
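The comparison step of that exercise has a natural data representation: each employee’s explanation becomes a small directed graph of concepts, and the researcher diffs the two graphs. The node and link labels below are invented stand-ins for what the IT employee and the service representative might actually draw.

```python
# Toy cognitive maps: each concept points to the concepts it is linked to.
it_map = {
    "password": {"length rule", "symbol rule"},
    "length rule": {"account security"},
    "symbol rule": {"account security"},
}
service_map = {
    "password": {"memorability"},
    "memorability": {"fewer support calls"},
}

def edges(cmap):
    """Flatten a concept map into a set of (source, target) links."""
    return {(src, dst) for src, targets in cmap.items() for dst in targets}

def compare(map_a, map_b):
    """Return links shared by both stakeholders and links unique to each."""
    a, b = edges(map_a), edges(map_b)
    return {"shared": a & b, "only_a": a - b, "only_b": b - a}

diff = compare(it_map, service_map)
print(f"{len(diff['shared'])} shared links, "
      f"{len(diff['only_a'])} unique to IT, "
      f"{len(diff['only_b'])} unique to customer service")
```

The empty overlap is the finding: the two employees reason about the same password with no shared concepts at all, which is exactly the kind of divergence the workshop discussion would surface.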
Another example of a design research technique that leads to a deeper understanding is the creation of a motivation matrix. Motivation matrices allow design researchers to see how different stakeholders interact by identifying their primary objectives with regard to a process. In doing so, they can identify the interdependencies of various actors within a system. If a company wants to begin addressing customer concerns about data sharing policies, for example, it can use a motivation matrix to explain how its internal processes and data partners work together to develop solutions that they otherwise could not. Ultimately, the company can use a motivation matrix as a starting point for initiating a dialogue about what “data sharing” is and how it results in a better experience for the customer. Or, alternatively, the matrix can be a starting point for reducing the sharing of customer data where it has a negative impact on the customer.
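In practice a motivation matrix is just a table: rows and columns are stakeholders, and each cell states what the row stakeholder contributes to the column stakeholder. The stakeholders and value statements below are hypothetical, but they show how empty cells flag relationships where no mutual value has been articulated.

```python
# Hypothetical motivation matrix: matrix[giver][receiver] describes the
# value the giver contributes to the receiver; None marks an unexamined link.
matrix = {
    "customer":    {"company": "usage data", "data broker": None},
    "company":     {"customer": "personalized service", "data broker": "raw data"},
    "data broker": {"customer": None, "company": "aggregated profiles"},
}

def gaps(matrix):
    """List stakeholder pairs with no articulated exchange of value."""
    return [(giver, receiver)
            for giver, row in matrix.items()
            for receiver, value in row.items()
            if value is None]

for giver, receiver in gaps(matrix):
    print(f"no stated value flows from {giver} to {receiver}")
```

Here both gaps involve the data broker and the customer, which is the dialogue the technique is meant to start: either articulate the value of that exchange, or reduce it.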
A final example of a design research technique that can foster an ongoing dialogue between a company and its customers is the stakeholder map. A stakeholder map depicts the key constituents in a system in a way that tells the design researcher about how various actors contribute to, and are affected by, a system. By using a stakeholder map in a workshop setting, participants can begin a dialogue about the various roles that people play in the system. The map could illuminate who has access to the data and why. By talking about the map with customers and employees, we can begin to understand biases and faulty perceptions about participants, and learn more about what stakeholders might think about their own role in creating a positive outcome.
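A stakeholder map can likewise be held in a simple structure that answers the workshop question "who has access to this data, and why?" The actors, roles, and access sets below are assumptions made up for illustration, not a real company’s access model.

```python
# Hypothetical stakeholder map: each actor's role and the data they can see.
stakeholders = {
    "customer":    {"role": "data subject", "access": {"customer profile"}},
    "service rep": {"role": "internal",     "access": {"customer profile", "contact history"}},
    "it admin":    {"role": "internal",     "access": {"customer profile", "contact history", "credentials"}},
    "data broker": {"role": "third party",  "access": {"aggregated profiles"}},
}

def who_can_see(data_item):
    """Return, sorted by name, every stakeholder with access to a data item."""
    return sorted(name for name, info in stakeholders.items()
                  if data_item in info["access"])

print(who_can_see("credentials"))        # narrow access
print(who_can_see("customer profile"))   # broad access
```

Walking a group through queries like these makes visible who actually touches the data, which is where the workshop conversation about biases and roles can begin.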
// Public anxiety over the exposure of personal information grows with every breach. //
Regardless of where a company stands in the debate about personal data and privacy, one thing is clear: Change is coming. Leaders need to recognize the important relationship between their customers’ values and the data they share – and they need to act fast on what they find out. Beyond educating customers on the importance of good security practices, companies must find ways to enfranchise customers and share the wealth, showing the customer real value in ways that give them a reason to believe that their data is not only valuable but valued. Design research techniques help to bring these ideas to light by providing frameworks for business leaders to foster a shared understanding of the complex and invisible ideas behind personal data. //// Dr. Scott Pobiner is co-head of design strategy at Idea Couture.
Political Prototypes Innovation Through Constructive Confrontation
By Dr. Ryan Brotman
Knowingly or not, customer experience (CX) innovation is political innovation. While the outcomes of CX innovation yield everything from tactical assets such as sales support scripts to strategic shifts in engagement channels, the underlying rationale for these outcomes can be reduced to the essence of politics – the act of distributing power and resources within a community. For CX, the community consists of enterprises, their human resources and strategic partners, and the people with whom an enterprise seeks to create reciprocal value. Executives, product developers, sales representatives, technical support staff, third-party vendors, and consumers – these are people who act with personal goals in mind. Many are also influenced by the goals of other individuals within their political ecosystem, and those goals do not always align with one another, generating conflicts that customers experience as pain. For example, the sales director of a telecommunications company with a home-security offering may incentivize their sales representatives to meet or
exceed quotas through commissions or bonuses. An overly aggressive sales target influences the sales representative to go for the quick sale. But because the focus is on turning leads into sales quickly, the representative does not prepare a plan that is customized to fit the customer’s specifications. This reduces the ability of the customer to make an informed purchase decision and results in a system that does not fit their needs. The customer does not discover this until after the contract is signed and the installation technician rings the doorbell to install the system. The technician, for their part, does not discover the misalignment until they arrive at the customer’s home. As a result, they have to contend with the sales-service gap while on site. The technician’s metric for success is based on an estimated installation time, which has been calculated by an operations manager. However, the estimate does not account for the time needed to reconfigure the system and process the change order. Therefore, when the technician’s reported installation time is twice the original estimate, the operations manager sees a failure to meet a performance benchmark. Meanwhile, the customer must purchase additional system components and host the technician at their home beyond the estimated time. Consequently, the customer begins to lose trust and wonders if the system is appropriate for their needs. These seemingly minor misalignments result in a poor employee and customer experience. The dissatisfaction is reflected in the company’s employee satisfaction, customer satisfaction, and net promoter scores, as well as in a low retention rate for both employees and customers.
Design Research: Maximizing Reciprocal Value

The above scenario exemplifies a common challenge in CX: How does the enterprise articulate a CX that is both innovative and maximizes reciprocal value for multiple stakeholders? A solution that satisfies multiple stakeholders’ goals is critical for creating sustainable CX innovation because it aligns the needs of both the enterprise and the customer to generate reciprocal value. In contrast, a solution that considers only one stakeholder’s goal can degrade the customer experience.
From Observation to Confrontation

Years of evolving service design and CX practices have led to a standard approach to observe, frame, and design. This process, which may vary in terminology, broadly follows a three-step pattern:
01 Observe
02 Frame
03 Design
Ethnographic researchers go out to observe and talk to CX stakeholders.
Innovation teams synthesize ethnographic findings into strategic assets that may include, but are not limited to, journey maps, experience blueprints, and whitespace opportunities.
In response to the framed assets, innovation teams ideate and assess solutions.
While this process is used frequently, it overlooks a critical step that occurs between observe and frame – namely the act of confrontation, which is where the political dimension inserts itself. In politics, tools of confrontation often include speeches, debates, manifestos, protests, and in cases of extreme dissension, violence. Through both the strategic and tactical implementation of such tools, formal and informal political entities spark a dialogue to further their agendas, which is intended to provoke responses from other actors in a particular space. Through these feedback loops, stakeholders either move closer to or further away from a consensus. The challenge with such political design is that, without provocation through confrontation, the nuances of political dynamics such as motivations, goals, and aspirations can remain hidden. For CX designers, this leads to a lack of design rationale that, in turn, results in poor design outcomes. Design researchers address this by creating platforms for constructive conflict that will generate data about issues that can be used to make better design decisions. Now is the right time to explore the use of participatory prototyping labs in political design research to create agonism – a state of prolonged conflict in which people assume a deep respect for each other’s concerns while maintaining their own beliefs to engage in constructive confrontation.
Participatory Prototyping and the Art of Controlled Dissension

Decades of observing design research have led to the redefinition of research participants as co-researchers. One of the most common ways of negotiating this co-creative approach to design research is participatory prototyping. Let’s broadly define a prototype as a “concrete representation of a part or all of an interactive system.” The “interactive system” refers to a group of people and/or technologies engaged with one another to create back-and-forth interactions of input and feedback. In this context, an interactive system could include anything from two people having a conversation, to a person using a chat bot on a website to troubleshoot a product issue, to a group of people playing a multiplayer online video game. Prototype representations can range from simple paper assets such as sticky-note concept descriptions, to more complex and static visualizations, including storyboards, future-experience blueprints, and form-factor reference designs. Traditionally, designers have been the stewards of such prototypes, but in participatory contexts, it is the role of design researchers to create inclusive environments where non-designers can co-create prototypes (often alongside professional designers) to explore design issues. The lab design for agonistic, participatory prototyping requires an understanding of five key questions:
01 Who are the actors within the political system?
02 What is the perceived current state of enfranchisement within the political system?
03 Why is this the perceived current state?
04 What are the topics of discussion that will challenge the status quo to reveal new information about the system?
05 What materials are appropriate for lab participants to prototype in order to sustain dialogue?
Participants are sampled from a variety of backgrounds in relation to the CX of interest and its intrinsic political issues. For example, a lab seeking to understand value-chain relationships between employees and customers within an insurance CX may include processors, underwriters, third-party sales agents, and customers. However, if the CX issue at hand was to understand the politics of policyholders, executors, and family members, then the lab’s sample population adjusts to this new space. Developing a perspective on the current state of enfranchisement among enterprises and customers is one of the most challenging activities for agonistic prototyping labs. At a minimum, it requires co-creative activities with enterprise stakeholders to map out processes and identify the actors associated with CX delivery. However, such cursory planning sessions only address the question of what is happening on paper, leaving the reality of the CX, and the political issues that characterize it, largely unknown. Moreover, establishing a rich, a priori understanding of the goals, motivations, and lived experiences of employees and customers across enterprise value chains can mean employing a number of social-science perspectives and practices. These include, but are not limited to, motivational psychology and personal-project indexing, as well as anthropology and organizational ethnography. The final question for agonistic, participatory prototyping is to determine whether the transformative powers of participatory design research succeed or fail. In agonistic conversations, the most valuable topics are the ones that provoke debate. Many such conversational triggers can be identified through mapping journeys and identifying touchpoints and pain points across both customer-facing and employee-facing interactions.
For participatory prototyping, however, there is the added criterion of provoking debate through meaning-making – the process of how people construe, understand, or make sense of life events and relationships. The reason for using meaning-making as a tool for dialogue is that it activates cognitive processes, including, but not limited to, the visual thinking, kinesthetic learning, and ambidextrous thinking that go unused in much of daily life. Triggering these cognitive modes is imperative because it changes participants’ states of mind, which improves the likelihood of uncovering novel data for innovation. Just as important as crafting topics of discussion that can incite provocative prototyping is the right choice of materials. This involves establishing a kit with parts that speak to both the lowest common denominator of craft activity, as a means of providing an entry point into prototyping, and a spectrum of other more challenging options to keep people interested. Continuing with the example of the telecommunications company, a primary topic of interest is the misalignment between the sales rep and the complexity of home-security system design, which leads to a loss of operating efficacy. An array of materials could be deployed to explore the issue, including a pen and
paper to write fictional letters that establish a narrative, or iconography-based paper tokens to describe a core set of actions. Consideration could also be given to capturing participants’ thoughts and feelings as they co-create CX prototype journeys, or even to Lego bricks that ask participants to build and explain fictional solutions in response to text prompts designed to explore political levers. All of these materials and mediums could be viable within the context of our definitions of prototypes and interactive systems needed for a participatory discussion. Choosing the correct array of mediums requires addressing two criteria: the degree of accessibility, and the degree of pre-production needed to reduce ambiguity in political discourse. The tools of CX are always evolving. The use of omnichannel communications to engage with customers, and the implementation of big data, deep learning, and artificial intelligence to predict and proactively respond to operational inefficiencies and impending breakdowns in employee and customer experiences, are merely the latest turn of technological innovation. Regardless of which technologies enterprises wield to drive value, centuries of human history point to the immutable presence of politics and its influence on interpersonal behavior. The question is, how will your enterprise strategically confront and leverage politics to create value? //// Dr. Ryan Brotman is co-head of CX, design research at Idea Couture.
For Religion to Flourish in Humanity’s Future, Rituals Need to Be Developed in Real Time By Yehezkel Lipinsky
When considering human futures, we are asked to explore the future of humanity without conflating that future with the technological progress humanity creates. Human futures reorients the usual narrative, taking the focus away from technology as the sole driver of change. Technology is usually understood as the collection of tools we use to help us achieve our goals. From weapons and axes, to sickles and wells, to mainframes and networks, humanity’s progress is usually a reflection of the technology we’ve created. Yet, not all technologies are constructed objects that exist separately from the self. Perhaps one of humanity’s most potent pieces of technology is the ritual. Rituals vary, ranging from the everyday acts we perform to acts that inspire holiness. They can be defined as any specific, repeated actions that hold significance. Rituals exist in life, and they are the bedrock of religions around the world. Yet, in this increasingly secular age, religion is having difficulty finding its footing. With religious practice declining, a re-engagement with ritual might allow religion to revitalize itself and find its future in an age of rapid change. Rather than seeing rituals as unchanging, religions should foster a constant practice of creating new rituals. This type of “maker” movement for religious practice – in which rituals would be devised frequently and in real time – could give meaning and perspective to immediate concerns, keeping religion relevant. With this approach, religion would stay fresh and brimming with active response to today’s concerns.
In this world of real-time ritual, synagogues, churches, mosques, and other spiritual communities would foster rituals with frequency, at appropriate intervals, to help followers build deeper connections to their faiths. These rituals would be crafted for that very moment, providing deeper meaning than a sermon on current events ever could. With this constant reflection and contemplation on the human experience, religion would stay relevant to its followers’ lives and provide the world with something that media, social banter, or data never could: belief. In everyday life, from a secular perspective, rituals are constantly being born and rewritten. Every year, thousands congregate in Times Square to watch the ball drop on New Year’s Eve. Many people run each morning, attributing a spiritual and physical reward to the act. Over time, even our smaller actions change and adapt – the ritualized action of hailing a cab, for example, is becoming antiquated, having been replaced by a few taps on an app. As society and technology change, rituals surrounding things like communal eating, bathroom visits, and driving will change with them. In the secular space, rituals can form without us noticing. We often take on new ones seamlessly, without much fanfare. Yet in religious contexts, rituals aren’t added or edited as easily. Most are inherited from generations past and from liturgy, and the creation of new rituals is either hardly done or sporadically attempted.
illustration: Erica Whyte
Real-Time Religion
Ritual as Design Medium
Anthropologist Barbara Myerhoff has posited that rituals work because we don’t fathom their origins. “Underlying all rituals is an ultimate danger… We may slip into the fatal perspective of recognizing culture as our construct, [which is] arbitrary, conventional, [and] invented by mortals.” As ritual scholar Catherine Bell sums up, “Ritual must simultaneously disguise its techniques and purposes and improvisations and mistakes. It must make its own invention invisible.” Yet, even religion itself is a designed act. During his speech at the Bloomberg Businessweek Design 2016 conference, Michael Rock – founding partner and creative director of 2x4 and a Yale professor – stated, “Design is a frame that gives life meaning.” For Rock, our understanding of the world itself is designed. Design is the tool we use to manifest our ideologies. An example can be found in Islam with the idea of Qiblah, the direction one should face when praying. The Qiblah is fixed in the direction of the Kaaba in Mecca. When decorating their homes, some Muslims will make sure to arrange the rooms in which they pray to have no images or mirrors that would come into view during prayer, as this could be distracting or create the opportunity to pray to an image. In this way, the metaphysical – in this case, the idea of a holy direction – is made manifest through design. Design can give holiness a physical reality, and rituals can make religion real for its followers – even as they pray to a God they cannot see. Rock believes that design’s purpose is to “create coherent worlds.” Religion is a system of designed acts that, when brought together, create a coherence about the world – and its design medium is the ritual. A changing approach to religion is timely. Society has changed, and religion has had a hard time adapting. 
A growing group in the US is being classified as religious “nones.” These individuals choose not to identify with a religious movement; indeed, 23% of adults in the US classify themselves as agnostic, atheist, or “nothing in particular,” and this group continues to grow. And yet, religious “nones” are a tricky group to pin down, with 70% of these individuals reporting some kind of belief in a universal spirit or God, and 37% defining themselves as “spiritual, but not religious.” Answering the call of these burgeoning subsets that yearn for relevant spirituality is a purpose that religions can serve, if designed thoughtfully. And they can do so by fostering new, timely rituals. When a synagogue, church, or mosque is current in its practice and in line with the minds of its followers, there’s a better chance that its followers might find meaning within it. However, it might be jarring to think of the world’s religions constantly inventing new kinds of rituals. Couldn’t that dilute what’s special about their unique practice? Might it displace the aura of holiness around time-honored traditions? To address these concerns, we can reframe rituals by dividing them into two categories. The first category comprises lasting rituals. These are the tent-pole rituals of religions today, like Judaism’s Sabbath and Islam’s Salah (five daily prayers). In the second category are liquid rituals, which are smaller and adaptable. They add to religious practice, but they aren’t linchpins of its foundation. Liquid rituals can be created when the time is right to guide followers’ and spiritual believers’ understanding of the world, and they can come to a close when they’ve served their purpose. These rituals don’t need to be grand, highly involved, or costly to be meaningful. The Kitchen, an inventive synagogue in San Francisco, for example, took an existing ritual – the tying of a red string around the wrist to ward off evil spirits – and adapted it.
They used its familiarity to create a new, timely practice. This new ritual involved wearing a green string on the wrist at the time of Passover, a holiday that commemorates the Jewish people’s path from slavery to freedom, and made it into a reminder of the holiday and of those living in slavery and affliction today. When religious significance is placed on something, no matter how small, it can inspire people to search our world for better answers. Small liquid rituals do not mess with what makes religion work – they add to it.
Connecting with Contemporary Congregants
Ritual as Retreat
In today’s climate, there’s great opportunity for ritual to be reinvigorated. We are entering an uneasy era. Truth is evading its once-simple definition, our own leaders sometimes seem to lack common morality or decency, and we often feel forced to choose between caring only for our own community and having compassion for others. Right now, in this era where opinions can appear like fact, religion may be seeing its most opportune moment. The practice of existing religious rituals – and the creation of new ones – that respond to this issue could foster something opinions never can: conviction.

Rituals are the everyday practice of our beliefs, and today, we need reminders of those beliefs. Our most important everyday practices give us the space to constantly reaffirm what we believe is just and right. Even liquid rituals, which change according to our evolving needs, can act as timeless affirmations of our personal values as well as humanity’s. People need rituals, as our most human form of technology, to be updated as frequently as our electronic devices. We need timely rituals, both religious and secular, that match our feelings, yearnings, and restlessness, and we need them to bring us meaning and communal engagement through practice.

Rituals unhinge us from the cacophony of the everyday and remind us of what really matters. One day, we might be able to look back and reflect on key moments within our lives by remembering the rituals we practiced during those times. We may even be able to slow life down as we fill its moments with practices that foster greater reflection, depth, and contemplation.

// In an age of noise, redesigned rituals may be the technology we need most. //

// Perhaps one of humanity’s most potent pieces of technology is the ritual. //
Yehezkel Lipinsky is a foresight and innovation analyst at Idea Couture.
A Mind of Their Own: How Do We Manage Bias in AI?

By Idris Mootee
Each one of us is biased, whether we admit it or not. Some people may be more or less biased than others, but regardless of the level, bias is especially evident when it comes to decision making. Biases are caused by individual experiences, intelligence, cognitive abilities, decision-making abilities, self-esteem, and general personality traits – and, often, by a combination of these. In general, people simply don’t know how biased they truly are. But what about machines? We are trying to create AI that thinks like a human. AI, which broadly includes machine-learning systems, predictive analytics, speech recognition, and neural networks, is part of this effort. People sometimes equate AI with automation and robotics as well; those fields are related, but they are not necessarily AI, so I will leave them aside for now. This space is advancing fast because of technological progress and incoming investment. Hype aside, I do think that AI, if applied properly and responsibly, can help create the next wave of growth in productivity and solve many of our human and business problems. However, it would not be beneficial for AI to think entirely like a human. We wouldn’t want to embed a human weakness into it in the hope that it can compensate for that weakness. But we do have to ask one critical question: even without following a human example, is AI still capable of bias? The answer is yes. Bias can be introduced to AI through neural network-based algorithms, through methods of training, or in the way a system learns to interpret human emotions. AI can equally develop bias toward colors, tones, shapes, patterns, or even faces and
photo: idris mootee
voices. I’ve seen experiments where AI responded better to certain tones and voices. It’s possible that we might not be able to make truly bias-neutral decisions after all, whether we are human or machine. With these assumptions in mind, we can imagine how dangerous it would be if someone introduced a bias, intentionally or unintentionally, into AI, and it suddenly started to discriminate against certain faces, voices, or behaviors. Even if this bias wasn’t introduced by design, there is still a strong likelihood that the AI will learn it at some point. After all, if it can learn what coffee and movies we like, it can begin to sort and define everything in our lives. This is where it starts to become problematic. But how exactly does machine learning work? AI systems are composed of algorithms that interact in a set framework. Interestingly, while we define the framework and the terms of the interaction, no one really knows how the system will interpret certain data sets at any particular time. The structure produces an interactive environment where we are not sure how it comes to its conclusions. It is this structure that is open to the introduction of bias. Let’s use neural networks as an example. These are built with layers and layers of interconnected neuron-like nodes, with many hidden layers between the input and the output. Each node performs one simple mathematical operation. But by collaborating (and with some training), the nodes can process previously unseen data and generate the right results based on past learning from the training data. Call it machine awakening, or “back propagation”: the network’s errors on labeled training examples are propagated backward through the middle layers, and the connections are adjusted until the output is an optimal match to the input. The back propagation algorithm was originally introduced in the 1970s, but its importance wasn’t fully appreciated until a well-known 1986 paper by David Rumelhart, Geoffrey Hinton, and Ronald Williams.
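That loop – run an example forward, compare the output to the label, push the error backward to adjust the weights – can be sketched in a few dozen lines of plain Python. The toy network below, its XOR training data, its learning rate, and all variable names are my own invention for illustration, not anything from the Rumelhart–Hinton–Williams paper, but the structure is a faithful miniature of back propagation:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A tiny 2-2-1 network trained on XOR: one hidden layer of neuron-like
# nodes, each performing a simple weighted sum plus a nonlinearity.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # hidden weights (+ bias)
w_o = [random.uniform(-1, 1) for _ in range(3)]                      # output weights (+ bias)

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
lr = 0.5  # learning rate

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, o

def train_epoch():
    total = 0.0
    for x, t in data:
        h, o = forward(x)
        total += (t - o) ** 2
        # Back propagation: the output error is pushed backward through
        # the hidden layer, and each weight is nudged in the direction
        # that reduces the error.
        d_o = (o - t) * o * (1 - o)
        d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(2)]
        for i in range(2):
            w_o[i] -= lr * d_o * h[i]
        w_o[2] -= lr * d_o
        for i in range(2):
            for j in range(2):
                w_h[i][j] -= lr * d_h[i] * x[j]
            w_h[i][2] -= lr * d_h[i]
    return total

first = train_epoch()
for _ in range(5000):
    last = train_epoch()
```

Run over a few thousand epochs, the squared error falls as the small weight adjustments accumulate – learning without any explicit rules.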
The paper, “Learning Representations by Back-Propagating Errors,” presents several neural networks in which back propagation works far faster than earlier approaches to learning, making it possible to use neural nets to solve problems that were previously insoluble. Today, the back propagation algorithm is the backbone of learning in neural networks. This is similar to how we process information internally – also known as thinking. AI is not entirely rules-based, but there are still evident patterns in most instances – until the system changes the rules and doesn’t tell you. I believe that the breakdown would still be the misinterpretation of certain languages or words. To sum it up, the generator extracts certain segments of text and scores them as coherent strings. The extractor then classifies those segments, seeks out the maximum scores, and, as a result, creates more accurate predictions. Let’s look at an example of how this works. Researchers used 1,100 reviews from various social media properties devoted to Porsche lovers. Some of these reviews or comments were annotated by humans, highlighting the correspondence between sentences and scores (from 1 to 5) for performance, design, and value (price). If the system (generator/extractor) can surface the same sentences and correlate them with the same reviewer ratings, then it is exercising human-like judgment. This is a relatively simple exercise, and the AI shows close agreement with the humans, notably in performance (92%) and design (90.2%), while value (73.1%) is apparently harder to define. I would imagine that it is a difficult job to apply this analysis to complex subject areas, such as dating or security screening. But if we use this for text-based medical data or image-based data on health, it could be very effective. It will be a long time before we
trust AI for making big decisions, because we won’t know what kind of bias will be impacting its recommendations, even if the AI system indicates a high level of confidence. So, what are the possible biases that can affect AI? Here are the four most common:
1/ Interaction Bias
Machines learn from the interactions they have with users via text, voices, or gestures. Repeated interactions with a specific group of users can cause problematic machine responses to other groups.
2/ Data Bias

Limited data sets or the absence of certain data sets can cause the machine to skew. This problem is also seen in quantitative analysis. In short, skewed data input equals biased opinion output.
3/ Emergent Bias
Specific information that is favored by a group of people will appear more often, and the system response will be skewed toward these beliefs or preferences. While it’s great for personalization, it’s problematic for broadening considerations and staying objective.
4/ Conflicting Goals Bias
Oftentimes, conflicting objectives of different points of view cause machines to make certain trade-offs, including balancing short-term versus long-term goals or individual benefits versus group benefits.
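Of the four, data bias is the easiest to demonstrate. The sketch below is deliberately crude and entirely hypothetical – the accents, labels, and counts are invented – but it shows how a system that simply learns from what it is fed reproduces the skew in its diet:

```python
from collections import Counter, defaultdict

# Hypothetical training set for a voice-recognition model. Speakers
# with accent "A" dominate the data; accent "B" is barely present, and
# the few "B" samples the system saw were failures.
training = (
    [("accent_A", "recognized")] * 95
    + [("accent_B", "not_recognized")] * 5
)

# A deliberately crude "learner": predict the most common outcome seen
# for each group. Real models are far subtler, but the failure mode --
# skewed input, skewed output -- is the same.
outcomes = defaultdict(Counter)
for group, label in training:
    outcomes[group][label] += 1

def predict(group):
    return outcomes[group].most_common(1)[0][0]

print(predict("accent_A"))  # recognized
print(predict("accent_B"))  # not_recognized
```

No one programmed the model to treat the two groups differently; the 95-to-5 imbalance in the training set did it on its own, which is why auditing the data matters as much as auditing the algorithm.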
At the end of the day, we have to consider how we can allow humans to override machines for any of these biases, rather than rely on machines to correct our own bias. The hope is that there is still a place for humans in this after all – even if our hope is more than a little biased. //// Idris Mootee is Global CEO of Idea Couture.
By Valdis Silins and Udit Vira
The political events of 2016, from Brexit to Trump, left many without an explanatory narrative, blindsided by actors and interests beyond their control. Panicked explanations ran the gamut from geopolitical conspiracy to the mass-diagnosis of the psyches of “deplorables.” Many looked to external causes for these events, citing the authoritarian personalities of those who had voted; the post-truth, “fake news” world of those without media literacy; the ingrained sexism of those unwilling to check their implicit or explicit biases; and the triviality of citizenship in a culture of celebrity and audience democracy. Each explanation might speak to a truth about our world, but common to them all is an insistent, defensive focus on the behavior of others rather than one’s own actions.
The truth is that the structural cracks in our representative democracies have long sprung from the inside. Caught in short-term news cycles, we tell stories that explain the most recent polls rather than look for longer-term forces. We began 2017 with the Edelman Trust Barometer showing declines in trust across all institutions of business, media, government, and NGOs. But who was there to remind us that already a decade earlier, in 2007, both Gallup and the General Social Survey showed that public trust in nearly every single major institution was at or near an all-time low? With symptoms like these brewing for over a decade, how many of us took the time to make sense of them, to identify the underlying causes, and to imagine alternative political futures? Instead, we doubled down and rebranded hope for another campaign cycle. We glossed over the frustrations of a political system that many felt no
illustrations: Rebecca Monahan
Crisis of -Cracies
Who Will Shape 21st-Century Politics?
longer represented their interests, one that had severed the feedback mechanisms between the governed and the governing. With this technocratic mode of governing, politics was replaced by a narrow band of policy solutions. We nudged the machine slightly to the left in favor of equality, slightly to the right in favor of liberty, but we never questioned the premise that creating change was just a matter of tweaking the existing system. In the end, we constricted our values by defining them only within the context of acceptable policy solutions, rather than changing our policies to fit our evolving values; in the process, we lost our collective imagination.

What recaptured it last year was the dark mirror image of technocracy: populism. Some saw it as a useful corrective for a gridlocked political system. Putting aside the actual content of populist claims, it can be hard not to be seduced by how effectively the form reset the playing field, acting as a seemingly anti-fragile force that gained strength with every faux pas the status quo deemed impossible to recover from. But, as Jan-Werner Müller – a professor of politics at Princeton – explored in his 2016 book on the subject, populism poses a serious threat to anyone invested in democracy. Just like technocracy, it removes pluralism from politics, but through a different approach. Instead of seeing policy as the only measure of legitimacy, populism claims “the people” as the only arbiter of value, outside of which there is nothing and no one. For Müller, populism’s anti-elitist claims create a people without difference, whose interests are totally and exclusively represented by the populist. Anyone who criticizes or disagrees is simply not part of the people.

We are against this future. Here, we illustrate some emerging alternatives that don’t sit solely on the poles of technocracy versus populism; they run across the political spectrum.
We use them to ask three fundamental questions that often get left out of policy or populist-oriented discussions: Who has a voice? Who governs? And finally, who gets a share?
Who Has a Voice? Liquid Democracy vs. Epistocracy
Liquid democracy is a system of democracy where members of the electorate have the choice to delegate their voting power in flexible ways. If voters choose an approach where they vote on every issue, it could resemble a direct democracy. On the other hand, it could also resemble a representative democracy, where voters delegate their vote to someone else. With this type of system, delegation can be around one issue or many issues; voting power can be delegated or revoked at any time. Many feel that the proliferation of mobile phones that allow real-time civic engagement has made a liquid democracy model a viable political option in certain countries. Argentina’s Net Party, for instance, has developed a mobile application, DemocracyOS, to run on a liquid-democracy-based mandate.
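The mechanics of delegation are simple enough to model in a few lines. This is a toy sketch of the idea, not how DemocracyOS actually works; the voters and the delegation chain are invented for illustration:

```python
# A minimal model of liquid-democracy vote resolution. Each voter either
# votes directly on an issue or delegates to another voter; delegation is
# transitive, and a delegation is effectively revoked the moment the
# voter casts a direct vote instead.
delegations = {"bea": "ana", "carl": "bea"}   # carl -> bea -> ana
direct_votes = {"ana": "yes", "dana": "no"}

def resolve(voter, seen=None):
    """Follow the delegation chain until a direct vote is found."""
    seen = seen or set()
    if voter in seen:                  # guard against delegation cycles
        return None
    seen.add(voter)
    if voter in direct_votes:          # a direct vote always wins
        return direct_votes[voter]
    if voter in delegations:
        return resolve(delegations[voter], seen)
    return None                        # abstention

def tally(voters):
    counts = {}
    for v in voters:
        choice = resolve(v)
        if choice is not None:
            counts[choice] = counts.get(choice, 0) + 1
    return counts

print(tally(["ana", "bea", "carl", "dana"]))  # {'yes': 3, 'no': 1}
```

Because a direct vote takes precedence over a standing delegation, revoking is as simple as voting yourself; the cycle guard handles delegations that loop back on themselves.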
Epistocracy is a political system in which only the well-informed may vote. This has long been an undercurrent in American politics, and in the aftermath of recent events, some establishment figures have called for stronger checks and balances against the capriciousness of the electorate. Last year, James Traub – contributing editor for Foreign Policy – published an article titled “It’s Time for the Elites to Rise Up Against the Ignorant Masses.” Traub argues that taking such a stand is necessary in order to save democracy from itself. Meanwhile, libertarian economist Jason Brennan’s book, Against Democracy, argues that many voters don’t have the competence necessary for proper decision making. Instead, he believes that passing some form of test that assesses basic political knowledge should be required for those wishing to vote. This minimum threshold would supposedly ensure a more rational, deliberate sphere of politics.
Who Governs? Crypto-Anarchists vs. Neo-Reactionaries
Neo-reactionaries are united in their opposition to democratic politics. They reject the view that the arc of history bends toward liberal democracy and its view of progress, which they see as a contemporary religion upheld by “The Cathedral”: a set of Ivy-educated individuals, media culture, and civil servants who indoctrinate citizens with their egalitarian worldview. Instead, neo-reactionaries see freedom and democracy as being incompatible, with their attempted combination creating destructive feedback loops of fleeting and selfish demands. Instead of seeking a voice in the current system, they advocate the design of political systems that give citizens the option to leave and join another state with different values. The result of such systems would be a network of charter cities, with individuals free to choose cities amenable to their way of living but not free to voice their opinions on those cities.
Crypto-anarchists believe in a form of anarchism enabled through cryptography. Cryptographic methods can effectively make communications secure and anonymous, which allows crypto-anarchists to evade prosecution and harassment. Crypto-anarchists have a diminished trust in institutions and use cryptography to circumvent perceived censorship, as well as state-sponsored and corporate surveillance. They also believe in building counter-economies. Through cryptographically secure social media, knowledge bases, alternate currencies, banking systems, and marketplaces, crypto-anarchists can exist outside the purview of our current political and legal framework.
Who Gets a Share? Surveillance Capitalism vs. Platform Cooperativism
Surveillance capitalism, a term coined by Harvard Business School professor Shoshana Zuboff, is a mode of creating and extracting value from people through the surveillance and modification of their behavior. It harnesses the immense amount of data created through our use of digital tools in order to profile us and develop behavior-modification techniques based on that information. For Zuboff, surveillance capitalism sits on par with the scale of transformations wrought by the mass production of the last century, and it is poised to similarly transform commercial practices across industries. But unlike an economy of mass-produced goods, it operates below the threshold of our conscious perception – subtly transforming the ways we behave and make decisions while funneling much of the profit upward. In its more radical forms, it appears as a high-tech version of feudalism, in which neither goods nor data are owned by those who use them.
Platform cooperativism is a movement to create a new ownership model for digital companies. Rather than centralized platforms headquartered far from where they operate, this model advocates for local, worker-owner cooperatives and true peer-to-peer platforms. It sees the sharing economy as a set of logistics services, each of which takes a cut whenever you rent your car, your apartment, your time to others, or any other service. Fundamental to platform cooperativism is the idea that those who create value for companies should also have a stake in them. Its proponents suggest that it can be enabled by protocol-based forms of cooperativism, such as the use of blockchain to support shared ownership models. Trebor Scholz of The New School, who introduced the term, admits that many objections to the concept can be raised; however, he suggests critiquing the concept should spur us to look critically at the values embedded in our digital economy.
Valdis Silins is a senior foresight analyst at Idea Couture. Udit Vira is an electromechanical designer at Idea Couture.
Redefining Human: It’s Not Just Us Anymore
As an anthropologist, I observe our species to figure out how we find meaning, define ourselves, and animate the world. Recently, I’ve been taken by the development of two seemingly contradictory human behaviors. On the one hand, we are engaged in rapidly destroying the biodiversity that helped us claim our uniqueness within the natural realm. On the other, we are now investing massive amounts of effort into creating AI. Both activities are challenging the very qualities that, just a few years back, were considered exclusively human. What does this mean for the future of human territorialism? To further explore this question, I spoke with Sara Constantino, a neuroscientist. Together, we aimed to unite our respective fields of expertise to better understand these paradoxical developments and their implications for our foreseeable futures.
photo: siyan ren
By Sara Constantino and Chloé Roubert
63
// Will new laws secure our human territory or defend the rights of non-human forms? //
Chloé Roubert: We come from two different fields, anthropology and neuroscience, yet we both feel our work focuses on understanding what constitutes a human experience. In my field, anthropology, the concept of the human is at the very foundation of the discipline – as its name, from the Greek anthropo (human) and -logy (study of), testifies. Anthropologists attempt to understand the human experience through observations and theoretical frameworks. Neuroscience also uses theoretical frameworks, but it is more analytical in its approach, focusing on empirical investigations and a collection of abstract metrics. What is increasingly clear within these different fields’ attempts to answer the fundamental question of how humans become human is that both are making epistemological assumptions about how to define humanness. Sara Constantino: We’re also both interested in understanding that definition through a cross-disciplinary lens, with a sensitivity or critical eye on the underlying frameworks and assumptions – both disciplinary and cultural – inherent to different methodologies. Anthropogeny, the study of human origins, approaches the process of becoming human through biology, but it also looks at existing ecological, geological, social, and cultural conditions. Anthropogenesis or hominization is the point of genesis of the human – where a life form becomes a human life form. And with this label comes all sorts of rights, privileges, and implications.
CR: The idea of defining our human origins reminds me of creation myths. Humans throughout the world and history have used stories about “other” forms of life – the sky, the sea, the raven, the turtle – to explain the emergence of mankind and to understand how a certain reality came into existence, whether an island, a species, or a type of human behavior. Arguably, in our contemporary age, we are still searching for our “creation myth.” SC: I think you’re right that a sort of creation narrative is still unfolding. Transdisciplinary research units – such as the Center for Academic Research and Training in Anthropogeny (CARTA) at UC San Diego, for instance – have replaced the symbolic creation myth with empirical, historical, and theoretical narratives in order to understand the origin of the human. This perspective takes into account not only evolution, but also climatic, geographical, ecological, social, and cultural factors. CR: Geologists’ recent naming of our period as the Anthropocene is arguably also part of this attempt to understand our place on the planet in the context of climate change. A common trope in the creation myth is the protagonist passing through stages of metamorphosis and various worlds. Humanity is emerging, crossing changing geological epochs as though they are mythological lands.
SC: Yes, and then there are proponents of living systems theories, in which entire networks of information, matter, and energy flows are considered alive. In the Gaia hypothesis – named after Gaea, a character from Greek mythology – the earth is conceived of as a breathing, excreting, consuming, self-regulating system. This is life on a planetary scale, arising from the interactions and co-evolution of organisms and the environment. And then there is “strong AI,” the idea that a computer program with the correct inputs and outputs can be said to literally have a mind. Conversely, the brain has also been likened to a machine. These broader positions on life, intelligence, and consciousness destabilize the territory of the human – as do recent philosophical debates around object-oriented ontology, the Anthropocene, automation and general AI, posthumanism, and work by biologists on the “human” characteristics shared by plants. CR: This obviously has implications for our fields of research. Anthropology is based on the premise that humans are unlike other life forms, which has always been tough to define: Is it our ability for abstraction, our adoption of language, our use of tools? It’s also difficult to demonstrate. I mean, what makes anything alive in the first place? NASA’s website uses the example of crystals to critically engage with the very definition of living by stating that crystals can grow, reach equilibrium, and react to stimuli, though they lack the nervous system that would qualify them as being alive. So is the nervous system what makes matter alive?
SC: Crystals! Anthropologist Loren Eiseley said, “One does not meet oneself until one catches the reflection from an eye other than human.” Maybe this reflection is at the elusive root of crystal or equine therapies. Neuroscientists often do look to animals as model organisms for understanding the function of the human brain and its relationship to behavior, cognition, and affect. For example, the rat is an “animal model of depression”; it is used to study stress, anxiety, and despair. Psychologist Harry Harlow’s controversial research on love and maternal (and paternal) care exposed baby rhesus monkeys to grim experimental apparatuses – like the “iron maiden,” an inanimate surrogate mother made of wire, or the “pit of despair,” an isolation chamber – designed to probe these higher-order constructs. These studies had real bearings on human pedagogic recommendations. These inferences are actually said to be possible because of the common origins of living organisms. But we also use computer models to understand our cognition. CR: The fact that we look at rats or monkeys and use forms of AI to understand how we feel love or depression once again destabilizes the boundaries of what is uniquely human. There’s been a lot of talk about the Wood Wide Web, for instance. Ecologists, such as Suzanne Simard or Ren Sen Zeng, have demonstrated that trees communicate through a network of fungi. The idea that trees can call for help, warn neighbors, or aid younger trees challenges the idea that communication, cooperation, or agency are exclusive to human or animal life. The work of primatologist and ethologist Frans de Waal debunks another human uniqueness myth: that we are the only ones with a sense of ethics. The video from de Waal’s lab featuring a capuchin monkey rebelling against the unfair distribution of cucumbers versus tasty grapes is a hilariously endearing – but also profound – retake on humanness. 
Along similar lines, research by Tanya Latty and others has shown that even the Physarum polycephalum (meaning “many-headed slime”), an ancient species of slime mold made up of autonomous single-celled organisms that operate as one, can learn to navigate labyrinths and exploit probabilistic reward structures. This species has even been observed for its infrastructural planning abilities – and this is all without a nervous system. Evolutionary biologist Lynn Margulis predicted that, “Those great evolutionary survivors, the lowly slime mold, would inherit the earth.”
SC: And speaking of inheriting the earth, we have grown up with the technological singularity – the acceleration of technological progress to the point where machine intelligence surpasses human intelligence – on our horizons. As primordial slime challenges our notions of consciousness and intelligence, so does the future of AI. The term “artificial intelligence” was coined in 1956 at a Dartmouth conference organized by John McCarthy, Marvin Minsky, and others, though it has a longer history in our collective imaginations, from mythology to fiction to philosophy. Researchers became interested in whether it was possible to create a machine with general intelligence. Alan Turing famously proposed a conversation-based test, a very anthropomorphic metric indeed! Today, machine intelligence has progressed one game at a time: Robots now beat us at Jeopardy!, chess, video games, and most recently, the ancient game of Go. This success has been a public vindication for deep-learning algorithms, which are loosely based on the network structure of the brain. These algorithms are also translators, drivers, and babysitters for our growing elderly population, and they may even engender feelings of intimacy and love, as depicted in Spike Jonze’s movie Her.
// The fact that we look at rats or monkeys and use forms of AI to understand how we feel love or depression once again destabilizes the boundaries of what is uniquely human. //
CR: So the territory humans had made up for themselves among life forms is shrinking. On one end, we are uncovering how organic matter functions in human-like ways; simultaneously, we are developing technologies that are acquiring human-like capacities. It’s all about borders, and, like geographic ones, these definitions are not fixed, but continuously evolving. So, what is the future space of humans? Will it shrink? Will it expand? Will it be inclusive of pigeons, slime, and cellphones? Will machines become consumers? Will bacteria be remunerated? SC: I think we are at the limits of anthropomorphism and have to consider how these boundary breakdowns will impact fields like anthropology. Will we see ethnographies about the culture or experiences of robots? Who will do them? The revelations about slime molds, monkeys, and trees, as well as the recent debates and advances we’ve been discussing, may lead to cyborg, animal, or object politics. Will new laws secure our human territory or defend the rights of non-human forms? In some ways, these developments expand horizons by emphasizing our emergent ontology. Returning to mythology, perhaps we should think of
ourselves as chimera: as compositional creatures made up of microorganisms and prosthetics, from artificial limbs to contacts and smartphones, we all experience assisted living. As academic Donna Haraway said, “To be one at all, you must be a many.” Our systems of knowledge are rigid, but our realities are much more fluid. //// Sara Constantino has a PhD in neuroscience from NYU. She is currently working as an editor. Chloé Roubert is a resident anthropologist at Idea Couture.
FORESIGHT

University of Houston College of Technology Foresight Program

Foresight + Houston: Preparing Professional Futurists

FORESIGHT CERTIFICATE SEMINAR
May 1-5, 2017 | Houston
TO REGISTER: http://www.uh.edu/technology/departments/hdcs/certificates/fore/
By Theo Forbath
Not to get too touchy-feely, but how connected are you? Most likely, you’ve got every techno gadget available – a laptop, tablet, cellphone, and maybe even a business-personal split between an Android and an iPhone. And I’d guess your friends and family scour the “For the Tech Lover in Your Life” gift guides, so your drawers are full of Fitbits, Bluetooth devices, and other odds and ends.
In short, I’m assuming your answer is: I’m very connected. But I don’t mean technologically connected. Despite the many ways humans can connect digitally, the percentage of Americans who report feeling lonely has increased greatly, growing from 11% to 20% in the 1970s and 80s, to 35% in 2010, according to AARP. In other words, we’ve never been more lonely and disconnected, even as the internet has enhanced the way we discover and use information; find jobs, retirement communities, vacation destinations, and colleges; locate homes, recreational activities, and love; and create activist networks with a passion that rivals that of the Vietnam War era. I’d argue, however, that in many realms of our lives – including work, family, and romantic love – the latest advances in digital technologies are helping to alleviate this disconnection while adding value to our lives.
photo: idris mootee
In the Age of Interconnectivity, How “Connected” Are You Really?
“We cannot live only for ourselves. A thousand fibers connect us with our fellow men…” – Henry Melvill
67
// In the near future, AI and IoT-enabled smart products will introduce more objects that encourage and enable people to be more empathetic. //

Together While Apart at Work

A colleague recently posted a picture of a birthday cake on social media, with the caption: “From my work family.” Not only did the chocolate ganache make her feel appreciated and connected to her co-workers, but she also got to share, or celebrate, that feeling of inclusion and love on her Facebook newsfeed. This type of online social connectivity can also benefit those workers who, like myself, have an itinerant lifestyle. I logged almost 300,000 miles last year globetrotting, so my “office” is cross-cultural, versatile, and virtual. However, despite my reliance on seemingly every interconnected business tool on the market, I’m also very “workplace social” – in fact, 75% of my time is dedicated to some form of interpersonal meeting. Services like Vidyo and Slack, meanwhile, have made tools like email look like they belong to the Morse code era. I recently heard a colleague use “Slack” as a verb – a sure sign that the service is poised for worldwide success (think Google). Yet, despite the “work families” and the ability to Slack, Harvard Business Review reports that a sense of workplace alienation still prevails. The bottom line is that just as companies need to continuously watchdog their B2B and B2C communications, they also need to keep an eye on business-to-employee (B2E) connections. Tools like Cisco Spark Board can serve as “digital transformation” tools for silo-affliction, as can Google Jamboard and Microsoft Surface Hub, which strive to foster workplace connectivity by bundling services and streamlining the virtual meeting experience.

Strengthening Family Ties

In the late 1960s, 70s, and 80s, a now familiar televised public service announcement played each night: “It’s 10:00 pm: Do you know where your children are?” If parents had forgotten to check in with their kids, or perhaps were having such a good time that they forgot they even had children, they had a sobering nightly reminder. Today, with all manner of news sources available, fewer parents are tuned in to the same channel at the same time every night. However, thanks to smartphones and even GPS trackers, they’re actually more likely to be in touch with their kids and aware of their whereabouts. In this way and others, digital technologies are also bringing families closer together. For example, digital tools are enhancing family vacation experiences and making it simpler for families to find entertainment venues. Taking a page from the Disney playbook, Carnival Cruise Lines unveiled a wristband solution that eases the water-locked itch of a family-fun cruise. And though it’s been around since 2009, Yelp’s Augmented Reality Monocle, with its early implementation of an augmented reality overlay, is still the app I’m betting on to eclipse some of the more technically limited restaurant-sourcing tools. While recreational services can serve as an antidote to familial isolation, social networks can provide a supportive community to family members with specialized needs, such as parents with a chronic illness, veterans with PTSD, or adolescents with cancer. These virtual, online groups are a muscular, modern take on the chat rooms of years past. Combined with other digital capabilities, such as smart pill bottles that provide medication reminders, online social networks can give people a greater sense of independence.

Finding Love in Remote Places

With the plethora of dating and hookup sites, who can remember how we found love in the dinosaur age of blind dates, parties and, dare I say, the bar? Match, eHarmony, and OkCupid have been joined by Bumble and Tinder, among others, and each brand seems to offer a unique point of view on how to strike up a romantic connection. OkCupid introduced an app revamp on Valentine’s Day to provide a more detailed portrait of a potential mate. Information, as always, is king. There’s also Skype, FaceTime, and now Skype Translator, which can make the heart grow fonder no matter the geographic distance between people – or even the language a romantic interest may speak. In the near future, AI and IoT-enabled smart products will introduce more objects that encourage and enable people to be more empathetic. A bot that remotely triggers a hug to a connected pillow will not necessarily replace skin-to-skin contact, but it could act as a concrete placeholder until a real embrace is available. Over half (57%) of teens are beginning friendships and extra-familial connections in the digital space. This makes a key finding of a 2015 report published by the Pew Research Center even more compelling: The stronger our technology becomes, the stronger – and closer – our bonds will be. //// Theo Forbath is chief transformation officer, VP at Idea Couture and a member of the executive leadership team at Cognizant Digital Business.
WEF 2017

A Manifest Destiny of Machines

By Ben Pring

This year’s annual meeting of the World Economic Forum came at a time when a new economy – a digital economy – was emerging from the red-hot furnace of technological innovation. An economy based on artificial intelligence, algorithms, “bots,” and instrumented “things” is taking shape and, in the process, generating huge new money from ideas that represent the future. At the heart of this digital economy are changes in how we work. The work that got us to the present won’t get us to the future because the digital economy requires completely different skill sets and attitudes. This reality was at the heart of the World Economic Forum’s 47th global gathering of leaders in politics, business, academia, and the arts. The event came at a time when generational orthodoxies were (and are still) being questioned and views of the future hotly contested around the world. The irony that the conference’s opening sessions – which vigorously defended a vision of globalization and economic liberalism – occurred in the same week as the inauguration of President Donald Trump was, undoubtedly, not lost on anyone.
Four Macro Forces at Play in the Digital Economy

The angst and uncertainty surrounding this year’s World Economic Forum event were understandable; big macro forces are gathering momentum:

/ Digital change is larger than it appears. Though we may think we understand the digital revolution, its scope, scale, and importance are set to surprise us all. We have only scratched the surface of the change that will occur in the next 20 years.

/ You can run, but you can’t hide. The economics of digital are unstoppable. In the emerging “new world,” the Republic of Digital is the state you want to be in. The Republic of Denial is no place to stake a claim.

/ Entrenched economic laws are shifting. Ten years ago, we had to learn how to operate at the “China price”; now, we need to learn how to operate at the “Amazon price.”

/ The bots are here. Artificial intelligence is real business. AI has left the laboratory (and the movie lot) and is in your building.
We Need to Figure Out How the Future of Work Works

With machine intelligence developing at exponential rates, and with the near-term prospect of machines doing everything we can do today, it’s never been more pressing to figure out how the future of work works. According to recent research from Cognizant’s Center for the Future of Work, executives understand that this is no theoretical exercise. Though we have been musing philosophically (and artistically) on the promise and peril of robots for eons, the time for debate is over; now we need to figure out What To Do When Machines Do Everything (a topic I cover in my book of the same name). This rapidly unfolding AI-driven era is exposing a gap among employees in organizations around the world: those who embrace the possibilities of the next wave of technology, and those who fear its risks. Ensuring that you (and your team and your organization) are on the right side of this divide is Job #1 for everyone.
// The work that got us to the present won’t get us to the future. //
The Future of AI is the Future of Work

Leaders from every region, industry, company type, and age group noted that AI, combined with analytics, will be the number one driver of business change between 2015 and 2018:

/ 99% – Business Analytics: Making meaning from business data and information.
/ 98% – Artificial Intelligence: Technology that can perceive and react with a form of intelligence.
/ 92% – Cloud: Delivery services further reducing delivery costs.
/ 90% – Software for Process Automation: Software for legal due diligence, auto, etc.
The Davos Dialogue: Building Machines to Achieve Higher Levels of Human Performance

Although digital business is over 70 years old (counting from 1946, when the first general-purpose computer, ENIAC, was unveiled), the “digital revolution” is only now kicking into high gear. Most of us have worked with computers all of our working lives, but, in many ways, we’re only just beginning to understand what digital work truly is. The future won’t see us trying to replicate old ways of working with new tools; it will create entirely new ways of working, enabled by those tools. Consider that, while email aped the memo (have you forgotten what “CC” stands for?), software like Slack completely does away with memories of typing pools and vacuum tubes, enabling communication as though everyone were sitting around the dinner table on a Friday night. And whereas Excel was spawned from Dickensian actuarial offices, Tableau was built by people wearing flip-flops in a computer science class in California. In the next AI-catalyzed wave of digitization, work will be reimagined, remixed, and reborn as something fit for our kids rather than primed for the nursing home. The new frontiers discussed in Davos weren’t simply about substituting labor with software; they were about building the new machines that will allow us to achieve higher levels of human performance. As Kris Hammond, CTO of Narrative Science – a pioneer in natural language generation – puts it, “AI is not a mythical unicorn. It’s the next-level productivity tool.” Davos 2017 couldn’t have come at a more consequential juncture. Exhilaration (and terror) won’t only be found out on Switzerland’s steep slopes. //// Ben Pring is VP and director of Cognizant’s Center for the Future of Work.
From Artistic to Artificial Intelligence: The Emergence of the Creative Machine
By Azadeh Houshmand and Maryam Nabavi

Creativity has long been perceived to be immune to falling under the dominion of machines. However, this is changing fast. Machine learning and big data now enable intelligent systems to be used in many creative fields, such as music production or the culinary and visual arts. Even though we know that machines and humans are creative in fundamentally different ways, in the future they will be able to collaborate and co-create. However, facilitating this will be extremely challenging. According to recent studies, 35-40% of jobs are at high risk of being replaced by machines in the next two decades. Tasks that have a high degree of repetition and require precision will be the first to go. But machines will never replicate the creativity, empathy, and personal connection required by many disciplines today. Here, we explore how machine and human creativity are different and why we need both. But first, we need to define creativity and understand the essential difference between creativity in humans and in machines.

What is creativity?
There are many definitions of creativity, from the belief that it is an inherent talent with which some people are gifted to the view that it is a skill that can be built through learning, immersion, and practice. Steve Jobs’ definition is perhaps the most general: “Creativity is just connecting things.” By this definition, machines are certainly more creative than humans: Compared to humans, machines are able to collect and make sense of far greater amounts of data. Ed Rex, a music composer who has created an online platform enabling anyone to compose music with the help of artificial intelligence (AI), agrees: “Creativity is immersion, assimilation, and recombination, and by this definition, it is pretty clear that machines should be able to perform all of those steps: computers can immerse themselves in huge bodies of data, they can find patterns and common features in big data sets (even features that humans would not notice), and they can recombine things in novel ways.” For us, creativity is more than that. Creativity is the act of turning new and imaginative ideas into reality and making something new or unique. Psychologist Mihaly Csikszentmihalyi, author of Creativity: Flow and the Psychology of Discovery and Invention, says that creativity does not occur in our heads but in the interaction between our imagination and our social context. Creativity is a matter of reflection, experience, and response. It is a matter of relationships and who we are in the world. Creative thinking involves pushing the envelope and looking beyond the boundaries of our current frames of reference. It happens when you can draw from different fields of intelligence simultaneously. Although machines are able to draw information from different sources and make new connections, they still rely on humans to enhance their creative skills.
The question that comes to mind is not whether machines can be as creative as humans, but how their creative process is different.
What is the difference between human creativity and machine creativity?

So far, machines have been able to come up with new ideas, recipes, lyrics, and suggestions for the visual arts. Their source of inspiration is data and patterns, and their creative outcomes are the result; they are not a reflection of personal tastes or preferences. Machine outcomes are a result of making lots of connections, learning over time, and determining which connections are more desirable. The human creative process is much messier and more complex. Machines do not have emotions like humans (yet). They do not have a sense of self, cannot experience the world like humans, and do not have the power to imagine in the same way. Therefore, human creativity is much different than that of a machine. Human creativity is bound to our emotions, our hopes and desires, fears, hate, grief, and love. Emotion cannot be programmed, and people do not feel the same emotions in similar situations. As many researchers and computer scientists have noted, the ability to have an emotional response, to pick up on subtle emotional signals from others, and to exercise empathy and judgment are all things that are hard to replicate in machines. The curving, swirling lines of the hills and sky, the contrasting blues and yellows, and the thickly layered brushstrokes of Vincent van Gogh’s Starry Night are rooted in the artist’s turbulent state of mind, imagination, emotions, and experiences. Would an artwork by an AI artist be creative in the same way as this masterpiece by Van Gogh? Sure, it would be a creative piece of art, but it certainly won’t be the same as something created by a human artist. A sense of self is another big part of creativity; machines do not have a unique personality.
The fundamental question for any creative machine is whether its abilities derive only from its creator or whether it is capable of independent and surprising originality. Consider AARON, an AI-based program written by artist Harold Cohen that creates original artistic images. It cannot evaluate its own work, decide what to create next, or self-reflect and modify its work without an external standard – all things that a human can do. Cohen argues, “A robot would have to develop a sense of self. That may or may not happen, and if it doesn’t, it means that machines will never be creative in the same sense that humans are creative.” Perhaps AARON was not creative in the same way that Cohen was, but AARON’s creative capabilities certainly opened a new approach to creating art. As Irish playwright George Bernard Shaw said, “Imagination is the beginning of creation. You imagine what you desire, you will what you imagine, and at last, you create what you will.” By this definition, machines can become creative once they are able to imagine. Imagination is the creative ability to form images, ideas, and sensations in the mind without direct input from the senses. Creativity is applied imagination. In theory, unless we can program imagination and intuition, we will not be able to claim that machines are capable of thinking creatively in the same way that humans do.
How might machine and human creativity influence each other?

Whether or not we are working in creative fields, a machine’s ability to create new processes, artifacts, and entities will transform our lives in radical and new ways. To better understand this, it is important to look at how machine intelligence is progressing and how that progress is directly related to its ability to think and work in creative ways. The following three scenarios suggest how machine intelligence might evolve over time and, consequently, what implications it may have for creativity as a trait and as a discipline.
Model 1.0: AI will leave the creative tasks to us.

Machines are incredibly accurate in performing repetitive tasks such as product assembly or harvesting data from a sensor. They are also able to collect and make sense of enormous amounts of data, and to enhance their performance simply by being exposed to more of it. This removes the burden of work that is repetitive and straightforward, freeing up humans to concentrate on more creative, complex tasks. Scenarios that involve complex decision making, personal connections, and creativity will be left to humans. It is similar to ATMs freeing up bank tellers to focus on building personal connections with customers. Hence, we could claim that the job of a bank teller is far more creative today than it was prior to the arrival of ATMs. Being able to waive a transaction fee for a customer goes a long way, and that is something no algorithm will be able to do in the near future.
// Machines can become creative once they are able to imagine. //
Model 2.0: A new wave of creativity will emerge as a result of human-AI collaboration.

Ginni Rometty, the CEO of IBM, set the tone on stage at the World of Watson conference in Las Vegas last year: “It’s about man and machine. Enhancement, not replacement.” AI is already enhancing human creativity. Musicians are looking to AI to analyze thousands of tracks of music and to derive the underlying tones and lyrics that resonate with people. In 2016, Grammy Award-winning musician Alex da Kid collaborated with Watson to create the lyrics and music for a song composed after analyzing five years’ worth of data from the New York Times, the most edited Wikipedia songs, and popular movies. The collaboration resulted in the song “Not Easy,” which reached the top of the charts on Spotify. Similarly, the platform developed by Ed Rex allows us to tap in a few variables, including genre, mood, and duration, after which the program will create music. Machines are not only going to be assistants to humans (and artists), but also new, creative collaborators. Imagine when AI gets added to the designer’s inspiration box: Mood boards will be created in a fraction of a second, storyboards with multiple endings laid out, and guidelines on a particular style, size, and feature list provided, leading to an endless array of design possibilities. How might that change the way designers think and work? The possibilities of human-AI collaboration are not just limited to art and design. Imagine if AI becomes our creative half when it comes to problem solving. With the potential to challenge our assumptions and help get us outside of our tunnel vision, AI can twist our thinking, shift our perspective, and give us a different point of view when we are stuck. More interestingly, we will be able to codify some of the characteristics of creative people.
So, we could ask our creative machine to give us different ideas, as Steve Jobs, Leonardo da Vinci, or Thomas Edison might have thought about them. Simultaneously, we could spit out a few ideas and have the most creative people in human history vet the best ones and help us prioritize. The idea is, of course, not to rely on machines to think for us, but rather to enhance our cognitive ability.
Model 3.0: Human-machine hybrids will be the ultimate creative species.

This scenario is more radical than the previous ones and takes into account the fact that we are already seeing hints of how embedding technology in our bodies might change the way we think and behave. An early example is the work of artists like Neil Harbisson and Moon Ribas, who hacked their bodies to acquire an additional sense, such as hearing colors or feeling seismic activity. Although far from true machine-human hybrids, these examples show how our minds begin to ignore the difference between our biological and electromechanical organs and think of them as a unified system. This will occur once cyborgs (humans with technological organs) are able to connect their minds to a computer and use the best of both worlds to collect data, make sense of it, and problem solve. Imagine what would happen if we gained unlimited mental storage and could review our thoughts and memories as if we were going back in time. In this scenario, the distinction between human and machine creativity is blurred. This will radically change how we think about technology as a tool and force us to think about it as part of our being.
A promising partnership.

As seen in these scenarios, the emergence of AI and advances in deep learning have resulted in new creative processes and outcomes. Machines are getting better at identifying patterns and human intentions. As a result, they are getting better at interacting with humans, allowing both to become even more creative. With AIs as chefs, rappers, poets, and artists, we will also see a new wave of creativity that is inspired and influenced by humans, but not created solely by them. Change is inevitable. We need to embrace non-human collaborators and adjust our perceptions in this new era; we need to learn to use machines as co-workers, collaborators, and co-creators. Lastly, we have a bit of advice for any machines reading this article: Improve how you interact with us by reading between the lines and looking beyond the surface. Earn your right as humans’ creative counterparts. //// Azadeh Houshmand is director of client experience, design & insights at RBC. Maryam Nabavi is VP, IC/things at Idea Couture.
Sologamy and the Stay at Home Club: How Marriage and Romance Might Benefit From a Design Eye
By Lindsay Roxon
As a strategist, a futurist, and a woman navigating the dating scene for the first time in a decade, I have thoughts – a lot of thoughts – on the state of dating, love, marriage, fidelity, pregnancy, and everything even remotely associated with these topics. Admittedly, I find it nearly impossible not to view love and relationships through the lens of strategy and design. Lately, I’ve noticed an emerging pattern of signals. One signal in particular stood out: According to VICE, 40% of Japanese women have allegedly “moved on” from trying to find relationships and are instead opting for rent-a-boyfriend services. It’s even more interesting, however, that they are hiring these men to cry with them, which is considered the pinnacle – and often most uncomfortable expression – of intimacy, especially for Japanese women. This struck a chord with me. The decision to stop dating does not seem to be a trend isolated to Japan. In China, an increasing number of women are choosing to delay marriage in favor of pursuing careers and finding real love. This is creating tidal waves of angst among older generations. After over three decades under the single-child policy, entire family lineages in China rest on the shoulders of these children. And yet, despite the fact that there are 33 million more men than women in China, unmarried women over the age of 25 are called “leftover women.”

Across the Pacific, similar stories are emerging. A book titled Spinster: Making a Life of One’s Own, written by New York-based writer Kate Bolick, explores multiple narratives for women “beyond the marriage plot.” In a recent article for Flare, “Why I’m Giving Up Men and Just Staying Home,” Sarah Ratchford writes about the Stay at Home Club, a group of women who are proactively exiting the dating scene to find fulfillment in their friendships and careers instead. According to an article in The Economist called “Single Women: Why Put a Ring On It,” as of 2016, the number of unmarried women exceeded the number of married women for the first time. Single women now buy homes at greater rates than single men, signifying a big step in independent wealth building.

A decade ago, we were concerned with whether or not women could “have it all” and questioning why there were so many great single women. But we are now moving toward a trend of women simply carrying on with their lives and even avoiding the dating scene altogether. This is significant because, married or not, many of us go through life feeling the weight of that invisible metronome. For women, that may center on pregnancy, or it may link to the feeling of being a depreciating stock in a world that idolizes youth and beauty. For others, it’s a marker of ambiguous milestones that is heavily laden with what they “should” do by a certain age. This rush is in tension with new ideals of marriage and relationships, where the expectation is to accomplish certain personal milestones before the knot is tied; marriage comes only after self-realization.

So where do these behaviors come from, considering the importance that has been placed on partnering up for so long? Let’s consider two of the oldest behaviors in the dating world: chivalry and monogamy. In his book Date-onomics: How Dating Became a Lopsided Numbers Game, Jon Birger argues that chivalry and monogamy are not necessarily natural, but rather responses to market adaptation. Whether or not you agree with his point, the idea of market adaptation is provocative. It makes sense that our dating behavior would, in some shape or form, be dictated by market drivers, and this goes beyond simply being sick of Tinder. It can be argued that women are gaining a better sense of self-worth and greater confidence, and are holding fast to idyllic notions of what relationships should be. Because of this, we are simply choosing not to settle down with the wrong person. Thanks to social media and urban density, we now have stronger social networks than ever. It might be the case that the concept of the nuclear family is no longer necessary; in fact, this arrangement may be more restrictive than other possibilities.

This is coupled with the growing recognition that the modern concept of marriage isn’t necessarily a fair arrangement for women. While the benefits of marriage have often been touted, the benefits for men far outweigh those for women. For example, while life expectancy for women stays the same with or without marriage, life expectancy is actually much longer for married men. As a result, many women are taking longer to reach a point where they feel comfortable sitting at the marriage-negotiation table. Waiting longer to settle down means that women are able to better negotiate and navigate their marriage arrangements and retain control over their assets. Delaying marriage is certainly having positive economic effects: According to The Economist, women aged 25 to 34 are the first generation to start their careers near parity with men, earning 93% of men’s wages. This is also having a positive effect on marriages when they do eventually take place. The article goes on to say that, despite the increasingly complex landscape of relationships and the declining rates of marriage, those who do get married are less likely to divorce – particularly those who have chosen to hold out for the right match. What this is signaling is that, unlike in decades past when women may have dated to find a spouse for security or status, there is now a social rejection of the idea that women must marry into success or stability.

And, as women are attaining more economic power and are delaying marriage, they are starting to manipulate the market by purchasing and developing products and services that empower their independence. From biohacking to paid dating, a plethora of tools are becoming available for reassembling and optimizing life as a single person. In our homes, we are outsourcing and automating everything from laundry and childrearing to our grocery lists. Technocrats are taking a similar approach to relationships by breaking down the magic of courtship into modular bits; the meeting and mating process is now being shortened and simplified. But by developing solutions for literal pain points – rejection, confusion, awkward silences, sexual frustration – what we are creating is the emotional equivalent of Soylent, a meal replacement product. The result, in some sense, is that we are creating a collage of living simulations and catering to our need for control and a life of productivity. Further, reducing our dependency on coupledom has both positive and negative implications. On the one hand, equality and the augmentation of biology may level the playing field, giving women a greater advantage and more opportunity to contribute skills. On the other hand, there is something necessary about preserving our dependencies on one another, as essential aspects of the human experience lie within the vulnerability of the dating dance. Still, we may begin to see a shift to a world that is increasingly individualistic, disrupting the type, frequency, and timing of romantic relationships as we know them.
// Unlike in decades past when women may have dated to find a spouse for security or status, there is now a social rejection of the idea that women must marry into success or stability. //
// Waiting longer to settle down means that women are able to better negotiate and navigate their marriage arrangements and retain control over their assets. //
This may mean developing products and services that not only cater to those with an independent status, but that also provide opportunities for singles to optimize their lives. Service developments may include the facilitation of purchases of smaller homes or the banding of multiple buyers together for large-scale investments. Products and the way they are sold require rethinking – including everyday items, like groceries. The economy already benefits from the spending of single people. Single women spend 35% more per person on groceries and twice as much on hair and beauty products than women who are in relationships. A campaign by skincare brand SK-II tapped into this trend by developing an emotional video that spoke directly to China’s “leftover women,” empowering them to be true to themselves in their choice to seek fulfillment in their careers and to wait for real love, despite immense pressure from their parents to get married. As more women are remaining single, either by choice or by design, we may also need to examine the ways in which the marketplace for certain types of labor may change. Ohlala, for example, is an app that allows women to charge for dates; it is flipping dating dynamics on their head. While some see this as the digitization of escort services, there are women who use it simply as a way to appropriate the market for paid relationship services, leading some to question whether this is about the development of a marketplace for paid, emotional labor. But there are other matters to consider as well. If time becomes increasingly scarce and more valued, how will we begin to view the work traditionally performed by women at no cost in the past? Will the provision of care become a premium, lucrative service as women have less time to provide this
for free, and as technology takes on a greater role in providing it? As women step away from free labor and into power, we are sure to see a premium placed on this formerly unpaid work. One thing is for sure: These changes, however they occur, are creating more complex personas for marketers to sell to. Women, just like everyone else, have many needs occurring at different stages in their lives. They require business and science to provide tools for designing the kind of lives they want to live. The new human-centered design should empower people, including women, to be the designers of their own lives. Let’s look at this through a different lens. The first plow revolutionized the way that food was produced and the way that we thought about agriculture. But the plow itself was not what was truly impressive; instead, as history revealed, the plow was simply a ladder to social progress, liberation, and, eventually, a major driver in shifting the economic and social landscape far beyond food production. The ways in which we achieve our end goals are still, and will always be, subject to innovation, but the needs at the core of those goals remain fixed.
Technologies that facilitate or substitute for human connection will have complicated, unforeseen implications for the future that transcend the realities of the technologies themselves. Personally, I’m not sure that I will be jumping on the bandwagon to suppress oxytocin or vasopressin the next time I feel the unmistakable fear that accompanies a budding attachment, nor do I think that we’re headed toward a dystopian reality of love where technology and simulation form the patchwork of our emotional lives. Instead, these technologies and services will eventually fade as we invent new ways to meet our needs. The real impact will be in how these technologies shape us socially – and I’m completely optimistic about that. As an innovation strategist, my job is not to see barriers, but possibilities – and to build pathways to those possibilities. It’s a precise kind of optimism. They say that necessity is the mother of invention; these stories about women from around the world indicate that self-prioritization, combined with a healthy dash of idealism, will be the gateway to social progress. //// Lindsay Roxon is a healthcare innovation strategist at Idea Couture.
Signals of Change

Biohacking Attachment
Scientists are exploring ways to prevent the feelings of attachment that surge after sex with pills that suppress oxytocin and vasopressin receptors.

Gaggle Dating Strategies
The Globe and Mail profiled dating experts who recommend gaggle dating strategies – essentially dating many people at once to reduce reliance on a sole individual.
Sologamy
Relationship structures are being disrupted by sologamy, an early trend of people marrying themselves in the style of traditional wedding ceremonies.

Relationship and Sex Robots
Inventor Ian Pearson has predicted that sex with robots will overtake human intercourse by 2050.
BY— DR. TANIA AHMAD, DYLAN GORDON, KIRSTIN HAMMERBERG, GARETH HOCKLEY, MICHELLE JACOBS, STEPHANIE KAPTEIN, MATHEW LINCEZ, MAYA OCZERETKO

DESIGN— JEMUEL DATILES

PHOTO: JEMUEL DATILES
BY— KIRSTIN HAMMERBERG, MAYA OCZERETKO
THINK ABOUT HOW YOU EAT TODAY, AND THEN TRY TO REIMAGINE THE FUTURE OF THE FOOD EXPERIENCE. HOW MIGHT WE EAT IN 5, 10, OR 20 YEARS? WHAT WILL WE EAT? WHEN? WHY?
Whichever table you find yourself at, one thing is certain: One cannot reimagine the future of food without also considering the human experience that is so deeply rooted in it. No matter what innovations emerge, digital or otherwise, food will always remain an integral part of our existence. And yet, in an ever-changing, fast-moving, and increasingly connected world, our most basic ties to food remain, while our expectations continue to change. Food connects all of us, across the globe, transcending cultures and religions. In every corner of the world, you can find places and spaces where people connect over food. Think of the vibrant Taling Chan floating market in Bangkok, bustling with both locals and tourists enjoying seafood at rickety communal wooden tables; or lining up around the block with hundreds of New Yorkers, impatiently waiting to try the world’s first bleeding veggie burger from Impossible Foods at Momofuku Nishi; or foraging through a tasting menu of 17 small plates (ending with chocolate reindeer moss) at Noma, one of the world’s best restaurants; or a perfect three-hour gastronomical experience waiting to be shared with your social networks. Consumers have become more educated and are more aware than ever of the relationship between food and wellbeing – not only in terms of our own health and wellness, but that of our planet as well. There is a greater desire not only to know what’s in the food we consume, but also where it came from and how it was grown. The #RealFood revolution, popularized by celebrity chef Jamie Oliver, and the subsequent rise of countless natural food brands illustrate the seismic shifts that are altering our relationship with food. The rise of these initiatives has unearthed an innate desire – dormant throughout the Fordist era – to connect with food: real food, made by real people (not the faceless brands that reigned at the height of the industrial food era).
Conversely, the digital revolution of food enables new and redefined experiences, from biometric personalized nutrition plans to frictionless ordering and delivery at any time of day. The digitization of food empowers consumers with new and different forms of access and information, resulting in a new relationship between humans and what they eat. The visionaries who have created these new products and services have tapped into the human experience that is so central to food. They play at the intersection of real food and technology, and are able to unearth the new and seemingly infinite possibilities that abound in this digital culinary era. As the future of food continues to unfold, and an increasingly digitized food experience emerges, the human experience will need to remain central. As you delve into this series of articles, take time to consider the experience you want to have with food, and how your brand, product, and story can deliver on it as we continue to redefine the future of food and the experience surrounding food culture. Kirstin Hammerberg is VP, brand experience at Idea Couture. Maya Oczeretko is an innovation strategist at Idea Couture.
THE POST—INDUSTRIAL PALATE

BY— DR. TANIA AHMAD

PHOTO: JEMUEL DATILES AND KAPIL VACHHAR
THINK ABOUT THE HUMBLE MACARONI AND CHEESE.
Now imagine that it’s gluten-free, made with hormone-free milk containing active cultures, and packaged in a recycled box labeled with sustainable inks, coatings, and adhesives. Think of the familiar sizes and shapes from your childhood (or the one you may have seen on TV): the tall box, the packet of neon orange mystery powder, and the directions to boil, add, and stir until uncannily colorful creaminess is achieved. What kind of consumer object is this? And what can we make of this magnificent sociocultural amalgam of nostalgia and ethical responsibility? Let’s home in on the apparent contradiction of reinvented processed foods: We’ll dub it the “post-industrial palate.” Call it a love-hate relationship with industrially produced food that doesn’t alternate between loving and hating, but instead combines both into tastes, textures, and packaging that are simultaneously new and old school. This may not describe Soylent, but it works much more effectively for vegan fried chicken with biscuits and gravy. The post-industrial palate allows us to explain how the different and seemingly opposing ways consumers understand processed food can coincide in the same products.
MAKING RETRO FOODS RELEVANT THROUGH CONTEMPORARY LIFESTYLE CONCERNS CAN RENEW FOOD NOSTALGIA.
The “post” in post-industrial is not temporal in nature. Instead, it means that we cannot understand contemporary food practices and tastes unless we take into account the immediate past that they grew out of. We are not finished with industrial processing; we just associate it with a new set of meanings that are shaped by a set of specific values. Like the “post” in post-colonial, post-Fordist, or post-socialist, the post-industrial palate acknowledges that we can’t explain the existence of HIPPEAS maple-bacon-flavored organic, vegan, non-GMO, chickpea-based snack puffs – which claim to release nitrogen into crop soil, with a shameless brand pun on politically enlightened flower children – unless we consider the social and cultural implications of what these snacks both embrace and critique about the iconic junk food byproduct-plus-enriched-cornmeal “original” they riff on.
The key is that we engage with the memory of past tastes and ideas, reinvented for the present moment. We interact with what we know, or what we think we know, and with how things used to be – remembering the good old days when we enjoyed innocent pleasures that we now look back on both critically and nostalgically from our contemporary, more discerning, and self-aware selves. For example, during a large chunk of our recent history – from at least the 1930s until the 1970s – industrial processing was culturally recognized as a positive transformation; it made foods purer, healthier, and less perishable, in addition to being more scientific and modern. Now, of course, we know better.
Still, awareness and the ethical demands we make of some of the ingredients in some of the items in our shopping cart – often only some of the time – still don’t dampen the strong emotional attachments and senses of belonging that we associate with the “good stuff,” in the most wide-ranging sense of what “good” can possibly mean. High-end festive SPAM in South Korea, Vegemite for babies, or ostentatious gourmet concept milkshakes made from locally sourced artisanal ingredients reinforce that the history of industrial processing was also a history of building and feeling a sense of what particular foods promised in that fuzzy version of the past that seems so much sweeter, more authentic, and more right – now that we aren’t in it.
Processed foods are by no means passé. The catch is that consumers want to believe that these foods are not industrial, and instead have been reinvented in response to new contemporary criticisms of old-school mass production. To artfully operationalize the post-industrial palate, the cliché of a new twist on an old favorite is instructive; the new twist must acknowledge the ways we hope to reject particular versions of the past (i.e. pollution, food additives, one-size-fits-all), while the old favorite must sustain its mystique of nostalgic attachment, whether through form, aesthetic, texture, or micro-production.
Hitting that delicate balance requires a deep awareness of the immediate past and of the change drivers in the industry, but it is also critical to investigate and explore how the people in a particular target market relate to processed foods – what they may critique, embrace, or profess indifference to. We can’t pretend to know what they think; only by discovering it can we gauge the relevant design principles for multisensory bliss. Only then can we effectively tell people that elevated processed foods are good for the planet, their souls, and for working folks everywhere.
As an anthropologist, I saw this firsthand during recent Canada-wide research using co-creative exercises on brand extension, when consumers of diverse backgrounds consistently suggested healthy lifestyle alternatives to, and new cosmopolitan flavors and uses for, classic treats. Many choices focused on reinventing nostalgia for a delicious past by redeeming processed food tastes from their unhealthy and artificial associations.
PHOTO: JEMUEL DATILES AND KAPIL VACHHAR
Newer concerns about health and food pleasures do not invalidate past tastes for processed foods. Making retro foods relevant through contemporary lifestyle concerns can renew food nostalgia. Which elements of the past to hang on to, and how to bring those together for the future, is an open question made answerable and actionable through human-centric design. Beyond signals and change drivers, it may be useful to keep in mind the liberatory claim that industrially processed foods were made to radically democratize how we ate, latching onto our dreams of more access and more time by freeing us from onerous food preparation and humdrum cleanup. The post-industrial utopian agenda is unfolding as we speak; bringing it to more people in better ways can reinvent past food revolutions for the demands of a truly innovative present. Dr. Tania Ahmad is a resident anthropologist at Idea Couture.
THE COMING CONVERGENCE OF NATURAL AND INDUSTRIAL FOOD
BY— DYLAN GORDON
PHOTO: JEMUEL DATILES
THE WORLD OF FOOD TODAY IS IN THE THROES OF A REACTIONARY TURMOIL THAT DATES, MOST RECENTLY—
—TO THE COUNTER-CULTURAL, BACK-TO-THE-LAND MOVEMENTS OF THE 1960S.
Whether the argument at hand is about plastics and pesticides or the processing involved in “fast” or shelf-stable foods, the modern food industry is more and more often painted as the villain. As this perspective has moved from the fringes to the mainstream – where organic, local, and natural foods are rapidly becoming the order of the day – both Big Food and the nimble class of disruptive new food startups nipping at its heels have scrambled to fulfill these shifting appetites. It may seem hard to imagine from today’s vantage point, but many of the things now critiqued about the modern industrial food system – its techno-scientific “manipulation” of the goods of nature; its speed and convenience; its fortified nutrition and hygienic sterility – were once seen as key parts of a joyous plenty delivered by the highest stages of human and industrial development. In the 1950s, for instance, convenience foods and appliances – as well as the marvels of preservation, distribution, and safety offered by advances in packaging and refrigeration – were widely celebrated. Owning, preparing, and eating these technological marvels was downright aspirational. Food in its natural state, meanwhile, meant uncertain availability, inconsistent quality, backbreaking labor, and the threat of disease and decay. Although less evident beneath the vision of natural food that is preeminent today, this excitement and desire for what some might call a modernist approach to food remained latent, and has re-emerged as of late. Consider, for instance, the molecular cuisines made fashionable by haute restaurants like elBulli and Alinea. Here, foods are fractionated and recombined, with the aid of materials and procedures best known to food scientists, into novel forms that capture their essences (or the essences of something else entirely): the tomato spherified into the form of a strawberry; the roasted veal posing as a charcoal briquette.
Everything that is at the symbolic heart of the opposition to the modern food industry – the chemicals, the engineering, the unnatural result – is here recuperated into the engine of food culture’s forefront.
Anthropologists, sociologists, and professionals in media, art, fashion, and design will all tell you that our tastes are part of an endless cycle wherein what is marginal or excluded is co-opted and reimagined, often by elites, into new tastes and products that eventually filter down to the mainstream. Those dirty, long-haired hippies with their macrobiotic diets morphed into California yuppies eating mesclun mix and Earthbound Farm pre-washed greens, now found on every corner. Similarly, the TV dinner and its ilk, which have become objects of critique and even disgust in the decades since the 1950s, have morphed into molecular gastronomy. The Twinkie and the Tater Tot these days have their own high-class iterations that entertain and delight flush diners. What remains to be seen is how this cycle will end and how this revalorization of modern, industrial food will enter the mainstream in the wake of organic, natural, and local food.
Early indications imply that we are poised for a groundbreaking shift in that long-running war in our culture between the value of nature and the value of industry; between demand for whole, natural foods, on the one hand, and for scientifically augmented or processed ones on the other. The last century (and beyond) has been marked by an oscillation between these poles, with one paradigm or another being valorized while the other was disparaged. Recent cultural conversation, however, suggests that these formerly divergent ways of thinking and speaking about food – and, moreover, of preparing and eating it – are poised for a union.
NATURE IS NOT, AS IN THE 1950S, A BACKWARD REALM OF FLAWED DECAY IN NEED OF AUGMENTATION BY SCIENTIFIC MAN. INDUSTRY IS NOT, AS WE HAVE IT TODAY, A CHEMICAL-LADEN DESTROYER OF LIFE AND VITALITY UNDER THE YOKE OF THE MODERN MACHINE.
Consider, as one prime example, the promotional spin of Doug Evans, founder of Juicero. Their product (think Keurig meets cold-press juicer) is less the object of interest here than Evans’ discourse and his way of speaking about his mission and motivations, which bring together concepts that were unimaginable bedfellows in the past. “Organic cold-pressed juice,” Evans says, “is rainwater filtered through the soil and the roots and the stems and the plants.” This is a typical evocation found in the naturalist paradigm of food: one of the earth and all that is good that comes from it, and of the natural rhythms and processes of nature. Like a homespun farmer, or better yet, a forager of wild foods, Evans is selling us the unalloyed goodness of nature. But in the next breath, Evans changes course. “You extract the water molecules, the chlorophyll, the anthocyanin and the flavonoids, and the micronutrients,” he tells us. Suddenly, he’s speaking the language of scientific nutrition, of superfoods and supplements.
In the end, he brings it all together, and in this little story the power of the industrial paradigm is wedded to the goodness of nature. The result of this entire process, he says, is that “You’re getting this living nutrition. It’s like drinking the nectar of the earth.” Modern processing technology, scientific knowledge, the machine, as well as the hand, are harnessed not to manipulate, or refashion, or improve that essence of nature, but to purify and condense it into an elixir of health.
PHOTOS: JEMUEL DATILES
The point is not that there is a new juicer on the market. It is that there is the dawn of a new way of speaking, one that can make sense and one that can inspire. A vocabulary that, instead of counterposing nature to industry and valorizing one at the expense of the other, celebrates the possibilities of their union. Nature is not, as in the 1950s, a backward realm of flawed decay in need of augmentation by scientific man. Industry is not, as we have it today, a chemical-laden destroyer of life and vitality under the yoke of the modern machine. Instead, each works in the service of the other. This is a wholly different system of value that, until very recently, was unimaginable and unspeakable. Dylan Gordon is a resident anthropologist at Idea Couture.
FARMERS OF THE FUTURE—
—THEY’RE NOT WHO YOU THINK THEY ARE. BY— STEPHANIE KAPTEIN
TECHNOLOGICAL INNOVATIONS HAVE LONG PLAYED A ROLE IN THE ADVANCEMENT OF AGRICULTURE.
ARTWORK: GRANT WOOD

The evolution of grain harvesting equipment reduced the amount of time and labor needed to harvest by more than half. Tractors, trucks, and self-propelled machinery powered by the internal combustion engine revolutionized American agriculture by providing a reliable, efficient, and mobile source of power. These agricultural advancements all have one thing in common: They have been, and continue to be, driven by profit. But, while profit provides incentive to invest in technological innovation, it also creates limits – and these limits are becoming increasingly concerning. When the main objective is to increase short-term profit, agricultural advancement is constrained to working within the current production system. According to research published in Ecology and Society, this system is finite. Peak production of the world’s most important crops and livestock products has come and gone. For instance, peak corn came in 1985, peak wheat in 2004, and peak soy in 2009. This trajectory means food production will eventually plateau, and, in some cases, start to decline.
IF YOU WANT INNOVATION— LOOK TO PEOPLE, NOT PROFIT.
Agricultural advances are also limited by the external environment in which they exist. Even the most advanced production innovations remain vulnerable to unpredictable weather conditions. Shifting climates have already affected America through extreme conditions, such as prolonged periods of heat, heavy downpours, floods, and droughts. According to the National Climate Assessment, climate disruptions to agricultural production have increased in the past 40 years and are projected to continue increasing over the next 25. So how do we prevent the plateau of agricultural innovation? The answer is almost too straightforward and overstated these days: The focus needs to shift from profit to people. Put simply, current approaches to technological progress have marginalized the human element in favor of maximizing profit – and this needs to change. The truth is, though, that the agricultural industry is running out of time – and people. Currently, only 2% of the American population is involved in farming. In 2012, according to the USDA Census of Agriculture, one-third of farmers were age 65 or older, and only 6% were younger than 35. So who will be the farmers of the future?
ARTWORK: GRANT WOOD
Most young people are not interested in farming. Lindsey Lusher Shute, executive director of the National Young Farmers Coalition, notes that, as farm families have struggled over the last century under the effects of globalization, farmland speculation, industrialization, and competing land uses, many have discouraged their children from staying in the industry. Between 2007 and 2012, there was nearly a 20% drop in the number of new farmers nationwide. Shute warns that these barriers to new farmers are barriers to national food security, with a global population projected to reach nine billion by 2050.
But there are those who are initiating change, like Caleb Harper, director of the Open Agriculture (OpenAg) initiative at the MIT Media Lab. Instead of using technology to remove the human element from the agricultural industry, he wants to increase it with the use of personal food computers. MIT’s agriculture platform uses aeroponic technology to grow plants in a completely climate-controlled environment. With 30 sensing points per plant, data can be observed over time to discover exactly what each plant needs. It takes a farmer a lifetime of experience to develop the trained eye to determine whether a plant is dying from a nitrogen deficiency or a calcium deficiency, or whether it simply needs more humidity. Harper is making that knowledge instantly accessible by creating a common language. Each plant profile will not only show the plant’s progress and when it’s ready, but will also send alerts if conditions are not optimal for its growth. After carefully listening to the plant, the controlled environment can be altered to achieve the juiciest strawberries or the sweetest basil. Every time the environment is altered, it creates a new climate recipe. It’s an iterative design and exploration process made possible through an open database. Individuals can use a smartphone or tablet to log into the controlled environment from anywhere in the world. They can then select and activate climate recipes created by others, or make changes to create new ones. Because the platform is open source, the data is available for everyone to use in a language everyone understands. MIT OpenAg has used technology to reframe agriculture not only into an open platform that is accessible to practically everyone, but into one they can find renewed interest in as well.
Innovation stems from the intersection of different people, with different ideas and backgrounds, working together. Agriculture isn’t just for farmers; it’s for electrical engineers, environmental engineers, computer scientists, economists, urban planners, and seventh-grade students. If you want innovation, look to people, not profit. Stephanie Kaptein is a senior foresight analyst at Idea Couture.
AT MY ANNUAL PHYSICAL, MY DOCTOR RUNS THROUGH A STANDARD BATTERY OF QUESTIONS— AM I SLEEPING WELL? EXERCISING? EATING RIGHT? I often find the last question difficult to answer. After all, what is the “right” way to eat, and am I really qualified to determine whether or not I meet that standard? Increasingly, many might argue that I am. But if I am now the expert on my own health, what former “experts” are out of a job?
DEFINING “RIGHT”
PHOTO: JEMUEL DATILES

For decades, we, the lay consumers wandering the aisles of the grocery store, have relied on Big Food brands to tell us what, when, how, and why to eat. The claims printed on packaged food labels and proudly touted in television commercials have told us what it means to be a healthy eater, and those of us who self-identify as healthy have – excuse the pun – eaten it up. When nutritional studies – many now notorious for being funded by Big Food itself – told us that fat was bad, we flocked in droves to products labeled “non-fat,” “fat-free,” “low-fat,” or “part of a heart-healthy diet.” When carbs became the enemy, we filled our baskets with “sugar-free,” “low-carb,” “high-protein,” “Atkins-approved” items. And when conversations turned to the idea of “everything in moderation,” we started counting Weight Watchers points on familiar wrappers while embracing the “permissible indulgence” of cookies pre-portioned into 100-calorie packs.
These waves of clear, socially accepted (albeit transient) definitions of “health” not only made it easier for health-conscious consumers to make decisions at the shelf, but also made Big Food’s job that much easier. After all, if I know millions of consumers equate “health” with “fat-free,” I know what to make, how to sell it, and who to sell it to. Marketers relying on cleanly delineated (if not unrepresentative) segmentations knew what claims would resonate with Health-Conscious Heather or Nutritious Nancy and could build health-focused brands accordingly. (See: Lean Cuisine, Healthy Choice, Kashi, Special K, Fiber One, Smart Balance, Activia, and so on.) If this doesn’t feel representative of how you – our health-conscious reader – make choices for your own lifestyle and dietary needs, it’s because it no longer is. Say goodbye to the age of mass-marketed health and hello to the age of personalized, democratized, information-overloaded health.
PHOTO: JEMUEL DATILES
DEMOCRATIZATION OF AUTHORITY

A recent NPR poll found that roughly 75% of Americans ranked their diets as “good,” “very good,” or “excellent.” While rising rates of obesity, diabetes, and heart disease might bring this number into question, it’s not hard to see how so many people view their diets as “healthy.” One may be a vegan who relies on protein-rich soy and grains; another is on the ketogenic diet, which encourages eating lots of fat but limits fruits and vegetables and prohibits grains altogether; a third packs their diet full of lean proteins, raw nuts, and fruits and vegetables rich in natural sugars. And these are just a handful of the arguably “healthy eaters” walking around my office right now. A host of factors have contributed to this new normal. Consumers are exhausted from decades of conflicting claims. (Fat is bad vs. fat is good! Carbs cause weight gain vs. eat more quinoa! An apple a day vs. fruit contains evil sugars! Three square meals vs. snacking is part of a healthy diet!) Social media and its myriad “experts” only amplify this confusion. Consumers also have unprecedented access to information around biology, nutrition, and medicine, and are more empowered than ever by technologies that measure bodily inputs, outputs, and performance. As a result, consumers have taken it upon themselves to define “healthy eating” – and no two define it the same way. Indeed, a recent Nielsen study found that 64% of global consumers say they follow a specialized diet that restricts consumption of certain foods. For some consumers, this is required by food allergies or intolerances, or is part of disease prevention or a treatment protocol. For others, a specialized diet is a carefully curated projection of personal brand and convictions. Regardless of the driver, the “right” diet has been transformed from what you say is right for me to what I say is right for me.

“Right” is no longer an objective definition, but rather a subjective assessment that relies less on reading nutrition labels and more on a comprehensive consideration of how a certain food fits an individual’s lifestyle. Big Food has spent decades building healthy brands by telling consumers that their products adhere to an objective definition of health, as evidenced by the numbers and claims on the package. But with these shifts, the future for “healthy” Big Food starts to look murky.

NOW WHAT?
Consumers are self-clustering into ever-smaller “segments” of healthy eaters: vegetarians vs. paleos vs. nut allergies vs. lactose intolerants vs. celiacs vs. diabetics vs. pre-diabetics vs. weight losers vs. fitness buffs vs. the infinite combinations of these and other considerations. Many may already say that their personal brand of “healthy eating” has an N of 1, and more will say so moving forward. As such, Big Food brands clutching a singular health promise will find their once-mass product lines chasing increasingly niche markets. The question for Big Food now becomes less about which functional claim companies should print on the package, and more about giving consumers the information, optionality, and judgment-free space they need to define their own healthy diets. Consumers will choose the brands that – dare I say it – leave those prescriptive functional claims out of the conversation altogether and focus on lifestyle fit instead. By way of example, consider the following two brands.

BAI —
Sure, the popular beverage brand (150% YOY growth since inception) has a “healthy” attribute that ties its product line together (read: antioxidants), but it takes some digging on its website to find the functional RTBs you might have formerly expected to find front and center. And though its signature product contains only five calories, you’ll never see “diet” or “low calorie” on its labels. Instead, Bai focuses the conversation on flavor, variety, and a brand promise to “never conform.” Does a consumer need to adhere to a specific antioxidant-rich or low-calorie diet to feel Bai is for them? Of course not, because Bai isn’t built around a singular, prescriptive promise of health. Instead, it’s built for (the much broader audience of) people who want a “better” alternative in great-tasting drinks. Whether consumers define “better” as rich in antioxidants, lower in calories, lower in sugar, or free from artificial sweeteners is entirely up to them.

KIND —
“Health” is what it’s all about at KIND. How does the billion-dollar brand define “health”? I read the entire company website and couldn’t find the answer to this question. But I know what the brand stands for: treating people (including ourselves) with kindness, transparency (in communications, in packaging), and great taste. The food itself can be part of a high-fat, high-protein, whole-ingredient, and/or clean-label diet; the brand, on the other hand, transcends these attributes. A kind and honest lifestyle – who wouldn’t aspire to that?
Consumers today and moving forward are not asking “Does this brand define health the same way I do?” but rather “Does the brand stand for something? Does it trust me to decide when, how, and why I eat for myself? Does it speak to who I am as a person, rather than to the quantities of particular macronutrients I choose to put in my body?” The future of Big Food will belong to those who stop selling functionally beneficial products and start building emotionally resonant brands. Michelle Jacobs is AVP, innovation strategy and head of IC New York.
PHOTO: JEMUEL DATILES AND KAPIL VACHHAR
THE FUTURE OF MEAL PREPARATION—
WHO'S TAKING THE LEAD: TECH OR FOOD?
By Gareth Hockley and Mathew Lincez
You've probably read in Business Insider or heard from someone on your technology team that the Internet of Things (IoT) will revolutionize the world. You probably bought a Nest Learning Thermostat a few years ago and thought, "Wow, heating my house remotely is really convenient!" You probably even have a few other smart devices in your home, like a TV or smoke detector. If you've really got some bucks to spare, you might also have a smart fridge that looks cool and tells you what you need to replenish without your even pulling out your smartphone. In your business, your tech and supply chain teams are probably testing smart sensors and working through a bunch of pilot programs to test their value. That's the extent of your IoT experience so far. On the consumer side, you've got a bunch of slick-looking devices that connect to "remote control" smartphones, and on the business side you have a proliferation of sensors helping to monitor and track inventory. Stating the obvious: first-generation technologies are overhyped and do not fulfill their promised potential. It's precisely why the adoption gap before mass adoption is called "the chasm." Crossing the chasm requires a convergence of human insights, technology, new and unconventional partnerships, new business models, and affordability, where exponential value is created for both consumers and businesses.
CONNECTED CULINARY IS HERE
Original equipment manufacturers (OEMs) and non-traditional food players are teaming up and taking the first step into "connected culinary." Both Samsung and LG have announced strategic partnerships with the likes of GrubHub and Amazon for first-generation frictionless ordering and inventory management.
The proliferation of cheap sensors has spurred Amazon to step in with a modular solution that helps consumers retrofit existing technology. Enter the Amazon Dash: a $5 dongle that enables frictionless ordering of household goods. Eating prepared and fresh meals has never been easier or more affordable. Then there are other players entering the connected culinary space with no dependence on hardware whatsoever. Service offerings like UberEATS remove all friction from the cooking experience by bringing healthy and affordable meals to the mobile-first generation. It's not a leap to imagine a future where more and more meals are prepared entirely offsite and the kitchen is used only for special occasions.
IBM's Chef Watson is the first real step toward bringing intelligence to the world of connected culinary. Chef Watson assists the at-home cook in making better recipe decisions by matching ingredients in the fridge with recipes from its Bon Appétit database. The beauty of Chef Watson (and any other solution hinged on machine learning) is that it gets better the more data it has. It's easy to imagine a future where your medical and social data are connected to Chef Watson so that all your meals are customized to your unique nutritional needs and social settings. It's not only the major OEMs that are entering the connected culinary space. Niche products like Juicero (a connected cold-press juice system that promises to revolutionize nutrition with frictionless ordering and nutritional tracking) and Pantelligent (a connected frying pan that helps consumers cook the perfect steak or salmon) are also emerging at a rapid pace. From the examples above, it's easy to see that the current connected culinary experience lacks cohesion. This is typical of early iterations, however, and you can bet it won't last much longer. In fact, Apple has already attempted to bring cohesion to this space with HomeKit, and Samsung has an open-source operating system for the IoT in the works.
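The core move behind a Chef Watson-style assistant – matching what is already in the fridge against a recipe database – can be sketched in a few lines. Everything below (the `match_recipes` function, the recipe names, the scoring) is a hypothetical illustration, not IBM's actual system:

```python
# Illustrative sketch only: rank recipes by how much of each recipe's
# ingredient list is already on hand. Names and scoring are invented;
# a real assistant would use far richer data and learned models.

def match_recipes(fridge, recipes):
    """Rank recipe names by the fraction of their ingredients on hand."""
    on_hand = {item.lower() for item in fridge}
    ranked = []
    for name, ingredients in recipes.items():
        needed = {i.lower() for i in ingredients}
        coverage = len(needed & on_hand) / len(needed)  # 0.0 to 1.0
        ranked.append((coverage, name))
    return [name for coverage, name in sorted(ranked, reverse=True)]

recipes = {
    "tomato omelette": ["eggs", "tomato", "butter"],
    "pancakes": ["flour", "eggs", "milk", "butter"],
    "salsa": ["tomato", "onion", "lime"],
}
fridge = ["eggs", "butter", "tomato"]
print(match_recipes(fridge, recipes))  # "tomato omelette" ranks first
```

A learning system improves on this by updating its rankings as it observes what people actually cook, which is why solutions hinged on machine learning get better with more data.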
WHAT CONNECTED CULINARY COULD MEAN FOR BIG FOOD
Clearly, Big Food is a laggard in the connected culinary space. This makes sense, given that technology isn't part of the Big Food core competency. Nevertheless, this will need to change if Big Food wants to stay relevant and at the top of grocery lists.
Here are a few possible scenarios for Big Food to consider:
TECH PUSHES OUT FOOD—
Tech has a first-mover advantage in terms of user experience design, as well as a monopoly on the ecosystem and key enablers (like AI), which creates a barrier to entry into the IoT for more manufacturing-focused industries, like Big Food. In this scenario, traditional channels are eroded, and Big Food will be beholden to the tech ecosystem in order to market and provision its products to consumers. "Foodpreneurs" who work at the intersection of food and tech will be the ones who partner with tech companies and reap the rewards.
BIG FOOD LEADS CONNECTED CULINARY—
In this scenario, Big Food makes an early investment in connected culinary. This early investment gives Big Food first access to data analytics and frictionless ordering systems – a valuable first-mover advantage. Big Food will, in turn, have a leg up and will be able to transform the back-end infrastructure to support the connected culinary experience for frictionless ordering before it hits the mainstream. It will also give marketing teams access to data that will help inform their messaging in traditional channels and their product development process, cutting costs and decreasing lead times so that Big Food can move with and lead markets instead of being reactionary.
HOW BIG FOOD SHOULD RESPOND
To win in the world of connected culinary, food companies need to understand the end-to-end value chain in order to know where and how they can play and which strategy (build, buy, or partner) is best to bring the opportunity to life.
The connected culinary value chain includes a front-end user experience layer, a middleware insights layer, and a back-end infrastructure layer. To be successful, food companies must become human-centric, future-oriented, technology-fueled, and design-centric. To make this transition, food companies will need a transformation that addresses the following:
— A future-oriented understanding of existing and potential changes that connected culinary could enable in market conditions, and how those could threaten Big Food's current position within the market.
— The opportunities, including products, services, and business models, that connected culinary could enable for Big Food now and in the future.
— The design principles required to deliver a human-centric connected culinary experience.
— The data that could be made available from a connected culinary ecosystem, and how to leverage that data to drive value and reduce costs.
— Privacy and security protocols to ensure consumer trust is central to the connected culinary experience.
— Investments in both the systems and people required to support a connected culinary strategy.
— An understanding of when to build, buy, or partner in order to bring a connected culinary experience to life.
Addressing these points now, using both foresight and strategy, is a small investment that could provide a big payoff downstream. After all, you shouldn't be letting tech lead in a space that food knows best. Gareth Hockley is a senior foresight strategist at Idea Couture. Mathew Lincez is VP, futures at Idea Couture.
Our brains are wired to seek out sensory experiences, and food taps into all of them. Food also inspires us to share our experiences with others, triggering pleasure sensors that keep us coming back. In a world increasingly void of genuine, authentic human connection, food remains the one source we can turn to for a plate full of comfort, nostalgia, and sustenance.
We can see this evolution unfolding, from the diet fads of the past to the personalized nutrition plans of the future. As time goes on, the human experience around food will undoubtedly continue to progress, and so too will consumer expectations. An increasingly digitized food experience is something we cannot overlook as we move forward. Our future food systems must not only strive to sustain the planet and reverse the effects of past business models, but also work to satisfy the human need for an experiential connection to what we eat.
Food is at the heart of who we are. Food can connect us. It tells a story of our culture, our history, our values, and our societal norms. As we head into the fourth agricultural revolution and continue to explore the digitization of food systems and culinary experiences, it would be no surprise if the next technological upheaval is not in cyberspace, but, instead, is on our plates.
editorial excellence
award-winning insight, foresight, and design
2016 FOLIO: EDDIE AND OZZIE AWARDS
EDITORIAL TEAM OF THE YEAR (WINNER)
B-TO-B – FEATURE DESIGN UNDER 100,000 CIRCULATION FOR THE FUTURE ACCORDING TO WOMEN (WINNER)
We are honored to receive the 2016 Folio: Eddie and Ozzie Award for Editorial Team of the Year. MISC continues to strive for excellence in journalism and design, pushing formal boundaries and providing a platform for innovation’s most thoughtful and provocative voices. In addition to being named Editorial Team of the Year, MISC has been recognized for the high-quality content and design of our features, The Future According to Women and The Collapse of the American Family Ideal. The MISC editorial team would like to send a heartfelt thank you to our contributors and readers. It’s you that we serve in our continuous pursuit of editorial excellence.
HONORABLE MENTIONS
B-TO-B – SINGLE ARTICLE IN DESIGN/ADVERTISING/MARKETING FOR THE COLLAPSE OF THE AMERICAN FAMILY IDEAL
B-TO-B – SINGLE ARTICLE FOR THE FUTURE ACCORDING TO WOMEN
We Don’t Use Paper Anymore So Why Do We Design for It?
By Jared Gordon and Edward Merchant
Information printed on paper has certain benefits. Like this magazine, it carries weight and importance derived from the effort expended to create it. Moreover, as printed materials become less and less common, anything printed will be treated with extra importance. The original web was inspired by libraries, magazines, and newspapers, whose homes on the internet are called webpages. The common sizes and formats of these pages have only recently been challenged by the different formats of screens on laptops, cell phones, iPads, Kindles, and the like. While these formats differ from traditional paper, pitfalls remain, chief among them the legacy thinking that informs our approach to creating websites. For example, website content – or data – is flat and static as it appears on a screen, constrained by our habits of creating content for print, a medium in decline. Did you ever stop to consider why a car is the width it is? It is because of the width of a horse-drawn carriage, which was governed by the strength of a wooden axle. Steel is much stronger than wood, and yet our cars remain the same width. Is our data visualization bound to suffer the same fate?
Although car width might have originally been driven by our familiarity with horse-drawn wagons and limitations on axle strength, infrastructure was subsequently built that made assumptions about the width of future vehicles. As a result, even though wider cars might have become technically possible, they would not be practical, given the widths of existing roadways and parking spaces. In engineering circles, this need to accommodate limitations imposed by previous design decisions is referred to as "backward compatibility." Similarly, information infrastructures (including webpages) are currently built to generate and accommodate data representations that are typically limited to two or three dimensions. This is to ensure backward compatibility with existing data-handling tools. Much like a traffic reporter in a helicopter has a 360-degree perspective that table-based, road-by-road congestion reports do not, alternative methods of visualization can provide people with a way to escape the bounds of existing information infrastructures and gain access to a richer perspective. Ironically, even the traffic maps we currently get on phones, navigation systems, and TVs are victims of legacy information infrastructures and filter out information that is visible only to that reporter in the sky. Consequently, those
of us not in the helicopter are required to blindly trust that reporter's interpretation of a situation and what they feel will be useful to us. Those who can provide visualizations that offer a deeper understanding of and connection with what is being presented – making consumers feel confident about conclusions and recommendations – will be the ones who win the battle for loyalty and trust. More importantly, they will be the ones who can win over new advocates whose loyalty was based primarily on familiarity (such as a well-known brand) rather than a true understanding of value. Data and its representations are currently firmly in their realist period. Realism was initially a school of art governed by an honest depiction of everyday life, with little left out. But, perhaps other than Whistler, can you name a famous realist painter? Compare that to Impressionism, which gave us Monet, Manet, Renoir, and so on. These artists' hallmarks were unusual visual angles and the inclusion of movement as a crucial element of human perception and experience. The visualization of data is ready for its own Impressionist moment. Consider the humble data table. The first written example of the table dates back to the second century. In the ensuing 1,800 years, the only major innovations have been the bar graph and the line graph. Both are static forms of data presentation and place the burden on the reader to derive insight from the data. Yet today, we are at a point in technology where the visualization itself can actually demonstrate the insight. Think of the line graph. In itself, it is an abstraction of the table. Each point on the line is rich in context that the graph simply misses. What were the drivers of the result? What kind of momentum propelled the drivers at a specific moment in time? Current means of visualizing data depict information without interpretation or context, relying on the viewer to apply a subjective lens. As consumers' expectations develop and evolve, and in light of new endpoints and delivery methods, there is an opportunity for leaders in the space to rethink their approaches to the communication of information. As we usher in the new era of data impressionism, the winners will leverage change to create better relationships with consumers of data by showing them more and asking less of them. //// Jared Gordon is head of financial services innovation at Idea Couture. Edward Merchant is senior digital partner, banking & financial services at Cognizant Technology Solutions.
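As a toy illustration of this "data impressionism," a representation can compute its own reading of each point – direction and momentum – rather than leaving that work to the viewer. Everything here (the `annotate` function, the labels, the sample series) is invented for illustration:

```python
# Toy sketch: annotate each point of a series with the "insight" a bare
# line graph leaves to the reader: the direction of the move and whether
# it is accelerating or slowing. Labels and sample data are illustrative.

def annotate(series):
    """Return (value, direction, momentum) for each point after the first."""
    notes = []
    for i in range(1, len(series)):
        delta = series[i] - series[i - 1]
        direction = "up" if delta > 0 else "down" if delta < 0 else "flat"
        if i >= 2:
            prev_delta = series[i - 1] - series[i - 2]
            momentum = "accelerating" if abs(delta) > abs(prev_delta) else "slowing"
        else:
            momentum = "n/a"  # no earlier move to compare against
        notes.append((series[i], direction, momentum))
    return notes

quarterly_sales = [100, 104, 112, 110]
for point in annotate(quarterly_sales):
    print(point)
```

Even this crude rule turns a silent row of numbers into a statement ("up and accelerating," "down but slowing") – the representation demonstrates an insight instead of merely depicting data.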
Through the Looking Glass: The Future Is Older (and Happier)
By Charles Andrew
Any article of this sort could be filled with detailed statistics about different age groups and demographics, like how lifespans are predicted to increase, how the ratio of old to young will alter radically in the coming years, how the proportion of the global population over 60 will double within a single generation, and so on. But all of these statistics and predictions support the same story: We're living longer and having fewer children, and these changes will have implications not just for each of us, but for society as well. Longevity is a good thing. But let's be pessimistic for a moment: In a consumer-driven society, where the ability to shape your life is significantly connected to your economic choices and how you spend your money, a longer lifespan can mean a less-than-ideal financial situation. The doubling of the average post-retirement lifespan, when funded by the same fixed post-retirement asset base, effectively halves an individual's annual personal income. In the UK, when the state pension was introduced in 1908, the average pensioner had 9 years left to live. That figure has since doubled, and the line on the graph is climbing still. At the same time, the financial burden of healthcare peaks during this period of later life, resulting in a perfect storm of maximized annual cost and minimized annual income. Things will have to change. Understanding these emerging trends will give organizations and countries the opportunity to anticipate and adapt to the significant change in consumer priorities that can be expected as the population continues to age. There has been evident growth in the demand for better health and care options, but meeting this demand will require the development of innovative and non-traditional solutions. This is the only way that demand can be matched by the resources required to supply it. The labor market will also be affected by the demographic shift to come. The US Bureau of Labor Statistics' forecasts for 2014 to 2024 predict that 7 of the 10 highest growth rates will occur in age-related, health-support roles, such as physical therapy assistants, home-health aides, and occupational therapists. But is the challenge and opportunity of an aging population all about health?
Let's ask a stupid question and see where it leads: Why do we want to be healthy? The answer tends to be threefold: We want to live longer, to avoid pain and discomfort, and to do the things we want and need to do. The last of the three points is very important. If we can't do the things that make life special, the notion of keeping the clock ticking diminishes in appeal. Intrinsic health has a huge influence on our ability to lead high-functioning lives, both mentally and physically, but it is only part of the story. Health is complex and expensive to maintain, and it is often very difficult to fix; however, the innovative products and services that expand our functional capacity represent a significant opportunity for life improvement. Already, our lives are enhanced by technologies that allow us to perform tasks better than we would otherwise be able to. This technology includes things as simple as contact lenses, which help us see better; as large as transport systems, which allow us to move farther, faster; and as connective as social media, which allow us to communicate efficiently with others. When looked at in this way, our entire environment can be seen as a sort of sci-fi "exoskeleton," one which extends our own intrinsic capabilities manifold. Intrinsic health may remain the domain of "real" health experts and those organizations with large specialist research teams. However, expanding functional ability beyond intrinsic capacity is an opportunity for almost any sort of company or entrepreneur. Restoring clarity of vision can require an expensive operation. Or, alternatively, a $10 pair of glasses. Or the ability of a Kindle to provide the type size you want. The point is that when you start with a consideration of the human need, rather than the medical one, the solutions may not be surgical or pharmacological – or prohibitively expensive. How else will innovation help us to narrow the gap between our limited, age-related intrinsic ability and the much greater functional capacity that helps us live life as we would wish it? The physical realities of aging and the financial truths of major healthcare mean that there may come a time in life where we fall beneath a threshold at which quality of life becomes seriously impaired. Researchers Kalache and Kickbusch call this the disability threshold. In an ideal world, the medical world will prevent such a happening, but in the real world, we need to look for additional ways to maintain functional capacity and defer or avoid hitting the disability threshold. The researchers' chart illustrates the different paths that life can follow; the lower line reflects only our intrinsic abilities, which decline over time, and the upper line reflects our true functional capacity, a result of that "exoskeleton" of products and services that builds quality of life in the face of underlying decline.
The message for companies, entrepreneurs, and policy makers is this: Irrespective of cost, there will be a rapidly growing market for solutions that address quality of life and functional capacity, far beyond what healthcare can provide.
[Chart: Maintaining Functional Capacity Over the Life Course – functional capacity (with its range across individuals) declines from early life (growth and development) through adult life (maintaining the highest possible level of function) to older age (maintaining independence and preventing disability; rehabilitation and ensuring quality of life), crossing a disability threshold.]
Partly in answer to this question, the World Health Organization (WHO) took a look at where the older generation will largely be living – in cities. Using the concept of promoting "active aging" to examine the factors that enhance or reduce quality of life in urban areas, they identified five areas for innovation (beyond the obvious inclusion of health services): infrastructure, transport, housing, social participation, and communication. In each of the five areas, there is a plethora of opportunities for innovation in both the commercial and state sectors. The WHO, understandably, takes quite a functional and practical perspective. Others highlight the growing opportunities for bringing pleasure and purpose to the growing number of "older years." A UK government paper reported the interest in learning and education among older people, but noted that they were less interested in obtaining qualifications or vocational learning, and more interested in humanities and social sciences. Interest in languages developed at around age 60. At the last Consumer Electronics Show (CES) in Las Vegas, the growing presence of home assistants and robots was a major trend. These have huge potential to enhance functionality, pleasure, and purpose. Companies such as Intuition Robotics, with their ElliQ robot, are shaping their design around a deeper understanding of their older target audience.
Aging baby boomers may be the passive recipients of the benefits of such innovation, or they may drive innovation themselves. Researchers from the Max Planck Institutes and the University of Washington, among others, have projected a future based on the combination of aging, skills, and wealth found in Germany, which is second only to Japan in terms of its aging population. These researchers concluded that, in contrast to the idea that the demographic shift will be a net drag on the economy, the growing aging population could actually be a competitive advantage. How so? It will be the result of a number of factors. Increased longevity and improved health will lead to an overall increase in economically productive years, and members of the aging population are expected to focus their spending on things that help them cope with the impacts of aging. This increased spending on such products and services will work to drive innovation. Additionally, while retirement was once about managing an individual’s few remaining (and diminishing) years, the aging baby boomers will comprise a large cohort of highly capable people with the desire, ability, and time to remain economically active. One signal of this change can be seen in a Financial Times survey in the UK, which showed that entrepreneurship rates for people over 50 now exceed the rates for 18 to 29 year olds. Barring any radical mishap, our aging population will significantly shape the future. A much larger proportion of society will live into old age, and innovation will be required in social systems, in healthcare, and in functional-capacity enhancement to ensure that those who do live are not just old, but old and happy. //// Charles Andrew is senior client partner at Idea Couture.
The People vs. The Machine: Making the Right Decision
By Jaraad Mootee
In late December 2016, Facebook's Safety Check spread false reports of a bombing in Bangkok, Thailand. Safety Check, which allows users to mark themselves as safe when acts of terrorism or natural disasters occur, was activated after a minor incident involving firecrackers at a government building in Bangkok. However, the algorithm had picked up re-archived posts from small news sites covering the 2015 Bangkok bombing. Users and news outlets rushed to confirm the bombing, but to no avail. The problem was resolved within a few hours, and aside from the mildly embarrassing well-wishing messages sent to friends, no harm was done. However, this incident raises questions about our dependence on similar technology. Can we really rely on computers not to make mistakes? Are they watching over us, or do we need to be watching over them? In the mid-1980s, when Mark Zuckerberg was a toddler, a radiation-therapy machine called the Therac-25 was in clinical use, delivering both electron-beam and X-ray therapy. A highly improbable combination of engineering errors caused the machine to malfunction: under certain conditions it could switch into its high-powered electron-beam mode without the proper safeguards in place, hitting the patient with over 100 times the intended dose of radiation. Patients experienced severe burns and radiation poisoning that, in several cases, proved fatal. The six accidents were primarily attributed to bad process design, overconfidence, and a lack of due diligence in testing and reporting. The mistakes were rolled into case studies that informed many of the general techniques used in reliability modeling and risk management today.
photo: idris mootee
A machine can now be an independent thinker. But the Therac-25 incidents were over 30 years ago. It's now getting harder to draw the line between humans and machines, with the latter always getting smarter through machine learning and deep learning. Today, machines are beginning to mimic our brains through simulated neuron firing. Through developments in machine learning, they are gaining the ability to act on their own without set instructions. Soon, their abilities will be differentiated from human instinct only by their sheer processing power, capacity to store vast historical data, and calculated efficiency. But decision making means accountability. Is it possible to define accountability when it comes to machines? If humans make poor decisions or mistakes, we may be reprimanded, receive less billable work, get fired, or even face a malpractice suit. Such consequences are less applicable to machines. Can you imagine giving a machine less work to process as punishment for a mistake? The idea of reprimanding a machine is laughable, and giving a machine less opportunity to get work done is just called underutilization. Although there are some parallels between firing an employee and trashing a printer that refuses to work, the analogy doesn't translate if the printer is a piece of $1.7 million USD artificial intelligence (AI). Essentially, errors and mistakes are inevitable. In the future, good design practice will be about defining which problems only humans can solve and which problems only computers can solve. The point here is to find the high-impact problems that require both human context and computer processing to solve – especially since we're facing a future with driverless cars. Last year, the first major lethal accident involving Tesla's autopilot system occurred when a car on autopilot drove into the side of a tractor-trailer crossing the highway.
It's suspected that the white paint of the trailer, blending into a brightly lit white sky, combined with the trailer's height, led the autopilot's camera to read it as an overhead highway sign and conclude the way ahead was clear. The computer was simply operating as instructed, according to what it saw. Humans often do the same thing, making decisions based on the information available. This occurrence raises some important questions. At what point should the autopilot wake the driver and hand back control? How does the car know it's made a mistake? Most importantly, who is the most suitable party to make the best decision under the circumstances?
Can machines manage uncertainty? Good process design comes from weighted decision making based on risk, reliability, and impact. If the autopilot program is not 80% certain that the road ahead is clear, it should warn the driver – especially considering the level of risk and impact. Perhaps it should look at the driver's driving history to gauge their reliability and measure it against the computer's. If this is done, what percentage value should be considered safe? An AI system must be able to analyze its previous actions in order to improve its scores and learn from its past mistakes. Humans have different methods and styles of learning; some may ignore smaller mistakes in favor of making huge learning jumps from revelations, while some may improve themselves incrementally, based on each minor experience. What would be the learning style
of a machine, and how much time would it need to prepare itself for the unexpected? There is still a lot that is unknown about how humans deal with uncertainty. When humans design machines to run robotics in factories, or even to perform domestic work, we need to know how well these machines can handle the unexpected. Will they continue to make the same mistakes until humans intervene? This problem is not easy to solve, but our future relationship with technology will be best served by deciding who is the boss and who is better at doing which task. For example, Fukoku Mutual Life – an insurance company – has begun replacing staff with an AI system designed to calculate insurance payoffs. Policyholders will have their medical certificates, past history, and data regarding surgeries and hospital stays used to calculate risk and payoff. This trend toward increased adoption of artificial intelligence in decision making is inevitable. The combination of cognitive computing, machine reasoning, machine learning, and digital mapping is bringing about a full-blown industrial revolution. But can we marry human and machine to create a working partnership? Effective collaboration between humans and machines will combine the diverse set of experiences, expertise, and instincts of humans with the vast computing power of machines. Together, these forces will accelerate innovation and execution to unprecedented levels. However, failure to combine the human and the machine will result in conflict and a rejection of partnership. This conflict could result in a loss of face, demotion, privacy invasion, or humans simply feeling that they are unsafe. The biggest concern is always machines' inability to empathize. Machines can access extremely large data banks, but when it comes to empathy, it's difficult for a machine to understand "the human element." To do so, the machine must be capable of combining logic, cognition, emotion, and empathy – especially when it comes to foresight.
Foresight can be instinctive and emotional. It comes from stories born of creativity and imagination – it represents everything machines are not capable of doing. Foresight is often part intuition, part ambition, and part craziness, all attributes that machines cannot comprehend. For example, a machine will not tell you that you need to innovate by throwing out legacy material and creating something novel. At the same time, you can’t argue with it, because you cannot beat the irrefutable logic of a computer. Imagine arguing with your boss about a new intrapreneurial product venture that you think is generating a lot of industry interest. If the idea is too new, the computer may find no data to support your claim, thereby creating more risk for your venture; on the other hand, it could find ten other people who have already had the same idea and failed to get it off the ground. An effective partnership between the human and the machine will require both parties to work around conflict and make use of the skills that each brings to the table. This partnership (or conflict) will depend on the ability of humans to understand a machine’s capabilities, as well as the introduction of our human foresight capabilities – which are based on empathy and instinct – into the equation. We can call this algorithm the “humarithm.” It is about defining structural reasoning logic that includes human values, ethics, and emotions as part of the collaborative decisions made by humans and machines. I guess we are just waiting to see the first AP – Artificial Philosopher – instead of AI. //// Jaraad Mootee is a technology trends analyst at Idea Couture.
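The “humarithm” is a speculative concept rather than a defined algorithm, but the idea of folding human values into a machine’s decision logic can be sketched in code. The following is a minimal toy illustration only; the function name, the value categories, and the weights are all hypothetical inventions for this sketch, not anything proposed by the author.

```python
# A toy illustration of the "humarithm" idea: a decision rule that blends
# a machine-computed score with explicitly weighted human-value scores.
# All names, factors, and weights here are hypothetical.

def humarithm_decision(machine_score, human_factors, weights, threshold=0.5):
    """Blend a machine score (0-1) with human-value factors (each 0-1).

    human_factors: dict mapping a value name (e.g. "empathy") to a 0-1 score.
    weights: dict with the same keys plus "machine"; weights should sum to 1.
    Returns (blended_score, approve_flag).
    """
    total = weights["machine"] * machine_score
    for name, score in human_factors.items():
        total += weights[name] * score
    return total, total >= threshold

# Example: the machine finds little supporting data for a novel venture
# (low machine_score), but human judgment weighs empathy and ethical fit
# alongside it, so the blended decision can still clear the threshold.
score, approve = humarithm_decision(
    machine_score=0.3,
    human_factors={"empathy": 0.9, "ethics": 0.8},
    weights={"machine": 0.5, "empathy": 0.25, "ethics": 0.25},
)
```

In this sketch the human-value weights are set by people, not learned, which is the point: the machine’s output is one input among several rather than the whole decision.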
From Turnspit Dogs to Robots: The Future is Domesticated
By Chloé Roubert and Lena Rubisova
From the 16th century to the mid-19th century, the kitchens of the British upper class were equipped with a vernepator cur, Latin for “the dog that turns the wheel.” Also classified as the canis vertigus (“dizzy dog”) by zoologist Carl Linnaeus, and more colloquially known as the turnspit dog, the vernepator cur was a short-legged, low-bodied dog capable of running in a round, hamster-like wheel for hours, ensuring that the roast over the kitchen’s open fire was perfectly prepared. In the 1800s, the dog was slowly replaced by the mechanical spit rotator, leading to its eventual disappearance. Domestication is often analyzed from an anthropocentric perspective. Humans adopt an organic being, plant or animal, manipulate it to play a particular role, and then transform it from a self-sufficient life form into one that is dependent on human taste, activity, infrastructure, and intervention to survive. It is also notable that without the protein, protection, heat, skin, labor, or affection of these domesticated beings, our species would have never become what it is today. As we have gone from nomadic to settled, rural to urban, and actual to virtual, animals have followed, enabled, and inspired our evolution. In a sense, it is the animals of today inspiring the robots of tomorrow, and it is these new, “domesticated” devices that will impact our human future. The Latin root of “domestication,” domesticus, means “belonging to the house.” The turnspit dog belonged to the house, as did the mechanical spit that replaced it. Similarly, the Nest Learning Thermostat, the Roomba, and Amazon Alexa are just as much part of the household today as the family corgi or pug; they just fulfill different needs. To understand how the domestication of different species may evolve in our near future, we need only look back at how our relationships with plants, animals, and even robots have evolved over time.
Dogs
Domesticated 14,000–31,000 Years Ago
It is unknown whether we first brought the friendliest wolves of the pack into our nomadic tribes, or if certain wolves started following a group of carcass-discarding humans. Regardless, the partnership worked: Wolves helped our ancestors hunt more efficiently and acted as a forceful means of protection against rival tribes, while humans offered similar protection and a food supply for their canine companions. Archaeologist and geneticist Greger Larson argues that without this partnership, there would “probably [be] a couple of million of us on the planet” today, as well as no dogs. Dogs also helped us understand our relationship with the natural world in a new, epistemological way: We learned that we could change the life cycles of the world around us to our benefit.

Figs
Domesticated 11,400 Years Ago
An entire millennium before domesticating two types of cereals, humans gathered to intentionally plant branches of parthenocarpic (seedless) figs. Because these figs were easy to dry and store, they made for an ideal source of nourishment on long hunts. This encouraged otherwise nomadic hunting parties to return to their fig trees during lengthy intervals between successful hunts. So, it is the unassuming fig that stopped our wandering and settled us down.
Sheep
Domesticated 11,000 Years Ago
Sheep and goats were the first livestock species to be domesticated. Initially kept and bred for their meat and skins, sheep showed humans that taking care of animals would provide a constant source of food and warmth, paving the way for the breeding of cattle and other livestock later on. It took humans a few thousand years to learn to spin wool, which drastically altered the way we dressed and enabled humans’ gradual move further north into the parts of Europe and northern Asia they inhabit today.
Grains
Domesticated 10,000 Years Ago
The domestication of edible plants with larger seeds and a stronger outer sheath – specifically wheat and barley in the Fertile Crescent and rice in East Asia – marked the start of the Neolithic or Agricultural Revolution, which radically altered the nature of early human societies. The emergence of agriculture meant the true beginning of human settlements, especially densely populated ones. This led to massive ideological and practical shifts; while certain humans worked in the fields, their labor allowed others to study the stars, invent the wheel, and discover bronze.
Horses
Domesticated 6,000 Years Ago
Current theories hypothesize that horses were domesticated independently by multiple tribes at around the same time. Remains of horses’ teeth from 3000 BCE found in Europe and Asia show signs of wear, indicating that horses were being ridden at that time. Not only were horses a source of food in the harsh steppe climate, but according to Alessandro Achilli, a geneticist at the University of Pavia in Italy, these large and powerful animals allowed our ancestors to travel faster than ever before, creating opportunities for communication and expansion into new environments.
Penjing Trees
Cultivated 3,000 Years Ago
Bringing plants indoors enabled the wealthy to maintain their horticultural knowledge in less hospitable weather. The cultivation of penjing trees shows that, by this point, humans had in part removed plants from the realm of necessity and brought them to an indoor place – the home – where they played a purely emotional and aesthetic role.
Tamagotchi
Launched 21 Years Ago
Tamagotchi, which translates as “cute little egg” in Japanese, was launched in North America in May 1997 and was promptly named “toy of the year” by the then-chief executive of Toys “R” Us, Michael Goldstein. More than 79 million Tamagotchis were produced and sold worldwide. Rather than face the responsibility that comes with taking care of a real animal, as both our near and distant ancestors had done, the Tamagotchi allowed late-20th-century humans to hone their nurturing skills, knowing that if they forgot to press the “feed” button, they could simply restart the device in the morning.
Paro the Robot
Built 12 Years Ago
Specifically designed to be a companion for senior citizens suffering from dementia or traumatic experiences, Paro looks like a cute baby seal and is programmed to remember the affectionate habits of its owner while not responding to negative reactions like throwing or yelling. Unlike an animal that has its own needs and personality, Paro mimics the endearing aspects of a friendly pet without judgment, expectations, or expenses. Studies have shown that the robotic Paro has therapeutic effects on senior patients with dementia similar to those of a live animal, namely inducing calmness and laughter, and inviting patients to talk to and interact with it.
Nest
Released 6 Years Ago
Nest is a responsive, remotely controllable smart-home device that monitors a household’s energy consumption, manages its climate, and ensures its protection. It learns the inhabitants’ behaviors and preferences through sensors and data accumulation and then adapts to them. Nest is autonomous but controllable, adaptable but functional – a “smartness” or “consciousness” that echoes a number of the roles played by animals in homes.
Woolly Mammoth
To Be Genetically Resurrected in 2–5 Years
The invention of the highly precise gene-editing tool CRISPR is allowing scientists to combine preserved woolly mammoth DNA with that of its closest living relative, the Asian elephant, to bring back the large mammal that last roamed the Earth 4,500 years ago. The goal is to help save the declining Asian elephant population by enabling it to live in colder climates, and to fight global climate change. Never before have humans had the ability to modify the world around them so precisely, so quickly, or with such extensive ramifications. Genetically modifying animals is nothing new, as turning wolves into various dog breeds shows, but the scope of such modification, when expanded to our external environment, means the implications extend far beyond the standard definition of the home.
The Future of Domestication
In the millennia between Homo erectus and Homo sapiens, human beings have learned to bring other living beings into the home and alter their evolutionary trajectory to fill gaps. With every new crop cultivated and every animal born with a shorter snout and floppier ears, human beings changed too. During the last few centuries, our domestication process has also evolved. Humans no longer create relationships with creatures to fulfill needs, but instead create technology to replace roles previously filled by pets. The development of Wi-Fi-controlled space heaters and meat grown in labs has pushed the plants and animals that once provided us with warmth and nourishment to the level of family members. To get a glimpse of the next development in tech, one has to look no further than their favorite leafy tree or furry friend. //// Chloé Roubert is a resident anthropologist at Idea Couture. Lena Rubisova is a graphic design lead at Idea Couture.
INSTITUTE FOR HUMAN FUTURES
manifesto As advanced technologies become an integral part of people’s lives, it is essential to be in complete control of their development. The development and advancement of technologies are not neutral acts; they are ways in which we can manifest our social, cultural, economic, and political intentions through tools. But without asking how and why we do this, we are leaving the most important issues unexplored. We are allowing our culture, society, and climate to change without having a good reason for these alterations. Current approaches to innovation and technological progress marginalize the human element in favor of maximizing technological integration and profit. This needs to change. The future will be human, just as our past and present are human. And so, we need to develop technologies in a more human way. We need to start asking how and why technology is applied and advanced as practical solutions for all. We need to begin creating technologies that manifest purpose-driven solutions. We need to define those purposes before we build. We need to move past the simplistic view of technology as a driver of human behavior, and take up a perspective that is in a dialogue with humans, non-humans, and ecological development alike. In reaction to this need, we have founded the Institute for Human Futures to create and advocate for a new way to look at how technology should develop and fit into our lives. Part philosophy, part rules for practical application, Human Futures is a future-oriented critical perspective and collection of methodologies grounded in cutting-edge social science and strategic foresight principles. The Human Futures approach tells the story of transformation. It provides organizations with a vision of what changes need to be made in order to manifest a preferred future, or to overcome barriers to bring that future state into being. 
Human Futures perspectives give purpose to our creative actions and help guide the holistic development and application of technology and technological solutions. We welcome individuals and institutions to join us in our efforts.
Idris Mootee
Dr. Paul Hartley
CHAIR AND FOUNDER
CO-FOUNDER AND EXECUTIVE DIRECTOR
PARTNERSHIP ENQUIRIES: CEO@IDEACOUTURE.COM
© COPYRIGHT 2017 INSTITUTE FOR HUMAN FUTURES
PHOTO: SHUETS UDONO
toward a human futures perspective—
CONVERSATIONS WITH TERRY IRWIN, DR. GIDEON KOSSOFF, DR. ANDY MIAH, AND DR. WILLEM H. VANDERBURG
by dr. paul hartley SINCE THE FOUNDING OF THE INSTITUTE FOR HUMAN FUTURES, WE HAVE BEEN BUILDING THE PILLARS OF OUR PRACTICE AND DEVELOPING OUR CORE MISSION. As part of this work, we have been establishing connections with a network of scholars and thinkers who have been working on human futures for many years. We recently sat down with four scholars in our network to discuss the basic premises of their work, and to understand how they have been approaching the issues that arise when one engages in a global conversation about improving the way our collective future will unfold. The purpose of these conversations was to learn from these scholars’ experiences and begin to forge a common vision for the Institute. We met with Terry Irwin and Dr. Gideon Kossoff of Carnegie Mellon University (CMU), the developers of transition design; Dr. Andy Miah, Chair in Science, Communication, and Future Media at the University of Salford, and a voice in developing a mature concept of transhumanism; and Dr. Willem H. Vanderburg, Professor Emeritus in Engineering at the University of Toronto, a multidisciplinary scholar and the author of eight books on the relationship between humans and technology.
Each has been working on the issues that lie at the core of a human futures perspective for some time. The commonalities and differences in their work highlight how rich and varied the human futures perspective can be. Their different approaches provide new perspectives on the large, knotty problems facing humankind as we build a new future. Their comments offer a foundation for what a human futures perspective should be. These conversations were part of the ongoing dialogue that guides the Institute’s core purpose: to build a strong perspective on how to frame temporally conscious, systemic change. This newly formed perspective would help guide designers, thinkers, consultants, business people, and technologists of all kinds to build a better world for us all. But before this important work can begin, we have to start at the beginning by defining the concept of human futures. Without this lynchpin, there is little to do. The scholars’ approaches, though different, are each related to the human futures perspective. Their work is helping to create a richer, fuller expression of human futures so that we can begin to help make change where change is needed.
Terry Irwin and Dr. Gideon Kossoff are trying to tackle the problem of systemic change and develop a way to make sustainable, holistic change. They have created an approach they call transition design, and have worked for several years to develop this perspective into an open-source platform and curriculum for the CMU design program. Terry is the head of the design school at CMU, a founder of MetaDesign in San Francisco, and a long-time innovation consultant. Gideon is an adjunct professor at CMU and wrote his Ph.D. dissertation on holistic frameworks for sustainable transformation. Their work, along with that of their partner, Cameron Tonkinwise, a professor of design at CMU, has focused on fostering a way to craft “fundamental change at every level of our society… to address the issues confronting us in the 21st century.” Terry and Gideon’s project began with a collaboration when they were both at Schumacher College in the UK, where Gideon was a faculty member. Once they arrived at CMU, they committed to channeling their work on holistic, systemic change into a design approach, thus founding transition design. But their focus was not just on crafting a perspective or methodology. They also wanted to incorporate as many people as possible early on. Terry explained, “We made the radical decision to invest some time and money into the development of transition design as a new, open-source movement. Can we foster this in a more accelerated way than service design, which began in academia 27 years ago, but did not develop quickly because there was no social network? Our hypothesis was, ‘If we don’t try to own it or brand it, would it be possible to leverage all of the new communications options to get it out there?’ We try to produce everything at a high, yet accessible, level. We’re not going to wait to put out a call for papers, or to do the conferences. We started to write about it as credibly as we could and asked others for input.”
“IT IS BECOMING IMPORTANT FOR BUSINESS INNOVATORS AND DESIGNERS TO UNDERSTAND HOW THEIR WORK IMPACTS A LARGER SYSTEM, SO THAT THEY DO NOT JUST MAKE SMALL AND INDEPENDENT CHANGES.”
Terry and Gideon wanted to address some of the problems they saw in the way design is practiced currently. Terry pointed out that, “There are people out there using service design to do great things. For the most part, service design has been widely taken up by business. It has been transformational because it is a whole way of thinking about people. But it still looks at people as customers and has a profit-based imperative. It has not expanded its focus to other species or the environment. It has no concern for the environment.” According to Gideon, transition design is about bringing an appreciation of complexity and an acknowledgement of how everything is connected to help expand beyond these limitations and omissions. “Transition design pushes the argument of interconnectedness to a logical conclusion. You have to think about sociological and ecological contexts together with the cultural, historical, and social aspects. You cannot deal with what is convenient.” Consequently, they have tried to foster a new way for business and design to address problems at the level of complexity where they become manifest. And this is an important point, because as things are more overtly interconnected, it is becoming important for business innovators and designers to understand how their work impacts a larger system, so that they do not just make small and independent changes.
Terry and Gideon are offering a vision to change how design is practiced. They are careful to point out that they are not trying to offer an alternative to service design, but rather an extension of many of its core principles. Terry went to great pains to explain, “Service design projects are holistic in their approach. They do understand ecologies of people and levels of scale. But they have not put the temporal consideration into their work very well. You would do that with a transition design problem. You are viewing potential initiatives as steps in a long transition, or nodes in an ecology that will have leverage within the system. You’re toggling with this long-term vision of a desirable future, and a pathway you’ve developed using back-casting, and you are connecting existing projects to be steps along that pathway. I don’t think service design does this at all. It might project, but that is out of what is going on right now, or what is simply emerging in isolation.” Gideon made it clear that the reason for emphasizing systems change in transition design came out of a desire to address larger problems – the “wicked problems” that everyone is faced with. “We look at transition design in terms of large systems change. How do you intervene in entire food systems or water systems? We’ve been very influenced by many socio-technical thinkers. There is this group of non-designers in the Netherlands who are looking at how changes in large systems – such as water, transport, and food – have occurred to understand how they will occur again. They are making an argument that there are intervention points in these systems that can affect the system as a whole. We’ve moved in that direction significantly in the past year. Systems change is the thing we should be thinking about. Not isolated change, but intervening in entire systems.” However, this task is not that easy. 
There are many disciplinary boundaries that must be crossed before transition design can become a natural part of existing design approaches. This is simply because most analysts and designers are looking at the world in a very fragmented, linear way. Terry sees the differences between traditional approaches and a transition design approach in two major areas: “It’s radically temporal- and place-based. I don’t think a service designer would think in terms of a watershed… It is about societal change rather than incremental change.”
What this means is that both Terry and Gideon feel they must help designers of all kinds see past their current techniques and practices – something that will involve changing more than the methodology. With this in mind, they find it necessary to have a dialogue with their audience, rather than develop their perspectives independently. “We are being clear that we are in a phase where we are trying to facilitate a global conversation, much like social innovation or service design. We are trying to point out that [transition design] is meant to be symbiotic with these disciplines, but different. We are also trying to educate a new kind of designer, one who can tackle systemic problems and is going to be radically collaborative and temporal.” This is where transition design reveals itself as a partner with a human futures perspective, because the focus on the temporality of systemic change allows us to see the complexity of designing social contexts in the appropriate light. It is not enough to
design with an end user in mind, because that end user is already connected to a number of other people and circumstances, both of which are constantly affecting their thoughts, actions, and experiences. The designer must be aware of this interconnectedness and work to include the other voices and perspectives in their work. Transition design is, therefore, also a way of being self-reflexive and using design as a way of shedding less useful ways of seeing the world while creating new ones. Terry made this very clear when she said, “You have to consciously question your own values and be mindful of new postures… It’s not a process. This is hard to get people to understand. But we are trying to understand what new visions are needed to shift your thinking and what you need to know… We are at a point where a process is starting to emerge, but it is very embryonic.”
Systemic change must also involve designers of all kinds to change themselves while they do their work. The open, dialogue-based approach of transition design is intended to facilitate this change. But it is not limited to transition design. Others, like Dr. Andy Miah, have incorporated a multidisciplinary approach largely because the only way to tackle these bigger issues is to work collaboratively and to incorporate multiple perspectives. Andy has worked at the boundary lines of several disciplines for his entire career. Originally trained in the physical sciences, sports science, and media studies, he now incorporates ethics, genetic science, and the arts into his practice. He focuses on the relationship between humans and technology, and studies the implications of how technology is changing us as individuals and as a species. “My exposure to the relationship between technology and humans is straightforwardly autobiographical,” Andy explains. “At school, I was interested in math, arts, and athletics. I’m still really interested in these now. I ended up doing an undergraduate degree in sports science, which was, at the time, one of the few degrees that had a lot of philosophy and ethics courses in it. My professors, one of whom became my Ph.D. supervisor, really shaped my approach to things. At the time, there was a lot of discussion about doping and sports, which had been going on since the 80s. But there was something about the 90s. In 1998, I picked up a copy of New Scientist Magazine. One article was trying to propose the idea that you could identify performance genes. This was the beginning of the debate that there were genes we could change or alter to obtain superhuman performance. So my career began at a point where I was able to engage in two revolutions: the digital and the genetic. Throughout my Ph.D., I found it impossible to ignore either. The Ph.D. 
itself built on these ideas.” Andy’s work combines media, technology, and ethics in a way that is not often possible in today’s compartmentalized academic environment, or the business world. But he takes a larger view, and much like Terry and Gideon, he is trying to engage the public in his work. He discusses how technology is contributing to how we understand our bodies, our performance, and the ethics of pushing ourselves beyond what humans have been able to do in the past. This puts him squarely in the emerging conversation about transhumanism. For Andy, the decision to engage with this concept came from a concern about the direction of the debate. “The transhumanist idea was being hijacked by a very radical, somewhat odd group of people, particularly in the US. People like this were very present in my early studies. I had to consistently write about technology and biology simultaneously… There was something in that mixture that was crucial in coming to terms with what is going on. But there was a philosophical issue, which also had to do with making sense of how the mixture changes ourselves, and our relationship with others and our species – a concern with the fact that some of those inquiries were still lacking in many ways.” Inquiries lack information because of people’s difficulty in understanding the nuances of how we are using technologies – electronic, genetic, or otherwise – to change our bodies and our understanding of ourselves. This is part of a larger problem, namely the lack of popular engagement with science. Without specialized understanding, many people cannot fully grasp the problems that arise when we begin to alter our bodies or genetic structure to meet our own needs.
Andy has therefore become a voice for engaging people in a new kind of debate about science and technology: “My approach has always been to engage people in the ethical implications of science. Anyone can be engaged in this. I do not like this kind of deficit model where scientists inform people of their work because they see them as ill-informed. I think we should engage people much earlier and have a dialogue about whether or not it is something we want to do. I subscribe to the idea that we do not have enough things to do this well.” This practical work has always led him to believe that “working with artists and designers to create provocations about the future would help us make sense of it all.” He identifies an essential pillar of what human futures need to be, specifically an open conversation designed to engage people from multiple backgrounds. This is not just to ensure that there are many perspectives, but also to allow all kinds of people to have access to the meaning and implications of what is being discussed. Andy made a point that his background afforded him a unique perspective, saying, “I tried to find a way to understand the place of technology within the broader world of sports. But then I was very influenced by people like Lewis Mumford, Carl Mitcham, and Jacques Ellul, with his technological view of science. They were all very present. When you dig into this topic in sufficient depth from these starting points, you get into problems of identity, personhood, and humanity.” He sees this as necessary because many of these issues are missing in the conversation about technological enhancement. “One example that I use to talk about the problem of enhancement keeps coming up. About 12 years ago, a lab in Newcastle received a license to experiment with mitochondrial DNA. This involved replacing damaged or bad DNA with material from someone else to improve the chances of embryos that may not survive otherwise.
The latest license allows researchers to bring these embryos to term. This is being done without any public debate or engagement. How do we ensure that this discussion can happen? How do we democratize science to ask if this is a good idea?” But connecting people to the complex topics lurking behind seemingly simple technological or scientific questions is not easy. Andy believes it is necessary to engage people in as many varied and active ways as possible. “The citizen science movement is one solution, though there is a lot of criticism of it. But if it’s done well, this movement has the capacity to bring about a dramatically different decision-making process around science… It is a good way forward. We’re still only at the start.” This engagement is desperately needed, because our understanding of technology is being swept along by a desire to craft futures for the human species that take technological, rather than human, advancement to its logical extremes. Andy worries about this way of thinking: “The technology ethos permeates every part of modern life. Our fascination with the new is all-encompassing now.” He also warns against our tendency to see only novelty when examining possible technological futures. “The stuff that is still quite old technologically is still providing the most radical applications… But the hyperbole about the cutting edge of technology is really nothing more than a way to get us to spend more. Technology as a currency is really about that desire to be part of the latest trends and not to be behind the times.”
PHOTO: MARK GUNN
PHOTO: JOÃO LAVINHA
“WE NEED TO LOOK AT THE FUTURE AS SOMETHING THAT IS NOT JUST IN THE SERVICE OF HUMANITY. IF WE DON’T DO THIS, WE WON’T BE AROUND FOR MUCH LONGER. ARTIFICIAL LIFE RESEARCH HAS TO EVOLVE BEYOND SIMPLY TRYING TO EMULATE HUMAN INTELLIGENCE. THOSE THAT ARE DOING THIS ARE MOVING US IN THE RIGHT DIRECTION. BUT THERE IS NOT ENOUGH OF THAT.”
His point largely rests on the fact that a technological future is only one of many competing futures. However, technology and the future are so totally intertwined that it is hard to separate them at all. The discussion about the advancement of technology has subtly replaced many other conversations about the future – a problem that Andy sees everywhere. “The speculative design economy is interesting. Kickstarter has allowed us to see these micro-entrepreneurial products as a source of how we understand the future. On these promo videos, you can see the disconnect between projected futures and the lived reality of the technology.” To counter this tendency, he advocates reassessing what we mean by technology. “We have to think beyond technology. I say things like, remember the Fosbury Flop. This is an example about how physics could be applied to sport. We can call this ‘technology’ as much as anything else.” This is needed because he believes it is also time to reassess the questions behind science and the way we view science, in order to reorient the way we relate to our technologies. “The fundamental elements of what ethics entails needs revisiting. Focusing morality on purely human concerns is a big problem. There are species amongst us that deserve the same considerations as humans. Posthumanism is about starting a much broader consideration of the moral conditions of life, an approach that doesn’t favor the human above all else. We need to look at the future as something that is not just in the service of humanity. If we don’t do this, we won’t be around for much longer. Artificial life research has to evolve beyond simply trying to emulate human intelligence. Those that are doing this are moving us in the right direction. But there is not enough of that.” While Andy is one of a small but growing group of scholars making this point, Dr. Willem H. Vanderburg, Professor Emeritus of Engineering at the University of Toronto, was one of the first. 
He has been building the foundation for this kind of reframing since his time as a postdoctoral student with the influential sociologist Jacques Ellul at the University of Bordeaux. Willem has been a major proponent of Ellul’s view of the problematic features of a technological society for over 40 years. But
Willem’s position is unusual because his training and scholarship have put him right on the line that divides these two disciplines. This has allowed him to be a voice in engineering and the sciences in general, one that raises the systemic, moral, ethical, and practical questions that are otherwise ignored or forgotten. Like that of Terry, Gideon, and Andy, Willem’s work begins and ends with teaching and engagement. “Very early on, I had an intuitive sense that engineers had no idea how technology was connected to everything else. I first sensed this at the University of Toronto, and started my research with a comprehensive study of the undergraduate curriculum. I asked two questions: What do we teach engineers to help them understand how technology influences human life and the biosphere? And to what extent do we use this understanding to influence the design of technology and avoid collisions with human life, sciences, and the biosphere? I knew this was going to be controversial, so I was really rigorous, especially since the answer to both questions was ‘Nothing.’” To address the lack of instruction, Willem developed a curriculum and a series of courses. However, he does not locate the problem entirely with engineers, stating, “When engineers go across campus and undertake what are called complementary studies, there are also problems. When we studied what they are being taught in those courses in sociology, anthropology, psychology, economics, and so forth, we asked: What do they teach about the role of technology and design from their point of view? At the time, the answer was also ‘practically nothing.’” This finding led him to conclude that there is no common dialogue, or even acknowledgement of different points of view, within academia – the very same problem that is present in innovation and design. 
Like the other scholars, Willem’s solution lies in forming systemic perspectives grounded in multidisciplinary collaboration. He sees developing a systemic view as the only way to truly understand the role of technology in the world. “If this world is a highly interconnected world, whatever happens with the design will be connected to everything, positive and negative. If the actors have no
idea about these interconnected things, then the collisions they create could be significant. For a designer – or an engineer or a business person – their intent is never going to be realized, for the simple reason that they have no idea how it is going to fit into the world where the technology is going to operate. The simple fact is that, in any environment, tech working smoothly is a function of its characteristics and the characteristics of the environment.” Willem’s work is built on Ellul’s concept of technique: the idea that we have created a substitute way of understanding ourselves, entirely based on technology, that governs human society and human action. Ellul’s point is that this understanding began to develop before the Industrial Revolution, and that it has slowly replaced what humans would do without it. Our understanding of ourselves is now entirely shaped by technology, and this is a problem that we may never solve. Instead of seeing our reflection in technology, we now see its reflection in us. We seek to mechanize our society and optimize everything with the belief that this is better. Even the way we practice science is guided by technique. As Willem points out, “The way we practice science is built on repeatability and noncontradiction. It is a way of knowing that is very different from every other human way of knowing. We are symbolic beings who think and reason dialectically. Applying science to everything eliminates this experience; you must remove the human being and their experience in order to guarantee that it is scientific. If we apply science to everything, then we alienate ourselves from our own work.” This is already happening, particularly because we understand our present and future by looking through a technological lens. Design, practice, and business have all been similarly affected, making it difficult to see past this artificial relationship. 
“Here, we have a society that invests heavily in science and technology,” Willem says, “and in rational approaches that try to figure out what’s going on. We ridicule religion and traditional societies. And I think that we are actually less in touch than ever with what is going on. We prefer one kind of specialist knowledge over another and we have put this knowledge higher in our priorities, thus squeezing out any competitors.”
The consequences of this are profound because it means that human perspectives are not just being marginalized – they are being systematically and efficiently displaced in favor of visions, processes, and desires that are governed only by what we see when we look into our technological lens. Willem points out that we are already doing this, to the point that technology is now the answer to every question. “We live as if there are no limits, or no consequential limits, and we go about our business. Technology is viewed as the only answer. What will technology be unable to do for us? The thinking that leads one to believe that technology will be able to accomplish everything is like believing that a hammer can fix everything. This way of thinking fits into the technological way of thinking, but it doesn’t fit into life, society, and culture.” The fact that it was not always this way suggests that it need not be this way in the future. Willem pointed out that, “In Alexandria 2,000 years ago, they had all kinds of hydraulic irrigation and machines for specific functions. But no one considered using machines to produce things. For Greek thinkers of the same era, a machine was used to prove a point and was destroyed after the point was made. Each culture assigned a role to technology. What this meant was that societies saw technology as a limited thing. We don’t have that today. We need to focus on the difference and realize that the relationship can only be reciprocal: Humans build and change technology, and the technology we make changes us. One half of this relationship is missing in sociology books and the other is missing in engineering education. Both of them just omit a different half.” Willem’s solution, like that of the others, is to correct these omissions through multidisciplinary dialogue and engagement. “To ask good questions, you have to ask what the patterns and the social dynamics are. 
It is necessary to be multidisciplinary.” His purpose is to show that in order to do systemic work, one must first fully understand the system. We must first be aware “that once rationality takes over from culture, the signal has been given that this dualistic society has taken hold.” To design new technologies, we have to be aware of this dualism and ensure that we are building for what he calls “the collisions” that result from their interaction. His point here begins with this thought: “Why does technology tend to drive social change? There is a collision of architectures. The architecture of technology masquerading as society will inevitably destroy the architecture of human relations. Some see this as progress, but it is actually a collision of two things that are opposed, when they should not be.” The solution is to bring them together and work against the tendency to see technology and humans as separate.
The Human Futures Perspective

These conversations show us how human futures is both a temporally oriented critical perspective and a collection of methodologies grounded in cutting-edge social science, design, and strategic foresight principles. More simply, human futures is a way of seeing change as a set of systemic conditions and events that unfold over time. It exists to complement design, development, and invention by providing the context needed to create highly complex, diverse environments and situations. This perspective focuses on the behavioral, sociocultural, and experiential aspects of how both present and future opportunities will unfold, and it acts as both a theory of change and a critical statement about how the world changes around us. Human futures is a way of populating the future with people and their experiences. It allows us to look beyond technology as the guiding force of our social, cultural, and environmental circumstances. It is, then, also a way of examining technology as an outcome of human activity – not as something that is different from humans. It is a critically informed, speculative anthropology of change and an analytical procedure that examines ideological and technological change in the correct historical, contemporary, and relational contexts. It sees media, technology, art, politics, and individual experience as the building blocks of what is to come and not as the after-effects of technological or evolutionary progress. It is a view of the way humans recreate their worlds and continually develop themselves to discover new contexts and new ways of being. As a methodology, human futures provides a framework to further complement the work of the various innovation approaches now used in social innovation, design thinking, and human-centered research. 
With this approach, we can manage complexity without resorting to over-simplification, and we can understand human behavior in a context that includes our past, present, and future – without overemphasizing any one of these. The human futures perspective should be a catalyst for understanding change in a different way, for multidisciplinary collaboration, and for looking at familiar contexts, things, and ideas in new ways. Simply put, human futures is a system of people and ideas that allows us to examine other systems without damaging their complexity or dynamism. ////

Dr. Paul Hartley is executive director and cofounder of the Institute for Human Futures and a senior resident anthropologist at Idea Couture.
Experience
The Glenmorangie Distillery Tour 128
The Black Wines of Cahors 131
The Roof Gardens, London 132
East to West: Eat Your Way Across London 134
Wisewear: Trackers Go Next Level Fashionable 140
Exploring Scotland
The Glenmorangie Distillery Tour
By Dominic Smith
Tucked away in the Royal Burgh of Tain on the coast of the Dornoch Firth lies a pathway to Scotland’s history. The Glenmorangie distillery has stood for over 150 years, and has in that time built a stellar reputation as a producer of exceptional single malt whisky. Those seeking to follow the well-trodden steps of whisky connoisseurs and discover the essence of the drink’s creation are able to tour the distillery and
be guided through a whisky tasting by Glenmorangie’s director of distilling and whisky creation, Dr. Bill Lumsden. Each step of the tour is imbued with a sense of history and wildness, designed to draw you into the heart of the surrounding Scottish Highlands. Every aspect of the distillery is set up with “perfection in mind.” It boasts the tallest stills in Scotland – standing at eight meters – and the Men of Tain are carefully selected craftsmen with an intricate knowledge
of their art. But what drives the perpetual allure of their dark liquor is the pure, mineral-rich waters of the Tarlogie Springs. Considered by ancient Scots to be sacred, the tranquil waters of the spring are the unchanging force behind Glenmorangie’s success and a reminder to every visitor of the simple beauty of the distillation process. In a land where the winds whistle over wild waters, where you can sit upon the shores of a vast loch and drink in the essence of the place, lies the opportunity to live out a quintessentially Scottish experience. In Glenmorangie’s own words: “We’ve always realized that pursuing perfection is rather a long journey. We just know it’s a trip worth making.” //// glenmorangie.com
Safe-T
The Fire Extinguisher You Never Knew You Needed
By Stephen Bernhut
You’ve probably seen Salvador Dalí’s image – he of the outrageous, wafer-thin handlebar moustache and those unmistakably surreal paintings – on everything from book covers to bottles of perfume. But in what is surely a first, you can now see the great Spanish artist’s image staring at you from, of all things, the barrel of a fire extinguisher that may not be suitable for framing, but is eminently suitable for hanging on your kitchen wall. The indelible likeness of the Spanish dandy is just one of the 30 or so images that adorn that most easily forgotten, but need-to-have, household item: the standard-issue red fire extinguisher. The proletarian, industrial look may be fine for some, but if you want a fire extinguisher that looks stylish or, dare
we say it, even chic, then look no further than Safe-T’s line of whimsical designs. But first, here’s what you need to know: The Safe-T extinguisher is imported into the US from Europe, where it’s manufactured by a company that’s been producing reliable fire extinguishers for 85 years. The Safe-T is just as efficient as that old red extinguisher under your sink and is available in brass, chrome, or copper finishes for more understated spaces. For the quirky and adventurous, there are designs that resemble bottles of wine, whisky, or even olive oil, or your favorite brand of ketchup or tomato soup. So, when you want to add a touch of whimsy to your kitchen or office, or give a unique housewarming gift, consider a fire extinguisher that is not only original, but a potential lifesaver too. ////
The Black Wines of Cahors
Red or white, sir? Black, please.
By Dr. Ted Witek
A hidden gem among the more celebrated French reds from Bordeaux and Burgundy, the Black Wines of Cahors bring a meaty, herb-tinged aroma to the dining table. The comparatively unknown region lies east of Bordeaux, and has been producing wines since Roman times, over 2,000 years ago. These deeply dark wines are very powerful, yet perfectly balanced by a pleasurable hint of dark berry fruits. The primary grape is Malbec, and while many associate this grape with Argentina, Cahors is its ancestral home. The region’s wines demonstrate a fair degree of diversity, both from the terroir – the natural environment – and from a new generation of winemakers exploring untried methods of creating this staple. The terroir’s effects are quite interesting, as each planting terrace has a different soil composition, which asserts itself in the final product. For example, the lower flood plain gives lesser-quality, fruity contributions, while the upper terraces’ rock and red clay with iron ore deposits provide a rounded taste. Some wine descriptions from the region will even indicate the terrace level of the grape’s origin. Unlike an Argentinean Malbec, Cahors is a wine that can be enjoyed on its own. It pairs well with local dishes like cassoulet, duck, and red meat. It joins Zinfandel as a Cabernet Sauvignon alternative for a good steak. Best of all, choosing a Cahors at a steakhouse may see you spend more on your steak than on the bottle of wine. ////
The Roof Gardens, London
By Stephen Bernhut
What if we told you that you could find paradise only 100 feet above the streets of Kensington in London? The Roof Gardens takes everything you think you know about London venues and flips it on its head with its private members’ club that rivals the most verdant, exotic vacation sites. And you don’t even have to get on a plane. With three plush gardens – Spanish, Tudor, and English Woodland themed – this venue has proven itself a popular spot for events ranging from weddings, celebrity parties, and casual, al fresco business meetings, to a thoroughly social BBQ on a balmy night. The gardens feature over 70 full-size trees and a fish-filled stream, complete with several resident flamingos (yes, real flamingos). And while The Roof Gardens is very much a venue for today, it is also one with a rich and storied past. Once the site of Derry & Toms
department store and later the iconic fashion store Biba, the building was transformed into Regine’s, a popular Kensington nightclub. One fateful evening, when a young man in jeans and scruffy shoes was turned away by the doorman, he simply bought the place and relaunched it as The Roof Gardens. The year was 1981 and the young man was Sir Richard Branson. Today, the lush, landscaped gardens are dotted with intimate spaces like the stunning Spanish hideaway, which can accommodate up to 40 guests for a variety of events between May and September. Inside the clubhouse, a cocktail waitress dedicated to VIP tables will ensure your party is pampered. Guests can step up to London’s only rooftop private members’ club, where they’ll also discover the beautiful, multi-award-winning restaurant, Babylon, and dine in impeccable style while looking out over the iconic London skyline. 100 feet above ground never looked or tasted this good. //// The Roof Gardens is part of Prestigious Venues, a renowned portfolio of the world’s best venues, qualified to host the most memorable events. prestigiousvenues.com
East to West: Eat Your Way Across London There’s no shortage of amazing restaurants to choose from in London, but doing so can be a touch overwhelming. We’ve made it easier for you by highlighting three of our top picks. Whether you’re into something modern, opulent, or completely unique, we’ve got you covered – from one end of the city to the other.
Opulent Chinese Food Meets Live Music
Park Chinois, London
By Dr. Ted Witek and Hilda Yasseri
Though relatively new on the scene, Park Chinois is proving to be a fine dining experience that you will be talking about until you return – and yes, you will return. The restaurant pays tribute to the dinner and dance of yesteryear, and the stunning opulence of the design is on view both in the Salon de Chine upstairs and the dazzling Club Chinois down below.
You can expect to be feasting on elevated Chinese dishes developed by Head Chef Lee Che Liang and inspired by years of culinary travel. The Duck de Chine is considered the flagship of their delectable French Chinoiserie, but do not forget to order the soft shell crab and a selection of Dim Sum. Although you can order from their menu, this is a dining experience you can shape and customize to your liking with the assistance of the friendly, attentive, and knowledgeable staff. One defining feature that sets this restaurant apart is the skilled way they have incorporated music into the dining experience. The first notes of jazz waft in with a few blows of the trumpet and riffs on the piano, followed by an elegant singer performing lovely, timeless tunes as diners enjoy sweets and coffee. It’s an undeniably romantic way to spend an evening. This self-described blend of “French elegance and the mystique of the Orient” works beautifully. It is at once enchanting and refined, entertaining and glamorous. It’s an experience worth repeating time and again. //// parkchinois.com
Shipping Containers Never Looked So Good
Boxpark Shoreditch
By Lakshiya Rabendran
London is a vibrant and diverse city with so much to offer, but it can seem ordinary when you’re trying to find and experience something different from the usual spots suggested by TripAdvisor and Google. Boxpark changes all that. A visit will refresh your take on London, drawing you away from bustling crowds to somewhere you can completely unwind, shop, and dine. Located in the heart of Shoreditch, one of London’s trendiest, grittiest, and most artistic neighborhoods, Boxpark is the world’s first permanent pop-up mall. The concept of Boxpark is simple to grasp: It’s a small mall holding a mixture of over 40 independent and global cafes, restaurants, art galleries, and retailers housed compactly on two floors. What really makes Boxpark striking, though, is the eccentric architectural design, which is made entirely out of refitted shipping containers. The beautifully stacked and repurposed containers, all black with white typography, are an attraction in their own right, and fit the aesthetic charm of Shoreditch to a tee. The fusion of modern street food and retail is not the only appealing part of the experience offered by Boxpark. They also
put on events with a variety of performers, including musicians, DJs, singers, and public speakers. Roger Wade, the entrepreneur and creator behind the project, brought this unique and inspiring invention to life in 2011. He was on a mission to create opportunities for small business owners and give them the freedom to express their creative visions, all while selling products and services to consumers without worrying about competition from large-scale businesses or the heavy cost of leases. Wade’s minimalist approach in both function and style was his way of catering entirely to Boxpark’s young and urban audience. So if you are looking for a London attraction that’s visually exciting and a bit outside the norm, consider Boxpark a must-see. //// boxpark.co.uk
Going Green in Notting Hill
Farmacy, London
By Alexis Scobie
In the quaint neighborhood of Notting Hill, just a short walk from Portobello Road Market, sits the brainchild of Camilla al-Fayed – Farmacy. The menu, which is filled with healthy comfort foods, is predicated on the Hippocratic belief that food is medicine. And while Farmacy’s menu is undoubtedly good for you, it’s also approachable, making it appealing to diners well beyond the health-minded foodie. The space itself is highly curated, and merges the worlds of health and design. The gold-colored bar is home to an extensive collection of biodynamic wines and clean cocktails, and is decorated with crystals and other symbolic tokens of wellness. One notable drink is the “Tequila Mockingbird,” a twist on a classic margarita that’s topped with pink Himalayan salt. The non-alcoholic beverage menu is just as innovative, including juice shots served in a syringe, unique herbal tea blends, and even mushroom lattes. Farmacy is also the first London restaurant to offer cannabis oil (CBD), known for both its functional and health benefits, in its drinks and elixirs.
The dinner menu features classic items like nachos, the Farmacy Burger, and Chef’s Clean Curry, all of which are entirely plant-based and can be altered to fit dietary requirements. The Earth Bowls are one of their bestsellers, and are filled with wholesome grains and greens. The best part, however, is the dessert menu, offering “healthy” berry crumble and warm cookies served with milk. Whether one goes there for Sunday brunch or on a date, Farmacy will not disappoint. //// farmacylondon.com
Book Reviews
Work Life: A Survival Guide to the Modern Office
By Molly Erman
Reviewed by Stephen Bernhut
In her introduction, author Molly Erman says that her favorite piece of career advice came from her grandfather, who told her to “walk in like you own the place.” The advice that Erman dispenses in the book’s 13 chapters may not enable you to walk around like you own the place (at least not all of the time), but some of her suggestions for handling delicate workplace situations may help you save face – or, just maybe, your job. One of the book’s more helpful chapters, “Staying Cool and Saving Face,” describes scenarios that any grunt worker – whether in the Netherlands or New Mexico – is likely to encounter. Erman serves up strategies for managing those scenarios, with tips like “Setting Boundaries,” which advises staying late when absolutely necessary, but otherwise leaving at a normal hour; “How to Argue at Work,” which covers knowing when to stand down; and “How to Handle the Condescending Coworker,” which instructs the reader to redirect bad energy by saying something like “I’d be happy to do that for you.” The chapter has good advice for enhancing careers – and saving them. If that condescending coworker happens to be your boss, Erman’s tips will help to make you empathetic, if not sympathetic. Under “Truths About Your Boss,” she writes that “Your boss wants to trust you,” and “Your boss is grateful for you, even if that gratitude is not properly expressed.” In case your boss has never expressed appreciation for you, Erman says, “That doesn’t mean she or he doesn’t want to.” Work Life’s size – like a pocket book – and its colorful illustrations make it an appealing read, as does Erman’s breezy, irreverent tone. While some of the chapters fall into the nice-to-know category (like those on meetings and getting organized), most contain at least one bit of advice that is a real (job) keeper. ////
Host: A Modern Guide to Eating, Drinking and Feeding Your Friends
By Eric Prum and Josh Williams with Lauren Sloss
Reviewed by Stephen Bernhut
If you are anything like the coauthors of this warm and friendly book, you know that “cooking for friends has been key to maintaining sanity amid the craziness of urban living.” Like them, you’ll want to prepare dishes that “keep it simple, use fewer, better ingredients and above all, make people happy.” The hours and hours of prep time and constant fretting required for 15-ingredient dishes are not for these authors – nor for you. Just for you, then, is this coffee-table book filled with more than 30 recipes, ranging from Farmers’ Market Greek Salad to Cavatelli with Blue Crab, Chile, and Toasted Breadcrumbs. Of course, any memorable evening starts with a cocktail – some exotic, others more common – and the authors include recipes for more than 25 concoctions, ranging from the Louisville Lambic to the mezcal-infused El Coronel. While the pages are filled with rich, full-color photos of cheeses, fruits, and vegetables, there is a helpful, utilitarian touch in the authors’ descriptions of what they call “Essentials”: the kitchen, bar, and serving tools a host or hostess should have. They also describe the basics of the well-stocked bar, and even the pantry. Most helpful is the authors’ advice for those who want to go beyond pointing and saying “I’ll take six of those.” Such advice includes tips for visiting a farmers’ market: Get there early to get the very best of what they have to offer, and stop and take a few minutes to get to know the particular farmer – or butcher, or fishmonger – who may even have a cooking tip. Then get ready to eat and be happy. ////
WiseWear Trackers Go Next-Level Fashionable
By Mira Blumenthal
WiseWear, an engineering and design firm that develops IoT products that blur the line between fashion and function, has created perhaps the most stylish personal tracker yet. Their Socialite devices are sleek, albeit quite heavy, and make a unique accessory for almost any outfit. Unlike most other trackers, however, these devices cannot be worn while exercising, due to their weight and the materials used – though they still measure the number of steps taken, calories burned, and a host of other vitals. WiseWear’s fundamental differentiator is that it was designed to help people feel safer – especially the elderly. The idea came from a personal story: founder Gerald “Jerry” Wilmink’s grandfather suffered a fatal fall. The bracelets come with a “panic button” function that texts pre-set
emergency contacts when activated, a feature that sets it apart from other IoT accessories on the market. The matter of geriatric safety also hits close to home for WiseWear advisor and design partner, Iris Apfel. The world-renowned style icon intimately knows about the dangers of falling, and loves that the bracelet is able to notify her loved ones. She also maintains, however, that the bracelet is not just for the elderly. The Socialite measures activity and notifies users of their missed calls and messages via Bluetooth, so it can truly be used by anyone of any age. The Socialite bracelets are undoubtedly fashion-forward. The lack of a screen is surprisingly appealing, and the delicate notification vibrations are useful. While the bangles cannot be used for athletic purposes, and certain models are not as sleek in person as they are shown on the company website, the seamless fusion of fashion and technology is certainly inspiring. ////