Summer 2011
The D-Crit Florilegium* www.dcrit.sva.edu
An irregularly appearing volume of writings by students in the School of Visual Arts MFA in Design Criticism.
This issue features selected extracts from the theses written by D-Crit’s Class of 2011, edited by Andrea Codrington Lippke, D-Crit instructor and primary thesis advisor. To read the theses in full, please arrange an appointment to visit the D-Crit library at 136 West 21st Street, New York City. *Derives from the Latin flos (flower) and legere (to gather): literally, a gathering of flowers.
from Design Crusades: A Critical Reflection on Social Design by Vera Sacchetti “Success” was a crucial element of the projects included in the Museum of Modern Art’s 2010 exhibition Small Scale, Big Change: New Architectures of Social Engagement, one of MoMA’s first forays into the world of social design and a recognition of this emerging field’s importance. On a sunny September morning in 2010, architecture curator Andres Lepik stood behind a small podium in a light gray suit introducing the exhibition, and assured all journalists and architects present that he had personally visited each and every one of the 11 projects included in the show, to guarantee their success. As to what exactly Lepik’s criteria were, we were not told. As you entered the Special Exhibitions Gallery on the third floor, the corridor wall to your right prominently featured a map showing the distribution of the projects on display—seven of them in the developing world, all bringing “innovative architecture to underserved communities”—complete with cost and year of construction. Inside the gallery’s pale blue walls, display was democratized. Blown-up photos introduced each project, and you could analyze architects’ statements and sketchbooks, project models and videos that varied in content. For an exhibition that sought to “offer a redefining of the architect’s role and responsibility to society,” Small Scale was not so different from every other architecture show before it. Missing from the models and sketches was information about the particular context and narrative of each project. Most projects didn’t even have a map of their location and surroundings, and it is hard to believe that every museumgoer understands the concept of a township in South Africa, a barrio in Venezuela or a village in Bangladesh.
Lepik offers in the exhibition catalogue that “each project is the result of a dialogue in which the architect cedes part of his or her authority to others, marking an important departure from the modernist ideal,” but the only project in the show that was explicit about its process was the Quinta Monroy Housing in the small town of Iquique, Chile, by local architecture firm Elemental. A row of impromptu stereoscopes mounted along a wall told the story of how success, in this case, relied on pragmatism and full cooperation between the architects and the
residents from the beginning. On the opposing wall, a video gave a sense of place and of who the members of this community were. The blown-up image on the wall showed a bare, geometric succession of buildings, no people in sight. Almost all of the other projects on display in Small Scale featured photographs of at least one smiling, colored person, seemingly jubilant at the architect’s gift. Bangladeshi children stared innocently at the camera—and at us—in Anna Heringer’s METI Handmade School. Venezuelan kids shared a hilarious joke sitting in Urban Think-Tank’s Metro Cable in Caracas. These images exploit the population served by using them as proof of the project’s success. If the kids in the developing world are smiling at the moment of the snapshot, then we museumgoers are to believe that everything is fine. These people hang in timeless limbo, their positive futures inferred. All other information is not of immediate concern. However, “to fly the flag of social engagement you do indeed need to move beyond looks,” architecture critic Alexandra Lange noted in a review of the show. For Small Scale, this would mean providing context and process information for each project, bringing the architects down from their pedestals and transforming this exhibition into a celebration of collaboration, signaling indeed the “conviction that good design is not a privilege of the few and powerful.” (…) “It is always more complex than what it seems,” offers Tim Brown, CEO of IDEO, referring to the major lesson he has learned from working for the social sector. “Therefore a willingness to dig into complexity, a willingness to embrace it and understand it, and then somehow cut through it and do something tangible on the other side, is a skill you need as a designer in the social sector.
If what you want is somebody to come and give you a simple brief that you can then go away with and create a wonderful design from and hand it back at the end, then you’ll be disappointed working in the social sector, because it won’t work out that way.” The design industry’s transition toward the social sector will be long and painful. Although the first social design projects in the early 2000s kept encountering the same barriers, practitioners working abroad today have made strides, constantly testing new models in a variety of places and scales, and making the most of a field where everything is still negotiable. It is clear now that success is hard and never certain. However, good first steps include leaving your cultural bias behind you, working with the target community from the inception of the project, building on the expertise of local partners and starting small. If designers really wish to embark on social design projects abroad, they must go beyond the enthusiasm and feel-good appeal of their initial ideas: they must learn about development initiatives and business planning, about the context they’ll be working in, and must be willing to change and adapt their concepts, facing constraints that will inevitably exist. It is wiser to start in your own community than abroad. Designers will more naturally adapt to a context they already know; however, working locally doesn’t always translate to good results. It also seems wrong to engage in a charity-like, pro-bono model. I’m a firm believer in an exchange process, empowering and conveying
ownership of ideas to the users designers work with, in the U.S. or abroad. If social design wants to become a sustainable, profitable field, then it must start with exchanges of objects, ideas or money to create interest and demand in the social sector. The ideas of co-creation and of a holistic approach are both beautiful to hear, but elusive and extremely difficult to implement. Many designers working in the social sector have talked of a metrics system to be universally adopted, but such an endeavor will only exist when there is sufficient consensus around the field. Back in the West, the media fails miserably in telling the stories of these projects, its biggest failures lying in the simplistic vocabulary and images used to describe social design abroad, its users and outcomes. Social designers in the field seem to be descending from their pedestals and bridging cultural divides, shattering the figure of the designer-as-savior. But the media in the West reconstructs that figure as a pivotal part of the story. Prototypes are dressed up to pass for finished projects, concepts that haven’t left the drawing board are heralded as excellent examples, and the user is constantly diminished, generalized and stereotyped in vocabulary and images. “The way we talk and write about these issues is incredibly important,” argues writer Maria Popova. “Language shapes culture and cognition in a powerful way. The very vocabulary we use in this debate is incredibly flawed. We can’t even come up with a fair way of describing the communities in question.” These flawed stories feed the idealist student back at home in the West, inspiring him to do what he comes to believe is noble, easy and imperative, leading to more mistakes and errors. And social designers cannot afford to make big mistakes in the social sector.
As Tim Brown points out, “With many of these things where people don’t have choices and you’re maybe giving them the only choice they have, then there’s a responsibility to develop solutions that have the most possible impact.” Designers will hardly be chastised for failing in the social sector, but they must do justice to themselves and to the people they are working with. The transition is happening in the field, and it must now happen in education, museums and the media, giving an opportunity to the audience to realize and interpret the complexity of the social sector and its issues. To empower others we must disempower ourselves, and it is time to deconstruct and disempower the figure of the designer-as-savior, bringing nuance to the simplistic debate, and allowing for the social design field to live up to its true potential.
from Going Public: Creation and Dissemination of the Designer’s Identity by Molly Heintz While even the most famous industrial designers are not quite household names, they are what public policy scholar Elizabeth Currid-Halkett calls “relative celebrities.” To people in the design field as well as those who care about designer products, Raymond Loewy, Karim Rashid and Yves Béhar are de facto celebrities,
and the same methods of analyzing celebrity creation apply. As Currid-Halkett argues, celebrity often has less to do with talent than with what she terms “residual celebrity,” a term that borrows from historian Daniel Boorstin’s definition of a celebrity as someone who becomes “known for their well-knownness.” Talent may put a designer on the map in the first place, but it is a host of other hard-to-define qualities that elevate someone from simple fame (being recognized as a leader in his or her field) to celebrity. These traits may be cultivated by the designer, deployed by the client and amplified by the media—all of which results in a powerful public identity. As such, the industrial designers considered here have moved beyond their role in the basic system of mass production and consumption of objects to a new place in the system of mass production and consumption of images. When it first came out, Raymond Loewy’s memoir Never Leave Well Enough Alone (NLWEA) received mixed reviews. Some found Loewy’s firsthand immodesty hard to stomach. “The book is instructive, brash, cocksure, occasionally funny, sometimes vulgar, and always honest,” read an unattributed review in the New Yorker. Peter Blake, editor of Architectural Forum at the time, penned a longer review for the New York Times that was grudgingly positive about Loewy but skeptical about his new book: “Mr. Loewy is, among other things, an accomplished salesman, and in this packaged 100,000-word after-dinner speech he is selling himself.” While Loewy openly courted publicity throughout his career, with NLWEA, as Blake suggests, Loewy threw down the gauntlet. It’s clear that this book was not intended for his industrial design peers, editors like Blake, or even New Yorker readers, although it says something about Loewy’s notoriety that his book was covered in the New Yorker as well as the New York Times.
With his Horatio Alger-style tale, Loewy was emerging from behind the scenes of manufacturing to seduce the client of his client: the consumer. “No product designed by Raymond Fernand Loewy, the world’s most successful industrial designer or packager, has ever been more studiously packaged than Loewy himself,” begins an article by John Kobler in a May 1949 feature on Loewy in Life magazine. Loewy cut a distinctive, elegant figure that set him apart from some of his tweedier colleagues at the time, like the Cranbrook clique of Charles Eames and Eero Saarinen, or the down-to-earth Henry Dreyfuss. Throughout his career, in any published image related to his work, Loewy wears a suit (always with carefully creased pants), a white shirt with a tie, cufflinks and a pocket square. His black moustache is closely trimmed, his hair parted on the left and slicked back into waves with pomade. Over the years, the pompadour shifts from dark to light gray (although the moustache remains aggressively black), his girth widens a bit, but otherwise the image of Raymond Loewy remains remarkably consistent. A Life article from April of the same year offers one reading of Loewy’s signature look: “middle brow.” On his chart of high-brow to low-brow, writer Russell Lynes places the signature Loewy look beside monogrammed towels, bourbon and ginger-ales, and the card game bridge. (The
“high-brow” category of the chart is in the vicinity of Eames plywood chairs and Kurt Versen lamps.) The chart is, of course, tongue-in-cheek, much like New York magazine’s back page “Approval Matrix” today. Yet it does illustrate an important aspect of the Loewy image: suave, yes—but suave according to popular taste. The twist that made Loewy fascinating and exotic—in branding terms, what “differentiated” him—especially in the 1940s, was his Gallic origin: he hailed from the land of Charles Boyer, Christian Dior and Chanel No. 5. Notably, Loewy never lost his French accent. And while Loewy was foreign, he made himself thoroughly familiar to the American consumer, much like Marlene Dietrich, Ingrid Bergman or Cary Grant, with whom Raymond Loewy shared a place on the American Fashion Guild’s 10 Best-Dressed Men list in 1952. Loewy’s much-published home in Palm Springs, Tierra Caliente, helped establish the designer’s celebrity status, creating a stage set—complete with a swimming pool that spanned from the living room to the outside patio—on which to play out the ultimate chapter of the American Dream. Through his design work, Loewy developed the concept of “Most Advanced Yet Acceptable”—the MAYA principle. The idea behind MAYA was that the public needed something recognizable to latch onto in order to accept a new, progressive iteration of a design. It was a balance of the commercial and the creative. After considering Loewy’s own “package,” one might argue that Loewy was applying the MAYA principle to himself. On the one hand he was presenting himself in an accessible yet intriguing visual language—the foreign-born designer of the everyday—while on the other he pushed the limits of the designer image: Loewy was designing himself into a celebrity. Design is rarely the work of one individual, and the case of Loewy’s image is no different.
His main collaborator on the project was his in-house publicist, Elizabeth Reese, known as “Betty.” Reese spent almost 30 years in the office of Raymond Loewy and acted as a behind-the-scenes liaison with the media that published and broadcast Raymond Loewy into a design legend. (…) Perhaps no designer to date has had his name physically attached—through tags affixed to the product or stamped directly into the product itself—to as many products as Karim Rashid. These include everything from the Oh chair for Umbra (now a featured product of the Container Store), to manhole covers for the City of New York, to shoes for the Brazilian company Melissa, to soap dispensers for Method, to the Bobble, a reusable clear plastic water bottle with a carbon filter, displayed prominently for a time in the windows of American Apparel. Thanks to his clients’ distribution of these products around the world, the “Karim” signature has become a logo in and of itself. Rashid’s first name has become more than just a series of letters identifying a person—it has become an image and a symbol of a brand, appearing (in pink) in every email that is sent out by his staff. For all the slick and other-worldly aspects of Rashid and his projects, the facts that he prefers a color that in the West is traditionally associated with little girls, that he now uses his first name only, and that he consciously deploys a distinctive handwritten signature—complete with an “x” over the letter i, commonly used to symbolize a kiss in correspondence—as a logo, add up to an accessible and highly consumable public identity, presented in a way that mitigates any distance created by Rashid’s perceived outsider status. With the diffusion of the “Karim” name, it’s almost impossible to evaluate how much Rashid is known for his work or simply, in Boorstin’s words, “known for his well-knownness.” To Rashid, this distinction is irrelevant.
He sees public recognition as an integral part of his personal mission as a designer: “I am a celebrity, and it affords me [the opportunity] to make design a public subject. I have been perceived as a showman because I have a lot to say, and I feel that to really change the world design is not just about physical objects or space, but it’s also about philosophy, vision, and communication.” (…) In his book Nobrow, critic John Seabrook is blunt about the way celebrity profiles worked in the 1990s at the New Yorker, when Tina Brown was editor: “If you wrote about a pop star, or a designer, or an athlete, you were necessarily borrowing some of your subject’s celebrity and using it to sell your story. And if you thought you could get away with that—with taking their Buzz and not giving up some of your creative independence in return—well, brother, you were kidding yourself. There was always a transaction involved.” And, one might imagine, the greater the star power, the greater the price of the transaction. Historically, the kind of quid pro quo that happens between journalists and publicists has stayed behind the scenes—off-the-record phone conversations negotiating access, agreements to switch out a recent photo for a more flattering headshot. But in a new era of digital communication, that old system has been upended to some degree; one wrong move and the negotiation itself could quickly become fodder for a story. In the age of Wikileaks, the notion of transparency has changed the playing field between publicists and journalists. Image-crafting has become more nuanced, a trend represented by the way Yves Béhar and fuseproject work with their clients to present a seamless image to the public, which, in their definition, includes the media. Logan Ray, fuseproject’s director of strategy, tells a story about a new client who recently sent out a “ramshackle” press release without consulting fuseproject—a major no-no in Ray’s book.
“We say, be sure to control your own story, be sure to control the reception. We curate all the photography, we select and provide those images to the press; we curate the messaging and we work with the PR firms to parse out the message.” Ray notes the example of the recent debut of the Herman Miller Sayl Chair designed by fuseproject, where different media channels each got different aspects of the story. For a firm that presents such a happy-go-lucky face to the world, it’s a remarkably strategic approach. In fact, even the way Béhar talks about storytelling evokes Loewy’s MAYA principle: Storytelling serves the purpose of communicating the core idea of a project. Every project attempts to communicate an idea via storytelling, and the experience of the product and brand. A good story is one that takes a relevant issue in people’s lives, and takes what may have been a theory into the actualization of an idea. In doing so I believe it accelerates the adoption of ideas about new ways of living and new ways of consuming.
from Tinkering with Design: The Convergence of Design and Hacking by Avinash Rajagopal If one had to choose a patron goddess for hackers, it would have to be Mētis. The first wife of Zeus and the mother of Athena, Mētis was the original goddess of wisdom and magical cunning. In Greek, her name could be loosely interpreted as “tricks of the trade,” the kind of insider knowledge one gathers through doing. The French philosopher of the everyday, Michel de Certeau, translates mētis as “ways of operating”: “clever tricks, knowing how to get
away with things, hunter’s cunning, maneuvers, polymorphic simulations, joyful discoveries, poetic as well as warlike.” Certeau proposes that in modern societies the means to actually produce things are held by a few, and that the majority of people are marginalized by being reduced to the role of consumers. Yet when we go shopping for food, we don’t just buy what is available. We form our own strategies and tactics, maneuvering between the demands of the recipe, the tastes of the people who will eat the food, and what is actually on the shelves. The decision to replace one ingredient of the recipe with another takes but a minute. Thus, even as consumers, we create our own spaces to re-assert ourselves. These specific ways of using—the strategic possibilities of mētis—are what we produce. One can argue that the first group of people who called themselves hackers were, in fact, using an extreme form of mētis. These original hackers were students at the Massachusetts Institute of Technology (MIT) in 1959, who found all kinds of unauthorized ways to use a computer given to MIT by IBM. Within a few years, an unwritten Hacker’s Ethic emerged out of their work, and a commitment to manipulate and improve any status quo became a sort of guiding principle for all future digital hackers, well into the 1980s. Already by 1975, hacker groups like the Homebrew Computer Club in California had moved to hardware hacking, manipulating electronics rather than software. The first Apple computer was in fact a direct descendant of this wave of hacking. Steve Wozniak built the Apple I completely by hand in 1975 and brought it in to show the Homebrew Club. Wozniak gave away the instructions for that first Apple for free. In Wozniak’s words, “Eventually, Steve Jobs came and said, ‘Why don’t we build it for them?’” The rest is history. Meanwhile, vast simplifications in electronics technology in the 1990s made it easier for hackers to start manipulating consumer products.
In 2005, Dale Dougherty, a co-founder of O’Reilly Media, a publisher of computer books, founded a DIY magazine called Make. “People had started to take things apart, like the early TiVo cable boxes,” Dougherty explains, “and I thought the future of computing would not just be in computers, but it would be out in the world itself, too.” Make would become one of the most influential magazines of the hacker movement in the U.S., mentoring and nurturing people like the hacker Bre Pettis, who worked for the publication between 2006 and 2008. The magazine also opened the movement up to people with a wider variety of interests. The term “maker” began to be used for people who wanted to take things apart and put them together, but who were not necessarily interested in sophisticated technology. The word “hacker” took on connotations of tech-fetishism, with the result that the “maker” designation is now often used as a less extreme term. Once Make magazine had organized the first Maker Faire in 2006, the word stuck. Then in April that year, O’Reilly Media launched Craft magazine, and brought a lot of craftspeople—now called “crafters”—into the community. The terms used to describe hackers have continued to proliferate. From French, via the writings of the anthropologist Claude Lévi-Strauss, comes the word bricoleur, conveying the impression of a disorganized handyman. The Wall Street Journal used the word “tinkering” to describe what people do at hacker collectives like NYC Resistor, and this word has been claimed by many as a sort of blanket term for hackers and makers. Having a fluid identity seems to actually help the hacking movement along, by allowing people to venture into new modes of working. “Especially among the hackers who came from the software world, there is now more respect for skill in actually making things,” says hacker and digital security consultant Eleanor Saitta. “There is more of a culture of getting your hands dirty.”
Over the past four years, all kinds of projects have come out of this culture. Some of these are but small interventions, allowing people to ameliorate the objects they use. British design student Jane ni Dhulchaointigh worked with a team of material scientists to develop Sugru—a Play-Doh-like substance that sticks to almost any surface, and hardens into shape. On her website, Dhulchaointigh asks her customers to “Hack things better.” “Make your stuff last,” she tells them. “Hack it perfect for you.” And they do, sending her photographs of repaired mugs, new handles for penknives and zany cases for cellphones. Time magazine listed Sugru among the 50 best inventions of 2010. Other hackers take more critical stands. In 2006, Radio Frequency Identification (RFID) tags were integrated into U.S. passport cards, despite concerns that anyone with an RFID reader might be able to steal important personal information from them. Saitta began to hunt for a metallic cloth from which to make shielding wallets and pouches. But San Francisco–based hacker Chris Paget went one step further. Standard RFID readers can read information from six inches away. Using components bought on eBay, Paget hacked one of these readers to work at a distance of 30 feet, effectively demolishing the claims of the U.S. Customs and Border Protection Department that the RFID tags were perfectly secure. This concern with flows of information is built into the foundations of the hacking movement. When Bre Pettis declares the “Epoch of Sharing” and advocates the easy exchange of ideas within communities, he is echoing one of the earliest tenets of the Hacker Ethic—that all information should be free. In his seminal book on digital hackers, senior Wired magazine journalist Steven Levy explains: Hackers believe that essential lessons can be learned about the systems … from taking things apart, seeing how they work, and using this knowledge to create new and even more interesting things.
They resent any person, physical barrier, or law that tries to keep them from doing this. … In a perfect hacker world, anyone pissed off enough to open up a control box near a traffic light and take it apart to make it work better should be perfectly welcome to make the attempt. The other thing hackers are incessantly obsessed with is tool-making. In a sense, the desktop 3D printer Makerbot is itself a tool, but its accompanying website Thingiverse.com is full of computer files for vises and clamps, wrenches and screwdrivers that can be made on a 3D printer. Anything can be made into a tool, including outdated Makerbot parts. Donutman_2000 has uploaded instructions for converting the MK4—the plastic extruder that used to come with the first version of the Makerbot—into a nifty little Plastic Welding Gun. In developing his idea of the bricoleur, Lévi-Strauss pays special attention to tools. The engineer has a task, and he conceives and procures raw materials and tools that are suitable for that task. This kind of thinking is alien to the bricoleur, who must always make do with what he has. Whatever tools he needs must be found among available resources, no matter what the task may be. The engineer operates within the necessities of the task, the designer within the parameters of his brief, but the hacker operates within the possibilities of his tools. In the real world this plays out in two ways. When considering the Makerbot as a final product, for instance, the engineer might revel in the very existence of such a technology, while the hacker would only be interested in what it could do. When considering the Makerbot’s purpose as a tool, the designer might wonder what it could make, but hackers, as we have seen, are concerned with how the tool can make more tools. Mētis is indeed the presiding goddess of the hacker’s world, where cunning must be used to take something apart, and then put it back
together so that more cunning may be used to take it apart again. For a hacker, there is no such thing as a finished product. In 2005, writing “A Manifesto for Postindustrial Design” in I.D. magazine, design educator Jamer Hunt prophesied that we would inhabit a world populated not with industrial goods, but with codes that can be “manipulated, changed, improved, hacked and produced in multiple variations in myriad places.” He wrote at the time that “there is no single product that embodies this new process completely.” But we now have the Makerbot, and with it, a demonstrably new way of conceiving of the material world.
from Permanence as a Criterion by Zachary Sachs Historians largely trace the Western appreciation of patina to the 14th- and 15th-century fascination with Greek and Roman ruins, which culminated in Romantic reveries such as Wordsworth’s “The Ruined Cottage” and the etchings of Giovanni Piranesi. But a parallel attitude of cultivated decrepitude can be found as early as the 16th century, when Mannerist architects would build deliberately ruined-looking houses on the estates of aristocrats. Examples of deliberately induced patina in forgeries are countless—by 1761 the popular artist William Hogarth had already satirized the practice in the mass-produced print Time Smoking a Picture—and running parallel to the Romantic efforts to achieve the picturesque ruin was the sentiment that the appearance of ageing could not be faked. Anthony Trollope describes the Ullathorne estate in Barchester Towers: It is the colour of Ullathorne that is so remarkable. It is of that delicious tawny hue which no stone can give, unless it has on it the vegetable richness of centuries. Strike the wall with your hand, and you will think that the stone has on it no covering, but rub it carefully, and you will find that the colour comes off upon your finger. No colourist that ever yet worked from a palette has been able to come up to this rich colouring of years crowding themselves on years. As beautiful as the observation is, Trollope’s appreciation of patina is closely attached to an impression of authenticity that is comparable to the “authenticity” of family lineages. It reflects a particular political force intrinsic to the aesthetic: an elevated fineness in the Western aesthete’s conception of decay, for whom patina is supposed to admit rarefied pleasures (and, it follows, to admit only those members of the leisure class for whom such pleasures are made available by their “superior culture”).
In 1990, Grant McCracken, an anthropologist studying class in the 18th century, targeted the symbolic structure of patina: Patina, as both a physical and a symbolic property of consumer goods, was one of the most important ways that high-standing individuals distinguished themselves from low-standing ones, and social mobility was policed and constrained ... Patina has an important symbolic burden, that of suggesting that existing status claims are legitimate. Its function is not to claim status but to authenticate it. This system of authentication, along with the relatively fixed system of aristocracy that made use of it, began to collapse in the late 18th century, when the increase in the range of available consumer goods drove the English to (in McCracken’s words) “conspicuous consumption on a modern scale [...] a new kind and tempo of fashion change.” But this conception also shows the limitations of McCracken’s symbolic structure for patina, in which it is circumscribed by the class structure by which
he attempts to define it. When he argues that the fashion for attractive wear is “eclipsed,” he is also limiting the range of his assessment of that fashion to precisely the class system with which he attempted to characterize it. In other words, what if patina had not been eclipsed, but only this particular incarnation of its social-aesthetic function? Though Trollope’s account deliberately emphasizes the social distinction of patina, the metaphorical quality of the passage speaks to an attitude that does not prove especially easy to shoehorn into a sociological system. Despite the intervening centuries, his description seems to have less to do with policing a social order and more to do with an interior, private pleasure that seems to have always attached itself to material decay. A contemporary contrast can be found on the internet forum MyNudies, where users upload photographs documenting the lifespan of their Nudie brand blue jeans, made out of special “unsanforized” denim. Over time, the material develops characteristics specific to how its indigo dye wears off: these are described as “whiskers” along the hips, “honeycombs” on the back of the knees and horizontal stripes of varying dimension that “stack” like sedimentary deposits at piled hems. The dedication to this task by some members of these forums may seem alien to the uninitiated—the Nudie Jeans Company’s official recommendation suggests wearing the jeans for six months before washing them, and then submitting them to a detailed cleaning process: turned inside-out and folded into a dilution of mild soap and water, kept at precisely 60 degrees and then air-dried. Clothing seems uniquely positioned to bring consumers in contact with the visible lifespan of materials—due to its ubiquity and reusability, to say nothing of its physical proximity.
Corporations like Nudie use intimate imagery to stress the sort of meaningful relationship they seek to cultivate with their markets: “Jeans is all about passion [sic]—the more you wear and treat your jeans, the more beautiful they get. Your everyday life gives the denim its unique character, formed by you into a second skin—personal and naked.” But this kind of hyperbole isn’t limited to marketing or fanatical teenagers. A critic ordinarily as sober and measured as James Agee was driven to ecstasies by the worn denim of Southern field workers: The texture and color change in union by sweat, sun, laundering, between the steady pressures of its use and age: both, at length, into realms of fine softness and marvel of draping and velvet plays of light which chamois and silk can only suggest, not touch (the textures of old paper money); and into a region and scale of blues, subtle, delicious, and deft beyond what I have ever seen elsewhere approached except in rare skies, the smoke light some days are filmed with, and some of the blues of Cézanne: one could watch and touch even one garment, study it, with the eyes, the fingers, and the subtlest lips, and illimitably long, and never fully learn it. (…) Functionality and exterior form continue to be the springboards for most design methods and processes, whereas conscious attention to the condition of things throughout their lifespan remains rare. Or, at most, it’s considered simply in terms of choosing material “for the long run,” rather than making decisions to adapt to changing contingencies. But the attitudes of designers and critics show signs of shifting toward a conception of the design object as something that exists through time. It might be an uncomfortable realization, since such a conception will necessarily cede meaning-making to every stage of an object’s manufacture and ownership (and finally disposal). 
But if such a conception is realized, then the materiality of the object through time—its decay—needs to become a central concern.
In architecture’s more theoretical precincts, some thinkers have started to stress this imperative, and investigate its possibilities. In the 2010 essay collection Design Ecologies, design theorist Peter Hasdell outlines a suggestion for the integration of architecture into landscape that goes considerably beyond the traditional notion of urban context: Choreography may be an apt metaphorical condition for a number of shifts in architectural theory over the past several decades. The question of type has been radically reevaluated under this new understanding. Formerly a central question in the establishment of ideal formal relationships and patterns, the concept has recently undergone a shift, which places emphasis on mutability and transience and resists notions of stasis and fixed identity. Type exists, but only as a contingent and mutable reality subject to the changes of contexts and fields. The fixed state of any organism—and, by extension, design object—is not a permanent condition, but a momentary example of homeostasis and equilibrium, the result of certain contextual balances in forces affecting any organism. Hasdell’s focus in the essay is on how this “soft and weedy” architecture reorients its underlying values not around stylistic or compositional integration but rather around an interpretation of architectural context that conditions itself to change through time. (His connection of this with “choreography” or performance probably indicates the degree to which these concepts have been drawn in part from the suggestive, semiarchitectural “Land” or “Earthworks” artists who came to prominence in the United States in the early 1970s.) Many of the critics writing about decay and preservation have compared signs of decay to evidence of life in objects. Perhaps in the near future, design may be able to show us there is more to this comparison than metaphorical resonance or mere memento mori. 
It may be able to demonstrate that the metaphor for life, represented by change in general and decay in particular, might be a model for process or an inspiration for formal development. In Hasdell’s reimagining of architectural context, the similarity has prescriptive potential: Contemporary designers consider the ecological context in which an organism (the design itself) is formed; they consider how it evolves out of the forces embedded in a contextual field. Designers have thus become more interested in the possibility of producing what amounts to customized organisms. The very strategy of design presumes a system for continued evolution and responsive intelligence in any developed solution. Design is now understood as a process of unfolding possibilities within an ecological or contextual system; it is a kind of performance in which the designed organisms furnish responses to the dynamic field conditions of their environment. But achieving evolution of this complexity will require designers to attune themselves to the decay and ultimate impermanence of objects and, more fundamentally still, to reconsider their role relative to the final object. To embrace this conception of the object requires a reorientation away from the idea of an ideal, finished object. Apart even from the ecological benefits that can be achieved through a more fastidious stewardship of the built environment, this approach suggests manifold aesthetic possibilities and a more complex and nuanced relationship between designers, manufacturers, products and consumers in which meaning does not come from a single origin. Designers will need to begin thinking of themselves more like parents—conditioning the development of their creations over time—than like prime movers, setting things in motion and then walking away.
from Recreate: New Grounds for New York’s Playgrounds by Kimberlie Birks Just as a society’s approach to education is visible in the design of its schools, and its ideas about nature can be read in the character of its parks, the attitudes of Americans toward children’s play are embodied in their playgrounds. Around the country, monolithic pipe-rail and plastic units moored in seas of rubber matting languish, no longer able to compete with the dynamism of today’s virtual play worlds. The playground—with its slide, swing, seesaw and sandbox—emerged at the beginning of the 20th century when improved child labor laws suddenly afforded children an abundance of playtime. A century later, playgrounds seem to speak more to the past than to the future, no longer equipped for the realities of modern childhood. Susan Solomon, in her definitive book American Playgrounds, assigns much of the blame to McDonald’s. In Solomon’s eyes, the more than 8,000 cookie-cutter playgrounds that decorate its restaurants nationwide represent the low point in post-World War II American playground design. While the mid-twentieth century saw artists such as Isamu Noguchi and Egon Möller-Nielsen spark a brief period of interest and innovation in the field, when it comes to playgrounds Americans have long since sacrificed creativity at the altar of safety. As baby boomers blossomed into enterprising lawyers, the playground gained an elaborate set of safety standards. With standards came standardization, so that today playgrounds are no longer built so much as ordered from a catalog. The result, exemplified by the McDonald’s model, is playgrounds that are safe but spiritless. In banishing unpredictability, such designs also succeed in prescribing play to such an extent as to render it boring. What began as an effort to serve children has homogenized playground design and eliminated risk so successfully as to also eliminate interest. 
Sensitive playground designs have been exchanged for the metal and plastic contraptions that are now the paradigm of contemporary playgrounds. While total safety has undeniable parental appeal, psychologists and specialists in early childhood education believe that for play to be valuable, it needs to possess not only creativity, but also an element of danger. Educators argue that these totally safe environments lack the important elements necessary for meaningful play: variety, challenge, complexity, flexibility, adaptability and risk. Appropriate challenge adds the possibility of failure, which fuels both learning and enjoyment. Even as currents in contemporary society conspire against the playground, the argument for its importance grows. As competition for children’s attention proliferates, the perils of such designed predictability must be recognized. Just as children’s play environments have become more structured and predetermined, so too has children’s play. American culture in the 21st century has constricted child’s play as never before. Technology, educational shifts, increased pressure to compete and parental fears have combined to effect what David Elkind, professor of child development at Tufts University, describes as “the reinvention of childhood.” As the growing popularity of phrases like the “professionalization of parenthood” reveals, preparing little Quinn for kindergarten has become a full-time occupation. While the world becomes smaller and the steeplechase to Harvard faster, many parents are exchanging playtime for résumé-building activities. There is simply no time for the chaotic, unstructured fun of the local playground. Indeed, studies show that today’s children play an estimated eight hours fewer
each week than they did a decade ago. “Kids are victims of this changing perception of what good parenting is,” reports Dr. Kenneth Ginsburg, associate professor of pediatrics at the University of Pennsylvania School of Medicine. “Good parenting has suddenly become about signing your kid up for many different activities; about making sure that they get into the best college … When this happens, childhood changes: it becomes parent-driven and adult-driven, rather than child-driven.” Increasingly, psychologists lament that overscheduled kids have no time left for the real business of childhood: idle, creative, unstructured play. “I think that parents, for all the right reasons, have started to do things that ultimately are not in the best interest of kids,” remarks Susan Magsamen, co-director of the Neuro-Education Initiative in the Johns Hopkins School of Education. “In America, we have a very Puritanical work ethic, which is ‘if you just work harder; if you just push harder; if you just do more’—this is a fantastic ethic, but it has caused us to lose play.” And yet, scientists are increasingly revealing that it is precisely this type of child’s play that provides the social and intellectual abilities needed to succeed in life. Not only does play provide opportunities to practice new skills and functions, encourage autonomous thinking and environment building, promote flexibility in problem-solving, and develop creative and aesthetic appreciation—it also shapes the brain. Contrary to the widely held belief that only intellectual activities build a sharp brain, it is in play that cognitive skills are most acutely developed. Play hastens the development of the brain’s executive functions and stimulates the very neural centers that allow kids to exert control over attention, regulate emotions, and control behavior. By thriving on complexity, uncertainty and possibility, play provides essential preparation for life in the 21st century. 
It prompts us to see the world in new ways. As Psychology Today’s editor-at-large, Hara Estroff Marano, quips: “Play is the future with sneakers on.” The future may have its sneakers on, but all too often, it is told to stay indoors. Schools that eliminate recess in favor of more class time to “teach to the test,” parents who fear “stranger danger,” the rising popularity of organized activities and the lure of technology are all laying siege to the sandbox. Over the past 50 years, the increasing primacy of television and computers in children’s lives has transformed their activities in what play theorist Brian Sutton-Smith sees as a shift from a “manual” involvement with objects and places to a “symbolic” relationship with information and amusement. Increasingly, reality is exchanged for a simulation of reality, and our bodies are left behind in pursuit of the visual and the virtual. From early life through adolescence, young brains undergo a process called “pruning,” during which time 60 percent of the synaptic connections between brain cells are pared away—never to return. “How a young person spends their time and what they expose their brains to will have a profound effect on what they will be like for the rest of their lives.” In her TED2010 talk, Jane McGonigal, director of game research and development for the Institute for the Future, reported that among countries with strong gamer cultures, today’s average youth will have spent 10,000 hours playing online games by the age of 21. What—wonders Gary Small, director of UCLA’s Memory and Aging Research Center—are the effects of this increasingly technological world on the developing brain? In their July Newsweek feature, “The Creativity Crisis,” writers Po Bronson and Ashley Merryman revealed that for the first time in 50 years, American creativity scores are falling. 
While technologically enriched environments continue to make children’s IQ scores rise, creativity scores have been in steady decline since 1990, with the trend most pronounced for children in kindergarten through sixth grade. The Newsweek feature goes on to report that the correlation to lifetime creative accomplishment is three times stronger for childhood creativity than for childhood IQ. In the book To Play or Not to Play:
Is it Really a Question? author Doris Bergen suggests that play sculpts the brain and that it is crucial to understand that playful minds are as much created as they are born. “We don’t grow into creativity, we grow out of it,” remarks renowned innovation consultant Sir Ken Robinson; “or rather, we get educated out of it.” “From education to play consumption, we have unknowingly created a society of more game players than game designers,” Richardson notes, “and that’s an important distinction.” How, then, can we foster a young generation of designers rather than players? Fundamentally, it starts with empowering children to be the architects of their own play. Inextricably linked to creativity and innovation, play is vital to preparing a society capable of dreaming up—and meeting the challenges of—the world of tomorrow. The cityscape, with its abundance of unplanned and unpredictable social encounters, provides a perfect stage to foster playful, spontaneous and creative behavior. Playgrounds can function as important meeting places for people of all ages and backgrounds, and should be considered an important architectural element within the city. By fostering play, relaxation, education and community interaction, such spaces can re-envision the urban experience and energize the public realm. Martha Thorne, executive director of the Pritzker Architecture Prize, contends that just as museums were the commissions of choice for architects at the end of the 20th century, the coveted assignment of the future could well be the urban playground. While New York gave the country its first permanent playground in 1903, over the last century the city has gone from playground pioneer to philistine. Now, with architects like Michael Van Valkenburgh, David Rockwell and Frank Gehry turning their attention to Manhattan’s swing set, New York may be poised to prove Thorne right. It is time we put play back in our cultural crosshairs.
from Living Licensed: Consuming Characters in Girls’ Popular Culture by Saundra Marcel Strawberry Shortcake is imperfect, but adored. Introduced as both a toy and television special in 1980, the character is slightly chubby with a penchant for sweets, too-big feet, freckles, curly red hair and a bonnet-and-bloomers costume reminiscent of a past era. Unconventional as she was, she’s a hit—a character that was, and still is, enormously popular among girls. The original Strawberry Shortcake was different. The doll’s scent was the new and noticeable attribute when she launched. A permanent bouquet of strawberry was embedded into the molded plastic and synthetic hair of Strawberry Shortcake and her 32 friends, each themed with their own sweet dessert. But the doll also signified a change in the kind of characters that little girls were playing with. Before Shortcake, there was Barbie, a dainty-footed and unattainable image of perfection. “U.S. designers of dolls have always glorified light skin, blond hair, blue eyes, tiny noses, and thin lips,” says author Ellen Seiter in Sold Separately. But Strawberry Shortcake was a massively popular character who was neither blond nor blue-eyed, one who did not cling to traditional ideals of beauty but maneuvered subtly away from them. In retrospect, the 1980 Strawberry Shortcake looks dated. Her graphic style isn’t slick or smoothly rendered. The character was hand-illustrated by Muriel Fahrion, a staff artist at American Greetings who created more than 6,500 original works for greeting cards, among them Shortcake and her cat, Custard. But Strawberry Shortcake is dated by design. According to Fahrion, she was instructed by art director Rex Connors to “reinvent the rag doll,” based on the likeness of the early-century
Raggedy Ann character. The result is a girl with an unusual style completely disparate from the fashion-forward Barbie. Nineteenth-century frock, pantaloons, white-frilled apron, striped stockings, mittened hands, sensible shoes and short, red hair are all trademarks of the classic Raggedy Ann. Shortcake was also part of the first wave of strong female characters in the media: girls who pursue, rather than being pursued. “Strawberry Shortcake [and others] are not token female members of a male gang; and they are not drawn in the sexualized caricature of adult women, repeated since Betty Boop,” says author Marsha Kinder. And while Walt Disney gives starring roles to women, the stories position them as helpless, selfless, and dependent on a rescuer. Deviation from this format often signifies villainy—overtly sexual and confident women are frequently the “bad guys.” Strawberry Shortcake is as self-assured as she is sweet. Hers was also the first television show aimed specifically at girls. Male characters had always been preferred in the media, the assumption being that girls would watch boys’ programming but not the other way around. Even in toys there was a lack of creativity, with Barbie and realistic-looking baby dolls the norm. This is perhaps because toy and television executives were mostly male. But Marsha Kinder also speculates that the girl-centric media inaugurated with Shortcake are a product of 1970s feminist attitudes. In the beginning, Strawberry Shortcake lived on greeting cards. But she was forcibly extracted from her two-dimensional paper world, and thrust into the role of cultural blockbuster. There is no name for the unprecedented magnitude of this character launch. 
Writing about the 1980s and products of that time, critic Tom Engelhardt calls this “the Strawberry Shortcake Strategy.” He says, “for the first time on such a massive scale, a ‘character’ has been born free of its specific structure in a myth, fairy-tale, story or even cartoon.” Even Bernard Loomis, the president of Kenner Parker Products who is credited with her fame, found few but bold words to articulate his intentions. “Mark the date and time,” he said. “We’re going to make history.” Shortcake in the Baking This red-headed sweetie, created by Muriel Fahrion at American Greetings, was a “promotional”: an ambiguous artwork, not associated with any season, that could be produced in between major holidays. Fahrion was just one of over 300 graphic artists churning out designs for the company. The directive to base Shortcake on Raggedy Ann was Fahrion’s only instruction; she was the sole governor of the saccharine universe. “I just got lost in the world of Shortcake and all the characters,” says Fahrion. “I didn’t have a committee telling me what to draw and how to draw. I just made them up. I just did what was fun.” For toymaker Loomis, Shortcake was the realization of an idea that had already been germinating for some time. In 1976, Kenner Parker Products was making toys for the Star Wars movie franchise, and having reaped great profits, Loomis wanted to reap that bounty again. But this time, he wasn’t content to tag along on the success of a feature film. He was determined to skip the filmmaking and make his own history. By 1978, Loomis was shopping for characters, and he suspected that American Greetings might have what he needed. Summoned to meet with Loomis was Tom Wilson, the creative vice president of American Greetings. To the meeting, he brought a portfolio of characters that had been used in the previous year’s card line. One by one, characters were presented, and one by one rejected, until Strawberry Shortcake made her appearance. 
This was the girl Loomis wanted. This young lady, imperfect and antiquated, was to be plucked from obscurity and positioned on an international stage. This girl was going to be famous.
“It all came out at once,” Fahrion says. “It was everything, in one big launch. Can you imagine?” On March 28, 1980, the first of only six Strawberry Shortcake animated specials aired, coinciding with a bombardment of consumer products at retail. Toys were the linchpin, but there was also apparel, books, décor, gifts, sporting goods and housewares. She was everywhere. She was big. And she was perfectly orchestrated. The Problem with Pink It was during the 1980s, during Strawberry Shortcake’s reign, that pink really began to be recognized as a color for girls. It was a marketing strategy: divide the children’s market in half, and make parents buy clothing and toys for two—girl and boy. But by segmenting, marketers also succeeded in highlighting and broadening gender differences. Possibilities began to exist within narrow ranges of color. “Pink for girls and blue for boys. It’s just ‘natural,’ we’ve been told,” says Leslie Feinberg, author of Trans Liberation: Beyond Pink or Blue. But Feinberg evokes a time when this was not true, dispelling popular notions about instinctive preferences. Before the 20th century, children were not color coded at all. Babies wore white or unbleached cloth out of practicality, and both boys and girls wore gender-neutral dresses. At the turn of the century, girls were wearing blue, invoking the color of the Virgin Mary’s attire. Boys wore pink, a shade of red, an intense color symbolizing strength. “Simplistic and rigid gender codes are neither eternal nor natural,” says Feinberg. Color assignment is a social concept, and adhering to constructed norms implies a gender citizenship. But belonging to only one color narrows the choices that girls have. Of all the colors in the spectrum, it is pink that little girls reach for. It’s not the best color; it’s just the one they’ve been taught to like. The problem with pink is that girlhood has become monochromatic. 
“New-stalgia” In an appeal to the nostalgic sensibilities of the original audience, who by this point were likely to have young girls of their own, Strawberry Shortcake was redesigned and reintroduced in 2002. This Shortcake has a slimmer figure and smaller feet, and wears jeans, a red sweatshirt and a striped shirt. With a slicker illustration style and up-to-date attire, she looks nothing like a reinvented rag doll. Artist Muriel Fahrion sees little resemblance to her original sweet creation. “It’s just a totally different property,” she says. “It’s well done. I appreciate the new design, but it’s not mine. It’s a different Shortcake for a different generation.” The property was re-launched yet again in 2009, when Shortcake received a “fruit-forward” makeover, meaning less emphasis on sugary desserts and more on healthy fruits. Considering the increasing criticism that connects character marketing to the consumption of fast foods and unhealthy snacks, this seems an appropriate evolution. This time the green-and-white stockings have returned, although the character’s ethos continues to move even further from the original. This Strawberry Shortcake is thinner again, and she poses suggestively with shoulders lifted and toes pointed, as if she were modeling a fashion collection. Her hair—now more magenta than red—is long and luscious, and her signature freckles have nearly disappeared. With large doe eyes, this thin, couture-conscious Strawberry Shortcake regresses to a singular reliance on appearance. Sadly, what made the original character different is now gone.
from Listen to Your Chair: Design and the Art of Storytelling by Amelie Znidaric When Ettore Sottsass went traveling with Eulalia, he had already turned away from Valentine. Yet, as with all love affairs, it had started all sunshine and roses. “Her name is Valentine,” Sottsass wrote in the summer of ’69, presenting the bright red-and-orange typewriter he had designed with Perry King, “and she was intended for use any place except in an office, so as not to remind anyone of monotonous working hours, but rather to keep amateur poets company on quiet Sundays in the country.” He praised Valentine as “an anti-machine machine,” an “unpretentious toy,” a “successful transformation of a useful object into a means of expression.” Ads and posters, designed by Sottsass himself and by other graphic designers, showed sexy young women: scantily clad libertines, wild and free-spirited. We see beaches and bikinis, trains and planes, blond hair and tanned thighs. Like Pygmalion, Sottsass had fallen in love with his own creation. Or so it seems. For reality and legend blur. Sottsass has been credited repeatedly with these words of courtship, yet the original article from 1969, published in the Italian design magazine Abitare, names no author. Was it by the maestro himself, refraining from official authorship? By an anonymous editor? Or a press release, copied without further editing? Sottsass’s name doesn’t show up in the magazine’s masthead either. But then, does it matter? For history, it certainly should. For the story, however, it doesn’t. “From a small number of perfectly ordinary words a tapestry takes shape, suggestive of a dream, but close enough to reality which, more often than not, remains elusive,” says Hassan, the protagonist of Joydeep Roy-Bhattacharya’s novel The Storyteller of Marrakesh. Soon enough, the romance was over anyway. The year 1970 was a difficult one for Sottsass. 
After 21 years, he left his wife Fernanda Pivano, a translator and writer from Turin, and started a semi-nomadic life with Eulalia Grau, a young artist from Barcelona. And after not even two years, he turned his back on Valentine, at the same time ending a much longer-lasting relationship with industrial design and dedicating himself to art and photography instead. Sottsass recovered from his life crisis. An invitation to participate in the Cooper-Hewitt’s inaugural exhibition in 1976 in New York came as an opportunity to resume work and, eventually, his previous profession as an architect. What remained was an unforgiving resentment toward Valentine. “They told me to design a very poor machine,” Sottsass told Icon magazine almost 40 years later, in April 2007. “So I said, OK, if this machine has to become a sort of biro of typewriters, I design a very popular machine. It was a mistake.” He also dismissed Valentine as being “too obvious, a bit like a girl wearing a very short skirt and too much makeup.” Poor Valentine got to carry the whole load of Sottsass’s disdain for a consumerist society on her fragile shoulders. Yet, it was her very sex appeal that made Valentine special. She was, after all, a typewriter. And typewriters, in 1969, were gray, beige, dull. When the Italian manufacturer Olivetti launched Valentine in February of that year, it was like Brigitte Bardot entering a universe of Plain Janes. Light as a feather, sumptuous lines, a risqué red and a cleavage revealing two perfectly sized orange spools. Dressing up—Valentine was portable—she slipped into a sleek case, red of course. “Red is the color of the Communist flag,” Sottsass said to The New York Times in 2006, “the color that makes a surgeon move faster and the color of passion.” And the Times writer adds, “Red was his way of bringing a machine from the business world into the realm of the senses and emotions, or from the office into the bedroom.”
“There is nothing harder than the creation of fictional character,” says James Wood in his handbook How Fiction Works. Ettore Sottsass, architect, designer, painter, photographer, writer and philosopher, born in 1917 in Innsbruck, Austria, and died in 2007 in Milan, Italy, was a master in the creation of fictional character. “His greatest innovation is nothing less than giving souls to objects,” said Paola Antonelli, curator in the Department of Architecture and Design at the Museum of Modern Art, in 1998. And around those soulful objects, Sottsass wove intriguing tales, using shape, texture, volume and color, which, for him, were “languages as direct as the spoken word.” Like any good storyteller, Sottsass crafted his words fastidiously. This might sound implausible, especially in view of his increasingly flamboyant style, the opulence of postmodern icons such as his Carlton shelf from 1981: a motley collection of askew laminated plywood boards. Yet, the Italian maestro was neither flashy nor obvious in his sense-making. “In your latest projects, there is a level of absolute abstraction,” said the Italian designer Fabio Novembre in a conversation with Sottsass in 2005, “a minimal gesture will do to convey meaning.” But Sottsass had always kept a high level of abstraction, even in his earlier work. Thus he followed a golden rule of storytelling: show; don’t tell. “Describing rather than evoking is perhaps the most common error of beginner writers,” says Garry Disher in his handbook Writing Fiction. The same holds true for oral and visual storytelling. And as we can see in Stefano Giovannoni’s work, for example, the show-don’t-tell mistake is not limited to beginners. His eggcups, corkscrews, cotton swab holders and other knick-knacks for Alessi have literal faces: eyes, noses, ears. Yet, it doesn’t take that much to interpret an object as a human or animal figure. 
We will take the smallest hint of a beaming smile, a lazy gaze, or a pair of slender legs and start to read an object as a character. In Valentine’s case, all we need is some red and two orange spools. It speaks in favor of Sottsass’s and Valentine’s qualities as storytellers that the narrative evolving from the little red typewriter becomes so rich and layered. Through Valentine, the Italian maestro not only talked to us about the humdrum of everyday office life and the desire for color, freedom and eros. Anticipating the fervid critique of a rigidly ideological modernism that Sottsass would express by cofounding the Memphis group in 1981, Valentine and her master storyteller also made a statement. “Romantics don’t often lead the avant-garde—they tend to prefer the rearguard,” says Icon magazine, “but in using colors, forms and materials that snubbed the efficiency of the machine, that is what [Sottsass] was doing through the late 1960s and 70s.” Ooh, romance. Is it coincidence that Valentine wears red, the color of Cupid’s favorite target? Is it coincidence that she bears the name of the lovers’ patron saint? Hardly, as it fits neatly into Olivetti’s marketing plan: Valentine hit the stores on February 14. And Sottsass’s posters for the ad campaign carried captions like, “Pick me, says the flower. Eat me, says the fruit. Love pleads: don’t forget me. And Valentine: take me with you.” A typewriter meant to put on paper not only poetry, but also love letters. Yet, what sounds like a melodrama, overly performed for the sake of filthy lucre, seemed to reflect the true culture of Olivetti, too. “It was a fantastic company,” says graphic designer Milton Glaser, who has worked with Olivetti for decades, “and the company was full of poets.” Valentine, first wooed, then rejected by her creator; Valentine, the sexy girl, the free spirit, the big romantic; Valentine, the design critic and marketing genius. With her many facets, the red typewriter is what E.M. 
Forster, grand novelist and author of Aspects of the Novel, calls a “round” character. “The test of a round character is whether it is capable of surprising in a convincing way,” he says. In contrast, flat characters “are constructed round a single idea
or quality: when there is more than one factor in them, we get the beginning of the curve towards the round.” (…) If we look at objects as storytellers and stories, we should look at them as fiction, too—all the more if they have a pedigree and a public relations agenda. This is the only exit from a fruitless debate over whether design is true or not and, ultimately, the most truthful way to view objects. “The only reason that the phrase ‘fictional truth’ is not an oxymoron, as ‘fictitious truth’ would be, is that fiction is a genre whereas lies are not,” says Michael Riffaterre in his book Fictional Truth. “Being a genre, it rests on conventions, of which the first and perhaps only one is that fiction specifically, but not always explicitly, excludes the intention to deceive. A novel always contains signs whose function is to remind readers that the tale they are being told is imaginary.” Fiction comes from the Latin fictio, “act of fashioning, shaping, making”—and what is design, if not an act of fashioning, shaping, or making?
from Untangling the Naps: The Afro Talks Back by Michele Y. Washington If the 1960s was all about the transformation of Black self-identity—the “we” generation—then the 1970s can be called the “me” generation, a time when everyone turned their attention to conspicuous consumption. At this time, the ideal of cultural authenticity engendered by the Black Power Movement had made an impact on most cultural and sociopolitical elements of Black American life, and the changes that emerged were clearly realized in the images used in advertisements, films, television programs and fashion, many of which were directed towards the new Black consumer market. Black Magazines Support the Acceptance of the Afro By the 1970s, just about all the Black lifestyle magazines carefully stocked images of men, women and youngsters who wore Afros. Leading the pack were Ebony, Jet, Sepia and The Urbanite, magazines that informed readers about the latest word on the changing lifestyles of prominent Black artists, writers, celebrities and political figures. Ebony publisher John H. Johnson realized the power of his Black middle-class readership, the kind of power that came with increased spending dollars. He also knew the power of positive visual images, and based the design and editorial content of Ebony on such popular mainstream magazines as Life and Look. Using sophisticated images of Black people who sported an Afro, Johnson was able to plumb three entities: an allegiant Black middle-class readership; a new Black consumer market; and a nascent culture intent on changing stereotypical attitudes and discarding old cultural myths about Black people. In the previous decades, the groups in our society that controlled both language and the media decided the “meaning” of popular visual culture. With the growing understanding of Black Power and the introduction of Black magazines into the marketplace, the control shifted to Black authorship.
By the 1970s Black visual artists, writers, political activists, performers and entrepreneurs were fully defining their own visual culture, language and signifiers. The design and photographic direction of advertisements directed toward Black Americans would quickly change as a result of the Afro, and of political demands for equality. There was a sudden proliferation of hip-looking Black models wearing the latest fashions and sporting Afros. In cities heavily populated with
Black people, huge billboards plastered with new images beckoned to passersby, offering goods and services that were specifically targeted to African-Americans, from hair care products to cigarettes, to liquor and much more. The first issue of Essence magazine, photographed by Tomas, features a beautiful Black woman with a huge sculpted Afro. The groundbreaking magazine was started by a group of Black businessmen, including Jonathan Blount, Cecil Hollingsworth, Edward Lewis and Clarence O. Smith. They came together to create a publication that would be aimed at the new, young, urban middle-class Black woman between the ages of 18 and 34. The owners convinced photographer Gordon Parks to come aboard as the editorial director, to oversee the look and editorial content. As a writer, composer and filmmaker, Parks was the embodiment of creativity, and added a kind of legitimacy to the editorial choices. The Essence formula included chronicling the dreams and aspirations of a diverse, contemporary group of women who sought advice on everything from relationships to politics, to beauty tips. And it backed this information up with dynamic photography that illustrated how these women lived and saw themselves. Essence pushed hard-to-get White advertisers to rethink beauty, to go beyond just using Black female models paired with White women, all of them with flowing straight hair. The magazine’s creators delivered a clear message to Madison Avenue advertising executives, who would need an attitude adjustment in their perception of Black beauty. Soul Marketing Revolutionizes Nappy Hair In the design of Black hair products in the late 1960s and through the 1970s, Black advertising agencies clearly took advantage of the Black aesthetic created by the Black Arts Movement. In Chicago, Vince Cullers, of Vince Cullers Advertising, realized the monetary value of the Black aesthetic in creating product brands.
He coined the term “Soul Marketing,” which actively sought to “speak” to Black consumers. The Vince Cullers agency created a niche-within-a-niche market with its ethnic ad campaigns and packaging for Afro Sheen hair care products. Cullers played off the strident words of activist Stokely Carmichael in his statement that Blacks should stop being ashamed of their broad noses, thick lips and nappy hair. Cullers’s designs referenced the ideology of Black pride by featuring Blacks of varying ages and skin tones, in regal or loving contexts. Incorporating Afrocentric symbols and African languages, he created a distinct look that set Afro Sheen apart. He also used copy in both Swahili and English that read: “A beautiful new product for a beautiful new people.” This new approach attracted the attention of a growing urban middle-class community interested in embracing Black identity. The packaging design articulates the cultural expressions of Black pride by borrowing Afrocentric icons and images. These designs incorporate a range of warm hues, including yellows and oranges—colors that mimic earth tones or the wood of African sculpture. The advertising copy slogans use Swahili words that direct the consumer to embrace their African roots. Whether written in Swahili or English, the meaning conveyed is the same: You are beautiful and confident people. The models wear the natural hairstyle as a symbol of pride, and are usually posed in positions of power—head upright, eyes looking forward. These ads not only symbolize Black pride, but also suggest to the consumer that they no longer need look to the subservient icons of the past, such as Aunt Jemima, Uncle Ben or the Gold Dust Twins. Black consumers had already begun to lower their tolerance for these derogatory images well before Madison Avenue advertising agencies figured out that they were outdated and offensive. Most of these ads already had limited mainstream media placement, and appeared in just a handful of Black magazines.
But it is interesting to note that the new wave of advertisements was in fact mainly created by White Madison Avenue agencies, which rendered their own view of the savvy Black consumer. This modern marketplace mix would bring together a creative group of people who had seldom crossed paths before, and certainly never worked toward the same ends. White-owned companies were also eager to garner some of the Black consumer dollars, and hair and cosmetic companies were ready to take advantage of the Afro trend. Clairol had already produced ads for hair coloring that featured Black women, so it was not such a broad leap to promote specific products such as their “Hair So New” creme rinse. One ad features a cartoon illustration that depicts a woman desperately trying to comb her kinky Afro. The caption reads: “Don’t beat around the bush, get Hair So New.” Reappropriation of the Afro in “Blaxploitation” Films and Television The 1970s was the decade that would see the reimaging of Blackness in films and television. A new film genre termed “Blaxploitation” (an awkward combination of the words “Black” and “exploitation”) would spawn a bevy of films of questionable quality that played on the tragicomedy of ghetto life. The films featured a Black hero who was fighting some kind of injustice, and the cinematic images in these movies highlighted newcomers such as actors Jim Kelly, Richard Roundtree, Pam Grier and Tamara Dobson, creating characters that would take on lives of their own. And like everyone else, their characters favored skin-tight leather trousers, balloon Afros and tough-talking dialogue that would disappear almost as fast as it had appeared. Such blockbuster films as Shaft, Coffy and Cleopatra Jones featured these respective actors, and for a while made virtual superheroes of the men and sex-goddesses-with-super-Afros of the women.
In reviewing the genre of Blaxploitation films and their construction of Black images, it is interesting to take another look at these films, as well as the 1971 hit, Sweet Sweetback’s Baadasssss Song, which was written, directed and produced by filmmaker Melvin Van Peebles, who is credited with launching the independent Black film industry. British filmmaker and visual artist Isaac Julien does just that in a 2002 documentary called BaadAsssss Cinema that reconstructs the entire genre and enlists the help of Van Peebles, Roundtree, Grier and others (including filmmaker Quentin Tarantino). Julien’s film examines the short-lived, highly commercial and surprisingly influential Blaxploitation films produced for a Black audience. In the documentary, social critic, writer and University of Mississippi professor bell hooks dissects the image of a particular character type—the Black female political character often played by Grier. Professor hooks deconstructs the identity of the protagonist, Coffy, in the film of the same name. “I think that she is one of the more meaningful resistance images of a black female to come out of these films,” hooks says, “and it’s important that resistance images begin with the original Coffy film, in 1973.” She describes the 1997 movie, Jackie Brown, by Tarantino, as a “remake” of Coffy. “Tarantino has the capacity and love of the character to bring it into a new generation and time,” she says. Ultimately, what Tarantino did was to erase all the pornography in Coffy and mold the protagonist’s identity into a multiplicity of characters that represent real women with problems, minus the gun and dagger whipping out of her Afro. But he also stripped away a lot of the political issues that were the foundation of the character.
In contrast, the Black musical variety and dance show, “Soul Train,” created by Chicago producer Don Cornelius, was quickly dubbed the “hippest trip in America.” The TV show featured young Black people who wore the latest fashions and danced to the day’s funkiest soul music. Most of them wore big Afros and were decked out in the most stylish ’70s
fashions. The program became the perfect pipeline for advertising the latest Black hair care products that were flooding the beauty market. Much like the new Black magazines, the show’s commercials mainly featured Blacks, whose images prompted young viewers to replicate the hair and fashions worn by the dancers and musical guests. Eventually, Afro Sheen hair care products became the exclusive sponsor of the nationally syndicated show. One Afro Sheen commercial featured the ghost of Black abolitionist Frederick Douglass, who lectured a Black youngster about his horrible hair as he gave him a lesson in Black history. The actor playing Frederick Douglass says, “Haven’t you forgotten something?” when the young man is about to walk off-screen. The boy replies, “Say, aren’t you Frederick Douglass?” To which Douglass replies, “Are you going out in the world with your hair like that?”
from Mirror Image Maker: Looking at Music Videos of the Internet Age by Aileen Kwun Detractors of the pop music machine would naively have you believe that “It’s all about the music, man”—or that it should be. But the relationship between a musician’s image, sound and performance has never been so simple. Industrialization of the recording process throughout the 20th century ensured that listening to a song aroused more than the ears. A model founded on the prospect of capturing and packaging the most fleeting, immaterial of artistic mediums—the moving particles and vibrations of sound—the commercial business of selling music has depended on augmenting the listening experience with profitable tactile and visual products. Musician and pop cultural thinker David Byrne explains in the November 2007 issue of Wired: Before recording technology existed, you could not separate music from its social context. Epic songs and ballads, troubadours, courtly entertainments, church music, shamanic chants, pub sing-alongs, ceremonial music, military music, dance music […] It was communal and often utilitarian. You couldn’t take it home, copy it, sell it as a commodity or even hear it again. Music was an experience, intimately married to your life. You could pay to hear music, but after you did, it was over, gone—a memory. In the 1890s, as the invention of the gramophone brought sound recording technology to a music industry once defined by the sale of printed sheet music, consumers reacted in fear of the cultural change that records presented. Unlike radio, which cast out a transmission of live sound that left as soon as it occurred, the mechanization of playback of recorded sound was perceived as frightening.
The first test audiences of recorded sound remarked that the experience of listening to the “voice without a face” on playback was like hearing “the devil every time.” But even as the record disembodied the voice of the individual, it significantly helped along the rise of the music celebrity, performance and public image. As the institution of selling and buying music records became standard, fledgling recording companies carved out the market by creating records that highlighted an individual’s unique performance skill, replacing “anonymous renditions of well-known pieces [with those by well-known] singers, bandleaders, and monologuists.” In the 1940s, as recording companies grew and fine-tuned industrialized production, sleeve art became a standard part of the packaging process for record albums, tying the consumers’ experience of the tactile product to bold, colorful abstractions of graphic art—and putting in place a tactile-visual fetish that persists today among collectors and vinyl enthusiasts.
Though not intended for sale and as immaterial as sound itself, the traditional broadcast music video has played a vital role in further visually packaging both the music and its performer. Even as earlier commercial audiovisuals of “live” music programs in the 1950s launched performers like Elvis into superstardom with the broadcast of his image, pop critic and cultural theorist Simon Frith points out that “it was only with the emergence of cable television in the 1980s that a music television service was developed with anything like the day-to-day significance of music radio. Music television, MTV, duly aped Top 40 radio formats, with playlists, veejays, ‘hot’ releases, ‘breaking’ singles, and more.” A century after the institution of the recording industry, a new tide of technological developments has caused us yet again to reposition the way we consume music in vastly significant ways. As the immaterial, compressed digital format of the mp3 overrides our cultural attachments and predisposition toward the tactile ownership of records and CDs—facilitating the ease of pirating and sharing music without purchase and reducing sleeve art to pixelated thumbnails—the musical experience is becoming disembodied yet again: this time, from the high-production packaging that set the foundations for the recording industry one hundred years ago. As record labels struggle to maintain sales with an aging business model—their sales plummeting by more than 75% in the past decade—in the digital age of screens, one part of the traditional packaging process continues to see a growth period: the music video. Armed with a set of cutting-edge design tools specific to the Web, a new set of Internet auteurs are crafting new ways for us to look at music in the 21st century.
(…) Last December, Masashi Kawamura, a self-described “art director/film director/creative director/whatever,” catapulted his childhood friends in Sour, a relatively obscure indie rock band from Japan, into the mega micro-blogosphere publicity ring with the viral music video, “Mirror.” Instead of barraging the network with a slew of Tweets to gain a following, Kawamura appropriated the entire application—pulling the cloth from beneath the table and using it as an overarching canvas for the entire project. The result was so self-reflexively postmodern and visually complex that it put writers of the Observer’s Very Short List at a loss for words. Days after the video’s launch, the VSL—a daily email digest known for combing the Web for its most astounding cultural ephemera and parsing it with succinct wit—admitted, “We’ve spent the whole weekend scrambling for words to describe it.” The concept of forming one’s identity through the visual bits and pieces of social media outlets was a central source of Kawamura’s inspiration for “Mirror,” a song whose lyrics run, “Everything that I see with these eyes / Are the reflections of my heart … If I can’t see through the frosted glass / I should keep on polishing my soul.” Sourcing preexisting imagery of the viewer from Google, Twitter and Facebook’s APIs, Kawamura and his Web wiz team draw attention with both a glorification and a self-reflexive commentary on what social media best promotes: the act of voluntary display and the underlying desire to find connectivity in all of the minute, disparate instances of self-broadcasting. Using a trifecta of the most popular social platforms, “Mirror” both questions and celebrates the quotidian set of tools used to construct online identities. After the video finishes loading, it splices us into a sub-reality by showing us a picture of a screen within our screen. A Google homepage appears and, without command, types the viewer’s name into the search engine.
As the results resolve into a page of images, the thumbnails storm into the formation of a stick figure, who then proceeds to walk across the screen against a scroll of visuals that includes a Google Map of the viewer’s location, his Facebook profile and his Twitter user page. Traveling
through the web’s portal of space and time, the stick figure’s journey has the viewer’s online life flash before his very eyes. In a 21st-century rendition of Impressionistic portraiture, the video reflects back an image of the viewer through a cobbled collage of avatars, status updates and images. Whereas 19th-century painters depicted scenes of life through the capture of colorful refractions of light, Kawamura creates the image with all of the self-projected fragments of the viewer’s own self-constructed online identity. But far from serene, the digital mash-up of images is disquieting, even grotesque. Members of Sour are shown performing in a variety of screen configurations: first within a video framed within a 300 by 250 pixel ad space of a Twitter page; then in a YouTube video frame and, finally, within a Hockney-esque collage of animated browser windows. With the gaze of imagery reversed, somehow the act of rock star idolatry here feels secondary. The band’s performance and song—which, one quickly forgets, is the reason this video exists at all—are quickly shoved out of the limelight by the video’s real star. The viewer, pictured on the screen in a Chuck Close-like portrait that employs a grid of minute avatars, is reduced to pixelated pastiche. It’s an audiovisual cannonball that, at just three minutes and fifty-two seconds in length, would set the grandfather of contemporary media theory, Marshall McLuhan, rolling in his grave. His 1964 writings on technology could not have been more prophetic in describing the circumstances of postmodern life.
In his magnum opus, Understanding Media, McLuhan writes, “With the arrival of electric technology, man extended, or set outside himself, a live model of the central nervous system itself […] too violent and superstimulated a social experience for the central nervous system to endure.” If director Chris Milk’s “Wilderness Downtown” video for Arcade Fire provokes a wistful nostalgia, here, the sight inspires a surreal horror—not so much for the pronounced visual metaphor of our online social lives as a mirror of our personal identities, but for the arresting reality that in all of our free-flowing, self-absorbed self-broadcasting, we are ironically, innocently unaware through it all that someone could be watching. In this very instance, someone is watching. That someone is you. A Rear Window scenario of the 21st century, the video turns the gaze back onto the viewer, making him acutely aware of the sheer magnitude of personal data output by regular users of these social media networks. Gathering together all the minutiae broadcast by the individual onto the World Wide Web, the set of personal images and data Kawamura reflects back onto a viewer is an overwhelming fulfillment of McLuhan’s prophetic statement: a portrait of internal neuroses splayed thin upon the vast social media circuit.
from Designing Sound: Aural Agency in the Twenty-First Century by Stephanie Jönsson In the late 20th century, digital memory and personal computer technology introduced the possibility of including pre-recorded sounds as part of an interface. As auspicious as this technology is for the field of interaction design, the power to distribute sound into virtually any device brings with it specific challenges and obstacles. With respect to the ways in which we listen to sound, a surplus of non-actionable sounds in any given environment interferes with our reactions and serves only to annoy and distract us. In some cases, the discordant or otherwise “ugly” acoustics of an industrial design can interfere directly with its functionality far more than other aesthetic faux pas, much like the ham-handed application of a garish typeface.
Designing the aural aspects of a product is a natural extension of experience design, but since audio feedback is rarely composed with respect to how and in what contexts the user will be listening to it, the sounds produced by products are usually relegated to the simplistic, binary role of delivering affirmative feedback exclusively. The results are notorious: car alarms that yell at us hysterically, appliances that beep unintelligibly and cellular phones that chirp botched versions of pop hits. When sound is used semiotically, or in the absence of human presence, it is necessary for designers to decide exactly what the sound will communicate to the user and what sonic properties will be required in order to fulfill this purpose. In the 1920s, musicologist Paul Nettl divided music into two categories: music that accompanies something else, or utilitarian music (Gebrauchsmusik), and music to be listened to for its own sake, or freestanding music (Eigenständige Musik). Even at this early juncture in the age of technology-bound sound, different ways of listening were being stratified. Music theorist Anahid Kassabian discusses what she calls “inattentive engagement” and argues that the kind of distracted listening that occurs with utilitarian music still conditions our subjectivity with respect to the musical minutiae embedded in industrial designs, and should not be disregarded. In his book, Designing Pleasurable Products, design strategist Patrick Jordan places the types of sounds used in current products into two categories: signal and identity. Signals give users cues about the state of a product: the sound of a jar lid popping indicates that the food inside is factory fresh, the roar of a car engine indicates that a vehicle has started up. These examples refer to the natural or “consequential” sounds that a product makes simply as a result of being in a particular state.
In contrast, artificial, or “added,” sounds are attached to announce that something has occurred within the product, and this occurrence usually requires some sort of reaction from the user. The whistle on a teakettle, for instance, tells the user that the water has boiled and the tea is ready to be steeped. Certain designs make a series of successive sounds that change in order to signal that different processes within have occurred and that these events may require the user’s attention; imagine a stove with a timer that plays a warning beep one minute prior to completion and a different beep, or series of beeps, upon completion. The creation of cacophony is one potential hazard when sound is not standardized across a family of devices, but just as significant is the fact that we listen to alarming sounds, like those created by the motor or horn of a car, differently than we do other non-threatening, more instructive sounds. Although we no longer hear them regularly in their original modality, historical acoustic sources of sounds (such as whistles, church bells and train horns) are still very much with us, so much so that we often think about and define modern digital sounds metaphorically in terms of their old counterparts. To produce sound sans electricity, objects must be activated manually in order to vibrate and create resonance. Old air raid sirens, church bells and organ pipes were designed based on the energy one person could deliver by cranking, shaking or blowing. Electrically charged designs still work with the same sound activation principles in mind. A mechanical telephone would receive voltage, providing just enough power to repeatedly slap a tiny hammer against a metal bell to produce a ringing sound. The archetypical telephone ring was designed to produce the maximum sound pressure level in the air from minimum voltage; sound quality was entirely dependent upon how these historical sounds were produced.
The formal properties of these early sounds were based exclusively on the engineer’s aesthetic, where the focus was on producing an energy-efficient signal rather than a miniature composition.
During the 20th century many of the sounds we still hear in designs today were codified. We’ve grown so accustomed to hearing certain sounds in certain contexts that we react in a Pavlovian way. If designers were to completely overhaul established acoustic systems, especially in alarm devices, they would run the risk of disrupting our very delicate response system. Changing ritualized behavior can sway a person’s sense of equilibrium and create cognitive dissonance. This does not mean that sound systems are completely inflexible; in fact, subtle compositional changes to the familiar beeps and chirps in many devices can communicate very specific information to the user and, in turn, improve their usability. In a 1998 study, Patrick Jordan addressed these potential communication difficulties using an approach wherein the designers involved learned to “sonify” metaphorical information. Industrial designers created a list of descriptors that could fit the “personalities” of sounds. In total, around 25 pairs of polarized descriptors were generated. Examples include masculine and feminine, strong and weak, intense and subtle, dirty and clean, cold and warm, and modern and traditional. Surprisingly, there was a high level of agreement about the descriptors that applied to a particular sound. Researchers at Delft University’s Sound Design program refer to this type of adjective-bound composing as “audiolization” or “acoustic sketching.” Elif Özcan, a researcher at Delft, likens this process to visual sketching and suggests that linking an abstraction, like sound, with semantics can make an immaterial concept more tangible. It is often difficult to pin down in terms of formal properties exactly what it is that makes a sound appealing or distressing. For example, it’s not easy to articulate why you may prefer the Beatles to the Rolling Stones in terms of the formal properties of the music they make.
However, it is easier for people to describe music in terms of its experiential properties—or its personality. As a primer to her process, Özcan asked what physical features of a product could evoke friendliness: Designers could use rounded shapes, bright colors that do not hurt the eyes, or softer materials in order to evoke friendliness. Using this idiosyncratic approach, designers could search for a bath duck with a squeaky sound, cat bells for their jingly sound, or a wooden wind chime for its full, round sound. Then, they can analyze the spectral-temporal content of the material collected and come up with physical sound descriptions: so in this case, we can understand that friendliness for sound means overlapping, repetitive sonic events with a rather high-pitched sound that has a short round envelope. After using a similar audiolization procedure with their set of 25 adjectives, Jordan’s team applied the appropriate sounds to their chosen design, a sandwich maker. The designers imagined the following scenario to understand how this device would actually interact with the user in a realistic domestic setting: a person sits in his or her living room watching television, while the sandwich maker toasts bread for a mid-show snack. Three different sounds representing three different phases of the cooking procedure are implemented: one corresponds to the machine starting, one to the toast being browned and one to the toast being burned. The personality of the sound starts as friendly and informative but grows increasingly stern and authoritarian as the snack starts to burn. Jordan’s team was aware that as the product was associated with food preparation, the device should have modern, calm and clean experiential properties.
So in this case, as is the case with most sonified designs, the sound had three purposes: as a signal to inform the user of the state of the product; as a navigating sound to indicate the degree of doneness; and as an identity sound, to indicate
cleanliness, hygiene and efficiency. While not technically complicated, these subtle variances in tone and frequency can communicate specific information to the listener, and in turn, these sounds may be perceived as a necessary element of the physical design instead of an ancillary embellishment.
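Özcan’s description of “friendly” sound (repetitive, high-pitched events with a short round envelope) and the escalating three-phase scenario described above can be read almost as a synthesis recipe. The following is a minimal sketch of that reading in Python with NumPy; the specific frequencies, durations and helper names (tone, chirp_pattern) are illustrative assumptions, not Jordan’s or Özcan’s actual parameters:

```python
import numpy as np

SR = 44100  # sample rate, Hz

def tone(freq_hz, dur_s):
    """A sine burst shaped by a Hann window -- one reading of
    Ozcan's 'short round envelope': a soft, rounded attack and decay."""
    t = np.linspace(0.0, dur_s, int(SR * dur_s), endpoint=False)
    envelope = np.hanning(t.size)
    return np.sin(2 * np.pi * freq_hz * t) * envelope

def chirp_pattern(freq_hz, n_events, gap_s):
    """Repeat a short burst n_events times with a gap between repeats.
    'Friendly' reads as high-pitched, quick and sparse; 'stern' as
    lower, denser and more insistent."""
    burst = tone(freq_hz, 0.08)
    gap = np.zeros(int(SR * gap_s))
    return np.concatenate([np.concatenate([burst, gap])] * n_events)

# Three hypothetical phases for the sandwich-maker scenario:
starting = chirp_pattern(1760.0, 2, 0.05)  # friendly: high, quick, sparse
browned  = chirp_pattern(1320.0, 3, 0.03)  # more insistent
burned   = chirp_pattern( 660.0, 6, 0.01)  # stern: lower, denser, longer
```

The arrays could be written to a WAV file or played back directly; the point of the sketch is only that the adjective pairs from the audiolization studies map onto a handful of measurable parameters (pitch, event density, envelope shape), which is what makes “acoustic sketching” tractable at all.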
empty crumbling buildings, cropping the more vibrant parts of the city from view to endorse a tragic perspective of decline and collapse; this sparked a rebuttal produced by Palladium, a footwear company (arguably an unexpected source of urban criticism), titled Detroit Lives. This series of three film shorts, posted online in September 2010, starred Johnny Knoxville driving around the city and interviewing impassioned Detroit advocates. The debate rages on today: dying or thriving depends a lot on whom you ask in Detroit.
from The Detroiter: Resident Design Initiatives in a City Reshaping by Sarah F. Cox
How to address that reputation for decline is a topic that residents and the Mayor approach with a new fervor. Shrinkage is complicated and not automatically a bad thing, as John Gallagher argues extensively while advocating the strength in Detroit’s smaller size; he points out that many small cities rank as “best places to live.” But a particular outcome of shrinkage is nearly impossible to defend—abandoned architecture that has lost useable potential, commonly referred to as blight. Vacancy within city borders leads to unsafe conditions— crime—and a ripple effect of bad economics. Maintaining roads rarely used or basic services like sanitation and police protection over such a huge, sparsely populated area, carries significant cost; when the population thins, cashstrapped cities have to adjust so that spending is proportionate to the tax base. When there are more buildings than residents can use; the city has a responsibility to repurpose the land.
Does Detroit have problems? Yes. Many of the debates about the city’s urban health and future depend on what one wants to call a “problem.” The most prevalent example in this debate is vacancy: Detroit is a 139-square-mile city with 40 vacant square miles. Those are simple facts. How one decides to interpret them says a lot about one’s biases, urban perspectives and familiarity with the many layers of Detroit’s complex past. The Detroit Works Project is an initiative of Mayor Dave Bing—who was elected to complete the previous Mayor’s term on May 5, 2009, and won re-election for a full term on November 3, 2009—and it is also the best current representation of the city government’s perspective on urban design and planning issues. The project’s website avoids the word “problems” with a page that calls out “Challenges and Opportunities.” It lists the following:
• A 60% population decline since its peak in 1950 (from 1.85 million to 800,000)
• An unemployment rate of 30%
• 60,000 parcels of vacant space, which are underutilized
• The lack of a regional transportation authority, which makes the Detroit Metro area the only major metropolitan area in America without one
• A city median income of $29,000, well below the national average
• 33% of Detroit residents living below the poverty line
• 762,000 manufacturing jobs lost in Michigan over the last decade, resulting in a workforce that is moving away from both the city and the state
• 55,000 properties in foreclosure, a crisis that began in 2004 and continues today
What media-savvy reader isn’t tempted to see a political site’s insistence that these are “opportunities” as propaganda? In this argument over rhetoric, budget-conscious publications are the opposition, playing up the drama with the term “crisis.” In either case, the complexity of the issues suffers; this happens in discussions of the economy and of design alike.
While unemployment and poverty are not primary concerns of urban design, they complicate the perspective on the city’s urban health. Several other issues spark lively debate about Detroit’s allegedly dire situation: closed and abandoned schools, the decaying train station at the center of endless photo essays, and the decision to spend a large portion of the city’s dwindling funds tearing down unsafe structures. The media’s ongoing love affair with Detroit-as-decay employs a familiar plotline in support of a biased artistic perspective. In an essay for Guernica (January 2011), author John Patrick Leary explored Detroit’s potential as a metonym: a literary device in which a smaller thing stands in for a bigger system of ideas. For Detroit, this has meant being used as a symbol for the entire American foreclosure crisis, the decline of the auto industry, suburban sprawl and a host of other issues. The comparison is easy and tempting, especially with so much readily available photographic evidence that has come to be known as “ruin porn.” One of the most grating examples for residents was the BBC documentary Requiem for Detroit, which went out of its way to show
Until recently, the City of Detroit has been slow to address these design problems, for a variety of reasons, while impassioned and concerned residents have taken matters into their own hands with an arsenal of improvement strategies. However, Mayor Bing’s Detroit Works Project, launched on September 14, 2010, signals a shift in local planning that has the potential to alter that trend, including a task force assigned to deal with blight and myriad other issues. In this thesis, I am interested in the momentum behind this project and its possible outcomes, as well as in the community-based efforts that have succeeded in spite of the government, or at least without much help from it. What is the intersection of these two movements in Detroit, and how can they work together? This is a very personal question for the city’s residents, but one with much bigger implications for urban theory, thanks to the explosion of media attention focused on Detroit. What lessons will theorists extract from this so-called urban test lab, and is it prudent to do so? Is Detroit a unique urban condition? It’s worth specifying that grassroots design and government plans do not necessarily duplicate each other’s efforts to make Detroit better. The Detroit Works Project is just beginning to look at options for land reuse and was designed to look for opportunities such as:
• areas where infill is appropriate
• sites that can be used for economic development, such as business attraction
• productive landscapes for storm water management, environmental remediation and recreation spaces
• new forms of mixed-use neighborhoods.
Residents working at the block or lot level are likely too preoccupied with safety to think about storm water. Nor should they have to, if the government is functioning properly. But on the questions of where people should live, how, and what to do with houses no longer needed, there is much overlap, debate and “opportunity,” if you will forgive my use of this loaded term.
The SVA MFA in Design Criticism trains students to research, analyze and evaluate design and its social and environmental implications. The program seeks to cultivate design criticism as a discipline and to contribute to public discourse with new writing and thinking that is imaginative and historically informed. The course of study couples a theoretical framework with significant opportunities for practical experience. In addition to written assignments, students produce tangible documents of their critical practice, such as radio podcasts, books, blogs, documentaries, course syllabi, conferences, and exhibitions. Chaired by Alice Twemlow and co-founded by Steven Heller, the D-Crit program features faculty members such as Paola Antonelli, senior curator of Design and Architecture at MoMA; Ralph Caplan, recipient of the 2010 National Design Awards Design Mind award; Karrie Jacobs, author and Metropolis columnist; and Julie Lasky, editor of Change Observer. For more information, please visit www.dcrit.sva.edu.
School of Visual Arts (SVA) in New York City is an established leader and innovator in the education of artists. From its inception in 1947, the faculty has comprised professionals working in the arts and art-related fields. SVA provides an environment that nurtures creativity, inventiveness and experimentation, enabling students to develop a strong sense of identity and a clear direction of purpose.