
A Selection of Mini-Essays

Aug. - Nov. 2020

Below is a series of mini-essays responding to seminal design literature. Each discusses the ever-changing role of graphic designers and the implications of new technology for culture, cognition, and the social construction of meaning.

  Automation and Ambiguity

Karl Gerstner opens with a provocative quote: “The process of designing is reduced to an act of selection: crossing and linking parameters.” Gerstner suggests that a framework could be devised in which all that has been designed and all that will be designed can be found. Almost like the books in the Library of Babel, these parameters don’t just cover the look of the final product but can construct a simulated creativity through automated contexts and constraints. Would it be possible to build this framework— future-proofed and with low ambiguity? If so, how might that affect the likelihood of graphic design being taken over by AI, a probability the University of Oxford currently estimates at 8.2%?

Sol LeWitt’s work relies on ambiguity. In theory, a perfectly written set of instructions from LeWitt would result in a perfectly drafted piece of art (let’s set aside the problems with the term “perfect” for the sake of argument). Conversely, no instructions from LeWitt would result in a lack of his ownership over the drafted piece. This controlled ambiguity leads to diverse interpretations and drafts, and is the purpose of the piece itself. So, if LeWitt controls and possesses the ambiguity in his text, does he also possess your interpretation of it? Are your experiences under the umbrella of his creation?
Karl Gerstner, “Programme as Computer Graphics,” “Programme as Movement,” “Programme as Squaring the Circle,” in Designing Programmes (New York: Hastings House, 1964)

Sol LeWitt, “Doing Wall Drawings,” Art Now 3, no. 2 (1971)

Observer’s Effects in the View of the Present from the Speculative Future

Candy and Kornet introduced me to the idea of using ethnography to envision speculative futures. The method of ethnography has long been critiqued for its lack of consideration of how the process of observing affects the observations— the Hawthorne effect, observer effects, and observer bias. As I read it, speculative design is the process of placing oneself in the future to better observe the effects of the present. How might a designer counteract the effects of their ethnographic observation of the present when creating speculative futures?

This goes hand-in-hand with a question Heather Dewey-Hagborg raised: “How do you think outside of a totalizing system?” She was referencing systems such as capitalism or the internet but, from an anti-tempocentric standpoint, the system of the present is truly totalizing— at this point, no one can possibly divorce themselves from it. If speculative design is observing the totalizing system of the present, how might the observer change their role on the spectrum from complete participant to complete observer? Could they ever move beyond the role of complete participant?
Stuart Candy and Kelly Kornet, “An Introduction to Ethnographic Experiential Futures,” in Design and Futures (Taipei: Tamkang University Press, 2019)

Heather Dewey-Hagborg’s talks on design nonfiction on Tellart.com

  Hostile Graphic Design

Hostile architecture, also called defensive architecture, is a strategy in which elements of a built environment are purposefully designed to guide or restrict behavior “in order to prevent crime and maintain order.” Examples of hostile architecture include public benches with armrests (to stop homeless people from sleeping on them), spikes on ledges (to discourage sitting), and regular divots in curbs (to stop skateboarders from grinding).

Recently I’ve been wondering how these strategies translate into the graphic design space. One interesting example I’ve found is FE-Schrift, a typeface designed for German license plates that is purposefully inconsistent in order to prevent easy modifications that trick red-light cameras. With websites and apps being just as much of a “built environment” as benches or curbs, I wonder how dark patterns may fit into this equation. Hostile architecture has been criticized as comprising “designs against humanity” that restrict the “rights and freedoms of a human being.” Do these inherent rights and freedoms carry over to the virtual built environment?

If dark patterns deliberately guide or restrict people into certain behavior, in what capacity can people use those patterns strategically against the builders of that environment? For example, a large group of TikTok and Twitter users recently decided to request tickets to a Trump rally in Tulsa with no intention of going. In a similar trend, users intentionally become the target of Trump campaign advertising so that the president’s money is wasted on people who would never vote for him. These are examples of people rallying together and using the algorithms to their advantage— almost like a form of aikido, using an opponent’s strengths against them.

  Wild Futures within Ourselves

In Janelle Shane’s TED talk, she mentions that today’s AI “doesn’t have a concept of what the pedestrian is beyond that it’s a collection of lines and textures and things.” This raises the question: do we? Obviously we know what it means to experience life as a human, so let’s put it in simpler terms. Today’s AI can identify a cat, but it doesn’t know what a cat actually is. Yet do we as humans know what a cat is? We can draw it, identify it, explain it, define it, etc., but those are all things an AI can do as well. How can we properly distinguish between the knowledge of an AI and that of a human? After all, we only know our own lived experience. An AI might not feel emotions the way we do but, in every sense except the actual, it can perform and describe them as though it does!

Silka Miesnieks emphasizes the importance of sensory tools, Robert Peart argues that AI will let people design with less time spent learning tools, and Patrick Hebron says that “tools are the ripest place to affect change.” All three focus on tools and how the future of technology will affect them. I think the most interesting assertion is Peart’s: he claims AI will transform the designer’s job into the building of tools, potentially in complete ignorance of the forms those tools are used to create. What would this future look like in terms of trends: visual, material, behavioral, etc.? In a world of complete individual control, how could a designer corral the chaos (or should they even try)?
Janelle Shane, “The Danger of AI Is Weirder Than You Think,” TED talk

Silka Miesnieks, “Designing for Humanity Begins, Now,” Augmented World Expo, June 2018

Robert Peart, “Automation Threatens to Make Graphic Designers Obsolete,” Eye on Design, October 25, 2016

How Many Beavers Does It Take To Change A Light Bulb?

The point has been made before: Why are we designing collaborative AIs that are just ourselves with greater capabilities? Wouldn’t more creative collaboration come from working with a being unlike our own? Beatriz Colomina and Mark Wigley add to this discussion, asserting that nothing is more human than technology and that the tech tools we’re creating are all just prosthetic extensions of ourselves. If we’re truly looking for creative collaboration, we must look to things beyond our design. Rather than an artificial intelligence, perhaps we’re looking for an intelligence much more natural than ourselves— one that persists beyond reason, or any thought at all, but works nonetheless.

While reading Carl DiSalvo and Jonathan Lukens’ paper, one question prevailed: Shouldn’t we figure out our own problems before we start considering those of other species, genera, or forms? The problem with this question is its assumption that humans will ever move beyond their greedy nature and start to care about others. Even with world peace (or at least human peace), humans will still have problems to solve in the form of a product. I think a better question is: How will we know when to start including the non-human? What sign can we look for that will signal the need to stop focusing on human problems and start focusing on the entire ecology? I suspect Don Norman would say the symptoms are visible now and that, if we addressed their root problems, there wouldn’t be a need for a distinction between human and non-human problems at all.

Another question I had while reading DiSalvo and Lukens’ paper was: How do we know what non-humans want? In most cases, we can assume they want to live, thrive, and breed. But do they want agency and control like we do? Colomina and Wigley point to the caveman’s comfort in caves— the control of their environment— as the root of humans’ core needs. But what about the beaver in the box that DiSalvo and Lukens describe? It includes a light switch that both a human and a beaver can control, but what if the beaver doesn’t want to worry about its agency over light? Are we projecting our basic human needs onto these beings?

So, how many beavers does it take to change a light bulb? Well, I guess it depends on if they want it changed…
Beatriz Colomina and Mark Wigley, Are We Human? Notes on an Archaeology of Design (Zürich: Lars Müller, 2016)
Carl DiSalvo and Jonathan Lukens, “Nonanthropocentrism and the Nonhuman in Design,” in From Social Butterfly to Engaged Citizen (Cambridge: MIT Press, 2018)

Warning: These Questions Are Useless

While reading Shanahan’s “Heaven or Hell” chapter, I found myself wanting a preface to his series of quandaries. It seemed as though the questions he was asking were stuck in the timescale with which we are currently familiar. Especially toward the beginning of the chapter, most scenarios revolved around the capabilities of AI technology and its place in human-developed systems: legal, moral, societal, etc. I find it almost funny to consider these predicaments. As we’ve heard in all of this week’s resources, AI will develop at an exponential rate. As soon as we find ourselves at the right time to ask these sorts of questions, our AI counterparts will have long since solved the problem and moved on in the blink of a (human) eye. Why consider how an AI’s citizenship would work? By the time we’ve recognized that as a valid issue, AI will have dismissed the idea of “countries” long ago (that is, in a matter of minutes). Perhaps it’s my pessimism showing, but I just don’t see the use in running speculative races when our competitors, whenever they may start running, can move at light speed. I guess my question here is: What value does speculating about AI have if the speculations live in human-made systems?

Perhaps I’ll answer my own question with a question here… What can we learn about ourselves as humans by speculating about the potentials of AI? I first thought of this question while viewing Kai-Fu Lee’s TED talk. He remarked on the speed at which an AI could accomplish all of humanity’s feats, such as landing on the moon. This made me think: what might get in the way of an AI moon landing? (Physics, etc.) Or, better yet, what wouldn’t get in the way of an AI moon landing? This is a much more interesting question. A lightly considered list of answers might include: working hours, money altogether, moral considerations, etc. Is there any value in then asking ourselves why we as humans let these things limit us as well? Does an efficiency-driven creature serve as a worthwhile comparison to the human race, or is that just another fool’s errand?
Murray Shanahan, “Heaven or Hell,” The Technological Singularity (Cambridge: MIT Press, 2015)

Kai-Fu Lee, “How AI Can Save Our Humanity,” TED talk