A List Apart

  1. Designing for Research

    If you’ve spent enough time developing for the web, this piece of feedback has probably landed in your inbox more times than you can count:

    “This photo looks blurry. Can we replace it with a better version?”

    Every time this feedback reaches me, I’m inclined to question it: “What about the photo looks bad to you, and can you tell me why?”

    That’s a somewhat unfair question to counter with. The complaint is rooted in a subjective perception of image quality, which in turn is influenced by many factors. Some are technical, such as the export quality of the image or the compression method (often lossy, as is the case with JPEG-encoded photos). Others are more intuitive or perceptual, such as the content of the image and how compression artifacts mingle within it. Perhaps even performance plays a role we’re not entirely aware of.

    Fielding this kind of feedback for many years eventually led me to design and develop an image quality survey, which was my first go at building a research project on the web. I started with twenty-five photos shot by a professional photographer. With them, I generated a large pool of images at various quality levels and sizes. Images were served randomly from this pool to participants, who were asked to rate their quality.

    Results from the first round were interesting, but not entirely clear: participants seemed to overestimate the actual quality of images, and poor performance appeared to have a negative impact on perceptions of image quality, but neither finding could be stated conclusively. A number of UX and technical issues also surfaced. Rather than spinning my wheels trying to extract conclusions from the first round’s results, I decided it would be best to improve the survey as much as possible and conduct another round of research to get better data. This article chronicles how I first built the survey, and how I then listened to user feedback to improve it.

    Defining the research

    Of the subjects within web performance, image optimization is especially vast. There’s a wide array of formats, encodings, and optimization tools, all of which are designed to make images small enough for web use while maintaining reasonable visual quality. Striking the balance between speed and quality is really what image optimization is all about.

    This balance between performance and visual quality prompted me to consider how people perceive image quality. Lossy image quality, in particular. Eventually, this train of thought led to a series of questions that spurred the design and development of an image quality perception survey. The idea behind the survey is that participants provide subjective assessments of quality; this is done by asking them to rate images without an objective reference for what’s “perfect.” This is, after all, how people view images in situ.

    A word on surveys

    Any time we want to quantify user behavior, it’s inevitable that a survey is at least considered, if not ultimately chosen to gather data from a group of people. After all, surveys are perfect when your goal is to get something measurable. However, the survey is a seductively dangerous tool, as Erika Hall cautions. They’re easy to make and conduct, and are routinely abused in their dissemination. They’re not great tools for assessing past behavior. They’re just as bad (if not worse) at predicting future behavior. For example, the 1–10 scale often employed by customer satisfaction surveys doesn’t really say much of anything about how satisfied customers actually are or how likely they’ll be to buy a product in the future.

    The unfortunate reality, however, is that short of my lording over hundreds of participants in person, the survey is the only truly practical tool I have to measure how people perceive image quality, as well as whether (and potentially how) performance metrics correlate to those perceptions. When I designed the survey, I kept to the following guidelines:

    • Don’t ask participants about anything other than what their perceptions are in the moment. Once a participant has moved on, their recollection of what they just did diminishes rapidly.
    • Don’t assume participants know everything you do. Guide them with relevant copy that succinctly describes what you expect of them.
    • Don’t ask participants to provide assessments with coarse inputs. Use an input type that permits them to finely assess image quality on a scale congruent with the lossy image quality encoding range.

    All we can do going forward is acknowledge we’re interpreting the data we gather under the assumption that participants are being truthful and understand the task given to them. Even if the perception metrics are discarded from the data, there are still some objective performance metrics gathered that could tell a compelling story. From here, it’s a matter of defining the questions that will drive the research.

    Asking the right questions

    In research, you’re seeking answers to questions. In the case of this particular effort, I wanted answers to these questions:

    • How accurate are people’s perceptions of lossy image quality in relation to actual quality?
    • Do people perceive the quality of JPEG images differently than WebP images?
    • Does performance play a role in all of this?

    These are important questions. To me, however, answering the last question was the primary goal. But the road to answers was (and continues to be) a complex journey of design and development choices. Let’s start out by covering some of the tech used to gather information from survey participants.

    Sniffing out device and browser characteristics

    When measuring how people perceive image quality, devices must be considered. After all, any given device’s screen will be more or less capable than others. Thankfully, HTML features such as srcset and picture are highly appropriate for delivering the best image for any given screen. This is vital because one’s perception of image quality can be adversely affected if an image is ill-fit for a device’s screen. Conversely, performance can be negatively impacted if an exceedingly high-quality (and therefore behemoth) image is sent to a device with a small screen. When sniffing out potential relationships between performance and perceived quality, these are factors that deserve consideration.

    With regard to browser characteristics and conditions, JavaScript gives us plenty of tools for identifying important aspects of a user’s device. For instance, the currentSrc property reveals which image is being shown from an array of responsive images. In the absence of currentSrc, I can somewhat safely assume support for srcset or picture is lacking, and fall back to the img tag’s src value:

    const surveyImage = document.querySelector(".survey-image");
    let loadedImage = surveyImage.currentSrc || surveyImage.src;

    Where screen capability is concerned, devicePixelRatio tells us the pixel density of a given device’s screen. In the absence of devicePixelRatio, you may safely assume a fallback value of 1:

    let dpr = window.devicePixelRatio || 1;

    devicePixelRatio enjoys excellent browser support. Those few browsers that don’t support it (i.e., IE 10 and under) are highly unlikely to be used on high density displays.

    The stalwart getBoundingClientRect method retrieves the rendered width of an img element, while the HTMLImageElement interface’s complete property determines whether an image has finished loading. The latter of these two is important, because it may be preferable to discard individual results in situations where images haven’t loaded.
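
    Here’s a minimal sketch of gathering both values, reusing the .survey-image selector from the earlier example; the variable names are only illustrative:

    // Measure the rendered width of the image and check whether it has loaded
    const specimenImage = document.querySelector(".survey-image");
    const renderedWidth = specimenImage.getBoundingClientRect().width;
    const specimenLoaded = specimenImage.complete; // false if still loading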

    In cases where JavaScript isn’t available, we can’t collect any of this data. When we collect ratings from users who have JavaScript turned off (or are otherwise unable to run JavaScript), I have to accept there will be gaps in the data. The basic information we’re still able to collect does provide some value.

    Sniffing for WebP support

    As you’ll recall, one of the initial questions asked was how users perceived the quality of WebP images. The HTTP Accept request header advertises WebP support in browsers like Chrome. In such cases, the Accept header might look something like this:

    Accept: image/webp,image/apng,image/*,*/*;q=0.8

    As you can see, the WebP content type of image/webp is one of the advertised content types in the header content. In server-side code, you can check Accept for the image/webp substring. Here’s how that might look in Express back-end code:

    const webpSupport = (req.get("Accept") || "").indexOf("image/webp") !== -1;

    In this example, I’m recording the browser’s WebP support status to a JavaScript constant I can use later to modify image delivery. I could use the picture element with multiple sources and let the browser figure out which one to use based on the source element’s type attribute value, but the server-side approach has clear advantages. First, it’s less markup. Second, the survey shouldn’t always choose a WebP source simply because the browser is capable of using it. For any given survey specimen, the app should randomly decide between a WebP or JPEG image. Not all participants using Chrome should rate only WebP images, but rather a random smattering of both formats.
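
    Here’s a rough sketch of how that random choice might work on the back end; the coin flip and the image path are illustrative assumptions rather than the survey’s actual logic:

    // webpSupport is the constant from the Accept header check above
    const useWebP = webpSupport && Math.random() < 0.5;
    const extension = useWebP ? "webp" : "jpg";
    // Hypothetical path structure for the chosen specimen
    const imageSrc = `/img/specimen-1024w.${extension}`;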

    Recording performance API data

    You’ll recall that one of the earlier questions I set out to answer was if performance impacts the perception of image quality. At this stage of the web platform’s development, there are several APIs that aid in the search for an answer:

    • Navigation Timing API (Level 2): This API tracks performance metrics for page loads. More than that, it gives insight into specific page loading phases, such as redirect, request and response time, DOM processing, and more.
    • Navigation Timing API (Level 1): Similar to Level 2 but with key differences. The timings exposed by Level 1 of the API lack the accuracy of those in Level 2. Furthermore, Level 1 metrics are expressed in Unix time. In the survey, data is only collected from Level 1 of the API if Level 2 is unsupported. It’s far from ideal (and also technically obsolete), but it does help fill in small gaps.
    • Resource Timing API: Similar to Navigation Timing, but Resource Timing gathers metrics on various loading phases of page resources rather than the page itself. Of all the APIs used in the survey, Resource Timing is used most, as it helps gather metrics on the loading of the image specimen the user rates.
    • Server Timing: In select browsers, these metrics are folded into the Navigation Timing Level 2 interface when the response to a page request includes a Server-Timing header. This header is open-ended and can be populated with timings related to back-end processing phases. It was added to round two of the survey to quantify back-end processing time in general (a minimal sketch follows this list).
    • Paint Timing API: Currently only in Chrome, this API reports two paint metrics: first paint and first contentful paint. Because a significant slice of users on the web use Chrome, we may be able to observe relationships between perceived image quality and paint metrics.
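
    As noted in the Server Timing item above, here’s a minimal sketch of emitting that header from an Express route; the /survey route and the renderSurveyPage function are hypothetical:

    app.get("/survey", (req, res) => {
      const start = process.hrtime.bigint();
      const page = renderSurveyPage(); // hypothetical back-end processing step
      const durationMs = Number(process.hrtime.bigint() - start) / 1e6;
      // Supporting browsers expose this via the Navigation Timing Level 2 entry
      res.set("Server-Timing", `render;dur=${durationMs.toFixed(1)};desc="Page render"`);
      res.send(page);
    });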

    Using these APIs, we can record performance metrics for most participants. Here’s a simplified example of how the survey uses the Resource Timing API to gather performance metrics for the loaded image specimen:

    // Get information about the loaded image
    const surveyImageElement = document.querySelector(".survey-image");
    const fullImageUrl = surveyImageElement.currentSrc || surveyImageElement.src;
    const imageUrlParts = fullImageUrl.split("/");
    const imageFilename = imageUrlParts[imageUrlParts.length - 1];
    // Check for performance API methods
    if ("performance" in window && "getEntriesByType" in performance) {
      // Get entries from the Resource Timing API
      let resources = performance.getEntriesByType("resource");
      // Ensure resources were returned
      if (typeof resources === "object" && resources.length > 0) {
        resources.forEach((resource) => {
          // Check if the resource is for the loaded image
          if (resource.name.indexOf(imageFilename) !== -1) {
            // Access the resource timings for the image specimen here
          }
        });
      }
    }

    If the Resource Timing API is available, and the getEntriesByType method returns results, the entry for the loaded image is an object with timings, looking something like this:

      connectEnd: 1156.5999999947962,
      connectStart: 1156.5999999947962,
      decodedBodySize: 11110,
      domainLookupEnd: 1156.5999999947962,
      domainLookupStart: 1156.5999999947962,
      duration: 638.1000000037602,
      encodedBodySize: 11110,
      entryType: "resource",
      fetchStart: 1156.5999999947962,
      initiatorType: "img",
      name: "https://imagesurvey.site/img-round-2/1-1024w-c2700e1f2c4f5e48f2f57d665b1323ae20806f62f39c1448490a76b1a662ce4a.webp",
      nextHopProtocol: "h2",
      redirectEnd: 0,
      redirectStart: 0,
      requestStart: 1171.6000000014901,
      responseEnd: 1794.6999999985565,
      responseStart: 1737.0999999984633,
      secureConnectionStart: 0,
      startTime: 1156.5999999947962,
      transferSize: 11227,
      workerStart: 0

    I grab these metrics as participants rate images, and store them in a database. Down the road when I want to write queries and analyze the data I have, I can refer to the Processing Model for the Resource and Navigation Timing APIs. With SQL and data at my fingertips, I can measure the distinct phases outlined by the model and see if correlations exist.
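
    For instance, here’s a rough sketch (not the survey’s actual analysis code) of deriving those distinct phases from a single Resource Timing entry:

    // Break a Resource Timing entry into the phases defined by the processing model
    function resourcePhases(entry) {
      return {
        dns: entry.domainLookupEnd - entry.domainLookupStart,
        tcp: entry.connectEnd - entry.connectStart,
        ttfb: entry.responseStart - entry.requestStart,
        download: entry.responseEnd - entry.responseStart,
        total: entry.duration
      };
    }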

    Having discussed the technical underpinnings of how data can be collected from survey participants, let’s shift the focus to the survey’s design and user flows.

    Designing the survey

    Though surveys tend to have straightforward designs and user flows relative to other sites, we must remain cognizant of the user’s path and the impediments a user could face.

    The entry point

    When participants arrive at the home page, we want to be direct in our communication with them. The home page intro copy greets participants, gives them a succinct explanation of what to expect, and presents two navigation choices:

    One button with the text “I want to participate!” and another button with the text “What data do you gather?”

    From here, participants either start the survey or read a privacy policy. If the user decides to take the survey, they’ll reach a page politely asking them what their professional occupation is and requesting them to disclose any eyesight conditions. The fields for these questions can be left blank, as some may not be comfortable disclosing this kind of information. Beyond this point, the survey begins in earnest.

    The survey primer

    Before the user begins rating images, they’re redirected to a primer page. This page describes what’s expected of participants, and explains how to rate images. While the survey is promoted on design and development outlets where readers regularly work with imagery on the web, a primer is still useful in getting everyone on the same page. The first paragraph of the page stresses that users are rating image quality, not image content. This is important. Absent any context, participants may indeed rate images for their content, which is not what we’re asking for. After this clarification, the concept of lossy image quality is demonstrated with the following diagram:

    A divided photo with one half demonstrating low image quality and the other demonstrating high quality.

    Lastly, the function of the rating input is explained. This could likely be inferred by most, but the explanatory copy helps remove any remaining ambiguity. Assuming your user knows everything you do is not necessarily wise. What seems obvious to one is not always so to another.

    The image specimen page

    This page is the main event and is where participants assess the quality of images shown to them. It contains two areas of focus: the image specimen and the input used to rate the image’s quality.

    Let’s talk a bit out of order and discuss the input first. I mulled over a few options when it came to which input type to use. I considered a select input with coarsely predefined choices, an input with a type of number, and other choices. What seemed to make the most sense to me, however, was a slider input with a type of range.

    A rating slider with “worst” at the far left, and “best” at the far right. The slider track is a gradient from red on the left to green on the right.

    A slider input is more intuitive than a text input, or a select element populated with various choices. Because we’re asking for a subjective assessment about something with such a large range of interpretation, a slider allows participants more granularity in their assessments and lends further accuracy to the data collected.

    Now let’s talk about the image specimen and how it’s selected by the back-end code. I decided early on in the survey’s development that I wanted images that weren’t prominent in existing stock photo collections. I also wanted uncompressed sources so I wouldn’t be presenting participants with recompressed image specimens. To achieve this, I procured images from a local photographer. The twenty-five images I settled on were minimally processed raw images from the photographer’s camera. The result was a cohesive set of images that felt visually related to each other.

    To properly gauge perception across the entire spectrum of quality settings, I needed to generate each image from the aforementioned sources at ninety-six different quality settings ranging from 5 to 100. To account for the varying widths and pixel densities of screens in the wild, each image also needed to be generated at four different widths for each quality setting: 1536, 1280, 1024, and 768 pixels, to be exact. Just the job srcset was made for!

    To top it all off, images also needed to be encoded in both JPEG and WebP formats. As a result, the survey draws randomly from 768 images per specimen across the entire quality range, while also delivering the best image for the participant’s screen. This means that across the twenty-five image specimens participants evaluate, the survey draws from a pool of 19,200 images total.

    With the conception and design of the survey covered, let’s segue into how the survey was improved by implementing user feedback into the second round.

    Listening to feedback

    When I launched round one of the survey, feedback came flooding in from designers, developers, accessibility advocates, and even researchers. While my intentions were good, I inevitably missed some important aspects, which made it necessary to conduct a second round. Iteration and refinement are critical to improving the usefulness of a design, and this survey was no exception. When we improve designs with user feedback, we take a project from average to something more memorable. Getting to that point means taking feedback in stride and addressing distinct, actionable items. In the case of the survey, incorporating feedback not only yielded a better user experience, it improved the integrity of the data collected.

    Building a better slider input

    Though the first round of the survey was serviceable, I ran into issues with the slider input. In round one of the survey, that input looked like this:

    A slider with evenly spaced labels from left to right reading respectively, “Awful”, “Bad”, “OK”, “Good”, “Great”. Below it is a disabled button with the text “Please Rate the Image…”.

    There were two recurring complaints regarding this specific implementation. The first was that participants felt they had to align their rating to one of the labels beneath the slider track. This was undesirable for the simple fact that the slider was chosen specifically to encourage participants to provide nuanced assessments.

    The second complaint was that the submit button was disabled until the user interacted with the slider. This design choice was intended to prevent participants from simply clicking the submit button on every page without rating images. Unfortunately, this implementation was unintentionally hostile to the user and needed improvement, because it blocked users from rating images without a clear and obvious explanation as to why.

    Fixing the problem with the labels meant redesigning the slider as it appeared in round one. I removed the labels altogether to eliminate the temptation for users to align their answers to them. Additionally, I changed the slider’s background property to a gradient pattern, which further implied the granularity of the input.

    The submit button issue was a matter of how users were prompted. In round one the submit button was visible, yet the disabled state wasn’t obvious enough to some. After consulting with a colleague, I found a solution for round two: in lieu of the submit button being initially visible, it’s hidden by some guide copy:

    The revised slider followed by the text “Once you rate the image, you may submit.”

    Once the user interacts with the slider and rates the image, a change handler attached to the input fires, which hides the guide copy and replaces it with the submit button:

    The revised slider now followed by a button reading “Submit rating”.

    This solution is less ambiguous, and it funnels participants down the desired path. If someone with JavaScript disabled visits, the guide copy is never shown, and the submit button is immediately usable. This isn’t ideal, but it doesn’t shut out participants without JavaScript.
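
    Here’s a minimal sketch of that swap; the class names are stand-ins for the survey’s actual markup:

    const ratingSlider = document.querySelector(".rating-slider");
    const guideCopy = document.querySelector(".guide-copy");
    const submitButton = document.querySelector(".submit-rating");

    // With JavaScript running, hide the button and show the guide copy first
    guideCopy.hidden = false;
    submitButton.hidden = true;

    // Once the participant rates the image, swap the guide copy for the button
    ratingSlider.addEventListener("change", () => {
      guideCopy.hidden = true;
      submitButton.hidden = false;
    }, { once: true });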

    Addressing scrolling woes

    The survey page works especially well in portrait orientation. Participants can see all (or most) of the image without needing to scroll. In browser windows or mobile devices in landscape orientation, however, the survey image can be larger than the viewport:

    Screen shot of the survey with an image clipped at the bottom by the viewport and rating slider.

    Working with such limited vertical real estate is tricky, especially in this case where the slider needs to be fixed to the bottom of the screen (which addressed an earlier bit of user feedback from round one testing). After discussing the issue with colleagues, I decided that animated indicators in the corners of the page could signal to users that there’s more of the image to see.

    The survey with the clipped image, but now there is a downward-pointing arrow with the word “Scroll”.

    When the user hits the bottom of the page, the scroll indicators disappear. Because animations may be jarring for certain users, a prefers-reduced-motion media query is used to turn off this (and all other) animations if the user has a stated preference for reduced motion. In the event JavaScript is disabled, the scrolling indicators are always hidden in portrait orientation where they’re less likely to be useful and always visible in landscape where they’re potentially needed the most.
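
    A rough sketch of that behavior might look like the following, assuming a hypothetical .scroll-indicator class:

    const indicators = document.querySelectorAll(".scroll-indicator");
    const prefersReducedMotion = window.matchMedia("(prefers-reduced-motion: reduce)").matches;

    // Respect the user's motion preference by suppressing the animation
    if (prefersReducedMotion) {
      indicators.forEach((indicator) => indicator.classList.add("no-animation"));
    }

    // Hide the indicators once the participant reaches the bottom of the page
    window.addEventListener("scroll", () => {
      const atBottom = window.innerHeight + window.scrollY >= document.documentElement.scrollHeight;
      indicators.forEach((indicator) => (indicator.hidden = atBottom));
    });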

    Avoiding overscaling of image specimens

    One issue that was brought to my attention by a coworker was how the survey image seemed to expand boundlessly with the viewport. On mobile devices this isn’t such a problem, but on large screens and even modestly sized high-density displays, images can be scaled excessively. Because the responsive img tag’s srcset attribute specifies a maximum resolution image of 1536w, an image can begin to overscale at layout sizes as “small” as 768 pixels wide on devices with a device pixel ratio of 2.

    The survey with an image expanding to fill the window.

    Some overscaling is inevitable and acceptable. However, when it’s excessive, compression artifacts in an image can become more pronounced. To address this, the survey image’s max-width is set to 1536px for standard displays as of round two. For devices with a device pixel ratio of 2 or higher, the survey image’s max-width is set to half that at 768px:

    The survey with an image comfortably fitting in the window.

    This minor (yet important) fix ensures that images aren’t scaled beyond a reasonable maximum. With a reasonably sized image asset in the viewport, participants will assess images close to or at a given image asset’s natural dimensions, particularly on large screens.
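
    The survey most likely enforces this cap with a stylesheet rule; the JavaScript below is only an equivalent sketch of the constraint described above:

    // Cap the specimen's rendered size based on the display's pixel density
    const specimen = document.querySelector(".survey-image");
    const highDensity = window.devicePixelRatio >= 2;
    specimen.style.maxWidth = highDensity ? "768px" : "1536px";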

    User feedback is valuable. These and other UX feedback items I incorporated improved both the function of the survey and the integrity of the collected data. All it took was sitting down with users and listening to them.

    Wrapping up

    As round two of the survey gets under way, I’m hoping the data gathered reveals something exciting about the relationship between performance and how people perceive image quality. If you want to be a part of the effort, please take the survey. When round two concludes, keep an eye out here for a summary of the results!

    Thank you to those who gave their valuable time and feedback to make this article as good as it could possibly be: Aaron Gustafson, Jeffrey Zeldman, Brandon Gregory, Rachel Andrew, Bruce Hyslop, Adrian Roselli, Meg Dickey-Kurdziolek, and Nick Tucker.

    Additional thanks to those who helped improve the image quality survey: Mandy Tensen, Darleen Denno, Charlotte Dann, Tim Dunklee, and Thad Roe.

  2. Conversational Design

    A note from the editors: We’re pleased to share an excerpt from Chapter 1 of Erika Hall’s new book, Conversational Design, available now from A Book Apart.

    Texting is how we talk now. We talk by tapping tiny messages on touchscreens—we message using SMS via mobile data networks, or through apps like Facebook Messenger or WhatsApp.

    In 2015, the Pew Research Center found that 64% of American adults owned a smartphone of some kind, up from 35% in 2011. We still refer to these personal, pocket-sized computers as phones, but “Phone” is now just one of many communication apps we neglect in favor of texting. Texting is the most widely used mobile data service in America. And in the wider world, four billion people have mobile phones, so four billion people have access to SMS or other messaging apps. For some, dictating messages into a wristwatch offers an appealing alternative to placing a call.

    The popularity of texting can be partially explained by the medium’s ability to offer the easy give-and-take of conversation without requiring continuous attention. Texting feels like direct human connection, made even more captivating by unpredictable lag and irregular breaks. Any typing is incidental because the experience of texting barely resembles “writing,” a term that carries associations of considered composition. In his TED talk, Columbia University linguist John McWhorter called texting “fingered conversation”—terminology I find awkward, but accurate. The physical act—typing—isn’t what defines the form or its conventions. Technology is breaking down our traditional categories of communication.

    By the numbers, texting is the most compelling computer-human interaction going. When we text, we become immersed and forget our exchanges are computer-mediated at all. We can learn a lot about digital design from the inescapable draw of these bite-sized interactions, specifically the use of language.

    What Texting Teaches Us

    Texting is a telling example of what makes computer-mediated interaction interesting. The reasons people are compelled to attend to their text messages—even at risk to their own health and safety—aren’t high-production values, so-called rich media, or the complexity of the feature set.

    Texting, and other forms of social media, tap into something very primitive in the human brain. These systems offer always-available social connection. The brevity and unpredictability of the messages themselves trigger the release of dopamine that motivates seeking behavior and keeps people coming back for more. What makes interactions interesting may start on a screen, but the really interesting stuff happens in the mind. And language is a critical part of that. Our conscious minds are made of language, so it’s easy to perceive the messages you read not just as words but as the thoughts of another mingled with your own. Loneliness seems impossible with so many voices in your head.

    With minimal visual embellishment, texts can deliver personality, pathos, humor, and narrative. This is apparent in “Texts from Dog,” which, as the title indicates, is a series of imagined text exchanges between a man and his dog. (Fig 1.1). With just a few words, and some considered capitalization, Joe Butcher (writing as October Jones) creates a vivid picture of the relationship between a neurotic canine and his weary owner.

    A dog texts his master about belly rubs.
    Fig 1.1: “Texts from Dog” shows how lively a simple text exchange can be.

    Using words is key to connecting with other humans online, just as it is in the so-called “real world.” Imbuing interfaces with the attributes of conversation can be powerful. I’m far from the first person to suggest this. However, as computers mediate more and more relationships, including customer relationships, anyone thinking about digital products and services is in a challenging place. We’re caught between tried-and-true past practices and the urge to adopt the “next big thing,” sometimes at the exclusion of all else.

    Being intentionally conversational isn’t easy. This is especially true in business and at scale, such as in digital systems. Professional writers use different types of writing for different purposes, and each has rules that can be learned. The love of language is often fueled by a passion for rules — rules we received in the classroom and revisit in manuals of style, and rules that offer writers the comfort of being correct outside of any specific context. Also, there is the comfort of being finished with a piece of writing and moving on. Conversation, on the other hand, is a context-dependent social activity that implies a potentially terrifying immediacy.

    Moving from the idea of publishing content to engaging in conversation can be uncomfortable for businesses and professional writers alike. There are no rules. There is no done. It all feels more personal. Using colloquial language, even in “simplifying” interactive experiences, can conflict with a desire to appear authoritative. Or the pendulum swings to the other extreme and a breezy style gets applied to a laborious process like a thin coat of paint.

    As a material for design and an ingredient in interactions, words need to emerge from the content shed and be considered from the start.  The way humans use language—easily, joyfully, sometimes painfully—should anchor the foundation of all interactions with digital systems.

    The way we use language and the way we socialize are what make us human; our past contains the key to what commands our attention in the present, and what will command it in the future. To understand how we came to be so perplexed by our most human quality, it’s worth taking a quick look at, oh, the entire known history of communication technology.

    The Mother Tongue

    Accustomed to eyeballing type, we can forget language began in our mouths as a series of sounds, like the calls and growls of other animals. We’ll never know for sure how long we’ve been talking—speech itself leaves no trace—but we do know it’s been a mighty long time.

    Archaeologist Natalie Thais Uomini and psychologist Georg Friedrich Meyer concluded that our ancestors began to develop language as early as 1.75 million years ago. Per the fossil record, modern humans emerged at least 190,000 years ago in the African savannah. Evidence of cave painting goes back 30,000 years (Fig 1.2).

    Then, a mere 6,000 years ago, ancient Sumerian commodity traders grew tired of getting ripped off. Around 3200 BCE, one of them had the idea to track accounts by scratching wedges in wet clay tablets. Cuneiform was born.

    So, don’t feel bad about procrastinating when you need to write—humanity put the whole thing off for a couple hundred thousand years! By a conservative estimate, we’ve had writing for about 4% of the time we’ve been human. Chatting is easy; writing is an arduous chore.

    Prior to mechanical reproduction, literacy was limited to the elite by the time and cost of hand-copying manuscripts. It was the rise of printing that led to widespread literacy; mass distribution of text allowed information and revolutionary ideas to circulate across borders and class divisions. The sharp increase in literacy bolstered an emerging middle class. And the ability to record and share knowledge accelerated all other advances in technology: photography, radio, TV, computers, internet, and now the mobile web. And our talking speakers.

    Chart showing the evolution of communication over the last 200,000, 6,000, and 180 years
    Fig 1.2: In hindsight, “literate culture” now seems like an annoying phase we had to go through so we could get to texting.

    Every time our communication technology advances and changes, so does the surrounding culture—then it disrupts the power structure and upsets the people in charge. Catholic archbishops railed against mechanical movable type in the fifteenth century. Today, English teachers deplore texting emoji. Resistance is, as always, futile. OMG is now listed in the Oxford English Dictionary.

    But while these developments have changed the world and how we relate to one another, they haven’t altered our deep oral core.

    Orality, Say It with Me

    Orality knits persons into community.
    Walter Ong

    Today, when we record everything in all media without much thought, it’s almost impossible to conceive of a world in which the sum of our culture existed only as thoughts.

    Before literacy, words were ephemeral and all knowledge was social and communal. There was no “save” option and no intellectual property. The only way to sustain an idea was to share it, speaking aloud to another person in a way that made it easy for them to remember. This was orality—the first interface.

    We can never know for certain what purely oral cultures were like. People without writing are terrible at keeping records. But we can examine oral traditions that persist for clues.

    The oral formula

    Reading and writing remained elite activities for centuries after their invention. In cultures without a writing system, oral characteristics persisted to help transmit poetry, history, law and other knowledge across generations.

    The epic poems of Homer rely on meter, formulas, and repetition to aid memory:

    Far as a man with his eyes sees into the mist of the distance
    Sitting aloft on a crag to gaze over the wine-dark seaway,
    Just so far were the loud-neighing steeds of the gods overleaping.
    Iliad, 5.770

    Concrete images like rosy-fingered dawn, loud-neighing steeds, wine-dark seaway, and swift-footed Achilles served to aid the teller and to sear the story into the listener’s memory.

    Biblical proverbs also encode wisdom in a memorable format:

    As a dog returns to its vomit, so fools repeat their folly.
    Proverbs 26:11

    That is vivid.

    And a saying that originated in China hundreds of years ago can prove sufficiently durable to adorn a few hundred Etsy items:

    A journey of a thousand miles begins with a single step.
    Tao Te Ching, Chapter 64, ascribed to Lao Tzu

    The labor of literature

    Literacy created distance in time and space and decoupled shared knowledge from social interaction. Human thought escaped the existential present. The reader doesn’t need to be alive at the same time as the writer, let alone hanging out around the same fire pit or agora. 

    Freed from the constraints of orality, thinkers explored new forms to preserve their thoughts. And what verbose and convoluted forms these could take:

    The Reader will I doubt too soon discover that so large an interval of time was not spent in writing this discourse; the very length of it will convince him, that the writer had not time enough to make a shorter.
    George Tullie, An Answer to a Discourse Concerning the Celibacy of the Clergy, 1688

    There’s no such thing as an oral semicolon. And George Tullie has no way of knowing anything about his future audience. He addresses himself to a generic reader he will never see, nor receive feedback from. Writing in this manner is terrific for precision, but not good at all for interaction.

    Writing allowed literate people to become hermits and hoarders, able to record and consume ideas in total solitude, invest authority in them, and defend ownership of them. Though much writing preserved the dullest of records, the small minority of language communities that made the leap to literacy also gained the ability to compose, revise, and perfect works of magnificent complexity, utility, and beauty.

    The qualities of oral culture

    In Orality and Literacy: The Technologizing of the Word, Walter Ong explored the “psychodynamics of orality,” which is, coincidentally, quite a mouthful.  Through his research, he found that the ability to preserve ideas in writing not only increased knowledge, it altered values and behavior. People who grow up and live in a community that has never known writing are different from literate people—they depend upon one another to preserve and share knowledge. This makes for a completely different, and much more intimate, relationship between ideas and communities.

    Oral culture is immediate and social

    In a society without writing, communication can happen only in the moment and face-to-face. It sounds like the introvert’s nightmare! Oral culture has several other hallmarks as well:

    • Spoken words are events that exist in time. It’s impossible to step back and examine a spoken word or phrase. While the speaker can try to repeat, there’s no way to capture or replay an utterance.
    • All knowledge is social, and lives in memory. Formulas and patterns are essential to transmitting and retaining knowledge. When the knowledge stops being interesting to the audience, it stops existing.
    • Individuals need to be present to exchange knowledge or communicate. All communication is participatory and immediate. The speaker can adjust the message to the context. Conversation, contention, and struggle help to retain this new knowledge.
    • The community owns knowledge, not individuals. Everyone draws on the same themes, so not only is originality not helpful, it’s nonsensical to claim an idea as your own.
    • There are no dictionaries or authoritative sources. The right use of a word is determined by how it’s being used right now.

    Literate culture promotes authority and ownership

    Printed books enabled mass distribution and dispensed with the handicraft of manuscripts, alienating readers from the source of the ideas, and from each other. (Ong pg. 100):

    • The printed text is an independent physical object. Ideas can be preserved as a thing, completely apart from the thinker.
    • Portable printed works enable individual consumption. The need and desire for private space accompanied the emergence of silent, solo reading.
    • Print creates a sense of private ownership of words. Plagiarism is possible.
    • Individual attribution is possible. The ability to identify a sole author increases the value of originality and creativity.
    • Print fosters a sense of closure. Once a work is printed, it is final and closed.

    Print-based literacy ascended to a position of authority and cultural dominance, but it didn’t eliminate oral culture completely.

    Technology brought us together again

    All that studying allowed people to accumulate and share knowledge, speeding up the pace of technological change. And technology transformed communication in turn. It took less than 150 years to get from the telegraph to the World Wide Web. And with the web—a technology that requires literacy—Ong identified a return to the values of the earlier oral culture. He called this secondary orality. Then he died in 2003, before the rise of the mobile internet, when things really got interesting.

    Secondary orality is:

    • Immediate. There is no necessary delay between the expression of an idea and its reception. Physical distance is meaningless.
    • Socially aware and group-minded. The number of people who can hear and see the same thing simultaneously is in the billions.
    • Conversational. This is in the sense of being both more interactive and less formal.
    • Collaborative. Communication invites and enables a response, which may then become part of the message.
    • Intertextual. The products of our culture reflect and influence one another.

    Social, ephemeral, participatory, anti-authoritarian, and opposed to individual ownership of ideas—these qualities sound a lot like internet culture.

    Wikipedia: Knowledge Talks

    When someone mentions a genre of music you’re unfamiliar with—electroclash, say, or plainsong—what do you do to find out more? It’s quite possible you type the term into Google and end up on Wikipedia, the improbably successful, collaborative encyclopedia that would be impossible without the internet.

    According to Wikipedia, encyclopedias have existed for around two-thousand years. Wikipedia has existed since 2001, and it’s the fifth most-popular site on the web. Wikipedia is not a publication so much as a society that provides access to knowledge. A volunteer community of “Wikipedians” continuously adds to and improves millions of articles in over 200 languages. It’s a phenomenon manifesting all the values of secondary orality:

    • Anyone can contribute anonymously and anyone can modify the contributions of another.
    • The output is free.
    • The encyclopedia articles are not attributed to any sole creator. A single article might have 2 editors or 1,000.
    • Each article has an accompanying “talk” page where editors discuss potential improvements, and a “history” page that tracks all revisions. Heated arguments are not documented. They take place as revisions within documents.

    Wikipedia is disruptive in the true Clayton Christensen sense. It’s created immense value and wrecked an existing business model. Traditional encyclopedias are publications governed by authority, and created by experts and fact checkers. A volunteer project collaboratively run by unpaid amateurs shows that conversation is more powerful than authority, and that human knowledge is immense and dynamic.

    In an interview with The Guardian, a British librarian expressed some disdain about Wikipedia.

    The main problem is the lack of authority. With printed publications, the publishers must ensure that their data are reliable, as their livelihood depends on it. But with something like this, all that goes out the window.
    Philip Bradley, “Who knows?”, The Guardian, October 26, 2004

    Wikipedia is immediate, group-minded, conversational, collaborative, and intertextual— secondary orality in action—but it relies on traditionally published sources for its authority. After all, anything new that changes the world does so by fitting into the world. As we design for new methods of communication, we should remember that nothing is more valuable simply because it’s new; rather, technology is valuable when it brings us more of what’s already meaningful.

    From Documents to Events

    Pages and documents organize information in space. Space used to be more of a constraint back when we printed conversation out. Now that the internet has given us virtually infinite space, we need to mind how conversation moves through time. Thinking about serving the needs of people in an internet-based culture requires a shift from thinking about how information occupies space—documents—to how it occupies time—events.

    Texting means that we’ve never been more lively (yet silent) in our communications. While we still have plenty of in-person interactions, it’s gotten easy to go without. We text grocery requests to our spouses. We click through a menu in a mobile app to summon dinner (the order may still arrive at the restaurant by fax, proving William Gibson’s maxim that the future is unevenly distributed). We exchange messages on Twitter and Facebook instead of visiting friends in person, or even while visiting friends in person. We work at home and Slack our colleagues.

    We’re rapidly approaching a future where humans text other humans and only speak aloud to computers. A text-based interaction with a machine that’s standing in for a human should feel like a text-based interaction with a human. Words are a fundamental part of the experience, and they are part of the design. Words should be the basis for defining and creating the design.

    We’re participating in a radical cultural transformation. The possibilities manifest in systems like Wikipedia that succeed in changing the world by using technology to connect people in a single collaborative effort. And even those of us creating the change suffer from some lag. The dominant educational and professional culture remains based in literary values. We’ve been rewarded for individual achievement rather than collaboration. We seek to “make our mark,” even when designing changeable systems too complex for any one person to claim authorship. We look for approval from an authority figure. Working in a social, interactive way should feel like the most natural thing in the world, but it will probably take some doing.

    Literary writing—any writing that emerges from the culture and standards of literacy—is inherently not interactive. We need to approach the verbal design not as a literary work, but as a conversation. Designing human-centered interactive systems requires us to reflect on our deep-seated orientation around artifacts and ownership. We must alienate ourselves from a set of standards that no longer apply.

    Most advice on “writing for the web” or “creating content” starts from the presumption that we are “writing,” just for a different medium. But when we approach communication as an assembly of pieces of content rather than an interaction, customers who might have been expecting a conversation end up feeling like they’ve been handed a manual instead.

    Software is on a path to participating in our culture as a peer.  So, it should behave like a person—alive and present. It doesn’t matter how much so-called machine intelligence is under the hood—a perceptive set of programmatic responses, rather than a series of documents, can be enough if they have the qualities of conversation.

    Interactive systems should evoke the best qualities of living human communities—active, social, simple, and present—not passive, isolated, complex, or closed off.

    Life Beyond Literacy

    Indeed, language changes lives. It builds society, expresses our highest aspirations, our basest thoughts, our emotions and our philosophies of life. But all language is ultimately at the service of human interaction. Other components of language—things like grammar and stories—are secondary to conversation.
    Daniel L. Everett, How Language Began

    Literacy has gotten us far. It’s gotten you this far in this book. So, it’s not surprising we’re attached to the idea. Writing has allowed us to create technologies that give us the ability to interact with one another across time and space, and have instantaneous access to knowledge in a way our ancestors would equate with magic. However, creating and exchanging documents, while powerful, is not a good model for lively interaction. Misplaced literate values can lead to misery—working alone and worrying too much about posterity.

    So, it’s time to let go and live a little! We’re at an exciting moment. The computer screen that once stood for a page can offer a window into a continuous present that still remembers everything. Or, the screen might disappear completely.

    Now we can start imagining, in an open-ended way, what constellation of connected devices any given person will have around them, and how we can deliver a meaningful, memorable experience on any one of them. We can step away from the screen and consider what set of inputs, outputs, events, and information add up to the best experience.

    This is daunting for designers, sure, yet phenomenal for people. Thinking about human-computer interactions from a screen-based perspective was never truly human-centered from the start. The ideal interface is an interface that’s not noticeable at all—a world in which the distance from thought to action has collapsed and merely uttering a phrase can make it so.

    We’re fast moving past “computer literacy.” It’s on us to ensure all systems speak human fluently.

  3. A DIY Web Accessibility Blueprint

    The summer of 2017 marked a monumental victory for the millions of Americans living with a disability. On June 13th, a Southern District of Florida judge ruled that Winn-Dixie’s inaccessible website violated Title III of the Americans with Disabilities Act. This case marks the first time a web accessibility lawsuit has gone to trial under the ADA, which was passed into law in 1990.

    Despite spending more than $7 million to revamp its website in 2016, Winn-Dixie neglected to include design considerations for users with disabilities. Some of the features that were added include online prescription refills, digital coupons, rewards card integration, and a store locator function. However, it appears that inclusivity didn’t make the cut.

    Because Winn-Dixie’s new website wasn’t developed to WCAG 2.0 standards, the new features it boasted were in effect only available to sighted, able-bodied users. When Florida resident Juan Carlos Gil, who is legally blind, visited the Winn-Dixie website to refill his prescriptions, he found it to be almost completely inaccessible using the same screen reader software he uses to access hundreds of other sites.

    Juan stated in his original complaint that he “felt as if another door had been slammed in his face.” But Juan wasn’t alone. Intentionally or not, Winn-Dixie was denying an entire group of people access to their new website and, in turn, each of the time-saving features it had to offer.

    What makes this case unique is that it marks the first time such a public accommodations case has gone to trial: the judge ruled the website to be a “place of public accommodation” under the ADA and therefore subject to ADA regulations. Since there are no specific ADA regulations regarding the internet, Judge Scola deemed the adoption of the Web Content Accessibility Guidelines (WCAG) 2.0 Level AA appropriate. (Thanks to the hard work of the Web Accessibility Initiative (WAI) at the W3C, WCAG 2.0 has found widespread adoption throughout the globe, either as law or policy.)

    Learning to have empathy

    Anyone with a product subscription service (think diapers, razors, or pet food) knows the feeling of gratitude that accompanies the delivery of a much needed product that arrives just in the nick of time. Imagine how much more grateful you’d be for this service if you, for whatever reason, were unable to drive and lived hours from the nearest store. It’s a service that would greatly improve your life. But now imagine that the service gets overhauled and redesigned in such a way that it is only usable by people who own cars. You’d probably be pretty upset.

    This subscription service example is hypothetical, yet in the United States, despite federal web accessibility requirements instituted to protect the rights of disabled Americans, this sort of discrimination happens frequently. In fact, anyone assuming the Winn-Dixie case was an isolated incident would be wrong. Web accessibility lawsuits are rising in number. The increase from 2015 to 2016 was 37%. While some of these may be what’s known as “drive-by lawsuits,” many of them represent plaintiffs like Juan Gil who simply want equal rights. Scott Dinin, Juan’s attorney, explained, “We’re not suing for damages. We’re only suing them to follow the laws that have been in this nation for twenty-seven years.”

    For this reason and many others, now is the best time to take a proactive approach to web accessibility. In this article I’ll help you create a blueprint for getting your website up to snuff.

    The accessibility blueprint

    If you’ll be dealing with remediation, I won’t sugarcoat it: successfully meeting web accessibility standards is a big undertaking, one that is achieved only when every page of a site adheres to all the guidelines you are attempting to comply with. As I mentioned earlier, those guidelines are usually WCAG 2.0 Level AA, which means meeting every Level A and AA requirement. Tight deadlines, small budgets, and competing priorities may increase the stress that accompanies a web accessibility remediation project, but with a little planning and research, making a website accessible is both reasonable and achievable.

    My intention is that you may use this article as a blueprint to guide you as you undertake a DIY accessibility remediation project. Before you begin, you’ll need to increase your accessibility know-how, familiarize yourself with the principles of universal design, and learn about the benefits of an accessible website. Then you may begin to evangelize the benefits of web accessibility to those you work with.

    Have the conversation with leadership

    Securing support from company leadership is imperative to the long-term success of your efforts. There are numerous ways to broach the subject of accessibility, but, sadly, in the world of business, substantiated claims top ethics and moral obligation. Therefore I’ve found one of the most effective ways to build a business case for web accessibility is to highlight the benefits.

    Here are just a few to speak of:

    • Accessible websites are inherently more usable, and consequently they get more traffic. Additionally, better user experiences result in lower bounce rates, higher conversions, and less negative feedback, which in turn typically make accessible websites rank higher in search engines.
    • Like assistive technology, web crawlers (such as Googlebot) leverage HTML to get their information from websites, so a well marked-up, accessible website is easier to index, which makes it easier to find in search results.
    • There are a number of potential risks for not having an accessible website, one of which is accessibility lawsuits.
    • Small businesses in the US that improve the accessibility of their website may be eligible for a tax credit from the IRS.

    Start the movement

    If you can’t secure leadership backing right away, you can still form a grassroots accessibility movement within the company. Begin slowly and build momentum as you work to improve usability for all users. Though you may not have the authority to make company-wide changes, you can strategically and systematically lead the charge for web accessibility improvements.

    My advice is to start small. For example, begin by pushing for site-wide improvements to color contrast ratios (which would help color-blind, low-vision, and aging users) or work on making the site keyboard accessible (which would help users with mobility impairments or broken touchpads, and people such as myself who prefer not using a mouse whenever possible). Incorporate user research and A/B testing into these updates, and document the results. Use the results to champion for more accessibility improvements.
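
    If you want to spot-check color pairs as you make the case, the WCAG 2.0 contrast math is simple enough to sketch out (colors given as [r, g, b] values from 0 to 255):

    function relativeLuminance([r, g, b]) {
      const channel = (value) => {
        const s = value / 255;
        return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
      };
      return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
    }

    function contrastRatio(foreground, background) {
      const [lighter, darker] = [relativeLuminance(foreground), relativeLuminance(background)].sort((a, b) => b - a);
      return (lighter + 0.05) / (darker + 0.05);
    }

    // Level AA requires at least 4.5:1 for normal text and 3:1 for large text
    contrastRatio([119, 119, 119], [255, 255, 255]); // roughly 4.48, just short of AA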

    Read and re-read the guidelines

    Build your knowledge base as you go. Learning which laws, rules, or guidelines apply to you, and understanding them, is a prerequisite to writing an accessibility plan. Web accessibility guidelines vary throughout the world. There may be other guidelines that apply to you, and in some cases, additional rules, regulations, or mandates specific to your industry.

    Not understanding which rules apply to you, not reading them in full, or not understanding what they mean can create huge problems down the road, including excessive rework once you learn you need to make changes.

    Build a team

    Before you can start remediating your website, you’ll need to assemble a team. The number of people will vary depending on the size of your organization and website. I previously worked for a very large company with a very large website, yet the accessibility team they assembled was small in comparison to the thousands of pages we were tasked to remediate. This team included a project manager, visual designers, user experience designers, front-end developers, content editors, a couple requirements folks, and a few QA testers. Most of these people had been pulled from their full-time roles and instructed to quickly become familiar with WCAG 2.0. To help you create your own accessibility team, I will explain in detail some of the top responsibilities of the key players:

    • The project manager is responsible for coordinating the entire remediation process. They will help run planning sessions, keep everyone on schedule, and report the progress being made. Working closely with the requirements people, their goal is to keep every part of this new machine running smoothly.
    • Visual designers will mainly address issues of color usage and text alternatives. In its present form, WCAG 2.0 applies contrast minimums only to text; however, the much-anticipated WCAG 2.1 update (due to be released in mid-2018) contains a new success criterion for Non-text Contrast, which covers contrast minimums for all interactive elements and “graphics required to understand the content.” Visual designers should also steer clear of design trends that ruin usability.
    • UX designers should be checking for consistent, logical navigation and reading order. They’ll need to test that pages are using heading tags appropriately (headings are for semantic structure, not for visual styling). They’ll be checking to see that page designs are structured to appear and operate in predictable ways.
    • Developers have the potential to make or break an accessible website because even the best designs will fail if implemented incorrectly. If your developers are unfamiliar with WAI-ARIA, accessible coding practices, or accessible JavaScript, then they have a few things to learn. Developers should think of themselves as designers because they play a very important role in designing an inclusive user experience. Luckily, Google offers a short, free Introduction to Web Accessibility course and, via Udacity, a free, advanced two-week accessibility course. Additionally, The A11Y Project is a one-stop shop loaded with free pattern libraries, checklists, and accessibility resources for front-end developers.
    • Editorial reviews the copy for verbosity. Avoid using phrases that will confuse people who aren’t native language speakers. Don’t “beat around the bush” (see what I did there?). Keep content simple, concise, and easy to understand. No writing degree? No worries. There are apps that can help you improve the clarity of your writing and that correct your grammar like a middle school English teacher. Score bonus points by making sure link text is understandable out of context. While this is a WCAG 2.0 Level AAA guideline, it’s also easily fixed and it greatly improves the user experience for individuals with varying learning and cognitive abilities.
    • Analysts work in tandem with editorial, design, UX, and QA. They coordinate the work being done by these groups and document the changes needed. As they work with these teams, they manage the action items and follow up on any outstanding tasks, questions, or requests. The analysts also deliver the requirements specifications to the developers. If the changes are numerous and complex, the developers may need the analysts to provide further clarification and to help them properly implement the changes as described in the specs.
    • QA will need to be trained to the same degree as the other accessibility specialists since they will be responsible for testing the changes that are being made and catching any issues that arise. They will need to learn how to navigate a website using only a keyboard and also by properly using a screen reader (ideally a variety of screen readers). I emphasized “properly” because while anyone can download NVDA or turn on VoiceOver, it takes another level of skill to understand the difference between “getting through a page” and “getting through a page with standard keyboard controls.” Having individuals with visual, auditory, or mobility impairments on the QA team can be a real advantage, as they are more familiar with assistive technology and can test in tandem with others. Additionally, there are a variety of automated accessibility testing tools you can use alongside manual testing. These tools typically catch only around 30% of common accessibility issues, so they do not replace ongoing human testing. But they can be extremely useful in helping QA learn when an update has negatively affected the accessibility of your website.

    Start your engines!

    Divide your task into pieces that make sense. You may wish to tackle all the global elements first, then work your way through the rest of the site, section by section. Keep in mind that every page must adhere to the accessibility standards you’re following for it to be deemed “accessible.” (This includes PDFs.)

    Use what you’ve learned so far by way of accessibility videos, articles, and guidelines to perform an audit of your current site. While some manual testing may seem difficult at first, you’ll be happy to learn that much of it is actually quite simple. Regardless of the testing being performed, keep in mind that it should always be done thoroughly and with a variety of users in mind, including:

    • keyboard users;
    • blind users;
    • color-blind users;
    • low-vision users;
    • deaf and hard-of-hearing users;
    • users with learning disabilities and cognitive limitations;
    • mobility-impaired users;
    • users with speech disabilities;
    • and users with seizure disorders.

    When you are in the weeds, document the patterns

    As you get deep in the weeds of remediation, keep track of the patterns being used. Start a knowledge repository for elements and situations. Lock down the designs and colors, code each element to be accessible, and test these patterns across various platforms, browsers, screen readers, and devices. When you know the elements are bulletproof, save them in a pattern library that you can pull from later. Having a pattern library at your fingertips will improve consistency and compliance, and help you meet tight deadlines later on, especially when working in an agile environment. You’ll need to keep this online knowledge repository and pattern library up-to-date. It should be a living, breathing document.

    Cross the finish line … and keep going!

    Some people mistakenly believe accessibility is a set-it-and-forget-it solution. It isn’t. Accessibility is an ongoing challenge to continually improve the user experience the way any good UX practitioner does. This is why it’s crucial to get leadership on board. Once your site is fully accessible, you must begin working on the backlogs of continuous improvements. If you aren’t vigilant about accessibility, people making even small site updates can unknowingly strip the site of the accessibility features you worked so hard to put in place. You’d be surprised how quickly it can happen, so educate everyone you work with about the importance of accessibility. When everyone working on your site understands and evangelizes accessibility, your chances of protecting the accessibility of the site are much higher.

    It’s about the experience, not the law

    In December of 2017, Winn-Dixie appealed the case brought by blind patron Juan Carlo Gil. Their argument was that a website does not constitute a place of public accommodation, and that the case should therefore have been dismissed. This case and others illustrate that the legality of web accessibility is still very much in flux. However, as web developers and designers, our motivation to build accessible websites should have nothing to do with the law and everything to do with the user experience.

    Good accessibility is good UX. We should seek to create the best user experience for all. And we shouldn’t settle for simply meeting accessibility standards but rather strive to create an experience that delights users of all abilities.

    Additional resources and articles

    If you are ready to learn more about web accessibility standards and become the accessibility evangelist on your team, here are some additional resources that can help.



  4. We Write CSS Like We Did in the 90s, and Yes, It’s Silly

    As web developers, we marvel at technology. We enjoy the many tools that help with our work: multipurpose editors, frameworks, libraries, polyfills and shims, content management systems, preprocessors, build and deployment tools, development consoles, production monitors—the list goes on.

    Our delight in these tools is so strong that no one questions whether a small website actually requires any of them. Tool obesity is the new WYSIWYG—the web developers who can’t do without their frameworks and preprocessors are no better than our peers from the 1990s who couldn’t do without FrontPage or Dreamweaver. It is true that these tools have improved our lives as developers in many ways. At the same time, they have perhaps also prevented us from improving our basic skills.

    I want to talk about one of those skills: the craft of writing CSS. Not of using CSS preprocessors or postprocessors, but of writing CSS itself. Why? Because CSS is second in importance only to HTML in web development, and because no one needs processors to build a site or app.

    Most of all, I want to talk about this because when it comes to writing CSS, it often seems that we have learned nothing since the 1990s. We still write CSS the natural way, with no advances in sorting declarations or selectors and no improvements in writing DRY CSS.

    Instead, many developers argue fiercely about each of these topics. Others simply dig in their heels and refuse to change. And a third cohort protests even the discussion of these topics.

    I don’t care that developers do this. But I do care about our craft. And I care that we, as a profession, are ignoring simple ways to improve our work.

    Let’s talk about this more after the code break.

    Here’s unsorted, unoptimized CSS from Amazon in 2003.

    .serif {
      font-family: times, serif;
      font-size: small;
    }
    .sans {
      font-family: verdana, arial, helvetica, sans-serif;
      font-size: small;
    }
    .small {
      font-family: verdana, arial, helvetica, sans-serif;
      font-size: x-small;
    }
    .h1 {
      font-family: verdana, arial, helvetica, sans-serif;
      color: #CC6600;
      font-size: small;
    }
    .h3color {
      font-family: verdana, arial, helvetica, sans-serif;
      color: #CC6600;
      font-size: x-small;
    }
    .tiny {
      font-family: verdana, arial, helvetica, sans-serif;
      font-size: xx-small;
    }
    .listprice {
      font-family: arial, verdana, sans-serif;
      text-decoration: line-through;
      font-size: x-small;
    }
    .price {
      font-family: verdana, arial, helvetica, sans-serif;
      color: #990000;
      font-size: x-small;
    }
    .attention {
      background-color: #FFFFD5;
    }

    And here’s CSS from contemporary Amazon:

    .a-box {
      display: block;
      border-radius: 4px;
      border: 1px #ddd solid;
      background-color: #fff;
    }
    .a-box .a-box-inner {
      border-radius: 4px;
      position: relative;
      padding: 14px 18px;
    }
    .a-box-thumbnail {
      display: inline-block;
    }
    .a-box-thumbnail .a-box-inner {
      padding: 0 !important;
    }
    .a-box-thumbnail .a-box-inner img {
      border-radius: 4px;
    }
    .a-box-title {
      overflow: hidden;
    }
    .a-box-title .a-box-inner {
      overflow: hidden;
      padding: 12px 18px 11px;
      background: #f0f0f0;
    }

    Just as in 2003, the CSS is unsorted and unoptimized. Did we learn anything over the past 15 years? Is this really the best CSS we can write?

    Let’s look at three areas where I believe we can easily improve the way we do our work: declaration sorting, selector sorting, and declaration repetition.

    Declaration sorting

    The 90s web developer, if they wrote CSS at all, wrote it as it occurred to them. Without sense or order—with no direction whatsoever. The same was true of last decade’s developer. The same is true of today’s developer, whether novice or expert.

    .foo {
      font: arial, sans-serif;
      background: #abc;
      margin: 1em;
      text-align: center;
      letter-spacing: 1px;
      -x-yaddayadda: yes;
    }

    The only difference between now and then: today’s expert developer uses eight variables, because “that’s how you do it” (even with one-pagers) and because at some point in their life they may need them. In twenty-something years of web development we have somehow not managed to make our CSS consistent and easier to work on by establishing the (or even a) common sense standard to sort declarations.

    (If this sounds harsh, it’s because it’s true. Developers condemn selectors, shorthands, !important, and other useful aspects of CSS rather than concede that they don’t even know how to sort their declarations.)

    In reality, the issue is dead simple: Declarations should be sorted alphabetically. Period.
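
    Here is the earlier .foo rule with its declarations sorted alphabetically, as a minimal illustration. (Sorting strictly character by character puts the nonstandard prefixed property first; a team could just as well decide to sort prefixed properties by their unprefixed name.)

    .foo {
      -x-yaddayadda: yes;
      background: #abc;
      font: arial, sans-serif;
      letter-spacing: 1px;
      margin: 1em;
      text-align: center;
    }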

    Why?

    For one, sorting makes collaborating easier.

    Untrained developers can do it. Non-English speakers (such as this author) can do it. I wouldn’t be surprised to learn that even houseplants can do it.

    For another, alphabetical sorting can be automated. What’s that? Yes, one can use or write little scripts (such as CSS Declaration Sorter) to sort declarations.

    Given the ease of sorting, and its benefits, the current state of affairs borders on the ridiculous, making it tempting to ignore our peers who don’t sort declarations, and to ban from our lives those who argue that it’s easier—or even logical—not to sort alphabetically but instead to sort based on 1) box dimensions, 2) colors, 3) grid- or flexbox-iness, 4) mood, 5) what they ate for breakfast, or some equally random basis.

    With this issue settled (if somewhat provocatively), on to our second problem from the 90s.

    Selector sorting

    The situation concerning selectors is quite similar. Almost since 1994, developers have written selectors and rules as they occurred to them. Perhaps they’ve moved them around (“Oh, that belongs with the nav”). Perhaps they’ve refactored their style sheets (“Oh, strange that site styles appear amidst notification styles”). But standardizing the order—no.

    Let’s take a step back and assume that order does matter, not just for aesthetics as one might think, but for collaboration. As an example, think of the letters below as selectors. Which list would be easiest to work with?

    c, b · a · a, b · c, d · d, c, a · e · a
    c · b · a, b · a · c, d · a, c, d · a · e
    a, b · a, c, d · a · b, c · c, d · e

    The fact that one selector (a) was a duplicate that only got discovered and merged in the last row perhaps gives away my preference. But then, if you wanted to add d, e to the list, wouldn’t the order of the third row make placing the new selector easier than placing it in either of the first two rows?

    This example gets at the two issues caused by not sorting selectors:

    • No one knows where to add new selectors, creating a black hole in the workflow.
    • There’s a higher chance of both selector repetition and duplication of rules with the same selectors.

    Both problems get compounded in larger projects and larger teams. Both problems have haunted us since the 90s. Both problems get fixed by standardizing—through coding guidelines—how selectors should be ordered.

    The answer in this case is not as trivial as sorting alphabetically (although we could play with the idea—the cognitive ease of alphabetical selector sorting may make it worth trying). But we can take a path similar to how the HTML spec roughly groups elements, so that we first define sections, and then grouping elements, text elements, etc. (That’s also the approach of at least one draft, the author’s.)
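
    As a rough sketch of what that kind of order could look like in a style sheet (the grouping and comments here are only one possible interpretation, loosely following the HTML spec’s element categories, not a finished standard):

    /* Document and sections */
    html {
      font: 100%/1.5 sans-serif;
    }
    body {
      margin: 0;
    }
    header,
    nav,
    main,
    footer {
      display: block;
    }

    /* Grouping content */
    p,
    ul,
    ol {
      margin: 0 0 1.5em;
    }

    /* Text-level semantics */
    a {
      color: inherit;
    }
    em {
      font-style: italic;
    }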

    The point is that ideal selector sorting doesn’t just occur naturally and automatically. We can benefit from putting more thought into this problem.

    Declaration repetition

    Our third hangover from the 90s is that there is and has always been an insane amount of repetition in our style sheets. According to one analysis of more than 200 websites, a median of 66% of all declarations are redundant, and the repetition rate goes as high as 92%—meaning that, in this study at least, the typical website uses each declaration at least three times and some up to ten times.

    As shown by a list of some sample sites I compiled, declaration repetition has indeed been bad from the start and has even increased slightly over the years.

    Yes, there are reasons for repetition: notably for different target media (we may repeat ourselves for screen, print, or different viewport sizes) and, occasionally, for the cascade. That is why a repetition rate of 10–20% seems to be acceptable. But the degree of repetition we observe right now is not acceptable—it’s an unoptimized mess that goes mostly unnoticed.

    What’s the solution here? One possibility is to use declarations just once. We’ve seen with a sample optimization of Yandex’s large-scale site that this can lead to slightly more unwieldy style sheets, but we also know that in many other cases it does make them smaller and more compact.

    This approach of using declarations just once has at least three benefits:

    • It reduces repetition to a more acceptable amount.
    • It reduces the pseudo need for variables.
    • Excluding outliers like Yandex, it reduces file size and payload (10–20% according to my own experience—we looked at the effects years ago at Google).
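
    To make the idea concrete, here is a small before-and-after sketch (the class names are made up). The repeated declaration is merged into a single rule with grouped selectors:

    /* Before: the same declaration repeated three times */
    .teaser {
      color: #333;
    }
    .sidebar {
      color: #333;
    }
    .site-footer {
      color: #333;
    }

    /* After: the declaration is used just once */
    .teaser,
    .sidebar,
    .site-footer {
      color: #333;
    }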

    No matter what practice we as a field come up with—whether to use declarations just once or follow a different path—the current level of “natural repetition” we face on sample websites is too high. We shouldn’t need to remind ourselves not to repeat ourselves if we repeat code up to nine times, and it’s getting outright pathetic—again excuse the strong language—if then we’re also the ones to scream for constants and variables and other features only because we’ve never stopped to question this 90s-style coding.

    The unnatural, more modern way of writing CSS

    Targeting these three areas would help us move to a more modern way of writing style sheets, one that has a straightforward but powerful way to sort declarations, includes a plan for ordering selectors, and minimizes declaration repetition.

    In this article, we’ve outlined some options for us to adhere to this more modern way:

    • Sort declarations alphabetically.
    • Use an existing order system or standardize and follow a new selector order system.
    • Try to use declarations just once.
    • Get assistance through tools.

    And yet there’s still great potential to improve in all of these areas. The potential, then, is what we should close with. While I’ve emphasized our “no changes since the 90s” way of writing CSS, and stressed the need for robust practices, we need more proposals, studies, and conversations around what practices are most beneficial. Beneficial in terms of writing better, more consistent CSS, but also in terms of balancing our sense of craft (our mastery of our profession) with a high degree of efficiency (automating when it’s appropriate). Striving to achieve this balance will help ensure that developers twenty years from now won’t have to write rants about hangovers from the 2010s.

  5. Owning the Role of the Front-End Developer

    When I started working as a web developer in 2009, I spent most of my time crafting HTML/CSS layouts from design comps. My work was the final step of a linear process in which designers, clients, and other stakeholders made virtually all of the decisions.

    Whether I was working for an agency or as a freelancer, there was no room for a developer’s input on client work other than when we were called to answer specific technical questions. Most of the time I would be asked to confirm whether it was possible to achieve a simple feature, such as adding a content slider or adapting an image loaded from a CMS.

    In the ensuing years, as front-end development became increasingly challenging, developers’ skills began to evolve, leading to more frustration. Many organizations, including the ones I worked for, followed a traditional waterfall approach that kept us in the dark until the project was ready to be coded. Everything would fall into our laps, often behind schedule, with no room for us to add our two cents. Even though we were often highly esteemed by our teammates, there still wasn’t a chance for us to contribute to projects at the beginning of the process. Every time we shared an idea or flagged a problem, it was already too late.

    Almost a decade later, we’ve come a long way as front-end developers. After years of putting in the hard work required to become better professionals and have a bigger impact on projects, many developers are now able to occupy a more fulfilling version of the role.

    But there’s still work to be done: Unfortunately, some front-end developers with amazing skills are still limited to basic PSD-to-HTML work. Others find themselves in a better position within their team, but are still pushing for a more prominent role where their ideas can be fostered.

    Although I’m proud to believe I’m part of the group that evolved with the role, I continue to fight for our seat at the table. I hope sharing my experience will help others fighting with me.

    My road to earning a seat at the table

    My role began to shift the day I watched an inspiring talk by Seth Godin, which helped me realize I had the power to start making changes that would make my work more fulfilling. With his recommendation to demand responsibility whether you work for a boss or a client, Godin gave me the push I needed.

    I wasn’t expecting to make any big leaps—just enough to feel like I was headed in the right direction.

    Taking small steps within a small team

    My first chance to test the waters was ideal. I had recently partnered with a small design studio and we were a team of five. Since I’d always been open about my soft spot for great design, it wasn’t hard to sell them on the idea of having me get a bit more involved with the design process and start giving technical feedback before comps were presented to clients.

    The results were surprisingly good and had a positive impact on everybody’s work. I started getting design hand-offs that I both approved of from a technical point of view and had a more personal connection with. For their part, the designers happily noticed that the websites we launched were more accurate representations of the comps they had handed off.

    My next step was to get involved with every single project from day one. I started to tag along to initial client meetings, even before any contracts had been signed. I started flagging things that could turn the development phase into a nightmare; at the same time I was able to throw around some ideas about new technologies I’d been experimenting with.

    After a few months, I started feeling that my skills were finally having an impact on my team’s projects. I was satisfied with my role within the team, but I knew it wouldn’t last forever. Eventually it was time for me to embark on a journey that would take me back to the classic role of the front-end developer, closer to the base of the waterfall.

    Moving to the big stage

    As my career started to take off, I found myself far away from that five-desk office where it had all started. I was now working with a much bigger team, and the challenges were quite different. At first I was amazed at how they were approaching the process: the whole team had a strong technical background, unlike any team I had ever worked with, which made collaboration very efficient. I had no complaints about the quality of the designs I was assigned to work with. In fact, during my first few months, I was constantly pushed out of my comfort zone, and my skills were challenged to the fullest.

    After I started to feel more comfortable with my responsibilities, though, I soon found my next challenge: to help build a stronger connection between the design and development teams. Though we regularly collaborated to produce high-quality work, these teams didn’t always speak the same language. Luckily, the company was already making an effort to improve the conversation between creatives and developers, so I had all the support I needed.

    As a development team, we had been shifting to modern JavaScript libraries that led us to work on our applications using a strictly component-based approach. But though we had slowly changed our mindset, we hadn’t changed the ways we collaborated with our creative colleagues. We had not properly shared our new vision; making that connection would become my new personal goal.

    I was fascinated by Brad Frost’s “death to the waterfall” concept: the idea that UX, visual design, and development teams should work in parallel, allowing for a higher level of iteration during the project.

    By pushing to progressively move toward a collaborative workflow, everyone on my team began to share more responsibilities and exchange more feedback throughout every project. Developers started to get involved in projects during the design phase, flagging any technical issues we could anticipate. Designers made sure they provided input and guidance after the projects started coming to life during development. Once we got the ball rolling, we quickly began seeing positive results and producing rewarding (and award-winning) work.

    Even though it might sound like it was a smooth transition, it required a great amount of hard work and commitment from everybody on the team. Not only did we all want to produce better work but we also needed to be willing to take a big leap away from our comfort zones and our old processes.

    How you can push for a seat at the table

    In my experience, making real progress required a combination of sharpening my skills as a front-end developer and pushing the team to improve our processes.

    What follows are more details about what worked for me—and could also work for you.

    Making changes as a developer

    Even though the real change in your role may depend on your organization, sometimes your individual actions can help jump-start the shift:

    • Speak up. In multidisciplinary teams, developers are known as highly analytical, critical, and logical, but not always the most communicative of the pack. I’ve seen many who quietly complain and claim to have better ideas on how things should be handled, but bottle up those thoughts and move on to a different job. After I started voicing my concerns, proposing new ideas, and seeing small changes within my team, I experienced an unexpected boost in my motivation and noticed others begin to see my role differently.
    • Always be aware of what the rest of the team is up to. One of the most common mistakes we tend to make is to focus only on our craft. To connect with our team and improve in our role, we need to understand our organization’s goals, our teammates’ skill sets, our customers, and basically every other aspect of our industry that we used to think wasn’t worth a developer’s time. Once I started having a better understanding of the design process, communication with my team started to improve. The same applied to designers who started learning more about the processes we use as front-end developers.
    • Keep core skills sharp. Today our responsibilities are broader and we’re constantly tasked with leading our teams into undiscovered technologies. As a front-end developer, it’s not uncommon to be required to research technologies like WebGL or VR, and introduce them to the rest of the team. We must stay current with the latest practices in our technical areas of focus. Our credibility is at stake every time our input is needed, so we must always strive to be the best developers in the business.

    Rethinking practices within the company

    In order to make the most of your role as a developer, you’ll have to persuade your organization to make key changes. This might be hard to achieve, since it tends to require taking all members of your team out of their comfort zones.

    For me, what worked was long talks with my colleagues, including designers, management, and fellow developers. It’s hard for a manager to turn you down when you propose an idea to improve the quality of your work and only ask for small changes. Once the rest of the team is on board, you have to work hard and start implementing these changes to keep the ball rolling:

    • Involve developers in projects from the beginning. Many companies have high standards when it comes to hiring developers but don’t take full advantage of their talent. We tend to be logical thinkers, so it’s usually a good idea to involve developers in many aspects of the projects we work on. I often had to take the first step to be invited to project kickoffs. But once I started making an effort to provide valuable input, my team started automatically involving me and other developers during the creative phase of new projects.
    • Schedule team reviews. Problems frequently arise when teams present to clients without having looped in everyone working on the project. Once the client signs off on something, it can be risky to introduce new ideas, even if they add value. Developers, designers, and other key players must come together for team reviews before handing off any work. As a developer, sometimes you might need to raise your hand and invest some of your time to help your teammates review their work before they present it.
    • Get people to work together. Whenever possible, get people in the same room. We tend to rely on technology and push to communicate only by chat and email, but there is real value in face time. It’s always a good idea to have different teammates sit together, or at least in close enough proximity for regular in-person conversation, so they can share feedback more easily during projects. If your team works remotely, you have to look for alternatives to achieve the same effect. Occasional video chats and screen sharing can help teams share feedback and interact in real time.
    • Make time for education. Of all the teams I’ve worked on, those that foster a knowledge-sharing culture tend to work most efficiently. Simple and casual presentations among colleagues from different disciplines can be vital to creating a seamless variety of skills across the team. So it’s important to encourage members of the team to teach and learn from each other.

      When we made the decision to use only a component-based architecture, we prepared a simple presentation for the design team that gave them an overview of how we all would benefit from the change to our process. Shortly after, the team began delivering design comps that were aligned with our new approach.

    It’s fair to say that the modern developer can’t simply hide behind a keyboard and expect the rest of the team to handle all of the important decisions that define our workflow. Our role requires us to go beyond code, share our ideas, and fight hard to improve the processes we’re involved in.

  6. Discovery on a Budget: Part II

    Welcome to the second installment of the “Discovery on a Budget” series, in which we explore how to conduct effective discovery research when there is no existing data to comb through, no stakeholders to interview, and no slush fund to draw upon. In part 1 of this series, we discussed how it is helpful to articulate what you know (and what you assume) in the form of a problem hypothesis. We also covered strategies for conducting one of the most affordable and effective research methods: user interviews. In part 2 we will discuss when it’s beneficial to introduce a second, competing problem hypothesis to test against the first. We will also discuss the benefits of launching a “fake-door” and how to conduct an A/B test when you have little to no traffic.

    A quick recap

    In part 1 I conducted the first round of discovery research for my budget-conscious (and fictitious!) startup, Candor Network. The original goal for Candor Network was to provide a non-addictive social media platform that users would pay for directly. I articulated that goal in the form of a problem hypothesis:

    Because their business model relies on advertising, social media tools like Facebook are deliberately designed to “hook” users and make them addicted to the service. Users are unhappy with this and would rather have a healthier relationship with social media tools. They would be willing to pay for a social media service that was designed with mental health in mind.

    Also in part 1, I took extra care to document the assumptions that went into creating this hypothesis. They were:

    • Users feel that social media sites like Facebook are addictive.
    • Users don’t like to be addicted to social media.
    • Users would be willing to pay for a non-addictive Facebook replacement.

    For the first round of research, I chose to conduct user interviews because it is a research method that is adaptable, effective, and—above all—affordable. I recruited participants from Facebook, taking care to document the bias of using a convenience sampling method. I carefully crafted my interview protocol, and used a number of strategies to keep my participants talking. Now it is time to review the data and analyze the results.

    Analyze the data

    When we conduct discovery research, we look for data that can help us either affirm or reject the assumptions we made in our problem hypothesis. Regardless of what research method you choose, it’s critical that you set aside the time to objectively review and analyze the results.

    In practice, analyzing interview data involves creating transcriptions of the interviews and then reading them many, many times. Each time you read through the transcripts, you highlight and label sentences or sections that seem relevant or important to your research question. You can use products like NVivo, HyperRESEARCH, or any other qualitative analysis tool to help facilitate this process. Or, if you are on a pretty strict budget, you can simply use Google Sheets to keep track of relevant sections in one column and labels in another.

    Screenshot of my interview analysis in Google Sheets

    For my project, I specifically looked for data that would show whether my participants felt Facebook was addicting and whether that was a bad thing, and if they’d be willing to pay for an alternative. Here’s how that analysis played out:

    Assumption 1: Users feel that social media sites like Facebook are addictive

    Facebook has a weird, hypnotizing effect on my brain. I keep scrolling and scrolling and then I like wake up and think, ‘where have I been? why am I spending my time on this?’
    interview participant

    Overwhelmingly, my data affirms this assumption. All of my participants (eleven out of eleven) mentioned Facebook being addictive in some way.

    Assumption 2: Users don’t like to be addicted to social media

    I know a lot of people who spend a lot of time on Facebook, but I think I manage it pretty well.
    interview participant

    This assumption turned out to be a little more tricky to affirm or reject. While all of my participants described Facebook as addictive, many of them (eight out of eleven) expressed that “it wasn’t so bad” or that they felt like they were less addicted than the average Facebook user.

    Assumption 3: Users would be willing to pay for a non-addictive Facebook replacement

    No, I wouldn’t pay for that. I mean, why would I pay for something I don’t think I should use so much anyway?
    interview participant

    Unfortunately for my project, I can’t readily affirm this assumption. Four participants told me they would flat-out never pay for a social media service, four participants said they would be interested in trying a paid-for “non-addictive Facebook,” and three participants said they would only try it if it became really popular and everyone else was using it.

    One unexpected result: “It’s super creepy”

    I don’t like that they are really targeting me with those ads. It’s super creepy.
    interview participant

    In reviewing the interview transcripts, I came across one unexpected theme. More than 80% of the interviewees (nine out of eleven) said they found Facebook “creepy” because of the targeted advertising and the collection of personal data. Also, most of those participants (seven out of nine) went on to say that they would pay for a “non-creepy Facebook.” This is particularly remarkable because I never asked the participants how they felt about targeted advertising or the use of personal data. It always came up in the conversation organically.

    Whenever we start a new project, our initial ideas revolve around our own personal experiences and discomforts. I started Candor Network because I personally feel that social media is designed to be addicting, and that this is a major flaw with many of the most popular services. However, while I can affirm my first assumption, I had unclear results on the second and have to consider rejecting the third. Also, I encountered a new user experience that I previously didn’t think of or account for: that the way social media tools collect and use personal data for advertising can be disconcerting and “creepy.” As is so often the case, the data analysis showed that there are a variety of other experiences, expectations, and needs that must be accounted for if the project is to be successful.

    Refining the hypothesis

    Discovery research cycle: Create Hypothesis, Test, Analyze, and repeat

    Each time we go through the discovery research process, we start with a hypothesis, test it by gathering data, analyze the data, and arrive at a new understanding of the problem. In theory, it may be possible to take one trip through the cycle and either completely affirm or completely reject our hypothesis and assumptions. However, like with Candor Network, it is more often the case that we get a mixture of results: some assumptions can be affirmed while others are rejected, and some completely new insights come to light.

    One option is to continue working with a single hypothesis, and simply refine it to account for the results of each round of research. This is especially helpful when the research mostly affirms your assumptions, but there is additional context and nuance you need to account for. However, if you find that your research results are pulling you in a new direction entirely, it can be useful to create a second, competing hypothesis.

    In my example, the interview research brought to light a new concern about social media I previously hadn’t considered: the “creepy” collection of personal data. I am left wondering, Would potential customers be more attracted to the idea of a social media platform built to prevent addiction, or one built for data privacy? To answer this question, I articulated a new, competing hypothesis:

    Because their business model relies on advertising, social media tools like Facebook are designed to gather lots of behavior data. They then utilize this behavior data to create super-targeted ads. Users are unhappy with this, and would rather use a social media tool that does not rely on the commodification of their data to make money. They would be willing to pay for a social media service that did not track their use and behavior.

    I now have two hypotheses to test against one another: one focused on social media addiction, the other focused on behavior tracking and data collection.

    At this point, it would be perfectly acceptable to conduct another round of interviews. We would need to change our interview protocol and find more participants, but it would still be an effective (and cheap) method to use. However, for this article I wanted to introduce a new method for you to consider, and to illustrate that a technique like A/B testing is not just for the “big guys” on the web. So I chose to conduct an A/B test utilizing two “fake-doors.”

    A low-cost comparative test: fake-door A/B testing

    A “fake-door” test is simply a marketing page, ad, button, or other asset that promotes a product that has yet to be made. Fake-door testing (or “ghetto testing”) is Zynga’s go-to method for testing ideas. They create a five-word summary of any new game they are considering, make a few ads, and put it up on various high-trafficked websites. Data is then collected to track how often users click on each of the fake-door “probes,” and only those games that attract a certain number of “conversions” on the fake-door are built.

    One of the many benefits of conducting a fake-door test is that it allows you to measure interest in a product before you begin to develop it. This makes it a great method for low-budget projects, because it can help you decide whether a project is worth investing in before you spend anything.

    However, for my project, I wasn’t just interested in measuring potential customer interest in a single product idea. I wanted to continue evaluating my original hypothesis on non-addictive social media as well as start investigating the second hypothesis on a social media platform that doesn’t record behavior data. Specifically, I wanted to see which theoretical social media platform is more attractive. So I created two fake-door landing pages—one for each hypothesis—and used Google Optimize to conduct an A/B test.

    Versions A (right) and B (left) of the Candor Network landing page

    Version A of the Candor Network landing page advertises the product I originally envisioned and described in my first problem hypothesis. It advertises a social network “built with mental health in mind.” Version B reflects the second problem hypothesis and my interview participants’ concerns around the “creepy” commodification of user data. It advertises a social network that “doesn’t track, use, solicit, or sell your data.” In all other respects, the landing pages are identical, and both will receive 50% of the traffic.

    Running an A/B test with little to no site traffic

    One of the major caveats when running an A/B test is that you need to have a certain number of people participate to achieve any kind of statistically significant result. This wouldn’t be a problem if we worked at a large company with an existing customer base, as it would be relatively straightforward to find ways to direct some of the existing traffic to the test. If you’re working on a new or low-trafficked site, however, conducting an A/B test can be tricky. Here are a few strategies I recommend:

    Tip 1: Aim for a big effect size

    Figuring out how much traffic you need to achieve statistical significance in a quantitative study is an inexact science. If we were conducting a high-stakes experiment at a more established business, we would conduct multiple rounds of pre-tests to estimate the effect size of the experiment. Then we would use a measure like Cohen’s d to express that effect size and run a power analysis to estimate the number of people we need to participate in the actual test. This approach is rigorous and helps avoid sample pollution or sampling bias, but it requires a lot of resources upfront (like time, money, and lots of potential participants) that we may not have access to.

    In general, however, you can use this rule of thumb: the bigger the difference between the variations, the fewer participants you need to see a significant result. In other words, if your A and B are very different from each other, you will need fewer participants.
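
    To put rough numbers on that rule of thumb, Lehr’s formula (which assumes 80% power and a 5% significance level, so treat it as a ballpark rather than a guarantee) relates the standardized effect size d to the sample size needed per variation:

    n \approx \frac{16}{d^2}, \qquad d = \frac{\bar{x}_A - \bar{x}_B}{s_{\text{pooled}}}

    A big difference between variations (d around 0.8) needs roughly 25 participants per version, while a subtle one (d around 0.2) needs roughly 400.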

    Tip 2: Run the test for a longer amount of time

    When I worked at Weather Underground, we would always start an A/B test on a Sunday and end it a full week later on the following Sunday. That way we could be sure we captured both weekday and weekend users. Because Weather Underground is a high-trafficked site, this always resulted in having more than enough participants to see a statistically significant result.

    If you’re working on a new or low-trafficked site, however, you’ll need to run your test for longer than a week to achieve the number of test participants required. I recommend budgeting enough time so that your study can run a full six weeks. Six weeks will provide enough time to not only capture results from all your usual website traffic, but also any newcomers you can recruit through other means.

    Tip 3: Beg and borrow traffic from someone else

    I’ve got a pretty low number of followers on social media, so if I tweet or post about Candor Network, only a few people will see it. However, I know a few people and organizations that have a huge number of followers. For example, @alistapart has roughly 148k followers on Twitter, and A List Apart’s publisher, Jeffrey Zeldman (@zeldman), has 358k followers. I have asked them both to share the link for Candor Network with their followers.

    A helpful tweet from @zeldman promoting the Candor Network experiment

    Of course, this method of advertising doesn’t cost any money, but it does cost social capital. I’m sure A List Apart and Mr. Zeldman wouldn’t appreciate it if I asked them to tweet things on my behalf on a regular basis. I recommend you use this method sparingly.

    Tip 4: Beware! There is always a risk of no results.

    Before you create an A/B test for your new product idea, there is one major risk you need to assess: there is a chance that your experiment won’t produce any statistically significant results at all. Even if you use all of the tips I’ve outlined above and manage to get a large number of participants in your test, there is a chance that you won’t be able to “declare a winner.” This isn’t just a risk for companies that have low traffic, it is an inherent risk when running any kind of quantitative study. Sometimes there simply isn’t a clear effect on participant behavior.

    Tune in next time for the last installment

    In the third and final installment of the “Discovery on a Budget” series, I’ll describe how I designed the incredibly short survey on the Candor Network landing page and discuss the results of my fake-door A/B test. I will also make another revision to my problem hypothesis and will discuss how to know when you’re ready to leave the discovery process (at least for now) and embark on the next phase of design: ideating over possible solutions.

  7. My Accessibility Journey: What I’ve Learned So Far

    Last year I gave a talk about CSS and accessibility at the stahlstadt.js meetup in Linz, Austria. Afterward, an attendee asked why I was interested in accessibility: Did I or someone in my life have a disability?

    I’m used to answering this question—to which the answer is no—because I get it all the time. A lot of people seem to assume that a personal connection is the only reason someone would care about accessibility.

    This is a problem. For the web to be truly accessible, everyone who makes websites needs to care about accessibility. We tend to use our own abilities as a baseline when we’re designing and building websites. Instead, we need to keep in mind our diverse users and their diverse abilities to make sure we’re creating inclusive products that aren’t just designed for a specific range of people.

    Another reason we all should think about accessibility is that it makes us better at our jobs. In 2016 I took part in 10k Apart, a competition held by Microsoft and An Event Apart. The objective was to build a compelling web experience that worked without JavaScript and could be delivered in 10 kB. On top of that, the site had to be accessible. At the time, I knew about some accessibility basics like using semantic HTML, providing descriptions for images, and hiding content visually. But there was still a lot to learn.

    As I dug deeper, I realized that there was far more to accessibility than I had ever imagined, and that making accessible sites basically means doing a great job as a developer (or as a designer, project manager, or writer).

    Accessibility is exciting

    Web accessibility is not about a certain technology. It’s not about writing the most sophisticated code or finding the most clever solution to a problem; it’s about users and whether they’re able to use our products.

    The focus on users is the main reason why I’m specializing in accessibility rather than solely in animation, performance, JavaScript frameworks, or WebVR. Focusing on users means I have to keep up with pretty much every web discipline, because users will load a page, deal with markup in some way, use a design, read text, control a JavaScript component, see animation, walk through a process, and navigate. What all those things have in common is that they’re performed by someone in front of a device. What makes them exciting is that we don’t know which device it will be, or which operating system or browser. We also don’t know how our app or site will be used, who will use it, how fast their internet connection will be, or how powerful their device will be.

    Making accessible sites forces you to engage with all of these variables—and pushes you, in the process, to do a great job as a developer. For me, making accessible sites means making fast, resilient sites with great UX that are fun and easy to use even in conditions that aren’t ideal.

    I know, that sounds daunting. The good news, though, is that I’ve spent the last year focusing on some of those things, and I’ve learned several important lessons that I’m happy to share.

    1. Accessibility is a broad concept

    Many people, like me pre-2016, think making your site accessible is synonymous with making it accessible to people who use screen readers. That’s certainly hugely important, but it’s only one part of the puzzle. Accessibility means access for everyone:

    • If your site takes ten seconds to load on a mobile connection, it’s not accessible.
    • If your site is only optimized for one browser, it’s not accessible.
    • If the content on your site is difficult to understand, your site isn’t accessible.

    It doesn’t matter who’s using your website or when, where, and how they’re doing it. What matters is that they’re able to do it.

    The belief that you have to learn new software or maybe even hardware to get started with accessibility is a barrier for many developers. At some point you will have to learn how to use a screen reader if you really want to get everything right, but there’s a lot more to do before that. We can make a lot of improvements that help everyone, including people with visual impairments, by simply following best practices.

    2. There are permanent, temporary, and situational impairments

    Who benefits from a keyboard-accessible site? Only a small percentage of users, some might argue. Aaron Gustafson pointed me to the Microsoft design toolkit, which helped me broaden my perspective. People with permanent impairments are not the only ones who benefit from accessibility. There are also people with temporary and situational impairments who’d be happy to have an alternative way of navigating. For example, someone with a broken arm, someone who recently got their forearm tattooed, or a parent who’s holding their kid in one arm while having to check something online. When you watch a developer operate their editor, it sometimes feels like they don’t even know they have a mouse. Why not give users the opportunity to use your website in a similar way?

    As you think about the range of people who could benefit from accessibility improvements, the group of beneficiaries tends to grow much bigger. As Derek Featherstone has said, “When something works for everyone, it works better for everyone.”

    3. The first step is to make accessibility a requirement

    I’ve been asked many times whether it’s worth the effort to fix accessibility, how much it costs, and how to convince bosses and colleagues. My answer to those questions is that you can improve things significantly without even having to use new tools, spend extra money, or ask anyone’s permission.

    The first step is to make accessibility a requirement—if not on paper, then at least in your head. For example, if you’re looking for a slider component, pick one that’s accessible. If you’re working on a design, make sure color contrasts are high enough. If you’re writing copy, use language that is easy to understand.

    We ask ourselves many questions when we make design and development decisions: Is the code clean? Does the site look nice? Is the UX great? Is it fast enough? Is it well-documented?

    As a first step, add one more question to your list: Is it accessible?

    4. Making accessible sites is a team sport

    Another reason why making websites accessible sounds scary to some developers is that there is a belief that we’re the only ones responsible for getting it right.

    In fact, as Dennis Lembree reminds us, “Nearly everyone in the organization is responsible for accessibility at some level.”

    It’s a developer’s job to create an accessible site from a coding perspective, but there are many things that have to be taken care of both before and after that. Designs must be intuitive, interactions clear and helpful, copy understandable and readable. Relevant personas and use cases have to be defined, and tests must be carried out accordingly. Most importantly, leadership and teams have to see accessibility as a core principle and requirement, which brings me to the next point: communication.

    5. Communication is key

    After talking to a variety of people at meetups and conferences, I think one of the reasons accessibility often doesn’t get the place it deserves is that not everyone knows what it means. Many times you don’t even have to convince your team, but rather just explain what accessibility is. If you want to get people on board, it matters how you approach them.

    The first step here is to listen. Talk to your colleagues and ask why they make certain design, development, or management decisions. Try to find out if they don’t approach things in an accessible way because they don’t want to, they’re not allowed to, or they just never thought of it. You’ll have better results if they don’t feel bad, so don’t try to guilt anyone into anything. Just listen. As soon as you know why they do things the way they do, you’ll know how to address your concerns.

    Highlight the benefits beyond accessibility

    You can talk about accessibility without mentioning it. For example, talk about typography and ideal character counts per line and how beautiful text is with the perfect combination of font size and line height. Demonstrate how better performance impacts conversion rates and how focusing on accessibility can promote out-of-the-box thinking that improves usability in general.
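
    That typography conversation can end in something as small as this CSS sketch (the numbers are common starting points, not hard rules):

    article p {
      font-size: 1rem;
      line-height: 1.5;
      max-width: 70ch; /* keeps lines at a comfortable length of roughly 60 to 75 characters */
    }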

    Challenge your colleagues

    Some people like challenges. At a meetup, a designer who specializes in accessibility once said that one of the main reasons she loves designing with constraints in mind is that it demands a lot more of her than going the easy way. Ask your colleagues, Can we hit a speed index below 1000? Do you think you can design that component in such a way that it’s keyboard-accessible? My Nokia 3310 has a browser—wouldn’t it be cool if we could make our next website work on that thing as well?

    Help people empathize

    In his talk “Every Day Website Accessibility,” Scott O’Hara points out that it can be hard for someone to empathize if they are unaware of what they should be empathizing with. Sometimes people just don’t know that certain implementations might be problematic for others. You can help them by explaining how people who are blind or who can’t use a mouse use the web. Even better, show videos of how people navigate the web without a mouse. Empathy prompts are also a great way of illustrating different circumstances under which people are surfing the web.

    6. Talk about accessibility before a project kicks off

    It’s of course a good thing if you’re fixing accessibility issues on a site that’s already in production, but that has its limitations. At some point, changes may be so complicated and costly that someone will argue that it’s not worth the effort. If your whole team cares about accessibility from the very beginning, before a box is drawn or a line of code is written, it’s much easier, effective, and cost-efficient to make an accessible product.

    7. A solid knowledge of HTML solves a lot of problems

    It’s impressive to see how JavaScript and the way we use it has changed in recent years. It has become incredibly powerful and more important than ever for web development. At the same time, it seems HTML has become less important. There is an ongoing discussion about CSS in JavaScript and whether it’s more efficient and cleaner than normal CSS from a development perspective. What we should talk about instead is the excessive use of <div> and <span> elements at the expense of other elements. It makes a huge difference whether we use a link or a <div> with an onclick handler. There’s also a difference between links and buttons when it comes to accessibility. Form items need <label> elements, and a sound document outline is essential. Those are just a few examples of absolute basics that some of us forgot or never learned. Semantic HTML is one of the cornerstones of accessible web development. Even if we write everything in JavaScript, HTML is what is finally rendered in the user’s browser.

    (Re)learning HTML and using it consciously prevents and fixes many accessibility issues.
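
    To make that concrete, here’s a small, hypothetical before-and-after (the save() handler and field names are placeholders). The first version needs extra ARIA and scripting before keyboard and screen reader users can operate it; the second gets focus handling, semantics, and label association for free:

        <!-- Harder to make accessible: no keyboard focus, no role, no label association -->
        <div class="button" onclick="save()">Save</div>
        <span>Email</span> <input type="email">

        <!-- Accessible by default: native semantics do the work -->
        <button type="button" onclick="save()">Save</button>
        <label for="email">Email</label>
        <input id="email" type="email">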

    8. JavaScript is not the enemy, and sometimes JavaScript even improves accessibility

    I’m one of those people who believes that most websites should be accessible even when JavaScript fails to execute. That doesn’t mean that I hate JavaScript; of course not—it pays part of my rent. JavaScript is not the enemy, but it’s important that we use it carefully because it’s very easy to change the user experience for the worse otherwise.

    Not that long ago, I didn’t know that JavaScript could improve accessibility. We can leverage its power to make our websites more accessible for keyboard users. We can do things like trapping focus in a modal window, adding key controls to custom components, or showing and hiding content in an accessible manner.
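
    As one small sketch of that last idea (showing and hiding content in an accessible manner), a disclosure button can keep its state in sync for assistive technology; the markup and IDs here are made up for illustration:

        <button type="button" aria-expanded="false" aria-controls="details">
          Show details
        </button>
        <div id="details" hidden>Extra details go here.</div>

        <script>
          // Toggle the panel and keep aria-expanded in sync for screen readers.
          const toggle = document.querySelector('[aria-controls="details"]');
          const panel = document.getElementById('details');

          toggle.addEventListener('click', () => {
            const expanded = toggle.getAttribute('aria-expanded') === 'true';
            toggle.setAttribute('aria-expanded', String(!expanded));
            panel.hidden = expanded;
          });
        </script>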

    There are many impressive and creative CSS-only implementations of common widgets, but they’re often less accessible and provide worse UX than their JavaScript equivalents. In a post about building a fully accessible help tooltip, Sara Soueidan explains why JavaScript is important for accessibility. “Every single no-JS solution came with a very bad downside that negatively affected the user experience,” she writes.

    9. It’s a good time to know vanilla CSS and JavaScript

    For a long time, we’ve been reliant on libraries, frameworks, grid systems, and polyfills because we demanded more of browsers than they were able to give us. Naturally, we got used to many of those tools, but from time to time we should take a step back and question if we really still need them. There were many problems that Bootstrap and jQuery solved for us, but do those problems still exist, or is it just easier for us to write $() instead of document.querySelector()?

    jQuery is still relevant, but browser inconsistencies aren’t as bad as they used to be. CSS Grid Layout is supported in all major desktop browsers, and thanks to progressive enhancement we can still provide experiences for legacy browsers. We can do feature detection natively with feature queries, testing has gotten much easier, and caniuse and MDN help us understand what browsers are capable of. Many people use frameworks and libraries without knowing what problems those tools are solving. To decide whether it makes sense to add the extra weight to your site, you need a solid understanding of HTML, CSS, and JavaScript. Instead of increasing the page weight for older browsers, it’s often better to progressively enhance an experience. Progressively enhancing our websites—and reducing the number of requests, kilobytes, and dependencies—makes them faster and more robust, and thus more accessible.
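
    As a rough sketch of what that can look like in practice, a layout can be written as a simple baseline for every browser and then opt into Grid only where it’s supported, with no framework or polyfill involved. The class names below are illustrative:

        /* Baseline that every browser understands */
        .cards > * { margin-bottom: 1rem; }

        /* Enhancement applied only where Grid is supported */
        @supports (display: grid) {
          .cards {
            display: grid;
            grid-template-columns: repeat(auto-fill, minmax(15rem, 1fr));
            grid-gap: 1rem;
          }
          .cards > * { margin-bottom: 0; }
        }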

    10. Keep learning about accessibility and share your knowledge

    I’m really thankful that I’ve learned all this in the past few months. For a long time, I was a very passive part of the web community. Ever since I started to participate online, attend and organize events, and write about web-related topics, especially accessibility, things have changed significantly for me and I’ve grown both personally and professionally.

    Understanding the importance of access and inclusion, viewing things from different perspectives, and challenging my decisions has helped me become a better developer.

    Knowing how things should be done is great, but it’s just the first step. Truly caring, implementing, and most importantly sharing your knowledge is what makes an impact.

    Share your knowledge

    Don’t be afraid to share what you’ve learned. Write articles, talk at meetups, and give in-house workshops. The distinct culture of sharing knowledge is one of the most important and beautiful things about our industry.

    Go to conferences and meetups

    Attending conferences and meetups is very valuable because you get to meet many different people from whom you can learn. There are several dedicated accessibility events and many conferences that feature at least one accessibility talk.

    Organize meetups

    Dennis Deacon describes his decision to start and run an accessibility meetup as a life-changing experience. Meetups are very important and valuable for the community, but organizing a meetup doesn’t just bring value to attendees and speakers. As an organizer, you get to meet all these people and learn from them. By listening and by understanding how they see and approach things, and what’s important to them, you are able to broaden your horizons. You grow as a person, but you also get to meet other professionals, agencies, and companies from which you may also benefit professionally.

    Invite experts to your meetup or conference

    If you’re a meetup or conference organizer, you can have a massive impact on the role accessibility plays in our community. Invite accessibility experts to your event and give the topic a forum for discussion.

    Follow accessibility experts on Twitter

    Follow experts on Twitter to learn what they’re working on, what bothers them, and what they think about recent developments in inclusive web development and design in general. I’ve learned a lot from the following people: Aaron Gustafson, Adrian Roselli, Carie Fisher, Deborah Edwards-Onoro, Heydon Pickering, Hugo Giraudel, Jo Spelbrink, Karl Groves, Léonie Watson, Marco Zehe, Marcy Sutton, Rob Dodson, Scott O’Hara, Scott Vinkle, and Steve Faulkner.

    11. Simply get started

    You don’t have to go all-in from the very beginning. If you improve just one thing, you’re already doing a great job in bringing us closer to a better web. Just get started and keep working.

    There are a lot of resources out there, and trying to find out how and where to start can get quite overwhelming. I’ve gathered a few sites and books that helped me; hopefully they will help you as well. The following lists are by no means exhaustive.

    Video series

    • This free Udacity course is a great way to get started.
    • Rob Dodson covers many different accessibility topics in his video series A11ycasts (a11y is short for accessibility—the number eleven stands for the number of letters omitted).




    Accessible JavaScript components

    Resources and further reading

  8. Design Like a Teacher

    In 2014, the clinic where I served as head of communications and digital strategy switched to a new online patient portal, a change that was mandated by the electronic health record (EHR) system we used. The company that provides the EHR system held several meetings for the COO and me to learn the new tool and provided materials to give to patients to help them register for and use the new portal.

    As the sole person at my clinic working on any aspect of user experience, I knew the importance of knowing the audience when implementing an initiative like the patient portal. So I was skeptical of the materials provided to the patients, which assumed a lot of knowledge on their part and focused on the cool features of the portal rather than on why patients would actually want to use it.

    By the time the phone rang for the fifth time on the first day of the transition, I knew my suspicion that the patient portal had gone wrong in the user experience stage was warranted. Patients were getting stuck during every phase of the process—from wondering why they should use the portal to registering for and using it. My response was to ask patients what they had tried so far and where they were getting stuck. Then I would try to explain why they might want to use the portal.

    Sometimes I lost patients completely; they just refused to sign up. They had a bad user experience trying to understand how a portal fit into their mental model of receiving healthcare, and I had a terrible user experience trying to learn what I needed to do to guide patients through the migration. To borrow a phrase from Dave Platt, the lead instructor of the UX Engineering course I currently help teach, the “hassle budget” was extremely high.

    I realized three important things in leading this migration:

    • When people get stuck, their frustration prevents them from providing information up front. They start off with “I’m stuck” and don’t offer additional feedback until you pull it out of them. (If you felt a tremor just then, that was every IT support desk employee in the universe nodding emphatically.)
    • In trying to get them unstuck, I had to employ skills that drew on my work outside of UX. There was no choice; I had a mandate to reach an adoption rate of at least 60%.
    • The overarching goal was really to help these patients learn to do something different from what they were used to, whether that was dealing with a new interface or dealing with an interface for the first time at all.

    Considering these three realizations led me to a single, urgent conclusion that has transformed my UX practice: user experience is really a way of defining and planning what we want a user to learn, which means we also need to think about our own work as a way of teaching.

    It follows, then, that user experience toolkits need to include developing a teaching mindset. But what does that mean? And what’s the benefit? Let’s use this patient portal story and the three realizations above as a framework for considering this.

    Helping users get unstuck

    Research on teaching and learning has produced two concepts that can help explain why people struggle to get unstuck and what to do about it: 1) cognitive load and 2) the zone of proximal development.

    Much like you work your muscles through weight resistance to develop physical strength, you work your brain through cognitive load to develop mental strength—to learn. There are three kinds of cognitive load: intrinsic, germane, and extraneous.


    Each type of cognitive load is responsible for something different:

    • Intrinsic: the actual learning of the material.
    • Germane: building that new information into a more permanent memory store.
    • Extraneous: everything else about the experience of encountering the material (e.g., who’s teaching it, how they teach, your comfort level with the material, what the room is like, the temperature, the season, your physical health, energy levels, and so on).

    In the case of the patient portal, intrinsic cognitive load was responsible for a user actually signing up for the portal and using it for the first time. Germane cognitive load was devoted to a user making sense of this experience and storing it so that it can be repeated in the future with increasing fluency. My job in salvaging the user experience was to figure out what was extraneous in the process of using the portal so that I could help users focus on what they needed to know to use it effectively.

    Additionally, we all have a threshold for comfortably exploring and figuring something out with minimal guidance. This threshold moves around depending on the task and is called your zone of proximal development. It lies between the space where you can easily do a task on your own and the space where you cannot do the task at all without help. Effective learning happens in this zone, by offering the right support at the right time, in the right amount.

    When you’re confronted with an extremely frustrated person because of a user experience you have helped create (or ideally, before that scenario happens), ask yourself a couple questions:

    • Did I put too much burden on their learning experience at the expense of the material?
    • Did I do my best to support their movement from something completely familiar to something new and/or unknown?

    Think about your creation in terms of the immediate task and everything else. Consider (or reconsider) all the feelings, thoughts, contexts, and everything else that could make up the space around that task. What proportion of effort goes to the task versus to everything in the space around it? After that, think about how easy or difficult it is to achieve that task. Are you offering the right support, at the right time, in the right amount? What new questions might you need to ask to figure that out?

    Making use of “unrelated” skill sets

    When you were hired, you responded to a job description that included specific bullet points detailing the skills you should have and duties you would be responsible for fulfilling. You highlighted everything about your work that showed you fit that description. You prepared your portfolio, and demonstrated awareness of the recent writings from UX professionals in the field to show you can hold a conversation about how to “do” this work. You looked incredibly knowledgeable.

    In research on teaching and learning, we also explore the idea of how we know in addition to what we know. Some people believe that knowledge is universally true and is out there to be uncovered and explored. Others believe that knowledge is subjective because it is uncovered and explored through the filter of the individual’s experiences, reflections, and general perception of reality. This is called constructivism. If we accept constructivism, it means that we open ourselves to learning from each other in how we conceptualize and practice UX based on what else we know besides what job descriptions ask. How do we methodically figure out the what else? By asking better questions.

    Part of teaching and learning in a constructivist framework is understanding that the name of the game is facilitation, not lecturing (you might have heard the cute phrase, “Guide on the side, not sage on the stage”). Sharing knowledge is actually about asking questions to evoke reflection and then conversation to connect the dots. Making time to do this can help you recall and highlight the “unrelated” skills that you may have buried that would actually serve you well in your UX work. For example:

    • That was an incredibly difficult stakeholder meeting. What feels like the most surprising thing about how it turned out?
    • It seemed like we got nothing done in that wireframing session. Everyone wanted to see their own stuff included instead of keeping their eye on who we’re solving for. What is another time in my life when I had this kind of situation? How did it turn out?

    All of this is in service to helping ourselves unlock more productive communication with our clients. In the patient portal case, I relied heavily on my master’s degree in international relations, which taught me how to ask questions that methodically untangle a problem into more manageable chunks, and how to listen for what a speaker is really saying between the lines. When you don’t speak the same language, your emotional intelligence and empathy begin to heighten as you try to relate on a broader human level. This helped me manage patient expectations and navigate patients to the outcome I needed, even though the degree was meant to prepare me to be a diplomat.

    As you consider how you’re feeling in your current role, prepare for a performance review, or plot your next step, think about your whole body of experience. What are the themes in your work that you can recall dealing with in other parts of your life (at any point)? What skills are you relying on that you hadn’t given much thought to until now, but that heavily influence your style of practice or help make you effective with your intended audiences?

    Unlearn first, then learn

    When Apple wanted to win over consumers in their bid to make computers a household item, they had to help them embrace what a machine with a screen and some keys could accomplish. In other words, to convince consumers it was worth it to learn how to use a computer, they first had to help consumers unlearn their reliance on a desk, paper, file folders, and pencils.

    Apple integrated this unlearning and learning into one seamless experience by creating a graphical user interface that used digital representations of objects people were already familiar with—desks, paper, file folders, and pencils. But the solution may not always be that literal. There are two concepts that can help you support your intended audiences as they transition from one system or experience to another.

    The first concept, called a growth mindset, relates to the belief that people are capable of constructing and developing intelligence in any given area, as opposed to a fixed mindset, which holds that each of us is born with a finite capacity for some level of skill. It’s easy to tell if someone has a fixed mindset if they say things like, “I’m too old to understand new technology,” or “This is too complicated. I’ll never get it.”

    The second is self-determination theory, which divides motivation into two types: intrinsic and extrinsic. Self-determination theory states that in learning, your desire to persevere is not just about having motivation at all, but about what kind of motivation you have. Intrinsic motivation comes from within yourself; extrinsic comes from the world around you. Thanks to this research and subsequent studies, we know that intrinsic motivation is vital to meaningful learning and skill development (think about the last time you did an HR training and liked it).

    This appears in our professional practice as the ever-favored endeavor to generate “buy-in.” What we’re really saying is, “How do I get someone to feel motivated on their own to be part of this or do this thing, instead of me having to reward them or somehow provide an incentive?” Many resources on getting buy-in are about the end goal of getting someone to do what you want. While this is important, conducting this as a teaching process allows you to step back and make space for the other person’s active contribution to a dialogue where you also learn something, even if you don’t end up getting buy-in:

    • “I’m curious about your feeling that this is complicated. Walk me through what you’ve done so far and tell me more about that feeling.”
    • “What’s the most important part of this for you? What excites you or resonates with you?”

    For the majority of patients I worked with, transitioning to a new portal was almost fully an extrinsically motivated endeavor—if they didn’t switch, they didn’t get to access their health information, such as lab results, which is vital for people with chronic diseases. They did it because they had to. And many patients ran into a fixed-mindset wall as they confronted bad design: “I can’t understand this. I’m not very good at the computer. I don’t understand technology. Why do I have to get my information this way?” I especially spent a lot of time on exploring why some users felt the portal was complicated (i.e., the first bullet point above), because not only did I want them to switch to it, but I wanted them to switch and then keep using the portal with increasing fluency. First I had to help them unlearn some beliefs about their capabilities and what it means to access information online, and then I could help them successfully set up and use this tool.

    While you’re researching the experience you’re going to create around a product, service, or program, ask questions not just about the thing itself but about the circumstances or context. What are the habits or actions someone might need to stop doing, or unlearn, before they adopt what you’re creating? What are the possible motivators for learning to do something different? Among those, what is the ratio of extrinsic to intrinsic? Do you inadvertently inflame a fixed mindset? How do you instead encourage a growth mindset?

    Where we go from here

    Ultimately, I hit the target: about 70% of patients who had been using the old portal migrated to the new tool. It took some time for me to realize I needed to create a process rather than react to individual situations, but gradually things started to smooth out as I knew what bumps in the road to expect. I also walked back even further and adjusted our communications and website content to speak to the fears and concerns I now knew patients experienced. Eventually, we finished migrating existing patients, and the majority of patients signing onto this portal for the first time were new to the clinic overall (so they would not have used the previous portal). To my knowledge the interface design never improved in any profound way, but we certainly lodged a lot of technical tickets to contribute to a push for feature changes and improvements.

    Although this piece contains a lot of information, it essentially boils down to asking questions as you always do, but from a different angle to uncover more blind spots. The benefit is a more thorough understanding of who you intend to serve and a more empathetic process for acquiring that understanding. Each section is specifically written to give you a direct idea of a question or prompt you can use the next time you have an opportunity in your own work. I would love to hear how deepening your practice in this way works for you—please comment or feel free to find me on Twitter!

  9. CSS: The Definitive Guide, 4th Edition

    A note from the editors: We’re pleased to share an excerpt from Chapter 19 (“Filters, Blending, Clipping, and Masking”) of CSS: The Definitive Guide, 4th Edition by Eric Meyer and Estelle Weyl, available now from O’Reilly.

    In addition to filtering, CSS offers the ability to determine how elements are composited together. Take, for example, two elements that partially overlap due to positioning. We’re used to the one in front obscuring the one behind. This is sometimes called simple alpha compositing, in that you can see whatever is behind the element as long as some (or all) of it has alpha channel values less than 1. Think of, for example, how you can see the background through an element with opacity: 0.5, or in the areas of a PNG or GIF89a that are set to be transparent.

    But if you’re familiar with image-editing programs like Photoshop or GIMP, you know that image layers which overlap can be blended together in a variety of ways. CSS has gained the same ability. There are two blending strategies in CSS (at least as of late 2017): blending entire elements with whatever is behind them, and blending together the background layers of a single element.

    Blending Elements

    In situations where elements overlap, it’s possible to change how they blend together with the property mix-blend-mode.


    Values: normal | multiply | screen | overlay | darken | lighten | color-dodge | color-burn | hard-light | soft-light | difference | exclusion | hue | saturation | color | luminosity
    Initial value: normal
    Applies to: All elements
    Computed value: As declared
    Inherited: No
    Animatable: No

    The way the CSS specification puts this is: “defines the formula that must be used to mix the colors with the backdrop.” That is to say, the element is blended with whatever is behind it (the “backdrop”), whether that’s pieces of another element, or just the background of its parent element.

    The default, normal, means that the element’s pixels are shown as is, without any mixing with the backdrop, except where the alpha channel is less than 1. This is the “simple alpha compositing” mentioned previously. It’s what we’re all used to, which is why it’s the default value. A few examples are shown in Figure 19-6.

    Graphic showing three different alpha compositing blending modes in CSS
    Figure 19-6. Simple alpha channel blending

    For the rest of the mix-blend-mode keywords, I’ve grouped them into a few categories. Let’s also nail down a few definitions:

    • The foreground is the element that has mix-blend-mode applied to it.
    • The backdrop is whatever is behind that element. This can be other elements, the background of the parent element, and so on.
    • A pixel component is one of the color components of a given pixel: R, G, or B.

    If it helps, think of the foreground and backdrop as images that are layered atop one another in an image-editing program. With mix-blend-mode, you can change the blend mode applied to the top image (the foreground).
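
    A minimal usage sketch might look like the following, where a heading blends with its parent’s background image (the selectors and the image file are placeholders):

        /* The parent's background image acts as the backdrop. */
        .hero {
          background-image: url(texture.png);
        }

        /* The heading's pixels are multiplied with that backdrop. */
        .hero h1 {
          mix-blend-mode: multiply;
        }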

    Darken, Lighten, Difference, and Exclusion

    These blend modes might be called simple-math modes—they achieve their effect by directly comparing values in some way, or using simple addition and subtraction to modify pixels:

    darken: Each pixel in the foreground is compared with the corresponding pixel in the backdrop, and for each of the R, G, and B values (the pixel components), the smaller of the two is kept. Thus, if the foreground pixel has a value corresponding to rgb(91,164,22) and the backdrop pixel is rgb(102,104,255), the resulting pixel will be rgb(91,104,22).

    lighten: This blend is the inverse of darken: when comparing the R, G, and B components of a foreground pixel and its corresponding backdrop pixel, the larger of the two values is kept. Thus, if the foreground pixel has a value corresponding to rgb(91,164,22) and the backdrop pixel is rgb(102,104,255), the resulting pixel will be rgb(102,164,255).

    difference: The R, G, and B components of each pixel in the foreground are compared to the corresponding pixel in the backdrop, and the absolute value of subtracting one from the other is the final result. Thus, if the foreground pixel has a value corresponding to rgb(91,164,22) and the backdrop pixel is rgb(102,104,255), the resulting pixel will be rgb(11,60,233). If one of the pixels is white, the resulting pixel will be the inverse of the non-white pixel. If one of the pixels is black, the result will be exactly the same as the non-black pixel.

    exclusion: This blend is a milder version of difference. Rather than being | back - fore |, the formula is back + fore - (2 × back × fore), where back and fore are values in the range from 0-1. For example, an exclusion calculation of an orange (rgb(100%,50%,0%)) and medium gray (rgb(50%,50%,50%)) will yield rgb(50%,50%,50%). For the red component, the math is 1 + 0.5 - (2 × 1 × 0.5), which reduces to 0.5, corresponding to 50%. For the green component, the math is 0.5 + 0.5 - (2 × 0.5 × 0.5), and for the blue component it is 0 + 0.5 - (2 × 0 × 0.5); both again reduce to 0.5. Compare this to difference, where the result would be rgb(50%,0%,50%), since each component is the absolute value of subtracting one from the other.

    This last definition highlights the fact that for all blend modes, the actual values being operated on are in the range 0-1. The previous examples showing values like rgb(11,60,233) have simply been converted back out of that 0-1 range. In other words, given the example of applying the difference blend mode to rgb(91,164,22) and rgb(102,104,255), the actual operation is as follows:

    • rgb(91,164,22) is R = 91 ÷ 255 = 0.357; G = 164 ÷ 255 = 0.643; B = 22 ÷ 255 = 0.086. Similarly, rgb(102,104,255) corresponds to R = 0.4; G = 0.408; B = 1.
    • Each component is subtracted from the corresponding component, and the absolute value taken. Thus, R = | 0.357 - 0.4 | = 0.043; G = | 0.643 - 0.408 | = 0.235; B = | 1 - 0.086 | = 0.914. This could be expressed as rgb(4.3%,23.5%,91.4%), or (by multiplying each component by 255) as rgb(11,60,233).

    From all this, you can perhaps understand why the full formulas are not written out for every blend mode we cover. If you’re interested in the fine details, each blend mode’s formula is provided in the “Compositing and Blending Level 1” specification.
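
    If it helps to see the arithmetic spelled out, here is a rough JavaScript sketch of these four modes operating on normalized (0-1) pixel components, following the descriptions above; it is purely illustrative, not how a browser implements blending:

        // Each function takes one foreground and one backdrop component (0–1).
        const darken     = (fore, back) => Math.min(fore, back);
        const lighten    = (fore, back) => Math.max(fore, back);
        const difference = (fore, back) => Math.abs(back - fore);
        const exclusion  = (fore, back) => back + fore - 2 * back * fore;

        // The difference example from the text: rgb(91,164,22) against rgb(102,104,255).
        const fore = [91, 164, 22].map((c) => c / 255);
        const back = [102, 104, 255].map((c) => c / 255);
        const result = fore.map((f, i) => Math.round(difference(f, back[i]) * 255));
        // result is [11, 60, 233]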

    Examples of the blend modes in this section are depicted in Figure 19-7.

    Graphic showing various blend modes in CSS
    Figure 19-7. Darken, lighten, difference, and exclusion blending

    Multiply, Screen, and Overlay

    These blend modes might be called the multiplication modes—they achieve their effect by multiplying values together:

    multiply: Each pixel component in the foreground is multiplied by the corresponding pixel component in the backdrop. This yields a darker version of the foreground, modified by what is underneath. This blend mode is symmetric, in that the result will be exactly the same even if you were to swap the foreground with the backdrop.

    screen: Each pixel component in the foreground is inverted (see invert in the earlier section “Color Filtering” on page 948), multiplied by the inverse of the corresponding pixel component in the backdrop, and the result inverted again. This yields a lighter version of the foreground, modified by what is underneath. Like multiply, screen is symmetric.

    overlay: This blend is a combination of multiply and screen. For backdrop pixel components darker than 0.5 (50%), the multiply operation is carried out; for backdrop pixel components whose values are above 0.5, screen is used. This makes the dark areas darker, and the light areas lighter. This blend mode is not symmetric, because swapping the foreground for the backdrop would mean a different pattern of light and dark, and thus a different pattern of multiplying versus screening.

    Examples of these blend modes are depicted in Figure 19-8.

    Graphic showing various blend modes in CSS
    Figure 19-8. Multiply, screen, and overlay blending
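
    In the same per-component terms as the earlier sketch, these three modes look roughly like this (the overlay branch follows the formulas in the “Compositing and Blending Level 1” specification):

        const multiply = (fore, back) => fore * back;

        // screen: invert both components, multiply them, then invert the result.
        const screen = (fore, back) => 1 - (1 - fore) * (1 - back);

        // overlay: multiply where the backdrop is dark, screen where it is light.
        const overlay = (fore, back) =>
          back <= 0.5 ? multiply(fore, 2 * back) : screen(fore, 2 * back - 1);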

    Hard and Soft Light

    These blend modes are covered here because the first is closely related to a previous blend mode, and the other is just a muted version of the first:

    hard-light: This blend is overlay with the layers swapped. Like overlay, it’s a combination of multiply and screen, but here the determining layer is the foreground. Thus, for foreground pixel components darker than 0.5 (50%), the multiply operation is carried out; for foreground pixel components lighter than 0.5, screen is used. This makes it appear somewhat as if the foreground is being projected onto the backdrop with a projector that employs a harsh light.

    soft-light: This blend is a softer version of hard-light. That is to say, it uses the same operation, but is muted in its effects. The intended appearance is as if the foreground is being projected onto the backdrop with a projector that employs a diffuse light.

    Examples of these blend modes are depicted in Figure 19-9.

    Graphic showing various blend modes in CSS
    Figure 19-9. Hard- and soft-light blending

    Color Dodge and Burn

    Color dodging and burning are interesting modes, in that they’re meant to lighten or darken a picture with a minimum of change to the colors themselves. The terms come from old darkroom techniques performed on chemical film stock:

    color-dodge: Each pixel component in the foreground is inverted, and the corresponding backdrop pixel component is divided by that inverted foreground value. This yields a brightened backdrop unless the foreground value is 0, in which case the backdrop value is unchanged.

    color-burn: This blend is a reverse of color-dodge: each pixel component in the backdrop is inverted, the inverted backdrop value is divided by the unchanged value of the corresponding foreground pixel component, and the result is then inverted. This yields a result where the darker the backdrop pixel, the more its color will burn through the foreground pixel.

    Examples of these blend modes are depicted in Figure 19-10.

    Graphic showing various blend modes in CSS
    Figure 19-10. Color dodge and burn blending
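
    Continuing the per-component sketch from earlier, the specification’s formulas for these two modes (with the divide-by-zero edge cases handled explicitly) look roughly like this:

        // Per "Compositing and Blending Level 1"; components are in the 0–1 range.
        const colorDodge = (fore, back) => {
          if (back === 0) return 0;  // a black backdrop stays black
          if (fore === 1) return 1;  // a white foreground blows out to white
          return Math.min(1, back / (1 - fore));
        };

        const colorBurn = (fore, back) => {
          if (back === 1) return 1;  // a white backdrop stays white
          if (fore === 0) return 0;  // a black foreground burns to black
          return 1 - Math.min(1, (1 - back) / fore);
        };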

    Hue, Saturation, Luminosity, and Color

    The final four blend modes are different than those we’ve seen before, because they do not perform operations on the R/G/B pixel components. Instead, they perform operations to combine the hue, saturation, luminosity, and color of the foreground and backdrop in different ways:

    hue: For each pixel, combines the luminosity and saturation levels of the backdrop with the hue angle of the foreground.

    saturation: For each pixel, combines the hue angle and luminosity level of the backdrop with the saturation level of the foreground.

    color: For each pixel, combines the luminosity level of the backdrop with the hue angle and saturation level of the foreground.

    luminosity: For each pixel, combines the hue angle and saturation level of the backdrop with the luminosity level of the foreground.

    Examples of these blend modes are depicted in Figure 19-11.

    Graphic showing various blend modes in CSS
    Figure 19-11. Hue, saturation, luminosity, and color blending

    These blend modes can be a lot harder to grasp without busting out raw formulas, and even those can be confusing if you aren’t familiar with how things like saturation and luminosity levels are determined. If you don’t feel like you quite have a handle on how they work, the best thing is to practice with a bunch of different images and simple color patterns.

    Two things to note:

    • Remember that an element always blends with its backdrop. If there are other elements behind it, it will blend with them; if there’s a patterned background on the parent element, the blending will be done against that pattern.
    • Changing the opacity of a blended element will change the outcome, though not always in the way you might expect. For example, if an element with mix-blend-mode: difference is also given opacity: 0.8, then the difference calculations will be scaled by 80%. More precisely, a scaling factor of 0.8 will be applied to the color-value calculations. This can cause some operations to trend toward flat middle gray, and others to shift the color changes.
  10. The King vs. Pawn Game of UI Design

    If you want to improve your UI design skills, have you tried looking at chess? I know it sounds contrived, but hear me out. I’m going to take a concept from chess and use it to build a toolkit of UI design strategies. By the end, we’ll have covered color, typography, lighting and shadows, and more.

    But it all starts with rooks and pawns.

    I want you to think back to the first time you ever played chess (if you’ve never played chess, humor me for a second—and no biggie; you will still understand this article). If your experience was anything like mine, your friend set up the board like this:

    Standard chessboard in starting configuration

    And you got your explanation of all the pieces. This one’s a pawn and it moves like this, and this one is a rook and it moves like this, but the knight goes like this or this—still with me?—and the bishop moves diagonally, and the king can only do this, but the queen is your best piece, like a combo of the rook and the bishop. OK, want to play?

    This is probably the most common way of explaining chess, and it’s enough to make me hate board games forever. I don’t want to sit through an arbitrary lecture. I want to play.

    One particular chess player happens to agree with me. His name is Josh Waitzkin, and he’s actually pretty good. Not only at chess (where he’s a grandmaster), but also at Tai Chi Push Hands (he’s a world champion) and Brazilian Jiu Jitsu (he’s the first black belt under 5x world champion Marcelo Garcia). Now he trains financiers to go from the top 1% to the top .01% in their profession.

    Point is: this dude knows a lot about getting good at stuff.

    Now here’s the crazy part. When Josh teaches you chess, the board looks like this:

    Chessboard showing only three pieces
    King vs. King and Pawn


    Compared to what we saw above, this is stupidly simple.

    And, if you know how to play chess, it’s even more mind-blowing that someone would start teaching with this board. In the actual game of chess, you never see a board like this. Someone would have won long ago. This is the chess equivalent of a street fight where both guys break every bone in their body, dislocate both their arms, can hardly see out of their swollen eyes, yet continue to fight for another half-hour.

    What gives?

    Here’s Josh’s thinking: when you strip the game down to its core, everything you learn is a universal principle.

    That sounds pretty lofty, but I think it makes sense when you consider it. A fully loaded board gives a beginning chess player plenty of distractions, but everything you start learning in a king-pawn situation is fundamentally important to chess:

    • using two pieces to apply pressure together;
    • which spaces are “hot”;
    • and the difference between driving for a checkmate and a draw.

    Are you wondering if I’m ever going to start talking about design? Glad you asked.

    The simplest possible scenario

    What if, instead of trying to design an entire page with dozens of elements (nav, text, input controls, a logo, etc.), we consciously started by designing the simplest thing possible? What if we deliberately limited the playing field to one tiny thing and saw what we could learn? Let’s try.

    What is the simplest possible element? I vote that it’s a button.

    Basic blue button that says, 'Learn More'

    This is the most basic, default button I could muster. It’s Helvetica (default font) with a 16px font size (pretty default) on a plain, Sketch-default-blue rectangle. It’s 40px tall (nice, round number) and has 20px of horizontal padding on each side.

    So yeah, I’ve already made a bunch of design decisions, but can we agree I basically just used default values instead of making decisions for principled, design-related reasons?
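
    For reference, here is roughly that starting point in CSS. The values mirror the defaults described above, and the hex color is only a stand-in for Sketch’s default blue:

        .button {
          font-family: Helvetica, sans-serif;
          font-size: 16px;
          color: #fff;
          background-color: #1e7cf7; /* stand-in for the Sketch default blue */
          height: 40px;
          padding: 0 20px;
          border: none;
        }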

    Now let’s start playing with this button. What properties are modifiable here?

    • the font (and text styling)
    • the color
    • the border radius
    • the border
    • the shadows

    These are just the first things that come to my mind. There are even more, of course.


    Playing with the font is a pretty easy place to start.

    Button with a rounder font
    Blown up to show font detail.

    Now I’ve changed the font to Moon (available for free on Behance for personal use). It’s rounded and soft, unlike Helvetica, which felt a little more squared-off—or at least not as overtly friendly.

    The funny thing is: do you see how the perfectly square edges now look a tad awkward with the rounded font?

    Let’s round the corners a bit.

    Button with round font and rounded corner

    Bam. Nice. That’s a 3px border radius.

    But that’s kind of weird, isn’t it? We adjusted the border radius of a button because of the shape of the letterforms in our font. I wouldn’t want you thinking fonts are just loosey-goosey works of art that only work when you say the right incantations.

    No, fonts are shapes. Shapes have connotations. It’s not rocket science.

    Here’s another popular font, DIN.

    Examples of the Din font
    With its squared edges, DIN is a clean and solid workhorse font.

    Specifically, this is a version called DIN 2014 (available for cheap on Typekit). It’s the epitome of a squared-off-but-still-readable font. A bit harsh and no-nonsense, but in a bureaucratic way.

    It’s the official font of the German government, and it looks the part.

    So let’s test our working hypothesis with DIN.

    Button with sharp font and rounded corner

    How does DIN look with those rounded corners?

    Well, we need to compare it to square corners now, don’t we?

    Button with sharp font and sharp corners

    Ahhh, the squared-off corners are better here. It’s a much more consistent feel.

    Now look at our two buttons with their separate fonts. Which is more readable? I think Moon has a slight advantage here. DIN’s letters just look too cramped by comparison. Let’s add a bit of letter-spacing.

    Button with sharp font and letter spacing, with sharp corners

    When we add some letter-spacing, it’s far more relaxed.

    This is a key law of typography: always letter-space your uppercase text. Why? Because unless a font has no lowercase characters at all, it was designed for sentence-case reading, and characters in uppercase words will ALWAYS appear too cramped. (Moon is the special exception here—it only has uppercase characters, and notice how the letter-spacing is built into the font.)
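
    In CSS, that law is a one-liner; the exact tracking value below is just an illustrative starting point:

        .button--din {
          text-transform: uppercase;
          letter-spacing: 0.05em; /* relative to the font size, so it scales */
        }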

    We’ll review later, but so far we’ve noticed two things that apply not just to buttons, but to all elements:

    • Rounded fonts go better with rounded shapes; squared-off fonts with squared-off shapes.
    • Fonts designed for sentence case should be letter-spaced when used in words that are all uppercase.

    Let’s keep moving for now.


    Seeing the plain default Sketch blue is annoying me. It’s begging to be changed into something that matches the typefaces we’re using.

    How can a color match a font? Well, I’ll hand it to you. This one is a bit more loosey-goosey.

    For our Moon button, we want something a bit more friendly. To me, a staid blue says default, unstyled, trustworthy, takes-no-risks, design-by-committee. How do you inject some fun into it?

    Well, like all problems of modifying color, it helps to think in the HSB color system (hue, saturation, and brightness). When we boil color down to three intuitive numbers, we give ourselves levers to pull.

    For instance, let’s look at hue. We have two directions we can push hue: down to aqua or up to indigo. Which sounds more in line with Moon? To me, aqua does. A bit less staid, a bit more Caribbean sea. Let’s try it. We’ll move the hue to 180° or so.

    Same button with a lighter blue background

    Ah, Moon Button, now you’ve got a beach vibe going on. You’re a vibrant sea foam!

    This is a critical lesson about color. “Blue” is not a monolith; it’s a starting point. I’ve taught hundreds of students UI design, and this comes up again and again: just because blue was one color in kindergarten doesn’t mean that we can’t find interesting variations around it as designers.

    Chart showing various shades of blue with names and temperature
    “Blue” is not a monolith. Variations are listed in HSB, with CSS color names given below each swatch.
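
    CSS speaks HSL rather than HSB, but the hue lever works the same way there: hold everything else steady and nudge only the first number. The values below are illustrative rather than the exact swatches from the chart:

        .button--aqua   { background-color: hsl(180, 85%, 45%); } /* hue pushed down toward aqua */
        .button--blue   { background-color: hsl(210, 85%, 55%); } /* the staid starting point */
        .button--indigo { background-color: hsl(240, 85%, 55%); } /* hue pushed up toward indigo */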

    Aqua is a great variation with a much cooler feel, but it also makes that white text much harder to read. So now we have another problem to fix.

    “Hard to read” is actually a numerically-specific property. The World Wide Web Consortium has published guidelines for contrast between text and background, and if we use a tool to test those, we find we’re lacking in a couple departments.

    Chart showing a failure for AA and AAA WCAG compliance
    White text on an aqua button doesn’t provide enough contrast, failing to pass either AA or AAA WCAG recommendations.

    According to Stark (which is my preferred Sketch plugin for checking contrast—check out Lea Verou’s Contrast Ratio for a similar web-based tool), we’ve failed our contrast guidelines across the board!
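
    If you would rather check this without a plugin, the contrast-ratio math is short enough to script yourself. Here is a sketch using the relative-luminance formula from the WCAG 2.0 definition (the aqua value at the end is only an approximation of the button color):

        // Relative luminance of an sRGB color, per the WCAG 2.0 definition.
        function luminance([r, g, b]) {
          const [R, G, B] = [r, g, b].map((c) => {
            c /= 255;
            return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
          });
          return 0.2126 * R + 0.7152 * G + 0.0722 * B;
        }

        // Contrast ratio between two colors; AA asks for at least 4.5:1 for body text.
        function contrast(a, b) {
          const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
          return (hi + 0.05) / (lo + 0.05);
        }

        contrast([255, 255, 255], [60, 190, 190]); // white text on an aqua-ish button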

    How do you make the white text more legible against the aqua button? Let’s think of our HSB properties again.

    • Brightness. Let’s decrease it. That much should be obvious.
    • Saturation. We’re going to increase it. Why? Because we’re contrasting with white text, and white has a saturation of zero. So a higher saturation will naturally stand out more.
    • Hue. We’ll leave this alone since we like its vibe. But if the contrast continued to be too low, you could lower the aqua’s luminosity by shifting its hue up toward blue.

    So now, we’ve got a teal button:

    Same button with a darker teal

    Much better?

    Chart showing a passing score for AA WCAG compliance

    Much better.

    For what it’s worth, I’m not particularly concerned about missing the AAA standard here. WCAG developed the levels as relative descriptors of how much contrast there is, not as an absolute benchmark of, say, some particular percentage of people being able to read the text. The gold standard is—as always—to test with real people. AAA is best to hit, but at times, AA may be as good as you’re going to get with the colors you have to work with.

    Some of the ideas we’ve used to make a button’s blue a bit more fun and legible against white are actually deeper lessons about color that apply to almost everything else you design:

    • Think in HSB, as it gives you intuitive levers to pull when modifying color.
    • If you like the general feel of a color, shifting the hue in either direction can be a baseline for getting interesting variations on it (e.g., we wanted to spice up the default blue, but not by, say, changing it to red).
    • Modify saturation and brightness at the same time (but always in opposite directions) to increase or decrease contrast.

    OK, now let’s switch over to our DIN button. What color goes with its harsh edges and squared-off feel?

    The first thing that comes to mind is black.

    Black button with sharp font and sharp corners

    But let’s keep brainstorming. Maybe a stark red would also work.

    Deep red button with sharp font and sharp corners

    Or even a construction-grade orange.

    Deep orange button with sharp font and sharp corners

    (But not the red and orange together. Yikes! In general, two adjacent hues with high saturations will not look great next to each other.)

    Now, ignoring that the text of this is “Learn More” and a button like this probably doesn’t need to be blaze orange, I want you to pay attention to the colors I’m picking. We’re trying to maintain consistency with the official-y, squared-off DIN. So the colors we go to naturally have some of the same connotations: engineered, decisive, no funny business.

    Sure, this match-a-color-and-a-font business is more subjective, but there’s something solid to it: note that the words I used to describe the colors (“stark” and “construction-grade”) apply equally well to DIN—a fact I am only noticing now, not something done intentionally.

    Want to match a color with a font? This is another lesson applicable to all of branding. It’s best to start with adjectives/emotions, then match everything to those. Practically by accident, we’ve uncovered something fundamental in the branding design process.


    Let’s shift gears to work with shadows for a bit.

    There are a couple directions we could go with shadows, but the two main categories are (for lack of better terms):

    • realistic shadows;
    • and cartoon-y shadows.

    Here’s an example of each:

    Graphic showing a teal button with a realistic drop shadow and another button with a border-like shadow along the bottom

    The top button’s shadow is more photorealistic. It behaves like a shadow in the real world.

    The bottom button’s shadow is a bit lower-fidelity. It shows that the button is raised up, but it’s a cartoon version, with a slightly unrealistic, idealized bottom edge—and without a normal shadow, which would be present in the real world.

    The bottom works better for the button we’re crafting. The smoothness, the friendliness, the cartoon fidelity—it all goes together.

    As for our DIN button?

    Graphic showing a black button with a realistic drop shadow and another button with an indistinguishable bottom shadow

    I’m more ambivalent here. Maybe the shadow is for a hover state, à la Material Design?

    In any case, with a black background, a darkened bottom edge is impossible—you can’t get any darker than black.

    By the way, you may not have noticed it above, but the black button has a much stronger shadow. Compare:

    Graphic showing the soft shadow on the teal button and the harder shadow on the black button

    The teal button’s shadow is 30%-opacity black, shifted 1 pixel down on the y-axis, with a 2-pixel blur (0 1px 2px). The black button’s is 50%-opacity black, shifted 2 pixels down on the y-axis, with a 4-pixel blur (0 2px 4px). What’s more, the stronger shadow looks awful on the teal button.
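
    In CSS terms, those two shadows are simply the following, written as translucent black so they composite over whatever sits behind the button:

        .button--teal  { box-shadow: 0 1px 2px rgba(0, 0, 0, 0.3); }
        .button--black { box-shadow: 0 2px 4px rgba(0, 0, 0, 0.5); }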

    A dark shadow underneath a softer teal button

    Why is that? The answer, like so many questions that involve color, is in luminosity. When we put the button’s background in luminosity blend mode, converting it to a gray of equal natural lightness, we see something interesting.

    A grayscale version of our teal button with a shadow

    The shadow, at its darkest, is basically as dark as the button itself. Or, at least, the rate of change of luminosity is steady between each row of pixels.

    Extreme closeup of the gradient resulting in the drop shadow

    The top row is the button itself, not shadow.

    Shadows that are too close to the luminosity of their element’s backgrounds will appear too strong. And while this may sound like an overly specific lesson, it’s actually broadly applicable across elements. You know where else you see it?


    Let’s put a border on our teal button.

    Our teal button with a subtle border

    Now the way I’ve added this border is something that a bunch of people have thought of: make the border translucent black so that it works on any background color. In this case, I’ve used a single-pixel-wide border of 20%-opacity black.

    However, if I switch the background color to a more standard blue, which is naturally a lot less luminous, that border all but disappears.

    A bright blue button with a nearly invisible border

    In fact, to see it on blue just as much as you can see it on teal, you’ve got to crank up black’s opacity to something like 50%.

    A bright blue button with a more noticeable border
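
    Here is that comparison in CSS; the second opacity is the roughly-50% value mentioned above, and the class names are placeholders:

        .button--teal { border: 1px solid rgba(0, 0, 0, 0.2); } /* visible on the luminous teal */
        .button--blue { border: 1px solid rgba(0, 0, 0, 0.5); } /* needed on the darker blue */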

    This is a generalizable rule: when you want to layer black on another color, it needs to be a more opaque black to show up the same amount on less luminous background colors. Where else would you apply this idea?

    Buttons with varying degrees of shadows on a light background and a dark background

    Spoiler alert: shadows!

    Each of these buttons has the same shadow (0 2px 3px) except for different opacities. The top two buttons’ shadows have opacity 20%, and the bottom two have opacity 40%. Note how what’s fine on a white background (top left) is hardly noticeable on a dark background (top right). And what’s too dark on a white background (lower left) works fine on a dark background (lower right).


    I want to change gears one more time and talk about icons.

    Button with sharp font and a download icon from FontAwesome

    Here’s the download icon from Font Awesome, my least favorite icon set of all time.

    Close-up of the download icon from the previous button

    I dislike it, not only because it’s completely overused, but also because the icons are really bubbly and soft. Yet most of the time, they’re used in clean, crisp websites. They just don’t fit.

    You can see it works better with a soft, rounded font. I’m less opposed to this sort of thing.

    Softer teal button with the download icon from FontAwesome

    But there’s still a problem: the icon has these insanely small details! The dots are never going to show up at size, and even the space between the arrow and the disk is a fraction of a pixel in practice. Compared to the letterforms, it doesn’t look like quite the same style.

    But what good is my complaining if I don’t offer a solution?

    Let’s create a new take on the “download” icon, but with some different guiding principles:

    • We’ll use a stroke weight that’s equivalent (or basically equivalent) to the text weight.
    • We’ll use corner radii that are similar to the corner radii of our font: squared off for DIN, rounded for Moon.
    • We’ll use a simpler icon shape so the differences are easy to see.

    Let’s see how it looks:

    Black button with sharp font and a download icon with sharp corners, and a softer teal button with a round font and a download icon with softer corners

    I call this “drawing with the same pen.” Each of these icons looks like it could basically be a character in the font used in the button. And that’s the point here. I’m not saying all icons will appear this way, but for an icon that appears inline with text like this, it’s a fantastic rule of thumb.

    Wrapping it up

    Now this is just the beginning. Buttons can take all kinds of styles.

    Various button styles

    But we’ve got a good start here considering we designed just two buttons. In doing so, we covered a bunch of the things that are focal points of my day-to-day work as a UI designer:

    • lighting and shadows;
    • color;
    • typography;
    • consistency;
    • and brand.

    And the lessons we’ve learned in those areas are fundamental to the entirety of UI design, not just one element. Recall:

    • Letterforms are shapes. You can analyze fonts as sets of shapes, not simply as works of art.
    • You should letter-space uppercase text, since most fonts were designed for sentence case.
    • Think in HSB to modify colors.
    • You can find more interesting variations on a “basic” color (like a CSS default shade of blue or red) by tweaking the hue in either direction.
    • Saturation and brightness are levers that you can move in opposite directions to control luminosity.
    • Find colors that match the same descriptors that you would give your typeface and your overall brand.
    • Use darker shadows or black borders on darker backgrounds—and vice versa.
    • For inline icons, choose or design them to appear as though they were drawn with the same pen as the font you’re using.

    You can thank Josh Waitzkin for making me a pedant. I know, you just read an entire essay on buttons. But next time you’re struggling with a redesign or even something you’re designing from scratch, try stripping out all the elements that you think you should be including already, and just mess around with the simplest players on the board. Get a feel for the fundamentals, and go from there.

    Weird? Sure. But if it’s good enough for a grandmaster, I’ll take it.

  11. Mental Illness in the Web Industry

    The picture of the tortured artist has endured for centuries: creative geniuses who struggle with their metaphorical demons and don’t relate to life the same way as most people. Today, we know some of this can be attributed to mental illness: depression, anxiety, bipolar disorder, and many others. We have modern stories about this and plenty of anecdotal information that fuels the popular belief in a link between creativity and mental illness.

    But science has also started asking questions about the link between mental illness and creativity. A recent study has suggested that creative professionals may be more genetically predisposed to mental illness. In the web industry, whether designer, dev, copywriter, or anything else, we’re often creative professionals. The numbers suggest that mental illness hits the web industry especially hard.

    Our industry has made great strides in compassionate discussion of disability, with a focus on accessibility and events like Blue Beanie Day. But even though we’re having meaningful conversations and we’re seeing progress, issues related to diversity, inclusion, and sexual harassment are still a major problem for our industry. Understanding and acceptance of mental health issues is an area that needs growth and attention just like many others.

    When it comes to mental health, we aren’t quite as understanding as we think we are. According to a study published by the Centers for Disease Control and Prevention, 57% of the general population believes that society at large is caring and sympathetic toward people with mental illness; but only 25% of people with mental health symptoms believed the same thing. Society is less understanding and sympathetic regarding mental illness than it thinks it is.

    Where’s the disconnect? What does it look like in our industry? It’s usually not negligence or ill will on anybody’s part. It has a lot more to do with people just not understanding the prevalence and reality of mental illness in the workplace. We need to begin discussing mental illness as we do any other personal challenge that people face.

    This article is no substitute for a well-designed scientific study or a doctor’s advice, and it’s not trying to declare truths about mental illness in the industry. And it certainly does not intend to lump together or equalize any and all mental health issues, illnesses, or conditions. But it does suspect that plenty of people in the industry struggle with their mental health at some point or another, and we just don’t seem to talk about it. This doesn’t seem to make sense in light of the sense of community that web professionals have been proud of for decades.

    We reached out to a few people in our industry who were willing to share their unique stories to bring light to what mental health looks like for them in the workplace. Whether you have your own struggles with mental health issues or just want to understand those who do, these stories are a great place to start the conversation.

    Meet the contributors

    Gerry: I’ve been designing websites since the late ‘90s, starting out in UI design, evolving into an IA, and now in a UX leadership role. Over my career, I’ve contributed to many high-profile projects, organized local UX events, and done so in spite of my personal roadblocks.

    Brandon Gregory: I’ve been working in the web industry since 2006, first as a designer, then as a developer, then as a manager/technical leader. I’m also a staff member and regular contributor at A List Apart. I was diagnosed with bipolar disorder in 2002 and almost failed out of college because of it, although I now live a mostly normal life with a solid career and great family. I’ve been very open about my condition and have done some writing on it on Medium to help spread awareness and destigmatize mental illnesses.

    Stephen Keable: I’ve been building and running websites since 1999, both professionally and for fun. Worked for newspapers, software companies, and design agencies, in both permanent and freelance roles, almost always creating front-end solutions, concentrating on a user-centered approach.

    Bri Piccari: I’ve been messing around with the web since MySpace was a thing, figuring out how to customize themes and make random animations fall down from the top of my profile. Professionally, I’ve been in the field since 2010, freelancing while in college before transitioning to work at small agencies and in-house for a spell after graduation. I focus on creating solid digital experiences, employing my love for design with [a] knack for front-end development. Most recently, I started a small design studio, but decided to jump back into more steady contract and full-time work, after the stress of owning a small business took a toll on my mental health. It was a tough decision, but I had to do what was best for me. I also lead my local AIGA chapter and recently got my 200-hour-yoga-teacher certification.

    X: I also started tinkering with the web on Myspace, and started working on websites to help pay my way through college. I just always assumed I would do something else to make a living. Then, I was diagnosed with bipolar disorder. My [original non-web] field was not a welcoming and supportive place for that, so I had to start over, in more ways than one. The web industry hadn’t gone anywhere, and it’s always been welcoming to people with random educational histories, so I felt good about being able to make a living and staying healthy here. But because of my experience when I first tried to be open about my illness, I now keep it a secret. I’m not ashamed of it; in fact, it’s made me live life more authentically. For example, in my heart, I knew I wanted to work on the web the entire time.

    The struggle is real

    Mental health issues are as numerous and unique as the people who struggle with them. We asked the contributors what their struggles look like, particularly at work in the web industry.

    G: I have an interesting mix of ADD, dyslexia, and complex PTSD. As a result, I’m an incomplete person, in a perpetual state of self-doubt, toxic shame, and paralyzing anxiety. I’ve had a few episodes in my past where a requirement didn’t register or a criticism was taken the wrong way and I’ve acted less than appropriately (either through panic, avoidance, or anger). When things go wrong, I deal with emotional flashbacks for weeks.

    Presenting or reading before an audience is a surreal experience as well. I go into a zone where I’m never sure if I’m speaking coherently or making any sense at all until I’ve spoken with friends in the audience afterward. This has had a negative effect on my career, making even the most simple tasks anxiety-driven.

    BG: I actually manage to at least look like I have everything together, so most people don’t know I have bipolar until I tell them. On the inside, I struggle—a lot. There are bouts of depression where I’m exhausted all day and deal with physical pain, and bursts of mania where I take unnecessary risks and make inappropriate outbursts, and I can switch between these states with little or no notice. It’s a balancing act to be sure, and I work very hard to keep it together for the people in my life.

    SK: After the sudden death of my mother, I started suffering from panic attacks. One of them came on about 30 minutes after I got to work; I couldn’t deal with the attack there, so I suddenly went home without telling anyone, only phoning my boss from a lay-by after I’d been in tears at the side of the road for a while. The attacks also triggered depression, which has made motivation when I’m working from home so hard that I actually want to spend more time at the office. Luckily my employer is very understanding and has been really flexible.

    BP: Depending upon the time of year, I struggle greatly, with the worst making it nearly impossible to leave my apartment. As most folks often say, I’ve gotten rather good at appearing as though I’ve got my shit together—typically, most people I interact with have no idea what I’m going through unless I let them in. It wasn’t until recently that my mental health began to make a public appearance, as the stress of starting my own business and attempting to “have it all” made it tough to continue hiding it. There are definitely spans of time where depression severely affects my ability to create and interface with others, and “fake it till ya make it” doesn’t even cut it. I’m currently struggling with severe anxiety brought on by stress. Learning to manage that has been a process.

    X: I have been fortunate to be a high-functioning bipolar person for about 5 years now, so there really isn’t a struggle you can really see. The struggle is the stress and anxiety of losing that stability, and especially of people finding out. I take medication, have a routine, a support system, and a self-care regimen that is the reason why I am stable, but if work starts [to] erode my work-life balance, I can’t protect that time and energy anymore. In the past, this has started to happen when I’ve been asked to routinely pull all-nighters, work over the weekend, travel often, or be surrounded by a partying and drinking culture at work. Many people burn out under those conditions, but for me, it could be dangerous and send me into a manic episode, or even [make me] feel suicidal. I struggle with not knowing how far I can grow in my career, because a lot of the things you do to prove yourself and to demonstrate that you’re ready for more responsibility involves putting more on your plate. What’s the point of going after a big role if it’ll mean that I won’t be able to take care of myself? The FOMO [(fear of missing out)] gets bad.

    Making it work

    There are different ways that people can choose to—or choose not to—address the mental health problems they struggle with. We’re ultimately responsible for making our own mental health decisions, and they are different for everyone. In the meantime, the rent has to get paid. Here’s how our contributors cope with their situations at work to make it happen.

    G: I started seeing a therapist, which has been an amazing help. I’ve also worked to change my attitude about criticism—I ask more clarifying questions, looking to define the problem, rather than get mad, defensive, or sarcastic. I’ve learned to be more honest with my very close coworkers, making them aware of my irrational shortcomings and asking for help. Also, because I’ve experienced trauma in personal and professional life, I’m hypersensitive to the emotions of others. Just being around a heated argument or otherwise heightened situation could put my body into a panic. I have to take extra special care in managing personalities, making sure everyone in a particular situation feels confident that they’re set up for success.

    BG: Medicine has worked very well for me, and I’m very lucky in that regard. That keeps most of my symptoms at a manageable level. Keeping my regular schedule and maintaining some degree of normalcy is a huge factor in remaining stable. Going to work, sleeping when I should, and keeping some social appointments, while not always easy, keep me from slipping too far in either direction. Also, writing has been a huge outlet for me and has helped others to better understand my condition as well. Finding some way to express what you’re going through is huge.

    SK: I had several sessions of bereavement counseling to help with the grief. I also made efforts to try and be more physically active each day, even if just going for a short walk on my lunch break. Working had become a way of escaping everything else that was going on at the time. Before the depression I used to work from home two days a week; however, I found those days very hard on my own, so I started working from the office every weekday. Thankfully, through all of this, my employer was incredibly supportive and simply told me to do what I need to do. And it’s made me want to stay where I work more than before, as I realize how lucky I am to have their support.

    BP: Last winter I enrolled in a leadership/yoga teacher training [program] with a goal of cultivating a personal practice to better manage my depression and anxiety. Making the jump to be in an uncomfortable situation and learn the value of mindfulness has made a huge difference in my ability to cope with stress. Self-care is really big for me, and being aware of when I need to take a break. I’ve heard it called high-functioning depression and anxiety. I often take on too much and learning to say no has been huge. Therapy and a daily routine have been incredibly beneficial as well.

    X: The biggest one is medicine, it’s something I will take for the rest of my life and it’s worth it to me. I did a form of therapy called Dialectical Behavioral Therapy for a couple of years. The rest is a consistent regimen of self-care, but there are a couple of things that are big for work. Not working nights or weekends, keeping it pretty 9–5. Walking to and from the office or riding my bike. I started a yoga practice immediately after getting diagnosed, and the mental discipline it’s given me dampens the intensity of how I react to stressful situations at work. This isn’t to say that I will refuse to work unless it’s easy. Essentially, if something catches on fire, these coping strategies help me keep my shit together for long enough to get out.

    Spreading awareness

    There are a lot of misconceptions about mental illness, in the web industry as much as anywhere else. Some are benign but annoying; others are pretty harmful. Here are some of the things we wish others knew about us and our struggles.

    G: Nothing about my struggle is rational. It seems as if my body is wired to screw everything up and wallow in the shame of it. I have to keep moving, working against myself to get projects as close to perfect as possible. However, I am wired to really care about people, and that is probably why I’ve been successful in UX.

    BG: Just because I look strong doesn’t mean I don’t need support. Just because I have problems doesn’t mean I need you to solve them. Sometimes, just checking in or being there is the best thing for me. I don’t want to be thought of as broken or fragile (although I admit, sometimes I am). I am more than my disorder, but I can’t completely ignore it either.

    Also, there are still a lot of stigmas surrounding mental illness, to the point that I didn’t feel safe admitting to my disorder to a boss at a previous job. Mental illnesses are medical conditions that are often classified as legitimate disabilities, but employees may not be safe admitting that they have one—that’s the reality we live with.

    SK: For others who are going through grief-related depression, I would say that talking about it with friends, family, and even strangers helps you process it a lot. And the old cliché that time is a healer really is true. Also, for any employers, be supportive [of those] with mental health conditions—as supportive as you would [be of those] with physical health situations. They will pay you back.

    BP: I am a chronically ambitious human. Oftentimes, this comes from a place of working and doing versus dealing with what is bothering or plaguing me at the time. Much of my community involvement came from a place of needing a productive outlet. Fortunately or unfortunately, I have accomplished a lot through that—however, there are times where I simply need a break. I’m learning to absorb and understand that, as well as become OK with it.

    X: I wish people knew how much it bothers me to hear the word bipolar being used as an adjective to casually describe things and people. It’s not given as a compliment, and it makes it less likely that I will ever disclose my illness publicly. I also wish people knew how many times I’ve come close to just being open about it, but held back because of the other major diversity and inclusion issues in the tech industry. Women have to deal with being called moody and erratic. People stereotype the ethnic group I belong to as being fiery and ill-tempered. Why would I give people another way to discriminate against me?

  12. Working with External User Researchers: Part I

    You’ve got an idea or perhaps some rough sketches, or you have a fully formed product nearing launch. Or maybe you’ve launched it already. Regardless of where you are in the product lifecycle, you know you need to get input from users.

    You have a few sound options to get this input: use a full-time user researcher or contract out the work (or maybe a combination of both). Between the three of us, we’ve run a user research agency, hired external researchers, and worked as freelancers. Through our different perspectives, we hope to provide some helpful considerations.

    Should you hire an external user researcher?

    First things first: in this article, we focus on contracting external user researchers, meaning that a person or team is brought on for the duration of a contract to conduct the research. Here are the most common situations where we find this type of role:

    Organizations without researchers on staff: It would be great if companies validated their work with users during every iteration. But unfortunately, in real-world projects, user research happens at less frequent intervals, meaning there might not be enough work to justify hiring a full-time researcher. For this reason, it sometimes makes sense to use external people as needed.

    Organizations whose research staff is maxed out: In other cases, particularly with large companies, there may already be user researchers on the payroll. Sometimes these researchers are specific to a particular effort, and other times the researchers themselves function as internal consultants, helping out with research across multiple projects. Either way, there is a finite amount of research staff, and sometimes the staff gets overbooked. These companies may then pull in additional contract-based researchers to independently run particular projects or to provide support to full-time researchers.

    Organizations that need special expertise: Even if a company does have user research on staff and those researchers have time, it’s possible that there are specialized kinds of user research for which an external contract-based researcher is brought on. For example, they may want to do research with representative users who regularly use screen readers, so they bring in an accessibility expert who also has user research skills. Or they might need a researcher with special quantitative skills for a certain project.

    Why hire an external researcher vs. other options?

    Designers as researchers: You could hire a full-time designer who also has research skills. But a designer usually won’t have the same level of research expertise as a dedicated researcher. Additionally, they may end up researching their own designs, making it extremely difficult to moderate test sessions without any form of bias.

    Product managers as researchers: While it’s common for enthusiastic product managers to want to conduct their own guerrilla user research, this is often a bad idea. Product managers tend to hear feedback that validates their ideas, and most often they aren’t trained to ask non-leading questions.

    Temporary roles: You could also bring on a researcher in a staff augmentation role, meaning someone who works for you full-time for an extended period but who is not considered a full-time employee. This can be a bit harder to justify. For example, there may be legal requirements you’d have to satisfy if you directly contract an individual. Or you could find someone through a staffing agency, which means fewer legal hurdles but likely a far higher price.

    If these options don’t sound like a good fit for your needs, hiring an external user researcher on a project-specific basis could be the best solution for you. They give you exactly what you need without additional commitment or other risks. They may be a freelancer (or a slightly larger microbusiness), or even a team farmed out for a particular project by a consulting firm or agency.

    What kinds of projects would you contract a user researcher for?

    You can reasonably expect that anyone or any company that advertises their skillset as user research likely can do the full scope of qualitative efforts—from usability studies of all kinds, to card sorts, to ethnographic and exploratory work.

    Contracting out quantitative work is a bit riskier. An analogy that comes to mind is using TurboTax to file your taxes. While TurboTax may be just fine for many situations, it’s easy to overlook what you don’t know in terms of more complicated tax regulations, which can quickly get you in trouble. Similarly, with quantitative work, there’s a long list of diverse, specialized quantitative skills (e.g., logs analysis, conjoint, Kano, and multiple regression). Don’t assume someone advertising as a general quantitative user researcher has the exact skills you need.

    Also, for some companies, quantitative work comes with unique user privacy considerations that can require special internal permissions from legal and privacy teams.

    But if the topic of your project is pretty easy to grasp and absorb without needing much specialized technical or organizational insight, hiring an external researcher is generally a great option.

    What are the benefits to hiring an external researcher?

    A new, objective perspective is one major benefit to hiring an external researcher. We all suffer from design fixation and are influenced by organizational politics and perceived or real technical constraints. Hiring an unbiased external researcher can uncover more unexpected issues and opportunities.

    Contracting a researcher can also expand an internal researcher’s ability to influence. Having someone else moderate research studies frees up in-house researchers to be part of the conversations among stakeholders that happen while user interviews are being observed. If they are intuitively aware of an issue or opportunity, they can emphasize their perspective during those critical, decision-making moments that they often miss out on when they moderate studies themselves. In these situations, the in-house team can even design the study plan, draft the discussion guide, and just have the contractor moderate the study. The external researcher may then collaborate with the in-house researcher on the final report.

    More candid and honest feedback can come out of hiring an external researcher. Research participants tend to be more comfortable sharing more critical feedback with someone who doesn’t work for the company whose product is being tested.

    Lastly, if you need access to specialized research equipment or software (for example, proprietary online research tools), it can be easier to get it via an external researcher.

    How do I hire an external user researcher?

    So you’ve decided that you need to bring on an external user researcher to your team. How do you get started?

    Where to find them

    Network: Don’t wait until you need help to start networking and collecting a list of external researchers. Be proactive. Go to UX events in your local region. You’ll meet consultants and freelancers at those events, as well as people who have contracted out research and can make recommendations. You won’t necessarily have the opportunity for deep conversations, but you can continue a discussion over coffee or drinks!

    Referrals: Along those same lines, when you anticipate a need at some point in the future, seek out trusted UX colleagues at your company and elsewhere. Ask them to connect you with people that they may have worked with.

    What about a request for proposal (RFP)?

    Your company may require you to specify your need in the form of an RFP, which is a document that outlines your project needs and specifications, and asks for bids in response.

    An RFP provides these benefits:

    • It keeps the playing field level, and anyone who wants to bid on a project can (in theory).
    • You can be very specific about what you’re looking for, and get bids that can be easy to compare on price.

    On the other hand, an RFP comes with limitations:

    • You may think your requirements were very specific, but respondents may interpret them in different ways. This can result in large quote differences.
    • You may be eliminating smaller players—those freelancers and microbusinesses who may be able to give you the highest level of seniority for the dollar but don’t have the staff to respond to RFPs quickly.
    • You may be forced to be very concrete about your needs when you are not yet sure what you’ll actually need.

    When it comes to RFPs, the most important thing to remember is to clearly and thoroughly specify your needs. Don’t forget to include small but important details that can matter in terms of pricing, such as answers to these questions:

    • Who is responsible for recruitment of research participants?
    • How many participants do you want included?
    • Who will be responsible for distributing participant incentives?
    • Who will be responsible for localizing prototypes?
    • How long will sessions be?
    • Over how many days and locations will they be?
    • What is the format of expected deliverables?
    • Do you want full, transcribed videos, or video clips?

    It’s these details that will ultimately result in receiving informed proposals that are easy to compare.

    Do a little digging on their backgrounds

    Regardless of how you find a potential researcher, make sure you check out their credentials if you haven’t worked with them before.

    At the corporate level, review the company: Google them and make sure that user research seems to be one of their core competencies. The same is true when dealing with a freelancer or microbusiness: Google them and see whether you get research-oriented results, and also check them out on social media.

    Certainly feel free to ask for references if you don’t already have a direct connection, but take them with a grain of salt. Between the self-selecting nature of a reference, and a potential reference just trying to be nice to a friend, these can never be fully trusted.

    One of the strongest indicators of experience and quality work is if a researcher has been hired by the same client for more than one project over time.

    Larger agencies, individual researchers, or something in-between?

    So you’ve got a solid sense of what research you need, and you’ve got several quality options to choose from. But external researchers come in all shapes and sizes, from single freelancers to very large agencies. How do you choose what’s best for your project while still evaluating the external researchers fairly?

    Larger consulting firms and agencies do have some distinct advantages—namely that you’ve got a large company to back up the project. Even if one researcher isn’t available as expected (for example, if the project timeline slips), another can take their place. They also likely have a whole infrastructure for dealing with contracts like yours.

    On the other hand, this larger infrastructure may add extra burden on your side. You may not know who exactly is going to be working on your project, or their level of seniority or experience. Changes in scope will likely be more involved. Larger infrastructure also likely means higher costs.

    Individual (freelance) researchers also have some key advantages. You will likely have more control over contracting requirements. They are also likely to be more flexible—and less costly. In addition, if they were referred to you, you will be working with a specific resource that you can get to know over multiple projects.

    Bringing on individual researchers can incur a little more risk. You will need to make sure that you can properly justify hiring an external researcher instead of an employee. (In the United States, the IRS has a variety of tests to make sure it is OK.) And if your project timeline slips, you run a greater risk of losing the researcher to some other commitment without someone to replace them.

    A small business, a step between an individual researcher and a large firm, has some advantages over hiring an individual. Contracting an established business may involve less red tape, and you will still have the personal touch of knowing exactly who is conducting your research.

    An established business also shows a certain level of commitment, even if it’s one person. For example, a microbusiness could represent a single freelancer, but it could also involve a very small number of employees or established relationships with trusted subcontractors (or both). Whatever the configuration, don’t expect a business of this size to have the ability to readily respond to RFPs.

    The money question

    Whether you solicit RFPs or get a single bid, price quotes will often differ significantly. User research is not a product but rather a customized and sophisticated effort around your needs. Here are some important things to consider:

    • Price quotes are a reflection of how a project is interpreted. Different researchers are going to interpret your needs in different ways. A good price quote clearly details any assumptions that are going into pricing so you can quickly see if something is misaligned.
    • Research teams are made up of staff with different levels of experience. A quote is going to be a reflection of the overall seniority of the team, their salaries and benefits, the cost of any business resources they use, and a reasonable profit margin for the business.
    • Businesses all want to make a reasonable profit, but approaches to profitability differ. Some organizations may balance having a high volume of work with less profit per project. Other organizations may take more of a boutique approach: more selectivity over projects taken on, with added flexibility to focus on those projects, but also with a higher profit margin.
    • Overbooked businesses provide higher quotes. Some consultants and agencies are in the practice of rarely saying no to a request, even if they are at capacity in terms of their workload. In these instances, it can be a common practice to multiply a quote by as much as three—if you say no, no harm done given they’re at capacity. However, if you say yes, the substantial profit is worth the cost for them to hire additional resources and to work temporarily above capacity in the meantime.

    To determine whether a researcher or research team is right for you, you’ll certainly need to look at the big picture, including pricing, associated assumptions, and the seniority and background of the individuals who are doing the work.

    Remember, it’s always OK to negotiate

    If you have a researcher or research team that you want to work with but their pricing isn’t in line with your budget, let them know. It could be that the quote is based on faulty assumptions. They may expect you to negotiate and be willing to come down in price; they may also offer alternative, cheaper options.

    Next steps

    Hiring an external user researcher typically brings a long list of benefits. But like most relationships, you’ll need to invest time and effort to foster a healthy working dynamic between you, your external user researcher, and your team. Stay tuned for the next installment, where we’ll focus on how to collaborate together.

  13. No More FAQs: Create Purposeful Information for a More Effective User Experience

    It’s normal for your website users to have recurring questions and need quick access to specific information to complete … whatever it is they came looking for. Many companies still opt for the ubiquitous FAQ (frequently asked/anticipated questions) format to address some or even all information needs. But FAQs often miss the mark because people don’t realize that creating effective user information—even when using the apparently simple question/answer format—is complex and requires careful planning.

    As a technical writer and now information architect, I’ve worked to upend this mediocre approach to web content for more than a decade, and here’s what I’ve learned: instead of defaulting to an unstructured FAQ, invest in information that’s built around a comprehensive content strategy specifically designed to meet user and company goals. We call it purposeful information.

    The problem with FAQs

    Because of the internet’s Usenet heritage—discussion boards where regular contributors would produce FAQs so they didn’t have to repeat information for newbies—a lot of early websites started out by providing all information via FAQs. Well, the ‘80s called, and they want their style back!

    Unfortunately, content in this simple format can often be attractive to organizations, as it’s “easy” to produce without the need to engage professional writers or comprehensively work on information architecture (IA) and content strategy. So, like zombies in a horror film, and with the same level of intellectual rigor, FAQs continue to pop up all over the web. The trouble is, this approach to documentation-by-FAQ has problems, and the information is about as far from being purposeful as it’s possible to get.

    For example, when companies and organizations resort to documentation-by-FAQ, it’s often the only place certain information exists, yet users are unlikely to spend the time required to figure that out. Conversely, if information is duplicated, it’s easy for website content to get out of sync. The FAQ page can also be a dumping ground for any information a company needs to put on the website, regardless of the topic. Worse, the page’s format and structure can increase confusion and cognitive load, while including obviously invented questions and overt marketing language can result in losing users’ trust quickly. Looking at each issue in more detail:

    • Duplicate and contradictory information: Even on small websites, it can be hard to maintain information. On large sites with multiple authors and an unclear content strategy, information can get out of sync quickly, resulting in duplicate or even contradictory content. I once purchased food online from a company after reading in their FAQ—the content that came up most often when searching for allergy information—that the product didn’t contain nuts. However, on receiving the product and reading the label, I realized the FAQ information was incorrect, and I was able to obtain a refund. An information architecture (IA) strategy that includes clear pathways to key content not only better supports user information needs that drive purchases, but also reduces company risk. If you do have to put information in multiple locations, consider using an object-oriented content management system (CMS) so content is reused, not duplicated. (Our company open-sourced one called Fae.)
    • Lack of discernible content order: Humans want information to be ordered in ways they can understand, whether it’s alphabetical, time-based, or by order of operation, importance, or even frequency. The question format can disguise this organization by hiding the ordering mechanism. For example, I could publish a page that outlines a schedule of household maintenance tasks by frequency, with natural categories (in order) of daily, weekly, monthly, quarterly, and annually. But putting that information into an FAQ format, such as “How often should I dust my ceiling fan?,” breaks that logical organization of content—it’s potentially a stand-alone question. Even on a site that’s dedicated only to household maintenance, that information will be more accessible if placed within the larger context of maintenance frequency.
    • Repetitive grammatical structure: Users like to scan for information, so having repetitive phrases like “How do I …” that don’t relate to the specific task makes it much more difficult for readers to quickly find the relevant content. In a lengthy help page with catch-all categories, like the Patagonia FAQ page, users have to swim past a sea of “How do I …,” “Why can’t I …,” and “What do I …” phrases to get to the actual information. While categories can help narrow the possibilities, the user still has to take the time to find the most likely category and then the relevant question within it. The Patagonia website also shows how an FAQ section can become a catch-all. Oh, how I’d love the opportunity to restructure all that Patagonia information into purposeful information designed to address user needs at the exact right moment. So much potential!
    • Increased cognitive load: As well as being repetitive, the question format can also be surprisingly specific, forcing users to mentally break apart the wording of the questions to find a match for their need. If a question appears to exclude the required information, the user may never click to see the answer, even if it is actually relevant. Answers can also raise additional, unnecessary questions in the minds of users. Consider the FAQ-formatted “Can I pay my bill with Venmo?” (which limits the answer to one payment type that only some users may recognize). Rewriting the question to “How can I pay my bill online?” and updating the content improves the odds that users will read the answer and be able to complete their task. However, an even better approach is to create purposeful content under the more direct and concise heading “Online payment options,” which is broad enough to cover all payment services (as a topic in the “Bill Payments” portion of a website), as well as instructions and other task-orientated information.
    • Longer content requirements: In most cases, questions have a longer line length than topic headings. The Airbnb help page illustrates when design and content strategy clash. The design truncates the question after 40 characters when the browser viewport is wider than 743 pixels. You have to click the question to find out if it holds the answer you need—far from ideal! Yet the heading “I’m a guest. How do I check the status of my reservation?” could easily have been rewritten as “Checking reservation status” or even “Guests: Checking reservation status.” Not only do these alternatives fit within the line length limitations set by the design, but the lower word count and simplified English also reduce translation costs (another issue some companies have to consider).

    Purposeful information

    Grounded in the Minimalist approach to technical documentation, the idea behind purposeful information is that users come to any type of content with a particular purpose in mind, ranging from highly specific (task completion) to general learning (increased knowledge). Different websites—and even different areas within a single website—may be aimed at different users and different purposes. Organizations also have goals when they construct websites, whether they’re around brand awareness, encouraging specific user behavior, or meeting legal requirements. Companies that meld user and organization goals in a way that feels authentic can be very successful in building brand loyalty.

    Commerce sites, for example, have the goal of driving purchases, so the information on the site needs to provide content that enables effortless purchasing decisions. For other sites, the goal might be to drive user visits, encourage newsletter sign-ups, or increase brand awareness. In any scenario, burying in FAQs any pathways needed by users to complete their goals is a guaranteed way to make it less likely that the organization will meet theirs.

    By digging into what users need to accomplish (not a general “they need to complete the form,” but the underlying, real-world task, such as getting a shipping quote, paying a bill, accessing health care, or enrolling in college), you can design content to provide the right information at the right time and better help users accomplish those goals. As well as making it less likely you’ll need an FAQ section at all, using this approach to generate a credible IA and content strategy—the tools needed to determine a meaningful home for all your critical content—will build authority and user trust.

    Defining specific goals when planning a website is therefore essential if content is to be purposeful throughout the site. Common user-centered methodologies employed during both IA and content planning include user-task analysis, content audits, personas, user observations, and analysis of call center data and web analytics. A complex project might use multiple methodologies to define the content strategy and supporting IA to provide users with the necessary information.

    The redesign of the Oliver Winery website is a good example of creating purposeful information instead of resorting to an FAQ. There was a user goal of being able to find practical information about visiting the winery (such as details regarding food, private parties, etc.), yet this information was scattered across various pages, including a partially complete FAQ. There was a company goal of reducing the volume of calls to customer support. In the redesign, a single page called “Plan Your Visit” was created with all the relevant topics. It is accessible from the “Visit” section and via the main navigation.

    The system used is designed to be flexible. Topics are added, removed, and reordered using the CMS, and published on the “Plan Your Visit” page, which also shows basic logistical information like hours and contact details, in a non-FAQ format. Conveniently, contact details are maintained in only one location within the CMS yet published on various pages throughout the site. As a result, all information is readily available to users, increasing the likelihood that they’ll make the decision to visit the winery.

    If you have to include FAQs

    This happens. Even though there are almost always more effective ways to meet user needs than writing an FAQ, FAQs happen. Sometimes the client insists, and sometimes even the most ardent opponent (ahem) concludes that in a very particular circumstance, an FAQ can be purposeful. The most effective FAQ is one with a specific, timely, or transactional need, or one with information that users need repeated access to, such as when paying bills or organizing product returns.

    Good topics for an FAQ include transactional activities, such as those involved in the buying process: think shipments, payments, refunds, and returns. By being specific and focusing on a particular task, you avoid the categorization problem described earlier. By limiting questions to those that are frequently asked AND that have a very narrow focus (to reduce users having to sort through lots of content), you create more effective FAQs.

    Amazon’s support center has a great example of an effective FAQ within their overall support content because they have exactly one: “Where’s My Stuff?” Set under the “Browse Help Topics” heading, the question leads to a list of task-based topics that help users track down the location of their missing packages. Note that all of the other support content is purposeful, set in a topic-based help system that’s nicely categorized, with a search bar that allows users to dive straight in.

    Conference websites, which by their nature are already focused on a specific company goal (conference sign-ups), often have an FAQ section that covers basic conference information, logistics, or the value of attending. This can be effective. However, for the reasons outlined earlier, the content can quickly become overwhelming if conference organizers try to include all information about the conference as a single list of questions, as demonstrated by Web Summit’s FAQ page. Overdoing it can cause confusion even when the design incorporates categories and an otherwise useful UX that includes links, buttons, or tabs, such as on the FAQ page of The Next Web Conference.

    In examining these examples, it’s apparent how much more easily users could access the information if it wasn’t presented as questions. But if you do have to use FAQs, here are my tips for creating the best possible user experience.

    Creating a purposeful FAQ:

    • Make it easy to find.
    • Have a clear purpose and highly specific content in mind.
    • Give it a clear title related to the user tasks (e.g., “Shipping FAQ” rather than just “FAQ”).
    • Use clear, concise wording for questions.
    • Focus questions on user goals and tasks, not on product or brand.
    • Keep it short.

    What to avoid in any FAQ:

    • Don’t include “What does FAQ stand for?” (unfortunately, not a fictional example). Instead, simply define acronyms and initialisms on first use.
    • Don’t define terms using an FAQ format—it’s a ticket straight to documentation hell. If you have to define terms, what you need is a glossary, not FAQs.
    • Don’t tell your brand story or company history, or pontificate. People don’t want to know as much about your brand, product, and services as you are eager to tell them. Sorry.

    In the end, always remember your users

    Your website should be filled with purposeful content that meets users’ core needs and fulfills your company’s objectives. Do your users and your bottom line a favor and invest in effective user analysis, IA, content strategy, and documentation. Your users will be able to find the information they need, and your brand will be that much more awesome as a result.

  14. Why Mutation Can Be Scary

    A note from the editors: This article contains sample lessons from Learn JavaScript, a course that helps you learn JavaScript to build real-world components from scratch.

    To mutate means to change in form or nature. Something that’s mutable can be changed, while something that’s immutable cannot be changed. To understand mutation, think of the X-Men. In X-Men, people can suddenly gain powers. The problem is, you don’t know when these powers will emerge. Imagine your friend turns blue and grows fur all of a sudden; that’d be scary, wouldn’t it?

    In JavaScript, the same problem with mutation applies. If your code is mutable, you might change (and break) something without knowing.

    Objects are mutable in JavaScript

    In JavaScript, you can add properties to an object. When you do so after instantiating it, the object is changed permanently. It mutates, like how an X-Men member mutates when they gain powers.

    In the example below, the variable egg mutates once you add the isBroken property to it. We say that objects (like egg) are mutable (have the ability to mutate).

    const egg = { name: "Humpty Dumpty" };
    egg.isBroken = false;
    // {
    //   name: "Humpty Dumpty",
    //   isBroken: false
    // }

    Mutation is pretty normal in JavaScript. You use it all the time.

    Here’s when mutation becomes scary.

    Let’s say you create a constant variable called newEgg and assign egg to it. Then you want to change the name of newEgg to something else.

    const egg = { name: "Humpty Dumpty" };
    const newEgg = egg;
    newEgg.name = "Errr ... Not Humpty Dumpty";

    When you change (mutate) newEgg, did you know egg gets mutated automatically?

    // {
    //   name: "Errr ... Not Humpty Dumpty"
    // }

    The example above illustrates why mutation can be scary—when you change one piece of your code, another piece can change somewhere else without your knowing. As a result, you’ll get bugs that are hard to track and fix.

    This weird behavior happens because objects are passed by reference in JavaScript.

    Objects are passed by reference in JavaScript

    To understand what “passed by reference” means, first you have to understand that each object has a unique identity in JavaScript. When you assign an object to a variable, you link the variable to the identity of the object (that is, you pass it by reference) rather than assigning the variable the object’s value directly. This is why when you compare two different objects, you get false even if the objects have the same value.

    console.log({} === {}); // false

    When you assign egg to newEgg, newEgg points to the same object as egg. Since egg and newEgg are the same thing, when you change newEgg, egg gets changed automatically.

    console.log(egg === newEgg); // true

    Unfortunately, most of the time you don’t want egg to change along with newEgg, since that causes your code to break when you least expect it. So how do you prevent objects from mutating? Before you can answer that, you need to know what’s immutable in JavaScript.

    Primitives are immutable in JavaScript

    In JavaScript, primitives (String, Number, Boolean, Null, Undefined, and Symbol) are immutable; you cannot change the structure (add properties or methods) of a primitive. Nothing will happen even if you try to add properties to a primitive.

    const egg = "Humpty Dumpty";
    egg.isBroken = false;
    console.log(egg); // Humpty Dumpty
    console.log(egg.isBroken); // undefined

    const doesn’t grant immutability

    Many people think that variables declared with const are immutable. That’s an incorrect assumption.

    Declaring a variable with const doesn’t make it immutable; it only prevents you from assigning another value to it.

    const myName = "Zell";
    myName = "Triceratops";
    // ERROR

    When you declare an object with const, you’re still allowed to mutate the object. In the egg example above, even though egg is created with const, const doesn’t prevent egg from mutating.

    const egg = { name: "Humpty Dumpty" };
    egg.isBroken = false;
    // {
    //   name: "Humpty Dumpty",
    //   isBroken: false
    // }

    Preventing objects from mutating

    You can use Object.assign and assignment to prevent objects from mutating.

    Object.assign

    Object.assign lets you combine two (or more) objects together into a single one. It has the following syntax:

    const newObject = Object.assign(object1, object2, object3, object4);

    newObject will contain properties from all of the objects you’ve passed into Object.assign.

    const papayaBlender = { canBlendPapaya: true };
    const mangoBlender = { canBlendMango: true };
    const fruitBlender = Object.assign(papayaBlender, mangoBlender);
    // {
    //   canBlendPapaya: true,
    //   canBlendMango: true
    // }

    If two conflicting properties are found, the property in a later object overwrites the property in an earlier object (in the Object.assign parameters).

    const smallCupWithEar = {
      volume: 300,
      hasEar: true
    };
    const largeCup = { volume: 500 };
    // In this case, volume gets overwritten from 300 to 500
    const myIdealCup = Object.assign(smallCupWithEar, largeCup);
    // {
    //   volume: 500,
    //   hasEar: true
    // }

    But beware! When you combine two objects with Object.assign, the first object gets mutated. Other objects don’t get mutated.

    console.log(smallCupWithEar);
    // {
    //   volume: 500,
    //   hasEar: true
    // }
    console.log(largeCup);
    // {
    //   volume: 500
    // }

    Solving the Object.assign mutation problem

    You can pass a new object as your first object to prevent existing objects from mutating. You’ll still mutate the first object though (the empty object), but that’s OK since this mutation doesn’t affect anything else.

    const smallCupWithEar = {
      volume: 300,
      hasEar: true
    };
    const largeCup = {
      volume: 500
    };
    // Using a new object as the first argument
    const myIdealCup = Object.assign({}, smallCupWithEar, largeCup);

    You can mutate your new object however you want from this point. It doesn’t affect any of your previous objects.

    myIdealCup.picture = "Mickey Mouse";
    // {
    //   volume: 500,
    //   hasEar: true,
    //   picture: "Mickey Mouse"
    // }
    // smallCupWithEar doesn't get mutated
    console.log(smallCupWithEar); // { volume: 300, hasEar: true }
    // largeCup doesn't get mutated
    console.log(largeCup); // { volume: 500 }

    But Object.assign copies references to objects

    The problem with Object.assign is that it performs a shallow merge—it copies properties directly from one object to another. When it does so, it also copies references to any objects.

    Let’s explain this statement with an example.

    Suppose you buy a new sound system. The system allows you to declare whether the power is turned on. It also lets you set the volume, the amount of bass, and other options.

    const defaultSettings = {
      power: true,
      soundSettings: {
        volume: 50,
        bass: 20,
        // other options
      }
    };

    Some of your friends love loud music, so you decide to create a preset that’s guaranteed to wake your neighbors when they’re asleep.

    const loudPreset = {
      soundSettings: {
        volume: 100
      }
    };

    Then you invite your friends over for a party. To preserve your existing presets, you attempt to combine your loud preset with the default one.

    const partyPreset = Object.assign({}, defaultSettings, loudPreset);

    But partyPreset sounds weird. The volume is loud enough, but the bass is non-existent. When you inspect partyPreset, you’re surprised to find that there’s no bass in it!

    // {
    //   power: true,
    //   soundSettings: {
    //     volume: 100
    //   }
    // }

    This happens because JavaScript copies over the reference to the soundSettings object. Since both defaultSettings and loudPreset have a soundSettings object, the one that comes later gets copied into the new object.
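
    One way to see the shared reference for yourself is a quick identity check (a small addition for illustration, not part of the original example):

    console.log(partyPreset.soundSettings === loudPreset.soundSettings);
    // true: both variables point to the very same soundSettings object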

    If you change partyPreset, loudPreset will mutate accordingly—evidence that the reference to soundSettings gets copied over.

    partyPreset.soundSettings.bass = 50;
    // {
    //   soundSettings: {
    //     volume: 100,
    //     bass: 50
    //   }
    // }

    Since Object.assign performs a shallow merge, you need to use another method to merge objects that contain nested properties (that is, objects within objects).

    Enter assignment.

    assignment

    assignment is a small library made by Nicolás Bevacqua from Pony Foo, which is a great source for JavaScript knowledge. It helps you perform a deep merge without having to worry about mutation. Aside from the method name, the syntax is the same as Object.assign.

    // Perform a deep merge with assignment
    const partyPreset = assignment({}, defaultSettings, loudPreset);
    // {
    //   power: true,
    //   soundSettings: {
    //     volume: 100,
    //     bass: 20
    //   }
    // }

    assignment copies over values of all nested objects, which prevents your existing objects from getting mutated.

    If you try to change any property in partyPreset.soundSettings now, you’ll see that loudPreset remains as it was.

    partyPreset.soundSettings.bass = 50;
    // loudPreset doesn't get mutated
    // {
    //   soundSettings: {
    //     volume: 100
    //   }
    // }

    assignment is just one of many libraries that help you perform a deep merge. Other libraries, including lodash.merge and merge-options, can help you do it, too. Feel free to choose from any of these libraries.
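
    For instance, here is a rough sketch of the same party preset built with lodash.merge (this assumes the lodash.merge package from npm, and it’s an illustration rather than an example from the course):

    // Deep merge with lodash.merge (assumes `npm install lodash.merge`)
    const merge = require("lodash.merge");

    // Like assignment, merge copies nested values; only the empty first object gets mutated
    const partyPreset = merge({}, defaultSettings, loudPreset);
    // {
    //   power: true,
    //   soundSettings: {
    //     volume: 100,
    //     bass: 20
    //   }
    // }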

    Should you always use assignment over Object.assign?

    As long as you know how to prevent your objects from mutating, you can use Object.assign; there’s no harm in it when it’s used properly.

    However, if you need to assign objects with nested properties, always prefer a deep merge over Object.assign.

    Ensuring objects don’t mutate

    Although the methods I mentioned can help you prevent objects from mutating, they don’t guarantee that objects don’t mutate. If you made a mistake and used Object.assign for a nested object, you’ll be in for deep trouble later on.

    To safeguard yourself, you might want to guarantee that objects don’t mutate at all. To do so, you can use libraries like ImmutableJS. This library throws an error whenever you attempt to mutate an object.
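
    As a rough illustration (a minimal sketch assuming the immutable package from npm, not code from this article), Immutable.js collections return new copies from their update methods instead of changing the original:

    // A minimal sketch, assuming the `immutable` npm package
    const { Map } = require("immutable");

    const settings = Map({ power: true, volume: 50 });
    const louder = settings.set("volume", 100); // set() returns a new Map

    console.log(settings.get("volume")); // 50: the original collection is untouched
    console.log(louder.get("volume")); // 100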

    Alternatively, you can use Object.freeze and deep-freeze. With these, attempted mutations fail silently: no error is thrown, but the objects don’t get changed either.

    Object.freeze and deep-freeze

    Object.freeze prevents direct properties of an object from changing.

    const egg = {
      name: "Humpty Dumpty",
      isBroken: false
    };
    // Freezes the egg
    Object.freeze(egg);
    // Attempting to change properties will silently fail
    egg.isBroken = true;
    console.log(egg); // { name: "Humpty Dumpty", isBroken: false }

    But it doesn’t help when you mutate a deeper property like defaultSettings.soundSettings.bass.

    const defaultSettings = {
      power: true,
      soundSettings: {
        volume: 50,
        bass: 20
      }
    };
    // Freeze the top-level object, then mutate a nested property
    Object.freeze(defaultSettings);
    defaultSettings.soundSettings.bass = 100;
    // soundSettings gets mutated nevertheless
    // {
    //   power: true,
    //   soundSettings: {
    //     volume: 50,
    //     bass: 100
    //   }
    // }

    To prevent a deep mutation, you can use a library called deep-freeze, which recursively calls Object.freeze on all objects.

    const defaultSettings = {
      power: true,
      soundSettings: {
        volume: 50,
        bass: 20
      }
    };
    // Performing a deep freeze (after including deep-freeze in your code per instructions on npm)
    deepFreeze(defaultSettings);
    // Attempting to change deep properties will fail silently
    defaultSettings.soundSettings.bass = 100;
    // soundSettings doesn't get mutated anymore
    // {
    //   power: true,
    //   soundSettings: {
    //     volume: 50,
    //     bass: 20
    //   }
    // }

    Don’t confuse reassignment with mutation

    When you reassign a variable, you change what it points to. In the following example, a is changed from 11 to 100.

    let a = 11;
    a = 100;

    When you mutate an object, it gets changed. The reference to the object stays the same.

    const egg = { name: "Humpty Dumpty" };
    egg.isBroken = false;

    Wrapping up

    Mutation is scary because it can cause your code to break without your knowing about it. Even if you suspect the cause of breakage is a mutation, it can be hard for you to pinpoint the code that created the mutation. So the best way to prevent code from breaking unknowingly is to make sure your objects don’t mutate from the get-go.

    To prevent objects from mutating, you can use libraries like ImmutableJS and Mori.js, or use Object.assign and Object.freeze.

    Take note that Object.assign and Object.freeze can only prevent direct properties from mutating. If you need to prevent multiple layers of objects from mutating, you’ll need libraries like assignment and deep-freeze.

  15. Discovery on a Budget: Part I

    If you crack open any design textbook, you’ll see some depiction of the design cycle: discover, ideate, create, evaluate, and repeat. Whenever we bring on a new client or start working on a new feature, we start at the top of the wheel with discover (or discovery). It is the time in the project when we define what problem we are trying to solve and what our first approach at solving it should be.

    A flowchart showing Discover, leading to Ideate, leading to Create, leading to Evaluate, which leads back to Discover
    Ye olde design cycle

    We commonly talk about discovery at the start of a sprint cycle at an established business, where there are things like budgets, product teams, and existing customers. The discovery process may include interviewing stakeholders or poring over existing user data. And we always exit the discovery phase with some sort of idea to move forward with.

    However, discovery is inherently different when you work at a nonprofit, startup, or fledgling small business. It may be a design team of one (you), with zero dollars to spend, and only a handful of people aware the business even exists. There are no clients to interview and no existing data to examine. This may also be the case at large businesses when they want to test the waters on a new direction without overcommitting (or overspending). Whenever you are constrained on budget, data, and stakeholders, you need to be flexible and crafty in how you conduct discovery research. But you can’t skimp on rigor and thoroughness. If the idea you exit the discovery phase with isn’t any good, your big launch could turn out to be a business-ending flop.

    In this article I’ll take you through a discovery research cycle, but apply it towards a (fictitious) startup idea. I’ll introduce strategies for conducting discovery research with no budget, existing user data, or resources to speak of. And I’ll show how the research shapes the business going forward.

    Write up the problem hypothesis

    An awful lot of ink (virtual or otherwise) has been spent on proclaiming we should all, “fall in love with the problem, not the solution.” And it has been ink spent well. When it comes to product building, a problem-focused philosophy is the cornerstone of any user-centric business.

    But how, exactly, do you know when you have a problem worth solving? If you work at a large, established business you may have user feedback and data pointing you like flashing arrows on a well-marked road towards a problem worth solving. However, if you are launching a startup, or work at a larger business venturing into new territory, it can be more like hiking through the woods and searching for the next blaze mark on the trail. Your ideas are likely based on personal experiences and gut instincts.

    When your ideas are based on personal experiences, assumptions, and instincts, it’s important to realize they need a higher-than-average level of tire-kicking. You need to evaluate the question “Do I have a problem worth solving?” with a higher level of rigor than you would at a company with budget to spare and a wealth of existing data. You need to take all of your ideas and assumptions and examine them thoroughly. And the best way to examine your ideas and categorize your assumptions is with a hypothesis.

    As the dictionary describes, a hypothesis is “a supposition or proposed explanation made on the basis of limited evidence as a starting point for further investigation.” That also serves as a good description of why we do discovery research in the first place. We may have an idea that there is a problem worth solving, but we don’t yet know the scope or critical details. Articulating our instincts, ideas, and assumptions as a problem hypothesis lays a foundation for the research moving forward.

    Here is a general formula you can use to write a problem hypothesis:

    Because [assumptions and gut instincts about the problem], users are [in some undesirable state]. They need [solution idea].

    For this article, I decided to “launch” a fictitious (and overly ambitious) startup as an example. Here is the problem hypothesis I wrote for my startup:

    Because their business model relies on advertising, social media tools like Facebook are deliberately designed to “hook” users and make them addicted to the service. Users are unhappy with this and would rather have a healthier relationship with social media tools. They would be willing to pay for a social media service that was designed with mental health in mind.

    You can see in this example that my assumptions are:

    • Users feel that social media sites like Facebook are addictive.
    • Users don’t like to be addicted to social media.
    • Users would be willing to pay for a non-addictive Facebook replacement.

    These are the assumptions I’ll be researching and testing throughout the discovery process. If I find through my research that I cannot readily affirm these assumptions, it means I might not be ready to take on Mr. Zuckerberg just yet.

    The benefit of articulating our assumptions in the form of a hypothesis is that it provides something concrete to talk about, refer to, and test. The whole product team can be involved in forming the initial problem hypothesis, and you can refer back to it throughout the discovery process. Once we’ve completed the research and analyzed the results, we can edit the hypothesis to reflect our new understanding of our users and the problems we want to solve.

    Now that we’ve articulated a problem hypothesis, it is time to figure out our research plan. In the following two sections, I’ll cover the research method I recommend the most for new ventures, as well as strategies for recruiting participants on a budget.

    A method that is useful in all phases of design: interviews

    In my career as a user researcher, I have used all sorts of methods. I’ve done A/B testing, eye tracking, Wizard of Oz testing, think-alouds, contextual inquiries, and guerilla testing. But the one research method I utilize the most, and that I believe provides the most “bang for the buck,” is user interviews.

    User interviews are relatively inexpensive to conduct. You don’t need to travel to a client site and you don’t need a fortune’s worth of equipment. If you have access to a phone, you can conduct an interview with participants all around the world. Yet interviews provide a wealth of information and can be used in every phase of research and design. Interviews are especially useful in discovery, because it is a method that is adaptable. As you learn more about the problem you are trying to solve, you can adapt your interview protocol to match.

    To be clear, your interviewees will not tell you:

    • what to build;
    • or how to build it.

    But they absolutely can tell you:

    • what problem they have;
    • how they feel about it;
    • and what the value of a solution would mean to them.

    And if you know the problem, how users feel about it, and the value of a solution, you are well on your way to designing the right product.

    The challenge of conducting a good user interview is making sure you ask the questions that elicit that information. Here are a couple tips:

    Tip 1: always ask the following two questions:

    • “What do you like about [blank]?”
    • “What do you dislike about [blank]?”

    … where you fill “[blank]” with whatever domain your future product will improve.

    Your objective is to gain an understanding of all aspects of the problem your potential customers face—the bad and the good. One common mistake is to spend too much time investigating what’s wrong with the current state of affairs. Naturally, you want your product to fix all the problems your customers face. However, you also need to preserve what currently works well, what is satisfying, or what is otherwise good about how users accomplish their goals currently. So it is important to ask about both in user interviews.

    For example, in my interviews I always asked, “What do you like about using Facebook?” And it wasn’t until my interview participant told me everything they enjoyed about Facebook that I would ask, “What do you dislike about using Facebook?”

    Tip 2: after (nearly) every response, ask them to say more.

    The goal of conducting interviews is to gain an exhaustive set of data to review and consider moving forward. That means you don’t want your participants to discuss just one thing they like and dislike; you want them to tell you all the things they like and dislike.

    Here is an example of how this played out in one of the interviews I conducted:

    Interviewer (Me): What do you like about using Facebook?

    Interviewee: I like seeing people on there that I wouldn’t otherwise get a chance to see and catch up with in real life. I have moved a couple times so I have a lot of friends that I don’t see regularly. I also like seeing the people I know do well, even though I haven’t seen them since, maybe, high school. But I like seeing how their life has gone. I like seeing their kids. I like seeing their accomplishments. It’s also a little creepy because it’s a window into their life and we haven’t actually talked in forever. But I like staying connected.

    Interviewer (Me): What else do you like about it?

    Interviewee: Um, well it’s also sort of a convenient way of keeping contacts. There have been a few times when I was able to message people and get in touch with people even when I don’t have their address or email in my phone. I could message them through Facebook.

    Interviewer (Me): Great. Is there anything else you like about it?

    Interviewee: Let me think … well I also find cool stuff to do on the weekends there sometimes. They have an events feature. And businesses, or local places, will post events and there have been a couple times where I’ve gone to something cool. Like I found a cool movie festival once that way.

    Interviewer (Me): That seems cool. What else do you like about using Facebook?

    Interviewee: Uh … that’s all I think I really use it for. I can’t really think of anything else. Mainly I use it just to keep in touch with people that I’ve met over the years.

    From this example you can see the first feature that popped into the interviewee’s mind was their ability to keep up with friends that they otherwise wouldn’t have much opportunity to connect with anymore. That is a feature that any Facebook replacement would have to replicate. However, if I hadn’t pushed the interviewee to think of even more features they like, I might have never uncovered an important secondary feature: convenient in-app messaging. In fact, six out of the eleven people I interviewed for this project said they liked Facebook Messenger. But not a single one of them mentioned that feature first. It only came up in conversation after I probed for more.

    As I continued to repeat my question, the interviewee thought of one more feature they liked: local event listings. (Five out of the eleven people I interviewed mentioned this feature.) But after that, the interviewee couldn’t think of any more features to discuss. You know you can move on to the next question in the interview when your participant starts to repeat themselves or bluntly tells you they have nothing else to say.

    Recruit all around you, then document the bias

    There are all sorts of ways to recruit participants for research. You can hire an agency or use a tool like UserTesting.com. But many of those paid-for options can be quite costly, and since we are working with a shoestring budget we have roughly zero dollars to spend on recruitment. We will have to be creative.

    Hey Friends. I have a little project I am working on that may turn into a startup pitch one day. To help me figure out whether I have an idea worth working on, I’d like to interview some folks about their use of Facebook and other social media. Would any of you be willing to do an interview with me? It would take ~15min and we would talk via Skype or Google Hangouts – your choice.
    My post on Facebook to recruit volunteers. One volunteer decided to respond with a Hunger Games “I volunteer as tribute!” gif.

    For my project, I decided to rely on the kindness of friends and strangers I could reach through Facebook. I posted one request for participants on my personal Facebook page, and another on the local FreeCodeCamp page. A day after I posted my request, twenty-five friends and five strangers volunteered. This type of participant recruitment method is called convenience sampling, because I was recruiting participants that were conveniently accessible to me.

    Since my project involved talking to people about social media sites like Facebook, it was appropriate for my first attempt at recruiting to start on Facebook. I could be sure that everyone who saw my request uses Facebook in some form or fashion. However, like all convenience sampling, my recruitment method was biased. (I’ll explain how in just a bit.)

    Bias is something that we should try—whenever possible—to avoid. If we have access to more sophisticated recruitment methods, we should use them. However, when you have a tight budget, avoiding recruitment bias is virtually impossible. In this scenario, our goals should be to:

    • mitigate bias as best we can;
    • and document all the biases we see.

    For my project, I could mitigate some of the biases by using a few more recruitment methods. I could go to various neighborhoods and try to recruit participants off the street (i.e., guerilla testing). If I had a little bit of money to spend, I could hang out in various coffee shops and offer folks free coffee in exchange for ten-minute interviews. These recruitment methods also fall under the umbrella of convenience sampling, but by using a variety of methods I can mitigate some of the bias I would have from using just one of them.

    Also, it is always important to reflect on and document how your sampling method is biased. For my project, I wrote the following in my notes:

    All of the people I interviewed were connected to me in some way on Facebook. Many of them I know well enough to be “friends” with. All of them were around my age, many (but not all) worked in tech in some form or fashion, and all of them but one lived in the US.

    Documenting bias ensures that we won’t forget about the bias when it comes time to analyze and discuss the results.

    Let’s keep this going

    As the title suggests, this is just the first installment of a series of articles on the discovery process. In part two, I will analyze the results of my interviews, revise my problem hypothesis, and continue to work on my experimental startup. I will launch into another round of discovery research, but this time utilizing some different research methods, like A/B testing and fake-door testing. You can help me out by checking out this mock landing page for Candor Network (what I’ve named my fictitious startup) and taking the survey you see there.

  16. My Grandfather’s Travel Logs and Other Repetitive Tasks

    My grandfather, James, was a meticulous recordkeeper. He kept handwritten journals detailing everything from his doctor visits to the daily fluctuations of stocks he owned. I only discovered this part of his life seven years after his death, when my family’s basement flooded on Christmas Eve in 2011 and we found his journals while cleaning up the damage. His travel records impressed me the most. He documented every trip he ever took, including dates, countries and cities visited, methods of travel, and people he traveled with. In total, he left the United States 99 times, visited 80 countries, and spent 1,223 days at sea on 48 ships.

    A section of the handwritten travel log kept by the author’s grandfather
    A section of the travel log.

    I was only twenty-four when he died, so I hadn’t yet realized that I’d inherited many of his record-keeping, journaling, and collecting habits. And I had never had the chance to ask him many questions about his travels (like why he went to Venezuela twelve times or what he was doing in Syria and Beirut in the 1950s). So, in an effort to discover more about him, I decided to make an infographic of his travel logs.

    Today, we take for granted that we can check stocks on our phones or go online and view records from doctor visits. The kinds of repetitive tasks my grandfather did might seem excessive, especially to young web developers and designers who’ve never had to do them. But my grandfather had no recording method besides pencil and paper for most of his life, so this was a normal and especially vital part of his daily routine.

    A photograph of a ship called SS Amor, taken by the author’s grandfather in the West Indies in 1939.
    SS Amor in the West Indies. Taken by the author’s grandfather in 1939.
    A photograph of the New York City skyline, taken by the author’s grandfather, probably in the 1930s.
    New York City. Taken by the author’s grandfather, probably in the 1930s.

    Whether you’re processing Sass, minifying, or using Autoprefixer, you’re using tools to perform mundane and repetitive tasks that people previously had to do by hand, albeit in a different medium.

    But what do you do when you’re faced with a problem that can’t be solved with a plugin, like my grandfather’s travel data? If you’re a designer, what’s the best way to structure unconventional data so you can just focus on designing?

    My idea for the travel web app was to graph each country based on the number of my grandfather’s visits. As the country he visited the most (twenty-two times), Bermuda would have a graph bar stretching 100 percent across the screen, while a country he visited eleven times (St. Thomas, for example) would stretch roughly 50 percent across, the proportions adjusted slightly to fit the name and visits. I also wanted each graph bar to be the country’s main flag color.

    The big issue at the start was that some of the data was on paper and some was already transcribed into a text file. I could have written the HTML and CSS by hand, but I wanted to have the option to display the data in different ways. I needed a JSON file.

    I tediously transcribed the remaining travel data into a tab-separated text file for the countries. I added the name, number of visits, and flag color:

    honduras	1	#0051ba
    syria	1	#E20000
    venezuela	16	#fcd116
    enewetak	2	rgb(0,56,147)

    For the ships, I added the date and name:

    1941    SS Granada
    1944    USS Alimosa
    1945    USS Alcoa Patriot

    Manually creating a JSON file would have taken forever, so I used JavaScript to iterate through the text files and create two separate JSON files—one for countries and one for ships—which I would later merge.

    First, I used Node readFileSync() and trim() to remove any quotation marks at the end of the file so as to avoid an empty object in the results:

    const fs = require('fs');
    let countriesData = fs.readFileSync('countries.txt', 'utf8')
    	.trim();

    This returned the contents of the countries.txt file and stored it in a variable called countriesData. At that point, I outputted the variable to the console, which showed that the data was lumped together into one giant string with a bunch of tabs (\t) and newlines (\n):

    "angaur\t2\t#56a83c\nantigua\t5\t#ce1126\nargentina\t2\trgb(117,170,219)\naruba\t10\trgb(0,114,198)\nbahamas\t3\trgb(0,173,198)\nbarbados\t6\trgb(255,198,30)\nbermuda\t22\trgb(0,40,104)\nbonaire\t1\trgb(37,40,135)\nguyana\t2\trgb(0,158,73)\nhonduras\t1\trgb(0,81,186)\nvirgin Islands\t2\trgb(0,40,104)\nbrazil\t3\trgb(30,181,58)\nburma\t1\trgb(254,203,0)\ncanary Islands\t1\trgb(7,104,169)\ncanal Zone\t7\trgb(11,14,98)\ncarriacou\t1\trgb(239,42,12)\n ..."

    Next, I split the string at the line breaks (\n):

    const fs = require('fs');
    let countriesData = fs.readFileSync('countries.txt', 'utf8')
    	.trim()
    	.split('\n');

    After split(), in the console, the countries’ data lived in an array of strings, one per line of the file:

    [ "angaur\t2\t#56a83c",
      "antigua\t5\t#ce1126",
      "argentina\t2\trgb(117,170,219)",
      ... ]

    I wanted to split each item of country data at the tabs, separating the name, number of visits, and color. To do this, I used map(), which iterates and runs a function on each item, returning something new. In this case, it split the string at each tab it found and returned a new array:

    const fs = require('fs');
    let countriesData = fs.readFileSync('countries.txt', 'utf8')
    	.trim()
    	.split('\n')
    	.map(item => item.split('\t'));

    After I used map(), countriesData was an array of arrays with each country and its data split into separate items:

    [ [ "angaur", "2", "#56a83c" ],
      [ "antigua", "5", "#ce1126" ],
      [ "argentina", "2", "rgb(117,170,219)" ],
      ... ]

    To create the final output for each country, I used reduce(), which uses an accumulator and a function to create something new, whether that’s an object, a value, or an array. Accumulator is a fancy way of referring to the end product, which in our case is an object ({}).

    const fs = require('fs');
    let countriesData = fs.readFileSync('countries.txt', 'utf8')
    	.trim()
    	.split('\n')
    	.map(item => item.split('\t'))
    	.reduce((countries, item) => {
    		return countries;
    	}, {countries: []});

    I knew I wanted {countries: []} to contain the data. So instead of creating it on the first pass and testing whether it existed on each iteration, I passed {countries: []} to reduce() as the initial value of the accumulator. That way, it existed before I started iterating.

    This process returned an empty object because I hadn’t told reduce() what to do with each array of data.

    To fix this, I used reduce() to push a new object for each country, containing the name (item[0]), visits (item[1]), and color (item[2]), into the countries array on the end result object. Finally, I used a capitalization function on each name value to ensure formatting would be consistent.

    const fs = require('fs');

    const cap = (s) => {
      return s.charAt(0).toUpperCase() + s.slice(1);
    };

    let countriesData = fs.readFileSync('countries.txt', 'utf8')
    	.trim()
    	.split('\n')
    	.map(item => item.split('\t'))
    	.reduce((countries, item) => {
    		countries.countries.push({
    			name: cap(item[0]),
    			visits: item[1],
    			color: item[2]
    		});
    		return countries;
    	}, {countries: []});

    I used the same method for the ships.txt file and merged the two using Object.assign, which copies the properties of its source objects onto a target object and returns the result.

    let result = Object.assign({}, countriesData, shipsData);
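
    If you want that merged object saved as an actual .json file rather than just held in memory, one extra line does it. A minimal sketch, assuming the result variable from above and a file name of my own choosing:

    // fs was already required earlier; serialize with two-space
    // indentation and write the final JSON file to disk
    fs.writeFileSync('travel.json', JSON.stringify(result, null, 2));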

    I could have created a function that took a text file and an object, or created a form-to-JSON tool, but these seemed like overkill for this project, and I had already transcribed some of the data into separate files before even conceiving of the infographic idea. The final JSON result can be found on CodePen.

    I used the JSON data to create the infographic bars, defining the layout for each one with CSS Grid and dynamic styles for width and color. Check out the final product at ninetyninetimes.com. I think my grandfather would have enjoyed seeing his handwritten logs transformed into a visual format that showcases the breadth of his travels.

    He passed away in 2005, but I remember showing him my Blackberry and explaining the internet to him, showing him how he could look at pictures from around the world and read articles. He took a sip of his martini and sort of waved his hand at the screen. I think he preferred handwritten notes and life outside of the internet, something many of us can appreciate. After sifting through all his travel logs, I more clearly understood the importance he placed on having different experiences, meeting new people, and fearlessly exploring the world. To him, his travels were more than just dates on a page. Now they’re more than that for me, too.

    The author wishes to thank Mattias Petter Johansson, whose video series, “Fun Fun Function,” inspired some of the thinking in this article.

  17. How the Sausage Gets Made: The Hidden Work of Content

    I won an Emmy for keeping a website free of dick pics.

    Officially, my award certificate says I was on a team that won a 2014 Emmy for Interactive Media, Social TV Experience. The category “Social TV Experience” sounds far classier than my true contribution to the project.

    The award-winning Live From Space site served as a second-screen experience for a National Geographic Channel show of the same name. The show Live From Space covered the wonders of the International Space Station. The website displayed the globe as seen by astronauts, along with entertaining social data about each country crossed by the Space Station’s trajectory. One of those data points was an Instagram feed showcasing images of local cuisine.

    Image of the National Geographic Channel’s Live From Space second-screen experience, including an Instagram photo of an Australian repast.
    The second-screen experience for National Geographic Channel’s Live From Space event, featuring an Instagram photo of local food.

    You might think that adding this feed was a relatively simple task. Include a specific channel, or feed in images tagged with the food and the country in which the images were taken, connect to an API, and boom: a stream of images from food bloggers in South Africa, Taiwan, Mexico, what have you. One exec was so impressed that he called this feature “automagical.”

    What he described as “automagical” was actually me sitting in front of a computer screen, scanning Instagram, hunting for the most appetizing images, avoiding the unappetizing ones, and pasting my choices into a spreadsheet for import by a developer. I wouldn’t call it automated, and I wouldn’t call it magical. As the team’s content manager, I performed this task because the Instagram API wasn’t playing nice with the developers, but we had to get that information into the site by the deadline somehow.

    An additional, and perhaps worse, problem was that if you found a feed of images taken in certain countries and tagged #food, you might get pictures of sausage. But we’re talking about the kinds of sausages usually discussed in locker rooms or on school buses full of junior high boys. As you can imagine, you cannot add Instagram photos tagged #food to a family-friendly site without a little effort, either in terms of getting around an API or filtering out the naughty bits.

    The mythical “automagical” tool

    You might think I’m knocking the website, but I’m not. Many creative, brilliant people worked ridiculous hours to create a gorgeous experience for which they rightly earned an award, and the images of local cuisine made up only a small slice of the site’s data.

    Yet I feel conflicted about my own involvement with Live From Space because most of the site’s users still have no idea how the sausage of apps and websites gets made. In fact, these people may never know because the site is no longer live.

    Or they may not care. Few people are aware of the rote work that goes into moving or importing data from one website to another, and that lack of awareness causes problems when teams underestimate how long it takes to make content happen. Unless you’re working with a pristine data source, there often is no “content hose” or “automagical” tool that cleans up data and moves it from one app or content management system to another. Unfortunately, the assumption that a “content hose” exists can lead to miscommunication, frustration, and delays when it is time to produce the work.

    Oftentimes, a person will need to go in, copy content, and paste that content into the new app or CMS. They must repeat this task until the app or site is ready for launch. This type of work usually spurs revolt within the workplace, and I can’t say I blame people for being upset. Unless you know some tips, tricks, and shortcuts, as I do, you have a long stretch of tedious, mind-numbing work ahead of you.

    Did someone say shortcuts?

    Yes, you do have shortcuts when it comes to pulling content into a website. Those shortcuts happen earlier in the site-building process than you may think, and they rely on making sure your entire team is involved in the content process.

    The most important thing when you are creating a new site or migrating an existing one is to lock down the content you want to bring in, as early as possible.

    In the case of the National Geographic Channel website, the team knew it needed the map data and the coordinates, but did it really need the Instagram feed with the food data? And, when the creative team decided it needed the food data, did anyone ask questions about how the food data would be drawn into the site?

    This involves building tactical questions into the creative workflow. When someone is on a creative roll, the last thing I want to do is slow them down by asking overly tactical questions. But all brainstorming sessions should include a team member who is taking notes as the ideas fly so they can ask the crucial questions later:

    • Where will this content come from?
    • Do we have a team member who can generate this content from a data feed or from scratch?
    • If not, do we need to hire someone?

    These questions are nothing new to a content strategist, but the questions must be asked in the earliest stages of the project. Think about it: if your team is in love with an idea, and the client falls in love with it, too, then you will have a harder time changing course if you can’t create the content that makes the site run.

    Site updates and migrations are a little bit different in that most of the content exists, but you’d be surprised by how few team members know their content. Right now, I am working for a company that helps universities revamp their considerably large websites, and the first thing we do when making the sausage is halve the recipe.

    First, we use Screaming Frog to generate a content inventory, which we spot-check for any unusual features that might need to be incorporated into the new site. Then we pass the inventory to the client, asking them to go through the inventory and archive duplicate or old content. Once they archive the old content, they can focus on what they intend to revise or keep as is.

    Image of an in-progress content inventory for one of iFactory’s current clients, a large community college.
    A work-in-progress content inventory for a large community college.

    During the first few weeks of any project, I check in with the client about how they are doing with their content archive. If they aren’t touching the content early, we schedule a follow-up meeting and essentially haunt them until they make tough decisions.

    Perfecting the process

    How do we improve the way our teams relate to content? How do we show them how the content sausage gets made without grossing anyone out? Here are a few tips:

    Your content strategist and your developer need to be on speaking terms. “Content strategist” isn’t a fancy name for a writer or an editor. A good content strategist knows how to work with developers. For one site migration involving a community college, I used Screaming Frog to scrape the content from the original site. Then I passed the resulting .csv document back and forth to the developer, fine-tuning the alignment of fields each time so it would be easier for us to import the material into GatherContent, an editorial tool for digital projects.

    Speaking of GatherContent ... set up a proper content workflow. GatherContent allows you to assign specific tasks to team members so you can divide work. Even better, GatherContent’s editorial tool allows each page to pass through specific points in the editorial process, including drafting, choosing pictures, adding tags, and uploading to the CMS.

    Train the team on how to transform the current content. In my current workplace, not only do we train the client on how to use the CMS, but we also provide Content Guidelines, an overview of the basic building blocks that make up a web page. I’ve shown clients how to create fields for page metadata, images, image alt text, and downloads—and we do this early so the client doesn’t wait until the last minute to dive into details.

    Sample slides from an iFactory Content Guidelines presentation.
    Sample slides from a Content Guidelines presentation for one of iFactory’s current clients.

    Actually make the sausage. Clever uses of tools and advance training can only go so far. At some point you will need to make sure that what is in the CMS lines up with what you intended. You may need to take your content source, remove any odd characters, shift content from one field to another, and make the content safe for work—just like removing dick pics.

    Make sure everyone on your team scrapes, scrubs, and uploads content at least once. Distributing the work ensures that your team members think twice before recommending content that doesn’t exist or content that needs a serious cleanup. That means each team member should sit down and copy content directly into the CMS or scrub the content that is there. An hour or two is enough to transform perspectives.

    Push back if a team member shirks his or her content duty. Occasionally, you will encounter people who believe their roles protect them from content. I’ve heard people ask, “Can’t we get an intern to do that?” or “Can’t we do that through Mechanical Turk?” Sometimes, these people mean well and are thinking of efficiency, but other times, their willingness to brush content off as an intern task or as a task worth a nickel or two should be alarming. It’s demeaning to those who do the work for starters, but it also shows that they are cavalier about content. Asking someone to pitch in for content creation or migration is a litmus test. If they don’t seem to take content seriously, you have to ask: just how committed are these people to serving up a quality digital experience? Do you even want them on your team in the future? By the way, I’ve seen VPs and sales team members entering content in a website, and every last one of them told me that the experience was eye-opening.

    People are the “automagical” ingredient

    None of these shortcuts and process tips are possible without some kind of hidden content work. Content is often discussed in terms of which gender does what kind of work and how they are recognized for it. This worthwhile subject is covered in depth by many authors, especially in the context of social media, but I’d like to step back and think about why this work is hidden and how we can avoid delays, employee revolts, and overall tedium in the future.

    Whether you’re scraping, scrubbing, copying, or pasting, the connecting thread for all hidden content work is that nearly no one thinks of it until the last minute. In general, project team members can do a better job of thinking about how content needs to be manipulated to fit a design or a data model. Then they should prepare their team and the client for the amount of work it will take to get content ready and entered into a site. By taking the initiative, you can save time, money, and sanity. If you’re really doing it right, you can make a site that’s the equivalent of a sausage … without dubious ingredients.


  18. The Best Request Is No Request, Revisited

    Over the last decade, web performance optimization has been controlled by one indisputable guideline: the best request is no request. A very humble rule, easy to interpret. Every network call for a resource eliminated improves performance. Every src attribute spared, every link element dropped. But everything has changed now that HTTP/2 is available, hasn’t it? Designed for the modern web, HTTP/2 is more efficient in responding to a larger number of requests than its predecessor. So the question is: does the old rule of reducing requests still hold up?

    What has changed with HTTP/2?

    To understand how HTTP/2 is different, it helps to know about its predecessors. A brief history follows. HTTP builds on TCP. While TCP is powerful and is capable of transferring lots of data reliably, the way HTTP/1 utilized TCP was inefficient. Every resource requested required a new TCP connection. And every TCP connection required synchronization between the client and server, resulting in an initial delay as the browser established a connection. This was OK in times when the majority of web content consisted of unstyled documents that didn’t load additional resources, such as images or JavaScript files.

    Updates in HTTP/1.1 try to overcome this limitation. Clients are able to use one TCP connection for multiple resources, but still have to download them in sequence. This so-called “head of line blocking” makes waterfall charts actually look like waterfalls:

    Figure 1. Schematic waterfall of assets loading over one pipelined TCP connection
    Figure 1. Schematic waterfall of assets loading over one pipelined TCP connection

    Also, most browsers started to open multiple TCP connections in parallel, limited to a rather low number per domain. Even with such optimizations, HTTP/1.1 is not well-suited to the considerable number of resources of today’s websites. Hence the saying “The best request is no request.” TCP connections are costly and take time. This is why we use things like concatenation, image sprites, and inlining of resources: avoid new connections, and reuse existing ones.

    HTTP/2 is fundamentally different from HTTP/1.1. It uses a single TCP connection and allows more resources to be downloaded in parallel than its predecessor. Think of this single TCP connection as one broad tunnel where data is sent through in frames. On the client, the frames get reassembled into their original sources. Using a couple of link elements to transfer style sheets is now practically as efficient as bundling all of your style sheets into one file.

    Figure 2. Schematic waterfall of assets loading over one shared TCP connection
    Figure 2. Schematic waterfall of assets loading over one shared TCP connection

    All connections use the same stream, so they also share bandwidth. Depending on the number of resources, this might mean that individual resources could take longer to be transmitted to the client side on low-bandwidth connections.

    This also means that resource prioritization is not handled as easily as it was with HTTP/1.1, where the order of resources in the document had an impact on when they began to download. With HTTP/2, everything happens at the same time! The HTTP/2 spec contains information on stream prioritization, but at the time of this writing, placing control over prioritization in developers’ hands is still in the distant future.

    The best request is no request: cherry-picking

    So what can we do to overcome the lack of waterfall resource prioritization? What about not wasting bandwidth? Think back to the first rule of performance optimization: the best request is no request. Let’s reinterpret the rule.

    For example, consider a typical webpage (in this case, from Dynatrace). The screenshot below shows a piece of online documentation consisting of different components: main navigation, a footer, breadcrumbs, a sidebar, and the main article.

    Figure 3. A typical website split into a few components
    Figure 3. A typical website split into a few components

    On other pages of the same site, we have things like a masthead, social media outlets, galleries, or other components. Each component is defined by its own markup and style sheet.

    In HTTP/1.1 environments, we would typically combine all component style sheets into one CSS file. The best request is no request: one TCP connection to transfer all the CSS necessary, even for pages the user hasn’t seen yet. This can result in a huge CSS file.

    The problem is compounded when a site uses a library like Bootstrap, which reached the 300 kB mark, with site-specific CSS added on top of it. In some cases, the actual amount of CSS required by any given page is less than 10% of the amount loaded:

    Figure 4. Code coverage of a random cinema webpage that uses 10% of the bundled 300 kB CSS. This page is built upon Bootstrap.
    Figure 4. Code coverage of a random cinema webpage that uses 10% of the bundled 300 kB CSS. This page is built upon Bootstrap.

    There are even tools like UnCSS that aim to get rid of unused styles.

    The Dynatrace documentation example shown in figure 3 is built with the company’s own style library, which is tailored to the site’s specific needs as opposed to Bootstrap, which is offered as a general purpose solution. All components in the company style library combined add up to 80 kB of CSS. The CSS actually used on the page is divided among eight of those components, totaling 8.1 kB. So even though the library is tailored to the specific needs of the website, the page still uses only around 10% of the CSS it downloads.

    HTTP/2 allows us to be much more picky when it comes to the files we want to transmit. The request itself is not as costly as it is in HTTP/1.1, so we can safely use more link elements, pointing directly to the elements used on that particular page:

    <link rel="stylesheet" href="/css/base.css">
    <link rel="stylesheet" href="/css/typography.css">
    <link rel="stylesheet" href="/css/layout.css">
    <link rel="stylesheet" href="/css/navbar.css">
    <link rel="stylesheet" href="/css/article.css">
    <link rel="stylesheet" href="/css/footer.css">
    <link rel="stylesheet" href="/css/sidebar.css">
    <link rel="stylesheet" href="/css/breadcrumbs.css">

    This, of course, is true for every sprite map or JavaScript bundle as well. By transferring only what you actually need, the amount of data transferred for your site can be reduced greatly! Compare the download times for the bundle and the single files shown in the Chrome timings below:

    Figure 5. Download of the bundle. After the initial connection is established, the bundle takes 583 ms to download on regular 3G.
    Figure 5. Download of the bundle. After the initial connection is established, the bundle takes 583 ms to download on regular 3G.
    Figure 6. Split only the files needed, and download them in parallel. The initial connection takes about as long, but the content (one style sheet, in this case) downloads much faster because it is smaller.
    Figure 6. Split only the files needed, and download them in parallel. The initial connection takes about as long, but the content (one style sheet, in this case) downloads much faster because it is smaller.

    The first image shows that including the time required for the browser to establish the initial connection, the bundle needs about 700 ms to download on regular 3G connections. The second image shows timing values for one CSS file out of the eight that make up the page. The beginning of the response (TTFB) takes as long, but since the file is a lot smaller (less than 1 kB), the content is downloaded almost immediately.

    This might not seem impressive when looking at only one resource. But as shown below, since all eight style sheets are downloaded in parallel, we still can save a great deal of transfer time when compared to the bundle approach.

    Figure 7. All style sheets on the split variant load in parallel.
    Figure 7. All style sheets on the split variant load in parallel.

    When running the same page through webpagetest.org on regular 3G, we can see a similar pattern. The full bundle (main.css) starts to download just after 1.5 s (yellow line) and takes 1.3 s to download; the time to first meaningful paint is around 3.5 seconds (green line):

    Figure 8. Full page download of the bundle, regular 3G.
    Figure 8. Full page download of the bundle, regular 3G.

    When we split up the CSS bundle, each style sheet starts to download at 1.5 s (yellow line) and takes 315–375 ms to finish. As a result, we can reduce the time to first meaningful paint by more than one second (green line):

    Figure 9. Downloading single files instead, regular 3G.
    Figure 9. Downloading single files instead, regular 3G.

    Per our measurements, the difference between bundled and split files has even more impact on slow 3G than on regular 3G. On slow 3G, the bundle needs a total of 4.5 s to download, resulting in a time to first meaningful paint at around 7 s:

    Figure 10. Bundle, slow 3G.
    Figure 10. Bundle, slow 3G.

    The same page with split files on slow 3G connections via webpagetest.org results in meaningful paint (green line) occurring 4 s earlier:

    Figure 11. Split files, slow 3G.
    Figure 11. Split files, slow 3G.

    The interesting thing is that what was considered a performance anti-pattern in HTTP/1.1—using lots of references to resources—becomes a best practice in the HTTP/2 era. Plus, the rule stays the same! The meaning changes slightly.

    The best request is no request: drop files and code your users don’t need!

    It has to be noted that the success of this approach is strongly connected to the number of resources transferred. The example above used 10% of the original style sheet library, which is an enormous reduction in file size. Downloading the whole UI library in split-up files might give different results. For example, Khan Academy found that by splitting up their JavaScript bundles, the overall application size—and thus the transfer time—became drastically worse. This was mainly for two reasons: a huge number of JavaScript files (close to 100), and the often underestimated powers of Gzip.

    Gzip (and Brotli) yields higher compression ratios when there is repetition in the data it is compressing. This means that a Gzipped bundle typically has a much smaller footprint than Gzipped single files. So if you are going to download a whole set of files anyway, the compression ratio of bundled assets might outperform that of single files downloaded in parallel. Test accordingly.
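
    If you want to check how this plays out for your own assets before committing to a strategy, the comparison is easy to script. Here’s a rough sketch using Node’s built-in zlib module; the file names are placeholders for whatever components you actually ship:

    const fs = require('fs');
    const zlib = require('zlib');

    // Placeholder component style sheets; swap in your own paths
    const files = ['css/navbar.css', 'css/article.css', 'css/footer.css'];

    // Total size when each file is gzipped on its own
    const separate = files
      .map((file) => zlib.gzipSync(fs.readFileSync(file)).length)
      .reduce((sum, size) => sum + size, 0);

    // Size when the files are concatenated first and gzipped once
    const bundled = zlib.gzipSync(
      Buffer.concat(files.map((file) => fs.readFileSync(file)))
    ).length;

    console.log({ separate, bundled }); // the bundle is often smaller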

    Also, be aware of your user base. While HTTP/2 has been widely adopted, some of your users might be limited to HTTP/1.1 connections. They will suffer from split resources.

    The best request is no request: caching and versioning

    To this point with our example, we’ve seen how to optimize the first visit to a page. The bundle is split up into separate files and the client receives only what it needs to display on a page. This gives us the chance to look into something people tend to neglect when optimizing for performance: subsequent visits.

    On subsequent visits we want to avoid re-transferring assets unnecessarily. HTTP headers like Cache-Control (and their implementation in servers like Apache and NGINX) allow us to store files on the user’s disk for a specified amount of time. Some CDN servers default that to a few minutes, others to a few hours or even days. The idea is that during a session, users shouldn’t have to re-download assets they’ve already downloaded in the past (unless they’ve cleared their cache in the interim). For example, the following Cache-Control header directive makes sure the file is stored in any cache available, for 600 seconds.

    Cache-Control: public, max-age=600

    We can leverage Cache-Control to be much more strict. In our first optimization we decided to cherry-pick resources and be choosy about what we transfer to the client, so let’s store these resources on the machine for a long period of time:

    Cache-Control: public, max-age=31536000
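
    In practice you’d usually set this in your web server or CDN configuration, but as a rough illustration, here’s what sending that header might look like from a bare-bones Node server (the file path and port are made up for this sketch):

    const http = require('http');
    const fs = require('fs');

    http.createServer((req, res) => {
      // Serve one versioned style sheet with a one-year cache lifetime
      res.writeHead(200, {
        'Content-Type': 'text/css',
        'Cache-Control': 'public, max-age=31536000'
      });
      res.end(fs.readFileSync('css/article.v1.css'));
    }).listen(8080);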

    That max-age value is one year in seconds. The benefit of setting a high Cache-Control max-age is that the asset will be stored by the client for a long period of time. The screenshot below shows a waterfall chart of the first visit. Every asset referenced by the HTML file is requested:

    Figure 12. First visit: every asset is requested.
    Figure 12. First visit: every asset is requested.

    With properly set Cache-Control headers, a subsequent visit will result in fewer requests. The screenshot below shows that the assets hosted on our test domain don’t trigger any requests at all. Assets from another domain with improperly set Cache-Control headers, as well as resources that haven’t been found, still trigger requests:

    Figure 13. Second visit: only some poorly cached SVGs from a different server are requested again.
    Figure 13. Second visit: only some poorly cached SVGs from a different server are requested again.

    When it comes to invalidating the cached asset (which, incidentally, is one of the two hardest things in computer science), we simply use a new asset instead. Let’s see how that would work with our example. Caching works based on file names. A new file name triggers a new download. Previously, we split up our code base into reasonable chunks. A version indicator makes sure that each file name stays unique:

    <link rel="stylesheet" href="/css/header.v1.css">
    <link rel="stylesheet" href="/css/article.v1.css">

    After a change to our article styles, we would modify the version number:

    <link rel="stylesheet" href="/css/header.v1.css">
    <link rel="stylesheet" href="/css/article.v2.css">

    An alternative to keeping track of the file’s version is to set a revision hash based on the file’s content with automation tools.
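
    To see the idea without a full build pipeline, here’s a small Node sketch that derives a revision from the file’s contents; the file name and the eight-character hash length are arbitrary choices for the example:

    const crypto = require('crypto');
    const fs = require('fs');

    const source = 'css/article.css';
    const css = fs.readFileSync(source);

    // Fingerprint the contents; any change to the file changes the hash
    const hash = crypto.createHash('md5').update(css).digest('hex').slice(0, 8);

    // Write a copy whose name encodes the revision,
    // e.g., css/article.3f2a9c1d.css
    fs.copyFileSync(source, `css/article.${hash}.css`);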

    It’s OK to store your assets on the client for a long period of time. However, your HTML should be more transient in most cases. Typically, the HTML file contains the information about which resources to download. Should you want your resources to change (such as loading article.v2.css instead of article.v1.css, as we just saw), you’ll need to update references to them in your HTML. Popular CDN servers cache HTML for no longer than six minutes, but you can decide what’s better suited for your application.

    And again, the best request is no request: store files on the client as long as possible, and don’t request them over the wire ever again. Recent Firefox and Edge editions even sport an immutable directive for Cache-Control, targeting this pattern specifically.

    Bottom line

    HTTP/2 has been designed from the ground up to address the inefficiencies of HTTP/1. Triggering a large number of requests in an HTTP/2 environment is no longer inherently bad for performance; transferring unnecessary data is.

    To reach the full potential of HTTP/2, we have to look at each case individually. An optimization that might be good for one website can have a negative effect on another. With all the benefits that come with HTTP/2, the golden rule of performance optimization still applies: the best request is no request. Only this time we take a look at the actual amount of data transferred.

    Only transfer what your users actually need. Nothing more, nothing less.

  19. Faux Grid Tracks

    A little while back, there was a question posted to css-discuss:

    Is it possible to style the rows and columns of a [CSS] grid—the grid itself? I have an upcoming layout that uses what looks like a tic-tac-toe board—complete with the vertical and horizontal lines of said tic-tac-toe board—with text/icon in each grid cell.

    This is a question I expect to come up repeatedly, as more and more people start to explore Grid layout. The short answer is: no, it isn’t possible to do that. But it is possible to fake the effect, which is what I’d like to explore.

    Defining the grid

    Since we’re talking about tic-tac-toe layouts, we’ll need a containing element around nine elements. We could use an ordered list, or a paragraph with a bunch of <span>s, or a <section> with some <div>s. Let’s go with that last one.

    <section id="ttt">
    	<!-- nine <div>s go here, numbered 1 through 9 -->
    </section>

    We’ll take those nine <div>s and put them into a three-by-three grid, with each row five ems high and each column five ems wide. Setting up the grid structure is straightforward enough:

    #ttt {
    	display: grid;
    	grid-template-columns: repeat(3,5em);
    	grid-template-rows: repeat(3,5em);
    }

    That’s it! Thanks to the auto-flow algorithm inherent in Grid layout, that’s enough to put the nine <div> elements into the nine grid cells. From there, creating the appearance of a grid is a matter of setting borders on the <div> elements. There are a lot of ways to do this, but here’s what I settled on:

    #ttt > * {
    	border: 1px solid black;
    	border-width: 0 1px 1px 0;
    	display: flex; /* flex styling to center content in divs */
    	align-items: center;
    	justify-content: center;
    }
    #ttt > *:nth-of-type(3n) {
    	border-right-width: 0;
    }
    #ttt > *:nth-of-type(n+7) {
    	border-bottom-width: 0;
    }

    The result is shown in the basic layout below.

    Screenshot: The basic layout features a 3x3 grid with lines breaking up the grid like a tic-tac-toe board.
    Figure 1: The basic layout

    This approach has the advantage of not relying on class names or what-have-you. It does fall apart, though, if the grid flow is changed to be columnar, as we can see in Figure 2.

    #ttt {
    	display: grid;
    	grid-template-columns: repeat(3,5em);
    	grid-template-rows: repeat(3,5em);
    	grid-auto-flow: column;  /* a change in layout! */
    }

    Screenshot: If you switch the grid to columnar flow order, the borders get out of whack. Instead of a tic-tac-toe board, the right-most horizontal borders have moved to the bottom of the grid and the bottom-most vertical borders have moved to the right edge.
    Figure 2: The basic layout in columnar flow order

    If the flow is columnar, then the border-applying rules have to get flipped, like this:

    #ttt > *:nth-of-type(3n) {
    	border-bottom-width: 0;
    }
    #ttt > *:nth-of-type(n+7) {
    	border-right-width: 0;
    }

    That will get us back to the result we saw in Figure 1, but with the content in columnar order instead of row order. There’s no row reverse or column reverse in Grid like there is in flexbox, so we only have to worry about normal row and columnar flow patterns.

    But what if a later change to the design leads to grid items being rearranged in different ways? For example, there might be a reason to take one or two of the items and display them last in the grid, like this:

    #ttt > *:nth-of-type(4), #ttt > *:nth-of-type(6) {
    	order: 66;
    }

    Just like in flexbox, this will move the displayed grid items out of source order, placing them after the grid items that don’t have explicit order values. If this sort of rearrangement is a possibility, there’s no easy way to switch borders on and off in order to create the illusion of the inner grid lines. What to do?

    Attack of the filler <b>s!

    If we want to create standalone styles that follow grid tracks—that is, presentation aspects that aren’t directly linked to the possibly-rearranged content—then we need other elements to place and style. They likely won’t have any content, making them a sort of structural filler to spackle over the gaps in Grid’s capabilities.

    Thus, to the <section> element, we can add two <b> elements with identifiers.

    <section id="ttt">
    	<b id="h"></b>
    	<b id="v"></b>
    	<!-- the nine numbered <div>s follow -->
    </section>

    These “filler <b>s,” as I like to call them, could be placed anywhere inside the <section>, but the beginning works fine. We’ll stick with that. Then we add these styles to our original grid from the basic layout:

    b[id] {
    	border: 1px solid gray;
    }
    b#h {
    	grid-column: 1 / -1;
    	grid-row: 2;
    	border-width: 1px 0;
    }
    b#v {
    	grid-column: 2;
    	grid-row: 1 / -1;
    	border-width: 0 1px;
    }

    The 1 / -1 means “go from the first grid line to the last grid line of the explicit grid”, regardless of how many grid lines there might be. It’s a handy pattern to use in any situation where you have a grid item meant to stretch from edge to edge of a grid.

    So the horizontal <b> has top and bottom borders, and the vertical <b> has left and right borders. This creates the board lines, as shown in Figure 3.

    Screenshot: With the filler b tags, you can see the tic-tac-toe board again. But only the corners of the grid are filled with content, and there are 5 cells below the board as the grid lines have displaced the content.
    Figure 3: The basic layout with “Filler <b>s”

    Hold on a minute: we got the tic-tac-toe grid back, but now the numbers are in the wrong places, which means the <div>s that contain them are out of place. Here’s why: the <div> elements holding the actual content will no longer auto-flow into all the grid cells, because the filler <b>s are already occupying five of the nine cells. (They’re the cells in the center column and row of the grid.) The only way to get the <div> elements into their intended grid cells is to explicitly place them. This is one way to do that:

    div:nth-of-type(3n+1) {
    	grid-column: 1;
    }
    div:nth-of-type(3n+2) {
    	grid-column: 2;
    }
    div:nth-of-type(3n+3) {
    	grid-column: 3;
    }
    div:nth-of-type(-n+3) {
    	grid-row: 1;
    }
    div {
    	grid-row: 2;
    }
    div:nth-of-type(n+7) {
    	grid-row: 3;
    }

    That works if you know the content will always be laid out in row-then-column order. Switching to column-then-row requires rewriting the CSS. If the contents are to be placed in a jumbled-up order, then you’d have to write a rule for each <div>.

    This probably suffices for most cases, but let’s push this even further. Suppose you want to draw those grid lines without interfering with the automatic flow of the contents. How can this be done?


    It would be handy if there were a property to mark elements as not participating in the grid flow, but there isn’t. So instead, we’ll split the contents and filler into their own grids, and use a third grid to put one of those grids over the other.

    This will necessitate a bit of structural change to make happen, because for it to work, the contents and the filler <b>s have to have identical grids. Thus we end up with:

    <section id="ttt">
    	<div id="board">
    		<b id="h"></b>
    		<b id="v"></b>
    	</div>
    	<div id="content">
    		<!-- …the nine numbered <div>s go here… -->
    	</div>
    </section>

    The first thing is to give the board and the content <div>s identical grids. The same grid we used before, in fact. We just change the #ttt rule’s selector a tiny bit, to select the children of #ttt instead:

    #ttt > * {
    	display: grid;
    	grid-template-columns: repeat(3,5em);
    	grid-template-rows: repeat(3,5em);
    }

    Now that the two grids have the same layout, we need to place one over the other. We could relatively position the #ttt container and absolutely position its children, but there’s another way: use Grid.
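    For comparison, the positioning route might look something like this sketch, though it's not the route we'll take:

    /* the absolute-positioning alternative, shown only for comparison */
    #ttt {
    	position: relative;
    }
    #ttt > * {
    	position: absolute;  /* removes both children from flow */
    	top: 0;
    	left: 0;
    	/* note: #ttt would collapse to zero height, since nothing remains in flow */
    }

    Grid can stack the two children without pulling them out of flow, so we'll do it with Grid instead: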

    #ttt { /* new rule added */
    	display: grid;
    }
    #ttt > * {
    	display: grid;
    	grid-template-columns: repeat(3,5em);
    	grid-template-rows: repeat(3,5em);
    }

    But wait—where are the rows and columns for #ttt? Where we’re going, we don’t need rows (or columns). Here is how the two grids end up occupying the same area with one on top of the other:

    #ttt {
    	display: grid;
    }
    #ttt > * {
    	display: grid;
    	grid-template-columns: repeat(3,5em);
    	grid-template-rows: repeat(3,5em);
    	grid-column: 1;  /* explicit grid placement */
    	grid-row: 1;  /* explicit grid placement */
    }

    So #ttt is given a one-cell grid, and its two children are explicitly placed in that single cell. Thus one sits over the other, as with positioning—but unlike positioning, the outer grid’s size is dictated by the layout of its children. It will resize to surround them, even if we later change the inner grids to be larger (or smaller). We can see this in practice in Figure 4, where the outer grid is outlined in purple in Firefox’s Grid inspector tool.

    Screenshot: In the Firefox Grid Inspector, the containing grid spans the full width of the page with a purple border. Occupying about a third of the space on the left side of the container are the two child grids, one with the numbers 1 through 9 in a 3 by 3 grid and the other with tic-tac-toe lines overlaid on top of each other.
    Figure 4: The overgridded layout

    And that’s it. We could take further steps, like using z-index to layer the board on top of the content (by default, the element that comes later in the source displays on top of the element that comes earlier), but this will suffice for the case we have here.
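    If that ever became necessary, a sketch of the z-index step could be as small as this:

    /* a sketch: force the board's lines to paint above the content grid */
    #ttt > #board {
    	z-index: 1;
    }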

    The advantage is that the content <div>, having only its own contents to worry about, can make use of grid-auto-flow and order to rearrange things. As an example, you can do things like the following and you won’t need all of the :nth-of-type grid item placements from our earlier CSS. Figure 5 shows the result.

    /* added to the overgridded code above */
    #ttt > #content {
    	grid-auto-flow: column;
    }
    #ttt > #content > :nth-child(5) {
    	order: 2;
    }

    Screenshot: The overgridded version, where the numbered 3 by 3 grid is overlaid on top of the tic-tac-toe board, continues to work fine if you reorder the cells. In this case, the number 5 has moved from the central grid cell to the bottom right.
    Figure 5: Moving #5 to the end and letting the other items reflow into columns


    The downside here, and it’s a pretty big one, is that the board and content grids are only minimally aware of each other. The reason the previous example works is the grid tracks are of fixed size, and none of the content is overflowing. Suppose we wanted to make the columns and rows resize based on content, like this:

    #content {
    	grid-template-columns: repeat(3,min-content);
    	grid-template-rows: repeat(3,min-content);
    }

    This will fall apart quickly, with the board lines not corresponding to the layout of the actual content. At all.

    In other words, this overlap technique sacrifices one of Grid’s main strengths: the way grid cells relate to other grid cells. In cases where content size is predictable but ordering is not, it’s a reasonable trade-off to make. In other cases, it probably isn’t a good idea.

    Bear in mind that this really only works with layouts where sizes and placements are always known, and where you sometimes have to layer grids on top of one another. If your Filler <b> comes into contact with an implicitly-placed grid item in the grid it occupies, it will be blocked from stretching. (Explicitly-placed grid items, i.e., those with author-declared values for both grid-row and grid-column, do not block Filler <b>s.)

    Why is this useful?

    I realize that few of us will need to create a layout that looks like a tic-tac-toe board, so you may wonder why we should bother. We may not want octothorpe-looking structures, but there will be times we want to style an entire column track or highlight a row.

    Since CSS doesn’t (yet) offer a way to style grid cells, areas, or tracks directly, we have to stretch elements over the parts we want to style independently from the elements that contain content. There is a discussion about adding this capability directly to CSS in the Working Group’s GitHub repository, where you can add your thoughts and proposals.
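    As a rough sketch of that idea, the same filler approach could wash color across an entire row track; the .highlight class here is purely illustrative:

    /* a hypothetical row highlight, built from the same filler idea */
    b.highlight {
    	grid-row: 2;                         /* the row track to call out */
    	grid-column: 1 / -1;                 /* stretch across every column */
    	background: rgba(255, 255, 0, 0.4);  /* translucent wash behind the content */
    }

    As before, the content items would need explicit placement (or an overgridded layout) to keep the filler from displacing them.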

    But why <b>s? Why?

    I use <b>s for the decorative portions of the layout because they’re purely decorative elements. There’s no content to strongly emphasize or to boldface, and semantically a <b> isn’t any better or worse than a <span>. It’s just a hook on which to hang some visual effects. And it’s shorter, so it minimizes page bloat (not that a few characters will make all that much of a difference).

    More to the point, the <b>’s complete lack of semantic meaning instantly flags it in the markup as being intentionally non-semantic. It is, in that meta sense, self-documenting.

    Is this all there is?

    There’s another way to get this precise effect: backgrounds and grid gaps. It comes with its own downsides, but let’s see how it works. First, we set a black background for the grid container and white backgrounds for each item in the grid. Then, by using grid-gap: 1px, the black container background shows through between the grid items.

    <section id="ttt">
    	<!-- …just the nine numbered <div>s; no filler <b>s needed… -->
    </section>

    #ttt {
    	display: grid;
    	grid-template-columns: repeat(3,5em);
    	grid-template-rows: repeat(3,5em);
    	background: black;
    	grid-gap: 1px;
    }
    #ttt > div {
    	background: white;
    }

    Simple, no Filler <b>s needed. What’s not to like?

    The first problem is that if you ever remove an item, there will be a big black block in the layout. Maybe that’s OK, but more likely it isn’t. The second problem is that grid containers do not, by default, shrink-wrap their items. Instead, they fill out the parent element, as block boxes do. Both of these problems are illustrated in Figure 6.

    Screenshot: When a grid cell goes missing with the background and grid-gap solution, it leaves a big black box in its place. There's also a giant black box filling the rest of the space to the right of the grid cells.
    Figure 6: Some possible background problems

    You can use extra CSS to restrict the width of the grid container, but the background showing through where an item is missing can’t really be avoided.
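    That restricting CSS could be as simple as making the container an inline-level grid, though that's just one option:

    /* one possible fix: let the grid container shrink-wrap its tracks */
    #ttt {
    	display: inline-grid;
    }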

    On the other hand, these problems could become benefits if, instead of a black background, you want to show a background image that has grid items “punch out” space, as Jen Simmons did in her “Jazz At Lincoln Center Poster” demo.

    A third problem arises when you just want solid grid lines over a varied page background, and you want that background to show through the grid items. In that case, the grid items (the <div>s here) have to have transparent backgrounds, which prevents using grid-gap to reveal a color.

    If the <b>s really chap your cerebellum, you can use generated content instead. When you generate before- and after-content pseudo-elements, Grid treats them as actual elements and makes them grid items. So, with the same simple markup from the previous example, we could write this CSS instead:

    #ttt {
    	display: grid;
    	grid-template-columns: repeat(3,5em);
    	grid-template-rows: repeat(3,5em);
    }
    #ttt::before {
    	content: "";
    	border: 1px solid gray;  /* same border as the filler <b>s */
    	grid-column: 1 / -1;
    	grid-row: 2;
    	border-width: 1px 0;
    }
    #ttt::after {
    	content: "";
    	border: 1px solid gray;  /* same border as the filler <b>s */
    	grid-column: 2;
    	grid-row: 1 / -1;
    	border-width: 0 1px;
    }

    It’s the same as with the Filler <b>s, except here the generated elements draw the grid lines.

    This approach works just fine for any 3x3 grid like the one we’ve been playing with, but to go any further, you’ll need to get more complicated. Suppose we have a 5x4 grid instead of a 3x3. Using gradients and repeating, we can draw as many lines as needed, at the cost of more complicated CSS.

    #ttt {
    	display: grid;
    	grid-template-columns: repeat(5,5em);
    	grid-template-rows: repeat(4,5em);
    }
    #ttt::before {
    	content: "";
    	grid-column: 1 / -1;
    	grid-row: 1 / -2;
    	background:
    		linear-gradient(to bottom,transparent 4.95em, 4.95em, black 5em)
    		top left / 5em 5em;
    }
    #ttt::after {
    	content: "";
    	grid-column: 1 / -2;
    	grid-row: 1 / -1;
    	background:
    		linear-gradient(to right,transparent 4.95em, 4.95em, black 5em)
    		top left / 5em 5em;
    }

    This works pretty well, as shown in Figure 7, assuming you go through the exercise of explicitly assigning the grid cells, much as we did with the :nth-of-type rules earlier.

    Screenshot: A 5 by 4 grid with evenly spaced borders dividing the cells internally using background gradients.
    Figure 7: Generated elements and background gradients

    This approach uses linear gradients to construct almost-entirely transparent images that have just a 1/20th of an em of black, and then repeating those either to the right or to the bottom. The downward gradient (which creates the horizontal lines) is stopped one gridline short of the bottom of the container, since otherwise there would be a horizontal line below the last row of items. Similarly, the rightward gradient (creating the vertical lines) stops one column short of the right edge. That’s why there are -2 values for grid-column and grid-row.

    One downside of this is the same as the Filler <b> approach: since the generated elements are covering most of the background, all the items have to be explicitly assigned to their grid cells instead of letting them flow automatically. The only way around this is to use something like the overgridding technique explored earlier. You might even be able to drop the generated elements if you’re overgridding, depending on the specific situation.

    Another downside is that if the font size ever changes, the width of the lines can change. I expect there’s a way around this problem using calc(), but I’ll leave that for you clever cogs to work out and share with the world.
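    One possible starting point, assuming the same 5em tiles, might be to pin the line thickness with calc() inside the color stops, so the lines stay one pixel wide even if the em changes:

    /* a sketch: 1px horizontal lines regardless of font size */
    #ttt::before {
    	background:
    		linear-gradient(to bottom,
    			transparent calc(5em - 1px), black calc(5em - 1px))
    		top left / 5em 5em;
    }

    The same idea, with a rightward gradient, would cover the vertical lines.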

    The funny part to me is that if you do use this gradient-based approach, you’re filling images into the background of the container and placing items over that … just as we did with Faux Columns.


    It’s funny how some concepts echo through the years. More than a decade ago, Dan Cederholm showed us how to fake full-height columns with background images. Now I’m showing you how to fake full-length column and row boxes with empty elements and, when needed, background images.

    Over time, the trick behind Faux Columns fell out of favor, and web design moved away from that kind of visual effect. Perhaps the same fate awaits Faux Grid Tracks, but I hope we see new CSS capabilities arise that allow this sort of effect without the need for trickery.

    We’ve outgrown so many of our old tricks. Here’s another to use while it’s needed, and to hopefully one day leave behind.

  20. Feedback That Gives Focus

    I have harbored a lifelong dislike of feedback. I didn’t like it in sixth grade when a kid on the bus told me my brand new sneakers were “too bright.” And I didn’t like it when a senior executive heard my pitch for a digital project and said, “I hate this idea.” Turns out my sneakers were pretty bright, and my pitch wasn’t the best idea. Still, those experiences and many others like them didn’t help me learn to stop worrying and love the feedback process.

    We can’t avoid feedback. Processing ideas and synthesizing feedback is a big part of what we do for a living. I have had plenty of opportunities to consider why both giving and receiving feedback is often so emotionally charged, so challenging to get right.

    And here’s what I’ve found to be true.

    When a project is preoccupying us at work, we often don’t think about it as something external and abstract. We think about it more like a story, with ourselves in the middle as the protagonist—the hero. That might seem melodramatic, especially if your work isn’t the kind of thing they’d make an inspirational movie about. But there’s research to back this up: humans use stories to make sense of the world and our place within it.

    Our work is no different. We create a story in our heads about how far we’ve come on a project and about where we’re going. This makes discussing feedback dangerous. It’s the place where someone else swoops in and hijacks your story.

    Speaking personally, I notice that when I’m giving feedback (and feeling frustrated), the story in my head goes like this: These people don’t get it. How can I force them into thinking the same way I do so that we can fix everything that’s wrong with this project, and in the end, I don’t feel like a failure?

    Likewise, when I’m receiving feedback (and feeling defensive), the story goes like this: These people don’t get it. How can I defend our work so that we keep everything that I like about this project, and in the end, I don’t feel like a failure?

    Both of these postures are ultimately counterproductive because they are focused inward. They’re really about avoiding shame. The person giving feedback and the person receiving it end up on opposing sides of the equation, each protecting their turf.

    But like a good story, good feedback can take us out of ourselves, allowing us to see the work more clearly. It can remove the artificial barrier between feedback giver and receiver, refocusing both on shared goals.

    Change your habits around feedback, and you can change the story of your project.

    Here are three ways to think about feedback that might help you do just that.

    Good feedback helps us understand how we got here

    Here’s a story for you. I was presenting some new wireframes for an app to the creative leads on the project. There were a number of stakeholders and advisors on the project, and I had integrated several rounds of their feedback into the harmonious and brilliant vision that I was presenting in this meeting. That’s the way I hoped the story would go, anyway.

    But at the end of the meeting, I got some of the best, worst feedback I have ever received: “We’ve gotten into our heads a little bit with this concept. Maybe it should be simpler. Maybe something more like this …” And they handed me a loose sketch on paper to illustrate a new, simpler approach. I had come for sign-off but left with a do-over.

    I felt ashamed. How could I have missed that? Even though that feedback was hard to hear, I walked away able to make important changes, which led to a better outcome in the end. Here are the reasons why:

    First, the feedback started as a conversation. Conversations (rather than written notes) make it easier to verify assumptions. When you talk face-to-face you can ask open-ended questions and clarify intent, so you don’t jump to conclusions. Talking helps you find where the trouble is much faster.

    The feedback connected the dots between the problems in our process so far (trying to reconcile too many competing ideas) and how they led to the current result (an overly complicated product). The person who gave the feedback helped me see how we got to where we were, without assigning blame or shaming me in the process.

    The feedback was direct. They didn’t try to mask the fact that the concept wasn’t working. Veiled or vague criticism does more harm than good; the same negativity comes through but without a clear sense of what to do next.

    Good feedback invites each person to contribute their best work

    No thought, no idea, can possibly be conveyed as an idea from one person to another. … Only by wrestling with the conditions of the problem … first hand … does he think.
    John Dewey, Democracy and Education

    Here’s another story. I was the producer on an app-based game, and the team was working on a part of the user interface that the player would use again and again. I was convinced that the current design didn’t “feel” right. I kept pushing for a change, against the input of others, and I gave the team some specific feedback about what I wanted to see done. The designers played along and tried it out. But it became clear that my feedback wasn’t helping, and the design director (gently) stepped in and steered us out of my design tangent and back on course.

    John Dewey had it right in that quote above; you can’t think for someone else. And that’s exactly what I was doing: giving specific solutions without inviting the team to engage with the problem. And the results were worse for it.

    It’s very tempting to use feedback to cajole and control people into doing things your way. But that usually leads to mediocre results. You have a team for a reason: you can’t possibly do everything on your own. Instead, when giving feedback try to remember that you’re building a team of individual contributors that will work together to make a better end product.

    Here are a few feedback habits that help avoid the trap of using feedback to control, and instead, bring out the best in people.

    Don’t give feedback until the timing is right

    Feedback isn’t useful if it’s given before the work is really ready to be looked at. It’s also not useful to give feedback if you have not taken the time to look at the work and think about it in advance. If you rush either of these, the feedback will devolve into a debate about what could have been, rather than what’s actually there now. That invites confusion, defensiveness, and inefficiency.

    Be just specific enough

    Good feedback should have enough specifics to clearly identify the problem. But, usually, it’s better to not give a specific solution. The feedback in this example goes too far:

    The background behind the menu items is a light blue on a darker blue. This makes it hard to see some options. Change the background fill to white and add a thin, red border around each square. When an option is selected, perhaps the inside border should glow red but not fill in all the way.

    Instead, feedback that clearly identifies the problem is probably enough:

    The background behind the menu items makes it a little hard for me to see some options. Any way we might make it easier to read?

    Give the person whose job it is to solve the problem the room to do just that. They might solve it in a better way than you had anticipated.

    Admit when you’re wrong

    When you acknowledge a mistake openly and without fear, it gives permission for others on the team to do the same. This refocuses energies away from ego-protection and toward problem solving. I chose to admit I got it wrong on that app project I mentioned above; the designers had it right and I told them I was glad they stuck to their guns. Saying that out loud was actually easier than I thought, and our working relationship was better for it.

    Good feedback tells a story about the future

    In my writing, as much as I could, I tried to find the good, and praise it.
    Alex Haley

    We’ve said that good feedback connects past assumptions and decisions to current results, without assigning blame. Good feedback also identifies issues in a timely and specific way, giving people room to find novel solutions and contribute their best work.

    Lastly, I’ve found that most useful feedback helps us look beyond the present state of our work and builds a shared vision of where we’re headed.

    Maybe one of the most overlooked tools for building that shared vision is actually pretty simple: positive feedback. The best positive feedback acknowledges great work that’s already complete, but does so in a way that is future-focused. Its purpose is to point out what we want to do more of as we move forward.

    In practice, I’ve found that I can become stingy with positive feedback, especially when it’s early in a project and there’s so much work ahead of us. Maybe this is because I’m afraid that mentioning the good things will distract us from what’s still in need of improvement.

    But ironically, the opposite is true: it becomes easier to fix what’s broken once you have something (however small) that you know is working well and that you can begin to build that larger vision around.

    So be as direct about what’s working as you are about what isn’t, and you’ll find it becomes easier to rally a team around a shared vision for the future. The first signs of that future can be found right here in the present.

    Like Mr. Haley said: find the good and praise it.

    Oh and one more thing: say thank you.

    Thank people for their contributions. Let me give that a try right now:

    It seemed wise to get some feedback from others when writing about feedback. So thanks to everyone in the PBS KIDS family of producers who generously shared their thoughts and experience with me in preparation for this article. I look forward to hearing your feedback.