A List Apart

  1. Discovery on a Budget: Part III

    Sometimes we have the luxury of large budgets and deluxe research facilities, and sometimes we’ve got nothing but a research question and the determination to answer it. Throughout the “Discovery on a Budget” series we have discussed strategies for conducting discovery research with very few resources but lots of creativity. In part 1 we discussed the importance of a clearly defined problem hypothesis and started our affordable research with user interviews. Then, in part 2, we discussed competing hypotheses and “fake-door” A/B testing when you have little to no traffic. Today we’ll conclude the series by considering the pitfalls of the most tempting and seemingly affordable research method of all: surveys. We will also answer the question “when are you done with research and ready to build something?”

    A quick recap on Candor Network

    Throughout this series I’ve used a budget-conscious, and fictitious, startup called Candor Network as my example. Like most startups, Candor Network started simply as an idea:

    I bet end-users would be willing to pay directly for a really good social networking tool. But there are lots of big unknowns behind that idea. What exactly would “really good” mean? What are the critical features? And what would be the central motivation for users to try yet another social networking tool? 

    To kick off my discovery research, I created a hypothesis based on my own personal experience: that a better social network tool would be one designed with mental health in mind. But after conducting a series of interviews, I realized that people might be more interested in a social network that focused on data privacy as opposed to mental health. I captured this insight in a second, competing hypothesis. Then I launched two corresponding “fake door” landing pages for Candor Network so I could A/B test both ideas.

    For the past couple of months I’ve run an A/B test between the two landing pages where half the traffic goes to version A and half to version B. In both versions there is a short, two-question survey. To start our discussion today, we will take a more in-depth look at this seemingly simple survey, and analyze the results of the A/B test.

    Surveys: Proceed with caution

Surveys are probably the most used, but least useful, research tool. It is ever so tempting to say, “let’s run a quick survey” when you find yourself wondering about customer desires or user behavior. Modern web-based tools have made surveys incredibly quick, cheap, and simple to run. But as anyone who has ever tried running a “quick survey” can attest, they rarely, if ever, provide the insight you are looking for.

    In the words of Erika Hall, surveys are “too easy.” They are too easy to create, too easy to disseminate, and too easy to tally. This inherent ease masks the survey’s biggest flaw as a research method: it is far, far too easy to create biased, useless survey questions. And when you run a survey littered with biased, useless questions, you either (1) realize that your results are not reliable and start all over again, or (2) proceed with the analysis and make decisions based on biased results. If you aren’t careful, a survey can be a complete waste of time, or worse, lead you in the wrong direction entirely.

However, sometimes a survey is the only method at your immediate disposal. You might be targeting a user group that is difficult to reach through other convenience- or “guerrilla”-style means (think of products that revolve around taboo or sensitive topics—it’s awfully hard to spring those conversations on random people you meet in a coffee shop!). Or you might work for a client that is reluctant to help locate research participants in any way beyond sending an email blast with a survey link. Whatever the case may be, there are times when a survey is the only step forward you can take. If you find yourself in that position, keep the following tips in mind.

    Tip 1: Try to stick to questions about facts, not opinions

    If you were building a website for ordering dog food and supplies, a question like “how many dogs do you own?” can provide key demographic information not available through standard analytics. It’s the sort of question that works great in a short survey. But if you need to ask “why did you decide to adopt a dog in the first place?” then you’re much better off with a user interview.

    If you try asking any kind of “why” question in a survey, you will usually end up with a lot of “I don’t know” and otherwise blank responses. This is because people are, in general, not willing to write an essay on why they’ve made a particular choice (such as choosing to adopt a dog) when they’re in the middle of doing something (like ordering pet food). However, when people schedule time for a phone call, they are more than willing to talk about the “whys” behind their decisions. In short, people like to talk about their opinions, but are generally too lazy or busy to write about their opinions. Save the why questions for later (and see Tip 5).

    Tip 2: Avoid asking about the future

    People live in the present, and only dream about the future. There are a lot of things outside of our control that affect what we will buy, eat, wear, and do in the future. Also, sometimes the future selves we imagine are more aspirational than factual. For example, if you were to ask a random group of people how many times they plan to go to the gym next month, you might be (not so) surprised to see that their prediction is significantly higher than the actual number. It is much better to ask “how many times did you go to the gym this week?” as an indicator of general gym attendance than to ask about any future plans.

    I asked a potentially problematic, future-looking question in the Candor Network landing page survey:

    How much would you be willing to pay, per year, for Candor Network?

    • Would not pay anything
    • $1
    • $5
    • $10
    • $15
    • $20
    • $25
    • $30
    • Would pay more

    In this question, I’m asking participants to think about how much money they would like to spend in the future on a product that doesn’t exist yet. This question is problematic for a number of reasons, but the main issue is that people, in general, don’t know how they really feel about pricing until the exact moment they are poised to make a purchase. Relying on this question to, say, develop my income projections for an investor pitch would be unwise to say the least. (I’ll discuss what I actually plan to do with the answers to this question in the next tip.)

    Tip 3: Know how you are going to analyze responses before you launch the survey

    A lot of times, people will create and send out a survey without thinking through what they are going to do with the results once they are in hand. Depending on the length and type of survey, the analysis could take a significant amount of time. Also, if you were hoping to answer some specific questions with the survey data, you’ll want to make sure you’ve thought through how you’ll arrive at those answers. I recommend that while you are drafting survey questions, you also simultaneously draft an analysis plan.

    In your analysis plan, think about what you are ultimately trying to learn from each survey question. How will you know when you’ve arrived at the answer? If you are doing an A/B test like I am, what statistical analysis should you run to see if there is a significant difference between the versions? You should also think about what the numbers will look like and what kinds of graphs or tables you will need to build. Ultimately, you should try to visualize what the data will look like before you gather it, and plan accordingly.

    For example, when I created the two survey questions on the Candor Network landing pages, I created a short analysis plan for each. Here is what those plans looked like:

    Analysis plan for question 1: “How much would you be willing to pay per year for Candor Network?”

    Each response will go into one of two buckets:

    • Bucket 1: said they would not pay any money;
    • and Bucket 2: said they might pay some money.

    Everyone who answered “Would not pay anything” goes in Bucket 1. Everyone else goes in Bucket 2. I will interpret every response that falls into Bucket 2 as an indicator of general interest (and I’m not going to put any value on the specific answer selected). To see whether any difference in response between landing page A and B is statistically significant (i.e., attributable to more than just chance), I will use a chi-square test. (Side note: There are a number of different statistical tests we could use in this scenario, but I like chi-square because of its simplicity. It is a test that’s easy for non-statisticians to run and understand, and it errs on the conservative side.)

    Analysis plan for question 2: “Would you like to be a beta tester or participate in future research?”

    The question only has two possible responses: “yes” and “no.” I will interpret every “yes” response as an indicator of general interest in the idea. Again, a chi-square test will show if there is a significant difference between the two landing pages. 

    Tip 4: Never rely on a survey by itself to make important decisions

    Surveys are hard to get right, and even when they are well made, the results are often approximations of what you really want to measure. However, if you pair a survey with a series of user interviews or contextual inquiries, you will have a richer, more thorough set of data to analyze. In the social sciences, this is called triangulation. If you use multiple methods to triangulate and study the same phenomenon, you will get a richer, more complete picture. This leads me to my final tip …

    Tip 5: End every survey with an opportunity to participate in future research

    There have been many times in my career when I have launched surveys with only one objective in mind: to gather the contact information of potential study participants. In cases like these, the survey questions themselves are not entirely superfluous, but they are certainly secondary to the main research objective. Shortly after the survey results have been collected, I will select and email a few respondents, inviting them to participate in a user interview or usability study. If I planned on continuing Candor Network, this is absolutely what I would do.

    Finally, the results

    According to Google Optimize, there were a total of 402 sessions in my experiment. Of those sessions, 222 saw version A and 180 saw version B. Within the experiment, I tracked how often the “submit” button on the survey was clicked, and Google Optimize tells me “no clear leader was found” on that measure of engagement. Roughly an equal number of people from each condition submitted the survey.

    Here is a breakdown of the number of sessions and survey responses each condition received:

                     | Version A: better mental health | Version B: privacy and data security | Total
    Sessions         | 222                             | 180                                  | 402
    Survey responses | 76                              | 68                                   | 144

    When we look at the actual answers to the survey questions, we start to get some more interesting results.

              | Bucket 1: would not pay any money | Bucket 2: might pay some money
    Version A | 25                                | 51
    Version B | 14                                | 54

    Breakdown of question 1, “How much would you be willing to pay per year for Candor Network?”

    Plugging these figures into my favorite chi-square calculator, I get the following values: chi-square = 2.7523, p = 0.097113. In general, bigger chi-square values indicate greater differences between the groups. And the p-value is less than 0.1, which suggests that the result is marginally significant (i.e., the result is probably not due to random chance). This gives me a modest indicator that respondents in group B, who saw the “data secure” version of the landing page, are more likely to fall into the “might pay some money” bucket.
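If you’d like to check that arithmetic yourself, the chi-square statistic for a 2x2 table is simple enough to compute in a few lines of code. Here is a minimal sketch in plain JavaScript—the function is my own illustration, not part of any survey tool or statistics library:

    // Chi-square statistic for a 2x2 table of observed counts:
    // [[a, b], [c, d]] = [[A: no-pay, A: might-pay], [B: no-pay, B: might-pay]]
    function chiSquare2x2(a, b, c, d) {
      const total = a + b + c + d;
      const rows = [a + b, c + d];  // row totals (Version A, Version B)
      const cols = [a + c, b + d];  // column totals (Bucket 1, Bucket 2)
      const observed = [[a, b], [c, d]];
      let chi2 = 0;
      for (let i = 0; i < 2; i++) {
        for (let j = 0; j < 2; j++) {
          // Expected count for a cell = (row total x column total) / grand total
          const expected = (rows[i] * cols[j]) / total;
          const diff = observed[i][j] - expected;
          chi2 += (diff * diff) / expected;
        }
      }
      return chi2;
    }

    // Question 1: Version A = [25, 51], Version B = [14, 54]
    console.log(chiSquare2x2(25, 51, 14, 54).toFixed(4)); // "2.7523"
    // Question 2 (counts appear below): Version A = [24, 52], Version B = [13, 55]
    console.log(chiSquare2x2(24, 52, 13, 55)); // ≈ 2.919

The p-value then comes from the chi-square distribution with one degree of freedom, which is the part an online calculator (or a statistics library) looks up for you.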

    And when we look at the breakdown and chi-square calculation of question two, we see similar results.

              | No | Yes
    Version A | 24 | 52
    Version B | 13 | 55

    Breakdown of question 2, “Would you like to be a beta tester or participate in future research?”

    The chi-square = 2.9189, and p = .087545. Again, I have a modest indicator that respondents in group B are more likely to say yes to participating in future research. (If you’d like to learn more about how to run and interpret chi-square tests, the Interaction Design department at the University of California, San Diego has provided a great video tutorial.)

    How do we know when it’s time to move on?

    I wish I could provide you with a formula for calculating the exact moment when the research is done and it’s time to move on to prototyping, but I’m afraid no such formula exists. There is no definitive way to determine how much research is enough. Every round of research teaches you something new, but you are always left with more questions. As Albert Einstein said, “the more I learn, the more I realize how much I don’t know.”

    However, with experience you come to recognize certain hallmarks that indicate it’s time to move on. Erika Hall, in her book Just Enough Research, described it as feeling a “satisfying click.” She says, “[O]ne way to know you’ve done enough research is to listen for the satisfying click. That’s the sound of the pieces falling into place when you have a clear idea of the problem you need to solve and enough information to start working on a solution.” (Just Enough Research, p. 36.)

    When it comes to building a product on a budget, you may also want to consider that research is relatively cheap compared to the cost of design and development. The rule I tend to follow is this: continue conducting discovery research until the questions you really want answered can only be answered by putting something in front of users. That is, wait to build something until you absolutely have to. Learn as much as you can about your target market and user base until the only way forward is to put some sketches on paper.

    With Candor Network, I’m not quite there yet. There is still plenty of runway to cover in the research cycle. Now that I know that data privacy is a more motivating reason to consider paying for a social networking tool, I need to work out what other features will be essential. In the next round of research, I could do think-aloud studies and ask participants to give me a tour of their Facebook and other social media pages. Or I could continue with more interviews, but recruit from a different source and reach a broader demographic of participants. Regardless of the exact path I choose to take from here, the key is to focus on what the requirements would be for the ultra-private, data-secure social network that users would value.

    A few parting words

    Discovery research helps us learn more about the users we want to help and the problems they need a solution for. It doesn’t have to be expensive either, and it definitely isn’t something that should be omitted from the development cycle. By starting with a problem hypothesis and conducting multiple rounds of research, we can ultimately save time and money. We can move from gut instincts and personal experiences to a tested hypothesis. And when it comes time to launch, we’ll know it’s from a solid foundation of research-backed understanding.


  2. The Problem with Patterns

    It started off as an honest problem with a brilliant solution. As the ways we use the web continue to grow and evolve, we, as its well-intentioned makers and stewards, needed something better than making simple collections of pages over and over again.

    Design patterns, component libraries, or even style guides have become the norm for organizations big and small. Having reusable chunks of UI aids consistency and usability for users, and it lends familiarity and efficiency to designers. This in turn frees up designers’ time to focus on bigger problems, like solving for their users’ needs. In theory.

    The use of design patterns, regardless of their scope or complexity, should never stifle creativity or hold back design progress. In order to achieve what they promise, they should be adaptable, flexible, and scalable. A good design pattern is undeterred by context, and most importantly, is unobtrusive. Again, in theory.

    Before getting further into the weeds, let’s define what is meant by the term pattern here. You’re probably wondering what the difference is between all the different combinations of the same handful of words being used in the web community.

    Initially, design patterns were small pieces of a user interface, like buttons and error messages.

    Two styled buttons: one dark blue, one green
    Buttons and links from Co-op
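To make the idea concrete, a button pattern is typically just a small, reusable pairing of markup and styles. Here is a minimal sketch—the class names and colors are illustrative, not Co-op’s actual code:

    <!-- One class for every button, plus a modifier for the primary action -->
    <button class="btn btn--primary">Save changes</button>
    <button class="btn">Cancel</button>

    <style>
      .btn {
        padding: 0.75em 1.5em;
        border: 0;
        border-radius: 4px;
        font: inherit;
        cursor: pointer;
      }
      .btn--primary {
        background: #00574b; /* a green along the lines of the buttons above */
        color: #fff;
      }
    </style>

Because every team reuses the same two classes, buttons look and behave consistently wherever they appear.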

    Design patterns go beyond the scope and function of a style guide, which deals more with documenting how something should look, feel, or work. Type scales, design principles, and writing style are usually found within the bounds of a style guide.

    More recently, the scope of design patterns has expanded as businesses and organizations look to work more efficiently and consistently, especially if it involves a group or family of products and services. Collections of design patterns are then commonly used to create reusable components of a larger scope, such as account sign-up, purchase checkout, or search. This is most often known as the component library.

    A simple wireframe with tabbed content
    Tabs from BBC Global Experience Language (GEL)

    The final evolution of all these is known as a design system (or a design language). This encompasses the comprehensive set of design standards, documentation, and principles. It includes the design patterns and components to achieve those standards and adhere to those principles. More often than not, a design system is still used day-to-day by designers for its design patterns or components.

    The service design pattern

A significant reason designing for the web has changed so irrevocably is that more and more products and services now live on it. This is why service design is becoming much more widely valued and sought after in the industry.

    Service patterns—unlike all of the above patterns, which focus on relatively small and compartmentalized parts of a UI—go above and beyond. They aim to incorporate an entire task or chunk of a user’s journey. For example, a credit card application can be represented by some design patterns or components, but the process of submitting an application to obtain a credit card is a service pattern.

    A simple page layout from Gov.uk
    Pattern for GOV.UK start pages

If we think in terms of an analogy like atomic design, service patterns don’t fit any one category (atoms, molecules, organisms, etc.). For example, a design pattern for a form can be described as a molecule. It does one thing and does it well. This is the beauty of a good design pattern—it can be taken without context and used effectively across a variety of situations.

    Service design patterns attempt to combine the goals of both design patterns and components by creating a reusable task. In theory.

    So, what’s the problem?

    The design process is undervalued

    Most obvious misuses of patterns are easy to avoid with good documentation, but do patterns actually result in better-designed products and services?

    Having a library of design components can sometimes give the impression that all the design work has been completed. Designers or developers can revert to using a library as clip art to create “off-the-shelf” solutions. Projects move quickly into development.

Although patterns do help teams hesitate less and build things faster, it is how and why a group of patterns and components is stitched together that results in great design.

    For example, when designing digital forms, using button and input fields patterns will improve familiarity and consistency, without a doubt. However, there is no magic formula for the order in which questions on a form should be presented or for how to word them. To best solve for a user’s needs, an understanding of their goals and constraints is essential.

    Patterns can even cause harm without considering a user’s context and the bearing it may have on their decision-making process.

    For example, if a user will likely be filling out a form under stress (this can be anything from using a weak connection, to holding a mobile phone with one hand, to being in a busy airport), an interface should prioritize minimizing cognitive load over the number of steps or clicks needed to complete it. This decision architecture cannot be predetermined using patterns.

    A simple wireframe showing a multi-step form
    Break up tasks into multiple steps to reduce cognitive load

    Patterns don’t start with user needs

    Components and service patterns have a tendency to serve the needs of the business or organization, not the user.

    Pattern             | Service               | User need                   | Organization need
    Apply for something | Get a fishing license | Enjoy the outdoors          | Keep rivers clean; generate income
    Apply for something | Apply for a work visa | Work in a different country | Check eligibility
    Create an account   | Online bank account   | Save money                  | Security; fraud prevention
    Create an account   | Join a gym            | Lose weight                 | Capture customer information
    Register            | Register to vote      | Make my voice heard         | Check eligibility
    Register            | Online shopping       | Find my order               | Security; marketing

If you are simply designing a way to apply for a work visa, having form field and button patterns is very useful. But any meaningful testing sessions with users will speak to how confident they felt in obtaining the necessary documents to work abroad, not whether they could simply locate a “submit” button.

    User needs are conflated with one another

    Patterns are also sometimes a result of grouping together user needs, essentially creating a set of fictional users that in reality do not exist. Users usually have one goal that they want to achieve efficiently and effectively. Assembling a group of user needs can result in a complex system trying to be everything to everyone.

    For example, when creating a design pattern for registering users to a service across a large organization, the challenge can very quickly move from:

    “How can I check the progress of my application?”
    “Can I update or change my delivery address?”
    “Can I quickly repeat or renew an application?”

    to:

    “How can we get all the details we need from users to allow them to register for an account?”

    The individual user needs are forgotten and replaced with a combined assumed need to “register for an account” in order to “view a dashboard.” In this case, the original problem has even been adapted to suit the design pattern instead of the other way around. 

    Outcomes are valued over context

    Even if they claim to address user context, the success of a service pattern might still be measured through an end result, output, or outcome. Situations, reactions, and emotions are still overlooked.

    Take mass transit, for example. When the desired outcome is to get from Point A to Point B, we may find that a large number of users need to get there quickly, especially if they’re headed home from work. But we cannot infer from this need that the most important goal of transportation is speed. Someone traveling alone at night or in unfamiliar surroundings may place greater importance on safety or need more guidance and reassurance from the service.

    Sometimes, service patterns cannot solve complex human problems like these. More often than not, an over-reliance on outcome-focused service patterns just defeats the purpose of building any empathy during the design process.

    For example, date pickers tend to follow a similar pattern across multiple sectors, including transport, leisure, and healthcare. Widely-used patterns like this are intuitive and familiar to most users.

    Three screenshots of similar-looking date finder tools

    This does not mean that the same date picker pattern can be used seamlessly in any service. If a user is trying to book an emergency doctor appointment, the same patterns seen above are suddenly much less effective. Being presented with a full calendar of options is no longer helpful because choice is no longer the most valuable aspect of the service. The user needs to quickly see the first available appointment with minimal choices or distractions.

    Two screenshots: the first is a traditional date picker, the second is a simplified interface for finding an appointment

    Digital by default

    Because patterns are built for reuse, they sometimes encourage us to use them without much question, particularly assuming that digital technology is the solution.

    A service encompasses everything a user needs to complete their goal. By understanding the user’s entire journey, we start to uncover their motivations and can begin to think about new, potentially non-digital ways to solve their problems.

    For example, the Canadian Immigration Service receives more than 5.2 million inquiries a year by email or phone from people looking for information about applications.

    One of the most common reasons behind the complaints was the time it took to complete an application over the phone. Instead of just taking this data and speeding up the process with a digital form, the product team focused on understanding the service’s users and their reasons behind their reactions and behaviors.

    For example, calls received were often bad-tempered, despite callers being greeted by a recorded message informing them of the length of time it could take to process an application, and advising them against verbally abusing the staff. 

    The team found that users were actually more concerned with the lack of information than they were with the length of time it took to process their application. They felt confused, lost, and clueless about the immigration process. They were worried they had missed an email or letter in the mail asking for missing documentation.

    In response to this, the team decided to change the call center’s greeting, setting the tone to a more positive and supportive one. Call staff also received additional training and began responding to questions even if the application had not reached its standard processing time.

    The team made sure to not define the effectiveness of the design by how short new calls were. Although the handling time for each call went up by 16 percent, follow-up calls dropped by a whopping 30 percent in fewer than eight weeks, freeing up immigration agents’ time to provide better quality information to callers.

    Alternatives to patterns

    As the needs of every user are unique, every service is also unique. To design a successful service you need to have an in-depth understanding of its users, their motivations, their goals, and their situations. While there are numerous methodologies to achieve this, a few key ones follow:

    Framing the problem

    Use research or discovery phases to unearth the real issues with the existing service or process. Contextual research sessions can help create a deeper understanding of users, which helps to ensure that the root cause of a problem is being addressed, not just the symptoms.

    Journey maps

    Journey maps are used to create a visual representation of a service through the eyes of the user. Each step a user takes is recorded against a timeline along with a series of details including:

    • how the user interacts with the service;
    • how the service interacts with the user;
    • the medium of communication;
    • the user’s emotions;
    • and service pain points.

    Service teams, not product teams

    Setting up specialist pattern or product teams creates a disconnect with users. There may be common parts to user journeys, such as sign-up or on-boarding, but having specialist design teams will ultimately not help an organization meet user (and therefore business) needs. Teams should consider taking an end-to-end, service approach.

    Yes                | No
    Mortgage service   | Registration; Application
    Passports service  | Registration; Application
    Tax-return service | Registration; Submit Information

    Assign design teams to a full service rather than discrete parts of it

    Be open and inclusive

    Anyone on a wider team should be able to contribute to or suggest improvements to a design system or component library. If applicable, people should also be able to prune away patterns that are unnecessary or ineffective. This enables patterns to grow and develop in the most fruitful way.

    Open-sourcing pattern libraries, like the ones managed by a11yproject.com or WordPress.org, is a good way to keep structure and process in place while still allowing people to contribute. The transparent and direct review process characteristic of the open-source spirit can also help reduce friction.

    Across larger organizations, this can be harder to manage, and the time commitment can contradict the intended benefits. Still, some libraries, such as the Carbon Design System, exist and are open to suggestions and feedback.

    In summary

A design pattern library can range from being thorough, trying to cover all the bases, to politely broad, so as not to step on the toes of a design team. But patterns should never sacrifice user context for efficiency and consistency. They should reinforce the importance of the design process while helping an organization think more broadly about its users’ needs and its own goals. Real-world problems are rarely solved with out-of-the-box solutions. Even in service design.

  3. Orchestrating Experiences

    A note from the editors: It’s our pleasure to share this excerpt from Chapter 2 (“Pinning Down Touchpoints”) of Orchestrating Experiences: Collaborative Design for Complexity by Chris Risdon and Patrick Quattlebaum, available now from Rosenfeld Media.

If you embrace the recommended collaborative approaches in your sense-making activities, you and your colleagues should build good momentum toward creating better and valuable end-to-end experiences. In fact, the urge to jump straight into solution mode will be strong. Take a deep breath: you have a little more work to do. To ensure that your new insights translate into the right actions, you must collectively define what is good and hold one another accountable for aligning with it.

    Good, in this context, means the ideas and solutions that you commit to reflect your customers’ needs and context while achieving organizational objectives. It also means that each touchpoint harmonizes with others as part of an orchestrated system. Defining good, in this way, provides common constraints to reduce arbitrary decisions and nudge everyone in the same direction.

    How do you align an organization to work collectively toward the same good? Start with some common guidelines called experience principles.

    A Common DNA

    Experience principles are a set of guidelines that an organization commits to and follows from strategy through delivery to produce mutually beneficial and differentiated customer experiences. Experience principles represent the alignment of brand aspirations and customer needs, and they are derived from understanding your customers. In action, they help teams own their part (e.g., a product, touchpoint, or channel) while supporting consistency and continuity in the end-to-end experience. Figure 6.1 presents an example of a set of experience principles.

    Seven example experience principles: Timeliness, Shaping, Fluidity, Empowerment, Flexibility, Bonding, and Specificity
    Figure 6.1: Example set of experience principles. Courtesy of Adaptive Path

    Experience principles are not detailed standards that everyone must obey to the letter. Standards tend to produce a rigid system, which curbs innovation and creativity. In contrast, experience principles inform the many decisions required to define what experiences your product or service should create and how to design for individual, yet connected, moments. They communicate in a few memorable phrases the organizational wisdom for how to meet customers’ needs consistently and effectively. For example, look at the following:   

    • Paint me a picture.
    • Have my back.
    • Set my expectations.
    • Be one step ahead of me.
    • Respect my time.
Experience Principles vs. Design Principles

    Orchestrating experiences is a team sport. Many roles contribute to defining, designing, and delivering products and services that result in customer experiences. For this reason, the label experience—rather than design—better reflects the value of principles that inform and guide the organization. Experience principles are outcome oriented; design principles are process oriented. Everyone should follow and buy into them, not just designers.

    —Patrick Quattlebaum

    Experience principles are grounded in customer needs, and they keep collaborators focused on the why, what, and how of engaging people through products and services. They keep critical insights and intentions top of mind, such as the following:

    • Mental Models: How part of an experience can help people have a better understanding, or how it should conform to their mental model.
    • Emotions: How part of an experience should support the customer emotionally, or directly address their motivations.
    • Behaviors: How part of an experience should enable someone to do something they set out to do better.
    • Target: The characteristics to which an experience should adhere.
    • Impact: The outcomes and qualities an experience should engender in the user or customer.
Focusing on Needs to Differentiate

    Many universal or heuristic principles exist to guide design work. There are visual design principles, interaction design principles, user experience principles, and any number of domain principles that can help define the best practices you apply in your design process. These are lessons learned over time that have a broader application and can be relied on consistently to inform your work across even disparate projects.

    It’s important to reinforce that experience principles specific to your customers’ needs provide contextual guidelines for strategy and design decisions. They help everyone focus on what’s appropriate to specific customers with a unique set of needs, and your product or service can differentiate itself by staying true to these principles. Experience principles shouldn’t compete with best practices or universal principles, but they should be honored as critical inputs for ensuring that your organization’s specific value propositions are met.

    —Chris Risdon

    Playing Together

    Earlier, we compared channels and touchpoints to instruments and notes played by an orchestra, but in the case of experience principles, it’s more like jazz. While each member of a jazz ensemble is given plenty of room to improvise, all players understand the common context in which they are performing and carefully listen and respond to one another (see Figure 6.2). They know the standards of the genre backward and forward, and this knowledge allows them to be creative individually while collectively playing the same tune.

    A jazz ensemble plays music
Figure 6.2: Jazz ensembles depend upon a common foundation to inspire improvisation while working together to form a holistic work of art. Photo by Roland Godefroy.

    Experience principles provide structure and guidelines that connect collaborators while giving them room to be innovative. As with a time signature, they ensure alignment. Similar to a melody, they provide a foundation that encourages supportive harmony. Like musical style, experience principles provide boundaries for what fits and what doesn’t.

    Experience principles challenge a common issue in organizations: isolated soloists playing their own tune to the detriment of the whole ensemble. While still leaving plenty of room for individual improvisation, they ask a bunch of solo acts to be part of the band. This structure provides a foundation for continuity in the resulting customer journey, but doesn’t overengineer consistency and predictability, which might prevent delight and differentiation. Stressing this balance of designing the whole while distributing effort and ownership is a critical stance to take to engender cross-functional buy-in.

To get broad acceptance of your experience principles, you must help your colleagues and your leadership see their value. This typically means crafting specific value propositions and education materials for different stakeholders, then piloting the experience principles on a project to show how they are used in action. When approaching each stakeholder, consider these common values:

    • Defining good: While different channels and media have their specific best practices, experience principles provide a common set of criteria that can be applied across an entire end-to-end experience.
    • Decision-making filter: Throughout the process of determining what to do strategically and how to do it tactically, experience principles ensure that customers’ needs and desires are represented in the decision-making process.
    • Boundary constraints: Because these constraints represent the alignment of brand aspiration and customer desire, experience principles can filter out ideas or solutions that don’t reinforce this alignment.
    • Efficiency: Used consistently, experience principles reduce ambiguity and the resultant churn when determining what concepts should move forward and how to design them well.
    • Creativity inspiration: Experience principles are very effective in sparking new ideas with greater confidence that will map back to customer needs. (See Chapter 8, “Generating and Evaluating Ideas.”)
    • Quality control: Through the execution lifecycle, experience principles can be used to critique touchpoint designs (i.e., the parts) to ensure that they align to the greater experience (i.e., the whole).

    Pitching and educating aside, your best bet for creating good experience principles that get adopted is to avoid creating them in a black box. You don’t want to spring your experience principles on your colleagues as if they were commandments from above to follow blindly. Instead, work together to craft a set of principles that everyone can follow energetically.

    Identifying Draft Principles

Your research into the lives and journeys of customers will produce a large number of insights. These insights are reflective. They capture people’s current experiences—such as their met and unmet needs, how they frame the world, and their desired outcomes. To craft useful and appropriate experience principles, you must turn these insights inside out to project what future experiences should be.

When You Can’t Do Research (Yet)

    If you lack strong customer insights (and the support or time to gather them), it’s still valuable to craft experience principles with your colleagues. The process of creating them provides insight into the various criteria that people are using to make decisions. It also sheds light on what your collaborators believe are the most important customer needs to meet. While not as sound as research-driven principles, your team can align around a set of guidelines to inform and critique your collective work—and then build the case for gathering insights for creating better experience principles.

    —Patrick Quattlebaum

    From the Bottom Up

    The leap from insights to experience principles will take several iterations. While you may be able to rattle off a few candidates based on your research, it’s well worth the time to follow a more rigorous approach in which you work from the bottom (individual insights) to the top (a handful of well-crafted principles). Here’s how to get started:

    • Reassemble your facilitators and experience mappers, as they are closest to what you learned in your research.
    • Go back to the key insights that emerged from your discovery and research. These likely have been packaged in maps, models, research reports, or other artifacts. You can also go back to your raw data if needed.
    • Write each key insight on a sticky note. These will be used to spark a first pass at potential principles.
• For each insight, have everyone take a pass individually at articulating a principle derived from just that insight. You can use sticky notes again or a quarter sheet of 8.5” x 11” (A6) template to give people a little more structure (see Figure 6.3).
    A hand with a pen writes notes with insights and corresponding principles
    Figure 6.3: A simple template to generate insight-level principles quickly.
    • At this stage, you should coach participants to avoid finding the perfect words or a pithy way to communicate a potential principle. Instead, focus on getting the core lesson learned from the insight and what advice you would give others to guide product or service decisions in the future. Table 6.1 shows a couple of examples of what a good first pass looks like.
    • At this stage, don’t be a wordsmith. Work quickly to reframe your insights from something you know (“Most people don’t want to…”) to what should be done to stay true to this insight (“Make it easy for people…”).
    • Work your way through all the insights until everyone has a principle for each one.
Table 6.1: From insights to draft principles

    Insight: Most people don’t want to do their homework first. They want to get started and learn what they need to know when they need to know it.
    Principle: Make it easy for people to dive in and collect knowledge when it’s most relevant.

    Insight: Everyone believes their situation (financial, home, health) is unique and reflects their specific circumstances, even if it’s not true.
    Principle: Approach people as they see themselves: unique people in unique situations.

    Finding Patterns

    You now have a superset of individual principles from which a handful of experience principles will emerge. Your next step is to find the patterns within them. You can use affinity mapping to identify principles that speak to a similar theme or intent. As with any clustering activity, this may take a few iterations until you feel that you have mutually exclusive categories. You can do this in just a few steps:

• Select one workshop participant to present the principles one by one, explaining the intent behind each.
    • Cycle through the rest of the group, combining like principles and noting where principles conflict with one another. As you cluster, the dialogue the group has is as important as where the principles end up.
• Once things settle down, you and your colleagues can take a first pass at articulating a principle for each cluster. A simple half sheet (8.5” x 4.25” or A5) template can give some structure to this step. Again, don’t get too precious with every word yet (see Figure 6.4). Get the essence down so that you and others can understand and further refine it with the other principles.
    • You should end up with several mutually exclusive categories with a draft principle for each.

    Designing Principles as a System

    No experience principle is an island. Each should be understandable and useful on its own, but together your principles should form a system. Your principles should be complementary and reinforcing. They should be able to be applied across channels and throughout your product or service development process. See the following “Experience Principles Refinement Workshop” for tips on how to critique your principles to ensure that they work together as a complete whole.

  4. The Cult of the Complex

    ’Tis a gift to be simple. Increasingly, in our line of work, ’tis a rare gift indeed.

    In an industry that extols innovation over customer satisfaction, and prefers algorithm to human judgement (forgetting that every algorithm has human bias in its DNA), perhaps it should not surprise us that toolchains have replaced know-how.

    Likewise, in a field where young straight white dudes take an overwhelming majority of the jobs (including most of the management jobs) it’s perhaps to be expected that web making has lately become something of a dick measuring competition.

    It was not always this way, and it needn’t stay this way. If we wish to get back to the business of quietly improving people’s lives, one thoughtful interaction at a time, we must rid ourselves of the cult of the complex. Admitting the problem is the first step in solving it.

    And the div cries Mary

    In 2001, more and more of us began using CSS to replace the non-semantic HTML table layouts with which we’d designed the web’s earliest sites. I soon noticed something about many of our new CSS-built sites. I especially noticed it in sites built by the era’s expert backend coders, many of whom viewed HTML and CSS as baby languages for non-developers.

    In those days, whether from contempt for the deliberate, intentional (designed) limitations of HTML and CSS, or ignorance of the HTML and CSS framers’ intentions, many code jockeys who switched from table layouts to CSS wrote markup consisting chiefly of divs and spans. Where they meant list item, they wrote span. Where they meant paragraph, they wrote div. Where they meant level two headline, they wrote div or span with a classname of h2, or, avoiding even that tragicomic gesture toward document structure, wrote a div or span with verbose inline styling. Said div was followed by another, and another. They bred like locusts, stripping our content of structural meaning.

    As an early adopter and promoter of CSS via my work in The Web Standards Project (kids, ask your parents), I rejoiced to see our people using the new language. But as a designer who understood, at least on a basic level, how HTML and CSS were supposed to work together, I chafed.

    Cry, the beloved font tag

    Everyone who wrote the kind of code I just described thought they were advancing the web merely by walking away from table layouts. They had good intentions, but their executions were flawed. My colleagues and I here at A List Apart were thus compelled to explain a few things.

    Mainly, we argued that HTML consisting mostly of divs and spans and classnames was in no way better than table layouts for content discovery, accessibility, portability, reusability, or the web’s future. If you wanted to build for people and the long term, we said, then simple, structural, semantic HTML was best—each element deployed for its intended purpose. Don’t use a div when you mean a p.
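To make the contrast concrete, here is a minimal sketch of the two approaches (the content is invented for illustration):

    <!-- The div-and-span approach we argued against: structure in name only -->
    <div class="h2">Contact us</div>
    <span class="item">Email</span>
    <span class="item">Phone</span>

    <!-- Simple, structural, semantic HTML: each element used for its purpose -->
    <h2>Contact us</h2>
    <ul>
      <li>Email</li>
      <li>Phone</li>
    </ul>

A screen reader, a search engine, or a future maintainer gets real structure from the second version; the first offers only the appearance of it.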

    This basic idea, and I use the adjective advisedly, along with other equally rudimentary and self-evident concepts, formed the basis of my 2003 book Designing With Web Standards, which the industry treated as a revelation, when it was merely common sense.

    The message messes up the medium

    When we divorce ideas from the conditions under which they arise, the result is dogma and misinformation—two things the internet is great at amplifying. Somehow, over the years, in front-end design conversations, the premise “don’t use a div when you mean a p” got corrupted into “divs are bad.”

    A backlash in defense of divs followed this meaningless running-down of them—as if the W3C had created the div as a forbidden fruit. So, let’s be clear. No HTML element is bad. No HTML element is good. A screwdriver is neither good nor bad, unless you try to use it as a hammer. Good usage is all about appropriateness.

    Divs are not bad. If no HTML5 element is better suited to an element’s purpose, divs are the best and most appropriate choice. Common sense, right? And yet.

    Somehow, the two preceding simple sentences are never the takeaway from these discussions. Somehow, over the years, a vigorous defense of divs led to a defiant (or ignorant) overuse of them. In some strange way, stepping back from a meaningless rejection of divs opened the door to gaseous frameworks that abuse them.

Note: it isn’t the divs themselves we mind—after all, they are not living things, and we are not purists. It’s the people who use the stuff we design who suffer from our uninformed or lazy over-reliance on these div-ridden, gassy tools, and that suffering is what we protest. These div-ridden, overbuilt frameworks stuffed with mystery meat offer the developer tremendous power—especially the power to build things quickly. But that power comes at a price your users pay: a hundred tons of stuff your project likely doesn’t need, but that you force your users to download anyway. And the bloat is not the only problem. For who knows what evil lurks in someone else’s code?

    Two cheers for frameworks

    If you entered web design and development in the past ten years, you’ve likely learned and may rely on frameworks. Most of these are built on meaningless arrays of divs and spans—structures no better than the bad HTML we wrote in 1995, however more advanced the resulting pages may appear. And what keeps the whole monkey-works going? JavaScript, and more JavaScript. Without it, your content may not render. With it, you may deliver more services than you intended to.

    There’s nothing wrong with using frameworks to quickly whip up and test product prototypes, especially if you do that testing in a non-public space. And theoretically, if you know what you’re doing, and are willing to edit out the bits your product doesn’t need, there’s nothing wrong with using a framework to launch a public site. Notice the operative phrases: if you know what you’re doing, and are willing to edit out the bits your product doesn’t need.

    Alas, many new designers and developers (and even many experienced ones) feel like they can’t launch a new project without dragging in packages from NPM, or Composer, or whatever, with no sure idea what the code therein is doing. The results can be dangerous. Yet here we are, training an entire generation of developers to build and launch projects with untrusted code.

    Indeed, many designers and developers I speak with would rather dance naked in public than admit to posting a site built with hand-coded, progressively enhanced HTML, CSS, and JavaScript they understand and wrote themselves. For them, it’s a matter of job security and viability. There’s almost a fear that if you haven’t mastered a dozen new frameworks and tools each year (and by mastered, I mean used), you’re slipping behind into irrelevancy. HR folks who write job descriptions listing the ten thousand tool sets you’re supposed to know backwards and forwards to qualify for a junior front-end position don’t help the situation.

    CSS is not broken, and it’s not too hard

As our jerry-built contraptions, lashed together with fifteen layers of code we don’t understand and didn’t write ourselves, start to buckle and hiss, we blame HTML and CSS for the faults of developers. This fault-finding gives rise to ever more complex cults of specialized CSS, with internecine sniping between cults serving as part of their charm. New sects spring up, declaring CSS is broken, only to splinter as members disagree about precisely which way it’s broken, or which external technology not intended to control layout should be used to “fix” CSS. (Hint: They mostly choose JavaScript.)

    Folks, CSS is not broken, and it’s not too hard. (You know what’s hard? Chasing the ever-receding taillights of the next shiny thing.) But don’t take my word for it. Check these out:

    CSS Grid is here; it’s logical and fairly easy to learn. You can use it to accomplish all kinds of layouts that used to require JavaScript and frameworks, plus new kinds of layout nobody’s even tried yet. That kind of power requires some learning, but it’s good learning, the kind that stimulates creativity, and its power comes at no sacrifice of semantics, or performance, or accessibility. Which makes it web technology worth mastering.
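As a small taste, here is a minimal sketch of a responsive card layout in CSS Grid—no framework, no JavaScript, and the selector name is hypothetical:

    .card-grid {
      display: grid;
      /* As many 16em columns as fit, each stretching to share leftover space */
      grid-template-columns: repeat(auto-fill, minmax(16em, 1fr));
      grid-gap: 1.5em; /* simply "gap" in newer browsers */
    }

Layouts like this used to require a grid framework’s worth of divs and classes; now it’s a few declarative lines.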

    The same cannot be said for our deluge of frameworks and alternative, JavaScript-based platforms. As a designer who used to love creating web experiences in code, I am baffled and numbed by the growing preference for complexity over simplicity. Complexity is good for convincing people they could not possibly do your job. Simplicity is good for everything else.

    Keep it simple, smarty

Good communication strives for clarity. Design is at its most brilliant when it appears most obvious—most simple. The question for web designers should never be “how complex can we make it?” But that’s what it has become. Just as, in pursuit of “delight,” we forget the true joy reliable, invisible interfaces can bring, so too, in chasing job security, do we pile on the platform requirements, forgetting that design is about solving business and customer problems … and that baseline skills never go out of fashion. As ALA’s Brandon Gregory, writing elsewhere, explains:

    I talk with a lot of developers who list Angular, Ember, React, or other fancy JavaScript libraries among their technical skills. That’s great, but can you turn that mess of functions the junior developer wrote into a custom extensible object that we can use on other projects, even if we don’t have the extra room for hefty libraries? Can you code an image slider with vanilla JavaScript so we don’t have to add jQuery to an older website just for one piece of functionality? Can you tell me what recursion is and give me a real-world example?
    I interview web developers. Here’s how to impress me.

    Growing pains

    There’s a lot of complexity to good design. Technical complexity. UX complexity. Challenges of content and microcopy. Performance challenges. This has never been and never will be an easy job.

    Simplicity is not easy—not for us, anyway. Simplicity means doing the hard work that makes experiences appear seamless—the sweat and torture-testing and failure that eventually, with enough effort, yields experiences that seem to “just work.”

    Nor, in lamenting our industry’s turn away from basic principles and resilient technologies, am I suggesting that CDNs and Git are useless. Or wishing that we could go back to FTP—although I did enjoy the early days of web design, when one designer could do it all. I’m glad I got to experience those simpler times.

    But I like these times just fine. And I think you do, too. Our medium is growing up, and it remains our great privilege to help shape its future while creating great experiences for our users. Let us never forget how lucky we are, nor, in chasing the ever-shinier, lose sight of the people and purpose we serve.

  5. Onboarding: A College Student Discovers A List Apart

What would you say if I told you I just read and analyzed over 350 articles from A List Apart in less than six weeks? “You’re crazy!” might have passed through your lips. In that case, what would you say if I were doing it for a grade? Well, you might say that makes sense.

As a part of an Independent Research Study for my undergraduate degree, I wanted to fill in some of the gaps I had when it came to working with the World Wide Web. I wanted to know more about user experience and user interface design; however, I needed the most help getting to know the industry in general. Naturally, my professor directed me to A List Apart.

At first I wasn’t sure what I was going to get out of the assignment other than the credit I needed to graduate. What could one website really tell me? As I read article after article, I realized that I wasn’t just looking at a website—I was looking at a community. A community with a history in which people have struggled to build the web the right way. One that is constantly working to be open to all. One that is always learning, always evolving, and sometimes hard to keep up with. A community that, without my realizing it, I had become a part of. For me, the web has pretty much always been there, but now that I am better acquainted with its past, I am energized to be a part of its future. Take a look at some of the articles that inspired this change in me.

    A bit of history

    I started in the Business section and went back as far as November 1999. What a whirlwind that was! I had no idea what people went through and the battles that they fought to make the web what it is today. Now, I don’t mean to date any of you lovely readers, but I would have been three years old when the first business article on A List Apart was published, so everything I read until about 2010 was news to me.

    For instance, when I came across Jeffrey Zeldman’s “Survivor! (How Your Peers Are Coping with the Dotcom Crisis),” published in 2001, I had no idea what he was talking about! The literal note I wrote for that article was: “Some sh** went down in the late 1990s???” I was in the dark until I had the chance to Google it and sheepishly ask my parents.

    I had the same problem with the term Web 2.0. It wasn’t until I looked it up that I realized I didn’t know what it was, because I never experienced Web 1.0 (having not had access to the internet until 2004). In that short time, the industry had completely reinvented itself before I ever had a chance to log on!

    The other bit of history that surprised me was how long and hard people had to fight to get web standards and accessibility in line. In school I’ve always been taught to make my sites accessible, and that just seemed like common sense to me. I guess I now understand why I have mixed feelings about Flash.

    What I learned about accessibility

    Accessibility is one of the topics I took a lot of notes on. I was glad to see that although a lot of progress had been made in this area, people were still taking the time to write about and constantly make improvements to it. In Beth Raduenzel’s “A DIY Web Accessibility Blueprint,” she explains the fundamentals to remember when designing for accessibility, including considering:

    • keyboard users;
    • blind users;
    • color-blind users;
    • low-vision users;
    • deaf and hard-of-hearing users;
    • users with learning disabilities and cognitive limitations;
    • mobility-impaired users;
    • users with speech disabilities;
    • and users with seizure disorders.

    It was nice to have someone clearly spell it out. However, the term “user” was used a lot. This distances us from the people we are supposed to be designing for. Anne Gibson feels the same way; in her article, she states that “[web] accessibility means that people can use the web.” All people. In “My Accessibility Journey: What I’ve Learned So Far,” Manuel Matuzović gives exact examples of this:

    • If your site takes ten seconds to load on a mobile connection, it’s not accessible.
    • If your site is only optimized for one browser, it’s not accessible.
    • If the content on your site is difficult to understand, your site isn’t accessible.

    It goes beyond just people with disabilities (although they are certainly not to be discounted).

    I learned a lot of tips for designing with specific people in mind. Like including WAI-ARIA in my code to benefit visually-impaired users, and checking the color contrast of my site for people with color blindness and low-vision problems. One article even inspired me to download a Sketch plugin to easily check the contrast of my designs in the future. I’m more than willing to do what I can to allow my website to be accessible to all, but I also understand that it’s not an easy feat, and I will never get it totally right.

    User research and testing methods that were new to me

    Nevertheless, we still keep learning. Something else I desperately wanted to absorb from A List Apart was the countless research, testing, and development methods I came across in my readings. Every time I turn around, someone else has come up with another way of working, and I’m always trying to keep my finger on the pulse.

    I’m happy to report that the majority of the methods I read about were ones I already knew and had used in my own projects at school. I’ve been doing open interview techniques, personas, style tiles, and element collages all along. Still, I was surprised by how many practices were new to me.

    The Kano Model, the Core Model, Wizard of Oz prototyping, and think-alouds were some of the methods that piqued my curiosity. Others like brand architecture research, call center log analysis, clickstream analysis, search analytics, and stakeholder reviews I’ve heard of before, but have never been given the opportunity to try. 

    Unattended qualitative research, A/B testing, and fake-door testing are the ones that stood out to me. I liked that they allow you to conduct research even if you don’t have any users in front of you. I learned a lot of new terms and did a lot of research in this section. After all, it’s easy to get lost in all the jargon.

    The endless number of abbreviations

    I spent a lot of my time Googling terms during this project—especially with the older articles that mentioned programs like Fireworks that aren’t really used anymore. One of my greatest fears in working with web design is that someone will ask me something and I will have no idea what they are talking about. When I was reading all the articles, I had the hardest time with the substantial number of abbreviations I came across: AJAX, API, ARIA, ASCII, B2B, B2C, CMS, CRM, CSS, EE, GUI, HTML, IIS, IPO, JSP, MSA, RFP, ROI, RSS, SASS, SEM, SEO, SGML, SOS, SOW, SVN, and WYSIWYG, just to name a few. Did you manage to get them all? Probably not.

    We don’t use abbreviations in school because they aren’t always clear, and the professors know we won’t know what they mean. To a newbie like me, these abbreviations feel like a barrier: a wall dividing the veterans of the industry from those trying to enter it. I can’t imagine how the clients must feel.

    It seems as if I am not alone in my frustrations. Inayaili de León says in her article “Becoming Better Communicators,” “We want people to care about design as much as we do, but how can they if we speak to them in a foreign language?” I’m training to be a designer, I’m immersed in this field, and I still had to look up almost every abbreviation listed above.

    What I learned about myself

    Prior to taking on this assignment, I would have been very hesitant to declare myself capable of creating digital design. To my surprise, I’m not alone. Matt Griffin thinks, “… the constant change and adjustments that come with living on the internet can feel overwhelming.” Kendra Skeene admits, “It’s a lot to keep track of, whether you’ve been working on the web for [twenty] years or only [twenty] months.”

    My fear of not knowing all the fancy lingo was lessened when I read Lyza Danger Gardner’s “Never Heard of It.” She is a seasoned professional who admits to not knowing it all, so I, a soon-to-be-grad, can too. I have good foundations and Google on my side for those pesky abbreviations that keep popping up. As long as I just remember to use my brain as Dave Rupert suggests, when I go to get a job I should do just fine.

    Entering the workplace

    Before starting this assignment, I knew I wanted to work in digital and interaction design, but I didn’t know where. I was worried I didn’t know enough about the web to be able to design for it—that all the jobs out there would require me to know coding languages I’d never heard of before, and I’d have a hard time standing out among the crowd.

    The articles I read on A List Apart supplied me with plenty of solid career advice. After reading articles written by designers, project managers, developers, marketers, writers, and more, I’ve come out with a better understanding of what kind of work I want to do. In the article “80/20 Practitioners Make Better Communicators,” Katie Kovalcin makes a good point about not forcing yourself to learn skills just because you feel the need to:

    We’ve all heard the argument that designers need to code. And while that might be ideal in some cases, the point is to expand your personal spectrum of skills to be more useful to your team, whether that manifests itself in the form of design, content strategy, UX, or even project management. A strong team foundation begins by addressing gaps that need to be filled and the places where people can meet in the middle.

    I already have skills that someone desperately needs. I just need to find the right fit and expand my skills from there. Brandon Gregory also feels that hiring isn’t all about technical knowledge. In his article, he says, “personality, fit with the team, communication skills, openness to change, [and] leadership potential” are just as important.

    Along with solid technical fundamentals and good soft skills, it seems as if having a voice is also crucial. When I read Jeffrey Zeldman’s article “The Love You Make,” it became clear to me that if I ever wanted to get anywhere with my career, I was going to have to start writing.

    Standout articles

    The writers on A List Apart have opened my eyes to many new subjects and perspectives on web design. I particularly enjoyed looking through the game design lens in Graham Herrli’s “Gaming the System … and Winning.” It was one of the few articles where I copied his diagram on interaction personality types and their goals into my notebook. Another article that made me consider a new perspective was “The King vs. Pawn Game of UI Design” by Erik Kennedy. Starting with one simple element and growing from there really made something click in my head.

    However, I think that the interview I read between Mica McPheeters and Sara Wachter-Boettcher stuck with me the most. I actually caught myself saying “hmm” out loud as I was reading along. Sara’s point about crash-test dummies being sized to the average male completely shifted my understanding about how important user-centered design is. Like, life-or-death important. There is no excuse not to test your products or services on a variety of users if this is what’s at stake! It’s an article I’m glad I read.

    Problems I’ve noticed in the industry

    During the course of my project, I noticed some things about A List Apart, the site I was spending so much time on. For example, it wasn’t until I got to the articles published after 2014 that I really started to understand and relate to the content; funnily enough, that was the year I started my design degree.

    I also noticed that it was around this time that female writers became much more prominent on the site. Today there may be many women on A List Apart, but I must point out a lack of women of color. Shoutout to Aimee Gonzalez-Cameron for her article “Hello, My Name is <Error>,” a beautiful assertion for cultural inclusion on the web through user-centered design.

    Despite the lack of representation of women of color, I was very happy to see many writers acknowledge their privilege in the industry. Thanks to Cennydd Bowles, Matt Griffin, and Rian van der Merwe for their articles. My only qualm is that the topic of privilege has only appeared on A List Apart in the last five years. Because isn’t it kinda ironic? As creators of the web we aim to allow everyone access to our content, but not everyone has access to the industry itself. Sara Wachter-Boettcher wrote an interesting article that expands on this idea, which you should read if you haven’t already. However, I won’t hold it against any of you. That’s why we are here anyway: to learn.

    The takeaway

    Looking back at this assignment, I’m happy to say that I did it. It was worth every second (even with the possible eye damage from reading off my computer screen for hours on end). It was worth it because I learned more than I had ever anticipated. I received an unexpected history lesson of the recent internet past. I was bombarded by an explosion of new terms and abbreviations. I learned a lot about myself and how I can possibly fit into this community. Most importantly, I came out on the other end with more confidence in myself and my abilities—which is probably the greatest graduation gift I could receive from a final project in my last year of university. Thanks for reading, and wish me luck!

    Thanks

    Thanks to my Interactive Design professor Michael LeBlanc for giving me this assignment and pushing me to take it further.

  6. The Slow Death of Internet Explorer and the Future of Progressive Enhancement

    My first full-time developer job was at a small company. We didn’t have BrowserStack, so we cobbled together a makeshift device lab. Viewing a site I’d been making on a busted first-generation iPad with an outdated version of Safari, I saw a distorted, failed mess. It brought home to me a quote from Douglas Crockford, who once deemed the web “the most hostile software engineering environment imaginable.”

    The “works best with Chrome” problem

    Because of this difficulty, a problem has emerged. Earlier this year, a widely shared article on The Verge warned of “works best with Chrome” messages seen around the web.

    There are more examples of this problem. In the popular messaging app Slack, voice calls work only in Chrome. In response to help requests, Slack explains its decision like this: “It requires significant effort for us to build out support and triage issues on each browser, so we’re focused on providing a great experience in Chrome.” (Emphasis mine.) Google itself has repeatedly built sites—including Google Meet, Allo, YouTube TV, Google Earth, and YouTube Studio—that block alternative browsers entirely. This is clearly a bad practice, but it highlights the fact that cross-browser compatibility can be difficult and time-consuming.

    The significant feature gap, though, isn’t between Chrome and everything else. Of far more significance is the increasingly gaping chasm between Internet Explorer and every other major browser. Should our development practices be hamstrung by the past? Or should we dash into the future, leaving some users in our wake? I’ll argue for a middle ground. We can make life easier for ourselves without breaking the backward compatibility of the web.

    The widening gulf

    Chrome, Opera, and Firefox ship new features constantly. Edge and Safari eventually catch up. Internet Explorer, meanwhile, has been all but abandoned by Microsoft, which is attempting to push Windows users toward Edge. IE receives nothing but security updates. It’s a frustrating period for client-side developers. We read about new features but are often unable to use them—due to a single browser with a diminishing market share.

    Internet Explorer’s global market share, 2013–2018: a decline from about 23 percent to just 3 percent.

    Some new features are utterly trivial (caret-color!); some are for particular use cases you may never have (WebGL 2.0, Web MIDI, Web Bluetooth). Others already feel near-essential for even the simplest sites (object-fit, Grid).

    A list of features supported in Chrome but unavailable in IE11, taken from caniuse.com. This truncated screenshot captures only part of an extraordinarily long list.

    The promise and reality of progressive enhancement

    For content-driven sites, the question of browser support should never be answered with a simple yes or no. CSS and HTML were designed to be fault-tolerant. If a particular browser doesn’t support shape-outside or service workers or font-display, you can still use those features. Your website will not implode. It’ll just lack that extra stylistic flourish or performance optimization in non-supporting browsers.
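
    As a quick, hedged illustration (the font name and file below are placeholders of my own invention): unsupporting browsers simply skip declarations they don’t understand, and the design degrades gracefully.

    /* Browsers that understand font-display swap in fallback text while
       the webfont loads; older browsers ignore the descriptor entirely. */
    @font-face {
        font-family: "Example Sans";
        src: url("example-sans.woff2") format("woff2");
        font-display: swap;
    }

    /* Supporting browsers flow text around the image's circular shape;
       everyone else still gets a plain rectangular float. */
    img.portrait {
        float: left;
        shape-outside: circle(50%);
    }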

    Other features, such as CSS Grid, require a bit more work. A page layout is less an enhancement than a necessity, and Grid has finally brought a real layout system to the web. When used with care for simple cases, Grid can gracefully fall back to older layout techniques. We could, for example, fall back to flex-wrap. Flexbox is by now a taken-for-granted feature among developers, yet even that is riddled with bugs in IE11.

    .grid > * {
        width: 270px; /* no grid fallback style */
        margin-right: 30px; /* no grid fallback style */
    }

    @supports (display: grid) {
        .grid > * {
            width: auto;
            margin-right: 0;
        }
    }

    In the code above, I’m setting all the immediate children of the grid to have a specified width and a margin. For browsers that support Grid, I’ll use grid-gap in place of margin and define the width of the items with the grid-template-columns property. It’s not difficult, but it adds bloat and complexity if it’s repeated throughout a codebase for different layouts. As we start building entire page layouts with Grid (and eventually display: contents), providing a fallback for IE will become increasingly arduous. By using @supports for complex layout tasks, we’re effectively solving the same problem twice—using two different methods to create a similar result.
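
    The container half of that pattern might look something like the following (my own sketch of the approach just described, not code from a production stylesheet):

    @supports (display: grid) {
        .grid {
            display: grid;
            grid-template-columns: repeat(auto-fill, 270px);
            grid-gap: 30px; /* replaces the fallback margin */
        }
    }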

    Not every feature can be used as an enhancement. Some things are imperative. People have been getting excited about CSS custom properties since 2013, but they’re still not widely used, and you can guess why: Internet Explorer doesn’t support them. Or take Shadow DOM. People have been doing conference talks about it for more than five years. It’s finally set to land in Firefox and Edge this year, and lands in Internet Explorer … at no time in the future. You can’t patch support with transpilers or polyfills or prefixes.

    Users have more browsers than ever to choose from, yet IE manages to single-handedly tie us to the pre-evergreen past of the web. If developing Chrome-only websites represents one extreme of bad development practice, shackling yourself to a vestigial, obsolete, zombie browser surely represents the other.

    The problem with shoehorning

    Rather than eschewing modern JavaScript features, we’ve made polyfilling and transpiling the norm. ES6 is supported everywhere other than IE, yet we’re sending all browsers transpiled versions of our code. Transpilation isn’t great for performance. A single five-line async function, for example, may well transpile to twenty-five lines of code.
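
    For instance, a function as small as this one (a generic sketch; the endpoint is hypothetical) comes out of an ES5-targeting transpiler wrapped in generator-and-helper machinery several times its size, before a Promise polyfill is even counted:

    async function getUser(id) {
        // One awaited network request: five tidy lines of source.
        const response = await fetch('/api/users/' + id);
        if (!response.ok) throw new Error('Request failed: ' + response.status);
        return response.json();
    }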

    “I feel some guilt about the current state of affairs,” Alex Russell said of his previous role leading development of Traceur, a transpiler that predated Babel. “I see so many traces where the combination of Babel transpilation overhead and poor [webpack] foo totally sink the performance of a site. … I’m sad that we’re still playing this game.”

    What you can’t transpile, you can often polyfill. Polyfill.io has become massively popular. Chrome gets sent a blank file. Ancient versions of IE receive a giant mountain of polyfills. We are sending the largest payload to those the least equipped to deal with it—people stuck on slow, old machines.

    What is to be done?

    Prioritize content

    Cutting the mustard is a technique popularized by the front-end team at BBC News. The approach cuts the browser market in two: all browsers receive a base experience or core content. JavaScript is conditionally loaded only by the more capable browsers. Back in 2012, their dividing line was this:

    if ('querySelector' in document && 'localStorage' in window && 'addEventListener' in window) {
         // load the javascript
    }

    Tom Maslen, then a lead developer at the BBC, explained the rationale: “Over the last few years I feel that our industry has gotten lazy because of the crazy download speeds that broadband has given us. Everyone stopped worrying about how large their web pages were and added a ton of JS libraries, CSS files, and massive images into the DOM. This has continued on to mobile platforms that don’t always have broadband speeds or hardware capacity to render complex code.”

    The Guardian, meanwhile, entirely omits both JavaScript and stylesheets from Internet Explorer 8 and further back.

    The Guardian navigation as seen in Internet Explorer 8: a list of links stripped of the site’s visual design. Unsophisticated yet functional.

    Nature.com takes a similar approach, delivering only a very limited stylesheet to anything older than IE10.

    The nature.com homepage as seen in Internet Explorer 9, showing only a minimal visual design.

    Were you to break into a museum, steal an ancient computer, and open Netscape Navigator, you could still happily view these websites. A user comes to your site for the content. They didn’t come to see a pretty gradient or a nicely rounded border-radius. They certainly didn’t come for the potentially nauseating parallax scroll animation.

    Anyone who’s been developing for the web for any amount of time will have come across a browser bug. You check your new feature in every major browser and it works perfectly—except in one. Memorizing support info from caniuse.com and using progressive enhancement is no guarantee that every feature of your site will work as expected.

    The W3C’s website for the CSS Working Group as viewed in the latest version of Safari, showing overlapping, unreadable text.

    Regardless of how perfectly formed and well-written your code is, sometimes things break through no fault of your own, even in modern browsers. If you’re not actively testing your site, bugs are more likely to reach your users, unbeknownst to you. And no company has the resources to actively test its site on every old version of every browser. Malfunctioning JavaScript can ruin a web experience and make a simple page unusable. Rather than transpiling and polyfilling and hoping for the best, then, we can deliver what the person came for, in the most resilient, performant, and robust form possible: unadulterated HTML. Rather than leaving users to a mass of polyfills and potential JavaScript errors, we give them a basic but functional experience.

    Make a clean break

    What could a mustard cut look like going forward? You could conduct a feature query using JavaScript to conditionally load the stylesheet, but relying on JavaScript introduces a brittleness that would be best to avoid. You can’t use @import inside an @supports block, so we’re left with media queries.

    The following query will prevent the CSS file from being delivered to any version of Internet Explorer and older versions of other browsers:

    <link id="mustardcut" rel="stylesheet" href="stylesheet.css" media="
        only screen,
        only all and (pointer: fine), only all and (pointer: coarse), only all and (pointer: none),
        min--moz-device-pixel-ratio:0) and (display-mode:browser), (min--moz-device-pixel-ratio:0)
    ">

    We’re not really interested in what particular features this query is testing for; it’s just a hacky way to split between legacy and modern browsers. The shiny, modern site will be delivered to Edge, Chrome (and Chrome for Android) 39+, Opera 26+, Safari 9+, Safari on iOS 9+, and Firefox 47+. I based the query on the work of Andy Kirk. If you want to take a cutting-the-mustard approach but have to meet different support demands, he maintains a GitHub repo with a range of options.

    We can use the same media query to conditionally load a JavaScript file. This gives us one consistent dividing line between old and modern browsers:

    (function() {
        var linkEl = document.getElementById('mustardcut');
        if (window.matchMedia && window.matchMedia(linkEl.media).matches) {
            var script = document.createElement('script');
            script.src = 'your-script.js';
            script.async = true;
            document.body.appendChild(script);
        }
    })();

    matchMedia brings the power of CSS media queries to JavaScript. The matches property is a boolean that reflects the result of the query. If the media query we defined in the link tag evaluates to true, the JavaScript file will be added to the page.

    It might seem like an extreme solution. From a marketing point of view, the site no longer looks “professional” to a small number of visitors. However, we’ve managed to improve the performance for those stuck on old technology while also opening the possibility of using the latest standards on browsers that support them. This is far from a new approach. All the way back in 2001, A List Apart stopped delivering a visual design to Netscape 4. Readership among users of that browser went up.

    Front-end development is complicated at the best of times. Adding support for a technologically obsolete browser adds an inordinate amount of time and frustration to the development process. Testing becomes onerous. Bug-fixing looms large.

    By making a clean break with the past, we can focus our energies on building modern sites using modern standards without leaving users stuck on antiquated browsers with an untested and possibly broken site. We save a huge amount of mental overhead. If your content has real value, it can survive without flashy embellishments. And for Internet Explorer users on Windows 10, Edge is preinstalled. The full experience is only a click away.

    The A List Apart masthead in Internet Explorer 11, with the browser’s ever-present “Open Microsoft Edge” button.

    Developers must avoid living in a bubble of MacBook Pros and superfast connections. There’s no magic bullet that enables developers to use bleeding-edge features. You may still need Autoprefixer and polyfills. If you’re planning to have a large user base in Asia and Africa, you’ll need to build a site that looks great in Opera Mini and UC Browser, which have their own limitations. You might choose a different cutoff point for now, but it will increasingly pay off, in terms of both user experience and developer experience, to make use of what the modern web has to offer.


  7. So You Want to Write an Article?

    So you want to write an article. Maybe you’ve got a great way of organizing your CSS, or you’re a designer who has a method of communicating really well with developers, or you have some insight into how to best use a new technology. Whatever the topic, you have insights, you’ve read the basics of finding your voice, and you’re ready to write and submit your first article for a major publication. Here’s the thing: most article submissions suck. Yours doesn’t have to be one of them.

    At A List Apart, we want to see great minds in the industry write the next great articles, and you could be one of our writers. I’ve been on the editorial team here for about nine months now, and I’ve written a fair share of articles here as well. Part of what I do is review article submissions and give feedback on what’s working and what’s not. We publish different kinds of articles, but many of the submissions I see—particularly from newer writers—fall into the same traps. If you’re trying to get an article published in A List Apart or anywhere else, knowing these common mistakes can help your article’s chances of being accepted.

    Keep introductions short and snappy

    Did you read the introduction above? My guess is a fair share of readers skipped straight to this point. That’s pretty typical behavior, especially for articles like this one that offer several answers to one clear question. And that’s totally fine. If you’re writing, realize that some people will do the same thing. There are some things you can do to improve the chances of your intro being read, though.

    Try to open with a bang. A recent article from Caroline Roberts has perhaps the best example of this I’ve ever seen: “I won an Emmy for keeping a website free of dick pics.” When I saw that in the submission, I was instantly hooked and read the whole thing. It’s hilarious, it shows she has expertise on managing content, and it shows that the topic is more involved and interesting than it may at first seem. A more straightforward introduction to the topic of content procurement would seem very boring in comparison. Your ideas are exciting, so show that right away if you can. A funny or relatable story can also be a great way to lead into an article—just keep it brief!

    If you can’t open with a bang, keep it short. State the problem, maybe put something about why it matters or why you’re qualified to write about it, and get to the content as quickly as possible. If a line in your introduction does not add value to the article, delete it. There’s little room for meandering in professional articles, but there’s absolutely no room for it in introductions.

    Get specific

    Going back to my first article submission for A List Apart, way before I joined the team, I wanted to showcase my talent and expertise, and I thought the best way to do that was to cram all of it into one article. I wrote an overview of professional skills for web professionals. There was some great information in there, based on my years of experience working up through the ranks and dealing with workplace drama. I was so proud when I submitted the article. It wasn’t accepted, but I got some great feedback from the editor-in-chief: get more specific.

    The most effective articles I see deal with one central idea. The more disparate ideas an article contains, the less focused and impactful it is. There will be exceptions to this, of course, but they’re rarer than the articles that suffer for trying to do too much. Don’t give yourself a handicap by taking an approach that fails more often than it succeeds.

    Covering one idea in great detail, with research and examples to back it up, usually goes a lot further in displaying your expertise than an overview of a bunch of disparate thoughts. Truth be told, a lot of people have probably arrived at the same ideas you have. The insights you have are not as important as your evidence and eloquence in expressing them.

    Can an overview article work? Actually, yes, but you need to frame it within a specific problem. One great example I saw was an overview of web accessibility (which has not been published yet). The article followed a fictional project from beginning to end, showing how each team on the project could work toward a goal of accessibility. But the idea was not just accessibility—it was how leaders and project managers could assign responsibility for accessibility. It was a great submission because it began with a problem of breadth and offered a complete solution to that problem. But it only worked because it was written specifically for an audience that needed to understand the whole process. In other words, the comprehensive nature of the article was the entire point, and it stuck to that.

    Keep your audience in mind

    You have a viewpoint. A problem I frequently see with new submissions is forgetting that the audience also has its own viewpoint. You have to know your audience and remember how the audience’s mindset matches yours—or doesn’t. In fact, you’ll probably want to state in your introduction who the intended audience is to hook the right readers. To write a successful article, you have to keep that audience in mind and write for it specifically.

    A common mistake I see writers make is using an article to vent their frustrations about the people who won’t listen to them. The problem is that the audience of our publication usually agrees with the author on these points, so a rant about why he or she is right is ultimately pointless. If you’re writing for like-minded people, it’s usually best to assume the readers agree with you and then either delve into how to best accomplish what you’re writing about or give them talking points to have that conversation in their workplace. Write the kind of advice you wish you’d gotten when those frustrations first surfaced.

    Another common problem is forgetting what the audience already knows—or doesn’t know. If something is common knowledge in your industry, it doesn’t need another explanation. You might link out to another explanation somewhere else just in case, but there’s no need to start from scratch when you’re trying to make a new point. At the same time, don’t assume that all your readers have the same expertise you do. I wrote an article on some higher-level object-oriented programming concepts—something many JavaScript developers are not familiar with. Rather than spend half the article giving an overview of object-oriented programming, though, I provided some links at the beginning of the article that gave a good overview. Pro tip: if you can link out to articles from the same publication you’re submitting to, publications will appreciate the free publicity.

    Defining your audience can also really help with knowing their viewpoint. Many times when I see a submission with two competing ideas, they’re written for different audiences. In the article I mentioned above, I provide some links for developers who may be new to object-oriented programming, but the primary audience is developers who already have some familiarity with it and want to go deeper. Trying to cater to both audiences wouldn’t have doubled the readership—it would have reduced it by making a large part of the article less relevant to readers.

    Keep it practical

    I’ll admit, of all these tips, this is the one I usually struggle with the most. I’m a writer who loves ideas, and I love explaining them in great detail. While there are some readers who appreciate this, most are looking for some tangible ways to improve something. This isn’t to say that big concepts have no place in professional articles, but you need to ask why they are there. Is your five-paragraph explanation of the history of your idea necessary for the reader to make the improvements you suggest?

    This became abundantly clear to me in my first submission of an article on managing ego in the workplace. I love psychology and initially included a lengthy section up-front on how our self-esteem springs from the strengths we leaned on growing up. While this fascinated me, it wasn’t right for an audience of web professionals who wanted advice on how to improve their working relationships. Based on feedback I received, I removed the section entirely and added a section on how to manage your own ego in the workplace—much more practical, and that ended up being a favorite section in the final piece.

    Successful articles solve a problem. Begin with the problem—set it up in your introduction, maybe tell a little story that illustrates how this problem manifests—and then build a case for your solution. The problem should be clear to the reader very early on in the article, and the rest of the article should all be related to that problem. There is no room for meandering and pontification in a professional article. If the article is not relevant and practical, the reader will move on to something else.

    The litmus test for determining the practicality of your article is to boil it down to an outline. Of course all of your writing is much more meaningful than an outline, but look at the outline. There should be several statements along the lines of “Do this,” or “Don’t do this.” You can have other statements, of course, but they should all be building toward some tangible outcome with practical steps for the reader to take to solve the problem set up in your introduction.

    It’s a hard truth you have to learn as a writer that you’ll be much more in love with your ideas than your audience will. Writing professional articles is not about self-expression—it’s about helping and serving your readers. The more clear and concise the content you offer, the more your article will be read and shared.

    Support what you say

    Your opinions, without evidence to support them, will only get you so far. As a writer, your ideas are probably grounded in a lot of real evidence, but your readers don’t know that—you’ll have to show it. How do you show it? Write a first draft and get your ideas out. Then do another pass to look for stories, stats, and studies to support your ideas. Trying to make a point without at least one of these is at best difficult and at worst empty hype. Professionals in your industry are less interested in platitudes and more interested in results. Having some evidence for your claims goes a long way toward demonstrating your expertise and proving your point.

    Going back to my first article in A List Apart, on defusing workplace drama, I had an abstract point to prove, and I needed to show that my insights meant something. My editor on that article was fantastic and asked the right questions to steer me toward demonstrating the validity of my ideas in a meaningful way. Personal stories made up the backbone of the article, and I was able to find social psychology studies to back up what I was saying. These illustrations of the ideas ended up being more impactful than the ideas themselves, and the article was very well-received in the community.

    Storytelling can be an amazing way to bring your insights to life. Real accounts or fictional, well-told stories can serve to make big ideas easier to understand, and they work best when representing typical scenarios, not edge cases. If your story goes against common knowledge, readers will pick up on that instantly and you’ll probably get some nasty comments. Never use a story to prove a point that doesn’t have any other hard evidence to back it up—use stories to illustrate points or make problems more relatable. Good stories are often the most memorable parts of articles and make your ideas and assertions easier to remember.

    Stats are one of the easiest ways to make a point. If you’re arguing that ignoring website accessibility can negatively impact the business, some hard numbers are going to say a lot more than stories. If there’s a good stat to prove your point, always include it, and always be on the lookout for relevant numbers. As with stories, though, you should never try to use stats to distort the truth or prove a point that doesn’t have much else to support it. Mark Twain once said, “There are three kinds of lies: lies, damned lies, and statistics.” You shouldn’t decide what to say and then scour the internet for ways to back it up. Base your ideas on the numbers; don’t base your selection of facts on your idea.

    Studies, including both user experience studies and social psychology experiments, are somewhere in between stories and stats, and a lot of the same advantages and pitfalls also apply. A lot of studies can be expressed as a story—write a quick bit from the point of view of the study participant, then go back and explain what’s really going on. This can be just as engaging and memorable as a good story, but studies usually result in stats, which usually serve to make the stories significantly more authoritative. And remember to link out to the study for people who want to read more about it!

    Just make sure your study wasn’t disproved by later studies. In my first article, linked above, I originally referenced a study to introduce the bystander effect, but an editor wisely pointed out that there’s actually a lot of evidence against that interpretation of the well-known study. Interpretations can change over time, especially as new information comes out. I found a later, more relevant study that illustrated the point better and was less well-known, so it made for a better story.

    Kill your darlings

    The early-twentieth-century writer and critic Arthur Quiller-Couch once said in a speech, “Whenever you feel an impulse to perpetrate a piece of exceptionally fine writing, obey it—whole-heartedly—and delete it before sending your manuscript to press. Murder your darlings.” Variants of this quote were repeated by many authors throughout the twentieth century, and it’s just as true today as when he originally said it.

    What does that mean for your article? Great prose, great analogies, great stories—any bits of brilliant writing that you churn out—only mean as much as they contribute to the subject at hand. If it doesn’t contribute anything, it needs to be killed.

    When getting your article ready for submission, your best friend will be the backspace or delete key on your keyboard. Before submitting, do a read-through for the express purpose of deleting whatever you can to trim down the article. Articles are not books. Brevity is a virtue, and it usually ends up being one of the most important virtues in article submissions.

    Your intro should have a clear thesis so readers know what the article is about. For every bit of writing that follows it, ask if it contributes to your argument. Does it illustrate the problem or solution? Does it give the reader empathy for or understanding of the people you’re trying to help? Does it give them guidance on how to have these conversations in their workplaces? If you can’t relate a sentence back to your original thesis, it doesn’t matter how brilliant it is—it should be deleted.

    Humor can be useful, but many jokes serve as little more than an aside or distraction from the main point. Don’t interrupt your train of thought with a cute joke—use a joke to make your thoughts more clear. It doesn’t matter how funny the joke is; if it doesn’t help illustrate or reinforce one of your points, it needs to go.

    There are times when a picture really is worth a thousand words. Don’t go crazy with images and illustrations in your piece, but if a quick graphic is going to save you a lengthy explanation, go that route.

    So what are you waiting for?

    The industry needs great advice in articles, and many of you could provide that. The points I’ve delved into in this article aren’t just formalities and vague ideas; the editing team at A List Apart has weighed in, and these are problems we see often that weaken articles and make them less accessible to readers. Heeding this advice will strengthen your professional articles, whether you plan to submit to A List Apart or anywhere else. The next amazing article in A List Apart could be yours, and we hope to see you get there.

  8. We’re Looking for People Who Love to Write

    Here at A List Apart, we’re looking for new authors, and that means you. What should you write about? Glad you asked!

    You should write about topics that keep you up at night, passions that make you the first to show up in the office each morning, ideas that matter to our community and about which you have a story to tell or an insight to share.

    We’re not looking for case studies about your company or thousand-foot overviews of topics most ALA readers already know about (i.e., you don’t have to tell A List Apart readers that Sir Tim Berners-Lee invented the web). But you also don’t have to write earth-shaking manifestos or share new ways of working that will completely change the web. A good, strong idea or detailed advice about an industry best practice makes an excellent ALA article.

    Where we’ve been

    Although A List Apart covers everything from accessible UX and product design to advanced typography and content and business strategy, the sweet spot for an A List Apart article is one that combines UI design (and design thinking) with front-end code, especially when it’s innovative. Thus our most popular article of the past ten years was Ethan Marcotte’s “Responsive Web Design”—a marriage of design and code, accessible to people with diverse backgrounds at differing levels of expertise.

    In the decade-plus before that, our most popular articles were Douglas Bowman’s “Sliding Doors of CSS” and Dan Cederholm’s “Faux Columns”—again, marriages of design and code, and mostly in the nature of clever workarounds (because CSS in 2004 didn’t really let us design pages as flexibly and creatively, or even as reliably, as we wanted to).

    From hacks to standards

    Although clever front-end tricks like Sliding Doors, and visionary re-imaginings of the medium like Responsive Web Design, remain our most popular offerings, the magazine has offered fewer of them in recent years, focusing more on UX and strategy. To a certain extent, if a front-end technique isn’t earth-changing (i.e., isn’t more than just a technique), and if it isn’t semantic, inclusive, accessible, and progressively enhanced, we don’t care how flashy it is—it’s not for us.

    The demand to create more powerful layouts was also, in a real way, satisfied by the rise of frameworks and shared libraries—another reason for us to have eased off front-end tricks (although not all frameworks and libraries are equally or in some cases even acceptably semantic, inclusive, accessible, and progressively enhanced—and, sadly, many of their users don’t know or care).

    Most importantly, now that CSS is finally capable of true layout design without hacks, any responsible web design publication will want to ease off on the flow of front-end hacks, in favor of standards-based education, from basic to advanced. Why would any editor or publisher (or framework engineer, for that matter) recommend that designers use 100 pounds of fragile JavaScript when a dozen lines of stable CSS will do?
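
    To make that concrete with a hedged sketch of my own (not a technique from any particular article): the responsive card grid below, which once called for a float framework or JavaScript, is now a handful of lines of standard CSS.

    /* Columns are added and removed to fit the viewport, with no media
       queries, no floats, and no JavaScript. */
    .cards {
        display: grid;
        grid-template-columns: repeat(auto-fill, minmax(16em, 1fr));
        grid-gap: 1.5rem;
    }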

    It will be interesting to see what happens to the demand for layout hack articles in Medium and web design publications and communities over the next twelve months. It will also be interesting to see what becomes of frameworks now that CSS is so capable. But that’s not our problem. Our problem is finding the best ideas for A List Apart’s readers, and working with the industry’s best old and new writers to polish those ideas to near-perfection.

    After all, even more than being known for genius one-offs like Responsive Web Design and Sliding Doors of CSS, A List Apart has spent its life introducing future-friendly, user-focused design advances to this community, i.e., fighting for web standards when table layouts were the rage, fighting for web standards when Flash was the rage, pushing for real typography on the web years before Typekit was a gleam in Jeff Veen’s eye, pushing for readability in layout when most design-y websites thought single-spaced 7px Arial was plenty big enough, promoting accessible design solutions, user-focused solutions, independent content and communities, and so on.

    Call to action

    Great, industry-changing articles are still what we want most, whether they’re front-end, design, content, or strategy-focused. And changing the industry doesn’t have to mean inventing a totally new way of laying out pages or evaluating client content. It can also mean coming up with a compelling argument in favor of an important but embattled best practice. Or sharing an insightful story that helps those who read it be more empathetic and more ethical in their daily work.

    Who will write the next 20 years of great A List Apart articles? That’s where you come in.

    Publishing on A List Apart isn’t as easy-peasy as dashing off a post on your blog, but the results—and the audience—are worth it. And when you write for A List Apart, you never write alone: our industry-leading editors, technical editors, and copyeditors are ready to help you polish your best idea from good to great.

    Come share with us!

  9. Priority Guides: A Content-First Alternative to Wireframes

    No matter your role, if you’ve ever been involved in a digital design project, chances are you’re familiar with wireframes. After all, they’re among the most popular and widely used tools when designing websites, apps, dashboards, and other digital user interfaces.

    But they do have their problems, and wireframes are so integrated into the accepted way of working that many don’t consider those drawbacks. That’s a shame, because the tool’s downsides can seriously undermine user-centricity. Ever lose yourself in aesthetic details when you should have been talking about content and functionality? We have!

    That’s why we use an alternative that avoids the pitfalls of wireframes: the priority guide. Not only does it keep our process user-centered and create more valuable designs for our users (whether used alongside wireframes or as a direct replacement), it has also improved team engagement, collaboration, and design workflows.

    The problem with wireframes

    Wikipedia appropriately defines the wireframe as “a visual guide that represents the skeletal framework of a website. … [It] depicts the page layout or arrangement of the website’s content, including interface elements and navigational systems.” In other words, wireframes are sketches that represent the potential website (or app) in a simplified way, including the placement and shape of any interface elements. They range from low-fidelity rough sketches on paper to high-fidelity colored, textual screens in a digital format.

    Examples of low-fidelity (left) and high-fidelity (right) wireframes: one sketched on paper, one designed with software.

    Because of their visual nature, wireframes are great tools for sketching and exploring design ideas, as well as communicating those ideas to colleagues, clients, and stakeholders. And since they’re so easy to create and adapt with tools such as Sketch or Balsamiq, you also have something to user test early in the design process, allowing usability issues to be addressed sooner than might otherwise be possible.

    But although these are all valuable characteristics of wireframes, there are also some significant downsides.

    The illusion of final design

    Wireframes can provide the illusion that a design is final, or at least in a late stage of completion. Regardless of how carefully you explain to clients or stakeholders that these first concepts are just early explorations and not final—maybe you even decorated them with big “DRAFT” stickers—too often they’ll still enthusiastically exclaim, “Looks good, let’s start building!”

    Killing creativity and engagement

    At Mirabeau, we’ve noticed that wireframes tend to kill creativity. We primarily work in multidisciplinary teams consisting of (among others) interaction (UX) designers, visual designers, front-end developers, and functional testers. But once an interaction designer has created a wireframe, it’s hard for many (we’re not saying all) visual designers to think outside the boundaries set by that wireframe and challenge the ideas it contains. As a result, the final designs almost always resemble the wireframes. Their creativity impaired, the visual designers end up essentially just coloring them in.

    Undermining user-centricity

    As professionals, we naturally care about how something looks and is presented. So much so that we can easily lose ourselves for hours in the fine details, such as alignment, sizing, coloring, and the like, even on rough wireframes intended only for internal use. Losing time means losing focus on what’s valuable for your user: the content, the product offering, and the functionality.

    Static, not responsive

    A wireframe (even multiple wireframes) can’t capture the responsive behavior that is so essential to modern web design. Even though digital design tools are catching up in efficiently designing for different screen sizes (here’s hoping InVision Studio will deliver), each of the resulting wireframes is still just a static image.

    Inconvenient for developers and functional testers

    Developers and functional testers work with code, and a wireframe sketch or picture provides little functional information and isn’t directly translatable into code (not yet, anyway). This lack of clarity around how the design should behave can lead to developers and testers making decisions about functionality or responsiveness without input from the designer, or having to frequently check with the designer to find out if a feature is working correctly. This is perhaps less of a problem for a mature team or project where there’s plenty of experience with, and knowledge of, the product, but all too often this (unnecessary) collaboration means more development work, a slower process, and wasted time.

    To overcome these wireframe pitfalls, about five years ago we adopted priority guides. Our principal interaction designer, Paul Versteeg, brought the tool to Mirabeau, and we’ve been improving and fine-tuning our way of working with them ever since, with great results.

    So what are priority guides?

    As far as we know, credit for the invention of priority guides goes to Drew Clemens, who first introduced the concept in his article on the Smashing Magazine website in 2012. Since then, however, priority guides seem to have received little attention, whether from the web and app design industry or from related educational establishments.

    Simply put, a priority guide contains content and elements for a mobile screen, sorted by hierarchy from top to bottom and without layout specifications. The hierarchy is based on relevance to users, with the content most critical to satisfying user needs and supporting user (and company) goals higher up.

    The format of a priority guide is not fixed: it can be digital (we personally prefer Sketch), or it can be physical, made with paper and Post-its. Most importantly, a priority guide is automatically content-first, with a strong focus on providing best value for users.

    The core structure of a priority guide: content stacked from main content at the top down to the least important content at the bottom.

    Diving a bit deeper, the following example shows the exact same page as shown in the wireframe images presented earlier in this article. It consists of the title “Book a flight,” real content (yes, even the required legal notice!), several sections of information, and annotations that explain components and functionality.

    A detailed digital priority guide for an airline’s flight overview page, with functionality and links defined.

    When comparing the content to the high-fidelity wireframe, you’ll notice that the order of the sections is not the same. The step indicator, for example, is shown at the bottom of the priority guide, as the designer decided it’s not the most important information on the page. Conversely, the most important information—flight information and prices—is now placed near the top.

    Annotations are an important part of priority guides, as they provide explanations of the functionalities and page behavior, name the component types, and link the priority guide of one page to the priority guides of other pages. In this example, you can find descriptions of what happens when a user interacts with a button or link, such as opening an overlay to display flight details or loading a flight selection page.

    The advantages of priority guides

    Of course, we can debate for hours whether the creator of, or team responsible for, the above priority guide has chosen the correct priorities and functionalities, but that goes beyond the scope of this article. Instead, let’s name the main advantages that priority guides offer over wireframes.

    Suitable for responsive design

    Wireframes are static images, requiring multiple screenshots to cover the full spectrum from mobile to desktop. Priority guides, on the other hand, give an overview of content hierarchy regardless of screen size (assuming user goals remain the same on different devices). Ever since responsive design became standard practice within Mirabeau, priority guides have been an essential addition to our design toolkit.

    Focused on solving problems and serving needs

    When creating priority guides, you automatically focus on solving the users’ problems, serving their needs, and supporting them to reach their goals. The interface is always filled with content that communicates a message or helps the user. By designing content-first, you’re always focused on serving the user.

    No time wasted on aesthetics and layout

    There’s no need for interaction designers to waste time on aesthetics and layout in the early phases of the design process. Priority guides help avoid the focus shifting away from the content and user toward specific layout elements too early, and keep us from falling into the “designer trap” of visual perfectionism.

    Facilitating visual designers’ creativity

    Priority guides provide the opportunity for designers to explore extravagant ideas on how to best support and delight the user without visual boundaries set by interaction designers. Even when you’re the only designer on your team, working as both interaction and visual designer, it’s hard to move past how those first wireframes looked, even when confronted with new content.

    Developers and testers get “HTML” early in the process

    The structure of a priority guide is very similar to HTML, allowing the developer to start laying the groundwork for future development early on. Similarly, testers get a checklist for testing, allowing them to begin building those tests straight away. The result is early feedback on the feasibility of the designs, and we’ve found priority guides have significantly sped up the collaborative process of design and development at Mirabeau.

    How to create priority guides

    There are a number of baselines and steps that we’ve found useful when creating priority guides. We’ve fine-tuned them over the years as we’ve applied this new approach to our projects, and conducted workshops explaining priority guides to the Dutch design community.

    The baselines

    Your priority guide should only contain real content that’s relevant to the user. Lorem ipsum, or any other type of placeholder text, doesn’t communicate how the page supports users in reaching their goals. Moreover, don’t include any layout elements when making priority guides. Instead, include only content and functionality. Remember that a priority guide is never a deliverable—it’s merely a tool to facilitate discussion among the designers, developers, testers, and stakeholders involved in the project.

    Priority guides should always have a mobile format. By constraining yourself this way, you automatically think mobile-first and consider which information is most important (and so should be at the top of the screen). Also, since the menu is typically more or less the same on every screen of your website or app, we recommend leaving the menu out of your priority guide. It’ll help you focus on the screen you’re designing for, and the guide won’t be cluttered with unnecessary distractions.

    Step 1: determine the goal(s)

    Before jumping to the solution, it’s important to take a step back and consider why you’re making this priority guide. What is the purpose of the page? What goal or goals does the user have? And what goal or goals does the business have? The answers to these questions will both guide your user research and determine which content will add more value to users and the business, and so have higher priority.

    Step 2: research and understand the user

    There are many methods for user research, and the method or methods chosen will largely depend on the situation and project. However, when creating priority guides, we’ve definitely found it useful to generate personas, affinity diagrams, and experience maps to help create a visual summary of any research findings.

    Step 3: determine the content topics

    The aim of this stage is to use your knowledge of the user and the business to determine which specific content and topics will best support their goals in each phase of the customer journey. Experience has taught us that co-creating this content outline with users, clients, copywriters, and stakeholders can be highly beneficial. The result is a list of topics that each page should contain.

    Step 4: create a high-level priority guide

    Use the list of topics to create a high-level priority guide. Which is the most important topic? Place that one on the top. Which is the second most important topic? That one goes below the first. It’s a straightforward prioritization process that should be continued until all the (relevant) topics have found a place in the priority list. It’s important to question the importance of each topic, not only in comparison to other topics, but also whether the topic should really be on the page at all. And we’ve found that starting on paper definitely helps avoid focusing too much on the little visual details, which can happen if using a digital design tool (“pixel-fixing”).

    Graphic showing a priority guide with the page title, goal, and a prioritized list of content
    A high-level priority guide for FreeBees, a fictional company

    Step 5: create a detailed priority guide

    Now it’s time to start adding the details. For each topic, determine the detailed, real content that will appear on the page. Also, start thinking about any functionalities the page may need. When you have multiple priority guides for multiple pages, indicate how and where these pages are connected in a sitemap format.

    We often use this first schematic shape of the product to identify flows, test if the concept is complete, and determine whether the current content and priorities effectively serve users’ needs and help solve their problems. More than once it has allowed us to identify that a content plan needed to be altered to achieve the outcome we were targeting. And because priority guides are quick and easy to produce, iterating at this stage saved a lot of time and effort.

    A priority guide showing the page title, goal, and a prioritized list of content along with notes on what role each piece plays (i.e. - heading, call to action, etc.)
    A detailed priority guide for FreeBees, a fictional company

    Step 6: user testing and (further) iteration

    The last (continuous) step involves testing and iterating your priority guides. Ask users what they think about the information presented in the priority guides (yes, it is possible to do usability testing with priority guides!), and gather feedback from stakeholders. The input gained from these sessions can then be used to validate and reprioritize the information, and to add or adapt functionalities, followed by further testing as needed.

    Find out what works for you

    Over the years we’ve seen many variations on the process described above. Some designers work entirely with paper and Post-its, while others prefer to create priority guides in a digital design tool from scratch. Some go no further than high-level priority guides, while others use detailed priority guides as a guideline for their entire project.

    The key is to experiment, and take the time to find out which approach works best for you and your team. What remains important no matter your process, however, is the need to always keep the focus on user and business goals, and to continuously ask yourself what each piece of content or functionality adds to these goals.

    Conclusion

    For us here at Mirabeau, priority guides have become a highly efficient tool for designing user-first, content-first, and mobile-first, overcoming many of the significant pitfalls that come from relying only on wireframes. Wireframes do have their uses, and in many situations it’s valuable to be able to visualize ideas and discuss them with team members, clients, or stakeholders. Sketching concepts as wireframes to test ideas can also be useful, and sometimes we’ll even generate wireframes to gain new insights into how to improve our priority guides!

    Overall, we’ve found that priority guides are more useful at the start of a project, when in the phase of defining the purpose and content of screens. Wireframes, on the other hand, are more useful for sketching and communicating ideas and visual concepts. Just don’t start with wireframes, and make sure you always stay focused on what’s important.

  10. The Illusion of Control in Web Design

    We all want to build robust and engaging web experiences. We scrutinize every detail of an interaction. We spend hours getting the animation swing just right. We refactor our JavaScript to shave tiny fractions of a second off load times. We control absolutely everything we can, but the harsh reality is that we control less than we think.

    Last week, two events reminded us, yet again, of how right Douglas Crockford was when he declared the web “the most hostile software engineering environment imaginable.” Both were serious enough to take down an entire site—actually hundreds of entire sites, as it turned out. And both were avoidable.

    By understanding what we control (and what we don’t), we can build resilient, engaging products for our users.

    What happened?

    The first of these incidents involved the launch of Chrome 66. With that release, Google implemented a security patch with serious implications for folks who weren’t paying attention. You might recall that quite a few questionable SSL certificates issued by Symantec Corporation’s PKI began to surface early last year. Apparently, Symantec had subcontracted the creation of certificates without providing a whole lot of oversight. Long story short, the Chrome team decided the best course of action with respect to these potentially bogus (and security-threatening) SSL certificates was to set an “end of life” for accepting them as secure. They set Chrome 66 as the cutoff.

    So, when Chrome 66 rolled out (an automatic, transparent update for pretty much everyone), suddenly any site running HTTPS on one of these certificates would no longer be considered secure. That’s a major problem if the certificate in question is for our primary domain, but it’s also a problem if it’s for a CDN we’re using. You see, my server may be running on a valid SSL certificate, but if I have my assets—images, CSS, JavaScript—hosted on a CDN that is not secure, browsers will block those resources. It’s like CSS Naked Day all over again.

    To be completely honest, I wasn’t really paying attention to this until Michael Spellacy looped me in on Twitter. Two hundred of his employer’s sites were instantly reduced to plain old semantic HTML. No CSS. No images. No JavaScript.

    The second incident was actually quite similar in that it also involved SSL, and specifically the expiration of an SSL certificate being used by jQuery’s CDN. If a site relied on that CDN to serve an HTTPS-hosted version of jQuery, their users wouldn’t have received it. And if that site was dependent on jQuery to be usable … well, ouch!

    For what it’s worth, this isn’t the first time incidents like these have occurred. Only a few short years ago, Sky Broadband’s parental filter dramatically miscategorized the jQuery CDN as a source of malware. With that designation in place, they spent the better part of a day blocking all requests for resources on that domain, affecting nearly all of their customers.

    It can be easy to shrug off news like this. Surely we’d make smarter implementation decisions if we were in charge. We’d certainly have included a local copy of jQuery like the good Boilerplate tells us to. The thing is, even with that extra bit of protection in place, we’re falling for one of the most attractive fallacies when it comes to building for the web: that we have control.
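
    For those who haven’t seen it, that Boilerplate-style local fallback is a tiny inline script that checks whether the CDN copy actually arrived. A minimal sketch (the local file path is hypothetical):

    // Inline script placed immediately after the CDN <script> tag:
    // if the CDN copy loaded, window.jQuery will exist; if not, pull
    // in a copy served from our own domain (path is hypothetical).
    if (!window.jQuery) {
      document.write('<script src="/js/vendor/jquery.min.js"><\/script>');
    }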

    Lost in transit?

    There are some things we do control on the web, but they may be fewer than you think. As a solo dev or team lead, we have considerable control over the HTML, CSS, and JavaScript code that ultimately constructs our sites. Same goes for the tools we use and the hosting solutions we’ve chosen. Of course, that control lessens on large teams or when others are calling the shots, though in those situations we still have an awareness of the coding conventions, tooling, and hosting environment we’re working with. Once our carefully-crafted code leaves our servers, however, all bets are off.

    First off, we don’t—at least in the vast majority of cases—control the network our code traverses to reach our users. Ideally our code takes an optimized path so that it reaches its destination quickly, yet any one of the servers along that path can read and manipulate the code. If you’ve heard of “man-in-the-middle” attacks, this is how they happen.

    For example, certain providers have no qualms about injecting their own advertising into your pages. Gross, right? HTTPS is one way to stop this from happening (and to prevent servers from being able to snoop on our traffic), but some providers have even found a way around that. Sigh.

    Lost in translation?

    Assuming no one touches our code in transit, the next thing standing between our users and our code is the browser. These applications are the gateways to (and gatekeepers of) the experiences we build on the web. And, even though the last decade has seen browser vendors coalesce around web standards, there are still differences to consider. Those differences are yet another factor that will make or break the experience our users have.

    While every browser vendor supports the idea and ongoing development of standards, they do so at their own pace and very much in relation to their business interests. They prioritize features that help them meet their own goals and can sometimes be reluctant or slow to implement new features. Occasionally, as happened with CSS Grid, everyone gets on board rather quickly, and we can see a new spec go from draft to implementation within a single calendar year. Others, like Service Worker, can take hold quickly in a handful of browsers but take longer to roll out in others. Still others, like Pointer Events, might get implemented widely, only to be undermined by one browser’s indifference.

    All of this is to say that the browser landscape is much like the Great Plains of the American Midwest: from afar it looks very even, but walking through it we’re bound to stumble into a prairie dog burrow or two. And to successfully navigate the challenges posed by the browser environment, it pays to get familiar with where those burrows lie so we don’t lose our footing. Object detection … font stacks … media queries … feature detection … these tools (and more) help us ensure our work doesn’t fall over in less-than-ideal situations.
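
    As a quick sketch of the first of those tools, object detection, here’s the sort of check involved (showNearestStore and showPostcodeForm are hypothetical stand-ins for real handlers):

    // Test that the browser actually offers the feature before using
    // it; browsers without geolocation get a usable fallback instead
    // of a script error.
    if ('geolocation' in navigator) {
      navigator.geolocation.getCurrentPosition(showNearestStore);
    } else {
      showPostcodeForm();
    }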

    Beyond standards support, it’s important to recognize that some browsers include optimizations that can affect the delivery of your code. Opera Mini and Amazon’s Silk are examples of the class of browser often referred to as proxy browsers. Proxy browsers, as their name implies, position their own proxy servers in between our domains and the end user. They use these servers to do things like optimize images, simplify markup, and jettison unsupported JavaScript in the interest of slimming the download size of our pages. Proxy browsers can be a tremendous help for users paying for downloads by the bit, especially given our penchant for increasing web page sizes year upon year.

    If we don’t consider how these browsers can affect our pages, our site may simply collapse and splay its feet in the air like a fainting goat. Consider this JavaScript taken from an example I threw up on Codepen:

    document.body.innerHTML += '<p>Can I count to four?</p>';
    for (let i=1; i<=4; i++) {
      document.body.innerHTML += '<p>' + i + '</p>';
    }
    document.body.innerHTML += '<p>Success!</p>'; 

    This code is designed to insert several paragraphs into the current document and, when executed, produces this:

    Can I count to four?
    1
    2
    3
    4
    Success!

    Simple enough, right? Well, yes and no. You see, this code makes use of the let keyword, which was introduced in ECMAScript 2015 (a.k.a. ES6) to enable block-level variable scoping. It will work a treat in browsers that understand let. However, any browsers that don’t understand let will have no idea what to make of it and won’t execute any of the JavaScript—not even the parts they do understand—because they don’t know how to interpret the program. Users of Opera Mini, Internet Explorer 10, QQ, and Safari 9 would get nothing.

    This is a relatively simplistic example, but it underscores the fragility of JavaScript. The UK’s Government Digital Service (GDS) ran a study to determine how many of their users didn’t get JavaScript enhancements and discovered that 0.9% of their users who should have received them—in other words, their browser supported JavaScript and they had not turned it off—didn’t for some reason. Add in the 0.2% of users whose browsers did not support JavaScript or who had turned it off, and the total non-JavaScript constituency was 1.1%, or 1 in every 93 people who visit their site.

    It’s worth keeping in mind that browsers must understand the entirety of our JavaScript before they can execute it. This may not be a big deal if we write all of our own JavaScript (though we all occasionally make mistakes), but it becomes a big deal when we include third-party code like JavaScript libraries, advertising code, or social media buttons. Errors in any of those codebases can cause problems for our users.
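
    One defensive pattern worth sketching here is to “cut the mustard”: run a small feature test written in old-school syntax, and only load your enhancement bundle when the browser passes. A minimal sketch (the bundle URL is hypothetical, and this assumes the bundle itself is compiled to syntax the passing browsers can parse):

    // The test itself uses ES3-era syntax so any browser can parse it.
    if ('querySelector' in document && 'addEventListener' in window) {
      var script = document.createElement('script');
      script.src = '/js/enhancements.js'; // hypothetical bundle
      document.head.appendChild(script);
    }

    Browsers that fail the test still get the core, server-rendered experience.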

    Browser plugins are another form of third-party code that can negatively affect our sites. And they’re ones we don’t often consider. Back in the early ’00s, I remember spending hours trying to diagnose a site issue reported by one of my clients, only to discover it only occurred when using a particular plugin. Anger and self-doubt were wreaking havoc on me as I failed time and time again to reproduce the error my client was experiencing. It took me traveling the two hours to her office and sitting down at her desk to discover the difference between her setup and mine: a third-party browser toolbar.

    We don’t have the luxury of traveling to our users’ homes and offices to determine if and when a browser plugin is hobbling our creations. Instead, the best defense against the unknowns of the browsing environment is to always design our sites with a universally usable baseline.

    Lost in interpretation?

    Regardless of everything discussed so far, when our carefully crafted website finally reaches its destination, it has one more potential barrier to success: us. Specifically, our users. More broadly, people. Unless our product is created solely for the consumption of some other life form or machine, we’ve got to consider the ultimate loss of control when we cede it to someone else.

    Over the course of my twenty years of building websites for customers, I’ve always had the plaintive voice of Clerks’ Randal Graves in the back of my head: “This job would be great if it wasn’t for the f—ing customers.” I’m not happy about that. It’s an arrogant position (surely), yet an easy one to lapse into.

    People are so needy. Wouldn’t it be great if we could just focus on ourselves?

    No, that wouldn’t be good at all.

    When we design and build for people like us, we exclude everyone who isn’t like us. And that’s most people. I’m going to put on my business hat here—Fedora? Bowler? Top hat?—and say that artificially limiting our customer base is probably not in our company’s best interest. Not only will it limit our potential revenue growth, it could actually reduce our income if we become the target of a legal complaint by an excluded party.

    Our efforts to build robust experiences on the web must account for the actual people that use them (or may want to use them). That means ensuring our sites work for people who experience motor impairments, vision impairments, hearing impairments, vestibular disorders, and other things we aggregate under the heading of “accessibility.” It also means ensuring our sites work well for users in a variety of contexts: on large screens, small screens, even in-between screens. Via mouse, keyboard, stylus, finger, and even voice. In dark, windowless offices, glass-walled conference rooms, and out in the midday sun. Over blazingly fast fiber and painfully slow cellular networks. Wherever people are, however they access the web, whatever special considerations need to be made to accommodate them … we should build our products to support them.

    That may seem like a tall order, but consider this: removing access barriers for one group has a far-reaching ripple effect that benefits others. The roadside curb cut is an example we often cite. It was originally designed for wheelchair access, but stroller-pushing parents, children on bicycles, and even that UPS delivery person hauling a tower of Amazon boxes down Seventh Avenue all benefit from that rather simple consideration.

    Maybe you’re more of a numbers person. If so, consider designing your interface such that it’s easier to use by someone who only has use of one arm. Every year, about 26,000 people in the U.S. permanently lose the use of an upper extremity. That’s a drop in the bucket compared to an overall population of nearly 326 million people. But that’s a permanent impairment. There are two other forms of impairment to consider: temporary and situational. Breaking your arm can mean you lose use of that hand—maybe your dominant one—for a few weeks. About 13 million Americans suffer an arm injury like this every year. Holding a baby is a situational impairment in that you can put it down and regain use of your arm, but the feasibility of that may depend greatly on the baby’s temperament and sleep schedule. About 8 million Americans welcome this kind of impairment—sweet and cute as it may be—into their home each year, and this particular impairment can last for over a year. All of this is to say that designing an interface that’s usable with one hand (or via voice) can help over 21 million more Americans (about 6% of the population) effectively use your service.

    Finally, and in many ways coming full circle, there’s the copy we employ. Clear, well-written, and appropriate copy is the bedrock of great experiences on the web. When we draft copy, we should do so with a good sense of how our users talk to one another. That doesn’t mean we should pepper our legalese with slang, but it does mean we should author copy that is easily understood. It should be written at an appropriate reading level, devoid of unnecessary jargon and idioms, and approachable to both native and non-native speakers alike. Nestled in the gentle embrace of our (hopefully) semantic, server-rendered HTML, the copy we write is one of the only experiences of our sites we can pretty much guarantee our users will have.

    Old advice, still relevant

    Recognizing all of the ways our carefully-crafted experiences can be rendered unusable can be more than a little disheartening. No one likes to spend their time thinking about failure. So don’t. Don’t focus on all of the bad things you can’t control. Focus on what you can control.

    Start simply. Code defensively. User-test the heck out of it. Recognize the chaos. Embrace it. And build resilient web experiences that will work no matter what the internet throws at them.

  11. Working with External User Researchers: Part II

    In the first installment of the Working with External User Researchers series, we explored the reasons why you might hire a user researcher on contract and helpful things to consider in choosing one. This time, we talk about getting the actual work done.

    You’ve hired a user researcher for your project. Congrats! On paper, this person (or team of people) has everything you need and more. You might think the hardest part of your project is complete and that you can be more hands off at this point. But the real work hasn’t started yet. Hiring the researcher is just the beginning of your journey.

    Let’s recap what we mean by an external user researcher: a person or team brought on for the duration of a contract to conduct research.

    This situation is most commonly found in:

    • organizations without researchers on staff;
    • organizations whose research staff is maxed out;
    • and organizations that need special expertise.

    In other words, external user researchers exist to help you gain insight from your users when hiring one full-time is not an option. Check out Part I to learn more about how to find external user researchers, the types of projects that will get you the most value for your money, writing a request for proposal, and finally, negotiating payment.

    Working together

    Remember why you hired an external researcher

    No project or work relationship is perfect. Before we delve into more specific guidelines on how to work well together, remember the reasons why you decided to hire an external researcher (and this specific one) for your project. Keeping them in mind as you work together will help you keep your priorities straight.

    External researchers are great for bringing in a fresh, objective perspective

    You could ask your full-time designer who also has research skills to wear the research hat. This isn’t uncommon. But a designer won’t have the same depth and breadth of expertise as a dedicated researcher. In addition, they will probably end up researching their own design work, which will make it very difficult for them to remain unbiased.

    Product managers sometimes like to be proactive and conduct some form of guerrilla user research themselves, but this is an even riskier idea. They usually aren’t trained on how to ask non-leading questions, for example, so they tend to only hear feedback that validates their ideas.

    It isn’t a secret—but it’s well worth remembering—that research participants tend to be more comfortable sharing critical feedback with someone who doesn’t work for the product that is being tested.

    The real work begins

    In our experience the most important work starts once a researcher is hired. Here are some key considerations in setting them and your own project team up for success.

    Be smart about the initial brain dump

    Do share background materials that provide important context and prevent redundant work from being done. It’s likely that some insight is already known on a topic that will be researched, so it’s important to share this knowledge with your researcher so they can focus on new areas of inquiry. Provide things such as report templates to ensure that the researcher presents their learnings in a way that’s consistent with your organization’s unique culture. While you’re at it, consider showing them where to find documentation or tutorials about your product, or specific industry jargon.

    Make sure people know who they are

    Conduct a project kick-off meeting with the external researcher and your internal stakeholders. Influence is often partially a factor of trust and relationships, and for this reason it’s sometimes easy for internal stakeholders to question or brush aside projects conducted by research consultants, especially if they disagree with research insights and recommendations. (Who is this person I don’t know trying to tell me what is best for my product?)

    Conduct a kick-off meeting with the broader team

    A great way to prevent this potential pushback is to conduct a project kick-off meeting with the external researcher and important internal stakeholders or consumers of the research. Such a meeting might include activities such as:

    • Team introductions.
    • A discussion about the research questions, including an exercise for prioritizing the questions. Especially with contracted-out projects, it’s common for project teams to be tempted to add more questions—question creep—which is why it’s important to have clear priorities from the start.
    • A summary of what’s out of scope for the research. This is another important task in setting firm boundaries around project priorities from the start so the project is completed on time and within budget.
    • A summary of any incoming hypotheses the project team might have—in other words, what they think the answers to the research questions are. Once the study is complete, this can be an especially impactful exercise for reminding stakeholders how their initial thinking changed in response to the findings.
    • A review of the project phases and timeline, and any threats that could get in the way of the project being completed on time.
    • A review of prior research and what’s already known, if available. This is important for both the external researcher and the most important internal consumers of the research, as it’s often the case that the broader project team might not be aware of prior research and why certain questions already answered aren’t being addressed in the project at hand.

    Use a buddy system

    Appoint an internal resource who can answer questions that will no doubt arise during the project. This might include questions on how to use an internal lab, questions about whom to invite to a critical meeting, or clarifying questions regarding project priorities. This is also another opportunity to build trust and rapport between your project team and external researcher.

    Conducting the research

    While an external researcher or agency can help plan and conduct a study for you, don’t expect them to be experts on your product and company culture. It’s like hiring an architect to build your house or a designer to furnish a room: you need to provide guidance early and often, or the end result may not be what you expected. Here are some things to consider to make the engagement more effective.

    Be available

    A good research contractor will ask lots of questions to make sure they’re understanding important details, such as your priorities and research questions, and to collect feedback on the study plan and research report. While it can sometimes feel more efficient to handle most of these types of questions over email, email can often result in misinterpretations. Sometimes it’s faster to speak to questions that require lots of detail and context rather than type a response. Consider establishing weekly remote or in-person status checks to discuss open questions and action items.

    Be present

    If moderated sessions are part of the research, plan on observing as many of these as possible. While you should expect the research agency to provide you with a final report, you should not expect them to know which insights are most impactful to your project. They don’t have the background from internal meetings, prior decisions, and discussions about future product directions that an internal stakeholder has. Many of the most insightful findings come from conversations that happen immediately after a session with a research participant. The research moderator and client contact can share their perspectives on what the participant just did and said during their session.

    Be proactive

    Before the researcher drafts their final report, set up a meeting between them and your internal stakeholders to brainstorm over the main research findings. This will help the researcher identify more insights and opportunities that reflect internal priorities and limitations. It also helps stakeholders build trust in the research findings.

    In other words, it’s a waste of everyone’s time if a final report is delivered and basic questions arise from stakeholders that could have been addressed by involving them earlier. This is also a good opportunity to get feedback from stakeholders’ stakeholders, who may have a different (but just as important) influence on the project’s success.

    Be reasonable

    Don’t treat an external contractor like a PowerPoint jockey. Changing fonts and colors to your liking is fine, but only to a point. Your researcher should provide you with a polished report free from errors and in a professional format, but minute changes are not a constructive use of time and money. Focus more on decisions and recommendations than the aesthetics of the deliverables. You can prevent this kind of situation by providing any templates you want used in your initial brain dump, so the findings don’t have to be replicated in the “right” format for presenting.

    When it’s all said and done

    Just because the project has been completed and all the agreed deliverables have been received doesn’t mean you should close the door on any additional learning opportunities for both the client and researcher. At the end of the project, identify what worked, and find ways to increase buy-in for their recommendations.

    Tell them what happened

    Try to identify a check-in point in the future (say, two weeks or two months out) to let the researcher know what happened because of the research: what decisions were made, what problems were fixed, or other design changes. While you shouldn’t expect your researcher to be perpetually available, if you encounter problems with buy-in, they might be able to provide a quick recommendation.

    Maintain a relationship

    While it’s typical for vendors to treat their clients to dinner or drinks, don’t be afraid to invite your external researcher to your own happy hour or event with your staff. The success of your next project may rely on getting the right researcher, and you’ll want them to be excited to make themselves available to help you when you need them again.

  12. Going Offline

    A note from the editors: We’re excited to share Chapter 1 of Going Offline by Jeremy Keith, available this month from A Book Apart.

    Businesses are built on the web. Without the web, Twitter couldn’t exist. Facebook couldn’t exist. And not just businesses—Wikipedia couldn’t exist. Your favorite blog couldn’t exist without the web. The web doesn’t favor any one kind of use. It’s been deliberately designed to accommodate many and varied activities.

    Just as many wonderful things are built upon the web, the web itself is built upon the internet. Though we often use the terms web and internet interchangeably, the World Wide Web is just one application that uses the internet as its plumbing. Email, for instance, is another.

    Like the web, the internet was designed to allow all kinds of services to be built on top of it. The internet is a network of networks, all of them agreeing to use the same protocols to shuttle packets of data around. Those packets are transmitted down fiber-optic cables across the ocean floor, bounced around with Wi-Fi or radio signals, or beamed from satellites in freakin’ space.

    As long as these networks are working, the web is working. But sometimes networks go bad. Mobile networks have a tendency to get flaky once you’re on a train or in other situations where you’re, y’know, mobile. Wi-Fi networks work fine until you try to use one in a hotel room (their natural enemy).

    When the network fails, the web fails. That’s just the way it is, and there’s nothing we can do about it. Until now.

    Weaving the Web

    For as long as I can remember, the World Wide Web has had an inferiority complex. Back in the ’90s, it was outshone by CD-ROMs (ask your parents). They had video, audio, and a richness that the web couldn’t match. But they lacked links—you couldn’t link from something in one CD-ROM to something in another CD-ROM. They faded away. The web grew.

    Later, the web technologies of HTML, CSS, and JavaScript were found wanting when compared to the whiz-bang beauty of Flash. Again, Flash movies were much richer than regular web pages. But they were also black boxes. The Flash format seemed superior to the open standards of the web, and yet the very openness of those standards made the web an unstoppable force. Flash—under the control of just one company—faded away. The web grew.

    These days it’s native apps that make the web look like an underachiever. Like Flash, they’re under the control of individual companies instead of being a shared resource like the web. Like Flash, they demonstrate all sorts of capabilities that the web lacks, such as access to device APIs and, crucially, the ability to work even when there’s no network connection.

    The history of the web starts to sound like an endless retelling of the fable of the tortoise and the hare. CD-ROMs, Flash, and native apps outshine the web in the short term, but the web always seems to win the day somehow.

    Each of those technologies proved very useful for the expansion of web standards. In a way, Flash was like the R&D department for HTML, CSS, and JavaScript. Smooth animations, embedded video, and other great features first saw the light of day in Flash. Having shown their usefulness, they later appeared in web standards. The same thing is happening with native apps. Access to device features like the camera and the accelerometer is beginning to show up in web browsers. Most exciting of all, we’re finally getting the ability for a website to continue working even when the network isn’t available.

    Service Workers

    The technology that makes this bewitching offline sorcery possible is a browser feature called service workers. You might have heard of them. You might have heard that they’re something to do with JavaScript, and technically they are…but conceptually they’re very different from other kinds of scripts.

    Usually when you’re writing some JavaScript that’s going to run in a web browser, it’s all related to the document currently being displayed in the browser window. You might want to listen out for events triggered by the user interacting with the document (clicks, swipes, hovers, etc.). You might want to update the contents of the document: add some markup here, remove some text there, manipulate some values somewhere else. The sky’s the limit. And it’s all made possible thanks to the Document Object Model (DOM), a representation of what the browser is rendering. Through the combination of the DOM and JavaScript—DOM scripting, if you will—you can conjure up all sorts of wonderful magic.

    Well, a service worker can’t do any of that. It’s still a script, and it’s still written in the same language—JavaScript—but it has no access to the DOM. Without any DOM scripting capabilities, this kind of script might seem useless at first glance. But there’s an advantage to having a script that never needs to interact with the current document. Adding, editing, and deleting parts of the DOM can be hard work for the browser. If you’re not careful, things can get very sluggish very quickly. But if there’s a whole class of script that isn’t allowed access to the DOM, then the browser can happily run that script in parallel to its regular rendering activities, safe in the knowledge that it’s an entirely separate process.

    The first kind of script to come with this constraint was called a web worker. In a web worker, you could write some JavaScript to do number-crunching calculations without slowing down whatever else was being displayed in the browser window. Spin up a web worker to generate larger and larger prime numbers, for instance, and it will merrily do so in the background.
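
    In code, spinning up such a worker looks something like this (a minimal sketch; worker.js is a hypothetical file containing the number-crunching code):

    // Main script: spin up a background thread and listen for results.
    var worker = new Worker('worker.js');
    worker.onmessage = function (event) {
      // event.data carries whatever the worker posted back.
      console.log('Next prime: ' + event.data);
    };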

    A service worker is like a web worker with extra powers. It still can’t access the DOM, but it does have access to the fundamental inner workings of the browser.

    Browsers and servers

    Let’s take a step back and think about how the World Wide Web works. It’s a beautiful ballet of client and server. The client is usually a web browser—or, to use the parlance of web standards, a user agent: a piece of software that acts on behalf of the user.

    The user wants to accomplish a task or find some information. The URL is the key technology that will empower the user in their quest. They will either type a URL into their web browser or follow a link to get there. This is the point at which the web browser—or client—makes a request to a web server. Before the request can reach the server, it must traverse the internet of undersea cables, radio towers, and even the occasional satellite (Fig 1.1).

    Diagram of the request/response cycle between a user and a server
    Fig 1.1: Browsers send URL requests to servers, and servers respond by sending files.

    Imagine if you could leave instructions for the web browser that would be executed before the request is even sent. That’s exactly what service workers allow you to do (Fig 1.2).

    Diagram of the request/response cycle between a user and a server with a service worker being the first thing the response hits
    Fig 1.2: Service workers tell the web browser to do something before sending the request for a URL.

    Usually when we write JavaScript, the code is executed after it’s been downloaded from a server. With service workers, we can write a script that’s executed by the browser before anything else happens. We can tell the browser, “If the user asks you to retrieve a URL for this particular website, run this corresponding bit of JavaScript first.” That explains why service workers don’t have access to the Document Object Model; when the service worker is run, there’s no document yet.
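
    The act of leaving those instructions begins with a one-line registration call from a regular page script. A minimal sketch (the filename serviceworker.js is an assumption; any script served from your own site will do):

    // Ask the browser to install the service worker for this site,
    // but only in browsers that support the feature.
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/serviceworker.js');
    }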

    Getting your head around service workers

    A service worker is like a cookie. Cookies are downloaded from a web server and installed in a browser. You can go to your browser’s preferences and see all the cookies that have been installed by sites you’ve visited. Cookies are very small and very simple little text files. A website can set a cookie, read a cookie, and update a cookie. A service worker script is much more powerful. It contains a set of instructions that the browser will consult before making any requests to the site that originally installed the service worker.

    A service worker is like a virus. When you visit a website, a service worker is surreptitiously installed in the background. Afterwards, whenever you make a request to that website, your request will be intercepted by the service worker first. Your computer or phone becomes the home for service workers lurking in wait, ready to perform man-in-the-middle attacks. Don’t panic. A service worker can only handle requests for the site that originally installed that service worker. When you write a service worker, you can only use it to perform man-in-the-middle attacks on your own website.

    A service worker is like a toolbox. By itself, a service worker can’t do much. But it allows you to access some very powerful browser features, like the Fetch API, the Cache API, and even notifications. API stands for Application Programming Interface, which sounds very fancy but really just means a tool that you can program however you want. You can write a set of instructions in your service worker to take advantage of these tools. Most of your instructions will be written as “when this happens, reach for this tool.” If, for instance, the network connection fails, you can instruct the service worker to retrieve a backup file using the Cache API.

    A service worker is like a duck-billed platypus. The platypus not only lactates, but also lays eggs. It’s the only mammal capable of making its own custard. A service worker can also…Actually, hang on, a service worker is nothing like a duck-billed platypus! Sorry about that. But a service worker is somewhat like a cookie, and somewhat like a virus, and somewhat like a toolbox.
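
    To make the toolbox analogy concrete, here’s a hedged sketch—a simplified illustration, not code from the book—of that “if the network connection fails, reach for the Cache API” instruction, as it might appear inside the service worker script:

    // Inside the service worker: try the network first; when the
    // request fails, respond with a cached copy instead.
    addEventListener('fetch', function (event) {
      event.respondWith(
        fetch(event.request).catch(function () {
          // A real script would also handle the case where nothing
          // has been cached yet.
          return caches.match(event.request);
        })
      );
    });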

    Safety First

    Service workers are powerful. Once a service worker has been installed on your machine, it lies in wait, like a patient spider waiting to feel the vibrations of a particular thread.

    Imagine if a malicious ne’er-do-well wanted to wreak havoc by impersonating a website in order to install a service worker. They could write instructions in the service worker to prevent the website ever appearing in that browser again. Or they could write instructions to swap out the content displayed under that site’s domain. That’s why it’s so important to make sure that a service worker really belongs to the site it claims to come from. As the specification for service workers puts it, they “create the opportunity for a bad actor to turn a bad day into a bad eternity.”1

    To prevent this calamity, service workers require you to adhere to two policies:

    • Same origin.
    • HTTPS only.

    The same-origin policy means that a website at example.com can only install a service worker script that lives at example.com. That means you can’t put your service worker script on a different domain. You can use a different domain for hosting your images and other assets, but not your service worker script. That domain wouldn’t match the domain of the site installing the service worker.

    The HTTPS-only policy means that https://example.com can install a service worker, but http://example.com can’t. A site running under HTTPS (the S stands for Secure) instead of HTTP is much harder to spoof. Without HTTPS, the communication between a browser and a server could be intercepted and altered. If you’re sitting in a coffee shop with an open Wi-Fi network, there’s no guarantee that anything you’re reading in your browser from http://newswebsite.com hasn’t been tampered with. But if you’re reading something from https://newswebsite.com, you can be pretty sure you’re getting what you asked for.

    Securing your site

    Enabling HTTPS on your site opens up a whole series of secure-only browser features—like the JavaScript APIs for geolocation, payments, notifications, and service workers. Even if you never plan to add a service worker to your site, it’s still a good idea to switch to HTTPS. A secure connection makes it trickier for snoopers to see who’s visiting which websites. Your website might not contain particularly sensitive information, but when someone visits your site, that’s between you and your visitor. Enabling HTTPS won’t stop unethical surveillance by the NSA, but it makes the surveillance slightly more difficult.

    There’s one exception to the HTTPS-only policy: you can use a service worker on a site being served from localhost, a web server on your own computer, not part of the web. That means you can play around with service workers without having to deploy your code to a live site every time you want to test something.

    If you’re using a Mac, you can spin up a local server from the command line. Let’s say your website is in a folder called mysite. Drag that folder to the Terminal app, or open up the Terminal app and navigate to that folder using the cd command to change directory. Then type:

    python -m SimpleHTTPServer 8000

    This starts a web server from the mysite folder, served over port 8000. Now you can visit localhost:8000 in a web browser on the same computer, which means you can add a service worker to the website you’ve got inside the mysite folder: http://localhost:8000.
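
    One caveat: SimpleHTTPServer is a Python 2 module. On systems that ship with Python 3 instead, the equivalent command is:

    python3 -m http.server 8000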

    But if you then put the site live at, say, http://mysite.com, the service worker won’t run. You’ll need to serve the site from https://mysite.com instead. To do that, you need a secure certificate for your server.

    There was a time when certificates cost money and were difficult to install. Now, thanks to a service called Certbot, certificates are free. But I’m not going to lie: it still feels a bit intimidating to install the certificate. There’s something about logging on to a server and typing commands that makes me simultaneously feel like a l33t hacker, and also like I’m going to break everything. Fortunately, the process of using Certbot is relatively jargon-free (Fig 1.3).

    Screenshot of certbot.eff.org
    Fig 1.3: The website of EFF’s Certbot.

    On the Certbot website, you choose which kind of web server and operating system your site is running on. From there you’ll be guided step-by-step through the commands you need to type in the command line of your web server’s computer, which means you’ll need to have SSH access to that machine. If you’re on shared hosting, that might not be possible. In that case, check to see if your hosting provider offers secure certificates. If not, please pester them to do so, or switch to a hosting provider that can serve your site over HTTPS.

    Another option is to stay with your current hosting provider, but use a service like Cloudflare to act as a “front” for your website. These services can serve your website’s files from data centers around the world, making sure that the physical distance between your site’s visitors and your site’s files is nice and short. And while they’re at it, these services can make sure all of those files are served over HTTPS.

    Once you’re set up with HTTPS, you’re ready to write a service worker script. It’s time to open up your favorite text editor. You’re about to turbocharge your website!

    Footnotes

  13. Planning for Everything

    A note from the editors: We’re pleased to share an excerpt from Chapter 7 (“Reflecting”) of Planning for Everything: The Design of Paths and Goals by Peter Morville, available now from Semantic Studios.

    Once upon a time, there was a happy family. Every night at dinner, mom, dad, and two girls who still believed in Santa played a game. The rules are simple. Tell three stories about your day, two true, one false, and see who can detect the fib. Today I saw a lady walk a rabbit on a leash. Today I found a tooth in the kitchen. Today I forgot my underwear. The family ate, laughed, and learned together, and lied happily ever after.

    There’s truth in the tale. It’s mostly not false. We did play this game, for years, and it was fun. We loved to stun and bewilder each other, yet the big surprise was insight. In reflecting on my day, I was often amazed by oddities already lost. If not for the intentional search for anomaly, I’d have erased these standard deviations from memory. The misfits we find, we rarely recall.

    We observe a tiny bit of reality. We understand and remember even less. Unlike most machines, our memory is selective and purposeful. Goals and beliefs define what we notice and store. To mental maps we add places we predict we’ll need to visit later. It’s not about the past. The intent of memory is to plan.

    In reflecting we look back to go forward. We search the past for truths and insights to shift the future. I’m not speaking of nostalgia, though we are all borne back ceaselessly and want what we think we had. My aim is redirection. In reflecting on inconvenient truths, I hope to change not only paths but goals.

    A path showing Framing ('The Here and Now'), Imagining, Narrowing, Deciding, Executing, and Reflecting ('The Goal')
    Figure 7-1. Reflection changes direction.

    We all have times for reflection. Alone in the shower or on a walk, we retrace the steps of a day. Together at lunch for work or over family dinner, we share memories and missteps. Some of us reflect more rigorously than others. Given time, it shows.

    People who as a matter of habit extract underlying principles or rules from new experiences are more successful learners than those who take their experiences at face value, failing to infer lessons that can be applied later in similar situations.1

    In Agile, the sprint retrospective offers a collaborative context for reflection. Every two to four weeks, at the end of a sprint, the team meets for an hour or so to look back. Focal questions include 1) what went well? 2) what went wrong? 3) how might we improve? In reflecting on the plan, execution, and results, the team explores surprises, conflicts, roadblocks, and lessons.

    In addition to conventional analysis, a retrospective creates an opportunity for double loop learning. To edit planned actions based on feedback is normal, but revising assumptions, goals, values, methods, or metrics may effect change more profound. A team able to expand the frame may hack their habits, beliefs, and environment to be better prepared to succeed and learn.

    A loop showing Beliefs leading to Actions leading to Results. Loop 1 leads back to Actions, Loop 2 leads back to Beliefs.
    Figure 7-2. Double loop learning.

    Retrospectives allow for constructive feedback to drive team learning and bonding, but that’s what makes them hard. We may lack courage to be honest, and often people can’t handle the truth. Our filters are as powerful as they are idiosyncratic, which means we’re all blind men touching a tortoise, or is it a tree or an elephant? It hurts to reconcile different perceptions of reality, so all too often we simply shut up and shut down.

    Search for Truth

    To seek truth together requires a culture of humility and respect. We are all deeply flawed and valuable. We must all speak and listen. Ideas we don’t implement may lead to those we do. Errors we find aren’t about fault, since our intent is a future fix. And counterfactuals merit no more confidence than predictions, as we never know what would have happened if.

    Reflection is more fruitful if we know our own minds, but that is harder than we think. An imperfect ability to predict actions of sentient beings is a product of evolution. It’s quick and dirty yet better than nothing in the context of survival in a jungle or a tribe. Intriguingly, cognitive psychology and neuroscience have shown we use the same theory of mind to study ourselves.

    Self-awareness is just this same mind reading ability, turned around and employed on our own mind, with all the fallibility, speculation, and lack of direct evidence that bedevils mind reading as a tool for guessing at the thought and behavior of others.2

    Empirical science tells us introspection and consciousness are unreliable bases for self-knowledge. We know this is true but ignore it all the time. I’ll do an hour of homework a day, not leave it to the end of vacation. If we adopt a dog, I’ll walk it. If I buy a house, I’ll be happy. I’ll only have one drink. We are more than we think, as Walt Whitman wrote in Song of Myself.

    Do I contradict myself?
    Very well then I contradict myself
    (I am large, I contain multitudes.)

    Our best laid plans go awry because complexity exists within as well as without. Our chaotic, intertwingled bodyminds are ecosystems inside ecosystems. No wonder it’s hard to predict. Still, it’s wise to seek self truth, or at least that’s what I think.

    Upon reflection, my mirror neurons tell me I’m a shy introvert who loves reading, hiking, and planning. I avoid conflict when possible but do not lack courage. Once I set a goal, I may focus and filter relentlessly. I embrace habit and eschew novelty. If I fail, I tend to pivot rather than persist. Who I am is changing. I believe it’s speeding up. None of these traits is bad or good, as all things are double-edged. But mindful self awareness holds value. The more I notice the truth, the better my plans become.

    Years ago, I planned a family vacation on St. Thomas. I kept it simple: a place near a beach where we could snorkel. It was a wonderful, relaxing escape. But over time a different message made it past my filters. Our girls had been bored. I dismissed it at first. I’d planned a shared experience I recalled fondly. It hurt to hear otherwise. But at last I did listen and learn. They longed not for escape but adventure. Thus our trip to Belize. I found planning and executing stressful due to risk, but I have no regrets. We shared a joyful adventure we’ll never forget.

    Way back when we were juggling toddlers, we accidentally threw out the mail. Bills went unpaid, notices came, we swore we’d do better, then lost mail again. One day I got home from work to find an indoor mailbox system made with paint cans. My wife Susan built it in a day. We’ve used it to sort and save mail for 15 years. It’s an epic life hack I’d never have done. My ability to focus means I filter things out. I ignore problems and miss fixes. I’m not sure I’ll change. Perhaps it merits a prayer.

    God grant me the serenity
    to accept the things I cannot change,
    courage to change the things I can,
    and wisdom to know the difference.

    We also seek wisdom in others. This explains our fascination with the statistics of regret. End of life wishes often include:

    I wish I’d taken more risks, touched more lives, stood up to bullies, been a better spouse or parent or child. I should have followed my dreams, worked and worried less, listened more. If only I’d taken better care of myself, chosen meaningful work, had the courage to express my feelings, stayed in touch. I wish I’d let myself be happy.

    While they do yield wisdom, last wishes are hard to hear. We are skeptics for good reason. Memory prepares for the future, and that too is the aim of regret. It’s unwise to trust the clarity of rose-colored glasses. The memory of pain and anxiety fades in time, but our desire for integrity grows. When time is short, regret is a way to rectify. I’ve learned my lesson. I’m passing it on to you. I’m a better person now. Don’t make my mistakes. It’s easy to say “I wish I’d stood up to bullies,” but hard to do at the time. There’s wisdom in last wishes but bias and self justification too. Confabulation means we edit memories with no intention to deceive. The truth is elusive. Reflection is hard.

    Footnotes

    • 1. Make It Stick by Peter Brown et al. (2014), p. 133.
    • 2. Why You Don’t Know Your Own Mind by Alex Rosenberg (2016).
  14. Meeting Design

    A note from the editors: We’re pleased to share an excerpt from Chapter 2 (“The Design Constraint of All Meetings”) of Meeting Design: For Managers, Makers, and Everyone by Kevin Hoffman, available now from Two Waves.

    Jane is a “do it right, or I’ll do it myself” kind of person. She leads marketing, customer service, and information technology teams for a small airline that operates between islands of the Caribbean. Her work relies heavily on “reservation management system” (RMS) software, which is due for an upgrade. She convenes a Monday morning meeting to discuss an upgrade with the leadership from each of her three teams. The goal of this meeting is to identify key points for a proposal to upgrade the outdated software.

    Jane begins by reviewing the new software’s advantages. She then goes around the room, engaging each team’s representatives in an open discussion. They capture how this software should alleviate current pain points; someone from marketing takes notes on a laptop, as is their tradition. The meeting lasts nearly three hours, which is a lot longer than expected, because they frequently loop back to earlier topics as people forget what was said. It concludes with a single follow-up action item: the director of each department will provide her with two lists for the upgrade proposal. First, a list of cost savings, and second, a list of timesaving outcomes. Each list is due back to Jane by the end of the week.

    The first team’s list is done early but not organized clearly. The second list provides far too much detail to absorb quickly, so Jane puts their work aside to summarize later. By the end of the following Monday, there’s no list from the third team—it turns out they thought she meant the following Friday. Out of frustration, Jane calls another meeting to address the problems with the work she received, which range from “not quite right” to “not done at all.” Based on this pace, her upgrade proposal is going to be finished two weeks later than planned.

    What went wrong? The plan seemed perfectly clear to Jane, but each team remembered their marching orders differently, if they remembered them at all. Jane could have designed a meeting experience that helps her team form more accurate memories. But for that meeting to happen, she needs to understand where those memories are formed in her team and how to form them more clearly.

    Better Meetings Make Better Memories

    If people are the one ingredient that all meetings have in common, there is one design constraint they all bring: their capacity to remember the discussion. That capacity lives in the human brain.

    The brain shapes everything believed to be true about the world. On the one hand, it is a powerful computer that can be trained to memorize thousands of numbers in random sequences.1 But brains are also easily deceived, swayed by illusions and pre-existing biases. Those things show up in meetings as your instincts. Instincts vary greatly based on differences in the amount and type of previous experience. The paradox of ability and deceive-ability creates a weird mix of unpredictable behavior in meetings. It’s no wonder that they feel awkward.

    What is known about how memory works in the brain is constantly evolving. To cover that in even a little detail is beyond the scope of this book, so this chapter is not meant to be an exhaustive look at human memory. However, there are a few interesting theories that will help you be more strategic about how you use meetings to support forming actionable memories.

    Your Memory in Meetings

    The brain’s job in meetings is to accept inputs (things we see, hear, and touch) and store them as memories, and then to apply those absorbed ideas in discussion (things we say and make). See Figure 2.1.

    A drawing of a brain with appendages representing the five senses
    FIGURE 2.1 The human brain has a diverse set of inputs that contribute to your memories.

    Neuroscience has identified four theoretical stages of memory: sensory, working, intermediate, and long-term. Understanding working memory and intermediate memory is relevant to meetings, because these stages represent the most potential to turn thought into action.

    Working Memory

    You may be familiar with the term short-term memory. Depending on the research you read, the term working memory has replaced short-term memory in the vocabulary of neuro- and cognitive science. I’ll use the term working memory here. Designing meeting experiences to support the working memory of attendees will improve meetings.

    Working memory collects around 30 seconds of the things you’ve recently heard and seen. Its storage capacity is limited, and that capacity varies among individuals. This means that not everyone in a meeting has the same capacity to store things in their working memory. You might assume that because you remember an idea mentioned within the last few minutes of a meeting, everyone else probably will as well. That is not necessarily the case.

    You can accommodate variations in people’s ability to use working memory by establishing a reasonable pace of information. The pace of information is directly connected to how well aligned attendees’ working memories become. To make sure that everyone is on the same page, you should set a pace that is deliberate, consistent, and slower than your normal pace of thought.

    Sometimes, concepts are presented more quickly than people can remember them, simply because the presenter is already familiar with the details. Breaking information into evenly sized, consumable chunks is what separates a great presenter from an average (or bad) one. In a meeting, slower, more broken-up pacing allows a group of people to engage in constructive and critical thinking more effectively. It gets the same ideas in everyone’s head. (For a more detailed dive into the pace of content in meetings, see Chapter 3, “Build Agendas Out of Ideas, People, and Time.”)

    Theoretical models that explain working memory are complex, as seen in Figure 2.2.2 This model presumes two distinct processes taking place in your brain to make meaning out of what you see, what you hear, and how much you can keep in your mind. Assuming that your brain creates working memories from what you see and what you hear in different ways, combining listening and seeing in meetings becomes more essential to getting value out of that time.

    A chart showing a model of working memory
    FIGURE 2.2 Alan Baddeley and Graham Hitch’s Model of Working Memory provides context for the interplay between what we see and hear in meetings.

    In a meeting, absorbing something seen and absorbing something heard require different parts of the brain. Those two parts can work together to improve retention (the quantity and accuracy of information in our brain) or compete to reduce retention. Nowhere is this better illustrated than in the research of Richard E. Mayer, who has found that “people learn better from words and pictures than from words alone, but not all graphics are created equal(ly).”3 When what you hear and what you see compete, it creates cognitive dissonance. Listening to someone speaking while reading the same words on a screen actually decreases the ability to commit something to memory. People who are subjected to presentation slides filled with speaking points face this challenge. But listening to someone while looking at a complementary photograph or drawing increases the likelihood of committing something to working memory.

    Intermediate-Term Memory

    Your memory should transform ideas absorbed in meetings into taking an action of some kind afterward. Triggering intermediate-term memories is the secret to making that happen. Intermediate-term memories last between two and three hours, and are characterized by processes taking place in the brain called biochemical translation and transcription. Translation can be considered as a process by which the brain makes new meaning. Transcription is where that meaning is replicated (see Figures 2.3a and 2.3b). In both processes, the cells in your brain are creating new proteins using existing ones: making some “new stuff” from “existing stuff.”4

    Two illustrations, showing a woman describing a hat to a man, and then a man showing an actual hat to a few people
    FIGURE 2.3 Biochemical translation (a) and transcription (b), loosely in the form of understanding a hat.

    Here’s an example: instead of having someone take notes on a laptop, imagine if Jane sketched a diagram that helped her make sense out of the discussion, using what was stored in her working memory. The creation of that diagram is an act of translation, and theoretically Jane should be able to recall the primary details of that diagram easily for two to three hours, because it’s moving into her intermediate memory.

    If Jane made copies of that diagram, and the diagram was so compelling that those copies ended up on everyone’s walls around the office, that would be transcription. Transcription is the (theoretical) process that leads us into longer-term stages of memory. Transcription connects understanding something within a meeting to acting on it later, well after the meeting has ended.

    Most of the time, simple meetings last from 10 minutes to an hour, while workshops and working sessions can last anywhere from 90 minutes to a few days. Consider the duration of various stages of memory against different meeting lengths (see Figure 2.4). A well-designed meeting experience moves the right information from working to intermediate memory. Ideas generated and decisions made should materialize into actions that take place outside the meeting. Any session without breaks that lasts longer than 90 minutes makes it harder for your memories to do their job of moving thought into action.

    A chart showing how the different types of memory work over a 90-minute meeting
    FIGURE 2.4 The time duration of common meetings against the varying durations for different stages of memory. Sessions longer than 90 minutes can impede memories from doing their job.

    Jane’s meeting with her three teams lasted nearly three hours. That length of time spent on a single task or topic taxes people’s ability to form intermediate (actionable) memories. Action items become muddled, which leads to liberal interpretations of what each team is supposed to accomplish.

    But just getting agreement about a shared task in the first place is a difficult design challenge. All stages of memory are happening simultaneously, with multiple translation and transcription processes being applied to different sounds and sights. A fertile meeting environment that accommodates multiple modes of input allows memories to form amidst the cognitive chaos.

    Brain Input Modes

    During a meeting, each attendee’s brain is either in a state of input or a state of output. By choosing to assemble in a group, the assumption is implicit that information needs to be moved out of one place, or one brain, into another (or several others).

    Some meetings, like presentations, move information in one direction. The goal is for a presenting party to move information from their brain to the brains in the audience. When you are presenting an idea, your brain is in output mode. You use words and visuals to give form to ideas in the hopes that they will become memories in your audience. Your audience’s brains are receiving information; if the presentation is well designed and well executed, their ears and their eyes will do a decent job of absorbing that information accurately.

    In a live presentation, the output/input processes are happening synchronously. This is not like reading a written report or an email message, where the author (presenting party) has output information in absence of an audience, and the audience is absorbing information in absence of the author’s presence; that is moving information asynchronously.

    Footnotes

    • 1. Joshua Foer, Moonwalking with Einstein (New York: Penguin Books, 2011).
    • 2. A. D. Baddeley and G. Hitch, “Working Memory,” in The Psychology of Learning and Motivation: Advances in Research and Theory, ed. G. H. Bower (New York: Academic Press, 1974), 8:47–89.
    • 3. Richard E. Mayer, “Principles for Multimedia Learning with Richard E. Mayer,” Harvard Initiative for Learning & Teaching (blog), July 8, 2014, http://hilt.harvard.edu/blog/principles-multimedia-learning-richard-e-mayer
    • 4. M. A. Sutton and T. J. Carew, “Behavioral, Cellular, and Molecular Analysis of Memory in Aplysia I: Intermediate-Term Memory,” Integrative and Comparative Biology 42, no. 4 (2002): 725–735.
  15. Designing for Research

    If you’ve spent enough time developing for the web, you’ve seen a piece of feedback like this land in your inbox since time immemorial:

    “This photo looks blurry. Can we replace it with a better version?”

    Every time this feedback reaches me, I’m inclined to question it: “What about the photo looks bad to you, and can you tell me why?”

    That’s a somewhat unfair question to counter with. The complaint is rooted in a subjective perception of image quality, which in turn is influenced by many factors. Some are technical, such as the export quality of the image or the compression method (often lossy, as is the case with JPEG-encoded photos). Others are more intuitive or perceptual, such as the content of the image and how compression artifacts mingle within it. Perhaps even performance plays a role we’re not entirely aware of.

    Fielding this kind of feedback for many years eventually led me to design and develop an image quality survey, which was my first go at building a research project on the web. I started with twenty-five photos shot by a professional photographer. With them, I generated a large pool of images at various quality levels and sizes. Images were served randomly from this pool to users, who were asked to rate what they thought about their quality.

    Results from the first round were interesting, but not entirely clear: users seemed to overestimate the actual quality of images, and poor performance appeared to have a negative impact on perceptions of image quality, but neither finding could be stated conclusively. A number of UX and technical issues also made a second round of research necessary. In lieu of spinning my wheels trying to extract conclusions from the first round results, I decided it would be best to improve the survey as much as possible and collect better data in round two. This article chronicles how I first built the survey, and then how I listened to user feedback to improve it.

    Defining the research

    Of the subjects within web performance, image optimization is especially vast. There’s a wide array of formats, encodings, and optimization tools, all of which are designed to make images small enough for web use while maintaining reasonable visual quality. Striking the balance between speed and quality is really what image optimization is all about.

    This balance between performance and visual quality prompted me to consider how people perceive image quality. Lossy image quality, in particular. Eventually, this train of thought led to a series of questions spurring the design and development of an image quality perception survey. The idea of the survey is to gather subjective assessments of quality by asking participants to rate images without an objective reference for what’s “perfect.” This is, after all, how people view images in situ.

    A word on surveys

    Any time we want to quantify user behavior, it’s inevitable that a survey is at least considered, if not ultimately chosen to gather data from a group of people. After all, surveys are perfect when your goal is to get something measurable. However, the survey is a seductively dangerous tool, as Erika Hall cautions. They’re easy to make and conduct, and are routinely abused in their dissemination. They’re not great tools for assessing past behavior. They’re just as bad (if not worse) at predicting future behavior. For example, the 1–10 scale often employed by customer satisfaction surveys doesn’t really say much of anything about how satisfied customers actually are or how likely they’ll be to buy a product in the future.

    The unfortunate reality, however, is that in lieu of my lording over hundreds of participants in person, the survey is the only truly practical tool I have to measure how people perceive image quality as well as if (and potentially how) performance metrics correlate to those perceptions. When I designed the survey, I kept to the following guidelines:

    • Don’t ask participants about anything other than what their perceptions are in the moment. By the time a participant has moved on, their recollection of what they just did rapidly diminishes as time elapses.
    • Don’t assume participants know everything you do. Guide them with relevant copy that succinctly describes what you expect of them.
    • Don’t ask participants to provide assessments with coarse inputs. Use an input type that permits them to finely assess image quality on a scale congruent with the lossy image quality encoding range.

    All we can do going forward is acknowledge we’re interpreting the data we gather under the assumption that participants are being truthful and understand the task given to them. Even if the perception metrics are discarded from the data, there are still some objective performance metrics gathered that could tell a compelling story. From here, it’s a matter of defining the questions that will drive the research.

    Asking the right questions

    In research, you’re seeking answers to questions. In the case of this particular effort, I wanted answers to these questions:

    • How accurate are people’s perceptions of lossy image quality in relation to actual quality?
    • Do people perceive the quality of JPEG images differently than WebP images?
    • Does performance play a role in all of this?

    These are important questions. To me, however, answering the last question was the primary goal. But the road to answers was (and continues to be) a complex journey of design and development choices. Let’s start out by covering some of the tech used to gather information from survey participants.

    Sniffing out device and browser characteristics

    When measuring how people perceive image quality, devices must be considered. After all, any given device’s screen will be more or less capable than others. Thankfully, HTML features such as srcset and picture are highly appropriate for delivering the best image for any given screen. This is vital because one’s perception of image quality can be adversely affected if an image is ill-fit for a device’s screen. Conversely, performance can be negatively impacted if an exceedingly high-quality (and therefore behemoth) image is sent to a device with a small screen. When sniffing out potential relationships between performance and perceived quality, these are factors that deserve consideration.

    With regard to browser characteristics and conditions, JavaScript gives us plenty of tools for identifying important aspects of a user’s device. For instance, the currentSrc property reveals which image is being shown from an array of responsive images. In the absence of currentSrc, I can somewhat safely assume support for srcset or picture is lacking, and fall back to the img tag’s src value:

    const surveyImage = document.querySelector(".survey-image");
    let loadedImage = surveyImage.currentSrc || surveyImage.src;

    Where screen capability is concerned, devicePixelRatio tells us the pixel density of a given device’s screen. In the absence of devicePixelRatio, you may safely assume a fallback value of 1:

    let dpr = window.devicePixelRatio || 1;

    devicePixelRatio enjoys excellent browser support. Those few browsers that don’t support it (i.e., IE 10 and under) are highly unlikely to be used on high density displays.

    The stalwart getBoundingClientRect method retrieves the rendered width of an img element, while the HTMLImageElement interface’s complete property determines whether an image has finished loading. The latter of these two is important, because it may be preferable to discard individual results in situations where images haven’t loaded.
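
    As a rough sketch, assuming the same .survey-image element as in the earlier snippet (the variable names here are mine, not the survey’s), those two checks might look like this:

    // Rendered width of the image, in CSS pixels
    const surveyImage = document.querySelector(".survey-image");
    const renderedWidth = surveyImage.getBoundingClientRect().width;

    // true once the image has finished loading; if it's still false when
    // a rating is submitted, that result may be worth discarding
    const imageLoaded = surveyImage.complete;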

    In cases where JavaScript isn’t available, we can’t collect any of this data. When ratings come from users who have JavaScript turned off (or are otherwise unable to run JavaScript), I have to accept there will be gaps in the data, but the basic information we’re still able to collect does provide some value.

    Sniffing for WebP support

    As you’ll recall, one of the initial questions asked was how users perceived the quality of WebP images. The HTTP Accept request header advertises WebP support in browsers like Chrome. In such cases, the Accept header might look something like this:

    Accept: image/webp,image/apng,image/*,*/*;q=0.8

    As you can see, the WebP content type of image/webp is one of the advertised content types in the header content. In server-side code, you can check Accept for the image/webp substring. Here’s how that might look in Express back-end code:

    // Guard against a missing Accept header before checking for WebP support
    const supportsWebP = (req.get("Accept") || "").indexOf("image/webp") !== -1;

    In this example, I’m recording the browser’s WebP support status to a JavaScript constant I can use later to modify image delivery. I could use the picture element with multiple sources and let the browser figure out which one to use based on the source element’s type attribute value, but the header-based approach has clear advantages. First, it’s less markup. Second, the survey shouldn’t always choose a WebP source simply because the browser is capable of using it. For any given survey specimen, the app should randomly decide between a WebP or JPEG image. Not all participants using Chrome should rate only WebP images, but rather a random smattering of both formats.
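
    Downstream, the delivery logic can then flip a coin whenever WebP is on the table. Here’s a minimal sketch that builds on the supportsWebP constant above; the specimen values and URL scheme are illustrative, not the survey’s actual ones:

    // Hypothetical values that would come from the survey's routing code
    const specimenId = 1;
    const width = 1024;

    // Serve WebP only half the time, even when the browser supports it,
    // so participants rate a random smattering of both formats
    const format = supportsWebP && Math.random() < 0.5 ? "webp" : "jpg";
    const imageUrl = `/images/${specimenId}-${width}w.${format}`;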

    Recording performance API data

    You’ll recall that one of the earlier questions I set out to answer was if performance impacts the perception of image quality. At this stage of the web platform’s development, there are several APIs that aid in the search for an answer:

    • Navigation Timing API (Level 2): This API tracks performance metrics for page loads. More than that, it gives insight into specific page loading phases, such as redirect, request and response time, DOM processing, and more.
    • Navigation Timing API (Level 1): Similar to Level 2, but with key differences. The timings exposed by Level 1 of the API lack the accuracy of those in Level 2. Furthermore, Level 1 metrics are expressed in Unix time. In the survey, data is only collected from Level 1 of the API if Level 2 is unsupported (see the sketch after this list). It’s far from ideal (and also technically obsolete), but it does help fill in small gaps.
    • Resource Timing API: Similar to Navigation Timing, but Resource Timing gathers metrics on various loading phases of page resources rather than the page itself. Of all the APIs used in the survey, Resource Timing is used most, as it helps gather metrics on the loading of the image specimen the user rates.
    • Server Timing: In select browsers, this API is brought into the Navigation Timing Level 2 interface when a page request replies with a Server-Timing response header. This header is open-ended and can be populated with timings related to back-end processing phases. This was added to round two of the survey to quantify back-end processing time in general.
    • Paint Timing API: Currently only in Chrome, this API reports two paint metrics: first paint and first contentful paint. Because a significant slice of users on the web use Chrome, we may be able to observe relationships between perceived image quality and paint metrics.
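
    As a sketch of the Level 1 fallback mentioned above, the feature check might look something like this:

    let navigationTiming = null;

    if ("performance" in window) {
      if ("getEntriesByType" in performance) {
        // Navigation Timing Level 2: one "navigation" entry per page load
        navigationTiming = performance.getEntriesByType("navigation")[0] || null;
      }

      if (navigationTiming === null && "timing" in performance) {
        // Fall back to Level 1, whose metrics are expressed in Unix time
        navigationTiming = performance.timing;
      }
    }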

    Using these APIs, we can record performance metrics for most participants. Here’s a simplified example of how the survey uses the Resource Timing API to gather performance metrics for the loaded image specimen:

    // Get information about the loaded image
    const surveyImageElement = document.querySelector(".survey-image");
    const fullImageUrl = surveyImageElement.currentSrc || surveyImageElement.src;
    const imageUrlParts = fullImageUrl.split("/");
    const imageFilename = imageUrlParts[imageUrlParts.length - 1];
    
    // Check for performance API methods
    if ("performance" in window && "getEntriesByType" in performance) {
      // Get entries from the Resource Timing API
      let resources = performance.getEntriesByType("resource");
    
      // Ensure resources were returned
      if (Array.isArray(resources) && resources.length > 0) {
        resources.forEach((resource) => {
          // Check if the resource is for the loaded image
          if (resource.name.indexOf(imageFilename) !== -1) {
            // Access the resource timings for the image here
          }
        });
      }
    }

    If the Resource Timing API is available, and the getEntriesByType method returns results, objects with timings are returned. The entry for a loaded image looks something like this:

    {
      connectEnd: 1156.5999999947962,
      connectStart: 1156.5999999947962,
      decodedBodySize: 11110,
      domainLookupEnd: 1156.5999999947962,
      domainLookupStart: 1156.5999999947962,
      duration: 638.1000000037602,
      encodedBodySize: 11110,
      entryType: "resource",
      fetchStart: 1156.5999999947962,
      initiatorType: "img",
      name: "https://imagesurvey.site/img-round-2/1-1024w-c2700e1f2c4f5e48f2f57d665b1323ae20806f62f39c1448490a76b1a662ce4a.webp",
      nextHopProtocol: "h2",
      redirectEnd: 0,
      redirectStart: 0,
      requestStart: 1171.6000000014901,
      responseEnd: 1794.6999999985565,
      responseStart: 1737.0999999984633,
      secureConnectionStart: 0,
      startTime: 1156.5999999947962,
      transferSize: 11227,
      workerStart: 0
    }

    I grab these metrics as participants rate images, and store them in a database. Down the road when I want to write queries and analyze the data I have, I can refer to the Processing Model for the Resource and Navigation Timing APIs. With SQL and data at my fingertips, I can measure the distinct phases outlined by the model and see if correlations exist.
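
    The queries themselves live in SQL, but the arithmetic is easy to illustrate in JavaScript using an entry like the one shown above (a sketch, assuming an img-initiated Resource Timing entry exists):

    // Grab the first Resource Timing entry initiated by an img element
    const [resource] = performance
      .getEntriesByType("resource")
      .filter((entry) => entry.initiatorType === "img");

    if (resource) {
      // Phase durations in milliseconds, per the processing model
      const dnsTime = resource.domainLookupEnd - resource.domainLookupStart;
      const connectTime = resource.connectEnd - resource.connectStart;
      const timeToFirstByte = resource.responseStart - resource.requestStart;
      const downloadTime = resource.responseEnd - resource.responseStart;
      const totalTime = resource.duration; // responseEnd minus startTime
    }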

    Having discussed the technical underpinnings of how data can be collected from survey participants, let’s shift the focus to the survey’s design and user flows.

    Designing the survey

    Though surveys tend to have straightforward designs and user flows relative to other sites, we must remain cognizant of the user’s path and the impediments a user could face.

    The entry point

    When participants arrive at the home page, we want to be direct in our communication with them. The home page intro copy greets participants, gives them a succinct explanation of what to expect, and presents two navigation choices:

    One button with the text “I want to participate!” and another button with the text “What data do you gather?”

    From here, participants either start the survey or read a privacy policy. If the user decides to take the survey, they’ll reach a page politely asking them what their professional occupation is and requesting them to disclose any eyesight conditions. The fields for these questions can be left blank, as some may not be comfortable disclosing this kind of information. Beyond this point, the survey begins in earnest.

    The survey primer

    Before the user begins rating images, they’re redirected to a primer page. This page describes what’s expected of participants, and explains how to rate images. While the survey is promoted on design and development outlets where readers regularly work with imagery on the web, a primer is still useful in getting everyone on the same page. The first paragraph of the page stresses that users are rating image quality, not image content. This is important. Absent any context, participants may indeed rate images for their content, which is not what we’re asking for. After this clarification, the concept of lossy image quality is demonstrated with the following diagram:

    A divided photo with one half demonstrating low image quality and the other demonstrating high quality.

    Lastly, the function of the rating input is explained. This could likely be inferred by most, but the explanatory copy helps remove any remaining ambiguity. Assuming your user knows everything you do is not necessarily wise. What seems obvious to one is not always so to another.

    The image specimen page

    This page is the main event and is where participants assess the quality of images shown to them. It contains two areas of focus: the image specimen and the input used to rate the image’s quality.

    Let’s talk a bit out of order and discuss the input first. I mulled over a few options when it came to which input type to use. I considered a select element with coarsely predefined choices, an input with a type of number, and other options. What seemed to make the most sense to me, however, was a slider input with a type of range.

    A rating slide with “worst” at the far left, and “best” at the far right. The slider track is a gradient from red on the left to green on the right.

    A slider input is more intuitive than a text input, or a select element populated with various choices. Because we’re asking for a subjective assessment about something with such a large range of interpretation, a slider allows participants more granularity in their assessments and lends further accuracy to the data collected.

    Now let’s talk about the image specimen and how it’s selected by the back-end code. I decided early on in the survey’s development that I wanted images that weren’t prominent in existing stock photo collections. I also wanted uncompressed sources so I wouldn’t be presenting participants with recompressed image specimens. To achieve this, I procured images from a local photographer. The twenty-five images I settled on were minimally processed raw images from the photographer’s camera. The result was a cohesive set of images that felt visually related to each other.

    To properly gauge perception across the entire spectrum of quality settings, I needed to generate each image from the aforementioned sources at ninety-six different quality settings ranging from 5 to 100. To account for the varying widths and pixel densities of screens in the wild, each image also needed to be generated at four different widths for each quality setting: 1536, 1280, 1024, and 768 pixels, to be exact. Just the job srcset was made for!

    To top it all off, images also needed to be encoded in both JPEG and WebP formats. As a result, the survey draws randomly from 768 images per specimen across the entire quality range, while also delivering the best image for the participant’s screen. This means that across the twenty-five image specimens participants evaluate, the survey draws from a pool of 19,200 images total.
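
    The arithmetic behind those figures is easy to verify:

    const qualitySettings = 96; // every setting from 5 through 100
    const widths = 4;           // 1536, 1280, 1024, and 768 pixels
    const formats = 2;          // JPEG and WebP
    const specimens = 25;

    const perSpecimen = qualitySettings * widths * formats; // 768
    const totalImages = perSpecimen * specimens;            // 19,200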

    With the conception and design of the survey covered, let’s segue into how the survey was improved by implementing user feedback into the second round.

    Listening to feedback

    When I launched round one of the survey, feedback came flooding in from designers, developers, accessibility advocates, and even researchers. While my intentions were good, I inevitably missed some important aspects, which made it necessary to conduct a second round. Iteration and refinement are critical to improving the usefulness of a design, and this survey was no exception. When we improve designs with user feedback, we take a project from average to something more memorable. Getting to that point means taking feedback in stride and addressing distinct, actionable items. In the case of the survey, incorporating feedback not only yielded a better user experience, it improved the integrity of the data collected.

    Building a better slider input

    Though the first round of the survey was serviceable, I ran into issues with the slider input. In round one of the survey, that input looked like this:

    A slider with evenly spaced labels from left to right reading respectively, “Awful”, “Bad”, “OK”, “Good”, “Great”. Below it is a disabled button with the text “Please Rate the Image…”.

    There were two recurring complaints regarding this specific implementation. The first was that participants felt they had to align their rating to one of the labels beneath the slider track. This was undesirable for the simple fact that the slider was chosen specifically to encourage participants to provide nuanced assessments.

    The second complaint was that the submit button was disabled until the user interacted with the slider. This design choice was intended to prevent participants from simply clicking the submit button on every page without rating images. Unfortunately, this implementation was unintentionally hostile to the user and needed improvement, because it blocked users from rating images without a clear and obvious explanation as to why.

    Fixing the problem with the labels meant redesigning the slider as it appeared in round one. I removed the labels altogether to eliminate the temptation of users to align their answers to them. Additionally, I changed the slider background property to a gradient pattern, which further implied the granularity of the input.

    The submit button issue was a matter of how users were prompted. In round one the submit button was visible, yet the disabled state wasn’t obvious enough to some. After consulting with a colleague, I found a solution for round two: in lieu of the submit button being initially visible, it’s hidden by some guide copy:

    The revised slider followed by the text “Once you rate the image, you may submit.”

    Once the user interacts with the slider and rates the image, a change event attached to the input fires, which hides the guide copy and replaces it with the submit button:

    The revised slider now followed by a button reading “Submit rating”.
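
    A minimal sketch of that interaction, assuming illustrative class names rather than the survey’s actual markup, might read:

    const ratingSlider = document.querySelector(".rating-slider");
    const guideCopy = document.querySelector(".guide-copy");
    const submitButton = document.querySelector(".submit-rating");

    // Once the participant rates the image, swap the guide copy
    // for the submit button (which starts out hidden in the markup)
    ratingSlider.addEventListener("change", () => {
      guideCopy.hidden = true;
      submitButton.hidden = false;
    }, { once: true });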

    This solution is less ambiguous, and it funnels participants down the desired path. If someone with JavaScript disabled visits, the guide copy is never shown, and the submit button is immediately usable. This isn’t ideal, but it doesn’t shut out participants without JavaScript.

    Addressing scrolling woes

    The survey page works especially well in portrait orientation. Participants can see all (or most) of the image without needing to scroll. In browser windows or mobile devices in landscape orientation, however, the survey image can be larger than the viewport:

    Screen shot of the survey with an image clipped at the bottom by the viewport and rating slider.

    Working with such limited vertical real estate is tricky, especially in this case where the slider needs to be fixed to the bottom of the screen (which addressed an earlier bit of user feedback from round one testing). After discussing the issue with colleagues, I decided that animated indicators in the corners of the page could signal to users that there’s more of the image to see.

    The survey with the clipped image, but now there is a downward-pointing arrow with the word “Scroll”.

    When the user hits the bottom of the page, the scroll indicators disappear. Because animations may be jarring for certain users, a prefers-reduced-motion media query is used to turn off this (and all other) animations if the user has a stated preference for reduced motion. In the event JavaScript is disabled, the scrolling indicators are always hidden in portrait orientation where they’re less likely to be useful and always visible in landscape where they’re potentially needed the most.
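
    In JavaScript, honoring that preference can be as simple as checking matchMedia before starting the animation. A sketch, assuming the survey does this in script rather than in CSS:

    const prefersReducedMotion = window.matchMedia(
      "(prefers-reduced-motion: reduce)"
    ).matches;

    if (!prefersReducedMotion) {
      // Safe to animate the scroll indicators here
    }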

    Avoiding overscaling of image specimens

    One issue that was brought to my attention by a coworker was how the survey image seemed to expand boundlessly with the viewport. On mobile devices this isn’t such a problem, but on large screens and even modestly sized high-density displays, images can be scaled excessively. Because the responsive img tag’s srcset attribute specifies a maximum resolution image of 1536w, an image can begin to overscale at display widths as “small” as 768 pixels on devices with a device pixel ratio of 2.

    The survey with an image expanding to fill the window.

    Some overscaling is inevitable and acceptable. However, when it’s excessive, compression artifacts in an image can become more pronounced. To address this, the survey image’s max-width is set to 1536px for standard displays as of round two. For devices with a device pixel ratio of 2 or higher, the survey image’s max-width is set to half that at 768px:

    The survey with an image comfortably fitting in the window.
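
    Expressed in JavaScript, the logic might read like the sketch below, though a CSS media query keyed on resolution would work just as well:

    const surveyImage = document.querySelector(".survey-image");
    const dpr = window.devicePixelRatio || 1;

    // Cap the image at 1536px on standard displays, and at half that on
    // high-density displays, where each CSS pixel spans multiple device pixels
    surveyImage.style.maxWidth = dpr >= 2 ? "768px" : "1536px";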

    This minor (yet important) fix ensures that images aren’t scaled beyond a reasonable maximum. With a reasonably sized image asset in the viewport, participants will assess images close to or at a given image asset’s natural dimensions, particularly on large screens.

    User feedback is valuable. These and other UX feedback items I incorporated improved both the function of the survey and the integrity of the collected data. All it took was sitting down with users and listening to them.

    Wrapping up

    As round two of the survey gets under way, I’m hoping the data gathered reveals something exciting about the relationship between performance and how people perceive image quality. If you want to be a part of the effort, please take the survey. When round two concludes, keep an eye out here for a summary of the results!

    Thank you to those who gave their valuable time and feedback to make this article as good as it could possibly be: Aaron Gustafson, Jeffrey Zeldman, Brandon Gregory, Rachel Andrew, Bruce Hyslop, Adrian Roselli, Meg Dickey-Kurdziolek, and Nick Tucker.

    Additional thanks to those who helped improve the image quality survey: Mandy Tensen, Darleen Denno, Charlotte Dann, Tim Dunklee, and Thad Roe.

  16. Conversational Design

    A note from the editors: We’re pleased to share an excerpt from Chapter 1 of Erika Hall’s new book, Conversational Design, available now from A Book Apart.

    Texting is how we talk now. We talk by tapping tiny messages on touchscreens—we message using SMS, or through apps like Facebook Messenger or WhatsApp over mobile data networks.

    In 2015, the Pew Research Center found that 64% of American adults owned a smartphone of some kind, up from 35% in 2011. We still refer to these personal, pocket-sized computers as phones, but “Phone” is now just one of many communication apps we neglect in favor of texting. Texting is the most widely used mobile data service in America. And in the wider world, four billion people have mobile phones, so four billion people have access to SMS or other messaging apps. For some, dictating messages into a wristwatch offers an appealing alternative to placing a call.

    The popularity of texting can be partially explained by the medium’s ability to offer the easy give-and-take of conversation without requiring continuous attention. Texting feels like direct human connection, made even more captivating by unpredictable lag and irregular breaks. Any typing is incidental because the experience of texting barely resembles “writing,” a term that carries associations of considered composition. In his TED talk, Columbia University linguist John McWhorter called texting “fingered conversation”—terminology I find awkward, but accurate. The physical act—typing—isn’t what defines the form or its conventions. Technology is breaking down our traditional categories of communication.

    By the numbers, texting is the most compelling computer-human interaction going. When we text, we become immersed and forget our exchanges are computer-mediated at all. We can learn a lot about digital design from the inescapable draw of these bite-sized interactions, specifically the use of language.

    What Texting Teaches Us

    Texting is a telling example of what makes computer-mediated interaction compelling. The reasons people attend to their text messages—even at risk to their own health and safety—aren’t high production values, so-called rich media, or the complexity of the feature set.

    Texting, and other forms of social media, tap into something very primitive in the human brain. These systems offer always-available social connection. The brevity and unpredictability of the messages themselves trigger the release of dopamine that motivates seeking behavior and keeps people coming back for more. What makes interactions interesting may start on a screen, but the really interesting stuff happens in the mind. And language is a critical part of that. Our conscious minds are made of language, so it’s easy to perceive the messages you read not just as words but as the thoughts of another mingled with your own. Loneliness seems impossible with so many voices in your head.

    With minimal visual embellishment, texts can deliver personality, pathos, humor, and narrative. This is apparent in “Texts from Dog,” which, as the title indicates, is a series of imagined text exchanges between a man and his dog (Fig 1.1). With just a few words, and some considered capitalization, Joe Butcher (writing as October Jones) creates a vivid picture of the relationship between a neurotic canine and his weary owner.

    A dog texts his master about belly rubs.
    Fig 1.1: “Texts from Dog” shows how lively a simple text exchange can be.

    Using words is key to connecting with other humans online, just as it is in the so-called “real world.” Imbuing interfaces with the attributes of conversation can be powerful. I’m far from the first person to suggest this. However, as computers mediate more and more relationships, including customer relationships, anyone thinking about digital products and services is in a challenging place. We’re caught between tried-and-true past practices and the urge to adopt the “next big thing,” sometimes at the exclusion of all else.

    Being intentionally conversational isn’t easy. This is especially true in business and at scale, such as in digital systems. Professional writers use different types of writing for different purposes, and each has rules that can be learned. The love of language is often fueled by a passion for rules — rules we received in the classroom and revisit in manuals of style, and rules that offer writers the comfort of being correct outside of any specific context. Also, there is the comfort of being finished with a piece of writing and moving on. Conversation, on the other hand, is a context-dependent social activity that implies a potentially terrifying immediacy.

    Moving from the idea of publishing content to engaging in conversation can be uncomfortable for businesses and professional writers alike. There are no rules. There is no done. It all feels more personal. Using colloquial language, even in “simplifying” interactive experiences, can conflict with a desire to appear authoritative. Or the pendulum swings to the other extreme and a breezy style gets applied to a laborious process like a thin coat of paint.

    As a material for design and an ingredient in interactions, words need to emerge from the content shed and be considered from the start.  The way humans use language—easily, joyfully, sometimes painfully—should anchor the foundation of all interactions with digital systems.

    The way we use language and the way we socialize are what make us human; our past contains the key to what commands our attention in the present, and what will command it in the future. To understand how we came to be so perplexed by our most human quality, it’s worth taking a quick look at, oh!, the entire known history of communication technology.

    The Mother Tongue

    Accustomed to eyeballing type, we can forget language began in our mouths as a series of sounds, like the calls and growls of other animals. We’ll never know for sure how long we’ve been talking—speech itself leaves no trace—but we do know it’s been a mighty long time.

    Archaeologist Natalie Thais Uomini and psychologist Georg Friedrich Meyer concluded that our ancestors began to develop language as early as 1.75 million years ago. Per the fossil record, modern humans emerged at least 190,000 years ago in the African savannah. Evidence of cave painting goes back 30,000 years (Fig 1.2).

    Then, a mere 6,000 years ago, ancient Sumerian commodity traders grew tired of getting ripped off. Around 3200 BCE, one of them had the idea to track accounts by scratching wedges in wet clay tablets. Cuneiform was born.

    So, don’t feel bad about procrastinating when you need to write—humanity put the whole thing off for a couple hundred thousand years! By a conservative estimate, we’ve had writing for about 4% of the time we’ve been human. Chatting is easy; writing is an arduous chore.

    Prior to mechanical reproduction, literacy was limited to the elite by the time and cost of hand-copying manuscripts. It was the rise of printing that led to widespread literacy; mass distribution of text allowed information and revolutionary ideas to circulate across borders and class divisions. The sharp increase in literacy bolstered an emerging middle class. And the ability to record and share knowledge accelerated all other advances in technology: photography, radio, TV, computers, internet, and now the mobile web. And our talking speakers.

    Chart showing the evolution of communication over the last 200,000, 6,000, and 180 years
    Fig 1.2: In hindsight, “literate culture” now seems like an annoying phase we had to go through so we could get to texting.

    Every time our communication technology advances and changes, so does the surrounding culture—then it disrupts the power structure and upsets the people in charge. Catholic archbishops railed against mechanical movable type in the fifteenth century. Today, English teachers deplore texting emoji. Resistance is, as always, futile. OMG is now listed in the Oxford English Dictionary.

    But while these developments have changed the world and how we relate to one another, they haven’t altered our deep oral core.

    Orality, Say It with Me

    Orality knits persons into community.
    Walter Ong

    Today, when we record everything in all media without much thought, it’s almost impossible to conceive of a world in which the sum of our culture existed only as thoughts.

    Before literacy, words were ephemeral and all knowledge was social and communal. There was no “save” option and no intellectual property. The only way to sustain an idea was to share it, speaking aloud to another person in a way that made it easy for them to remember. This was orality—the first interface.

    We can never know for certain what purely oral cultures were like. People without writing are terrible at keeping records. But we can examine oral traditions that persist for clues.

    The oral formula

    Reading and writing remained elite activities for centuries after their invention. In cultures without a writing system, oral characteristics persisted to help transmit poetry, history, law and other knowledge across generations.

    The epic poems of Homer rely on meter, formulas, and repetition to aid memory:

    Far as a man with his eyes sees into the mist of the distance
    Sitting aloft on a crag to gaze over the wine-dark seaway,
    Just so far were the loud-neighing steeds of the gods overleaping.
    Iliad, 5.770

    Concrete images like rosy-fingered dawn, loud-neighing steeds, wine-dark seaway, and swift-footed Achilles served to aid the teller and to sear the story into the listener’s memory.

    Biblical proverbs also encode wisdom in a memorable format:

    As a dog returns to its vomit, so fools repeat their folly.
    Proverbs 26:11

    That is vivid.

    And a saying that originated in China hundreds of years ago can prove sufficiently durable to adorn a few hundred Etsy items:

    A journey of a thousand miles begins with a single step.
    Tao Te Ching, Chapter 64, ascribed to Lao Tzu

    The labor of literature

    Literacy created distance in time and space and decoupled shared knowledge from social interaction. Human thought escaped the existential present. The reader doesn’t need to be alive at the same time as the writer, let alone hanging out around the same fire pit or agora. 

    Freed from the constraints of orality, thinkers explored new forms to preserve their thoughts. And what verbose and convoluted forms these could take:

    The Reader will I doubt too soon discover that so large an interval of time was not spent in writing this discourse; the very length of it will convince him, that the writer had not time enough to make a shorter.
    George Tullie, An Answer to a Discourse Concerning the Celibacy of the Clergy, 1688

    There’s no such thing as an oral semicolon. And George Tullie has no way of knowing anything about his future audience. He addresses himself to a generic reader he will never see, nor receive feedback from. Writing in this manner is terrific for precision, but not good at all for interaction.

    Writing allowed literate people to become hermits and hoarders, able to record and consume ideas in total solitude, to invest authority in them, and to defend ownership of them. Though much writing preserved the dullest of records, the small minority of language communities that made the leap to literacy also gained the ability to compose, revise, and perfect works of magnificent complexity, utility, and beauty.

    The qualities of oral culture

    In Orality and Literacy: The Technologizing of the Word, Walter Ong explored the “psychodynamics of orality,” which is, coincidentally, quite a mouthful.  Through his research, he found that the ability to preserve ideas in writing not only increased knowledge, it altered values and behavior. People who grow up and live in a community that has never known writing are different from literate people—they depend upon one another to preserve and share knowledge. This makes for a completely different, and much more intimate, relationship between ideas and communities.

    Oral culture is immediate and social

    In a society without writing, communication can happen only in the moment and face-to-face. It sounds like the introvert’s nightmare! Oral culture has several other hallmarks as well:

    • Spoken words are events that exist in time. It’s impossible to step back and examine a spoken word or phrase. While the speaker can try to repeat, there’s no way to capture or replay an utterance.
    • All knowledge is social, and lives in memory. Formulas and patterns are essential to transmitting and retaining knowledge. When the knowledge stops being interesting to the audience, it stops existing.
    • Individuals need to be present to exchange knowledge or communicate. All communication is participatory and immediate. The speaker can adjust the message to the context. Conversation, contention, and struggle help to retain this new knowledge.
    • The community owns knowledge, not individuals. Everyone draws on the same themes, so not only is originality not helpful, it’s nonsensical to claim an idea as your own.
    • There are no dictionaries or authoritative sources. The right use of a word is determined by how it’s being used right now.

    Literate culture promotes authority and ownership

    Printed books enabled mass distribution and dispensed with the handicraft of manuscripts, alienating readers from the source of the ideas, and from each other (Ong, p. 100):

    • The printed text is an independent physical object. Ideas can be preserved as a thing, completely apart from the thinker.
    • Portable printed works enable individual consumption. The need and desire for private space accompanied the emergence of silent, solo reading.
    • Print creates a sense of private ownership of words. Plagiarism is possible.
    • Individual attribution is possible. The ability to identify a sole author increases the value of originality and creativity.
    • Print fosters a sense of closure. Once a work is printed, it is final and closed.

    Print-based literacy ascended to a position of authority and cultural dominance, but it didn’t eliminate oral culture completely.

    Technology brought us together again

    All that studying allowed people to accumulate and share knowledge, speeding up the pace of technological change. And technology transformed communication in turn. It took less than 150 years to get from the telegraph to the World Wide Web. And with the web—a technology that requires literacy—Ong identified a return to the values of the earlier oral culture. He called this secondary orality. Then he died in 2003, before the rise of the mobile internet, when things really got interesting.

    Secondary orality is:

    • Immediate. There is no necessary delay between the expression of an idea and its reception. Physical distance is meaningless.
    • Socially aware and group-minded. The number of people who can hear and see the same thing simultaneously is in the billions.
    • Conversational. This is in the sense of being both more interactive and less formal.
    • Collaborative. Communication invites and enables a response, which may then become part of the message.
    • Intertextual. The products of our culture reflect and influence one another.

    Social, ephemeral, participatory, anti-authoritarian, and opposed to individual ownership of ideas—these qualities sound a lot like internet culture.

    Wikipedia: Knowledge Talks

    When someone mentions a genre of music you’re unfamiliar with—electroclash, say, or plainsong—what do you do to find out more? It’s quite possible you type the term into Google and end up on Wikipedia, the improbably successful, collaborative encyclopedia that couldn’t exist without the internet.

    According to Wikipedia, encyclopedias have existed for around two thousand years. Wikipedia has existed since 2001, and it’s the fifth most-popular site on the web. Wikipedia is not a publication so much as a society that provides access to knowledge. A volunteer community of “Wikipedians” continuously adds to and improves millions of articles in over 200 languages. It’s a phenomenon manifesting all the values of secondary orality:

    • Anyone can contribute anonymously and anyone can modify the contributions of another.
    • The output is free.
    • The encyclopedia articles are not attributed to any sole creator. A single article might have 2 editors or 1,000.
    • Each article has an accompanying “talk” page where editors discuss potential improvements, and a “history” page that tracks all revisions. Heated arguments are not documented. They take place as revisions within documents.

    Wikipedia is disruptive in the true Clayton Christensen sense. It’s created immense value and wrecked an existing business model. Traditional encyclopedias are publications governed by authority, and created by experts and fact checkers. A volunteer project collaboratively run by unpaid amateurs shows that conversation is more powerful than authority, and that human knowledge is immense and dynamic.

    In an interview with The Guardian, a British librarian expressed some disdain for Wikipedia.

    The main problem is the lack of authority. With printed publications, the publishers must ensure that their data are reliable, as their livelihood depends on it. But with something like this, all that goes out the window.
    Philip Bradley, “Who knows?”, The Guardian, October 26, 2004

    Wikipedia is immediate, group-minded, conversational, collaborative, and intertextual—secondary orality in action—but it relies on traditionally published sources for its authority. After all, anything new that changes the world does so by fitting into the world. As we design for new methods of communication, we should remember that nothing is more valuable simply because it’s new; rather, technology is valuable when it brings us more of what’s already meaningful.

    From Documents to Events

    Pages and documents organize information in space. Space used to be more of a constraint back when we printed conversation out. Now that the internet has given us virtually infinite space, we need to mind how conversation moves through time. Thinking about serving the needs of people in an internet-based culture requires a shift from thinking about how information occupies space—documents—to how it occupies time—events.

    Texting means that we’ve never been more lively (yet silent) in our communications. While we still have plenty of in-person interactions, it’s gotten easy to go without. We text grocery requests to our spouses. We click through a menu in a mobile app to summon dinner (the order may still arrive at the restaurant by fax, proving William Gibson’s maxim that the future is unevenly distributed). We exchange messages on Twitter and Facebook instead of visiting friends in person, or even while visiting friends in person. We work at home and Slack our colleagues.

    We’re rapidly approaching a future where humans text other humans and only speak aloud to computers. A text-based interaction with a machine that’s standing in for a human should feel like a text-based interaction with a human. Words are a fundamental part of the experience and of the design; they should be the basis for defining and creating it.

    We’re participating in a radical cultural transformation. The possibilities manifest in systems like Wikipedia that succeed in changing the world by using technology to connect people in a single collaborative effort. And even those of us creating the change suffer from some lag. The dominant educational and professional culture remains based in literary values. We’ve been rewarded for individual achievement rather than collaboration. We seek to “make our mark,” even when designing changeable systems too complex for any one person to claim authorship. We look for approval from an authority figure. Working in a social, interactive way should feel like the most natural thing in the world, but it will probably take some doing.

    Literary writing—any writing that emerges from the culture and standards of literacy—is inherently not interactive. We need to approach the verbal design not as a literary work, but as a conversation. Designing human-centered interactive systems requires us to reflect on our deep-seated orientation around artifacts and ownership. We must alienate ourselves from a set of standards that no longer apply.

    Most advice on “writing for the web” or “creating content” starts from the presumption that we are “writing,” just for a different medium. But when we approach communication as an assembly of pieces of content rather than an interaction, customers who might have been expecting a conversation end up feeling like they’ve been handed a manual instead.

    Software is on a path to participating in our culture as a peer, so it should behave like a person—alive and present. It doesn’t matter how much so-called machine intelligence is under the hood—a perceptive set of programmatic responses, rather than a series of documents, can be enough if it has the qualities of conversation.

    Interactive systems should evoke the best qualities of living human communities—active, social, simple, and present—not passive, isolated, complex, or closed off.

    Life Beyond Literacy

    Indeed, language changes lives. It builds society, expresses our highest aspirations, our basest thoughts, our emotions and our philosophies of life. But all language is ultimately at the service of human interaction. Other components of language—things like grammar and stories—are secondary to conversation.
    Daniel L. Everett, How Language Began

    Literacy has gotten us far. It’s gotten you this far in this book. So, it’s not surprising we’re attached to the idea. Writing has allowed us to create technologies that give us the ability to interact with one another across time and space, and have instantaneous access to knowledge in a way our ancestors would equate with magic. However, creating and exchanging documents, while powerful, is not a good model for lively interaction. Misplaced literate values can lead to misery—working alone and worrying too much about posterity.

    So, it’s time to let go and live a little! We’re at an exciting moment. The computer screen that once stood for a page can offer a window into a continuous present that still remembers everything. Or, the screen might disappear completely.

    Now we can start imagining, in an open-ended way, what constellation of connected devices any given person will have around them, and how we can deliver a meaningful, memorable experience on any one of them. We can step away from the screen and consider what set of inputs, outputs, events, and information add up to the best experience.

    This is daunting for designers, sure, yet phenomenal for people. Thinking about human-computer interactions from a screen-based perspective was never truly human-centered from the start. The ideal interface is an interface that’s not noticeable at all—a world in which the distance from thought to action has collapsed and merely uttering a phrase can make it so.

    We’re fast moving past “computer literacy.” It’s on us to ensure all systems speak human fluently.

  17. A DIY Web Accessibility Blueprint

    The summer of 2017 marked a monumental victory for the millions of Americans living with a disability. On June 13th, a Southern District of Florida judge ruled that Winn-Dixie’s inaccessible website violated Title III of the Americans with Disabilities Act. This marks the first website-accessibility case to go to trial under the ADA, which was passed into law in 1990.

    Despite spending more than $7 million to revamp its website in 2016, Winn-Dixie neglected to include design considerations for users with disabilities. Some of the features that were added include online prescription refills, digital coupons, rewards card integration, and a store locator function. However, it appears that inclusivity didn’t make the cut.

    Because Winn-Dixie’s new website wasn’t developed to WCAG 2.0 standards, the new features it boasted were in effect only available to sighted, able-bodied users. When Florida resident Juan Carlos Gil, who is legally blind, visited the Winn-Dixie website to refill his prescriptions, he found it to be almost completely inaccessible using the same screen reader software he uses to access hundreds of other sites.

    Juan stated in his original complaint that he “felt as if another door had been slammed in his face.” But Juan wasn’t alone. Intentionally or not, Winn-Dixie was denying an entire group of people access to its new website and, in turn, each of the time-saving features it had to offer.

    What makes this case unique is that it marks the first time in history that a public accommodations case concerning a website went to trial, meaning the judge ruled the website to be a “place of public accommodation” under the ADA and therefore subject to ADA regulations. Since there are no specific ADA regulations regarding the internet, Judge Scola deemed the adoption of the Web Content Accessibility Guidelines (WCAG) 2.0 Level AA appropriate. (Thanks to the hard work of the Web Accessibility Initiative (WAI) at the W3C, WCAG 2.0 has found widespread adoption throughout the globe, either as law or policy.)

    Learning to have empathy

    Anyone with a product subscription service (think diapers, razors, or pet food) knows the feeling of gratitude that accompanies the delivery of a much needed product that arrives just in the nick of time. Imagine how much more grateful you’d be for this service if you, for whatever reason, were unable to drive and lived hours from the nearest store. It’s a service that would greatly improve your life. But now imagine that the service gets overhauled and redesigned in such a way that it is only usable by people who own cars. You’d probably be pretty upset.

    This subscription service example is hypothetical, yet in the United States, despite federal web accessibility requirements instituted to protect the rights of disabled Americans, this sort of discrimination happens frequently. In fact, anyone assuming the Winn-Dixie case was an isolated incident would be wrong. Web accessibility lawsuits are rising in number. The increase from 2015 to 2016 was 37%. While some of these may be what’s known as “drive-by lawsuits,” many of them represent plaintiffs like Juan Gil who simply want equal rights. Scott Dinin, Juan’s attorney, explained, “We’re not suing for damages. We’re only suing them to follow the laws that have been in this nation for twenty-seven years.”

    For this reason and many others, now is the best time to take a proactive approach to web accessibility. In this article I’ll help you create a blueprint for getting your website up to snuff.

    The accessibility blueprint

    If you’ll be dealing with remediation, I won’t sugarcoat it: successfully meeting web accessibility standards is a big undertaking, one that is achieved only when every page of a site adheres to all the guidelines you are attempting to comply with. As I mentioned earlier, those guidelines are usually WCAG 2.0 Level AA, which means meeting every Level A and AA requirement. Tight deadlines, small budgets, and competing priorities may increase the stress that accompanies a web accessibility remediation project, but with a little planning and research, making a website accessible is both reasonable and achievable.

    My intention is that you may use this article as a blueprint to guide you as you undertake a DIY accessibility remediation project. Before you begin, you’ll need to increase your accessibility know-how, familiarize yourself with the principles of universal design, and learn about the benefits of an accessible website. Then you may begin to evangelize the benefits of web accessibility to those you work with.

    Have the conversation with leadership

    Securing support from company leadership is imperative to the long-term success of your efforts. There are numerous ways to broach the subject of accessibility, but, sadly, in the world of business, substantiated claims top ethics and moral obligation. Therefore I’ve found one of the most effective ways to build a business case for web accessibility is to highlight the benefits.

    Here are just a few to speak of:

    • Accessible websites are inherently more usable, and consequently they get more traffic. Additionally, better user experiences result in lower bounce rates, higher conversions, and less negative feedback, which in turn typically make accessible websites rank higher in search engines.
    • Like assistive technology, web crawlers (such as Googlebot) leverage HTML to get their information from websites, so a well marked-up, accessible website is easier to index, which makes it easier to find in search results.
    • There are a number of potential risks for not having an accessible website, one of which is accessibility lawsuits.
    • Small businesses in the US that improve the accessibility of their website may be eligible for a tax credit from the IRS.

    Start the movement

    If you can’t secure leadership backing right away, you can still form a grassroots accessibility movement within the company. Begin slowly and build momentum as you work to improve usability for all users. Though you may not have the authority to make company-wide changes, you can strategically and systematically lead the charge for web accessibility improvements.

    My advice is to start small. For example, begin by pushing for site-wide improvements to color contrast ratios (which would help color-blind, low-vision, and aging users) or work on making the site keyboard accessible (which would help users with mobility impairments or broken touchpads, and people such as myself who prefer not using a mouse whenever possible). Incorporate user research and A/B testing into these updates, and document the results. Use the results to champion more accessibility improvements.
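
    Here is a minimal CSS sketch of both kinds of quick wins. The selector and color values are hypothetical, but the contrast comments reflect the WCAG 2.0 Level AA minimum of 4.5:1 for normal-size text.

    .site-nav a {
      /* #767676 on white yields a contrast ratio of roughly 4.5:1,
         which meets WCAG 2.0 Level AA for normal-size text;
         lighter grays such as #999999 do not. */
      background-color: #ffffff;
      color: #767676;
    }

    .site-nav a:focus {
      /* A visible focus indicator lets keyboard users see where
         they are; never remove outlines without a replacement. */
      outline: 2px solid #005a9c;
    }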

    Read and re-read the guidelines

    Build your knowledge base as you go. Learning which laws, rules, or guidelines apply to you, and understanding them, is a prerequisite to writing an accessibility plan. Web accessibility guidelines vary throughout the world. There may be other guidelines that apply to you, and in some cases, additional rules, regulations, or mandates specific to your industry.

    Not understanding which rules apply to you, not reading them in full, or not understanding what they mean can create huge problems down the road, including excessive rework once you learn you need to make changes.

    Build a team

    Before you can start remediating your website, you’ll need to assemble a team. The number of people will vary depending on the size of your organization and website. I previously worked for a very large company with a very large website, yet the accessibility team they assembled was small in comparison to the thousands of pages we were tasked to remediate. This team included a project manager, visual designers, user experience designers, front-end developers, content editors, a couple of requirements folks, and a few QA testers. Most of these people had been pulled from their full-time roles and instructed to quickly become familiar with WCAG 2.0. To help you create your own accessibility team, I will explain in detail some of the top responsibilities of the key players:

    • Project manager is responsible for coordinating the entire remediation process. They will help run planning sessions, keep everyone on schedule, and report the progress being made. Working closely with the requirements people, their goal is to keep every part of this new machine running smoothly.
    • Visual designers will mainly address issues of color usage and text alternatives. In its present form, WCAG 2.0 applies contrast minimums only to text; however, the much-anticipated WCAG 2.1 update (due to be released in mid-2018) contains a new success criterion for Non-text Contrast, which covers contrast minimums of all interactive elements and “graphics required to understand the content.” (The contrast-ratio formula designers will be working against appears just after this list.) Visual designers should also steer clear of design trends that ruin usability.
    • UX designers should be checking for consistent, logical navigation and reading order. They’ll need to test that pages are using heading tags appropriately (headings are for semantic structure, not for visual styling). They’ll be checking to see that page designs are structured to appear and operate in predictable ways.
    • Developers have the potential to make or break an accessible website because even the best designs will fail if implemented incorrectly. If your developers are unfamiliar with WAI-ARIA, accessible coding practices, or accessible JavaScript, then they have a few things to learn. Developers should think of themselves as designers because they play a very important role in designing an inclusive user experience. Luckily, Google offers a short, free Introduction to Web Accessibility course and, via Udacity, a free, advanced two-week accessibility course. Additionally, The A11Y Project is a one-stop shop loaded with free pattern libraries, checklists, and accessibility resources for front-end developers.
    • Editorial reviews the copy for verbosity. Avoid using phrases that will confuse people who aren’t native language speakers. Don’t “beat around the bush” (see what I did there?). Keep content simple, concise, and easy to understand. No writing degree? No worries. There are apps that can help you improve the clarity of your writing and that correct your grammar like a middle school English teacher. Score bonus points by making sure link text is understandable out of context. While this is a WCAG 2.0 Level AAA guideline, it’s also easily fixed and it greatly improves the user experience for individuals with varying learning and cognitive abilities.
    • Analysts work in tandem with editorial, design, UX, and QA. They coordinate the work being done by these groups and document the changes needed. As they work with these teams, they manage the action items and follow up on any outstanding tasks, questions, or requests. The analysts also deliver the requirements specifications to the developers. If the changes are numerous and complex, the developers may need the analysts to provide further clarification and to help them properly implement the changes as described in the specs.
    • QA will need to be trained to the same degree as the other accessibility specialists since they will be responsible for testing the changes that are being made and catching any issues that arise. They will need to learn how to navigate a website using only a keyboard and also by properly using a screen reader (ideally a variety of screen readers). I emphasized “properly” because while anyone can download NVDA or turn on VoiceOver, it takes another level of skill to understand the difference between “getting through a page” and “getting through a page with standard keyboard controls.” Having individuals with visual, auditory, or mobility impairments on the QA team can be a real advantage, as they are more familiar with assistive technology and can test in tandem with others. Additionally, there are a variety of automated accessibility testing tools you can use alongside manual testing. These tools typically catch only around 30% of common accessibility issues, so they do not replace ongoing human testing. But they can be extremely useful in helping QA learn when an update has negatively affected the accessibility of your website.
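
    For reference, WCAG 2.0 defines the contrast ratio that designers and QA will be checking as

    \[ \text{contrast ratio} = \frac{L_1 + 0.05}{L_2 + 0.05} \]

    where L1 is the relative luminance of the lighter color and L2 that of the darker. Level AA requires a ratio of at least 4.5:1 for normal text and 3:1 for large text.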

    Start your engines!

    Divide your task into pieces that make sense. You may wish to tackle all the global elements first, then work your way through the rest of the site, section by section. Keep in mind that every page must adhere to the accessibility standards you’re following for it to be deemed “accessible.” (This includes PDFs.)

    Use what you’ve learned so far by way of accessibility videos, articles, and guidelines to perform an audit of your current site. While some manual testing may seem difficult at first, you’ll be happy to learn that some manual testing is very simple. Regardless of the testing being performed, keep in mind that it should always be done thoroughly, with a variety of users in mind, including:

    • keyboard users;
    • blind users;
    • color-blind users;
    • low-vision users;
    • deaf and hard-of-hearing users;
    • users with learning disabilities and cognitive limitations;
    • mobility-impaired users;
    • users with speech disabilities;
    • and users with seizure disorders.

    When you are in the weeds, document the patterns

    As you get deep in the weeds of remediation, keep track of the patterns being used. Start a knowledge repository for elements and situations. Lock down the designs and colors, code each element to be accessible, and test these patterns across various platforms, browsers, screen readers, and devices. When you know the elements are bulletproof, save them in a pattern library that you can pull from later. Having a pattern library at your fingertips will improve consistency and compliance, and help you meet tight deadlines later on, especially when working in an agile environment. You’ll need to keep this online knowledge repository and pattern library up-to-date. It should be a living, breathing document.

    Cross the finish line … and keep going!

    Some people mistakenly believe accessibility is a set-it-and-forget-it solution. It isn’t. Accessibility is an ongoing effort to continually improve the user experience, the way any good UX practitioner does. This is why it’s crucial to get leadership on board. Once your site is fully accessible, you must begin working through a backlog of continuous improvements. If you aren’t vigilant about accessibility, people making even small site updates can unknowingly strip the site of the accessibility features you worked so hard to put in place. You’d be surprised how quickly it can happen, so educate everyone you work with about the importance of accessibility. When everyone working on your site understands and evangelizes accessibility, your chances of protecting the accessibility of the site are much higher.

    It’s about the experience, not the law

    In December of 2017, Winn-Dixie appealed the case brought by blind patron Juan Carlos Gil. Its argument is that a website does not constitute a place of public accommodation, and that therefore the case should have been dismissed. This case, and others like it, illustrates that the legality of web accessibility is still very much in flux. However, as web developers and designers, our motivation to build accessible websites should have nothing to do with the law and everything to do with the user experience.

    Good accessibility is good UX. We should seek to create the best user experience for all. And we shouldn’t settle for simply meeting accessibility standards but rather strive to create an experience that delights users of all abilities.

    Additional resources and articles

    If you are ready to learn more about web accessibility standards and become the accessibility evangelist on your team, a wealth of additional resources and articles is available online.


  18. We Write CSS Like We Did in the 90s, and Yes, It’s Silly

    As web developers, we marvel at technology. We enjoy the many tools that help with our work: multipurpose editors, frameworks, libraries, polyfills and shims, content management systems, preprocessors, build and deployment tools, development consoles, production monitors—the list goes on.

    Our delight in these tools is so strong that no one questions whether a small website actually requires any of them. Tool obesity is the new WYSIWYG—the web developers who can’t do without their frameworks and preprocessors are no better than our peers from the 1990s who couldn’t do without FrontPage or Dreamweaver. It is true that these tools have improved our lives as developers in many ways. At the same time, they have perhaps also prevented us from improving our basic skills.

    I want to talk about one of those skills: the craft of writing CSS. Not of using CSS preprocessors or postprocessors, but of writing CSS itself. Why? Because CSS is second in importance only to HTML in web development, and because no one needs processors to build a site or app.

    Most of all, I want to talk about this because when it comes to writing CSS, it often seems that we have learned nothing since the 1990s. We still write CSS the natural way, with no advances in sorting declarations or selectors and no improvements in writing DRY CSS.

    Instead, many developers argue fiercely about each of these topics. Others simply dig in their heels and refuse to change. And a third cohort protests even the discussion of these topics.

    I don’t care that developers do this. But I do care about our craft. And I care that we, as a profession, are ignoring simple ways to improve our work.

    Let’s talk about this more after the code break.

    Here’s unsorted, unoptimized CSS from Amazon in 2003.

    .serif {
      font-family: times, serif;
      font-size: small;
    }
    
    .sans {
      font-family: verdana, arial, helvetica, sans-serif;
      font-size: small;
    }
    
    .small {
      font-family: verdana, arial, helvetica, sans-serif;
      font-size: x-small;
    }
    
    .h1 {
      font-family: verdana, arial, helvetica, sans-serif;
      color: #CC6600;
      font-size: small;
    }
    
    .h3color {
      font-family: verdana, arial, helvetica, sans-serif;
      color: #CC6600;
      font-size: x-small;
    }
    
    .tiny {
      font-family: verdana, arial, helvetica, sans-serif;
      font-size: xx-small;
    }
    
    .listprice {
      font-family: arial, verdana, sans-serif;
      text-decoration: line-through;
      font-size: x-small;
    }
    
    .price {
      font-family: verdana, arial, helvetica, sans-serif;
      color: #990000;
      font-size: x-small;
    }
    
    .attention {
      background-color: #FFFFD5;
    }

    And here’s CSS from contemporary Amazon:

    .a-box {
      display: block;
      border-radius: 4px;
      border: 1px #ddd solid;
      background-color: #fff;
    }
    
    .a-box .a-box-inner {
      border-radius: 4px;
      position: relative;
      padding: 14px 18px;
    }
    
    .a-box-thumbnail {
      display: inline-block;
    }
    
    .a-box-thumbnail .a-box-inner {
      padding: 0 !important;
    }
    
    .a-box-thumbnail .a-box-inner img {
      border-radius: 4px;
    }
    
    .a-box-title {
      overflow: hidden;
    }
    
    .a-box-title .a-box-inner {
      overflow: hidden;
      padding: 12px 18px 11px;
      background: #f0f0f0;
    }

    Just as in 2003, the CSS is unsorted and unoptimized. Did we learn anything over the past 15 years? Is this really the best CSS we can write?

    Let’s look at three areas where I believe we can easily improve the way we do our work: declaration sorting, selector sorting, and declaration repetition.

    Declaration sorting

    The 90s web developer, if they wrote CSS, wrote CSS as it occurred to them. Without sense or order—with no direction whatsoever. The same was true of last decade’s developer. The same is true of today’s developer, whether novice or expert.

    .foo {
      font: arial, sans-serif;
      background: #abc;
      margin: 1em;
      text-align: center;
      letter-spacing: 1px;
      -x-yaddayadda: yes;
    }

    The only difference between now and then: today’s expert developer uses eight variables, because “that’s how you do it” (even with one-pagers) and because at some point in their life they may need them. In twenty-something years of web development we have somehow not managed to make our CSS consistent and easier to work on by establishing the (or even a) common sense standard to sort declarations.

    (If this sounds harsh, it’s because it’s true. Developers condemn selectors, shorthands, !important, and other useful aspects of CSS rather than concede that they don’t even know how to sort their declarations.)

    In reality, the issue is dead simple: Declarations should be sorted alphabetically. Period.
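
    Applied to the .foo example above, with the prefixed property sorted strictly by character (its leading hyphen places it first), the rule becomes:

    .foo {
      -x-yaddayadda: yes;
      background: #abc;
      font: arial, sans-serif;
      letter-spacing: 1px;
      margin: 1em;
      text-align: center;
    }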

    Why?

    For one, sorting makes collaborating easier.

    Untrained developers can do it. Non-English speakers (such as this author) can do it. I wouldn’t be surprised to learn that even houseplants can do it.

    For another, alphabetical sorting can be automated. What’s that? Yes, one can use or write little scripts (such as CSS Declaration Sorter) to sort declarations.

    Given the ease of sorting, and its benefits, the current state of affairs borders on the ridiculous, making it tempting to ignore our peers who don’t sort declarations, and to ban from our lives those who argue that it’s easier—or even logical—not to sort alphabetically but instead to sort based on 1) box dimensions, 2) colors, 3) grid- or flexbox-iness, 4) mood, 5) what they ate for breakfast, or some equally random basis.

    With this issue settled (if somewhat provocatively), on to our second problem from the 90s.

    Selector sorting

    The situation concerning selectors is quite similar. Almost since 1994, developers have written selectors and rules as they occurred to them. Perhaps they’ve moved them around (“Oh, that belongs with the nav”). Perhaps they’ve refactored their style sheets (“Oh, strange that site styles appear amidst notification styles”). But standardizing the order—no.

    Let’s take a step back and assume that order does matter, not just for aesthetics as one might think, but for collaboration. As an example, think of the letters below as selectors. Which list would be easiest to work with?

    c, b · a · a, b · c, d · d, c, a · e · a
    
    c · b · a, b · a · c, d · a, c, d · a · e
    
    a, b · a, c, d · a · b, c · c, d · e

    The fact that one selector (a) was a duplicate that only got discovered and merged in the last row perhaps gives away my preference. But then, if you wanted to add d, e to the list, wouldn’t the order of the third row make placing the new selector easier than placing it in either of the first two rows?

    This example gets at the two issues caused by not sorting selectors:

    • No one knows where to add new selectors, creating a black hole in the workflow.
    • There’s a higher chance of both selector repetition and duplication of rules with the same selectors.

    Both problems get compounded in larger projects and larger teams. Both problems have haunted us since the 90s. Both problems get fixed by standardizing—through coding guidelines—how selectors should be ordered.

    The answer in this case is not as trivial as sorting alphabetically (although we could play with the idea—the cognitive ease of alphabetical selector sorting may make it worth trying). But we can take a path similar to how the HTML spec roughly groups elements, so that we first define sections, and then grouping elements, text elements, etc. (That’s also the approach of at least one draft, the author’s.)
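
    As a rough sketch of what that grouping could look like in a style sheet (the selectors and declarations here are purely illustrative):

    /* Sections */
    body { margin: 0; }
    header, footer { padding: 1em; }
    nav { text-align: center; }
    h1, h2 { line-height: 1.2; }

    /* Grouping content */
    p, ul, ol { margin: 1em 0; }
    figure { margin: 1em 0; }

    /* Text-level semantics */
    a { color: #00c; }
    em, strong { font-weight: bold; }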

    The point is that ideal selector sorting doesn’t just occur naturally and automatically. We can benefit from putting more thought into this problem.

    Declaration repetition

    Our third hangover from the 90s is that there is and has always been an insane amount of repetition in our style sheets. According to one analysis of more than 200 websites, a median of 66% of all declarations are redundant, and the repetition rate goes as high as 92%—meaning that, in this study at least, the typical website uses each declaration at least three times and some up to ten times.

    As shown by a list of some sample sites I compiled, declaration repetition has indeed been bad from the start and has even increased slightly over the years.

    Yes, there are reasons for repetition: notably for different target media (we may repeat ourselves for screen, print, or different viewport sizes) and, occasionally, for the cascade. That is why a repetition rate of 10–20% seems to be acceptable. But the degree of repetition we observe right now is not acceptable—it’s an unoptimized mess that goes mostly unnoticed.

    What’s the solution here? One possibility is to use declarations just once. We’ve seen with a sample optimization of Yandex’s large-scale site that this can lead to slightly more unwieldy style sheets, but we also know that in many other cases it does make them smaller and more compact.
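
    A minimal sketch of the idea, using hypothetical selectors: instead of repeating declarations in every rule that needs them, each declaration appears once and selectors are grouped.

    /* Repetitive: */
    .alert { color: #c00; font-size: small; }
    .error { color: #c00; font-size: small; font-weight: bold; }

    /* Each declaration used just once: */
    .alert, .error { color: #c00; font-size: small; }
    .error { font-weight: bold; }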

    This approach of using declarations just once has at least three benefits:

    • It reduces repetition to a more acceptable amount.
    • It reduces the pseudo need for variables.
    • Excluding outliers like Yandex, it reduces file size and payload (10–20% according to my own experience—we looked at the effects years ago at Google).

    No matter what practice we as a field come up with—whether to use declarations just once or follow a different path—the current level of “natural repetition” we face on sample websites is too high. We shouldn’t need to remind ourselves not to repeat ourselves if we repeat code up to nine times, and it’s getting outright pathetic—again excuse the strong language—if then we’re also the ones to scream for constants and variables and other features only because we’ve never stopped to question this 90s-style coding.

    The unnatural, more modern way of writing CSS

    Targeting these three areas would help us move to a more modern way of writing style sheets, one that has a straightforward but powerful way to sort declarations, includes a plan for ordering selectors, and minimizes declaration repetition.

    In this article, we’ve outlined some options for us to adhere to this more modern way:

    • Sort declarations alphabetically.
    • Use an existing order system or standardize and follow a new selector order system.
    • Try to use declarations just once.
    • Get assistance through tools.

    And yet there’s still great potential to improve in all of these areas. The potential, then, is what we should close with. While I’ve emphasized our “no changes since the 90s” way of writing CSS, and stressed the need for robust practices, we need more proposals, studies, and conversations around what practices are most beneficial. Beneficial in terms of writing better, more consistent CSS, but also in terms of balancing our sense of craft (our mastery of our profession) with a high degree of efficiency (automating when it’s appropriate). Striving to achieve this balance will help ensure that developers twenty years from now won’t have to write rants about hangovers from the 2010s.

  19. Owning the Role of the Front-End Developer

    When I started working as a web developer in 2009, I spent most of my time crafting HTML/CSS layouts from design comps. My work was the final step of a linear process in which designers, clients, and other stakeholders made virtually all of the decisions.

    Whether I was working for an agency or as a freelancer, there was no room for a developer’s input on client work other than when we were called to answer specific technical questions. Most of the time I would be asked to confirm whether it was possible to achieve a simple feature, such as adding a content slider or adapting an image loaded from a CMS.

    In the ensuing years, front-end development became increasingly challenging, and developers’ skills evolved accordingly, which only led to more frustration. Many organizations, including the ones I worked for, followed a traditional waterfall approach that kept us in the dark until the project was ready to be coded. Everything would fall into our laps, often behind schedule, with no room for us to add our two cents. Even though we were often highly esteemed by our teammates, there still wasn’t a chance for us to contribute to projects at the beginning of the process. Every time we shared an idea or flagged a problem, it was already too late.

    Almost a decade later, we’ve come a long way as front-end developers. After years of putting in the hard work required to become better professionals and have a bigger impact on projects, many developers are now able to occupy a more fulfilling version of the role.

    But there’s still work to be done: Unfortunately, some front-end developers with amazing skills are still limited to basic PSD-to-HTML work. Others find themselves in a better position within their team, but are still pushing for a more prominent role where their ideas can be fostered.

    Although I’m proud to believe I’m part of the group that evolved with the role, I continue to fight for our seat at the table. I hope sharing my experience will help others fighting alongside me.

    My road to earning a seat at the table

    My role began to shift the day I watched an inspiring talk by Seth Godin, which helped me realize I had the power to start making changes to make my work more fulfilling. With his recommendation to demand responsibility whether you work for a boss or a client, Godin gave me the push I needed.

    I wasn’t expecting to make any big leaps—just enough to feel like I was headed in the right direction.

    Taking small steps within a small team

    My first chance to test the waters was ideal. I had recently partnered with a small design studio and we were a team of five. Since I’d always been open about my soft spot for great design, it wasn’t hard to sell them on the idea of having me begin to get a bit more involved with the design process and start giving technical feedback before comps were presented to clients.

    The results were surprisingly positive and improved everybody’s work. I started getting design hand-offs that I both approved of from a technical point of view and had a more personal connection with. For their part, the designers happily noticed that the websites we launched were more accurate representations of the comps they had handed off.

    My next step was to get involved with every single project from day one. I started to tag along to initial client meetings, even before any contracts had been signed. I started flagging things that could turn the development phase into a nightmare; at the same time I was able to throw around some ideas about new technologies I’d been experimenting with.

    After a few months, I started feeling that my skills were finally having an impact on my team’s projects. I was satisfied with my role within the team, but I knew it wouldn’t last forever. Eventually it was time for me to embark on a journey that would take me back to the classic role of the front-end developer, closer to the base of the waterfall.

    Moving to the big stage

    As my career started to take off, I found myself far away from that five-desk office where it had all started. I was now working with a much bigger team, and the challenges were quite different. At first I was amazed at how they were approaching the process: the whole team had a strong technical background, unlike any team I had ever worked with, which made collaboration very efficient. I had no complaints about the quality of the designs I was assigned to work with. In fact, during my first few months, I was constantly pushed out of my comfort zone, and my skills were challenged to the fullest.

    After I started to feel more comfortable with my responsibilities, though, I soon found my next challenge: to help build a stronger connection between the design and development teams. Though we regularly collaborated to produce high-quality work, these teams didn’t always speak the same language. Luckily, the company was already making an effort to improve the conversation between creatives and developers, so I had all the support I needed.

    As a development team, we had been shifting to modern JavaScript libraries that led us to work on our applications using a strictly component-based approach. But though we had slowly changed our mindset, we hadn’t changed the ways we collaborated with our creative colleagues. We had not properly shared our new vision; making that connection would become my new personal goal.

    I was fascinated by Brad Frost’s “death to the waterfall” concept: the idea that UX, visual design, and development teams should work in parallel, allowing for a higher level of iteration during the project.

    By pushing to progressively move toward a collaborative workflow, everyone on my team began to share more responsibilities and exchange more feedback throughout every project. Developers started to get involved in projects during the design phase, flagging any technical issues we could anticipate. Designers made sure they provided input and guidance after the projects started coming to life during development. Once we got the ball rolling, we quickly began seeing positive results and producing rewarding (and award-winning) work.

    Even though it might sound like it was a smooth transition, it required a great amount of hard work and commitment from everybody on the team. Not only did we all want to produce better work but we also needed to be willing to take a big leap away from our comfort zones and our old processes.

    How you can push for a seat at the table

    In my experience, making real progress required a combination of sharpening my skills as a front-end developer and pushing the team to improve our processes.

    What follows are more details about what worked for me—and could also work for you.

    Making changes as a developer

    Even though the real change in your role may depend on your organization, sometimes your individual actions can help jump-start the shift:

    • Speak up. In multidisciplinary teams, developers are known as highly analytical, critical, and logical, but not always the most communicative of the pack. I’ve seen many who quietly complain and claim to have better ideas on how things should be handled, but bottle up those thoughts and move on to a different job. After I started voicing my concerns, proposing new ideas, and seeing small changes within my team, I experienced an unexpected boost in my motivation and noticed others begin to see my role differently.
    • Always be aware of what the rest of the team is up to. One of the most common mistakes we tend to make is to focus only on our craft. To connect with our team and improve in our role, we need to understand our organization’s goals, our teammates’ skill sets, our customers, and basically every other aspect of our industry that we used to think wasn’t worth a developer’s time. Once I started having a better understanding of the design process, communication with my team started to improve. The same applied to designers who started learning more about the processes we use as front-end developers.
    • Keep core skills sharp. Today our responsibilities are broader and we’re constantly tasked with leading our teams into undiscovered technologies. As a front-end developer, it’s not uncommon to be required to research technologies like WebGL or VR, and introduce them to the rest of the team. We must stay current with the latest practices in our technical areas of focus. Our credibility is at stake every time our input is needed, so we must always strive to be the best developers in the business.

    Rethinking practices within the company

    In order to make the most of your role as a developer, you’ll have to persuade your organization to make key changes. This might be hard to achieve, since it tends to require taking all members of your team out of their comfort zones.

    For me, what worked was long talks with my colleagues, including designers, management, and fellow developers. It’s hard for a manager to turn you down when you propose an idea to improve the quality of your work and only ask for small changes. Once the rest of the team is on board, you have to work hard and start implementing these changes to keep the ball rolling:

    • Involve developers in projects from the beginning. Many companies have high standards when it comes to hiring developers but don’t take full advantage of their talent. We tend to be logical thinkers, so it’s usually a good idea to involve developers in many aspects of the projects we work on. I often had to take the first step to be invited to project kickoffs. But once I started making an effort to provide valuable input, my team started automatically involving me and other developers during the creative phase of new projects.
    • Schedule team reviews. Problems frequently arise when teams present to clients without having looped in everyone working on the project. Once the client signs off on something, it can be risky to introduce new ideas, even if they add value. Developers, designers, and other key players must come together for team reviews before handing off any work. As a developer, sometimes you might need to raise your hand and invest some of your time to help your teammates review their work before they present it.
    • Get people to work together. Whenever possible, get people in the same room. We tend to rely on technology and push to communicate only by chat and email, but there is real value in face time. It’s always a good idea to have different teammates sit together, or at least in close enough proximity for regular in-person conversation, so they can share feedback more easily during projects. If your team works remotely, you have to look for alternatives to achieve the same effect. Occasional video chats and screen sharing can help teams share feedback and interact in real time.
    • Make time for education. Of all the teams I’ve worked on, those that foster a knowledge-sharing culture tend to work most efficiently. Simple and casual presentations among colleagues from different disciplines can be vital to creating a seamless variety of skills across the team. So it’s important to encourage members of the team to teach and learn from each other.

      When we made the decision to use only a component-based architecture, we prepared a simple presentation for the design team that gave them an overview of how we all would benefit from the change to our process. Shortly after, the team began delivering design comps that were aligned with our new approach.

    It’s fair to say that the modern developer can’t simply hide behind a keyboard and expect the rest of the team to handle all of the important decisions that define our workflow. Our role requires us to go beyond code, share our ideas, and fight hard to improve the processes we’re involved in.

  20. Discovery on a Budget: Part II

    Welcome to the second installment of the “Discovery on a Budget” series, in which we explore how to conduct effective discovery research when there is no existing data to comb through, no stakeholders to interview, and no slush fund to draw upon. In part 1 of this series, we discussed how it is helpful to articulate what you know (and what you assume) in the form of a problem hypothesis. We also covered strategies for conducting one of the most affordable and effective research methods: user interviews. In part 2 we will discuss when it’s beneficial to introduce a second, competing problem hypothesis to test against the first. We will also discuss the benefits of launching a “fake-door” and how to conduct an A/B test when you have little to no traffic.

    A quick recap

    In part 1 I conducted the first round of discovery research for my budget-conscious (and fictitious!) startup, Candor Network. The original goal for Candor Network was to provide a non-addictive social media platform that users would pay for directly. I articulated that goal in the form of a problem hypothesis:

    Because their business model relies on advertising, social media tools like Facebook are deliberately designed to “hook” users and make them addicted to the service. Users are unhappy with this and would rather have a healthier relationship with social media tools. They would be willing to pay for a social media service that was designed with mental health in mind.

    Also in part 1, I took extra care to document the assumptions that went into creating this hypothesis. They were:

    • Users feel that social media sites like Facebook are addictive.
    • Users don’t like to be addicted to social media.
    • Users would be willing to pay for a non-addictive Facebook replacement.

    For the first round of research, I chose to conduct user interviews because it is a research method that is adaptable, effective, and—above all—affordable. I recruited participants from Facebook, taking care to document the bias of using a convenience sampling method. I carefully crafted my interview protocol, and used a number of strategies to keep my participants talking. Now it is time to review the data and analyze the results.

    Analyze the data

    When we conduct discovery research, we look for data that can help us either affirm or reject the assumptions we made in our problem hypothesis. Regardless of what research method you choose, it’s critical that you set aside the time to objectively review and analyze the results.

    In practice, analyzing interview data involves creating transcriptions of the interviews and then reading them many, many times. Each time you read through the transcripts, you highlight and label sentences or sections that seem relevant or important to your research question. You can use products like NVivo, HyperRESEARCH, or any other qualitative analysis tool to help facilitate this process. Or, if you are on a pretty strict budget, you can simply use Google Sheets to keep track of relevant sections in one column and labels in another.

    Screenshot: my interview analysis in Google Sheets, with quotes about Facebook usage in one column and labels in another

    For my project, I specifically looked for data that would show whether my participants felt Facebook was addicting and whether that was a bad thing, and if they’d be willing to pay for an alternative. Here’s how that analysis played out:

    Assumption 1: Users feel that social media sites like Facebook are addictive

    Facebook has a weird, hypnotizing effect on my brain. I keep scrolling and scrolling and then I like wake up and think, ‘where have I been? why am I spending my time on this?’
    interview participant

    Overwhelmingly, my data affirms this assumption. All of my participants (eleven out of eleven) mentioned Facebook being addictive in some way.

    Assumption 2: Users don’t like to be addicted to social media

    I know a lot of people who spend a lot of time on Facebook, but I think I manage it pretty well.
    interview participant

    This assumption turned out to be a little trickier to affirm or reject. While all of my participants described Facebook as addictive, many of them (eight out of eleven) expressed that “it wasn’t so bad” or that they felt like they were less addicted than the average Facebook user.

    Assumption 3: Users would be willing to pay for a non-addictive Facebook replacement

    No, I wouldn’t pay for that. I mean, why would I pay for something I don’t think I should use so much anyway?
    interview participant

    Unfortunately for my project, I can’t readily affirm this assumption. Four participants told me they would flat-out never pay for a social media service, four participants said they would be interested in trying a paid-for “non-addictive Facebook,” and three participants said they would only try it if it became really popular and everyone else was using it.

    One unexpected result: “It’s super creepy”

    I don’t like that they are really targeting me with those ads. It’s super creepy.
    interview participant

    In reviewing the interview transcripts, I came across one unexpected theme. More than 80% of the interviewees (nine out of eleven) said they found Facebook “creepy” because of the targeted advertising and the collection of personal data. Also, most of those participants (seven out of nine) went on to say that they would pay for a “non-creepy Facebook.” This is particularly remarkable because I never asked the participants how they felt about targeted advertising or the use of personal data. It always came up in the conversation organically.

    Whenever we start a new project, our initial ideas revolve around our own personal experiences and discomforts. I started Candor Network because I personally feel that social media is designed to be addictive, and that this is a major flaw with many of the most popular services. However, while I can affirm my first assumption, I had unclear results on the second and have to consider rejecting the third. Also, I encountered a new user experience that I hadn’t previously thought of or accounted for: that the way social media tools collect and use personal data for advertising can be disconcerting and “creepy.” As is so often the case, the data analysis showed that there are a variety of other experiences, expectations, and needs that must be accounted for if the project is to be successful.

    Refining the hypothesis

    Diagram: the discovery research cycle—Create Hypothesis, Test, Analyze, and repeat

    Each time we go through the discovery research process, we start with a hypothesis, test it by gathering data, analyze the data, and arrive at a new understanding of the problem. In theory, it may be possible to take one trip through the cycle and either completely affirm or completely reject our hypothesis and assumptions. However, as with Candor Network, it is more often the case that we get a mixture of results: some assumptions can be affirmed while others are rejected, and some completely new insights come to light.

    One option is to continue working with a single hypothesis, and simply refine it to account for the results of each round of research. This is especially helpful when the research mostly affirms your assumptions, but there is additional context and nuance you need to account for. However, if you find that your research results are pulling you in a new direction entirely, it can be useful to create a second, competing hypothesis.

    In my example, the interview research brought to light a new concern about social media I previously hadn’t considered: the “creepy” collection of personal data. I am left wondering, Would potential customers be more attracted to the idea of a social media platform built to prevent addiction, or one built for data privacy? To answer this question, I articulated a new, competing hypothesis:

    Because their business model relies on advertising, social media tools like Facebook are designed to gather lots of behavior data. They then utilize this behavior data to create super-targeted ads. Users are unhappy with this, and would rather use a social media tool that does not rely on the commodification of their data to make money. They would be willing to pay for a social media service that did not track their use and behavior.

    I now have two hypotheses to test against one another: one focused on social media addiction, the other focused on behavior tracking and data collection.

    At this point, it would be perfectly acceptable to conduct another round of interviews. We would need to change our interview protocol and find more participants, but it would still be an effective (and cheap) method to use. However, for this article I wanted to introduce a new method for you to consider, and to illustrate that a technique like A/B testing is not just for the “big guys” on the web. So I chose to conduct an A/B test utilizing two “fake-doors.”

    A low-cost comparative test: fake-door A/B testing

    A “fake-door” test is simply a marketing page, ad, button, or other asset that promotes a product that has yet to be made. Fake-door testing (or “ghetto testing”) is Zynga’s go-to method for testing ideas. They create a five-word summary of any new game they are considering, make a few ads, and put it up on various high-trafficked websites. Data is then collected to track how often users click on each of the fake-door “probes,” and only those games that attract a certain number of “conversions” on the fake-door are built.

    One of the many benefits of conducting a fake-door test is that it allows you to measure interest in a product before you begin to develop it. This makes it a great method for low-budget projects, because it can help you decide whether a project is worth investing in before you spend anything.

    However, for my project, I wasn’t just interested in measuring potential customer interest in a single product idea. I wanted to continue evaluating my original hypothesis on non-addictive social media as well as start investigating the second hypothesis on a social media platform that doesn’t record behavior data. Specifically, I wanted to see which theoretical social media platform is more attractive. So I created two fake-door landing pages—one for each hypothesis—and used Google Optimize to conduct an A/B test.

    Screenshots: versions A (right) and B (left) of the Candor Network landing page, with different copy

    Version A of the Candor Network landing page advertises the product I originally envisioned and described in my first problem hypothesis. It advertises a social network “built with mental health in mind.” Version B reflects the second problem hypothesis and my interview participants’ concerns around the “creepy” commodification of user data. It advertises a social network that “doesn’t track, use, solicit, or sell your data.” In all other respects, the landing pages are identical, and both will receive 50% of the traffic.

    Running an A/B test with little to no site traffic

    One of the major caveats when running an A/B test is that you need to have a certain number of people participate to achieve any kind of statistically significant result. This wouldn’t be a problem if we worked at a large company with an existing customer base, as it would be relatively straightforward to find ways to direct some of the existing traffic to the test. If you’re working on a new or low-trafficked site, however, conducting an A/B test can be tricky. Here are a few strategies I recommend:

    Tip 1: Make sure your “A” is very different than your “B”

    Figuring out how much traffic you need to achieve statistical significance in a quantitative study is an inexact science. If we were conducting a high-stakes experiment at a more established business, we would conduct multiple rounds of pre-tests to calculate the effect size of the experiment. Then we would use that effect size, expressed with a measure like Cohen’s d, in a power analysis to estimate the number of people we need to participate in the actual test. This approach is rigorous and helps avoid sample pollution or sampling bias, but it requires a lot of resources upfront (like time, money, and lots of potential participants) that we may not have access to.

    In general, however, you can use this rule of thumb: the bigger the difference between the variations, the fewer participants you need to see a significant result.
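
    To put rough numbers behind that rule of thumb: Cohen’s d expresses the difference between two variations in standard-deviation units, and one common back-of-the-envelope approximation (assuming 80% power at a 5% significance level) estimates the sample needed per variation as

    \[ d = \frac{\bar{x}_A - \bar{x}_B}{s_{\text{pooled}}}, \qquad n \approx \frac{16}{d^2} \]

    So a large effect (d = 0.8) needs only about 25 participants per variation, while a small effect (d = 0.2) needs about 400.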

    Tip 2: Run the test for a longer amount of time

    When I worked at Weather Underground, we would always start an A/B test on a Sunday and end it a full week later on the following Sunday. That way we could be sure we captured both weekday and weekend users. Because Weather Underground is a high-trafficked site, this always resulted in having more than enough participants to see a statistically significant result.

    If you’re working on a new or low-trafficked site, however, you’ll need to run your test for longer than a week to achieve the number of test participants required. I recommend budgeting enough time so that your study can run a full six weeks. Six weeks will provide enough time to not only capture results from all your usual website traffic, but also any newcomers you can recruit through other means.

    Tip 3: Beg and borrow traffic from someone else

    I’ve got a pretty low number of followers on social media, so if I tweet or post about Candor Network, only a few people will see it. However, I know a few people and organizations that have a huge number of followers. For example, @alistapart has roughly 148k followers on Twitter, and A List Apart’s publisher, Jeffrey Zeldman (@zeldman), has 358k followers. I have asked them both to share the link for Candor Network with their followers.

    Screenshot: a helpful tweet from @zeldman promoting Meg’s experiment

    Of course, this method of advertising doesn’t cost any money, but it does cost social capital. I’m sure A List Apart and Mr. Zeldman wouldn’t appreciate it if I asked them to tweet things on my behalf on a regular basis. I recommend you use this method sparingly.

    Tip 4: Beware! There is always a risk of no results.

    Before you create an A/B test for your new product idea, there is one major risk you need to assess: there is a chance that your experiment won’t produce any statistically significant results at all. Even if you use all of the tips I’ve outlined above and manage to get a large number of participants in your test, there is a chance that you won’t be able to “declare a winner.” This isn’t just a risk for companies that have low traffic, it is an inherent risk when running any kind of quantitative study. Sometimes there simply isn’t a clear effect on participant behavior.

    Tune in next time for the last installment

    In the third and final installment of the “Discovery on a Budget” series, I’ll describe how I designed the incredibly short survey on the Candor Network landing page and discuss the results of my fake-door A/B test. I will also make another revision to my problem hypothesis and will discuss how to know when you’re ready to leave the discovery process (at least for now) and embark on the next phase of design: ideating over possible solutions.