You have seen it
When I started in the design field, I had trouble finding a job that was a good match for both my entry-level skill set and my hunger to do exciting design work. At that point, getting steady work took precedence, and I took a job with a company that brought with it some ethical baggage. Was I sure I wanted to do this? What about what I had heard in the news about this company?
At the end of the day, I simply needed work, and while it wasn’t exciting and the company raised eyebrows with some, I figured I would get from it what I needed: experience and a paycheck. I compromised.
Later in my career, I was freelancing at a senior level and had the immense privilege of choosing between different clients. I looked at each and asked, “if I do my job, what happens?” This led me to a company whose work significantly reduced carbon emissions, something extremely important to me. I didn’t have to compromise in taking the job.
Later, while working at a company that had an altruistic mission, I was faced with some product experience decisions that could increase certain risks in the sales process for the company’s most profitable product. I negotiated working scenarios where we had to strike a balance between a sales-focused experience and a customer-focused experience. In other words, do we speed someone through a transaction as quickly as possible, or do we hold their hand through the process, explaining and re-explaining every step of the way, even at the risk of them abandoning the transaction?
I remember arguing that a product might be creating risky situations and feeling as though I was the only one. I was going up against the rest of my product team, our bosses, and company leadership. I had to make peace in the moment: how was I going to move forward in a way I could live with while still delivering something the company could benefit from?
There were times when I compromised, and times when I didn’t.
As a society, we’re still figuring out the potential reach of digital technology, and these lessons are emerging in each meeting, critique, and office debate.
As a designer working on that technology, your ethical standards will be challenged at some point. It may happen before you even start the job, as you consider whether to do work at a company. It may happen when debating the strategy for a new feature, determining whether you’re executing on what is important. It may happen when working in the details of a design, determining if your work will influence people’s behavior in a way that is helpful or harmful. It will happen.
So what do we do?
First off, it makes sense to know where you stand. Know what is important to you. Ask yourself the question, “if I do my job, what happens?” and make sure the outcome is something you can be proud of. If that doesn’t work, make sure it’s something you can live with. And if you can’t do that, look for something else.
But what about when we’re doing work, discussing product strategy with our team, or evaluating screen-by-screen experiences?
This is where we could use some guidance.
Looking to others
Other professional fields have dealt with this. Not surprisingly, the fields where there is an immediate risk of harm are where ethical standards have become the most apparent. The Hippocratic Oath is an example of a set of values for doctors. The oath originates in ancient times; however, the modern version was developed in 1964 by Louis Lasagna, Academic Dean of the School of Medicine at Tufts University:
I swear to fulfill, to the best of my ability and judgment, this covenant:
“The Hippocratic Oath Today” NOVA, PBS
I will respect the hard-won scientific gains of those physicians in whose steps I walk, and gladly share such knowledge as is mine with those who are to follow.
I will apply, for the benefit of the sick, all measures [that] are required, avoiding those twin traps of overtreatment and therapeutic nihilism.
I will remember that there is art to medicine as well as science, and that warmth, sympathy, and understanding may outweigh the surgeon’s knife or the chemist’s drug.
I will not be ashamed to say “I know not,” nor will I fail to call in my colleagues when the skills of another are needed for a patient’s recovery.
I will respect the privacy of my patients, for their problems are not disclosed to me that the world may know. Most especially must I tread with care in matters of life and death. If it is given me to save a life, all thanks. But it may also be within my power to take a life; this awesome responsibility must be faced with great humbleness and awareness of my own frailty. Above all, I must not play at God.
I will remember that I do not treat a fever chart, a cancerous growth, but a sick human being, whose illness may affect the person’s family and economic stability. My responsibility includes these related problems, if I am to care adequately for the sick.
I will prevent disease whenever I can, for prevention is preferable to cure.
I will remember that I remain a member of society, with special obligations to all my fellow human beings, those sound of mind and body as well as the infirm.
If I do not violate this oath, may I enjoy life and art, respected while I live and remembered with affection thereafter. May I always act so as to preserve the finest traditions of my calling and may I long experience the joy of healing those who seek my help.
The oath is remarkable in its brevity, breadth, and depth. Education, treatment, bedside manner, humility, privacy: it’s all in there. One of the oath’s strengths is its emphasis on the relationship between the doctor and the patient. However, there is no mention of how to run the business of medicine, where so many challenges lie in today’s society and economy. There is no mention of pharmaceutical company influence, pharmaceutical pricing, insurance markets, or the impact of for-profit corporations on patient care.
As designers, it is also difficult to map ourselves onto the relationship a doctor has with a patient: who are we in this construct? The doctor? And who is the patient? A client company? The customer?
But how bad can it get? Amazingly bad.
Some answers may be found by branching out from medicine itself to medical research, where another advisory document acts as a guide for ethical standards. First, a little history:
Starting in 1932, a group of African-American men were enrolled in an experiment conducted by the US Public Health Service and Tuskegee University, a historically black college in Alabama. The goal of the study was to research the effects of untreated syphilis until the time of a person’s death; however, this goal was not revealed to the men participating in the study. Of the 600 men, 399 had syphilis at the time, while 201 did not. The men were given free medical care and other benefits for participating. They were told they were being treated for “bad blood,” a colloquial term used to refer to all manner of ailments, though they did not receive the correct care for syphilis.
The men were told that the study would last only six months, but they were ultimately observed for 40 years, living with the disease without their knowledge, and dealing with the consequences. The experiment was designed in a way that it could not be considered complete until all participants had died and been autopsied.
Even when confronted with accusations of unethical practices throughout the 1960s and 70s, the Centers for Disease Control, which had taken over the study, insisted on continuing the work for the sake of research. The CDC’s efforts were supported by prestigious organizations such as the National Medical Association (representing African-American physicians) and the American Medical Association.
Ultimately, an investigator for the US Public Health Service in San Francisco, Peter Buxton, took the story to the press. Ted Kennedy called for hearings, and in 1972, forty years after it had begun, the study was finally terminated. Only after a lawsuit from the NAACP was a compensatory settlement issued to the remaining survivors and their families.
It is considered the paradigm of unethical experimentation in American history.
An ethical response
In July of 1974, in the wake of the scandal, the National Research Act was signed into law, creating the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The commission was formed to analyze research practices involving human test subjects, assess the risks of these practices, provide professional ethical guidance, and deliver a definition of informed consent for each research setting, especially for vulnerable populations. Their findings and guidance were provided in the Belmont Report, a 10-page guide to conducting research on human beings.
OK, this is where it gets interesting for designers again.
The report starts by defining the boundary between medical practice and medical research. Practice is work done solely to improve an individual’s wellbeing. Just because someone deviates from conventional practices doesn’t mean they’re doing research, even if they call that work “experimental.” The definition of research quoted in the report gets interesting…
…the term “research” designates an activity designed to test an hypothesis, permit conclusions to be drawn, and thereby to develop or contribute to generalizable knowledge (expressed, for example, in theories, principles, and statements of relationships). Research is usually described in a formal protocol that sets forth an objective and a set of procedures designed to reach that objective.
The Belmont Report, Boundaries Between Practice and Research
Most of our work as designers of digital products and experiences bears a striking resemblance to research in this case. We are doing work to hypothesize, test, and learn.
To guide ethical research, the Report advises three core principles:
Principle 1: Respect for persons
Respect for persons comes in two parts:
- That people are autonomous, capable of determining personal goals and acting with self-determination
- That people who cannot act autonomously deserve protection
In our field as product designers, respecting individual autonomy is of paramount importance. Yet that autonomy can often be overlooked or de-prioritized in favor of the collective interests of a business as expressed through its own internal metrics for success.
Principle 2: Beneficence
Beneficence is defined as an act of charity, mercy, and kindness as an extension of a professional practice (I had to look it up, too). As such, the report calls for all research efforts to do no harm while maximizing possible benefits. Ultimately, this strengthens the commitment to the individual involved, obligating any researcher to knowingly assess the risks posed to a test participant.
Principle 3: Justice
The Report poses the question clearly: “Who ought to receive the benefits of research and bear its burdens?” Historically, the burden of serving as medical research subjects fell to those who simply couldn’t say “no,” whether they were hospitalized, imprisoned, or partially or fully incapacitated. This cannot stand. Screening test subjects requires constant scrutiny to mitigate biases and ensure that vulnerable segments of the population are not being targeted.
Advised application of these principles
The report also advises applying these ethical principles to research as follows (paraphrased here):
- Only proceed with informed consent. This means that participants have access to and comprehend all information about the experiment, then volunteer to participate.
- Take measures to assure that the research is justifiable. Assess the probability and magnitude of risks, to the best of your ability, against the possible benefits of the research outcome.
- The selection of research subjects should be consistently evaluated for fairness, mitigating bias and avoiding the exploitation of vulnerable populations.
Taking it on
I found the Belmont Report fascinating. I wanted to distill it into principles that can be woven into parts of the design process. Having experienced ethical dilemmas concerning the impact of my job, the role of a product, or the unintended consequences of a feature at various points in my career, I wanted to explore how an ethical framework could be applied through the design process.
I also wanted to take a moment to consider that the label of ‘research’ applies to more than one facet of the design process, and the application of these ethical principles would vary accordingly.
In interpreting the Belmont Report’s principles for use in a product design context, I settled on these three:
- Respect people as individuals
- Provide more benefit than harm
- Examine any affected population
And in practice
- Utilize informed consent
- Evaluate the benefits of the research
- Select participants fairly
These principles sound great in a blog post, but do we need to take this into account for product design?
So what about us?
As designers of digital products, we’re loath to think that anything we would do would approach the harm of the Tuskegee experiment. But we should be wary of letting hubris set in.
It is well-documented that experimentation with Facebook’s feed may have influenced the emotional state of approximately 689,000 users without their knowledge or consent, and without a full understanding of the potential impact.
Metrics used to define the success of an app or site tend to lean towards engagement behaviors, such as daily active use and time on site. Engineers and designers have found that getting the user addicted to the experience of the product is a surefire way to boost those internal metrics. Even the inventor of the infinite scroll, Aza Raskin, admits he didn’t consider the far-reaching consequences of that feature.
But, he said, many designers were driven to create addictive app features by the business models of the big companies that employed them.
Aza Raskin, interviewed by The BBC
“In order to get the next round of funding, in order to get your stock price up, the amount of time that people spend on your app has to go up,” he said.
“So, when you put that much pressure on that one number, you’re going to start trying to invent new ways of getting people to stay hooked.”
But here’s the hard truth: there will be no peace. There will be no perfect state of balance between ethics, design, and commercialization. This is a challenge of progress, not perfection. So how do we make progress towards products that engage but do not substantially risk harm?
Finding a way forward
I have been working with user experience design in an Agile environment for nearly 15 years. In that time I have seen an increasing commitment to product design in a manner structured around the scientific method, especially developing a hypothesis for testing.
The principles of ethical research stated in the Belmont Report can be carried into product design work in two key areas: design discovery research and the product design process.
Do design research
There is a relatively clear parallel between the process of research described in the Belmont Report and the practices of design research in the discovery phase of a given project.
Design discovery research is predicated on reaching the right audiences, honoring the complexities of their life experience, and proposing beneficial solutions.
The ethical crisis facing product teams emerges when we consider whether this research is happening at all. Many teams move forward without discovery research, instead using the experimentation and validation of the product as their research framework, whether it’s called that or not.
Having no familiarity with the human beings on the receiving end of a product, and viewing them purely as consumers whose only value is their use of the product, is a fundamental departure from both the Belmont principles and basic user-centered design practice.
This absence of understanding has become so common as to seem normal, not only in the big companies highlighted above, but in the countless smaller companies fielding product teams that are under-served, under-resourced, and under-powered. On these teams, the research process is overlooked or undermined on a regular basis.
I would also argue that viewing your customer or audience in purely quantifiable terms further diminishes their humanity, narrowing and biasing any perspective on them as a person.
Putting out ethical fires
When taking a course of action to improve the ethical outlook of your team, the first step is recognizing ethical trade-offs and, most importantly, addressing them while they are in progress. Lu Han, a product designer at Spotify, makes an excellent point:
“One way to recognize when a trade-off is being made is to pay attention to the language being used.”
Designing for Tomorrow – A Discussion on Ethical Design, Lu Han
She goes on in her article to highlight phrases like “just put it in the T&C” that are warning signs of potential ethical risks. The sad part? The phrases she identifies are VERY common.
Opportunities for built-in ethics
The time has come for us to realize that ethics is simply not too much to ask.
Consider this: in response to the Tuskegee scandal, the Belmont Report did not advise spot-treating ethical violations as they happen, but instead recommended firmly rooted principles for creating ethical standards.
To ingrain ethical standards into the practice of design, there are two larger points of integration: design principles and Agile development.
Design principles
In the technology sphere, design principles are fundamental ideas used by design teams to inform a standard of practice. These typically revolve around visual characteristics…
Made-up principles for a design team:
- Unified: provides a cohesive experience that clearly communicates the brand to the user
- Simple: never distracts the user with anything unnecessary
- Useful: provides necessary functionality
- Delightful: sparks joy for users, turning ordinary tasks into a delight
I just made these up… but they sound so familiar somehow, right?
The idea is that the team, as it works, critiques its output against these principles and determines whether the work is on par with its standards. But you can see how superficial these principles can be, focusing on lightweight form and function and avoiding deeper product considerations.
This presents a key opportunity for inserting ethical standards as design principles, and integrating them into the design team’s day to day considerations for all their work.
That would be nice. But design principles have an Achilles heel. Two of them, actually.
First, the principles are aimed at the designers. But the designers aren’t the only ones making the decisions that can have a negative impact on the product and on the customer. Product managers, software engineers, QA engineers, salespeople, customer service representatives, the General Counsel, the CFO, the CMO, the CEO… lots of people are making decisions that have an impact on a given team’s product and the people who use it. It’s just unfair and unrealistic for designers to be the only guardians of good in this equation.
Second, when the design team is crunched for time, under-resourced, and on deadline, sometimes principles get overlooked entirely. I don’t like to say it will happen, but it can happen.
Solving these problems means that ethical commitments to design have to happen on multiple levels, ideally with commitment from leadership.
Design critique towards ethical principles
As work is in progress, design feedback is a critical refinement tool. Critique should already avoid reactionary and directive feedback, favoring a more productive structure:
If your objective is to achieve _____, then doing _____ does/doesn’t meet that objective because ______.
… and explore introducing an additional step in the process to highlight ethical risks.
If your objective is to achieve _____, then doing _____ could create an ethical risk when ______.
The Belmont principles of respecting individuals, beneficence, and justice would provide fundamental guidance in these scenarios.
- …this doesn’t take into account the customer’s intentions…
- …this has a consequence of XYZ, whether intended or unintended…
- …this will create a disproportionately problematic situation for XYZ vulnerable population…
… and so on.
Frame the critique to consider both the quality of the work and the outcome of its execution, whether that outcome is intended or not.
And this scrutiny extends to all processes and materials within the design process.
Additional ethical enhancements to Agile methods
Beyond addressing ethical issues as they come up, and beyond carrying principles as a design team, there remain opportunities to apply ethical principles to the product design process by interfacing with the tools and ceremonies employed by other members of a typical product team working in an Agile environment.
Setting ethical objectives
The objectives for a project can take different forms depending on how the team operates, including team-level OKRs, problem statements, or something as narrow as a hypothesis for an individual project. Have these objectives been considered against ethical standards?
Ethical prototyping
As a design hypothesis takes shape and designs are developed and refined through critique, the work can enter a prototyping cycle for evaluation, testing, and rapid validation. The prototype itself can leverage a design library that has been refined to be accessible to broader populations, while the testing script and participant selection can be formed under ethical research standards. Additionally, as a team, when the prototype experience comes together, ask yourselves: could this have unintended but serious consequences? At a minimum, write them down and discuss the trade-offs. Even better, consider alternatives that mitigate that impact.
Ethical sprint grooming
In working with my development teams, I treat sprint grooming (previewing and evaluating upcoming user stories) as a design critique with a technical audience. I have used the opportunity to bring them into the project: sharing research findings, gaining additional buy-in, and presenting prototypes of the design work that will be produced.
In many cases, there are trade-offs at this point, cutting features in favor of speed to market. With a new focus on maintaining ethical standards, this dialogue opens up an opportunity to examine these trade-offs from a new angle, evaluating their impact and unintended consequences.
Depending on the trade-off, there might be a showdown at this point, and that is an opportunity to clarify and refine. Don’t fear the conflict; be open to the opportunity.
Ethical acceptance criteria
Acceptance criteria are a critical area of collaboration between development and design, yet they are often treated as nothing more than strict technical requirements. Recently, on my team, we instituted visual design acceptance criteria.
But here’s an idea that we can explore: augmenting the product design process by introducing the Belmont principles as a rubric for quality could protect against ethical risks.
I can tell you, I don’t know what this would look like. But it’s worth a shot. Try starting with some ethical Gherkin language… GIVEN a user knows they are trying this feature…
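As a rough sketch, and only that, here is what ethical acceptance criteria might look like in Gherkin for an imagined data-sharing feature. The feature, scenarios, and wording are all invented for illustration; your team’s criteria would reflect its own product and its own risks.

```gherkin
# Hypothetical sketch: ethical acceptance criteria for an imagined
# data-sharing feature, sitting alongside the usual functional scenarios.
Feature: Share activity data with partners

  Scenario: Informed consent before sharing
    Given a user has not yet opted in to data sharing
    When the user reaches the sharing prompt
    Then the prompt explains what data is shared, with whom, and why
    And the user can decline without losing access to core features

  Scenario: Consent can be withdrawn
    Given a user previously opted in to data sharing
    When the user opens their privacy settings
    Then they can withdraw consent in a single step
    And the change takes effect without penalty to the user
```

Criteria like these give engineering and QA something concrete to verify, which pulls the ethical conversation out of the design review and into the definition of done.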
Ever forward
I do not think this will improve overnight. Or even in a week. Or a month. And I don’t think it will even become a movement. It’s only a matter of reaching forward, reaching higher, one project at a time, one company at a time, one designer at a time, one team at a time. I would challenge you and your team to start small, but aim high.
Introduce these principles to your team:
- Respect people as individuals
- Provide more benefit than harm
- Examine any affected population
Start with a critique session. Then another critique. Then another. Push your work and your team’s work with higher standards.
Or, start by kicking off a project with a question for stakeholders: are there any unintended consequences that we should keep in mind that may affect a certain population in a negative way?
Try evaluating objectives against longer-term outcomes, looking for potential unintended consequences. For each intended outcome, ask “then what?” and “then what?” and “then what?”
But most importantly, don’t give up.
When I started my career, and I was in a position where I needed to compromise my beliefs in order to get and hold a job, I hated hearing people in the profession encourage me to “stick to my guns” and never compromise my beliefs. I thought that was so arrogant, because the pressure I felt to get and hold a job was massive.
So I want to be clear, I don’t expect sainthood from professionals, but I do expect quality. And the scope of quality is growing to include an ethical and principled perspective.
Please feel free to reach out to me with your thoughts. This is, and always will be, a work in progress.
###
NOTE: I first heard about the Belmont Report from awesome service designer Sarah Fathallah, in her recent interview in Communication Arts.
Correction: I misstated that the Tuskegee study participants had been unknowingly infected with syphilis as part of the study. The post has been updated to reflect that 399 of the participants had syphilis, and to clarify the nature of the care that was administered during the study. Many thanks to reader Kit Oliynyk for assistance in these corrections.