How we used service design, plain language, and user research to increase transparency, accountability, and trust in facial recognition technology
As part of the Customer Experience Directorate (CxD) at the Department of Homeland Security (DHS), we created a brief from the Chief Information Officer and a report explaining how DHS and its components (TSA, CBP) use facial recognition and face capture technology, how these technologies are tested, and how individuals’ rights are protected.
Through this effort, we:
- Improved public trust by transparently explaining how the Department uses facial recognition technology at different touchpoints in people’s lives.
- Strengthened accountability to civil society stakeholders by publicly reporting on how the Department ensures rigorous oversight and governance of this technology.
- Supported fair and equitable use of this technology by transparently reporting how accurately it performs across age, sex, and skin tone.
Context
The mission of the Customer Experience (CX) team at the Department of Homeland Security is to center real people and their lived experiences to deliver services they trust.
I worked on a small team of six: two designers, one other content designer, one accessibility expert, one project manager, and me.
We were tasked with:
- Communicating 8 of the highest-visibility use cases of facial recognition technology
- Explaining the results of third-party testing and oversight
To meet diverse user needs, we intentionally designed with this framework:
- Bite: High-level findings and key takeaways
- Snack: Clear explanations of insights and safeguards
- Meal: Detailed testing data and raw documentation
The two main user groups we focused on were:
Advocacy groups, researchers, and watchdogs who needed:
- Clear evidence of oversight and accountability
- Specific testing data and performance metrics
- Confidence that concerns about bias and equity were being taken seriously
The public who needed:
- Plain-language explanations of what’s happening
- Clarity on how their rights are protected
- Reassurance that the technology is being used responsibly
Methodologies
Desk research: Reviewed internal documentation and technical materials to understand DHS use cases and testing processes, using these as a foundation to develop accurate content drafts.
Cross-functional collaboration: Worked closely with component partners (TSA, CBP) as subject-matter experts to validate accuracy and ensure policy alignment.
Legal compliance: We also navigated the content compliance process with our agency's legal stakeholders to confirm each agency was comfortable with how we described each use case. I introduced a framework for working with legal stakeholders, who often push back on content decisions:
- Involve legal and accessibility partners early.
- Distinguish actual legal requirements from individual interpretations of them.
- Identify which feedback is a must-do from a legal-requirement point of view, and which is subjective personal preference.
- Share the results from user research.
- Annotate content requirements so reviewers understand why I will or will not implement their feedback.
Plain language: Applied plain-language principles throughout:
- Short, scannable sentences
- Defined technical terms at first use
- Clear headings and visual hierarchy to reduce cognitive load
Voice and tone: We aimed for a conversational yet direct tone, being as transparent as possible and acknowledging gaps in our understanding or our systems.
User research
- Usability testing: We needed feedback on the report to identify pain points users had in finding information, and to understand which information was more or less valuable to them. I completed ten 40-50 minute sessions with people from the general public and ten with people from civil society groups.
- Highlighter testing: This showed us which sentences or words on specific pages users found confusing or hard to understand. I completed five highlighter sessions with people from the general public and five with people from civil society groups.
By combining usability testing and highlighter testing, I was able to understand holistically how users interpreted the content and home in on problem areas.
Developing use case content
To develop the content for each use case, I audited existing reports on how each agency uses facial recognition and highlighted the information we knew people wanted to learn, based on what we heard from our community partners. I then drafted content for each use case, focusing on areas like:
- How to opt in or opt out
- How data is stored
- What the steps are in the process for each use case
User research and synthesis
I moderated usability testing sessions with people from the general public and civil society.
After conducting these sessions, I led an affinity diagramming session with our team to identify specific themes and pain points.
We prioritized findings based on how widely participants agreed on them and how strongly they affected participants’ trust in the content.
Insights
- Users wanted the “bottom line” upfront. They cared most about how the technology affected them personally and what DHS was doing to prevent harm.
- Civil society users wanted specific performance data, especially around accuracy and impacts across different skin tones.
- Finding contact information after a negative experience with the technology was hard. This directly reduced trust when users couldn’t easily ask questions or raise concerns.
- Text density overwhelmed users. Even when the writing was clear, long blocks of text created friction.
Design and content changes driven by research
💡 We introduced tables to break up information in a simple, digestible way, especially for readers who may be neurodivergent.
Tables help reduce cognitive load, group information in meaningful ways, and set expectations for how the rest of the report is structured.
💡 We defined technical terms upfront in plain language, so users could fully understand them whenever they came up in context.
Highlighter testing showed that people were especially confused by terms like “biometrics” and “face capture.” These terms appear often and in different contexts, so we clearly defined at the very start of the report how they are used here.
We also hoped that clearly laying out these terms would build trust in the content.
Breaking down “biometrics.”
FR/FC systems use “biometric samples,” which are usually a picture of an individual’s face. These images can be taken live or come from an identity document like a passport or driver’s license. “Biometrics” refers to measuring physical traits, such as facial features, to identify a person.
Breaking down “face capture.”
Face capture means taking a picture of an individual’s face so that it can be used in a face recognition system and then applying different automated methods to verify that the photo is actually of a person’s face and is of high quality.
Breaking down “face recognition,” “verification,” and “identification.”
Face recognition technology compares an individual’s facial features to available images for:
- Verification: “One-to-one” matching that confirms a photo matches a different photo of the same person.
- Identification: “One-to-many” matching that compares a photo of a person against a set of photos from a larger group. This can happen against a database of millions of photos, but at DHS it most often involves matching against a limited, pre-built gallery, such as the passport photos of passengers on a flight manifest. This limited gallery matching is more efficient and effective.
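To make the verification/identification distinction concrete, here is a minimal sketch, not DHS’s actual system: real face recognition compares learned embeddings produced by neural networks, while the vectors, gallery names, and threshold below are made up for illustration.

```python
# Illustrative sketch only. Real FR systems compare high-dimensional
# embeddings from trained models; these 3-number "embeddings," the
# gallery IDs, and the 0.9 threshold are hypothetical.
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(probe, reference, threshold=0.9):
    """Verification (one-to-one): does the probe photo match this one reference photo?"""
    return cosine_similarity(probe, reference) >= threshold

def identify(probe, gallery, threshold=0.9):
    """Identification (one-to-many): find the best match in a limited,
    pre-built gallery (e.g. passport photos from a flight manifest)."""
    best_id, best_score = None, threshold
    for person_id, reference in gallery.items():
        score = cosine_similarity(probe, reference)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id  # None means no match above the threshold

# Hypothetical gallery and probe photo.
gallery = {
    "traveler_a": [0.9, 0.1, 0.2],
    "traveler_b": [0.1, 0.8, 0.5],
}
probe = [0.88, 0.12, 0.21]

print(verify(probe, gallery["traveler_a"]))  # True
print(identify(probe, gallery))              # traveler_a
```

A limited gallery keeps the search space small, which is one reason the flight-manifest approach described above is more efficient than matching against a massive database.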
💡 We wrote in an honest, conversational tone with the most important information at the very top.
User research showed that users are concerned about when each use case applies, how they can opt out of the technology, and how their rights are protected. We moved the most important information to the very start of each use case page, and rather than writing each header as a short, general topic, I wrote each header as a sentence that leads into its paragraph. This helps users who are scanning each page find content that is meaningful to them without having to do too much work.
Global Entry is a voluntary, opt-in service that requires biometrics. Global Entry members can always opt out of using Global Entry and go through standard CBP processing.
Encounter photos are deleted from CBP databases after use but kept in a DHS-wide system. CBP uses an application called the Traveler Verification Service (TVS) for all border crossing processes that use face comparison. No photos are permanently stored in the TVS cloud matching service.
💡 We included a section in each use case to guide people on how to submit a complaint or feedback about their experiences with facial recognition technology.
We heard in usability testing that users often felt we weren’t listening to their concerns, especially about opting out of this technology. So each use case includes a section on redress channels people can contact if they have had a negative experience or want to submit feedback, before or after the fact.
If you have feedback, questions, or a complaint about your experience with CBP’s use of FR/FC technologies, get in touch.
If it’s in the moment, ask to speak to the supervisory agent or officer on hand.
If it’s after the fact, you can:
- Contact the Department of Homeland Security Traveler Redress Inquiry Program (DHS TRIP) if you’ve had difficulty with travel screenings, such as denied or delayed entry into the U.S., or repeated additional screening.
- If you believe DHS has violated your rights or someone else’s rights, you can file a complaint with the DHS Office for Civil Rights and Civil Liberties.
These options are available to all travelers, regardless of citizenship status.
Impact
After our blog post and PDF were released, we received positive feedback from both stakeholders and public users. Survey results showed that users found the content:
- Easy to understand
- Transparent
- Trust-building, increasing their confidence in the Department
Check out the blog post and report to learn more about the work we did and how DHS and its components (TSA, CBP) use facial recognition and face capture technology.
Final report
Reflection
- Doing design work in government can be messy and often ambiguous. Trust your teammates, and collaborate constantly.
- Don’t be afraid to ask questions and be comfortable with not knowing things. Often in design and content work, you have to become a subject matter expert in a topic quickly. Asking questions can be a superpower!
- User research often pivots in multiple directions. It’s important to be flexible and go with where your research takes you, and listen to what your users are telling you.