MatrixCare, Inc. is a leader in healthcare technology, specializing in innovations that streamline and elevate the delivery of quality healthcare services, with a strong focus on Electronic Health Records (EHR). This case study centered on defining and implementing standards for iconography and nomenclature across the MatrixCare platform to establish consistency in the use of icons and language.

THE PROBLEM

Icons are meant to be simple, intuitive, and universal. But during my time at MatrixCare, we noticed something odd: despite using “standard” icons, users were still confused. Some weren’t clicking the right buttons. Others hesitated before selecting what seemed like the obvious option.

So we asked ourselves: Is the icon broken? Or is our assumption about its clarity flawed?

That question kicked off a deep dive into iconography and nomenclature across the product — one that uncovered hidden usability issues beneath the surface.

MY ROLE

I led the research and testing strategy for evaluating icon comprehension and naming clarity. I worked closely with a cross-functional team to:

  • Design moderated testing sessions

  • Create task-based scenarios to evaluate understanding

  • Analyze both verbal and behavioral feedback

  • Recommend changes to icon labels, visuals, and usage guidelines

DISCOVERY & RESEARCH

When it came to figuring out whether our icons and their corresponding labels made sense, I didn’t want to rely on assumptions or designer logic. So I ran a hybrid card sorting exercise with users—mixing open-ended thinking with a bit of structure to see how people naturally interpreted the meaning behind different visual cues.

Each participant was given a mix of icons, labels, and tasks, and asked to group them in ways that felt most intuitive. Some categories were predefined (like “navigation” or “actions”), but others were left open to interpretation to surface unexpected mental models.

What we found:


  • Users grouped icons by emotion or familiarity, not always by function.

  • Some “universal” icons were misinterpreted or placed in unexpected categories.

  • Icon-label pairings varied widely based on prior platform exposure.


This exercise revealed misalignments between design intent and user perception, giving us a more nuanced view than standard usability testing. It ultimately led us to revise both the visuals and naming conventions, reinforcing that good iconography isn’t just about visuals—it’s about shared understanding.
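
To make findings like these concrete, here is a minimal sketch of how hybrid card sort placements can be tallied against design intent. The data, icon names, and agreement logic are illustrative assumptions, not our actual study records:

```python
# Hypothetical sketch of tallying hybrid card sort results.
# The responses below are illustrative, not the actual study data.
from collections import Counter, defaultdict

# Each response: a participant placed an icon card into a category
# (predefined like "actions", or a category the participant created).
responses = [
    ("clock",  "due dates"), ("clock", "time tracking"), ("clock", "due dates"),
    ("pencil", "actions"),   ("pencil", "my tasks"),     ("pencil", "actions"),
    ("ban",    "errors"),    ("ban",    "disabled"),     ("ban",    "disabled"),
]

# Design intent for each icon (what the team expected).
intent = {"clock": "due dates", "pencil": "actions", "ban": "errors"}

placements = defaultdict(Counter)
for icon, category in responses:
    placements[icon][category] += 1

# Flag icons whose most common placement disagrees with design intent,
# and report agreement (split placements = ambiguous icon).
for icon, counts in placements.items():
    top, top_n = counts.most_common(1)[0]
    agreement = top_n / sum(counts.values())
    mismatch = "MISMATCH" if top != intent[icon] else "ok"
    print(f"{icon}: top placement={top!r} agreement={agreement:.0%} [{mismatch}]")
```

A simple agreement score like this surfaces both outright mismatches and split placements, which is the kind of variance we saw across participants.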


Our moderated testing sessions surprised us further:

  • Users said they understood an icon… but their actions didn’t match.

  • Many relied on familiarity over accuracy — choosing icons based on what they’d seen elsewhere.

  • A subtle pressure to “get it right” led to socially desirable answers — meaning our feedback wasn’t always honest.


"The clock seems to indicate time, so it seems like a good icon for Due. The pencil looks like I have to work on it." - Nurse, Senior Care Facility


“I don’t like the error icon (ban). It should be red to be more pronounced. But I still don’t like it. The exclamation icon draws the eye and seems more imperative.” - Admin, SNF


“I think you could exchange error and warning icons. Yellow is a good color for warnings.” - Intern, MatrixCare

THE PIVOT

We realized we were measuring icon recognition as a binary — they get it / they don’t — when it should be about behavior and confidence. So we reframed our approach:

Instead of only asking “Do you know what this means?”, we asked:

Can they complete the task quickly and confidently using this icon — without hovering, second-guessing, or pausing?

We also observed that icons without supporting labels performed worse, especially when context shifted across screens. That insight led us to explore better pairing strategies between icons and plain language.
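
To illustrate what behavior-first measurement can look like in practice, here is a minimal sketch of scoring a task attempt on completion, speed, and hesitation rather than on stated recognition. The field names and thresholds are hypothetical, not our actual instrumentation:

```python
# Hypothetical sketch of scoring a task attempt on behavior, not recall.
# Field names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TaskAttempt:
    completed: bool     # did the participant finish the task?
    seconds: float      # time to complete (or abandon)
    hovers: int         # tooltip hovers on the target icon
    wrong_clicks: int   # clicks on other controls first

def confident_success(a: TaskAttempt, baseline_seconds: float) -> bool:
    """True only if the task was completed quickly and without hesitation.

    A participant who eventually succeeds but hovers for the tooltip or
    clicks the wrong control first still signals an unclear icon.
    """
    return (
        a.completed
        and a.seconds <= 1.5 * baseline_seconds  # "quickly": within 150% of baseline
        and a.hovers == 0                        # no tooltip-checking
        and a.wrong_clicks == 0                  # no second-guessing
    )

attempts = [TaskAttempt(True, 8.2, 0, 0), TaskAttempt(True, 21.0, 2, 1)]
rate = sum(confident_success(a, baseline_seconds=10.0) for a in attempts) / len(attempts)
print(f"confident-success rate: {rate:.0%}")  # 50% here, despite 100% completion
```

The point of a metric like this is that completion rate alone hides hesitation: a participant can succeed at the task while still signaling that the icon is unclear.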

DESIGNING THE FIX

We proposed:

  • Updated icon sets that avoided metaphorical ambiguity

  • Short, clear labels added where possible

  • Usage guidelines in the design system for how and when to pair text with icons

  • Behavioral-first testing as the new standard in icon validation

These changes were packaged into a recommendations document and shared with the broader design systems team, who began incorporating them into their component library.

COLLABORATION & COMMUNICATION


  • Worked closely with senior designers to align updates with the existing style guide.

  • Presented findings in a cross-team design critique and stakeholder sync with product managers and QA leads.

  • Created a decision tree for choosing between text-only, icon-only, or hybrid label options based on context (a simplified version is sketched below).
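
As a rough illustration, the decision tree reduced to logic like the following. The function, inputs, and branch order here are a simplified sketch, not the shipped design-system API:

```python
# Hypothetical sketch of the text/icon pairing decision tree.
# The questions mirror the documented guidance; the function name
# and inputs are illustrative, not the actual design system API.

def label_mode(is_universal: bool, space_constrained: bool,
               destructive_or_irreversible: bool) -> str:
    """Choose between 'icon-only', 'text-only', or 'icon + text'."""
    if destructive_or_irreversible:
        # High-stakes actions always get plain language.
        return "icon + text"
    if not is_universal:
        # An unvalidated or metaphorical icon needs a label; if there is
        # no room for both, drop the icon rather than the words.
        return "icon + text" if not space_constrained else "text-only"
    # Validated, widely recognized icon (e.g. search, close).
    return "icon-only" if space_constrained else "icon + text"

print(label_mode(is_universal=False, space_constrained=False,
                 destructive_or_irreversible=False))  # -> icon + text
```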

RESULTS & IMPACT


  • 3 of 5 tested icons were revised for clarity post-testing

  • Follow-up usability tests showed a 30% reduction in task completion time for the revised icons

  • Internal adoption: Our testing model was integrated into the design system’s validation checklist for new UI patterns

  • Helped shift internal thinking from “icon = universal” to “icon = assumed clarity, needs testing”

FINAL REFLECTION


This project taught me that intuition is not a substitute for validation.

Icons seem small — even invisible — until they get in the way. By digging into how users really interpret meaning, we brought clarity and confidence back into the UI.

It’s easy to make an icon prettier. It’s harder — and more meaningful — to make it work better.


Interested in working together?

Shoot me an email if you'd like to chat.