Boeing 777 EICAS display and aircraft systems diagram

Case Study

NASA Ames Research Center

Designing Boeing 777 flight simulator interfaces to help pilots make faster decisions during in-flight emergencies.

Role Product Designer
Timeline Dec 2015 – Oct 2018
Location Mountain View, CA (On Site)
Focus EICAS — Cockpit Display Systems

At NASA Ames Research Center, I worked on the Boeing 777 flight simulator program — redesigning the EICAS (Engine Indication and Crew Alerting System) interface to help pilots process critical failure information faster during emergencies.

The core insight was simple but powerful: instead of showing pilots everything, show them only what’s wrong. This reductive approach to cockpit display design reduced incident analysis time by 35% and informed FAA safety recommendations.

20+
Pilots Interviewed
35%
Faster Analysis
FAA
Safety Recommendations
~3yr
Engagement Duration

01 — Context

The Cockpit Problem

Too much information at the worst possible time.

Modern cockpits are marvels of engineering — but they present a critical design challenge. The EICAS display on a Boeing 777 shows engine parameters, hydraulic systems, electrical status, fuel quantities, and dozens of other indicators simultaneously.

During normal flight, this comprehensive view works fine. But during an emergency — when an engine fails, a fire warning triggers, or multiple systems cascade — pilots are suddenly drowning in information at the exact moment they need clarity most.

The question we set out to answer: how do you redesign a cockpit display so that it helps pilots focus on what matters during the moments that matter most?


02 — Research

Learning from Pilots

Field research with 20+ pilots studying real in-flight failure scenarios.

We conducted extensive research with over 20 commercial airline pilots, studying how they process information during simulated in-flight failures. We observed their scan patterns, decision-making timelines, and the moments where critical information was missed or delayed.

One of our most instructive case studies was Qantas Flight 32 — an Airbus A380 that suffered an uncontained engine failure over Indonesia in 2010. The crew was bombarded with over 50 ECAM messages in rapid succession, many of them contradictory. Despite the chaos, the crew’s methodical approach to triaging information saved 469 lives.

Alert Saturation

During multi-system failures, pilots received 30–50+ alerts simultaneously. After the first 8–10, most were dismissed or ignored entirely.

Decision Latency

Pilots spent an average of 12 seconds scanning for relevant information before acting — time that could mean the difference between containment and escalation.

Cognitive Overload

Pilots described “tunnel vision” during emergencies — fixating on one indicator while missing changes in others. The display design was working against human cognition.


03 — The Problem

Information Overload

Every alert treated equally means no alert gets the attention it deserves.

The fundamental problem was that the EICAS display treated all information with equal visual weight. Normal operating parameters, advisory messages, caution alerts, and critical warnings all competed for the same screen real estate and pilot attention.

50+
simultaneous alerts during a multi-system failure event — pilots couldn’t distinguish critical from informational in the noise

Pilots told us they had developed personal coping strategies — mentally filtering out certain display areas, relying on memory instead of the display, or simply dismissing alerts in bulk to clear the screen. These workarounds were unreliable and dangerous.

  • Normal and abnormal parameters displayed with equal visual prominence
  • Alert cascades during failures made it impossible to identify the root cause
  • Pilots resorted to memorized procedures rather than trusting the display
  • Cross-referencing the paper Quick Reference Handbook (QRH) added critical seconds to response times

04 — Approach

Focus on the Negative

Instead of showing everything, surface only what’s wrong.

Our design philosophy was counterintuitive for a system that had always tried to show everything: show less. Specifically, during failure scenarios, suppress all normal operating parameters and surface only the deviations — the failures, the warnings, the things that need immediate pilot action.

“If everything is highlighted, nothing is highlighted. The most powerful thing a display can do during an emergency is hide what’s working.”

Design principle — EICAS redesign

We studied the Quick Reference Handbook (QRH) — the paper-based emergency procedures that pilots reference during failures — and used its structure as a design model. The QRH doesn’t list everything about the aircraft; it lists only the relevant steps for the specific failure at hand. Our display should do the same.

  • Suppress normal-state parameters during failure modes to reduce visual noise
  • Surface only deviations, failures, and items requiring pilot action
  • Use progressive disclosure: show severity first, details on demand
  • Mirror QRH structure so the display reinforces trained procedures
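The filtering logic described above can be sketched as a simple triage function. This is an illustrative model only, not actual avionics code; the `Alert` class, the four-level severity scale, and the example alerts are all assumptions introduced for the sketch.

```python
from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    NORMAL = 0    # normal operating parameter
    ADVISORY = 1  # informational
    CAUTION = 2   # requires awareness
    WARNING = 3   # requires immediate action

@dataclass
class Alert:
    system: str
    message: str
    severity: Severity

def triage(alerts, failure_mode=False):
    """During a failure, suppress normal-state items and show the
    remaining alerts most-critical-first (severity before detail)."""
    if not failure_mode:
        return list(alerts)  # normal flight: full-system overview
    deviations = [a for a in alerts if a.severity > Severity.NORMAL]
    return sorted(deviations, key=lambda a: a.severity, reverse=True)

# Hypothetical alert set during a simulated failure
alerts = [
    Alert("FUEL", "Quantity normal", Severity.NORMAL),
    Alert("HYD", "Pressure low", Severity.CAUTION),
    Alert("ENG 2", "FIRE", Severity.WARNING),
]
```

Calling `triage(alerts, failure_mode=True)` drops the normal fuel indication and lists the engine fire ahead of the hydraulic caution, mirroring the severity-first ordering the QRH trains pilots to follow.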

05 — Design

Before & After

From showing everything to surfacing only what matters.

Before
Original EICAS display showing all system parameters simultaneously
After
Redesigned EICAS display focused only on failure information

06 — Final Design

The Redesigned EICAS

A failure-focused display built for high-pressure decision-making.

The final interface strips away normal operating parameters during failure scenarios and presents only the information pilots need to act. The display dynamically transitions between a full-system overview during normal flight and a failure-focused view when anomalies are detected.

Pilots no longer need to mentally filter out noise — the system does it for them. The result is a display that works with human cognition under stress rather than against it.
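The dynamic transition between the two views amounts to a simple mode decision: switch to the failure-focused view as soon as any anomaly crosses the caution threshold. The sketch below is a hypothetical model of that rule, not the certified display logic; the mode names and 0–3 severity scale are assumptions.

```python
from enum import Enum

class DisplayMode(Enum):
    OVERVIEW = "full-system overview"
    FAILURE_FOCUS = "failure-focused view"

def select_mode(severities):
    """Return the failure-focused view if any alert is at or above
    caution level (2 on an assumed 0-3 scale); otherwise keep the
    normal full-system overview."""
    CAUTION = 2
    if any(s >= CAUTION for s in severities):
        return DisplayMode.FAILURE_FOCUS
    return DisplayMode.OVERVIEW
```

A single caution- or warning-level severity in the input is enough to flip the display, which is what lets the system, rather than the pilot, do the filtering.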


07 — Impact

Results & Impact

Measurable improvements in pilot decision-making under pressure.

35%
Faster incident analysis time
Speed
20+
Commercial pilots tested in simulator
Research
FAA
Safety recommendations informed
Regulation
~3yr
Research and design engagement
Duration
Faster Root Cause Identification

By suppressing normal parameters, pilots identified the root failure 35% faster. The display did the cognitive filtering that pilots previously did manually.

Reduced Alert Dismissals

Pilots stopped bulk-dismissing alerts because only relevant alerts were shown. Every visible item warranted attention, rebuilding trust in the display system.

FAA Safety Input

Research findings from pilot testing were compiled into recommendations that informed FAA safety guidelines for cockpit display design standards.


08 — Reflections

Key Takeaways

What nearly three years at NASA taught me about designing for extreme environments.

01

Designing for Extreme Pressure

Interfaces used in emergencies must work with stressed cognition, not against it. Every element that doesn’t serve the immediate task is a liability. This changed how I think about information hierarchy in every product I’ve designed since.

02

Simplification as a Design Tool

The most impactful design decision wasn’t adding a new feature — it was removing information. Showing less, at the right time, gave pilots more clarity than any new widget or visualization could have.

03

Working Within Aviation Constraints

Aviation design operates under strict regulatory, safety, and certification constraints. Learning to innovate within rigid boundaries — and to build trust with domain experts who are rightfully skeptical of change — was one of the most valuable skills I developed.