AR navigation for spacewalking

Human-Computer Interaction, Usability testing, Prototyping, NASA, Space, Augmented reality, Design engineering






Summary


As the space exploration industry grows increasingly ambitious and commercial, more people will be spacewalking than ever before. In answer to this call from NASA, we (the Harvard NASA SUITS team) designed spacesuit information displays within augmented reality environments to reduce the cognitive load of spacewalking tasks for astronauts in emergency situations.

  • Selected project for NASA SUITS Challenge 2018
  • Built & tested the prototype at NASA Johnson Space Center, TX.

Team


Erin McLean (Team leader)
Berlynn Bai (Health and telemetry)
Hane Roh (Health and telemetry)
Lindsey Renee Derry (Voice command)
Anahide Nahhal (Voice command)
Agatha Park (Development)
Tyler Rogers (Development)
Zongheng Sun (Gestures)
Emily Yang (Gestures)
Nick Oueijan (Navigation)
Hanif Wicaksono (Navigation)

Project for


NASA SUITS 2018

Featured in


  • NASA SUITS
  • Harvard Graduate School of Design News






Software system architecture. Mixed reality devices were used as the prototyping toolkit.



Usability testing at NASA Johnson Space Center, Houston, TX

Design-focused interaction for NASA


Innovation at NASA has typically come from solid engineering roots. Deep space crewed missions have stringent requirements: robust, flexible systems that can be built on a shoestring budget and operate in extreme environments, at great distances from Earth, and under the constant possibility of technological failure. These missions are large-scale, complex engineered systems that require the input of thousands of technical subsystem experts and the synthesis of their interconnections by mission planners. We will also need designers to create these complex socio-technical systems.

Emergency in space


On July 16, 2013, Italian astronaut Luca Parmitano terminated his spacewalk 44 minutes in due to the buildup of 1.5 liters of water in his helmet. The emergency was unprecedented and required cooperative problem solving across extravehicular activity (EVA), intravehicular activity (IVA), and ground control. While this emergency was handled successfully by ground control, as we venture farther from Earth we will need new ways to bring the expertise of ground control with us.


Research question


How might we improve the productivity and safety of EVAs for near-future astronauts working in deep space?





Guardian Angel Mode: System architecture



Guardian Angel mode UI

Goal: Prototype and test


We imagined a scenario for an emergency EVA situation. From the following list of candidate EVA capabilities, we picked three features to prototype and test:

  1. Management of EVA activities
  2. Communication
  3. Navigation
  4. Personal Health Monitoring
  5. Scientific Experimentation & Exploration
  6. Robotic Control
  7. Construction & Repair

We kept these design principles in mind:

  1. Keep astronauts aware of and focused on the task at hand
  2. Improve astronaut agency: reduce the cognitive load of EVA tasks without being patronizing
  3. Prioritize procedure and detail over intuitive usability
  4. Design for limited mobility and muted senses
  5. Extend cognition with flexible and responsive content, communications, and controls
  6. Facilitate collaboration and extend awareness between IVA crew, EVA crew, and ground

Output


We designed Guardian Angel Mode: a HoloLens-enabled augmented reality system that connects EVA astronauts with relevant content and procedures from a comprehensive database of expert information, surfaces biometric and suit information for the user and crew, and establishes orientational aids for translation in 360-degree microgravity environments.

This capability aims to extend an astronaut’s awareness of the status of their fellow crew on spacewalks, potentially improving rapid response to an emergency.
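To make the "relevant content and procedures" piece concrete, here is a minimal sketch of how procedures from an expert database could be surfaced by matching tags against the current EVA context. Everything here (names, tags, and procedure text) is a hypothetical illustration for this write-up, not the actual prototype code or NASA content.

```python
from dataclasses import dataclass

@dataclass
class Procedure:
    """One expert-authored procedure from the knowledge base."""
    title: str
    tags: set    # situations this procedure applies to
    steps: list

# Hypothetical slice of the expert database.
DATABASE = [
    Procedure(
        title="Helmet water intrusion response",
        tags={"emergency", "helmet", "water"},
        steps=[
            "Terminate EVA and notify IVA crew and ground.",
            "Translate to the nearest airlock via marked handholds.",
            "Keep the head forward to limit water pooling near the face.",
        ],
    ),
    Procedure(
        title="Nominal panel inspection",
        tags={"nominal", "inspection", "panel"},
        steps=["Photograph the panel.", "Check fasteners.", "Report status."],
    ),
]

def surface_procedures(context_tags: set) -> list:
    """Rank procedures by tag overlap with the current EVA context."""
    scored = [(len(p.tags & context_tags), p) for p in DATABASE]
    return [p for score, p in sorted(scored, key=lambda s: -s[0]) if score > 0]

if __name__ == "__main__":
    # Context inspired by the water-in-helmet emergency described above.
    for proc in surface_procedures({"emergency", "water", "helmet"}):
        print(proc.title, "->", proc.steps[0])
```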



Navigation system interaction: sketch and prototype.


Navigation waypoints concept

Design: Guardian Angel mode


The Guardian Angel system has three main components:

  • Dynamic adjustment: the system dynamically updates task lists based on the environment and human feedback.

  • Preserve and promote autonomy: the human is best at interpreting information in an environment that may be changing dynamically.

  • Surface information as it becomes relevant: while the suit measures personal biometrics throughout the EVA, they only matter if something is off nominal. And it is often not your own health but a partner’s that is relevant (a minimal sketch of this surfacing logic follows below).
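As a rough sketch of that third component, the snippet below flags readings outside a nominal range and surfaces a partner's alerts alongside your own. The thresholds, field names, and crew IDs are invented for illustration and are not NASA values.

```python
# Minimal sketch of the "surface only when off-nominal" rule.
# Thresholds and field names are illustrative, not NASA values.

NOMINAL_RANGES = {
    "heart_rate_bpm": (40, 160),
    "suit_pressure_psi": (4.1, 4.5),
    "co2_mmhg": (0.0, 7.6),
}

def off_nominal(telemetry: dict) -> list:
    """Return descriptions of any readings outside their nominal range."""
    alerts = []
    for key, (lo, hi) in NOMINAL_RANGES.items():
        value = telemetry.get(key)
        if value is not None and not (lo <= value <= hi):
            alerts.append(f"{key}={value} outside [{lo}, {hi}]")
    return alerts

def surface_for(user: str, crew_telemetry: dict) -> list:
    """Surface alerts for the whole crew: a partner's emergency
    is as relevant to you as your own."""
    messages = []
    for astronaut, telemetry in crew_telemetry.items():
        for alert in off_nominal(telemetry):
            who = "YOU" if astronaut == user else astronaut
            messages.append(f"[{who}] {alert}")
    return messages

if __name__ == "__main__":
    crew = {
        "EV1": {"heart_rate_bpm": 95, "suit_pressure_psi": 4.3},
        "EV2": {"heart_rate_bpm": 178, "co2_mmhg": 9.1},  # off nominal
    }
    print("\n".join(surface_for("EV1", crew)))  # EV2's alerts shown to EV1
```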

My design focus: navigation 


I had the pleasure of designing the navigation feature with my amazing and talented friend, Nicolas Oueijan.

During an EVA, astronauts attach themselves to tether attachment points, so-called “hooking points,” using a standardized tether hook receptacle. We designed computer-aided translation visuals to help the user find the most efficient route between these points.

The translation feature annotates full 3D objects rather than simple spheroid markers, clearly demarcating the waypoints (handholds) on the space station. The feature is an aid: it lets users choose how to navigate the environment with machine assist, because especially in emergencies the environment might be changing dynamically, and the human is best at interpreting that information.
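One way to picture the route-finding behind those waypoints is a shortest-path search over a graph of handholds, with edges between holds within tether reach. The geometry, reach limit, and Dijkstra-style search below are our illustrative assumptions rather than the actual prototype implementation; the design point is that the output is only a suggested waypoint list, which the astronaut is free to reorder or ignore.

```python
import heapq
from math import dist

# Illustrative handhold positions (not actual ISS geometry).
HANDHOLDS = {            # id -> (x, y, z) in meters
    "A": (0.0, 0.0, 0.0),
    "B": (0.9, 0.2, 0.0),
    "C": (1.8, 0.0, 0.3),
    "D": (0.9, 1.1, 0.0),
    "E": (2.6, 0.4, 0.3),
}
TETHER_REACH_M = 1.2     # assumed maximum span between hooking points

def neighbors(node: str):
    """Yield handholds reachable from `node` within tether reach."""
    p = HANDHOLDS[node]
    for other, q in HANDHOLDS.items():
        d = dist(p, q)
        if other != node and d <= TETHER_REACH_M:
            yield other, d

def suggest_route(start: str, goal: str) -> list:
    """Dijkstra-style shortest route; returned as suggested waypoints
    the astronaut may follow, reorder, or ignore."""
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, d in neighbors(node):
            heapq.heappush(frontier, (cost + d, nxt, path + [nxt]))
    return []  # unreachable with the current tether reach

if __name__ == "__main__":
    print(suggest_route("A", "E"))  # e.g. ['A', 'B', 'C', 'E']
```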



Obstacles were placed during testing to mimic the environment.


Two rounds of testing: on campus and at NASA

Usability testing


We did two rounds of testing:

1. Initial testing with Jeff Hoffman, a former NASA astronaut and now a professor in the MIT Department of Aeronautics and Astronautics

2. Full usability testing at NASA Johnson Space Center, Texas

How was the EVA simulated?


  • A nominal EVA task: inspecting a part of the ISS.
  • Navigation from one part of the ISS to another via waypoints.
  • An emergency EVA task: communicating without radio.

What was measured during the test?


  • Navigation: time to reach each waypoint
  • Guardian Angel mode: time for each task, voice user interface (VUI) testing
  • Global: time from start to finish, error rate, error correction time, heart rate, facial expression, Profile of Mood States (POMS) rating scale, system learnability
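As a sketch of how such timings could be captured during a session, the snippet below timestamps labeled events and derives per-segment durations; the labels and structure are hypothetical, not the actual test harness.

```python
import time
from dataclasses import dataclass, field

@dataclass
class SessionLog:
    """Per-participant log of timestamped events and observed errors."""
    events: list = field(default_factory=list)
    errors: int = 0

    def mark(self, label: str) -> None:
        """Timestamp an event such as 'waypoint_1_reached' or 'task_2_done'."""
        self.events.append((time.monotonic(), label))

    def duration(self, start_label: str, end_label: str) -> float:
        """Elapsed seconds between two previously marked events."""
        times = {label: t for t, label in self.events}
        return times[end_label] - times[start_label]

if __name__ == "__main__":
    log = SessionLog()
    log.mark("start")
    time.sleep(0.1)            # ... participant navigates ...
    log.mark("waypoint_1_reached")
    log.errors += 1            # observer notes a wrong turn
    log.mark("finish")
    print(f"waypoint 1: {log.duration('start', 'waypoint_1_reached'):.2f}s,"
          f" errors: {log.errors}")
```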

Recording the data


Video recording of the entire setting, HoloLens Spectator View, a GoPro on the participant (video and sound), face recording (when possible), two observers, and a heart rate monitor.



Gallery













© 2020 Hanif Wicaksono.