
<Overview/>
QUICK BACKGROUND
This is a client-based project. I was on a UX team of five, working with Backyard Brains, an Ann Arbor company that develops fun, hands-on experiments and tools for teaching neuroscience. I had a lot of fun on this project. Our team worked on their neuroscience robot app and improved its overall design to make learning neuroscience more intuitive. The new app based on our design is now in development and will be downloadable in 2023.
My Role
UX Designer
Timeline
Jan 2022 - Apr 2022
Tools
Figma, Miro, MATLAB
CLIENT
Backyard Brains is an Ann Arbor company that aims to enable everyone to be a neuroscientist. They provide affordable neuroscience experiment kits that let students of all ages learn about neuroscience in a hands-on way, making it less obscure and more accessible to the general public.

Over the years, they have developed quite a variety of interesting products that help people learn about the brain, including the RoboRoach Bundle, the DIY Neuroprosthetic Kit, and the Muscle SpikerBox.



But we were lucky enough to work on their latest product, which also happened to be their most complex one: the Neurorobot.
NEUROROBOT
Can robots think and feel? Can they have minds? Can they learn to be more like us? To do any of this, robots need brains. Scientists use “neurorobots” – robots with computer models of biological brains – to understand everything from motor control to learning and problem solving. Now Backyard Brains is trying to take these neurorobots out of the research labs and use them to help students learn neuroscience more intuitively. Right now, the primary target is high school students. Backyard Brains has developed a dedicated Neurorobot curriculum and has already piloted the Neurorobot twice with high school students from two schools. However, their ultimate goal is to let everyone have fun with the Neurorobot.
The Neurorobot has a cute look: it has cameras, wheels, microphones, speakers, distance sensors, and a big brain-shaped shell on its back.

The robot comes with a MATLAB-based app that controls the Neurorobot’s ‘brain’ and monitors its behavior through its sensor inputs. From the app interface, students can hook up neurons and neural networks into artificial brains that control the Neurorobot’s actions. Needless to say, the range of brains and behaviors one can design is virtually limitless.
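To give a concrete sense of what ‘hooking up neurons’ means here, below is a minimal sketch of a one-neuron brain: a single leaky integrate-and-fire neuron that turns a distance reading into a turn-away reflex. It is written in Python rather than the app’s actual MATLAB, and the sensor and motor functions are hypothetical stand-ins for the Neurorobot’s real interface, not its API.

```python
# Minimal one-neuron "brain" sketch: a leaky integrate-and-fire neuron
# driving a turn-away reflex. Sensor/motor functions are hypothetical stubs.

def read_distance():
    """Hypothetical sensor stub: pretend an obstacle is 20 cm away."""
    return 20.0

def set_wheels(left, right):
    """Hypothetical motor stub: print what the robot would do."""
    print(f"wheels: left={left}, right={right}")

def lif_step(v, current, dt=1.0, tau=10.0, v_rest=-70.0, v_thresh=-55.0):
    """One Euler step of a leaky integrate-and-fire neuron (mV, ms)."""
    v += (dt / tau) * (v_rest - v + current)  # leaky integration toward rest
    if v >= v_thresh:                         # threshold crossed: spike, reset
        return v_rest, True
    return v, False

v = -70.0
for _ in range(50):                           # 50 ms of simulated brain time
    current = max(0.0, 100.0 - 3.0 * read_distance())  # nearer -> stronger drive
    v, spiked = lif_step(v, current)
    if spiked:
        set_wheels(left=-50, right=50)        # spike -> spin away from obstacle
    else:
        set_wheels(left=50, right=50)         # no spike -> keep driving forward
```

Wiring several such neurons together, with sensors feeding some and motors driven by others, is essentially what students do visually in the app.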

The Neurorobot app was designed by a team of engineers without proper consideration of design principles. As a result, students seemed to spend more time figuring out how the interface works than actually exploring and designing brains.
PROBLEM
TARGET USER
We will primarily design for high school students.
GOAL
Provide students with an engaging, fun, and hands-on learning experience to learn neuroscience fundamentals while minimizing their time spent on learning how to use the system.
<Research/>
EXPERT INTERVIEW
To better understand the situation, we interviewed a high school teacher who had adopted the Neurorobot in his classroom.
The interview revealed that he needed a full 55-minute class period to demo the robot and explain its GUI before students could begin designing their own robotic brains. However, this stakeholder is a highly motivated teacher with a PhD in neuroscience who worked alongside the robot’s developer to design its curriculum, so his experience does not represent a realistic high school scenario.
USER TESTING ROUND 1
To evaluate the usability breakdowns in the Neurorobot’s GUI in an environment with minimal teacher guidance, we conducted user testing with five University of Michigan college students. We recruited only users from non-STEM backgrounds to better approximate our target users, high school students.
>> User Test
Apart from several pure usability tasks, we designed the majority of the user testing flow around the first two labs of the course curriculum to mimic a real usage scenario. Here is a brief overview of the process:
- Introduce ourselves and the test goal, and put users at ease
- Set up 3 scenarios and let users complete the detailed tasks while thinking aloud
- Let users complete a Single Ease Question after each task
- Ask follow-up questions
- Let users fill out a questionnaire based on the System Usability Scale (SUS), scored as sketched below
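For reference, SUS uses a fixed scoring formula: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5 to give a 0–100 score. Here is a minimal Python sketch of that standard formula (the sample responses are made up for illustration, not real participant data):

```python
# Standard SUS scoring (Brooke, 1996): 10 items, each rated 1-5.
# Odd items score (response - 1); even items score (5 - response);
# the sum is scaled by 2.5 onto a 0-100 range.

def sus_score(responses):
    """Compute the SUS score from a list of 10 responses (each 1-5)."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Made-up example responses for one participant:
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```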
>> Data Analysis
Apart from the Single Ease Questions and the final SUS questionnaire, we also designed a way to quantitatively measure the priority of the user problems we found.
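Our exact rubric isn’t reproduced here, but a minimal sketch of this kind of prioritization follows, assuming a severity score equal to the number of affected users times an impact rating. That weighting is an assumption for illustration, not necessarily the rubric we used, and the example problems are hypothetical:

```python
# Hedged sketch of quantitative problem prioritization. Assumes
# severity = (users affected) * (impact rating); the rubric and the
# example problems below are hypothetical, for illustration only.

problems = [
    # (description, users affected out of 5, impact rating 1-3)
    ("Could not find how to add a neuron", 5, 3),
    ("Confused by unlabeled sensor icons", 3, 2),
    ("Missed the brain-save confirmation", 1, 1),
]

scored = [(desc, affected * impact) for desc, affected, impact in problems]
for desc, severity in sorted(scored, key=lambda p: -p[1]):
    print(f"severity {severity:2d}: {desc}")
```

Scoring every observed problem this way let us rank them and focus the redesign on the highest-severity breakdowns.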
HEURISTIC EVALUATION
After identifying the user problems, we mapped them onto the Nielsen Norman Group’s ten heuristic principles.

MAJOR FINDINGS
Below are some of the most notable problems we found during the research phase.



<Design/>
BRAINSTORM
With all these user problems and pain points in mind, we organized a brainstorming session and invited the CEO and the chief engineer of Backyard Brains to join. During the session, we started with three ‘How Might We’ questions, brainstormed ideas, sketched them out, and dot-voted to select our favorites.



LOW-FI PROTOTYPE
We divided ourselves into two groups and created parallel designs. Later, we came together as a team for design critique sessions, where we built on each other’s designs and ideas and integrated the two designs into a stronger one. Some of the low-fi prototypes are included below.

MID-TO-HIGH-FI PROTOTYPE
From the low-fi prototype, we developed a UI library and polished the design into a mid-to-high-fi prototype.
>> UI Library

>> Features





<Evaluation/>
USER TESTING ROUND 2
To evaluate our design, we conducted a second round of user testing with exactly the same test process. Interestingly, because the design’s ‘open-build’ nature made it impossible to fully prototype in Figma, we assigned two teammates to act as ‘wizards’ behind the scenes, constructing the interface in real time as users described what they wanted to do.
COMPARISON BETWEEN THE TWO ROUNDS
Comparing the new design with the old one, we were glad to see that the number of user problems with a severity score greater than 5 dropped from 13 to 1, while the average Single Ease Question score rose from 5.6 to 6.4 (higher is better).