Can Robot Patients Lead to Better Doctors?

Research seeks to make better human patient simulators, a training tool that enables clinicians to practice their skills before treating real patients.

Photo Caption: Nurses practice taking blood pressure and collecting medical history with a traditional human patient simulator system. Its face is completely inexpressive, and its lips do not move when it "talks." Speech is either pre-recorded or voiced remotely by the clinical educator through a microphone.

A young doctor leans over a patient who has been in a serious car accident and must be in severe pain. The doctor’s trauma team examines the patient’s pelvis and rolls her onto her side to check her spine. They scan her abdomen with an ultrasound machine and find fluid. They insert a tube into her nose. Throughout the procedure, the patient’s face remains rigid, showing no sign of pain.

The patient’s composure isn’t stoicism: she is a robot, not a person. The trauma team is training on a “human patient simulator” (HPS), a tool that lets clinicians practice their skills before treating real patients.

HPS systems have evolved over the past several decades from static mannequins into machines that can breathe, bleed and expel fluids. Some models have pupils that contract in response to light; others model entire physiologies that can change over the course of a scenario. They come in life-sized forms resembling both children and adults.

But they could be better, said Laurel D. Riek, a computer science and engineering professor at the University of Notre Dame. As remarkable as modern patient simulators are, they have two major limitations.

“Their faces don’t actually move, and they are unable to sense or respond to the environment,” she said.

Riek, a roboticist, is designing the next generation of HPS systems. Her NSF-supported research explores new means for the robots to exhibit realistic, clinically relevant facial expressions and respond automatically to clinicians in real time.

“This work will enable hundreds of thousands of doctors, nurses, EMTs, firefighters and combat medics to practice their treatment and diagnostic skills extensively and safely on robots before treating real patients,” she said.


One novel aspect of Riek’s research is the development of new algorithms that use data from real patients to generate simulated facial expressions. For example, Riek and her students recently completed a pain simulation project, becoming the first research group to synthesize pain using patient data. The work won best overall paper and best student paper at the International Meeting on Simulation in Healthcare, the top medical simulation conference.

For pain synthesis, the team starts with an existing database of videos of faces in pain. They track each face using constrained local models, a computer vision technique for locating and following facial landmarks such as the eyes, brows and mouth.
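To give a feel for the landmark-tracking step, here is a minimal, hypothetical sketch. Riek’s team uses constrained local models; the sketch below substitutes dlib’s widely available pretrained 68-point landmark predictor (an ensemble-of-regression-trees method, not a CLM) purely for illustration. The video path and model file name are assumptions.

```python
# Hypothetical sketch: extract per-frame facial landmarks from a video of a
# face in pain, the first step of the mapping described above. Stands in for
# the CLM tracker used in the actual research.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
# Pretrained model file downloadable from dlib.net; the path is an assumption.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def track_landmarks(video_path):
    """Yield a list of 68 (x, y) landmark points for each video frame."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            shape = predictor(gray, face)
            yield [(p.x, p.y) for p in shape.parts()]
    cap.release()
```

Landmark trajectories extracted this way could then be mapped onto a simulator’s facial actuators, though the article does not detail that step.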

Riek’s team is now working on an interactive stroke simulator that can automatically sense and respond to learners as they work through a case. Stroke is the fifth leading cause of death in the United States, yet many of these deaths could be prevented through faster diagnosis and treatment.
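To make the sense-and-respond idea concrete, here is a minimal, entirely hypothetical control-loop sketch. None of the class or method names come from Riek’s system, and a real simulator’s sensing (speech, touch, instrument use) would be far richer.

```python
# Hypothetical sketch of a sense-and-respond loop: the robot patient reacts
# to what the learner does, rather than being puppeted by an educator.
import time

class StrokePatientSim:
    def __init__(self):
        # Simplified state: one-sided facial droop is a classic stroke sign.
        self.droop_side = "left"

    def sense(self):
        """Poll sensors (speech recognition, touch); stubbed here."""
        return {"asked_to_smile": True}

    def respond(self, events):
        """Map learner actions to clinically plausible patient behavior."""
        if events.get("asked_to_smile"):
            print(f"Patient smiles weakly; the {self.droop_side} side droops.")

sim = StrokePatientSim()
for _ in range(3):          # a few control-loop ticks for demonstration
    sim.respond(sim.sense())
    time.sleep(0.1)
```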


