Robotic First Aid Response

Title Robotic First Aid Response
Summary A robot system that assesses a person's state of health, as a first step toward autonomous robotic first aid/EMS
Keywords robot first aid, injury localization, anomalous breathing recognition, bleeding recognition
TimeFrame 2015/1/16-2015/6/30
References (first aid teleoperated robots) (fall detection example) Simin Wang, Salim Zabir, Bastian Leibe. Lying Pose Recognition for Elderly Fall Detection. (breathing recognition) Phil Corbishley and Esther Rodriguez-Villegas. Breathing Detection: Towards a Miniaturized, Wearable, Battery-Operated Monitoring System. IEEE Transactions on Biomedical Engineering, vol. 55, no. 1, January 2008.

Prerequisites Image analysis, sensor systems, learning systems, cooperating intelligent systems, or similar.

Some ability to work with software (installing libraries and writing code), and an interest in robots, healthcare, or recognition.

Author Tianyi Zhang and Yuwei Zhao
Supervisor Martin Cooney, Anita Sant'Anna
Level Master
Status Finished


Goal: The capability for a robot in a home or facility to recognize a person's health state when an emergency such as a fall has occurred, as a first step toward endowing robots with critical first aid skills.

Motivation: Robots need to be useful, and one of the most useful things a robot can do is look after people's health and safety. A quick and meaningful assessment of a person's state during first response to a possible emergency could help save lives and prevent much anguish.

Challenge: The first thing that should be done is to assess the victim's state, but this is very difficult even for humans; e.g., for a person who has fallen and is unresponsive:

 1) Where did they hurt themselves?
 2) Are they breathing normally?
 3) Are they bleeding?

Focus: This project seeks to show that such recognition is possible for an automatic system, as a proof of concept; a simplified in-lab scenario is assumed, in which the robot is near the victim and good sensor data (visual and sound, without occlusions or noise) can be acquired.

Approach: The students will perform three steps:

 1) Obtain Kinect data (skeleton and depth) of a human-shaped dummy falling in different ways,
    then create a recognition system (possibly using LIBSVM) to classify whether the head has
    been hurt.
 2) Record sound samples based on YouTube videos of "agonal" respiration, tachypnea (fast
    breathing), and regular breathing; calculate MFCC features with HTK; then create a
    recognition system (possibly using LIBSVM) to classify the kind of breathing.
 3) Use a robot (possibly a TurtleBot) to drag a white glove over a dummy (red ink will
    symbolize blood in some areas) to detect the presence/location of deadly bleeding.
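The head-impact classification in step 1 could be sketched as follows. This is a minimal stand-in, not the project's actual pipeline: it assumes two made-up skeleton features (final head height and peak downward head speed, extracted offline from Kinect data), and substitutes a nearest-centroid classifier for the LIBSVM model the project plans to use. All numbers are illustrative.

```python
import numpy as np

def train_centroids(X, y):
    """Return one feature centroid per class label."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest (Euclidean)."""
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

# Toy training data: each row is (final head height in m, peak downward
# head speed in m/s); label 1 = head likely hurt, 0 = head protected.
X = np.array([[0.05, 3.2], [0.10, 2.8], [0.08, 3.5],   # hard head impacts
              [0.30, 1.0], [0.25, 1.2], [0.35, 0.8]])  # softer falls
y = np.array([1, 1, 1, 0, 0, 0])

centroids = train_centroids(X, y)
print(predict(centroids, np.array([0.07, 3.0])))  # a fast, low head impact
```

An SVM with a nonlinear kernel would draw a more flexible boundary than centroids, but the surrounding workflow (featurize falls, train, predict) is the same shape.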
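Step 2's analysis is planned around MFCC features (HTK) fed to LIBSVM; as a self-contained sketch of the underlying signal problem, the following instead estimates breathing rate from the amplitude envelope of synthetic audio. The signals, the low sample rate, and the roughly 20 breaths/min tachypnea cutoff are all illustrative assumptions, not project parameters.

```python
import numpy as np

FS = 1000  # sample rate in Hz (low, since only the envelope matters here)

def synth_breathing(rate_hz, seconds=32, fs=FS):
    """Synthetic breathing sound: noise amplitude-modulated at rate_hz."""
    t = np.arange(int(seconds * fs)) / fs
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(t.size)
    envelope = 0.5 * (1.0 + np.sin(2 * np.pi * rate_hz * t))
    return envelope * noise

def breaths_per_minute(audio, fs=FS):
    """Dominant frequency of the smoothed envelope, in breaths/min."""
    env = np.abs(audio)                      # rectify
    win = fs // 2                            # 0.5 s moving average
    env = np.convolve(env, np.ones(win) / win, mode="same")
    env -= env.mean()                        # drop DC before the FFT
    spectrum = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(env.size, d=1.0 / fs)
    return 60.0 * freqs[np.argmax(spectrum)]

normal = breaths_per_minute(synth_breathing(15 / 60))  # ~15 breaths/min
fast = breaths_per_minute(synth_breathing(45 / 60))    # ~45 breaths/min
print(round(normal), round(fast))
```

A rate estimate alone cannot separate agonal respiration from other slow patterns, which is why the project's richer MFCC spectral features plus a trained classifier are the better fit; this sketch only shows that the raw signal carries recoverable rate information.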
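For step 3, once the glove has been dragged over the dummy, the bleeding check can reduce to colour thresholding on an image of the glove. A minimal sketch, assuming a tiny synthetic RGB array and made-up thresholds that a real system would have to calibrate against lighting and the actual ink:

```python
import numpy as np

def blood_mask(rgb, red_min=120, margin=50):
    """Boolean mask of pixels where R is high and clearly above G and B."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r >= red_min) & (r - g >= margin) & (r - b >= margin)

def bleeding_detected(rgb, min_fraction=0.02):
    """Flag bleeding if enough of the glove image is blood-coloured."""
    mask = blood_mask(rgb)
    return mask.mean() >= min_fraction, mask

# Mostly white glove with a small red stain in one corner.
glove = np.full((4, 4, 3), 230, dtype=np.uint8)
glove[0, 0] = (180, 40, 30)   # red ink
glove[0, 1] = (170, 60, 50)   # red ink
found, mask = bleeding_detected(glove)
print(found, int(mask.sum()))
```

Because the mask keeps pixel positions, the same result also gives a rough location of the stain on the glove, which maps back to where on the dummy the robot was wiping.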

Evaluation of system: accuracy or a similar metric for how often the system correctly detects head trauma, abnormal breathing, and bleeding.
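The accuracy metric itself is straightforward to compute for each of the three recognizers; a minimal sketch over an illustrative labelled test set (0 = condition absent, 1 = condition present):

```python
def accuracy(predicted, actual):
    """Fraction of test cases where the system's output matches the label."""
    assert len(predicted) == len(actual)
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

predicted = [1, 1, 0, 1, 0, 0, 1, 0]
actual    = [1, 0, 0, 1, 0, 1, 1, 0]
print(accuracy(predicted, actual))  # 6 of 8 correct
```

If the recorded data turn out to be class-imbalanced (e.g., few agonal-breathing samples), a per-class breakdown or precision/recall would be a fairer report than plain accuracy.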

Requirement: some ability to work with software (installing libraries and writing code), and an interest in robots, healthcare, or recognition.

Deliverable: an intelligent robot system which can assess a victim's state (thesis/report, code, video)