Venkatraman Narayanan, Bala Murali Manoghar, Vishnu Sashank Dorbala, Dinesh Manocha, Aniket Bera
We present ProxEmo, a novel end-to-end emotion prediction algorithm for socially aware robot navigation among pedestrians. Our approach predicts the perceived emotions of a pedestrian from walking gaits, which are then used for emotion-guided navigation that takes social and proxemic constraints into account. To classify emotions, we propose a multi-view skeleton graph convolution-based model that works with a commodity camera mounted on a moving robot. Our emotion recognition is integrated into a mapless navigation scheme and makes no assumptions about the environment in which the pedestrians move. It achieves a mean average emotion prediction precision of 82.47% on the Emotion-Gait benchmark dataset, outperforming current state-of-the-art algorithms for emotion recognition from 3D gaits. We demonstrate its benefits for navigation in indoor scenes using a Clearpath Jackal robot.
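The abstract describes a skeleton graph convolution model for classifying emotion from gait. The sketch below illustrates the core operation such models share: one spatial graph-convolution step over skeleton joints, in the spirit of ST-GCN-style architectures. The joint count, edge list, and feature sizes are illustrative assumptions, not ProxEmo's actual architecture.

```python
import numpy as np

def normalized_adjacency(num_joints, edges):
    """Build A + I over the skeleton edges and symmetrically
    normalize it: D^{-1/2} (A + I) D^{-1/2}."""
    A = np.eye(num_joints)  # self-loops (the + I term)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def graph_conv(X, A_norm, W):
    """One layer: aggregate each joint's neighbor features along the
    skeleton, project with W, apply ReLU.
    X: (num_joints, in_feats), W: (in_feats, out_feats)."""
    return np.maximum(A_norm @ X @ W, 0.0)

# Toy 5-joint "skeleton" (hypothetical): hip-spine-head chain plus
# two shoulders hanging off the spine.
edges = [(0, 1), (1, 2), (1, 3), (1, 4)]
A_norm = normalized_adjacency(5, edges)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))   # 3-D joint coordinates for one frame
W = rng.standard_normal((3, 16))  # learned projection (random here)
H = graph_conv(X, A_norm, W)      # (5, 16) per-joint embeddings
```

A full gait model would stack such layers with temporal convolutions over the frame sequence and pool the joint embeddings into a single emotion logit vector; this fragment only shows the per-frame spatial step.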
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Emotion Classification | EWALK | Accuracy | 82.4 | ProxEmo (ours) |
| Emotion Classification | EWALK | Accuracy | 78.24 | STEP [bhattacharya2019step] |
| Emotion Classification | EWALK | Accuracy | 55.47 | Baseline (Vanilla LSTM) [Ewalk] |