In the past few years, social robots have been introduced into public spaces such as museums, airports, shopping malls, banks, company showrooms, hospitals, and retirement homes, to name a few examples. In addition to classical robotic skills involving physical interaction, such as navigation and the grasping and manipulation of objects, social robots should be able to perceive and communicate with people in the most natural way, i.e. through cognitive interaction. Visual perception is a stepping stone for social robots to achieve such natural social interactions.
In this course, we will provide an introduction to socially aware robotics and to recent advances in deep learning, with particular coverage of their application to the most relevant vision tasks, e.g. face detection, facial landmark localisation, and soft biometrics analysis. Following that, we will introduce the well-known Robot Operating System (ROS) and learn how to deploy simple visual perception algorithms on general robotic platforms. The course provides the essential ingredients to enable PhD students to demonstrate AI modules on robotic platforms, such as the ARI humanoid robot, which recently joined the UniTN family.