Reconstructing dense depth from a single view of a scene is a valuable goal for computer vision, robotics and artificial intelligence: it is a fundamental human capability that has not yet been fully realised on machines, nor exploited to any great benefit. Over the last ten years, however, it has become clear that machine learning approaches to predicting dense depth maps from single images are viable. More recently, deep learning approaches based on convolutional neural networks have also been employed. Although training can take many days, at inference time these recent methods can run at up to 100 Hz.
In order to exploit the burgeoning real-time capabilities of single-image depth estimation, and to analyse the relationship between what these algorithms can deliver and what embodied systems need in order to act, this project will adapt these methods to a robotic context. They will be applied first to the task of obstacle avoidance, with new approaches developed to fit the needs of the application. The robotic tasks of SLAM (simultaneous localisation and mapping) and navigation will also be investigated.
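As a concrete illustration of how a predicted dense depth map could drive obstacle avoidance, the sketch below splits the image into vertical sectors and steers towards the sector with the greatest clearance. This is a minimal hypothetical example, not the project's method: the function name, the sector-based strategy, and the clearance threshold are all assumptions made for illustration.

```python
import numpy as np

def steer_from_depth(depth, n_sectors=3, min_clearance=1.0):
    """Pick a steering sector from a dense depth map (values in metres).

    Splits the image into n_sectors vertical strips and returns the index
    of the strip with the greatest median depth (most free space ahead),
    or None if every strip is closer than min_clearance, meaning the
    robot should stop. A toy strategy, assumed purely for illustration.
    """
    sectors = np.array_split(depth, n_sectors, axis=1)
    clearances = [float(np.median(s)) for s in sectors]
    best = int(np.argmax(clearances))
    if clearances[best] < min_clearance:
        return None  # all sectors blocked: stop
    return best
```

In practice the depth map would come from the single-image network at each frame, and the sector index would be mapped to a steering command; the 100 Hz inference rates quoted above suggest such a loop could run at control rate.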
In parallel, the project will examine the properties of single-image depth estimation algorithms, and the features that they exploit in input images.