ESA GNC Conference Papers Repository
Title:
CNN-based Autonomous Hazard Detection: a LiDAR-less approach.
Authors:
Presented at:
Full paper:
Abstract:
Interacting with planetary surfaces is becoming a key capability for the current space market. Missions such as Origins, Spectral Interpretation, Resource Identification, and Security-Regolith Explorer (OSIRIS-REx), Hayabusa2, and the Double Asteroid Redirection Test (DART) show that a spacecraft flying in a small-body environment is often expected to interact with the asteroid or comet in some way: to collect a sample of the surface materials for return to Earth and further analysis, to make contact with the body, or even to deflect it from its trajectory. Whether for scientific purposes, for planetary defence, or perhaps soon for mining as well, the capability to interact with a planetary body is something that any mission to a small celestial body would benefit from having. However, interacting with a planetary surface is a very complex problem that involves many disciplines, including relative navigation, trajectory design and control, contact dynamics analysis, risk management, and autonomous operations.
In this paper, the focus is placed on the role of autonomous navigation and, in particular, on the identification of elements that could put at risk a potential landing on, or interaction with, the orbited body. The methods used to assess whether a given patch of surface is a safe or dangerous spot to land are usually grouped under Hazard Detection and Avoidance (HDA) algorithms. HDA systems and algorithms identify and avoid elements that could be considered potentially risky, such as boulders, cracks, craters, holes, steep terrain, or shadowed areas. The A (avoidance) in the acronym is usually handled by the guidance and control modules of the spacecraft, while the D (detection) is tackled by the navigation module. This work proposes a method that addresses two of the most troublesome aspects of the detection part of the system: hardware requirements and cost. HDA systems typically rely on Light Detection And Ranging (LiDAR) sensors to combine visual observations with ranging measurements that help characterise the steepness and roughness of the terrain underneath the spacecraft. However, this sensor suite is usually expensive, heavy, and occupies considerable volume on the spacecraft platform. Instead, a LiDAR-less method is proposed here, in which Convolutional Neural Networks (CNNs) are trained and used to detect shadows, surface features, and sloped terrain. LiDARs are considered active instruments because they interact with the environment to take observations: they emit a laser beam that is then received by the instrument itself to measure how the beam was reflected by the surface. Optical sensors such as cameras, on the other hand, are considered passive instruments because they only collect information from the environment, without actively interacting with it. Using only passive instruments is very challenging because they offer no range-like observations, which are the kind typically exploited to build Digital Elevation Maps (DEMs). Assessing the topography of the surface of interest therefore becomes the key element of the proposed approach. To cope with this, the method exploits the shadows cast by the geometry of the planetary body to infer the slopes in the terrain, relative to the local gravity vector at the surface.
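As a rough illustration of the geometric reasoning that the shadow cue enables, the sketch below recovers a feature height from the length of the shadow it casts under a known sun elevation, and turns height differences into a slope angle. This is a minimal, idealised reconstruction assuming a locally flat reference surface; the function names and numbers are illustrative and not taken from the paper, whose networks learn this relationship implicitly from data.

```python
import numpy as np

def height_from_shadow(shadow_length_m: float, sun_elevation_rad: float) -> float:
    """Height of an occluding feature from the length of the shadow it casts.

    Assumes a locally flat reference surface and a known solar elevation;
    both are simplifications relative to the paper's learned approach.
    """
    return shadow_length_m * np.tan(sun_elevation_rad)

def slope_from_heights(h1: float, h2: float, baseline_m: float) -> float:
    """Terrain slope (rad) between two points, relative to the local
    gravity-aligned reference plane, from their inferred heights."""
    return np.arctan2(abs(h2 - h1), baseline_m)

# Hypothetical numbers: a 4 m shadow under a 30 deg sun gives ~2.3 m of height.
h = height_from_shadow(4.0, np.deg2rad(30.0))
print(f"inferred height: {h:.2f} m")
print(f"slope over a 10 m baseline: "
      f"{np.rad2deg(slope_from_heights(0.0, h, 10.0)):.1f} deg")
```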
To train the networks, an extensive dataset is generated using Blender that includes several small-celestial-body geometries to increase the networks' generalisation capabilities. Stochastic populations of features are distributed over the surface of the different meshes to add variability and enrich the dataset, and different illumination conditions are also considered. Using several randomly generated initial orbital states, images are rendered from very different orbital positions and orientations to broaden the range of viewpoints from which predictions are expected. The networks are based on the classic Residual Network (ResNet) architecture, in particular ResNet-34, and exploit the benefits of transfer learning (especially valuable for semantic segmentation problems such as the one described here) by initialising their weights to those obtained from the ImageNet dataset. Results show that the networks predict the hazards present in the input observations very accurately, particularly for shadows and features. Slope estimation achieves an accuracy above 70% for true positives, which is high for this type of problem. Furthermore, the composition of the three trained layers (feature detection, shadow detection, and slope estimation) shows that the generated safety maps can very accurately predict and decide where a landing can safely take place. Preliminary Hardware-In-the-Loop (HIL) tests are also included to show the performance of the algorithm when real hardware is used for image acquisition.
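A dataset-generation loop of the kind described might look like the following Blender (bpy) sketch, which randomises the sun direction and the camera's position around the body before rendering a frame. The object names, sampling ranges, and output path are assumptions for illustration; the paper's actual feature placement and orbit sampling are not specified at this level of detail.

```python
import math
import random
import bpy  # Blender's Python API; run this script inside Blender

def render_random_view(out_path: str, seed: int) -> None:
    """Render one training image under a randomised illumination and
    camera pose. Object names ("Sun", "Camera") and sampling ranges are
    illustrative assumptions, not values from the paper."""
    random.seed(seed)
    scene = bpy.context.scene

    # Randomise illumination: sun elevation and azimuth.
    sun = bpy.data.objects["Sun"]
    sun.rotation_euler = (math.radians(random.uniform(10.0, 80.0)),
                          0.0,
                          math.radians(random.uniform(0.0, 360.0)))

    # Randomise the camera position on a sphere around the body (assumed
    # centred at the origin); a Track To constraint in the .blend file is
    # assumed to keep the camera pointed at the body.
    cam = bpy.data.objects["Camera"]
    r = random.uniform(5.0, 20.0)               # hypothetical range, scene units
    theta = random.uniform(0.0, 2.0 * math.pi)  # azimuth
    phi = random.uniform(0.2, math.pi - 0.2)    # polar angle, poles excluded
    cam.location = (r * math.sin(phi) * math.cos(theta),
                    r * math.sin(phi) * math.sin(theta),
                    r * math.cos(phi))

    scene.render.filepath = out_path
    bpy.ops.render.render(write_still=True)

for i in range(3):
    render_random_view(f"//renders/view_{i:04d}.png", seed=i)
```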
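The abstract states only that the networks are ResNet-34-based with ImageNet-pretrained weights; one plausible way to adapt such a backbone to per-pixel hazard prediction is an FCN-style head, as in the PyTorch sketch below. The decoder (a 1x1 classifier followed by bilinear upsampling) is an assumption made here for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet34, ResNet34_Weights

class ResNet34Seg(nn.Module):
    """FCN-style segmentation head on an ImageNet-pretrained ResNet-34.

    A simplified stand-in for the paper's networks: only the ResNet-34
    backbone and the ImageNet initialisation come from the abstract."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = resnet34(weights=ResNet34_Weights.IMAGENET1K_V1)
        # Keep everything up to the last residual stage (drops avgpool/fc);
        # the encoder outputs a (B, 512, H/32, W/32) feature map.
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        self.classifier = nn.Conv2d(512, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(x)
        logits = self.classifier(feats)
        # Upsample per-pixel logits back to the input resolution.
        return F.interpolate(logits, size=x.shape[-2:],
                             mode="bilinear", align_corners=False)

# One such network per hazard layer (shadows, features, slopes).
model = ResNet34Seg(num_classes=2)
out = model(torch.randn(1, 3, 512, 512))  # -> (1, 2, 512, 512) logits
```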
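Finally, composing the three hazard layers into a safety map admits a simple interpretation: a pixel is safe only if no layer flags it. The abstract does not spell out the fusion rule, so the logical-OR composition below is an assumption, with toy binary masks standing in for the networks' thresholded outputs.

```python
import numpy as np

def compose_safety_map(shadow_mask: np.ndarray,
                       feature_mask: np.ndarray,
                       slope_mask: np.ndarray) -> np.ndarray:
    """Per-pixel safety map: safe (True) only where none of the three
    hazard layers flags a hazard. Plain logical composition is an
    assumption; the paper does not detail how the layers are fused."""
    hazard = shadow_mask | feature_mask | slope_mask
    return ~hazard

# Toy 4x4 example with hypothetical binary hazard layers.
rng = np.random.default_rng(0)
layers = [rng.random((4, 4)) > 0.7 for _ in range(3)]
print(compose_safety_map(*layers).astype(int))
```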