ESA GNC Conference Papers Repository
Title:
Bennunet - Applying Machine Learning Techniques for Autonomous Optical Relative Navigation of an Asteroid
Authors:
Presented at:
Full paper:
Abstract:
Small Solar System bodies have been the target of space missions since ICE (the International Cometary Explorer) crossed the plasma tail of comet Giacobini-Zinner on September 11, 1985, becoming the first spacecraft to visit a comet. The main challenge of navigating small bodies is that the ephemeris and physical properties of the target are typically not known with enough accuracy for orbit determination. In addition, the low gravity field and non-uniform shape of the target mean that orbit and attitude estimates must be computed on the spacecraft's on-board computer. Among the sensors used for on-board pose estimation and relative navigation in small-body missions, monocular vision cameras enable the estimation of the relative position of the spacecraft with lower hardware complexity, mass, size, and power requirements. Monocular vision does demand more complex algorithms to recover the relative position, since a monocular camera cannot directly resolve the distance to the target; once this algorithmic complexity is overcome, however, monocular vision sensors can deliver a full pose estimation solution for resource-constrained missions.

Bennunet is a hybrid neural-network-based method for on-board estimation of spacecraft relative position and attitude in the vicinity of minor bodies such as asteroids, comets, or small moons, using a monocular camera sensor. In this context, traditional heuristic methods for spacecraft position and attitude determination lack robustness and precision under adverse illumination conditions, and their performance is further limited by the computational cost of evaluating a large number of candidate pose hypotheses. In comparison, Bennunet directly learns the nonlinear transformation from a 2-D grayscale image to the 6-D pose vector space. Bennunet is composed of a set of sequential convolutional neural networks (CNNs) organised in two levels. The high-level multiclass-classification CNN determines the sector of the discretised 3-D space in which the spacecraft lies. Based on that sector estimate, the image is then ingested by a low-level regression CNN, trained specifically for that sector, which estimates the pose of the camera. In addition, a high-level regression CNN was placed before the high-level classification CNN to estimate the vertical and horizontal shift of the target centroid in the image; this de-shifting pre-processing substantially boosted the performance of the classification CNN (a schematic sketch of the full pipeline is given below).

The secondary contribution of this research is the development of SPyRender, a tool for generating the large sets of synthetic images needed to train and test the designed CNNs. SPyRender implements GPU-accelerated physically-based rendering, enabling the efficient generation of photorealistic images. It has been used with 3-D models of asteroid Bennu to produce multiple image sets covering the whole range of camera position, attitude, illumination conditions, and target albedo-map variation, making it possible to study the impact of different geometries and image effects on network performance. The architecture of the neural networks composing Bennunet was originally based on AlexNet [1], with the purpose of establishing the basis for applying CNNs to the use case of autonomous optical relative navigation [2].
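The abstract does not specify Bennunet's internals, so the following Python sketch of the two-level pipeline is illustrative only: the image size, the number of sectors, the layer dimensions, and the de-shifting implementation (a simple pixel roll) are all assumptions.

```python
# Hypothetical sketch of the two-level Bennunet pipeline described above.
# IMG, N_SECTORS and all layer sizes are assumed values, not from the paper.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

IMG = 128          # assumed grayscale image side length
N_SECTORS = 16     # assumed discretisation of the 3-D space around the target

def small_cnn(out_units, out_activation):
    """A deliberately small CNN used for every stage of the sketch."""
    return models.Sequential([
        layers.Input((IMG, IMG, 1)),
        layers.Conv2D(16, 5, strides=2, activation="relu"),
        layers.Conv2D(32, 3, strides=2, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(out_units, activation=out_activation),
    ])

# High-level regression CNN: estimates the (dx, dy) shift of the target
# centroid so the image can be re-centred before classification.
deshift_net = small_cnn(2, "linear")
# High-level classification CNN: selects the sector of the discretised space.
sector_net = small_cnn(N_SECTORS, "softmax")
# One low-level regression CNN per sector, each emitting the 6-D pose
# (3 position + 3 attitude parameters).
pose_nets = [small_cnn(6, "linear") for _ in range(N_SECTORS)]

def estimate_pose(image):
    """image: (IMG, IMG, 1) grayscale array -> 6-D pose vector."""
    x = image[None, ...]                                      # batch of one
    dx, dy = deshift_net(x)[0].numpy()                        # de-shifting
    x = tf.roll(x, shift=(-int(dy), -int(dx)), axis=(1, 2))   # re-centre
    sector = int(tf.argmax(sector_net(x)[0]))                 # sector choice
    return pose_nets[sector](x)[0].numpy()                    # 6-D pose

print(estimate_pose(np.random.rand(IMG, IMG, 1).astype("float32")))
```

In this layout only one sector-specific regression network runs per image, which is what keeps the per-frame cost low compared with evaluating many pose hypotheses.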
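SPyRender itself is not detailed in this abstract; as a conceptual stand-in, the open-source pyrender library provides comparable GPU-accelerated physically-based rendering. Below is a minimal sketch of the kind of randomised image generation described above, with an icosphere standing in for the Bennu shape model and all numeric ranges assumed.

```python
# Conceptual stand-in for SPyRender-style dataset generation using pyrender.
# The shape model, camera range, field of view and phase angles are assumed.
import numpy as np
import trimesh
import pyrender

mesh = pyrender.Mesh.from_trimesh(
    trimesh.creation.icosphere(subdivisions=3, radius=0.25))  # stand-in body

scene = pyrender.Scene(bg_color=[0, 0, 0, 0], ambient_light=[0.0, 0.0, 0.0])
scene.add(mesh)
cam = pyrender.PerspectiveCamera(yfov=np.deg2rad(10.0))
sun = pyrender.DirectionalLight(color=np.ones(3), intensity=5.0)
renderer = pyrender.OffscreenRenderer(viewport_width=128, viewport_height=128)

rng = np.random.default_rng(0)
for i in range(4):                              # small demo batch
    cam_pose = np.eye(4)
    cam_pose[2, 3] = rng.uniform(2.0, 5.0)      # random range (arbitrary units)
    ang = rng.uniform(0.0, np.pi / 2)           # random solar phase angle
    light_pose = np.array([[ np.cos(ang), 0.0, np.sin(ang), 0.0],
                           [ 0.0,         1.0, 0.0,         0.0],
                           [-np.sin(ang), 0.0, np.cos(ang), 0.0],
                           [ 0.0,         0.0, 0.0,         1.0]])
    cam_node = scene.add(cam, pose=cam_pose)
    light_node = scene.add(sun, pose=light_pose)
    color, depth = renderer.render(scene)       # (128, 128, 3) rendered image
    # ...store `color` and the ground-truth camera pose as a labelled sample
    scene.remove_node(cam_node)
    scene.remove_node(light_node)
renderer.delete()
```

A real dataset generator would additionally randomise the camera attitude and the target albedo map, as the abstract describes.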
However, in later development stages, other architectures have been implemented for Bennunet, substantially improving its performance. In particular, Time Distributed CNNs (TdCNNs) take advantage of the dynamics of the spacecraft orbiting the target by ingesting sequences of frames instead of a single image, markedly improving estimation accuracy (see the sketch after the reference list). In addition, Automated Machine Learning (AutoML) [3] techniques have been introduced into the architecture design process to semi-automate the Neural Architecture Search (NAS) and efficiently explore the search space for model selection and hyperparameter optimisation. Compared to previous works, the training process and data augmentation in this contribution have been extended by using multiple input image resolutions in the NAS process. Moreover, random target-model variations have been introduced in the generation of the training and test image sets. Finally, the designed neural networks have been tested with real images from the OSIRIS-REx mission [4].

References
1. A. Krizhevsky et al., "ImageNet Classification with Deep Convolutional Neural Networks", Advances in Neural Information Processing Systems, vol. 25, 2012.
2. A. Escalante et al., "Churinet: A Deep Learning Approach to Optical Navigation for Minor Bodies", IAC 2021 Proceedings, 2021.
3. F. Hutter et al., Automated Machine Learning, 2019. doi:10.1007/978-3-030-05318-5.
4. B. Rizk et al., "OCAMS: The OSIRIS-REx Camera Suite", Space Science Reviews, vol. 214, no. 1, 2018. doi:10.1007/s11214-017-0460-7.
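As a closing illustration of the time-distributed variant referenced above, the sketch below applies one shared CNN to each frame of a short sequence and fuses the per-frame features with a recurrent layer. The sequence length, image size, and layer widths are assumptions; the abstract does not state the actual TdCNN topology.

```python
# Illustrative time-distributed CNN: one feature extractor shared across a
# sequence of frames, with an LSTM exploiting the orbital dynamics.
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ, IMG = 5, 128   # assumed: 5 consecutive frames of 128x128 grayscale

frame_cnn = models.Sequential([
    layers.Input((IMG, IMG, 1)),
    layers.Conv2D(16, 5, strides=2, activation="relu"),
    layers.Conv2D(32, 3, strides=2, activation="relu"),
    layers.GlobalAveragePooling2D(),
])

tdcnn = models.Sequential([
    layers.Input((SEQ, IMG, IMG, 1)),
    layers.TimeDistributed(frame_cnn),   # same CNN applied to every frame
    layers.LSTM(64),                     # fuse the temporal information
    layers.Dense(6),                     # 6-D pose estimate
])
tdcnn.compile(optimizer="adam", loss="mse")
```

The AutoML/NAS step could be prototyped with an off-the-shelf tuner (for example KerasTuner) searching over layer widths and input resolutions, although the abstract does not name the framework actually used.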