ESA GNC Conference Papers Repository
Title:
Hazard Detection & Avoidance and Optical Navigation Integration Demonstration for Autonomous Moon Landing Applications
Abstract:
It is generally recognised that future Moon landing platforms will require a global access capability: the ability to land precisely at any location on the Moon, on various types of terrain that are potentially hazardous for the Lander. To achieve this, future Lander systems will require a Hazard Detection and Avoidance (HDA) capability. The HDA function analyses the terrain topography to identify landing hazards (roughness, steep slopes, shadowed areas). It commands the sensors (scanning Lidar and camera), processes the sensor data, reconstructs the terrain topography, generates surface hazard maps for slope, roughness and shadow, and combines this information to recommend a safe landing site meeting all the safety and Lander manoeuvrability constraints (a simplified sketch of this map generation is given below).

An important challenge for the HDA function is the need for motion compensation. The acquisition of the scanning Lidar data takes several seconds, and the Lander is moving (in both attitude and translation) during this process. The Lidar measurements therefore appear distorted by the change of relative pose over the scan time (see the de-skewing sketch below). The HDA function thus relies on the outputs of the navigation system both to process the sensor measurements and to actively command the sensor to maintain the desired coverage and resolution. The integration of the navigation and HDA components is key to achieving the required hazard detection reliability and to enabling accurate retargeting toward the identified target.

Such landing platforms typically baseline the use of optical navigation for the descent and landing operations. Validating the integration between this optical navigation system and the HDA system is an important challenge: it requires the navigation to produce flight-representative state estimation performance, which in turn requires flight-representative image and sensor inputs. A Hardware-in-the-Loop (HIL) test campaign had previously been performed in a scaled dynamic environment to demonstrate system integration and real-time operation. The integrated navigation and HDA system was tested in NGC's Landing Dynamic Test Facility (LDTF). The LDTF consists of a six-degree-of-freedom robotic arm mounted on a linear rail, surface mock-ups, and a dedicated lighting system. The prototype payload was mounted at the end effector of the robot. The LDTF is capable of reproducing realistic, but scaled, landing trajectories along the lunar surface mock-up.

The primary objective of the dynamic testing was to demonstrate the motion compensation function, i.e., the ability to process Lidar and camera measurements in real time while accounting for the Lander motion during the scanning process. To that end, the test campaign included a series of tests with incremental complexity, ranging from simple static tests, to isolated single-axis motion (in both translation and angular rate), to nominal landing trajectories with 6-DoF motion. High-rate cases were also run to identify the limits of the system. Such testing, however, suffers from scaling limitations. Lidar sensors have an absolute range measurement error which does not scale down when the measured range is small. Applying a scaling factor to the Lidar range measurements therefore artificially amplifies the range measurement error (a worked example follows below).
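As context for the map-generation step described above, the following is a minimal sketch, in Python, of how slope and roughness maps can be derived from a reconstructed terrain model by local plane fitting. The window size, thresholds and function names are illustrative assumptions; the abstract does not describe the flight algorithm at this level of detail.

```python
import numpy as np

def slope_roughness(patch, cell_size):
    """Fit a plane z = a*x + b*y + c to a square DEM patch and return the
    local slope (deg) and roughness (RMS residual to the plane, metres)."""
    n = patch.shape[0]
    xs, ys = np.meshgrid(np.arange(n) * cell_size, np.arange(n) * cell_size)
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(n * n)])
    coeffs, *_ = np.linalg.lstsq(A, patch.ravel(), rcond=None)
    a, b, _ = coeffs
    slope_deg = np.degrees(np.arctan(np.hypot(a, b)))  # tilt of fitted plane
    roughness = np.sqrt(np.mean((patch.ravel() - A @ coeffs) ** 2))
    return slope_deg, roughness

def safe_site_map(dem, cell_size, win=5, max_slope_deg=10.0, max_rough_m=0.3):
    """Slide a window over the DEM and mark each site safe/hazardous by
    thresholding slope and roughness (shadow masking omitted for brevity)."""
    safe = np.zeros((dem.shape[0] - win + 1, dem.shape[1] - win + 1), dtype=bool)
    for i in range(safe.shape[0]):
        for j in range(safe.shape[1]):
            s, r = slope_roughness(dem[i:i + win, j:j + win], cell_size)
            safe[i, j] = (s <= max_slope_deg) and (r <= max_rough_m)
    return safe
```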
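The motion-compensation challenge amounts to re-expressing each Lidar return in a common frame using the navigation pose at that return's acquisition time. A minimal de-skewing sketch follows, assuming per-return timestamps and an interpolable navigation pose history; all names here are hypothetical and the real system's interfaces may differ.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deskew_scan(points_sensor, stamps, pose_times, positions, quats):
    """Re-express each Lidar return in the map frame using the navigation
    pose interpolated at the return's acquisition time.

    points_sensor : (N, 3) returns in the sensor frame
    stamps        : (N,) acquisition time of each return [s]
    pose_times    : (M,) timestamps of the navigation solution [s]
    positions     : (M, 3) sensor position in the map frame [m]
    quats         : (M, 4) sensor attitude quaternions (x, y, z, w)
    """
    slerp = Slerp(pose_times, Rotation.from_quat(quats))
    R = slerp(stamps)                                   # attitude at each stamp
    p = np.column_stack([np.interp(stamps, pose_times, positions[:, k])
                         for k in range(3)])            # position at each stamp
    return R.apply(points_sensor) + p                   # undistorted point cloud
```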
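The error amplification can be made concrete with a small worked example; the scale factor and noise figure below are illustrative assumptions, not values from the test campaign.

```latex
% At geometric scale 1:k, true ranges shrink by a factor k while the
% Lidar's absolute range noise \sigma_r does not, so the full-scale
% equivalent noise is amplified by k:
\sigma_{r,\mathrm{equiv}} = k\,\sigma_r,
\qquad \text{e.g. } k = 20,\ \sigma_r = 0.02~\mathrm{m}
\;\Rightarrow\; \sigma_{r,\mathrm{equiv}} = 0.40~\mathrm{m}.
```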
In the particular context of the system lab testing, this error amplification means that the functionality and integration of the HDA system can be demonstrated, but that the quantitative slope and roughness performance is not representative of the actual system performance at full scale: relative to the dimensions of the actual mission, the sensor measurement performance is effectively degraded. The test campaign did nevertheless demonstrate the ability to reconstruct terrain slope from measurements taken from a moving Lander, to the level of accuracy expected given the scaling effects on range measurement noise.

The HIL test campaign led to the identification of several avenues for improvement, which were addressed in the following project development phase. First, the implemented formulation of the image processing measurement update did not perform as expected when there is little or no motion along the boresight axis of the camera, due to a singularity in the solution (an illustrative formulation is sketched below). Second, the need to inject an altimeter measurement to stabilise the vertical channel was demonstrated. The proposed solution consists of using the Lidar sensor as a laser altimeter when it is not being used for HDA purposes, in order to feed the navigation filter, provide the required observability of the distance to the ground, and solve for the scale (a sketch of such an update is also given below). Finally, it was identified that the execution rate of the image processing had to be increased: maintaining feature tracking at angular rates of 3 deg/s or more requires a processing rate that is not achievable with a pure software implementation on the identified processor. The solution was to migrate some of the image processing subroutines to the FPGA co-processor.

The identified improvements have been implemented and integrated in the prototype system. A full-scale demonstration campaign of the system on a small UAV is currently under preparation. This will enable a quantitative assessment of system performance, as the scaling limitations will be removed. The paper will highlight the status of the integration and the results of the full-scale demonstration of the system on a UAV.
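One plausible illustration of such a boresight singularity, not necessarily the authors' exact formulation, is a depth (scale) update driven by the divergence of the translational optical-flow field:

```latex
% For a camera translating with velocity (v_x, v_y, v_z), expressed in
% the camera frame, above a fronto-parallel surface at depth Z, the
% translational flow field u satisfies
\nabla \cdot u = \frac{2\,v_z}{Z}
\quad\Rightarrow\quad
\hat{Z} = \frac{2\,v_z}{\nabla \cdot u},
% which degenerates as v_z \to 0: the numerator and the measured
% divergence both vanish and the solution becomes ill-conditioned.
```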
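The altimeter injection can be pictured as an ordinary Kalman measurement update on the vertical channel. The Python sketch below shows only the mechanics: it hypothetically assumes a nadir-pointed beam, so that the measured range equals the altitude, whereas a real filter would also model the beam geometry and the Lander attitude.

```python
import numpy as np

def altimeter_update(x, P, r_meas, sigma_r, k_alt=2):
    """Scalar EKF update injecting a Lidar range, used as a laser altimeter,
    into the navigation filter. Assumes (hypothetically) that the altitude
    sits at index k_alt of state x, with covariance P, and that the beam is
    nadir-pointed so the range measures altitude directly."""
    H = np.zeros((1, x.size)); H[0, k_alt] = 1.0   # z = altitude + noise
    S = H @ P @ H.T + sigma_r ** 2                 # innovation covariance
    K = (P @ H.T) / S                              # Kalman gain (n x 1)
    innov = r_meas - x[k_alt]                      # measurement residual
    x = x + (K * innov).ravel()
    P = (np.eye(x.size) - K @ H) @ P
    return x, P
```

Run at a low rate whenever the Lidar is idle between HDA scans, an update of this kind provides the vertical-channel observability described above.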