
The SPARK 2026 Challenge is organized as part of the AI4Space workshop, held in conjunction with
Space Situational Awareness (SSA) focuses on understanding and monitoring objects orbiting the Earth. As space activity continues to grow, SSA has become a critical research area, with strong interest from major space agencies such as the European Space Agency (ESA) and the National Aeronautics and Space Administration (NASA).
Vision-based sensors play a key role in SSA, particularly for spacecraft navigation and close-proximity operations. These include rendezvous missions, docking, in-space refueling, and satellite servicing. Vision-based target recognition is also essential for enabling autonomous space systems. While object recognition has seen major advances on Earth, relatively little work has been designed or tested specifically for the unique conditions of the space environment.
Modern perception systems rely heavily on deep learning, which requires large amounts of labeled data. However, high-quality annotated space imagery is scarce. In addition, spaceborne images are affected by challenging factors such as extreme lighting variations, low signal-to-noise ratios, and high contrast, making the problem even more complex.
The SPARK 2026 Challenge addresses these challenges by encouraging the development of data-driven methods for spacecraft perception and relative navigation tasks. The challenge provides participants with both high-fidelity synthetic data generated using a state-of-the-art rendering engine and real experimental data collected at the Zero-Gravity Laboratory (Zero-G Lab) at the Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg.
SPARK 2026 Challenge Streams
🚀 Stream 1: Multi-Task Spacecraft Perception
⚡ Stream 2: Event-Based Pose Estimation
