Space Situational Awareness (SSA) focuses on understanding and monitoring objects orbiting the Earth. As space activity continues to grow, SSA has become a critical research area, with strong interest from major space agencies such as the European Space Agency (ESA) and the National Aeronautics and Space Administration (NASA).

Vision-based sensors play a key role in SSA, particularly for spacecraft navigation and close-proximity operations such as rendezvous, docking, in-space refueling, and satellite servicing. Vision-based target recognition is also essential for enabling autonomous space systems. Yet while object recognition has advanced rapidly in terrestrial applications, relatively little work has been designed or validated specifically for the unique conditions of the space environment.

Modern perception systems rely heavily on deep learning, which requires large amounts of labeled data. However, high-quality annotated space imagery is scarce. In addition, spaceborne images are affected by challenging factors such as extreme lighting variations, low signal-to-noise ratios, and high contrast, making the problem even more complex.

The SPARK 2026 Challenge tackles these issues by encouraging the development of data-driven methods for spacecraft perception and relative navigation tasks. The challenge provides participants with both high-fidelity synthetic data generated using a state-of-the-art rendering engine and real experimental data collected at the Zero-Gravity Laboratory (Zero-G Lab) at the Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg.

SPARK 2026 Challenge Streams

SPARK 2026 pushes the boundaries of space perception by introducing two exciting and forward-looking challenge streams, each targeting critical capabilities for next-generation autonomous space systems.

🚀 Stream 1: Multi-Task Spacecraft Perception

This stream challenges participants to design a single, powerful model capable of performing spacecraft classification, spacecraft detection, and fine-grained segmentation of spacecraft components—regardless of spacecraft type. The focus is on efficiency and performance, encouraging the development of compact, high-performing models suitable for deployment on resource-constrained space platforms.
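
To make the goal concrete, the sketch below shows one way a shared-backbone multi-task model could be organized in PyTorch, with a single encoder feeding classification, detection, and segmentation heads. Everything here (layer sizes, the single-box detection head, the class and part counts) is an illustrative assumption, not part of the challenge specification.

```python
import torch
import torch.nn as nn

class MultiTaskSpacecraftNet(nn.Module):
    """Hypothetical shared-backbone model for classification, detection,
    and component segmentation. A sketch, not a prescribed baseline."""

    def __init__(self, num_classes: int = 10, num_parts: int = 5):
        super().__init__()
        # Compact shared encoder (stand-in for a lightweight backbone,
        # e.g. a MobileNet variant, on resource-constrained hardware).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Classification head: spacecraft type.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, num_classes))
        # Detection head: a single normalized (cx, cy, w, h) box,
        # kept deliberately minimal for illustration.
        self.box_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 4), nn.Sigmoid())
        # Segmentation head: per-pixel component logits at reduced resolution.
        self.seg_head = nn.Conv2d(128, num_parts, kernel_size=1)

    def forward(self, x: torch.Tensor) -> dict:
        feats = self.backbone(x)
        return {
            "class_logits": self.cls_head(feats),
            "box": self.box_head(feats),         # values in [0, 1]
            "seg_logits": self.seg_head(feats),  # upsample to input size as needed
        }

model = MultiTaskSpacecraftNet()
out = model(torch.randn(2, 3, 256, 256))
print({k: tuple(v.shape) for k, v in out.items()})
```

The appeal of this layout for space platforms is that the three heads amortize a single backbone's compute and memory, which is usually the dominant cost; the per-task losses can then be combined, for example as a weighted sum, during training.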

⚡ Stream 2: Event-Based Pose Estimation

Dive into the world of event-based vision with Stream 2, which focuses on pose estimation using the SPADES dataset. Participants will train their models on high-quality synthetic event data and validate their approaches on real event data, addressing one of the most challenging perception problems in space.
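
Since event cameras produce asynchronous streams of (timestamp, x, y, polarity) tuples rather than frames, a common first step in learning-based pipelines is to accumulate events into a fixed-size tensor such as a temporal voxel grid. The NumPy sketch below shows one such conversion; the event layout, sensor resolution, and binning choice are assumptions for illustration and may not match how SPADES packages its data.

```python
import numpy as np

def events_to_voxel_grid(events: np.ndarray, num_bins: int,
                         height: int, width: int) -> np.ndarray:
    """Accumulate an (N, 4) event array of (t, x, y, polarity) rows into a
    (num_bins, height, width) voxel grid. Hypothetical input format."""
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    if events.shape[0] == 0:
        return grid
    t = events[:, 0]
    # Normalize timestamps into [0, num_bins) so each event maps to a bin.
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1e-6)
    bins = t_norm.astype(np.int64)
    x = events[:, 1].astype(np.int64)
    y = events[:, 2].astype(np.int64)
    pol = np.where(events[:, 3] > 0, 1.0, -1.0).astype(np.float32)
    # Signed accumulation: ON events add, OFF events subtract.
    np.add.at(grid, (bins, y, x), pol)
    return grid

# Example: 1000 random events on an assumed 260x346 sensor, 5 time slices.
rng = np.random.default_rng(0)
ev = np.column_stack([
    np.sort(rng.uniform(0.0, 1.0, 1000)),  # timestamps (s)
    rng.integers(0, 346, 1000),            # x
    rng.integers(0, 260, 1000),            # y
    rng.integers(0, 2, 1000),              # polarity
])
voxels = events_to_voxel_grid(ev, num_bins=5, height=260, width=346)
print(voxels.shape)  # (5, 260, 346)
```

A dense representation like this lets standard convolutional pose regressors consume event data directly, which is one reason it is a popular starting point when transferring models trained on synthetic events to real sensors.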


Whether you are pushing model efficiency to its limits or exploring cutting-edge event-driven perception, SPARK 2026 offers a competitive platform to showcase innovation, performance, and real-world impact in space autonomy.

Stream 1: Multi-Task Spacecraft Perception

Developing a single, unified model capable of performing multiple tasks (classification, detection, and segmentation) simultaneously.

Stream 2: Event-Based Sensing for Pose Estimation

Leveraging event data to estimate the 6DoF pose of the spacecraft.