In recent years, the demand for enhanced autonomy in on-orbit operations such as rendezvous, docking, and proximity manoeuvres has driven interest in Deep Learning-based Spacecraft Pose Estimation techniques. However, because access to real target datasets is limited, algorithms are often trained on synthetic data and deployed in the real domain, leading to a performance drop caused by the domain gap. State-of-the-art approaches employ Domain Adaptation techniques to mitigate this issue. Event sensing has previously been explored as a viable solution and shown to reduce the domain gap between synthetic simulations and real-world scenarios. Event sensor hardware and software have matured significantly in recent years; moreover, the characteristics of event sensors offer many advantages for space applications compared to RGB sensors. To facilitate further training and evaluation of DL-based models, we introduce a novel dataset comprising real event data acquired in a controlled lab environment and synthetic event data generated in simulation with the same camera intrinsics. Furthermore, we propose an effective data filtering method that improves the quality of the training data and thereby enhances model performance. We also present a novel image-based event representation that outperforms existing representations. We carry out a multifaceted baseline evaluation on the dataset and summarise the results across different event representations, event filtering strategies for training neural networks, and prominent algorithmic frameworks.