Announcement: this year, FOUR challenges are being organized as part of the PBVS 2025 workshop.

6th Thermal Image Super-Resolution challenge (TISRc)

The sixth Thermal Image Super-Resolution challenge is a new edition of last year's challenge, built on a cross-spectral dataset captured with visible (Basler camera) and thermal (TAU2 camera) sensors. It consists of two tracks.

Track 1 features a single evaluation task: participants must generate an x8 super-resolution thermal image from a given low-resolution thermal image. The challenge uses a noiseless set of images, bicubic down-sampled by a factor of 8, as input.
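
For reference, the Track 1 inputs correspond to plain bicubic x8 down-sampling, which can be reproduced with OpenCV as in the minimal sketch below (file names are placeholders; the official CodaLab data should be used as provided):

```python
# Minimal sketch: simulate the Track 1 degradation by bicubic x8
# down-sampling of a high-resolution thermal image with OpenCV.
# File names are placeholders, not official dataset paths.
import cv2

hr = cv2.imread("thermal_hr.png", cv2.IMREAD_UNCHANGED)  # HR thermal frame
h, w = hr.shape[:2]
lr = cv2.resize(hr, (w // 8, h // 8), interpolation=cv2.INTER_CUBIC)
cv2.imwrite("thermal_lr_x8.png", lr)
```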

Track 2 consists of two evaluation tasks on the above-mentioned dataset: the first requires generating an x8 super-resolution thermal image, the second an x16 super-resolution thermal image. In both cases, the architectures proposed in this track must use the provided high-resolution visible image as guidance for enhancing the low-resolution thermal image.
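
To make the guidance idea concrete, one deliberately naive baseline is to bicubic-upsample the thermal image and then refine it with a guided filter steered by the visible image. The sketch below assumes opencv-contrib-python (for cv2.ximgproc); file names and filter parameters are illustrative, and this is not an official baseline:

```python
# Naive guided-SR sketch (illustration only): bicubic-upsample the LR
# thermal image to the visible image's resolution, then apply a guided
# filter that uses the HR visible image as guidance.
# Requires opencv-contrib-python; file names are placeholders.
import cv2

lr_thermal = cv2.imread("thermal_lr.png", cv2.IMREAD_GRAYSCALE)
hr_visible = cv2.imread("visible_hr.png", cv2.IMREAD_GRAYSCALE)

h, w = hr_visible.shape[:2]
up = cv2.resize(lr_thermal, (w, h), interpolation=cv2.INTER_CUBIC)  # x8 or x16
sr = cv2.ximgproc.guidedFilter(guide=hr_visible, src=up, radius=8, eps=1e2)
cv2.imwrite("thermal_sr_guided.png", sr)
```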

For further details and access to the dataset, please refer to the CodaLab pages:

TRACK 1

TRACK 2

4th Multi-modal Aerial View Imagery Challenge: Classification (MAVIC-C)

Electro-optical (EO) sensors, which capture images in the visible spectrum (e.g., RGB and grayscale images), have been the most prevalent in computer vision research. However, other sensors, such as synthetic aperture radar (SAR), reconstruct images from radar signals and can complement EO sensors when the latter fail to capture significant information (e.g., under adverse weather or in the absence of visible light).

An ideal automated target recognition system would draw on multi-sensor information to compensate for the shortcomings of each individual sensor platform. However, it is currently unclear if and how using EO and SAR data together can improve the performance of automatic target recognition (ATR) systems. The motivation for this challenge is therefore to understand whether, and how, data from one modality can improve the learning process for the other modality, and vice versa. Ideas from domain adaptation, transfer learning, or fusion are welcome.

In addition to target recognition, this challenge focuses on accuracy and out-of-distribution detection. A robust target recognition system would provide not only a predicted label but also a confidence score for the target; a low score would indicate an out-of-distribution sample.
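
As one concrete example of such a confidence score, the common maximum-softmax-probability (MSP) baseline thresholds the top class probability. The sketch below is illustrative only; the threshold value is an assumption, not part of the challenge protocol:

```python
# Sketch of a maximum-softmax-probability (MSP) confidence score for
# out-of-distribution detection. The 0.5 threshold is an assumption.
import numpy as np

def msp_score(logits: np.ndarray) -> float:
    """Max softmax probability for one sample's class logits."""
    z = logits - logits.max()            # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return float(probs.max())

logits = np.array([2.1, 0.3, -1.0, 0.5])  # toy 4-class logits
score = msp_score(logits)
print(f"confidence={score:.3f}, flagged_ood={score < 0.5}")
```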

More details and the dataset are available on the CodaLab page:

LINK

3rd Multi-modal Aerial View Imagery Challenge: Translation (MAVIC-T)

This challenge iterates on the sensor translation challenge introduced at PBVS 2023. Sensor translation algorithms allow for dataset augmentation and enable the fusion of information from multiple sensors, and multi-modal sensor translation and data generation have wide-ranging applications. We introduce a custom, multi-modal paired image dataset consisting of Electro-optical (EO)/Synthetic Aperture Radar (SAR) image pairs and RGB-IR image pairs. The motivation for this challenge is to advance state-of-the-art techniques in high-fidelity, conditioned data generation. The competition challenges participants to design general methods for translating aligned images across multiple modalities.

More details and the dataset are available on the CodaLab page:

LINK

1st Thermal Pedestrian Multiple Object Tracking Challenge (TP-MOT)

The Thermal MOT Challenge is dedicated to advancing object tracking research in thermal imaging, a critical modality for scenarios where visible-light sensors may fail, such as low-light conditions, nighttime, and adverse weather. Unlike RGB-based datasets, which dominate MOT research, thermal imaging provides unique advantages by capturing long-wavelength infrared (LWIR) data, making it an essential tool for robust tracking in challenging environments.

This challenge introduces the Thermal MOT Dataset, the first large-scale thermal imaging dataset annotated specifically for multiple object tracking. The dataset comprises 30 sequences (9000 frames) collected at five urban intersections using a FLIR ADK thermal sensor. These sequences include diverse scenes and object types in public spaces, providing a comprehensive benchmark for thermal MOT research.

Participants are invited to develop innovative algorithms that leverage thermal data to improve tracking accuracy and robustness. The challenge emphasizes single-modality tracking in the thermal domain and encourages approaches that address the unique characteristics of thermal imagery, such as noise, resolution, and contrast.

Dataset download link: Sharepoint

Rules

  • Solutions must adhere to the tracking-by-detection paradigm in a two-stage approach: an object detection model predicts boxes, and a separate model/algorithm associates boxes into tracks (a minimal association sketch follows this list).
  • For a fair comparison of MOT algorithms, the detector must be either YOLOv5s or YOLOv8s.
  • Ground truth annotations for the training set are provided. Models must not be trained on images from the validation set.
  • The same tracking parameters must be used for all 6 test sequences; no parameters may be tuned for any particular sequence.
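
To make the two-stage paradigm concrete, the following minimal sketch associates per-frame detections (e.g., from YOLOv5s or YOLOv8s) into tracks by greedy IoU matching. It is an illustration of the rule above, not a reference implementation; real trackers typically add motion models, and the IoU threshold is an assumption:

```python
# Minimal tracking-by-detection sketch: greedy IoU association of one
# frame's detections into tracks. Boxes are (x, y, w, h); iou_thr is assumed.
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ix = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def associate(tracks, detections, iou_thr=0.3):
    """Extend {track_id: last_box} with one frame's detections.
    Returns {detection_index: track_id} assignments for the frame."""
    next_id = max(tracks, default=0) + 1
    assigned = {}
    for i, det in enumerate(detections):
        best_id, best_iou = None, iou_thr
        for tid, box in tracks.items():
            if tid in assigned.values():
                continue                      # one detection per track
            o = iou(box, det)
            if o > best_iou:
                best_id, best_iou = tid, o
        if best_id is None:                   # unmatched -> start a new track
            best_id, next_id = next_id, next_id + 1
        assigned[i] = best_id
        tracks[best_id] = det                 # update the track's last box
    return assigned
```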

Submission Guidelines

  • You must submit one zip file containing 6 txt files, one for each val sequence. The filenames in the zip file must be: seq17_thermal.txt, seq22_thermal.txt, seq2_thermal.txt, seq47_thermal.txt, seq54_thermal.txt, seq66_thermal.txt
  • Every submission must include a link to a GitHub repository where your source code and trained models are stored. This will be used by the challenge administrators to verify your results. If the repository is private, please add "wassimea" as a collaborator so we can verify your results.
  • The txt files must contain the tracking results in the MOT format (see the sketch after this list).
  • This script shows an example of how to generate the results in the correct format: https://github.com/wassimea/thermalMOT/blob/main/utils/infer_results.py
  • If you aim to publish your work using this dataset, kindly consider citing the paper that introduced it (https://arxiv.org/abs/2411.12943):

    @article{ahmar2024enhancing,
      title={Enhancing Thermal MOT: A Novel Box Association Method Leveraging Thermal Identity and Motion Similarity},
      author={Ahmar, Wassim El and Kolhatkar, Dhanvin and Nowruzi, Farzan and Laganiere, Robert},
      journal={arXiv preprint arXiv:2411.12943},
      year={2024}
    }
  • For any questions, please contact Dr. Wassim El Ahmar at welahmar@uottawa.ca
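
For reference, each line of a result file in the MOT format lists frame, id, bb_left, bb_top, bb_width, bb_height, conf, followed by -1,-1,-1 for the unused 3D fields; the linked infer_results.py remains the authoritative example. A minimal writer sketch:

```python
# Sketch: write one sequence's tracking results in the MOT text format:
# frame,id,bb_left,bb_top,bb_width,bb_height,conf,-1,-1,-1
def write_mot(path, rows):
    """rows: iterable of (frame, track_id, x, y, w, h, conf) tuples."""
    with open(path, "w") as f:
        for frame, tid, x, y, w, h, conf in rows:
            f.write(f"{frame},{tid},{x:.2f},{y:.2f},{w:.2f},{h:.2f},{conf:.2f},-1,-1,-1\n")

write_mot("seq2_thermal.txt", [(1, 1, 100.0, 50.0, 30.0, 60.0, 0.9)])
```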

Submissions will take place on the EvalAI platform; the link will be posted soon.

We strongly encourage you to submit a paper introducing your architecture and results to our workshop, where it will be considered for publication in the proceedings of the CVPR 2025 Workshops.