Announcement: this year, THREE challenges are being organized in the framework of the PBVS'2024 workshop.

5th Thermal Image Super-Resolution Challenge (TISR)

The fifth Thermal Image Super-Resolution challenge introduces a recently acquired benchmark dataset captured with cross-spectral sensors: a visible camera (Basler) and a thermal camera (TAU2). The challenge consists of two tracks.

Track 1 features a single evaluation task: participants must generate an x8 super-resolution thermal image from the given low-resolution thermal images. The inputs are noiseless images that have been bicubically down-sampled by a factor of 8.
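As a rough illustration of how such inputs could be produced (an assumption on our part; the organizers' exact pipeline is not specified here), the sketch below bicubically down-samples a thermal frame by a factor of 8 using Pillow. The 640x512 frame size mirrors the TAU2 sensor resolution and is used purely for the demo:

```python
from PIL import Image

def bicubic_x8(hr: Image.Image) -> Image.Image:
    """Down-sample a high-resolution thermal image by 8 with bicubic interpolation."""
    w, h = hr.size
    return hr.resize((w // 8, h // 8), Image.BICUBIC)

# Demo on a synthetic single-channel ("L" mode) 640x512 frame.
hr = Image.new("L", (640, 512), color=128)
lr = bicubic_x8(hr)  # 80x64 low-resolution input
```

Note that participants receive the low-resolution images directly; this sketch only clarifies what "bicubic down-sampled by 8" means.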

Track 2 consists of two evaluation tasks on the newly acquired dataset: the first requires generating an x8 super-resolution thermal image, and the second an x16 super-resolution thermal image. In both cases, the provided high-resolution visible image must be used as guidance for enhancing the low-resolution thermal image; architectures proposed in this track are required to use the visible images as guidance.
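One common way to feed visible-image guidance to a network (an illustrative assumption, not the challenge's prescribed method) is to upsample the low-resolution thermal image to the visible resolution and concatenate the two along the channel axis. The helper below sketches this with NumPy, using nearest-neighbor repetition as a stand-in for bicubic upsampling:

```python
import numpy as np

def build_guided_input(lr_thermal: np.ndarray,
                       hr_visible: np.ndarray,
                       scale: int = 8) -> np.ndarray:
    """Stack an upsampled thermal image with a visible (H, W, 3) image, channels-first."""
    # Nearest-neighbor upsampling via repeat (placeholder for bicubic).
    up = lr_thermal.repeat(scale, axis=0).repeat(scale, axis=1)
    # Resulting tensor layout: [thermal, R, G, B] with shape (4, H, W).
    return np.stack([up, *np.moveaxis(hr_visible, -1, 0)], axis=0)

lr = np.zeros((64, 80), dtype=np.float32)        # low-res thermal
vis = np.zeros((512, 640, 3), dtype=np.float32)  # high-res visible
x = build_guided_input(lr, vis, scale=8)         # shape (4, 512, 640)
```

Early fusion of this kind is only one option; attention-based or feature-level guidance schemes are equally admissible as long as the visible image is used.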

For further details and access to the dataset, please refer to the CodaLab page:

TRACK 1

TRACK 2

Multi-modal Aerial View Imagery Challenge: Classification (MAVIC-C)

Electro-optical (EO) sensors, which capture images in the visible spectrum (e.g., RGB and grayscale images), have been the most prevalent in computer vision research. However, other sensors, such as synthetic aperture radar (SAR), can reconstruct images from radar signals and in some cases complement EO sensors when the latter fail to capture significant information (e.g., due to adverse weather conditions or lack of visible light).

An ideal automated target recognition system would be based on multi-sensor information to compensate for the shortcomings of either sensor platform individually. However, it is currently unclear if and how using EO and SAR data together can improve the performance of automatic target recognition (ATR) systems. Thus, the motivation for this challenge is to understand whether and how data from one modality can improve the learning process for the other modality, and vice versa. Ideas from domain adaptation, transfer learning, or fusion are welcome.

In addition to target recognition, this challenge evaluates accuracy and out-of-distribution detection. A robust target recognition system would provide not only a target label but also a confidence score for that label; a low score would correspond to an out-of-distribution sample.
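A minimal baseline for this label-plus-confidence output (a sketch of one standard approach, maximum-softmax-probability thresholding, not the challenge's required method) looks like this; the 0.5 threshold is an arbitrary illustrative choice:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def classify_with_ood(logits, threshold=0.5):
    """Return (predicted_label, confidence); label is None when the
    maximum softmax probability falls below the OOD threshold."""
    probs = softmax(logits)
    conf = max(probs)
    if conf < threshold:
        return None, conf  # treated as out-of-distribution
    return probs.index(conf), conf
```

For example, a sharply peaked logit vector yields a confident in-distribution prediction, while uniform logits yield confidence 1/3 over three classes and are flagged as out-of-distribution.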

More details and the dataset are available on the CodaLab page:

LINK

Multi-modal Aerial View Imagery Challenge: Translation (MAVIC-T)

We iterate on the sensor translation challenge introduced at PBVS 2023. Sensor translation algorithms allow for dataset augmentation and enable the fusion of information from multiple sensors; multi-modal sensor translation and data generation have wide-ranging applications. We introduce a custom, multi-modal paired image dataset consisting of electro-optical (EO) and synthetic aperture radar (SAR) image pairs, and RGB-IR image pairs. The motivation for this challenge is to advance state-of-the-art techniques in high-fidelity, conditioned data generation. The competition challenges participants to design general methods for translating aligned images across multiple modalities.

More details and the dataset are available on the CodaLab page:

LINK