Announcement: this year, THREE challenges are being organized in the framework of the PBVS'2022 workshop. (Details to be updated in the next few weeks.)
The third thermal image super-resolution (TISR) challenge consists of obtaining super-resolution images at x2 and x4 scales from the given thermal images.
The third edition of this challenge will follow the same setup as the second edition (i.e., only the mid- and high-resolution images from the dataset used in the first edition are considered). Ground truth images for the x4 scale correspond to the provided high-resolution images; in other words, each team should down-sample the given images by x4 and use these down-sampled images, with added noise, as inputs to develop their solutions. The x2 super-resolution solution should be developed using as input the given mid-resolution images acquired with the Axis Q2901-E camera and as output the corresponding high-resolution images of the same scene, acquired with the FLIR FC-632O camera. Strictly speaking, the proposed x2-scale solution should tackle both problems, i.e., generating the super-resolution of the images acquired with the Axis Q2901-E camera and mapping images from one domain (Axis Q2901-E camera) to another domain (FLIR FC-632O camera).
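The x4 data-preparation step described above (down-sample the provided high-resolution images by x4 and add noise) can be sketched as follows. This is a minimal illustration, not part of the official toolkit: block averaging stands in for a proper resampling kernel, and the additive Gaussian noise level `sigma` is an assumption, since the challenge text does not fix the degradation model.

```python
import numpy as np

def make_x4_input(hr, sigma=10.0, seed=0):
    """Create a noisy x4 down-sampled input from a high-resolution
    thermal image given as a 2-D uint8 array.

    Block averaging approximates the down-sampling; the Gaussian
    noise level sigma is an assumption (the challenge leaves the
    degradation model open).
    """
    h, w = hr.shape
    h4, w4 = h - h % 4, w - w % 4          # crop so both dimensions divide by 4
    blocks = hr[:h4, :w4].astype(np.float32).reshape(h4 // 4, 4, w4 // 4, 4)
    lr = blocks.mean(axis=(1, 3))          # x4 down-sampling by 4x4 block mean
    rng = np.random.default_rng(seed)
    noisy = lr + rng.normal(0.0, sigma, lr.shape)  # additive Gaussian noise
    return np.clip(noisy, 0, 255).astype(np.uint8)
```

Teams would then train their x4 model to recover the original high-resolution image from such degraded inputs, with the provided images serving as ground truth.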
More details and the dataset are available on the CodaLab page: LINK
Electro-optical (EO) sensors, which capture images in the visible spectrum such as RGB and grayscale images, have been most prevalent in the computer vision research area. However, other sensors such as synthetic aperture radar (SAR) can reproduce images from radar signals that in some cases complement EO sensors when those sensors fail to capture significant information (e.g., adverse weather conditions, no visible light, etc.).
An ideal automated target recognition system would be based on multi-sensor information to compensate for the shortcomings of either sensor-based platform individually. However, it is currently unclear if and how using EO and SAR data together can improve the performance of automatic target recognition (ATR) systems. Thus, the motivation for this challenge is to understand if and how data from one modality can improve the learning process for the other modality, and vice versa. Ideas from domain adaptation, transfer learning, or fusion are welcome to solve this problem.
Jointly with the PBVS workshop, we have a PBVS challenge on Multi-modal Aerial View Imagery Classification, that is, the task of predicting the class label of a low-resolution aerial image based on a set of prior examples of images and their class labels. Two tracks of data are made available: EO + SAR, and SAR.
The dataset is hosted on CodaLab but will be updated with cleaner data.
More details and the dataset are available on the CodaLab page:
Semi-supervised learning has developed into a highly researched problem, as it minimizes labeling costs while still achieving performance levels comparable to a fully labeled dataset. However, most semi-supervised learning algorithms are based on models pre-trained on ImageNet and are thus challenging to port to other image domains, especially those with more than three bands. In this competition, we present a newly acquired dataset, collected from a university rooftop with a hyperspectral camera, for object detection. We present a semi-supervised learning challenge with 10% labeled data and three moving-object categories: vehicle, bus, and bike. Additional details, with links to the dataset and the CodaLab server, are provided HERE