
Theses

We offer Bachelor and Master thesis opportunities in the field of processing and analysis of remote sensing images acquired by satellite systems for Earth observation.

Prerequisites: You should have knowledge and experience in machine learning, deep learning, and image analysis. Your programming skills should be excellent (e.g., in Python).

Below you find the currently available thesis topics. You can also propose a topic of your own interest within the research field of our group. If you are interested, please contact the respective person with the following information: 1) the topic you are interested in; and 2) your knowledge and prior experience relevant to that topic.

Learning Attention-Based Deep Models for Change Detection and Classification in Remote Sensing


Change detection and classification is a challenging task in remote sensing that aims to identify areas of change between two images acquired at different times over the same geographical area. Robust and accurate change detection methods are required in fields such as climate change research, environmental monitoring, and emergency management. To detect and classify different kinds of changes, deep learning models have in recent years been widely used to extract significant spectral-spatial-temporal features from changed areas while suppressing features from unchanged areas [1]. Attention mechanisms implicitly learn to suppress irrelevant regions in images while highlighting salient features. In computer vision, attention mechanisms have been applied to a variety of problems, including image classification, segmentation, action recognition, image captioning, and visual question answering [2]. This thesis is devoted to: 1) studying the literature on attention-based mechanisms for change detection and classification; 2) exploring, analyzing, and comparing them; and 3) developing a new attention-based approach to improve performance in change detection/classification tasks.
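
To make the gating idea concrete, the following is a minimal PyTorch sketch of an additive attention gate in the spirit of [2], applied here to bi-temporal features: the absolute feature difference between the two dates serves as the gating signal that re-weights the pre-change features before a pixel-wise change classifier. All layer widths, band counts, and the use of the feature difference as the gate are illustrative assumptions, not a prescribed design for the thesis.

import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention: a gating signal g re-weights features x,
    suppressing irrelevant spatial locations."""
    def __init__(self, in_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(in_ch, inter_ch, kernel_size=1)
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, x, g):
        att = torch.sigmoid(self.psi(torch.relu(self.theta(x) + self.phi(g))))
        return x * att  # attention map in [0, 1], broadcast over channels

class BiTemporalChangeNet(nn.Module):
    """Shared encoder for both acquisition dates; the temporal feature
    difference gates the features before a per-pixel change classifier."""
    def __init__(self, in_bands=4, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_bands, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.gate = AttentionGate(feat, feat, feat // 2)
        self.classifier = nn.Conv2d(feat, 1, kernel_size=1)

    def forward(self, img_t1, img_t2):
        f1, f2 = self.encoder(img_t1), self.encoder(img_t2)
        diff = torch.abs(f1 - f2)  # temporal difference as gating signal
        return self.classifier(self.gate(f1, diff))  # (B, 1, H, W) logits

# Example: two 4-band 64x64 patches of the same area at different dates.
model = BiTemporalChangeNet()
logits = model(torch.randn(2, 4, 64, 64), torch.randn(2, 4, 64, 64))
print(logits.shape)  # torch.Size([2, 1, 64, 64])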

Contact: Dr. Mahdyar Ravanbakhsh, Prof. Dr. Begüm Demir

References:

[1] W. Wiratama and D. Sim, "Fusion network for change detection of high-resolution panchromatic imagery," Applied Sciences, 2019.
[2] J. Schlemper, O. Oktay, M. Schaap, M. Heinrich, B. Kainz, B. Glocker, and D. Rueckert, "Attention gated networks: Learning to leverage salient regions in medical images," Medical Image Analysis, 2019.

Learning Deep Models with Minimal Supervision for Change Detection in Remote Sensing

Change detection in remote sensing refers to the task of identifying the areas that undergo a significant change between two images captured at different times over the same geographical area [1]. Deep neural networks achieve state-of-the-art results for many tasks; however, they often demand large amounts of labeled data for training. In the remote sensing domain in particular, the acquisition of annotated data is often highly expensive and requires expertise. Hence, reducing the annotation effort while remaining competitive with fully supervised approaches is a challenging task. A possible solution is to train deep generative models [2] with a minimal amount of supervised data from the unchanged areas. Such a model would be able to approximate the distribution of "unchanged" data and detect the "changed" samples as out-of-distribution. In this regard, the trained model should be able to estimate the uncertainty of each prediction (change/no change) for unlabeled data in an automatic fashion. However, this approach comes with several challenges. First, a strategy is needed for selecting the best subset of training samples. Second, in the case of high intra-domain diversity, learning the change detection criterion becomes more crucial, so the model should learn an appropriate criterion to detect out-of-distribution samples [3]. This thesis is devoted to identifying an appropriate strategy for selecting the right training samples, and then developing and training such a deep model for change detection that requires less annotated data.
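
As a rough illustration of the generative idea, the sketch below trains a convolutional autoencoder on stacked bi-temporal patches from unchanged areas only and flags test pairs with high reconstruction error as out-of-distribution, i.e., changed. The architecture, band counts, and naive threshold are placeholder assumptions; the thesis itself would investigate stronger generative models [2] and calibrated detection criteria [3].

import torch
import torch.nn as nn

class UnchangedAutoencoder(nn.Module):
    """Trained only on 'unchanged' pairs, so it learns to reconstruct the
    unchanged data distribution well and changed pairs poorly."""
    def __init__(self, in_bands=8):  # t1 and t2 stacked: 2 x 4 bands assumed
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_bands, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, in_bands, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def change_score(model, pairs):
    """Per-sample mean reconstruction error as an out-of-distribution score."""
    with torch.no_grad():
        return ((pairs - model(pairs)) ** 2).mean(dim=(1, 2, 3))

model = UnchangedAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(10):                 # toy loop; real code iterates a DataLoader
    x = torch.randn(16, 8, 64, 64)  # placeholder for unchanged samples
    opt.zero_grad()
    loss = loss_fn(model(x), x)
    loss.backward()
    opt.step()

scores = change_score(model, torch.randn(4, 8, 64, 64))
changed = scores > scores.mean()  # a calibrated threshold is needed in practice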

Contact: Dr. Mahdyar Ravanbakhsh, Prof. Dr. Begüm Demir

References:

[1] W. Wiratama and D. Sim, "Fusion network for change detection of high-resolution panchromatic imagery," Applied Sciences, 2019.
[2] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, "Image-to-image translation with conditional adversarial networks," arXiv preprint, 2017.
[3] K. Lee, H. Lee, K. Lee, and J. Shin, "Training confidence-calibrated classifiers for detecting out-of-distribution samples," ICLR, 2018.

Multisource Multi-Label Remote Sensing Image Scene Classification

The increased number of recent Earth observation satellite missions has led to a significant growth of remote sensing (RS) image archives. Accordingly, associating one of the predefined categories with the most significant content of an RS image scene using deep neural network models, which is usually achieved by direct supervised classification of each image in the archive, has received increasing attention in RS. However, assigning different low-level land-cover class labels (i.e., multi-labels) to an RS image is not well studied in the literature. Since this is much more complex than single-label scene classification, the joint use of different image sources [1], in order to both model the co-occurrence of different land-cover classes and leverage the complementary spectral, spatial, and structural information embedded in the different sources, is crucial. This study requires developing a unified deep neural network framework that simultaneously i) learns the multi-label classification rules by accurately characterizing the information from the different sources; and ii) overcomes the possible problems of using different sources together, such as the alignment and registration of RS images from different sources. For this study, BigEarthNet [2], a new large-scale Sentinel-2 benchmark archive, and EU-DEM [3], a digital surface model of the whole of Europe, will be used.
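
One possible baseline for such a framework, sketched below in PyTorch, uses two CNN branches, one for the Sentinel-2 bands and one for the EU-DEM elevation channel, whose pooled features are concatenated before a multi-label head trained with binary cross-entropy. Channel counts and input sizes are illustrative assumptions; the 43 outputs follow the land-cover class nomenclature of the original BigEarthNet release.

import torch
import torch.nn as nn

def conv_branch(in_ch, out_ch):
    """Small CNN that maps one image source to a pooled feature vector."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class MultisourceMultiLabelNet(nn.Module):
    def __init__(self, s2_bands=10, num_labels=43):
        super().__init__()
        self.s2_branch = conv_branch(s2_bands, 64)  # spectral/spatial source
        self.dem_branch = conv_branch(1, 16)        # elevation source
        self.head = nn.Linear(64 + 16, num_labels)  # one logit per class

    def forward(self, s2, dem):
        fused = torch.cat([self.s2_branch(s2), self.dem_branch(dem)], dim=1)
        return self.head(fused)                     # multi-label logits

model = MultisourceMultiLabelNet()
s2 = torch.randn(8, 10, 120, 120)   # Sentinel-2 patch (10 bands assumed)
dem = torch.randn(8, 1, 120, 120)   # co-registered EU-DEM patch
labels = torch.randint(0, 2, (8, 43)).float()  # multi-hot land-cover labels
loss = nn.BCEWithLogitsLoss()(model(s2, dem), labels)

Note that the sketch assumes the two sources are already co-registered; handling misalignment between sources is precisely one of the open problems the thesis addresses.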

Contact: Gencer Sümbül, Prof. Dr. Begüm Demir

References:

[1] X. Xu, W. Li, Q. Ran, Q. Du, L. Gao and B. Zhang, "Multisource Remote Sensing Data Classification Based on Convolutional Neural Network," in IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 2, pp. 937-949, Feb. 2018.
[2] G. Sumbul, M. Charfuelan, B. Demir, V. Markl, "BigEarthNet: A Large-Scale Benchmark Archive for Remote Sensing Image Understanding," IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2019.
[3] land.copernicus.eu/imagery-in-situ/eu-dem

Self-Supervised Feature Learning for Content Based Remote Sensing Image Retrieval

Supervised training of deep convolutional neural networks for feature learning in the content-based remote sensing (RS) image retrieval task requires massive amounts of manually labeled data in order to obtain high retrieval accuracy. However, this is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without manual annotation effort, is of crucial importance for successfully leveraging the vast amount of freely available RS images. Recently, a novel paradigm for unsupervised learning called self-supervised learning has been proposed in the computer vision literature [1], [2], [3], [4]. The main idea is to exploit different labelings (e.g., the relative spatial co-location of image patches, different rotations of image patches, etc.) that are available alongside or within images, and to use them as intrinsic reward signals to learn general-purpose image features. The features obtained with existing self-supervised approaches have been successfully transferred to classification and detection tasks in the computer vision literature, and their performance is encouraging compared to fully supervised training. However, self-supervision for content-based RS image retrieval has not been investigated yet. This study requires 1) defining possible labelings that can be extracted from RS images alone, 2) designing a suitable neural network and training procedure for feature extraction with self-supervised learning, and 3) finding a way to benefit from the extracted image features for content-based retrieval. Experiments for this study will be conducted on BigEarthNet, a new large-scale Sentinel-2 benchmark archive [5].
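
To make the pretext-task idea concrete, the sketch below implements rotation prediction (cf. [1]) in PyTorch: each unlabeled patch is rotated by a multiple of 90 degrees and the network is trained to predict which rotation was applied, so the "label" comes for free. The backbone and patch size are illustrative assumptions; after pretext training, the feature extractor can produce descriptors for retrieval, e.g., ranked by cosine similarity.

import torch
import torch.nn as nn

class RotationPretextNet(nn.Module):
    def __init__(self, in_bands=3):
        super().__init__()
        self.features = nn.Sequential(  # reusable feature extractor
            nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rotation_head = nn.Linear(64, 4)  # 0, 90, 180, 270 degrees

    def forward(self, x):
        return self.rotation_head(self.features(x))

def rotation_batch(images):
    """Builds the free supervision: rotate each image by k*90 degrees,
    with k itself as the pretext target."""
    rotated, targets = [], []
    for k in range(4):
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        targets.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(targets)

model = RotationPretextNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = rotation_batch(torch.randn(8, 3, 64, 64))  # unlabeled RS patches
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
opt.step()
# model.features can now embed archive images for content-based retrieval.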

Contact: Gencer Sümbül, Prof. Dr. Begüm Demir

References:

[1] Z. Feng, C. Xu, D. Tao, “Self-Supervised Representation Learning by Rotation Feature Decoupling”, Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[2] A. Kolesnikov, X. Zhai, L. Beyer, “Revisiting Self-Supervised Visual Representation Learning”, Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[3] T. N. Mundhenk, D. Ho, B. Y. Chen, “Improvements to context based self-supervised learning”, Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
[4] C. Doersch, A. Gupta, A. A. Efros, “Unsupervised Visual Representation Learning by Context Prediction”, International Conference on Computer Vision (ICCV), 2015.
[5] G. Sumbul, M. Charfuelan, B. Demir, V. Markl, "BigEarthNet: A Large-Scale Benchmark Archive for Remote Sensing Image Understanding," IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2019.

Continual Learning for Large-Scale Satellite Image Analysis

Traditional machine learning applications tend to operate in a fixed learning environment under the assumption that all training data is available at the time of learning. In the context of Earth observation, however, new data become available at a rapid pace and potentially include new semantic information, e.g., new land-cover classes. Existing models have never seen these data and cannot deal with them, and therefore make nonsensical predictions. A naïve approach would be to retrain a model whenever a sufficient volume of new data is available. However, this is very expensive in time and resources, and the quality of the model on previous samples can no longer be guaranteed. A model may make wrong predictions on data it had previously learned and predicted correctly, which is referred to as catastrophic forgetting. The field of dealing with new data and evolving existing models is known as continual learning. The topic can be divided into continual data and continual models. For the data part, although remote sensing archives are constantly growing, there are restrictions on viable training datasets. At some point, researchers have to decide which of the existing data to keep and which new samples to include for further training. Some existing approaches consider choosing the data elements with the highest uncertainty [1] or the most important ones [2]. The second branch of continual learning, continual models, focuses on incrementally evolving models over time, avoiding retraining from scratch. There are first approaches that fix certain parts of a model to ensure compatibility with former predictions [3], [4]. This study investigates (1) existing methods of continual learning, for data as well as for models, and (2) the development and implementation of new algorithms to push this research further, but also (3) bringing existing ideas to the field of remote sensing and multi-label learning.
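
As a simple illustration of the data-selection question, the following PyTorch sketch ranks newly arrived samples by predictive entropy and keeps only the most uncertain ones for a bounded rehearsal buffer, loosely in the spirit of the uncertainty-based selection in [1]. The model, budget, and selection rule are placeholder assumptions, not an existing continual-learning method.

import torch
import torch.nn as nn
import torch.nn.functional as F

def predictive_entropy(logits):
    """Entropy of the softmax distribution; higher means more uncertain."""
    probs = F.softmax(logits, dim=1)
    return -(probs * torch.log(probs + 1e-12)).sum(dim=1)

def select_for_buffer(model, new_images, budget):
    """Keep the `budget` samples the current model is least certain about,
    so further training focuses on what the model has not yet learned."""
    model.eval()
    with torch.no_grad():
        scores = predictive_entropy(model(new_images))
    keep = torch.topk(scores, k=min(budget, scores.numel())).indices
    return new_images[keep]

# Toy usage with newly arrived patches and a placeholder classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
new_data = torch.randn(100, 3, 32, 32)
buffer = select_for_buffer(model, new_data, budget=20)
print(buffer.shape)  # torch.Size([20, 3, 32, 32])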

Contact: Tristan Kreuziger, Prof. Dr. Begüm Demir

References:

[1] Q. Xie, M. Luong, E. Hovy, Q. Le, "Self-training with Noisy Student improves ImageNet classification," arXiv:1911.04252, 2019.
[2] A. Katharopoulos, F. Fleuret, "Not All Samples Are Created Equal: Deep Learning with Importance Sampling," arXiv:1803.00942, 2018.
[3] R. Aljundi, K. Kelchtermans, T. Tuytelaars, "Task-Free Continual Learning," arXiv:1812.03596, 2018.
[4] D. Wu, Q. Dai, J. Liu, B. Li, W. Wang, "Deep Incremental Hashing Network for Efficient Image Retrieval," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
