
Theses

We offer Bachelor’s and Master’s thesis opportunities in the processing and analysis of remote sensing images acquired by satellite systems for Earth observation.

Below you will find the currently available thesis topics. If you are interested in one of the topics given below, please contact the respective person or Prof. Demir. You can also propose a topic of your interest within the research field of our group.

Dr. Mahdyar Ravanbakhsh

ravanbakhsh@tu-berlin.de

Topic: Learning Attention-Based Deep Models for Change Detection and Classification in Remote Sensing


Change detection and classification is a challenging task in remote sensing that is used to identify areas of change between two images acquired at different times over the same geographical area. Robust and accurate change detection methods are required in different fields such as climate change analysis, environmental monitoring, and emergency management. In order to detect and classify different kinds of changes, deep learning models have in recent years been broadly used to extract significant spectral-spatial-temporal features from changed areas and to suppress features from unchanged areas [1]. Attention mechanisms implicitly learn to suppress irrelevant regions in images while highlighting salient features. In computer vision, attention mechanisms are applied to a variety of problems, including image classification, segmentation, action recognition, image captioning, and visual question answering [2]. This thesis is devoted to: 1) studying the literature on attention-based mechanisms for change detection and classification; 2) exploring, analyzing, and comparing these approaches; and 3) developing a new attention-based approach to improve performance in change detection/classification tasks.
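
To make the attention idea concrete, the following is a minimal sketch of an additive attention gate in PyTorch, loosely in the spirit of the attention-gated networks of [2]. The module and variable names, channel sizes, and the simple bi-temporal feature fusion are illustrative assumptions, not a prescribed design for the thesis.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate: re-weights feature maps so that salient
    (e.g. changed) regions are emphasized and irrelevant regions are
    suppressed (cf. the attention-gated networks in [2])."""

    def __init__(self, in_channels, gating_channels, inter_channels):
        super().__init__()
        self.theta = nn.Conv2d(in_channels, inter_channels, kernel_size=1)
        self.phi = nn.Conv2d(gating_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, x, g):
        # x: feature maps to be gated, g: gating signal
        att = torch.relu(self.theta(x) + self.phi(g))
        att = torch.sigmoid(self.psi(att))   # attention map in [0, 1]
        return x * att                       # down-weight irrelevant regions


# Illustrative use for bi-temporal change detection: the two acquisitions
# are passed through a shared toy encoder, fused, and then gated.
if __name__ == "__main__":
    encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
    gate = AttentionGate(in_channels=32, gating_channels=32, inter_channels=16)
    t1 = torch.randn(1, 3, 64, 64)   # image at time 1
    t2 = torch.randn(1, 3, 64, 64)   # image at time 2
    fused = torch.cat([encoder(t1), encoder(t2)], dim=1)   # (1, 32, 64, 64)
    gated = gate(fused, fused)       # self-gating, for simplicity
    print(gated.shape)
```

In a full change detection network, such a gate would typically sit between the bi-temporal feature encoder and the classification/segmentation head, so that unchanged regions are down-weighted before the final decision.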


Nice to have: Prior knowledge of image processing


Prerequisite: Prior knowledge of deep learning


We also encourage students to propose their own topics. If you are interested in a topic in this area, please provide me with some information about yourself, your interests, your programming skills, and your CV.


References:

[1] Wahyu Wiratama and Donggyu Sim. Fusion network for change detection of high-resolution panchromatic imagery. Applied Sciences, 2019.
[2] Jo Schlemper, Ozan Oktay, Michiel Schaap, Mattias Heinrich, Bernhard Kainz, Ben Glocker, and Daniel Rueckert. Attention gated networks: Learning to leverage salient regions in medical images. Medical Image Analysis, 2019.

Topic: Learning Deep Models with Minimal Supervision for Change Detection in Remote Sensing

Change detection in remote sensing refers to the task of identifying the areas that undergo a significant change between two images captured at different times over the same geographical area [1]. Deep neural networks achieve state-of-the-art results for many tasks; however, they often demand large amounts of labeled data for training. In the remote sensing domain in particular, the acquisition of annotated data is often highly expensive and requires expertise. Hence, reducing the annotation effort while still being competitive with fully supervised approaches is a challenging task. A possible solution is to train deep generative models [2] with a minimal amount of supervised data from the unchanged areas. Such a model would be able to approximate the distribution of "unchanged" data and detect the "changed" samples as out-of-distribution. In this regard, the trained model should be able to estimate the uncertainty of each prediction (change/no change) for unlabeled data in an automatic fashion. However, this approach comes with several challenges. First, a strategy is needed for selecting the best subset of training samples. Second, in the case of high intra-domain diversity, learning the change detection criterion becomes more difficult, so the model should learn an appropriate criterion to detect out-of-distribution samples [3]. This thesis is devoted to identifying an appropriate strategy for selecting the right training samples, and then developing and training such a deep model for change detection that requires less annotated data.
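
As a rough illustration of the minimal-supervision idea, the sketch below trains a small autoencoder only on "unchanged" bi-temporal patch pairs and uses the reconstruction error as an out-of-distribution (i.e., change) score. The architecture, the patch handling, and the error-as-uncertainty proxy are simplifying assumptions; a thesis would likely replace them with a conditional generative model along the lines of [2] and a calibrated confidence criterion as in [3].

```python
import torch
import torch.nn as nn

class PairAutoencoder(nn.Module):
    """Toy generative model trained only on 'unchanged' bi-temporal patch
    pairs; at test time, a high reconstruction error suggests the pair is
    out-of-distribution, i.e. a candidate 'changed' area."""

    def __init__(self, channels=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2 * channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 2 * channels, 4, stride=2, padding=1),
        )

    def forward(self, t1, t2):
        pair = torch.cat([t1, t2], dim=1)       # stack the two acquisitions
        return self.decoder(self.encoder(pair))


def change_score(model, t1, t2):
    # Per-pixel reconstruction error as a (proxy) change / uncertainty map.
    with torch.no_grad():
        recon = model(t1, t2)
        target = torch.cat([t1, t2], dim=1)
        return (recon - target).pow(2).mean(dim=1)   # shape (B, H, W)


# Illustrative usage on random patch pairs of size 64x64.
model = PairAutoencoder()
t1 = torch.randn(2, 3, 64, 64)
t2 = torch.randn(2, 3, 64, 64)
print(change_score(model, t1, t2).shape)
```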


Nice to have: Prior knowledge of image processing


Prerequisite: Prior knowledge of deep learning and Python


We also encourage students to propose their own topics. If you are interested in a topic in this area, please provide me with some information about yourself, your interests, your programming skills, and your CV.


References:

[1] Wahyu Wiratama and Donggyu Sim. Fusion network for change detection of high-resolution panchromatic imagery. Applied Sciences, 2019.
[2] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. arXiv preprint, 2017.
[3] Kimin Lee, Honglak Lee, Kibok Lee, and Jinwoo Shin. Training confidence-calibrated classifiers for detecting out-of-distribution samples. ICLR, 2018.

Gencer Sümbül

gencer.suembuel@tu-berlin.de

Topic: Multisource Multi-Label Remote Sensing Image Scene Classification

The increased number of recent Earth observation satellite missions has led to a significant growth of remote sensing (RS) image archives. Accordingly, associating one of the predefined categories with the most significant content of an RS image scene using deep neural network models, which is usually achieved by direct supervised classification of each image in the archive, has received increasing attention in RS. However, assigning different low-level land-cover class labels (i.e., multi-labels) to an RS image is not well studied in the literature. Since this is much more complex than single-label scene classification, the joint use of different image sources [1], in order to both model the co-occurrence of different land-cover classes and leverage the complementary spectral, spatial, and structural information embedded in the different sources, is crucial. This study requires developing a unified deep neural network framework that simultaneously i) learns the multi-label classification rules by accurately characterizing the information from the different sources; and ii) overcomes the possible problems of using different sources together, such as the alignment and registration of RS images from different sources. For this study, BigEarthNet [2], a new large-scale Sentinel-2 benchmark archive, and the EU-DEM [3], a digital surface model of the whole of Europe, will be used.
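
A minimal sketch of the kind of multisource architecture involved is given below (PyTorch): one branch per source (Sentinel-2 patch and co-registered DEM patch), feature-level fusion, and a sigmoid/binary cross-entropy multi-label head. The branch architectures, band counts, patch sizes, and the number of classes are illustrative assumptions; in particular, the DEM is assumed to have already been registered and resampled to the Sentinel-2 grid.

```python
import torch
import torch.nn as nn

class MultiSourceMultiLabelNet(nn.Module):
    """Two-branch network: one branch for the Sentinel-2 image, one for the
    co-registered DEM patch; the fused features feed a multi-label head
    (one logit per land-cover class)."""

    def __init__(self, num_classes, s2_bands=10, dem_bands=1):
        super().__init__()

        def branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )

        self.s2_branch = branch(s2_bands)
        self.dem_branch = branch(dem_bands)
        self.head = nn.Linear(64 + 64, num_classes)   # one logit per class

    def forward(self, s2, dem):
        feats = torch.cat([self.s2_branch(s2), self.dem_branch(dem)], dim=1)
        return self.head(feats)


# Multi-label training uses a per-class binary cross-entropy loss.
model = MultiSourceMultiLabelNet(num_classes=43)      # class count illustrative
criterion = nn.BCEWithLogitsLoss()
s2 = torch.randn(4, 10, 120, 120)    # Sentinel-2 patch (band count illustrative)
dem = torch.randn(4, 1, 120, 120)    # DEM patch resampled to the same grid
labels = torch.randint(0, 2, (4, 43)).float()
loss = criterion(model(s2, dem), labels)
```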


Nice to have: Prior knowledge of image processing


Prerequisite: Prior knowledge of deep learning


We also encourage students to propose their own topics. If you are interested in a topic in this area, please provide me with some information about yourself, your interests, your programming skills, and your CV.


References:

[1] X. Xu, W. Li, Q. Ran, Q. Du, L. Gao and B. Zhang, "Multisource Remote Sensing Data Classification Based on Convolutional Neural Network," in IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 2, pp. 937-949, Feb. 2018.
[2] G. Sumbul, M. Charfuelan, B. Demir, V. Markl, BigEarthNet: A Large-Scale Benchmark Archive for Remote Sensing Image Understanding, IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2019.
[3] land.copernicus.eu/imagery-in-situ/eu-dem

Topic: Self-Supervised Feature Learning for Content Based Remote Sensing Image Retrieval

Supervised training of deep convolutional neural networks for feature learning in the content-based remote sensing (RS) image retrieval task requires massive amounts of manually labeled data in order to obtain high retrieval accuracy. However, this is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully leverage the vast amount of freely available RS images. Recently, a novel paradigm for unsupervised learning called self-supervised learning has been proposed in the computer vision literature [1], [2], [3], [4]. The main idea is to exploit labels (e.g., the relative spatial co-location of image patches, different rotations of image patches, etc.) that are available alongside or within the images themselves, and to use them as intrinsic reward signals to learn general-purpose image features. The features obtained with existing self-supervised approaches have been successfully transferred to classification and detection tasks in the computer vision literature, and their performance is encouraging when compared to fully supervised training. However, self-supervision for content-based RS image retrieval has not been investigated yet. This study requires 1) defining possible labels that can be extracted from RS images alone, 2) creating a suitable neural network and training procedure for feature extraction with self-supervised learning, and 3) finding a way to benefit from the extracted image features for content-based retrieval. Experiments for this study will be conducted on BigEarthNet, a new large-scale Sentinel-2 benchmark archive [5].
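
As an illustration of a self-supervised pretext task, the sketch below implements rotation prediction (in the spirit of [1]): unlabeled patches are rotated by 0/90/180/270 degrees and the network is trained to predict which rotation was applied; the trained encoder is then reused as a feature extractor for retrieval. The architecture and the retrieval-by-cosine-similarity note are illustrative assumptions rather than the required approach.

```python
import torch
import torch.nn as nn

class RotationPretextNet(nn.Module):
    """Self-supervised pretext task: predict which of four rotations
    (0/90/180/270 degrees) was applied to an unlabeled image patch.
    The encoder can afterwards be reused as a feature extractor for
    content-based retrieval."""

    def __init__(self, in_channels=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rotation_head = nn.Linear(64, 4)   # 4 rotation classes

    def forward(self, x):
        return self.rotation_head(self.encoder(x))


def rotation_batch(images):
    # Build pseudo-labeled training data: every patch in all four rotations.
    rotated, labels = [], []
    for k in range(4):
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)


# After pretext training, retrieval could rank archive images by the cosine
# similarity between encoder features of the query and of archive patches.
net = RotationPretextNet()
x, y = rotation_batch(torch.randn(8, 3, 64, 64))
loss = nn.CrossEntropyLoss()(net(x), y)
```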


Nice to have: Prior knowledge of image processing


Prerequisite: Prior knowledge of deep learning


We also encourage students to propose their own topics. If you are interested in a topic in this area, please provide me with some information about yourself, your interests, your programming skills, and your CV.


References:

[1] Z. Feng, C. Xu, D. Tao, “Self-Supervised Representation Learning by Rotation Feature Decoupling”, Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[2] A. Kolesnikov, X. Zhai, L. Beyer, “Revisiting Self-Supervised Visual Representation Learning”, Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[3] T. N. Mundhenk, D. Ho, B. Y. Chen, “Improvements to context based self-supervised learning”, Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
[4] C. Doersch, A. Gupta, A. A. Efros, “Unsupervised Visual Representation Learning by Context Prediction”, International Conference on Computer Vision (ICCV), 2015.
[5] G. Sumbul, M. Charfuelan, B. Demir, V. Markl, BigEarthNet: A Large-Scale Benchmark Archive for Remote Sensing Image Understanding, IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2019.
