We are pleased to invite you to attend the seminar:

Friday, October 26, 2018, 10.00-11.00 (hall open 9.45)
CT-Collegezaal C, 23-HG-0.53, Building number 23, Stevinweg 1, Delft

*Please forward this invitation to interested students and employees.

10.00: Prof. Christian Wöhler (TU Dortmund)

Title: 3D object reconstruction and semantic scene segmentation: From "mundane"
scenarios to space applications


An essential step in understanding the content of an image is semantic scene segmentation, providing a subdivision of the image into regions that belong to contiguous objects or scene parts. State-of-the-art methods for semantic segmentation are mostly based on supervised approaches such as deep learning, which require a huge data annotation effort. Results of such a supervised, convolutional-neural-network-based method applied to the remote sensing problem of cloud recognition in satellite images are presented.

Furthermore, an unsupervised approach to semantic scene segmentation is proposed which is based on statistical modeling of the data distribution. This method yields results similar to state-of-the-art supervised techniques and can be extended towards a weakly supervised scheme with strongly limited annotation effort. Results on publicly available real-world data sets are shown.

Another important step in understanding a scene is the reconstruction of 3D information from one or several images. Classical photogrammetric techniques are fringe projection (e.g., in production engineering) or stereo image analysis (e.g., in robotics or remote sensing).

Our proposed framework refines 3D information acquired by such methods using shape from shading, where on large spatial scales the shape-from-shading result is constrained by the photogrammetrically derived surface.

Application results of this framework in the fields of production engineering, involving metallic objects with surfaces unfavorable for photogrammetric techniques, as well as in planetary remote sensing are presented.

In space-based applications, image acquisition devices with tens or even hundreds of wavelength channels (hyperspectral sensors) are commonly utilized. Considering the scenario of remote sensing of the Moon, the previously described techniques will be extended to orbital hyperspectral data, demonstrating their usefulness in the fields of compositional analysis of the lunar surface and in particular the identification of lunar surficial hydroxyl/water.

Short scientific CV of Christian Wöhler

1990-1996: Studies of physics at University of Würzburg, Germany, and Université Joseph Fourier, Grenoble, France; degree: Diplom-Physiker (equivalent to M.Sc.)

1997-1999: Ph.D. student at University of Bonn, Germany, and at DaimlerChrysler (Mercedes-Benz) Research Centre Ulm, Germany

2000: Graduation Dr. rer. nat., University of Bonn, Germany

2000-2010: Senior research scientist at Daimler Research Centre Ulm, department of Environment Perception

2003-2004: Visiting lecturer at Ulm University of Applied Sciences

2005-2010: Visiting lecturer / from 2009 Privatdozent at the Faculty of Technology of Bielefeld University, Germany

2009: Venia legendi (habilitation) in applied computer science, Bielefeld University

since 04/2010: Professor, TU Dortmund University, Faculty of Electrical Engineering and Information Technology, Image Analysis Group

Main fields of research:
- Image analysis in remote sensing
- Image analysis in production engineering and optical metrology
- Machine learning methods for image analysis

10.30: Prof. Dr.-Ing. Christoph Stiller (Karlsruhe Institute of Technology)

Title: Promises and Challenges of Automated Automobiles


Vehicle automation is among the most fascinating trends in automotive electronics and a huge challenge to Machine Vision. We investigate the information needed by automated vehicles and elaborate on the augmentation of sensor information by prior knowledge from digital maps. We show that cognitive and autonomous vehicles with a few close-to-market sensors are feasible when prior information is available from maps.

Vision plays the dominant role in our autonomous vehicle. Methods for automated mapping based on vision are discussed that build the basis for real-time automated decision-making and motion planning.

Extensive experiments are shown in real-world scenarios from our AnnieWAY and BerthaOne experimental vehicles, the first- and second-place winners of the Grand Cooperative Driving Challenge in 2011 and 2016, respectively. We also discuss lessons learned from the autonomous Bertha Benz memorial tour from Mannheim to Pforzheim through a highly populated area in Germany. Last but not least, the future evolution of automated driving and the qualitative changes that vehicle automation induces in our traffic are discussed.


Christoph Stiller studied Electrical Engineering in Aachen, Germany and Trondheim, Norway, and received the Diploma degree and the Dr.-Ing. degree (Ph.D.) from Aachen University of Technology in 1988 and 1994, respectively. He worked with INRS-Telecommunications in Montreal, Canada for a post-doctoral year in 1994/1995. In 1995 he joined the Corporate Research and Advanced Development of Robert Bosch GmbH, Germany. In 2001 he became Chaired Professor and director of the Institute for Measurement and Control Systems at Karlsruhe Institute of Technology, Germany.

Dr. Stiller serves as Senior Editor for the IEEE Transactions on Intelligent Vehicles (2015-ongoing) and as Associate Editor for the IEEE Intelligent Transportation Systems Magazine (2012-ongoing). He served as Editor-in-Chief of the IEEE Intelligent Transportation Systems Magazine (2009-2011). His automated driving team AnnieWAY was a finalist in the DARPA Urban Challenge 2007 and the first- and second-place winner of the Grand Cooperative Driving Challenge in 2011 and 2016, respectively. He has served in several positions for the IEEE Intelligent Transportation Systems Society, including being its President in 2012-2013.

We look forward to seeing you all on October 26th.

Kind regards,

Karin van Tongeren
Secretary Robotics Institute
