Computer Vision Research Group

Coregistration of Space-borne Optical and SAR Satellite Images

In recent years, Synthetic Aperture Radar (SAR) imaging has emerged as a standard tool for Earth observation. Spaceborne SAR satellites provide complex images of the Earth, containing information about the scattering characteristics of the viewed terrain. These images find use in numerous applications across fields such as meteorology, oceanography, agriculture, forestry, hydrology, security, military reconnaissance, cartography and navigation. The information provided by spaceborne SAR sensors can be complemented by the information provided by optical images (such as those offered by Google Earth). The basic aim of this project is to achieve data fusion between optical and SAR imaging sensors.

For this project, the data sources are the TerraSAR-X sensor (providing high-resolution SAR images) and the RapidEye sensors (providing high-resolution multispectral/multi-temporal images). As a fundamental step toward fusion, the images need to be co-registered prior to subsequent feature extraction, classification, etc. Since the images from the two sensors are individually ortho-rectified and map-projected, co-registration may be performed coarsely on the basis of geo-coordinates; however, because the sensors follow different orbits, the geo-referencing applied during the ortho-rectification of one sensor may not be appropriate for the other, limiting the achievable co-registration accuracy. Therefore, the orbit model parameters may need to be re-evaluated for improved accuracy. Even when the orbits and the imaging geometry of the satellites are known well enough to estimate a deformation map, this knowledge is not sufficient to achieve fine registration. The objective of this project is to devise a fully automated solution for sub-pixel registration.
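To illustrate the coarse, geo-coordinate-based step, the following is a minimal sketch (not the project's actual processing chain) of how the offset between two ortho-rectified products can be read off their map geotransforms with GDAL. The file names are hypothetical, and both rasters are assumed to be north-up GeoTIFFs in the same map projection.

    from osgeo import gdal

    # Hypothetical file names; both products are assumed ortho-rectified
    # GeoTIFFs sharing one map projection (e.g. the same UTM zone).
    sar = gdal.Open("terrasar_x_scene.tif")
    opt = gdal.Open("rapideye_scene.tif")

    # Geotransform: (originX, pixelWidth, rot, originY, rot, pixelHeight<0)
    sar_gt = sar.GetGeoTransform()
    opt_gt = opt.GetGeoTransform()

    def geo_to_pixel(gt, x, y):
        """Map geo-coordinates (x, y) to fractional (column, row) for a north-up raster."""
        col = (x - gt[0]) / gt[1]
        row = (y - gt[3]) / gt[5]
        return col, row

    # Where does the optical image's upper-left corner fall inside the SAR image?
    col, row = geo_to_pixel(sar_gt, opt_gt[0], opt_gt[3])
    print(f"Coarse offset of optical origin in SAR pixels: ({col:.1f}, {row:.1f})")

Such a geo-coordinate lookup only provides the coarse alignment discussed above; the residual misregistration caused by differing orbit models is what the fine, sub-pixel stage must remove.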

Automated techniques for image co-registration may broadly be classified as ‘area-based’ or ‘feature-based’. For multi-sensor imagery, area-based (also referred to as ‘correlation-like’ or ‘template-matching’) techniques are generally discouraged, given the differences in the radiometric characteristics of the images as well as the fact that the ‘windowing’ involved is suited only to images that differ by a translation. Feature-based techniques (or hybrids with area-based methods) rely on establishing feature correspondence between the images. The features (or primitives) need to be identified and located in each image and then matched to the corresponding features in the other image(s), yielding a set of ‘control points’ among the images. Subsequently, a spatial transformation model is estimated from those control points, and the final ‘registered image’ is obtained after the requisite image resampling/interpolation.
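As a rough sketch of the last two steps (transformation estimation and resampling), the snippet below fits a similarity transform to a handful of control-point correspondences and warps one image into the geometry of the other using OpenCV. The control-point coordinates and file names are placeholders, and the choice of a similarity model (rather than affine, polynomial or a local deformation model) is an assumption for illustration only.

    import numpy as np
    import cv2

    # Placeholder control points: (column, row) positions of the same landmarks
    # located in each image (manually here, by a feature detector in general).
    pts_sar = np.array([[120, 340], [410, 95], [800, 520], [260, 700]], dtype=np.float32)
    pts_opt = np.array([[132, 355], [425, 110], [812, 540], [275, 718]], dtype=np.float32)

    # Fit a similarity transform (rotation, scale, translation); RANSAC rejects
    # grossly mismatched control points.
    M, inliers = cv2.estimateAffinePartial2D(pts_opt, pts_sar, method=cv2.RANSAC)

    # Resample the optical image into the SAR geometry (hypothetical inputs);
    # bilinear interpolation here, though higher-order kernels are also common.
    optical = cv2.imread("optical.png", cv2.IMREAD_GRAYSCALE)
    sar = cv2.imread("sar.png", cv2.IMREAD_GRAYSCALE)
    registered = cv2.warpAffine(optical, M, (sar.shape[1], sar.shape[0]),
                                flags=cv2.INTER_LINEAR)

The difficult part in the multi-sensor setting is not this estimation step but obtaining reliable correspondences in the first place, which is the focus of the example that follows.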

The following serves to exemplify. A dual-polarization EEC Level 1B TerraSAR-X image of the city of Rome (courtesy Infoterra GmbH) is co-registered with the corresponding Google Earth image. The images are already individually orthorectified. However, for the sake of this cursory experiment, they are co-registered without using the geo-coordinates; instead, a few control points are selected coarsely, followed by a spatial transformation. Figure 1 shows the images; note that they do not share the same scale. Control points are selected as shown in Figure 1c; these are distinct landmarks along the river. Figure 1d shows the registered images overlaid, and Figures 1e and 1f show a zoomed-in location within the registered images. The images appear to be well co-registered, and this example suggests that such feature-based techniques can work, provided well-matched control points are obtained. Features such as coastlines, river edges and bridges over rivers may serve as control points. However, designing a fully automated algorithm for the identification and matching of such control points is not trivial.
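One ingredient of such automation is extracting candidate primitives, for example river or coastline edges, in both modalities before any matching is attempted. The sketch below is only an assumed starting point: it applies speckle-aware smoothing and Canny edge detection to hypothetical, coarsely aligned image chips and writes a colour overlay for visual inspection; robustly matching the resulting edge structures is precisely the non-trivial part noted above.

    import cv2
    import numpy as np

    # Hypothetical, already coarsely co-registered image chips.
    sar = cv2.imread("sar_chip.png", cv2.IMREAD_GRAYSCALE)
    optical = cv2.imread("optical_chip.png", cv2.IMREAD_GRAYSCALE)

    # Speckle in SAR imagery produces many spurious edges, so smooth first.
    sar_smooth = cv2.medianBlur(sar, 5)

    # Edge maps; the thresholds are placeholders and need tuning per scene.
    sar_edges = cv2.Canny(sar_smooth, 50, 150)
    opt_edges = cv2.Canny(optical, 50, 150)

    # Put the two edge maps in different colour channels: consistent river or
    # coastline outlines show up where the channels coincide.
    overlay = np.dstack([sar_edges, opt_edges, np.zeros_like(sar_edges)])
    cv2.imwrite("edge_overlay.png", overlay)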

This project is in collaboration with the ‘Computer Vision and Remote Sensing Laboratory’, Technical University (TU) of Berlin, Germany (please refer to the collaborations section). The team comprises Prof. Dr.-Ing. Olaf Hellwich, Dr.-Ing. Stéphane Guillaso and Dipl.-Ing. David Bornemann from TU-Berlin, and Dr.-Ing. Saquib Sarfraz and Engr. M. Adnan Siddique from COMVis.
