
A brief description of image, view and ground spaces - by Oscar Kramer

image space -- the x,y coordinates of a pixel in the original raw image.

ground space -- the lat, lon, height of that pixel on the earth. Solving for ground coordinates requires an elevation model.

view space -- the u,v map coordinates of the pixel in the output ortho-image.

The sensor model is derived from ossimProjection and provides two critical functions: lineSampleToWorld and worldToLineSample. These two functions (declared in ossimProjection but implemented in each derived sensor model) perform the transformations between image space and ground space.
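Concretely, the two functions look roughly like this in the ossimProjection interface (a simplified sketch; check the actual header for exact signatures and header paths):

{{{
#!cpp
// Simplified sketch of the two pure virtual functions declared in
// ossimProjection; exact qualifiers may differ between OSSIM versions.
// ossimDpt is a double-precision image point; ossimGpt is a ground point.
class ossimProjection
{
public:
   // image (line, sample) -> ground (lat, lon, height)
   virtual void lineSampleToWorld(const ossimDpt& lineSampPt,
                                  ossimGpt& worldPt) const = 0;

   // ground (lat, lon, height) -> image (line, sample)
   virtual void worldToLineSample(const ossimGpt& worldPt,
                                  ossimDpt& lineSampPt) const = 0;
};
}}}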

There are many map projections implemented in OSSIM, all also derived from ossimProjection and therefore providing their own versions of lineSampleToWorld and worldToLineSample. In this case, the line-sample coordinates are the u,v coordinates of the map-projected ortho-image. So you have:

Raw Image (x,y)

|

ossimProjection--ossimSensorModel--ossimMyCameraModel::lineSampleToWorld()

|

Ground space (lat, lon, height)

|

ossimProjection--ossimMapProjection--ossimUtmProjection::worldToLineSample()

|

View space (u,v)
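In code, that forward mapping is just a composition of the two projections. Here is a minimal sketch (the free function imagePointToViewPoint and the parameter names are hypothetical, invented only for illustration; the two ossimProjection calls are the real interface described above):

{{{
#!cpp
// Sketch: raw-image pixel -> ground -> ortho-image pixel.
// Header paths may vary between OSSIM versions.
#include <ossim/base/ossimDpt.h>
#include <ossim/base/ossimGpt.h>
#include <ossim/projection/ossimProjection.h>

void imagePointToViewPoint(const ossimProjection& sensorModel,   // e.g. ossimMyCameraModel
                           const ossimProjection& mapProjection, // e.g. ossimUtmProjection
                           const ossimDpt&        imagePt,       // x,y in the raw image
                           ossimDpt&              viewPt)        // u,v in the ortho-image
{
   ossimGpt groundPt;                                  // lat, lon, height
   sensorModel.lineSampleToWorld(imagePt, groundPt);   // image -> ground
   mapProjection.worldToLineSample(groundPt, viewPt);  // ground -> view
}
}}}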

The above transformations are handled by a class called ossimImageViewProjectionTransform (IVT), which owns the two projections: the input projection (in this case the sensor model) and the output projection (the map projection). The IVT is in turn owned by the ossimImageRenderer, which also contains a resampler. The resampler uses the IVT to determine the mapping from image space to view space, and vice versa, and establishes the correct resampling kernel given this relation. The renderer then "pulls" pixels from the input side, resamples them, and populates a requested tile in the output map space.
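The IVT's base class, ossimImageViewTransform, exposes this composition directly as imageToView and viewToImage. A hedged sketch of wiring it up (constructor and ownership arguments vary between OSSIM versions, so treat the details as illustrative):

{{{
#!cpp
// Sketch: the IVT composes the two projections behind one interface.
// Constructor/ownership arguments vary by OSSIM version; illustrative only.
#include <ossim/base/ossimDpt.h>
#include <ossim/projection/ossimImageViewProjectionTransform.h>

void demoIvt(ossimProjection* sensorModel, ossimProjection* mapProjection)
{
   ossimImageViewProjectionTransform ivt(sensorModel, mapProjection);

   ossimDpt imagePt(512.0, 512.0);
   ossimDpt viewPt;
   ivt.imageToView(imagePt, viewPt);  // raw image x,y -> ortho u,v
   ivt.viewToImage(viewPt, imagePt);  // ortho u,v -> raw image x,y
}
}}}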

It is necessary to go the other way too. When a tile is requested in u,v map coordinates, the IVT must perform the exact inverse of the operation above to find the corresponding pixels in the raw image:

View space (u,v)

|

ossimProjection--ossimMapProjection--ossimUtmProjection::lineSampleToWorld()

|

Ground space (lat, lon, height)

|

ossimProjection--ossimSensorModel--ossimMyCameraModel::worldToLineSample()

|

Raw Image (x,y)
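In code, the inverse is the same composition with the two projections swapped (again just a sketch, with the same includes and hypothetical naming as the forward version above):

{{{
#!cpp
// Sketch: ortho-image pixel -> ground -> raw-image pixel (inverse of above).
void viewPointToImagePoint(const ossimProjection& mapProjection, // e.g. ossimUtmProjection
                           const ossimProjection& sensorModel,   // e.g. ossimMyCameraModel
                           const ossimDpt&        viewPt,        // u,v in the ortho-image
                           ossimDpt&              imagePt)       // x,y in the raw image
{
   ossimGpt groundPt;
   mapProjection.lineSampleToWorld(viewPt, groundPt);  // view -> ground
   sensorModel.worldToLineSample(groundPt, imagePt);   // ground -> image
}
}}}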

Remember, pixels flow in a chain. They are bundled in packets called "tiles", typically 128x128 pixels. The objects that provide tiles are called tile sources, derived from ossimTileSource. A renderer is a special tile source that uses a resampler and an IVT to fill tiles. There are many tile sources, such as remappers, various filters, and image handlers that understand how to read the raw images. The minimum chain needed to perform orthorectification is:

ossimImageHandler

|

ossimImageRenderer (owns the IVT, which contains the ossimSensorModel and ossimMapProjection)

|

Output tile sink (such as ossimTiffWriter)
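A hedged sketch of building that minimal chain (the factory and connection calls follow the OSSIM idiom, but exact signatures and header paths vary between versions; the filenames are placeholders, and in real code you would also hand the renderer its output map projection via the IVT):

{{{
#!cpp
// Sketch: minimal orthorectification chain, handler -> renderer -> writer.
// Header paths and exact signatures vary between OSSIM versions.
#include <ossim/base/ossimFilename.h>
#include <ossim/imaging/ossimImageHandler.h>
#include <ossim/imaging/ossimImageHandlerRegistry.h>
#include <ossim/imaging/ossimImageRenderer.h>
#include <ossim/imaging/ossimTiffWriter.h>

int main()
{
   // The registry picks an image handler that understands the input file.
   ossimImageHandler* handler =
      ossimImageHandlerRegistry::instance()->open(ossimFilename("input.ntf"));
   if (!handler) return 1;

   // The renderer resamples from image space to view space via its IVT.
   ossimImageRenderer* renderer = new ossimImageRenderer();
   renderer->connectMyInputTo(0, handler);

   // The writer "pulls" tiles through the chain and writes the ortho-image.
   ossimTiffWriter* writer = new ossimTiffWriter();
   writer->connectMyInputTo(0, renderer);
   writer->setFilename(ossimFilename("ortho.tif"));
   writer->execute();
   return 0;
}
}}}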

Hopefully this helps orient you a bit. To implement a new sensor, you don't have to concern yourself with anything beyond writing an ossimSensorModel-derived class that can initialize itself somehow (through a keyword list and/or image metadata) and that provides the basic functionality of lineSampleToWorld and worldToLineSample. There's also the issue of registering the model with the sensor model factory... So many details! Study how other sensor models are implemented and copy that.
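As a starting point, the declaration of such a class might look like this (a skeleton only -- ossimMyCameraModel is the hypothetical model from the diagrams above, and the math inside the two transform functions is entirely sensor-specific):

{{{
#!cpp
// Skeleton of a new sensor model; the transform implementations are omitted
// because they depend entirely on the sensor's geometry.
#include <ossim/projection/ossimSensorModel.h>

class ossimMyCameraModel : public ossimSensorModel
{
public:
   // Initialize from a keyword list (and/or image metadata).
   virtual bool loadState(const ossimKeywordlist& kwl,
                          const char* prefix = 0);

   // image (line, sample) -> ground (lat, lon, height)
   virtual void lineSampleToWorld(const ossimDpt& imagePt,
                                  ossimGpt& worldPt) const;

   // ground (lat, lon, height) -> image (line, sample)
   virtual void worldToLineSample(const ossimGpt& worldPt,
                                  ossimDpt& imagePt) const;
};
}}}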
