Opened 14 years ago

Closed 13 years ago

#3514 closed defect (invalid)

Reading large single-strip compressed files requires too much memory

Reported by: warmerdam
Owned by: warmerdam
Priority: normal
Milestone: 1.8.1
Component: GDAL_Raster
Version: svn-trunk
Severity: normal
Keywords: gtiff
Cc: gaopeng

Description

Reading a 288MB all-in-one-strip LZW TIFF file requires the entire compressed data to be loaded into RAM even though it is only being decoded one strip at a time. Can something be done in libtiff to alleviate this?

The file in question is mentioned in #3512.
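
To make the failure mode concrete, here is a minimal sketch using libtiff's scanline API (the file name single_strip_lzw.tif is hypothetical). Even though only one scanline is requested at a time, a file whose RowsPerStrip equals its ImageLength forces the first read to pull the whole compressed strip into RAM:

    /* Minimal sketch of the failure mode; the file name is hypothetical. */
    #include <tiffio.h>
    #include <stdint.h>

    int main(void)
    {
        TIFF *tif = TIFFOpen("single_strip_lzw.tif", "r");
        if (tif == NULL)
            return 1;

        uint32_t height = 0;
        TIFFGetField(tif, TIFFTAG_IMAGELENGTH, &height);

        void *buf = _TIFFmalloc(TIFFScanlineSize(tif));

        for (uint32_t row = 0; row < height; row++)
        {
            /* With RowsPerStrip == ImageLength, the first call here forces
             * libtiff to read the entire ~288MB compressed strip into memory
             * before row 0 is returned. */
            if (TIFFReadScanline(tif, buf, row, 0) < 0)
                break;
        }

        _TIFFfree(buf);
        TIFFClose(tif);
        return 0;
    }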

Attachments (1)

CR115548.doc (213.0 KB) - added by gaopeng 14 years ago.


Change History (7)

comment:1 by warmerdam, 14 years ago

Milestone: 1.8.0
Resolution: fixed
Status: new → closed

I have implemented a new CHUNKY_STRIP_READ_SUPPORT capability in libtiff that reads very large strips in chunks in order to keep memory use modest. The support has been downstreamed into trunk (r19289) and the 1.6-esri branch (r19290).

The implementation was rather intrusive, so testing of assorted TIFF files would be prudent.
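
For reference, the sequential scanline path is the one this change targets. A sketch of reading the image line by line through GDAL's C API (same hypothetical file name; assumes a single 8-bit band); with the chunked strip reading in place, memory use should stay modest instead of growing to the size of the compressed strip:

    #include "gdal.h"
    #include "cpl_conv.h"

    int main(void)
    {
        GDALAllRegister();

        GDALDatasetH hDS = GDALOpen("single_strip_lzw.tif", GA_ReadOnly);
        if (hDS == NULL)
            return 1;

        GDALRasterBandH hBand = GDALGetRasterBand(hDS, 1);
        int nXSize = GDALGetRasterXSize(hDS);
        int nYSize = GDALGetRasterYSize(hDS);

        /* One scanline buffer (assumes a single 8-bit band). */
        GByte *pabyLine = (GByte *) CPLMalloc(nXSize);

        for (int iLine = 0; iLine < nYSize; iLine++)
        {
            /* Each request needs only one decoded scanline; with the chunked
             * strip reading the whole compressed strip no longer has to be
             * resident at once. */
            if (GDALRasterIO(hBand, GF_Read, 0, iLine, nXSize, 1,
                             pabyLine, nXSize, 1, GDT_Byte, 0, 0) != CE_None)
                break;
        }

        CPLFree(pabyLine);
        GDALClose(hDS);
        return 0;
    }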

comment:2 by gaopeng, 14 years ago

Resolution: fixed
Status: closed → reopened

I can now display the image, but it doesn't look correct; see attached.

by gaopeng, 14 years ago

Attachment: CR115548.doc added

comment:3 by warmerdam, 14 years ago

Gao,

I have reviewed the file and discovered it has an odd steepling or dithering pattern to it. I have also found that this pattern is very sensitive to sampling artifacts. That is, if downsampled using nearest neighbour sampling it produces bizarre results at some resolutions.

I have also viewed the file in Microsoft Photo Viewer and at full resolution it produces the same results I'm seeing through libtiff and GDAL. The Microsoft product is not libtiff based to the best of my knowledge, which leads me to conclude GDAL is producing the image properly. Is it possible that different styles of downsampling are done in 9.3 and 10? Perhaps it is left up to the IO library to do the downsampling and they behave differently?

I didn't notice this at all at first in OpenEV because it defaults to averaged downsampling, which does not produce these artifacts.

Can you examine the image zoomed in past 1:1 and see if the old and new results are similar?
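
On the downsampling point, pre-building averaged overviews sidesteps on-the-fly nearest neighbour decimation altogether, which may make comparisons easier. A rough sketch with GDAL's C API, roughly equivalent to running "gdaladdo -r average" on the hypothetical file:

    #include "gdal.h"

    int main(void)
    {
        GDALAllRegister();

        /* Opened read-only, GDAL typically writes the overviews to an
         * external .ovr file rather than into the TIFF itself. */
        GDALDatasetH hDS = GDALOpen("single_strip_lzw.tif", GA_ReadOnly);
        if (hDS == NULL)
            return 1;

        int anLevels[4] = { 2, 4, 8, 16 };

        /* "AVERAGE" resampling smooths the steepling/dithering pattern that
         * makes nearest neighbour decimation look wrong at some zoom levels. */
        GDALBuildOverviews(hDS, "AVERAGE", 4, anLevels, 0, NULL, NULL, NULL);

        GDALClose(hDS);
        return 0;
    }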

comment:4 by warmerdam, 14 years ago

I have spent some time stepping through the code and I don't see any obvious signs of a problem. I will note that the georeferencing is being derived from the four control points in the BLOCKA TRE which should be significantly more precise than the GEOL0 header information.

The only other exotic information in the file is a HISTOA TRE and three PIXnnn TREs. The HISTOA is apparently processing history and presumably has no significance with regard to georeferencing. The PIXnnn TREs are apparently registered to Paragon Imaging Corporation. I wonder if the other viewing package might be using them for something. I did a binary dump and it was not obvious what the contents were.

I'm at a loss!

PS. I reviewed the geotransform computed from the four corner coordinates and the error is around 1/100th of a pixel, which should be negligible.

comment:5 by warmerdam, 14 years ago

Disregard the last comment, it ended up in the wrong ticket.

comment:6 by Even Rouault, 13 years ago

Resolution: invalid
Status: reopened → closed

According to Frank's investigation, there is apparently no issue. Closing.
