Opened 19 years ago

Last modified 19 years ago

#664 closed enhancement (fixed)

HFA Compression

Reported by: sam.gillingham@…
Owned by: warmerdam
Priority: high
Milestone:
Component: GDAL_Raster
Version: unspecified
Severity: minor
Keywords:
Cc:

Description

Hi Frank,
 
Attached is my start on the compression of HFA files. Sorry I have taken so long
to get this to you - I've been busier than expected. Let me know what you think
and what I may need to change. It has been tested against Imagine-generated
files and does the same thing in all the cases I could think of.
 
Unfortunately this is not complete - it only handles compressing the blocks so
that GDAL can read them back in. There are a number of things that still need to
be addressed:
1. The flags on the dataset need to be changed so that Imagine recognises the
file as compressed. I have only changed the BFLG_COMPRESSED flag for all the
blocks. I believe Imagine needs some of the metadata altered as well.
2. There is an assumption made in the HFA driver that all blocks will be the
same size. This will not be the case with compressed blocks. Maybe after each
block is written, or on file close, panBlockStart and panBlockSize must be
updated and the file structure rewritten. These, along with panBlockFlag (which
records whether a block was actually compressed - some blocks will be bigger
once compressed), must be rewritten to the file; my understanding is that
currently this is only written when the file is created. (See the sketch after
this list.)
3. My patch does not make use of the fact that data can be encoded as 1, 2 and
4 bits - 8 bits is the minimum size it writes.
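
As an illustration of the bookkeeping point 2 describes, here is a minimal
sketch. Only the array names (panBlockStart, panBlockSize, panBlockFlag) and the
BFLG_COMPRESSED flag come from the ticket; the BlockTable structure and the flag
values are assumptions for illustration, not the driver's real code:

    #include <cstdint>
    #include <vector>

    // Hypothetical per-band block bookkeeping, mirroring the names used above.
    // In the real driver this information would live in the band object and be
    // flushed back to the file when the blocks are written or the file closed.
    struct BlockTable
    {
        std::vector<int64_t> panBlockStart;  // file offset of each block
        std::vector<int64_t> panBlockSize;   // on-disk size (varies if compressed)
        std::vector<int>     panBlockFlag;   // e.g. BFLG_VALID | BFLG_COMPRESSED
    };

    const int BFLG_VALID      = 0x01;  // assumed flag values, illustration only
    const int BFLG_COMPRESSED = 0x02;

    // After writing block iBlock at nOffset with nBytes bytes, record where it
    // went and whether compression actually paid off for this block.
    void RecordWrittenBlock(BlockTable &tbl, int iBlock,
                            int64_t nOffset, int64_t nBytes, bool bWasCompressed)
    {
        tbl.panBlockStart[iBlock] = nOffset;
        tbl.panBlockSize[iBlock]  = nBytes;
        tbl.panBlockFlag[iBlock]  = BFLG_VALID
                                  | (bWasCompressed ? BFLG_COMPRESSED : 0);
    }

The point is that the table has to be rewritten to the file after the blocks are
written (or on close), since compressed blocks only get their final offset and
size at write time.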

I have also changed UncompressBlock() so that it reads signed and unsigned
32-bit integers. I can supply this as a separate patch if you wish.
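
Not the actual UncompressBlock() change, just a minimal sketch of the idea using
standard integer types: the same four little-endian bytes are widened as signed
or unsigned depending on the layer's data type.

    #include <cstdint>
    #include <cstring>

    // Read one 32-bit pixel value from a little-endian buffer and widen it so
    // that both signed (s32) and unsigned (u32) layers round-trip correctly.
    int64_t Read32(const unsigned char *pabyData, std::size_t nOffset,
                   bool bSigned)
    {
        uint32_t nRaw;
        std::memcpy(&nRaw, pabyData + nOffset, sizeof(nRaw));  // assumes LE host
        return bSigned ? static_cast<int64_t>(static_cast<int32_t>(nRaw))
                       : static_cast<int64_t>(nRaw);
    }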

These things are probably a bit beyond my understanding of the HFA driver and
GDAL. I would appreciate it if you could perhaps look at some of these things
and give me some assistance in fixing them up.

Thanks for your time,
 
Sam.

Attachments (3)

hfacompress.zip (4.7 KB ) - added by warmerdam 19 years ago.
code updates
hfacompress.2.zip (5.3 KB ) - added by sam.gillingham@… 19 years ago.
Added CREATE_COMPRESSED option - Imagine now recognises file as compressed
hfacompress.3.zip (6.7 KB ) - added by sam.gillingham@… 19 years ago.
Now writes a proper (i.e. small!) compressed HFA file (attachment is a .zip)


Change History (6)

comment:1 by warmerdam, 19 years ago

Attached code from the quoted URL.

I'm not sure when I will get to try and incorporate this, Sam, but thanks for
your work!

by warmerdam, 19 years ago

Attachment: hfacompress.zip added

code updates

by sam.gillingham@…, 19 years ago

Attachment: hfacompress.2.zip added

Added CREATE_COMPRESSED option - Imagine now recognises file as compressed

by sam.gillingham@…, 19 years ago

Attachment: hfacompress.3.zip added

Now writes a proper (i.e. small!) compressed HFA file (attachment is a .zip)

comment:2 by sam.gillingham@…, 19 years ago

Hi Frank,

I've done a bit more work on this patch now and it should be getting close to
complete. Of the things that were omitted from the first patch I have fixed the
first two - i.e. Imagine now recognises that the file is compressed, and it
reads in a range of data types OK. I have delayed the allocation of space for
blocks until they are written, so the resulting file is only as big as it needs
to be. The outstanding item is that I don't use the 1-, 2- and 4-bit options for
the compressed blocks - I decided the space saving wasn't worth the work!

BTW the default behaviour is still uncompressed - the file needs to be created
with the CREATE_COMPRESSED option before compression is done.
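
A usage sketch, assuming the option is passed as CREATE_COMPRESSED=YES through
the normal GDAL C++ create path (the option was later renamed COMPRESS, see
comment:3); the value "YES" and the file name are assumptions, not taken from
the patch:

    #include "gdal_priv.h"
    #include "cpl_string.h"

    int main()
    {
        GDALAllRegister();

        GDALDriver *poDriver = GetGDALDriverManager()->GetDriverByName("HFA");
        if (poDriver == nullptr)
            return 1;

        // Request block compression at creation time; the default stays
        // uncompressed if the option is omitted.
        char **papszOptions = nullptr;
        papszOptions = CSLSetNameValue(papszOptions, "CREATE_COMPRESSED", "YES");

        GDALDataset *poDS = poDriver->Create("compressed.img", 512, 512, 1,
                                             GDT_Byte, papszOptions);
        CSLDestroy(papszOptions);
        if (poDS != nullptr)
            GDALClose(poDS);
        return 0;
    }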

The only outstanding problems I can see are:
1. Unwritten blocks have no offset etc., so all blocks must be written - perhaps
we should write zeros or something.
2. We don't maintain a free list of areas in the HFA file, so re-writing a block
isn't as good as it should be. Currently it reuses the existing block allocation
if it is big enough (see the sketch after this list).
3. The decision to use a spill file or not is flawed - we don't know how big the
file is before we write it.
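
A sketch of the allocation policy point 2 describes, assuming the only choices
are reusing the block's existing slot or appending at the end of the file (no
free list is kept):

    #include <cstdint>

    // Hypothetical choice of where to put a re-written compressed block when
    // no free list is kept: reuse the old slot if the new data still fits,
    // otherwise append at the end of the file and orphan the old space.
    int64_t ChooseBlockOffset(int64_t nOldOffset, int64_t nOldSize,
                              int64_t nNewSize, int64_t nFileSize,
                              int64_t *pnNewFileSize)
    {
        if (nOldOffset >= 0 && nNewSize <= nOldSize)
        {
            *pnNewFileSize = nFileSize;          // file does not grow
            return nOldOffset;                   // reuse existing allocation
        }
        *pnNewFileSize = nFileSize + nNewSize;   // old space (if any) is wasted
        return nFileSize;                        // append at end of file
    }

Anything that no longer fits simply orphans its old space, which is why a free
list would eventually be worth having.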

Anyway, we have been using this patch for a while now and it seems OK. Let me
know what you think,

Sam.

comment:3 by warmerdam, 19 years ago

Sam,

I have committed your patches with a few minor changes.  The creation
option is now called COMPRESS instead of CREATE_COMPRESSED.  I also
added documentation on the driver for the creation options (see it with
gdalinfo --format hfa) and moved hfacompress.h into hfa_p.h.
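
Besides gdalinfo --format hfa, the same creation-option documentation can be
read from the driver metadata; a small sketch, assuming the current C++ API
names (GDAL_DMD_CREATIONOPTIONLIST):

    #include <cstdio>
    #include "gdal_priv.h"

    int main()
    {
        GDALAllRegister();

        GDALDriver *poDriver = GetGDALDriverManager()->GetDriverByName("HFA");
        if (poDriver == nullptr)
            return 1;

        // The XML creation-option list is what gdalinfo --format hfa prints.
        const char *pszOptions =
            poDriver->GetMetadataItem(GDAL_DMD_CREATIONOPTIONLIST);
        printf("%s\n", pszOptions ? pszOptions
                                  : "(no creation options documented)");
        return 0;
    }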

In answer to some of your questions:

 1) I think that unwritten blocks ought to have the logValid flag
    set to FALSE in the blockinfo.  If we do this, we would need
    to modify SetRasterBlock() to work properly in this case (even
    for uncompressed files, I would hope).  See the sketch after
    this list.

 2) The fact that we don't use a free list is understandable.  I have
    the same issue in the GeoTIFF driver.

 3) The issue about not knowing whether we should force things through the
    spill file is OK.  If the user explicitly requests compression it will
    be up to them to ensure the file isn't too large.
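
A minimal sketch of the idea in 1), using a hypothetical per-block record rather
than the driver's real blockinfo structure: a block that was never written keeps
logValid = FALSE and readers hand back fill data instead of seeking to a
non-existent offset; the write path would set the flag once it has actually
allocated and written the block, which is the SetRasterBlock() change described
above.

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // Hypothetical stand-in for one entry of the HFA blockinfo table.
    struct BlockInfo
    {
        int64_t nOffset   = -1;     // no space allocated yet
        int64_t nSize     = 0;
        bool    bLogValid = false;  // logValid: block actually written to disk
    };

    // A reader that honours logValid: a block that was never written has no
    // offset, so hand back fill data instead of trying to seek to it.
    void ReadBlock(const BlockInfo &info, unsigned char *pabyDest,
                   std::size_t nDestBytes)
    {
        if (!info.bLogValid)
        {
            std::memset(pabyDest, 0, nDestBytes);  // unwritten block -> zeros
            return;
        }
        // ... seek to info.nOffset, read info.nSize bytes, decompress ...
    }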

Would you mind updating the frmt_hfa.html to reflect your new option? 

NOTE: I didn't actually test your patch much.  I didn't try reading the
resulting image with Imagine, for instance, or even any apps of my own as it
happens, as today I am trapped on a Mac where I am using GDAL in command-line
mode only.