Opened 13 years ago
Closed 13 years ago
#3515 closed defect (worksforme)
gdalwarp fails with an out-of-memory error when clipping an image
Reported by: bicealyh | Owned by: warmerdam
I used the following command to clip an image, and it gave me this error: ERROR 2: Out of memory allocating 438538240 bytes for UnifiedSrcDensity mask.
My gdalwarp command is: gdalwarp -of HFA -te 112.56 37.85 112.57 37.86 -dstnodata -2147483648 -wm 500 -cutline G:\test\PolyMask.shp geor:xxx/xxx@localhost:1521/ORCL,CITY_IMAGES_RDT,10 D:\test\Extract_clip.img
I took the polygon coordinates from the map and saved them as PolyMask.shp; that shapefile contains just the clip polygon. My image is only about 1 GB.
If I change the value of the '-wm' parameter from 500 to 200, it runs fine. Why?
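As a rough sanity check (this arithmetic is mine, not from the ticket), if the UnifiedSrcDensity mask stores one 32-bit float per source pixel — an assumption for illustration, since the exact layout is internal to GDAL's warper — the failed request corresponds to a warp chunk of roughly 110 million source pixels, i.e. a single contiguous block of about 418 MiB:

```python
# Hypothetical back-of-the-envelope check, assuming one 4-byte float
# per source pixel for the UnifiedSrcDensity mask.
FLOAT_BYTES = 4
requested_bytes = 438538240            # taken from the error message

pixels = requested_bytes // FLOAT_BYTES
mib = requested_bytes / (1024 ** 2)
print(pixels)      # 109634560 source pixels in the chunk
print(round(mib))  # ~418 MiB requested as one contiguous block
```

A smaller -wm makes the warper process the image in smaller chunks, so each per-chunk mask allocation shrinks accordingly, which is consistent with -wm 200 succeeding where -wm 500 fails.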
Change History (5)
comment:1 by , 13 years ago
Component: GDAL_Raster → Algorithms
Status: new → assigned
comment:2 by , 13 years ago
I am using the SVN trunk code at r18934.
comment:3 by , 13 years ago
I will try it and report back with the result.
comment:4 by , 13 years ago
Yong Heng reports that after updating:
I used gdalwarp to clip my image. When the exported raster data is larger than about 30 GB, it always fails with the error in the title. When the '-wm' value is lower, e.g. 200, or the exported data is not very large, e.g. 1 GB, it always runs fine. I don't know why. Does it mean my computer does not have enough memory? How can I make it run with '-wm 500' and '-multi' regardless of the size of the exported data? Are the '-wm' and '-multi' parameters related to the performance of gdalwarp when clipping raster data?
By the way, I am using GDAL from SVN trunk at r19286.
My gdalwarp command is: gdalwarp -of HFA -te 112.54 37.84 112.56 37.87 -dstnodata -2147483648 -wm 500 -multi -cutline F:\test\PolyMask?.shp geor:email@example.com:1521/orcl,CITY_IMAGES_RDT,1 D:\export_clip.img
comment:5 by , 13 years ago
Status: assigned → closed
The error indicates that the program was unable to allocate the requested large memory block. I'm not sure how much RAM your system has, but on win32 systems memory fragmentation can easily leave you in a situation where, even though enough memory is free in total, it is not all available in one contiguous block.
I would suggest just using smaller -wm values.
I have reviewed the warping algorithm and it does not seem to account for some of the mask buffers, including UnifiedSrcDensity, when deciding how small to split things to stay within the permitted memory limit. I have made a small change in trunk (r19285) that I think will alleviate this. Are you in a position to pull the latest code from svn and try it out?
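The fix described above can be sketched abstractly (this is my illustration of the idea, not the actual r19285 change, and the per-pixel costs are assumed values): when deciding how finely to split the work to stay under the memory limit, the chunk-size estimate must include the mask buffers, otherwise a chunk that looks affordable on band data alone can still trigger an oversized UnifiedSrcDensity allocation:

```python
def bytes_per_pixel(word_size=4, n_bands=3, has_density_mask=True):
    """Rough per-pixel cost of a warp chunk: the band buffers plus,
    when a cutline/nodata is in play, a 4-byte float density mask.
    All numbers here are illustrative assumptions, not GDAL's real
    accounting (which lives inside GDALWarpOperation)."""
    cost = word_size * n_bands
    if has_density_mask:
        cost += 4  # one float per pixel for the UnifiedSrcDensity mask
    return cost

def split_until_fits(width, height, limit_bytes, **kw):
    """Halve the chunk along its longer axis until the estimate,
    *including* the mask, fits within the -wm style limit."""
    while (width * height * bytes_per_pixel(**kw) > limit_bytes
           and min(width, height) > 1):
        if width >= height:
            width //= 2
        else:
            height //= 2
    return width, height
```

In this sketch, with 3 bands at 4 bytes each and the mask counted, a 500 MB limit forces a 16384×16384 request down to 4096×4096 chunks; ignoring the mask would accept 4096×8192 chunks and then fail on the extra mask allocation — which is the shape of the bug described above.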
Alternatively, using the smaller -wm setting on the commandline will have a similar effect as a workaround.