Opened 12 years ago
Closed 12 years ago
#4306 closed defect (fixed)
[PATCH] Too many files in destination directory v.out.ogr, write fails
Reported by: MarjanM444 | Owned by: warmerdam
If there are more than 1024 (350?) files in the destination directory, the write is not successful. This has been confirmed to be a problem on Linux as well as on MS Windows. The problem has been present for a long time; please either document the behavior or provide a workaround.
Example code: v.out.ogr -p input=testvect type=area 'dsn=/home/user1' olayer=11 layer=1
Some details in http://trac.osgeo.org/grass/ticket/1478
Change History (7)
comment:1 by , 12 years ago
comment:2 by , 12 years ago
This was claimed by the GRASS developers. The fact is: once there are around 1024 files in the destination directory (341 shapes × 3 files each ≈ 1024), the error is that the file cannot be created.
Versions are GDAL 1.7.3 and OGR 1.7.3
comment:3 by , 12 years ago
I can reproduce the issue with latest trunk too. This is a difficult one that would likely need significant refactoring of the shapefile driver. In the meantime, if you still want to have a directory with hundreds of shapefiles in it, on Linux the only workaround is to increase the limit of open files for a process with "ulimit -n XXXX", but you need to be root to have that privilege.
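Expanding on that workaround, a minimal shell sketch (the 4096 value is only an example; only raising the soft limit up to the existing hard limit works without root):

```shell
# Show the current soft limit on open file descriptors for this shell
ulimit -Sn

# Raise the soft limit up to the hard limit; raising the hard limit
# itself requires root. Each shapefile layer keeps .shp, .shx and .dbf
# open, so ~350 layers already exhaust the common default of 1024.
hard=$(ulimit -Hn)
[ "$hard" = "unlimited" ] && hard=4096   # pick a concrete cap if unlimited
ulimit -Sn "$hard"
ulimit -Sn

# Run the export in the same shell session so the new limit applies, e.g.:
# v.out.ogr -p input=testvect type=area 'dsn=/home/user1' olayer=11 layer=1
```

Note that the raised limit only applies to processes started from that same shell session.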
comment:4 by , 12 years ago
Thank you for your fast answer. I will try the proposed workaround.
Can you please propose a solution too?
by , 12 years ago
comment:5 by , 12 years ago
Summary: Too many files in destination directory v.out.ogr, write fails → [PATCH] Too many files in destination directory v.out.ogr, write fails
Attached a patch that looks promising. A bit more testing will be needed though.
comment:6 by , 12 years ago
Status: new → closed
r23319 /trunk/ (4 files in 2 dirs): Shapefile driver: allow managing datasources with several hundreds of layers without exhausting the limit of file descriptors. The solution is to maintain a list of MRU layers limited in size (100), automatically close the LRU layer when the list max size is reached, and reopen the 'evicted' layers when necessary. No performance impact in standard use cases should be noticed. (#4306)
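The strategy described in the r23319 commit message can be sketched as follows. This is a simplified Python illustration, not the actual C++ driver code; the `LayerPool`, `touch` and `max_open` names are hypothetical. The idea: keep an ordered pool of open layers capped in size, move a layer to the MRU end whenever it is used, close the LRU layer when the cap is reached, and transparently reopen an evicted layer on its next use.

```python
from collections import OrderedDict

class LayerPool:
    """Sketch of an MRU/LRU layer-handle pool: at most `max_open`
    layers keep their file descriptors open at any one time."""

    def __init__(self, max_open=100):
        self.max_open = max_open
        self._open = OrderedDict()   # layer name -> handle, MRU at the end
        self._evicted = set()        # layers closed but reopenable on demand

    def _do_open(self, name):
        # Stand-in for opening the layer's .shp/.shx/.dbf descriptors.
        return f"handle:{name}"

    def touch(self, name):
        """Return an open handle for `name`, evicting the LRU layer
        when the pool is full and reopening `name` if it was evicted."""
        if name in self._open:
            self._open.move_to_end(name)   # mark as most recently used
            return self._open[name]
        if len(self._open) >= self.max_open:
            lru, _handle = self._open.popitem(last=False)  # close LRU layer
            self._evicted.add(lru)
        self._evicted.discard(name)
        handle = self._do_open(name)       # open (or reopen) the layer
        self._open[name] = handle
        return handle

pool = LayerPool(max_open=3)
for i in range(5):
    pool.touch(f"layer{i}")
# Only 3 layers are open at once; the two oldest were evicted but
# can be transparently reopened:
pool.touch("layer0")
```

Because eviction only happens past the cap and standard workflows touch far fewer than 100 layers, no performance impact is expected in ordinary use, matching the commit message.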
It is a bit difficult to understand what the issue really is. Is it a directory opened with the shapefile driver that contains 1024 files that are shapefile layers?
Where does the 350 number come from? My assumption is the following: each shapefile layer needs to keep 3 files open (.dbf, .shp and .shx), and 350 × 3 = 1050.
Which GDAL/OGR version are you using ?