Opened 8 years ago
Last modified 6 years ago
#3293 new enhancement
r.texture: very slow when size is increased
| Reported by: | mlennert | Owned by: | |
|---|---|---|---|
| Priority: | normal | Milestone: | 7.6.2 |
| Component: | Raster | Version: | svn-trunk |
| Keywords: | r.texture speed | Cc: | |
| CPU: | Unspecified | Platform: | Unspecified |
Description
Running
r.texture -a ortho_2001_t792_1m size=27 out=text_ortho_27
in the NC demo data set took the following time:
real 339m54.922s
user 339m44.824s
sys 0m5.464s
This seems excessively long, especially when the relevant literature cites common window sizes of 20-50 (see #3210 for a discussion).
It would be great if this could somehow be accelerated.
Change History (9)
follow-up: 2 comment:1 by mmetz, 8 years ago
Replying to mlennert:
[...]
Note that e.g. Haralick et al. (1973) did not use a moving window approach; instead, texture measurements were calculated for selected blocks, and the output is not a raster map but a single value for each texture measurement and each block.
[...]
Maybe by processing moving windows in parallel, at the cost of allocating / freeing memory for each moving window.
follow-up: 3 comment:2 by mlennert, 8 years ago
Replying to mmetz:
Replying to mlennert:
Running
r.texture -a ortho_2001_t792_1m size=27 out=text_ortho_27
in the NC demo data set took the following time:
real 339m54.922s
user 339m44.824s
sys 0m5.464s
This seems excessively long, especially when the relevant literature cites common window sizes of 20-50 (see #3210 for a discussion).
Note that e.g. Haralick et al. (1973) did not use a moving window approach; instead, texture measurements were calculated for selected blocks, and the output is not a raster map but a single value for each texture measurement and each block.
Yes. But I find the pixel-by-pixel version quite nice! ;-) That said, a per-object texture measurement would also be nice... See #2111.
It would be great if this could somehow be accelerated.
Maybe by processing moving windows in parallel, at the cost of allocating / freeing memory for each moving window.
Yes, I've thought about this as well. This seems like a perfect example of a module that could profit from parallelisation across as many cores/threads as the user chooses. Memory usage is very low, so this should not be a bottleneck (except for large images, as the module still reads in the entire data, no?). And allocation only has to happen once at the beginning, and then you can just use the allocated window throughout, or?
Might be interesting to benchmark where exactly most of the time is lost. Any hints on how to do that?
follow-up: 4 comment:3 by mmetz, 8 years ago
Replying to mlennert:
Replying to mmetz:
Replying to mlennert:
[...]
It would be great if this could somehow be accelerated.
Maybe by processing moving windows in parallel, at the cost of allocating / freeing memory for each moving window.
Yes, I've thought about this as well. This seems like a perfect example of a module that could profit from parallelisation across as many cores/threads as the user chooses. Memory usage is very low, so this should not be a bottleneck (except for large images, as the module still reads in the entire data, no?). And allocation only has to happen once at the beginning, and then you can just use the allocated window throughout, or?
No, because if you process moving windows in parallel, you process several moving windows at the same time. Therefore you would need to allocate and free memory for each moving window; otherwise the different moving windows would overwrite each other's results.
Might be interesting to benchmark where exactly most of the time is lost. Any hints on how to do that?
You could use profiling tools, e.g. pprof.
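Before reaching for a full profiler, a very rough first cut is to wrap the suspected hot spot in wall-clock timers. A minimal sketch of that idea; the dummy loop below is just a hypothetical stand-in for the moving-window code, not anything from r.texture itself:

#include <stdio.h>
#include <time.h>

/* wall-clock timer helper (POSIX clock_gettime) */
static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    double t0 = now_sec();

    /* stand-in for the suspected hot spot,
       e.g. the moving-window loop */
    volatile double sink = 0.0;
    for (long i = 0; i < 100000000L; i++)
        sink += (double)i * 0.5;

    fprintf(stderr, "hot section: %.3f s\n", now_sec() - t0);
    return 0;
}

For real call-level data, gprof (compile with -pg) or gperftools' pprof give a per-function breakdown instead of a single elapsed time.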
comment:4 by mlennert, 8 years ago
Replying to mmetz:
Replying to mlennert:
Replying to mmetz:
Replying to mlennert:
[...]
It would be great if this could somehow be accelerated.
Maybe by processing moving windows in parallel, at the cost of allocating / freeing memory for each moving window.
Yes, I've thought about this as well. This seems like a perfect example of a module that could profit from parallelisation across as many cores/threads as the user chooses. Memory usage is very low, so this should not be a bottleneck (except for large images, as the module still reads in the entire data, no?). And allocation only has to happen once at the beginning, and then you can just use the allocated window throughout, or?
No, because if you process moving windows in parallel, you process several moving windows at the same time. Therefore you would need to allocate and free memory for each moving window; otherwise the different moving windows would overwrite each other's results.
Yes, but you would only need one window per process, and you would only allocate each process's window once, at the beginning, or?
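A minimal sketch of that scheme, assuming OpenMP; fill_window() and compute_texture() are hypothetical stand-ins for the real r.texture internals (clamp-to-edge fill and a dummy window mean). Each thread allocates its window buffer once, and each output cell is written by exactly one thread, so no locking is needed:

#include <stdio.h>
#include <stdlib.h>

/* hypothetical stand-in: copy the size x size neighbourhood of
   (r, c) into win, clamping at the image edges */
static void fill_window(const float *in, float *win, int r, int c,
                        int nrows, int ncols, int size)
{
    int half = size / 2;
    for (int i = 0; i < size; i++)
        for (int j = 0; j < size; j++) {
            int rr = r + i - half, cc = c + j - half;
            if (rr < 0) rr = 0;
            if (rr >= nrows) rr = nrows - 1;
            if (cc < 0) cc = 0;
            if (cc >= ncols) cc = ncols - 1;
            win[i * size + j] = in[rr * ncols + cc];
        }
}

/* hypothetical stand-in: a dummy "texture" measure (window mean) */
static float compute_texture(const float *win, int size)
{
    float sum = 0.0f;
    for (int k = 0; k < size * size; k++)
        sum += win[k];
    return sum / (float)(size * size);
}

/* one scratch window per thread, allocated once; rows are
   distributed across threads (compile with: gcc -fopenmp) */
void texture_map(const float *in, float *out,
                 int nrows, int ncols, int size)
{
    #pragma omp parallel
    {
        float *win = malloc((size_t)size * size * sizeof(float));

        #pragma omp for schedule(dynamic)
        for (int r = 0; r < nrows; r++)
            for (int c = 0; c < ncols; c++) {
                fill_window(in, win, r, c, nrows, ncols, size);
                out[r * ncols + c] = compute_texture(win, size);
            }

        free(win);
    }
}

int main(void)
{
    enum { NR = 200, NC = 300, SZ = 27 };
    float *in = malloc(NR * NC * sizeof(float));
    float *out = malloc(NR * NC * sizeof(float));
    for (int k = 0; k < NR * NC; k++)
        in[k] = (float)(k % 97);
    texture_map(in, out, NR, NC, SZ);
    printf("out[0] = %f\n", out[0]);
    free(in);
    free(out);
    return 0;
}

With one buffer per thread there is no allocation churn in the inner loop, which avoids the per-window allocate/free cost mentioned above.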
comment:6 by , 6 years ago
Milestone: 7.4.1 → 7.4.2
comment:7 by , 6 years ago
Milestone: 7.4.2 → 7.6.0
All enhancement tickets should be assigned to 7.6 milestone.