Opened 8 years ago

Last modified 6 years ago

#3293 new enhancement

r.texture: very slow when size is increased

Reported by: mlennert
Owned by: grass-dev@…
Priority: normal
Milestone: 7.6.2
Component: Raster
Version: svn-trunk
Keywords: r.texture speed
Cc:
CPU: Unspecified
Platform: Unspecified

Description

Running

r.texture -a ortho_2001_t792_1m size=27 out=text_ortho_27

in the NC demo data set took the following time:

real	339m54.922s
user	339m44.824s
sys	0m5.464s

This seems excessively long, especially since the relevant literature cites common window sizes of 20-50 (see #3210 for a discussion).

It would be great if this could somehow be accelerated.

Change History (9)

in reply to: description; comment:1 by mmetz, 8 years ago

Replying to mlennert:

Running

r.texture -a ortho_2001_t792_1m size=27 out=text_ortho_27

in the NC demo data set took the following time:

real	339m54.922s
user	339m44.824s
sys	0m5.464s

This seems excessively long, especially since the relevant literature cites common window sizes of 20-50 (see #3210 for a discussion).

Note that e.g. Haralick et al. (1973) did not use a moving window approach; instead, texture measurements were calculated for selected blocks, and the output is not a raster map but a single value for each texture measurement and each block.

It would be great if this could somehow be accelerated.

Maybe by processing moving windows in parallel, at the cost of allocating / freeing memory for each moving window.
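
For illustration, here is a minimal OpenMP sketch of that idea (not the actual r.texture code; compute_texture() below is only a dummy stand-in for the Haralick measures): every pixel's moving window is extracted and processed independently, with the window buffer allocated and freed inside the loop body so that parallel iterations cannot clobber each other:

#include <stdlib.h>

/* dummy stand-in: the real module derives Haralick measures from a
   grey-level co-occurrence matrix built over the window */
double compute_texture(const double *win, int size)
{
    double sum = 0.0;
    int i;

    for (i = 0; i < size * size; i++)
        sum += win[i];
    return sum / (size * size);
}

void texture_moving_windows(const double *image, double *out,
                            int nrows, int ncols, int size)
{
    int half = size / 2;
    int row, col;

#pragma omp parallel for private(col) schedule(dynamic)
    for (row = half; row < nrows - half; row++) {
        for (col = half; col < ncols - half; col++) {
            /* per-window buffer, allocated and freed inside the loop:
               no two parallel iterations ever share it */
            double *win = malloc((size_t)size * size * sizeof(double));
            int r, c;

            for (r = 0; r < size; r++)
                for (c = 0; c < size; c++)
                    win[r * size + c] =
                        image[(size_t)(row - half + r) * ncols + (col - half + c)];

            out[(size_t)row * ncols + col] = compute_texture(win, size);
            free(win);
        }
    }
}

Built with -fopenmp, the outer row loop is then spread over the available threads; the malloc()/free() per window is the cost mentioned above.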

in reply to: 1; comment:2 by mlennert, 8 years ago

Replying to mmetz:

Replying to mlennert:

Running

r.texture -a ortho_2001_t792_1m size=27 out=text_ortho_27

in the NC demo data set took the following time:

real	339m54.922s
user	339m44.824s
sys	0m5.464s

This seems excessively long, especially since the relevant literature cites common window sizes of 20-50 (see #3210 for a discussion).

Note that e.g. Haralick et al. (1973) did not use a moving window approach; instead, texture measurements were calculated for selected blocks, and the output is not a raster map but a single value for each texture measurement and each block.

Yes. But I find the pixel-by-pixel version quite nice! ;-) That said, a per-object texture measurement would also be nice... See #2111.

It would be great if this could somehow be accelerated.

Maybe by processing moving windows in parallel, at the cost of allocating / freeing memory for each moving window.

Yes, I've thought about this as well. This seems like a perfect example of a module that could profit from parallelisation across as many cores/threads as the user chooses. Memory usage is very low, so this should not be a bottleneck (except for large images, since the module still reads the entire map into memory, no?). And allocation only has to happen once at the beginning, after which the allocated window can simply be reused throughout, right?

It might be interesting to benchmark where exactly most of the time is spent. Any hints on how to do that?

in reply to: 2; comment:3 by mmetz, 8 years ago

Replying to mlennert:

Replying to mmetz:

Replying to mlennert:

[...]

It would be great if this could somehow be accelerated.

Maybe by processing moving windows in parallel, at the cost of allocating / freeing memory for each moving window.

Yes, I've thought about this as well. This seems like a perfect example of a module that could profit from parallelisation across as many cores/threads as the user chooses. Memory usage is very low, so this should not be a bottleneck (except for large images, since the module still reads the entire map into memory, no?). And allocation only has to happen once at the beginning, after which the allocated window can simply be reused throughout, right?

No, because if you process moving windows in parallel, you process several moving windows at the same time. You would therefore need to allocate and free memory for each moving window; otherwise the different moving windows would overwrite each other's results.

It might be interesting to benchmark where exactly most of the time is spent. Any hints on how to do that?

You could use profiling tools, e.g. pprof.
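
For example (a minimal sketch, assuming gperftools is installed and the binary is linked with -lprofiler; do_texture_pass() is only a dummy stand-in for the code one wants to profile), the CPU profiler behind pprof can be wrapped around the hot section:

#include <gperftools/profiler.h>

/* dummy stand-in for the moving-window loop to be profiled */
static double do_texture_pass(void)
{
    double s = 0.0;
    long i;

    for (i = 0; i < 100000000L; i++)
        s += (double)i * 1e-9;
    return s;
}

int main(void)
{
    double s;

    ProfilerStart("r.texture.prof");  /* start sampling, write to this file */
    s = do_texture_pass();
    ProfilerStop();                   /* stop sampling and flush the profile */
    return s < 0.0;
}

The resulting file can then be inspected with pprof (e.g. pprof --text ./a.out r.texture.prof) to see which functions dominate the run time; gprof or perf would give similar information without touching the code.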

in reply to: 3; comment:4 by mlennert, 8 years ago

Replying to mmetz:

Replying to mlennert:

Replying to mmetz:

Replying to mlennert:

[...]

It would be great if this could somehow be accelerated.

Maybe by processing moving windows in parallel, at the cost of allocating / freeing memory for each moving window.

Yes, I've thought about this as well. This seems like a perfect example of a module that could profit from parallelisation across as many cores/threads as the user chooses. Memory usage is very low, so this should not be a bottleneck (except for large images, since the module still reads the entire map into memory, no?). And allocation only has to happen once at the beginning, after which the allocated window can simply be reused throughout, right?

No, because if you process moving windows in parallel, you process several moving windows at the same time. You would therefore need to allocate and free memory for each moving window; otherwise the different moving windows would overwrite each other's results.

Yes, but you would only need one window per process, and each process's window would only need to be allocated once, at the beginning, right?
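
A minimal sketch of that variant (again not the actual r.texture code; compute_texture() is the same dummy stand-in as in the sketch in comment:1): each OpenMP thread allocates its window buffer once inside the parallel region and then reuses it for all the rows it is given:

#include <stdlib.h>

/* same dummy stand-in as in the earlier sketch (defined there) */
double compute_texture(const double *win, int size);

void texture_windows_reuse(const double *image, double *out,
                           int nrows, int ncols, int size)
{
    int half = size / 2;

#pragma omp parallel
    {
        /* one buffer per thread, allocated once and reused for every window */
        double *win = malloc((size_t)size * size * sizeof(double));
        int row, col, r, c;

#pragma omp for schedule(dynamic)
        for (row = half; row < nrows - half; row++) {
            for (col = half; col < ncols - half; col++) {
                for (r = 0; r < size; r++)
                    for (c = 0; c < size; c++)
                        win[r * size + c] =
                            image[(size_t)(row - half + r) * ncols + (col - half + c)];

                out[(size_t)row * ncols + col] = compute_texture(win, size);
            }
        }
        free(win);
    }
}

This keeps the point above intact (no buffer is ever shared between threads), but the number of allocations drops from one per cell to one per thread.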

comment:5 by neteler, 7 years ago

Milestone: 7.4.0 → 7.4.1

Ticket retargeted after milestone closed

comment:6 by neteler, 6 years ago

Milestone: 7.4.1 → 7.4.2

comment:7 by martinl, 6 years ago

Milestone: 7.4.2 → 7.6.0

All enhancement tickets should be assigned to the 7.6 milestone.

comment:8 by martinl, 6 years ago

Milestone: 7.6.0 → 7.6.1

Ticket retargeted after milestone closed

comment:9 by martinl, 6 years ago

Milestone: 7.6.1 → 7.6.2

Ticket retargeted after milestone closed
