Opened 12 years ago

Closed 6 years ago

#1625 closed defect (wontfix)

Disk performance degrades by several orders of magnitude with two processes

Reported by: sprice
Owned by: grass-dev@…
Priority: normal
Milestone: 7.0.7
Component: Default
Version: svn-trunk
Keywords:
Cc:
CPU: x86-64
Platform: MacOSX

Description

When there are two GRASS processes competing for a single disk on an I/O-bound task, performance doesn't just halve. It decreases by several orders of magnitude. However, 'iostat' lists just as much data flowing off the disk as would be expected.

Also, when one task is canceled, the other process doesn't recover. 'iostat' claims that just as much data is flowing, but the remaining process remains degraded until it is canceled and restarted.

I would try to give a bit more debug info, but I suspect that it's some sort of interaction with a caching layer in GRASS, where extra data is read and then discarded many times.

Attachments (1)

multiproc_mapcalc.py (2.9 KB) - added by huhabla 12 years ago.
Check the IO disk performance by running multiple instances of r.mapcalc in parallel


Change History (11)

comment:1 by sprice, 12 years ago

Actually, this might be a problem even when you have two processes each accessing their own disk. The disk is pushing the maximum amount of data again, but the processing is proceeding extremely slowly.

in reply to: description · comment:2 by glynn, 12 years ago

Replying to sprice:

> When there are two GRASS processes competing for a single disk on an I/O-bound task, performance doesn't just halve. It decreases by several orders of magnitude.

That's what I would expect, due to more time lost to seeking.

> However, 'iostat' lists just as much data flowing off the disk as would be expected.

That isn't what I would expect for a normal GRASS command. The only obvious case where I would expect that would be with in-memory calculations which result in physical RAM availability being exceeded. In that situation, memory would keep getting swapped out and back in again.

> Also, when one task is canceled, the other process doesn't recover. 'iostat' claims that just as much data is flowing, but the remaining process remains degraded until it is canceled and restarted.

That is odd. Which commands, exactly?

> I would try to give a bit more debug info, but I suspect that it's some sort of interaction with a caching layer in GRASS, where extra data is read and then discarded many times.

Most GRASS raster commands just do sequential I/O. Any caching is internal to the process; there isn't any interaction between processes.
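For reference, here is a minimal sketch of the sequential, per-process access pattern described above, assuming GRASS 7's pygrass Python API and an existing raster map named "elevation" (both assumptions; real GRASS modules are written in C, but the pattern is the same):

from grass.pygrass.raster import RasterRow

# Open an existing raster map read-only; the map name is illustrative.
rast = RasterRow("elevation")
rast.open("r")

# Rows are requested strictly in order, and any row caching lives inside
# this process only, so two such processes share nothing but the disk.
for row in rast:
    pass  # per-row computation would go here

rast.close()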

by huhabla, 12 years ago

Attachment: multiproc_mapcalc.py added

Check the IO disk performance by running multiple instances of r.mapcalc in parallel

comment:3 by huhabla, 12 years ago

To verify that this issue is related to GRASS and not to your implementation, you need to check the I/O performance of a native GRASS module running in parallel.

I have attached a simple GRASS script (multiproc_mapcalc.py) to check disk I/O performance by running multiple instances of r.mapcalc in parallel. The implementation ensures disk syncing at the end of the processing and measures the time, in seconds, between the start of processing and the end of the sync (a sketch of the approach appears after the example runs below). I did not see large performance drops running multiple r.mapcalc instances in parallel. Here are some examples from my AMD 6-core, 1 TB HD, Ubuntu 64-bit system:

Running a single r.mapcalc instance to create a 50,000,000-cell integer raster map:

GRASS 7.0.svn (TestLL):~/ > python multiproc_mapcalc.py base=raster nprocs=1 size=50
projection: 3 (Latitude-Longitude)
zone:       0
datum:      wgs84
ellipsoid:  wgs84
north:      80N
south:      0
west:       0
east:       62:30E
nsres:      0:00:36
ewres:      0:00:36
rows:       8000
cols:       6250
cells:      50000000
### main process ###
process id: 4152
### sub process for map <raster_0> ###
ppid 4152 pid 4158
 100%
Sync disk
Time for processing: 3.885329 seconds
Removing raster <raster_0>

Running six r.mapcalc instances to create six 50,000,000-cell integer raster maps:

GRASS 7.0.svn (TestLL):~/ > python multiproc_mapcalc.py base=raster nprocs=6 size=50
projection: 3 (Latitude-Longitude)
zone:       0
datum:      wgs84
ellipsoid:  wgs84
north:      80N
south:      0
west:       0
east:       62:30E
nsres:      0:00:36
ewres:      0:00:36
rows:       8000
cols:       6250
cells:      50000000
### main process ###
process id: 4175
### sub process for map <raster_1> ###
ppid 4175 pid 4182
### sub process for map <raster_0> ###
ppid 4175 pid 4181
### sub process for map <raster_2> ###
ppid 4175 pid 4183
### sub process for map <raster_3> ###
ppid 4175 pid 4186
### sub process for map <raster_4> ###
ppid 4175 pid 4187
### sub process for map <raster_5> ###
ppid 4175 pid 4190
 100%
 100%
 100%
 100%
 100%
 100%
Sync disk
Time for processing: 4.878238 seconds
Removing raster <raster_0>
Removing raster <raster_1>
Removing raster <raster_2>
Removing raster <raster_3>
Removing raster <raster_4>
Removing raster <raster_5>
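
As promised above, here is a minimal sketch of the approach the attached script takes (spawn several r.mapcalc children in parallel, sync the disk, report elapsed seconds). It assumes grass.script plus Python's multiprocessing and must be run inside a GRASS session with the region already set; the actual multiproc_mapcalc.py may differ in detail, and the map names and mapcalc expression are illustrative:

import os
import time
from multiprocessing import Process

import grass.script as gs

def run_mapcalc(mapname):
    # Each child writes one integer raster map via r.mapcalc.
    gs.mapcalc("%s = int(row() * col())" % mapname, overwrite=True)

def benchmark(base, nprocs):
    start = time.time()
    procs = [Process(target=run_mapcalc, args=("%s_%d" % (base, i),))
             for i in range(nprocs)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    os.system("sync")  # flush pending writes so disk time is included
    print("Time for processing: %f seconds" % (time.time() - start))

if __name__ == "__main__":
    benchmark("raster", 6)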

comment:4 by hamish, 11 years ago

Is there anything to be done for this ticket, or is it just an observation?

in reply to: comment:4 · comment:5 by wenzeslaus, 10 years ago

Replying to hamish:

> Is there anything to be done for this ticket, or is it just an observation?

I don't know. Perhaps writing a benchmark/test which would show whether two processes running in parallel behave as expected.
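
A hedged sketch of such a test, building on the same pattern as the benchmark sketch earlier in this ticket (the threshold, map names, and mapcalc expression are illustrative assumptions, not project conventions):

import time
from multiprocessing import Process

import grass.script as gs

def run_one(name):
    gs.mapcalc("%s = int(row() * col())" % name, overwrite=True)

def timed_run(nprocs):
    start = time.time()
    procs = [Process(target=run_one, args=("bench_%d" % i,))
             for i in range(nprocs)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return time.time() - start

def test_parallel_io_scaling():
    serial = timed_run(1)
    parallel = timed_run(2)
    # Two I/O-bound writers sharing a disk may plausibly take ~2x as long,
    # but the orders-of-magnitude slowdown reported in this ticket should
    # trip even a generous threshold like this one.
    assert parallel < 10 * serial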

comment:6 by martinl, 8 years ago

Milestone: 7.0.0 → 7.0.5

comment:7 by neteler, 8 years ago

Milestone: 7.0.5 → 7.0.6

comment:8 by neteler, 6 years ago

Milestone: 7.0.6 → 7.0.7

comment:9 by martinl, 6 years ago

No activity for a long time. Closing. Feel free to reopen if needed.

comment:10 by martinl, 6 years ago

Resolution: wontfix
Status: new → closed