Opened 13 years ago

Closed 9 years ago

#1070 closed defect (fixed)

WFS responses are truncated at 16mb

Reported by: ksgeograf Owned by:
Priority: medium Milestone:
Component: WFS Interface Version: 2.2.0
Severity: major Keywords: WFS Sheboygan 16777216 0x1000000 truncated xml GetFeature
Cc: External ID:


Attempting to read the Sheboygan parcels via WFS fails completely. The query string I used is this:


You may have to replace "TYPENAME" with the correct random namespace.

When the request is issued through a browser, mgserver.exe starts running at 100% CPU. After a little while, w3wp.exe starts running instead, consuming a large amount of memory. At some point the process starts sending the data, but truncated at 16777216 (0x1000000) bytes.

There are two problems here:

1) The data should not be truncated mid-character, as that results in broken XML.

2) The data should not be collected fully in each process, but rather streamed to the client, resulting in less memory overhead and faster response times.

Change History (5)

comment:1 by zspitzer, 12 years ago

Component: General → WFS Interface
Summary: WFS is inefficient and breaks for larger datasets → WFS responses are truncated at 16mb

just tried this against 2.2.0 and the response is still truncated

don't try this in a browser, as it will lock up due to the size; use wget instead

wget "http://localhost:8008/mapguide/mapagent/mapagent.fcgi?VERSION=1.0.0&SERVICE=WFS&REQUEST=GetFeature&OUTPUTFORMAT=GML2&TYPENAME=ns34414117:Parcels&username=Administrator&PASSWORD=admin"

comment:2 by zspitzer, 12 years ago

looks like there is also a memory leak, as this request leaves the Apache httpd.exe process using 1GB of RAM

comment:3 by zspitzer, 12 years ago

incidentally, gzip compression reduces this down to 1.7mb #1652

comment:4 by jng, 11 years ago

Some investigative notes.

I think the problem starts here

It funnels the WFS response from the server-side Feature Service first into a string, then into an in-memory feature set

That in-memory feature set is then funnelled into an MgHttpResponseStream, whose backing store is MgByte, which had (until recently) a 16mb storage limit (now 64mb, but still a limit)

There seem to be many in-memory projections of the original data before it reaches the Apache/ISAPI response handler

So a full read of the Sheboygan Parcels is probably spiking memory usage due to all these (possibly unnecessary) projections of the data, with the truncation caused by the size limit of the MgByte backing the response stream
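The diagnosis above (a size-limited backing store silently cutting off the response) can be illustrated with a minimal sketch. FixedBuffer is a hypothetical stand-in for MgByte, with a tiny limit for demonstration; the real limit was 16MB, later 64MB:

```cpp
#include <algorithm>
#include <cstddef>
#include <string>

// Hypothetical sketch of a fixed-capacity byte buffer that silently drops
// any write past its limit, mirroring the truncation symptom in this ticket.
class FixedBuffer {
public:
    explicit FixedBuffer(std::size_t limit) : limit_(limit) {}

    // Appends as much of `data` as fits; excess bytes are silently lost,
    // which is what produces a truncated (and thus broken) XML document.
    void Append(const std::string& data) {
        std::size_t room = limit_ - data_.size();
        data_.append(data, 0, std::min(room, data.size()));
    }

    const std::string& Data() const { return data_; }

private:
    std::size_t limit_;
    std::string data_;
};
```

A buffer like this caps the response at exactly its limit regardless of how much is written, which matches the observed cutoff at exactly 0x1000000 bytes.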

comment:5 by jng, 9 years ago

Resolution: fixed
Status: new → closed

Fixed in r7782

Note: See TracTickets for help on using tickets.