Opened 18 years ago
Closed 18 years ago
#1885 closed defect (fixed)
Heap overflow with oversized QUERY_STRING
Reported by: | | Owned by: | sdlime
---|---|---|---
Priority: | high | Milestone: | 4.10 release
Component: | MapServer CGI | Version: | 4.10
Severity: | major | Keywords: |
Cc: | | |
Description
If a QUERY_STRING is specified with greater than MAX_PARAMS key/value pairs, heap memory will be overwritten.

```
atlas:~/Documents/Source/CVS/mapserver mrowe$ MAPSERV=./mapserv sh ~/mapserv-large-query-string
/Users/mrowe/mapserv-large-query-string: line 4: 23441 Segmentation fault      $MAPSERV QUERY_STRING="$QUERY_STRING"
atlas:~/Documents/Source/CVS/mapserver mrowe$
```

Running under GDB gives the following info:

```
Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_INVALID_ADDRESS at address: 0x002ff000
0x0000a98d in loadParams (request=0x1601540) at cgiutil.c:206
206       request->ParamNames[m] = makeword(request->ParamValues[m],'=');
(gdb) print m
$1 = 10240
```
Attachments (1)
Change History (13)
comment:1 by , 18 years ago
Cc: | added |
---|---|
Milestone: | → 4.10 release |
by , 18 years ago
Attachment: | mapserv-large-query-string added |
---|
comment:2 by , 18 years ago
Status: | new → assigned |
---|
Shouldn't be too hard to fix; I assume we just add a check on the number of params in the right place. This code was borrowed from the old NCSA web server. Will try to fix by beta2, and for sure by beta3... Steve
comment:3 by , 18 years ago
I should also note that at the time it crashes, mapserv has malloc'd no less than 786,984KB of memory. This would appear to be due to makeword() malloc'ing strlen(line) bytes of memory each time it is called. Is it worth filing a separate bug report on this issue, or should it be resolved along with the heap overflow that revealed it?
comment:5 by , 18 years ago
Cc: | added |
---|
I can't test this on Linux with the submitted script; the damn shells complain about argument length.

The easiest fix would be to throw a quick test in the parameter looping code to make sure we don't process more than the pre-defined limit (currently 10,000). We could probably reduce that limit to just a hundred. We could also, I suppose, make it so there is no limit, but that seems unreasonable too; there aren't practical reasons for that many arguments.

One question would be how to handle exceptions then. It's easy to simply ignore any name/value pairs beyond some fixed limit and throw no error. That might be ok since those types of requests would be malicious and would generate errors later anyway. Alternatively we could throw an error and exit (like calling the CGI without any args). Are there any adverse effects with FastCGI? We're not currently reporting any errors from loadParams().

Whatever the fix, it needs to be ported to 4.8 and a new release created. Adding Dan and Frank to the CC list for other thoughts. Steve
comment:6 by , 18 years ago
I would agree we should be checking against MAX_PARAMS, and that it could be reduced to 100 with no adverse effects. Reporting an error and exiting in loadParams() is ok with me for this sort of situation. It will cause the FastCGI process to terminate, but that is no big problem; it just means a new process is started for the next request.

Are we really going to keep producing 4.8.x releases now that we are into the 4.10 beta process? Is it that serious a problem?
comment:7 by , 18 years ago
+1 to produce an error and exit if we exceed MAX_PARAMS. I also agree that a MAX_PARAMS value of 100 is good enough for any use I can think of. And finally, I think we should backport to 4.8 and produce at least a 4.8.5 in a few weeks, since there are already a few other fixes that have been backported to it. I don't think producing a 4.8.5 hurts 4.10 in any way; it just helps users.
comment:8 by , 18 years ago
If you can construct a query that can create a segfault, doesn't that pose a somewhat serious security hole? If so, then patching the most recent stable release might be wise.

Also, at least for 4.10, would it make sense to change MAX_PARAMS to a more descriptive MS_MAX_CGI_PARAMS? That would require touching a couple of MapScript modules and mapserv.c. Steve
comment:9 by , 18 years ago
I can fix this tonight; it's easier to test on my Mac. Let me know about renaming MAX_PARAMS... Steve
comment:10 by , 18 years ago
+1 on changing the name to MS_MAX_CGI_PARAMS in 4.10. I was also going to suggest that we move the allocation of the paramNames/Values arrays inside msAllocCgiObj() but I decided to create new bug 1888 and keep that for 5.0.
comment:11 by , 18 years ago
For 4.8 I will just throw the checks in cgiutil.c and NOT change the symbol name. Steve
comment:12 by , 18 years ago
Resolution: | → fixed |
---|---|
Status: | assigned → closed |
Fixed the overflow in CVS HEAD and in version 4.8. I got a bit greedy with 4.8 and committed too many files; hopefully my rollback was ok. The maximum number of params is now set to 100 in both versions. The issue with too much memory being malloc'd should probably be filed as another bug. Steve
Attachment mapserv-large-query-string: the script mentioned in the bug description.