Opened 12 years ago
Closed 11 years ago
#1110 closed defect (fixed)
Trac and SVN seem very slow past few days
Reported by: robe
Owned by:
Priority: normal
Milestone:
Component: SysAdmin
Keywords:
Cc:
Description
I'm noticing that both Trac and SVN (PostGIS) have been very slow for the past week or so. I haven't used Trac/SVN enough to pin down exactly when, though it did take me longer than usual to log into OSGeo Trac. It's off and on, but it seems particularly bad this morning, taking about 20-30 seconds.
Sandro (strk) noticed it too, so I'm assuming it's not just me.
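(As an editorial aside: a rough way to quantify "slow" here would be to time requests against the public endpoints. The sketch below is illustrative only; the URLs are the OSGeo services named in the ticket, and the timeout and repo paths are assumptions.)
{{{
#!/usr/bin/env python3
"""Rough response-time check for the Trac and SVN front ends.

A minimal sketch for quantifying the reported slowness; the repo
paths and timeout are assumptions, not details from the ticket.
"""
import time
import urllib.request

URLS = [
    "https://trac.osgeo.org/postgis/",  # assumed PostGIS Trac path
    "https://svn.osgeo.org/postgis/",   # assumed PostGIS SVN path
]

for url in URLS:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=60) as resp:
            resp.read()
            status = resp.status
    except Exception as exc:  # timeout, DNS failure, HTTP error, ...
        print(f"{url}: failed after {time.monotonic() - start:.1f}s ({exc})")
        continue
    print(f"{url}: HTTP {status} in {time.monotonic() - start:.1f}s")
}}}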
Change History (3)
comment:1 by , 12 years ago
I see increased Apache load for the last 2 days but no issues with RAM or CPU usage. It could be bots (any volunteers to look through the logs?); I will also look into adding some more Munin charts related to I/O to see if we can find a reason.
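(For the "look through the logs" step, a tally like the sketch below would surface aggressive clients. The log path and the Apache combined log format are assumptions, not OSGeo specifics.)
{{{
#!/usr/bin/env python3
"""Tally the busiest clients in an Apache access log.

A minimal sketch, assuming the combined log format; the path is
hypothetical.
"""
import re
from collections import Counter

LOG_PATH = "/var/log/apache2/access.log"  # hypothetical path

# Combined format: host ident user [time] "request" status bytes "referer" "agent"
LINE_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d+ \S+ "[^"]*" "([^"]*)"')

hosts = Counter()
agents = Counter()
with open(LOG_PATH, errors="replace") as log:
    for line in log:
        m = LINE_RE.match(line)
        if m:
            hosts[m.group(1)] += 1   # count requests per client IP
            agents[m.group(2)] += 1  # count requests per user agent

print("Top 10 client IPs:")
for host, n in hosts.most_common(10):
    print(f"{n:8d}  {host}")

print("\nTop 10 user agents:")
for agent, n in agents.most_common(10):
    print(f"{n:8d}  {agent}")
}}}
Sorting by user agent as well as IP helps separate crawler bots (which usually identify themselves) from misbehaving SVN clients hammering from a single address.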
comment:2 by , 12 years ago
A quick probe or two of server-status shows lots of SVN activity. I suspect aggressive SVN clients but I don't know what to do about it.
Scoreboard Key:
"_" Waiting for Connection, "S" Starting up, "R" Reading Request,
"W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup,
"C" Closing connection, "L" Logging, "G" Gracefully finishing,
"I" Idle cleanup of worker, "." Open slot with no current process

Srv   PID    Acc          M  CPU   SS   Req   Conn  Child  Slot    Client          VHost           Request
0-0   31564  3/7/25803    K  0.93  0    0     0.7   0.02   99.14   164.71.1.150    svn.osgeo.org   PROPFIND /qgis/!svn/vcc/default HTTP/1.1
1-0   31574  1/10/25670   K  0.74  0    912   3.0   0.01   111.84  199.146.147.66  trac.osgeo.org  GET /gdal/chrome/common/js/wikitoolbar.js HTTP/1.1
2-0   31541  1/13/24729   K  1.19  1    3     0.4   0.01   75.11   164.71.1.150    svn.osgeo.org   REPORT /qgis/!svn/vcc/default HTTP/1.1
3-0   31592  1/1/24877    K  0.03  0    18    0.4   0.00   97.28   164.71.1.221    svn.osgeo.org   REPORT /qgis/!svn/vcc/default HTTP/1.1
4-0   31567  1/11/24906   C  0.06  0    1     0.1   0.00   90.73   164.71.1.150    svn.osgeo.org   OPTIONS /qgis/trunk/qgis/src/gui HTTP/1.1
5-0   31554  1/4/24167    C  0.74  0    1     0.1   0.00   216.76  164.71.1.150    svn.osgeo.org   OPTIONS /qgis/trunk/qgis/src/gui HTTP/1.1
6-0   30552  0/6/24008    R  1.44  263  144   0.0   0.04   78.89   ?               ?               ..reading..
7-0   31599  0/0/24761    R  1.31  0    102   0.0   0.00   143.74  ?               ?               ..reading..
8-0   31582  0/2/24636    W  0.04  3    0     0.0   0.01   154.26  66.249.74.90    svn.osgeo.org   GET /fdo/tags/3.4_G061/Providers/GDAL/TestData/Jp2/101862.jp2 H
9-0   31568  2/20/24468   K  0.11  0    0     0.5   0.01   434.02  164.71.1.150    svn.osgeo.org   PROPFIND /qgis/trunk/qgis/src/gui/qgsmapcanvas.cpp HTTP/1.1
10-0  31569  0/12/24864   W  0.93  0    0     0.0   0.01   148.74  216.239.45.93   trac.osgeo.org  GET /server-status HTTP/1.1
11-0  31573  1/17/23650   K  0.03  0    7     0.4   0.01   113.72  164.71.1.150    svn.osgeo.org   REPORT /qgis/!svn/vcc/default HTTP/1.1
12-0  31585  3/5/23744    K  0.04  0    11    0.7   0.01   119.40  164.71.1.150    svn.osgeo.org   PROPFIND /qgis/!svn/vcc/default HTTP/1.1
13-0  31600  0/0/24018    S  1.02  0    1     0.0   0.00   98.28   164.71.1.150    svn.osgeo.org   PROPFIND /qgis/!svn/bc/5567/trunk/qgis/src/gui/qgsmapcanvas.cpp
14-0  31601  1/1/22415    K  0.01  0    4     0.4   0.00   431.01  164.71.1.150    svn.osgeo.org   REPORT /qgis/!svn/vcc/default HTTP/1.1
15-0  31602  0/0/22069    R  0.83  0    4     0.0   0.00   130.01  ?               ?               ..reading..
16-0  31540  1/29/23348   K  1.03  0    4     0.4   0.03   156.63  164.71.1.221    svn.osgeo.org   REPORT /qgis/!svn/vcc/default HTTP/1.1
17-0  -      0/0/24063    .  0.84  1    4     0.0   0.00   77.24   164.71.1.150    svn.osgeo.org   GET /qgis/!svn/ver/5676/trunk/qgis/src/gui/qgsmapcanvas.cpp HTT
18-0  -      0/0/24006    .  0.85  1    4     0.0   0.00   108.05  164.71.1.221    svn.osgeo.org   GET /qgis/!svn/ver/5671/trunk/qgis/src/gui/qgsmapcanvas.cpp HTT
19-0  31517  10/32/19931  C  0.87  1    1     10.3  0.02   203.60  164.71.1.150    svn.osgeo.org   PROPFIND /qgis/!svn/bc/5338/trunk/qgis/src/gui/qgsmapcanvas.cpp
20-0  -      0/0/21972    .  0.72  0    1     0.0   0.00   58.63   164.71.1.150    svn.osgeo.org   OPTIONS /qgis/trunk/qgis/src/gui HTTP/1.1
21-0  31583  1/2/22725    C  0.01  1    3     0.4   0.01   66.48   164.71.1.150    svn.osgeo.org   REPORT /qgis/!svn/vcc/default HTTP/1.1
22-0  31593  4/4/20618    K  0.02  0    1     1.5   0.00   153.49  164.71.1.221    svn.osgeo.org   PROPFIND /qgis/!svn/bc/5438/trunk/qgis/src/gui HTTP/1.1
23-0  31578  0/2/22216    R  1.02  0    1296  0.0   0.00   63.04   ?               ?               ..reading..
24-0  31570  1/10/21638   C  0.13  0    49    0.1   0.01   317.13  164.71.1.221    svn.osgeo.org   OPTIONS /qgis/trunk/qgis/src/gui HTTP/1.1
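(A probe like the one above can also be scripted. The sketch below fetches mod_status's machine-readable output and counts worker states; the URL mirrors the /server-status request visible in the scoreboard, but access to that endpoint is typically restricted, so treat its reachability as an assumption.)
{{{
#!/usr/bin/env python3
"""Summarize Apache worker states from mod_status.

A minimal sketch, assuming mod_status is enabled and that
/server-status?auto is reachable from wherever this runs.
"""
from collections import Counter
import urllib.request

STATUS_URL = "http://trac.osgeo.org/server-status?auto"  # assumed reachable

# Same key as the scoreboard dump above.
KEY = {
    "_": "waiting", "S": "starting", "R": "reading",
    "W": "sending", "K": "keepalive", "D": "dns",
    "C": "closing", "L": "logging", "G": "graceful",
    "I": "idle-cleanup", ".": "open-slot",
}

with urllib.request.urlopen(STATUS_URL, timeout=30) as resp:
    body = resp.read().decode("ascii", errors="replace")

# The ?auto output includes a "Scoreboard:" line of one-character states.
for line in body.splitlines():
    if line.startswith("Scoreboard:"):
        board = line.split(":", 1)[1].strip()
        for state, n in Counter(board).most_common():
            print(f"{KEY.get(state, state):>14}: {n}")
        break
else:
    print("No Scoreboard line; is mod_status enabled?")
}}}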