2004-11-24

Some thoughts on SurgeProtectors from IRC, based on recent events on WikiTravel:

Since the October upgrade to MediaWiki 1.3.x the site has been at an unacceptably high average load (CPU usage). On November 11th, a usage spike caused the server to simply stop handling requests. The admins say they've had similar spikes over last weekend and last week.
I spent about a week programming to reduce, as far as possible, the need to run MediaWiki at all. I took out the little-used user-css and user-js functionality and implemented a caching system that directs most (80%) of our requests to static files on the Web server rather than running the MediaWiki PHP scripts.
This brought down average load to an acceptable level, but this morning another spike happened – load hit 170+ for about half an hour. The WPP had no choice but to turn off our service.
EvanProdromou in his Wikitravel.org down note, end of November 2004 [1]
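
(A rough sketch of the kind of file cache Evan describes: the wiki writes each rendered page into a directory of static files, and the web server is set up to serve those files directly, falling back to the wiki script only on a cache miss. This is not MediaWiki's actual code; the paths and function names are invented.)

    # Sketch of a file cache in front of the wiki script.
    import os

    CACHE_DIR = "/var/www/cache"                # hypothetical location

    def cache_path(pagename):
        return os.path.join(CACHE_DIR, pagename + ".html")

    def save_to_cache(pagename, html):
        # called after rendering a page for an anonymous reader
        with open(cache_path(pagename), "w") as f:
            f.write(html)

    def invalidate(pagename):
        # called after an edit so readers don't get a stale copy
        try:
            os.remove(cache_path(pagename))
        except OSError:
            pass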

forcer
kensanata: What wiki engine were they using?
drewr
forcer: MediaWiki
forcer
Hm.
drewr
same as wikipedia IIRC
forcer
Hm. Weird.
forcer
I didn't know mediawiki was especially CPU hungry or so.
forcer
So I wondered whether that's a specific mediawiki problem, which I doubt considering wikipedia, or what could have happened there
kensanata
i'm guessing their problem compared to wikipedia is that wikipedia has a lot more infrastructure – caching proxies, load-balancing and stuff like that.
kensanata
i also remember EmacsWiki being shut down several times in the past with some cheapo commercial web hosting company.
evanpro
forcer: mediawiki is hugely CPU hungry
kensanata
when i moved to thinkmo.de, i implemented a SurgeProtector.
kensanata
that still doesn't prevent perl from starting once per request, but at least it terminates quickly.
evanpro
kensanata: I don't think an application-level surge protector would have helped
drewr
evanpro: sorry; i'd like to check it out when it returns
evanpro
I think it needs to be at the httpd or even OS level
kensanata
i also asked ThomasWaldmann for mod_throttle or similar, to control access to the cgi script on the web server level.
kensanata
i'm not sure what we are using right now, but i remember asking for it specifically. :)
kensanata
evanpro: surgeprotectors help against stupid spiders (as opposed to malevolent dos attacks)
kensanata
(because they don't get any links to follow)
kensanata
so it's a partial solution in a multi-layered defense…
kensanata
(my take on the issue)
kensanata
but i'm interested in discussing other defense mechanisms!
kensanata
because most wikis will hit this sooner or later.
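
(A SurgeProtector in this sense is just a per-client counter that refuses service when one address asks for too many pages in a short window. A minimal in-memory sketch; the limits are illustrative, and a real CGI implementation would have to keep its counters on disk, since every request starts a fresh process.)

    # Minimal surge protector sketch: allow at most MAX_REQUESTS per client
    # within WINDOW seconds, otherwise the caller answers 503.
    import time

    MAX_REQUESTS = 10          # illustrative limit
    WINDOW = 20                # seconds

    recent = {}                # client address -> list of request timestamps

    def allowed(client_addr):
        now = time.time()
        times = [t for t in recent.get(client_addr, []) if now - t < WINDOW]
        times.append(now)
        recent[client_addr] = times
        return len(times) <= MAX_REQUESTS
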
kensanata
eg. thinkmo.de is using another very simple defense mechanism: it rejects all requests from user agents that identify as wget or LWP… that takes care of the "buggy spiders run by innocent users" case
evanpro
kensanata: good point!
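
(A sketch of that user-agent check in a CGI setting, where the client's user agent arrives in an environment variable; the blocked list is illustrative.)

    # Refuse requests whose User-Agent identifies a bulk downloader.
    import os

    BLOCKED = ("wget", "libwww-perl", "lwp")    # illustrative list

    def is_blocked(user_agent):
        ua = (user_agent or "").lower()
        return any(name in ua for name in BLOCKED)

    if is_blocked(os.environ.get("HTTP_USER_AGENT")):
        print("Status: 403 Forbidden")
        print("Content-Type: text/plain")
        print()
        print("Please don't mirror this wiki with an automated client.")
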
kensanata
emacswiki.org was taken down about three times. in all cases it seemed like innocent users trying to spider the site.
kensanata
so most of the protection i suggested would have worked. :)
kensanata
i just didn't know.
forcer
Hm. Wouldn't caching help with the same problem just as well?
kensanata
sure.
kensanata
404handler, you mean?
kensanata
or real caching?
forcer
real caching
kensanata
oddmuse uses a 10s period before pages turn stale, so repeated downloads of the same page by the same users don't trigger the script.
kensanata
the http/1.1 caching is different, however:
kensanata
a copy of the script gets started for every request. it just terminates faster if it can reply with 304.
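
(In other words, the rendering can be skipped even though the process still starts. A sketch of that cheap path for a CGI script, assuming the page's last edit time is known; the function name is invented.)

    # Answer 304 if the client's cached copy is still current; otherwise
    # emit a Last-Modified header and let the caller render the page.
    import os
    from email.utils import formatdate, parsedate_to_datetime

    def maybe_304(last_changed):        # last_changed: unix timestamp of the last edit
        since = os.environ.get("HTTP_IF_MODIFIED_SINCE")
        if since:
            try:
                if parsedate_to_datetime(since).timestamp() >= last_changed:
                    print("Status: 304 Not Modified")
                    print()
                    return True         # caller sends nothing else
            except (TypeError, ValueError):
                pass
        print("Last-Modified: " + formatdate(last_changed, usegmt=True))
        return False                    # caller prints the page as usual
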
forcer
Ok, maybe I misphrased
forcer
The wiki script would only ever be called to edit a page, and would then generate static pages (of course also updating pages that reference this page). The problem with this is of course that the average spider follows the typical Edit links, and to avoid that you'd need to make the "edit this page" thing a form input button…
forcer
Did I mention that I'm a great fan of static page generation?
forcer
(But do realize that this is much more difficult)
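
(A sketch of that static-generation scheme: the wiki code runs only on saves, and each save rewrites the static HTML for the edited page plus every page that links to it. render() and backlinks() are stand-ins for engine-specific code.)

    import os

    STATIC_DIR = "/var/www/html"        # hypothetical output directory

    def render(pagename):
        # stand-in for the real wikitext-to-HTML renderer
        return "<html><body>" + pagename + "</body></html>"

    def backlinks(pagename):
        # stand-in for a lookup of the pages that link to pagename
        return []

    def publish(pagename):
        with open(os.path.join(STATIC_DIR, pagename + ".html"), "w") as f:
            f.write(render(pagename))

    def on_save(pagename):
        # the only time the wiki code runs: regenerate this page and every
        # page that refers to it, so links and titles stay current
        publish(pagename)
        for other in backlinks(pagename):
            publish(other)
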
bkhl
The edit links problem could also be addressed with robots.txt.
bkhl
At least for those nice spiders…
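
(For instance, a robots.txt along these lines keeps well-behaved spiders away from the edit and history URLs. The paths depend entirely on the engine, and some crawlers are sloppy about matching query strings, so treat this as illustrative only.)

    User-agent: *
    Disallow: /cgi-bin/wiki?action=edit
    Disallow: /cgi-bin/wiki?action=history
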
Fabi
we have improved the performance of MoinMoin by a factor of 10-40 by using long running processes and clever caching
Fabi
half of the time went to loading the python modules and the other half to parsing the page
Fabi
right now each page is parsed only once per edit
kensanata
long running processes? something like mod_python?
Fabi
and compiled to python byte code to keep the dynamic parts of the page dynamic
Fabi
yep
Fabi
mod_py, fast_cgi, twisted and a stand alone version
Fabi
after refactoring the differences between the runtime environments out, adding more was easy
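
(The "parsed only once per edit" part can be pictured as a cache keyed on the page source's modification time. This is not MoinMoin's real layout; the paths and the parse() stand-in are made up.)

    # Reuse the rendered HTML until the page source is newer than the cache.
    import os

    def parse(wikitext):
        # stand-in for the real, expensive wikitext-to-HTML parser
        return "<p>" + wikitext + "</p>"

    def rendered(page_path, cache_path):
        if (os.path.exists(cache_path)
                and os.path.getmtime(cache_path) >= os.path.getmtime(page_path)):
            with open(cache_path) as f:
                return f.read()         # cache hit: no parsing at all
        with open(page_path) as f:
            html = parse(f.read())      # cache miss: parse once...
        with open(cache_path, "w") as f:
            f.write(html)               # ...and remember the result
        return html
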
kensanata
Oddmuse splits pages into static blocks and dynamic blocks (eg. pagenames) and only reparses the dynamic blocks. that helped a lot, too.
kensanata
i'm not yet using mod_perl, however.
kensanata
that would probably help, but make maintenance a bit harder (eg. never forget to touch the wrapper scripts when changing code, etc.)
kensanata
what does fast_cgi do?
Fabi
more or less the same as mod_py
kensanata
ah.
forcer
fast_cgi is a protocol
Fabi
don't ask me for the details
forcer
it communicates with a permanently running program
forcer
The protocol even has support for concurrent connections
forcer
I like it a lot, though it's rarely if ever used :-(
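
(The point of both mod_python and FastCGI is that the same application process stays resident and answers many requests, so nothing is loaded or compiled per request. A minimal Python sketch, assuming the third-party flup package for the FastCGI side; the web server is configured separately to talk to this process.)

    # One long-running process, no per-request startup cost.
    from flup.server.fcgi import WSGIServer

    def application(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"Hello from a permanently running wiki process\n"]

    if __name__ == "__main__":
        WSGIServer(application).run()
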
Fabi
as long as you have only one processor all the methods are more or less equal in performance
kensanata
i see.
Fabi
as long as you don't use CGI
kensanata
haha
Fabi
which is very expensive for a full featured wiki
Fabi
50% of the time only for loading modules… :(
kensanata
maybe not even loading but compiling, since the code will be in some disk cache in memory, i hope.
Fabi
python precompiles the modules
kensanata
right. perl doesn't. :(
Fabi
but they have to be loaded - and loading them still executes the code
Fabi
simply don't use CGI if you want performance


Is phtml (php) faster than CGI? I think it's preloaded into Apache, like mod_perl.

If so, one can produce a half-compiled .phtml form of the page, which executes the really dynamic stuff (like variable CSS, saved usernames, breadcrumbs…) but contains the editable-but-cacheable stuff (like parsed Wiki text, et cetera) as plain HTML. No need to load complex Perl modules, but no need to sacrifice dynamism by producing cached HTML, either.

Does that sound plausible?
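
(Sketched in Python rather than PHP: the "half-compiled" page is just cached HTML with a few named holes that get filled in on every request. The placeholder names are invented; the same idea could be expressed as a .phtml file with small PHP snippets between the cached HTML.)

    from string import Template

    # Produced once, at save time: parsed wiki text stored as finished HTML,
    # with named holes for the genuinely dynamic bits.
    CACHED_PAGE = Template("""<html>
    <head><link rel="stylesheet" href="$css"></head>
    <body>
    <p>Hello, $username.</p>
    <div>$breadcrumbs</div>
    <!-- parsed wiki text, cached as HTML -->
    <p>Welcome to the wiki.</p>
    </body></html>""")

    def serve(css, username, breadcrumbs):
        # per-request work: fill in the dynamic placeholders, nothing else
        return CACHED_PAGE.substitute(css=css, username=username,
                                      breadcrumbs=breadcrumbs)

    print(serve("default.css", "AnonymousReader", "FrontPage"))
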

I really need to get into mod_perl.
