Path: ...!weretis.net!feeder9.news.weretis.net!panix!.POSTED.spitfire.i.gajendra.net!not-for-mail
From: cross@spitfire.i.gajendra.net (Dan Cross)
Newsgroups: comp.os.vms
Subject: Re: Apache + mod_php performance
Date: Mon, 30 Sep 2024 12:56:35 -0000 (UTC)
Organization: PANIX Public Access Internet and UNIX, NYC
Message-ID: <vde763$fa3$3@reader1.panix.com>
References: <vcv0bl$39mnj$1@dont-email.me> <vd7hbi$tgu3$2@dont-email.me> <66f8183e$0$715$14726298@news.sunsite.dk> <66f8a44c$0$716$14726298@news.sunsite.dk>
Injection-Date: Mon, 30 Sep 2024 12:56:35 -0000 (UTC)
Injection-Info: reader1.panix.com; posting-host="spitfire.i.gajendra.net:166.84.136.80"; logging-data="15683"; mail-complaints-to="abuse@panix.com"
X-Newsreader: trn 4.0-test77 (Sep 1, 2010)
Originator: cross@spitfire.i.gajendra.net (Dan Cross)
Bytes: 3934
Lines: 107

In article <66f8a44c$0$716$14726298@news.sunsite.dk>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
>On 9/28/2024 10:52 AM, Arne Vajhøj wrote:
>> On 9/27/2024 8:07 PM, Arne Vajhøj wrote:
>>> And we have a solution.
>>>
>>> httpd.conf
>>>
>>> KeepAlive On
>>> ->
>>> KeepAlive Off
>>>
>>> And numbers improve dramatically.
>>>
>>> nop.txt                    281 req/sec
>>> nop.php                    176 req/sec
>>> real PHP no db con pool     94 req/sec
>>> real PHP db con pool       103 req/sec
>>>
>>> Numbers are not great, but acceptable.
>>>
>>> It is a bug in the code.
>>>
>>> The comment in httpd.conf says:
>>>
>>> # KeepAlive: Whether or not to allow persistent connections (more than
>>> # one request per connection). Set to "Off" to deactivate.
>>>
>>> It does not say that it will reduce throughput to 1/10th if on.
>>
>> Note that the problem may not impact anyone in
>> the real world.
>>
>> I am simulating thousands of independent users using keep alive
>> with a single simulator not using keep alive.
>>
>> It could very well be the case that the problem only arises for
>> the simulator and not for the real users.
>>
>> Still weird though.
>
>Another update.
>
>Client side can also impact keep alive.
>
>HTTP 1.0 : no problem
>HTTP 1.1 with "Connection: close" header : no problem
>HTTP 1.1 without "Connection: close" header : problem
>
>Server side:
>
>KeepAlive On -> Off
>
>solves the problem. But obviously has the drawback of losing
>keep alive capability.

Well ... yes.  That's how the protocol works.  Keep-alive is the
default with HTTP/1.1 unless you explicitly send
`Connection: close`.  See RFC 9112, section 9.3 for details.

>Not a disaster. Back in the early 00's when the prefork MPM was
>common, KeepAlive Off was sometimes suggested for high
>volume sites. But inconvenient.
>
>With KeepAlive On then we have a performance problem.

Actually, it sounds like the bug is in your client, which expects
behavior at odds with that specified in the RFC.

>The cause is that worker processes are unavailable while
>waiting for the next request from a client, even though the
>client is long gone.
>
>That indicates that the cap is:
>
>max throughput (req/sec) = MaxClients / KeepAliveTimeout
>
>The formula holds for low resulting throughput, but it does
>not scale, and seems to be more like 1/3 of that for higher
>resulting throughput.
>
>But if one wants keep alive enabled, then it is something one
>can work with.
>
>My experiments indicate that:
>
>KeepAlive On
>KeepAliveTimeout 15 -> 1
>MaxSpareServers 50 -> 300
>MaxClients 150 -> 300
>
>is almost acceptable.
>
>nop.txt : 100 req/sec
>
>And 1 second should be more than enough for a browser to request
>additional assets within a static HTML page.
>
>But having hundreds of processes each using 25 MB for serving a
>2 byte file at such a low throughput is ridiculous.
>
>OSU (or WASD) still seems like a better option.

See above.  Looks like the problem ended up being between the
keyboard and the chair.

	- Dan C.
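The throughput cap Arne quotes can be sanity-checked against his own
numbers. A minimal sketch in Python, assuming the prefork model where
each worker is pinned to an idle kept-alive connection for up to
KeepAliveTimeout seconds; the 1/3 derating at higher rates is his
empirical observation, not a protocol constant:

```python
# Sketch of the keep-alive throughput cap discussed in the post.
# Model: under prefork, a worker serving a dead client sits idle
# for the full KeepAliveTimeout window before it can be reused.

def keepalive_cap(max_clients: int, keepalive_timeout: int) -> float:
    """Upper bound on req/sec when every worker spends the whole
    keep-alive window waiting on a client that is long gone."""
    return max_clients / keepalive_timeout

# Stock config: MaxClients 150, KeepAliveTimeout 15
print(keepalive_cap(150, 15))      # 10.0 req/sec -- the collapse

# Tuned config: MaxClients 300, KeepAliveTimeout 1
cap = keepalive_cap(300, 1)        # 300.0 req/sec, theoretical
# Arne saw roughly 1/3 of the theoretical cap at higher rates:
print(cap / 3)                     # 100.0, matching his nop.txt result
```

The tuned numbers line up with the reported 100 req/sec for nop.txt,
which is consistent with worker starvation, not mod_php overhead,
being the bottleneck.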