From mboxrd@z Thu Jan 1 00:00:00 1970
From: mailings at hupie.com (Ferry Huberts)
Date: Wed, 21 Mar 2012 08:23:12 +0100
Subject: [PATCH (WIP)] limit CPU usage of cgit processes
In-Reply-To: <20120321015254.GA23083@dcvr.yhbt.net>
References: <20120321015254.GA23083@dcvr.yhbt.net>
Message-ID: <4F6981E0.50909@hupie.com>

How about using a robots.txt on your site?

On 21-03-12 02:52, Eric Wong wrote:
> Here's a work-in-progress patch which I've been running to
> prevent crawlers/bots from using up all the CPU on my system
> when doing expensive queries.
>
> If it's interesting, it should be wired up to an appropriate
> config option...
>
> Signed-off-by: Eric Wong
> ---
>  cgit.c | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
>
> diff --git a/cgit.c b/cgit.c
> index 1d50129..285467c 100644
> --- a/cgit.c
> +++ b/cgit.c
> @@ -768,12 +768,25 @@ static int calc_ttl()
>  	return ctx.cfg.cache_repo_ttl;
>  }
>
> +#include <sys/time.h>
> +#include <sys/resource.h>
> +static void init_rlimit(void)
> +{
> +	struct rlimit rlim = { .rlim_cur = 10, .rlim_max = 10 };
> +	if (setrlimit(RLIMIT_CPU, &rlim) != 0) {
> +		perror("setrlimit");
> +		exit(EXIT_FAILURE);
> +	}
> +}
> +
>  int main(int argc, const char **argv)
>  {
>  	const char *path;
>  	char *qry;
>  	int err, ttl;
>
> +	init_rlimit();
> +
>  	prepare_context(&ctx);
>  	cgit_repolist.length = 0;
>  	cgit_repolist.count = 0;

-- 
Ferry Huberts
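
A minimal robots.txt sketch along the lines suggested above, assuming
cgit is served under /cgit/ (the path is illustrative and depends on
the site's URL layout):

  User-agent: *
  Disallow: /cgit/

Note that robots.txt is purely advisory: it only helps against
crawlers that honor it, while misbehaving bots get through regardless,
which is the case the kernel-enforced rlimit in the patch covers.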
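
The mechanism the patch relies on can be seen in isolation with a
small standalone program (a hypothetical demo, not part of the patch):
once the process has consumed rlim_cur seconds of CPU time the kernel
delivers SIGXCPU, whose default action terminates the process, and
SIGKILL follows at the hard limit, so a runaway request cannot spin
forever.

  /* rlimit_demo.c - build with: cc -o rlimit_demo rlimit_demo.c */
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/time.h>
  #include <sys/resource.h>

  int main(void)
  {
          /* soft limit 1s (SIGXCPU), hard limit 2s (SIGKILL) */
          struct rlimit rlim = { .rlim_cur = 1, .rlim_max = 2 };

          if (setrlimit(RLIMIT_CPU, &rlim) != 0) {
                  perror("setrlimit");
                  return EXIT_FAILURE;
          }

          /* burn CPU; the kernel terminates this loop after ~1s */
          for (volatile unsigned long n = 0; ; n++)
                  ;
  }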