Looking for a way to deal with unwanted HTTP requests using mod_perl


Looking for a way to deal with unwanted HTTP requests using mod_perl

Chris Bennett
I am not sure what is appropriate, given netiquette and practicality for
my server. I am sick of thousands of identical requests in my error log,
plus I want to be able to look over my logs easily to find any real
problems.

Below is a copy of the question I sent to [hidden email]
So far they have never answered any questions I have asked.


Right now I am using a simple script from the error log to block
permanently any requests from that IP using OpenBSD pf.

That simply doesn't work well enough anymore, due to the time lag before
20+ requests arriving at once reach the log file.

OpenBSD no longer ships Apache 1, so I am going to move to Apache 2 and
study how to make the changes; now is a great time for me to bring in
anything new that I haven't used before.

Right now I have a list of regexes for attack URLs, and for requests for
anything with cgi or php in them, which I don't use.

At first glance, it seems to me that setting up a filter to block
anything on my ever-growing list is appropriate. Right or wrong?

If that's right, what should I do to these requests? I would prefer to
not build up a set of IP addresses to block since they may be forged
addresses and a real user might get blocked later on. Plus, I
occasionally screw up and block my own IP address so I keep an SSH
session open before experimenting.

Or am I looking at this wrong?
Any help appreciated.

Chris Bennett


Re: Looking for a way to deal with unwanted HTTP requests using mod_perl

Raul Miller
In my opinion, the appropriate thing to do here is to drop the connection
(so most clients would time out) for bad requests, along with a
short-term IP "block" for anything that becomes a real problem. Not a
true block, though, but instead fixed content, a "your address is being
used as part of a hostile action, please try again later" type message in
place of the legit content.

In this context, a bad request (enough to drop the connection) is a
request for a URL pattern which your site does not host. To trigger
the block you'd need something more obviously malicious.

I don't think mod_perl is going to be able to help you with that, yet.
You (or someone else) would need to do some significant groundwork
first.

I hope this helps (but I know it's inadequate),

Thanks,

--
Raul






On Wed, Sep 28, 2016 at 1:20 PM, Chris Bennett
<[hidden email]> wrote:

> I am not sure what is appropriate, given netiquette and practicality for
> my server. I am sick of thousands of identical requests in my error log,
> plus I want to be able to look over my logs easily to find any real
> problems.
> [...]


Re: Looking for a way to deal with unwanted HTTP requests using mod_perl

trondd-2
In reply to this post by Chris Bennett
On Wed, September 28, 2016 1:20 pm, Chris Bennett wrote:
>
> Right now I am using a simple script from the error log to block
> permanently any requests from that IP using OpenBSD pf.
>
> That simply doesn't work well enough anymore due to the time lag between
> 20+ requests at once getting to the log file.

I use a combination of overload in pf with a bruteforce table and log
parsing.  I don't currently do the log parsing in real time.  You could
use your own script or something like fail2ban for that.  The combination
will quickly lock out rapid connection attempts, while eventually also
getting the slow pokes.
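For reference, a minimal pf.conf sketch of that overload/bruteforce arrangement; the table name, ports, and rate numbers here are only examples to adjust:

```
table <bruteforce> persist
block quick from <bruteforce>
# Trap rapid connectors into the table; a log parser (your own script
# or fail2ban) can add the slow pokes later with:
#   pfctl -t bruteforce -T add <address>
pass in on egress proto tcp to port { 80 443 } \
        keep state (max-src-conn 50, max-src-conn-rate 25/5, \
        overload <bruteforce> flush global)
```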

> Plus, I
> occasionally screw up and block my own IP address so I keep an SSH
> session open before experimenting.
>

Create a "safe" table in pf and put your often used IPs in it (assuming
they are static enough for this) and match that before you check the
bruteforce table.  Also, your rules and tables for ssh can be different
than that of the web server.  No reason for accidentally going to a bad
URL to lock you out of ssh.
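In pf.conf terms that could look like the fragment below; the file path is just one way to populate the table:

```
table <safe> persist file "/etc/pf.safe"
# Never block the known-good addresses, and pass ssh before any
# web-related block rules so a web ban can't lock out ssh.
pass in quick from <safe>
pass in quick on egress proto tcp to port 22
```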

Tim.


Re: Looking for a way to deal with unwanted HTTP requests using mod_perl

Chris Bennett
On Wed, Sep 28, 2016 at 08:54:14PM -0400, trondd wrote:

> On Wed, September 28, 2016 1:20 pm, Chris Bennett wrote:
> >
> > Right now I am using a simple script from the error log to block
> > permanently any requests from that IP using OpenBSD pf.
> >
> > That simply doesn't work well enough anymore due to the time lag between
> > 20+ requests at once getting to the log file.
>
> I use a combination of overload in pf with a bruteforce table and log
> parsing.  I don't currently do the log parsing in real time.  You could
> use your own script or something like fail2ban for that.  The combination
> will quickly lock out rapid connection attempts, while eventually also
> getting the slow pokes.

I don't think bruteforce will be helpful in my case. I do occasionally
get bruteforce attacks, but not very often.
What I usually get are identical attacks of a certain set of variations
of URLs from one IP address. A little later the same thing from another
IP, then another, etc.

One of the reasons I am thinking of a mod_perl solution is that mod_perl
can step in very early in the Apache request cycle. All kinds of things
can be done long before the request reaches the normal content-serving
phases. But I have no experience using those parts of mod_perl; I have
only used later phases in the cycle.
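A sketch of what such an early hook could look like, assuming mod_perl 2 on Apache 2. The module name `MyBlock` and the patterns are placeholders, and this is untested scaffolding rather than a drop-in module:

```perl
# MyBlock.pm -- runs in the access-control phase, before any content
# handler, and refuses URIs matching the blocklist.
package MyBlock;
use strict;
use warnings;
use Apache2::RequestRec ();
use Apache2::Const -compile => qw(FORBIDDEN OK);

# The ever-growing list of patterns for URLs this site never serves.
my @blocked = (qr{\.php\b}i, qr{cgi-bin}i);

sub handler {
    my $r   = shift;
    my $uri = $r->uri;
    for my $re (@blocked) {
        return Apache2::Const::FORBIDDEN if $uri =~ $re;
    }
    return Apache2::Const::OK;
}

1;
```

It would be enabled in httpd.conf with `PerlAccessHandler MyBlock`, so the 403 goes out before any response/content phase runs.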

>
> > Plus, I
> > occasionally screw up and block my own IP address so I keep an SSH
> > session open before experimenting.
> >
>
> Create a "safe" table in pf and put your often used IPs in it (assuming
> they are static enough for this) and match that before you check the
> bruteforce table.  Also, your rules and tables for ssh can be different
> than that of the web server.  No reason for accidentally going to a bad
> URL to lock you out of ssh.
>

Thanks, I hadn't thought of that. Some of my IPs are static, but I also
travel a lot between parts of Mexico and Texas. I will add to pf for
that; I can add hotel IPs, when their WiFi signal is actually
strong enough to connect. That should solve that problem.

For the list: the rest is me rambling on, so don't bother reading any
further, it's OT.


I can develop on my office/home systems, but some of what I use requires
live testing, since I don't have another production server. It needs
live testing because my software depends on what is sent from another
company, followed by processing on my server and then an email customised
for a customer to access paid content on my server. I can fake the input
to a certain degree, but a while ago one customer requested a refund
before getting a username/password from my end, so that input was
unexpected and did not follow the other company's documentation (which is
of poor quality), and I had to fix a problem that was unexpected and
basically undocumented.


Thanks. Very useful for my SSH problem.
Chris Bennett


Re: Looking for a way to deal with unwanted HTTP requests using mod_perl

Patrick Dohman-3
At the risk of sounding last decade…

Sourcing a scanner that attempts to illustrate the goals of an attacker
could make for a worthwhile project.

As an aside, a Postfix version really ought to exist, with its myriad of
status codes.

Regards
Patrick


> On Sep 28, 2016, at 9:04 PM, Chris Bennett
<[hidden email]> wrote:

>
> I don't think bruteforce will be helpful in my case. I do occasionally
> get bruteforce attacks, but not very often.
> [...]


Re: Looking for a way to deal with unwanted HTTP requests using mod_perl

Stuart Henderson
In reply to this post by Chris Bennett
On 2016-09-28, Chris Bennett <[hidden email]> wrote:
> I am not sure what is appropriate, given netiqette and practicality for
> my server. I am sick of thousands of identical requests in my error log,
> plus I want to be able to look over my logs easily to find any real
> problems.

If it's just about the logs, can you write a grep script to simply filter them?
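A trivial sketch of such a filter, as a shell function; the patterns file name and its contents are invented for the example (one extended regex per line):

```shell
# filter_noise: read log lines on stdin and drop any line matching a
# pattern in the given patterns file, leaving only unfamiliar entries
# to review by hand.
filter_noise() {
    grep -v -E -f "$1"
}
```

Run as e.g. `filter_noise noise_patterns < error_log`, growing the patterns file as new noise shows up.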

If you want something more, ModSecurity might be worth a look. I'm not
using Apache myself (and haven't for some years), so I haven't tried it.
It's not in ports, but you might be able to get somewhere basing it off
an existing port for an ap2-* module.


Re: Looking for a way to deal with unwanted HTTP requests using mod_perl

Paul Suh-2
In reply to this post by Chris Bennett
On Sep 28, 2016, at 10:04 PM, Chris Bennett
<[hidden email]> wrote:

>
> I don't think bruteforce will be helpful in my case. I do occasionally
> get bruteforce attacks, but not very often.
> What I usually get are identical attacks of a certain set of variations
> of URLs from one IP address. A little later the same thing from another
> IP, then another, etc.
>
> One of the reasons I am thinking of a mod_perl solution is that mod_perl
> can step in very early in the Apache process. All kinds of things can be
> done long before normal access is available to other processes.
> But I have no experience using any of these parts of mod_perl. I have
> only used later functions in the cycle.

Just as a random thought, have you considered reverse proxying through
something like squid? This would allow you to catch bad requests long before
any kind of processing happens in httpd. I think squid even has direct pf
integration if you want to go that route.
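As a rough idea of the squid.conf side of that, assuming Apache moved to a loopback port; hostnames, ports, and the acl patterns are placeholders, and the exact accel options should be checked against the squid version in use:

```
# squid as a reverse proxy in front of Apache on the loopback
http_port 80 accel defaultsite=www.example.com
cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=web
# Refuse paths the site never serves before they reach httpd
acl badpath urlpath_regex -i \.php cgi-bin
http_access deny badpath
http_access allow all
cache_peer_access web allow all
```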


--Paul



Re: Looking for a way to deal with unwanted HTTP requests using mod_perl

Daniel Ouellet
> I don't think bruteforce will be helpful in my case. I do occasionally
> get bruteforce attacks, but not very often.
> What I usually get are identical attacks of a certain set of variations
> of URLs from one IP address. A little later the same thing from another
> IP, then another, etc.
>
> One of the reasons I am thinking of a mod_perl solution is that mod_perl
> can step in very early in the Apache process. All kinds of things can be
> done long before normal access is available to other processes.
> But I have no experience using any of these parts of mod_perl. I have
> only used later functions in the cycle.

You can look in the archive.

Where Apache is still in use, I did (and continue to do) a redirect back
to the origin instead. You could redirect to some well-funded government
agency instead if you like, as it is faster for them to react to an
attack on themselves than for you to report it. Just a funny thought. The
one catch is that while this setup works very well and is pretty darn
efficient, it also means you need to add to your filters from time to
time when you see something new in your logs.

You could even redirect anything that is NOT valid on your site back to
the origin if you want. I'm not sure that's a good idea, it may well be a
stupid one, but that's up to you if you run your own site. Just a thought.

Anyway, look in this thread; I posted plenty of examples 11 years ago
using the Apache rewrite module.

https://marc.info/?l=openbsd-misc&m=110745960831277&w=2

or the start of the thread

https://marc.info/?t=110745731900004&r=1&w=2

Some even push the idea of redirecting them to various government
agencies. After all, that's just your tax dollars at work, isn't it... I
don't do this myself for ethical reasons, but as you can see, many see it
differently.

For me, I return them to the origin instead, or drop the request.
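A sketch of those two options (refuse outright, or send back to the origin) in Apache 2 mod_rewrite terms; the patterns are only examples for a site that serves no PHP or CGI:

```
RewriteEngine on
# Nothing with php or cgi in it is served here, so refuse those
# requests outright: [F] answers 403 and stops processing.
RewriteRule (\.php|cgi-bin) - [F,NC,L]
# Alternatively, "return them to the origin" by redirecting the
# request back to the client's own address:
# RewriteRule (\.php|cgi-bin) http://%{REMOTE_ADDR}/ [R=302,NC,L]
```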

In the past I also added a log of bad URLs to SQL, to get feedback in
real time, by redirecting to a simple sh script that logs directly into
the database; that was just to support high volume. You can do the same
with PHP alone if your traffic level is high but not huge. Up to you.
There are plenty of ideas on the subject, limited only by your
imagination and how aggressive you want to be.

https://marc.info/?l=openbsd-misc&m=110772972803127&w=2

Anyway, that was 11 years ago; it worked very well then and still does if
you still use Apache, and it is all easy to set up. I can say it is
surprisingly efficient too, especially if you redirect to the right
location. Some attackers seem willing to attack whoever, but when they
are redirected to the big bad boys, curiously the attacks on you stop. I
can only guess they do not like being sent back to places that have the
resources to fight back. (:

In any case, this was a very old idea I put to work long ago; I am sure
you can improve on it if you want. I never used Perl for this, as the
volume I was dealing with at the time was way too high for it, but
servers have improved in performance over the decade, so your mileage may
vary.

Have fun!

Daniel


Re: Looking for a way to deal with unwanted HTTP requests using mod_perl

BergenBergen BergenBergen
There's Kickstarter's Rack::Attack if you're willing to "upgrade" to,
e.g., Ruby on Rails:

https://github.com/kickstarter/rack-attack

I find this quite nice along with those pf bruteforce tables mentioned
earlier.

Murk

On Fri, Sep 30, 2016 at 12:54 AM, Daniel Ouellet <[hidden email]>
wrote:

> [...]


Re: Looking for a way to deal with unwanted HTTP requests using mod_perl

Daniel Ouellet
On 9/29/16 7:20 PM, Murk Fletcher wrote:
> There's Kickstarter's Rack::Attack if you're willing to "upgrade" to ie.
> Ruby on Rails:
>
> https://github.com/kickstarter/rack-attack
>
> I find this quite nice along with those pf bruteforce tables mentioned
> earlier.

Sure, I guess you can, but personally I prefer smaller solutions and
suggestions that are efficient and need minimal resources. This is like
saying "install Windows 10 just to use Notepad"...

I am fine with just vi/vim at times. (:

I think installing the full-blown Ruby on Rails suite just to block
simple bruteforce attempts is overkill, but it's a shrinking free world
for most of it; everyone can choose what they see fit.

Peace

Daniel


Re: Looking for a way to deal with unwanted HTTP requests using mod_perl

BergenBergen BergenBergen
rack-attack itself is very small, and its configuration is minimal. Use it
if you have a Ruby-based web app and want to add that extra layer of
protection to it that pf can't provide.

On Fri, Sep 30, 2016 at 1:30 AM, Daniel Ouellet <[hidden email]> wrote:

> [...]


Re: Looking for a way to deal with unwanted HTTP requests using mod_perl

Alceu R. de Freitas Jr.
I may be a little bit late... but isn't this something already handled by mod_security?

 


Re: Looking for a way to deal with unwanted HTTP requests using mod_perl

Chris Bennett
On Fri, Sep 30, 2016 at 01:26:30AM +0000, Alceu R. de Freitas Jr. wrote:
> I may be a little bit late... but isn't this something already handled by mod_security?
>
>  

mod_security is no longer in the ports tree

Chris


Re: Looking for a way to deal with unwanted HTTP requests using mod_perl

john slee
In reply to this post by Chris Bennett
On 29 September 2016 at 03:20, Chris Bennett <
[hidden email]> wrote:
> I am not sure what is appropriate, given netiqette and practicality for
> my server. I am sick of thousands of identical requests in my error log,
> plus I want to be able to look over my logs easily to find any real
> problems.

Varnish. Keep as many requests as you can away from the webserver
and let it just deal with mod_perl.

If you later decide to integrate with a third-party CDN, being able to
express your wishes in VCL will make for a much more pleasant journey.
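For a taste of VCL, a minimal sketch in Varnish 4 syntax; the backend address, port, and patterns are placeholders:

```
vcl 4.0;

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # Reject noise in Varnish itself; the backend never sees it.
    if (req.url ~ "(?i)(\.php|cgi-bin)") {
        return (synth(403, "Forbidden"));
    }
}
```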

I will admit to having not deployed it on OpenBSD (other than quickly
checking that it would at least install and work at a basic level before
posting), but my team at work do use it in anger on some very busy
sites.

John


Re: Looking for a way to deal with unwanted HTTP requests using mod_perl

Stefan Sperling-5
In reply to this post by Chris Bennett
On Wed, Sep 28, 2016 at 12:20:38PM -0500, Chris Bennett wrote:

> I am not sure what is appropriate, given netiqette and practicality for
> my server. I am sick of thousands of identical requests in my error log,
> plus I want to be able to look over my logs easily to find any real
> problems.
> [...]
>
> Right now I have a list of regexes for attack URL's and requests for
> anything with cgi or php in them, which I don't use.
>
> At first glance, it seems to me that setting up a filter to use to block
> anything in my ever growing list seems appropriate. Right or wrong?

Have you already considered running relayd(8) in front of your
web service to filter out malicious requests?

See the FILTER RULES section in relayd.conf(5).
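As a hedged relayd.conf sketch of such rules, following the FILTER RULES grammar; the protocol name and patterns are invented examples:

```
http protocol "webfilter" {
        # Refuse request paths this site never serves
        block request quick path "/cgi-bin/*"
        block request quick path "*.php*"
        # everything else goes through
        pass
}
```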


Re: Looking for a way to deal with unwanted HTTP requests using mod_perl

Chris Bennett
On Fri, Sep 30, 2016 at 03:00:17PM +0200, Stefan Sperling wrote:
> Have you already considered running relayd(8) in front of your
> web service to filter out malicious requests?
>
> See the FILTER RULES section in relayd.conf(5).
>

No, I hadn't.
Can I redirect to the same server?
If so, I like what I'm seeing in the relayd.conf man page!

Chris


Re: Looking for a way to deal with unwanted HTTP requests using mod_perl

Stefan Sperling-5
On Fri, Sep 30, 2016 at 09:13:43AM -0500, Chris Bennett wrote:
> Can I redirect to the same server?

I don't see why that shouldn't work.

Put your actual web service on some port on 127.0.0.1 and have
relayd send the filtered traffic there.


Re: Looking for a way to deal with unwanted HTTP requests using mod_perl

Chris Bennett
On Fri, Sep 30, 2016 at 04:19:58PM +0200, Stefan Sperling wrote:
> On Fri, Sep 30, 2016 at 09:13:43AM -0500, Chris Bennett wrote:
> > Can I redirect to the same server?
>
> I don't see why that shouldn't work.
>
> Put your actual web service on some port on 127.0.0.1 and have
> relayd send the filtered traffic there.
>

I'm going to have two sites on the server with SSL.
How does that play out with relayd going to the same server?
Or for that matter, with other servers being relayed to?

Chris


Re: Looking for a way to deal with unwanted HTTP requests using mod_perl

Stefan Sperling-5
On Fri, Sep 30, 2016 at 09:46:35AM -0500, Chris Bennett wrote:

> On Fri, Sep 30, 2016 at 04:19:58PM +0200, Stefan Sperling wrote:
> > On Fri, Sep 30, 2016 at 09:13:43AM -0500, Chris Bennett wrote:
> > > Can I redirect to the same server?
> >
> > I don't see why that shouldn't work.
> >
> > Put your actual web service on some port on 127.0.0.1 and have
> > relayd send the filtered traffic there.
> >
>
> I'm going to have two sites on the server with SSL.
> How does that play out with relayd going to the same server?
> Or for that matter, with other servers being relayed to?
>
> Chris

I guess you could let relayd handle TLS as well.
See the TLS RELAYS section.

I can't really give you more information than this.
All I know is what the man page says.