using queues and keep state

Mario Theodoridis
Hello everyone,

I'm using OpenBSD 6.2 as a home router/gateway separating the internet
from a DMZ (httpd, mail, wlan) and an internal network.

I would like to use queues to establish bandwidth policies for traffic
to my web and mail servers and for everything else that goes on.

As an example, when an HTTP request comes in, I really want to control
the bandwidth of the response via a "match out on $extIf" statement, and
maybe even that of the incoming request via a "match out on $dmzIf".
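Schematically, something like this for the response side (a sketch only;
the queue names and numbers are made up for illustration):

queue std on $extIf bandwidth 1M
    queue qWebOut parent std bandwidth 300K            # HTTP responses out
    queue qDefOut parent std bandwidth 700K default    # everything else

match out on $extIf proto tcp from any port 80 set queue qWebOut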

In the past I've used a mix of match and pass rules to make that happen,
but found that unless I set "no state" on my pass rules, the reply
packets do not get evaluated against any outbound match rules.

Unfortunately, using "no state" makes the ruleset rather cumbersome and
hard to read.

Did I overlook something, or is that the way to do this?

I just looked again at man pf.conf and The Book of PF, but don't really
see any mention of keeping state in conjunction with bandwidth shaping.

Could someone enlighten me in this regard?


--
Best regards

Mario Theodoridis

Re: using queues and keep state

Stuart Henderson
The state entry is "tagged" with the queue name. If a packet matches the
state, when it's transmitted, if a queue with that name exists on the
outgoing interface, it's used to restrict the traffic.

So you can simply setup queues like "queue mail on em1 ..." and assign
traffic with "match to port 25 queue mail".
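Spelled out a little more (a sketch only - the sizes and the extra
default queue are invented for the example; as I recall, pf wants a
default queue once queues are defined on an interface):

queue root on em1 bandwidth 10M
    queue std parent root bandwidth 9M default    # everything unmatched
    queue mail parent root bandwidth 1M           # SMTP

match proto tcp to port 25 queue mail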

--
  Sent from a phone, apologies for poor formatting.




Re: using queues and keep state

Mario Theodoridis
On 27/11/17 10:11, Stuart Henderson wrote:
> The state entry is "tagged" with the queue name. If a packet matches
> the state, when it's transmitted, if a queue with that name exists on
> the outgoing interface, it's used to restrict the traffic.
>
> So you can simply setup queues like "queue mail on em1 ..." and assign
> traffic with "match to port 25 queue mail".

So if I understand you correctly, this "no state" business is not needed?

With $webserver living behind $dmzIf and the internet behind $extIf,
which of the following would achieve bandwidth shaping of both requests
and responses: Version 1, Version 2, either one, or neither?


# inbound
queue qIn on $dmzIf bandwidth 12M
     [...]
     queue qiWeb parent qIn bandwidth 499K min 100K max 1M
     queue qiPri parent qIn bandwidth 500K min 100K burst 1M for 1000ms

# outbound
queue qOut on $extIf bandwidth 1M
     [...]
     queue qoWeb parent qOut bandwidth 300K min 30K
     queue qoPri parent qOut bandwidth 150K min 30K burst 300K for 1000ms

# Version 1
pass in quick on $extIf proto tcp from any to $webserver port 80
# requests
pass out quick on $dmzIf proto tcp to $webserver port 80 \
     set queue (qiWeb, qiPri) set prio (4,5)
# responses
pass out log quick on $extIf proto tcp from $webserver port 80 \
     set queue (qoWeb, qoPri) set prio (4, 5)


# Version 2
# requests
match out on $dmzIf proto tcp to $webserver port 80 \
     set queue (qiWeb, qiPri) set prio (4,5)
# responses
match out on $extIf proto tcp from $webserver port 80 \
     set queue (qoWeb, qoPri) set prio (4, 5)
pass in quick on $extIf proto tcp from any to $webserver port 80


--
Best regards

Mario Theodoridis

Re: using queues and keep state

Stuart Henderson
Correct that "no state" is not needed (and generally not wanted - states
are more efficient for existing traffic flows, automatically match ICMP
messages that directly relate to the flow, and validate TCP sequence numbers).

The problem with the rules you've shown is the different names for the "in"
and "out" queues. There's one state table entry for the connection, not
separate ones for in+out.

Use the same queue name instead, something like this:

queue root on $dmzIf bandwidth 12M
    queue qWeb on $dmzIf parent root bandwidth 499K min 100K max 1M
..

queue root on $extIf bandwidth 1M
    queue qWeb on $extIf parent root bandwidth 300K min 30K
..

match proto tcp to $webserver port 80 set queue (qWeb, qPri) set prio (4,5)

Though, "set prio" won't do much here unless the Ethernet interface
bandwidth (not the queue bandwidth) is maxed out.
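Filled out with default queues, that might look something like this (the
qDef names and sizes are placeholders for whatever default queues you use):

queue root on $dmzIf bandwidth 12M
    queue qDef on $dmzIf parent root bandwidth 11M default
    queue qWeb on $dmzIf parent root bandwidth 499K min 100K max 1M

queue root on $extIf bandwidth 1M
    queue qDef on $extIf parent root bandwidth 700K default
    queue qWeb on $extIf parent root bandwidth 300K min 30K

match proto tcp to $webserver port 80 set queue qWeb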


--
  Sent from a phone, apologies for poor formatting.





Re: using queues and keep state

Mario Theodoridis
On 28.11.2017 09:19, Stuart Henderson wrote:

> Use the same queue name instead, something like this:
> [snip]

Thanks for your responses, Stuart.

I tried that, but managed to get only one direction to work.

I must really be missing something here. In my desperation I tried the
example from "The Book of PF", 3rd edition, chapter 7 ("Traffic Shaping
with Queues and Priorities"), section "Always-On Priority and Queues for
Traffic Shaping", "The DMZ Network, Now with Traffic Shaping".

It looks like this:

queue ext on $ext_if bandwidth 2M
        queue ext_main parent ext bandwidth 500K default
        queue ext_web parent ext bandwidth 500K
        queue ext_udp parent ext bandwidth 400K
        queue ext_mail parent ext bandwidth 600K

queue dmz on $dmz_if bandwidth 100M
        queue ext_dmz parent dmz bandwidth 2M
                queue ext_dmz_web parent ext_dmz bandwidth 800K default
                queue ext_dmz_udp parent ext_dmz bandwidth 200K
                queue ext_dmz_mail parent ext_dmz bandwidth 1M
        queue dmz_main parent dmz bandwidth 25M
        queue dmz_web parent dmz bandwidth 25M
        queue dmz_udp parent dmz bandwidth 20M
        queue dmz_mail parent dmz bandwidth 20M

The web traffic extract, without the internal net, is

pass in on $ext_if proto tcp to $webserver port $webports set queue ext_web
pass out on $dmz_if proto tcp to $webserver port $webports \
    set queue ext_dmz_web

I had to add NAT to make my test environment work, and I made dmz_main
the default instead of ext_dmz_web.

Here's my pf.conf as loaded:

# pfctl -vf /etc/pf.conf
ext_if = "em0"
dmz_if = "vether1"
webserver = "192.168.7.2"
webports = "80"
queue ext on em0 bandwidth 2M
queue ext_main parent ext bandwidth 500K default
queue ext_web parent ext bandwidth 500K
queue ext_udp parent ext bandwidth 400K
queue ext_mail parent ext bandwidth 600K
queue dmz on vether1 bandwidth 100M
queue ext_dmz parent dmz bandwidth 2M
queue ext_dmz_web parent ext_dmz bandwidth 800K
queue ext_dmz_udp parent ext_dmz bandwidth 200K
queue ext_dmz_mail parent ext_dmz bandwidth 1M
queue dmz_main parent dmz bandwidth 25M default
queue dmz_web parent dmz bandwidth 25M
queue dmz_udp parent dmz bandwidth 20M
queue dmz_mail parent dmz bandwidth 20M
match out log on vether1 inet from 10.0.0.0/24 to any nat-to (vether1) round-robin
pass in log on em0 inet proto tcp from any to 192.168.7.2 port = 80 flags S/SA set ( queue ext_web )
pass out log on vether1 inet proto tcp from any to 192.168.7.2 port = 80 flags S/SA set ( queue ext_dmz_web )


Initial rule and queue counters:

# pfctl -vvqs rules
@0 match out log on vether1 inet from 10.0.0.0/24 to any nat-to (vether1:2) round-robin
   [ Evaluations: 490       Packets: 0         Bytes: 0           States: 0     ]
   [ Inserted: uid 0 pid 75321 State Creations: 0     ]
@1 pass in log on em0 inet proto tcp from any to 192.168.7.2 port = 80 flags S/SA set ( queue ext_web )
   [ Evaluations: 490       Packets: 0         Bytes: 0           States: 0     ]
   [ Inserted: uid 0 pid 75321 State Creations: 0     ]
@2 pass out log on vether1 inet proto tcp from any to 192.168.7.2 port = 80 flags S/SA set ( queue ext_dmz_web )
   [ Evaluations: 262       Packets: 0         Bytes: 0           States: 0     ]
   [ Inserted: uid 0 pid 75321 State Creations: 0     ]

# pfctl -vqs queue
queue ext on em0 bandwidth 2M
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue ext_main parent ext bandwidth 500K default
   [ pkts:          1  bytes:         60  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue ext_web parent ext bandwidth 500K
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue ext_udp parent ext bandwidth 400K
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue ext_mail parent ext bandwidth 600K
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue dmz on vether1 bandwidth 100M
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue ext_dmz parent dmz bandwidth 2M
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue ext_dmz_web parent ext_dmz bandwidth 800K
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue ext_dmz_udp parent ext_dmz bandwidth 200K
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue ext_dmz_mail parent ext_dmz bandwidth 1M
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue dmz_main parent dmz bandwidth 25M default
   [ pkts:         20  bytes:       6077  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue dmz_web parent dmz bandwidth 25M
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue dmz_udp parent dmz bandwidth 20M
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue dmz_mail parent dmz bandwidth 20M
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]


Then I did one curl http:/..... call from outside of $ext_if to a
webserver behind $dmz_if:

# tcpdump -e -n -ttt -i pflog0 ip and port 80
tcpdump: WARNING: snaplen raised from 116 to 160
tcpdump: listening on pflog0, link-type PFLOG
Nov 30 21:15:39.431581 rule 1/(match) pass in on em0: 10.0.0.6.54021 > 192.168.7.2.80: S 674821420:674821420(0) win 29200 <mss 1460,sackOK,timestamp 1385678 0,nop,wscale 7> (DF)
Nov 30 21:15:39.431628 rule 0/(match) match out on vether1: 192.168.7.14.56560 > 192.168.7.2.80: S 674821420:674821420(0) win 29200 <mss 1460,sackOK,timestamp 1385678 0,nop,wscale 7> (DF)
Nov 30 21:15:39.431638 rule 2/(match) pass out on vether1: 192.168.7.14.56560 > 192.168.7.2.80: S 674821420:674821420(0) win 29200 <mss 1460,sackOK,timestamp 1385678 0,nop,wscale 7> (DF)
^C
3 packets received by filter
0 packets dropped by kernel


Here are the counters afterwards:

# pfctl -vvqs rules
@0 match out log on vether1 inet from 10.0.0.0/24 to any nat-to (vether1:2) round-robin
   [ Evaluations: 912       Packets: 33        Bytes: 23306       States: 1     ]
   [ Inserted: uid 0 pid 75321 State Creations: 0     ]
@1 pass in log on em0 inet proto tcp from any to 192.168.7.2 port = 80 flags S/SA set ( queue ext_web )
   [ Evaluations: 912       Packets: 33        Bytes: 23306       States: 1     ]
   [ Inserted: uid 0 pid 75321 State Creations: 1     ]
@2 pass out log on vether1 inet proto tcp from any to 192.168.7.2 port = 80 flags S/SA set ( queue ext_dmz_web )
   [ Evaluations: 474       Packets: 33        Bytes: 23306       States: 1     ]
   [ Inserted: uid 0 pid 75321 State Creations: 1     ]

The rule counters look more or less like what I'd expect, except perhaps
for the byte count being the same everywhere.

# pfctl -vqs queue
queue ext on em0 bandwidth 2M
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue ext_main parent ext bandwidth 500K default
   [ pkts:          1  bytes:         60  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue ext_web parent ext bandwidth 500K
   [ pkts:         20  bytes:      22968  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue ext_udp parent ext bandwidth 400K
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue ext_mail parent ext bandwidth 600K
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue dmz on vether1 bandwidth 100M
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue ext_dmz parent dmz bandwidth 2M
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue ext_dmz_web parent ext_dmz bandwidth 800K
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue ext_dmz_udp parent ext_dmz bandwidth 200K
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue ext_dmz_mail parent ext_dmz bandwidth 1M
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue dmz_main parent dmz bandwidth 25M default
   [ pkts:         34  bytes:      10122  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue dmz_web parent dmz bandwidth 25M
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue dmz_udp parent dmz bandwidth 20M
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue dmz_mail parent dmz bandwidth 20M
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]


Here I expected the byte counters of both ext_dmz_web and ext_web to go
up, but only ext_web triggered. So am I reading this wrong, or did the
request traffic indeed use the default queue dmz_main instead?

I must be missing something.
Clue stick desperately needed.

--
Best regards

Mario Theodoridis

Re: using queues and keep state

Mario Theodoridis
On 28.11.2017 09:19, Stuart Henderson wrote:

> Use the same queue name instead, something like this:
> [snip]

Stuart,

here's the detail of what happened to each queue after one curl call
with your suggestion. This snippet

queue root on $dmzIf bandwidth 12M
     queue qDef on $dmzIf parent root bandwidth 11M default
     queue qWeb on $dmzIf parent root bandwidth 1M

queue root on $extIf bandwidth 1M
     queue qDef on $extIf parent root bandwidth 700K default
     queue qWeb on $extIf parent root bandwidth 300K

match proto tcp to $webserver port 80 set queue qWeb

pass in log on $extIf proto tcp to $webserver port $webports
pass out log on $dmzIf proto tcp to $webserver port $webports

Results in

# pfctl -vf /etc/pf.conf
extIf = "em0"
dmzIf = "vether1"
webserver = "192.168.7.2"
webports = "80"
queue root on vether1 bandwidth 12M
queue qDef parent root bandwidth 11M default
queue qWeb parent root bandwidth 1M
queue root on em0 bandwidth 1M
queue qDef parent root bandwidth 700K default
queue qWeb parent root bandwidth 300K
match inet proto tcp from any to 192.168.7.2 port = 80 set ( queue qWeb )
pass in log on em0 inet proto tcp from any to 192.168.7.2 port = 80 flags S/SA
pass out log on vether1 inet proto tcp from any to 192.168.7.2 port = 80 flags S/SA

# pfctl -vqs queue
queue root on vether1 bandwidth 12M
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue qDef parent root bandwidth 11M default
   [ pkts:         17  bytes:       5432  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue qWeb parent root bandwidth 1M
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue root on em0 bandwidth 1M
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue qDef parent root bandwidth 700K default
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue qWeb parent root bandwidth 300K
   [ pkts:         20  bytes:      22968  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]


Then this

queue root on $dmzIf bandwidth 12M
     queue qDef on $dmzIf parent root bandwidth 11M default
     queue qWeb on $dmzIf parent root bandwidth 1M

queue root on $extIf bandwidth 1M
     queue qDef on $extIf parent root bandwidth 700K default
     queue qWeb on $extIf parent root bandwidth 300K

match proto tcp to $webserver port 80 set queue qWeb
match proto tcp from $webserver port 80 set queue qWeb

pass in log on $extIf proto tcp to $webserver port $webports
pass out log on $dmzIf proto tcp to $webserver port $webports

Results in

# pfctl -vf /etc/pf.conf
extIf = "em0"
dmzIf = "vether1"
webserver = "192.168.7.2"
webports = "80"
queue root on vether1 bandwidth 12M
queue qDef parent root bandwidth 11M default
queue qWeb parent root bandwidth 1M
queue root on em0 bandwidth 1M
queue qDef parent root bandwidth 700K default
queue qWeb parent root bandwidth 300K
match inet proto tcp from any to 192.168.7.2 port = 80 set ( queue qWeb )
match inet proto tcp from 192.168.7.2 port = 80 to any set ( queue qWeb )
pass in log on em0 inet proto tcp from any to 192.168.7.2 port = 80 flags S/SA
pass out log on vether1 inet proto tcp from any to 192.168.7.2 port = 80 flags S/SA

# pfctl -vqs queue
queue root on vether1 bandwidth 12M
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue qDef parent root bandwidth 11M default
   [ pkts:         24  bytes:       5834  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue qWeb parent root bandwidth 1M
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue root on em0 bandwidth 1M
   [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue qDef parent root bandwidth 700K default
   [ pkts:          1  bytes:         60  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]
queue qWeb parent root bandwidth 300K
   [ pkts:         20  bytes:      22968  dropped pkts:      0 bytes:      0 ]
   [ qlength:   0/ 50 ]

So the response seems to be all I can control.
Now, while one can argue there's no need to queue incoming traffic since
it has already eaten the bandwidth by the time it arrives, what about
source quenches for large upload requests? Wouldn't queuing the requests
trigger those?

--
Best regards

Mario Theodoridis

Re: using queues and keep state

Stuart Henderson
On 2017/11/30 22:48, Mario Theodoridis wrote:

> [config and counters snipped]
>
> So the response seems to be all I can control.
> Now, while one can argue there's no need to queue incoming traffic since
> it has already eaten the bandwidth by the time it arrives, what about
> source quenches for large upload requests? Wouldn't queuing the requests
> trigger those?

Not quite sure what is wrong, but it seems weird to be using vether
here: the queue is done on transmission, and I don't see why you would
be transmitting on vether. Normally you want to queue on the physical
interface.
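
In other words, something along these lines (a sketch only, assuming the
physical DMZ-side interface is em1; the sizes are just the ones from your
own config):

queue root on em1 bandwidth 12M
    queue qDef parent root bandwidth 11M default
    queue qWeb parent root bandwidth 1M

match proto tcp to $webserver port 80 set queue qWeb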

Re: using queues and keep state

Mario Theodoridis
On 01/12/17 12:25, Stuart Henderson wrote:

> On 2017/11/30 22:48, Mario Theodoridis wrote:
>> [snip]....
>> So all i can control seems to be the response.
>> Now while one can argue, no need to queue incoming traffic as it already ate
>> the bandwidth, i would say, what about source quenches for large upload
>> requests?
>> Wouldn't these be triggered by queuing the requests?
> Not quite sure what is wrong, but it seems weird to be using vether
> here: the queue is done on transmission, and I don't see why you would
> be transmitting on vether. Normally you want to queue on the physical
> interface.

Ah, ha. Could that be the issue?

vether1 is a bridge consisting of em1 (wired DMZ) and athn0 (WLAN).

I'll try with em1 tonight and see.

Thank you, Stuart.


--
Best regards

Mario Theodoridis