Slow VPN Performance

Slow VPN Performance

Michael Sideris
Hey @misc,

----------- ENDPOINT INFO -----------

`dmesg`

(G-VPN)
OpenBSD 5.1 (GENERIC.MP) #207: Sun Feb 12 09:42:14 MST 2012
    [hidden email]:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 2146172928 (2046MB)
avail mem = 2074935296 (1978MB)
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.3 @ 0xfa850 (75 entries)
bios0: vendor Dell Computer Corporation version "A03" date 01/04/2006
bios0: Dell Computer Corporation PowerEdge SC1425
acpi0 at bios0: rev 0
acpi0: sleep states S0 S4 S5
acpi0: tables DSDT FACP APIC SPCR HPET MCFG
acpi0: wakeup devices PCI0(S5) PALO(S5) PXH_(S5) PXHB(S5) PXHA(S5) PICH(S5)
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimadt0 at acpi0 addr 0xfee00000: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Intel(R) Xeon(TM) CPU 2.80GHz, 2800.48 MHz
cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,SBF,SSE3,MWAIT,DS-CPL,CNXT-ID,CX16,xTPR,NXE,LONG
cpu0: 1MB 64b/line 8-way L2 cache
cpu0: apic clock running at 200MHz
cpu1 at mainbus0: apid 1 (application processor)
cpu1: Intel(R) Xeon(TM) CPU 2.80GHz, 2800.11 MHz
cpu1: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,SBF,SSE3,MWAIT,DS-CPL,CNXT-ID,CX16,xTPR,NXE,LONG
cpu1: 1MB 64b/line 8-way L2 cache
ioapic0 at mainbus0: apid 2 pa 0xfec00000, version 20, 24 pins
ioapic0: misconfigured as apic 0, remapped to apid 2
ioapic1 at mainbus0: apid 3 pa 0xfec80000, version 20, 24 pins
ioapic1: misconfigured as apic 0, remapped to apid 3
ioapic2 at mainbus0: apid 4 pa 0xfec80800, version 20, 24 pins
ioapic2: misconfigured as apic 0, remapped to apid 4
acpihpet0 at acpi0: 14318179 Hz
acpimcfg0 at acpi0 addr 0xe0000000, bus 0-255
acpiprt0 at acpi0: bus 0 (PCI0)
acpiprt1 at acpi0: bus 1 (PALO)
acpiprt2 at acpi0: bus 3 (PXHB)
acpiprt3 at acpi0: bus 2 (PXHA)
acpiprt4 at acpi0: bus 4 (PICH)
acpicpu0 at acpi0
acpicpu1 at acpi0
ipmi at mainbus0 not configured
pci0 at mainbus0 bus 0
pchb0 at pci0 dev 0 function 0 "Intel E7520 Host" rev 0x09
ppb0 at pci0 dev 2 function 0 "Intel E7520 PCIE" rev 0x09
pci1 at ppb0 bus 1
ppb1 at pci1 dev 0 function 0 "Intel 6700PXH PCIE-PCIX" rev 0x09
pci2 at ppb1 bus 2
em0 at pci2 dev 4 function 0 "Intel PRO/1000MT (82541GI)" rev 0x05:
apic 3 int 0, address 00:14:22:72:61:c6
ppb2 at pci1 dev 0 function 2 "Intel 6700PXH PCIE-PCIX" rev 0x09
pci3 at ppb2 bus 3
isp0 at pci3 dev 7 function 0 "QLogic ISP2312" rev 0x02: apic 4 int 2
isp0: board type 2312 rev 0x2, loaded firmware rev 3.3.19
scsibus0 at isp0: 512 targets, WWPN 210000e08b1d3fc7, WWNN 200000e08b1d3fc7
uhci0 at pci0 dev 29 function 0 "Intel 82801EB/ER USB" rev 0x02: apic 2 int 16
uhci1 at pci0 dev 29 function 1 "Intel 82801EB/ER USB" rev 0x02: apic 2 int 19
ehci0 at pci0 dev 29 function 7 "Intel 82801EB/ER USB2" rev 0x02: apic 2 int 23
usb0 at ehci0: USB revision 2.0
uhub0 at usb0 "Intel EHCI root hub" rev 2.00/1.00 addr 1
ppb3 at pci0 dev 30 function 0 "Intel 82801BA Hub-to-PCI" rev 0xc2
pci4 at ppb3 bus 4
em1 at pci4 dev 3 function 0 "Intel PRO/1000MT (82541GI)" rev 0x05:
apic 2 int 20, address 00:14:22:72:61:c7
vga1 at pci4 dev 13 function 0 "ATI Radeon VE" rev 0x00
wsdisplay0 at vga1 mux 1: console (80x25, vt100 emulation)
wsdisplay0: screen 1-5 added (80x25, vt100 emulation)
radeondrm0 at vga1: apic 2 int 17
drm0 at radeondrm0
pcib0 at pci0 dev 31 function 0 "Intel 82801EB/ER LPC" rev 0x02
pciide0 at pci0 dev 31 function 1 "Intel 82801EB/ER IDE" rev 0x02:
DMA, channel 0 configured to compatibility, channel 1 configured to
compatibility
atapiscsi0 at pciide0 channel 0 drive 0
scsibus1 at atapiscsi0: 2 targets
cd0 at scsibus1 targ 0 lun 0: <HL-DT-ST, CD-ROM GCR-8240N, 1.06> ATAPI
5/cdrom removable
cd0(pciide0:0:0): using PIO mode 4, Ultra-DMA mode 2
pciide0: channel 1 ignored (disabled)
pciide1 at pci0 dev 31 function 2 "Intel 82801EB SATA" rev 0x02: DMA,
channel 0 configured to native-PCI, channel 1 configured to native-PCI
pciide1: using apic 2 int 18 for native-PCI interrupt
wd0 at pciide1 channel 0 drive 0: <Maxtor 7Y250M0>
wd0: 16-sector PIO, LBA48, 238418MB, 488281250 sectors
wd0(pciide1:0:0): using PIO mode 4, Ultra-DMA mode 6
usb1 at uhci0: USB revision 1.0
uhub1 at usb1 "Intel UHCI root hub" rev 1.00/1.00 addr 1
usb2 at uhci1: USB revision 1.0
uhub2 at usb2 "Intel UHCI root hub" rev 1.00/1.00 addr 1
isa0 at pcib0
isadma0 at isa0
com0 at isa0 port 0x3f8/8 irq 4: ns16550a, 16 byte fifo
pckbc0 at isa0 port 0x60/5
pckbd0 at pckbc0 (kbd slot)
pckbc0: using irq 1 for kbd slot
wskbd0 at pckbd0: console keyboard, using wsdisplay0
pcppi0 at isa0 port 0x61
spkr0 at pcppi0
mtrr: Pentium Pro MTRR support
vscsi0 at root
scsibus2 at vscsi0: 256 targets
softraid0 at root
scsibus3 at softraid0: 256 targets
root on wd0a (a29928cba946c858.a) swap on wd0b dump on wd0b

(L-VPN)
OpenBSD 5.1 (GENERIC.MP) #207: Sun Feb 12 09:42:14 MST 2012
    [hidden email]:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 3219914752 (3070MB)
avail mem = 3120099328 (2975MB)
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.3 @ 0xfa850 (75 entries)
bios0: vendor Dell Computer Corporation version "A03" date 01/04/2006
bios0: Dell Computer Corporation PowerEdge SC1425
acpi0 at bios0: rev 0
acpi0: sleep states S0 S4 S5
acpi0: tables DSDT FACP APIC SPCR HPET MCFG
acpi0: wakeup devices PCI0(S5) PALO(S5) PXH_(S5) PXHB(S5) PXHA(S5) PICH(S5)
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimadt0 at acpi0 addr 0xfee00000: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Intel(R) Xeon(TM) CPU 2.80GHz, 2800.45 MHz
cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,SBF,SSE3,MWAIT,DS-CPL,CNXT-ID,CX16,xTPR,NXE,LONG
cpu0: 1MB 64b/line 8-way L2 cache
cpu0: apic clock running at 200MHz
cpu1 at mainbus0: apid 1 (application processor)
cpu1: Intel(R) Xeon(TM) CPU 2.80GHz, 2800.11 MHz
cpu1: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,SBF,SSE3,MWAIT,DS-CPL,CNXT-ID,CX16,xTPR,NXE,LONG
cpu1: 1MB 64b/line 8-way L2 cache
ioapic0 at mainbus0: apid 2 pa 0xfec00000, version 20, 24 pins
ioapic0: misconfigured as apic 0, remapped to apid 2
ioapic1 at mainbus0: apid 3 pa 0xfec80000, version 20, 24 pins
ioapic1: misconfigured as apic 0, remapped to apid 3
ioapic2 at mainbus0: apid 4 pa 0xfec80800, version 20, 24 pins
ioapic2: misconfigured as apic 0, remapped to apid 4
acpihpet0 at acpi0: 14318179 Hz
acpimcfg0 at acpi0 addr 0xe0000000, bus 0-255
acpiprt0 at acpi0: bus 0 (PCI0)
acpiprt1 at acpi0: bus 1 (PALO)
acpiprt2 at acpi0: bus 3 (PXHB)
acpiprt3 at acpi0: bus 2 (PXHA)
acpiprt4 at acpi0: bus 4 (PICH)
acpicpu0 at acpi0
acpicpu1 at acpi0
ipmi at mainbus0 not configured
pci0 at mainbus0 bus 0
pchb0 at pci0 dev 0 function 0 "Intel E7520 Host" rev 0x09
ppb0 at pci0 dev 2 function 0 "Intel E7520 PCIE" rev 0x09
pci1 at ppb0 bus 1
ppb1 at pci1 dev 0 function 0 "Intel 6700PXH PCIE-PCIX" rev 0x09
pci2 at ppb1 bus 2
em0 at pci2 dev 4 function 0 "Intel PRO/1000MT (82541GI)" rev 0x05:
apic 3 int 0, address 00:14:22:72:5e:bd
ppb2 at pci1 dev 0 function 2 "Intel 6700PXH PCIE-PCIX" rev 0x09
pci3 at ppb2 bus 3
em1 at pci3 dev 7 function 0 "Intel PRO/1000MT (82546GB)" rev 0x03:
apic 4 int 2, address 00:04:23:ce:d0:0c
em2 at pci3 dev 7 function 1 "Intel PRO/1000MT (82546GB)" rev 0x03:
apic 4 int 3, address 00:04:23:ce:d0:0d
uhci0 at pci0 dev 29 function 0 "Intel 82801EB/ER USB" rev 0x02: apic 2 int 16
uhci1 at pci0 dev 29 function 1 "Intel 82801EB/ER USB" rev 0x02: apic 2 int 19
ehci0 at pci0 dev 29 function 7 "Intel 82801EB/ER USB2" rev 0x02: apic 2 int 23
usb0 at ehci0: USB revision 2.0
uhub0 at usb0 "Intel EHCI root hub" rev 2.00/1.00 addr 1
ppb3 at pci0 dev 30 function 0 "Intel 82801BA Hub-to-PCI" rev 0xc2
pci4 at ppb3 bus 4
em3 at pci4 dev 3 function 0 "Intel PRO/1000MT (82541GI)" rev 0x05:
apic 2 int 20, address 00:14:22:72:5e:be
vga1 at pci4 dev 13 function 0 "ATI Radeon VE" rev 0x00
wsdisplay0 at vga1 mux 1: console (80x25, vt100 emulation)
wsdisplay0: screen 1-5 added (80x25, vt100 emulation)
radeondrm0 at vga1: apic 2 int 17
drm0 at radeondrm0
pcib0 at pci0 dev 31 function 0 "Intel 82801EB/ER LPC" rev 0x02
pciide0 at pci0 dev 31 function 1 "Intel 82801EB/ER IDE" rev 0x02:
DMA, channel 0 configured to compatibility, channel 1 configured to
compatibility
atapiscsi0 at pciide0 channel 0 drive 0
scsibus0 at atapiscsi0: 2 targets
cd0 at scsibus0 targ 0 lun 0: <HL-DT-ST, CD-ROM GCR-8240N, 1.06> ATAPI
5/cdrom removable
cd0(pciide0:0:0): using PIO mode 4, Ultra-DMA mode 2
pciide0: channel 1 ignored (disabled)
pciide1 at pci0 dev 31 function 2 "Intel 82801EB SATA" rev 0x02: DMA,
channel 0 configured to native-PCI, channel 1 configured to native-PCI
pciide1: using apic 2 int 18 for native-PCI interrupt
wd0 at pciide1 channel 0 drive 0: <WDC WD400BD-75LRA0>
wd0: 16-sector PIO, LBA48, 38146MB, 78125000 sectors
wd0(pciide1:0:0): using PIO mode 4, Ultra-DMA mode 6
wd1 at pciide1 channel 1 drive 0: <Maxtor 7Y250M0>
wd1: 16-sector PIO, LBA48, 238418MB, 488281250 sectors
wd1(pciide1:1:0): using PIO mode 4, Ultra-DMA mode 6
usb1 at uhci0: USB revision 1.0
uhub1 at usb1 "Intel UHCI root hub" rev 1.00/1.00 addr 1
usb2 at uhci1: USB revision 1.0
uhub2 at usb2 "Intel UHCI root hub" rev 1.00/1.00 addr 1
isa0 at pcib0
isadma0 at isa0
com0 at isa0 port 0x3f8/8 irq 4: ns16550a, 16 byte fifo
pckbc0 at isa0 port 0x60/5
pckbd0 at pckbc0 (kbd slot)
pckbc0: using irq 1 for kbd slot
wskbd0 at pckbd0: console keyboard, using wsdisplay0
pcppi0 at isa0 port 0x61
spkr0 at pcppi0
mtrr: Pentium Pro MTRR support
vscsi0 at root
scsibus1 at vscsi0: 256 targets
softraid0 at root
scsibus2 at softraid0: 256 targets
root on wd0a (c66c13b9ce71dcfc.a) swap on wd0b dump on wd0b


`ifconfig` (for security, G.G.G.G stands in for the public IP of
G-VPN and L.L.L.L for the public IP of L-VPN)

(G-VPN)
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 33152
        priority: 0
        groups: lo
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x4
        inet 127.0.0.1 netmask 0xff000000
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        lladdr 00:14:22:72:61:c6
        priority: 0
        media: Ethernet autoselect (1000baseT full-duplex)
        status: active
        inet 10.1.50.181 netmask 0xffffff00 broadcast 10.1.50.255
        inet6 fe80::214:22ff:fe72:61c6%em0 prefixlen 64 scopeid 0x1
em1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        lladdr 00:14:22:72:61:c7
        priority: 0
        groups: egress
        media: Ethernet autoselect (1000baseT full-duplex)
        status: active
        inet G.G.G.G netmask 0xfffffff0 broadcast G.G.G.X
        inet6 fe80::214:22ff:fe72:61c7%em1 prefixlen 64 scopeid 0x2
enc0: flags=0<>
        priority: 0
        groups: enc
        status: active
pflog0: flags=141<UP,RUNNING,PROMISC> mtu 33152
        priority: 0
        groups: pflog

(L-VPN)
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 33152
        priority: 0
        groups: lo
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x6
        inet 127.0.0.1 netmask 0xff000000
em0: flags=8b43<UP,BROADCAST,RUNNING,PROMISC,ALLMULTI,SIMPLEX,MULTICAST>
mtu 1500
        lladdr 00:14:22:72:5e:bd
        priority: 0
        trunk: trunkdev trunk0
        media: Ethernet autoselect (1000baseT full-duplex)
        status: active
        inet6 fe80::204:23ff:fece:d00c%em0 prefixlen 64 scopeid 0x1
em1: flags=8b43<UP,BROADCAST,RUNNING,PROMISC,ALLMULTI,SIMPLEX,MULTICAST>
mtu 1500
        lladdr 00:14:22:72:5e:bd
        priority: 0
        trunk: trunkdev trunk0
        media: Ethernet autoselect (1000baseT full-duplex)
        status: active
        inet6 fe80::204:23ff:fece:d00d%em1 prefixlen 64 scopeid 0x2
em2: flags=8b43<UP,BROADCAST,RUNNING,PROMISC,ALLMULTI,SIMPLEX,MULTICAST>
mtu 1500
        lladdr 00:04:23:ce:d0:0d
        priority: 0
        trunk: trunkdev trunk1
        media: Ethernet autoselect (1000baseT full-duplex)
        status: active
        inet6 fe80::214:22ff:fe72:5ebe%em2 prefixlen 64 scopeid 0x3
em3: flags=8b43<UP,BROADCAST,RUNNING,PROMISC,ALLMULTI,SIMPLEX,MULTICAST>
mtu 1500
        lladdr 00:04:23:ce:d0:0d
        priority: 0
        trunk: trunkdev trunk1
        media: Ethernet autoselect (1000baseT full-duplex)
        status: active
        inet6 fe80::214:22ff:fe72:5ebd%em3 prefixlen 64 scopeid 0x4
enc0: flags=0<>
        priority: 0
        groups: enc
        status: active
trunk0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        lladdr 00:14:22:72:5e:bd
        priority: 0
        trunk: trunkproto lacp
        trunk id: [(8000,00:14:22:72:5e:bd,403C,0000,0000),
                 (8000,00:23:05:1d:fb:80,000C,0000,0000)]
                trunkport em1 active,collecting,distributing
                trunkport em0 collecting,distributing
        groups: trunk
        media: Ethernet autoselect
        status: active
        inet6 fe80::214:22ff:fe72:5ebd%trunk0 prefixlen 64 scopeid 0x7
trunk1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        lladdr 00:04:23:ce:d0:0d
        priority: 0
        trunk: trunkproto lacp
        trunk id: [(8000,00:04:23:ce:d0:0d,4044,0000,0000),
                 (8000,00:23:05:3f:19:80,0010,0000,0000)]
                trunkport em3 active,collecting,distributing
                trunkport em2 collecting,distributing
        groups: trunk
        media: Ethernet autoselect
        status: active
        inet6 fe80::204:23ff:fece:d00d%trunk1 prefixlen 64 scopeid 0x8
vlan10: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        lladdr 00:14:22:72:5e:bd
        priority: 0
        vlan: 10 parent interface: trunk0
        groups: vlan egress
        status: active
        inet6 fe80::214:22ff:fe72:5ebd%vlan10 prefixlen 64 scopeid 0x9
        inet L.L.L.L netmask 0xfffffff8 broadcast L.L.L.X
vlan20: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        lladdr 00:04:23:ce:d0:0d
        priority: 0
        vlan: 20 parent interface: trunk1
        groups: vlan
        status: active
        inet6 fe80::204:23ff:fece:d00d%vlan20 prefixlen 64 scopeid 0xa
        inet 10.240.2.169 netmask 0xffffff00 broadcast 10.240.2.255
vlan30: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        lladdr 00:04:23:ce:d0:0d
        priority: 0
        vlan: 30 parent interface: trunk1
        groups: vlan
        status: active
        inet6 fe80::204:23ff:fece:d00d%vlan30 prefixlen 64 scopeid 0xb
        inet 10.240.3.169 netmask 0xffffff00 broadcast 10.240.3.255
vlan40: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        lladdr 00:14:22:72:5e:bd
        priority: 0
        vlan: 40 parent interface: trunk0
        groups: vlan
        status: active
        inet6 fe80::214:22ff:fe72:5ebd%vlan40 prefixlen 64 scopeid 0xc
        inet 10.240.4.169 netmask 0xffffff00 broadcast 10.240.4.255
pflog0: flags=141<UP,RUNNING,PROMISC> mtu 33152
        priority: 0
        groups: pflog


`cat /etc/pf.conf`

(G-VPN)

int_if="em0"
ext_if="em1"

remote_gw="L.L.L.L"

admins_net="{ 10.17.6.0/24, 10.32.24.0/24 }"
devs_net="{ 10.1.2.0/24, 10.17.8.0/24 }"

L_databases="{ 10.240.4.111, 10.240.4.112, 10.240.4.121, 10.240.4.122,
10.240.4.131, 10.240.4.132 }"
G_databases="{ 10.1.50.121, 10.1.50.122 }"

set skip on { lo enc0 }

table <authpf_users> persist

block

# VPN
pass in quick on $ext_if proto esp from $remote_gw to $ext_if
pass out quick on $ext_if proto esp from $ext_if to $remote_gw

pass in quick on $ext_if proto udp from $remote_gw to $ext_if port {
isakmp, ipsec-nat-t }
pass out quick on $ext_if proto udp from $ext_if to $remote_gw port {
isakmp, ipsec-nat-t }

# DNS/NTP/SSH
pass out quick on $int_if proto udp to port domain
pass out quick on $int_if proto udp to port ntp
pass in quick on $int_if proto tcp to 10.1.50.181 port ssh

# TRAFFIC
pass in on $int_if proto tcp from { 10.1.50.11, $devs_net } to
10.240.4.21 port ssh
pass out on $ext_if proto tcp from { 10.1.50.11, $devs_net } to
10.240.4.21 port ssh

pass in on $int_if proto tcp from { $devs_net, $G_databases } to
$L_databases port 1521
pass out on $int_if proto tcp from { $devs_net, $G_databases } to
$L_databases port 1521

pass in on $ext_if proto tcp from $L_databases to $G_databases port 1521
pass out on $int_if proto tcp from $L_databases to $G_databases port 1521

pass in on $int_if from <authpf_users>
pass out on $ext_if from <authpf_users>

(L-VPN)
ext_if="vlan10"

remote_gw="G.G.G.G"

admins_net="{ 10.17.6.0/24, 10.32.24.0/24 }"
devs_net="{ 10.1.2.0/24, 10.17.8.0/24 }"

L_databases="{ 10.240.4.111, 10.240.4.112, 10.240.4.121, 10.240.4.122,
10.240.4.131, 10.240.4.132 }"
G_databases="{ 10.1.50.121, 10.1.50.122 }"

set skip on { lo enc0 }

block

# VPN
pass in quick on $ext_if proto esp from $remote_gw to $ext_if
pass out quick on $ext_if proto esp from $ext_if to $remote_gw

pass in quick on $ext_if proto udp from $remote_gw to $ext_if port {
isakmp, ipsec-nat-t }
pass out quick on $ext_if proto udp from $ext_if to $remote_gw port {
isakmp, ipsec-nat-t }

# DNS/NTP/SSH
pass out quick on $ext_if proto udp to port domain
pass out quick on $ext_if proto udp to port ntp
pass in quick on vlan20 proto tcp to 10.240.2.169 port ssh

# TRAFFIC
pass in on vlan10 from $admins_net
pass out on { vlan20, vlan30, vlan40 } from $admins_net

pass in on vlan10 proto tcp from { 10.1.50.11, $devs_net } to
10.240.4.21 port 22
pass out on vlan40 proto tcp from { 10.1.50.11, $devs_net } to
10.240.4.21 port 22

pass in on vlan10 proto tcp from { $devs_net, $G_databases } to
$L_databases port 1521
pass out on vlan40 proto tcp from { $devs_net, $G_databases } to
$L_databases port 1521

pass in on vlan40 proto tcp from $L_databases to $G_databases port 1521
pass out on vlan10 proto tcp from $L_databases to $G_databases port 1521

pass in on vlan40 proto tcp from 10.1.50.181 to 10.240.2.169
pass out on vlan20 proto tcp from 10.1.50.181 to 10.240.2.169
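
An aside, not part of either ruleset above: a common throughput knob for
ESP tunnels is clamping TCP MSS so that full-sized segments still fit
inside the ESP encapsulation without IP fragmentation. A hypothetical
pf.conf line for that, with 1380 as a conservative guess for the clamped
MSS rather than a value taken from this thread:

match in all scrub (no-df max-mss 1380)

Whether fragmentation is actually happening here would show up in
`tcpdump` on enc0 or on the external interface.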


`cat /etc/ipsec.conf`

(G-VPN)
local_ip="G.G.G.G"
local_net="{ 10.1.2.0/24, 10.1.50.0/24, 10.17.6.0/24, 10.17.8.0/24,
10.32.24.0/24 }"
remote_ip="L.L.L.L"
remote_net="{ 10.240.2.0/24, 10.240.3.0/24, 10.240.4.0/24 }"

ike esp from $local_net to $remote_net peer $remote_ip
ike esp from $local_ip to $remote_net peer $remote_ip
ike esp from $local_ip to $remote_ip


(L-VPN)
local_ip="L.L.L.L"
local_net="{ 10.240.2.0/24, 10.240.3.0/24, 10.240.4.0/24 }"
remote_ip="G.G.G.G"
remote_net="{ 10.1.2.0/24, 10.1.50.0/24, 10.17.6.0/24, 10.17.8.0/24,
10.32.24.0/24 }"

ike esp from $local_net to $remote_net peer $remote_ip
ike esp from $local_ip to $remote_net peer $remote_ip
ike esp from $local_ip to $remote_ip
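
These flows rely on isakmpd's default transforms. On CPUs without AES
acceleration, such as these Xeons, the choice of transform can dominate
tunnel throughput; a hypothetical variant pinning lighter algorithms
(grammar per ipsec.conf(5); the specific transforms are an assumption,
not something proposed in the thread) would look like:

ike esp from $local_net to $remote_net peer $remote_ip main auth hmac-sha1 enc aes-128 quick auth hmac-sha1 enc aes-128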

----------- ENDPOINT INFO -----------


Both endpoints run stock OpenBSD 5.1 (amd64). We use the VPN link to
manage our platform remotely and to perform daily backups. G-VPN sits
on a 150Mbit/s link, while L-VPN sits on a 1Gbit/s link. On one hand,
the VPN setup runs really nicely: connections are routed properly, pf
is a godsend, and authpf works wonders. On the other hand, network
throughput over the VPN tunnel never exceeds 3.4MB/s, regardless of
protocol (ftp, scp, rsync, etc.).

I welcome any suggestions. Keep in mind that this is our production
VPN tunnel, so I cannot shut it down at will. Thanks in advance.

---
Mike


Re: Slow VPN Performance

Michael Sideris
`ping -c10`

(L-VPN --> G-VPN)

PING G.G.G.G (G.G.G.G): 56 data bytes
64 bytes from G.G.G.G: icmp_seq=0 ttl=255 time=17.073 ms
64 bytes from G.G.G.G: icmp_seq=1 ttl=255 time=3.604 ms
64 bytes from G.G.G.G: icmp_seq=2 ttl=255 time=3.666 ms
64 bytes from G.G.G.G: icmp_seq=3 ttl=255 time=3.716 ms
64 bytes from G.G.G.G: icmp_seq=4 ttl=255 time=3.639 ms
64 bytes from G.G.G.G: icmp_seq=5 ttl=255 time=3.685 ms
64 bytes from G.G.G.G: icmp_seq=6 ttl=255 time=3.734 ms
64 bytes from G.G.G.G: icmp_seq=7 ttl=255 time=3.658 ms
64 bytes from G.G.G.G: icmp_seq=8 ttl=255 time=3.707 ms
64 bytes from G.G.G.G: icmp_seq=9 ttl=255 time=3.755 ms
--- G.G.G.G ping statistics ---
10 packets transmitted, 10 packets received, 0.0% packet loss
round-trip min/avg/max/std-dev = 3.604/5.023/17.073/4.017 ms


(G-VPN --> L-VPN)

PING L.L.L.L (L.L.L.L): 56 data bytes
64 bytes from L.L.L.L: icmp_seq=0 ttl=255 time=3.707 ms
64 bytes from L.L.L.L: icmp_seq=1 ttl=255 time=3.746 ms
64 bytes from L.L.L.L: icmp_seq=2 ttl=255 time=3.677 ms
64 bytes from L.L.L.L: icmp_seq=3 ttl=255 time=3.717 ms
64 bytes from L.L.L.L: icmp_seq=4 ttl=255 time=3.754 ms
64 bytes from L.L.L.L: icmp_seq=5 ttl=255 time=3.670 ms
64 bytes from L.L.L.L: icmp_seq=6 ttl=255 time=3.703 ms
64 bytes from L.L.L.L: icmp_seq=7 ttl=255 time=3.742 ms
64 bytes from L.L.L.L: icmp_seq=8 ttl=255 time=3.654 ms
64 bytes from L.L.L.L: icmp_seq=9 ttl=255 time=3.693 ms
--- L.L.L.L ping statistics ---
10 packets transmitted, 10 packets received, 0.0% packet loss
round-trip min/avg/max/std-dev = 3.654/3.706/3.754/0.057 ms


It is also worth mentioning that a transfer directly between the two
endpoints reaches ~7.5MB/s. That is better than a transfer between two
nodes behind each site, but still slow for a 150Mbit/s <--> 1Gbit/s
link.
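
For what it's worth, a single TCP stream is capped at roughly one
window per round trip. A minimal sketch of that ceiling, assuming 16 KB
socket buffers (OpenBSD's historical net.inet.tcp.sendspace/recvspace
default; an assumption, not measured here) and the ~3.7 ms RTT from the
pings below:

```python
# TCP throughput ceiling from the bandwidth-delay product: a single
# stream cannot move more than one window of data per round trip.
def tcp_ceiling(window_bytes, rtt_seconds):
    """Upper bound on single-stream TCP throughput, in bytes/second."""
    return window_bytes / rtt_seconds

window = 16 * 1024   # assumed net.inet.tcp.{send,recv}space default
rtt = 0.0037         # ~3.7 ms, from the ping output

print("%.1f MB/s" % (tcp_ceiling(window, rtt) / 1e6))  # -> 4.4 MB/s
```

That lands in the same ballpark as the observed 3.4MB/s, which supports
the round-trip-time question; raising the socket buffers on a test
transfer would distinguish a window limit from a crypto CPU limit.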

On Wed, Oct 17, 2012 at 1:36 AM, Kent Fritz <[hidden email]> wrote:

> I didn't see anyone reply to this yet, so let me ask a really dumb question:
> what's the round-trip-time between G.G.G.G and L.L.L.L?  Are you running
> into the TCP limits due to this?
>
>
> On Tue, Oct 16, 2012 at 2:43 AM, Michael Sideris <[hidden email]> wrote:
>>
>> Hey @misc,
>>
>> ----------- ENDPOINT INFO -----------
>>
>> `dmesg`
>>
>> (G-VPN)
>> OpenBSD 5.1 (GENERIC.MP) #207: Sun Feb 12 09:42:14 MST 2012
>>     [hidden email]:/usr/src/sys/arch/amd64/compile/GENERIC.MP
>> real mem = 2146172928 (2046MB)
>> avail mem = 2074935296 (1978MB)
>> mainbus0 at root
>> bios0 at mainbus0: SMBIOS rev. 2.3 @ 0xfa850 (75 entries)
>> bios0: vendor Dell Computer Corporation version "A03" date 01/04/2006
>> bios0: Dell Computer Corporation PowerEdge SC1425
>> acpi0 at bios0: rev 0
>> acpi0: sleep states S0 S4 S5
>> acpi0: tables DSDT FACP APIC SPCR HPET MCFG
>> acpi0: wakeup devices PCI0(S5) PALO(S5) PXH_(S5) PXHB(S5) PXHA(S5)
>> PICH(S5)
>> acpitimer0 at acpi0: 3579545 Hz, 24 bits
>> acpimadt0 at acpi0 addr 0xfee00000: PC-AT compat
>> cpu0 at mainbus0: apid 0 (boot processor)
>> cpu0: Intel(R) Xeon(TM) CPU 2.80GHz, 2800.48 MHz
>> cpu0:
>> FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,SBF,SSE3,MWAIT,DS-CPL,CNXT-ID,CX16,xTPR,NXE,LONG
>> cpu0: 1MB 64b/line 8-way L2 cache
>> cpu0: apic clock running at 200MHz
>> cpu1 at mainbus0: apid 1 (application processor)
>> cpu1: Intel(R) Xeon(TM) CPU 2.80GHz, 2800.11 MHz
>> cpu1:
>> FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,SBF,SSE3,MWAIT,DS-CPL,CNXT-ID,CX16,xTPR,NXE,LONG
>> cpu1: 1MB 64b/line 8-way L2 cache
>> ioapic0 at mainbus0: apid 2 pa 0xfec00000, version 20, 24 pins
>> ioapic0: misconfigured as apic 0, remapped to apid 2
>> ioapic1 at mainbus0: apid 3 pa 0xfec80000, version 20, 24 pins
>> ioapic1: misconfigured as apic 0, remapped to apid 3
>> ioapic2 at mainbus0: apid 4 pa 0xfec80800, version 20, 24 pins
>> ioapic2: misconfigured as apic 0, remapped to apid 4
>> acpihpet0 at acpi0: 14318179 Hz
>> acpimcfg0 at acpi0 addr 0xe0000000, bus 0-255
>> acpiprt0 at acpi0: bus 0 (PCI0)
>> acpiprt1 at acpi0: bus 1 (PALO)
>> acpiprt2 at acpi0: bus 3 (PXHB)
>> acpiprt3 at acpi0: bus 2 (PXHA)
>> acpiprt4 at acpi0: bus 4 (PICH)
>> acpicpu0 at acpi0
>> acpicpu1 at acpi0
>> ipmi at mainbus0 not configured
>> pci0 at mainbus0 bus 0
>> pchb0 at pci0 dev 0 function 0 "Intel E7520 Host" rev 0x09
>> ppb0 at pci0 dev 2 function 0 "Intel E7520 PCIE" rev 0x09
>> pci1 at ppb0 bus 1
>> ppb1 at pci1 dev 0 function 0 "Intel 6700PXH PCIE-PCIX" rev 0x09
>> pci2 at ppb1 bus 2
>> em0 at pci2 dev 4 function 0 "Intel PRO/1000MT (82541GI)" rev 0x05:
>> apic 3 int 0, address 00:14:22:72:61:c6
>> ppb2 at pci1 dev 0 function 2 "Intel 6700PXH PCIE-PCIX" rev 0x09
>> pci3 at ppb2 bus 3
>> isp0 at pci3 dev 7 function 0 "QLogic ISP2312" rev 0x02: apic 4 int 2
>> isp0: board type 2312 rev 0x2, loaded firmware rev 3.3.19
>> scsibus0 at isp0: 512 targets, WWPN 210000e08b1d3fc7, WWNN
>> 200000e08b1d3fc7
>> uhci0 at pci0 dev 29 function 0 "Intel 82801EB/ER USB" rev 0x02: apic 2
>> int 16
>> uhci1 at pci0 dev 29 function 1 "Intel 82801EB/ER USB" rev 0x02: apic 2
>> int 19
>> ehci0 at pci0 dev 29 function 7 "Intel 82801EB/ER USB2" rev 0x02: apic 2
>> int 23
>> usb0 at ehci0: USB revision 2.0
>> uhub0 at usb0 "Intel EHCI root hub" rev 2.00/1.00 addr 1
>> ppb3 at pci0 dev 30 function 0 "Intel 82801BA Hub-to-PCI" rev 0xc2
>> pci4 at ppb3 bus 4
>> em1 at pci4 dev 3 function 0 "Intel PRO/1000MT (82541GI)" rev 0x05:
>> apic 2 int 20, address 00:14:22:72:61:c7
>> vga1 at pci4 dev 13 function 0 "ATI Radeon VE" rev 0x00
>> wsdisplay0 at vga1 mux 1: console (80x25, vt100 emulation)
>> wsdisplay0: screen 1-5 added (80x25, vt100 emulation)
>> radeondrm0 at vga1: apic 2 int 17
>> drm0 at radeondrm0
>> pcib0 at pci0 dev 31 function 0 "Intel 82801EB/ER LPC" rev 0x02
>> pciide0 at pci0 dev 31 function 1 "Intel 82801EB/ER IDE" rev 0x02:
>> DMA, channel 0 configured to compatibility, channel 1 configured to
>> compatibility
>> atapiscsi0 at pciide0 channel 0 drive 0
>> scsibus1 at atapiscsi0: 2 targets
>> cd0 at scsibus1 targ 0 lun 0: <HL-DT-ST, CD-ROM GCR-8240N, 1.06> ATAPI
>> 5/cdrom removable
>> cd0(pciide0:0:0): using PIO mode 4, Ultra-DMA mode 2
>> pciide0: channel 1 ignored (disabled)
>> pciide1 at pci0 dev 31 function 2 "Intel 82801EB SATA" rev 0x02: DMA,
>> channel 0 configured to native-PCI, channel 1 configured to native-PCI
>> pciide1: using apic 2 int 18 for native-PCI interrupt
>> wd0 at pciide1 channel 0 drive 0: <Maxtor 7Y250M0>
>> wd0: 16-sector PIO, LBA48, 238418MB, 488281250 sectors
>> wd0(pciide1:0:0): using PIO mode 4, Ultra-DMA mode 6
>> usb1 at uhci0: USB revision 1.0
>> uhub1 at usb1 "Intel UHCI root hub" rev 1.00/1.00 addr 1
>> usb2 at uhci1: USB revision 1.0
>> uhub2 at usb2 "Intel UHCI root hub" rev 1.00/1.00 addr 1
>> isa0 at pcib0
>> isadma0 at isa0
>> com0 at isa0 port 0x3f8/8 irq 4: ns16550a, 16 byte fifo
>> pckbc0 at isa0 port 0x60/5
>> pckbd0 at pckbc0 (kbd slot)
>> pckbc0: using irq 1 for kbd slot
>> wskbd0 at pckbd0: console keyboard, using wsdisplay0
>> pcppi0 at isa0 port 0x61
>> spkr0 at pcppi0
>> mtrr: Pentium Pro MTRR support
>> vscsi0 at root
>> scsibus2 at vscsi0: 256 targets
>> softraid0 at root
>> scsibus3 at softraid0: 256 targets
>> root on wd0a (a29928cba946c858.a) swap on wd0b dump on wd0b
>>
>> (L-VPN)
>> OpenBSD 5.1 (GENERIC.MP) #207: Sun Feb 12 09:42:14 MST 2012
>>     [hidden email]:/usr/src/sys/arch/amd64/compile/GENERIC.MP
>> real mem = 3219914752 (3070MB)
>> avail mem = 3120099328 (2975MB)
>> mainbus0 at root
>> bios0 at mainbus0: SMBIOS rev. 2.3 @ 0xfa850 (75 entries)
>> bios0: vendor Dell Computer Corporation version "A03" date 01/04/2006
>> bios0: Dell Computer Corporation PowerEdge SC1425
>> acpi0 at bios0: rev 0
>> acpi0: sleep states S0 S4 S5
>> acpi0: tables DSDT FACP APIC SPCR HPET MCFG
>> acpi0: wakeup devices PCI0(S5) PALO(S5) PXH_(S5) PXHB(S5) PXHA(S5)
>> PICH(S5)
>> acpitimer0 at acpi0: 3579545 Hz, 24 bits
>> acpimadt0 at acpi0 addr 0xfee00000: PC-AT compat
>> cpu0 at mainbus0: apid 0 (boot processor)
>> cpu0: Intel(R) Xeon(TM) CPU 2.80GHz, 2800.45 MHz
>> cpu0:
>> FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,SBF,SSE3,MWAIT,DS-CPL,CNXT-ID,CX16,xTPR,NXE,LONG
>> cpu0: 1MB 64b/line 8-way L2 cache
>> cpu0: apic clock running at 200MHz
>> cpu1 at mainbus0: apid 1 (application processor)
>> cpu1: Intel(R) Xeon(TM) CPU 2.80GHz, 2800.11 MHz
>> cpu1:
>> FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,SBF,SSE3,MWAIT,DS-CPL,CNXT-ID,CX16,xTPR,NXE,LONG
>> cpu1: 1MB 64b/line 8-way L2 cache
>> ioapic0 at mainbus0: apid 2 pa 0xfec00000, version 20, 24 pins
>> ioapic0: misconfigured as apic 0, remapped to apid 2
>> ioapic1 at mainbus0: apid 3 pa 0xfec80000, version 20, 24 pins
>> ioapic1: misconfigured as apic 0, remapped to apid 3
>> ioapic2 at mainbus0: apid 4 pa 0xfec80800, version 20, 24 pins
>> ioapic2: misconfigured as apic 0, remapped to apid 4
>> acpihpet0 at acpi0: 14318179 Hz
>> acpimcfg0 at acpi0 addr 0xe0000000, bus 0-255
>> acpiprt0 at acpi0: bus 0 (PCI0)
>> acpiprt1 at acpi0: bus 1 (PALO)
>> acpiprt2 at acpi0: bus 3 (PXHB)
>> acpiprt3 at acpi0: bus 2 (PXHA)
>> acpiprt4 at acpi0: bus 4 (PICH)
>> acpicpu0 at acpi0
>> acpicpu1 at acpi0
>> ipmi at mainbus0 not configured
>> pci0 at mainbus0 bus 0
>> pchb0 at pci0 dev 0 function 0 "Intel E7520 Host" rev 0x09
>> ppb0 at pci0 dev 2 function 0 "Intel E7520 PCIE" rev 0x09
>> pci1 at ppb0 bus 1
>> ppb1 at pci1 dev 0 function 0 "Intel 6700PXH PCIE-PCIX" rev 0x09
>> pci2 at ppb1 bus 2
>> em0 at pci2 dev 4 function 0 "Intel PRO/1000MT (82541GI)" rev 0x05:
>> apic 3 int 0, address 00:14:22:72:5e:bd
>> ppb2 at pci1 dev 0 function 2 "Intel 6700PXH PCIE-PCIX" rev 0x09
>> pci3 at ppb2 bus 3
>> em1 at pci3 dev 7 function 0 "Intel PRO/1000MT (82546GB)" rev 0x03:
>> apic 4 int 2, address 00:04:23:ce:d0:0c
>> em2 at pci3 dev 7 function 1 "Intel PRO/1000MT (82546GB)" rev 0x03:
>> apic 4 int 3, address 00:04:23:ce:d0:0d
>> uhci0 at pci0 dev 29 function 0 "Intel 82801EB/ER USB" rev 0x02: apic 2
>> int 16
>> uhci1 at pci0 dev 29 function 1 "Intel 82801EB/ER USB" rev 0x02: apic 2
>> int 19
>> ehci0 at pci0 dev 29 function 7 "Intel 82801EB/ER USB2" rev 0x02: apic 2
>> int 23
>> usb0 at ehci0: USB revision 2.0
>> uhub0 at usb0 "Intel EHCI root hub" rev 2.00/1.00 addr 1
>> ppb3 at pci0 dev 30 function 0 "Intel 82801BA Hub-to-PCI" rev 0xc2
>> pci4 at ppb3 bus 4
>> em3 at pci4 dev 3 function 0 "Intel PRO/1000MT (82541GI)" rev 0x05:
>> apic 2 int 20, address 00:14:22:72:5e:be
>> vga1 at pci4 dev 13 function 0 "ATI Radeon VE" rev 0x00
>> wsdisplay0 at vga1 mux 1: console (80x25, vt100 emulation)
>> wsdisplay0: screen 1-5 added (80x25, vt100 emulation)
>> radeondrm0 at vga1: apic 2 int 17
>> drm0 at radeondrm0
>> pcib0 at pci0 dev 31 function 0 "Intel 82801EB/ER LPC" rev 0x02
>> pciide0 at pci0 dev 31 function 1 "Intel 82801EB/ER IDE" rev 0x02:
>> DMA, channel 0 configured to compatibility, channel 1 configured to
>> compatibility
>> atapiscsi0 at pciide0 channel 0 drive 0
>> scsibus0 at atapiscsi0: 2 targets
>> cd0 at scsibus0 targ 0 lun 0: <HL-DT-ST, CD-ROM GCR-8240N, 1.06> ATAPI
>> 5/cdrom removable
>> cd0(pciide0:0:0): using PIO mode 4, Ultra-DMA mode 2
>> pciide0: channel 1 ignored (disabled)
>> pciide1 at pci0 dev 31 function 2 "Intel 82801EB SATA" rev 0x02: DMA,
>> channel 0 configured to native-PCI, channel 1 configured to native-PCI
>> pciide1: using apic 2 int 18 for native-PCI interrupt
>> wd0 at pciide1 channel 0 drive 0: <WDC WD400BD-75LRA0>
>> wd0: 16-sector PIO, LBA48, 38146MB, 78125000 sectors
>> wd0(pciide1:0:0): using PIO mode 4, Ultra-DMA mode 6
>> wd1 at pciide1 channel 1 drive 0: <Maxtor 7Y250M0>
>> wd1: 16-sector PIO, LBA48, 238418MB, 488281250 sectors
>> wd1(pciide1:1:0): using PIO mode 4, Ultra-DMA mode 6
>> usb1 at uhci0: USB revision 1.0
>> uhub1 at usb1 "Intel UHCI root hub" rev 1.00/1.00 addr 1
>> usb2 at uhci1: USB revision 1.0
>> uhub2 at usb2 "Intel UHCI root hub" rev 1.00/1.00 addr 1
>> isa0 at pcib0
>> isadma0 at isa0
>> com0 at isa0 port 0x3f8/8 irq 4: ns16550a, 16 byte fifo
>> pckbc0 at isa0 port 0x60/5
>> pckbd0 at pckbc0 (kbd slot)
>> pckbc0: using irq 1 for kbd slot
>> wskbd0 at pckbd0: console keyboard, using wsdisplay0
>> pcppi0 at isa0 port 0x61
>> spkr0 at pcppi0
>> mtrr: Pentium Pro MTRR support
>> vscsi0 at root
>> scsibus1 at vscsi0: 256 targets
>> softraid0 at root
>> scsibus2 at softraid0: 256 targets
>> root on wd0a (c66c13b9ce71dcfc.a) swap on wd0b dump on wd0b
>>
>>
>> `ifconfig` (for security, G.G.G.G is the public IP for G-VPN
>> and L.L.L.L is the public IP for L-VPN)
>>
>> (G-VPN)
>> lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 33152
>>         priority: 0
>>         groups: lo
>>         inet6 ::1 prefixlen 128
>>         inet6 fe80::1%lo0 prefixlen 64 scopeid 0x4
>>         inet 127.0.0.1 netmask 0xff000000
>> em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
>>         lladdr 00:14:22:72:61:c6
>>         priority: 0
>>         media: Ethernet autoselect (1000baseT full-duplex)
>>         status: active
>>         inet 10.1.50.181 netmask 0xffffff00 broadcast 10.1.50.255
>>         inet6 fe80::214:22ff:fe72:61c6%em0 prefixlen 64 scopeid 0x1
>> em1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
>>         lladdr 00:14:22:72:61:c7
>>         priority: 0
>>         groups: egress
>>         media: Ethernet autoselect (1000baseT full-duplex)
>>         status: active
>>         inet G.G.G.G netmask 0xfffffff0 broadcast G.G.G.X
>>         inet6 fe80::214:22ff:fe72:61c7%em1 prefixlen 64 scopeid 0x2
>> enc0: flags=0<>
>>         priority: 0
>>         groups: enc
>>         status: active
>> pflog0: flags=141<UP,RUNNING,PROMISC> mtu 33152
>>         priority: 0
>>         groups: pflog
>>
>> (L-VPN)
>> lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 33152
>>         priority: 0
>>         groups: lo
>>         inet6 ::1 prefixlen 128
>>         inet6 fe80::1%lo0 prefixlen 64 scopeid 0x6
>>         inet 127.0.0.1 netmask 0xff000000
>> em0: flags=8b43<UP,BROADCAST,RUNNING,PROMISC,ALLMULTI,SIMPLEX,MULTICAST>
>> mtu 1500
>>         lladdr 00:14:22:72:5e:bd
>>         priority: 0
>>         trunk: trunkdev trunk0
>>         media: Ethernet autoselect (1000baseT full-duplex)
>>         status: active
>>         inet6 fe80::204:23ff:fece:d00c%em0 prefixlen 64 scopeid 0x1
>> em1: flags=8b43<UP,BROADCAST,RUNNING,PROMISC,ALLMULTI,SIMPLEX,MULTICAST>
>> mtu 1500
>>         lladdr 00:14:22:72:5e:bd
>>         priority: 0
>>         trunk: trunkdev trunk0
>>         media: Ethernet autoselect (1000baseT full-duplex)
>>         status: active
>>         inet6 fe80::204:23ff:fece:d00d%em1 prefixlen 64 scopeid 0x2
>> em2: flags=8b43<UP,BROADCAST,RUNNING,PROMISC,ALLMULTI,SIMPLEX,MULTICAST>
>> mtu 1500
>>         lladdr 00:04:23:ce:d0:0d
>>         priority: 0
>>         trunk: trunkdev trunk1
>>         media: Ethernet autoselect (1000baseT full-duplex)
>>         status: active
>>         inet6 fe80::214:22ff:fe72:5ebe%em2 prefixlen 64 scopeid 0x3
>> em3: flags=8b43<UP,BROADCAST,RUNNING,PROMISC,ALLMULTI,SIMPLEX,MULTICAST>
>> mtu 1500
>>         lladdr 00:04:23:ce:d0:0d
>>         priority: 0
>>         trunk: trunkdev trunk1
>>         media: Ethernet autoselect (1000baseT full-duplex)
>>         status: active
>>         inet6 fe80::214:22ff:fe72:5ebd%em3 prefixlen 64 scopeid 0x4
>> enc0: flags=0<>
>>         priority: 0
>>         groups: enc
>>         status: active
>> trunk0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
>>         lladdr 00:14:22:72:5e:bd
>>         priority: 0
>>         trunk: trunkproto lacp
>>         trunk id: [(8000,00:14:22:72:5e:bd,403C,0000,0000),
>>                  (8000,00:23:05:1d:fb:80,000C,0000,0000)]
>>                 trunkport em1 active,collecting,distributing
>>                 trunkport em0 collecting,distributing
>>         groups: trunk
>>         media: Ethernet autoselect
>>         status: active
>>         inet6 fe80::214:22ff:fe72:5ebd%trunk0 prefixlen 64 scopeid 0x7
>> trunk1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
>>         lladdr 00:04:23:ce:d0:0d
>>         priority: 0
>>         trunk: trunkproto lacp
>>         trunk id: [(8000,00:04:23:ce:d0:0d,4044,0000,0000),
>>                  (8000,00:23:05:3f:19:80,0010,0000,0000)]
>>                 trunkport em3 active,collecting,distributing
>>                 trunkport em2 collecting,distributing
>>         groups: trunk
>>         media: Ethernet autoselect
>>         status: active
>>         inet6 fe80::204:23ff:fece:d00d%trunk1 prefixlen 64 scopeid 0x8
>> vlan10: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
>>         lladdr 00:14:22:72:5e:bd
>>         priority: 0
>>         vlan: 10 parent interface: trunk0
>>         groups: vlan egress
>>         status: active
>>         inet6 fe80::214:22ff:fe72:5ebd%vlan10 prefixlen 64 scopeid 0x9
>>         inet L.L.L.L netmask 0xfffffff8 broadcast L.L.L.X
>> vlan20: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
>>         lladdr 00:04:23:ce:d0:0d
>>         priority: 0
>>         vlan: 20 parent interface: trunk1
>>         groups: vlan
>>         status: active
>>         inet6 fe80::204:23ff:fece:d00d%vlan20 prefixlen 64 scopeid 0xa
>>         inet 10.240.2.169 netmask 0xffffff00 broadcast 10.240.2.255
>> vlan30: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
>>         lladdr 00:04:23:ce:d0:0d
>>         priority: 0
>>         vlan: 30 parent interface: trunk1
>>         groups: vlan
>>         status: active
>>         inet6 fe80::204:23ff:fece:d00d%vlan30 prefixlen 64 scopeid 0xb
>>         inet 10.240.3.169 netmask 0xffffff00 broadcast 10.240.3.255
>> vlan40: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
>>         lladdr 00:14:22:72:5e:bd
>>         priority: 0
>>         vlan: 40 parent interface: trunk0
>>         groups: vlan
>>         status: active
>>         inet6 fe80::214:22ff:fe72:5ebd%vlan40 prefixlen 64 scopeid 0xc
>>         inet 10.240.4.169 netmask 0xffffff00 broadcast 10.240.4.255
>> pflog0: flags=141<UP,RUNNING,PROMISC> mtu 33152
>>         priority: 0
>>         groups: pflog
>>
>>
>> `cat /etc/pf.conf`
>>
>> (G-VPN)
>>
>> int_if="em0"
>> ext_if="em1"
>>
>> remote_gw="L.L.L.L"
>>
>> admins_net="{ 10.17.6.0/24, 10.32.24.0/24 }"
>> devs_net="{ 10.1.2.0/24, 10.17.8.0/24 }"
>>
>> L_databases="{ 10.240.4.111, 10.240.4.112, 10.240.4.121, 10.240.4.122,
>> 10.240.4.131, 10.240.4.132 }"
>> G_databases="{ 10.1.50.121, 10.1.50.122 }"
>>
>> set skip on { lo enc0 }
>>
>> table <authpf_users> persist
>>
>> block
>>
>> # VPN
>> pass in quick on $ext_if proto esp from $remote_gw to $ext_if
>> pass out quick on $ext_if proto esp from $ext_if to $remote_gw
>>
>> pass in quick on $ext_if proto udp from $remote_gw to $ext_if port {
>> isakmp, ipsec-nat-t }
>> pass out quick on $ext_if proto udp from $ext_if to $remote_gw port {
>> isakmp, ipsec-nat-t }
>>
>> # DNS/NTP/SSH
>> pass out quick on $int_if proto udp to port domain
>> pass out quick on $int_if proto udp to port ntp
>> pass in quick on $int_if proto tcp to 10.1.50.181 port ssh
>>
>> # TRAFFIC
>> pass in on $int_if proto tcp from { 10.1.50.11, $devs_net } to
>> 10.240.4.21 port ssh
>> pass out on $ext_if proto tcp from { 10.1.50.11, $devs_net } to
>> 10.240.4.21 port ssh
>>
>> pass in on $int_if proto tcp from { $devs_net, $G_databases } to
>> $L_databases port 1521
>> pass out on $int_if proto tcp from { $devs_net, $G_databases } to
>> $L_databases port 1521
>>
>> pass in on $ext_if proto tcp from $L_databases to $G_databases port 1521
>> pass out on $int_if proto tcp from $L_databases to $G_databases port 1521
>>
>> pass in on $int_if from <authpf_users>
>> pass out on $ext_if from <authpf_users>
>>
>> (L-VPN)
>> ext_if="vlan10"
>>
>> remote_gw="G.G.G.G"
>>
>> admins_net="{ 10.17.6.0/24, 10.32.24.0/24 }"
>> devs_net="{ 10.1.2.0/24, 10.17.8.0/24 }"
>>
>> L_databases="{ 10.240.4.111, 10.240.4.112, 10.240.4.121, 10.240.4.122,
>> 10.240.4.131, 10.240.4.132 }"
>> G_databases="{ 10.1.50.121, 10.1.50.122 }"
>>
>> set skip on { lo enc0 }
>>
>> block
>>
>> # VPN
>> pass in quick on $ext_if proto esp from $remote_gw to $ext_if
>> pass out quick on $ext_if proto esp from $ext_if to $remote_gw
>>
>> pass in quick on $ext_if proto udp from $remote_gw to $ext_if port {
>> isakmp, ipsec-nat-t }
>> pass out quick on $ext_if proto udp from $ext_if to $remote_gw port {
>> isakmp, ipsec-nat-t }
>>
>> # DNS/NTP/SSH
>> pass out quick on $ext_if proto udp to port domain
>> pass out quick on $ext_if proto udp to port ntp
>> pass in quick on vlan20 proto tcp to 10.240.2.169 port ssh
>>
>> # TRAFFIC
>> pass in on vlan10 from $admins_net
>> pass out on { vlan20, vlan30, vlan40 } from $admins_net
>>
>> pass in on vlan10 proto tcp from { 10.1.50.11, $devs_net } to
>> 10.240.4.21 port 22
>> pass out on vlan40 proto tcp from { 10.1.50.11, $devs_net } to
>> 10.240.4.21 port 22
>>
>> pass in on vlan10 proto tcp from { $devs_net, $G_databases } to
>> $L_databases port 1521
>> pass out on vlan40 proto tcp from { $devs_net, $G_databases } to
>> $L_databases port 1521
>>
>> pass in on vlan40 proto tcp from $L_databases to $G_databases port 1521
>> pass out on vlan10 proto tcp from $L_databases to $G_databases port 1521
>>
>> pass in on vlan40 proto tcp from 10.1.50.181 to 10.240.2.169
>> pass out on vlan20 proto tcp from 10.1.50.181 to 10.240.2.169
>>
>>
>> `cat /etc/ipsec.conf`
>>
>> (G-VPN)
>> local_ip="G.G.G.G"
>> local_net="{ 10.1.2.0/24, 10.1.50.0/24, 10.17.6.0/24, 10.17.8.0/24,
>> 10.32.24.0/24 }"
>> remote_ip="L.L.L.L"
>> remote_net="{ 10.240.2.0/24, 10.240.3.0/24, 10.240.4.0/24 }"
>>
>> ike esp from $local_net to $remote_net peer $remote_ip
>> ike esp from $local_ip to $remote_net peer $remote_ip
>> ike esp from $local_ip to $remote_ip
>>
>>
>> (L-VPN)
>> local_ip="L.L.L.L"
>> local_net="{ 10.240.2.0/24, 10.240.3.0/24, 10.240.4.0/24 }"
>> remote_ip="G.G.G.G"
>> remote_net="{ 10.1.2.0/24, 10.1.50.0/24, 10.17.6.0/24, 10.17.8.0/24,
>> 10.32.24.0/24 }"
>>
>> ike esp from $local_net to $remote_net peer $remote_ip
>> ike esp from $local_ip to $remote_net peer $remote_ip
>> ike esp from $local_ip to $remote_ip
>>
>> ----------- ENDPOINT INFO -----------
>>
>>
>> Both endpoints run stock OpenBSD 5.1 (amd64). We use the VPN link to
>> manage our platform remotely and perform daily backups. G-VPN runs on
>> a 150Mbit/s link while L-VPN is on a 1Gbit/s link. On one hand, our VPN
>> setup runs really nicely: connections are routed properly, pf is a
>> godsend, and authpf works wonders. On the other hand, network
>> throughput over the VPN tunnel never exceeds 3.4MB/s (ftp, scp, rsync,
>> etc.).
>>
>> I welcome any suggestions. Keep in mind that this is our production
>> VPN tunnel, so I cannot shut it down at will. Thanks in advance.
>>
>> ---
>> Mike
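A 3.4MB/s ceiling is suspiciously close to what a default-sized TCP window allows at a few milliseconds of RTT (pings over the tunnel, shown later in the thread, come in around 3.7ms). Here is a back-of-envelope check, assuming OpenBSD's stock 16KB net.inet.tcp.recvspace; the actual value on these boxes should be verified with sysctl, so the numbers are illustrative rather than a diagnosis:

```python
# Window-limited TCP throughput: at most one receive window per round trip.
recvspace = 16384        # bytes; assumed OpenBSD default net.inet.tcp.recvspace
rtt = 0.0037             # seconds; ~3.7 ms, from the pings over the tunnel

max_throughput = recvspace / rtt           # bytes per second
print(f"{max_throughput / 1e6:.1f} MB/s")  # ~4.4 MB/s ceiling

# Window needed to fill the 150 Mbit/s link at that RTT (bandwidth-delay product):
bdp = (150e6 / 8) * rtt
print(f"{bdp / 1024:.0f} KB")              # ~68 KB
```

If raising net.inet.tcp.sendspace/recvspace on both endpoints moves the ceiling, the tunnel was window-limited; if it does not budge, the bottleneck is elsewhere (crypto throughput, ESP overhead, or fragmentation).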


Re: Slow VPN Performance

Michael Sideris
I ran a few more tests on a local setup:

* 2 x OpenBSD 5.1 (i386) w/ Gbit NICs connected on the same switch
* `cat /etc/ipsec.conf`: "ike esp from 10.0.0.1 to 10.0.0.2" (and vice versa)
* pf is disabled

Running `isakmpd -K ; ipsecctl -f /etc/ipsec.conf` caps tcpbench at
~50Mbit/s, the same ballpark as our production tunnel. Without isakmpd the
speed ramps up to ~800Mbit/s, which is reasonable. Right now, I have no
idea what else to try. Any suggestions are appreciated.
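Since the cap persists even with two Gbit machines on the same switch, software crypto on these 2.8GHz Xeons is a plausible bottleneck. One hedged sanity check: convert the two observed rates into the implied CPU cycles spent per byte, then compare against `openssl speed` output on the same box for whatever transforms `ipsecctl -sa` reports. The figures below are derived only from the numbers quoted in this thread, not from measurements:

```python
# Implied per-byte CPU cost if a single core is doing all the ESP work.
cpu_hz = 2.8e9                  # from the dmesg: 2.80 GHz Xeon
esp_rate = 50e6 / 8             # observed ~50 Mbit/s through the tunnel, bytes/s
clear_rate = 800e6 / 8          # observed ~800 Mbit/s without isakmpd, bytes/s

print(f"{cpu_hz / esp_rate:.0f} cycles/byte with ESP")    # ~448
print(f"{cpu_hz / clear_rate:.0f} cycles/byte in clear")  # ~28
```

If `openssl speed` for the negotiated cipher/HMAC pair lands in the same few-hundred-cycles-per-byte range, the box is simply crypto-bound, and the fix is a cheaper transform (or faster hardware) rather than a network tweak.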

On Wed, Oct 17, 2012 at 10:05 AM, Michael Sideris <[hidden email]> wrote:

> `ping -c10`
>
> (L-VPN --> G-VPN)
>
> PING G.G.G.G (G.G.G.G): 56 data bytes
> 64 bytes from G.G.G.G: icmp_seq=0 ttl=255 time=17.073 ms
> 64 bytes from G.G.G.G: icmp_seq=1 ttl=255 time=3.604 ms
> 64 bytes from G.G.G.G: icmp_seq=2 ttl=255 time=3.666 ms
> 64 bytes from G.G.G.G: icmp_seq=3 ttl=255 time=3.716 ms
> 64 bytes from G.G.G.G: icmp_seq=4 ttl=255 time=3.639 ms
> 64 bytes from G.G.G.G: icmp_seq=5 ttl=255 time=3.685 ms
> 64 bytes from G.G.G.G: icmp_seq=6 ttl=255 time=3.734 ms
> 64 bytes from G.G.G.G: icmp_seq=7 ttl=255 time=3.658 ms
> 64 bytes from G.G.G.G: icmp_seq=8 ttl=255 time=3.707 ms
> 64 bytes from G.G.G.G: icmp_seq=9 ttl=255 time=3.755 ms
> --- G.G.G.G ping statistics ---
> 10 packets transmitted, 10 packets received, 0.0% packet loss
> round-trip min/avg/max/std-dev = 3.604/5.023/17.073/4.017 ms
>
>
> (G-VPN --> L-VPN)
>
> PING L.L.L.L (L.L.L.L): 56 data bytes
> 64 bytes from L.L.L.L: icmp_seq=0 ttl=255 time=3.707 ms
> 64 bytes from L.L.L.L: icmp_seq=1 ttl=255 time=3.746 ms
> 64 bytes from L.L.L.L: icmp_seq=2 ttl=255 time=3.677 ms
> 64 bytes from L.L.L.L: icmp_seq=3 ttl=255 time=3.717 ms
> 64 bytes from L.L.L.L: icmp_seq=4 ttl=255 time=3.754 ms
> 64 bytes from L.L.L.L: icmp_seq=5 ttl=255 time=3.670 ms
> 64 bytes from L.L.L.L: icmp_seq=6 ttl=255 time=3.703 ms
> 64 bytes from L.L.L.L: icmp_seq=7 ttl=255 time=3.742 ms
> 64 bytes from L.L.L.L: icmp_seq=8 ttl=255 time=3.654 ms
> 64 bytes from L.L.L.L: icmp_seq=9 ttl=255 time=3.693 ms
> --- L.L.L.L ping statistics ---
> 10 packets transmitted, 10 packets received, 0.0% packet loss
> round-trip min/avg/max/std-dev = 3.654/3.706/3.754/0.057 ms
>
>
> It is also worth mentioning that if I send anything from one endpoint
> to the other, the speed is ~7.5MB/s. Better than a transfer between 2
> nodes from each site but still a bit slow for a 150Mbit/s <--> 1Gbit/s
> link.
>
> On Wed, Oct 17, 2012 at 1:36 AM, Kent Fritz <[hidden email]> wrote:
>> I didn't see anyone reply to this yet, so let me ask a really dumb question:
>> what's the round-trip-time between G.G.G.G and L.L.L.L?  Are you running
>> into the TCP limits due to this?
>>
>>
>> On Tue, Oct 16, 2012 at 2:43 AM, Michael Sideris <[hidden email]> wrote:
>>>
>>> [endpoint info snipped -- quoted in full earlier in the thread]
>>> ike esp from $local_ip to $remote_net peer $remote_ip
>>> ike esp from $local_ip to $remote_ip
>>>
>>> ----------- ENDPOINT INFO -----------
>>>
>>>
>>> Both endpoints run stock OpenBSD 5.1 (amd64). We use the VPN link to
>>> manage our platform remotely and perform daily backups. G-VPN runs on
>>> a 150Mbit/s link while L-VPN on a 1Gbit/s link. On one hand, our VPN
>>> setup runs really nicely. The connections are routed properly, pf is
>>> a godsend, and authpf works wonders. On the other hand, network
>>> throughput over the VPN tunnel never exceeds 3.4MB/s (ftp, scp, rsync,
>>> etc.).
>>>
>>> I welcome any suggestions. Keep in mind that this is our production
>>> VPN tunnel, so I cannot shut it down at will. Thanks in advance.
>>>
>>> ---
>>> Mike


Re: Slow VPN Performance

Michael Sideris
In reply to this post by Michael Sideris
It seems that changing to hmac-md5 boosted network throughput from
~50Mbit/s to ~100Mbit/s, which is decent and reasonable. I am going to
experiment a bit further with `scrub` options in pf.conf to see if I
can squeeze more performance out of the link. The question now is: how
much is security affected by using hmac-md5 vs the default
hmac-sha2-256? Should I consider using better CPUs on the servers in
order to gain better performance through a stronger algorithm?

On Mon, Oct 22, 2012 at 2:58 PM, Mike Belopuhov <[hidden email]> wrote:

> Hi,
>
> I suggest a couple of changes:
>
>  1) use cheaper hash function (md5 or at least sha1)
>  2) use mss fixup so that your packets don't get fragmented
>
> The first point relates to your "ike" rules in ipsec.conf:
>
>     ike esp from $local_net to $remote_net peer $remote_ip \
>         quick auth hmac-md5 enc aes
>
> The second point relates to pf rules in pf.conf:
>
>     match in scrub (max-mss 1440)
>
> You can experiment with the values in the 1400-1480 range.
>
> Also, please make sure that you don't run tcpbench or any
> other benchmarking on the vpn gates themselves as it offsets
> the measurements.
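The max-mss numbers quoted above can be sanity-checked with some rough
overhead arithmetic (assumed AES-CBC/HMAC overheads; real ESP padding
varies, so treat this as a sketch):

```shell
# Rough ESP-in-IPv4 overhead arithmetic (assumed values, not exact)
mtu=1500
outer_ip=20; esp_hdr=8; iv=16; icv=12; pad=2   # AES-CBC IV + truncated HMAC ICV, approx.
tunnel_mtu=$((mtu - outer_ip - esp_hdr - iv - icv - pad))
mss=$((tunnel_mtu - 20 - 20))                  # minus inner IP and TCP headers
echo "tunnel MTU ~ $tunnel_mtu, max-mss ~ $mss"
```

With these assumptions the sketch lands at 1402, inside the 1400-1480
range suggested above.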


Re: Slow VPN Performance

Mike Belopuhov-5
On Mon, Oct 22, 2012 at 4:10 PM, Michael Sideris <[hidden email]> wrote:
> It seems that changing to hmac-md5 boosted network throughput from
> ~50Mbit/s to ~100Mbit/s which is decent and reasonable. I am going to
> experiment a bit further with `scrub` options in pf.conf to see if I
> can squeeze more performance out of the link. The question now
> is....how much is security affected by using hmac-md5 vs the default
> hmac-sha2-256?

It's more a question of how often you rekey. You also should not
disable Perfect Forward Secrecy, which recomputes DH values every
time you renew your phase 2 key. And while there are no known
serious attacks on HMAC-MD5, it all depends on how important the
data you're protecting is and whether you have to comply with any
regulations that might mandate use of SHA2.

>  Should I consider using better CPUs on the servers in
> order to gain better performance through a stronger algorithm?
>

You can get 600-750Mbps (depending on the CPU speed) in the
AES-NI enabled setup (using AES-GCM that is).



Re: Slow VPN Performance

Michael Sideris
While I am not required to comply with any particular crypto
standards, I have NFS data passing through that link which I would
classify as fairly sensitive. That being said, I am not sure how to
check the re-keying frequency other than by watching `ipsecctl -m`. I
am not sure if PFS is enabled by default on a stock OpenBSD 5.1
installation, so I would appreciate it if you could tell me how to
check that.

Performance-wise, I would be happy if I could squeeze ~100Mbit/s out
of the 150Mbit/s link. At the moment I am struggling to reach
~100Mbit/s, and that is with hmac-md5. I would like to find a
reasonable balance between performance and security, but it seems that
hmac-sha2-256 is too "expensive" for my hardware. I really thought
dual Xeons @ 2.8GHz would be up to the task.



Re: Slow VPN Performance

Christian Weisgerber
In reply to this post by Michael Sideris
Michael Sideris <[hidden email]> wrote:

> It seems that changing to hmac-md5 boosted network throughput from
> ~50Mbit/s to ~100Mbit/s which is decent and reasonable. I am going to
> experiment a bit further with `scrub` options in pf.conf to see if I
> can squeeze more performance out of the link. The question now
> is....how much is security affected by using hmac-md5 vs the default
> hmac-sha2-256?

At present, negligibly.  The HMAC construction uses MD5 in a way
that is not affected by the known MD5 vulnerabilities.
A difference is that the HMAC-MD5 authentication tags are truncated
to 96 bits, the HMAC-SHA256 ones to 128 bits, but this doesn't have
practical relevance either.

Note that SSH continues to use (untruncated) hmac-md5 by default.

Of course, if you are setting up something where you'll be stuck
with the chosen algorithms for the next 15 years, you may want to
use something with a bigger security margin.
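The 96-bit truncation described above can be illustrated with openssl
(hypothetical key and payload; `awk '{print $NF}'` just grabs the hex
digest whatever label openssl prints):

```shell
# HMAC-MD5 yields a 128-bit tag (32 hex chars); ESP transmits only the
# first 96 bits (24 hex chars) of it.
tag=$(printf 'payload' | openssl dgst -md5 -hmac secretkey | awk '{print $NF}')
trunc=$(printf '%s' "$tag" | cut -c1-24)
echo "full tag:   $tag (${#tag} hex chars)"
echo "96-bit tag: $trunc (${#trunc} hex chars)"
```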

--
Christian "naddy" Weisgerber                          [hidden email]


Re: Slow VPN Performance

Mike Belopuhov-5
In reply to this post by Michael Sideris
On Tue, Oct 23, 2012 at 10:18 AM, Michael Sideris <[hidden email]> wrote:
> While I am not required to comply with any particular crypto
> standards, I have NFS data passing through that link which I would
> classify as....fairly sensitive.

hmm, if you're using udp mounts for NFS you might want to try tcp
instead.

> That being said, I am not sure how to
> check the re-keying frequency except watching `ipsecctl -m`.

that depends on the isakmpd settings. right now you're probably
using default ones which are negotiated with your peer. this is
what isakmpd.conf(5) has to say about it:

    The Quick Mode lifetime defaults to 20 minutes (minimum 60
    seconds, maximum 1 day).

this is a rather broad range, so you might want to shorten it.
look for the "Default-phase-2-lifetime" parameter in the
isakmpd.conf(5) man page.
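A minimal sketch of what that could look like (illustrative values;
the format per isakmpd.conf(5) is seconds as offer,minimum:maximum):

```
# /etc/isakmpd/isakmpd.conf -- shorten Quick Mode rekeying (sketch)
[General]
Default-phase-2-lifetime=       1200,60:3600
```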

> I am not
> sure if PFS is enabled by default on a stock OpenBSD 5.1 installation
> so I would appreciate it if you could tell me how I can check that.
>

it is, unless you disable it with the "group none" in the "quick"
configuration options.
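In ipsec.conf terms (a sketch reusing the macros from the configs
above), PFS stays on unless you ask for "group none" explicitly:

```
# PFS on (default): fresh DH exchange at every phase 2 rekey
ike esp from $local_net to $remote_net peer $remote_ip \
        quick auth hmac-md5 enc aes

# PFS off -- not recommended
ike esp from $local_net to $remote_net peer $remote_ip \
        quick auth hmac-md5 enc aes group none
```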

> Performance wise I would be happy if I could squeeze ~100 out of the
> 150Mbit/s link. At the moment I am struggling to reach ~100Mbit/s and
> that is with hmac-md5. I would like to find a reasonable balance
> between performance and security but it seems that hmac-sha2-256 is
> too "expensive" for my hardware.

unfortunately it is expensive on any hardware; that's why aes-gcm
was developed.

> I really thought dual Xeons @ 2.8GHz would be up to the task.
>

the "dual" part doesn't help as much as it could though.

in any case, i suggest you play with nfs tcp mounts and mss fixups.
otherwise you might be losing performance where you shouldn't.

trying the snapshot out might also give an opportunity to learn if
some of the performance changes that were committed are helpful
in your setup.



Re: Slow VPN Performance

Michael Sideris
I am using the NFS defaults which means, according to the man page at
least, that it should go over TCP. Regardless, I think I have a fair
idea of what is happening now, or at least a better one than before. I
will try to tweak things a bit until I find the right balance between
performance and security. Also, OpenBSD 5.2 is around the corner and
you never know what that might bring.

Big thanks to everyone who put the time into answering my questions. Cheers!



Re: Slow VPN Performance

Philip Guenther-2
On Wed, Oct 24, 2012 at 12:57 AM, Michael Sideris <[hidden email]> wrote:
> I am using the NFS defaults which means, according to the man page at
> least, that it should go over TCP.

Hmm, I don't believe that to be the case.  What man page text are you
seeing says the default is TCP?


Philip Guenther


Re: Slow VPN Performance

Michael Sideris
Actually, scratch that. I was looking at nfs(5) from an old SL 5.7 box
I have here which explicitly states:

"tcp            Mount the NFS filesystem using the TCP protocol.  This
is the default protocol."

That is not the case on OpenBSD, though; thanks for bringing it to my attention.

On Wed, Oct 24, 2012 at 10:27 AM, Philip Guenther <[hidden email]> wrote:
> On Wed, Oct 24, 2012 at 12:57 AM, Michael Sideris <[hidden email]> wrote:
>> I am using the NFS defaults which means, according to the man page at
>> least, that it should go over TCP.
>
> Hmm, I don't believe that to be the case.  What man page text are you
> seeing says the default is TCP?
>
>
> Philip Guenther


Re: Slow VPN Performance

Stuart Henderson
In reply to this post by Michael Sideris
On 2012-10-24, Michael Sideris <[hidden email]> wrote:
> Also, OpenBSD 5.2 is around the corner and you never know what that might bring.

There's a commit from just after 5.2 which is relevant to some
packet forwarding setups, which might be of interest:

http://www.openbsd.org/cgi-bin/cvsweb/src/sys/netinet/ip_input.c?r1=1.197;f=h#rev1.197


Re: Slow VPN Performance

Radek
I have configured a site-to-site IKEv2 VPN between two routers (Soekris net5501-70).
Over the internet, the transfer speed between these machines is up to 5000KB/s (which is OK).
Over the VPN it is only up to 400KB/s.

Is there any way to squeeze more performance out of this hardware and speed up the VPN?

Tested with netcat:
$ nc 10.0.15.254 1234 < 49MB.test
$ nc -l 1234 > 49MB.test

$ cat /etc/iked.conf
ikev2 quick active esp from $local_gw to $remote_gw \
        from $local_lan to $remote_lan peer $remote_gw \
        psk "pass"

$ dmesg | head
OpenBSD 6.3 (GENERIC) #0: Wed Apr 25 16:38:25 CEST 2018
    rdk@RAC_fw63:/usr/src/sys/arch/i386/compile/GENERIC
cpu0: Geode(TM) Integrated Processor by AMD PCS ("AuthenticAMD" 586-class) 500 MHz
cpu0: FPU,DE,PSE,TSC,MSR,CX8,SEP,PGE,CMOV,CFLUSH,MMX,MMXX,3DNOW2,3DNOW
real mem  = 536363008 (511MB)
avail mem = 512651264 (488MB)
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: date 20/80/26, BIOS32 rev. 0 @ 0xfac40


On Wed, 24 Oct 2012 10:28:43 +0000 (UTC)
Stuart Henderson <[hidden email]> wrote:

> On 2012-10-24, Michael Sideris <[hidden email]> wrote:
> > Also, OpenBSD 5.2 is around the corner and you never know what that might bring.
>
> There's a commit from just after 5.2 which is relevant to some
> packet forwarding setups, which might be of interest..
>
> http://www.openbsd.org/cgi-bin/cvsweb/src/sys/netinet/ip_input.c?r1=1.197;f=h#rev1.197
>


--
radek


Re: Slow VPN Performance

sven falempin
On Fri, Jan 18, 2019 at 8:58 AM Radek <[hidden email]> wrote:

You should use curl + nginx (with tmpfs) or iperf for bw testing.

Don't drop data; maybe the driver of the ethernet card is crappy?

Just post the full sendbug data if you actually want help.

Have you tried your nc on the loopback as a reference?
Is header compression activated?

--
Knowing is not enough; we must apply. Willing is not enough; we must do.

Re: Slow VPN Performance

Radek
To be more precise:
I use net/ifstat for current bw testing.
If I push data with netcat over the public IPs, it is up to 5MB/s.
If I push data with netcat through the VPN, it is up to 400KB/s.
End users in the LANs also complain about VPN bw.

> You should use curl + nginx (with tmpfs) or iperf for bw testing.
I do not need to get very exact bw. My "netcat test" shows that data transfer over VPN is ~10 times slower.

> Have you tried your NC on the loopback as a reference ?
$ time nc -N 127.0.0.1 1234 < 50MB.test
0.054u 1.476s 0:10.54 14.4%     0+0k 1281+1io 0pf+0w

> is the HEADER compression activated ?
I do not know. How can I check it out?

> just drop the all sendbug data if you actually want to help.
OpenBSD 6.3 (GENERIC) #0: Wed Apr 25 16:38:25 CEST 2018
    rdk@RAC_fw63:/usr/src/sys/arch/i386/compile/GENERIC
cpu0: Geode(TM) Integrated Processor by AMD PCS ("AuthenticAMD" 586-class) 500 MHz
cpu0: FPU,DE,PSE,TSC,MSR,CX8,SEP,PGE,CMOV,CFLUSH,MMX,MMXX,3DNOW2,3DNOW
real mem  = 536363008 (511MB)
avail mem = 512651264 (488MB)
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: date 20/80/26, BIOS32 rev. 0 @ 0xfac40
pcibios0 at bios0: rev 2.0 @ 0xf0000/0x10000
pcibios0: pcibios_get_intr_routing - function not supported
pcibios0: PCI IRQ Routing information unavailable.
pcibios0: PCI bus #0 is the last bus
bios0: ROM list: 0xc8000/0xa800
cpu0 at mainbus0: (uniprocessor)
mtrr: K6-family MTRR support (2 registers)
amdmsr0 at mainbus0
pci0 at mainbus0 bus 0: configuration mode 1 (no bios)
0:20:0: io address conflict 0x6100/0x100
0:20:0: io address conflict 0x6200/0x200
pchb0 at pci0 dev 1 function 0 "AMD Geode LX" rev 0x33
glxsb0 at pci0 dev 1 function 2 "AMD Geode LX Crypto" rev 0x00: RNG AES
vr0 at pci0 dev 6 function 0 "VIA VT6105M RhineIII" rev 0x96: irq 11, address 00:00:24:cd:90:10
ukphy0 at vr0 phy 1: Generic IEEE 802.3u media interface, rev. 3: OUI 0x004063, model 0x0034
vr1 at pci0 dev 7 function 0 "VIA VT6105M RhineIII" rev 0x96: irq 5, address 00:00:24:cd:90:11
ukphy1 at vr1 phy 1: Generic IEEE 802.3u media interface, rev. 3: OUI 0x004063, model 0x0034
vr2 at pci0 dev 8 function 0 "VIA VT6105M RhineIII" rev 0x96: irq 9, address 00:00:24:cd:90:12
ukphy2 at vr2 phy 1: Generic IEEE 802.3u media interface, rev. 3: OUI 0x004063, model 0x0034
vr3 at pci0 dev 9 function 0 "VIA VT6105M RhineIII" rev 0x96: irq 12, address 00:00:24:cd:90:13
ukphy3 at vr3 phy 1: Generic IEEE 802.3u media interface, rev. 3: OUI 0x004063, model 0x0034
glxpcib0 at pci0 dev 20 function 0 "AMD CS5536 ISA" rev 0x03: rev 3, 32-bit 3579545Hz timer, watchdog, gpio, i2c
gpio0 at glxpcib0: 32 pins
iic0 at glxpcib0
pciide0 at pci0 dev 20 function 2 "AMD CS5536 IDE" rev 0x01: DMA, channel 0 wired to compatibility, channel 1 wired to compatibility
wd0 at pciide0 channel 0 drive 0: <SanDisk SDCFH-008G>
wd0: 1-sector PIO, LBA48, 7629MB, 15625216 sectors
wd0(pciide0:0:0): using PIO mode 4, Ultra-DMA mode 2
pciide0: channel 1 ignored (disabled)
ohci0 at pci0 dev 21 function 0 "AMD CS5536 USB" rev 0x02: irq 15, version 1.0, legacy support
ehci0 at pci0 dev 21 function 1 "AMD CS5536 USB" rev 0x02: irq 15
usb0 at ehci0: USB revision 2.0
uhub0 at usb0 configuration 1 interface 0 "AMD EHCI root hub" rev 2.00/1.00 addr 1
isa0 at glxpcib0
isadma0 at isa0
com0 at isa0 port 0x3f8/8 irq 4: ns16550a, 16 byte fifo
com0: console
com1 at isa0 port 0x2f8/8 irq 3: ns16550a, 16 byte fifo
pckbc0 at isa0 port 0x60/5 irq 1 irq 12
pckbc0: unable to establish interrupt for irq 12
pckbd0 at pckbc0 (kbd slot)
wskbd0 at pckbd0: console keyboard
pcppi0 at isa0 port 0x61
spkr0 at pcppi0
nsclpcsio0 at isa0 port 0x2e/2: NSC PC87366 rev 9: GPIO VLM TMS
gpio1 at nsclpcsio0: 29 pins
npx0 at isa0 port 0xf0/16: reported by CPUID; using exception 16
usb1 at ohci0: USB revision 1.0
uhub1 at usb1 configuration 1 interface 0 "AMD OHCI root hub" rev 1.00/1.00 addr 1
ugen0 at uhub1 port 1 "American Power Conversion Smart-UPS C 1500 FW:UPS 10.0 / ID=1005" rev 2.00/1.06 addr 2
vscsi0 at root
scsibus1 at vscsi0: 256 targets
softraid0 at root
scsibus2 at softraid0: 256 targets
root on wd0a (3f37e17802c01339.a) swap on wd0b dump on wd0b



--
radek


Re: Slow VPN Performance

Radek
I changed default crypto to:

ikev2 quick active esp from $local_gw to $remote_gw \
from $local_lan to $remote_lan peer $remote_gw \
ikesa auth hmac-sha1 enc aes-128 prf hmac-sha1 group modp1024 \
childsa enc aes-128-ctr \
psk "pass"

That increased VPN throughput up to 750KB/s but it is still too slow.
Maybe some sysctl tweaks would also help with this?

Any hint would be appreciated. Thank you.


$ ifstat -i vr0
       vr0        
 KB/s in  KB/s out
    4.48    100.64
   24.14    503.63
   15.32    237.62
    0.33      6.32
   27.37    516.81
   25.92    548.57
   25.36    516.66
   23.49    514.80
   30.79    594.94
   37.45    583.15
   34.16    621.32
   31.54    653.58
   31.40    659.72
   33.00    667.91
   40.15    753.08
   34.54    738.35
   32.15    639.13
   35.11    621.26
   34.78    733.43
   34.59    728.21
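[Editor's note: a timed transfer gives a number that is easier to compare than eyeballing ifstat. A minimal sketch using the host, port, and file from the thread; the `kbps` helper is hypothetical, not from the original posts:

```shell
# Hypothetical helper: integer KB/s from bytes transferred and elapsed seconds.
kbps() {
    echo $(( $1 / 1024 / $2 ))
}

# The transfer itself, as run earlier in the thread:
#   receiver: nc -l 1234 > /dev/null
#   sender:   time nc -N 10.0.15.254 1234 < 49MB.test
# Feed the file size and wall-clock time into the helper, e.g.
# 49MB (51380224 bytes) in 120 seconds:
kbps 51380224 120    # ~418 KB/s, in line with the VPN numbers reported above
```

The same arithmetic applied to the public-IP path (49MB in ~10s) gives roughly 5000 KB/s, matching the 10x gap described in the thread.]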

On Fri, 18 Jan 2019 18:25:11 +0100
Radek <[hidden email]> wrote:

> To be more precise:
> I use net/ifstat for current bw testing.
> If I push data by netcat over public IPs, it is up to 5MB/s.
> If I push data by netcat through VPN, it is up to 400KB/s.
> Endusers in LANs also complain about VPN bw.
>
> > You should use curl + nginx (with tmpfs) or iperf for bw testing.
> I do not need to get very exact bw. My "netcat test" shows that data transfer over VPN is ~10 times slower.
>
> > Have you tried your NC on the loopback as a reference ?
> $ time nc -N 127.0.0.1 1234 < 50MB.test
> 0.054u 1.476s 0:10.54 14.4%     0+0k 1281+1io 0pf+0w
>
> > is the HEADER compression activated ?
> I do not know. How can I check it out?
>
> > just drop the all sendbug data if you actually want to help.
> OpenBSD 6.3 (GENERIC) #0: Wed Apr 25 16:38:25 CEST 2018
>     rdk@RAC_fw63:/usr/src/sys/arch/i386/compile/GENERIC
> cpu0: Geode(TM) Integrated Processor by AMD PCS ("AuthenticAMD" 586-class) 500 MHz
> cpu0: FPU,DE,PSE,TSC,MSR,CX8,SEP,PGE,CMOV,CFLUSH,MMX,MMXX,3DNOW2,3DNOW
> real mem  = 536363008 (511MB)
> avail mem = 512651264 (488MB)
> mpath0 at root
> scsibus0 at mpath0: 256 targets
> mainbus0 at root
> bios0 at mainbus0: date 20/80/26, BIOS32 rev. 0 @ 0xfac40
> pcibios0 at bios0: rev 2.0 @ 0xf0000/0x10000
> pcibios0: pcibios_get_intr_routing - function not supported
> pcibios0: PCI IRQ Routing information unavailable.
> pcibios0: PCI bus #0 is the last bus
> bios0: ROM list: 0xc8000/0xa800
> cpu0 at mainbus0: (uniprocessor)
> mtrr: K6-family MTRR support (2 registers)
> amdmsr0 at mainbus0
> pci0 at mainbus0 bus 0: configuration mode 1 (no bios)
> 0:20:0: io address conflict 0x6100/0x100
> 0:20:0: io address conflict 0x6200/0x200
> pchb0 at pci0 dev 1 function 0 "AMD Geode LX" rev 0x33
> glxsb0 at pci0 dev 1 function 2 "AMD Geode LX Crypto" rev 0x00: RNG AES
> vr0 at pci0 dev 6 function 0 "VIA VT6105M RhineIII" rev 0x96: irq 11, address 00:00:24:cd:90:10
> ukphy0 at vr0 phy 1: Generic IEEE 802.3u media interface, rev. 3: OUI 0x004063, model 0x0034
> vr1 at pci0 dev 7 function 0 "VIA VT6105M RhineIII" rev 0x96: irq 5, address 00:00:24:cd:90:11
> ukphy1 at vr1 phy 1: Generic IEEE 802.3u media interface, rev. 3: OUI 0x004063, model 0x0034
> vr2 at pci0 dev 8 function 0 "VIA VT6105M RhineIII" rev 0x96: irq 9, address 00:00:24:cd:90:12
> ukphy2 at vr2 phy 1: Generic IEEE 802.3u media interface, rev. 3: OUI 0x004063, model 0x0034
> vr3 at pci0 dev 9 function 0 "VIA VT6105M RhineIII" rev 0x96: irq 12, address 00:00:24:cd:90:13
> ukphy3 at vr3 phy 1: Generic IEEE 802.3u media interface, rev. 3: OUI 0x004063, model 0x0034
> glxpcib0 at pci0 dev 20 function 0 "AMD CS5536 ISA" rev 0x03: rev 3, 32-bit 3579545Hz timer, watchdog, gpio, i2c
> gpio0 at glxpcib0: 32 pins
> iic0 at glxpcib0
> pciide0 at pci0 dev 20 function 2 "AMD CS5536 IDE" rev 0x01: DMA, channel 0 wired to compatibility, channel 1 wired to compatibility
> wd0 at pciide0 channel 0 drive 0: <SanDisk SDCFH-008G>
> wd0: 1-sector PIO, LBA48, 7629MB, 15625216 sectors
> wd0(pciide0:0:0): using PIO mode 4, Ultra-DMA mode 2
> pciide0: channel 1 ignored (disabled)
> ohci0 at pci0 dev 21 function 0 "AMD CS5536 USB" rev 0x02: irq 15, version 1.0, legacy support
> ehci0 at pci0 dev 21 function 1 "AMD CS5536 USB" rev 0x02: irq 15
> usb0 at ehci0: USB revision 2.0
> uhub0 at usb0 configuration 1 interface 0 "AMD EHCI root hub" rev 2.00/1.00 addr 1
> isa0 at glxpcib0
> isadma0 at isa0
> com0 at isa0 port 0x3f8/8 irq 4: ns16550a, 16 byte fifo
> com0: console
> com1 at isa0 port 0x2f8/8 irq 3: ns16550a, 16 byte fifo
> pckbc0 at isa0 port 0x60/5 irq 1 irq 12
> pckbc0: unable to establish interrupt for irq 12
> pckbd0 at pckbc0 (kbd slot)
> wskbd0 at pckbd0: console keyboard
> pcppi0 at isa0 port 0x61
> spkr0 at pcppi0
> nsclpcsio0 at isa0 port 0x2e/2: NSC PC87366 rev 9: GPIO VLM TMS
> gpio1 at nsclpcsio0: 29 pins
> npx0 at isa0 port 0xf0/16: reported by CPUID; using exception 16
> usb1 at ohci0: USB revision 1.0
> uhub1 at usb1 configuration 1 interface 0 "AMD OHCI root hub" rev 1.00/1.00 addr 1
> ugen0 at uhub1 port 1 "American Power Conversion Smart-UPS C 1500 FW:UPS 10.0 / ID=1005" rev 2.00/1.06 addr 2
> vscsi0 at root
> scsibus1 at vscsi0: 256 targets
> softraid0 at root
> scsibus2 at softraid0: 256 targets
> root on wd0a (3f37e17802c01339.a) swap on wd0b dump on wd0b
>
> > You should use curl + nginx (with tmpfs) or iperf for bw testing.
> >
> > don't  drop data, maybe the driver of the ethernet card is crappy ?
> >
> > just drop the all sendbug data if you actually want to help.
> >
> > Have you tried your NC on the loopback as a reference ?
> > is the HEADER compression activated ?
>
>
> On Fri, 18 Jan 2019 09:28:45 -0500
> sven falempin <[hidden email]> wrote:
>
> > On Fri, Jan 18, 2019 at 8:58 AM Radek <[hidden email]> wrote:
> >
> > > I have configured Site-to-Site ikev2 VPN between two routers (Soekris
> > > net5501-70).
> > > Over the internet my transfer speed between these machines is up to
> > > 5000KB/s (it is OK).
> > > Over the VPN it is up to 400KB/s only.
> > >
> > > Is there any way to squeeze more performance out from these hardware and
> > > speed up the VPN?
> > >
> > > Tested with netcat:
> > > $ nc 10.0.15.254 1234 < 49MB.test
> > > $ nc -l 1234 > 49MB.test
> > >
> > > $ cat /etc/iked.conf
> > > ikev2 quick active esp from $local_gw to $remote_gw \
> > > from $local_lan to $remote_lan peer $remote_gw \
> > > psk "pass"
> > >
> > > $ dmesg | head
> > > OpenBSD 6.3 (GENERIC) #0: Wed Apr 25 16:38:25 CEST 2018
> > >     rdk@RAC_fw63:/usr/src/sys/arch/i386/compile/GENERIC
> > > cpu0: Geode(TM) Integrated Processor by AMD PCS ("AuthenticAMD" 586-class)
> > > 500 MHz
> > > cpu0: FPU,DE,PSE,TSC,MSR,CX8,SEP,PGE,CMOV,CFLUSH,MMX,MMXX,3DNOW2,3DNOW
> > > real mem  = 536363008 (511MB)
> > > avail mem = 512651264 (488MB)
> > > mpath0 at root
> > > scsibus0 at mpath0: 256 targets
> > > mainbus0 at root
> > > bios0 at mainbus0: date 20/80/26, BIOS32 rev. 0 @ 0xfac40
> > >
> > >
> > >
> > You should use curl + nginx (with tmpfs) or iperf for bw testing.
> >
> > don't  drop data, maybe the driver of the ethernet card is crappy ?
> >
> > just drop the all sendbug data if you actually want to help.
> >
> > Have you tried your NC on the loopback as a reference ?
> > is the HEADER compression activated ?
> >
> > --
> > --
> > ---------------------------------------------------------------------------------------------------------------------
> > Knowing is not enough; we must apply. Willing is not enough; we must do
>
>
> --
> radek


--
radek


Re: Slow VPN Performance

Stuart Henderson
On 2019-01-21, Radek <[hidden email]> wrote:

> I changed default crypto to:
>
> ikev2 quick active esp from $local_gw to $remote_gw \
> from $local_lan to $remote_lan peer $remote_gw \
> ikesa auth hmac-sha1 enc aes-128 prf hmac-sha1 group modp1024 \
> childsa enc aes-128-ctr \
> psk "pass"
>
> That increased VPN throughput up to 750KB/s but it is still too slow.
> Mayba some sysctl tweaks would also help with this?

Try chacha20-poly1305 instead of aes-128-ctr, it may help a little.
I don't think any sysctl is likely to help.

750KB/s is maybe a bit slower than I'd expect but that 10+ year old
net5501 is *not* a fast machine. You might be able to squeeze a bit more
from it but probably not a lot, it won't be getting anywhere near your
line speed even with larger packets, and will be terribly overloaded
for small packets e.g. voip.
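[Editor's note: to put rough numbers on the fixed per-packet cost Stuart describes, here is a back-of-the-envelope sketch of ESP tunnel-mode overhead, assuming aes-128-cbc + hmac-md5; the byte counts are approximate, not from the thread:

```shell
# Approximate ESP tunnel-mode overhead per packet:
# outer IP 20 + ESP header 8 + IV 16 + pad/trailer ~2 + truncated ICV 12
overhead=58
for payload in 1400 160; do
    pct=$(( overhead * 100 / (payload + overhead) ))
    echo "payload ${payload}B: ~${pct}% of each packet is overhead"
done
```

Header overhead alone cannot explain a 10x drop; on a 500MHz Geode the per-packet encrypt-and-HMAC work is the real bottleneck, which is why the small-packet (VoIP) case is the worst.]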

Do you have any other hardware you can use? If buying new, apu2/apu4
would be good/easy options for running OpenBSD on, but if you have
anything with enough NICs and AES (or at least PCLMUL) showing in
the cpu attach line in dmesg, run OpenBSD/amd64 on it, and use
suitable ciphers (try "quick enc aes-128-gcm"), it should be
way better than the 5501.
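[Editor's note: the cpu attach line check Stuart mentions can be done with a grep; shown here against a canned example string so it is self-contained, since the flags differ per machine:

```shell
# Illustrative cpu0 attach line from a capable amd64 box (flags abridged,
# not from this thread). A live check would be: dmesg | grep '^cpu0:'
cpuline='cpu0: ... FPU,VME,...,AES,...,PCLMUL,...'
echo "$cpuline" | grep -Eo 'AES|PCLMUL' | sort -u
```

If neither token appears, the box will be doing all the cipher work in plain software, like the Geode here.]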

>> To be more precise:
>> I use net/ifstat for current bw testing.
>> If I push data by netcat over public IPs, it is up to 5MB/s.
>> If I push data by netcat through VPN, it is up to 400KB/s.
>> Endusers in LANs also complain about VPN bw.

The best test would be run between LAN machines rather than the routers.
Generating traffic on the router itself means it's constantly switching
between kernel and userland which won't be helping. Still, your test is
good enough to show that things are much slower with IPsec enabled.

>> > is the HEADER compression activated ?
>> I do not know. How can I check it out?

I don't know what compression that would be. There is ROHCoIPsec (RFC5856)
but OpenBSD doesn't support that.

There is ipcomp (packet compression) which can be configured in iked,
but the last thing you want to do on this hardware is add more cpu load
by compressing. (it is not configured in the sample you sent).


Re: Slow VPN Performance

Christian Weisgerber
In reply to this post by Radek
On 2019-01-21, Radek <[hidden email]> wrote:

> ikev2 quick active esp from $local_gw to $remote_gw \
> from $local_lan to $remote_lan peer $remote_gw \
> ikesa auth hmac-sha1 enc aes-128 prf hmac-sha1 group modp1024 \
> childsa enc aes-128-ctr \
> psk "pass"
>
> That increased VPN throughput up to 750KB/s but it is still too slow.

A net5501 is very slow by today's standards.  I don't remember if
that speed is expected.  Assuming that encryption/decryption is the
actual bottleneck:

The phase 1 negotiation (ikesa) is only used when the encrypted
channel is set up.  Tweaking the parameters there has no effect on
the performance of the actual data transfer, which is instead
determined by the phase 2 (childsa) algorithms.

The Geode LX CPU in the net5501 offers hardware acceleration for
AES-128-CBC and nothing else. Not AES-192 or -256, not CTR mode.
You can combine this with the cheapest authentication available,
which is HMAC-MD5. The HMAC construction is not affected by the
known vulnerabilities of MD5.

In short, I'd use "childsa enc aes-128 auth hmac-md5" for maximum
throughput on this hardware.
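[Editor's note: spelled out against the thread's own policy, the suggestion would look something like this; variable names and the psk are from the original iked.conf, and this is an untested sketch, not a verified config:

```
ikev2 quick active esp from $local_gw to $remote_gw \
        from $local_lan to $remote_lan peer $remote_gw \
        childsa enc aes-128 auth hmac-md5 \
        psk "pass"
```
]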

--
Christian "naddy" Weisgerber                          [hidden email]


Re: Slow VPN Performance

Radek
In reply to this post by Stuart Henderson
Thank you Stuart and Christian.
>In short, I'd use "childsa enc aes-128 auth hmac-md5" for maximum
> throughput on this hardware.
It gives me up to 700KB/s.

> Try chacha20-poly1305 instead of aes-128-ctr, it may help a little.
"childsa enc chacha20-poly1305" does the trick. It gives me up to 3MB/s. I think that is the throughput I need, but what about the security of ChaCha vs AES? Should I buy new routers ASAP and switch enc to AES, or stay calm with ChaCha?

> Do you have any other hardware you can use? If buying new, apu2/apu4
> would be good/easy options for running OpenBSD on, but if you have
> anything with enough NICs and AES (or at least PCLMUL) showing in
> the cpu attach line in dmesg, run OpenBSD/amd64 on it, and use
> suitable ciphers (try "quick enc aes-128-gcm"), it should be
> way better than the 5501.
No, I don't have any - that's the problem. I'm trying *not* to buy new APUs because they seem quite expensive (very small company, only 3 end users at the remote location). I think 3MB/s over the VPN is sufficient. If not, I (they) will have no choice.
Will the APU.2D2 or another board be OK for that purpose, considering price/performance?
https://www.pcengines.ch/apu2d2.htm

> The best test would be run between LAN machines rather than the routers.
> Generating traffic on the router itself means it's constantly switching
> between kernel and userland which won't be helping. Still, your test is
> good enough to show that things are much slower with IPsec enabled.
True. I use a LAN machine on one side in my netcat tests, but I don't have any on the other side, so I have to use the router.

On Mon, 21 Jan 2019 13:52:41 +0000 (UTC)
Stuart Henderson <[hidden email]> wrote:

> On 2019-01-21, Radek <[hidden email]> wrote:
> > I changed default crypto to:
> >
> > ikev2 quick active esp from $local_gw to $remote_gw \
> > from $local_lan to $remote_lan peer $remote_gw \
> > ikesa auth hmac-sha1 enc aes-128 prf hmac-sha1 group modp1024 \
> > childsa enc aes-128-ctr \
> > psk "pass"
> >
> > That increased VPN throughput up to 750KB/s but it is still too slow.
> > Mayba some sysctl tweaks would also help with this?
>
> Try chacha20-poly1305 instead of aes-128-ctr, it may help a little.
> I don't think any sysctl is likely to help.
>
> 750KB/s is maybe a bit slower than I'd expect but that 10+ year old
> net5501 is *not* a fast machine. You might be able to squeeze a bit more
> from it but probably not a lot, it won't be getting anywhere near your
> line speed even with larger packets, and will be terribly overloaded
> for small packets e.g. voip.
>
> Do you have any other hardware you can use? If buying new, apu2/apu4
> would be good/easy options for running OpenBSD on, but if you have
> anything with enough NICs and AES (or at least PCLMUL) showing in
> the cpu attach line in dmesg, run OpenBSD/amd64 on it, and use
> suitable ciphers (try "quick enc aes-128-gcm"), it should be
> way better than the 5501.
>
> >> To be more precise:
> >> I use net/ifstat for current bw testing.
> >> If I push data by netcat over public IPs, it is up to 5MB/s.
> >> If I push data by netcat through VPN, it is up to 400KB/s.
> >> Endusers in LANs also complain about VPN bw.
>
> The best test would be run between LAN machines rather than the routers.
> Generating traffic on the router itself means it's constantly switching
> between kernel and userland which won't be helping. Still, your test is
> good enough to show that things are much slower with IPsec enabled.
>
> >> > is the HEADER compression activated ?
> >> I do not know. How can I check it out?
>
> I don't know what compression that would be. There is ROHCoIPsec (RFC5856)
> but OpenBSD doesn't support that.
>
> There is ipcomp (packet compression) which can be configured in iked,
> but the last thing you want to do on this hardware is add more cpu load
> by compressing. (it is not configured in the sample you sent).
>


--
radek