10GBit network performance on OpenBSD 6.4


ms-2
Hi,

Please allow me a few questions regarding 10GBit network performance on
OpenBSD 6.4.
I am seeing quite low network performance with the Intel X520-DA2 10GBit
network card.

Test configuration in OpenBSD-Linux-10GBit_net_performance.txt -
http://paste.debian.net/1076461/
Low transfer rate for scp - OpenBSD-10GBit-perftest.txt -
http://paste.debian.net/1076460/

Test configuration:
# ---
# OpenBSD 6.4 on HP DL380g7
# -------------------------

# 10GBit X520-DA2 NIC
ix0: flags=208843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,AUTOCONF6> mtu 1500
         media: Ethernet autoselect (10GbaseSR full-duplex,rxpause,txpause)
         inet6 fe80::d51e:1b74:17d7:8230%ix0 prefixlen 64 scopeid 0x1
         inet 200.0.0.3 netmask 0xffffff00 broadcast 200.0.0.255

ix1: flags=208843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,AUTOCONF6> mtu 1500
         media: Ethernet autoselect (10GbaseSR full-duplex,rxpause,txpause)
         inet 10.0.0.7 netmask 0xffffff00 broadcast 10.0.0.255
         inet6 fe80::b488:caea:5d6f:9992%ix1 prefixlen 64 scopeid 0x2
# ---

Compared to Linux, the 10GBit transfer from/to OpenBSD is a few times slower:

# ---
# OpenBSD to Linux (Asus P8BWS)
# -----------------------------
srvob# iperf3 -c 10.0.0.2
...
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  1.50 GBytes  1.29 Gbits/sec                  sender
[  5]   0.00-10.20  sec  1.50 GBytes  1.27 Gbits/sec                  receiver
# ---


# ---
# Linux (DL380g7) to Linux (Asus P8BWS)
# -------------------------------------
root@kali:~# iperf3 -c 100.0.0.2
...
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  10.9 GBytes  9.39 Gbits/sec 328             sender
[  5]   0.00-10.04  sec  10.9 GBytes  9.35 Gbits/sec                  receiver
# ---

The scp transfer rate is only about 21 MBytes/s per ssh connection
(OpenBSD <-> Linux):
# ---
root@kali:~# scp /re*/b*/ka*/kali-linux-kde-2019.1a-*.iso
ironm@10.0.0.7:/home/ironm/t12.iso
ironm@10.0.0.7's password:
kali-linux-kde-2019.1a-amd64.iso                     4%  173MB  21.5MB/s   02:40 ETA
# ---


The 1GBit copper-based NIC is also slower, but reaches almost 40% of
the maximum transfer rate of 1 Gbit:

# ---
# OpenBSD 6.4 (DL380g7 1Gbit NIC) to Linux (DL380g7 1GBit NIC)
# ------------------------------------------------------------
srvob# iperf3 -c 170.0.0.10
...
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec   471 MBytes   395 Mbits/sec                  sender
[  5]   0.00-10.20  sec   471 MBytes   388 Mbits/sec                  receiver
# ---

# ---
# Linux (Asus P8BWS) to Linux (DL380g7)
# -------------------------------------
root@kali:~# iperf3 -c 192.168.1.122
...
     - - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.09 GBytes   939 Mbits/sec 183             sender
[  5]   0.00-10.04  sec  1.09 GBytes   934 Mbits/sec                  receiver
# ---


Thank you in advance for your hints on which OpenBSD 6.4 settings I am missing.

Best regards
Mark

--
[hidden email]


Re: 10GBit network performance on OpenBSD 6.4

ms-2
Short feedback:

Just as a test, I have checked the 10GBit network performance
between two FreeBSD 13.0 servers (both HP DL380g7 machines),
transferring data in both directions:

# ---
ironm@fbsdsrv2:~ $ scp ironm@200.0.0.10:/home/ironm/t2.iso t100.iso
Password for ironm@fbsdsrv1:
t2.iso                                     100% 3626MB 130.2MB/s   00:27

# ---
ironm@fbsdsrv2:~ $ scp obsd2fbsd.iso ironm@200.0.0.10:/home/ironm/t1.iso
Password for ironm@fbsdsrv1:
obsd2fbsd.iso                              100% 3626MB 140.4MB/s   00:25
# ---

The ssh performance over a 10GBit network connection on FreeBSD 13.0
is approximately 7 times higher than on OpenBSD 6.4.

Is this a question of the "ix" NIC driver of OpenBSD 6.4?
(X520-DA2 NICs from Intel)

Does anyone achieve good 10Gbit network performance with other
10Gbit NICs?

Thank you in advance for your hints.

Kind regards
Mark

--
[hidden email]


On 06.04.2019 22:52, Mark Schneider wrote:

> [...]



Re: 10GBit network performance on OpenBSD 6.4

Anatoli
Hi,

I guess you're hitting two bottlenecks: CPU performance for iperf and
HDD performance for scp.

Check how much CPU is consumed during the iperf transfer, and try scp'ing
something that is not backed by the HDD, e.g. /dev/zero.

I've seen extremely slow HDD performance on OpenBSD, like 12x slower
than on Linux, and also no filesystem cache, so depending on your HDD,
with scp you may be hitting the maximum throughput of the FS, not the network.
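A way to take the disk out of the path entirely (a sketch, not from the thread; the host address and transfer size are placeholders) is to stream /dev/zero through ssh and discard it on the far end:

```shell
# Pure network + ssh test: no disk is touched on either side.
# 10.0.0.2 and the 2 GB volume are example values.
# (bs=1m is BSD dd syntax; GNU dd wants bs=1M.)
dd if=/dev/zero bs=1m count=2048 | ssh ironm@10.0.0.2 'cat > /dev/null'

# Disk-backed comparison: copy a real file but discard it remotely,
# isolating the read side of the local disk.
scp /path/to/large.iso ironm@10.0.0.2:/dev/null
```

If the piped transfer runs much faster than the scp of a real file, the disk, not the network or ssh, is the limit.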

Regards,
Anatoli

*From:* Mark Schneider <[hidden email]>
*Sent:* Saturday, April 06, 2019 17:52
*To:* Misc <[hidden email]>
*Subject:* 10GBit network performance on OpenBSD 6.4

[...]



Re: 10GBit network performance on OpenBSD 6.4

Chris Cappuccio
Anatoli [[hidden email]] wrote:
>
> I've seen extremely slow HDD performance in OpenBSD, like 12x slower than on
> Linux, also no filesystem cache, so depending on your HDD with scp you may
> be hitting the max throughput for the FS, not the network.
>

12x slower? That's insane. What are you talking about? USB HDD? USB Flash?
SATA? Driver? You should submit a bug report with lots of details.

Chris


Re: 10GBit network performance on OpenBSD 6.4

Abel Abraham Camarillo Ojeda-2
On Sun, Apr 7, 2019 at 5:21 PM Mark Schneider <[hidden email]>
wrote:

> Short feedback:
>
> Just for the test I have checked the 10GBit network performance
> between two FreeBSD 13.0 servers (both HP DL380g7 machines)
> transferring data in both directions
>
> [...]

What's your performance without scp? tcpbench or netcat, for example?
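For reference, a raw TCP measurement with the tools mentioned here could look like this (the addresses are placeholders; tcpbench(1) is in the OpenBSD base system):

```shell
# On the receiving host:
tcpbench -s

# On the sending host (10.0.0.7 is an example address); interrupt with ^C
# once the per-second rates have settled:
tcpbench 10.0.0.7

# The same idea with netcat, pushing a fixed amount of zeroes:
#   receiver: nc -l 12345 > /dev/null
#   sender:   dd if=/dev/zero bs=1m count=4096 | nc 10.0.0.7 12345
```

This removes ssh's cipher/MAC overhead and the disks from the measurement, leaving only the network stack and the driver.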

Re: 10GBit network performance on OpenBSD 6.4

Anatoli
That was with Samsung 960 EVO U.2 (PCIe) on i7-8550u with 32GB RAM.
OpenBSD read/write was around 220-240MB/s (with FS encryption), Linux
without FS cache about 2.6-2.8GB/s and with cache over 3.5GB/s.

I don't have a dmesg right now as I installed Gentoo on top and only
saved a screenshot of the tests (below), but I can reinstall OpenBSD
and run more specific tests if anybody is interested (I am certainly
interested in reasonable OpenBSD performance, but I thought 12x slower,
with no cache to improve things when I/O lags, wasn't that strange).

If you can suggest some specific tests to analyze the cause (e.g.
filesystem, hardware issues, scheduling, etc.), please let me know.



*From:* Chris Cappuccio <[hidden email]>
*Sent:* Monday, April 08, 2019 16:28
*To:* Anatoli <[hidden email]>
*Cc:* Misc <[hidden email]>
*Subject:* Re: 10GBit network performance on OpenBSD 6.4

Anatoli [[hidden email]] wrote:

> I've seen extremely slow HDD performance in OpenBSD, like 12x slower than on
> Linux, also no filesystem cache, so depending on your HDD with scp you may
> be hitting the max throughput for the FS, not the network.
>
12x slower? That's insane. What are you talking about? USB HDD? USB Flash?
SATA? Driver? You should submit a bug report with lots of details.

Chris



(Attachment: obsd_dd.png, 163K)

compared filesystem performance, was Re: 10GBit network performance on OpenBSD 6.4

gwes-2


On 04/08/19 17:46, Anatoli wrote:

> That was with Samsung 960 EVO U.2 (PCIe) on i7-8550u with 32GB RAM.
> OpenBSD read/write was around 220-240MB/s (with FS encryption), Linux
> without FS cache about 2.6-2.8GB/s and with cache over 3.5GB/s.
> [...]
A quick test on a slow laptop running Linux shows that
   dd if=/dev/zero of=a bs=64k count=20000
runs at 1.3 GB/sec. The physical disk's transfer rate is 80 MB/sec max;
Linux caches very aggressively.

What is the rated transfer rate of the SSD you're using to test?
SATA 3 wire speed is 6 Gbit/sec, and realistically a 500 MB/sec raw rate
is near the top.

Anything over that is an artefact, probably from a cache somewhere.

I suspect that if you tried to write more data than physical memory
can hold, the transfer rate would slow to something under the
disk or channel rate.
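This point can be checked by writing more data than the machine has RAM, so the page cache cannot absorb the whole run (the sizes below are illustrative for a 32 GB box; 'a' is a scratch file name):

```shell
# ~40 GB written through the filesystem under test.
dd if=/dev/zero of=a bs=64k count=640000

# On Linux, conv=fsync forces a flush before dd reports its rate, which
# keeps the cache from inflating the number even for smaller runs:
# dd if=/dev/zero of=a bs=64k count=640000 conv=fsync
```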

OpenBSD keeps a great deal less in its cache. This slows repeated
accesses to large data sets by a painful amount. That's a separate problem
which I'd like to look at, but I don't have the time to write the tools
to do it.


Re: compared filesystem performance, was Re: 10GBit network performance on OpenBSD 6.4

Chris Cappuccio
gwes [[hidden email]] wrote:
>
> What is the rated transfer rate of the SSD you're using to test?
> SATA 3 wire speed is 6G/sec and realistically 500MB/sec raw rate
> is near the top.
>
> Anything over that is an artefact probably from a cache somewhere.
>

He's using NVMe with its own DRAM cache, which should perform highly. There
is a limiter somewhere, it seems.


Re: 10GBit network performance on OpenBSD 6.4

Joseph Mayer
On Tuesday, April 9, 2019 3:28 AM, Chris Cappuccio <[hidden email]> wrote:
> Anatoli [[hidden email]] wrote:
> > I've seen extremely slow HDD performance in OpenBSD, like 12x slower than on
> > Linux, also no filesystem cache, so depending on your HDD with scp you may
> > be hitting the max throughput for the FS, not the network.
>
> 12x slower? That's insane. What are you talking about? USB HDD? USB Flash?
> SATA? Driver? You should submit a bug report with lots of details.
>
> Chris

Chris,

Isn't the filesystem layer in OpenBSD altogether serial-processing, all
the way pretty much from userland fwrite() down to hardware access (as
in, no use of hardware multiqueueing)?

The non-use of multiqueueing is problematic for random reads from SSDs,
as they have extremely high latency within an individual read op, e.g.
~1 millisecond.

On the hardware where I tested, OpenBSD gives ~120MB/sec
system-wide filesystem IO on any number of disks, even on an NVMe
SSD with ~500-900MB/sec random access performance. I took this as
confirmation that the filesystem layer itself is the primary
bottleneck.

Also, what is the filesystem's internal sector size, with which it then
accesses the underlying hardware: 4KB, 16KB, or 512B? I have always
suspected the last.

One thing that will be very interesting to see on OpenBSD is how serial
and random accesses perform on Intel Optane NVMe disks, with their
incredibly low latency. These could offset the OpenBSD filesystem's
limitation of not parallelizing IO.

Also, the filesystem logic can be sidestepped by doing 16KB-aligned
accesses to /dev/rsd*.
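A raw-device read along those lines might look like this (the disk name is a placeholder; reading is non-destructive, and the character device bypasses the buffer cache and filesystem entirely):

```shell
# 1 GB sequential read in 16 KB blocks straight off the raw disk.
# rsd0c is an example; pick the correct disk on your system.
dd if=/dev/rsd0c of=/dev/null bs=16k count=65536
```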

Joseph


Re: 10GBit network performance on OpenBSD 6.4

Anatoli
On top of this (and I don't know why, maybe because of softraid FS
encryption?), I haven't seen any effect of the FS cache for files of any
size (not even 128MB), even though it is supposed to use at least the
32-bit memory (some percentage of the first 4GB, see
https://unix.stackexchange.com/questions/61459/does-sysctl-kern-bufcachepercent-not-work-in-openbsd-5-2-above-1-7gb/62184#62184).
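For reference, the knob discussed in that link is adjusted like this (the value is an example; kern.bufcachepercent sets the share of DMA-reachable memory given to the buffer cache):

```shell
# Inspect the current value (the default has been 20):
sysctl kern.bufcachepercent

# Raise it on the running system, e.g. to 80%:
sysctl kern.bufcachepercent=80

# Make it persistent across reboots:
echo 'kern.bufcachepercent=80' >> /etc/sysctl.conf
```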

Given FS/hardware management inefficiencies, things could
be dramatically improved by an efficient FS cache if one has enough
RAM, since reading from RAM should be in the range of dozens of GB/s with
nanosecond latency, but unfortunately that's not the case (at least in
my setup).

*From:* Joseph Mayer <[hidden email]>
*Sent:* Monday, April 08, 2019 22:52
*To:* Chris Cappuccio <[hidden email]>
*Cc:* Anatoli <[hidden email]>, Misc <[hidden email]>
*Subject:* Re: 10GBit network performance on OpenBSD 6.4

On Tuesday, April 9, 2019 3:28 AM, Chris Cappuccio <[hidden email]> wrote:

> [...]



Re: compared filesystem performance, was Re: 10GBit network performance on OpenBSD 6.4

gwes-2


On 04/08/19 19:29, Chris Cappuccio wrote:

> gwes [[hidden email]] wrote:
>> What is the rated transfer rate of the SSD you're using to test?
>> SATA 3 wire speed is 6G/sec and realistically 500MB/sec raw rate
>> is near the top.
>>
>> Anything over that is an artefact probably from a cache somewhere.
>>
> He's using NVMe with its own DRAM cache, which should perform higly. There
> is a limiter somewhere, it seems.
>
That doesn't answer the question: if you run
   dd if=/dev/zero of=/dev/sda (Linux) or of=/dev/rsd0c (OpenBSD) bs=64k count=1000000
what transfer rate is reported?

That number represents the maximum possible long-term filesystem
performance on that drive.

There are other non-filesystem overheads which have to be excluded
before you can be sure that the differences are truly due to the filesystem
code and algorithms rather than cache differences.


Re: 10GBit network performance on OpenBSD 6.4

Stuart Henderson
On 2019-04-07, Mark Schneider <[hidden email]> wrote:

> Short feedback:
>
> Just for the test I have checked the 10GBit network performance
> between two FreeBSD 13.0 servers (both HP DL380g7 machines)
> transfering data in both directions
>
> # ---
> ironm@fbsdsrv2:~ $ scp ironm@200.0.0.10:/home/ironm/t2.iso t100.iso
> Password for ironm@fbsdsrv1:
> t2.iso                                     100% 3626MB 130.2MB/s   00:27
>
> # ---
> ironm@fbsdsrv2:~ $ scp obsd2fbsd.iso ironm@200.0.0.10:/home/ironm/t1.iso
> Password for ironm@fbsdsrv1:
> obsd2fbsd.iso                              100% 3626MB 140.4MB/s   00:25
> # ---

scp is a *terrible* way to test network performance. If you are only
interested in scp performance between two hosts then it's relevant,
and you can probably improve speeds by using something other than scp;
otherwise it's irrelevant.

> The ssh performance using 10GBit network connection on FreeBSD 13.0
> is approx 7 times higher than the one on OpenBSD 6.4.
>
> Is it the question of the "ix" NIC driver of OpenBSD 6.4?
> (X520-DA2 NICs from Intel)
>
> Does one of you achieve good 10Gbit network performance with other
> 10Gbit NICs?

FreeBSD's network stack can make better use of multiple processors.
OpenBSD is improving (you can read some stories about this work at
www.grenadille.net) but is slower.

Jumbo frames should help if you can use them; much of network
performance is bound by packets-per-second, not bits-per-second.
For scp, switching ciphers/MACs is likely to speed things up too.
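Those two suggestions might be tried as follows (the interface name, address, and cipher choice are examples; for jumbo frames the MTU must match on every device in the path):

```shell
# Jumbo frames on the ix interface (switch and far end must agree):
ifconfig ix0 mtu 9000

# Persist across reboots; hostname.if(5) lines are passed to ifconfig:
echo 'mtu 9000' >> /etc/hostname.ix0

# A cheaper cipher/MAC for scp; AES-GCM is fast where AES-NI is present:
scp -c aes128-gcm@openssh.com bigfile ironm@10.0.0.2:/tmp/
```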


Re: compared filesystem performance, was Re: 10GBit network performance on OpenBSD 6.4

Chris Cappuccio
gwes [[hidden email]] wrote:
>
> That doesn't answer the question: if you say
> dd if=/dev/zero of=/dev/sda (linux) /dev/rsd0c (bsd) bs=64k count=1000000
> what transfer rate is reported
>

Totally agree. Anatoli, could you please compare?

> That number represents the maximum possible long-term filesystem
> performance on that drive.
>

you mean non-filesystem?


Re: 10GBit network performance on OpenBSD 6.4

ms-2
Hello Tom,

Thank you very much for your hint.
I disabled pf with the "pfctl -d" command but didn't notice any
difference in the 10GBit transfer speed.
The CPU usage was high (around 100% of one of the available CPU cores).

# Single send
obsdsrv2$ scp 4GByte-random.bin ironm@10.0.0.2:/home/ironm/send4GByte-v1.bin
4GByte-random.bin                            89% 3665MB 71.0MB/s   00:06 ETA

---
# Send and receive at once (two scp connections)

obsdsrv2$ scp 4GByte-random.bin ironm@10.0.0.2:/home/ironm/send4GByte-v1.bin
4GByte-random.bin                            50% 2050MB 47.7MB/s   00:42 ETA

obsdsrv2$ scp ironm@10.0.0.2:/home/ironm/4GByte-random.bin receive4GByte-v1.bin
4GByte-random.bin                           68% 2814MB  50.8MB/s   00:25 ETA

Details of the test are in the attached file:
obsd2obsd-send_conf_AMD_FX_4100_to_Xeon-E31270-10gbit-NICs-SSD_cpu-load-top.txt

Kind regards
Mark

--
[hidden email]



On 08.04.2019 00:20, Tom Smyth wrote:

> Hello
>
> if you disable pf you should get a lot higher speeds,
> as PF uses 1 CPU
> alternatively you can enable the experimental pf code that uses more than
> one CPU
>
>
> On Sun, 7 Apr 2019 at 23:15, Mark Schneider <[hidden email]> wrote:
>> [...]
>>>
>>> # ---
>>> # OpenBSD 6.4 (DL380g7 1Gbit NIC) to Linux (DL380g7 1GBit NIC)
>>> # ------------------------------------------------------------
>>> srvob# iperf3 -c 170.0.0.10
>>> ...
>>> - - - - - - - - - - - - - - - - - - - - - - - - -
>>> [ ID] Interval           Transfer     Bitrate
>>> [  5]   0.00-10.00  sec   471 MBytes   395 Mbits/sec
>>> sender
>>> [  5]   0.00-10.20  sec   471 MBytes   388 Mbits/sec
>>> receiver
>>> # ---
>>>
>>> # ---
>>> # Linux (Asus P8BWS) to Linux (DL380g7)
>>> # -------------------------------------
>>> root@kali:~# iperf3 -c 192.168.1.122
>>> ...
>>>      - - - - - - - - - - - - - - - - - - - - - - - - -
>>> [ ID] Interval           Transfer     Bitrate         Retr
>>> [  5]   0.00-10.00  sec  1.09 GBytes   939 Mbits/sec 183
>>> sender
>>> [  5]   0.00-10.04  sec  1.09 GBytes   934 Mbits/sec
>>> receiver
>>> # ---
>>>
>>>
>>> Thank you in advance for your hints what OpenBSD 6.4 settings do I miss.
>>>
>>> Best regards
>>> Mark
>>>
>>
>


obsd2obsd-send_conf_AMD_FX_4100_to_Xeon-E31270-10gbit-NICs-SSD_cpu-load-top.txt (8K) Download Attachment

Answer 2 / Re: 10GBit network performance on OpenBSD 6.4

ms-2
In reply to this post by ms-2
Hi Peter

Thank you very much for your feedback.

It looks like the performance issue is more complex than I expected.
Just for the test I have installed OpenBSD 6.4 and FreeBSD 13.0 on a few
different servers and compared the results (details are in the attached
files).

Pure network speed I still have to test with tools like tcpbench, iperf3
or netperf, to stay independent of mass storage and other factors.
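As a side note on reading the iperf3 output quoted in this thread: iperf3 prints Transfer in binary GBytes (GiB) but Bitrate in decimal Gbits/sec, so the two columns can be cross-checked. A small sketch, using the 1.50 GBytes / 10.00 s figures from the OpenBSD-to-Linux run earlier in the thread:

```shell
# Cross-check iperf3's columns: 1.50 GiB transferred in 10.00 s should
# reproduce the printed 1.29 Gbits/sec (GiB -> bits -> decimal gigabits).
awk 'BEGIN { printf "%.2f Gbits/sec\n", 1.50 * 1073741824 * 8 / 10.00 / 1e9 }'
# -> 1.29 Gbits/sec
```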

Kind regards
Mark

--
[hidden email]



On 08.04.2019 05:12, Peter Membrey wrote:

> Hi Mark,
>
> I saw very similar performance issues on my system as well. The card was an Intel X550-10 dual port on an Atom C2750 box (8 core but low power). I've never had great TCP performance from OpenBSD on this box, but routing traffic through it had always been fine. Upgrading to 10Gb/s though I found I couldn't get much more than the speeds you were seeing.
>
> As the OS was due for an upgrade anyway, I tried it with Linux, and was immediately able to hit 9.8Gb/s. Since I'm also quite comfortable using Linux, I left it in place (it's a key router for me) and it's been working great ever since.
>
> I'm afraid I can't add much to help you apart from a "me too" but at least it rules out it being an issue with your particular hardware.
>
> Kind Regards,
>
> Peter Membrey
>
>
> ----- Original Message -----
> From: "Mark Schneider" <[hidden email]>
> To: "misc" <[hidden email]>
> Sent: Monday, 8 April, 2019 06:09:09
> Subject: Re: 10GBit network performance on OpenBSD 6.4
>
> Short feedback:
>
> Just for the test I have checked the 10GBit network performance
> between two FreeBSD 13.0 servers (both HP DL380g7 machines)
> transfering data in both directions
>
> # ---
> ironm@fbsdsrv2:~ $ scp ironm@200.0.0.10:/home/ironm/t2.iso t100.iso
> Password for ironm@fbsdsrv1:
> t2.iso                                     100% 3626MB 130.2MB/s   00:27
>
> # ---
> ironm@fbsdsrv2:~ $ scp obsd2fbsd.iso ironm@200.0.0.10:/home/ironm/t1.iso
> Password for ironm@fbsdsrv1:
> obsd2fbsd.iso                              100% 3626MB 140.4MB/s   00:25
> # ---
>
> The ssh performance using 10GBit network connection on FreeBSD 13.0
> is approx 7 times higher than the one on OpenBSD 6.4.
>
> Is it the question of the "ix" NIC driver of OpenBSD 6.4?
> (X520-DA2 NICs from Intel)
>
> Does one of you achieve good 10Gbit network performance with other
> 10Gbit NICs?
>
> Thank you in advance for your hints.
>
> Kind regards
> Mark
>


dd-times-for-devzero-and-devnull-fbsd+debian.txt (4K) Download Attachment
dd-times-for-devzero-and-devnull-fbsd+debian+obsd.txt (4K) Download Attachment
debian2openBSD-10gbit.txt (2K) Download Attachment
fbsd2debian-conf_AMD_FX_4100_to_Xeon-E31270-10gbit-NICs.txt (3K) Download Attachment
fbsd2fbs-send-10gbit-dl380g7-raid60-8x6G-sas-drives.txt (9K) Download Attachment
obsd2obsd-send_conf_AMD_FX_4100_to_Xeon-E31270-10gbit-NICs-SSD_cpu-load-top.txt (8K) Download Attachment
README-network-performance-FreeBSD-10GBit.txt (4K) Download Attachment
scp-4GB-OpenBSD-to-OpenBSD-10Gbit-fiber.txt (3K) Download Attachment
scp-OBSD-FBSD-10gbit-SSD.txt (1K) Download Attachment

Answer 3 / Re: 10GBit network performance on OpenBSD 6.4

ms-2
In reply to this post by Anatoli
Hi Anatoli

Thank you very much for your helpful hints.
The CPU usage (on one of the available cores) was nearly 100%.

FreeBSD 13.0 and Linux (Debian) currently seem to have faster network
stacks (and faster mass storage handling).
During the tests I used Debian Linux running in live mode (transfers go
to DDR3 memory instead of RAID or SSD).

Just for the test I have installed OpenBSD 6.4 and FreeBSD 13.0 on a few
different servers (with 6G/15k SAS drives in RAID or 860 Pro SATA3 SSDs)
and compared the results (details are in the attached files).

I have got some further hints from the misc list to run tests using
"/dev/null", so I will repeat a few tests.

More details are in attached files.
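For scale, the 21.5 MByte/s scp rate reported earlier in the thread uses only a tiny fraction of the 10 Gbit link. A rough decimal-unit calculation, ignoring protocol overhead:

```shell
# 21.5 MBytes/s expressed in Gbit/s and as a share of a 10 Gbit link
# (decimal units throughout; illustration only).
awk 'BEGIN { r = 21.5 * 8 / 1000; printf "%.3f Gbit/s (%.1f%% of 10 Gbit)\n", r, r / 10 * 100 }'
# -> 0.172 Gbit/s (1.7% of 10 Gbit)
```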


Kind regards
Mark



On 08.04.2019 19:30, Anatoli wrote:

> Hi,
>
> I guess you're hitting 2 bottlenecks: the CPU performance for iperf
> and HDD performance for scp.
>
> Check how much CPU is consumed during iperf transfer and try scp'ing
> something not from/to HDD, e.g. /dev/zero.
>
> I've seen extremely slow HDD performance in OpenBSD, like 12x slower
> than on Linux, also no filesystem cache, so depending on your HDD with
> scp you may be hitting the max throughput for the FS, not the network.
>
> Regards,
> Anatoli
>
> *From:* Mark Schneider <[hidden email]>
> *Sent:* Saturday, April 06, 2019 17:52
> *To:* Misc <[hidden email]>
> *Subject:* 10GBit network performance on OpenBSD 6.4
>
> Hi,
>
> Please allow me few questions regarding 10GBit network performance on
> OpenBSD 6.4.
> I face quite low network performance  for the Intell X520-DA2 10GBit
> network card.
>
> Test configuration in OpenBSD-Linux-10GBit_net_performance.txt -
> http://paste.debian.net/1076461/
> Low transfer rate for scp - OpenBSD-10GBit-perftest.txt -
> http://paste.debian.net/1076460/
>
> Test configuration:
> # ---
> # OpenBSD 6.4 on HP DL380g7
> # -------------------------
>
> # 10GBit X520-DA2 NIC
> ix0: flags=208843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,AUTOCONF6>
> mtu 1500
>         media: Ethernet autoselect (10GbaseSR
> full-duplex,rxpause,txpause)
>         inet6 fe80::d51e:1b74:17d7:8230%ix0 prefixlen 64 scopeid 0x1
>         inet 200.0.0.3 netmask 0xffffff00 broadcast 200.0.0.255
>
> ix1: flags=208843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,AUTOCONF6>
> mtu 1500
>         media: Ethernet autoselect (10GbaseSR
> full-duplex,rxpause,txpause)
>         inet 10.0.0.7 netmask 0xffffff00 broadcast 10.0.0.255
>         inet6 fe80::b488:caea:5d6f:9992%ix1 prefixlen 64 scopeid 0x2
> # ---
>
> Compare to Linux the 10GBit transfer from/to OpenBSD is few times slower:
>
> # ---
> # OpenBSD to Linux (Asus P8BWS)
> # -----------------------------
> srvob# iperf3 -c 10.0.0.2
> ...
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval           Transfer     Bitrate
> [  5]   0.00-10.00  sec  1.50 GBytes  1.29 Gbits/sec                  
> sender
> [  5]   0.00-10.20  sec  1.50 GBytes  1.27 Gbits/sec                  
> receiver
> # ---
>
>
> # ---
> # Linux (DL380g7) to Linux (Asus P8BWS)
> # -------------------------------------
> root@kali:~# iperf3 -c 100.0.0.2
> ...
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval           Transfer     Bitrate         Retr
> [  5]   0.00-10.00  sec  10.9 GBytes  9.39 Gbits/sec 328            
> sender
> [  5]   0.00-10.04  sec  10.9 GBytes  9.35 Gbits/sec                  
> receiver
> # ---
>
> The scp transfer rate is like 21MBytes/s only per ssh connection
> (OpenBSD <-> Linux):
> # ---
> root@kali:~# scp /re*/b*/ka*/kali-linux-kde-2019.1a-*.iso
> ironm@10.0.0.7:/home/ironm/t12.iso
> ironm@10.0.0.7's password:
> kali-linux-kde-2019.1a-amd64.iso                     4%  173MB
> 21.5MB/s   02:40 ETA
> # ---
>
>
> The 1GBit cooper based NIC works also slower but reaching almost 40%
> of the max trasfer rate of 1 Gbit:
>
> # ---
> # OpenBSD 6.4 (DL380g7 1Gbit NIC) to Linux (DL380g7 1GBit NIC)
> # ------------------------------------------------------------
> srvob# iperf3 -c 170.0.0.10
> ...
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval           Transfer     Bitrate
> [  5]   0.00-10.00  sec   471 MBytes   395 Mbits/sec                  
> sender
> [  5]   0.00-10.20  sec   471 MBytes   388 Mbits/sec                  
> receiver
> # ---
>
> # ---
> # Linux (Asus P8BWS) to Linux (DL380g7)
> # -------------------------------------
> root@kali:~# iperf3 -c 192.168.1.122
> ...
>     - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval           Transfer     Bitrate         Retr
> [  5]   0.00-10.00  sec  1.09 GBytes   939 Mbits/sec 183            
> sender
> [  5]   0.00-10.04  sec  1.09 GBytes   934 Mbits/sec                  
> receiver
> # ---
>
>
> Thank you in advance for your hints what OpenBSD 6.4 settings do I miss.
>
> Best regards
> Mark


dd-times-for-devzero-and-devnull-fbsd+debian.txt (4K) Download Attachment
dd-times-for-devzero-and-devnull-fbsd+debian+obsd.txt (4K) Download Attachment
debian2openBSD-10gbit.txt (2K) Download Attachment
fbsd2debian-conf_AMD_FX_4100_to_Xeon-E31270-10gbit-NICs.txt (3K) Download Attachment
fbsd2fbs-send-10gbit-dl380g7-raid60-8x6G-sas-drives.txt (9K) Download Attachment
obsd2obsd-send_conf_AMD_FX_4100_to_Xeon-E31270-10gbit-NICs-SSD_cpu-load-top.txt (8K) Download Attachment
README-network-performance-FreeBSD-10GBit.txt (4K) Download Attachment
scp-4GB-OpenBSD-to-OpenBSD-10Gbit-fiber.txt (3K) Download Attachment
scp-OBSD-FBSD-10gbit-SSD.txt (1K) Download Attachment

Answer 4 / Re: 10GBit network performance on OpenBSD 6.4

ms-2
In reply to this post by Abel Abraham Camarillo Ojeda-2
Hi

 > Whats your performance without scp? tcpbench / netcat, for example?

Thank you very much for your hint. I have not run them yet (only iperf3,
as listed below).
Further test details are in the attached files.
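In the meantime, a purely local dd copy from /dev/zero to /dev/null gives a rough upper bound on the non-network copy overhead (loopback-free and local only; a real tcpbench or netcat measurement of course needs the second host):

```shell
# Local baseline: copy 1 GiB from /dev/zero to /dev/null and let dd
# report the rate (the summary format differs between GNU and BSD dd).
dd if=/dev/zero of=/dev/null bs=1M count=1024 2>&1 | tail -n 1
```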

Kind regards
Mark

--
[hidden email]



On 08.04.2019 22:06, Abel Abraham Camarillo Ojeda wrote:

> On Sun, Apr 7, 2019 at 5:21 PM Mark Schneider <[hidden email]>
> wrote:
>
>> Short feedback:
>>
>> Just for the test I have checked the 10GBit network performance
>> between two FreeBSD 13.0 servers (both HP DL380g7 machines)
>> transfering data in both directions
>>
>> # ---
>> ironm@fbsdsrv2:~ $ scp ironm@200.0.0.10:/home/ironm/t2.iso t100.iso
>> Password for ironm@fbsdsrv1:
>> t2.iso                                     100% 3626MB 130.2MB/s   00:27
>>
>> # ---
>> ironm@fbsdsrv2:~ $ scp obsd2fbsd.iso ironm@200.0.0.10:/home/ironm/t1.iso
>> Password for ironm@fbsdsrv1:
>> obsd2fbsd.iso                              100% 3626MB 140.4MB/s   00:25
>> # ---
>>
>> The ssh performance using 10GBit network connection on FreeBSD 13.0
>> is approx 7 times higher than the one on OpenBSD 6.4.
>>
>> Is it the question of the "ix" NIC driver of OpenBSD 6.4?
>> (X520-DA2 NICs from Intel)
>>
>> Does one of you achieve good 10Gbit network performance with other
>> 10Gbit NICs?
>>
>> Thank you in advance for your hints.
>>
>> Kind regards
>> Mark
>>
>> --
>> [hidden email]
>>
>>
>> On 06.04.2019 22:52, Mark Schneider wrote:
>>> Hi,
>>>
>>> Please allow me few questions regarding 10GBit network performance on
>>> OpenBSD 6.4.
>>> I face quite low network performance  for the Intell X520-DA2 10GBit
>>> network card.
>>>
>>> Test configuration in OpenBSD-Linux-10GBit_net_performance.txt -
>>> http://paste.debian.net/1076461/
>>> Low transfer rate for scp - OpenBSD-10GBit-perftest.txt -
>>> http://paste.debian.net/1076460/
>>>
>>> Test configuration:
>>> # ---
>>> # OpenBSD 6.4 on HP DL380g7
>>> # -------------------------
>>>
>>> # 10GBit X520-DA2 NIC
>>> ix0: flags=208843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,AUTOCONF6>
>>> mtu 1500
>>>          media: Ethernet autoselect (10GbaseSR
>>> full-duplex,rxpause,txpause)
>>>          inet6 fe80::d51e:1b74:17d7:8230%ix0 prefixlen 64 scopeid 0x1
>>>          inet 200.0.0.3 netmask 0xffffff00 broadcast 200.0.0.255
>>>
>>> ix1: flags=208843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,AUTOCONF6>
>>> mtu 1500
>>>          media: Ethernet autoselect (10GbaseSR
>>> full-duplex,rxpause,txpause)
>>>          inet 10.0.0.7 netmask 0xffffff00 broadcast 10.0.0.255
>>>          inet6 fe80::b488:caea:5d6f:9992%ix1 prefixlen 64 scopeid 0x2
>>> # ---
>>>
>>> Compare to Linux the 10GBit transfer from/to OpenBSD is few times slower:
>>>
>>> # ---
>>> # OpenBSD to Linux (Asus P8BWS)
>>> # -----------------------------
>>> srvob# iperf3 -c 10.0.0.2
>>> ...
>>> - - - - - - - - - - - - - - - - - - - - - - - - -
>>> [ ID] Interval           Transfer     Bitrate
>>> [  5]   0.00-10.00  sec  1.50 GBytes  1.29 Gbits/sec
>>> sender
>>> [  5]   0.00-10.20  sec  1.50 GBytes  1.27 Gbits/sec
>>> receiver
>>> # ---
>>>
>>>
>>> # ---
>>> # Linux (DL380g7) to Linux (Asus P8BWS)
>>> # -------------------------------------
>>> root@kali:~# iperf3 -c 100.0.0.2
>>> ...
>>> - - - - - - - - - - - - - - - - - - - - - - - - -
>>> [ ID] Interval           Transfer     Bitrate         Retr
>>> [  5]   0.00-10.00  sec  10.9 GBytes  9.39 Gbits/sec 328
>>> sender
>>> [  5]   0.00-10.04  sec  10.9 GBytes  9.35 Gbits/sec
>>> receiver
>>> # ---
>>>
>>> The scp transfer rate is like 21MBytes/s only per ssh connection
>>> (OpenBSD <-> Linux):
>>> # ---
>>> root@kali:~# scp /re*/b*/ka*/kali-linux-kde-2019.1a-*.iso
>>> ironm@10.0.0.7:/home/ironm/t12.iso
>>> ironm@10.0.0.7's password:
>>> kali-linux-kde-2019.1a-amd64.iso                     4%  173MB
>>> 21.5MB/s   02:40 ETA
>>> # ---
>>>
>>>
>>> The 1GBit cooper based NIC works also slower but reaching almost 40%
>>> of the max trasfer rate of 1 Gbit:
>>>
>>> # ---
>>> # OpenBSD 6.4 (DL380g7 1Gbit NIC) to Linux (DL380g7 1GBit NIC)
>>> # ------------------------------------------------------------
>>> srvob# iperf3 -c 170.0.0.10
>>> ...
>>> - - - - - - - - - - - - - - - - - - - - - - - - -
>>> [ ID] Interval           Transfer     Bitrate
>>> [  5]   0.00-10.00  sec   471 MBytes   395 Mbits/sec
>>> sender
>>> [  5]   0.00-10.20  sec   471 MBytes   388 Mbits/sec
>>> receiver
>>> # ---
>>>
>>> # ---
>>> # Linux (Asus P8BWS) to Linux (DL380g7)
>>> # -------------------------------------
>>> root@kali:~# iperf3 -c 192.168.1.122
>>> ...
>>>      - - - - - - - - - - - - - - - - - - - - - - - - -
>>> [ ID] Interval           Transfer     Bitrate         Retr
>>> [  5]   0.00-10.00  sec  1.09 GBytes   939 Mbits/sec 183
>>> sender
>>> [  5]   0.00-10.04  sec  1.09 GBytes   934 Mbits/sec
>>> receiver
>>> # ---
>>>
>>>
>>> Thank you in advance for your hints what OpenBSD 6.4 settings do I miss.
>>>
>>> Best regards
>>> Mark
>>>
> Whats your performance without scp? tcpbench / netcat, for example?


dd-times-for-devzero-and-devnull-fbsd+debian.txt (4K) Download Attachment
dd-times-for-devzero-and-devnull-fbsd+debian+obsd.txt (4K) Download Attachment
debian2openBSD-10gbit.txt (2K) Download Attachment
fbsd2debian-conf_AMD_FX_4100_to_Xeon-E31270-10gbit-NICs.txt (3K) Download Attachment
fbsd2fbs-send-10gbit-dl380g7-raid60-8x6G-sas-drives.txt (9K) Download Attachment
obsd2obsd-send_conf_AMD_FX_4100_to_Xeon-E31270-10gbit-NICs-SSD_cpu-load-top.txt (8K) Download Attachment
README-network-performance-FreeBSD-10GBit.txt (4K) Download Attachment
scp-4GB-OpenBSD-to-OpenBSD-10Gbit-fiber.txt (3K) Download Attachment
scp-OBSD-FBSD-10gbit-SSD.txt (1K) Download Attachment

Answer 5 / Re: 10GBit network performance on OpenBSD 6.4

ms-2
In reply to this post by Anatoli
On 08.04.2019 23:46, Anatoli wrote:

Thank you very much for the idea, Anatoli!

Running dd with "/dev/zero" and "/dev/null" gave me a good overview of
what is going on (across different server hardware and operating systems):

ironm@wheezy:~$ time dd if=/dev/zero of=file1.tmp bs=1M count=4096 && sync
4294967296 bytes (4.3 GB) copied, 1.0029 s, 4.3 GB/s
--
ironm@fbsdsrv8:~ $ time dd if=/dev/zero of=file1.tmp bs=1M count=4096 &&
sync
4294967296 bytes transferred in 8.432852 secs (509313755 bytes/sec)
--
ironm@fbsdsrv1:~ $ time dd if=/dev/zero of=file1.tmp bs=1M count=4096 &&
sync
4294967296 bytes transferred in 5.947370 secs (722162508 bytes/sec)
--
ironm@fbsdsrv2:~ $ time dd if=/dev/zero of=file1.tmp bs=1M count=4096 &&
sync
4294967296 bytes transferred in 8.804378 secs (487821753 bytes/sec)
--
ironm@wheezy:~$ time dd if=file1.tmp of=/dev/null bs=1M count=4096 && sync
4294967296 bytes (4.3 GB) copied, 0.410687 s, 10.5 GB/s
--
ironm@wheezy:~$ time dd if=file1.tmp of=/dev/null bs=1M count=512 && sync
536870912 bytes (537 MB) copied, 0.0558006 s, 9.6 GB/s
--
ironm@fbsdsrv8:~ $ time dd if=file1.tmp of=/dev/null bs=1M count=4096 &&
sync
4294967296 bytes transferred in 1.338350 secs (3209151777 bytes/sec)
--
ironm@fbsdsrv8:~ $ time dd if=file1.tmp of=/dev/null bs=1M count=512 && sync
536870912 bytes transferred in 0.167219 secs (3210581655 bytes/sec)
--
ironm@fbsdsrv1:~ $ time dd if=file1.tmp of=/dev/null bs=1M count=4096 &&
sync
4294967296 bytes transferred in 1.173098 secs (3661217181 bytes/sec)
--
ironm@fbsdsrv1:~ $ time dd if=file1.tmp of=/dev/null bs=1M count=512 && sync
536870912 bytes transferred in 0.191353 secs (2805662938 bytes/sec)
--
ironm@fbsdsrv2:~ $ time dd if=file1.tmp of=/dev/null bs=1M count=4096 &&
sync
4294967296 bytes transferred in 1.159899 secs (3702879890 bytes/sec)
--
ironm@fbsdsrv2:~ $ time dd if=file1.tmp of=/dev/null bs=1M count=512 && sync
536870912 bytes transferred in 0.213278 secs (2517231248 bytes/sec)
--
obsdsrv2$ time dd if=/dev/zero of=file1.tmp bs=1M count=4096 && sync
4294967296 bytes transferred in 9.136 secs (470078173 bytes/sec)
--
obsdsrv2$ time dd if=file1.tmp of=/dev/null bs=1M count=4096 && sync
4294967296 bytes transferred in 11.280 secs (380734881 bytes/sec)
--
obsdsrv2$ time dd if=file1.tmp of=/dev/null bs=1M count=4096 && sync
4294967296 bytes transferred in 10.167 secs (422400700 bytes/sec)
--
obsdsrv2$ time dd if=file1.tmp of=/dev/null bs=1M count=2048 && sync
2147483648 bytes transferred in 4.515 secs (475551520 bytes/sec)
--
obsdsrv2$ time dd if=file1.tmp of=/dev/null bs=1M count=1024 && sync
1073741824 bytes transferred in 1.728 secs (621203080 bytes/sec)
--
obsdsrv2$ time dd if=file1.tmp of=/dev/null bs=1M count=512 && sync
536870912 bytes transferred in 0.265 secs (2021821094 bytes/sec)

Kind regards
Mark

--
[hidden email]


dd-times-for-devzero-and-devnull-fbsd+debian+obsd.txt (4K) Download Attachment

Answer 6 - ix network driver from FreeBSD 13.0 / Re: 10GBit network performance on OpenBSD 6.4

ms-2
In reply to this post by Stuart Henderson
Hi Stuart

Thank you very much for the link.

The total ssh-based performance depends strongly on the server hardware
(and the installed OS).
With the "fastest" test configuration (server hardware / installed OS) I
was able to achieve a total transfer speed of approx. 400 MBytes/s (on
the 10Gbit fiber link) for a few parallel read/write scp sessions.

I don't know whether the ix network drivers used in the FreeBSD 13.0 or
Linux kernels are much more efficient, or whether it is possible to use
the ix network driver from FreeBSD 13.0 in OpenBSD 6.4.

# ---
# fbsdsrv1 kernel: CPU: Intel(R) Xeon(R) CPU           X5690 @ 3.47GHz
(3465.76-MHz K8-class CPU)
# fbsdsrv1 kernel: FreeBSD/SMP: Multiprocessor System Detected: 24 CPUs

# fbsdsrv1 kernel: real memory  = 103079215104 (98304 MB)

# fbsdsrv1 kernel: da0: <HP RAID ADG OK> Fixed Direct Access SPC-3 SCSI
device
# fbsdsrv1 kernel: da0: Serial Number PACCRCN80ZL1RD7
# fbsdsrv1 kernel: da0: 135.168MB/s transfers
# fbsdsrv1 kernel: da0: Command Queueing enabled
# fbsdsrv1 kernel: da0: 839893MB (1720102192 512 byte sectors)

# fbsdsrv1 kernel: ix0: <Intel(R) PRO/10GbE PCI-Express Network Driver>
port 0x7000-0x701f mem 0xe9a80000-0xe9afffff,0xe9a70000-0xe9a73fff irq 26
# at device 0.0 on pci6
# fbsdsrv1 kernel: ix0: Using 2048 tx descriptors and 2048 rx descriptors
# fbsdsrv1 kernel: ix0: queue equality override not set, capping
rx_queues at 12 and tx_queues at 12
# fbsdsrv1 kernel: ix0: Using 12 rx queues 12 tx queues
# fbsdsrv1 kernel: ix0: Using MSI-X interrupts with 13 vectors
# fbsdsrv1 kernel: ix0: allocated for 12 queues
# fbsdsrv1 kernel: ix0: allocated for 12 rx queues
# fbsdsrv1 kernel: ix0: Ethernet address: 90:e2:ba:16:20:a4
# fbsdsrv1 kernel: ix0: PCI Express Bus: Speed 5.0GT/s Width x4
# fbsdsrv1 kernel: ix0: netmap queues/slots: TX 12/2048, RX 12/2048
# fbsdsrv1 kernel: ix1: <Intel(R) PRO/10GbE PCI-Express Network Driver>
port 0x7020-0x703f mem 0xe9980000-0xe99fffff,0xe9970000-0xe9973fff irq 25
# at device 0.1 on pci6
# fbsdsrv1 kernel: ix1: Using 2048 tx descriptors and 2048 rx descriptors
# fbsdsrv1 kernel: ix1: queue equality override not set, capping
rx_queues at 12 and tx_queues at 12
# fbsdsrv1 kernel: ix1: Using 12 rx queues 12 tx queues
# fbsdsrv1 kernel: ix1: Using MSI-X interrupts with 13 vectors
# fbsdsrv1 kernel: ix1: allocated for 12 queues
# fbsdsrv1 kernel: ix1: allocated for 12 rx queues
# fbsdsrv1 kernel: ix1: Ethernet address: 90:e2:ba:16:20:a5
# fbsdsrv1 kernel: ix1: PCI Express Bus: Speed 5.0GT/s Width x4
# fbsdsrv1 kernel: ix1: netmap queues/slots: TX 12/2048, RX 12/2048


# ---
ironm@fbsdsrv1:~ $ scp t1.iso
ironm@200.0.0.20:/home/ironm/fbsd2fbs-send-conf1.iso
Password for ironm@fbsdsrv2:
t1.iso 100% 3626MB 132.0MB/s   00:27



Kind regards
Mark

--
[hidden email]


On 09.04.2019 13:31, Stuart Henderson wrote:

> On 2019-04-07, Mark Schneider <[hidden email]> wrote:
>> Short feedback:
>>
>> Just for the test I have checked the 10GBit network performance
>> between two FreeBSD 13.0 servers (both HP DL380g7 machines)
>> transfering data in both directions
>>
>> # ---
>> ironm@fbsdsrv2:~ $ scp ironm@200.0.0.10:/home/ironm/t2.iso t100.iso
>> Password for ironm@fbsdsrv1:
>> t2.iso                                     100% 3626MB 130.2MB/s   00:27
>>
>> # ---
>> ironm@fbsdsrv2:~ $ scp obsd2fbsd.iso ironm@200.0.0.10:/home/ironm/t1.iso
>> Password for ironm@fbsdsrv1:
>> obsd2fbsd.iso                              100% 3626MB 140.4MB/s   00:25
>> # ---
> scp is a *terrible* way to test network performance. If you are only
> interested in scp performance between two hosts then it's relevant,
> and you can probably improve speeds by using something other than scp.
> Otherwise irrelevant.
>
>> The ssh performance using 10GBit network connection on FreeBSD 13.0
>> is approx 7 times higher than the one on OpenBSD 6.4.
>>
>> Is it the question of the "ix" NIC driver of OpenBSD 6.4?
>> (X520-DA2 NICs from Intel)
>>
>> Does one of you achieve good 10Gbit network performance with other
>> 10Gbit NICs?
> FreeBSD's network stack can make better use of multiple processors.
> OpenBSD is improving (you can read some stories about this work at
> www.grenadille.net) but is slower.
>
> Jumbo frames should help if you can use them. Much of network
> performance is related to packets-per-second not bits-per-second.
> For scp, switching ciphers/MACs is likely to speed things up too.
>
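Stuart's jumbo-frames suggestion can be made persistent per interface via hostname.if(5). A minimal sketch, assuming the switch and the peer also accept an MTU of 9000 (address and netmask taken from the ix0 configuration earlier in the thread):

```
# /etc/hostname.ix0 -- sketch only; MTU 9000 requires jumbo frame
# support end to end (NIC, switch and peer)
inet 200.0.0.3 255.255.255.0 NONE mtu 9000
```

For the scp side, picking a faster cipher (e.g. `scp -c aes128-gcm@openssh.com ...`) is the usual first step, as Stuart notes; actual gains depend on the CPUs involved.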



Re: compared filesystem performance, was Re: 10GBit network performance on OpenBSD 6.4

Anatoli
In reply to this post by Chris Cappuccio
 > totally agree, Anatoli could you please compare ?

I will try to run these tests in the coming days and will attach a
dmesg. Anyway, without a FS (sequentially writing to a raw device) we'd
be measuring just the sequential speed of the raw device, not even of a
partition. I think this represents the practical maximum performance for
that device, not a real-use scenario. But combined with other tests it
could be an interesting data point for finding the bottleneck.
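For reference, the raw-device dd run proposed below (bs=64k, count=1000000) would write roughly 61 GiB:

```shell
# Total bytes written by 'dd ... bs=64k count=1000000', in GiB.
awk 'BEGIN { printf "%.1f GiB\n", 65536 * 1000000 / 1073741824 }'
# -> 61.0 GiB
```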

*From:* Chris Cappuccio <[hidden email]>
*Sent:* Tuesday, April 09, 2019 10:36
*To:* Gwes <[hidden email]>
*Cc:* Chris Cappuccio <[hidden email]>, Anatoli <[hidden email]>, Misc
<[hidden email]>
*Subject:* Re: compared filesystem performance, was Re: 10GBit network
performance on OpenBSD 6.4

gwes [[hidden email]] wrote:

> That doesn't answer the question: if you say
> dd if=/dev/zero of=/dev/sda (linux) /dev/rsd0c (bsd) bs=64k count=1000000
> what transfer rate is reported
>
totally agree, Anatoli could you please compare?

> That number represents the maximum possible long-term filesystem
> performance on that drive.
>
you mean non-filesystem?

