Performance problems with OpenBSD 4.9 under ESXi 5


Performance problems with OpenBSD 4.9 under ESXi 5

Gene-46
I'm trying to run OpenBSD 4.9 (amd64) under VMware vSphere 5 (ESXi 5).  I
set up four virtual machines with one core, 256 MB of RAM, and 4 GB of disk
space each.  I used the install49.iso as my installation medium.  Aside from
the OS installation, I haven't installed anything on them yet.

They perform terribly.  The load average hovers around 1.5 on all of these
VMs although the CPU shows as being idle.  Connecting via SSH and switching
to root can take over a minute.  If I reboot the virtual machines they
perform well for a short time, but within 15-30 minutes they slow down to a
crawl again.

These four machines are spread across two VM hosts, each with six cores and
16 GB of RAM.  I haven't started doing anything with these VMs yet.  I have
other VMs installed (Linux and FreeBSD) and they don't have this problem.

Has anyone else experienced this problem?  Is there tuning I can do to make
it work better?  I tried disabling mpbios, but that had no effect.

Thanks.

-Gene
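For anyone reproducing this, it helps to capture the load average and a CPU
snapshot at the same moment, since the complaint is a load around 1.5 with an
idle CPU.  A minimal sketch, assuming a POSIX shell; the /proc path is the
Linux spelling and the sysctl fallback the OpenBSD one:

```shell
#!/bin/sh
# Snapshot the load average plus CPU usage so "high load but idle CPU"
# can be confirmed from a single capture.
if [ -r /proc/loadavg ]; then
    load=$(awk '{print $1}' /proc/loadavg)           # Linux
else
    load=$(sysctl -n vm.loadavg | awk '{print $1}')  # OpenBSD
fi
echo "1-min load: $load"
# Two vmstat samples one second apart; the last line reflects the
# current user/system/idle CPU split rather than the since-boot average.
if command -v vmstat >/dev/null 2>&1; then
    vmstat 1 2 | tail -1
fi
```

If load stays high while the idle column sits near 100, the time is going
somewhere the guest cannot see (hypervisor scheduling, clock trouble), not
into runnable processes.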


Re: Performance problems with OpenBSD 4.9 under ESXi 5

Gonzalo L. Rodriguez
dmesg?

On Wed, 19 Oct 2011 11:55:19 -0700, Gene <[hidden email]> wrote:
> [...]

--
Sending from my computer


Re: Performance problems with OpenBSD 4.9 under ESXi 5

Johan Ryberg
What "hardware" version did you use? Have you tried different?

// Johan

2011/10/19 Gonzalo L. R. <[hidden email]>:

> dmesg?
>
> [...]


Re: Performance problems with OpenBSD 4.9 under ESXi 5

Gene-46
I'm using amd64.  I'll try i386 later today to see if the issue occurs
again.  Another person replied to me saying i386 works fine for him in ESXi
5.

I had the VMs powered off.  I started them back up and am trying to
reproduce the problem.  So far dmesg isn't giving me anything beyond the
messages from boot.

Thank you for the replies, it is much appreciated.

-Gene

On Wed, Oct 19, 2011 at 1:18 PM, Johan Ryberg <[hidden email]> wrote:

> What "hardware" version did you use? Have you tried different?
>
> // Johan
>
> 2011/10/19 Gonzalo L. R. <[hidden email]>:
> > dmesg?
> >
> > [...]


Re: Performance problems with OpenBSD 4.9 under ESXi 5

Stuart Henderson
In reply to this post by Gene-46
Haven't tried esxi 5 but I have some hack VMs under 4.1 which are
working ok (i386 and amd64).  Some things to try:

- Try different "guest os types" in the vm config page. On 4.1
I typically set rhel 5 32-bit which seems to work fairly well,
even for amd64, and uses the vic(4) network driver.

- Try i386.

- If you're overcommitting RAM, can you avoid doing that?

- Might be worth giving -current a spin (or 5.0 when it's
available - release isn't far off - note that people who pre-order
CDs often receive them before the official release date ;-)


On 2011-10-19, Gene <[hidden email]> wrote:

> [...]


Re: Performance problems with OpenBSD 4.9 under ESXi 5

Joe S-3
In reply to this post by Gene-46
On Wed, Oct 19, 2011 at 11:55 AM, Gene <[hidden email]> wrote:
> I'm trying to run OpenBSD 4.9 (amd64) under VMware vSphere 5 (ESXi 5).  I
> set up four virtual machines with one core, 256 MB of RAM, and 4 GB of disk
>
> They perform terribly.  The load average hovers around 1.5 on all of these

What sort of hardware is ESXi running on?


Re: Performance problems with OpenBSD 4.9 under ESXi 5

Gene-46
On Wed, Oct 19, 2011 at 2:54 PM, Joe S <[hidden email]> wrote:

> On Wed, Oct 19, 2011 at 11:55 AM, Gene <[hidden email]> wrote:
> > I'm trying to run OpenBSD 4.9 (amd64) under VMware vSphere 5 (ESXi 5).  I
> > set up four virtual machines with one core, 256 MB of RAM, and 4 GB of disk
> >
> > They perform terribly.  The load average hovers around 1.5 on all of these
>
> What sort of hardware is ESXi running on?
>

AMD Phenom II X6 3.2 GHz processor, 16 GB RAM.


Re: Performance problems with OpenBSD 4.9 under ESXi 5

Gene-46
In reply to this post by Stuart Henderson
On Wed, Oct 19, 2011 at 2:52 PM, Stuart Henderson <[hidden email]> wrote:

> Haven't tried esxi 5 but I have some hack VMs under 4.1 which are
> working ok (i386 and amd64). Some things to try:-
>
> - Try different "guest os types" in the vm config page. On 4.1
> I typically set rhel 5 32-bit which seems to work fairly well,
> even for amd64, and uses the vic(4) network driver.
>

I used FreeBSD 64-bit for the guest type.  I will try different guest
types if switching to i386 doesn't improve it.


> - Try i386.
>
> - If you're overcommitting RAM, can you avoid doing that?
>

I have allocated less than 50% of the RAM, and almost none of it is being
used.


>
> - Might be worth giving -current a spin (or 5.0 when it's
> available - release isn't far off - note that people who pre-order
> CDs often receive them before the official release date ;-)
>

Does 5.0 have VM-specific features in it?



>
>
> On 2011-10-19, Gene <[hidden email]> wrote:
> > [...]


Re: Performance problems with OpenBSD 4.9 under ESXi 5

Edho P Arief-2
In reply to this post by Gene-46
On Wed, Oct 19, 2011 at 8:41 PM, Gene <[hidden email]> wrote:
> I'm using amd64.  I'll try i386 later today to see if the issue occurs
> again.  Another person replied to me saying i386 works fine for him in ESXi
> 5.
>

I'm also running 4.9 i386 in a VMware VM and it sure is fine:

[edho@tomoka ~]$ uptime
 7:33AM  up 80 days,  8:51, 1 user, load averages: 0.23, 0.26, 0.27
[edho@tomoka ~]$ uname -a
OpenBSD tomoka.myconan.net 4.9 GENERIC.MP#794 i386
[edho@tomoka ~]$ dmesg | grep vm
vmt0 at mainbus0
vmt0 at mainbus0



--
O< ascii ribbon campaign - stop html mail - www.asciiribbon.org


Re: Performance problems with OpenBSD 4.9 under ESXi 5

patric conant
In reply to this post by Gene-46
What could we ask you that would get you to post those "messages from boot"?

On Wed, Oct 19, 2011 at 3:41 PM, Gene <[hidden email]> wrote:

> I'm using amd64.  I'll try i386 later today to see if the issue occurs
> again.  Another person replied to me saying i386 works fine for him in ESXi
> 5.
>
> I had the VMs powered off.  I started them back up and am trying to
> reproduce the problem.  So far dmesg isn't giving me anything beyond the
> messages from boot.
>
> Thank you for the replies, it is much appreciated.
>
> -Gene
>
> On Wed, Oct 19, 2011 at 1:18 PM, Johan Ryberg <[hidden email]> wrote:
>
> > What "hardware" version did you use? Have you tried different?
> >
> > [...]


Re: Performance problems with OpenBSD 4.9 under ESXi 5

LeviaComm Networks NOC
In reply to this post by Gene-46
On 19-Oct-11 16:19, Gene wrote:

> On Wed, Oct 19, 2011 at 2:52 PM, Stuart Henderson <[hidden email]> wrote:
>
>> Haven't tried esxi 5 but I have some hack VMs under 4.1 which are
>> working ok (i386 and amd64). Some things to try:-
>>
>> - Try different "guest os types" in the vm config page. On 4.1
>> I typically set rhel 5 32-bit which seems to work fairly well,
>> even for amd64, and uses the vic(4) network driver.
>
> I used FreeBSD 64bit for the guest type.  I will try using different guest
> types if switching to i386 doesn't improve it.

You should try setting up the disks as thick eager-zeroed.  Otherwise my VM
is set up as hardware version 8 with the FreeBSD 32-bit guest type.  Load
averages are less than 0.1 when idle and around 0.8 under use.

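Whether lazily allocated disk blocks are behind the stalls can be probed by
timing a sequential write from inside the guest.  A rough sketch, not an
authoritative benchmark; the /tmp path and 64 MB size are arbitrary choices
(point TARGET at the VM's data disk in practice):

```shell
#!/bin/sh
# Rough write probe: time a modest sequential write.  On a thin or
# lazy-zeroed virtual disk the first write to unallocated blocks is
# where the extra latency shows up; thick eager-zeroed disks avoid it.
TARGET=/tmp/latency-probe.$$   # assumption: scratch file location
start=$(date +%s)
# BSD dd spells the block size "1m", GNU dd "1M"; try both spellings.
dd if=/dev/zero of="$TARGET" bs=1m count=64 2>/dev/null ||
    dd if=/dev/zero of="$TARGET" bs=1M count=64 2>/dev/null
end=$(date +%s)
echo "64MB write took $((end - start))s"
rm -f "$TARGET"
```

Run it twice: if the second pass over the same file is much faster, the
penalty is allocation-on-first-write, which eager zeroing removes.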
This is the dmesg from one of my VMs on my ESXi 5.0 host:

OpenBSD 5.0-current (GENERIC) #71: Fri Oct  7 12:57:13 MDT 2011
     [hidden email]:/usr/src/sys/arch/i386/compile/GENERIC
cpu0: AMD Opteron(tm) Processor 6128 ("AuthenticAMD" 686-class, 512KB L2 cache) 2 GHz
cpu0: FPU,V86,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SSE3,CX16,POPCNT
real mem  = 267907072 (255MB)
avail mem = 253468672 (241MB)
mainbus0 at root
bios0 at mainbus0: AT/286+ BIOS, date 01/07/11, BIOS32 rev. 0 @ 0xfd780, SMBIOS rev. 2.4 @ 0xe0010 (268 entries)
bios0: vendor Phoenix Technologies LTD version "6.00" date 01/07/2011
bios0: VMware, Inc. VMware Virtual Platform
acpi0 at bios0: rev 2
acpi0: sleep states S0 S1 S4 S5
acpi0: tables DSDT FACP BOOT APIC MCFG SRAT HPET
acpi0: wakeup devices PCI0(S3) USB_(S1) P2P0(S3) S1F0(S3) S2F0(S3) S3F0(S3)
S4F0(S3) S5F0(S3) S6F0(S3) S7F0(S3) S8F0(S3) S9F0(S3) Z00Q(S3) Z00R(S3)
Z00S(S3) Z00T(S3) Z00U(S3) Z00V(S3) Z00W(S3) Z00X(S3) Z00Y(S3) Z00Z(S3)
Z010(S3) Z011(S3) Z012(S3) Z013(S3) Z014(S3) Z015(S3) Z016(S3) Z017(S3)
Z018(S3) Z019(S3) Z01A(S3) Z01B(S3) Z01C(S3) P2P1(S3) S1F0(S3) S2F0(S3)
S3F0(S3) S4F0(S3) S5F0(S3) S6F0(S3) S7F0(S3) S8F0(S3) S9F0(S3) Z00Q(S3)
Z00R(S3) Z00S(S3) Z00T(S3) Z00U(S3) Z00V(S3) Z00W(S3) Z00X(S3) Z00Y(S3)
Z00Z(S3) Z010(S3) Z011(S3) Z012(S3) Z013(S3) Z014(S3) Z015(S3) Z016(S3)
Z017(S3) Z018(S3) Z019(S3) Z01A(S3) Z01B(S3) Z01C(S3) P2P2(S3) S1F0(S3)
S2F0(S3) S3F0(S3) S4F0(S3) S5F0(S3) S6F0(S3) S7F0(S3) S8F0(S3) S9F0(S3)
Z00Q(S3) Z00R(S3) Z00S(S3) Z00T(S3) Z00U(S3) Z00V(S3) Z00W(S3) Z00X(S3)
Z00Y(S3) Z00Z(S3) Z010(S3) Z011(S3) Z012(S3) Z013(S3) Z014(S3) Z015(S3)
Z016(S3) Z017(S3) Z018(S3) Z019(S3) Z01A(S3) Z01B(S3) Z01C(S3) P2P3(S3)
S1F0(S3) S2F0(S3) S3F0(S3) S4F0(S3) S5F0(S3) S6F0(S3) S7F0(S3) S8F0(S3)
S9F0(S3) Z00Q(S3) Z00R(S3) Z00S(S3) Z00T(S3) Z00U(S3) Z00V(S3) Z00W(S3)
Z00X(S3) Z00Y(S3) Z00Z(S3) Z010(S3) Z011(S3) Z012(S3) Z013(S3) Z014(S3)
Z015(S3) Z016(S3) Z017(S3) Z018(S3) Z019(S3) Z01A(S3) Z01B(S3) Z01C(S3)
PE40(S3) S1F0(S3) PE50(S3) S1F0(S3) PE60(S3) S1F0(S3) PE70(S3) S1F0(S3)
PE80(S3) S1F0(S3) PE90(S3) S1F0(S3) PEA0(S3) S1F0(S3) PEB0(S3) S1F0(S3)
PEC0(S3) S1F0(S3) PED0(S3) S1F0(S3) PEE0(S3) S1F0(S3) PE41(S3) S1F0(S3)
PE42(S3) S1F0(S3) PE43(S3) S1F0(S3) PE44(S3) S1F0(S3) PE45(S3) S1F0(S3)
PE46(S3) S1F0(S3) PE47(S3) S1F0(S3) PE51(S3) S1F0(S3) PE52(S3) S1F0(S3)
PE53(S3) S1F0(S3) PE54(S3) S1F0(S3) PE55(S3) S1F0(S3) PE56(S3) S1F0(S3)
PE57(S3) S1F0(S3) PE61(S3) S1F0(S3) PE62(S3) S1F0(S3) PE63(S3) S1F0(S3)
PE64(S3) S1F0(S3) PE65(S3) S1F0(S3) PE66(S3) S1F0(S3) PE67(S3) S1F0(S3)
PE71(S3) S1F0(S3) PE72(S3) S1F0(S3) PE73(S3) S1F0(S3) PE74(S3) S1F0(S3)
PE75(S3) S1F0(S3) PE76(S3) S1F0(S3) PE77(S3) S1F0(S3) PE81(S3) S1F0(S3)
PE82(S3) S1F0(S3) PE83(S3) S1F0(S3) PE84(S3) S1F0(S3) PE85(S3) S1F0(S3)
PE86(S3) S1F0(S3) PE87(S3) S1F0(S3) PE91(S3) S1F0(S3) PE92(S3) S1F0(S3)
PE93(S3) S1F0(S3) PE94(S3) S1F0(S3) PE95(S3) S1F0(S3) PE96(S3) S1F0(S3)
PE97(S3) S1F0(S3) PEA1(S3) S1F0(S3) PEA2(S3) S1F0(S3) PEA3(S3) S1F0(S3)
PEA4(S3) S1F0(S3) PEA5(S3) S1F0(S3) PEA6(S3) S1F0(S3) PEA7(S3) S1F0(S3)
PEB1(S3) S1F0(S3) PEB2(S3) S1F0(S3) PEB3(S3) S1F0(S3) PEB4(S3) S1F0(S3)
PEB5(S3) S1F0(S3) PEB6(S3) S1F0(S3) PEB7(S3) S1F0(S3) SLPB(S4) LID_(S4)
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimadt0 at acpi0 addr 0xfee00000: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: apic clock running at 65MHz
ioapic0 at mainbus0: apid 1 pa 0xfec00000, version 11, 24 pins
acpimcfg0 at acpi0 addr 0xe0000000, bus 0-255
acpihpet0 at acpi0: 14318179 Hz
acpiprt0 at acpi0: bus 0 (PCI0)
acpicpu0 at acpi0
acpibat0 at acpi0: BAT1 not present
acpibat1 at acpi0: BAT2 not present
acpiac0 at acpi0: AC unit online
acpibtn0 at acpi0: SLPB
acpibtn1 at acpi0: LID_
bios0: ROM list: 0xc0000/0x8000 0xc8000/0x1e00! 0xca000/0x1000 0xdc000/0x4000! 0xe0000/0x4000!
vmt0 at mainbus0
pci0 at mainbus0 bus 0: configuration mode 1 (bios)
pchb0 at pci0 dev 0 function 0 "Intel 82443BX AGP" rev 0x01
ppb0 at pci0 dev 1 function 0 "Intel 82443BX AGP" rev 0x01
pci1 at ppb0 bus 1
piixpcib0 at pci0 dev 7 function 0 "Intel 82371AB PIIX4 ISA" rev 0x08
pciide0 at pci0 dev 7 function 1 "Intel 82371AB IDE" rev 0x01: DMA, channel 0 configured to compatibility, channel 1 configured to compatibility
pciide0: channel 0 disabled (no drives)
atapiscsi0 at pciide0 channel 1 drive 0
scsibus0 at atapiscsi0: 2 targets
cd0 at scsibus0 targ 0 lun 0: <NECVMWar, VMware IDE CDR10, 1.00> ATAPI 5/cdrom removable
cd0(pciide0:1:0): using PIO mode 4, Ultra-DMA mode 2
piixpm0 at pci0 dev 7 function 3 "Intel 82371AB Power" rev 0x08: SMBus
disabled
"VMware Virtual Machine Communication Interface" rev 0x10 at pci0 dev 7 function 7 not configured
vga1 at pci0 dev 15 function 0 "VMware Virtual SVGA II" rev 0x00
wsdisplay0 at vga1 mux 1: console (80x25, vt100 emulation)
wsdisplay0: screen 1-5 added (80x25, vt100 emulation)
mpi0 at pci0 dev 16 function 0 "Symbios Logic 53c1030" rev 0x01: apic 1
int 17
scsibus1 at mpi0: 16 targets, initiator 7
sd0 at scsibus1 targ 0 lun 0: <VMware, Virtual disk, 1.0> SCSI2 0/direct
fixed
sd0: 8192MB, 512 bytes/sector, 16777216 sectors
mpi0: target 0 Sync at 160MHz width 16bit offset 127 QAS 1 DT 1 IU 1
ppb1 at pci0 dev 17 function 0 "VMware Virtual PCI-PCI" rev 0x02
pci2 at ppb1 bus 2
em0 at pci2 dev 0 function 0 "Intel PRO/1000MT (82545EM)" rev 0x01: apic 1 int 18, address 00:0c:29:cc:7a:de
ppb2 at pci0 dev 21 function 0 "VMware Virtual PCIE-PCIE" rev 0x01
pci3 at ppb2 bus 3
ppb3 at pci0 dev 21 function 1 "VMware Virtual PCIE-PCIE" rev 0x01
pci4 at ppb3 bus 4
ppb4 at pci0 dev 21 function 2 "VMware Virtual PCIE-PCIE" rev 0x01
pci5 at ppb4 bus 5
ppb5 at pci0 dev 21 function 3 "VMware Virtual PCIE-PCIE" rev 0x01
pci6 at ppb5 bus 6
ppb6 at pci0 dev 21 function 4 "VMware Virtual PCIE-PCIE" rev 0x01
pci7 at ppb6 bus 7
ppb7 at pci0 dev 21 function 5 "VMware Virtual PCIE-PCIE" rev 0x01
pci8 at ppb7 bus 8
ppb8 at pci0 dev 21 function 6 "VMware Virtual PCIE-PCIE" rev 0x01
pci9 at ppb8 bus 9
ppb9 at pci0 dev 21 function 7 "VMware Virtual PCIE-PCIE" rev 0x01
pci10 at ppb9 bus 10
ppb10 at pci0 dev 22 function 0 "VMware Virtual PCIE-PCIE" rev 0x01
pci11 at ppb10 bus 11
ppb11 at pci0 dev 22 function 1 "VMware Virtual PCIE-PCIE" rev 0x01
pci12 at ppb11 bus 12
ppb12 at pci0 dev 22 function 2 "VMware Virtual PCIE-PCIE" rev 0x01
pci13 at ppb12 bus 13
ppb13 at pci0 dev 22 function 3 "VMware Virtual PCIE-PCIE" rev 0x01
pci14 at ppb13 bus 14
ppb14 at pci0 dev 22 function 4 "VMware Virtual PCIE-PCIE" rev 0x01
pci15 at ppb14 bus 15
ppb15 at pci0 dev 22 function 5 "VMware Virtual PCIE-PCIE" rev 0x01
pci16 at ppb15 bus 16
ppb16 at pci0 dev 22 function 6 "VMware Virtual PCIE-PCIE" rev 0x01
pci17 at ppb16 bus 17
ppb17 at pci0 dev 22 function 7 "VMware Virtual PCIE-PCIE" rev 0x01
pci18 at ppb17 bus 18
ppb18 at pci0 dev 23 function 0 "VMware Virtual PCIE-PCIE" rev 0x01
pci19 at ppb18 bus 19
ppb19 at pci0 dev 23 function 1 "VMware Virtual PCIE-PCIE" rev 0x01
pci20 at ppb19 bus 20
ppb20 at pci0 dev 23 function 2 "VMware Virtual PCIE-PCIE" rev 0x01
pci21 at ppb20 bus 21
ppb21 at pci0 dev 23 function 3 "VMware Virtual PCIE-PCIE" rev 0x01
pci22 at ppb21 bus 22
ppb22 at pci0 dev 23 function 4 "VMware Virtual PCIE-PCIE" rev 0x01
pci23 at ppb22 bus 23
ppb23 at pci0 dev 23 function 5 "VMware Virtual PCIE-PCIE" rev 0x01
pci24 at ppb23 bus 24
ppb24 at pci0 dev 23 function 6 "VMware Virtual PCIE-PCIE" rev 0x01
pci25 at ppb24 bus 25
ppb25 at pci0 dev 23 function 7 "VMware Virtual PCIE-PCIE" rev 0x01
pci26 at ppb25 bus 26
ppb26 at pci0 dev 24 function 0 "VMware Virtual PCIE-PCIE" rev 0x01
pci27 at ppb26 bus 27
ppb27 at pci0 dev 24 function 1 "VMware Virtual PCIE-PCIE" rev 0x01
pci28 at ppb27 bus 28
ppb28 at pci0 dev 24 function 2 "VMware Virtual PCIE-PCIE" rev 0x01
pci29 at ppb28 bus 29
ppb29 at pci0 dev 24 function 3 "VMware Virtual PCIE-PCIE" rev 0x01
pci30 at ppb29 bus 30
ppb30 at pci0 dev 24 function 4 "VMware Virtual PCIE-PCIE" rev 0x01
pci31 at ppb30 bus 31
ppb31 at pci0 dev 24 function 5 "VMware Virtual PCIE-PCIE" rev 0x01
pci32 at ppb31 bus 32
ppb32 at pci0 dev 24 function 6 "VMware Virtual PCIE-PCIE" rev 0x01
pci33 at ppb32 bus 33
ppb33 at pci0 dev 24 function 7 "VMware Virtual PCIE-PCIE" rev 0x01
pci34 at ppb33 bus 34
isa0 at piixpcib0
isadma0 at isa0
com0 at isa0 port 0x3f8/8 irq 4: ns16550a, 16 byte fifo
com1 at isa0 port 0x2f8/8 irq 3: ns16550a, 16 byte fifo
pckbc0 at isa0 port 0x60/5
pckbd0 at pckbc0 (kbd slot)
pckbc0: using irq 1 for kbd slot
wskbd0 at pckbd0: console keyboard, using wsdisplay0
pms0 at pckbc0 (aux slot)
pckbc0: using irq 12 for aux slot
wsmouse0 at pms0 mux 0
pcppi0 at isa0 port 0x61
spkr0 at pcppi0
lpt0 at isa0 port 0x378/4 irq 7
npx0 at isa0 port 0xf0/16: reported by CPUID; using exception 16
fdc0 at isa0 port 0x3f0/6 irq 6 drq 2
mtrr: Pentium Pro MTRR support
vscsi0 at root
scsibus2 at vscsi0: 256 targets
softraid0 at root
scsibus3 at softraid0: 256 targets
root on sd0a (d2028f36002b02ad.a) swap on sd0b dump on sd0b


Re: Performance problems with OpenBSD 4.9 under ESXi 5

James Shupe-3
In reply to this post by Gene-46
What's it take to get an actual dmesg around here? Just post the output
for us to look at, regardless of whether or not you think the "messages at
boot" are important. They're needed to troubleshoot any problem like
this.


Re: Performance problems with OpenBSD 4.9 under ESXi 5

Gene-46
When the initial dmesg question was asked ("dmesg?"), I didn't understand
that it was a request for the entire dmesg output.  I thought he was asking
if errors were showing up in dmesg.

I have included the full dmesg output below.

-Gene

On Wed, Oct 19, 2011 at 6:53 PM, James Shupe <[hidden email]> wrote:

> What's it take to get an actual dmesg around here? Just post the output
> for us to look at, regardless of whether or not you think the "messages at
> boot" are important. They're needed to troubleshoot any problem like
> this.
OpenBSD 4.9 (GENERIC) #477: Wed Mar  2 06:50:31 MST 2011

    [hidden email]:/usr/src/sys/arch/amd64/compile/GENERIC

real mem = 267321344 (254MB)

avail mem = 246403072 (234MB)

mainbus0 at root

bios0 at mainbus0: SMBIOS rev. 2.4 @ 0xe0010 (268 entries)

bios0: vendor Phoenix Technologies LTD version "6.00" date 01/07/2011

bios0: VMware, Inc. VMware Virtual Platform

acpi0 at bios0: rev 2

acpi0: sleep states S0 S1 S4 S5

acpi0: tables DSDT FACP BOOT APIC MCFG SRAT HPET

acpi0: wakeup devices PCI0(S3) USB_(S1) P2P0(S3) S1F0(S3) S2F0(S3) S3F0(S3) S4F0                                                                                        (S3) S5F0(S3) S6F0(S3) S7F0(S3) S8F0(S3) S9F0(S3) Z00Q(S3) Z00R(S3) Z00S(S3) Z00                                                                                        T(S3) Z00U(S3) Z00V(S3) Z00W(S3) Z00X(S3) Z00Y(S3) Z00Z(S3) Z010(S3) Z011(S3) Z0                                                                                        12(S3) Z013(S3) Z014(S3) Z015(S3) Z016(S3) Z017(S3) Z018(S3) Z019(S3) Z01A(S3) Z                                                                                        01B(S3) Z01C(S3) P2P1(S3) S1F0(S3) S2F0(S3) S3F0(S3) S4F0(S3) S5F0(S3) S6F0(S3)                                                                                         S7F0(S3) S8F0(S3) S9F0(S3) Z00Q(S3) Z00R(S3) Z00S(S3) Z00T(S3) Z00U(S3) Z00V(S3)                                                                     !
                     Z00W(S3) Z00X(S3) Z00Y(S3) Z00Z(S3) Z010(S3) Z011(S3) Z012(S3) Z013(S3) Z014(S3                                                                                        ) Z015(S3) Z016(S3) Z017(S3) Z018(S3) Z019(S3) Z01A(S3) Z01B(S3) Z01C(S3) P2P2(S                                                                                        3) S1F0(S3) S2F0(S3) S3F0(S3) S4F0(S3) S5F0(S3) S6F0(S3) S7F0(S3) S8F0(S3) S9F0(                                                                                        S3) Z00Q(S3) Z00R(S3) Z00S(S3) Z00T(S3) Z00U(S3) Z00V(S3) Z00W(S3) Z00X(S3) Z00Y                                                                                        (S3) Z00Z(S3) Z010(S3) Z011(S3) Z012(S3) Z013(S3) Z014(S3) Z015(S3) Z016(S3) Z01                                                                                        7(S3) Z018(S3) Z019(S3) Z01A(S3) Z01B(S3) Z01C(S3) P2P3(S3) S1F0(S3) S2F0(S3) S3                                                 !
                                        F0(S3) S4F0(S3) S5F0(S3) S6F0(S3) S7F0(S3) S8F0(S3) S9F0(S3) Z00Q(S3) Z00R(S3) Z                                                                                        00S(S3) Z00T(S3) Z00U(S3) Z00V(S3) Z00W(S3) Z00X(S3) Z00Y(S3) Z00Z(S3) Z010(S3)                                                                                         Z011(S3) Z012(S3) Z013(S3) Z014(S3) Z015(S3) Z016(S3) Z017(S3) Z018(S3) Z019(S3)                                                                                         Z01A(S3) Z01B(S3) Z01C(S3) PE40(S3) S1F0(S3) PE50(S3) S1F0(S3) PE60(S3) S1F0(S3                                                                                        ) PE70(S3) S1F0(S3) PE80(S3) S1F0(S3) PE90(S3) S1F0(S3) PEA0(S3) S1F0(S3) PEB0(S                                                                                        3) S1F0(S3) PEC0(S3) S1F0(S3) PED0(S3) S1F0(S3) PEE0(S3) S1F0(S3) PE41(S3) S1F0(                                                                                        S3) PE42(S3)!
  S1F0(S3) PE43(S3) S1F0(S3) PE44(S3) S1F0(S3) PE45(S3) S1F0(S3) PE46                                                                                        (S3) S1F0(S3) PE47(S3) S1F0(S3) PE51(S3) S1F0(S3) PE52(S3) S1F0(S3) PE53(S3) S1F                                                                                        0(S3) PE54(S3) S1F0(S3) PE55(S3) S1F0(S3) PE56(S3) S1F0(S3) PE57(S3) S1F0(S3) PE                                                                                        61(S3) S1F0(S3) PE62(S3) S1F0(S3) PE63(S3) S1F0(S3) PE64(S3) S1F0(S3) PE65(S3) S                                                                                        1F0(S3) PE66(S3) S1F0(S3) PE67(S3) S1F0(S3) PE71(S3) S1F0(S3) PE72(S3) S1F0(S3)                                                                                         PE73(S3) S1F0(S3) PE74(S3) S1F0(S3) PE75(S3) S1F0(S3) PE76(S3) S1F0(S3) PE77(S3)                                                                                !
          S1F0(S3) PE81(S3) S1F0(S3) PE82(S3) S1F0(S3) PE83(S3) S1F0(S3) PE84(S3) S1F0(S3                                                                                        ) PE85(S3) S1F0(S3) PE86(S3) S1F0(S3) PE87(S3) S1F0(S3) PE91(S3) S1F0(S3) PE92(S                                                                                        3) S1F0(S3) PE93(S3) S1F0(S3) PE94(S3) S1F0(S3) PE95(S3) S1F0(S3) PE96(S3) S1F0(                                                                                        S3) PE97(S3) S1F0(S3) PEA1(S3) S1F0(S3) PEA2(S3) S1F0(S3) PEA3(S3) S1F0(S3) PEA4                                                                                        (S3) S1F0(S3) PEA5(S3) S1F0(S3) PEA6(S3) S1F0(S3) PEA7(S3) S1F0(S3) PEB1(S3) S1F                                                                                        0(S3) PEB2(S3) S1F0(S3) PEB3(S3) S1F0(S3) PEB4(S3) S1F0(S3) PEB5(S3) S1F0(S3) PE                                                                                        B6(S3) S1F0(S3) PEB7(S3) S1F0(S3) SLPB(S4) !
 LID_(S4)

acpitimer0 at acpi0: 3579545 Hz, 24 bits

acpimadt0 at acpi0 addr 0xfee00000: PC-AT compat

cpu0 at mainbus0: apid 0 (boot processor)

cpu0: AMD Phenom(tm) II X6 1090T Processor, 3200.79 MHz

cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CF                                                                                        LUSH,MMX,FXSR,SSE,SSE2,SSE3,CX16,POPCNT,NXE,MMXX,FFXSR,LONG,3DNOW2,3DNOW

cpu0: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 64b/line 16-way L2 cache

cpu0: ITLB 32 4KB entries fully associative, 16 4MB entries fully associative

cpu0: DTLB 48 4KB entries fully associative, 48 4MB entries fully associative

cpu0: apic clock running at 66MHz

ioapic0 at mainbus0: apid 1 pa 0xfec00000, version 11, 24 pins

acpimcfg0 at acpi0 addr 0xe0000000, bus 0-255

acpihpet0 at acpi0: 14318179 Hz

acpiprt0 at acpi0: bus 0 (PCI0)

acpicpu0 at acpi0

acpibat0 at acpi0: BAT1 not present

acpibat1 at acpi0: BAT2 not present

acpiac0 at acpi0: AC unit online

acpibtn0 at acpi0: SLPB

acpibtn1 at acpi0: LID_

vmt0 at mainbus0

pci0 at mainbus0 bus 0

pchb0 at pci0 dev 0 function 0 "Intel 82443BX AGP" rev 0x01

ppb0 at pci0 dev 1 function 0 "Intel 82443BX AGP" rev 0x01

pci1 at ppb0 bus 1

pcib0 at pci0 dev 7 function 0 "Intel 82371AB PIIX4 ISA" rev 0x08

pciide0 at pci0 dev 7 function 1 "Intel 82371AB IDE" rev 0x01: DMA, channel 0 configured to compatibility, channel 1 configured to compatibility

pciide0: channel 0 disabled (no drives)

atapiscsi0 at pciide0 channel 1 drive 0

scsibus0 at atapiscsi0: 2 targets

cd0 at scsibus0 targ 0 lun 0: <NECVMWar, VMware IDE CDR10, 1.00> ATAPI 5/cdrom removable

cd0(pciide0:1:0): using PIO mode 4, Ultra-DMA mode 2

piixpm0 at pci0 dev 7 function 3 "Intel 82371AB Power" rev 0x08: SMBus disabled

"VMware Virtual Machine Communication Interface" rev 0x10 at pci0 dev 7 function 7 not configured

vga1 at pci0 dev 15 function 0 "VMware Virtual SVGA II" rev 0x00

wsdisplay0 at vga1 mux 1: console (80x25, vt100 emulation)

wsdisplay0: screen 1-5 added (80x25, vt100 emulation)

mpi0 at pci0 dev 16 function 0 "Symbios Logic 53c1030" rev 0x01: apic 1 int 17 (irq 11)

scsibus1 at mpi0: 16 targets, initiator 7

sd0 at scsibus1 targ 0 lun 0: <VMware, Virtual disk, 1.0> SCSI2 0/direct fixed

sd0: 1024MB, 512 bytes/sec, 2097152 sec total

mpi0: target 0 Sync at 160MHz width 16bit offset 127 QAS 1 DT 1 IU 1

ppb1 at pci0 dev 17 function 0 "VMware Virtual PCI-PCI" rev 0x02

pci2 at ppb1 bus 2

em0 at pci2 dev 0 function 0 "Intel PRO/1000MT (82545EM)" rev 0x01: apic 1 int 18 (irq 10), address 00:0c:29:02:47:ea

ppb2 at pci0 dev 21 function 0 "VMware Virtual PCIE-PCIE" rev 0x01

pci3 at ppb2 bus 3

ppb3 at pci0 dev 21 function 1 "VMware Virtual PCIE-PCIE" rev 0x01

pci4 at ppb3 bus 4

ppb4 at pci0 dev 21 function 2 "VMware Virtual PCIE-PCIE" rev 0x01

pci5 at ppb4 bus 5

ppb5 at pci0 dev 21 function 3 "VMware Virtual PCIE-PCIE" rev 0x01

pci6 at ppb5 bus 6

ppb6 at pci0 dev 21 function 4 "VMware Virtual PCIE-PCIE" rev 0x01

pci7 at ppb6 bus 7

ppb7 at pci0 dev 21 function 5 "VMware Virtual PCIE-PCIE" rev 0x01

pci8 at ppb7 bus 8

ppb8 at pci0 dev 21 function 6 "VMware Virtual PCIE-PCIE" rev 0x01

pci9 at ppb8 bus 9

ppb9 at pci0 dev 21 function 7 "VMware Virtual PCIE-PCIE" rev 0x01

pci10 at ppb9 bus 10

ppb10 at pci0 dev 22 function 0 "VMware Virtual PCIE-PCIE" rev 0x01

pci11 at ppb10 bus 11

ppb11 at pci0 dev 22 function 1 "VMware Virtual PCIE-PCIE" rev 0x01

pci12 at ppb11 bus 12

ppb12 at pci0 dev 22 function 2 "VMware Virtual PCIE-PCIE" rev 0x01

pci13 at ppb12 bus 13

ppb13 at pci0 dev 22 function 3 "VMware Virtual PCIE-PCIE" rev 0x01

pci14 at ppb13 bus 14

ppb14 at pci0 dev 22 function 4 "VMware Virtual PCIE-PCIE" rev 0x01

pci15 at ppb14 bus 15

ppb15 at pci0 dev 22 function 5 "VMware Virtual PCIE-PCIE" rev 0x01

pci16 at ppb15 bus 16

ppb16 at pci0 dev 22 function 6 "VMware Virtual PCIE-PCIE" rev 0x01

pci17 at ppb16 bus 17

ppb17 at pci0 dev 22 function 7 "VMware Virtual PCIE-PCIE" rev 0x01

pci18 at ppb17 bus 18

ppb18 at pci0 dev 23 function 0 "VMware Virtual PCIE-PCIE" rev 0x01

pci19 at ppb18 bus 19

ppb19 at pci0 dev 23 function 1 "VMware Virtual PCIE-PCIE" rev 0x01

pci20 at ppb19 bus 20

ppb20 at pci0 dev 23 function 2 "VMware Virtual PCIE-PCIE" rev 0x01

pci21 at ppb20 bus 21

ppb21 at pci0 dev 23 function 3 "VMware Virtual PCIE-PCIE" rev 0x01

pci22 at ppb21 bus 22

ppb22 at pci0 dev 23 function 4 "VMware Virtual PCIE-PCIE" rev 0x01

pci23 at ppb22 bus 23

ppb23 at pci0 dev 23 function 5 "VMware Virtual PCIE-PCIE" rev 0x01

pci24 at ppb23 bus 24

ppb24 at pci0 dev 23 function 6 "VMware Virtual PCIE-PCIE" rev 0x01

pci25 at ppb24 bus 25

ppb25 at pci0 dev 23 function 7 "VMware Virtual PCIE-PCIE" rev 0x01

pci26 at ppb25 bus 26

ppb26 at pci0 dev 24 function 0 "VMware Virtual PCIE-PCIE" rev 0x01

pci27 at ppb26 bus 27

ppb27 at pci0 dev 24 function 1 "VMware Virtual PCIE-PCIE" rev 0x01

pci28 at ppb27 bus 28

ppb28 at pci0 dev 24 function 2 "VMware Virtual PCIE-PCIE" rev 0x01

pci29 at ppb28 bus 29

ppb29 at pci0 dev 24 function 3 "VMware Virtual PCIE-PCIE" rev 0x01

pci30 at ppb29 bus 30

ppb30 at pci0 dev 24 function 4 "VMware Virtual PCIE-PCIE" rev 0x01

pci31 at ppb30 bus 31

ppb31 at pci0 dev 24 function 5 "VMware Virtual PCIE-PCIE" rev 0x01

pci32 at ppb31 bus 32

ppb32 at pci0 dev 24 function 6 "VMware Virtual PCIE-PCIE" rev 0x01

pci33 at ppb32 bus 33

ppb33 at pci0 dev 24 function 7 "VMware Virtual PCIE-PCIE" rev 0x01

pci34 at ppb33 bus 34

isa0 at pcib0

isadma0 at isa0

com0 at isa0 port 0x3f8/8 irq 4: ns16550a, 16 byte fifo

com1 at isa0 port 0x2f8/8 irq 3: ns16550a, 16 byte fifo

pckbc0 at isa0 port 0x60/5

pckbd0 at pckbc0 (kbd slot)

pckbc0: using irq 1 for kbd slot

wskbd0 at pckbd0: console keyboard, using wsdisplay0

pms0 at pckbc0 (aux slot)

pckbc0: using irq 12 for aux slot

wsmouse0 at pms0 mux 0

pcppi0 at isa0 port 0x61

spkr0 at pcppi0

lpt0 at isa0 port 0x378/4 irq 7

fdc0 at isa0 port 0x3f0/6 irq 6 drq 2

fd0 at fdc0 drive 0: 1.44MB 80 cyl, 2 head, 18 sec

mtrr: Pentium Pro MTRR support

vscsi0 at root

scsibus2 at vscsi0: 256 targets

softraid0 at root

root on sd0a swap on sd0b dump on sd0b



Re: Performance problems with OpenBSD 4.9 under ESXi 5

Gene-46
In reply to this post by Edho P Arief-2
I haven't been able to reproduce the problem since this morning.
Nothing has been changed on the vmhosts so I'm at a bit of a loss at
the moment.

When the issue reoccurs I'll try everything that has been suggested today.

Thank you very much for your help everyone.

-Gene

On Wed, Oct 19, 2011 at 5:33 PM, Edho Arief <[hidden email]> wrote:
> On Wed, Oct 19, 2011 at 8:41 PM, Gene <[hidden email]> wrote:
>> I'm using amd64.  I'll try i386 later today to see if the issue occurs
>> again.  Another person replied to me saying i386 works fine for him in
>> ESXi 5.
>>
>
> I'm also running 4.9 i386 in a VMware and it sure is fine:
>
> [edho@tomoka ~]$ uptime
>  7:33AM  up 80 days,  8:51, 1 user, load averages: 0.23, 0.26, 0.27
> [edho@tomoka ~]$ uname -a
> OpenBSD tomoka.myconan.net 4.9 GENERIC.MP#794 i386
> [edho@tomoka ~]$ dmesg | grep vm
> vmt0 at mainbus0
> vmt0 at mainbus0
>
>
>
> --
> O< ascii ribbon campaign - stop html mail - www.asciiribbon.org


Re: Performance problems with OpenBSD 4.9 under ESXi 5

Stuart Henderson
In reply to this post by Gene-46
On 2011/10/19 16:19, Gene wrote:
>
>     - Might be worth giving -current a spin (or 5.0 when it's
>     available - release isn't far off - note that people who pre-order
>     CDs often receive them before the official release date ;-)
>
> Does 5.0 have VM specific features in it?

No but there have been changes to various parts of the OS which
could conceivably have an effect.

>     > They perform terribly.  The load average hovers around 1.5 on all
>     > of these VMs although the CPU shows as being idle.

Oh what does vmstat -i say?


Re: Performance problems with OpenBSD 4.9 under ESXi 5

Gene-46
In reply to this post by Gene-46
This is just an update, I've still got to try everything that was
suggested before.

This issue is finally occurring again, and I have been able to collect
more information about it:

# uptime
11:46AM  up 3 days, 22:50, 1 user, load averages: 1.33, 1.12, 1.10

# ps aux
USER       PID %CPU %MEM   VSZ   RSS TT  STAT  STARTED       TIME COMMAND
root         1  0.0  0.2   364   376 ??  Is    Wed12PM    0:00.09 /sbin/init
root     17473  0.0  0.3   412   812 ??  Is    Wed12PM    0:00.09
syslogd: [priv] (syslogd)
_syslogd  4944  0.0  0.3   420   860 ??  S     Wed12PM    1:59.70
syslogd -a /var/www/dev/log -a /var/empty/dev/log
root     17203  0.0  0.2   572   464 ??  Is    Wed12PM    0:00.01
pflogd: [priv] (pflogd)
_pflogd  25836  0.0  0.2   636   384 ??  S     Wed12PM    1:18.70
pflogd: [running] -s 160 -i pflog0 -f /var/log/pflog (pflogd)
root     20453  0.0  0.4   496  1020 ??  Is    Wed12PM    0:02.17
ntpd: [priv] (ntpd)
_ntp     27033  0.0  0.4   548  1092 ??  S     Wed12PM    0:36.73
ntpd: ntp engine (ntpd)
_ntp     30318  0.0  0.4   676  1008 ??  I     Wed12PM    0:00.02
ntpd: dns engine (ntpd)
root     12410  0.0  0.5   616  1384 ??  Is    Wed12PM    0:00.02 /usr/sbin/sshd
root     18650  0.0  0.3   412   832 ??  Is    Wed12PM    0:00.06 inetd
root     13652  0.0  0.4   668   912 ??  Is    Wed12PM    0:04.15 cron
root     12191  0.0  0.8  1216  2116 ??  Ss    Wed12PM    1:36.36
sendmail: accepting connections (sendmail)
root     18822  0.0  1.2  3452  3084 ??  Is    11:22AM    0:00.13
sshd: gene [priv] (sshd)
gene     27682  0.3  0.9  3420  2312 ??  S     11:22AM    0:00.55
sshd: gene@ttyp0 (sshd)
gene     18431  0.0  0.2   616   492 p0  Ss    11:22AM    0:00.14 -ksh (ksh)
root     23079  0.1  0.2   692   536 p0  S     11:46AM    0:00.07 -ksh (ksh)
root     19366  0.0  0.1   516   328 p0  R+    11:47AM    0:00.00 ps -aux
root     17451  0.0  0.3   280   864 C0  Is+   Wed12PM    0:00.02
/usr/libexec/getty std.9600 ttyC0
root     23962  0.0  0.3   324   864 C1  Is+   Wed12PM    0:00.01
/usr/libexec/getty std.9600 ttyC1
root      2571  0.0  0.3   272   860 C2  Is+   Wed12PM    0:00.01
/usr/libexec/getty std.9600 ttyC2
root      9191  0.0  0.3   296   864 C3  Is+   Wed12PM    0:00.02
/usr/libexec/getty std.9600 ttyC3
root      2812  0.0  0.3   416   868 C5  Is+   Wed12PM    0:00.01
/usr/libexec/getty std.9600 ttyC5

# vmstat -i
interrupt                       total     rate
irq0/clock                   34043772       99
irq97/mpi0                     772066        2
irq112/em0                      96237        0
Total                        34912075      102

# systat
   1 users    Load 1.10 1.07 1.08 PAUSED               Sun Oct 23 11:46:02 2011

            memory totals (in KB)            PAGING   SWAPPING     Interrupts
           real   virtual     free           in  out   in  out      105 total
Active    12420     12420   185072   ops                            100 clock
All       55712     55712   447212   pages                            4 mpi0
                                                                      1 em0
Proc:r  d  s  w    Csw   Trp   Sys   Int   Sof  Flt       forks
           6        21    17    88     4   102   21       fkppw
                                                          fksvm
   0.0%Int   0.2%Sys   0.4%Usr   0.0%Nic  99.4%Idle       pwait
|    |    |    |    |    |    |    |    |    |    |     2 relck
                                                        2 rlkok
                                                          noram
Namei         Sys-cache    Proc-cache    No-cache         ndcpy
    Calls     hits    %    hits     %    miss   %         fltcp
       14       14  100                                 2 zfod
                                                          cow
Disks   cd0   sd0   fd0                              2006 fmin
seeks                                                2674 ftarg
xfers           4                                         itarg
speed         67K                                       1 wired
  sec         0.0                                         pdfre
                                                          pdscn
                                                          pzidle
                                                       10 kmapent

# dmesg | tail
vmware: sending length failed, eax=00000000, ecx=00000000
vmt0: failed to send TCLO outgoing ping
vmware: sending length failed, eax=00000000, ecx=00000000
vmt0: failed to send TCLO outgoing ping
vmware: sending length failed, eax=00000000, ecx=00000000
vmt0: failed to send TCLO outgoing ping
vmware: sending length failed, eax=00000000, ecx=00000000
vmt0: failed to send TCLO outgoing ping
vmware: sending length failed, eax=00000000, ecx=00000000
vmt0: failed to send TCLO outgoing ping


My /var/log/messages* files have that pair of error messages in them
over 16,000 times.
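
A count like that can be confirmed with a grep over the rotated logs; the snippet below demonstrates the idea against a synthetic log file (the real check would simply be `grep -c 'failed to send TCLO' /var/log/messages*` — filenames assumed):

```shell
#!/bin/sh
# Build a small synthetic log, then count how often the vmt0 error
# appears in it, the same way you would count it in /var/log/messages*.
log=$(mktemp)
for i in 1 2 3; do
    echo 'vmt0: failed to send TCLO outgoing ping' >> "$log"
done
grep -c 'failed to send TCLO' "$log"    # prints the occurrence count: 3
rm -f "$log"
```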

I will go through and try what has been suggested, starting with
changing the guest OS type.  Unfortunately it appears it can be days
apart when this problem occurs.  I'll send an update when I have
something more concrete.

If anyone would like to try recreating this problem on their ESXi host
I'll make a .tar.gz of this vm guest for you to download.

Thanks again.

-Gene


On Wed, Oct 19, 2011 at 8:23 PM, Gene <[hidden email]> wrote:
> I haven't been able to reproduce the problem since this morning.
> Nothing has been changed on the vmhosts so I'm at a bit of a loss at
> the moment.
>
> When the issue reoccurs I'll try everything that has been suggested today.
>
> Thank you very much for your help everyone.
>
> -Gene


Re: Performance problems with OpenBSD 4.9 under ESXi 5

C. L. Martinez
On 10/23/2011 09:10 PM, Gene wrote:


It is really strange ... I have two OpenBSD 4.9 VMs running under ESXi 5
without problems (one is i386 and the other amd64), but with 768 MB RAM in
each one, using e1000 for the nic interfaces and LSI Logic Parallel as the
SCSI controller, with no issues so far.

Have you tried changing the vic interface to em? And what SCSI controller
do you use in this VM?

And a very important point: what type of storage do you use for this
ESXi 5 server: local, NFS, iSCSI? If you use a local hard disk, it is
highly recommended that you use a dedicated storage controller such as an
HP Smart Array, Dell PERC, etc.

For example: on an HP ML115 G5 with an MCP55 SATA controller, disk
performance is horrible. With that ESXi 5 server I use another box running
RHEL6 as an iSCSI server, and everything works very well.

Bye.



--
CL Martinez
carlopmart {at} gmail {d0t} com


Re: Performance problems with OpenBSD 4.9 under ESXi 5

Gene-46
In reply to this post by Gene-46
This problem appears to be resolved.  By changing the guest OS type
from "FreeBSD (64-bit)" to "Other (64-bit)", these VM guests perform
much better.

I found out I could easily duplicate the problem with the following command:

find / -type f -exec grep -i moo {} \;

After ten or so minutes, dmesg would be flooded with the "vmware:
sending length failed" messages.  Looking at the ESXi system
performance, that VM guest would have its core pegged.

After changing the guest OS type, I ran that find repeatedly in a loop
for 30 minutes, and the problem didn't come back.  I switched back and
forth between the OS types a couple of times to confirm my findings.
With the fix in place, the CPU utilisation for that VM guest's core did
not go above 75%.
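
The repeated run Gene describes can be scripted as a loop; this is a sketch, not the exact command used — it bounds the pass count and scans /tmp instead of /, on the assumption that any sustained whole-tree file scan exercises the same code path (the original ran `find / ...` open-ended):

```shell
#!/bin/sh
# Repeatedly scan a directory tree and grep every file, approximating
# the workload that triggered the "sending length failed" flood.
passes=3
i=0
while [ "$i" -lt "$passes" ]; do
    # `-exec ... +` batches files per grep invocation, so each pass
    # still touches every file but forks far fewer processes.
    find /tmp -type f -exec grep -i moo {} + >/dev/null 2>&1
    i=$((i+1))
done
echo "completed $i passes"
```

On the affected guests, watching `dmesg | tail` while this runs should show the vmt0 errors within minutes if the bug is present.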

Once again, thank you for your help everyone.

-Gene

On Sun, Oct 23, 2011 at 12:10 PM, Gene <[hidden email]> wrote:



Re: Performance problems with OpenBSD 4.9 under ESXi 5

Gene-46
I was wrong: just changing the guest OS type did not fix my problem.
The morning after sending that email I found the CPU pegged again.

I ended up installing the i386 version of 4.9 and used "FreeBSD (32-bit)"
as the guest OS type.  These VMs have been running for four days
without a problem.  If it occurs again, I'll try the other suggestions
provided here.

-Gene

On Sun, Oct 23, 2011 at 10:09 PM, Gene <[hidden email]> wrote:
>> apart when this problem occurs.  I'll send an update when I have
>> something more concrete.
>>
>> If anyone would like to try recreating this problem on their ESXi host
>> I'll make a .tar.gz of this vm guest for you to download.
>>
>> Thanks again.
>>
>> -Gene
>>
>>
>> On Wed, Oct 19, 2011 at 8:23 PM, Gene <[hidden email]> wrote:
>>> I haven't been able to reproduce the problem since this morning.
>>> Nothing has been changed on the vmhosts so I'm at a bit of a loss at
>>> the moment.
>>>
>>> When the issue reoccurs I'll try everything that has been suggested
today.
>>>
>>> Thank you very much for your help everyone.
>>>
>>> -Gene
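A quick way to quantify the vmt0/TCLO error flood quoted above is to count the matching lines in the logs. A minimal sketch, assuming the message text from the dmesg excerpt and a standard syslog location; the function name and sample file are illustrative:

```shell
#!/bin/sh
# Count vmt0 TCLO errors in a log file. The pattern matches the
# "vmt0: failed to send TCLO outgoing ping" lines quoted above.
count_tclo_errors() {
  # $1 = path to a messages file (e.g. /var/log/messages)
  grep -c 'failed to send TCLO outgoing ping' "$1"
}

# Self-contained demo against a small sample log:
sample=$(mktemp)
cat > "$sample" <<'EOF'
vmware: sending length failed, eax=00000000, ecx=00000000
vmt0: failed to send TCLO outgoing ping
vmware: sending length failed, eax=00000000, ecx=00000000
vmt0: failed to send TCLO outgoing ping
EOF
count_tclo_errors "$sample"   # prints 2
rm -f "$sample"
```

Run against each rotated file (`/var/log/messages*`) to reproduce the 16,000+ count Gene reports.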


Re: Performance problems with OpenBSD 4.9 under ESXi 5

Tyler Morgan
Hi, I set up four 4.9-RELEASE installs under ESXi 5.0.0:

amd64 as "Other"
amd64 as "FreeBSD"
i386 as "Other"
i386 as "FreeBSD"

All four got 512 MB of RAM, unlimited use of the eight available CPU
cores, and completely default installs other than stress from ports.

After installing I ran "stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M
--hdd 4 --hdd-bytes 128M --timeout 60s" in an infinite loop for a few
hours. Then I let them sit for a couple of days, and then ran the stress
loops again for a few hours at three days of uptime. Using ESXi's
standard host monitoring, I verified that stress was pegging 95%+ of all
CPUs, driving the RAID array at about 75% of its read/write capacity,
and using as much RAM as I'd let it have.
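The burn-in described above can be scripted. A sketch of the loop, assuming the `stress` utility from ports is installed; the parameters are copied from the post, and the wrapper function and bounded-iteration option are illustrative additions:

```shell
#!/bin/sh
# Repeat the stress run from the post. With no argument (or 0) the
# loop runs until interrupted, matching the original infinite loop;
# a positive count bounds the run.
burn_in() {
  n=${1:-0}   # 0 = run until interrupted
  i=0
  while [ "$n" -eq 0 ] || [ "$i" -lt "$n" ]; do
    # If stress is missing or fails, stop rather than spin.
    stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M \
           --hdd 4 --hdd-bytes 128M --timeout 60s || break
    i=$((i + 1))
  done
  echo "completed $i passes"
}
```

For example, `burn_in 180` runs roughly three hours of one-minute stress passes, comparable to the sessions described above.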

At the end of testing I have no unusual messages in dmesg, a normal
load of about 0.5 when idle, and no noticeable performance issues on
any of the four virtual machines.

The ESXi host is a 3.5-year-old SuperMicro server from Penguin Linux
with 2x Xeon X5365s, 32 GB of ECC DDR3, and an Adaptec RAID controller.
I can get a real dmesg out of the ESXi host if anyone wants it; someone
already provided a dmesg of 4.9-RELEASE under VMware, but I can provide
one as well if desired.

I will leave these VMs around for at least a couple of weeks, so feel
free to ask if you would like me to do anything to help troubleshoot
the problem you're having.

It seems to me that running OpenBSD in virtual environments does not
get a lot of attention (largely for obvious security reasons, I'd
guess), but ESXi is an important part of the systems I manage, and I'm
happy to help as best I can with anything VMware-related.

On 10/28/2011 9:15 PM, Gene wrote:

> I was wrong: just changing the guest OS type did not fix my problem.
> The morning following this email I found the CPU pegged again.
>
> I ended up installing the i386 version of 4.9 and used FreeBSD 32-bit
> as the guest OS type.  These VMs have been running for four days
> without a problem.  If it occurs again I'll try the other suggestions
> provided here.
>
> -Gene
>
--
Tyler Morgan
Systems Administrator
Trade Tech Inc.
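Since the slowdown in the original report sets in 15-30 minutes after boot while the CPU shows idle, a periodic load sampler helps pin down exactly when the transition happens. A minimal sketch; the interval, log path, and function names are illustrative choices, not from the thread:

```shell
#!/bin/sh
# Append a timestamped load-average sample to a log at a fixed
# interval, so the moment the load climbs while the CPU stays idle
# is captured for later comparison with ESXi's host monitoring.
INTERVAL=${INTERVAL:-60}              # seconds between samples
LOG=${LOG:-/tmp/loadwatch.log}        # where samples accumulate

sample_once() {
  # uptime reports the 1/5/15-minute load averages on one line
  printf '%s %s\n' "$(date '+%Y-%m-%dT%H:%M:%S')" "$(uptime)"
}

watch_load() {
  # $1 = number of samples to take (use a large count, or run
  # the script under nohup for an open-ended watch)
  i=0
  while [ "$i" -lt "$1" ]; do
    sample_once >> "$LOG"
    i=$((i + 1))
    [ "$i" -lt "$1" ] && sleep "$INTERVAL"
  done
}
```

With one sample per minute, the first 30-60 entries after boot should bracket the point where the load average jumps from ~0.5 to ~1.5.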
