vmd: spurious VM restarts

vmd: spurious VM restarts

Thomas L.
Hi,

I'm running OpenBSD 6.8 as a hypervisor with multiple OpenBSD VMs.
Regularly, all of the VMs get restarted, not at the same time but
clustered together. The indication that this has happened is reduced
uptime on the VMs, some services that fail to come back up, and the
following logs:

# grep vmd /var/log/daemon
Apr  1 18:10:35 golem vmd[31367]: wiki: started vm 12 successfully, tty /dev/ttyp0
Apr  6 13:24:52 golem vmd[31367]: matrix: started vm 13 successfully, tty /dev/ttypb
Apr  6 13:25:55 golem vmd[31367]: matrix: started vm 13 successfully, tty /dev/ttypb
Apr  6 13:26:45 golem vmd[18933]: vmd: LSR UART write 0x8203d260 unsupported
Apr  6 13:26:45 golem vmd[31367]: ticketfrei: started vm 5 successfully, tty /dev/ttyp5
Apr  6 14:22:34 golem vmd[31367]: www: started vm 4 successfully, tty /dev/ttyp4
Apr  6 14:33:54 golem vmd[31367]: kibicara: started vm 8 successfully, tty /dev/ttyp8
Apr  6 14:35:02 golem vmd[31367]: vpn: started vm 3 successfully, tty /dev/ttyp3
Apr  6 14:36:38 golem vmd[31367]: relay: started vm 1 successfully, tty /dev/ttyp1
Apr  6 14:37:51 golem vmd[31367]: schleuder: started vm 2 successfully, tty /dev/ttyp2
Apr  6 14:40:34 golem vmd[31367]: mumble: started vm 6 successfully, tty /dev/ttyp6
Apr  6 14:41:58 golem vmd[31367]: minecraft: started vm 9 successfully, tty /dev/ttyp9

The restarts seem to be non-graceful, since the matrix vm needed a manual
fsck on /var. Going back over the logs, this seems to happen about every
month (not all restarts are this phenomenon, but Mar 8/10 and Feb
17/20/22 look like it):

# zgrep vmd /var/log/daemon.0.gz
Mar  8 19:43:07 golem vmd[31367]: wiki: started vm 12 successfully, tty /dev/ttyp0
Mar  8 19:43:37 golem vmd[31367]: ticketfrei: started vm 5 successfully, tty /dev/ttyp5
Mar 10 09:21:20 golem vmd[31367]: www: started vm 4 successfully, tty /dev/ttyp4
Mar 10 09:24:13 golem vmd[31367]: kibicara: started vm 8 successfully, tty /dev/ttyp8
Mar 10 09:26:13 golem vmd[31367]: vpn: started vm 3 successfully, tty /dev/ttyp3
Mar 10 09:28:40 golem vmd[31367]: gitea: started vm 7 successfully, tty /dev/ttyp7
Mar 10 09:29:01 golem vmd[31367]: relay: started vm 1 successfully, tty /dev/ttyp1
Mar 10 09:31:29 golem vmd[31367]: schleuder: started vm 2 successfully, tty /dev/ttyp2
Mar 10 09:34:02 golem vmd[31367]: mumble: started vm 6 successfully, tty /dev/ttyp6
Mar 10 09:35:44 golem vmd[31367]: minecraft: started vm 9 successfully, tty /dev/ttyp9
Mar 13 01:46:37 golem vmd[31367]: gitea: started vm 7 successfully, tty /dev/ttyp7
golem# zgrep vmd /var/log/daemon.1.gz
Feb 17 21:18:45 golem vmd[31367]: matrix: started vm 13 successfully, tty /dev/ttypc
Feb 20 08:32:28 golem vmd[31367]: wiki: started vm 12 successfully, tty /dev/ttyp0
Feb 20 08:33:14 golem vmd[31367]: ticketfrei: started vm 5 successfully, tty /dev/ttyp5
Feb 20 08:35:20 golem vmd[31367]: www: started vm 4 successfully, tty /dev/ttyp4
Feb 20 11:09:01 golem vmd[31367]: kibicara: started vm 8 successfully, tty /dev/ttyp8
Feb 20 11:10:18 golem vmd[31367]: vpn: started vm 3 successfully, tty /dev/ttyp3
Feb 20 11:11:52 golem vmd[31367]: gitea: started vm 7 successfully, tty /dev/ttyp7
Feb 22 00:51:03 golem vmd[31367]: relay: started vm 1 successfully, tty /dev/ttyp1
Feb 22 00:52:44 golem vmd[31367]: schleuder: started vm 2 successfully, tty /dev/ttyp2
Feb 22 00:53:59 golem vmd[31367]: mumble: started vm 6 successfully, tty /dev/ttyp6
Feb 22 00:54:45 golem vmd[31367]: minecraft: started vm 9 successfully, tty /dev/ttyp9
Feb 24 23:01:50 golem vmd[31367]: vmd_sighdlr: reload requested with SIGHUP
Feb 24 23:01:51 golem vmd[31367]: test: started vm 10 successfully, tty /dev/ttypa
Feb 24 23:01:51 golem vmd[52735]: test: unsupported refcount size
Feb 24 23:06:27 golem vmd[31367]: vmd_sighdlr: reload requested with SIGHUP
Feb 24 23:06:27 golem vmd[1230]: test: unsupported refcount size
Feb 24 23:06:27 golem vmd[31367]: matrix: started vm 13 successfully, tty /dev/ttypb
Feb 24 23:06:27 golem vmd[31367]: test: started vm 10 successfully, tty /dev/ttypc
Feb 24 23:10:20 golem vmd[31367]: matrix: started vm 13 successfully, tty /dev/ttypb

The hypervisor's vm.conf and dmesg are below. How would I go
about debugging this?
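The clustering described above can also be checked mechanically rather than by eyeballing uptimes. A minimal sketch (the helper name and the 48-hour gap threshold are illustrative, not from this thread) that groups the "started vm" syslog lines into restart clusters by timestamp:

```python
from datetime import datetime, timedelta

def restart_clusters(lines, gap=timedelta(hours=48)):
    """Group "started vm" syslog lines into clusters of restarts
    whose consecutive timestamps are at most `gap` apart."""
    events = []
    for line in lines:
        if "started vm" not in line:
            continue  # skip unrelated vmd messages (e.g. the UART warning)
        # syslog timestamps ("Apr  6 13:24:52") carry no year; assume one
        ts = datetime.strptime(line[:15], "%b %d %H:%M:%S").replace(year=2021)
        events.append(ts)
    events.sort()
    clusters = []
    for ts in events:
        if clusters and ts - clusters[-1][-1] <= gap:
            clusters[-1].append(ts)
        else:
            clusters.append([ts])
    return clusters
```

Run over the April excerpt above, this separates the lone wiki restart on Apr 1 from the cluster of restarts on Apr 6.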

Kind regards,

Thomas


switch internal {
        interface bridge0
        locked lladdr
        group internal
}


vm relay {
        disk /data/vmd/relay.qcow2
        interface {
                switch internal
                lladdr fe:e1:ba:d0:00:03
        }
}

vm schleuder {
        disk /data/vmd/schleuder.qcow2
        interface {
                switch internal
                lladdr fe:e1:ba:d0:00:04
        }
}

vm vpn {
        disk /data/vmd/vpn.qcow2
        interface {
                switch internal
                lladdr fe:e1:ba:d0:00:05
        }
}

vm www {
        disk /data/vmd/www.qcow2
        interface {
                switch internal
                lladdr fe:e1:ba:d0:00:06
        }
}

vm ticketfrei {
        disk /data/vmd/ticketfrei.qcow2
        interface {
                switch internal
                lladdr fe:e1:ba:d0:00:07
        }
}

vm mumble {
        disk /data/vmd/mumble.qcow2
        interface {
                switch internal
                lladdr fe:e1:ba:d0:00:08
        }
}

vm gitea {
        disk /data/vmd/gitea.qcow2
        interface {
                switch internal
                lladdr fe:e1:ba:d0:00:0a
        }
}

vm kibicara {
        disk /data/vmd/kibicara.qcow2
        interface {
                switch internal
                lladdr fe:e1:ba:d0:00:0b
        }
}

vm minecraft {
        memory 8G
        disk /data/vmd/minecraft.qcow2
        interface {
                switch internal
                lladdr fe:e1:ba:d0:00:0c
        }
}

vm wiki {
        disk /data/vmd/wiki.qcow2
        interface {
                switch internal
                lladdr fe:e1:ba:d0:00:0d
        }
}

vm matrix {
        disk /data/vmd/matrix.qcow2
        interface {
                switch internal
                lladdr fe:e1:ba:d0:00:0e
        }
}

vm test {
        disk /data/vmd/test.qcow2
        interface {
                switch internal
                lladdr fe:e1:ba:d0:00:17
        }
}

OpenBSD 6.8 (GENERIC.MP) #1: Tue Nov  3 09:06:04 MST 2020
    [hidden email]:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 17070161920 (16279MB)
avail mem = 16537780224 (15771MB)
random: good seed from bootblocks
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.7 @ 0xeb8c0 (109 entries)
bios0: vendor American Megatrends Inc. version "9010" date 07/06/2018
bios0: ASUSTeK COMPUTER INC. P8H77-M PRO
acpi0 at bios0: ACPI 5.0
acpi0: sleep states S0 S3 S4 S5
acpi0: tables DSDT FACP APIC FPDT MCFG HPET SSDT DMAR SSDT SSDT
acpi0: wakeup devices PS2K(S4) P0P1(S4) RP01(S4) PXSX(S4) RP02(S4) PXSX(S4) RP03(S4) PXSX(S4) RP04(S4) PXSX(S4) RP06(S4) PXSX(S4) RP07(S4) PXSX(S4) RP08(S4) PXSX(S4) [...]
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimadt0 at acpi0 addr 0xfee00000: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz, 3400.54 MHz, 06-3a-09
cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,RDTSCP,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,ERMS,MD_CLEAR,IBRS,IBPB,STIBP,L1DF,SSBD,SENSOR,ARAT,XSAVEOPT,MELTDOWN
cpu0: 256KB 64b/line 8-way L2 cache
cpu0: smt 0, core 0, package 0
mtrr: Pentium Pro MTRR support, 10 var ranges, 88 fixed ranges
cpu0: apic clock running at 100MHz
cpu0: mwait min=64, max=64, C-substates=0.2.1.1, IBE
cpu1 at mainbus0: apid 2 (application processor)
cpu1: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz, 3400.01 MHz, 06-3a-09
cpu1: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,RDTSCP,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,ERMS,MD_CLEAR,IBRS,IBPB,STIBP,L1DF,SSBD,SENSOR,ARAT,XSAVEOPT,MELTDOWN
cpu1: 256KB 64b/line 8-way L2 cache
cpu1: smt 0, core 1, package 0
cpu2 at mainbus0: apid 4 (application processor)
cpu2: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz, 3400.01 MHz, 06-3a-09
cpu2: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,RDTSCP,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,ERMS,MD_CLEAR,IBRS,IBPB,STIBP,L1DF,SSBD,SENSOR,ARAT,XSAVEOPT,MELTDOWN
cpu2: 256KB 64b/line 8-way L2 cache
cpu2: smt 0, core 2, package 0
cpu3 at mainbus0: apid 6 (application processor)
cpu3: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz, 3400.02 MHz, 06-3a-09
cpu3: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,RDTSCP,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,ERMS,MD_CLEAR,IBRS,IBPB,STIBP,L1DF,SSBD,SENSOR,ARAT,XSAVEOPT,MELTDOWN
cpu3: 256KB 64b/line 8-way L2 cache
cpu3: smt 0, core 3, package 0
cpu4 at mainbus0: apid 1 (application processor)
cpu4: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz, 3400.01 MHz, 06-3a-09
cpu4: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,RDTSCP,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,ERMS,MD_CLEAR,IBRS,IBPB,STIBP,L1DF,SSBD,SENSOR,ARAT,XSAVEOPT,MELTDOWN
cpu4: 256KB 64b/line 8-way L2 cache
cpu4: smt 1, core 0, package 0
cpu5 at mainbus0: apid 3 (application processor)
cpu5: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz, 3400.01 MHz, 06-3a-09
cpu5: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,RDTSCP,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,ERMS,MD_CLEAR,IBRS,IBPB,STIBP,L1DF,SSBD,SENSOR,ARAT,XSAVEOPT,MELTDOWN
cpu5: 256KB 64b/line 8-way L2 cache
cpu5: smt 1, core 1, package 0
cpu6 at mainbus0: apid 5 (application processor)
cpu6: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz, 3400.01 MHz, 06-3a-09
cpu6: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,RDTSCP,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,ERMS,MD_CLEAR,IBRS,IBPB,STIBP,L1DF,SSBD,SENSOR,ARAT,XSAVEOPT,MELTDOWN
cpu6: 256KB 64b/line 8-way L2 cache
cpu6: smt 1, core 2, package 0
cpu7 at mainbus0: apid 7 (application processor)
cpu7: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz, 3400.01 MHz, 06-3a-09
cpu7: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,RDTSCP,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,ERMS,MD_CLEAR,IBRS,IBPB,STIBP,L1DF,SSBD,SENSOR,ARAT,XSAVEOPT,MELTDOWN
cpu7: 256KB 64b/line 8-way L2 cache
cpu7: smt 1, core 3, package 0
ioapic0 at mainbus0: apid 2 pa 0xfec00000, version 20, 24 pins
acpimcfg0 at acpi0
acpimcfg0: addr 0xf8000000, bus 0-63
acpihpet0 at acpi0: 14318179 Hz
acpiprt0 at acpi0: bus 0 (PCI0)
acpiprt1 at acpi0: bus -1 (P0P1)
acpiprt2 at acpi0: bus 2 (RP01)
acpiprt3 at acpi0: bus -1 (RP02)
acpiprt4 at acpi0: bus -1 (RP03)
acpiprt5 at acpi0: bus -1 (RP04)
acpiprt6 at acpi0: bus -1 (RP06)
acpiprt7 at acpi0: bus -1 (RP07)
acpiprt8 at acpi0: bus -1 (RP08)
acpiprt9 at acpi0: bus 1 (PEG0)
acpiprt10 at acpi0: bus -1 (PEG1)
acpiprt11 at acpi0: bus -1 (PEG2)
acpiprt12 at acpi0: bus -1 (PEG3)
acpiprt13 at acpi0: bus 3 (RP05)
acpiec0 at acpi0: not present
acpipci0 at acpi0 PCI0: 0x00000010 0x00000011 0x00000000
acpicmos0 at acpi0
acpibtn0 at acpi0: PWRB
"PNP0C0B" at acpi0 not configured
"PNP0C0B" at acpi0 not configured
"PNP0C0B" at acpi0 not configured
"PNP0C0B" at acpi0 not configured
"PNP0C0B" at acpi0 not configured
"PNP0C14" at acpi0 not configured
acpicpu0 at acpi0: C3(350@80 mwait.1@0x20), C2(500@59 mwait.1@0x10), C1(1000@1 mwait.1), PSS
acpicpu1 at acpi0: C3(350@80 mwait.1@0x20), C2(500@59 mwait.1@0x10), C1(1000@1 mwait.1), PSS
acpicpu2 at acpi0: C3(350@80 mwait.1@0x20), C2(500@59 mwait.1@0x10), C1(1000@1 mwait.1), PSS
acpicpu3 at acpi0: C3(350@80 mwait.1@0x20), C2(500@59 mwait.1@0x10), C1(1000@1 mwait.1), PSS
acpicpu4 at acpi0: C3(350@80 mwait.1@0x20), C2(500@59 mwait.1@0x10), C1(1000@1 mwait.1), PSS
acpicpu5 at acpi0: C3(350@80 mwait.1@0x20), C2(500@59 mwait.1@0x10), C1(1000@1 mwait.1), PSS
acpicpu6 at acpi0: C3(350@80 mwait.1@0x20), C2(500@59 mwait.1@0x10), C1(1000@1 mwait.1), PSS
acpicpu7 at acpi0: C3(350@80 mwait.1@0x20), C2(500@59 mwait.1@0x10), C1(1000@1 mwait.1), PSS
acpipwrres0 at acpi0: FN00, resource for FAN0
acpipwrres1 at acpi0: FN01, resource for FAN1
acpipwrres2 at acpi0: FN02, resource for FAN2
acpipwrres3 at acpi0: FN03, resource for FAN3
acpipwrres4 at acpi0: FN04, resource for FAN4
acpitz0 at acpi0: critical temperature is 106 degC
acpitz1 at acpi0: critical temperature is 106 degC
acpivideo0 at acpi0: GFX0
acpivout0 at acpivideo0: DD02
cpu0: using VERW MDS workaround (except on vmm entry)
cpu0: Enhanced SpeedStep 3400 MHz: speeds: 3401, 3400, 3300, 3100, 3000, 2900, 2800, 2600, 2500, 2400, 2200, 2100, 2000, 1900, 1700, 1600 MHz
pci0 at mainbus0 bus 0
pchb0 at pci0 dev 0 function 0 "Intel Core 3G Host" rev 0x09
ppb0 at pci0 dev 1 function 0 "Intel Core 3G PCIE" rev 0x09: msi
pci1 at ppb0 bus 1
inteldrm0 at pci0 dev 2 function 0 "Intel HD Graphics 4000" rev 0x09
drm0 at inteldrm0
inteldrm0: msi, IVYBRIDGE, gen 7
"Intel 7 Series MEI" rev 0x04 at pci0 dev 22 function 0 not configured
ehci0 at pci0 dev 26 function 0 "Intel 7 Series USB" rev 0x04: apic 2 int 23
usb0 at ehci0: USB revision 2.0
uhub0 at usb0 configuration 1 interface 0 "Intel EHCI root hub" rev 2.00/1.00 addr 1
azalia0 at pci0 dev 27 function 0 "Intel 7 Series HD Audio" rev 0x04: msi
azalia0: codecs: Realtek/0x0892, Intel/0x2806, using Realtek/0x0892
audio0 at azalia0
ppb1 at pci0 dev 28 function 0 "Intel 7 Series PCIE" rev 0xc4: msi
pci2 at ppb1 bus 2
ppb2 at pci0 dev 28 function 4 "Intel 7 Series PCIE" rev 0xc4: msi
pci3 at ppb2 bus 3
re0 at pci3 dev 0 function 0 "Realtek 8168" rev 0x09: RTL8168F/8111F (0x4800), msi, address 08:60:6e:68:13:89
rgephy0 at re0 phy 7: RTL8169S/8110S/8211 PHY, rev. 5
ehci1 at pci0 dev 29 function 0 "Intel 7 Series USB" rev 0x04: apic 2 int 23
usb1 at ehci1: USB revision 2.0
uhub1 at usb1 configuration 1 interface 0 "Intel EHCI root hub" rev 2.00/1.00 addr 1
pcib0 at pci0 dev 31 function 0 "Intel H77 LPC" rev 0x04
ahci0 at pci0 dev 31 function 2 "Intel 7 Series AHCI" rev 0x04: msi, AHCI 1.3
ahci0: port 0: 6.0Gb/s
ahci0: port 1: 6.0Gb/s
scsibus1 at ahci0: 32 targets
sd0 at scsibus1 targ 0 lun 0: <ATA, TOSHIBA DT01ACA3, MX6O> naa.5000039ff4c8c194
sd0: 2861588MB, 512 bytes/sector, 5860533168 sectors
sd1 at scsibus1 targ 1 lun 0: <ATA, TOSHIBA DT01ACA3, MX6O> naa.5000039ff4c8bdcb
sd1: 2861588MB, 512 bytes/sector, 5860533168 sectors
ichiic0 at pci0 dev 31 function 3 "Intel 7 Series SMBus" rev 0x04: apic 2 int 18
iic0 at ichiic0
iic0: addr 0x20 01=00 02=00 03=00 04=00 05=00 06=00 07=f0 08=f0 09=f0 0a=f0 0b=22 0c=22 0d=88 0e=88 0f=00 10=00 11=98 12=fc 13=04 14=00 15=00 16=30 17=5b 18=00 19=00 1a=00 1b=00 1c=00 1d=22 1e=88 1f=02 20=00 21=00 22=05 23=02 24=00 25=00 26=55 27=09 28=bf 29=00 2a=f5 2b=00 2c=01 2d=d0 2e=a0 2f=18 30=00 31=00 32=00 33=68 3e=8b 46=00 47=03 48=04 49=13 b2=20 b3=83 words 00=ff00 01=0000 02=0000 03=0000 04=0000 05=0000 06=00f0 07=f0f0
spdmem0 at iic0 addr 0x50: 4GB DDR3 SDRAM PC3-10600
spdmem1 at iic0 addr 0x51: 4GB DDR3 SDRAM PC3-10600
spdmem2 at iic0 addr 0x52: 4GB DDR3 SDRAM PC3-10600
spdmem3 at iic0 addr 0x53: 4GB DDR3 SDRAM PC3-10600
isa0 at pcib0
isadma0 at isa0
pckbc0 at isa0 port 0x60/5 irq 1 irq 12
pckbd0 at pckbc0 (kbd slot)
wskbd0 at pckbd0: console keyboard
pcppi0 at isa0 port 0x61
spkr0 at pcppi0
wbsio0 at isa0 port 0x2e/2: NCT6779D rev 0x62
lm1 at wbsio0 port 0x290/8: NCT6779D
vmm0 at mainbus0: VMX/EPT
uhub2 at uhub0 port 1 configuration 1 interface 0 "Intel Rate Matching Hub" rev 2.00/0.00 addr 2
uhub3 at uhub1 port 1 configuration 1 interface 0 "Intel Rate Matching Hub" rev 2.00/0.00 addr 2
vscsi0 at root
scsibus2 at vscsi0: 256 targets
softraid0 at root
scsibus3 at softraid0: 256 targets
sd2 at scsibus3 targ 1 lun 0: <OPENBSD, SR RAID 1, 006>
sd2: 2097148MB, 512 bytes/sector, 4294961093 sectors
root on sd2a (dcbd00955078fc15.a) swap on sd2b dump on sd2b
inteldrm0: 1024x768, 32bpp
wsdisplay0 at inteldrm0 mux 1: console (std, vt100 emulation), using wskbd0
wsdisplay0: screen 1-5 added (std, vt100 emulation)
sd3 at scsibus3 targ 2 lun 0: <OPENBSD, SR CRYPTO, 006>
sd3: 1724830MB, 512 bytes/sector, 3532452036 sectors
cannot forward src fe80:5::fce1:baff:fed0:4, dst 2a00:1450:4001:81a::2004, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:4, dst 2a00:1450:4001:81a::2004, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:4, dst 2a00:1450:4001:81a::2004, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:3, dst 2a00:1450:4001:802::2004, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:3, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:4, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:3, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:4, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:6, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:8, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:6, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:b, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:7, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:b, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:5, dst 2a00:1450:4001:802::2004, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:a, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:4, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:7, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:6, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:b, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:5, dst 2a00:1450:4001:81a::2004, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:a, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:3, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:4, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:8, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:c, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:7, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:6, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:b, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:5, dst 2a00:1450:4001:824::2004, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:3, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:4, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:8, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:c, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:7, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:6, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:5, dst 2a00:1450:4001:81a::2004, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:4, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:c, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:d, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:d, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:d, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:e, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:e, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:e, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:7, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:6, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:b, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:5, dst 2a00:1450:4001:828::2004, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:3, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:8, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:c, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:e, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:d, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:7, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:6, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:b, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:3, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:4, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:e, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:e, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:e, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:d, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:7, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:6, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:5, dst 2a00:1450:4001:802::2004, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:3, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:4, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:c, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:d, dst 2a00:1450:4001:811::2004, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:e, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:7, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:b, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:5, dst 2a00:1450:4001:82a::2004, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:3, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:4, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:8, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
cannot forward src fe80:5::fce1:baff:fed0:c, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1

Re: vmd: spurious VM restarts

Mike Larkin
On Tue, Apr 06, 2021 at 07:47:52PM +0200, Thomas L. wrote:

> Hi,
>
> I'm running OpenBSD 6.8 as a hypervisor with multiple OpenBSD VMs.
> Regularly, all of the VMs get restarted, not at the same time but
> clustered together. The indication that this has happened is reduced
> uptime on the VMs, some services that fail to come back up, and the
> following logs:
>
> # grep vmd /var/log/daemon
> Apr  1 18:10:35 golem vmd[31367]: wiki: started vm 12 successfully, tty /dev/ttyp0
> Apr  6 13:24:52 golem vmd[31367]: matrix: started vm 13 successfully, tty /dev/ttypb
> Apr  6 13:25:55 golem vmd[31367]: matrix: started vm 13 successfully, tty /dev/ttypb
> Apr  6 13:26:45 golem vmd[18933]: vmd: LSR UART write 0x8203d260 unsupported
> Apr  6 13:26:45 golem vmd[31367]: ticketfrei: started vm 5 successfully, tty /dev/ttyp5
> Apr  6 14:22:34 golem vmd[31367]: www: started vm 4 successfully, tty /dev/ttyp4
> Apr  6 14:33:54 golem vmd[31367]: kibicara: started vm 8 successfully, tty /dev/ttyp8
> Apr  6 14:35:02 golem vmd[31367]: vpn: started vm 3 successfully, tty /dev/ttyp3
> Apr  6 14:36:38 golem vmd[31367]: relay: started vm 1 successfully, tty /dev/ttyp1
> Apr  6 14:37:51 golem vmd[31367]: schleuder: started vm 2 successfully, tty /dev/ttyp2
> Apr  6 14:40:34 golem vmd[31367]: mumble: started vm 6 successfully, tty /dev/ttyp6
> Apr  6 14:41:58 golem vmd[31367]: minecraft: started vm 9 successfully, tty /dev/ttyp9
>
> The restarts seem to be non-graceful, since the matrix vm needed a manual
> fsck on /var. Going back over the logs, this seems to happen about every
> month (not all restarts are this phenomenon, but Mar 8/10 and Feb
> 17/20/22 look like it):
>
> # zgrep vmd /var/log/daemon.0.gz
> Mar  8 19:43:07 golem vmd[31367]: wiki: started vm 12 successfully, tty /dev/ttyp0
> Mar  8 19:43:37 golem vmd[31367]: ticketfrei: started vm 5 successfully, tty /dev/ttyp5
> Mar 10 09:21:20 golem vmd[31367]: www: started vm 4 successfully, tty /dev/ttyp4
> Mar 10 09:24:13 golem vmd[31367]: kibicara: started vm 8 successfully, tty /dev/ttyp8
> Mar 10 09:26:13 golem vmd[31367]: vpn: started vm 3 successfully, tty /dev/ttyp3
> Mar 10 09:28:40 golem vmd[31367]: gitea: started vm 7 successfully, tty /dev/ttyp7
> Mar 10 09:29:01 golem vmd[31367]: relay: started vm 1 successfully, tty /dev/ttyp1
> Mar 10 09:31:29 golem vmd[31367]: schleuder: started vm 2 successfully, tty /dev/ttyp2
> Mar 10 09:34:02 golem vmd[31367]: mumble: started vm 6 successfully, tty /dev/ttyp6
> Mar 10 09:35:44 golem vmd[31367]: minecraft: started vm 9 successfully, tty /dev/ttyp9
> Mar 13 01:46:37 golem vmd[31367]: gitea: started vm 7 successfully, tty /dev/ttyp7
> golem# zgrep vmd /var/log/daemon.1.gz
> Feb 17 21:18:45 golem vmd[31367]: matrix: started vm 13 successfully, tty /dev/ttypc
> Feb 20 08:32:28 golem vmd[31367]: wiki: started vm 12 successfully, tty /dev/ttyp0
> Feb 20 08:33:14 golem vmd[31367]: ticketfrei: started vm 5 successfully, tty /dev/ttyp5
> Feb 20 08:35:20 golem vmd[31367]: www: started vm 4 successfully, tty /dev/ttyp4
> Feb 20 11:09:01 golem vmd[31367]: kibicara: started vm 8 successfully, tty /dev/ttyp8
> Feb 20 11:10:18 golem vmd[31367]: vpn: started vm 3 successfully, tty /dev/ttyp3
> Feb 20 11:11:52 golem vmd[31367]: gitea: started vm 7 successfully, tty /dev/ttyp7
> Feb 22 00:51:03 golem vmd[31367]: relay: started vm 1 successfully, tty /dev/ttyp1
> Feb 22 00:52:44 golem vmd[31367]: schleuder: started vm 2 successfully, tty /dev/ttyp2
> Feb 22 00:53:59 golem vmd[31367]: mumble: started vm 6 successfully, tty /dev/ttyp6
> Feb 22 00:54:45 golem vmd[31367]: minecraft: started vm 9 successfully, tty /dev/ttyp9
> Feb 24 23:01:50 golem vmd[31367]: vmd_sighdlr: reload requested with SIGHUP
> Feb 24 23:01:51 golem vmd[31367]: test: started vm 10 successfully, tty /dev/ttypa
> Feb 24 23:01:51 golem vmd[52735]: test: unsupported refcount size
> Feb 24 23:06:27 golem vmd[31367]: vmd_sighdlr: reload requested with SIGHUP
> Feb 24 23:06:27 golem vmd[1230]: test: unsupported refcount size
> Feb 24 23:06:27 golem vmd[31367]: matrix: started vm 13 successfully, tty /dev/ttypb
> Feb 24 23:06:27 golem vmd[31367]: test: started vm 10 successfully, tty /dev/ttypc
> Feb 24 23:10:20 golem vmd[31367]: matrix: started vm 13 successfully, tty /dev/ttypb
>
> The hypervisor's vm.conf and dmesg are below. How would I go
> about debugging this?
>
> Kind regards,
>
> Thomas
>

Anything in the host's dmesg?
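For context on checking this: OpenBSD saves a snapshot of the boot-time kernel message buffer in /var/run/dmesg.boot, so kernel messages logged since boot can be found by subtracting that snapshot from the current dmesg(8) output. A minimal sketch, assuming the standard file locations (the helper names are made up):

```python
import subprocess

def new_dmesg_lines(boot_dmesg: str, current_dmesg: str) -> list[str]:
    """Return lines in the current dmesg that are absent from the
    boot-time snapshot, i.e. kernel messages logged since boot."""
    boot = set(boot_dmesg.splitlines())
    return [line for line in current_dmesg.splitlines() if line not in boot]

def since_boot() -> list[str]:
    # standard OpenBSD locations: /var/run/dmesg.boot and the dmesg(8) output
    with open("/var/run/dmesg.boot") as f:
        boot = f.read()
    current = subprocess.check_output(["dmesg"], text=True)
    return new_dmesg_lines(boot, current)
```

Note the set-based diff ignores how often a line repeats, which is fine for spotting new message types such as vmm errors, but not for counting them.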

> [quoted vm.conf and dmesg elided]
> cpu4: 256KB 64b/line 8-way L2 cache
> cpu4: smt 1, core 0, package 0
> cpu5 at mainbus0: apid 3 (application processor)
> cpu5: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz, 3400.01 MHz, 06-3a-09
> cpu5: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,RDTSCP,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,ERMS,MD_CLEAR,IBRS,IBPB,STIBP,L1DF,SSBD,SENSOR,ARAT,XSAVEOPT,MELTDOWN
> cpu5: 256KB 64b/line 8-way L2 cache
> cpu5: smt 1, core 1, package 0
> cpu6 at mainbus0: apid 5 (application processor)
> cpu6: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz, 3400.01 MHz, 06-3a-09
> cpu6: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,RDTSCP,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,ERMS,MD_CLEAR,IBRS,IBPB,STIBP,L1DF,SSBD,SENSOR,ARAT,XSAVEOPT,MELTDOWN
> cpu6: 256KB 64b/line 8-way L2 cache
> cpu6: smt 1, core 2, package 0
> cpu7 at mainbus0: apid 7 (application processor)
> cpu7: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz, 3400.01 MHz, 06-3a-09
> cpu7: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,RDTSCP,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,ERMS,MD_CLEAR,IBRS,IBPB,STIBP,L1DF,SSBD,SENSOR,ARAT,XSAVEOPT,MELTDOWN
> cpu7: 256KB 64b/line 8-way L2 cache
> cpu7: smt 1, core 3, package 0
> ioapic0 at mainbus0: apid 2 pa 0xfec00000, version 20, 24 pins
> acpimcfg0 at acpi0
> acpimcfg0: addr 0xf8000000, bus 0-63
> acpihpet0 at acpi0: 14318179 Hz
> acpiprt0 at acpi0: bus 0 (PCI0)
> acpiprt1 at acpi0: bus -1 (P0P1)
> acpiprt2 at acpi0: bus 2 (RP01)
> acpiprt3 at acpi0: bus -1 (RP02)
> acpiprt4 at acpi0: bus -1 (RP03)
> acpiprt5 at acpi0: bus -1 (RP04)
> acpiprt6 at acpi0: bus -1 (RP06)
> acpiprt7 at acpi0: bus -1 (RP07)
> acpiprt8 at acpi0: bus -1 (RP08)
> acpiprt9 at acpi0: bus 1 (PEG0)
> acpiprt10 at acpi0: bus -1 (PEG1)
> acpiprt11 at acpi0: bus -1 (PEG2)
> acpiprt12 at acpi0: bus -1 (PEG3)
> acpiprt13 at acpi0: bus 3 (RP05)
> acpiec0 at acpi0: not present
> acpipci0 at acpi0 PCI0: 0x00000010 0x00000011 0x00000000
> acpicmos0 at acpi0
> acpibtn0 at acpi0: PWRB
> "PNP0C0B" at acpi0 not configured
> "PNP0C0B" at acpi0 not configured
> "PNP0C0B" at acpi0 not configured
> "PNP0C0B" at acpi0 not configured
> "PNP0C0B" at acpi0 not configured
> "PNP0C14" at acpi0 not configured
> acpicpu0 at acpi0: C3(350@80 mwait.1@0x20), C2(500@59 mwait.1@0x10), C1(1000@1 mwait.1), PSS
> acpicpu1 at acpi0: C3(350@80 mwait.1@0x20), C2(500@59 mwait.1@0x10), C1(1000@1 mwait.1), PSS
> acpicpu2 at acpi0: C3(350@80 mwait.1@0x20), C2(500@59 mwait.1@0x10), C1(1000@1 mwait.1), PSS
> acpicpu3 at acpi0: C3(350@80 mwait.1@0x20), C2(500@59 mwait.1@0x10), C1(1000@1 mwait.1), PSS
> acpicpu4 at acpi0: C3(350@80 mwait.1@0x20), C2(500@59 mwait.1@0x10), C1(1000@1 mwait.1), PSS
> acpicpu5 at acpi0: C3(350@80 mwait.1@0x20), C2(500@59 mwait.1@0x10), C1(1000@1 mwait.1), PSS
> acpicpu6 at acpi0: C3(350@80 mwait.1@0x20), C2(500@59 mwait.1@0x10), C1(1000@1 mwait.1), PSS
> acpicpu7 at acpi0: C3(350@80 mwait.1@0x20), C2(500@59 mwait.1@0x10), C1(1000@1 mwait.1), PSS
> acpipwrres0 at acpi0: FN00, resource for FAN0
> acpipwrres1 at acpi0: FN01, resource for FAN1
> acpipwrres2 at acpi0: FN02, resource for FAN2
> acpipwrres3 at acpi0: FN03, resource for FAN3
> acpipwrres4 at acpi0: FN04, resource for FAN4
> acpitz0 at acpi0: critical temperature is 106 degC
> acpitz1 at acpi0: critical temperature is 106 degC
> acpivideo0 at acpi0: GFX0
> acpivout0 at acpivideo0: DD02
> cpu0: using VERW MDS workaround (except on vmm entry)
> cpu0: Enhanced SpeedStep 3400 MHz: speeds: 3401, 3400, 3300, 3100, 3000, 2900, 2800, 2600, 2500, 2400, 2200, 2100, 2000, 1900, 1700, 1600 MHz
> pci0 at mainbus0 bus 0
> pchb0 at pci0 dev 0 function 0 "Intel Core 3G Host" rev 0x09
> ppb0 at pci0 dev 1 function 0 "Intel Core 3G PCIE" rev 0x09: msi
> pci1 at ppb0 bus 1
> inteldrm0 at pci0 dev 2 function 0 "Intel HD Graphics 4000" rev 0x09
> drm0 at inteldrm0
> inteldrm0: msi, IVYBRIDGE, gen 7
> "Intel 7 Series MEI" rev 0x04 at pci0 dev 22 function 0 not configured
> ehci0 at pci0 dev 26 function 0 "Intel 7 Series USB" rev 0x04: apic 2 int 23
> usb0 at ehci0: USB revision 2.0
> uhub0 at usb0 configuration 1 interface 0 "Intel EHCI root hub" rev 2.00/1.00 addr 1
> azalia0 at pci0 dev 27 function 0 "Intel 7 Series HD Audio" rev 0x04: msi
> azalia0: codecs: Realtek/0x0892, Intel/0x2806, using Realtek/0x0892
> audio0 at azalia0
> ppb1 at pci0 dev 28 function 0 "Intel 7 Series PCIE" rev 0xc4: msi
> pci2 at ppb1 bus 2
> ppb2 at pci0 dev 28 function 4 "Intel 7 Series PCIE" rev 0xc4: msi
> pci3 at ppb2 bus 3
> re0 at pci3 dev 0 function 0 "Realtek 8168" rev 0x09: RTL8168F/8111F (0x4800), msi, address 08:60:6e:68:13:89
> rgephy0 at re0 phy 7: RTL8169S/8110S/8211 PHY, rev. 5
> ehci1 at pci0 dev 29 function 0 "Intel 7 Series USB" rev 0x04: apic 2 int 23
> usb1 at ehci1: USB revision 2.0
> uhub1 at usb1 configuration 1 interface 0 "Intel EHCI root hub" rev 2.00/1.00 addr 1
> pcib0 at pci0 dev 31 function 0 "Intel H77 LPC" rev 0x04
> ahci0 at pci0 dev 31 function 2 "Intel 7 Series AHCI" rev 0x04: msi, AHCI 1.3
> ahci0: port 0: 6.0Gb/s
> ahci0: port 1: 6.0Gb/s
> scsibus1 at ahci0: 32 targets
> sd0 at scsibus1 targ 0 lun 0: <ATA, TOSHIBA DT01ACA3, MX6O> naa.5000039ff4c8c194
> sd0: 2861588MB, 512 bytes/sector, 5860533168 sectors
> sd1 at scsibus1 targ 1 lun 0: <ATA, TOSHIBA DT01ACA3, MX6O> naa.5000039ff4c8bdcb
> sd1: 2861588MB, 512 bytes/sector, 5860533168 sectors
> ichiic0 at pci0 dev 31 function 3 "Intel 7 Series SMBus" rev 0x04: apic 2 int 18
> iic0 at ichiic0
> iic0: addr 0x20 01=00 02=00 03=00 04=00 05=00 06=00 07=f0 08=f0 09=f0 0a=f0 0b=22 0c=22 0d=88 0e=88 0f=00 10=00 11=98 12=fc 13=04 14=00 15=00 16=30 17=5b 18=00 19=00 1a=00 1b=00 1c=00 1d=22 1e=88 1f=02 20=00 21=00 22=05 23=02 24=00 25=00 26=55 27=09 28=bf 29=00 2a=f5 2b=00 2c=01 2d=d0 2e=a0 2f=18 30=00 31=00 32=00 33=68 3e=8b 46=00 47=03 48=04 49=13 b2=20 b3=83 words 00=ff00 01=0000 02=0000 03=0000 04=0000 05=0000 06=00f0 07=f0f0
> spdmem0 at iic0 addr 0x50: 4GB DDR3 SDRAM PC3-10600
> spdmem1 at iic0 addr 0x51: 4GB DDR3 SDRAM PC3-10600
> spdmem2 at iic0 addr 0x52: 4GB DDR3 SDRAM PC3-10600
> spdmem3 at iic0 addr 0x53: 4GB DDR3 SDRAM PC3-10600
> isa0 at pcib0
> isadma0 at isa0
> pckbc0 at isa0 port 0x60/5 irq 1 irq 12
> pckbd0 at pckbc0 (kbd slot)
> wskbd0 at pckbd0: console keyboard
> pcppi0 at isa0 port 0x61
> spkr0 at pcppi0
> wbsio0 at isa0 port 0x2e/2: NCT6779D rev 0x62
> lm1 at wbsio0 port 0x290/8: NCT6779D
> vmm0 at mainbus0: VMX/EPT
> uhub2 at uhub0 port 1 configuration 1 interface 0 "Intel Rate Matching Hub" rev 2.00/0.00 addr 2
> uhub3 at uhub1 port 1 configuration 1 interface 0 "Intel Rate Matching Hub" rev 2.00/0.00 addr 2
> vscsi0 at root
> scsibus2 at vscsi0: 256 targets
> softraid0 at root
> scsibus3 at softraid0: 256 targets
> sd2 at scsibus3 targ 1 lun 0: <OPENBSD, SR RAID 1, 006>
> sd2: 2097148MB, 512 bytes/sector, 4294961093 sectors
> root on sd2a (dcbd00955078fc15.a) swap on sd2b dump on sd2b
> inteldrm0: 1024x768, 32bpp
> wsdisplay0 at inteldrm0 mux 1: console (std, vt100 emulation), using wskbd0
> wsdisplay0: screen 1-5 added (std, vt100 emulation)
> sd3 at scsibus3 targ 2 lun 0: <OPENBSD, SR CRYPTO, 006>
> sd3: 1724830MB, 512 bytes/sector, 3532452036 sectors
> cannot forward src fe80:5::fce1:baff:fed0:4, dst 2a00:1450:4001:81a::2004, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:4, dst 2a00:1450:4001:81a::2004, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:4, dst 2a00:1450:4001:81a::2004, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:3, dst 2a00:1450:4001:802::2004, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:3, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:4, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:3, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:4, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:6, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:8, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:6, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:b, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:7, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:b, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:5, dst 2a00:1450:4001:802::2004, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:a, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:4, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:7, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:6, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:b, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:5, dst 2a00:1450:4001:81a::2004, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:a, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:3, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:4, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:8, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:c, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:7, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:6, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:b, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:5, dst 2a00:1450:4001:824::2004, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:3, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:4, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:8, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:c, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:7, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:6, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:5, dst 2a00:1450:4001:81a::2004, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:4, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:c, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:d, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:d, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:d, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:e, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:e, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:e, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:7, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:6, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:b, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:5, dst 2a00:1450:4001:828::2004, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:3, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:8, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:c, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:e, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:d, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:7, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:6, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:b, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:3, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:4, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:e, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:e, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:e, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:d, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:7, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:6, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:5, dst 2a00:1450:4001:802::2004, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:3, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:4, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:c, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:d, dst 2a00:1450:4001:811::2004, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:e, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:7, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:b, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:5, dst 2a00:1450:4001:82a::2004, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:3, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:4, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:8, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
> cannot forward src fe80:5::fce1:baff:fed0:c, dst 2620:fe::fe, nxt 6, rcvif 5, outif 1
>


Re: vmd: spurious VM restarts

Thomas L.
On Tue, 6 Apr 2021 11:11:01 -0700
Mike Larkin <[hidden email]> wrote:
> Anything in the host's dmesg?

Below is the dmesg and latest syslog from one of the VMs.

OpenBSD 6.8 (GENERIC) #1: Tue Nov  3 09:04:47 MST 2020
    [hidden email]:/usr/src/sys/arch/amd64/compile/GENERIC
real mem = 520085504 (495MB)
avail mem = 489435136 (466MB)
random: good seed from bootblocks
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.4 @ 0xf3f40 (10 entries)
bios0: vendor SeaBIOS version "1.11.0p3-OpenBSD-vmm" date 01/01/2011
bios0: OpenBSD VMM
acpi at bios0 not configured
cpu0 at mainbus0: (uniprocessor)
cpu0: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz, 3403.18 MHz, 06-3a-09
cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,CX8,SEP,PGE,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SSE3,PCLMUL,SSSE3,CX16,SSE4.1,SSE4.2,POPCNT,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,LONG,LAHF,ITSC,FSGSBASE,SMEP,ERMS,MD_CLEAR,MELTDOWN
cpu0: 256KB 64b/line 8-way L2 cache
cpu0: smt 0, core 0, package 0
cpu0: using VERW MDS workaround
pvbus0 at mainbus0: OpenBSD
pvclock0 at pvbus0
pci0 at mainbus0 bus 0
pchb0 at pci0 dev 0 function 0 "OpenBSD VMM Host" rev 0x00
virtio0 at pci0 dev 1 function 0 "Qumranet Virtio RNG" rev 0x00
viornd0 at virtio0
virtio0: irq 3
virtio1 at pci0 dev 2 function 0 "Qumranet Virtio Network" rev 0x00
vio0 at virtio1: address fe:e1:ba:d0:00:04
virtio1: irq 5
virtio2 at pci0 dev 3 function 0 "Qumranet Virtio Storage" rev 0x00
vioblk0 at virtio2
scsibus1 at vioblk0: 1 targets
sd0 at scsibus1 targ 0 lun 0: <VirtIO, Block Device, >
sd0: 307200MB, 512 bytes/sector, 629145600 sectors
virtio2: irq 6
virtio3 at pci0 dev 4 function 0 "OpenBSD VMM Control" rev 0x00
vmmci0 at virtio3
virtio3: irq 7
isa0 at mainbus0
isadma0 at isa0
com0 at isa0 port 0x3f8/8 irq 4: ns8250, no fifo
com0: console
vscsi0 at root
scsibus2 at vscsi0: 256 targets
softraid0 at root
scsibus3 at softraid0: 256 targets
root on sd0a (c14ce37920a910f7.a) swap on sd0b dump on sd0b
WARNING: / was not properly unmounted

Apr  6 14:39:33 schleuder /bsd: OpenBSD 6.8 (GENERIC) #1: Tue Nov  3 09:04:47 MST 2020
Apr  6 14:39:33 schleuder /bsd:     [hidden email]:/usr/src/sys/arch/amd64/compile/GENERIC
Apr  6 14:39:33 schleuder /bsd: real mem = 520085504 (495MB)
Apr  6 14:39:33 schleuder /bsd: avail mem = 489435136 (466MB)
Apr  6 14:39:33 schleuder /bsd: random: good seed from bootblocks
Apr  6 14:39:33 schleuder /bsd: mpath0 at root
Apr  6 14:39:33 schleuder /bsd: scsibus0 at mpath0: 256 targets
Apr  6 14:39:33 schleuder /bsd: mainbus0 at root
Apr  6 14:39:33 schleuder /bsd: bios0 at mainbus0: SMBIOS rev. 2.4 @ 0xf3f40 (10 entries)
Apr  6 14:39:33 schleuder /bsd: bios0: vendor SeaBIOS version "1.11.0p3-OpenBSD-vmm" date 01/01/2011
Apr  6 14:39:33 schleuder /bsd: bios0: OpenBSD VMM
Apr  6 14:39:33 schleuder /bsd: acpi at bios0 not configured
Apr  6 14:39:33 schleuder /bsd: cpu0 at mainbus0: (uniprocessor)
Apr  6 14:39:33 schleuder /bsd: cpu0: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz, 3403.18 MHz, 06-3a-09
Apr  6 14:39:33 schleuder /bsd: cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,CX8,SEP,PGE,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SSE3,PCLMUL,SSSE3,CX16,SSE4.1,SSE4.2,POPCNT,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,LONG,LAHF,ITSC,FSGSBASE,SMEP,ERMS,MD_CLEAR,MELTDOWN
Apr  6 14:39:33 schleuder /bsd: cpu0: 256KB 64b/line 8-way L2 cache
Apr  6 14:39:33 schleuder /bsd: cpu0: smt 0, core 0, package 0
Apr  6 14:39:33 schleuder /bsd: cpu0: using VERW MDS workaround
Apr  6 14:39:33 schleuder /bsd: pvbus0 at mainbus0: OpenBSD
Apr  6 14:39:33 schleuder /bsd: pvclock0 at pvbus0
Apr  6 14:39:33 schleuder /bsd: pci0 at mainbus0 bus 0
Apr  6 14:39:33 schleuder /bsd: pchb0 at pci0 dev 0 function 0 "OpenBSD VMM Host" rev 0x00
Apr  6 14:39:33 schleuder /bsd: virtio0 at pci0 dev 1 function 0 "Qumranet Virtio RNG" rev 0x00
Apr  6 14:39:33 schleuder /bsd: viornd0 at virtio0
Apr  6 14:39:33 schleuder /bsd: virtio0: irq 3
Apr  6 14:39:33 schleuder /bsd: virtio1 at pci0 dev 2 function 0 "Qumranet Virtio Network" rev 0x00
Apr  6 14:39:33 schleuder /bsd: vio0 at virtio1: address fe:e1:ba:d0:00:04
Apr  6 14:39:33 schleuder /bsd: virtio1: irq 5
Apr  6 14:39:33 schleuder /bsd: virtio2 at pci0 dev 3 function 0 "Qumranet Virtio Storage" rev 0x00
Apr  6 14:39:33 schleuder /bsd: vioblk0 at virtio2
Apr  6 14:39:33 schleuder /bsd: scsibus1 at vioblk0: 1 targets
Apr  6 14:39:33 schleuder /bsd: sd0 at scsibus1 targ 0 lun 0: <VirtIO, Block Device, >
Apr  6 14:39:33 schleuder /bsd: sd0: 307200MB, 512 bytes/sector, 629145600 sectors
Apr  6 14:39:33 schleuder /bsd: virtio2: irq 6
Apr  6 14:39:33 schleuder /bsd: virtio3 at pci0 dev 4 function 0 "OpenBSD VMM Control" rev 0x00
Apr  6 14:39:33 schleuder /bsd: vmmci0 at virtio3
Apr  6 14:39:33 schleuder /bsd: virtio3: irq 7
Apr  6 14:39:33 schleuder /bsd: isa0 at mainbus0
Apr  6 14:39:33 schleuder /bsd: isadma0 at isa0
Apr  6 14:39:33 schleuder /bsd: com0 at isa0 port 0x3f8/8 irq 4: ns8250, no fifo
Apr  6 14:39:33 schleuder /bsd: com0: console
Apr  6 14:39:33 schleuder /bsd: vscsi0 at root
Apr  6 14:39:33 schleuder /bsd: scsibus2 at vscsi0: 256 targets
Apr  6 14:39:33 schleuder /bsd: softraid0 at root
Apr  6 14:39:33 schleuder /bsd: scsibus3 at softraid0: 256 targets
Apr  6 14:39:33 schleuder /bsd: root on sd0a (c14ce37920a910f7.a) swap on sd0b dump on sd0b
Apr  6 14:39:33 schleuder /bsd: WARNING: / was not properly unmounted
Apr  6 14:39:33 schleuder sendsyslog: dropped 4 messages, error 57, pid 93571
Apr  6 14:39:34 schleuder savecore: no core dump
Apr  6 14:40:49 schleuder reorder_kernel: kernel relinking done
Apr  6 15:00:19 schleuder syslogd[70665]: restart


Re: vmd: spurious VM restarts

Mike Larkin-2
On Tue, Apr 06, 2021 at 09:15:10PM +0200, Thomas L. wrote:
> On Tue, 6 Apr 2021 11:11:01 -0700
> Mike Larkin <[hidden email]> wrote:
> > Anything in the host's dmesg?
>

*host* dmesg. I think you misread what I was after...

> Below is the dmesg and latest syslog from one of the VMs.
>
> [quoted VM dmesg and syslog snipped; identical to the copy in the previous mail]


Re: vmd: spurious VM restarts

Thomas L.
On Tue, 6 Apr 2021 14:28:09 -0700
Mike Larkin <[hidden email]> wrote:

> On Tue, Apr 06, 2021 at 09:15:10PM +0200, Thomas L. wrote:
> > On Tue, 6 Apr 2021 11:11:01 -0700
> > Mike Larkin <[hidden email]> wrote:
> > > Anything in the host's dmesg?
> >
>
> *host* dmesg. I think you misread what I was after...

The dmesg of the host was already attached to the first mail, below the
vm.conf (I mistakenly called the host a hypervisor, which I realize now
is not accurate). Since it was already attached, I figured you must mean
the VM's dmesg, compounding the confusion ...

Kind regards,

Thomas


Re: vmd: spurious VM restarts

Mike Larkin-2
On Wed, Apr 07, 2021 at 12:22:23AM +0200, Thomas L. wrote:

> On Tue, 6 Apr 2021 14:28:09 -0700
> Mike Larkin <[hidden email]> wrote:
>
> > On Tue, Apr 06, 2021 at 09:15:10PM +0200, Thomas L. wrote:
> > > On Tue, 6 Apr 2021 11:11:01 -0700
> > > Mike Larkin <[hidden email]> wrote:
> > > > Anything in the host's dmesg?
> > >
> >
> > *host* dmesg. I think you misread what I was after...
>
> The dmesg of the host was already attached to the first mail below the
> vm.conf (I mistakenly called the host hypervisor, which I realize now is
> not accurate). I figured since it was already attached, that
> you must mean the VM, compounding the confusion ...
>
> Kind regards,
>
> Thomas
>

I see.

You'll probably need to build a kernel with VMM_DEBUG, capture that
output, and send it to me once a VM crashes. Note: it will generate a
lot of output and probably make things somewhat slower.
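For anyone following along, the usual custom-kernel procedure would look roughly like this. This is a sketch only: the config name CUSTOM.MP is made up here, and the exact make targets should be checked against the OpenBSD FAQ for the release in use.

```shell
# Sketch: build an amd64 kernel with VMM_DEBUG enabled (OpenBSD).
# CUSTOM.MP is a hypothetical name; paths follow the usual /usr/src layout.
cd /usr/src/sys/arch/amd64/conf
cp GENERIC.MP CUSTOM.MP
echo 'option VMM_DEBUG' >> CUSTOM.MP
config CUSTOM.MP
cd ../compile/CUSTOM.MP
make obj && make && make install   # then reboot into the new kernel
```

After rebooting, the extra vmm(4) diagnostics land in the kernel message buffer (dmesg).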

-ml


Re: vmd: spurious VM restarts

Dave Voutila-2

Mike Larkin writes:

> On Wed, Apr 07, 2021 at 12:22:23AM +0200, Thomas L. wrote:
>> [...]
>
> I see.
>
> You'll probably need to build a kernel with VMM_DEBUG and save that output and
> send it to me once a VM crashes. Note: it will generate a lot of output and
> probably make things somewhat slower.
>
> -ml

Thomas: I looked at your host dmesg and your provided vm.conf. It looks
like 11 VMs with the default 512M memory and one (minecraft) with
8G. Your host seems to have only 16GB of memory, some of which is
probably unavailable since it's used by the integrated GPU. I'm wondering
if you are effectively oversubscribing your memory here.
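A quick back-of-the-envelope check of that hypothesis, assuming 11 VMs at the 512M default plus the one 8G VM (counts taken from the posted vm.conf, host RAM from the "real mem" line of the host dmesg):

```shell
# Rough total of configured guest memory vs. host RAM.
# Assumption: 11 VMs at vmd's 512M default plus one 8G VM.
guest_mb=$((11 * 512 + 8 * 1024))
host_mb=16279   # "real mem" from the host dmesg
echo "guests: ${guest_mb}MB of ${host_mb}MB host RAM"
```

That leaves only a couple of gigabytes for the host itself, the GPU, and buffer cache, so the margin is thin even before any oversubscription.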

I know we currently don't support swapping guest memory out, but I'm not
sure what happens if we don't have the physical memory to fault a page
in and wire it.

Even without a custom kernel with VMM_DEBUG, if it's a uvm_fault issue
you should see a message in the kernel buffer. Something like:

  vmx_fault_page: uvm_fault returns N, GPA=0x...., rip=0x....

mlarkin: thoughts on my hypothesis? Am I wildly off course?

-dv


Re: vmd: spurious VM restarts

Dave Voutila-2

Dave Voutila writes:

> Mike Larkin writes:
>
>> On Wed, Apr 07, 2021 at 12:22:23AM +0200, Thomas L. wrote:
>>> On Tue, 6 Apr 2021 14:28:09 -0700
>>> Mike Larkin <[hidden email]> wrote:
>>>
>>> > On Tue, Apr 06, 2021 at 09:15:10PM +0200, Thomas L. wrote:
>>> > > On Tue, 6 Apr 2021 11:11:01 -0700
>>> > > Mike Larkin <[hidden email]> wrote:
>>> > > > Anything in the host's dmesg?
>>> > >
>>> >
>>> > *host* dmesg. I think you misread what I was after...
>>>
>>> The dmesg of the host was already attached to the first mail below the
>>> vm.conf (I mistakenly called the host hypervisor, which I realize now is
>>> not accurate). I figured since it was already attached, that
>>> you must mean the VM, compounding the confusion ...
>>>
>>> Kind regards,
>>>
>>> Thomas
>>>
>>
>> I see.
>>
>> You'll probably need to build a kernel with VMM_DEBUG and save that output and
>> send it to me once a VM crashes. Note: it will generate a lot of output and
>> probably make things somewhat slower.
>>
>> -ml
>
> Thomas: I looked at your host dmesg and your provided vm.conf. It looks
> like 11 VMs with the default 512M of memory and one (minecraft) with
> 8G. Your host seems to have only 16GB of memory, some of which is
> probably unavailable as it's used by the integrated GPU. I'm wondering
> if you are effectively oversubscribing your memory here.
>
> I know we currently don't support swapping guest memory out, but not
> sure what happens if we don't have the physical memory to fault a page
> in and wire it.

Looked a bit further: since your host is running 6.8, it doesn't have
the memory-wiring logic, but I'd still be cautious about oversubscribing
memory.

>
> Even without a custom kernel with VMM_DEBUG, if it's a uvm_fault issue
> you should see a message in the kernel buffer. Something like:
>
>   vmx_fault_page: uvm_fault returns N, GPA=0x...., rip=0x....
>

You can also run vmd(8) with debug logging (-v or -vv) and maybe capture
these events. Like with vmm(4) logging, it can be excessively verbose.
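Concretely, something along these lines (a sketch; rcctl flags persist across service restarts):

```shell
# Sketch: enable persistent verbose logging for vmd(8) via rc.
rcctl set vmd flags -vv      # -v/-vv raise vmd's log verbosity
rcctl restart vmd

# and keep an eye on the kernel buffer for the fault message above
dmesg | grep vmx_fault_page
```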

> mlarkin: thoughts on my hypothesis? Am I wildly off course?
>
> -dv


Re: vmd: spurious VM restarts

Mike Larkin-2
In reply to this post by Dave Voutila-2
On Wed, Apr 07, 2021 at 07:26:41AM -0400, Dave Voutila wrote:

>
> Mike Larkin writes:
>
> > On Wed, Apr 07, 2021 at 12:22:23AM +0200, Thomas L. wrote:
> >> On Tue, 6 Apr 2021 14:28:09 -0700
> >> Mike Larkin <[hidden email]> wrote:
> >>
> >> > On Tue, Apr 06, 2021 at 09:15:10PM +0200, Thomas L. wrote:
> >> > > On Tue, 6 Apr 2021 11:11:01 -0700
> >> > > Mike Larkin <[hidden email]> wrote:
> >> > > > Anything in the host's dmesg?
> >> > >
> >> >
> >> > *host* dmesg. I think you misread what I was after...
> >>
> >> The dmesg of the host was already attached to the first mail below the
> >> vm.conf (I mistakenly called the host hypervisor, which I realize now is
> >> not accurate). I figured since it was already attached, that
> >> you must mean the VM, compounding the confusion ...
> >>
> >> Kind regards,
> >>
> >> Thomas
> >>
> >
> > I see.
> >
> > You'll probably need to build a kernel with VMM_DEBUG and save that output and
> > send it to me once a VM crashes. Note: it will generate a lot of output and
> > probably make things somewhat slower.
> >
> > -ml
>
> > Thomas: I looked at your host dmesg and your provided vm.conf. It looks
> > like 11 VMs with the default 512M of memory and one (minecraft) with
> > 8G. Your host seems to have only 16GB of memory, some of which is
> > probably unavailable as it's used by the integrated GPU. I'm wondering
> > if you are effectively oversubscribing your memory here.
>
> I know we currently don't support swapping guest memory out, but not
> sure what happens if we don't have the physical memory to fault a page
> in and wire it.
>

Something else gets swapped out.

> Even without a custom kernel with VMM_DEBUG, if it's a uvm_fault issue
> you should see a message in the kernel buffer. Something like:
>
>   vmx_fault_page: uvm_fault returns N, GPA=0x...., rip=0x....
>
> mlarkin: thoughts on my hypothesis? Am I wildly off course?
>
> -dv
>

Yeah I was trying to catch the big dump when a VM resets. That would tell
us if the vm caused the reset or if vmd(8) crashed for some reason.


Re: vmd: spurious VM restarts

Mike Larkin-2
In reply to this post by Dave Voutila-2
On Wed, Apr 07, 2021 at 09:23:14AM -0400, Dave Voutila wrote:

>
> Dave Voutila writes:
>
> > Mike Larkin writes:
> >
> >> On Wed, Apr 07, 2021 at 12:22:23AM +0200, Thomas L. wrote:
> >>> On Tue, 6 Apr 2021 14:28:09 -0700
> >>> Mike Larkin <[hidden email]> wrote:
> >>>
> >>> > On Tue, Apr 06, 2021 at 09:15:10PM +0200, Thomas L. wrote:
> >>> > > On Tue, 6 Apr 2021 11:11:01 -0700
> >>> > > Mike Larkin <[hidden email]> wrote:
> >>> > > > Anything in the host's dmesg?
> >>> > >
> >>> >
> >>> > *host* dmesg. I think you misread what I was after...
> >>>
> >>> The dmesg of the host was already attached to the first mail below the
> >>> vm.conf (I mistakenly called the host hypervisor, which I realize now is
> >>> not accurate). I figured since it was already attached, that
> >>> you must mean the VM, compounding the confusion ...
> >>>
> >>> Kind regards,
> >>>
> >>> Thomas
> >>>
> >>
> >> I see.
> >>
> >> You'll probably need to build a kernel with VMM_DEBUG and save that output and
> >> send it to me once a VM crashes. Note: it will generate a lot of output and
> >> probably make things somewhat slower.
> >>
> >> -ml
> >
> > Thomas: I looked at your host dmesg and your provided vm.conf. It looks
> > like 11 VMs with the default 512M of memory and one (minecraft) with
> > 8G. Your host seems to have only 16GB of memory, some of which is
> > probably unavailable as it's used by the integrated GPU. I'm wondering
> > if you are effectively oversubscribing your memory here.
> >
> > I know we currently don't support swapping guest memory out, but not
> > sure what happens if we don't have the physical memory to fault a page
> > in and wire it.
>
> Looked a bit further: since your host is running 6.8, it doesn't have
> the memory-wiring logic, but I'd still be cautious about oversubscribing
> memory.
>

Yep. Try -current and see if this can be reproduced.

> >
> > Even without a custom kernel with VMM_DEBUG, if it's a uvm_fault issue
> > you should see a message in the kernel buffer. Something like:
> >
> >   vmx_fault_page: uvm_fault returns N, GPA=0x...., rip=0x....
> >
>
> You can also run vmd(8) with debug logging (-v or -vv) and maybe capture
> these events. Like with vmm(4) logging, it can be excessively verbose.
>
> > mlarkin: thoughts on my hypothesis? Am I wildly off course?
> >
> > -dv
>


Re: vmd: spurious VM restarts

Thomas L.
In reply to this post by Mike Larkin-2
> > Thomas: I looked at your host dmesg and your provided vm.conf. It
> > looks like 11 VMs with the default 512M of memory and one (minecraft)
> > with 8G. Your host seems to have only 16GB of memory, some of which
> > is probably unavailable as it's used by the integrated GPU. I'm
> > wondering if you are effectively oversubscribing your memory here.
> >
> > I know we currently don't support swapping guest memory out, but not
> > sure what happens if we don't have the physical memory to fault a
> > page in and wire it.
> >
>
> Something else gets swapped out.

Wire == Can't swap out?
top shows 15G real memory available. That should be enough (8G + 11 *
0.5G = 13.5G), or is this inherently risky with 6.8?
I can try -current as suggested in the other mail. Is this a likely
cause, or should I run with VMM_DEBUG for further investigation? Is
"somewhat slower" from VMM_DEBUG still usable? I don't need full
performance, but roughly a month of downtime until the problem shows
again would be too much.

> > Even without a custom kernel with VMM_DEBUG, if it's a uvm_fault
> > issue you should see a message in the kernel buffer. Something like:
> >
> >   vmx_fault_page: uvm_fault returns N, GPA=0x...., rip=0x....
> >
> > mlarkin: thoughts on my hypothesis? Am I wildly off course?
> >
> > -dv
> >
>
> Yeah I was trying to catch the big dump when a VM resets. That would
> tell us if the vm caused the reset or if vmd(8) crashed for some
> reason.

But if vmd crashed, it wouldn't restart automatically, would it?
All VMs going down from a vmd crash would have been noticed.
That kernel message would have shown up in dmesg too, wouldn't it?

Kind regards,

Thomas


Re: vmd: spurious VM restarts

Dave Voutila-2

Thomas L. writes:

>> > Thomas: I looked at your host dmesg and your provided vm.conf. It
>> > looks like 11 VMs with the default 512M of memory and one (minecraft)
>> > with 8G. Your host seems to have only 16GB of memory, some of which
>> > is probably unavailable as it's used by the integrated GPU. I'm
>> > wondering if you are effectively oversubscribing your memory here.
>> >
>> > I know we currently don't support swapping guest memory out, but not
>> > sure what happens if we don't have the physical memory to fault a
>> > page in and wire it.
>> >
>>
>> Something else gets swapped out.
>
> Wire == Can't swap out?

Yes.

> top shows 15G real memory available. That should be enough (8G + 11 *
> 0.5G = 13.5G), or is this inherently risky with 6.8?

With 6.8, the guests might have memory swapped out, and in the worst
case you'll see some performance issues. That shouldn't cause unexpected
termination.

> I can try -current as suggested in the other mail. Is this a likely
> cause, or should I run with VMM_DEBUG for further investigation? Is
> "somewhat slower" from VMM_DEBUG still usable? I don't need full
> performance, but roughly a month of downtime until the problem shows
> again would be too much.

A fix is more likely to land in -current if an issue can be
identified. Since the issue doesn't sound like it's easily reproducible
yet, VMM_DEBUG is the best bet for having the information you'd need to
share when the issue occurs.

>> > Even without a custom kernel with VMM_DEBUG, if it's a uvm_fault
>> > issue you should see a message in the kernel buffer. Something like:
>> >
>> >   vmx_fault_page: uvm_fault returns N, GPA=0x...., rip=0x....
>> >
>> > mlarkin: thoughts on my hypothesis? Am I wildly off course?
>> >
>> > -dv
>> >
>>
>> Yeah I was trying to catch the big dump when a VM resets. That would
>> tell us if the vm caused the reset or if vmd(8) crashed for some
>> reason.
>
> But if vmd crashed, it wouldn't restart automatically, would it?
> All VMs going down from a vmd crash would have been noticed.
> That kernel message would have shown up in dmesg too, wouldn't it?
>

There are multiple factors. First, vmd(8) is multi-process, and a VM's
process can die without impacting the others. Second, the vcpu could be
reset, making the guest "reboot." There are numerous reasons these
things could happen, hence the need for debug logging.
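One way to see that structure on the host is the process list; each running guest gets its own vmd process under the parent, e.g. (a sketch):

```shell
# Each guest runs as its own vmd process; the parent vmd and its
# helper processes survive if a single guest's process dies.
ps -axo pid,ppid,command | grep '[v]md'
```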

-dv


Re: vmd: spurious VM restarts

Mike Larkin-2
On Wed, Apr 07, 2021 at 07:47:28PM -0400, Dave Voutila wrote:

>
> Thomas L. writes:
>
> >> > Thomas: I looked at your host dmesg and your provided vm.conf. It
> >> > looks like 11 VMs with the default 512M of memory and one (minecraft)
> >> > with 8G. Your host seems to have only 16GB of memory, some of which
> >> > is probably unavailable as it's used by the integrated GPU. I'm
> >> > wondering if you are effectively oversubscribing your memory here.
> >> >
> >> > I know we currently don't support swapping guest memory out, but not
> >> > sure what happens if we don't have the physical memory to fault a
> >> > page in and wire it.
> >> >
> >>
> >> Something else gets swapped out.
> >
> > Wire == Can't swap out?
>
> Yes.
>
> > top shows 15G real memory available. That should be enough (8G + 11 *
> > 0.5G = 13.5G), or is this inherently risky with 6.8?
>
> With 6.8, the guests might have memory swapped out, and in the worst
> case you'll see some performance issues. That shouldn't cause unexpected
> termination.
>

Depends on the exact content that got swapped out (as we didn't handle
TLB flushes correctly), so a crash was certainly a possibility. That's why
I wanted to see the VMM_DEBUG output.

In any case, Thomas should try -current and see if this problem is even
reproducible.

-ml

> > I can try -current as suggested in the other mail. Is this a likely
> > cause, or should I run with VMM_DEBUG for further investigation? Is
> > "somewhat slower" from VMM_DEBUG still usable? I don't need full
> > performance, but roughly a month of downtime until the problem shows
> > again would be too much.
>
> A fix is more likely to land in -current if an issue can be
> identified. Since the issue doesn't sound like it's easily reproducible
> yet, VMM_DEBUG is the best bet for having the information you'd need to
> share when the issue occurs.
>
> >> > Even without a custom kernel with VMM_DEBUG, if it's a uvm_fault
> >> > issue you should see a message in the kernel buffer. Something like:
> >> >
> >> >   vmx_fault_page: uvm_fault returns N, GPA=0x...., rip=0x....
> >> >
> >> > mlarkin: thoughts on my hypothesis? Am I wildly off course?
> >> >
> >> > -dv
> >> >
> >>
> >> Yeah I was trying to catch the big dump when a VM resets. That would
> >> tell us if the vm caused the reset or if vmd(8) crashed for some
> >> reason.
> >
> > But if vmd crashed, it wouldn't restart automatically, would it?
> > All VMs going down from a vmd crash would have been noticed.
> > That kernel message would have shown up in dmesg too, wouldn't it?
> >
>
> There are multiple factors. First, vmd(8) is multi-process, and a VM's
> process can die without impacting the others. Second, the vcpu could be
> reset, making the guest "reboot." There are numerous reasons these
> things could happen, hence the need for debug logging.
>
> -dv
>