Adam,
So I have been using version 23.10.1, built as per the download page, and have gotten a couple of VMs to ping each other and give me a login prompt. I finally realized I was missing the virtio_switch package, grabbed it from GitHub, and put it where it is supposed to go. Of course, being a version mismatch now, it did not compile. So I decided to bite the bullet, do an upgrade (always a mistake), and use the new build process given on the new website. That went smoothly! I installed ham, ran it, and everything built; I updated my local scripts and links to use the new environment variables instead of hardcoded paths, and it all builds and looks good. Except that it doesn't run. The VMs get a memory exception and kick into jdb.
I have not changed any of my .cfg or .list files. The VMs and ramdisks are untouched. My local (L4-native) processes start and appear to run, but the VMs crash for some reason. Even the device tree is unchanged. I also tried a newly built Linux (as opposed to a prebuilt one), and that failed as well.
Is there some reason for this new crash between version 23.10.1 and the latest version from GitHub? I've attached the VM startup output from both the old and new runs so you can take a look.
I'm so close... I see the IP configuration parameter and how to set up lwip and virtio_switch. Then my natives should be able to talk directly to my Linuxes.
Your help is greatly appreciated!
Richard
Richard,
is that setup running on Linux QEMU+KVM? If yes: we recently (really just a few days ago) fixed an issue in this virtualized setup with respect to performance counter handling. It has been on GitHub since Friday, I believe. Otherwise, please provide me your fiasco binary, so that I can look up fffffffff006a176, as this will point to the location that triggered the issue.
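For reference: assuming the binary was built with debugging symbols (the file name below is only an example), a standard binutils call can resolve such an address to a source location:

  addr2line -f -e fiasco fffffffff006a176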
Thanks, Adam
On Sun May 04, 2025 at 13:14:00 +0000, Richard Clark wrote:
vm1 | VMM: Created VCPU 0 @ 17000
vm1 | VMM[vmbus]: 'vbus' capability not found. Hardware access not possible for VM.
vm1 | VMM[main]: Hello out there.
vm1 | VMM[ASM]: Sys Info:
vm1 |   vBus: 0
vm1 |   DMA devs: 0
vm1 |   IO-MMU: 0
vm1 |   Identity forced: 0
vm1 |   DMA phys addr: 0
vm1 |   DT dma-ranges: 0
vm1 | VMM[ASM]: Operating mode: No DMA
vm1 | VMM[ram]: RAM not set up for DMA.
vm2 | VMM: Created VCPU 0 @ 17000
vm2 | VMM[vmbus]: 'vbus' capability not found. Hardware access not possible for VM.
vm2 | VMM[main]: Hello out there.
vm2 | VMM[ASM]: Sys Info:
vm2 |   vBus: 0
vm2 |   DMA devs: 0
vm2 |   IO-MMU: 0
vm2 |   Identity forced: 0
vm1 | VMM[ram]: RAM: @ 0x0 size=0x20000000
vm2 |   DMA phys addr: 0
vm1 | VMM[ram]: RAM: VMM local mapping @ 0x1000000
vm2 |   DT dma-ranges: 0
vm1 | VMM[ram]: RAM: VM offset=0x1000000
vm2 | VMM[ASM]: Operating mode: No DMA
vm1 | VMM[main]: Loading kernel...
vm2 | VMM[ram]: RAM not set up for DMA.
vm1 | VMM[loader]: Linux kernel detected
vm1 | VMM[file]: load: @ 0xfc400
vm1 | VMM[file]: copy in: to offset 0xfc400-0xba989f
vm1 | VMM[main]: Loading ram disk...
vm1 | VMM[ram]: load: rom/ramdisk1-amd64.rd -> 0x1fc00000
vm1 | VMM[file]: load: @ 0x1fc00000
vm1 | VMM[file]: copy in: to offset 0x1fc00000-0x1fffffff
vm1 | VMM[main]: Loaded ramdisk image rom/ramdisk1-amd64.rd to 1fc00000 (size: 00400000)
vm1 | VMM[PIC]: Hello, Legacy_pic
vm2 | VMM[ram]: RAM: @ 0x0 size=0x20000000
vm2 | VMM[ram]: RAM: VMM local mapping @ 0x1000000
vm2 | VMM[ram]: RAM: VM offset=0x1000000
vm2 | VMM[main]: Loading kernel...
vm1 | VMM: acpi_platform: Failed to get property 'l4vmm,pwrinput': FDT_ERR_NOTFOUND
vm2 | VMM[loader]: Linux kernel detected
vm1 | VMM: Creating Acpi_platform
vm2 | VMM[file]: load: @ 0xfc400
vm2 | VMM[file]: copy in: to offset 0xfc400-0xba989f
vm1 | VMM[ACPI]: Acpi timer @ 0xb008
vm1 | VMM[RTC]: Hello from RTC. Irq=8
vm2 | VMM[main]: Loading ram disk...
vm1 | VMM[uart_8250]: Create virtual 8250 console
vm2 | VMM[ram]: load: rom/ramdisk2-amd64.rd -> 0x1fc00000
vm2 | VMM[file]: load: @ 0x1fc00000
vm2 | VMM[file]: copy in: to offset 0x1fc00000-0x1fffffff
vm2 | VMM[main]: Loaded ramdisk image rom/ramdisk2-amd64.rd to 1fc00000 (size: 00400000)
vm1 | VMM: l4rtc.l4vmm,rtccap: capability rtc is invalid.
vm1 | VMM[RTC]: l4vmm,rtccap not valid. Will not have wallclock time.
vm1 | VMM[vm]: Device creation for virtual device l4rtc failed. Disabling device.
vm1 | VMM: isa_debugport.l4vmm,vcon_cap: capability debug is invalid.
vm1 | VMM[vm]: Device creation for virtual device isa_debugport failed. Disabling device.
vm2 | VMM[PIC]: Hello, Legacy_pic
vm1 | VMM[PCI bus]: Creating host bridge
vm1 | VMM[Pci_window_alloc]: Init PCI window with range [0x6000, 0xffff IO]
vm1 | VMM[Pci_window_alloc]: Init PCI window with range [0xaa000000, 0xaaffffff MMIO32]
vm1 | VMM[Pci_window_alloc]: Init PCI window with range [0x300000000, 0x3ffffffff MMIO64]
vm1 | VMM[PCI bus]: Registering PCI device 00:00.0
vm2 | VMM: acpi_platform: Failed to get property 'l4vmm,pwrinput': FDT_ERR_NOTFOUND
vm2 | VMM: Creating Acpi_platform
vm1 | VMM[guest]: New mmio mapping: @ b0000000 10000000
vm1 | VMM[PCI bus]: Created & Registered the PCI host bridge
vm2 | VMM[ACPI]: Acpi timer @ 0xb008
vm2 | VMM[RTC]: Hello from RTC. Irq=8
vm1 | VMM[VIO Cons]: Create virtual PCI console
vm2 | VMM[uart_8250]: Create virtual 8250 console
vm1 | VMM[Pci_bridge_windows]: [MMIO32] allocated [0xaa000000, 0xaa001fff]
vm1 | VMM[Virt PCI dev]: bar[0] addr=0x0 size=0x2000 type=mmio32 (non-prefetchable)
vm1 | VMM[Pci_bridge_windows]: [IO] allocated [0x6000, 0x607f]
vm1 | VMM[Virt PCI dev]: bar[1] addr=0x0 size=0x80 type=io
vm2 | VMM: l4rtc.l4vmm,rtccap: capability rtc is invalid.
vm2 | VMM[RTC]: l4vmm,rtccap not valid. Will not have wallclock time.
vm1 | VMM[PCI bus]: Registering PCI device 00:01.0
vm2 | VMM[vm]: Device creation for virtual device l4rtc failed. Disabling device.
vm1 | VMM[VIO Cons]: Console: 0x186b0
vm1 | VMM[VIO proxy]: Creating proxy
vm2 | VMM: isa_debugport.l4vmm,vcon_cap: capability debug is invalid.
vm2 | VMM[vm]: Device creation for virtual device isa_debugport failed. Disabling device.
vm1 | VMM[Pci_bridge_windows]: [MMIO32] allocated [0xaa002000, 0xaa003fff]
vm1 | VMM[Virt PCI dev]: bar[0] addr=0x0 size=0x2000 type=mmio32 (non-prefetchable)
vm1 | VMM[Pci_bridge_windows]: [IO] allocated [0x6080, 0x60ff]
vm1 | VMM[Virt PCI dev]: bar[1] addr=0x0 size=0x80 type=io
p2p | Registering dataspace from 0x0 with 524288 KiB, offset 0x0
p2p | PORT[0x15d70]: DMA guest [0-1fffffff] local [600000-205fffff] offset 0
vm2 | VMM[PCI bus]: Creating host bridge
p2p | register client: host IRQ: 420010 config DS: 41d000
vm2 | VMM[Pci_window_alloc]: Init PCI window with range [0x6000, 0xffff IO]
vm2 | VMM[Pci_window_alloc]: Init PCI window with range [0xaa000000, 0xaaffffff MMIO32]
vm2 | VMM[Pci_window_alloc]: Init PCI window with range [0x300000000, 0x3ffffffff MMIO64]
vm2 | VMM[PCI bus]: Registering PCI device 00:00.0
vm1 | VMM[PCI bus]: Registering PCI device 00:02.0
vm1 | VMM[VIO proxy]: Creating proxy
vm1 | VMM: virtio_disk@2.l4vmm,virtiocap: capability qdrv is invalid.
vm1 | VMM[vm]: Device creation for virtual device virtio_disk@2 failed. Disabling device.
vm2 | VMM[guest]: New mmio mapping: @ b0000000 10000000
vm1 | VMM: rom@ffc84000.l4vmm,dscap: capability bios_code is invalid.
vm2 | VMM[PCI bus]: Created & Registered the PCI host bridge
vm1 | VMM[ROM]: Missing 'l4vmm,dscap' property!
vm1 | VMM[vm]: Device creation for virtual device rom@ffc84000 failed. Disabling device.
vm2 | VMM[VIO Cons]: Create virtual PCI console
vm1 | VMM: nvm@ffc00000.l4vmm,dscap: capability bios_vars is invalid.
vm1 | VMM[CFI]: Missing 'l4vmm,dscap' property!
vm1 | VMM[vm]: Device creation for virtual device nvm@ffc00000 failed. Disabling device.
vm2 | VMM[Pci_bridge_windows]: [MMIO32] allocated [0xaa000000, 0xaa001fff]
vm2 | VMM[Virt PCI dev]: bar[0] addr=0x0 size=0x2000 type=mmio32 (non-prefetchable)
vm2 | VMM[Pci_bridge_windows]: [IO] allocated [0x6000, 0x607f]
vm2 | VMM[Virt PCI dev]: bar[1] addr=0x0 size=0x80 type=io
vm2 | VMM[PCI bus]: Registering PCI device 00:01.0
vm1 | VMM: Created VCPU 1 @ 23000
vm2 | VMM[VIO Cons]: Console: 0x186b0
vm1 | VMM[ram]: Cleaning caches for device tree [20bfe000-20bffb80] ([1fbfe000])
vm2 | VMM[VIO proxy]: Creating proxy
vm1 | VMM: reschedule(): Initiating cpu startup for cap 0x418000/core 0
vm2 | VMM[Pci_bridge_windows]: [MMIO32] allocated [0xaa002000, 0xaa003fff]
vm2 | VMM[Virt PCI dev]: bar[0] addr=0x0 size=0x2000 type=mmio32 (non-prefetchable)
vm2 | VMM[Pci_bridge_windows]: [IO] allocated [0x6080, 0x60ff]
vm2 | VMM[Virt PCI dev]: bar[1] addr=0x0 size=0x80 type=io
p2p | Registering dataspace from 0x0 with 524288 KiB, offset 0x0
p2p | PORT[0x15e80]: DMA guest [0-1fffffff] local [20600000-405fffff] offset 0
p2p | register client: host IRQ: 420010 config DS: 41e000
vm1 | VMM[ACPI]: Initialize legacy BIOS ACPI tables.
vm2 | VMM[PCI bus]: Registering PCI device 00:02.0
vm1 | VMM: Zeropage @ 0x1000, Kernel @ 0xfc400
vm2 | VMM[VIO proxy]: Creating proxy
vm1 | VMM: Cmd_line: console=hvc0 ramdisk_size=10000 root=/dev/ram0 rw
vm2 | VMM: virtio_disk@2.l4vmm,virtiocap: capability qdrv is invalid.
vm1 | VMM: cmdline check: console=hvc0 ramdisk_size=10000 root=/dev/ram0 rw
vm2 | VMM[vm]: Device creation for virtual device virtio_disk@2 failed. Disabling device.
vm1 | VMM[vmmap]: VM map:
vm1 | VMM[vmmap]: [ 0:1fffffff]: Ram
vm1 | VMM[vmmap]: [b0000000:bfffffff]: Pci_bus_cfg_ecam
vm2 | VMM: rom@ffc84000.l4vmm,dscap: capability bios_code is invalid.
vm1 | VMM[vmmap]: [fec00000:fec00fff]: Ioapic
vm2 | VMM[ROM]: Missing 'l4vmm,dscap' property!
vm1 | VMM[vmmap]: [fee00000:fee00fff]: Lapic_access_handler
vm2 | VMM[vm]: Device creation for virtual device rom@ffc84000 failed. Disabling device.
vm1 | VMM[main]: Populating guest physical address space
vm1 | VMM[mmio]: Mapping [1000000 - 20ffffff] -> [0 - 1fffffff]
vm2 | VMM: nvm@ffc00000.l4vmm,dscap: capability bios_vars is invalid.
vm1 | VMM[vmmap]: IOport map:
vm2 | VMM[CFI]: Missing 'l4vmm,dscap' property!
vm1 | VMM[vmmap]: [ 20: 21]: PIC
vm2 | VMM[vm]: Device creation for virtual device nvm@ffc00000 failed. Disabling device.
vm1 | VMM[vmmap]: [ 40: 43]: PIT
vm1 | VMM[vmmap]: [ 61: 61]: PIT port 61
vm1 | VMM[vmmap]: [ 70: 71]: RTC
vm1 | VMM[vmmap]: [ a0: a1]: PIC
vm1 | VMM[vmmap]: [ 3f8: 3ff]: UART 8250
vm1 | VMM[vmmap]: [ 510: 51b]: Firmware interface
vm1 | VMM[vmmap]: [ cf8: cff]: PCI bus cfg
vm1 | VMM[vmmap]: [1800:1808]: ACPI platform
vm2 | VMM: Created VCPU 1 @ 23000
vm1 | VMM[vmmap]: [b008:b008]: ACPI Timer
vm2 | VMM[ram]: Cleaning caches for device tree [20bfe000-20bffb80] ([1fbfe000])
vm1 | VMM[guest]: Starting VMM @ 0x100000
vm2 | VMM: reschedule(): Initiating cpu startup for cap 0x418000/core 0
vm1 | VMM[Cpu_dev]: [ 0] Reset called
vm1 | VMM[Cpu_dev]: [ 0] Resetting vCPU.
vm1 | VMM: Hello clock source for vCPU 0
---------------------------------------------------------------------
CPU 2 [fffffffff006a176]: General Protection (ERR=0000000000000000)
CPU(s) 0-5 entered JDB
jdb:
Adam,
Yes, Linux and QEMU+KVM. I think I pulled the code Tuesday or Wednesday... I will check out the new code and try it again.
But that brings me to a bigger question: how do I fetch only Long-Term-Support or Fully-Tested-and-Blessed versions? GitHub is woefully lacking in proper version support. I can't send random, untested code to my customers.
Richard
Hi Richard,
All code we push to GitHub has gone through our internal QA process: compile checks for all our supported architectures as well as an extensive test suite on different platforms and configurations. So from that point of view, I would say you can consider all code pushed to GitHub as “Fully-Tested-and-Blessed”.
However, we obviously cannot test every combination of configuration options on every piece of hardware.
Kernkonzept offers commercial support for these cases: we provide a dedicated delivery pipeline tailored to the customer's use case and hardware, with testing that ensures the concrete customer use case works flawlessly on the relevant hardware for every software release. Please contact sales@kernkonzept.com for quotes or to discuss the details of such an arrangement.
Best regards,
- Marcus Hähnel
  Principal Engineering Lead
On Monday, 12 May 2025 15:09:52 CEST Marcus Hähnel wrote:
All code we push to GitHub has gone through our internal QA process: compile checks for all our supported architectures as well as an extensive test suite on different platforms and configurations. So from that point of view, I would say you can consider all code pushed to GitHub as “Fully-Tested-and-Blessed”.
I don't have a strong opinion about quality assurance for people wanting to provide solutions for paying customers, but I have personally wondered how I might successfully and conveniently reproduce repository configurations when creating new L4Re development environments.
For example, if I decide to work on support for a new board, I might want to replicate the L4Re configuration I have been using for another board. Starting from scratch, it was possible to use the ham tool, but despite it apparently maintaining version details for the different repositories, it wasn't particularly clear how one might preserve or export that metadata for further use.
I now see that there is another tool involved:
https://l4re.org/getting_started/bob.html
Although that doesn't seem to replace the ham tool:
https://l4re.org/getting_started/make.html
Naturally, one might say that this is the point at which anyone serious enough about using L4Re would get in touch with Kernkonzept and start talking business, but such a lack of clarity tends to suggest either that there aren't particularly adequate solutions for such fundamental needs or that any adequate solutions that do exist aren't for people merely investigating or evaluating the technology.
Again, it isn't my concern if there's a business decision involved that everybody feels comfortable with, and if there's a steady stream of interested customers that seems to justify such a decision, but I could easily see potential users going elsewhere if the answer to simple questions is "talk to us". Even in reasonably large organisations, hitting an approval barrier that "talk to us" or "register your interest" represents can be a strong disincentive, especially if other solutions exist.
I accept that my opinion isn't important, however, since my own activities are confined to my own interests and driven by a general belief that L4Re represents a reasonable foundation for certain kinds of systems. That there isn't exactly much of a public community around L4Re could also be regarded as a disincentive for potential adopters, which is unfortunate.
Paul
Hi Paul,
thank you very much for your thoughtful message — it's really appreciated.
First of all: your opinion absolutely matters. The input from users like you, who engage deeply and share candid feedback, helps us make L4Re better. While we are aware that L4Re is still quite niche and the community small, we want to support it as best we can and would love to see it grow.
I think you're raising a different, but equally important, point compared to what Richard was asking. My original response focused on the idea of providing a fully tested release for a specific combination of software configuration and hardware, which is understandably difficult for us to maintain as part of the open-source offering given our limited resources.
Your concern — about being able to reproduce a known working state of your development environment — is much more fundamental. You're right: this should be straightforward, and if it's not, then it's something we want to improve. Ease of use and accessibility are important to us, and sometimes we’re just too close to the system to see where friction arises — so thank you again for pointing this out.
To clarify one key point: we don’t deliberately withhold features or usability improvements from the open-source version of L4Re to push people into commercial contact. That’s not our business philosophy. In fact, that would go directly against our goal of getting L4Re into more hands and making it easier to work with.
Some of the convenience features — like release tagging — do exist in our customer repositories, but it’s more of a workflow habit than a conscious decision to exclude them from GitHub. No one had brought up the need for that kind of reproducibility in the open repo so far — and now that you have, let’s fix it.
Would something like weekly tags on GitHub help you? For example, a tag like `l4re-2025-05-14` that you could use with `ham checkout` to reproduce that specific state?
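As a rough sketch of how that might look (assuming the usual manifest repository and that ham resolves such tags; the exact command details may differ):

  ham init -u https://github.com/kernkonzept/manifest
  ham sync
  ham checkout l4re-2025-05-14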
Also, ham already supports pinned revisions in the manifest (`revision` attribute in `project` tags), so you can share a complete and reproducible state that way as well. But I agree that this could be made more convenient.
One possible improvement could be a `ham create-pinned-manifest` sub-command that generates such a manifest from your current state. That’s not trivial — it would require resolving different remotes and checking that all commits are actually reachable in one of the remotes — but it’s definitely doable. If you're interested, feel free to open a proposal or even an issue on the ham GitHub repository — we’d love to hear your thoughts or collaborate on a solution.
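Until such a sub-command exists, a crude approximation (a sketch only, deliberately ignoring the remote-resolution problem mentioned above; everything beyond the `project`/`revision` naming is an assumption) would be a shell loop over the checked-out repositories:

  # emit one pinned <project> entry per repository in the current tree
  for d in */ ; do
    repo=${d%/}
    echo "<project name=\"$repo\" revision=\"$(git -C "$repo" rev-parse HEAD)\" />"
  done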
Would any of these ideas help in your workflow? Do you have something else in mind? We're always happy to improve L4Re together with the people who use it.
Best regards,
- Marcus Hähnel
  Principal Engineering Lead
Marcus,
I, for one, rather than having to learn yet another tool that doesn't quite do what I want, would prefer that you have a fully-blessed-and-tested release all rolled up in a tar (or zip) file for download. You could even have a git repo just for the tar files. One caveat: it must contain ALL the repos and all the little pieces that could possibly go with them. One thing I hate about git is sub-repos that get lost or are not pulled down and cause severe headaches trying to match versions, like lwip, or virtio_switch, or any of the other couple dozen packages that don't get pulled down with the new ham build process. I'd like to be able to go to one place, use a command I already know, and get a fully functional, fully populated, "blessed" version. You do have your old downloads site with a tgz file, but even that doesn't contain all the packages any more. This is the frustrating part: I need a way to get a snapshot of everything that could possibly go together in a release, whether I need it or not, because some day I will need it, and by then the version I need will no longer be available. So, not a "demo" or "example" version, but a full-source, everything-including-the-kitchen-sink version.
Thank you for considering our opinions!
Richard
Hi everyone,
Richard's wish for "a fully-blessed-and-tested release all rolled up in a tar (or zip) file for download" seems like a suitable solution for one-off cases. However, I believe it may lead to significant overhead when maintaining the final product, as it would require manually tracking the history of each package and compiling change logs.
In my opinion, it's sometimes worth putting in a bit more effort at the beginning to ensure a smoother path forward later on. :)
I’m just an outside observer, but I’d like to share my proposed solution in the hope that it might help address some of the challenges you're facing.
1) Using the project manifest format (github.com/kernkonzept/manifest) aligns well with the repo tool (gerrit.googlesource.com/git-repo), which is used by AOSP and several other large projects consisting of 100+ sub-projects, each with its own commit history. In practice, this tool could also be applied for version management in the current project. repo also has broader documentation and more usage examples available online, which could be beneficial.
2) With minimal changes, the manifest could also be used to create local versions of repositories. For example, the following commands initialize a local mirror based on a manifest (my personal example):

  repo init --mirror -b repotool_support -u https://github.com/EugeniyKozhanov/l4re_manifest.git
  repo sync -j9
3) Additionally, you can create mirrors for each release and use them to clone and rebuild the entire project tree with a single command (see the sketch after this list). This approach is almost like using .tar archives, but with the advantage of preserving full commit histories and enabling centralized synchronization across all projects.
4) You can also centrally tag your set of subprojects in your local mirror. This makes it easy to mark consistent states across all components of your system and later reference or rebuild them as needed, with full traceability:

  repo forall -c 'git tag -a v2025.05 -m "Release tag for May 2025"'
  repo forall -c 'git push origin v2025.05'
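As a sketch of point 3 (the mirror path and tag name here are placeholders), a complete source tree could later be recreated from such a mirror with:

  repo init -u /path/to/mirror/l4re_manifest.git -b v2025.05
  repo sync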
P.S. It's possible that ham is also capable of doing all of this. But I must admit that even after trying to understand its philosophy, I found it very difficult to use without extensive documentation, examples, or community support. I even had to recall some long-forgotten Perl skills, which I was really hoping to avoid. I worry that many newcomers might be discouraged from even starting with it if they can't easily understand how it works or install the missing dependencies on their systems (https://github.com/EugeniyKozhanov/ham/pull/1/files).
Thanks!
Best regards,
Eugeniy
Am Do., 15. Mai 2025 um 00:34 Uhr schrieb Richard Clark < richard.clark@coheretechnology.us>:
Marcus,
I, for one, rather than having to learn yet-another-tool that doesn't quite do what I want, would rather that you have a fully-blessed-and-tested release all rolled up in a tar (or zip) file for download. You could even have a git repo just for the tar files. One caveat, being that it must contain ALL the repos and possible little pieces that could possibly go with it. One thing I hate about git is sub-repos that get lost or are not pulled down and cause severe headaches trying to match versions. Like lwip, or virtio_switch, or any of the other couple dozen packages that don't get pulled down with the new ham build process. I'd like to be able to go to one place, use a command I already know, and get a fully functional, fully populated, "blessed" version. You do have your old downloads site with a tgz file, but even that doesn't contain all the packages any more. This is the frustrating part. I need a way to get a snapshot of everything that could possibly go together in a release whether I need it or not, because some day I will need it and then the version I need will no longer be available. So, not a "demo" or "example" version, but a full-source-everything-including-the-kitchen-sink version.
Thank you for considering our opinions!
Richard
-----Original Message----- From: Marcus Hähnel support@kernkonzept.com Sent: Wednesday, May 14, 2025 10:53 AM To: Paul Boddie paul@boddie.org.uk; l4-hackers@os.inf.tu-dresden.de Subject: Re: Configuration, component and repository versioning (was Re: Upgrade issues. VM won't start.)
On Mon, 2025-05-12 at 22:37 +0200, Paul Boddie wrote:
On Monday, 12 May 2025 15:09:52 CEST Marcus Hähnel wrote:
On Mon, 2025-05-05 at 11:25 +0000, Richard Clark wrote:
But that brings me to a bigger question. How do I fetch only Long-Term-Support or Fully-Tested-and-Blessed versions? Github is woefully lacking in proper version support. I can't send random untested code to my customers.
All code we push to Github went through our internal QA process, running compile checks for all our supported architectures as well as an extensive test suite on different platforms and configurations. So from that point of view I would say you can consider all code pushed to Github as “Fully-Tested-and-Blessed”.
I don't have a strong opinion about quality assurance for people wanting to provide solutions for paying customers, but I have personally wondered how I might successfully and conveniently reproduce repository configurations when creating new L4Re development environments.
For example, if I decide to work on support for a new board, I might want to replicate the L4Re configuration I have been using for another board. Starting from scratch, it was possible to use the ham tool, but despite it apparently maintaining version details for the different repositories, it wasn't particularly clear how one might preserve or export that metadata for further use.
I now see that there is another tool involved:
https://l4re.org/getting_started/bob.html
Although that doesn't seem to replace the ham tool:
https://l4re.org/getting_started/make.html
Naturally, one might say that this is the point at which anyone serious enough about using L4Re would get in touch with Kernkonzept and start talking business, but such a lack of clarity tends to suggest that either there aren't particularly adequate solutions for such fundamental needs or that any adequate solutions that may exist aren't for people merely investigating or evaluating the technology.
Again, it isn't my concern if there's a business decision involved that everybody feels comfortable with, and if there's a steady stream of interested customers that seems to justify such a decision, but I could easily see potential users going elsewhere if the answer to simple questions is "talk to us". Even in reasonably large organisations, hitting an approval barrier that "talk to us" or "register your interest" represents can be a strong disincentive, especially if other solutions exist.
I accept that my opinion isn't important, however, since my own activities are confined to my own interests and driven by a general belief that L4Re represents a reasonable foundation for certain kinds of systems. That there isn't exactly much of a public community around L4Re could also be regarded as a disincentive for potential adopters, which is unfortunate.
Paul
Hi Paul,
thank you very much for your thoughtful message — it's really appreciated.
First of all: your opinion absolutely matters. The input from users like you, who engage deeply and share candid feedback, helps us make L4Re better. While we are aware that L4Re is still quite niche and the community small, we want to support it as best we can and would love to see it grow.
I think you're raising a different, but equally important, point compared to what Richard was asking. My original response focused on the idea of providing a fully tested release for a specific combination of software configuration and hardware, which is understandably difficult for us to maintain as part of the open-source offering given our limited resources.
Your concern — about being able to reproduce a known working state of your development environment — is much more fundamental. You're right: this should be straightforward, and if it's not, then it's something we want to improve. Ease of use and accessibility are important to us, and sometimes we’re just too close to the system to see where friction arises — so thank you again for pointing this out.
To clarify one key point: we don’t deliberately withhold features or usability improvements from the open-source version of L4Re to push people into commercial contact. That’s not our business philosophy. In fact, that would go directly against our goal of getting L4Re into more hands and making it easier to work with.
Some of the convenience features — like release tagging — do exist in our customer repositories, but it’s more of a workflow habit than a conscious decision to exclude them from GitHub. No one had brought up the need for that kind of reproducibility in the open repo so far — and now that you have, let’s fix it.
Would something like weekly tags on GitHub help you? For example, a tag like `l4re-2025-05-14` that you could use with `ham checkout` to reproduce that specific state?
Also, ham already supports pinned revisions in the manifest (`revision` attribute in `project` tags), so you can share a complete and reproducible state that way as well. But I agree that this could be made more convenient.
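For illustration, since the manifest format is repo-like, a pinned entry might look roughly like the following; the remote layout, project name, path, and revision hash here are made-up placeholders rather than an excerpt from the real manifest:

   <manifest>
     <remote name="kernkonzept" fetch="https://github.com/kernkonzept"/>
     <default remote="kernkonzept" revision="master"/>
     <!-- the revision attribute pins the project to an exact commit -->
     <project name="mk" path="src/l4" revision="0123456789abcdef0123456789abcdef01234567"/>
   </manifest>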
One possible improvement could be a `ham create-pinned-manifest` sub-command that generates such a manifest from your current state. That’s not trivial — it would require resolving different remotes and checking that all commits are actually reachable in one of the remotes — but it’s definitely doable. If you're interested, feel free to open a proposal or even an issue on the ham GitHub repository — we’d love to hear your thoughts or collaborate on a solution.
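As a very rough sketch of the easy half of that idea (deliberately ignoring the hard part, i.e. resolving the different remotes and checking that the commits are actually reachable there), something like the following could emit pinned project entries from an existing checkout; the directory layout it assumes is hypothetical:

   # Walk all git checkouts below the current tree and print a pinned
   # <project> entry for each. Paths containing spaces are not handled.
   for g in $(find . -type d -name .git -prune); do
     d=${g%/.git}
     rev=$(git -C "$d" rev-parse HEAD)
     echo "<project name=\"$(basename "$d")\" path=\"${d#./}\" revision=\"$rev\"/>"
   done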
Would any of these ideas help in your workflow? Do you have something else in mind? We're always happy to improve L4Re together with the people who use it.
Best regards,
- Marcus Hähnel Principal Engineering Lead
On Thursday, 15 May 2025 12:57:53 CEST Eugeniy Kozhanov wrote:
I’m just an outside observer, but I’d like to share my proposed solution in the hope that it might help address some of the challenges you're facing.
- using a project manifest format (github.com/kernkonzept/manifest) aligns well with the repo tool (gerrit.googlesource.com/git-repo), which is used by AOSP and several other large projects consisting of 100+ sub-projects, each with its own commit history, etc. In practice, we could explore applying this tool for version management in our current project as well. repo also has broader documentation and more usage examples available online, which could be beneficial.
I seem to remember L4Re using a different tool that I thought was repo, but just checking in some old source distributions, I see that it was a tool called repomgr, written in Perl, that wrapped Subversion in a way that might be comparable to what repo does for Git.
Another benefit of repo appears to be its availability in distributions like Debian, so that there is no need to download another tool. Unless I am getting confused with repomgr again, I seem to remember encountering some projects where users were encouraged to wget/curl the repo tool from some URL, which then requires the user to inspect the source code for potentially undesirable behaviour.
(There can sometimes be hostility from developers when one questions whether a tool can be trusted, particularly if those developers do not perceive value in software distributions, but even if the intentions of the developers are pure, it doesn't always prevent them from having some very strange ideas about what a program should be doing. In some cases, such ideas may not entirely align with responsible practice.)
[Overview of repo capabilities]
Thanks for this brief overview!
*P.S.* It's possible that *Hammer* is also capable of doing all of this. But I must admit — even after trying to understand its philosophy, I found it very difficult to use without extensive documentation, examples, or community support. I even had to recall some long-forgotten Perl skills, which I was really hoping to avoid. I worry that many newcomers might be discouraged from even starting with it if they can’t easily understand how it works or install the missing dependencies on their systems ( https://github.com/EugeniyKozhanov/ham/pull/1/files).
I am someone who likes to develop my own tools when I perceive that the existing ones do not perform in the way I would like. However, beyond ergonomic considerations of a tool itself, I think it also helps to use technologies that many people understand or are able to learn and use effectively.
In this case, there is a trade-off between what the original developers are comfortable with and what potential contributors are able to use, which one also sees in projects like Debian where a lot of the tools happen to be written in Perl. I also used Perl in a professional environment about thirty years ago, revisiting it on brief occasions to maintain legacy systems, but I would hesitate to interact with a Perl code base now.
Thank you for your perspectives on these issues! Don't worry about being an outside observer: that is also my position. Hopefully, you might become familiar with the technology and a contributor to the broader community.
Paul
Hi,
- using a project manifest format (github.com/kernkonzept/manifest) aligns well with the repo tool (gerrit.googlesource.com/git-repo), which is used by AOSP and several other large projects consisting of 100+ sub-projects, each with its own commit history, etc. In practice, we could explore applying this tool for version management in our current project as well. repo also has broader documentation and more usage examples available online, which could be beneficial.
- with minimal changes, the manifest could also be used to create local versions of repositories. For example, the following commands can be used to initialize a local mirror based on a manifest (my personal example):

   repo init --mirror -b repotool_support -u https://github.com/EugeniyKozhanov/l4re_manifest.git
   repo sync -j9
Indeed ham is modeled after repo and we try to keep the format that ham accepts somewhat compatible with repo where that's feasible. So one should be able to use our manifests with repo, though we don't test it ourselves right now as far as I am aware.
Ham has some additional features which are primarily valuable for our internal use and for dealing with specific QA / CI use-cases and customer-specific requirements.
[…] *P.S.* It's possible that *Hammer* is also capable of doing all of this. But I must admit — even after trying to understand its philosophy, I found it very difficult to use without extensive documentation, examples, or community support. I even had to recall some long-forgotten Perl skills, which I was really hoping to avoid. I worry that many newcomers might be discouraged from even starting with it if they can’t easily understand how it works or install the missing dependencies on their systems ( https://github.com/EugeniyKozhanov/ham/pull/1/files).
Thanks for that valuable input! We might consider switching the explanations to `repo` in the tutorials, if people find it easier to use and are more comfortable with that tool. I'll bring it up with our release engineers.
Best regards,
- Marcus
On Wednesday, 14 May 2025 16:52:48 CEST Marcus Hähnel wrote:
thank you very much for your thoughtful message — it's really appreciated.
Thank you for taking the time to respond! I didn't really expect such a comprehensive reply, so it is much appreciated.
First of all: your opinion absolutely matters. The input from users like you, who engage deeply and share candid feedback, helps us make L4Re better. While we are aware that L4Re is still quite niche and the community small, we want to support it as best we can and would love to see it grow.
It is reassuring to hear that you take feedback into consideration, even from those of us who are not directly contributing to your work or, indeed, your business.
I think you're raising a different, but equally important, point compared to what Richard was asking. My original response focused on the idea of providing a fully tested release for a specific combination of software configuration and hardware, which is understandably difficult for us to maintain as part of the open-source offering given our limited resources.
I completely agree that if someone is in need of something that requires an investment of time and resources, then they should either invest their own time and resources, which is effectively what I have done over the last few years, or they should be prepared to compensate others for that work. Nobody should be required to do something for someone else for nothing - there is already too much of that happening in the open source world - not that any such request was made, I should add.
Productive access to specific hardware configurations has always been a challenge for software environment developers, and the collaboration required to deliver robust support for a given hardware platform can be difficult to coordinate. Even with today's hardware, mostly inexpensive from a historical perspective, there is still a substantial cost in developing the corresponding software support.
Your concern — about being able to reproduce a known working state of your development environment — is much more fundamental. You're right: this should be straightforward, and if it's not, then it's something we want to improve. Ease of use and accessibility are important to us, and sometimes we’re just too close to the system to see where friction arises — so thank you again for pointing this out.
To clarify one key point: we don’t deliberately withhold features or usability improvements from the open-source version of L4Re to push people into commercial contact. That’s not our business philosophy. In fact, that would go directly against our goal of getting L4Re into more hands and making it easier to work with.
I think it is fine to encourage people to make contact and to discuss opportunities, and I don't seriously believe that people are being herded in the direction of commercial support, but any kind of obstacle or hurdle in the independent evaluation of a technology can be dissuasive. Some evaluators are more likely to move on and look at other things than to cultivate some kind of dialogue.
This doesn't only happen in a commercial context. On plenty of occasions in academia, I saw researchers encouraging others to get in touch, in order to work around deficiencies in the way they had communicated or published their work. I also saw the long-term effects of this, where people could not account for what they had published after only a few years, with this emerging when I was actually making contact with them to try and distribute their originally published data.
Empowering other people as much as possible or practicable may eventually return such favours. When I return to my own work after a substantial period of time, it is almost as if I am just another person taking a look at it, too.
Some of the convenience features — like release tagging — do exist in our customer repositories, but it’s more of a workflow habit than a conscious decision to exclude them from GitHub. No one had brought up the need for that kind of reproducibility in the open repo so far — and now that you have, let’s fix it.
I can understand that it can be easy to overlook. How many times has one seen missing tags in public repositories because Git makes it easy to forget to push them? I also understand that publicly tagging releases can make mistakes difficult to rectify, but I suppose this is just another hazard of release management, and eventually we all get used to making "patch" releases.
Would something like weekly tags on GitHub help you? For example, a tag like `l4re-2025-05-14` that you could use with `ham checkout` to reproduce that specific state?
It might, but I think the challenge is then applying such tags to a large number of repositories. Naturally, one can write scripts to synchronise all those repositories, but isn't that what ham is supposed to achieve?
Richard also mentioned various other packages that ham doesn't cover, and having been working with the software for a long time now, there are still various packages that I have needed to retrieve from the Subversion repository for L4Re because they were never migrated to GitHub. I can understand that those packages are no longer a focus, and I have even eliminated some of them from my own environment, but they might still be used by the L4Re demonstrations.
Also, ham already supports pinned revisions in the manifest (`revision` attribute in `project` tags), so you can share a complete and reproducible state that way as well. But I agree that this could be made more convenient.
Unfortunately, I never discovered this, but I assumed that you must have a way to capture the state of repositories in order to reproduce releases required by customers.
One possible improvement could be a `ham create-pinned-manifest` sub-command that generates such a manifest from your current state. That’s not trivial — it would require resolving different remotes and checking that all commits are actually reachable in one of the remotes — but it’s definitely doable. If you're interested, feel free to open a proposal or even an issue on the ham GitHub repository — we’d love to hear your thoughts or collaborate on a solution.
It is something I could consider working on, but it would be joining a long list of other tasks.
Would any of these ideas help in your workflow? Do you have something else in mind? We're always happy to improve L4Re together with the people who use it.
I think they would be helpful. Previously, with Subversion, one could request a given release and get a versioned distribution of the software. The disadvantages were the poor performance of Subversion and the awkwardness of managing independent changes, as we all know. But that simplicity was also very useful.
Thanks once again for following up and giving my concerns your consideration!
Paul
Hi Paul,
thanks also to you for the valuable feedback, including with regard to ham in the other part of the thread. We're sure to take all of it into consideration! Regarding some other points:
On Wed, 2025-05-14 at 22:01 +0200, Paul Boddie wrote:
On Wednesday, 14 May 2025 16:52:48 CEST Marcus Hähnel wrote:
Some of the convenience features — like release tagging — do exist in our customer repositories, but it’s more of a workflow habit than a conscious decision to exclude them from GitHub. No one had brought up the need for that kind of reproducibility in the open repo so far — and now that you have, let’s fix it.
I can understand that it can be easy to overlook. How many times has one seen missing tags in public repositories because Git makes it easy to forget to push them? I also understand that publicly tagging releases can make mistakes difficult to rectify, but I suppose this is just another hazard of release management, and eventually we all get used to making "patch" releases.
I talked to our release engineers and we'll try to integrate this into our release process in the future, such that specific source states that ran through our QA will be tagged accordingly, also on GitHub. Indeed internally we already have them tagged, but these tags aren't transferred over to GitHub.
Would something like weekly tags on GitHub help you? For example, a tag like `l4re-2025-05-14` that you could use with `ham checkout` to reproduce that specific state?
It might, but I think the challenge is then applying such tags to a large number of repositories. Naturally, one can write scripts to synchronise all those repositories, but isn't that what ham is supposed to achieve?
As mentioned above we already do this :) We need to test a consistent state anyways, so it is easy to tag the tested state. And checking them out is indeed as easy as calling `ham checkout name-of-the-tag`.
Richard also mentioned various other packages that ham doesn't cover, and having been working with the software for a long time now, there are still various packages that I have needed to retrieve from the Subversion repository for L4Re because they were never migrated to GitHub. I can understand that those packages are no longer a focus, and I have even eliminated some of them from my own environment, but they might still be used by the L4Re demonstrations.
We'll try to migrate as much of this over to GitHub in the future as we can reasonably support. If you want to have your own repositories, you can also define multiple remotes and then reference them from the project tags via the remote attribute. Again, this is similar to how repo does it.
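As a small sketch of that (the remote name, URL, and project names below are placeholders), an additional remote and a project fetched from it would look something like:

   <remote name="mine" fetch="https://github.com/your-user"/>
   ...
   <project name="my-pkg" path="src/l4/pkg/my-pkg" remote="mine"/>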
One possible improvement could be a `ham create-pinned-manifest` sub-command that generates such a manifest from your current state. That’s not trivial — it would require resolving different remotes and checking that all commits are actually reachable in one of the remotes — but it’s definitely doable. If you're interested, feel free to open a proposal or even an issue on the ham GitHub repository — we’d love to hear your thoughts or collaborate on a solution.
It is something I could consider working on, but it would be joining a long list of other tasks.
Sure :) That's always the problem, there's just too little time. But at least now the idea is out there and either someone from us will find the time to do it or hopefully someone from the community will step up. So thanks for bringing it up!
Again, thanks a lot for the input and for all your valuable contributions that you already brought to the list!
Best regards,
- Marcus
Dear Hackers,
On Mon, 2025-05-19 at 17:01 +0200, Marcus Hähnel wrote:
On Wed, 2025-05-14 at 22:01 +0200, Paul Boddie wrote:
On Wednesday, 14 May 2025 16:52:48 CEST Marcus Hähnel wrote:
Some of the convenience features — like release tagging — do exist in our customer repositories, but it’s more of a workflow habit than a conscious decision to exclude them from GitHub. No one had brought up the need for that kind of reproducibility in the open repo so far — and now that you have, let’s fix it.
I can understand that it can be easy to overlook. How many times has one seen missing tags in public repositories because Git makes it easy to forget to push them? I also understand that publicly tagging releases can make mistakes difficult to rectify, but I suppose this is just another hazard of release management, and eventually we all get used to making "patch" releases.
I talked to our release engineers and we'll try to integrate this into our release process in the future, such that specific source states that ran through our QA will be tagged accordingly, also on GitHub. Indeed internally we already have them tagged, but these tags aren't transferred over to GitHub.
We have started to integrate a process into our release pipeline to tag the releases. This should hopefully make it easier for your use-cases to get back to a known-good state for the specific setups you have. See an example here: https://github.com/kernkonzept/mk/tree/r-2025-W24
The release tag schema for now is r-yyyy-Www (where yyyy is the year, ww is the week number, and the leading r- and the W are literal). We might re-evaluate that schema if we see that it is not perfect, but for now it should be OK. Right now the latest release has been tagged as r-2025-W24 (see above) and we will make sure future releases are tagged as well.
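As an aside, assuming GNU date and ISO week numbering (which the r-2025-W24 example suggests), the tag name for the current week can be generated like this:

   date +r-%G-W%V   # %G = ISO week-based year, %V = ISO week number; e.g. r-2025-W24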
You can check out this state, for example when using ham, through:
`ham checkout r-2025-W24`
(or, to also create a local branch named my-project and avoid the detached-HEAD messages:)
`ham checkout -b my-project r-2025-W24`
Thanks a lot to Matthias Lange, who implemented this in our release pipeline.
Best regards,
- Marcus
Adam,
I did another ham sync, rebuilt fiasco, rebuilt l4, rebuilt my natives, and it's a little better? The VMs still crash, but at least the system keeps running and I can see my natives printing even after both VMs are halted. So at least the whole system doesn't crash any more. Just the VMs.
Again, yes, Debian Linux on QEMU with kvm and six cores. Cores 0 and 1 are for L4 and natives, cores 2 and 3 are for vm1, cores 4 and 5 are for vm2.
Please see attached!
Richard
vm1 | VMM: Created VCPU 0 @ 17000 vm1 | VMM[vmbus]: 'vbus' capability not found. Hardware access not possible for VM. vm1 | VMM[main]: Hello out there. vm1 | VMM[ASM]: Sys Info: vm1 | vBus: 0 vm1 | DMA devs: 0 vm1 | IO-MMU: 0 vm1 | Identity forced: 0 vm1 | DMA phys addr: 0 vm1 | DT dma-ranges: 0 vm1 | VMM[ASM]: Operating mode: No DMA vm1 | VMM[ram]: RAM not set up for DMA. vm2 | VMM: Created VCPU 0 @ 17000 vm2 | VMM[vmbus]: 'vbus' capability not found. Hardware access not possible for VM. vm2 | VMM[main]: Hello out there. vm2 | VMM[ASM]: Sys Info: vm2 | vBus: 0 vm2 | DMA devs: 0 vm2 | IO-MMU: 0 vm2 | Identity forced: 0 vm1 | VMM[ram]: RAM: @ 0x0 size=0x20000000 vm2 | DMA phys addr: 0 vm1 | VMM[ram]: RAM: VMM local mapping @ 0x1000000 vm2 | DT dma-ranges: 0 vm1 | VMM[ram]: RAM: VM offset=0x1000000 vm2 | VMM[ASM]: Operating mode: No DMA vm1 | VMM[main]: Loading kernel... vm2 | VMM[ram]: RAM not set up for DMA. vm1 | VMM[loader]: Linux kernel detected vm1 | VMM[file]: load: @ 0xfc400 vm1 | VMM[file]: copy in: to offset 0xfc400-0xba989f vm1 | VMM[main]: Loading ram disk... vm1 | VMM[ram]: load: rom/ramdisk1-amd64.rd -> 0x1fc00000 vm1 | VMM[file]: load: @ 0x1fc00000 vm1 | VMM[file]: copy in: to offset 0x1fc00000-0x1fffffff vm1 | VMM[main]: Loaded ramdisk image rom/ramdisk1-amd64.rd to 1fc00000 (size: 00400000) vm1 | VMM[PIC]: Hello, Legacy_pic vm2 | VMM[ram]: RAM: @ 0x0 size=0x20000000 vm2 | VMM[ram]: RAM: VMM local mapping @ 0x1000000 vm2 | VMM[ram]: RAM: VM offset=0x1000000 vm2 | VMM[main]: Loading kernel... vm1 | VMM: acpi_platform: Failed to get property 'l4vmm,pwrinput': FDT_ERR_NOTFOUND vm2 | VMM[loader]: Linux kernel detected vm1 | VMM: Creating Acpi_platform vm2 | VMM[file]: load: @ 0xfc400 vm2 | VMM[file]: copy in: to offset 0xfc400-0xba989f vm1 | VMM[ACPI]: Acpi timer @ 0xb008 vm1 | VMM[RTC]: Hello from RTC. Irq=8 vm2 | VMM[main]: Loading ram disk... vm1 | VMM[uart_8250]: Create virtual 8250 console vm2 | VMM[ram]: load: rom/ramdisk2-amd64.rd -> 0x1fc00000 vm2 | VMM[file]: load: @ 0x1fc00000 vm2 | VMM[file]: copy in: to offset 0x1fc00000-0x1fffffff vm2 | VMM[main]: Loaded ramdisk image rom/ramdisk2-amd64.rd to 1fc00000 (size: 00400000) vm1 | VMM: l4rtc.l4vmm,rtccap: capability rtc is invalid. vm1 | VMM[RTC]: l4vmm,rtccap not valid. Will not have wallclock time. vm1 | VMM[vm]: Device creation for virtual device l4rtc failed. Disabling device. vm1 | VMM: isa_debugport.l4vmm,vcon_cap: capability debug is invalid. vm1 | VMM[vm]: Device creation for virtual device isa_debugport failed. Disabling device. vm2 | VMM[PIC]: Hello, Legacy_pic vm1 | VMM[PCI bus]: Creating host bridge vm1 | VMM[Pci_window_alloc]: Init PCI window with range [0x6000, 0xffff IO] vm1 | VMM[Pci_window_alloc]: Init PCI window with range [0xaa000000, 0xaaffffff MMIO32] vm1 | VMM[Pci_window_alloc]: Init PCI window with range [0x300000000, 0x3ffffffff MMIO64] vm1 | VMM[PCI bus]: Registering PCI device 00:00.0 vm2 | VMM: acpi_platform: Failed to get property 'l4vmm,pwrinput': FDT_ERR_NOTFOUND vm2 | VMM: Creating Acpi_platform vm1 | VMM[guest]: New mmio mapping: @ b0000000 10000000 vm1 | VMM[PCI bus]: Created & Registered the PCI host bridge vm2 | VMM[ACPI]: Acpi timer @ 0xb008 vm2 | VMM[RTC]: Hello from RTC. 
Irq=8 vm1 | VMM[VIO Cons]: Create virtual PCI console vm2 | VMM[uart_8250]: Create virtual 8250 console vm1 | VMM[Pci_bridge_windows]: [MMIO32] allocated [0xaa000000, 0xaa001fff] vm1 | VMM[Virt PCI dev]: bar[0] addr=0x0 size=0x2000 type=mmio32 (non-prefetchable) vm1 | VMM[Pci_bridge_windows]: [IO] allocated [0x6000, 0x607f] vm1 | VMM[Virt PCI dev]: bar[1] addr=0x0 size=0x80 type=io vm2 | VMM: l4rtc.l4vmm,rtccap: capability rtc is invalid. vm2 | VMM[RTC]: l4vmm,rtccap not valid. Will not have wallclock time. vm1 | VMM[PCI bus]: Registering PCI device 00:01.0 vm2 | VMM[vm]: Device creation for virtual device l4rtc failed. Disabling device. vm1 | VMM[VIO Cons]: Console: 0x186b0 vm1 | VMM[VIO proxy]: Creating proxy vm2 | VMM: isa_debugport.l4vmm,vcon_cap: capability debug is invalid. vm2 | VMM[vm]: Device creation for virtual device isa_debugport failed. Disabling device. vm1 | VMM[Pci_bridge_windows]: [MMIO32] allocated [0xaa002000, 0xaa003fff] vm1 | VMM[Virt PCI dev]: bar[0] addr=0x0 size=0x2000 type=mmio32 (non-prefetchable) vm1 | VMM[Pci_bridge_windows]: [IO] allocated [0x6080, 0x60ff] vm1 | VMM[Virt PCI dev]: bar[1] addr=0x0 size=0x80 type=io p2p | Registering dataspace from 0x0 with 524288 KiB, offset 0x0 p2p | PORT[0x15d70]: DMA guest [0-1fffffff] local [600000-205fffff] offset 0 vm2 | VMM[PCI bus]: Creating host bridge p2p | register client: host IRQ: 420010 config DS: 41d000 vm2 | VMM[Pci_window_alloc]: Init PCI window with range [0x6000, 0xffff IO] vm2 | VMM[Pci_window_alloc]: Init PCI window with range [0xaa000000, 0xaaffffff MMIO32] vm2 | VMM[Pci_window_alloc]: Init PCI window with range [0x300000000, 0x3ffffffff MMIO64] vm2 | VMM[PCI bus]: Registering PCI device 00:00.0 vm1 | VMM[PCI bus]: Registering PCI device 00:02.0 vm1 | VMM[VIO proxy]: Creating proxy vm1 | VMM: virtio_disk@2.l4vmm,virtiocap: capability qdrv is invalid. vm1 | VMM[vm]: Device creation for virtual device virtio_disk@2 failed. Disabling device. vm2 | VMM[guest]: New mmio mapping: @ b0000000 10000000 vm1 | VMM: rom@ffc84000.l4vmm,dscap: capability bios_code is invalid. vm2 | VMM[PCI bus]: Created & Registered the PCI host bridge vm1 | VMM[ROM]: Missing 'l4vmm,dscap' property! vm1 | VMM[vm]: Device creation for virtual device rom@ffc84000 failed. Disabling device. vm2 | VMM[VIO Cons]: Create virtual PCI console vm1 | VMM: nvm@ffc00000.l4vmm,dscap: capability bios_vars is invalid. vm1 | VMM[CFI]: Missing 'l4vmm,dscap' property! vm1 | VMM[vm]: Device creation for virtual device nvm@ffc00000 failed. Disabling device. 
vm2 | VMM[Pci_bridge_windows]: [MMIO32] allocated [0xaa000000, 0xaa001fff] vm2 | VMM[Virt PCI dev]: bar[0] addr=0x0 size=0x2000 type=mmio32 (non-prefetchable) vm2 | VMM[Pci_bridge_windows]: [IO] allocated [0x6000, 0x607f] vm2 | VMM[Virt PCI dev]: bar[1] addr=0x0 size=0x80 type=io vm2 | VMM[PCI bus]: Registering PCI device 00:01.0 vm1 | VMM: Created VCPU 1 @ 23000 vm2 | VMM[VIO Cons]: Console: 0x186b0 vm1 | VMM[ram]: Cleaning caches for device tree [20bfe000-20bffb80] ([1fbfe000]) vm2 | VMM[VIO proxy]: Creating proxy vm1 | VMM: reschedule(): Initiating cpu startup for cap 0x418000/core 0 vm2 | VMM[Pci_bridge_windows]: [MMIO32] allocated [0xaa002000, 0xaa003fff] vm2 | VMM[Virt PCI dev]: bar[0] addr=0x0 size=0x2000 type=mmio32 (non-prefetchable) vm2 | VMM[Pci_bridge_windows]: [IO] allocated [0x6080, 0x60ff] vm2 | VMM[Virt PCI dev]: bar[1] addr=0x0 size=0x80 type=io p2p | Registering dataspace from 0x0 with 524288 KiB, offset 0x0 p2p | PORT[0x15e80]: DMA guest [0-1fffffff] local [20600000-405fffff] offset 0 p2p | register client: host IRQ: 420010 config DS: 41e000 vm1 | VMM[ACPI]: Initialize legacy BIOS ACPI tables. vm2 | VMM[PCI bus]: Registering PCI device 00:02.0 vm1 | VMM: Zeropage @ 0x1000, Kernel @ 0xfc400 vm2 | VMM[VIO proxy]: Creating proxy vm1 | VMM: Cmd_line: console=hvc0 ramdisk_size=10000 root=/dev/ram0 rw vm2 | VMM: virtio_disk@2.l4vmm,virtiocap: capability qdrv is invalid. vm1 | VMM: cmdline check: console=hvc0 ramdisk_size=10000 root=/dev/ram0 rw vm2 | VMM[vm]: Device creation for virtual device virtio_disk@2 failed. Disabling device. vm1 | VMM[vmmap]: VM map: vm1 | VMM[vmmap]: [ 0:1fffffff]: Ram vm1 | VMM[vmmap]: [b0000000:bfffffff]: Pci_bus_cfg_ecam vm2 | VMM: rom@ffc84000.l4vmm,dscap: capability bios_code is invalid. vm1 | VMM[vmmap]: [fec00000:fec00fff]: Ioapic vm2 | VMM[ROM]: Missing 'l4vmm,dscap' property! vm1 | VMM[vmmap]: [fee00000:fee00fff]: Lapic_access_handler vm2 | VMM[vm]: Device creation for virtual device rom@ffc84000 failed. Disabling device. vm1 | VMM[main]: Populating guest physical address space vm1 | VMM[mmio]: Mapping [1000000 - 20ffffff] -> [0 - 1fffffff] vm2 | VMM: nvm@ffc00000.l4vmm,dscap: capability bios_vars is invalid. vm1 | VMM[vmmap]: IOport map: vm2 | VMM[CFI]: Missing 'l4vmm,dscap' property! vm1 | VMM[vmmap]: [ 20: 21]: PIC vm2 | VMM[vm]: Device creation for virtual device nvm@ffc00000 failed. Disabling device. vm1 | VMM[vmmap]: [ 40: 43]: PIT vm1 | VMM[vmmap]: [ 61: 61]: PIT port 61 vm1 | VMM[vmmap]: [ 70: 71]: RTC vm1 | VMM[vmmap]: [ a0: a1]: PIC vm1 | VMM[vmmap]: [ 3f8: 3ff]: UART 8250 vm1 | VMM[vmmap]: [ 510: 51b]: Firmware interface vm1 | VMM[vmmap]: [ cf8: cff]: PCI bus cfg vm1 | VMM[vmmap]: [1800:1808]: ACPI platform vm2 | VMM: Created VCPU 1 @ 23000 vm1 | VMM[vmmap]: [b008:b008]: ACPI Timer vm2 | VMM[ram]: Cleaning caches for device tree [20bfe000-20bffb80] ([1fbfe000]) vm1 | VMM[guest]: Starting VMM @ 0x100000 vm2 | VMM: reschedule(): Initiating cpu startup for cap 0x418000/core 0 vm1 | VMM[Cpu_dev]: [ 0] Reset called vm1 | VMM[Cpu_dev]: [ 0] Resetting vCPU. vm1 | VMM: Hello clock source for vCPU 0
--------------------------------------------------------------------- CPU 2 [fffffffff006a176]: General Protection (ERR=0000000000000000) CPU(s) 0-5 entered JDBjdb:
Hi Richard,
is it working with just one VM? And is it QEMU with "-cpu host"? And if yes, what's the host CPU if I may ask?
Adam
Created vcon channel: vm1 [426010] cons> Created vcon channel: vm2 [427010] cons> vm1 | VMM: Created VCPU 0 @ 17000 vm1 | VMM[vmbus]: 'vbus' capability not found. Hardware access not possible for VM. vm1 | VMM[main]: Hello out there. vm1 | VMM[ASM]: Sys Info: vm1 | vBus: 0 vm1 | DMA devs: 0 vm1 | IO-MMU: 0 vm1 | Identity forced: 0 vm1 | DMA phys addr: 0 vm1 | DT dma-ranges: 0 vm1 | VMM[ASM]: Operating mode: No DMA vm1 | VMM[ram]: RAM not set up for DMA. vm2 | VMM: Created VCPU 0 @ 17000 vm2 | VMM[vmbus]: 'vbus' capability not found. Hardware access not possible for VM. vm2 | VMM[main]: Hello out there. vm1 | VMM[ram]: RAM: @ 0x0 size=0x20000000 vm1 | VMM[ram]: RAM: VMM local mapping @ 0x1000000 vm1 | VMM[ram]: RAM: VM offset=0x1000000 vm2 | VMM[ASM]: Sys Info: vm2 | vBus: 0 vm1 | VMM[main]: Loading kernel... vm2 | DMA devs: 0 vm2 | IO-MMU: 0 vm2 | Identity forced: 0 vm2 | DMA phys addr: 0 vm2 | DT dma-ranges: 0 vm2 | VMM[ASM]: Operating mode: No DMA vm1 | VMM[loader]: Elf image detected vm2 | VMM[ram]: RAM not set up for DMA. vm1 | VMM[bin]: Copy in ELF binary section @0x1000000 from 0x200000/0x18aef10 vm1 | VMM[bin]: Copy in ELF binary section @0x2a00000 from 0x1c00000/0x814000 vm1 | VMM[bin]: Copy in ELF binary section @0x3214000 from 0x2600000/0x2ece8 vm1 | VMM[bin]: Copy in ELF binary section @0x3243000 from 0x2643000/0x288000 vm1 | VMM[main]: Loading ram disk... vm1 | VMM[ram]: load: rom/ramdisk1-amd64.rd -> 0x1fc00000 vm1 | VMM[file]: load: @ 0x1fc00000 vm1 | VMM[file]: copy in: to offset 0x1fc00000-0x1fffffff vm1 | VMM[main]: Loaded ramdisk image rom/ramdisk1-amd64.rd to 1fc00000 (size: 00400000) vm2 | VMM[ram]: RAM: @ 0x0 size=0x20000000 vm2 | VMM[ram]: RAM: VMM local mapping @ 0x1000000 vm2 | VMM[ram]: RAM: VM offset=0x1000000 vm2 | VMM[main]: Loading kernel... vm1 | VMM[PIC]: Hello, Legacy_pic vm2 | VMM[loader]: Elf image detected vm2 | VMM[bin]: Copy in ELF binary section @0x1000000 from 0x200000/0x18aef10 vm2 | VMM[bin]: Copy in ELF binary section @0x2a00000 from 0x1c00000/0x814000 vm2 | VMM[bin]: Copy in ELF binary section @0x3214000 from 0x2600000/0x2ece8 vm2 | VMM[bin]: Copy in ELF binary section @0x3243000 from 0x2643000/0x288000 vm2 | VMM[main]: Loading ram disk... vm1 | VMM: acpi_platform: Failed to get property 'l4vmm,pwrinput': FDT_ERR_NOTFOUND vm2 | VMM[ram]: load: rom/ramdisk2-amd64.rd -> 0x1fc00000 vm2 | VMM[file]: load: @ 0x1fc00000 vm2 | VMM[file]: copy in: to offset 0x1fc00000-0x1fffffff vm1 | VMM: Creating Acpi_platform vm2 | VMM[main]: Loaded ramdisk image rom/ramdisk2-amd64.rd to 1fc00000 (size: 00400000) vm1 | VMM[ACPI]: Acpi timer @ 0xb008 vm1 | VMM[RTC]: Hello from RTC. Irq=8 vm1 | VMM[uart_8250]: Create virtual 8250 console vm1 | VMM: l4rtc.l4vmm,rtccap: capability rtc is invalid. vm1 | VMM[RTC]: l4vmm,rtccap not valid. Will not have wallclock time. vm2 | VMM[PIC]: Hello, Legacy_pic vm1 | VMM[vm]: Device creation for virtual device l4rtc failed. Disabling device. vm1 | VMM: isa_debugport.l4vmm,vcon_cap: capability debug is invalid. vm1 | VMM[vm]: Device creation for virtual device isa_debugport failed. Disabling device. vm2 | VMM: acpi_platform: Failed to get property 'l4vmm,pwrinput': FDT_ERR_NOTFOUND vm2 | VMM: Creating Acpi_platform vm1 | VMM[PCI bus]: Creating host bridge vm2 | VMM[ACPI]: Acpi timer @ 0xb008 vm1 | VMM[Pci_window_alloc]: Init PCI window with range [0x6000, 0xffff IO] vm1 | VMM[Pci_window_alloc]: Init PCI window with range [0xaa000000, 0xaaffffff MMIO32] vm2 | VMM[RTC]: Hello from RTC. 
Irq=8 vm1 | VMM[Pci_window_alloc]: Init PCI window with range [0x300000000, 0x3ffffffff MMIO64] vm2 | VMM[uart_8250]: Create virtual 8250 console vm1 | VMM[PCI bus]: Registering PCI device 00:00.0 vm1 | VMM[guest]: New mmio mapping: @ b0000000 10000000 vm1 | VMM[PCI bus]: Created & Registered the PCI host bridge vm1 | VMM[VIO Cons]: Create virtual PCI console vm2 | VMM: l4rtc.l4vmm,rtccap: capability rtc is invalid. vm2 | VMM[RTC]: l4vmm,rtccap not valid. Will not have wallclock time. vm2 | VMM[vm]: Device creation for virtual device l4rtc failed. Disabling device. vm2 | VMM: isa_debugport.l4vmm,vcon_cap: capability debug is invalid. vm2 | VMM[vm]: Device creation for virtual device isa_debugport failed. Disabling device. vm1 | VMM[Pci_bridge_windows]: [MMIO32] allocated [0xaa000000, 0xaa001fff] vm1 | VMM[Virt PCI dev]: bar[0] addr=0x0 size=0x2000 type=mmio32 (non-prefetchable) vm1 | VMM[Pci_bridge_windows]: [IO] allocated [0x6000, 0x607f] vm1 | VMM[Virt PCI dev]: bar[1] addr=0x0 size=0x80 type=io vm1 | VMM[PCI bus]: Registering PCI device 00:01.0 vm1 | VMM[VIO Cons]: Console: 0x186b0 vm1 | VMM[VIO proxy]: Creating proxy vm2 | VMM[PCI bus]: Creating host bridge vm1 | VMM[Pci_bridge_windows]: [MMIO32] allocated [0xaa002000, 0xaa003fff] vm1 | VMM[Virt PCI dev]: bar[0] addr=0x0 size=0x2000 type=mmio32 (non-prefetchable) vm1 | VMM[Pci_bridge_windows]: [IO] allocated [0x6080, 0x60ff] vm1 | VMM[Virt PCI dev]: bar[1] addr=0x0 size=0x80 type=io p2p | Registering dataspace from 0x0 with 524288 KiB, offset 0x0 vm2 | VMM[Pci_window_alloc]: Init PCI window with range [0x6000, 0xffff IO] vm2 | VMM[Pci_window_alloc]: Init PCI window with range [0xaa000000, 0xaaffffff MMIO32] vm2 | VMM[Pci_window_alloc]: Init PCI window with range [0x300000000, 0x3ffffffff MMIO64] p2p | PORT[0x15d70]: DMA guest [0-1fffffff] local [600000-205fffff] offset 0 vm2 | VMM[PCI bus]: Registering PCI device 00:00.0 vm2 | VMM[guest]: New mmio mapping: @ b0000000 10000000 vm2 | VMM[PCI bus]: Created & Registered the PCI host bridge p2p | register client: host IRQ: 420010 config DS: 41d000 vm2 | VMM[VIO Cons]: Create virtual PCI console vm1 | VMM[PCI bus]: Registering PCI device 00:02.0 vm1 | VMM[VIO proxy]: Creating proxy vm1 | VMM: virtio_disk@2.l4vmm,virtiocap: capability qdrv is invalid. vm1 | VMM[vm]: Device creation for virtual device virtio_disk@2 failed. Disabling device. vm2 | VMM[Pci_bridge_windows]: [MMIO32] allocated [0xaa000000, 0xaa001fff] vm2 | VMM[Virt PCI dev]: bar[0] addr=0x0 size=0x2000 type=mmio32 (non-prefetchable) vm2 | VMM[Pci_bridge_windows]: [IO] allocated [0x6000, 0x607f] vm1 | VMM: rom@ffc84000.l4vmm,dscap: capability bios_code is invalid. vm2 | VMM[Virt PCI dev]: bar[1] addr=0x0 size=0x80 type=io vm1 | VMM[ROM]: Missing 'l4vmm,dscap' property! vm1 | VMM[vm]: Device creation for virtual device rom@ffc84000 failed. Disabling device. vm2 | VMM[PCI bus]: Registering PCI device 00:01.0 vm2 | VMM[VIO Cons]: Console: 0x186b0 vm1 | VMM: nvm@ffc00000.l4vmm,dscap: capability bios_vars is invalid. vm1 | VMM[CFI]: Missing 'l4vmm,dscap' property! vm1 | VMM[vm]: Device creation for virtual device nvm@ffc00000 failed. Disabling device. 
vm2 | VMM[VIO proxy]: Creating proxy vm2 | VMM[Pci_bridge_windows]: [MMIO32] allocated [0xaa002000, 0xaa003fff] vm2 | VMM[Virt PCI dev]: bar[0] addr=0x0 size=0x2000 type=mmio32 (non-prefetchable) vm2 | VMM[Pci_bridge_windows]: [IO] allocated [0x6080, 0x60ff] vm2 | VMM[Virt PCI dev]: bar[1] addr=0x0 size=0x80 type=io p2p | Registering dataspace from 0x0 with 524288 KiB, offset 0x0 vm1 | VMM: Created VCPU 1 @ 23000 vm1 | VMM[ram]: Cleaning caches for device tree [20bfe000-20bffb80] ([1fbfe000]) vm1 | VMM: reschedule(): Initiating cpu startup for cap 0x418000/core 0 p2p | PORT[0x15e80]: DMA guest [0-1fffffff] local [20600000-405fffff] offset 0 p2p | register client: host IRQ: 420010 config DS: 41e000 vm2 | VMM[PCI bus]: Registering PCI device 00:02.0 vm2 | VMM[VIO proxy]: Creating proxy vm2 | VMM: virtio_disk@2.l4vmm,virtiocap: capability qdrv is invalid. vm2 | VMM[vm]: Device creation for virtual device virtio_disk@2 failed. Disabling device. vm2 | VMM: rom@ffc84000.l4vmm,dscap: capability bios_code is invalid. vm2 | VMM[ROM]: Missing 'l4vmm,dscap' property! vm2 | VMM[vm]: Device creation for virtual device rom@ffc84000 failed. Disabling device. vm2 | VMM: nvm@ffc00000.l4vmm,dscap: capability bios_vars is invalid. vm2 | VMM[CFI]: Missing 'l4vmm,dscap' property! vm2 | VMM[vm]: Device creation for virtual device nvm@ffc00000 failed. Disabling device. vm1 | VMM[ACPI]: Initialize legacy BIOS ACPI tables. vm1 | VMM: Zeropage @ 0x1000, Kernel @ 0x1000000 vm1 | VMM: Cmd_line: console=hvc0 ramdisk_size=10000 root=/dev/ram0 rw vm2 | VMM: Created VCPU 1 @ 23000 vm1 | VMM: Elf guest zeropage: dtb 0x1fbfdff0, entry 0x1000000 vm2 | VMM[ram]: Cleaning caches for device tree [20bfe000-20bffb80] ([1fbfe000]) vm1 | VMM: cmdline check: console=hvc0 ramdisk_size=10000 root=/dev/ram0 rw vm2 | VMM: reschedule(): Initiating cpu startup for cap 0x418000/core 0 vm1 | VMM[vmmap]: VM map: vm1 | VMM[vmmap]: [ 0:1fffffff]: Ram vm1 | VMM[vmmap]: [b0000000:bfffffff]: Pci_bus_cfg_ecam vm1 | VMM[vmmap]: [fec00000:fec00fff]: Ioapic vm1 | VMM[vmmap]: [fee00000:fee00fff]: Lapic_access_handler vm1 | VMM[main]: Populating guest physical address space vm1 | VMM[mmio]: Mapping [1000000 - 20ffffff] -> [0 - 1fffffff] vm1 | VMM[vmmap]: IOport map: vm1 | VMM[vmmap]: [ 20: 21]: PIC vm1 | VMM[vmmap]: [ 40: 43]: PIT vm1 | VMM[vmmap]: [ 61: 61]: PIT port 61 vm1 | VMM[vmmap]: [ 70: 71]: RTC vm1 | VMM[vmmap]: [ a0: a1]: PIC vm1 | VMM[vmmap]: [ 3f8: 3ff]: UART 8250 vm1 | VMM[vmmap]: [ 510: 51b]: Firmware interface vm1 | VMM[vmmap]: [ cf8: cff]: PCI bus cfg vm1 | VMM[vmmap]: [1800:1808]: ACPI platform vm1 | VMM[vmmap]: [b008:b008]: ACPI Timer vm1 | VMM[guest]: Starting VMM @ 0x1000000 vm1 | VMM[Cpu_dev]: [ 0] Reset called vm1 | VMM[Cpu_dev]: [ 0] Resetting vCPU. vm2 | VMM[ACPI]: Initialize legacy BIOS ACPI tables. 
vm2 | VMM: Zeropage @ 0x1000, Kernel @ 0x1000000 vm2 | VMM: Cmd_line: console=hvc0 ramdisk_size=10000 root=/dev/ram0 rw vm2 | VMM: Elf guest zeropage: dtb 0x1fbfdff0, entry 0x1000000 vm1 | VMM: Hello clock source for vCPU 0 vm2 | VMM: cmdline check: console=hvc0 ramdisk_size=10000 root=/dev/ram0 rw vm2 | VMM[vmmap]: VM map: vm2 | VMM[vmmap]: [ 0:1fffffff]: Ram vm2 | VMM[vmmap]: [b0000000:bfffffff]: Pci_bus_cfg_ecam vm2 | VMM[vmmap]: [fec00000:fec00fff]: Ioapic vm2 | VMM[vmmap]: [fee00000:fee00fff]: Lapic_access_handler vm2 | VMM[main]: Populating guest physical address space vm2 | VMM[mmio]: Mapping [1000000 - 20ffffff] -> [0 - 1fffffff] vm2 | VMM[vmmap]: IOport map: vm2 | VMM[vmmap]: [ 20: 21]: PIC vm1 | VMM: VM-entry failure due to invalid guest state: vm1 | Exit reason raw: 0x80000021 vm1 | Exit qualification: 0x0 vm2 | VMM[vmmap]: [ 40: 43]: PIT vm1 | IP: 0x100001e vm2 | VMM[vmmap]: [ 61: 61]: PIT port 61 vm1 | Instruction error: 0x0 vm2 | VMM[vmmap]: [ 70: 71]: RTC vm1 | Entry exception error: 0x0 vm2 | VMM[vmmap]: [ a0: a1]: PIC vm1 | VMM: [ 0]: Exit at guest IP 0x100001e SP 0x1a03f4e with 0x80000021 (Qual: 0x0) vm1 | VMM: [ 0]: Unhandled exit reason: VM-entry failure due to invalid guest state (33) vm1 | VMM: FATAL: [ 0]: Failure in VMM -38 vm2 | VMM[vmmap]: [ 3f8: 3ff]: UART 8250 vm1 | VMM: FATAL: [ 0] RAX 0x2213fe9 vm2 | VMM[vmmap]: [ 510: 51b]: Firmware interface vm1 | RBX 0x0 vm2 | VMM[vmmap]: [ cf8: cff]: PCI bus cfg vm1 | RCX 0xc0000101 vm2 | VMM[vmmap]: [1800:1808]: ACPI platform vm1 | RDX 0x2213fea vm2 | VMM[vmmap]: [b008:b008]: ACPI Timer vm1 | RSI 0x1000 vm2 | VMM[guest]: Starting VMM @ 0x1000000 vm1 | RDI 0x1000 vm2 | VMM[Cpu_dev]: [ 0] Reset called vm1 | RSP 0x0 vm2 | VMM[Cpu_dev]: [ 0] Resetting vCPU. vm1 | RBP 0x0 vm1 | R8 0x0 vm1 | R9 0x0 vm1 | R10 0x0 vm1 | R11 0x0 vm1 | R12 0x0 vm1 | R13 0x0 vm1 | R14 0x0 vm1 | R15 0x0 vm1 | RIP 0x100001e vm1 | vCPU RIP 0x1000000 vm1 | VMM: FATAL: [ 0] VM instruction error: 0x0 vm1 | VMM: FATAL: VM entered a fatal state. Halting. vm2 | VMM: Hello clock source for vCPU 0 vm2 | VMM: VM-entry failure due to invalid guest state: vm2 | Exit reason raw: 0x80000021 vm2 | Exit qualification: 0x0 vm2 | IP: 0x100001e vm2 | Instruction error: 0x0 vm2 | Entry exception error: 0x0 vm2 | VMM: [ 0]: Exit at guest IP 0x100001e SP 0x1a03f4e with 0x80000021 (Qual: 0x0) vm2 | VMM: [ 0]: Unhandled exit reason: VM-entry failure due to invalid guest state (33) vm2 | VMM: FATAL: [ 0]: Failure in VMM -38 vm2 | VMM: FATAL: [ 0] RAX 0x2213fe9 vm2 | RBX 0x0 vm2 | RCX 0xc0000101 vm2 | RDX 0x2213fea vm2 | RSI 0x1000 vm2 | RDI 0x1000 vm2 | RSP 0x0 vm2 | RBP 0x0 vm2 | R8 0x0 vm2 | R9 0x0 vm2 | R10 0x0 vm2 | R11 0x0 vm2 | R12 0x0 vm2 | R13 0x0 vm2 | R14 0x0 vm2 | R15 0x0 vm2 | RIP 0x100001e vm2 | vCPU RIP 0x1000000 vm2 | VMM: FATAL: [ 0] VM instruction error: 0x0 vm2 | VMM: FATAL: VM entered a fatal state. Halting.
Adam,
Same crash with just one VM. Yes, -cpu host. Intel 12th Gen i3-1220P.
Richard
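A host-side launch consistent with what this thread states (QEMU with KVM, "-cpu host", six cores) might look roughly like the sketch below, assuming the usual "make qemu" target of the L4Re build system; the entry name "two-vms" and the memory size are invented placeholders, not something from this thread.

# illustrative sketch only; "two-vms" and -m 2048 are placeholders
make qemu E=two-vms QEMU_OPTIONS="-enable-kvm -cpu host -smp 6 -m 2048"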
-----Original Message-----
From: Adam Lackorzynski <adam@l4re.org>
Sent: Thursday, May 8, 2025 11:19 AM
To: Richard Clark <richard.clark@Coheretechnology.us>; l4-hackers@os.inf.tu-dresden.de
Cc: Bud Wykoff <bud.wykoff@Coheretechnology.us>; Douglas Schafer <douglas.schafer@Coheretechnology.us>
Subject: Re: Upgrade issues. VM won't start.
Hi Richard,
is it working with just one VM? And is it QEMU with "-cpu host"? And if yes, what's the host CPU if I may ask?
Adam
On Mon May 05, 2025 at 13:01:55 +0000, Richard Clark wrote:
Adam,
I did another ham sync, rebuilt fiasco, rebuilt l4, rebuilt my natives, and it's a little better? Now the VMs crash, but at least the system keeps running and I can see my natives printing even after both VMs are halted. So at least the whole system doesn't crash any more. Just the VMs.
Again, yes, Debian Linux on QEMU with kvm and six cores. Cores 0 and 1 are for L4 and natives, cores 2 and 3 are for vm1, cores 4 and 5 are for vm2.
Please see attached!
Richard
Hi Richard,
On [08-05-2025 17:16], Richard Clark wrote:
Adam,
Same crash with just one VM. Yes, -cpu host. Intel 12th Gen i3-1220P.
Can you send me (to my personal email) an exportpack of your setup? You can create the exportpack in your build directory with
make exportpack EXPORTPACKTARGETDIR=/path/to/dir E=<YOUR_ENTRY>
Then archive /path/to/dir and send it to me please.
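For example, with a placeholder entry name and /tmp as the target directory, the two steps could look like this:

# "two-vms" stands in for your actual entry name
make exportpack EXPORTPACKTARGETDIR=/tmp/exportpack E=two-vms
tar czf exportpack.tar.gz -C /tmp exportpack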
Best, Matthias.
Richard
-----Original Message----- From: Adam Lackorzynski adam@l4re.org Sent: Thursday, May 8, 2025 11:19 AM To: Richard Clark richard.clark@Coheretechnology.us; l4-hackers@os.inf.tu-dresden.de Cc: Bud Wykoff bud.wykoff@Coheretechnology.us; Douglas Schafer douglas.schafer@Coheretechnology.us Subject: Re: Upgrade issues. VM won't start.
Hi Richard,
is it working with just one VM? And is it QEMU with "-cpu host"? And if yes, what's the host CPU if I may ask?
Adam
On Mon May 05, 2025 at 13:01:55 +0000, Richard Clark wrote:
Adam,
I did another ham sync, rebuilt fiasco, rebuilt l4, rebuilt my natives, and it's a little better? Now the VMs crash, but at least the system keeps running and I can seem my natives printing even after both VMs are halted. So at least the whole system doesn't crash any more. Just the VMs.
Again, yes, Debian Linux on QEMU with kvm and six cores. Cores 0 and 1 are for L4 and natives, cores 2 and 3 are for vm1, cores 4 and 5 are for vm2.
Please see attached!
Richard
-----Original Message----- From: Adam Lackorzynski adam@l4re.org Sent: Sunday, May 4, 2025 11:11 AM To: Richard Clark richard.clark@Coheretechnology.us; l4-hackers@os.inf.tu-dresden.de Cc: Bud Wykoff bud.wykoff@Coheretechnology.us; Douglas Schafer douglas.schafer@Coheretechnology.us Subject: Re: Upgrade issues. VM won't start.
Richard,
is that setup running on Linux QEMU+KVM? If yes, we recently (really a few days ago) fixed an issue in this virtualized setup (wrt to performance counter handling). It's on GH since Friday I believe. Otherwise please provide me you fiasco binary, such that I can look up fffffffff006a176 as this will point to the location that triggered the issue.
Thanks, Adam
On Sun May 04, 2025 at 13:14:00 +0000, Richard Clark wrote:
Adam,
So I have been using version 23.10.1, built as per the download page, and have gotten a couple VMs to ping eachother and give me a login prompt. I finally realized I was missing the virtio_switch package and grabbed it from github and put it where it is supposed to go. Of course, being a version mismatch now, it did not compile. I decide to bite the bullet and do an upgrade (always a mistake) and use the new build process given with the new website. That went smoothly! I install ham, run it, etc, everything builds, I update my local scripts and links to use the new environment variables instead of being hardcoded, and it all builds and looks good. Except that it doesn't run. The VMs get a memory exception and kick into jdb.
I have not changed any of my .cfg or .list files. The VMs and ramdisks are untouched. My local (L4 native) processes start and appear to run. But the VMs crash for some reason. Even the device tree is unchanged. I also tried a newly built linux (as opposed to a prebuilt) and that failed as well.
Is there some reason for this new crash between version 23.10.1 and the latest version from github? I've attached the VM startup output from both the old and new runs so you can take a look.
I'm so close... I see the IP configuration parameter and how to set up lwip and virtio_switch.... Then my natives should be able to talk directly to my linuxes.
Your help is greatly appreciated!
Richard
vm1 | VMM: Created VCPU 0 @ 17000 vm1 | VMM[vmbus]: 'vbus' capability not found. Hardware access not possible for VM. vm1 | VMM[main]: Hello out there. vm1 | VMM[ASM]: Sys Info: vm1 | vBus: 0 vm1 | DMA devs: 0 vm1 | IO-MMU: 0 vm1 | Identity forced: 0 vm1 | DMA phys addr: 0 vm1 | DT dma-ranges: 0 vm1 | VMM[ASM]: Operating mode: No DMA vm1 | VMM[ram]: RAM not set up for DMA. vm2 | VMM: Created VCPU 0 @ 17000 vm2 | VMM[vmbus]: 'vbus' capability not found. Hardware access not possible for VM. vm2 | VMM[main]: Hello out there. vm2 | VMM[ASM]: Sys Info: vm2 | vBus: 0 vm2 | DMA devs: 0 vm2 | IO-MMU: 0 vm2 | Identity forced: 0 vm1 | VMM[ram]: RAM: @ 0x0 size=0x20000000 vm2 | DMA phys addr: 0 vm1 | VMM[ram]: RAM: VMM local mapping @ 0x1000000 vm2 | DT dma-ranges: 0 vm1 | VMM[ram]: RAM: VM offset=0x1000000 vm2 | VMM[ASM]: Operating mode: No DMA vm1 | VMM[main]: Loading kernel... vm2 | VMM[ram]: RAM not set up for DMA. vm1 | VMM[loader]: Linux kernel detected vm1 | VMM[file]: load: @ 0xfc400 vm1 | VMM[file]: copy in: to offset 0xfc400-0xba989f vm1 | VMM[main]: Loading ram disk... vm1 | VMM[ram]: load: rom/ramdisk1-amd64.rd -> 0x1fc00000 vm1 | VMM[file]: load: @ 0x1fc00000 vm1 | VMM[file]: copy in: to offset 0x1fc00000-0x1fffffff vm1 | VMM[main]: Loaded ramdisk image rom/ramdisk1-amd64.rd to 1fc00000 (size: 00400000) vm1 | VMM[PIC]: Hello, Legacy_pic vm2 | VMM[ram]: RAM: @ 0x0 size=0x20000000 vm2 | VMM[ram]: RAM: VMM local mapping @ 0x1000000 vm2 | VMM[ram]: RAM: VM offset=0x1000000 vm2 | VMM[main]: Loading kernel... vm1 | VMM: acpi_platform: Failed to get property 'l4vmm,pwrinput': FDT_ERR_NOTFOUND vm2 | VMM[loader]: Linux kernel detected vm1 | VMM: Creating Acpi_platform vm2 | VMM[file]: load: @ 0xfc400 vm2 | VMM[file]: copy in: to offset 0xfc400-0xba989f vm1 | VMM[ACPI]: Acpi timer @ 0xb008 vm1 | VMM[RTC]: Hello from RTC. Irq=8 vm2 | VMM[main]: Loading ram disk... vm1 | VMM[uart_8250]: Create virtual 8250 console vm2 | VMM[ram]: load: rom/ramdisk2-amd64.rd -> 0x1fc00000 vm2 | VMM[file]: load: @ 0x1fc00000 vm2 | VMM[file]: copy in: to offset 0x1fc00000-0x1fffffff vm2 | VMM[main]: Loaded ramdisk image rom/ramdisk2-amd64.rd to 1fc00000 (size: 00400000) vm1 | VMM: l4rtc.l4vmm,rtccap: capability rtc is invalid. vm1 | VMM[RTC]: l4vmm,rtccap not valid. Will not have wallclock time. vm1 | VMM[vm]: Device creation for virtual device l4rtc failed. Disabling device. vm1 | VMM: isa_debugport.l4vmm,vcon_cap: capability debug is invalid. vm1 | VMM[vm]: Device creation for virtual device isa_debugport failed. Disabling device. vm2 | VMM[PIC]: Hello, Legacy_pic vm1 | VMM[PCI bus]: Creating host bridge vm1 | VMM[Pci_window_alloc]: Init PCI window with range [0x6000, 0xffff IO] vm1 | VMM[Pci_window_alloc]: Init PCI window with range [0xaa000000, 0xaaffffff MMIO32] vm1 | VMM[Pci_window_alloc]: Init PCI window with range [0x300000000, 0x3ffffffff MMIO64] vm1 | VMM[PCI bus]: Registering PCI device 00:00.0 vm2 | VMM: acpi_platform: Failed to get property 'l4vmm,pwrinput': FDT_ERR_NOTFOUND vm2 | VMM: Creating Acpi_platform vm1 | VMM[guest]: New mmio mapping: @ b0000000 10000000 vm1 | VMM[PCI bus]: Created & Registered the PCI host bridge vm2 | VMM[ACPI]: Acpi timer @ 0xb008 vm2 | VMM[RTC]: Hello from RTC. 
Irq=8 vm1 | VMM[VIO Cons]: Create virtual PCI console vm2 | VMM[uart_8250]: Create virtual 8250 console vm1 | VMM[Pci_bridge_windows]: [MMIO32] allocated [0xaa000000, 0xaa001fff] vm1 | VMM[Virt PCI dev]: bar[0] addr=0x0 size=0x2000 type=mmio32 (non-prefetchable) vm1 | VMM[Pci_bridge_windows]: [IO] allocated [0x6000, 0x607f] vm1 | VMM[Virt PCI dev]: bar[1] addr=0x0 size=0x80 type=io vm2 | VMM: l4rtc.l4vmm,rtccap: capability rtc is invalid. vm2 | VMM[RTC]: l4vmm,rtccap not valid. Will not have wallclock time. vm1 | VMM[PCI bus]: Registering PCI device 00:01.0 vm2 | VMM[vm]: Device creation for virtual device l4rtc failed. Disabling device. vm1 | VMM[VIO Cons]: Console: 0x186b0 vm1 | VMM[VIO proxy]: Creating proxy vm2 | VMM: isa_debugport.l4vmm,vcon_cap: capability debug is invalid. vm2 | VMM[vm]: Device creation for virtual device isa_debugport failed. Disabling device. vm1 | VMM[Pci_bridge_windows]: [MMIO32] allocated [0xaa002000, 0xaa003fff] vm1 | VMM[Virt PCI dev]: bar[0] addr=0x0 size=0x2000 type=mmio32 (non-prefetchable) vm1 | VMM[Pci_bridge_windows]: [IO] allocated [0x6080, 0x60ff] vm1 | VMM[Virt PCI dev]: bar[1] addr=0x0 size=0x80 type=io p2p | Registering dataspace from 0x0 with 524288 KiB, offset 0x0 p2p | PORT[0x15d70]: DMA guest [0-1fffffff] local [600000-205fffff] offset 0 vm2 | VMM[PCI bus]: Creating host bridge p2p | register client: host IRQ: 420010 config DS: 41d000 vm2 | VMM[Pci_window_alloc]: Init PCI window with range [0x6000, 0xffff IO] vm2 | VMM[Pci_window_alloc]: Init PCI window with range [0xaa000000, 0xaaffffff MMIO32] vm2 | VMM[Pci_window_alloc]: Init PCI window with range [0x300000000, 0x3ffffffff MMIO64] vm2 | VMM[PCI bus]: Registering PCI device 00:00.0 vm1 | VMM[PCI bus]: Registering PCI device 00:02.0 vm1 | VMM[VIO proxy]: Creating proxy vm1 | VMM: virtio_disk@2.l4vmm,virtiocap: capability qdrv is invalid. vm1 | VMM[vm]: Device creation for virtual device virtio_disk@2 failed. Disabling device. vm2 | VMM[guest]: New mmio mapping: @ b0000000 10000000 vm1 | VMM: rom@ffc84000.l4vmm,dscap: capability bios_code is invalid. vm2 | VMM[PCI bus]: Created & Registered the PCI host bridge vm1 | VMM[ROM]: Missing 'l4vmm,dscap' property! vm1 | VMM[vm]: Device creation for virtual device rom@ffc84000 failed. Disabling device. vm2 | VMM[VIO Cons]: Create virtual PCI console vm1 | VMM: nvm@ffc00000.l4vmm,dscap: capability bios_vars is invalid. vm1 | VMM[CFI]: Missing 'l4vmm,dscap' property! vm1 | VMM[vm]: Device creation for virtual device nvm@ffc00000 failed. Disabling device. 
vm2 | VMM[Pci_bridge_windows]: [MMIO32] allocated [0xaa000000, 0xaa001fff] vm2 | VMM[Virt PCI dev]: bar[0] addr=0x0 size=0x2000 type=mmio32 (non-prefetchable) vm2 | VMM[Pci_bridge_windows]: [IO] allocated [0x6000, 0x607f] vm2 | VMM[Virt PCI dev]: bar[1] addr=0x0 size=0x80 type=io vm2 | VMM[PCI bus]: Registering PCI device 00:01.0 vm1 | VMM: Created VCPU 1 @ 23000 vm2 | VMM[VIO Cons]: Console: 0x186b0 vm1 | VMM[ram]: Cleaning caches for device tree [20bfe000-20bffb80] ([1fbfe000]) vm2 | VMM[VIO proxy]: Creating proxy vm1 | VMM: reschedule(): Initiating cpu startup for cap 0x418000/core 0 vm2 | VMM[Pci_bridge_windows]: [MMIO32] allocated [0xaa002000, 0xaa003fff] vm2 | VMM[Virt PCI dev]: bar[0] addr=0x0 size=0x2000 type=mmio32 (non-prefetchable) vm2 | VMM[Pci_bridge_windows]: [IO] allocated [0x6080, 0x60ff] vm2 | VMM[Virt PCI dev]: bar[1] addr=0x0 size=0x80 type=io p2p | Registering dataspace from 0x0 with 524288 KiB, offset 0x0 p2p | PORT[0x15e80]: DMA guest [0-1fffffff] local [20600000-405fffff] offset 0 p2p | register client: host IRQ: 420010 config DS: 41e000 vm1 | VMM[ACPI]: Initialize legacy BIOS ACPI tables. vm2 | VMM[PCI bus]: Registering PCI device 00:02.0 vm1 | VMM: Zeropage @ 0x1000, Kernel @ 0xfc400 vm2 | VMM[VIO proxy]: Creating proxy vm1 | VMM: Cmd_line: console=hvc0 ramdisk_size=10000 root=/dev/ram0 rw vm2 | VMM: virtio_disk@2.l4vmm,virtiocap: capability qdrv is invalid. vm1 | VMM: cmdline check: console=hvc0 ramdisk_size=10000 root=/dev/ram0 rw vm2 | VMM[vm]: Device creation for virtual device virtio_disk@2 failed. Disabling device. vm1 | VMM[vmmap]: VM map: vm1 | VMM[vmmap]: [ 0:1fffffff]: Ram vm1 | VMM[vmmap]: [b0000000:bfffffff]: Pci_bus_cfg_ecam vm2 | VMM: rom@ffc84000.l4vmm,dscap: capability bios_code is invalid. vm1 | VMM[vmmap]: [fec00000:fec00fff]: Ioapic vm2 | VMM[ROM]: Missing 'l4vmm,dscap' property! vm1 | VMM[vmmap]: [fee00000:fee00fff]: Lapic_access_handler vm2 | VMM[vm]: Device creation for virtual device rom@ffc84000 failed. Disabling device. vm1 | VMM[main]: Populating guest physical address space vm1 | VMM[mmio]: Mapping [1000000 - 20ffffff] -> [0 - 1fffffff] vm2 | VMM: nvm@ffc00000.l4vmm,dscap: capability bios_vars is invalid. vm1 | VMM[vmmap]: IOport map: vm2 | VMM[CFI]: Missing 'l4vmm,dscap' property! vm1 | VMM[vmmap]: [ 20: 21]: PIC vm2 | VMM[vm]: Device creation for virtual device nvm@ffc00000 failed. Disabling device. vm1 | VMM[vmmap]: [ 40: 43]: PIT vm1 | VMM[vmmap]: [ 61: 61]: PIT port 61 vm1 | VMM[vmmap]: [ 70: 71]: RTC vm1 | VMM[vmmap]: [ a0: a1]: PIC vm1 | VMM[vmmap]: [ 3f8: 3ff]: UART 8250 vm1 | VMM[vmmap]: [ 510: 51b]: Firmware interface vm1 | VMM[vmmap]: [ cf8: cff]: PCI bus cfg vm1 | VMM[vmmap]: [1800:1808]: ACPI platform vm2 | VMM: Created VCPU 1 @ 23000 vm1 | VMM[vmmap]: [b008:b008]: ACPI Timer vm2 | VMM[ram]: Cleaning caches for device tree [20bfe000-20bffb80] ([1fbfe000]) vm1 | VMM[guest]: Starting VMM @ 0x100000 vm2 | VMM: reschedule(): Initiating cpu startup for cap 0x418000/core 0 vm1 | VMM[Cpu_dev]: [ 0] Reset called vm1 | VMM[Cpu_dev]: [ 0] Resetting vCPU. vm1 | VMM: Hello clock source for vCPU 0
--------------------------------------------------------------------- CPU 2 [fffffffff006a176]: General Protection (ERR=0000000000000000) CPU(s) 0-5 entered JDBjdb:
Created vcon channel: vm1 [426010] cons> Created vcon channel: vm2 [427010] cons> vm1 | VMM: Created VCPU 0 @ 17000 vm1 | VMM[vmbus]: 'vbus' capability not found. Hardware access not possible for VM. vm1 | VMM[main]: Hello out there. vm1 | VMM[ASM]: Sys Info: vm1 | vBus: 0 vm1 | DMA devs: 0 vm1 | IO-MMU: 0 vm1 | Identity forced: 0 vm1 | DMA phys addr: 0 vm1 | DT dma-ranges: 0 vm1 | VMM[ASM]: Operating mode: No DMA vm1 | VMM[ram]: RAM not set up for DMA. vm2 | VMM: Created VCPU 0 @ 17000 vm2 | VMM[vmbus]: 'vbus' capability not found. Hardware access not possible for VM. vm2 | VMM[main]: Hello out there. vm1 | VMM[ram]: RAM: @ 0x0 size=0x20000000 vm1 | VMM[ram]: RAM: VMM local mapping @ 0x1000000 vm1 | VMM[ram]: RAM: VM offset=0x1000000 vm2 | VMM[ASM]: Sys Info: vm2 | vBus: 0 vm1 | VMM[main]: Loading kernel... vm2 | DMA devs: 0 vm2 | IO-MMU: 0 vm2 | Identity forced: 0 vm2 | DMA phys addr: 0 vm2 | DT dma-ranges: 0 vm2 | VMM[ASM]: Operating mode: No DMA vm1 | VMM[loader]: Elf image detected vm2 | VMM[ram]: RAM not set up for DMA. vm1 | VMM[bin]: Copy in ELF binary section @0x1000000 from 0x200000/0x18aef10 vm1 | VMM[bin]: Copy in ELF binary section @0x2a00000 from 0x1c00000/0x814000 vm1 | VMM[bin]: Copy in ELF binary section @0x3214000 from 0x2600000/0x2ece8 vm1 | VMM[bin]: Copy in ELF binary section @0x3243000 from 0x2643000/0x288000 vm1 | VMM[main]: Loading ram disk... vm1 | VMM[ram]: load: rom/ramdisk1-amd64.rd -> 0x1fc00000 vm1 | VMM[file]: load: @ 0x1fc00000 vm1 | VMM[file]: copy in: to offset 0x1fc00000-0x1fffffff vm1 | VMM[main]: Loaded ramdisk image rom/ramdisk1-amd64.rd to 1fc00000 (size: 00400000) vm2 | VMM[ram]: RAM: @ 0x0 size=0x20000000 vm2 | VMM[ram]: RAM: VMM local mapping @ 0x1000000 vm2 | VMM[ram]: RAM: VM offset=0x1000000 vm2 | VMM[main]: Loading kernel... vm1 | VMM[PIC]: Hello, Legacy_pic vm2 | VMM[loader]: Elf image detected vm2 | VMM[bin]: Copy in ELF binary section @0x1000000 from 0x200000/0x18aef10 vm2 | VMM[bin]: Copy in ELF binary section @0x2a00000 from 0x1c00000/0x814000 vm2 | VMM[bin]: Copy in ELF binary section @0x3214000 from 0x2600000/0x2ece8 vm2 | VMM[bin]: Copy in ELF binary section @0x3243000 from 0x2643000/0x288000 vm2 | VMM[main]: Loading ram disk... vm1 | VMM: acpi_platform: Failed to get property 'l4vmm,pwrinput': FDT_ERR_NOTFOUND vm2 | VMM[ram]: load: rom/ramdisk2-amd64.rd -> 0x1fc00000 vm2 | VMM[file]: load: @ 0x1fc00000 vm2 | VMM[file]: copy in: to offset 0x1fc00000-0x1fffffff vm1 | VMM: Creating Acpi_platform vm2 | VMM[main]: Loaded ramdisk image rom/ramdisk2-amd64.rd to 1fc00000 (size: 00400000) vm1 | VMM[ACPI]: Acpi timer @ 0xb008 vm1 | VMM[RTC]: Hello from RTC. Irq=8 vm1 | VMM[uart_8250]: Create virtual 8250 console vm1 | VMM: l4rtc.l4vmm,rtccap: capability rtc is invalid. vm1 | VMM[RTC]: l4vmm,rtccap not valid. Will not have wallclock time. vm2 | VMM[PIC]: Hello, Legacy_pic vm1 | VMM[vm]: Device creation for virtual device l4rtc failed. Disabling device. vm1 | VMM: isa_debugport.l4vmm,vcon_cap: capability debug is invalid. vm1 | VMM[vm]: Device creation for virtual device isa_debugport failed. Disabling device. vm2 | VMM: acpi_platform: Failed to get property 'l4vmm,pwrinput': FDT_ERR_NOTFOUND vm2 | VMM: Creating Acpi_platform vm1 | VMM[PCI bus]: Creating host bridge vm2 | VMM[ACPI]: Acpi timer @ 0xb008 vm1 | VMM[Pci_window_alloc]: Init PCI window with range [0x6000, 0xffff IO] vm1 | VMM[Pci_window_alloc]: Init PCI window with range [0xaa000000, 0xaaffffff MMIO32] vm2 | VMM[RTC]: Hello from RTC. 
Irq=8 vm1 | VMM[Pci_window_alloc]: Init PCI window with range [0x300000000, 0x3ffffffff MMIO64] vm2 | VMM[uart_8250]: Create virtual 8250 console vm1 | VMM[PCI bus]: Registering PCI device 00:00.0 vm1 | VMM[guest]: New mmio mapping: @ b0000000 10000000 vm1 | VMM[PCI bus]: Created & Registered the PCI host bridge vm1 | VMM[VIO Cons]: Create virtual PCI console vm2 | VMM: l4rtc.l4vmm,rtccap: capability rtc is invalid. vm2 | VMM[RTC]: l4vmm,rtccap not valid. Will not have wallclock time. vm2 | VMM[vm]: Device creation for virtual device l4rtc failed. Disabling device. vm2 | VMM: isa_debugport.l4vmm,vcon_cap: capability debug is invalid. vm2 | VMM[vm]: Device creation for virtual device isa_debugport failed. Disabling device. vm1 | VMM[Pci_bridge_windows]: [MMIO32] allocated [0xaa000000, 0xaa001fff] vm1 | VMM[Virt PCI dev]: bar[0] addr=0x0 size=0x2000 type=mmio32 (non-prefetchable) vm1 | VMM[Pci_bridge_windows]: [IO] allocated [0x6000, 0x607f] vm1 | VMM[Virt PCI dev]: bar[1] addr=0x0 size=0x80 type=io vm1 | VMM[PCI bus]: Registering PCI device 00:01.0 vm1 | VMM[VIO Cons]: Console: 0x186b0 vm1 | VMM[VIO proxy]: Creating proxy vm2 | VMM[PCI bus]: Creating host bridge vm1 | VMM[Pci_bridge_windows]: [MMIO32] allocated [0xaa002000, 0xaa003fff] vm1 | VMM[Virt PCI dev]: bar[0] addr=0x0 size=0x2000 type=mmio32 (non-prefetchable) vm1 | VMM[Pci_bridge_windows]: [IO] allocated [0x6080, 0x60ff] vm1 | VMM[Virt PCI dev]: bar[1] addr=0x0 size=0x80 type=io p2p | Registering dataspace from 0x0 with 524288 KiB, offset 0x0 vm2 | VMM[Pci_window_alloc]: Init PCI window with range [0x6000, 0xffff IO] vm2 | VMM[Pci_window_alloc]: Init PCI window with range [0xaa000000, 0xaaffffff MMIO32] vm2 | VMM[Pci_window_alloc]: Init PCI window with range [0x300000000, 0x3ffffffff MMIO64] p2p | PORT[0x15d70]: DMA guest [0-1fffffff] local [600000-205fffff] offset 0 vm2 | VMM[PCI bus]: Registering PCI device 00:00.0 vm2 | VMM[guest]: New mmio mapping: @ b0000000 10000000 vm2 | VMM[PCI bus]: Created & Registered the PCI host bridge p2p | register client: host IRQ: 420010 config DS: 41d000 vm2 | VMM[VIO Cons]: Create virtual PCI console vm1 | VMM[PCI bus]: Registering PCI device 00:02.0 vm1 | VMM[VIO proxy]: Creating proxy vm1 | VMM: virtio_disk@2.l4vmm,virtiocap: capability qdrv is invalid. vm1 | VMM[vm]: Device creation for virtual device virtio_disk@2 failed. Disabling device. vm2 | VMM[Pci_bridge_windows]: [MMIO32] allocated [0xaa000000, 0xaa001fff] vm2 | VMM[Virt PCI dev]: bar[0] addr=0x0 size=0x2000 type=mmio32 (non-prefetchable) vm2 | VMM[Pci_bridge_windows]: [IO] allocated [0x6000, 0x607f] vm1 | VMM: rom@ffc84000.l4vmm,dscap: capability bios_code is invalid. vm2 | VMM[Virt PCI dev]: bar[1] addr=0x0 size=0x80 type=io vm1 | VMM[ROM]: Missing 'l4vmm,dscap' property! vm1 | VMM[vm]: Device creation for virtual device rom@ffc84000 failed. Disabling device. vm2 | VMM[PCI bus]: Registering PCI device 00:01.0 vm2 | VMM[VIO Cons]: Console: 0x186b0 vm1 | VMM: nvm@ffc00000.l4vmm,dscap: capability bios_vars is invalid. vm1 | VMM[CFI]: Missing 'l4vmm,dscap' property! vm1 | VMM[vm]: Device creation for virtual device nvm@ffc00000 failed. Disabling device. 
vm2 | VMM[VIO proxy]: Creating proxy vm2 | VMM[Pci_bridge_windows]: [MMIO32] allocated [0xaa002000, 0xaa003fff] vm2 | VMM[Virt PCI dev]: bar[0] addr=0x0 size=0x2000 type=mmio32 (non-prefetchable) vm2 | VMM[Pci_bridge_windows]: [IO] allocated [0x6080, 0x60ff] vm2 | VMM[Virt PCI dev]: bar[1] addr=0x0 size=0x80 type=io p2p | Registering dataspace from 0x0 with 524288 KiB, offset 0x0 vm1 | VMM: Created VCPU 1 @ 23000 vm1 | VMM[ram]: Cleaning caches for device tree [20bfe000-20bffb80] ([1fbfe000]) vm1 | VMM: reschedule(): Initiating cpu startup for cap 0x418000/core 0 p2p | PORT[0x15e80]: DMA guest [0-1fffffff] local [20600000-405fffff] offset 0 p2p | register client: host IRQ: 420010 config DS: 41e000 vm2 | VMM[PCI bus]: Registering PCI device 00:02.0 vm2 | VMM[VIO proxy]: Creating proxy vm2 | VMM: virtio_disk@2.l4vmm,virtiocap: capability qdrv is invalid. vm2 | VMM[vm]: Device creation for virtual device virtio_disk@2 failed. Disabling device. vm2 | VMM: rom@ffc84000.l4vmm,dscap: capability bios_code is invalid. vm2 | VMM[ROM]: Missing 'l4vmm,dscap' property! vm2 | VMM[vm]: Device creation for virtual device rom@ffc84000 failed. Disabling device. vm2 | VMM: nvm@ffc00000.l4vmm,dscap: capability bios_vars is invalid. vm2 | VMM[CFI]: Missing 'l4vmm,dscap' property! vm2 | VMM[vm]: Device creation for virtual device nvm@ffc00000 failed. Disabling device. vm1 | VMM[ACPI]: Initialize legacy BIOS ACPI tables. vm1 | VMM: Zeropage @ 0x1000, Kernel @ 0x1000000 vm1 | VMM: Cmd_line: console=hvc0 ramdisk_size=10000 root=/dev/ram0 rw vm2 | VMM: Created VCPU 1 @ 23000 vm1 | VMM: Elf guest zeropage: dtb 0x1fbfdff0, entry 0x1000000 vm2 | VMM[ram]: Cleaning caches for device tree [20bfe000-20bffb80] ([1fbfe000]) vm1 | VMM: cmdline check: console=hvc0 ramdisk_size=10000 root=/dev/ram0 rw vm2 | VMM: reschedule(): Initiating cpu startup for cap 0x418000/core 0 vm1 | VMM[vmmap]: VM map: vm1 | VMM[vmmap]: [ 0:1fffffff]: Ram vm1 | VMM[vmmap]: [b0000000:bfffffff]: Pci_bus_cfg_ecam vm1 | VMM[vmmap]: [fec00000:fec00fff]: Ioapic vm1 | VMM[vmmap]: [fee00000:fee00fff]: Lapic_access_handler vm1 | VMM[main]: Populating guest physical address space vm1 | VMM[mmio]: Mapping [1000000 - 20ffffff] -> [0 - 1fffffff] vm1 | VMM[vmmap]: IOport map: vm1 | VMM[vmmap]: [ 20: 21]: PIC vm1 | VMM[vmmap]: [ 40: 43]: PIT vm1 | VMM[vmmap]: [ 61: 61]: PIT port 61 vm1 | VMM[vmmap]: [ 70: 71]: RTC vm1 | VMM[vmmap]: [ a0: a1]: PIC vm1 | VMM[vmmap]: [ 3f8: 3ff]: UART 8250 vm1 | VMM[vmmap]: [ 510: 51b]: Firmware interface vm1 | VMM[vmmap]: [ cf8: cff]: PCI bus cfg vm1 | VMM[vmmap]: [1800:1808]: ACPI platform vm1 | VMM[vmmap]: [b008:b008]: ACPI Timer vm1 | VMM[guest]: Starting VMM @ 0x1000000 vm1 | VMM[Cpu_dev]: [ 0] Reset called vm1 | VMM[Cpu_dev]: [ 0] Resetting vCPU. vm2 | VMM[ACPI]: Initialize legacy BIOS ACPI tables. 
vm2 | VMM: Zeropage @ 0x1000, Kernel @ 0x1000000 vm2 | VMM: Cmd_line: console=hvc0 ramdisk_size=10000 root=/dev/ram0 rw vm2 | VMM: Elf guest zeropage: dtb 0x1fbfdff0, entry 0x1000000 vm1 | VMM: Hello clock source for vCPU 0 vm2 | VMM: cmdline check: console=hvc0 ramdisk_size=10000 root=/dev/ram0 rw vm2 | VMM[vmmap]: VM map: vm2 | VMM[vmmap]: [ 0:1fffffff]: Ram vm2 | VMM[vmmap]: [b0000000:bfffffff]: Pci_bus_cfg_ecam vm2 | VMM[vmmap]: [fec00000:fec00fff]: Ioapic vm2 | VMM[vmmap]: [fee00000:fee00fff]: Lapic_access_handler vm2 | VMM[main]: Populating guest physical address space vm2 | VMM[mmio]: Mapping [1000000 - 20ffffff] -> [0 - 1fffffff] vm2 | VMM[vmmap]: IOport map: vm2 | VMM[vmmap]: [ 20: 21]: PIC vm1 | VMM: VM-entry failure due to invalid guest state: vm1 | Exit reason raw: 0x80000021 vm1 | Exit qualification: 0x0 vm2 | VMM[vmmap]: [ 40: 43]: PIT vm1 | IP: 0x100001e vm2 | VMM[vmmap]: [ 61: 61]: PIT port 61 vm1 | Instruction error: 0x0 vm2 | VMM[vmmap]: [ 70: 71]: RTC vm1 | Entry exception error: 0x0 vm2 | VMM[vmmap]: [ a0: a1]: PIC vm1 | VMM: [ 0]: Exit at guest IP 0x100001e SP 0x1a03f4e with 0x80000021 (Qual: 0x0) vm1 | VMM: [ 0]: Unhandled exit reason: VM-entry failure due to invalid guest state (33) vm1 | VMM: FATAL: [ 0]: Failure in VMM -38 vm2 | VMM[vmmap]: [ 3f8: 3ff]: UART 8250 vm1 | VMM: FATAL: [ 0] RAX 0x2213fe9 vm2 | VMM[vmmap]: [ 510: 51b]: Firmware interface vm1 | RBX 0x0 vm2 | VMM[vmmap]: [ cf8: cff]: PCI bus cfg vm1 | RCX 0xc0000101 vm2 | VMM[vmmap]: [1800:1808]: ACPI platform vm1 | RDX 0x2213fea vm2 | VMM[vmmap]: [b008:b008]: ACPI Timer vm1 | RSI 0x1000 vm2 | VMM[guest]: Starting VMM @ 0x1000000 vm1 | RDI 0x1000 vm2 | VMM[Cpu_dev]: [ 0] Reset called vm1 | RSP 0x0 vm2 | VMM[Cpu_dev]: [ 0] Resetting vCPU. vm1 | RBP 0x0 vm1 | R8 0x0 vm1 | R9 0x0 vm1 | R10 0x0 vm1 | R11 0x0 vm1 | R12 0x0 vm1 | R13 0x0 vm1 | R14 0x0 vm1 | R15 0x0 vm1 | RIP 0x100001e vm1 | vCPU RIP 0x1000000 vm1 | VMM: FATAL: [ 0] VM instruction error: 0x0 vm1 | VMM: FATAL: VM entered a fatal state. Halting. vm2 | VMM: Hello clock source for vCPU 0 vm2 | VMM: VM-entry failure due to invalid guest state: vm2 | Exit reason raw: 0x80000021 vm2 | Exit qualification: 0x0 vm2 | IP: 0x100001e vm2 | Instruction error: 0x0 vm2 | Entry exception error: 0x0 vm2 | VMM: [ 0]: Exit at guest IP 0x100001e SP 0x1a03f4e with 0x80000021 (Qual: 0x0) vm2 | VMM: [ 0]: Unhandled exit reason: VM-entry failure due to invalid guest state (33) vm2 | VMM: FATAL: [ 0]: Failure in VMM -38 vm2 | VMM: FATAL: [ 0] RAX 0x2213fe9 vm2 | RBX 0x0 vm2 | RCX 0xc0000101 vm2 | RDX 0x2213fea vm2 | RSI 0x1000 vm2 | RDI 0x1000 vm2 | RSP 0x0 vm2 | RBP 0x0 vm2 | R8 0x0 vm2 | R9 0x0 vm2 | R10 0x0 vm2 | R11 0x0 vm2 | R12 0x0 vm2 | R13 0x0 vm2 | R14 0x0 vm2 | R15 0x0 vm2 | RIP 0x100001e vm2 | vCPU RIP 0x1000000 vm2 | VMM: FATAL: [ 0] VM instruction error: 0x0 vm2 | VMM: FATAL: VM entered a fatal state. Halting.
Hi,
just to mention, I happen to have a CPU of the same generation available, and it works well for me. So I'm also curious.
Adam
On Thu May 08, 2025 at 17:16:27 +0000, Richard Clark wrote:
Adam,
Same crash with just one VM. Yes, -cpu host. The host CPU is an Intel 12th Gen i3-1220P.
Richard
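For anyone reproducing this, the host invocation under discussion would look roughly like the sketch below. Only -enable-kvm, -cpu host, and the six cores are stated in the thread; the memory size and image name are placeholders:

  qemu-system-x86_64 \
      -enable-kvm -cpu host -smp 6 \
      -m 4G \
      -serial stdio \
      -cdrom l4re-image.iso    # placeholder image name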
-----Original Message-----
From: Adam Lackorzynski <adam@l4re.org>
Sent: Thursday, May 8, 2025 11:19 AM
To: Richard Clark <richard.clark@Coheretechnology.us>; l4-hackers@os.inf.tu-dresden.de
Cc: Bud Wykoff <bud.wykoff@Coheretechnology.us>; Douglas Schafer <douglas.schafer@Coheretechnology.us>
Subject: Re: Upgrade issues. VM won't start.
Hi Richard,
is it working with just one VM? And is it QEMU with "-cpu host"? If yes, what's the host CPU, if I may ask?
Adam
On Mon May 05, 2025 at 13:01:55 +0000, Richard Clark wrote:
Adam,
I did another ham sync, rebuilt fiasco, rebuilt l4, rebuilt my natives, and it's a little better: now the VMs crash, but at least the system keeps running, and I can see my natives printing even after both VMs have halted. So at least the whole system doesn't crash any more. Just the VMs.
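Roughly that sequence as commands, for anyone following along; ham sync re-syncs the source trees from the manifest, and the build-directory paths below are placeholders for the local setup, not canonical ones:

  ham sync                  # re-sync all source trees from the manifest
  make -C fiasco/mybuild    # rebuild the Fiasco microkernel (placeholder build dir)
  make -C l4/mybuild        # rebuild L4Re userland and packages (placeholder build dir)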
Again, yes: Debian Linux on QEMU with KVM and six cores. Cores 0 and 1 are for L4 and the natives, cores 2 and 3 are for vm1, and cores 4 and 5 are for vm2.
Please see attached!
Richard