<!-- HTML header for doxygen 1.9.1-->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "https://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=9"/>
<meta name="generator" content="Doxygen 1.15.0"/>
<meta name="viewport" content="width=device-width, initial-scale=1"/>
<title>L4Re Operating System Framework: Uvmm, the virtual machine monitor</title>
<link href="tabs.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript" src="dynsections.js"></script>
<link href="navtree.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="navtreedata.js"></script>
<script type="text/javascript" src="navtree.js"></script>
<script type="text/javascript" src="cookie.js"></script>
<link href="search/search.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="search/searchdata.js"></script>
<script type="text/javascript" src="search/search.js"></script>
<link href="doxygen.css" rel="stylesheet" type="text/css" />
<link href="doxygen-awesome.css" rel="stylesheet" type="text/css"/>
<link href="l4re-awesome.css" rel="stylesheet" type="text/css"/>
</head>
<body>
<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
<div id="titlearea">
<table cellspacing="0" cellpadding="0">
<tbody>
<tr style="height: 56px;">
<td id="projectlogo"><img alt="Logo" src="L4Re_rgb_logo_quer_hg_h55.png"/></td>
<td id="projectalign" style="padding-left: 0.5em;">
<div id="projectname">L4Re Operating System Framework
</div>
<div id="projectbrief">Interface and Usage Documentation</div>
</td>
</tr>
</tbody>
</table>
</div>
<!-- end header part -->
<!-- Generated by Doxygen 1.15.0 -->
<script type="text/javascript">
var searchBox = new SearchBox("searchBox", "search/",'.html');
</script>
<script type="text/javascript">
$(function() { codefold.init(); });
</script>
<script type="text/javascript" src="menudata.js"></script>
<script type="text/javascript" src="menu.js"></script>
<script type="text/javascript">
$(function() {
initMenu('',true,false,'search.php','Search',true);
$(function() { init_search(); });
});
</script>
<div id="main-nav"></div>
</div><!-- top -->
<div id="side-nav" class="ui-resizable side-nav-resizable">
<div id="nav-tree">
<div id="nav-tree-contents">
<div id="nav-sync" class="sync"></div>
</div>
</div>
<div id="splitbar" style="-moz-user-select:none;"
class="ui-resizable-handle">
</div>
</div>
<script type="text/javascript">
$(function(){initNavTree('l4re_servers_uvmm.html','',''); });
</script>
<div id="container">
<div id="doc-content">
<!-- window showing the filter options -->
<div id="MSearchSelectWindow"
onmouseover="return searchBox.OnSearchSelectShow()"
onmouseout="return searchBox.OnSearchSelectHide()"
onkeydown="return searchBox.OnSearchSelectKey(event)">
</div>

<!-- iframe showing the search results (closed by default) -->
<div id="MSearchResultsWindow">
<div id="MSearchResults">
<div class="SRPage">
<div id="SRIndex">
<div id="SRResults"></div>
<div class="SRStatus" id="Loading">Loading...</div>
<div class="SRStatus" id="Searching">Searching...</div>
<div class="SRStatus" id="NoMatches">No Matches</div>
</div>
</div>
</div>
</div>

<div><div class="header">
<div class="headertitle"><div class="title">Uvmm, the virtual machine monitor </div></div>
</div><!--header-->
<div class="contents">
<div class="textblock"><p>Uvmm provides a virtual machine for running an unmodified guest in non-privileged mode.</p>
<h2>Command Line Options </h2>
<p>uvmm provides the following command line options:</p>
<ul>
<li><p class="startli"><span class="tt">-c, --cmdline=&lt;guest command line&gt;</span></p>
<p class="startli">Command line that is passed to the guest on boot.</p>
</li>
<li><p class="startli"><span class="tt">-k, --kernel=&lt;kernel image name&gt;</span></p>
<p class="startli">The name of the guest-kernel image file present in the ROM namespace.</p>
</li>
<li><p class="startli"><span class="tt">-d, --dtb=&lt;DTB overlay&gt;</span></p>
<p class="startli">The name of the device tree file present in the ROM namespace. The device tree will be placed in the uppermost region of guest memory. Optionally, a user may append an additional parameter of the form "&lt;DTB overlay&gt;:limit=0xffffffff" to set an upper limit for the device tree location.</p>
</li>
<li><p class="startli"><span class="tt">-r, --ramdisk=&lt;RAM disk name&gt;</span></p>
<p class="startli">The name of the RAM disk file present in the ROM namespace.</p>
</li>
<li><p class="startli"><span class="tt">-b, --rambase=<Base address of the guest RAM></span></p>
|
|
<p class="startli">Physical start address for the guest RAM. This value is platform specific.</p>
|
|
</li>
|
|
<li><p class="startli"><span class="tt">-D, --debug=[<component>=][level]</span></p>
|
|
<p class="startli">Control the verbosity level of the uvmm. Possible <span class="tt">level</span> values are: quiet, warn, info, trace</p>
|
|
<p class="startli">Using the <span class="tt">component</span> prefix, the verbosity level of each uvmm component is configurable. The component names are: core, cpu, mmio, irq, dev, pm, vbus_event</p>
|
|
<p class="startli">For example, the following command line sets the verbosity of all uvmm components to <span class="tt">info</span> except for IRQ handling, which is set to <span class="tt">trace</span>. </p><pre class="fragment">uvmm -D info -D irq=trace
|
|
</pre></li>
|
|
</ul>
|
|
<ul>
<li><p class="startli"><span class="tt">-f, --fault-mode</span></p>
<p class="startli">Control the handling of guest reads/writes to non-existing memory. Possible values are:</p><ul>
<li><span class="tt">ignore</span> - Invalid writes are ignored. Invalid reads either return 0 or are skipped. The guest may experience undefined behaviour.</li>
<li><span class="tt">halt</span> - Halt the VM on the first invalid memory access.</li>
<li><span class="tt">inject</span> - Try to forward the invalid access to the guest. This is not supported on all architectures. Falls back to <span class="tt">halt</span> if the fault cannot be forwarded to the guest.</li>
</ul>
<p class="startli">Defaults to <span class="tt">ignore</span>.</p>
</li>
<li><p class="startli"><span class="tt">-q, --quiet</span></p>
<p class="startli">Silence all uvmm output.</p>
</li>
<li><p class="startli"><span class="tt">-v, --verbose</span></p>
<p class="startli">Increase the verbosity of the uvmm. Repeating the option increases the verbosity by another level.</p>
</li>
<li><p class="startli"><span class="tt">-W, --wakeup-on-system-resume</span></p>
<p class="startli">When set, the uvmm resumes when the host system resumes after a suspend call.</p>
</li>
<li><p class="startli"><span class="tt">-i</span></p>
<p class="startli">When set, the option forces the guest RAM to be mapped to its corresponding host-physical addresses.</p>
</li>
</ul>
<dl class="section note"><dt>Note</dt><dd>Options <span class="tt">-q, --quiet</span>, <span class="tt">-v, --verbose</span> and <span class="tt">-D, --debug</span> cancel each other out.</dd></dl>
<h2>Setting up guest memory </h2>
<p>In the simplest setup, memory for the guest can be provided via a simple dataspace. In your ned script, create a new dataspace of the required size and hand it into uvmm as the <span class="tt">ram</span> capability: </p><pre class="fragment">local ramds = L4.Env.user_factory:create(L4.Proto.Dataspace, 60 * 1024 * 1024)

L4.default_loader:startv({caps = {ram = ramds:m("rw")}}, "rom/uvmm")
</pre><p>The memory will be mapped to the most appropriate place and a memory node added to the device tree, so that the guest can find the memory.</p>
<p>For a more complex setup, the memory can be configured via the device tree. uvmm scans for memory nodes and tries to set up the memory from them. A memory device node should look like this: </p><pre class="fragment">memory@0 {
    device_type = "memory";
    reg = &lt;0x00000000 0x00100000
           0x00200000 0xffffffff&gt;;
    l4vmm,dscap = "memcap";
    dma-ranges = &lt;&gt;;
};
</pre><p>The <span class="tt">device_type</span> property is mandatory and needs to be set to <span class="tt">memory</span>.</p>
<p><span class="tt">l4vmm,dscap</span> contains the name of the capability containing the dataspace to be used for the RAM. <span class="tt">reg</span> describes the memory regions to use for the memory. The regions will be filled up to the size of the supplied dataspace. If they are larger than the dataspace, the remaining area will be cut off.</p>
<p>If the optional <span class="tt">dma-ranges</span> property is given, the host-physical address ranges of the memory regions will be added to this property. Note that the property is not cleared first, so it should be left empty in the device tree.</p>
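<p>A matching ned fragment might hand in the dataspace under the name referenced by <span class="tt">l4vmm,dscap</span> above; this is a sketch, and the dataspace size and rights are examples only: </p><pre class="fragment">-- sketch: provide the dataspace under the name "memcap" referenced by
-- the device tree above
local memcap = L4.Env.user_factory:create(L4.Proto.Dataspace, 128 * 1024 * 1024)

L4.default_loader:startv({caps = { memcap = memcap:m("rw") }}, "rom/uvmm")
</pre>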
<p>For more details see <a class="el" href="l4re_servers_uvmm_ram_details.html">RAM configuration</a>.</p>
<h3>Memory layout</h3>
<p>uvmm populates the RAM with the following data:</p>
<ul>
<li>kernel binary</li>
<li>(optional) ramdisk</li>
<li>(optional) device tree</li>
</ul>
<p>The kernel binary is put at its predefined address. For ELF binaries, this is an absolute physical address. If the binary supports relative addressing, it is placed at the requested offset relative to the beginning of the first 'memory' region defined in the device tree.</p>
<p>The ramdisk and device tree are placed as close as possible to the end of the regions defined in the first 'memory' node.</p>
<p>If there is a part of RAM that must remain empty, define an extra memory node for it in the device tree; uvmm only writes to memory in the first memory node it finds.</p>
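<p>For example, such a region could be described by a second memory node; this is a sketch, and the address range and capability name are placeholders: </p><pre class="fragment">/* sketch: a second memory node; uvmm does not populate this region */
memory@10000000 {
    device_type = "memory";
    reg = &lt;0x10000000 0x00100000&gt;;
    l4vmm,dscap = "memcap2";
};
</pre>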
<p>Warning: uvmm does not touch any unpopulated memory. In particular, it does not ensure that the memory is cleared. It is the responsibility of the provider of the RAM dataspace to make sure that no data leakage can happen. Normally this is not an issue because dataspaces are guaranteed to be cleaned when they are newly created, but users should be careful when reusing memory or dataspaces, for example, when restarting the uvmm.</p>
<h2>Forwarding hardware resources to the guest </h2>
<p>Hardware resources must be specified in two places: the device tree contains the description of all hardware devices the guest could see and the vbus describes which resources are actually available to the uvmm.</p>
<p>The vbus gives the uvmm access to hardware resources in the same way as any other <a class="el" href="namespaceL4.html" title="L4 low-level kernel interface.">L4</a> application. uvmm expects a capability named 'vbus' where it can access its hardware resources. It is possible to leave out the capability for purely virtual guests. (Note that this is not actually practical on some architectures. On ARM, for example, the guest needs hardware access to the interrupt controller; without a 'vbus' capability, interrupts will not work.) For information on how to configure a vbus, see the <a class="el" href="io.html">IO documentation</a>.</p>
<p>The device tree needs to contain the hardware description the guest should see. For hardware devices this usually means using a device tree that would also be used when running the guest directly on hardware.</p>
<p>On startup, uvmm scans the device tree for any devices that require memory or interrupt resources and compares the required resources with the ones available from its vbus. When all resources are available, it sets up the appropriate forwarding, so that the guest has direct access to the hardware. If the resources are not available, the device is marked as 'disabled'. This mechanism makes it possible to work with a standard device tree for all guests in the system while handling the actual resource allocation in a flexible manner via the vbus configuration.</p>
<p>The default mechanism assigns all resources 1:1, i.e. with the same memory address and interrupt number as on hardware. It is also possible to map a hardware device to a different location. In this case, the assignment between vbus device and device tree device must be known in advance and marked in the device tree using the <span class="tt">l4vmm,vbus-dev</span> property.</p>
<p>For example, the following device will be bound to the vbus device with the HID 'l4-test,dev': </p><pre class="fragment">test@e0000000 {
    compatible = "memdev,bar";
    reg = &lt;0 0xe0000000 0 0x50000&gt;,
          &lt;0 0xe1000000 0 0x50000&gt;;
    l4vmm,vbus-dev = "l4-test,dev";
    interrupts-extended = &lt;&amp;gic 0 139 4&gt;;
};
</pre><p>Resources are then matched by name. Memory resources in the vbus must be named <span class="tt">reg0</span> to <span class="tt">reg9</span> to match against the address ranges in the device tree <span class="tt">reg</span> property. Interrupts must be named <span class="tt">irq0</span> to <span class="tt">irq9</span> and are matched against <span class="tt">interrupts</span> or <span class="tt">interrupts-extended</span> entries in the device tree. The vbus must expose a resource for every resource defined in the device tree entry, otherwise the initialisation will fail.</p>
<p>An appropriate IO entry for the above device would thus be: </p><pre class="fragment">MEM = Io.Hw.Device(function()
  Property.hid = "l4-test,dev"
  Resource.reg0 = Io.Res.mmio(0x41000000, 0x4104ffff)
  Resource.reg1 = Io.Res.mmio(0x42000000, 0x4204ffff)
  Resource.irq0 = Io.Res.irq(134);
end)
</pre><p>Please note that HIDs on the vbus are not necessarily unique. If multiple devices with the HID given in <span class="tt">l4vmm,vbus-dev</span> are available on the vbus, then one device is chosen at random.</p>
<p>If no vbus device with the given HID is available, the device is disabled.</p>
<h2>How to enable guest suspend/resume </h2>
<dl class="section note"><dt>Note</dt><dd>Currently only supported on ARM. It should work fine with Linux version 4.4 or newer.</dd></dl>
<p>Uvmm (partially) implements the Power State Coordination Interface (PSCI), which is the standard ARM power management interface. To make use of this interface, you have to announce its availability to the guest operating system via the device tree like so: </p><pre class="fragment">psci {
    compatible = "arm,psci-0.2";
    method = "hvc";
};
</pre><p>The Linux guest must be configured with at least these options: </p><pre class="fragment">CONFIG_SUSPEND=y
CONFIG_ARM_PSCI=y
</pre><h2>How to communicate power management (PM) events </h2>
<p>Uvmm can be instructed to inform a PM manager of PM events through the <a class="el" href="classL4_1_1Platform__control.html" title="L4 C++ interface for controlling platform-wide properties, see Platform Control C API for the C inter...">L4::Platform_control</a> interface. To that end, uvmm may be equipped with a <span class="tt">pfc</span> cap. On suspend, uvmm will call <a class="el" href="group__l4__platform__control__api.html#ga56b855fa63a3d94b275bdd17bb0bc21e" title="Enter suspend to RAM.">l4_platform_ctl_system_suspend()</a>.</p>
<p>The <span class="tt">pfc</span> cap can also be implemented by IO. In that case, the guest can start a machine suspend, shutdown or reboot.</p>
<h2>RAM block device support </h2>
<p>The example ramdisk works by loading a file system into RAM, which needs RAM block device support to work. In the Linux kernel configuration add: </p><pre class="fragment">CONFIG_BLK_DEV_RAM=y
</pre><h2>Framebuffer support for uvmm/amd64 guests </h2>
<p>Uvmm can be instructed to pass along a framebuffer to the Linux guest. To enable this, three things need to be done:</p>
<ol type="1">
<li><p class="startli">Configure Linux to support a simple framebuffer by enabling: </p><pre class="fragment">CONFIG_FB_SIMPLE=y
CONFIG_X86_SYSFB=y
</pre></li>
<li><p class="startli">Configure a simple framebuffer device in the device tree (currently only read by uvmm, linear framebuffer at [0xf0000000 - 0xf1000000]): </p><pre class="fragment">simplefb {
    compatible = "simple-framebuffer";
    reg = &lt;0x0 0xf0000000 0x0 0x1000000&gt;;
    l4vmm,fbcap = "fb";
};
</pre></li>
<li><p class="startli">Start a framebuffer instance and connect it to uvmm, e.g.: </p><pre class="fragment">-- Start fb-drv (but only if we need to)
local fbdrv_fb = L4.Env.vesa;
if (not fbdrv_fb) then
  fbdrv_fb = l:new_channel();
  l:start({
    caps = {
      vbus = io_busses.fbdrv,
      fb = fbdrv_fb:svr(),
    },
    log = { "fbdrv", "r" },
  }, "rom/fb-drv");
end

vmm.start_vm{
  ext_caps = { fb = fbdrv_fb },
  -- ...
</pre></li>
</ol>
<h2>Requirements on the Fiasco.OC configuration on amd64 </h2>
<p>The kernel configuration must feature <span class="tt">CONFIG_SYNC_TSC=y</span> in order for the emulated timers to reach a sufficiently high resolution.</p>
<h2>Recommended Linux configuration options for uvmm/amd64 guests </h2>
<p>The following options are recommended in addition to the amd64 defaults provided by a <span class="tt">make defconfig</span>:</p>
<p>Virtio support is required to access virtual devices provided by uvmm: </p><pre class="fragment"> CONFIG_VIRTIO=y
 CONFIG_VIRTIO_PCI=y
 CONFIG_VIRTIO_BLK=y
 CONFIG_BLK_MQ_VIRTIO=y
 CONFIG_VIRTIO_CONSOLE=y
 CONFIG_VIRTIO_INPUT=y
 CONFIG_VIRTIO_NET=y
</pre><p>It is highly recommended to use the X2APIC, which needs virtualization awareness to work under uvmm: </p><pre class="fragment"> CONFIG_X86_X2APIC=y
 CONFIG_PARAVIRT=y
 CONFIG_PARAVIRT_SPINLOCKS=y
</pre><h2>KVM clock for uvmm/amd64 guests </h2>
<p>When executing <a class="el" href="namespaceL4Re.html" title="L4Re C++ Interfaces.">L4Re</a> + uvmm on QEMU, the PIT is not a reliable clock source. The paravirtualized KVM clock provides the guest with a stable clock source.</p>
<p>A KVM clock device is available to the guest if the device tree contains the corresponding entry: </p><pre class="fragment">kvm_clock {
    compatible = "kvm-clock";
    reg = &lt;0x0 0x0 0x0 0x0&gt;;
};
</pre><p>To make use of this clock, the Linux guest must be built with the following configuration options: </p><pre class="fragment">CONFIG_HYPERVISOR_GUEST=y
CONFIG_KVM_GUEST=y
# CONFIG_PTP_1588_CLOCK_KVM is not set
</pre><p>Note: KVM calls other than the KVM clock are unhandled and lead to a failure in the uvmm, e.g. vmcall 0x9 for PTP_1588_CLOCK_KVM.</p>
<p>This is considered a development feature. The KVM clock is not required when running on physical hardware, as TSC calibration via the PIT works as expected there.</p>
<h2>Development notes for amd64 </h2>
<p>When you are developing on Linux using QEMU, please note that nested virtualization support is necessary on your host system to run uvmm guests. Your host Linux version should be 4.12 or greater, <b>excluding 4.20</b>.</p>
<p>Check if your KVM module has nested virtualization enabled via: </p><pre class="fragment">> cat /sys/module/kvm_intel/parameters/nested
Y
</pre><p>In case it shows <span class="tt">N</span> instead of <span class="tt">Y</span>, enable nested virtualization support via: </p><pre class="fragment">modprobe kvm_intel nested=1
</pre><p>On AMD platforms the module name is <span class="tt">kvm_amd</span>.</p>
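<p>For example, the equivalent check and setup on an AMD host would be: </p><pre class="fragment">cat /sys/module/kvm_amd/parameters/nested
modprobe kvm_amd nested=1
</pre>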
<h2>QEMU network setup for a uvmm guest on amd64 </h2>
<p>An example QEMU invocation with an emulated e1000e network card: </p><pre class="fragment">qemu-system-x86_64 -M q35 -cpu host -enable-kvm -device intel-iommu -device e1000e,netdev=net0 -netdev bridge,id=net0,br=virbr0
</pre><p>Here 'virbr0' is the name of the host's bridge device. The line 'allow virbr0' needs to be present in /etc/qemu/bridge.conf. The bridge can either be created via the network manager or via the command line: </p><pre class="fragment">brctl addbr virbr0
ip addr add 192.168.124.1/24 dev virbr0
ip link set up dev virbr0
</pre><p>In the guest Linux, with eth0 as the network device: </p><pre class="fragment">ip addr add 192.168.124.5/24 dev eth0
ip link set up dev eth0
</pre><p>Now the host and guest can ping each other using their respective IPs.</p>
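<p>For a quick connectivity check, the host can be pinged from the guest using the address from the example above: </p><pre class="fragment">ping 192.168.124.1
</pre>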
<p>Of course, uvmm needs to be connected to Io and Io needs a vbus configuration for the uvmm client like this: </p><pre class="fragment">Io.add_vbusses
{
  vm_pci = Io.Vi.System_bus(function ()
    Property.num_msis = 6
    PCI = Io.Vi.PCI_bus(function ()
      pci_net = wrap(Io.system_bus():match("PCI/CC_0200"))
    end)
  end)
}
</pre><h2>QEMU emulated VirtIO devices and IO-MMU on amd64 </h2>
<p>QEMU does not route VirtIO devices through the IO-MMU by default. To use QEMU emulated VirtIO devices, add the <span class="tt">disable-legacy=on,disable-modern=off,iommu_platform=on</span> flags to the option list of the device. The e1000e card in the network example above can be replaced with a virtio-net-pci card like this: </p><pre class="fragment">-device virtio-net-pci,disable-legacy=on,disable-modern=off,
  iommu_platform=on,netdev=net0
</pre><p>For more information on VirtIO devices and their options see <a href="https://wiki.qemu.org/Features/VT-d">https://wiki.qemu.org/Features/VT-d</a>.</p>
<h2>Using the uvmm monitor interface </h2>
<p>Uvmm implements an interface with which parts of the guest's state can be queried and manipulated at runtime. This monitor interface needs to be enabled during compilation as well as during startup of uvmm. This is described in detail below.</p>
<h3>Compiling uvmm with monitor interface support</h3>
<p>To compile uvmm with monitor interface support, pass the <span class="tt">CONFIG_MONITOR=y</span> option during the <span class="tt">make</span> step (or set it in the Makefile.config). This option is available on all architectures, but note that the set of available monitor interface features may vary significantly between them. Also note that the monitor interface will always be disabled in release mode, i.e. if <span class="tt">CONFIG_RELEASE_MODE=y</span>.</p>
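<p>For example, assuming the option is passed directly on the command line of the uvmm build: </p><pre class="fragment">make CONFIG_MONITOR=y
</pre>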
<h3>Enabling the monitor interface at runtime</h3>
<p>When starting a uvmm instance from inside a <span class="tt">ned</span> script using the <span class="tt">vmm.start_vm</span> function, the <span class="tt">mon</span> argument controls whether the monitor interface is enabled at runtime. There are three cases to distinguish:</p>
<ul>
<li><span class="tt">mon=true</span> (default): The monitor interface is enabled, but no server implementing the client side of the monitor interface is started. The monitor interface can still be utilized via <span class="tt">cons</span>, but no readline functionality will be available.</li>
<li><span class="tt">mon='some_binary'</span>: If a string is passed as the value of <span class="tt">mon</span>, the monitor interface is enabled and the string is interpreted as the name of a server binary which implements the client side of the monitor interface. This server is automatically started and has access to a vcon capability named <span class="tt">mon</span> at startup, through which it can make use of the monitor interface. Unless you have written your own server, you should specify <span class="tt">'uvmm_cli'</span>, which is a server implementing a simple readline interface (see the sketch after this list).</li>
<li><span class="tt">mon=false</span>: The monitor interface is disabled at runtime.</li>
</ul>
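<p>A minimal sketch of such a <span class="tt">vmm.start_vm</span> call; all arguments besides <span class="tt">mon</span> are left out here and depend on your setup: </p><pre class="fragment">vmm.start_vm{
  mon = 'uvmm_cli',  -- serve the monitor interface with the readline front end
  -- kernel image, RAM size and the remaining arguments as usual
}
</pre>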
<h3>Using the monitor interface</h3>
<p>If the monitor interface was enabled, you can connect to it via <span class="tt">cons</span> under the name <span class="tt">mon&lt;n&gt;</span>, where <span class="tt">&lt;n&gt;</span> is a unique integer for every uvmm instance that is started with the monitor interface enabled (numbered starting from one in order of the corresponding <span class="tt">vmm.start_vm</span> calls). If <span class="tt">mon='uvmm_cli'</span> was specified, readline functionality such as command completion and history will be available. Type a command and press enter to run it. To obtain a list of all available commands, issue the <span class="tt">help</span> command; to obtain usage information for a specific command <span class="tt">foo</span>, issue <span class="tt">help foo</span>.</p>
<dl class="section note"><dt>Note</dt><dd>Some commands will modify the guest's state. Since it should be obvious to which commands this applies, it is usually not specifically highlighted. Exercise reasonable caution.</dd></dl>
<h3>Using the guest debugger</h3>
<p>The guest debugger provides monitoring functionality akin to a very bare-bones GDB interface, e.g. guest RAM and page table dumping, breakpointing and single stepping. Additional functionality might be added in the future.</p>
<dl class="section note"><dt>Note</dt><dd>The guest debugger is currently still under development and may not be available on all architectures. To check whether the guest debugger is available, check if <span class="tt">help dbg</span> returns usage information.</dd></dl>
<p>If the guest debugger is available, you have to manually load it at runtime using the monitor interface. This saves resources if the guest debugger is not used. To enable the guest debugger, issue the <span class="tt">dbg on</span> monitor command. Once enabled, the guest debugger cannot be disabled again.</p>
<p>To list available guest debugger subcommands, issue <span class="tt">dbg help</span> after <span class="tt">dbg on</span>.</p>
<dl class="section note"><dt>Note</dt><dd>When using SMP, most guest debugger subcommands require you to explicitly specify a guest vcpu using an index starting from zero. </dd></dl>
</div></div><!-- contents -->
</div><!-- PageDoc -->
</div><!-- doc-content -->
<div id="page-nav" class="page-nav-panel">
<div id="page-nav-resize-handle"></div>
<div id="page-nav-tree">
<div id="page-nav-contents">
</div><!-- page-nav-contents -->
</div><!-- page-nav-tree -->
</div><!-- page-nav -->
</div><!-- container -->
<!-- HTML footer for doxygen 1.9.1-->
<!-- start footer part -->
<div id="nav-path" class="navpath"><!-- id is needed for treeview function! -->
<ul>
<li class="navelem"><a href="l4re_servers.html">L4Re Servers</a></li>
<li class="footer">Generated on <span class="timestamp"></span> for L4Re Operating System Framework by <a href="https://www.doxygen.org/index.html"><img class="footer" src="doxygen.svg" width="104" height="31" alt="doxygen"/></a> 1.15.0 </li>
</ul>
</div>
</body>
</html>