
Re: [Xen-devel] [PATCH RFC v2] Add SUPPORT.md



On Mon, Sep 11, 2017 at 06:01:59PM +0100, George Dunlap wrote:
> +## Toolstack
> +
> +### xl
> +
> +    Status: Supported
> +
> +### Direct-boot kernel image format
> +
> +    Supported, x86: bzImage

ELF should also be listed here.

> +    Supported, ARM32: zImage
> +    Supported, ARM64: Image
> +
> +Formats which the toolstack accepts for direct-boot kernels

IMHO it would be good to provide references to the specs; for ELF that
should be:

http://refspecs.linuxbase.org/elf/elf.pdf
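
For illustration, a minimal direct-boot fragment in an xl domain config
(guest name and paths below are made up) would be something like:

    kernel  = "/var/lib/xen/images/guest1/vmlinuz"
    ramdisk = "/var/lib/xen/images/guest1/initrd.img"
    extra   = "root=/dev/xvda1 console=hvc0"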

> +### Qemu based disk backend (qdisk) for xl
> +
> +    Status: Supported
> +
> +### Open vSwitch integration for xl
> +
> +    Status: Supported

Status, Linux: Supported

I haven't played with vswitch on FreeBSD at all.

> +
> +### systemd support for xl
> +
> +    Status: Supported
> +
> +### JSON output support for xl
> +
> +    Status: Experimental
> +
> +Output of information in machine-parseable JSON format
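
If this refers to things like "xl list -l", which dumps domain
configuration as JSON, a consumer sketch (path made up):

    # Dump the configuration of all running domains as JSON,
    # suitable for machine parsing
    xl list -l > /tmp/domains.json
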
> +
> +### AHCI support for xl
> +
> +    Status, x86: Supported
> +
> +### ACPI guest
> +
> +    Status, x86 HVM: Supported
> +    Status, ARM: Tech Preview

Status, x86 PVH: Tech Preview

> +
> +### PVUSB support for xl
> +
> +    Status: Supported
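
As a usage sketch (domain name made up), PVUSB hotplug with xl looks
something like:

    # Create a PV USB controller in the guest, then pass through
    # the host device at bus 1, device 3 (as reported by lsusb)
    xl usbctrl-attach guest1 type=pv
    xl usbdev-attach guest1 type=hostdev hostbus=1 hostaddr=3
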
> +
> +### HVM USB passthrough for xl
> +
> +    Status, x86: Supported
> +
> +### QEMU backend hotplugging for xl
> +
> +    Status: Supported

What's this exactly? Is it referring to hot-adding PV disks and NICs?
If so, it shouldn't specifically reference xl; the same can be done
with blkback or netback, for example.

> +### Virtual cpu hotplug
> +
> +    Status: Supported
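
E.g. (domain name and count made up):

    # Bring the guest to 4 online vCPUs, bounded by the maxvcpus
    # value set at domain creation
    xl vcpu-set guest1 4
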
> +
> +## Toolstack/3rd party
> +
> +### libvirt driver for xl
> +
> +    Status: Supported, Security support external
> +
> +## Debugging, analysis, and crash post-mortem
> +
> +### gdbsx
> +
> +    Status, x86: Supported
> +
> +Debugger to debug ELF guests
> +
> +### Guest serial console
> +
> +    Status: Supported
> +
> +Logs key hypervisor and Dom0 kernel events to a file
> +
> +### Soft-reset for PV guests
> +
> +    Status: Supported
> +
> +Soft-reset allows a new kernel to start 'from scratch' with a fresh VM state,
> +but with all the memory from the previous state of the VM intact.
> +This is primarily designed to allow "crash kernels",
> +which can do core dumps of memory to help with debugging in the event of a crash.
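
For reference, I believe opting in is done with the "soft-reset" action
in the domain config, e.g.:

    # Reuse the domain for a crash kernel instead of destroying it
    on_crash = "soft-reset"
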
> +
> +### xentrace
> +
> +    Status, x86: Supported
> +
> +Tool to capture Xen trace buffer data
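
Typical usage would be something like (event mask and paths made up):

    # Capture trace records for all event classes, then render the
    # binary trace with the formats file from tools/xentrace/
    xentrace -e 0xffffffff /tmp/trace.bin
    xentrace_format formats < /tmp/trace.bin
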
> +
> +### gcov
> +
> +    Status: Supported, Not security supported
> +
> +Export hypervisor coverage data suitable for analysis by gcov or lcov.
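
IIRC the flow is roughly (paths made up):

    # Reset the in-hypervisor counters, run a workload, then pull
    # the coverage blob out and split it into per-file data
    xencov reset
    xencov read > /tmp/coverage.dat
    xencov_split /tmp/coverage.dat
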
> +
> +## Memory Management
> +
> +### Memory Ballooning
> +
> +    Status: Supported
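
E.g. (domain name made up):

    # Balloon the guest down to 512 MiB; it can be ballooned back
    # up later, bounded by its configured maxmem
    xl mem-set guest1 512m
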
> +
> +### Memory Sharing
> +
> +    Status, x86 HVM: Tech Preview
> +    Status, ARM: Tech Preview
> +
> +Allow sharing of identical pages between guests
> +
> +### Memory Paging
> +
> +    Status, x86 HVM: Experimental
> +
> +Allow pages belonging to guests to be paged to disk
> +
> +### Transcendent Memory
> +
> +    Status: Experimental
> +
> +[XXX Add description]
> +
> +### Alternative p2m
> +
> +    Status, x86 HVM: Tech Preview
> +    Status, ARM: Tech Preview
> +
> +Allows external monitoring of hypervisor memory
> +by maintaining multiple physical to machine (p2m) memory mappings.
> +
> +## Resource Management
> +
> +### CPU Pools
> +
> +    Status: Supported
> +
> +Groups physical cpus into distinct groups called "cpupools",
> +with each pool having the capability of using different schedulers and
> +scheduling properties.
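
A usage sketch (pool name, CPUs and domain made up; config syntax from
memory):

    # pool1.cfg
    name  = "pool1"
    sched = "credit2"
    cpus  = ["4", "5", "6", "7"]

    # Create the pool and move a domain into it
    xl cpupool-create pool1.cfg
    xl cpupool-migrate guest1 pool1
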
> +
> +### Credit Scheduler
> +
> +    Status: Supported
> +
> +The default scheduler, which is a weighted proportional fair share virtual
> +CPU scheduler.
> +
> +### Credit2 Scheduler
> +
> +    Status: Supported
> +
> +Credit2 is a general purpose scheduler for Xen,
> +designed with particular focus on fairness, responsiveness and scalability
> +
> +### RTDS based Scheduler
> +
> +    Status: Experimental
> +
> +A soft real-time CPU scheduler built to provide guaranteed CPU capacity to
> +guest VMs on SMP hosts
> +
> +### ARINC653 Scheduler
> +
> +    Status: Supported, Not security supported
> +
> +A periodically repeating fixed timeslice scheduler. Multicore support is not
> +yet implemented.
> +
> +### Null Scheduler
> +
> +    Status: Experimental
> +
> +A very simple, very static scheduling policy 
> +that always schedules the same vCPU(s) on the same pCPU(s). 
> +It is designed for maximum determinism and minimum overhead
> +on embedded platforms.
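
For completeness, selecting it should just be a matter of the scheduler
name, either per-cpupool or on the hypervisor command line, e.g.:

    # Xen command line fragment: make null the default scheduler
    sched=null
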
> +
> +### Numa scheduler affinity
> +
> +    Status, x86: Supported
> +
> +Enables NUMA-aware scheduling in Xen
> +
> +## Scalability
> +
> +### 1GB/2MB super page support
> +
> +    Status: Supported

This needs something like:

Status, x86 HVM/PVH: Supported

IIRC on ARM page sizes are different (64K?)

> +
> +### x86/PV-on-HVM
> +
> +    Status: Supported
> +
> +This is a useful label for a set of hypervisor features
> +which add paravirtualized functionality to HVM guests 
> +for improved performance and scalability.  
> +This includes exposing event channels to HVM guests.
> +
> +### x86/Deliver events to PVHVM guests using Xen event channels
> +
> +    Status: Supported

I think this should be labeled as "x86/HVM deliver guest events using
event channels", and the x86/PV-on-HVM section removed.

> +
> +## High Availability and Fault Tolerance
> +
> +### Live Migration, Save & Restore
> +
> +    Status, x86: Supported
> +
> +### Remus Fault Tolerance
> +
> +    Status: Experimental
> +
> +### COLO Manager
> +
> +    Status: Experimental
> +
> +### x86/vMCE
> +
> +    Status: Supported
> +
> +Forward Machine Check Exceptions to appropriate guests
> +
> +## Virtual driver support, guest side
> +
> +[XXX Consider adding 'frontend' and 'backend' to the titles in these two
> +sections to make it clearer]
> +
> +### Blkfront
> +
> +    Status, Linux: Supported
> +    Status, FreeBSD: Supported, Security support external
> +    Status, Windows: Supported

Status, NetBSD: Supported, Security support external

> +
> +Guest-side driver capable of speaking the Xen PV block protocol
> +
> +### Netfront
> +
> +    Status, Linux: Supported
> +    Status, Windows: Supported
> +    Status, FreeBSD: Supported, Security support external
> +    Status, NetBSD: Supported, Security support external
> +    Status, OpenBSD: Supported, Security support external
> +
> +Guest-side driver capable of speaking the Xen PV networking protocol
> +
> +### Xen Framebuffer
> +
> +    Status, Linux (xen-fbfront): Supported
> +
> +Guest-side driver capable of speaking the Xen PV Framebuffer protocol
> +
> +### Xen Console
> +
> +    Status, Linux (hvc_xen): Supported
> +    Status, Windows: Supported
> +
> +Guest-side driver capable of speaking the Xen PV console protocol

Status, FreeBSD: Supported, Security support external
Status, NetBSD: Supported, Security support external

> +
> +### Xen PV keyboard
> +
> +    Status, Linux (xen-kbdfront): Supported
> +    Status, Windows: Supported
> +
> +Guest-side driver capable of speaking the Xen PV keyboard protocol
> +
> +[XXX 'Supported' here depends on the version we ship in 4.10 having some
> +fixes]
> +
> +### Xen PVUSB protocol
> +
> +    Status, Linux: Supported
> +
> +### Xen PV SCSI protocol
> +
> +    Status, Linux: Supported, with caveats

Should both of the above items be labeled with frontend/backend?

And do we really need the 'Xen' prefix in all the items? Seems quite
redundant.

> +
> +NB that while the pvSCSI frontend is in Linux and tested regularly,
> +there is currently no xl support.
> +
> +### Xen TPMfront

PV TPM frontend

> +
> +    Status, Linux (xen-tpmfront): Tech Preview
> +
> +Guest-side driver capable of speaking the Xen PV TPM protocol
> +
> +### Xen 9pfs frontend
> +
> +    Status, Linux: Tech Preview
> +
> +Guest-side driver capable of speaking the Xen 9pfs protocol
> +
> +### PVCalls frontend
> +
> +    Status, Linux: Tech Preview
> +
> +Guest-side driver capable of making pv system calls

Didn't we merge the backend, but not the frontend?

> +
> +## Virtual device support, host side
> +
> +### Blkback
> +
> +    Status, Linux (blkback): Supported
> +    Status, FreeBSD (blkback): Supported
                                           ^, security support external

Status, NetBSD (xbdback): Supported, security support external

> +    Status, QEMU (xen_disk): Supported
> +    Status, Blktap2: Deprecated
> +
> +Host-side implementations of the Xen PV block protocol
> +
> +### Netback
> +
> +    Status, Linux (netback): Supported
> +    Status, FreeBSD (netback): Supported

Status, NetBSD (xennetback): Supported

Both FreeBSD & NetBSD: security support external.

> +
> +Host-side implementations of Xen PV network protocol
> +
> +### Xen Framebuffer
> +
> +    Status, Linux: Supported

Frontend?

> +    Status, QEMU: Supported

Backend?

I don't recall Linux having a backend for the pv fb.

> +
> +Host-side implementation of the Xen PV framebuffer protocol
> +
> +### Xen Console (xenconsoled)

Console backend

> +
> +    Status: Supported
> +
> +Host-side implementation of the Xen PV console protocol
> +
> +### Xen PV keyboard

PV keyboard backend

> +
> +    Status, QEMU: Supported
> +
> +Host-side implementation of the Xen PV keyboard protocol
> +
> +### Xen PV USB

PV USB Backend

> +
> +    Status, Linux: Experimental

? The backend is in QEMU.

> +    Status, QEMU: Supported
> +
> +Host-side implementation of the Xen PV USB protocol
> +
> +### Xen PV SCSI protocol

Does this refer to the backend or the frontend?

> +
> +    Status, Linux: Supported, with caveats
> +
> +NB that while the pvSCSI backend is in Linux and tested regularly,
> +there is currently no xl support.
> +
> +### Xen PV TPM
> +
> +    Status: Tech Preview

This seems to be duplicated with the item "Xen TPMfront".

> +
> +### Xen 9pfs

backend

> +
> +    Status, QEMU: Tech Preview
> +
> +### PVCalls
> +
> +    Status, Linux: Tech Preview

? backend, frontend?

> +
> +### Online resize of virtual disks
> +
> +    Status: Supported

I would remove this.

> +## Security
> +
> +### Driver Domains
> +
> +    Status: Supported
> +
> +### Device Model Stub Domains
> +
> +    Status: Supported, with caveats
> +
> +Vulnerabilities of a device model stub domain to a hostile driver domain are
> +excluded from security support.
> +
> +### KCONFIG Expert
> +
> +    Status: Experimental
> +
> +### Live Patching
> +
> +    Status, x86: Supported
> +    Status, ARM: Experimental
> +
> +Compile time disabled
> +
> +### Virtual Machine Introspection
> +
> +    Status, x86: Supported, not security supported
> +
> +### XSM & FLASK
> +
> +    Status: Experimental
> +
> +Compile time disabled
> +
> +### XSM & FLASK support for IS_PRIV
> +
> +    Status: Experimental
> +
> +Compile time disabled
> +
> +## Hardware
> +
> +### x86/Nested PV
> +
> +    Status, x86 HVM: Tech Preview
> +
> +This means running a Xen hypervisor inside an HVM domain,
> +with support for PV L2 guests only
> +(i.e., hardware virtualization extensions not provided
> +to the guest).
> +
> +This works, but has performance limitations
> +because the L1 dom0 can only access emulated L1 devices.
> +
> +### x86/Nested HVM
> +
> +    Status, x86 HVM: Experimental
> +
> +This means running a Xen hypervisor inside an HVM domain,
> +with support for running both PV and HVM L2 guests
> +(i.e., hardware virtualization extensions provided
> +to the guest).
> +
> +### x86/HVM iPXE
> +
> +    Status: Supported, with caveats
> +
> +Booting a guest via PXE.
> +PXE inherently places full trust of the guest in the network,
> +and so should only be used
> +when the guest network is under the same administrative control
> +as the guest itself.
> +
> +### x86/HVM BIOS
> +
> +    Status: Supported
> +
> +Booting a guest via guest BIOS firmware
> +
> +### x86/HVM EFI
> +
> +     Status: Supported
> +
> +Booting a guest via guest EFI firmware

Maybe this is too generic? We certainly don't support ROMBIOS with
qemu-upstream, or SeaBIOS with qemu-trad.

> +### x86/Physical CPU Hotplug
> +
> +    Status: Supported
> +
> +### x86/Physical Memory Hotplug
> +
> +    Status: Supported
> +
> +### x86/PCI Passthrough PV
> +
> +    Status: Supported, Not security supported
> +
> +PV passthrough cannot be done safely.
> +
> +[XXX Not even with an IOMMU?]
> +
> +### x86/PCI Passthrough HVM
> +
> +    Status: Supported, with caveats
> +
> +Many hardware device and motherboard combinations cannot be used safely.
> +The XenProject will support bugs in PCI passthrough for Xen,
> +but the user is responsible to ensure that the hardware combination they use
> +is sufficiently secure for their needs,
> +and should assume that any combination is insecure
> +unless they have reason to believe otherwise.
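
As a usage sketch (BDF and domain name made up), the xl flow is:

    # Make the device assignable (binds it to pciback), then
    # attach it to the guest
    xl pci-assignable-add 03:00.0
    xl pci-attach guest1 03:00.0

or statically in the domain config:

    pci = [ '03:00.0' ]
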
> +
> +### ARM/Non-PCI device passthrough
> +
> +    Status: Supported
> +
> +### x86/Advanced Vector eXtension
> +
> +    Status: Supported
> +
> +### vPMU
> +
> +    Status, x86: Supported, Not security supported
> +
> +Virtual Performance Monitoring Unit for HVM guests
> +
> +Disabled by default (enable with hypervisor command line option).
> +This feature is not security supported: see
> +http://xenbits.xen.org/xsa/advisory-163.html
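
For reference, I believe enabling it looks something like this on the
Xen command line (exact syntax from memory):

    # GRUB_CMDLINE_XEN fragment: enable vPMU for guests
    vpmu=1
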
> +
> +### Intel Platform QoS Technologies
> +
> +    Status: Tech Preview
> +
> +### ARM/ACPI (host)
> +
> +    Status: Experimental

"ACPI host" (since we already have "ACPI guest" above).

Status, ARM: Experimental
Status, x86 PV: Supported
Status, x86 PVH: Experimental

> +### ARM/SMMUv1
> +
> +    Status: Supported
> +
> +### ARM/SMMUv2
> +
> +    Status: Supported
> +
> +### ARM/GICv3 ITS
> +
> +    Status: Experimental
> +
> +Extension to the GICv3 interrupt controller to support MSI.
> +
> +### ARM: 16K and 64K pages in guests
> +
> +    Status: Supported, with caveats
> +
> +No support for QEMU backends in a 16K or 64K domain.

Needs to be merged with the "1GB/2MB super page support"?

Thanks, Roger.
