
Re: [Xen-devel] RFC: very initial PVH design document



On 22/08/14 17:13, Jan Beulich wrote:
>>>> On 22.08.14 at 16:55, <roger.pau@xxxxxxxxxx> wrote:
>> It is important to highlight the following notes:
>>
>>   * XEN_ELFNOTE_ENTRY: contains the memory address of the kernel entry point.
> 
> ... the virtual memory address ...
> 
>>   * XEN_ELFNOTE_HYPERCALL_PAGE: contains the memory address of the hypercall
>>     page inside of the guest kernel (this memory region will be filled by Xen
>>     prior to booting).
> 
> Same here.
> 
>>   * XEN_ELFNOTE_FEATURES: contains the list of features supported by the
>>     kernel.
>>     In this case the kernel is only able to boot as a PVH guest, but those
> 
> "In this case" is not clear what it relates to. You should probably say
> something like "In the example above".
> 
>> Xen will jump into the kernel entry point defined in `XEN_ELFNOTE_ENTRY` with
>> paging enabled (either long or protected mode depending on the kernel bitness)
> 
> If I understand right how 32-bit PVH is intended to function, "protected
> mode" is insufficient to state here, it ought to be "paged protect mode"
> or "protected mode with paging turned on".

Thanks, will fix the comments above.
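
For reference, here is a rough sketch of how a kernel could emit those notes
from C. Only the note type numbers come from xen/include/public/elfnote.h;
the helper macro and the symbol names (pvh_entry, hypercall_page) are
made-up placeholders, and a 64-bit kernel is assumed:

    /* Hypothetical kernel entry point and hypercall page. */
    extern void pvh_entry(void);
    extern char hypercall_page[4096];   /* filled in by Xen before boot */

    #define _STR(x) #x
    #define STR(x)  _STR(x)

    /* Emit one "Xen" ELF note whose descriptor is a 64-bit address. */
    #define XEN_NOTE_PTR(type, sym)                                       \
        asm(".pushsection .note.Xen, \"a\", @note\n\t"                    \
            ".balign 4\n\t"                                               \
            ".long 4, 8, " STR(type) "\n\t" /* namesz, descsz, type */    \
            ".asciz \"Xen\"\n\t"            /* note name */               \
            ".balign 4\n\t"                                               \
            ".quad " #sym "\n\t"            /* descriptor: the address */ \
            ".popsection")

    XEN_NOTE_PTR(1, pvh_entry);        /* XEN_ELFNOTE_ENTRY */
    XEN_NOTE_PTR(2, hypercall_page);   /* XEN_ELFNOTE_HYPERCALL_PAGE */

    /*
     * XEN_ELFNOTE_FEATURES (type 10) carries a NUL-terminated string
     * instead of an address and is emitted the same way, just with an
     * .asciz descriptor and the matching descsz.
     */

Existing kernels usually do this from assembly with an ELFNOTE-style macro;
the above is only meant to make the note layout explicit in the document.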

>> ## Physical devices ##
>>
>> When running as Dom0 the guest OS has the ability to interact with the
>> physical devices present in the system. Note that PVH guests require a
>> working IOMMU in order to interact with physical devices.
>>
>> The first step in order to manipulate the devices is to make Xen aware of
>> them. Since all the hardware description on x86 comes from ACPI, Dom0 is
>> responsible for parsing the ACPI tables and notifying Xen about the devices
>> it finds. This is done with the `PHYSDEVOP_pci_device_add` hypercall.
>>
>> *TODO*: explain how to register the different kinds of PCI devices, such as
>> devices with virtual functions.
> 
> I think both the second paragraph and the TODO don't belong here,
> as there's no difference to PV, and this shouldn't be subject of this
> document.

IMHO, I think of this document as a reference that could be used by
people when trying to port OSes to PVH, and I wanted it to be complete.
Also, I don't see any of these hypercalls being documented, either in
the header files or in a document in the repository, which makes their
usage completely obscure.
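
To give an idea of the level of detail I'd like: a minimal sketch of the
hypercall, assuming Linux-style wrappers and my reading of
xen/include/public/physdev.h (report_pci_device() is a made-up helper name),
would be something like:

    #include <linux/types.h>
    #include <xen/interface/physdev.h>   /* struct physdev_pci_device_add */
    #include <asm/xen/hypercall.h>       /* HYPERVISOR_physdev_op() */

    /* Tell Xen about the PCI device at seg:bus:devfn. */
    static int report_pci_device(uint16_t seg, uint8_t bus, uint8_t devfn)
    {
        struct physdev_pci_device_add add = {
            .seg   = seg,
            .bus   = bus,
            .devfn = devfn,
            /*
             * For a virtual function one would also set XEN_PCI_DEV_VIRTFN
             * in .flags and fill in .physfn -- exactly the part the TODO
             * above is meant to document.
             */
            .flags = 0,
        };

        return HYPERVISOR_physdev_op(PHYSDEVOP_pci_device_add, &add);
    }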

> 
>> ## Interrupts ##
>>
>> All interrupts on PVH guests are routed over event channels, see
>> [Event Channel Internals][event_channels] for more detailed information about
>> event channels. In order to inject interrupts into the guest an IDT vector is
>> used. This is the same mechanism used on PVHVM guests, and allows having
>> per-cpu interrupts that can be used to deliver timers or IPIs.
>>
>> In order to register the callback IDT vector the `HVMOP_set_param` hypercall
>> is used with the following values:
>>
>>     domid = DOMID_SELF
>>     index = HVM_PARAM_CALLBACK_IRQ
>>     value = (0x2 << 56) | vector_value
>>
>> In order to know which event channel has fired, we need to look at the
>> information provided in the `shared_info` structure: the `evtchn_pending`
>> array is a bitmap recording which event channels have fired. Event channels
>> can also be masked by setting the bit corresponding to their port in the
>> `shared_info->evtchn_mask` bitmap.
>>
>> *TODO*: provide a reference about how to interact with FIFO event channels?
> 
> Or better don't be event-channel-ABI specific in the paragraph right
> before the TODO, as this again isn't PVH-specific? 

Ack, will probably remove it in next version unless someone has a
different opinion.
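
That said, for the archive, the registration described above boils down to
something like the sketch below. It assumes Linux-style hypercall wrappers;
the bitmap scan is deliberately simplified (no atomics, no use of the
per-vcpu evtchn_pending_sel selector) and handle_event_channel() is a
made-up name:

    #include <xen/interface/xen.h>          /* DOMID_SELF, struct shared_info */
    #include <xen/interface/hvm/hvm_op.h>   /* HVMOP_set_param, xen_hvm_param */
    #include <xen/interface/hvm/params.h>   /* HVM_PARAM_CALLBACK_IRQ */
    #include <asm/xen/hypercall.h>          /* HYPERVISOR_hvm_op() */

    #define CALLBACK_VIA_TYPE_VECTOR 2ULL   /* the 0x2 << 56 encoding above */

    /* Ask Xen to deliver event channel upcalls through an IDT vector. */
    static int register_callback_vector(unsigned int vector)
    {
        struct xen_hvm_param p = {
            .domid = DOMID_SELF,
            .index = HVM_PARAM_CALLBACK_IRQ,
            .value = (CALLBACK_VIA_TYPE_VECTOR << 56) | vector,
        };

        return HYPERVISOR_hvm_op(HVMOP_set_param, &p);
    }

    extern void handle_event_channel(unsigned int port);    /* hypothetical */

    /* Naive scan of the pending bitmap, honouring the mask bitmap. */
    static void poll_event_channels(struct shared_info *s)
    {
        unsigned int word, bit;
        unsigned int width = sizeof(s->evtchn_pending[0]) * 8;
        unsigned int words = sizeof(s->evtchn_pending) /
                             sizeof(s->evtchn_pending[0]);

        for (word = 0; word < words; word++) {
            unsigned long pending = s->evtchn_pending[word] &
                                    ~s->evtchn_mask[word];

            while (pending) {
                bit = __builtin_ctzl(pending);
                pending &= pending - 1;              /* clear lowest bit */
                handle_event_channel(word * width + bit);
            }
        }
    }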

> (There are more
> such items further down which I'll not further comment on; actually
> it looks like very little of the rest of the document is really on PVH.)

See the reply above regarding the documentation of shared interfaces
used by both PV and PVH.

Roger.

