
Re: [Xen-devel] [PATCH v4 0/9] toolstack-based approach to pvhvm guest kexec

Wei Liu <wei.liu2@xxxxxxxxxx> writes:

> On Wed, Dec 03, 2014 at 06:16:12PM +0100, Vitaly Kuznetsov wrote:
>> Changes from RFCv3:
>> This is the first non-RFC series as no major concerns were expressed.
>> I'm trying to address Jan's comments. Changes are:
>> - Move from XEN_DOMCTL_set_recipient to XEN_DOMCTL_devour (I don't
>>   really like the name but nothing more appropriate came to my mind),
>>   which incorporates the former XEN_DOMCTL_set_recipient and
>>   XEN_DOMCTL_destroydomain to prevent the original domain from
>>   changing its allocations during the transfer procedure.
>> - Check in free_domheap_pages() that assign_pages() succeeded.
>> - Change printk() in free_domheap_pages().
>> - DOMDYING_locked state was introduced to support XEN_DOMCTL_devour.
>> - xc_domain_soft_reset() got simplified a bit. Now we just wait for
>>   the original domain to die or lose all its pages.
>> - rebased on top of the current master branch.
>> Changes from RFC/WIPv2:
>> Here is a slightly different approach to memory reassignment. Instead
>> of introducing a new (and very doubtful) XENMEM_transfer operation, we
>> introduce a simple XEN_DOMCTL_set_recipient operation and do
>> everything in the free_domheap_pages() handler, utilizing the normal
>> domain destroy path. This is better because:
>> - The approach is general enough
>> - All memory pages are usually freed when the domain is destroyed
>> - No special grants handling is required
>> - Better supportability
>> With regards to PV:
>> Though XEN_DOMCTL_set_recipient works for both PV and HVM, this
>> patchset does not bring PV kexec/kdump support. xc_domain_soft_reset()
>> is limited to working with HVM domains only. The main reason is that
>> while it is (in theory) possible to save the p2m and rebuild it with
>> the new domain, that would only allow us to resume execution from
>> where we stopped. If we want to execute a new kernel we need to build
>> the same kernel/initrd/bootstrap_pagetables/... structure we build to
>> boot a PV domain initially. That, however, would destroy the original
>> domain's memory, thus making kdump impossible. To make everything work,
>> additional support from kexec userspace/the linux kernel is required,
>> and I'm not sure it makes sense to implement all this stuff in the
>> light of PVH.
> What would happen if you soft reset a PV guest? At the very least soft
> reset should be explicitly forbidden in PV case.

Well, nothing particularly bad happens from the hypervisor's point of
view. But when a PV guest dies its page tables are destroyed (and we
lose that knowledge anyway). It would be possible for the toolstack to
start from the beginning - build the bootstrap page tables, put the
kernel/initrd in place and start everything. I however don't see any
value in doing so: our memory gets destroyed and it ends up being the
same as a plain reboot.

Why do we need to forbid the call for PV? (xc_domain_soft_reset()
already forbids PV).
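
To make the intended flow a bit more concrete, here is a rough sketch of
the toolstack side. This is illustrative only, not the actual code from
the series: devour_domain() is a made-up stand-in for issuing the new
domctl, and the real logic lives in xc_domain_soft_reset().

#include <unistd.h>
#include <xenctrl.h>

/* Hypothetical wrapper around XEN_DOMCTL_devour, for illustration. */
int devour_domain(xc_interface *xch, uint32_t source_dom, uint32_t dest_dom);

/*
 * Sketch of a soft reset: ask the hypervisor to tear the source (HVM)
 * domain down so that every page it frees is assigned to the destination
 * domain instead of going back to the heap, then wait until the source
 * domain is gone or owns no pages.
 */
static int soft_reset_sketch(xc_interface *xch, uint32_t source_dom,
                             uint32_t dest_dom)
{
    xc_dominfo_t info;

    /* HVM only for now; PV domains are rejected (see above). */
    if (xc_domain_getinfo(xch, source_dom, 1, &info) != 1 ||
        info.domid != source_dom || !info.hvm)
        return -1;

    if (devour_domain(xch, source_dom, dest_dom))
        return -1;

    /* Wait for the source domain to die or to lose all its pages. */
    while (xc_domain_getinfo(xch, source_dom, 1, &info) == 1 &&
           info.domid == source_dom && info.nr_pages != 0)
        usleep(1000);

    return 0;
}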

>> Original description:
>> When a PVHVM linux guest performs kexec there are lots of things which
>> require taking care of:
>> - shared info, vcpu_info
>> - grants
>> - event channels
>> - ...
> Is there a complete list?

Konrad tried to assemble it here:

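As a reminder of the kind of state involved: vcpu_info placement, for
example, is registered once early at boot and the hypervisor does not
allow the mapping to be changed afterwards, which is exactly what a
kexec'ed kernel would need to do. A rough guest-side sketch, loosely
modelled on what Linux does in xen_vcpu_setup() (not code from this
series, error handling omitted):

#include <linux/percpu.h>
#include <linux/mm.h>              /* offset_in_page() */
#include <asm/xen/hypercall.h>     /* HYPERVISOR_vcpu_op() */
#include <asm/xen/page.h>          /* virt_to_mfn() */
#include <xen/interface/vcpu.h>    /* VCPUOP_register_vcpu_info */

static DEFINE_PER_CPU(struct vcpu_info, xen_vcpu_info);

/* Register the per-VCPU vcpu_info area; this can only be done once. */
static int register_vcpu_info(int cpu)
{
    struct vcpu_register_vcpu_info info = {
        .mfn    = virt_to_mfn(&per_cpu(xen_vcpu_info, cpu)),
        .offset = offset_in_page(&per_cpu(xen_vcpu_info, cpu)),
    };

    return HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, cpu, &info);
}
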
>> Instead of taking care of all these things we can rebuild the domain
>> performing kexec from scratch, doing a so-called soft-reboot.
>> The idea was suggested by David Vrabel, Jan Beulich, and Konrad
>> Rzeszutek Wilk.
> As this approach requires the toolstack to do complex interaction with
> the hypervisor and to preserve / throw away a bunch of state, I think
> the whole procedure should be documented.

Sure. Where would you expect such a doc to appear?

> It would also be helpful if you link to previous discussions in your
> cover letter.

Sure, will do next time. For now:
on resetting VCPU_info (and that's where 'rebuild everything with the
toolstack solution' was suggested):
Previous versions:

EVTCHNOP_reset (got merged):

This patch series:

> Wei.

