
Re: [Xen-devel] [PATCH v2 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server



On Wed, Apr 20, 2016 at 3:57 PM, Paul Durrant <Paul.Durrant@xxxxxxxxxx> wrote:
>> -----Original Message-----
>> From: George Dunlap
>> Sent: 20 April 2016 15:50
>> To: Yu, Zhang
>> Cc: Paul Durrant; xen-devel@xxxxxxxxxxxxx; Kevin Tian; Jan Beulich; Andrew
>> Cooper; Tim (Xen.org); Lv, Zhiyuan; jun.nakajima@xxxxxxxxx
>> Subject: Re: [Xen-devel] [PATCH v2 3/3] x86/ioreq server: Add HVMOP to
>> map guest ram with p2m_ioreq_server to an ioreq server
>>
>> On Tue, Apr 19, 2016 at 12:59 PM, Yu, Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
>> wrote:
>> >>> So what about the VM suspend case you mentioned above? Will that
>> >>> trigger the unmapping of the ioreq server? Could the device model
>> >>> also take the role of changing the p2m type back in such a case?
>> >>
>> >>
>> >> Yes. The device model has to be told by the toolstack that the VM is
>> >> suspending; otherwise it can't disable the ioreq server, which puts the
>> >> shared ioreq pages back into the guest p2m. If that's not done then the
>> >> pages will be leaked.
>> >>
>> >>>
>> >>> It would be much simpler if the hypervisor side did not need to provide
>> >>> the p2m resetting logic, and we could then support live migration at
>> >>> the same time. :)
>> >>>
>> >>
>> >> That really should not be the hypervisor's job.
>> >>
>> >>    Paul
>> >>
>> >
>> > Oh. So let's just remove the p2m type recalculation code from this
>> > patch; then there's no need to call p2m_change_entry_type_global(),
>> > and no need to worry about the log-dirty part.
>> >
>> > George, do you think this is acceptable?
>>
>> Sorry, just to make sure I understand your intentions:
>>
>> 1. The ioreq servers will be responsible for changing all the
>> p2m_ioreq_server types back to p2m_ram_rw before detaching
>> 2. Xen will refuse to set logdirty mode if there are any attached ioreq 
>> servers
>>
>> The one problem with #1 is what to do if the ioreq server is buggy or
>> killed, and forgets (or is unable) to clean up its ioreq_server entries.
>>
>
> If we ever want to meaningfully migrate a VM with XenGT enabled then I think
> it has to be possible to migrate the p2m_ioreq_server page types as-is. I'd
> expect a new instance of the xengt emulator to claim them on the far end,
> and any emulator state to be transferred in a similar way to that in which
> qemu state is transferred, such that vgpu emulation can be continued after
> the migration. After all, it's not like we magically reset the type of
> mmio-dm pages on migration. It is just expected that a new instance of QEMU
> is spawned to take care of them on the far end when the VM is unpaused.

I don't see the implication you're trying to make regarding the
attach/detach interface.

In any case, don't the device models on the far side have to
re-register which IO ranges they're interested in?  Shouldn't the
device model on the far side re-register the pfns it wants to watch
as well?  That again seems like the most robust solution, rather than
assuming that the pfns you expect to have been marked are already
marked.  Doing otherwise risks bugs where 1) pages you think are being
watched aren't, and 2) pages you don't care about are being watched
(and you don't think to un-watch them, because you didn't think you'd
watched them in the first place).
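
For concreteness, here's roughly what I'd expect the far-side device
model to do on resume.  This is only a sketch: xc_hvm_set_mem_type()
exists today, but xc_hvm_map_mem_type_to_ioreq_server(), the
HVMMEM_ioreq_server type, and the access flag are from this series'
proposal, so their exact shapes are an assumption; the saved pfn list
is whatever the emulator carries in its own state.

    /* Sketch: re-attach the type and re-mark watched pages on the
     * far side after migration.  HVMMEM_ioreq_server,
     * xc_hvm_map_mem_type_to_ioreq_server() and the flag name are
     * per this series' proposal and may not match the final
     * interface. */
    static int reclaim_watched_pfns(xc_interface *xch, domid_t dom,
                                    ioservid_t id,
                                    const xen_pfn_t *pfns,
                                    unsigned int nr)
    {
        unsigned int i;
        int rc;

        /* Claim write emulation for all p2m_ioreq_server pages. */
        rc = xc_hvm_map_mem_type_to_ioreq_server(xch, dom, id,
                                                 HVMMEM_ioreq_server,
                                                 HVMOP_IOREQ_MEM_ACCESS_WRITE);
        if ( rc < 0 )
            return rc;

        /* Re-mark every page we were watching before migration. */
        for ( i = 0; i < nr; i++ )
        {
            rc = xc_hvm_set_mem_type(xch, dom, HVMMEM_ioreq_server,
                                     pfns[i], 1);
            if ( rc < 0 )
                return rc;
        }

        return 0;
    }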

On a slightly different topic: come to think of it, there's no need to
switch the p2m type for ioreq_server when we turn on logdirty mode; we
just have to call paging_mark_dirty() if it's a write, don't we?
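
Something like the following in the nested-page-fault path, I mean
(hand-waving: the variable names are as in hvm_hap_nested_page_fault(),
and I'm glossing over where exactly the hook would sit):

    /* Sketch: with logdirty active, a write to a p2m_ioreq_server
     * page is still forwarded to the ioreq server, but we record
     * the dirty bit ourselves instead of changing the p2m type. */
    if ( p2mt == p2m_ioreq_server && npfec.write_access )
    {
        if ( paging_mode_log_dirty(currd) )
            paging_mark_dirty(currd, gfn);

        /* ...then hand the write to the ioreq server as usual. */
    }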

 -George
