[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [PATCH] x86/shadow: account for ioreq server pages before complaining about not found mapping


  • To: Jan Beulich <JBeulich@xxxxxxxx>
  • From: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
  • Date: Fri, 29 Apr 2016 09:35:16 +0000
  • Accept-language: en-GB, en-US
  • Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Wei Liu <wei.liu2@xxxxxxxxxx>, "Tim \(Xen.org\)" <tim@xxxxxxx>
  • Delivery-date: Fri, 29 Apr 2016 09:35:22 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>
  • Thread-index: AQHRoe8LvjOB91KaK02ozPNoQM8fip+gnm5Q///tVoCAACMW0A==
  • Thread-topic: [PATCH] x86/shadow: account for ioreq server pages before complaining about not found mapping

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 29 April 2016 10:22
> To: Paul Durrant
> Cc: Wei Liu; xen-devel; Tim (Xen.org)
> Subject: RE: [PATCH] x86/shadow: account for ioreq server pages before
> complaining about not found mapping
> 
> >>> On 29.04.16 at 10:29, <Paul.Durrant@xxxxxxxxxx> wrote:
> >>  -----Original Message-----
> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> >> Sent: 29 April 2016 09:14
> >> To: xen-devel
> >> Cc: Paul Durrant; Wei Liu; Tim (Xen.org)
> >> Subject: [PATCH] x86/shadow: account for ioreq server pages before
> >> complaining about not found mapping
> >>
> >> prepare_ring_for_helper(), just like share_xen_page_with_guest(),
> >> takes a write reference on the page, and hence should similarly be
> >> accounted for when determining whether to log a complaint.
> >>
> >> This requires using recursive locking for the ioreq server lock, as the
> >> offending invocation of sh_remove_all_mappings() is down the call stack
> >> from hvm_set_ioreq_server_state(). (While not strictly needed to be
> >> done in all other instances too, convert all of them for consistency.)
> >
> > Do you have an example of a call stack?
> 
> (XEN) Xen call trace:
> (XEN)    [<ffff82d08021ca61>]
> common.c#sh_remove_all_mappings+0x1fb/0x27d
> (XEN)    [<ffff82d08021ea2a>]
> common.c#sh_unshadow_for_p2m_change+0xc9/0x2a8
> (XEN)    [<ffff82d08021ed06>] shadow_write_p2m_entry+0xfd/0x168
> (XEN)    [<ffff82d0801ffe64>] paging_write_p2m_entry+0x44/0x51
> (XEN)    [<ffff82d08020c706>] p2m-pt.c#p2m_pt_set_entry+0x539/0x846
> (XEN)    [<ffff82d080201e57>] p2m_set_entry+0xd2/0x113
> (XEN)    [<ffff82d080202423>] p2m.c#p2m_remove_page+0x20a/0x220
> (XEN)    [<ffff82d0802069c5>] guest_physmap_remove_page+0x19b/0x207
> (XEN)    [<ffff82d0801db0b0>] ioreq.c#hvm_remove_ioreq_gmfn+0x4e/0x5e
> (XEN)    [<ffff82d0801db123>] ioreq.c#hvm_ioreq_server_enable+0x63/0xaf
> (XEN)    [<ffff82d0801db1d8>] hvm_set_ioreq_server_state+0x69/0xb3
> (XEN)    [<ffff82d0801d5495>] do_hvm_op+0x56a/0x1c85
> (XEN)    [<ffff82d080243f5d>] lstar_enter+0xdd/0x137
> 
> > Is the recursion due to the
> > domain_pause() being done with the ioreq server spinlock held?
> 
> I don't think the domain_pause() matters here.

Thanks for the stack. Yes, the domain_pause() is clearly irrelevant, and there's no trivial way to avoid the need for the recursive lock, so

Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>

> 
> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
