
Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter max_wp_ram_ranges.



> -----Original Message-----
> From: George Dunlap [mailto:george.dunlap@xxxxxxxxxx]
> Sent: 22 February 2016 16:46
> To: Paul Durrant
> Cc: Jan Beulich; Kevin Tian; Zhang Yu; Wei Liu; Ian Campbell; Andrew Cooper;
> xen-devel@xxxxxxxxxxxxx; Stefano Stabellini; Zhiyuan Lv; Ian Jackson; Keir
> (Xen.org)
> Subject: Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter
> max_wp_ram_ranges.
> 
> On 22/02/16 16:02, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: dunlapg@xxxxxxxxx [mailto:dunlapg@xxxxxxxxx] On Behalf Of
> >> George Dunlap
> >> Sent: 22 February 2016 15:56
> >> To: Paul Durrant
> >> Cc: George Dunlap; Jan Beulich; Kevin Tian; Zhang Yu; Wei Liu; Ian
> Campbell;
> >> Andrew Cooper; xen-devel@xxxxxxxxxxxxx; Stefano Stabellini; Zhiyuan Lv;
> Ian
> >> Jackson; Keir (Xen.org)
> >> Subject: Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter
> >> max_wp_ram_ranges.
> >>
> >> On Wed, Feb 17, 2016 at 11:12 AM, Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> >> wrote:
> >>>> -----Original Message-----
> >>>> From: George Dunlap [mailto:george.dunlap@xxxxxxxxxx]
> >>>> Sent: 17 February 2016 11:02
> >>>> To: Jan Beulich; Paul Durrant; Kevin Tian; Zhang Yu
> >>>> Cc: Andrew Cooper; Ian Campbell; Ian Jackson; Stefano Stabellini;
> >>>> Wei Liu; Zhiyuan Lv; xen-devel@xxxxxxxxxxxxx; George Dunlap; Keir (Xen.org)
> >>>> Subject: Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter
> >>>> max_wp_ram_ranges.
> >>>>
> >>>> On 17/02/16 10:22, Jan Beulich wrote:
> >>>>>>>> On 17.02.16 at 10:58, <kevin.tian@xxxxxxxxx> wrote:
> >>>>>> Thanks for the help. Let's see whether we can have some solution
> >>>>>> ready for 4.7. :-)
> >>>>>
> >>>>> Since we now seem to all agree that a different approach is going to
> >>>>> be taken, I think we indeed should revert f5a32c5b8e ("x86/HVM:
> >>>>> differentiate IO/mem resources tracked by ioreq server"). Please
> >>>>> voice objections to the plan pretty soon.
> >>>>
> >>>> FWIW, after this discussion, I don't have an objection to the basic
> >>>> interface in this series as it is, since it addresses my request that it
> >>>> be memory-based, and it could be switched to using p2m types
> >>>> behind-the-scenes -- with the exception of the knob to control how
> >>>> many ranges are allowed (since it exposes the internal implementation).
> >>>>
> >>>> I wouldn't object to the current implementation going in as it was in v9
> >>>> (apparently), and then changing the type stuff around behind the
> >>>> scenes later as an optimization.  I also don't think it would be terribly
> >>>> difficult to change the implementation as it is to just use write_dm for
> >>>> a single IOREQ server.  We can rename it ioreq_server and expand it
> >>>> later.  Sorry if this wasn't clear from my comments before.
> >>>>
> >>>
> >>> The problem is that, since you objected to wp_mem being treated the
> >>> same as mmio, we had to introduce a new range type to the map/unmap
> >>> hypercalls and that will become redundant if we steer by type. If we
> >>> could have just increased the size of the rangeset for mmio and used
> >>> that *for now* then weeks of work could have been saved going down
> >>> this dead end.
> >>
> >> So first of all, let me say that I handled things with respect to this
> >> thread rather poorly in a lot of ways, and you're right to be
> >> frustrated.  All I can say is, I'm sorry and I'll try to do better.
> >>
> >> I'm not sure I understand what you're saying with this line: "We had
> >> to introduce a new range type to the map/unmap hypercalls that will
> >> become redundant if we steer by type".
> >>
> >
> > What I mean is that if we are going to do away with the concept of
> > 'wp mem' and replace it with a 'special type for steering to an ioreq
> > server' then we only need range types of pci, portio and mmio, as we
> > had before.
> 
> I'm still confused.
> 

So am I. I thought I was being quite clear.

> To me, there are two distinct things here: the interface and the
> implementation.
> 

Indeed.

> From an interface perspective, ioreq servers need to be able to ask Xen
> to intercept writes to specific memory regions.

No. You insisted on making MMIO distinct from 'memory'. We need to be able to
intercept MMIO, port I/O and PCI accesses to specific regions.
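
For reference, the range types in question are the ones carried by the
map/unmap hypercalls. From memory, so treat the details as approximate,
the public interface in xen/include/public/hvm/hvm_op.h looks like this,
with this series adding a fourth WP_MEM value:

struct xen_hvm_io_range {
    domid_t domid;               /* IN - domain to be serviced */
    ioservid_t id;               /* IN - server id */
    uint32_t type;               /* IN - type of range */
# define HVMOP_IO_RANGE_PORT   0 /* I/O port range */
# define HVMOP_IO_RANGE_MEMORY 1 /* MMIO range */
# define HVMOP_IO_RANGE_PCI    2 /* PCI segment/bus/device/function range */
    uint64_aligned_t start, end; /* IN - inclusive start and end of range */
};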

>  I insisted that this
> interface be specifically designed for memory, not calling them MMIO
> regions (since it's actually memory and not MMIO regions); and I said
> that the explosion in number of required rangesets demonstrated that
> rangesets were a bad fit to implement this interface.
> 

The intended overloading of MMIO for this purpose was a pragmatic choice at the
time; however, it appears pragmatism has no place here.

> What you did in an earlier version of this series (correct me if I'm
> wrong) is to make a separate hypercall for memory, but still keep using
> the same internal implementation (i.e., still having a write_dm p2m type
> and using rangesets to determine which ioreq server to give the
> notification to).  But since the interface for memory looks exactly the
> same as the interface for MMIO, at some point this morphed back to "use
> xen_hvm_io_range but with a different range type (i.e., WP_MEM)".
> 

Yes, and that's now pointless since we're going to use purely p2m types for 
sending memory accesses to ioreq servers.

> From an *interface* perspective that makes sense, because in both cases
> you want to be able to specify a domain, an ioreq server, and a range of
> physical addresses.  I don't have any objection to the change you made
> to hvm_op.h in this version of the series.
> 

No, if we are intercepting 'memory' purely by p2m type then there is no need 
for the additional range type.
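
To illustrate, here is a sketch against the 4.5-era libxc API (approximate,
not a final interface): with type-based interception, marking pages consumes
no rangeset entry and needs no new range type; it is just a p2m type change:

#include <stdint.h>
#include <xenctrl.h>

/* Write-protect a run of guest RAM pages so that writes are forwarded
 * to the device model while reads still go straight to RAM. */
static int wp_ram_pages(xc_interface *xch, domid_t domid,
                        uint64_t first_gfn, uint32_t nr)
{
    return xc_hvm_set_mem_type(xch, domid, HVMMEM_mmio_write_dm,
                               first_gfn, nr);
}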

> But from an *implementation* perspective, you're back to the very thing
> I objected to -- having a large and potentially unbounded number of
> rangesets, and baking that implementation into the interface by
> requiring the administrator to specify the maximum number of rangesets
> with a HVM_PARAM.
> 
> My suggestion was that you retain the same interface that you exposed in
> hvm_op.h (adding WP_MEM), but instead of using rangesets to determine
> which ioreq server to send the notification to, use p2m types instead.

Ok. But ultimately we want multiple p2m types for this purpose, and the ability
to send accesses to pages of a specific type to a specific ioreq server.

> The interface to qemu can be exactly the same as it is now, but you
> won't need to increase the number of rangesets.  In fact, if you only
> allow one ioreq server to register for WP_MEM at a time, then you don't
> even need to change the p2m code.
> 

Who said anything about QEMU? It's not relevant here.

Functionally, maybe not at this time, *but* that limit of a single ioreq server
should not be baked into the interface forevermore.

> One way or another, you still want the ioreq server to be able to
> intercept writes to guest memory, so I don't understand in what way "we
> are going to do away with the concept of 'wp mem'".
> 

As I have said previously on this thread, 'wp mem' should be replaced by a 
'steer to ioreq server' type. That fulfils the purpose and makes it clear what 
the type is for. Then additional 'steer to a different ioreq server' types can 
be added later when we have sufficient type-space.
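
Purely as a sketch of where that could go (the structure, names and flags
below are invented for illustration, not a committed interface), a later
hypercall could let a server claim a memory type outright:

struct xen_hvm_map_mem_type_to_ioreq_server {
    domid_t domid;      /* IN - domain to be serviced */
    ioservid_t id;      /* IN - server id */
    uint16_t type;      /* IN - memory type to claim */
    uint32_t flags;     /* IN - which accesses to forward */
#define HVMOP_IOREQ_MEM_ACCESS_READ  (1u << 0)
#define HVMOP_IOREQ_MEM_ACCESS_WRITE (1u << 1)
};

Unmapping would then just be a map with flags == 0, and adding a second
steerable type later means a new type value, not a new range type.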

  Paul

> The only question is whether it's going to go through xen_hvm_io_range
> or some other hypercall -- and I don't particularly care which one it
> ends up being, as long as it doesn't require setting this magic
> potentially unbounded parameter.
> 
>  -George
