
Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter max_wp_ram_ranges.



On 22/02/16 16:02, Paul Durrant wrote:
>> -----Original Message-----
>> From: dunlapg@xxxxxxxxx [mailto:dunlapg@xxxxxxxxx] On Behalf Of
>> George Dunlap
>> Sent: 22 February 2016 15:56
>> To: Paul Durrant
>> Cc: George Dunlap; Jan Beulich; Kevin Tian; Zhang Yu; Wei Liu; Ian Campbell;
>> Andrew Cooper; xen-devel@xxxxxxxxxxxxx; Stefano Stabellini; Zhiyuan Lv; Ian
>> Jackson; Keir (Xen.org)
>> Subject: Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter
>> max_wp_ram_ranges.
>>
>> On Wed, Feb 17, 2016 at 11:12 AM, Paul Durrant <Paul.Durrant@xxxxxxxxxx>
>> wrote:
>>>> -----Original Message-----
>>>> From: George Dunlap [mailto:george.dunlap@xxxxxxxxxx]
>>>> Sent: 17 February 2016 11:02
>>>> To: Jan Beulich; Paul Durrant; Kevin Tian; Zhang Yu
>>>> Cc: Andrew Cooper; Ian Campbell; Ian Jackson; Stefano Stabellini; Wei Liu;
>>>> Zhiyuan Lv; xen-devel@xxxxxxxxxxxxx; George Dunlap; Keir (Xen.org)
>>>> Subject: Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter
>>>> max_wp_ram_ranges.
>>>>
>>>> On 17/02/16 10:22, Jan Beulich wrote:
>>>>>>>> On 17.02.16 at 10:58, <kevin.tian@xxxxxxxxx> wrote:
>>>>>> Thanks for the help. Let's see whether we can have some solution
>>>>>> ready for 4.7. :-)
>>>>>
>>>>> Since we now seem to all agree that a different approach is going to
>>>>> be taken, I think we indeed should revert f5a32c5b8e ("x86/HVM:
>>>>> differentiate IO/mem resources tracked by ioreq server"). Please
>>>>> voice objections to the plan pretty soon.
>>>>
>>>> FWIW, after this discussion, I don't have an objection to the basic
>>>> interface in this series as it is, since it addresses my request that it
>>>> be memory-based, and it could be switched to using p2m types
>>>> behind-the-scenes -- with the exception of the knob to control how many
>>>> ranges are allowed (since it exposes the internal implementation).
>>>>
>>>> I wouldn't object to the current implementation going in as it was in v9
>>>> (apparently), and then changing the type stuff around behind the scenes
>>>> later as an optimization.  I also don't think it would be terribly
>>>> difficult to change the implementation as it is to just use write_dm for
>>>> a single IOREQ server.  We can rename it ioreq_server and expand it
>>>> later.  Sorry if this wasn't clear from my comments before.
>>>>
>>>
>>> The problem is that, since you objected to wp_mem being treated the
>>> same as mmio, we had to introduce a new range type to the map/unmap
>>> hypercalls and that will become redundant if we steer by type. If we could
>>> have just increased the size of the rangeset for mmio and used that *for
>>> now* then weeks of work could have been saved going down this dead end.
>>
>> So first of all, let me say that I handled things with respect to this
>> thread rather poorly in a lot of ways, and you're right to be
>> frustrated.  All I can say is, I'm sorry and I'll try to do better.
>>
>> I'm not sure I understand what you're saying with this line: "We had
>> to introduce a new range type to the map/unmap hypercalls that will
>> become redundant if we steer by type".
>>
> 
> What I mean is that if we are going to do away with the concept of 'wp mem'
> and replace it with 'special type for steering to ioreq server' then we only 
> need range types of pci, portio and mmio as we had before.

I'm still confused.

To me, there are two distinct things here: the interface and the
implementation.

From an interface perspective, ioreq servers need to be able to ask Xen
to intercept writes to specific memory regions.  I insisted that this
interface be designed specifically for memory rather than describing the
regions as MMIO (since they are actually memory, not MMIO); and I said
that the explosion in the number of ranges that needed to be tracked
demonstrated that rangesets were a bad fit for implementing this interface.

What you did in an earlier version of this series (correct me if I'm
wrong) is to make a separate hypercall for memory, but still keep using
the same internal implementation (i.e., still having a write_dm p2m type
and using rangesets to determine which ioreq server to give the
notification to).  But since the interface for memory looks exactly the
same as the interface for MMIO, at some point this morphed back to "use
xen_hvm_io_range but with a different range type (i.e., WP_MEM)".

From an *interface* perspective that makes sense, because in both cases
you want to be able to specify a domain, an ioreq server, and a range of
physical addresses.  I don't have any objection to the change you made
to hvm_op.h in this version of the series.
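
For reference, the interface in question amounts to one extra range type on
the existing structure.  Roughly, sketching from the public header (the
existing type values are as in xen/include/public/hvm/hvm_op.h; the exact
WP_MEM value is just illustrative):

struct xen_hvm_io_range {
    domid_t domid;               /* IN - domain to be serviced */
    ioservid_t id;               /* IN - ioreq server id */
    uint32_t type;               /* IN - type of range */
#define HVMOP_IO_RANGE_PORT   0  /* I/O port range */
#define HVMOP_IO_RANGE_MEMORY 1  /* MMIO range */
#define HVMOP_IO_RANGE_PCI    2  /* PCI segment/bus/device/function range */
#define HVMOP_IO_RANGE_WP_MEM 3  /* write-protected guest RAM (the addition) */
    uint64_aligned_t start, end; /* IN - inclusive start and end of range */
};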

But from an *implementation* perspective, you're back to the very thing
I objected to -- having a large and potentially unbounded number of
rangeset entries, and baking that implementation into the interface by
requiring the administrator to specify the maximum number of ranges
with an HVM_PARAM.
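
To make the objection concrete, the rangeset-based selection looks roughly
like this (a simplified sketch with approximate field names, not the actual
Xen code): every write-protected page the device model cares about becomes
a rangeset entry, and the proposed max_wp_ram_ranges knob is just a cap on
how many such entries a server may hold.

/* Simplified sketch of rangeset-based steering (field names are
 * approximate).  Each ioreq server keeps a rangeset per range type;
 * every write-protected page becomes an entry in the WP_MEM one. */
static struct hvm_ioreq_server *
select_server_by_rangeset(struct domain *d, paddr_t addr)
{
    struct hvm_ioreq_server *s;

    list_for_each_entry ( s, &d->arch.hvm_domain.ioreq_server.list,
                          list_entry )
    {
        /* One rangeset lookup per server for the written address. */
        if ( rangeset_contains_range(s->range[HVMOP_IO_RANGE_WP_MEM],
                                     addr, addr) )
            return s;
    }

    return NULL; /* no match; fall through to the default server */
}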

My suggestion was that you retain the same interface that you exposed in
hvm_op.h (adding WP_MEM), but rather than using rangesets to determine
which ioreq server to send the notification to, use p2m types instead.
The interface to qemu can be exactly the same as it is now, but you
won't need to increase the number of rangesets.  In fact, if you only
allow one ioreq server to register for WP_MEM at a time, then you don't
even need to change the p2m code.
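
For contrast, the p2m-based lookup under that single-server simplification
could be as simple as the following (again only a sketch: 'wp_mem_server'
is a hypothetical per-domain field, and the p2m query helper is used
loosely):

/* Sketch of p2m-type-based steering with a single registered server.
 * 'wp_mem_server' is a hypothetical field recording the one ioreq
 * server that asked for write-protected RAM; p2m_mmio_write_dm is the
 * existing p2m type. */
static struct hvm_ioreq_server *
select_server_by_p2m_type(struct domain *d, paddr_t addr)
{
    p2m_type_t p2mt;

    /* Query the p2m type of the written page; no rangeset walk and no
     * per-page tracking structure in the ioreq server is needed. */
    get_gfn_query_unlocked(d, paddr_to_pfn(addr), &p2mt);

    if ( p2mt == p2m_mmio_write_dm )
        return d->arch.hvm_domain.wp_mem_server; /* hypothetical */

    return NULL; /* normal RAM/MMIO; use the existing selection path */
}

Either way the device model's view of the interface stays the same; only
the steering data structure inside Xen changes.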

One way or another, you still want the ioreq server to be able to
intercept writes to guest memory, so I don't understand in what way "we
are going to do away with the concept of 'wp mem'".

The only question is whether it's going to go through xen_hvm_io_range
or some other hypercall -- and I don't particularly care which one it
ends up being, as long as it doesn't require setting this magic
potentially unbounded parameter.

 -George

