
Re: [Xen-devel] [PATCH v2 2/3] x86/ioreq server: Rename p2m_mmio_write_dm to p2m_ioreq_server



On 21/04/16 14:56, Paul Durrant wrote:
>> -----Original Message-----
>> From: Yu, Zhang [mailto:yu.c.zhang@xxxxxxxxxxxxxxx]
>> Sent: 21 April 2016 14:49
>> To: Paul Durrant; George Dunlap; Jan Beulich; Wei Liu
>> Cc: Kevin Tian; Keir (Xen.org); Andrew Cooper; Tim (Xen.org); xen-
>> devel@xxxxxxxxxxxxx; zhiyuan.lv@xxxxxxxxx; jun.nakajima@xxxxxxxxx
>> Subject: Re: [Xen-devel] [PATCH v2 2/3] x86/ioreq server: Rename
>> p2m_mmio_write_dm to p2m_ioreq_server
>>
>>
>>
>> On 4/21/2016 9:31 PM, Paul Durrant wrote:
>>>> -----Original Message-----
>>>> From: Yu, Zhang [mailto:yu.c.zhang@xxxxxxxxxxxxxxx]
>>>> Sent: 21 April 2016 13:25
>>>> To: George Dunlap; Paul Durrant; Jan Beulich; Wei Liu
>>>> Cc: Kevin Tian; Keir (Xen.org); Andrew Cooper; Tim (Xen.org); xen-
>>>> devel@xxxxxxxxxxxxx; zhiyuan.lv@xxxxxxxxx; jun.nakajima@xxxxxxxxx
>>>> Subject: Re: [Xen-devel] [PATCH v2 2/3] x86/ioreq server: Rename
>>>> p2m_mmio_write_dm to p2m_ioreq_server
>>>>
>>>>
>>>>
>>>> On 4/21/2016 1:06 AM, George Dunlap wrote:
>>>>> On 20/04/16 17:58, Paul Durrant wrote:
>>>>>>> -----Original Message-----
>>>>>>> From: Xen-devel [mailto:xen-devel-bounces@xxxxxxxxxxxxx] On Behalf
>>>>>>> Of Jan Beulich
>>>>>>> Sent: 20 April 2016 17:53
>>>>>>> To: George Dunlap; Paul Durrant; Wei Liu; yu.c.zhang@xxxxxxxxxxxxxxx
>>>>>>> Cc: Kevin Tian; Keir (Xen.org); Andrew Cooper; Tim (Xen.org); xen-
>>>>>>> devel@xxxxxxxxxxxxx; zhiyuan.lv@xxxxxxxxx; jun.nakajima@xxxxxxxxx
>>>>>>> Subject: Re: [Xen-devel] [PATCH v2 2/3] x86/ioreq server: Rename
>>>>>>> p2m_mmio_write_dm to p2m_ioreq_server
>>>>>>>
>>>>>>>>>> George Dunlap <george.dunlap@xxxxxxxxxx> 04/20/16 6:30 PM
>>>>>
>>>>>>>> On Wed, Apr 20, 2016 at 4:02 PM, George Dunlap
>>>>>>> <george.dunlap@xxxxxxxxxx> wrote:
>>>>>>>>> On 19/04/16 12:02, Yu, Zhang wrote:
>>>>>>>>>> So I suppose the only place we need change for this patch is
>>>>>>>>>> for hvmmem_type_t, which should be defined like this?
>>>>>>>>>>
>>>>>>>>>> typedef enum {
>>>>>>>>>>     HVMMEM_ram_rw,             /* Normal read/write guest RAM */
>>>>>>>>>>     HVMMEM_ram_ro,             /* Read-only; writes are discarded */
>>>>>>>>>>     HVMMEM_mmio_dm,            /* Reads and writes go to the device model */
>>>>>>>>>> #if __XEN_INTERFACE_VERSION__ >= 0x00040700
>>>>>>>>>>     HVMMEM_ioreq_server
>>>>>>>>>> #else
>>>>>>>>>>     HVMMEM_mmio_write_dm
>>>>>>>>>> #endif
>>>>>>>>>> } hvmmem_type_t;
>>>>>>>>>>
>>>>>>>>>> Besides, does 4.7 still accept a freeze exception? It would be
>>>>>>>>>> great if we can get an approval for this.
>>>>>>>>>
>>>>>>>>> Wait, do we *actually* need this?  Is anyone actually using this?
>>>>>>>>>
>>>>>>>>> I'd say remove it, and if anyone complains, *then* do the
>>>>>>>>> #ifdef'ery as a bug-fix.  I'm pretty sure that's Linux's policy --
>>>>>>>>> You Must Keep Userspace Working, but you can break it to see if
>>>>>>>>> anyone complains first.
>>>>>>>
>>>>>>> We don't normally do it like that - we aim at keeping things
>>>>>>> compatible right away. I don't know of a case where we would have
>>>>>>> knowingly broken compatibility for users of the public headers
>>>>>>> (leaving aside tool stack only stuff of course).
>>>>>>>
>>>>>>>> Going further than this:
>>>>>>>>
>>>>>>>> The proposed patch series not only changes the name, it changes
>>>>>>>> the functionality.  We do not want code to *compile* against 4.7
>>>>>>>> and then not *work* against 4.7; and the worst of all is to compile
>>>>>>>> and sort of work but do it incorrectly.
>>>>>>>
>>>>>>> I had the impression that the renaming patch was what it is - a
>>>>>>> renaming patch, without altering behavior.
>>>>>>>
>>>>>>>> Does the ioreq server have a way of asking Xen what version of
>>>>>>>> the ABI it's providing?  I'm assuming the answer is "no"; in which
>>>>>>>> case code that is compiled against the 4.6 interface but run on a
>>>>>>>> 4.8 interface that looks like this will fail in a somewhat
>>>>>>>> unpredictable way.
>>>>>>>
>>>>>>> The only thing it can do is ask for the Xen version. The ABI
>>>>>>> version is not being returned by anything (but perhaps should be).
>>>>>>>
>>>>>>>> Given that:
>>>>>>>>
>>>>>>>> 1. When we do check the ioreq server functionality in, what's the
>>>>>>>> correct way to deal with code that wants to use the old interface,
>>>>>>>> and what do we do with code compiled against the old interface but
>>>>>>>> running on the new one?
>>>>>>>
>>>>>>> For the full series I'm not sure I can really tell. But as said,
>>>>>>> for the rename patch alone I thought it is just a rename. And that's
>>>>>>> what we want to get in (see Paul's earlier reply - he wants to see
>>>>>>> the old name gone, so it won't be used any further).
>>>>>>>
>>>>>>>> 2. What's the best thing to do for this release?
>>>>>>>
>>>>>>> If the entire series (no matter whether to go in now or later) is
>>>>>>> changing behavior, then the only choice is to consider the currently
>>>>>>> used enum value burnt, and use a fresh one for the new semantics.
>>>>>>
>>>>>> It sounds like that would be the best way. If we don't do that then
>>>>>> we have to maintain the write-dm semantics for pages of that type
>>>>>> unless the type is claimed (by using the new hypercall) and that's a
>>>>>> bit icky. I much prefer that pages of the new type are treated as RAM
>>>>>> until claimed.
>>>>>
>>>>> I think the only sensible way to keep the enum is to also keep the
>>>>> functionality, which would mean using *another* p2m type for
>>>>> ioreq_server.
>>>>>
>>>>> Given that the functionality isn't going away for 4.7, I don't see an
>>>>> urgent need to remove the enum; but if Paul does, then a patch
>>>>> renaming it to HVMMEM_unused would be the way forward then I guess.
>>>>> Once the underlying p2m type goes away, you'll want to return -EINVAL
>>>>> for this enum value.
>>>>>
>>>>
>>>> So the enum would be sth. like this?
>>>>
>>>> typedef enum {
>>>>      HVMMEM_ram_rw,        /* Normal read/write guest RAM */
>>>>      HVMMEM_ram_ro,        /* Read-only; writes are discarded */
>>>>      HVMMEM_mmio_dm,       /* Reads and writes go to the device model */
>>>> #if __XEN_INTERFACE_VERSION__ < 0x00040700
>>>>      HVMMEM_mmio_write_dm, /* Read-only; writes go to the device model */
>>>> #else
>>>>      HVMMEM_unused,
>>>> #endif
>>>>      HVMMEM_ioreq_server
>>>> } hvmmem_type_t;
>>>>
>>>
>>> I believe that's correct, but presumably there needs to be a change to
>>> the hypervisor, since any reference there to HVMMEM_mmio_write_dm
>>> (which I think is limited to the get and set mem type code in hvm.c)
>>> will now need to map HVMMEM_unused to the old p2m_mmio_write_dm type.
>>>
>> Thank you, Paul.
>>
>> But p2m_mmio_write_dm will not exist any more...
>> E.g. in hvmop_get_mem_type(), if type 0xf (p2m_ioreq_server) is
>> returned, we could just return HVMMEM_ioreq_server. No need to
>> worry about HVMMEM_mmio_write_dm.
>>
>> Maybe we only need to change the beginning of hvmop_set_mem_type()
>> to sth. like this:
>>
>> /* Interface types to internal p2m types */
>> static const p2m_type_t memtype[] = {
>>      [HVMMEM_ram_rw]  = p2m_ram_rw,
>>      [HVMMEM_ram_ro]  = p2m_ram_ro,
>>      [HVMMEM_mmio_dm] = p2m_mmio_dm,
>>      [HVMMEM_unused] = p2m_invalid,  /* this will be rejected later */
>>      [HVMMEM_ioreq_server] = p2m_ioreq_server
>> };
>> and later in the same routine, just reject the HVMMEM_unused type in
>> an if() statement (marked unlikely).
>>
> 
> As long as everyone is in agreement that we can break the functionality
> that exists in 4.6.1 (and presumably 4.7 now), then that's ok.

I think we're all in agreement that we can *remove* functionality as
long as (from Xen's perspective) it should fail gracefully.  Which means
that if anyone uses the hvmmem_type_t enum value previously allocated
to HVMMEM_mmio_write_dm, we return an error (one way or another).

The exact method for doing so is up for review.

 -George


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

