
Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.



On 25/04/16 16:53, Yu, Zhang wrote:
> 
> 
> On 4/25/2016 11:38 PM, Jan Beulich wrote:
>>>>> On 25.04.16 at 17:29, <Paul.Durrant@xxxxxxxxxx> wrote:
>>>>  -----Original Message-----
>>>> From: Yu, Zhang [mailto:yu.c.zhang@xxxxxxxxxxxxxxx]
>>>> Sent: 25 April 2016 16:22
>>>> To: Paul Durrant; George Dunlap
>>>> Cc: xen-devel@xxxxxxxxxxxxx; Kevin Tian; Keir (Xen.org); Jun Nakajima;
>>>> Andrew Cooper; Tim (Xen.org); Lv, Zhiyuan; Jan Beulich; Wei Liu
>>>> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for
>>>> 4.7):
>>>> Rename p2m_mmio_write_dm to p2m_ioreq_server.
>>>>
>>>>
>>>>
>>>> On 4/25/2016 10:01 PM, Paul Durrant wrote:
>>>>>> -----Original Message-----
>>>>>> From: dunlapg@xxxxxxxxx [mailto:dunlapg@xxxxxxxxx] On Behalf Of
>>>>>> George Dunlap
>>>>>> Sent: 25 April 2016 14:39
>>>>>> To: Yu Zhang
>>>>>> Cc: xen-devel@xxxxxxxxxxxxx; Kevin Tian; Keir (Xen.org); Jun
>>>>>> Nakajima;
>>>>>> Andrew Cooper; Tim (Xen.org); Paul Durrant; Lv, Zhiyuan; Jan Beulich;
>>>>>> Wei Liu
>>>>>> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for
>>>>>> 4.7):
>>>>>> Rename p2m_mmio_write_dm to p2m_ioreq_server.
>>>>>>
>>>>>> On Mon, Apr 25, 2016 at 11:35 AM, Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
>>>>>> wrote:
>>>>>>> Previously p2m type p2m_mmio_write_dm was introduced for write-
>>>>>>> protected memory pages whose write operations are supposed to be
>>>>>>> forwarded to and emulated by an ioreq server. Yet limitations of
>>>>>>> rangeset restrict the number of guest pages to be write-protected.
>>>>>>>
>>>>>>> This patch replaces the p2m type p2m_mmio_write_dm with a new name:
>>>>>>> p2m_ioreq_server, which means this p2m type can be claimed by one
>>>>>>> ioreq server, instead of being tracked inside the rangeset of an
>>>>>>> ioreq server. Follow-up patches will add the related hvmop handling
>>>>>>> code that maps/unmaps the type p2m_ioreq_server to/from an ioreq
>>>>>>> server.
>>>>>>>
>>>>>>> changes in v3:
>>>>>>>   - According to Jan & George's comments, keep HVMMEM_mmio_write_dm
>>>>>>>     for old xen interface versions, and replace it with HVMMEM_unused
>>>>>>>     for xen interfaces newer than 4.7.0; For p2m_ioreq_server, a new
>>>>>>>     enum - HVMMEM_ioreq_server is introduced for the get/set mem type
>>>>>>>     interfaces;
>>>>>>>   - Add George's Reviewed-by and Acked-by from Tim & Andrew.
>>>>>>
>>>>>> Unfortunately these rather contradict each other -- I consider
>>>>>> Reviewed-by to only stick if the code I've specified hasn't changed
>>>>>> (or has only changed trivially).
>>>>>>
>>>>>> Also...
>>>>>>
>>>>>>>
>>>>>>> changes in v2:
>>>>>>>   - According to George Dunlap's comments, only rename the p2m type,
>>>>>>>     with no behavior changes.
>>>>>>>
>>>>>>> Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
>>>>>>> Signed-off-by: Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
>>>>>>> Acked-by: Tim Deegan <tim@xxxxxxx>
>>>>>>> Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>>>>>> Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>
>>>>>>> Cc: Keir Fraser <keir@xxxxxxx>
>>>>>>> Cc: Jan Beulich <jbeulich@xxxxxxxx>
>>>>>>> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>>>>>> Cc: Jun Nakajima <jun.nakajima@xxxxxxxxx>
>>>>>>> Cc: Kevin Tian <kevin.tian@xxxxxxxxx>
>>>>>>> Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
>>>>>>> Cc: Tim Deegan <tim@xxxxxxx>
>>>>>>> ---
>>>>>>>  xen/arch/x86/hvm/hvm.c          | 14 ++++++++------
>>>>>>>  xen/arch/x86/mm/p2m-ept.c       |  2 +-
>>>>>>>  xen/arch/x86/mm/p2m-pt.c        |  2 +-
>>>>>>>  xen/arch/x86/mm/shadow/multi.c  |  2 +-
>>>>>>>  xen/include/asm-x86/p2m.h       |  4 ++--
>>>>>>>  xen/include/public/hvm/hvm_op.h |  8 +++++++-
>>>>>>>  6 files changed, 20 insertions(+), 12 deletions(-)
>>>>>>>
>>>>>>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>>>>>>> index f24126d..874cb0f 100644
>>>>>>> --- a/xen/arch/x86/hvm/hvm.c
>>>>>>> +++ b/xen/arch/x86/hvm/hvm.c
>>>>>>> @@ -1857,7 +1857,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
>>>>>>>       */
>>>>>>>      if ( (p2mt == p2m_mmio_dm) ||
>>>>>>>           (npfec.write_access &&
>>>>>>> -          (p2m_is_discard_write(p2mt) || (p2mt == p2m_mmio_write_dm))) )
>>>>>>> +          (p2m_is_discard_write(p2mt) || (p2mt == p2m_ioreq_server))) )
>>>>>>>      {
>>>>>>>          __put_gfn(p2m, gfn);
>>>>>>>          if ( ap2m_active )
>>>>>>> @@ -5499,8 +5499,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>>>              get_gfn_query_unlocked(d, a.pfn, &t);
>>>>>>>              if ( p2m_is_mmio(t) )
>>>>>>>                  a.mem_type =  HVMMEM_mmio_dm;
>>>>>>> -            else if ( t == p2m_mmio_write_dm )
>>>>>>> -                a.mem_type = HVMMEM_mmio_write_dm;
>>>>>>> +            else if ( t == p2m_ioreq_server )
>>>>>>> +                a.mem_type = HVMMEM_ioreq_server;
>>>>>>>              else if ( p2m_is_readonly(t) )
>>>>>>>                  a.mem_type =  HVMMEM_ram_ro;
>>>>>>>              else if ( p2m_is_ram(t) )
>>>>>>> @@ -5531,7 +5531,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>>>              [HVMMEM_ram_rw]  = p2m_ram_rw,
>>>>>>>              [HVMMEM_ram_ro]  = p2m_ram_ro,
>>>>>>>              [HVMMEM_mmio_dm] = p2m_mmio_dm,
>>>>>>> -            [HVMMEM_mmio_write_dm] = p2m_mmio_write_dm
>>>>>>> +            [HVMMEM_unused] = p2m_invalid,
>>>>>>> +            [HVMMEM_ioreq_server] = p2m_ioreq_server
>>>>>>>          };
>>>>>>>
>>>>>>>          if ( copy_from_guest(&a, arg, 1) )
>>>>>>> @@ -5555,7 +5556,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>>>              ((a.first_pfn + a.nr - 1) > domain_get_maximum_gpfn(d)) )
>>>>>>>              goto setmemtype_fail;
>>>>>>>
>>>>>>> -        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) )
>>>>>>> +        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) ||
>>>>>>> +             unlikely(a.hvmmem_type == HVMMEM_unused) )
>>>>>>>              goto setmemtype_fail;
>>>>>>>
>>>>>>>          while ( a.nr > start_iter )
>>>>>>> @@ -5579,7 +5581,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>>>              }
>>>>>>>              if ( !p2m_is_ram(t) &&
>>>>>>>                   (!p2m_is_hole(t) || a.hvmmem_type != HVMMEM_mmio_dm) &&
>>>>>>> -                 (t != p2m_mmio_write_dm || a.hvmmem_type != HVMMEM_ram_rw) )
>>>>>>> +                 (t != p2m_ioreq_server || a.hvmmem_type != HVMMEM_ram_rw) )
>>>>>>>              {
>>>>>>>                  put_gfn(d, pfn);
>>>>>>>                  goto setmemtype_fail;
>>>>>>> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
>>>>>>> index 3cb6868..380ec25 100644
>>>>>>> --- a/xen/arch/x86/mm/p2m-ept.c
>>>>>>> +++ b/xen/arch/x86/mm/p2m-ept.c
>>>>>>> @@ -171,7 +171,7 @@ static void ept_p2m_type_to_flags(struct p2m_domain *p2m, ept_entry_t *entry,
>>>>>>>              entry->a = entry->d = !!cpu_has_vmx_ept_ad;
>>>>>>>              break;
>>>>>>>          case p2m_grant_map_ro:
>>>>>>> -        case p2m_mmio_write_dm:
>>>>>>> +        case p2m_ioreq_server:
>>>>>>>              entry->r = 1;
>>>>>>>              entry->w = entry->x = 0;
>>>>>>>              entry->a = !!cpu_has_vmx_ept_ad;
>>>>>>> diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
>>>>>>> index 3d80612..eabd2e3 100644
>>>>>>> --- a/xen/arch/x86/mm/p2m-pt.c
>>>>>>> +++ b/xen/arch/x86/mm/p2m-pt.c
>>>>>>> @@ -94,7 +94,7 @@ static unsigned long p2m_type_to_flags(p2m_type_t t, mfn_t mfn,
>>>>>>>      default:
>>>>>>>          return flags | _PAGE_NX_BIT;
>>>>>>>      case p2m_grant_map_ro:
>>>>>>> -    case p2m_mmio_write_dm:
>>>>>>> +    case p2m_ioreq_server:
>>>>>>>          return flags | P2M_BASE_FLAGS | _PAGE_NX_BIT;
>>>>>>>      case p2m_ram_ro:
>>>>>>>      case p2m_ram_logdirty:
>>>>>>> diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
>>>>>>> index e5c8499..c81302a 100644
>>>>>>> --- a/xen/arch/x86/mm/shadow/multi.c
>>>>>>> +++ b/xen/arch/x86/mm/shadow/multi.c
>>>>>>> @@ -3225,7 +3225,7 @@ static int sh_page_fault(struct vcpu *v,
>>>>>>>
>>>>>>>      /* Need to hand off device-model MMIO to the device model */
>>>>>>>      if ( p2mt == p2m_mmio_dm
>>>>>>> -         || (p2mt == p2m_mmio_write_dm && ft == ft_demand_write) )
>>>>>>> +         || (p2mt == p2m_ioreq_server && ft == ft_demand_write) )
>>>>>>>      {
>>>>>>>          gpa = guest_walk_to_gpa(&gw);
>>>>>>>          goto mmio;
>>>>>>> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
>>>>>>> index 5392eb0..ee2ea9c 100644
>>>>>>> --- a/xen/include/asm-x86/p2m.h
>>>>>>> +++ b/xen/include/asm-x86/p2m.h
>>>>>>> @@ -71,7 +71,7 @@ typedef enum {
>>>>>>>      p2m_ram_shared = 12,          /* Shared or sharable memory */
>>>>>>>      p2m_ram_broken = 13,          /* Broken page, access cause domain crash */
>>>>>>>      p2m_map_foreign  = 14,        /* ram pages from foreign domain */
>>>>>>> -    p2m_mmio_write_dm = 15,       /* Read-only; writes go to the device model */
>>>>>>> +    p2m_ioreq_server = 15,
>>>>>>>  } p2m_type_t;
>>>>>>>
>>>>>>>  /* Modifiers to the query */
>>>>>>> @@ -112,7 +112,7 @@ typedef unsigned int p2m_query_t;
>>>>>>>                        | p2m_to_mask(p2m_ram_ro)         \
>>>>>>>                        | p2m_to_mask(p2m_grant_map_ro)   \
>>>>>>>                        | p2m_to_mask(p2m_ram_shared)     \
>>>>>>> -                      | p2m_to_mask(p2m_mmio_write_dm))
>>>>>>> +                      | p2m_to_mask(p2m_ioreq_server))
>>>>>>>
>>>>>>>  /* Write-discard types, which should discard the write operations */
>>>>>>>  #define P2M_DISCARD_WRITE_TYPES (p2m_to_mask(p2m_ram_ro)     \
>>>>>>> diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
>>>>>>> index 1606185..b3e45cf 100644
>>>>>>> --- a/xen/include/public/hvm/hvm_op.h
>>>>>>> +++ b/xen/include/public/hvm/hvm_op.h
>>>>>>> @@ -83,7 +83,13 @@ typedef enum {
>>>>>>>      HVMMEM_ram_rw,             /* Normal read/write guest RAM */
>>>>>>>      HVMMEM_ram_ro,             /* Read-only; writes are discarded */
>>>>>>>      HVMMEM_mmio_dm,            /* Reads and write go to the device model */
>>>>>>> -    HVMMEM_mmio_write_dm       /* Read-only; writes go to the device model */
>>>>>>> +#if __XEN_INTERFACE_VERSION__ < 0x00040700
>>>>>>> +    HVMMEM_mmio_write_dm,      /* Read-only; writes go to the device model */
>>>>>>> +#else
>>>>>>> +    HVMMEM_unused,             /* Placeholder; setting memory to this type
>>>>>>> +                                  will fail for code after 4.7.0 */
>>>>>>> +#endif
>>>>>>> +    HVMMEM_ioreq_server
>>>>>>
>>>>>> Also, I don't think we've had a convincing argument for why this
>>>>>> patch
>>>>>> needs to be in 4.7.  The p2m name changes are internal only, and so
>>>>>> don't need to be made at all; and the old functionality will work as
>>>>>> well as it ever did.  Furthermore, the whole reason we're in this
>>>>>> situation is that we checked in a publicly-visible change to the
>>>>>> interface before it was properly ready; I think we should avoid
>>>>>> making
>>>>>> the same mistake this time.
>>>>>>
>>>>>> So personally I'd just leave this patch entirely for 4.8; but if Paul
>>>>>> and/or Jan have strong opinions, then I would say check in only a
>>>>>> patch to do the #if/#else/#endif, and leave both the p2m type changes
>>>>>> and the new HVMMEM_ioreq_server enum for when the 4.8 development
>>>>>> window opens.
>>>>>>
>>>>>
>>>>> If the whole series is going in then I think this patch is ok. If this
>>>>> is the only patch that is going in for 4.7 then I think we need the
>>>>> patch to hvm_op.h to deprecate the old type and that's it. We
>>>>> definitely should not introduce an implementation of the type
>>>>> HVMMEM_ioreq_server that has the same hardcoded semantics as the old
>>>>> type and then change it.
>>>>>
>>>>> The p2m type changes are also wrong. That type needs to be left alone,
>>>>> presumably, so that anything using HVMMEM_mmio_write_dm and compiled
>>>>> to the old interface version continues to function. I think
>>>>> HVMMEM_ioreq_server needs to map to a new p2m type which should be
>>>>> introduced in patch #3.
>>>>>
>>>>
>>>> Sorry, I'm also confused now. :(
>>>>
>>>> Do we really want to introduce a new p2m type? Why?
>>>> My understanding of the previous agreement is that:
>>>> 1> We need the interface to work on an old hypervisor for
>>>>    HVMMEM_mmio_write_dm;
>>>> 2> We need the interface to return -EINVAL on a new hypervisor for
>>>>    HVMMEM_mmio_write_dm;
>>>> 3> We need the new type HVMMEM_ioreq_server to work on a new hypervisor;
>>>>
>>>> Did I miss something? Or did I totally misunderstand?
>>>>
>>>
>>> I don't know. I'm confused too. What we definitely don't want to do is
>>> add a new HVMMEM type and have it map to the old behaviour, otherwise
>>> we're no better off.
>>>
>>> The question I'm not clear on the answer to is what happens to old code:
>>>
>>> Should it continue to compile? If so, should it continue to run?
>>
>> We only need to be concerned about the "get type" functionality,
>> as that's the only thing an arbitrary guest can use. If the
>> hypercall simply never returns the old type, then old code will
>> still work (it'll just have some dead code on new Xen), and hence
>> it compiling against the older interface is fine (and, from general
>> considerations, a requirement).
>>
> 
> Thanks Jan. And I think the answer is yes. The new hypervisor will
> only return HVMMEM_ioreq_server, which is a different value now.

Right -- but we can't check in this entire series for 4.7. However, Paul
would like to make it clear that HVMMEM_mmio_write_dm is deprecated, so
we need a patch which does only the #if/#else/#endif clause, plus
whatever else is necessary to make sure that things compile properly
either way, but no p2m changes or additional functionality.
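
For concreteness, a deprecation-only version of the public enum might look
roughly like the sketch below. It is pieced together from the hunk quoted
above only; the enclosing typedef name and exact comment wording are
assumed from context, not taken from a final patch:

    typedef enum {
        HVMMEM_ram_rw,             /* Normal read/write guest RAM */
        HVMMEM_ram_ro,             /* Read-only; writes are discarded */
        HVMMEM_mmio_dm,            /* Reads and writes go to the device model */
    #if __XEN_INTERFACE_VERSION__ < 0x00040700
        HVMMEM_mmio_write_dm       /* Read-only; writes go to the device model */
    #else
        HVMMEM_unused              /* Placeholder where HVMMEM_mmio_write_dm
                                      used to be; new code should not use it */
    #endif
    } hvmmem_type_t;

With that alone, callers built against an interface version older than
4.7.0 keep compiling with the name and value they already use, while code
built against a newer interface only sees the placeholder; the
HVMMEM_ioreq_server enumerator, the p2m rename and any behavioural checks
would then be deferred to the 4.8 development window.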

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

