
Re: [Xen-devel] [PATCH v2 5/6] ioreq-server: add support for multiple servers



On Tue, Mar 11, 2014 at 10:41 AM, Paul Durrant <Paul.Durrant@xxxxxxxxxx> wrote:
>>
>> > diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
>> > index b65e702..6d6328a 100644
>> > --- a/tools/libxc/xc_hvm_build_x86.c
>> > +++ b/tools/libxc/xc_hvm_build_x86.c
>> > @@ -45,7 +45,7 @@
>> >  #define SPECIALPAGE_IDENT_PT 4
>> >  #define SPECIALPAGE_CONSOLE  5
>> >  #define SPECIALPAGE_IOREQ    6
>> > -#define NR_SPECIAL_PAGES     SPECIALPAGE_IOREQ + 2 /* ioreq server needs 2 pages */
>> > +#define NR_SPECIAL_PAGES(n)  SPECIALPAGE_IOREQ + (2 * n) /* ioreq server needs 2 pages */
>>
>> "each ioreq server needs 2 pages"?
>>
>
> That's the intent. The line is getting rather long, though.


Wouldn't hurt to put it on a separate line, then.
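
E.g. something like this (untested, and with the usual defensive
macro parentheses added while we're at it):

    /* Each ioreq server needs 2 pages. */
    #define NR_SPECIAL_PAGES(n) (SPECIALPAGE_IOREQ + (2 * (n)))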

>> Although actually, are you planning to make it possible to add more
>> emulators (above "max_emulators") dynamically after the VM is created
>> -- maybe in a future series?
>>
>> If not, and you're always going to be statically allocating a fixed
>> number of emulators at the beginning, there's not actually a reason to
>> change the direction that the special PFNs go at all.
>>
>
> I was slightly paranoid about some of the PFNs moving if we ever did increase 
> the number of reserved special pages. We've seen breakage in old Windows PV 
> drivers when that happened. So, I thought it better to change the arrangement 
> once and then, if we did want to add emulators during a migration (or 
> save/restore), we could do it without e.g. the store ring moving.

So if you're afraid of implicit dependencies, one way to deal with it
is to try to avoid moving them at all; the other way is to move them
around all the time, to shake out implicit dependencies as early as
possible. :-)  (You could have XenRT tests, for instance, that set
max_emulators to {1,2,3,4}.)

>
>> > +    xc_set_hvm_param(xch, dom, HVM_PARAM_NR_IOREQ_SERVERS,
>> > +                     max_emulators);
>> >
>> >      /*
>> >       * Identity-map page table is required for running with CR0.PG=0 when
>>
>> [snip]
>>
>> > diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
>> > index 4fc46eb..cf9b67d 100644
>> > --- a/tools/libxl/xl_cmdimpl.c
>> > +++ b/tools/libxl/xl_cmdimpl.c
>> > @@ -1750,6 +1750,9 @@ skip_vfb:
>> >
>> >              b_info->u.hvm.vendor_device = d;
>> >          }
>> > +
>> > +        if (!xlu_cfg_get_long (config, "secondary_device_emulators", &l, 0))
>> > +            b_info->u.hvm.max_emulators = l + 1;
>>
>> Do we want to give this a more structured naming convention?
>>
>> device_model_secondary_max?  device_model_secondary_emulators?
>>
>
> It was just a name I chose. I'm happy to change it... perhaps 
> device_model_max? (Defaults to 1). It's a bit shorter to type.

"device_model_max" doesn't say what the "max" is.

>
>> Also, how are you planning on starting these secondary emulators?
>> Would it make sense for libxl to start them, in which case it should
>> be able to do its own counting?  Or are you envisioning starting /
>> destroying secondary emulators as the guest is running?
>>
>
> That's an open question at the moment. I coded this series such that a 
> secondary emulator could be started after the VM and hot-plug its device. For 
> some emulators (e.g. the one I'm working on to supply a console) it makes 
> more sense for libxl to start it, but I see that as additional to this 
> series. I don't think we want to stipulate that libxl is the only way to kick 
> off a secondary emulator.

Actually, I think in general we should always expect secondary
emulators to be started *through* libxl, so that we can have a
guaranteed interface; the question I meant to ask was whether all
emulators would be started *during domain creation* (in which case
libxl would know the number of emulators at the beginning), or
whether some might be started later (in which case libxl would not
necessarily know the number up front).

The thing which is the simplest, and which keeps our options open, is
what you've done here -- to have an optional user config.  Then if we
add the ability to specify secondary emulators in the config file, we
can add "auto-counting" functionality at that time; and if we add
libxl functions to "hot-plug" secondary emulators, we'll need the user
config to reserve the space up front.
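
For instance, with the option as it stands in this patch, the guest
config would just carry something like (name still under discussion,
of course):

    # reserve room for one emulator beyond the default device model
    secondary_device_emulators = 1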

Speaking of adding more emulators: After some more thinking, I'm not
sure that baking the layout of the ioreq and buf_ioreq pages into Xen,
the way the current patch series does, is a good idea.  At the moment,
you set HVM_PARAM_[BUF]IOREQ_PFN, and assume that all the emulators
will be contiguous.  Would it be better to introduce an interface to
allow arbitrary pages to be used for each ioreq server as it's created
(grandfathering in sid 0 to use the HVM_PARAMs)?  Then you wouldn't
need to mark off pfn space to use for ioreq servers during domain
creation at all.
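
To illustrate the shape of interface I mean (all of the names below
are invented for the sake of the sketch, not a concrete proposal):

    /* Hypothetical: create an ioreq server on caller-chosen pfns
     * rather than assuming a contiguous block above IOREQ_PFN. */
    struct xen_hvm_create_ioreq_server {
        domid_t    domid;         /* IN: domain to be serviced */
        xen_pfn_t  ioreq_pfn;     /* IN: page for synchronous ioreqs */
        xen_pfn_t  buf_ioreq_pfn; /* IN: page for buffered ioreqs */
        uint32_t   id;            /* OUT: server id; 0 would stay
                                   * reserved for the catch-all
                                   * server using the HVM_PARAMs */
    };

That way whoever creates the server picks the pages, and Xen doesn't
need a reserved contiguous pfn range at all.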

 -George


>
>> >      }
>> >
>> >      xlu_cfg_destroy(config);
>> > diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>> > index fb2dd73..e8b73fa 100644
>> > --- a/xen/arch/x86/hvm/hvm.c
>> > +++ b/xen/arch/x86/hvm/hvm.c
>> > @@ -357,14 +357,21 @@ static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, int id)
>> >  bool_t hvm_io_pending(struct vcpu *v)
>> >  {
>> >      struct domain *d = v->domain;
>> > -    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
>> > -    ioreq_t *p;
>> > +    struct list_head *entry;
>> >
>> > -    if ( !s )
>> > -        return 0;
>> > +    list_for_each ( entry, &d->arch.hvm_domain.ioreq_server_list )
>> > +    {
>> > +        struct hvm_ioreq_server *s = list_entry(entry,
>> > +                                                struct hvm_ioreq_server,
>> > +                                                list_entry);
>> > +        ioreq_t *p = get_ioreq(s, v->vcpu_id);
>> >
>> > -    p = get_ioreq(s, v->vcpu_id);
>> > -    return ( p->state != STATE_IOREQ_NONE );
>> > +        p = get_ioreq(s, v->vcpu_id);
>> > +        if ( p->state != STATE_IOREQ_NONE )
>> > +            return 1;
>>
>> Redundant calls to get_ioreq().
>
> Hmm. Looks like a patch rebase went a bit wrong there. Good spot.
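
Presumably the intent is a single get_ioreq() call per server, i.e.
something like this (untested):

    list_for_each ( entry, &d->arch.hvm_domain.ioreq_server_list )
    {
        struct hvm_ioreq_server *s = list_entry(entry,
                                                struct hvm_ioreq_server,
                                                list_entry);
        ioreq_t *p = get_ioreq(s, v->vcpu_id);

        if ( p->state != STATE_IOREQ_NONE )
            return 1;
    }

    return 0;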
>
>>
>> > +    }
>> > +
>> > +    return 0;
>> >  }
>> >
>> >  static void hvm_wait_on_io(struct domain *d, ioreq_t *p)
>>
>> [snip]
>>
>> > +static int hvm_access_cf8(
>> > +    int dir, uint32_t port, uint32_t bytes, uint32_t *val)
>>
>> I take it this is part of virtualizing the pci space?
>>
>
> Yes, that needs to be done once you have more than one emulator.
>
>> This wasn't mentioned in the commit message; it seems like it probably
>> should have been introduced in a separate patch.
>>
>
> It's not actually needed until you have more than one emulator, though, so it 
> doesn't really make sense to separate it. I'll amend the commit message to 
> point out that, for secondary emulators, IO ranges and PCI devices need to be 
> explicitly registered.
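
For anyone following the thread: my understanding is that writes to
0xcf8 just latch the current PCI config address, so that subsequent
data accesses through 0xcfc can be routed to whichever ioreq server
has registered the addressed PCI device. A minimal sketch of the
latch side, matching the signature quoted above (the pci_cf8 field
is my invention here, not necessarily what the patch uses):

    static int hvm_access_cf8(
        int dir, uint32_t port, uint32_t bytes, uint32_t *val)
    {
        struct domain *d = current->domain;

        /* Latch full 32-bit config-address writes for later routing. */
        if ( dir == IOREQ_WRITE && bytes == 4 )
            d->arch.hvm_domain.pci_cf8 = *val;

        /* Always fall through so the access also reaches the
         * catch-all emulator. */
        return X86EMUL_UNHANDLEABLE;
    }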

>>
>> Obviously we'll need to handle incoming migration from domains one
>> release back, but after that we should be able to get rid of them,
>> right?
>
> We'd probably want to support old versions of QEMU for a while, right? The 
> params (and the catch-all server) need to stick around as long as they do.

Oh, right -- yeah, if we want to be able to run arbitrary versions of
QEMU we'll need to keep them around for a while.

 -George
