Re: [Xen-devel] [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
>>> On 23.11.16 at 14:33, <boris.ostrovsky@xxxxxxxxxx> wrote:
> On 11/23/2016 03:09 AM, Jan Beulich wrote:
>>>>> On 23.11.16 at 00:47, <boris.ostrovsky@xxxxxxxxxx> wrote:
>>> I have a prototype that replaces XEN_DOMCTL_set_avail_vcpus with
>>> XEN_DOMCTL_acpi_access and it seems to work OK. The toolstack needs to
>>> perform two (or more, if >32 VCPUs) hypercalls and the logic on the
>>> hypervisor side is almost the same as the ioreq handling that this
>>> series added in patch 8.
>>
>> Why would there be multiple hypercalls needed? (I guess I may need
>> to see the prototype to understand.)
>
> The interface is
>
> #define XEN_DOMCTL_acpi_access 81
> struct xen_domctl_acpi_access {
>     uint8_t rw;
>     uint8_t bytes;
>     uint16_t port;
>     uint32_t val;
> };
>
> And so as an example, to add VCPU1 to already existing VCPU0:
>
> /* Update the VCPU map */
> val = 3;
> xc_acpi_access(ctx->xch, domid, WRITE, 0xaf00, 1 /* bytes */, &val);
>
> /* Set event status in GPE block */
> val = 1 << 2;
> xc_acpi_access(ctx->xch, domid, WRITE, 0xafe0, 1 /* bytes */, &val);
>
> If we want to support ACPI registers in memory space then we need to
> add 'uint8_t space' and extend val to uint64_t.

It's a domctl, so we can change it down the road, but of course it would
be nice if this was supporting a proper GAS (without access width, I
guess) from the beginning.

> We also may want to make val a uint64_t to have fewer hypercalls for
> VCPU map updates. (We could, in fact, pass a pointer to the map but I
> think a scalar is cleaner.)

Well, mostly re-using the GAS layout, we'd allow for up to 255 bits per
operation. That might be fine for now, but the need for the caller to
break things up seems awkward to me, so I'd in fact lean towards passing
a handle.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel