
Re: [Xen-devel] [PATCH v2 1/2] altp2m: Merge p2m_set_altp2m_mem_access and p2m_set_mem_access



>>> On 01.02.16 at 17:35, <tlengyel@xxxxxxxxxxx> wrote:
> On Mon, Feb 1, 2016 at 9:30 AM, Ian Campbell <ian.campbell@xxxxxxxxxx>
> wrote:
> 
>> On Mon, 2016-02-01 at 09:21 -0700, Jan Beulich wrote:
>> > > > > On 01.02.16 at 15:45, <ian.campbell@xxxxxxxxxx> wrote:
>> > > On Fri, 2016-01-29 at 09:47 -0700, Jan Beulich wrote:
>> > > > > > > On 29.01.16 at 17:32, <tlengyel@xxxxxxxxxxx> wrote:
>> > > > > On Fri, Jan 29, 2016 at 9:19 AM, Jan Beulich <JBeulich@xxxxxxxx>
>> > > > > wrote:
>> > > > > > > > > On 29.01.16 at 17:12, <tlengyel@xxxxxxxxxxx> wrote:
>> > > > > > > On Fri, Jan 29, 2016 at 4:03 AM, Jan Beulich <
>> JBeulich@xxxxxxxx 
>> > > > > > > >
>> > > > > > > wrote:
>> > > > > > > > > > > On 28.01.16 at 21:58, <tlengyel@xxxxxxxxxxx> wrote:
>> > > > > > > > > --- a/xen/include/public/memory.h
>> > > > > > > > > +++ b/xen/include/public/memory.h
>> > > > > > > > > @@ -423,11 +423,14 @@ struct xen_mem_access_op {
>> > > > > > > > >      /* xenmem_access_t */
>> > > > > > > > >      uint8_t access;
>> > > > > > > > >      domid_t domid;
>> > > > > > > > > +    uint16_t altp2m_idx;
>> > > > > > > > > +    uint16_t _pad;
>> > > > > > > > >      /*
>> > > > > > > > >       * Number of pages for set op
>> > > > > > > > >       * Ignored on setting default access and other ops
>> > > > > > > > >       */
>> > > > > > > > >      uint32_t nr;
>> > > > > > > > > +    uint32_t _pad2;
>> > > > > > > >
>> > > > > > > > Repeating what I had said on v1: So this is a tools-only
>> > > > > > > > interface, yes. But it's not versioned (unlike e.g. domctl
>> > > > > > > > and sysctl), so altering the interface structure is at
>> > > > > > > > least fragile.
>> > > > > > >
>> > > > > > > Not sure what I can do to address this.
>> > > > > >
>> > > > > > Deprecate the old interface and introduce a new one. But other
>> > > > > > maintainers' opinions would be welcome.
>> > > > >
>> > > > > That seems like a very heavy-handed solution to me.
>> > > >
>> > > > I understand that - hence the request for others' opinions.
>> > >
>> > > It's unfortunate that we've found ourselves here, but I think rather
>> > > than deprecating the current op and adding a new one alongside it we
>> > > should just accept the one-time fragility this time around, add the
>> > > version field as part of this set of changes and try to remember to
>> > > include a version number next time we add a tools-only interface. I
>> > > don't think xenaccess is yet widely used outside of Tamas and the
>> > > Bitdefender folks, who I would assume can cope with such a change.
>> > >
>> > > I could accept that changing the op number would make sense, but I
>> > > don't think we should deprecate the old one (which implies continuing
>> > > to support it in parallel); if we go this route we should just retire
>> > > the old number straight away to return -ENOSYS (or maybe -EACCES,
>> > > which is what a version mismatch would have resulted in).
>> >
>> > That actually looks like a reasonable compromise, until we finally
>> > manage to get around to morphing the tools-only HVM-ops into a
>> > new hvmctl hypercall (leaving only guest-accessible ones in the
>> > current interface).
>>
>> Aren't the ones being discussed here xenmem subops rather than hvmops?
> 
> Yes, in v2 I'm not touching the guest-visible altp2m_set_mem_access hvmop,
> just how it is handled internally within Xen. I do wonder whether it was
> intended to be a guest-visible operation, though, or whether it was an
> oversight that it wasn't made tools-only.

I think this was intentional, so a guest could control the access
rights in the various views of its P2M.

Jan
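
For reference, the full structure layout produced by the quoted hunk would look
roughly like the sketch below. Only the altp2m_idx/_pad/_pad2 additions come
from the patch itself; the op and pfn members and their comments are assumed
from the pre-existing public header, and the purpose given for the explicit
padding is one plausible reading rather than something stated in this thread.

struct xen_mem_access_op {
    /* XENMEM_access_op_* */
    uint8_t op;
    /* xenmem_access_t */
    uint8_t access;
    domid_t domid;
    uint16_t altp2m_idx;   /* added: altp2m view the op applies to */
    uint16_t _pad;         /* added: explicit padding */
    /*
     * Number of pages for set op
     * Ignored on setting default access and other ops
     */
    uint32_t nr;
    uint32_t _pad2;        /* added: avoids implicit padding before pfn */
    /*
     * First pfn for set op
     * pfn for get op
     * ~0ull is used to set and get the default access for pages
     */
    uint64_aligned_t pfn;
};

With the explicit padding the 8-byte-aligned pfn member presumably moves from
offset 8 to offset 16 without any compiler-inserted padding, which also
illustrates the fragility being discussed: a tool stack built against the old
layout would hand the hypervisor a structure it now interprets differently.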


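Purely to illustrate the versioning idea raised above (hypothetical names
throughout -- XEN_MEM_ACCESS_OP_INTERFACE_VERSION, xen_mem_access_op_versioned
and check_interface_version are not from the patch or from Xen), a tools-only
structure carrying its own interface version could be checked along these
lines, so that a stale caller fails cleanly instead of having its arguments
misinterpreted:

#include <stdint.h>
#include <errno.h>

/* Hypothetical constant and types, for illustration only. */
#define XEN_MEM_ACCESS_OP_INTERFACE_VERSION  0x00000001U
typedef uint16_t domid_t;                  /* matches Xen's domid_t width */

struct xen_mem_access_op_versioned {       /* hypothetical layout */
    uint32_t interface_version;            /* must match the hypervisor's */
    uint8_t  op;
    uint8_t  access;
    domid_t  domid;
    uint16_t altp2m_idx;
    uint16_t _pad;
    uint32_t nr;
    uint64_t pfn;
};

static long check_interface_version(const struct xen_mem_access_op_versioned *mao)
{
    /* A mismatch fails with -EACCES, the error a version mismatch would
     * have produced per the discussion above; a retired subop number
     * would instead be rejected outright with -ENOSYS. */
    if ( mao->interface_version != XEN_MEM_ACCESS_OP_INTERFACE_VERSION )
        return -EACCES;
    return 0;
}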