Re: [Xen-devel] [PATCH V2] x86/altp2m: Added xc_altp2m_set_mem_access_multi()
On 03/13/2017 02:40 PM, Jan Beulich wrote:
>>>> On 13.03.17 at 13:29, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>> On 03/13/2017 02:19 PM, Jan Beulich wrote:
>>> I think as long as there's no need for the guest to use an operation
>>> on itself, it should not be a hvmop. After all, if you make it a domctl
>>> now and later find a need for it to be called by the guest, we can
>>> always replace the domctl by a hvmop. If, however, you start out
>>> with a hvmop, we'll be bound to be supporting it virtually forever.
>>
>> Since we're on this point, IMHO the xc_altp2m_ prefixed versions of
>> set_mem_access() and set_mem_access_multi() shouldn't exist at all.
>> Plain xc_set_mem_access() and xc_set_mem_access_multi() (as DOMCTLs)
>> should be enough, as long as we also add the view_id as an
>> extra-parameter, where view ID 0 is (already) the default EPT view.
>>
>> As it stands now, xc_set_mem_access() can do less than
>> xc_altp2m_set_mem_access() in that its view ID is always 0, but more
>> than xc_altp2m_set_mem_access() in that it is able to set more than one
>> page with a single hypercall, while the underlying hypervisor code is
>> the same.
>>
>> Maybe I'm missing something design-wise (obviously if these really do
>> need to be HVMOPs a separate libxc function is required). Maybe the
>> altp2m maintainers have a different view of the matter.
>
> "altp2m maintainers"? Maintainership is covered by the x86/mm/
> entry afaict, so perhaps you mean the original altp2m authors. In
> any event - you didn't Cc anyone of them.

I've used scripts/get_maintainer.pl on the patch and CC'd everyone in
the output. Consulting the original authors is a very good idea; I've
CC'd Ravi Sahita here, perhaps he can help us out.

Thanks,
Razvan
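For reference, a rough sketch of what the unified prototypes proposed above
might look like. These signatures are only an illustration of the idea in
this thread (a single DOMCTL-backed call per operation, gaining a view_id
parameter with view 0 as the default EPT view); they are assumptions, not
the actual libxc API, and the parameter names and ordering are guesses.

/* Illustrative only: hypothetical unified libxc prototypes for the
 * proposal above.  Not the signatures currently in xenctrl.h. */
#include <stdint.h>
#include <xenctrl.h>

/* Set 'access' on 'nr' consecutive pages starting at 'first_pfn',
 * in altp2m view 'view_id' (0 == default EPT view). */
int xc_set_mem_access(xc_interface *xch, uint32_t domid,
                      uint16_t view_id, xenmem_access_t access,
                      uint64_t first_pfn, uint32_t nr);

/* Set a possibly different access value per page, again in the given
 * altp2m view, with a single hypercall. */
int xc_set_mem_access_multi(xc_interface *xch, uint32_t domid,
                            uint16_t view_id, uint8_t *access_list,
                            uint64_t *pages, uint32_t nr);

With something along these lines, the xc_altp2m_set_mem_access*() wrappers
would become redundant, since view_id == 0 covers today's plain
xc_set_mem_access() behaviour.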