Re: [Xen-devel] [PATCH RFC 2/2] tools/libxc: Make the mem_access APIs generic
>>> On 09.04.14 at 10:23, <Ian.Campbell@xxxxxxxxxx> wrote:
>> int xc_hvm_set_mem_access(
>> - xc_interface *xch, domid_t dom, hvmmem_access_t mem_access, uint64_t first_pfn, uint64_t nr)
>> + xc_interface *xch, domid_t dom, hvmmem_access_t mem_access,
>> + uint64_t first_pfn, uint64_t nr)
>> {
>> - DECLARE_HYPERCALL;
>> - DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_mem_access, arg);
>> - int rc;
>> -
>> - arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
>> - if ( arg == NULL )
>> - {
>> - PERROR("Could not allocate memory for xc_hvm_set_mem_access
>> hypercall");
>> - return -1;
>> - }
>> -
>> - arg->domid = dom;
>> - arg->hvmmem_access = mem_access;
>> - arg->first_pfn = first_pfn;
>> - arg->nr = nr;
>> -
>> - hypercall.op = __HYPERVISOR_hvm_op;
>> - hypercall.arg[0] = HVMOP_set_mem_access;
>> - hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
>> -
>> - rc = do_xen_hypercall(xch, &hypercall);
>> -
>> - xc_hypercall_buffer_free(xch, arg);
>> -
>> - return rc;
>> + if ( nr > UINT_MAX )
>> + return -EINVAL;
>
> I guess the hypervisor must also make a similar check?
>
>> + return xc_set_mem_access(xch, dom, mem_access, first_pfn, nr);
>
> This will return -1 and set errno, while the EINVAL above returns a
> -errno value.
>
> Personally I would just drop the limit check and rely on the hypervisor
> to return an error, but if you want to keep the check then please make
> it use the -1 and set errno pattern.
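
For illustration, a minimal sketch of what keeping the check under the
usual -1-and-set-errno convention might look like. This is an assumption
about how the wrapper could be written, not the actual patch; it presumes
the usual xenctrl.h types plus <errno.h> and <limits.h>.

    /* Hypothetical sketch: the range check rewritten to follow the libxc
     * convention of returning -1 and setting errno, rather than returning
     * a -errno value directly. */
    int xc_hvm_set_mem_access(
        xc_interface *xch, domid_t dom, hvmmem_access_t mem_access,
        uint64_t first_pfn, uint64_t nr)
    {
        if ( nr > UINT_MAX )
        {
            errno = EINVAL;
            return -1;
        }

        return xc_set_mem_access(xch, dom, mem_access, first_pfn, nr);
    }
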
Actually the hypervisor can't do that check, since it only sees the
truncated (32-bit) value. The question is rather why a 64-bit function
parameter is being retained - I recently suggested switching it to 32 bits,
and I think this should definitely be done while the interface is being
fiddled with anyway.
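
For illustration, a hedged sketch of what that narrowing might look like;
the prototype below is an assumption (the access-type name and parameter
layout follow the code quoted above), not what the final patch must use.

    /* Hypothetical prototype: 'nr' narrowed to uint32_t so a caller can no
     * longer pass a count the hypercall interface would silently truncate,
     * which also makes the UINT_MAX check above unnecessary. */
    int xc_set_mem_access(
        xc_interface *xch, domid_t dom, hvmmem_access_t mem_access,
        uint64_t first_pfn, uint32_t nr);
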
Jan