
Re: [Xen-devel] [PATCH v4 39/39] arm/xen-access: Add test of xc_altp2m_change_gfn



Hi Razvan,


[...]

>> +
>> +    *gfn_new = ++(xenaccess->max_gpfn);
> Unnecessary parentheses.
>

Thanks.
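Will drop them in v5, i.e. simply:

    *gfn_new = ++xenaccess->max_gpfn;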

>> +    rc = xc_domain_populate_physmap_exact(xenaccess->xc_handle, domain_id, 1, 0, 0, gfn_new);
>> +    if ( rc < 0 )
>> +        goto err;
>> +
>> +    /* Copy content of the old gfn into the newly allocated gfn */
>> +    rc = xenaccess_copy_gfn(xenaccess, domain_id, *gfn_new, gfn_old);
>> +    if ( rc < 0 )
>> +        goto err;
>> +
>> +    rc = xc_altp2m_change_gfn(xenaccess->xc_handle, domain_id, ap2m_idx, gfn_old, *gfn_new);
>> +    if ( rc < 0 )
>> +        goto err;
>> +
>> +    return 0;
>> +
>> +err:
>> +    xc_domain_decrease_reservation_exact(xenaccess->xc_handle, domain_id, 1, 0, gfn_new);
>> +
>> +    (xenaccess->max_gpfn)--;
> Here too.
>
>> +
>> +    return -1;
>> +}
>> +
>> +static int xenaccess_reset_gfn(xc_interface *xch,
>> +                               domid_t domain_id,
>> +                               unsigned int ap2m_idx,
>> +                               xen_pfn_t gfn_old,
>> +                               xen_pfn_t gfn_new)
>> +{
>> +    int rc;
>> +
>> +    /* Reset previous state */
>> +    xc_altp2m_change_gfn(xch, domain_id, ap2m_idx, gfn_old, INVALID_GFN);
>> +
>> +    /* Invalidate the new gfn */
>> +    xc_altp2m_change_gfn(xch, domain_id, ap2m_idx, gfn_new, INVALID_GFN);
> Do these two xc_altp2m_change_gfn() calls not require error checking?
>
>> +
>> +    rc = xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, &gfn_new);
>> +    if ( rc < 0 )
>> +        return -1;
>> +
>> +    (xenaccess->max_gpfn)--;
> Again, please remove the parentheses.
>

Thanks again. I will adjust the implementation for v5.
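For the error checking, I have something like the following rough sketch in
mind for v5 (mirroring the v4 structure and simply bailing out as soon as one
of the calls fails; not tested yet):

static int xenaccess_reset_gfn(xc_interface *xch,
                               domid_t domain_id,
                               unsigned int ap2m_idx,
                               xen_pfn_t gfn_old,
                               xen_pfn_t gfn_new)
{
    int rc;

    /* Reset the remapped gfn to its original mapping */
    rc = xc_altp2m_change_gfn(xch, domain_id, ap2m_idx, gfn_old, INVALID_GFN);
    if ( rc < 0 )
        return -1;

    /* Invalidate the new gfn */
    rc = xc_altp2m_change_gfn(xch, domain_id, ap2m_idx, gfn_new, INVALID_GFN);
    if ( rc < 0 )
        return -1;

    /* Free the previously allocated gfn */
    rc = xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, &gfn_new);
    if ( rc < 0 )
        return -1;

    xenaccess->max_gpfn--;

    return 0;
}

Whether the decrease_reservation call should still be attempted when one of
the xc_altp2m_change_gfn() calls fails is something I still need to think
about.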

Cheers,
~Sergej

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

