
Re: [PATCH v3 1/4] xen/domctl, tools: Introduce a new domctl to get guest memory map


  • To: Henry Wang <xin.wang2@xxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Mon, 8 Apr 2024 08:19:51 +0200
  • Cc: Anthony PERARD <anthony.perard@xxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Alec Kwapis <alec.kwapis@xxxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Mon, 08 Apr 2024 06:20:05 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 08.04.2024 05:08, Henry Wang wrote:
> On 4/4/2024 5:28 PM, Jan Beulich wrote:
>> On 03.04.2024 10:16, Henry Wang wrote:
>>> --- a/xen/arch/arm/domain.c
>>> +++ b/xen/arch/arm/domain.c
>>> @@ -696,6 +696,7 @@ int arch_domain_create(struct domain *d,
>>>   {
>>>       unsigned int count = 0;
>>>       int rc;
>>> +    struct mem_map_domain *mem_map = &d->arch.mem_map;
>>>   
>>>       BUILD_BUG_ON(GUEST_MAX_VCPUS < MAX_VIRT_CPUS);
>>>   
>>> @@ -785,6 +786,20 @@ int arch_domain_create(struct domain *d,
>>>       d->arch.sve_vl = config->arch.sve_vl;
>>>   #endif
>>>   
>>> +    if ( mem_map->nr_mem_regions < XEN_MAX_MEM_REGIONS )
>>> +    {
>>> +        mem_map->regions[mem_map->nr_mem_regions].start = GUEST_MAGIC_BASE;
>>> +        mem_map->regions[mem_map->nr_mem_regions].size = GUEST_MAGIC_SIZE;
>>> +        mem_map->regions[mem_map->nr_mem_regions].type = GUEST_MEM_REGION_MAGIC;
>>> +        mem_map->nr_mem_regions++;
>>> +    }
>>> +    else
>>> +    {
>>> +        printk("Exceed max number of supported memory map regions\n");
>> Debugging leftover?
> 
> Well, not really; I did this on purpose to print some information before
> exiting. But now I realize the other error paths in arch_domain_create()
> do not do that. I will drop this printk in v4.
> 
>>> @@ -176,6 +175,37 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>>>   
>>>           return rc;
>>>       }
>>> +    case XEN_DOMCTL_get_mem_map:
>> ... separating blank line above this line and ...
>>
>>> +    {
>>> +        int rc = 0;
>>> +        uint32_t nr_regions, i;
>>> +
>>> +        if ( domctl->u.mem_map.pad )
>>> +            return -EINVAL;
>>> +
>>> +        /*
>>> +         * Cap the number of regions to the minimum value between toolstack and
>>> +         * hypervisor to avoid overflowing the buffer.
>>> +         */
>>> +        nr_regions = min(d->arch.mem_map.nr_mem_regions,
>>> +                         domctl->u.mem_map.nr_mem_regions);
>>> +
>>> +        domctl->u.mem_map.nr_mem_regions = nr_regions;
>>> +
>>> +        for ( i = 0; i < nr_regions; i++ )
>>> +        {
>>> +            if ( d->arch.mem_map.regions[i].pad )
>>> +                return -EINVAL;
>>> +        }
>>> +
>>> +        if ( copy_to_guest(domctl->u.mem_map.buffer,
>>> +                           d->arch.mem_map.regions,
>>> +                           nr_regions) ||
>>> +             __copy_to_guest(u_domctl, domctl, 1) )
>>> +            rc = -EFAULT;
>>> +
>>> +        return rc;
>>> +    }
>>>       default:
>> ... this one.
> 
> ...personally I don't have strong opinions on the style as long as we
> keep it consistent. I can switch the Arm one to follow the x86 style or
> just leave it as is.
> 
>> Further, with the way you use min() above, how is the caller going to know
>> whether it simply specified too small an array?
> 
> I am a bit unsure whether we need to forbid the caller from specifying a
> smaller value than the maximum number of regions supported by the
> hypervisor; technically it is legal, although I agree it will lead to
> some issues on the toolstack side. It looks like the similar e820
> hypercall also does not forbid this (see get_mem_mapping_layout() and
> the related XENMEM_memory_map). Do you have any suggestions?

Fill only as much of the array as there is space for, but return the full
count to the caller. Another option (less desirable imo) would be to return
-ENOBUFS. If it were to be written anew now, I'd likely code the
XENMEM_memory_map handling that way, too. But it's too late for that now.
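
For illustration, a minimal sketch of that first approach, reusing the
names from the quoted patch (untested, and not the actual v4 code):

    case XEN_DOMCTL_get_mem_map:
    {
        uint32_t nr_regions;

        if ( domctl->u.mem_map.pad )
            return -EINVAL;

        /* Fill only as many entries as the caller's buffer can hold... */
        nr_regions = min(d->arch.mem_map.nr_mem_regions,
                         domctl->u.mem_map.nr_mem_regions);

        /*
         * ...but report the full count, so a caller that passed too small
         * a buffer can detect this and retry with a larger one.
         */
        domctl->u.mem_map.nr_mem_regions = d->arch.mem_map.nr_mem_regions;

        if ( copy_to_guest(domctl->u.mem_map.buffer,
                           d->arch.mem_map.regions,
                           nr_regions) ||
             __copy_field_to_guest(u_domctl, domctl,
                                   u.mem_map.nr_mem_regions) )
            return -EFAULT;

        return 0;
    }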

>> And then you check d->arch.mem_map.regions[i].pad. Why's that? And even
>> if needed here for some reason, that's surely not EINVAL, but an internal
>> error in Xen.
> 
> I did that under the impression that we need to check that the value of
> the padding field is 0. Also, you mentioned in one of the comments below
> that Xen should guarantee the padding field is 0 before returning.
> Apologies if I misunderstood your comment. The -EINVAL is taken from the
> way the padding field is checked in XEN_DOMCTL_vuart_op above.
> Personally I would keep things consistent, but I am open to suggestions
> to make it better.

In XEN_DOMCTL_vuart_op it is caller input which is being checked (and
needs checking). You're checking internal Xen state here instead.
Considering the nature of the issue arising if the assumption was broken,
ASSERT() would seem to be the construct to use for the internal state
check.
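
To illustrate the distinction as it would apply to the quoted hunk (a
sketch, untested; ASSERT() only fires in debug builds, which matches the
"internal invariant" nature of the check):

    /* Caller-provided input: must be validated, -EINVAL on failure. */
    if ( domctl->u.mem_map.pad )
        return -EINVAL;

    /*
     * Internal Xen state: a non-zero pad here would mean a bug in Xen
     * itself, so assert the invariant instead of returning an error.
     */
    for ( i = 0; i < nr_regions; i++ )
        ASSERT(!d->arch.mem_map.regions[i].pad);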

>> Finally instead of __copy_to_guest() can't you use __copy_field_to_guest(),
>> for just nr_regions?
> 
> You mean replacing __copy_to_guest(u_domctl, domctl, 1) with just
> __copy_field_to_guest(u_domctl, domctl, u.mem_map.nr_mem_regions)? OK, I
> can do that in v4.

Yes (unless there are technical reasons not to, of course).
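
(For clarity, the agreed change would look roughly like this, a sketch,
untested:

    -         __copy_to_guest(u_domctl, domctl, 1) )
    +         __copy_field_to_guest(u_domctl, domctl,
    +                               u.mem_map.nr_mem_regions) )

i.e. only the one field the hypervisor modified is copied back to the
caller, rather than the whole struct xen_domctl.)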

Jan



 

