Re: [PATCH v6 06/15] tools: add support for cache coloring configuration
Hi Anthony,

On Wed, Feb 28, 2024 at 7:05 PM Anthony PERARD <anthony.perard@xxxxxxxxx> wrote:
>
> On Mon, Jan 29, 2024 at 06:18:02PM +0100, Carlo Nonato wrote:
> > diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
> > index 2ef8b4e054..4b541fffd2 100644
> > --- a/tools/include/xenctrl.h
> > +++ b/tools/include/xenctrl.h
> > @@ -2653,6 +2653,15 @@ int xc_livepatch_replace(xc_interface *xch, char *name, uint32_t timeout, uint32
> >  int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
> >                           xen_pfn_t start_pfn, xen_pfn_t nr_pfns);
> >
> > +/*
> > + * Set LLC colors for a domain.
> > + * This is an internal hypercall. It can only be used directly after domain
>
> What is an "internal hypercall"? Can those even exist?

Bad naming on my side. I'll remove that part of the comment.

> > + * creation. An attempt to use it afterwards will result in an error.
> > + */
> > +int xc_domain_set_llc_colors(xc_interface *xch, uint32_t domid,
> > +                             const unsigned int *llc_colors,
> > +                             unsigned int num_llc_colors);
> > +
> >  #if defined(__arm__) || defined(__aarch64__)
> >  int xc_dt_overlay(xc_interface *xch, void *overlay_fdt,
> >                    uint32_t overlay_fdt_size, uint8_t overlay_op);
> > diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
> > index f2d9d14b4d..ad02288659 100644
> > --- a/tools/libs/ctrl/xc_domain.c
> > +++ b/tools/libs/ctrl/xc_domain.c
> > @@ -2180,6 +2180,40 @@ int xc_domain_soft_reset(xc_interface *xch,
> >      domctl.domain = domid;
> >      return do_domctl(xch, &domctl);
> >  }
> > +
> > +int xc_domain_set_llc_colors(xc_interface *xch, uint32_t domid,
> > +                             const unsigned int *llc_colors,
> > +                             unsigned int num_llc_colors)
> > +{
> > +    struct xen_domctl domctl = {};
> > +    DECLARE_HYPERCALL_BUFFER(uint32_t, local);
> > +    int ret = -1;
> > +
> > +    if ( num_llc_colors )
> > +    {
> > +        size_t bytes = sizeof(uint32_t) * num_llc_colors;
>
> Isn't there a risk of overflow, maybe only on 32bit platform? Or maybe
> that doesn't matter because the hypervisor should be able to find out if
> the buffer is too short, right?

I'm not aware of a way to find out the actual buffer size in the
hypervisor, so to answer your question: yes, there's a risk of overflow,
but that would require a huge number of colors, which would probably fail
the allocation or fail the hypervisor's check on the platform-supported
number of colors. If you prefer, I can add the check here anyway (see the
sketch below this mail for what it could look like).

> > +        local = xc_hypercall_buffer_alloc(xch, local, bytes);
> > +        if ( local == NULL )
> > +        {
> > +            PERROR("Could not allocate LLC colors for set_llc_colors");
> > +            return -ENOMEM;
> > +        }
> > +        memcpy(local, llc_colors, bytes);
> > +        set_xen_guest_handle(domctl.u.set_llc_colors.llc_colors, local);
> > +    }
> > +
> > +    domctl.cmd = XEN_DOMCTL_set_llc_colors;
> > +    domctl.domain = domid;
> > +    domctl.u.set_llc_colors.num_llc_colors = num_llc_colors;
> > +
> > +    ret = do_domctl(xch, &domctl);
> > +
> > +    if ( local )
> > +        xc_hypercall_buffer_free(xch, local);
>
> It doesn't look like you need to check if "local != NULL" before
> calling xc_hypercall_buffer_free(); it should work even with
> local == NULL. This is even used multiple times in xc_kexec.

Ok, thanks.

> > +
> > +    return ret;
> > +}
>
> Thanks,
>
> --
> Anthony PERARD
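For illustration, the overflow guard discussed above could look roughly
like the snippet below. This is only a sketch, not part of the posted
series: the helper name check_llc_colors_size() is made up, and the choice
of errno value is arbitrary.

    #include <errno.h>
    #include <stdint.h>

    /*
     * Reject values of num_llc_colors for which
     * sizeof(uint32_t) * num_llc_colors would overflow a size_t;
     * otherwise compute the buffer size.
     */
    static int check_llc_colors_size(unsigned int num_llc_colors,
                                     size_t *bytes)
    {
        if ( num_llc_colors > SIZE_MAX / sizeof(uint32_t) )
        {
            errno = E2BIG;
            return -1;
        }

        *bytes = sizeof(uint32_t) * (size_t)num_llc_colors;
        return 0;
    }

On a 64-bit build the condition can never trigger for an unsigned int
count, so the guard only really matters on 32-bit toolstacks, which is the
case Anthony raised.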
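As a usage note, a caller of the new libxc function passes an array of
color indices and its length, and must do so right after domain creation,
as the xenctrl.h comment states. A minimal caller sketch, where the color
values and the helper name are made up for illustration:

    #include <xenctrl.h>

    /* Assign LLC colors 0-3 to a freshly created domain. */
    static int color_new_domain(xc_interface *xch, uint32_t domid)
    {
        const unsigned int colors[] = { 0, 1, 2, 3 };

        return xc_domain_set_llc_colors(xch, domid, colors,
                                        sizeof(colors) / sizeof(colors[0]));
    }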