Re: [Xen-devel] [PATCHv11 4/4] gnttab: use per-VCPU maptrack free lists
>>> On 05.06.15 at 18:42, <david.vrabel@xxxxxxxxxx> wrote:
> On 05/06/15 17:11, Jan Beulich wrote:
>>>>> On 05.06.15 at 17:55, <david.vrabel@xxxxxxxxxx> wrote:
>>> On 05/06/15 15:51, Jan Beulich wrote:
>>>>>>> On 02.06.15 at 18:26, <david.vrabel@xxxxxxxxxx> wrote:
>>>>> +    /*
>>>>> +     * max_maptrack_frames is per domain so each VCPU gets a share of
>>>>> +     * the maximum, but allow at least one frame per VCPU.
>>>>> +     */
>>>>> +    if ( v->maptrack_frames
>>>>> +         && v->maptrack_frames >= max_maptrack_frames / v->domain->max_vcpus )
>>>>> +        return -1;
>>>>
>>>> So with e.g. max_maptrack_frames being 256 and ->max_vcpus
>>>> being 129 you'd potentially allow each vCPU to only have exactly
>>>> one page despite there being 127 more to use.
>>>
>>> There's a limit to how many wacky combinations we can support with a
>>> single default limit.
>>>
>>> With the standard defaults and 129 VCPUs:
>>>
>>> Before:
>>>   131072 entries (256 * 4096 / 8)
>>>
>>> After:
>>>   231168 entries (1024 / 129 * 129 * 4096 / 16)
>>>   1792 entries per vCPU.
>>
>> And that's why I'm calling the currently proposed resource
>> management model into question.
>
> The new default of 1024 frames ensures that, with any number of VCPUs,
> the domain ends up with /more/ entries than the old default (256
> frames) provided.
>
> It's not at all clear what you want here.  Can you provide a proposal?

Having more frames per domain doesn't help a guest that e.g. first does
one or more mappings on each vCPU and then wants to do very many
mappings on a single vCPU.  I think there needs to be a (slow) fallback
path where using "foreign" vCPUs' maptrack entries is possible, or
something else needs to be done to avoid regressing in scenarios that
work prior to your change.

Jan
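To make the numbers in the exchange above easy to check, here is a small
standalone C sketch of the arithmetic. The constants (the 256 and 1024
frame defaults, 4096-byte frames, 8- and 16-byte entries, 129 vCPUs) are
taken from the figures quoted in the thread; the variable names are
illustrative and none of this is Xen code.

/*
 * Illustrative only: standalone arithmetic mirroring the numbers in the
 * thread.  The frame counts and per-entry sizes (8 bytes before the
 * series, 16 bytes after) come from the quoted figures.
 */
#include <stdio.h>

int main(void)
{
    unsigned int max_maptrack_frames = 1024;   /* proposed new default */
    unsigned int old_max_frames = 256;         /* old default */
    unsigned int max_vcpus = 129;

    /* Per-vCPU share as computed by the quoted check: integer division. */
    unsigned int per_vcpu_frames = max_maptrack_frames / max_vcpus;   /* 7 */

    /* With 256 frames and 129 vCPUs the share truncates to 1 frame,
     * leaving 256 - 1*129 = 127 frames unusable -- Jan's example. */
    printf("256/129 share: %u frame(s), %u frames stranded\n",
           old_max_frames / max_vcpus,
           old_max_frames - (old_max_frames / max_vcpus) * max_vcpus);

    /* Old scheme: one domain-wide pool, 8-byte entries. */
    printf("before: %u entries\n", old_max_frames * 4096 / 8);       /* 131072 */

    /* New scheme: per-vCPU share, 16-byte entries. */
    printf("after:  %u entries total, %u per vCPU\n",
           per_vcpu_frames * max_vcpus * 4096 / 16,                   /* 231168 */
           per_vcpu_frames * 4096 / 16);                              /* 1792 */

    return 0;
}

The integer division in the quoted check is what strands 127 of the 256
frames in the 129-vCPU case, even though the domain as a whole gains
entries under the new default.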
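For illustration, a rough sketch of the kind of "foreign" vCPU fallback
being asked for. The types and helpers here (struct vcpu_maptrack,
maptrack_pop, get_maptrack_handle) are hypothetical stand-ins rather
than the actual grant-table interfaces, and locking plus the per-domain
limit checks are deliberately left out.

/*
 * Rough sketch only -- hypothetical types and helpers, not Xen's actual
 * grant-table code.  It shows the shape of the slow path: when the
 * current vCPU's maptrack share is exhausted, fall back to other
 * vCPUs' free lists before failing the map operation.
 */
#include <stdio.h>

#define MAPTRACK_TAIL    (~0u)
#define ENTRIES_PER_VCPU 4          /* tiny share, just for the demo */
#define NR_VCPUS         2

struct vcpu_maptrack {
    unsigned int free[ENTRIES_PER_VCPU];   /* stack of free handles */
    unsigned int nr_free;
    /* In the hypervisor this would need the owner's maptrack lock. */
};

static unsigned int maptrack_pop(struct vcpu_maptrack *m)
{
    return m->nr_free ? m->free[--m->nr_free] : MAPTRACK_TAIL;
}

/*
 * Allocate a maptrack handle for vCPU 'cur'.  Fast path: own free list.
 * Slow path: borrow from any other vCPU that still has free entries.
 */
static unsigned int get_maptrack_handle(struct vcpu_maptrack *v,
                                        unsigned int cur)
{
    unsigned int handle = maptrack_pop(&v[cur]);

    for ( unsigned int i = 0; handle == MAPTRACK_TAIL && i < NR_VCPUS; i++ )
        if ( i != cur )
            handle = maptrack_pop(&v[i]);   /* the "foreign" fallback */

    return handle;
}

int main(void)
{
    struct vcpu_maptrack v[NR_VCPUS];

    /* Give every vCPU its initial share of handles. */
    for ( unsigned int i = 0; i < NR_VCPUS; i++ )
    {
        v[i].nr_free = ENTRIES_PER_VCPU;
        for ( unsigned int j = 0; j < ENTRIES_PER_VCPU; j++ )
            v[i].free[j] = i * ENTRIES_PER_VCPU + j;
    }

    /* vCPU 0 maps more than its share: the extra mappings succeed by
     * borrowing vCPU 1's entries instead of failing after 4 mappings. */
    for ( unsigned int n = 0; n < 2 * ENTRIES_PER_VCPU + 1; n++ )
    {
        unsigned int h = get_maptrack_handle(v, 0);
        if ( h == MAPTRACK_TAIL )
            printf("map %u -> out of maptrack entries\n", n);
        else
            printf("map %u -> handle %u\n", n, h);
    }

    return 0;
}

The point of such a slow path is that a vCPU which exhausts its own
share can still consume the remainder of the domain-wide allocation, as
it could before the per-VCPU split, instead of failing while other
vCPUs' entries sit idle.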