RE: [Xen-devel] slow xp hibernation revisited
> > On 04/06/2011 05:54, "James Harper" <james.harper@xxxxxxxxxxxxxxxx> wrote:
>
> >> It's the !test_bit(address_offset>>XC_PAGE_SHIFT, entry->valid_mapping)
> >> that is causing the if expression to be true. From what I can see so
> >> far, the bit representing the pfn in entry->valid_mapping is 0 because
> >> err[] returned for that pfn was -EINVAL.
> >>
> >> Maybe the test is superfluous? Is there a need to do the remap if all
> >> the other variables in the expression are satisfied? If the remap was
> >> already done and the page could not be mapped last time, what reasons
> >> are there why it would succeed this time?
> >>
> >
> > FWIW, removing the test_bit makes the hibernate go faster than my screen
> > can refresh over a slow DSL connection and in a quick 30 second test
> > doesn't appear to have any adverse effects.
> >
> > If there is a chance that a subsequent call to qemu_remap_bucket with
> > identical parameters could successfully map a page that couldn't be
> > mapped in the previous call, are there any optimisations that could be
> > done? Maybe only attempt to map the page being accessed rather than all
> > pages in the bucket if the other parameters are identical?
>
> I'm guessing this happens because of frequent guest CPU access to non-RAM
> during hibernate? Unfortunately really the qemu checks do make sense, I'd
> say, since the memory map of the guest can be changed dynamically, and we
> currently only flush the map_cache on XENMEM_decrease_reservation
> hypercalls.
>
> One fix would be for Xen to know which regions of non-RAM are actually
> emulated device areas, and only forward those to qemu. It could then
> quick-fail on the rest.
>
> However, the easiest fix would be to only re-try to map the one pfn under
> test. Reloading a whole bucket takes bloody ages as they are *huge*: 256kB
> in 32-bit qemu; 4MB in 64-bit qemu. It might be easiest to do a test re-map
> of the one page to a scratch area, then iff it succeeds, *then* call
> qemu_remap_bucket(). Then you remap the bucket only if something really has
> changed, and you don't have to mess too much with modifying the bucket
> yourself outside of remap_bucket.
>
> How does that sound?

Sounds like a plan. Is there no way to detect the changes to the memory map
of the guest?

Looking past the test_bit call, the next statement does another test and
sets last_address_index to 0 and returns NULL. Is this just to ensure that
the next access isn't just trivially accepted?

James
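
For illustration, a rough sketch of the single-pfn probe suggested above
might look like the following. The helper name mapcache_probe_single_pfn()
is made up for this example, and the xc_map_foreign_bulk() prototype shown
is the Xen 4.1-era libxc one; the real qemu-dm mapcache code may differ.

/*
 * Illustrative sketch only: the helper name and the scratch-probe approach
 * are assumptions, not the actual qemu-dm mapcache code.
 */
#include <stdint.h>
#include <sys/mman.h>
#include <xenctrl.h>

/* Return 1 if the single pfn can currently be mapped, 0 otherwise. */
static int mapcache_probe_single_pfn(xc_interface *xch, uint32_t domid,
                                     xen_pfn_t pfn)
{
    int err = 0;
    void *scratch;

    /* Map just the one page into a throw-away scratch area. */
    scratch = xc_map_foreign_bulk(xch, domid, PROT_READ | PROT_WRITE,
                                  &pfn, &err, 1);
    if (scratch == NULL)
        return 0;

    /* Only the yes/no answer is wanted; drop the scratch mapping again. */
    munmap(scratch, XC_PAGE_SIZE);
    return err == 0;
}

The slow path in qemu_map_cache() would then call qemu_remap_bucket() only
when the probe reports success, so repeated guest accesses to a pfn that
still cannot be mapped fail fast instead of re-mapping a 256kB/4MB bucket
on every access.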