Re: [Xen-devel] [PATCH v4 8/9] xen/arm: Implement hypercall for dirty page tracing
On Thu, 2013-10-17 at 20:05 +0900, Jaeyong Yoo wrote:
> > > > > + {
> > > > > +     lpae_t pte = *vlp2m_pte;
> > > > > +     pte.p2m.write = 1;
> > > > > +     write_pte(vlp2m_pte, pte);
> > > > > +     flush_tlb_local();
> > > > > +
> > > > > +     /* in order to remove mappings in cleanup stage */
> > > > > +     add_mapped_vaddr(d, addr);
> > > >
> > > > No locks or atomic operations here? How are races with the tools
> > > > reading the dirty bitmap addressed? If it is by clever ordering of
> > > > the checks and pte writes then I think a comment would be in order.
> > >
> > > Actually, the lock is inside the add_mapped_vaddr function.
> >
> > But that only covers the bitmap update, not the pte frobbing.
>
> I also locked with the same lock at get_dirty_bitmap, which is reading
> the bitmap.

I meant the pte frobbing immediately before the call to add_mapped_vaddr
quoted above.

> > > There are three candidates:
> > >
> > > 1) bitmap
> > > 2) encoding into p2m
> > > 3) linked list of VAs.
> > >
> > > And, two functions for dirty-page tracing:
> > >
> > > A) dirty-page handling (at trap)
> > > B) getting dirty bitmap (for toolstack)
> > >
> > > 1) and 2) have an advantage for A) but have to search the full range
> > > of pages at B) to find out which page is dirtied and set the page as
> > > read-only for the next dirty detection. On the other hand, 3) does
> > > not need to search the full range at B).
> > > I prefer 3) due to the 'non-full range search' but I understand your
> > > concern.
> > > Maybe I can do some measurement for each method to see which one is
> > > better.
> >
> > If you have a bitmap then you can do a scan for each set bit, and the
> > offset of each bit corresponds to the guest pfn, which allows you to
> > directly calculate the address of the pte in the vlpt.
> >
> > I don't think doing a scan for each set bit is going to compare
> > unfavourably to walking a linked list. Certainly not once the memory
> > allocator gets involved.
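[Editor's illustration: the locking Ian is asking about could be sketched as taking the *same* lock around both the pte update and the dirty-address record. This is plain portable C with invented names (`fake_pte`, `dirty_log`, `handle_dirty_fault`), not Xen's actual types or interfaces; a real handler would also flush the TLB where noted.]

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

struct fake_pte { unsigned int write : 1; };   /* stand-in for lpae_t */

struct dirty_log {
    pthread_mutex_t lock;   /* the lock add_mapped_vaddr already takes */
    uintptr_t addrs[16];
    unsigned int naddrs;
};

/* Trap-side handler: make the page writable and record its address,
 * atomically with respect to readers of the log, so a reader can never
 * see a recorded address whose pte is still in the old state. */
static void handle_dirty_fault(struct dirty_log *log,
                               struct fake_pte *pte, uintptr_t addr)
{
    pthread_mutex_lock(&log->lock);
    pte->write = 1;                 /* pte frobbing, now inside the lock */
    /* a real hypervisor would flush_tlb_local() here */
    log->addrs[log->naddrs++] = addr;
    pthread_mutex_unlock(&log->lock);
}

/* Toolstack-side reader: drain the recorded addresses and re-protect
 * the corresponding pages under the same lock. */
static unsigned int drain_dirty(struct dirty_log *log, struct fake_pte *ptes)
{
    pthread_mutex_lock(&log->lock);
    unsigned int n = log->naddrs;
    for (unsigned int i = 0; i < n; i++)
        ptes[i].write = 0;          /* reset write permission */
    log->naddrs = 0;
    pthread_mutex_unlock(&log->lock);
    return n;
}
```

[With this shape there is no window in which the pte is writable but the address is not yet recorded, which is the race Ian points at.]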
> Oh, do you mean using something like "find_next_zero_bit"?

find_next_bit, probably, since you want to find the 1s. I suppose you
could also initialise to all 1s and clear bits instead.

> At first, I thought of searching bit-by-bit, but I just saw findbit.S
> and it looks quite optimized.

Yes, I think it should be fast enough.

> > > > > +/* get the dirty bitmap from the linked list stored by add_mapped_vaddr
> > > > > + * and also clear the mapped vaddrs linked list */
> > > > > +static void get_dirty_bitmap(struct domain *d, uint8_t *bitmap[])
> > > > > +{
> > > > > +    struct page_info *page_head;
> > > > > +    struct mapped_va_node *mvn_head;
> > > > > +    struct page_info *tmp_page;
> > > > > +    struct mapped_va_node *tmp_mvn;
> > > > > +    vaddr_t gma_start = 0;
> > > > > +    vaddr_t gma_end = 0;
> > > > > +
> > > > > +    /* this hypercall is called from domain 0, and we don't know
> > > > > +     * which guest's vlpt is mapped in xen_second, so, to be
> > > > > +     * sure, we restore vlpt here */
> > > > > +    restore_vlpt(d);
> > > >
> > > > AFAICT you don't actually use the vlpt here, just the domain's list
> > > > of dirty addresses (which as above could become a bitmap)
> > >
> > > I think vlpt is still needed because at this stage, we have to reset
> > > the write permission of dirtied pages. Maybe some tricks for
> > > efficiently doing this?
> >
> > Ah, of course.
> >
> > Do you need to be careful about the context switch of a dom0 vcpu
> > which has a foreign vlpt loaded? I suppose you unload it at the end of
> > this function so it is safe because Xen doesn't preempt vcpus in
> > hypervisor mode.
>
> That is my concern now. For safety, unloading is necessary but it
> requires flushing the vlpt, which is not a cheap operation.

I suppose it could be deferred either to the next load of a vlpt, if it
is a different one, or to the next context switch away from the vcpu.
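[Editor's illustration: the bitmap scan Ian suggests can be sketched in plain C. `find_next_bit` below is a simplified stand-in for the optimised routine in Xen's findbit.S (it skips zero words and uses a count-trailing-zeros builtin), and the step from pfn to vlpt pte is only hinted at in a comment; it assumes any bits beyond `nbits` in the last word are zero.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define BITS_PER_WORD 64

/* Return the index of the first set bit at or after 'start', or 'nbits'
 * if there is none.  Toy stand-in for the real find_next_bit(). */
static size_t find_next_bit(const uint64_t *bm, size_t nbits, size_t start)
{
    size_t i = start;
    while (i < nbits) {
        uint64_t w = bm[i / BITS_PER_WORD] >> (i % BITS_PER_WORD);
        if (w)
            return i + (size_t)__builtin_ctzll(w);
        /* whole remaining word is clear: jump to the next word */
        i = (i / BITS_PER_WORD + 1) * BITS_PER_WORD;
    }
    return nbits;
}

/* Visit every dirty pfn in the bitmap; the bit offset *is* the guest
 * pfn, so in Xen 'visit' could index straight into the vlpt to find the
 * pte and reset its write permission.  Returns the number of dirty pfns. */
static size_t scan_dirty(const uint64_t *bm, size_t nbits,
                         void (*visit)(size_t pfn))
{
    size_t count = 0;
    for (size_t pfn = find_next_bit(bm, nbits, 0);
         pfn < nbits;
         pfn = find_next_bit(bm, nbits, pfn + 1)) {
        if (visit)
            visit(pfn);     /* e.g. re-protect the vlpt pte for pfn */
        count++;
    }
    return count;
}
```

[The cost is one cheap word-skip loop per run of clear bits, which is why Ian expects it to compare well against chasing a heap-allocated linked list.]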
You'd just need to track which foreign vlpt was currently loaded on the
vcpu in vcpu->arch.something?

> Actually, there is one more concern:
> Since we only context switch the vlpt area with 'migrating domains',
> when two domains (one migrating and the other not) are context
> switched, the non-migrating domain has access to the foreign vlpt. In
> order to prevent access to the foreign vlpt area, do you think
> unloading the vlpt when switching into non-migrating domains is also
> necessary?

The domain itself doesn't have access to the vlpt, since it is mapped in
hypervisor context, so the only danger is that Xen accidentally uses the
vlpt for something while the non-migrating domain is scheduled. I think
we should just not write that bug ;-)

IOW I think flushing the vlpt lazily is OK.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
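[Editor's illustration: the lazy-flush bookkeeping discussed above, tracking the currently loaded foreign vlpt in a hypothetical vcpu->arch field, might look roughly like this toy model. All names (`fake_vcpu_arch`, `loaded_vlpt`, `flush_vlpt`) are invented; only the policy matters: flush only when a *different* vlpt must be loaded, or when switching away.]

```c
#include <assert.h>
#include <stddef.h>

struct fake_domain { int id; };

struct fake_vcpu_arch {
    struct fake_domain *loaded_vlpt;  /* the "vcpu->arch.something" field */
};

static unsigned int flush_count;      /* counts simulated vlpt flushes */

static void flush_vlpt(void) { flush_count++; }

/* Load d's vlpt for this vcpu, flushing only if a different domain's
 * vlpt is currently mapped; reloading the same one costs nothing. */
static void load_vlpt(struct fake_vcpu_arch *arch, struct fake_domain *d)
{
    if (arch->loaded_vlpt == d)
        return;                       /* same vlpt: nothing to do */
    if (arch->loaded_vlpt != NULL)
        flush_vlpt();                 /* stale foreign mappings: flush */
    /* ... map d's p2m into the vlpt area here ... */
    arch->loaded_vlpt = d;
}

/* Context switch away from the vcpu: this is the other deferred point
 * at which a stale mapping is finally dropped and flushed. */
static void ctxt_switch_from(struct fake_vcpu_arch *arch)
{
    if (arch->loaded_vlpt != NULL) {
        flush_vlpt();
        arch->loaded_vlpt = NULL;
    }
}
```

[Under this policy, back-to-back dirty-bitmap hypercalls for the same migrating domain pay for no flush at all, which is the saving the thread is after.]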