
Re: [Xen-devel] [PATCH 06 of 14] Don't lose track of the log dirty bitmap


  • To: "Tim Deegan" <tim@xxxxxxx>
  • From: "Andres Lagar-Cavilla" <andres@xxxxxxxxxxxxxxxx>
  • Date: Thu, 24 Nov 2011 09:46:37 -0800
  • Cc: andres@xxxxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx, keir.xen@xxxxxxxxx, jbeulich@xxxxxxxx, adin@xxxxxxxxxxxxxx
  • Delivery-date: Thu, 24 Nov 2011 17:47:13 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Confirmed that your (simpler) patch solves the issue.
Acked-by: Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>

Thanks
Andres
> At 16:11 -0500 on 23 Nov (1322064673), Andres Lagar-Cavilla wrote:
>> hap_logdirty_init unconditionally sets the top of the log dirty
>> bitmap to INVALID_MFN. If a bitmap was already allocated, it is
>> then leaked, and the host crashes on an ASSERT when the domain is
>> cleaned up. Fixing it here.
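
To make the failure mode concrete, here is a minimal, self-contained C
sketch (stand-in types and a stand-in allocator count, not the real Xen
structures) of the leak being described; the final assert aborts,
mirroring the cleanup-time ASSERT on the host:

/* leak_sketch.c: a simplified model of the bug.  Re-running an init
 * that unconditionally resets log_dirty.top drops the only reference
 * to a live bitmap page, so the outstanding-page count that domain
 * cleanup ASSERTs on never reaches zero. */
#include <assert.h>
#include <stdlib.h>

struct page { unsigned long dirty_bits; };

static int pages_outstanding;          /* stand-in allocator count */

struct log_dirty {
    struct page *top;                  /* stand-in for the top MFN */
};
#define INVALID_TOP NULL               /* stand-in for INVALID_MFN */

static void log_dirty_init(struct log_dirty *ld)
{
    ld->top = INVALID_TOP;             /* unconditional reset */
}

static void log_dirty_enable(struct log_dirty *ld)
{
    ld->top = calloc(1, sizeof(*ld->top));
    pages_outstanding++;
}

static void log_dirty_teardown(struct log_dirty *ld)
{
    if ( ld->top != INVALID_TOP )
    {
        free(ld->top);
        pages_outstanding--;
        ld->top = INVALID_TOP;
    }
    /* Mirrors the cleanup-time ASSERT: it fires here, because the
     * second init below orphaned the allocated page. */
    assert(pages_outstanding == 0);
}

int main(void)
{
    struct log_dirty ld;
    log_dirty_init(&ld);               /* first init: fine */
    log_dirty_enable(&ld);             /* bitmap allocated */
    log_dirty_init(&ld);               /* re-init: reference leaked */
    log_dirty_teardown(&ld);           /* frees nothing; ASSERT fires */
    return 0;
}
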
>>
>> Signed-off-by: Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>
>>
>> diff -r e8f0709af2b7 -r e22610ef339a xen/arch/x86/mm/hap/hap.c
>> --- a/xen/arch/x86/mm/hap/hap.c
>> +++ b/xen/arch/x86/mm/hap/hap.c
>> @@ -224,11 +224,18 @@ static void hap_clean_dirty_bitmap(struc
>>  void hap_logdirty_init(struct domain *d)
>>  {
>>      struct sh_dirty_vram *dirty_vram = d->arch.hvm_domain.dirty_vram;
>> -    if ( paging_mode_log_dirty(d) && dirty_vram )
>> +    if ( paging_mode_log_dirty(d) )
>>      {
>> -        paging_log_dirty_disable(d);
>> -        xfree(dirty_vram);
>> -        dirty_vram = d->arch.hvm_domain.dirty_vram = NULL;
>> +        if ( dirty_vram )
>> +        {
>> +            paging_log_dirty_disable(d);
>> +            xfree(dirty_vram);
>> +            dirty_vram = d->arch.hvm_domain.dirty_vram = NULL;
>> +        } else {
>> +            /* We still need to clean up the bitmap, because
>> +             * init below will overwrite it */
>> +            paging_free_log_dirty_bitmap(d);
>
> That's not safe - in the particular case you're worried about there are
> concurrent users of this bitmap.  I think the only sane way to deal with
> this is to make sure the initialization of log_dirty.top to INVALID_MFN
> happens once at start of day, and remove it from the functions that get
> called multiple times.
>
> Something like this:
>
> diff -r f71ff2f7b604 xen/arch/x86/mm/paging.c
> --- a/xen/arch/x86/mm/paging.c        Thu Nov 24 15:20:57 2011 +0000
> +++ b/xen/arch/x86/mm/paging.c        Thu Nov 24 16:05:46 2011 +0000
> @@ -595,7 +595,6 @@ void paging_log_dirty_init(struct domain
>      d->arch.paging.log_dirty.enable_log_dirty = enable_log_dirty;
>      d->arch.paging.log_dirty.disable_log_dirty = disable_log_dirty;
>      d->arch.paging.log_dirty.clean_dirty_bitmap = clean_dirty_bitmap;
> -    d->arch.paging.log_dirty.top = _mfn(INVALID_MFN);
>  }
>
>  /* This function frees log dirty bitmap resources. */
> @@ -617,6 +616,11 @@ int paging_domain_init(struct domain *d,
>
>      mm_lock_init(&d->arch.paging.lock);
>
> +    /* This must be initialized separately from the rest of the
> +     * log-dirty init code as that can be called more than once and we
> +     * don't want to leak any active log-dirty bitmaps */
> +    d->arch.paging.log_dirty.top = _mfn(INVALID_MFN);
> +
>      /* The order of the *_init calls below is important, as the later
>       * ones may rewrite some common fields.  Shadow pagetables are the
>       * default... */
>
> Does that make sense to you?
>
> Tim.
>
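
For reference, here is a small, self-contained C sketch of the
initialization split Tim proposes (simplified stand-in types and
function names, not the actual Xen code): the once-per-domain init owns
the log_dirty.top reset, while the re-runnable log-dirty init only
installs handlers and so can no longer orphan a live bitmap.

/* init_order_sketch.c: simplified model of the fix.  The domain init
 * runs once and owns the .top reset; the log-dirty init may run many
 * times (e.g. when handlers are switched) and leaves .top alone. */
#include <assert.h>
#include <stdio.h>

struct page { unsigned long dirty_bits; };

struct log_dirty {
    struct page *top;                  /* stand-in for the top MFN */
    void (*clean_dirty_bitmap)(void);
};
#define INVALID_TOP NULL               /* stand-in for INVALID_MFN */

struct domain {
    struct log_dirty log_dirty;
};

static void hap_clean_dirty_bitmap(void) { }

/* Called once, at domain creation: the only place .top is reset. */
static void sketch_domain_init(struct domain *d)
{
    d->log_dirty.top = INVALID_TOP;
}

/* May be called repeatedly; after the fix it only installs the
 * handler and must not touch .top. */
static void sketch_log_dirty_init(struct domain *d,
                                  void (*clean)(void))
{
    d->log_dirty.clean_dirty_bitmap = clean;
    /* removed: d->log_dirty.top = INVALID_TOP; */
}

int main(void)
{
    static struct page bitmap_page;    /* pretend an allocated bitmap */
    struct domain d;

    sketch_domain_init(&d);
    sketch_log_dirty_init(&d, hap_clean_dirty_bitmap);

    d.log_dirty.top = &bitmap_page;    /* log-dirty mode active */
    sketch_log_dirty_init(&d, hap_clean_dirty_bitmap);  /* re-init */

    assert(d.log_dirty.top == &bitmap_page);  /* reference preserved */
    puts("live bitmap survives repeated log-dirty init");
    return 0;
}

Compiled and run, the final assert passes: the live bitmap reference
survives a repeated init, which is what moving the reset into
paging_domain_init buys.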


