
Re: [Xen-devel] [PATCH v5 2/4] xen/shadow: fix shadow_track_dirty_vram to work on hvm guests



On 08/05/15 at 17:34, Andrew Cooper wrote:
> On 08/05/15 16:28, Jan Beulich wrote:
>>>>> On 08.05.15 at 16:34, <roger.pau@xxxxxxxxxx> wrote:
>>> @@ -3668,21 +3671,19 @@ int shadow_track_dirty_vram(struct domain *d,
>>>          if ( map_sl1p )
>>>              sh_unmap_domain_page(map_sl1p);
>>>  
>>> -        rc = -EFAULT;
>>> -        if ( copy_to_guest(dirty_bitmap, dirty_vram->dirty_bitmap, 
>>> dirty_size) == 0 ) {
>>> -            memset(dirty_vram->dirty_bitmap, 0, dirty_size);
>>> -            if (dirty_vram->last_dirty + SECONDS(2) < NOW())
>>> +        memcpy(dirty_bitmap, dirty_vram->dirty_bitmap, dirty_size);
>>> +        memset(dirty_vram->dirty_bitmap, 0, dirty_size);
>> This is certainly a behavioral change; I'm only uncertain whether it's
>> acceptable. Previously the memset() was done only when the copying
>> to guest memory succeeded, while now it happens unconditionally.
> 
> On the one hand, if the toolstack logdirty buffer suffers an EFAULT,
> most bets are probably off.
> 
> However, it would be better if Xen didn't then clobber the dirty bitmap, in
> case the toolstack's kernel is doing some particularly funky memory
> management which would succeed on a retry.

A possible workaround would be to acquire the paging_lock again if
copy_to_guest fails and set the dirty_bitmap to 0xff, although it's not
very elegant. Alternatively, OR dirty_bitmap back into
dirty_vram->dirty_bitmap.
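
For illustration, a rough, untested sketch of the second option could
look like the below. The guest handle name (guest_dirty_bitmap) and the
exact placement are assumptions based on the quoted hunk, not the
actual patch:

    /*
     * Untested sketch of the "OR" variant: if the copy to guest memory
     * fails, re-acquire the paging lock and OR the snapshot back into
     * dirty_vram->dirty_bitmap, so a retry still sees those pages as
     * dirty instead of losing them.  Names are illustrative only.
     */
    if ( copy_to_guest(guest_dirty_bitmap, dirty_bitmap, dirty_size) != 0 )
    {
        unsigned int i;

        paging_lock(d);
        for ( i = 0; i < dirty_size; i++ )
            dirty_vram->dirty_bitmap[i] |= dirty_bitmap[i];
        paging_unlock(d);

        rc = -EFAULT;
    }

That way the dirty information isn't lost if the toolstack can fix up
the fault and retry.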

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
