
Re: [Xen-devel] [PATCH RFC v2 3/3] xen: rework paging_log_dirty_op to work with hvm guests



Hi,

At 12:26 +0200 on 02 Apr (1427977595), Roger Pau Monne wrote:
> +static inline void *map_dirty_bitmap(XEN_GUEST_HANDLE_64(uint8) dirty_bitmap,
> +                                     unsigned long pages,
> +                                     struct page_info **page,
> +                                     unsigned long *mapped_page)
> +{
> +    p2m_type_t p2mt;
> +    uint32_t pfec;
> +    unsigned long gfn;
> +
> +    gfn = paging_gva_to_gfn(current,
> +                            (paddr_t)(((char *)dirty_bitmap.p) + (pages >> 3)),
> +                            &pfec);
> +    if ( gfn == INVALID_GFN )
> +        return NULL;
> +
> +    *page = get_page_from_gfn(current->domain, gfn, &p2mt, P2M_UNSHARE);
> +
> +    if ( p2m_is_paging(p2mt) )
> +    {
> +        put_page(*page);
> +        p2m_mem_paging_populate(current->domain, gfn);
> +        return NULL;
> +    }
> +    if ( p2m_is_shared(p2mt) )
> +    {
> +        put_page(*page);
> +        return NULL;
> +    }
> +    if ( p2m_is_grant(p2mt) )
> +    {
> +        put_page(*page);
> +        return NULL;
> +    }
> +
> +    *mapped_page = pages;
> +    return __map_domain_page(*page);
> +}

It's a shame to have to copy this logic from __hvm_copy() but I can't
see an easy way of avoiding it.  We might want to wrap it up in case
some other path wants to do the same thing?
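
Purely as a sketch, with a made-up name, the shared piece might be
declared along these lines:

    /* Hypothetical helper, not existing code: translate a guest virtual
     * address, take a writable page reference and reject paged/shared/
     * grant/non-RAM mappings, so that __hvm_copy() and map_dirty_bitmap()
     * could both use it. */
    static struct page_info *get_writable_guest_page(struct vcpu *v,
                                                     unsigned long gva,
                                                     uint32_t *pfec);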

But since we are doing these checks, we also need to check that the
destination buffer isn't supposed to be read-only -- __hvm_copy()
uses p2m_is_discard_write() for that.  It would probably be a good
idea to add a p2m_is_ram() check as well, just to avoid having to think
about the MMIO cases.

Also, I think the _is_shared() and _is_grant() clauses should be folded
together, since the action taken is the same.
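
Putting those two points together, the checks might collapse to something
like this (only a sketch, using the existing p2m_is_* predicates; whether
a discard-write destination should fail or be silently ignored is a
separate question):

    /* Fold the shared/grant cases together and refuse anything that
     * isn't plain, writable RAM. */
    if ( p2m_is_shared(p2mt) || p2m_is_grant(p2mt) ||
         p2m_is_discard_write(p2mt) || !p2m_is_ram(p2mt) )
    {
        put_page(*page);
        return NULL;
    }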

> @@ -471,11 +534,25 @@ static int paging_log_dirty_op(struct domain *d,
>                      bytes = (unsigned int)((sc->pages - pages + 7) >> 3);
>                  if ( likely(peek) )
>                  {
> -                    if ( (l1 ? copy_to_guest_offset(sc->dirty_bitmap,
> -                                                    pages >> 3, (uint8_t *)l1,
> -                                                    bytes)
> -                             : clear_guest_offset(sc->dirty_bitmap,
> -                                                  pages >> 3, bytes)) != 0 )
> +                    if ( (pages >> 3) >= (index_mapped >> 3) + 4096 ) {

This is a bit odd.  Maybe s/4096/PAGE_SIZE/ ?
Or test (pages >> (3 + PAGE_SHIFT)) != (index_mapped >> (3 + PAGE_SHIFT)) ?
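
I.e. something like this (a sketch of the second option):

    /* Remap only when the byte offset (pages >> 3) has crossed into a
     * different guest page of the bitmap than the one currently mapped;
     * then unmap/remap as in the hunk below. */
    if ( (pages >> (3 + PAGE_SHIFT)) != (index_mapped >> (3 + PAGE_SHIFT)) )
    {
        ...
    }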

> +                        /* We need to map next page */
> +                        paging_unlock(d);

> +                        unmap_dirty_bitmap(dirty_bitmap, page);
> +                        dirty_bitmap = map_dirty_bitmap(sc->dirty_bitmap, pages,
> +                                                        &page, &index_mapped);
> +                        paging_lock(d);
> +                        if ( dirty_bitmap == NULL )
> +                        {
> +                            rv = -EFAULT;
> +                            goto out;
> +                        }
> +                        goto again;
> +                    }
> +                    BUG_ON(((pages >> 3) % PAGE_SIZE) + bytes > PAGE_SIZE);
> +                    if ( (l1 ? memcpy(dirty_bitmap + ((pages >> 3) % PAGE_SIZE),
> +                                      (uint8_t *)l1, bytes)
> +                             : memset(dirty_bitmap + ((pages >> 3) % PAGE_SIZE),
> +                                      0, bytes)) == NULL )

Like in the previous patch, memcpy() and memset() never return errors,
so there's no need to check them here.
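
So (just as a sketch) the copy can simply be:

    if ( l1 )
        memcpy(dirty_bitmap + ((pages >> 3) % PAGE_SIZE),
               (uint8_t *)l1, bytes);
    else
        memset(dirty_bitmap + ((pages >> 3) % PAGE_SIZE), 0, bytes);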

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
