
Re: [Xen-devel] [PATCH 2/5] xen mapcache: Check if a memory space has moved.



On Thu, 24 Nov 2011, Anthony PERARD wrote:
> This patch changes the xen_map_cache behavior. Before trying to map a guest
> address, the mapcache will look into the list of address ranges that have
> been moved (physmap/set_memory). There is currently one memory space like
> this, the vram, "moved" from where it is allocated to where the guest will
> look for it.
> 
> This helps to have a successful migration.
> 
> Signed-off-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>
> ---
>  xen-all.c      |   19 +++++++++++++++++++
>  xen-mapcache.c |    8 ++++++--
>  xen-mapcache.h |    1 +
>  3 files changed, 26 insertions(+), 2 deletions(-)
> 
> diff --git a/xen-all.c b/xen-all.c
> index b5e28ab..40e8869 100644
> --- a/xen-all.c
> +++ b/xen-all.c
> @@ -83,6 +83,8 @@ typedef struct XenIOState {
>      Notifier exit;
>  } XenIOState;
>  
> +XenIOState *xen_io_state = NULL;
> +
>  /* Xen specific function for piix pci */
>  
>  int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num)
> @@ -218,6 +220,22 @@ static XenPhysmap *get_physmapping(XenIOState *state,
>      return NULL;
>  }
>  
> +target_phys_addr_t xen_addr_actually_is(target_phys_addr_t start_addr, ram_addr_t size)

I really don't like the name of this function. What about 
xen_phys_offset_to_gaddr?


> +{
> +    if (xen_io_state) {
> +        target_phys_addr_t addr = start_addr & TARGET_PAGE_MASK;
> +        XenPhysmap *physmap = NULL;
> +
> +        QLIST_FOREACH(physmap, &xen_io_state->physmap, list) {
> +            if (range_covers_byte(physmap->phys_offset, physmap->size, addr)) {
> +                return physmap->start_addr;
> +            }
> +        }
> +    }
> +
> +    return start_addr;
> +}
> +
>  #if CONFIG_XEN_CTRL_INTERFACE_VERSION >= 340
>  static int xen_add_to_physmap(XenIOState *state,
>                                target_phys_addr_t start_addr,
> @@ -891,6 +909,7 @@ int xen_hvm_init(void)
>      XenIOState *state;
>  
>      state = g_malloc0(sizeof (XenIOState));
> +    xen_io_state = state;

We should pass a XenIOState pointer as an opaque pointer and
xen_phys_offset_to_gaddr as a function pointer to xen_map_cache_init,
rather than setting this global variable.
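Something like the following, just as a sketch (the typedef name, the size
argument and the exact signature of xen_map_cache_init are only illustrative,
not a final interface):

    /* translation callback handed to the mapcache at init time,
     * instead of relying on the xen_io_state global */
    typedef target_phys_addr_t (*phys_offset_to_gaddr_t)(target_phys_addr_t start_addr,
                                                         ram_addr_t size,
                                                         void *opaque);

    void xen_map_cache_init(phys_offset_to_gaddr_t f, void *opaque);

    /* in xen_hvm_init(), after allocating state: */
    xen_map_cache_init(xen_phys_offset_to_gaddr, state);

Then xen_map_cache() can call the stored callback (when non-NULL) at the point
where this patch calls xen_addr_actually_is().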


>      state->xce_handle = xen_xc_evtchn_open(NULL, 0);
>      if (state->xce_handle == XC_HANDLER_INITIAL_VALUE) {
> diff --git a/xen-mapcache.c b/xen-mapcache.c
> index 7bcb86e..73927ab 100644
> --- a/xen-mapcache.c
> +++ b/xen-mapcache.c
> @@ -191,10 +191,14 @@ uint8_t *xen_map_cache(target_phys_addr_t phys_addr, target_phys_addr_t size,
>                         uint8_t lock)
>  {
>      MapCacheEntry *entry, *pentry = NULL;
> -    target_phys_addr_t address_index  = phys_addr >> MCACHE_BUCKET_SHIFT;
> -    target_phys_addr_t address_offset = phys_addr & (MCACHE_BUCKET_SIZE - 1);
> +    target_phys_addr_t address_index;
> +    target_phys_addr_t address_offset;
>      target_phys_addr_t __size = size;
>  
> +    phys_addr = xen_addr_actually_is(phys_addr, size);
> +    address_index  = phys_addr >> MCACHE_BUCKET_SHIFT;
> +    address_offset = phys_addr & (MCACHE_BUCKET_SIZE - 1);
> +
>      trace_xen_map_cache(phys_addr);
>  
>      if (address_index == mapcache->last_address_index && !lock && !__size) {
> diff --git a/xen-mapcache.h b/xen-mapcache.h
> index da874ca..5e561c5 100644
> --- a/xen-mapcache.h
> +++ b/xen-mapcache.h
> @@ -19,6 +19,7 @@ uint8_t *xen_map_cache(target_phys_addr_t phys_addr, target_phys_addr_t size,
>  ram_addr_t xen_ram_addr_from_mapcache(void *ptr);
>  void xen_invalidate_map_cache_entry(uint8_t *buffer);
>  void xen_invalidate_map_cache(void);
> +target_phys_addr_t xen_addr_actually_is(target_phys_addr_t start_addr, ram_addr_t size);
>  
>  #else
>  
> -- 
> Anthony PERARD
> 
