
Re: [Xen-devel] [RFC v3 1/6] xen/arm: Add basic save/restore support for ARM



On Thu, 2014-05-08 at 23:11 +0100, Andrew Cooper wrote:
> > diff --git a/xen/include/public/arch-arm/hvm/save.h b/xen/include/public/arch-arm/hvm/save.h
> > index 75b8e65..8312e7b 100644
> > --- a/xen/include/public/arch-arm/hvm/save.h
> > +++ b/xen/include/public/arch-arm/hvm/save.h
> > @@ -3,6 +3,7 @@
> >   * be saved along with the domain's memory and device-model state.
> >   *
> >   * Copyright (c) 2012 Citrix Systems Ltd.
> > + * Copyright (c) 2014 Samsung Electronics.
> >   *
> >   * Permission is hereby granted, free of charge, to any person obtaining a copy
> >   * of this software and associated documentation files (the "Software"), to
> > @@ -26,6 +27,24 @@
> >  #ifndef __XEN_PUBLIC_HVM_SAVE_ARM_H__
> >  #define __XEN_PUBLIC_HVM_SAVE_ARM_H__
> >  
> > +#define HVM_ARM_FILE_MAGIC   0x92385520
> > +#define HVM_ARM_FILE_VERSION 0x00000001
> > +
> > +/* Note: For compilation purposes the hvm_save_header name is the same as
> > + * on x86, but the layout is different. */
> > +struct hvm_save_header
> > +{
> > +    uint32_t magic;             /* Must be HVM_ARM_FILE_MAGIC */
> > +    uint32_t version;           /* File format version */
> > +    uint32_t cpuinfo;           /* Record MIDR_EL1 info of saving machine */
> > +};
> > +DECLARE_HVM_SAVE_TYPE(HEADER, 1, struct hvm_save_header);
> > +
> > +/*
> > + * Largest type-code in use
> > + */
> > +#define HVM_SAVE_CODE_MAX 1
> > +
> >  #endif
> >  
> >  /*
> 
> Hmm - it is quite poor to have this magically named "hvm_save_header".

We frequently have arch interfaces where generic code requires the arch
code to provide particular structs, functions, etc. What is poor about
this particular instance of that pattern?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
