
Re: [Xen-devel] [PATCH]: PVH: specify xen features strings cleanly for PVH



On Mon, 21 Jan 2013 12:06:42 +0000
"Jan Beulich" <JBeulich@xxxxxxxx> wrote:

> >>> On 19.01.13 at 02:35, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
> >>> wrote:
> 
> That's not really better. What I'd like you to do is keep the common
> part common (i.e. not redundantly defined) and add the PVH-specific
> bits (with what amounts to an empty string as the non-PVH
> replacement) on top. Meaning you will likely want a mixture of
> .ascii and .asciz.

         ELFNOTE(Xen, XEN_ELFNOTE_FEATURES, .asciz
                 "!writable_page_tables|pae_pgdir_above_4gb"PVH_FEATURES_STR);

That will put a NUL character before PVH_FEATURES_STR, so we need:

         ELFNOTE(Xen, XEN_ELFNOTE_FEATURES, .ascii
                 "!writable_page_tables|pae_pgdir_above_4gb"PVH_FEATURES_STR);

This means PVH_FEATURES_STR has to be defined as:

#define PVH_FEATURES_STR \
"|writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel|hvm_callback_vector\0"

because

#define PVH_FEATURES_STR "|writable_descriptor_tables" \
                         "|auto_translated_physmap" \
                         .....

will put a NUL character after "writable_descriptor_tables". Using .ascii
above does not work with the later concatenation either.

So, I think what I proposed earlier is the cleanest. Alternately:

#ifdef CONFIG_XEN_X86_PVH

#define PVH_FEATURES_STR \
"|writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel|hvm_callback_vector\0"

#else
#define PVH_FEATURES_STR "\0"
#endif

Then:
        ELFNOTE(Xen, XEN_ELFNOTE_FEATURES, .ascii
                "!writable_page_tables|pae_pgdir_above_4gb"PVH_FEATURES_STR);


Let me know what you prefer: what's right above, or what I had originally
last week. I can't figure out any other way.

Thanks,
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

