
Re: [Xen-devel] [PATCH 00/11] Alternate p2m: support multiple copies of host p2m



At 10:41 -0700 on 25 Mar (1427280115), Ed White wrote:
> >>
> >> The second thing is how similar some of this is to nested p2m code,
> >> making me wonder whether it could share more code with that.  It's not
> >> as much duplication as I had feared, but e.g. altp2m_write_p2m_entry()
> >> is _identical_ to nestedp2m_write_p2m_entry(), (making the
> >> copyright claim at the top of the file quite dubious, BTW).
> >>
> > 
> > I did initially use nestedp2m_write_p2m_entry directly, but I knew
> > that wouldn't be acceptable! On this specific point, I would be more
> > inclined to refactor the normal write entry routine so you can call
> > it everywhere, since both the nested and alternate ones are simply
> > a copy of a part of the normal one.
> > 
> 
> I started to look into this again. What I described as the normal write
> routine is the one for HAP, and that is actually a domain-level routine
> (akin to the paging mode-specific ones), and takes a domain as its first
> parameter. The nestedp2m routine is a p2m-level routine that takes a p2m
> as its first parameter, and that's what I had copied.
> 
> However, looking at the code, the p2m-level write routine is only ever
> called from the domain-level one, but the HAP routine doesn't make such
> calls, and nestedp2m and altp2m are only available in HAP mode.
> 
> Therefore, I have dropped the altp2m write routine with no ill effects.
> I don't think the nestedp2m one can ever be called.

The write_p2m_entry() hook is called from the p2m-pt.c implementation (i.e. on
AMD SVM), so with s/HAP/EPT/g your explanation is correct.  In EPT,
the equivalent is the atomic_write_ept_entry() function.

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
