
Re: [Xen-devel] [PATCH v4] x86: add p2m_mmio_write_dm





On 12/2/2014 7:40 PM, Tim Deegan wrote:
At 15:38 +0800 on 02 Dec (1417531126), Yu, Zhang wrote:
On 12/1/2014 8:13 PM, Tim Deegan wrote:
At 11:17 +0000 on 01 Dec (1417429027), Jan Beulich wrote:
On 01.12.14 at 11:30, <tim@xxxxxxx> wrote:
During this bit of archaeology I realised that either this new type
should _not_ be made part of P2M_RO_TYPES, or, better, we need a new
class of P2M types (P2M_DISCARD_WRITE_TYPES, say) that should be used
for these paths in emulate_gva_to_mfn() and __hvm_copy(), containing
just p2m_ram_ro and p2m_grant_map_ro.

And I suppose in that latter case the new type could be made part
of P2M_RO_TYPES()?

Yes indeed, as P2M_RO_TYPES is defined as "must have the _PAGE_RW bit
clear in their PTEs".
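
For reference, these classes are just bitmasks over the p2m_type_t values in
xen/include/asm-x86/p2m.h, so putting the new type into the class is a
one-line change.  A sketch of the shape only (a cut-down enum, quoted from
memory -- the real header has many more types and different numbering):

/*
 * Illustrative sketch, not the real xen/include/asm-x86/p2m.h; names
 * match what is being discussed in this thread.
 */
typedef enum {
    p2m_ram_rw,
    p2m_ram_logdirty,
    p2m_ram_ro,
    p2m_grant_map_ro,
    p2m_ram_shared,
    p2m_mmio_write_dm,   /* new: read-only, writes are sent to the device model */
} p2m_type_t;

#define p2m_to_mask(_t) (1UL << (_t))

/* "Must have the _PAGE_RW bit clear in their PTEs" -- the new type
 * simply joins this existing class. */
#define P2M_RO_TYPES (p2m_to_mask(p2m_ram_logdirty)   \
                      | p2m_to_mask(p2m_ram_ro)       \
                      | p2m_to_mask(p2m_grant_map_ro) \
                      | p2m_to_mask(p2m_ram_shared)   \
                      | p2m_to_mask(p2m_mmio_write_dm))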


Thanks, Tim.
The following is my understanding of P2M_RO_TYPES and your comments.
I'm not sure I've got it right, so please correct me if anything is wrong:
1> P2M_RO_TYPES now carries 2 meanings: one is "the _PAGE_RW bit is clear
in the PTE"; the other is that write operations are discarded;
2> We had better define another class to carry the second meaning.

Yes, that's what I meant.
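
i.e. alongside the existing classes in p2m.h, something of this shape --
the names are just the ones suggested above, not committed code, and the
membership is the two types mentioned earlier:

/* Types for which guest writes are silently discarded rather than
 * faulted.  Sketch only, built the same way as the other classes. */
#define P2M_DISCARD_WRITE_TYPES (p2m_to_mask(p2m_ram_ro)       \
                                 | p2m_to_mask(p2m_grant_map_ro))

#define p2m_is_discard_write(_t) (p2m_to_mask(_t) & P2M_DISCARD_WRITE_TYPES)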

Answering your other questions in reverse order:

2> Are writes to p2m_grant_map_ro also supposed to be discarded? Will handling
of this type of page go into __hvm_copy()/__hvm_clear(), or should it?

I think so, yes.  At the moment we inject #GP when the guest writes to
a read-only grant, which is OK: the guest really ought to know better.
But I think we'll probably end up with neater code if we handle
read-only grants the same way as p2m_ram_ro.

Anyone else have an opinion on the right thing to do here?
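
To make the "neater code" point concrete: the fault handler would then have a
single discard path instead of a special #GP case for read-only grants.  Very
roughly -- this is a simplified sketch, not the actual
hvm_hap_nested_page_fault() code; the real function's rc/goto structure and
emulation entry point differ, and the variable names (npfec, p2mt, currd, gfn)
only approximate its conventions:

    /* Sketch: one path for all discard-write types. */
    if ( npfec.write_access && p2m_is_discard_write(p2mt) )
    {
        /*
         * p2m_ram_ro and p2m_grant_map_ro both land here: emulate the
         * faulting instruction and let the copy routines drop the store,
         * instead of injecting #GP for read-only grants as today.
         */
        put_gfn(currd, gfn);
        if ( !handle_mmio() )
            hvm_inject_hw_exception(TRAP_gp_fault, 0);
        return 1;
    }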

Also, some questions about the new p2m class, say P2M_DISCARD_WRITE_TYPES,
and the new predicate, say p2m_is_discard_write:
1> You mentioned that emulate_gva_to_mfn() and __hvm_copy() should discard
the write ops, yet I also noticed many other places using
p2m_is_readonly, or just the "p2mt == p2m_ram_ro" check (in
__hvm_copy/__hvm_clear). Among all these other places, are there any that
are also supposed to use p2m_is_discard_write?

I've just had a look through them all, and I can see exactly four
places that should be using the new p2m_is_discard_write() test:

  - emulate_gva_to_mfn() (Though in fact it's a no-op as shadow-mode
    guests never have p2m_ram_shared or p2m_ram_logdirty mappings.)
  - __hvm_copy()
  - __hvm_clear() and
  - hvm_hap_nested_page_fault() (where you should also remove the
    explicit handling of p2m_grant_map_ro below.)
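
For example, in the __hvm_copy() write-to-guest branch the current
"p2mt == p2m_ram_ro" test would just become the new predicate -- roughly
this (a simplified sketch from memory, not the exact code), with the
equivalent one-line swap in __hvm_clear():

        if ( flags & HVMCOPY_to_guest )
        {
            if ( p2m_is_discard_write(p2mt) )
            {
                /* Covers p2m_ram_ro and p2m_grant_map_ro: drop the write. */
                gdprintk(XENLOG_DEBUG,
                         "discarding write to read-only gfn %#lx\n", gfn);
            }
            else
            {
                memcpy(p, buf, count);
                paging_mark_dirty(curr->domain, page_to_mfn(page));
            }
        }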

Thank you, Tim & Jan.
To summarize all the comments:

1> a new p2m type, p2m_mmio_write_dm, is to be added;
2> the new type needs to be added to the P2M_RO_TYPES class;
3> a new p2m class, say P2M_DISCARD_WRITE_TYPES (which includes only
p2m_ram_ro and p2m_grant_map_ro), and the new predicate, say
p2m_is_discard_write, are needed in these 4 places to discard the write op;
4> and of course hvm_hap_nested_page_fault() no longer needs the special
handling for p2m_grant_map_ro;
5> coding style changes pointed out by Jan;
6> clarify the commit message.

I'll prepare the patch and thanks! :)

Yu

Looking through that turned up a few other oddities, which I'm
listing here to remind myself to look at them later (i.e. you don't
need to worry about them for this patch):

  - nsvm_get_nvmcb_page() and nestedhap_walk_L0_p2m() need to handle
    p2m_ram_logdirty or they might spuriously fail during live
    migration.
  - __hvm_copy() and __hvm_clear() are probably over-strict in their
    failure to handle grant types.
  - P2M_UNMAP_TYPES in vmce.c is a mess.  It's not the right place to
    define this, since it definitely won't be seen by anyone
    adding a new type, and it already has an 'XXX' comment that says
    it doesn't cover a lot of cases. :(

I'll have a look at those another time.

Cheers,

Tim.


