Re: [Xen-devel] [PATCH v4 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server.
>>> On 16.06.16 at 11:30, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
> On 6/15/2016 6:21 PM, Jan Beulich wrote:
>>>>> On 15.06.16 at 11:50, <george.dunlap@xxxxxxxxxx> wrote:
>>> On 14/06/16 14:31, Jan Beulich wrote:
>>>>>>> On 14.06.16 at 15:13, <george.dunlap@xxxxxxxxxx> wrote:
>>>>> On 14/06/16 11:45, Jan Beulich wrote:
>>>>>> Locking is somewhat strange here: You protect against the "set"
>>>>>> counterpart altering state while you retrieve it, but you don't
>>>>>> protect against the returned data becoming stale by the time
>>>>>> the caller can consume it. Is that not a problem? (The most
>>>>>> concerning case would seem to be a race of hvmop_set_mem_type()
>>>>>> with de-registration of the type.)
>>>>> How is that different than calling set_mem_type() first, and then
>>>>> de-registering without first unmapping all the types?
>>>> Didn't we all agree this is something that should be disallowed
>>>> anyway (not that I've seen this implemented, i.e. just being
>>>> reminded of it by your reply)?
>>> I think I suggested it as a good idea, but Paul and Yang both thought it
>>> wasn't necessary. Do you think it should be a requirement?
>> I think things shouldn't be left in a half-adjusted state.
>>
>>> We could have the de-registering operation fail in those circumstances;
>>> but probably a more robust thing to do would be to have Xen go change
>>> all the ioreq_server entries back to ram_rw (since if the caller just
>>> ignores the failure, things are in an even worse state).
>> If that's reasonable to do without undue delay (e.g. by using
>> the usual "recalculate everything" forced to trickle down through
>> the page table levels), then that's as good.
>
> Thanks for your advice, Jan & George.
>
> Previously, in the 2nd version, I used p2m_change_entry_type_global() to
> reset the outstanding p2m_ioreq_server entries back to p2m_ram_rw
> asynchronously after the de-registration. But we realized later that this
> approach means we cannot support live migration. And recalculating the
> whole p2m table forcefully when de-registration happens costs too much.
>
> Further discussion with Paul concluded that we can leave the responsibility
> for resetting the p2m type to the device model side; even if a device model
> fails to do so, only the current VM is affected, and neither other VMs nor
> the hypervisor will get hurt.
>
> I thought we had reached agreement in the review process of version 2,
> so I removed this part from version 3.

In which case I would appreciate the commit message explaining this
(in particular I admit I don't recall why live migration would be affected
by the p2m_change_entry_type_global() approach, but the request is also
so that later readers have at least some source of information other than
searching the mailing list).

> Hah, I guess these 2 #defines are just cloned from similar ones, and I
> did not expect they would receive so many comments. Anyway, I admire
> your preciseness and thanks for pointing this out. :)
>
> Since the bit number #defines have no special meaning, I'd like to just
> define the flags directly:
>
> #define HVMOP_IOREQ_MEM_ACCESS_READ (1u << 0)
> #define HVMOP_IOREQ_MEM_ACCESS_WRITE (1u << 1)

XEN_HVMOP_* then please.

Jan
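For concreteness, this is what the quoted flag definitions would look like with the XEN_HVMOP_* prefix Jan asks for; the values are the ones from Yu's proposal, only the names differ:

    /* Renamed per the XEN_HVMOP_* naming request; values as proposed. */
    #define XEN_HVMOP_IOREQ_MEM_ACCESS_READ  (1u << 0)
    #define XEN_HVMOP_IOREQ_MEM_ACCESS_WRITE (1u << 1)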
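Earlier in the thread, the alternative of having Xen itself flip any remaining p2m_ioreq_server entries back to p2m_ram_rw on de-registration is discussed. A minimal sketch of that idea follows, assuming p2m_change_entry_type_global() keeps its usual (domain, old type, new type) signature; the helper name is hypothetical, and this illustrates the approach under discussion rather than code from the patch series:

    #include <xen/sched.h>
    #include <asm/p2m.h>

    /*
     * Hypothetical helper, for illustration only: on ioreq server
     * de-registration, flip every outstanding p2m_ioreq_server entry
     * back to p2m_ram_rw.  The type change is propagated lazily via the
     * usual "recalculate everything" mechanism trickling down through
     * the p2m page table levels, so the call itself stays cheap.
     */
    static void ioreq_server_reset_mem_type(struct domain *d)
    {
        p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
    }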