Re: [Xen-devel] vMCE vs migration
>>> On 26.01.12 at 17:54, George Dunlap <george.dunlap@xxxxxxxxxx> wrote:
> On Tue, 2012-01-24 at 11:08 +0000, Jan Beulich wrote:
>> >>> On 24.01.12 at 11:29, George Dunlap <George.Dunlap@xxxxxxxxxxxxx> wrote:
>> > On Mon, Jan 23, 2012 at 11:08 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>> >> x86's vMCE implementation lets a guest know of as many MCE reporting
>> >> banks as there are in the host. While a PV guest could be expected to
>> >> deal with this number changing (particularly decreasing) during
>> >> migration (not currently handled anywhere afaict), for HVM guests this
>> >> is certainly wrong.
>> >>
>> >> It isn't clear to me, however, how to properly handle this. The
>> >> easiest would appear to be to save and restore the number of banks
>> >> the guest was made to believe it can access, making vmce_{rd,wr}msr()
>> >> silently tolerate accesses to banks between the host and guest values.
>> >
>> > We ran into this in the XS 6.0 release as well. I think the ideal
>> > thing to do would be to have a parameter that can be set at boot,
>> > saying how many vMCE banks a guest has, defaulting to the number of
>> > MCE banks on the host. This parameter would be preserved across
>> > migration. Ideally, a pool-aware toolstack like xapi would then set
>> > this value to that of the host in the pool with the largest number of
>> > banks, allowing a guest to access all the banks on any host to which
>> > it migrates.
>> >
>> > What do you think?
>>
>> That sounds like the way to go.
>
> So should we put this on IanC's to-do-be-done list?

It would certainly be desirable to get this fixed.

> Are you going to put it on your to-do list? :-)

I'll first check with Intel whether the engineers who wrote that code
could make an attempt.

Jan
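
[Editorial note: to make the handling discussed above concrete, here is a
minimal, self-contained C sketch. It is not the actual Xen code; the names
guest_nr_banks, host_nr_banks and vmce_rdmsr_sketch are hypothetical, and
only MSR_IA32_MC0_CTL (0x400) and the four-MSRs-per-bank layout reflect the
real architecture. The idea: the bank count the guest was shown is saved
and restored with the guest, and reads of banks that exist for the guest
but not on the current host are silently satisfied with zero.]

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>

    /*
     * Hypothetical sketch, not the actual Xen implementation.
     * guest_nr_banks is the per-guest value saved/restored across
     * migration; host_nr_banks is whatever the current host reports.
     * Bank i's four MSRs (CTL/STATUS/ADDR/MISC) live at 0x400 + 4*i.
     */
    #define MSR_IA32_MC0_CTL 0x400u

    static unsigned int host_nr_banks  = 6;  /* banks on the host we migrated to */
    static unsigned int guest_nr_banks = 8;  /* banks the guest was originally shown */

    static int vmce_rdmsr_sketch(uint32_t msr, uint64_t *val)
    {
        unsigned int bank = (msr - MSR_IA32_MC0_CTL) / 4;  /* assumes msr >= 0x400 */

        if (bank >= guest_nr_banks)
            return -EINVAL;      /* never visible to the guest: fault as before */

        if (bank >= host_nr_banks) {
            *val = 0;            /* visible on the source host only: read as zero */
            return 0;
        }

        *val = 0;                /* placeholder for the real per-bank state lookup */
        return 0;
    }

    int main(void)
    {
        uint64_t val;
        /* Bank 7 was visible before migration but does not exist on this host. */
        int rc = vmce_rdmsr_sketch(MSR_IA32_MC0_CTL + 7 * 4, &val);
        printf("rc=%d val=%llu\n", rc, (unsigned long long)val);
        return 0;
    }

Reading such banks as zero is what a bank with no logged error would report
anyway, so a guest's MCE driver keeps working after migrating to a host with
fewer banks; writes to the same range would simply be discarded. The
per-guest guest_nr_banks value is also where the boot-time parameter George
suggests would plug in, with a pool-aware toolstack such as xapi setting it
to the largest bank count found in the pool.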