
Re: [Xen-devel] [RFC Design Doc] Add vNVDIMM support for Xen



>>> On 17.02.16 at 10:01, <haozhong.zhang@xxxxxxxxx> wrote:
> On 02/15/16 04:07, Jan Beulich wrote:
>> >>> On 15.02.16 at 09:43, <haozhong.zhang@xxxxxxxxx> wrote:
>> > On 02/03/16 03:15, Konrad Rzeszutek Wilk wrote:
>> >> >  Similarly to that in KVM/QEMU, enabling vNVDIMM in Xen is composed of
>> >> >  three parts:
>> >> >  (1) Guest clwb/clflushopt/pcommit enabling,
>> >> >  (2) Memory mapping, and
>> >> >  (3) Guest ACPI emulation.
>> >> 
>> >> 
>> >> .. MCE? and vMCE?
>> >> 
>> > 
>> > NVDIMM can generate UCR errors just like normal RAM. Xen may handle
>> > them in a way similar to what mc_memerr_dhandler() does, with some
>> > differences in the data structure and the broken-page offlining parts:
>> > 
>> > Broken NVDIMM pages should be marked as "offlined" so that the Xen
>> > hypervisor can refuse further requests to map them into a DomU.
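
To illustrate the "refuse further requests" part, here is a minimal sketch in
plain C; struct pmem_page, nvdimm_page_is_broken() and nvdimm_map_check() are
hypothetical names invented for illustration, not existing Xen interfaces, and
the real check would sit in the p2m/mapping path:

#include <errno.h>
#include <stdbool.h>

/* Hypothetical record for one NVDIMM page. */
struct pmem_page {
    unsigned long flags;
};
#define PMEM_PAGE_BROKEN  (1UL << 0)   /* set when the page is "offlined" after a UCR error */

static bool nvdimm_page_is_broken(const struct pmem_page *pg)
{
    return pg->flags & PMEM_PAGE_BROKEN;
}

/* Hypothetical hook called before an NVDIMM page is mapped into a DomU;
 * returns 0 to allow the mapping, -EINVAL to refuse it. */
static int nvdimm_map_check(const struct pmem_page *pg)
{
    return nvdimm_page_is_broken(pg) ? -EINVAL : 0;
}
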
>> > 
>> > The real problem here is which data structure to use to record
>> > information about NVDIMM pages. Because an NVDIMM is usually much
>> > larger than normal RAM, using struct page_info for NVDIMM pages would
>> > consume too much memory.
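
To make "too much memory" concrete, a back-of-the-envelope calculation,
assuming 4 KiB pages and roughly 32 bytes per struct page_info (the usual
x86-64 figure; adjust if the real size differs):

#include <stdio.h>

/* Overhead of tracking a 1 TiB NVDIMM with one struct page_info per page. */
int main(void)
{
    const unsigned long long nvdimm_bytes = 1ULL << 40;   /* 1 TiB NVDIMM */
    const unsigned long long page_size    = 4096;         /* 4 KiB pages */
    const unsigned long long page_info_sz = 32;           /* assumed sizeof(struct page_info) */

    unsigned long long pages    = nvdimm_bytes / page_size;
    unsigned long long overhead = pages * page_info_sz;

    printf("%llu pages, %llu MiB of struct page_info (%.2f%% of the NVDIMM)\n",
           pages, overhead >> 20,
           100.0 * (double)overhead / (double)nvdimm_bytes);
    return 0;
}
/* Prints: 268435456 pages, 8192 MiB of struct page_info (0.78% of the NVDIMM) */
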
>> 
>> I don't see how your alternative below would be less memory-hungry:
>> since guests have at least partial control of their GFN space, a
>> malicious guest could punch holes into the contiguous GFN range that
>> you appear to be thinking about, thus causing arbitrary splitting of
>> the control structure.
>>
> 
> QEMU would always use MFNs above guest normal RAM and I/O holes for
> vNVDIMM. It would attempt to search that space for a contiguous range
> large enough for the vNVDIMM devices. Is a guest able to punch holes
> in such GFN space?

See XENMAPSPACE_* and their uses.
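
For illustration, one such use is XENMEM_add_to_physmap: a guest can relocate,
e.g., its shared_info page to a GFN of its own choosing, which would punch a
hole in a range the toolstack believed to be free. A minimal Linux-guest-style
sketch (the Linux Xen headers and the HYPERVISOR_memory_op stub are assumed;
the target GFN is an arbitrary illustrative value):

#include <xen/interface/xen.h>      /* DOMID_SELF, xen_pfn_t */
#include <xen/interface/memory.h>   /* struct xen_add_to_physmap, XENMAPSPACE_* */
#include <asm/xen/hypercall.h>      /* HYPERVISOR_memory_op */

/* Map the guest's shared_info page at an arbitrary GFN.  If that GFN
 * falls inside a range reserved for vNVDIMM, the range is no longer
 * contiguous from Xen's point of view. */
static int punch_hole_at(xen_pfn_t gpfn)
{
    struct xen_add_to_physmap xatp = {
        .domid = DOMID_SELF,
        .space = XENMAPSPACE_shared_info, /* grant-table frames etc. work similarly */
        .idx   = 0,
        .gpfn  = gpfn,                    /* GFN chosen by the guest */
    };

    return HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
}
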

Jan




 

