
Re: [Xen-devel] [RFC Design Doc] Add vNVDIMM support for Xen



On Tue, Mar 01, 2016 at 06:33:32PM +0000, Ian Jackson wrote:
> Haozhong Zhang writes ("Re: [Xen-devel] [RFC Design Doc] Add vNVDIMM support 
> for Xen"):
> > On 02/18/16 21:14, Konrad Rzeszutek Wilk wrote:
> > > [someone:]
> > > > (2) For XENMAPSPACE_gmfn, _gmfn_range and _gmfn_foreign,
> > > >    (a) never map idx in them to GFNs occupied by vNVDIMM, and
> > > >    (b) never map idx corresponding to GFNs occupied by vNVDIMM
> > > 
> > > Would that mean that guest xen-blkback or xen-netback wouldn't
> > > be able to fetch data from the GFNs? As in, what if the HVM guest
> > > that has the NVDIMM also serves as a device domain - that is it
> > > has xen-blkback running to service other guests?
> > 
> > I'm not familiar with xen-blkback and xen-netback, so the following
> > statements may be wrong.
> > 
> > In my understanding, xen-blkback/-netback in a device domain maps the
> > pages from other domains into its own domain, and copies data between
> > those pages and the vNVDIMM. The access to the vNVDIMM is performed by
> > the NVDIMM driver in the device domain. At which step of this procedure
> > would xen-blkback/-netback need to map the GFNs of the vNVDIMM?
> 
> I think I agree with what you are saying.  I don't understand exactly
> what you are proposing above in XENMAPSPACE_gmfn but I don't see how
> anything about this would interfere with blkback.
> 
> blkback when talking to an nvdimm will just go through the block layer
> front door, and do a copy, I presume.

I believe you are right. The block layer, and then the fs, would do the copy.
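For the archives, the copy path being described is roughly this toy model
(all names made up, not the actual blkback code):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

enum { PAGE_SIZE = 4096 };

/* Toy model: blkback grant-maps the frontend's data page, and the
 * block layer / fs copies between that mapping and the NVDIMM-backed
 * storage. No P2M aliasing of the vNVDIMM GFNs is needed here. */
uint8_t frontend_page[PAGE_SIZE]; /* stands in for the grant mapping  */
uint8_t pmem_region[PAGE_SIZE];   /* stands in for the vNVDIMM pages  */

/* Write request: copy from the mapped grant into persistent memory. */
void blkback_write(size_t off, size_t len)
{
    memcpy(pmem_region + off, frontend_page + off, len);
}
```

So the vNVDIMM GFNs are only ever touched through the device domain's
own driver, never mapped into another domain.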
> 
> I don't see how netback comes into it at all.
> 
> But maybe I am just confused or ignorant!  Please do explain :-).

s/back/frontend/  

My fear was refcounting.

Specifically in the cases where we do not do copying. For example, you
could be sending data from the NVDIMM GFNs (scp?) to some other location
(another host?). It would go over xen-netback (in dom0), which would
then grant-map those GFNs (dom0 would).

In effect, Xen would then have two guests (dom0 and domU) pointing in
the P2M to the same GPFN. And that would run afoul of:

> > > >    (b) never map idx corresponding to GFNs occupied by vNVDIMM
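The refcounting worry in miniature (a toy refcount, not Xen's actual
struct page_info):

```c
#include <assert.h>

/* Each P2M entry that points at the frame (domU's own mapping, and
 * dom0's grant mapping) holds one reference; the frame may only be
 * reclaimed once both are gone. */
struct page_info { unsigned int count; };

void get_page(struct page_info *pg)
{
    pg->count++;
}

/* Returns 1 when the last reference is dropped (frame reclaimable). */
int put_page(struct page_info *pg)
{
    assert(pg->count > 0);
    return --pg->count == 0;
}
```

If the vNVDIMM frames don't participate in this accounting, a second
mapper is where things would go wrong.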

Granted, the XENMAPSPACE_gmfn call happens _before_ the grant mapping
is done, so perhaps this is not an issue?
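What I imagined rule (2) amounting to on the hypercall path is a
predicate like this (pure sketch; the range layout is an assumption):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t gfn_t;

/* Reject an add_to_physmap idx or target GFN that falls inside the
 * guest's vNVDIMM range [start, end] (inclusive). */
int gfn_clashes_vnvdimm(gfn_t start, gfn_t end, gfn_t gfn)
{
    return gfn >= start && gfn <= end;
}
```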

The other situation I was envisioning is one where the driver domain
has the NVDIMM passed in, as well as an SR-IOV network card, and
functions as an iSCSI target. That should work OK, as we just need the
IOMMU to have the NVDIMM GPFNs programmed in.
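i.e. roughly this, as a toy model (a bitmap standing in for the device
domain's IOMMU page tables; not real Xen code):

```c
#include <assert.h>
#include <stdint.h>

enum { MAX_GPFN = 1024 };
uint8_t iommu_map[MAX_GPFN]; /* 1 = device may DMA to this frame */

/* Program the NVDIMM GPFN range into the device domain's IOMMU. */
void iommu_map_range(uint64_t start, uint64_t end)
{
    for (uint64_t g = start; g <= end && g < MAX_GPFN; g++)
        iommu_map[g] = 1;
}

int iommu_dma_allowed(uint64_t gpfn)
{
    return gpfn < MAX_GPFN && iommu_map[gpfn];
}
```

As long as the NVDIMM frames are in those tables, the SR-IOV function
can DMA straight to/from them.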

> 
> Thanks,
> Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
