
Re: [Xen-devel] [PATCH v5 3/4] xen: implement guest_physmap_pin_range and guest_physmap_unpin_range



On Fri, 2013-09-27 at 16:23 +0100, Stefano Stabellini wrote:
> On Tue, 10 Sep 2013, Ian Campbell wrote:
> > On Mon, 2013-09-09 at 17:06 +0100, Stefano Stabellini wrote:
> > >  
> > > +static int pin_one_pte(lpae_t *ptep, void *arg, int level)
> > > +{
> > > +    lpae_t pte = *ptep;
> > > +    ASSERT(level == 3);
> > > +
> > > +    if ( pte.p2m.avail & P2M_DMA_PIN )
> > > +        return -EBUSY;
> > > +    pte.p2m.avail |= P2M_DMA_PIN;
> > > +    write_pte(ptep, pte);
> > > +    return 0;
> > > +}
> > > +
> > > +int guest_physmap_pin_range(struct domain *d,
> > > +                            xen_pfn_t gpfn,
> > > +                            unsigned int order)
> > > +{
> > > +    return p2m_walker(d, gpfn << PAGE_SHIFT, order,
> > > +                      pin_one_pte, NULL);
> > 
> > Did we not also discuss accounting and limits on the amount of memory a
> > guest can lock down?
>  
> I am thinking of allowing hardware domains (is_hardware_domain(d)) to pin
> any number of pages, while preventing any other domain from doing it.

That would work.
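
A minimal sketch of that policy, reusing the guest_physmap_pin_range()
from the patch above (the is_hardware_domain() gate is the assumption
here, matching your proposal):

    int guest_physmap_pin_range(struct domain *d,
                                xen_pfn_t gpfn,
                                unsigned int order)
    {
        /* Policy check: only the hardware domain may pin pages. */
        if ( !is_hardware_domain(d) )
            return -EPERM;

        return p2m_walker(d, gpfn << PAGE_SHIFT, order,
                          pin_one_pte, NULL);
    }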

FWIW XENMEM_exchange uses multipage_allocation_permitted, which allows
any guest to allocate 2MB contiguous chunks, and allows domains with a
rangeset (i.e. dom0 or passthrough) to allocate whichever chunk they
want (or at least lets them try).
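
Roughly, that policy boils down to something like this (a sketch of the
idea, not the exact Xen source):

    static bool_t multipage_allocation_permitted_sketch(struct domain *d,
                                                        unsigned int order)
    {
        /* Anyone may allocate up to 2MB contiguous (order 9, 4K pages). */
        if ( order <= 9 )
            return 1;
        /* Larger chunks only for domains holding an I/O memory
         * rangeset, i.e. dom0 or a passthrough domain. */
        return !rangeset_is_empty(d->iomem_caps);
    }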

I'm not sure that pinning and exchanging are analogous though; your
blanket privileged-only rule will suffice until we implement
passthrough.
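
If and when passthrough needs it, the permission check could grow a
rangeset test along the same lines as XENMEM_exchange; purely
illustrative:

    static bool_t pin_permitted(struct domain *d)
    {
        /* Hardware domain always allowed; hypothetically also allow
         * domains with an I/O memory rangeset (passthrough). */
        return is_hardware_domain(d) || !rangeset_is_empty(d->iomem_caps);
    }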

Ian.

