
Re: [Xen-devel] Domctl and physdevop for passthrough (Was: Re: Stabilising some tools only HVMOPs?)



On Mon, Feb 29, 2016 at 05:29:54AM -0700, Jan Beulich wrote:
> >>> On 29.02.16 at 13:23, <wei.liu2@xxxxxxxxxx> wrote:
> > On Tue, Feb 23, 2016 at 02:31:30PM +0000, Wei Liu wrote:
> >> On Mon, Feb 22, 2016 at 04:28:19AM -0700, Jan Beulich wrote:
> >> > >>> On 19.02.16 at 17:05, <wei.liu2@xxxxxxxxxx> wrote:
> >> > > On Wed, Feb 17, 2016 at 05:28:08PM +0000, Wei Liu wrote:
> >> > >> Hi all
> >> > >> 
> >> > >> Tools people are in the process of splitting libxenctrl into a set of
> >> > >> stable libraries. One of the proposed libraries is libxendevicemodel
> >> > >> which has a collection of APIs that can be used by device model.
> >> > >> 
> >> > >> Currently we use QEMU as reference to extract symbols and go through
> >> > >> them one by one. Along the way we discover QEMU is using some tools
> >> > >> only HVMOPs.
> >> > >> 
> >> > >> The list of tools only HVMOPs used by QEMU are:
> >> > >> 
> >> > >>   #define HVMOP_track_dirty_vram                  6
> >> > >>   #define HVMOP_modified_memory                   7
> >> > >>   #define HVMOP_set_mem_type                      8
> >> > >>   #define HVMOP_inject_msi                       16
> >> > >>   #define HVMOP_create_ioreq_server              17
> >> > >>   #define HVMOP_get_ioreq_server_info            18
> >> > >>   #define HVMOP_map_io_range_to_ioreq_server     19
> >> > >>   #define HVMOP_unmap_io_range_from_ioreq_server 20
> >> > >>   #define HVMOP_destroy_ioreq_server             21
> >> > >>   #define HVMOP_set_ioreq_server_state           22
> >> > >> 
> >> > > 
> >> > > In the process of ploughing through QEMU symbols, there are some
> >> > > domctls and physdevops used to do passthrough. To make passthrough
> >> > > APIs in libxendevicemodel we need to stabilise them as well. Can I
> >> > > use the same trick __XEN_TOOLS_STABLE__ here? If not, what would be
> >> > > the preferred way of doing this?
> >> > > 
> >> > > PASSTHRU
> >> > > `xc_domain_bind_pt_pci_irq`     `XEN_DOMCTL_bind_pt_irq`    
> >> > > `xc_domain_ioport_mapping`      `XEN_DOMCTL_ioport_mapping` 
> >> > > `xc_domain_memory_mapping`      `XEN_DOMCTL_memory_mapping` 
> >> > > `xc_domain_unbind_msi_irq`      `XEN_DOMCTL_unbind_pt_irq`  
> >> > > `xc_domain_unbind_pt_irq`       `XEN_DOMCTL_unbind_pt_irq`  
> >> > > `xc_domain_update_msi_irq`      `XEN_DOMCTL_bind_pt_irq`    
> >> > > `xc_physdev_map_pirq`           `PHYSDEVOP_map_pirq`        
> >> > > `xc_physdev_map_pirq_msi`       `PHYSDEVOP_map_pirq`        
> >> > > `xc_physdev_unmap_pirq`         `PHYSDEVOP_unmap_pirq`      
> >> > 
> >> > Mechanically I would say yes, but anything here which is also on
> >> > the XSA-77 waiver list would first need removing there (with
> >> > proper auditing and, if necessary, fixing).
> >> > 
> >> 
> >> I admit I failed to parse xsm-flask.txt and XSA-77 and their
> >> implications, so let's take a concrete example instead.
> >> 
> >> Say, now I need to stabilise XEN_DOMCTL_pin_mem_cacheattr, which is on
> >> the XSA-77 waiver list.
> > 
> > The conversation thus far has indicated that stabilising this
> > particular hypercall is a no-go.
> > 
> > The higher-order goal is actually pinning the memory cache attribute
> > for video ram. I was thinking of having a set of dedicated hypercalls
> > for video ram.
> > 
> > But then my reading of XSA-154 suggests that no untrusted entity should
> > be allowed to alter the caching attribute, so a set of restricted
> > hypercalls might not be feasible either. I would like to know if my
> > reading is correct.
> 
> Yes, your reading is mostly correct: Of course this can be
> permitted eventually, but only after having made such a model
> safe against abuse.
> 
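
For reference, the "tools only" ops quoted above sit behind the
#if defined(__XEN__) || defined(__XEN_TOOLS__) guard in the public
headers, and the concrete example under discussion is reached from a
device model through libxc. A simplified sketch of such a call (the
wrapper function is made up for illustration; the
xc_domain_pin_memory_cacheattr() call and the WC attribute for video
ram are the existing pieces):

    #include <xenctrl.h>

    /* Pin a range of guest frames as write-combining, the way a
     * device model does for emulated video ram today. This goes
     * through the unstable XEN_DOMCTL_pin_mem_cacheattr domctl. */
    static int pin_vram_wc(xc_interface *xch, uint32_t domid,
                           uint64_t first_pfn, uint64_t last_pfn)
    {
        return xc_domain_pin_memory_cacheattr(xch, domid, first_pfn,
                                              last_pfn,
                                              XEN_DOMCTL_MEM_CACHEATTR_WC);
    }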

I have read the XSA-154 patch and thought a bit about whether a
dedicated hypercall is feasible.

1. The patch for XSA-154 mentions that only MMIO mappings with
   inconsistent attributes can cause system instability.
2. The PV case is hard, but the device model library is only of
   interest to HVM domains, so PV can be ignored.
3. We want to continue honouring pinned cacheability attributes for
   HVM domains.

It seems we have a way forward. Say we have a new hypercall just for
pinning the video ram cacheability attribute.

The new hypercall has the following properties:

1. It can only be used on HVM domains.
2. It can only be used on mfns that are not in MMIO ranges, because
   vram is just normal ram.
3. It can only set the cacheability attribute to WC (used by video ram).
4. It is not considered stable.

These restrictions ensure it cannot be abused to change the
cacheability attributes of MMIO mappings on a PV guest and so make the
host unstable. The stale data issue is of no relevance, as stated in
the XSA-154 patch.
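
To make that concrete, below is a minimal sketch of what a
hypervisor-side handler enforcing properties 1-3 might look like. The
op name, argument struct and exact checks are all hypothetical;
hvm_set_mem_pinned_cacheattr() is the existing helper behind
XEN_DOMCTL_pin_mem_cacheattr.

    /* Hypothetical argument struct; nothing like this exists yet. */
    struct xen_hvm_pin_vram_cacheattr {
        domid_t  domid;      /* IN: target domain */
        uint64_t first_pfn;  /* IN: first guest frame of the vram range */
        uint64_t last_pfn;   /* IN: last guest frame (inclusive) */
    };

    static int hvmop_pin_vram_cacheattr(struct xen_hvm_pin_vram_cacheattr *a)
    {
        struct domain *d = rcu_lock_domain_by_any_id(a->domid);
        uint64_t pfn;
        int rc;

        if ( d == NULL )
            return -ESRCH;

        /* Property 1: HVM domains only, so PV MMIO mappings are
         * out of reach by construction. */
        rc = -EOPNOTSUPP;
        if ( !is_hvm_domain(d) )
            goto out;

        /* Property 2: refuse anything that is not ordinary ram.
         * (A real implementation would need preemption handling
         * for large ranges.) */
        rc = -EINVAL;
        for ( pfn = a->first_pfn; pfn <= a->last_pfn; pfn++ )
        {
            p2m_type_t t;
            struct page_info *pg = get_page_from_gfn(d, pfn, &t, P2M_ALLOC);

            if ( pg == NULL || p2m_is_mmio(t) )
            {
                if ( pg != NULL )
                    put_page(pg);
                goto out;
            }
            put_page(pg);
        }

        /* Property 3: the attribute is hardcoded to WC; the caller
         * does not get to choose. */
        rc = hvm_set_mem_pinned_cacheattr(d, a->first_pfn, a->last_pfn,
                                          XEN_DOMCTL_MEM_CACHEATTR_WC);
     out:
        rcu_unlock_domain(d);
        return rc;
    }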

Does this sound plausible?

Wei.

> Jan
> 


 

