
Re: [Xen-devel] [RFC][v2][PATCH 00/14] Fix RMRR



>>> On 28.05.15 at 07:48, <tiejun.chen@xxxxxxxxx> wrote:
> On 2015/5/22 17:46, Jan Beulich wrote:
>>>>> On 22.05.15 at 11:35, <tiejun.chen@xxxxxxxxx> wrote:
>>> As you know, all devices are initially owned by Dom0 before we create any
>>> DomU, right? Do we allow Dom0 to keep owning one device of a group while
>>> another device in the same group is assigned to a DomU?
>>
>> Clearly not, or - just like anything else putting the security of a system
>> at risk - only at explicit host admin request.
>>
> 
> You're right.
> 
> After discussing this internally, we intend to handle it in a simple way, 
> since a shared RMRR is a rare case in our experience. Furthermore, Xen has 
> no existing API to assign such a group of devices in one operation, and it 
> doesn't even identify these devices as a group, so currently we always 
> assign devices one by one, right? A proper solution would need more work 
> on group identification, atomicity, hotplug and so on. That would involve 
> the hypervisor and the tools at the same time, which makes it hard to get 
> done in time for 4.6.
> 
> So could we split this into separate phases?
> 
> #1. Phase 1, targeting 4.6
> 
> #1.1. Do a simple implementation
> 
> We simply refuse the device assignment when the device being assigned 
> shares its RMRR with other devices, like this:

Right.

> @@ -2291,6 +2291,16 @@ static int intel_iommu_assign_device(
>                PCI_BUS(bdf) == bus &&
>                PCI_DEVFN2(bdf) == devfn )
>           {
> +            if ( rmrr->scope.devices_cnt > 1 )
> +            {
> +                reassign_device_ownership(d, hardware_domain, devfn, pdev);

I think if this is really needed here, the check comes too late.

Jan
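
To make Jan's point concrete: the idea would be to test for a shared RMRR
before any ownership transfer happens, so that a refused assignment never
moves the device away from the hardware domain in the first place. Below is
only a rough sketch of that ordering against Xen's VT-d assignment path; the
identifiers for_each_rmrr_device, PCI_BUS, PCI_DEVFN2, rmrr->scope.devices_cnt
and reassign_device_ownership come from the quoted hunk, while the function
prototype, the segment check and the surrounding context are assumptions, not
taken from any particular tree.

static int intel_iommu_assign_device(
    struct domain *d, u8 devfn, struct pci_dev *pdev)
{
    struct acpi_rmrr_unit *rmrr;
    u16 bdf, seg = pdev->seg;
    u8 bus = pdev->bus;
    int i, ret;

    /*
     * Check first: if this device shares an RMRR with other devices,
     * refuse the assignment outright.  Doing this before any call to
     * reassign_device_ownership() means a rejected device never leaves
     * the hardware domain.  An explicit host-admin override, as Jan
     * suggests, could be wired in as an extra condition here.
     */
    for_each_rmrr_device( rmrr, bdf, i )
    {
        if ( rmrr->segment == seg &&
             PCI_BUS(bdf) == bus &&
             PCI_DEVFN2(bdf) == devfn &&
             rmrr->scope.devices_cnt > 1 )
            return -EPERM;
    }

    /* Only now hand the device over to the target domain. */
    ret = reassign_device_ownership(hardware_domain, d, devfn, pdev);
    if ( ret )
        return ret;

    /* ... set up this device's RMRR identity mappings for d, as before ... */

    return 0;
}

With that ordering, the hunk quoted above, which hands the device back to the
hardware domain only after the fact, would presumably no longer be needed.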




 

