
Re: [Xen-devel] [PATCH 2/2] xen/arm: Make HAS_PCI compilable on ARM by adding place-holder code



On 14/04/15 10:28, Stefano Stabellini wrote:
> On Tue, 14 Apr 2015, Jaggi, Manish wrote:
>>> diff --git a/xen/include/asm-arm/pci.h b/xen/include/asm-arm/pci.h
>>> index de13359..b8ec882 100644
>>> --- a/xen/include/asm-arm/pci.h
>>> +++ b/xen/include/asm-arm/pci.h
>>> @@ -1,7 +1,8 @@
>>> -#ifndef __X86_PCI_H__
>>> -#define __X86_PCI_H__
>>> +#ifndef __ARM_PCI_H__
>>> +#define __ARM_PCI_H__
>>>
>>>  struct arch_pci_dev {
>>> +    void *dev;
>>
>> void * is error-prone. Why can't you use the real structure?
>>
>> [manish] Will change it. I believe the dev_archdata structure also has a
>> void * (in asm-arm/device.h). So are all void * error-prone in Xen?
>>
> 
> As you know, void * works around the type system, so it prevents the
> compiler from making many type-safety checks. We should try to avoid
> it if we can.

In this case, using a pointer adds more management (allocation, freeing
and so on).

As with the device tree solution, the field should be a struct device.
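
I.e. something along these lines (a sketch only, not the actual patch;
whether the device is embedded or referenced is a design choice, but
embedding avoids the separate allocation mentioned above):

struct arch_pci_dev {
    struct device dev;  /* embedded: no extra allocation/free to manage */
};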

> I think that you are right, the void *iommu in dev_archdata should
> actually be struct arm_smmu_xen_device *iommu.

I agree that void * should be avoided in most cases where we know the
underlying type. However, there are places where the type is unknown
because the field is used to store driver-specific data.

This is actually the case for this field: it contains private data for
the IOMMU driver. When a new driver is implemented, it will likely use
a different structure.
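
To illustrate the pattern (a sketch only: to_smmu_dev() and the field
inside the structure are hypothetical, but the cast-back from the
void * in dev_archdata is the general idea):

/* Hypothetical driver-private structure; the real one lives inside
 * the SMMU driver file, not in a common header. */
struct arm_smmu_xen_device {
    unsigned int num_contexts;  /* driver-specific bookkeeping */
};

/* Internal helper: recover the private data from the generic device.
 * The implicit conversion from void * means no SMMU-specific type
 * ever appears in asm-arm/device.h. */
static struct arm_smmu_xen_device *to_smmu_dev(struct device *dev)
{
    return dev->archdata.iommu;
}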

Furthermore, the SMMU driver is self-contained in a single file; using
struct arm_smmu_xen_device * would require exporting/defining part of
the driver in a header.

So I don't think this is the right thing to do.

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
