
Re: [Xen-devel] [PATCH v9 14/19] xen: introduce xenpv bus and a dummy pvcpu device



On 05/01/14 22:52, Julien Grall wrote:
> 
> 
> On 01/02/2014 03:43 PM, Roger Pau Monne wrote:
>> Since Xen PVH guests don't have ACPI, we need to create a dummy
>> bus so top-level Xen devices can attach to it (instead of
>> attaching directly to the nexus), and a pvcpu device that will be
>> used to fill the pcpu->pc_device field.
>> ---
>>   sys/conf/files.amd64 |    1 +
>>   sys/conf/files.i386  |    1 +
>>   sys/x86/xen/xenpv.c  |  155 ++++++++++++++++++++++++++++++++++++++++++++++++++
> 
> I think it makes more sense to have 2 files: one for xenpv bus and one
> for a dummy pvcpu device. It would allow us to move xenpv bus to common
> code (sys/xen or sys/dev/xen).

Ack. I hadn't considered that other arches will probably use the xenpv
bus but not the dummy CPU device. Would you agree to leave the xenpv bus
inside x86/xen for now and move the dummy PV CPU device to dev/xen/pvcpu/?
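
Something along these lines is what I have in mind for the split-out
dev/xen/pvcpu/pvcpu.c (untested sketch only, so the file layout and
identifier names are tentative and not taken from the patch itself):

/*
 * Untested sketch of a split-out dev/xen/pvcpu/pvcpu.c.
 * The dummy device only exists to fill pcpu->pc_device.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <sys/kernel.h>
#include <sys/module.h>
#include <sys/pcpu.h>
#include <sys/smp.h>

static int
xenpvcpu_probe(device_t dev)
{

	device_set_desc(dev, "Xen PV CPU");
	return (0);
}

static int
xenpvcpu_attach(device_t dev)
{
	struct pcpu *pc;
	int cpu;

	/* The device unit number is the CPU id this dummy device stands for. */
	cpu = device_get_unit(dev);
	pc = pcpu_find(cpu);
	KASSERT(pc != NULL, ("unable to find pcpu for cpu %d", cpu));
	pc->pc_device = dev;
	return (0);
}

static device_method_t xenpvcpu_methods[] = {
	DEVMETHOD(device_probe, xenpvcpu_probe),
	DEVMETHOD(device_attach, xenpvcpu_attach),
	DEVMETHOD_END
};

static driver_t xenpvcpu_driver = {
	"pvcpu",
	xenpvcpu_methods,
	0,
};

static devclass_t xenpvcpu_devclass;

/* Hang the dummy CPU devices off the xenpv bus, which stays in x86/xen. */
DRIVER_MODULE(xenpvcpu, xenpv, xenpvcpu_driver, xenpvcpu_devclass, 0, 0);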

> 
> [..]
> 
>> +
>> +static int
>> +xenpv_probe(device_t dev)
>> +{
>> +
>> +    device_set_desc(dev, "Xen PV bus");
>> +    device_quiet(dev);
>> +    return (0);
> 
> As I understand it, returning 0 means "I can handle the current
> device", so if a device that doesn't yet have a driver is being
> probed, xenpv will claim it and we will end up with 2 (or even
> more) xenpv buses.
>
> As we only want to probe the xenpv bus once, when it was added
> manually, returning BUS_PROBE_NOWILDCARD would suit better.
> 
> [..]
> 
>> +static int
>> +xenpvcpu_probe(device_t dev)
>> +{
>> +
>> +    device_set_desc(dev, "Xen PV CPU");
>> +    return (0);
> 
> Same here: BUS_PROBE_NOWILDCARD.

Ack for both, I will change them to BUS_PROBE_NOWILDCARD. While at it,
we should also change the xenstore probe function to return
BUS_PROBE_NOWILDCARD.
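
To be clear, the resulting probe routines would simply become the
following (untested, and the xenstore probe gets the same one-line
change):

static int
xenpv_probe(device_t dev)
{

	device_set_desc(dev, "Xen PV bus");
	device_quiet(dev);
	/* Only match the device that was added explicitly, never wildcards. */
	return (BUS_PROBE_NOWILDCARD);
}

static int
xenpvcpu_probe(device_t dev)
{

	device_set_desc(dev, "Xen PV CPU");
	/* Likewise, never claim driver-less (wildcard) devices. */
	return (BUS_PROBE_NOWILDCARD);
}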


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

