
Re: [Xen-devel] [PATCH V2] ns16550: Add support for UART present in Broadcom TruManage capable NetXtreme chips



>>> On 15.11.13 at 18:51, Aravind Gopalakrishnan 
>>> <aravind.gopalakrishnan@xxxxxxx> wrote:
> On 11/15/2013 3:31 AM, Jan Beulich wrote:
>>>>> On 14.11.13 at 18:40, Aravind Gopalakrishnan 
>>>>> <Aravind.Gopalakrishnan@xxxxxxx> wrote:
>>>   static struct ns16550 {
>>> -    int baud, clock_hz, data_bits, parity, stop_bits, fifo_size, irq;
>>> +    int baud, clock_hz, data_bits, parity, stop_bits, irq;
>>>       u64 io_base;   /* I/O port or memory-mapped I/O address. */
>>> +    u32 fifo_size;
>> Is this in any way related to the main purpose of the patch here?
>> And if there is (i.e. in response to Andrew's comment on v1), then
>> I don't see why most/all of the other fields shouldn't follow suit.
> 
> I don't mind changing them all in one patch.. But -
> 
> Since it is not directly related to the main purpose of the patch, would
> it be okay if I did this in a follow-up patch and go back to using 'int'
> for now?

"Go back" is probably the wrong term - in the new code you add
you should strive to use unsigned int where possible. And in a
second patch, converting the bogus uses of plain int to unsigned
would be desirable.
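
Just to illustrate (a sketch, not meant as the actual follow-up patch;
whether every field can really become unsigned, e.g. whether irq ever
holds a negative sentinel, would need checking), the conversion I have
in mind is along these lines:

static struct ns16550 {
    /* Converted to unsigned: none of these meaningfully hold negative
     * values. */
    unsigned int baud, clock_hz, data_bits, parity, stop_bits, fifo_size;
    int irq;       /* left signed in case a negative sentinel is in use */
    u64 io_base;   /* I/O port or memory-mapped I/O address. */
    /* ... remaining fields unchanged ... */
} ns16550_com[2];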

>>>                   /* Not IO */
>>>                   if ( !(bar & PCI_BASE_ADDRESS_SPACE_IO) )
>>> -                    continue;
>>> +                {
>>> +                    vendor = pci_conf_read16(0, b, d, f, PCI_VENDOR_ID);
>>> +                    device = pci_conf_read16(0, b, d, f, PCI_DEVICE_ID);
>>> +
>>> +                    /* Check for quirks in uart_config lookup table */
>>> +                    for ( i = 0; i < ARRAY_SIZE(uart_config); i++)
>>> +                    {
>>> +                        if ( uart_config[i].vendor_id != vendor )
>>> +                            continue;
>>> +
>>> +                        if ( uart_config[i].dev_id != device )
>>> +                            continue;
>>> +
>>> +                        if ( bar_idx >= uart_config[i].max_bars )
>>> +                            break;
>> Did you not mean "continue" here?
> 
> No.. break is fine here since we only support bar0 for this specific
> device.. (By the time we reach this condition, we have already verified
> 'vendor' and 'device' from the uart_config table anyway.. it is needless
> to iterate through other devices in the table.)

Here you make assumptions on the single entry you know the list
is currently holding. But you should consider the general case,
and I don't see why there couldn't be two flavors of a device
having the same (vendor, device) pair but different max_bars.
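
To make the general case concrete, a sketch of the lookup loop with
continue (quirk handling elided, only the control flow matters here):

    for ( i = 0; i < ARRAY_SIZE(uart_config); i++ )
    {
        if ( uart_config[i].vendor_id != vendor ||
             uart_config[i].dev_id != device )
            continue;

        /* Was "break": with continue, a later table entry carrying the
         * same (vendor, device) pair but a different max_bars can still
         * match. */
        if ( bar_idx >= uart_config[i].max_bars )
            continue;

        /* ... apply this entry's quirk, then stop searching ... */
        break;
    }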

>>> +                    /* Not 8 bytes */
>>> +                    if ( (len & 0xffff) != 0xfff9 )
>>> +                        continue;
>> I think this size checking actually should also be done in the MMIO
>> case.
> 
> This check would not work for this device as the length of region is 64K.

I didn't mean for you to make the exact same check, but I do expect
you to do some similar checking.

> But if we really want to force a length check for MMIO cases as well,
> we can define another field in struct uart_config and verify that the
> length matches.. Do let me know what you make of it..

If you think an exact match check is necessary, then yes, such a
new field might be needed. But since I don't see Xen depending
on the exact size, but just on it being at least a certain size, doing
just that check would seem fine to me.
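
For example (a sketch only; UART_MMIO_MIN_SIZE is a placeholder for
however many bytes of register space Xen actually touches, not an
existing symbol), the MMIO branch could size the BAR the same way the
I/O path does and then apply an at-least check:

                    /* Size the BAR (write all ones, read back the mask,
                     * restore), then require a minimum size. */
                    pci_conf_write32(0, b, d, f,
                                     PCI_BASE_ADDRESS_0 + bar_idx*4, ~0u);
                    len = pci_conf_read32(0, b, d, f,
                                          PCI_BASE_ADDRESS_0 + bar_idx*4);
                    pci_conf_write32(0, b, d, f,
                                     PCI_BASE_ADDRESS_0 + bar_idx*4, bar);

                    size = ~(len & PCI_BASE_ADDRESS_MEM_MASK) + 1;
                    if ( size < UART_MMIO_MIN_SIZE )
                        continue;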

But then you'd need to carefully consider what the remainder of
the MMIO range is used for - if any of that could conflict with
what Xen needs to fully control the device, then you should also
hide any PAGE_SIZE chunks from all domains (including Dom0).
(Unfortunately we can't currently hide sub-page chunks in an
effective manner.)
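
Very roughly (uart->io_size below is a hypothetical field recording the
BAR size, and the right hook point, e.g. during domain construction,
would still need working out), the hiding would amount to removing the
pages backing the BAR from a domain's permitted iomem ranges:

    if ( iomem_deny_access(d, PFN_DOWN(uart->io_base),
                           PFN_DOWN(uart->io_base + uart->io_size - 1)) )
        printk(XENLOG_WARNING
               "Failed to hide ns16550 MMIO range from d%d\n",
               d->domain_id);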

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

