
Re: [Xen-devel] [PATCH v7 7/9] ioreq-server: add support for multiple servers



>>> On 19.05.14 at 14:55, <Paul.Durrant@xxxxxxxxxx> wrote:
>> From: Paul Durrant
>> > > +    list_for_each_entry ( s,
>> > > +                          &d->arch.hvm_domain.ioreq_server.list,
>> > > +                          list_entry )
>> > > +    {
>> > > +        struct rangeset *r;
>> > > +
>> > > +        if ( s == d->arch.hvm_domain.default_ioreq_server )
>> > > +            continue;
>> > > +
>> > > +        BUILD_BUG_ON(IOREQ_TYPE_PIO != HVMOP_IO_RANGE_PORT);
>> > > +        BUILD_BUG_ON(IOREQ_TYPE_COPY != HVMOP_IO_RANGE_MEMORY);
>> > > +        BUILD_BUG_ON(IOREQ_TYPE_PCI_CONFIG != HVMOP_IO_RANGE_PCI);
>> > > +        r = s->range[type];
>> > > +
>> > > +        switch ( type )
>> > > +        {
>> > > +        case IOREQ_TYPE_PIO:
>> > > +        case IOREQ_TYPE_COPY:
>> > > +            if ( rangeset_contains_singleton(r, addr) )
>> >
>> > While the PCI check below certainly can be a singleton one, I don't
>> > think you can do so here: In order for a request to be forwarded,
>> > the full spanned range should be owned by the respective server.
>> > And yes, consideration of how to deal with split accesses is going
>> > to be needed, I'm afraid.
>> 
>> Yes - that must have been lost with the change to using rangesets. I'm
>> going to punt on split ranges though, as it's a whole other layer of
>> complexity. I'll add a comment somewhere to say that unless a cycle
>> falls fully within the range of a secondary emulator it will not be
>> passed to it. That's certainly good enough for what I'm using this for
>> at the moment, and support for split ranges could be added later if
>> needed.
>> 
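
For illustration, the span-based check being discussed could look
something like this - a sketch only, assuming Xen's
rangeset_contains_range() helper and that the cycle's byte count is
to hand as "size":

    case IOREQ_TYPE_PIO:
    case IOREQ_TYPE_COPY:
        /* Claim the cycle only if the server owns every byte it
         * touches, not just the base address. */
        if ( rangeset_contains_range(r, addr, addr + size - 1) )
            return s;
        break;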
> 
> I've been looking at this and I actually think it's correct to only steer IO 
> based on the address rather than on the span of the entire access, for port 
> IO at least; I'm pretty sure h/w bases its decision to accept a cycle on a 
> decode of the base address, regardless of the cycle size - so it's possible 
> for two different pieces of h/w to have overlapping ranges.

I don't think this is even fully defined for I/O ports - I could see things
done the way you say for e.g. fields falling inside a 16-byte range, but
I could equally see accesses being split when crossing a 16-byte boundary
(where the 16 is taken as an arbitrary example) - much like special MMIO
behavior results for accesses crossing cache line or page boundaries.
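
If such splitting were done, it might look like this (hypothetical -
the 16-byte boundary is just the arbitrary example from above):

    /* Break the access into pieces that don't cross a 16-byte
     * boundary; addr/size are the cycle's base and byte count. */
    while ( size != 0 )
    {
        unsigned int chunk = 16 - (addr & 15);

        if ( chunk > size )
            chunk = size;
        /* ... forward the (addr, chunk) sub-access here ... */
        addr += chunk;
        size -= chunk;
    }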

Yet I think to be on the safe side here, accesses crossing server
boundaries should (at least initially, i.e. until we learn that certain
things need weakening to work in practice) be implemented as
restrictively as possible - either by splitting them into parts or, for
simplicity's sake, by dropping writes and returning all ones on reads.
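
A minimal sketch of that restrictive fallback (dir and data are
hypothetical names for the cycle's direction and read result; the
all-ones read value mimics an access that nothing decoded):

    /* Cycle spans a server boundary: drop writes and return all
     * ones on reads, as if no device had claimed the access. */
    if ( !rangeset_contains_range(r, addr, addr + size - 1) )
    {
        if ( dir == IOREQ_READ )
            data = ~0UL;
        return X86EMUL_OKAY;
    }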

> I also notice that Xen only passes the base address to 
> hvm_mmio_handler.check_handler() in hvm_mmio_intercept(), even though 
> requiring a fully matching span seems reasonable for MMIO transactions.

ISTR having been puzzled by this too, but having had too many other,
more important things to do to also deal with this one.

Jan

