Re: [Xen-devel] [PATCH v2] x86/HVM: avoid pointer wraparound in bufioreq handling
On Wed, 22 Jul 2015, Jan Beulich wrote:
> >>> On 21.07.15 at 18:18, <stefano.stabellini@xxxxxxxxxxxxx> wrote:
> > On Thu, 18 Jun 2015, Jan Beulich wrote:
> >> --- a/xen/arch/x86/hvm/hvm.c
> >> +++ b/xen/arch/x86/hvm/hvm.c
> >> @@ -921,7 +921,7 @@ static void hvm_ioreq_server_disable(str
> >>
> >> static int hvm_ioreq_server_init(struct hvm_ioreq_server *s, struct domain *d,
> >> domid_t domid, bool_t is_default,
> >> - bool_t handle_bufioreq, ioservid_t id)
> >> + int bufioreq_handling, ioservid_t id)
> >
> > uint8_t?
>
> Why? I'm generally against using fixed width types when you don't
> really need them. And using uint_least8_t or uint_fast8_t is neither
> an option, nor would it make the code look reasonable. Plain int is
> just fine here.
You are not just changing the integer size but also implicitly switching
from unsigned to signed. I don't think that is good coding practice.
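To illustrate the concern with a contrived sketch (not Xen code; bool_t
is modelled here as an 8-bit unsigned stand-in), a signed parameter
silently admits values that an unsigned one would bound:

    #include <stdint.h>
    #include <stdio.h>

    typedef uint8_t xbool_t;   /* stand-in for Xen's bool_t in this sketch */

    static void f_bool(xbool_t b) { printf("bool_t: %u\n", (unsigned)b); }
    static void f_int(int i)      { printf("int:    %d\n", i); }

    int main(void)
    {
        f_bool(-1);   /* converted to 255: the unsigned type bounds the value */
        f_int(-1);    /* stays -1: the signed parameter admits it unchecked */
        return 0;
    }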
> >> @@ -2568,17 +2575,29 @@ int hvm_buffered_io_send(ioreq_t *p)
> >> return 0;
> >> }
> >>
> >> - pg->buf_ioreq[pg->write_pointer % IOREQ_BUFFER_SLOT_NUM] = bp;
> >> + pg->buf_ioreq[pg->ptrs.write_pointer % IOREQ_BUFFER_SLOT_NUM] = bp;
> >>
> >> if ( qw )
> >> {
> >> bp.data = p->data >> 32;
> >> - pg->buf_ioreq[(pg->write_pointer+1) % IOREQ_BUFFER_SLOT_NUM] = bp;
> >> + pg->buf_ioreq[(pg->ptrs.write_pointer+1) % IOREQ_BUFFER_SLOT_NUM] = bp;
> >> }
> >>
> >> /* Make the ioreq_t visible /before/ write_pointer. */
> >> wmb();
> >> - pg->write_pointer += qw ? 2 : 1;
> >> + pg->ptrs.write_pointer += qw ? 2 : 1;
> >> +
> >> + /* Canonicalize read/write pointers to prevent their overflow. */
> >> + while ( s->bufioreq_atomic && qw++ < IOREQ_BUFFER_SLOT_NUM &&
> >> + pg->ptrs.read_pointer >= IOREQ_BUFFER_SLOT_NUM )
> >> + {
> >> + union bufioreq_pointers old = pg->ptrs, new;
> >> + unsigned int n = old.read_pointer / IOREQ_BUFFER_SLOT_NUM;
> >> +
> >> + new.read_pointer = old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM;
> >> + new.write_pointer = old.write_pointer - n * IOREQ_BUFFER_SLOT_NUM;
> >> + cmpxchg(&pg->ptrs.full, old.full, new.full);
> >> + }
> >
> > This is not safe: if the reader increments read_pointer (atomically)
> > after you read old.read_pointer and before cmpxchg, the increment is
> > lost.
>
> But that's what I use cmpxchg() for - if the value changes between
> the read and the cmpxchg, the latter will indicate so (for the purpose
> here simply by not updating the memory location), and the loop will go
> through another iteration.
Right, sorry, I misread cmpxchg as xchg, I really don't know why. I think
the code above works then.
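For reference, the lost-update-free retry pattern being described, as a
standalone sketch (GCC's __sync_val_compare_and_swap stands in for Xen's
cmpxchg(), and the bufioreq_atomic/qw bounds from the patch are omitted):

    #include <stdint.h>

    #define IOREQ_BUFFER_SLOT_NUM 511   /* as in Xen's public ioreq.h */

    union bufioreq_pointers {
        struct { uint32_t read_pointer, write_pointer; };
        uint64_t full;
    };

    static void canonicalize(volatile union bufioreq_pointers *ptrs)
    {
        /* Snapshot, compute, and swap; if a reader bumps read_pointer
         * between the snapshot and the cmpxchg, the cmpxchg fails and
         * the loop retries, so the reader's increment is never lost. */
        while ( ptrs->read_pointer >= IOREQ_BUFFER_SLOT_NUM )
        {
            union bufioreq_pointers old = *ptrs, new;
            unsigned int n = old.read_pointer / IOREQ_BUFFER_SLOT_NUM;

            new.read_pointer = old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM;
            new.write_pointer = old.write_pointer - n * IOREQ_BUFFER_SLOT_NUM;
            __sync_val_compare_and_swap(&ptrs->full, old.full, new.full);
        }
    }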
> > I think you need to remove the cmpxchg and subtract a multiple of
> > IOREQ_BUFFER_SLOT_NUM from read_pointer atomically here. Unfortunately
> > if we do that, we cannot update both write_pointer and read_pointer
> > together anymore. But if we decrement write_pointer atomically before
> > read_pointer, I think we should be fine if we appropriately fix the
> > reader side too:
> >
> > QEMU:
> > atomic_read(&read_pointer);
> > read write_pointer;
> > xen_rmb();
> > while (read_pointer < write_pointer) {
>
> That would cause this comparison to possibly evaluate to "false", and
> I don't think leaving this loop early can bring any good.
Actually it would be fine because right after the loop we have:
notify_via_xen_event_channel(d, s->bufioreq_evtchn);
> >
> > [work]
> >
> > atomic_add(&read_pointer, stuff);
> > read write_pointer;
> > xen_rmb();
> > }
> >
> >
> > Xen:
> > [work]
> > increment write_pointer;
> > xen_wmb();
> >
> > if (read_pointer >= IOREQ_BUFFER_SLOT_NUM) {
> > write_pointer -= IOREQ_BUFFER_SLOT_NUM;
> > xen_wmb();
> > atomic_sub(&read_pointer, IOREQ_BUFFER_SLOT_NUM);
> > }
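One possible rendering of that scheme with C11 atomics, purely as an
illustration (the struct, the _Atomic field, and the fence mapping for
xen_rmb()/xen_wmb() are assumptions, not the eventual patch):

    #include <stdatomic.h>
    #include <stdint.h>

    #define IOREQ_BUFFER_SLOT_NUM 511   /* as in Xen's public ioreq.h */

    struct ptrs {
        _Atomic uint32_t read_pointer;  /* advanced by QEMU, rewound by Xen */
        uint32_t write_pointer;         /* written by Xen only */
    };

    /* QEMU (consumer): re-read both pointers after each batch so a
     * canonicalization done concurrently by Xen is observed; an early
     * exit is recovered by the event channel notification. */
    static void qemu_side(struct ptrs *p)
    {
        uint32_t rp = atomic_load(&p->read_pointer);
        uint32_t wp = p->write_pointer;

        atomic_thread_fence(memory_order_acquire);      /* xen_rmb() */
        while ( rp < wp )
        {
            /* [work: process slots rp .. wp-1, modulo the ring size] */
            atomic_fetch_add(&p->read_pointer, wp - rp);
            rp = atomic_load(&p->read_pointer);
            wp = p->write_pointer;
            atomic_thread_fence(memory_order_acquire);  /* xen_rmb() */
        }
    }

    /* Xen (producer): publish the slot, then rewind write_pointer
     * strictly before read_pointer, so both are pulled back by one
     * ring length without losing a concurrent reader increment. */
    static void xen_side(struct ptrs *p)
    {
        /* [work: fill the next slot(s)] */
        p->write_pointer += 1;
        atomic_thread_fence(memory_order_release);      /* xen_wmb() */

        if ( atomic_load(&p->read_pointer) >= IOREQ_BUFFER_SLOT_NUM )
        {
            p->write_pointer -= IOREQ_BUFFER_SLOT_NUM;
            atomic_thread_fence(memory_order_release);  /* xen_wmb() */
            atomic_fetch_sub(&p->read_pointer, IOREQ_BUFFER_SLOT_NUM);
        }
    }

The ordering point is that the write_pointer rewind becomes visible
before the read_pointer subtraction, matching the pseudocode above.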
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel