
[Xen-devel] Ping: [PATCH v2 qemu-trad] HVM: atomically access pointers in bufioreq handling



I'd really like to see this fixed in 4.6, and it makes no sense to post the
final hypervisor patch before this one has been applied. The upstream
counterpart has been in our qemuu tree for a while now, but apparently
hasn't been pushed upstream yet.

Jan

>>> On 23.07.15 at 09:00, <JBeulich@xxxxxxxx> wrote:
> The number of slots per page being 511 (i.e. not a power of two) means
> that operation will likely be disturbed once the (32-bit) read and write
> indexes wrap past 2^32. The hypervisor side gets I/O req server creation
> extended so we can indicate that we're using suitable atomic accesses
> where needed, allowing it to atomically canonicalize both pointers once
> both have gone through at least one cycle.
> 
> The Xen side counterpart (which is not a functional prereq to this
> change, albeit the intention is for Xen to assume default servers
> always use suitable atomic accesses) went in already (commit
> b7007bc6f9).
> 
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> ---
> v2: Adjust description.
> 
> --- a/i386-dm/helper2.c
> +++ b/i386-dm/helper2.c
> @@ -493,10 +493,19 @@ static int __handle_buffered_iopage(CPUS
>  
>      memset(&req, 0x00, sizeof(req));
>  
> -    while (buffered_io_page->read_pointer !=
> -           buffered_io_page->write_pointer) {
> -        buf_req = &buffered_io_page->buf_ioreq[
> -            buffered_io_page->read_pointer % IOREQ_BUFFER_SLOT_NUM];
> +    for (;;) {
> +        uint32_t rdptr = buffered_io_page->read_pointer, wrptr;
> +
> +        xen_rmb();
> +        wrptr = buffered_io_page->write_pointer;
> +        xen_rmb();
> +        if (rdptr != buffered_io_page->read_pointer) {
> +            continue;
> +        }
> +        if (rdptr == wrptr) {
> +            break;
> +        }
> +        buf_req = &buffered_io_page->buf_ioreq[rdptr % IOREQ_BUFFER_SLOT_NUM];
>          req.size = 1UL << buf_req->size;
>          req.count = 1;
>          req.addr = buf_req->addr;
> @@ -508,15 +517,14 @@ static int __handle_buffered_iopage(CPUS
>          req.data_is_ptr = 0;
>          qw = (req.size == 8);
>          if (qw) {
> -            buf_req = &buffered_io_page->buf_ioreq[
> -                (buffered_io_page->read_pointer+1) % IOREQ_BUFFER_SLOT_NUM];
> +            buf_req = &buffered_io_page->buf_ioreq[(rdptr + 1) %
> +                                                   IOREQ_BUFFER_SLOT_NUM];
>              req.data |= ((uint64_t)buf_req->data) << 32;
>          }
>  
>          __handle_ioreq(env, &req);
>  
> -        xen_mb();
> -        buffered_io_page->read_pointer += qw ? 2 : 1;
> +        __sync_fetch_and_add(&buffered_io_page->read_pointer, qw + 1);
>      }
>  
>      return req.count;
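(The hypervisor-side canonicalization the description refers to can be
sketched roughly as follows. This is a hypothetical standalone model, not
the actual Xen code: it assumes the two 32-bit ring pointers share one
64-bit word so both can be rewritten with a single compare-and-swap, and
uses C11 atomics in place of Xen's primitives:)

```c
#include <stdatomic.h>
#include <stdint.h>

#define SLOTS 511u

/* Hypothetical layout: both 32-bit ring pointers in one 64-bit word. */
union ring_ptrs {
    struct { uint32_t read, write; } p;
    uint64_t whole;
};

/* Once both pointers have completed at least one full cycle, subtract
 * one cycle from each.  Slot numbers are preserved because the same
 * multiple of SLOTS is removed from both:
 * (x - SLOTS) % SLOTS == x % SLOTS. */
static void canonicalize(_Atomic uint64_t *ptrs)
{
    union ring_ptrs cur, upd;

    cur.whole = atomic_load(ptrs);
    while (cur.p.read >= SLOTS && cur.p.write >= SLOTS) {
        upd = cur;
        upd.p.read -= SLOTS;
        upd.p.write -= SLOTS;
        /* On failure, cur.whole is refreshed and the loop re-checks. */
        if (atomic_compare_exchange_weak(ptrs, &cur.whole, upd.whole))
            break;
    }
}
```

This only works if the guest side updates read_pointer atomically (as the
patch above arranges with __sync_fetch_and_add); a plain read-modify-write
racing with the hypervisor's CAS could lose an update.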






 

