
Re: [Xen-devel] [PATCH v6 08/16] x86/hvm: unify stdvga mmio intercept with standard mmio intercept



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 09 July 2015 09:53
> To: Paul Durrant
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; Keir (Xen.org)
> Subject: Re: [PATCH v6 08/16] x86/hvm: unify stdvga mmio intercept with
> standard mmio intercept
> 
> >>> On 03.07.15 at 18:25, <paul.durrant@xxxxxxxxxx> wrote:
> > --- a/xen/arch/x86/hvm/emulate.c
> > +++ b/xen/arch/x86/hvm/emulate.c
> > @@ -266,6 +266,15 @@ static int hvmemul_acquire_page(unsigned long gmfn, struct page_info **page)
> >          return X86EMUL_RETRY;
> >      }
> >
> > +    /* This code should not be reached if the gmfn is not RAM */
> > +    if ( p2m_is_mmio(p2mt) )
> > +    {
> > +        domain_crash(curr_d);
> > +
> > +        put_page(*page);
> > +        return X86EMUL_UNHANDLEABLE;
> > +    }
> > +
> >      return X86EMUL_OKAY;
> >  }
> 
> Does this change really belong here, not in an earlier patch?
> 

It could be, but it has always been in this patch because this is the one where I 
remove the code that handled MMIO <-> MMIO copying, so I think it is most 
illustrative to leave it here.

> > @@ -424,8 +426,22 @@ static void stdvga_mem_writeb(uint64_t addr, uint32_t val)
> >      }
> >  }
> >
> > -static void stdvga_mem_write(uint64_t addr, uint64_t data, uint64_t size)
> > +static int stdvga_mem_write(const struct hvm_io_handler *handler,
> > +                            uint64_t addr, uint32_t size,
> > +                            uint64_t data)
> >  {
> > +    struct hvm_hw_stdvga *s = &current->domain->arch.hvm_domain.stdvga;
> > +    ioreq_t p = { .type = IOREQ_TYPE_COPY,
> > +                  .addr = addr,
> > +                  .size = size,
> > +                  .count = 1,
> > +                  .dir = IOREQ_WRITE,
> > +                  .data = data,
> > +    };
> 
> Indentation.
> 
> > +    if ( !s->cache )
> > +        goto done;
> 
> Why is this not being dealt with by stdvga_mem_accept()?
> 

Because, as the big lock comment below states:

"Single-cycle write transactions are accepted even if the cache is
not active since we can assert, when in stdvga mode, that writes
to VRAM have no side effect and thus we can try to buffer them"

The code prior to this patch did this, as you pointed out in an earlier review 
where I had not preserved this behaviour.
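
To spell the resulting rules out, here is a condensed, illustrative restatement 
of the accept logic (names are as in the quoted hunks; the helper itself is a 
sketch, not the patch):

/* Condensed restatement of stdvga_mem_accept()'s decision. */
static bool stdvga_would_accept(struct hvm_hw_stdvga *s, const ioreq_t *p)
{
    /* Only claim accesses to the VGA aperture while in stdvga mode. */
    if ( !s->stdvga ||
         p->addr < VGA_MEM_BASE ||
         p->addr + p->size > VGA_MEM_BASE + VGA_MEM_SIZE )
        return false;

    /*
     * Multi-cycle writes: we cannot fail after the first cycle, so
     * reject up front and invalidate the cache.
     */
    if ( p->dir == IOREQ_WRITE && p->count > 1 )
    {
        s->cache = 0;
        return false;
    }

    /* Reads are only served while the cache is active. */
    if ( p->dir == IOREQ_READ && !s->cache )
        return false;

    /*
     * Single-cycle writes: side-effect free in stdvga mode, so we can
     * always try to buffer them, cache or no cache.
     */
    return true;
}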

> > -    if ( s->stdvga && s->cache )
> > +    if ( !s->stdvga ||
> > +         (p->addr < VGA_MEM_BASE) ||
> > +         ((p->addr + p->size) > (VGA_MEM_BASE + VGA_MEM_SIZE)) )
> > +        goto reject;
> > +
> > +    if ( p->dir == IOREQ_WRITE && p->count > 1 )
> >      {
> > -        switch ( p->type )
> > +        /*
> > +         * We cannot return X86EMUL_UNHANDLEABLE on anything other than
> > +         * the first cycle of an I/O. So, since we cannot guarantee to
> > +         * always be able to buffer writes, we have to reject any
> > +         * multi-cycle I/O and, since we are rejecting an I/O, we must
> > +         * invalidate the cache.
> > +         * Single-cycle write transactions are accepted even if the cache
> > +         * is not active since we can assert, when in stdvga mode, that
> > +         * writes to VRAM have no side effect and thus we can try to
> > +         * buffer them.
> > +         */
> > +        if ( s->cache )
> 
> In this comment, does "buffer" refer to being able to successfully
> call hvm_buffered_io_send()?

Yes.
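I.e. "buffer" means handing the cycle to the device model's buffered ioreq ring 
via hvm_buffered_io_send(), which can legitimately fail (for instance when the 
ring is full), so success cannot be guaranteed. Roughly, abbreviated from the 
patch (a sketch, not the exact code):

/* Sketch of the write side: "buffering" == hvm_buffered_io_send() succeeding. */
static int stdvga_mem_write(const struct hvm_io_handler *handler,
                            uint64_t addr, uint32_t size, uint64_t data)
{
    ioreq_t p = {
        .type  = IOREQ_TYPE_COPY,
        .addr  = addr,
        .size  = size,
        .count = 1,
        .dir   = IOREQ_WRITE,
        .data  = data,
    };

    /* ... update the internal VRAM copy (stdvga_mem_writeb()) ... */

    /* Forward the cycle to the buffered ioreq ring. */
    if ( !hvm_buffered_io_send(&p) )
        return X86EMUL_UNHANDLEABLE;

    return X86EMUL_OKAY;
}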

> To me it reads more like referring to
> the internal hypervisor buffering done here, which I can't see why
> we couldn't always guarantee to be able to do.
> 
> >          {
> > -        case IOREQ_TYPE_COPY:
> > -            buf = mmio_move(s, p);
> > -            if ( !buf )
> > -                s->cache = 0;
> > -            break;
> > -        default:
> > -            gdprintk(XENLOG_WARNING, "unsupported mmio request type:%d "
> > -                     "addr:0x%04x data:0x%04x size:%d count:%d state:%d "
> > -                     "isptr:%d dir:%d df:%d\n",
> > -                     p->type, (int)p->addr, (int)p->data, (int)p->size,
> > -                     (int)p->count, p->state,
> > -                     p->data_is_ptr, p->dir, p->df);
> > +            gdprintk(XENLOG_INFO, "leaving caching mode\n");
> >              s->cache = 0;
> >          }
> > +
> > +        goto reject;
> >      }
> > -    else
> > -    {
> > -        buf = (p->dir == IOREQ_WRITE);
> > -    }
> > +    else if ( p->dir == IOREQ_READ && !s->cache )
> > +        goto reject;
> >
> > -    rc = (buf && hvm_buffered_io_send(p));
> > +    return 1;
> 
> Perhaps worth a comment that the lock is intentionally not being
> dropped here (which seems pretty fragile, but I don't see an
> alternative).

Ok.
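
Something along these lines, ahead of the return (exact wording is mine, and I'm 
assuming the lock in question is s->lock):

    /*
     * Intentionally leave s->lock held on acceptance; it is dropped
     * again by the read/write handler (or on the reject path) once
     * the I/O has been dealt with.
     */
    return 1;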

  Paul

> 
> Jan

