
Re: [Xen-devel] [RFC PATCH] x86: Add a new physdev_op PHYSDEVOP_nr_irqs_gsi




> -----Original Message-----
> From: Stefano Stabellini [mailto:stefano.stabellini@xxxxxxxxxxxxx]
> Sent: Wednesday, April 11, 2012 12:21 AM
> To: Lin Ming
> Cc: Jan Beulich; Konrad Rzeszutek Wilk; Zhang, Xiantao; xen-devel
> Subject: Re: [Xen-devel] [RFC PATCH] x86: Add a new physdev_op
> PHYSDEVOP_nr_irqs_gsi
> 
> On Tue, 10 Apr 2012, Lin Ming wrote:
> > On Tue, 2012-04-10 at 16:33 +0100, Jan Beulich wrote:
> > > >>> On 10.04.12 at 17:13, Lin Ming <mlin@xxxxxxxxxxxxx> wrote:
> > > > This new physdev_op is added for Linux guest kernel to get the
> > > > correct nr_irqs_gsi value.
> > >
> > > I'm not convinced this is really needed - the kernel can work out
> > > the right number without any hypercall afaict.
> >
> > In Linux kernel:
> >
> > mp_register_ioapic(...):
> >
> >         entries = io_apic_get_redir_entries(idx);
> >         gsi_cfg = mp_ioapic_gsi_routing(idx);
> >         gsi_cfg->gsi_base = gsi_base;
> >         gsi_cfg->gsi_end = gsi_base + entries - 1;
> >
> >         /*
> >          * The number of IO-APIC IRQ registers (== #pins):
> >          */
> >         ioapics[idx].nr_registers = entries;
> >
> >         if (gsi_cfg->gsi_end >= gsi_top)
> >                 gsi_top = gsi_cfg->gsi_end + 1;
> >
> > io_apic_get_redir_entries() calls io_apic_read(), which returns a wrong
> > value (0xFFFFFFFF) on a Xen Dom0 kernel.
> >
> > How can we get the correct gsi_top value, which is used to set
> > nr_irqs_gsi, without a hypercall?
> >
> > The problem here is we don't have a Xen version of io_apic_read in
> > Linux kernel.
> 
> Actually we do: http://marc.info/?l=linux-kernel&m=133295662314519

This is not enough; it seems a fixed value is returned for each read?
Xiantao

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

