Re: [Xen-devel] [PATCH v9 01/11] vpci: introduce basic handlers to trap accesses to the PCI config space
On Wed, Mar 14, 2018 at 08:31:03AM -0600, Jan Beulich wrote:
> >>> On 14.03.18 at 15:03, <roger.pau@xxxxxxxxxx> wrote:
> > --- a/xen/drivers/Kconfig
> > +++ b/xen/drivers/Kconfig
> > @@ -12,4 +12,6 @@ source "drivers/pci/Kconfig"
> >
> >  source "drivers/video/Kconfig"
> >
> > +source "drivers/vpci/Kconfig"
>
> Are there more things to appear in that new file? If not, what's the
> point of introducing it?

I assumed that was the correct hierarchy, to avoid polluting other
Kconfig files. pci/ and cpufreq/ also contain a Kconfig file with a
single option. I can move it to drivers/Kconfig if you prefer.

> > --- /dev/null
> > +++ b/xen/drivers/vpci/Kconfig
> > @@ -0,0 +1,4 @@
> > +
> > +config HAS_VPCI
> > +	bool
> > +	depends on HAS_PCI
>
> So this is one of those common Kconfig issues: When there's no
> prompt, any "depends on" can at best lead to warnings the tool
> issues and no-one pays attention to. I can only advise to avoid
> such dependencies. If anyone gets the selects wrong, there'll
> be a build failure somewhere independent of said warning.

Yes, the build is certainly going to fail.

> With these taken care of, feel free to reinstate
> Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
> (I think the adjustments could also be done while committing,
> if enough of the series becomes ready to warrant putting in
> at least part of it; I think it was really the lack of an ARM side
> ack which prevented me from doing this on v8 already.)

You have comments on other patches starting from patch 7. Let me take
a look at those comments; if they are minor I can probably fix them
quickly and send a new version with these minor corrections as well.

Thanks, Roger.
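For reference, a minimal sketch of the select-based pattern Jan describes above, assuming the option stays in xen/drivers/vpci/Kconfig; the X86 stanza is purely illustrative and not taken from the posted patch. The promptless HAS_VPCI symbol carries no "depends on", and the architecture that provides PCI support selects it together with HAS_PCI, so a wrong select surfaces as a build failure instead of an ignored Kconfig warning.

    # xen/drivers/vpci/Kconfig (sketch): promptless symbol, no bare
    # "depends on"; the dependency is enforced by whichever
    # architecture selects it.
    config HAS_VPCI
    	bool

    # Illustrative architecture Kconfig (an assumption, not part of
    # the patch): HAS_VPCI is only selected alongside HAS_PCI.
    config X86
    	def_bool y
    	select HAS_PCI
    	select HAS_VPCI

With this arrangement the guarantee rests entirely on the select sites, which is exactly where a mistake would already break the build.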