
Re: [Xen-devel] [PATCH v4 8/8] ioreq-server: bring the PCI hotplug controller implementation into Xen



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 09 April 2014 14:54
> To: Paul Durrant
> Cc: Ian Campbell; Ian Jackson; Stefano Stabellini; xen-devel@xxxxxxxxxxxxx
> Subject: RE: [PATCH v4 8/8] ioreq-server: bring the PCI hotplug controller
> implementation into Xen
> 
> >>> On 09.04.14 at 15:42, <Paul.Durrant@xxxxxxxxxx> wrote:
> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> >> >>> On 02.04.14 at 17:11, <paul.durrant@xxxxxxxxxx> wrote:
> >> > +int gpe_init(struct domain *d)
> >> > +{
> >> > +    struct hvm_hotplug  *hp = &d->arch.hvm_domain.hotplug;
> >> > +
> >> > +    hp->gpe_sts = xzalloc_array(uint8_t, ACPI_GPE0_BLK_LEN_V1 / 2);
> >> > +    if ( hp->gpe_sts == NULL )
> >> > +        goto fail1;
> >> > +
> >> > +    hp->gpe_en = xzalloc_array(uint8_t, ACPI_GPE0_BLK_LEN_V1 / 2);
> >> > +    if ( hp->gpe_en == NULL )
> >> > +        goto fail2;
> >>
> >> I'd like to ask (here and elsewhere in this series) that the number of
> >> "goto"s be limited to the minimum required, to help code readability.
> >
> > Personally I find using forward jumps to fail labels the most readable form
> > of error exit - I wish they were used more widely.
> 
> Interesting - almost everyone (and everything) involved in educating me
> in programming recommended getting by without goto altogether, unless
> programming in Fortran, Basic, or some such.
> 

In my case it comes from programming for SPARC, where forward branches were 
(are?) statically predicted not-taken, so it was much better to code your error 
path as a single forward branch. You don't have to worry about such things on 
x86, but I'm used to reading code that looks like that :-/
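
For what it's worth, here is a minimal standalone sketch of the idiom I mean
(the struct, function, and label names are hypothetical, not taken from the
patch):

#include <stdlib.h>

struct ctx {
    unsigned char *sts;
    unsigned char *en;
};

/*
 * Allocate both buffers. The success path falls straight through;
 * each error path is a single forward branch to a fail label, and
 * the labels unwind the prior allocations in reverse order.
 */
int ctx_init(struct ctx *c, size_t len)
{
    c->sts = calloc(len, 1);
    if ( c->sts == NULL )
        goto fail1;

    c->en = calloc(len, 1);
    if ( c->en == NULL )
        goto fail2;

    return 0;

 fail2:
    free(c->sts);
    c->sts = NULL;
 fail1:
    return -1;
}

The point of the reverse-ordered labels is that adding a third allocation only
means adding one more branch target, without restructuring the existing
cleanup code.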

  Paul

> Jan

