
Re: [Xen-devel] [PATCH] dmar: device scope mem leak fix



On Tue, May 26, 2015 at 10:46:30AM +0100, Jan Beulich wrote:
> >>> On 23.05.15 at 03:27, <elena.ufimtseva@xxxxxxxxxx> wrote:
> > @@ -318,13 +321,13 @@ static int __init acpi_parse_dev_scope(
> >      if ( (cnt = scope_device_count(start, end)) < 0 )
> >          return cnt;
> >  
> > -    scope->devices_cnt = cnt;
> >      if ( cnt > 0 )
> >      {
> >          scope->devices = xzalloc_array(u16, cnt);
> >          if ( !scope->devices )
> >              return -ENOMEM;
> >      }
> > +    scope->devices_cnt = cnt;
> 
> Together with this, shouldn't you also clear ->devices_cnt along
> with freeing ->devices on the error path at the end of the function?
Yes, absolutely.
> 
> > @@ -658,6 +661,7 @@ acpi_parse_one_rmrr(struct acpi_dmar_header *header)
> >                  "  Ignore the RMRR (%"PRIx64", %"PRIx64") due to "
> >                  "devices under its scope are not PCI discoverable!\n",
> >                  rmrru->base_address, rmrru->end_address);
> > +            xfree(rmrru->scope.devices);
> >              xfree(rmrru);
Do you think ret should also be set to an error in this case?
> >          }
> >          else if ( base_addr > end_addr )
> > @@ -665,6 +669,7 @@ acpi_parse_one_rmrr(struct acpi_dmar_header *header)
> >              dprintk(XENLOG_WARNING VTDPREFIX,
> >                  "  The RMRR (%"PRIx64", %"PRIx64") is incorrect!\n",
> >                  rmrru->base_address, rmrru->end_address);
> > +            xfree(rmrru->scope.devices);
> >              xfree(rmrru);
> >              ret = -EFAULT;
> >          }
> 
> Why not also in acpi_parse_one_drhd() (and for consistency also
> acpi_parse_one_atsr(), even if only being a latent issue there right
> now)?

I missed that, will add as well.
> 
> Jan
> 
