
Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation



On Thu, Mar 12, 2020 at 10:04:35AM +0100, Roger Pau Monné wrote:
>
> On Wed, Mar 11, 2020 at 10:25:15PM +0000, Agarwal, Anchal wrote:
> > Hi Roger,
> > I am trying to understand your comments on indirect descriptors, and to
> > avoid polluting the mailing list I am emailing you personally.
> 
> IMO it's better to send to the mailing list. The issues or questions
> you have about indirect descriptors can be helpful to others in the
> future. If there's no confidential information please send to the
> list next time.
> 
> Feel free to forward this reply to the list also.
>
Sure, no problem at all.
> > Hope that's ok by you.  Please see my response inline.
> >
> >     On Fri, Mar 06, 2020 at 06:40:33PM +0000, Anchal Agarwal wrote:
> >     > On Fri, Feb 21, 2020 at 03:24:45PM +0100, Roger Pau Monné wrote:
> >     > > On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> >     > > >   blkfront_gather_backend_features(info);
> >     > > >   /* Reset limits changed by blk_mq_update_nr_hw_queues(). */
> >     > > >   blkif_set_queue_limits(info);
> >     > > > @@ -2046,6 +2063,9 @@ static int blkif_recover(struct blkfront_info *info)
> >     > > >           kick_pending_request_queues(rinfo);
> >     > > >   }
> >     > > >
> >     > > > + if (frozen)
> >     > > > +         return 0;
> >     > >
> >     > > I have to admit my memory is fuzzy here, but don't you need to
> >     > > re-queue requests in case the backend has different limits of
> >     > > indirect descriptors per request, for example?
> >     > >
> >     > > Or do we expect that the frontend is always going to be resumed
> >     > > on the same backend, and thus features won't change?
> >     > >
> >     > So, to understand your question better: AFAIU the maximum number
> >     > of indirect grefs is fixed by the backend, but the frontend can
> >     > issue requests with any number of indirect segments as long as
> >     > it's less than the number provided by the backend. So by your
> >     > question, do you mean this maximum, MAX_INDIRECT_SEGMENTS (256)
> >     > on the backend, can change?
> >
> >     Yes, the number of indirect descriptors supported by the backend can
> >     change, because you moved to a different backend, or because the
> >     maximum supported by the backend has changed. It's also possible to
> >     resume on a backend that has no indirect descriptor support at all.
> >
> > AFAIU, the code for requeuing the requests is only for xen suspend/resume.
> > These requests in the queue are the same ones that get added to the
> > queuelist in blkfront_resume. Also, even if the indirect descriptor limit
> > changes on resume, it just needs to be advertised to the frontend, which
> > would simply mean that a request can process more data.
> 
> Or less data. You could legitimately migrate from a host that has
> indirect descriptors to one without, in which case requests would need
> to be smaller to fit the ring slots.
> 
> > We do set up indirect descriptors on the frontend in blkif_recover before
> > returning, and the queue limits are set up accordingly.
> > Am I missing anything here?
> 
> Calling blkif_recover should take care of it AFAICT, as it resets the
> queue limits according to the data announced on xenstore.
> 
> I think I got confused, using blkif_recover should be fine, sorry.
> 
Ok, thanks for confirming. I will fix up the other suggestions in the patch
and send out a v4.
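
Just so we are on the same page about what recovery relies on, here is a
condensed sketch (the helper name is my own and this is not verbatim upstream
code) of the path I expect to do the work: blkfront_gather_backend_features()
re-reads feature-max-indirect-segments from xenstore, and
blkif_set_queue_limits() reapplies it, so a smaller (or absent) indirect
limit on the new backend is honored:

static void sketch_reapply_indirect_limit(struct blkfront_info *info)
{
        unsigned int indirect_segments;

        /* Reads 0 if the (possibly new) backend offers no indirect descriptors. */
        indirect_segments = xenbus_read_unsigned(info->xbdev->otherend,
                                "feature-max-indirect-segments", 0);
        info->max_indirect_segments = min(indirect_segments,
                                          xen_blkif_max_segments);

        /* Requests issued after recovery must fit the re-read limit. */
        blk_queue_max_segments(info->rq,
                               info->max_indirect_segments ? :
                               BLKIF_MAX_SEGMENTS_PER_REQUEST);
}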
> >
> >     > > > @@ -2625,6 +2671,62 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
> >     > > >   mutex_unlock(&blkfront_mutex);
> >     > > >  }
> >     > > >
> >     > > > +static int blkfront_freeze(struct xenbus_device *dev)
> >     > > > +{
> >     > > > + unsigned int i;
> >     > > > + struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> >     > > > + struct blkfront_ring_info *rinfo;
> >     > > > + /* This would be a reasonable timeout, as used in xenbus_dev_shutdown() */
> >     > > > + unsigned int timeout = 5 * HZ;
> >     > > > + int err = 0;
> >     > > > +
> >     > > > + info->connected = BLKIF_STATE_FREEZING;
> >     > > > +
> >     > > > + blk_mq_freeze_queue(info->rq);
> >     > > > + blk_mq_quiesce_queue(info->rq);
> >     > >
> >     > > Don't you need to also drain the queue and make sure it's empty?
> >     > >
> >     > blk_mq_freeze_queue and blk_mq_quiesce_queue should take care of
> >     > running the HW queues synchronously and making sure all the ongoing
> >     > dispatches have finished. Did I understand your question right?
> >
> >     Can you please add some check to that end? (ie: that there are no
> >     pending requests on any queue?)
> >
> > Well, a check to see if there are any unconsumed responses could be done.
> > I haven't come across a use case in my testing where this failed, but
> > maybe there are other setups that may cause an issue here.
> 
> Thanks! It's mostly to be on the safe side if we expect the queues and
> rings to be fully drained.
> 
ACK.
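
To make sure I implement the check you mean, something along these lines is
what I have in mind (helper name and exact conditions are my own sketch, not
the final patch): walk every ring after the freeze/quiesce and fail the
freeze if anything is still in flight.

static int sketch_rings_drained(struct blkfront_info *info)
{
        unsigned int i;

        for (i = 0; i < info->nr_rings; i++) {
                struct blkfront_ring_info *rinfo = &info->rinfo[i];

                /* Responses the backend produced that we have not consumed. */
                if (RING_HAS_UNCONSUMED_RESPONSES(&rinfo->ring))
                        return -EBUSY;

                /* Requests we produced that the backend has not answered. */
                if (rinfo->ring.req_prod_pvt != rinfo->ring.rsp_cons)
                        return -EBUSY;
        }

        return 0;
}

If this looks reasonable I will wire it into blkfront_freeze() before tearing
down the rings.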
> Roger.
Thanks,
Anchal
