Re: [Xen-devel] [PATCH v7 12/12] x86/hvm/ioreq: add a new mappable resource type...
>>> On 26.09.17 at 15:05, <Paul.Durrant@xxxxxxxxxx> wrote:
>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> Sent: 26 September 2017 13:59
>>
>> >>> On 18.09.17 at 17:31, <paul.durrant@xxxxxxxxxx> wrote:
>> > @@ -780,6 +882,33 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
>> >      return rc;
>> >  }
>> >
>> > +mfn_t hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
>> > +                                 unsigned int idx)
>> > +{
>> > +    struct hvm_ioreq_server *s;
>> > +    mfn_t mfn = INVALID_MFN;
>> > +
>> > +    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
>> > +
>> > +    s = get_ioreq_server(d, id);
>> > +
>> > +    if ( id >= MAX_NR_IOREQ_SERVERS || !s || IS_DEFAULT(s) )
>> > +        goto out;
>> > +
>> > +    if ( hvm_ioreq_server_alloc_pages(s) )
>> > +        goto out;
>> > +
>> > +    if ( idx == 0 )
>> > +        mfn = _mfn(page_to_mfn(s->bufioreq.page));
>> > +    else if ( idx == 1 )
>> > +        mfn = _mfn(page_to_mfn(s->ioreq.page));
>>
>> else <some sort of error>?
>
> I set mfn to INVALID above. Is that not enough? Together with ...
>
>> Also with buffered I/O being optional, wouldn't it be more natural for
>> index 0 representing the synchronous page? And with buffered I/O not
>> enabled, aren't you returning rubbish (NULL translated by page_to_mfn())?
>
> Good point. I should leave the mfn set to invalid if the buffered page is
> not there.

... this - no, I don't think so. The two cases would be indistinguishable.
An invalid index should be EINVAL or EDOM or some such.

> As for making it zero, and putting the synchronous ones afterwards, that
> was intentional because more synchronous pages will need to be added if we
> need to support more vCPUs, whereas there should only ever need to be one
> buffered page.

Ah, I see.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
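For context, the error-handling direction suggested in the exchange above could look roughly like the sketch below. It is a minimal illustration, assuming the same Xen-internal helpers and fields as the quoted hunk (get_ioreq_server(), IS_DEFAULT(), hvm_ioreq_server_alloc_pages(), s->bufioreq.page, s->ioreq.page); the signature change (returning an error code and passing the MFN back through an out parameter) is illustrative only and not necessarily what later revisions of the patch did.

/*
 * Hypothetical rework of the quoted hvm_get_ioreq_server_frame(): report
 * failures through the return value and hand the MFN back via an out
 * parameter, so that an out-of-range index (-EINVAL) is distinguishable
 * from a legitimately absent buffered-ioreq page (-ENOENT).
 */
int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
                               unsigned int idx, mfn_t *mfn)
{
    struct hvm_ioreq_server *s;
    int rc;

    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);

    s = get_ioreq_server(d, id);

    rc = -ENOENT;
    if ( id >= MAX_NR_IOREQ_SERVERS || !s || IS_DEFAULT(s) )
        goto out;

    rc = hvm_ioreq_server_alloc_pages(s);
    if ( rc )
        goto out;

    switch ( idx )
    {
    case 0:
        /* Buffered I/O is optional, so this page may be absent. */
        rc = -ENOENT;
        if ( !s->bufioreq.page )
            break;
        *mfn = _mfn(page_to_mfn(s->bufioreq.page));
        rc = 0;
        break;

    case 1:
        /* Synchronous ioreq page. */
        *mfn = _mfn(page_to_mfn(s->ioreq.page));
        rc = 0;
        break;

    default:
        /* An invalid index is now a distinct error, per the review. */
        rc = -EINVAL;
        break;
    }

 out:
    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);

    return rc;
}

With this shape a caller can tell "no buffered page configured" (-ENOENT on index 0) apart from "nonsense index" (-EINVAL), which is the distinction asked for above, while index 0 stays reserved for the single buffered page so that further synchronous pages can be appended later for additional vCPUs.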