Re: [Xen-devel] [PATCH v2 05/11] ioreq: add internal ioreq initialization support
On Tue, Sep 10, 2019 at 02:59:57PM +0200, Paul Durrant wrote:
> > -----Original Message-----
> > From: Roger Pau Monne <roger.pau@xxxxxxxxxx>
> > Sent: 03 September 2019 17:14
> > To: xen-devel@xxxxxxxxxxxxxxxxxxxx
> > Cc: Roger Pau Monne <roger.pau@xxxxxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>;
> > Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Wei Liu <wl@xxxxxxx>;
> > Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> > Subject: [PATCH v2 05/11] ioreq: add internal ioreq initialization support
> >
> > @@ -736,33 +754,39 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
> >      int rc;
> >
> >      s->target = d;
> > -
> > -    get_knownalive_domain(currd);
> > -    s->emulator = currd;
> > -
> >      spin_lock_init(&s->lock);
> > -    INIT_LIST_HEAD(&s->ioreq_vcpu_list);
> > -    spin_lock_init(&s->bufioreq_lock);
> > -
> > -    s->ioreq.gfn = INVALID_GFN;
> > -    s->bufioreq.gfn = INVALID_GFN;
> >
> >      rc = hvm_ioreq_server_alloc_rangesets(s, id);
> >      if ( rc )
> >          return rc;
> >
> > -    s->bufioreq_handling = bufioreq_handling;
> > -
> > -    for_each_vcpu ( d, v )
> > +    if ( !hvm_ioreq_is_internal(id) )
> >      {
> > -        rc = hvm_ioreq_server_add_vcpu(s, v);
> > -        if ( rc )
> > -            goto fail_add;
> > +        get_knownalive_domain(currd);
> > +
> > +        s->emulator = currd;
> > +        INIT_LIST_HEAD(&s->ioreq_vcpu_list);
> > +        spin_lock_init(&s->bufioreq_lock);
> > +
> > +        s->ioreq.gfn = INVALID_GFN;
> > +        s->bufioreq.gfn = INVALID_GFN;
> > +
> > +        s->bufioreq_handling = bufioreq_handling;
> > +
> > +        for_each_vcpu ( d, v )
> > +        {
> > +            rc = hvm_ioreq_server_add_vcpu(s, v);
> > +            if ( rc )
> > +                goto fail_add;
> > +        }
> >      }
> > +    else
> > +        s->handler = NULL;
>
> The struct is zeroed out, so initializing the handler is not necessary.
>
> >
> >      return 0;
> >
> >  fail_add:
> > +    ASSERT(!hvm_ioreq_is_internal(id));
> >      hvm_ioreq_server_remove_all_vcpus(s);
> >      hvm_ioreq_server_unmap_pages(s);
> >
>
> I think it would be worthwhile having that ASSERT repeated in the called
> functions, and in other functions that only operate on external ioreq
> servers, e.g. hvm_ioreq_server_add_vcpu(), hvm_ioreq_server_map_pages(),
> etc.

That's fine, but then I would also need to pass the ioreq server id to
those functions just to perform the ASSERT. I will leave those as-is,
because passing the id solely for the sake of the ASSERT seems pointless.

> > @@ -864,20 +897,21 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
> >          goto out;
> >
> >      rc = -EPERM;
> > -    if ( s->emulator != current->domain )
> > +    /* NB: internal servers cannot be destroyed. */
> > +    if ( hvm_ioreq_is_internal(id) || s->emulator != current->domain )
>
> Shouldn't the test of hvm_ioreq_is_internal(id) simply be an ASSERT? This
> function should only be called from a dm_op(), right?

Right, I had wrongly assumed this was also called when destroying a
domain, but domain destruction uses hvm_destroy_all_ioreq_servers
instead.
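
Something along these lines, I think (a sketch only; the actual v3 hunk
isn't pasted in this thread, so the comment wording and exact placement
are illustrative):

    rc = -EPERM;
    /*
     * Reaching here with an internal id would be a hypervisor bug rather
     * than a guest-triggerable condition, hence ASSERT instead of
     * returning -EPERM.
     */
    ASSERT(!hvm_ioreq_is_internal(id));
    if ( s->emulator != current->domain )
        goto out;
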
> >          goto out;
> >
> >      domain_pause(d);
> >
> >      p2m_set_ioreq_server(d, 0, id);
> >
> > -    hvm_ioreq_server_disable(s);
> > +    hvm_ioreq_server_disable(s, hvm_ioreq_is_internal(id));
> >
> >      /*
> >       * It is safe to call hvm_ioreq_server_deinit() prior to
> >       * set_ioreq_server() since the target domain is paused.
> >       */
> > -    hvm_ioreq_server_deinit(s);
> > +    hvm_ioreq_server_deinit(s, hvm_ioreq_is_internal(id));
> >      set_ioreq_server(d, id, NULL);
> >
> >      domain_unpause(d);
> > @@ -909,7 +943,8 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
> >          goto out;
> >
> >      rc = -EPERM;
> > -    if ( s->emulator != current->domain )
> > +    /* NB: don't allow fetching information from internal ioreq servers. */
> > +    if ( hvm_ioreq_is_internal(id) || s->emulator != current->domain )
>
> Again here, and in several places below.

I've fixed the calls to hvm_get_ioreq_server_info,
hvm_get_ioreq_server_frame and hvm_map_mem_type_to_ioreq_server to
include an ASSERT that the passed ioreq server id is not internal.
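
As a sketch, for one of them that looks roughly like this at the top of
the function (the prototype matches the current code; the comment
wording is mine):

    int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
                                  unsigned long *ioreq_gfn,
                                  unsigned long *bufioreq_gfn,
                                  evtchn_port_t *bufioreq_port)
    {
        /*
         * Only reachable via dm_op: an internal server id here would be
         * a caller bug rather than a guest-triggerable condition.
         */
        ASSERT(!hvm_ioreq_is_internal(id));

        /* ... existing lookup and copy-out logic unchanged ... */
    }

Thanks, Roger.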