[Xen-devel] Re: [XenPPC] RFC: xencomm - linux side
On Wed, 2006-08-23 at 09:59 +0200, Tristan Gingold wrote:
> On Tuesday 22 August 2006 at 21:03, Hollis Blanchard wrote:
> > I apologize for my mailer line-wrapping the patch as I quote it below.
> >
> > On Mon, 2006-08-21 at 17:18 +0200, Tristan Gingold wrote:
> > > diff -r b7db009d622c linux-2.6-xen-sparse/drivers/xen/Kconfig
> > > --- a/linux-2.6-xen-sparse/drivers/xen/Kconfig	Mon Aug 21 09:41:24 2006 +0200
> > > +++ b/linux-2.6-xen-sparse/drivers/xen/Kconfig	Mon Aug 21 15:04:32 2006 +0200
> > > @@ -257,4 +257,7 @@ config XEN_SMPBOOT
> > >  	default y
> > >  	depends on SMP
> > >
> > > +config XEN_XENCOMM
> > > +	bool
> > > +	default n
> > >  endif
> >
> > Shouldn't IA64 "select XEN_XENCOMM"? Or is your kernel in a separate
> > tree?
> The arch Kconfig overrides this parameter.

My point was that I didn't see that in this patch.

> > > diff -r b7db009d622c linux-2.6-xen-sparse/drivers/xen/privcmd/privcmd.c
> > > --- a/linux-2.6-xen-sparse/drivers/xen/privcmd/privcmd.c	Mon Aug 21 09:41:24 2006 +0200
> > > +++ b/linux-2.6-xen-sparse/drivers/xen/privcmd/privcmd.c	Mon Aug 21 15:04:32 2006 +0200
> > > @@ -34,6 +34,10 @@
> > >
> > >  static struct proc_dir_entry *privcmd_intf;
> > >  static struct proc_dir_entry *capabilities_intf;
> > > +
> > > +#ifdef CONFIG_XEN_XENCOMM
> > > +extern int xencomm_privcmd_hypercall(privcmd_hypercall_t *hypercall);
> > > +#endif
> > >
> > >  #define NR_HYPERCALLS 64
> > >  static DECLARE_BITMAP(hypercall_permission_map, NR_HYPERCALLS);
> > > @@ -91,19 +95,8 @@ static int privcmd_ioctl(struct inode *i
> > >  			"g" ((unsigned long)hypercall.arg[4])
> > >  			: "r8", "r10", "memory" );
> > >  		}
> > > -#elif defined (__ia64__)
> > > -		__asm__ __volatile__ (
> > > -			";; mov r14=%2; mov r15=%3; "
> > > -			"mov r16=%4; mov r17=%5; mov r18=%6;"
> > > -			"mov r2=%1; break 0x1000;; mov %0=r8 ;;"
> > > -			: "=r" (ret)
> > > -			: "r" (hypercall.op),
> > > -			"r" (hypercall.arg[0]),
> > > -			"r" (hypercall.arg[1]),
> > > -			"r" (hypercall.arg[2]),
> > > -			"r" (hypercall.arg[3]),
> > > -			"r" (hypercall.arg[4])
> > > -			: "r14","r15","r16","r17","r18","r2","r8","memory");
> > > +#elif defined (CONFIG_XEN_XENCOMM)
> > > +		ret = xencomm_privcmd_hypercall(&hypercall);
> > >  #endif
> > >  	}
> > >  	break;
> >
> > Move all the #ifdef stuff into appropriate header files, then have every
> > arch unconditionally call arch_privcmd_hypercall().
> I simply prefer not to touch other people's code, as I can't try xen/x86.

That's nice, but you're just moving code, and it's the Right Thing To Do,
so please do it. You can point out that you've only compile-tested x86
when you submit.

> > > +struct xencomm_handle *xencomm_create_inline (void *buffer,
> > > +                                              unsigned long bytes)
> > > +{
> > > +	unsigned long paddr;
> > > +
> > > +	paddr = xen_vaddr_to_paddr((unsigned long)buffer);
> > > +	return (struct xencomm_handle *)XENCOMM_INLINE_CREATE(paddr);
> > > +}
> >
> > XENCOMM_INLINE_CREATE is undefined in this patch. I liked your old patch
> > just fine:
> >
> > +struct xencomm_desc *xencomm_create_inline (void *buffer, unsigned long bytes)
> > +{
> > +	return (struct xencomm_desc *)
> > +		(__kern_paddr((unsigned long)buffer) | XENCOMM_INLINE);
> > +}
> It is defined in arch-xxx.h file.

But why? Do you anticipate that architectures will mark "inline"
descriptors differently? If so, how?

> > > + * In general, we need a xencomm descriptor to cover the top-level data
> > > + * structure (e.g. the dom0 op), plus another for every embedded pointer to
> > > + * another data structure (i.e. for every GUEST_HANDLE).
> > > + */
> > > +
> > > +int xencomm_hypercall_console_io(int cmd, int count, char *str)
> > > +{
> > > +	struct xencomm_handle *desc;
> > > +	int rc;
> > > +
> > > +	desc = xencomm_create_inline (str, count);
> > > +
> > > +	rc = xencomm_arch_hypercall_console_io (cmd, count, desc);
> > > +
> > > +	return rc;
> > > +}
> >
> > I don't understand the point of all these routines if they just call
> > arch_foo anyways.
> Sorry, I have not explained the principle.
> xencomm_arch_hypercall_XXX are the raw hypercalls. They must be defined
> by architecture code. The xencomm_hypercall_XXX are the xencomm wrappers
> and are shared.

That much is clear. :) My question is what is being done in those "raw
hypercalls" that can't be done here? You didn't include them in your
patch, so I can't tell.

It seems there are a few missing pieces to your patch. Next time please
include the whole thing, including arch-specific parts, so we can see
what's going on.

> > > +	if (ret)
> > > +		goto out; /* error mapping the nested pointer */
> > > +
> > > +	ret = xencomm_arch_hypercall_dom0_op (op_desc);
> > > +
> > > +	/* FIXME: should we restore the handle? */
> > > +	if (copy_to_user(user_op, &kern_op, sizeof(dom0_op_t)))
> > > +		ret = -EFAULT;
> > > +
> > > +	if (desc)
> > > +		xencomm_free(desc);
> > > +out:
> > > +	return ret;
> > > +}
> >
> > You misplaced the out label; it needs to go before xencomm_free(desc);
> ??? This was copied from your work.

You've made changes here, and that's what I'm pointing out.

> The code branches to out iff xencomm allocation failed. It is safe to
> call xencomm_free but useless.

There are multiple descriptors being created: one for the dom0_op
top-level structure, and possibly one for a sub-structure. In fact, in
your patch you never free 'op_desc' inside xencomm_privcmd_dom0_op().

OK, reading closer, I don't like that at *all*. The trick is that
xencomm_create_inline() doesn't actually create anything, and therefore
you don't need to free it.
That needs to change. My suggestion: have xencomm_create() test
IS_KERNEL_ADDR() (in whatever way is best for portability) and if it is,
do the "inline" stuff. On the free side, if the descriptor was inline,
free can just return.

That would also make me happy because it removes the need to think about
whether callers can/should call "create_inline" or not; the code just
does the right thing.

-- 
Hollis Blanchard
IBM Linux Technology Center

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel