
[Xen-devel] [PATCH v3 08/16] hypervisor part of adding limited support for VMware's hyper-call RPC



The support included is enough to allow VMware Tools to install in an
HVM domU and to provide guestinfo support.  guestinfo support is
provided by what is known as VMware RPC support.  Toolstack access to
guestinfo is provided via libxc; libxl support has not been written.

Note: VMware RPC support is only available to HVM domUs.

This interface is an extension of __HYPERVISOR_hvm_op.  It was
picked because xc_get_hvm_param() also uses it, and VMware guest
info is a lot like an HVM param.

HVMOP_get_vmport_guest_info is used by two libxc functions,
xc_get_vmport_guest_info() and xc_fetch_vmport_guest_info().
xc_fetch_vmport_guest_info() is designed to fetch all
currently set guestinfo values.
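
A rough sketch of the fetch-all case (hypothetical; the real libxc
wrappers are in the tools part of this series, and do_vmport_op()
below is only a stand-in for the underlying HVMOP invocation): a
key_length of 0 asks the hypervisor to iterate by index, with the
index passed in value_length:

    struct xen_hvm_vmport_guest_info info;
    unsigned int idx;

    for ( idx = 0; ; idx++ )
    {
        memset(&info, 0, sizeof(info));
        info.domid = domid;
        info.key_length = 0;     /* 0 => iterate over all entries */
        info.value_length = idx; /* the index is passed in value_length */
        if ( do_vmport_op(HVMOP_get_vmport_guest_info, &info) )
            break;               /* -ENOENT/-E2BIG: no more entries */
        /* data[] holds the key followed immediately by the value */
        printf("guestinfo.%.*s=%.*s\n",
               info.key_length, info.data,
               info.value_length, info.data + info.key_length);
    }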

To save on hypervisor heap memory, the guestinfo support is done in
two sizes, normal and jumbo.  Normal is used to handle values of up
to 128 bytes and jumbo is used to handle values of up to 4096 bytes.

Since all of this work is done while the guest is executing a single
instruction, the code was designed not to allocate hypervisor heap
memory at that time.  Instead, a few entries are allocated at domain
creation time and more during the Xen hyper-calls that get or set
them.  This was picked because a toolstack that is using the VMware
guestinfo support should be using either or both of get and set, so
the guest should only see an out-of-memory error once the
compile-time maximum amount of hypervisor heap memory is already in
use.

Doing it this way does lead to a lot of pointer use and many
sub-structures.

If the domU is running VMware Tools, then the "build version" of
the tools is also available via xc_get_hvm_param().  This also
enables the use of new triggers that use the VMware hyper-call
to do some limited control of the domU.  The most useful are
poweroff and reboot.  Since a guest process needs to be running
for these to work, a toolstack should check that the build version
is non-zero before assuming these will work.

The 2 HVM params HVM_PARAM_VMPORT_BUILD_NUMBER_TIME and
HVM_PARAM_VMPORT_BUILD_NUMBER_VALUE are how "build version" is
accessed.  These 2 params are only allowed to be set to zero.
HVM_PARAM_VMPORT_BUILD_NUMBER_TIME can be used to track the last
time the VMware Tools in the guest responded, for example to monitor
the health of the tools in the guest.  The HVM param
HVM_PARAM_VMPORT_RESET_TIME controls how often to request a response
from the tools, in seconds minus 1.  The minus 1 handles the 0 case,
i.e. the fastest that can be selected is once per second.  The
default is 4 times a minute.
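
A minimal toolstack-side sketch of that check (assuming the usual
xc_get_hvm_param() signature):

    unsigned long build = 0, when = 0;

    /* Guest-assisted poweroff/reboot only works when VMware Tools
     * is running, i.e. a non-zero build number has been reported. */
    if ( xc_get_hvm_param(xch, domid,
                          HVM_PARAM_VMPORT_BUILD_NUMBER_VALUE,
                          &build) == 0 &&
         xc_get_hvm_param(xch, domid,
                          HVM_PARAM_VMPORT_BUILD_NUMBER_TIME,
                          &when) == 0 &&
         build != 0 )
    {
        /* Tools present; 'when' is when the guest last responded. */
    }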

The VMware RPC support includes the notion of channels that are
opened, active, and closed.  All RPC messages sent via a channel
start with normal ASCII text.  A message sometimes also includes
binary data.

Currently there are 2 protocols defined for VMware RPC.  They
determine the direction of data flow: domU to toolstack, or
toolstack to domU.
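
For example, messages arriving on the from-guest protocol look like
the following (taken from the examples in vmport_rpc.c below):

    log toolbox: Version: build-341836
    SetGuestInfo  4 build-341836
    info-get guestinfo.ip
    info-set guestinfo.ip joe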

No interrupt is provided for VMware RPC.

For a debug=y build there is a new command line option
vmport_debug=.  It enables output to the console of the various
stages of handling the "IN EAX, DX" instruction.  The most useful
settings are the summary ones that show complete RPC actions.
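
For example, booting Xen with the following logs the complete RPC
sends and receives (VMPORT_LOG_SEND | VMPORT_LOG_RECV from vmport.h
below):

    vmport_debug=0x30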

Signed-off-by: Don Slutz <dslutz@xxxxxxxxxxx>
---
 xen/arch/x86/hvm/hvm.c                       |   42 +
 xen/arch/x86/hvm/vmware/Makefile             |    1 +
 xen/arch/x86/hvm/vmware/vmport.c             |    8 +-
 xen/arch/x86/hvm/vmware/vmport_rpc.c         | 1281 ++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/domain.h             |    4 +
 xen/include/asm-x86/hvm/vmport.h             |   16 +
 xen/include/public/arch-x86/hvm/vmporttype.h |  118 +++
 xen/include/public/hvm/hvm_op.h              |   18 +
 xen/include/public/hvm/params.h              |    5 +-
 9 files changed, 1491 insertions(+), 2 deletions(-)
 create mode 100644 xen/arch/x86/hvm/vmware/vmport_rpc.c
 create mode 100644 xen/include/public/arch-x86/hvm/vmporttype.h

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 67ad37f..1bd5ae9 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1499,6 +1499,10 @@ int hvm_domain_initialise(struct domain *d)
         goto fail1;
     d->arch.hvm_domain.io_handler->num_slot = 0;
 
+    rc = vmport_rpc_init(d);
+    if ( rc != 0 )
+        goto fail1;
+
     if ( is_pvh_domain(d) )
     {
         register_portio_handler(d, 0, 0x10003, handle_pvh_io);
@@ -1534,6 +1538,7 @@ int hvm_domain_initialise(struct domain *d)
     stdvga_deinit(d);
     vioapic_deinit(d);
  fail1:
+    vmport_rpc_deinit(d);
     xfree(d->arch.hvm_domain.io_handler);
     xfree(d->arch.hvm_domain.params);
  fail0:
@@ -6155,6 +6160,43 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
     }
 
+    case HVMOP_get_vmport_guest_info:
+    case HVMOP_set_vmport_guest_info:
+    {
+        struct xen_hvm_vmport_guest_info a;
+        struct domain *d;
+
+        if ( copy_from_guest(&a, arg, 1) )
+            return -EFAULT;
+
+        rc = vmport_rpc_hvmop_precheck(op, &a);
+        if ( rc )
+            return rc;
+
+        d = rcu_lock_domain_by_any_id(a.domid);
+        if ( d == NULL )
+            return rc;
+
+        rc = -EINVAL;
+        if ( !is_hvm_domain(d) )
+            goto param_fail9;
+
+        rc = xsm_hvm_param(XSM_TARGET, d, op);
+        if ( rc )
+            goto param_fail9;
+
+        rc = vmport_rpc_hvmop_do(d, op, &a);
+        if ( rc )
+            goto param_fail9;
+
+        if ( op == HVMOP_get_vmport_guest_info )
+            rc = copy_to_guest(arg, &a, 1) ? -EFAULT : 0;
+
+    param_fail9:
+        rcu_unlock_domain(d);
+        break;
+    }
+
     default:
     {
         gdprintk(XENLOG_DEBUG, "Bad HVM op %ld.\n", op);
diff --git a/xen/arch/x86/hvm/vmware/Makefile b/xen/arch/x86/hvm/vmware/Makefile
index cd8815b..4a14124 100644
--- a/xen/arch/x86/hvm/vmware/Makefile
+++ b/xen/arch/x86/hvm/vmware/Makefile
@@ -1,2 +1,3 @@
 obj-y += cpuid.o
 obj-y += vmport.o
+obj-y += vmport_rpc.o
diff --git a/xen/arch/x86/hvm/vmware/vmport.c b/xen/arch/x86/hvm/vmware/vmport.c
index 0ee2db3..5769a81 100644
--- a/xen/arch/x86/hvm/vmware/vmport.c
+++ b/xen/arch/x86/hvm/vmware/vmport.c
@@ -139,6 +139,13 @@ int vmport_ioport(int dir, uint32_t port, uint32_t bytes, uint32_t *val)
             /* maxTimeLag */
             regs->rcx = 0;
             break;
+        case BDOOR_CMD_MESSAGE:
+            if ( !is_pv_vcpu(current) )
+            {
+                /* Only supported for non pv domains */
+                vmport_rpc(&current->domain->arch.hvm_domain, regs);
+            }
+            break;
         case BDOOR_CMD_GETGUIOPTIONS:
             regs->rax = VMWARE_GUI_AUTO_GRAB | VMWARE_GUI_AUTO_UNGRAB |
                 VMWARE_GUI_AUTO_RAISE_DISABLED | VMWARE_GUI_SYNC_TIME |
@@ -306,7 +313,6 @@ int vmport_gp_check(struct cpu_user_regs *regs, struct vcpu *v,
     return 14;
 }
 
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/hvm/vmware/vmport_rpc.c b/xen/arch/x86/hvm/vmware/vmport_rpc.c
new file mode 100644
index 0000000..695e58f
--- /dev/null
+++ b/xen/arch/x86/hvm/vmware/vmport_rpc.c
@@ -0,0 +1,1281 @@
+/*
+ * HVM VMPORT RPC emulation
+ *
+ * Copyright (C) 2012 Verizon Corporation
+ *
+ * This file is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License Version 2 (GPLv2)
+ * as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details. <http://www.gnu.org/licenses/>.
+ */
+
+/*
+ * VMware Tools running in a DOMU will do "info-get" and "info-set"
+ * guestinfo commands to get and set keys and values. Inside the VM,
+ * vmtools at its lower level will feed the command string 4 bytes
+ * at a time into the VMWARE magic port using the IN
+ * instruction. Each 4 byte mini-rpc will get handled
+ * vmport_io()-->vmport_rpc()-->vmport_process_packet()-->
+ * vmport_process_send_payload()-->vmport_send() and the command
+ * string will get accumulated into a channels send_buffer.  When
+ * the full length of the string has been accumulated, then this
+ * code copies the send_buffer into a free
+ * vmport_state->channel-->receive_bucket.buffer
+ * VMware tools then does RECVSIZE and RECVPAYLOAD messages, the
+ * latter then reads 4 bytes at a time using the IN instruction (for
+ * the info-get case).  Then a final RECVSTATUS message is sent to
+ * finish up
+ */
+
+#include <xen/config.h>
+#include <xen/lib.h>
+#include <xen/cper.h>
+#include <asm/hvm/hvm.h>
+#include <asm/hvm/support.h>
+#include <asm/hvm/vmport.h>
+#include <asm/hvm/trace.h>
+
+#include "public/arch-x86/hvm/vmporttype.h"
+
+#include "backdoor_def.h"
+#include "guest_msg_def.h"
+
+
+#define VMWARE_PROTO_TO_GUEST        0x4f4c4354
+#define VMWARE_PROTO_FROM_GUEST      0x49435052
+
+#define GUESTINFO_NOTFOUND      500
+#define GUESTINFO_VALTOOLONG    1
+#define GUESTINFO_KEYTOOLONG    2
+#define GUESTINFO_TOOMANYKEYS   3
+
+
+static inline uint16_t get_low_bits(uint32_t bits)
+{
+    return bits & 0xffff;
+}
+
+static inline uint16_t get_high_bits(uint32_t bits)
+{
+    return bits >> 16;
+}
+
+static inline uint32_t set_high_bits(uint32_t b, uint32_t val)
+{
+    return (val << 16) | get_low_bits(b);
+}
+
+static inline void set_status(struct cpu_user_regs *ur, uint16_t val)
+{
+    /* VMware defines this to be only 32 bits */
+    ur->rcx = (val << 16) | (ur->rcx & 0xffff);
+}
+
+#ifndef NDEBUG
+static void vmport_safe_print(char *prefix, int len, const char *msg)
+{
+    unsigned char c;
+    unsigned int end = len;
+    unsigned int i, k;
+    char out[4 * (VMPORT_MAX_SEND_BUF + 1) * 3 + 6];
+
+    if ( end > (sizeof(out) / 3 - 6) )
+        end = sizeof(out) / 3 - 6;
+    out[0] = '<';
+    k = 1;
+    for ( i = 0; i < end; i++ )
+    {
+        c = msg[i];
+        if ( (c == '^') || (c == '\\') || (c == '>') )
+        {
+            out[k++] = '\\';
+            out[k++] = c;
+        }
+        else if ( (c >= ' ') && (c <= '~') )
+            out[k++] = c;
+        else if ( c < ' ' )
+        {
+            out[k++] = '^';
+            out[k++] = c ^ 0x40;
+        }
+        else
+        {
+            snprintf(&out[k], sizeof(out) - k, "\\%02x", c);
+            k += 3;
+        }
+    }
+    out[k++] = '>';
+    if ( len > end )
+    {
+        out[k++] = '.';
+        out[k++] = '.';
+        out[k++] = '.';
+    }
+    out[k++] = 0;
+    gdprintk(XENLOG_DEBUG, "%s%d(%d,%d,%zu)%s\n", prefix, end, len, k,
+             sizeof(out), out);
+}
+#endif
+
+/*
+ * Copy message into a jumbo bucket buffer which vmtools will use to
+ * read from 4 bytes at a time until done with it
+ */
+static void vmport_send_jumbo(struct hvm_domain *hd, vmport_channel_t *c,
+                              const char *msg)
+{
+    unsigned int cur_recv_len = strlen(msg) + 1;
+    vmport_jumbo_bucket_t *b = &(c->jumbo_recv_bkt);
+
+    b->ctl.recv_len = cur_recv_len;
+    b->ctl.recv_idx = 0;
+
+    memset(b->recv_buf, 0, sizeof(b->recv_buf));
+
+    if ( cur_recv_len >= (sizeof(b->recv_buf) - 1) )
+    {
+        VMPORT_DBG_LOG(VMPORT_LOG_ERROR,
+                       "VMware jumbo recv_len=%d >= %ld",
+                       cur_recv_len, sizeof(b->recv_buf) - 1);
+        cur_recv_len = sizeof(b->recv_buf) - 1;
+    }
+
+    memcpy(b->recv_buf, msg, cur_recv_len);
+
+    c->ctl.jumbo = 1;
+}
+
+/*
+ * Copy message into a free receive bucket buffer which vmtools will use to
+ * read from 4 bytes at a time until done with it
+ */
+static void vmport_send_normal(struct hvm_domain *hd, vmport_channel_t *c,
+                               const char *msg)
+{
+    unsigned int cur_recv_len = strlen(msg) + 1;
+    unsigned int my_bkt = c->ctl.recv_write;
+    unsigned int next_bkt = my_bkt + 1;
+    vmport_bucket_t *b;
+
+    if ( next_bkt >= VMPORT_MAX_BKTS )
+        next_bkt = 0;
+
+    if ( next_bkt == c->ctl.recv_read )
+    {
+#ifndef NDEBUG
+        if ( opt_vmport_debug & VMPORT_LOG_SKIP_SEND )
+        {
+            char prefix[30];
+
+            snprintf(prefix, sizeof(prefix),
+                     "VMware _send skipped %d (%d, %d) ",
+                     c->ctl.chan_id, my_bkt, c->ctl.recv_read);
+            prefix[sizeof(prefix) - 1] = 0;
+            vmport_safe_print(prefix, cur_recv_len, msg);
+        }
+#endif
+        return;
+    }
+
+    c->ctl.recv_write = next_bkt;
+    b = &c->recv_bkt[my_bkt];
+#ifndef NDEBUG
+    if ( opt_vmport_debug & VMPORT_LOG_SEND )
+    {
+        char prefix[30];
+
+        snprintf(prefix, sizeof(prefix), "VMware _send %d (%d) ",
+                 c->ctl.chan_id, my_bkt);
+        prefix[sizeof(prefix) - 1] = 0;
+        vmport_safe_print(prefix, cur_recv_len, msg);
+    }
+#endif
+
+    b->ctl.recv_len = cur_recv_len;
+    b->ctl.recv_idx = 0;
+    memset(b->recv_buf, 0, sizeof(b->recv_buf));
+    if ( cur_recv_len >= (sizeof(b->recv_buf) - 1) )
+    {
+        VMPORT_DBG_LOG(VMPORT_LOG_ERROR, "VMware recv_len=%d >= %zd",
+                       cur_recv_len, sizeof(b->recv_buf) - 1);
+        cur_recv_len = sizeof(b->recv_buf) - 1;
+    }
+    memcpy(b->recv_buf, msg, cur_recv_len);
+}
+
+static void vmport_send(struct hvm_domain *hd, vmport_channel_t *c,
+                        const char *msg)
+{
+    unsigned int cur_recv_len = strlen(msg) + 1;
+
+    if ( cur_recv_len > VMPORT_MAX_VAL_LEN )
+        vmport_send_jumbo(hd, c, msg);
+    else
+        vmport_send_normal(hd, c, msg);
+}
+
+void vmport_ctrl_send(struct hvm_domain *hd, char *msg)
+{
+    struct vmport_state *vs = hd->vmport_data;
+    unsigned int i;
+
+    hd->vmport_data->ping_time = get_sec();
+    spin_lock(&hd->vmport_lock);
+    for ( i = 0; i < VMPORT_MAX_CHANS; i++ )
+    {
+        if ( vs->chans[i].ctl.proto_num == VMWARE_PROTO_TO_GUEST )
+            vmport_send(hd, &vs->chans[i], msg);
+    }
+    spin_unlock(&hd->vmport_lock);
+}
+
+static void vmport_flush(struct hvm_domain *hd)
+{
+    spin_lock(&hd->vmport_lock);
+    memset(&hd->vmport_data->chans, 0, sizeof(hd->vmport_data->chans));
+    spin_unlock(&hd->vmport_lock);
+}
+
+static void vmport_sweep(struct hvm_domain *hd, unsigned long now_time)
+{
+    struct vmport_state *vs = hd->vmport_data;
+    unsigned int i;
+
+    for ( i = 0; i < VMPORT_MAX_CHANS; i++ )
+    {
+        if ( vs->chans[i].ctl.proto_num )
+        {
+            vmport_channel_t *c = &vs->chans[i];
+            long delta = now_time - c->ctl.active_time;
+
+            if ( delta >= 80 )
+            {
+                VMPORT_DBG_LOG(VMPORT_LOG_SWEEP, "VMware flush %d. delta=%ld",
+                               c->ctl.chan_id, delta);
+                /* Return channel to free pool */
+                c->ctl.proto_num = 0;
+            }
+        }
+    }
+}
+
+static vmport_channel_t *vmport_new_chan(struct vmport_state *vs,
+                                         unsigned long now_time)
+{
+    unsigned int i;
+
+    for ( i = 0; i < VMPORT_MAX_CHANS; i++ )
+    {
+        if ( !vs->chans[i].ctl.proto_num )
+        {
+            vmport_channel_t *c = &vs->chans[i];
+
+            c->ctl.chan_id = i;
+            c->ctl.cookie = vs->open_cookie++;
+            c->ctl.active_time = now_time;
+            c->ctl.send_len = 0;
+            c->ctl.send_idx = 0;
+            c->ctl.recv_read = 0;
+            c->ctl.recv_write = 0;
+            return c;
+        }
+    }
+    return NULL;
+}
+
+static void vmport_process_send_size(struct hvm_domain *hd, vmport_channel_t *c,
+                                     struct cpu_user_regs *ur)
+{
+    /* vmware tools often send a 0 byte request size. */
+    c->ctl.send_len = ur->rbx;
+    c->ctl.send_idx = 0;
+
+    set_status(ur, MESSAGE_STATUS_SUCCESS);
+}
+
+/* ret_buffer is in/out param */
+static int vmport_get_guestinfo(struct hvm_domain *hd, struct vmport_state *vs,
+                                char *a_info_key, unsigned int a_key_len,
+                                char *ret_buffer, unsigned int ret_buffer_len)
+{
+    unsigned int i;
+
+    for ( i = 0; i < vs->used_guestinfo; i++ )
+    {
+        if ( vs->guestinfo[i] &&
+             (vs->guestinfo[i]->key_len == a_key_len) &&
+             (memcmp(a_info_key, vs->guestinfo[i]->key_data,
+                     vs->guestinfo[i]->key_len) == 0) )
+        {
+            snprintf(ret_buffer, ret_buffer_len - 1, "1 %.*s",
+                     (int)vs->guestinfo[i]->val_len,
+                     vs->guestinfo[i]->val_data);
+            return i;
+        }
+    }
+
+    for ( i = 0; i < vs->used_guestinfo_jumbo; i++ )
+    {
+        if ( vs->guestinfo_jumbo[i] &&
+             (vs->guestinfo_jumbo[i]->key_len == a_key_len) &&
+             (memcmp(a_info_key, vs->guestinfo_jumbo[i]->key_data,
+                     vs->guestinfo_jumbo[i]->key_len) == 0) )
+        {
+            snprintf(ret_buffer, ret_buffer_len - 1, "1 %.*s",
+                     (int)vs->guestinfo_jumbo[i]->val_len,
+                     vs->guestinfo_jumbo[i]->val_data);
+            return i;
+        }
+    }
+    return GUESTINFO_NOTFOUND;
+}
+
+static void hvm_del_guestinfo_jumbo(struct vmport_state *vs, char *key,
+                                    uint8_t len)
+{
+    int i;
+
+    for ( i = 0; i < vs->used_guestinfo_jumbo; i++ )
+    {
+        if ( !vs->guestinfo_jumbo[i] )
+        {
+#ifndef NDEBUG
+            gdprintk(XENLOG_WARNING,
+                     "i=%d not allocated used_guestinfo_jumbo=%d\n",
+                     i, vs->used_guestinfo_jumbo);
+#endif
+        }
+        else if ( (vs->guestinfo_jumbo[i]->key_len == len) &&
+                  (memcmp(key, vs->guestinfo_jumbo[i]->key_data, len) == 0) )
+        {
+            vs->guestinfo_jumbo[i]->key_len = 0;
+            vs->guestinfo_jumbo[i]->val_len = 0;
+            break;
+        }
+    }
+}
+
+static void hvm_del_guestinfo(struct vmport_state *vs, char *key, uint8_t len)
+{
+    int i;
+
+    for ( i = 0; i < vs->used_guestinfo; i++ )
+    {
+        if ( !vs->guestinfo[i] )
+        {
+#ifndef NDEBUG
+            gdprintk(XENLOG_WARNING,
+                     "i=%d not allocated, but used_guestinfo=%d\n",
+                     i, vs->used_guestinfo);
+#endif
+        }
+        else if ( (vs->guestinfo[i]->key_len == len) &&
+                  (memcmp(key, vs->guestinfo[i]->key_data, len) == 0) )
+        {
+            vs->guestinfo[i]->key_len = 0;
+            vs->guestinfo[i]->val_len = 0;
+            break;
+        }
+    }
+}
+
+static int vmport_set_guestinfo(struct vmport_state *vs, int a_key_len,
+                                unsigned int a_val_len, char *a_info_key, char *val)
+{
+    unsigned int i;
+    int free_i = -1, rc = 0;
+
+#ifndef NDEBUG
+    gdprintk(XENLOG_WARNING, "vmport_set_guestinfo a_val_len=%d\n", a_val_len);
+#endif
+
+    if ( a_key_len <= VMPORT_MAX_KEY_LEN )
+    {
+        if ( a_val_len <= VMPORT_MAX_VAL_LEN )
+        {
+            for ( i = 0; i < vs->used_guestinfo; i++ )
+            {
+                if ( !vs->guestinfo[i] )
+                {
+#ifndef NDEBUG
+                    gdprintk(XENLOG_WARNING,
+                             "i=%d not allocated, but used_guestinfo=%d\n",
+                             i, vs->used_guestinfo);
+#endif
+                }
+                else if ( (vs->guestinfo[i]->key_len == a_key_len) &&
+                          (memcmp(a_info_key, vs->guestinfo[i]->key_data,
+                                  vs->guestinfo[i]->key_len) == 0) )
+                {
+                    vs->guestinfo[i]->val_len = a_val_len;
+                    memcpy(vs->guestinfo[i]->val_data, val, a_val_len);
+                    break;
+                }
+                else if ( (vs->guestinfo[i]->key_len == 0) &&
+                          (free_i == -1) )
+                    free_i = i;
+            }
+            if ( i >= vs->used_guestinfo )
+            {
+                if ( free_i == -1 )
+                    rc = GUESTINFO_TOOMANYKEYS;
+                else
+                {
+                    vs->guestinfo[free_i]->key_len = a_key_len;
+                    memcpy(vs->guestinfo[free_i]->key_data,
+                           a_info_key, a_key_len);
+                    vs->guestinfo[free_i]->val_len = a_val_len;
+                    memcpy(vs->guestinfo[free_i]->val_data,
+                           val, a_val_len);
+                }
+            }
+        }
+        else
+            rc = GUESTINFO_VALTOOLONG;
+    }
+    else
+        rc = GUESTINFO_KEYTOOLONG;
+    if ( !rc )
+        hvm_del_guestinfo_jumbo(vs, a_info_key, a_key_len);
+    return rc;
+}
+
+static int vmport_set_guestinfo_jumbo(struct vmport_state *vs, int a_key_len,
+                                      int a_val_len, char *a_info_key, char *val)
+{
+    unsigned int i;
+    int free_i = -1, rc = 0;
+
+#ifndef NDEBUG
+    gdprintk(XENLOG_WARNING, "vmport_set_guestinfo_jumbo a_val_len=%d\n",
+             a_val_len);
+#endif
+
+    if ( a_key_len <= VMPORT_MAX_KEY_LEN )
+    {
+        if ( a_val_len <= VMPORT_MAX_VAL_JUMBO_LEN )
+        {
+            for ( i = 0; i < vs->used_guestinfo_jumbo; i++ )
+            {
+                if ( !vs->guestinfo_jumbo[i] )
+                {
+#ifndef NDEBUG
+                    gdprintk(XENLOG_WARNING,
+                             "i=%d not allocated; used_guestinfo_jumbo=%d\n",
+                             i, vs->used_guestinfo_jumbo);
+#endif
+                }
+                else if ( (vs->guestinfo_jumbo[i]->key_len == a_key_len) &&
+                          (memcmp(a_info_key,
+                                  vs->guestinfo_jumbo[i]->key_data,
+                                  vs->guestinfo_jumbo[i]->key_len) == 0) )
+                {
+
+                    vs->guestinfo_jumbo[i]->val_len = a_val_len;
+                    memcpy(vs->guestinfo_jumbo[i]->val_data, val, a_val_len);
+                    break;
+                }
+                else if ( (vs->guestinfo_jumbo[i]->key_len == 0) &&
+                          (free_i == -1) )
+                    free_i = i;
+            }
+            if ( i >= vs->used_guestinfo_jumbo )
+            {
+                if ( free_i == -1 )
+                    rc = GUESTINFO_TOOMANYKEYS;
+                else
+                {
+                    vs->guestinfo_jumbo[free_i]->key_len = a_key_len;
+                    memcpy(vs->guestinfo_jumbo[free_i]->key_data,
+                           a_info_key, a_key_len);
+                    vs->guestinfo_jumbo[free_i]->val_len = a_val_len;
+                    memcpy(vs->guestinfo_jumbo[free_i]->val_data,
+                           val, a_val_len);
+                }
+            }
+        }
+        else
+            rc = GUESTINFO_VALTOOLONG;
+    }
+    else
+        rc = GUESTINFO_KEYTOOLONG;
+    if ( !rc )
+        hvm_del_guestinfo(vs, a_info_key, a_key_len);
+    return rc;
+}
+
+static void vmport_process_send_payload(struct hvm_domain *hd,
+                                        vmport_channel_t *c,
+                                        struct cpu_user_regs *ur,
+                                        unsigned long now_time)
+{
+    /* Accumulate 4 bytes of payload into send_buf using offset */
+    if ( c->ctl.send_idx < VMPORT_MAX_SEND_BUF )
+        c->send_buf[c->ctl.send_idx] = ur->rbx;
+
+    c->ctl.send_idx++;
+    set_status(ur, MESSAGE_STATUS_SUCCESS);
+
+    if ( c->ctl.send_idx * 4 >= c->ctl.send_len )
+    {
+
+        /* We are done accumulating so handle the command */
+
+        if ( c->ctl.send_idx < VMPORT_MAX_SEND_BUF )
+            ((char *)c->send_buf)[c->ctl.send_len] = 0;
+#ifndef NDEBUG
+        if ( opt_vmport_debug & VMPORT_LOG_RECV )
+        {
+            char prefix[30];
+
+            snprintf(prefix, sizeof(prefix),
+                     "VMware RECV %d (%d) ", c->ctl.chan_id, c->ctl.recv_read);
+            prefix[sizeof(prefix) - 1] = 0;
+            vmport_safe_print(prefix, c->ctl.send_len, (char *)c->send_buf);
+        }
+#endif
+        if ( c->ctl.proto_num == VMWARE_PROTO_FROM_GUEST )
+        {
+            /*
+             * Examples of messages:
+             *
+             *   log toolbox: Version: build-341836
+             *   SetGuestInfo  4 build-341836
+             *   info-get guestinfo.ip
+             *   info-set guestinfo.ip joe
+             *
+             */
+
+            char *build = NULL;
+            char *info_key = NULL;
+            char *ret_msg = "1 ";
+            char ret_buffer[2 + VMPORT_MAX_VAL_JUMBO_LEN + 2];
+
+            if ( strncmp((char *)c->send_buf, "log toolbox: Version: build-",
+                         strlen("log toolbox: Version: build-")) == 0 )
+
+                build = (char *)c->send_buf +
+                    strlen("log toolbox: Version: build-");
+
+            else if ( strncmp((char *)c->send_buf, "SetGuestInfo  4 build-",
+                              strlen("SetGuestInfo  4 build-")) == 0 )
+
+                build = (char *)c->send_buf + strlen("SetGuestInfo  4 build-");
+
+            else if ( strncmp((char *)c->send_buf, "info-get guestinfo.",
+                              strlen("info-get guestinfo.")) == 0 )
+            {
+
+                unsigned int a_key_len = c->ctl.send_len -
+                    strlen("info-get guestinfo.");
+                int rc;
+                struct vmport_state *vs = hd->vmport_data;
+
+                info_key = (char *)c->send_buf + strlen("info-get guestinfo.");
+                if ( a_key_len <= VMPORT_MAX_KEY_LEN )
+                {
+
+                    rc = vmport_get_guestinfo(hd, vs, info_key, a_key_len,
+                                              ret_buffer, sizeof(ret_buffer));
+                    if ( rc == GUESTINFO_NOTFOUND )
+                        ret_msg = "0 No value found";
+                    else
+                        ret_msg = ret_buffer;
+                }
+                else
+                    ret_msg = "0 Key is too long";
+
+            }
+            else if ( strncmp((char *)c->send_buf, "info-set guestinfo.",
+                              strlen("info-set guestinfo.")) == 0 )
+            {
+                char *val;
+                unsigned int rest_len = c->ctl.send_len -
+                    strlen("info-set guestinfo.");
+
+                info_key = (char *)c->send_buf + strlen("info-set guestinfo.");
+                val = strstr(info_key, " ");
+                if ( val )
+                {
+                    unsigned int a_key_len = val - info_key;
+                    unsigned int a_val_len = rest_len - a_key_len - 1;
+                    int rc;
+                    struct vmport_state *vs = hd->vmport_data;
+
+                    val++;
+                    if ( a_val_len > VMPORT_MAX_VAL_LEN )
+                        rc = vmport_set_guestinfo_jumbo(vs, a_key_len,
+                                                        a_val_len,
+                                                        info_key, val);
+                    else
+                        rc = vmport_set_guestinfo(vs, a_key_len, a_val_len,
+                                                  info_key, val);
+                    if ( rc == 0 )
+                        ret_msg = "1 ";
+                    if ( rc == GUESTINFO_VALTOOLONG )
+                        ret_msg = "0 Value too long";
+                    if ( rc == GUESTINFO_KEYTOOLONG )
+                        ret_msg = "0 Key is too long";
+                    if ( rc == GUESTINFO_TOOMANYKEYS )
+                        ret_msg = "0 Too many keys";
+
+
+                }
+                else
+                    ret_msg = "0 Two and exactly two arguments expected";
+            }
+
+            vmport_send(hd, c, ret_msg);
+            if ( build )
+            {
+                long val = 0;
+                char *p = build;
+
+                while ( *p )
+                {
+                    if ( *p < '0' || *p > '9' )
+                        break;
+                    val = val * 10 + *p - '0';
+                    p++;
+                };
+
+                hd->params[HVM_PARAM_VMPORT_BUILD_NUMBER_VALUE] = val;
+                hd->params[HVM_PARAM_VMPORT_BUILD_NUMBER_TIME] = now_time;
+            }
+        }
+        else
+        {
+            unsigned int my_bkt = c->ctl.recv_read - 1;
+            vmport_bucket_t *b;
+
+            if ( my_bkt >= VMPORT_MAX_BKTS )
+                my_bkt = VMPORT_MAX_BKTS - 1;
+            b = &c->recv_bkt[my_bkt];
+            b->ctl.recv_len = 0;
+        }
+    }
+}
+
+static void vmport_process_recv_size(struct hvm_domain *hd, vmport_channel_t *c,
+                                     struct cpu_user_regs *ur)
+{
+    vmport_bucket_t *b;
+    vmport_jumbo_bucket_t *jb;
+    int16_t recv_len;
+
+    if ( c->ctl.jumbo )
+    {
+        jb = &c->jumbo_recv_bkt;
+        recv_len = jb->ctl.recv_len;
+    }
+    else
+    {
+        b = &c->recv_bkt[c->ctl.recv_read];
+        recv_len = b->ctl.recv_len;
+    }
+    if ( recv_len )
+    {
+        set_status(ur, MESSAGE_STATUS_DORECV | MESSAGE_STATUS_SUCCESS);
+        ur->rdx = set_high_bits(ur->rdx, MESSAGE_TYPE_SENDSIZE);
+        ur->rbx = recv_len;
+    }
+    else
+        set_status(ur, MESSAGE_STATUS_SUCCESS);
+}
+
+static void vmport_process_recv_payload(struct hvm_domain *hd,
+                                        vmport_channel_t *c,
+                                        struct cpu_user_regs *ur)
+{
+    vmport_bucket_t *b;
+    vmport_jumbo_bucket_t *jb;
+
+    if ( c->ctl.jumbo )
+    {
+        jb = &c->jumbo_recv_bkt;
+        ur->rbx = jb->recv_buf[jb->ctl.recv_idx++];
+    }
+    else
+    {
+        b = &c->recv_bkt[c->ctl.recv_read];
+        if ( b->ctl.recv_idx < VMPORT_MAX_RECV_BUF )
+            ur->rbx = b->recv_buf[b->ctl.recv_idx++];
+        else
+            ur->rbx = 0;
+    }
+
+    set_status(ur, MESSAGE_STATUS_SUCCESS);
+    ur->rdx = set_high_bits(ur->rdx, MESSAGE_TYPE_SENDPAYLOAD);
+}
+
+static void vmport_process_recv_status(struct hvm_domain *hd,
+                                       vmport_channel_t *c,
+                                       struct cpu_user_regs *ur)
+{
+    vmport_bucket_t *b;
+    vmport_jumbo_bucket_t *jb;
+
+    set_status(ur, MESSAGE_STATUS_SUCCESS);
+
+    if ( c->ctl.jumbo )
+    {
+        c->ctl.jumbo = 0;
+        /* add debug here */
+        jb = &c->jumbo_recv_bkt;
+        return;
+    }
+
+    b = &c->recv_bkt[c->ctl.recv_read];
+
+    c->ctl.recv_read++;
+    if ( c->ctl.recv_read >= VMPORT_MAX_BKTS )
+        c->ctl.recv_read = 0;
+}
+
+static void vmport_process_close(struct hvm_domain *hd, vmport_channel_t *c,
+                                 struct cpu_user_regs *ur)
+{
+    /* Return channel to free pool */
+    c->ctl.proto_num = 0;
+    set_status(ur, MESSAGE_STATUS_SUCCESS);
+}
+
+static void vmport_process_packet(struct hvm_domain *hd, vmport_channel_t *c,
+                                  struct cpu_user_regs *ur, unsigned int sub_cmd,
+                                  unsigned long now_time)
+{
+    c->ctl.active_time = now_time;
+
+    switch ( sub_cmd )
+    {
+    case MESSAGE_TYPE_SENDSIZE:
+        vmport_process_send_size(hd, c, ur);
+        break;
+
+    case MESSAGE_TYPE_SENDPAYLOAD:
+        vmport_process_send_payload(hd, c, ur, now_time);
+        break;
+
+    case MESSAGE_TYPE_RECVSIZE:
+        vmport_process_recv_size(hd, c, ur);
+        break;
+
+    case MESSAGE_TYPE_RECVPAYLOAD:
+        vmport_process_recv_payload(hd, c, ur);
+        break;
+
+    case MESSAGE_TYPE_RECVSTATUS:
+        vmport_process_recv_status(hd, c, ur);
+        break;
+
+    case MESSAGE_TYPE_CLOSE:
+        vmport_process_close(hd, c, ur);
+        break;
+
+    default:
+        ur->rcx = 0;
+        break;
+    }
+}
+
+void vmport_rpc(struct hvm_domain *hd, struct cpu_user_regs *ur)
+{
+    unsigned int sub_cmd = get_high_bits(ur->rcx);
+    vmport_channel_t *c = NULL;
+    uint16_t msg_id;
+    uint32_t msg_cookie;
+    unsigned long now_time = get_sec();
+    long delta = now_time - hd->vmport_data->ping_time;
+
+    if ( delta > hd->params[HVM_PARAM_VMPORT_RESET_TIME] )
+    {
+        VMPORT_DBG_LOG(VMPORT_LOG_PING, "VMware ping. delta=%ld",
+                       delta);
+        vmport_ctrl_send(hd, "reset");
+    }
+    spin_lock(&hd->vmport_lock);
+    vmport_sweep(hd, now_time);
+    do {
+        /* Check to see if a new open request is happening... */
+        if ( MESSAGE_TYPE_OPEN == sub_cmd )
+        {
+            c = vmport_new_chan(hd->vmport_data, now_time);
+            if ( NULL == c )
+            {
+                VMPORT_DBG_LOG(VMPORT_LOG_ERROR,
+                               "VMware failed to find a free channel");
+                break;
+            }
+
+            /* Attach the appropriate protocol to the channel */
+            c->ctl.proto_num = ur->rbx & ~GUESTMSG_FLAG_COOKIE;
+            set_status(ur, MESSAGE_STATUS_SUCCESS);
+            ur->rdx = set_high_bits(ur->rdx, c->ctl.chan_id);
+            ur->rdi = get_low_bits(c->ctl.cookie);
+            ur->rsi = get_high_bits(c->ctl.cookie);
+            if ( c->ctl.proto_num == VMWARE_PROTO_TO_GUEST )
+                vmport_send(hd, c, "reset");
+            break;
+        }
+
+        msg_id = get_high_bits(ur->rdx);
+        msg_cookie = set_high_bits(ur->rdi, ur->rsi);
+        if ( msg_id >= VMPORT_MAX_CHANS )
+        {
+            VMPORT_DBG_LOG(VMPORT_LOG_ERROR, "VMware chan id err %d >= %d",
+                           msg_id, VMPORT_MAX_CHANS);
+            break;
+        }
+        c = &hd->vmport_data->chans[msg_id];
+        if ( !c->ctl.proto_num )
+        {
+            VMPORT_DBG_LOG(VMPORT_LOG_ERROR, "VMware chan %d not open",
+                           msg_id);
+            break;
+        }
+
+        /*
+         * We check the cookie here since it's possible that the
+         * connection timed out on us and another channel was opened.
+         * If this happens, return an error and the user-mode tool
+         * will need to reopen the connection.
+         */
+        if ( msg_cookie != c->ctl.cookie )
+        {
+            VMPORT_DBG_LOG(VMPORT_LOG_ERROR, "VMware ctl.cookie err %x vs %x",
+                           msg_cookie, c->ctl.cookie);
+            break;
+        }
+        vmport_process_packet(hd, c, ur, sub_cmd, now_time);
+    } while ( 0 );
+
+    if ( NULL == c )
+        set_status(ur, 0);
+
+    spin_unlock(&hd->vmport_lock);
+}
+
+static int hvm_set_guestinfo(struct vmport_state *vs,
+                             struct xen_hvm_vmport_guest_info *a,
+                             char *key, char *value)
+{
+    int idx;
+    int free_idx = -1;
+    int rc = 0;
+
+    for ( idx = 0; idx < vs->used_guestinfo; idx++ )
+    {
+        if ( !vs->guestinfo[idx] )
+        {
+#ifndef NDEBUG
+            gdprintk(XENLOG_WARNING,
+                     "idx=%d not allocated, but used_guestinfo=%d\n",
+                     idx, vs->used_guestinfo);
+#endif
+        }
+        else if ( (vs->guestinfo[idx]->key_len == a->key_length) &&
+                  (memcmp(key,
+                          vs->guestinfo[idx]->key_data,
+                          vs->guestinfo[idx]->key_len) == 0) )
+        {
+            vs->guestinfo[idx]->val_len = a->value_length;
+            memcpy(vs->guestinfo[idx]->val_data, value, a->value_length);
+            break;
+
+        }
+        else if ( (vs->guestinfo[idx]->key_len == 0) &&
+                  (free_idx == -1) )
+            free_idx = idx;
+    }
+
+    if ( idx >= vs->used_guestinfo )
+    {
+        if ( free_idx == -1 )
+            rc = -EBUSY;
+        else
+        {
+            vs->guestinfo[free_idx]->key_len = a->key_length;
+            memcpy(vs->guestinfo[free_idx]->key_data, key, a->key_length);
+            vs->guestinfo[free_idx]->val_len = a->value_length;
+            memcpy(vs->guestinfo[free_idx]->val_data, value, a->value_length);
+        }
+    }
+
+    /* Delete any duplicate entry */
+    if ( rc == 0 )
+        hvm_del_guestinfo_jumbo(vs, key, a->key_length);
+
+    return rc;
+}
+
+static int hvm_set_guestinfo_jumbo(struct vmport_state *vs,
+                                   struct xen_hvm_vmport_guest_info *a,
+                                   char *key, char *value)
+{
+    int idx;
+    int free_idx = -1;
+    int rc = 0;
+
+    for ( idx = 0; idx < vs->used_guestinfo_jumbo; idx++ )
+    {
+
+        if ( !vs->guestinfo_jumbo[idx] )
+        {
+#ifndef NDEBUG
+            gdprintk(XENLOG_WARNING,
+                     "idx=%d not allocated, but used_guestinfo_jumbo=%d\n",
+                     idx, vs->used_guestinfo_jumbo);
+#endif
+        }
+        else if ( (vs->guestinfo_jumbo[idx]->key_len == a->key_length) &&
+                  (memcmp(key, vs->guestinfo_jumbo[idx]->key_data,
+                          vs->guestinfo_jumbo[idx]->key_len) == 0) )
+        {
+            vs->guestinfo_jumbo[idx]->val_len = a->value_length;
+            memcpy(vs->guestinfo_jumbo[idx]->val_data, value, a->value_length);
+            break;
+
+        }
+        else if ( (vs->guestinfo_jumbo[idx]->key_len == 0) &&
+                  (free_idx == -1) )
+            free_idx = idx;
+    }
+
+    if ( idx >= vs->used_guestinfo_jumbo )
+    {
+        if ( free_idx == -1 )
+            rc = -EBUSY;
+        else
+        {
+            vs->guestinfo_jumbo[free_idx]->key_len = a->key_length;
+            memcpy(vs->guestinfo_jumbo[free_idx]->key_data,
+                   key, a->key_length);
+            vs->guestinfo_jumbo[free_idx]->val_len = a->value_length;
+            memcpy(vs->guestinfo_jumbo[free_idx]->val_data,
+                   value, a->value_length);
+        }
+    }
+
+    /* Delete any duplicate entry */
+    if ( rc == 0 )
+        hvm_del_guestinfo(vs, key, a->key_length);
+
+    return rc;
+}
+
+static int hvm_get_guestinfo(struct vmport_state *vs,
+                             struct xen_hvm_vmport_guest_info *a,
+                             char *key, char *value)
+{
+    int idx;
+    int rc = 0;
+
+    if ( a->key_length == 0 )
+    {
+        /*
+         * Here we are iterating on getting all guestinfo entries
+         * using index
+         */
+        idx = a->value_length;
+        if ( idx >= vs->used_guestinfo ||
+             !vs->guestinfo[idx] )
+            rc = -ENOENT;
+        else
+        {
+            a->key_length = vs->guestinfo[idx]->key_len;
+            memcpy(a->data, vs->guestinfo[idx]->key_data, a->key_length);
+            a->value_length = vs->guestinfo[idx]->val_len;
+            memcpy(&a->data[a->key_length], vs->guestinfo[idx]->val_data,
+                   a->value_length);
+            rc = 0;
+        }
+    }
+    else
+    {
+        for ( idx = 0; idx < vs->used_guestinfo; idx++ )
+        {
+            if ( vs->guestinfo[idx] &&
+                 (vs->guestinfo[idx]->key_len == a->key_length) &&
+                 (memcmp(key, vs->guestinfo[idx]->key_data,
+                         vs->guestinfo[idx]->key_len) == 0) )
+            {
+                a->value_length = vs->guestinfo[idx]->val_len;
+                memcpy(value, vs->guestinfo[idx]->val_data,
+                       a->value_length);
+                rc = 0;
+                break;
+            }
+        }
+        if ( idx >= vs->used_guestinfo )
+            rc = -ENOENT;
+    }
+    return rc;
+}
+
+static int hvm_get_guestinfo_jumbo(struct vmport_state *vs,
+                                   struct xen_hvm_vmport_guest_info *a,
+                                   char *key, char *value)
+{
+    int idx, total_entries;
+    int rc = 0;
+
+    if ( a->key_length == 0 )
+    {
+        /*
+         * Here we are iterating on getting all guestinfo entries
+         * using index
+         */
+        total_entries = vs->used_guestinfo + vs->used_guestinfo_jumbo;
+
+        /* Input index is in a->value_length */
+        if ( a->value_length >= total_entries )
+        {
+            rc = -ENOENT;
+            return rc;
+        }
+        idx = a->value_length - vs->used_guestinfo;
+        if ( idx >= vs->used_guestinfo_jumbo ||
+             !vs->guestinfo_jumbo[idx] )
+            rc = -ENOENT;
+        else
+        {
+            a->key_length = vs->guestinfo_jumbo[idx]->key_len;
+            memcpy(a->data, vs->guestinfo_jumbo[idx]->key_data, a->key_length);
+            a->value_length = vs->guestinfo_jumbo[idx]->val_len;
+            memcpy(&a->data[a->key_length],
+                   vs->guestinfo_jumbo[idx]->val_data, a->value_length);
+            rc = 0;
+        }
+    }
+    else
+    {
+        for ( idx = 0; idx < vs->used_guestinfo_jumbo; idx++ )
+        {
+            if ( vs->guestinfo_jumbo[idx] &&
+                 (vs->guestinfo_jumbo[idx]->key_len == a->key_length) &&
+                 (memcmp(key, vs->guestinfo_jumbo[idx]->key_data,
+                         vs->guestinfo_jumbo[idx]->key_len) == 0) )
+            {
+                a->value_length = vs->guestinfo_jumbo[idx]->val_len;
+                memcpy(value, vs->guestinfo_jumbo[idx]->val_data,
+                       a->value_length);
+                rc = 0;
+                break;
+            }
+        }
+        if ( idx >= vs->used_guestinfo_jumbo )
+            rc = -ENOENT;
+    }
+    return rc;
+}
+
+int vmport_rpc_hvmop_precheck(unsigned long op,
+                              struct xen_hvm_vmport_guest_info *a)
+{
+    int new_key_length = a->key_length;
+
+    if ( new_key_length > strlen("guestinfo.") )
+    {
+        if ( (size_t)new_key_length + (size_t)a->value_length >
+             sizeof(a->data) )
+            return -EINVAL;
+        if ( memcmp(a->data, "guestinfo.", strlen("guestinfo.")) == 0 )
+            new_key_length -= strlen("guestinfo.");
+        if ( new_key_length > VMPORT_MAX_KEY_LEN )
+        {
+            gdprintk(XENLOG_ERR, "bad key len %d\n", new_key_length);
+            return -EINVAL;
+        }
+        if ( a->value_length > VMPORT_MAX_VAL_JUMBO_LEN )
+        {
+            gdprintk(XENLOG_ERR, "bad val len %d\n", a->value_length);
+            return -EINVAL;
+        }
+    }
+    else if ( new_key_length > 0 )
+    {
+        if ( (size_t)new_key_length + (size_t)a->value_length >
+             sizeof(a->data) )
+            return -EINVAL;
+        if ( new_key_length > VMPORT_MAX_KEY_LEN )
+        {
+            gdprintk(XENLOG_ERR, "bad key len %d", new_key_length);
+            return -EINVAL;
+        }
+        if ( a->value_length > VMPORT_MAX_VAL_JUMBO_LEN )
+        {
+            gdprintk(XENLOG_ERR, "bad val len %d\n", a->value_length);
+            return -EINVAL;
+        }
+    }
+    else if ( (new_key_length == 0) && (op == HVMOP_set_vmport_guest_info) )
+        return -EINVAL;
+
+    return 0;
+}
+
+int vmport_rpc_hvmop_do(struct domain *d, unsigned long op,
+                        struct xen_hvm_vmport_guest_info *a)
+{
+    char *key = NULL;
+    char *value = NULL;
+    struct vmport_state *vs = d->arch.hvm_domain.vmport_data;
+    int i, total_entries;
+    vmport_guestinfo_t *add_slots[5];
+    vmport_guestinfo_jumbo_t *add_slots_jumbo[2];
+    int num_slots = 0, num_slots_jumbo = 0, num_free_slots = 0;
+    int rc = 0;
+
+    if ( a->key_length > strlen("guestinfo.") )
+    {
+        if ( memcmp(a->data, "guestinfo.", strlen("guestinfo.")) == 0 )
+        {
+            key = &a->data[strlen("guestinfo.")];
+            a->key_length -= strlen("guestinfo.");
+        }
+        else
+            key = &a->data[0];
+        value = key + a->key_length;
+    }
+    else if ( a->key_length > 0 )
+    {
+        key = &a->data[0];
+        value = key + a->key_length;
+    }
+
+    total_entries = vs->used_guestinfo + vs->used_guestinfo_jumbo;
+
+    if ( (a->key_length == 0) && (a->value_length >= total_entries) )
+    {
+        /*
+         * When key length is zero, we are iterating on
+         * get-guest-info hypercalls to retrieve all guestinfo
+         * entries using the index passed in a->value_length.
+         */
+        return -E2BIG;
+    }
+
+    num_free_slots = 0;
+    for ( i = 0; i < vs->used_guestinfo; i++ )
+    {
+        if ( vs->guestinfo[i] &&
+             (vs->guestinfo[i]->key_len == 0) )
+            num_free_slots++;
+    }
+    if ( num_free_slots < 5 )
+    {
+        num_slots = 5 - num_free_slots;
+        if ( vs->used_guestinfo + num_slots > VMPORT_MAX_NUM_KEY )
+            num_slots = VMPORT_MAX_NUM_KEY - vs->used_guestinfo;
+        for ( i = 0; i < num_slots; i++ )
+            add_slots[i] = xzalloc(vmport_guestinfo_t);
+    }
+
+    num_free_slots = 0;
+    for ( i = 0; i < vs->used_guestinfo_jumbo; i++ )
+    {
+        if ( vs->guestinfo_jumbo[i] &&
+             (vs->guestinfo_jumbo[i]->key_len == 0) )
+            num_free_slots++;
+    }
+    if ( num_free_slots < 1 )
+    {
+        num_slots_jumbo = 1 - num_free_slots;
+        if ( vs->used_guestinfo_jumbo + num_slots_jumbo >
+             VMPORT_MAX_NUM_JUMBO_KEY )
+            num_slots_jumbo = VMPORT_MAX_NUM_JUMBO_KEY -
+                vs->used_guestinfo_jumbo;
+        for ( i = 0; i < num_slots_jumbo; i++ )
+            add_slots_jumbo[i] = xzalloc(vmport_guestinfo_jumbo_t);
+    }
+
+    spin_lock(&d->arch.hvm_domain.vmport_lock);
+
+    for ( i = 0; i < num_slots; i++ )
+        vs->guestinfo[vs->used_guestinfo + i] = add_slots[i];
+    vs->used_guestinfo += num_slots;
+
+    for ( i = 0; i < num_slots_jumbo; i++ )
+        vs->guestinfo_jumbo[vs->used_guestinfo_jumbo + i] =
+            add_slots_jumbo[i];
+    vs->used_guestinfo_jumbo += num_slots_jumbo;
+
+    if ( op == HVMOP_set_vmport_guest_info )
+    {
+        if ( a->value_length > VMPORT_MAX_VAL_LEN )
+            rc = hvm_set_guestinfo_jumbo(vs, a, key, value);
+        else
+            rc = hvm_set_guestinfo(vs, a, key, value);
+    }
+    else
+    {
+        /* Get Guest Info */
+        rc = hvm_get_guestinfo(vs, a, key, value);
+        if ( rc != 0 )
+            rc = hvm_get_guestinfo_jumbo(vs, a, key, value);
+    }
+    spin_unlock(&d->arch.hvm_domain.vmport_lock);
+
+    return rc;
+}
+
+int vmport_rpc_init(struct domain *d)
+{
+    struct vmport_state *vs = xzalloc(struct vmport_state);
+    int i;
+
+    spin_lock_init(&d->arch.hvm_domain.vmport_lock);
+    d->arch.hvm_domain.vmport_data = vs;
+
+    if ( !vs )
+        return -ENOMEM;
+
+    /*
+     * Any value is fine here.  In fact a random number may be better.
+     * It is used to help validate that both sides are talking
+     * about the same channel.
+     */
+    vs->open_cookie = 435;
+
+    vs->used_guestinfo = 10;
+    for ( i = 0; i < vs->used_guestinfo; i++ )
+        vs->guestinfo[i] = xzalloc(vmport_guestinfo_t);
+
+    vs->used_guestinfo_jumbo = 2;
+    for ( i = 0; i < vs->used_guestinfo_jumbo; i++ )
+        vs->guestinfo_jumbo[i] = xzalloc(vmport_guestinfo_jumbo_t);
+
+    vmport_flush(&d->arch.hvm_domain);
+
+    d->arch.hvm_domain.params[HVM_PARAM_VMPORT_RESET_TIME] = 14;
+
+    return 0;
+}
+
+void vmport_rpc_deinit(struct domain *d)
+{
+    struct vmport_state *vs = d->arch.hvm_domain.vmport_data;
+    int i;
+
+    if ( !vs )
+        return;
+
+    for ( i = 0; i < vs->used_guestinfo; i++ )
+        xfree(vs->guestinfo[i]);
+    for ( i = 0; i < vs->used_guestinfo_jumbo; i++ )
+        xfree(vs->guestinfo_jumbo[i]);
+    xfree(vs);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 291a2e0..a571078 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -107,6 +107,10 @@ struct hvm_domain {
     /* emulated irq to pirq */
     struct radix_tree_root emuirq_pirq;
 
+    /* VMware special port RPC */
+    spinlock_t             vmport_lock;
+    struct vmport_state   *vmport_data;
+
     uint64_t              *params;
 
     /* Memory ranges with pinned cache attributes. */
diff --git a/xen/include/asm-x86/hvm/vmport.h b/xen/include/asm-x86/hvm/vmport.h
index 4dec094..e2864ab 100644
--- a/xen/include/asm-x86/hvm/vmport.h
+++ b/xen/include/asm-x86/hvm/vmport.h
@@ -24,6 +24,15 @@
 #define VMPORT_LOG_VGP_UNKNOWN     (1 << 2)
 #define VMPORT_LOG_REALMODE_GP     (1 << 3)
 
+#define VMPORT_LOG_RECV            (1 << 4)
+#define VMPORT_LOG_SEND            (1 << 5)
+#define VMPORT_LOG_SKIP_SEND       (1 << 6)
+#define VMPORT_LOG_ERROR           (1 << 7)
+
+#define VMPORT_LOG_SWEEP           (1 << 8)
+#define VMPORT_LOG_PING            (1 << 9)
+
+
 extern unsigned int opt_vmport_debug;
 #define VMPORT_DBG_LOG(level, _f, _a...)                                \
     do {                                                                \
@@ -41,6 +50,13 @@ int vmport_ioport(int dir, uint32_t port, uint32_t bytes, uint32_t *val);
 int vmport_gp_check(struct cpu_user_regs *regs, struct vcpu *v,
                     unsigned long inst_len, unsigned long inst_addr,
                     unsigned long ei);
+int vmport_rpc_init(struct domain *d);
+void vmport_rpc_deinit(struct domain *d);
+void vmport_rpc(struct hvm_domain *hd, struct cpu_user_regs *ur);
+int vmport_rpc_hvmop_precheck(unsigned long op,
+                              struct xen_hvm_vmport_guest_info *a);
+int vmport_rpc_hvmop_do(struct domain *d, unsigned long op,
+                        struct xen_hvm_vmport_guest_info *a);
 
 #endif /* ASM_X86_HVM_VMPORT_H__ */
 
diff --git a/xen/include/public/arch-x86/hvm/vmporttype.h b/xen/include/public/arch-x86/hvm/vmporttype.h
new file mode 100644
index 0000000..98875d2
--- /dev/null
+++ b/xen/include/public/arch-x86/hvm/vmporttype.h
@@ -0,0 +1,118 @@
+/*
+ * vmporttype.h: HVM VMPORT structure definitions
+ *
+ *
+ * Copyright (C) 2012 Verizon Corporation
+ *
+ * This file is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License Version 2 (GPLv2)
+ * as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details. <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef ASM_X86_HVM_VMPORTTYPE_H__
+#define ASM_X86_HVM_VMPORTTYPE_H__
+
+#define VMPORT_MAX_KEY_LEN 30
+#define VMPORT_MAX_VAL_LEN 128
+#define VMPORT_MAX_NUM_KEY  64
+#define VMPORT_MAX_NUM_JUMBO_KEY 4
+#define VMPORT_MAX_VAL_JUMBO_LEN 4096
+
+#define VMPORT_MAX_SEND_BUF ((22 + VMPORT_MAX_KEY_LEN +         \
+                              VMPORT_MAX_VAL_JUMBO_LEN + 3)/4)
+#define VMPORT_MAX_RECV_BUF ((2 + VMPORT_MAX_VAL_LEN + 3)/4)
+#define VMPORT_MAX_RECV_JUMBO_BUF ((2 + VMPORT_MAX_VAL_JUMBO_LEN + 3)/4)
+#define VMPORT_MAX_CHANS    6
+#define VMPORT_MAX_BKTS     8
+
+#define VMPORT_SAVE_VERSION 0xabcd0001
+
+typedef struct
+{
+    uint8_t key_len;
+    uint8_t val_len;
+    char key_data[VMPORT_MAX_KEY_LEN];
+    char val_data[VMPORT_MAX_VAL_LEN];
+} vmport_guestinfo_t;
+
+typedef struct
+{
+    uint16_t val_len;
+    uint8_t  key_len;
+    char     key_data[VMPORT_MAX_KEY_LEN];
+    char     val_data[VMPORT_MAX_VAL_JUMBO_LEN];
+} vmport_guestinfo_jumbo_t;
+
+typedef struct __attribute__((packed))
+{
+    uint16_t recv_len;
+    uint16_t recv_idx;
+}
+vmport_bucket_control_t;
+
+typedef struct __attribute__((packed))
+{
+    vmport_bucket_control_t ctl;
+    uint32_t recv_buf[VMPORT_MAX_RECV_BUF + 1];
+}
+vmport_bucket_t;
+
+typedef struct __attribute__((packed))
+{
+    vmport_bucket_control_t ctl;
+    uint32_t recv_buf[VMPORT_MAX_RECV_JUMBO_BUF + 1];
+}
+vmport_jumbo_bucket_t;
+
+typedef struct __attribute__((packed))
+{
+    uint64_t active_time;
+    uint32_t chan_id;
+    uint32_t cookie;
+    uint32_t proto_num;
+    uint16_t send_len;
+    uint16_t send_idx;
+    uint8_t jumbo;
+    uint8_t recv_read;
+    uint8_t recv_write;
+    uint8_t recv_chan_pad[1];
+}
+vmport_channel_control_t;
+
+typedef struct __attribute__((packed))
+{
+    vmport_channel_control_t ctl;
+    vmport_bucket_t recv_bkt[VMPORT_MAX_BKTS];
+    vmport_jumbo_bucket_t jumbo_recv_bkt;
+    uint32_t send_buf[VMPORT_MAX_SEND_BUF + 1];
+}
+vmport_channel_t;
+
+struct vmport_state
+{
+    uint64_t ping_time;
+    uint32_t open_cookie;
+    uint32_t used_guestinfo;
+    uint32_t used_guestinfo_jumbo;
+    uint8_t  max_chans;
+    uint8_t  state_pad[3];
+    vmport_channel_t chans[VMPORT_MAX_CHANS];
+    vmport_guestinfo_t *guestinfo[VMPORT_MAX_NUM_KEY];
+    vmport_guestinfo_jumbo_t *guestinfo_jumbo[VMPORT_MAX_NUM_JUMBO_KEY];
+};
+
+#endif /* ASM_X86_HVM_VMPORTTYPE_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index eeb0a60..8e1e072 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -369,6 +369,24 @@ DEFINE_XEN_GUEST_HANDLE(xen_hvm_set_ioreq_server_state_t);
 
 #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
 
+/* Get/set vmport subcommands */
+#define HVMOP_get_vmport_guest_info 23
+#define HVMOP_set_vmport_guest_info 24
+#define VMPORT_GUEST_INFO_KEY_MAX   40
+#define VMPORT_GUEST_INFO_VAL_MAX   4096
+struct xen_hvm_vmport_guest_info {
+    /* Domain to be accessed */
+    domid_t   domid;
+    /* key length */
+    uint16_t   key_length;
+    /* value length */
+    uint16_t   value_length;
+    /* key and value data */
+    char      data[VMPORT_GUEST_INFO_KEY_MAX + VMPORT_GUEST_INFO_VAL_MAX];
+};
+typedef struct xen_hvm_vmport_guest_info xen_hvm_vmport_guest_info_t;
+DEFINE_XEN_GUEST_HANDLE(xen_hvm_vmport_guest_info_t);
+
 #endif /* __XEN_PUBLIC_HVM_HVM_OP_H__ */
 
 /*
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index 77ff539..a9afc51 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -154,7 +154,10 @@
 /* Params for VMware */
 #define HVM_PARAM_VMWARE_HW                 35
 #define HVM_PARAM_VMWARE_PORT               36
+#define HVM_PARAM_VMPORT_BUILD_NUMBER_TIME  37
+#define HVM_PARAM_VMPORT_BUILD_NUMBER_VALUE 38
+#define HVM_PARAM_VMPORT_RESET_TIME         39
 
-#define HVM_NR_PARAMS          37
+#define HVM_NR_PARAMS          40
 
 #endif /* __XEN_PUBLIC_HVM_PARAMS_H__ */
-- 
1.8.4

