
Re: [PATCH v2 4/4] xen/arm: ffa: Add cached GET_REGS support


  • To: Jens Wiklander <jens.wiklander@xxxxxxxxxx>
  • From: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>
  • Date: Wed, 4 Mar 2026 11:43:20 +0000
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>
  • Delivery-date: Wed, 04 Mar 2026 11:44:58 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>


> On 4 Mar 2026, at 11:44, Jens Wiklander <jens.wiklander@xxxxxxxxxx> wrote:
> 
> Hi Bertrand,
> 
> On Mon, Mar 2, 2026 at 4:44 PM Bertrand Marquis
> <bertrand.marquis@xxxxxxx> wrote:
>> 
>> FF-A v1.2 defines PARTITION_INFO_GET_REGS for register-based partition
>> info retrieval, but Xen currently only supports the buffer-based GET
>> path for guests.
>> 
>> Implement GET_REGS using the cached SP list and VM entries, including
>> the register window layout and input validation. Track VM list changes
>> via the partinfo tag and use it to validate GET_REGS tag inputs. Ensure
>> that when a non-Nil UUID is specified, the UUID fields in both GET and
>> GET_REGS results are MBZ as required by the specification.
>> 
>> PARTITION_INFO_GET_REGS is available to v1.2 guests, returning cached SP
>> entries and VM entries with UUIDs zeroed for non-Nil UUID queries.
>> 
>> Also publish VM membership updates (VM count, ctx list, and partinfo
>> tag) under the same write-locked section so GET_REGS sees coherent state
>> and concurrent changes are reliably reported via RETRY.
>> 
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@xxxxxxx>
>> ---
>> Changes since v1:
>> - ignore x4-x17 not being zero and x3 bits 63-32 not being zero (defined
>>  as SBZ in the spec)
>> - detect tag changes during GET_REGS handling and return RETRY
>> - remove strict check of sp_list_entry_size, larger cache entry sizes
>>  will now be accepted
>> - publish VM count, ctx list, and partinfo tag updates under
>>  ffa_ctx_list_rwlock for coherent visibility
>> ---
>> xen/arch/arm/tee/ffa.c          |  23 +++-
>> xen/arch/arm/tee/ffa_partinfo.c | 200 ++++++++++++++++++++++++++++++++
>> xen/arch/arm/tee/ffa_private.h  |   4 +-
>> 3 files changed, 223 insertions(+), 4 deletions(-)
>> 
>> diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
>> index aa43ae2595d7..d6cae67e1a48 100644
>> --- a/xen/arch/arm/tee/ffa.c
>> +++ b/xen/arch/arm/tee/ffa.c
>> @@ -44,6 +44,11 @@
>>  *   - doesn't support signalling the secondary scheduler of pending
>>  *     notification for secure partitions
>>  *   - doesn't support notifications for Xen itself
>> + * o FFA_PARTITION_INFO_GET/GET_REGS:
>> + *   - v1.0 guests may see duplicate SP IDs when firmware provides UUIDs
>> + *   - SP list is cached at init; SPMC tag changes are not tracked
>> + *     between calls
>> + *   - SP list is capped at FFA_MAX_NUM_SP entries
>>  *
>>  * There are some large locked sections with ffa_spmc_tx_lock and
>>  * ffa_spmc_rx_lock. Especially the ffa_spmc_tx_lock spinlock used
>> @@ -183,10 +188,11 @@ static bool ffa_negotiate_version(struct cpu_user_regs *regs)
>> 
>>         if ( IS_ENABLED(CONFIG_FFA_VM_TO_VM) )
>>         {
>> -            /* One more VM with FF-A support available */
>> -            inc_ffa_vm_count();
>>             write_lock(&ffa_ctx_list_rwlock);
>> +            /* Publish VM membership changes atomically with tag updates. */
>> +            inc_ffa_vm_count();
>>             list_add_tail(&ctx->ctx_list, &ffa_ctx_head);
>> +            ffa_partinfo_inc_tag();
>>             write_unlock(&ffa_ctx_list_rwlock);
>>         }
>> 
>> @@ -341,6 +347,12 @@ static void handle_features(struct cpu_user_regs *regs)
>>     case FFA_FEATURE_SCHEDULE_RECV_INTR:
>>         ffa_set_regs_success(regs, GUEST_FFA_SCHEDULE_RECV_INTR_ID, 0);
>>         break;
>> +    case FFA_PARTITION_INFO_GET_REGS:
>> +        if ( ACCESS_ONCE(ctx->guest_vers) >= FFA_VERSION_1_2 )
>> +            ffa_set_regs_success(regs, 0, 0);
>> +        else
>> +            ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
>> +        break;
>> 
>>     case FFA_NOTIFICATION_BIND:
>>     case FFA_NOTIFICATION_UNBIND:
>> @@ -402,6 +414,9 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
>>     case FFA_PARTITION_INFO_GET:
>>         ffa_handle_partition_info_get(regs);
>>         return true;
>> +    case FFA_PARTITION_INFO_GET_REGS:
>> +        ffa_handle_partition_info_get_regs(regs);
>> +        return true;
>>     case FFA_RX_RELEASE:
>>         e = ffa_rx_release(ctx);
>>         break;
>> @@ -625,9 +640,11 @@ static int ffa_domain_teardown(struct domain *d)
>> 
>>     if ( IS_ENABLED(CONFIG_FFA_VM_TO_VM) && ACCESS_ONCE(ctx->guest_vers) )
>>     {
>> -        dec_ffa_vm_count();
>>         write_lock(&ffa_ctx_list_rwlock);
>> +        /* Publish VM membership changes atomically with tag updates. */
>> +        dec_ffa_vm_count();
>>         list_del(&ctx->ctx_list);
>> +        ffa_partinfo_inc_tag();
>>         write_unlock(&ffa_ctx_list_rwlock);
>>     }
>> 
>> diff --git a/xen/arch/arm/tee/ffa_partinfo.c b/xen/arch/arm/tee/ffa_partinfo.c
>> index 419e19510f6f..16da5ee567db 100644
>> --- a/xen/arch/arm/tee/ffa_partinfo.c
>> +++ b/xen/arch/arm/tee/ffa_partinfo.c
>> @@ -29,10 +29,39 @@ struct ffa_partition_info_1_1 {
>>     uint8_t uuid[16];
>> };
>> 
>> +/* Registers a3..a17 (15 regs) carry partition descriptors, 3 regs each. */
>> +#define FFA_PARTINFO_REG_MAX_ENTRIES \
>> +    ((15 * sizeof(uint64_t)) / sizeof(struct ffa_partition_info_1_1))
>> +
>> /* SP list cache (secure endpoints only); populated at init. */
>> static void *sp_list __read_mostly;
>> static uint32_t sp_list_count __read_mostly;
>> static uint32_t sp_list_entry_size __read_mostly;
>> +
>> +/* SP list is static; tag only moves when VMs are added/removed. */
>> +static atomic_t ffa_partinfo_tag = ATOMIC_INIT(1);
>> +
>> +void ffa_partinfo_inc_tag(void)
>> +{
>> +    atomic_inc(&ffa_partinfo_tag);
>> +}
>> +
>> +static inline uint16_t ffa_partinfo_get_tag(void)
>> +{
>> +    /*
>> +     * Tag moves with VM list changes only.
>> +     *
>> +     * Limitation: we cannot detect an SPMC tag change between calls because we
>> +     * do not retain the previous SPMC tag; we only refresh it via the mandatory
>> +     * start_index=0 call and assume it stays stable while combined_tag (our
>> +     * VM/SP-count tag) is used for guest validation. This means SPMC tag
>> +     * changes alone will not trigger RETRY.
>> +     */
>> +    if ( IS_ENABLED(CONFIG_FFA_VM_TO_VM) )
>> +        return atomic_read(&ffa_partinfo_tag) & GENMASK(15, 0);
>> +    else
>> +        return 1;
>> +}
> 
> Please add an empty line here.

ack

> 
>> static int32_t ffa_partition_info_get(struct ffa_uuid uuid, uint32_t flags,
>>                                       uint32_t *count, uint32_t *fpi_size)
>> {
>> @@ -140,6 +169,7 @@ static int32_t ffa_get_sp_partinfo(struct ffa_uuid uuid, uint32_t *sp_count,
>>     for ( n = 0; n < sp_list_count; n++ )
>>     {
>>         void *entry = sp_list + n * sp_list_entry_size;
>> +        void *dst_pos;
>> 
>>         if ( !ffa_sp_entry_matches_uuid(entry, uuid) )
>>             continue;
>> @@ -151,11 +181,20 @@ static int32_t ffa_get_sp_partinfo(struct ffa_uuid uuid, uint32_t *sp_count,
>>          * This is a non-compliance to the specification but 1.0 VMs should
>>          * handle that on their own to simplify Xen implementation.
>>          */
>> +        dst_pos = *dst_buf;
>>         ret = ffa_copy_info(dst_buf, end_buf, entry, dst_size,
>>                             sp_list_entry_size);
>>         if ( ret )
>>             return ret;
>> 
>> +        if ( !ffa_uuid_is_nil(uuid) &&
>> +             dst_size >= sizeof(struct ffa_partition_info_1_1) )
>> +        {
>> +            struct ffa_partition_info_1_1 *fpi = dst_pos;
>> +
>> +            memset(fpi->uuid, 0, sizeof(fpi->uuid));
>> +        }
>> +
>>         count++;
>>     }
>> 
>> @@ -167,6 +206,38 @@ static int32_t ffa_get_sp_partinfo(struct ffa_uuid uuid, uint32_t *sp_count,
>>     return FFA_RET_OK;
>> }
>> 
>> +static uint16_t ffa_get_sp_partinfo_regs(struct ffa_uuid uuid,
>> +                                         uint16_t start_index,
>> +                                         uint64_t *out_regs,
>> +                                         uint16_t max_entries)
>> +{
>> +    uint32_t idx = 0;
>> +    uint16_t filled = 0;
>> +    uint32_t n;
>> +
>> +    for ( n = 0; n < sp_list_count && filled < max_entries; n++ )
>> +    {
>> +        void *entry = sp_list + n * sp_list_entry_size;
>> +
>> +        if ( !ffa_sp_entry_matches_uuid(entry, uuid) )
>> +            continue;
>> +
>> +        if ( idx++ < start_index )
>> +            continue;
>> +
>> +        memcpy(&out_regs[filled * 3], entry,
>> +               sizeof(struct ffa_partition_info_1_1));
>> +        if ( !ffa_uuid_is_nil(uuid) )
>> +        {
>> +            out_regs[filled * 3 + 1] = 0;
>> +            out_regs[filled * 3 + 2] = 0;
>> +        }
>> +        filled++;
>> +    }
>> +
>> +    return filled;
>> +}
>> +
>> static int32_t ffa_get_vm_partinfo(struct ffa_uuid uuid, uint32_t start_index,
>>                                    uint32_t *vm_count, void **dst_buf,
>>                                    void *end_buf, uint32_t dst_size)
>> @@ -383,6 +454,135 @@ out:
>>     }
>> }
>> 
>> +void ffa_handle_partition_info_get_regs(struct cpu_user_regs *regs)
>> +{
>> +    struct domain *d = current->domain;
>> +    struct ffa_ctx *ctx = d->arch.tee;
>> +    struct ffa_uuid uuid;
>> +    uint32_t sp_count = 0, vm_count = 0, total_count;
>> +    uint16_t start_index, tag;
>> +    uint16_t num_entries = 0;
>> +    uint64_t x3 = get_user_reg(regs, 3);
>> +    int32_t ret = FFA_RET_OK;
>> +    uint64_t out_regs[18] = { 0 };
>> +    unsigned int n;
>> +    uint16_t tag_out, tag_end;
>> +
>> +    if ( ACCESS_ONCE(ctx->guest_vers) < FFA_VERSION_1_2 )
>> +    {
>> +        ret = FFA_RET_NOT_SUPPORTED;
>> +        goto out;
>> +    }
>> +
>> +    /*
>> +     * Registers a3..a17 (15 regs) carry partition descriptors, 3 regs each.
>> +     * For FF-A 1.2, that yields a maximum of 5 entries per GET_REGS call.
>> +     * Enforce the assumed layout so window sizing stays correct.
>> +     */
>> +    BUILD_BUG_ON(FFA_PARTINFO_REG_MAX_ENTRIES != 5);
>> +
>> +    start_index = x3 & GENMASK(15, 0);
>> +    tag = (x3 >> 16) & GENMASK(15, 0);
>> +
>> +    /* Start index must allow room for up to 5 entries without 16-bit overflow. */
> 
> Nit: The line above is over 80 columns.

Right, how did I miss that.

> 
> With or without the line above fixed. Looks good.
> Reviewed-by: Jens Wiklander <jens.wiklander@xxxxxxxxxx>

Thanks,

I will check if those can be fixed on commit, otherwise I will fix those 2 and
submit a v3 with your R-b on all patches :-)

Cheers
Bertrand

> 
> Cheers,
> Jens
> 
>> +    if ( start_index > (GENMASK(15, 0) - (FFA_PARTINFO_REG_MAX_ENTRIES - 1)) )
>> +    {
>> +        ret = FFA_RET_INVALID_PARAMETERS;
>> +        goto out;
>> +    }
>> +
>> +    uuid.val[0] = get_user_reg(regs, 1);
>> +    uuid.val[1] = get_user_reg(regs, 2);
>> +
>> +    tag_out = ffa_partinfo_get_tag();
>> +
>> +    if ( start_index == 0 )
>> +    {
>> +        if ( tag )
>> +        {
>> +            ret = FFA_RET_INVALID_PARAMETERS;
>> +            goto out;
>> +        }
>> +    }
>> +    else if ( tag != tag_out )
>> +    {
>> +        ret = FFA_RET_RETRY;
>> +        goto out;
>> +    }
>> +
>> +    if ( ffa_uuid_is_nil(uuid) )
>> +    {
>> +        if ( IS_ENABLED(CONFIG_FFA_VM_TO_VM) )
>> +            vm_count = get_ffa_vm_count();
>> +        else
>> +            vm_count = 1; /* Caller VM only */
>> +    }
>> +
>> +    ret = ffa_get_sp_count(uuid, &sp_count);
>> +    if ( ret )
>> +        goto out;
>> +
>> +    total_count = sp_count + vm_count;
>> +
>> +    if ( total_count == 0 || start_index >= total_count )
>> +    {
>> +        ret = FFA_RET_INVALID_PARAMETERS;
>> +        goto out;
>> +    }
>> +
>> +    if ( start_index < sp_count )
>> +        num_entries = ffa_get_sp_partinfo_regs(uuid, start_index, &out_regs[3],
>> +                                               FFA_PARTINFO_REG_MAX_ENTRIES);
>> +
>> +    if ( num_entries < FFA_PARTINFO_REG_MAX_ENTRIES )
>> +    {
>> +        uint32_t vm_start = start_index > sp_count ?
>> +                            start_index - sp_count : 0;
>> +        uint32_t filled = 0;
>> +        void *vm_dst = &out_regs[3 + num_entries * 3];
>> +        void *vm_end = &out_regs[18];
>> +
>> +        ret = ffa_get_vm_partinfo(uuid, vm_start, &filled, &vm_dst, vm_end,
>> +                                  sizeof(struct ffa_partition_info_1_1));
>> +        if ( ret != FFA_RET_OK && ret != FFA_RET_NO_MEMORY )
>> +            goto out;
>> +
>> +        num_entries += filled;
>> +    }
>> +
>> +    if ( num_entries == 0 )
>> +    {
>> +        ret = FFA_RET_INVALID_PARAMETERS;
>> +        goto out;
>> +    }
>> +
>> +    /*
>> +     * Detect list changes while building the response so the caller can retry
>> +     * with a coherent snapshot tag.
>> +     */
>> +    tag_end = ffa_partinfo_get_tag();
>> +    if ( tag_end != tag_out )
>> +    {
>> +        ret = FFA_RET_RETRY;
>> +        goto out;
>> +    }
>> +
>> +    out_regs[0] = FFA_SUCCESS_64;
>> +    out_regs[2] = ((uint64_t)sizeof(struct ffa_partition_info_1_1) << 48) |
>> +                  ((uint64_t)tag_end << 32) |
>> +                  ((uint64_t)(start_index + num_entries - 1) << 16) |
>> +                  ((uint64_t)(total_count - 1) & GENMASK(15, 0));
>> +
>> +    for ( n = 0; n < ARRAY_SIZE(out_regs); n++ )
>> +        set_user_reg(regs, n, out_regs[n]);
>> +
>> +    return;
>> +
>> +out:
>> +    if ( ret )
>> +        ffa_set_regs_error(regs, ret);
>> +}
>> +
>> static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id,
>>                                       uint8_t msg)
>> {
>> diff --git a/xen/arch/arm/tee/ffa_private.h b/xen/arch/arm/tee/ffa_private.h
>> index 1a632983c860..c291f32b56ff 100644
>> --- a/xen/arch/arm/tee/ffa_private.h
>> +++ b/xen/arch/arm/tee/ffa_private.h
>> @@ -289,7 +289,7 @@
>> #define FFA_MSG_SEND2                   0x84000086U
>> #define FFA_CONSOLE_LOG_32              0x8400008AU
>> #define FFA_CONSOLE_LOG_64              0xC400008AU
>> -#define FFA_PARTITION_INFO_GET_REGS     0x8400008BU
>> +#define FFA_PARTITION_INFO_GET_REGS     0xC400008BU
>> #define FFA_MSG_SEND_DIRECT_REQ2        0xC400008DU
>> #define FFA_MSG_SEND_DIRECT_RESP2       0xC400008EU
>> 
>> @@ -452,6 +452,8 @@ bool ffa_partinfo_init(void);
>> int32_t ffa_partinfo_domain_init(struct domain *d);
>> bool ffa_partinfo_domain_destroy(struct domain *d);
>> void ffa_handle_partition_info_get(struct cpu_user_regs *regs);
>> +void ffa_handle_partition_info_get_regs(struct cpu_user_regs *regs);
>> +void ffa_partinfo_inc_tag(void);
>> 
>> int32_t ffa_endpoint_domain_lookup(uint16_t endpoint_id, struct domain **d_out,
>>                                    struct ffa_ctx **ctx_out);
>> --
>> 2.52.0



 

