
Re: [PATCH 2/4] xen/arm: ffa: Cache SP partition info at init


  • To: Jens Wiklander <jens.wiklander@xxxxxxxxxx>
  • From: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>
  • Date: Mon, 2 Mar 2026 09:57:46 +0000
  • Accept-language: en-GB, en-US
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>
  • Delivery-date: Mon, 02 Mar 2026 10:01:41 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH 2/4] xen/arm: ffa: Cache SP partition info at init

Hi Jens,

> On 2 Mar 2026, at 10:32, Jens Wiklander <jens.wiklander@xxxxxxxxxx> wrote:
> 
> Hi Bertrand,
> 
> On Mon, Mar 2, 2026 at 9:51 AM Bertrand Marquis
> <Bertrand.Marquis@xxxxxxx> wrote:
>> 
>> Hi Jens,
>> 
>>> On 27 Feb 2026, at 11:39, Jens Wiklander <jens.wiklander@xxxxxxxxxx> wrote:
>>> 
>>> Hi Bertrand,
>>> 
>>> On Wed, Feb 25, 2026 at 11:02 AM Bertrand Marquis
>>> <bertrand.marquis@xxxxxxx> wrote:
>>>> 
>>>> FFA_PARTITION_INFO_GET currently queries the SPMC on each call and
>>>> walks the RX buffer every time. The SP list is expected to be static,
>>>> so repeated firmware calls and validation are unnecessary.
>>>> 
>>>> Cache the SPMC partition-info list at init time, keeping only secure
>>>> endpoints, and reuse the cached entries for SP count and partition-info
>>>> responses. Initialize the VM create/destroy subscriber lists from the
>>>> cached list and free the cache on init failure.
>>>> 
>>>> SP partition info now reflects the init-time snapshot and will not change.
>>>> 
>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@xxxxxxx>
>>>> ---
>>>> xen/arch/arm/tee/ffa_partinfo.c | 205 +++++++++++++++++++++-----------
>>>> 1 file changed, 138 insertions(+), 67 deletions(-)
>>>> 
>>>> diff --git a/xen/arch/arm/tee/ffa_partinfo.c b/xen/arch/arm/tee/ffa_partinfo.c
>>>> index 6a6f3ffb822e..8a3eac25f99f 100644
>>>> --- a/xen/arch/arm/tee/ffa_partinfo.c
>>>> +++ b/xen/arch/arm/tee/ffa_partinfo.c
>>>> @@ -6,6 +6,7 @@
>>>> #include <xen/const.h>
>>>> #include <xen/sizes.h>
>>>> #include <xen/types.h>
>>>> +#include <xen/xmalloc.h>
>>>> 
>>>> #include <asm/smccc.h>
>>>> #include <asm/regs.h>
>>>> @@ -33,6 +34,10 @@ static uint16_t subscr_vm_created_count __read_mostly;
>>>> static uint16_t *subscr_vm_destroyed __read_mostly;
>>>> static uint16_t subscr_vm_destroyed_count __read_mostly;
>>>> 
>>>> +/* SP list cache (secure endpoints only); populated at init. */
>>>> +static void *sp_list __read_mostly;
>>>> +static uint32_t sp_list_count __read_mostly;
>>>> +static uint32_t sp_list_entry_size __read_mostly;
>>>> static int32_t ffa_partition_info_get(struct ffa_uuid uuid, uint32_t flags,
>>>>                                      uint32_t *count, uint32_t *fpi_size)
>>>> {
>>>> @@ -79,92 +84,78 @@ static int32_t ffa_copy_info(void **dst, void *dst_end, const void *src,
>>>>    return FFA_RET_OK;
>>>> }
>>>> 
>>>> -static int32_t ffa_get_sp_count(struct ffa_uuid uuid, uint32_t *sp_count)
>>>> +static bool ffa_sp_entry_matches_uuid(const void *entry, struct ffa_uuid uuid)
>>>> {
>>>> -    uint32_t src_size;
>>>> +    const struct ffa_partition_info_1_1 *fpi = entry;
>>>> +    struct ffa_uuid sp_uuid;
>>>> +
>>>> +    if ( ffa_uuid_is_nil(uuid) )
>>>> +        return true;
>>>> 
>>>> -    return ffa_partition_info_get(uuid, FFA_PARTITION_INFO_GET_COUNT_FLAG,
>>>> -                                  sp_count, &src_size);
>>>> +    if ( sp_list_entry_size < sizeof(*fpi) )
>>>> +        return false;
>>> 
>>> This can never happen since we don't accept SPMC below version 1.1. We
>>> even have a check to ensure the SPMC doesn't return a too-small
>>> fpi_size.
>> 
>> I tried to make the code forward compatible. It is not really a possible
>> case right now, but we could fall into this in the future.
> 
> At the moment, it's only ffa_sp_list_cache_init(), which initializes
> and puts things in the SP cache. So as long as ffa_sp_list_cache_init() and
> this function are in sync, there should be no problem. It simplifies
> things if we can trust the SP cache to be usable.

Agreed, we should not get into a situation where the SP cache content cannot
be used. Anything that needs checking must have been checked or converted
when creating the cache, not when using it.
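To illustrate the idea (a hypothetical sketch with stand-in names and field
layouts, not the actual Xen code): the size check would live only in the init
path, so the lookup helpers can simply assume it holds:

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for struct ffa_partition_info_1_1; the layout is illustrative. */
struct fpi_1_1_sketch {
    uint16_t id;
    uint16_t execution_context;
    uint32_t partition_properties;
    uint8_t  uuid[16];
};

/*
 * Validate the entry size once, when building the cache. If this holds at
 * init time, the per-lookup helpers never need to re-check it.
 */
static bool cache_entry_size_ok(uint32_t fpi_size, uint32_t page_size)
{
    return fpi_size >= sizeof(struct fpi_1_1_sketch) && fpi_size < page_size;
}
```

If init refuses any size outside this window, ffa_sp_entry_matches_uuid()
can drop its own size check entirely.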

> 
>> 
>>> 
>>>> +
>>>> +    memcpy(&sp_uuid, fpi->uuid, sizeof(sp_uuid));
>>>> +    return ffa_uuid_equal(uuid, sp_uuid);
>>>> }
>>>> 
>>>> -static int32_t ffa_get_sp_partinfo(struct ffa_uuid uuid, uint32_t *sp_count,
>>>> -                                   void **dst_buf, void *end_buf,
>>>> -                                   uint32_t dst_size)
>>>> +static int32_t ffa_get_sp_count(struct ffa_uuid uuid, uint32_t *sp_count)
>>>> {
>>>> -    int32_t ret;
>>>> -    int32_t release_ret;
>>>> -    uint32_t src_size, real_sp_count;
>>>> -    void *src_buf;
>>>>    uint32_t count = 0;
>>>> -    bool notify_fw = false;
>>>> -
>>>> -    /* We need to use the RX buffer to receive the list */
>>>> -    src_buf = ffa_rxtx_spmc_rx_acquire();
>>>> -    if ( !src_buf )
>>>> -        return FFA_RET_DENIED;
>>>> -
>>>> -    ret = ffa_partition_info_get(uuid, 0, &real_sp_count, &src_size);
>>>> -    if ( ret )
>>>> -        goto out;
>>>> -    notify_fw = true;
>>>> +    uint32_t n;
>>>> 
>>>> -    /* Validate the src_size we got */
>>>> -    if ( src_size < sizeof(struct ffa_partition_info_1_0) ||
>>>> -         src_size >= FFA_PAGE_SIZE )
>>>> +    for ( n = 0; n < sp_list_count; n++ )
>>>>    {
>>>> -        ret = FFA_RET_NOT_SUPPORTED;
>>>> -        goto out;
>>>> +        void *entry = sp_list + n * sp_list_entry_size;
>>>> +
>>>> +        if ( ffa_sp_entry_matches_uuid(entry, uuid) )
>>>> +            count++;
>>>>    }
>>>> 
>>>> -    /*
>>>> -     * Limit the maximum time we hold the CPU by limiting the number of SPs.
>>>> -     * We just ignore the extra ones as this is tested during init in
>>>> -     * ffa_partinfo_init so the only possible reason is SP have been added
>>>> -     * since boot.
>>>> -     */
>>>> -    if ( real_sp_count > FFA_MAX_NUM_SP )
>>>> -        real_sp_count = FFA_MAX_NUM_SP;
>>>> +    *sp_count = count;
>>>> 
>>>> -    /* Make sure the data fits in our buffer */
>>>> -    if ( real_sp_count > (FFA_RXTX_PAGE_COUNT * FFA_PAGE_SIZE) / src_size )
>>>> -    {
>>>> -        ret = FFA_RET_NOT_SUPPORTED;
>>>> -        goto out;
>>>> -    }
>>>> +    if ( !ffa_uuid_is_nil(uuid) && !count )
>>>> +        return FFA_RET_INVALID_PARAMETERS;
>>>> 
>>>> -    for ( uint32_t sp_num = 0; sp_num < real_sp_count; sp_num++ )
>>>> -    {
>>>> -        struct ffa_partition_info_1_1 *fpi = src_buf;
>>>> +    return FFA_RET_OK;
>>>> +}
>>>> 
>>>> -        /* filter out SP not following bit 15 convention if any */
>>>> -        if ( FFA_ID_IS_SECURE(fpi->id) )
>>>> -        {
>>>> -            /*
>>>> -             * If VM is 1.0 but firmware is 1.1 we could have several entries
>>>> -             * with the same ID but different UUIDs. In this case the VM will
>>>> -             * get a list with several time the same ID.
>>>> -             * This is a non-compliance to the specification but 1.0 VMs should
>>>> -             * handle that on their own to simplify Xen implementation.
>>>> -             */
>>>> +static int32_t ffa_get_sp_partinfo(struct ffa_uuid uuid, uint32_t *sp_count,
>>>> +                                   void **dst_buf, void *end_buf,
>>>> +                                   uint32_t dst_size)
>>>> +{
>>>> +    int32_t ret;
>>>> +    uint32_t count = 0;
>>>> +    uint32_t n;
>>>> 
>>>> -            ret = ffa_copy_info(dst_buf, end_buf, src_buf, dst_size, src_size);
>>>> -            if ( ret )
>>>> -                goto out;
>>>> +    for ( n = 0; n < sp_list_count; n++ )
>>>> +    {
>>>> +        void *entry = sp_list + n * sp_list_entry_size;
>>>> 
>>>> -            count++;
>>>> -        }
>>>> +        if ( !ffa_sp_entry_matches_uuid(entry, uuid) )
>>>> +            continue;
>>>> 
>>>> -        src_buf += src_size;
>>>> +        /*
>>>> +         * If VM is 1.0 but firmware is 1.1 we could have several entries
>>>> +         * with the same ID but different UUIDs. In this case the VM will
>>>> +         * get a list with several time the same ID.
>>>> +         * This is a non-compliance to the specification but 1.0 VMs should
>>>> +         * handle that on their own to simplify Xen implementation.
>>>> +         */
>>>> +        ret = ffa_copy_info(dst_buf, end_buf, entry, dst_size,
>>>> +                            sp_list_entry_size);
>>>> +        if ( ret )
>>>> +            return ret;
>>>> +
>>>> +        count++;
>>>>    }
>>>> 
>>>>    *sp_count = count;
>>>> 
>>>> -out:
>>>> -    release_ret = ffa_rxtx_spmc_rx_release(notify_fw);
>>>> -    if ( release_ret )
>>>> -        gprintk(XENLOG_WARNING,
>>>> -                "ffa: Error releasing SPMC RX buffer: %d\n", release_ret);
>>>> -    return ret;
>>>> +    if ( !ffa_uuid_is_nil(uuid) && !count )
>>>> +        return FFA_RET_INVALID_PARAMETERS;
>>>> +
>>>> +    return FFA_RET_OK;
>>>> }
>>>> 
>>>> static int32_t ffa_get_vm_partinfo(struct ffa_uuid uuid, uint32_t start_index,
>>>> @@ -435,6 +426,14 @@ static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id,
>>>>    return res;
>>>> }
>>>> 
>>>> +static void ffa_sp_list_cache_free(void)
>>>> +{
>>>> +    XFREE(sp_list);
>>>> +    sp_list = NULL;
>>> 
>>> XFREE() is already setting sp_list to NULL.
>> 
>> Ack will fix in v2
>> 
>>> 
>>>> +    sp_list_count = 0;
>>>> +    sp_list_entry_size = 0;
>>>> +}
>>>> +
>>>> static void uninit_subscribers(void)
>>>> {
>>>>        subscr_vm_created_count = 0;
>>>> @@ -443,6 +442,68 @@ static void uninit_subscribers(void)
>>>>        XFREE(subscr_vm_destroyed);
>>>> }
>>>> 
>>>> +static bool ffa_sp_list_cache_init(const void *buf, uint32_t count,
>>>> +                                   uint32_t fpi_size)
>>>> +{
>>>> +    const uint8_t *src = buf;
>>>> +    uint32_t secure_count = 0;
>>>> +    uint32_t n, idx = 0;
>>>> +    bool warned = false;
>>>> +
>>>> +    if ( fpi_size < sizeof(struct ffa_partition_info_1_0) ||
>>>> +         fpi_size >= FFA_PAGE_SIZE )
>>>> +        return false;
>>> 
>>> Would it make sense to check that fpi_size is well aligned with struct
>>> ffa_partition_info_1_0? If it's an odd size, we'll make unaligned
>>> accesses below with memcpy(). But perhaps that's a bit much. The SPMC
>>> isn't supposed to provide garbage.
>> 
>> Memcpy should prevent issues even if accesses are not aligned.
>> If we had this test, we could not return an error to the SPMC, so we would
>> have to generate one for the caller. It is simpler, I think, to handle
>> non-aligned data as we do not expect the SPMC to generate such a case.
>> Tell me if you agree.
> 
> We dereference fpi below, and depending on compiler flags and pointer
> types, memcpy() might not be safe with unaligned pointers.
> From 6.3.2.3 Pointers, paragraph 7, in the C standard:
> "A pointer to an object type may be converted to a pointer to a different
> object type. If the resulting pointer is not correctly aligned for the
> referenced type, the behavior is undefined."
> 
> I've seen past examples where the compiler optimized memcpy() in a way
> that breaks with unaligned pointers.
> 
> We don't expect the test above to fail, but if it does we will not use
> the secure firmware. I think refusing unexpected sizes is even
> simpler. It should make finding eventual errors much easier.

In the FF-A spec, the size can grow, which is why there is a size field.
FF-A expects that we either ignore the extra content or copy it without
inspecting it. I think we should not depend on any alignment, so we need
to make sure we do the copy in a way that is robust to unaligned data.
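One alignment-robust approach (a sketch under assumed names and layouts, not
the actual Xen code) is to never cast the raw byte pointer to a struct
pointer, and instead memcpy() each entry into a properly aligned local copy
before reading its fields:

```c
#include <stdint.h>
#include <string.h>

/* Stand-in for struct ffa_partition_info_1_0; the layout is illustrative. */
struct fpi_1_0_sketch {
    uint16_t id;
    uint16_t execution_context;
    uint32_t partition_properties;
};

/*
 * Return the partition ID of entry n. The buffer is treated as raw bytes
 * and the entry is copied into an aligned local before any field access,
 * so this works for any fpi_size, aligned or not.
 */
static uint16_t entry_id(const uint8_t *buf, uint32_t n, uint32_t fpi_size)
{
    struct fpi_1_0_sketch fpi;

    memcpy(&fpi, buf + (size_t)n * fpi_size, sizeof(fpi));
    return fpi.id;
}
```

The compiler can still optimize the memcpy() to loads, but because the source
operand is a byte pointer it may not assume struct alignment, unlike a cast
followed by a direct dereference.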

> 
> So my question above is whether it's worth checking that fpi_size is
> well-aligned, or if it's so unlikely that we don't need to consider
> it.

I think we need to find a solution where we handle things properly even
if the size is not aligned. I thought using memcpy() would protect against
that, but maybe we need a stronger solution to ensure it works even if the
data is unaligned.

Cheers
Bertrand

> 
> Cheers,
> Jens
> 
>> 
>>> 
>>>> +
>>>> +    if ( count > (FFA_RXTX_PAGE_COUNT * FFA_PAGE_SIZE) / fpi_size )
>>>> +        return false;
>>>> +
>>>> +    for ( n = 0; n < count; n++ )
>>>> +    {
>>>> +        const struct ffa_partition_info_1_0 *fpi =
>>>> +            (const void *)(src + n * fpi_size);
>>>> +
>>>> +        if ( !FFA_ID_IS_SECURE(fpi->id) )
>>>> +        {
>>>> +            if ( !warned )
>>> 
>>> Is this needed? Doesn't printk_once() already ensure this? Or did you
>>> happen to leave printk_once() by mistake and meant for this to be
>>> printed once each time ffa_sp_list_cache_init() is called, since
>>> "warned" isn't static.
>> 
>> Very right, I need to remove "warned"; printk_once() should be enough here.
>> 
>>> 
>>>> +            {
>>>> +                printk_once(XENLOG_ERR
>>>> +                printk_once(XENLOG_ERR
>>>> +                            "ffa: Firmware is not using bit 15 convention for IDs !!\n");
>>>> +                warned = true;
>>>> +            }
>>>> +            printk(XENLOG_ERR
>>>> +                   "ffa: Secure partition with id 0x%04x cannot be used\n",
>>>> +                   fpi->id);
>>>> +            continue;
>>>> +        }
>>>> +
>>>> +        secure_count++;
>>>> +    }
>>>> +
>>>> +    if ( secure_count )
>>>> +    {
>>>> +        sp_list = xzalloc_bytes(secure_count * fpi_size);
>>>> +        if ( !sp_list )
>>>> +            return false;
>>>> +    }
>>>> +
>>>> +    sp_list_count = secure_count;
>>>> +    sp_list_entry_size = fpi_size;
>>>> +
>>>> +    for ( n = 0; n < count; n++ )
>>>> +    {
>>>> +        const struct ffa_partition_info_1_0 *fpi =
>>>> +            (const void *)(src + n * fpi_size);
>>>> +
>>>> +        if ( !FFA_ID_IS_SECURE(fpi->id) )
>>>> +            continue;
>>>> +
>>>> +        memcpy(sp_list + idx * fpi_size, fpi, fpi_size);
>>>> +        idx++;
>>>> +    }
>>>> +
>>>> +    return true;
>>>> +}
>>>> +
>>>> static bool init_subscribers(void *buf, uint16_t count, uint32_t fpi_size)
>>>> {
>>>>    uint16_t n;
>>>> @@ -549,12 +610,22 @@ bool ffa_partinfo_init(void)
>>>>        goto out;
>>>>    }
>>>> 
>>>> -    ret = init_subscribers(spmc_rx, count, fpi_size);
>>>> +    if ( !ffa_sp_list_cache_init(spmc_rx, count, fpi_size) )
>>>> +    {
>>>> +        printk(XENLOG_ERR "ffa: Failed to cache SP list\n");
>>>> +        goto out;
>>>> +    }
>>>> +
>>>> +    ret = init_subscribers(sp_list, sp_list_count, sp_list_entry_size);
>>>> 
>>>> out:
>>>>    e = ffa_rxtx_spmc_rx_release(notify_fw);
>>>>    if ( e )
>>>>    printk(XENLOG_WARNING "ffa: Error releasing SPMC RX buffer: %d\n", e);
>>>> +    if ( !ret )
>>>> +        uninit_subscribers();
>>> 
>>> ret is initially false and can only be set to true if
>>> init_subscribers() returns true. So there's never any point in calling
>>> uninit_subscribers().
>> 
>> True, I will fix that.
>> 
>>> 
>>>> +    if ( !ret )
>>>> +        ffa_sp_list_cache_free();
>>> 
>>> ret can be false even if ffa_sp_list_cache_init() hasn't been called
>>> yet. Calling ffa_sp_list_cache_free() anyway is harmless, but not
>>> elegant.
>> 
>> Yes, I will rework the cleanup logic here a bit.
>> 
>> Thanks for the review
>> 
>> Cheers
>> Bertrand
>> 
>>> 
>>> Cheers,
>>> Jens
>>> 
>>>>    return ret;
>>>> }
>>>> 
>>>> --
>>>> 2.52.0



 

