
Re: [PATCH v9 15/16] xen/arm: vpci: check guest range


  • To: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Tue, 26 Sep 2023 17:48:17 +0200
  • Cc: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 26 Sep 2023 15:49:05 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, Sep 26, 2023 at 11:27:48AM -0400, Stewart Hildebrand wrote:
> On 9/26/23 04:07, Roger Pau Monné wrote:
> > On Mon, Sep 25, 2023 at 05:49:00PM -0400, Stewart Hildebrand wrote:
> >> On 9/22/23 04:44, Roger Pau Monné wrote:
> >>> On Tue, Aug 29, 2023 at 11:19:47PM +0000, Volodymyr Babchuk wrote:
> >>>> From: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
> >>>>
> >>>> Skip mapping the BAR if it is not in a valid range.
> >>>>
> >>>> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
> >>>> ---
> >>>>  xen/drivers/vpci/header.c | 9 +++++++++
> >>>>  1 file changed, 9 insertions(+)
> >>>>
> >>>> diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
> >>>> index 1d243eeaf9..dbabdcbed2 100644
> >>>> --- a/xen/drivers/vpci/header.c
> >>>> +++ b/xen/drivers/vpci/header.c
> >>>> @@ -345,6 +345,15 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
> >>>>               bar->enabled == !!(cmd & PCI_COMMAND_MEMORY) )
> >>>>              continue;
> >>>>
> >>>> +#ifdef CONFIG_ARM
> >>>> +        if ( !is_hardware_domain(pdev->domain) )
> >>>> +        {
> >>>> +            if ( (start_guest < PFN_DOWN(GUEST_VPCI_MEM_ADDR)) ||
> >>>> +                 (end_guest >= PFN_DOWN(GUEST_VPCI_MEM_ADDR + GUEST_VPCI_MEM_SIZE)) )
> >>>> +                continue;
> >>>> +        }
> >>>> +#endif
> >>>
> >>> Hm, I think this should be in a hook similar to pci_check_bar() that
> >>> can be implemented per-arch.
> >>>
> >>> IIRC at least on x86 we allow the guest to place the BARs whenever it
> >>> wants, would such placement cause issues to the hypervisor on Arm?
> >>
> >> Hm. I wrote this patch in a hurry to make v9 of this series work on ARM. 
> >> In my haste I also forgot about the prefetchable range starting at 
> >> GUEST_VPCI_PREFETCH_MEM_ADDR, but that won't matter as we can probably 
> >> throw this patch out.
> >>
> >> Now that I've had some more time to investigate, I believe the check in 
> >> this patch is more or less redundant to the existing check in map_range() 
> >> added in baa6ea700386 ("vpci: add permission checks to map_range()").
> >>
> >> The issue is that during initialization bar->guest_addr is zeroed, and 
> >> this initial value of bar->guest_addr will fail the permissions check in 
> >> map_range() and crash the domain. When the guest writes a new valid BAR, 
> >> the old invalid address remains in the rangeset to be mapped. If we simply 
> >> remove the old invalid BAR from the rangeset, that seems to fix the issue. 
> >> So something like this:
> > 
> > It does seem to me we are missing a proper cleanup of the rangeset
> > contents in some paths then.  In the above paragraph you mention "the
> > old invalid address remains in the rangeset to be mapped", how does it
> > get in there in the first place, and why is the rangeset not emptied
> > if the mapping failed?
> 
> Back in ("vpci/header: handle p2m range sets per BAR") I added a v->domain == 
> pdev->domain check near the top of vpci_process_pending() as you 
> appropriately suggested.
> 
> +    if ( v->domain != pdev->domain )
> +    {
> +        read_unlock(&v->domain->pci_lock);
> +        return false;
> +    }
> 
> I have also reverted this patch ("xen/arm: vpci: check guest range").
> 
> The sequence of events leading to the old value remaining in the rangeset are:
> 
> # xl pci-assignable-add 01:00.0
> drivers/vpci/vpci.c:vpci_deassign_device()
>     deassign 0000:01:00.0 from d0
> # grep pci domu.cfg
> pci = [ "01:00.0" ]
> # xl create domu.cfg
> drivers/vpci/vpci.c:vpci_deassign_device()
>     deassign 0000:01:00.0 from d[IO]
> drivers/vpci/vpci.c:vpci_assign_device()
>     assign 0000:01:00.0 to d1
>     bar->guest_addr is initialized to zero because of the line: pdev->vpci = xzalloc(struct vpci);
> drivers/vpci/header.c:init_bars()
> drivers/vpci/header.c:modify_bars()

I think I've commented on this in another patch, but why is the device
added with memory decoding enabled?  I would expect the FLR performed
before assignment to leave the device with memory decoding disabled.

Otherwise we might have to force init_bars() to assume memory decoding
is disabled, IOW: memory decoding would be reported as disabled in the
guest's view of the command register, while leaving the physical device
command register as-is.  We might also consider switching memory
decoding off unconditionally on the physical device for domUs.
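
Something along these lines is what I have in mind (just an untested
sketch against xen/drivers/vpci/header.c; the guest_cmd field is made
up here, whatever the series ends up using to track the guest view of
the command register would apply instead):

    /*
     * Sketch only: when assigning to a domU, report memory decoding as
     * disabled in the guest view of the command register, and switch it
     * off on the physical device as well.
     */
    static void domU_disable_memory_decoding(struct pci_dev *pdev)
    {
        uint16_t cmd = pci_conf_read16(pdev->sbdf, PCI_COMMAND);

        if ( is_hardware_domain(pdev->domain) )
            return;

        /* Guest view: memory decoding starts off disabled (made-up field). */
        pdev->vpci->header.guest_cmd = cmd & ~PCI_COMMAND_MEMORY;

        /* Physical device: force memory decoding off for domUs. */
        if ( cmd & PCI_COMMAND_MEMORY )
            pci_conf_write16(pdev->sbdf, PCI_COMMAND,
                             cmd & ~PCI_COMMAND_MEMORY);
    }

That way no mapping would be attempted at assignment time, and it would
only happen once the guest itself enables memory decoding.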

>     BAR0: start 0xe0000, end 0xe000f, start_guest 0x0, end_guest 0xf
>     The range { 0-f } is added to the BAR0 rangeset for d1
> drivers/vpci/header.c:defer_map()
>     raise_softirq(SCHEDULE_SOFTIRQ);
> drivers/vpci/header.c:vpci_process_pending()
>     vpci_process_pending() returns because v->domain != pdev->domain (i.e. d0 != d1)

I don't think we can easily handle BAR mappings during device
assignment with vPCI, because that would require adding some kind of
continuation support which we don't have ATM.  It might be better to
just switch memory decoding off unconditionally in init_bars() for
domUs, as that's the easier solution right now and would allow us to
move forward.
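
FWIW, if we do end up needing the guest range check after all, I'd
still prefer it behind a per-arch helper similar to pci_check_bar()
rather than an inline #ifdef, so that the prefetchable window is also
covered.  Roughly (untested sketch, helper name made up, and I'm going
from memory on the prefetchable window constants):

    /* Arm-only sketch: is the guest BAR fully inside a vPCI window? */
    static bool guest_bar_in_vpci_window(unsigned long start_gfn,
                                         unsigned long end_gfn)
    {
        /* Non-prefetchable MMIO window. */
        if ( start_gfn >= PFN_DOWN(GUEST_VPCI_MEM_ADDR) &&
             end_gfn < PFN_DOWN(GUEST_VPCI_MEM_ADDR + GUEST_VPCI_MEM_SIZE) )
            return true;

        /* Prefetchable MMIO window. */
        if ( start_gfn >= PFN_DOWN(GUEST_VPCI_PREFETCH_MEM_ADDR) &&
             end_gfn < PFN_DOWN(GUEST_VPCI_PREFETCH_MEM_ADDR +
                                GUEST_VPCI_PREFETCH_MEM_SIZE) )
            return true;

        return false;
    }

Not needed if the patch ends up being dropped, of course.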

Thanks, Roger.
