
Re: [Xen-devel] [v7][RFC][PATCH 05/13] hvmloader/mmio: reconcile guest mmio with reserved device memory



On 2014/10/27 17:56, Jan Beulich wrote:
On 27.10.14 at 08:12, <tiejun.chen@xxxxxxxxx> wrote:
On 2014/10/24 22:42, Jan Beulich wrote:
On 24.10.14 at 09:34, <tiejun.chen@xxxxxxxxx> wrote:
--- a/tools/firmware/hvmloader/pci.c
+++ b/tools/firmware/hvmloader/pci.c
@@ -37,6 +37,44 @@ uint64_t pci_hi_mem_start = 0, pci_hi_mem_end = 0;
   enum virtual_vga virtual_vga = VGA_none;
   unsigned long igd_opregion_pgbase = 0;

+unsigned int need_skip_rmrr = 0;

Static (and without initializer)?

static unsigned int need_skip_rmrr;

Please stop echoing back what was requested.

I reply inline to try to make sure I'm addressing every comment properly. If you think a given change is correct, please just ignore the echoed code.


+                           uint64_t mmio_start, uint64_t mmio_size)
+{
+    if ( start + memsize <= mmio_start || start >= mmio_start + mmio_size )
+        return 0;
+    else
+        return 1;

Make this a simple single return statement?

static int check_mmio_hole_conflict(uint64_t start, uint64_t memsize,
                                    uint64_t mmio_start, uint64_t mmio_size)
{
      if ( start + memsize <= mmio_start || start >= mmio_start + mmio_size )
          return 0;

      return 1;
}

Is "a simple single return statement" ambiguous in any way? This

static bool check_mmio_hole_conflict(uint64_t start, uint64_t memsize,
                                       uint64_t mmio_start, uint64_t mmio_size)
{
      return start + memsize > mmio_start && start < mmio_start + mmio_size;
}

is how I think this should be.

Thanks for spelling that out.


+}
+
+static int check_reserved_device_memory_map(uint64_t mmio_base,
+                                            uint64_t mmio_max)
+{
+    uint32_t i = 0;

Pointless initializer.

+    uint64_t rdm_start, rdm_end;
+    int nr_entries = -1;

And again.

+
+    nr_entries = hvm_get_reserved_device_memory_map();

It's completely unclear why this can't be the variable's initializer.

      uint32_t i;
      uint64_t rdm_start, rdm_end;
      int nr_rdm_entries = hvm_get_reserved_device_memory_map();

And (see also below) "unsigned int". It's bogus anyway to have the
function return the count by normal means but the actual array via a
global variable. I think you ought to switch to a consistent model.

+    for ( i = 0; i < nr_entries; i++ )
+    {
+        rdm_start = rdm_map[i].start_pfn << PAGE_SHIFT;
+        rdm_end = rdm_start + (rdm_map[i].nr_pages << PAGE_SHIFT);

I'm pretty certain I pointed out before that you can't simply shift
these fields - you risk losing significant bits.

I tried to go back and look, but I only found you saying I shouldn't
use PAGE_SHIFT and PAGE_SIZE at the same time. If I'm still missing
something, could you show me what you expect?

Shifting a 32-bit quantity left still yields a 32-bit quantity, no matter
whether the result is then stored in a 64-bit variable. You need to
up-cast the left side of the shift first.

Do you mean this?

rdm_start = (uint64_t)rdm_map[j].start_pfn << PAGE_SHIFT;
rdm_end = rdm_start + ((uint64_t)rdm_map[j].nr_pages << PAGE_SHIFT);


@@ -58,7 +96,9 @@ void pci_setup(void)
           uint32_t bar_reg;
           uint64_t bar_sz;
       } *bars = (struct bars *)scratch_start;
-    unsigned int i, nr_bars = 0;
+    unsigned int i, j, nr_bars = 0;
+    int nr_entries = 0;

And another pointless initializer. Plus as a count of something this

int nr_rdm_entries;

surely wants to be "unsigned int". Also I guess the variable name

I agree nr_rdm_entries should literally be unsigned int, but this value
is always set from hvm_get_reserved_device_memory_map(),

nr_rdm_entries = hvm_get_reserved_device_memory_map();

and I expect that return value may be negative in some failure cases.

If only you checked for these negative values...

Maybe I can simplify handling these failure cases within hvm_get_reserved_device_memory_map(), like:
        if ( rc )
                return 0;

Because then we no longer need any negative return value; '0' is always fine. So here,

unsigned int nr_rdm_entries = hvm_get_reserved_device_memory_map();


Additionally, there is actually some existing code just after mine:

          if ( need_skip_rmrr )
          {
                ...
          }

        base += bar_sz;

          if ( (base < resource->base) || (base > resource->max) )
          {
              printf("pci dev %02x:%x bar %02x size "PRIllx": no space for "
                     "resource!\n", devfn>>3, devfn&7, bar_reg,
                     PRIllx_arg(bar_sz));
              continue;
          }

This ensures we don't overrun the resource's MMIO range.

Resulting in the BAR not getting a value assigned afaict. Certainly
not what we want as a side effect of your changes.

I don't understand what side effect you mean. I'm just trying to make sure the BAR ranges skip any conflicting range while still staying within the resource ranges.


and bar_data_upper will likely end up being garbage.

Did you actually _test_ this code?

Actually, in my real case those RMRR ranges are always below the MMIO region.

Below whose MMIO? The host's or the guest's? In the latter case,
just (in order to test your code) increase the range reserved for
MMIO enough to cover the RMRR range.

On my platform,

RMRR region: base_addr ab80a000 end_address ab81dfff
RMRR region: base_addr ad000000 end_address af7fffff

So I guess you'd like me to change this

#define PCI_MEM_START       0xf0000000

to

#define PCI_MEM_START       0xa0000000

right?

But what test do you want to see? Just booting?

Thanks
Tiejun

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

