
Re: [Xen-devel] [PATCH v2 0/2] ARM: ACPI: IORT: Hide SMMU from hardware domain's IORT table



Hi Julien,

Why do you omit the parts of the mail where I asked a question? Please avoid snipping that removes context. I raised a valid point which was completely ignored, and you then asked me to explain the rationale at a later point.
If you choose to ignore my first point, how can I make any point at all?

This is what I asked:


>>The ACPI tables for DomU are generated by the toolstack today. So I don't see why we would change that to support IORT.
>>
>>However, you can have a file shared between the toolstack and Xen and contain the generation of IORT.
>>
>>For instance, this is what we already do with libelf (see common/libelf).

This would amount to adding a make_iort function in libxl__prepare_acpi,
which would use the same common code that generates the dom0 IORT (domain_build.c).
Correct?

If we go by this logic, then libxl__prepare_acpi and domain_build.c should use common code for all ACPI tables. Are you suggesting we change that as well and move it into common code?

The code in domain_build.c and in libxl__prepare_acpi is very similar, see the code below.

static int prepare_acpi(struct domain *d, struct kernel_info *kinfo)
{
    ....
    d->arch.efi_acpi_table = alloc_xenheap_pages(order, 0);
    ...

    rc = acpi_create_fadt(d, tbl_add);
    if ( rc != 0 )
        return rc;

    rc = acpi_create_madt(d, tbl_add);
    if ( rc != 0 )
        return rc;

    rc = acpi_create_stao(d, tbl_add);
    if ( rc != 0 )
        return rc;

    rc = acpi_create_xsdt(d, tbl_add);
    if ( rc != 0 )
        return rc;

    rc = acpi_create_rsdp(d, tbl_add);
    if ( rc != 0 )
        return rc;

    ...
}

int libxl__prepare_acpi(libxl__gc *gc, libxl_domain_build_info *info,
                        struct xc_dom_image *dom)
{
    ...

    rc = libxl__allocate_acpi_tables(gc, info, dom, acpitables);
    if (rc)
        goto out;

    make_acpi_rsdp(gc, dom, acpitables);
    make_acpi_xsdt(gc, dom, acpitables);
    make_acpi_gtdt(gc, dom, acpitables);
    rc = make_acpi_madt(gc, dom, info, acpitables);
    if (rc)
        goto out;

    make_acpi_fadt(gc, dom, acpitables);
    make_acpi_dsdt(gc, dom, acpitables);

out:
    return rc;
}

As you can see, both code paths are quite similar: there is redundancy between the libxl and Xen code for preparing the ACPI tables for dom0 and domU. The point I am raising is clear: if all the other tables (MADT, XSDT, RSDP, GTDT, etc.) do not share generation code between the toolstack and Xen, what is so special about IORT? Either we move all table generation into common code, or we keep the redundancy for IORT.
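To make the point concrete: a shared make_iort, in the libelf spirit, would only need a caller-provided buffer, so both Xen (alloc_xenheap_pages) and the toolstack (libxl__malloc) could use it with their own allocators. Below is a minimal sketch; the struct layout and names are hypothetical stand-ins, not the actual Xen definitions:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for the real acpi_table_header. */
struct acpi_header {
    char     signature[4];
    uint32_t length;
    uint8_t  revision;
    uint8_t  checksum;
    char     oem_id[6];
    char     oem_table_id[8];
    uint32_t oem_revision;
    char     creator_id[4];
    uint32_t creator_revision;
} __attribute__((packed));

/* Value that makes all bytes of the table sum to zero (mod 256). */
static uint8_t acpi_checksum(const void *tbl, uint32_t len)
{
    const uint8_t *p = tbl;
    uint8_t sum = 0;

    while ( len-- )
        sum += *p++;
    return -sum;
}

/*
 * Hypothetical shared helper: fill a caller-provided buffer with an
 * (empty) IORT.  Because 'buf' comes from the caller, the same code
 * can serve both domain_build.c and libxl__prepare_acpi.
 * Returns the table length, or 0 if the buffer is too small.
 */
static uint32_t make_iort(void *buf, uint32_t size)
{
    struct acpi_header *h = buf;

    if ( size < sizeof(*h) )
        return 0;

    memset(h, 0, sizeof(*h));
    memcpy(h->signature, "IORT", 4);
    h->length = sizeof(*h);     /* node data would be appended here */
    memcpy(h->oem_id, "XenVMM", 6);
    h->checksum = acpi_checksum(h, h->length);
    return h->length;
}
```

The same split would apply equally well to MADT, XSDT and the rest, which is exactly why I am asking why IORT should be the only table treated this way.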

I hope the code above makes the point clear.
Please provide a technical answer rather than a simple "Why?".

Cheers!

Manish

On 10/12/2017 4:34 PM, Julien Grall wrote:
Hello,

On 12/10/17 07:11, Manish Jaggi wrote:
On 10/6/2017 7:54 PM, Julien Grall wrote:
I am not asking to write the DomU support, but at least have a full separation between the Parsing and Generation. So we could easily adapt and re-use the code when we get the DomU support.

I got your point, but as of today there is no code reuse for most of the ACPI tables. So code reuse _only_ for IORT but not for the other ACPI tables is not the correct approach.

Why? The generation of IORT is fairly standalone.

And again, this was a suggestion for sharing in the future, not an expectation for this series. What I care about most is that the generation be fully separated from the rest.


Also, this is part of the PCI passthrough flow, so that might also change a few things.

But from the point of view of hiding the SMMU from dom0, it is a different flow from the one that is coupled with PCI PT.



I think 1) can be solved using this series as a base. I have quite a few
comments ready for the patches; shall we follow this route?

2) obviously would change the game completely. We need to sit down and design this properly. Probably this means that Xen parses the IORT and builds internal representations of the mappings,
Can you please add more detail on the internal representations of the mappings.

What exactly do you want? This is likely going to be decided once you have looked at the expected interaction between IORT and Xen.
Please give more details on this line: "Probably this means that Xen parses the IORT and
builds internal representations of the mappings,"

I think you have enough meat in this thread to come up with a proposal based on the feedback. Once you send it, we can have a discussion and find an agreement.

[...]

The IORT for the hardware domain is just a specific case, as it is based on pre-existing information. But because of removing nodes (e.g. SMMU nodes and probably the PMU nodes), it is basically a full rewrite.

So I would consider fully separating the logic of generating the IORT table from the host IORT table. By that I mean not browsing the host IORT when generating the host.

by "the host" you mean dom0 IORT  ?

yes.

Something along the lines of this:
     struct iort_table_struct {
         struct acpi_table_header header;
         struct list_head pci_rc_list;
         struct list_head its_groups;
         struct list_head smmus;
         struct list_head platform_devices;
     } host_iort;

Some context is probably missing.

This has several benefits:
1) The code generation could be re-used for generating the guest IORT later on.
See my comment above
    2) See 2) from Andre
3) We could decide at a finer granularity which devices (e.g. platform devices) Dom0 can see.
ok,

So, IMHO, we should take a different approach from this series and extend its scope. Rather than focusing only on the IORT for the hardware domain, it would be better to see IORT as a whole, i.e. how will IORT interact with the hypervisor?

For instance, likely you would need to parse the IORT in quite a few places in Xen. It would be better to get IORT parsed only once and store the information in Xen like data-structures.

I am thinking of reusing much of the ACPI table infrastructure for that and introducing fewer abstractions.
This requires more work than the current scope of this series, but I think it would be helpful for future work such as PCI passthrough support.

Any opinions?

For DomUs it is tied to PCI PT, so both designs should evolve together.

Why that? The generation of IORT is pretty standard.
The DomU IORT will have a virtual PCI_RC and a virtual ITS_GROUP, and IIUC, at the time of generating the domU IORT (with device passthrough) the toolstack will communicate the virtual/physical deviceIDs to Xen. So while the generation logic is standard, what to put in the IORT and which mappings to add is a design consideration based on PCI PT.
I still don't get why you are trying to argue about the information written in the IORT. The generation does not care whether that information is taken from the host IORT, a virtual topology, or a mix...

It just takes the information as input and outputs the IORT. That's it.
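For illustration, that input/output contract could be sketched as below. The types and names are hypothetical (not actual Xen code), and the translation rule is loosely modelled on the IORT ID-mapping idea of rebasing an input ID range onto an output ID range:

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical, source-agnostic input to the generator: each entry maps
 * a range of input IDs (e.g. RIDs/StreamIDs) to output IDs (e.g.
 * DeviceIDs for an ITS group).  Whether these values come from the host
 * IORT, from the toolstack's virtual topology, or a mix, the code below
 * does not care.
 */
struct iort_idmap {
    uint32_t input_base;
    uint32_t count;
    uint32_t output_base;
};

/*
 * Translate an input ID through the mapping array: the output is the
 * output range rebased by the offset of 'in' within its input range.
 * Returns 0 on success, -1 if no mapping covers 'in'.
 */
static int iort_translate(const struct iort_idmap *maps, size_t n,
                          uint32_t in, uint32_t *out)
{
    for ( size_t i = 0; i < n; i++ )
        if ( in >= maps[i].input_base &&
             in - maps[i].input_base < maps[i].count )
        {
            *out = maps[i].output_base + (in - maps[i].input_base);
            return 0;
        }
    return -1;
}
```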


I have a question here: if a virtual e1000 device is added to a domU (non-PCI PT case),
should it be added as a DomU platform device or as a virtual PCI device?

Huh? Is that e1000 device a PCI device or a platform device? If you answer that, you answer your own question.

Cheers,



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
