Re: [PATCH v4 11/11] xen/arm: smmuv3: Add support for SMMUv3 driver
Hello Oleksandr,
> On 11 Jan 2021, at 4:39 pm, Oleksandr <olekstysh@xxxxxxxxx> wrote:
>
>
> Hi Rahul
>
>
>>> -
>>> static int arm_smmu_device_probe(struct platform_device *pdev)
>>> {
>>> int irq, ret;
>>> - struct resource *res;
>>> - resource_size_t ioaddr;
>>> + paddr_t ioaddr, iosize;
>>> struct arm_smmu_device *smmu;
>>> - struct device *dev = &pdev->dev;
>>> - bool bypass;
>>> - smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
>>> + smmu = xzalloc(struct arm_smmu_device);
>>> if (!smmu) {
>>> - dev_err(dev, "failed to allocate arm_smmu_device\n");
>>> + dev_err(pdev, "failed to allocate arm_smmu_device\n");
>>> return -ENOMEM;
>>> }
>>> - smmu->dev = dev;
>>> + smmu->dev = pdev;
>>> - if (dev->of_node) {
>>> + if (pdev->of_node) {
>>> ret = arm_smmu_device_dt_probe(pdev, smmu);
>>> + if (ret)
>>> + return -EINVAL;
>>> } else {
>>> ret = arm_smmu_device_acpi_probe(pdev, smmu);
>>> if (ret == -ENODEV)
>>> return ret;
>>> }
>>> - /* Set bypass mode according to firmware probing result */
>>> - bypass = !!ret;
>>> -
>>> /* Base address */
>>> - res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
>>> - if (resource_size(res) < arm_smmu_resource_size(smmu)) {
>>> - dev_err(dev, "MMIO region too small (%pr)\n", res);
>>> + ret = dt_device_get_address(dev_to_dt(pdev), 0, &ioaddr, &iosize);
>>> + if (ret)
>>> + return -ENODEV;
>>> +
>>> + if (iosize < arm_smmu_resource_size(smmu)) {
>>> + dev_err(pdev, "MMIO region too small (%lx)\n", iosize);
>>> return -EINVAL;
>>> }
>>> - ioaddr = res->start;
>>> /*
>>> * Don't map the IMPLEMENTATION DEFINED regions, since they may
>>> contain
>>> - * the PMCG registers which are reserved by the PMU driver.
>>> + * the PMCG registers which are optional and currently not supported.
>>> */
>>> - smmu->base = arm_smmu_ioremap(dev, ioaddr, ARM_SMMU_REG_SZ);
>>> + smmu->base = ioremap_nocache(ioaddr, ARM_SMMU_REG_SZ);
>>> if (IS_ERR(smmu->base))
>>> return PTR_ERR(smmu->base);
>>> - if (arm_smmu_resource_size(smmu) > SZ_64K) {
>>> - smmu->page1 = arm_smmu_ioremap(dev, ioaddr + SZ_64K,
>>> + if (iosize > SZ_64K) {
>>> + smmu->page1 = ioremap_nocache(ioaddr + SZ_64K,
>>> ARM_SMMU_REG_SZ);
>>> if (IS_ERR(smmu->page1))
>>> return PTR_ERR(smmu->page1);
>>> @@ -2765,14 +3101,262 @@ static int arm_smmu_device_probe(struct
>>> platform_device *pdev)
>>> return ret;
>>> /* Reset the device */
>>> - ret = arm_smmu_device_reset(smmu, bypass);
>>> + ret = arm_smmu_device_reset(smmu);
>>> if (ret)
>>> return ret;
>>> + /*
>>> + * Keep a list of all probed devices. This will be used to query
>>> + * the smmu devices based on the fwnode.
>>> + */
>>> + INIT_LIST_HEAD(&smmu->devices);
>>> +
>>> + spin_lock(&arm_smmu_devices_lock);
>>> + list_add(&smmu->devices, &arm_smmu_devices);
>>> + spin_unlock(&arm_smmu_devices_lock);
>
> Looks like that we need some kind of manual roll-back logic here in case of
> error during probe (there is no real devm_*):
>
> iounmap, xfree, etc.
I agree that manual roll-back logic would make the code cleaner, but what I have
found in this scenario is that if there is an error during probe,
arm_smmu_device_probe() returns the error and Xen does not continue to boot (it
calls panic()). In that case freeing the memory makes little practical
difference. That is why I decided not to modify the code that we ported from
Linux:
(XEN) I/O virtualisation disabled
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Couldn't configure correctly all the IOMMUs.
(XEN) ****************************************
(XEN)
(XEN) Manual reset required ('noreboot' specified)
Do we have a requirement for Xen to continue booting when an IOMMU is present
in the system but the IOMMU probe fails? If yes, then I will modify the code to
free all the resources when there is an error during probe.
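For reference, if we do decide to unwind, the usual pattern would be the
goto-based cleanup ladder already common in Linux probe functions. Below is a
minimal, self-contained sketch of that pattern (fake_probe, fail_stage, and the
malloc/free stand-ins are all hypothetical; real code would use xzalloc()/xfree()
and ioremap_nocache()/iounmap()):

```c
#include <stdlib.h>
#include <stddef.h>

/* Simplified stand-in for the real arm_smmu_device. */
struct arm_smmu_device {
    void *base;   /* stands in for the ioremap'd register page  */
    void *page1;  /* stands in for the second mapped page       */
};

/*
 * Hypothetical probe with goto-based roll-back. fail_stage lets a
 * caller simulate a failure at a given step; 0 means success.
 * Each error label undoes exactly the steps completed before it.
 */
static int fake_probe(int fail_stage, struct arm_smmu_device **out)
{
    struct arm_smmu_device *smmu = calloc(1, sizeof(*smmu));
    int ret = -1;

    if (!smmu)
        return -1;

    /* Step 1: map the main register page (ioremap_nocache in real code). */
    smmu->base = (fail_stage == 1) ? NULL : malloc(64);
    if (!smmu->base)
        goto err_free_smmu;

    /* Step 2: map page1 when the region is larger than 64K. */
    smmu->page1 = (fail_stage == 2) ? NULL : malloc(64);
    if (!smmu->page1)
        goto err_unmap_base;

    *out = smmu;
    return 0;

err_unmap_base:
    free(smmu->base);   /* iounmap(smmu->base) in real code */
err_free_smmu:
    free(smmu);         /* xfree(smmu) in real code */
    return ret;
}
```

The point of the ladder is that later failure labels fall through into the
earlier ones, so each resource is released exactly once regardless of where the
probe fails.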
Regards,
Rahul
>
>
>>>
>>> +
>>> return 0;
>>> }
>>
> --
> Regards,
>
> Oleksandr Tyshchenko
>