
[Xen-devel] [PATCH v4 4/6] xen/arm: zynqmp: eemi access control



From: Edgar E. Iglesias <edgar.iglesias@xxxxxxxxxx>

Introduce data structs to implement basic access controls.
Introduce the following three functions:

domain_has_node_access: check access to a PM node
domain_has_reset_access: check access to a reset line
domain_has_mmio_access: check access to a register
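
For reviewers unfamiliar with the series, the shape of these checks is roughly the
following. This is a self-contained mock for illustration only: the struct layout,
the 0xff0e0000 base address, and the single-base ownership test are stand-ins, not
the Xen API (the real code uses iomem_access_permitted() on an mfn).

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for Xen's struct domain. */
struct domain {
    bool is_hwdom;
    uint64_t owned_mmio_base;   /* single owned register base, for brevity */
};

struct pm_access {
    uint64_t addr;
    bool hwdom_access;          /* HW domain gets access regardless */
};

/* Toy two-node ACL: node 0 is hwdom-only, node 1 is tied to an MMIO base. */
static const struct pm_access node_acl[] = {
    [0] = { .hwdom_access = true },
    [1] = { .addr = 0xff0e0000ULL },   /* illustrative device base */
};

static bool pm_check_access(const struct pm_access *acl, size_t len,
                            const struct domain *d, uint32_t idx)
{
    if ( idx >= len )           /* == len is already out of bounds */
        return false;

    if ( acl[idx].hwdom_access && d->is_hwdom )
        return true;

    if ( !acl[idx].addr )
        return false;

    /* Stand-in for iomem_access_permitted() on the node's register base. */
    return d->owned_mmio_base == acl[idx].addr;
}
```

The three domain_has_*_access() helpers are then thin wrappers that bounds-check
an index (node ID, reset line, or a table slot found from an address) before
calling the common check.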

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xxxxxxxxxx>
Signed-off-by: Stefano Stabellini <stefanos@xxxxxxxxxx>

---
Changes in v4:
- add #include as needed
- add #if 0 for bisectability
- use mfn_t in pm_check_access
- add wrap-around ASSERT in domain_has_mmio_access
- use GENMASK in domain_has_mmio_access
- proper bound checks (== ARRAY_SIZE is out of bound)
---
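
Regarding the GENMASK change noted above: the idea is that a zero .mask in the
MMIO table means the whole 32-bit register is writable, and a guest-supplied
write mask gets clamped to the permitted bits. A minimal standalone
illustration (GENMASK is re-defined locally here to mirror Xen's semantics,
and the rule struct is a toy, not the real pm_mmio_access entry):

```c
#include <assert.h>
#include <stdint.h>

/* Local re-definition mirroring Xen's GENMASK(h, l) for 32-bit masks. */
#define GENMASK(h, l) (((~0U) << (l)) & (~0U >> (31 - (h))))

struct mmio_rule {
    uint32_t mask;   /* zero means no mask, i.e. all bits */
};

/* Clamp a guest-supplied write mask to the bits the rule permits. */
static uint32_t fixup_write_mask(const struct mmio_rule *r, uint32_t mask)
{
    uint32_t prot = r->mask ? r->mask : GENMASK(31, 0);

    return mask & prot;
}
```

This matches the "prot_mask |= pm_mmio_access[i].mask ?: GENMASK(31, 0)" step
in domain_has_mmio_access, except that the ternary is spelled out instead of
using the GNU ?: extension.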
 xen/arch/arm/platforms/xilinx-zynqmp-eemi.c | 790 ++++++++++++++++++++++++++++
 1 file changed, 790 insertions(+)

diff --git a/xen/arch/arm/platforms/xilinx-zynqmp-eemi.c b/xen/arch/arm/platforms/xilinx-zynqmp-eemi.c
index 369bb3f..ddb7a4a 100644
--- a/xen/arch/arm/platforms/xilinx-zynqmp-eemi.c
+++ b/xen/arch/arm/platforms/xilinx-zynqmp-eemi.c
@@ -16,9 +16,799 @@
  * GNU General Public License for more details.
  */
 
+/*
+ *  EEMI Power Management API access
+ *
+ * Refs:
+ * https://www.xilinx.com/support/documentation/user_guides/ug1200-eemi-api.pdf
+ *
+ * Background:
+ * The ZynqMP has a subsystem named the PMU with a CPU and special devices
+ * dedicated to running Power Management Firmware. Other masters in the
+ * system need to send requests to the PMU in order to, for example:
+ * * Manage power state
+ * * Configure clocks
+ * * Program bitstreams for the programmable logic
+ * * etc.
+ *
+ * Although the details of the setup are configurable, in the common case
+ * the PMU lives in the Secure world. The NS world cannot directly
+ * communicate with it and must use proxy services from ARM Trusted
+ * Firmware to reach the PMU.
+ *
+ * Power Management on the ZynqMP is implemented in a layered manner.
+ * The PMU knows about various masters and will enforce access controls
+ * based on a pre-configured partitioning. This configuration dictates
+ * which devices are owned by the various masters and the PMU FW makes sure
+ * that a given master cannot turn off a device that it does not own or that
+ * is in use by other masters.
+ *
+ * The PMU is not aware of multiple execution states in masters.
+ * For example, it treats the ARMv8 cores as single units and does not
+ * distinguish between Secure and NS OSes, nor does it know about
+ * hypervisors and multiple guests. It is up to software on the ARMv8
+ * cores to present a unified view of its power requirements.
+ *
+ * To implement this unified view, ARM Trusted Firmware at EL3 provides
+ * access to the PM API via SMC calls. ARM Trusted Firmware is responsible
+ * for mediating between the Secure and the NS world, rejecting SMC calls
+ * that request changes that are not allowed.
+ *
+ * Xen running above ATF owns the NS world and is responsible for presenting
+ * unified PM requests taking all guests and the hypervisor into account.
+ *
+ * Implementation:
+ * The PM API contains different classes of calls.
+ * Certain calls are harmless to expose to any guest.
+ * These include calls to get the PM API Version, or to read out the version
+ * of the chip we're running on.
+ *
+ * Other calls affect individual devices. In order to correctly
+ * virtualize those, we need to know whether the guest issuing a call
+ * has ownership of the given device.
+ * The approach taken here is to map PM API Nodes identifying
+ * a device into base addresses for registers that belong to that
+ * same device.
+ *
+ * If the guest has access to a device's registers, we give the guest
+ * access to PM API calls that affect that device. This is implemented
+ * by pm_node_access and domain_has_node_access().
+ *
+ * MMIO access:
+ * These calls allow guests to access certain memory ranges. These ranges
+ * are typically protected for secure-world access only and also from
+ * certain masters only, so guests cannot access them directly.
+ * Registers within the memory regions affect certain nodes. In this case,
+ * our input is an address and we map that address into a node. If the
+ * guest has ownership of that node, the access is allowed.
+ * Some registers contain bitfields and a single register may contain
+ * bits that affect multiple nodes.
+ */
+
+#include <xen/iocap.h>
+#include <xen/sched.h>
 #include <asm/regs.h>
 #include <asm/platforms/xilinx-zynqmp-eemi.h>
 
+#if 0
+struct pm_access
+{
+    paddr_t addr;
+    bool hwdom_access;    /* HW domain gets access regardless.  */
+};
+
+/*
+ * This table maps a node into a memory address.
+ * If a guest has access to the address, it has enough control
+ * over the node to grant it access to EEMI calls for that node.
+ */
+static const struct pm_access pm_node_access[] = {
+    /* MM_RPU grants access to all RPU Nodes.  */
+    [NODE_RPU] = { MM_RPU },
+    [NODE_RPU_0] = { MM_RPU },
+    [NODE_RPU_1] = { MM_RPU },
+    [NODE_IPI_RPU_0] = { MM_RPU },
+
+    /* GPU nodes.  */
+    [NODE_GPU] = { MM_GPU },
+    [NODE_GPU_PP_0] = { MM_GPU },
+    [NODE_GPU_PP_1] = { MM_GPU },
+
+    [NODE_USB_0] = { MM_USB3_0_XHCI },
+    [NODE_USB_1] = { MM_USB3_1_XHCI },
+    [NODE_TTC_0] = { MM_TTC0 },
+    [NODE_TTC_1] = { MM_TTC1 },
+    [NODE_TTC_2] = { MM_TTC2 },
+    [NODE_TTC_3] = { MM_TTC3 },
+    [NODE_SATA] = { MM_SATA_AHCI_HBA },
+    [NODE_ETH_0] = { MM_GEM0 },
+    [NODE_ETH_1] = { MM_GEM1 },
+    [NODE_ETH_2] = { MM_GEM2 },
+    [NODE_ETH_3] = { MM_GEM3 },
+    [NODE_UART_0] = { MM_UART0 },
+    [NODE_UART_1] = { MM_UART1 },
+    [NODE_SPI_0] = { MM_SPI0 },
+    [NODE_SPI_1] = { MM_SPI1 },
+    [NODE_I2C_0] = { MM_I2C0 },
+    [NODE_I2C_1] = { MM_I2C1 },
+    [NODE_SD_0] = { MM_SD0 },
+    [NODE_SD_1] = { MM_SD1 },
+    [NODE_DP] = { MM_DP },
+
+    /* Guest with GDMA Channel 0 gets PM access. Other guests don't.  */
+    [NODE_GDMA] = { MM_GDMA_CH0 },
+    /* Guest with ADMA Channel 0 gets PM access. Other guests don't.  */
+    [NODE_ADMA] = { MM_ADMA_CH0 },
+
+    [NODE_NAND] = { MM_NAND },
+    [NODE_QSPI] = { MM_QSPI },
+    [NODE_GPIO] = { MM_GPIO },
+    [NODE_CAN_0] = { MM_CAN0 },
+    [NODE_CAN_1] = { MM_CAN1 },
+
+    /* Only for the hardware domain.  */
+    [NODE_AFI] = { .hwdom_access = true },
+    [NODE_APLL] = { .hwdom_access = true },
+    [NODE_VPLL] = { .hwdom_access = true },
+    [NODE_DPLL] = { .hwdom_access = true },
+    [NODE_RPLL] = { .hwdom_access = true },
+    [NODE_IOPLL] = { .hwdom_access = true },
+    [NODE_DDR] = { .hwdom_access = true },
+    [NODE_IPI_APU] = { .hwdom_access = true },
+    [NODE_PCAP] = { .hwdom_access = true },
+
+    [NODE_PCIE] = { MM_PCIE_ATTRIB },
+    [NODE_RTC] = { MM_RTC },
+};
+
+/*
+ * This table maps reset line IDs into a memory address.
+ * If a guest has access to the address, it has enough control
+ * over the affected node to grant it access to EEMI calls for
+ * resetting that node.
+ */
+#define XILPM_RESET_IDX(n) ((n) - XILPM_RESET_PCIE_CFG)
+static const struct pm_access pm_reset_access[] = {
+    [XILPM_RESET_IDX(XILPM_RESET_PCIE_CFG)] = { MM_AXIPCIE_MAIN },
+    [XILPM_RESET_IDX(XILPM_RESET_PCIE_BRIDGE)] = { MM_PCIE_ATTRIB },
+    [XILPM_RESET_IDX(XILPM_RESET_PCIE_CTRL)] = { MM_PCIE_ATTRIB },
+
+    [XILPM_RESET_IDX(XILPM_RESET_DP)] = { MM_DP },
+    [XILPM_RESET_IDX(XILPM_RESET_SWDT_CRF)] = { MM_SWDT },
+    [XILPM_RESET_IDX(XILPM_RESET_AFI_FM5)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_AFI_FM4)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_AFI_FM3)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_AFI_FM2)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_AFI_FM1)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_AFI_FM0)] = { .hwdom_access = true },
+
+    /* Channel 0 grants PM access.  */
+    [XILPM_RESET_IDX(XILPM_RESET_GDMA)] = { MM_GDMA_CH0 },
+    [XILPM_RESET_IDX(XILPM_RESET_GPU_PP1)] = { MM_GPU },
+    [XILPM_RESET_IDX(XILPM_RESET_GPU_PP0)] = { MM_GPU },
+    [XILPM_RESET_IDX(XILPM_RESET_GT)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_SATA)] = { MM_SATA_AHCI_HBA },
+
+    [XILPM_RESET_IDX(XILPM_RESET_APM_FPD)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_SOFT)] = { .hwdom_access = true },
+
+    [XILPM_RESET_IDX(XILPM_RESET_GEM0)] = { MM_GEM0 },
+    [XILPM_RESET_IDX(XILPM_RESET_GEM1)] = { MM_GEM1 },
+    [XILPM_RESET_IDX(XILPM_RESET_GEM2)] = { MM_GEM2 },
+    [XILPM_RESET_IDX(XILPM_RESET_GEM3)] = { MM_GEM3 },
+
+    [XILPM_RESET_IDX(XILPM_RESET_QSPI)] = { MM_QSPI },
+    [XILPM_RESET_IDX(XILPM_RESET_UART0)] = { MM_UART0 },
+    [XILPM_RESET_IDX(XILPM_RESET_UART1)] = { MM_UART1 },
+    [XILPM_RESET_IDX(XILPM_RESET_SPI0)] = { MM_SPI0 },
+    [XILPM_RESET_IDX(XILPM_RESET_SPI1)] = { MM_SPI1 },
+    [XILPM_RESET_IDX(XILPM_RESET_SDIO0)] = { MM_SD0 },
+    [XILPM_RESET_IDX(XILPM_RESET_SDIO1)] = { MM_SD1 },
+    [XILPM_RESET_IDX(XILPM_RESET_CAN0)] = { MM_CAN0 },
+    [XILPM_RESET_IDX(XILPM_RESET_CAN1)] = { MM_CAN1 },
+    [XILPM_RESET_IDX(XILPM_RESET_I2C0)] = { MM_I2C0 },
+    [XILPM_RESET_IDX(XILPM_RESET_I2C1)] = { MM_I2C1 },
+    [XILPM_RESET_IDX(XILPM_RESET_TTC0)] = { MM_TTC0 },
+    [XILPM_RESET_IDX(XILPM_RESET_TTC1)] = { MM_TTC1 },
+    [XILPM_RESET_IDX(XILPM_RESET_TTC2)] = { MM_TTC2 },
+    [XILPM_RESET_IDX(XILPM_RESET_TTC3)] = { MM_TTC3 },
+    [XILPM_RESET_IDX(XILPM_RESET_SWDT_CRL)] = { MM_SWDT },
+    [XILPM_RESET_IDX(XILPM_RESET_NAND)] = { MM_NAND },
+    [XILPM_RESET_IDX(XILPM_RESET_ADMA)] = { MM_ADMA_CH0 },
+    [XILPM_RESET_IDX(XILPM_RESET_GPIO)] = { MM_GPIO },
+    [XILPM_RESET_IDX(XILPM_RESET_IOU_CC)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_TIMESTAMP)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_RPU_R50)] = { MM_RPU },
+    [XILPM_RESET_IDX(XILPM_RESET_RPU_R51)] = { MM_RPU },
+    [XILPM_RESET_IDX(XILPM_RESET_RPU_AMBA)] = { MM_RPU },
+    [XILPM_RESET_IDX(XILPM_RESET_OCM)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_RPU_PGE)] = { MM_RPU },
+
+    [XILPM_RESET_IDX(XILPM_RESET_USB0_CORERESET)] = { MM_USB3_0_XHCI },
+    [XILPM_RESET_IDX(XILPM_RESET_USB0_HIBERRESET)] = { MM_USB3_0_XHCI },
+    [XILPM_RESET_IDX(XILPM_RESET_USB0_APB)] = { MM_USB3_0_XHCI },
+
+    [XILPM_RESET_IDX(XILPM_RESET_USB1_CORERESET)] = { MM_USB3_1_XHCI },
+    [XILPM_RESET_IDX(XILPM_RESET_USB1_HIBERRESET)] = { MM_USB3_1_XHCI },
+    [XILPM_RESET_IDX(XILPM_RESET_USB1_APB)] = { MM_USB3_1_XHCI },
+
+    [XILPM_RESET_IDX(XILPM_RESET_IPI)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_APM_LPD)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_RTC)] = { MM_RTC },
+    [XILPM_RESET_IDX(XILPM_RESET_SYSMON)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_AFI_FM6)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_LPD_SWDT)] = { MM_SWDT },
+    [XILPM_RESET_IDX(XILPM_RESET_FPD)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_RPU_DBG1)] = { MM_RPU },
+    [XILPM_RESET_IDX(XILPM_RESET_RPU_DBG0)] = { MM_RPU },
+    [XILPM_RESET_IDX(XILPM_RESET_DBG_LPD)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_DBG_FPD)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_APLL)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_DPLL)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_VPLL)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_IOPLL)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_RPLL)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_0)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_1)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_2)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_3)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_4)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_5)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_6)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_7)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_8)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_9)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_10)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_11)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_12)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_13)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_14)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_15)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_16)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_17)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_18)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_19)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_20)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_21)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_22)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_23)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_24)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_25)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_26)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_27)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_28)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_29)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_30)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_GPO3_PL_31)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_RPU_LS)] = { MM_RPU },
+    [XILPM_RESET_IDX(XILPM_RESET_PS_ONLY)] = { .hwdom_access = true },
+    [XILPM_RESET_IDX(XILPM_RESET_PL)] = { .hwdom_access = true },
+};
+
+/*
+ * This table maps MMIO ranges (optionally restricted to a bitfield via
+ * mask) into PM nodes. If a guest owns the node that a register belongs
+ * to, it is granted indirect access to that register, or to the
+ * permitted bits within it.
+ */
+static const struct {
+    paddr_t start;
+    paddr_t size;
+    uint32_t mask;   /* Zero means no mask, i.e. all bits.  */
+    enum pm_node_id node;
+    bool hwdom_access;
+    bool readonly;
+} pm_mmio_access[] = {
+    {
+        .start = MM_CRF_APB + R_CRF_APLL_CTRL,
+        .size = R_CRF_ACPU_CTRL,
+        .hwdom_access = true
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_IOPLL_CTRL,
+        .size = R_CRL_RPLL_TO_FPD_CTRL,
+        .hwdom_access = true
+    },
+    {
+        .start = MM_CRF_APB + R_CRF_DP_VIDEO_REF_CTRL,
+        .size = 4, .node = NODE_DP
+    },
+    {
+        .start = MM_CRF_APB + R_CRF_DP_AUDIO_REF_CTRL,
+        .size = 4, .node = NODE_DP
+    },
+    {
+        .start = MM_CRF_APB + R_CRF_DP_STC_REF_CTRL,
+        .size = 4, .node = NODE_DP
+    },
+    {
+        .start = MM_CRF_APB + R_CRF_GPU_REF_CTRL,
+        .size = 4, .node = NODE_GPU
+    },
+    {
+        .start = MM_CRF_APB + R_CRF_SATA_REF_CTRL,
+        .size = 4, .node = NODE_SATA
+    },
+    {
+        .start = MM_CRF_APB + R_CRF_PCIE_REF_CTRL,
+        .size = 4, .node = NODE_PCIE
+    },
+    {
+        .start = MM_CRF_APB + R_CRF_GDMA_REF_CTRL,
+        .size = 4, .node = NODE_GDMA
+    },
+    {
+        .start = MM_CRF_APB + R_CRF_DPDMA_REF_CTRL,
+        .size = 4, .node = NODE_DP
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_USB3_DUAL_REF_CTRL,
+        .size = 4, .node = NODE_USB_0
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_USB0_BUS_REF_CTRL,
+        .size = 4, .node = NODE_USB_0
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_USB1_BUS_REF_CTRL,
+        .size = 4, .node = NODE_USB_1
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_GEM0_REF_CTRL,
+        .size = 4, .node = NODE_ETH_0
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_GEM1_REF_CTRL,
+        .size = 4, .node = NODE_ETH_1
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_GEM2_REF_CTRL,
+        .size = 4, .node = NODE_ETH_2
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_GEM3_REF_CTRL,
+        .size = 4, .node = NODE_ETH_3
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_QSPI_REF_CTRL,
+        .size = 4, .node = NODE_QSPI
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_SDIO0_REF_CTRL,
+        .size = 4, .node = NODE_SD_0
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_SDIO1_REF_CTRL,
+        .size = 4, .node = NODE_SD_1
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_UART0_REF_CTRL,
+        .size = 4, .node = NODE_UART_0
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_UART1_REF_CTRL,
+        .size = 4, .node = NODE_UART_1
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_SPI0_REF_CTRL,
+        .size = 4, .node = NODE_SPI_0
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_SPI1_REF_CTRL,
+        .size = 4, .node = NODE_SPI_1
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_CAN0_REF_CTRL,
+        .size = 4, .node = NODE_CAN_0
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_CAN1_REF_CTRL,
+        .size = 4, .node = NODE_CAN_1
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_CPU_R5_CTRL,
+        .size = 4, .node = NODE_RPU
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_IOU_SWITCH_CTRL,
+        .size = 4, .hwdom_access = true
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_CSU_PLL_CTRL,
+        .size = 4, .hwdom_access = true
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_PCAP_CTRL,
+        .size = 4, .hwdom_access = true
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_LPD_SWITCH_CTRL,
+        .size = 4, .hwdom_access = true
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_LPD_LSBUS_CTRL,
+        .size = 4, .hwdom_access = true
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_DBG_LPD_CTRL,
+        .size = 4, .hwdom_access = true
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_NAND_REF_CTRL,
+        .size = 4, .node = NODE_NAND
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_ADMA_REF_CTRL,
+        .size = 4, .node = NODE_ADMA
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_PL0_REF_CTRL,
+        .size = 4, .hwdom_access = true
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_PL1_REF_CTRL,
+        .size = 4, .hwdom_access = true
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_PL2_REF_CTRL,
+        .size = 4, .hwdom_access = true
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_PL3_REF_CTRL,
+        .size = 4, .hwdom_access = true
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_PL0_THR_CTRL,
+        .size = 4, .hwdom_access = true
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_PL1_THR_CTRL,
+        .size = 4, .hwdom_access = true
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_PL2_THR_CTRL,
+        .size = 4, .hwdom_access = true
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_PL3_THR_CTRL,
+        .size = 4, .hwdom_access = true
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_PL0_THR_CNT,
+        .size = 4, .hwdom_access = true
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_PL1_THR_CNT,
+        .size = 4, .hwdom_access = true
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_PL2_THR_CNT,
+        .size = 4, .hwdom_access = true
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_PL3_THR_CNT,
+        .size = 4, .hwdom_access = true
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_GEM_TSU_REF_CTRL,
+        .size = 4, .node = NODE_ETH_0
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_DLL_REF_CTRL,
+        .size = 4, .hwdom_access = true
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_AMS_REF_CTRL,
+        .size = 4, .hwdom_access = true
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_I2C0_REF_CTRL,
+        .size = 4, .node = NODE_I2C_0
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_I2C1_REF_CTRL,
+        .size = 4, .node = NODE_I2C_1
+    },
+    {
+        .start = MM_CRL_APB + R_CRL_TIMESTAMP_REF_CTRL,
+        .size = 4, .hwdom_access = true
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_MIO_PIN_0,
+        .size = R_IOU_SLCR_MIO_MST_TRI2,
+        .hwdom_access = true
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_WDT_CLK_SEL,
+        .size = 4, .hwdom_access = true
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_CAN_MIO_CTRL,
+        .size = 4, .mask = 0x1ff, .node = NODE_CAN_0
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_CAN_MIO_CTRL,
+        .size = 4, .mask = 0x1ff << 15, .node = NODE_CAN_1
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_GEM_CLK_CTRL,
+        .size = 4, .mask = 0xf, .node = NODE_ETH_0
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_GEM_CLK_CTRL,
+        .size = 4, .mask = 0xf << 5, .node = NODE_ETH_1
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_GEM_CLK_CTRL,
+        .size = 4, .mask = 0xf << 10, .node = NODE_ETH_2
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_GEM_CLK_CTRL,
+        .size = 4, .mask = 0xf << 15, .node = NODE_ETH_3
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_GEM_CLK_CTRL,
+        .size = 4, .mask = 0x7 << 20, .hwdom_access = true
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_SDIO_CLK_CTRL,
+        .size = 4, .mask = 0x7, .node = NODE_SD_0
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_SDIO_CLK_CTRL,
+        .size = 4, .mask = 0x7 << 17, .node = NODE_SD_1
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_CTRL_REG_SD,
+        .size = 4, .mask = 0x1, .node = NODE_SD_0
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_CTRL_REG_SD,
+        .size = 4, .mask = 0x1 << 15, .node = NODE_SD_1
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_SD_ITAPDLY,
+        .size = R_IOU_SLCR_SD_CDN_CTRL,
+        .mask = 0x3ff << 0, .node = NODE_SD_0
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_SD_ITAPDLY,
+        .size = R_IOU_SLCR_SD_CDN_CTRL,
+        .mask = 0x3ff << 16, .node = NODE_SD_1
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_GEM_CTRL,
+        .size = 4, .mask = 0x3 << 0, .node = NODE_ETH_0
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_GEM_CTRL,
+        .size = 4, .mask = 0x3 << 2, .node = NODE_ETH_1
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_GEM_CTRL,
+        .size = 4, .mask = 0x3 << 4, .node = NODE_ETH_2
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_GEM_CTRL,
+        .size = 4, .mask = 0x3 << 6, .node = NODE_ETH_3
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_TTC_APB_CLK,
+        .size = 4, .mask = 0x3 << 0, .node = NODE_TTC_0
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_TTC_APB_CLK,
+        .size = 4, .mask = 0x3 << 2, .node = NODE_TTC_1
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_TTC_APB_CLK,
+        .size = 4, .mask = 0x3 << 4, .node = NODE_TTC_2
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_TTC_APB_CLK,
+        .size = 4, .mask = 0x3 << 6, .node = NODE_TTC_3
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_TAPDLY_BYPASS,
+        .size = 4, .mask = 0x3, .node = NODE_NAND
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_TAPDLY_BYPASS,
+        .size = 4, .mask = 0x1 << 2, .node = NODE_QSPI
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_COHERENT_CTRL,
+        .size = 4, .mask = 0xf << 0, .node = NODE_ETH_0
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_COHERENT_CTRL,
+        .size = 4, .mask = 0xf << 4, .node = NODE_ETH_1
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_COHERENT_CTRL,
+        .size = 4, .mask = 0xf << 8, .node = NODE_ETH_2
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_COHERENT_CTRL,
+        .size = 4, .mask = 0xf << 12, .node = NODE_ETH_3
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_COHERENT_CTRL,
+        .size = 4, .mask = 0xf << 16, .node = NODE_SD_0
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_COHERENT_CTRL,
+        .size = 4, .mask = 0xf << 20, .node = NODE_SD_1
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_COHERENT_CTRL,
+        .size = 4, .mask = 0xf << 24, .node = NODE_NAND
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_COHERENT_CTRL,
+        .size = 4, .mask = 0xf << 28, .node = NODE_QSPI
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_VIDEO_PSS_CLK_SEL,
+        .size = 4, .hwdom_access = true
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_RAM_GEM0,
+        .size = 4, .node = NODE_ETH_0
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_RAM_GEM1,
+        .size = 4, .node = NODE_ETH_1
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_RAM_GEM2,
+        .size = 4, .node = NODE_ETH_2
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_RAM_GEM3,
+        .size = 4, .node = NODE_ETH_3
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_RAM_SD0,
+        .size = 4, .node = NODE_SD_0
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_RAM_SD1,
+        .size = 4, .node = NODE_SD_1
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_RAM_CAN0,
+        .size = 4, .node = NODE_CAN_0
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_RAM_CAN1,
+        .size = 4, .node = NODE_CAN_1
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_RAM_LQSPI,
+        .size = 4, .node = NODE_QSPI
+    },
+    {
+        .start = MM_IOU_SLCR + R_IOU_SLCR_IOU_RAM_NAND,
+        .size = 4, .node = NODE_NAND
+    },
+    {
+        .start = MM_PMU_GLOBAL + R_PMU_GLOBAL_PWR_STATE,
+        .size = 4, .readonly = true, .hwdom_access = true
+    },
+    {
+        .start = MM_PMU_GLOBAL + R_PMU_GLOBAL_GLOBAL_GEN_STORAGE0,
+        .size = R_PMU_GLOBAL_PERS_GLOB_GEN_STORAGE7,
+        .readonly = true, .hwdom_access = true
+    },
+    {
+        /* Universal read-only access to CRF. Linux CCF needs this.  */
+        .start = MM_CRF_APB, .size = 0x104, .readonly = true,
+        .hwdom_access = true
+    },
+    {
+        /* Universal read-only access to CRL. Linux CCF needs this.  */
+        .start = MM_CRL_APB, .size = 0x284, .readonly = true,
+        .hwdom_access = true
+    }
+};
+
+static bool pm_check_access(const struct pm_access *acl, struct domain *d,
+                            uint32_t idx)
+{
+    mfn_t mfn;
+
+    if ( acl[idx].hwdom_access && is_hardware_domain(d) )
+        return true;
+
+    if ( !acl[idx].addr )
+        return false;
+
+    mfn = maddr_to_mfn(acl[idx].addr);
+    return iomem_access_permitted(d, mfn_x(mfn), mfn_x(mfn));
+}
+
+/* Check if a domain has access to a node.  */
+static bool domain_has_node_access(struct domain *d, uint32_t nodeid)
+{
+    if ( nodeid >= ARRAY_SIZE(pm_node_access) )
+        return false;
+
+    return pm_check_access(pm_node_access, d, nodeid);
+}
+
+/* Check if a domain has access to a reset line.  */
+static bool domain_has_reset_access(struct domain *d, uint32_t rst)
+{
+    if ( rst < XILPM_RESET_PCIE_CFG )
+        return false;
+
+    rst -= XILPM_RESET_PCIE_CFG;
+
+    if ( rst >= ARRAY_SIZE(pm_reset_access) )
+        return false;
+
+    return pm_check_access(pm_reset_access, d, rst);
+}
+
+/*
+ * Check if a given domain has access to perform an indirect
+ * MMIO access.
+ *
+ * For writes, the provided mask is clamped to the bits the
+ * domain is permitted to write.
+ */
+static bool domain_has_mmio_access(struct domain *d,
+                                   bool write, paddr_t addr,
+                                   uint32_t *mask)
+{
+    unsigned int i;
+    bool ret = false;
+    uint32_t prot_mask = 0;
+
+    /*
+     * The hardware domain gets read access to everything.
+     * Lower layers will do further filtering.
+     */
+    if ( !write && is_hardware_domain(d) )
+        return true;
+
+    /* Scan the ACL.  */
+    for ( i = 0; i < ARRAY_SIZE(pm_mmio_access); i++ )
+    {
+        ASSERT(pm_mmio_access[i].start + pm_mmio_access[i].size >=
+               pm_mmio_access[i].start);
+
+        if ( addr < pm_mmio_access[i].start )
+            return false;
+        if ( addr >= pm_mmio_access[i].start + pm_mmio_access[i].size )
+            continue;
+
+        if ( write && pm_mmio_access[i].readonly )
+            return false;
+        if ( pm_mmio_access[i].hwdom_access && !is_hardware_domain(d) )
+            return false;
+        if ( !domain_has_node_access(d, pm_mmio_access[i].node) )
+            return false;
+
+        /* We've got access to this reg (or parts of it).  */
+        ret = true;
+
+        /* Permit write access to selected bits.  */
+        prot_mask |= pm_mmio_access[i].mask ?: GENMASK(31, 0);
+        break;
+    }
+
+    /* Masking only applies to writes.  */
+    if ( write )
+        *mask &= prot_mask;
+
+    return ret;
+}
+#endif
+
 bool zynqmp_eemi(struct cpu_user_regs *regs)
 {
     return false;
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

