Hi Julien,
On 09/12/2016 02:08 PM, Julien Grall wrote:
Hi Sergej,
On 16/08/16 23:17, Sergej Proskurin wrote:
The HVMOP_altp2m_set_mem_access allows setting the gfn permissions
(currently one page at a time) of a specific altp2m view. In case the
view does not hold the requested gfn entry, it will first be copied
from the host's p2m table and then modified as requested.
Signed-off-by: Sergej Proskurin <proskurin@xxxxxxxxxxxxx>
---
Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>
Cc: Julien Grall <julien.grall@xxxxxxx>
---
v2: Prevent the page reference count from being falsely updated on
    altp2m modification. Therefore, we add a check determining whether
    the target p2m is a hostp2m before p2m_put_l3_page is called.
v3: Cosmetic fixes.
    Added the functionality to set/get the default_access also in/from
    the requested altp2m view.
    Read-locked hp2m in "altp2m_set_mem_access".
    Moved the functions "p2m_is_(hostp2m|altp2m)" out of this commit.
    Moved the function "modify_altp2m_entry" out of this commit.
    Moved the function "p2m_lookup_attr" out of this commit.
    Moved guards for "p2m_put_l3_page" out of this commit.
---
 xen/arch/arm/altp2m.c        | 53 ++++++++++++++++++++++++++++
 xen/arch/arm/hvm.c           |  7 +++-
 xen/arch/arm/p2m.c           | 82 ++++++++++++++++++++++++++++++++------------
 xen/include/asm-arm/altp2m.h | 12 +++++++
 4 files changed, 131 insertions(+), 23 deletions(-)
diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index ba345b9..03b8ce5 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -80,6 +80,59 @@ int altp2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
     return rc;
 }
+int altp2m_set_mem_access(struct domain *d,
+                          struct p2m_domain *hp2m,
+                          struct p2m_domain *ap2m,
+                          p2m_access_t a,
+                          gfn_t gfn)
+{
+    p2m_type_t p2mt;
+    p2m_access_t old_a;
+    mfn_t mfn;
+    unsigned int page_order;
+    int rc;
+
+    altp2m_lock(d);
Why do you have this lock? This will serialize all mem access
modifications even if the ap2m is different.
I am also surprised that you only lock the altp2m in
modify_altp2m_entry. IMHO, changes to the same ap2m should be
serialized.
You are correct, the implementation in the upcoming patch write-locks
the altp2m view and hence also removes the need for the global
altp2m_lock. Thank you.
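Roughly like this (sketch of the upcoming version; the rwlock lives in
the view's struct p2m_domain, just as for the hostp2m):

    p2m_write_lock(ap2m);    /* serializes changes to this view only */
    rc = modify_altp2m_entry(ap2m, gfn, mfn, p2mt, a, THIRD_ORDER);
    p2m_write_unlock(ap2m);  /* unlock; flushes if anything changed */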
+    p2m_read_lock(hp2m);
+
+    /* Check if entry is part of the altp2m view. */
+    mfn = p2m_lookup_attr(ap2m, gfn, &p2mt, NULL, &page_order);
+
+    /* Check host p2m if no valid entry in ap2m. */
+    if ( mfn_eq(mfn, INVALID_MFN) )
+    {
+        /* Check if entry is part of the host p2m view. */
+        mfn = p2m_lookup_attr(hp2m, gfn, &p2mt, &old_a, &page_order);
As mentioned in patch #27, p2m_lookup_attr will take a read reference
on the hp2m lock. However, you already did that above.
Anyway, I really think that p2m_lookup_attr should not exist and every
caller should be replaced by a direct call to p2m_get_entry.
Sounds good. I thought you did not want to have the functions
p2m_(set|get)_entry exported, which is the reason for the wrappers.
I have already removed these wrappers in the upcoming patch. Thank
you.
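E.g. (sketch, with the wrapper gone; names as in this series):

    /* Direct lookup in the altp2m view, under its read lock. */
    p2m_read_lock(ap2m);
    mfn = p2m_get_entry(ap2m, gfn, &p2mt, NULL, &page_order);
    p2m_read_unlock(ap2m);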
+        if ( mfn_eq(mfn, INVALID_MFN) ||
+             ((p2mt != p2m_ram_rw) && (p2mt != p2m_ram_ro)) )
Please use p2m_is_ram rather than open-coding the check on p2mt.
However, I don't understand why access restriction on altp2m is
limited to RAM whilst the hostp2m allows restriction on any type
of page.
I agree: It makes sense to allow setting memory permissions without
such restrictions. It seems to be a remnant of the old implementation,
in which invalid MFNs were not guaranteed to be marked as INVALID_MFN.
Anyway, I will remove this check. Thank you.
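(For reference, the non-open-coded form would have been:

    if ( mfn_eq(mfn, INVALID_MFN) || !p2m_is_ram(p2mt) )
    {
        rc = -ESRCH;
        goto out;
    }

but the p2mt restriction will be dropped altogether.)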
+        {
+            rc = -ESRCH;
+            goto out;
+        }
+
+        /* If this is a superpage, copy that first. */
+        if ( page_order != THIRD_ORDER )
+        {
+            rc = modify_altp2m_entry(ap2m, gfn, mfn, p2mt, old_a, page_order);
+            if ( rc < 0 )
+            {
+                rc = -ESRCH;
+                goto out;
+            }
+        }
+    }
+
+    /* Set mem access attributes - currently supporting only one (4K) page. */
+    page_order = THIRD_ORDER;
This looks pointless, please use THIRD_ORDER directly as argument.
+    rc = modify_altp2m_entry(ap2m, gfn, mfn, p2mt, a, page_order);
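Agreed, the local variable goes away, i.e. (sketch):

    rc = modify_altp2m_entry(ap2m, gfn, mfn, p2mt, a, THIRD_ORDER);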
+
+out:
+    p2m_read_unlock(hp2m);
+    altp2m_unlock(d);
+
+    return rc;
+}
+
 static void altp2m_vcpu_reset(struct vcpu *v)
 {
     struct altp2mvcpu *av = &altp2m_vcpu(v);
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 9ac3422..df78893 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -136,7 +136,12 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
     case HVMOP_altp2m_set_mem_access:
-        rc = -EOPNOTSUPP;
+        if ( a.u.set_mem_access.pad )
+            rc = -EINVAL;
+        else
+            rc = p2m_set_mem_access(d, _gfn(a.u.set_mem_access.gfn), 1, 0, 0,
+                                    a.u.set_mem_access.hvmmem_access,
+                                    a.u.set_mem_access.view);
         break;
case HVMOP_altp2m_change_gfn:
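As a side note, from the toolstack this op should be reachable via the
existing libxc wrapper xc_altp2m_set_mem_access() (sketch; the wrapper
has so far only been exercised with the x86 altp2m implementation, so
assuming it works unchanged on ARM):

    #include <xenctrl.h>

    /* Restrict a single 4K page to read-only in altp2m view 1.
     * View 0 is treated as the hostp2m (see p2m_set_mem_access below). */
    static int restrict_gfn_ro(xc_interface *xch, domid_t domid, xen_pfn_t gfn)
    {
        return xc_altp2m_set_mem_access(xch, domid, 1, gfn, XENMEM_access_r);
    }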
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index df2b85b..8dee02187 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1913,7 +1913,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
                         uint32_t start, uint32_t mask, xenmem_access_t access,
                         unsigned int altp2m_idx)
 {
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    struct p2m_domain *hp2m = p2m_get_hostp2m(d), *ap2m = NULL;
     p2m_access_t a;
     unsigned int order;
     long rc = 0;
@@ -1933,13 +1933,26 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
 #undef ACCESS
     };
+    /* altp2m view 0 is treated as the hostp2m */
+    if ( altp2m_idx )
+    {
+        if ( altp2m_idx >= MAX_ALTP2M ||
+             d->arch.altp2m_p2m[altp2m_idx] == NULL )
+            return -EINVAL;
+
+        ap2m = d->arch.altp2m_p2m[altp2m_idx];
+    }
+
     switch ( access )
     {
     case 0 ... ARRAY_SIZE(memaccess) - 1:
         a = memaccess[access];
         break;
     case XENMEM_access_default:
-        a = p2m->default_access;
+        if ( ap2m )
+            a = ap2m->default_access;
+        else
+            a = hp2m->default_access;
On the previous version, I requested to have this change
documented in the code [1]. I don't see it here.
You have requested to explain why the default_access is set on the
hostp2m and not the altp2m. With this patch, we changed this behavior
and allow setting the default_access of both the hostp2m and the
altp2m. Not sure what kind of explanation you expect at this point.
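FWIW, the repeated ap2m/hp2m branching in this function could be folded
by picking the effective view once (sketch):

    struct p2m_domain *p2m = ap2m ?: hp2m;  /* effective view */

    a = p2m->default_access;
    ...
    p2m->mem_access_enabled = true;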
         break;
     default:
         return -EINVAL;
@@ -1949,39 +1962,66 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
      * Flip mem_access_enabled to true when a permission is set, as to prevent
      * allocating or inserting super-pages.
      */
-    p2m->mem_access_enabled = true;
+    if ( ap2m )
+        ap2m->mem_access_enabled = true;
+    else
+        hp2m->mem_access_enabled = true;
     /* If request to set default access. */
     if ( gfn_eq(gfn, INVALID_GFN) )
     {
-        p2m->default_access = a;
+        if ( ap2m )
+            ap2m->default_access = a;
+        else
+            hp2m->default_access = a;
+
         return 0;
     }
-    p2m_write_lock(p2m);
-
     for ( gfn = gfn_add(gfn, start); nr > start; gfn = gfn_add(gfn, 1UL << order) )
     {
-        p2m_type_t t;
-        mfn_t mfn = p2m_get_entry(p2m, gfn, &t, NULL, &order);
-
-        /* Skip hole */
-        if ( mfn_eq(mfn, INVALID_MFN) )
+        if ( ap2m )
         {
+            order = THIRD_ORDER;
+
             /*
-             * the order corresponds to the order of the mapping in the
-             * page table. so we need to align the gfn before
-             * incrementing.
+             * ARM altp2m currently supports only setting of memory access
+             * rights of only one (4K) page at a time.
This looks like a TODO to me, so please add either "XXX:" or "TODO:"
in front.
Done.
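I.e. the comment will read roughly:

    /*
     * TODO: ARM altp2m currently supports only setting of memory access
     * rights of only one (4K) page at a time.
     */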
              */
-            gfn = _gfn(gfn_x(gfn) & ~((1UL << order) - 1));
-            continue;
+            rc = altp2m_set_mem_access(d, hp2m, ap2m, a, gfn);
+            if ( rc && rc != -ESRCH )
+                break;
         }
         else
         {
-            order = 0;
-            rc = __p2m_set_entry(p2m, gfn, 0, mfn, t, a);
-            if ( rc )
-                break;
+            p2m_type_t t;
+            mfn_t mfn;
+
+            p2m_write_lock(hp2m);
+
+            mfn = p2m_get_entry(hp2m, gfn, &t, NULL, &order);
+
+            /* Skip hole */
+            if ( mfn_eq(mfn, INVALID_MFN) )
+            {
+                /*
+                 * the order corresponds to the order of the mapping in the
+                 * page table. so we need to align the gfn before
+                 * incrementing.
+                 */
+                gfn = _gfn(gfn_x(gfn) & ~((1UL << order) - 1));
+                continue;
I just noticed that my series is buggy. The "continue" should be
dropped. I will do it in the next version.
Ok.
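So the hole case will simply fall through to the loop's
"start += (1UL << order);" increment, i.e. (sketch):

    /* Skip hole: only align the gfn to the mapping order. */
    if ( mfn_eq(mfn, INVALID_MFN) )
        gfn = _gfn(gfn_x(gfn) & ~((1UL << order) - 1));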
+            }
+            else
+            {
+                order = 0;
+
+                rc = __p2m_set_entry(hp2m, gfn, 0, mfn, t, a);
+                if ( rc )
+                    break;
+            }
+
+            p2m_write_unlock(hp2m);
By moving the lock within the loop, you will impose a TLB flush
per-4K which is currently not the case.
True. Since I needed to lock the hp2m explicitly inside
altp2m_set_mem_access, I could not move the lock outside of the loop.
With the reworked locking, this restriction has gone away.
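So the write lock (and hence the TLB flush on unlock) can move back
outside the loop (sketch):

    p2m_write_lock(hp2m);

    for ( gfn = gfn_add(gfn, start); nr > start; gfn = gfn_add(gfn, 1UL << order) )
    {
        /* ... per-gfn work, no per-iteration unlock ... */
    }

    p2m_write_unlock(hp2m);  /* one TLB flush for the entire range */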
Looking at the function, I think the implementation of the altp2m
support could really be simplified. The main difference is that if the
entry is not present in the altp2m, then you look up the host p2m.
I will try to simplify the function.
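Perhaps along these lines (untested sketch, reusing p2m_is_altp2m from
the earlier patch in this series): try the requested view first and
fall back to the hostp2m on a miss, so both cases share the single
__p2m_set_entry() path:

    mfn = p2m_get_entry(p2m, gfn, &t, NULL, &order);
    if ( mfn_eq(mfn, INVALID_MFN) && p2m_is_altp2m(p2m) )
        mfn = p2m_get_entry(hp2m, gfn, &t, NULL, &order);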
         }

         start += (1UL << order);
@@ -1993,8 +2033,6 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
         }
     }
-    p2m_write_unlock(p2m);
-
     if ( rc < 0 )
         return rc;
     else if ( rc > 0 )
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index c2e44ab..3e4c36d 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -71,4 +71,16 @@ void altp2m_flush(struct domain *d);
 int altp2m_destroy_by_id(struct domain *d,
                          unsigned int idx);
+/*
+ * Set memory access attributes of the gfn in the altp2m view. If the altp2m
+ * view does not contain the particular entry, copy it first from the hostp2m.
+ *
+ * Currently supports memory attribute adoptions of only one (4K) page.
+ */
+int altp2m_set_mem_access(struct domain *d,
+                          struct p2m_domain *hp2m,
+                          struct p2m_domain *ap2m,
+                          p2m_access_t a,
+                          gfn_t gfn);
+
 #endif /* __ASM_ARM_ALTP2M_H */
Regards,

[1] https://lists.xenproject.org/archives/html/xen-devel/2016-08/msg01007.html

Cheers,
~Sergej