Re: [PATCH 2/2] xen/arm: Support runstate crossing pages
Hi,
On 11/06/2020 12:58, Bertrand Marquis wrote:
Add support for registering a runstate area with the structure crossing pages
Well, this has always been supported until your previous patch. In
general, we try not to break things in the middle of a series so it can
still be bisected.
I think this patch can be simplified a lot by mapping the two pages
contiguously (see my previous answer). With that, it would be feasible to
fold this patch into #1.
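
To make that concrete, here is a rough sketch (not code from this series;
the helper name and error handling are illustrative, and references are
dropped the same way the quoted patch does) of mapping the one or two
guest pages backing the runstate area through vmap() so Xen sees a single
contiguous window:

static void *map_runstate_pages(struct vcpu *v, vaddr_t gva,
                                unsigned int nr, struct page_info **pg)
{
    mfn_t mfn[2];
    unsigned int i;

    ASSERT(nr <= 2);

    for ( i = 0; i < nr; i++ )
    {
        /* Take a reference on each guest page backing the area. */
        pg[i] = get_page_from_gva(v, gva + i * PAGE_SIZE, GV2M_WRITE);
        if ( !pg[i] )
        {
            while ( i-- )
                put_page_and_type(pg[i]);
            return NULL;
        }
        mfn[i] = page_to_mfn(pg[i]);
    }

    /* One virtually contiguous mapping covering both pages. */
    return vmap(mfn, nr);
}

The context switch could then keep a single pointer and a plain memcpy(),
and most of the page-crossing special-casing in this patch would go away.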
The code stores references to up to 2 pages during the hypercall.
During a context switch, the code computes where state_entry_time is
and breaks the memcpy in 2 parts when required.
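
For the record, my understanding of the split copy described here is
something like the sketch below (assumed shape only, not the actual hunk;
__map_domain_page()/unmap_domain_page() stand in for however the pages are
accessed, the helper name is made up, and the state_entry_time /
XEN_RUNSTATE_UPDATE handling is omitted):

static void update_runstate_split(struct vcpu *v)
{
    uint8_t *p0 = __map_domain_page(v->arch.runstate_guest.page[0]);
    unsigned int off = v->arch.runstate_guest.offset;
    unsigned int first = PAGE_SIZE - off;    /* bytes available on page 0 */

    if ( first >= sizeof(v->runstate) )
        /* Area fully contained in the first page: single copy. */
        memcpy(p0 + off, &v->runstate, sizeof(v->runstate));
    else
    {
        /* Crossing pages: head goes to page 0, tail to page 1. */
        uint8_t *p1 = __map_domain_page(v->arch.runstate_guest.page[1]);

        memcpy(p0 + off, &v->runstate, first);
        memcpy(p1, (uint8_t *)&v->runstate + first,
               sizeof(v->runstate) - first);
        unmap_domain_page(p1);
    }

    unmap_domain_page(p0);
}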
Signed-off-by: Bertrand Marquis <bertrand.marquis@xxxxxxx>
---
xen/arch/arm/domain.c | 101 +++++++++++++++++++++++++----------
xen/include/asm-arm/domain.h | 5 +-
2 files changed, 76 insertions(+), 30 deletions(-)
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 739059234f..d847cb00f2 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -280,11 +280,16 @@ void arch_cleanup_runstate_guest(struct vcpu *v)
 {
     spin_lock(&v->arch.runstate_guest.lock);
-    /* cleanup previous page if any */
-    if ( v->arch.runstate_guest.page )
+    /* cleanup previous pages if any */
+    if ( v->arch.runstate_guest.page[0] )
     {
-        put_page_and_type(v->arch.runstate_guest.page);
-        v->arch.runstate_guest.page = NULL;
+        put_page_and_type(v->arch.runstate_guest.page[0]);
+        v->arch.runstate_guest.page[0] = NULL;
+        if ( v->arch.runstate_guest.page[1] )
+        {
+            put_page_and_type(v->arch.runstate_guest.page[1]);
+            v->arch.runstate_guest.page[1] = NULL;
+        }
I think the page[1] cleanup can be moved outside of the first if.
         v->arch.runstate_guest.offset = 0;
     }
@@ -294,26 +299,25 @@ void arch_cleanup_runstate_guest(struct vcpu *v)
 int arch_setup_runstate_guest(struct vcpu *v,
                               struct vcpu_register_runstate_memory_area area)
 {
-    struct page_info *page;
+    struct page_info *page[2] = {NULL,NULL};
I don't think you need the temporary variable. You can directly update
page[0] as it is protected by the lock. The nice benefit is that you could
take advantage of a common helper for the cleanup and reduce the complexity
of the code.
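
For instance (purely illustrative, the name is made up), a helper expected
to be called with the lock held could look like:

static void cleanup_runstate_vcpu_locked(struct vcpu *v)
{
    unsigned int i;

    ASSERT(spin_is_locked(&v->arch.runstate_guest.lock));

    for ( i = 0; i < ARRAY_SIZE(v->arch.runstate_guest.page); i++ )
    {
        if ( v->arch.runstate_guest.page[i] )
        {
            put_page_and_type(v->arch.runstate_guest.page[i]);
            v->arch.runstate_guest.page[i] = NULL;
        }
    }

    v->arch.runstate_guest.offset = 0;
}

arch_cleanup_runstate_guest() and the re-registration path here would then
both boil down to a call to it.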
     unsigned offset;
     spin_lock(&v->arch.runstate_guest.lock);
-    /* cleanup previous page if any */
-    if ( v->arch.runstate_guest.page )
+    /* cleanup previous pages if any */
+    if ( v->arch.runstate_guest.page[0] )
     {
-        put_page_and_type(v->arch.runstate_guest.page);
-        v->arch.runstate_guest.page = NULL;
+        put_page_and_type(v->arch.runstate_guest.page[0]);
+        v->arch.runstate_guest.page[0] = NULL;
+        if ( v->arch.runstate_guest.page[1] )
+        {
+            put_page_and_type(v->arch.runstate_guest.page[1]);
+            v->arch.runstate_guest.page[1] = NULL;
+        }
         v->arch.runstate_guest.offset = 0;
     }
     offset = ((vaddr_t)area.addr.v) & ~PAGE_MASK;
-    if ( offset > (PAGE_SIZE - sizeof(struct vcpu_runstate_info)) )
-    {
-        spin_unlock(&v->arch.runstate_guest.lock);
-        gprintk(XENLOG_DEBUG, "Runstate is crossing pages\n");
-        return -EINVAL;
-    }
     /* provided address must be aligned to a 64bit */
     if ( offset % alignof(struct vcpu_runstate_info) )
@@ -323,15 +327,30 @@ int arch_setup_runstate_guest(struct vcpu *v,
         return -EINVAL;
     }
-    page = get_page_from_gva(v, (vaddr_t)area.addr.v, GV2M_WRITE);
-    if ( !page )
+    page[0] = get_page_from_gva(v, (vaddr_t)area.addr.v, GV2M_WRITE);
+    if ( !page[0] )
     {
         spin_unlock(&v->arch.runstate_guest.lock);
         gprintk(XENLOG_DEBUG, "Runstate pointer is not mapped\n");
         return -EINVAL;
     }
-    v->arch.runstate_guest.page = page;
+    if ( offset > (PAGE_SIZE - sizeof(struct vcpu_runstate_info)) )
+    {
+        /* guest area is crossing pages */
+        page[1] = get_page_from_gva(v, (vaddr_t)area.addr.v + PAGE_SIZE,
+                                    GV2M_WRITE);
+        if ( !page[1] )
+        {
+            put_page_and_type(v->arch.runstate_guest.page[0]);
v->arch.runstate_guest.page[0] would still be NULL here, as you only set it
afterwards. So you want to set v->arch.runstate_guest.page[0] beforehand.
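
I.e. something along these lines (a sketch only; the second gprintk
message is illustrative):

    /* Store page[0] in the vCPU before trying to map the second page... */
    v->arch.runstate_guest.page[0] = page[0];

    if ( offset > (PAGE_SIZE - sizeof(struct vcpu_runstate_info)) )
    {
        /* guest area is crossing pages */
        page[1] = get_page_from_gva(v, (vaddr_t)area.addr.v + PAGE_SIZE,
                                    GV2M_WRITE);
        if ( !page[1] )
        {
            /* ... so the cleanup here releases a non-NULL reference. */
            put_page_and_type(v->arch.runstate_guest.page[0]);
            v->arch.runstate_guest.page[0] = NULL;
            spin_unlock(&v->arch.runstate_guest.lock);
            gprintk(XENLOG_DEBUG, "Runstate second page is not mapped\n");
            return -EINVAL;
        }
    }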
Cheers,
--
Julien Grall