arm64: allocate an empty user map for kernel mode.

Dale Rahn
When switching to the kernel pmap, the pmap code previously left ttbr0
unmodified, since the kernel pmap's address space is accessed entirely
through ttbr1. At one point ttbr0 was instead loaded with the ttbr1
contents, but that caused MASSIVE failures.

This diff adds an allocation of a zeroed/empty L0/L1 page table to install
in ttbr0 when switching to the kernel pmap. A zeroed top-level table
indicates there are no valid L1/L2 entries, so any access to low addresses
causes a data abort.

This table is also used when a core is about to switch to a new asid: if
the pmap determines that it needs to reuse an asid, this prevents
speculative accesses through the old ttbr0 after tlb invalidation.

Also another minor change: an invalid asid in a pmap is now always flagged
as 'pm_asid == -1'; checking for 'pm_asid < 0' could cause confusion, so
the test was changed to match the invalidation value.

A possible improvement to the code below is to allocate and zero the buffer
using a size define distinct from Lx_TABLE_ALIGN, but there appears to be
no appropriate define present.

diff --git a/sys/arch/arm64/arm64/pmap.c b/sys/arch/arm64/arm64/pmap.c
index d1fa358b85f..18aff664beb 100644
--- a/sys/arch/arm64/arm64/pmap.c
+++ b/sys/arch/arm64/arm64/pmap.c
@@ -1112,6 +1112,7 @@ CTASSERT(sizeof(struct pmapvp0) == 8192);
 
 int mappings_allocated = 0;
 int pted_allocated = 0;
+paddr_t pted_kernel_ttbr0;
 
 vaddr_t
 pmap_bootstrap(long kvo, paddr_t lpt1,  long kernelstart, long kernelend,
@@ -1214,7 +1215,12 @@ pmap_bootstrap(long kvo, paddr_t lpt1,  long kernelstart, long kernelend,
  }
  }
 
- // XXX should this extend the l2 bootstrap mappings for kernel entries?
+ /* allocate an empty ttbr0 for when we are switched to kernel pmap */
+ pted_kernel_ttbr0 = pmap_steal_avail(Lx_TABLE_ALIGN, Lx_TABLE_ALIGN,
+    &va);
+ /* zero by pa, va may not be mapped */
+ bzero((void *)pa, Lx_TABLE_ALIGN);
+
 
  /* now that we have mapping space for everything, lets map it */
  /* all of these mappings are ram -> kernel va */
@@ -2311,7 +2317,12 @@ pmap_setttb(struct proc *p, paddr_t pagedir, struct pcb *pcb)
  //int oasid = pm->pm_asid;
 
  if (pm != pmap_kernel()) {
- if (pm->pm_asid < 0) {
+ if (pm->pm_asid == -1) {
+ /*
+ * We are running in kernel, about to switch to an
+ * unknown asid, move to an unused ttbr0 entry
+ */
+ cpu_setttb(pted_kernel_ttbr0);
  pmap_allocate_asid(pm);
  pcb->pcb_pagedir = ((uint64_t)pm->pm_asid << 48) |
     pm->pm_pt0pa;
@@ -2326,6 +2337,7 @@ pmap_setttb(struct proc *p, paddr_t pagedir, struct pcb *pcb)
  cpu_tlb_flush();
  }
  } else {
- // XXX what to do if switching to kernel pmap !?!?
+ /* switch userland to empty mapping page */
+ cpu_setttb(pted_kernel_ttbr0);
  }
 }
Dale Rahn [hidden email]

Re: arm64: allocate an empty user map for kernel mode.

Mark Kettenis
> Date: Fri, 17 Mar 2017 16:12:13 -0400
> From: Dale Rahn <[hidden email]>
>
> When switching to the kernel pmap, the pmap code previously left ttbr0
> unmodified, since the kernel pmap's address space is accessed entirely
> through ttbr1. At one point ttbr0 was instead loaded with the ttbr1
> contents, but that caused MASSIVE failures.
>
> This diff adds an allocation of a zeroed/empty L0/L1 page table to install
> in ttbr0 when switching to the kernel pmap. A zeroed top-level table
> indicates there are no valid L1/L2 entries, so any access to low addresses
> causes a data abort.
>
> This table is also used when a core is about to switch to a new asid: if
> the pmap determines that it needs to reuse an asid, this prevents
> speculative accesses through the old ttbr0 after tlb invalidation.
>
> Also another minor change: an invalid asid in a pmap is now always flagged
> as 'pm_asid == -1'; checking for 'pm_asid < 0' could cause confusion, so
> the test was changed to match the invalidation value.
>
> A possible improvement to the code below is to allocate and zero the buffer
> using a size define distinct from Lx_TABLE_ALIGN, but there appears to be
> no appropriate define present.
>
> diff --git a/sys/arch/arm64/arm64/pmap.c b/sys/arch/arm64/arm64/pmap.c
> index d1fa358b85f..18aff664beb 100644
> --- a/sys/arch/arm64/arm64/pmap.c
> +++ b/sys/arch/arm64/arm64/pmap.c
> @@ -1112,6 +1112,7 @@ CTASSERT(sizeof(struct pmapvp0) == 8192);
>  
>  int mappings_allocated = 0;
>  int pted_allocated = 0;
> +paddr_t pted_kernel_ttbr0;
>  
>  vaddr_t
>  pmap_bootstrap(long kvo, paddr_t lpt1,  long kernelstart, long kernelend,
> @@ -1214,7 +1215,12 @@ pmap_bootstrap(long kvo, paddr_t lpt1,  long kernelstart, long kernelend,
>   }
>   }
>  
> - // XXX should this extend the l2 bootstrap mappings for kernel entries?
> + /* allocate an empty ttbr0 for when we are switched to kernel pmap */
> + pted_kernel_ttbr0 = pmap_steal_avail(Lx_TABLE_ALIGN, Lx_TABLE_ALIGN,
> +    &va);
> + /* zero by pa, va may not be mapped */
> + bzero((void *)pa, Lx_TABLE_ALIGN);

You're zeroing the wrong bit of memory here.  And I think it is best
to use memset these days.

I wonder if it makes more sense to set pmap_kernel()->pm_pt0pa here
and use a global variable for the page tables pointed at by ttbr1.
Then you could initialize pmap_kernel()->pm_asid to 0 and we wouldn't
need to handle the kernel pmap specially in pmap_setttb().

>   /* now that we have mapping space for everything, lets map it */
>   /* all of these mappings are ram -> kernel va */
> @@ -2311,7 +2317,12 @@ pmap_setttb(struct proc *p, paddr_t pagedir, struct pcb *pcb)
>   //int oasid = pm->pm_asid;
>  
>   if (pm != pmap_kernel()) {
> - if (pm->pm_asid < 0) {
> + if (pm->pm_asid == -1) {
> + /*
> + * We are running in kernel, about to switch to an
> + * unknown asid, move to an unused ttbr0 entry
> + */
> + cpu_setttb(pted_kernel_ttbr0);

I don't see why this is necessary.  The old ASID is perfectly valid
until we switch, and the delayed cpu_tlb_flush() will make sure we get
rid of any cached mappings of the old ASID.  This works fine even if
we had to roll over the ASIDs and happened to pick the exact same ASID
as we used before.

>   pmap_allocate_asid(pm);
>   pcb->pcb_pagedir = ((uint64_t)pm->pm_asid << 48) |
>      pm->pm_pt0pa;
> @@ -2326,6 +2337,7 @@ pmap_setttb(struct proc *p, paddr_t pagedir, struct pcb *pcb)
>   cpu_tlb_flush();
>   }
>   } else {
> - // XXX what to do if switching to kernel pmap !?!?
> + /* switch userland to empty mapping page */
> + cpu_setttb(pted_kernel_ttbr0);
>   }
>  }
> Dale Rahn [hidden email]
>
>
