Commit c3046f4

andyross authored and jgl-meta committed
kernel/sched: Use new barrier and spin APIs
The switch_handle field in the thread struct is used as an atomic flag
between CPUs in SMP, and has been known for a long time to technically
require memory barriers for correct operation. We have an API for that
now, so put them in:

* The code immediately before arch_switch() needs a write barrier to
  ensure that thread state written by the scheduler is seen to happen
  before the outgoing thread is flagged with a valid switch handle.

* The loop in z_sched_switch_spin() needs a read barrier at the end,
  to make sure the calling context doesn't load state from before the
  other CPU stored the switch handle.

Also, that same spot in switch_spin was spinning with interrupts held,
which means it needs a call to arch_spin_relax() to avoid an FPU state
deadlock on some architectures.

Signed-off-by: Andy Ross <[email protected]>
1 parent b89e427 commit c3046f4
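
For readers less familiar with the pattern the commit message describes, below is a minimal, self-contained sketch of the two-sided handshake: one CPU writes thread state, fences, then publishes the flag; the other spins on the flag, fences, then reads the state. It uses C11 atomic_thread_fence() and threads as stand-ins for Zephyr's barrier_dmem_fence_full() and scheduler machinery; all names here (fake_thread, publisher, waiter, dummy_handle) are invented for the example and are not kernel code.

/* Illustrative sketch only: models the switch_handle handshake with
 * C11 atomics/threads. Not Zephyr code.
 */
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

struct fake_thread {
	int scheduler_state;         /* state written before publication */
	void *_Atomic switch_handle; /* the flag the other CPU spins on */
};

static struct fake_thread t;
static int dummy_handle;

static int publisher(void *arg)
{
	(void)arg;
	t.scheduler_state = 42;	/* scheduler writes thread state... */

	/* Write barrier: the state store above must become visible
	 * before the handle store below (the role played by
	 * barrier_dmem_fence_full() before arch_switch()).
	 */
	atomic_thread_fence(memory_order_seq_cst);
	atomic_store_explicit(&t.switch_handle, &dummy_handle,
			      memory_order_relaxed);
	return 0;
}

static int waiter(void *arg)
{
	(void)arg;
	while (atomic_load_explicit(&t.switch_handle,
				    memory_order_relaxed) == NULL) {
		/* arch_spin_relax() would go here in the kernel */
	}

	/* Read barrier: don't let the state load below be satisfied
	 * from before the handle was observed non-NULL.
	 */
	atomic_thread_fence(memory_order_seq_cst);
	printf("state = %d\n", t.scheduler_state); /* always 42 */
	return 0;
}

int main(void)
{
	thrd_t a, b;

	thrd_create(&a, publisher, NULL);
	thrd_create(&b, waiter, NULL);
	thrd_join(a, NULL);
	thrd_join(b, NULL);
	return 0;
}

Without the two fences, the waiter could legally observe the handle as non-NULL yet still read a stale scheduler_state; that is exactly the reordering the commit closes off.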

1 file changed: +12 −6

kernel/include/kswap.h

Lines changed: 12 additions & 6 deletions
@@ -8,6 +8,7 @@
 
 #include <ksched.h>
 #include <zephyr/spinlock.h>
+#include <zephyr/sys/barrier.h>
 #include <kernel_arch_func.h>
 
 #ifdef CONFIG_STACK_SENTINEL
@@ -48,19 +49,20 @@ void z_smp_release_global_lock(struct k_thread *thread);
  * treat this because the scheduler lock can't be released by the
  * switched-to thread, which is going to (obviously) be running its
  * own code and doesn't know it was switched out.
- *
- * Note: future SMP architectures may need a fence/barrier or cache
- * invalidation here. Current ones don't, and sadly Zephyr doesn't
- * have a framework for that yet.
  */
 static inline void z_sched_switch_spin(struct k_thread *thread)
 {
 #ifdef CONFIG_SMP
 	volatile void **shp = (void *)&thread->switch_handle;
 
 	while (*shp == NULL) {
-		k_busy_wait(1);
+		arch_spin_relax();
 	}
+	/* Read barrier: don't allow any subsequent loads in the
+	 * calling code to reorder before we saw switch_handle go
+	 * non-null.
+	 */
+	barrier_dmem_fence_full();
 #endif
 }
 
@@ -154,8 +156,12 @@ static ALWAYS_INLINE unsigned int do_swap(unsigned int key,
 		void *newsh = new_thread->switch_handle;
 
 		if (IS_ENABLED(CONFIG_SMP)) {
-			/* Active threads MUST have a null here */
+			/* Active threads must have a null here. And
+			 * it must be seen before the scheduler lock
+			 * is released!
+			 */
 			new_thread->switch_handle = NULL;
+			barrier_dmem_fence_full(); /* write barrier */
 		}
 		k_spin_release(&sched_spinlock);
 		arch_switch(newsh, &old_thread->switch_handle);
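
Two notes on the design, for context. The fence used on both sides is a full data-memory barrier; a store-release/load-acquire pairing would in principle be the minimal ordering needed for this one-way handshake, so the full fence is the conservative choice given the barrier API the commit adopts. And swapping k_busy_wait(1) for arch_spin_relax() matters because this loop can spin with interrupts locked: arch_spin_relax() gives the architecture a hook to do whatever is needed to make such a spin safe (per the commit message, avoiding an FPU state deadlock on some architectures), where a plain microsecond busy-wait would not.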
