
Commit 8307b30

andyross authored and cfriedt committed
kernel/sched: Use new barrier and spin APIs
The switch_handle field in the thread struct is used as an atomic flag
between CPUs in SMP, and has been known for a long time to technically
require memory barriers for correct operation. We have an API for that
now, so put them in:

* The code immediately before arch_switch() needs a write barrier to
  ensure that thread state written by the scheduler is seen to happen
  before the outgoing thread is flagged with a valid switch handle.

* The loop in z_sched_switch_spin() needs a read barrier at the end,
  to make sure the calling context doesn't load state from before the
  other CPU stored the switch handle.

Also, that same spot in switch_spin was spinning with interrupts held,
which means it needs a call to arch_spin_relax() to avoid an FPU state
deadlock on some architectures.

Signed-off-by: Andy Ross <[email protected]>
(cherry picked from commit c3046f4)
1 parent 5fd65d6 commit 8307b30
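To illustrate the pairing the commit message describes, here is a minimal, self-contained sketch of the same flag-handoff pattern using C11 fences. It is not the Zephyr code itself: the names publish_handle(), wait_for_handle(), thread_state, and switch_flag are hypothetical, and strictly speaking C11 fences only order atomic accesses, so this mirrors the kernel's volatile-plus-fence idiom rather than being fully portable, conformant C.

#include <stdatomic.h>
#include <stddef.h>

static int thread_state;            /* stands in for state written by the scheduler */
static void *volatile switch_flag;  /* stands in for thread->switch_handle */

/* Outgoing CPU: write the state, fence, then set the flag, so the
 * waiting CPU can never observe the flag without the state.
 */
static void publish_handle(void *handle)
{
	thread_state = 1;
	atomic_thread_fence(memory_order_seq_cst);   /* write barrier */
	switch_flag = handle;
}

/* Waiting CPU: spin until the flag is non-null, then fence before
 * reading the state, so the state load cannot be satisfied from a
 * value read before the flag was seen to change.
 */
static int wait_for_handle(void)
{
	while (switch_flag == NULL) {
		/* spin / relax */
	}
	atomic_thread_fence(memory_order_seq_cst);   /* read barrier */
	return thread_state;
}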

1 file changed: +22 −6 lines

kernel/include/kswap.h

Lines changed: 22 additions & 6 deletions
@@ -7,6 +7,7 @@
 #define ZEPHYR_KERNEL_INCLUDE_KSWAP_H_
 
 #include <ksched.h>
+#include <sys/atomic.h>
 #include <spinlock.h>
 #include <kernel_arch_func.h>
 
@@ -48,19 +49,25 @@ void z_smp_release_global_lock(struct k_thread *thread);
  * treat this because the scheduler lock can't be released by the
  * switched-to thread, which is going to (obviously) be running its
  * own code and doesn't know it was switched out.
- *
- * Note: future SMP architectures may need a fence/barrier or cache
- * invalidation here. Current ones don't, and sadly Zephyr doesn't
- * have a framework for that yet.
  */
 static inline void z_sched_switch_spin(struct k_thread *thread)
 {
 #ifdef CONFIG_SMP
 	volatile void **shp = (void *)&thread->switch_handle;
 
 	while (*shp == NULL) {
-		k_busy_wait(1);
+		__asm__ __volatile__ ("" ::: "memory");
 	}
+	/* Read barrier: don't allow any subsequent loads in the
+	 * calling code to reorder before we saw switch_handle go
+	 * non-null.
+	 */
+#if defined(__GNUC__)
+	/* GCC-ism */
+	__atomic_thread_fence(__ATOMIC_SEQ_CST);
+#else
+	atomic_thread_fence(memory_order_seq_cst);
+#endif
 #endif
 }
 
@@ -152,8 +159,17 @@ static ALWAYS_INLINE unsigned int do_swap(unsigned int key,
 		void *newsh = new_thread->switch_handle;
 
 		if (IS_ENABLED(CONFIG_SMP)) {
-			/* Active threads MUST have a null here */
+			/* Active threads must have a null here. And
+			 * it must be seen before the scheduler lock
+			 * is released!
+			 */
 			new_thread->switch_handle = NULL;
+#if defined(__GNUC__)
+			/* GCC-ism */
+			__atomic_thread_fence(__ATOMIC_SEQ_CST);
+#else
+			atomic_thread_fence(memory_order_seq_cst);
+#endif
 		}
 		k_spin_release(&sched_spinlock);
 		arch_switch(newsh, &old_thread->switch_handle);

0 commit comments
