@@ -34,7 +34,7 @@ semaphores used to implement blocking mutual exclusion continue to be
a proper application choice.

At the lowest level, however, Zephyr code has often used the
-``irq_lock()``/``irq_unlock()`` primitives to implement fine grained
+:c:func:`irq_lock`/:c:func:`irq_unlock` primitives to implement fine grained
critical sections using interrupt masking. These APIs continue to
work via an emulation layer (see below), but the masking technique
does not: the fact that your CPU will not be interrupted while you are
@@ -45,13 +45,13 @@ data!
Spinlocks
=========

-SMP systems provide a more constrained ``k_spin_lock()`` primitive
-that not only masks interrupts locally, as done by ``irq_lock()``, but
+SMP systems provide a more constrained :c:func:`k_spin_lock` primitive
+that not only masks interrupts locally, as done by :c:func:`irq_lock`, but
also atomically validates that a shared lock variable has been
modified before returning to the caller, "spinning" on the check if
needed to wait for the other CPU to exit the lock. The default Zephyr
-implementation of ``k_spin_lock()`` and ``k_spin_unlock()`` is built
-on top of the pre-existing ``atomic_t`` layer (itself usually
+implementation of :c:func:`k_spin_lock` and :c:func:`k_spin_unlock` is built
+on top of the pre-existing :c:type:`atomic_t` layer (itself usually
implemented using compiler intrinsics), though facilities exist for
architectures to define their own for performance reasons.

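The usage pattern is easiest to see in code. A minimal sketch follows;
the lock, counter, and function names are illustrative, not taken from
the kernel source:

.. code-block:: c

   #include <spinlock.h>

   /* Hypothetical lock protecting a shared counter. */
   static struct k_spinlock counter_lock;
   static int shared_counter;

   void safe_increment(void)
   {
           /* Masks interrupts locally, then spins until any other
            * CPU has released the lock.  The returned key must be
            * handed back to k_spin_unlock() to restore the
            * interrupt state.
            */
           k_spinlock_key_t key = k_spin_lock(&counter_lock);

           shared_counter++;

           k_spin_unlock(&counter_lock, key);
   }
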
@@ -76,7 +76,7 @@ Legacy irq_lock() emulation
===========================

For the benefit of applications written to the uniprocessor locking
-API, ``irq_lock()`` and ``irq_unlock()`` continue to work compatibly on
+API, :c:func:`irq_lock` and :c:func:`irq_unlock` continue to work compatibly on
SMP systems with identical semantics to their legacy versions. They
are implemented as a single global spinlock, with a nesting count and
the ability to be atomically reacquired on context switch into locked
@@ -88,7 +88,7 @@ release to happen.

The overhead involved in this process has measurable performance
impact, however. Unlike uniprocessor apps, SMP apps using
-``irq_lock()`` are not simply invoking a very short (often ~1
+:c:func:`irq_lock` are not simply invoking a very short (often ~1
instruction) interrupt masking operation. That, and the fact that the
IRQ lock is global, means that code expecting to be run in an SMP
context should be using the spinlock API wherever possible.
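To make the contrast concrete, here is an illustrative side-by-side
sketch; ``shared_data``, ``data_lock``, and both function names are
invented:

.. code-block:: c

   #include <kernel.h>
   #include <spinlock.h>

   static struct k_spinlock data_lock;
   static int shared_data;

   void legacy_update(void)
   {
           /* On SMP this takes the single global lock described
            * above, with the associated overhead.
            */
           unsigned int key = irq_lock();
           shared_data++;
           irq_unlock(key);
   }

   void smp_update(void)
   {
           /* Preferred on SMP: a dedicated, narrowly scoped lock. */
           k_spinlock_key_t key = k_spin_lock(&data_lock);
           shared_data++;
           k_spin_unlock(&data_lock, key);
   }
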
@@ -104,10 +104,10 @@ kconfig variable, which can associate a specific set of CPUs with each
thread, indicating on which CPUs it can run.

By default, new threads can run on any CPU. Calling
-``k_thread_cpu_mask_disable()`` with a particular CPU ID will prevent
+:c:func:`k_thread_cpu_mask_disable` with a particular CPU ID will prevent
that thread from running on that CPU in the future. Likewise
-``k_thread_cpu_mask_enable()`` will re-enable execution. There are also
-``k_thread_cpu_mask_clear()`` and ``k_thread_cpu_mask_enable_all()`` APIs
+:c:func:`k_thread_cpu_mask_enable` will re-enable execution. There are also
+:c:func:`k_thread_cpu_mask_clear` and :c:func:`k_thread_cpu_mask_enable_all` APIs
available for convenience. For obvious reasons, these APIs are
illegal if called on a runnable thread. The thread must be blocked or
suspended, otherwise an ``-EINVAL`` will be returned.
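For instance, a thread can be pinned to CPU 0 before it first runs.
This sketch relies on creating the thread with a ``K_FOREVER`` delay so
that it is not yet runnable; the stack size, priority, and names are
illustrative:

.. code-block:: c

   #include <kernel.h>

   #define MY_STACK_SIZE 1024
   #define MY_PRIORITY 5

   K_THREAD_STACK_DEFINE(my_stack, MY_STACK_SIZE);
   static struct k_thread my_thread;

   static void my_entry(void *p1, void *p2, void *p3)
   {
           /* thread body */
   }

   void start_pinned_thread(void)
   {
           /* K_FOREVER keeps the thread non-runnable, so the mask
            * APIs below are legal to call on it.
            */
           k_tid_t tid = k_thread_create(&my_thread, my_stack,
                                         MY_STACK_SIZE, my_entry,
                                         NULL, NULL, NULL,
                                         MY_PRIORITY, 0, K_FOREVER);

           k_thread_cpu_mask_clear(tid);     /* disallow all CPUs */
           k_thread_cpu_mask_enable(tid, 0); /* allow only CPU 0  */

           k_thread_start(tid);
   }
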
@@ -129,25 +129,25 @@ Auxiliary CPUs begin in a disabled state in the architecture layer.
All standard kernel initialization, including device initialization,
happens on a single CPU before other CPUs are brought online.

-Just before entering the application ``main()`` function, the kernel
-calls ``z_smp_init()`` to launch the SMP initialization process. This
+Just before entering the application :c:func:`main` function, the kernel
+calls :c:func:`z_smp_init` to launch the SMP initialization process. This
enumerates over the configured CPUs, calling into the architecture
-layer using ``arch_start_cpu()`` for each one. This function is
+layer using :c:func:`arch_start_cpu` for each one. This function is
passed a memory region to use as a stack on the foreign CPU (in
practice it uses the area that will become that CPU's interrupt
-stack), the address of a local ``smp_init_top()`` callback function to
+stack), the address of a local :c:func:`smp_init_top` callback function to
run on that CPU, and a pointer to a "start flag" address which will be
used as an atomic signal.

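For reference, the contract of that architecture hook amounts to
roughly the following prototype; this is sketched from the description
above, and the exact signature may differ between Zephyr versions:

.. code-block:: c

   /* Begin executing fn(arg) on CPU cpu_num, using the supplied
    * memory region as its initial stack.
    */
   void arch_start_cpu(int cpu_num, k_thread_stack_t *stack, int sz,
                       arch_cpustart_t fn, void *arg);
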
-The local SMP initialization (``smp_init_top()``) on each CPU is then
+The local SMP initialization (:c:func:`smp_init_top`) on each CPU is then
invoked by the architecture layer. Note that interrupts are still
masked at this point. This routine is responsible for calling
-``smp_timer_init()`` to set up any needed stat in the timer driver. On
+:c:func:`smp_timer_init` to set up any needed state in the timer driver. On
many architectures the timer is a per-CPU device and needs to be
configured specially on auxiliary CPUs. Then it waits (spinning) for
the atomic "start flag" to be released in the main thread, to
guarantee that all SMP initialization is complete before any Zephyr
-application code runs, and finally calls ``z_swap()`` to transfer
+application code runs, and finally calls :c:func:`z_swap` to transfer
control to the appropriate runnable thread via the standard scheduler
API.

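Put together, the per-CPU startup path looks approximately like this.
The sketch below is illustrative; the real :c:func:`smp_init_top` lives
in the kernel tree and differs in detail:

.. code-block:: c

   static void smp_init_top(void *arg)
   {
           atomic_t *start_flag = arg;

           /* Timers are frequently per-CPU devices, so each
            * auxiliary CPU configures its own instance.
            */
           smp_timer_init();

           /* Spin until the main thread signals that kernel
            * initialization is complete.
            */
           while (atomic_get(start_flag) == 0) {
           }

           /* The real implementation now calls z_swap() to enter
            * the scheduler on this CPU and never returns.
            */
   }
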
@@ -166,7 +166,7 @@ When running in multiprocessor environments, it is occasionally the
case that state modified on the local CPU needs to be synchronously
handled on a different processor.

-One example is the Zephyr ``k_thread_abort()`` API, which cannot return
+One example is the Zephyr :c:func:`k_thread_abort` API, which cannot return
until the thread that had been aborted is no longer runnable. If it
is currently running on another CPU, that becomes difficult to
implement.
@@ -180,9 +180,9 @@ handle the newly-runnable load.

So where possible, Zephyr SMP architectures should implement an
interprocessor interrupt. The current framework is very simple: the
-architecture provides a ``arch_sched_ipi()`` call, which when invoked
+architecture provides an :c:func:`arch_sched_ipi` call, which when invoked
will flag an interrupt on all CPUs (except the current one, though
-that is allowed behavior) which will then invoke the ``z_sched_ipi()``
+that is allowed behavior) which will then invoke the :c:func:`z_sched_ipi`
function implemented in the scheduler. The expectation is that these
APIs will evolve over time to encompass more functionality
(e.g. cross-CPU calls), and that the scheduler-specific calls here
@@ -193,7 +193,7 @@ Note that not all SMP architectures will have a usable IPI mechanism
Zephyr provides fallback behavior that is correct, but perhaps
suboptimal.

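Where the hardware cooperates, the architecture side of the IPI can be
tiny. A hypothetical sketch, assuming an invented memory-mapped trigger
register and interrupt wiring (neither is a real Zephyr or SoC
interface):

.. code-block:: c

   #include <stdint.h>

   /* Hypothetical register that raises an IRQ on the other CPUs
    * when written.
    */
   #define IPI_TRIGGER_REG (*(volatile uint32_t *)0x4000f000)

   void arch_sched_ipi(void)
   {
           IPI_TRIGGER_REG = 1;
   }

   /* ISR attached to the IPI line on each CPU; z_sched_ipi() is
    * the scheduler-provided handler described above.
    */
   static void ipi_isr(const void *unused)
   {
           ARG_UNUSED(unused);
           z_sched_ipi();
   }
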
-Using this, ``k_thread_abort()`` becomes only slightly more
+Using this, :c:func:`k_thread_abort` becomes only slightly more
complicated in SMP: for the case where a thread is actually running on
another CPU (we can detect this atomically inside the scheduler), we
broadcast an IPI and spin, waiting for the thread to either become
@@ -239,15 +239,15 @@ running concurrently. Likewise a kernel-provided interrupt stack
needs to be created and assigned for each physical CPU, as does the
interrupt nesting count used to detect ISR state.

-These fields are now moved into a separate ``struct _cpu`` instance
-within the ``_kernel`` struct, which has a ``cpus[]`` array indexed by ID.
+These fields are now moved into a separate :c:struct:`_cpu` instance
+within the :c:struct:`_kernel` struct, which has a ``cpus[]`` array indexed by ID.
Compatibility fields are provided for legacy uniprocessor code trying
to access the fields of ``cpus[0]`` using the older syntax and assembly
offsets.

Note that an important requirement on the architecture layer is that
the pointer to this CPU struct be available rapidly when in kernel
-context. The expectation is that ``arch_curr_cpu()`` will be
+context. The expectation is that :c:func:`arch_curr_cpu` will be
implemented using a CPU-provided register or addressing mode that can
store this value across arbitrary context switches or interrupts and
make it available to any kernel-mode code.
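As an illustration, kernel-mode code can reach its per-CPU record along
these lines. ``struct _cpu`` is kernel-internal, so treat the include
and field names as a sketch that may not match every version:

.. code-block:: c

   #include <kernel_structs.h>

   void note_isr_state(void)
   {
           /* Typically a single register read or special
            * addressing mode.
            */
           struct _cpu *cpu = arch_curr_cpu();

           /* "nested" is the interrupt nesting count mentioned
            * above.
            */
           if (cpu->nested > 0) {
                   /* running in ISR context on this CPU */
           }
   }
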
@@ -263,7 +263,7 @@ a separate field in the thread struct.
Switch-based context switching
==============================

-The traditional Zephyr context switch primitive has been ``z_swap()``.
+The traditional Zephyr context switch primitive has been :c:func:`z_swap`.
Unfortunately, this function takes no argument specifying a thread to
switch to. The expectation has always been that the scheduler has
already made its preemption decision when its state was last modified
@@ -278,22 +278,22 @@ Instead, the SMP "switch to" decision needs to be made synchronously
with the swap call, and as we don't want per-architecture assembly
code to be handling scheduler internal state, Zephyr requires a
somewhat lower-level context switch primitive for SMP systems:
-``arch_switch()`` is always called with interrupts masked, and takes
+:c:func:`arch_switch` is always called with interrupts masked, and takes
exactly two arguments. The first is an opaque (architecture defined)
handle to the context to which it should switch, and the second is a
pointer to such a handle into which it should store the handle
resulting from the thread that is being switched out.

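In prototype form the contract described above looks roughly like this;
treat the exact spelling as version-dependent:

.. code-block:: c

   /* Switch to the context named by switch_to, storing the
    * outgoing thread's handle through switched_from.  Called
    * with interrupts masked.
    */
   static inline void arch_switch(void *switch_to, void **switched_from);
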
-The kernel then implements a portable ``z_swap()`` implementation on top
+The kernel then builds a portable :c:func:`z_swap` implementation on top
of this primitive which includes the relevant scheduler logic in a
location where the architecture doesn't need to understand it.
Similarly, on interrupt exit, switch-based architectures are expected
-to call ``z_get_next_switch_handle()`` to retrieve the next thread to
+to call :c:func:`z_get_next_switch_handle` to retrieve the next thread to
run from the scheduler, passing in an "interrupted" handle reflecting
the same opaque type used by switch, which the kernel will then save
in the interrupted thread struct.

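The interrupt-exit side, which real ports implement in assembly,
amounts to something like the following C rendering;
``restore_context()`` is a hypothetical architecture helper, not a
Zephyr API:

.. code-block:: c

   void isr_exit(void *interrupted_handle)
   {
           /* Ask the scheduler what to run next, handing it the
            * handle of the thread this interrupt preempted.
            */
           void *next = z_get_next_switch_handle(interrupted_handle);

           if (next != interrupted_handle) {
                   restore_context(next); /* resume the new thread */
           }
   }
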
Note that while SMP requires :option:`CONFIG_USE_SWITCH`, the reverse is not
-true. A uniprocessor architecture built with :option:`CONFIG_SMP` = n might
+true. A uniprocessor architecture built with :option:`CONFIG_SMP` set to ``n`` might
still decide to implement its context switching using
-``arch_switch()``.
+:c:func:`arch_switch`.