This allows for builds with CONFIG_SYS_CLOCK_EXISTS=n, in which case
busy waits are achieved with a crude CPU loop. If accuracy is ever
needed even with such a configuration, then implementing
arch_busy_wait() should be considered.
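For illustration, a minimal sketch of such a crude loop (the helper
name and the absence of calibration are assumptions, not the actual
code):
```c
#include <stdint.h>

/* Hypothetical fallback: burns CPU cycles with no clock source. The
 * wall time per iteration depends on CPU and compiler, hence "crude".
 */
static void crude_busy_wait(uint32_t usec_to_wait)
{
	volatile uint32_t count = usec_to_wait;

	while (count != 0U) {
		count--;
	}
}
```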
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Since the rbtree is being used as a list because we can no longer
assume that the object pointer is the address of the data field in the
dynamic object struct, let's just use the already existing dlist for
tracking dynamic kernel objects.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Fix the preference allocation logic. If the pool is preferred but
POOL_SIZE is 0 or the pool allocation fails, fall back to heap
allocation if it is enabled.
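A sketch of the intended order (all names below are hypothetical
stand-ins for the real symbols):
```c
#include <stddef.h>

static void *alloc_preferring_pool(size_t size)
{
	void *obj = NULL;

	if (PREFER_POOL && (POOL_SIZE > 0)) {
		obj = pool_alloc(size);     /* hypothetical pool allocator */
	}
	if ((obj == NULL) && HEAP_ALLOC_ENABLED) {
		obj = heap_alloc(size);     /* hypothetical heap fallback */
	}
	return obj;
}
```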
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Add support for dynamic thread stack objects. A new container for this
kernel object was added to avoid imposing its alignment constraint on
all dynamic objects.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Add a new API to dynamically allocate kernel objects that allows
passing an arbitrary size. This new API makes it possible to allocate
dynamic thread stacks.
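A hedged usage sketch (the function and object-type names are
assumptions, as neither appears verbatim in this message):
```c
/* Allocate a kernel object with a caller-chosen size, e.g. a stack. */
k_thread_stack_t *stack =
	k_object_alloc_size(K_OBJ_THREAD_STACK_ELEMENT, stack_size);
```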
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
While the LOCKED pattern is universally useful, it can be misused. This
change therefore exposes the LOCKED pattern with extensive usage
documentation to reduce the risk of abuse or unintended deadlock.
Signed-off-by: Florian Grandel <fgrandel@code-for-humans.de>
Update the return value of functions that modify the internal event
state from `void` to `uint32_t`, so that calling code can determine
whether the event was already in a given state, or if the call modified
it.
This simplifies the usage of `struct k_event` as an alternative to
`atomic_t` that users can block on.
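A usage sketch (`my_event` and `EVT_READY` are hypothetical; the return
value is assumed to be the event state prior to the call):
```c
uint32_t prev = k_event_post(&my_event, EVT_READY);

if ((prev & EVT_READY) == 0U) {
	/* The event was not previously set; this call changed it. */
}
```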
Implements #57216
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
Scheduling relative timeouts from within timer callbacks (i.e. sys
clock ISR context) differs from scheduling relative timeouts from an
application context.
This change documents and explains the rationale for this distinction.
Signed-off-by: Florian Grandel <fgrandel@code-for-humans.de>
Device dependencies are not always required, so make them optional via
CONFIG_DEVICE_DEPS. When enabled, the gen_device_deps script will run so
that dependencies are collected and become part of the final image.
Related APIs will also be made available. Since device dependencies are
used in
just a few places (power domains), disable the feature by default. When
not enabled, a second linking pass will not be required.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
The option can now be set by projects. This change will also allow
making it dependent on a future CONFIG_DEVICE_DEPS option.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Rename the Kconfig option to be in line with recent renamings in device
handles/dependencies.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Rename struct device `handles` member to `deps`, in line with previous
renamings in the device API.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
This adds a few lines using zephyr_syscall_header() to include headers
containing syscall function prototypes.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Only set a CPU as active (in the PM subsystem) when the CPU is
effectively initialized. We cannot assume in the PM subsystem that all
CPUs were initialized, since when the CONFIG_SMP_BOOT_DELAY option is
used, CPUs are initialized on demand by the application.
Note that once CPUs are properly initialized, the subsystem is able to
track their status.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
As discovered by Carlo Caione, the k_thread_join code had a case where
it detected it had been called on a thread already marked _THREAD_DEAD
and exited early. That's not sufficient. The thread state is mutated
from the thread itself on its exit path. It may still be running!
Just like the code in z_swap(), we need to spin waiting on the other
CPU to write the switch handle before knowing it's safe to return,
otherwise the calling context might (and did) do something like
immediately k_thread_create() a new thread in the "dead" thread's
struct while it was still running on the other core.
There was also a similar case in k_thread_abort() which had the same
issue: it needs to spin waiting on the other CPU to kill the thread
via the same mechanism.
Fixes #58116
Originally-by: Carlo Caione <ccaione@baylibre.com>
Signed-off-by: Andy Ross <andyross@google.com>
The switch_handle field in the thread struct is used as an atomic flag
between CPUs in SMP, and has been known for a long time to technically
require memory barriers for correct operation. We have an API for
that now, so put them in:
* The code immediately before arch_switch() needs a write barrier to
ensure that thread state written by the scheduler is seen to happen
before the outgoing thread is flagged with a valid switch handle.
* The loop in z_sched_switch_spin() needs a read barrier at the end,
to make sure the calling context doesn't load state from before the
other CPU stored the switch handle.
Also, that same spot in switch_spin was spinning with interrupts
masked, which means it needs a call to arch_spin_relax() to avoid an
FPU state deadlock on some architectures.
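A sketch of the resulting spin loop (details hedged; the fence name
follows the barrier API introduced elsewhere in this series):
```c
static inline void switch_spin(struct k_thread *thread)
{
	/* Spin until the other CPU publishes its switch handle. */
	volatile void **shp = (void *)&thread->switch_handle;

	while (*shp == NULL) {
		arch_spin_relax();  /* avoid FPU-state deadlock, see above */
	}
	/* Read barrier: don't consume thread state stored before the
	 * other CPU wrote the handle.
	 */
	barrier_data_memory_fence_full();
}
```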
Signed-off-by: Andy Ross <andyross@google.com>
This trick turns out also to be needed by the abort/join code.
Promote it to a more formal-looking internal API and clean up the
documentation to (hopefully) clarify the exact behavior and better
explain the need.
This is one of the more... enchanted bits of the scheduler, and while
the trick is IMHO pretty clean, it remains a big SMP footgun.
Signed-off-by: Andy Ross <andyross@google.com>
Many RTOS applications assume that virtual and physical addresses are
mapped 1:1, so add 1:1 mapping support in z_phys_map() to make it
easier to adapt these applications.
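A hedged usage sketch (the flag names are assumptions based on the
final API, not taken from this message; `phys_addr` and `size` are
supplied by the caller):
```c
uint8_t *virt;

/* Request an identity (1:1) mapping so that virt == phys afterwards. */
z_phys_map(&virt, phys_addr, size, K_MEM_PERM_RW | K_MEM_DIRECT_MAP);
```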
Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
Give architectures that need it the ability to perform special checks
while e.g. waiting for a spinlock to become available.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Introduce a new API for barrier operations starting with a general
skeleton and the implementation for barrier_data_memory_fence_full().
Select a built-in or an arch-based implementation according to new
Kconfig symbols CONFIG_BARRIER_OPERATIONS_BUILTIN and
CONFIG_BARRIER_OPERATIONS_ARCH.
The built-in implementation falls back on the compiler built-in
function using __ATOMIC_SEQ_CST, as is already done for the atomic
APIs.
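That is, a sketch of the BUILTIN variant:
```c
/* Full data memory fence via the compiler built-in, mirroring the
 * atomic APIs' use of __ATOMIC_SEQ_CST.
 */
static ALWAYS_INLINE void barrier_data_memory_fence_full(void)
{
	__atomic_thread_fence(__ATOMIC_SEQ_CST);
}
```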
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
z_page_frame can't be packed on Xtensa due to memory alignment
constraints. When this struct is packed it is 5 bytes long, which
causes a memory alignment problem on Xtensa.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Until now, iterable section APIs have been part of the toolchain
(common) headers. They are not strictly related to a toolchain; they
just rely on the linker providing support for sections. Most files
relied on indirect includes to access the API; now, it is included as
needed.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
When a running thread gets aborted asynchronously (this only happens
in SMP contexts, obviously) it gets flagged "aborting", but the actual
abort needs to happen in the thread's own context. For convenience,
this was done in the next_up() routine that selects the next thread to
run at interrupt exit time.
But this check was being done AFTER the next candidate thread was
selected from the run queue. Thread abort can wake up threads blocked
in k_thread_join(), and therefore these weren't seen as runnable
threads, even if they should have been.
Executive summary: if you killed a thread running on another CPU, and
there was another thread joined to the killed thread that should have
run on that CPU, it wouldn't (until it received an interrupt or
otherwise reached a schedule point).
Move the abort check above the run queue inspection and into the
end-of-interrupt processing in z_get_next_switch_handle() (so it's
actually a mild performance boost as it's no longer part of the
cooperative context switch path). Simple fix, subtle bug.
Fixes #58040
Signed-off-by: Andy Ross <andyross@google.com>
The exception handler (arch/x86/core/ia32/excstub.S) may access the
_kernel variable, which will lead to failures when paging is enabled,
so make this critical variable pinned.
Signed-off-by: Qipeng Zha <qipeng.zha@intel.com>
The ACE 2.0 LNL platform has 5 HIFI4 cores. Change the number of cores
to enable the 5th core on the platform.
Signed-off-by: Jaroslaw Stelter <Jaroslaw.Stelter@intel.com>
Without these parentheses, specifying a q_max_msgs of e.g.
`MY_DEFAULT_QUEUESIZE+1` would result in a buffer of size
(1 element + MY_DEFAULT_QUEUESIZE bytes).
This would then lead to an unbounded buffer overflow because the queue
never reaches the exact (offset by MY_DEFAULT_QUEUESIZE bytes)
`buffer_end` and just keeps writing.
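A minimal illustration of the precedence bug (the macro shape is
simplified, not the exact kernel macro):
```c
/* Before: q_max_msgs is not parenthesized in the size expression. */
#define BUF_BYTES_BAD(q_max_msgs, q_msg_size)   (q_max_msgs * q_msg_size)
/* BUF_BYTES_BAD(MY_DEFAULT_QUEUESIZE+1, msg_size) expands to
 * MY_DEFAULT_QUEUESIZE + (1 * msg_size), i.e. one element plus
 * MY_DEFAULT_QUEUESIZE bytes.
 */

/* After: */
#define BUF_BYTES_FIXED(q_max_msgs, q_msg_size) ((q_max_msgs) * (q_msg_size))
```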
Additionally, add asserts to make sure this can't happen again.
Signed-off-by: Armin Brauns <armin.brauns@embedded-solutions.at>
Use iterable sections to handle the device list. This simplifies the
devices implementation by using standard APIs.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
When building sample.minimal.mt-no-preempt-no-timers.arm on arm-clang
we get a link error as z_pm_save_idle_exit expects sys_clock_idle_exit
to be defined.
However, the sample sets CONFIG_SYS_CLOCK_EXISTS=n, so
sys_clock_idle_exit() will not be defined by any driver. Add proper
ifdef protection in z_pm_save_idle_exit() to fix this.
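That is, a sketch of the guard (the surrounding code is elided):
```c
void z_pm_save_idle_exit(void)
{
	/* ... */
#ifdef CONFIG_SYS_CLOCK_EXISTS
	sys_clock_idle_exit();
#endif
}
```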
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
When a semaphore is given and there is no thread waiting
for it, do not unconditionally perform a reschedule.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Some devices do not need to perform any initialization, so allow the
init function to be NULL. In this case, the initialization code will
just mark the device as initialized, i.e. ready.
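A hedged example (`my_data`, `my_config` and `my_api` are
placeholders):
```c
/* No init work needed: pass NULL and the device is marked ready. */
DEVICE_DT_INST_DEFINE(0, NULL, NULL, &my_data, &my_config,
		      POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEVICE,
		      &my_api);
```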
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Removes unused absolute symbols that are defined via the
GEN_ABSOLUTE_SYM() macro in the kernel directory.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
As both the C and C++ standards require applications running under an
OS to return 'int' from main(), adapt that for Zephyr to align with
those standards. This also eliminates errors when building with clang
when not using -ffreestanding, and reduces the need for compiler flags
to silence warnings for both clang and gcc.
Most of these changes were automated using coccinelle with the following
script:
```
@@
@@
- void
+ int
  main(...) {
  ...
- return;
+ return 0;
  ...
}
```
Approximately 40 files had to be edited by hand as coccinelle was unable to
fix them.
Signed-off-by: Keith Packard <keithp@keithp.com>
As both the C and C++ standards require applications running under an
OS to return 'int' from main(), adapt that for Zephyr to align with
those standards. This also eliminates errors when building with clang
when not using -ffreestanding, and reduces the need for compiler flags
to silence warnings for both clang and gcc.
Signed-off-by: Keith Packard <keithp@keithp.com>
Many areas of Zephyr divide and round up without using the
DIV_ROUND_UP macro. Make use of it, so that we rely on a tested system
macro and at the same time make the code more readable.
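For example:
```c
/* Before: open-coded round-up division. */
num_blocks = (total_size + block_size - 1) / block_size;

/* After: */
num_blocks = DIV_ROUND_UP(total_size, block_size);
```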
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
The init infrastructure, found in `init.h`, is currently used by:
- `SYS_INIT`: to call functions before `main`
- `DEVICE_*`: to initialize devices
They are all sorted according to an initialization level + a priority.
`SYS_INIT` calls are really orthogonal to devices; however, the
required function signature takes a `const struct device *dev` as its
first argument. The only reason for that is that the same init
machinery is used by devices, so we have something like:
```c
struct init_entry {
	int (*init)(const struct device *dev);
	/* only set by DEVICE_*, otherwise NULL */
	const struct device *dev;
};
```
As a result, we end up with this weird/ugly pattern:
```c
static int my_init(const struct device *dev)
{
	/* always NULL! add ARG_UNUSED to avoid compiler warning */
	ARG_UNUSED(dev);
	...
}
```
This is really a result of poor internals isolation. This patch
proposes making init entries more flexible so that they can accept
system initialization calls like this:
```c
static int my_init(void)
{
	...
}
```
This is achieved using a union:
```c
union init_function {
	/* for SYS_INIT, used when init_entry.dev == NULL */
	int (*sys)(void);
	/* for DEVICE*, used when init_entry.dev != NULL */
	int (*dev)(const struct device *dev);
};

struct init_entry {
	/* stores init function (either for SYS_INIT or DEVICE*) */
	union init_function init_fn;
	/* stores device pointer for DEVICE*, NULL for SYS_INIT. Allows
	 * to know which union entry to call.
	 */
	const struct device *dev;
};
```
This solution **does not increase ROM usage** and allows offering
clean public APIs for both SYS_INIT and DEVICE*. Note, however, that
the init machinery keeps a coupling with devices.
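For illustration, the init loop can then dispatch on `dev` (a sketch
using the fields above):
```c
/* Inside the init loop, for each registered entry: */
if (entry->dev != NULL) {
	rc = entry->init_fn.dev(entry->dev);   /* device init */
} else {
	rc = entry->init_fn.sys();             /* plain SYS_INIT call */
}
```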
**NOTE**: This is a breaking change! All `SYS_INIT` functions will need
to be converted to the new signature. See the script offered in the
following commit.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
init: convert SYS_INIT functions to the new signature
Conversion scripted using scripts/utils/migrate_sys_init.py.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
manifest: update projects for SYS_INIT changes
Update modules with updated SYS_INIT calls:
- hal_ti
- lvgl
- sof
- TraceRecorderSource
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
tests: devicetree: devices: adjust test
Adjust the test according to the recently introduced SYS_INIT
infrastructure changes.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
tests: kernel: threads: adjust SYS_INIT call
Adjust to the new signature: int (*init_fn)(void);
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Add a check to ensure that CONFIG_MP_NUM_CPUS and
CONFIG_MP_MAX_NUM_CPUS are set to the same value. This will at least
cause a build failure for out-of-tree users.
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
All we really want here is to set default parameters. However
k_sched_time_slice_set() also calls z_reset_time_slice(_current)
which expects `_current` to be fully initialized.
Simply initialize `slice_ticks` and `slice_max_prio` with default values
directly. Unfortunately the compiler isn't smart enough to expand
k_ms_to_ticks_ceil32(CONFIG_TIMESLICE_SIZE) to a constant expression
at build time so we must do the conversion by hand (and it shouldn't
overflow due to the nature of the value).
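That is, something along these lines (a sketch; the exact expression
in the patch may differ):
```c
/* Hand-expanded k_ms_to_ticks_ceil32(CONFIG_TIMESLICE_SIZE): a
 * round-up ms-to-ticks conversion that is a compile-time constant.
 */
static int slice_ticks =
	(CONFIG_TIMESLICE_SIZE * CONFIG_SYS_CLOCK_TICKS_PER_SEC + 999) / 1000;
static int slice_max_prio = CONFIG_TIMESLICE_PRIORITY;
```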
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Slice expirations are now based on the same timeout mechanism as
regular timers, which have recently been fixed and proven to work with
single-tick periods.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The reason for arch_num_cpus() is to be able to dynamically adapt to
the actual number of available CPUs at run time.
In the z_sched_init() case, it is not the number of active CPUs that
we need but rather the total number of potential CPUs, and that is
represented by CONFIG_MP_MAX_NUM_CPUS not arch_num_cpus().
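That is, a sketch (the loop body is hedged):
```c
/* Set up scheduler state for every potential CPU, not only those
 * active at boot.
 */
for (int i = 0; i < CONFIG_MP_MAX_NUM_CPUS; i++) {
	init_ready_q(&_kernel.cpus[i].ready_q);
}
```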
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Add the `zephyr,pm-device-runtime-auto` flag to `pm.yaml` and
`struct pm_device`.
This flag is intended to signify to the boot system that device runtime
PM should be automatically enabled on the device after the init function
has run.
Only run `pm_device_runtime_auto_enable` function on a device if
initialisation succeeded. This prevents actions being run on devices
that are not ready.
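That is, a sketch of the guarded call (`rc` is the init function's
return value):
```c
if (rc == 0) {
	/* Only enable runtime PM on devices that initialized OK. */
	(void)pm_device_runtime_auto_enable(dev);
}
```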
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
Make sliceable() the actual condition for a sliceable thread. Avoid
creating a slice timeout for non-sliceable threads. Always reset
slice_expired even if the next thread is not sliceable. Fold
slice_expired_locked() into z_time_slice() to avoid the hidden
unlock/lock. Change `curr` to `thread` as it is not necessarily the
current thread (yet) that is being set. Make the variables static.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Updates events to prevent a timeout from corrupting the list of
threads that need to be woken up.
Signed-off-by: Aastha Grover <aastha.grover@intel.com>