Commit graph

3125 commits

Peter Mitsis
9ff5221d23 kernel: Update IPI usage in k_thread_priority_set()
1. The flagging of IPIs is moved out of k_thread_priority_set() into
z_thread_prio_set(). This allows for an IPI to be done for a thread
that had its priority bumped due to the handling of priority
inheritance from a mutex.

2. k_thread_priority_set()'s check for sched_locked only applies to
non-SMP builds that are using the old arch_swap() framework to switch
between threads.

Incidentally, nearly all calls to flag_ipi() are now performed with
sched_spinlock being locked. The only exception is in slice_timeout().

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2024-06-04 22:35:54 -04:00
Peter Mitsis
ed7a5f31c2 kernel: Update CONFIG_PIPES Kconfig description
Updates the CONFIG_PIPES Kconfig description to add a note that
enabling it will cause a slight increase in the size of the thread
structure. This mirrors a similar comment in CONFIG_EVENTS.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2024-06-04 19:10:56 -04:00
Yong Cong Sin
6a3cb93d88 arch: remove the use of z_arch_esf_t completely from internal
Created `GEN_OFFSET_STRUCT` & `GEN_NAMED_OFFSET_STRUCT`, which
work for `struct`s, and removed the use of `z_arch_esf_t`
completely.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-04 14:02:51 -05:00
Yong Cong Sin
e54b27b967 arch: define struct arch_esf and deprecate z_arch_esf_t
Make `struct arch_esf` compulsory for all architectures by
declaring it in the `arch_interface.h` header.

After this commit, the named struct `z_arch_esf_t` is only used
internally to generate offsets, and is slated to be removed
from the `arch_interface.h` header in the future.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-04 14:02:51 -05:00
Flavio Ceolin
65fc5b7f17 device: Remove z_device_is_ready
This duplicates the functionality of device_is_ready.

Calls to z_device_is_ready are made in kernel mode, so it is
safe to call its implementation directly.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-31 08:06:44 +02:00
Yong Cong Sin
3570408db5 build: namespace syscall sources to zephyr/
Namespace `syscall_dispatch.c` & `syscall_export_llext.c`
under `zephyr/` as well.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-28 22:03:55 +02:00
Yong Cong Sin
bbe5e1e6eb build: namespace the generated headers with zephyr/
Namespaced the generated headers with `zephyr` to prevent
potential conflicts with other headers.

Introduce a temporary Kconfig `LEGACY_GENERATED_INCLUDE_PATH`
that is enabled by default. This allows the developers to
continue the use of the old include paths for the time being
until it is deprecated and eventually removed. The Kconfig will
generate a build-time warning message, similar to the
`CONFIG_TIMER_RANDOM_GENERATOR`.

Updated the include paths of in-tree sources accordingly.

Most of the changes here are scripted, check the PR for more
info.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-28 22:03:55 +02:00
Fin Maaß
8c37f14b98 tracing: add k_realloc trace
Add tracing support for `k_realloc`.

Signed-off-by: Fin Maaß <f.maass@vogl-electronic.com>
2024-05-28 17:55:12 +02:00
Fin Maaß
09eaa8757f kernel: implement k_realloc
Implement k_realloc().

Signed-off-by: Fin Maaß <f.maass@vogl-electronic.com>
2024-05-28 17:55:12 +02:00
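
A minimal usage sketch, assuming k_realloc() follows standard realloc() semantics on the system heap; the buffer sizes and names below are illustrative:

```
#include <zephyr/kernel.h>
#include <string.h>

void grow_buffer(void)
{
	char *buf = k_malloc(64);

	if (buf == NULL) {
		return;
	}
	strcpy(buf, "hello");

	/* Grow the allocation; the existing contents are preserved. */
	char *bigger = k_realloc(buf, 256);

	if (bigger == NULL) {
		k_free(buf);	/* the original block is still valid on failure */
		return;
	}
	k_free(bigger);
}
```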
Hess Nathan
20b55425d3 coding guidelines: comply with MISRA Rule 13.4
Avoid the direct use of assignment expression
values as conditions.

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-28 10:07:31 +02:00
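
An illustrative before/after of what Rule 13.4 asks for; the helper names are hypothetical:

```
static int do_thing(void)
{
	return 0;	/* hypothetical helper for illustration */
}

void misra_13_4_example(void)
{
	int err;

	/*
	 * Non-compliant: the value of the assignment expression is used
	 * directly as the condition:
	 *
	 *     if ((err = do_thing()) != 0) { ... }
	 */

	/* Compliant: assign first, then test the variable. */
	err = do_thing();
	if (err != 0) {
		/* handle the error */
	}
}
```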
Flavio Ceolin
4d85f3d91c pm: Deprecate z_pm_save_idle_exit
Deprecate z_pm_save_idle_exit and promote pm_system_resume.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-27 02:10:03 -07:00
Flavio Ceolin
f7437ac3b1 pm: Move z_pm_save_idle_exit to pm subsys
There is no need for this function to be defined inside the kernel since
all places using it protect the call under PM ifdef guards.

This way we can also remove the ifdef condition inside the implementation.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-27 02:10:03 -07:00
Nicolas Pitre
b34e94f362 kernel: demand_paging: fix arch_page_location_get() documentation
Symbols from enum arch_page_location are defined as
ARCH_PAGE_LOCATION_* and not ARCH_PAGE_FAULT_*.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-05-24 07:47:49 -04:00
Peter Mitsis
d082cd29af kernel: Relax loop in z_smp_global_lock()
Updates z_smp_global_lock() to follow the pattern used in spinlocks
to relax the loop between atomic_cas() attempts.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2024-05-22 21:35:06 -04:00
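
A rough sketch of the relaxed CAS pattern referred to here, assuming the arch_spin_relax() backoff hook used by the kernel's spinlock implementation (a kernel-internal hook, shown only for illustration); the lock variable is a stand-in:

```
#include <zephyr/sys/atomic.h>

static atomic_t global_lock;	/* illustrative stand-in for the kernel's lock word */

static inline void global_lock_acquire(void)
{
	while (!atomic_cas(&global_lock, 0, 1)) {
		arch_spin_relax();	/* per-arch backoff between CAS attempts */
	}
}
```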
Flavio Ceolin
c12f0507b6 userspace: dynamic: Fix k_thread_stack_free verification
The k_thread_stack_free syscall was not checking if the caller
had permission to the given stack object.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-21 20:54:56 -04:00
Daniel Leung
e6abc035c8 kernel: mem_domain: new config for isolated stacks
This adds a new kconfig to indicate if architecture code
supports isolating thread stacks within the same domain,
and another new kconfig to selectively enable this
behavior.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-05-21 20:53:09 -04:00
Daniel Leung
169bc07e83 kernel: move memory domain kconfigs into its own file
This moves memory domain related kconfigs into their own file,
Kconfig.mem_domain.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-05-21 20:53:09 -04:00
Andy Ross
17a5beb341 kernel: Predicate _cpus_active on CONFIG_PM
This value isn't used outside of the PM subsystem, so don't build it.

More important than the four bytes of .bss was the use of an
atomic_inc().  Some platforms are forced to use
CONFIG_ATOMIC_OPERATIONS_C (but in almost all cases are single-core
devices that won't use atomics at runtime).  There, this turns into a
function call that pulls in the whole atomics implementation.

Signed-off-by: Andy Ross <andyross@google.com>
2024-05-21 15:42:50 -07:00
Daniel Leung
2ad265cb75 kernel: userspace: manipulate _thread_idx_map on per-byte basis
The sys_bitfield_(clear/set)_bit() functions work on pointer-sized
elements. However, _thread_idx_map[] is a byte array. On little endian
systems, the bitops should work fine. However, on big endian
systems, changing the lower bits may actually manipulate
memory outside the array when CONFIG_MAX_THREAD_BYTES is not a
multiple of 4. So modify the code to perform bit ops on
a per-byte basis.

Fixes #72430

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-05-18 15:53:27 +03:00
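
A minimal sketch of the per-byte approach, assuming the same bit-to-byte layout used by _thread_idx_map[]; the helper names are illustrative:

```
#include <stdint.h>
#include <stddef.h>
#include <zephyr/sys/util.h>

static inline void idx_map_set(uint8_t *map, size_t bit)
{
	map[bit / 8U] |= BIT(bit % 8U);
}

static inline void idx_map_clear(uint8_t *map, size_t bit)
{
	map[bit / 8U] &= (uint8_t)~BIT(bit % 8U);
}
```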
Nicolas Pitre
e9a47d932c kernel: mmu: shrink and align struct z_page_frame
The struct z_page_frame is marked __packed to avoid extra padding as
such padding may represent significant memory waste when lots of page
frames are used. However this is a bad strategy.

The code contained this somewhat dubious comment and code in
free_page_frame_list_put():

	/* The structure is packed, which ensures that this is true */
	void *node = pf;
	sys_slist_append(&free_page_frame_list, node);

This is bad for many reasons:

- type checking is completely bypassed;

- if the sys_snode_t node member is no longer located at the front of
  struct z_page_frame then the code will still compile and possibly run
  but be broken with memory corruption as a likely outcome;

- the sys_slist_append() code is completely unaware of the packed
  attribute which breaks architectures with alignment restrictions.

Let's improve code efficiency as well as memory usage by removing the
packed attribute and manually packing the flags in the unused virtual
address bits. This way the page frame array remains naturally aligned,
data access becomes optimal and the actual array size gets even smaller.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-05-13 16:04:40 -04:00
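
A sketch of the flags-in-address-bits idea described above; the macro and accessor names are illustrative rather than the actual z_page_frame API, and CONFIG_MMU_PAGE_SIZE is assumed to be available:

```
#include <stdbool.h>
#include <stdint.h>
#include <zephyr/sys/util.h>

#define PF_FLAG_PINNED	BIT(0)
#define PF_FLAGS_MASK	((uintptr_t)CONFIG_MMU_PAGE_SIZE - 1)

/* The virtual address is page aligned, so its low bits are free for flags. */
static inline void *pf_virt(uintptr_t va_and_flags)
{
	return (void *)(va_and_flags & ~PF_FLAGS_MASK);
}

static inline bool pf_flag_is_set(uintptr_t va_and_flags, uintptr_t flag)
{
	return (va_and_flags & flag) != 0;
}
```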
Nicolas Pitre
57305971d1 kernel: mmu: abstract access to page frame flags and address
Introduce z_page_frame_set() and z_page_frame_clear() to manipulate
flags. Obtain the virtual address using the existing
z_page_frame_to_virt(). This will make changes to the page frame
structure easier.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-05-13 16:04:40 -04:00
Daniel Apperloo
9fc26804fb linker: decouple KERNEL_WHOLE_ARCHIVE from LLEXT
Dynamic code execution applications not using LLEXT for "extension"
loading are subject to the same linker optimization symbol resolution
issue described in commit 321e395 (in summary, libkernel.a syscalls
not used directly by the application result in weak symbol resolution
of their z_mrsh_ wrapper).

To support use cases where an application uses alternative methods
to load and execute code calling syscalls (likely from userspace), or
uses a mechanism of which the linker may not be aware, the configuration
option has been decoupled from CONFIG_LLEXT (which is now a selector)
into KERNEL_WHOLE_ARCHIVE.

Signed-off-by: Daniel Apperloo <daniel.apperloo@intel.com>
2024-05-13 14:23:38 +02:00
Hess Nathan
6d417d52c2 coding guidelines: comply with MISRA Rule 12.1.
added parentheses to remove ambiguities

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-12 13:37:27 -04:00
Hess Nathan
e05c4a8786 coding guidelines: comply with MISRA Rule 11.8
- modified parameter types to receive a const pointer when a
  non-const pointer is not needed

- avoided redundant casts

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-10 14:45:14 -05:00
Flavio Ceolin
68ea73aca2 kernel: sem: Remove constant expression
limit is an unsigned int and K_SEM_MAX_LIMIT is defined as UINT_MAX,
which means that limit can never be greater than K_SEM_MAX_LIMIT.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-09 12:39:46 -04:00
Pieter De Gendt
f147a5fec2 spelling: Replace occurrences of "iff" with "if and only if"
Spell checking tools do not recognize "iff", replace with "if and only if".
See https://en.wikipedia.org/wiki/If_and_only_if

Signed-off-by: Pieter De Gendt <pieter.degendt@basalte.be>
2024-05-06 14:58:08 +01:00
frei tycho
fe38c703b2 kernel: coding guidelines: cast unused arguments to void
- added missing ARG_UNUSED

Signed-off-by: frei tycho <tfrei@baumer.com>
2024-05-06 14:56:24 +01:00
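
For reference, the pattern being applied, using Zephyr's ARG_UNUSED() macro; the callback is illustrative:

```
#include <zephyr/toolchain.h>
#include <zephyr/sys/printk.h>

static void event_cb(int reason, void *user_data)
{
	ARG_UNUSED(user_data);	/* explicitly mark the unused argument */

	printk("event: %d\n", reason);
}
```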
Alberto Escolar Piedras
2f5e93938b Revert "kernel: retrieve system timer clock frequency at runtime or static"
This reverts commit 7c03e5de7f.

https://github.com/zephyrproject-rtos/zephyr/pull/69705
Introduced a regression in main in which
tests/subsys/logging/log_timestamp
started failing. (See
https://github.com/zephyrproject-rtos/zephyr/issues/72344
for more info).
Let's revert the PR. It can be resubmitted later with the issue
fixed.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-05-06 14:52:29 +03:00
Najumon B.A
7c03e5de7f kernel: retrieve system timer clock frequency at runtime or static
Update the kernel timeout logic to retrieve the system timer clock
frequency either at runtime or statically, based on the Kconfig option
TIMER_READS_ITS_FREQUENCY_AT_RUNTIME.

Signed-off-by: Najumon B.A <najumon.ba@intel.com>
2024-05-04 13:24:12 +03:00
Andy Ross
dec022a848 kernel/sched: Fix edge^2 case in abort/join
The previous abort-lifecycle fix missed a case: other threads can
enter k_thread_join(), see that the thread is already dead, and then
need to call z_thread_switch_spin() to wait for a context switch.  But
the new "dummification" code was (by design!) terminating the thread
such that no context would be saved to it.  So switch_handle stayed
NULL and if you hit that timing case correctly[1] you'd deadlock
waiting for a switch that would never come.

Fix is just to set switch_handle when dummifying to any non-NULL
value.

Also add an assertion to catch the obvious case that a thread is
actually dead on the exit path of k_thread_abort() to make sure the
variant path continues to set flags correctly.

[1] CI was doing it fairly reliably via tests/kernel/smp_abort on
    qemu_cortex_a53 only.  Only one of my dev systems could see it,
    and then only about 15% of the time.

Signed-off-by: Andy Ross <andyross@google.com>
2024-05-02 13:55:03 -04:00
Andy Ross
47ab66311d kernel/sched: Fix lockless ordering in halt_thread()
We've had threads spinning on the thread state bits, but weren't being
careful to ensure that those bits were the last things seen to change
in a halting thread.  Move it to the end, and add a barrier for
correctness.

Signed-off-by: Andy Ross <andyross@google.com>
2024-05-02 13:55:03 -04:00
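
A sketch of the ordering requirement, assuming the full data-memory fence from sys/barrier.h; the helper and the state value are illustrative, not the actual halt_thread() code:

```
#include <zephyr/kernel.h>
#include <zephyr/sys/barrier.h>

/* Observers spin on the thread state bits, so all prior bookkeeping writes
 * must be visible before the final state is published.
 */
static void publish_halted_state(struct k_thread *thread, uint8_t halt_bit)
{
	/* ... cleanup writes for the halting thread ... */
	barrier_dmem_fence_full();
	thread->base.thread_state |= halt_bit;	/* set the state bits last */
}
```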
Andy Ross
fd340ebf31 sched: Optimize dummy thread usage on SMP
Nicolas Pitre points out that since these thread structs are just
dummies for the context switching, they can be presumed to be "write
only" and thus there's no point in having one per CPU; everyone can
share the same one.

The only gotcha is that we never really documented (nor really have a
place to document) that rule, so it's not theoretically impossible for
an architecture to read back what it might have written underneath
arch_switch().  Leave this in a separate commit for bisection
purposes, but the risk seems very low.

Signed-off-by: Andy Ross <andyross@google.com>
2024-05-02 13:55:03 -04:00
Andy Ross
f0fd54cb31 kernel/sched: Fix free-memory write when ISRs abort _current
After a k_thread_abort(), the resulting thread struct is documented as
unused/free memory that may be re-used (for example, to respawn a new
thread).

But in the special case of aborting the current thread from within an
ISR, that wasn't quite happening.  The scheduler cleanup would
complete, but the architecture layer would still try to context switch
away from the aborted thread on exit, and that can include writes to
the now-reused thread struct!  The specifics will depend on
architecture (some do a full context save on entry, most don't), but
in the case of USE_SWITCH=y it will at the very least write the
switch_handle field.

Fix this simply, with a per-cpu "switch dummy" thread struct for use
as a target for context switches like this.  There is some non-trivial
memory cost to that; thread structs on many architectures are large.

Pleasingly, this also addresses a known deadlock on SMP: because the
"spin in ISR" step now happens as the very last stage of
k_thread_abort() handling, the existing scheduler lock works to
serialize calls such that it's impossible for a cycle of threads to
independently decide to spin on each other: at least one will see
itself as "already aborting" and break the cycle.

Fixes #64646

Signed-off-by: Andy Ross <andyross@google.com>
2024-05-02 13:55:03 -04:00
Andy Ross
fc56050e05 kernel/spinlock: Fix SPIN_VALIDATE in ISRs
Spinlocks taken in ISRs were storing the _current thread pointer of
the interrupted thread as the owner, which was never strictly correct
but was benign as the thread would never run until the lock was
released.

But now k_thread_abort(_current) in an ISR has been fixed to eliminate
all references to the (now aborted) thread struct, and _current points
to a dummy thread.  Handle that edge case in the validation framework.

Signed-off-by: Andy Ross <andyross@google.com>
2024-05-02 13:55:03 -04:00
frei tycho
14cb7d5b03 kernel: coding guidelines: add explicit cast to void
- added explicit cast to void when the returned value is intentionally ignored

Signed-off-by: frei tycho <tfrei@baumer.com>
2024-05-02 16:49:36 +02:00
Hess Nathan
7659cfd4dc coding guidelines: comply with MISRA Rule 2.2
- avoided dead stores

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-02 09:32:46 +01:00
Adrian Bonislawski
e44d2e65ee kernel: timeslicing: add time slice reset in slice per thread api
This will reset the time slice in k_thread_time_slice_set()
when the slice-per-thread API is used.

Currently it is reset only in the standard slice_set.

Signed-off-by: Adrian Bonislawski <adrian.bonislawski@intel.com>
2024-05-01 22:55:50 +01:00
Hess Nathan
527e712448 coding guidelines: comply with MISRA Rule 20.9
- avoid using undefined macros in #if expressions

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-01 19:48:19 +01:00
Hess Nathan
32af724fbb coding guidelines: comply with MISRA C:2012 Rule 11.2
avoid converting pointers to an incomplete type by using the pointer to the first item

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-01 10:53:20 -04:00
Hess Nathan
c30a9c4c97 coding guidelines: comply with MISRA Rule 21.15
- made the copied data type explicit

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-01 10:52:43 -04:00
Peter Mitsis
a3c7152f92 kernel: Update thread cpu in z_get_next_switch_handle()
Updates z_get_next_switch_handle() to set the new thread's base.cpu
value as it is done in do_swap(). This helps to ensure that the
last CPU on which the thread executed remains current.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2024-04-29 17:40:28 +01:00
Eric Johnson
69c5c6d511 kernel: Remove duplicate execution_cycles write and improve docstring
There is a duplicate write in `z_sched_thread_usage()` that can be
removed. Also modified the docstrings of `k_thread_runtime_stats` to
better describe the differences between execution_cycles and
total_cycles when getting stats for the CPU or a thread.

Signed-off-by: Eric Johnson <eric@memfault.com>
2024-04-28 13:04:20 -04:00
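
A usage sketch of the two query paths whose docstrings are being clarified, assuming CONFIG_THREAD_RUNTIME_STATS is enabled:

```
#include <zephyr/kernel.h>

void show_usage(void)
{
	k_thread_runtime_stats_t stats;

	/* Stats for a single thread. */
	(void)k_thread_runtime_stats_get(k_current_get(), &stats);

	/* Aggregate stats for all CPUs; see the updated docstrings for how
	 * execution_cycles and total_cycles differ between the two cases.
	 */
	(void)k_thread_runtime_stats_all_get(&stats);
}
```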
Alberto Escolar Piedras
c05cba682b Revert "kernel/spinlock: Fix SPIN_VALIDATE in ISRs"
This reverts commit 93dc7e7438.

This PR introduced 2 regressions in main CI:
71977 & 71978
Let's revert it for now to get main's CI passing again.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-04-26 10:10:24 +00:00
Alberto Escolar Piedras
ea26bcf8d3 Revert "kernel/sched: Fix free-memory write when ISRs abort _current"
This reverts commit 61c70626a5.

This PR introduced 2 regressions in main CI:
71977 & 71978
Let's revert it for now to get main's CI passing again.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-04-26 10:10:24 +00:00
Alberto Escolar Piedras
c9ec937d71 Revert "sched: Optimize dummy thread usage on SMP"
This reverts commit 20611f13ca.

This PR introduced 2 regressions in main CI:
71977 & 71978
Let's revert it for now to get main's CI passing again.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-04-26 10:10:24 +00:00
Alberto Escolar Piedras
c60d4c2589 Revert "kernel/sched: Fix lockless ordering in halt_thread()"
This reverts commit 02b24911f7.

This PR introduced 2 regressions in main CI:
71977 & 71978
Let's revert it for now to get main's CI passing again.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-04-26 10:10:24 +00:00
Mohamed ElShahawi
7084662cc8 kernel: system_work_q: Mark queue thread as essential
Mark sysworkq as essential, so that when it fails the system halts
instead of continuing to run while dependent components stay
in a broken state.

Signed-off-by: Mohamed ElShahawi <ExtremeGTX@hotmail.com>
2024-04-25 21:40:24 +02:00
Mohamed ElShahawi
9ba4653243 kernel: work_queue: make thread essential flag configurable
Allow the creator of a work_queue instance to choose whether
the work_queue thread should be marked as ESSENTIAL or not.

Signed-off-by: Mohamed ElShahawi <ExtremeGTX@hotmail.com>
2024-04-25 21:40:24 +02:00
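
A sketch of how a work queue creator might opt in, assuming the flag is exposed as an `essential` member of struct k_work_queue_config (the member name follows the commit subject and is an assumption):

```
#include <zephyr/kernel.h>

K_THREAD_STACK_DEFINE(my_wq_stack, 1024);
static struct k_work_q my_wq;

void start_my_wq(void)
{
	struct k_work_queue_config cfg = {
		.name = "my_wq",
		.essential = true,	/* assumed field: halt the system if this thread dies */
	};

	k_work_queue_start(&my_wq, my_wq_stack,
			   K_THREAD_STACK_SIZEOF(my_wq_stack),
			   K_PRIO_PREEMPT(5), &cfg);
}
```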
Andy Ross
02b24911f7 kernel/sched: Fix lockless ordering in halt_thread()
We've had threads spinning on the thread state bits, but weren't being
careful to ensure that those bits were the last things seen to change
in a halting thread.  Move it to the end, and add a barrier for
correctness.

Signed-off-by: Andy Ross <andyross@google.com>
2024-04-25 15:12:02 +02:00
Andy Ross
20611f13ca sched: Optimize dummy thread usage on SMP
Nicolas Pitre points out that since these thread structs are just
dummies for the context switching, they can be presumed to be "write
only" and thus there's no point in having one per CPU; everyone can
share the same one.

The only gotcha is that we never really documented (nor really have a
place to document) that rule, so it's not theoretically impossible for
an architecture to read back what it might have written underneath
arch_switch().  Leave this in a separate commit for bisection
purposes, but the risk seems very low.

Signed-off-by: Andy Ross <andyross@google.com>
2024-04-25 15:12:02 +02:00
Andy Ross
61c70626a5 kernel/sched: Fix free-memory write when ISRs abort _current
After a k_thread_abort(), the resulting thread struct is documented as
unused/free memory that may be re-used (for example, to respawn a new
thread).

But in the special case of aborting the current thread from within an
ISR, that wasn't quite happening.  The scheduler cleanup would
complete, but the architecture layer would still try to context switch
away from the aborted thread on exit, and that can include writes to
the now-reused thread struct!  The specifics will depend on
architecture (some do a full context save on entry, most don't), but
in the case of USE_SWITCH=y it will at the very least write the
switch_handle field.

Fix this simply, with a per-cpu "switch dummy" thread struct for use
as a target for context switches like this.  There is some non-trivial
memory cost to that; thread structs on many architectures are large.

Pleasingly, this also addresses a known deadlock on SMP: because the
"spin in ISR" step now happens as the very last stage of
k_thread_abort() handling, the existing scheduler lock works to
serialize calls such that it's impossible for a cycle of threads to
independently decide to spin on each other: at least one will see
itself as "already aborting" and break the cycle.

Fixes #64646

Signed-off-by: Andy Ross <andyross@google.com>
2024-04-25 15:12:02 +02:00
Andy Ross
93dc7e7438 kernel/spinlock: Fix SPIN_VALIDATE in ISRs
Spinlocks taken in ISRs were storing the _current thread pointer of
the interrupted thread as the owner, which was never strictly correct
but was benign as the thread would never run until the lock was
released.

But now k_thread_abort(_current) in an ISR has been fixed to eliminate
all references to the (now aborted) thread struct, and _current points
to a dummy thread.  Handle that edge case in the validation framework.

Signed-off-by: Andy Ross <andyross@google.com>
2024-04-25 15:12:02 +02:00
Andy Ross
5fa2b6f377 kernel/sched: Refactor/cleanup z_thread_halt()
Big change is to factor out a thread_halt_spin() utility to manage the
core complexity of this code: the situation where an ISR is asked to
abort a thread already running on another SMP CPU.

With that gone, things can be cleaned up quite a bit.  Remove early
returns, most of the "#if CONFIG_SMP" usage was superfluous and will
optimize out, unify and clean up the comments, etc...

No behavioral changes (hopefully), just refactoring.

Signed-off-by: Andy Ross <andyross@google.com>
2024-04-25 15:12:02 +02:00
Anas Nashif
297ad5186d kernel: increase main stack size for ztest on ARC
The stack has been on the edge when using ztest; use 1024 for ARC as well.

Fixes #71797

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-24 10:49:05 +02:00
Anas Nashif
4593f0d71c kernel: priority queues: declare as static inlines
After the move to C files we saw some drop in performance when
running latency_measure. This patch declares the priority queue
functions as static inlines with minor optimizations.

The result for one metric (on qemu):

3.6 and before anything was changed:

  Get data from LIFO (w/ ctx switch): 13087 ns

after original change (46484da502):

  Get data from LIFO (w/ ctx switch): 13663 ns

with this change:

  Get data from LIFO (w/ ctx switch): 12543 ns

So overall, a net gain of ~ 500ns that can be seen across the board on many
of the metrics.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-22 16:40:11 -04:00
Flavio Ceolin
11b85ee510 kernel: stack: Check possible overflow
Check for possible overflow in the k_stack data struct. An overflow
can happen, resulting in a much smaller memory allocation than intended.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-04-22 15:20:39 -04:00
Ederson de Souza
eeebb4d911 kernel: Device deferred initialization
Currently, all devices are initialized at boot time (following their
level and priority order). This patch introduces deferred
initialization: by setting the property `zephyr,deferred-init` on a
device in the devicetree, Zephyr will not initialize the device.

To initialize such devices, one has to call `device_init()`.

Deferred initialization is done by grouping all deferred devices on a
different ELF section. In this way, there's no need to consume more
memory to keep track of deferred devices. When `device_init()` is
called, Zephyr will scan the deferred devices section and call the
initialization function for the matching device. As this scanning is
done only during deferred device initialization, its cost should be
bearable.

Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
2024-04-11 15:50:44 -04:00
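
A usage sketch: a device whose devicetree node carries the `zephyr,deferred-init` property is skipped at boot and brought up manually via device_init(); the node label below is hypothetical:

```
#include <zephyr/device.h>
#include <zephyr/devicetree.h>

void bring_up_deferred_device(void)
{
	const struct device *dev = DEVICE_DT_GET(DT_NODELABEL(my_sensor));

	if (!device_is_ready(dev)) {
		/* Runs the deferred init function now instead of at boot. */
		(void)device_init(dev);
	}
}
```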
Daniel Leung
d0a90a0b33 kernel: add the ability to memory map thread stacks
This introduces support for memory mapped thread stacks,
where each thread stack is mapped into virtual memory
address space with two guard pages to catch
under-/over-flowing the stack. This is just on the kernel
side. Additional architecture code is required to fully
support this feature.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-04-10 07:44:27 -04:00
Daniel Leung
04c5632bd4 kernel: mm: introduce k_mem_phys_map()/_unmap()
This is similar to k_mem_map()/_unmap(). But instead of using
anonymous memory, the provided physical region is mapped
into the virtual address space. In addition to simply mapping
physical to virtual addresses, the mapping also adds two
guard pages before and after the virtual region to catch
buffer under-/over-flows.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-04-10 07:44:27 -04:00
Daniel Leung
378131c266 kernel: add options to cleanup after aborting current thread
This adds the mechanism to do cleanup after k_thread_abort()
is called with the current thread. This is mainly used for
cleaning up things when the thread cannot be running, e.g.,
cleaning up the thread stack.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-04-10 07:44:27 -04:00
Krzysztof Chruściński
aee8dd1d33 kernel: timeout: Optimize setting next alarm
The next timeout was set unconditionally at the end of sys_clock_announce.
However, if one of the currently expired timeouts set a new
timeout which is the first to execute, then the system clock was
configured twice. Let's configure the system clock only once, in the
ISR at the end of sys_clock_announce.

If timeouts are frequent this optimization can reduce CPU load. In
many cases setting the new sys_clock timeout is the most time
consuming operation in the sys_clock ISR handler. As an example,
on the target I used, setting the new sys_clock timeout takes 6 us of
the 9 us spent in the ISR, and the ISR takes 16 us with the redundant call.

Signed-off-by: Krzysztof Chruściński <krzysztof.chruscinski@nordicsemi.no>
2024-04-09 13:55:07 -04:00
Simon Hein
ebdb07a05c kernel: Clean up mailbox async msg configuration
Remove confusing and duplicate async message configuration
switches for mailboxes.

Signed-off-by: Simon Hein <Shein@baumer.com>
2024-04-09 11:05:55 +02:00
Anas Nashif
20b2c98add kernel: move nothread support to own file
Do not build threading support when CONFIG_MULTITHREADING=n is set;
move the needed calls to a new file with the required changes instead of
the ifdef party in sched.c.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-06 14:22:08 +03:00
Yong Cong Sin
fbaf7dfdc1 kernel: banner: use BUILD_VERSION only if not empty
The `BUILD_VERSION` can be defined but empty when built
without git, causing the version to be missing from the banner:

```
*** Booting Zephyr OS build  ***
Hello World! qemu_riscv64
```

Let's check if it is empty before using it, so that
`KERNEL_VERSION_STRING`, which is generated independently
with cmake can be used as a fallback:

```
*** Booting Zephyr OS build 3.5.0 ***
Hello World! qemu_riscv64
```

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-04-04 23:47:33 +02:00
Anas Nashif
8a88cd4805 kernel: thread: move thread states to header
Move state string defines into thread header.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-01 18:47:36 -04:00
Anas Nashif
f5435b3df7 kernel: thread: move k_thread_priority_get
Move to thread.c alongside all other thread calls.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-01 18:47:36 -04:00
Anas Nashif
5c170c7046 kernel: thread: rename is_preempt
Trivial rename to thread_is_preempt.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-01 18:47:36 -04:00
Anas Nashif
6754cbd1b5 kernel: thread: move k_is_preempt_thread to thread.c
This belongs in thread.c

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-01 18:47:36 -04:00
Anas Nashif
17c874f4fc kernel: thread: rename is_metairq
Trivial rename of is_metairq to thread_is_metairq.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-01 18:47:36 -04:00
Anas Nashif
0b473ce925 kernel: rename sliceable -> thread_is_sliceable
Trivial rename of sliceable function.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-01 18:47:36 -04:00
Anas Nashif
37df485463 kernel: split timeslicing/ipi code out of sched.c
Move both timeslicing and IPI code to own files.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-01 18:47:36 -04:00
Anas Nashif
31bc210bbc kernel: sched: remove unused prototype: z_is_thread_time_slicing
A prototype not used anywhere.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-01 18:47:36 -04:00
Anas Nashif
ebb503ff7b kernel: move thread related helper function kthread.h
Move some helper functions to the internal kthread.h to offload the
crowded sched.c.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-04-01 18:47:36 -04:00
Daniel Leung
d34351d994 kernel: align thread stack size declaration
When thread stack is defined as an array, K_THREAD_STACK_LEN()
is used to calculate the size for each stack in the array.
However, standalone thread stack has its size calculated by
Z_THREAD_STACK_SIZE_ADJUST() instead. Depending on the arch
alignment requirement, they may not be the same... which
could cause some confusion. So align them both to use
K_THREAD_STACK_LEN().

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-03-27 19:27:10 -04:00
Daniel Leung
6cd7936f57 kernel: align kernel stack size declaration
When kernel stack is defined as an array, K_KERNEL_STACK_LEN()
is used to calculate the size for each stack in the array.
However, standalone kernel stack has its size calculated by
Z_KERNEL_STACK_SIZE_ADJUST() instead. Depending on the arch
alignment requirement, they may not be the same... which
could cause some confusion. So align them both to use
K_KERNEL_STACK_LEN().

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-03-27 19:27:10 -04:00
Daniel Leung
efe30749de kernel: rename Z_THREAD_STACK_BUFFER to K_THREAD_STACK_BUFFER
Simple rename to align the kernel naming scheme. This is being
used throughout the tree, especially in the architecture code.
As this is not a private API internal to kernel, prefix it
appropriately with K_.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-03-27 19:27:10 -04:00
Daniel Leung
b69d2486fe kernel: rename Z_KERNEL_STACK_BUFFER to K_KERNEL_STACK_BUFFER
Simple rename to align the kernel naming scheme. This is being
used throughout the tree, especially in the architecture code.
As this is not a private API internal to kernel, prefix it
appropriately with K_.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-03-27 19:27:10 -04:00
Ederson de Souza
321e395a8f linker: Include libkernel.a in the whole-archive when llext is enabled
Unlike other libraries, which are included whole in the final
Zephyr ELF, libkernel.a itself isn't. Assuming this is intended to
enable optimisations (if it isn't, this patch will break things), the linker
can remove parts of the kernel that are not used by the application.

However, when considering Linkable Loadable Extensions (llext), this
optimisation can be counterproductive: for instance, syscalls that are
not used by the application won't be available for extensions. It won't
matter if someone uses "EXPORT_SYMBOL" for them, or even tries to keep them
using LINKER_KEEP, they'll be gone.

To avoid that, this patch includes, when CONFIG_LLEXT=y, libkernel.a
inside the linker "whole-archive" block. This ends up making the linker
consider libkernel.a a library all of whose symbols should be kept. Note this
doesn't mean that all symbols will be there - things compiled out via
Kconfig will naturally still be out.

Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
2024-03-26 19:31:56 -04:00
Simon Hein
bcd1d19322 kernel: add closing comments to config endifs
Add a closing comment to each endif with the configuration
option to which the endif belongs.
This makes the code clearer when the configs need adaptations.

Signed-off-by: Simon Hein <Shein@baumer.com>
2024-03-25 18:03:31 -04:00
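
The convention being applied, for reference:

```
#ifdef CONFIG_SMP
	/* SMP-only bookkeeping ... */
#endif /* CONFIG_SMP */
```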
Daniel Leung
6ea749de52 arch: rename arch_start_cpu() to arch_cpu_start()
Rename arch_start_cpu() to arch_cpu_start() so it belongs to
the "cpu" namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-03-25 09:58:35 +00:00
Daniel Leung
3664ed64c3 arch: move arch_interface.h under zephyr/arch
arch_interface.h is for architecture and should not be
under sys/. So move it under include/zephyr/arch/.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-03-25 09:58:35 +00:00
Flavio Ceolin
e2f3840380 kernel: Fix possible overflow in k_mem_map
k_mem_map additionally allocates two guard pages that are not mapped.
These pages are not accounted for when checking the provided size, and
when they are added an overflow can happen, leaving the mapped memory
incorrect.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-03-22 15:58:33 -04:00
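
A sketch of the kind of guard-page accounting check described above, using the size_add_overflow() helper from sys/math_extras.h; the function, the fixed two-page guard count, and the use of CONFIG_MMU_PAGE_SIZE are illustrative assumptions:

```
#include <stddef.h>
#include <zephyr/sys/math_extras.h>

void *map_with_guards(size_t size)
{
	size_t total;

	/* Account for the two unmapped guard pages before validating size. */
	if (size_add_overflow(size, 2 * CONFIG_MMU_PAGE_SIZE, &total)) {
		return NULL;	/* size + guard pages would wrap around */
	}

	/* ... reserve `total` bytes of virtual address space ... */
	return NULL;	/* placeholder for the actual mapping result */
}
```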
Ederson de Souza
67bb6db3f8 syscall: Export all emitted syscalls, enabling them for extensions
Linkable loadable extensions can only use syscalls if they are exported
via EXPORT_SYSCALL (or EXPORT_SYMBOL). Instead of enabling used syscalls
one by one, this patch exports all of them automatically via
`gen_syscalls.py`. If CONFIG_LLEXT=n, the section where the exported
symbols live is discarded, so it should be a non-op when llext is not
enabled.

This patch also removes the now redundant EXPORT_SYSCALL macro. Note
that EXPORT_SYMBOL is still useful on different situations (and is
indeed used by the code generated by `gen_syscalls.py`).

Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
2024-03-20 16:26:54 +00:00
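
For reference, EXPORT_SYMBOL() (which the generated exports rely on) makes a symbol resolvable by loadable extensions; the helper below is illustrative:

```
#include <zephyr/llext/symbol.h>

int helper_for_extensions(int x)
{
	return x + 1;
}

/* Make the symbol visible to llext modules at load time. */
EXPORT_SYMBOL(helper_for_extensions);
```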
TaiJu Wu
1f5f0cf838 sched: Remove multi-level queue priority limit
Modified the bitmask to a bitmask array, which lets the multilevel queue
remove the 32-bit priority limit.

We can scan the bitmask array to find which queue has a ready thread.

Only the number of queues is needed as the priority limit because the
priority is checked on thread creation.

Signed-off-by: TaiJu Wu <tjwu1217@gmail.com>
2024-03-12 19:37:40 -04:00
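
A sketch of scanning a bitmask array for the first ready queue, as described above; the array layout and helper are illustrative:

```
#include <stdint.h>

/* Returns the lowest set bit index across the array, or -1 if none is set. */
static int first_ready_queue(const uint32_t *bitmask, int num_words)
{
	for (int i = 0; i < num_words; i++) {
		if (bitmask[i] != 0U) {
			/* __builtin_ctz counts trailing zero bits (GCC/Clang). */
			return (i * 32) + __builtin_ctz(bitmask[i]);
		}
	}

	return -1;
}
```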
Andy Ross
f2280d119d kernel/sched: Don't touch deadline values on queued threads
k_thread_deadline_set() would modify the thread's deadline and then,
if it was in the run queue, requeue it to put it at the right spot.
Sounds right, right?

It's wrong.  The deadline field is part of the thread priority, so
this results in a mis-ordered list.  For dlist backends, that's benign
as the removal works anyway, but if CONFIG_SCHED_SCALABLE=y we've now
broken the sorting order of an in-tree item and corrupted the rbtree!

Fixes #69935

Signed-off-by: Andy Ross <andyross@google.com>
2024-03-11 15:42:26 +01:00
Nicolas Pitre
c31d646198 kernel: timeout: optimize z_timeout_expires()
This currently calls timeout_rem() then adds back the result of
elapsed() to cancel out the subtraction of another elapsed() call
within timeout_rem(). Better not to make any calls to elapsed() in
that case, especially given that elapsed() may incur hardware access
overhead through sys_clock_elapsed().

Let's move the elapsed() subtraction to the only user of timeout_rem()
that needs it and remove its invocation from the z_timeout_expires()
path entirely.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-03-08 18:05:10 +01:00
TaiJu Wu
39f710136e sched: finalize_cancel_locked can early return
The `SYS_SLIST_FOR_EACH_CONTAINER_SAFE` loop is only used to remove work
from `pending_cancels`.
After removing the work successfully, the function can return early;
it is unnecessary to keep iterating.

Signed-off-by: TaiJu Wu <tjwu1217@gmail.com>
2024-03-07 19:40:51 -05:00
Peter Mitsis
9f7695dda0 kernel: Remove unused z_pend_curr_irqlock()
The routine z_pend_curr_irqlock() is no longer used anywhere.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2024-03-07 11:51:06 -05:00
Anas Nashif
0d8da5ff93 kernel: rename scheduler spinlock variable and make it private
Rename sched_spinlock to _sched_spinlock to maintain its privacy and
to avoid any misuse.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-03-06 19:27:28 -05:00
Anas Nashif
868f099d61 kernel: sched: z_set_prio -> z_thread_prio_set
Rename private function to make it clear what priority we are setting
and to be consistent across the code.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-03-06 19:27:28 -05:00
Anas Nashif
477a04a098 kernel: rename h -> heap
Avoid single-character variables that render code unreadable and might
cause naming conflicts, similar to 't' being used for both timeout and
thread in some places.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-03-06 19:27:28 -05:00
Anas Nashif
595ff63f00 kernel: thread: use consistent thread parameter
Use 'thread' wherever it makes sense; using 't' in some places can get
confused with the 't' used for timeouts, for example.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-03-06 19:27:28 -05:00
Anas Nashif
3dc0c4544b kernel: move float operations out of thread.c
Move float operations out and add the missing vrfy hook for enabling float.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-03-06 19:27:28 -05:00
Anas Nashif
6c003bdbcf kernel: remove unused code in headers
Remove functions defined in headers that are not used anywhere.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-03-06 19:27:28 -05:00
Anas Nashif
9e83413542 kernel: split thread monitor
Move thread monitor related functions, which are not enabled in most
cases, out of thread.c and clean up headers.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-03-06 19:27:28 -05:00
Anas Nashif
5e591c38f1 kernel: do not export z_thread_priority_set
This function is only being used by a test, so instead of reimplementing
a syscall in the test, provide a Kconfig option to provide the
functionality that only works with tests and remove some of the
duplication and extra code.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-03-06 19:27:28 -05:00
Anas Nashif
3ca50f5060 kernel: move z_init_static_threads to where it is being used
Move it out of thread.c and put it directly in init.c where it is being used.
Also remove the definition from kernel.h; this is an internal function and
should not be in a public header.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-03-06 19:27:28 -05:00
Anas Nashif
a6ce422b10 kernel: remove cmsis-rtos layering violation
We shouldn't be calling hooks from optional and upper layer subsystems
in the kernel; instead, just call the hook to set the thread status in the
API where it is needed.

This now clears the related bit in the cmsis thread status bitarray when
terminating a thread directly in the cmsis rtos v1 layer and not in the
kernel code.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-03-06 19:27:28 -05:00
Anas Nashif
077222c975 kernel: split cpu_mask handling into own file
In an effort to clean up sched.c, move sections of code that can be
compiled in based on options into their own files. CPU mask here is managed
by a kconfig and is not widely used (SMP affinity on multicore systems).

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-03-06 19:27:28 -05:00
Anas Nashif
6e95bdeca6 kernel: reorg Kconfigs and split them
The kernel kconfig is becoming too big and unmanageable with too many
options scattered across the file. Move some areas out and reorg main
Kconfig slightly.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-03-06 19:27:28 -05:00
Anas Nashif
8791012ed1 kernel: move essential flag related routines out
The functions to manipulate the essential flag indeed operate on
threads, but they are misplaced in the thread implementation file. Put
them alongside other routines setting other thread flags and clean up
headers a bit.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-03-06 19:27:28 -05:00
Anas Nashif
2c31db4cc8 kernel: split irq_offload code into own file
Move irq_offload code out of thread.c into its own file. This code is not
related to threads.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-03-06 19:27:28 -05:00
Anas Nashif
a7d74f80ce kernel: move spinlock validation to own file
Move spin_lock validation outside of thread.c into its own file. This code
really has nothing to do with the thread code.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-03-06 19:27:28 -05:00
Ederson de Souza
4440d6a189 kernel/userspace: Fix dynamic thread stack allocation at userspace
It wasn't saving the adjusted stack size in either the private stack or the
k_object, thus failing subsequent checks.

Test added to check for this case and prevent regressions.

Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
2024-03-06 14:17:53 +01:00
Anas Nashif
46484da502 kernel: move priority queue handling to own file/header
Clean up headers under include/ and move handling of the priority queue to
its own file/header.
There is no need for the header include/zephyr/kernel/internal/sched_priq.h
anymore. Move the relevant structures to where they are being used, in
kernel_structs.h.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-03-02 15:06:45 +01:00
Peter Mitsis
ee9c44fee6 linker: Round TLS sizes up in linker script
Instead of rounding up both __tdata_size and __tbss_size at runtime,
perform the calculation when the image is built.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2024-02-25 22:28:56 -05:00
Nguyen Minh Thien
8188be57d3 kernel: fix spelling errors
Fix spelling errors found in comments of the kernel source code.

Signed-off-by: Nguyen Minh Thien <nguyenmthien@live.com>
2024-02-25 20:53:37 -05:00
Peter Mitsis
51ae993c12 kernel: Update k_wakeup()
This commit does two things to k_wakeup():

1. It locks the scheduler before marking the thread as not suspended.
As the clearing of the _THREAD_SUSPENDED bit is not atomic, this
helps ensure that neither another thread nor an ISR interrupts this
action (resulting in a corrupted thread_state).

2. The call to flag_ipi() has been removed as it is already being
made within ready_thread().

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2024-02-25 20:50:03 -05:00
Guennadi Liakhovetski
09cf3e0910 smp: fix a race when starting / resuming multiple CPUs
cpu_start_fn is global; it's used by the initiator CPU to start or
resume secondary CPUs. However, it's possible that the initiator CPU
goes ahead and starts a second secondary CPU before the first one has
finished using the object. Fix this by creating a local copy of the
global object.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2024-02-16 07:27:04 +01:00
Flavio Ceolin
d622533ca7 kernel: thread: Allow stack in coherent memory
When allowing dynamic thread stack allocation the stack may come from
the heap in coherent memory; trying to use cached memory is
overcomplicated because of heap metadata and cache line sizes.
Also, when userspace is enabled, stacks have to be page aligned and the
address of the stack is used to track kernel objects.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-02-09 14:23:14 -05:00
Daniel Leung
caacc27d37 kernel: smp: CPU start may result in null pointer access
It is observed that starting up a CPU may result in other CPUs
crashing due to dereferencing NULL pointers. Note that this
happened on the up_squared board, but there was no way to
attach a debugger to verify. It started working again after
moving z_dummy_thread_init() before smp_timer_init(), which
was the old behavior before commit
eefaeee061 where the issue
first appeared. So mimic the old behavior to work around
the issue.

Fixes #68115

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-02-04 19:55:14 -06:00
Andy Ross
840db7858e kernel/thread: Detect in-kernel "reserved" stack overflow
Traditionally, k_thread_create() has required that the application
size the stack correctly.  Zephyr doesn't detect or return errors and
treats stack overflow as an application bug (though obviously some
architectures have runtime features to trap on overflows).

At this one spot though, it's possible for the kernel to adjust the
stack for K_THREAD_STACK_RESERVED in such a way that the arch layer's
own stack initialization overflows.  That failure can be seen by
static analysis, so we can't just sweep it under the rug as an
application failure.

Unfortunately there aren't any good options for handling it here (no
way to return failure, can't be a build assert as the size is a
runtime argument).  A panic will have to do.

Fixes: #67106
Fixes: #65584

Signed-off-by: Andy Ross <andyross@google.com>
2024-02-04 10:23:25 -05:00
Krzysztof Chruściński
25173f71cd pm: device_runtime: Extend with synchronous runtime PM
In many cases suspending or resuming a device is limited to
just a few register writes. The current solution assumes that those
operations may be blocking, asynchronous and take a lot of time.
Due to this assumption the runtime PM API cannot be effectively used
from the interrupt context. Zephyr has a few driver APIs which
can be used from an interrupt context, and currently the use of runtime
PM is limited in those cases.

The patch introduces a new type of PM device - synchronous PM. If a
device is specified as capable of synchronous PM operations then
device runtime getting and putting is executed in a critical
section. In that case, the runtime API can be used from an interrupt
context. Additionally, this approach reduces the RAM needed for a
PM device (104 -> 20 bytes of RAM on ARM Cortex-M).

Signed-off-by: Krzysztof Chruściński <krzysztof.chruscinski@nordicsemi.no>
2024-02-01 15:03:42 +01:00
Christopher Friedt
9f3d7776ab kernel: dynamic: reduce verbosity in degenerate case
k_thread_stack_free() is designed to be called with any pointer
value. We return -EINVAL when an attempt is made to free an
invalid stack pointer.

This change reduces the verbosity in the degenerate case, when
the pointer is not obtained via k_thread_stack_alloc(), but
otherwise does not affect functionality.

If debug log verbosity is not enabled, we save a few bytes in
.text / .rodata / .strtab.

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
2024-01-26 06:50:11 -05:00
Daniel Leung
2cdd44801e kernel: move z_init_cpu to private kernel headers
z_init_cpu() is a private kernel API so move it under
kernel/include.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-01-17 11:57:20 -05:00
Daniel Leung
35a1814c4d kernel: smp: remove z_smp_thread_init/_swap
This removes z_smp_thread_init() and z_smp_thread_swap() as
SOF has been updated to use k_smp_cpu_custom_start() instead.
This removes the need for SOF to mirror part of the SMP
power up code, and thus these two functions are no longer
needed.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-01-17 11:57:20 -05:00
Daniel Leung
eefaeee061 kernel: smp: introduce k_smp_cpu_resume
This provides a path to resume a previously suspended CPU.
This differs from k_smp_cpu_start() where per-CPU kernel
structs are not initialized such that execution context
can be saved during suspend and restored during resume.
Though the actual context saving and restoring are
platform specific.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-01-17 11:57:20 -05:00
Daniel Leung
fe66e35db0 kernel: smp: put comment on SMP code
Adds some comments to the SMP code to, hopefully, make it
a bit more clear to future readers.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-01-17 11:57:20 -05:00
Daniel Leung
89b231e7e2 kernel: smp: introduce k_smp_cpu_start
This renames z_smp_cpu_start() to k_smp_cpu_start().
This effectively promotes z_smp_cpu_start() into a public API
which allows out of tree usage. Since this is a new API,
we can afford to change its signature, allowing
additional initialization steps to be done before a newly
powered up CPU starts participating in scheduling threads
to run.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-01-17 11:57:20 -05:00
Daniel Leung
803e0e452f kernel: amend wording on CONFIG_SMP_BOOT_DELAY
This extends the wording so that not only architecture code can
start secondary CPUs at a later time. Also adds a missing 'to'.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-01-17 11:57:20 -05:00
Gerson Fernando Budke
b8188e54a4 kernel: Implement k_sleep for Single Thread
The current z_tick_sleep returns directly when building the kernel for the
Single Thread model. This reorganizes the code to use k_busy_wait() to stay
time coherent, since subsystems may depend on it.

In the case a K_FOREVER timeout is selected, the Single Thread
implementation will invoke k_cpu_idle() and the system will wait for
an interrupt, saving power.

Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
2024-01-10 15:10:16 +01:00
Gaetan Perrot
68581caa74 kernel: need_swap zephyrproject-rtos#66299
Enhancement to z_reschedule_irqlock(uint32_t key)
to avoid a useless context switch.

signed-off-by: Gaetan Perrot <gaetanperrotpro@gmail.com>
2024-01-04 09:42:12 +01:00
Junfan Song
4ae558c505 kernel: work: Fix race in workqueue thread
After a call to k_work_flush returns, the sync variable
may still be modified by the workq.  This is because
the work queue thread continues to modify the flag in
sync even after k_work_flush returns.  This commit adds
K_WORK_FLUSHING_BIT, and with this bit, we moved the
logic of waking up the caller from handle_flush to the
finalize_flush_locked in workq, so that after waking up
the caller, the workqueue will no longer operate on sync.

Fixes: #64530

Signed-off-by: Junfan Song <sjf221100@gmail.com>
2024-01-03 10:20:19 +01:00
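
A usage sketch of why this matters: with K_WORK_FLUSHING_BIT the workqueue no longer touches `sync` after k_work_flush() returns, so a stack-allocated sync object stays safe; the work item is illustrative:

```
#include <zephyr/kernel.h>

static void work_fn(struct k_work *work)
{
	ARG_UNUSED(work);
}

K_WORK_DEFINE(my_work, work_fn);

void flush_example(void)
{
	struct k_work_sync sync;	/* lives on the caller's stack */

	k_work_submit(&my_work);
	(void)k_work_flush(&my_work, &sync);
	/* After this point the workqueue no longer writes to `sync`. */
}
```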
Anas Nashif
05315ea6af kernel: fatal: remove LCOV exclusion
The line is already excluded as part of the function.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-12-21 09:18:44 +01:00
Daniel Leung
a7dccc4475 kernel: mmu: mitigate range check overflow issue
It is possible that address + size will overflow the available
address space and the pointer wraps around back to zero. Some
of these have been fixed in previous commits. This fixes
the remaining ones with regard to Z_PHYS_RAM_START/_END,
and Z_VIRT_RAM_START/_END.

Fixes #65542

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-20 11:37:17 -05:00
Johan Hedberg
7bb16c779b posix: mqueue: Remove custom default for HEAP_MEM_POOL_SIZE
Use the new HEAP_MEM_POOL_ADD_SIZE_ prefix to construct a minimum
requirement for posix message queue usage. This way we can remove the
"special case" default values from the HEAP_MEM_POOL_SIZE Kconfig
definition.

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2023-12-20 11:01:42 +01:00
Johan Hedberg
3fbf12487c kernel: Introduce a way to specify minimum system heap size
There are several subsystems and boards which require a relatively large
system heap (used by k_malloc()) to function properly. This became even
more notable with the recent introduction of the ACPICA library, which
causes ACPI-using boards to require a system heap of up to several
megabytes in size.

Until now, subsystems and boards have tried to solve this by having
Kconfig overlays which modify the default value of HEAP_MEM_POOL_SIZE.
This works ok, except when applications start explicitly setting values
in their prj.conf files:

$ git grep CONFIG_HEAP_MEM_POOL_SIZE= tests samples|wc -l
     157

The vast majority of values set by current sample or test applications
is much too small for subsystems like ACPI, which results in the
application not being able to run on such boards.

To solve this situation, we introduce support for subsystems to specify
their own custom system heap size requirement. Subsystems do
this by defining Kconfig options with the prefix HEAP_MEM_POOL_ADD_SIZE_.
The final value of the system heap is the sum of the custom
minimum requirements, or the value existing HEAP_MEM_POOL_SIZE option,
whichever is greater.

We also introduce a new HEAP_MEM_POOL_IGNORE_MIN Kconfig option which
applications can use to force a lower value than what subsystems have
specified, however this behavior is disabled by default.

Whenever the minimum is greater than the requested value a CMake warning
will be issued in the build output.

This patch ends up modifying several places outside of kernel code,
since the presence of the system heap is no longer detected using a
non-zero CONFIG_HEAP_MEM_POOL_SIZE value; rather, it's now detected using
a new K_HEAP_MEM_POOL_SIZE value that's evaluated at build time.

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2023-12-20 11:01:42 +01:00
Andrei Emeltchenko
9d3a3e96e1 kernel: threads: Do not use string compare instead of bit ops
Remove converting bits to a string and comparing the string instead of
using the ready helpers. The "Check if thread is in use" check seems to
verify only that the parameters state_buf and sizeof(state_buf) are not zero.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2023-12-18 12:24:53 +01:00
Peter Mitsis
4b2bf5abcc kernel: Apply const to k_pipe_put() parameter
The pointer parameter 'data' in the function 'k_pipe_put()' ought to
use the const modifier as the contents of the buffer to which it
points never change. Internally, that const modifier is dropped as
both 'k_pipe_get()' and 'k_pipe_put()' share common code for copying
data; however 'k_pipe_put()' never takes a path that modifies those
contents.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-12-15 14:51:35 -05:00
Anas Nashif
a7e8391e31 debug: coredump: handle xtensa coredump like everyone else
There should not be any special handling for coredump with xtensa.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-12-14 09:32:27 +01:00
Andrej Butok
cc4ffb2a54 kernel: cmake: fix k_sleep compilation error for single thread
Fix k_sleep compilation error:
[build] ... syscalls/kernel.h:135: undefined reference to `z_impl_k_sleep'
for single thread applications (CONFIG_MULTITHREADING = n).
The sched.c file contains source code which must also be present
in single thread applications.

Signed-off-by: Andrej Butok <andrey.butok@nxp.com>
2023-12-14 09:31:38 +01:00
Daniel Leung
fa561ccd59 kernel: mmu: no need to expose z_free_page_count
z_free_page_count is only used in one file, so there is
no need to expose it, even to other parts of the kernel.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-12 18:46:21 +00:00
Daniel Leung
22447c9736 kernel: mmu: fix typo K_DIRECT_MAP to K_MEM_DIRECT_MAP
Fix typo in comment to reflect the actual macro named
K_MEM_DIRECT_MAP.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-08 08:31:15 -05:00
Peter Mitsis
a3e5af95de kernel: Update k_sleep() and k_usleep() return values
Updates both the k_sleep() and k_usleep() return values so that if
the thread was woken up prematurely, they will return the time left
to sleep rounded up to the nearest millisecond (for k_sleep) or
microsecond (for k_usleep) instead of rounding down. This removes
ambiguity should there be a non-zero number of remaining ticks
that correlate to a time of less than 1 millisecond or 1 microsecond.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-12-07 10:41:00 +00:00
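
A usage sketch of the new rounding behavior; the early wake-up source is illustrative:

```
#include <zephyr/kernel.h>

void sleep_example(void)
{
	int32_t left_ms = k_sleep(K_MSEC(100));

	if (left_ms > 0) {
		/* Woken early (e.g. via k_wakeup()); a remainder of less than
		 * one millisecond now reports as 1 instead of rounding down
		 * to 0.
		 */
	}
}
```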
Keith Packard
9f42537fc7 kernel: Use k_us_to_cyc_ceil32 in k_busy_wait
Replace the open-coded time conversion with the macro, as that will
usually use a constant divide or multiply.

Signed-off-by: Keith Packard <keithp@keithp.com>
2023-12-05 09:24:28 +01:00
Guennadi Liakhovetski
69cdc32892 llext: export some symbols
Export some symbols for loadable modules. Also add an
EXPORT_SYSCALL() helper macro for exporting system calls by their
official names.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2023-12-01 10:08:12 -05:00
Daniel Leung
34c6b17680 doc: fixed already renamed _Cstart to z_cstart
The function _Cstart has already been renamed to z_cstart,
so change the remaining references of it in various docs.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-11-30 21:01:47 -05:00
Qipeng Zha
3d29c9fe54 kernel: timeout: fix issue with z_timeout_expires
- Issue found with the Ztest case test_thread_timeout_remaining_expires
  on the Intel ISH platform when CONFIG_SYS_CLOCK_TICKS_PER_SEC is
  adjusted to 10k.
- timeout_rem() returns the exact remaining ticks, calibrated by
  subtracting elapsed(), while z_timeout_expires tries to compute the
  tick at which the timeout expires using the current tick as a base,
  so it needs the exact current tick, obtained by adding elapsed()
  (see the sketch below).
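
A conceptual sketch of the relationship (names simplified; not the
actual code):

  /* The expiration tick must be based on the exact current tick,
   * which includes the ticks elapsed since the last announcement.
   */
  k_ticks_t expires = curr_tick + elapsed() + timeout_rem(timeout);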

Signed-off-by: Qipeng Zha <qipeng.zha@intel.com>
2023-11-30 12:22:54 +01:00
Peter Mitsis
91a6af3e8f kernel: mmu: Fix static analysis issue
Instead of performing a set of relative address comparisons using
pointers of type 'uint8_t *', we leverage the existing IN_RANGE()
macro and perform the comparisons with 'uintptr_t'.
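
Illustrative shape of the new style of check (names and region are
made up):

  uintptr_t addr = (uintptr_t)ptr;
  uintptr_t start = (uintptr_t)region_start;
  uintptr_t end = start + region_size - 1;

  if (IN_RANGE(addr, start, end)) {
      /* address lies within the region */
  }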

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-11-28 16:44:16 -05:00
Christopher Friedt
afc59112a9 device: support for mutable devices
Add support for mutable devices. Mutable devices are those which
can be modified after declaration, in-place, in kernel mode.

In order for a device to be mutable, the following must be true:

  * `CONFIG_DEVICE_MUTABLE` must be y-selected
  * the Devicetree bindings for the device must include
    `mutable.yaml`
  * the Devicetree node must include the `zephyr,mutable` property

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
2023-11-28 15:35:39 +01:00
Daniel Leung
40ba4015e3 kernel: mm: only include demand_paging.h if needed
This moves the inclusion of demand_paging.h out of kernel/mm.h,
so that users of demand paging APIs must include the header
explicitly. Since the main user is the kernel itself, we can be
more disciplined about header inclusion.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-11-23 10:01:45 +01:00
Flavio Ceolin
8679c58644 kernel: Option to not use tls to get current thread
Add a Kconfig option to select whether thread local storage is
used to store the current thread.

The function using it can be called from an ISR, and using
TLS variables in that context may not be allowed.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-11-21 15:49:48 +01:00
Daniel Leung
f12d49d7ef kernel: mm: separate demand paging headers into its own file
This separates the demand paging related headers into their own file
instead of being stuffed inside the main kernel memory
management header file.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-11-20 09:19:14 +01:00
Daniel Leung
c972ef1a0f kernel: mm: move kernel mm functions under kernel includes
This moves the k_* memory management functions from sys/ into
kernel/ includes, as they are kernel public APIs. The z_*
functions are further separated into the kernel internal
header directory.

Also made a quick change to doxygen to group sys_mem_* into
the OS Memory Management group so they will appear in the docs.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-11-20 09:19:14 +01:00
Alexander Razinkov
d2c101d466 kernel: init: conditional .bss section zeroing
Some platforms already have the .bss section zeroed out externally
before Zephyr initialization, so there is no sense in zeroing it out
a second time from software.
Such a boot-time optimization could be critical, e.g. for RTL simulation.
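
Conceptually the early init path becomes something like the following
(the Kconfig symbol name is illustrative, not necessarily the one
introduced here):

  #ifndef CONFIG_SKIP_BSS_CLEAR   /* illustrative symbol name */
      z_bss_zero();
  #endif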

Signed-off-by: Alexander Razinkov <alexander.razinkov@syntacore.com>
2023-11-08 10:07:26 +01:00
Peter Mitsis
9364ba4353 kernel: Update k_thread_state_str()
Updates k_thread_state_str() to interpret the halting bits
correctly.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-11-06 18:59:35 -05:00
Peter Mitsis
e7986eb552 kernel: Extend halting to support suspending
Extends the concept of halting a thread from just aborting a thread
to both aborting and suspending a thread.

Part of this involves updating k_thread_suspend() to operate in a
similar fashion to that of k_thread_abort().

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-11-06 18:59:35 -05:00
Peter Mitsis
b1384a71bf kernel: Create z_thread_halt()
Extracts the essential thread synchronization logic when aborting
a thread from z_thread_abort() and moves it to its own routine
called z_thread_halt().

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-11-06 18:59:35 -05:00
Peter Mitsis
e1db1cec64 kernel: Rename end_thread() to halt_thread()
The routine halt_thread() acts nearly identically to end_thread()
except that instead of only halting the thread if the _THREAD_DEAD
state bit is not set, it will halt it if the bit specified by the
parameter new_state is not set (which is always _THREAD_DEAD).

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-11-06 18:59:35 -05:00
Peter Mitsis
52ae56b8a9 kernel: Add halt_queue field to k_thread
The halt queue will be used to identify threads that are waiting
for a thread on another CPU to finish suspending.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-11-06 18:59:35 -05:00
Alexander Razinkov
4664813a12 kernel: spinlock: Ticket spinlocks
The basic spinlock implementation is based on a single
atomic variable and doesn't guarantee locking fairness
across multiple CPUs. It's even possible that a single CPU
will win the contention every time, which will result
in a live-lock.

Ticket spinlocks provide a FIFO order of lock acquisition,
which resolves this unfairness at the cost of a slightly
increased memory footprint.
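
A generic, self-contained sketch of the ticket idea using Zephyr's
atomic API (not the actual k_spinlock implementation):

  #include <zephyr/sys/atomic.h>

  struct ticket_lock {
      atomic_t next_ticket;   /* next ticket to hand out */
      atomic_t owner_ticket;  /* ticket currently being served */
  };

  static inline void ticket_lock(struct ticket_lock *l)
  {
      /* atomic_inc() returns the value prior to the increment,
       * i.e. this CPU's ticket number.
       */
      atomic_val_t my_ticket = atomic_inc(&l->next_ticket);

      while (atomic_get(&l->owner_ticket) != my_ticket) {
          /* spin until it is our turn (FIFO order) */
      }
  }

  static inline void ticket_unlock(struct ticket_lock *l)
  {
      atomic_inc(&l->owner_ticket);
  }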

Signed-off-by: Alexander Razinkov <alexander.razinkov@syntacore.com>
2023-11-04 07:38:39 -04:00
Anas Nashif
a08bfeb49c syscall: rename Z_OOPS -> K_OOPS
Rename internal API to not use z_/Z_.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif
ee9f278323 syscall: rename Z_SYSCALL_VERIFY -> K_SYSCALL_VERIFY
Rename internal API to not use z_/Z_.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif
9c4d881183 syscall: rename Z_SYSCALL_ to K_SYSCALL_
Rename internal API to not use z_/Z_.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif
9c1aeb5fd3 syscall: rename z_user_ to k_usermode_
Rename internal API to not use z_/Z_.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif
56fddd805a syscall: rename z_user_from_copy -> k_usermode_from_copy
Rename internal API to not use z_/Z_.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif
6ba8176e33 syscall: rename z_user_alloc_from_copy -> k_usermode_alloc_from_copy
Rename internal API to not use z_/Z_.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif
70e791905d syscall: rename z_user_string_nlen -> k_usermode_string_nlen
Rename internal API to not use z_/Z_.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif
3ab356604d syscall: rename z_dump_object_error -> k_object_dump_error
Rename internal API to not use z_/Z_.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif
21254b2f40 syscall: rename z_object_validate -> k_object_validate
Rename internal API to not use z_/Z_.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif
c25d0804f0 syscall: rename z_object_find -> k_object_find
Rename internal API to not use z_/Z_.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif
27d74e95c9 syscall: rename z_object_wordlist_foreach -> k_object_wordlist_foreach
Rename internal API to not use z_/Z_.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif
a5b49458eb syscall: rename z_thread_perms_inherit -> k_thread_perms_inherit
Rename internal API to not use z_/Z_.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif
993f903830 syscall: rename z_thread_perms_set -> k_thread_perms_set
Rename z_thread_perms_set and do not use z_.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif
e2a9d013e5 kernel: userspace: rename z_thread_perms_clear -> k_thread_perms_clear
Rename z_thread_perms_clear -> k_thread_perms_clear.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif
70cf96b5e1 syscall: z_thread_perms_all_clear -> k_thread_perms_all_clear
Rename internal function z_thread_perms_all_clear.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif
7a18c2b150 syscall: rename z_object_uninit -> k_object_uninit
Rename internal function z_object_uninit.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif
43a7402baf syscall: rename z_object_recycle -> k_object_recycle
Rename z_object_recycle and do not use z_ for internal APIs.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif
d178b6a0e7 syscall: z_obj_validation_check -> k_object_validation_check
Rename z_obj_validation_check and do not use z_ for internal APIs.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif
684b8fcdd0 syscall: Z_SYSCALL_VERIFY_MSG -> K_SYSCALL_VERIFY_MSG
Rename macros and do not use Z_ for internal APIs.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif
4e396174ce kernel: move syscall_handler.h to internal include directory
Move the syscall_handler.h header, which is used internally only, to a
dedicated internal folder that should not be used outside of Zephyr.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif
a6b490073e kernel: object: rename z_object -> k_object
Do not use z_ for internal structures and rename to k_object instead.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif
c91cad735a kernel: object: rename z_object_init to k_object_init
Do not use z_ for internal API and rename to k_object_init.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif
c54fb959e3 kernel: objects: rename z_dynamic_object_aligned_create
Do not use z_ for internal APIs and rename function.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Flavio Ceolin
6e02f02464 kernel: Fix k_thread_name_get signature
k_thread_name_get has an inconsistent signature. In the function
declaration it uses k_tid_t, but the implementation uses
struct k_thread *. Change the implementation to use k_tid_t.
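
The declaration and the definition now share the same shape, roughly:

  /* Both the declaration and the implementation now take k_tid_t. */
  const char *k_thread_name_get(k_tid_t thread_id);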

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-10-30 17:34:45 -04:00
Flavio Ceolin
a36c016dff kernel: mmu: Fix possible null de-reference
virt_page_phys_get can be called with the phys parameter set to NULL
when the intention is just to check whether a virtual address is mapped.

This function is generally overridden by an arch-specific implementation
that checks whether phys is NULL before using it, but this default
implementation doesn't.
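
A sketch of the kind of guard the default implementation needed (not
the exact diff; 'pa' and 'is_mapped' are illustrative locals):

  if (phys != NULL) {
      *phys = pa;  /* only write back when the caller wants the address */
  }

  return is_mapped ? 0 : -EFAULT;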

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-10-30 09:23:23 -04:00
Jukka Rissanen
83c875adab hostap: Move the relevant config options away from hostap
Move the Zephyr-specific config options from
modules/hostap/Kconfig to the corresponding Kconfig files where
each option is specified.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
2023-10-26 09:48:47 +02:00
Daniel Leung
88b05b3b91 kernel: disable THREAD_USERSPACE_LOCAL_DATA if LIBC_ERRNO
Thread userspace local data is used to store errno per thread when
thread local storage support is unavailable. However, if the C
library has native errno support, there is no need to enable
thread userspace local data to store errno per thread. Therefore,
amend the default for CONFIG_THREAD_USERSPACE_LOCAL_DATA so that
it is not enabled if the C library has native errno support.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-10-24 20:23:11 -04:00
Pedro Sousa
4207f4add8 kernel: timer: Fix race condition in k_timer_start
The documentation suggests that k_timer_start can be invoked from ISR
and preemptive contexts, however, an assertion failure occurs if one
k_timer_start call preempts another for the same timer instance. This
commit mitigates the issue by holding a spinlock throughout the
k_timer_start function, ensuring thread-safety.
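
Conceptually the change looks like this (sketch only; 'lock' is assumed
to be the timer code's spinlock, and the z_impl_ name follows the usual
syscall convention):

  void z_impl_k_timer_start(struct k_timer *timer, k_timeout_t duration,
                            k_timeout_t period)
  {
      k_spinlock_key_t key = k_spin_lock(&lock);

      /* the existing start/restart logic runs entirely under the lock,
       * so a preempting k_timer_start() on the same timer cannot
       * interleave with it
       */

      k_spin_unlock(&lock, key);
  }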

Fixes: #62908

Signed-off-by: Pedro Sousa <sousapedro596@gmail.com>
2023-10-23 12:10:06 +02:00
Daniel Leung
943e434559 mm: introduce CONFIG_KERNEL_VM_USE_CUSTOM_MEM_RANGE_CHECK
Sometimes the generic address range checker is not adequate
(think Xtensa cached/uncached pointers). This provides a way
to implement custom memory range checkers for those
situations. When enabled, sys_mm_is_phys_addr_in_range()
and sys_mm_is_virt_addr_in_range() must be implemented.
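
When the option is enabled, a platform provides checkers along these
lines (prototypes approximated; the address ranges are made-up
placeholders):

  bool sys_mm_is_phys_addr_in_range(uintptr_t phys)
  {
      return IN_RANGE(phys, 0x60000000UL, 0x60FFFFFFUL);
  }

  bool sys_mm_is_virt_addr_in_range(void *virt)
  {
      return IN_RANGE((uintptr_t)virt, 0x80000000UL, 0x80FFFFFFUL);
  }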

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-10-20 15:08:34 +02:00
Flavio Ceolin
991ff6f48b kernel: init: Build constant in early random generator
Use a Kconfig symbol for initial state in the early
random number generator.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-10-13 10:03:53 +03:00
Flavio Ceolin
6c0fad2888 kernel: Fixes for z_early_rand_get
The early random get function was making many wrong assumptions
about the random subsystem and entropy drivers. First, it assumed
that entropy_get_entropy() would be ISR safe; that is not right.
The driver has an ISR-safe callback, and if that callback is not
implemented or not working, it is not OK to fall back to the other
callback. Second, the fallback to the random subsystem is even more
problematic, since its implementations can use kernel services to
protect internal state and be thread-safe.

Another incorrect thing in this function was the guard around it.
It was only needed by features like stack randomization and stack
canaries, and not when those conditions were met. Just remove it;
if it is not needed, the linker will take care of it.

The drawback of this change is that, in the absence of an entropy
generator that can be called from an ISR, the randomness is very
weak.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-10-13 10:03:53 +03:00
Flavio Ceolin
974e336140 kernel: random: Make z_early_rand_get a weak symbol
Allow targets to come up with their own early random generator,
since the default can be not so random due to constraints.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-10-13 10:03:53 +03:00
Flavio Ceolin
f9c7a5e6fb kernel: random: Rename early random get function
Rename z_early_boot_rand_get to z_early_rand_get to be consistent
with other early functions.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-10-13 10:03:53 +03:00
Peter Mitsis
e9987aabbc kernel: Remove legacy mem block from mailbox
Memory blocks are a legacy feature and are to be removed from
mailboxes.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-10-13 09:56:02 +03:00
Keith Packard
362d8906a6 kernel: Support platforms with runtime timer freq for thread init_delay
Platforms that determine their basic timer frequency at runtime instead of
build time cannot compute thread initialization timeouts during
compilation.

Switch back to storing the init_delay value in milliseconds and perform the
conversion to a k_timeout_t at runtime.
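
Illustrative shape of the runtime conversion (field and variable names
are placeholders):

  /* Convert the stored millisecond delay to a k_timeout_t only when
   * the thread is actually started, so a timer frequency discovered
   * at runtime is taken into account.
   */
  k_timeout_t delay = (init_delay_ms == 0) ? K_NO_WAIT
                                           : K_MSEC(init_delay_ms);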

Signed-off-by: Keith Packard <keithp@keithp.com>
2023-10-12 20:39:20 +03:00
Keith Packard
eb024b655f libc: Control Z_LIBC_PARTITION_EXISTS from Kconfig
Instead of adding every possible subsystem which places variables in the C
library memory partition in libc-hooks.h, place those conditions in the
related Kconfig files and simplify libc-hooks.h to just look at
CONFIG_NEED_LIBC_MEM_PARTITION.

Signed-off-by: Keith Packard <keithp@keithp.com>
2023-10-10 23:39:40 +03:00
Flavio Ceolin
e7bd10ae71 random: Rename random header
rand32.h does not make much sense, since the random subsystem
provides more APIs than just getting a random 32-bit value.

Rename it to random.h to be consistent with other
subsystems.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-10-10 14:23:50 +03:00
Daniel Leung
2c2d53c7e5 kernel: remove deprecate wording on arch_kernel_init()
The wording on deprecating arch_kernel_init() in favor of prep_c()
has never materialized. Various architectures are using it to
perform initialization. So remove the wording.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-10-09 10:15:49 +02:00
Daniel Leung
dd6a7eb77d kernel: demand_paging: add doc to enum arch_page_location
This adds doc to enum arch_page_location.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-10-09 10:15:49 +02:00
Daniel Leung
77dc74136c kernel: tls: fix doc on arch_tls_stack_setup()
There was a typo there so fix it.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-10-09 10:15:49 +02:00
Yong Cong Sin
0cd135920a kernel: work: check handler when submit to queue
Assert that the handler of a work item is not NULL when submitting
it to the queue. This allows early detection of code
that submits a non-NULL work item with a NULL handler to
the work queue (where it happens), rather than right before the
work item gets executed in the queue (when it happens).
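
The submit-time check is essentially of this form (sketch, not the
exact assertion wording):

  __ASSERT(work != NULL, "work must not be NULL");
  __ASSERT(work->handler != NULL, "work handler must not be NULL");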

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2023-10-05 13:43:07 +01:00
Flavio Ceolin
fee22dae40 kconfig: Deprecate CONFIG_MP_NUM_CPUS
Mark CONFIG_MP_NUM_CPUS deprecated and point to
CONFIG_MP_MAX_NUM_CPUS.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-10-03 17:45:53 +01:00
Flavio Ceolin
15aa3acaf6 kconfig: Remove MP_NUM_CPUS usage
Zephyr's code base uses MP_MAX_NUM_CPUS to
know how many cores exist in the target. It is
also expected that both symbols MP_MAX_NUM_CPUS
and MP_NUM_CPUS have the same value, so let's
just use MP_MAX_NUM_CPUS and simplify it.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-10-03 17:45:53 +01:00
Anas Nashif
a1c7bfbc63 kernel: remove unused z_init_thread_base from kernel.h
This API is internal and not used in any way in kernel.h, so move it
back to where it is needed.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-09-30 18:43:28 +02:00
Anas Nashif
e19f21cb27 kernel: move z_is_thread_essential out of public kernel header
This is a private API to the kernel, so move out of kernel.h

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-09-30 18:43:28 +02:00
Anas Nashif
f0c7fbf0f1 kernel: move sched_priq.h to internal/ folder
This header is internal to the kernel and shall not be included directly.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-09-30 18:43:28 +02:00
Peter Mitsis
e6f1090553 kernel: Integrate object core statistics
Integrates object core statistics framework into the following
kernel objects:
  sys_mem_blocks, k_mem_slab
  threads, _cpu, z_kernel

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-09-30 08:04:14 +03:00
Peter Mitsis
1d5d674e0d kernel: Add initial k_obj_core_stats infrastructure
Adds the infrastructure to integrate statistics into
the object core.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-09-30 08:04:14 +03:00
Peter Mitsis
6df8efe354 kernel: Integrate object cores into kernel
Integrates object cores into the following kernel structures
   sys_mem_blocks, k_mem_slab
   _cpu, z_kernel
   k_thread, k_timer
   k_condvar, k_event, k_mutex, k_sem
   k_mbox, k_msgq, k_pipe, k_fifo, k_lifo, k_stack

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-09-30 08:04:14 +03:00