This adds a new Kconfig option for eviction algorithms which need page
tracking. When enabled, k_mem_paging_eviction_add()/_remove()
and k_mem_paging_eviction_accessed() must be implemented.
If an algorithm does not do page tracking, there is no need to
implement these functions, and no need for the kernel MMU code
to call into empty functions. This should save a few function
calls and some CPU cycles.
Note that arm64 unconditionally calls those functions, so it forces
CONFIG_EVICTION_TRACKING to be enabled there.
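For illustration, a minimal sketch of the hooks a tracking eviction algorithm
might provide when CONFIG_EVICTION_TRACKING is enabled; the parameter types
and header shown below are assumptions, not the authoritative prototypes:
```
#include <zephyr/kernel.h>
#include <zephyr/kernel/mm/demand_paging.h>

void k_mem_paging_eviction_add(struct k_mem_page_frame *pf)
{
	/* insert the page frame into the algorithm's candidate list */
	ARG_UNUSED(pf);
}

void k_mem_paging_eviction_remove(struct k_mem_page_frame *pf)
{
	/* the page frame is no longer an eviction candidate */
	ARG_UNUSED(pf);
}

void k_mem_paging_eviction_accessed(uintptr_t phys)
{
	/* update recency/usage state for the frame backing this address */
	ARG_UNUSED(phys);
}
```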
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Allowing an invalid address to be "freed" when asserts are disabled
is dangerous and can lead to a class of bugs (and potential security
issues) that is very hard to troubleshoot. This change always validates
the address before adding it to the free list and calls k_panic() if
asserts are not enabled.
Signed-off-by: Corey Wharton <xodus7@cwharton.com>
Obviously, everyone knows that there are 8 bits per byte, so
there isn't a lot of magic happening, per se, but it's still
helpful to clearly denote where the magic number 8 refers to
the number of bits in a byte.
Occasionally, 8 will refer to a field size or offset in a
structure, MMR, or word. Occasionally, the number 8 will refer
to the number of bytes in a 64-bit value (which should probably
be replaced with `sizeof(uint64_t)`).
For converting bits to bytes, or vice-versa, let's use
`BITS_PER_BYTE` for clarity (or other appropriate `BITS_PER_*`
macros).
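A small usage illustration (assuming `BITS_PER_BYTE` and `DIV_ROUND_UP` come
from `zephyr/sys/util.h`):
```
#include <stdint.h>
#include <stddef.h>
#include <zephyr/sys/util.h>

/* bits in a 32-bit word, spelled out instead of a bare 8 */
#define CRC32_NUM_BITS (sizeof(uint32_t) * BITS_PER_BYTE)

/* bytes needed to hold a given number of bits */
static inline size_t bytes_for_bits(size_t nbits)
{
	return DIV_ROUND_UP(nbits, BITS_PER_BYTE);
}
```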
Signed-off-by: Chris Friedt <cfriedt@tenstorrent.com>
Move the initialization of the priority queue for the run queue out of
sched.c to remove one more ifdef from sched.c. No change in functionality,
but this better matches the rest of sched.c and priority_q.h such that the
ifdeffery needed is done in priority_q.h.
Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
Fix the k_sleep() implementation for no-multi-threading mode.
The absolute timeout expiration value was fed to k_busy_wait()
instead of the delta value. That caused a bug where the sleep time
grew in geometric progression across calls (while the actual function
argument stayed constant) while the program ran.
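A hedged sketch of the idea only (not the actual patch), assuming the tick
and conversion helpers named below:
```
#include <zephyr/kernel.h>

/* busy-wait for the *remaining* time, not the absolute expiration tick */
static void sleep_ticks_no_mt(uint32_t ticks)
{
	uint32_t expected_wakeup = ticks + sys_clock_tick_get_32();
	uint32_t delta = expected_wakeup - sys_clock_tick_get_32();

	/* feeding expected_wakeup here instead of delta was the bug */
	k_busy_wait(k_ticks_to_us_floor32(delta));
}
```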
Signed-off-by: Mikhail Kushnerov <m.kushnerov@yadro.com>
Allow SoCs to implement their own per-core initialization function by
selecting `CONFIG_SOC_PER_CORE_INIT_HOOK` and implementing
`soc_per_core_init_hook()`.
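A hedged example of what an SoC might provide (the hook body is purely
illustrative):
```
/* called by the kernel on each core during its early bring-up */
void soc_per_core_init_hook(void)
{
	/* e.g. enable core-local caches, program the per-core interrupt
	 * controller, or apply per-CPU errata workarounds
	 */
}
```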
Signed-off-by: Maxim Adelman <imax@meta.com>
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
The ramfunc region is copied into RAM from FLASH during XIP init. We
copy from the loadaddr of the region, and were previously copying to the
symbol __ramfunc_start. This is incorrect when using an MPU with
alignment requirements, as the __ramfunc_start symbol may have padding
placed before it in the region. The __ramfunc_start symbol still needs
to be aligned in order to be used by the MPU though, so define a new
symbol __ramfunc_region_start, and use that symbol when copying the
__ramfunc region from FLASH to RAM.
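A sketch of the resulting copy, with assumed linker-symbol names (the load
address and size symbols below are illustrative):
```
#include <string.h>
#include <stdint.h>

extern char __ramfunc_region_start[]; /* copy destination, incl. padding */
extern char __ramfunc_load_start[];   /* loadaddr of the region in FLASH */
extern char __ramfunc_size[];         /* region size, as a linker symbol */

static void copy_ramfunc_region(void)
{
	(void)memcpy(__ramfunc_region_start, __ramfunc_load_start,
		     (size_t)(uintptr_t)__ramfunc_size);
}
```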
Fixes #75296
Signed-off-by: Daniel DeGrasse <daniel.degrasse@nxp.com>
This reverts commit 6d4031f96c.
That change marked the majority of builds on platforms with blobs as
tainted even though the blobs were not used or compiled in, which is
very misleading.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
In a uniprocessor system, _sched_spinlock may not need to be
held in all the same cases that it does in a multiprocessor
system. Removing those unnecessary usages can lead to better
performance on UP systems. In the case of uncontested taking
and giving of a semaphore, this can be as much as a +14%
performance gain.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Inlining z_unpend_first_thread() has been observed to give a
+8% and +16% performance boost to the thread_metric benchmark's
message processing and synchronization tests respectively.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Re-orders the checks in should_preempt() so that the
z_is_thread_timeout_active() check is done last.
This change has been observed to give a +7% performance boost on
the thread_metric benchmark's preemptive scheduling test.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Due to the (potentially) hard to understand effects of blobs, it seems
prudent to make their presence more noticeable.
With this change, whenever blobs are present in the Zephyr work space,
the hello world sample output looks like this:
> *** Booting Zephyr OS build (tainted) v3.7.0-4569-gd4f8765ef20e ***
> Hello World! esp32c3_devkitm/esp32c3
Before, it looked like this:
> *** Booting Zephyr OS build v3.7.0-4568-g69c47471d187 ***
> Hello World! esp32c3_devkitm/esp32c3
Signed-off-by: Reto Schneider <reto.schneider@husqvarnagroup.com>
Removing the routine z_ready_thread_locked() as it is not
used anywhere. It was a leftover artefact from development
that previously escaped cleanup.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Applies the 'unlikely' attribute to various kernel objects that
use z_unpend_first_thread() to optimize for the non-blocking path.
This boosts the thread_metric synchronization benchmark numbers
on the frdm_k64f board by about 10%.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
This improves context switching by 7% when measured using the
thread_metric benchmark.
Before:
**** Thread-Metric Preemptive Scheduling Test **** Relative Time: 120
Time Period Total: 5451879
After:
**** Thread-Metric Preemptive Scheduling Test **** Relative Time: 30
Time Period Total: 5853535
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Implements a set of tests designed to show how the performance of the
three scheduler queue implementations (DUMB, SCALABLE and MULTIQ)
varies with respect to the number of threads in the ready queue.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
x86 architectures require a dynamic stack size that is a multiple
of 4096 bytes due to MMU restrictions.
For example, this test would previously fail when using the
default dynamic stack size of 1024 bytes for 32-bit
platforms.
```
west build -p auto -b qemu_x86/atom/nopae -t run \
tests/posix/common/ -- -DCONFIG_USERSPACE=y
```
It would pass with an additional argument
```
west build -p auto -b qemu_x86/atom/nopae -t run \
tests/posix/common/ -- -DCONFIG_USERSPACE=y \
-DCONFIG_DYNAMIC_THREAD_STACK_SIZE=4096
```
Add a special default for x86 when using dynamic thread stacks.
The x86 default removes the need for `boards/qemu_x86*.conf`,
with the exception of `qemu_x86_tiny`.
qemu_x86_tiny did not have sufficient memory (or configuration)
to run the non-userspace tests, so bump up the available ram
from 256k to 512k for this test and clone the .conf from the
demand paging tests.
Eventually, the common posix test should be split into more
concise functional categories.
Signed-off-by: Chris Friedt <cfriedt@tenstorrent.com>
Up until now, the `__thread` keyword has been used for declaring
variables as thread-local storage. However, `__thread` is a GNU-specific
keyword, which limits compatibility with other toolchains
(for instance IAR).
This PR introduces a new macro `Z_THREAD_LOCAL` which expands to the
corresponding C11, C23 or C++11 standard keyword based on the standard
that is specified during compilation; otherwise it falls back to the old
`__thread` keyword.
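A simplified sketch of the selection logic (the real macro lives in Zephyr's
toolchain headers; the version checks here are illustrative):
```
#if defined(__cplusplus) && (__cplusplus >= 201103L)
#define Z_THREAD_LOCAL thread_local   /* C++11 and newer */
#elif defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 202311L)
#define Z_THREAD_LOCAL thread_local   /* C23 */
#elif defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 201112L)
#define Z_THREAD_LOCAL _Thread_local  /* C11 */
#else
#define Z_THREAD_LOCAL __thread       /* GNU fallback */
#endif

/* usage */
static Z_THREAD_LOCAL int per_thread_counter;
```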
Signed-off-by: Daniel Flodin <daniel.flodin@iar.com>
This adds an interface to allow coredump to dump the privileged
stack, which is defined in an architecture-specific way.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
For code clarity, remove unnecessary `return` statements
in functions with a void return type, as they don't affect control flow.
Signed-off-by: Pisit Sawangvonganan <pisit@ndrsolution.com>
Simplifies the k_thread_cpu_pin() implementation to leverage the
existing cpu_mask_mod() infrastructure.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
`CONFIG_MP_NUM_CPUS` has been deprecated for more than 2
releases, it's time to remove it.
Updated all usage of `CONFIG_MP_NUM_CPUS` to
`CONFIG_MP_MAX_NUM_CPUS`
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
Do not use SYS_INIT for initializing irq_offload when it is enabled;
instead, use a new interface that is called during the boot process for
all architectures.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Fill the memory of all CPU's IRQ stack with 0xAA on init, so
that `z_stack_space_get` can calculate the remaining space
correctly.
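A hedged sketch of the idea (the stack symbol declaration below is assumed
for illustration):
```
#include <string.h>
#include <zephyr/kernel.h>

/* assumed declaration of the per-CPU IRQ stacks, for illustration only */
extern struct z_thread_stack_element
	z_interrupt_stacks[CONFIG_MP_MAX_NUM_CPUS][CONFIG_ISR_STACK_SIZE];

static void paint_irq_stacks(void)
{
	for (unsigned int i = 0; i < arch_num_cpus(); i++) {
		/* untouched 0xAA bytes later tell z_stack_space_get()
		 * how much of the IRQ stack was never used
		 */
		memset(z_interrupt_stacks[i], 0xAA, sizeof(z_interrupt_stacks[i]));
	}
}
```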
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
Add missing braces to comply with MISRA C:2012 Rule 15.6 and
also following Zephyr's style guideline.
Signed-off-by: Pisit Sawangvonganan <pisit@ndrsolution.com>
This sets initial unpaged mappings for __ondemand_func code and
__ondemand_rodata variables. To achieve this, we have to augment the
backing store API.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Introduce soc and board hooks to replace arch specific code
and replace usages of SYS_INIT for platform initialization.
include/zephyr/platform/hooks.h introduces the hooks to be implemented
by boards and SoCs.
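A hedged example of the hook style (hook names follow the direction of this
change; see include/zephyr/platform/hooks.h for the actual set):
```
/* implemented by the SoC when the corresponding Kconfig hook is enabled */
void soc_early_init_hook(void)
{
	/* SoC setup previously registered through SYS_INIT */
}

/* implemented by the board when the corresponding Kconfig hook is enabled */
void board_early_init_hook(void)
{
	/* board setup previously registered through SYS_INIT */
}
```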
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
It is sometimes necessary to modify/update memory permissions on some
pages, especially with LLEXT where some allocated segments have to be
executable.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Add bootargs support to zefi. This implements
get_bootargs() when both efi and bootargs are
selected in config.
Signed-off-by: Jakub Michalski <jmichalski@internships.antmicro.com>
Signed-off-by: Filip Kokosinski <fkokosinski@antmicro.com>
Add bootargs support for multiboot. This
implements get_bootargs() when multiboot and
bootargs are selected in config.
Signed-off-by: Jakub Michalski <jmichalski@internships.antmicro.com>
Signed-off-by: Filip Kokosinski <fkokosinski@antmicro.com>
Add support for passing args to main(). The content of bootargs is
taken from get_bootargs(), which should be implemented for each loader,
and is then split into args and passed to main().
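A usage sketch of the application side once bootargs support is enabled
(configuration details omitted):
```
#include <stdio.h>

int main(int argc, char *argv[])
{
	/* argv[] holds the split bootargs provided by the loader */
	for (int i = 0; i < argc; i++) {
		printf("arg %d: %s\n", i, argv[i]);
	}
	return 0;
}
```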
Signed-off-by: Jakub Michalski <jmichalski@internships.antmicro.com>
Signed-off-by: Filip Kokosinski <fkokosinski@antmicro.com>
This adds a new arch_thread_priv_stack_space_get() interface for
each architecture to report privileged stack space usage. Each
architecture will need to implement this function, as each arch
has its own way of defining privileged stacks.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This provides memory mappings with the ability to be initialized in their
paged-out state and be paged in on demand. This is especially nice for
anonymous memory mappings as they no longer have to allocate all memory
at mem_map time. This also allows for file mappings to be implemented by
simply providing backing store location tokens.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The intention of this API is to allow setting the posix thread
name equal to the zephyr thread name.
By defining it as an arch interface, the implementation becomes
generic.
Signed-off-by: Rubin Gerritsen <rubin.gerritsen@nordicsemi.no>
Add missing braces to comply with MISRA C:2012 Rule 15.6 and
also following Zephyr's style guideline.
Signed-off-by: Pisit Sawangvonganan <pisit@ndrsolution.com>
Dynamic memory allocation depends on various factors and many are
only known at runtime depending on application inputs.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Using DEVICE_DEFINE, a device without a corresponding DT node can be
defined (for example CRYPTO_MTLS). Z_DEVICE_INIT() does not initialize
dt_meta for such devices, leaving the field NULL.
device_get_dt_nodelabels() and the functions calling it have to handle
dev->dt_meta == NULL to prevent fatal errors.
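A hedged illustration of the required handling (the field names in the sketch
are assumptions):
```
#include <stdbool.h>
#include <string.h>
#include <zephyr/device.h>

static bool device_has_nodelabel(const struct device *dev, const char *label)
{
	const struct device_dt_nodelabels *nl = device_get_dt_nodelabels(dev);

	if (nl == NULL) {
		/* device came from DEVICE_DEFINE: no DT metadata at all */
		return false;
	}

	for (size_t i = 0; i < nl->num_nodelabels; i++) {
		if (strcmp(nl->nodelabels[i], label) == 0) {
			return true;
		}
	}
	return false;
}
```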
Signed-off-by: Jan Peters <peters@kt-elektronik.de>
When logging is on while optimization and multithreading are off, the
build fails to link because the unoptimized compiler/linker does not
appear to look beyond the function and fails trying to link
k_thread_name_get.
Rework the code so that, even without optimization, the compiler knows
k_thread_name_get is not needed, and do not log the current thread
name in that case.
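A generic illustration of the pattern (not the literal patch): an explicit
preprocessor guard removes the k_thread_name_get() reference even when
dead-code elimination is unavailable at -O0:
```
#include <zephyr/kernel.h>

static const char *log_thread_name(void)
{
#if defined(CONFIG_MULTITHREADING) && defined(CONFIG_THREAD_NAME)
	return k_thread_name_get(k_current_get());
#else
	/* no reference to k_thread_name_get() is emitted in this case */
	return NULL;
#endif
}
```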
Signed-off-by: Krzysztof Chruściński <krzysztof.chruscinski@nordicsemi.no>
Removes the ISR restriction from k_thread_priority_set().
Since the first commit, the routine k_thread_priority_set() has
had a restriction preventing it from being called from within
the context of an ISR. As there does not (any longer) appear to
be a reason why the restriction exists, it is being removed.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Add functions k_thread_foreach_unlocked_filter_by_cpu() and
k_thread_foreach_filter_by_cpu() to loop through the threads on the
specified cpu only.
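A hedged usage sketch (the callback type is assumed to match
k_thread_foreach()):
```
#include <zephyr/kernel.h>
#include <zephyr/sys/printk.h>

static void count_cb(const struct k_thread *thread, void *user_data)
{
	ARG_UNUSED(thread);
	(*(int *)user_data)++;
}

void print_cpu0_thread_count(void)
{
	int count = 0;

	k_thread_foreach_filter_by_cpu(0, count_cb, &count);
	printk("%d threads on CPU 0\n", count);
}
```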
Signed-off-by: Jyri Sarha <jyri.sarha@linux.intel.com>
Reduce `k_spin_unlock` calls in `k_obj_core_stats_xxx` functions by
consolidating error handling into an `if-else if-else` structure and
using a single return variable `rv`.
Signed-off-by: Pisit Sawangvonganan <pisit@ndrsolution.com>
Removes the ALWAYS_INLINE attribute from the definition of the
routine z_unpend_thread_no_timeout() to fix an unresolved symbol
error that was occurring with some compilers.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
PR #63973 namespaced generated headers with zephyr/, including generated
syscall headers.
Since then, some new generated syscall header includes have been added
without the zephyr/ prefix, breaking builds when
CONFIG_LEGACY_GENERATED_INCLUDE_PATH is disabled.
This commit adds the zephyr/ prefix to includes for generated syscall
headers where it has been missed.
Signed-off-by: Ben Marsh <ben.marsh@helvar.com>
Utilize a code spell-checking tool to scan for and correct spelling errors
in all files within the `kernel` directory.
Signed-off-by: Pisit Sawangvonganan <pisit@ndrsolution.com>
native_posix (unlike native_sim) and its brethren (NATIVE_APPLICATION)
link the "runner" code and the embedded code together.
This means that when CONFIG_STATIC_INIT_GNU is set,
any host library constructors (like the LLVM fuzzer's) get
postponed to the Zephyr initialization.
These libraries are unlikely to work if we do this
(the LLVM fuzzer does not).
So let's instead not enable STATIC_INIT_GNU for these targets.
This means any embedded library constructors will continue to be
picked up during the link and will still be called during the native
runner initialization instead of during the Zephyr OS initialization,
as they were just before we introduced STATIC_INIT_GNU in
6e977ae2d5
Note that native_posix will be deprecated shortly and its users
are strongly encouraged to move to native_sim.
Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
In the device init phase, _mbedtls_init is called before malloc_prepare,
as mbedtls has a higher priority defined in SYS_INIT.
_mbedtls_init() calls psa_crypto_init() and mallocs a buffer,
but z_malloc_heap is not yet initialized, which causes the device to hang.
malloc_prepare() should be called before _mbedtls_init to fix this issue,
so add a new Kconfig option to increase the priority of libc to a default
of 30.
Signed-off-by: Maochen Wang <maochen.wang@nxp.com>
* Move ctors and init_array from the CPP library
to the kernel library, as this is common to both C
and C++ and it is the kernel that runs them.
* Rename the hidden Kconfig option CPP_STATIC_INIT_GNU
to STATIC_INIT_GNU.
* If STATIC_INIT_GNU is not selected, verify there are no
constructors left behind.
* Rename common-rom-cpp.ld to common-rom-init.ld.
* Rename z_cpp_init_static to z_init_static,
and have the kernel always call it.
Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
Signed-off-by: Keith Packard <keithp@keithp.com>
When this new Kconfig option is enabled, preempted threads on an SMP
system may generate additional IPIs when they are switched out.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Created tracing APIs to trace the entry and (exit + result) of
SYS_INIT and DEVICE_DT_DEFINE (and friends).
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
The deadline used by the deadline scheduler should be larger than zero,
because a negative deadline means the task should have finished
in the past.
Signed-off-by: TaiJu Wu <tjwu1217@gmail.com>
Let eviction algorithms be notified when a given page frame:
- should be considered as a possible candidate
- should no longer be considered as a candidate
- has just been marked as "accessed"
The NRU algorithm is unchanged, so it implements those as empty stubs.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The extern declaration of __stack_chk_guard adds volatile to
the type, while the declaration in kernel/compiler_stack_protect.c
was non-volatile. This causes type check errors with compilers that
declare the __stack_chk_guard variable in an internal pre-include
header file (IAR).
While I think the volatile keyword is unnecessary, I decided
to keep it and add it to the declaration in
kernel/compiler_stack_protect.c
Tested with IAR ICCARM and the Zephyr SDK GCC.
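For illustration, the declaration and the definition now agree (the
uintptr_t type is shown as an assumption):
```
#include <stdint.h>

/* declaration, e.g. from a header or a toolchain pre-include file */
extern volatile uintptr_t __stack_chk_guard;

/* definition in kernel/compiler_stack_protect.c now carries volatile too */
volatile uintptr_t __stack_chk_guard;
```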
Signed-off-by: Lars-Ove Karlsson <lars-ove.karlsson@iar.com>
On each reboot, this option causes the serial output to start at the
top-left of the user's terminal, simplifying (human) parsing.
Signed-off-by: Reto Schneider <reto.schneider@husqvarnagroup.com>
If a page is paged out, or paged in but inaccessible for the purpose of
tracking the "accessed" flag, then k_mem_unmap() may fail. Add the code
needed to support those cases.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
This is part of a series of moving memory management related
stuff out of the Z_ namespace and into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This is part of a series to move memory management related
stuff out of the Z_ namespace into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Also any demand paging and page frame related bits are
renamed.
This is part of a series to move memory management related
stuff out of the Z_ namespace into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This is part of a series to move memory management related
stuff from Z_ namespace into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This is part of a series to move memory management related
stuff from Z_ namespace into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This is part of a series to move memory management related
stuff from Z_ namespace into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Renames:
Z_KERNEL_VIRT_START to K_MEM_KERNEL_VIRT_START
Z_KERNEL_VIRT_SIZE to K_MEM_KERNEL_VIRT_SIZE
Z_KERNEL_VIRT_END to K_MEM_KERNEL_VIRT_END
This is part of a series to move memory management related
stuff from Z_ namespace into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Renames:
Z_VIRT_RAM_START to K_MEM_VIRT_RAM_START
Z_VIRT_RAM_SIZE to K_MEM_VIRT_RAM_SIZE
Z_VIRT_RAM_END to K_MEM_VIRT_RAM_END
This is part of a series to move memory management related
stuff from Z_ namespace into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Renames:
Z_PHYS_RAM_START to K_MEM_PHYS_RAM_START
Z_PHYS_RAM_SIZE to K_MEM_PHYS_RAM_SIZE
Z_PHYS_RAM_END to K_MEM_PHYS_RAM_END
This is part of a series to move memory management related
stuff from Z_ namespace into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This is part of a series to move memory management related
stuff from the Z_ namespace into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Rename Z_BOOT_VIRT_TO_PHYS() and Z_BOOT_PHYS_TO_VIRT() to
K_MEM_BOOT_VIRT_TO_PHYS() and K_MEM_BOOT_PHYS_TO_VIRT()
respectively. This is part of a series to move memory management
functions away from the Z_ namespace and into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This is part of a series to move memory management functions
away from the z_ namespace and into its own namespace. Also
make documentation available via doxygen.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This renames z_phys_map() and z_phys_unmap() to
k_mem_map_phys_bare() and k_mem_unmap_phys_bare()
respectively. This is part of the series to move memory
management functions away from the z_ namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
These functions were introduced alongside the memory-mapped
stack feature, and are currently only being used there.
To avoid potential confusion with k_mem_un/map(), remove them
and use k_mem_map/unmap_phys_guard() directly instead.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The internal functions k_mem_map_impl() and k_mem_unmap_impl()
are renamed to k_mem_map_phys_guard() and
k_mem_unmap_phys_guard() respectively to better clarify
their usage.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
As their stacks are defined by Zephyr's kernel/thread stack definition
macro, better use Zephyr's kernel/thread stack size macro for their stack
size, ensuring consistency and preventing potential issues related to
stack size misconfiguration.
Signed-off-by: Dong Wang <dong.d.wang@intel.com>
Move this to a call in the init process. arch_* calls are not services
and should be called consistently during initialization.
Place it between PRE_KERNEL_1 and PRE_KERNEL_2, as some drivers
initialized in PRE_KERNEL_2 might depend on SMP being set up.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
This option allows you to look up a struct device from any of the
node labels that were attached to the devicetree node used to create
the device.
This is helpful because node labels are a much more human-friendly set
of unique identifiers than the node names we are currently relying on
for use with device_get_binding(). Adding this infrastructure in the
device core allows anyone to make use of it without having to
replicate node label storage and search functions in various places in
the tree. The main use case, however, is for looking up devices by
node label in the shell.
Since there is a footprint penalty associated with storing the node
label metadata, leave this option disabled by default.
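A hedged usage sketch (the lookup helper name is assumed; "i2c0" is an
arbitrary example label):
```
#include <zephyr/device.h>

void nodelabel_lookup_example(void)
{
	const struct device *dev = device_get_by_dt_nodelabel("i2c0");

	if (dev == NULL || !device_is_ready(dev)) {
		/* label unknown, option disabled, or device not ready */
		return;
	}
	/* use dev ... */
}
```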
Signed-off-by: Martí Bolívar <mbolivar@amperecomputing.com>
Add a __ASSERT_ON guard around slab_ptr_is_good, as it is only used in
assertions, and leaving it unguarded generates a build warning with some
clang versions:
kernel/mem_slab.c:207:20: error: unused function 'slab_ptr_is_good'
207 | static inline bool slab_ptr_is_good(struct k_mem_slab *slab,...
| ^~~~~~~~~~~~~~~~
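A simplified sketch of the guard (placeholder body; not the literal
mem_slab.c code):
```
#include <zephyr/kernel.h>
#include <zephyr/sys/__assert.h>

#if __ASSERT_ON
static inline bool slab_ptr_is_good(struct k_mem_slab *slab, const void *ptr)
{
	/* real code checks ptr is inside the buffer, on a block boundary */
	ARG_UNUSED(slab);
	ARG_UNUSED(ptr);
	return true;
}
#endif

static void free_block_checked(struct k_mem_slab *slab, void *ptr)
{
	__ASSERT(slab_ptr_is_good(slab, ptr), "invalid pointer %p", ptr);
	/* ... free the block ... */
}
```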
Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
When the CONFIG_BOOT_BANNER flag is set to "n" but CONFIG_BOOT_DELAY
is enabled, a delay message is still printed at boot time.
This change allows the whole boot banner output to be disabled.
Signed-off-by: Krzysztof Sychla <ksychla@antmicro.com>
Abstract slab pointer validation and apply it to block dequeue during
allocation, in addition to the existing block freeing. This should help
catch some buffer-overflow-induced corruptions.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
As it is, blocks are allocated going backward within the buffer.
There is nothing fundamentally wrong with that, but it makes debugging
unnatural with the successively descending addresses. Create the free
list so pointers are oriented forward, at least initially.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Platforms that support IPIs allow them to be broadcast via the
new arch_sched_broadcast_ipi() routine (replacing arch_sched_ipi()).
Those that also allow IPIs to be directed to specific CPUs may
use arch_sched_directed_ipi() to do so.
As the kernel has the capability to track which CPUs may need an IPI
(see CONFIG_IPI_OPTIMIZE), this commit updates the signalling of
tracked IPIs to use the directed version if supported; otherwise
they continue to use the broadcast version.
Platforms that allow directed IPIs may see a significant reduction
in the number of IPI related ISRs when CONFIG_IPI_OPTIMIZE is
enabled and the number of CPUs increases. These platforms can be
identified by the Kconfig option CONFIG_ARCH_HAS_DIRECTED_IPIS.
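A hedged sketch of the dispatch choice (the parameter type of the directed
call is an assumption):
```
#include <zephyr/kernel.h>

static void signal_pending_ipis(uint32_t cpu_bitmap)
{
#if defined(CONFIG_ARCH_HAS_DIRECTED_IPIS)
	/* interrupt only the CPUs flagged by the IPI tracking */
	arch_sched_directed_ipi(cpu_bitmap);
#else
	/* no directed support: fall back to interrupting every other CPU */
	ARG_UNUSED(cpu_bitmap);
	arch_sched_broadcast_ipi();
#endif
}
```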
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
The CONFIG_IPI_OPTIMIZE configuration option allows for the flagging
and subsequent signaling of IPIs to be optimized.
It does this by making each bit in the kernel's pending_ipi field
a flag that indicates whether the corresponding CPU might need an IPI
to trigger the scheduling of a new thread on that CPU.
When a new thread is made ready, we compare that thread against each
of the threads currently executing on the other CPUs. If there is a
chance that that thread should preempt the thread on the other CPU
then we flag that an IPI is needed for that CPU. That is, a clear bit
indicates that the CPU absolutely will not need to reschedule, while a
set bit indicates that the target CPU must make that determination for
itself.
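A hedged sketch of the idea only; the names below (pending_ipi mask, the
priority comparison helper) are illustrative, not the kernel's actual fields:
```
#include <zephyr/kernel.h>
#include <zephyr/sys/atomic.h>
#include <zephyr/sys/util.h>

static atomic_t pending_ipi;

static void flag_needed_ipis(int new_prio, const int running_prio[],
			     unsigned int num_cpus)
{
	for (unsigned int cpu = 0; cpu < num_cpus; cpu++) {
		/* lower number == higher priority; a set bit only means
		 * "this CPU *might* need to reschedule"
		 */
		if (new_prio < running_prio[cpu]) {
			atomic_or(&pending_ipi, BIT(cpu));
		}
	}
}
```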
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
1. The flagging of IPIs is moved out of k_thread_priority_set() into
z_thread_prio_set(). This allows for an IPI to be done for a thread
that had its priority bumped due to the handling of priority
inheritance from a mutex.
2. k_thread_priority_set()'s check for sched_locked only applies to
non-SMP builds that are using the old arch_swap() framework to switch
between threads.
Incidentally, nearly all calls to flag_ipi() are now performed with
sched_spinlock being locked. The only exception is in slice_timeout().
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Updates the CONFIG_PIPES Kconfig description to add a note that
enabling it will cause a slight increase in the size of the thread
structure. This mirrors a similar comment in CONFIG_EVENTS.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Created `GEN_OFFSET_STRUCT` & `GEN_NAMED_OFFSET_STRUCT`, which
work for `struct`, and removed the use of `z_arch_esf_t`
completely.
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Make `struct arch_esf` compulsory for all architectures by
declaring it in the `arch_interface.h` header.
After this commit, the named struct `z_arch_esf_t` is only used
internally to generate offsets, and is slated to be removed
from the `arch_interface.h` header in the future.
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
This duplicates the functionality of device_is_ready.
Calls to z_device_is_ready are made in kernel mode, so it is
safe to call its implementation directly.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>