Commit graph

5737 commits

Author SHA1 Message Date
Mathieu Choplain
041714cb37 arch: arm: cortex_m: pm_s2ram: use macros to access struct __cpu_context
Use macros to wrap the interaction between the assembly code and the
struct __cpu_context. This helps make the assembly more readable.

Signed-off-by: Mathieu Choplain <mathieu.choplain@st.com>
2024-11-16 14:00:44 -05:00
Mathieu Choplain
7dd7dffe33 arch: arm: cortex_m: pm_s2ram: ignore xPSR
Remove all xPSR-related registers from struct __cpu_context, and the
associated save/restore code in S2RAM code, as they are not needed:

* EPSR and IPSR are read-only - they cannot be "restored"
* Bits N, Z, C, V, Q, and GE (if the DSP Extension is implemented) of APSR
  could be restored, but this is not needed as the AAPCS indicates these
  bits to be "undefined on entry to or return from a public interface"

Signed-off-by: Mathieu Choplain <mathieu.choplain@st.com>
2024-11-16 14:00:44 -05:00
Yong Cong Sin
9c3482b1d5 arch: riscv: smp: allow other IPI implementation
The current IPI implementation assumes that a CLINT exists in the
system; however, that might not be the case, as IPI can also be
implemented with a PLIC that supports software triggering, such as the
Andes NCEPLIC100.

Refactor the CLINT-based IPI implementations into `ipi_clint.c`, and
create a Kconfig option that selects the CLINT implementation when
`sifive-clint0` exists and is enabled, otherwise defaulting to
`RISCV_SMP_IPI_CUSTOM`, which allows an out-of-tree implementation. This
also makes way for upstreaming non-CLINT IPI implementations later.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-11-16 13:34:10 -05:00
Mark Holden
94c1079560 arch: arm: Don't use STKALIGN mask on ARMv8-M Baseline
The STKALIGN mask is not present for CONFIG_ARMV8_M_BASELINE or
CONFIG_ARMV8_M_MAINLINE, so filter out that check when setting the sp
for ARM core dumps.

Signed-off-by: Mark Holden <mholden@meta.com>
2024-11-14 17:27:17 -06:00
Yong Cong Sin
e30db2d53f arch: riscv: reset global pointer on exception
Reset the gp on exception entry from u-mode to protect the kernel
against a possible rogue user thread.

Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-11-13 19:08:54 -08:00
Anas Nashif
8a824f307d mips: tracing: add switched_out trace point
Add the missing switched_out trace point.

Partially fixes #76057

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-11-06 14:43:27 -06:00
Daniel DeGrasse
6023d6a142 arch: common: fix copy for ramfunc region during XIP init
The ramfunc region is copied into RAM from the FLASH region during XIP
init. We copy from the load address of the region, and were previously
copying to the symbol __ramfunc_start. This is incorrect when using an
MPU with alignment requirements, as the __ramfunc_start symbol may have
padding placed before it in the region. The __ramfunc_start symbol still
needs to be aligned in order to be used by the MPU though, so define a
new symbol __ramfunc_region_start, and use that symbol when copying the
__ramfunc region from FLASH to RAM.
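
A rough sketch of the intended copy, assuming the usual linker-provided
symbol pattern (only __ramfunc_region_start comes from this commit; the
rest is illustrative):

  #include <string.h>

  extern char __ramfunc_region_start[]; /* destination: (possibly padded) region start */
  extern char __ramfunc_end[];
  extern char __ramfunc_load_start[];   /* load address of the region in FLASH */

  static void copy_ramfunc_to_ram(void)
  {
          /* Copy to the region start rather than to __ramfunc_start, so any
           * MPU alignment padding placed before __ramfunc_start is covered.
           */
          (void)memcpy(__ramfunc_region_start, __ramfunc_load_start,
                       (size_t)(__ramfunc_end - __ramfunc_region_start));
  }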

Fixes #75296

Signed-off-by: Daniel DeGrasse <daniel.degrasse@nxp.com>
2024-11-06 10:19:08 -08:00
Ha Duong Quang
6984237c06 arch: arm: core: cortex_a_r: enable the VFP unit on boot for FPU_SHARING
The FPU is already disabled by the z_arm_svc function when the first
thread starts. Therefore, disabling the FPU at boot is unnecessary for
lazy FPU; instead, it must be enabled to handle floating-point instructions
before the lazy FPU works.

Signed-off-by: Ha Duong Quang <ha.duongquang@nxp.com>
2024-11-06 10:18:59 -08:00
Mark Holden
8bd4f244b0 coredump: ARM: Ensure sp in dump is set as gdb expects
Gdb is typically able to reconstruct the first two frames of the
failing stack using the "pc" and "lr" registers. After that, (if
the frame pointer is omitted) it appears to need the stack pointer
(sp register) to point to the top of the stack before a fatal
error occurred.

The ARM Cortex-M processors push registers r0-r3, r12, LR,
{possibly FPU registers}, PC, and xPSR onto the stack before entering
the exception handler. We adjust the stack pointer back to the point
before these registers were pushed for preservation in the dump.

During k_oops/k_panic, the sp wasn't stored in the core dump at all.
Apply similar logic to store it when failures occur in that path.

Signed-off-by: Mark Holden <mholden@meta.com>
2024-11-06 10:17:59 -08:00
Anas Nashif
d9bc0b60b9 arm: cortex_m: restore fix for loading z_arm_int_exit
This change was in the same commit previously reverted, but it seems to
be unrelated and should not have been reverted.

Fixes the problem:

..... /swap_helper.S:432:(.text.z_arm_svc+0x26):
relocation truncated to fit: R_ARM_THM_JUMP11 against symbol
`z_arm_int_exit' defined in .text._HandlerModeExit section in
....core/cortex_m/libarch__arm__core__cortex_m.a(exc_exit.c.obj)

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-11-01 16:13:58 -07:00
Anas Nashif
e646b7f3bb Revert "arch: arm: cortex_m: move part of swap_helper to C"
This reverts commit 773739a52a.

Fixes #80701

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-11-01 13:54:44 -05:00
Anas Nashif
7aa4032ac6 Revert "arch: arm: cortex_m: restore comment lost in translation"
This reverts commit 7d7616214b.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-11-01 13:54:44 -05:00
Fabio Baltieri
63890e2526 arch: arm: cortex_m: add memory to the clobber list
Add "memory" to the clobber list"

Starting with GCC 14, the compiler optimizes away memory accesses that
do not impact the asm block. Adding "memory" to the clobber list lets
the compiler know that the memory state is to be preserved.
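
A minimal illustration of the effect (made-up names, not the actual
Zephyr asm block):

  #include <stdint.h>

  static uint32_t scratch[4];

  static inline void hand_off_scratch(void)
  {
          scratch[0] = 0xA5A5A5A5U;
          /* The asm only receives a pointer; without "memory" in the clobber
           * list, a newer GCC may treat the store above as dead and drop it.
           * Listing "memory" forces the store to be completed first.
           */
          __asm__ volatile ("" : : "r"(scratch) : "memory");
  }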

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
2024-10-31 14:16:48 -05:00
Fabio Baltieri
7015a0ee37 arch: arm: cortex_m: move _main in input list
Move the _main argument to the input list rather than the output one of
the asm block and change the constraint to "r". The asm block does not
return, so it does not make sense for it to expect any output.
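
A hedged sketch of the resulting operand placement (register use and
names are illustrative, not the real swap_helper code):

  extern void entry_stub(void *arg); /* hypothetical entry point */

  static inline void jump_to_entry(void *_main)
  {
          /* _main is consumed by the asm block, never produced by it, so it
           * belongs in the input operand list with an "r" constraint.
           */
          __asm__ volatile ("mov r0, %0\n\t"
                            "bx  %1\n\t"   /* control never comes back */
                            :               /* no outputs */
                            : "r"(_main), "r"(entry_stub)
                            : "r0", "memory");
  }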

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
2024-10-31 14:16:48 -05:00
Daniel Leung
cdb9166b81 cmake: toolchain/xcc,xt-clang: env vars for multiple cores
To use the Xtensa toolchain, various environment variables must be
set so the executables can find necessary files and know which core
to compile for. This becomes an annoyance when you have to
test multiple boards with different cores. You have to use
one set of environment variables per core. Twister cannot test
them all in one setting, and it is especially annoying when doing
west builds. So enhance the environment variable handling so
that TOOLCHAIN_VER and XTENSA_CORE can be replaced by
TOOLCHAIN_VER_<board> and XTENSA_CORE_<board>, where <board>
is the normalized board target (think <board>.yaml). CMake
will then figure out the core ID for the toolchain to use.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-10-31 09:26:00 -05:00
Stephanos Ioannidis
cd9ddc95a8 arch: arm: cortex_a_r: Fix mrc/mcr instruction usage
The coprocessor number in ARM `mrc` and `mcr` instructions must be prefixed
with `p`.

The GNU assembler allows specifying the coprocessor number without the `p`
prefix, but the LLVM assembler is more picky about this and prints an
"invalid instruction" error otherwise.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2024-10-29 16:02:16 -07:00
Simon Tomschik
1647ad5c0a timing: fix ARM Cortex-M timing functions wrap-around issue
Added casts to uint32_t in arch_timing_cycles_get() to handle the
wrap-around of the 32-bit cycle counter correctly.
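
A hedged sketch of the wrap-safe arithmetic (the real
arch_timing_cycles_get() signature differs):

  #include <stdint.h>

  /* Subtracting in uint32_t keeps the delta correct even when the 32-bit
   * cycle counter wrapped between the two samples.
   */
  static inline uint64_t cycles_elapsed(uint64_t start, uint64_t end)
  {
          return (uint64_t)((uint32_t)end - (uint32_t)start);
  }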

Signed-off-by: Simon Tomschik <simon.tomschik@askgroup.global>
2024-10-29 07:09:34 -05:00
Tahsin Mutlugun
94f307ab75 arch: xtensa: rename processor state save and restore calls
_xtos_pso_savearea and _xtos_core_restore_nw have been renamed to
xthal_pso_savearea and xthal_core_restore_nw in recent Xtensa
toolchains. Rename them in reset_vector.S to prevent linker errors
when building Zephyr with Xtensa toolchain.

Signed-off-by: Tahsin Mutlugun <Tahsin.Mutlugun@analog.com>
2024-10-29 07:07:42 -05:00
Sudan Landge
f2e115cca3 arch: arm: fix null pointer dereference check test
What is changed?
Updated the condition that prevents mpu config for null dereference.
Added a new check so that mpu is configured for null dereference if
devicetree contains a memory-region node with:
 - node address starting at 0
 - size covered by the node is more than the null dereference page
   size (0x400) and
 - contains a memory-attr

Why is the change needed?
The check relied on the flash base address being 0 for
configuring the mpu for null dereference but, a device tree
could have flash starting at an address other than 0 and
still need the mpu config for null dereference.
The new extra check provides a way to configure the mpu for
null dereference even if the flash base address is not 0.

Note that, though this change helps with mpu config for new boards
having a flash address other than 0, it does not change existing
behaviour for existing boards.

Signed-off-by: Sudan Landge <sudan.landge@arm.com>
2024-10-26 03:58:05 +01:00
Adam Kondraciuk
474d4c3249 arch: arm: cortex_m: pm_s2ram: Rework S2RAM mark functions
The S2RAM procedure requires marker checking after reset.
Such checking is performed at a very early stage of system initialization
and must ensure that the stack is not used, because the TLS pointer is
not initialized yet.

Signed-off-by: Adam Kondraciuk <adam.kondraciuk@nordicsemi.no>
2024-10-25 13:58:37 +02:00
Evgeniy Paltsev
6d083cac7e Revert "arch: arc: replace ARC_EARLY_SOC_INIT with PLATFORM_RESET_HOOK"
The commit introduced a regression for the hsdk4xd platform.
The hsdk4xd SoC setup from soc_early_asm_init_percpu needs to be done
in early code before any C code execution.

The current approach has multiple issues:
 - we call a function (which can be easily implemented in C for
   this or other SoCs) from a place where we haven't set up the stack
   pointer (so we can't use the stack) - which is very error-prone
 - we never return from soc_reset_hook on the hsdk4xd platform

So let's just revert it for now. If any other ARC SoC needs to use
soc_reset_hook - then it can be implemented properly.

This reverts commit 8c32a82e47.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2024-10-22 18:28:37 -04:00
Peter Mitsis
1d07e565fc arch: arc: add support for lock/unlock VPX
Adds support for cooperative locking/unlocking the VPX vector registers.
Provided that all VPX enabled threads use these routines to control
access to the VPX vector registers, it will allow multiple threads to
safely use them without the need for saving/restoring them upon each
context switch.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2024-10-17 15:49:49 -04:00
Guennadi Liakhovetski
18419b618f llext: xtensa: fix relocations with multiple SLOT0_OP
Support for R_XTENSA_SLOT0_OP was implemented with a relocation table
test case, containing only one entry at offset 0. For multiple
entries .r_addend has to be taken into account.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2024-10-17 10:49:50 -04:00
Sudan Landge
8bdd45be47 arch: arm: cortex_a_r: smp: minor fix for non cache coherent cores
What is changed?
1. Updated the data sync barrier to make sure the other parameters of
   `arm_cpu_boot_params` are updated before updating its member `mpidr`
2. Updated the MPIDR affinity level mask to account for affinity level
   1 and 2 along with level 0.

Why do we need this change?
1. As reported in issue #76182, on Cortex_A_R, the current code
   execution fails to consider the correct sequence of data sync
   barrier and cache maintenance for the code to work on non cache
   coherent cores in SMP enabled mode.
   The secondary cores are waiting in a loop for the primary core to set
   `arm_cpu_boot_params.mpidr`. As soon as the primary core sets this,
   the secondary cores start reading other parameters from
   `arm_cpu_boot_params`; however, the existing position of the DSB
   instruction doesn't guarantee that `arg`, `cpu_num` and other
   parameters of `arm_cpu_boot_params` would be updated before `mpidr`
   is updated, and this could lead to unpredictable behaviour, so
   we need to move the DSB instruction.
2. The affinity level mask is updated because it didn't account for
   level 1 to identify individual cores within a cluster and
   level 2 to identify different clusters within the system, which can
   lead to an incorrect conversion from mpidr to core-id.

Signed-off-by: Sudan Landge <sudan.landge@arm.com>
2024-10-11 13:17:25 -04:00
Sudan Landge
ac2de3fa7c arch: arm: cortex_a_r: Set VBAR for all cores
What is changed?
Secondary cores can now boot successfully on cache and non-cache
coherent systems if the Zephyr image/vector table is loaded at an
address other than the default address 0x0.

How is it changed?
1. By calling the relocate_vector() from reset.S as part of EL1 reset
   initialization instead of prep_c to have VBAR set for all cores and
   not just for the primary core.
2. Remove dead code under CONFIG_SW_VECTOR_RELAY and
   CONFIG_SW_VECTOR_RELAY_CLIENT.

Why do we need this change?
1. As reported in issue #76182, on Cortex_ar, VBAR is set only for
   the primary core, while VBAR for the secondary cores is left at the
   default value 0.
   This results in Zephyr not booting on secondary cores if the vector
   table for secondary cores is loaded at an address other than 0x0.
   VBAR is set in relocate_vector(), so we move it to reboot.c, which is
   better suited to hold configs related to the system control block.
2. The two SW_VECTOR_RELAY configs have a direct dependency on
   CONFIG_CPU_CORTEX_M, which is disabled while compiling for
   Cortex-A and Cortex-R, hence leading to dead code.

How is the change verified?
Verified with fvp_baser_aemv8r/fvp_aemv8r_aarch32/smp.

Signed-off-by: Sudan Landge <sudan.landge@arm.com>
2024-10-11 13:17:25 -04:00
Alberto Escolar Piedras
b29f9f21bc Revert "coredump: ARM: Ensure sp in dump is set as gdb expects"
This reverts commit ec7943bb18.
This commit introduced a regression.
Let's revert it so we do not block development in main.
For more information see:
https://github.com/zephyrproject-rtos/zephyr/issues/79594

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-10-09 13:42:19 +02:00
Mark Holden
ec7943bb18 coredump: ARM: Ensure sp in dump is set as gdb expects
Gdb is typically able to reconstruct the first two frames of the
failing stack using the "pc" and "lr" registers. After that, (if
the frame pointer is omitted) it appears to need the stack pointer
(sp register) to point to the top of the stack before a fatal
error occurred.

The ARM Cortex-M processors push registers r0-r3, r12, LR,
{possibly FPU registers}, PC, and xPSR onto the stack before entering
the exception handler. We adjust the stack pointer back to the point
before these registers were pushed for preservation in the dump.

During k_oops/k_panic, the sp wasn't stored in the core dump at all.
Apply similar logic to store it when failures occur in that path.

Signed-off-by: Mark Holden <mholden@meta.com>
2024-10-09 09:41:00 +02:00
Daniel Leung
4f4cc4de08 xtensa: fix typo userpsace to userspace
s/userpsace/userspace/

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-10-08 18:10:03 -04:00
Declan Snyder
8221a9a704 Revert "arch: common: Add user can specify the nocache location"
This reverts commit 88f6851a3d.

This is being reverted because it is redundant with the capabilities of
zephyr,memory-region and zephyr,memory-attr properties.

Signed-off-by: Declan Snyder <declan.snyder@nxp.com>
2024-10-04 22:52:31 +01:00
Yong Cong Sin
453dfed3ad arch: riscv: smp: prevent nested #ifdef
Since `CONFIG_PLIC_IRQ_AFFINITY` depends on `CONFIG_SMP` in the
Kconfig layer, the nested #ifdef can be avoided for readability.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2024-10-04 10:50:14 +01:00
Eric Ackermann
5275d44409 llext: Add RISC-V arch-specific relocations
This commit introduces architecture-specific ELF relocations for RISC-V,
in accordance with the RISC-V PSABI specification:
https://github.com/riscv-non-isa/riscv-elf-psabi-doc/blob/master/riscv-elf.adoc
Also, the necessary compiler configurations for compiling LLEXT
extensions on RISC-V are added, and the llext tests are executed on
RISC-V targets.
Calling llext extensions from user threads in RISC-V is still
unsupported as of this commit.

Signed-off-by: Eric Ackermann <eric.ackermann@cispa.de>
2024-10-03 21:59:42 +01:00
Yong Cong Sin
52a202309b zephyr: bulk update to DT_NODE_HAS_STATUS_OKAY
Change instances of:

DT_NODE_HAS_STATUS(<node_id>, okay)

to

DT_NODE_HAS_STATUS_OKAY(<node_id>)

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2024-10-03 17:06:52 +01:00
Daniel Leung
8638e411e7 xtensa: fix insecure userspace warning wording
"in behave of" -> "on behalf of", which is more accurate of
what is actually happening in the code.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-10-03 11:41:40 +01:00
Yong Cong Sin
c710f8892b drivers: intc: plic: implement irq affinity configuration
- Implement irq-set-affinity in the RISCV PLIC.
- Add a new affinity shell command to get/set the affinity of
  IRQ(s) at runtime; when `0` is sent as the `local_irq`, it
  means set/get the affinity of all IRQs.
- Some minor optimizations

Updated the build_all test to build this new configuration.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2024-10-02 13:48:05 -05:00
Yong Cong Sin
4bbc2a7893 arch: riscv: isr.S: restore s0 before jumping to z_riscv_fatal_error_csf
Restore the s0 we saved early in ISR entry so it shows up
properly in the CSF.

Signed-off-by: David Reiss <dreiss@meta.com>
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-10-02 09:48:02 +02:00
Krzysztof Sychla
5d76b563ea arch: Add Cortex-R8 support
Enable Cortex-R8 support, similar to Cortex-R5.

Signed-off-by: Krzysztof Sychla <ksychla@antmicro.com>
Signed-off-by: Marek Slowinski <mslowinski@antmicro.com>
Signed-off-by: Piotr Zierhoffer <pzierhoffer@antmicro.com>
Signed-off-by: Mateusz Hołenko <mholenko@antmicro.com>
2024-10-01 09:58:22 +02:00
Guennadi Liakhovetski
d676d469bd LLEXT: Xtensa: add support for L32R relocation
When building LLEXT for Xtensa with custom sections, the compiler
can leave unresolved references in them. These then have to be resolved
by the LLEXT core during linking. This commit adds linking support for
the L32R Xtensa instruction.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2024-09-29 21:21:24 +02:00
Jakub Michalski
e75316b11a arch: x86_64: re-enable -mno-red-zone
The red zone was causing issues in places where inline assembly pushes
data onto the stack, like arch_irq_lock, and possibly in other places.
Initially -mno-red-zone was enabled on x86_64, but it was later, maybe
accidentally, disabled in
https://github.com/zephyrproject-rtos/zephyr/commit/b7eb04b3007767be9febe677924918d2422a4406

Signed-off-by: Jakub Michalski <jmichalski@internships.antmicro.com>
Signed-off-by: Filip Kokosinski <fkokosinski@antmicro.com>
2024-09-24 10:11:15 +02:00
Nicolas Pitre
66853f4307 x86: add support for on-demand mappings
This makes x86 compatible with K_MEM_MAP_UNPAGED.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-09-23 18:10:38 -04:00
Daniel Flodin
746c59c82a arch: kernel: lib: toolchain: Standardize TLS keyword
Up until now, the `__thread` keyword has been used for declaring
variables as thread-local storage. However, `__thread` is a GNU-specific
keyword, which thus limits compatibility with other
toolchains (for instance IAR).

This PR introduces a new macro `Z_THREAD_LOCAL` which expands to the
corresponding C11, C23 or C++11 standard keyword based on the standard
that is specified during compilation, else it uses the old `__thread`
keyword.
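
A hedged sketch of the dispatch (not the exact Zephyr toolchain header):

  #if defined(__cplusplus) && (__cplusplus >= 201103L)
  #define Z_THREAD_LOCAL thread_local  /* C++11 keyword */
  #elif defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 202311L)
  #define Z_THREAD_LOCAL thread_local  /* C23 keyword */
  #elif defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 201112L)
  #define Z_THREAD_LOCAL _Thread_local /* C11 keyword */
  #else
  #define Z_THREAD_LOCAL __thread      /* GNU extension fallback */
  #endif

  /* Usage: one instance of this counter exists per thread. */
  static Z_THREAD_LOCAL int per_thread_counter;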

Signed-off-by: Daniel Flodin <daniel.flodin@iar.com>
2024-09-23 10:01:48 +02:00
Daniel Leung
2c551554e2 riscv: support dumping privilege stack during coredump
Adds some bits to enable dumping privilege stack during
coredump.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-09-21 11:29:39 +02:00
Daniel Leung
efb2a354a0 xtensa: coredump: support dumping privilege stack
Adds the bits to support dumping privilege stack during
coredump.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-09-21 11:29:39 +02:00
Daniel Leung
a3f4251ed5 x86: coredump: support dumping privilege stack
Adds the bits to support dumping privilege stack during
coredump.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-09-21 11:29:39 +02:00
Daniel Leung
4f52860fe0 debug: coredump: dump privileged stack
This adds the bits to call into architecture code to dump
the privileged stack for user threads.

The weak implementation is simply there as a stub until
all architectures have implemented the associated function.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-09-21 11:29:39 +02:00
Pisit Sawangvonganan
9955b0bd6f style: arch: remove unnecessary return statements
For code clarity, remove unnecessary `return` statements
in functions with a void return type; they don't affect control flow.

Signed-off-by: Pisit Sawangvonganan <pisit@ndrsolution.com>
2024-09-20 11:06:55 +02:00
Yong Cong Sin
b55f3c1c4f kernel: remove CONFIG_MP_NUM_CPUS
`CONFIG_MP_NUM_CPUS` has been deprecated for more than 2
releases; it's time to remove it.

Updated all usages of `CONFIG_MP_NUM_CPUS` to
`CONFIG_MP_MAX_NUM_CPUS`.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2024-09-19 18:28:37 +01:00
Tavish Naruka
b1af1928f8 cmake: set big-endian flags to TOOLCHAIN_*_FLAGS
Set -big-endian to both compiler and linker flags if
CONFIG_BIG_ENDIAN is set.

Signed-off-by: Tavish Naruka <t-naruka@ispace-inc.com>
2024-09-19 03:30:14 -04:00
Anas Nashif
7c90f1bca1 arch: arm64: init xen in arch_kernel_init()
Call xen_enlighten_init() from arch_kernel_init() instead of using
SYS_INIT.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-09-17 20:05:22 -04:00
Anas Nashif
328f52b0ce x86: init spec_ctrl in prep_c, do not use SYS_INIT
Do not use SYS_INIT, call init script directly.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-09-17 20:05:22 -04:00
Anas Nashif
7e225efab7 arch: initialize irq_offload during boot, do not use SYS_INIT
Do not use SYS_INIT for initializing irq_offload when enabled; instead,
use a new interface that is called during the boot process for all
architectures.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-09-17 20:05:22 -04:00
Anas Nashif
c9f7b512da arm: init null pointer detection in prep_c, do not use SYS_INIT
Do not use SYS_INIT for initializing null pointer detection, call
directly in prep_c.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-09-17 20:05:22 -04:00
Anas Nashif
3e616cf4d3 arc: start secureshield in prep_c, do not use SYS_INIT
Initialize secureshield in prep_c instead of using SYS_INIT.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-09-17 20:05:22 -04:00
Anas Nashif
2294da24d7 arc: start mpu in prep_c, do not use SYS_INIT
Initialize MPU in prep_c similar to how all other architectures do it,
avoid use of SYS_INIT.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-09-17 20:05:22 -04:00
Anas Nashif
6ec74d02b6 cache: add new interface arch_cache_init() for initializing cache
Add a new call for initializing the cache on architectures that need it.
Avoid using SYS_INIT for this and instead call the hook in a fixed place
and run it if implemented.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-09-17 20:05:22 -04:00
Christopher J. Champagne
63051bf71a xtensa: add kconfig to allow non-preemptible interrupts
This adds a kconfig to enable making the interrupts
non-preemptible by other interrupts. Enabling this will set
the INTLEVEL to the max non-debug level before clearing
the EXCM bit.

Signed-off-by: Christopher J. Champagne <christopher.j.champagne@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-09-17 14:54:19 -04:00
Weiwei Guo
88f6851a3d arch: common: Add user can specify the nocache location
nocache ram is usually used by DMA to transfer data between
peripherals and ram. Some chips use isolated nocache ram,
which does not necessarily have to be in RAMABLE-REGION.
By specifying the zephyr,nocache-ram option, users can specify
the region where nocache-ram is located. If the user does not
specify it, it defaults to RAMABLE-REGION.

Signed-off-by: Weiwei Guo <guoweiwei@syriusrobotics.com>
2024-09-17 09:45:35 +02:00
Gerard Marull-Paretas
a056b608f2 arch: arm: do not enable PLATFORM_SPECIFIC_INIT if SOC_RESET_HOOK=y
Otherwise we can't escape from DEPRECATED being selected, and so we get
build warnings. It doesn't make sense that the option replacing the
deprecated one is used to automatically enable it.

Signed-off-by: Gerard Marull-Paretas <gerard@teslabs.com>
2024-09-16 15:12:18 -04:00
Daniel Leung
da5f7e1816 xtensa: mpu: update hardware if manipulating current domain
If adding/removing to the domain of the currently running
thread, we need to update the hardware MPU regions, or else
the addition or removal would not be reflected in the currently
running thread.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-09-16 09:55:53 +02:00
Daniel Leung
6bd0dcf920 xtensa: mpu: make sure write to MPU regions is atomic
This adds a spinlock to make sure writing to hardware MPU
regions is atomic, and cannot be interrupted until all
regions are written to hardware.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-09-16 09:55:53 +02:00
Yong Cong Sin
035c822253 arch: riscv: fill all IRQ stacks with 0xAA
Fill the memory of all CPU's IRQ stack with 0xAA on init, so
that `z_stack_space_get` can calculate the remaining space
correctly.
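
Roughly, the init-time painting amounts to the following (a sketch with
made-up sizes and a stand-in stack array, not the actual kernel symbols):

  #include <string.h>

  #define NUM_CPUS       4    /* illustrative */
  #define IRQ_STACK_SIZE 2048 /* illustrative */

  static unsigned char irq_stacks[NUM_CPUS][IRQ_STACK_SIZE];

  /* Paint every CPU's IRQ stack so unused space can later be measured by
   * scanning for bytes that are still 0xAA.
   */
  static void paint_irq_stacks(void)
  {
          for (unsigned int cpu = 0U; cpu < NUM_CPUS; cpu++) {
                  (void)memset(irq_stacks[cpu], 0xAA, IRQ_STACK_SIZE);
          }
  }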

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2024-09-13 09:17:34 +02:00
Adam Wojasinski
eb51529ab3 llext: Introduce AARCH64 relocation support
Adds support for all relocation types produced by GCC
on the AARCH64 platform using partial linking (the -r flag) or
shared linking (the -fpic and -shared flags).

Signed-off-by: Adam Wojasinski <awojasinski@baylibre.com>
2024-09-12 14:48:55 +02:00
Adam Wojasinski
44782ad2c1 llext: Move arm-specific relocation names from generic LLEXT file
Moved symbol definitions to the place where they are used.

Signed-off-by: Adam Wojasinski <awojasinski@baylibre.com>
2024-09-12 14:48:55 +02:00
Jiafei Pan
002ed73ff4 arm64: add sys_arch_reboot() support
If PSCI is enabled, it will leverage the PSCI reset API to achieve system
reboot; otherwise a weak dummy reboot API is provided, and platforms
can override these APIs if a platform-specific implementation is provided.

Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
2024-09-12 10:03:52 +02:00
Nicolas Pitre
c99371e486 arm64: demand paging is supported
Test configs for UP and SMP are also included.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-09-11 20:18:51 -04:00
Nicolas Pitre
be279d878e arm64: demand_paging: add support for on-demand mappings
This makes ARM64 compatible with K_MEM_MAP_UNPAGED.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-09-11 20:18:51 -04:00
Nicolas Pitre
79428bc81d arm64: demand_paging: allow page fault processing with IRQs enabled
Convention is to call k_mem_page_fault() with IRQs enabled if they were
enabled when the fault occurred.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-09-11 20:18:51 -04:00
Nicolas Pitre
3e0feab245 arm64: demand_paging: add the k_mem_paging_eviction_accessed() call
Needed for the LRU eviction algorithm to be effective.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-09-11 20:18:51 -04:00
Nicolas Pitre
7bd6892750 arm64: mmu: access fault handler for demand paging
This has two purposes: maintain the accessed and dirty page states, or
call the generic demand paging fault handler otherwise.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-09-11 20:18:51 -04:00
Nicolas Pitre
322865773b arm64: mmu: architecture specific hooks for demand paging support
A page table entry used for demand paging is always from the last level
page table and never completely zeroed. Anything else is considered
ARCH_DATA_PAGE_NOT_MAPPED.

Loaded pages use a PTE_PAGE_DESC table entry type. Paged-out pages use a
PTE_INVALID_DESC table entry type and the physical address field is
reused to hold the backing store location token value.

ARCH_DATA_PAGE_ACCESSED corresponds to the AF flag. It is initially
unset and manually set to catch when pages are being accessed and
eventually do something about it.

ARCH_DATA_PAGE_DIRTY corresponds to the lack of the AP_RO flag.
Similarly to the AF flag, pages are initially made read-only to catch
writes.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-09-11 20:18:51 -04:00
Nicolas Pitre
4e0e52cbec arm64: mmu: be stricter about free page entries
Let's consider free entries as being completely zeroed. Future patches
for demand paging support will populate entries and still mark them as
"invalid", which should not be considered free. Current code always clears
entries to be freed, so no issues there.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-09-11 20:18:51 -04:00
Martin Åberg
884a4e5a35 arch: Fix assert logic for installing shared interrupt
With this commit, it is now allowed to register any ISR and arg
combination for the same IRQ, except the case when the exact same
ISR-arg combination is already registered.

The previous assert logic had a restriction where the same ISR could not
be registered multiple times with different arguments.

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2024-09-11 07:41:20 -04:00
Pisit Sawangvonganan
56b6ccb2f4 style: arch: comply with MISRA C:2012 Rule 15.6
Add missing braces to comply with MISRA C:2012 Rule 15.6, also
following Zephyr's style guideline.

Signed-off-by: Pisit Sawangvonganan <pisit@ndrsolution.com>
2024-09-11 07:40:35 -04:00
Yong Cong Sin
26d75ab796 arch: riscv: optionally stores a pointer to csf in struct arch_esf
The callee-saved registers can be helpful to debug the state of
a core upon an exception; however, currently there's no way to
access that information in a user-implemented
`k_sys_fatal_error_handler()`, even though the csf is already stored
on the stack.

This patch conditionally adds a `csf` member to the `arch_esf` when
`CONFIG_EXTRA_EXCEPTION_INFO=y`*, which the `_isr_wrapper` would update
when a fatal error occurs before invoking `z_riscv_fatal_error_csf()`.

Functions such as `k_sys_fatal_error_handler()` would then be able
to access the callee-saved registers at the time of exception via
`esf->csf`.

* For SoCs that select `RISCV_SOC_HAS_ISR_STACKING`, the
  `SOC_ISR_STACKING_ESF_DECLARE` has to include the `csf` member,
  otherwise the build would fail.
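
A hedged sketch of the idea (the real struct arch_esf layout is
architecture-defined; member names here are illustrative):

  struct arch_esf {
          unsigned long ra; /* ...plus the rest of the caller-saved state */
  #ifdef CONFIG_EXTRA_EXCEPTION_INFO
          /* Set by _isr_wrapper before z_riscv_fatal_error_csf() runs, so
           * k_sys_fatal_error_handler() can inspect esf->csf.
           */
          const struct _callee_saved *csf;
  #endif
  };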

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2024-09-10 11:43:40 +02:00
Daniel Leung
5ec60249ed x86: skip printing args when unwinding stack
On 32-bit x86, it was supposed to print the first argument to
the function during stack trace. However, it only works when
code optimizations are totally disabled (i.e. -O0). As such,
printing the args is not meaningful to aid with debugging.
So change it to simply print the function address, the same
as x86 64-bit.

Also, since unwind_stack() has exactly one caller, make it
ALWAYS_INLINE to skip a function call.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-09-09 18:41:04 -04:00
Yong Cong Sin
5365421fa1 arch: riscv: use string array instead of switch statement for cause
Get rid of the switch statement and use a string array
for the cause instead. This saves about 600 bytes.
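
A hedged sketch of the lookup-table approach (entries abbreviated; the
real table follows the full RISC-V mcause encoding):

  #include <stddef.h>

  static const char * const mcause_str[] = {
          [0]  = "Instruction address misaligned",
          [1]  = "Instruction access fault",
          [2]  = "Illegal instruction",
          [3]  = "Breakpoint",
          [5]  = "Load access fault",
          [7]  = "Store/AMO access fault",
          [11] = "Environment call from M-mode",
  };

  static const char *cause_to_str(unsigned long cause)
  {
          if (cause < (sizeof(mcause_str) / sizeof(mcause_str[0])) &&
              mcause_str[cause] != NULL) {
                  return mcause_str[cause];
          }
          return "unknown";
  }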

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2024-09-09 15:23:43 +03:00
Yong Cong Sin
7844f5aebb arch: riscv: fatal: make cause_str reusable
Rename `cause_str` to `z_riscv_mcause_str` and make it non-static,
so that it can be used in user-implemented `k_sys_fatal_error_handler`.

Also, this function should return a constant string, so add `const`
to it.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2024-09-09 15:23:43 +03:00
Yong Cong Sin
951af0d457 arch: riscv: fatal: always print mcause & mtval
Relocate the logging of mcause & mtval from `_Fault` to
`z_riscv_fatal_error_csf` so that they are always printed
upon exception.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2024-09-09 15:23:43 +03:00
Anas Nashif
8c32a82e47 arch: arc: replace ARC_EARLY_SOC_INIT with PLATFORM_RESET_HOOK
Use the generic hook infrastructure instead of custom Kconfig and hooks for
ARC.

Replace soc_early_asm_init_percpu() with platform_reset()

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-09-09 10:07:33 +02:00
Anas Nashif
81cf87001c arch: arm: select PLATFORM_RESET_HOOK if is PLATFORM_SPECIFIC_INIT set
Temporary until usage of PLATFORM_SPECIFIC_INIT is removed in modules.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-09-09 10:07:33 +02:00
Anas Nashif
f519dd1411 arch: arm: replace PLATFORM_SPECIFIC_INIT with PLATFORM_RESET_HOOK
Use the generic hook infrastructure instead of custom Kconfig and hooks for
ARM.

Replace z_arm_platform_init() with platform_reset().

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-09-09 10:07:33 +02:00
Anas Nashif
e260d03686 init: introduce soc and board hooks
Introduce soc and board hooks to replace arch specific code
and replace usages of SYS_INIT for platform initialization.

include/zephyr/platform/hooks.h introduces the hooks to be implemented
by boards and SoCs.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-09-09 10:07:33 +02:00
Yong Cong Sin
31ebb79c86 arch: multilevel_irq: fix interrupt bits check
The bits allocated for each aggregator level only need to be enough to
encode CONFIG_MAX_IRQ_PER_AGGREGATOR, instead of the combined number of
IRQs from all aggregators in that level.

Add additional check for L3 interrupts as well, if it is enabled.

Updated the assert in `z_get_sw_isr_table_idx()` to be more verbose.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2024-09-06 14:06:23 -05:00
Jakub Michalski
f568e2d3ca zefi: add bootargs support
Add bootargs support to zefi. This implements
get_bootargs() when both efi and bootargs are
selected in config.

Signed-off-by: Jakub Michalski <jmichalski@internships.antmicro.com>
Signed-off-by: Filip Kokosinski <fkokosinski@antmicro.com>
2024-09-05 12:30:39 -05:00
Jakub Michalski
3b58d066e4 zefi: include autoconf.h
Zefi was only including include/zephyr but not the generated include
directory that contains autoconf.h. This commit fixes that.

Signed-off-by: Jakub Michalski <jmichalski@internships.antmicro.com>
Signed-off-by: Filip Kokosinski <fkokosinski@antmicro.com>
2024-09-05 12:30:39 -05:00
Jakub Michalski
0cf726b8ef arch/x86: multiboot: add bootargs support
Add bootargs support for multiboot. This
implements get_bootargs() when multiboot and
bootargs are selected in config.

Signed-off-by: Jakub Michalski <jmichalski@internships.antmicro.com>
Signed-off-by: Filip Kokosinski <fkokosinski@antmicro.com>
2024-09-05 12:30:39 -05:00
Mykyta Poturai
745c9543c6 xen: Make XEN_INTERFACE_VERSION configurable
Add Kconfig option to specify Xen interface version to use. This will
make it easier to switch between different versions of Xen.

Signed-off-by: Mykyta Poturai <mykyta_poturai@epam.com>
2024-09-04 12:52:16 +02:00
Magdalena Pastula
0237d375de arch: riscv: add an option for empty spurious interrupt handler
Add the possibility to disable fault handling in the spurious
interrupt handler on RISC-V and replace it with an infinite loop.
Signed-off-by: Magdalena Pastula <magdalena.pastula@nordicsemi.no>
2024-09-02 12:35:57 -04:00
Yong Cong Sin
ed8dd0fa63 arch: riscv: stacktrace: undo the fp/sp alignment check in #76045
The change of the alignment check in #76045 could be wrong and
isn't necessary to fix the stack trace output, so undo it for
now.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2024-09-02 12:33:36 -04:00
Marcio Ribeiro
cb583995b8 arch: riscv: imply XIP config pushed to SoC level
'imply XIP' is pushed from arch/Kconfig's 'config RISCV' to the riscv SoC
Kconfig files to allow riscv SoCs to have XIP enabled (or not) at the SoC
level.

Signed-off-by: Marcio Ribeiro <marcio.ribeiro@espressif.com>
2024-08-31 06:47:52 -04:00
Daniel Leung
0962114f2b riscv: implements arch_thread_priv_stack_space_get
This implements arch_thread_priv_stack_space_get() so this can
be used to figure out how much privileged stack space is used.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-08-28 06:50:30 -04:00
Daniel Leung
bb313355f3 riscv: initialize privileged stack during thread init
This adds the bits to initialize the privileged stack when
a thread is transitioning to user mode. This prevents
information leaking if the stack is reused, and also aids
in calculating stack space usage during system calls.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-08-28 06:50:30 -04:00
Daniel Leung
55ee97c7d2 xtensa: implements arch_thread_priv_stack_space_get
This implements arch_thread_priv_stack_space_get() so this can
be used to figure out how much privileged stack space is used.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-08-28 06:50:30 -04:00
Daniel Leung
1dc02fcbda xtensa: initialize privileged stack during thread init
This adds the bits to initialize the privileged stack for
each thread during thread initialization. This prevents
information leaking if the thread stack is reused, and
also aids in calculating stack space usage during system
calls.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-08-28 06:50:30 -04:00
Daniel Leung
d736af8d26 x86: implements arch_thread_priv_stack_space_get
This implements arch_thread_priv_stack_space_get() so this can
be used to figure out how much privileged stack space is used.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-08-28 06:50:30 -04:00
Daniel Leung
fb0babacee x86: initialize privileged stack during thread init
This adds the bits to initialize the privileged stack for
each thread during thread initialization. This prevents
information leaking if the thread stack is reused, and
also aids in calculating stack space usage during system
calls.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-08-28 06:50:30 -04:00
Daniel Leung
c25fa96a68 x86: only set psp pointer for thread stacks
Only set the privileged stack pointer for thread stacks, but
nullify the pointer for kernel-only stacks, as these stacks
do not have the reserved space. The psp pointer may point to
arbitrary memory in this case if the stack is not big enough.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-08-28 06:50:30 -04:00
Daniel Leung
b4c455c754 arch: add interface to get stack space of privileged stack
This adds a new arch_thread_priv_stack_space_get() interface for
each architecture to report privileged stack space usage. Each
architecture will need to implement this function, as each arch
has its own way of defining privileged stacks.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-08-28 06:50:30 -04:00
Nicolas Pitre
6255bf1ef4 arch/arm64/mmu: allow partial unmap of block mappings again
Before commit baa70d8d36 ("arch/arm64/mmu: fix page table reference
counting part 2") it was possible to perform a partial unmap of a block
mapping. Restore that ability and provide a test case to validate it.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-08-27 15:14:43 -04:00
Nicolas Pitre
c9aa98ebc0 kernel: mmu: support for on-demand mappings
This provides memory mappings with the ability to be initialized in their
paged-out state and be paged in on demand. This is especially nice for
anonymous memory mappings as they no longer have to allocate all memory
at mem_map time. This also allows for file mappings to be implemented by
simply providing backing store location tokens.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-08-26 17:25:41 -04:00
Yong Cong Sin
74f46bd421 arch: riscv: stacktrace: print additional arg when fatal error
Print `sp`/`fp` in traces.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2024-08-26 14:44:53 -04:00