Commit graph

5737 commits

Yong Cong Sin
074931057c subsys/debug: remove CONFIG_EXCEPTION_STACK_TRACE_SYMTAB
Having `CONFIG_EXCEPTION_STACK_TRACE_SYMTAB` select `CONFIG_SYMTAB`,
or explicitly opt out of printing the symbol name during exception
stack unwinding, seems unnecessary: the extra code to print the
symbol name is negligible compared with the symbol table itself,
so just remove it.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2024-08-26 14:44:53 -04:00
Yong Cong Sin
ab676fdb86 arch: arm64: implement arch_stack_walk()
Currently it supports `esf` based unwinding only.

Then, update the exception stack unwinding to use
`arch_stack_walk()`, and update the Kconfigs & testcase
accordingly.

Also, `EXCEPTION_STACK_TRACE_MAX_FRAMES` is unused and
made redundant after this change, so remove it.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2024-08-26 14:44:53 -04:00
Yong Cong Sin
06a8c35316 arch: x86: implement arch_stack_walk()
Currently it supports `esf` based unwinding only.

Then, update the exception stack unwinding to use
`arch_stack_walk()`, and update the Kconfigs & testcase
accordingly.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2024-08-26 14:44:53 -04:00
Yong Cong Sin
6fae891c7e arch: reorg the dependencies around (exception) stack trace
This commit introduces a new ARCH_STACKWALK Kconfig which
determines whether `arch_stack_walk()` is available, provided the
arch supports it.

Starting with RISC-V, this makes it possible to converge the exception
stack trace implementation & stack walking features. The existing
exception stack trace implementations will be updated later.
Eventually we will end up with the following:

1. If an arch implements `arch_stack_walk()`,
   `ARCH_HAS_STACKWALK` should be selected.
2. If the above is enabled, `ARCH_SUPPORTS_STACKWALK` indicates
   if the dependencies are met for the arch to enable stack walking.
   This Kconfig replaces `<arch>_EXCEPTION_STACK_TRACE`.
3. If the above is enabled, then `ARCH_STACKWALK` determines
   if `arch_stack_walk()` should be compiled.
4. `EXCEPTION_STACK_TRACE` should build on top of
   `ARCH_STACKWALK`; stack traces will be printed when it
   is enabled.
5. `EXCEPTION_STACK_TRACE_MAX_FRAMES` will be removed as it is
   replaced by `ARCH_STACKWALK_MAX_FRAMES`.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2024-08-26 14:44:53 -04:00
Rubin Gerritsen
fb745f610f arch posix: Implement arch_thread_name_set()
This will update the posix thread names to match
the zephyr thread names.

This will simplify debugging as the debugger will
recognize the thread names.

Signed-off-by: Rubin Gerritsen <rubin.gerritsen@nordicsemi.no>
2024-08-23 08:01:33 -04:00
Anthony Giardina
8a4b3a4b61 xtensa: include xtensa-types.h for xt-clang
Newer Xtensa toolchains need to include xtensa-types.h so that
macros in tie.h can be used without compilation errors.
The exact version needing this is unknown, but it was first
encountered in RJ-2023.2. Tested with an older toolchain and that
did not cause any compilation errors, so just include xtensa-types.h
if xt-clang is used. Haven't seen newer toolchains being
generated with xcc, so skip that for now.

Note that Zephyr SDK and the public HAL in Zephyr do not provide
this file.

Signed-off-by: Anthony Giardina <anthony.giardina@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-08-23 09:54:56 +02:00
Hessel van der Molen
2590c48d40 arch: arm: cortex_m: pm_s2ram: save system_off before s2ram marking
The r0 register holds the system_off function pointer. As r0 is a scratch
register, the pointer needs to be moved to a preserved register before
branching to a (custom) marker function.

Furthermore, in accordance with rule 6.2.1.2 of AAPCS32, the stack pointer
needs to be aligned on 8 bytes. Hence r0 is pushed to the stack in addition
to the lr register, before calling the public interface for checking the
s2ram marker.

Signed-off-by: Hessel van der Molen <hvandermolen@dexels.com>
2024-08-22 14:21:07 -04:00
Yong Cong Sin
42362c6fcc subsys/profiling: relocate stack unwind backends
Relocate stack unwind backends from `arch/` to perf's
`backends/` folder, just like logging/shell/..

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2024-08-20 14:45:23 +02:00
Sudan Landge
b3fe647eaf arch: arm: cortex_a_r: Fix restore of registers while exiting exception
This commit fixes potential unpredictable behavior, caused by using
the ^ form of ldmia instruction, while exiting an exception in SMP
mode on Cortex-A/R.

Change:
Use "pop" instead of "ldmia" to restore user mode registers while
exiting from an exception via `z_arm_cortex_ar_exit_exc`.

Reason for change:
Processor mode is always set to system (MODE_SYS) before calling
`z_arm_cortex_ar_exit_exc` and hence, the user mode registers can be
accessed directly without the ^ form of the instruction. Also, the LDMIA
instruction is UNPREDICTABLE in SYStem mode.

Signed-off-by: Sudan Landge <sudan.landge@arm.com>
2024-08-15 12:10:42 -04:00
Sudan Landge
6d8deac010 arch: arm: cortex_a_r: Fix usage of stmdb while entering an exception
This commit fixes the unpredictable behavior, caused by using the
^ form of stmdb instruction, while entering an exception in SMP mode
on Cortex-A/R.

Change:
Use "push" instead of "stmdb" to store user mode registers on
stack while entering an exception in SYStem mode.

Reason for change:
As reported in discussion/#75339, the processor is already in SYS mode
after entering `z_arm_cortex_ar_enter_exc()` in an exception, and
using stmdb is UNPREDICTABLE in system mode. Also, the user mode
registers can be accessed directly without the ^ form of the
instruction. The solution suggested to fix this is to use
`stmdb sp!, {r0-r3, r12, lr}` which can save the user registers,
update the SP and avoid an extra instruction.
We use "push {}" instruction instead since it is the preferred
mnemonic over `stmdb`.

Signed-off-by: Sudan Landge <sudan.landge@arm.com>
2024-08-15 12:10:42 -04:00
Stan Skowronek
f74592dcde arch: arm: Use voting lock for multi-core boot race condition
Port of similar change in arm64 that eliminates exclusive load/store
instructions, which may not work when MMU/MPU/cache are disabled.

Based on: 7904c6f0f3

Signed-off-by: Stan Skowronek <stan@corellium.com>
2024-08-15 11:58:01 -04:00
Daniel Leung
b0185189ad xtensa: mmu: fix page table initialization
xtensa_mmu_init() is called really early in the boot process
where the _kernel struct has not yet been initialized, and
thus we cannot use it to determine if the current CPU is
the boot CPU. In some cases, this may skip the call to
initialize the page tables which leaves us with incorrect
page table entries. Fix it by using a static variable to
determine whether the page tables have been initialized so
we only do it once per boot.

Fixes #76909

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-08-14 09:36:19 +02:00
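
A minimal sketch of the once-per-boot guard described in the commit above; the helper name is a placeholder, not the actual Zephyr code:

```c
#include <stdbool.h>

static bool page_tables_initialized;

/* Placeholder for the real page table construction done by the MMU code. */
static void build_page_tables(void)
{
}

void xtensa_mmu_init(void)
{
        /* _kernel is not initialized yet at this point, so a static flag,
         * not a boot-CPU check, decides whether to build the tables.
         */
        if (!page_tables_initialized) {
                build_page_tables();
                page_tables_initialized = true;
        }

        /* per-CPU MMU setup (ASID, page table activation) would follow here */
}
```
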
Mikhail Kushnerov
140f9a57b7 arch: x86: ia32: Implement perf stack trace func
Implement a stack trace function for the x86_32 arch that gets the
required thread register values and unwinds the stack with them.

Signed-off-by: Mikhail Kushnerov <m.kushnerov@yadro.com>
2024-08-13 18:28:44 -04:00
Mikhail Kushnerov
1588390907 arch: x86: intel64: make perf stack trace func
Implement a stack trace function for the x86_64 arch that gets the
required thread register values and unwinds the stack with them.

Originally-by: Yonatan Goldschmidt <yon.goldschmidt@gmail.com>
Signed-off-by: Mikhail Kushnerov <m.kushnerov@yadro.com>
2024-08-13 18:28:44 -04:00
Mikhail Kushnerov
a74474c1f9 arch: riscv: Implement perf stack trace func
Implement a stack trace function for the riscv arch that gets the
required thread register values and unwinds the stack with them.

Signed-off-by: Mikhail Kushnerov <m.kushnerov@yadro.com>
2024-08-13 18:28:44 -04:00
Flavio Ceolin
f284091073 xtensa: core: Remove constant branch
is_dblexc is constant for targets without MMU. There is no need
to check it when building without MMU.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-08-13 18:18:53 -04:00
Alberto Escolar Piedras
bb9704a422 arch posix: Cleanup old code
When the native simulator use was introduced,
the POSIX architecture and SOC were split in 2 versions:

One was the old version, which remained used by native_posix and the
other NATIVE_APPLICATION based targets.
The new version was a shim on top of the native simulator threading
and CPU start/stop emulation.

This was done to ensure no regressions were introduced in the old
targets while the native simulator was tested and matured.

The old SOC code was removed a short while after, and all
NATIVE_APPLICATION targets moved to use the shim version on top of the
native simulator.

Now we also remove the old arch code, so native_posix and its
NATIVE_APPLICATION kin use the native simulator NCT component
instead.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-08-13 18:18:25 -04:00
Anas Nashif
f53d5d5712 arch: move custom arch call Kconfigs
Move from kernel/ to arch/ and have all those Kconfigs in one place.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-08-12 12:43:36 +02:00
Anas Nashif
a91c6e56c8 arch: use same syntax for custom arch calls
Use the same Kconfig syntax for those custom arch calls.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-08-12 12:43:36 +02:00
Anas Nashif
7f52fc4188 arch: custom cpu_idle and cpu_atomic harmonization
Custom arch_cpu_idle and arch_cpu_atomic_idle implementations were done
differently on different architectures: riscv implemented them as weak
symbols, xtensa used a Kconfig, and all other architectures did not
really care, even though this was a global Kconfig that should apply
to all architectures.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-08-12 12:43:36 +02:00
Dawid Niedzwiecki
f39d8bbe2c arm: clear UNALIGN_TRP bit in CCR register
Clear the UNALIGN_TRP bit in the CCR register, if the config
CONFIG_TRAP_UNALIGNED_ACCESS is not set.

Despite the fact that the reset value of UNALIGN_TRP is 0, always clear
the bit. It is useful in double image systems. The new image can't rely
on settings left by the previous image.

Signed-off-by: Dawid Niedzwiecki <dawidn@google.com>
2024-08-12 12:43:16 +02:00
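
A short sketch of the described behaviour using the standard CMSIS names for the Cortex-M Configuration and Control Register; treat it as an illustration rather than the actual patch, and the include path as an assumption:

```c
#include <cmsis_core.h>  /* assumed CMSIS include used by Zephyr's Cortex-M code */

static void configure_unaligned_trap(void)
{
#if defined(CONFIG_TRAP_UNALIGNED_ACCESS)
        SCB->CCR |= SCB_CCR_UNALIGN_TRP_Msk;
#else
        /* Clear unconditionally: even though the reset value is 0, a previous
         * image in a double-image system may have left the bit set.
         */
        SCB->CCR &= ~SCB_CCR_UNALIGN_TRP_Msk;
#endif
}
```
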
Yong Cong Sin
5107320075 arch: common: isr_tables: add shell command
Add a shell command to dump the isr_tables.

With CONFIG_SYMTAB=n:
```
uart:~$ isr_table sw_isr_table
_sw_isr_table[1035]

   7: 0x800056e2(0)
  11: 0x80005048(0x80008148)
  22: 0x800054ee(0x80008170)
```

With CONFIG_SYMTAB=y:
```
uart:~$ isr_table sw_isr_table
_sw_isr_table[1035]

   7: timer_isr(0)
  11: plic_irq_handler(0x80008188)
  22: uart_ns16550_isr(0x800081b0)
```

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2024-08-12 10:10:57 +02:00
Anas Nashif
d590c18672 intel_adsp: ace: call soc_num_cpus_init early
Restore the order of execution: code that used to run at the EARLY init
level now runs too late.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-08-07 13:50:53 +02:00
Anas Nashif
c79bbfadbb xtensa: move arch_kernel_init code into prep_c
arch_kernel_init() was misused for all architecture initialization code
that is done in prep_c and prior to cstart on other architectures.
arch_kernel_init() is late in the init process and comes after EARLY
init level, making xtensa have a very special boot path.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-08-07 13:50:53 +02:00
Anas Nashif
42396735bf xtensa: introduce prep_c for xtensa
xtensa is the only architecture doing things differently, which
introduces inconsistency in the init process and its dependencies as we
attempt to clean up init levels and remove misuse of SYS_INIT.

Introduce prep_c for this architecture and align with other
architectures.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-08-07 13:50:53 +02:00
Anas Nashif
299dddfdce xtensa: remove mention of crt0-app.S
crt0-app.S does not exist, remove it from comments to avoid confusion.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-08-07 13:50:53 +02:00
Yong Cong Sin
9698df9dc4 arch: riscv: stacktrace: fix output without ra on the stack top
Account for the scenario when we are doing `esf`-based
unwinding from a function which doesn't have any callee.
In this case the `ra` is not saved on the stack and the
second function from the top of the frame could be missing.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2024-08-06 17:17:17 -04:00
Mark Holden
e6b683d310 coredump: Guard new kconfig for only supported arch
Add new config, ARCH_SUPPORTS_COREDUMP_THREADS, and
only enable it for ARM CORTEX M where the gdb server
can support it.

Signed-off-by: Mark Holden <mholden@meta.com>
2024-08-02 03:32:09 -04:00
Pieter De Gendt
ad63ca284e kconfig: replace known integer constants with variables
Make the intent of the value clear and avoid invalid ranges with typos.

Signed-off-by: Pieter De Gendt <pieter.degendt@basalte.be>
2024-07-27 20:49:15 +03:00
Yong Cong Sin
7db18ab721 arch: riscv: stacktrace: fix user thread stack bound check
According to the riscv's `arch.h`:

 +------------+ <- thread.arch.priv_stack_start
 | Guard      | } Z_RISCV_STACK_GUARD_SIZE
 +------------+
 | Priv Stack | } CONFIG_PRIVILEGED_STACK_SIZE
 +------------+ <- thread.arch.priv_stack_start +
                   CONFIG_PRIVILEGED_STACK_SIZE +
                   Z_RISCV_STACK_GUARD_SIZE

The start of the privilege stack should be:

  `thread.arch.priv_stack_start + Z_RISCV_STACK_GUARD_SIZE`

Instead of

  `thread.arch.priv_stack_start - CONFIG_PRIVILEGED_STACK_SIZE`

For the `end`, use the same equation as `top_of_priv_stack` in
`arch_user_mode_enter()`.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-07-27 20:47:51 +03:00
Jimmy Zheng
421f81a243 arch: riscv: remove PMP stack guard for stack overflow handler
When RISCV_ALWAYS_SWITCH_THROUGH_ECALL is enabled, do_swap() enables PMP
checking in is_kernel_syscall.
If the PMP stack guard is triggered and do_swap() is called from the
fault handler, a PMP error occurs because the stack usage violates the
previous PMP setting.

Remove the stack guard setting during a stack overflow handler to allow
enabling PMP checking safely in fault handler.

Signed-off-by: Jimmy Zheng <jimmyzhe@andestech.com>
2024-07-27 15:12:25 +03:00
Jimmy Zheng
58bda887ce arch: riscv: update PMP setting to privileged mode for fault handler
When RISCV_ALWAYS_SWITCH_THROUGH_ECALL is enabled, do_swap() enables PMP
checking in is_kernel_syscall.
If a user thread violates memory protection and do_swap() is called from
the fault handler, a PMP error occurs because the thread is in privileged
mode but still using the old user mode PMP setting.

Update the PMP setting to privileged mode for fault handler.
This also enables the stack guard for user thread's privileged stack in
fault handler.

Signed-off-by: Jimmy Zheng <jimmyzhe@andestech.com>
2024-07-27 15:12:25 +03:00
Jimmy Zheng
ba263da94e arch: riscv: fix exception_depth with RISCV_ALWAYS_SWITCH_THROUGH_ECALL
When 'arch_switch()' switches though Ecall, 'exception_depth' is
incorrectly added to the next thread because the current thread is updated
before arch_switch().
Add 'exception_depth' back to the previous thread when Ecall is called from
'arch_switch()'.

Signed-off-by: Jimmy Zheng <jimmyzhe@andestech.com>
2024-07-12 05:55:26 -04:00
Nicolas Pitre
8d485d4a24 riscv: pmp: actually activate stack overflow protection during boot
Before this, stack protection would be effective only after switching to
the first thread.

Even before the first thread is created, the kernel init code uses the
IRQ stack to set things up. Let's make sure this is safeguarded as well.

This also fixes the incompatibility between CONFIG_RISCV_PMP and
CONFIG_RISCV_ALWAYS_SWITCH_THROUGH_ECALL, the latter needing an
exception call to switch to the first thread while the exception code
assumes the stack guard is already set up in the PMP.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-07-11 18:22:08 -04:00
Nicolas Pitre
0207c7ff73 riscv: pmp: abstract the MPRV catchall entry and its QEMU workaround
This will be needed at more than one location.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-07-11 18:22:08 -04:00
Wilfried Chauveau
1b820dfa85 arch: arm: cortex_m: add ip & lr to the clobber list
Calls to other functions may clobber ip & lr too, so these registers
need to be added to the clobber list.
r3 is not actually used in z_arm_switch_to_main_no_multithreading, so it
is also removed from the clobber list.

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
2024-07-10 11:58:13 -04:00
Sigmund Klåpbakken
0f776cf0bf arch: arm: cmake: Correct endian in output format
Sets the property `PROPERTY_OUTPUT_FORMAT` to `elf32-bigarm` when
`CONFIG_BIG_ENDIAN` is set to `y`.

Signed-off-by: Sigmund Klåpbakken <sigmundklaa@outlook.com>
2024-07-04 18:01:51 -04:00
Sigmund Klåpbakken
5100f1185d arch: arm: cortex_a_r: Set XPSR endianness bit
When this bit is not set, it defaults to 0 (little endian). This
causes issues for big-endian devices, as data will be accessed using
little endian.

Signed-off-by: Sigmund Klåpbakken <sigmundklaa@outlook.com>
2024-07-04 18:01:51 -04:00
Yong Cong Sin
e912a95355 arch: riscv: update the description of INCLUDE_RESET_VECTOR Kconfig
Update the description of the `INCLUDE_RESET_VECTOR` Kconfig so
that it is more clear to the user what it does.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-07-02 09:24:44 +02:00
Daniel Leung
8c10964435 xtensa: mmu: clear ZSR_DEPC_SAVE at boot
ZSR_DEPC_SAVE is used to determine whether we are faulting inside
a double exception: if it is not zero, we are. It is possible that
the boot ROM or custom startup code leaves this non-zero, which
would result in a fake triple fault. So clear it at boot. Note
that the zeroing is done in the MMU init code, as these triple
faults are not actual hardware ones but only semantic, and can only
occur once the MMU is enabled.

Fixes #75194

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-28 20:53:37 -04:00
Jordan Yates
91f8c1aea9 everywhere: replace #if IS_ENABLED() as per docs
Replace `#if IS_ENABLED()` with `#if defined()` as recommended by the
documentation of `IS_ENABLED`.

Signed-off-by: Jordan Yates <jordan@embeint.com>
2024-06-28 07:20:32 -04:00
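
An illustrative before/after for the replacement above; `CONFIG_FOO` is a placeholder option, not one touched by this commit:

```c
/* Before: IS_ENABLED() evaluated inside a preprocessor conditional */
#if IS_ENABLED(CONFIG_FOO)
void foo_init(void);
#endif

/* After: a plain defined() check, as the IS_ENABLED() documentation
 * recommends for #if contexts (IS_ENABLED() remains the right tool in
 * C expressions, where both branches must compile).
 */
#if defined(CONFIG_FOO)
void foo_init(void);
#endif
```
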
Hess Nathan
8b942e15e2 arch: x86: corrected parameter names
- applied the exact parameter names of the interface to implementation

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-06-27 20:06:20 -04:00
Duy Nguyen
259b3d0095 arch: arm: Add initial support for Cortex-M85 Core
Add initial support for the Cortex-M85 Core which is an implementation
of the Armv8.1-M mainline architecture.

The support is based on the Cortex-M55 support that already exists in
Zephyr.

Signed-off-by: Duy Nguyen <duy.nguyen.xa@renesas.com>
2024-06-26 13:36:14 -04:00
Nicolas Pitre
b5be646e20 arch/arm64/mmu: improve debugging output
Rationalize the debug log to make it easier to use.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-26 13:06:42 -04:00
Nicolas Pitre
d7e9ea0b71 arch/arm64/mmu: fix page table reference counting part 3
Commit f7e11649fd ("arch/arm64/mmu: fix page table reference
counting") missed another case where the freeing of a whole table
"branch" didn't take into account the fact that some sub-tables might
be shared and therefore must be cleared only if the reference count is
down to 1.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-26 13:06:42 -04:00
J. Neuschäfer
a0d1d0619d arm: Fix typo in comment
Change "etxra" to "extra" in Kconfig comment.

Signed-off-by: J. Neuschäfer <j.ne@posteo.net>
2024-06-25 19:13:00 -04:00
Lingao Meng
302422ad9d everywhere: replace double words
import os
import re

common_words = set([
    'about', 'after', 'all', 'also', 'an', 'and',
     'any', 'are', 'as', 'at',
    'be', 'because', 'but', 'by', 'can', 'come',
    'could', 'day', 'do', 'even',
    'first', 'for', 'get', 'give', 'go', 'has',
    'have', 'he', 'her',
    'him', 'his', 'how', 'I', 'in', 'into', 'it',
    'its', 'just',
    'know', 'like', 'look', 'make', 'man', 'many',
    'me', 'more', 'my', 'new',
    'no', 'not', 'now', 'of', 'one', 'only', 'or',
    'other', 'our', 'out',
    'over', 'people', 'say', 'see', 'she', 'so',
    'some', 'take', 'tell', 'than',
    'their', 'them', 'then', 'there', 'these',
    'they', 'think',
    'this', 'time', 'two', 'up', 'use', 'very',
    'want', 'was', 'way',
    'we', 'well', 'what', 'when', 'which', 'who',
    'will', 'with', 'would',
    'year', 'you', 'your'
])

valid_extensions = set([
    'c', 'h', 'yaml', 'cmake', 'conf', 'txt', 'overlay',
    'rst', 'dtsi',
    'Kconfig', 'dts', 'defconfig', 'yml', 'ld', 'sh', 'py',
    'soc', 'cfg'
])

def filter_repeated_words(text):
    # Split the text into lines
    lines = text.split('\n')

    # Combine lines into a single string with unique separator
    combined_text = '/*sep*/'.join(lines)

    # Replace repeated words within a line
    def replace_within_line(match):
        return match.group(1)

    # Regex for matching repeated words within a line
    within_line_pattern = re.compile(
        r'\b(' + '|'.join(map(re.escape, common_words)) + r')\b\s+\b\1\b')
    combined_text = within_line_pattern.sub(
        replace_within_line, combined_text)

    # Replace repeated words across line boundaries
    def replace_across_lines(match):
        return match.group(1) + match.group(2)

    # Regex for matching repeated words across line boundaries
    across_lines_pattern = re.compile(
        r'\b(' + '|'.join(map(re.escape, common_words)) +
        r')\b(\s*[*\/\n\s]*)\b\1\b')
    combined_text = across_lines_pattern.sub(
        replace_across_lines, combined_text)

    # Split the text back into lines
    filtered_text = combined_text.split('/*sep*/')

    return '\n'.join(filtered_text)

def process_file(file_path):
    with open(file_path, 'r', encoding='utf-8') as file:
        text = file.read()

    new_text = filter_repeated_words(text)

    with open(file_path, 'w', encoding='utf-8') as file:
        file.write(new_text)

def process_directory(directory_path):
    for root, dirs, files in os.walk(directory_path):
        dirs[:] = [d for d in dirs if not d.startswith('.')]
        for file in files:
            # Filter out hidden files
            if file.startswith('.'):
                continue
            file_extension = file.split('.')[-1]
            if file_extension in valid_extensions:  # only process files with the allowed extensions
                file_path = os.path.join(root, file)
                print(f"Processed file: {file_path}")
                process_file(file_path)

directory_to_process = "/home/mi/works/github/zephyrproject/zephyr"
process_directory(directory_to_process)

Signed-off-by: Lingao Meng <menglingao@xiaomi.com>
2024-06-25 06:05:35 -04:00
Nicolas Pitre
a4ebd5e8db arch/arm64/mmu: move hardware specifics to private header
None of the moved definitions are meant to be used by any code outside
of arch/arm64/core/mmu.c. Move them away from global scope to the
private header where more such definitions already live.

This is especially relevant as the previous commit fixed some of those
definitions which then caused conflicts with some external SDK that
carries a copy of those original buggy definitions.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-24 22:24:50 -04:00
Nicolas Pitre
1e27100442 arch/arm64/mmu: use 64-bit mask values with page table descriptors
Inverting a mask whose type has only 32 bits doesn't produce the
expected result. Fix those to be 64-bit values.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-24 22:24:50 -04:00
Jordan Yates
07870934e3 everywhere: replace double words
Treewide search and replace on a range of double word combinations:
    * `the the`
    * `to to`
    * `if if`
    * `that that`
    * `on on`
    * `is is`
    * `from from`

Signed-off-by: Jordan Yates <jordan@embeint.com>
2024-06-22 05:40:22 -04:00
Yong Cong Sin
2bb78f3e1a arch: riscv: stacktrace: use current thread if thread is NULL
Zephyr's thread stack size is not fixed; in most cases we need
the `thread` argument to obtain the `stack_info`, unless
we are unwinding the IRQ stack, since that one is fixed.

Otherwise we can only safely print the current `mepc` register:
unwinding the esf without the stack info of a thread can
result in undefined behavior.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-21 14:55:27 -04:00
Yong Cong Sin
b79a68fc49 arch: riscv: stacktrace: refactor fatal stack bound
Pass the current thread to `walk_stackframe()`, so that we do
not need to hardcode `_current` in `in_fatal_stack_bound()`,
which will allow it to reuse `in_stack_bound()`.
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-21 14:55:27 -04:00
Daniel Leung
c44c922198 soc: xtensa/dc233c: remove xtensa_dc233c_stack_ptr_is_sane
The generic stack pointer checker in the architecture code is
enough so we can remove the platform specific one. Besides,
xtensa_dc233c_stack_ptr_is_sane() does not do much checking
either.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-21 09:59:36 +02:00
Daniel Leung
31c96cf395 xtensa: check stack boundaries during backtrace
This checks for stack boundaries during backtrace to make sure
we are not stepping into invalid memory.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-21 09:59:36 +02:00
Daniel Leung
5b84bb4f4a xtensa: check stack frame pointer before dumping registers
Check that the stack frame pointer is valid before dumping
any registers while handling exceptions. If the pointer is
invalid, anything it points to will probably also be
invalid. Accessing them may result in another access
violation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-21 09:59:36 +02:00
Daniel Leung
cb9f8b1019 xtensa: separate FATAL EXCEPTION printout into two
It is observed that, during logging in a fatal exception,
it prints "FATAL EXCEPTION(null)". The exact reason is unknown,
as debugging through GDB makes it all work again. So
separate it into two log statements to avoid this situation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-21 09:59:36 +02:00
Daniel Leung
da702463b9 x86: select CONFIG_THREAD_STACK_INFO for exception stack trace
When CONFIG_X86_EXCEPTION_STACK_TRACE is enabled, also forcibly
enable CONFIG_THREAD_STACK_INFO. Without the thread stack info,
it is possible the stack unwinding would go out of the thread
stack and into unknown memory, resulting in hard fault.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-17 21:19:32 -04:00
Flavio Ceolin
36ef3da2d7 arch: xtensa: fatal: Comply with MISRA Rule 14.4
Use boolean expression in a controlling expression.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-06-17 17:46:16 -04:00
Daniel Leung
c1a462e1a5 xtensa: mmu: bail on semantic triple faults
There are actually no triple faults on Xtensa. Once PS.EXCM is
set, it keeps going through the double exception vector for any
new exceptions. However, our exception code needs to unmask
PS.EXCM to enable register window operations. So after that,
any new exceptions will go through the kernel or user vectors
depending on PS.UM. If there are continuous faults, it may
keep ping-ponging between the double and kernel/user exception
vectors and may never get resolved. Since we stash DEPC
during a double exception, and the stashed value is only cleared
once the double exception has been processed, we can use
the stashed DEPC value to detect if the next exception could
be considered a triple fault. If such a case exists, simply
jump to an infinite loop, quit the simulator, or invoke the
debugger.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-15 04:44:48 -04:00
Daniel Leung
d344a6bc85 xtensa: make arch_user_string_nlen actually work
arch_user_string_nlen() did not exactly work correctly as any
invalid pointers being passed are de-referenced naively, which
results in DTLB misses (MMU) or access errors (MPU). However,
arch_user_string_nlen() should always return to the caller
with appropriate error code set, and should never result in
thread termination. Since we are usually going through syscalls
when arch_user_string_nlen() is called, for MMU, the DTLB miss
goes through double exception. Since the pointer is invalid,
there is a high chance there is not even a L2 page table
associated with that bad address. So the DTLB miss cannot be
handled and it just keeps looping in double exception until
there is another exception type where we get to the C handler.
However, the stack frame is no longer the frame associated
with the call to arch_user_string_nlen(), and the call return
address would be incorrect. Forcing this incorrect address as
the next PC would result in some other exceptions, e.g.
illegal instruction, which would go to the C handler again.
This time it will go to the end of handler and would result
in thread termination. For MPU systems, access errors would
simply result in thread termination in the C handler. Because of
these reasons, change the arch_user_string_nlen() to check if
the memory region can be accessed under kernel mode first
before feeding it to strnlen().

Also remove the exception fixup arrays as there is nothing
there anymore.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-15 04:44:48 -04:00
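
A sketch of the described approach; the xtensa_mem_kernel_has_access() signature is assumed from the follow-up commit below and may differ from the real one:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Assumed prototype for the access checker introduced in the next commit. */
extern bool xtensa_mem_kernel_has_access(void *addr, size_t size, int write);

size_t arch_user_string_nlen(const char *s, size_t maxsize, int *err)
{
        /* Verify the region is mapped/accessible from kernel mode first,
         * instead of relying on page faults to flag a bad pointer.
         */
        if (!xtensa_mem_kernel_has_access((void *)s, maxsize, 0)) {
                *err = -1;
                return 0;
        }

        *err = 0;
        return strnlen(s, maxsize);
}
```
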
Daniel Leung
79939e3279 xtensa: mmu: mpu: add xtensa_mem_kernel_has_access()
This adds a new function xtensa_mem_kernel_has_access() to
determine if a memory region can be accessed by kernel threads.
This allows checking for valid mapped memory before accessing
them to avoid relying on page faults to detect invalid access.

Also fixed an issue with arch_buffer_validate() on MPU where
it may return okay even if the incoming memory region has no
corresponding entry in the MPU table.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-15 04:44:48 -04:00
Daniel Leung
61ec0d15d5 xtensa: mmu: arch_buffer_validate is only for user thread
arch_buffer_validate() is only meant to verify that user threads have
access to the memory region. It should not be used to verify
whether a kernel thread has access (which it should have anyway). So
change the logic.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-15 04:44:48 -04:00
Daniel Leung
27f4e7fe0c xtensa: only use BREAK if explicitly enabled
Introduce CONFIG_XTENSA_BREAK_ON_UNRECOVERABLE_EXCEPTIONS to
use the BREAK instruction for unrecoverable exceptions. This
definitely requires a debugger to be attached to the hardware
or simulator to catch it.

Also move the infinite loop so it does NOT result in an infinite
interrupt storm, as the debug interrupt would otherwise be triggered
over and over again. Same for the simcall exit, as it does not
need to be called repeatedly.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-15 04:44:48 -04:00
Daniel Leung
bc3e77b356 xtensa: make it work with TLB misses during interrupt handling
If there are any TLB misses during interrupt handling,
the user, kernel and double exception vectors will be triggered
for the miss and the DEPC and EXCCAUSE overwritten, as the TLB
misses are handled in the assembly code and execution is
returned to the original vector code. Because of this, both
DEPC and EXCCAUSE being read in the C handler are not the ones
that triggered the original exception (for example, a level-1
interrupt). So stash both DEPC and EXCCAUSE such that
the original cause of the exception is visible in the C handler.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-15 04:44:48 -04:00
Daniel Leung
371ad016f8 xtensa: no need to clear DEPC on C handler exit for MPU
Xtensa MPU code does not handle double exception in C. So there
is no need to clear DEPC on C handler exit.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-15 04:44:48 -04:00
Daniel Leung
b696257eb2 xtensa: fix getting exccause during backtrace
We have frame pointer struct and BSA struct to extract
the exception cause (exccause). There is no need to
resort to custom assembly to do that. Besides, given
that the BSA is different between different Xtensa cores,
there is no guarantee it is at the same place as what
the assembly assumes. So just do that without assembly.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-15 04:44:48 -04:00
Daniel Leung
682b572414 xtensa: remove ZSR_MMU_0 and ZSR_MMU_1
They are not being used in the code so there is no need to
reserve them as scratch registers.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-15 04:44:48 -04:00
Raffael Rostagno
9265c82313 soc: esp32c6: Kconfig and .ld updates, DTS and comments fix
Kconfig, .ld and comment fixes.
Fixed the address of UART1; WDT and RTC timer are disabled by default.

Signed-off-by: Raffael Rostagno <raffael.rostagno@espressif.com>
2024-06-14 18:51:46 -04:00
Nicolas Pitre
696edbe841 arm64: move simple memcpy/memset alternatives to assembly
Assembly implementation for z_early_memset() and z_early_memcpy().
Otherwise the compiler will happily replace our C code with a direct
call to memset/memcpy which kind of defeats the purpose.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-14 17:11:40 -04:00
Nicolas Pitre
64855973c0 arm64: speed up simple memcpy/memset alternatives
We need those simple alternatives to be used during early boot when the
MMU is not yet enabled. However they don't have to be the slowest they
can be. Those functions are mainly used to clear .bss sections and copy
.data to final destination when doing XIP, etc. Therefore it is very
likely for provided pointers to be 64-bit aligned. Let's optimize for
that case.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-14 17:11:40 -04:00
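
A C-level illustration of the aligned fast path described above (the actual change is in assembly); the prototype is assumed to mirror memset but return void:

```c
#include <stddef.h>
#include <stdint.h>

void z_early_memset(void *dst, int c, size_t n)
{
        uint8_t *d8 = dst;

        /* Fast path: pointers handed to the early helpers are very likely
         * 64-bit aligned (.bss clearing, XIP .data copy), so fill by
         * doublewords first.
         */
        if (((uintptr_t)dst & 7) == 0) {
                uint64_t pattern = 0x0101010101010101ULL * (uint8_t)c;
                uint64_t *d64 = (uint64_t *)dst;

                for (; n >= 8; n -= 8) {
                        *d64++ = pattern;
                }
                d8 = (uint8_t *)d64;
        }

        /* Byte tail (and the unaligned slow path). */
        while (n-- > 0) {
                *d8++ = (uint8_t)c;
        }
}
```
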
Giardina, Anthony
809c0923c6 xtensa: userspace: fix uninitialized return values in mpu_map_region_add
Ensure that *first_idx is populated for the case of adding entries
to an empty table

Signed-off-by: Anthony Giardina <anthony.giardina@intel.com>
2024-06-14 14:49:29 -04:00
Evgeniy Paltsev
ec72339d61 ARC: enable barriers for HS
As we start to use data memory barriers in the SMP scheduler code
explicitly (not only internally in the atomics implementation),
let's enable barriers for ARC HS.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2024-06-14 15:38:39 +02:00
Yong Cong Sin
bb66d1188c arch: riscv: stacktrace: conditionally check stack_info
Check if an address is in the thread stack only when
`CONFIG_THREAD_STACK_INFO` is enabled, since otherwise the
`stack_info` will not be available.

This fixes compilation error when `CONFIG_THREAD_STACK_INFO`
is explicitly disabled.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-14 05:22:59 -04:00
Grant Ramsay
c5642a7b4d arch: kconfig: Set flash size/address to 0 by default when !XIP
Many boards/SoCs in-tree do this:
    if !XIP
    config FLASH_SIZE
        default 0
    config FLASH_BASE_ADDRESS
        default 0
    endif

And many other boards are missing this configuration (e.g. stm32 series).
Making this the default helps get non-XIP builds just working.

Signed-off-by: Grant Ramsay <gramsay@enphaseenergy.com>
2024-06-13 20:15:35 -04:00
Nicolas Pitre
baa70d8d36 arch/arm64/mmu: fix page table reference counting part 2
Commit f7e11649fd ("arch/arm64/mmu: fix page table reference
counting") missed a case where the freeing of a table wasn't propagated
properly to all domains. To fix this, the page freeing logic was removed
from set_mapping() and a del_mapping() was created instead, to be used
by both remove_map() and globalize_table().

A test covering this case should definitely be created but that'll come
later. Proper operation was verified through manual debug log
inspection for now.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-13 17:03:33 -04:00
Marcin Szymczyk
0b20e2afa7 arch: riscv: skip isr.S when SW ISR table is not generated
`isr.S` depends on `CONFIG_GEN_SW_ISR_TABLE`.
Do not build it if SW ISR table is not present.

Signed-off-by: Marcin Szymczyk <marcin.szymczyk@nordicsemi.no>
2024-06-13 16:57:32 -04:00
Yong Cong Sin
61a0f9f1c0 arch: riscv: stop printing symbol name at mepc
Now that the unwind starts from mepc already, the symbol
name at the mepc reg is kinda redundant, so just remove it.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-13 16:46:48 -04:00
Yong Cong Sin
726fefd12c arch: riscv: stacktrace: implement arch_stack_walk()
Created the `arch_stack_walk()` function out of the original
`z_riscv_unwind_stack()`; it has been updated to support
unwinding any thread.

Updated the stack_unwind test case accordingly.

Increased the delay in `test_fatal_on_smp` to wait
for the fatal thread to be terminated, as the stacktrace can
take a bit more time.

Doubled the kernel/smp testcase timeout from 60s (the default) to
120s, as some of the tests can take a little bit more than 60s
to finish.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-13 16:46:48 -04:00
Yong Cong Sin
5121de6e73 arch: introduce arch_stack_walk()
An architecture can indicate that it has an implementation for
the `arch_stack_walk()` function by selecting
`ARCH_HAS_STACKWALK`.

Set the default value of `EXCEPTION_STACK_TRACE_MAX_FRAMES` to
`ARCH_STACKWALK_MAX_FRAMES` if the latter is available.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-13 16:46:48 -04:00
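
A usage sketch of the new API; the callback shape shown here (return true to keep walking) is an assumption based on this commit series, not quoted from the header:

```c
#include <zephyr/kernel.h>
#include <zephyr/sys/printk.h>

static bool print_frame(void *cookie, unsigned long addr)
{
        int *count = cookie;

        printk("  %2d: %#lx\n", (*count)++, addr);
        return true;    /* keep walking */
}

static void dump_trace(const struct k_thread *thread, const struct arch_esf *esf)
{
        int count = 0;

        /* thread/esf may be NULL depending on what is being unwound. */
        arch_stack_walk(print_frame, &count, thread, esf);
}
```
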
Nicolas Pitre
6cc9d7ec20 tests: arm64: exercise MMU page table allocation and recycling
This code was never formally tested before... and without the preceding
commit it obviously didn't work either.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-13 05:47:24 -04:00
Nicolas Pitre
f7e11649fd arch/arm64/mmu: fix page table reference counting
Existing code confused table usage and table reference counts together.
This obviously doesn't work. A table with one reference to it and one
populated PTE is not the same as a table with 2 references to it and
no PTE in use.

So split the two concepts and adjust the code accordingly. A page needs
to have its PTE usage count drop to zero before the last reference is
released. When both counts are 0 then the page is free.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-13 05:47:24 -04:00
Daniel Leung
564ca11631 kernel: mm: rename z_page_fault() to k_mem_page_fault()
This is part of a series of move memory management related
stuff out of Z_ namespace into its own namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung
54af5dda84 kernel: mm: rename z_page_frame_* to k_mem_page_frame_*
Also any demand paging and page frame related bits are
renamed.

This is part of a series to move memory management related
stuff out of the Z_ namespace into its own namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung
7715aa3341 kernel: mm: rename Z_SCRATCH_PAGE to K_MEM_SCRATCH_PAGE
This is part of a series to move memory management related
stuff from Z_ namespace into its own namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung
2d2bbc05d6 kernel: mm: rename Z_VM_KERNEL to K_MEM_IS_VM_KERNEL
This is part of a series to move memory management functions
away from the z_ namespace and into its own namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung
db9d3134c5 kernel: mm: rename Z_MEM_PHYS/VIRT_ADDR to K_MEM_*
This is part of a series to move memory management functions
away from the z_ namespace and into its own namespace. Also
make documentation available via doxygen.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung
50640cb1b8 kernel: mm: rename Z_MEM_VM_OFFSET to K_MEM_VIRT_OFFSET
This is part of a series to move memory management functions
away from the z_ namespace and into its own namespace. Also
make documentation available via doxygen.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung
552e29790d kernel: mm: rename z_phys_un/map to k_mem_*_phys_bare
This renames z_phys_map() and z_phys_unmap() to
k_mem_map_phys_bare() and k_mem_unmap_phys_bare()
respectively. This is part of the series to move memory
management functions away from the z_ namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Wilfried Chauveau
fef0e8a211 arch: arm: cortex_m: update inline comment pointing to isr_wrapper.*
isr_wrapper has been converted to C but this inline comment was not
updated. This fixes the out-of-sync comment.

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
2024-06-12 18:29:12 -04:00
Anas Nashif
c20e798646 arch: call arch_smp_init() directly, do not use SYS_INIT
Move this to a call in the init process. arch_* calls are not services
and should be called consistently during initialization.

Place it between PRE_KERNEL_1 and PRE_KERNEL_2 as some drivers
initialized in PRE_KERNEL_2 might depend on SMP being setup.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-06-12 18:23:54 -04:00
Jérémy LOCHE - MAKEEN Energy
75d65821ce arch: arm: add rom_start_relocation prompts
Add a prompt to the ROMSTART_RELOCATION_ROM configs to allow
selection from project config files.
This simplifies the usage in samples/app for board porting.

Signed-off-by: Jérémy LOCHE - MAKEEN Energy <jlh@makeenenergy.com>
2024-06-12 17:11:04 -05:00
Jakub Zymelka
502fcac821 arch: riscv: core: Enable RISCV IRQs for no multithreading
Enable MSTATUS.IEN to allow RISCV interrupts for
non-multithreaded applications.

Signed-off-by: Jakub Zymelka <jakub.zymelka@nordicsemi.no>
2024-06-10 16:57:44 +03:00
Lars-Ove Karlsson
b48aeedf77 arch: common: Removed unnecessary cast
Removed an unnecessary cast to void * from a function that already
had the correct signature.
This makes for more portable code, as casting between code and data
pointers is frowned upon by the C standard.

Signed-off-by: Lars-Ove Karlsson <lars-ove.karlsson@iar.com>
2024-06-07 19:05:08 -04:00
Flavio Ceolin
64b1ac831b arm: pm: Don't use deprecated function
Use pm_system_resume instead of z_pm_save_idle_exit

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-06-05 17:36:22 -05:00
Flavio Ceolin
a3de27fce9 x86: pm: Don't use deprecated function
Use pm_system_resume instead of z_pm_save_idle_exit

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-06-05 17:36:22 -05:00
Flavio Ceolin
9a869ef33a arc: pm: Don't use deprecated function
Use pm_system_resume instead of z_pm_save_idle_exit

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-06-05 17:36:22 -05:00
Nikodem Kastelik
12142f72ec arch: arm: core: mpu: allow non-ARM memory attributes
A memory region defined in the devicetree can have attributes
that are not intended to be parsed by the MPU library,
but might be valid for other components.
Signed-off-by: Nikodem Kastelik <nikodem.kastelik@nordicsemi.no>
2024-06-05 14:42:50 +01:00
Peter Mitsis
0bcdae2c62 kernel: Add CONFIG_ARCH_HAS_DIRECTED_IPIS
Platforms that support IPIs allow them to be broadcast via the
new arch_sched_broadcast_ipi() routine (replacing arch_sched_ipi()).
Those that also allow IPIs to be directed to specific CPUs may
use arch_sched_directed_ipi() to do so.

As the kernel has the capability to track which CPUs may need an IPI
(see CONFIG_IPI_OPTIMIZE), this commit updates the signalling of
tracked IPIs to use the directed version if supported; otherwise
they continue to use the broadcast version.

Platforms that allow directed IPIs may see a significant reduction
in the number of IPI related ISRs when CONFIG_IPI_OPTIMIZE is
enabled and the number of CPUs increases.  These platforms can be
identified by the Kconfig option CONFIG_ARCH_HAS_DIRECTED_IPIS.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2024-06-04 22:35:54 -04:00
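
A dispatch sketch based on the description above; the bitmap handling is simplified and the scheduler call sites are not shown:

```c
#include <zephyr/kernel.h>

static void signal_pending_ipis(uint32_t cpu_bitfield)
{
#if defined(CONFIG_ARCH_HAS_DIRECTED_IPIS)
        /* Interrupt only the CPUs flagged as needing a reschedule. */
        arch_sched_directed_ipi(cpu_bitfield);
#else
        ARG_UNUSED(cpu_bitfield);
        arch_sched_broadcast_ipi();
#endif
}
```
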
Flavio Ceolin
f74a84b251 xtensa: mmu: MMU re-initialization API
With power management enabled, depending on the SoC power state
used when idle, the MMU may lose context and may need to be
re-initialized. When re-initializing the MMU, we must not re-create the
page table because it may overwrite changes done during execution, but
we still need to set the asid and page table for the current context.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-06-04 16:27:55 -05:00
Yong Cong Sin
6a3cb93d88 arch: remove the use of z_arch_esf_t completely from internal
Created `GEN_OFFSET_STRUCT` & `GEN_NAMED_OFFSET_STRUCT`, which
work for `struct`, and removed the use of `z_arch_esf_t`
completely.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-04 14:02:51 -05:00
Yong Cong Sin
e54b27b967 arch: define struct arch_esf and deprecate z_arch_esf_t
Make `struct arch_esf` compulsory for all architectures by
declaring it in the `arch_interface.h` header.

After this commit, the named struct `z_arch_esf_t` is only used
internally to generate offsets, and is slated to be removed
from the `arch_interface.h` header in the future.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-04 14:02:51 -05:00
Yong Cong Sin
3998e18ec4 arch: rename all esf struct to struct arch_esf
Rename every architecture's esf struct to `struct arch_esf`.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-04 14:02:51 -05:00
Piotr Wojnarowski
0f3fe4daab riscv: Align _isr_wrapper to 64 bytes for CLIC
The CLIC requires that mtvec.base is aligned to 64 bytes.
_isr_wrapper is used as mtvec.base, so align it to 64 bytes.

Signed-off-by: Piotr Wojnarowski <pwojnarowski@antmicro.com>
2024-06-04 13:41:49 +02:00
Ederson de Souza
7f0b5edd8c arch/x86: Make irq_offload SMP-safe on x86_64
The irq_offload mechanism was using the same entry of the IDT vector for
all CPUs on SMP systems. This caused race conditions when two CPUs were
doing irq_offload() calls.

This patch addresses that by adding one indirection layer: the
irq_offload() now sets a per CPU entry with the routine and parameter to
be run. Then a software interrupt is generated, and a default handler
will do the appropriate dispatching.

Finally, test "kernel/smp_abort" is enabled for x86 as it should work
now.

Fixes #72172.

Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
2024-06-04 07:57:06 +02:00
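
A simplified illustration of the per-CPU indirection described above; the vector number and data layout are hypothetical, and synchronization is omitted:

```c
#include <zephyr/kernel.h>
#include <zephyr/irq_offload.h>

struct offload_entry {
        irq_offload_routine_t routine;
        const void *parameter;
};

static struct offload_entry offload_entries[CONFIG_MP_MAX_NUM_CPUS];

void arch_irq_offload(irq_offload_routine_t routine, const void *parameter)
{
        unsigned int cpu = arch_curr_cpu()->id;

        /* Stash the work for this CPU only, then raise the shared
         * software-interrupt vector; its handler dispatches the entry
         * belonging to the CPU that took the interrupt.
         */
        offload_entries[cpu].routine = routine;
        offload_entries[cpu].parameter = parameter;

        __asm__ volatile("int $33");    /* hypothetical vector number */
}
```
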
Yong Cong Sin
02770ad963 debug: EXCEPTION_STACK_TRACE should depend on arch Kconfigs
Fix the dependencies of `CONFIG_EXCEPTION_STACK_TRACE`:
- Architecture-specific Kconfig, i.e.
  `X86_EXCEPTION_STACK_TRACE`, will be enabled automatically
  when all the dependencies are met.
- `EXCEPTION_STACK_TRACE` depends on architecture-specific
  Kconfig to be enabled.
- The stack trace implementations should be compiled only if
  user enables `CONFIG_EXCEPTION_STACK_TRACE`.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-03 03:02:04 -07:00
Yong Cong Sin
190777dccf arch: arm64: create ARM64_EXCEPTION_STACK_TRACE
Currently, the ARM64 stack trace implementation depends on a
combination of frame pointer Kconfigs being enabled. Create a dedicated
Kconfig for that instead, so that it is consistent with x86 and
riscv, and update the source accordingly.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-03 03:02:04 -07:00
Yong Cong Sin
ea00e04382 arch: x86: select DEBUG_INFO in X86_EXCEPTION_STACK_TRACE
Select `DEBUG_INFO` instead of depending on it.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-03 03:02:04 -07:00
Yong Cong Sin
8a5823b474 debug: remove !OMIT_FRAME_POINTER from EXCEPTION_STACK_TRACE
Not all stack trace implementations require the frame pointer; move
that dependency to the architecture Kconfig.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-03 03:02:04 -07:00
Yong Cong Sin
413b1cf409 debug: remove DEBUG_INFO from EXCEPTION_STACK_TRACE
The `DEBUG_INFO` in the `EXCEPTION_STACK_TRACE` is only
required by x86. Move that to `X86_EXCEPTION_STACK_TRACE`
instead.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-03 03:02:04 -07:00
Wilfried Chauveau
7d7616214b arch: arm: cortex_m: restore comment lost in translation
The comment about ISB in swap.S was lost during the translation to C.

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
2024-05-31 09:53:31 -05:00
Yong Cong Sin
94346d2441 arch: arm64: fatal: limit max number of stack traces
In some cases, the `fp` will never be `NULL` and the stack
unwinding can go on forever; limit the max depth so that
this will not happen.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-30 03:00:50 -07:00
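
An illustrative bounded frame-pointer walk; the limit name and value are placeholders for the cap added here:

```c
#include <stdint.h>
#include <zephyr/sys/printk.h>

#define MAX_STACK_FRAMES 32     /* placeholder for the configured maximum */

static void walk_frame_chain(uint64_t fp)
{
        /* AArch64 frame record: [0] = previous fp, [1] = saved lr. */
        for (int i = 0; (fp != 0U) && (i < MAX_STACK_FRAMES); i++) {
                uint64_t lr = ((uint64_t *)fp)[1];

                printk("backtrace %2d: fp: 0x%016llx lr: 0x%016llx\n",
                       i, (unsigned long long)fp, (unsigned long long)lr);
                fp = ((uint64_t *)fp)[0];
        }
}
```
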
Yong Cong Sin
c87dc641bc arch: generalize frame pointer via CONFIG_FRAME_POINTER introduction
Enabling `CONFIG_FRAME_POINTER` allows the users to build the
kernel with frame-pointer.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-30 03:00:40 -07:00
Marcin Szymczyk
e17b3fd884 arch: riscv: implement arch_irq_disconnect_dynamic
For SoCs with `CONFIG_RISCV_RESERVED_IRQ_ISR_TABLES_OFFSET`,
the offset should be taken into consideration when disconnecting an IRQ.

Signed-off-by: Marcin Szymczyk <marcin.szymczyk@nordicsemi.no>
2024-05-29 11:58:44 +02:00
Yong Cong Sin
6e8d979336 arch: riscv: stacktrace: handle user threads
Handle user threads stack bound validation in
`in_stack_bound()` to get more accurate traces.

If `CONFIG_PMP_POWER_OF_TWO_ALIGNMENT` is enabled:

```
+------------+ <- thread.arch.priv_stack_start
| Guard      | } Z_RISCV_STACK_GUARD_SIZE
+------------+
| Priv Stack | } CONFIG_PRIVILEGED_STACK_SIZE
+------------+ <- thread.arch.priv_stack_start +
                  CONFIG_PRIVILEGED_STACK_SIZE +
                  Z_RISCV_STACK_GUARD_SIZE
```

otherwise:

```
+------------+ <- thread.stack_obj
| Guard      | } Z_RISCV_STACK_GUARD_SIZE
+------------+
| Priv Stack | } CONFIG_PRIVILEGED_STACK_SIZE
+------------+ <- thread.stack_info.start
| Thread     |
| stack      |
|            |
+............|
| TLS        | } thread.stack_info.delta
+------------+ <- thread.stack_info.start +
                  thread.stack_info.size
```

See: zephyr/include/zephyr/arch/riscv/arch.h

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-29 08:38:53 +02:00
Yong Cong Sin
602c993799 arch: riscv: stacktrace: fix cpuid type and optimize branch with compiler
Change the type of `cpu_id` to `uint8_t` since that is the type
of `arch_curr_cpu()->id`.

Instead of using a preprocessor switch (`#ifdef CONFIG_SMP`), use
the if-else shorthand (`IS_ENABLED(CONFIG_SMP)`).

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-29 08:38:53 +02:00
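
A generic illustration of the `IS_ENABLED()` shorthand mentioned above; `smp_cpu_id()` is a stand-in for the real per-CPU accessor, not the actual code:

```c
#include <stdint.h>
#include <zephyr/sys/util_macro.h>

/* Stand-in for the SMP-only accessor used by the stacktrace code. */
static inline uint8_t smp_cpu_id(void)
{
        return 0U;
}

static uint8_t stacktrace_cpu_id(void)
{
        /* Both branches are compiled and type-checked; the dead one is
         * optimized away, unlike with an #ifdef CONFIG_SMP block.
         */
        return IS_ENABLED(CONFIG_SMP) ? smp_cpu_id() : 0U;
}
```
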
Yong Cong Sin
bbe5e1e6eb build: namespace the generated headers with zephyr/
Namespaced the generated headers with `zephyr` to prevent
potential conflict with other headers.

Introduce a temporary Kconfig `LEGACY_GENERATED_INCLUDE_PATH`
that is enabled by default. This allows the developers to
continue the use of the old include paths for the time being
until it is deprecated and eventually removed. The Kconfig will
generate a build-time warning message, similar to the
`CONFIG_TIMER_RANDOM_GENERATOR`.

Updated the includes path of in-tree sources accordingly.

Most of the changes here are scripted, check the PR for more
info.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-28 22:03:55 +02:00
Yong Cong Sin
5a35037af3 arch: riscv: check esf before calling z_riscv_unwind_stack
Make sure that esf is not NULL before calling
z_riscv_unwind_stack to prevent NULL pointer dereferencing.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-27 06:19:32 -04:00
Flavio Ceolin
4d85f3d91c pm: Deprecate z_pm_save_idle_exit
Deprecate z_pm_save_idle_exit and promote pm_system_resume.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-27 02:10:03 -07:00
Nikolay Agishev
7180c515c4 ARC: MPU: Add thread stack isolation configs
Following the recent changes in the general MPU configuration
(https://github.com/zephyrproject-rtos/zephyr/pull/71969), add the
appropriate configs for isolating thread stacks to the ARC MPU.

Signed-off-by: Nikolay Agishev <agishev@synopsys.com>
2024-05-27 07:44:44 +02:00
Yong Cong Sin
7248efcd59 drivers: intc: update to use multi-level API
Update these multi-level interrupt drivers to use the new API.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-25 11:24:32 +03:00
Yong Cong Sin
5d9e266d13 drivers: interrupt_controller: irq_steer: use new multilevel irq impl
Update the NXP's irq_steer driver to use the new multi-level
interrupt implementation.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-25 11:24:32 +03:00
Yong Cong Sin
e2bcedc3ad arch: common: multilevel_irq: simplification with new multilevel IRQ APIs
Use the multi-level interrupt APIs that accept `level` as an
argument for code where the level of the interrupt is not
known at build time.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-25 11:24:32 +03:00
Yong Cong Sin
c5f5b964c1 arch: sw_isr: revamp multi-level interrupt architecture
Previously the multi-level IRQ lookup table was generated by
looping through the devicetree nodes using macros & Kconfig,
which is hard to read and flimsy.

This PR shifts the heavy lifting to the devicetree & DT macros such
that an interrupt controller driver, which has its info in the
devicetree, can register itself directly with the multi-level
interrupt architecture, which is more straightforward.

The previously auto-generated lookup table built with macros is now
moved into a file of its own. A new compatibility Kconfig,
`CONFIG_LEGACY_MULTI_LEVEL_TABLE_GENERATION`, is added and
enabled by default to compile the legacy lookup table for
interrupt controller drivers that aren't updated to support the
new architecture yet.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-25 11:24:32 +03:00
Yong Cong Sin
c118cd5a13 arch: make the max stack frames configurable
Currently on x86 & risc-v, which implement stack traces, the
maximum depth of the stack trace is defined by a macro.

Introduce a new Kconfig, EXCEPTION_STACK_TRACE_MAX_FRAMES,
so that this is configurable in software.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-23 11:52:08 -04:00
Yong Cong Sin
2a3d9d0d90 arch: arm64: use symtab to print function name in stack trace
Selecting `CONFIG_SYMTAB` will
enable the symtab generation which will be used in the
stack trace to print the function name of the return
address.

Added `arm64` to the `arch.common.stack_unwind.symtab` test.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-23 11:52:08 -04:00
Yong Cong Sin
eafc4eff04 arch: riscv: print symbol name of mepc if CONFIG_SYMTAB is enabled
The mepc register holds the address of the instruction that was
interrupted; it will make debugging easier if we know the
name of the symbol, so print it if `CONFIG_SYMTAB` is enabled.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-23 11:52:08 -04:00
Yong Cong Sin
c1a925de98 arch: riscv: use symtab to print function name in stack trace
Selecting `CONFIG_EXCEPTION_STACK_TRACE_SYMTAB` will
enable the symtab generation which will be used in the
stack trace to print the function name of the return
address.

Updated the `stack_unwind` test to test the symbols in a
stack trace.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-23 11:52:08 -04:00
Andy Ross
3aeefd2250 arch/xtensa: Add automatic vector linkage generation
Existing solutions for linking the Xtensa vector table are a
cut-and-paste mess of inherited code, with more than a dozen special
sections that need to be linked into many special MEMORY{} regions.

Accept the existing convention used by C/asm code, but automatically
detect the needed offsets for the platform from core-isa.h (it can
share the preprocessing with gen_zsr.py) and emit a file that can be
included in lieu of all the existing boilerplate.

Signed-off-by: Andy Ross <andyross@google.com>
2024-05-22 13:39:47 -05:00
Daniel Leung
27b0651f7a riscv: pmp: select CONFIG_MEM_DOMAIN_ISOLATED_STACKS
RISC-V PMP implementation supports isolating thread stacks
within the same memory domain, and also is the only
supported operating mode. So select the corresponding
kconfig by default.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-05-21 20:53:09 -04:00
Daniel Leung
7289dcdcda arm: mpu: select CONFIG_MEM_DOMAIN_ISOLATED_STACKS
ARM MPU implementation supports isolating thread stacks
within the same memory domain, and also is the only
supported operating mode. So select the corresponding
kconfig by default.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-05-21 20:53:09 -04:00
Yong Cong Sin
8c6da49f73 arch: riscv: relocate stack unwinding code into a separate file
Declutter `fatal.c` by moving the stack unwinding logic into
`stacktrace.c` and guard its compilation with `CMakeLists.txt`.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-20 20:52:18 -04:00
Yong Cong Sin
10a807537b arch: riscv: Add support for stack unwind without fp
Add a stack unwind implementation that only uses `sp`

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-20 20:52:18 -04:00
Andy Ross
bcf6b27c6b arch/xtensa: xtensa_intgen.py: Emit handlers for all levels
The original code would (unsurprisingly) only emit handler functions
for interrupt levels with interrupts associated with them.  But it
turns out that it's possible to configure an xtensa device with an
empty-but-otherwise-real interrupt level (specifically, mt8195 has a
"Level 3" interrupt not associated with any input IRQs; it is one level
above EXCM_LEVEL and one level below the DEBUG exception).

This script is old, and not set up to parse the full core-isa.h
directly, so modifying it to detect this condition is difficult.
Instead, just emit all 15 possible interrupt handlers, even empty
ones.  The extra stubs are harmless as they'll be dropped if uncalled.

Signed-off-by: Andy Ross <andyross@google.com>
2024-05-20 20:50:55 -04:00
Andy Ross
03cafbdaef arch/xtensa: "NMILEVEL" is an optional feature
Some oddball cores can be generated without an "NMI" interrupt, in
which case core-isa.h will not define XCHAL_NMILEVEL.  This code is
trying to unconditionally mask interrupts, so XCHAL_EXCM_LEVEL is the
pedantically correct choice anyway (NMIs, by definition, cannot be
masked).

Signed-off-by: Andy Ross <andyross@google.com>
2024-05-20 20:50:55 -04:00
Andy Ross
6ab7735774 arch/xtensa: Automatically generate interrupt handlers (finally)
The script to generate the _soc_inthandlers.h header has been run
manually for years, only because I was a cmake novice at the time and
unsure how to integrate it into the build.  So every new platform has
to find the script and template file and figure out how to generate
the file.  And in a few cases it looks like we've tried to EDIT the
resulting files in the tree.

Let's finally do this right.  The file is now dropped (for every
xtensa platform) as an "xtensa_handlers.h" file, and there is a Kconfig
to control whether the original/manual file or the new one is used by
the platform code.  We can migrate the other platforms slowly as
people have time to validate.

Signed-off-by: Andy Ross <andyross@google.com>
2024-05-20 20:50:55 -04:00
Hess Nathan
861235a9bc coding guidelines: comply with MISRA Rule 11.6
removed unneeded conversions from integer to pointer

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-20 19:21:01 +03:00
frei tycho
85a4b22c3f arch: x86: added missing parenthesis
- added missing parenthesis around macro argument expansion
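
An illustrative example (not the actual x86 macro) of why the parentheses matter:

```c
/* Illustrative only: without parentheses, an expression argument binds to
 * the surrounding operator incorrectly.
 */
#define DOUBLE_BAD(x)  (x * 2)     /* DOUBLE_BAD(a + b)  -> (a + b * 2)   */
#define DOUBLE_GOOD(x) ((x) * 2)   /* DOUBLE_GOOD(a + b) -> ((a + b) * 2) */
```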

Signed-off-by: frei tycho <tfrei@baumer.com>
2024-05-20 15:16:38 +01:00
Krzysztof Chruściński
24082d582f arch: arm: cortex_m: pm_s2ram: Add option for custom marking
The s2ram procedure used a RAM magic word to mark suspend-to-RAM.
This method may not work in some cases, e.g. when a global reset does
not clear RAM content: the magic word survives, so a resume from s2ram
is detected even though a global reset occurred.

RAM magic word method is the default but with
CONFIG_PM_S2RAM_CUSTOM_MARKING a custom implementation can be provided.
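
A hedged sketch of what a custom marking implementation could look like, assuming the `pm_s2ram_mark_set()` / `pm_s2ram_mark_check_and_clear()` hook pair; a real port might use a retained register or backup domain instead of plain RAM:

```c
/* Hedged sketch of a custom marking implementation; the hook names are an
 * assumption and the "retained" storage here is only a placeholder.
 */
#include <stdbool.h>

#define RESUME_MAGIC 0xB0BAFE77UL

/* placeholder for storage that survives the suspend path */
static volatile unsigned long retained_mark;

void pm_s2ram_mark_set(void)
{
	retained_mark = RESUME_MAGIC;
}

bool pm_s2ram_mark_check_and_clear(void)
{
	bool resume = (retained_mark == RESUME_MAGIC);

	retained_mark = 0UL;
	return resume;
}
```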

Signed-off-by: Krzysztof Chruściński <krzysztof.chruscinski@nordicsemi.no>
2024-05-17 14:33:47 +02:00
Jérémy LOCHE - MAKEEN Energy
8ef8e8b497 arch: arm: rom_start relocation configuration
In order to support Linux rproc loading, some SoCs require
the boot vector and IRQ vectors to be placed into a defined
memory area for the MCU to boot.

This is necessary for NXP's IMX SoCs, for instance, but
can be leveraged by other SoCs that have multiple
zephyr,flash choices.

Signed-off-by: Jérémy LOCHE - MAKEEN Energy <jlh@makeenenergy.com>
2024-05-16 15:52:20 +02:00
Yong Cong Sin
a82a54cd38 arch: riscv: remove unnecessary cast
Remove unnecessary cast of `fp` into `uintptr_t`.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-16 09:20:19 +02:00
Yong Cong Sin
9a4698159d arch: riscv: reorder fatal message
Print the backtrace message after the registers.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-16 09:20:19 +02:00
Flavio Ceolin
d6d3c1098d xtensa: mmu: dup_table does not need parameter
The only page table duplicated is the kernel page table. This function
does not need a parameter.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-14 15:54:55 +02:00
Flavio Ceolin
0012725d85 xtensa: mmu: Avoid k_mem_domain_default duplication
We can use some extra bits available for the SW implementation to
save the original permissions and avoid duplicating the kernel page
tables for the default memory domain.

When duplicating the page table to a new domain, we just ensure
that the original map is restored.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-14 15:54:55 +02:00
Flavio Ceolin
e9fa729a6b xtensa: mmu: Fix macro to get ring from a pte
The macro was not masking the pte correctly and it was returning
a wrong value.
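
The class of bug, shown with made-up field positions (not the real Xtensa PTE layout):

```c
/* Illustrative only (the field layout here is made up): extracting a field
 * needs a mask as well as a shift, otherwise higher bits leak into the result.
 */
#define PTE_RING_MASK  0x000000C0U
#define PTE_RING_SHIFT 6

#define PTE_RING_BAD(pte)  ((pte) >> PTE_RING_SHIFT)                    /* unmasked */
#define PTE_RING_GOOD(pte) (((pte) & PTE_RING_MASK) >> PTE_RING_SHIFT)  /* masked   */
```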

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-14 15:54:55 +02:00
Flavio Ceolin
eb7d60dd67 xtensa: mmu: Remove duplicated macro
XTENSA_MMU_PTE was defined twice.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-14 15:54:55 +02:00
Flavio Ceolin
dcaceda39b xtensa: mmu: Simplify memory map
Simplify the logic around the shared attribute: check whether a
memory region should be shared only in the function that actually
maps the memory.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-14 15:54:55 +02:00
Nicolas Pitre
530b593275 arch: riscv: apply CONFIG_RISCV_MCAUSE_EXCEPTION_MASK to FPU code
Some implementations use bits outside of the mcause mask for other
purposes.
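
A hedged sketch of the masking, assuming the `csr_read()` helper from the RISC-V arch headers; the exact call site in the FPU code may differ:

```c
/* Hedged sketch: mask mcause before interpreting the exception code, since
 * some implementations use the upper bits for other purposes.
 */
#include <zephyr/arch/riscv/csr.h>

static unsigned long read_exception_cause(void)
{
	return csr_read(mcause) & CONFIG_RISCV_MCAUSE_EXCEPTION_MASK;
}
```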

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-05-14 09:32:39 +02:00
Najumon B.A
2803dcd564 arch: x86: remove limitation of number of cpu support in smp
Remove the limitation on the number of supported CPUs in the
x86 arch. Also add support for retrieving CPU information, such
as for hybrid cores.

Signed-off-by: Najumon B.A <najumon.ba@intel.com>
2024-05-13 16:07:11 -04:00
Nicolas Pitre
57305971d1 kernel: mmu: abstract access to page frame flags and address
Introduce z_page_frame_set() and z_page_frame_clear() to manipulate
flags. Obtain the virtual address using the existing
z_page_frame_to_virt(). This will make changes to the page frame
structure easier.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-05-13 16:04:40 -04:00
frei tycho
81e4b91bb5 arch: x86: avoided increments/decrements with side effects
- moved ++/-- before or after the value use

Signed-off-by: frei tycho <tfrei@baumer.com>
2024-05-09 15:44:54 +02:00
Hess Nathan
ed12a0cc35 coding guidelines: comply with MISRA Rule 11.8
modified parameter types to receive a const pointer when a
non-const pointer is not needed

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-09 10:28:44 +02:00
Debbie Martin
882238116e arch: arm: cortex-r: Add compiler tuning for Cortex-R82
Change the GCC toolchain configuration to make use of the Cortex-R82
target. When Cortex-R82 was added as a GCC toolchain option, the GCC
version of the Zephyr SDK did not support Cortex-R82 tuning. Zephyr was
therefore compiled for the Armv8.4-A architecture. Since Zephyr
SDK 0.15.0 (which updated GCC from 10.3.0 to 12.1.0) coupled with Zephyr
3.2, the Cortex-R82 target is supported.

The Armv8-R AArch64 architecture does not support the EL3 exception level.
EL3 support is therefore made conditional on Armv8-R vs Armv8-A.

Signed-off-by: Debbie Martin <Debbie.Martin@arm.com>
2024-05-07 17:57:05 -04:00
Yong Cong Sin
b32c5e2a60 arch: riscv: only use z_riscv_fatal_error_csf if CONFIG_EXCEPTION_DEBUG
Use `z_riscv_fatal_error_csf`, which expects the
callee-saved-registers pointer, only if `CONFIG_EXCEPTION_DEBUG`
is enabled; otherwise use `z_riscv_fatal_error`, as there can
be garbage in `a2`.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-07 09:34:32 +02:00
frei tycho
4b7e09230c arch: x86: coding guidelines: cast unused arguments to void
- added missing ARG_UNUSED
- added void to cast where ARG_UNUSED macro is not available
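
A minimal illustration of the pattern (not taken from the x86 code):

```c
/* Illustrative usage of ARG_UNUSED for a deliberately ignored parameter */
#include <zephyr/kernel.h>

static void timer_expiry(struct k_timer *timer)
{
	ARG_UNUSED(timer);
	/* periodic work that does not need the timer handle */
}
```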

Signed-off-by: frei tycho <tfrei@baumer.com>
2024-05-06 22:53:13 +01:00
frei tycho
30681799e8 coding guidelines: comply with MISRA C:2012 Rule 17.7 in arch
- added explicit cast to void when returned value is expectedly ignored

Signed-off-by: frei tycho <tfrei@baumer.com>
2024-05-04 13:00:14 +03:00
Jonathon Penix
274bd59283 arch: arm: cortex-m: Change character used to mark immediate operand
Change the character used to indicate immediate operands from '$' to '#'
to resolve an "invalid instruction" error when building with clang.

For arm, binutils allows either '#' or '$' to indicate immediate operands.
clang seems to accept '$' for arm in other instances
(my build accepts 'subs r0, r0, $0x02', for example), but in this case it
produces an error that this is an invalid instruction due to the "$0x02"
operand.

Given clang's inconsistent behavior, I'm guessing this is a bug in clang
somewhere, but:

  1. '#' for immediate operands seems to be more standard for arm in
     general and seems to be what is used throughout the rest of Zephyr's
     arm asm code.
  2. Switching out '$' for '#' shouldn't negatively impact other
     toolchains.

As such, switch out the character used to unblock clang builds until this
can be fixed in clang.

Signed-off-by: Jonathon Penix <jpenix@quicinc.com>
2024-05-03 07:28:52 -04:00
Sebastian Bøe
850cd54e67 arm: fatal: log which IRQn is triggering on spurious IRQs
Log which IRQn line triggered on spurious IRQs, as this makes them
much easier to debug.

The new log with this patch looks like:

<err> os: Unhandled IRQn: 227
<err> os: >>> ZEPHYR FATAL ERROR 1: Unhandled interrupt on CPU 0
<err> os: Current thread: 0x20032c20 (unknown)
<err> os: Halting system

Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
2024-05-02 22:42:30 +01:00
Alberto Escolar Piedras
320f23ef96 arch posix fuzzing: Move kconfig options to sample
Move the Kconfig options used to configure the interrupt
and wait time to the sample that uses them instead of
having them in the architecture code.
These options are very particular to this sample and not
really an API.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-05-02 20:46:03 +03:00
Alberto Escolar Piedras
df5440fc59 arch posix: Correct ARCH_POSIX_LIBFUZZER kconfig help message
This option can now also be used with native_sim
and seems to work fine with both 32-bit and 64-bit targets.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-05-02 20:46:03 +03:00
Alberto Escolar Piedras
f5553004b0 sample fuzzer: Move fuzzer specific code to sample and fix for native_sim
Move the LLVM fuzzing specific code out of the board main
file and into the sample.
That way we avoid needing to duplicate it for native_sim and
avoid having a very ad hoc interface between the fuzzer test
and runner code.

Also ensure it works for native_sim and not just native_posix.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-05-02 20:46:03 +03:00
Hess Nathan
aa3e9c1d1e coding guidelines: comply with MISRA Rule 2.2
- avoided dead stores

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-01 10:52:30 -04:00
Hess Nathan
cbd9b37ef5 coding guidelines: comply with MISRA Rule 20.9
- avoided using undefined macros in #if expressions

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-01 13:10:29 +02:00
Patryk Duda
4fe5ac9248 arch: posix: Undefine operating system specific macros for native_sim
Compilers predefine system-specific macros which carry information about
the compiler, target architecture and operating system. They provide basic
compiler-dependent information like the size of types, their maximal and
minimal values, etc. This allows writing common libc headers for multiple
architectures and operating systems.

These macros also allow code to determine the target operating system.
This is a problem when compiling code of modules that support
multiple operating systems (e.g. cryptography libraries).

To avoid confusion we shouldn't leak host operating system macros (e.g.
__linux__, __linux, linux, etc.) when compiling for the native_sim board.

Unfortunately, there is no single universal switch that disables all
operating system macros:
- '-undef' removes also architecture-related macros
- '--target' is only available for Clang compiler

This patch uses '-include' option to include file that undefines all
well-known operating system macros.

Run 'gcc -dM -E - < /dev/null | sort' to get full list of predefined
macros.
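
A hedged sketch of the kind of header passed via '-include'; the actual file in the tree undefines a longer list of host OS macros:

```c
/* Hedged sketch only: undefine host operating system macros so module code
 * cannot mistake the native_sim build for a host OS build.
 */
#undef __linux__
#undef __linux
#undef linux
#undef __unix__
#undef __unix
#undef unix
```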

Signed-off-by: Patryk Duda <patrykd@google.com>
2024-04-30 14:30:30 -04:00
Hess Nathan
a1fc3b78ef coding guidelines: comply with MISRA Rule 15.6
- added missing braces

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-04-30 16:20:38 +02:00
François Baldassari
3f78ca9873 ARC: fault: Fix uninitialized memory access
Found via static analysis. In the fault path, when checking for stack
overflows, if CONFIG_MULTITHREADING is not set, `guard_end` is left
uninitialized and is subsequently used in a comparison.

The solution is to simply return `false` in this configuration as stack
guards are not configured in the first place.
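
A hedged sketch of the shape of the fix (names are illustrative, not the actual ARC fault-handler code):

```c
/* Hedged sketch: without CONFIG_MULTITHREADING there are no stack guards,
 * so report "no overflow" instead of comparing an uninitialized bound.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

static bool within_stack_guard(uintptr_t fault_addr, uintptr_t guard_start,
			       size_t guard_size)
{
#ifdef CONFIG_MULTITHREADING
	uintptr_t guard_end = guard_start + guard_size;

	return (fault_addr >= guard_start) && (fault_addr < guard_end);
#else
	(void)fault_addr;
	(void)guard_start;
	(void)guard_size;
	return false;
#endif
}
```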

Signed-off-by: François Baldassari <francois@memfault.com>
2024-04-29 17:39:57 +01:00
Flavio Ceolin
d8635905e9 xtensa: mmu: Avoid unnecessary mapping
When duplicating a page table, we don't need to copy
the mapping to the kernel l1 page table virtual address.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-04-27 11:05:45 +03:00
Marcin Szymczyk
0ea7bf19e4 arch: riscv: irq_manage: support ISR_OFFSET in dynamic IRQs
`CONFIG_RISCV_RESERVED_IRQ_ISR_TABLES_OFFSET` should be taken into
account in `arch_irq_connect_dynamic`, the same as is done in the
`ARCH_IRQ_CONNECT` macro.
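
A hedged sketch of the idea, with simplified, hypothetical helper names (not the exact arch code):

```c
/* Hedged sketch: apply the reserved-IRQ offset to the table index in the
 * dynamic path, mirroring what ARCH_IRQ_CONNECT does at build time.
 */
void sw_isr_table_set(unsigned int idx, void (*isr)(const void *),
		      const void *param);  /* hypothetical helper */

void connect_dynamic(unsigned int irq, void (*isr)(const void *),
		     const void *param)
{
	unsigned int idx = irq + CONFIG_RISCV_RESERVED_IRQ_ISR_TABLES_OFFSET;

	sw_isr_table_set(idx, isr, param);
}
```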

Signed-off-by: Marcin Szymczyk <marcin.szymczyk@nordicsemi.no>
2024-04-25 15:03:23 +02:00
Wilfried Chauveau
00ddd8a81a arch: arm: cortex_m: enable interrupts before entering application’s main
This also fixes a typo in `z_arm_switch_to_main_no_multithreading` that made
it unlock IRQs instead of locking them when main returns.

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
2024-04-25 07:54:47 -04:00
Pieter De Gendt
ff6985766b arch: posix: Select at least C11 standard
Replace the global CSTD property with the CSTD kconfig option to select
at least C11 standard.

Signed-off-by: Pieter De Gendt <pieter.degendt@basalte.be>
2024-04-25 09:54:39 +00:00
Yong Cong Sin
30b122b3f0 arch: riscv: print callee-saved-registers in fatal error
Print callee-saved registers during fatal error
to help with debugging.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-04-24 15:57:40 -04:00
Yong Cong Sin
7398831884 arch: riscv: implement frame-pointer based stack unwinding
Influenced heavily by the RISCV64 stack unwinding
implementation in the Linux kernel.

`CONFIG_RISCV_EXCEPTION_STACK_TRACE` can be enabled by
configuring the following Kconfigs:

```prj.conf
CONFIG_DEBUG_INFO=y
CONFIG_EXCEPTION_STACK_TRACE=y
CONFIG_OVERRIDE_FRAME_POINTER_DEFAULT=y
CONFIG_OMIT_FRAME_POINTER=n
```
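
For orientation, a hedged sketch of the frame-pointer walk this enables; the frame-record layout shown ({fp, ra} just below the incoming frame pointer) is an assumption borrowed from the Linux-style approach, not necessarily the exact Zephyr code:

```c
/* Hedged sketch of frame-pointer unwinding; offsets and validity checks
 * are illustrative only.
 */
#include <stdint.h>

struct stackframe {
	uintptr_t fp;
	uintptr_t ra;
};

static void unwind(uintptr_t fp, int max_frames, void (*emit)(uintptr_t ra))
{
	for (int i = 0; i < max_frames && fp != 0U; i++) {
		const struct stackframe *frame =
			(const struct stackframe *)fp - 1;

		emit(frame->ra);
		fp = frame->fp;
	}
}
```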

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-04-20 13:54:43 -04:00
Wilfried Chauveau
b621802614 arch: arm: cortex_m: cpu_idle: Add missing irq masking/unmasking
This was missed during conversion from ASM to C.

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
2024-04-17 15:00:25 +02:00
Wilfried Chauveau
fb6ab560a5 arch: arm: cortex_m: fix inverted logic in cpu_idle
This mistake was introduced when converting from ASM to C.
This change also restores the associated comment from the ASM source.

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
2024-04-17 15:00:25 +02:00
Wilfried Chauveau
42036cdbca arch: arm: cortex_m: Only trigger context switch if thread is preemptible
This is a fix for #61761, where a cooperative task is switched from at the
end of an exception. A cooperative thread should only be switched from if
the thread exits the ready state.

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
2024-04-15 09:09:28 -07:00
Wilfried Chauveau
773739a52a arch: arm: cortex_m: move part of swap_helper to C
Asm is notoriously harder to maintain than C and requires core-specific
adaptation, which further impairs the readability of the code.

This change significantly enhances the maintainability & portability of the
code at the expense of an indirection (one outlined function).

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
2024-04-15 09:09:28 -07:00
Wilfried Chauveau
65ec07fe33 arch: arm: cortex_m: use cmsis API rather than assembly
Asm is notoriously harder to maintain than C and requires core-specific
adaptation, which further impairs the readability of the code.

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
2024-04-15 09:09:28 -07:00
Wilfried Chauveau
4760aad353 arch: arm: cortex_m: Convert cpu_idle from ASM to C
Asm is notoriously harder to maintain than C and requires core-specific
adaptation, which further impairs the readability of the code.

This change reduces the need for core specific conditional compilation and
unifies irq locking code.

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>

2024-04-15 09:09:28 -07:00
Wilfried Chauveau
f11027df80 arch: arm: cortex_m: Use outlined arch_irq_(un)lock in assembly
Asm is notoriously harder to maintain than C and requires core-specific
adaptation, which further impairs the readability of the code.

This change reduces the need for core specific conditional compilation.

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
2024-04-15 09:09:28 -07:00
Wilfried Chauveau
8e55468af2 arch: arm: cortex_m: use C rather than asm in isr_wrapper & exc_exit
Asm is notoriously harder to maintain than C and requires core-specific
adaptation, which further impairs the readability of the code.

This is a first step in reducing the amount of ASM in arch/arm/cortex_m

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
2024-04-15 09:09:28 -07:00
Wilfried Chauveau
85feaa60e2 arch: arm: cortex_m: Use r* register names rather than v*
v* register aliases are uncommon and it can be surprising to find them.
This change makes use of r* register names for a more consistent
experience of reading assembly.

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
2024-04-15 09:09:28 -07:00
Wilfried Chauveau
4c3f6ea5b2 arch: arm: cortex_m: Document why __aeabi_read_tp impl requires ASM impl
This method has a special ABI requirement that requires the use of ASM.
This change documents why this is required & adds a reference to the
related specification.

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
2024-04-15 09:09:28 -07:00
Kai Vehmanen
7fd0a7a5eb soc: intel_adsp: replace icache ISR workaround with custom idle solution
A workaround to avoid icache corruption was added in commit be881d4cf2
("arch: xtensa: add isync to interrupt vector").

This patch implements a different workaround by adding custom logic to
idle entry on affected Intel ADSP platforms. To safely enter "waiti"
when clock gating is enabled, we need to ensure icache is both unlocked
and invalidated upon entry.

Signed-off-by: Kai Vehmanen <kai.vehmanen@linux.intel.com>
2024-04-15 16:26:39 +02:00
Aleksandar Cecaric
0144ed6b63 arch: riscv: update coredump for 64BIT RISCV
Add RISCV 64bit registers and parse them in coredump script.

Signed-off-by: Aleksandar Cecaric <aleksandar.cecaric@nextsilicon.com>
2024-04-13 07:03:23 -04:00
Alberto Escolar Piedras
4f7b144ef6 arch posix: When building for the native_simulator only link ASAN once
Only request the linker to link ASAN in the final stage, not
during the partial linking stage.
This fixes a link issue when building with llvm.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-04-12 15:03:35 +02:00
Alberto Escolar Piedras
b59b21f8bb arch posix: pass -fsanitize-recover=all also to native_simulator build
If the CONFIG_ASAN_RECOVER option is set, also pass
-fsanitize-recover=all to the build of the native simulator
built files.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-04-12 15:03:35 +02:00
Guennadi Liakhovetski
34ab1a1e51 llext: xtensa: add support for in-place relocatable extensions
Currently LLEXT on Xtensa supports relocatable extensions, linked for
a specific address range, while relocation itself takes place in a
temporary buffer. For this, section addresses have to be set correctly
by the linker for their target locations.

This commit adds support for relocatable extensions, built without
using specific memory addresses and run at the same addresses, where
they are loaded.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2024-04-11 11:35:24 -05:00
Cedric Lescop
7b1d9d6166 llext: Full ARM ELF relocation support
Adds support for all relocation types produced by GCC
on the ARM platform using partial linking (-r flag) or
shared linking (-fpic and -shared flags).

Signed-off-by: Cedric Lescop <cedric.lescop@se.com>
2024-04-10 14:13:15 -04:00
Daniel Leung
027a1c30cd x86: add support for memory mapped stack for threads
This adds the necessary bits to enable memory mapping thread
stacks on both x86 and x86_64. Note that currently these do
not support multi-level mappings (e.g. demand paging and
running in virtual address space: qemu_x86/atom/virt board)
as the mapped stacks require actual physical addresses.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-04-10 07:44:27 -04:00
Daniel Leung
d0a90a0b33 kernel: add the ability to memory map thread stacks
This introduces support for memory mapped thread stacks,
where each thread stack is mapped into virtual memory
address space with two guard pages to catch
under-/over-flowing the stack. This is just on the kernel
side. Additional architecture code is required to fully
support this feature.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-04-10 07:44:27 -04:00
Daniel Leung
94997a026f x86: correct size for stack bound check for privileged stack
Previous commit changed the privileged stack size to be using
kconfig CONFIG_PRIVILEGED_STACK_SIZE instead of simply
CONFIG_MMU_PAGE_SIZE. However, the stack bound check function
was still using the MMU page size, so fix that.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-04-10 07:44:27 -04:00
Daniel Leung
ac5835565b x86: synchronize usage of CONFIG_X86_STACK_PROTECTION
Most places use CONFIG_X86_STACK_PROTECTION, but there are some
places using CONFIG_HW_STACK_PROTECTION. So synchronize all
to use CONFIG_X86_STACK_PROTECTION instead.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-04-10 07:44:27 -04:00
Daniel Leung
3d39864900 x86: do not advertise demand paging support for x86_64
x86_64 does not currently support demand paging so don't
advertise it.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-04-10 07:44:27 -04:00
Daniel Leung
d8614afd8d x86: gen_gdt: remove extra unnecessary parens
Pylint complains so we fix.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-04-10 07:44:27 -04:00
Guennadi Liakhovetski
2ccf775396 llext: add support for relocatable objects on Xtensa
Some toolchains cannot create shared objects for Xtensa; with them we
have to use relocatable objects. Add support for them to llext.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2024-04-05 21:54:47 -05:00
Filip Kokosinski
ab84989a12 arch/riscv: remove the Kconfig.core file
This commit removes the `Kconfig.core` file. It's been largely unused, and
the only symbol it provides (`RISCV_CORE_E31`) overlaps with the SoC-layer
provided `SOC_SERIES_SIFIVE_FREEDOM_FE300`.

As of date, the only SoC that uses the E31 core in Zephyr is the FE310 SoC.

Signed-off-by: Filip Kokosinski <fkokosinski@antmicro.com>
2024-04-05 16:46:01 +03:00
Sylvio Alves
e587249704 soc: espressif: esp32: update to hal_espressif v5.1
Modify and reorganize SoC to meet updated hal.

Signed-off-by: Sylvio Alves <sylvio.alves@espressif.com>
Signed-off-by: Lucas Tamborrino <lucas.tamborrino@espressif.com>
2024-04-05 13:39:53 +02:00
Daniel Leung
d34351d994 kernel: align thread stack size declaration
When thread stack is defined as an array, K_THREAD_STACK_LEN()
is used to calculate the size for each stack in the array.
However, standalone thread stack has its size calculated by
Z_THREAD_STACK_SIZE_ADJUST() instead. Depending on the arch
alignment requirement, they may not be the same... which
could cause some confusion. So align them both to use
K_THREAD_STACK_LEN().
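
For context, a small usage sketch of the two declaration forms that should now be sized consistently (sizes here are arbitrary):

```c
/* Usage sketch: a standalone stack and a stack array, both of which should
 * now get the same per-stack size adjustment via K_THREAD_STACK_LEN().
 */
#include <zephyr/kernel.h>

#define MY_STACK_SIZE 1024

K_THREAD_STACK_DEFINE(single_stack, MY_STACK_SIZE);
K_THREAD_STACK_ARRAY_DEFINE(stack_array, 4, MY_STACK_SIZE);
```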

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-03-27 19:27:10 -04:00
Daniel Leung
6cd7936f57 kernel: align kernel stack size declaration
When kernel stack is defined as an array, K_KERNEL_STACK_LEN()
is used to calculate the size for each stack in the array.
However, standalone kernel stack has its size calculated by
Z_KERNEL_STACK_SIZE_ADJUST() instead. Depending on the arch
alignment requirement, they may not be the same... which
could cause some confusion. So align them both to use
K_KERNEL_STACK_LEN().

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-03-27 19:27:10 -04:00
Daniel Leung
efe30749de kernel: rename Z_THREAD_STACK_BUFFER to K_THREAD_STACK_BUFFER
Simple rename to align the kernel naming scheme. This is being
used throughout the tree, especially in the architecture code.
As this is not a private API internal to kernel, prefix it
appropriately with K_.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-03-27 19:27:10 -04:00
Daniel Leung
b69d2486fe kernel: rename Z_KERNEL_STACK_BUFFER to K_KERNEL_STACK_BUFFER
Simple rename to align the kernel naming scheme. This is being
used throughout the tree, especially in the architecture code.
As this is not a private API internal to kernel, prefix it
appropriately with K_.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-03-27 19:27:10 -04:00