This patch introduces `rt_smp_call_request` API to handle queued
requests across cores with user provided data buffer, which provides a
way to request IPI through a non-blocking pattern.
It also resolves several issues in the old implementation:
- Multiple requests from different cores cannot be queued in the work
object of the target core.
- Data race on `rt_smp_work` of the same core. If multiple requests
arrive in turn, or if the call is still in use by the target CPU when a
new request comes in, the value will be overwritten.
- Memory vulnerability. The `rt_smp_event` is allocated on the stack,
yet the caller may not wait until the call is done, so the target core
can end up reading memory that is no longer valid.
- API naming problem. We do not actually provide a way to issue an IPI
to ANY core in the mask; what the API does follows the MANY pattern.
- FUNC_IPI registration to the PIC.
Changes:
- Declared and configured the new `RT_SMP_CALL_IPI` to support
functional IPIs for task requests across cores.
- Replaced the single `rt_smp_work` array with `call_req_cores` to
manage per-core call requests safely.
- Added `_call_req_take` and `_call_req_release` functions for atomic
handling of request lifetimes, preventing data race conditions.
- Replaced single event handling with a queue-based approach
(`call_queue`) for efficient multi-request processing per core.
- Introduced `rt_smp_call_ipi_handler` to process queued requests,
reducing IPI contention by only sending new requests when needed.
- Implemented `_smp_call_remote_request` to handle remote requests
with specific flags, enabling more flexible core-to-core task
signaling.
- Refined `rt_smp_call_req_init` to initialize and track requests
with atomic usage flags, mitigating potential memory vulnerabilities
(a usage sketch follows this list).
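For illustration, a minimal usage sketch of the non-blocking request
pattern described above. The request type, function signatures, and
flag value are assumptions based on this description, not a verbatim
copy of the patch:

```c
#include <rtthread.h>

/* Sketch only: signatures are assumed from the description above. */
static struct rt_smp_call_req ipi_req; /* user-provided buffer; must outlive the call */

static void remote_work(void *param)
{
    /* Runs on the target core once its queued request is processed. */
    rt_kprintf("smp call handled on cpu %d\n", rt_hw_cpu_id());
}

static void issue_request(int target_cpu)
{
    /* Bind the callback and its data to the caller-owned request object. */
    rt_smp_call_req_init(&ipi_req, remote_work, RT_NULL);

    /* Queue the request on the target core and return immediately. Since
     * the pattern is non-blocking, ipi_req must stay valid (here: static)
     * until the callback has run. */
    rt_smp_call_request(target_cpu, 0 /* flags */, &ipi_req);
}
```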
Signed-off-by: Shell <smokewood@qq.com>
These changes introduce the rt_interrupt_context family, providing a
mechanism for managing nested interrupts. The context management
ensures proper storage and retrieval of interrupt states, improving
reliability in nested interrupt scenarios by enabling context tracking
across different interrupt levels. This enhancement is essential for
platforms where nested interrupt handling is crucial, such as in real-
time or multi-threaded applications.
Changes:
- Defined rt_interrupt_context structure with context and node fields
in `rtdef.h` to support nested interrupts.
- Added rt_slist_pop function in `rtservice.h` for simplified node
removal in singly linked lists.
- Declared rt_interrupt_context_push, rt_interrupt_context_pop, and
rt_interrupt_context_get functions in `rtthread.h` to manage the
interrupt/exception stack.
- Modified AArch64 CPU support in `cpuport.h` to include
rt_hw_show_register for debugging registers.
- Refactored `_rt_hw_trap_irq` in `trap.c` for context-aware IRQ
handling, with stack push/pop logic to handle nested contexts (see the
sketch after this list).
- Implemented interrupt context push, pop, and retrieval logic in
`irq.c` to manage context at the CPU level.
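To make the control flow concrete, here is a hedged sketch of how a
trap handler could use these helpers; the parameter and field names are
assumptions drawn from the change list above, not the literal `trap.c`
code:

```c
#include <rtthread.h>

/* Sketch only: the real _rt_hw_trap_irq differs in detail. */
static void trap_irq_sketch(void *saved_regs)
{
    struct rt_interrupt_context ctx;

    ctx.context = saved_regs;        /* register frame of this nesting level */
    rt_interrupt_context_push(&ctx); /* push onto the per-CPU interrupt stack */

    /* ... dispatch the pending interrupt; a higher-priority IRQ nesting
     * here pushes its own context on top and pops it before returning,
     * so rt_interrupt_context_get() always reflects the innermost level ... */

    rt_interrupt_context_pop();      /* restore the outer nesting level */
}
```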
Signed-off-by: Shell <smokewood@qq.com>
When RT_USING_DEBUG is disabled, variables used only in RT_ASSERT
statements become unused, triggering -Wunused-but-set-variable compiler
warnings. These variables are essential for runtime assertions in debug
builds but appear unused in release builds.
Example:
- Variables used in RT_ASSERT(var != RT_NULL) checks
- Affects multiple drivers and components using RT_ASSERT
This is a general cleanup to keep builds warning-free without affecting
functionality; a minimal illustration of the pattern follows.
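For illustration, a minimal example of the warning pattern this cleanup
targets; the device-API calls are placeholders rather than code taken
from an affected driver:

```c
#include <rtthread.h>

static void open_console_sketch(void)
{
    rt_device_t dev;
    rt_err_t    result;

    dev = rt_device_find("uart0");
    RT_ASSERT(dev != RT_NULL);

    result = rt_device_open(dev, RT_DEVICE_OFLAG_RDWR);
    RT_ASSERT(result == RT_EOK); /* with RT_USING_DEBUG disabled, RT_ASSERT
                                  * expands to nothing, so `result` is set
                                  * but never read and the compiler warns:
                                  * -Wunused-but-set-variable */
}
```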
This patch addresses a use-after-free (UAF) vulnerability in
`sys_mount`. The issue was caused by premature deallocation of memory,
which could lead to crashes or undefined behavior when a user requests
a mount.
Changes made:
- Moved the `rt_free(copy_source)` call so that it happens only after
the operations that use `copy_source` have completed, preventing
premature deallocation of memory (sketched below).
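A simplified sketch of the pattern, not the literal `sys_mount` code;
only `copy_source` and `rt_free()` come from the change above, while
`do_mount()` and the parameters are hypothetical stand-ins:

```c
#include <rtthread.h>

extern int do_mount(const char *src, const char *dst, const char *fs); /* hypothetical */

static int mount_sketch(const char *source, const char *target, const char *fs_name)
{
    char *copy_source;
    int   err;

    copy_source = rt_strdup(source); /* kernel-side copy of the caller's string */
    if (copy_source == RT_NULL)
        return -RT_ENOMEM;

    /* Before the fix, rt_free(copy_source) sat roughly here, so the call
     * below operated on freed memory (use-after-free). */

    err = do_mount(copy_source, target, fs_name);

    rt_free(copy_source); /* after the fix: free only once every user of
                           * the buffer has finished with it */
    return err;
}
```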
Signed-off-by: Shell <smokewood@qq.com>
Prepare for the next release by removing the compatibility code for
v5.1.0 and earlier.
Changes:
- Remove the compatibility macros and Kconfig options for the old
`struct rt_thread` layout.
Signed-off-by: Shell <smokewood@qq.com>