We need an API that fixes up driver loading automatically when a second
driver is requested during the probe process, because we cannot keep
track of every driver-to-driver dependency on different SoCs.
Signed-off-by: GuEe-GUI <2991707448@qq.com>
Drivers can manage their own IDs without having to concern themselves
with registering/unregistering them in the system.
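A minimal sketch of the idea, assuming an IDA-style allocator; the names
`rt_dm_ida`, `RT_DM_IDA_INIT`, `rt_dm_ida_alloc` and `rt_dm_ida_free` are
assumptions for illustration and may not match the final API:

```c
#include <rtthread.h>
#include <drivers/core/dm.h>

/* One allocator instance shared by every device of this driver
 * (allocator type and macros are assumptions, see above). */
static struct rt_dm_ida uart_ida = RT_DM_IDA_INIT(UART);

static rt_err_t uart_probe(void)
{
    int id = rt_dm_ida_alloc(&uart_ida); /* driver-local, no global registry */

    if (id < 0)
    {
        return -RT_EEMPTY;
    }

    rt_kprintf("uart%d probed\n", id);

    return RT_EOK;
}

static void uart_remove(int id)
{
    rt_dm_ida_free(&uart_ida, id); /* give the ID back on teardown */
}
```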
Link: https://github.com/RT-Thread/rt-thread/issues/9534
Signed-off-by: GuEe-GUI <2991707448@qq.com>
This patch introduces the `rt_smp_call_request` API to handle queued
requests across cores with a user-provided data buffer, which provides a
way to request an IPI in a non-blocking pattern (a usage sketch follows
the change list below).
It also resolves several issues in the old implementation:
- Multiple requests from different cores cannot be queued in the work
  object of the target core.
- Data race on `rt_smp_work` of the same core. If multiple requests
  arrive in turn, or if the call is still in use by the target CPU while
  a new request comes in, the value will be overwritten.
- Memory vulnerability. The `rt_smp_event` is allocated on the stack,
  even though the caller may not wait until the call is done.
- API naming problem. We don't actually provide a way to issue an IPI to
  ANY core in the mask; what the API does is aligned with the MANY
  pattern.
- FUNC_IPI registration to the PIC.
Changes:
- Declared and configured the new `RT_SMP_CALL_IPI` to support
functional IPIs for task requests across cores.
- Replaced the single `rt_smp_work` array with `call_req_cores` to
manage per-core call requests safely.
- Added `_call_req_take` and `_call_req_release` functions for atomic
handling of request lifetimes, preventing data race conditions.
- Replaced single event handling with a queue-based approach
(`call_queue`) for efficient multi-request processing per core.
- Introduced `rt_smp_call_ipi_handler` to process queued requests,
reducing IPI contention by only sending new requests when needed.
- Implemented `_smp_call_remote_request` to handle remote requests
with specific flags, enabling more flexible core-to-core task
signaling.
- Refined `rt_smp_call_req_init` to initialize and track requests
with atomic usage flags, mitigating potential memory vulnerabilities.
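As noted above, a minimal usage sketch of the new API under this patch's
description; the exact flag values are not shown here, so 0 is passed:

```c
#include <rtthread.h>

/* Runs on the target core, in IPI context. */
static void remote_flush(void *data)
{
    rt_kprintf("core %d flushed, arg = %d\n", rt_hw_cpu_id(), *(int *)data);
}

/* The request object and its data buffer are user provided and must stay
 * valid until the callee releases the request, so they cannot live in a
 * stack frame that may return first. */
static struct rt_smp_call_req flush_req;
static int flush_arg = 42;

void kick_core(int cpu)
{
    rt_smp_call_req_init(&flush_req, remote_flush, &flush_arg);

    /* Non-blocking: enqueue on the target core and return at once; a new
     * IPI is raised only if the target is not already draining its queue. */
    rt_smp_call_request(cpu, 0, &flush_req);
}
```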
Signed-off-by: Shell <smokewood@qq.com>
Add a PCI API that PCI/PCIe driver writers can use to get a resource of
the current device by flag. There are three flags:
1. PCI_BUS_REGION_F_MEM: memory space
2. PCI_BUS_REGION_F_IO: I/O space
3. PCI_BUS_REGION_F_PREFETCH: prefetchable memory
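A hedged sketch of how a driver might call it; the accessor name
`rt_pci_device_get_resource` and its signature are assumptions for
illustration, only the three flags come from this patch:

```c
#include <rtthread.h>
#include <drivers/pci.h>

/* Assumed signature for this sketch only: fetch the index-th resource of
 * `pdev` whose space matches `flags`. */
rt_err_t rt_pci_device_get_resource(struct rt_pci_device *pdev,
                                    rt_ubase_t flags, int index,
                                    rt_uint64_t *out_base,
                                    rt_uint64_t *out_size);

static void probe_bars(struct rt_pci_device *pdev)
{
    rt_uint64_t base, size;

    /* First resource living in memory space. */
    if (rt_pci_device_get_resource(pdev, PCI_BUS_REGION_F_MEM, 0,
                                   &base, &size) == RT_EOK)
    {
        rt_kprintf("MEM: base = 0x%llx, size = 0x%llx\n", base, size);
    }

    /* First resource living in I/O space. */
    if (rt_pci_device_get_resource(pdev, PCI_BUS_REGION_F_IO, 0,
                                   &base, &size) == RT_EOK)
    {
        rt_kprintf("IO: base = 0x%llx, size = 0x%llx\n", base, size);
    }
}
```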
Fix some code style issues and the init for V2M and ITS.
V2M provides the PCI MSI/MSI-X support for GICv2.
ITS provides the PCI MSI/MSI-X support for GICv3/v4.
Signed-off-by: GuEe-GUI <2991707448@qq.com>
A PIC may be freed because of some errors found during debugging.
We should remove it from the PIC list when that happens, or some
undefined behavior will follow.
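A minimal sketch of the fix, assuming PICs hang on a global list through
an `rt_list_t` member named `list`:

```c
#include <rtthread.h>
#include <drivers/pic.h>

static struct rt_spinlock pic_lock;

/* Unlink the PIC before freeing it, so no walker of the PIC list can
 * ever reach freed memory (the list member name is an assumption). */
static void pic_destroy(struct rt_pic *pic)
{
    rt_base_t level = rt_spin_lock_irqsave(&pic_lock);

    rt_list_remove(&pic->list);

    rt_spin_unlock_irqrestore(&pic_lock, level);

    rt_free(pic);
}
```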
Signed-off-by: GuEe-GUI <2991707448@qq.com>
The OFW parsing should:
1. Check that obj_name equals the current node's rt_data.
2. Find the next object name.
3. Go to step 2 until obj_name equals the cmp_cell's obj_name.
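A self-contained sketch of this loop; the types and the
`next_object_name()` helper are simplified stand-ins, not the real OFW
API:

```c
#include <rtthread.h>

/* Simplified stand-ins for the real OFW types, for this sketch only. */
struct node     { const char *rt_data;  };
struct cmp_cell { const char *obj_name; };

/* Hypothetical iterator over the remaining object names, RT_NULL when
 * exhausted. */
const char *next_object_name(void);

static rt_bool_t ofw_match(struct node *np, struct cmp_cell *cell,
                           const char *obj_name)
{
    /* Step 1: obj_name must equal the current node's rt_data. */
    if (rt_strcmp(obj_name, np->rt_data) != 0)
    {
        return RT_FALSE;
    }

    /* Steps 2 and 3: advance through the object names until obj_name
     * equals the cmp_cell's obj_name. */
    while (rt_strcmp(obj_name, cell->obj_name) != 0)
    {
        if ((obj_name = next_object_name()) == RT_NULL)
        {
            return RT_FALSE; /* ran out of names without a match */
        }
    }

    return RT_TRUE;
}
```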
Signed-off-by: GuEe-GUI <2991707448@qq.com>
When a driver calls an API whose return value is a pointer and gets
RT_NULL, it cannot know why it got RT_NULL.
Some APIs return RT_NULL not because of an internal error but because the
feature is simply not supported on this platform, while the driver can
still work fine; such an API can return (RT_NULL + -RT_EEMPTY) to the
driver.
On the other hand, the driver can choose its behavior by error number:
when the API returns -RT_EBUSY, the driver can wait for a moment and retry;
when the API returns -RT_ENOSYS, the driver can try the next mode or
request name.
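A minimal sketch of the convention; `ERR_PTR`, `PTR_ERR` and `IS_ERR` are
illustrative helpers mirroring the (RT_NULL + -errno) trick above, not an
existing RT-Thread API, and a Linux-style error window is assumed:

```c
#include <rtthread.h>

#define ERR_PTR(err) ((void *)(rt_base_t)(err))   /* RT_NULL + err     */
#define PTR_ERR(ptr) ((rt_err_t)(rt_base_t)(ptr)) /* decode back       */
#define IS_ERR(ptr)  ((rt_ubase_t)(ptr) >= (rt_ubase_t)-4095)

static void *request_clock(const char *name)
{
    (void)name;

    /* Not an internal error: this platform just has no such clock. */
    return ERR_PTR(-RT_EEMPTY);
}

static void driver_init(void)
{
    void *clk = request_clock("apb");

    if (IS_ERR(clk))
    {
        switch (PTR_ERR(clk))
        {
        case -RT_EBUSY:  /* wait for a moment, then retry */     break;
        case -RT_ENOSYS: /* try the next mode or request name */ break;
        case -RT_EEMPTY: /* not supported here; keep working */  break;
        }
    }
}
```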
Signed-off-by: GuEe-GUI <wusongjie@rt-thread.com>