1. Disk and block device management.
2. Support automatic partition probing.
3. Support DFS and user-mode fops/ioctl (see the sketch after this list).
4. Add a command for showing block device info.
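For illustration, a minimal sketch of the user-mode fops/ioctl path; the device path `/dev/sd0` is an assumption, and `RT_DEVICE_CTRL_BLK_GETGEOME` is RT-Thread's existing geometry control command:

```c
/* Sketch: query a block device's geometry through DFS from user mode.
 * "/dev/sd0" is an assumed device path used only for illustration. */
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <rtthread.h>
#include <rtdevice.h> /* struct rt_device_blk_geometry, RT_DEVICE_CTRL_BLK_GETGEOME */

static void blk_info_example(void)
{
    struct rt_device_blk_geometry geo;
    int fd = open("/dev/sd0", O_RDWR);

    if (fd < 0)
        return;

    /* The ioctl is routed through DFS to the block device's control op */
    if (ioctl(fd, RT_DEVICE_CTRL_BLK_GETGEOME, &geo) == 0)
    {
        rt_kprintf("sectors: %lu, bytes/sector: %lu\n",
                   (unsigned long)geo.sector_count,
                   (unsigned long)geo.bytes_per_sector);
    }

    close(fd);
}
```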
Signed-off-by: GuEe-GUI <2991707448@qq.com>
1. Return OK by default when the input is NULL (for clocks that a device does not require).
2. Support object parsing in OFW.
3. Support automatic fixup of CLK dependencies.
4. Fix up rt_clk_array_prepare_enable and rt_clk_array_disable_unprepare (usage sketched below).
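A short sketch of the intended consumer-side usage; the getter name `rt_clk_get_array`, the header path, and the probe shape are assumptions based on common clk-framework patterns, not necessarily the exact API here:

```c
/* Sketch: prepare/enable every clock a device depends on, then release
 * them on teardown. Getter name and header path are assumptions. */
#include <rtthread.h>
#include <drivers/clk.h> /* header path may vary */

static rt_err_t demo_probe(struct rt_device *dev)
{
    rt_err_t err;
    struct rt_clk_array *clk_arr = rt_clk_get_array(dev);

    if (!clk_arr)
        return -RT_EIO;

    /* On failure this should roll back any clocks already enabled */
    err = rt_clk_array_prepare_enable(clk_arr);
    if (err)
        return err;

    /* ... device setup using the enabled clocks ... */

    rt_clk_array_disable_unprepare(clk_arr);
    return RT_EOK;
}
```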
Signed-off-by: GuEe-GUI <2991707448@qq.com>
The old PHY bus is too simple, and it does not fit the rt_bus framework, so this writes a stronger PHY bus framework.
Add an MDIO bus and a PHY bus to the kernel. The PHY bus uses the rt_bus framework, so driver writers can write a phy_driver first. When a MAC driver needs a PHY, it registers a phy_device, and the phy_device searches for a driver that matches by UID and mask. If no driver matches the device you registered, the PHY bus binds the generic genphy driver to your device. genphy is the general-purpose driver for PHYs, so you can use it, but it cannot exploit the chip's specific capabilities, so performance may not reach its peak.
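The matching rule can be sketched as follows; the structure and field names (`uid`, `mask`) are assumptions used only to illustrate how the bus picks a driver before falling back to genphy:

```c
/* Sketch: UID/mask matching as described above. The real structures
 * come from the patch's phy/mdio headers; these are illustrative. */
#include <rtthread.h>

struct demo_phy_driver
{
    const char *name;
    rt_uint32_t uid;  /* expected PHY identifier */
    rt_uint32_t mask; /* which ID bits must match */
};

static rt_bool_t demo_phy_match(rt_uint32_t dev_uid,
                                const struct demo_phy_driver *drv)
{
    /* A driver matches when the masked IDs agree; if no registered
     * driver matches, the bus binds the generic genphy driver. */
    return (dev_uid & drv->mask) == (drv->uid & drv->mask);
}
```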
We need an API to automatically fix driver loading when a second
driver acquires it during the probe process, because we cannot keep
track of driver-to-driver dependencies across different SoCs.
Signed-off-by: GuEe-GUI <2991707448@qq.com>
Drivers can manage their own IDs without having to concern
themselves with registration/unregistration in the system, as
sketched below.
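The idea, sketched with a tiny bitmap ID allocator (names here are illustrative, not the patch's API):

```c
/* Sketch: a driver claims and releases its own numeric ID locally,
 * with no global register/unregister step. Illustrative names only. */
#include <rtthread.h>

#define DEMO_IDA_MAX 32

struct demo_ida
{
    rt_uint32_t map; /* bit N set => ID N is in use */
};

static int demo_ida_alloc(struct demo_ida *ida)
{
    for (int id = 0; id < DEMO_IDA_MAX; ++id)
    {
        if (!(ida->map & (1U << id)))
        {
            ida->map |= 1U << id;
            return id; /* e.g. device becomes "uart<id>" */
        }
    }

    return -1; /* all IDs exhausted */
}

static void demo_ida_free(struct demo_ida *ida, int id)
{
    if (id >= 0 && id < DEMO_IDA_MAX)
        ida->map &= ~(1U << id);
}
```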
Link: https://github.com/RT-Thread/rt-thread/issues/9534
Signed-off-by: GuEe-GUI <2991707448@qq.com>
This patch introduces the `rt_smp_call_request` API to handle queued
requests across cores with a user-provided data buffer, which provides
a way to request IPIs in a non-blocking pattern (a usage sketch
follows the change list below).
It also resolves several issues in the old implementation:
- Multiple requests from different cores cannot be queued in the work
  object of the target core.
- Data race on `rt_smp_work` of the same core. If multiple requests
  arrive in turn, or if the call is in use by the target CPU while a
  new request comes in, the value will be overwritten.
- Memory vulnerability. The `rt_smp_event` is allocated on the stack,
  yet the caller may not wait until the call is done.
- API naming problem. We don't actually provide a way to issue an IPI
  to ANY core in the mask; what the API does is aligned to the MANY
  pattern.
- FUNC_IPI registering to the PIC.
Changes:
- Declared and configured the new `RT_SMP_CALL_IPI` to support
functional IPIs for task requests across cores.
- Replaced the single `rt_smp_work` array with `call_req_cores` to
manage per-core call requests safely.
- Added `_call_req_take` and `_call_req_release` functions for atomic
handling of request lifetimes, preventing data race conditions.
- Replaced single event handling with a queue-based approach
(`call_queue`) for efficient multi-request processing per core.
- Introduced `rt_smp_call_ipi_handler` to process queued requests,
reducing IPI contention by only sending new requests when needed.
- Implemented `_smp_call_remote_request` to handle remote requests
with specific flags, enabling more flexible core-to-core task
signaling.
- Refined `rt_smp_call_req_init` to initialize and track requests
with atomic usage flags, mitigating potential memory vulnerabilities.
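A usage sketch of the non-blocking pattern; the header and the exact call shapes below are assumptions pieced together from the names in this patch:

```c
/* Sketch: queue a function call on another core without blocking.
 * Signatures and the request type are assumed from the API names
 * mentioned above; the real definitions come from this patch. */
#include <rtthread.h>

static void remote_work(void *data)
{
    rt_kprintf("cpu%d got data %d\n", rt_hw_cpu_id(), *(int *)data);
}

/* The request and its data must outlive the call: no stack allocation,
 * which is exactly the memory vulnerability the patch removes. */
static int payload = 42;
static struct rt_smp_call_req req;

void demo_call_core1(void)
{
    /* Init with atomic usage tracking, then queue on core 1; the IPI
     * handler on core 1 drains its call_queue and runs remote_work. */
    rt_smp_call_req_init(&req, remote_work, &payload);
    rt_smp_call_request(1, 0 /* flags omitted for brevity */, &req);
}
```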
Signed-off-by: Shell <smokewood@qq.com>
Add a PCI API: PCI/PCIe driver writers can use it to get the resources of the current device by flag. There are three flags (matching is sketched after this list):
1. PCI_BUS_REGION_F_MEM means memory space
2. PCI_BUS_REGION_F_IO means I/O space
3. PCI_BUS_REGION_F_PREFETCH means prefetchable memory
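A sketch of flag-based matching; the region layout is assumed for illustration, and the flag values here are placeholders:

```c
/* Sketch: return the first region of a device whose flags match the
 * request. Structure layout and flag values are illustrative only. */
#include <rtthread.h>

#define PCI_BUS_REGION_F_MEM      (1UL << 0) /* memory space */
#define PCI_BUS_REGION_F_IO       (1UL << 1) /* I/O space */
#define PCI_BUS_REGION_F_PREFETCH (1UL << 2) /* prefetchable memory */

struct demo_pci_region
{
    rt_ubase_t base;
    rt_size_t  size;
    rt_ubase_t flags;
};

static const struct demo_pci_region *
demo_pci_find_region(const struct demo_pci_region *regions, rt_size_t nr,
                     rt_ubase_t flags)
{
    for (rt_size_t i = 0; i < nr; ++i)
    {
        if (regions[i].flags & flags)
            return &regions[i];
    }

    return RT_NULL;
}
```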
Fix some code style issues and initialization for V2M and ITS.
V2M provides PCI MSI/MSI-X for GICv2.
ITS provides PCI MSI/MSI-X for GICv3/v4.
Signed-off-by: GuEe-GUI <2991707448@qq.com>
A PIC may be freed because of mistakes found while debugging.
We should remove it from the PIC list, or undefined behavior
will follow (sketched below).
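The fix boils down to unlinking before freeing; a sketch with RT-Thread's intrusive list helpers (struct and field names assumed):

```c
/* Sketch: unlink a PIC from the global list before freeing it, so
 * later list walks never touch freed memory. Names are illustrative. */
#include <rtthread.h>

struct demo_pic
{
    rt_list_t list; /* node in the global PIC list */
    /* ... interrupt controller state ... */
};

static void demo_pic_free(struct demo_pic *pic)
{
    rt_base_t level = rt_hw_interrupt_disable();

    /* Leaving a dangling node here is the undefined behavior the
     * commit message warns about. */
    rt_list_remove(&pic->list);

    rt_hw_interrupt_enable(level);

    rt_free(pic);
}
```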
Signed-off-by: GuEe-GUI <2991707448@qq.com>