This framework is useful only for devices that use an
external PHY (i.e. the PHY functionality is not embedded within the controller).
It is used by PCIe, USB, HDMI, DP and similar interfaces.
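A hypothetical consumer-side sketch of how a controller driver might bring up its external PHY; every name below (rt_phy_get_by_name(), rt_phy_init(), rt_phy_power_on()) is an illustrative assumption, not the framework's confirmed API:

    #include <rtdevice.h>

    /* Controller driver side: obtain and bring up its external PHY.
     * All names here are illustrative assumptions. */
    static rt_err_t controller_phy_bringup(struct rt_device *dev)
    {
        struct rt_phy *phy = rt_phy_get_by_name(dev, "usb2-phy");

        if (!phy)
        {
            return -RT_EEMPTY;
        }

        rt_phy_init(phy);      /* configure the external PHY */
        rt_phy_power_on(phy);  /* power it up before the controller uses the link */

        return RT_EOK;
    }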
Signed-off-by: GuEe-GUI <2991707448@qq.com>
* [DM/FEATURE] Support NVMe
1. Support PRP and SGL (NVMe 1.1 and later) transfers (see the descriptor sketch after this list).
2. Support MSI/MSI-X for I/O queues.
3. Support NVMe on PCI.
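For reference, an SGL data block descriptor as defined by the NVMe spec (1.1 and later) is laid out roughly as below; the struct and field names are illustrative, not from this patch:

    /* NVMe SGL data block descriptor (16 bytes); names are illustrative. */
    struct nvme_sgl_desc
    {
        rt_uint64_t addr;     /* buffer address (little-endian on the wire) */
        rt_uint32_t length;   /* buffer length in bytes */
        rt_uint8_t  rsvd[3];
        rt_uint8_t  type;     /* descriptor type (high nibble) | sub type (low nibble) */
    };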
Signed-off-by: GuEe-GUI <2991707448@qq.com>
1. Disk and blk device management.
2. Support automatic partition probing.
3. Support DFS and user-mode fops/ioctl (see the user-mode sketch after this list).
4. Add a command for blk device info.
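A minimal user-mode sketch of what item 3 enables, assuming the disk is exposed under /dev; the device path is illustrative:

    #include <fcntl.h>
    #include <unistd.h>

    /* Read the first bytes of a blk device through DFS via the user-mode
     * fops. The "/dev/sd0" path is an illustrative assumption. */
    static void dump_first_block(void)
    {
        char buf[512];
        int fd = open("/dev/sd0", O_RDONLY);

        if (fd < 0)
        {
            return;
        }

        read(fd, buf, sizeof(buf));  /* byte-oriented read via the device fops */
        close(fd);
    }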
Signed-off-by: GuEe-GUI <2991707448@qq.com>
1. Return OK by default when the input is NULL (i.e. the clock is optional for the device).
2. Support parsing clock objects from OFW.
3. Support fixing up clock dependencies automatically.
4. Fix up rt_clk_array_prepare_enable and rt_clk_array_disable_unprepare (see the usage sketch after this list).
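A minimal usage sketch of the pair fixed in item 4, for a device whose clocks come from OFW; the rt_clk_get_array() helper and the exact signatures are assumptions here:

    #include <rtdevice.h>

    /* Enable all clocks the device depends on, release them on teardown.
     * Helper names/signatures are assumptions for illustration. */
    static rt_err_t foo_probe(struct rt_platform_device *pdev)
    {
        struct rt_clk_array *clk_arr = rt_clk_get_array(&pdev->parent);

        if (rt_clk_array_prepare_enable(clk_arr) != RT_EOK)
        {
            return -RT_ERROR;
        }

        /* ... device is clocked, continue probing ... */

        /* on the remove/error path: */
        rt_clk_array_disable_unprepare(clk_arr);

        return RT_EOK;
    }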
Signed-off-by: GuEe-GUI <2991707448@qq.com>
The old PHY bus is too simple and does not fit the rt_bus framework, so write a stronger PHY bus framework.
Add an MDIO bus and a PHY bus to the kernel. The PHY bus uses the rt_bus framework, so a driver writer can write a phy_driver first. When a MAC driver needs a PHY, it registers a phy_device, and the PHY bus searches for a driver that matches by UID and mask (as sketched below). If no driver matches the registered device, the PHY bus returns the genphy driver, the generic PHY driver; it works, but it cannot use the chip-specific capabilities, so performance may not reach its peak.
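The UID/mask matching described above boils down to a check like this; the function and parameter names are illustrative:

    #include <rtthread.h>

    /* Match a registered phy_device against a phy_driver by UID and mask;
     * compare only the bits the driver cares about. If nothing matches,
     * the bus falls back to the generic genphy driver. Names illustrative. */
    static rt_bool_t phy_match(rt_uint32_t phy_id, rt_uint32_t drv_uid,
                               rt_uint32_t drv_mask)
    {
        return (phy_id & drv_mask) == (drv_uid & drv_mask);
    }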
We need an API to load a driver automatically when
a second driver needs it during the probe process, because
we cannot keep track of every driver-to-driver
dependency across different SoCs.
Signed-off-by: GuEe-GUI <2991707448@qq.com>
Drivers can manage their own IDs without having to concern
themselves with registering/unregistering them in the system.
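A minimal usage sketch, assuming an IDA-style helper; the names rt_dm_ida_alloc()/rt_dm_ida_free() and the init macro are assumptions for illustration:

    #include <rtdevice.h>

    /* Driver-local ID allocator (names assumed): the driver asks for a free
     * ID and returns it, without touching any global register/unregister path. */
    static struct rt_dm_ida blk_ida = RT_DM_IDA_INIT(BLK);

    static void example(void)
    {
        int id = rt_dm_ida_alloc(&blk_ida);  /* e.g. used to name "blk0", "blk1", ... */

        if (id >= 0)
        {
            /* ... create the device using this ID ... */
            rt_dm_ida_free(&blk_ida, id);
        }
    }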
Link: https://github.com/RT-Thread/rt-thread/issues/9534
Signed-off-by: GuEe-GUI <2991707448@qq.com>
POSIX.1 says:
> read() attempts to read up to count bytes from file descriptor fd
> into the buffer starting at buf.
On the other hand, for rt_device_read, the `@size` is defined as a
number of units (the block size for blk devices, otherwise one byte).
Changes:
- Convert sizes from POSIX R/W (in bytes) to rt_device R/W units when
  handling file R/W requests (see the sketch below).
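A sketch of the idea (not the actual patch): convert a byte-oriented POSIX request into rt_device units before calling rt_device_read(); the helper name is illustrative:

    #include <rtdevice.h>

    static rt_ssize_t bytes_to_device_read(rt_device_t dev, rt_off_t pos_byte,
                                           void *buf, rt_size_t len_byte)
    {
        rt_ssize_t unit = 1, rc;

        if (dev->type == RT_Device_Class_Block)
        {
            struct rt_device_blk_geometry geo;

            rt_device_control(dev, RT_DEVICE_CTRL_BLK_GETGEOME, &geo);
            unit = geo.block_size;  /* rt_device_read() counts in blocks here */
        }

        rc = rt_device_read(dev, pos_byte / unit, buf, len_byte / unit);

        return rc < 0 ? rc : rc * unit;  /* report bytes back to the POSIX layer */
    }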
Signed-off-by: Shell <smokewood@qq.com>
The `struct stat` object used inside mount(2) is uninitialized, which
can lead to undefined behavior at runtime.
Changes:
- Zero the buffer before calling stat() (see the sketch below).
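The fix amounts to something like this sketch; the surrounding mount code is omitted and `device_name` stands in for the real argument:

    struct stat buf;

    rt_memset(&buf, 0, sizeof(buf));  /* no indeterminate field can be read later */
    if (stat(device_name, &buf) == 0)
    {
        /* every field is now either set by stat() or zero */
    }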
Signed-off-by: Shell <smokewood@qq.com>
This patch introduces the `rt_smp_call_request` API to handle queued
requests across cores with a user-provided data buffer, which provides a
way to request an IPI in a non-blocking pattern (a usage sketch follows
the change list below).
It also resolves several issues in the old implementation:
- Multiple requests from different cores could not be queued in the work
  object of the target core.
- Data race on `rt_smp_work` of the same core. If multiple requests came
  in turn, or if the call was in use by the target CPU while a new
  request arrived, the value would be overwritten.
- Memory vulnerability. The rt_smp_event was allocated on the stack, even
  though the caller may not wait until the call is done.
- API naming problem. We do not actually provide a way to issue an IPI to
  ANY core in the mask; what the API does is aligned with the MANY pattern.
- FUNC_IPI registration with the PIC.
Changes:
- Declared and configured the new `RT_SMP_CALL_IPI` to support
functional IPIs for task requests across cores.
- Replaced the single `rt_smp_work` array with `call_req_cores` to
manage per-core call requests safely.
- Added `_call_req_take` and `_call_req_release` functions for atomic
handling of request lifetimes, preventing data race conditions.
- Replaced single event handling with a queue-based approach
(`call_queue`) for efficient multi-request processing per core.
- Introduced `rt_smp_call_ipi_handler` to process queued requests,
reducing IPI contention by only sending new requests when needed.
- Implemented `_smp_call_remote_request` to handle remote requests
with specific flags, enabling more flexible core-to-core task
signaling.
- Refined `rt_smp_call_req_init` to initialize and track requests
with atomic usage flags, mitigating potential memory vulnerabilities.
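A usage sketch of the new request path; the signatures and the flags argument shown here are assumptions and may differ slightly from the final API:

    #include <rtthread.h>

    static void flush_local_icache(void *data)
    {
        /* runs on the target core in IPI context */
    }

    /* the request object must stay valid until the target core has run the
     * callback, hence static rather than on the caller's stack frame */
    static struct rt_smp_call_req req;

    void kick_core(int target_cpu)
    {
        rt_smp_call_req_init(&req, flush_local_icache, RT_NULL);
        rt_smp_call_request(target_cpu, 0 /* flags, e.g. a wait flag */, &req);
    }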
Signed-off-by: Shell <smokewood@qq.com>