Merge branch 'master' into input

This commit is contained in:
GUI 2024-11-29 13:48:19 +08:00 committed by GitHub
commit 4edfdac0b0
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
67 changed files with 6535 additions and 430 deletions


@ -1,25 +1,33 @@
<!-- TOC -->
- [Reference Documents](#reference-documents)
- [Overview](#overview)
- [BSP Support](#bsp-support)
- [Supported Drivers](#supported-drivers)
- [Building](#building)
- [Toolchain Download](#toolchain-download)
- [Installing Dependencies](#installing-dependencies)
- [Build](#build)
- [Running](#running)
- [FAQ](#faq)
- [Contact Information](#contact-information)
- [1. Reference Documents](#1-reference-documents)
- [2. Overview](#2-overview)
- [3. BSP Support](#3-bsp-support)
- [3.1. Supported Drivers](#31-supported-drivers)
- [4. Building](#4-building)
- [4.1. Toolchain Download](#41-toolchain-download)
- [4.2. Installing Dependencies](#42-installing-dependencies)
- [4.3. Build](#43-build)
- [4.3.1. Board Selection](#431-board-selection)
- [4.3.2. Enabling RT-Smart](#432-enabling-rt-smart)
- [4.3.3. Compile](#433-compile)
- [5. Running](#5-running)
- [6. Booting the Big-Core RT-Smart with Automatic Root Filesystem Mount](#6-booting-the-big-core-rt-smart-with-automatic-root-filesystem-mount)
- [6.1. Kernel Build Configuration](#61-kernel-build-configuration)
- [6.2. Building the Filesystem](#62-building-the-filesystem)
- [6.3. Writing the Filesystem to the sd-card](#63-writing-the-filesystem-to-the-sd-card)
- [6.4. Power-on Boot](#64-power-on-boot)
- [7. FAQ](#7-faq)
- [8. Contact Information](#8-contact-information)
<!-- /TOC -->
# Reference Documents
# 1. Reference Documents
- [Reference 1] CV1800B/CV1801B Datasheet (Chinese): <https://github.com/milkv-duo/duo-files/blob/main/duo/datasheet/CV1800B-CV1801B-Preliminary-Datasheet-full-zh.pdf>
- [Reference 2] SG2002/SG2000 Technical Reference Manual (Chinese): <https://github.com/sophgo/sophgo-doc/releases>. The vendor publishes PDFs periodically; download the latest Chinese technical reference manual, `sg2002_trm_cn.pdf` or `sg2000_trm_cn.pdf`.
# Overview
# 2. Overview
The supported boards and their integrated SoCs are listed below.
@ -38,7 +46,7 @@ Duo family boards use CV18xx-series chips. The chips' operating modes are summarized as
- Big core RISC-V C906 @ 1GHz + little core RISC-V C906 @ 700MHz
- Big core ARM Cortex-A53 @ 1GHz + little core RISC-V C906 @ 700MHz
# BSP Support
# 3. BSP Support
Because of the big/little-core design, and the different operating modes on different SoCs, bsp/cvitek provides three different BSP/OS targets that must be built separately.
@ -52,7 +60,7 @@ Duo family boards use CV18xx-series chips. The chips' operating modes are summarized as
> Note: uart output pins differ between boards, so the default configuration may leave the serial console blank. Configure the output pins for your board's uart via `scons --menuconfig`.
## Supported Drivers
## 3.1. Supported Drivers
| Driver | Status | Notes |
| :---- | :------- | :---------------- |
@ -67,45 +75,54 @@ Duo family boards use CV18xx-series chips. The chips' operating modes are summarized as
| sdio | supported | |
| eth | supported | |
# Building
# 4. Building
## Toolchain Download
## 4.1. Toolchain Download
> Note: the BSP currently builds only on Linux; Ubuntu 22.04 is recommended
1. RT-Thread standard edition toolchain: `riscv64-unknown-elf-gcc`, download: [https://occ-oss-prod.oss-cn-hangzhou.aliyuncs.com/resource//1705395512373/Xuantie-900-gcc-elf-newlib-x86_64-V2.8.1-20240115.tar.gz](https://occ-oss-prod.oss-cn-hangzhou.aliyuncs.com/resource//1705395512373/Xuantie-900-gcc-elf-newlib-x86_64-V2.8.1-20240115.tar.gz)
1. The toolchain used to build the RT-Thread standard edition is `riscv64-unknown-elf-gcc`, download: [https://occ-oss-prod.oss-cn-hangzhou.aliyuncs.com/resource//1705395512373/Xuantie-900-gcc-elf-newlib-x86_64-V2.8.1-20240115.tar.gz](https://occ-oss-prod.oss-cn-hangzhou.aliyuncs.com/resource//1705395512373/Xuantie-900-gcc-elf-newlib-x86_64-V2.8.1-20240115.tar.gz)
2. RT-Smart edition toolchain: `riscv64-unknown-linux-musl-gcc`, download: [https://github.com/RT-Thread/toolchains-ci/releases/download/v1.7/riscv64-linux-musleabi_for_x86_64-pc-linux-gnu_latest.tar.bz2](https://github.com/RT-Thread/toolchains-ci/releases/download/v1.7/riscv64-linux-musleabi_for_x86_64-pc-linux-gnu_latest.tar.bz2)
2. The toolchain used to build the RT-Thread Smart edition is `riscv64-unknown-linux-musl-gcc`, download: [https://github.com/RT-Thread/toolchains-ci/releases/download/v1.7/riscv64-linux-musleabi_for_x86_64-pc-linux-gnu_latest.tar.bz2](https://github.com/RT-Thread/toolchains-ci/releases/download/v1.7/riscv64-linux-musleabi_for_x86_64-pc-linux-gnu_latest.tar.bz2)
After extracting, add the local paths of the `riscv64-unknown-elf-gcc` and `riscv64-unknown-linux-musl-gcc` toolchains to `EXEC_PATH` in `rtconfig.py`, or specify the path via the `RTT_EXEC_PATH` environment variable.
After extracting, export the environment variables below. It is recommended to put these export commands in `~/.bashrc`, **and to make sure the correct set of variables is exported for the toolchain in use**.
Use the following settings when building the RT-Thread standard edition:
```shell
# RT-Thread standard edition settings:
$ export RTT_CC_PREFIX=riscv64-unknown-elf-
$ export RTT_EXEC_PATH=/opt/Xuantie-900-gcc-elf-newlib-x86_64-V2.8.1/bin
# RT-Smart edition settings:
$ export RTT_CC_PREFIX=riscv64-unknown-linux-musl-
$ export RTT_EXEC_PATH=/opt/riscv64-linux-musleabi_for_x86_64-pc-linux-gnu/bin
export RTT_CC="gcc"
export RTT_CC_PREFIX=riscv64-unknown-elf-
export RTT_EXEC_PATH=/opt/Xuantie-900-gcc-elf-newlib-x86_64-V2.8.1/bin
```
## Installing Dependencies
Use the following settings when building the RT-Thread Smart edition:
```shell
export RTT_CC="gcc"
export RTT_CC_PREFIX=riscv64-unknown-linux-musl-
export RTT_EXEC_PATH=/opt/riscv64-linux-musleabi_for_x86_64-pc-linux-gnu/bin
```
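Since exporting the wrong set of variables is a common cause of build failures, a quick sanity check before running scons can help. `check_toolchain` below is a hypothetical helper, not part of the BSP, sketched against the paths assumed above:

```shell
# Sanity-check that the exported cross toolchain actually exists and is
# executable before starting a build. Illustrative helper, not part of the BSP.
check_toolchain() {
    # $1: toolchain bin directory, $2: tool prefix
    if [ -x "$1/$2gcc" ]; then
        echo "toolchain OK: $1/$2gcc"
    else
        echo "toolchain NOT found: $1/$2gcc" >&2
        return 1
    fi
}

# Example, using the variables exported above:
# check_toolchain "$RTT_EXEC_PATH" "$RTT_CC_PREFIX"
```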
## 4.2. Installing Dependencies
```shell
$ sudo apt install -y scons libncurses5-dev device-tree-compiler
```
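To confirm the dependencies landed on PATH (the `dtc` binary comes from the device-tree-compiler package), a small check loop can be used; `check_deps` is an illustrative helper, not part of the BSP:

```shell
# Report any of the named tools that are not found on PATH.
# Illustrative helper, not part of the BSP.
check_deps() {
    missing=0
    for tool in "$@"; do
        if ! command -v "$tool" >/dev/null 2>&1; then
            echo "missing: $tool"
            missing=1
        fi
    done
    return $missing
}

# Example:
# check_deps scons dtc
```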
## Build
## 4.3. Build
On this heterogeneous chip, the OS for each core must be built separately; in the directory for the big/little core, perform the following steps in order:
1. Board selection
On Linux, you can first run
### 4.3.1. Board Selection
On Linux, you can first run
```shell
$ scons --menuconfig
```
and select the target board type to build for
and select the target board type to build for; the default is "milkv-duo256m".
```shell
Board Type (milkv-duo) --->
( ) milkv-duo
@ -115,39 +132,131 @@ Board Type (milkv-duo) --->
( ) milkv-duos
```
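menuconfig records the selection as a `CONFIG_BOARD_TYPE_*` symbol in the generated `.config` (e.g. `CONFIG_BOARD_TYPE_MILKV_DUO256M=y` in this commit), so a quick grep confirms which board is active; `show_board` is a hypothetical helper:

```shell
# Print the board type currently enabled in a .config file.
# Illustrative helper, not part of the BSP.
show_board() {
    grep "^CONFIG_BOARD_TYPE_.*=y" "$1"
}

# Example, run in the BSP directory after menuconfig:
# show_board .config
```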
2. RT-Smart can be enabled as follows
### 4.3.2. Enabling RT-Smart
Currently the big core enables RT-Smart by default; the little core does not support RT-Smart.
To enable RT-Smart on the big core, configure it as follows.
```shell
RT-Thread Kernel --->
[*] Enable RT-Thread Smart (microkernel on kernel/userland)
```
and set the kernel virtual start address to `0xFFFFFFC000200000`
**Check the kernel virtual start address setting and make sure it is `0xFFFFFFC000200000`.**
```shell
RT-Thread Kernel --->
(0xFFFFFFC000200000) The virtural address of kernel start
RT-Thread Components --->
```
3. Compile
### 4.3.3. Compile
```shell
$ scons
```
After a successful build, the `fip.bin` and `boot.sd` files are generated automatically under the board-specific directory in `bsp/cvitek/output`; the big-core image is in `boot.sd` and the little-core image in `fip.bin`
After a successful build, the `fip.bin` and `boot.sd` files are generated automatically under the board-specific directory in `bsp/cvitek/output`.
- fip.bin: a packed bin file containing fsbl, opensbi, uboot, and the little-core image
- boot.sd: a packed bin file for the big core
- `fip.bin`: a packed bin file containing `fsbl`, `opensbi`, `uboot`, and the little core's kernel image `rtthread.bin`
- `boot.sd`: a packed bin file containing the big core's kernel image `rtthread.bin`
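A quick way to confirm the build produced both images is to check the output directory; `check_artifacts` is an illustrative helper, and the board directory name in the example is just an assumption:

```shell
# Report which of the two expected firmware images exist in a build
# output directory. Illustrative helper, not part of the BSP.
check_artifacts() {
    for f in fip.bin boot.sd; do
        if [ -f "$1/$f" ]; then
            echo "found $f"
        else
            echo "missing $f"
        fi
    done
}

# Example:
# check_artifacts bsp/cvitek/output/milkv-duo256m
```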
# Running
# 5. Running
1. Partition the SD card into 2 partitions: the 1st holds the bin files and the 2nd serves as data storage; the partition format is `FAT32`
1. Partition the SD card into 2 partitions: format the 1st as `FAT32` to hold the `fip.bin` and `boot.sd` files; the 2nd is optional and, if present, can serve as data storage or hold a filesystem
2. Copy `fip.bin` and `boot.sd` from the root directory to the first partition of the SD card. The two firmware files can be updated independently; for example, to update only the big core later, just rebuild "cv18xx_risc-v" and copy the new `boot.sd` over.
2. Copy `fip.bin` and `boot.sd` from the root directory to the first partition of the SD card. The two firmware files can be updated independently; for example, to update only the big core later, just rebuild "cv18xx_risc-v" and replace the `boot.sd` file on the first partition of the SD card.
3. After updating the firmware files, power-cycle the board and you should see output on the serial console.
# FAQ
# 6. Booting the Big-Core RT-Smart with Automatic Root Filesystem Mount
With RT-Smart enabled, the big core can mount a root filesystem during boot. Duo currently supports the ext4 and fat formats; the steps below use fat as an example:
## 6.1. Kernel Build Configuration
First make sure RT-Smart is enabled; see [4.3.2. Enabling RT-Smart](#432-enabling-rt-smart).
On top of the RT-Smart configuration, make sure the following options are set.
- Enable `BSP_USING_SDH` (Enable Secure Digital Host Controller), since the filesystem is stored on the sd-card.
- Enable `BSP_USING_RTC` (Enable RTC), to avoid the error `[W/time] Cannot find a RTC device!` when running commands after the filesystem is mounted.
- Enable `BSP_ROOTFS_TYPE_DISKFS` (Disk FileSystems, e.g. ext4, fat ...); this option is already on by default.
Save the configuration and rebuild the kernel.
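The three options above can be double-checked in the generated `.config` before rebuilding; `check_opts` is a hypothetical helper, not part of the BSP:

```shell
# For each option name, report whether CONFIG_<name>=y appears in the
# given .config file. Illustrative helper, not part of the BSP.
check_opts() {
    cfg=$1; shift
    for opt in "$@"; do
        if grep -q "^CONFIG_${opt}=y" "$cfg"; then
            echo "$opt: ok"
        else
            echo "$opt: NOT set"
        fi
    done
}

# Example, run in the BSP directory:
# check_opts .config BSP_USING_SDH BSP_USING_RTC BSP_ROOTFS_TYPE_DISKFS
```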
## 6.2. Building the Filesystem
The filesystem here is built with RT-Thread's official userapps tooling.
userapps repository: <https://github.com/RT-Thread/userapps>. See [Introduction and Quick Start](https://github.com/RT-Thread/userapps/blob/main/README.md) for details.
The steps to build the root filesystem are as follows, for reference:
```shell
cd $WS
git clone https://github.com/RT-Thread/userapps.git
cd $WS/userapps
source ./env.sh
cd apps
xmake f -a riscv64gc
xmake -j$(nproc)
xmake smart-rootfs
xmake smart-image -f fat
```
`$WS/userapps/apps/build` 路径下生成根文件系统镜像文件 `fat.img`
## 6.3. Writing the Filesystem to the sd-card
Partition the SD card into 2 partitions: the 1st holds the `fip.bin` and `boot.sd` files, the 2nd holds the filesystem; the partition format is `FAT32`.
Insert the SD card into a PC (Ubuntu is assumed here). If it is recognized as `/dev/sdb`, the second partition is `/dev/sdb2`. Mount the second partition, say to `~/ws/u-disk`.
Also mount the `fat.img` generated in the previous step to a temporary directory, say `/tmp`.
Finally, copy everything under /tmp into `~/ws/u-disk`; this completes writing the filesystem partition of the SD card.
Do not forget to unmount the SD card partitions at the end.
A short example of the steps, for reference:
```shell
sudo mount -o loop fat.img /tmp
sudo mount /dev/sdb2 ~/ws/u-disk
sudo cp -a /tmp/* ~/ws/u-disk
sudo umount ~/ws/u-disk
sudo umount /tmp
```
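Before unmounting, the copy can be verified by comparing the two mounted trees; `verify_copy` is an illustrative helper operating on the directories assumed above:

```shell
# Recursively compare two directory trees and report whether they match.
# Illustrative helper, not part of the BSP.
verify_copy() {
    if diff -r "$1" "$2" >/dev/null; then
        echo "contents match"
    else
        echo "contents differ"
    fi
}

# Example, run before the umount steps:
# verify_copy /tmp ~/ws/u-disk
```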
## 6.4. Power-on Boot
Once boot completes, the output `[I/app.filesystem] device 'sd1' is mounted to '/' as FAT` indicates that the root filesystem was mounted successfully. At this point `msh` is replaced by `/bin/ash`.
```shell
\ | /
- RT - Thread Smart Operating System
/ | \ 5.2.0 build Nov 26 2024 09:55:38
2006 - 2024 Copyright by RT-Thread team
lwIP-2.1.2 initialized!
[I/sal.skt] Socket Abstraction Layer initialize success.
[I/drivers.serial] Using /dev/ttyS0 as default console
[I/SDIO] SD card capacity 30216192 KB.
[I/SDIO] sd: switch to High Speed / SDR25 mode
found part[0], begin: 1048576, size: 128.0MB
found part[1], begin: 135266304, size: 28.707GB
[I/app.filesystem] device 'sd1' is mounted to '/' as FAT
Hello RT-Smart!
msh />[E/sal.skt] not find network interface device by protocol family(1).
[E/sal.skt] SAL socket protocol family input failed, return error -3.
/ # ls
bin etc mnt root sbin tc usr
dev lib proc run services tmp var
```
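When the serial output is captured to a file, the mount message can be checked automatically; `check_rootfs_mounted` is a hypothetical helper, and `boot.log` an assumed capture file name:

```shell
# Scan a captured boot log for the filesystem mount message printed above.
# Illustrative helper, not part of the BSP.
check_rootfs_mounted() {
    if grep -q "is mounted to '/' as" "$1"; then
        echo "rootfs mounted"
    else
        echo "rootfs mount failed"
    fi
}

# Example:
# check_rootfs_mounted boot.log
```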
# 7. FAQ
1. If the build fails, first regenerate the configuration with `scons --menuconfig`.
@ -162,7 +271,7 @@ $ sudo dpkg -i libssl1.1_1.1.1f-1ubuntu2_amd64.deb
3. If switching boards builds fine but packaging fails, change into the automatically downloaded `cvi_bootloader` directory and run `git pull` manually, or delete that directory and let it download again.
# Contact Information
# 8. Contact Information
Maintainer: [flyingcys](https://github.com/flyingcys)


@ -2,10 +2,114 @@
#
# RT-Thread Kernel
#
#
# klibc options
#
#
# ------------rt_memset options------------
#
# CONFIG_RT_KLIBC_USING_USER_MEMSET is not set
# CONFIG_RT_KLIBC_USING_LIBC_MEMSET is not set
# CONFIG_RT_KLIBC_USING_TINY_MEMSET is not set
#
# ------------rt_memcpy options------------
#
# CONFIG_RT_KLIBC_USING_USER_MEMCPY is not set
# CONFIG_RT_KLIBC_USING_LIBC_MEMCPY is not set
# CONFIG_RT_KLIBC_USING_TINY_MEMCPY is not set
#
# ------------rt_memmove options------------
#
# CONFIG_RT_KLIBC_USING_USER_MEMMOVE is not set
# CONFIG_RT_KLIBC_USING_LIBC_MEMMOVE is not set
#
# ------------rt_memcmp options------------
#
# CONFIG_RT_KLIBC_USING_USER_MEMCMP is not set
# CONFIG_RT_KLIBC_USING_LIBC_MEMCMP is not set
#
# ------------rt_strstr options------------
#
# CONFIG_RT_KLIBC_USING_USER_STRSTR is not set
# CONFIG_RT_KLIBC_USING_LIBC_STRSTR is not set
#
# ------------rt_strcasecmp options------------
#
# CONFIG_RT_KLIBC_USING_USER_STRCASECMP is not set
#
# ------------rt_strncpy options------------
#
# CONFIG_RT_KLIBC_USING_USER_STRNCPY is not set
# CONFIG_RT_KLIBC_USING_LIBC_STRNCPY is not set
#
# ------------rt_strcpy options------------
#
# CONFIG_RT_KLIBC_USING_USER_STRCPY is not set
# CONFIG_RT_KLIBC_USING_LIBC_STRCPY is not set
#
# ------------rt_strncmp options------------
#
# CONFIG_RT_KLIBC_USING_USER_STRNCMP is not set
# CONFIG_RT_KLIBC_USING_LIBC_STRNCMP is not set
#
# ------------rt_strcmp options------------
#
# CONFIG_RT_KLIBC_USING_USER_STRCMP is not set
# CONFIG_RT_KLIBC_USING_LIBC_STRCMP is not set
#
# ------------rt_strlen options------------
#
# CONFIG_RT_KLIBC_USING_USER_STRLEN is not set
# CONFIG_RT_KLIBC_USING_LIBC_STRLEN is not set
#
# ------------rt_strlen options------------
#
#
# ------------rt_strnlen options------------
#
# CONFIG_RT_KLIBC_USING_USER_STRNLEN is not set
#
# ------------rt_vsscanf options------------
#
# CONFIG_RT_KLIBC_USING_LIBC_VSSCANF is not set
#
# ------------rt_vsnprintf options------------
#
# CONFIG_RT_KLIBC_USING_LIBC_VSNPRINTF is not set
CONFIG_RT_KLIBC_USING_VSNPRINTF_LONGLONG=y
CONFIG_RT_KLIBC_USING_VSNPRINTF_STANDARD=y
CONFIG_RT_KLIBC_USING_VSNPRINTF_DECIMAL_SPECIFIERS=y
CONFIG_RT_KLIBC_USING_VSNPRINTF_EXPONENTIAL_SPECIFIERS=y
CONFIG_RT_KLIBC_USING_VSNPRINTF_WRITEBACK_SPECIFIER=y
CONFIG_RT_KLIBC_USING_VSNPRINTF_CHECK_NUL_IN_FORMAT_SPECIFIER=y
# CONFIG_RT_KLIBC_USING_VSNPRINTF_MSVC_STYLE_INTEGER_SPECIFIERS is not set
CONFIG_RT_KLIBC_USING_VSNPRINTF_INTEGER_BUFFER_SIZE=32
CONFIG_RT_KLIBC_USING_VSNPRINTF_DECIMAL_BUFFER_SIZE=32
CONFIG_RT_KLIBC_USING_VSNPRINTF_FLOAT_PRECISION=6
CONFIG_RT_KLIBC_USING_VSNPRINTF_MAX_INTEGRAL_DIGITS_FOR_DECIMAL=9
CONFIG_RT_KLIBC_USING_VSNPRINTF_LOG10_TAYLOR_TERMS=4
# end of klibc options
CONFIG_RT_NAME_MAX=8
# CONFIG_RT_USING_ARCH_DATA_TYPE is not set
CONFIG_RT_USING_SMART=y
# CONFIG_RT_USING_NANO is not set
CONFIG_RT_USING_SMART=y
# CONFIG_RT_USING_AMP is not set
# CONFIG_RT_USING_SMP is not set
CONFIG_RT_CPUS_NR=1
@ -15,6 +119,7 @@ CONFIG_RT_THREAD_PRIORITY_32=y
# CONFIG_RT_THREAD_PRIORITY_256 is not set
CONFIG_RT_THREAD_PRIORITY_MAX=32
CONFIG_RT_TICK_PER_SECOND=1000
CONFIG_RT_USING_OVERFLOW_CHECK=y
CONFIG_RT_USING_HOOK=y
CONFIG_RT_HOOK_USING_FUNC_PTR=y
# CONFIG_RT_USING_HOOKLIST is not set
@ -28,18 +133,10 @@ CONFIG_RT_TIMER_THREAD_STACK_SIZE=16384
CONFIG_RT_USING_CPU_USAGE_TRACER=y
#
# kservice optimization
# kservice options
#
# CONFIG_RT_USING_TINY_FFS is not set
# end of kservice optimization
#
# klibc optimization
#
# CONFIG_RT_KLIBC_USING_STDLIB is not set
# CONFIG_RT_KLIBC_USING_TINY_SIZE is not set
CONFIG_RT_KLIBC_USING_PRINTF_LONGLONG=y
# end of klibc optimization
# end of kservice options
CONFIG_RT_USING_DEBUG=y
CONFIG_RT_DEBUGING_ASSERT=y
@ -47,7 +144,6 @@ CONFIG_RT_DEBUGING_COLOR=y
CONFIG_RT_DEBUGING_CONTEXT=y
# CONFIG_RT_DEBUGING_AUTO_INIT is not set
# CONFIG_RT_DEBUGING_PAGE_LEAK is not set
CONFIG_RT_USING_OVERFLOW_CHECK=y
#
# Inter-Thread communication
@ -206,6 +302,7 @@ CONFIG_RT_USING_CPUTIME_RISCV=y
CONFIG_CPUTIME_TIMER_FREQ=25000000
# CONFIG_RT_USING_I2C is not set
# CONFIG_RT_USING_PHY is not set
# CONFIG_RT_USING_PHY_V2 is not set
# CONFIG_RT_USING_ADC is not set
# CONFIG_RT_USING_DAC is not set
CONFIG_RT_USING_NULL=y
@ -235,6 +332,15 @@ CONFIG_RT_USING_WDT=y
# CONFIG_RT_USING_LCD is not set
# CONFIG_RT_USING_HWCRYPTO is not set
# CONFIG_RT_USING_WIFI is not set
CONFIG_RT_USING_BLK=y
#
# Partition Types
#
CONFIG_RT_BLK_PARTITION_DFS=y
CONFIG_RT_BLK_PARTITION_EFI=y
# end of Partition Types
# CONFIG_RT_USING_VIRTIO is not set
CONFIG_RT_USING_PIN=y
CONFIG_RT_USING_KTIME=y
@ -601,7 +707,6 @@ CONFIG_LWP_PTY_MAX_PARIS_LIMIT=64
# CONFIG_PKG_USING_JSMN is not set
# CONFIG_PKG_USING_AGILE_JSMN is not set
# CONFIG_PKG_USING_PARSON is not set
# CONFIG_PKG_USING_RYAN_JSON is not set
# end of JSON: JavaScript Object Notation, a lightweight data-interchange format
#
@ -812,8 +917,6 @@ CONFIG_LWP_PTY_MAX_PARIS_LIMIT=64
#
# STM32 HAL & SDK Drivers
#
# CONFIG_PKG_USING_STM32F4_HAL_DRIVER is not set
# CONFIG_PKG_USING_STM32F4_CMSIS_DRIVER is not set
# CONFIG_PKG_USING_STM32L4_HAL_DRIVER is not set
# CONFIG_PKG_USING_STM32L4_CMSIS_DRIVER is not set
# CONFIG_PKG_USING_STM32WB55_SDK is not set
@ -1008,7 +1111,6 @@ CONFIG_LWP_PTY_MAX_PARIS_LIMIT=64
# CONFIG_PKG_USING_SYSTEM_RUN_LED is not set
# CONFIG_PKG_USING_BT_MX01 is not set
# CONFIG_PKG_USING_RGPOWER is not set
# CONFIG_PKG_USING_BT_MX02 is not set
# CONFIG_PKG_USING_SPI_TOOLS is not set
# end of peripheral libraries and drivers
@ -1124,7 +1226,6 @@ CONFIG_PKG_ZLIB_VER="latest"
# CONFIG_PKG_USING_ARDUINO_MSGQ_C_CPP_DEMO is not set
# CONFIG_PKG_USING_ARDUINO_SKETCH_LOADER_DEMO is not set
# CONFIG_PKG_USING_ARDUINO_ULTRASOUND_RADAR is not set
# CONFIG_PKG_USING_ARDUINO_RTDUINO_SENSORFUSION_SHIELD is not set
# CONFIG_PKG_USING_ARDUINO_NINEINONE_SENSOR_SHIELD is not set
# CONFIG_PKG_USING_ARDUINO_SENSOR_KIT is not set
# CONFIG_PKG_USING_ARDUINO_MATLAB_SUPPORT is not set
@ -1385,5 +1486,5 @@ CONFIG_SOC_TYPE_SG2002=y
CONFIG_BOARD_TYPE_MILKV_DUO256M=y
# CONFIG_BOARD_TYPE_MILKV_DUO256M_SPINOR is not set
# CONFIG_BOARD_TYPE_MILKV_DUOS is not set
CONFIG_BSP_ROOTFS_TYPE_ROMFS=y
CONFIG_BSP_ROOTFS_TYPE_DISKFS=y
# CONFIG_BSP_ROOTFS_TYPE_CROMFS is not set


@ -84,10 +84,10 @@ endchoice
choice BSP_ROOTFS_TYPE
prompt "rootfs type"
default BSP_ROOTFS_TYPE_ROMFS
default BSP_ROOTFS_TYPE_DISKFS
config BSP_ROOTFS_TYPE_ROMFS
bool "ROMFS"
config BSP_ROOTFS_TYPE_DISKFS
bool "Disk FileSystems, e.g. ext4, fat ..."
select RT_USING_DFS_ROMFS
config BSP_ROOTFS_TYPE_CROMFS


@ -3,6 +3,63 @@
/* RT-Thread Kernel */
/* klibc options */
/* ------------rt_memset options------------ */
/* ------------rt_memcpy options------------ */
/* ------------rt_memmove options------------ */
/* ------------rt_memcmp options------------ */
/* ------------rt_strstr options------------ */
/* ------------rt_strcasecmp options------------ */
/* ------------rt_strncpy options------------ */
/* ------------rt_strcpy options------------ */
/* ------------rt_strncmp options------------ */
/* ------------rt_strcmp options------------ */
/* ------------rt_strlen options------------ */
/* ------------rt_strlen options------------ */
/* ------------rt_strnlen options------------ */
/* ------------rt_vsscanf options------------ */
/* ------------rt_vsnprintf options------------ */
#define RT_KLIBC_USING_VSNPRINTF_LONGLONG
#define RT_KLIBC_USING_VSNPRINTF_STANDARD
#define RT_KLIBC_USING_VSNPRINTF_DECIMAL_SPECIFIERS
#define RT_KLIBC_USING_VSNPRINTF_EXPONENTIAL_SPECIFIERS
#define RT_KLIBC_USING_VSNPRINTF_WRITEBACK_SPECIFIER
#define RT_KLIBC_USING_VSNPRINTF_CHECK_NUL_IN_FORMAT_SPECIFIER
#define RT_KLIBC_USING_VSNPRINTF_INTEGER_BUFFER_SIZE 32
#define RT_KLIBC_USING_VSNPRINTF_DECIMAL_BUFFER_SIZE 32
#define RT_KLIBC_USING_VSNPRINTF_FLOAT_PRECISION 6
#define RT_KLIBC_USING_VSNPRINTF_MAX_INTEGRAL_DIGITS_FOR_DECIMAL 9
#define RT_KLIBC_USING_VSNPRINTF_LOG10_TAYLOR_TERMS 4
/* end of klibc options */
#define RT_NAME_MAX 8
#define RT_USING_SMART
#define RT_CPUS_NR 1
@ -10,6 +67,7 @@
#define RT_THREAD_PRIORITY_32
#define RT_THREAD_PRIORITY_MAX 32
#define RT_TICK_PER_SECOND 1000
#define RT_USING_OVERFLOW_CHECK
#define RT_USING_HOOK
#define RT_HOOK_USING_FUNC_PTR
#define RT_USING_IDLE_HOOK
@ -20,19 +78,13 @@
#define RT_TIMER_THREAD_STACK_SIZE 16384
#define RT_USING_CPU_USAGE_TRACER
/* kservice optimization */
/* kservice options */
/* end of kservice optimization */
/* klibc optimization */
#define RT_KLIBC_USING_VSNPRINTF_LONGLONG
/* end of klibc optimization */
/* end of kservice options */
#define RT_USING_DEBUG
#define RT_DEBUGING_ASSERT
#define RT_DEBUGING_COLOR
#define RT_DEBUGING_CONTEXT
#define RT_USING_OVERFLOW_CHECK
/* Inter-Thread communication */
@ -157,6 +209,13 @@
#define RT_MMCSD_THREAD_PREORITY 22
#define RT_MMCSD_MAX_PARTITION 16
#define RT_USING_WDT
#define RT_USING_BLK
/* Partition Types */
#define RT_BLK_PARTITION_DFS
#define RT_BLK_PARTITION_EFI
/* end of Partition Types */
#define RT_USING_PIN
#define RT_USING_KTIME
#define RT_USING_HWTIMER
@ -495,6 +554,6 @@
#define __STACKSIZE__ 8192
#define SOC_TYPE_SG2002
#define BOARD_TYPE_MILKV_DUO256M
#define BSP_ROOTFS_TYPE_ROMFS
#define BSP_ROOTFS_TYPE_DISKFS
#endif


@ -40,8 +40,8 @@ if GetDepend('BSP_USING_PWM'):
if GetDepend('BSP_ROOTFS_TYPE_CROMFS'):
src += ['port/mnt_cromfs.c']
elif GetDepend('BSP_ROOTFS_TYPE_ROMFS'):
src += ['port/mnt_romfs.c']
elif GetDepend('BSP_ROOTFS_TYPE_DISKFS'):
src += ['port/mnt_diskfs.c']
if GetDepend('BSP_USING_SDH'):
src += ['drv_sdhci.c']


@ -317,7 +317,6 @@ static void rt_hw_rtc_isr(int irqno, void *param)
rt_interrupt_leave();
}
#endif
static rt_err_t _rtc_get_alarm(struct rt_rtc_wkalarm *alarm)
{
@ -367,6 +366,7 @@ static rt_err_t _rtc_set_alarm(struct rt_rtc_wkalarm *alarm)
return RT_EOK;
}
#endif
static const struct rt_rtc_ops _rtc_ops =
{


@ -0,0 +1,62 @@
/*
* Copyright (c) 2006-2023, RT-Thread Development Team
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <rtthread.h>
#ifdef RT_USING_DFS
#include <dfs_fs.h>
#define DBG_TAG "app.filesystem"
#define DBG_LVL DBG_LOG
#include <rtdbg.h>
/* Poll up to 10 times, 50 ms apart, for the block device to appear */
static int _wait_device_ready(const char* devname)
{
int k;
for(k = 0; k < 10; k++)
{
if (rt_device_find(devname) != RT_NULL)
{
return 1;
}
rt_thread_mdelay(50);
}
return 0;
}
static void sd_mount(const char *devname)
{
if (!_wait_device_ready(devname)) {
LOG_W("Failed to find device: %s", devname);
return;
}
/* Try ext4 first, then fall back to FAT */
if (dfs_mount(devname, "/", "ext", 0, 0) == RT_EOK)
{
LOG_I("device '%s' is mounted to '/' as EXT", devname);
}
else if (dfs_mount(devname, "/", "elm", 0, 0) == RT_EOK)
{
LOG_I("device '%s' is mounted to '/' as FAT", devname);
}
else
{
LOG_W("Failed to mount device '%s' to '/': %d\n", devname, rt_get_errno());
}
}
int mount_init(void)
{
#ifdef BSP_USING_SDH
sd_mount("sd1");
#endif
return RT_EOK;
}
INIT_ENV_EXPORT(mount_init);
#endif /* RT_USING_DFS */


@ -1,108 +0,0 @@
/*
* Copyright (c) 2006-2023, RT-Thread Development Team
*
* SPDX-License-Identifier: Apache-2.0
*
* Change Logs:
* Date Author Notes
* 2022/12/25 flyingcys first version
*/
#include <rtthread.h>
#ifdef RT_USING_DFS
#include <dfs_fs.h>
#include "dfs_romfs.h"
#define DBG_TAG "app.filesystem"
#define DBG_LVL DBG_LOG
#include <rtdbg.h>
static const struct romfs_dirent _romfs_root[] =
{
#ifdef BSP_USING_ON_CHIP_FLASH
{ROMFS_DIRENT_DIR, "flash", RT_NULL, 0},
#endif
{ROMFS_DIRENT_DIR, "sdcard", RT_NULL, 0}
};
const struct romfs_dirent romfs_root =
{
ROMFS_DIRENT_DIR, "/", (rt_uint8_t *)_romfs_root, sizeof(_romfs_root) / sizeof(_romfs_root[0])
};
static void sd_mount(void *parameter)
{
while (1)
{
rt_thread_mdelay(500);
if (rt_device_find("sd0") != RT_NULL)
{
if (dfs_mount("sd0", "/sdcard", "elm", 0, 0) == RT_EOK)
{
LOG_I("sd card mount to '/sdcard'");
break;
}
else
{
LOG_W("sd card mount to '/sdcard' failed! %d\n", rt_get_errno());
}
}
}
}
int mount_init(void)
{
if(dfs_mount(RT_NULL, "/", "rom", 0, &romfs_root) != 0)
{
LOG_E("rom mount to '/' failed!");
}
#ifdef BSP_USING_ON_CHIP_FLASH_FS
struct rt_device *flash_dev = RT_NULL;
/* Create a block device from the "filesystem" partition; the block device is also named "filesystem" */
flash_dev = fal_blk_device_create("filesystem");
if(flash_dev == RT_NULL)
{
LOG_E("Failed to create device.\n");
return -RT_ERROR;
}
if (dfs_mount("filesystem", "/flash", "lfs", 0, 0) != 0)
{
LOG_I("file system initialization failed!\n");
if(dfs_mkfs("lfs", "filesystem") == 0)
{
if (dfs_mount("filesystem", "/flash", "lfs", 0, 0) == 0)
{
LOG_I("mount to '/flash' success!");
}
}
}
else
{
LOG_I("mount to '/flash' success!");
}
#endif
#ifdef BSP_USING_SDH
rt_thread_t tid;
tid = rt_thread_create("sd_mount", sd_mount, RT_NULL,
4096, RT_THREAD_PRIORITY_MAX - 2, 20);
if (tid != RT_NULL)
{
rt_thread_startup(tid);
}
else
{
LOG_E("create sd_mount thread err!");
}
#endif
return RT_EOK;
}
INIT_APP_EXPORT(mount_init);
#endif /* RT_USING_DFS */


@ -167,12 +167,12 @@ void devmem(int argc, char *argv[])
if (argc == 3)
{
if (sscanf(argv[2], "0x%x", &value) != 1)
if (rt_sscanf(argv[2], "0x%x", &value) != 1)
goto exit_devmem;
mode = 1; /*Write*/
}
if (sscanf(argv[1], "0x%x", &u32Addr) != 1)
if (rt_sscanf(argv[1], "0x%x", &u32Addr) != 1)
goto exit_devmem;
else if (!u32Addr || u32Addr & (4 - 1))
goto exit_devmem;
@ -203,12 +203,12 @@ void devmem2(int argc, char *argv[])
if (argc == 3)
{
if (sscanf(argv[2], "%u", &value) != 1)
if (rt_sscanf(argv[2], "%u", &value) != 1)
goto exit_devmem;
word_count = value;
}
if (sscanf(argv[1], "0x%x", &u32Addr) != 1)
if (rt_sscanf(argv[1], "0x%x", &u32Addr) != 1)
goto exit_devmem;
else if (!u32Addr || u32Addr & (4 - 1))
goto exit_devmem;


@ -399,7 +399,7 @@ static rt_err_t lcd_set_overlay_colkey(int argc, char **argv)
for (index = 0; index < (len - 1); index ++)
{
if (sscanf(argv[index + 1], "%x", &arg[index]) != 1)
if (rt_sscanf(argv[index + 1], "%x", &arg[index]) != 1)
return -1;
}
rt_kprintf("colkeylow:0x%08x colkeyhigh:0x%08x\n", arg[0], arg[1]);
@ -459,7 +459,7 @@ static rt_err_t lcd_set_alphablend_opmode(int argc, char **argv)
for (index = 0; index < (len - 1); index ++)
{
if (sscanf(argv[index + 1], "%x", &arg[index]) != 1)
if (rt_sscanf(argv[index + 1], "%x", &arg[index]) != 1)
return -1;
}


@ -307,12 +307,12 @@ void devmem(int argc, char *argv[])
if (argc == 3)
{
if (sscanf(argv[2], "0x%x", &value) != 1)
if (rt_sscanf(argv[2], "0x%x", &value) != 1)
goto exit_devmem;
mode = 1; //Write
}
if (sscanf(argv[1], "0x%x", &u32Addr) != 1)
if (rt_sscanf(argv[1], "0x%x", &u32Addr) != 1)
goto exit_devmem;
else if (u32Addr & (4 - 1))
goto exit_devmem;
@ -343,12 +343,12 @@ void devmem2(int argc, char *argv[])
if (argc == 3)
{
if (sscanf(argv[2], "%d", &value) != 1)
if (rt_sscanf(argv[2], "%d", &value) != 1)
goto exit_devmem;
word_count = value;
}
if (sscanf(argv[1], "0x%x", &u32Addr) != 1)
if (rt_sscanf(argv[1], "0x%x", &u32Addr) != 1)
goto exit_devmem;
else if (u32Addr & (4 - 1))
goto exit_devmem;


@ -90,12 +90,12 @@ void devmem(int argc, char *argv[])
if (argc == 3)
{
if (sscanf(argv[2], "0x%x", &value) != 1)
if (rt_sscanf(argv[2], "0x%x", &value) != 1)
goto exit_devmem;
mode = 1; //Write
}
if (sscanf(argv[1], "0x%x", &u32Addr) != 1)
if (rt_sscanf(argv[1], "0x%x", &u32Addr) != 1)
goto exit_devmem;
else if (!u32Addr || u32Addr & (4 - 1))
goto exit_devmem;


@ -298,12 +298,12 @@ void whc_devmem(int argc, char *argv[])
if (argc == 3)
{
if (sscanf(argv[2], "0x%x", &value) != 1)
if (rt_sscanf(argv[2], "0x%x", &value) != 1)
goto exit_devmem;
mode = 1; //Write
}
if (sscanf(argv[1], "0x%x", &u32Addr) != 1)
if (rt_sscanf(argv[1], "0x%x", &u32Addr) != 1)
goto exit_devmem;
else if (u32Addr & (4 - 1))
goto exit_devmem;


@ -0,0 +1,286 @@
/*
* Copyright (c) 2006-2024 RT-Thread Development Team
*
* SPDX-License-Identifier: Apache-2.0
*
* Change Logs:
* Date Author Notes
* 2024-11-26 hywing the first version.
*
*/
#include <rtthread.h>
#ifdef BSP_USING_HWTIMER
#define LOG_TAG "drv.hwtimer"
#include <drv_log.h>
#include <rtdevice.h>
#include "fsl_ctimer.h"
enum
{
#ifdef BSP_USING_CTIMER0
TIM0_INDEX,
#endif
#ifdef BSP_USING_CTIMER1
TIM1_INDEX,
#endif
#ifdef BSP_USING_CTIMER2
TIM2_INDEX,
#endif
};
#ifdef BSP_USING_CTIMER0
#define TIM0_CONFIG \
{ \
.tim_handle = CTIMER0, \
.tim_irqn = CTIMER0_IRQn, \
.name = "timer0", \
}
#endif /* TIM0_CONFIG */
#ifdef BSP_USING_CTIMER1
#define TIM1_CONFIG \
{ \
.tim_handle = CTIMER1, \
.tim_irqn = CTIMER1_IRQn, \
.name = "timer1", \
}
#endif /* TIM1_CONFIG */
#ifdef BSP_USING_CTIMER2
#define TIM2_CONFIG \
{ \
.tim_handle = CTIMER2, \
.tim_irqn = CTIMER2_IRQn, \
.name = "timer2", \
}
#endif /* TIM2_CONFIG */
struct mcxa_hwtimer
{
rt_hwtimer_t time_device;
CTIMER_Type* tim_handle;
enum IRQn tim_irqn;
char* name;
};
static struct mcxa_hwtimer mcxa_hwtimer_obj[] =
{
#ifdef BSP_USING_CTIMER0
TIM0_CONFIG,
#endif
#ifdef BSP_USING_CTIMER1
TIM1_CONFIG,
#endif
#ifdef BSP_USING_CTIMER2
TIM2_CONFIG,
#endif
};
static void NVIC_Configuration(void)
{
#ifdef BSP_USING_CTIMER0
EnableIRQ(CTIMER0_IRQn);
#endif
#ifdef BSP_USING_CTIMER1
EnableIRQ(CTIMER1_IRQn);
#endif
#ifdef BSP_USING_CTIMER2
EnableIRQ(CTIMER2_IRQn);
#endif
}
static rt_err_t mcxa_ctimer_control(rt_hwtimer_t *timer, rt_uint32_t cmd, void *args)
{
rt_err_t err = RT_EOK;
CTIMER_Type *hwtimer_dev;
RT_ASSERT(timer != RT_NULL);
hwtimer_dev = (CTIMER_Type *)timer->parent.user_data;
switch (cmd)
{
case HWTIMER_CTRL_FREQ_SET:
{
uint32_t clk;
uint32_t pre;
if(hwtimer_dev == CTIMER0) clk = CLOCK_GetCTimerClkFreq(0U);
if(hwtimer_dev == CTIMER1) clk = CLOCK_GetCTimerClkFreq(1U);
if(hwtimer_dev == CTIMER2) clk = CLOCK_GetCTimerClkFreq(2U);
pre = clk / *((uint32_t *)args) - 1;
hwtimer_dev->PR = pre;
}
break;
default:
err = -RT_ENOSYS;
break;
}
return err;
}
static rt_uint32_t mcxa_ctimer_count_get(rt_hwtimer_t *timer)
{
rt_uint32_t CurrentTimer_Count;
CTIMER_Type *hwtimer_dev;
RT_ASSERT(timer != RT_NULL);
hwtimer_dev = (CTIMER_Type *)timer->parent.user_data;
CurrentTimer_Count = hwtimer_dev->TC;
return CurrentTimer_Count;
}
static void mcxa_ctimer_init(rt_hwtimer_t *timer, rt_uint32_t state)
{
CTIMER_Type *hwtimer_dev;
ctimer_config_t cfg;
RT_ASSERT(timer != RT_NULL);
hwtimer_dev = (CTIMER_Type *)timer->parent.user_data;
/* Use Main clock for some of the Ctimers */
if(hwtimer_dev == CTIMER0) CLOCK_AttachClk(kFRO_HF_to_CTIMER0);
if(hwtimer_dev == CTIMER1) CLOCK_AttachClk(kFRO_HF_to_CTIMER1);
if(hwtimer_dev == CTIMER2) CLOCK_AttachClk(kFRO_HF_to_CTIMER2);
/* cfg must be initialized before CTIMER_Init() is called */
CTIMER_GetDefaultConfig(&cfg);
CTIMER_Init(hwtimer_dev, &cfg);
if (state == 1)
{
NVIC_Configuration();
}
}
static rt_err_t mcxa_ctimer_start(rt_hwtimer_t *timer, rt_uint32_t cnt, rt_hwtimer_mode_t mode)
{
CTIMER_Type *hwtimer_dev;
RT_ASSERT(timer != RT_NULL);
hwtimer_dev = (CTIMER_Type *)timer->parent.user_data;
/* Match Configuration for Channel 0 */
ctimer_match_config_t matchCfg;
/* Configuration*/
matchCfg.enableCounterReset = true;
matchCfg.enableCounterStop = (mode == HWTIMER_MODE_ONESHOT);
matchCfg.matchValue = cnt;
matchCfg.outControl = kCTIMER_Output_NoAction;
matchCfg.outPinInitState = false;
matchCfg.enableInterrupt = true;
CTIMER_SetupMatch(hwtimer_dev, kCTIMER_Match_1, &matchCfg);
NVIC_Configuration();
CTIMER_StartTimer(hwtimer_dev);
return RT_EOK;
}
static void mcxa_ctimer_stop(rt_hwtimer_t *timer)
{
CTIMER_Type *hwtimer_dev;
RT_ASSERT(timer != RT_NULL);
hwtimer_dev = (CTIMER_Type *)timer->parent.user_data;
CTIMER_StopTimer(hwtimer_dev);
}
static const struct rt_hwtimer_ops mcxa_hwtimer_ops =
{
.init = mcxa_ctimer_init,
.start = mcxa_ctimer_start,
.stop = mcxa_ctimer_stop,
.count_get = mcxa_ctimer_count_get,
.control = mcxa_ctimer_control,
};
static const struct rt_hwtimer_info mcxa_hwtimer_info =
{
96000000, /* the maximum count frequency can be set */
6103, /* the minimum count frequency can be set */
0xFFFFFFFF,
HWTIMER_CNTMODE_UP,
};
int rt_hw_hwtimer_init(void)
{
int i = 0;
int result = RT_EOK;
for (i = 0; i < sizeof(mcxa_hwtimer_obj) / sizeof(mcxa_hwtimer_obj[0]); i++)
{
mcxa_hwtimer_obj[i].time_device.info = &mcxa_hwtimer_info;
mcxa_hwtimer_obj[i].time_device.ops = &mcxa_hwtimer_ops;
if (rt_device_hwtimer_register(&mcxa_hwtimer_obj[i].time_device,
mcxa_hwtimer_obj[i].name, mcxa_hwtimer_obj[i].tim_handle) == RT_EOK)
{
LOG_D("%s register success", mcxa_hwtimer_obj[i].name);
}
else
{
LOG_E("%s register failed", mcxa_hwtimer_obj[i].name);
result = -RT_ERROR;
}
}
return result;
}
INIT_DEVICE_EXPORT(rt_hw_hwtimer_init);
#ifdef BSP_USING_CTIMER0
void CTIMER0_IRQHandler(void)
{
rt_interrupt_enter();
uint32_t int_stat;
/* Get Interrupt status flags */
int_stat = CTIMER_GetStatusFlags(CTIMER0);
/* Clear the status flags that were set */
CTIMER_ClearStatusFlags(CTIMER0, int_stat);
rt_device_hwtimer_isr(&mcxa_hwtimer_obj[TIM0_INDEX].time_device);
rt_interrupt_leave();
}
#endif /* BSP_USING_HWTIMER0 */
#ifdef BSP_USING_CTIMER1
void CTIMER1_IRQHandler(void)
{
rt_interrupt_enter();
uint32_t int_stat;
/* Get Interrupt status flags */
int_stat = CTIMER_GetStatusFlags(CTIMER1);
/* Clear the status flags that were set */
CTIMER_ClearStatusFlags(CTIMER1, int_stat);
rt_device_hwtimer_isr(&mcxa_hwtimer_obj[TIM1_INDEX].time_device);
rt_interrupt_leave();
}
#endif /* BSP_USING_HWTIMER1 */
#ifdef BSP_USING_CTIMER2
void CTIMER2_IRQHandler(void)
{
rt_interrupt_enter();
uint32_t int_stat;
/* Get Interrupt status flags */
int_stat = CTIMER_GetStatusFlags(CTIMER2);
/* Clear the status flags that were set */
CTIMER_ClearStatusFlags(CTIMER2, int_stat);
rt_device_hwtimer_isr(&mcxa_hwtimer_obj[TIM2_INDEX].time_device);
rt_interrupt_leave();
}
#endif /* BSP_USING_HWTIMER2 */
#endif /* BSP_USING_HWTIMER */


@ -103,7 +103,7 @@ menu "On-chip Peripheral Drivers"
menuconfig BSP_USING_HWTIMER
config BSP_USING_HWTIMER
bool "Enable Timer"
bool "Enable Hardware Timer"
select RT_USING_HWTIMER
default y
@ -116,12 +116,8 @@ menu "On-chip Peripheral Drivers"
bool "Enable CTIMER1"
default n
config BSP_USING_CTIMER3
bool "Enable CTIMER3"
default n
config BSP_USING_CTIMER4
bool "Enable CTIMER4"
config BSP_USING_CTIMER2
bool "Enable CTIMER2"
default n
endif


@ -44,8 +44,12 @@ void BOARD_InitPins(void)
CLOCK_EnableClock(kCLOCK_GateGPIO2);
CLOCK_EnableClock(kCLOCK_GateGPIO3);
CLOCK_SetClockDiv(kCLOCK_DivCTIMER0, 1u);
CLOCK_AttachClk(kFRO_HF_to_CTIMER0);
CLOCK_SetClockDiv(kCLOCK_DivCTIMER1, 1u);
CLOCK_AttachClk(kFRO_HF_to_CTIMER1);
CLOCK_SetClockDiv(kCLOCK_DivCTIMER2, 1u);
CLOCK_AttachClk(kFRO_HF_to_CTIMER2);
RESET_ReleasePeripheralReset(kLPUART0_RST_SHIFT_RSTn);
RESET_ReleasePeripheralReset(kLPUART1_RST_SHIFT_RSTn);


@ -253,6 +253,15 @@ CONFIG_RT_USING_QSPI=y
# CONFIG_RT_USING_LCD is not set
# CONFIG_RT_USING_HWCRYPTO is not set
# CONFIG_RT_USING_WIFI is not set
CONFIG_RT_USING_BLK=y
#
# Partition Types
#
CONFIG_RT_BLK_PARTITION_DFS=y
CONFIG_RT_BLK_PARTITION_EFI=y
# end of Partition Types
# CONFIG_RT_USING_VIRTIO is not set
CONFIG_RT_USING_PIN=y
CONFIG_RT_USING_KTIME=y


@ -26,6 +26,7 @@ config PHYTIUM_ARCH_AARCH64
select RT_USING_CACHE
select TARGET_ARMV8_AARCH64
select ARCH_ARM_BOOTWITH_FLUSH_CACHE
select ARCH_USING_IRQ_CTX_LIST
select RT_USING_HW_ATOMIC
default y


@ -160,6 +160,13 @@
#define RT_MMCSD_MAX_PARTITION 16
#define RT_USING_SPI
#define RT_USING_QSPI
#define RT_USING_BLK
/* Partition Types */
#define RT_BLK_PARTITION_DFS
#define RT_BLK_PARTITION_EFI
/* end of Partition Types */
#define RT_USING_PIN
#define RT_USING_KTIME
#define RT_USING_CHERRYUSB


@ -22,12 +22,14 @@ rsource "graphic/Kconfig"
rsource "hwcrypto/Kconfig"
rsource "wlan/Kconfig"
rsource "input/Kconfig"
rsource "led/Kconfig"
rsource "mailbox/Kconfig"
rsource "phye/Kconfig"
rsource "ata/Kconfig"
rsource "block/Kconfig"
rsource "nvme/Kconfig"
rsource "scsi/Kconfig"
rsource "regulator/Kconfig"
rsource "reset/Kconfig"
rsource "virtio/Kconfig"
rsource "dma/Kconfig"

View File

@ -16,6 +16,8 @@
#include <stdlib.h>
#include <rtthread.h>
#include <drivers/dev_pin.h>
#include <drivers/core/driver.h>
/**
* @addtogroup Drivers RTTHREAD Driver
* @defgroup SPI SPI
@ -151,7 +153,12 @@ struct rt_spi_configuration
{
rt_uint8_t mode;
rt_uint8_t data_width;
#ifdef RT_USING_DM
rt_uint8_t data_width_tx;
rt_uint8_t data_width_rx;
#else
rt_uint16_t reserved;
#endif
rt_uint32_t max_hz;
};
@ -167,6 +174,12 @@ struct rt_spi_bus
rt_uint8_t mode;
const struct rt_spi_ops *ops;
#ifdef RT_USING_DM
rt_base_t *pins;
rt_bool_t slave;
int num_chipselect;
#endif /* RT_USING_DM */
struct rt_mutex lock;
struct rt_spi_device *owner;
};
@ -180,6 +193,20 @@ struct rt_spi_ops
rt_ssize_t (*xfer)(struct rt_spi_device *device, struct rt_spi_message *message);
};
#ifdef RT_USING_DM
/**
* @brief SPI delay info
*/
struct rt_spi_delay
{
#define RT_SPI_DELAY_UNIT_USECS 0
#define RT_SPI_DELAY_UNIT_NSECS 1
#define RT_SPI_DELAY_UNIT_SCK 2
rt_uint16_t value;
rt_uint8_t unit;
};
#endif /* RT_USING_DM */
/**
* @brief SPI Virtual BUS, one device must connected to a virtual BUS
*/
@ -188,6 +215,17 @@ struct rt_spi_device
struct rt_device parent;
struct rt_spi_bus *bus;
#ifdef RT_USING_DM
const char *name;
const struct rt_spi_device_id *id;
const struct rt_ofw_node_id *ofw_id;
rt_uint8_t chip_select;
struct rt_spi_delay cs_setup;
struct rt_spi_delay cs_hold;
struct rt_spi_delay cs_inactive;
#endif
struct rt_spi_configuration config;
rt_base_t cs_pin;
void *user_data;
@ -252,6 +290,31 @@ struct rt_qspi_device
#define SPI_DEVICE(dev) ((struct rt_spi_device *)(dev))
#ifdef RT_USING_DM
struct rt_spi_device_id
{
char name[20];
void *data;
};
struct rt_spi_driver
{
struct rt_driver parent;
const struct rt_spi_device_id *ids;
const struct rt_ofw_node_id *ofw_ids;
rt_err_t (*probe)(struct rt_spi_device *device);
rt_err_t (*remove)(struct rt_spi_device *device);
rt_err_t (*shutdown)(struct rt_spi_device *device);
};
rt_err_t rt_spi_driver_register(struct rt_spi_driver *driver);
rt_err_t rt_spi_device_register(struct rt_spi_device *device);
#define RT_SPI_DRIVER_EXPORT(driver) RT_DRIVER_EXPORT(driver, spi, BUILIN)
#endif /* RT_USING_DM */
/**
* @brief register a SPI bus
*

View File

@ -0,0 +1,57 @@
/*
* Copyright (c) 2006-2022, RT-Thread Development Team
*
* SPDX-License-Identifier: Apache-2.0
*
* Change Logs:
* Date Author Notes
* 2022-3-08 GuEe-GUI the first version
*/
#ifndef __LED_H__
#define __LED_H__
#include <rthw.h>
#include <rtdef.h>
struct rt_led_ops;
enum rt_led_state
{
RT_LED_S_OFF,
RT_LED_S_ON,
RT_LED_S_TOGGLE,
RT_LED_S_BLINK,
RT_LED_STATE_NR,
};
struct rt_led_device
{
struct rt_device parent;
const struct rt_led_ops *ops;
struct rt_spinlock spinlock;
void *sysdata;
void *priv;
};
struct rt_led_ops
{
rt_err_t (*set_state)(struct rt_led_device *led, enum rt_led_state state);
rt_err_t (*get_state)(struct rt_led_device *led, enum rt_led_state *out_state);
rt_err_t (*set_period)(struct rt_led_device *led, rt_uint32_t period_ms);
rt_err_t (*set_brightness)(struct rt_led_device *led, rt_uint32_t brightness);
};
rt_err_t rt_hw_led_register(struct rt_led_device *led);
rt_err_t rt_hw_led_unregister(struct rt_led_device *led);
rt_err_t rt_led_set_state(struct rt_led_device *led, enum rt_led_state state);
rt_err_t rt_led_get_state(struct rt_led_device *led, enum rt_led_state *out_state);
rt_err_t rt_led_set_period(struct rt_led_device *led, rt_uint32_t period_ms);
rt_err_t rt_led_set_brightness(struct rt_led_device *led, rt_uint32_t brightness);
#endif /* __LED_H__ */
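The header above follows the usual RT-Thread ops-table pattern: the core holds a pointer to a driver-supplied vtable (`rt_led_ops`) and dispatches every API call through it. A stand-alone sketch of that dispatch, with invented names (not the RT-Thread API), looks like this:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative ops-table pattern: the core stores a driver vtable and
 * dispatches through it. All names here are invented for the example. */
enum led_state { LED_OFF, LED_ON };

struct led;
struct led_ops
{
    int (*set_state)(struct led *led, enum led_state state);
    int (*get_state)(struct led *led, enum led_state *out_state);
};

struct led
{
    const struct led_ops *ops;
    void *priv; /* driver private data */
};

/* Core-layer wrapper: validates, then dispatches through the vtable */
static int led_set_state(struct led *led, enum led_state state)
{
    if (!led || !led->ops || !led->ops->set_state)
        return -1; /* stands in for -RT_EINVAL / -RT_ENOSYS */
    return led->ops->set_state(led, state);
}

/* A trivial mock "driver" backing the LED with a plain int */
static int mock_set(struct led *led, enum led_state s)
{
    *(int *)led->priv = (s == LED_ON);
    return 0;
}

static const struct led_ops mock_ops = { .set_state = mock_set };
```

In the real header the same split appears as `rt_led_set_state()` (core) versus `ops->set_state` (driver), with the spinlock and blink-timer bookkeeping layered on top.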

View File

@ -73,6 +73,7 @@ struct rt_pci_ep_msix_tbl
};
struct rt_pci_ep_ops;
struct rt_pci_ep_mem;
struct rt_pci_ep
{
@ -84,6 +85,9 @@ struct rt_pci_ep
const struct rt_device *rc_dev;
const struct rt_pci_ep_ops *ops;
rt_size_t mems_nr;
struct rt_pci_ep_mem *mems;
rt_uint8_t max_functions;
RT_BITMAP_DECLARE(functions_map, 8);
rt_list_t epf_nodes;
@ -92,6 +96,16 @@ struct rt_pci_ep
void *priv;
};
struct rt_pci_ep_mem
{
rt_ubase_t cpu_addr;
rt_size_t size;
rt_size_t page_size;
rt_bitmap_t *map;
rt_size_t bits;
};
struct rt_pci_epf
{
rt_list_t list;
@ -170,6 +184,16 @@ rt_err_t rt_pci_ep_stop(struct rt_pci_ep *ep);
rt_err_t rt_pci_ep_register(struct rt_pci_ep *ep);
rt_err_t rt_pci_ep_unregister(struct rt_pci_ep *ep);
rt_err_t rt_pci_ep_mem_array_init(struct rt_pci_ep *ep,
struct rt_pci_ep_mem *mems, rt_size_t mems_nr);
rt_err_t rt_pci_ep_mem_init(struct rt_pci_ep *ep,
rt_ubase_t cpu_addr, rt_size_t size, rt_size_t page_size);
void *rt_pci_ep_mem_alloc(struct rt_pci_ep *ep,
rt_ubase_t *out_cpu_addr, rt_size_t size);
void rt_pci_ep_mem_free(struct rt_pci_ep *ep,
void *vaddr, rt_ubase_t cpu_addr, rt_size_t size);
rt_err_t rt_pci_ep_add_epf(struct rt_pci_ep *ep, struct rt_pci_epf *epf);
rt_err_t rt_pci_ep_remove_epf(struct rt_pci_ep *ep, struct rt_pci_epf *epf);

View File

@ -0,0 +1,153 @@
/*
* Copyright (c) 2006-2023, RT-Thread Development Team
*
* SPDX-License-Identifier: Apache-2.0
*
* Change Logs:
* Date Author Notes
* 2023-09-23 GuEe-GUI first version
*/
#ifndef __REGULATOR_H__
#define __REGULATOR_H__
#include <ref.h>
#include <rthw.h>
#include <rtthread.h>
#include <drivers/misc.h>
#define RT_REGULATOR_UVOLT_INVALID (((int)(RT_UINT32_MAX >> 1)))
struct rt_regulator_param
{
const char *name;
int min_uvolt; /* In uV */
int max_uvolt; /* In uV */
int min_uamp; /* In uA */
int max_uamp; /* In uA */
int ramp_delay; /* In uV/usec */
int enable_delay; /* In usec */
int off_on_delay; /* In usec */
rt_uint32_t enable_active_high:1;
rt_uint32_t boot_on:1; /* Is enabled on boot */
rt_uint32_t always_on:1; /* Must be enabled */
rt_uint32_t soft_start:1; /* Ramp voltage slowly */
rt_uint32_t pull_down:1; /* Pull down resistor when regulator off */
rt_uint32_t over_current_protection:1; /* Auto disable on over current */
};
struct rt_regulator_ops;
struct rt_regulator_node
{
rt_list_t list;
rt_list_t children_nodes;
struct rt_device *dev;
struct rt_regulator_node *parent;
const char *supply_name;
const struct rt_regulator_ops *ops;
struct rt_ref ref;
rt_atomic_t enabled_count;
const struct rt_regulator_param *param;
rt_list_t notifier_nodes;
void *priv;
};
/*
* NOTE: Power regulator control is dangerous work. We don't want non-internal
* consumer could access the power regulator tree without regulator's API. So
* we defined the `rt_regulator` member in core instead of here.
*/
struct rt_regulator;
#define RT_REGULATOR_MODE_INVALID 0
#define RT_REGULATOR_MODE_FAST RT_BIT(0)
#define RT_REGULATOR_MODE_NORMAL RT_BIT(1)
#define RT_REGULATOR_MODE_IDLE RT_BIT(2)
#define RT_REGULATOR_MODE_STANDBY RT_BIT(3)
struct rt_regulator_ops
{
rt_err_t (*enable)(struct rt_regulator_node *reg);
rt_err_t (*disable)(struct rt_regulator_node *reg);
rt_bool_t (*is_enabled)(struct rt_regulator_node *reg);
rt_err_t (*set_voltage)(struct rt_regulator_node *reg, int min_uvolt, int max_uvolt);
int (*get_voltage)(struct rt_regulator_node *reg);
rt_err_t (*set_mode)(struct rt_regulator_node *reg, rt_uint32_t mode);
rt_int32_t (*get_mode)(struct rt_regulator_node *reg);
rt_err_t (*set_ramp_delay)(struct rt_regulator_node *reg, int ramp);
rt_uint32_t (*enable_time)(struct rt_regulator_node *reg);
};
struct rt_regulator_notifier;
#define RT_REGULATOR_MSG_ENABLE RT_BIT(0)
#define RT_REGULATOR_MSG_DISABLE RT_BIT(1)
#define RT_REGULATOR_MSG_VOLTAGE_CHANGE RT_BIT(2)
#define RT_REGULATOR_MSG_VOLTAGE_CHANGE_ERR RT_BIT(3)
union rt_regulator_notifier_args
{
struct
{
int old_uvolt;
int min_uvolt;
int max_uvolt;
};
};
typedef rt_err_t (*rt_regulator_notifier_callback)(struct rt_regulator_notifier *notifier,
rt_ubase_t msg, void *data);
struct rt_regulator_notifier
{
rt_list_t list;
struct rt_regulator *regulator;
rt_regulator_notifier_callback callback;
void *priv;
};
rt_err_t rt_regulator_register(struct rt_regulator_node *reg_np);
rt_err_t rt_regulator_unregister(struct rt_regulator_node *reg_np);
rt_err_t rt_regulator_notifier_register(struct rt_regulator *reg,
struct rt_regulator_notifier *notifier);
rt_err_t rt_regulator_notifier_unregister(struct rt_regulator *reg,
struct rt_regulator_notifier *notifier);
struct rt_regulator *rt_regulator_get(struct rt_device *dev, const char *id);
void rt_regulator_put(struct rt_regulator *reg);
rt_err_t rt_regulator_enable(struct rt_regulator *reg);
rt_err_t rt_regulator_disable(struct rt_regulator *reg);
rt_bool_t rt_regulator_is_enabled(struct rt_regulator *reg);
rt_bool_t rt_regulator_is_supported_voltage(struct rt_regulator *reg, int min_uvolt, int max_uvolt);
rt_err_t rt_regulator_set_voltage(struct rt_regulator *reg, int min_uvolt, int max_uvolt);
int rt_regulator_get_voltage(struct rt_regulator *reg);
rt_err_t rt_regulator_set_mode(struct rt_regulator *reg, rt_uint32_t mode);
rt_int32_t rt_regulator_get_mode(struct rt_regulator *reg);
rt_inline rt_err_t rt_regulator_set_voltage_triplet(struct rt_regulator *reg,
int min_uvolt, int target_uvolt, int max_uvolt)
{
if (!rt_regulator_set_voltage(reg, target_uvolt, max_uvolt))
{
return RT_EOK;
}
return rt_regulator_set_voltage(reg, min_uvolt, max_uvolt);
}
#endif /* __REGULATOR_H__ */
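`rt_regulator_set_voltage_triplet()` above encodes a two-step fallback: first request the narrow `[target, max]` window, and only if the hardware rejects it retry with the wider `[min, max]` window. A stand-alone sketch of that logic (the `set_voltage` stub here is purely illustrative, accepting only ranges that contain 3.3 V):

```c
#include <assert.h>

/* Stub regulator: accepts a range only if it contains 3300000 uV.
 * Purely illustrative; a real driver would program hardware here. */
static int set_voltage(int min_uv, int max_uv)
{
    return (min_uv <= 3300000 && 3300000 <= max_uv) ? 0 : -1;
}

/* Same fallback order as rt_regulator_set_voltage_triplet() */
static int set_voltage_triplet(int min_uv, int target_uv, int max_uv)
{
    if (!set_voltage(target_uv, max_uv))
        return 0;                       /* preferred window worked */
    return set_voltage(min_uv, max_uv); /* fall back to the full window */
}
```

The preferred window biases the regulator toward the target voltage while still letting the consumer run anywhere inside `[min, max]` if the target is unreachable.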

View File

@ -51,6 +51,10 @@ extern "C" {
#endif /* RT_ATA_AHCI */
#endif /* RT_USING_ATA */
#ifdef RT_USING_LED
#include "drivers/led.h"
#endif
#ifdef RT_USING_MBOX
#include "drivers/mailbox.h"
#endif /* RT_USING_MBOX */
@ -89,6 +93,10 @@ extern "C" {
#include "drivers/pic.h"
#endif /* RT_USING_PIC */
#ifdef RT_USING_REGULATOR
#include "drivers/regulator.h"
#endif /* RT_USING_REGULATOR */
#ifdef RT_USING_RESET
#include "drivers/reset.h"
#endif /* RT_USING_RESET */

View File

@ -0,0 +1,15 @@
menuconfig RT_USING_LED
bool "Using Light Emitting Diode (LED) device drivers"
depends on RT_USING_DM
default n
config RT_LED_GPIO
bool "GPIO connected LEDs Support"
depends on RT_USING_LED
depends on RT_USING_PINCTRL
depends on RT_USING_OFW
default n
if RT_USING_LED
osource "$(SOC_DM_LED_DIR)/Kconfig"
endif

View File

@ -0,0 +1,18 @@
from building import *
group = []
if not GetDepend(['RT_USING_DM', 'RT_USING_LED']):
Return('group')
cwd = GetCurrentDir()
CPPPATH = [cwd + '/../include']
src = ['led.c']
if GetDepend(['RT_LED_GPIO']):
src += ['led-gpio.c']
group = DefineGroup('DeviceDrivers', src, depend = [''], CPPPATH = CPPPATH)
Return('group')

View File

@ -0,0 +1,228 @@
/*
* Copyright (c) 2006-2022, RT-Thread Development Team
*
* SPDX-License-Identifier: Apache-2.0
*
* Change Logs:
* Date Author Notes
* 2022-3-08 GuEe-GUI the first version
*/
#include <rtthread.h>
#include <rtdevice.h>
#define DBG_TAG "led.gpio"
#define DBG_LVL DBG_INFO
#include <rtdbg.h>
struct gpio_led
{
struct rt_led_device parent;
rt_base_t pin;
rt_uint8_t active_val;
};
#define raw_to_gpio_led(raw) rt_container_of(raw, struct gpio_led, parent)
static rt_err_t gpio_led_set_state(struct rt_led_device *led, enum rt_led_state state)
{
rt_err_t err = RT_EOK;
struct gpio_led *gled = raw_to_gpio_led(led);
rt_pin_mode(gled->pin, PIN_MODE_OUTPUT);
switch (state)
{
case RT_LED_S_OFF:
rt_pin_write(gled->pin, !gled->active_val);
break;
case RT_LED_S_ON:
rt_pin_write(gled->pin, gled->active_val);
break;
case RT_LED_S_TOGGLE:
err = led->ops->get_state(led, &state);
if (!err)
{
err = led->ops->set_state(led, state == RT_LED_S_OFF ? RT_LED_S_ON : RT_LED_S_OFF);
}
break;
default:
return -RT_ENOSYS;
}
return err;
}
static rt_err_t gpio_led_get_state(struct rt_led_device *led, enum rt_led_state *out_state)
{
struct gpio_led *gled = raw_to_gpio_led(led);
rt_base_t level = rt_pin_read(gled->pin);
if (level != PIN_LOW && level != PIN_HIGH)
{
return -RT_ERROR;
}
/* Honour wiring polarity: "on" is whichever level active_val names */
*out_state = (level == (rt_base_t)gled->active_val) ? RT_LED_S_ON : RT_LED_S_OFF;
return RT_EOK;
}
const static struct rt_led_ops gpio_led_ops =
{
.set_state = gpio_led_set_state,
.get_state = gpio_led_get_state,
};
static rt_err_t ofw_append_gpio_led(struct rt_ofw_node *np)
{
rt_err_t err;
enum rt_led_state led_state = RT_LED_S_OFF;
const char *propname, *state, *trigger;
struct gpio_led *gled = rt_malloc(sizeof(*gled));
if (!gled)
{
return -RT_ENOMEM;
}
gled->pin = rt_ofw_get_named_pin(np, RT_NULL, 0, RT_NULL, &gled->active_val);
if (gled->pin < 0)
{
err = gled->pin;
goto _fail;
}
gled->parent.ops = &gpio_led_ops;
if ((err = rt_hw_led_register(&gled->parent)))
{
goto _fail;
}
if (!rt_ofw_prop_read_string(np, "default-state", &state))
{
if (!rt_strcmp(state, "on"))
{
led_state = RT_LED_S_ON;
}
}
if ((propname = rt_ofw_get_prop_fuzzy_name(np, "default-trigger$")))
{
if (!rt_ofw_prop_read_string(np, propname, &trigger))
{
if (!rt_strcmp(trigger, "heartbeat") ||
!rt_strcmp(trigger, "timer"))
{
led_state = RT_LED_S_BLINK;
}
}
}
rt_led_set_state(&gled->parent, led_state);
rt_ofw_data(np) = &gled->parent;
return RT_EOK;
_fail:
rt_free(gled);
return err;
}
static rt_err_t gpio_led_probe(struct rt_platform_device *pdev)
{
rt_bool_t pinctrl_apply = RT_FALSE;
struct rt_ofw_node *led_np, *np = pdev->parent.ofw_node;
if (rt_ofw_prop_read_bool(np, "pinctrl-0"))
{
pinctrl_apply = RT_TRUE;
rt_pin_ctrl_confs_apply_by_name(&pdev->parent, RT_NULL);
}
rt_ofw_foreach_available_child_node(np, led_np)
{
rt_err_t err = ofw_append_gpio_led(led_np);
if (err == -RT_ENOMEM)
{
rt_ofw_node_put(led_np);
return err;
}
else if (err)
{
LOG_E("%s: create LED fail", rt_ofw_node_full_name(led_np));
continue;
}
if (!pinctrl_apply)
{
struct rt_device dev_tmp;
dev_tmp.ofw_node = led_np;
rt_pin_ctrl_confs_apply_by_name(&dev_tmp, RT_NULL);
}
}
return RT_EOK;
}
static rt_err_t gpio_led_remove(struct rt_platform_device *pdev)
{
struct gpio_led *gled;
struct rt_led_device *led_dev;
struct rt_ofw_node *led_np, *np = pdev->parent.ofw_node;
rt_ofw_foreach_available_child_node(np, led_np)
{
led_dev = rt_ofw_data(led_np);
if (!led_dev)
{
continue;
}
gled = rt_container_of(led_dev, struct gpio_led, parent);
rt_ofw_data(led_np) = RT_NULL;
rt_hw_led_unregister(&gled->parent);
rt_free(gled);
}
return RT_EOK;
}
static const struct rt_ofw_node_id gpio_led_ofw_ids[] =
{
{ .compatible = "gpio-leds" },
{ /* sentinel */ }
};
static struct rt_platform_driver gpio_led_driver =
{
.name = "led-gpio",
.ids = gpio_led_ofw_ids,
.probe = gpio_led_probe,
.remove = gpio_led_remove,
};
RT_PLATFORM_DRIVER_EXPORT(gpio_led_driver);
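The polarity handling in `gpio_led_set_state()` above is worth calling out: the driver stores the pin level that means "on" (`active_val`, read from the device tree), so ON writes `active_val` and OFF writes its logical inverse, which covers both active-high and active-low wiring. A minimal sketch (names are illustrative):

```c
#include <assert.h>

/* Maps a desired logical LED state to the electrical pin level,
 * given the level that means "on" for this board's wiring. */
static int pin_level_for(int want_on, int active_val)
{
    return want_on ? active_val : !active_val;
}
```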

View File

@ -0,0 +1,349 @@
/*
* Copyright (c) 2006-2022, RT-Thread Development Team
*
* SPDX-License-Identifier: Apache-2.0
*
* Change Logs:
* Date Author Notes
* 2022-3-08 GuEe-GUI the first version
*/
#include <rtthread.h>
#define DBG_TAG "rtdm.led"
#define DBG_LVL DBG_INFO
#include <rtdbg.h>
#include <drivers/led.h>
#include <drivers/core/rtdm.h>
struct blink_timer
{
rt_bool_t toggle;
rt_bool_t enabled;
struct rt_timer timer;
};
static struct rt_dm_ida led_ida = RT_DM_IDA_INIT(LED);
static const char * const _led_states[] =
{
[RT_LED_S_OFF] = "off",
[RT_LED_S_ON] = "on",
[RT_LED_S_TOGGLE] = "toggle",
[RT_LED_S_BLINK] = "blink",
};
static rt_ssize_t _led_read(rt_device_t dev, rt_off_t pos, void *buffer, rt_size_t size)
{
rt_ssize_t res;
rt_size_t state_len;
enum rt_led_state state;
struct rt_led_device *led = rt_container_of(dev, struct rt_led_device, parent);
if ((res = rt_led_get_state(led, &state)))
{
return res;
}
state_len = rt_strlen(_led_states[state]);
if (pos < state_len)
{
size = rt_min_t(rt_size_t, size, state_len - pos + 1);
rt_strncpy(buffer, &_led_states[state][pos], size);
((char *)buffer)[size - 1] = '\0';
return size;
}
else
{
return 0;
}
}
static rt_ssize_t _led_write(rt_device_t dev, rt_off_t pos, const void *buffer, rt_size_t size)
{
rt_uint32_t brightness = 0;
const char *value = buffer;
struct rt_led_device *led = rt_container_of(dev, struct rt_led_device, parent);
for (int i = 0; i < RT_ARRAY_SIZE(_led_states); ++i)
{
if (!rt_strncmp(_led_states[i], buffer, size))
{
return rt_led_set_state(led, i) ? : size;
}
}
while (*value)
{
if (*value < '0' || *value > '9')
{
return -RT_EINVAL;
}
brightness *= 10;
brightness += *value - '0';
++value;
}
rt_led_set_brightness(led, brightness);
return size;
}
#ifdef RT_USING_DEVICE_OPS
const static struct rt_device_ops _led_ops =
{
.read = _led_read,
.write = _led_write,
};
#endif
static void _led_blink_timerout(void *param)
{
struct rt_led_device *led = param;
struct blink_timer *btimer = led->sysdata;
if (btimer->toggle)
{
led->ops->set_state(led, RT_LED_S_OFF);
}
else
{
led->ops->set_state(led, RT_LED_S_ON);
}
btimer->toggle = !btimer->toggle;
}
rt_err_t rt_hw_led_register(struct rt_led_device *led)
{
rt_err_t err;
int device_id;
const char *dev_name;
struct blink_timer *btimer = RT_NULL;
if (!led || !led->ops)
{
return -RT_EINVAL;
}
if ((device_id = rt_dm_ida_alloc(&led_ida)) < 0)
{
return -RT_EFULL;
}
rt_dm_dev_set_name(&led->parent, "led%u", device_id);
dev_name = rt_dm_dev_get_name(&led->parent);
led->sysdata = RT_NULL;
rt_spin_lock_init(&led->spinlock);
if (!led->ops->set_period && led->ops->set_state)
{
btimer = rt_malloc(sizeof(*btimer));
if (!btimer)
{
LOG_E("%s create blink timer failed", dev_name);
err = -RT_ENOMEM;
goto _fail;
}
led->sysdata = btimer;
btimer->toggle = RT_FALSE;
btimer->enabled = RT_FALSE;
rt_timer_init(&btimer->timer, dev_name, _led_blink_timerout, led,
rt_tick_from_millisecond(500), RT_TIMER_FLAG_PERIODIC);
}
led->parent.type = RT_Device_Class_Char;
#ifdef RT_USING_DEVICE_OPS
led->parent.ops = &_led_ops;
#else
led->parent.read = _led_read;
led->parent.write = _led_write;
#endif
led->parent.master_id = led_ida.master_id;
led->parent.device_id = device_id;
if ((err = rt_device_register(&led->parent, dev_name, RT_DEVICE_FLAG_RDWR)))
{
goto _fail;
}
return RT_EOK;
_fail:
rt_dm_ida_free(&led_ida, device_id);
if (btimer)
{
rt_timer_detach(&btimer->timer);
rt_free(btimer);
led->sysdata = RT_NULL;
}
return err;
}
rt_err_t rt_hw_led_unregister(struct rt_led_device *led)
{
if (!led)
{
return -RT_EINVAL;
}
rt_led_set_state(led, RT_LED_S_OFF);
if (led->sysdata)
{
struct blink_timer *btimer = led->sysdata;
rt_timer_detach(&btimer->timer);
rt_free(btimer);
}
rt_dm_ida_free(&led_ida, led->parent.device_id);
rt_device_unregister(&led->parent);
return RT_EOK;
}
rt_err_t rt_led_set_state(struct rt_led_device *led, enum rt_led_state state)
{
rt_err_t err;
struct blink_timer *btimer;
if (!led)
{
return -RT_EINVAL;
}
if (!led->ops->set_state)
{
return -RT_ENOSYS;
}
rt_spin_lock(&led->spinlock);
btimer = led->sysdata;
if (btimer && btimer->enabled)
{
rt_timer_stop(&btimer->timer);
}
err = led->ops->set_state(led, state);
if (state == RT_LED_S_BLINK)
{
if (err == -RT_ENOSYS && btimer && !btimer->enabled)
{
btimer->enabled = RT_TRUE;
rt_timer_start(&btimer->timer);
}
}
else if (btimer && btimer->enabled)
{
if (err)
{
rt_timer_start(&btimer->timer);
}
else
{
btimer->enabled = RT_FALSE;
}
}
rt_spin_unlock(&led->spinlock);
return err;
}
rt_err_t rt_led_get_state(struct rt_led_device *led, enum rt_led_state *out_state)
{
rt_err_t err;
if (!led || !out_state)
{
return -RT_EINVAL;
}
if (!led->ops->get_state)
{
return -RT_ENOSYS;
}
rt_spin_lock(&led->spinlock);
err = led->ops->get_state(led, out_state);
rt_spin_unlock(&led->spinlock);
return err;
}
rt_err_t rt_led_set_period(struct rt_led_device *led, rt_uint32_t period_ms)
{
rt_err_t err;
if (!led)
{
return -RT_EINVAL;
}
if (!led->ops->set_period && !led->sysdata)
{
return -RT_ENOSYS;
}
rt_spin_lock(&led->spinlock);
if (led->ops->set_period)
{
err = led->ops->set_period(led, period_ms);
}
else
{
struct blink_timer *btimer = led->sysdata;
rt_tick_t tick = rt_tick_from_millisecond(period_ms);
err = rt_timer_control(&btimer->timer, RT_TIMER_CTRL_SET_TIME, &tick);
}
rt_spin_unlock(&led->spinlock);
return err;
}
rt_err_t rt_led_set_brightness(struct rt_led_device *led, rt_uint32_t brightness)
{
rt_err_t err;
if (!led)
{
return -RT_EINVAL;
}
if (!led->ops->set_brightness)
{
return -RT_ENOSYS;
}
rt_spin_lock(&led->spinlock);
err = led->ops->set_brightness(led, brightness);
rt_spin_unlock(&led->spinlock);
return err;
}
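The software-blink fallback above hinges on `_led_blink_timerout()`: when a driver has no hardware `set_period`, the core arms a periodic timer whose callback alternates ON/OFF through a toggle flag. A stand-alone model of that callback (names are illustrative; `led_on` stands in for the driver's `set_state()` effect):

```c
#include <assert.h>
#include <stdbool.h>

struct blink
{
    bool toggle; /* false: next edge drives ON, true: next edge drives OFF */
    int led_on;  /* stands in for the driver's set_state() effect */
};

/* Same order as _led_blink_timerout(): act on the current flag, then flip */
static void blink_timeout(struct blink *b)
{
    b->led_on = b->toggle ? 0 : 1;
    b->toggle = !b->toggle;
}
```

Starting from `toggle == false`, the first expiry turns the LED on, the second off, and so on, at the 500 ms default period set in `rt_hw_led_register()`.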

View File

@ -8,7 +8,7 @@ if not GetDepend(['RT_PCI_ENDPOINT']):
cwd = GetCurrentDir()
CPPPATH = [cwd + '/../../include']
src = ['endpoint.c']
src = ['endpoint.c', 'mem.c']
group = DefineGroup('DeviceDrivers', src, depend = [''], CPPPATH = CPPPATH)

View File

@ -0,0 +1,205 @@
/*
* Copyright (c) 2006-2022, RT-Thread Development Team
*
* SPDX-License-Identifier: Apache-2.0
*
* Change Logs:
* Date Author Notes
* 2022-08-25 GuEe-GUI first version
*/
#include <drivers/pci_endpoint.h>
#define DBG_TAG "pci.ep.mem"
#define DBG_LVL DBG_INFO
#include <rtdbg.h>
rt_err_t rt_pci_ep_mem_array_init(struct rt_pci_ep *ep,
struct rt_pci_ep_mem *mems, rt_size_t mems_nr)
{
rt_size_t idx;
rt_err_t err = RT_EOK;
if (!ep || !mems)
{
return -RT_EINVAL;
}
rt_mutex_take(&ep->lock, RT_WAITING_FOREVER);
ep->mems_nr = mems_nr;
ep->mems = rt_calloc(mems_nr, sizeof(*ep->mems));
if (!ep->mems)
{
rt_mutex_release(&ep->lock);
return -RT_ENOMEM;
}
for (idx = 0; idx < mems_nr; ++idx)
{
struct rt_pci_ep_mem *mem = &ep->mems[idx];
mem->cpu_addr = mems->cpu_addr;
mem->size = mems->size;
mem->page_size = mems->page_size;
mem->bits = mems->size / mems->page_size;
mem->map = rt_calloc(RT_BITMAP_LEN(mem->bits), sizeof(*mem->map));
if (!mem->map)
{
err = -RT_ENOMEM;
goto _out_lock;
}
}
_out_lock:
if (err)
{
while (idx-- > 0)
{
rt_free(ep->mems[idx].map);
}
rt_free(ep->mems);
ep->mems_nr = 0;
ep->mems = RT_NULL;
}
rt_mutex_release(&ep->lock);
return err;
}
rt_err_t rt_pci_ep_mem_init(struct rt_pci_ep *ep,
rt_ubase_t cpu_addr, rt_size_t size, rt_size_t page_size)
{
struct rt_pci_ep_mem mem;
if (!ep)
{
return -RT_EINVAL;
}
mem.cpu_addr = cpu_addr;
mem.size = size;
mem.page_size = page_size;
return rt_pci_ep_mem_array_init(ep, &mem, 1);
}
static rt_ubase_t bitmap_region_alloc(struct rt_pci_ep_mem *mem, rt_size_t size)
{
rt_size_t bit, next_bit, end_bit, max_bits;
size = (size + mem->page_size - 1) / mem->page_size; /* pages, rounded up */
max_bits = mem->bits - size + 1; /* last valid start index + 1 */
rt_bitmap_for_each_clear_bit(mem->map, bit, max_bits)
{
end_bit = bit + size;
for (next_bit = bit + 1; next_bit < end_bit; ++next_bit)
{
if (rt_bitmap_test_bit(mem->map, next_bit))
{
bit = next_bit;
goto _next;
}
}
if (next_bit == end_bit)
{
while (next_bit-- > bit)
{
rt_bitmap_set_bit(mem->map, next_bit);
}
return mem->cpu_addr + bit * mem->page_size;
}
_next:;
}
return ~(rt_ubase_t)0;
}
static void bitmap_region_free(struct rt_pci_ep_mem *mem,
rt_ubase_t cpu_addr, rt_size_t size)
{
rt_size_t bit = (cpu_addr - mem->cpu_addr) / mem->page_size, end_bit;
size = (size + mem->page_size - 1) / mem->page_size; /* pages, rounded up */
end_bit = bit + size;
for (; bit < end_bit; ++bit)
{
rt_bitmap_clear_bit(mem->map, bit);
}
}
void *rt_pci_ep_mem_alloc(struct rt_pci_ep *ep,
rt_ubase_t *out_cpu_addr, rt_size_t size)
{
void *vaddr = RT_NULL;
if (!ep || !out_cpu_addr)
{
return vaddr;
}
rt_mutex_take(&ep->lock, RT_WAITING_FOREVER);
for (rt_size_t idx = 0; idx < ep->mems_nr; ++idx)
{
rt_ubase_t cpu_addr;
struct rt_pci_ep_mem *mem = &ep->mems[idx];
cpu_addr = bitmap_region_alloc(mem, size);
if (cpu_addr != ~(rt_ubase_t)0)
{
vaddr = rt_ioremap((void *)cpu_addr, size);
if (!vaddr)
{
bitmap_region_free(mem, cpu_addr, size);
/* Try next memory */
continue;
}
*out_cpu_addr = cpu_addr;
break;
}
}
rt_mutex_release(&ep->lock);
return vaddr;
}
void rt_pci_ep_mem_free(struct rt_pci_ep *ep,
void *vaddr, rt_ubase_t cpu_addr, rt_size_t size)
{
if (!ep || !vaddr || !size)
{
return;
}
rt_mutex_take(&ep->lock, RT_WAITING_FOREVER);
for (rt_size_t idx = 0; idx < ep->mems_nr; ++idx)
{
struct rt_pci_ep_mem *mem = &ep->mems[idx];
if (mem->cpu_addr <= cpu_addr &&
mem->cpu_addr + mem->size >= cpu_addr + size)
{
rt_iounmap(vaddr);
bitmap_region_free(mem, cpu_addr, size);
break;
}
}
rt_mutex_release(&ep->lock);
}
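The allocator above is a first-fit search for a run of clear bits, one bit per page: scan for a clear bit, verify the next `size` bits are also clear, and mark the whole run used on success. A minimal stand-alone model (a byte-per-page array replaces the `rt_bitmap_*` word-packed bitmap; all names are illustrative):

```c
#include <assert.h>

#define NPAGES 16

static unsigned char map[NPAGES]; /* 0 = free page, 1 = used page */

/* Returns the first page index of a free run of `pages`, or -1 */
static int region_alloc(int pages)
{
    for (int bit = 0; bit + pages <= NPAGES; ++bit)
    {
        int next;
        for (next = bit; next < bit + pages; ++next)
        {
            if (map[next])
                break; /* run broken; keep scanning */
        }
        if (next == bit + pages)
        {
            while (next-- > bit)
                map[next] = 1; /* mark the whole run used */
            return bit;
        }
    }
    return -1;
}

static void region_free(int bit, int pages)
{
    for (int end = bit + pages; bit < end; ++bit)
        map[bit] = 0;
}
```

As in `mem.c`, freeing clears the run so a later allocation can reuse the hole, and an oversized request simply fails.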

View File

@ -8,3 +8,5 @@ config RT_PCI_HOST_GENERIC
depends on RT_PCI_ECAM
select RT_PCI_HOST_COMMON
default y
rsource "dw/Kconfig"

View File

@ -0,0 +1,13 @@
config RT_PCI_DW
bool "DesignWare-based PCIe"
depends on RT_MFD_SYSCON
depends on RT_USING_DMA
default n
config RT_PCI_DW_HOST
bool
depends on RT_PCI_DW
config RT_PCI_DW_EP
bool
depends on RT_PCI_DW

View File

@ -0,0 +1,21 @@
from building import *
group = []
if not GetDepend(['RT_PCI_DW']):
Return('group')
cwd = GetCurrentDir()
CPPPATH = [cwd + '/../../../include']
src = ['pcie-dw.c', 'pcie-dw_platfrom.c']
if GetDepend(['RT_PCI_DW_HOST']):
src += ['pcie-dw_host.c']
if GetDepend(['RT_PCI_DW_EP']):
src += ['pcie-dw_ep.c']
group = DefineGroup('DeviceDrivers', src, depend = [''], CPPPATH = CPPPATH)
Return('group')

View File

@ -0,0 +1,645 @@
/*
* Copyright (c) 2006-2023, RT-Thread Development Team
*
* SPDX-License-Identifier: Apache-2.0
*
* Change Logs:
* Date Author Notes
* 2023-09-23 GuEe-GUI first version
*/
#define DBG_TAG "pcie.dw"
#define DBG_LVL DBG_INFO
#include <rtdbg.h>
#include "pcie-dw.h"
static rt_uint8_t __dw_pcie_find_next_cap(struct dw_pcie *pci,
rt_uint8_t cap_ptr, rt_uint8_t cap)
{
rt_uint16_t reg;
rt_uint8_t cap_id, next_cap_ptr;
if (!cap_ptr)
{
return 0;
}
reg = dw_pcie_readw_dbi(pci, cap_ptr);
cap_id = (reg & 0x00ff);
if (cap_id > PCIY_MAX)
{
return 0;
}
if (cap_id == cap)
{
return cap_ptr;
}
next_cap_ptr = (reg & 0xff00) >> 8;
return __dw_pcie_find_next_cap(pci, next_cap_ptr, cap);
}
rt_uint8_t dw_pcie_find_capability(struct dw_pcie *pci, rt_uint8_t cap)
{
rt_uint16_t reg;
rt_uint8_t next_cap_ptr;
reg = dw_pcie_readw_dbi(pci, PCIR_CAP_PTR);
next_cap_ptr = (reg & 0x00ff);
return __dw_pcie_find_next_cap(pci, next_cap_ptr, cap);
}
static rt_uint16_t dw_pcie_find_next_ext_capability(struct dw_pcie *pci,
rt_uint16_t start, rt_uint8_t cap)
{
rt_uint32_t header;
int ttl, pos = PCI_REGMAX + 1;
/* minimum 8 bytes per capability */
ttl = ((PCIE_REGMAX + 1) - (PCI_REGMAX + 1)) / 8;
if (start)
{
pos = start;
}
header = dw_pcie_readl_dbi(pci, pos);
/*
* If we have no capabilities, this is indicated by cap ID,
* cap version and next pointer all being 0.
*/
if (header == 0)
{
return 0;
}
while (ttl-- > 0)
{
if (PCI_EXTCAP_ID(header) == cap && pos != start)
{
return pos;
}
pos = PCI_EXTCAP_NEXTPTR(header);
if (pos < PCI_REGMAX + 1)
{
break;
}
header = dw_pcie_readl_dbi(pci, pos);
}
return 0;
}
rt_uint16_t dw_pcie_find_ext_capability(struct dw_pcie *pci, rt_uint8_t cap)
{
return dw_pcie_find_next_ext_capability(pci, 0, cap);
}
rt_err_t dw_pcie_read(void *addr, rt_size_t size, rt_uint32_t *out_val)
{
/* Check aligned */
if ((rt_ubase_t)addr & ((rt_ubase_t)size - 1))
{
*out_val = 0;
return -RT_EINVAL;
}
if (size == 4)
{
*out_val = HWREG32(addr);
}
else if (size == 2)
{
*out_val = HWREG16(addr);
}
else if (size == 1)
{
*out_val = HWREG8(addr);
}
else
{
*out_val = 0;
return -RT_EINVAL;
}
return RT_EOK;
}
rt_err_t dw_pcie_write(void *addr, rt_size_t size, rt_uint32_t val)
{
/* Check aligned */
if ((rt_ubase_t)addr & ((rt_ubase_t)size - 1))
{
return -RT_EINVAL;
}
if (size == 4)
{
HWREG32(addr) = val;
}
else if (size == 2)
{
HWREG16(addr) = val;
}
else if (size == 1)
{
HWREG8(addr) = val;
}
else
{
return -RT_EINVAL;
}
return RT_EOK;
}
rt_uint32_t dw_pcie_read_dbi(struct dw_pcie *pci, rt_uint32_t reg, rt_size_t size)
{
rt_err_t err;
rt_uint32_t val = 0;
if (pci->ops->read_dbi)
{
return pci->ops->read_dbi(pci, pci->dbi_base, reg, size);
}
if ((err = dw_pcie_read(pci->dbi_base + reg, size, &val)))
{
LOG_E("Read DBI address error = %s", rt_strerror(err));
}
return val;
}
void dw_pcie_write_dbi(struct dw_pcie *pci, rt_uint32_t reg, rt_size_t size, rt_uint32_t val)
{
rt_err_t err;
if (pci->ops->write_dbi)
{
pci->ops->write_dbi(pci, pci->dbi_base, reg, size, val);
return;
}
if ((err = dw_pcie_write(pci->dbi_base + reg, size, val)))
{
LOG_E("Write DBI address error = %s", rt_strerror(err));
}
}
void dw_pcie_write_dbi2(struct dw_pcie *pci, rt_uint32_t reg, rt_size_t size, rt_uint32_t val)
{
rt_err_t err;
if (pci->ops && pci->ops->write_dbi2)
{
pci->ops->write_dbi2(pci, pci->dbi_base2, reg, size, val);
return;
}
if ((err = dw_pcie_write(pci->dbi_base2 + reg, size, val)))
{
LOG_E("Write DBI2 address error = %s", rt_strerror(err));
}
}
rt_uint32_t dw_pcie_readl_atu(struct dw_pcie *pci, rt_uint32_t reg)
{
rt_err_t err;
rt_uint32_t val = 0;
if (pci->ops->read_dbi)
{
return pci->ops->read_dbi(pci, pci->atu_base, reg, 4);
}
if ((err = dw_pcie_read(pci->atu_base + reg, 4, &val)))
{
LOG_E("Read ATU address error = %s", rt_strerror(err));
}
return val;
}
void dw_pcie_writel_atu(struct dw_pcie *pci, rt_uint32_t reg, rt_uint32_t val)
{
rt_err_t err;
if (pci->ops->write_dbi)
{
pci->ops->write_dbi(pci, pci->atu_base, reg, 4, val);
return;
}
if ((err = dw_pcie_write(pci->atu_base + reg, 4, val)))
{
LOG_E("Write ATU address error = %s", rt_strerror(err));
}
}
static void dw_pcie_prog_outbound_atu_unroll(struct dw_pcie *pci, rt_uint8_t func_no,
int index, int type, rt_uint64_t cpu_addr, rt_uint64_t pci_addr, rt_size_t size)
{
rt_uint64_t limit_addr = cpu_addr + size - 1;
dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_LOWER_BASE,
rt_lower_32_bits(cpu_addr));
dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_UPPER_BASE,
rt_upper_32_bits(cpu_addr));
dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_LOWER_LIMIT,
rt_lower_32_bits(limit_addr));
dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_UPPER_LIMIT,
rt_upper_32_bits(limit_addr));
dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_LOWER_TARGET,
rt_lower_32_bits(pci_addr));
dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_UPPER_TARGET,
rt_upper_32_bits(pci_addr));
dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL1,
type | PCIE_ATU_FUNC_NUM(func_no));
dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL2,
PCIE_ATU_ENABLE);
/*
* Make sure ATU enable takes effect before any subsequent config
* and I/O accesses.
*/
for (int retries = 0; retries < LINK_WAIT_MAX_IATU_RETRIES; ++retries)
{
if (dw_pcie_readl_ob_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL2) & PCIE_ATU_ENABLE)
{
return;
}
rt_thread_mdelay(LINK_WAIT_IATU);
}
LOG_E("Outbound iATU is not being enabled");
}
static void __dw_pcie_prog_outbound_atu(struct dw_pcie *pci, rt_uint8_t func_no,
int index, int type, rt_uint64_t cpu_addr, rt_uint64_t pci_addr, rt_size_t size)
{
if (pci->ops->cpu_addr_fixup)
{
cpu_addr = pci->ops->cpu_addr_fixup(pci, cpu_addr);
}
if (pci->iatu_unroll_enabled & DWC_IATU_UNROLL_EN)
{
dw_pcie_prog_outbound_atu_unroll(pci, func_no,
index, type, cpu_addr, pci_addr, size);
return;
}
dw_pcie_writel_dbi(pci, PCIE_ATU_VIEWPORT, PCIE_ATU_REGION_OUTBOUND | index);
dw_pcie_writel_dbi(pci, PCIE_ATU_LOWER_BASE, rt_lower_32_bits(cpu_addr));
dw_pcie_writel_dbi(pci, PCIE_ATU_UPPER_BASE, rt_upper_32_bits(cpu_addr));
dw_pcie_writel_dbi(pci, PCIE_ATU_LIMIT, rt_lower_32_bits(cpu_addr + size - 1));
dw_pcie_writel_dbi(pci, PCIE_ATU_LOWER_TARGET, rt_lower_32_bits(pci_addr));
dw_pcie_writel_dbi(pci, PCIE_ATU_UPPER_TARGET, rt_upper_32_bits(pci_addr));
dw_pcie_writel_dbi(pci, PCIE_ATU_CR1, type | PCIE_ATU_FUNC_NUM(func_no));
dw_pcie_writel_dbi(pci, PCIE_ATU_CR2, PCIE_ATU_ENABLE);
/*
* Make sure ATU enable takes effect before any subsequent config
* and I/O accesses.
*/
for (int retries = 0; retries < LINK_WAIT_MAX_IATU_RETRIES; ++retries)
{
if (dw_pcie_readl_dbi(pci, PCIE_ATU_CR2) & PCIE_ATU_ENABLE)
{
return;
}
rt_thread_mdelay(LINK_WAIT_IATU);
}
LOG_E("Outbound iATU is not being enabled");
}
void dw_pcie_prog_outbound_atu(struct dw_pcie *pci,
int index, int type, rt_uint64_t cpu_addr, rt_uint64_t pci_addr, rt_size_t size)
{
__dw_pcie_prog_outbound_atu(pci, 0, index, type, cpu_addr, pci_addr, size);
}
void dw_pcie_prog_ep_outbound_atu(struct dw_pcie *pci, rt_uint8_t func_no,
int index, int type, rt_uint64_t cpu_addr, rt_uint64_t pci_addr, rt_size_t size)
{
__dw_pcie_prog_outbound_atu(pci, func_no, index, type, cpu_addr, pci_addr, size);
}
static rt_err_t dw_pcie_prog_inbound_atu_unroll(struct dw_pcie *pci,
rt_uint8_t func_no, int index, int bar, rt_uint64_t cpu_addr,
enum dw_pcie_aspace_type aspace_type)
{
int type;
dw_pcie_writel_ib_unroll(pci, index, PCIE_ATU_UNR_LOWER_TARGET,
rt_lower_32_bits(cpu_addr));
dw_pcie_writel_ib_unroll(pci, index, PCIE_ATU_UNR_UPPER_TARGET,
rt_upper_32_bits(cpu_addr));
switch (aspace_type)
{
case DW_PCIE_ASPACE_MEM:
type = PCIE_ATU_TYPE_MEM;
break;
case DW_PCIE_ASPACE_IO:
type = PCIE_ATU_TYPE_IO;
break;
default:
return -RT_EINVAL;
}
dw_pcie_writel_ib_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL1,
type | PCIE_ATU_FUNC_NUM(func_no));
dw_pcie_writel_ib_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL2,
PCIE_ATU_FUNC_NUM_MATCH_EN | PCIE_ATU_ENABLE |
PCIE_ATU_BAR_MODE_ENABLE | (bar << 8));
/*
* Make sure ATU enable takes effect before any subsequent config
* and I/O accesses.
*/
for (int retries = 0; retries < LINK_WAIT_MAX_IATU_RETRIES; ++retries)
{
if (dw_pcie_readl_ib_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL2) & PCIE_ATU_ENABLE)
{
return RT_EOK;
}
rt_thread_mdelay(LINK_WAIT_IATU);
}
LOG_E("Inbound iATU is not being enabled");
return -RT_EBUSY;
}
rt_err_t dw_pcie_prog_inbound_atu(struct dw_pcie *pci,
rt_uint8_t func_no, int index, int bar, rt_uint64_t cpu_addr,
enum dw_pcie_aspace_type aspace_type)
{
int type;
if (pci->iatu_unroll_enabled & DWC_IATU_UNROLL_EN)
{
return dw_pcie_prog_inbound_atu_unroll(pci, func_no,
index, bar, cpu_addr, aspace_type);
}
dw_pcie_writel_dbi(pci, PCIE_ATU_VIEWPORT, PCIE_ATU_REGION_INBOUND | index);
dw_pcie_writel_dbi(pci, PCIE_ATU_LOWER_TARGET, rt_lower_32_bits(cpu_addr));
dw_pcie_writel_dbi(pci, PCIE_ATU_UPPER_TARGET, rt_upper_32_bits(cpu_addr));
switch (aspace_type)
{
case DW_PCIE_ASPACE_MEM:
type = PCIE_ATU_TYPE_MEM;
break;
case DW_PCIE_ASPACE_IO:
type = PCIE_ATU_TYPE_IO;
break;
default:
return -RT_EINVAL;
}
dw_pcie_writel_dbi(pci, PCIE_ATU_CR1, type | PCIE_ATU_FUNC_NUM(func_no));
dw_pcie_writel_dbi(pci, PCIE_ATU_CR2, PCIE_ATU_ENABLE |
PCIE_ATU_FUNC_NUM_MATCH_EN | PCIE_ATU_BAR_MODE_ENABLE | (bar << 8));
/*
* Make sure ATU enable takes effect before any subsequent config
* and I/O accesses.
*/
for (int retries = 0; retries < LINK_WAIT_MAX_IATU_RETRIES; ++retries)
{
if (dw_pcie_readl_dbi(pci, PCIE_ATU_CR2) & PCIE_ATU_ENABLE)
{
return RT_EOK;
}
rt_thread_mdelay(LINK_WAIT_IATU);
}
LOG_E("Inbound iATU is not being enabled");
return -RT_EBUSY;
}
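/*
 * Usage sketch (illustrative only, not part of this driver): programming an
 * inbound window so that accesses hitting BAR0 of function 0 land in a local
 * memory buffer. "pci" and "buf_phys" are assumed to be provided by the
 * caller; index 0 is assumed free.
 *
 *   rt_err_t err = dw_pcie_prog_inbound_atu(pci, 0, 0, 0, buf_phys,
 *           DW_PCIE_ASPACE_MEM);
 *   if (err)
 *       LOG_E("Inbound iATU setup failed: %s", rt_strerror(err));
 */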
void dw_pcie_disable_atu(struct dw_pcie *pci, int index, enum dw_pcie_region_type type)
{
rt_uint32_t region;
switch (type)
{
case DW_PCIE_REGION_INBOUND:
region = PCIE_ATU_REGION_INBOUND;
break;
case DW_PCIE_REGION_OUTBOUND:
region = PCIE_ATU_REGION_OUTBOUND;
break;
default:
return;
}
if (pci->iatu_unroll_enabled)
{
if (region == PCIE_ATU_REGION_INBOUND)
{
dw_pcie_writel_ib_unroll(pci, index,
PCIE_ATU_UNR_REGION_CTRL2, ~(rt_uint32_t)PCIE_ATU_ENABLE);
}
else
{
dw_pcie_writel_ob_unroll(pci, index,
PCIE_ATU_UNR_REGION_CTRL2, ~(rt_uint32_t)PCIE_ATU_ENABLE);
}
}
else
{
dw_pcie_writel_dbi(pci, PCIE_ATU_VIEWPORT, region | index);
dw_pcie_writel_dbi(pci, PCIE_ATU_CR2, ~(rt_uint32_t)PCIE_ATU_ENABLE);
}
}
rt_err_t dw_pcie_wait_for_link(struct dw_pcie *pci)
{
/* Check if the link is up or not */
for (int retries = 0; retries < LINK_WAIT_MAX_RETRIES; ++retries)
{
if (dw_pcie_link_up(pci))
{
LOG_I("%s: Link up", rt_dm_dev_get_name(pci->dev));
return RT_EOK;
}
rt_hw_us_delay((LINK_WAIT_USLEEP_MIN + LINK_WAIT_USLEEP_MAX) >> 1);
}
LOG_I("PHY link never came up");
return -RT_ETIMEOUT;
}
rt_bool_t dw_pcie_link_up(struct dw_pcie *pci)
{
rt_uint32_t val;
if (pci->ops->link_up)
{
return pci->ops->link_up(pci);
}
val = HWREG32(pci->dbi_base + PCIE_PORT_DEBUG1);
return (val & PCIE_PORT_DEBUG1_LINK_UP) && (!(val & PCIE_PORT_DEBUG1_LINK_IN_TRAINING));
}
void dw_pcie_upconfig_setup(struct dw_pcie *pci)
{
rt_uint32_t val;
val = dw_pcie_readl_dbi(pci, PCIE_PORT_MULTI_LANE_CTRL);
val |= PORT_MLTI_UPCFG_SUPPORT;
dw_pcie_writel_dbi(pci, PCIE_PORT_MULTI_LANE_CTRL, val);
}
static void dw_pcie_link_set_max_speed(struct dw_pcie *pci, rt_uint32_t link_gen)
{
rt_uint32_t cap, ctrl2, link_speed;
rt_uint8_t offset = dw_pcie_find_capability(pci, PCIY_EXPRESS);
cap = dw_pcie_readl_dbi(pci, offset + PCIER_LINK_CAP);
ctrl2 = dw_pcie_readl_dbi(pci, offset + PCIER_LINK_CTL2);
ctrl2 &= ~PCIEM_LNKCTL2_TLS;
switch (link_gen)
{
case 1: link_speed = PCIEM_LNKCTL2_TLS_2_5GT; break;
case 2: link_speed = PCIEM_LNKCTL2_TLS_5_0GT; break;
case 3: link_speed = PCIEM_LNKCTL2_TLS_8_0GT; break;
case 4: link_speed = PCIEM_LNKCTL2_TLS_16_0GT; break;
default:
/* Use hardware capability */
link_speed = RT_FIELD_GET(PCIEM_LINK_CAP_MAX_SPEED, cap);
ctrl2 &= ~PCIEM_LNKCTL2_HASD;
break;
}
dw_pcie_writel_dbi(pci, offset + PCIER_LINK_CTL2, ctrl2 | link_speed);
cap &= ~((rt_uint32_t)PCIEM_LINK_CAP_MAX_SPEED);
dw_pcie_writel_dbi(pci, offset + PCIER_LINK_CAP, cap | link_speed);
}
void dw_pcie_setup(struct dw_pcie *pci)
{
rt_uint32_t val;
struct rt_device *dev = pci->dev;
if (pci->version >= 0x480a || (!pci->version && dw_pcie_iatu_unroll_enabled(pci)))
{
pci->iatu_unroll_enabled |= DWC_IATU_UNROLL_EN;
if (!pci->atu_base)
{
pci->atu_base = rt_dm_dev_iomap_by_name(dev, "atu");
}
if (!pci->atu_base)
{
pci->atu_base = pci->dbi_base + DEFAULT_DBI_ATU_OFFSET;
}
}
LOG_D("iATU unroll is %sabled", pci->iatu_unroll_enabled & DWC_IATU_UNROLL_EN ? "en" : "dis");
if (pci->link_gen > 0)
{
dw_pcie_link_set_max_speed(pci, pci->link_gen);
}
/* Configure Gen1 N_FTS */
if (pci->fts_number[0])
{
val = dw_pcie_readl_dbi(pci, PCIE_PORT_AFR);
val &= ~(PORT_AFR_N_FTS_MASK | PORT_AFR_CC_N_FTS_MASK);
val |= PORT_AFR_N_FTS(pci->fts_number[0]);
val |= PORT_AFR_CC_N_FTS(pci->fts_number[0]);
dw_pcie_writel_dbi(pci, PCIE_PORT_AFR, val);
}
/* Configure Gen2+ N_FTS */
if (pci->fts_number[1])
{
val = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL);
val &= ~PORT_LOGIC_N_FTS_MASK;
val |= pci->fts_number[1];
dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val);
}
val = dw_pcie_readl_dbi(pci, PCIE_PORT_LINK_CONTROL);
val &= ~PORT_LINK_FAST_LINK_MODE;
val |= PORT_LINK_DLL_LINK_EN;
dw_pcie_writel_dbi(pci, PCIE_PORT_LINK_CONTROL, val);
if (rt_dm_dev_prop_read_bool(dev, "snps,enable-cdm-check"))
{
val = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS);
val |= PCIE_PL_CHK_REG_CHK_REG_CONTINUOUS | PCIE_PL_CHK_REG_CHK_REG_START;
dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, val);
}
rt_dm_dev_prop_read_u32(dev, "num-lanes", &pci->num_lanes);
if (!pci->num_lanes)
{
LOG_D("Using h/w default number of lanes");
return;
}
/* Set the number of lanes */
val &= ~PORT_LINK_FAST_LINK_MODE;
val &= ~PORT_LINK_MODE_MASK;
switch (pci->num_lanes)
{
case 1: val |= PORT_LINK_MODE_1_LANES; break;
case 2: val |= PORT_LINK_MODE_2_LANES; break;
case 4: val |= PORT_LINK_MODE_4_LANES; break;
case 8: val |= PORT_LINK_MODE_8_LANES; break;
default:
LOG_E("Invalid num-lanes = %d", pci->num_lanes);
return;
}
dw_pcie_writel_dbi(pci, PCIE_PORT_LINK_CONTROL, val);
/* Set link width speed control register */
val = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL);
val &= ~PORT_LOGIC_LINK_WIDTH_MASK;
switch (pci->num_lanes)
{
case 1: val |= PORT_LOGIC_LINK_WIDTH_1_LANES; break;
case 2: val |= PORT_LOGIC_LINK_WIDTH_2_LANES; break;
case 4: val |= PORT_LOGIC_LINK_WIDTH_4_LANES; break;
case 8: val |= PORT_LOGIC_LINK_WIDTH_8_LANES; break;
}
val |= pci->user_speed;
dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val);
}

/*
* Copyright (c) 2006-2023, RT-Thread Development Team
*
* SPDX-License-Identifier: Apache-2.0
*
* Change Logs:
* Date Author Notes
* 2023-09-23 GuEe-GUI first version
*/
#ifndef __PCIE_DESIGNWARE_H__
#define __PCIE_DESIGNWARE_H__
#include <rtthread.h>
#include <rtdevice.h>
/* Parameters for the waiting for link up routine */
#define LINK_WAIT_MAX_RETRIES 10
#define LINK_WAIT_USLEEP_MIN 90000
#define LINK_WAIT_USLEEP_MAX 100000
/* Parameters for the waiting for iATU enabled routine */
#define LINK_WAIT_MAX_IATU_RETRIES 5
#define LINK_WAIT_IATU 9
/* Synopsys-specific PCIe configuration registers */
#define PCIE_PORT_AFR 0x70c
#define PORT_AFR_N_FTS_MASK RT_GENMASK(15, 8)
#define PORT_AFR_N_FTS(n) RT_FIELD_PREP(PORT_AFR_N_FTS_MASK, n)
#define PORT_AFR_CC_N_FTS_MASK RT_GENMASK(23, 16)
#define PORT_AFR_CC_N_FTS(n) RT_FIELD_PREP(PORT_AFR_CC_N_FTS_MASK, n)
#define PORT_AFR_ENTER_ASPM RT_BIT(30)
#define PORT_AFR_L0S_ENTRANCE_LAT_SHIFT 24
#define PORT_AFR_L0S_ENTRANCE_LAT_MASK RT_GENMASK(26, 24)
#define PORT_AFR_L1_ENTRANCE_LAT_SHIFT 27
#define PORT_AFR_L1_ENTRANCE_LAT_MASK RT_GENMASK(29, 27)
#define PCIE_PORT_LINK_CONTROL 0x710
#define PORT_LINK_LPBK_ENABLE RT_BIT(2)
#define PORT_LINK_DLL_LINK_EN RT_BIT(5)
#define PORT_LINK_FAST_LINK_MODE RT_BIT(7)
#define PORT_LINK_MODE_MASK RT_GENMASK(21, 16)
#define PORT_LINK_MODE(n) RT_FIELD_PREP(PORT_LINK_MODE_MASK, n)
#define PORT_LINK_MODE_1_LANES PORT_LINK_MODE(0x1)
#define PORT_LINK_MODE_2_LANES PORT_LINK_MODE(0x3)
#define PORT_LINK_MODE_4_LANES PORT_LINK_MODE(0x7)
#define PORT_LINK_MODE_8_LANES PORT_LINK_MODE(0xf)
#define PCIE_PORT_DEBUG0 0x728
#define PORT_LOGIC_LTSSM_STATE_MASK 0x1f
#define PORT_LOGIC_LTSSM_STATE_L0 0x11
#define PCIE_PORT_DEBUG1 0x72c
#define PCIE_PORT_DEBUG1_LINK_UP RT_BIT(4)
#define PCIE_PORT_DEBUG1_LINK_IN_TRAINING RT_BIT(29)
#define PCIE_LINK_WIDTH_SPEED_CONTROL 0x80c
#define PORT_LOGIC_N_FTS_MASK RT_GENMASK(7, 0)
#define PORT_LOGIC_SPEED_CHANGE RT_BIT(17)
#define PORT_LOGIC_LINK_WIDTH_MASK RT_GENMASK(12, 8)
#define PORT_LOGIC_LINK_WIDTH(n) RT_FIELD_PREP(PORT_LOGIC_LINK_WIDTH_MASK, n)
#define PORT_LOGIC_LINK_WIDTH_1_LANES PORT_LOGIC_LINK_WIDTH(0x1)
#define PORT_LOGIC_LINK_WIDTH_2_LANES PORT_LOGIC_LINK_WIDTH(0x2)
#define PORT_LOGIC_LINK_WIDTH_4_LANES PORT_LOGIC_LINK_WIDTH(0x4)
#define PORT_LOGIC_LINK_WIDTH_8_LANES PORT_LOGIC_LINK_WIDTH(0x8)
#define PCIE_MSI_ADDR_LO 0x820
#define PCIE_MSI_ADDR_HI 0x824
#define PCIE_MSI_INTR0_ENABLE 0x828
#define PCIE_MSI_INTR0_MASK 0x82c
#define PCIE_MSI_INTR0_STATUS 0x830
#define PCIE_PORT_MULTI_LANE_CTRL 0x8c0
#define PORT_MLTI_UPCFG_SUPPORT RT_BIT(7)
#define PCIE_ATU_VIEWPORT 0x900
#define PCIE_ATU_REGION_INBOUND RT_BIT(31)
#define PCIE_ATU_REGION_OUTBOUND 0
#define PCIE_ATU_CR1 0x904
#define PCIE_ATU_TYPE_MEM 0x0
#define PCIE_ATU_TYPE_IO 0x2
#define PCIE_ATU_TYPE_CFG0 0x4
#define PCIE_ATU_TYPE_CFG1 0x5
#define PCIE_ATU_FUNC_NUM(pf) ((pf) << 20)
#define PCIE_ATU_CR2 0x908
#define PCIE_ATU_ENABLE RT_BIT(31)
#define PCIE_ATU_BAR_MODE_ENABLE RT_BIT(30)
#define PCIE_ATU_FUNC_NUM_MATCH_EN RT_BIT(19)
#define PCIE_ATU_LOWER_BASE 0x90c
#define PCIE_ATU_UPPER_BASE 0x910
#define PCIE_ATU_LIMIT 0x914
#define PCIE_ATU_LOWER_TARGET 0x918
#define PCIE_ATU_BUS(x) RT_FIELD_PREP(RT_GENMASK(31, 24), x)
#define PCIE_ATU_DEV(x) RT_FIELD_PREP(RT_GENMASK(23, 19), x)
#define PCIE_ATU_FUNC(x) RT_FIELD_PREP(RT_GENMASK(18, 16), x)
#define PCIE_ATU_UPPER_TARGET 0x91c
#define PCIE_MISC_CONTROL_1_OFF 0x8bc
#define PCIE_DBI_RO_WR_EN RT_BIT(0)
#define PCIE_MSIX_DOORBELL 0x948
#define PCIE_MSIX_DOORBELL_PF_SHIFT 24
#define PCIE_PL_CHK_REG_CONTROL_STATUS 0xb20
#define PCIE_PL_CHK_REG_CHK_REG_START RT_BIT(0)
#define PCIE_PL_CHK_REG_CHK_REG_CONTINUOUS RT_BIT(1)
#define PCIE_PL_CHK_REG_CHK_REG_COMPARISON_ERROR RT_BIT(16)
#define PCIE_PL_CHK_REG_CHK_REG_LOGIC_ERROR RT_BIT(17)
#define PCIE_PL_CHK_REG_CHK_REG_COMPLETE RT_BIT(18)
#define PCIE_PL_CHK_REG_ERR_ADDR 0xb28
/*
 * iATU unroll-specific register definitions.
 * From core version 4.80 onwards, address translation is programmed through
 * the unrolled iATU register space instead of the viewport register.
 */
#define PCIE_ATU_UNR_REGION_CTRL1 0x00
#define PCIE_ATU_UNR_REGION_CTRL2 0x04
#define PCIE_ATU_UNR_LOWER_BASE 0x08
#define PCIE_ATU_UNR_UPPER_BASE 0x0C
#define PCIE_ATU_UNR_LOWER_LIMIT 0x10
#define PCIE_ATU_UNR_LOWER_TARGET 0x14
#define PCIE_ATU_UNR_UPPER_TARGET 0x18
#define PCIE_ATU_UNR_UPPER_LIMIT 0x20
/*
 * The default address offset between dbi_base and atu_base (0x3 << 20 =
 * 0x300000). Root controller drivers are not required to initialize atu_base
 * when the offset matches this default; the driver core derives atu_base
 * from dbi_base using this offset if atu_base is not set.
 */
#define DEFAULT_DBI_ATU_OFFSET (0x3 << 20)
/* Register address builder */
#define PCIE_GET_ATU_OUTB_UNR_REG_OFFSET(region) ((region) << 9)
#define PCIE_GET_ATU_INB_UNR_REG_OFFSET(region) (((region) << 9) | RT_BIT(8))
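/*
 * Example (illustrative): for iATU region 1 the unrolled register blocks
 * start at (1 << 9) = 0x200 for outbound and 0x200 | RT_BIT(8) = 0x300 for
 * inbound, so inbound region 1's REGION_CTRL2 sits at 0x300 + 0x04 = 0x304.
 */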
#define MAX_MSI_IRQS 256
#define MAX_MSI_IRQS_PER_CTRL 32
#define MAX_MSI_CTRLS (MAX_MSI_IRQS / MAX_MSI_IRQS_PER_CTRL)
#define MSI_REG_CTRL_BLOCK_SIZE 12
#define MSI_DEF_NUM_VECTORS 32
/* Maximum number of inbound/outbound iATUs */
#define MAX_IATU_IN 256
#define MAX_IATU_OUT 256
#define DWC_IATU_UNROLL_EN RT_BIT(0)
#define DWC_IATU_IOCFG_SHARED RT_BIT(1)
struct dw_pcie_host_ops;
struct dw_pcie_ep_ops;
struct dw_pcie_ops;
enum dw_pcie_region_type
{
DW_PCIE_REGION_UNKNOWN,
DW_PCIE_REGION_INBOUND,
DW_PCIE_REGION_OUTBOUND,
};
enum dw_pcie_device_mode
{
DW_PCIE_UNKNOWN_TYPE,
DW_PCIE_EP_TYPE,
DW_PCIE_LEG_EP_TYPE,
DW_PCIE_RC_TYPE,
};
enum dw_pcie_aspace_type
{
DW_PCIE_ASPACE_UNKNOWN,
DW_PCIE_ASPACE_MEM,
DW_PCIE_ASPACE_IO,
};
struct dw_pcie_port
{
void *cfg0_base;
rt_uint64_t cfg0_addr;
rt_uint64_t cfg0_size;
rt_ubase_t io_addr;
rt_ubase_t io_bus_addr;
rt_size_t io_size;
const struct dw_pcie_host_ops *ops;
int sys_irq;
int msi_irq;
struct rt_pic *irq_pic;
struct rt_pic *msi_pic;
void *msi_data;
rt_ubase_t msi_data_phy;
rt_uint32_t irq_count;
rt_uint32_t irq_mask[MAX_MSI_CTRLS];
struct rt_pci_host_bridge *bridge;
const struct rt_pci_ops *bridge_child_ops;
struct rt_spinlock lock;
RT_BITMAP_DECLARE(msi_map, MAX_MSI_IRQS);
};
struct dw_pcie_host_ops
{
rt_err_t (*host_init)(struct dw_pcie_port *port);
rt_err_t (*msi_host_init)(struct dw_pcie_port *port);
void (*set_irq_count)(struct dw_pcie_port *port);
};
struct dw_pcie_ep_func
{
rt_list_t list;
rt_uint8_t func_no;
rt_uint8_t msi_cap; /* MSI capability offset */
rt_uint8_t msix_cap; /* MSI-X capability offset */
};
struct dw_pcie_ep
{
struct rt_pci_ep *epc;
struct rt_pci_ep_bar *epc_bar[PCI_STD_NUM_BARS];
rt_list_t func_nodes;
const struct dw_pcie_ep_ops *ops;
rt_uint64_t aspace;
rt_uint64_t aspace_size;
rt_size_t page_size;
rt_uint8_t bar_to_atu[PCI_STD_NUM_BARS];
rt_ubase_t *outbound_addr;
rt_bitmap_t *ib_window_map;
rt_bitmap_t *ob_window_map;
rt_uint32_t num_ib_windows;
rt_uint32_t num_ob_windows;
void *msi_mem;
rt_ubase_t msi_mem_phy;
};
struct dw_pcie_ep_ops
{
rt_err_t (*ep_init)(struct dw_pcie_ep *ep);
rt_err_t (*raise_irq)(struct dw_pcie_ep *ep, rt_uint8_t func_no, enum rt_pci_ep_irq type, unsigned irq);
rt_off_t (*func_select)(struct dw_pcie_ep *ep, rt_uint8_t func_no);
};
struct dw_pcie
{
struct rt_device *dev;
void *dbi_base;
void *dbi_base2;
void *atu_base;
rt_uint32_t version;
rt_uint32_t num_viewport;
rt_uint32_t num_lanes;
rt_uint32_t link_gen;
rt_uint32_t user_speed;
rt_uint8_t iatu_unroll_enabled; /* Internal Address Translation Unit */
rt_uint8_t fts_number[2]; /* Fast Training Sequences */
struct dw_pcie_port port;
struct dw_pcie_ep endpoint;
const struct dw_pcie_ops *ops;
void *priv;
};
struct dw_pcie_ops
{
rt_uint64_t (*cpu_addr_fixup)(struct dw_pcie *pcie, rt_uint64_t cpu_addr);
rt_uint32_t (*read_dbi)(struct dw_pcie *pcie, void *base, rt_uint32_t reg, rt_size_t size);
void (*write_dbi)(struct dw_pcie *pcie, void *base, rt_uint32_t reg, rt_size_t size, rt_uint32_t val);
void (*write_dbi2)(struct dw_pcie *pcie, void *base, rt_uint32_t reg, rt_size_t size, rt_uint32_t val);
rt_bool_t (*link_up)(struct dw_pcie *pcie);
rt_err_t (*start_link)(struct dw_pcie *pcie);
void (*stop_link)(struct dw_pcie *pcie);
};
#define to_dw_pcie_from_port(ptr) rt_container_of((ptr), struct dw_pcie, port)
#define to_dw_pcie_from_endpoint(ptr) rt_container_of((ptr), struct dw_pcie, endpoint)
#ifdef RT_PCI_DW_HOST
#undef RT_PCI_DW_HOST
#define RT_PCI_DW_HOST 1
#define HOST_API
#define HOST_RET(...) ;
#else
#define HOST_API rt_inline
#define HOST_RET(...) { return __VA_ARGS__; }
#endif
#ifdef RT_PCI_DW_EP
#undef RT_PCI_DW_EP
#define RT_PCI_DW_EP 1
#define EP_API
#define EP_RET(...) ;
#else
#define EP_API rt_inline
#define EP_RET(...) { return __VA_ARGS__; }
#endif
rt_uint8_t dw_pcie_find_capability(struct dw_pcie *pci, rt_uint8_t cap);
rt_uint16_t dw_pcie_find_ext_capability(struct dw_pcie *pci, rt_uint8_t cap);
rt_err_t dw_pcie_read(void *addr, rt_size_t size, rt_uint32_t *out_val);
rt_err_t dw_pcie_write(void *addr, rt_size_t size, rt_uint32_t val);
rt_uint32_t dw_pcie_read_dbi(struct dw_pcie *pci, rt_uint32_t reg, rt_size_t size);
void dw_pcie_write_dbi(struct dw_pcie *pci, rt_uint32_t reg, rt_size_t size, rt_uint32_t val);
void dw_pcie_write_dbi2(struct dw_pcie *pci, rt_uint32_t reg, rt_size_t size, rt_uint32_t val);
rt_uint32_t dw_pcie_readl_atu(struct dw_pcie *pci, rt_uint32_t reg);
void dw_pcie_writel_atu(struct dw_pcie *pci, rt_uint32_t reg, rt_uint32_t val);
rt_bool_t dw_pcie_link_up(struct dw_pcie *pci);
void dw_pcie_upconfig_setup(struct dw_pcie *pci);
rt_err_t dw_pcie_wait_for_link(struct dw_pcie *pci);
void dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index, int type, rt_uint64_t cpu_addr, rt_uint64_t pci_addr, rt_size_t size);
void dw_pcie_prog_ep_outbound_atu(struct dw_pcie *pci, rt_uint8_t func_no, int index, int type, rt_uint64_t cpu_addr, rt_uint64_t pci_addr, rt_size_t size);
rt_err_t dw_pcie_prog_inbound_atu(struct dw_pcie *pci, rt_uint8_t func_no, int index, int bar, rt_uint64_t cpu_addr, enum dw_pcie_aspace_type aspace_type);
void dw_pcie_disable_atu(struct dw_pcie *pci, int index, enum dw_pcie_region_type type);
void dw_pcie_setup(struct dw_pcie *pci);
rt_inline void dw_pcie_writel_dbi(struct dw_pcie *pci, rt_uint32_t reg, rt_uint32_t val)
{
dw_pcie_write_dbi(pci, reg, 0x4, val);
}
rt_inline rt_uint32_t dw_pcie_readl_dbi(struct dw_pcie *pci, rt_uint32_t reg)
{
return dw_pcie_read_dbi(pci, reg, 0x4);
}
rt_inline void dw_pcie_writew_dbi(struct dw_pcie *pci, rt_uint32_t reg, rt_uint16_t val)
{
dw_pcie_write_dbi(pci, reg, 0x2, val);
}
rt_inline rt_uint16_t dw_pcie_readw_dbi(struct dw_pcie *pci, rt_uint32_t reg)
{
return dw_pcie_read_dbi(pci, reg, 0x2);
}
rt_inline void dw_pcie_writeb_dbi(struct dw_pcie *pci, rt_uint32_t reg, rt_uint8_t val)
{
dw_pcie_write_dbi(pci, reg, 0x1, val);
}
rt_inline rt_uint8_t dw_pcie_readb_dbi(struct dw_pcie *pci, rt_uint32_t reg)
{
return dw_pcie_read_dbi(pci, reg, 0x1);
}
rt_inline void dw_pcie_writel_dbi2(struct dw_pcie *pci, rt_uint32_t reg, rt_uint32_t val)
{
dw_pcie_write_dbi2(pci, reg, 0x4, val);
}
rt_inline void dw_pcie_dbi_ro_writable_enable(struct dw_pcie *pci, rt_bool_t enable)
{
const rt_uint32_t reg = PCIE_MISC_CONTROL_1_OFF;
if (enable)
{
dw_pcie_writel_dbi(pci, reg, dw_pcie_readl_dbi(pci, reg) | PCIE_DBI_RO_WR_EN);
}
else
{
dw_pcie_writel_dbi(pci, reg, dw_pcie_readl_dbi(pci, reg) & ~PCIE_DBI_RO_WR_EN);
}
}
rt_inline rt_uint8_t dw_pcie_iatu_unroll_enabled(struct dw_pcie *pci)
{
/* The viewport register reads as all-ones when the unrolled iATU space is in use */
return dw_pcie_readl_dbi(pci, PCIE_ATU_VIEWPORT) == 0xffffffff ? 1 : 0;
}
rt_inline rt_uint32_t dw_pcie_readl_ob_unroll(struct dw_pcie *pci,
rt_uint32_t index, rt_uint32_t reg)
{
return dw_pcie_readl_atu(pci, PCIE_GET_ATU_OUTB_UNR_REG_OFFSET(index) + reg);
}
rt_inline void dw_pcie_writel_ob_unroll(struct dw_pcie *pci,
rt_uint32_t index, rt_uint32_t reg, rt_uint32_t val)
{
dw_pcie_writel_atu(pci, PCIE_GET_ATU_OUTB_UNR_REG_OFFSET(index) + reg, val);
}
rt_inline rt_uint32_t dw_pcie_readl_ib_unroll(struct dw_pcie *pci,
rt_uint32_t index, rt_uint32_t reg)
{
return dw_pcie_readl_atu(pci, PCIE_GET_ATU_INB_UNR_REG_OFFSET(index) + reg);
}
rt_inline void dw_pcie_writel_ib_unroll(struct dw_pcie *pci,
rt_uint32_t index, rt_uint32_t reg, rt_uint32_t val)
{
dw_pcie_writel_atu(pci, PCIE_GET_ATU_INB_UNR_REG_OFFSET(index) + reg, val);
}
HOST_API rt_err_t dw_handle_msi_irq(struct dw_pcie_port *port) HOST_RET(-RT_ENOSYS)
HOST_API void dw_pcie_msi_init(struct dw_pcie_port *port) HOST_RET()
HOST_API void dw_pcie_free_msi(struct dw_pcie_port *port) HOST_RET()
HOST_API void dw_pcie_setup_rc(struct dw_pcie_port *port) HOST_RET()
HOST_API rt_err_t dw_pcie_host_init(struct dw_pcie_port *port) HOST_RET(-RT_ENOSYS)
HOST_API void dw_pcie_host_deinit(struct dw_pcie_port *port) HOST_RET()
HOST_API void dw_pcie_host_free(struct dw_pcie_port *port) HOST_RET()
HOST_API void *dw_pcie_own_conf_map(struct rt_pci_bus *bus, rt_uint32_t devfn, int reg) HOST_RET(RT_NULL)
EP_API rt_err_t dw_pcie_ep_init(struct dw_pcie_ep *ep) EP_RET(-RT_ENOSYS)
EP_API rt_err_t dw_pcie_ep_init_complete(struct dw_pcie_ep *ep) EP_RET(-RT_ENOSYS)
EP_API void dw_pcie_ep_exit(struct dw_pcie_ep *ep) EP_RET()
EP_API rt_err_t dw_pcie_ep_raise_legacy_irq(struct dw_pcie_ep *ep, rt_uint8_t func_no) EP_RET(-RT_ENOSYS)
EP_API rt_err_t dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, rt_uint8_t func_no, unsigned irq) EP_RET(-RT_ENOSYS)
EP_API rt_err_t dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, rt_uint8_t func_no, unsigned irq) EP_RET(-RT_ENOSYS)
EP_API rt_err_t dw_pcie_ep_raise_msix_irq_doorbell(struct dw_pcie_ep *ep, rt_uint8_t func_no, unsigned irq) EP_RET(-RT_ENOSYS)
EP_API void dw_pcie_ep_reset_bar(struct dw_pcie *pci, int bar_idx) EP_RET()
EP_API rt_err_t dw_pcie_ep_inbound_atu(struct dw_pcie_ep *ep, rt_uint8_t func_no,
int bar_idx, rt_ubase_t cpu_addr, enum dw_pcie_aspace_type aspace_type) EP_RET(-RT_ENOSYS)
EP_API rt_err_t dw_pcie_ep_outbound_atu(struct dw_pcie_ep *ep, rt_uint8_t func_no,
rt_ubase_t phys_addr, rt_uint64_t pci_addr, rt_size_t size) EP_RET(-RT_ENOSYS)
EP_API struct dw_pcie_ep_func *dw_pcie_ep_get_func_from_ep(struct dw_pcie_ep *ep, rt_uint8_t func_no) EP_RET(RT_NULL)
#endif /* __PCIE_DESIGNWARE_H__ */

/*
* Copyright (c) 2006-2023, RT-Thread Development Team
*
* SPDX-License-Identifier: Apache-2.0
*
* Change Logs:
* Date Author Notes
* 2023-09-23 GuEe-GUI first version
*/
#define DBG_TAG "pcie.dw-ep"
#define DBG_LVL DBG_INFO
#include <rtdbg.h>
#include "pcie-dw.h"
struct dw_pcie_ep_func *dw_pcie_ep_get_func_from_ep(struct dw_pcie_ep *ep, rt_uint8_t func_no)
{
struct dw_pcie_ep_func *ep_func;
rt_list_for_each_entry(ep_func, &ep->func_nodes, list)
{
if (ep_func->func_no == func_no)
{
return ep_func;
}
}
return RT_NULL;
}
static rt_uint8_t dw_pcie_ep_func_select(struct dw_pcie_ep *ep, rt_uint8_t func_no)
{
rt_uint8_t func_offset = 0;
if (ep->ops->func_select)
{
func_offset = ep->ops->func_select(ep, func_no);
}
return func_offset;
}
static void __dw_pcie_ep_reset_bar(struct dw_pcie *pci, rt_uint8_t func_no,
int bar_idx, int flags)
{
rt_uint32_t reg;
rt_uint8_t func_offset = 0;
struct dw_pcie_ep *ep = &pci->endpoint;
func_offset = dw_pcie_ep_func_select(ep, func_no);
reg = func_offset + PCIR_BAR(bar_idx);
dw_pcie_dbi_ro_writable_enable(pci, RT_TRUE);
dw_pcie_writel_dbi2(pci, reg, 0x0);
dw_pcie_writel_dbi(pci, reg, 0x0);
if (flags & PCIM_BAR_MEM_TYPE_64)
{
dw_pcie_writel_dbi2(pci, reg + 4, 0x0);
dw_pcie_writel_dbi(pci, reg + 4, 0x0);
}
dw_pcie_dbi_ro_writable_enable(pci, RT_FALSE);
}
void dw_pcie_ep_reset_bar(struct dw_pcie *pci, int bar_idx)
{
rt_uint8_t func_no, funcs = pci->endpoint.epc->max_functions;
for (func_no = 0; func_no < funcs; ++func_no)
{
__dw_pcie_ep_reset_bar(pci, func_no, bar_idx, 0);
}
}
static rt_uint8_t __dw_pcie_ep_find_next_cap(struct dw_pcie_ep *ep, rt_uint8_t func_no,
rt_uint8_t cap_ptr, rt_uint8_t cap)
{
rt_uint16_t reg;
rt_uint8_t func_offset = 0, cap_id, next_cap_ptr;
struct dw_pcie *pci = to_dw_pcie_from_endpoint(ep);
if (!cap_ptr)
{
return 0;
}
func_offset = dw_pcie_ep_func_select(ep, func_no);
reg = dw_pcie_readw_dbi(pci, func_offset + cap_ptr);
cap_id = (reg & 0x00ff);
if (cap_id > PCIY_MAX)
{
return 0;
}
if (cap_id == cap)
{
return cap_ptr;
}
next_cap_ptr = (reg & 0xff00) >> 8;
return __dw_pcie_ep_find_next_cap(ep, func_no, next_cap_ptr, cap);
}
static rt_uint8_t dw_pcie_ep_find_capability(struct dw_pcie_ep *ep, rt_uint8_t func_no,
rt_uint8_t cap)
{
rt_uint16_t reg;
rt_uint8_t func_offset = 0, next_cap_ptr;
struct dw_pcie *pci = to_dw_pcie_from_endpoint(ep);
func_offset = dw_pcie_ep_func_select(ep, func_no);
reg = dw_pcie_readw_dbi(pci, func_offset + PCIR_CAP_PTR);
next_cap_ptr = reg & 0x00ff;
return __dw_pcie_ep_find_next_cap(ep, func_no, next_cap_ptr, cap);
}
rt_err_t dw_pcie_ep_inbound_atu(struct dw_pcie_ep *ep, rt_uint8_t func_no,
int bar_idx, rt_ubase_t cpu_addr, enum dw_pcie_aspace_type aspace_type)
{
rt_err_t err;
rt_uint32_t free_win;
struct dw_pcie *pci = to_dw_pcie_from_endpoint(ep);
free_win = rt_bitmap_next_clear_bit(ep->ib_window_map, 0, ep->num_ib_windows);
if (free_win >= ep->num_ib_windows)
{
LOG_E("No free inbound window");
return -RT_EEMPTY;
}
err = dw_pcie_prog_inbound_atu(pci, func_no, free_win, bar_idx, cpu_addr, aspace_type);
if (err)
{
LOG_E("Failed to program inbound window, error = %s", rt_strerror(err));
return err;
}
ep->bar_to_atu[bar_idx] = free_win;
rt_bitmap_set_bit(ep->ib_window_map, free_win);
return RT_EOK;
}
rt_err_t dw_pcie_ep_outbound_atu(struct dw_pcie_ep *ep, rt_uint8_t func_no,
rt_ubase_t phys_addr, rt_uint64_t pci_addr, rt_size_t size)
{
rt_uint32_t free_win;
struct dw_pcie *pci = to_dw_pcie_from_endpoint(ep);
free_win = rt_bitmap_next_clear_bit(ep->ob_window_map, 0, ep->num_ob_windows);
if (free_win >= ep->num_ob_windows)
{
LOG_E("No free outbound window");
return -RT_EEMPTY;
}
dw_pcie_prog_ep_outbound_atu(pci, func_no, free_win, PCIE_ATU_TYPE_MEM,
phys_addr, pci_addr, size);
ep->outbound_addr[free_win] = phys_addr;
rt_bitmap_set_bit(ep->ob_window_map, free_win);
return RT_EOK;
}
static rt_err_t dw_pcie_ep_write_header(struct rt_pci_ep *epc, rt_uint8_t func_no,
struct rt_pci_ep_header *hdr)
{
rt_uint8_t func_offset = 0;
struct dw_pcie_ep *ep = epc->priv;
struct dw_pcie *pci = to_dw_pcie_from_endpoint(ep);
func_offset = dw_pcie_ep_func_select(ep, func_no);
dw_pcie_dbi_ro_writable_enable(pci, RT_TRUE);
dw_pcie_writew_dbi(pci, func_offset + PCIR_VENDOR, hdr->vendor);
dw_pcie_writew_dbi(pci, func_offset + PCIR_DEVICE, hdr->device);
dw_pcie_writeb_dbi(pci, func_offset + PCIR_REVID, hdr->revision);
dw_pcie_writeb_dbi(pci, func_offset + PCIR_PROGIF, hdr->progif);
dw_pcie_writew_dbi(pci, func_offset + PCIR_SUBCLASS, hdr->subclass | hdr->class_code << 8);
dw_pcie_writeb_dbi(pci, func_offset + PCIR_CACHELNSZ, hdr->cache_line_size);
dw_pcie_writew_dbi(pci, func_offset + PCIR_SUBVEND_0, hdr->subsystem_vendor);
dw_pcie_writew_dbi(pci, func_offset + PCIR_SUBDEV_0, hdr->subsystem_device);
dw_pcie_writeb_dbi(pci, func_offset + PCIR_INTPIN, hdr->intx);
dw_pcie_dbi_ro_writable_enable(pci, RT_FALSE);
return RT_EOK;
}
static rt_err_t dw_pcie_ep_clear_bar(struct rt_pci_ep *epc, rt_uint8_t func_no,
struct rt_pci_ep_bar *bar, int bar_idx)
{
rt_uint32_t atu_index;
struct dw_pcie_ep *ep = epc->priv;
struct dw_pcie *pci = to_dw_pcie_from_endpoint(ep);
atu_index = ep->bar_to_atu[bar_idx];
__dw_pcie_ep_reset_bar(pci, func_no, bar_idx, ep->epc_bar[bar_idx]->bus.flags);
dw_pcie_disable_atu(pci, atu_index, DW_PCIE_REGION_INBOUND);
rt_bitmap_clear_bit(ep->ib_window_map, atu_index);
ep->epc_bar[bar_idx] = RT_NULL;
return RT_EOK;
}
static rt_err_t dw_pcie_ep_set_bar(struct rt_pci_ep *epc, rt_uint8_t func_no,
struct rt_pci_ep_bar *bar, int bar_idx)
{
rt_err_t err;
rt_uint32_t reg;
rt_uint8_t func_offset = 0;
rt_size_t size = bar->bus.size;
rt_ubase_t flags = bar->bus.flags;
enum dw_pcie_aspace_type aspace_type;
struct dw_pcie_ep *ep = epc->priv;
struct dw_pcie *pci = to_dw_pcie_from_endpoint(ep);
func_offset = dw_pcie_ep_func_select(ep, func_no);
reg = PCIR_BAR(bar_idx) + func_offset;
if (!(flags & PCIM_BAR_SPACE))
{
aspace_type = DW_PCIE_ASPACE_MEM;
}
else
{
aspace_type = DW_PCIE_ASPACE_IO;
}
err = dw_pcie_ep_inbound_atu(ep, func_no, bar_idx, bar->bus.base, aspace_type);
if (err)
{
return err;
}
dw_pcie_dbi_ro_writable_enable(pci, RT_TRUE);
dw_pcie_writel_dbi2(pci, reg, rt_lower_32_bits(size - 1));
dw_pcie_writel_dbi(pci, reg, flags);
if (flags & PCIM_BAR_MEM_TYPE_64)
{
dw_pcie_writel_dbi2(pci, reg + 4, rt_upper_32_bits(size - 1));
dw_pcie_writel_dbi(pci, reg + 4, 0);
}
ep->epc_bar[bar_idx] = bar;
dw_pcie_dbi_ro_writable_enable(pci, RT_FALSE);
return RT_EOK;
}
static rt_err_t dw_pcie_find_index(struct dw_pcie_ep *ep,
rt_ubase_t addr, rt_uint32_t *atu_index)
{
for (rt_uint32_t index = 0; index < ep->num_ob_windows; ++index)
{
if (ep->outbound_addr[index] != addr)
{
continue;
}
*atu_index = index;
return RT_EOK;
}
return -RT_EINVAL;
}
static rt_err_t dw_pcie_ep_unmap_addr(struct rt_pci_ep *epc, rt_uint8_t func_no,
rt_ubase_t addr)
{
rt_err_t err;
rt_uint32_t atu_index;
struct dw_pcie_ep *ep = epc->priv;
struct dw_pcie *pci = to_dw_pcie_from_endpoint(ep);
if ((err = dw_pcie_find_index(ep, addr, &atu_index)))
{
return err;
}
dw_pcie_disable_atu(pci, atu_index, DW_PCIE_REGION_OUTBOUND);
rt_bitmap_clear_bit(ep->ob_window_map, atu_index);
return RT_EOK;
}
static rt_err_t dw_pcie_ep_map_addr(struct rt_pci_ep *epc, rt_uint8_t func_no,
rt_ubase_t addr, rt_uint64_t pci_addr, rt_size_t size)
{
rt_err_t err;
struct dw_pcie_ep *ep = epc->priv;
err = dw_pcie_ep_outbound_atu(ep, func_no, addr, pci_addr, size);
if (err)
{
LOG_E("Failed to map outbound address, error = %s", rt_strerror(err));
return err;
}
return RT_EOK;
}
static rt_err_t dw_pcie_ep_set_msi(struct rt_pci_ep *epc, rt_uint8_t func_no,
unsigned irq_nr)
{
rt_uint32_t val, reg;
rt_uint8_t func_offset = 0;
struct dw_pcie_ep_func *ep_func;
struct dw_pcie_ep *ep = epc->priv;
struct dw_pcie *pci = to_dw_pcie_from_endpoint(ep);
ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
if (!ep_func || !ep_func->msi_cap)
{
return -RT_EINVAL;
}
func_offset = dw_pcie_ep_func_select(ep, func_no);
reg = ep_func->msi_cap + func_offset + PCIR_MSI_CTRL;
val = dw_pcie_readw_dbi(pci, reg);
val &= ~PCIM_MSICTRL_MMC_MASK;
val |= (irq_nr << 1) & PCIM_MSICTRL_MMC_MASK;
dw_pcie_dbi_ro_writable_enable(pci, RT_TRUE);
dw_pcie_writew_dbi(pci, reg, val);
dw_pcie_dbi_ro_writable_enable(pci, RT_FALSE);
return RT_EOK;
}
static rt_err_t dw_pcie_ep_get_msi(struct rt_pci_ep *epc, rt_uint8_t func_no,
unsigned *out_irq_nr)
{
rt_uint32_t val, reg;
rt_uint8_t func_offset = 0;
struct dw_pcie_ep *ep = epc->priv;
struct dw_pcie_ep_func *ep_func;
struct dw_pcie *pci = to_dw_pcie_from_endpoint(ep);
ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
if (!ep_func || !ep_func->msi_cap)
{
return -RT_EINVAL;
}
func_offset = dw_pcie_ep_func_select(ep, func_no);
reg = ep_func->msi_cap + func_offset + PCIR_MSI_CTRL;
val = dw_pcie_readw_dbi(pci, reg);
if (!(val & PCIM_MSICTRL_MSI_ENABLE))
{
return -RT_EINVAL;
}
*out_irq_nr = (val & PCIM_MSICTRL_MME_MASK) >> 4;
return RT_EOK;
}
static rt_err_t dw_pcie_ep_set_msix(struct rt_pci_ep *epc, rt_uint8_t func_no,
unsigned irq_nr, int bar_idx, rt_off_t offset)
{
rt_uint32_t val, reg;
rt_uint8_t func_offset = 0;
struct dw_pcie_ep_func *ep_func;
struct dw_pcie_ep *ep = epc->priv;
struct dw_pcie *pci = to_dw_pcie_from_endpoint(ep);
ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
if (!ep_func || !ep_func->msix_cap)
{
return -RT_EINVAL;
}
dw_pcie_dbi_ro_writable_enable(pci, RT_TRUE);
func_offset = dw_pcie_ep_func_select(ep, func_no);
reg = ep_func->msix_cap + func_offset + PCIR_MSIX_CTRL;
val = dw_pcie_readw_dbi(pci, reg);
val &= ~PCIM_MSIXCTRL_TABLE_SIZE;
val |= irq_nr;
dw_pcie_writew_dbi(pci, reg, val);
reg = ep_func->msix_cap + func_offset + PCIR_MSIX_TABLE;
val = offset | bar_idx;
dw_pcie_writel_dbi(pci, reg, val);
reg = ep_func->msix_cap + func_offset + PCIR_MSIX_PBA;
val = (offset + (irq_nr * PCIM_MSIX_ENTRY_SIZE)) | bar_idx;
dw_pcie_writel_dbi(pci, reg, val);
dw_pcie_dbi_ro_writable_enable(pci, RT_FALSE);
return RT_EOK;
}
static rt_err_t dw_pcie_ep_get_msix(struct rt_pci_ep *epc, rt_uint8_t func_no,
unsigned *out_irq_nr)
{
rt_uint32_t val, reg;
rt_uint8_t func_offset = 0;
struct dw_pcie_ep_func *ep_func;
struct dw_pcie_ep *ep = epc->priv;
struct dw_pcie *pci = to_dw_pcie_from_endpoint(ep);
ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
if (!ep_func || !ep_func->msix_cap)
{
return -RT_EINVAL;
}
func_offset = dw_pcie_ep_func_select(ep, func_no);
reg = ep_func->msix_cap + func_offset + PCIR_MSIX_CTRL;
val = dw_pcie_readw_dbi(pci, reg);
if (!(val & PCIM_MSIXCTRL_MSIX_ENABLE))
{
return -RT_EINVAL;
}
*out_irq_nr = val & PCIM_MSIXCTRL_TABLE_SIZE;
return RT_EOK;
}
static rt_err_t dw_pcie_ep_raise_irq(struct rt_pci_ep *epc, rt_uint8_t func_no,
enum rt_pci_ep_irq type, unsigned irq)
{
struct dw_pcie_ep *ep = epc->priv;
if (!ep->ops->raise_irq)
{
return -RT_ENOSYS;
}
return ep->ops->raise_irq(ep, func_no, type, irq);
}
static rt_err_t dw_pcie_ep_stop(struct rt_pci_ep *epc)
{
struct dw_pcie_ep *ep = epc->priv;
struct dw_pcie *pci = to_dw_pcie_from_endpoint(ep);
if (pci->ops->stop_link)
{
pci->ops->stop_link(pci);
}
return RT_EOK;
}
static rt_err_t dw_pcie_ep_start(struct rt_pci_ep *epc)
{
struct dw_pcie_ep *ep = epc->priv;
struct dw_pcie *pci = to_dw_pcie_from_endpoint(ep);
if (pci->ops->start_link)
{
return pci->ops->start_link(pci);
}
return RT_EOK;
}
static const struct rt_pci_ep_ops dw_pcie_ep_ops =
{
.write_header = dw_pcie_ep_write_header,
.set_bar = dw_pcie_ep_set_bar,
.clear_bar = dw_pcie_ep_clear_bar,
.map_addr = dw_pcie_ep_map_addr,
.unmap_addr = dw_pcie_ep_unmap_addr,
.set_msi = dw_pcie_ep_set_msi,
.get_msi = dw_pcie_ep_get_msi,
.set_msix = dw_pcie_ep_set_msix,
.get_msix = dw_pcie_ep_get_msix,
.raise_irq = dw_pcie_ep_raise_irq,
.start = dw_pcie_ep_start,
.stop = dw_pcie_ep_stop,
};
rt_err_t dw_pcie_ep_raise_legacy_irq(struct dw_pcie_ep *ep, rt_uint8_t func_no)
{
LOG_E("EP cannot trigger legacy IRQs");
return -RT_EINVAL;
}
rt_err_t dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, rt_uint8_t func_no,
unsigned irq)
{
rt_err_t err;
rt_off_t aligned_offset;
rt_uint8_t func_offset = 0;
rt_uint64_t msg_addr;
rt_uint16_t msg_ctrl, msg_data;
rt_uint32_t msg_addr_lower, msg_addr_upper, reg;
struct rt_pci_ep *epc = ep->epc;
struct dw_pcie_ep_func *ep_func;
struct dw_pcie *pci = to_dw_pcie_from_endpoint(ep);
ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
if (!ep_func || !ep_func->msi_cap)
{
return -RT_EINVAL;
}
func_offset = dw_pcie_ep_func_select(ep, func_no);
/* Raise MSI per the PCI Local Bus Specification Revision 3.0, 6.8.1. */
reg = ep_func->msi_cap + func_offset + PCIR_MSI_CTRL;
msg_ctrl = dw_pcie_readw_dbi(pci, reg);
reg = ep_func->msi_cap + func_offset + PCIR_MSI_ADDR;
msg_addr_lower = dw_pcie_readl_dbi(pci, reg);
if (!!(msg_ctrl & PCIM_MSICTRL_64BIT))
{
reg = ep_func->msi_cap + func_offset + PCIR_MSI_ADDR_HIGH;
msg_addr_upper = dw_pcie_readl_dbi(pci, reg);
reg = ep_func->msi_cap + func_offset + PCIR_MSI_DATA_64BIT;
msg_data = dw_pcie_readw_dbi(pci, reg);
}
else
{
msg_addr_upper = 0;
reg = ep_func->msi_cap + func_offset + PCIR_MSI_DATA;
msg_data = dw_pcie_readw_dbi(pci, reg);
}
aligned_offset = msg_addr_lower & (ep->page_size - 1);
msg_addr = ((rt_uint64_t)msg_addr_upper << 32) | (msg_addr_lower & ~aligned_offset);
if ((err = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phy, msg_addr, ep->page_size)))
{
return err;
}
HWREG32(ep->msi_mem + aligned_offset) = msg_data | (irq - 1);
dw_pcie_ep_unmap_addr(epc, func_no, ep->msi_mem_phy);
return RT_EOK;
}
rt_err_t dw_pcie_ep_raise_msix_irq_doorbell(struct dw_pcie_ep *ep, rt_uint8_t func_no,
unsigned irq)
{
rt_uint32_t msg_data;
struct dw_pcie_ep_func *ep_func;
struct dw_pcie *pci = to_dw_pcie_from_endpoint(ep);
ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
if (!ep_func || !ep_func->msix_cap)
{
return -RT_EINVAL;
}
msg_data = (func_no << PCIE_MSIX_DOORBELL_PF_SHIFT) | (irq - 1);
dw_pcie_writel_dbi(pci, PCIE_MSIX_DOORBELL, msg_data);
return RT_EOK;
}
rt_err_t dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, rt_uint8_t func_no,
unsigned irq)
{
rt_err_t err;
int bar_idx;
rt_uint64_t msg_addr;
rt_uint32_t tbl_offset;
rt_off_t aligned_offset;
rt_uint8_t func_offset = 0;
rt_uint32_t reg, msg_data, vec_ctrl;
struct rt_pci_ep *epc = ep->epc;
struct rt_pci_ep_msix_tbl *msix_tbl;
struct dw_pcie_ep_func *ep_func;
struct dw_pcie *pci = to_dw_pcie_from_endpoint(ep);
ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
if (!ep_func || !ep_func->msix_cap)
{
return -RT_EINVAL;
}
func_offset = dw_pcie_ep_func_select(ep, func_no);
reg = ep_func->msix_cap + func_offset + PCIR_MSIX_TABLE;
tbl_offset = dw_pcie_readl_dbi(pci, reg);
bar_idx = (tbl_offset & PCIM_MSIX_BIR_MASK);
tbl_offset &= PCIM_MSIX_TABLE_OFFSET;
msix_tbl = (void *)ep->epc_bar[bar_idx]->cpu_addr + tbl_offset;
msg_addr = msix_tbl[(irq - 1)].msg_addr;
msg_data = msix_tbl[(irq - 1)].msg_data;
vec_ctrl = msix_tbl[(irq - 1)].vector_ctrl;
if (vec_ctrl & PCIM_MSIX_ENTRYVECTOR_CTRL_MASK)
{
return -RT_EINVAL;
}
aligned_offset = msg_addr & (ep->page_size - 1);
msg_addr &= ~aligned_offset; /* map the page-aligned base, write at the in-page offset */
if ((err = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phy, msg_addr, ep->page_size)))
{
return err;
}
HWREG32(ep->msi_mem + aligned_offset) = msg_data;
dw_pcie_ep_unmap_addr(epc, func_no, ep->msi_mem_phy);
return RT_EOK;
}
void dw_pcie_ep_exit(struct dw_pcie_ep *ep)
{
struct rt_pci_ep *epc = ep->epc;
if (ep->msi_mem)
{
rt_pci_ep_mem_free(epc, ep->msi_mem, ep->msi_mem_phy, ep->page_size);
}
if (!rt_list_isempty(&ep->func_nodes))
{
struct dw_pcie_ep_func *ep_func, *ep_func_next;
rt_list_for_each_entry_safe(ep_func, ep_func_next, &ep->func_nodes, list)
{
rt_list_remove(&ep_func->list);
rt_free(ep_func);
}
}
if (ep->ib_window_map)
{
rt_free(ep->ib_window_map);
}
if (ep->ob_window_map)
{
rt_free(ep->ob_window_map);
}
if (ep->outbound_addr)
{
rt_free(ep->outbound_addr);
}
if (epc)
{
rt_free(epc);
}
}
static rt_uint32_t dw_pcie_ep_find_ext_capability(struct dw_pcie *pci, int cap)
{
rt_uint32_t header;
int pos = (PCI_REGMAX + 1);
while (pos)
{
header = dw_pcie_readl_dbi(pci, pos);
if (PCI_EXTCAP_ID(header) == cap)
{
return pos;
}
if (!(pos = PCI_EXTCAP_NEXTPTR(header)))
{
break;
}
}
return 0;
}
rt_err_t dw_pcie_ep_init_complete(struct dw_pcie_ep *ep)
{
rt_off_t offset;
rt_size_t bar_nr;
rt_uint32_t reg;
rt_uint8_t hdr_type;
struct dw_pcie *pci = to_dw_pcie_from_endpoint(ep);
hdr_type = dw_pcie_readb_dbi(pci, PCIR_HDRTYPE) & PCIM_HDRTYPE;
if (hdr_type != PCIM_HDRTYPE_NORMAL)
{
LOG_E("PCIe controller is not set to EP mode (hdr_type = 0x%x)", hdr_type);
return -RT_EIO;
}
offset = dw_pcie_ep_find_ext_capability(pci, PCIZ_RESIZE_BAR);
dw_pcie_dbi_ro_writable_enable(pci, RT_TRUE);
if (offset)
{
reg = dw_pcie_readl_dbi(pci, offset + PCIM_REBAR_CTRL);
bar_nr = (reg & PCIM_REBAR_CTRL_NBAR_MASK) >> PCIM_REBAR_CTRL_NBAR_SHIFT;
for (int i = 0; i < bar_nr; ++i, offset += PCIM_REBAR_CTRL)
{
dw_pcie_writel_dbi(pci, offset + PCIM_REBAR_CAP, 0x0);
}
}
dw_pcie_setup(pci);
dw_pcie_dbi_ro_writable_enable(pci, RT_FALSE);
return RT_EOK;
}
rt_err_t dw_pcie_ep_init(struct dw_pcie_ep *ep)
{
rt_err_t err;
struct rt_pci_ep *epc = RT_NULL;
struct dw_pcie_ep_func *ep_func;
struct dw_pcie *pci = to_dw_pcie_from_endpoint(ep);
struct rt_device *dev = pci->dev;
rt_list_init(&ep->func_nodes);
if (!pci->dbi_base || !pci->dbi_base2)
{
LOG_E("dbi_base/dbi_base2 is not populated");
return -RT_EINVAL;
}
if ((err = rt_dm_dev_prop_read_u32(dev, "num-ib-windows", &ep->num_ib_windows)))
{
LOG_E("Unable to read 'num-ib-windows' property");
return err;
}
if (ep->num_ib_windows > MAX_IATU_IN)
{
LOG_E("Invalid 'num-ib-windows'");
return -RT_EINVAL;
}
if ((err = rt_dm_dev_prop_read_u32(dev, "num-ob-windows", &ep->num_ob_windows)))
{
LOG_E("Unable to read 'num-ob-windows' property");
return err;
}
if (ep->num_ob_windows > MAX_IATU_OUT)
{
LOG_E("Invalid 'num-ob-windows'");
return -RT_EINVAL;
}
ep->ib_window_map = rt_calloc(RT_BITMAP_LEN(ep->num_ib_windows), sizeof(rt_bitmap_t));
if (!ep->ib_window_map)
{
return -RT_ENOMEM;
}
ep->ob_window_map = rt_calloc(RT_BITMAP_LEN(ep->num_ob_windows), sizeof(rt_bitmap_t));
if (!ep->ob_window_map)
{
err = -RT_ENOMEM;
goto _fail;
}
ep->outbound_addr = rt_calloc(ep->num_ob_windows, sizeof(rt_ubase_t));
if (!ep->outbound_addr)
{
err = -RT_ENOMEM;
goto _fail;
}
if (pci->link_gen < 1)
{
pci->link_gen = -1;
rt_dm_dev_prop_read_u32(dev, "max-link-speed", &pci->link_gen);
}
epc = rt_calloc(1, sizeof(*epc));
if (!epc)
{
err = -RT_ENOMEM;
goto _fail;
}
epc->name = rt_dm_dev_get_name(dev);
epc->rc_dev = dev;
epc->ops = &dw_pcie_ep_ops;
epc->priv = ep;
if ((err = rt_pci_ep_register(epc)))
{
goto _fail;
}
ep->epc = epc;
if (rt_dm_dev_prop_read_u8(dev, "max-functions", &epc->max_functions))
{
epc->max_functions = 1;
}
for (rt_uint8_t func_no = 0; func_no < epc->max_functions; ++func_no)
{
ep_func = rt_calloc(1, sizeof(*ep_func));
if (!ep_func)
{
err = -RT_ENOMEM;
goto _fail;
}
ep_func->func_no = func_no;
ep_func->msi_cap = dw_pcie_ep_find_capability(ep, func_no, PCIY_MSI);
ep_func->msix_cap = dw_pcie_ep_find_capability(ep, func_no, PCIY_MSIX);
rt_list_init(&ep_func->list);
rt_list_insert_after(&ep->func_nodes, &ep_func->list);
}
if (ep->ops->ep_init)
{
ep->ops->ep_init(ep);
}
if ((err = rt_pci_ep_mem_init(epc, ep->aspace, ep->aspace_size, ep->page_size)))
{
goto _fail;
}
ep->msi_mem = rt_pci_ep_mem_alloc(epc, &ep->msi_mem_phy, ep->page_size);
if (!ep->msi_mem)
{
LOG_E("Failed to reserve memory for MSI/MSI-X");
err = -RT_ENOMEM;
goto _fail;
}
if ((err = dw_pcie_ep_init_complete(ep)))
{
goto _fail;
}
return RT_EOK;
_fail:
dw_pcie_ep_exit(ep);
return err;
}


@ -0,0 +1,644 @@
/*
* Copyright (c) 2006-2023, RT-Thread Development Team
*
* SPDX-License-Identifier: Apache-2.0
*
* Change Logs:
* Date Author Notes
* 2023-09-23 GuEe-GUI first version
*/
#define DBG_TAG "pcie.dw-host"
#define DBG_LVL DBG_INFO
#include <rtdbg.h>
#include "pcie-dw.h"
static void dw_pcie_irq_ack(struct rt_pic_irq *pirq)
{
int hwirq = pirq->hwirq;
rt_uint32_t res, bit, ctrl;
struct dw_pcie_port *port = pirq->pic->priv_data;
struct dw_pcie *pci = to_dw_pcie_from_port(port);
ctrl = hwirq / MAX_MSI_IRQS_PER_CTRL;
res = ctrl * MSI_REG_CTRL_BLOCK_SIZE;
bit = hwirq % MAX_MSI_IRQS_PER_CTRL;
dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_STATUS + res, RT_BIT(bit));
}
static void dw_pcie_irq_mask(struct rt_pic_irq *pirq)
{
rt_ubase_t level;
int hwirq = pirq->hwirq;
rt_uint32_t res, bit, ctrl;
struct dw_pcie_port *port = pirq->pic->priv_data;
struct dw_pcie *pci = to_dw_pcie_from_port(port);
rt_pci_msi_mask_irq(pirq);
level = rt_spin_lock_irqsave(&port->lock);
ctrl = hwirq / MAX_MSI_IRQS_PER_CTRL;
res = ctrl * MSI_REG_CTRL_BLOCK_SIZE;
bit = hwirq % MAX_MSI_IRQS_PER_CTRL;
port->irq_mask[ctrl] |= RT_BIT(bit);
dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_MASK + res, port->irq_mask[ctrl]);
rt_spin_unlock_irqrestore(&port->lock, level);
}
static void dw_pcie_irq_unmask(struct rt_pic_irq *pirq)
{
rt_ubase_t level;
int hwirq = pirq->hwirq;
rt_uint32_t res, bit, ctrl;
struct dw_pcie_port *port = pirq->pic->priv_data;
struct dw_pcie *pci = to_dw_pcie_from_port(port);
rt_pci_msi_unmask_irq(pirq);
level = rt_spin_lock_irqsave(&port->lock);
ctrl = hwirq / MAX_MSI_IRQS_PER_CTRL;
res = ctrl * MSI_REG_CTRL_BLOCK_SIZE;
bit = hwirq % MAX_MSI_IRQS_PER_CTRL;
port->irq_mask[ctrl] &= ~RT_BIT(bit);
dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_MASK + res, port->irq_mask[ctrl]);
rt_spin_unlock_irqrestore(&port->lock, level);
}
static void dw_pcie_compose_msi_msg(struct rt_pic_irq *pirq, struct rt_pci_msi_msg *msg)
{
rt_uint64_t msi_target;
struct dw_pcie_port *port = pirq->pic->priv_data;
msi_target = (rt_uint64_t)port->msi_data_phy;
msg->address_lo = rt_lower_32_bits(msi_target);
msg->address_hi = rt_upper_32_bits(msi_target);
msg->data = pirq->hwirq;
}
static int dw_pcie_irq_alloc_msi(struct rt_pic *pic, struct rt_pci_msi_desc *msi_desc)
{
rt_ubase_t level;
int irq, hwirq;
struct rt_pic_irq *pirq;
struct dw_pcie_port *port = pic->priv_data;
level = rt_spin_lock_irqsave(&port->lock);
hwirq = rt_bitmap_next_clear_bit(port->msi_map, 0, port->irq_count);
if (hwirq >= port->irq_count)
{
irq = -RT_EEMPTY;
goto _out_lock;
}
pirq = rt_pic_find_irq(pic, hwirq);
irq = rt_pic_config_irq(pic, hwirq, hwirq);
pirq->mode = RT_IRQ_MODE_EDGE_RISING;
rt_bitmap_set_bit(port->msi_map, hwirq);
_out_lock:
rt_spin_unlock_irqrestore(&port->lock, level);
return irq;
}
static void dw_pcie_irq_free_msi(struct rt_pic *pic, int irq)
{
rt_ubase_t level;
struct rt_pic_irq *pirq;
struct dw_pcie_port *port = pic->priv_data;
pirq = rt_pic_find_pirq(pic, irq);
if (!pirq)
{
return;
}
level = rt_spin_lock_irqsave(&port->lock);
rt_bitmap_clear_bit(port->msi_map, pirq->hwirq);
rt_spin_unlock_irqrestore(&port->lock, level);
}
static const struct rt_pic_ops dw_pci_msi_ops =
{
.name = "DWPCI-MSI",
.irq_ack = dw_pcie_irq_ack,
.irq_mask = dw_pcie_irq_mask,
.irq_unmask = dw_pcie_irq_unmask,
.irq_compose_msi_msg = dw_pcie_compose_msi_msg,
.irq_alloc_msi = dw_pcie_irq_alloc_msi,
.irq_free_msi = dw_pcie_irq_free_msi,
.flags = RT_PIC_F_IRQ_ROUTING,
};
/* MSI int handler */
rt_err_t dw_handle_msi_irq(struct dw_pcie_port *port)
{
rt_err_t err;
int i, pos;
rt_bitmap_t status;
rt_uint32_t num_ctrls;
struct rt_pic_irq *pirq;
struct dw_pcie *pci = to_dw_pcie_from_port(port);
struct rt_pic *msi_pic = port->msi_pic;
err = -RT_EEMPTY;
num_ctrls = RT_DIV_ROUND_UP(port->irq_count, MAX_MSI_IRQS_PER_CTRL);
for (i = 0; i < num_ctrls; ++i)
{
status = dw_pcie_readl_dbi(pci, PCIE_MSI_INTR0_STATUS +
(i * MSI_REG_CTRL_BLOCK_SIZE));
if (!status)
{
continue;
}
err = RT_EOK;
rt_bitmap_for_each_set_bit(&status, pos, MAX_MSI_IRQS_PER_CTRL)
{
pirq = rt_pic_find_irq(msi_pic, pos + i * MAX_MSI_IRQS_PER_CTRL);
dw_pcie_irq_ack(pirq);
rt_pic_handle_isr(pirq);
}
}
return err;
}
static void dw_pcie_msi_isr(int irqno, void *param)
{
struct dw_pcie_port *port = param;
dw_handle_msi_irq(port);
}
void dw_pcie_free_msi(struct dw_pcie_port *port)
{
if (port->msi_irq >= 0)
{
rt_hw_interrupt_mask(port->msi_irq);
rt_pic_detach_irq(port->msi_irq, port);
}
if (port->msi_data)
{
struct dw_pcie *pci = to_dw_pcie_from_port(port);
rt_dma_free_coherent(pci->dev, sizeof(rt_uint64_t), port->msi_data,
port->msi_data_phy);
}
}
void dw_pcie_msi_init(struct dw_pcie_port *port)
{
#ifdef RT_PCI_MSI
struct dw_pcie *pci = to_dw_pcie_from_port(port);
rt_uint64_t msi_target = (rt_uint64_t)port->msi_data_phy;
/* Program the msi_data_phy */
dw_pcie_writel_dbi(pci, PCIE_MSI_ADDR_LO, rt_lower_32_bits(msi_target));
dw_pcie_writel_dbi(pci, PCIE_MSI_ADDR_HI, rt_upper_32_bits(msi_target));
#endif
}
static const struct rt_pci_ops dw_child_pcie_ops;
static const struct rt_pci_ops dw_pcie_ops;
rt_err_t dw_pcie_host_init(struct dw_pcie_port *port)
{
rt_err_t err;
struct dw_pcie *pci = to_dw_pcie_from_port(port);
struct rt_device *dev = pci->dev;
struct rt_pci_host_bridge *bridge;
rt_spin_lock_init(&port->lock);
rt_dm_dev_get_address_by_name(dev, "config", &port->cfg0_addr, &port->cfg0_size);
if (port->cfg0_addr)
{
port->cfg0_base = rt_ioremap((void *)port->cfg0_addr, port->cfg0_size);
if (!port->cfg0_base)
{
return -RT_EIO;
}
}
else if (!port->cfg0_base)
{
LOG_E("Missing 'config' reg space");
}
if (!(bridge = rt_pci_host_bridge_alloc(0)))
{
return -RT_ENOMEM;
}
bridge->parent.ofw_node = dev->ofw_node;
if ((err = rt_pci_host_bridge_init(bridge)))
{
goto _err_free_bridge;
}
port->bridge = bridge;
for (int i = 0; i < bridge->bus_regions_nr; ++i)
{
struct rt_pci_bus_region *region = &bridge->bus_regions[i];
switch (region->flags)
{
case PCI_BUS_REGION_F_IO:
port->io_addr = region->cpu_addr;
port->io_bus_addr = region->phy_addr;
port->io_size = region->size;
break;
case PCI_BUS_REGION_F_NONE:
port->cfg0_size = region->size;
port->cfg0_addr = region->cpu_addr;
if (!pci->dbi_base)
{
pci->dbi_base = rt_ioremap((void *)port->cfg0_addr, port->cfg0_size);
if (!pci->dbi_base)
{
LOG_E("Error with ioremap");
err = -RT_ENOMEM;
goto _err_free_bridge;
}
}
break;
default:
break;
}
}
if (!port->cfg0_base && port->cfg0_addr)
{
port->cfg0_base = rt_ioremap((void *)port->cfg0_addr, port->cfg0_size);
if (!port->cfg0_base)
{
err = -RT_ENOMEM;
goto _err_free_bridge;
}
}
if (rt_dm_dev_prop_read_u32(dev, "num-viewport", &pci->num_viewport))
{
pci->num_viewport = 2;
}
if (pci->link_gen < 1)
{
pci->link_gen = -1;
rt_dm_dev_prop_read_u32(dev, "max-link-speed", &pci->link_gen);
}
/*
* If a specific SoC driver needs to change the default number of vectors,
* it needs to implement the set_irq_count callback.
*/
if (!port->ops->set_irq_count)
{
port->irq_count = MSI_DEF_NUM_VECTORS;
}
else
{
port->ops->set_irq_count(port);
if (port->irq_count > MAX_MSI_IRQS || port->irq_count == 0)
{
LOG_E("Invalid count of irq = %d", port->irq_count);
return -RT_EINVAL;
}
}
if (!port->ops->msi_host_init)
{
port->msi_pic = rt_calloc(1, sizeof(*port->msi_pic));
if (!port->msi_pic)
{
err = -RT_ENOMEM;
goto _err_free_bridge;
}
port->msi_pic->priv_data = port;
port->msi_pic->ops = &dw_pci_msi_ops;
rt_pic_linear_irq(port->msi_pic, port->irq_count);
rt_pic_user_extends(port->msi_pic);
if (port->msi_irq)
{
rt_hw_interrupt_install(port->msi_irq, dw_pcie_msi_isr, port, "dwc-pci-msi");
rt_hw_interrupt_umask(port->msi_irq);
}
port->msi_data = rt_dma_alloc_coherent(pci->dev, sizeof(rt_uint64_t),
&port->msi_data_phy);
if (!port->msi_data)
{
err = -RT_ENOMEM;
goto _err_free_msi;
}
}
else
{
if ((err = port->ops->msi_host_init(port)))
{
goto _err_free_msi;
}
}
/* Set default bus ops */
bridge->ops = &dw_pcie_ops;
bridge->child_ops = &dw_child_pcie_ops;
if (port->ops->host_init && (err = port->ops->host_init(port)))
{
goto _err_free_msi;
}
bridge->sysdata = port;
if ((err = rt_pci_host_bridge_probe(bridge)))
{
goto _err_free_msi;
}
return RT_EOK;
_err_free_msi:
if (!port->ops->msi_host_init)
{
dw_pcie_free_msi(port);
rt_pic_cancel_irq(port->msi_pic);
rt_free(port->msi_pic);
port->msi_pic = RT_NULL;
}
_err_free_bridge:
rt_pci_host_bridge_free(bridge);
port->bridge = RT_NULL;
return err;
}
void dw_pcie_host_deinit(struct dw_pcie_port *port)
{
if (!port->ops->msi_host_init)
{
dw_pcie_free_msi(port);
}
}
void dw_pcie_host_free(struct dw_pcie_port *port)
{
if (!port->ops->msi_host_init)
{
dw_pcie_free_msi(port);
rt_pic_cancel_irq(port->msi_pic);
rt_free(port->msi_pic);
}
if (port->bridge)
{
rt_pci_host_bridge_free(port->bridge);
}
}
static void *dw_pcie_other_conf_map(struct rt_pci_bus *bus, rt_uint32_t devfn, int reg)
{
int type;
rt_uint32_t busdev;
struct dw_pcie_port *port = bus->sysdata;
struct dw_pcie *pci = to_dw_pcie_from_port(port);
/*
* Checking whether the link is up here is a last line of defense
* against platforms that forward errors on the system bus as
* SError upon PCI configuration transactions issued when the link is down.
* This check is racy by definition and does not stop the system from
* triggering an SError if the link goes down after this check is performed.
*/
if (!dw_pcie_link_up(pci))
{
return RT_NULL;
}
busdev = PCIE_ATU_BUS(bus->number) | PCIE_ATU_DEV(RT_PCI_SLOT(devfn)) |
PCIE_ATU_FUNC(RT_PCI_FUNC(devfn));
if (rt_pci_is_root_bus(bus->parent))
{
type = PCIE_ATU_TYPE_CFG0;
}
else
{
type = PCIE_ATU_TYPE_CFG1;
}
dw_pcie_prog_outbound_atu(pci, 0, type, port->cfg0_addr, busdev, port->cfg0_size);
return port->cfg0_base + reg;
}
static rt_err_t dw_pcie_other_read_conf(struct rt_pci_bus *bus,
rt_uint32_t devfn, int reg, int width, rt_uint32_t *value)
{
rt_err_t err;
struct dw_pcie_port *port = bus->sysdata;
struct dw_pcie *pci = to_dw_pcie_from_port(port);
err = rt_pci_bus_read_config_uxx(bus, devfn, reg, width, value);
if (!err && (pci->iatu_unroll_enabled & DWC_IATU_IOCFG_SHARED))
{
dw_pcie_prog_outbound_atu(pci, 0, PCIE_ATU_TYPE_IO,
port->io_addr, port->io_bus_addr, port->io_size);
}
return err;
}
static rt_err_t dw_pcie_other_write_conf(struct rt_pci_bus *bus,
rt_uint32_t devfn, int reg, int width, rt_uint32_t value)
{
rt_err_t err;
struct dw_pcie_port *port = bus->sysdata;
struct dw_pcie *pci = to_dw_pcie_from_port(port);
err = rt_pci_bus_write_config_uxx(bus, devfn, reg, width, value);
if (!err && (pci->iatu_unroll_enabled & DWC_IATU_IOCFG_SHARED))
{
dw_pcie_prog_outbound_atu(pci, 0, PCIE_ATU_TYPE_IO,
port->io_addr, port->io_bus_addr, port->io_size);
}
return err;
}
static const struct rt_pci_ops dw_child_pcie_ops =
{
.map = dw_pcie_other_conf_map,
.read = dw_pcie_other_read_conf,
.write = dw_pcie_other_write_conf,
};
void *dw_pcie_own_conf_map(struct rt_pci_bus *bus, rt_uint32_t devfn, int reg)
{
struct dw_pcie_port *port = bus->sysdata;
struct dw_pcie *pci = to_dw_pcie_from_port(port);
if (RT_PCI_SLOT(devfn) > 0)
{
return RT_NULL;
}
return pci->dbi_base + reg;
}
static const struct rt_pci_ops dw_pcie_ops =
{
.map = dw_pcie_own_conf_map,
.read = rt_pci_bus_read_config_uxx,
.write = rt_pci_bus_write_config_uxx,
};
void dw_pcie_setup_rc(struct dw_pcie_port *port)
{
rt_uint32_t val, num_ctrls;
struct dw_pcie *pci = to_dw_pcie_from_port(port);
/*
* Enable DBI read-only registers for writing/updating configuration.
* Write permission gets disabled towards the end of this function.
*/
dw_pcie_dbi_ro_writable_enable(pci, RT_TRUE);
dw_pcie_setup(pci);
if (!port->ops->msi_host_init)
{
num_ctrls = RT_DIV_ROUND_UP(port->irq_count, MAX_MSI_IRQS_PER_CTRL);
/* Initialize IRQ Status array */
for (int ctrl = 0; ctrl < num_ctrls; ++ctrl)
{
port->irq_mask[ctrl] = ~0;
dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_MASK +
(ctrl * MSI_REG_CTRL_BLOCK_SIZE), port->irq_mask[ctrl]);
dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_ENABLE +
(ctrl * MSI_REG_CTRL_BLOCK_SIZE), ~0);
}
}
/* Setup RC BARs */
dw_pcie_writel_dbi(pci, PCIR_BAR(0), PCIM_BAR_MEM_TYPE_64);
dw_pcie_writel_dbi(pci, PCIR_BAR(1), PCIM_BAR_MEM_TYPE_32);
/* Setup interrupt pins */
val = dw_pcie_readl_dbi(pci, PCIR_INTLINE);
val &= 0xffff00ff;
val |= 0x00000100;
dw_pcie_writel_dbi(pci, PCIR_INTLINE, val);
/* Setup bus numbers */
val = dw_pcie_readl_dbi(pci, PCIR_PRIBUS_1);
val &= 0xff000000;
val |= 0x00ff0100;
dw_pcie_writel_dbi(pci, PCIR_PRIBUS_1, val);
/* Setup command register */
val = dw_pcie_readl_dbi(pci, PCIR_COMMAND);
val &= 0xffff0000;
val |= PCIM_CMD_PORTEN | PCIM_CMD_MEMEN | PCIM_CMD_BUSMASTEREN | PCIM_CMD_SERRESPEN;
dw_pcie_writel_dbi(pci, PCIR_COMMAND, val);
/*
* If the platform provides its own child bus config accesses, it means
* the platform uses its own address translation component rather than
* ATU, so we should not program the ATU here.
*/
if (pci->port.bridge->child_ops == &dw_child_pcie_ops)
{
int atu_idx = 0;
struct rt_pci_host_bridge *bridge = port->bridge;
/* Get last memory resource entry */
for (int i = 0; i < bridge->bus_regions_nr; ++i)
{
struct rt_pci_bus_region *region = &bridge->bus_regions[i];
if (region->flags != PCI_BUS_REGION_F_MEM)
{
continue;
}
if (pci->num_viewport <= ++atu_idx)
{
break;
}
dw_pcie_prog_outbound_atu(pci, atu_idx,
PCIE_ATU_TYPE_MEM, region->cpu_addr,
region->phy_addr, region->size);
}
if (port->io_size)
{
if (pci->num_viewport > ++atu_idx)
{
dw_pcie_prog_outbound_atu(pci, atu_idx,
PCIE_ATU_TYPE_IO, port->io_addr,
port->io_bus_addr, port->io_size);
}
else
{
pci->iatu_unroll_enabled |= DWC_IATU_IOCFG_SHARED;
}
}
if (pci->num_viewport <= atu_idx)
{
LOG_W("Resources exceed number of ATU entries (%d)", pci->num_viewport);
}
}
dw_pcie_writel_dbi(pci, PCIR_BAR(0), 0);
/* Program correct class for RC */
dw_pcie_writew_dbi(pci, PCIR_SUBCLASS, PCIS_BRIDGE_PCI);
val = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL);
val |= PORT_LOGIC_SPEED_CHANGE;
dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val);
dw_pcie_dbi_ro_writable_enable(pci, RT_FALSE);
}


@ -0,0 +1,295 @@
/*
* Copyright (c) 2006-2023, RT-Thread Development Team
*
* SPDX-License-Identifier: Apache-2.0
*
* Change Logs:
* Date Author Notes
* 2023-09-23 GuEe-GUI first version
*/
#include <rtthread.h>
#include <rtdevice.h>
#define DBG_TAG "pcie.dw.platform"
#define DBG_LVL DBG_INFO
#include <rtdbg.h>
#include "pcie-dw.h"
struct dw_dw_platform_pcie_soc_data
{
enum dw_pcie_device_mode mode;
};
struct dw_platform_pcie
{
struct dw_pcie *pci;
struct rt_syscon *regmap;
const struct dw_dw_platform_pcie_soc_data *soc_data;
};
static rt_err_t dw_platform_pcie_host_init(struct dw_pcie_port *port)
{
struct dw_pcie *pci = to_dw_pcie_from_port(port);
dw_pcie_setup_rc(port);
dw_pcie_wait_for_link(pci);
dw_pcie_msi_init(port);
return RT_EOK;
}
static void dw_platform_set_irq_count(struct dw_pcie_port *pp)
{
pp->irq_count = MAX_MSI_IRQS;
}
static const struct dw_pcie_host_ops dw_platform_pcie_host_ops =
{
.host_init = dw_platform_pcie_host_init,
.set_irq_count = dw_platform_set_irq_count,
};
static rt_err_t dw_platform_pcie_establish_link(struct dw_pcie *pci)
{
return RT_EOK;
}
static const struct dw_pcie_ops dw_platform_pcie_ops =
{
.start_link = dw_platform_pcie_establish_link,
};
static rt_err_t dw_platform_pcie_ep_init(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_endpoint(ep);
for (int bar = 0; bar < PCI_STD_NUM_BARS; ++bar)
{
dw_pcie_ep_reset_bar(pci, bar);
}
return RT_EOK;
}
static rt_err_t dw_platform_pcie_ep_raise_irq(struct dw_pcie_ep *ep,
rt_uint8_t func_no, enum rt_pci_ep_irq type, unsigned irq)
{
switch (type)
{
case RT_PCI_EP_IRQ_LEGACY:
return dw_pcie_ep_raise_legacy_irq(ep, func_no);
case RT_PCI_EP_IRQ_MSI:
return dw_pcie_ep_raise_msi_irq(ep, func_no, irq);
case RT_PCI_EP_IRQ_MSIX:
return dw_pcie_ep_raise_msix_irq(ep, func_no, irq);
default:
LOG_E("Unknown IRQ type = %d", type);
}
return RT_EOK;
}
static const struct dw_pcie_ep_ops dw_platform_pcie_ep_ops =
{
.ep_init = dw_platform_pcie_ep_init,
.raise_irq = dw_platform_pcie_ep_raise_irq,
};
static rt_err_t dw_platform_add_pcie_port(struct dw_platform_pcie *plat_pcie,
struct rt_device *dev)
{
rt_err_t err;
struct dw_pcie *pci = plat_pcie->pci;
struct dw_pcie_port *port = &pci->port;
port->sys_irq = rt_dm_dev_get_irq(dev, 1);
if (port->sys_irq < 0)
{
return port->sys_irq;
}
#ifdef RT_PCI_MSI
port->msi_irq = rt_dm_dev_get_irq(dev, 0);
if (port->msi_irq < 0)
{
return port->msi_irq;
}
#endif
port->ops = &dw_platform_pcie_host_ops;
if ((err = dw_pcie_host_init(port)))
{
LOG_E("Failed to initialize host");
return err;
}
return RT_EOK;
}
static rt_err_t dw_platform_add_pcie_ep(struct dw_platform_pcie *plat_pcie,
struct rt_device *dev)
{
rt_err_t err;
struct dw_pcie *pci = plat_pcie->pci;
struct dw_pcie_ep *ep = &pci->endpoint;
pci->dbi_base2 = rt_dm_dev_iomap_by_name(dev, "dbi2");
if (!pci->dbi_base2)
{
return -RT_EIO;
}
err = rt_dm_dev_get_address_by_name(dev, "addr_space", &ep->aspace, &ep->aspace_size);
if (err)
{
rt_iounmap(pci->dbi_base2);
return err;
}
ep->ops = &dw_platform_pcie_ep_ops;
if ((err = dw_pcie_ep_init(ep)))
{
LOG_E("Failed to initialize endpoint");
rt_iounmap(pci->dbi_base2);
return err;
}
return RT_EOK;
}
static rt_err_t dw_platform_pcie_probe(struct rt_platform_device *pdev)
{
rt_err_t err;
struct dw_pcie *pci = RT_NULL;
struct dw_platform_pcie *plat_pcie;
struct rt_device *dev = &pdev->parent;
if (!(plat_pcie = rt_calloc(1, sizeof(*plat_pcie))))
{
return -RT_ENOMEM;
}
if (!(pci = rt_calloc(1, sizeof(*pci))))
{
err = -RT_ENOMEM;
goto _fail;
}
plat_pcie->pci = pci;
plat_pcie->soc_data = pdev->id->data;
pci->dev = dev;
pci->ops = &dw_platform_pcie_ops;
pci->dbi_base = rt_dm_dev_iomap_by_name(dev, "dbi");
if (!pci->dbi_base)
{
err = -RT_EIO;
goto _fail;
}
dev->user_data = plat_pcie;
switch (plat_pcie->soc_data->mode)
{
case DW_PCIE_RC_TYPE:
if (!RT_KEY_ENABLED(RT_PCI_DW_HOST))
{
err = -RT_ENOSYS;
goto _fail;
}
if ((err = dw_platform_add_pcie_port(plat_pcie, dev)))
{
goto _fail;
}
break;
case DW_PCIE_EP_TYPE:
if (!RT_KEY_ENABLED(RT_PCI_DW_EP))
{
err = -RT_ENOSYS;
goto _fail;
}
if ((err = dw_platform_add_pcie_ep(plat_pcie, dev)))
{
goto _fail;
}
break;
default:
LOG_E("Invalid device type %d", plat_pcie->soc_data->mode);
err = -RT_EINVAL;
goto _fail;
}
return RT_EOK;
_fail:
if (pci)
{
if (pci->dbi_base)
{
rt_iounmap(pci->dbi_base);
}
rt_free(pci);
}
rt_free(plat_pcie);
return err;
}
static rt_err_t dw_platform_pcie_remove(struct rt_platform_device *pdev)
{
struct dw_platform_pcie *plat_pcie = pdev->parent.user_data;
rt_pci_host_bridge_remove(plat_pcie->pci->port.bridge);
dw_pcie_host_free(&plat_pcie->pci->port);
rt_iounmap(plat_pcie->pci->dbi_base);
rt_free(plat_pcie->pci);
rt_free(plat_pcie);
return RT_EOK;
}
static const struct dw_dw_platform_pcie_soc_data dw_platform_pcie_rc_soc_data =
{
.mode = DW_PCIE_RC_TYPE,
};
static const struct dw_dw_platform_pcie_soc_data dw_platform_pcie_ep_soc_data =
{
.mode = DW_PCIE_EP_TYPE,
};
static const struct rt_ofw_node_id dw_platform_pcie_ofw_ids[] =
{
{ .compatible = "snps,dw-pcie", .data = &dw_platform_pcie_rc_soc_data },
{ .compatible = "snps,dw-pcie-ep", .data = &dw_platform_pcie_ep_soc_data },
{ /* sentinel */ }
};
static struct rt_platform_driver dw_platform_pcie_driver =
{
.name = "dw-pcie",
.ids = dw_platform_pcie_ofw_ids,
.probe = dw_platform_pcie_probe,
.remove = dw_platform_pcie_remove,
};
RT_PLATFORM_DRIVER_EXPORT(dw_platform_pcie_driver);


@ -116,7 +116,7 @@ rt_err_t rt_ofw_get_phyid(struct rt_ofw_node *np,rt_uint32_t *id)
if (ret)
return ret;
- ret = sscanf(phy_id,"ethernet-phy-id%4x.%4x",&upper, &lower);
+ ret = rt_sscanf(phy_id,"ethernet-phy-id%4x.%4x",&upper, &lower);
if(ret != 2)
return -RT_ERROR;


@ -0,0 +1,23 @@
menuconfig RT_USING_REGULATOR
bool "Using Voltage and Current Regulator"
select RT_USING_ADT
select RT_USING_ADT_REF
depends on RT_USING_DM
default n
config RT_REGULATOR_FIXED
bool "Fixed regulator support"
depends on RT_USING_REGULATOR
depends on RT_USING_PIN
depends on RT_USING_PINCTRL
default y
config RT_REGULATOR_GPIO
bool "GPIO regulator support"
depends on RT_USING_REGULATOR
depends on RT_USING_PIN
default y
if RT_USING_REGULATOR
osource "$(SOC_DM_REGULATOR_DIR)/Kconfig"
endif


@ -0,0 +1,21 @@
from building import *
group = []
if not GetDepend(['RT_USING_REGULATOR']):
Return('group')
cwd = GetCurrentDir()
CPPPATH = [cwd + '/../include']
src = ['regulator.c', 'regulator_dm.c']
if GetDepend(['RT_REGULATOR_FIXED']):
src += ['regulator-fixed.c']
if GetDepend(['RT_REGULATOR_GPIO']):
src += ['regulator-gpio.c']
group = DefineGroup('DeviceDrivers', src, depend = [''], CPPPATH = CPPPATH)
Return('group')


@ -0,0 +1,171 @@
/*
* Copyright (c) 2006-2023, RT-Thread Development Team
*
* SPDX-License-Identifier: Apache-2.0
*
* Change Logs:
* Date Author Notes
* 2023-09-23 GuEe-GUI first version
*/
#include "regulator_dm.h"
struct regulator_fixed
{
struct rt_regulator_node parent;
struct rt_regulator_param param;
rt_base_t enable_pin;
const char *input_supply;
};
#define raw_to_regulator_fixed(raw) rt_container_of(raw, struct regulator_fixed, parent)
static rt_err_t regulator_fixed_enable(struct rt_regulator_node *reg_np)
{
struct regulator_fixed *rf = raw_to_regulator_fixed(reg_np);
struct rt_regulator_param *param = &rf->param;
if (rf->enable_pin < 0 || param->always_on)
{
return RT_EOK;
}
rt_pin_mode(rf->enable_pin, PIN_MODE_OUTPUT);
rt_pin_write(rf->enable_pin, param->enable_active_high ? PIN_HIGH : PIN_LOW);
return RT_EOK;
}
static rt_err_t regulator_fixed_disable(struct rt_regulator_node *reg_np)
{
struct regulator_fixed *rf = raw_to_regulator_fixed(reg_np);
struct rt_regulator_param *param = &rf->param;
if (rf->enable_pin < 0 || param->always_on)
{
return RT_EOK;
}
rt_pin_mode(rf->enable_pin, PIN_MODE_OUTPUT);
rt_pin_write(rf->enable_pin, param->enable_active_high ? PIN_LOW : PIN_HIGH);
return RT_EOK;
}
static rt_bool_t regulator_fixed_is_enabled(struct rt_regulator_node *reg_np)
{
rt_uint8_t active;
struct regulator_fixed *rf = raw_to_regulator_fixed(reg_np);
struct rt_regulator_param *param = &rf->param;
if (rf->enable_pin < 0 || param->always_on)
{
return RT_TRUE;
}
rt_pin_mode(rf->enable_pin, PIN_MODE_INPUT);
active = rt_pin_read(rf->enable_pin);
if (param->enable_active_high)
{
return active == PIN_HIGH;
}
return active == PIN_LOW;
}
static int regulator_fixed_get_voltage(struct rt_regulator_node *reg_np)
{
struct regulator_fixed *rf = raw_to_regulator_fixed(reg_np);
return rf->param.min_uvolt + (rf->param.max_uvolt - rf->param.min_uvolt) / 2;
}
static const struct rt_regulator_ops regulator_fixed_ops =
{
.enable = regulator_fixed_enable,
.disable = regulator_fixed_disable,
.is_enabled = regulator_fixed_is_enabled,
.get_voltage = regulator_fixed_get_voltage,
};
static rt_err_t regulator_fixed_probe(struct rt_platform_device *pdev)
{
rt_err_t err;
rt_uint32_t val;
struct rt_device *dev = &pdev->parent;
struct regulator_fixed *rf = rt_calloc(1, sizeof(*rf));
struct rt_regulator_node *rnp;
if (!rf)
{
return -RT_ENOMEM;
}
regulator_ofw_parse(dev->ofw_node, &rf->param);
rnp = &rf->parent;
rnp->supply_name = rf->param.name;
rnp->ops = &regulator_fixed_ops;
rnp->param = &rf->param;
rnp->dev = &pdev->parent;
rf->enable_pin = rt_pin_get_named_pin(dev, "enable", 0, RT_NULL, RT_NULL);
if (rf->enable_pin < 0)
{
rf->enable_pin = rt_pin_get_named_pin(dev, RT_NULL, 0, RT_NULL, RT_NULL);
}
if (rf->enable_pin < 0)
{
rf->enable_pin = -1;
}
rt_pin_ctrl_confs_apply(dev, 0);
if (!rt_dm_dev_prop_read_u32(dev, "startup-delay-us", &val))
{
rf->param.enable_delay = val;
}
if (!rt_dm_dev_prop_read_u32(dev, "off-on-delay-us", &val))
{
rf->param.off_on_delay = val;
}
if ((err = rt_regulator_register(rnp)))
{
goto _fail;
}
return RT_EOK;
_fail:
rt_free(rf);
return err;
}
static const struct rt_ofw_node_id regulator_fixed_ofw_ids[] =
{
{ .compatible = "regulator-fixed" },
{ /* sentinel */ }
};
static struct rt_platform_driver regulator_fixed_driver =
{
.name = "reg-fixed-voltage",
.ids = regulator_fixed_ofw_ids,
.probe = regulator_fixed_probe,
};
static int regulator_fixed_register(void)
{
rt_platform_driver_register(&regulator_fixed_driver);
return 0;
}
INIT_SUBSYS_EXPORT(regulator_fixed_register);
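As a side note on the driver above: the fixed regulator reports the midpoint of its supported range and derives its on/off state from the enable pin's polarity. A standalone sketch of that logic, with hypothetical names (a model for illustration, not the RT-Thread API):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical standalone model of the fixed-regulator logic above. */
struct fixed_param {
    int min_uvolt, max_uvolt;
    bool enable_active_high;
};

/* Mirrors regulator_fixed_get_voltage(): midpoint of the supported range. */
static int fixed_get_voltage(const struct fixed_param *p)
{
    return p->min_uvolt + (p->max_uvolt - p->min_uvolt) / 2;
}

/* Mirrors regulator_fixed_is_enabled(): compare the pin level (1 = high,
 * 0 = low) against the configured polarity. */
static bool fixed_is_enabled(const struct fixed_param *p, int pin_level)
{
    return p->enable_active_high ? (pin_level == 1) : (pin_level == 0);
}
```

For a typical fixed supply, min and max are equal, so the midpoint is simply the nominal voltage.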


@ -0,0 +1,309 @@
/*
* Copyright (c) 2006-2023, RT-Thread Development Team
*
* SPDX-License-Identifier: Apache-2.0
*
* Change Logs:
* Date Author Notes
* 2023-09-23 GuEe-GUI first version
*/
#include <dt-bindings/pin/state.h>
#include "regulator_dm.h"
struct regulator_gpio_state
{
rt_uint32_t value;
rt_uint32_t gpios;
};
struct regulator_gpio_desc
{
rt_base_t pin;
rt_uint32_t flags;
};
struct regulator_gpio
{
struct rt_regulator_node parent;
rt_base_t enable_pin;
rt_size_t pins_nr;
struct regulator_gpio_desc *pins_desc;
int state;
rt_size_t states_nr;
struct regulator_gpio_state *states;
const char *input_supply;
rt_uint32_t startup_delay;
rt_uint32_t off_on_delay;
rt_bool_t enabled_at_boot;
struct rt_regulator_param param;
};
#define raw_to_regulator_gpio(raw) rt_container_of(raw, struct regulator_gpio, parent)
static rt_err_t regulator_gpio_enable(struct rt_regulator_node *reg_np)
{
struct regulator_gpio *rg = raw_to_regulator_gpio(reg_np);
struct rt_regulator_param *param = &rg->param;
if (param->always_on)
{
return RT_EOK;
}
if (rg->enable_pin >= 0)
{
rt_pin_mode(rg->enable_pin, PIN_MODE_OUTPUT);
rt_pin_write(rg->enable_pin, param->enable_active_high ? PIN_HIGH : PIN_LOW);
}
return RT_EOK;
}
static rt_err_t regulator_gpio_disable(struct rt_regulator_node *reg_np)
{
struct regulator_gpio *rg = raw_to_regulator_gpio(reg_np);
struct rt_regulator_param *param = &rg->param;
if (param->always_on)
{
return RT_EOK;
}
if (rg->enable_pin >= 0)
{
rt_pin_mode(rg->enable_pin, PIN_MODE_OUTPUT);
rt_pin_write(rg->enable_pin, param->enable_active_high ? PIN_LOW : PIN_HIGH);
}
return RT_EOK;
}
static rt_bool_t regulator_gpio_is_enabled(struct rt_regulator_node *reg_np)
{
struct regulator_gpio *rg = raw_to_regulator_gpio(reg_np);
struct rt_regulator_param *param = &rg->param;
if (param->always_on)
{
return RT_TRUE;
}
if (rg->enable_pin >= 0)
{
rt_uint8_t active_val = param->enable_active_high ? PIN_HIGH : PIN_LOW;
rt_pin_mode(rg->enable_pin, PIN_MODE_INPUT);
return rt_pin_read(rg->enable_pin) == active_val;
}
return RT_TRUE;
}
static rt_err_t regulator_gpio_set_voltage(struct rt_regulator_node *reg_np,
int min_uvolt, int max_uvolt)
{
int target = 0, best_val = RT_REGULATOR_UVOLT_INVALID;
struct regulator_gpio *rg = raw_to_regulator_gpio(reg_np);
for (int i = 0; i < rg->states_nr; ++i)
{
struct regulator_gpio_state *state = &rg->states[i];
if (state->value < best_val &&
state->value >= min_uvolt &&
state->value <= max_uvolt)
{
target = state->gpios;
best_val = state->value;
}
}
if (best_val == RT_REGULATOR_UVOLT_INVALID)
{
return -RT_EINVAL;
}
for (int i = 0; i < rg->pins_nr; ++i)
{
int state = (target >> i) & 1;
struct regulator_gpio_desc *gpiod = &rg->pins_desc[i];
rt_pin_mode(gpiod->pin, PIN_MODE_OUTPUT);
rt_pin_write(gpiod->pin, gpiod->flags == PIND_OUT_HIGH ? state : !state);
}
rg->state = target;
return RT_EOK;
}
static int regulator_gpio_get_voltage(struct rt_regulator_node *reg_np)
{
struct regulator_gpio *rg = raw_to_regulator_gpio(reg_np);
for (int i = 0; i < rg->states_nr; ++i)
{
if (rg->states[i].gpios == rg->state)
{
return rg->states[i].value;
}
}
return -RT_EINVAL;
}
static const struct rt_regulator_ops regulator_gpio_ops =
{
.enable = regulator_gpio_enable,
.disable = regulator_gpio_disable,
.is_enabled = regulator_gpio_is_enabled,
.set_voltage = regulator_gpio_set_voltage,
.get_voltage = regulator_gpio_get_voltage,
};
static rt_err_t regulator_gpio_probe(struct rt_platform_device *pdev)
{
rt_err_t err;
struct rt_device *dev = &pdev->parent;
struct regulator_gpio *rg = rt_calloc(1, sizeof(*rg));
struct rt_regulator_node *rgp;
if (!rg)
{
return -RT_ENOMEM;
}
regulator_ofw_parse(dev->ofw_node, &rg->param);
rgp = &rg->parent;
rgp->supply_name = rg->param.name;
rgp->ops = &regulator_gpio_ops;
rgp->param = &rg->param;
rgp->dev = &pdev->parent;
rt_dm_dev_prop_read_u32(dev, "startup-delay-us", &rg->startup_delay);
rt_dm_dev_prop_read_u32(dev, "off-on-delay-us", &rg->off_on_delay);
/* GPIO flags are ignored; the active polarity comes from enable-active-high */
rg->enable_pin = rt_pin_get_named_pin(dev, "enable", 0, RT_NULL, RT_NULL);
if (rg->enable_pin < 0 && rg->enable_pin != -RT_EEMPTY)
{
err = rg->enable_pin;
goto _fail;
}
rg->pins_nr = rt_pin_get_named_pin_count(dev, "gpios");
if (rg->pins_nr > 0)
{
rg->pins_desc = rt_malloc(sizeof(*rg->pins_desc) * rg->pins_nr);
if (!rg->pins_desc)
{
err = -RT_ENOMEM;
goto _fail;
}
for (int i = 0; i < rg->pins_nr; ++i)
{
rt_uint32_t val;
struct regulator_gpio_desc *gpiod = &rg->pins_desc[i];
gpiod->pin = rt_pin_get_named_pin(dev, RT_NULL, i, RT_NULL, RT_NULL);
if (gpiod->pin < 0)
{
err = gpiod->pin;
goto _fail;
}
if (rt_dm_dev_prop_read_u32_index(dev, "gpios-states", i, &val) < 0)
{
gpiod->flags = PIND_OUT_HIGH;
}
else
{
gpiod->flags = val ? PIND_OUT_HIGH : PIND_OUT_LOW;
}
if (gpiod->flags == PIND_OUT_HIGH)
{
rg->state |= 1 << i;
}
}
}
int states_count = rt_dm_dev_prop_count_of_u32(dev, "states") / 2;
if (states_count < 0)
{
err = -RT_EIO;
goto _fail;
}
rg->states_nr = states_count;
rg->states = rt_malloc(sizeof(*rg->states) * rg->states_nr);
if (!rg->states)
{
err = -RT_ENOMEM;
goto _fail;
}
for (int i = 0; i < rg->states_nr; ++i)
{
rt_dm_dev_prop_read_u32_index(dev, "states", i * 2, &rg->states[i].value);
rt_dm_dev_prop_read_u32_index(dev, "states", i * 2 + 1, &rg->states[i].gpios);
}
if ((err = rt_regulator_register(rgp)))
{
goto _fail;
}
return RT_EOK;
_fail:
if (rg->pins_desc)
{
rt_free(rg->pins_desc);
}
if (rg->states)
{
rt_free(rg->states);
}
rt_free(rg);
return err;
}
static const struct rt_ofw_node_id regulator_gpio_ofw_ids[] =
{
{ .compatible = "regulator-gpio" },
{ /* sentinel */ }
};
static struct rt_platform_driver regulator_gpio_driver =
{
.name = "regulator-gpio",
.ids = regulator_gpio_ofw_ids,
.probe = regulator_gpio_probe,
};
static int regulator_gpio_register(void)
{
rt_platform_driver_register(&regulator_gpio_driver);
return 0;
}
INIT_SUBSYS_EXPORT(regulator_gpio_register);
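The states table drives regulator_gpio_set_voltage() above: it picks the lowest voltage that still falls inside the requested [min, max] window and returns the matching gpios bitmask. A plain-C model of that selection (hypothetical names, not the driver's API):

```c
#include <assert.h>
#include <limits.h>

/* One entry of the "states" table: a voltage and its pin bitmask. */
struct gpio_state { unsigned value; unsigned gpios; };

/* Return the gpios bitmask of the lowest in-range voltage, or -1 when no
 * state fits the [min_uvolt, max_uvolt] window, mirroring the loop in
 * regulator_gpio_set_voltage(). */
static int select_gpio_state(const struct gpio_state *states, int nr,
                             unsigned min_uvolt, unsigned max_uvolt)
{
    unsigned best_val = UINT_MAX;
    int target = -1;

    for (int i = 0; i < nr; ++i)
    {
        if (states[i].value < best_val &&
            states[i].value >= min_uvolt &&
            states[i].value <= max_uvolt)
        {
            target = (int)states[i].gpios;
            best_val = states[i].value;
        }
    }
    return target;
}
```

Each bit of the returned mask is then written out to the corresponding pin, honoring the per-pin flags.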


@ -0,0 +1,629 @@
/*
* Copyright (c) 2006-2023, RT-Thread Development Team
*
* SPDX-License-Identifier: Apache-2.0
*
* Change Logs:
* Date Author Notes
* 2023-09-23 GuEe-GUI first version
*/
#include <rtthread.h>
#include <rtservice.h>
#define DBG_TAG "rtdm.regulator"
#define DBG_LVL DBG_INFO
#include <rtdbg.h>
#include <drivers/ofw.h>
#include <drivers/platform.h>
#include <drivers/regulator.h>
struct rt_regulator
{
struct rt_regulator_node *reg_np;
};
static struct rt_spinlock _regulator_lock = { 0 };
static rt_err_t regulator_enable(struct rt_regulator_node *reg_np);
static rt_err_t regulator_disable(struct rt_regulator_node *reg_np);
rt_err_t rt_regulator_register(struct rt_regulator_node *reg_np)
{
const struct rt_regulator_param *param;
if (!reg_np || !reg_np->dev || !reg_np->param || !reg_np->ops)
{
return -RT_EINVAL;
}
rt_list_init(&reg_np->list);
rt_list_init(&reg_np->children_nodes);
rt_list_init(&reg_np->notifier_nodes);
rt_ref_init(&reg_np->ref);
rt_atomic_store(&reg_np->enabled_count, 0);
param = reg_np->param;
reg_np->parent = RT_NULL;
#ifdef RT_USING_OFW
if (reg_np->dev->ofw_node)
{
rt_ofw_data(reg_np->dev->ofw_node) = reg_np;
}
#endif /* RT_USING_OFW */
if (param->boot_on || param->always_on)
{
regulator_enable(reg_np);
}
return RT_EOK;
}
rt_err_t rt_regulator_unregister(struct rt_regulator_node *reg_np)
{
rt_err_t err = RT_EOK;
if (!reg_np)
{
return -RT_EINVAL;
}
rt_hw_spin_lock(&_regulator_lock.lock);
if (rt_atomic_load(&reg_np->enabled_count) != 0)
{
err = -RT_EBUSY;
LOG_E("%s was enabled by consumer", reg_np->supply_name);
goto _unlock;
}
if (!(reg_np->param->boot_on || reg_np->param->always_on))
{
regulator_disable(reg_np);
}
if (!rt_list_isempty(&reg_np->children_nodes) || rt_ref_read(&reg_np->ref) > 1)
{
err = -RT_EBUSY;
goto _unlock;
}
reg_np->parent = RT_NULL;
rt_list_remove(&reg_np->list);
_unlock:
rt_hw_spin_unlock(&_regulator_lock.lock);
return err;
}
rt_err_t rt_regulator_notifier_register(struct rt_regulator *reg,
struct rt_regulator_notifier *notifier)
{
struct rt_regulator_node *reg_np;
if (!reg || !notifier)
{
return -RT_EINVAL;
}
rt_hw_spin_lock(&_regulator_lock.lock);
reg_np = reg->reg_np;
notifier->regulator = reg;
rt_list_init(&notifier->list);
rt_list_insert_after(&reg_np->notifier_nodes, &notifier->list);
rt_hw_spin_unlock(&_regulator_lock.lock);
return RT_EOK;
}
rt_err_t rt_regulator_notifier_unregister(struct rt_regulator *reg,
struct rt_regulator_notifier *notifier)
{
if (!reg || !notifier)
{
return -RT_EINVAL;
}
rt_hw_spin_lock(&_regulator_lock.lock);
rt_list_remove(&notifier->list);
rt_hw_spin_unlock(&_regulator_lock.lock);
return RT_EOK;
}
static rt_err_t regulator_notifier_call_chain(struct rt_regulator_node *reg_np,
rt_ubase_t msg, void *data)
{
rt_err_t err = RT_EOK;
struct rt_regulator_notifier *notifier;
rt_list_t *head = &reg_np->notifier_nodes;
if (rt_list_isempty(head))
{
return err;
}
rt_list_for_each_entry(notifier, head, list)
{
err = notifier->callback(notifier, msg, data);
if (err == -RT_EIO)
{
break;
}
}
return err;
}
static rt_uint32_t regulator_get_enable_time(struct rt_regulator_node *reg_np)
{
if (reg_np->param->enable_delay)
{
return reg_np->param->enable_delay;
}
if (reg_np->ops->enable_time)
{
return reg_np->ops->enable_time(reg_np);
}
return 0;
}
static void regulator_delay(rt_uint32_t delay)
{
rt_uint32_t ms = delay / 1000;
rt_uint32_t us = delay % 1000;
if (ms > 0)
{
/*
* For small enough values, fold the super-millisecond
* part into the microsecond delay handled below.
*/
if (ms < 20)
{
us += ms * 1000;
}
else if (rt_thread_self())
{
rt_thread_mdelay(ms);
}
else
{
rt_hw_us_delay(ms * 1000);
}
}
/*
* Give the scheduler some room to coalesce with any other
* wakeup sources. For delays shorter than 10 us, don't even
* bother setting up high-resolution timers and just busy-loop.
*/
if (us >= 10)
{
rt_hw_us_delay((us + 100) >> 1);
}
else
{
rt_hw_us_delay(us);
}
}
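regulator_delay() above splits a microsecond delay into a millisecond sleep and a microsecond busy-wait, folding sub-20 ms delays entirely into the busy-wait. A standalone model of just the split (hypothetical names; the `(us + 100) >> 1` averaging of the busy-wait is omitted for clarity):

```c
#include <assert.h>

/* Hypothetical model of regulator_delay()'s split: delays whose
 * millisecond part is under 20 ms are folded entirely into the
 * microsecond busy-wait; longer delays sleep the millisecond part. */
struct delay_split {
    unsigned sleep_ms;   /* handed to the thread sleep */
    unsigned busy_us;    /* handed to the busy-wait */
};

static struct delay_split split_delay(unsigned delay_us)
{
    struct delay_split d;

    d.sleep_ms = delay_us / 1000;
    d.busy_us = delay_us % 1000;
    if (d.sleep_ms > 0 && d.sleep_ms < 20)
    {
        d.busy_us += d.sleep_ms * 1000;
        d.sleep_ms = 0;
    }
    return d;
}
```

The rationale is that a thread sleep is only worthwhile above a scheduling-tick-sized delay; short delays are cheaper to busy-wait.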
static rt_err_t regulator_enable(struct rt_regulator_node *reg_np)
{
rt_err_t err = RT_EOK;
rt_uint32_t enable_delay = regulator_get_enable_time(reg_np);
if (reg_np->ops->enable)
{
err = reg_np->ops->enable(reg_np);
if (!err)
{
if (enable_delay)
{
regulator_delay(enable_delay);
}
rt_atomic_add(&reg_np->enabled_count, 1);
err = regulator_notifier_call_chain(reg_np, RT_REGULATOR_MSG_ENABLE, RT_NULL);
}
}
if (!err && reg_np->parent)
{
err = regulator_enable(reg_np->parent);
}
return err;
}
rt_err_t rt_regulator_enable(struct rt_regulator *reg)
{
rt_err_t err;
if (!reg)
{
return -RT_EINVAL;
}
if (rt_regulator_is_enabled(reg))
{
return RT_EOK;
}
rt_hw_spin_lock(&_regulator_lock.lock);
err = regulator_enable(reg->reg_np);
rt_hw_spin_unlock(&_regulator_lock.lock);
return err;
}
static rt_err_t regulator_disable(struct rt_regulator_node *reg_np)
{
rt_err_t err = RT_EOK;
if (reg_np->ops->disable)
{
err = reg_np->ops->disable(reg_np);
if (!err)
{
if (reg_np->param->off_on_delay)
{
regulator_delay(reg_np->param->off_on_delay);
}
err = regulator_notifier_call_chain(reg_np, RT_REGULATOR_MSG_DISABLE, RT_NULL);
}
}
if (!err && reg_np->parent)
{
err = regulator_disable(reg_np->parent);
}
return err;
}
rt_err_t rt_regulator_disable(struct rt_regulator *reg)
{
rt_err_t err;
if (!reg)
{
return -RT_EINVAL;
}
if (!rt_regulator_is_enabled(reg))
{
return RT_EOK;
}
if (rt_atomic_load(&reg->reg_np->enabled_count) != 0)
{
rt_atomic_sub(&reg->reg_np->enabled_count, 1);
return RT_EOK;
}
rt_hw_spin_lock(&_regulator_lock.lock);
err = regulator_disable(reg->reg_np);
rt_hw_spin_unlock(&_regulator_lock.lock);
return err;
}
rt_bool_t rt_regulator_is_enabled(struct rt_regulator *reg)
{
if (!reg)
{
return RT_FALSE;
}
if (reg->reg_np->ops->is_enabled)
{
return reg->reg_np->ops->is_enabled(reg->reg_np);
}
return rt_atomic_load(&reg->reg_np->enabled_count) > 0;
}
static rt_err_t regulator_set_voltage(struct rt_regulator_node *reg_np, int min_uvolt, int max_uvolt)
{
rt_err_t err = RT_EOK;
if (reg_np->ops->set_voltage)
{
union rt_regulator_notifier_args args;
RT_ASSERT(reg_np->ops->get_voltage != RT_NULL);
args.old_uvolt = reg_np->ops->get_voltage(reg_np);
args.min_uvolt = min_uvolt;
args.max_uvolt = max_uvolt;
err = regulator_notifier_call_chain(reg_np, RT_REGULATOR_MSG_VOLTAGE_CHANGE, &args);
if (!err)
{
err = reg_np->ops->set_voltage(reg_np, min_uvolt, max_uvolt);
}
if (err)
{
regulator_notifier_call_chain(reg_np, RT_REGULATOR_MSG_VOLTAGE_CHANGE_ERR,
(void *)(rt_base_t)args.old_uvolt);
}
}
if (!err && reg_np->parent)
{
err = regulator_set_voltage(reg_np->parent, min_uvolt, max_uvolt);
}
return err;
}
rt_bool_t rt_regulator_is_supported_voltage(struct rt_regulator *reg, int min_uvolt, int max_uvolt)
{
const struct rt_regulator_param *param;
RT_ASSERT(reg != RT_NULL);
param = reg->reg_np->param;
if (!param)
{
return RT_FALSE;
}
return param->min_uvolt <= min_uvolt && param->max_uvolt >= max_uvolt;
}
rt_err_t rt_regulator_set_voltage(struct rt_regulator *reg, int min_uvolt, int max_uvolt)
{
rt_err_t err;
if (!reg)
{
return -RT_EINVAL;
}
rt_hw_spin_lock(&_regulator_lock.lock);
err = regulator_set_voltage(reg->reg_np, min_uvolt, max_uvolt);
rt_hw_spin_unlock(&_regulator_lock.lock);
return err;
}
int rt_regulator_get_voltage(struct rt_regulator *reg)
{
int uvolt = RT_REGULATOR_UVOLT_INVALID;
struct rt_regulator_node *reg_np;
if (!reg)
{
return -RT_EINVAL;
}
rt_hw_spin_lock(&_regulator_lock.lock);
reg_np = reg->reg_np;
if (reg_np->ops->get_voltage)
{
uvolt = reg_np->ops->get_voltage(reg->reg_np);
}
else
{
uvolt = -RT_ENOSYS;
}
rt_hw_spin_unlock(&_regulator_lock.lock);
return uvolt;
}
rt_err_t rt_regulator_set_mode(struct rt_regulator *reg, rt_uint32_t mode)
{
rt_err_t err;
struct rt_regulator_node *reg_np;
if (!reg)
{
return -RT_EINVAL;
}
rt_hw_spin_lock(&_regulator_lock.lock);
reg_np = reg->reg_np;
if (reg_np->ops->set_mode)
{
err = reg_np->ops->set_mode(reg_np, mode);
}
else
{
err = -RT_ENOSYS;
}
rt_hw_spin_unlock(&_regulator_lock.lock);
return err;
}
rt_int32_t rt_regulator_get_mode(struct rt_regulator *reg)
{
rt_int32_t mode;
struct rt_regulator_node *reg_np;
if (!reg)
{
return -RT_EINVAL;
}
rt_hw_spin_lock(&_regulator_lock.lock);
reg_np = reg->reg_np;
if (reg_np->ops->get_mode)
{
mode = reg_np->ops->get_mode(reg_np);
}
else
{
mode = -RT_ENOSYS;
}
rt_hw_spin_unlock(&_regulator_lock.lock);
return mode;
}
static void regulator_check_parent(struct rt_regulator_node *reg_np)
{
if (reg_np->parent)
{
return;
}
else
{
#ifdef RT_USING_OFW
rt_phandle parent_phandle = 0;
struct rt_ofw_node *np = reg_np->dev->ofw_node;
while (np)
{
if (rt_ofw_prop_read_u32(np, "vin-supply", &parent_phandle))
{
break;
}
if (!(np = rt_ofw_find_node_by_phandle(parent_phandle)))
{
break;
}
if (!(reg_np->parent = rt_ofw_data(np)))
{
LOG_W("%s parent ofw node = %s not init",
reg_np->supply_name, rt_ofw_node_full_name(np));
rt_ofw_node_put(np);
break;
}
rt_list_insert_after(&reg_np->parent->children_nodes, &reg_np->list);
rt_ofw_node_put(np);
}
#endif
}
}
struct rt_regulator *rt_regulator_get(struct rt_device *dev, const char *id)
{
struct rt_regulator *reg = RT_NULL;
struct rt_regulator_node *reg_np = RT_NULL;
if (!dev || !id)
{
reg = rt_err_ptr(-RT_EINVAL);
goto _end;
}
#ifdef RT_USING_OFW
if (dev->ofw_node)
{
rt_phandle supply_phandle;
struct rt_ofw_node *np = dev->ofw_node;
char supply_name[64];
rt_snprintf(supply_name, sizeof(supply_name), "%s-supply", id);
if (rt_ofw_prop_read_u32(np, supply_name, &supply_phandle))
{
goto _end;
}
if (!(np = rt_ofw_find_node_by_phandle(supply_phandle)))
{
reg = rt_err_ptr(-RT_EIO);
goto _end;
}
if (!rt_ofw_data(np))
{
rt_platform_ofw_request(np);
}
reg_np = rt_ofw_data(np);
rt_ofw_node_put(np);
}
#endif
if (!reg_np)
{
reg = rt_err_ptr(-RT_ENOSYS);
goto _end;
}
rt_hw_spin_lock(&_regulator_lock.lock);
regulator_check_parent(reg_np);
rt_hw_spin_unlock(&_regulator_lock.lock);
reg = rt_calloc(1, sizeof(*reg));
if (!reg)
{
reg = rt_err_ptr(-RT_ENOMEM);
goto _end;
}
reg->reg_np = reg_np;
rt_ref_get(&reg_np->ref);
_end:
return reg;
}
static void regulator_release(struct rt_ref *r)
{
struct rt_regulator_node *reg_np = rt_container_of(r, struct rt_regulator_node, ref);
rt_regulator_unregister(reg_np);
}
void rt_regulator_put(struct rt_regulator *reg)
{
if (!reg)
{
return;
}
rt_ref_put(&reg->reg_np->ref, &regulator_release);
rt_free(reg);
}
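regulator_notifier_call_chain() above walks the registered notifiers in order and stops early when one reports -RT_EIO. A minimal standalone model of that traversal (hypothetical names; the literal -5 stands in for the error sentinel):

```c
#include <assert.h>

#define CHAIN_EIO (-5)

typedef int (*notifier_cb)(int msg);

/* Walk the callbacks in order; stop early on the error sentinel,
 * mirroring regulator_notifier_call_chain(). */
static int call_chain(notifier_cb *cbs, int nr, int msg)
{
    int err = 0;

    for (int i = 0; i < nr; ++i)
    {
        err = cbs[i](msg);
        if (err == CHAIN_EIO)
            break;
    }
    return err;
}

/* Sample callbacks for exercising the chain. */
static int cb_ok(int msg)   { (void)msg; return 0; }
static int cb_fail(int msg) { (void)msg; return CHAIN_EIO; }
```

Note that other (non-sentinel) error codes do not abort the walk; the last callback's result is what the caller sees.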


@ -0,0 +1,59 @@
/*
* Copyright (c) 2006-2023, RT-Thread Development Team
*
* SPDX-License-Identifier: Apache-2.0
*
* Change Logs:
* Date Author Notes
* 2023-09-23 GuEe-GUI first version
*/
#include "regulator_dm.h"
#ifdef RT_USING_OFW
rt_err_t regulator_ofw_parse(struct rt_ofw_node *np, struct rt_regulator_param *param)
{
rt_uint32_t pval;
param->name = rt_ofw_prop_read_raw(np, "regulator-name", RT_NULL);
if (!rt_ofw_prop_read_u32(np, "regulator-min-microvolt", &pval))
{
param->min_uvolt = pval;
}
if (!rt_ofw_prop_read_u32(np, "regulator-max-microvolt", &pval))
{
param->max_uvolt = pval;
}
if (!rt_ofw_prop_read_u32(np, "regulator-min-microamp", &pval))
{
param->min_uamp = pval;
}
if (!rt_ofw_prop_read_u32(np, "regulator-max-microamp", &pval))
{
param->max_uamp = pval;
}
if (!rt_ofw_prop_read_u32(np, "regulator-ramp-delay", &pval))
{
param->ramp_delay = pval;
}
if (!rt_ofw_prop_read_u32(np, "regulator-enable-ramp-delay", &pval))
{
param->enable_delay = pval;
}
param->enable_active_high = rt_ofw_prop_read_bool(np, "enable-active-high");
param->boot_on = rt_ofw_prop_read_bool(np, "regulator-boot-on");
param->always_on = rt_ofw_prop_read_bool(np, "regulator-always-on");
param->soft_start = rt_ofw_prop_read_bool(np, "regulator-soft-start");
param->pull_down = rt_ofw_prop_read_bool(np, "regulator-pull-down");
param->over_current_protection = rt_ofw_prop_read_bool(np, "regulator-over-current-protection");
return RT_EOK;
}
#endif /* RT_USING_OFW */


@ -0,0 +1,26 @@
/*
* Copyright (c) 2006-2023, RT-Thread Development Team
*
* SPDX-License-Identifier: Apache-2.0
*
* Change Logs:
* Date Author Notes
* 2023-09-23 GuEe-GUI first version
*/
#ifndef __REGULATOR_DM_H__
#define __REGULATOR_DM_H__
#include <rtthread.h>
#include <rtdevice.h>
#ifdef RT_USING_OFW
rt_err_t regulator_ofw_parse(struct rt_ofw_node *np, struct rt_regulator_param *param);
#else
rt_inline rt_err_t regulator_ofw_parse(struct rt_ofw_node *np, struct rt_regulator_param *param)
{
return RT_EOK;
}
#endif /* RT_USING_OFW */
#endif /* __REGULATOR_DM_H__ */


@ -35,6 +35,9 @@ if GetDepend('RT_USING_SFUD'):
elif rtconfig.PLATFORM in ['armcc']:
LOCAL_CFLAGS += ' --c99'
if GetDepend('RT_USING_DM'):
src += ['dev_spi_dm.c', 'dev_spi_bus.c']
src += src_device
group = DefineGroup('DeviceDrivers', src, depend = ['RT_USING_SPI'], CPPPATH = CPPPATH, LOCAL_CFLAGS = LOCAL_CFLAGS)


@ -10,6 +10,14 @@
#include <rtthread.h>
#include "drivers/dev_spi.h"
#define DBG_TAG "spi.dev"
#define DBG_LVL DBG_INFO
#include <rtdbg.h>
#ifdef RT_USING_DM
#include "dev_spi_dm.h"
#endif
/* SPI bus device interface, compatible with RT-Thread 0.3.x/1.0.x */
static rt_ssize_t _spi_bus_device_read(rt_device_t dev,
rt_off_t pos,
@ -155,3 +163,66 @@ rt_err_t rt_spidev_device_init(struct rt_spi_device *dev, const char *name)
/* register to device manager */
return rt_device_register(device, name, RT_DEVICE_FLAG_RDWR);
}
#ifdef RT_USING_DM
static rt_err_t spidev_probe(struct rt_spi_device *spi_dev)
{
const char *bus_name;
struct rt_device *dev = &spi_dev->parent;
if (spi_dev->parent.ofw_node)
{
if (rt_dm_dev_prop_index_of_string(dev, "compatible", "spidev") >= 0)
{
LOG_E("spidev is not supported in OFW");
return -RT_EINVAL;
}
}
bus_name = rt_dm_dev_get_name(&spi_dev->bus->parent);
rt_dm_dev_set_name(dev, "%s_%d", bus_name, spi_dev->chip_select);
return RT_EOK;
}
static const struct rt_spi_device_id spidev_ids[] =
{
{ .name = "dh2228fv" },
{ .name = "ltc2488" },
{ .name = "sx1301" },
{ .name = "bk4" },
{ .name = "dhcom-board" },
{ .name = "m53cpld" },
{ .name = "spi-petra" },
{ .name = "spi-authenta" },
{ .name = "em3581" },
{ .name = "si3210" },
{ /* sentinel */ },
};
static const struct rt_ofw_node_id spidev_ofw_ids[] =
{
{ .compatible = "cisco,spi-petra" },
{ .compatible = "dh,dhcom-board" },
{ .compatible = "lineartechnology,ltc2488" },
{ .compatible = "lwn,bk4" },
{ .compatible = "menlo,m53cpld" },
{ .compatible = "micron,spi-authenta" },
{ .compatible = "rohm,dh2228fv" },
{ .compatible = "semtech,sx1301" },
{ .compatible = "silabs,em3581" },
{ .compatible = "silabs,si3210" },
{ .compatible = "rockchip,spidev" },
{ /* sentinel */ },
};
static struct rt_spi_driver spidev_driver =
{
.ids = spidev_ids,
.ofw_ids = spidev_ofw_ids,
.probe = spidev_probe,
};
RT_SPI_DRIVER_EXPORT(spidev_driver);
#endif /* RT_USING_DM */


@ -0,0 +1,203 @@
/*
* Copyright (c) 2006-2022, RT-Thread Development Team
*
* SPDX-License-Identifier: Apache-2.0
*
* Change Logs:
* Date Author Notes
* 2022-12-06 GuEe-GUI first version
*/
#include "dev_spi_dm.h"
#define DBG_TAG "spi.bus"
#define DBG_LVL DBG_INFO
#include <rtdbg.h>
extern rt_err_t rt_spidev_device_init(struct rt_spi_device *dev, const char *name);
static struct rt_bus spi_bus;
void spi_bus_scan_devices(struct rt_spi_bus *bus)
{
#ifdef RT_USING_OFW
if (bus->parent.ofw_node)
{
struct rt_ofw_node *np = bus->parent.ofw_node, *spi_dev_np;
rt_ofw_foreach_available_child_node(np, spi_dev_np)
{
rt_uint64_t reg_offset;
struct rt_spi_device *spi_dev;
if (!rt_ofw_prop_read_bool(spi_dev_np, "compatible"))
{
continue;
}
spi_dev = rt_calloc(1, sizeof(*spi_dev));
if (!spi_dev)
{
rt_ofw_node_put(spi_dev_np);
LOG_E("No memory to create spi device: %s",
rt_ofw_node_full_name(spi_dev_np));
return;
}
rt_ofw_get_address(spi_dev_np, 0, &reg_offset, RT_NULL);
spi_dev->parent.ofw_node = spi_dev_np;
spi_dev->parent.type = RT_Device_Class_Unknown;
spi_dev->name = rt_ofw_node_name(spi_dev_np);
spi_dev->bus = bus;
rt_dm_dev_set_name(&spi_dev->parent, rt_ofw_node_full_name(spi_dev_np));
if (spi_device_ofw_parse(spi_dev))
{
continue;
}
rt_spi_device_register(spi_dev);
}
}
#endif /* RT_USING_OFW */
}
rt_err_t rt_spi_driver_register(struct rt_spi_driver *driver)
{
RT_ASSERT(driver != RT_NULL);
driver->parent.bus = &spi_bus;
return rt_driver_register(&driver->parent);
}
rt_err_t rt_spi_device_register(struct rt_spi_device *device)
{
RT_ASSERT(device != RT_NULL);
return rt_bus_add_device(&spi_bus, &device->parent);
}
static rt_bool_t spi_match(rt_driver_t drv, rt_device_t dev)
{
const struct rt_spi_device_id *id;
struct rt_spi_driver *driver = rt_container_of(drv, struct rt_spi_driver, parent);
struct rt_spi_device *device = rt_container_of(dev, struct rt_spi_device, parent);
if ((id = driver->ids))
{
for (; id->name[0]; ++id)
{
if (!rt_strcmp(id->name, device->name))
{
device->id = id;
device->ofw_id = RT_NULL;
return RT_TRUE;
}
}
}
#ifdef RT_USING_OFW
device->ofw_id = rt_ofw_node_match(device->parent.ofw_node, driver->ofw_ids);
if (device->ofw_id)
{
device->id = RT_NULL;
return RT_TRUE;
}
#endif
return RT_FALSE;
}
static rt_err_t spi_probe(rt_device_t dev)
{
rt_err_t err;
struct rt_spi_bus *bus;
struct rt_spi_driver *driver = rt_container_of(dev->drv, struct rt_spi_driver, parent);
struct rt_spi_device *device = rt_container_of(dev, struct rt_spi_device, parent);
if (!device->bus)
{
return -RT_EINVAL;
}
err = driver->probe(device);
if (err)
{
return err;
}
bus = device->bus;
if (bus->pins)
{
device->cs_pin = bus->pins[device->chip_select];
rt_pin_mode(device->cs_pin, PIN_MODE_OUTPUT);
}
else
{
device->cs_pin = PIN_NONE;
}
/* The driver did not register the SPI device to the system */
if (device->parent.type == RT_Device_Class_Unknown)
{
rt_spidev_device_init(device, rt_dm_dev_get_name(&device->parent));
}
return err;
}
static rt_err_t spi_remove(rt_device_t dev)
{
struct rt_spi_driver *driver = rt_container_of(dev->drv, struct rt_spi_driver, parent);
struct rt_spi_device *device = rt_container_of(dev, struct rt_spi_device, parent);
if (driver && driver->remove)
{
driver->remove(device);
}
rt_free(device);
return RT_EOK;
}
static rt_err_t spi_shutdown(rt_device_t dev)
{
struct rt_spi_driver *driver = rt_container_of(dev->drv, struct rt_spi_driver, parent);
struct rt_spi_device *device = rt_container_of(dev, struct rt_spi_device, parent);
if (driver && driver->shutdown)
{
driver->shutdown(device);
}
rt_free(device);
return RT_EOK;
}
static struct rt_bus spi_bus =
{
.name = "spi",
.match = spi_match,
.probe = spi_probe,
.remove = spi_remove,
.shutdown = spi_shutdown,
};
static int spi_bus_init(void)
{
rt_bus_register(&spi_bus);
return 0;
}
INIT_CORE_EXPORT(spi_bus_init);
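spi_match() above first walks the driver's name id table (terminated by an empty-name sentinel) before falling back to OFW compatible matching. The table walk can be modeled in isolation (hypothetical names, not the bus API):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical id entry; the real rt_spi_device_id carries a name too. */
struct spi_id { const char *name; };

/* Walk the id table until the empty-name sentinel; return the index of
 * the entry matching dev_name, or -1 when none matches. */
static int match_spi_id(const struct spi_id *ids, const char *dev_name)
{
    for (int i = 0; ids[i].name && ids[i].name[0]; ++i)
    {
        if (strcmp(ids[i].name, dev_name) == 0)
            return i;
    }
    return -1;
}
```

In the real bus code, a hit stores the id on the device and clears ofw_id, so probe() can later tell which table matched.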


@ -19,6 +19,10 @@
#define DBG_LVL DBG_INFO
#include <rtdbg.h>
#ifdef RT_USING_DM
#include "dev_spi_dm.h"
#endif
extern rt_err_t rt_spi_bus_device_init(struct rt_spi_bus *bus, const char *name);
extern rt_err_t rt_spidev_device_init(struct rt_spi_device *dev, const char *name);
@ -41,6 +45,46 @@ rt_err_t rt_spi_bus_register(struct rt_spi_bus *bus,
/* set bus mode */
bus->mode = RT_SPI_BUS_MODE_SPI;
#ifdef RT_USING_DM
if (!bus->slave)
{
int pin_count = rt_pin_get_named_pin_count(&bus->parent, "cs");
if (pin_count > 0)
{
pin_count = rt_max_t(int, pin_count, bus->num_chipselect);
bus->pins = rt_malloc(sizeof(bus->pins[0]) * pin_count);
if (!bus->pins)
{
rt_device_unregister(&bus->parent);
return -RT_ENOMEM;
}
for (int i = 0; i < pin_count; ++i)
{
bus->pins[i] = rt_pin_get_named_pin(&bus->parent, "cs", i,
RT_NULL, RT_NULL);
}
}
else if (pin_count == 0)
{
bus->pins = RT_NULL;
}
else
{
result = pin_count;
LOG_E("CS PIN find error = %s", rt_strerror(result));
rt_device_unregister(&bus->parent);
return result;
}
}
spi_bus_scan_devices(bus);
#endif
return RT_EOK;
}


@ -0,0 +1,106 @@
/*
* Copyright (c) 2006-2022, RT-Thread Development Team
*
* SPDX-License-Identifier: Apache-2.0
*
* Change Logs:
* Date Author Notes
* 2022-12-06 GuEe-GUI first version
*/
#include "dev_spi_dm.h"
#define DBG_TAG "spi.dm"
#define DBG_LVL DBG_INFO
#include <rtdbg.h>
#ifdef RT_USING_OFW
static void ofw_parse_delay(struct rt_ofw_node *np, struct rt_spi_delay *delay,
const char *prop)
{
rt_uint32_t value;
if (!rt_ofw_prop_read_u32(np, prop, &value))
{
if (value > RT_UINT16_MAX)
{
delay->value = RT_DIV_ROUND_UP(value, 1000);
delay->unit = RT_SPI_DELAY_UNIT_USECS;
}
else
{
delay->value = value;
delay->unit = RT_SPI_DELAY_UNIT_NSECS;
}
}
}
rt_err_t spi_device_ofw_parse(struct rt_spi_device *spi_dev)
{
rt_err_t err;
rt_uint32_t value;
struct rt_spi_bus *spi_bus = spi_dev->bus;
struct rt_ofw_node *np = spi_dev->parent.ofw_node;
struct rt_spi_configuration *conf = &spi_dev->config;
if (rt_ofw_prop_read_bool(np, "spi-cpha"))
{
conf->mode |= RT_SPI_CPHA;
}
if (rt_ofw_prop_read_bool(np, "spi-cpol"))
{
conf->mode |= RT_SPI_CPOL;
}
if (rt_ofw_prop_read_bool(np, "spi-3wire"))
{
conf->mode |= RT_SPI_3WIRE;
}
if (rt_ofw_prop_read_bool(np, "spi-lsb-first"))
{
conf->mode |= RT_SPI_LSB;
}
if (rt_ofw_prop_read_bool(np, "spi-cs-high"))
{
conf->mode |= RT_SPI_CS_HIGH;
}
value = 1;
rt_ofw_prop_read_u32(np, "spi-tx-bus-width", &value);
conf->data_width_tx = value;
value = 1;
rt_ofw_prop_read_u32(np, "spi-rx-bus-width", &value);
conf->data_width_rx = value;
if (spi_bus->slave)
{
if (!rt_ofw_node_tag_equ(np, "slave"))
{
LOG_E("Invalid SPI device = %s", rt_ofw_node_full_name(np));
return -RT_EINVAL;
}
return RT_EOK;
}
if ((err = rt_ofw_prop_read_u32(np, "reg", &value)))
{
LOG_E("Failed to find 'reg' property");
return err;
}
spi_dev->chip_select = value;
if (!rt_ofw_prop_read_u32(np, "spi-max-frequency", &value))
{
conf->max_hz = value;
}
ofw_parse_delay(np, &spi_dev->cs_setup, "spi-cs-setup-delay-ns");
ofw_parse_delay(np, &spi_dev->cs_hold, "spi-cs-hold-delay-ns");
ofw_parse_delay(np, &spi_dev->cs_inactive, "spi-cs-inactive-delay-ns");
return RT_EOK;
}
#endif /* RT_USING_OFW */
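ofw_parse_delay() above keeps values that fit in 16 bits as nanoseconds and converts anything larger to microseconds, rounding up. A standalone model of that conversion (hypothetical names; 65535 stands in for RT_UINT16_MAX and the division mirrors RT_DIV_ROUND_UP):

```c
#include <assert.h>

enum delay_unit { UNIT_NSECS, UNIT_USECS };

struct delay { unsigned value; enum delay_unit unit; };

/* Values up to 65535 stay in nanoseconds; larger values become
 * microseconds, rounded up, mirroring ofw_parse_delay(). */
static struct delay parse_delay(unsigned ns)
{
    struct delay d;

    if (ns > 65535)
    {
        d.value = (ns + 999) / 1000;   /* RT_DIV_ROUND_UP(ns, 1000) */
        d.unit = UNIT_USECS;
    }
    else
    {
        d.value = ns;
        d.unit = UNIT_NSECS;
    }
    return d;
}
```

Keeping small values in nanoseconds preserves precision for short CS setup/hold delays while still covering long delays in a 16-bit-friendly field.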


@ -0,0 +1,29 @@
/*
* Copyright (c) 2006-2022, RT-Thread Development Team
*
* SPDX-License-Identifier: Apache-2.0
*
* Change Logs:
* Date Author Notes
* 2022-11-26 GuEe-GUI first version
*/
#ifndef __DEV_SPI_DM_H__
#define __DEV_SPI_DM_H__
#include <rthw.h>
#include <rtthread.h>
#include <rtdevice.h>
#ifdef RT_USING_OFW
rt_err_t spi_device_ofw_parse(struct rt_spi_device *spi_dev);
#else
rt_inline rt_err_t spi_device_ofw_parse(struct rt_spi_device *spi_dev)
{
return RT_EOK;
}
#endif /* RT_USING_OFW */
void spi_bus_scan_devices(struct rt_spi_bus *bus);
#endif /* __DEV_SPI_DM_H__ */


@ -759,7 +759,7 @@ static void at_connect_notice_cb(struct at_socket *sock, at_socket_evt_t event,
}
new_sock = at_get_socket(new_socket);
new_sock->state = AT_SOCKET_CONNECT;
sscanf(buff, "SOCKET:%d", &base_socket);
rt_sscanf(buff, "SOCKET:%d", &base_socket);
LOG_D("ACCEPT BASE SOCKET: %d", base_socket);
new_sock->user_data = (void *)base_socket;
@ -985,7 +985,7 @@ int at_accept(int socket, struct sockaddr *name, socklen_t *namelen)
at_do_event_changes(sock, AT_EVENT_RECV, RT_FALSE);
}
sscanf(&receive_buff[0], "SOCKET:%d", &new_socket);
rt_sscanf(&receive_buff[0], "SOCKET:%d", &new_socket);
new_sock = at_get_socket(new_socket);
ip4_addr_set_any(&remote_addr);
ipaddr_port_to_socketaddr(name, &remote_addr, &remote_port);


@ -1,5 +1,5 @@
/*
* Copyright (c) 2006-2022, RT-Thread Development Team
* Copyright (c) 2006-2024 RT-Thread Development Team
*
* SPDX-License-Identifier: Apache-2.0
*
@ -1262,6 +1262,24 @@ MSH_CMD_EXPORT(ulog_filter, Show ulog filter settings);
#endif /* RT_USING_FINSH */
#endif /* ULOG_USING_FILTER */
/**
* @brief register the backend device into the ulog.
*
* @param backend Backend device handle, a pointer to a "struct ulog_backend" obj.
* @param name Backend device name.
* @param support_color Whether it supports color logs.
* @return rt_err_t - return 0 on success.
*
* @note - This function registers the backend device into ulog;
* make sure the function members in the backend structure are set before registration.
* - about struct ulog_backend:
* 1. The name and support_color properties can be passed in through the ulog_backend_register() function.
* 2. output is the back-end specific output function, and all backends must implement the interface.
* 3. init/deinit is optional, init is called at register, and deinit is called at unregister or ulog_deinit.
* 4. flush is also optional, and some internal output cached backends need to implement this interface.
* For example, some file systems with RAM cache. The flush of the backend is usually called by
* ulog_flush in the case of an exception such as assertion or hardfault.
*/
rt_err_t ulog_backend_register(ulog_backend_t backend, const char *name, rt_bool_t support_color)
{
rt_base_t level;
@ -1287,6 +1305,13 @@ rt_err_t ulog_backend_register(ulog_backend_t backend, const char *name, rt_bool
return RT_EOK;
}
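The backend contract documented above can be modeled in a few lines of plain C: output() is mandatory and registration fails without it (a hypothetical miniature for illustration, not the ulog API):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical miniature of the backend contract: every backend must
 * provide output(); init/deinit/flush would be optional extras. */
struct mini_backend {
    const char *name;
    void (*output)(const char *log, size_t len);
};

#define MAX_BACKENDS 4
static struct mini_backend *backends[MAX_BACKENDS];

static char last_log[64];

/* A trivial "console" backend that records the last log line. */
static void console_output(const char *log, size_t len)
{
    if (len >= sizeof(last_log))
        len = sizeof(last_log) - 1;
    memcpy(last_log, log, len);
    last_log[len] = '\0';
}

/* Registration fails when the mandatory output() is missing. */
static int mini_backend_register(struct mini_backend *b)
{
    if (!b || !b->output)
        return -1;
    for (int i = 0; i < MAX_BACKENDS; ++i)
    {
        if (!backends[i])
        {
            backends[i] = b;
            return 0;
        }
    }
    return -1;
}

/* Broadcast one log line to every registered backend. */
static void mini_log(const char *msg)
{
    for (int i = 0; i < MAX_BACKENDS; ++i)
    {
        if (backends[i])
            backends[i]->output(msg, strlen(msg));
    }
}
```

The real ulog additionally takes a per-backend lock level and a color flag, and calls init() at register time when the backend provides one.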
/**
* @brief unregister a backend device that has already been registered.
*
* @param backend Backend device handle
* @return rt_err_t - return 0 on success.
* @note deinit function will be called at unregister.
*/
rt_err_t ulog_backend_unregister(ulog_backend_t backend)
{
rt_base_t level;
@ -1460,6 +1485,14 @@ void ulog_flush(void)
}
}
/**
* @brief ulog initialization
*
* @return int return 0 on success, or -5 when memory is insufficient.
*
* @note This function must be called to complete ulog initialization before using ulog.
* This function will also be called automatically if component auto-initialization is turned on.
*/
int ulog_init(void)
{
if (ulog.init_ok)
@ -1518,6 +1551,11 @@ int ulog_async_init(void)
INIT_PREV_EXPORT(ulog_async_init);
#endif /* ULOG_USING_ASYNC_OUTPUT */
/**
* @brief ulog deinitialization
*
* @note This deinit releases resources and can be called when ulog is no longer used.
*/
void ulog_deinit(void)
{
rt_slist_t *node;


@ -669,7 +669,7 @@ Configuration is mainly done by modifying the file under project directory - rtc
/* When FinSH is enabled: define the number of history command lines. */
#define FINSH_HISTORY_LINES 5
/* When FinSH is enabled: define this macro to enable the Tab key; if not defined, it is disabled. */
/* When FinSH is enabled: define this macro to use the symbol table in FinSH; if not defined, it is disabled. */
#define FINSH_USING_SYMTAB
/* When FinSH is enabled: define the priority of the thread. */


@ -421,7 +421,7 @@ The reference configuration example in rtconfig.h is as follows, and can be conf
/* Record 5 lines of history commands */
#define FINSH_HISTORY_LINES 5
/* Enable the use of the Tab key */
/* Enable the use of the symbol table */
#define FINSH_USING_SYMTAB
/* Turn on description */
#define FINSH_USING_DESCRIPTION


@ -556,7 +556,7 @@ int main(int argc, char *argv[])
{
if (argc == 2)
{
sscanf(argv[1], "%d", &loop_count);
rt_sscanf(argv[1], "%d", &loop_count);
}
else
{


@ -16,6 +16,8 @@ if rtconfig.CPU in common64_arch :
else :
group += SConscript(os.path.join('common', 'SConscript'))
group += SConscript(os.path.join('vector', 'SConscript'))
# cpu porting code files
if 'VENDOR' in vars(rtconfig) and rtconfig.VENDOR != '':
group = group + SConscript(os.path.join(rtconfig.VENDOR, rtconfig.CPU, 'SConscript'))


@ -14,6 +14,8 @@
#include <rtconfig.h>
#include <opcode.h>
#ifndef __ASSEMBLY__
#ifdef RT_USING_SMP
typedef union {
unsigned long slock;
@ -24,8 +26,7 @@ typedef union {
} rt_hw_spinlock_t;
#endif
#ifndef __ASSEMBLY__
#include <rtdef.h>
#include <rtcompiler.h>
rt_inline void rt_hw_dsb(void)
{


@ -5,9 +5,6 @@ cwd = GetCurrentDir()
src = Glob('*.c') + Glob('*.cpp') + Glob('*_gcc.S')
CPPPATH = [cwd]
if GetDepend('ARCH_RISCV_VECTOR'):
CPPPATH += [cwd + '/../../vector/rvv-1.0']
group = DefineGroup('libcpu', src, depend = [''], CPPPATH = CPPPATH)
Return('group')


@ -0,0 +1,12 @@
# RT-Thread building script for component
from building import *
cwd = GetCurrentDir()
src = []
CPPPATH = []
CPPPATH += [cwd + '/rvv-1.0']
group = DefineGroup('libcpu', src, depend = ['ARCH_RISCV_VECTOR'], CPPPATH = CPPPATH)
Return('group')


@ -5,9 +5,6 @@ cwd = GetCurrentDir()
src = Glob('*.c') + Glob('*.cpp') + Glob('*_gcc.S')
CPPPATH = [cwd]
if not GetDepend('ARCH_RISCV_VECTOR'):
SrcRemove(src, ['vector_gcc.S'])
group = DefineGroup('libcpu', src, depend = [''], CPPPATH = CPPPATH)
Return('group')


@ -1,111 +0,0 @@
/*
* Copyright (c) 2006-2024, RT-Thread Development Team
*
* SPDX-License-Identifier: Apache-2.0
*
* Change Logs:
* Date Author Notes
* 2022-10-10 RT-Thread the first version,
* compatible to riscv-v-spec-1.0
*/
#ifndef __RVV_CONTEXT_H__
#define __RVV_CONTEXT_H__
#include "cpuport.h"
#include "encoding.h"
#if defined(ARCH_VECTOR_VLEN_128)
#define CTX_VECTOR_REGS 64
#elif defined(ARCH_VECTOR_VLEN_256)
#define CTX_VECTOR_REGS 128
#else
#error "No supported VLEN"
#endif /* VLEN */
#define CTX_VECTOR_REG_NR (CTX_VECTOR_REGS + 4)
/**
* ==================================
* VECTOR EXTENSION
* ==================================
*/
#define VEC_FRAME_VSTART (0 * REGBYTES)
#define VEC_FRAME_VTYPE (1 * REGBYTES)
#define VEC_FRAME_VL (2 * REGBYTES)
#define VEC_FRAME_VCSR (3 * REGBYTES)
#define VEC_FRAME_V0 (4 * REGBYTES)
.macro GET_VEC_FRAME_LEN, xreg
csrr \xreg, vlenb
slli \xreg, \xreg, 5
addi \xreg, \xreg, 4 * REGBYTES
.endm
/**
* @brief save vector extension hardware state
*
* @param dst register storing bottom of storage block
*
*/
.macro SAVE_VECTOR, dst
mv t1, \dst
csrr t0, vstart
STORE t0, VEC_FRAME_VSTART(t1)
csrr t0, vtype
STORE t0, VEC_FRAME_VTYPE(t1)
csrr t0, vl
STORE t0, VEC_FRAME_VL(t1)
csrr t0, vcsr
STORE t0, VEC_FRAME_VCSR(t1)
addi t1, t1, VEC_FRAME_V0
    // configure vector settings;
    // t2 is updated to the length of a vector register group in bytes
VEC_CONFIG_SETVLI(t2, x0, VEC_IMM_SEW_8, VEC_IMM_LMUL_8)
vse8.v v0, (t1)
add t1, t1, t2
vse8.v v8, (t1)
add t1, t1, t2
vse8.v v16, (t1)
add t1, t1, t2
vse8.v v24, (t1)
.endm
/**
* @brief restore vector extension hardware states
*
* @param dst register storing bottom of storage block
*
*/
.macro RESTORE_VECTOR, dst
    // restore vector registers first, since doing so modifies the vector state CSRs
mv t0, \dst
addi t1, t0, VEC_FRAME_V0
VEC_CONFIG_SETVLI(t2, x0, VEC_IMM_SEW_8, VEC_IMM_LMUL_8)
vle8.v v0, (t1)
add t1, t1, t2
vle8.v v8, (t1)
add t1, t1, t2
vle8.v v16, (t1)
add t1, t1, t2
vle8.v v24, (t1)
mv t1, t0
LOAD t0, VEC_FRAME_VSTART(t1)
csrw vstart, t0
LOAD t0, VEC_FRAME_VCSR(t1)
csrw vcsr, t0
LOAD t0, VEC_FRAME_VTYPE(t1)
LOAD t3, VEC_FRAME_VL(t1)
VEC_CONFIG_SET_VL_VTYPE(t3, t0)
.endm
#endif /* __RVV_CONTEXT_H__ */


@ -1,51 +0,0 @@
/*
* Copyright (c) 2006-2022, RT-Thread Development Team
*
* SPDX-License-Identifier: Apache-2.0
*
* Change Logs:
* Date Author Notes
* 2022-10-10 RT-Thread the first version,
* compatible to riscv-v-spec-1.0
*/
#ifndef __VECTOR_ENCODING_H__
#define __VECTOR_ENCODING_H__
/* mstatus/sstatus */
#define MSTATUS_VS 0x00000600
#define SSTATUS_VS 0x00000600 /* Vector Status */
#define SSTATUS_VS_INITIAL 0x00000200
#define SSTATUS_VS_CLEAN 0x00000400
#define SSTATUS_VS_DIRTY 0x00000600
/**
* assembler names used for vset{i}vli vtypei immediate
*/
#define VEC_IMM_SEW_8 e8
#define VEC_IMM_SEW_16 e16
#define VEC_IMM_SEW_32 e32
#define VEC_IMM_SEW_64 e64
/* group setting, encoding by multiplier */
#define VEC_IMM_LMUL_F8 mf8
#define VEC_IMM_LMUL_F4 mf4
#define VEC_IMM_LMUL_F2 mf2
#define VEC_IMM_LMUL_1 m1
#define VEC_IMM_LMUL_2 m2
#define VEC_IMM_LMUL_4 m4
#define VEC_IMM_LMUL_8 m8
/* TAIL & MASK agnostic bits */
#define VEC_IMM_TAIL_AGNOSTIC ta
#define VEC_IMM_MASK_AGNOSTIC ma
#define VEC_IMM_TAMA VEC_IMM_TAIL_AGNOSTIC, VEC_IMM_MASK_AGNOSTIC
#define VEC_IMM_TAMU VEC_IMM_TAIL_AGNOSTIC
#define VEC_IMM_TUMA VEC_IMM_MASK_AGNOSTIC
/**
* configuration setting instruction
*/
#define VEC_CONFIG_SETVLI(xVl, xAvl, vtype...) vsetvli xVl, xAvl, ##vtype
#define VEC_CONFIG_SET_VL_VTYPE(xVl, xVtype) vsetvl x0, xVl, xVtype
#endif /* __VECTOR_ENCODING_H__ */


@ -1,45 +0,0 @@
/*
* Copyright (c) 2006-2024, RT-Thread Development Team
*
* SPDX-License-Identifier: Apache-2.0
*
* Change Logs:
* Date Author Notes
* 2018/10/28 Bernard The unify RISC-V porting implementation
* 2018/12/27 Jesven Add SMP support
* 2021/02/02 lizhirui Add userspace support
* 2022/10/22 Shell Support User mode RVV;
* Trimming process switch context
* 2024/09/01 Shell Separated vector ctx from the generic
*/
#include "cpuport.h"
#include "stackframe.h"
/**
* @param a0 pointer to frame bottom
*/
.global rt_hw_vector_ctx_save
rt_hw_vector_ctx_save:
SAVE_VECTOR a0
ret
/**
* @param a0 pointer to frame bottom
*/
.global rt_hw_vector_ctx_restore
rt_hw_vector_ctx_restore:
RESTORE_VECTOR a0
ret
.global rt_hw_disable_vector
rt_hw_disable_vector:
li t0, SSTATUS_VS
csrc sstatus, t0
ret
.global rt_hw_enable_vector
rt_hw_enable_vector:
li t0, SSTATUS_VS
csrs sstatus, t0
ret


@ -90,7 +90,7 @@ void rt_components_board_init(void)
const struct rt_init_desc *desc;
for (desc = &__rt_init_desc_rti_board_start; desc < &__rt_init_desc_rti_board_end; desc ++)
{
rt_kprintf("initialize %s", desc->fn_name);
rt_kprintf("initialize %s\n", desc->fn_name);
result = desc->fn();
rt_kprintf(":%d done\n", result);
}
@ -116,7 +116,7 @@ void rt_components_init(void)
rt_kprintf("do components initialization.\n");
for (desc = &__rt_init_desc_rti_board_end; desc < &__rt_init_desc_rti_end; desc ++)
{
rt_kprintf("initialize %s", desc->fn_name);
rt_kprintf("initialize %s\n", desc->fn_name);
result = desc->fn();
rt_kprintf(":%d done\n", result);
}