diff --git a/documentation/README.md b/documentation/README.md
new file mode 100644
index 0000000000..6166cb3eab
--- /dev/null
+++ b/documentation/README.md
@@ -0,0 +1,136 @@
+# RT-Thread
+
+RT-Thread (Real-Time Thread) is an open source embedded real-time operating system released under the Apache License v2.0. It is highly scalable: from a nano kernel running on a tiny MCU, for example ARM Cortex-M0 or Cortex-M3/4/7, up to a feature-rich system running on MIPS32 and ARM Cortex-A; even the emerging open source RISC-V architecture is supported. RT-Thread can run either on single-core systems or on symmetric multi-core processor (SMP) systems.
+
+## Introduction
+
+RT-Thread has not only a real-time kernel, but also rich components. Its architecture is as follows:
+
+![RT-Thread system framework](figures/02Software_framework_diagram.png)
+
+- **Kernel**: It includes the preemptive multi-task real-time scheduler, and infrastructure such as semaphores, mutexes, mailboxes, message queues, signals, events, memory management, timer management, interrupt management, etc. It also includes libcpu/BSP (files related to chip porting/board support packages).
+- **Components**: These are software units on top of the RT-Thread kernel layer, such as the command line (FinSH), the device driver framework (Device Drivers), the network framework, the virtual file system (FAT, YAFFS, UFFS, ROM/RAM file system, etc.), the TCP/IP network protocol stack (lwIP), the libc/POSIX standard layer and so on. Generally, a software component is placed in a folder in the RT-Thread/components directory, and each software component is described by a *SConscript* file and added to the RT-Thread build system. When a software component is enabled in the system configuration, it will be compiled and linked into the final RT-Thread firmware.
+- **Packages**: These are middleware running on the RT-Thread IoT operating system platform and targeting different application fields. Packages consist of description information, source code or library files. They can be provided by RT-Thread, third-party developers and enterprises, and the license of each package is determined by its author. These packages are highly reusable and modular, which makes it much easier for application developers to build their desired application systems in the shortest time. For more package information, visit the [RT-Thread package repository](https://github.com/RT-Thread-packages).
+
+## License
+
+RT-Thread is open source software and has been licensed under the Apache License Version 2.0 since v3.1.1. License and copyright information can generally be seen at the beginning of the code:
+
+```
+/*
+ * Copyright (c) 2006-2018, RT-Thread Development Team
+ *
+ * SPDX-License-Identifier: Apache-2.0
+ */
+```
+
+To avoid possible future license conflicts, developers need to sign a Contributor License Agreement (CLA) when submitting a PR to RT-Thread.
+
+> Note: Because a BSP also contains code provided by the chip manufacturer, that part of the code follows the license provided by the chip manufacturer, such as STM32 HAL, NXP, Atmel, etc. Such code is usually only used on the chips of the corresponding manufacturer.
+
+## Supported Architectures
+
+The RT-Thread RTOS supports many architectures, covering the major architectures used in current applications. Architectures and chip manufacturers involved:
+
+- **ARM Cortex-M0/M0+**: manufacturers like ST
+- **ARM Cortex-M3**: manufacturers like ST, Winner Micro, MindMotion, etc.
+- **ARM Cortex-M4**:manufacturers like ST、Nuvton、NXP、GigaDevice、Realtek、Ambiq Micro, ect. +- **ARM Cortex-M7**:manufacturers like ST、NXP +- **ARM Cortex-M23**:manufacturers like GigaDevice +- **ARM Cortex-R4** +- **ARM Cortex-A8/A9**:manufacturers like NXP +- **ARM7**:manufacturers like Samsung +- **ARM9**:manufacturers like Allwinner、Xilinx 、GOKE +- **ARM11**:manufacturers like Fullhan +- **MIPS32**:manufacturers like loongson、Ingenic +- **RISC-V**:manufacturers like Hifive、Kendryte +- **ARC**:manufacturers like SYNOPSYS +- **DSP**:manufacturers like TI +- **C-Sky** +- **x86** + +## Supported IDE and Compiler + +The main IDE/compilers supported by RT-Thread are: + +- MDK KEIL +- IAR +- GCC + +Use Python-based [scons](http://www.scons.org) for command-line builds. + +# Source Code and Tools + +**Get the source code**: The source code of RT-Thread is hosted on Github, and click on the link to get the source code. + +- [Download RT-Thread source code](https://github.com/RT-Thread/rt-thread) + +**Get the Env Tool**: To better help developers, the RT-Thread team also provides Env tools (or Env scripts for Linux/MacOS). On Windows, Env tool is a development assistant tool launched by RT-Thread. It provides compiling and building environment, graphical system configuration and software package management functions for project projects based on RT-Thread operating system. Its built-in menuconfig provides a simple and easy-to-use configuration tailoring tool, which can tailor the kernel, components and software packages freely, so that the system can be built in the way of building blocks. + +- [Download Env Tool]() +- [User manual of Env](env/env.md) + +# Getting Started + +RT-Thread BSP can be compiled directly and downloaded to the corresponding development board for use. In addition, RT-Thread also provides qemu-vexpress-a9 BSP, which can be used without hardware platform. See the getting started guide below for details. + +- [Getting Started of QEMU (Windows)](quick-start/quick_start_qemu/quick_start_qemu.md) +- [Getting Started of QEMU (Ubuntu)](quick-start/quick_start_qemu/quick_start_qemu_linux.md) + +# Help + +Any questions can be asked in the [issue section of rtthread-manual-doc](https://github.com/RT-Thread/rtthread-manual-doc/issues). By creating a new issue to describe your questions, community members will answer them. + +# Contribution + +If you are interested in RT-Thread and want to join in the development of RT-Thread and become a code contributor,please refer to the [Code Contribution Guide](documentation/contribution_guide/contribution_guide.md). 
+ +# Manual Catalogue + +- [RT-Thread Introduction](introduction/introduction.md) +- [Start Guide: Simulate STM32F103 on KEIL simulator](quick-start/quick-start.md) + +**Kernel** + +- [Kernel Basics](basic/basic.md) +- [Thread Management](thread/thread.md) +- [Clock&Timer Management](timer/timer.md) +- [Inter-thread Synchronization](thread-sync/thread-sync.md) +- [Inter-thread Communication](thread-comm/thread-comm.md) +- [Memory Management](memory/memory.md) +- [Interrupt Management](interrupt/interrupt.md) +- [Kernel Porting](kernel-porting/kernel-porting.md) + +**Tool** + +- [User Manual of Env](env/env.md) +- [SCons](scons/scons.md) + +**Device** + +- [I/O Device Framework](device/device.md) +- [PIN Device](device/pin/pin.md) +- [UART Device](device/uart/uart.md) +- [ADC Device](device/adc/adc.md) +- [I2C Bus Device](device/i2c/i2c.md) +- [SPI Device](device/spi/spi.md) +- [PWM Device](device/pwm/pwm.md) +- [RTC Device](device/rtc/rtc.md) +- [HWTIMER Device](device/hwtimer/hwtimer.md) +- [WATCHDOG Device](device/watchdog/watchdog.md) +- [WLAN Device](device/wlan/wlan.md) +- [Sensor Device](device/sensor/sensor.md) + +**Components** + +- [FinSH Console](finsh/finsh.md) +- [Virtual File System](filesystem/README.md) +- [utest Framework](utest/utest.md) +- [Dynamic Module: dlmodule](dlmodule/README.md) +- [Socket Abstraction Layer: SAL](sal/sal.md) +- [AT Commands](at/at.md) +- [POSIX Interface](posix/README.md) +- [Ulog Log](ulog/ulog.md) +- [Power Management: PM](pm/pm.md) +- [Network Framework](network/network.md) + diff --git a/documentation/at/at.md b/documentation/at/at.md new file mode 100644 index 0000000000..7997084f3f --- /dev/null +++ b/documentation/at/at.md @@ -0,0 +1,895 @@ +# AT Commands # + +## Introduction to AT Commands + +AT Commands was originally a control protocol invented by Hayes, which invented MODEM, to control MODEM. Later, with the upgrade of network bandwidth, the dial-up MODEM with very low speed basically exited the general market, but the AT command was retained. At that time, the major mobile phone manufacturers jointly developed a set of AT commands for GSM to control the GSM module of the mobile phone. The AT command evolved on this basis and added the GSM 07.05 standard and the later GSM 07.07 standard to achieve a more robust standardization. + +In the subsequent GPRS control, 3G module, etc., all use AT commands to control, AT commands gradually become the actual standard in product development. Nowadays, AT commands are also widely used in embedded development. AT commands are the protocol interfaces of the main chip and communication module. The hardware interface is usually the serial port, so the main control device can complete various operations through simple commands and hardware design. + +**The AT commands is a way of applying device connections and data communication between the AT Server and the AT Client.** The basic structure is shown below: + +![AT Command Set](figures/at_framework.jpg) + +1. The AT command consists of three parts: prefix, body, and terminator. The prefix consists of the character AT; the body consists of commands, parameters, and possibly used data; the terminator typically is `` (`"\r\n"`). + +2. The implementation of the AT function requires the AT Server and the AT Client to work together. + +3. The AT server is mainly used to receive commands sent by the AT client, determine the received commands and parameter formats, and deliver corresponding response data or actively send data. + +4. 
The AT client is mainly used to send commands, wait for the AT Server to respond, and parse the AT Server response data or the actively sent data to obtain related information. + +5. A variety of data communication methods (UART, SPI, etc.) are supported between AT Server and AT Client. Currently, the most commonly used serial port UART communication method. + +6. The data that the AT Server sends to the AT Client is divided into two types: response data and URC data. + +- Response Data: The AT Server response status and information received by the AT Client after sending the command. + +- URC Data: The data that the AT Server actively sends to the AT client generally appears in some special cases, such as disconnected WIFI connection, TCP receiving data, etc. These situations often require the user to perform corresponding operations. + +With the popularization of AT commands, more and more embedded products use AT commands. The AT commands are used as the protocol interfaces of the main chip and the communication module. The hardware interface is generally a serial port, so that the master device can performs a variety of operations using simple commands and hardware design. + +Although the AT command has standardization to a certain degree, the AT commands supported by different chips are not completely unified, which directly increases the complexity to use. There is no uniform way to handle the sending and receiving of AT commands and the parsing of data. Moreover, when the AT device is used to connect to the network, the simple device connection and data transceiving functions can only be completed by commands, and it is difficult to adapt the upper layer network application interface, which is not conducive to the development of the product device. + +In order to facilitate the user to use AT commands to easily adapt to different AT modules, RT-Thread provides AT components for AT device connectivity and data communication. The implementation of the AT component consists of both client and server. + +## Introduction to AT Components + +The AT component is based on the implementation of the `AT Server` and `AT Client` of the RT-Thread system. The component completes the AT command transmission, command format and parameter judgment, command response, response data reception, response data parsing, URC data processing, etc.. Command data interaction process. + +Through the AT component, the device can use the serial port to connect to other devices to send and receive parsed data. It can be used as an AT Server to allow other devices or even the computer to connect to complete the response of sending data. It can also start the CLI mode in the local shell to enable the device to support AT Server and AT Client at the same time. Server and AT Client features, this mode is mostly used for device development and debugging. + +**AT component resource usage:** + +- AT Client: 4.6K ROM and 2.0K RAM; + +- AT Server: 4.0K ROM and 2.5K RAM; + +- AT CLI: 1.5K ROM and almost no RAM is used. + +Overall, the AT component resources are extremely small, making them ideal for use in embedded devices with limited resources. The AT component code is primarily located in `rt-thread/components/net/at/`. 
The main functions includes : + +**Main Functions of AT Server:** + +- Basic commands: Implement a variety of common basic commands (ATE, ATZ, etc.); +- Command compatibility: The command supports ignoring case and improving command compatibility; +- Command detection: The command supports custom parameter expressions and implements self-detection of received command parameters. +- Command registration: Provides a simple way to add user-defined commands, similar to the way the `finsh/msh` command is added; +- Debug mode: Provides AT Server CLI command line interaction mode, mainly used for device debugging. + +**Main Functions of AT Client:** + +- URC data processing: The complete URC data processing method; +- Data analysis: Supports the analysis of custom response data, and facilitates the acquisition of relevant information in the response data; +- Debug mode: Provides AT Client CLI command line interaction mode, mainly used for device debugging. +- AT Socket: As an extension of AT Client function, it uses AT command to send and receive as the basis, implements the standard BSD Socket API, completes the data sending and receiving function, and enables users to complete device networking and data communication through AT commands. +- Multi-client support: The AT component currently supports multiple clients running simultaneously. + +## AT Server ## + +### AT Server Configuration ### + +When we use the AT Server feature in the AT component, we need to define the following configuration in rtconfig.h: + +| **Macro Definition** | **Description** | +| ---- | ---- | +|RT_USING_AT| Enable AT component | +|AT_USING_SERVER |Enable AT Server function| +|AT_SERVER_DEVICE |Define the serial communication device name used by AT Server on the device to ensure that it is not used and the device name is unique, such as `uart3` device.| +|AT_SERVER_RECV_BUFF_LEN|The maximum length of data received by the AT Server device| +|AT_CMD_END_MARK_CRLF|Determine the line terminator of the received command | +|AT_USING_CLI | Enable server-command-line interaction mode | +|AT_DEBUG|Enable AT component DEBUG mode to display more debug log information | +|AT_PRINT_RAW_CMD | Enable real-time display AT command communication data mode for easy debugging | + +For different AT devices, there are several formats of the line terminator of the sending commands: `"\r\n"`、`"\r"`、`"\n"`, the user needs to select the corresponding line terminator according to the device type connected to the AT Server. And then determine the end of the send command line, defined as follows: + +| **Macro Definition** | **Terminator** | +| ---- | ---- | +| AT_CMD_END_MARK_CRLF | `"\r\n"` | +| AT_CMD_END_MARK_CR | `"\r"` | +| AT_CMD_END_MARK_LF | `"\n"` | + +The above configuration options can be added by Env tool. The specific path in Env is as follows: + +```c +RT-Thread Components ---> + Network ---> + AT commands ---> + [*] Enable AT commands + [*] Enable debug log output + [*] Enable AT commands server + (uart3) Server device name + (256) The maximum length of server data accepted + The commands new line sign (\r\n) ---> + [ ] Enable AT commands client + [*] Enable command-line interface for AT commands + [ ] Enable print RAW format AT command communication data +``` + +After the add configuration is complete, you can use the command line to rebuild the project, or use `scons` to compile. + +### AT Server Initialization ### + +After enabling the AT Server in Env, you need to initialize it at startup aims to enable the AT Server function. 
If the component has been initialized automatically, no additional initialization is required. Otherwise, you need to call the following function in the initialization task. : + +```c +int at_server_init(void); +``` +The AT Server initialization function, which belongs to the application layer function, needs to be called before using the AT Server function or using the AT Server CLI function. `at_server_init()` function completes initialization of resources stored by AT commands ,such as data segment initialization, AT Server device initialization, and semaphore usage by the AT Server, and creates an at_server thread for parsing the receipt data in the AT Server. + +After the AT Server is successfully initialized, the device can be used as an AT server to connect to the AT client's serial device for data communication, or use a serial port conversion tool to connect to the PC, so that the PC-side serial debugging assistant can communicate with the AT client as data communication. + +### Add custom AT commands ### + +At present, the format of the AT command set used by AT devices of different manufacturers does not have a completely uniform standard, so the AT Server in the AT component only supports some basic general AT commands, such as ATE, AT+RST, etc. These commands can only be used to meet the basic operation of the device. If users want to use more functions, they need to implement custom AT Server commands for different AT devices.AT component provides AT command addition method similar to finsh/msh command addition method, which is convenient for users to implement the required commands. + +The basic commands currently supported by AT Server are as follows: + +- AT: AT test command; +- ATZ: The device is restored to factory settings; +- AT+RST: Reboot device ; +- ATE: ATE1 turns on echo, ATE0 turns off echo; +- AT&L: List all commands; +- AT+UART: Set the serial port information. + +AT commands can implement different functions depending on the format of the incoming parameters. For each AT command, there are up to four functions, as described below: + +- Test Function: `AT+=?` , used to query the command's parameter, format and value range; +- Query Function: `AT+?`, used to return the current value of the command parameter; +- Setting Function: `AT+=...` , used for user-defined parameter values; +- Execution Function: `AT+`, used to perform related operations. + +The four functions of each command do not need to be fully implemented. When you add the AT Server command, you can implement one or several of the above functions according to your needs. Unimplemented functions can be represented by `NULL` . And then through custom commands, the add function is added to the list of basic commands. The addition method is similar to the way the `finsh/msh` command is added. The function for adding commands is as follows: + +```c +AT_CMD_EXPORT(_name_, _args_expr_, _test_, _query_, _setup_, _exec_); +``` + +|**Parameter** |**Description** | +| ---------- | ------------------------------- | +| `_name_ ` | AT command name | +| `_args_expr_` | AT command parameter expression; (NULL means no parameter, `<>` means mandatory parameter and `[]` means optional parameter) | +| `_test_` | AT test function name; (NULL means no parameter) | +| `_query_` | AT query function name; (ibid.) | +| `_setup_` | AT setup function name; (ibid.) | +| `_exec_` | AT performs the function name; (ibid.) | + +The AT command registration example is as follows. The `AT+TEST` command has two parameters. 
The first parameter is a mandatory parameter, and the second parameter is an optional parameter. The command implements the query function and the execution function: + +```c +static at_result_t at_test_exec(void) +{ + at_server_printfln("AT test commands execute!"); + + return 0; +} +static at_result_t at_test_query(void) +{ + at_server_printfln("AT+TEST=1,2"); + + return 0; +} + +AT_CMD_EXPORT("AT+TEST", =[,], NULL, at_test_query, NULL, at_test_exec); +``` + +### AT Server APIs + +#### Send Data to the Client (no newline) + +```c +void at_server_printf(const char *format, ...); +``` + +This function is used by the AT Server to send fixed-format data to the corresponding AT Client serial device through the serial device. The data ends without a line break. Used to customize the function functions of AT commands in AT Server. + +| **Parameter** | **D**escription | +|------|-------------------------| +| format | Customize the expression of the input data | +| ... | Input data list, variable parameters | + +#### Send Data to the Client (newline) + +```c +void at_server_printfln(const char *format, ...); +``` + +This function is used by the AT Server to send fixed-format data to the corresponding AT Client serial device through the serial device, with a newline at the end of the data. Used to customize the function functions of AT commands in AT Server. + +| **Parameter** | **Description** | +|------|-------------------------| +| format | Customize the expression of the input data | +| ... | Input data list, variable parameters | + +#### Send Command Execution Results to the Client + +```c +void at_server_print_result(at_result_t result); +``` + +This function is used by the AT Server to send command execution results to the corresponding AT Client serial device through the serial device. The AT component provides a variety of fixed command execution result types. When you customize a command, you can use the function to return the result directly; + +| **Parameter** | **Description** | +|------|-----------------| +| result | Command execution result type | + +The command execution result type in the AT component is given in the enumerated type, as shown in the following table: + +| Types of Command Execution Result | Description | +|------------------------|------------------| +| AT_RESULT_OK | Command Execution Succeeded | +| AT_RESULT_FAILE | Command Execution Failed | +| AT_RESULT_NULL | Command No Result | +| AT_RESULT_CMD_ERR | Command Input Error | +| AT_RESULT_CHECK_FAILE | Parameter Expression Matching Error | +| AT_RESULT_PARSE_FAILE | Parameter Parsing Error | + +See the following code to learn how to use the `at_server_print_result` function: + +```c +static at_result_t at_test_setup(const char *args) +{ + if(!args) + { + /* If the parameter error after incoming orders, returns expression match error results */ + at_server_print_result(AT_RESULT_CHECK_FAILE); + } + + /* Return to successful execution under normal conditions */ + at_server_print_result(AT_RESULT_OK); + return 0; +} +static at_result_t at_test_exec(void) +{ + // execute some functions of the AT command. 
+ + /* This command does not need to return results */ + at_server_print_result(AT_RESULT_NULL); + return 0; +} +AT_CMD_EXPORT("AT+TEST", =,, NULL, NULL, at_test_setup, at_test_exec); +``` + +#### Parsing Input Command Parameters + +```c +int at_req_parse_args(const char *req_args, const char *req_expr, ...); +``` + +Parsing input command parameters Among the four function functions of an AT command, only the setting function has an input parameter, and the input parameter is to remove the rest of the AT command, for example, a command input is `"AT+TEST=1,2,3,4"`, Then set the input parameter of the function to the parameter string `"=1,2,3,4"` . + +The command parsing function is mainly used in the AT function setting function, which is used to parse the incoming string parameter and obtain corresponding multiple input parameters for performing the following operations. The standard `sscanf` parsing grammar used in parsing grammar here will also be described in detail later in the AT Client parameter parsing function. + +| **Parameter** | **Description** | +|---------|-----------------------------------------------| +| req_args | The incoming parameter string of the request command | +| req_expr | Custom parameter parsing expression for parsing the above incoming parameter data | +| ... | Output parsing parameter list, which is a variable parameter | +| **Return** | -- | +| >0 | Successful, returns the number of variable parameters matching the parameter expression | +| =0 | Failed, no parameters matching the parameter expression | +| -1 | Failed, parameter parsing error | + +See the following code to learn how to use the at_server_print_result function: + +```c +static at_result_t at_test_setup(const char *args) +{ + int value1,value2; + + /* The input standard format of args should be "=1, 2", "=%d, %d" is a custom parameter parsing expression, and the result is parsed and stored in the value1 and value2 variables. */ + if (at_req_parse_args(args, "=%d,%d", &value1, &value2) > 0) + { + /* Data analysis succeeds, echoing data to AT Server serial device */ + at_server_printfln("value1 : %d, value2 : %d", value1, value2); + + /* The data is parsed successfully. The number of parsing parameters is greater than zero. The execution is successful. */ + at_server_print_result(AT_RESULT_OK); + } + else + { + /* Data parsing failed, the number of parsing parameters is not greater than zero, and the parsing failure result type is returned. */ + at_server_print_result(AT_RESULT_PARSE_FAILE); + } + return 0; +} +/* Add the "AT+TEST" command to the AT command list. The command parameters are formatted as two mandatory parameters and . */ +AT_CMD_EXPORT("AT+TEST", =,, NULL, NULL, at_test_setup, NULL); +``` + +#### Portation-related interfaces + +AT Server supports a variety of basic commands (ATE, ATZ, etc.) by default. The function implementation of some commands is related to hardware or platform and requires user-defined implementation. The AT component source code `src/at_server.c` file gives the weak function definition of the migration file. The user can create a new migration file in the project to implement the following function to complete the migration interface, or modify the weak function to complete the migration interface directly in the file. + +1. Device restart function: `void at_port_reset(void);`. This function completes the device soft restart function and is used to implement the basic command AT+RST in AT Server. + +2. 
The device restores the factory settings function: `void at_port_factory_reset(void);`. This function completes the device factory reset function and is used to implement the basic command ATZ in AT Server. + +3. Add a command table in the link script (add only in gcc, no need to add in keil and iar) + +If you use the gcc toolchain in your project, you need to add the *section* corresponding to the AT server command table in the link script. Refer to the following link script: + +```c +/* Constant data goes into FLASH */ +.rodata : +{ + ... + + /* section information for RT-thread AT package */ + . = ALIGN(4); + __rtatcmdtab_start = .; + KEEP(*(RtAtCmdTab)) + __rtatcmdtab_end = .; + . = ALIGN(4); +} > CODE +``` + +## AT Client + +### AT Client Configuration + +When we use the AT Client feature in the AT component, we need to define the following configuration in rtconfig.h: + +```c +#define RT_USING_AT +#define AT_USING_CLIENT +#define AT_CLIENT_NUM_MAX 1 +#define AT_USING_SOCKET +#define AT_USING_CLI +#define AT_PRINT_RAW_CMD +``` + +- `RT_USING_AT`: Used to enable or disable the AT component; + +- `AT_USING_CLIENT`: Used to enable the AT Client function; + +- `AT_CLIENT_NUM_MAX`: Maximum number of AT clients supported at the same time. + +- `AT_USING_SOCKET`: Used by the AT client to support the standard BSD Socket API and enable the AT Socket function. + +- `AT_USING_CLI`: Used to enable or disable the client command line interaction mode. + +- `AT_PRINT_RAW_CMD`: Used to enable the real-time display mode of AT command communication data for easy debugging. + +The above configuration options can be added directly to the `rtconfig.h` file or added by the Env. The specific path in Env is as follows: + +```c +RT-Thread Components ---> + Network ---> + AT commands ---> + [*] Enable AT commands + [ ] Enable debug log output + [ ] Enable AT commands server + [*] Enable AT commands client + (1) The maximum number of supported clients + [*] Enable BSD Socket API support by AT commnads + [*] Enable command-line interface for AT commands + [ ] Enable print RAW format AT command communication data +``` + +After the configuration is complete, you can use the command line to rebuild the project, or use `scons` to compile. + +### AT Client Initialization ### + +After configuring the AT Client, you need to initialize it at startup aims to enable the AT client function. If the component has been initialized automatically, no additional initialization is required. Otherwise, you need to call the following function in the initialization task: + +```c +int at_client_init(const char *dev_name, rt_size_t recv_bufsz); +``` + +The AT Client initialization function, which belongs to the application layer function, needs to be called before using the AT Client function or using the AT Client CLI function. The `at_client_init()` function completes the initialization of the AT Client device, the initialization of the AT Client porting function, the semaphore and mutex used by the AT Client, and other resources, and creates the `at_client` thread for parsing the data received in the AT Client and for processing the URC data. + +### AT Client data receiving and sending ### + +The main function of the AT Client is to send AT commands, receive data, and parse data. The following is an introduction to the processes and APIs related to AT Client data reception and transmission. 
+ +Related structure definition: + +```c +struct at_response +{ + /* response buffer */ + char *buf; + /* the maximum response buffer size */ + rt_size_t buf_size; + /* the number of setting response lines + * == 0: the response data will auto return when received 'OK' or 'ERROR' + * != 0: the response data will return when received setting lines number data */ + rt_size_t line_num; + /* the count of received response lines */ + rt_size_t line_counts; + /* the maximum response time */ + rt_int32_t timeout; +}; +typedef struct at_response *at_response_t; +``` + +In the AT component, this structure is used to define a control block for AT command response data, which is used to store or limit the data format of the AT command response data. + +- `buf` is used to store the received response data. Note that the data stored in the buf is not the original response data, but the data of the original response data removal terminator (`"\r\n"`). Each row of data in the buf is split by '\0' to make it easy to get data by row. +- `buf_size` is a user-defined length of the received data that is most supported by this response. The length of the return value is defined by the user according to his own command. +- `line_num` is the number of rows that the user-defined response data needs to receive. **If there is no response line limit requirement, it can be set to 0.** +- `line_counts` is used to record the total number of rows of this response data. +- `timeout` is the user-defined maximum response time for this response data. + +`buf_size`、`line_num`、`timeout` parameters in the structure are restricted conditions, which are set when the structure is created, and other parameters are used to store data parameters for later data analysis. + +Introduction to related API interfaces: + +#### Create a Response Structure + +```c +at_response_t at_create_resp(rt_size_t buf_size, rt_size_t line_num, rt_int32_t timeout); +``` + +| **Parameter** | **Description** | +|---------|-----------------------------------------| +| buf_size | Maximum length of received data supported by this response | +| line_num | This response requires the number of rows of data to be returned. The number of rows is divided by a standard terminator (such as "\r\n"). If it is 0, it will end the response reception after receiving the "OK" or "ERROR" data; if it is greater than 0, it will return successfully after receiving the data of the current set line number. | +| timeout | The maximum response time of the response data, receiving data timeout will return an error | +| **Return** | -- | +| != NULL | Successful, return a pointer to the response structure | +| = NULL | Failed, insufficient memory | + +This function is used to create a custom response data receiving structure for later receiving and parsing the send command response data. + +#### Delete a Response Structure + +```c +void at_delete_resp(at_response_t resp); +``` + +| **Parameter** | **Description** | +|----|-------------------------| +| resp | The response structure pointer to be deleted | + +This function is used to delete the created response structure object, which is generally paired with the **at_create_resp** creation function. 
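+
+A minimal sketch of this create/use/delete pairing is shown below; the helper name, the assumed `at.h` include and the buffer size/timeout values are illustrative rather than fixed:
+
+```c
+#include <rtthread.h>
+#include <at.h>      /* AT component header (assumed include name) */
+
+/* Hypothetical helper showing the lifecycle of a response object. */
+static int resp_lifecycle_sketch(void)
+{
+    /* 128-byte buffer, no line-count limit, 500 ms timeout (illustrative values) */
+    at_response_t resp = at_create_resp(128, 0, rt_tick_from_millisecond(500));
+
+    if (resp == RT_NULL)
+    {
+        return -RT_ENOMEM;    /* not enough memory for the response structure */
+    }
+
+    /* ... use resp with at_exec_cmd() or the parsing helpers described below ... */
+
+    /* pair every at_create_resp() with at_delete_resp() to avoid leaking memory */
+    at_delete_resp(resp);
+
+    return RT_EOK;
+}
+```
+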
+ +#### Set the Parameters of Response Structure + +```c +at_response_t at_resp_set_info(at_response_t resp, rt_size_t buf_size, rt_size_t line_num, rt_int32_t timeout); +``` + +| **Parameter** | **Description** | +|---------|----------------------------------| +| resp | Response structure pointer that has been created | +| buf_size | Maximum length of received data supported by this response | +| line_num | This response requires the number of rows of data to be returned. The number of lines is divided by the standard terminator. If it is 0, the response is received after receiving the "OK" or "ERROR" data. If it is greater than 0, the data is successfully returned after receiving the data of the currently set line number. | +| timeout | The maximum response time of the response data, receiving data timeout will return an error. | +| **Return** | -- | +| != NULL | Successful, return a pointer to the response structure | +| = NULL | Failed, insufficient memory | + +This function is used to set the response structure information that has been created. It mainly sets the restriction information on the response data. It is generally used after creating the structure and before sending the AT command. This function is mainly used to send commands when the device is initialized, which can reduce the number of times the response structure is created and reduce the code resource occupation. + +#### Send a Command and Receive a Response + +```c +rt_err_t at_exec_cmd(at_response_t resp, const char *cmd_expr, ...); +``` + +| **Parameter** | **Description** | +|---------|-----------------------------| +| resp | Response structure body pointer created | +| cmd_expr | Customize the expression of the input command | +| ... | Enter the command data list, a variable parameter | +| **Return** | -- | +| >=0 | Successful | +| -1 | Failed | +| -2 | Failed, receive response timed out | + +This function is used by the AT Client to send commands to the AT Server and wait for a response. `resp` is a pointer to the response structure that has been created. The AT command uses the variable argument input of the match expression. **You do not need to add a command terminator at the end of the input command.** + +Refer to the following code to learn how to use the above AT commands to send and receive related functions: + +```c +/* + * Program listing: AT Client sends commands and receives response routines + */ + +#include +#include /* AT component header file */ + +int at_client_send(int argc, char**argv) +{ + at_response_t resp = RT_NULL; + + if (argc != 2) + { + LOG_E("at_cli_send [command] - AT client send commands to AT server."); + return -RT_ERROR; + } + + /* Create a response structure, set the maximum support response data length to 512 bytes, the number of response data lines is unlimited, and the timeout period is 5 seconds. 
*/ + resp = at_create_resp(512, 0, rt_tick_from_millisecond(5000)); + if (!resp) + { + LOG_E("No memory for response structure!"); + return -RT_ENOMEM; + } + + /* Send AT commands and receive AT Server response data, data and information stored in the resp structure */ + if (at_exec_cmd(resp, argv[1]) != RT_EOK) + { + LOG_E("AT client send commands failed, response error or timeout !"); + return -ET_ERROR; + } + + /* Command sent successful */ + LOG_D("AT Client send commands to AT Server success!"); + + /* Delete response structure */ + at_delete_resp(resp); + + return RT_EOK; +} +#ifdef FINSH_USING_MSH +#include +/* Output at_Client_send to msh */ +MSH_CMD_EXPORT(at_Client_send, AT Client send commands to AT Server and get response data); +#endif +``` + +The implementation principle of sending and receiving data is relatively simple. It mainly reads and writes the serial port device bound by the AT client, and sets the relevant number of rows and timeout to limit the response data. It is worth noting that the `res` response needs to be created first. The structure passed `in_exec_cmd` function for data reception. When the `at_exec_cmd` function's parameter `resp` is NULL, it means that the data sent this time **does not consider processing the response data and directly returns the result**. + +### AT Client Data Parsing Method ### + +After the data is normally acquired, the response data needs to be parsed, which is one of the important functions of the AT Client. Parsing of data in the AT Client provides a parsed form of a custom parsing expression whose parsing syntax uses the standard `sscanf` parsing syntax. Developers can use the custom data parsing expression to respond to useful information in the data, provided that the developer needs to review the relevant manual in advance to understand the basic format of the AT Server device response data that the AT Client connects to. The following is a simple AT Client data parsing method through several functions and routines. + +#### Get Response Data for the Specified Line Number + +```c +const char *at_resp_get_line(at_response_t resp, rt_size_t resp_line); +``` + +| **Parameter** | **Description** | +|----------|-----------------------------| +| resp |Response structure pointer | +| resp_line | Required line number for obtaining data | +| **Return** | -- | +| != NULL | Successful, return a pointer to the corresponding line number data | +| = NULL | Failed, input line number error | + +This function is used to get a row of data with the specified line number in the AT Server response data. The line number is judged by the standard data terminator. The above send and receive functions `at_exec_cmd` have recorded and processed the data and line numbers of the response data in the `resp` response structure, where the data information of the corresponding line number can be directly obtained. + +#### Get Response Data by the Specified Keyword + +```c +const char *at_resp_get_line_by_kw(at_response_t resp, const char *keyword); +``` + +| **Parameter** | **Description** | +|-------|-----------------------------| +| resp |Response structure pointer | +| keyword | Keyword information | +| **Return** | -- | +| != NULL | Successful, return a pointer to the corresponding line number data | +| = NULL | Failed, no keyword information found | + +This function is used to get a corresponding row of data by keyword in the AT Server response data. 
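+
+As a quick, hedged illustration of the two lookup functions above, the fragment below assumes that `resp` has already been filled by a successful `at_exec_cmd()` call and that one response line happens to contain the keyword "MAC" (the keyword is purely an example):
+
+```c
+/* line numbers count from 1; RT_NULL is returned if the line does not exist */
+const char *first_line = at_resp_get_line(resp, 1);
+
+/* "MAC" is only an example keyword; use a keyword your device actually returns */
+const char *kw_line = at_resp_get_line_by_kw(resp, "MAC");
+
+if (first_line != RT_NULL)
+{
+    LOG_D("first response line: %s", first_line);
+}
+
+if (kw_line != RT_NULL)
+{
+    LOG_D("line containing the keyword: %s", kw_line);
+}
+else
+{
+    LOG_E("no response line contains the keyword");
+}
+```
+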
+ +#### Parse Response Data for the Specified Line Number + +```c +int at_resp_parse_line_args(at_response_t resp, rt_size_t resp_line, const char *resp_expr, ...); +``` + +| **Parameter** | **Description** | +|----------|---------------------------------| +| resp |Response structure pointer | +| resp_line | Parsed data line number required, **from the start line number count 1** | +| resp_expr | Custom parameter parsing expression | +| ... | Parsing the parameter list as a variable parameter | +| **Return** | -- | +| >0 | Successful, return the number of parameters successfully parsed | +| =0 | Failed, no parameters matching the parsing expression | +| -1 | Failed, parameter parsing error | + +This function is used to get a row of data with the specified line number in the AT Server response data, and parse the parameters in the row data. + +#### Parse Response Data for a Row with Specified Keyword + +```c +int at_resp_parse_line_args_by_kw(at_response_t resp, const char *keyword, const char *resp_expr, ...); +``` + +| **Parameter** | **Description** | +|----------|---------------------------------| +| resp |Response structure pointer | +| keyword | Keyword information | +| resp_expr | Custom parameter parsing expression | +| ... | Parsing the parameter list as a variable parameter | +| **Return** | -- | +| >0 | Successful, return the number of parameters successfully parsed | +| =0 | Failed, no parameters matching the parsing expression | +| -1 | Failed, parameter parsing error | + +This function is used to get a row of data containing a keyword in the AT Server response data and parse the parameters in the row data. + +The data parsing syntax uses the standard `sscanf` syntax, the content of the syntax is more, developers can search their parsing syntax, here two procedures are used to introduce the simple use method. + +#### Serial Port Configuration Information Analysis Example + +The data sent by the client: + +```c +AT+UART? +``` + +The response data obtained by the client: + +```c +UART=115200,8,1,0,0\r\n +OK\r\n +``` + +The pseudo code is parsed as follows: + +```c +/* Create a server response structure, the maximum length of user-defined receive data is 64 */ +resp = at_create_resp(64, 0, rt_tick_from_millisecond(5000)); + +/* Send data to the server and receive response data in the resp structure */ +at_exec_cmd(resp, "AT+UART?"); + +/* Analyze the serial port configuration information, 1 means parsing the first line of response data, '%*[^=]' means ignoring the data before the equal sign */ +at_resp_parse_line_args(resp, 1,"%*[^=]=%d,%d,%d,%d,%d", &baudrate, &databits, &stopbits, &parity, &control); +printf("baudrate=%d, databits=%d, stopbits=%d, parity=%d, control=%d\n", + baudrate, databits, stopbits, parity, control); + +/* Delete server response structure */ +at_delete_resp(resp); +``` + +#### IP and MAC Address Resolution Example #### + +The data sent by the client: + +```c +AT+IPMAC? 
+``` + +The response data obtained by the server: + +```c +IP=192.168.1.10\r\n +MAC=12:34:56:78:9a:bc\r\n +OK\r\n +``` + +The pseudo code is parsed as follows: + +```c +/* Create a server response structure, the maximum length of user-defined receive data is 128 */ +resp = at_create_resp(128, 0, rt_tick_from_millisecond(5000)); + +at_exec_cmd(resp, "AT+IPMAC?"); + +/* Customize the parsing expression to parse the information in the current line number data */ +at_resp_parse_line_args(resp, 1,"IP=%s", ip); +at_resp_parse_line_args(resp, 2,"MAC=%s", mac); +printf("IP=%s, MAC=%s\n", ip, mac); + +at_delete_resp(resp); +``` + +The key to parsing data is to correctly define the expression. Because the response data of the different device manufacturers is not unique to the response data of the AT device, only the form of the custom parsing expression can be obtained to obtain the required information. The design of the `at_resp_parse_line_args` parsing parameter function is based on the `sscanf` data parsing method. Before using, the developer needs to understand the basic parsing syntax and then design the appropriate parsing syntax in combination with the response data. If the developer does not need to parse the specific parameters, you can use the `at_resp_get_line` function to get the specific data of a row. + +### AT Client URC Data Processing ### + +The processing of URC data is another important feature of AT Client. URC data is the data that is actively sent by the server. It cannot be received by the above data sending and receiving functions. The URC data format and function are different for different devices. Therefore, the URC data processing mode needs to be customized. The AT component provides a list management method for the processing of URC data. Users can customize the addition of URC data and its execution functions to the management list, so the processing of URC data is also the main porting work of AT Client. + +Related structure: + +```c +struct at_urc +{ + const char *cmd_prefix; // URC data prefix + const char *cmd_suffix; // URC data suffix + void (*func)(const char *data, rt_size_t size); // URC data execution function +}; +typedef struct at_urc *at_urc_t; +``` + +Each URC data has a structure control block that defines and determines the prefix and suffix of the URC data, as well as the execution function of the URC data. A piece of data can be defined as URC data only if it matches the prefix and suffix of the URC exactly. The URC data execution function is executed immediately after the matching URC data is obtained. So developers adding a URC data requires a custom matching prefix, suffix, and execution function. + + +#### URC Data List Initialization + +```c +void at_set_urc_table(const struct at_urc *table, rt_size_t size); +``` + +| **Parameter** | **Description** | +|-----|-----------------------| +| table | URC data structure array pointer | +| size | Number of URC data | + +This function is used to initialize the developer-defined URC data list, mainly used in the AT Client porting function. + +The example of AT Client migration is given below. This example mainly shows the specific processing of URC data in the `at_client_port_init()` porting function. The developer can directly apply it to his own porting file, or customize the implementation function to complete the AT Client porting. 
+ +```c +static void urc_conn_func(const char *data, rt_size_t size) +{ + /* WIFI connection success information */ + LOG_D("AT Server device WIFI connect success!"); +} + +static void urc_recv_func(const char *data, rt_size_t size) +{ + /* Received data from the server */ + LOG_D("AT Client receive AT Server data!"); +} + +static void urc_func(const char *data, rt_size_t size) +{ + /* Device startup information */ + LOG_D("AT Server device startup!"); +} + +static struct at_urc urc_table[] = { + {"WIFI CONNECTED", "\r\n", urc_conn_func}, + {"+RECV", ":", urc_recv_func}, + {"RDY", "\r\n", urc_func}, +}; + +int at_client_port_init(void) +{ + /* Add multiple URC data to the URC list and execute the URC function when receiving data that matches both the URC prefix and the suffix */ + at_set_urc_table(urc_table, sizeof(urc_table) / sizeof(urc_table[0])); + return RT_EOK; +} +``` + +### AT Client Other APIs Introduction + +#### Send Specified Length Data + +```c +rt_size_t at_client_send(const char *buf, rt_size_t size); +``` + +| **Parameter** | **Description** | +|----|-----------------------------| +| buf | Pointer to send data | +| size | Length of data sent | +| **Return** | -- | +| >0 | Successful, return the length of the data sent | +| <=0 | Failed | + +This function is used to send the specified length data to the AT Server device through the AT Client device, which is mostly used for the AT Socket function. + +#### Receive Specified Length Data + +```c +rt_size_t at_client_recv(char *buf, rt_size_t size,rt_int32_t timeout); +``` + +| **Parameter** | **Description** | +|----|-----------------------------| +| buf | Pointer to receive data | +| size | Maximum length for receiving data | +| timeout | Receive data timeout (tick) | +| **Return** | -- | +| >0 | Successful, return the length of the data received successfully | +| <=0 | Failed, receiving data error or timeout | + +This function is used to receive data of a specified length through the AT Client device, and is mostly used for the AT Socket function. This function can only be used in URC callback handlers. + +#### Set the line terminator for receiving data #### + +```c +void at_set_end_sign(char ch); +``` + +| Parameter | 描述 | +| ----- | ----- | +|ch | Line terminator | +| **Return** | **描述** | +|- | - | + +This function is used to set the line terminator, which is used to judge the end of a row of data received by the client, and is mostly used for the AT Socket function. + +#### Waiting for module initialization to complete #### + +```c +int at_client_wait_connect(rt_uint32_t timeout); +``` + +| Parameter | Description | +| ----- | ----- | +|timeout | Waiting timeout | +| **Return** | **Description** | +|0 | Successful | +|<0 | Failed, no data returned during the timeout period | + +This function is used to cyclically send AT commands when the AT module starts, until the module responds to the data, indicating that the module is successfully started. + +### AT Client Multi-Client Support ### + +In general, the device as the AT Client only connects to one AT module (the AT module acts as the AT Server) and can directly use the above functions of data transmission and reception and command parsing. In a few cases, the device needs to connect multiple AT modules as the AT Client. In this case, the multi-client support function of the device is required. 
+ +The AT component provides support for multi-client connections and provides two different sets of function interfaces: **Single-Client Mode Functions** and **Multi-Client Mode Functions**. + +- Single-Client Mode Function: This type of function interface is mainly used when the device is connected to only one AT module, or when the device is connected to multiple AT modules, it is used in the **first AT client**. + +- Multi-Client Mode Function: This type of function interface mainly uses devices to connect multiple AT modules. + +The advantages and disadvantages of the two different mode functions and in different application scenarios are as follows: + +![at client modes comparison](figures/at_multiple_client.jpg) + +The single client mode function definition is mainly different from the single connection mode function. The definition of the incoming client object is different. The single client mode function uses the first initialized AT client object by default, and the multi-client mode function can Pass in the user-defined custom client object. The function to get the client object is as follows: + +```c +at_client_t at_client_get(const char *dev_name); +``` + +This function obtains the AT client object created by the device through the incoming device name, which is used to distinguish different clients when connecting multiple clients. + +The single client mode and multi-client mode function interface definitions differ from the following functions:: + +| Single-Client Mode Functions | Multi-Client Mode Functions | +| ----------------------------| ---------------------------------------| +| at_exec_cmd(...) | at_obj_exec_cmd(client, ...) | +| at_set_end_sign(...) | at_obj_set_end_sign(client, ...) | +| at_set_urc_table(...) | at_obj_set_urc_table(client, ...) | +| at_client_wait_connect(...) | at_client_obj_wait_connect(client, ...) | +| at_client_send(...) | at_client_obj_send(client, ...) | +| at_client_recv(...) | at_client_obj_recv(client, ...) | + +The two modes of client data transmission and parsing are basically the same, and the function usage process is different, as shown below: + +```c +/* Single client mode function usage */ + +at_response_t resp = RT_NULL; + +at_client_init("uart2", 512); + +resp = at_create_resp(256, 0, 5000); + +/* Send commands using a single client mode function */ +at_exec_cmd(resp, "AT+CIFSR"); + +at_delete_resp(resp); +``` + +```c +/* Multi-client mode functions usage */ + +at_response_t resp = RT_NULL; +at_client_t client = RT_NULL; + +/* Initialize two AT clients */ +at_client_init("uart2", 512); +at_client_init("uart3", 512); + +/* Get the corresponding AT client object by name */ +client = at_client_get("uart3"); + +resp = at_create_resp(256, 0, 5000); + +/* Send commands using multi-client mode functions */ +at_obj_exec_cmd(client, resp, "AT+CIFSR"); + +at_delete_resp(resp); +``` + +The process differences used by other functions are similar to the above `at_obj_exec_cmd()` function. The main function is to obtain the client object through the `at_client_get() ` function, and then determine which client is the client through the incoming object to achieve multi-client support. + +## FAQs + +### Q: What should I do if the log on the shell shows an error when enabling the AT command to send and receive data real-time printing function. ? + +**A:** Increase the baudrate of the serial port device corresponding to the shell to 921600, improve the serial port printing speed, and prevent the printing error when the data is too large. 
+### Q: When the AT Socket function is enabled, the compiler prompts "The AT socket device is not selected, please select it through the env menuconfig".
+
+**A:** After the AT Socket function is enabled, the corresponding device model is enabled in the at device package by default. Enter the at device package, configure the device as an ESP8266 device, configure the WIFI information, regenerate the project, then compile and download.
+
+### Q: With the AT Socket function, data reception times out or the received data is incomplete.
+
+**A:** The receive data buffer of the serial device used by AT may be too small (RT_SERIAL_RB_BUFSZ defaults to 64 bytes), so data is overwritten before it is read in time. Appropriately increase the serial port receive buffer size (for example, to 256 bytes).
diff --git a/documentation/at/figures/at_framework.jpg b/documentation/at/figures/at_framework.jpg
new file mode 100644
index 0000000000..6bc88adc3c
Binary files /dev/null and b/documentation/at/figures/at_framework.jpg differ
diff --git a/documentation/at/figures/at_multiple_client.jpg b/documentation/at/figures/at_multiple_client.jpg
new file mode 100644
index 0000000000..e800d69779
Binary files /dev/null and b/documentation/at/figures/at_multiple_client.jpg differ
diff --git a/documentation/basic/basic.md b/documentation/basic/basic.md
new file mode 100644
index 0000000000..0b6315f1a8
--- /dev/null
+++ b/documentation/basic/basic.md
@@ -0,0 +1,754 @@
+# Kernel Basics
+
+This chapter gives a brief introduction to the software architecture of the RT-Thread kernel, beginning with its composition and implementation, while also introducing RT-Thread kernel-related concepts to beginners.
+After understanding this chapter, readers will have an elementary understanding of the RT-Thread kernel and will be able to answer questions such as:
+
+- What are the constituents of the kernel?
+- How does the system start up?
+- How is the memory distributed?
+- What are the methods of kernel configuration?
+
+In a nutshell, this is only a brief introduction to the decomposition and implementation of the real-time kernel's software architecture, giving an overall picture of how the RT-Thread kernel works together. After studying this chapter, readers will have basic knowledge of each kernel component, the system boot-up process, memory allocation and distribution, and the methods of kernel configuration.
+
+## **Table of Contents**
+
+1. [Introduction to RT-Thread Kernel](#introduction-to-rt-thread-kernel)
+2. [RT-Thread Startup Process](#rt-thread-startup-process)
+3. [RT-Thread Program Memory Distribution](#rt-thread-program-memory-distribution)
+4. [RT-Thread Automatic Initialization Mechanism](#rt-thread-automatic-initialization-mechanism)
+5. [RT-Thread Kernel Object Model](#rt-thread-kernel-object-model)
+
+## Introduction to RT-Thread Kernel
+
+The kernel is the most basic and fundamental part of an operating system. The kernel service library and the RT-Thread kernel libraries interface between the hardware and the component/service layer. They include the implementation of the real-time kernel service library (rtservice.h/kservice.c) and the other RT-Thread kernel facilities: object management, thread management and scheduling, inter-thread communication management, clock management and memory management. The diagram below shows the core architecture of the kernel.
+ +![RT-Thread Kernel and its Substructure](figures/03kernel_Framework.png) + +Implementation of core kernel libraries are similar to a small set of standard C runtime library and it can run independently. For example, Standard C library (C runtime library) provides "strcpy", "memcpy", "printf", "scanf" and other function implementations. RT-Thread kernel libraries also provide the function implementations which are mainly used by Core Kernel. However, to avoid name conflicts, specifically functions' names are preceded with rt_. + + +The built of the Kernel will be vary depending on the complier. For example, using GNU GCC compiler, it will use more implementation from the standard C library. Last but not least, the minimum resource requirements of the Kernel is 3KB ROM and 1.2KB RAM. + + +### Thread Scheduling + +Thread is the smallest scheduling unit in the RT-Thread operating system. The thread scheduling algorithm is a **Priority-based Full Preemptive Multi-Thread** scheduling algorithm. +The system can support up to 256(0 - 255) thread priorities. For systems with tight resources, configurations with 8 or 32 thread priorities can be chosen(For example, STM32 has 32 thread priorities as per the default configuration). Lower numbers have a higher priority where 0 represents the highest priority furthermore the lowest priority(highest number) is reserved for idle threads. +RT-Thread supports the creation of multiple threads with the same priority. Threads having the same priority are scheduled with a Time Slice Rotation Scheduling algorithm so that each thread runs for the same amount of time. +The number of threads is bounded by the memory of the hardware platform and not the system. + +Thread management will be covered in detail in the "Thread Management" chapter. + +### Clock Management + +RT-Thread's Clock management is based upon a **clock beat**, which is the smallest clock unit in the RT-Thread operating system. +The RT-Thread timer provides two types of timer mechanisms: +- **One-Shot Timer** - Triggers only one timer event after startup and then stops automatically. +- **Periodic Trigger Timer** - Periodically triggers timer events until the user manually stops the timer or it will continue to operate. + +The RT-Thread timer can be set to the `HARD_TIMER` or the `SOFT_TIMER` mode depending on the context in which the timeout function is executed. + +The timer service is concluded using a timer timing callback i.e. a timeout function. The user can select the appropriate type of timer according to their real-time requirements for timing processing. + +Timer will be explained further in the "Clock Management" chapter. + +### Synchronization between Threads + +RT-Thread uses thread semaphores, mutexes, and event sets to achieve inter-thread synchronization. +Thread synchronizations happen through the acquisition and release of semaphore and mutexes. +The mutex uses priority inheritance to solve the common priority inversion problem in the real-time system. The thread synchronization mechanism allows threads to wait according to priorities or to acquire semaphores/mutexes following the First In First Out(FIFO) method. +Event sets are primarily used for synchronization between threads, they can achieve one-to-many and many-to-many synchronization. It allows "**OR** trigger"(*independent synchronization*) and "**AND** trigger"(*associative synchronization*) suitable for situations where threads are waiting for multiple events. 
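+
+As an informal sketch of the "OR" and "AND" trigger modes (the flags `EVENT_RX`/`EVENT_CMD`, the event object and the thread entry below are made up for illustration; the real API is covered in the Inter-Thread Synchronization chapter), a thread waiting on two events might look like this:
+
+```c
+#include <rtthread.h>
+
+#define EVENT_RX   (1 << 0)   /* illustrative flag: data arrived    */
+#define EVENT_CMD  (1 << 1)   /* illustrative flag: command arrived */
+
+/* hypothetical event set; initialized elsewhere with rt_event_init() */
+static struct rt_event demo_event;
+
+static void demo_wait_entry(void *parameter)
+{
+    rt_uint32_t recved;
+
+    (void)parameter;
+
+    /* "OR" trigger: wake up as soon as either flag is set */
+    if (rt_event_recv(&demo_event, EVENT_RX | EVENT_CMD,
+                      RT_EVENT_FLAG_OR | RT_EVENT_FLAG_CLEAR,
+                      RT_WAITING_FOREVER, &recved) == RT_EOK)
+    {
+        rt_kprintf("one of the events arrived: 0x%08x\n", recved);
+    }
+
+    /* "AND" trigger: wake up only after both flags have been set */
+    if (rt_event_recv(&demo_event, EVENT_RX | EVENT_CMD,
+                      RT_EVENT_FLAG_AND | RT_EVENT_FLAG_CLEAR,
+                      RT_WAITING_FOREVER, &recved) == RT_EOK)
+    {
+        rt_kprintf("both events arrived\n");
+    }
+}
+```
+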
+
+The concepts of semaphores, mutexes, and event sets are detailed in the "Inter-Thread Synchronization" chapter.
+
+### Inter-Thread Communication
+
+RT-Thread supports communication mechanisms such as the mailbox and the message queue. The message length of a mailbox is fixed at 4 bytes, whereas a message queue can receive messages of variable size and cache them in its own memory space.
+Compared with a message queue, a mailbox is more efficient. The send operation of both the mailbox and the message queue can be used safely in an ISR (Interrupt Service Routine). The communication mechanism allows threads to wait by priority or to receive in First In, First Out (FIFO) order.
+The concepts of mailbox and message queue will be explained in detail in the "Inter-Thread Communication" chapter.
+
+### Memory Management
+
+RT-Thread provides two kinds of memory management:
+1. Static memory pool management
+2. Dynamic memory heap management
+
+When the static memory pool has available memory, the time needed to allocate a memory block is constant.
+When the static memory pool is empty, the system suspends or blocks the thread that requests a memory block: the thread either abandons the request and returns immediately, or abandons it and returns after waiting for a while without obtaining a block. The waiting time depends on the timeout parameter set when the memory block is requested. When other threads release memory blocks back to the memory pool, the system wakes up suspended threads that are waiting to be allocated memory blocks, if there are any.
+
+Depending on the available system resources, the dynamic memory heap management module provides a memory management algorithm for small-memory systems and a SLAB memory management algorithm for large-memory systems.
+
+There is also a dynamic memory heap management mechanism called memheap, suitable for systems with multiple memory heaps whose addresses may not be contiguous. Using memheap, the user can "paste" several memory heaps together and operate them as if they were a single heap.
+
+The concept of memory management will be explained in the "Memory Management" chapter.
+
+### I/O Device Management
+
+RT-Thread treats I2C, SPI, USB, UART, etc., as peripheral devices and registers them uniformly through the device framework. It implements a device management subsystem in which devices are accessed by name, and hardware devices can be accessed through a unified API. On the device driver interface, according to the characteristics of the embedded system, corresponding events can be attached to different devices, and the driver notifies the upper-layer application when a device event is triggered.
+
+The concept of I/O device management will be explained in the "Device Model" and "General Equipment" chapters.
+
+## RT-Thread Startup Process
+
+Understanding a code base usually starts from learning its startup process, so we first look for the source of startup. Taking MDK-ARM as an example, the user program entry for MDK-ARM is the main() function located in the main.c file. The launching of the system starts from the assembly code startup_stm32f103xe.s, jumps to the C code, initializes the RT-Thread system, and finally enters the user program entry main().
+
+To complete the RT-Thread system initialization before entering main(), the MDK extensions `$Sub$$` and `$Super$$` are used. Users can add the `$Sub$$` prefix to main to make it a new function, `$Sub$$main`.
+`$Sub$$main` can call some functions to be added before main (here, RT-Thread system initialization function is added). Then, call `$Super$$main` to the main() function so that the user does not have to manage the system initialization before main(). + +For more information on the use of the `$Sub$$` and `$Super$$`extensions, see the ARM® Compiler v5.06 for μVision®armlink User Guide. + +Let's take a look at this code defined in components.c: + +```c +/* $Sub$$main Function */ +int $Sub$$main(void) +{ + rtthread_startup(); + return 0; +} +``` + +Here, the `$Sub$$main` function simply calls the rtthread_startup() function. RT-Thread allows multiple platforms and multiple compilers, and the rtthread_startup() function is a uniform entry point specified by RT-Thread, so the `$Sub$$main` function only needs to call the rtthread_startup() function (RT-Thread compiled using compiler GNU GCC is an example where it jumps directly from the assembly startup code section to the rtthread_startup() function and starts the execution of the first C code). +The rtthread_startup() function can be found in the code of components.c, the startup process of RT-Thread is as shown below: + +![System startup process](figures/03Startup_process.png) + +Code for the rtthread_startup() function is as follows: + +```c +int rtthread_startup(void) +{ + rt_hw_interrupt_disable(); + + /* Board level initialization: system heap initialization is required inside the function */ + rt_hw_board_init(); + + /* Print RT-Thread version information */ + rt_show_version(); + + /* Timer initialization */ + rt_system_timer_init(); + + /* Scheduler initialization */ + rt_system_scheduler_init(); + +#ifdef RT_USING_SIGNALS + /* Signal initialization */ + rt_system_signal_init(); +#endif + + /* Create a user main() thread here */ + rt_application_init(); + + /* Timer thread initialization */ + rt_system_timer_thread_init(); + + /* Idle thread initialization */ + rt_thread_idle_init(); + + /* Start scheduler */ + rt_system_scheduler_start(); + + /* Will not execute till here */ + return 0; +} +``` + +This part of the startup code can be roughly divided into four parts: + +1. Initialize hardware related to the system. +2. Initialize system kernel objects, such as timers, schedulers, and signals. +3. Create the main thread initialize various modules in the main thread one by one. +4. Initialize the timer thread, idle thread, and start the scheduler. + +Set the system clock in rt_hw_board_init() to provide heartbeat and serial port initialization for the system, bound to the system's input and output terminals to this serial port. Subsequent system operation information will be printed out from the serial port later. +The main() function is the user code entry for RT-Thread, and users can add their own applications to the main() function. + +```c +int main(void) +{ + /* user app entry */ + return 0; +} +``` + +## RT-Thread Program Memory Distribution + +The general MCU contains storage space that includes: on-chip Flash and on-chip RAM, RAM is equivalent to memory, and Flash is equivalent to hard disk. The compiler classifies a program into several parts, which are stored in different memory areas of the MCU. + +After the Keil project is compiled, there will prompt information for occupied space by the corresponding program, as shown below: + +``` +linking... 
+Program Size: Code=48008 RO-data=5660 RW-data=604 ZI-data=2124
+After Build - User command \#1: fromelf --bin.\\build\\rtthread-stm32.axf--output rtthread.bin
+".\\build\\rtthread-stm32.axf" - 0 Error(s), 0 Warning(s).
+Build Time Elapsed: 00:00:07
+```
+
+The Program Size mentioned above contains the following sections:
+
+1) Code: code segment, which stores the executable code of the program;
+
+2) RO-data: read-only data segment, which stores the constants defined in the program;
+
+3) RW-data: read-write data segment, which stores global variables initialized to non-zero values;
+
+4) ZI-data: zero data segment, which stores uninitialized global variables and variables initialized to zero;
+
+After compiling the project, a .map file is generated that describes the size and address of each function. The last few lines of the file also illustrate the relationship between the above fields:
+
+```
+Total RO Size (Code + RO Data) 53668 ( 52.41kB)
+Total RW Size (RW Data + ZI Data) 2728 ( 2.66kB)
+Total ROM Size (Code + RO Data + RW Data) 53780 ( 52.52kB)
+```
+
+1) RO Size contains Code and RO-data and indicates the size of the Flash occupied by the program;
+
+2) RW Size contains RW-data and ZI-data and indicates the size of the RAM occupied while running;
+
+3) ROM Size contains Code, RO-data and RW-data and indicates the size of the Flash occupied by the programmed image;
+
+Before the program runs, the image file, usually a bin or hex file, needs to be burned into the STM32 Flash; the burned file is called the executable image file. The left part of Figure 3-3 shows the memory distribution after the executable image file is burned to the STM32, which includes two parts: the RO segment and the RW segment. The RO segment stores the Code and RO-data, and the RW segment holds the RW-data. Since ZI-data is zero, it is not included in the image file.
+
+The STM32 boots from Flash by default after power-on. After booting, the RW-data (initialized global variables) in the RW segment is copied to RAM, but the RO segment is not copied. This means the CPU executes code directly from Flash; the ZI segment is allocated in RAM according to the ZI address and size given by the compiler, and that RAM area is zeroed.
+
+![RT-Thread Memory Distribution](figures/03Memory_distribution.png)
+
+The dynamic memory heap is the unused RAM space, and the memory blocks requested and released by the application come from this space, as in the following example:
+
+```c
+rt_uint8_t* msg_ptr;
+msg_ptr = (rt_uint8_t*) rt_malloc (128);
+rt_memset(msg_ptr, 0, 128);
+```
+
+The 128-byte memory space pointed to by the msg_ptr pointer is located in the dynamic memory heap.
+
+Some global variables are stored in the RW segment and the ZI segment: the RW segment stores global variables with non-zero initial values (global constants are placed in the RO segment, which is read-only), and uninitialized global variables are stored in the ZI segment, as in the following example:
+
+```c
+#include <rtthread.h>
+
+const static rt_uint32_t sensor_enable = 0x000000FE;
+rt_uint32_t sensor_value;
+rt_bool_t sensor_inited = RT_FALSE;
+
+void sensor_init()
+{
+    /* ... */
+}
+```
+
+sensor_value is stored in the ZI segment and is automatically initialized to zero after system startup (by startup code provided by the user program or compiler library). The sensor_inited variable is stored in the RW segment, and sensor_enable is stored in the RO segment.
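+
+To round out the rt_malloc() example above, here is a minimal sketch of a complete request/release cycle on the dynamic memory heap; the 128-byte size is only illustrative, and a real application should check the return value before using the block:
+
+```c
+#include <rtthread.h>
+
+void heap_usage_sample(void)
+{
+    /* request a 128-byte block from the dynamic memory heap */
+    rt_uint8_t *msg_ptr = (rt_uint8_t *)rt_malloc(128);
+    if (msg_ptr == RT_NULL)
+    {
+        rt_kprintf("rt_malloc failed, the heap may be exhausted\n");
+        return;
+    }
+
+    rt_memset(msg_ptr, 0, 128);   /* use the block ... */
+
+    /* return the block to the heap once it is no longer needed */
+    rt_free(msg_ptr);
+}
+```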
+
+RT-Thread Automatic Initialization Mechanism
+-----------------------
+
+The automatic initialization mechanism means that an initialization function does not need to be called explicitly: it only needs to be declared by a macro at the point where the function is defined, and it will be executed during system startup.
+
+For example, the serial port driver calls a macro to declare the function that should be called during system initialization. The code is as follows:
+
+```c
+int rt_hw_usart_init(void)  /* Serial port initialization function */
+{
+     ... ...
+     /* Register serial port 1 device */
+     rt_hw_serial_register(&serial1, "uart1",
+                           RT_DEVICE_FLAG_RDWR | RT_DEVICE_FLAG_INT_RX,
+                           uart);
+     return 0;
+}
+INIT_BOARD_EXPORT(rt_hw_usart_init);    /* Use component auto-initialization mechanism */
+```
+
+The last line of the sample code, INIT_BOARD_EXPORT(rt_hw_usart_init), indicates that the automatic initialization mechanism is used. In this way, the rt_hw_usart_init() function is called automatically by the system. So where is it called?
+
+In the system startup flowchart, there are two functions, rt_components_board_init() and rt_components_init(); the functions inside the shaded boxes that follow them are the functions that are automatically initialized, where:
+
+1. “board init functions” are all initialization functions declared by INIT_BOARD_EXPORT(fn).
+2. “pre-initialization functions” are all initialization functions declared by INIT_PREV_EXPORT(fn).
+3. “device init functions” are all initialization functions declared by INIT_DEVICE_EXPORT(fn).
+4. “components init functions” are all initialization functions declared by INIT_COMPONENT_EXPORT(fn).
+5. “environment init functions” are all initialization functions declared by INIT_ENV_EXPORT(fn).
+6. “application init functions” are all initialization functions declared by INIT_APP_EXPORT(fn).
+
+The rt_components_board_init() function executes earlier and mainly initializes the relevant hardware environment. When this function is executed, it traverses the initialization function table declared by INIT_BOARD_EXPORT(fn) and calls each function.
+
+The rt_components_init() function is called and executed in the main thread created after the operating system is running. At this time, the hardware environment and the operating system have been initialized and application-related code can be executed. The rt_components_init() function traverses the remaining initialization function tables declared by the other macros.
+
+RT-Thread's automatic initialization mechanism uses a custom RTI symbol segment: the pointers of the functions that need to run at startup are placed into this segment, forming an initialization function table that is traversed during system startup; the functions in the table are called to achieve automatic initialization.
+
+The macro interfaces used to implement the automatic initialization are described in the following table:
+
+|Initialization sequence|Macro Interface |Description |
+|----------------|------------------------------------|----------------------------------------------|
+| 1 | INIT_BOARD_EXPORT(fn) | Very early initialization; the scheduler has not started yet. |
+| 2 | INIT_PREV_EXPORT(fn) | Mainly used for pure software initialization, i.e. functions without too many dependencies |
+| 3 | INIT_DEVICE_EXPORT(fn) | Peripheral driver initialization, such as network card devices |
+| 4 | INIT_COMPONENT_EXPORT(fn) | Component initialization, such as the file system or lwIP |
+| 5 | INIT_ENV_EXPORT(fn) | System environment initialization, such as mounting file systems |
+| 6 | INIT_APP_EXPORT(fn) | Application initialization, such as a GUI application |
+
+Initialization functions are declared actively through these macro interfaces, for example INIT_BOARD_EXPORT(rt_hw_usart_init). The linker automatically collects all of the declared initialization functions and places them in the RTI symbol segment, which is located in the RO segment of the memory distribution. All functions in this RTI symbol segment are called automatically when the system is initialized.
+
+RT-Thread Kernel Object Model
+---------------------
+
+### Static and Dynamic Objects
+
+The RT-Thread kernel is designed with an object-oriented method. The system-level infrastructures are all kernel objects, such as threads, semaphores, mutexes, and timers. Kernel objects fall into two categories: static kernel objects and dynamic kernel objects. Static kernel objects are usually placed in the RW and ZI segments and are initialized in the program after system startup; dynamic kernel objects are created from the memory heap and then manually initialized.
+
+The following code is an example of a static thread and a dynamic thread:
+
+```c
+/* Thread 1 object and stack used while running */
+static struct rt_thread thread1;
+static rt_uint8_t thread1_stack[512];
+
+/* Thread 1 entry */
+void thread1_entry(void* parameter)
+{
+    int i;
+
+    while (1)
+    {
+        for (i = 0; i < 10; i ++)
+        {
+            rt_kprintf("%d\n", i);
+
+            /* Delay 100ms */
+            rt_thread_mdelay(100);
+        }
+    }
+}
+
+/* Thread 2 entry */
+void thread2_entry(void* parameter)
+{
+    int count = 0;
+    while (1)
+    {
+        rt_kprintf("Thread2 count:%d\n", ++count);
+
+        /* Delay 50ms */
+        rt_thread_mdelay(50);
+    }
+}
+
+/* Thread routine initialization */
+int thread_sample_init()
+{
+    rt_thread_t thread2_ptr;
+    rt_err_t result;
+
+    /* Initialize thread 1 */
+    /* The thread entry is thread1_entry and the parameter is RT_NULL
+     * Thread stack is thread1_stack
+     * Priority is 200 and time slice is 10 OS Ticks
+     */
+    result = rt_thread_init(&thread1,
+                            "thread1",
+                            thread1_entry, RT_NULL,
+                            &thread1_stack[0], sizeof(thread1_stack),
+                            200, 10);
+
+    /* Start thread */
+    if (result == RT_EOK) rt_thread_startup(&thread1);
+
+    /* Create thread 2 */
+    /* The thread entry is thread2_entry and the parameter is RT_NULL
+     * Stack size is 512, priority is 250, and time slice is 25 OS Ticks
+     */
+    thread2_ptr = rt_thread_create("thread2",
+                                   thread2_entry, RT_NULL,
+                                   512, 250, 25);
+
+    /* Start thread */
+    if (thread2_ptr != RT_NULL) rt_thread_startup(thread2_ptr);
+
+    return 0;
+}
+```
+
+In this example, thread1 is a static thread object and thread2 is a dynamic thread object. The memory space of the thread1 object, including the thread control block thread1 and the stack space thread1_stack, is determined at compile time; because there is no initial value in the code, both are placed in the uninitialized data segment. The space used by thread2 is dynamically allocated and includes the thread control block (the content pointed to by thread2_ptr) and the stack space.
+
+Static objects occupy RAM space and do not depend on the memory heap manager;
the time needed to allocate a static object is deterministic. Dynamic objects depend on the memory heap manager: they request RAM space at run time, and when the object is deleted, the occupied RAM space is released. The two methods have their own advantages and disadvantages and can be selected according to actual needs.
+
+### Kernel Object Management Structure
+
+RT-Thread uses the kernel object management system to access and manage all kernel objects. Kernel objects contain most of the facilities in the kernel. These kernel objects can be statically allocated static objects or dynamic objects allocated from the system memory heap.
+
+Because of this kernel object design, RT-Thread does not depend on any specific memory allocation method, which greatly improves the flexibility of the system.
+
+RT-Thread kernel objects include threads, semaphores, mutexes, events, mailboxes, message queues, timers, memory pools, device drivers, and more. The object container holds information about each type of kernel object, including the object type, size, and so on. The object container assigns a linked list to each type of kernel object, and all kernel objects of that type are linked to the list. The kernel object container and linked lists of RT-Thread are shown in the following figure:
+
+![RT-Thread Kernel Object Container and Linked List](figures/03kernel_object.png)
+
+The following figure shows the derivation and inheritance relationships of the various kernel objects in RT-Thread. Each specific kernel object and its object control block have, in addition to the basic structure, their own extended attributes (private attributes). Take the thread control block as an example: the base object is extended with attributes such as thread state and priority. These attributes are not used in operations on the base-class object and are only used in operations related to a specific thread. Therefore, from the object-oriented point of view, each concrete object can be considered a derivative of the abstract object, inheriting the attributes of the base object and extending the attributes related to itself.
+
+![RT-Thread Kernel Object Inheritance Relationship](figures/03kernel_object2.png)
+
+In the object management module, a common data structure is defined to store the common attributes of the various objects. Each specific object only needs to add a few special attributes of its own, and its own features will be clearly expressed.
+
+The advantages of this design approach are:
+
+(1) It improves the reusability and scalability of the system. Adding a new object category is easy: it only needs to inherit the attributes of the general object and add a small amount of extension.
+
+(2) It provides a unified object operation mode, which simplifies the operation of the various specific objects and improves the reliability of the system.
+
+Derivations from the object control block rt_object in the above figure include: thread objects, memory pool objects, timer objects, device objects and IPC objects (IPC: Inter-Process Communication; in the RT-Thread real-time operating system, IPC objects are used for synchronization and communication between threads); derivations from IPC objects include: semaphores, mutexes, events, mailboxes, message queues, signals, etc.
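+
+To make the derivation relationship concrete, here is a simplified sketch, for illustration only, of how an IPC object embeds the base object (struct rt_object, shown in full in the next section) and how a semaphore in turn embeds the IPC object; the actual definitions in rtdef.h contain additional members:
+
+```c
+/* IPC object: derived from the base kernel object, adds a suspended-thread list */
+struct rt_ipc_object
+{
+    struct rt_object parent;          /* inherit the base kernel object          */
+    rt_list_t        suspend_thread;  /* list of threads blocked on this object  */
+};
+
+/* Semaphore: derived from the IPC object, adds its own private attribute */
+struct rt_semaphore
+{
+    struct rt_ipc_object parent;      /* inherit the IPC object                  */
+    rt_uint16_t          value;       /* current semaphore count                 */
+};
+```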
+ +### Object Control Block + +Data structure of kernel object control block: + +```c +struct rt_object +{ + /* Kernel object name */ + char name[RT_NAME_MAX]; + /* Kernel object type */ + rt_uint8_t type; + /* Parameters to the kernel object */ + rt_uint8_t flag; + /* Kernel object management linked list */ + rt_list_t list; +}; +``` + +Types currently supported by kernel objects are as follows: + +```c +enum rt_object_class_type +{ + RT_Object_Class_Thread = 0, /* Object is thread type */ +#ifdef RT_USING_SEMAPHORE + RT_Object_Class_Semaphore, /* Object is semaphore type */ +#endif +#ifdef RT_USING_MUTEX + RT_Object_Class_Mutex, /* Object is mutex type */ +#endif +#ifdef RT_USING_EVENT + RT_Object_Class_Event, /* Object is event type */ +#endif +#ifdef RT_USING_MAILBOX + RT_Object_Class_MailBox, /* Object is mailbox type */ +#endif +#ifdef RT_USING_MESSAGEQUEUE + RT_Object_Class_MessageQueue, /* Object is message queue type */ +#endif +#ifdef RT_USING_MEMPOOL + RT_Object_Class_MemPool, /* Object is memory pool type */ +#endif +#ifdef RT_USING_DEVICE + RT_Object_Class_Device, /* Object is device type */ +#endif + RT_Object_Class_Timer, /* Object is timer type */ +#ifdef RT_USING_MODULE + RT_Object_Class_Module, /* Object is module */ +#endif + RT_Object_Class_Unknown, /* Object is unknown */ + RT_Object_Class_Static = 0x80 /* Object is a static object */ +}; +``` + +From the above type specification, we can see that if it is a static object, the highest bit of the object type will be 1 (which is the OR operation of RT_Object_Class_Static and other object types and operations). Otherwise it will be dynamic object, and the maximum number of object classes that the system can accommodate is 127. + +### Kernel Object Management + +Data structure of kernel object container: + +```c +struct rt_object_information +{ + /* Object type */ + enum rt_object_class_type type; + /* Object linked list */ + rt_list_t object_list; + /* Object size */ + rt_size_t object_size; +}; +``` + +A class of objects is managed by an rt_object_information structure, and each practical instance of such type of object is mounted to the object_list in the form of a linked list. The memory block size of this type of object is identified by object_size (the memory block each practical instance of each type of object is the same size). + +#### Initialization Object + +An uninitialized static object must be initialized before it can be used. The initialization object uses the following interfaces: + +```c +void rt_object_init(struct rt_object* object , + enum rt_object_class_type type , + const char* name) +``` + +When this function is called to initialize the object, the system will place the object into the object container for management, that is, initialize some parameters of the object, and then insert the object node into the object linked list of the object container. Input parameters of the function is described in the following table: + + +|Parameters|Description | +| -------- | ------------------------------------------------------------ | +| object | The object pointer that needs to be initialized must point to a specific object memory block, not a null pointer or a wild pointer. | +| type | The type of the object must be a enumeration type listed in rt_object_class_type, RT_Object_Class_Static excluded. (For static objects, or objects initialized with the rt_object_init interface, the system identifies it as an RT_Object_Class_Static type) | +| name | Name of the object. 
Each object can be given a name, and the maximum length of the name is specified by RT_NAME_MAX; the system does not require the name to be terminated with '`\0`'. |
+
+#### Detach Object
+
+Detach an object from the kernel object manager. The following interface is used to detach objects:
+
+```c
+void rt_object_detach(rt_object_t object);
+```
+
+Calling this interface detaches a static kernel object from the kernel object container, that is, the corresponding object node is deleted from the kernel object container's linked list. After the object is detached, the memory occupied by the object is not released.
+
+#### Allocate Object
+
+The interfaces described above are for object initialization and detachment, both of which operate on object memory blocks that already exist. A dynamic object, in contrast, can be requested when needed, and its memory space is freed for other applications when it is no longer needed. To allocate a new object, you can use the following interface:
+
+```c
+rt_object_t rt_object_allocate(enum rt_object_class_type type,
+                               const char* name)
+```
+
+When this interface is called, the system first obtains the object information according to the object type (in particular the size of that object type, so that a memory block of the correct size can be allocated), then allocates memory of the corresponding size from the memory heap, performs the necessary initialization of the object, and finally inserts it into the object container linked list to which it belongs. The input parameters of this function are described in the following table:
+
+
+|Parameters |Description |
+| ------------------ | ------------------------------------------------------------ |
+| type | The type of the allocated object can only be a rt_object_class_type value other than RT_Object_Class_Static. In addition, objects allocated through this interface are dynamic, not static. |
+| name | Name of the object. Each object can be given a name, and the maximum length of the name is specified by RT_NAME_MAX; the system does not require the name to be terminated with '`\0`'. |
+|**Return** | —— |
+| Object handle | Allocated successfully |
+| RT_NULL | Failed to allocate |
+
+#### Delete Object
+
+For a dynamic object that is no longer used, you can call the following interface to delete the object and release the corresponding system resources:
+
+```c
+void rt_object_delete(rt_object_t object);
+```
+
+When this interface is called, the object is first detached from the object container linked list, and then the memory occupied by the object is released. The following table describes the input parameter of the function:
+
+
+|Parameter|Description |
+|----------|------------|
+| object | Object handle |
+
+#### Identify Objects
+
+Identify whether the specified object is a system object (a static kernel object). The following interface is used to identify objects:
+
+```c
+rt_err_t rt_object_is_systemobject(rt_object_t object);
+```
+
+Calling the rt_object_is_systemobject interface identifies whether an object is a system object. In the RT-Thread operating system, a system object is also a static object: the RT_Object_Class_Static bit is set to 1 in the object type identifier. Usually, objects that are initialized with the rt_object_init() method are system objects.
The input parameters for this function are described in the following table: + +Input parameter of rt_object_is_systemobject() + +|**Parameter**|Description | +|----------|------------| +| object | Object handle | + +RT-Thread Kernel Configuration Example +---------------------- + +An important feature of RT-Thread is its high degree of tailorability, which allows for fine-tuning of the kernel and flexible removal of components. + +Configuration is mainly done by modifying the file under project directory - rtconfig.h. User can conditionally compile the code by opening/closing the macro definition in the file, and finally achieve the purpos e of system configuration and cropping, as follows: + +(1)RT-Thread Kernel part + +```c +/* Indicates the maximum length of the name of the kernel object. If the maximum length of the name of the object in the code is greater than the length of the macro definition, + * the extra part will be cut off. */ +#define RT_NAME_MAX 8 + +/* Set the number of aligned bytes when bytes are aligned. Usually use ALIGN(RT_ALIGN_SIZE) for byte alignment.*/ +#define RT_ALIGN_SIZE 4 + +/* Define the number of system thread priorities; usually define the priority of idle threads with RT_THREAD_PRIORITY_MAX-1 */ +#define RT_THREAD_PRIORITY_MAX 32 + +/* Define the clock beat. When it is 100, it means 100 tick per second, and a tick is 10ms. */ +#define RT_TICK_PER_SECOND 100 + +/* Check if the stack overflows, if not defined, close. */ +#define RT_USING_OVERFLOW_CHECK + +/* Define this macro to enable debug mode, if not defined, close. */ +#define RT_DEBUG +/* When debug mode is enabled: When the macro is defined as 0, the print component initialization information is turned off. When it is defined as 1, it is enabled. */ +#define RT_DEBUG_INIT 0 +/* When debug mode is enabled: When the macro is defined as 0, the print thread switching information is turned off. When it is defined as 1, it is enabled. */ +#define RT_DEBUG_THREAD 0 + +/* Defining this macro means the use of the hook function is started, if not defined, close. */ +#define RT_USING_HOOK + +/* Defines the stack size of idle threads. */ +#define IDLE_THREAD_STACK_SIZE 256 +``` + +(2)Inter-thread synchronization and communication part, the objects that will be used in this part are semaphores, mutexes, events, mailboxes, message queues, signals, and so on. + +```c +/* Define this macro to enable the use of semaphores, if not defined, close. */ +#define RT_USING_SEMAPHORE + +/* Define this macro to enable the use of mutexes, if not defined, close. */ +#define RT_USING_MUTEX + +/* Define this macro to enable the use of events, if not defined, close. */ +#define RT_USING_EVENT + +/* Define this macro to enable the use of mailboxes, if not defined, close. */ +#define RT_USING_MAILBOX + +/* Define this macro to enable the use of message queues, if not defined, close. */ +#define RT_USING_MESSAGEQUEUE + +/* Define this macro to enable the use of signals, if not defined, close. */ +#define RT_USING_SIGNALS +``` + +(3)Memory Management Part + +```c +/* Start the use of static memory pool */ +#define RT_USING_MEMPOOL + +/* Define this macro to start the concatenation of two or more memory heap , if not defined, close. 
*/ +#define RT_USING_MEMHEAP + +/* Start algorithm for small memory management */ +#define RT_USING_SMALL_MEM + +/* Turn off SLAB memory management algorithm */ +/* #define RT_USING_SLAB */ + +/* Start the use of heap */ +#define RT_USING_HEAP +``` + +(4)Kernel Device Object + +```c +/* Indicates the start of useing system devices */ +#define RT_USING_DEVICE + +/* Define this macro to start the use of system console devices, if not defined, close. */ +#define RT_USING_CONSOLE +/* Define the buffer size of the console device. */ +#define RT_CONSOLEBUF_SIZE 128 +/* Name of the console device. */ +#define RT_CONSOLE_DEVICE_NAME "uart1" +``` + +(5)Automatic Initialization Method + +```c +/* Define this macro to enable automatic initialization mechanism, if not defined, close. */ +#define RT_USING_COMPONENTS_INIT + +/* Define this macro to set application entry as main function */ +#define RT_USING_USER_MAIN +/* Define the stack size of the main thread */ +#define RT_MAIN_THREAD_STACK_SIZE 2048 +``` + +(6)FinSH + +```c +/* Define this macro to start the use of the system FinSH debugging tool, if not defined, close. */ +#define RT_USING_FINSH + +/* While starting the system FinSH: the thread name is defined as tshell */ +#define FINSH_THREAD_NAME "tshell" + +/* While turning the system FinSH: use history commands. */ +#define FINSH_USING_HISTORY +/* While turning the system FinSH: define the number of historical command lines. */ +#define FINSH_HISTORY_LINES 5 + +/* While turning the system FinSH: define this macro to open the Tab key, if not defined, close. */ +#define FINSH_USING_SYMTAB + +/* While turning the system FinSH: define the priority of the thread. */ +#define FINSH_THREAD_PRIORITY 20 +/* While turning the system FinSH:define the stack size of the thread. */ +#define FINSH_THREAD_STACK_SIZE 4096 +/* While turning the system FinSH:define the length of command character. */ +#define FINSH_CMD_SIZE 80 + +/* While turning the system FinSH: define this macro to enable the MSH function. */ +#define FINSH_USING_MSH +/* While turning the system FinSH:when MSH function is enabled, macro is defined to use the MSH function by default. */ +#define FINSH_USING_MSH_DEFAULT +/* While turning the system FinSH:define this macro to use only the MSH function. */ +#define FINSH_USING_MSH_ONLY +``` + +(7)About MCU + +```c +/* Define the MCU used in this project is STM32F103ZE; the system defines the chip pins by defining the chip type. */ +#define STM32F103ZE + +/* Define the clock source frequency. */ +#define RT_HSE_VALUE 8000000 + +/* Define this macro to enable the use of UART1. */ +#define RT_USING_UART1 +``` + +>In practice, the system configuration file rtconfig.h is automatically generated by configuration tools and does not need to be changed manually. + +Common Macro Definition Description +-------------- + +Macro definitions are often used in RT-Thread. For example, some common macro definitions in the Keil compilation environment: + +1)rt_inline, definition is as follows, static keyword is to make the function only available for use in the current file; inline means inline, after modification using static, the compiler is recommended to perform inline expansion when calling the function. + +```c +#define rt_inline static __inline +``` + +2)RT_USED,definition is as follows, the purpose of this macro is to explain to the compiler that this code is useful, compilation needs to be saved even if it is not called in the function. 
For example, RT-Thread auto-initialization uses custom segments, using RT_USED will retain custom code snippets. + +```c +#define RT_USED __attribute__((used)) +``` + +3)RT_UNUSED,definition is as follows, indicates that a function or variable may not be used. This attribute prevents the compiler from generating warnings. + +```c +#define RT_UNUSED __attribute__((unused)) +``` + +4)RT_WEAK,definition is as follows, often used to define functions, when linking the function, the compiler will link the function without the keyword prefix first and link the function modified by weak if it can't find those functions. + +```c +#define RT_WEAK __weak +``` + +5)ALIGN(n),definition is as follows, is used to align its stored address with n bytes when allocating an address space to an object. Here, n can be the power of 2. Byte alignment not only facilitates quick CPU access, but also save memory space if byte alignment is properly used. + +```c +#define ALIGN(n) __attribute__((aligned(n))) +``` + +6)RT_ALIGN(size,align),definition is as follows, to increase size to a multiple of an integer defined by align. For example, RT_ALIGN(13,4) will return to 16. + +```c +#define RT_ALIGN(size, align) (((size) + (align) - 1) & ~((align) - 1)) +``` + diff --git a/documentation/basic/figures/03Memory_distribution.png b/documentation/basic/figures/03Memory_distribution.png new file mode 100644 index 0000000000..1dcd401648 Binary files /dev/null and b/documentation/basic/figures/03Memory_distribution.png differ diff --git a/documentation/basic/figures/03Startup_process.png b/documentation/basic/figures/03Startup_process.png new file mode 100644 index 0000000000..4128df8d94 Binary files /dev/null and b/documentation/basic/figures/03Startup_process.png differ diff --git a/documentation/basic/figures/03kernel_Framework.png b/documentation/basic/figures/03kernel_Framework.png new file mode 100644 index 0000000000..eb2c3059fe Binary files /dev/null and b/documentation/basic/figures/03kernel_Framework.png differ diff --git a/documentation/basic/figures/03kernel_object.png b/documentation/basic/figures/03kernel_object.png new file mode 100644 index 0000000000..85508581d3 Binary files /dev/null and b/documentation/basic/figures/03kernel_object.png differ diff --git a/documentation/basic/figures/03kernel_object2.png b/documentation/basic/figures/03kernel_object2.png new file mode 100644 index 0000000000..b1601b4868 Binary files /dev/null and b/documentation/basic/figures/03kernel_object2.png differ diff --git a/documentation/coding_style_cn.md b/documentation/contribution_guide/coding_style_cn.md similarity index 100% rename from documentation/coding_style_cn.md rename to documentation/contribution_guide/coding_style_cn.md diff --git a/documentation/coding_style_en.txt b/documentation/contribution_guide/coding_style_en.md similarity index 100% rename from documentation/coding_style_en.txt rename to documentation/contribution_guide/coding_style_en.md diff --git a/documentation/contribution_guide/contribution_guide.md b/documentation/contribution_guide/contribution_guide.md new file mode 100644 index 0000000000..a812cd1102 --- /dev/null +++ b/documentation/contribution_guide/contribution_guide.md @@ -0,0 +1,171 @@ +# Contribution Guide + +We sincerely thank you for your contribution, and welcome to submit the code through GitHub's fork and Pull Request processes. + +First, explain the word Pull Request. Pull request means to send a request. 
A developer initiates a pull request to ask the repository maintainer to adopt the code the developer has submitted.
+
+When you want to correct mistakes in someone else's repository, follow this procedure:
+
+- Fork the other person's repository, which is equivalent to making your own copy of it. Because you cannot guarantee that your modification is correct and beneficial to the project, you cannot modify the other repository directly; instead, first fork it into your own git repository.
+- Clone the code to your local PC, create a new branch, fix bugs or add new features, and then open a pull request to the original repository so that the original repository maintainer can see the changes you submitted.
+- The original repository maintainer reviews the submission and, if it is correct, merges it into their own project. Merging incorporates the code you modified into the original repository, adding new code or replacing the original code. At this point, the whole pull request process is over.
+
+## Coding Style
+
+Refer to the `coding_style_en.md` file in the documentation/contribution_guide directory of the rt-thread project for the RT-Thread coding style.
+
+## Preparation
+
+Install Git, and add Git's directory to the system environment variables.
+
+## Contribution Process
+
+Now take the RT-Thread repository as an example to illustrate the process of contributing code:
+
+### Fork
+
+Fork the RT-Thread/rt-thread repository into your own git repository.
+
+![fork rt-thread repository](figures/cloneformgit.png)
+
+### Clone
+
+In your repository, copy the repository link after your fork:
+
+![clone rt-thread from your repo](figures/cloneformgit2.png)
+
+You can use the `git clone` command to copy the repository to your PC:
+
+```
+git clone [url]
+```
+
+![git clone](figures/git_clone.png)
+
+### Create a New Branch
+
+It is recommended that you create your own development branch based on the master branch, and use the following command to create a new branch:
+
+```
+git checkout -b YourBranchName
+```
+
+For example, create a branch named "dev": `git checkout -b dev`.
+
+### Developing
+
+Fix bugs and commit new feature code. For example, suppose the developer adds a USB driver:
+
+![Add a USB driver](figures/add_usb_driver.png)
+
+### Temporarily Store Modified Files
+
+Add all changes to the staging area:
+
+```
+git add .
+```
+
+If you only want to add certain specified files to the staging area, use the other forms of `git add`.
+
+### Commit
+
+Commit this modification to the local repository:
+
+```
+git commit -m "Describe your submission here"
+```
+
+> Note: If there are multiple commits on the local development branch, please tidy up the local commits to keep the RT-Thread repository history clean. Pull requests with more than five commits will not be accepted.
+
+### Push to Your Remote Repository
+
+Push the modified content to the branch of your remote repository. It is recommended that the remote branch name be consistent with the local branch name. Use the following command to push:
+
+```
+git push origin YourBranchName
+```
+
+### Create a Pull Request
+
+Enter the RT-Thread repository under your GitHub account and click `New pull request -> Create pull request`. Make sure you choose the right branch.
+
+![Create a Pull Request](figures/pull_request_step2.png)
+
+Step 1: Fill in the title of this pull request.
+
+Step 2: Modify the description of this pull request (edit it under `Write` and preview it with `Preview`):
+
+- Modify the PR description: replace the content in the red box below with a description of this pull request, following the requirements given in the red box.
+
+- Check the PR options: fill in [x] in the confirmation check boxes. Note that there must be no spaces on either side of the x.
+
+![Modify PR Description and Check PR Options](figures/pr_description.png)
+
+Step 3: Create the pull request.
+
+### Sign CLA
+
+Your first contribution to RT-Thread requires signing the *Contributor License Agreement*.
+
+![Sign CLA](figures/cla.png)
+
+Make sure that the CLA check shows a successful signature and that the CI build passes, as shown in the following figure:
+
+![CLA successful](figures/checkok.png)
+
+Note: Do not submit commits using a non-GitHub account, and do not commit with a different account, as this can cause the CLA signing to fail.
+
+### Review Pull Request
+
+Once the pull request is created successfully, the RT-Thread maintainers can see the code you submitted. The code will be reviewed and comments will be left on GitHub. Please check the PR status in time and update the code according to the comments.
+
+### Merge Pull Request
+
+If the pull request code is okay, the code will be merged into the RT-Thread repository and the pull request is complete.
+
+So far, we have completed a full code contribution process.
+
+## Keep in Sync with RT-Thread Repository
+
+The content of the RT-Thread GitHub repository is constantly updated. To develop based on the latest RT-Thread code, you need to update your local repository.
+
+After cloning, the local master branch is consistent with the master branch of the RT-Thread repository; but once the RT-Thread repository is updated, your local code differs from the RT-Thread code.
+
+The local master branch is synchronized with the forked RT-Thread repository under your own GitHub account. If you have not modified the master branch (please create a new branch for development), you can keep the local code synchronized with the RT-Thread repository by following these steps:
+
+- View the existing remote repositories; usually there is only the default origin, which is your own remote repository:
+
+```
+$ git remote -v
+origin https://github.com/YOUR_USERNAME/YOUR_FORK.git (fetch)
+origin https://github.com/YOUR_USERNAME/YOUR_FORK.git (push)
+```
+
+- Add the RT-Thread remote repository and name it `rtt` (you can choose another name):
+
+```
+$ git remote add rtt https://github.com/RT-Thread/rt-thread.git
+```
+
+- View all remote repositories tracked locally:
+
+```
+$ git remote -v
+origin https://github.com/YOUR_USERNAME/YOUR_FORK.git (fetch)
+origin https://github.com/YOUR_USERNAME/YOUR_FORK.git (push)
+rtt https://github.com/RT-Thread/rt-thread.git (fetch)
+rtt https://github.com/RT-Thread/rt-thread.git (push)
+```
+
+- Pull the code from the master branch of the RT-Thread remote repository and merge it into the local master branch:
+
+```
+git pull rtt master
+```
+
+## Reference
+
+* Refer to the [*GitHub - Contributing to a Project*](https://git-scm.com/book/en/v2/GitHub-Contributing-to-a-Project) section of the official Git documentation for details.
+ + diff --git a/documentation/contribution_guide/figures/add_usb_driver.png b/documentation/contribution_guide/figures/add_usb_driver.png new file mode 100644 index 0000000000..a50133d1c1 Binary files /dev/null and b/documentation/contribution_guide/figures/add_usb_driver.png differ diff --git a/documentation/contribution_guide/figures/branch.png b/documentation/contribution_guide/figures/branch.png new file mode 100644 index 0000000000..40fdd37557 Binary files /dev/null and b/documentation/contribution_guide/figures/branch.png differ diff --git a/documentation/contribution_guide/figures/checkok.png b/documentation/contribution_guide/figures/checkok.png new file mode 100644 index 0000000000..1b1b5e68ce Binary files /dev/null and b/documentation/contribution_guide/figures/checkok.png differ diff --git a/documentation/contribution_guide/figures/cla.png b/documentation/contribution_guide/figures/cla.png new file mode 100644 index 0000000000..2c4db17f91 Binary files /dev/null and b/documentation/contribution_guide/figures/cla.png differ diff --git a/documentation/contribution_guide/figures/cloneformgit.png b/documentation/contribution_guide/figures/cloneformgit.png new file mode 100644 index 0000000000..349ef94eec Binary files /dev/null and b/documentation/contribution_guide/figures/cloneformgit.png differ diff --git a/documentation/contribution_guide/figures/cloneformgit2.png b/documentation/contribution_guide/figures/cloneformgit2.png new file mode 100644 index 0000000000..52505dab66 Binary files /dev/null and b/documentation/contribution_guide/figures/cloneformgit2.png differ diff --git a/documentation/contribution_guide/figures/git_clone.png b/documentation/contribution_guide/figures/git_clone.png new file mode 100644 index 0000000000..b19af3fa7c Binary files /dev/null and b/documentation/contribution_guide/figures/git_clone.png differ diff --git a/documentation/contribution_guide/figures/pr_description.png b/documentation/contribution_guide/figures/pr_description.png new file mode 100644 index 0000000000..2e004fce42 Binary files /dev/null and b/documentation/contribution_guide/figures/pr_description.png differ diff --git a/documentation/contribution_guide/figures/pull_request_step2.png b/documentation/contribution_guide/figures/pull_request_step2.png new file mode 100644 index 0000000000..cdaab28b74 Binary files /dev/null and b/documentation/contribution_guide/figures/pull_request_step2.png differ diff --git a/documentation/device/adc/adc.md b/documentation/device/adc/adc.md new file mode 100644 index 0000000000..6da8e4cae1 --- /dev/null +++ b/documentation/device/adc/adc.md @@ -0,0 +1,264 @@ +# ADC Device + +## An Introduction to ADC + +ADC refers to analog to digital converter which is a device that converts continuously changing analog signals into discrete digital signals. Analog signals , such as temperature, pressure, sound, or images, need to be converted into digital forms that are easier to be stored, processed, and transmitted. The analog-to-digital converter can do this, and it can be found in a variety of different products. The corresponding DAC (Digital-to-Analog Converter) has a reverse conversion process compared to that of the ADC. The ADC was first used to convert wireless signals to digital signals such as television signals, or signals from long-short broadcast stations. + +### Conversion Process + +As shown in the figure below, the analog-to-digital conversion generally involves steps of sampling, holding, quantifying, and encoding. 
In actual circuits, some of these steps are combined: for example, sampling and holding are combined, and quantization and encoding are implemented simultaneously during conversion.
+
+![ADC Conversion Process](figures/adc-p.png)
+
+Sampling converts an analog signal that changes continuously over time into a time-discrete analog signal. It takes a certain amount of time for a sampled analog value to be converted into a digital one, so in order to provide a stable value for the subsequent quantization and encoding, the sampled analog signal must be held for a period of time after the sampling circuit.
+
+The process of converting a numerically continuous analog quantity into a digital quantity is called quantization. Digital signals are numerically discrete, so the output voltage of the sample-and-hold circuit must likewise be mapped to a corresponding discrete level, and any digital quantity can only be an integer multiple of a certain minimum unit. The quantized value then goes through the encoding process, which produces the digital output of the A/D converter.
+
+### Resolution
+
+The resolution is expressed in binary (or decimal) digits; typical values are 8 bits, 10 bits, 12 bits, 16 bits, etc. It describes how finely the analog-to-digital converter can resolve the input signal: the more bits, the higher the resolution, and the more accurately the analog signal is represented.
+
+### Precision
+
+Precision is the maximum error between the value output by the ADC and the real analog value over all measurement points, that is, how far the output deviates from the ideal linear response.
+
+>Precision and resolution are two different concepts, so please pay attention to the distinction.
+
+### Conversion Rate
+
+The conversion rate is the reciprocal of the time it takes the A/D converter to complete one conversion from analog to digital. For example, a conversion rate of 1MHz means that one AD conversion takes 1 microsecond.
+
+## Access ADC Device
+
+The application accesses the ADC hardware through the ADC device management interface provided by RT-Thread. The relevant interfaces are as follows:
+
+| **Function** | Description |
+| --------------- | ------------------ |
+| rt_device_find() | Find the device handle based on the ADC device name |
+| rt_adc_enable() | Enable the ADC device |
+| rt_adc_read() | Read ADC device data |
+| rt_adc_disable() | Close the ADC device |
+
+### Find ADC Devices
+
+The application gets the device handle based on the ADC device name and then uses the handle to operate the ADC device.
Functions for looking for devices are as follows: + +```c +rt_device_t rt_device_find(const char* name); +``` + +| **Parameter** | Description | +| -------- | ---------------------------------- | +| name | The name of the ADC device | +| **Return** | —— | +| Device handle | Finding the corresponding device will return to the corresponding device handle | +| RT_NULL | No device found | + +In general, the names of the ADC device registered to the system are adc0, adc1, etc., and an usage example is as follows: + +```c +#define ADC_DEV_NAME "adc1" /* ADC device name */ +rt_adc_device_t adc_dev; /* ADC device handle */ +/* find the device */ +adc_dev = (rt_adc_device_t)rt_device_find(ADC_DEV_NAME); +``` + +### Enable ADC Channel + +Before reading the ADC device data, Use the following function to enable the device: + +```c +rt_err_t rt_adc_enable(rt_adc_device_t dev, rt_uint32_t channel); +``` + +| Parameter | Description | +| ---------- | ------------------------------- | +| dev | ADC device handle | +| channel | ADC channel | +| **Return** | —— | +| RT_EOK | Succeed | +| -RT_ENOSYS | Failed, the device operation method is empty | +| Other error code | Failed | + + An usage example is as follows: + +```c +#define ADC_DEV_NAME "adc1" /* ADC device name */ +#define ADC_DEV_CHANNEL 5 /* ADC channel */ +rt_adc_device_t adc_dev; /* ADC device handle */ +/* find the device */ +adc_dev = (rt_adc_device_t)rt_device_find(ADC_DEV_NAME); +/* enable the device */ +rt_adc_enable(adc_dev, ADC_DEV_CHANNEL); +``` + +### Read ADC Channel Sample Values + +Reading the ADC channel sample values can be done by the following function: + +```c +rt_uint32_t rt_adc_read(rt_adc_device_t dev, rt_uint32_t channel); +``` + +| Parameter | Description | +| ---------- | ----------------- | +| dev | ADC device handle | +| channel | ADC channel | +| **Return** | —— | +| Read values | | + +An example of using the ADC sampled voltage value is as follows: + +```c +#define ADC_DEV_NAME "adc1" /* ADC device name */ +#define ADC_DEV_CHANNEL 5 /* ADC channel */ +#define REFER_VOLTAGE 330 /* Reference voltage 3.3V, data accuracy multiplied by 100 and reserve 2 decimal places*/ +#define CONVERT_BITS (1 << 12) /* The number of conversion bits is 12 */ + +rt_adc_device_t adc_dev; /* ADC device handle */ +rt_uint32_t value; +/* find the device */ +adc_dev = (rt_adc_device_t)rt_device_find(ADC_DEV_NAME); +/* enable the device */ +rt_adc_enable(adc_dev, ADC_DEV_CHANNEL); +/* Read sampling values */ +value = rt_adc_read(adc_dev, ADC_DEV_CHANNEL); +/* Convert to the corresponding voltage value */ +vol = value * REFER_VOLTAGE / CONVERT_BITS; +rt_kprintf("the voltage is :%d.%02d \n", vol / 100, vol % 100); +``` + +The calculation formula of the actual voltage value is: `sampling value * reference voltage/(1 << resolution digit)`. In the above example, variable *vol* was enlarged 100 times, so finally the integer part of voltage is obtained through *vol / 100*, and the decimal part of voltage is obtained through *vol % 100*. 
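+
+As a quick numeric check of that formula, keeping the same assumptions as the example above (12-bit resolution, 3.3V reference, and the value scaled by 100), a hypothetical raw sample of 2048 works out as follows:
+
+```c
+rt_uint32_t value = 2048;                       /* hypothetical raw sample          */
+rt_uint32_t vol   = value * 330 / (1 << 12);    /* 2048 * 330 / 4096 = 165          */
+rt_kprintf("the voltage is :%d.%02d \n", vol / 100, vol % 100);   /* prints 1.65    */
+```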
+
+### Close the ADC Channel
+
+The following function can be used to close the ADC channel:
+
+```c
+rt_err_t rt_adc_disable(rt_adc_device_t dev, rt_uint32_t channel);
+```
+
+| **Parameter** | **Description** |
+| ---------- | ------------------------------- |
+| dev | ADC device handle |
+| channel | ADC channel |
+| **Return** | —— |
+| RT_EOK | Succeeded |
+| -RT_ENOSYS | Failed, the device operation method is empty |
+| Other error code | Failed |
+
+An example:
+
+```c
+#define ADC_DEV_NAME        "adc1"      /* ADC device name */
+#define ADC_DEV_CHANNEL     5           /* ADC channel */
+#define REFER_VOLTAGE       330         /* Reference voltage 3.3V, multiplied by 100 to keep 2 decimal places */
+#define CONVERT_BITS        (1 << 12)   /* The number of conversion bits is 12 */
+rt_adc_device_t adc_dev;                /* ADC device handle */
+rt_uint32_t value, vol;
+/* find the device */
+adc_dev = (rt_adc_device_t)rt_device_find(ADC_DEV_NAME);
+/* enable the device */
+rt_adc_enable(adc_dev, ADC_DEV_CHANNEL);
+/* read sampling values */
+value = rt_adc_read(adc_dev, ADC_DEV_CHANNEL);
+/* convert to the corresponding voltage value */
+vol = value * REFER_VOLTAGE / CONVERT_BITS;
+rt_kprintf("the voltage is :%d.%02d \n", vol / 100, vol % 100);
+/* close the channel */
+rt_adc_disable(adc_dev, ADC_DEV_CHANNEL);
+```
+
+### FinSH Command
+
+Before using the device, you need to check whether it exists. Use the command `adc probe` followed by the name of the registered ADC device, as follows:
+
+```c
+msh >adc probe adc1
+probe adc1 success
+```
+
+To enable a channel of the device, use the command `adc enable` followed by the channel number.
+
+```c
+msh >adc enable 5
+adc1 channel 5 enables success
+```
+
+To read data from a channel of an ADC device, use the command `adc read` followed by the channel number.
+
+```c
+msh >adc read 5
+adc1 channel 5 read value is 0x00000FFF
+msh >
+```
+
+To close a channel of an ADC device, use the command `adc disable` followed by the channel number.
+
+```c
+msh >adc disable 5
+adc1 channel 5 disable success
+msh >
+```
+
+## ADC Device Usage Example
+
+For the specific usage of the ADC device, refer to the following sample code. The main steps of the sample code are as follows:
+
+1. First find the device handle based on the ADC device name "adc1".
+2. After the device is enabled, read the sample value of channel 5 of the adc1 device, and then calculate the actual voltage value using a resolution of 12 bits and a reference voltage of 3.3V.
+3. Finally, close the corresponding channel of the ADC device.
+
+Running result: the raw sample value that was actually read is printed, together with the calculated actual voltage value.
+
+```c
+/*
+ * Program Listing: ADC Device Usage Routine
+ * The routine exports the adc_vol_sample command to the control terminal
+ * adc_vol_sample command call format: adc_vol_sample
+ * Program function: The voltage is sampled by the ADC device and converted to a numerical value.
+ *                   The sample code uses a reference voltage of 3.3V and a conversion resolution of 12 bits.
+*/
+
+#include <rtthread.h>
+#include <rtdevice.h>
+
+#define ADC_DEV_NAME        "adc1"      /* ADC device name */
+#define ADC_DEV_CHANNEL     5           /* ADC channel */
+#define REFER_VOLTAGE       330         /* Reference voltage 3.3V, multiplied by 100 to keep 2 decimal places */
+#define CONVERT_BITS        (1 << 12)   /* The number of conversion bits is 12 */
+
+static int adc_vol_sample(int argc, char *argv[])
+{
+    rt_adc_device_t adc_dev;
+    rt_uint32_t value, vol;
+    rt_err_t ret = RT_EOK;
+
+    /* find the device */
+    adc_dev = (rt_adc_device_t)rt_device_find(ADC_DEV_NAME);
+    if (adc_dev == RT_NULL)
+    {
+        rt_kprintf("adc sample run failed! 
can't find %s device!\n", ADC_DEV_NAME); + return RT_ERROR; + } + + /* enable the device */ + ret = rt_adc_enable(adc_dev, ADC_DEV_CHANNEL); + + /* read sampling values */ + value = rt_adc_read(adc_dev, ADC_DEV_CHANNEL); + rt_kprintf("the value is :%d \n", value); + + /* convert to the corresponding voltage value */ + vol = value * REFER_VOLTAGE / CONVERT_BITS; + rt_kprintf("the voltage is :%d.%02d \n", vol / 100, vol % 100); + + /* close the channel */ + ret = rt_adc_disable(adc_dev, ADC_DEV_CHANNEL); + + return ret; +} +/* export to the msh command list */ +MSH_CMD_EXPORT(adc_vol_sample, adc voltage convert sample); +``` + + + diff --git a/documentation/device/adc/figures/adc-p.png b/documentation/device/adc/figures/adc-p.png new file mode 100644 index 0000000000..26c6e9bfc3 Binary files /dev/null and b/documentation/device/adc/figures/adc-p.png differ diff --git a/documentation/device/device.md b/documentation/device/device.md new file mode 100644 index 0000000000..33887d3fe6 --- /dev/null +++ b/documentation/device/device.md @@ -0,0 +1,486 @@ +# I/O Device Framework + +Most embedded systems include some I/O (Input/Output) devices, data displays on instruments, serial communication on industrial devices, Flash or SD cards for saving data on data acquisition devices,as well as Ethernet interfaces for network devices, are examples of I/O devices that are commonly seen in embedded systems. + +This chapter describes how RT-Thread manages different I/O devices. + +## I/O Device Introduction + +### I/O Device Framework + +RT-Thread provides a set of I/O device framework, as shown in the following figure. It is located between the hardware and the application. It is divided into three layers. From top to bottom, they are I/O device management layer, device driver framework layer, and device driver layer. + +![I/O Device Framework](figures/io-dev.png) + +The application obtains the correct device driver through the I/O device management interface, and then uses this device driver to perform data (or control) interaction with the bottom I/O hardware device. + +The I/O device management layer implements the encapsulation of device drivers. The application accesses the bottom devices through the standard interface provided by the I/O device layer. The upgrade and replacement of the device driver will not affect the upper layer application. In this way, the hardware-related code of the device can exist independently of the application, and both parties only need to pay attention to the respective function implementation, thereby reducing the coupling and complexity of the code and improving the reliability of the system. + +The device driver framework layer is an abstraction of the same kind of hardware device driver. The same part of the same hardware device driver of different manufacturers is extracted, and the different parts are left out of interface, implemented by the driver. + +The device driver layer is a set of programs that drive the hardware devices to work, enabling access to hardware devices. It is responsible for creating and registering I/O devices. For devices with simple operation logic, you can register devices directly into the I/O Device Manager without going through the device driver framework layer. The sequence diagram is as shown below. 
There are mainly two points: + +* The device driver creates a device instance with hardware access capabilities based on the device model definition and registers the device with the `rt_device_register()` interface in the I/O Device Manager. +* The application finds the device through the`rt_device_find()` interface and then uses the I/O device management interface to access the hardware. + +![Simple I/O Device Using Sequence Diagram](figures/io-call.png) + +For other devices, such as watchdog, the created device instance will be registered to the corresponding device driver framework, and then the device driver framework will register with the I/O device manager. The main points are as follows: + +* The watchdog device driver creates a watchdog device instance with hardware access capability based on the watchdog device model definition and registers the watchdog device through the `rt_hw_watchdog_register()` interface into the watchdog device driver framework. +* The watchdog device driver framework registers the watchdog device to the I/O Device Manager via the `rt_device_register()` interface. +* The application accesses the watchdog device hardware through the I/O device management interface. + +Usage of Watchdog device: + +![Watchdog Device Use Sequence Diagram](figures/wtd-uml.png) + +### I/O Device Model + +The device model of RT-Thread is based on the kernel object model, which is considered a kind of objects and is included in the scope of the object manager. Each device object is derived from the base object. Each concrete device can inherit the properties of its parent class object and derive its own properties. The following figure is a schematic diagram of the inheritance and derivation relationship of device object. + +![Device Inheritance Diagram](figures/io-parent.png) + +The specific definitions of device objects are as follows: + +```c +struct rt_device +{ + struct rt_object parent; /* kernel object base class */ + enum rt_device_class_type type; /* device type */ + rt_uint16_t flag; /* device parameter */ + rt_uint16_t open_flag; /* device open flag */ + rt_uint8_t ref_count; /* number of times the device was cited */ + rt_uint8_t device_id; /* device ID,0 - 255 */ + + /* data transceiving callback function */ + rt_err_t (*rx_indicate)(rt_device_t dev, rt_size_t size); + rt_err_t (*tx_complete)(rt_device_t dev, void *buffer); + + const struct rt_device_ops *ops; /* device operate methods */ + + /* device's private data */ + void *user_data; +}; +typedef struct rt_device *rt_device_t; + +``` + +### I/O Device Type + +RT-Thread supports multiple I/O device types, the main device types are as follows: + +```c +RT_Device_Class_Char /* character device */ +RT_Device_Class_Block /* block device */ +RT_Device_Class_NetIf /* network interface device */ +RT_Device_Class_MTD /* memory device */ +RT_Device_Class_RTC /* RTC device */ +RT_Device_Class_Sound /* sound device */ +RT_Device_Class_Graphic /* graphic device */ +RT_Device_Class_I2CBUS /* I2C bus device */ +RT_Device_Class_USBDevice /* USB device */ +RT_Device_Class_USBHost /* USB host device */ +RT_Device_Class_SPIBUS /* SPI bus device */ +RT_Device_Class_SPIDevice /* SPI device */ +RT_Device_Class_SDIO /* SDIO device */ +RT_Device_Class_Miscellaneous /* miscellaneous devices */ +``` + +Character devices and block devices are commonly used device types, and their classification is based on the transmission processing between device data and the system. 
Character mode devices allow for unstructured data transfers, that is, data usually transfers in the form of serial, one byte at a time. Character devices are usually simple devices such as serial ports and buttons. + +A block device transfers one data block at a time, for example 512 bytes data at a time. This data block is enforced by the hardware. Data blocks may use some type of data interface or some mandatory transport protocol, otherwise an error may occur. Therefore, sometimes the block device driver must perform additional work on read or write operations, as shown in the following figure: + +![Block Device](figures/block-dev.png) + +When the system serves a write operation with a large amount of data, the device driver must first divide the data into multiple packets, each with the data size specified by the device. In the actual process, the last part of the data size may be smaller than the normal device block size. Each block in the above figure is written to the device using a separate write request, and the first three are directly written. However, the last data block size is smaller than the device block size, and the device driver must process the last data block differently than the first 3 blocks. Normally, the device driver needs to first perform a read operation of the corresponding device block, then overwrite the write data onto the read data, and then write the "composited" data block back to the device as a whole block. . For example, for block 4 in the above figure, the driver needs to read out the device block corresponding to block 4, and then overwrite the data to be written to the data read from the device block, and merge them into a new block. Finally write back to the block device. + +## Create and Register I/O Device + +The driver layer is responsible for creating device instances and registering them in the I/O Device Manager. You can create device instances in a statically declared manner or dynamically create them with the following interfaces: + +```c +rt_device_t rt_device_create(int type, int attach_size); +``` + +|**Parameters** |**Description** | +|-------------|-------------------------------------| +| type | device type, the device type values listed in "I/O Device Type" section can be used here | +| attach_size | user data size | +|**Return** | -- | +| Device Handle | Create successfully | +| RT_NULL | Creation failed, dynamic memory allocation failed | + +When this interface is called, the system allocates a device control block from the dynamic heap memory, the size of which is the sum of `struct rt_device` and `attach_size`, and the type of the device is set by the parameter type. After the device is created, implementing its access to the hardware is needed. + +```c +struct rt_device_ops +{ + /* common device interface */ + rt_err_t (*init) (rt_device_t dev); + rt_err_t (*open) (rt_device_t dev, rt_uint16_t oflag); + rt_err_t (*close) (rt_device_t dev); + rt_size_t (*read) (rt_device_t dev, rt_off_t pos, void *buffer, rt_size_t size); + rt_size_t (*write) (rt_device_t dev, rt_off_t pos, const void *buffer, rt_size_t size); + rt_err_t (*control)(rt_device_t dev, int cmd, void *args); +}; + +``` + +A description of each method of operation is shown in the following table: + +|**Method Name**|**Method Description** | +|----|-----------------------| +| init | Initialize the device. After the device is initialized, the flag of the device control block is set to the active state(RT_DEVICE_FLAG_ACTIVATED). 
If the flag in the device control block has been set to the active state, then the initialization interface will be returned immediately when running again, and will not be re-initialized. | +| open | Open the device. Some devices are not started when the system is started, or the device needs to send and receive data. However, if the upper application is not ready, the device should not be enabled by default and start receiving data. Therefore, it is recommended to enable the device when calling the open interface when writing the bottom driver. | +| close | Close the device. When the device is open, the device control block maintains an open count, the count will add 1 when the device is opended, and the count will subtract 1 when the device is closed, and a real shutdown operation is operated when the counter turns to 0. | +| read | Read data from the device. The parameter pos is the offset of the read data, but some devices do not necessarily need to specify the offset, such as serial devices, the device driver should ignore this parameter. But for block devices, pos and size are measured in the block size of the block device. For example, the block size of the block device is 512 byte, and in the parameter pos = 10, size = 2, then the driver should return the 10th block in the device (starting from the 0th block) for a total of 2 blocks of data. The type returned by this interface is rt_size_t, which is the number of bytes read or the number of blocks. Normally, the value of size in the parameter should be returned. If it returns zero, set the corresponding errno value. | +| write | Write data to the device. The parameter pos is the offset of the write data. Similar to read operations, for block devices, pos and size are measured in the block size of the block device. The type returned by this interface is rt_size_t, which is the number of bytes or blocks of data actually written. Normally, the value of size in the parameter should be returned. If it returns zero, set the corresponding errno value. | +| control | Control the device according to the cmd command. Commands are often implemented by the bottom device drivers. For example, the parameter RT_DEVICE_CTRL_BLK_GETGEOME means to get the size information of the block device. | + +When a dynamically created device is no longer needed, it can be destroyed using the following function: + +```c +void rt_device_destroy(rt_device_t device); +``` + +|**Parameters**|**Description**| +|----------|----------| +| device | device handle | + +After the device is created, it needs to be registered to the I/O Device Manager for the application to access. The functions for registering the device are as follows: + +```c +rt_err_t rt_device_register(rt_device_t dev, const char* name, rt_uint8_t flags); +``` + +|**Parameters** |**Description** | +|------------|-----------------------| +| dev | device handle | +| name | device name, the maximum length of the device name is specified by the macro RT_NAME_MAX defined in rtconfig.h, and the extra part is automatically truncated | +| flags | device mode flag | +|**Return** | -- | +| RT_EOK | registration success | +| -RT_ERROR | registration failed, dev is empty or name already exists | + +>It should be avoided to repeatedly register registered devices and to register devices with the same name. 
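+
+A minimal sketch that ties `rt_device_create()`, the operation methods and `rt_device_register()` together is shown below; the device name "vdev0" and the empty operation implementations are made up purely for illustration, and the `flags` values used here are described below:
+
+```c
+#include <rtthread.h>
+
+static rt_err_t vdev_init(rt_device_t dev)
+{
+    return RT_EOK;
+}
+
+static rt_err_t vdev_open(rt_device_t dev, rt_uint16_t oflag)
+{
+    return RT_EOK;
+}
+
+static rt_err_t vdev_close(rt_device_t dev)
+{
+    return RT_EOK;
+}
+
+const static struct rt_device_ops vdev_ops =
+{
+    vdev_init,
+    vdev_open,
+    vdev_close,
+    RT_NULL,    /* read not implemented */
+    RT_NULL,    /* write not implemented */
+    RT_NULL,    /* control not implemented */
+};
+
+int vdev_register(void)
+{
+    /* dynamically allocate a character device control block with no extra user data */
+    rt_device_t dev = rt_device_create(RT_Device_Class_Char, 0);
+    if (dev == RT_NULL)
+        return -RT_ERROR;
+
+    dev->ops = &vdev_ops;
+
+    /* make the device visible to applications under the name "vdev0" */
+    return rt_device_register(dev, "vdev0", RT_DEVICE_FLAG_RDWR);
+}
+```
+
+Depending on the kernel configuration, the device operations may be stored either in the `ops` table shown in the `rt_device` structure above or in individual callback fields; this sketch follows the `ops` form used by the watchdog example later in this chapter.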
+ +flags parameters support the following parameters (multiple parameters can be supported in OR logic): + +```c +#define RT_DEVICE_FLAG_RDONLY 0x001 /* read only */ +#define RT_DEVICE_FLAG_WRONLY 0x002 /* write only */ +#define RT_DEVICE_FLAG_RDWR 0x003 /* read and write */ +#define RT_DEVICE_FLAG_REMOVABLE 0x004 /* can be removed */ +#define RT_DEVICE_FLAG_STANDALONE 0x008 /* stand alone */ +#define RT_DEVICE_FLAG_SUSPENDED 0x020 /* suspended */ +#define RT_DEVICE_FLAG_STREAM 0x040 /* stream mode */ +#define RT_DEVICE_FLAG_INT_RX 0x100 /* interrupt reception */ +#define RT_DEVICE_FLAG_DMA_RX 0x200 /* DMA reception */ +#define RT_DEVICE_FLAG_INT_TX 0x400 /* interrupt sending */ +#define RT_DEVICE_FLAG_DMA_TX 0x800 /* DMA sending */ +``` + +Device Stream Mode The RT_DEVICE_FLAG_STREAM parameter is used to output a character string to the serial terminal: when the output character is `\n` , it automatically fills in a `\r` to make a branch. + +Successfully registered devices can use the `list_device` command on the FinSH command line to view all device information in the system, including the device name, device type, and number of times the device is opened: + +```c +msh />list_device +device type ref count +-------- -------------------- ---------- +e0 Network Interface 0 +sd0 Block Device 1 +rtc RTC 0 +uart1 Character Device 0 +uart0 Character Device 2 +msh /> +``` + +When the device is logged off, the device will be removed from the device manager and the device will no longer be found through the device. Logging out of the device does not release the memory occupied by the device control block. The function to log off of the device is as follows: + +```c +rt_err_t rt_device_unregister(rt_device_t dev); +``` + +|**Parameters**|**Description**| +|----------|----------| +| dev | device handle | +|**Return**| -- | +| RT_EOK | successful | + +The following code is an example of registering a watchdog device. After calling the `rt_hw_watchdog_register()` interface, the device is registered to the I/O Device Manager via the `rt_device_register()` interface. + +```c +const static struct rt_device_ops wdt_ops = +{ + rt_watchdog_init, + rt_watchdog_open, + rt_watchdog_close, + RT_NULL, + RT_NULL, + rt_watchdog_control, +}; + +rt_err_t rt_hw_watchdog_register(struct rt_watchdog_device *wtd, + const char *name, + rt_uint32_t flag, + void *data) +{ + struct rt_device *device; + RT_ASSERT(wtd != RT_NULL); + + device = &(wtd->parent); + + device->type = RT_Device_Class_Miscellaneous; + device->rx_indicate = RT_NULL; + device->tx_complete = RT_NULL; + + device->ops = &wdt_ops; + device->user_data = data; + + /* register a character device */ + return rt_device_register(device, name, flag); +} + +``` + +## Access I/O Devices + +The application accesses the hardware device through the I/O device management interface, which is accessible to the application when the device driver is implemented. The mapping relationship between the I/O device management interface and the operations on the I/O device is as follows: + +![Mapping between the I/O Device Management Interface and the Operations on the I/O Device](figures/io-fun-call.png) + +### Find Device + +The application obtains the device handle based on the device name, which in turn allows the device to operate. 
+To find a device, use the following function:
+
+```c
+rt_device_t rt_device_find(const char* name);
+```
+
+|**Parameters**|**Description**                     |
+|----------|------------------------------------|
+| name     | device name |
+|**Return**| --                                 |
+| device handle | if the corresponding device is found, the corresponding device handle is returned |
+| RT_NULL  | no corresponding device object found |
+
+### Initialize Device
+
+Once the device handle is obtained, the application can initialize the device using the following function:
+
+```c
+rt_err_t rt_device_init(rt_device_t dev);
+```
+
+|**Parameters**|**Description** |
+|----------|----------------|
+| dev      | device handle |
+|**Return**| --             |
+| RT_EOK   | device initialization succeeded |
+| Error Code | device initialization failed |
+
+>When a device has already been successfully initialized, calling this interface will not initialize it again.
+
+### Open and Close Device
+
+Through the device handle, the application can open and close the device. When the device is opened, it will detect whether the device has been initialized; if not, the initialization interface is called to initialize the device by default. Open the device with the following function:
+
+```c
+rt_err_t rt_device_open(rt_device_t dev, rt_uint16_t oflags);
+```
+
+|**Parameters** |**Description**              |
+|------------|-----------------------------|
+| dev        | device handle |
+| oflags     | open the device in oflags mode |
+|**Return**  | --                          |
+| RT_EOK     | device opened successfully |
+|-RT_EBUSY   | the device does not allow repeated opening if the RT_DEVICE_FLAG_STANDALONE parameter was included in the flags specified when the device was registered |
+| Other Error Code | device failed to open |
+
+oflags supports the following parameters:
+
+```c
+#define RT_DEVICE_OFLAG_CLOSE 0x000   /* device was already closed (internal use) */
+#define RT_DEVICE_OFLAG_RDONLY 0x001  /* open the device in read-only mode */
+#define RT_DEVICE_OFLAG_WRONLY 0x002  /* open the device in write-only mode */
+#define RT_DEVICE_OFLAG_RDWR 0x003    /* open the device in read-and-write mode */
+#define RT_DEVICE_OFLAG_OPEN 0x008    /* device was already opened (internal use) */
+#define RT_DEVICE_FLAG_STREAM 0x040   /* open the device in stream mode */
+#define RT_DEVICE_FLAG_INT_RX 0x100   /* open the device in interrupt reception mode */
+#define RT_DEVICE_FLAG_DMA_RX 0x200   /* open the device in DMA reception mode */
+#define RT_DEVICE_FLAG_INT_TX 0x400   /* open the device in interrupt sending mode */
+#define RT_DEVICE_FLAG_DMA_TX 0x800   /* open the device in DMA sending mode */
+```
+
+>If the upper application needs to set the device's receive callback function, it must open the device with RT_DEVICE_FLAG_INT_RX or RT_DEVICE_FLAG_DMA_RX, otherwise the callback function will not be called.
+
+After the application opens the device and completes reading and writing, if no further operations are needed, you can close the device using the following function:
+
+```c
+rt_err_t rt_device_close(rt_device_t dev);
+```
+
+|**Parameters** |**Description**                     |
+|------------|------------------------------------|
+| dev        | device handle |
+|**Return**  | --                                 |
+| RT_EOK     | device closed successfully |
+| \-RT_ERROR | device has been completely closed and cannot be closed repeatedly |
+| Other Error Code | failed to close device |
+
+>The device interfaces `rt_device_open()` and `rt_device_close()` should be used in pairs: every time a device is opened it should later be closed, so that the device is completely closed; otherwise the device remains open.
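+
+A minimal sketch of this typical find / initialize / open / close sequence is shown below; the device name "uart1" is only an assumption and depends on the BSP:
+
+```c
+#include <rtthread.h>
+#include <rtdevice.h>
+
+static int device_access_sample(void)
+{
+    rt_device_t dev;
+
+    /* obtain the device handle by name; "uart1" is an assumed name */
+    dev = rt_device_find("uart1");
+    if (dev == RT_NULL)
+        return -RT_ERROR;
+
+    /* optional: rt_device_open() will also initialize the device if this step is skipped */
+    rt_device_init(dev);
+
+    /* open in read-write mode with interrupt reception, so a receive callback could be used */
+    if (rt_device_open(dev, RT_DEVICE_OFLAG_RDWR | RT_DEVICE_FLAG_INT_RX) != RT_EOK)
+        return -RT_ERROR;
+
+    /* ... read from, write to, or control the device here ... */
+
+    /* every successful open is paired with a close */
+    return rt_device_close(dev);
+}
+```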
+ +### Control Device + +By commanding the control word, the application can also control the device with the following function: + +```c +rt_err_t rt_device_control(rt_device_t dev, rt_uint8_t cmd, void* arg); +``` + +|**Parameters** |**Description** | +|-------------|--------------------------------------------| +| dev | device handle | +| cmd | command control word, this parameter is usually related to the device driver | +| arg | controlled parameter | +|**Return** | -- | +| RT_EOK | function executed successfully | +| -RT_ENOSYS | execution failed, dev is empty | +| Other Error Code | execution failed | + +The generic device command for the parameter `cmd` can be defined as follows: + +```c +#define RT_DEVICE_CTRL_RESUME 0x01 /* resume device */ +#define RT_DEVICE_CTRL_SUSPEND 0x02 /* suspend device */ +#define RT_DEVICE_CTRL_CONFIG 0x03 /* configure device */ +#define RT_DEVICE_CTRL_SET_INT 0x10 /* set interrupt */ +#define RT_DEVICE_CTRL_CLR_INT 0x11 /* clear interrupt */ +#define RT_DEVICE_CTRL_GET_INT 0x12 /* obtain interrupt status */ +``` + +### Read and Write Device + +Application can read data from the device by the following function: + +```c +rt_size_t rt_device_read(rt_device_t dev, rt_off_t pos,void* buffer, rt_size_t size); +``` + +|**Parameters** |**Description** | +|--------------------|--------------------------------| +| dev | device handle | +| pos | read data offset | +| buffer | memory buffer pointer, the data read will be saved in the buffer | +| size | size of the data read | +|**Return** | -- | +| Actual Size of the Data Read | If it is a character device, the return size is in bytes. If it is a block device, the returned size is in block units. | +| 0 | need to read the current thread's errno to determine the error status | + +Calling this function will read the data from the dev device and store it in the buffer. The maximum length of this buffer is *size*, and *pos* has different meanings depending on the device class. + +Writing data to the device can be done by the following function: + +```c +rt_size_t rt_device_write(rt_device_t dev, rt_off_t pos,const void* buffer, rt_size_t size); +``` + +|**Parameters** |**Description** | +|--------------------|--------------------------------| +| dev | device handle | +| pos | write data offset | +| buffer | memory buffer pointer, placing the data to be written in | +| size | size of the written data | +|**Return** | -- | +| Actual Size of the Data Written | If it is a character device, the return size is in bytes. If it is a block device, the returned size is in block units. | +| 0 | need to read the current thread's errno to determine the error status | + +Calling this function will write the data in the buffer to the *dev* device . The maximum length of the written data is *size*, and *pos* has different meanings depending on the device class. + +### Data Transceiving and Call-back + +When the hardware device receives the data, the following function can be used to call back another function to set the data receiving indication to notify the upper application thread that the data arrives: + +```c +rt_err_t rt_device_set_rx_indicate(rt_device_t dev, rt_err_t (*rx_ind)(rt_device_t dev,rt_size_t size)); +``` + +|**Parameters**|**Description** | +|----------|--------------| +| dev | device handle | +| rx_ind | callback function pointer | +|**Return**| -- | +| RT_EOK | set successfully | + +The callback of this function will be provided by the user. 
+When the hardware device receives data, the driver executes this callback function and passes the length of the received data to the upper-layer application via the *size* parameter. The upper-layer application thread should read the data from the device as soon as it receives this indication.
+
+When the application calls `rt_device_write()` to write data, if the underlying hardware supports automatic sending, the upper application can set a callback function. This callback function is called after the underlying hardware has sent the data (for example, when a DMA transfer completes or the FIFO has been fully written and the corresponding interrupt is triggered). Use the following function to set a send-completion indication for the device. The function parameters and return values are as follows:
+
+```c
+rt_err_t rt_device_set_tx_complete(rt_device_t dev, rt_err_t (*tx_done)(rt_device_t dev, void *buffer));
+```
+
+|**Parameters**|**Description** |
+|----------|--------------|
+| dev      | device handle |
+| tx_done  | callback function pointer |
+|**Return**| --           |
+| RT_EOK   | set successfully |
+
+The callback function is provided by the user when this interface is called. When the hardware device has sent the data, the driver calls back this function and passes the address of the sent data block to the upper application in the *buffer* parameter. When the upper-layer application (thread) receives this indication, it either releases the buffer memory block or reuses it as the buffer for the next write, depending on how the buffer was allocated.
+
+### Access Device Sample
+
+The following code is an example of accessing a device. First, find the watchdog device through the `rt_device_find()` interface to obtain the device handle, then initialize the device through the `rt_device_init()` interface, and set the watchdog device timeout through the `rt_device_control()` interface.
+ +```c +#include +#include + +#define IWDG_DEVICE_NAME "iwg" + +static rt_device_t wdg_dev; + +static void idle_hook(void) +{ + /* feed the dog in the callback function of the idle thread */ + rt_device_control(wdg_dev, RT_DEVICE_CTRL_WDT_KEEPALIVE, NULL); + rt_kprintf("feed the dog!\n "); +} + +int main(void) +{ + rt_err_t res = RT_EOK; + rt_uint32_t timeout = 1000; /* timeout */ + + /* find the watchdog device based on the device name, and obtain the device handle */ + wdg_dev = rt_device_find(IWDG_DEVICE_NAME); + if (!wdg_dev) + { + rt_kprintf("find %s failed!\n", IWDG_DEVICE_NAME); + return RT_ERROR; + } + /* initialize device */ + res = rt_device_init(wdg_dev); + if (res != RT_EOK) + { + rt_kprintf("initialize %s failed!\n", IWDG_DEVICE_NAME); + return res; + } + /* set watchdog timeout */ + res = rt_device_control(wdg_dev, RT_DEVICE_CTRL_WDT_SET_TIMEOUT, &timeout); + if (res != RT_EOK) + { + rt_kprintf("set %s timeout failed!\n", IWDG_DEVICE_NAME); + return res; + } + /* set idle thread callback function */ + rt_thread_idle_sethook(idle_hook); + + return res; +} +``` + diff --git a/documentation/device/figures/block-dev.png b/documentation/device/figures/block-dev.png new file mode 100644 index 0000000000..510e3d516a Binary files /dev/null and b/documentation/device/figures/block-dev.png differ diff --git a/documentation/device/figures/io-call.png b/documentation/device/figures/io-call.png new file mode 100644 index 0000000000..0fc097c322 Binary files /dev/null and b/documentation/device/figures/io-call.png differ diff --git a/documentation/device/figures/io-dev.png b/documentation/device/figures/io-dev.png new file mode 100644 index 0000000000..50fd63f954 Binary files /dev/null and b/documentation/device/figures/io-dev.png differ diff --git a/documentation/device/figures/io-fun-call.png b/documentation/device/figures/io-fun-call.png new file mode 100644 index 0000000000..49a77f7681 Binary files /dev/null and b/documentation/device/figures/io-fun-call.png differ diff --git a/documentation/device/figures/io-parent.png b/documentation/device/figures/io-parent.png new file mode 100644 index 0000000000..7c87db7f20 Binary files /dev/null and b/documentation/device/figures/io-parent.png differ diff --git a/documentation/device/figures/wtd-uml.png b/documentation/device/figures/wtd-uml.png new file mode 100644 index 0000000000..8fecc92aec Binary files /dev/null and b/documentation/device/figures/wtd-uml.png differ diff --git a/documentation/device/hwtimer/hwtimer.md b/documentation/device/hwtimer/hwtimer.md new file mode 100644 index 0000000000..350bae0747 --- /dev/null +++ b/documentation/device/hwtimer/hwtimer.md @@ -0,0 +1,403 @@ +# HWTIMER Device + + +## Introduction to the Timer + +Hardware timers generally have two modes of operation, timer mode and counter mode. No matter which mode is operated, it works by counting the pulse signal counted by the internal counter module. Here are some important concepts of timers. + +**Counter mode:** Counts the external pulse. + + **Timer mode **: Counts the internal pulse. Timers are often used as timing clocks for timing detection, timing response, and timing control. + +**Counter **: Counter can count up or down. The maximum count value of the 16-bit counter is 65535, and the maximum value of the 32-bit counter is 4294967295. + +**Counting frequency **:As for the number of counts within the counter time unit under the timer mode, since the system clock frequency is fixed, the timer time can be calculated according to the counter count value. 
`Timing time = count value / count frequency`. For example, if the counting frequency is 1 MHz, the counter counts once is 1 / 1000000 second. That is, every 1 microsecond counter is incremented by one (or subtract one), at this time, the maximum timing capability of the 16-bit counter is 65535 microseconds, which is 65.535 milliseconds. + +## Access Hardware Timer Device + +The application accesses the hardware timer device through the I/O device management interface provided by RT-Thread. The related interfaces are as follows: + +| **Function** | **Description** | +| -------------------- | ---------------------------------- | +| rt_device_find() | to look up the timer device | +| rt_device_open() | to open the timer device in read-write mode | +| rt_device_set_rx_indicate() | to set the timeout callback function | +| rt_device_control() | to control the timer device, you can set the timing mode (single time /cycle),counting frequency, or stop the timer | +| rt_device_write() | to set the timeout value of the timer. The timer then starts | +| rt_device_read() | to get the current value of the timer | +| rt_device_close() | to turn off the timer device. | + +### Find Timer Device + +The application obtains the device handle based on the hardware timer device name, and thus can operate the hardware timer device. The device function is as follows: + +```c +rt_device_t rt_device_find(const char* name); +``` + +| Parameter | **Description** | +| -------- | ---------------------------------- | +| name | hardware timer device name | +| **return** | —— | +| timer device handle | will return to the corresponding device handle if the corresponding device is found | +| RT_NULL | No device found | + +In general, the hardware timer device name registered to the system is timer0, timer1, etc. The usage examples are as follows: + +```c +#define HWTIMER_DEV_NAME "timer0" /* timer name */ +rt_device_t hw_dev; /* timer device handle */ +/* find timer device */ +hw_dev = rt_device_find(HWTIMER_DEV_NAME); +``` + +### Open Timer Device + +With the device handle, the application can open the device. When the device is open, it will detect whether the device has been initialized. If it is not initialized, it will call the initialization interface to initialize the device by default. 
Open the device with the following function: + +```c +rt_err_t rt_device_open(rt_device_t dev, rt_uint16_t oflags); +``` + +| Parameter | Description | +| ---------- | ------------------------------- | +| dev | hardware timer device handle | +| oflags | device open mode, is generally opened in read and write mode, which is to take the value:RT_DEVICE_OFLAG_RDWR | +| **return** | —— | +| RT_EOK | device opened successfully | +| other error code | device fail to open | + +An example of use is as follows: + +```c +#define HWTIMER_DEV_NAME "timer0" /* timer name */ +rt_device_t hw_dev; /* timer device handle */ +/* find timer device */ +hw_dev = rt_device_find(HWTIMER_DEV_NAME); +/* to open the timer device in read-write mode */ +rt_device_open(hw_dev, RT_DEVICE_OFLAG_RDWR); +``` + +### Set the Timeout Callback Function + +Set the timer timeout callback function by the following function, this callback function will be called when the timer expires: + +```c +rt_err_t rt_device_set_rx_indicate(rt_device_t dev, rt_err_t (*rx_ind)(rt_device_t dev,rt_size_t size)) +``` + +| Parameter | **Description** | +| ---------- | ------------------------------- | +| dev | device handle | +| rx_ind | timeout callback function, provided by the caller | +| **return** | —— | +| RT_EOK | success | + +An example of use is as follows: + +```c +#define HWTIMER_DEV_NAME "timer0" /* timer name */ +rt_device_t hw_dev; /* timer device handle */ + +/* timer timeout callback function */ +static rt_err_t timeout_cb(rt_device_t dev, rt_size_t size) +{ + rt_kprintf("this is hwtimer timeout callback fucntion!\n"); + rt_kprintf("tick is :%d !\n", rt_tick_get()); + + return 0; +} + +static int hwtimer_sample(int argc, char *argv[]) +{ + /* find timer device */ + hw_dev = rt_device_find(HWTIMER_DEV_NAME); + /* open the device in read and write mode */ + rt_device_open(hw_dev, RT_DEVICE_OFLAG_RDWR); + /* set the timeout callback function */ + rt_device_set_rx_indicate(hw_dev, timeout_cb); +} +``` + +### Control the Timer Device + +By commanding the control word, the application can configure the hardware timer device by the following function: + +```c +rt_err_t rt_device_control(rt_device_t dev, rt_uint8_t cmd, void* arg); +``` + +| Parameter | **Description** | +| ---------------- | ------------------------------ | +| dev | device handle | +| cmd | command control word | +| arg | controlled parameter | +| **return** | —— | +| RT_EOK | function executed successfully | +| -RT_ENOSYS | execution failed,dev is null | +| other error code | execution failed | + +The command control words available for the hardware timer device are as follows: + +| **Control word** | Description | +| ---------------------- | ------------------------ | +| HWTIMER_CTRL_FREQ_SET | set the counting frequency | +| HWTIMER_CTRL_STOP | stop the timer | +| HWTIMER_CTRL_INFO_GET | get timer feature information | +| HWTIMER_CTRL_MODE_SET | set timer mode | + +Get the timer parameter arg,which is a pointer to the structure struct rt_hwtimer_info, to save the obtained information. + +>Setting frequency is valid only when the timer hardware and driver support sets the counting frequency. Generally, the default frequency of the driving setting can be used. 
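+
+Before choosing a counting frequency, the timer's capabilities can be queried with HWTIMER_CTRL_INFO_GET. The sketch below is only an illustration: it assumes `hw_dev` has already been found and opened as shown above, and the field names follow `struct rt_hwtimer_info` as defined in the hwtimer driver header (they may differ between versions):
+
+```c
+struct rt_hwtimer_info info;
+
+/* fill info with the supported counting frequency range and the counter's maximum value */
+if (rt_device_control(hw_dev, HWTIMER_CTRL_INFO_GET, &info) == RT_EOK)
+{
+    rt_kprintf("count freq: %d - %d Hz, max count: %d\n",
+               info.minfreq, info.maxfreq, info.maxcnt);
+}
+```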
+ +When setting the timer mode, the parameter arg can take the following values: + +```c +HWTIMER_MODE_ONESHOT /* Single timing */ +HWTIMER_MODE_PERIOD /* Periodic timing */ +``` + +An example of using the timer count frequency and timing mode is as follows: + +```c +#define HWTIMER_DEV_NAME "timer0" /* timer name */ +rt_device_t hw_dev; /* timer device handle */ +rt_hwtimer_mode_t mode; /* timer mode */ +rt_uint32_t freq = 10000; /* couting frequency */ + +/* Timer timeout callback function */ +static rt_err_t timeout_cb(rt_device_t dev, rt_size_t size) +{ + rt_kprintf("this is hwtimer timeout callback fucntion!\n"); + rt_kprintf("tick is :%d !\n", rt_tick_get()); + + return 0; +} + +static int hwtimer_sample(int argc, char *argv[]) +{ + /* find timer device */ + hw_dev = rt_device_find(HWTIMER_DEV_NAME); + /* open the device in read and write mode */ + rt_device_open(hw_dev, RT_DEVICE_OFLAG_RDWR); + /* Set the timeout callback function */ + rt_device_set_rx_indicate(hw_dev, timeout_cb); + + /* Set the counting frequency (1Mhz or the supported minimum counting frequency by default) */ + rt_device_control(hw_dev, HWTIMER_CTRL_FREQ_SET, &freq); + /* Set the mode to periodic timer */ + mode = HWTIMER_MODE_PERIOD; + rt_device_control(hw_dev, HWTIMER_CTRL_MODE_SET, &mode); +} +``` + +### Set the Timer Timeout Value + +The timer timeout value can be set by the following function: + +```c +rt_size_t rt_device_write(rt_device_t dev, rt_off_t pos, const void* buffer, rt_size_t size); +``` + +| **Parameter** | Description | +| ---------- | ------------------------------------------ | +| dev | device handle | +| pos | write data offset, unused now, can set 0 value | +| buffer | pointer to the timer timeout structure | +| size | timeout structure size | +| **return** | —— | +| The actual size of the written data | | +| 0 | fail | + +The prototype of the timeout structure is shown below : + +```c +typedef struct rt_hwtimerval +{ + rt_int32_t sec; /* second */ + rt_int32_t usec; /* microsecond */ +} rt_hwtimerval_t; +``` + +An example of using the timer timeout value is as follows: + +```c +#define HWTIMER_DEV_NAME "timer0" /* timer name */ +rt_device_t hw_dev; /* timer device handle */ +rt_hwtimer_mode_t mode; /* timer mode */ +rt_hwtimerval_t timeout_s; /* Timer timeout value */ + +/* Timer timeout callback function */ +static rt_err_t timeout_cb(rt_device_t dev, rt_size_t size) +{ + rt_kprintf("this is hwtimer timeout callback fucntion!\n"); + rt_kprintf("tick is :%d !\n", rt_tick_get()); + + return 0; +} + +static int hwtimer_sample(int argc, char *argv[]) +{ + /* find timer device */ + hw_dev = rt_device_find(HWTIMER_DEV_NAME); + /* open the device in read-write mode */ + rt_device_open(hw_dev, RT_DEVICE_OFLAG_RDWR); + /* set the timeout callback function */ + rt_device_set_rx_indicate(hw_dev, timeout_cb); + /* set the mode as periodic timer */ + mode = HWTIMER_MODE_PERIOD; + rt_device_control(hw_dev, HWTIMER_CTRL_MODE_SET, &mode); + + /* Set the timer timeout value to 5s and start the timer */ + timeout_s.sec = 5; /* second */ + timeout_s.usec = 0; /* microsecond */ + rt_device_write(hw_dev, 0, &timeout_s, sizeof(timeout_s)); +} +``` + +### Obtain the Current Value of the Timer + +The current value of the timer can be obtained by the following function: + +```c +rt_size_t rt_device_read(rt_device_t dev, rt_off_t pos, void* buffer, rt_size_t size); +``` + +| **Parameter** | Description | +| ---------- | ------------------------------------------ | +| dev | timer device handle | +| pos | write data 
+offset, unused by the hwtimer device, can be set to 0 |
+| buffer     | output parameter, a pointer to the timeout structure |
+| size       | timeout structure size |
+| **return** | —— |
+| Timeout structure size | success |
+| 0          | fail |
+
+An example of use is shown below:
+
+```c
+rt_hwtimerval_t timeout_s;      /* used to save the time the timer has elapsed */
+/* read the elapsed time of the timer */
+rt_device_read(hw_dev, 0, &timeout_s, sizeof(timeout_s));
+```
+
+### Close the Timer Device
+
+The timer device can be closed by the following function:
+
+```c
+rt_err_t rt_device_close(rt_device_t dev);
+```
+
+| Parameter  | Description                        |
+| ---------- | ---------------------------------- |
+| dev        | timer device handle |
+| **return** | —— |
+| RT_EOK     | device closed successfully |
+| -RT_ERROR  | the device has been completely shut down and cannot be closed repeatedly |
+| other error code | failed to close the device |
+
+The close-device interface and the open-device interface should be used in pairs: after a device has been opened and used, close it so that it is completely shut down; otherwise the device remains open.
+
+An example of use is shown below:
+
+```c
+#define HWTIMER_DEV_NAME   "timer0"     /* timer name */
+rt_device_t hw_dev;                     /* timer device handle */
+/* find timer device */
+hw_dev = rt_device_find(HWTIMER_DEV_NAME);
+... ...
+rt_device_close(hw_dev);
+```
+
+>Timing errors may occur. Assume that the counter has a maximum value of 0xFFFF, a counting frequency of 1 MHz, and a requested timing period of 1 second and 1 microsecond. Since the timer can only count up to 65535 us at a time, the 1000001 us request may be completed as, for example, 20 rounds of 50000 us, which leaves a calculation error of 1 us.
+
+## Hardware Timer Device Usage Example
+
+For details on using a hardware timer device, refer to the following sample code. The main steps of the sample code are as follows:
+
+1. First find the device handle based on the timer device name "timer0".
+2. Open the device "timer0" in read-write mode.
+3. Set the timer timeout callback function.
+4. Set the timer mode to periodic timer and set the timeout period to 5 seconds. At this point, the timer starts.
+5. Read the timer after a 3500 ms delay; the value read is displayed in seconds and microseconds.
+
+```c
+/*
+ * Program listing: This is an hwtimer device usage routine
+ * The routine exports the hwtimer_sample command to the control terminal
+ * Command call format: hwtimer_sample
+ * Program function: The hardware timer timeout callback function periodically prints the current tick value,
+ *                   and the difference between two tick values can be converted into elapsed time to check the timing period.
+ */
+
+#include <rtthread.h>
+#include <rtdevice.h>
+
+#define HWTIMER_DEV_NAME   "timer0"     /* timer name */
+
+/* Timer timeout callback function */
+static rt_err_t timeout_cb(rt_device_t dev, rt_size_t size)
+{
+    rt_kprintf("this is hwtimer timeout callback function!\n");
+    rt_kprintf("tick is :%d !\n", rt_tick_get());
+
+    return 0;
+}
+
+static int hwtimer_sample(int argc, char *argv[])
+{
+    rt_err_t ret = RT_EOK;
+    rt_hwtimerval_t timeout_s;      /* timer timeout value */
+    rt_device_t hw_dev = RT_NULL;   /* timer device handle */
+    rt_hwtimer_mode_t mode;         /* timer mode */
+
+    /* find timer device */
+    hw_dev = rt_device_find(HWTIMER_DEV_NAME);
+    if (hw_dev == RT_NULL)
+    {
+        rt_kprintf("hwtimer sample run failed!
can't find %s device!\n", HWTIMER_DEV_NAME); + return RT_ERROR; + } + + /* Open the device in read-write mode */ + ret = rt_device_open(hw_dev, RT_DEVICE_OFLAG_RDWR); + if (ret != RT_EOK) + { + rt_kprintf("open %s device failed!\n", HWTIMER_DEV_NAME); + return ret; + } + + /* set timeout callback function */ + rt_device_set_rx_indicate(hw_dev, timeout_cb); + + /* Setting mode is periodic timer */ + mode = HWTIMER_MODE_PERIOD; + ret = rt_device_control(hw_dev, HWTIMER_CTRL_MODE_SET, &mode); + if (ret != RT_EOK) + { + rt_kprintf("set mode failed! ret is :%d\n", ret); + return ret; + } + + /* Set the timer timeout value to 5s and start the timer. */ + timeout_s.sec = 5; /* second */ + timeout_s.usec = 0; /* microsecond */ + + if (rt_device_write(hw_dev, 0, &timeout_s, sizeof(timeout_s)) != sizeof(timeout_s)) + { + rt_kprintf("set timeout value failed\n"); + return RT_ERROR; + } + + /* delay 3500ms */ + rt_thread_mdelay(3500); + + /* read the current value of timer */ + rt_device_read(hw_dev, 0, &timeout_s, sizeof(timeout_s)); + rt_kprintf("Read: Sec = %d, Usec = %d\n", timeout_s.sec, timeout_s.usec); + + return ret; +} +/* Export to the msh command list */ +MSH_CMD_EXPORT(hwtimer_sample, hwtimer sample); +``` diff --git a/documentation/device/i2c/figures/i2c1.png b/documentation/device/i2c/figures/i2c1.png new file mode 100644 index 0000000000..7b3b2d204d Binary files /dev/null and b/documentation/device/i2c/figures/i2c1.png differ diff --git a/documentation/device/i2c/figures/i2c2.png b/documentation/device/i2c/figures/i2c2.png new file mode 100644 index 0000000000..58f83b7461 Binary files /dev/null and b/documentation/device/i2c/figures/i2c2.png differ diff --git a/documentation/device/i2c/figures/i2c3.png b/documentation/device/i2c/figures/i2c3.png new file mode 100644 index 0000000000..a8ad1a1c93 Binary files /dev/null and b/documentation/device/i2c/figures/i2c3.png differ diff --git a/documentation/device/i2c/i2c.md b/documentation/device/i2c/i2c.md new file mode 100644 index 0000000000..c8531bced0 --- /dev/null +++ b/documentation/device/i2c/i2c.md @@ -0,0 +1,298 @@ +# I2C Bus Device + +## Introduction of I2C + +The I2C (Inter Integrated Circuit) bus is a half-duplex, bidirectional two-wire synchronous serial bus developed by PHILIPS. The I2C bus has only two signal lines, one is the bidirectional data line SDA (serial data), and the other is the bidirectional clock line SCL (serial clock). The SPI bus has two lines for receiving data and transmitting data between the master and slave devices, while the I2C bus uses only one line for data transmission and reception. + +Like SPI, I2C works in a master-slave manner. Unlike SPI-master-multi-slave architecture, it allows multiple master devices to exist at the same time. Each device connected to the bus has a unique address, and the master device initiates data transfer, and generates a clock signal. The slave device is addressed by the master device, and only one master device is allowed at a time. As shown below: + +![I2C Bus master-slave device connection mode](figures/i2c1.png) + +The main data transmission format of the I2C bus is shown in the following figure: + +![I2C Bus Data Transmission Format](figures/i2c2.png) + +When the bus is idle, both SDA and SCL are in a high state. When the host wants to communicate with a slave, it will send a start condition first, then send the slave address and read and write control bits, and then transfer the data (host send or receive data). 
The host will send a stop condition when the data transfer ends. Each byte transmitted is 8 bits, with the high bit first and the low bit last. The different terms in the data transmission process are as follows: + +* **Starting Condition:** When SCL is high, the host pulls SDA low, indicating that data transfer is about to begin. + +* **Slave Address:** The first byte sent by the master is the slave address, the upper 7 bits are the address, the lowest bit is the R/W read/write control bit, R/W bit equals to 1 means the read operation, and 0 means the write operation. The general slave address has 7-bit address mode and 10-bit address mode. In the 10-bit address mode, the first 7 bits of the first byte are a combination of 11110XX, where the last two bits (XX) are two highest 10-bit addresses. The second byte is the remaining 8 bits of the 10-bit slave address, as shown in the following figure: + +![7-bit address and 10-bit address format](figures/i2c3.png) + +* **Answer Signal:** Each time a byte of data is transmitted, the receiver needs to reply with an ACK (acknowledge). The slave sends an ACK when writing data and the ACK by the host when reading data. When the host reads the last byte of data, it can send NACK (Not acknowledge) and then stop the condition. + +* **Data:** After the slave address is sent, some commands may be sent, depending on the slave, and then the data transmission starts, and is sent by the master or the slave. Each data is 8 bits, and the number of bytes of data is not limited. + +* **Repeat Start Condition:** In a communication process, when the host may need to transfer data with different slaves or need to switch read and write operations, the host can send another start condition. + +* **Stop Condition:** When SDA is low, the master pulls SCL high and stays high, then pulls SDA high to indicate the end of the transfer. + +## Access to I2C Bus Devices + +In general, the MCU's I2C device communicates as a master and slave. In the RT-Thread, the I2C master is virtualized as an I2C bus device. The I2C slave communicates with the I2C bus through the I2C device interface. The related interfaces are as follows: + +| **Function** | **Description** | +| --------------- | ---------------------------------- | +| rt_device_find() | Find device handles based on I2C bus device name | +| rt_i2c_transfer() | transfer data | + +### Finding I2C Bus Device + +Before using the I2C bus device, you need to obtain the device handle according to the I2C bus device name, so that you can operate the I2C bus device. The device function is as follows. + +```c +rt_device_t rt_device_find(const char* name); +``` + +| Parameter | Description | +| -------- | ---------------------------------- | +| name | I2C bus device name | +| **Return Value** | —— | +| device handle | Finding the corresponding device will return the corresponding device handle | +| RT_NULL | No corresponding device object found | + +In general, the name of the I2C device registered to the system is i2c0, i2c1, etc. The usage examples are as follows: + +```c +#define AHT10_I2C_BUS_NAME "i2c1" /* Sensor connected I2C bus device name */ +struct rt_i2c_bus_device *i2c_bus; /* I2C bus device handle */ + +/* Find the I2C bus device and get the I2C bus device handle */ +i2c_bus = (struct rt_i2c_bus_device *)rt_device_find(name); +``` + +### Data Transmission + +You can use `rt_i2c_transfer()` for data transfer by getting the I2C bus device handle. 
The function prototype is as follows: + +```c +rt_size_t rt_i2c_transfer(struct rt_i2c_bus_device *bus, + struct rt_i2c_msg msgs[], + rt_uint32_t num); +``` + +| Parameter | Description | +|--------------------|----------------------| +| bus | I2C bus device handle | +| msgs[] | Message array pointer to be transmitted | +| num | The number of elements in the message array | +| **Return Value** | —— | +| the number of elements in the message array | succeeded | +| error code | failed | + +Like the custom transport interface of the SPI bus, the data transmitted by the custom transport interface of the I2C bus is also in units of one message. The parameter msgs[] points to the array of messages to be transmitted. The user can customize the content of each message to implement two different data transmission modes supported by the I2C bus. If the master needs to send a repeat start condition, it will need to send 2 messages. + +>This function will call rt_mutex_take(), which cannot be called inside the interrupt service routine, which will cause assertion to report an error. + +The prototypes of the I2C message data structure are as follows: + +```c +struct rt_i2c_msg +{ + rt_uint16_t addr; /* Slave address */ + rt_uint16_t flags; /* Reading, writing signs, etc. */ + rt_uint16_t len; /* Read and write data bytes */ + rt_uint8_t *buf; /* Read and write data buffer pointer */ +} +``` + +Slave address (addr): Supports 7-bit and 10-bit binary addresses. You need to view the data sheets of different devices. + +>The slave address used by the RT-Thread I2C device interface does not contain read/write bits. The read/write bit control needs to modify the flag `flags`. + +The flags `flags` can be defined as macros that can be combined with other macros using the bitwise operation "|" as needed. + +```c +#define RT_I2C_WR 0x0000 /* Write flag */ +#define RT_I2C_RD (1u << 0) /* Read flag */ +#define RT_I2C_ADDR_10BIT (1u << 2) /* 10-bit address mode */ +#define RT_I2C_NO_START (1u << 4) /* No start condition */ +#define RT_I2C_IGNORE_NACK (1u << 5) /* Ignore NACK */ +#define RT_I2C_NO_READ_ACK (1u << 6) /* Do not send ACK when reading */ +``` + +Examples of use are as follows: + +```c +#define AHT10_I2C_BUS_NAME "i2c1" /* Sensor connected I2C bus device name */ +#define AHT10_ADDR 0x38 /* Slave address */ +struct rt_i2c_bus_device *i2c_bus; /* I2C bus device handle */ + +/* Find the I2C bus device and get the I2C bus device handle */ +i2c_bus = (struct rt_i2c_bus_device *)rt_device_find(name); + +/* Read sensor register data */ +static rt_err_t read_regs(struct rt_i2c_bus_device *bus, rt_uint8_t len, rt_uint8_t *buf) +{ + struct rt_i2c_msg msgs; + + msgs.addr = AHT10_ADDR; /* Slave address */ + msgs.flags = RT_I2C_RD; /* Read flag */ + msgs.buf = buf; /* Read and write data buffer pointer */ + msgs.len = len; /* Read and write data bytes */ + + /* Call the I2C device interface to transfer data */ + if (rt_i2c_transfer(bus, &msgs, 1) == 1) + { + return RT_EOK; + } + else + { + return -RT_ERROR; + } +} +``` + +## I2C Bus Device Usage Example + +The specific usage of the I2C device can be referred to the following sample code. The main steps of the sample code are as follows: + +1. First find the I2C name based on the I2C device name, get the device handle, and then initialize the aht10 sensor. +2. The two functions that control the sensor are the write sensor register `write_reg()` and the read sensor register `read_regs()`, both called `rt_i2c_transfer()` to transfer the data. 
The function `read_temp_humi()` calls the above two functions to read the temperature and humidity information. + +```c +/* + * Program listing: This is an I2C device usage routine + * The routine exports the i2c_aht10_sample command to the control terminal + * Command call format: i2c_aht10_sample i2c1 + * Command explanation: The second parameter of the command is the name of the I2C bus device to be used. If it is empty, the default I2C bus device is used. + * Program function: read the temperature and humidity data of the aht10 sensor and print. +*/ + +#include +#include + +#define AHT10_I2C_BUS_NAME "i2c1" /* Sensor connected I2C bus device name */ +#define AHT10_ADDR 0x38 /* Slave address */ +#define AHT10_CALIBRATION_CMD 0xE1 /* Calibration command */ +#define AHT10_NORMAL_CMD 0xA8 /* General command */ +#define AHT10_GET_DATA 0xAC /* Get data command */ + +static struct rt_i2c_bus_device *i2c_bus = RT_NULL; /* I2C bus device handle */ +static rt_bool_t initialized = RT_FALSE; /* Sensor initialization status */ + +/* Write sensor register */ +static rt_err_t write_reg(struct rt_i2c_bus_device *bus, rt_uint8_t reg, rt_uint8_t *data) +{ + rt_uint8_t buf[3]; + struct rt_i2c_msg msgs; + + buf[0] = reg; //cmd + buf[1] = data[0]; + buf[2] = data[1]; + + msgs.addr = AHT10_ADDR; + msgs.flags = RT_I2C_WR; + msgs.buf = buf; + msgs.len = 3; + + /* Call the I2C device interface to transfer data */ + if (rt_i2c_transfer(bus, &msgs, 1) == 1) + { + return RT_EOK; + } + else + { + return -RT_ERROR; + } +} + +/* Read sensor register data */ +static rt_err_t read_regs(struct rt_i2c_bus_device *bus, rt_uint8_t len, rt_uint8_t *buf) +{ + struct rt_i2c_msg msgs; + + msgs.addr = AHT10_ADDR; + msgs.flags = RT_I2C_RD; + msgs.buf = buf; + msgs.len = len; + + /* Call the I2C device interface to transfer data */ + if (rt_i2c_transfer(bus, &msgs, 1) == 1) + { + return RT_EOK; + } + else + { + return -RT_ERROR; + } +} + +static void read_temp_humi(float *cur_temp, float *cur_humi) +{ + rt_uint8_t temp[6]; + + write_reg(i2c_bus, AHT10_GET_DATA, 0); /* send command */ + rt_thread_mdelay(400); + read_regs(i2c_bus, 6, temp); /* obtian sensor data */ + + /* Humidity data conversion */ + *cur_humi = (temp[1] << 12 | temp[2] << 4 | (temp[3] & 0xf0) >> 4) * 100.0 / (1 << 20); + /* Temperature data conversion */ + *cur_temp = ((temp[3] & 0xf) << 16 | temp[4] << 8 | temp[5]) * 200.0 / (1 << 20) - 50; +} + +static void aht10_init(const char *name) +{ + rt_uint8_t temp[2] = {0, 0}; + + /* Find the I2C bus device and get the I2C bus device handle */ + i2c_bus = (struct rt_i2c_bus_device *)rt_device_find(name); + + if (i2c_bus == RT_NULL) + { + rt_kprintf("can't find %s device!\n", name); + } + else + { + write_reg(i2c_bus, AHT10_NORMAL_CMD, temp); + rt_thread_mdelay(400); + + temp[0] = 0x08; + temp[1] = 0x00; + write_reg(i2c_bus, AHT10_CALIBRATION_CMD, temp); + rt_thread_mdelay(400); + initialized = RT_TRUE; + } +} + +static void i2c_aht10_sample(int argc, char *argv[]) +{ + float humidity, temperature; + char name[RT_NAME_MAX]; + + humidity = 0.0; + temperature = 0.0; + + if (argc == 2) + { + rt_strncpy(name, argv[1], RT_NAME_MAX); + } + else + { + rt_strncpy(name, AHT10_I2C_BUS_NAME, RT_NAME_MAX); + } + + if (!initialized) + { + /* Sensor initialization */ + aht10_init(name); + } + if (initialized) + { + /* Read temperature and humidity data */ + read_temp_humi(&temperature, &humidity); + + rt_kprintf("read aht10 sensor humidity : %d.%d %%\n", (int)humidity, (int)(humidity * 10) % 10); + rt_kprintf("read aht10 sensor 
temperature: %d.%d \n", (int)temperature, (int)(temperature * 10) % 10); + } + else + { + rt_kprintf("initialize sensor failed!\n"); + } +} +/* Export to the msh command list */ +MSH_CMD_EXPORT(i2c_aht10_sample, i2c aht10 sample); +``` + diff --git a/documentation/device/pin/figures/pin2.png b/documentation/device/pin/figures/pin2.png new file mode 100644 index 0000000000..d51371fecc Binary files /dev/null and b/documentation/device/pin/figures/pin2.png differ diff --git a/documentation/device/pin/pin.md b/documentation/device/pin/pin.md new file mode 100644 index 0000000000..2528373de1 --- /dev/null +++ b/documentation/device/pin/pin.md @@ -0,0 +1,353 @@ +# PIN Device + +## Introduction of Pin + +The pins on the chip are generally divided into four categories: power supply, clock, control, and I/O. The I/O port is further divided into General Purpose Input Output (GPIO) and function multiplex I/O (such as SPI/I2C/UART, etc.) in the usage mode. + +Most MCU pins have more than one function. The internal structure of different pins is different and the functions are different. The actual function of the pin can be switched through different configurations. The main features of the General Purpose Input Output (GPIO) port are as follows: + +* Programmable Interrupt: The interrupt trigger mode is configurable. Generally, there are five interrupt trigger modes as shown in the following figure: + + ![5 Interrupt Trigger Modes](figures/pin2.png) + +* Input and output modes can be controlled. + + * Output modes generally include Output push-pull, Output open-drain, Output pull-up, and Output pull-down. When the pin is in the output mode, the connected peripherals can be controlled by configuring the level of the pin output to be high or low. + + * Input modes generally include: Input floating, Input pull-up, Input pull-down, and Analog. When the pin is in the input mode, the level state of the pin can be read, that is, high level or low level. + +## Access PIN Device + +The application accesses the GPIO through the PIN device management interface provided by RT-Thread. The related interfaces are as follows: + +| Function | **Description** | +| ---------------- | ---------------------------------- | +| rt_pin_mode() | Set pin mode | +| rt_pin_write() | Set the pin level | +| rt_pin_read() | Read pin level | +| rt_pin_attach_irq() | Bind pin interrupt callback function | +| rt_pin_irq_enable() | Enable pin interrupt | +| rt_pin_detach_irq() | Detach pin interrupt callback function | + +### Obtain Pin Number + +The pin numbers provided by RT-Thread need to be distinguished from the chip pin numbers. They are not the same concept. The pin numbers are defined by the PIN device driver and are related to the specific chip. There are two ways to obtain the pin number: use the macro definition or view the PIN driver file. + +#### Use Macro Definition + +If you use the BSP in the `rt-thread/bsp/stm32` directory, you can use the following macro to obtain the pin number: + +```c +GET_PIN(port, pin) +``` + +The sample code for the pin number corresponding to LED0 with pin number PF9 is as follows: + +```c +#define LED0_PIN GET_PIN(F, 9) +``` + +#### View Driver Files + +If you use a different BSP, you will need to check the PIN driver code `drv_gpio.c` file to confirm the pin number. 
There is an array in this file that holds the number information for each PIN pin, as shown below: + +```c +static const rt_uint16_t pins[] = +{ + __STM32_PIN_DEFAULT, + __STM32_PIN_DEFAULT, + __STM32_PIN(2, A, 15), + __STM32_PIN(3, B, 5), + __STM32_PIN(4, B, 8), + __STM32_PIN_DEFAULT, + __STM32_PIN_DEFAULT, + __STM32_PIN_DEFAULT, + __STM32_PIN(8, A, 14), + __STM32_PIN(9, B, 6), + ... ... +} +``` + +Take `__STM32_PIN(2, A, 15)` as an example, 2 is the pin number used by RT-Thread, A is the port number, and 15 is the pin number, so the pin number corresponding to PA15 is 2. + +### Set Pin Mode + +Before the pin is used, you need to set the input or output mode first, and the following functions are used: + +```c +void rt_pin_mode(rt_base_t pin, rt_base_t mode); +``` + +| Parameter | **Discription** | +| --------- | ------------------ | +| pin | Pin number | +| mode | Pin operation mode | + +At present, the pin working mode supported by RT-Thread can take one of the five macro definition values as shown. The mode supported by the chip corresponding to each mode needs to refer to the specific implementation of the PIN device driver: + +```c +#define PIN_MODE_OUTPUT 0x00 /* Output */ +#define PIN_MODE_INPUT 0x01 /* Input */ +#define PIN_MODE_INPUT_PULLUP 0x02 /* input Pull up */ +#define PIN_MODE_INPUT_PULLDOWN 0x03 /* input Pull down */ +#define PIN_MODE_OUTPUT_OD 0x04 /* output Open drain */ +``` + +An example of use is as follows: + +```c +#define BEEP_PIN_NUM 35 /* PB0 */ + +/* Buzzer pin is in output mode */ +rt_pin_mode(BEEP_PIN_NUM, PIN_MODE_OUTPUT); +``` + +### Set The Pin Level + +The function to set the pin output level is as follows: + +```c +void rt_pin_write(rt_base_t pin, rt_base_t value); +``` + +| **Parameter** | Discription | +|----------|-------------------------| +| pin | Pin number | +| value | Level logic value, which can take one of two macro definition values: PIN_LOW means low level, or PIN_HIGH means high level | + +Examples of use are as follows: + +```c +#define BEEP_PIN_NUM 35 /* PB0 */ + +/* Beep's pin is in output mode */ +rt_pin_mode(BEEP_PIN_NUM, PIN_MODE_OUTPUT); +/* Set low level */ +rt_pin_write(BEEP_PIN_NUM, PIN_LOW); +``` + +### Read Pin Level + +The functions to read the pin level are as follows: + +```c +int rt_pin_read(rt_base_t pin); +``` + +| Parameter | Description | +| ---------- | ----------- | +| pin | Pin number | +| **return** | —— | +| PIN_LOW | Low level | +| PIN_HIGH | High level | + +Examples of use are as follows: + +```c +#define BEEP_PIN_NUM 35 /* PB0 */ +int status; + +/* Buzzer pin is in output mode */ +rt_pin_mode(BEEP_PIN_NUM, PIN_MODE_OUTPUT); +/* Set low level */ +rt_pin_write(BEEP_PIN_NUM, PIN_LOW); + +status = rt_pin_read(BEEP_PIN_NUM); +``` + +### Bind Pin Interrupt Callback Function + +To use the interrupt function of the pin, you can use the following function to configure a pin to some interrupt trigger mode and bind an interrupt callback function to the corresponding pin. When the pin interrupt occurs, the callback function will be executed. : + +```c +rt_err_t rt_pin_attach_irq(rt_int32_t pin, rt_uint32_t mode, + void (*hdr)(void *args), void *args); +``` + +| Parameter | Description | +| ---------- | ------------------------------------------------------------ | +| pin | Pin number | +| mode | Interrupt trigger mode | +| hdr | Interrupt callback function. 
Users need to define this function | +| args | Interrupt the parameters of the callback function, set to RT_NULL when not needed | +| return | —— | +| RT_EOK | Binding succeeded | +| error code | Binding failed | + +Interrupt trigger mode mode can take one of the following five macro definition values: + +```c +#define PIN_IRQ_MODE_RISING 0x00 /* Rising edge trigger */ +#define PIN_IRQ_MODE_FALLING 0x01 /* Falling edge trigger */ +#define PIN_IRQ_MODE_RISING_FALLING 0x02 /* Edge trigger (triggered on both rising and falling edges)*/ +#define PIN_IRQ_MODE_HIGH_LEVEL 0x03 /* High level trigger */ +#define PIN_IRQ_MODE_LOW_LEVEL 0x04 /* Low level trigger */ +``` + +Examples of use are as follows: + +```c +#define KEY0_PIN_NUM 55 /* PD8 */ +/* Interrupt callback function */ +void beep_on(void *args) +{ + rt_kprintf("turn on beep!\n"); + + rt_pin_write(BEEP_PIN_NUM, PIN_HIGH); +} +static void pin_beep_sample(void) +{ + /* Button 0 pin is the input mode */ + rt_pin_mode(KEY0_PIN_NUM, PIN_MODE_INPUT_PULLUP); + /* Bind interrupt, rising edge mode, callback function named beep_on */ + rt_pin_attach_irq(KEY0_PIN_NUM, PIN_IRQ_MODE_FALLING, beep_on, RT_NULL); +} +``` + +### Enable Pin Interrupt + +After binding the pin interrupt callback function, use the following function to enable pin interrupt: + +```c +rt_err_t rt_pin_irq_enable(rt_base_t pin, rt_uint32_t enabled); +``` + +| **Parameter** | **Description** | +|----------|----------------| +| pin | Pin number | +| enabled | Status, one of two values: PIN_IRQ_ENABLE, and PIN_IRQ_DISABLE | +| **return** | —— | +| RT_EOK | Enablement succeeded | +| error code | Enablement failed | + +Examples of use are as follows: + +```c +#define KEY0_PIN_NUM 55 /* PD8 */ +/* Interrupt callback function */ +void beep_on(void *args) +{ + rt_kprintf("turn on beep!\n"); + + rt_pin_write(BEEP_PIN_NUM, PIN_HIGH); +} +static void pin_beep_sample(void) +{ + /* Key 0 pin is the input mode */ + rt_pin_mode(KEY0_PIN_NUM, PIN_MODE_INPUT_PULLUP); + /* Bind interrupt, rising edge mode, callback function named beep_on */ + rt_pin_attach_irq(KEY0_PIN_NUM, PIN_IRQ_MODE_FALLING, beep_on, RT_NULL); + /* Enable interrupt */ + rt_pin_irq_enable(KEY0_PIN_NUM, PIN_IRQ_ENABLE); +} +``` + +### Detach Pin Interrupt Callback Function + +You can use the following function to detach the pin interrupt callback function: + +```c +rt_err_t rt_pin_detach_irq(rt_int32_t pin); +``` + +| **Parameter** | **Description** | +| ------------- | -------------------- | +| pin | Pin number | +| **return** | —— | +| RT_EOK | Detachment succeeded | +| error code | Detachment failed | + +After the pin detaches the interrupt callback function, the interrupt is not closed. You can also call the bind interrupt callback function to bind the other callback functions again. + +```c +#define KEY0_PIN_NUM 55 /* PD8 */ +/* Interrupt callback function */ +void beep_on(void *args) +{ + rt_kprintf("turn on beep!\n"); + + rt_pin_write(BEEP_PIN_NUM, PIN_HIGH); +} +static void pin_beep_sample(void) +{ + /* Key 0 pin is the input mode */ + rt_pin_mode(KEY0_PIN_NUM, PIN_MODE_INPUT_PULLUP); + /* Bind interrupt, rising edge mode, callback function named beep_on */ + rt_pin_attach_irq(KEY0_PIN_NUM, PIN_IRQ_MODE_FALLING, beep_on, RT_NULL); + /* Enable interrupt */ + rt_pin_irq_enable(KEY0_PIN_NUM, PIN_IRQ_ENABLE); + /* Detach interrupt callback function */ + rt_pin_detach_irq(KEY0_PIN_NUM); +} +``` + +## PIN Device Usage Example + +The following sample code is the pin device usage example. 
The main steps of the sample code are as follows: + +1. Set the corresponding pin of the beep to the output mode and give a default low state. + +2. Set the key 0 and button 1 corresponding to the input mode, then bind the interrupt callback function and enable the interrupt. + +3. When the key 0 is pressed, the beep starts to sound, and when the key 1 is pressed, the beep stops. + +```c +/* + * Program listing: This is a PIN device usage routine + * The routine exports the pin_beep_sample command to the control terminal + * Command call format:pin_beep_sample + * Program function: control the buzzer by controlling the level state of the corresponding pin of the buzzer by pressing the button +*/ + +#include +#include + +/* Pin number, determined by looking at the device driver file drv_gpio.c */ +#ifndef BEEP_PIN_NUM + #define BEEP_PIN_NUM 35 /* PB0 */ +#endif +#ifndef KEY0_PIN_NUM + #define KEY0_PIN_NUM 55 /* PD8 */ +#endif +#ifndef KEY1_PIN_NUM + #define KEY1_PIN_NUM 56 /* PD9 */ +#endif + +void beep_on(void *args) +{ + rt_kprintf("turn on beep!\n"); + + rt_pin_write(BEEP_PIN_NUM, PIN_HIGH); +} + +void beep_off(void *args) +{ + rt_kprintf("turn off beep!\n"); + + rt_pin_write(BEEP_PIN_NUM, PIN_LOW); +} + +static void pin_beep_sample(void) +{ + /* Beep pin is in output mode */ + rt_pin_mode(BEEP_PIN_NUM, PIN_MODE_OUTPUT); + /* Default low level */ + rt_pin_write(BEEP_PIN_NUM, PIN_LOW); + + /* KEY 0 pin is the input mode */ + rt_pin_mode(KEY0_PIN_NUM, PIN_MODE_INPUT_PULLUP); + /* Bind interrupt, falling edge mode, callback function named beep_on */ + rt_pin_attach_irq(KEY0_PIN_NUM, PIN_IRQ_MODE_FALLING, beep_on, RT_NULL); + /* Enable interrupt */ + rt_pin_irq_enable(KEY0_PIN_NUM, PIN_IRQ_ENABLE); + + /* KEY 1 pin is input mode */ + rt_pin_mode(KEY1_PIN_NUM, PIN_MODE_INPUT_PULLUP); + /* Binding interrupt, falling edge mode, callback function named beep_off */ + rt_pin_attach_irq(KEY1_PIN_NUM, PIN_IRQ_MODE_FALLING, beep_off, RT_NULL); + /* Enable interrupt */ + rt_pin_irq_enable(KEY1_PIN_NUM, PIN_IRQ_ENABLE); +} +/* Export to the msh command list */ +MSH_CMD_EXPORT(pin_beep_sample, pin beep sample); +``` diff --git a/documentation/device/pwm/figures/pwm-f.png b/documentation/device/pwm/figures/pwm-f.png new file mode 100644 index 0000000000..1c6c7370cc Binary files /dev/null and b/documentation/device/pwm/figures/pwm-f.png differ diff --git a/documentation/device/pwm/figures/pwm-l.png b/documentation/device/pwm/figures/pwm-l.png new file mode 100644 index 0000000000..e793b2cfca Binary files /dev/null and b/documentation/device/pwm/figures/pwm-l.png differ diff --git a/documentation/device/pwm/pwm.md b/documentation/device/pwm/pwm.md new file mode 100644 index 0000000000..57338b3db1 --- /dev/null +++ b/documentation/device/pwm/pwm.md @@ -0,0 +1,265 @@ +# PWM Device + +## Introduction to PWM + +PWM (Pulse Width Modulation) is a method of digitally encoding the level of an analog signal. The frequency of the square wave is used to encode the level of a specific analog signal by pulses of different frequencies. The output receives a series of pulses of equal magnitude and uses these pulses to replace the device with the desired waveform. + +![PWM Schematic Diagram](figures/pwm-f.png) + +Above is a simple schematic diagram of PWM. Assuming that the timer works in a up-counter mode. When the count value is less than the threshold, it outputs a level state, such as a high level. When the count value is greater than the threshold, it outputs the opposite, such as a low level. 
When the count value reaches the maximum value, the counter recounts from 0 and returns to the original level state. The ratio of the high-level duration (pulse width) to the cycle time is the duty cycle, ranging from 0 to 100%. The high level of the above picture is just half of the cycle time, so the duty cycle is 50%. + +One of the common PWM control scenarios is to adjust the brightness of the light or screen. The brightness can be adjusted according to the duty cycle. The PWM adjusts the brightness not continuously, but constantly lights up and turns off the screen. When the light is turned on and off fast enough, the naked eye will always think that it is always bright. In the process of on and off, the longer the light is off, the lower the brightness of the screen to the naked eye. The longer the light is on, the less time is spent and the screen will be brighter. + +![PWM Brightness Adjustment](figures/pwm-l.png) + +## Access to PWM Devices + +The application accesses the PWM device hardware through the PWM device management interface provided by RT-Thread. The related interfaces are as follows: + +| **Function** | Description | +| ----------------- | ---------------------------------- | +| rt_device_find() | Find device handles based on the name of PWM device | +| rt_pwm_set() | Set PWM period and pulse width | +| rt_pwm_enable() | Enable PWM device | +| rt_pwm_disable() | Disable the PWM device | + +### Find the PWM Device + +The application obtains the device handle based on the name of PWM device, which in turn can operate the PWM device. The function is as follows: + +```c +rt_device_t rt_device_find(const char* name); +``` + +| Parameter | Description | +| -------- | ---------------------------------- | +| name | Device | +| **Return** | —— | +| Device handle | Found the corresponding device, will return the corresponding device handle | +| RT_NULL | Device not found | + +In general, the name of the PWM device registered to the system is pwm0, pwm1, etc. The usage examples are as follows: + +```c +#define PWM_DEV_NAME "pwm3" /* name of PWM device */ +struct rt_device_pwm *pwm_dev; /* PWM device handle */ +/* Search the device */ +pwm_dev = (struct rt_device_pwm *)rt_device_find(PWM_DEV_NAME); +``` + +### Set PWM Period and Pulse Width + +Set the PWM period and duty cycle by using the following function: + +```c +rt_err_t rt_pwm_set(struct rt_device_pwm *device, + int channel, + rt_uint32_t period, + rt_uint32_t pulse); +``` + +| Parameter | Description | +| ---------- | ----------------- | +| device | PWM device handle | +| channel | PWM channel | +| period | PWM period (ns) | +| pulse | PWM pulse width time (ns) | +| **Return** | —— | +| RT_EOK | successful | +| -RT_EIO | device is null | +| -RT_ENOSYS | Device operation method is null | +| Other Errors | Execute failed | + +The output frequency of the PWM is determined by the period. For example, the time of a period is 0.5ms (milliseconds), the period value is 500000ns (nanoseconds), the output frequency is 2KHz, the duty cycle is `pulse / period`, and the pulse value cannot exceed period. 
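To make the arithmetic concrete, below is a minimal sketch (the helper name `pwm_freq_duty_to_ns()` is made up for illustration and is not part of the RT-Thread API) that converts a desired output frequency in Hz and a duty cycle in percent into the nanosecond values expected by `rt_pwm_set()`:

```c
#include <rtthread.h>

/* Illustrative helper (not an RT-Thread API): convert a frequency in Hz and a
 * duty cycle in percent into the nanosecond period/pulse used by rt_pwm_set(). */
static void pwm_freq_duty_to_ns(rt_uint32_t freq_hz, rt_uint32_t duty_percent,
                                rt_uint32_t *period_ns, rt_uint32_t *pulse_ns)
{
    *period_ns = 1000000000UL / freq_hz;   /* e.g. 2 kHz -> 500000 ns */
    /* pulse must not exceed period; use 64-bit math to avoid overflow */
    *pulse_ns  = (rt_uint32_t)((unsigned long long)*period_ns * duty_percent / 100);
}
```

For example, 2 kHz at a 25% duty cycle gives a period of 500000 ns and a pulse width of 125000 ns, which are the values then passed to `rt_pwm_set()`.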
+ +An example of use is as follows: + +```c +#define PWM_DEV_NAME "pwm3" /* name of PWM device */ +#define PWM_DEV_CHANNEL 4 /* PWM channel */ +struct rt_device_pwm *pwm_dev; /* PWM device handle */ +rt_uint32_t period, pulse; + +period = 500000; /* The period is 0.5ms, the unit is nanoseconds */ +pulse = 0; /* PWM pulse width value, the unit is nanoseconds */ +/* Search the device */ +pwm_dev = (struct rt_device_pwm *)rt_device_find(PWM_DEV_NAME); +/* Set the PWM period and pulse width */ +rt_pwm_set(pwm_dev, PWM_DEV_CHANNEL, period, pulse); +``` + +### Enable the PWM Device + +After setting the PWM period and pulse width, you can enable the PWM device by the following function: + +```c +rt_err_t rt_pwm_enable(struct rt_device_pwm *device, int channel); +``` + +| Parameter | Description | +| ---------- | ------------------------------- | +| device | PWM device handle | +| channel | PWM channel | +| **Return** | —— | +| RT_EOK | Enable device successful | +| -RT_ENOSYS | Device operation method is null | +| Other Errors | Enable device failed | + +An example of use is as follows: + +```c +#define PWM_DEV_NAME "pwm3" /* name of PWM device */ +#define PWM_DEV_CHANNEL 4 /* PWM channel */ +struct rt_device_pwm *pwm_dev; /* PWM device handle */ +rt_uint32_t period, pulse; + +period = 500000; /* The period is 0.5ms, the unit is nanoseconds */ +pulse = 0; /* PWM pulse width value, the unit is nanoseconds */ +/* Search the device */ +pwm_dev = (struct rt_device_pwm *)rt_device_find(PWM_DEV_NAME); +/* Set the PWM period and pulse width */ +rt_pwm_set(pwm_dev, PWM_DEV_CHANNEL, period, pulse); +/* Enable the device */ +rt_pwm_enable(pwm_dev, PWM_DEV_CHANNEL); +``` + +### Disable the PWM device Channel + +Use the following function to turn off the corresponding channel of the PWM device. + +```c +rt_err_t rt_pwm_disable(struct rt_device_pwm *device, int channel); +``` + +| **Parameter** | Description | +| ---------- | ------------------------------- | +| device | PWM device handle | +| channel | PWM channel | +| **Return** | —— | +| RT_EOK | Turn off device successful | +| -RT_EIO | Device handle is null | +| Other Errors | Turn off device failed | + +An example of use is as follows: + +```c +#define PWM_DEV_NAME "pwm3" /* name of PWM device */ +#define PWM_DEV_CHANNEL 4 /* PWM channel */ +struct rt_device_pwm *pwm_dev; /* PWM device handle */ +rt_uint32_t period, pulse; + +period = 500000; /* The period is 0.5ms, the unit is nanoseconds */ +pulse = 0; /* PWM pulse width value, the unit is nanoseconds */ +/* Search the device */ +pwm_dev = (struct rt_device_pwm *)rt_device_find(PWM_DEV_NAME); +/* Set the PWM period and pulse width */ +rt_pwm_set(pwm_dev, PWM_DEV_CHANNEL, period, pulse); +/* Enable the device */ +rt_pwm_enable(pwm_dev, PWM_DEV_CHANNEL); +/* Turn off the device channel */ +rt_pwm_disable(pwm_dev,PWM_DEV_CHANNEL); +``` + +## FinSH Command + +To set the period and duty cycle of a channel of a PWM device, use the command `pwm_set pwm1 1 500000 5000`. The first parameter is the command, the second parameter is the PWM device name, the third parameter is the PWM channel, and the fourth parameter is PWM period(ns), the fifth parameter is the pulse width (ns). + +```c +msh />pwm_set pwm1 1 500000 5000 +msh /> +``` + +To enable a channel of the PWM device, use the command`pwm_enable pwm1 1`. The first parameter is the command, the second parameter is the PWM device name, and the third parameter is the PWM channel. 
+ +```c +msh />pwm_enable pwm1 1 +msh /> +``` + +To disable a channel of the PWM device, use the command `pwm_disable pwm1 1`. The first parameter is the command, the second parameter is the PWM device name, and the third parameter is the PWM channel. + +```c +msh />pwm_disable pwm1 1 +msh /> +``` + +## PWM Device Usage Example + +The following sample code is a PWM device usage sample . The main steps of the sample code are as follows: + +1. Find the PWM device to get the device handle. +2. Set the PWM period and pulse width. +3. Enable the PWM device. +4. The pulse width is modified every 50 milliseconds in the while loop. +5. Connect the PWM channel to a LED, and you can see that the LED changes from dark to bright gradually, and then from bright to dark. + +```c +/* + * Program list: This is PWM device usage example + * The routine exports the pwm_led_sample command to the control terminal + * Format for Command: pwm_led_sample + * Program function: By controlling the brightness of the LED light through the PWM device, + * you can see that the LED changes from dark to bright gradually, then from bright to dark. + */ + +#include +#include + +#define PWM_DEV_NAME "pwm3" /* PWM device name */ +#define PWM_DEV_CHANNEL 4 /* PWM channel */ + +struct rt_device_pwm *pwm_dev; /* PWM device handle */ + +static int pwm_led_sample(int argc, char *argv[]) +{ + rt_uint32_t period, pulse, dir; + + period = 500000; /* The period is 0.5ms, the unit is nanoseconds */ + dir = 1; /* Increase or decrease direction of PWM pulse width value */ + pulse = 0; /* PWM pulse width value, the unit is nanoseconds*/ + + /* Set LED pin mode to output */ + rt_pin_mode(LED_PIN_NUM, PIN_MODE_OUTPUT); + /* Set high LED pin mode */ + rt_pin_write(LED_PIN_NUM, PIN_HIGH); + + /* Search the Device */ + pwm_dev = (struct rt_device_pwm *)rt_device_find(PWM_DEV_NAME); + if (pwm_dev == RT_NULL) + { + rt_kprintf("pwm sample run failed! can't find %s device!\n", PWM_DEV_NAME); + return RT_ERROR; + } + + /* Set PWM period and pulse width defaults */ + rt_pwm_set(pwm_dev, PWM_DEV_CHANNEL, period, pulse); + /* Enable device */ + rt_pwm_enable(pwm_dev, PWM_DEV_CHANNEL); + + while (1) + { + rt_thread_mdelay(50); + if (dir) + { + pulse += 5000; /* Increase 5000ns each time from 0 */ + } + else + { + pulse -= 5000; /* 5000ns reduction from the maximum */ + } + if (pulse >= period) + { + dir = 0; + } + if (0 == pulse) + { + dir = 1; + } + + /* Set the PWM period and pulse width */ + rt_pwm_set(pwm_dev, PWM_DEV_CHANNEL, period, pulse); + } +} +/* Export to the msh command list */ +MSH_CMD_EXPORT(pwm_led_sample, pwm sample); +``` diff --git a/documentation/device/rtc/rtc.md b/documentation/device/rtc/rtc.md new file mode 100644 index 0000000000..503f1d814d --- /dev/null +++ b/documentation/device/rtc/rtc.md @@ -0,0 +1,198 @@ +# RTC Device + +## Introduction of RTC + +The RTC (Real-Time Clock) provides accurate real-time clock time, which can be used to generate information such as year, month, day, hour, minute, and second. At present, most real-time clock chips use a higher precision crystal oscillator as a clock source. In order to work when the main power supply is powered down, some clock chips will be powered by a battery to keep the time information valid. + +The RT-Thread RTC device provides the basic services for the operating system's time system. 
In the face of more and more IoT scenarios, RTC has become the standard configuration of the product, and even in the secure transmission process such as SSL, RTC has become an indispensable part. + + +## Access RTC Devices + +The application accesses the RTC hardware through the RTC device management interface, and the relevant interfaces are as follows: + +| **Function** | Description | +| ------------- | ---------------------------------- | +| set_date() | Set date, year, month, day | +| set_time() | Set time, hour, minute, second | +| time() | Obtain current time | + +### Set Date + +Set the current date value of the RTC device by the following functions: + +```c +rt_err_t set_date(rt_uint32_t year, rt_uint32_t month, rt_uint32_t day) +``` + +| **Parameter** | **Description** | +| -------- | ---------------------------------- | +|year |The year to be set to take effect| +|month |The month to be set to take effect| +|day | The date to be set to take effect | +| **return** | —— | +| RT_EOK | Set-up succeeded | +| -RT_ERROR | Set-up failed, no rtc device found | +| other error code | Set-up failed | + +An example of use is as follows: + +```c +/* Set the date to December 3, 2018 */ +set_date(2018, 12, 3); +``` + +### Set Time + +Set the current time value of the RTC device by the following function: + +```c +rt_err_t set_time(rt_uint32_t hour, rt_uint32_t minute, rt_uint32_t second) +``` + +| **Parameter** | **Description** | +| ---------- | ------------------------------- | +|hour |The hour to be set to take effect| +|minute |The minute to be set to take effect| +|second |The second to be set to take effect| +| **return** | —— | +| RT_EOK | Set-up succeeded | +| -RT_ERROR | Set-up failed, no rtc device found | +| other error code | Set-up failed | + +An example of use is as follows: + +```c +/* Set the time to 11:15:50 */ +set_time(11, 15, 50); +``` + +### Obtain Current Time + +Obtain time using the time API in the C standard library: + +```c +time_t time(time_t *t) +``` + +| **Parameter** | **Description** | +| ---------- | ------------------------------- | +|t |Time data pointer | +| **return** | —— | +| Current time value | | + +Examples of use are as follows: + +```c +time_t now; /* Save the current time value obtained */ +/* Obtain Time */ +now = time(RT_NULL); +/* Printout time information */ +rt_kprintf("%s\n", ctime(&now)); +``` + +>Currently only one RTC device is allowed in the system and the name is `"rtc"`. + +## Functional Configuration + +### Enable Soft RTC (Software Emulation RTC) + +You can use the function of enabling RTC software emulation, which is ideal for products that do not require high time precision and have no hardware RTC. The configuration options of menuconfig are as follows: + +```c +RT-Thread Components → + Device Drivers: + -*- Using RTC device drivers /* Use RTC device driver */ + [ ] Using software simulation RTC device /* Use software simulation RTC device */ +``` + +### Enable NTP Time Automatic Synchronization + +If the RT-Thread is connected to the Internet, you can enable automatic NTP time synchronization to synchronize local time periodically. 
+ +First open the NTP function in menuconfig as follows: + +```c +RT-Thread online packages → + IoT - internet of things → + netutils: Networking utilities for RT-Thread: + [*] Enable NTP(Network Time Protocol) client +``` + +After the NTP is turned on, the RTC's automatic synchronization function will be automatically turned on, and the synchronization period and the delay time of the first synchronization can also be set: + +```c +RT-Thread Components → + Device Drivers: + -*- Using RTC device drivers /* Use RTC device driver */ + [ ] Using software simulation RTC device /* Use software simulation RTC device */ + [*] Using NTP auto sync RTC time /* Automatically synchronize RTC time with NTP */ + (30) NTP first sync delay time(second) for network connect /* The delay for performing NTP time synchronization for the first time. The purpose of the delay is to reserve a certain amount of time for the network connection and try to increase the success rate of the first NTP time synchronization. The default time is 30S; */ + (3600) NTP auto sync period(second) /* NTP The synchronization period is automatically synchronized in seconds, and the default period is one hour (ie 3600S). */ +``` + +## FinSH Command + +Enter `date` to view the current time. + +```c +msh />date +Fri Feb 16 01:11:56 2018 +msh /> +``` + +Also use the `date` command, after the command, enter `year` `month` `date` `hour ` ` minute ` ` second ` (between spaces, 24H system), and set the current time to 2018-02-16 01:15:30. The approximate effect is as follows: + +```c +msh />date 2018 02 16 01 15 30 +msh /> +``` + +## RTC Device Usage Examples + +For the specific usage of the RTC device, refer to the following example code. First, set the year, month, date, hour, minute and second information, and then delay the data for 3 seconds to get the current time information. + +```c +/* + * Program listing: This is an RTC device usage routine + * The routine exports the rtc_sample command to the control terminal + * Command call format:rtc_sample + * Program function: Set the date and time of the RTC device. After a delay, obtain the current time and print the display. +*/ + +#include +#include + +static int rtc_sample(int argc, char *argv[]) +{ + rt_err_t ret = RT_EOK; + time_t now; + + /* Set date */ + ret = set_date(2018, 12, 3); + if (ret != RT_EOK) + { + rt_kprintf("set RTC date failed\n"); + return ret; + } + + /* Set time */ + ret = set_time(11, 15, 50); + if (ret != RT_EOK) + { + rt_kprintf("set RTC time failed\n"); + return ret; + } + + /* Delay 3 seconds */ + rt_thread_mdelay(3000); + + /* Obtain Time */ + now = time(RT_NULL); + rt_kprintf("%s\n", ctime(&now)); + + return ret; +} +/* Export to the msh command list */ +MSH_CMD_EXPORT(rtc_sample, rtc sample); +``` diff --git a/documentation/device/sensor/sensor.md b/documentation/device/sensor/sensor.md new file mode 100644 index 0000000000..db9da643c9 --- /dev/null +++ b/documentation/device/sensor/sensor.md @@ -0,0 +1,465 @@ +# Sensor Device + +## Introduction + +Sensor is an important part of the Internet of Things, and "Sensor to the Internet of Things" is equivalent to "eyes to humans". Without eyes, human beings can not see the vast world of flowers. The same is true for the Internet of Things. + +Nowadays, with the development of Internet of Things, a large number of Sensors have been developed for developers to choose, such as Accelerometer, Magnetometer, Gyroscope, Barometer/pressure, Humidometer and so on. 
These sensors, manufactured by the world's leading semiconductor manufacturers, have increased market selectivity and made application development more difficult. Because different sensor manufacturers and sensors need their own unique drivers to run, so when developing applications, they need to adapt to different sensors, which naturally increases the difficulty of development. In order to reduce the difficulty of application development and increase the reusability of sensor driver, we designed a Sensor device. + +The function of Sensor device is to provide a unified operation interface for the upper layer and improve the reusability of the upper code. + +### Characteristics of Sensor Device + +- **Interface**: Standard device interface (open/close/read/control) +- **Work mode**: support polling, interruption, FIFO three modes +- **Power mode**: support four modes: power failure, common, low power consumption and high power consumption + +## Access Sensor Device + +The application accesses the sensor device through the I/O device management interface provided by RT-Thread. The related interfaces are as follows: + +| Functions | Description | +| --------------------------- | ------------------------------------------------------------ | +| rt_device_find() | Finding device handles based on device name of sensor device | +| rt_device_open() | open sensor device | +| rt_device_read() | read data | +| rt_device_control() | control sensor device | +| rt_device_set_rx_indicate() | setting reveive callback fuction | +| rt_device_close() | close sensor device | + +### Find Sensor Device + +The application obtains the device handle according to the name of the sensor device, and then can operate the sensor device. The function of finding the device is as follows: + +```c +rt_device_t rt_device_find(const char* name); +``` + +| **Parameter** | **Description** | +| ------------- | ------------------------------------------------------------ | +| name | sensor device name | +| **return** | —— | +| handle | Finding the corresponding device returns the corresponding device handle | +| RT_NULL | No corresponding device object was found | + +The use example is as follows: +```c +#define SENSOR_DEVICE_NAME "acce_st" /* sensor device name */ + +static rt_device_t sensor_dev; /* sensor device handle */ +/* Find the sensor device according to the device name and get the device handle */ +sensor_dev = rt_device_find(SENSOR_DEVICE_NAME); +``` + +### Open Sensor Device + +Through the device handle, the application can open and close the device. When the device is opened, it will check whether the device has been initialized or not. If it is not initialized, it will call the initialization interface initialization device by default. Open the device through the following functions: + +```c +rt_err_t rt_device_open(rt_device_t dev, rt_uint16_t oflags); +``` + +| **Parameter** | **Description** | +| ------------- | ------------------------------------------------------------ | +| dev | device handle | +| oflags | open mode flag | +| **Return** | —— | +| RT_EOK | open success | +| -RT_EBUSY | If the RT_DEVICE_FLAG_STANDALONE parameter is included in the parameter specified at the time of device registration, the device will not be allowed to open repeatedly. 
| +| -RT_EINVAL | Unsupported open mode | +| other err | open failed | + +The oflags parameter supports the following parameters: + +```c +#define RT_DEVICE_FLAG_RDONLY 0x001 /* Read-only mode for standard device, polling mode for corresponding sensors */ +#define RT_DEVICE_FLAG_INT_RX 0x100 /* Interrupt Receiving Mode */ +#define RT_DEVICE_FLAG_FIFO_RX 0x200 /* FIFO receiving mode */ +``` + +There are three modes of receiving and sending sensor data: interrupt mode, polling mode and FIFO mode. When using these three modes, **only one of them can be chosen**. If the sensor's open parameter oflags does not specify the use of interrupt mode or FIFO mode, polling mode is used by default. + +FIFO ,means first Input first output. FIFO transmission mode needs sensor hardware support, data is stored in hardware FIFO, read multiple data at a time, which saves CPU resources to do other operations. Very useful in low power mode + +If the sensor uses FIFO receiving mode, the value of oflags is RT_DEVICE_FLAG_FIFO_RX. + +An example of turning on sensor devices in polling mode is as follows: + +```c +#define SAMPLE_SENSOR_NAME "acce_st" /* sensor device name */ +int main(void) +{ + rt_device_t dev; + struct rt_sensor_data data; + + /* find sensor device */ + dev = rt_device_find(SAMPLE_SENSOR_NAME); + /* Open sensor devices in read-only and polling mode */ + rt_device_open(dev, RT_DEVICE_FLAG_RDWR); + + if (rt_device_read(dev, 0, &data, 1) == 1) + { + rt_kprintf("acce: x:%5d, y:%5d, z:%5d, timestamp:%5d\n", data.data.acce.x, data.data.acce.y, data.data.acce.z, data.timestamp); + } + rt_device_close(dev); + + return RT_EOK; +} +``` + +### Control Sensor Device + +By command control word, the application program can configure the sensor device through the following functions: + +```c +rt_err_t rt_device_control(rt_device_t dev, rt_uint8_t cmd, void* arg); +``` + +| **Parameter** | **Description** | +| ------------- | ------------------------------------------------------------ | +| dev | device handle | +| cmd | command control word, see below for more details. | +| arg | the parameters of command control word, see below for more details. | +| **Return** | —— | +| RT_EOK | success | +| -RT_ENOSYS | failed,device is NULL | +| other err | failed | + +`cmd` currently supports the following command control words: + +```c +#define RT_SEN_CTRL_GET_ID (0) /* read device ID */ +#define RT_SEN_CTRL_GET_INFO (1) /* get device information */ +#define RT_SEN_CTRL_SET_RANGE (2) /* Setting the measuring range of the sensor */ +#define RT_SEN_CTRL_SET_ODR (3) /* Setting the Output Rate of Sensor Data,unit is HZ */ +#define RT_SEN_CTRL_SET_MODE (4) /* Setting up working mode */ +#define RT_SEN_CTRL_SET_POWER (5) /* Setting up power mode */ +#define RT_SEN_CTRL_SELF_TEST (6) /* selfcheck */ +``` + +#### Get device information + +```c +struct rt_sensor_info info; +rt_device_control(dev, RT_SEN_CTRL_GET_INFO, &info); +LOG_I("vendor :%d", info.vendor); +LOG_I("model :%s", info.model); +LOG_I("unit :%d", info.unit); +LOG_I("intf_type :%d", info.intf_type); +LOG_I("period_min:%d", info.period_min); +``` + +#### Read Device ID + +```c +rt_uint8_t reg = 0xFF; +rt_device_control(dev, RT_SEN_CTRL_GET_ID, ®); +LOG_I("device id: 0x%x!", reg); +``` + +#### Setting the measuring range of the sensor + +The unit that sets the measuring range of the sensor is the unit that is provided when the device is registered. 
+ +```c +rt_device_control(dev, RT_SEN_CTRL_SET_RANGE, (void *)1000); +``` + +#### Setting the Output Rate of Sensor Data + +Set the output rate to 100HZ and call the following interface. + +```c +rt_device_control(dev, RT_SEN_CTRL_SET_ODR, (void *)100); +``` + +#### Setting up working mode + +```c +/* Set the working mode to polling mode */ +rt_device_control(dev, RT_SEN_CTRL_SET_MODE, (void *)RT_SEN_MODE_POLLING); +/* Set working mode to interrupt mode */ +rt_device_control(dev, RT_SEN_CTRL_SET_MODE, (void *)RT_SEN_MODE_INT); +/* Set working mode to FIFO mode */ +rt_device_control(dev, RT_SEN_CTRL_SET_MODE, (void *)RT_SEN_MODE_FIFO); +``` + +#### Setting up power mode + +```c +/* Set power mode to power-off mode */ +rt_device_control(dev, RT_SEN_CTRL_SET_POWER, (void *)RT_SEN_POWER_DOWN); +/* Set power mode to normal mode */ +rt_device_control(dev, RT_SEN_CTRL_SET_POWER, (void *)RT_SEN_POWER_NORMAL); +/* Setting Power Mode to Low Power Consumption Mode */ +rt_device_control(dev, RT_SEN_CTRL_SET_POWER, (void *)RT_SEN_POWER_LOW); +/* Setting Power Mode to High Performance Mode */ +rt_device_control(dev, RT_SEN_CTRL_SET_POWER, (void *)RT_SEN_POWER_HIGH); +``` + +#### Device self-inspection + +```c +int test_res; +/* Control equipment self-check and return the results. Returning RT_EOK indicates success of self-check and other values indicate failure of self-check. */ +rt_device_control(dev, RT_SEN_CTRL_SELF_TEST, &test_res); +``` + +### Setting Reveive Callback Fuction + +Data reception instructions can be set by following functions. When the sensor receives data, it notifies the upper application thread that data arrives: + +```c +rt_err_t rt_device_set_rx_indicate(rt_device_t dev, rt_err_t (*rx_ind)(rt_device_t dev,rt_size_t size)); +``` + +| **Parameter** | **Description** | +| ------------- | --------------------------------------------- | +| dev | device handle | +| rx_ind | Callback function pointer | +| dev | device handle(parameter of callback function) | +| size | buffer size(parameter of callback function) | +| **Return** | —— | +| RT_EOK | Successful setup | + +The callback function of the function is provided by the user. If the sensor is opened in interrupt mode, when the sensor receives data and interrupts, the callback function will be called, and the data size of the buffer will be placed in the `size` parameter, and the sensor device handle will be placed in the `dev` parameter for users to obtain. + +Generally, receiving callback function can send a semaphore or event to inform sensor data processing thread that data arrives. The use example is as follows: + +```c +#define SAMPLE_SENSOR_NAME "acce_st" /* sensor device name */ +static rt_device_t dev; /* sensoe device handle*/ +static struct rt_semaphore rx_sem; /* The semaphore used to receive messages */ + +/* Callback function for receiving data */ +static rt_err_t sensor_input(rt_device_t dev, rt_size_t size) +{ + /* When the sensor receives the data, it generates an interrupt, calls the callback function, and sends the semphore . 
*/ + rt_sem_release(&rx_sem); + + return RT_EOK; +} + +static int sensor_sample(int argc, char *argv[]) +{ + dev = rt_device_find(SAMPLE_SENSOR_NAME); + + /* Open Sensor Device in Interrupt Receive and Poll Send Mode */ + rt_device_open(dev, RT_DEVICE_FLAG_INT_RX); + /* init semphore */ + rt_sem_init(&rx_sem, "rx_sem", 0, RT_IPC_FLAG_FIFO); + + /* setting reveive callback function */ + rt_device_set_rx_indicate(dev, sensor_input); +} + +``` + +### Read Data of Sensor Device + +The following functions can be called to read the data received by the sensor: + +```c +rt_size_t rt_device_read(rt_device_t dev, rt_off_t pos, void* buffer, rt_size_t size); +``` + +| **Parameter** | **Description** | +| ---------------------- | ------------------------------------------------------------ | +| dev | device handle | +| pos | Read data offset, sensor does not use this parameter | +| buffer | Buffer pointer, read data will be saved in the buffer | +| size | Size of read data | +| **Return** | —— | +| Real size of read data | Returns the number of read data | +| 0 | The errno of the current thread needs to be read to determine the error status | + +The sensor uses the interrupt receiving mode and cooperates with the receiving callback function as follows: + +```c +static rt_device_t dev; /* sensor device handle */ +static struct rt_semaphore rx_sem; /* The semaphore used to receive messages */ + +/* Threads receiving data */ +static void sensor_irq_rx_entry(void *parameter) +{ + rt_device_t dev = parameter; + struct rt_sensor_data data; + rt_size_t res; + + while (1) + { + rt_sem_take(rx_sem, RT_WAITING_FOREVER); + + res = rt_device_read(dev, 0, &data, 1); + if (res == 1) + { + sensor_show_data(dev, &data); + } + } +} + +``` + +The sensor uses FIFO receiving mode and cooperates with receiving callback function as follows: + +```c +static rt_sem_t sensor_rx_sem = RT_NULL; +rt_err_t rx_cb(rt_device_t dev, rt_size_t size) +{ + rt_sem_release(sensor_rx_sem); + return 0; +} +static void sensor_fifo_rx_entry(void *parameter) +{ + rt_device_t dev = parameter; + struct rt_sensor_data data; + rt_size_t res, i; + + data = rt_malloc(sizeof(struct rt_sensor_data) * 32); + + while (1) + { + rt_sem_take(sensor_rx_sem, RT_WAITING_FOREVER); + + res = rt_device_read(dev, 0, data, 32); + for (i = 0; i < res; i++) + { + sensor_show_data(dev, &data[i]); + } + } +} +int main(void) +{ + static rt_thread_t tid1 = RT_NULL; + rt_device_t dev; + struct rt_sensor_data data; + + sensor_rx_sem = rt_sem_create("sen_rx_sem", 0, RT_IPC_FLAG_FIFO); + tid1 = rt_thread_create("sen_rx_thread", + sensor_fifo_rx_entry, dev, + 1024, + 15, 5); + if (tid1 != RT_NULL) + rt_thread_startup(tid1); + + dev = rt_device_find("acce_st"); + rt_device_set_rx_indicate(dev, rx_cb); + rt_device_open(dev, RT_SEN_FLAG_FIFO); + return RT_EOK; +} +``` + +### Close Sensor Device + +When the application completes the sensor operation, the sensor device can be closed by the following functions: + +```c +rt_err_t rt_device_close(rt_device_t dev); +``` + +| **Parameter** | **Description** | +| ------------- | ------------------------------------------------------------ | +| dev | device handle | +| **Return** | —— | +| RT_EOK | The equipment was closed successfully. | +| -RT_ERROR | The device has been completely shut down and cannot be closed repeatedly. 
| +| other err | failed to close th device | + +Closing the device interface and opening the device interface should be used in pairs, opening the primary device should close the primary device, so that the device will be completely closed, otherwise the device is still in an open state. + +## Example Code for Sensor Device + +The specific use of sensor devices can be referred to the following sample code, the main steps of the sample code are as follows: + +1. Find the sensor device first and get the device handle. + +2. Open the sensor device by polling. + +3. Read the data five times in a row and print it out. + +4. Close the sensor device. + +This sample code is not limited to a specific BSP. According to the BSP registered sensor device, input different dev_name to run. + +```c +/* + * Program List: This is a routine for sensor devices + * The routine exports the sensor_sample command to the control terminal + * Command Call Format:sensor_sample dev_name + * Command Interpretation: The second parameter of the command is the name of the sensor device to be used. + * Program function: Open the corresponding sensor, and then read the data five times in a row and print it out. +*/ + +#include "sensor.h" + +static void sensor_show_data(rt_size_t num, rt_sensor_t sensor, struct rt_sensor_data *sensor_data) +{ + switch (sensor->info.type) + { + case RT_SENSOR_CLASS_ACCE: + rt_kprintf("num:%3d, x:%5d, y:%5d, z:%5d, timestamp:%5d\n", num, sensor_data->data.acce.x, sensor_data->data.acce.y, sensor_data->data.acce.z, sensor_data->timestamp); + break; + case RT_SENSOR_CLASS_GYRO: + rt_kprintf("num:%3d, x:%8d, y:%8d, z:%8d, timestamp:%5d\n", num, sensor_data->data.gyro.x, sensor_data->data.gyro.y, sensor_data->data.gyro.z, sensor_data->timestamp); + break; + case RT_SENSOR_CLASS_MAG: + rt_kprintf("num:%3d, x:%5d, y:%5d, z:%5d, timestamp:%5d\n", num, sensor_data->data.mag.x, sensor_data->data.mag.y, sensor_data->data.mag.z, sensor_data->timestamp); + break; + case RT_SENSOR_CLASS_HUMI: + rt_kprintf("num:%3d, humi:%3d.%d%%, timestamp:%5d\n", num, sensor_data->data.humi / 10, sensor_data->data.humi % 10, sensor_data->timestamp); + break; + case RT_SENSOR_CLASS_TEMP: + rt_kprintf("num:%3d, temp:%3d.%dC, timestamp:%5d\n", num, sensor_data->data.temp / 10, sensor_data->data.temp % 10, sensor_data->timestamp); + break; + case RT_SENSOR_CLASS_BARO: + rt_kprintf("num:%3d, press:%5d, timestamp:%5d\n", num, sensor_data->data.baro, sensor_data->timestamp); + break; + case RT_SENSOR_CLASS_STEP: + rt_kprintf("num:%3d, step:%5d, timestamp:%5d\n", num, sensor_data->data.step, sensor_data->timestamp); + break; + default: + break; + } +} + +static void sensor_sample(int argc, char **argv) +{ + rt_device_t dev = RT_NULL; + struct rt_sensor_data data; + rt_size_t res, i; + + /* Finding Sensor Devices in the System */ + dev = rt_device_find(argv[1]); + if (dev == RT_NULL) + { + rt_kprintf("Can't find device:%s\n", argv[1]); + return; + } + + /* Open sensor devices in polling mode */ + if (rt_device_open(dev, RT_DEVICE_FLAG_RDWR) != RT_EOK) + { + rt_kprintf("open device failed!"); + return; + } + + for (i = 0; i < 5; i++) + { + /* Read a data from a sensor */ + res = rt_device_read(dev, 0, &data, 1); + if (res != 1) + { + rt_kprintf("read data failed!size is %d", res); + } + else + { + sensor_show_data(i, (rt_sensor_t)dev, &data); + } + rt_thread_mdelay(100); + } + /* Close the sensor device */ + rt_device_close(dev); +} +MSH_CMD_EXPORT(sensor_sample, sensor device sample); +``` + diff --git 
a/documentation/device/spi/figures/spi1.png b/documentation/device/spi/figures/spi1.png new file mode 100644 index 0000000000..2096701713 Binary files /dev/null and b/documentation/device/spi/figures/spi1.png differ diff --git a/documentation/device/spi/figures/spi2.png b/documentation/device/spi/figures/spi2.png new file mode 100644 index 0000000000..55c3188d55 Binary files /dev/null and b/documentation/device/spi/figures/spi2.png differ diff --git a/documentation/device/spi/figures/spi5.png b/documentation/device/spi/figures/spi5.png new file mode 100644 index 0000000000..2e71536451 Binary files /dev/null and b/documentation/device/spi/figures/spi5.png differ diff --git a/documentation/device/spi/spi.md b/documentation/device/spi/spi.md new file mode 100644 index 0000000000..de56a20823 --- /dev/null +++ b/documentation/device/spi/spi.md @@ -0,0 +1,741 @@ +# SPI Device + +## Introduction to SPI + +SPI (Serial Peripheral Interface) is a high-speed, full-duplex, synchronous communication bus commonly used for short-range communication. It is mainly used in EEPROM, FLASH, real-time clock, AD converter, and digital signal processing and between the device and the digital signal decoder. SPI generally uses 4 lines of communication, as shown in the following figure: + +![Ways of communication from SPI Master to SPI Slave](figures/spi1.png) + +* MOSI :SPI Bus Master Output/Slave Input. + +* MISO :SPI Bus Master Input/Slave Output. + +* SCLK :Serial Clock, Master device outputs clock signal to slave device. + +* CS : select the slave device, also called SS, CSB, CSN, EN, etc., the master device outputs a chip select signal to the slave device. + +The SPI works in master-slave mode and usually has one master and one or more slaves. The communication is initiated by the master device. The master device selects the slave device to communicate through CS, and then provides a clock signal to the slave device through SCLK. The data is output to the slave device through the MOSI, and the data sent by the slave device is received through the MISO. + +As shown in the figure below, the chip has two SPI controllers. The SPI controller corresponds to the SPI master. Each SPI controller can connect multiple SPI slaves. The slave devices mounted on the same SPI controller share three signal pins: SCK, MISO, MOSI, but the CS pins of each slave device are independent. + +![Connect from one SPI controller to multiple SPI slaves](figures/spi2.png) + +The master device selects the slave device by controlling the CS pin, typically active low. Only one CS pin is active on an SPI master, and the slave connected to the active CS pin can now communicate with the master. + +The slave's clock is provided by the master through SCLK, and MOSI and MISO complete the data transfer based on SCLK. The working timing mode of the SPI is determined by the phase relationship between CPOL (Clock Polarity) and CPHA (Clock Phase). CPOL represents the state of the initial level of the clock signal. A value of 0 indicates that the initial state of the clock signal is low, and a value of 1 indicates that the initial level of the clock signal is high. CPHA indicates on which clock edge the data is sampled. A value of 0 indicates that the data is sampled on the first clock change edge, and a value of 1 indicates that the data is sampled on the second clock change edge. There are 4 working timing modes according to different combinations of CPOL and CPHA: ①CPOL=0, CPHA=0; ②CPOL=0, CPHA=1; ③CPOL=1, CPHA=0; ④CPOL=1, CPHA=1. 
As shown below: + +![4 working timing modes of SPI](figures/spi5.png) + +**QSPI:** QSPI is short for Queued SPI and is an extension of the SPI interface from Motorola, which is more extensive than SPI applications. Based on the SPI protocol, Motorola has enhanced its functionality, added a queue transfer mechanism, and introduced a queue serial peripheral interface protocol (QSPI protocol). Using this interface, users can transfer transmission queues containing up to 16 8-bit or 16-bit data at one time. Once the transfer is initiated, CPU is not required until the end of the transfer, greatly improving the transfer efficiency. Compared to SPI, the biggest structural feature of QSPI is the replacement of the transmit and receive data registers of the SPI with 80 bytes of RAM. + +**Dual SPI Flash:** For SPI Flash, full-duplex is not commonly used. You can send a command byte into Dual mode and let it work in half-duplex mode to double data transfer. Thus, MOSI becomes SIO0 (serial io 0), and MISO becomes SIO1 (serial io 1), so that 2 bit data can be transmitted in one clock cycle, which doubles the data transmission. + +**Quad SPI Flash:** Similar to the Dual SPI, Quad SPI Flash adds two I/O lines (SIO2, SIO3) to transfer 4 bits of data in one clock. + +So for SPI Flash, there are three types of standard SPI Flash, Dual SPI Flash, Quad SPI Flash. At the same clock, the higher the number of lines, the higher the transmission rate. + +## Mount SPI Device + +The SPI driver registers the SPI bus and the SPI device needs to be mounted to the SPI bus that has already been registered. + +```C +rt_err_t rt_spi_bus_attach_device(struct rt_spi_device *device, + const char *name, + const char *bus_name, + void *user_data) +``` + +| **Parameter** | Description | +| -------- | ---------------------------------- | +| device | SPI device handle | +| name | SPI device name | +| bus_name | SPI bus name | +| user_data | user data pointer | +| **Return** | —— | +| RT_EOK | Success | +| Other Errors | Failure | + +This function is used to mount an SPI device to the specified SPI bus, register the SPI device with the kernel, and save user_data to the control block of the SPI device. + +The general SPI bus naming principle is spix, and the SPI device naming principle is spixy. For example, spi10 means device 0 mounted on the spi1 bus. User_data is generally the CS pin pointer of the SPI device. When data is transferred, the SPI controller will operate this pin for chip select. + +If you use the BSP in the `rt-thread/bsp/stm32` directory, you can use the following function to mount the SPI device to the bus: + +```c +rt_err_t rt_hw_spi_device_attach(const char *bus_name, const char *device_name, GPIO_TypeDef* cs_gpiox, uint16_t cs_gpio_pin); +``` + +The following sample code mounts the SPI FLASH W25Q128 to the SPI bus: + +```c +static int rt_hw_spi_flash_init(void) +{ + __HAL_RCC_GPIOB_CLK_ENABLE(); + rt_hw_spi_device_attach("spi1", "spi10", GPIOB, GPIO_PIN_14); + + if (RT_NULL == rt_sfud_flash_probe("W25Q128", "spi10")) + { + return -RT_ERROR; + }; + + return RT_EOK; +} +/* Export to automatic initialization */ +INIT_COMPONENT_EXPORT(rt_hw_spi_flash_init); +``` + +## Configuring SPI Device + +The SPI device's transmission parameters need to be configured after the SPI device is mounted to the SPI bus. 
+ +```c +rt_err_t rt_spi_configure(struct rt_spi_device *device, + struct rt_spi_configuration *cfg) +``` + +| **Parameter** | **Description** | +| -------- | ---------------------------------- | +| device | SPI device handle | +| cfg | SPI configuration parameter pointer | +| **Return** | —— | +| RT_EOK | Success | + +This function saves the configuration parameters pointed to by `cfg` to the control block of the SPI device device, which is used when transferring data. + +The `struct rt_spi_configuration` prototype is as follows: + +```c +struct rt_spi_configuration +{ + rt_uint8_t mode; /* mode */ + rt_uint8_t data_width; /* data width, 8 bits, 16 bits, 32 bits */ + rt_uint16_t reserved; /* reserved */ + rt_uint32_t max_hz; /* maximum frequency */ +}; +``` + +**Mode: **Contains MSB/LSB, master-slave mode, timing mode, etc. The available macro combinations are as follows: + +```c +/* Set the data transmission order whether the MSB bit is first or the LSB bit is before */ +#define RT_SPI_LSB (0<<2) /* bit[2]: 0-LSB */ +#define RT_SPI_MSB (1<<2) /* bit[2]: 1-MSB */ + +/* Set the master-slave mode of the SPI */ +#define RT_SPI_MASTER (0<<3) /* SPI master device */ +#define RT_SPI_SLAVE (1<<3) /* SPI slave device */ + +/* Set clock polarity and clock phase */ +#define RT_SPI_MODE_0 (0 | 0) /* CPOL = 0, CPHA = 0 */ +#define RT_SPI_MODE_1 (0 | RT_SPI_CPHA) /* CPOL = 0, CPHA = 1 */ +#define RT_SPI_MODE_2 (RT_SPI_CPOL | 0) /* CPOL = 1, CPHA = 0 */ +#define RT_SPI_MODE_3 (RT_SPI_CPOL | RT_SPI_CPHA) /* CPOL = 1, CPHA = 1 */ + +#define RT_SPI_CS_HIGH (1<<4) /* Chipselect active high */ +#define RT_SPI_NO_CS (1<<5) /* No chipselect */ +#define RT_SPI_3WIRE (1<<6) /* SI/SO pin shared */ +#define RT_SPI_READY (1<<7) /* Slave pulls low to pause */ +``` + +**Data width:** The data width format that can be sent and received by the SPI master and SPI slaves is set to 8-bit, 16-bit or 32-bit. + +**Maximum Frequency:** Set the baud rate for data transfer, also based on the baud rate range at which the SPI master and SPI slaves operate. + +The example for configuration is as follows: + +```c + struct rt_spi_configuration cfg; + cfg.data_width = 8; + cfg.mode = RT_SPI_MASTER | RT_SPI_MODE_0 | RT_SPI_MSB; + cfg.max_hz = 20 * 1000 *1000; /* 20M */ + + rt_spi_configure(spi_dev, &cfg); +``` + +## QSPI Configuration + +To configure the transmission parameters of a QSPI device, use the following function: + +```c +rt_err_t rt_qspi_configure(struct rt_qspi_device *device, struct rt_qspi_configuration *cfg); +``` + +| **Parameter** | **Description** | +| -------- | ---------------------------------- | +| device | QSPI device handle | +| cfg | QSPI configuration parameter pointer | +| **Return** | —— | +| RT_EOK | Success | + +This function saves the configuration parameters pointed to by `cfg` to the control block of the QSPI device, which is used when transferring data. + +The `struct rt_qspi_configuration` prototype is as follows: + +```c +struct rt_qspi_configuration +{ + struct rt_spi_configuration parent; /* SPI device configuration parent */ + rt_uint32_t medium_size; /* medium size */ + rt_uint8_t ddr_mode; /* double rate mode */ + rt_uint8_t qspi_dl_width ; /* QSPI bus width, single line mode 1 bit, 2 line mode 2 bits, 4 line mode 4 bits */ +}; +``` + +## Access SPI Device + +In general, the MCU's SPI device communicates as a master and slave. In the RT-Thread, the SPI master is virtualized as an SPI bus device. The application uses the SPI device management interface to access the SPI slave device. 
The main interfaces are as follows: + +| **Function** | **Description** | +| -------------------- | ---------------------------------- | +| rt_device_find() | Find device handles based on SPI device name | +| rt_spi_transfer_message() | Custom transfer data | +| rt_spi_transfer() | Transfer data once | +| rt_spi_send() | Send data once | +| rt_spi_recv() | Receive data one | +| rt_spi_send_then_send() | Send data twice | +| rt_spi_send_then_recv() | Send then Receive | + +>The SPI data transfer related interface will call rt_mutex_take(). This function cannot be called in the interrupt service routine, which will cause the assertion to report an error. + +### Find SPI Device + +Before using the SPI device, you need to find and obtain the device handle according to the SPI device name, so that you can operate the SPI device. The device function is as follows. + +```c +rt_device_t rt_device_find(const char* name); +``` + +| **Parameter** | **Description** | +| ------------- | ------------------------------------------------------------ | +| name | Device name | +| **Return** | —— | +| device handle | Finding the corresponding device will return the corresponding device handle | +| RT_NULL | Corresponding device object unfound | + +In general, the name of the SPI device registered to the system is spi10, qspi10, etc. The usage examples are as follows: + +```c +#define W25Q_SPI_DEVICE_NAME "qspi10" /* SPI device name */ +struct rt_spi_device *spi_dev_w25q; /* SPI device handle */ + +/* Find the spi device to get the device handle */ +spi_dev_w25q = (struct rt_spi_device *)rt_device_find(W25Q_SPI_DEVICE_NAME); +``` + +### Transfer Custom Data + +By obtaining the SPI device handle, the SPI device management interface can be used to access the SPI device device for data transmission and reception. You can transfer messages by the following function: + +```c +struct rt_spi_message *rt_spi_transfer_message(struct rt_spi_device *device,struct rt_spi_message *message); +``` + +| **Parameter** | **Description** | +| ---------------- | ------------------------------------------------------------ | +| device | SPI device handle | +| message | message pointer | +| **Return** | —— | +| RT_NULL | Send successful | +| Non-null pointer | Send failed, return a pointer to the remaining unsent message | + +This function can transmit a series of messages, the user can customize the value of each parameter of the message structure to be transmitted, so that the data transmission mode can be conveniently controlled. The `struct rt_spi_message` prototype is as follows: + +```c +struct rt_spi_message +{ + const void *send_buf; /* Send buffer pointer */ + void *recv_buf; /* Receive buffer pointer */ + rt_size_t length; /* Send/receive data bytes */ + struct rt_spi_message *next; /* Pointer to the next message to continue sending */ + unsigned cs_take : 1; /* Take chip selection*/ + unsigned cs_release : 1; /* Release chip selection */ +}; +``` +send_buf :sendbuf is the send buffer pointer. When the value is RT_NULL, it means that the current transmission is only receiving state, and no data needs to be sent. + +recv_buf :recvbuf is the receive buffer pointer. When the value is RT_NULL, it means that the current transmission is in the transmit-only state. It does not need to save the received data, so the received data is directly discarded. + +length :The unit of length is word, that is, when the data length is 8 bits, each length occupies 1 byte; when the data length is 16 bits, each length occupies 2 bytes. 
+ +next :The parameter next is a pointer to the next message to continue to send. If only one message is sent, the value of this pointer is RT_NULL. Multiple messages to be transmitted are connected together in a singly linked list by the next pointer. + +cs_take :A cs_take value of 1 means that the corresponding CS is set to a valid state before data is transferred. + +cs_release :A cs_release value of 1 indicates that the corresponding CS is released after the data transfer ends. + +>When send_buf or recv_buf is not empty, the available size for both cannot be less than length. +If you use this function to transfer messages, the first message sent by cs_take needs to be set to 1. Set the chip to be valid, and the cs_release of the last message needs to be set to 1. Release the chip select. + +An example of use is as follows: + +```c +#define W25Q_SPI_DEVICE_NAME "qspi10" /* SPI device name */ +struct rt_spi_device *spi_dev_w25q; /* SPI device handle */ +struct rt_spi_message msg1, msg2; +rt_uint8_t w25x_read_id = 0x90; /* command */ +rt_uint8_t id[5] = {0}; + +/* Find the spi device to get the device handle */ +spi_dev_w25q = (struct rt_spi_device *)rt_device_find(W25Q_SPI_DEVICE_NAME); +/* Send command to read ID */ +struct rt_spi_message msg1, msg2; + +msg1.send_buf = &w25x_read_id; +msg1.recv_buf = RT_NULL; +msg1.length = 1; +msg1.cs_take = 1; +msg1.cs_release = 0; +msg1.next = &msg2; + +msg2.send_buf = RT_NULL; +msg2.recv_buf = id; +msg2.length = 5; +msg2.cs_take = 0; +msg2.cs_release = 1; +msg2.next = RT_NULL; + +rt_spi_transfer_message(spi_dev_w25q, &msg1); +rt_kprintf("use rt_spi_transfer_message() read w25q ID is:%x%x\n", id[3], id[4]); +``` + +### Transfer Data Once + +If only transfer data for once, use the following function: + +```c +rt_size_t rt_spi_transfer(struct rt_spi_device *device, + const void *send_buf, + void *recv_buf, + rt_size_t length); +``` + +| **Parameter** | **Description** | +|----------|----------------------| +| device | SPI device handle | +| send_buf | Send data buffer pointer | +| recv_buf | Receive data buffer pointer | +| length | Length of data send/received | +| **Return** | —— | +| 0 | Transmission failed | +| Non-0 Value | Length of data successfully transferred | + +This function is equivalent to calling `rt_spi_transfer_message()` to transfer a message. When starting to send data, the chip is selected. When the function returns, the chip is released. The message parameter is configured as follows: + +```c +struct rt_spi_message msg; + +msg.send_buf = send_buf; +msg.recv_buf = recv_buf; +msg.length = length; +msg.cs_take = 1; +msg.cs_release = 1; +msg.next = RT_NULL; +``` + +### Send Data Once + +If only send data once and ignore the received data, use the following function: + +```c +rt_size_t rt_spi_send(struct rt_spi_device *device, + const void *send_buf, + rt_size_t length) +``` + +| **Parameter** | **Description** | +|----------|--------------------| +| device | SPI device handle | +| send_buf | Send data buffer pointer | +| length | Length of data sent | +| **Return** | —— | +| 0 | Transmission failed | +| Non-0 Value | Length of data successfully transferred | + +Call this function to send the data of the buffer pointed to by send_buf, ignoring the received data. This function is a wrapper of the `rt_spi_transfer()` function. + +This function is equivalent to calling `rt_spi_transfer_message()` to transfer a message. When the data starts to be sent, the chip is selected. When the function returns, the chip is released. 
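+
+For example, a single command byte can be sent as shown in the sketch below. This is a minimal illustration only: the handle `spi_dev_w25q` is assumed to have been obtained with `rt_device_find()` as in the earlier examples, and the Write Enable value `0x06` is a typical NOR-flash command used purely as a placeholder.
+
+```c
+rt_uint8_t cmd = 0x06;   /* Write Enable command (illustrative value) */
+rt_size_t  len;
+
+/* Send one byte; the byte clocked back at the same time is discarded */
+len = rt_spi_send(spi_dev_w25q, &cmd, 1);
+if (len != 1)
+{
+    rt_kprintf("spi send failed!\n");
+}
+```
+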
Internally, the message parameter is configured as follows:
+
+```c
+struct rt_spi_message msg;
+
+msg.send_buf   = send_buf;
+msg.recv_buf   = RT_NULL;
+msg.length     = length;
+msg.cs_take    = 1;
+msg.cs_release = 1;
+msg.next       = RT_NULL;
+```
+
+### Receive Data Once
+
+To receive data once only, use the following function:
+
+```c
+rt_size_t rt_spi_recv(struct rt_spi_device *device,
+                      void *recv_buf,
+                      rt_size_t length);
+```
+
+| **Parameter** | **Description** |
+|----------|--------------------|
+| device | SPI device handle |
+| recv_buf | Receive data buffer pointer |
+| length | Length of data to receive |
+| **Return** | —— |
+| 0 | Transmission failed |
+| Non-0 Value | Length of data successfully transferred |
+
+Call this function to receive data and save it to the buffer pointed to by recv_buf. This function is a wrapper of the `rt_spi_transfer()` function. The SPI bus protocol stipulates that only the master can generate the clock, so while receiving data the master sends the dummy value 0xFF.
+
+This function is equivalent to calling `rt_spi_transfer_message()` to transfer a message. The chip is selected when reception starts and released when the function returns. The message parameter is configured as follows:
+
+```c
+struct rt_spi_message msg;
+
+msg.send_buf   = RT_NULL;
+msg.recv_buf   = recv_buf;
+msg.length     = length;
+msg.cs_take    = 1;
+msg.cs_release = 1;
+msg.next       = RT_NULL;
+```
+
+### Send Data Twice in Succession
+
+To send the data of two buffers in succession without releasing CS in between, call the following function:
+
+```c
+rt_err_t rt_spi_send_then_send(struct rt_spi_device *device,
+                               const void *send_buf1,
+                               rt_size_t send_length1,
+                               const void *send_buf2,
+                               rt_size_t send_length2);
+```
+
+| **Parameter** | **Description** |
+|--------------|---------------------------|
+| device | SPI device handle |
+| send_buf1 | Send data buffer pointer 1 |
+| send_length1 | Send data buffer length 1 |
+| send_buf2 | Send data buffer pointer 2 |
+| send_length2 | Send data buffer length 2 |
+| **Return** | —— |
+| RT_EOK | Send successful |
+| -RT_EIO | Send failed |
+
+This function sends the data of the two buffers in succession and ignores the received data. CS is selected when send_buf1 starts to be sent and released after send_buf2 has been sent.
+
+This function is suitable for writing a block of data to an SPI device, where a command and address are sent first and data of the specified length is sent second. The data is sent as two buffers, rather than merged into a single block or sent with two calls to `rt_spi_send()`, for the following reasons: in most write operations the command and address occupy only a few bytes, so merging them with the payload would require allocating extra memory and copying a lot of data; and if `rt_spi_send()` were called twice, the chip select would be released after the command and address were sent. Most SPI devices treat the assertion of chip select as the start of a command, so once the chip select is released after the command or address, the write operation is discarded.
+
+This function is equivalent to calling `rt_spi_transfer_message()` to transfer 2 messages; a usage sketch follows, and the equivalent message configuration is shown after it.
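+
+As a sketch of the typical "command and address first, data second" pattern described above — the `0x02` page-program command, the 3-byte address layout and the W25Q handle are assumptions used only for illustration, not values taken from this manual:
+
+```c
+rt_uint8_t cmd_addr[4];
+rt_uint8_t data[16] = {0};           /* payload to be written */
+
+cmd_addr[0] = 0x02;                  /* page program command (illustrative) */
+cmd_addr[1] = 0x00;                  /* 24-bit address, MSB first */
+cmd_addr[2] = 0x10;
+cmd_addr[3] = 0x00;
+
+/* CS stays asserted from the first byte of cmd_addr to the last byte of data */
+if (rt_spi_send_then_send(spi_dev_w25q, cmd_addr, sizeof(cmd_addr), data, sizeof(data)) != RT_EOK)
+{
+    rt_kprintf("spi send_then_send failed!\n");
+}
+```
+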
The message parameter is configured as follows: + +```c +struct rt_spi_message msg1,msg2; + +msg1.send_buf = send_buf1; +msg1.recv_buf = RT_NULL; +msg1.length = send_length1; +msg1.cs_take = 1; +msg1.cs_release = 0; +msg1.next = &msg2; + +msg2.send_buf = send_buf2; +msg2.recv_buf = RT_NULL; +msg2.length = send_length2; +msg2.cs_take = 0; +msg2.cs_release = 1; +msg2.next = RT_NULL; +``` + +### Receive Data After Sending Data + +If need to send data to the slave device first, then receive the data sent from the slave device, and the CS is not released within the process, call the following function to implement: + +```c +rt_err_t rt_spi_send_then_recv(struct rt_spi_device *device, + const void *send_buf, + rt_size_t send_length, + void *recv_buf, + rt_size_t recv_length); +``` + +| **Parameter** | **Description** | +|-------------|--------------------------| +| device | SPI slave device handle | +| send_buf | Send data buffer pointer | +| send_length | Send data buffer length | +| recv_buf | Receive data buffer pointer | +| recv_length | Receive data buffer length | +| **Return** | —— | +| RT_EOK | Successful | +| -RT_EIO | Failed | + +This function select CS when sending the first data send_buf when the received data is ignored, and the second data is sent. At this time, the master device will send the data 0XFF, and the received data will be saved in recv_buf, and CS will be released when the function returns. + +This function is suitable for reading a piece of data from the SPI slave device. The first time it will send some command and address data, and then receive the data of the specified length. + +This function is equivalent to calling `rt_spi_transfer_message()` to transfer 2 messages. The message parameter is configured as follows: + +```c +struct rt_spi_message msg1,msg2; + +msg1.send_buf = send_buf; +msg1.recv_buf = RT_NULL; +msg1.length = send_length; +msg1.cs_take = 1; +msg1.cs_release = 0; +msg1.next = &msg2; + +msg2.send_buf = RT_NULL; +msg2.recv_buf = recv_buf; +msg2.length = recv_length; +msg2.cs_take = 0; +msg2.cs_release = 1; +msg2.next = RT_NULL; +``` + +The SPI device management module also provides `rt_spi_sendrecv8()` and `rt_spi_sendrecv16()` functions, both are wrapper of the `rt_spi_send_then_recv()`. `rt_spi_sendrecv8()` sends a byte data and receives one byte data, and`rt_spi_sendrecv16()` sends 2 bytes. The section data receives 2 bytes of data at the same time. + +## Access QSPI Device + +The data transfer interface of QSPI is as follows: + +| **P**arameter | **Description** | +| -------------------- | ----------------------------| +| rt_qspi_transfer_message() | Transfer message | +| rt_qspi_send_then_recv() | Send then receive | +| rt_qspi_send() | Send data once | + +>The QSPI data transfer related interface will call rt_mutex_take(). This function cannot be called in the interrupt service routine, which will cause the assertion to report an error. 
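+
+Before each interface is described in detail, the following minimal sketch shows how they are typically combined to read the JEDEC ID of a QSPI NOR flash. The device name `"qspi10"`, the `0x9F` command byte and the 3-byte ID length are illustrative assumptions rather than values taken from this manual.
+
+```c
+#define QSPI_FLASH_NAME  "qspi10"    /* assumed QSPI device name */
+
+struct rt_qspi_device *qspi_dev;
+rt_uint8_t cmd = 0x9F;               /* Read JEDEC ID command (illustrative) */
+rt_uint8_t id[3] = {0};
+
+/* Find the registered QSPI device, send the command, then read 3 ID bytes */
+qspi_dev = (struct rt_qspi_device *)rt_device_find(QSPI_FLASH_NAME);
+if (qspi_dev != RT_NULL)
+{
+    rt_qspi_send_then_recv(qspi_dev, &cmd, 1, id, sizeof(id));
+    rt_kprintf("JEDEC ID: %02x %02x %02x\n", id[0], id[1], id[2]);
+}
+```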
+ +### Transfer Data + +Transfer messages by the following function: + +```c +rt_size_t rt_qspi_transfer_message(struct rt_qspi_device *device, struct rt_qspi_message *message); +``` + +| **Parameter** | **Description** | +|----------|--------------------------------------------| +| device | QSPI device handle | +| message | Message pointer | +| **Return** | —— | +| Actual transmitted message size | | + +The message `structure struct rt_qspi_message` prototype is as follows: + +```c +struct rt_qspi_message +{ + struct rt_spi_message parent; /* inhert from struct rt_spi_message */ + + struct + { + rt_uint8_t content; /* Instruction content */ + rt_uint8_t qspi_lines; /* Instruction mode, single line mode 1 bit, 2 line mode 2 bits, 4 line mode 4 bits */ + } instruction; /* Instruction phase */ + + struct + { + rt_uint32_t content; /* Address/alternate byte content */ + rt_uint8_t size; /* Address/alternate byte size */ + rt_uint8_t qspi_lines; /* Address/alternate byte mode, single line mode 1 bit, 2 line mode 2 bits, 4 line mode 4 bits */ + } address, alternate_bytes; /* Address/alternate byte stage */ + + rt_uint32_t dummy_cycles; /* Dummy cycle */ + rt_uint8_t qspi_data_lines; /* QSPI data line */ +}; +``` + +### Receive Data + +Use the following function to receive data: + +```c +rt_err_t rt_qspi_send_then_recv(struct rt_qspi_device *device, + const void *send_buf, + rt_size_t send_length, + void *recv_buf, + rt_size_t recv_length); +``` + +| **Parameter** | **Description** | +|-------------|--------------------------| +| device | QSPI device handle | +| send_buf | Send data buffer pointer | +| send_length | Send data length | +| recv_buf | Receive data buffer pointer | +| recv_length | Receive data length | +| **Return** | —— | +| RT_EOK | Successful | +| Other Errors | Failed | + +The send_buf parameter contains the sequence of commands that will be sent. + +### Send Data + +```c +rt_err_t rt_qspi_send(struct rt_qspi_device *device, const void *send_buf, rt_size_t length) +``` + +| **Parameter** | **Description** | +|-------------|--------------------------| +| device | QSPI device handle | +| send_buf | Send data buffer pointer | +| length | Send data length | +| **Return** | —— | +| RT_EOK | Successful | +| Other Errors | Failed | + +The send_buf parameter contains the sequence of commands and data to be sent. + +## Special Usage Scenarios + +In some special usage scenarios, a device wants to monopolize the bus for a period of time, and the CS is always valid during the period, during which the data transmission may be intermittent, then the relevant interface can be used as shown. The transfer data function must use `rt_spi_transfer_message()`, and this function must set the cs_take and cs_release of the message to be transmitted to 0 value, because the CS has already used other interface control, and does not need to control during data transmission. + +### Acquire the SPI bus + +In the case of multi-threading, the same SPI bus may be used in different threads. In order to prevent the data being transmitted by the SPI bus from being lost, the slave device needs to acquire the right to use the SPI bus before starting to transfer data. 
To transfer data using the bus, use the following function to acquire the SPI bus: + +```c +rt_err_t rt_spi_take_bus(struct rt_spi_device *device); +``` + +| **Parameter** | **Description** | +|----------|---------------| +| device | SPI device handle | +| **Return** | —— | +| RT_EOK | Successful | +| Other Errors | Failed | + +### Select CS + +After obtaining the usage right of the bus from the device, you need to set the corresponding chip selection signal to be valid. You can use the following function to select the CS: + +```c +rt_err_t rt_spi_take(struct rt_spi_device *device); +``` + +| **Parameter** | **Description** | +|----------|---------------| +| device | SPI device handle | +| **Return** | —— | +| 0 | Successful | +| Other Errors | Failed | + +### Add a New Message + +When using `rt_spi_transfer_message()` to transfer messages, all messages to be transmitted are connected in the form of a singly linked list. Use the following function to add a new message to be sent to the message list: + +```c +void rt_spi_message_append(struct rt_spi_message *list, + struct rt_spi_message *message); +``` + +| **Parameter** | **Description** | +| ------------- | ----------------------------------- | +| list | Message link node to be transmitted | +| message | New message pointer | + +### Release CS + +After the device data transfer is completed, CS need to be released. Use the following function to release the CS: + +```c +rt_err_t rt_spi_release(struct rt_spi_device *device); +``` + +| **Parameter** | **D**escription | +|----------|---------------| +| device | SPI device handle | +| Return | —— | +| 0 | Successful | +| Other Errors | Failed | + +### Release Data Bus + +The slave device does not use the SPI bus to transfer data. The bus must be released as soon as possible so that other slave devices can use the SPI bus to transfer data. The following function can be used to release the bus: + +```c +rt_err_t rt_spi_release_bus(struct rt_spi_device *device); +``` + +| **Parameter** | **Description** | +|----------|---------------| +| device | SPI device handle | +| **Return** | —— | +| RT_EOK | Successful | + +## SPI Device Usage Example + +The specific use of the SPI device can be referred to the following sample code. The sample code first finds the SPI device to get the device handle, and then uses the rt_spi_transfer_message() send command to read the ID information. + +```c +/* + * Program listing: This is a SPI device usage routine + * The routine exports the spi_w25q_sample command to the control terminal + * Command call format: spi_w25q_sample spi10 + * Command explanation: The second parameter of the command is the name of the SPI device to be used. If it is empty, the default SPI device is used. + * Program function: read w25q ID data through SPI device +*/ + +#include +#include + +#define W25Q_SPI_DEVICE_NAME "qspi10" + +static void spi_w25q_sample(int argc, char *argv[]) +{ + struct rt_spi_device *spi_dev_w25q; + char name[RT_NAME_MAX]; + rt_uint8_t w25x_read_id = 0x90; + rt_uint8_t id[5] = {0}; + + if (argc == 2) + { + rt_strncpy(name, argv[1], RT_NAME_MAX); + } + else + { + rt_strncpy(name, W25Q_SPI_DEVICE_NAME, RT_NAME_MAX); + } + + /* Find the spi device to get the device handle */ + spi_dev_w25q = (struct rt_spi_device *)rt_device_find(name); + if (!spi_dev_w25q) + { + rt_kprintf("spi sample run failed! 
can't find %s device!\n", name);
+    }
+    else
+    {
+        /* Method 1: Send the command to read the ID using rt_spi_send_then_recv() */
+        rt_spi_send_then_recv(spi_dev_w25q, &w25x_read_id, 1, id, 5);
+        rt_kprintf("use rt_spi_send_then_recv() read w25q ID is:%x%x\n", id[3], id[4]);
+
+        /* Method 2: Send the command to read the ID using rt_spi_transfer_message() */
+        struct rt_spi_message msg1, msg2;
+
+        msg1.send_buf   = &w25x_read_id;
+        msg1.recv_buf   = RT_NULL;
+        msg1.length     = 1;
+        msg1.cs_take    = 1;
+        msg1.cs_release = 0;
+        msg1.next       = &msg2;
+
+        msg2.send_buf   = RT_NULL;
+        msg2.recv_buf   = id;
+        msg2.length     = 5;
+        msg2.cs_take    = 0;
+        msg2.cs_release = 1;
+        msg2.next       = RT_NULL;
+
+        rt_spi_transfer_message(spi_dev_w25q, &msg1);
+        rt_kprintf("use rt_spi_transfer_message() read w25q ID is:%x%x\n", id[3], id[4]);
+
+    }
+}
+/* Export to the msh command list */
+MSH_CMD_EXPORT(spi_w25q_sample, spi w25q sample);
+```
+
diff --git a/documentation/device/uart/figures/uart-dma.png b/documentation/device/uart/figures/uart-dma.png
new file mode 100644
index 0000000000..1415e8552a
Binary files /dev/null and b/documentation/device/uart/figures/uart-dma.png differ
diff --git a/documentation/device/uart/figures/uart-int.png b/documentation/device/uart/figures/uart-int.png
new file mode 100644
index 0000000000..c500f22d27
Binary files /dev/null and b/documentation/device/uart/figures/uart-int.png differ
diff --git a/documentation/device/uart/figures/uart1.png b/documentation/device/uart/figures/uart1.png
new file mode 100644
index 0000000000..7e52536f72
Binary files /dev/null and b/documentation/device/uart/figures/uart1.png differ
diff --git a/documentation/device/uart/uart.md b/documentation/device/uart/uart.md
new file mode 100644
index 0000000000..27227607b0
--- /dev/null
+++ b/documentation/device/uart/uart.md
@@ -0,0 +1,646 @@
+# UART Device
+
+## UART Introduction
+
+UART (Universal Asynchronous Receiver/Transmitter) is an asynchronous serial communication protocol that transmits data character by character. It is the most frequently used data bus during application development.
+
+The UART serial port transmits data sequentially, one bit at a time. Two-way communication only needs two transmission lines: one line transmits data while the other receives it. Several parameters are important for UART communication, namely baud rate, start bit, data bits, stop bit and parity bit. Two ports that communicate over UART must use the same values for these parameters, otherwise communication cannot be carried out normally. The data format of a UART transmission is shown below:
+
+![Serial Transmission Data Format](figures/uart1.png)
+
+- Start bit: Indicates the start of data transfer. The level logic is "0".
+- Data bits: Possible values are 5, 6, 7, 8 and 9, indicating how many data bits are transmitted per frame. The value is generally 8, because an ASCII character is 8 bits.
+- Parity bit: Used by the receiver to check the received data. The number of "1" bits is made even (even parity) or odd (odd parity) so that transmission errors can be detected. Using this bit is optional.
+- Stop bit: Indicates the end of one frame of data. The level logic is "1".
+- Baud rate: The rate at which the serial port communicates, expressed in bits of binary code transmitted per second (bps).
The common baud rate values are 4800, 9600, 14400, 38400, 115200, etc. A higher baud rate means faster data transmission.
+
+## Access UART Device
+
+The application accesses the serial port hardware through the I/O device management interface provided by RT-Thread. The related interfaces are as follows:
+
+| **Function** | **Description** |
+| --------------------------- | -------------------------- |
+| rt_device_find() | find device |
+| rt_device_open() | open device |
+| rt_device_read() | read device |
+| rt_device_write() | write device |
+| rt_device_control() | control device |
+| rt_device_set_rx_indicate() | set receive callback function |
+| rt_device_set_tx_complete() | set send complete callback function |
+| rt_device_close() | close device |
+
+### Find UART Device
+
+The application obtains the device handle according to the uart device name and can then operate the uart device. The device find function is shown below:
+
+```c
+rt_device_t rt_device_find(const char* name);
+```
+
+| **Parameter** | **Description** |
+| ------------- | ------------------------------------------------------------ |
+| name | device's name |
+| **Return** | —— |
+| device handle | the corresponding device handle is returned when the device is found |
+| RT_NULL | corresponding device object was not found |
+
+Generally, the name of the uart device registered to the system is uart0, uart1, etc. Examples are as follows:
+
+```c
+#define SAMPLE_UART_NAME       "uart2"    /* uart device name */
+static rt_device_t serial;                /* uart device handle */
+/* Find uart device */
+serial = rt_device_find(SAMPLE_UART_NAME);
+```
+
+### Open UART Device
+
+Through the device handle, the application can open and close the device. When the device is opened, it will detect whether the device has been initialized. If it is not initialized, it will call the initialization interface to initialize the device by default. Open the device with the following function:
+
+```c
+rt_err_t rt_device_open(rt_device_t dev, rt_uint16_t oflags);
+```
+
+| **Parameter** | **Description** |
+| ---------- | ------------------------------- |
+| dev | device handle |
+| oflags | device mode flags |
+| **Return** | —— |
+| RT_EOK | device opened successfully |
+| -RT_EBUSY | if the RT_DEVICE_FLAG_STANDALONE flag was specified when the device was registered, the device is not allowed to be opened repeatedly |
+| Other error codes | device failed to open |
+
+The oflags parameter supports the following values (use OR logic to combine multiple values):
+
+```c
+#define RT_DEVICE_FLAG_STREAM       0x040     /* Stream mode          */
+/* Receive mode flags */
+#define RT_DEVICE_FLAG_INT_RX       0x100     /* Interrupt receive mode */
+#define RT_DEVICE_FLAG_DMA_RX       0x200     /* DMA receive mode       */
+/* Send mode flags */
+#define RT_DEVICE_FLAG_INT_TX       0x400     /* Interrupt send mode    */
+#define RT_DEVICE_FLAG_DMA_TX       0x800     /* DMA send mode          */
+```
+
+There are three modes of uart data receiving and sending: interrupt mode, polling mode and DMA mode. When used, only one of the three modes can be selected at a time. If the open parameter oflags does not specify interrupt mode or DMA mode, the polling mode is used by default.
+
+The DMA (Direct Memory Access) transfer mode does not require the CPU to control the transfer directly and, unlike interrupt handling, it does not need to save and restore the processor context.
The DMA controller opens a path for directly transferring data to the RAM and the I/O device, which saves CPU resources to do other things. Using DMA transfer can continuously acquire or send a piece of information without taking up interrupts or delays, which is useful when communication is frequent or when large pieces of information are to be transmitted. + +>RT_DEVICE_FLAG_STREAM: Stream mode is used to output a string to the serial terminal: when the output character is `"\n"` (corresponding to the hexadecimal value 0x0A), a ``\r"` is automatically output in front (corresponding to hexadecimal value is 0x0D). + +The stream mode `RT_DEVICE_FLAG_STREAM` can be used with the receive and send mode parameter with the "|" logic. + +An example of using a uart device in **interrupt receive mode and polling mode** as follows: + +```c +#define SAMPLE_UART_NAME "uart2" /* uart device name */ +static rt_device_t serial; /* uart device handle */ +/* find uart device */ +serial = rt_device_find(SAMPLE_UART_NAME); + +/* Open the uart device in interrupt receive mode and polling mode*/ +rt_device_open(serial, RT_DEVICE_FLAG_INT_RX); +``` + +If the uart is to use the DMA receive mode, the oflags takes the value RT_DEVICE_FLAG_DMA_RX. An example of using a uart device in the **DMA receive and polling send mode** is as follows: + +```c +#define SAMPLE_UART_NAME "uart2" /* uart device's name */ +static rt_device_t serial; /* uart device handle */ +/* find uart device */ +serial = rt_device_find(SAMPLE_UART_NAME); + +/* Open the uart device in DMA receive and polling send mode*/ +rt_device_open(serial, RT_DEVICE_FLAG_DMA_RX); +``` + +### Control UART Device + +Through command control word, the application can configure the uart device by the following function: + +```c +rt_err_t rt_device_control(rt_device_t dev, rt_uint8_t cmd, void* arg); +``` + +| **Parameter** | **Description** | +| ----------------- | ------------------------------------------------------------ | +| dev | device handle | +| cmd | command control word can be valued as:RT_DEVICE_CTRL_CONFIG | +| arg | controlled parameter: struct serial_configure | +| **Back** | —— | +| RT_EOK | function executed successfully | +| -RT_ENOSYS | execution failed, dev is empty | +| Other error codes | execution failed | + +* The prototype of control parameter structure: struct serial_configure is as follows: + +```c +struct serial_configure +{ + rt_uint32_t baud_rate; /* Baudrate */ + rt_uint32_t data_bits :4; /* Data bit */ + rt_uint32_t stop_bits :2; /* Stop bit */ + rt_uint32_t parity :2; /* Parity bit */ + rt_uint32_t bit_order :1; /* Prioritized by order */ + rt_uint32_t invert :1; /* Mode */ + rt_uint32_t bufsz :16; /* Receive data buffer size */ + rt_uint32_t reserved :4; /* Reserved bit */ +}; +``` + +* The default macro configuration provided by RT-Thread is as follows: + +```c +#define RT_SERIAL_CONFIG_DEFAULT \ +{ \ + BAUD_RATE_115200, /* 115200 bps */ \ + DATA_BITS_8, /* 8 databits */ \ + STOP_BITS_1, /* 1 stopbit */ \ + PARITY_NONE, /* No parity */ \ + BIT_ORDER_LSB, /* LSB first sent */ \ + NRZ_NORMAL, /* Normal mode */ \ + RT_SERIAL_RB_BUFSZ, /* Buffer size */ \ + 0 \ +} +``` + +The configuration parameters provided by RT-Thread can be defined as the following macro definitions:: + +```c +/* The baudrate can be defined as*/ +#define BAUD_RATE_2400 2400 +#define BAUD_RATE_4800 4800 +#define BAUD_RATE_9600 9600 +#define BAUD_RATE_19200 19200 +#define BAUD_RATE_38400 38400 +#define BAUD_RATE_57600 57600 +#define BAUD_RATE_115200 115200 
+#define BAUD_RATE_230400 230400 +#define BAUD_RATE_460800 460800 +#define BAUD_RATE_921600 921600 +#define BAUD_RATE_2000000 2000000 +#define BAUD_RATE_3000000 3000000 +/* Data bits can be defined as*/ +#define DATA_BITS_5 5 +#define DATA_BITS_6 6 +#define DATA_BITS_7 7 +#define DATA_BITS_8 8 +#define DATA_BITS_9 9 +/* Stop bits can be defined as */ +#define STOP_BITS_1 0 +#define STOP_BITS_2 1 +#define STOP_BITS_3 2 +#define STOP_BITS_4 3 +/* Parity bits can be defined as */ +#define PARITY_NONE 0 +#define PARITY_ODD 1 +#define PARITY_EVEN 2 +/* Bit order can be defined as */ +#define BIT_ORDER_LSB 0 +#define BIT_ORDER_MSB 1 +/* Mode canbe defined as */ +#define NRZ_NORMAL 0 /* normal mode */ +#define NRZ_INVERTED 1 /* inverted mode */ +/* Default size of the receive data buffer */ +#define RT_SERIAL_RB_BUFSZ 64 +``` + +**Receive Buffer** + +When the uart device is opened using interrupt receive mode, the uart driver framework will open a buffer according to the size of RT_SERIAL_RB_BUFSZ to save the received data. When the underlying driver receives a data, it will put the data into the buffer in the interrupt service program. + +>The default size of the receive data buffer is 64 bytes. If the number of received data in one-time is too large and the data is not read in time, the data of the buffer will be overwritten by the newly received data, resulting in data loss. It is recommended to increase the buffer. + +A sample for configuring uart hardware parameters such as data bits, check bits, stop bits, and so on are shown below: + +```c +#define SAMPLE_UART_NAME "uart2" /* uart device's name */ +static rt_device_t serial; /* uart device handle */ +struct serial_configure config = RT_SERIAL_CONFIG_DEFAULT; /* Configuration parameters */ +/* Find uart devices */ +serial = rt_device_find(SAMPLE_UART_NAME); + +/* Open the uart device in interrupt receive and polling send mode */ +rt_device_open(serial, RT_DEVICE_FLAG_INT_RX); + +config.baud_rate = BAUD_RATE_115200; +config.data_bits = DATA_BITS_8; +config.stop_bits = STOP_BITS_2; +config.parity = PARITY_NONE; +/* The serial port configuration parameters can only be modified after opening the device */ +rt_device_control(serial, RT_DEVICE_CTRL_CONFIG, &config); +``` + +### Send Data + +To write data to the serial port, the following functions can be used: + +```c +rt_size_t rt_device_write(rt_device_t dev, rt_off_t pos, const void* buffer, rt_size_t size); +``` + +| **Parameter** | **Description** | +| ---------- | ------------------------------------------ | +| dev | device handle | +| pos | Write data offset, this parameter is not used in uart device | +| buffer | Memory buffer pointer, place the data to be written | +| size | The size of the written data | +| **back** | —— | +| The actual size of the written data | If it is a character device, the return size is in bytes; | +| 0 | It needs to read the current thread's errno to determine the error status | + +Calling this function will write the data in the `buffer` to the `dev` device, the size of the write data is: size. 
+ +The sample program for writing data to the serial port is as follows: + +```c +#define SAMPLE_UART_NAME "uart2" /* uart device's name */ +static rt_device_t serial; /* uart device handle */ +char str[] = "hello RT-Thread!\r\n"; +struct serial_configure config = RT_SERIAL_CONFIG_DEFAULT; /* Configuration parameter */ +/*find uart device */ +serial = rt_device_find(SAMPLE_UART_NAME); + +/* Open the uart device in interrupt reception and polling mode */ +rt_device_open(serial, RT_DEVICE_FLAG_INT_RX); +/* Send string */ +rt_device_write(serial, 0, str, (sizeof(str) - 1)); +``` + +### Set The Send completion Callback Function + +When the application calls `rt_device_write()` to write data, if the underlying hardware can support automatic transmission, the upper application can set a callback function. This callback function is called after the underlying hardware data has been sent (for example, when the DMA transfer is complete or the FIFO has been written to complete the completion interrupt). You can set the device to send completion instructions by the following function: + +```c +rt_err_t rt_device_set_tx_complete(rt_device_t dev, rt_err_t (*tx_done)(rt_device_t dev,void *buffer)); +``` + +| **Parameter** | **Description** | +| ------------- | ------------------------- | +| dev | device handle | +| tx_done | callback function pointer | +| **back** | —— | +| RT_EOK | set up successfully | + +When this function is called, the callback function is provided by the user. When the hardware device sends the data, the device driver calls back this function and passes the sent data block address buffer as a parameter to the upper application. When the application (thread) receives the indication, it will release the buffer memory block or use it as the buffer for the next write data according to the condition of sending the buffer. + +### Set The Receive Callback Function + +The data receiving instruction can be set by the following function. When the serial port receives the data, it will inform the upper application thread that the data has arrived: + +```c +rt_err_t rt_device_set_rx_indicate(rt_device_t dev, rt_err_t (*rx_ind)(rt_device_t dev,rt_size_t size)); +``` + +| **Parameter** | **Description** | +| -------- | ------------ | +| dev | device handle | +| rx_ind | callback function pointer | +| dev | device handle (callback function parameter) | +| size | buffer data size (callback function parameter) | +| **back** | —— | +| RT_EOK | set up successfully | + +The callback function for this function is provided by the user. If the uart device is opened in interrupt receive mode, the callback function will be called when the serial port receives a data, and the data size of the buffer will be placed in the `size` parameter, and the uart device handle will be placed in the `dev` parameter. + +If the uart is opened in DMA receive mode, the callback function is called when the DMA completes receiving a batch of data. + +Normally the receiving callback function can send a semaphore or event to notify the serial port data processing thread that data has arrived. 
The example is as follows: + +```c +#define SAMPLE_UART_NAME "uart2" /* uart device name */ +static rt_device_t serial; /* uart device handle */ +static struct rt_semaphore rx_sem; /* The semaphore used to receive the message */ + +/* Receive data callback function */ +static rt_err_t uart_input(rt_device_t dev, rt_size_t size) +{ + /* When the serial port receives the data, it triggers interrupts, calls this callback function, and sends the received semaphore */ + rt_sem_release(&rx_sem); + + return RT_EOK; +} + +static int uart_sample(int argc, char *argv[]) +{ + serial = rt_device_find(SAMPLE_UART_NAME); + + /* Open the uart device in interrupting receive mode */ + rt_device_open(serial, RT_DEVICE_FLAG_INT_RX); + + /* Initialization semaphore */ + rt_sem_init(&rx_sem, "rx_sem", 0, RT_IPC_FLAG_FIFO); + + /* Set the receive callback function */ + rt_device_set_rx_indicate(serial, uart_input); +} + +``` + +### Receive Data + +You can call the following function to read the data received by the uart: + +```c +rt_size_t rt_device_read(rt_device_t dev, rt_off_t pos, void* buffer, rt_size_t size); +``` + +| **Parameter** | **Description** | +| -------------------------------- | ------------------------------------------------------------ | +| dev | device handle | +| pos | Read data offset, uart device dose not use this parameter | +| buffer | Buffer pointer, the data read will be saved in the buffer | +| size | Read the size of the data | +| **back** | —— | +| Read the actual size of the data | If it is a character device, the return size is in bytes. | +| 0 | It needs to read the current thread's errno to determine the error status | + +Read data offset: pos is not valid for character devices. This parameter is mainly used for block devices. + +An example of using the interrupt receive mode with the receive callback function is as follows: + +```c +static rt_device_t serial; /* uart device handle */ +static struct rt_semaphore rx_sem; /* Semaphore used to receive messages */ + +/* Thread receiving data */ +static void serial_thread_entry(void *parameter) +{ + char ch; + + while (1) + { + /* Reads a byte of data from the serial port and waits for the receiving semaphore if it is not read */ + while (rt_device_read(serial, -1, &ch, 1) != 1) + { + /* Blocking waiting to receive semaphore, waiting for the semaphore to read the data again*/ + rt_sem_take(&rx_sem, RT_WAITING_FOREVER); + } + /* Read the data through the serial port dislocation output*/ + ch = ch + 1; + rt_device_write(serial, 0, &ch, 1); + } +} +``` + +### Close The UART Device + +After the application completes the serial port operation, the uart device can be closed by the following functions: + +```c +rt_err_t rt_device_close(rt_device_t dev); +``` + +| **Parameter** | **Description** | +| ----------------- | ------------------------------------------------------------ | +| dev | device handle | +| **back** | —— | +| RT_EOK | device closed successfully | +| -RT_ERROR | The device has been completely shut down and cannot be shut down repeatedly | +| other error codes | fail to close the device | + +Use the `rt_device_close()` interface and `rt_device_open()` interface in pair. When you open the device, you need to close the device once, so that the device will be completely shut down, otherwise the device will remain open. + +## Examples Of Using UART Device + +### Interrupt Receiving And Polling Send + +The main steps of the sample code are as follows: + +1. First find the uart device to get the device handle. +2. 
Initialize the semaphore that the receive callback function releases, and then open the uart device in interrupt receive and polling send mode.
+3. Set the receive callback function of the uart device, then send the string and create a thread to read the data.
+4. The read-data thread tries to read one character at a time. If no data is available, it suspends and waits for the semaphore. When the uart device receives a byte, it triggers an interrupt and calls the receive callback function, which releases the semaphore to wake up the thread; the thread then immediately reads the received data.
+5. This sample code is not limited to a specific BSP. Change the macro definition SAMPLE_UART_NAME to the name of a uart device registered by your BSP to run it.
+
+The running sequence diagram is shown as follows:
+
+![Serial Port Interrupt Reception and Polling Transmission Sequence Diagram](figures/uart-int.png)
+
+
+```c
+/*
+ * Program list: This is a uart device usage routine
+ * The routine exports the uart_sample command to the control terminal
+ * Format of command: uart_sample uart2
+ * Command explanation: the second parameter of the command is the name of the uart device. If it is empty, the default uart device will be used
+ * Program function: output the string "hello RT-Thread!" through the serial port, then echo each received character incremented by one
+*/
+
+#include <rtthread.h>
+
+#define SAMPLE_UART_NAME       "uart2"
+
+/* Semaphore used to receive messages */
+static struct rt_semaphore rx_sem;
+static rt_device_t serial;
+
+/* Receive data callback function */
+static rt_err_t uart_input(rt_device_t dev, rt_size_t size)
+{
+    /* The uart device generates an interrupt after receiving data, calls this callback function, and releases the receive semaphore.
*/ + rt_sem_release(&rx_sem); + + return RT_EOK; +} + +static void serial_thread_entry(void *parameter) +{ + char ch; + + while (1) + { + /* Read a byte of data from the serial port and wait for the receiving semaphore if it is not read */ + while (rt_device_read(serial, -1, &ch, 1) != 1) + { + /* Being Suspended and waiting for the semaphore */ + rt_sem_take(&rx_sem, RT_WAITING_FOREVER); + } + /* Read the data from the serial port and output through dislocation */ + ch = ch + 1; + rt_device_write(serial, 0, &ch, 1); + } +} + +static int uart_sample(int argc, char *argv[]) +{ + rt_err_t ret = RT_EOK; + char uart_name[RT_NAME_MAX]; + char str[] = "hello RT-Thread!\r\n"; + + if (argc == 2) + { + rt_strncpy(uart_name, argv[1], RT_NAME_MAX); + } + else + { + rt_strncpy(uart_name, SAMPLE_UART_NAME, RT_NAME_MAX); + } + + /* Find uart devices in the system */ + serial = rt_device_find(uart_name); + if (!serial) + { + rt_kprintf("find %s failed!\n", uart_name); + return RT_ERROR; + } + + /* Initialize the semaphore */ + rt_sem_init(&rx_sem, "rx_sem", 0, RT_IPC_FLAG_FIFO); + /* Open the uart device in interrupt receive and polling send mode */ + rt_device_open(serial, RT_DEVICE_FLAG_INT_RX); + /* Set the receive callback function */ + rt_device_set_rx_indicate(serial, uart_input); + /* Send string */ + rt_device_write(serial, 0, str, (sizeof(str) - 1)); + + /* Create a serial thread */ + rt_thread_t thread = rt_thread_create("serial", serial_thread_entry, RT_NULL, 1024, 25, 10); + /* Start the thread successfully */ + if (thread != RT_NULL) + { + rt_thread_startup(thread); + } + else + { + ret = RT_ERROR; + } + + return ret; +} +/* Export to the msh command list */ +MSH_CMD_EXPORT(uart_sample, uart device sample); +``` + +### DMA Reception And Polling Transmission + +When the serial port receives a batch of data, it will call the receive callback function. The receive callback function will send the data size of the buffer at this time to the waiting data processing thread through the message queue. After the thread gets the message, it is activated and reads the data. In general, the DMA receive mode completes data reception in conjunction with the DMA receive completion interrupt and the serial port idle interrupt. + +* This sample code is not limited to a specific BSP. According to the uart device registered by BSP, modify the sample code macro to define the uart device name corresponding to SAMPLE_UART_NAME to run. + +The running sequence diagram is shown below: + +![Serial DMA Receiving and Polling Transmission Sequence Diagram](figures/uart-dma.png) + +```c +/* + * Program list: This is a uart device DMA receive usage routine + * The routine exports the uart_dma_sample command to the control terminal + * Command format: uart_dma_sample uart3 + * Command explanation: The second parameter of the command is the name of the uart device to be used. If it is empty, the default uart device will be used. + * Program function: output the string "hello RT-Thread!" through the serial port, and output the received data through the serial port, and then print the received data. 
+*/ + +#include + +#define SAMPLE_UART_NAME "uart3" /* uart device name */ + +/* Serial port receiving message structure*/ +struct rx_msg +{ + rt_device_t dev; + rt_size_t size; +}; +/* uart device handle */ +static rt_device_t serial; +/* Message queue control block*/ +static struct rt_messagequeue rx_mq; + +/* Receive data callback function */ +static rt_err_t uart_input(rt_device_t dev, rt_size_t size) +{ + struct rx_msg msg; + rt_err_t result; + msg.dev = dev; + msg.size = size; + + result = rt_mq_send(&rx_mq, &msg, sizeof(msg)); + if ( result == -RT_EFULL) + { + /* message queue full */ + rt_kprintf("message queue full!\n"); + } + return result; +} + +static void serial_thread_entry(void *parameter) +{ + struct rx_msg msg; + rt_err_t result; + rt_uint32_t rx_length; + static char rx_buffer[RT_SERIAL_RB_BUFSZ + 1]; + + while (1) + { + rt_memset(&msg, 0, sizeof(msg)); + /* Read messages from the message queue*/ + result = rt_mq_recv(&rx_mq, &msg, sizeof(msg), RT_WAITING_FOREVER); + if (result == RT_EOK) + { + /*Read data from the serial port*/ + rx_length = rt_device_read(msg.dev, 0, rx_buffer, msg.size); + rx_buffer[rx_length] = '\0'; + /* Output the read message through the uart device: serial */ + rt_device_write(serial, 0, rx_buffer, rx_length); + /* Print data */ + rt_kprintf("%s\n",rx_buffer); + } + } +} + +static int uart_dma_sample(int argc, char *argv[]) +{ + rt_err_t ret = RT_EOK; + char uart_name[RT_NAME_MAX]; + static char msg_pool[256]; + char str[] = "hello RT-Thread!\r\n"; + + if (argc == 2) + { + rt_strncpy(uart_name, argv[1], RT_NAME_MAX); + } + else + { + rt_strncpy(uart_name, SAMPLE_UART_NAME, RT_NAME_MAX); + } + + /* find uart device */ + serial = rt_device_find(uart_name); + if (!serial) + { + rt_kprintf("find %s failed!\n", uart_name); + return RT_ERROR; + } + + /* Initialize message queue */ + rt_mq_init(&rx_mq, "rx_mq", + msg_pool, /* a pool for storing messages */ + sizeof(struct rx_msg), /* The maximum length of a message*/ + sizeof(msg_pool), /* The size of the message pool */ + RT_IPC_FLAG_FIFO); /* If there are multiple threads waiting, assign messages according to the order. */ + + /* Open the uart device in DMA receive and polling send mode */ + rt_device_open(serial, RT_DEVICE_FLAG_DMA_RX); + /* Set the receive callback function */ + rt_device_set_rx_indicate(serial, uart_input); + /* Send string */ + rt_device_write(serial, 0, str, (sizeof(str) - 1)); + + /* Create a thread */ + rt_thread_t thread = rt_thread_create("serial", serial_thread_entry, RT_NULL, 1024, 25, 10); + /* Start the thread if it is created successfully*/ + if (thread != RT_NULL) + { + rt_thread_startup(thread); + } + else + { + ret = RT_ERROR; + } + + return ret; +} +/* Export to the msh command list */ +MSH_CMD_EXPORT(uart_dma_sample, uart device dma sample); +``` + diff --git a/documentation/device/watchdog/watchdog.md b/documentation/device/watchdog/watchdog.md new file mode 100644 index 0000000000..053c4f20ec --- /dev/null +++ b/documentation/device/watchdog/watchdog.md @@ -0,0 +1,228 @@ +# WATCHDOG Device + +## An Introduction to WATCHDOG + +The hardware watchdog timer is a timer whose timing output is connected to the reset terminal of the circuit. In a productized embedded system, in order to automatically reset the system under abnormal conditions, it generally needs a watchdog. + +When the watchdog was started, the counter starts counting automatically. 
If it is not reset counter value before the counter overflows, the counter overflow will generate a reset signal to the CPU to restart the system. When the system is running normally, it is necessary to clear the watchdog counter within the time interval allowed by the watchdog (commonly known as "feeding the dog"), and the reset signal will not be generated. If the program can "feed the dog" on time,the system does not go wrong,otherwise the system will reset. + +In general, users can feed the dog in the idlehook function and key function of RT-Thread. + +## Access to the WATCHDOG Device + +The application accesses the watchdog hardware through the I/O device management interface provided by RT-Thread. The related interfaces are as follows: + +| **Function** | **Description** | +| ---------------- | ---------------------------------- | +| rt_device_find() | Find the device handle based on the device name of the watchdog device | +| rt_device_init() | Initialize the watchdog device | +| rt_device_control() |Control the watchdog device | +| rt_device_close() | Close the watchdog device | + +### Find the Watchdog Device + +The application obtains the device handle based on the watchdog device's name, and then it can operate the watchdog device. The function for finding a device is as follows: + +```c +rt_device_t rt_device_find(const char* name); +``` + +| **Function** | **Description** | +| -------- | ---------------------------------- | +| name | the name of the watchdog device | +| **return** | —— | +| device handle | finding the corresponding device and then return to the corresponding device handle | +| RT_NULL | no corresponding device object found | + +An usage example is as follows: + +```c +#define IWDG_DEVICE_NAME "iwg" /* the name of the watchdog device */ + +static rt_device_t wdg_dev; /* device handle of the watchdog */ +/* find the watchdog device based on the device's name and obtain the device handle */ +wdg_dev = rt_device_find(IWDG_DEVICE_NAME); +``` + +### Initialize the Watchdog Device + +The watchdog device need to be initialized before using, which can be done by the following function: + +```c +rt_err_t rt_device_init(rt_device_t dev); +``` + +| **Function** | **Description** | +| ---------- | ------------------------------- | +| dev | handle of the watchdog device | +| **return** | —— | +| RT_EOK | the device succeeded initializing | +| -RT_ENOSYS | initialization failed, the watchdog device driver initialization function is empty | +| other error code | the device failed to open | + +An example is as follows: + +```c +#define IWDG_DEVICE_NAME "iwg" /* the name of the watchdog device */ + +static rt_device_t wdg_dev; /* handle of the watchdog device */ +/* find the watchdog device based on the device's name and obtain the device handle */ +wdg_dev = rt_device_find(IWDG_DEVICE_NAME); + +/* initialize the device */ +rt_device_init(wdg_dev); +``` + +### Control the Watchdog Device + +The application can configure the watchdog device using the command control word, which can be done by the following function: + +```c +rt_err_t rt_device_control(rt_device_t dev, rt_uint8_t cmd, void* arg); +``` + +| **Function** | **Description** | +| ---------------- | ---------------------------------- | +| dev | handle of the watchdog device | +| cmd | the command word | +| arg | controlled parameter | +| **return** | —— | +| RT_EOK | function executed successfully | +| -RT_ENOSYS | execution failed, the dev is empty | +| other error code | execution failed | + +The command control word 
`'cmd'` can take the following macro definition values: + +```c +#define RT_DEVICE_CTRL_WDT_GET_TIMEOUT (1) /* get the overflow time */ +#define RT_DEVICE_CTRL_WDT_SET_TIMEOUT (2) /* set the overflow time */ +#define RT_DEVICE_CTRL_WDT_GET_TIMELEFT (3) /* get the remaining time */ +#define RT_DEVICE_CTRL_WDT_KEEPALIVE (4) /* feed the dog */ +#define RT_DEVICE_CTRL_WDT_START (5) /* start the watchdog */ +#define RT_DEVICE_CTRL_WDT_STOP (6) /* stop the watchdog */ +``` + +An example of setting the overflow time of the watchdog is as follows: + +```c +#define IWDG_DEVICE_NAME "iwg" /* the name of the watchdog device */ + +rt_uint32_t timeout = 1000; /* the overflow time */ +static rt_device_t wdg_dev; /* handle of the watchdog device */ +/* find the watchdog device based on the device's name and obtain the device handle */ +wdg_dev = rt_device_find(IWDG_DEVICE_NAME); +/* initialize the device */ +rt_device_init(wdg_dev); + +/* set the overflow time of the watch dog */ +rt_device_control(wdg_dev, RT_DEVICE_CTRL_WDT_SET_TIMEOUT, (void *)timeout); +/* set idle-hook function */ +rt_thread_idle_sethook(idle_hook); +``` + +An example of feeding a dog in an idle thread hook function is as follows: + +```c +static void idle_hook(void) +{ + /* Feed the dog in the callback function of the idle thread */ + rt_device_control(wdg_dev, RT_DEVICE_CTRL_WDT_KEEPALIVE, NULL); +} +``` + +### Close the Watchdog Device + +When the application completes the operation of the watchdog, it can close the watchdog device: + +```c +rt_err_t rt_device_close(rt_device_t dev); +``` + +| **Function** | **Description** | +| ---------- | ---------------------------------- | +| dev | handle of the watchdog device | +| **return** | —— | +| RT_EOK | close the device successfully | +| -RT_ERROR | The device has been completely shut down and cannot be closed repeatedly | +| other error code | fail to close the device | + +Closing the device interface and opening the device interface need to match each other. When you open the device, you need to close the device once correspondingly, so that the device will be completely shut down, otherwise the device will remain unclosed. + +## Watchdog Device usage example + +The specific use of the watchdog device can be referred to the following sample code. The main steps of the sample code are as follows: + +1. First find the device handle based on the device name "iwg". +2. Set the overflow time of the watchdog after initializing the device. +3. Set the idle thread callback function. +4. This callback function will run and feed the dog when the system executes idle threads. + +```c +/* + * Program list: This is an independent watchdog device usage routine + * The routine exports the iwdg_sample command to the control terminal + * Command call format: iwdg_sample iwg + * Command explanation: The second parameter of the command is the name of the watchdog device to be used. If it is empty, you can use the default watchdog device of the routine. + * Program function: The program finds the watchdog device through the device's name, and then initializes the device and sets the overflow time of the watchdog device. + * Then set the idle thread callback function, which will feed the dog in the idle callback function. 
+*/ + +#include +#include + +#define IWDG_DEVICE_NAME "iwg" /* the name of the watchdog device */ + +static rt_device_t wdg_dev; /* handle of the watchdog device */ + +static void idle_hook(void) +{ + /* feed the dog in the callback function */ + rt_device_control(wdg_dev, RT_DEVICE_CTRL_WDT_KEEPALIVE, NULL); + rt_kprintf("feed the dog!\n "); +} + +static int iwdg_sample(int argc, char *argv[]) +{ + rt_err_t ret = RT_EOK; + rt_uint32_t timeout = 1000; /* the overflow time */ + char device_name[RT_NAME_MAX]; + + /* Determine if the command-line parameter is given the device name */ + if (argc == 2) + { + rt_strncpy(device_name, argv[1], RT_NAME_MAX); + } + else + { + rt_strncpy(device_name, IWDG_DEVICE_NAME, RT_NAME_MAX); + } + /* find the watchdog device based on the device's name and obtain the device handle */ + wdg_dev = rt_device_find(device_name); + if (!wdg_dev) + { + rt_kprintf("find %s failed!\n", device_name); + return RT_ERROR; + } + /* initialize the device */ + ret = rt_device_init(wdg_dev); + if (ret != RT_EOK) + { + rt_kprintf("initialize %s failed!\n", device_name); + return RT_ERROR; + } + /* set the overflow time of the watch dog */ + ret = rt_device_control(wdg_dev, RT_DEVICE_CTRL_WDT_SET_TIMEOUT, &timeout); + if (ret != RT_EOK) + { + rt_kprintf("set %s timeout failed!\n", device_name); + return RT_ERROR; + } + /* set idle thread callback function */ + rt_thread_idle_sethook(idle_hook); + + return ret; +} +/* export to the msh command list */ +MSH_CMD_EXPORT(iwdg_sample, iwdg sample); +``` + diff --git a/documentation/device/wlan/figures/an0026_1.png b/documentation/device/wlan/figures/an0026_1.png new file mode 100644 index 0000000000..6686efbcfe Binary files /dev/null and b/documentation/device/wlan/figures/an0026_1.png differ diff --git a/documentation/device/wlan/figures/an0026_3.png b/documentation/device/wlan/figures/an0026_3.png new file mode 100644 index 0000000000..016d60b0e4 Binary files /dev/null and b/documentation/device/wlan/figures/an0026_3.png differ diff --git a/documentation/device/wlan/figures/an0026_4.png b/documentation/device/wlan/figures/an0026_4.png new file mode 100644 index 0000000000..a383cbba5d Binary files /dev/null and b/documentation/device/wlan/figures/an0026_4.png differ diff --git a/documentation/device/wlan/figures/an0026_5.png b/documentation/device/wlan/figures/an0026_5.png new file mode 100644 index 0000000000..d3b88a0e93 Binary files /dev/null and b/documentation/device/wlan/figures/an0026_5.png differ diff --git a/documentation/device/wlan/wlan.md b/documentation/device/wlan/wlan.md new file mode 100644 index 0000000000..cf45cca402 --- /dev/null +++ b/documentation/device/wlan/wlan.md @@ -0,0 +1,443 @@ +# WLAN Device + +With the rapid development of the Internet of Things, more and more embedded devices are equipped with WIFI wireless network devices. In order to be able to manage WIFI network devices, RT-Thread introduces a WLAN device management framework. This framework has many features to control and manage WIFI, providing developers with many conveniences for using WIFI devices. + +## Introduction to the WLAN Framework + +The WLAN framework is a set of middleware developed by RT-Thread for managing WIFI. Connect to the specific WIFI driver, control the WIFI connection disconnection, scan and other operations. Support different applications, provide WIFI control, events, data diversion and other operations for the application, and provide a unified WIFI control interface for the upper application. 
The WLAN framework consists of three main parts: the DEV driver interface layer, which provides a unified API for the WLAN framework; the manage layer, which provides users with specific functions such as WIFI scanning, connection and disconnection; and the protocol layer, which processes the data stream generated on the WIFI connection and can mount different protocol stacks such as lwIP depending on the usage scenario. The framework is easy to use, complete in functionality, easy to port and highly compatible.
+
+The following figure is a hierarchical diagram of the WIFI framework:
+
+![WIFI Framework](figures/an0026_1.png)
+
+The First Part: `APP`, the application layer. It is a specific application based on the WLAN framework, such as the WIFI-related shell commands.
+
+The Second Part: `Airkiss and Voice`, the network configuration layer. It provides functions for provisioning the network over the air or with sound waves.
+
+The Third Part: `WLAN Manager`, the WLAN management layer. It controls and manages WLAN devices: setting the mode, connecting to and disconnecting from hotspots, enabling hotspots, scanning for hotspots, and so on. It also provides management functions such as reconnection after disconnection and automatic hotspot switching.
+
+The Fourth Part: `WLAN Protocol`, the protocol layer. The data stream is submitted to a specific protocol for processing, and the user can choose to communicate using different protocols.
+
+The Fifth Part: `WLAN Config`, the parameter management layer. It manages the hotspot information and passwords of successful connections and writes them to non-volatile storage media.
+
+The Sixth Part: `WLAN Device`, the driver interface layer. It connects to the specific WLAN hardware and provides a unified API for the management layer.
+
+### Functions
+
+* Automatic Connection: When automatic connection is enabled, the hotspot information of previous successful connections is read automatically whenever WIFI is disconnected and the framework reconnects to those hotspots. If a hotspot connection fails, it switches to the next hotspot until a connection succeeds. The hotspot information is tried in the order in which the connections succeeded, and the information of the most recent successful connection is used first. After a successful connection, that hotspot information is cached and used first when reconnecting after the next disconnection.
+* Parameter storage: Stores the WIFI parameters of successful connections. The parameters are cached in memory, and if an external non-volatile storage interface is configured they are also stored in the external storage medium. Users can implement the `struct rt_wlan_cfg_ops` structure according to their actual situation and save the parameters anywhere. The cached parameters mainly provide hotspot information for automatic connection: when WIFI is not connected, the cached parameters are read and a connection is attempted.
+* WIFI control: Provides a complete set of WIFI control APIs for scanning, connecting, running a hotspot, and so on, as well as WIFI status callback events such as disconnected, connected and connection failed, giving users an easy-to-use WIFI management API. A short usage sketch follows this list.
+* Shell command: Commands can be entered in msh to make WIFI scan, connect, disconnect and perform other actions, and to print debugging information such as the WIFI status.
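+
+As a quick illustration of these management functions, the sketch below shows a minimal station workflow built from the interfaces described in the "Access Wi-Fi Devices" section later in this document. It is only a sketch: the WLAN header names, the SSID and password placeholders, and the single-argument form of `rt_wlan_config_autoreconnect()` are assumptions, not definitive API documentation.
+
+```c
+#include <rtthread.h>
+#include <wlan_mgnt.h>                /* WLAN management APIs (header name assumed) */
+#include <wlan_prot.h>                /* protocol attach APIs (header name assumed) */
+
+static void wifi_station_demo(void)
+{
+    /* Attach the lwIP protocol to the default station device "wlan0" */
+    rt_wlan_prot_attach("wlan0", RT_WLAN_PROT_LWIP);
+
+    /* Connect to a hotspot; SSID and password are placeholders */
+    if (rt_wlan_connect("demo_ssid", "demo_password") == RT_EOK)
+    {
+        rt_kprintf("wifi connected\n");
+        /* Reconnect automatically if the connection drops */
+        rt_wlan_config_autoreconnect(RT_TRUE);
+    }
+    else
+    {
+        rt_kprintf("wifi connect failed\n");
+    }
+}
+```
+
+In a real application these calls are normally driven by the system configuration and the manager's status callbacks rather than issued unconditionally at startup.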
+ +### Configuration + +Use `menuconfig` command in ENV to enter the WLAN configuration interface by following the menu: + +```c +RT-Thread Components -> Device Drivers -> Using WiFi -> +``` + +Configuration options are described in detail as follows: + +```c +[*] Using Wi-Fi framework /* Using Wi-Fi framework */ +(wlan0) The WiFi device name for station /* The default name for station */ +(wlan1) The WiFi device name for ap /* The default name for ap */ +(lwip) Default transport protocol /* Default protocol */ +(10000) Set scan timeout time(ms) /* Scan timeout time */ +(10000) Set connect timeout time(ms) /* Connect timeout time */ +(32) SSID name maximum length /* Maximum length of SSID name */ +(32) Maximum password length /* Maximum length of password */ +[*] Automatic sorting of scan results /* Automatic sorting of scan results */ +(3) Maximum number of WiFi information automatically saved /* Maximum number of WiFi information automatically saved */ +(wlan_job) WiFi work queue thread name /* WiFi work queue thread name */ +(2048) wifi work queue thread size /* wifi work queue thread size */ +(22) WiFi work queue thread priority /* WiFi work queue thread priority */ +(2) Maximum number of driver events /* Maximum number of driver events in dev layer */ +[ ] Forced use of PBUF transmission /* Forced use of PBUF transmission */ +[ ] Enable WLAN Debugging Options /* Enable WLAN Debugging Options */ +``` + +## Access Wi-Fi Devices + +The application accesses the WLAN device hardware through the WLAN device management interface, and the relevant interfaces are as follows: + +| Fuctions | **Description** | +| -------------------- | ---------------------------- | +| rt_wlan_prot_attach() | Specify the WLAN protocol attached | +| rt_wlan_scan_sync() | Synchronized WLAN Scan | +| rt_wlan_connect() | Synchronized Hotspot Connection | +| rt_wlan_disconnect() | Synchronized Hotspot Disconnection | +| rt_wlan_config_autoreconnect() | Configuration automatic reconnection mode | + +### Specify Protocol + +```c +rt_err_t rt_wlan_prot_attach(const char *dev_name, const char *prot_name); +``` + +| **Parameter** | **D**escription | +| ----------------------------- | ---------------------------------- | +| dev_name | WLAN device name | +| prot_name | Protocol name, possible values: RT_WLAN_PROT_LWIP, indicates the protocol type LWIP | +| Return | **--** | +| -RT_ERROR | Execution failed | +| RT_EOK | Execution succeed | + +### Synchronized WLAN Scan + +```c +struct rt_wlan_scan_result *rt_wlan_scan_sync(void); +``` + +| **Return** | **Description** | +| ---------- | ------------------------------- | +| rt_wlan_scan_result | Scan Result | + +The scan result is a structure as follows: + +```c +struct rt_wlan_scan_result +{ + rt_int32_t num; /* info number */ + struct rt_wlan_info *info; /* info pointer */ +}; +``` + +### Synchronized Hotspot Connection + +```c +rt_err_t rt_wlan_connect(const char *ssid, const char *password); +``` + +| **Parameter** | **Description** | +| ----------------------------- | ---------------------------------- | +| ssid | WIFI name | +| password | WIFI password | +| Return | **--** | +| -RT_EINVAL | Parameter error | +| -RT_EIO | Unregistered device | +| -RT_ERROR | Connection failed | +| RT_EOK | Connection successful | + +### Synchronized Hotspot Disconnection + +```c +rt_err_t rt_wlan_disconnect(void); +``` + +| Return | **Description** | +| ----------------------------- | ---------------------------------- | +| -RT_EIO | Unregistered device | +| -RT_ENOMEM | Not enough memory 
| -RT_ERROR | Disconnection failed |
+| RT_EOK | Disconnection successful |
+
+### Automatic Reconnection Mode Configuration
+
+```c
+void rt_wlan_config_autoreconnect(rt_bool_t enable);
+```
+
+| **Parameter** | **Description** |
+| ----------------------------- | ---------------------------------- |
+| enable | enable or disable automatic reconnection |
+
+## FinSH Command
+
+Shell commands help to quickly debug WiFi-related features. The wifi-related shell commands are as follows:
+
+```c
+wifi                           /* Print help */
+wifi help                      /* View help */
+wifi join SSID [PASSWORD]      /* Connect to wifi; if SSID is empty, connect automatically using the stored configuration */
+wifi ap SSID [PASSWORD]        /* Create hotspot */
+wifi scan                      /* Scan all hotspots */
+wifi disc                      /* Disconnect */
+wifi ap_stop                   /* Stop hotspot */
+wifi status                    /* Print wifi status of sta + ap */
+wifi smartconfig               /* Start the network configuration function */
+```
+
+### WiFi Scan
+
+The wifi scan command is `wifi scan`. After it is executed, the surrounding hotspot information is printed on the terminal. From the printed hotspot information, you can see attributes such as the SSID and MAC address.
+
+Enter the command in msh and the scan results are as follows:
+
+```c
+wifi scan
+SSID                            MAC                security       rssi chn Mbps
+------------------------------- -----------------  -------------- ---- --- ----
+rtt_test_ssid_1                 c0:3d:46:00:3e:aa  OPEN           -14    8  300
+test_ssid                       3c:f5:91:8e:4c:79  WPA2_AES_PSK   -18    6   72
+rtt_test_ssid_2                 ec:88:8f:88:aa:9a  WPA2_MIXED_PSK -47    6  144
+rtt_test_ssid_3                 c0:3d:46:00:41:ca  WPA2_MIXED_PSK -48    3  300
+```
+
+### WiFi Connection
+
+The wifi connection command is `wifi join`. The command must be followed by the hotspot name and, if the hotspot has one, the password. After the command is executed, if the hotspot exists and the password is correct, the board connects to the hotspot and obtains an IP address. Once the network connection succeeds, `socket` sockets can be used for network communication.
+
+An example of the wifi connection command is shown below. After the connection succeeds, the obtained IP address is printed on the terminal:
+
+```c
+wifi join ssid_test 12345678
+[I/WLAN.mgnt] wifi connect success ssid:ssid_test
+[I/WLAN.lwip] Got IP address : 192.168.1.110
+```
+
+### WiFi Disconnection
+
+The command to disconnect WiFi is `wifi disc`. After it is executed, the development board disconnects from the hotspot.
+
+An example of the wifi disconnection command is shown below. After the disconnection succeeds, the following information is printed on the terminal:
+
+```c
+wifi disc
+[I/WLAN.mgnt] disconnect success!
+```
+
+## Example for WLAN Device Usage
+
+### WiFi Scan
+
+The following code shows a synchronous WiFi scan and prints the results on the terminal. It first sets the WLAN device to station mode and then calls the scan function `rt_wlan_scan_sync`. This function is synchronous; it returns the number of scanned hotspots together with the scan results. In this example, the scanned hotspot names are printed.
+
+```c
+#include <rthw.h>
+#include <rtthread.h>
+
+#include <wlan_mgnt.h>
+#include <wlan_prot.h>
+#include <wlan_cfg.h>
+
+void wifi_scan(void)
+{
+    struct rt_wlan_scan_result *result;
+    int i = 0;
+
+    /* Configuring WLAN device working mode */
+    rt_wlan_set_mode(RT_WLAN_DEVICE_STA_NAME, RT_WLAN_STATION);
+    /* WiFi scan */
+    result = rt_wlan_scan_sync();
+    if (result == RT_NULL)
+    {
+        rt_kprintf("wifi scan failed!\n");
+        return;
+    }
+    /* Print scan results */
+    rt_kprintf("scan num:%d\n", result->num);
+    for (i = 0; i < result->num; i++)
+    {
+        rt_kprintf("ssid:%s\n", result->info[i].ssid.val);
+    }
+}
+
+int scan(int argc, char *argv[])
+{
+    wifi_scan();
+    return 0;
+}
+MSH_CMD_EXPORT(scan, scan test.);
+```
+
+The results are as follows:
+
+![Scan](figures/an0026_3.png)
+
+### WiFi Connection and Disconnection
+
+The code below shows a synchronous WiFi connection. It first configures the WLAN device, then creates a semaphore that is used to wait for the `RT_WLAN_EVT_READY` event. Callback functions are registered for the events of interest, and the `rt_wlan_connect()` connection function is called; its return value indicates whether the connection succeeded. Even after a successful connection, communication is only possible once the network has obtained an IP address, so the semaphore created earlier is used to wait until the network is ready.
+
+After connecting, the example waits for a while and then calls `rt_wlan_disconnect()` to disconnect. The disconnect operation is blocking, and its return value indicates whether the disconnection succeeded.
+
+```c
+#include <rthw.h>
+#include <rtthread.h>
+
+#include <wlan_mgnt.h>
+#include <wlan_prot.h>
+#include <wlan_cfg.h>
+
+#define WLAN_SSID               "SSID-A"
+#define WLAN_PASSWORD           "12345678"
+#define NET_READY_TIME_OUT      (rt_tick_from_millisecond(15 * 1000))
+
+static rt_sem_t net_ready = RT_NULL;
+
+static void
+wifi_ready_callback(int event, struct rt_wlan_buff *buff, void *parameter)
+{
+    rt_kprintf("%s\n", __FUNCTION__);
+    rt_sem_release(net_ready);
+}
+
+static void
+wifi_connect_callback(int event, struct rt_wlan_buff *buff, void *parameter)
+{
+    rt_kprintf("%s\n", __FUNCTION__);
+    if ((buff != RT_NULL) && (buff->len == sizeof(struct rt_wlan_info)))
+    {
+        rt_kprintf("ssid : %s \n", ((struct rt_wlan_info *)buff->data)->ssid.val);
+    }
+}
+
+static void
+wifi_disconnect_callback(int event, struct rt_wlan_buff *buff, void *parameter)
+{
+    rt_kprintf("%s\n", __FUNCTION__);
+    if ((buff != RT_NULL) && (buff->len == sizeof(struct rt_wlan_info)))
+    {
+        rt_kprintf("ssid : %s \n", ((struct rt_wlan_info *)buff->data)->ssid.val);
+    }
+}
+
+static void
+wifi_connect_fail_callback(int event, struct rt_wlan_buff *buff, void *parameter)
+{
+    rt_kprintf("%s\n", __FUNCTION__);
+    if ((buff != RT_NULL) && (buff->len == sizeof(struct rt_wlan_info)))
+    {
+        rt_kprintf("ssid : %s \n", ((struct rt_wlan_info *)buff->data)->ssid.val);
+    }
+}
+
+rt_err_t wifi_connect(void)
+{
+    rt_err_t result = RT_EOK;
+
+    /* Configuring WLAN device working mode */
+    rt_wlan_set_mode(RT_WLAN_DEVICE_STA_NAME, RT_WLAN_STATION);
+    /* station connect */
+    rt_kprintf("start to connect ap ...\n");
+    net_ready = rt_sem_create("net_ready", 0, RT_IPC_FLAG_FIFO);
+    rt_wlan_register_event_handler(RT_WLAN_EVT_READY,
+            wifi_ready_callback, RT_NULL);
+    rt_wlan_register_event_handler(RT_WLAN_EVT_STA_CONNECTED,
+            wifi_connect_callback, RT_NULL);
+    rt_wlan_register_event_handler(RT_WLAN_EVT_STA_DISCONNECTED,
+            wifi_disconnect_callback, RT_NULL);
+    rt_wlan_register_event_handler(RT_WLAN_EVT_STA_CONNECTED_FAIL,
+            wifi_connect_fail_callback, RT_NULL);
+
+    /* connect wifi */
+    result = rt_wlan_connect(WLAN_SSID, WLAN_PASSWORD);
+
+    if (result == RT_EOK)
+    {
+        /* 
waiting for the IP address to be obtained successfully */
+        result = rt_sem_take(net_ready, NET_READY_TIME_OUT);
+        if (result == RT_EOK)
+        {
+            rt_kprintf("networking ready!\n");
+        }
+        else
+        {
+            rt_kprintf("wait ip got timeout!\n");
+        }
+        rt_wlan_unregister_event_handler(RT_WLAN_EVT_READY);
+        rt_sem_delete(net_ready);
+
+        rt_thread_delay(rt_tick_from_millisecond(5 * 1000));
+        rt_kprintf("wifi disconnect test!\n");
+        /* disconnect */
+        result = rt_wlan_disconnect();
+        if (result != RT_EOK)
+        {
+            rt_kprintf("disconnect failed\n");
+            return result;
+        }
+        rt_kprintf("disconnect success\n");
+    }
+    else
+    {
+        rt_kprintf("connect failed!\n");
+    }
+    return result;
+}
+
+int connect(int argc, char *argv[])
+{
+    wifi_connect();
+    return 0;
+}
+MSH_CMD_EXPORT(connect, connect test.);
+```
+
+The results are as follows:
+
+![Disconnection](figures/an0026_4.png)
+
+### WiFi Automatic Reconnection
+
+First enable the automatic reconnection function, use the command line to connect to hotspot A, and then connect to another hotspot B. A few seconds after hotspot B is powered off, the system automatically retries connecting to hotspot B; since it can no longer be reached, the system automatically switches to hotspot A. Once the connection succeeds, the system stops trying to reconnect.
+
+```c
+#include <rthw.h>
+#include <rtthread.h>
+
+#include <wlan_mgnt.h>
+#include <wlan_prot.h>
+#include <wlan_cfg.h>
+
+static void
+wifi_ready_callback(int event, struct rt_wlan_buff *buff, void *parameter)
+{
+    rt_kprintf("%s\n", __FUNCTION__);
+}
+
+static void
+wifi_connect_callback(int event, struct rt_wlan_buff *buff, void *parameter)
+{
+    rt_kprintf("%s\n", __FUNCTION__);
+    if ((buff != RT_NULL) && (buff->len == sizeof(struct rt_wlan_info)))
+    {
+        rt_kprintf("ssid : %s \n", ((struct rt_wlan_info *)buff->data)->ssid.val);
+    }
+}
+
+static void
+wifi_disconnect_callback(int event, struct rt_wlan_buff *buff, void *parameter)
+{
+    rt_kprintf("%s\n", __FUNCTION__);
+    if ((buff != RT_NULL) && (buff->len == sizeof(struct rt_wlan_info)))
+    {
+        rt_kprintf("ssid : %s \n", ((struct rt_wlan_info *)buff->data)->ssid.val);
+    }
+}
+
+static void
+wifi_connect_fail_callback(int event, struct rt_wlan_buff *buff, void *parameter)
+{
+    rt_kprintf("%s\n", __FUNCTION__);
+    if ((buff != RT_NULL) && (buff->len == sizeof(struct rt_wlan_info)))
+    {
+        rt_kprintf("ssid : %s \n", ((struct rt_wlan_info *)buff->data)->ssid.val);
+    }
+}
+
+int wifi_autoconnect(void)
+{
+    /* Configuring WLAN device working mode */
+    rt_wlan_set_mode(RT_WLAN_DEVICE_STA_NAME, RT_WLAN_STATION);
+    /* Start automatic connection */
+    rt_wlan_config_autoreconnect(RT_TRUE);
+    /* register event handlers */
+    rt_wlan_register_event_handler(RT_WLAN_EVT_READY,
+            wifi_ready_callback, RT_NULL);
+    rt_wlan_register_event_handler(RT_WLAN_EVT_STA_CONNECTED,
+            wifi_connect_callback, RT_NULL);
+    rt_wlan_register_event_handler(RT_WLAN_EVT_STA_DISCONNECTED,
+            wifi_disconnect_callback, RT_NULL);
+    rt_wlan_register_event_handler(RT_WLAN_EVT_STA_CONNECTED_FAIL,
+            wifi_connect_fail_callback, RT_NULL);
+    return 0;
+}
+
+int auto_connect(int argc, char *argv[])
+{
+    wifi_autoconnect();
+    return 0;
+}
+MSH_CMD_EXPORT(auto_connect, auto connect test.);
+```
+
+The results are as follows:
+
+![Autoconnection](figures/an0026_5.png)
diff --git a/documentation/dlmodule/README.md b/documentation/dlmodule/README.md
new file mode 100644
index 0000000000..de6b1928a9
--- /dev/null
+++ b/documentation/dlmodule/README.md
@@ -0,0 +1,314 @@
+# Dynamic Module: dlmodule #
+
+In traditional desktop operating systems, user space and kernel space are
separate. The application runs in user space, and the kernel and kernel modules run in kernel space. Kernel modules can be dynamically loaded and removed to extend the kernel functionality. `dlmodule` is the software component that provides this dynamic module loading mechanism in the kernel space of RT-Thread. In versions of RT-Thread prior to v3.1.0 it was also called the `Application Module`; from v3.1.0 onwards it is simply called the `dynamic module`.
+
+`dlmodule` is essentially an ELF loader. The code segment and data segment of a separately compiled ELF file are loaded into memory, and its symbols are resolved and bound to the API addresses exported by the kernel. The ELF files are primarily placed on a file system under RT-Thread.
+
+## Introduction ##
+
+The dynamic module provides a mechanism for dynamically loading program modules into RT-Thread. Because a module is compiled independently of the kernel, it is flexible to use. In terms of implementation, this is a mechanism that separates the kernel from the dynamic modules: the kernel and the dynamic modules are compiled separately, and at runtime the compiled modules are loaded into the kernel by the module loader in the kernel.
+
+The dynamic module of RT-Thread currently supports two formats:
+
+* `.mo` is an executable dynamic module, suffixed with `.mo` when compiled. It can be loaded, and a main thread is automatically created in the system to execute the `main()` function of the module; `main(int argc, char **argv)` can also receive arguments from the command line.
+* `.so` is a dynamic library, suffixed with `.so` when compiled. It can be loaded and kept resident in memory, and it provides a set of functions to be used by other programs (code in the kernel or other dynamic modules).
+
+The architectures on which RT-Thread currently supports dynamic modules mainly include ARM and x86, and support will be extended to MIPS and RISC-V in the future. The RT-Thread kernel firmware itself can be built with a variety of toolchains, such as GCC, ARMCC and IAR; however, dynamic modules can currently only be compiled with the GNU GCC toolchain. Therefore, compiling an RT-Thread module requires a GCC toolchain, such as CodeSourcery's arm-none-eabi toolchain. In general, it is best to compile the kernel and the dynamic modules with the same toolchain (so that *libc* does not behave inconsistently). In addition, dynamic modules can only be loaded into RAM, with their symbols bound to the API addresses exported by the kernel; they cannot run directly from Flash in XIP mode (because the code segment in Flash cannot be modified for relocation).
+
+## Using Dynamic Module ##
+
+To use dynamic modules in your system, you need to compile a firmware that supports dynamic modules, as well as the dynamic modules to be run. The following two parts cover compiling the firmware and compiling dynamic modules.
+ +### Compile Firmware ### + +When you want to use the dynamic module, you need to open the corresponding option in the firmware configuration, use menuconfig to open the following configuration: + +```c + RT-Thread Components ---> + POSIX layer and C standard library ---> + [*] Enable dynamic module with dlopen/dlsym/dlclose feature +``` + +Also open the configuration options of the file system: + +```c + RT-Thread Components ---> + Device virtual file system ---> + [*] Using device virtual file system +``` + +The configuration parameters required for dynamic module compilation are set in *rtconfig.py* file corresponding to bsp: + +```Python +M_CFLAGS = CFLAGS + '-mlong-calls -fPIC' +M_CXXFLAGS = CXXFLAGS + '-mlong-calls -fPIC' +M_LFLAGS = DEVICE + CXXFLAGS + '-Wl,--gc-sections,-z,max-page-size=0x4' +\ + '-shared -fPIC -nostartfiles -nostdlib -static-libgcc' +M_POST_ACTION = STRIP + '-R .hash $TARGET\n' + SIZE + '$TARGET \n' +M_BIN_PATH = r'E:\qemu-dev310\fatdisk\root' +``` + +The relevant explanation is as follows: + +* **M_CFLAGS**: C code compilation parameters used in dynamic module compilation, generally compiled here in PIC mode (that is, the code address supports floating mode execution); +- **M_CXXFLAGS**: C++ code compilation parameters used in dynamic module compilation, parameters similar to M_CFLAGS above; +- **M_LFLAGS**: Parameters when the dynamic module is linked. The same is the PIC method, and is linked in a shared library (partial link); +- **M_POST_ACTION**: the action to be performed after the dynamic module is compiled. Here, the elf file is stripped to reduce the size of the elf file. +- **M_BIN_PATH**: When the dynamic module is successfully compiled, whether the corresponding dynamic module file needs to be copied to a unified place; + +Basically, these compilation configuration parameters for the ARM9, Cortex-A, and Cortex-M series are the same. + +The kernel firmware also exports some function APIs to the dynamic module via `RTM(function)`. All exported symbols information in firmware can be listed by command A under MSH: + +``` +list_symbols +``` + +The `dlmodule` loader also parses the symbols that need to be parsed in the dynamic module according to the symbol table exported here to complete the final binding action. + +This symbol table will be placed in a special section named `RTMSymTab`, so the corresponding firmware link script also needs to retain this area, and will not be removed by the linker optimization. You can add the corresponding information in the link script: + +```text +/* section information for modules */ +. = ALIGN(4); +__rtmsymtab_start = .; +KEEP(*(RTMSymTab)) +__rtmsymtab_end = .; +``` + +Then execute the `scons` under the BSP project directory and generate the firmware without errors. Execute the command in the BSP project directory: + +`scons --target=ua -s` + +to generate the kernel header file search path and global macro definitions that need to be included when compiling the dynamic module. + +### Compile Dynamic Module ### + +There is a separate repository on github: [rtthread-apps](https://github.com/RT-Thread/rtthread-apps) , which contains some examples of dynamic modules and dynamic libraries. 
+ +Its directory structure is as follows: + +| **Directory** | **Description** | +| --- | ---------------- | +| cxx | Demonstrates how to program in C++ using dynamic modules | +| hello | A "hello world" example | +| lib | Example of dynamic library | +| md5 | Generate md5 code for a file | +| tools | *Python/SConscript* scripts required for dynamic module compilation | +| ymodem | Download a file to the file system through the serial port using the YModem protocol | + +You can clone this repository locally and compile it with the scons from the command line. For Windows platforms, the ENV tool is recommended. + +After entering the console command line, enter the directory where the `rtthread-apps` repo is located (the same, please ensure that the full path of this directory does not contain spaces, Chinese characters, etc.), and set two variables: + +* RTT_ROOT: points to the root directory of the RT-Thread code; +- BSP_ROOT: points to the project directory of the BSP; + +Use follow commands on Windows in Env tool(assuming the BSP used is qemu-vexpress-a9): + +```c +set RTT_ROOT=d:\your_rtthread +set BSP_ROOT=d:\your_rtthread\bsp\qemu-vexpress-a9 +``` + +to set the corresponding environment variable. Then use the following command to compile the dynamic module, such as the hello example: + +``` +scons --app=hello +``` + +After compiling successfully, it will generate a `hello.mo` file in the `rtthread-apps/hello` directory. + +You can also compile dynamic libraries, such as the lib example: + +``` +scons --lib=lib +``` + +After compiling successfully, it will generate the `lib.so` file in the `rtthread-apps/lib` directory. + +We can put these `mo` and `so` files under the RT-Thread file system. Under msh, you can simply execute the `hello.mo` dynamic module as a `hello` command: + +```c +msh />ls +Directory /: +hello.mo 1368 +lib.so 1376 +msh />hello +msh />Hello, world +``` + +After calling hello, the main function in `hello.mo` will be executed, and the corresponding dynamic module will be exited after execution. The code for `hello/main.c` is as follows: + +```c +#include + +int main(int argc, char *argv[]) +{ + printf("Hello, world\n"); + + return 0; +} +``` + +## APIs of Dynamic Module + +In addition to dynamically loading and executing dynamic modules via msh, dynamic modules can be loaded or unloaded using the dynamic module API provided by RT-Thread in the main program. + +### Load Dynamic Module + +```c +struct rt_dlmodule *dlmodule_load(const char* pgname); +``` + +|**Parameter**|**Description**| +| ---- | ---- | +| pgname | Dynamic module path | +|**Return**| —— | +| Dynamic module pointer | Successfully loaded | +| RT_NULL | Failed | + +This function loads the dynamic module from the file system into memory, and if it is loaded correctly, returns a pointer to the module. This function does not create a thread to execute this dynamic module, just load the module into memory and parse the symbolic address. + +### Execute Dynamic Module + +```c +struct rt_dlmodule *dlmodule_exec(const char* pgname, const char* cmd, int cmd_size); +``` + +|**Parameter**|**Description**| +| ---- | ---- | +| pgname | Dynamic module path | +| cmd | Command line string including the dynamic module command itself | +| cmd_size | Command line string size | +|**Return**| —— | +| Dynamic module pointer | Run successfully | +| RT_NULL | Failed | + +This function loads the dynamic module according to the `pgname` path and starts a thread to execute `main` of the dynamic module. 
At the same time, `cmd` is passed as the command line Parameter to `main` entry of the dynamic module. + +### Exit Dynamic Module + +```c +void dlmodule_exit(int ret_code); +``` + +|**Parameter**|**Description**| +| ---- | ---- | +| ret_code | Module's return parameter | + +This function is called by the module runtime, it can set the return value of the module exit `ret_code`, and then exit from the module. + +### Find Dynamic Modules + +```c +struct rt_dlmodule *dlmodule_find(const char *name); +``` + +|**Parameter**|**Description**| +| ---- | ---- | +| name | Module name | +|**Return**| —— | +| Dynamic module pointer | Successful | +| RT_NULL | Failed | + +This function uses `name` to find out if there is already a dynamic module loaded in the system. + +### Return Dynamic Module + +```c +struct rt_dlmodule *dlmodule_self(void); +``` + +|**Return**|**Description**| +| ---- | ---- | +| Dynamic module pointer | Successful | +| RT_NULL | Failed | + +This function returns a pointer of the dynamic module in the calling context. + +### Find Symbol + +```c +rt_uint32_t dlmodule_symbol_find(const char *sym_str); +``` + +|**Parameter**|**Description**| +| ---- | ---- | +| sym_str | Symbol name | +|**Return**| —— | +| Symbol address | Successful | +| 0 | Failed | + +This function returns the symbol address based on the symbol name. + +## Libdl API of POSIX Standard ## + +The POSIX standard libdl API is also supported in RT-Thread dlmodule. It is similar to loading a dynamic library into memory (and parsing some of the symbol information). This dynamic library provides the corresponding set of function operations. The libdl API needs to include the header files: `#include ` + +### Open Dynamic Library + +```c +void * dlopen (const char * pathname, int mode); +``` + +|**Parameter**|**Description**| +| ---- | ---- | +| pathname | Dynamic library path name | +| mode | The mode when opening a dynamic library, not used in RT-Thread | +|**Return**| —— | +| Dynamic library handle (`struct dlmodule` structure pointer) | Successful | +| NULL | Failed | + +This function is similar to the `dlmodule_load` , which loads the dynamic library from the file system and returns the handle pointer of the dynamic library. + +### Find Symbol + +```c +void* dlsym(void *handle, const char *symbol); +``` + +|**Parameter**|**Description**| +| ---- | ---- | +| handle | Dynamic library handle, return value of `dlopen` | +| symbol | The symbol address to return | +|**Return**| —— | +| symbol address | Successful | +| NULL | Failed | + +This function looks in the dynamic library `handle` for the presence of the symbol of `symbol` , if there is an address that returns it. + +### Close Dynamic Library + +``` +int dlclose (void *handle); +``` + +|**Parameter**|**Description**| +| ---- | ---- | +| handle | Dynamic library handle | +|**Return**| —— | +| 0 | Successful | +| Negative number | Failed | + +This function closes the dynamic library pointed to by `handle` and unloads it from memory. It should be noted that when the dynamic library is closed, the symbolic address originally returned by `dlsym` will no longer be available. If you still try to access it, it may cause a fault error. + +## FAQs + +Please refer to [*User Manual of Env*](../env/env.md) for issues related to the Env tool. + +### Q: Dynamic modules cannot be run successfully according to the documentation. + +**A:** Please update the RT-Thread source code to version 3.1.0 and above. 
+
+### Q: Compile the project with the scons command, prompting "undefined reference to __rtmsymtab_start".
+
+**A:** Please refer to the qemu-vexpress-a9 BSP GCC link script file *link.lds* and add the following to the TEXT section of the project's GCC link script.
+
+```
+    /* section information for modules */
+    . = ALIGN(4);
+    __rtmsymtab_start = .;
+    KEEP(*(RTMSymTab))
+    __rtmsymtab_end = .;
+```
diff --git a/documentation/Doxyfile b/documentation/doxygen/Doxyfile
similarity index 100%
rename from documentation/Doxyfile
rename to documentation/doxygen/Doxyfile
diff --git a/documentation/env/env.md b/documentation/env/env.md
new file mode 100644
index 0000000000..5fc176f0cb
--- /dev/null
+++ b/documentation/env/env.md
@@ -0,0 +1,256 @@
+# User Manual of Env
+
+Env is a handy utility developed by the RT-Thread team that provides a build environment, graphical system configuration, and package management for software projects that run on the RT-Thread operating system.
+
+It is a wrapper around the built-in menuconfig, an open source GUI tool designed for ease of use. It can be used to configure kernel parameters, components and software packages, so that developers can construct the system like building blocks.
+
+## Main Features
+
+- menuconfig provides a graphical interface for the configuration logic and parameters.
+- Each configuration option comes with a help section by default.
+- Dependencies are installed automatically.
+- rtconfig.h is generated automatically, without manual modification.
+- The scons tool is used to streamline the build project and compilation environment.
+- Modular software packages and a decoupled design make maintenance easier.
+- Additional software packages and dependencies can be downloaded from the Internet with a few clicks.
+
+## Preparation
+
+The Env tool comes with a source code builder, a compilation environment and a package management system.
+
+- [Download the Env tool]().
+- Install Git (download link - https://git-scm.com/downloads). Follow the git installation instructions and add git to the environment variables.
+- Note that none of the working paths may contain Chinese characters or spaces.
+
+## User Guide of Env
+
+### Open the Console
+
+The RT-Thread software package environment is mainly based on a command line console, so that the RT-Thread development environment can be set up while keeping modification of configuration files to a minimum.
+
+There are two ways to open the console:
+
+#### 1. Run the executable file in the Env directory
+
+Enter the Env directory and run `env.exe`. If it fails to open, try `env.bat` instead.
+
+#### 2. Open the Env console from the folder right-click menu
+
+Add Env to the right-click menu:
+
+![env settings](figures/Add_Env_To_Right-click_Menu-1.png)
+
+![1561968527218](figures/Add_Env_To_Right-click_Menu-2.png)
+
+![1561968626998](figures/Add_Env_To_Right-click_Menu-3.png)
+
+Follow the steps shown in the images to launch the Env console from the right-click menu in any folder. The result is as follows:
+
+![Right-click menu to launch the Env console](figures/console.png)
+
+> Because the Env process needs to set environment variables, anti-virus software may raise a false alarm at the first startup. If this happens, allow the Env-related programs to run and then add them to the white list.
+ +### Compile + +Scons is a compile building tool used by RT-Thread to compile RT-Threads using the scons related commands. + +#### Step One: Switch to the BSP root directory + +If you use Method 1 to open Env console, you need to switch to the target BSP using the `cd` command. + +For example, the target BSP is `rt-thread\bsp\stm32\stm32f103-dofly-lyc8`: + +![stm32f429-apollo project directory](figures/cd_cmd.png) + +#### Step Two: Compile the BSP + +- Env carries `Python & scons` . To compile BSP, just use the default ARM_GCC toolchain by running `scons` command in the target BSP directory. + +![compilation project using scons command](figures/use_scons.png) + +Compiled successfully: + +![complied successfully](figures/scons_done.png) + +If you use mdk/iar for project development, you can use the project file in the BSP directly or use one of the following commands to regenerate the project and compile and download it. + +``` +scons --target=iar +scons --target=mdk4 +scons --target=mdk5 +``` + +For more scons tutorials, please refer to [*Scons*](../scons/scons.md). + +### BSP configuration: menuconfig + +Menuconfig is a graphical configuration tool that RT-Thread uses to configure and tailor the entire system. + +#### Instruction for Shortcuts + +Go to the BSP root directory and open the interface by entering `menuconfig`. The menuconfig common shortcuts are as shown: + +![Commonly-used Shortcuts for menuconfig ](figures/hotkey.png) + +#### Modify Settings + +There are many types of configuration items in menuconfig, and the modification methods are different. The common types are: + +- On/Off Type: Use the space bar to select or close +- Value, string type: After pressing the Enter key, a dialog box will appear, and the configuration items will be modified in the dialog box. + +#### Save Settings + +After selecting the configuration item, press `ESC` to exit, and select `Save` to automatically generate the `rtconfig.h` file. At this point, using the `scons` command again will recompile the project according to the new rtconfig.h file. + +### Package Management + +RT-Thread provides a package management platform where the officially available or developer-supplied packages are stored. The platform provides developers with a choice of reusable software packages that are an important part of RT-Thread. + +[Click here](https://github.com/RT-Thread-packages) to view the official RT-Thread package, most of which have detailed documentation and usage examples. + +As a part of Env, the `package` tool provides developers with management functions such as downloading, updating, and deleting packages. + +Enter the `pkgs` command on the Env command line to see an introduction to the command: + +``` +> pkgs +usage: env.py package [-h] [--update] [--list] [--wizard] [--upgrade] + [--printenv] + +optional arguments: + -h, --help show this help message and exit + --update update packages, install or remove the packages as you set in + menuconfig + --list list target packages + --wizard create a package with wizard + --upgrade update local packages list from git repo + --printenv print environmental variables to check +``` + +#### Download, update, and delete packages + +Before downloading and updating the software package, you need to **open** the target package in `menuconfig`. 
+ +These packages locates in `RT-Thread online packages` , Once you enter the menu, you can see the following package categories: + +![Package Categories](figures/menuconfig_packages_list.png) + +Find the package you need and open, then save and exit menuconfig. The package will be marked, but has not been downloaded locally, so it is still unavailable. + +- **download**: if the software package is selected but not downloaded, enter: `pkgs --update`, then the software package will be downloaded automatically; +- **update**: if the selected package has a latest update on the server and the version is selected **latest**, then enter `pkgs --update` , the package will be updated in local; +- **delete**: if a software package is not needed, deselect it in menuconfig and then use `pkgs --update` command. Then locally downloaded but unselected packages will be deleted. + +#### Update local package information + +As the package system grows, more and more packages will be added, so the list of packages in menuconfig may be **unsynchronized** with the server. This can be fixed by using `pkgs --upgrade` command, which not only synchronizes updates to local package information, but also upgrades to Env's functional scripts, which are recommended for regular use. + +### Env Tool Configuration + +- The new version of the Env tool includes an automatic update package and an option to automatically generate mdk/iar projects. The default is not enabled. It can be configured using `menuconfig -s/--setting` . + +- Use `menuconfig -s` command to enter the Env configuration interface + + ![Env Configuration Interface](figures/menuconfig_s.png) + + Press Enter to enter the configuration menu with 3 configuration options: + + ![configuration options](figures/menuconfig_s_auto_update.png) + +The three options are: + +- **Auto update pkgs config**:Automatic package update function: After exiting the menuconfig function, `pkgs --update` command is automatically used to download and install the package and delete the old package. This feature is used when downloading online packages. +- **Auto create a MDK/IAR project**: After modifying the menuconfig configuration, you must re-generate the project by typing `scons --target=xxx` . Turning on this feature will automatically regenerate the project when you exit menuconfig, without having to manually enter the scons command to regenerate the project. + +## Use Env in Your Project + +### Requirements for Using Env + +- Menuconfig is a feature of RT-Thread over version 3.0. It is recommended to update RT-Thread over version 3.0. +- Currently RT-Thread does not support `menuconfig` for all BSPs, which means that some BSPs can't be configured with menuconfig for the time being, but the commonly used BSPs are already supported. + +### How to Modify Options in Menuconfig + +If you want to add a macro definition in the configuration item of menuconfig, you can modify the `Kconfig` file under BSP. The modification method can search Kconfig syntax on the Internet for detailed documentation, or refer to the Kconfig file in RT-Thread or The Kconfig file in the BSP that supports menuconfig. + +### To Add menuconfig function to New Project + +New project here refers to a newly developed project that has not yet generated `.config` and `rtconfig.h`. Because these two files are only created when menuconfig is first saved. The specific process is as follows: + +1. Copy the kconfig file from the BSP that already supports the menuconfig function to the new project root directory. +2. 
Note that modifying the RTT_ROOT value in Kconfig is the directory where RT-Thread is located, otherwise RTT_ROOT may be not found. +3. Start the configuration with the menuconfig command. + +### To Add menuconfig function to Old Project + +Old project here refers to the development that has been going on for a while, and there is a modified `rtconfig.h` file in the project, but there is no project configured with menuconfig. The specific process is as follows: + +1. First back up the rtconfig.h file in the old project. +2. Use `scons --genconfig` command to generate a `.config` file from the existing `rtconfig.h` file. The `.config` file generated here saves the configuration parameters of the `rtconfig.h` file in the old project. +3. Copy the `kconfig` file from the BSP that already supports the menuconfig function to the root directory of the project you want to modify. +4. Note that modifying the RTT_ROOT value in Kconfig is the directory where RT-Thread is located, otherwise RTT_ROOT may be not found. +5. Use the menuconfig command to configure the old project we want to modify. Menuconfig will read the `.config` file generated in the second step and generate a new `.config` file and rtconfig.h file based on the configuration parameters of the old project. +6. Check the old and new rtconfig.h files. If there are any inconsistencies, you can use the menuconfig command to adjust the configuration items. + +## Explore More with pip + +In the Env environment, you can't directly use the pip tool provided by Python to install more modules. If you need to use the pip function in Env environment, you can reinstall the pip tool as follows: + +1. Download the get-pip.py file from https://bootstrap.pypa.io/get-pip.py and save it on disk. +2. Run `python get-pip.py` command in the Env environment to reinstall the pip tool. +3. After the pip tool is reinstalled successfully, you can use `pip install module-name` command to install the required modules. + +## Notes for Using Env + +- For the first time, Env is recommended to go to the official website to download the latest version of the Env tool. The new version of Env will have better compatibility and also support automatic update commands. +- You can use the Env built-in command `pkgs --upgrade` to update the package list and Env's function code to minimize the problems you have fixed. +- Do not have Chinese or spaces in the routes of Env. +- Do not have Chinese or spaces in the routes where the BSP project is located. + +## FAQs + +### Q: There's unintelligible texts appear in Env. + +**A:** First check if there is a Chinese route. + +Check if the `chcp` command has been added to the system environment variable and try to change the character format to English using the `chcp 437` command. If the prompt does not have a `chcp` command, it is considered not to be added to the environment variable. + +The directory where the `chcp` command is located may be added to the environment variable in the `system32` directory. + +### Q: It prompts that the git command cannot be found. + +  'git' is not recognized as an internal or external command, possible program or batch file. + +**A:** Git is not installed. You need to install git and add environment variables. + +### Q: It prompts that the CMD command cannot be found. + +**A:** Right-click–>> Property—>> Advanced System Settings—>> Environment Variable, Add `C:\Windows\System32` to system environment variables. + +### Q: Prompt "no module named site" when running python. 
+ +**A:** Computer right button –>> Properties—>> Advanced System Settings—>> Environment Variable, in the user variable of the administrator, create a new variable named PYTHONHOME, and the variable value is: `F:\git_repositories\env\tools\Python27` (the installation route of Python in Env), do not add ";" afterwards, otherwise it will be invalid. If add PYTHONHOME can not solve theproblem, try to add PYTHONPATH in the same way. + +### Q: What types of projects can I generate under Env? + + **A:** + +1. Currently, you can use the scons tool to generate mdk/iar projects under Env. There is no automatic generation of eclipse projects. +2. Generally, using gcc's toolchain, using an editor such as source insight or VS Code to edit the code and compile with `scons` command. + +### Q:How can my own BSP support menuconfig? + +**A:** You can refer to this chapter **Use Env in Your Project**. + +### Q: What is the difference between the pkgs --upgrade command and the pkgs --update command? + + **A:** + +1. The `pkgs --upgrade` command is used to upgrade the Env script itself and the list of packages. You cannot select a recently updated package without the latest package list. +2. The `pkgs --update` command is used to update the package itself. For example, if you selected json and mqtt packages in menuconfig, you did not download them when you exit menuconfig. You need to use the `pkgs --update` command, at which point Env will download the package you selected and add it to your project. +3. The new version of Env supports the `menuconfig -s/--setting` command. If you don't want to use the `pkgs --update` command after replacing the package, configure Env after using the `menuconfig -s/--setting` command. Select each time you use menuconfig. After the package is automatically updated. + +### Q: Prompt “can't find file Kconfig” while using menuconfig. + +**A:** The Kconfig file is missing from the current working BSP directory. Please refer *To Add menuconfig function to New Project* and *To Add menuconfig function to Old Project*. 
diff --git a/documentation/env/figures/Add_Env_To_Right-click_Menu-1.png b/documentation/env/figures/Add_Env_To_Right-click_Menu-1.png new file mode 100644 index 0000000000..37062950e9 Binary files /dev/null and b/documentation/env/figures/Add_Env_To_Right-click_Menu-1.png differ diff --git a/documentation/env/figures/Add_Env_To_Right-click_Menu-2.png b/documentation/env/figures/Add_Env_To_Right-click_Menu-2.png new file mode 100644 index 0000000000..c7e9ad3e0c Binary files /dev/null and b/documentation/env/figures/Add_Env_To_Right-click_Menu-2.png differ diff --git a/documentation/env/figures/Add_Env_To_Right-click_Menu-3.png b/documentation/env/figures/Add_Env_To_Right-click_Menu-3.png new file mode 100644 index 0000000000..6f6595312c Binary files /dev/null and b/documentation/env/figures/Add_Env_To_Right-click_Menu-3.png differ diff --git a/documentation/env/figures/cd_cmd.png b/documentation/env/figures/cd_cmd.png new file mode 100644 index 0000000000..2a77091ee6 Binary files /dev/null and b/documentation/env/figures/cd_cmd.png differ diff --git a/documentation/env/figures/console.png b/documentation/env/figures/console.png new file mode 100644 index 0000000000..c83d0b7a17 Binary files /dev/null and b/documentation/env/figures/console.png differ diff --git a/documentation/env/figures/hotkey.png b/documentation/env/figures/hotkey.png new file mode 100644 index 0000000000..2083248aa6 Binary files /dev/null and b/documentation/env/figures/hotkey.png differ diff --git a/documentation/env/figures/menuconfig_packages_list.png b/documentation/env/figures/menuconfig_packages_list.png new file mode 100644 index 0000000000..073456bc13 Binary files /dev/null and b/documentation/env/figures/menuconfig_packages_list.png differ diff --git a/documentation/env/figures/menuconfig_s.png b/documentation/env/figures/menuconfig_s.png new file mode 100644 index 0000000000..0c087b1fde Binary files /dev/null and b/documentation/env/figures/menuconfig_s.png differ diff --git a/documentation/env/figures/menuconfig_s_auto_prj.png b/documentation/env/figures/menuconfig_s_auto_prj.png new file mode 100644 index 0000000000..57ac4ae52d Binary files /dev/null and b/documentation/env/figures/menuconfig_s_auto_prj.png differ diff --git a/documentation/env/figures/menuconfig_s_auto_update.png b/documentation/env/figures/menuconfig_s_auto_update.png new file mode 100644 index 0000000000..462fa74b7e Binary files /dev/null and b/documentation/env/figures/menuconfig_s_auto_update.png differ diff --git a/documentation/env/figures/q1.png b/documentation/env/figures/q1.png new file mode 100644 index 0000000000..186362f0cf Binary files /dev/null and b/documentation/env/figures/q1.png differ diff --git a/documentation/env/figures/scons_done.png b/documentation/env/figures/scons_done.png new file mode 100644 index 0000000000..4a82eb21ce Binary files /dev/null and b/documentation/env/figures/scons_done.png differ diff --git a/documentation/env/figures/use_scons.png b/documentation/env/figures/use_scons.png new file mode 100644 index 0000000000..8cf8fba1a3 Binary files /dev/null and b/documentation/env/figures/use_scons.png differ diff --git a/documentation/figures/02Software_framework_diagram.png b/documentation/figures/02Software_framework_diagram.png new file mode 100644 index 0000000000..a236ff9338 Binary files /dev/null and b/documentation/figures/02Software_framework_diagram.png differ diff --git a/documentation/filesystem/README.md b/documentation/filesystem/README.md new file mode 100644 index 0000000000..15a2601487 --- 
/dev/null +++ b/documentation/filesystem/README.md @@ -0,0 +1,1027 @@ +# Virtual File System + +In early days, the amount of data to be stored in embedded systems was relatively small and data types were relatively simple. +The data were stored by directly writing to a specific address in storage devices. However, with today modern technology, embedded device's functions are getting complicated and required more data storage. Therefore, we need new data management methods to simplify and organize the data storage. + +A file system is made up of abstract data types and also a mechanism for providing data access, retrieve, implements, and store them in hierarchical structure. A folder contains multiple files and a file contains multiple organized data on the file system. This chapter explains about the RT-Thread file system, architecture, features and usage of virtual file system in RT-Thread OS. + +## An Introduction to DFS + +Device File System (DFS) is a virtual file system component and name structure is similar to UNIX files and folders. Following is the files and folders structure: + +The root directory is represented by "/". For example, if users want to access to f1.bin file under root directory, it can be accessed by "/f1.bin". If users want to access to f1.bin file under /2019 folder, it can be accessed by "/data/2019/f1.bin" according to their folder paths as in UNIX/Linux unlike Windows System. + +### The Architecture of DFS + +The main features of the RT-Thread DFS component are: + +- Provides a unified POSIX file and directory operations interface for applications: read, write, poll/select, and more. +- Supports multiple types of file systems, such as FatFS, RomFS, DevFS, etc., and provides management of common files, device files, and network file descriptors. +- Supports multiple types of storage devices such as SD Card, SPI Flash, Nand Flash, etc. + +The hierarchical structure of DFS is shown in the following figure, which is mainly divided into POSIX interface layer, virtual file system layer and device abstraction layer. + +![The hierarchical structure of DFS](figures/fs-layer.png) + +### POSIX Interface Layer + +POSIX stands for Portable Operating System Interface of UNIX (POSIX). The POSIX standard defines the interface standard that the operating system should provide for applications. It is a general term for a series of API standards defined by IEEE for software to run on various UNIX operating systems. + +The POSIX standard is intended to achieve software portability at the source code level. In other words, a program written for a POSIX-compatible operating system should be able to compile and execute on any other POSIX operating system (even from another vendor). RT-Thread supports the POSIX standard interface, so it is easy to port Linux/Unix programs to the RT-Thread operating system. + +On UNIX-like systems, normal files, device files, and network file descriptors are the same. In the RT-Thread operating system, DFS is used to achieve this uniformity. With the uniformity of such file descriptors, we can use the `poll/select` interface to uniformly poll these descriptors and bring convenience to the implement of the program functions. + +Using the `poll/select` interface to block and simultaneously detect whether a group of I/O devices which support non-blocking have events (such as readable, writable, high-priority error output, errors, etc.) until a device trigger the event was or exceed the specified wait time. 
This mechanism can help callers find devices that are currently ready, reducing the complexity of programming. + +### Virtual File System Layer + +Users can register specific file systems to DFS, such as FatFS, RomFS, DevFS, etc. Here are some common file system types: + +* FatFS is a Microsoft FAT format compatible file system developed for small embedded devices. It is written in ANSI C and has good hardware independence and portability. It is the most commonly used file system type in RT-Thread. +* The traditional RomFS file system is a simple, compact, read-only file system that does not support dynamic erasing and saving or storing data in order, thus it supports applications to run in XIP (execute In Place) method and save RAM space while the system is running. +* The Jffs2 file system is a log flash file system. It is mainly used for NOR flash memory, based on MTD driver layer, featuring: readable and writable, supporting data compression, Hash table based log file system, and providing crash/power failure security protection, write balance support, etc.. +* DevFS is the device file system. After the function is enabled in the RT-Thread operating system, the devices in the system can be virtualized into files in the `/dev` folder, so that the device can use the interfaces such as `read` and `write` according to the operation mode of the file to operate. +* NFS (Network File System) is a technology for sharing files over a network between different machines and different operating systems. In the development and debugging phase of the operating system, this technology can be used to build an NFS-based root file system on the host and mount it on the embedded device, which can easily modify the contents of the root file system. +* UFFS is short for Ultra-low-cost Flash File System. It is an open source file system developed by Chinese people and used for running Nand Flash in small memory environments such as embedded devices. Compared with the Yaffs file system which often used in embedded devices, it has the advantages of less resource consumption, faster startup speed and free. + +### Device Abstraction Layer + +The device abstraction layer abstracts physical devices such as SD Card, SPI Flash, and Nand Flash into devices that are accessible to the file system. For example, the FAT file system requires that the storage device be a block device type. + +Different file system types are implemented independently of the storage device driver, so the file system function can be correctly used after the drive interface of the underlying storage device is docked with the file system. + +## Mount Management + +The initialization process of the file system is generally divided into the following steps: + +1. Initialize the DFS component. +2. Initialize a specific type of file system. +3. Create a block device on the memory. +4. Format the block device. +5. Mount the block device to the DFS directory. +6. When the file system is no longer in use, you can unmount it. + +### Initialize the DFS Component + +The initialization of the DFS component is done by the dfs_init() function. The dfs_init() function initializes the relevant resources required by DFS and creates key data structures that allow DFS to find a specific file system in the system and get a way to manipulate files within a particular storage device. This function will be called automatically if auto-initialization is turned on (enabled by default). 
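+
+If automatic initialization is disabled, the steps listed above can also be performed by hand. The following is only a sketch under several assumptions: a block device named `sd0` has already been registered, the elm-FAT file system is used, and the header names may differ between RT-Thread versions. The functions it calls (`elm_init()`, `dfs_mkfs()`, `dfs_mount()`) are described in the sections that follow.
+
+```c
+#include <rtthread.h>
+#include <dfs.h>         /* dfs_init() */
+#include <dfs_fs.h>      /* dfs_mkfs(), dfs_mount() */
+#include <dfs_elm.h>     /* elm_init() */
+
+/* A minimal manual bring-up sketch, assuming the block device "sd0" exists. */
+int fs_bringup(void)
+{
+    dfs_init();                 /* step 1: initialize the DFS component     */
+    elm_init();                 /* step 2: register the elm-FAT file system */
+
+    /* step 4: format the block device with an elm-FAT file system
+     * (step 3, creating the block device, is assumed to be done already;
+     * formatting destroys existing data and is normally done only once) */
+    if (dfs_mkfs("elm", "sd0") != 0)
+        return -1;
+
+    /* step 5: mount the block device on the root directory */
+    if (dfs_mount("sd0", "/", "elm", 0, RT_NULL) != 0)
+    {
+        rt_kprintf("mount sd0 on / failed\n");
+        return -1;
+    }
+
+    return 0;
+}
+```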
+ +### Registered File System + +After the DFS component is initialized, you also need to initialize the specific type of file system used, that is, register a specific type of file system into DFS. The interface to register the file system is as follows: + +```c +int dfs_register(const struct dfs_filesystem_ops *ops); +``` + +|**Parameter**|**Description** | +|----------|------------------------------------| +| ops | a collection of operation functions of the file system | +|**return**|**——** | +| 0 | file registered successfully | +| -1 | file fail to register | + +This function does not require user calls, it will be called by the initialization function of different file systems, such as the elm-FAT file system's initialization function `elm_init()`. After the corresponding file system is enabled, if automatic initialization is enabled (enabled by default), the file system initialization function will also be called automatically. + +The `elm_init()` function initializes the elm-FAT file system, which calls the `dfs_register(`) function to register the elm-FAT file system with DFS. The file system registration process is shown below: + +![Register file system](figures/fs-reg.png) + +### Register a Storage Device as a Block Device + +Only block devices can be mounted to the file system, so you need to create the required block devices on the storage device. If the storage device is SPI Flash, you can use the "Serial Flash Universal Driver Library SFUD" component, which provides various SPI Flash drivers, and abstracts the SPI Flash into a block device for mounting. The process of registering block device is shown as follows: + +![The timing diagram of registering block device](figures/fs-reg-block.png) + +### Format the file system + +After registering a block device, you also need to create a file system of the specified type on the block device, that is, format the file system. You can use the `dfs_mkfs()` function to format the specified storage device and create a file system. The interface to format the file system is as follows: + +```c +int dfs_mkfs(const char * fs_name, const char * device_name); +``` + +|**Parameter** |**Description** | +|-------------|----------------------------| +| fs_name | type of the file system | +| device_name | name of the block device | +|**return** |**——** | +| 0 | file system formatted successfully | +| -1 | fail to format the file system | + +The file system type (fs_name) possible values and the corresponding file system is shown in the following table: + +|**Value** |**File System Type** | +|-------------|----------------------------| +| elm | elm-FAT file system | +| jffs2 | jffs2 journaling flash file system | +| nfs | NFS network file system | +| ram | RamFS file system | +| rom | RomFS read-only file system | +| uffs | uffs file system | + +Take the elm-FAT file system format block device as an example. The formatting process is as follows: + +![Formatted file system](figures/elm-fat-mkfs.png) + +You can also format the file system using the `mkfs` command. The result of formatting the block device sd0 is as follows: + +```shell +msh />mkfs sd0 # Sd0 is the name of the block device, the command will format by default +sd0 is elm-FAT file system +msh /> +msh />mkfs -t elm sd0 # Use the -t parameter to specify the file system type as elm-FAT file system +``` + +### Mount file system + +In RT-Thread, mounting refers to attaching a storage device to an existing path. 
To access a file on a storage device, we must mount the partition where the file is located to an existing path and then access the storage device through this path. The interface to mount the file system is as follows: + +```c +int dfs_mount(const char *device_name, + const char *path, + const char *filesystemtype, + unsigned long rwflag, + const void *data); +``` + +|**Parameter** |**Description** | +|----------------|------------------------------| +| device_name | the name of the block device that has been formatted | +| path | the mount path | +| filesystemtype | The type of the mounted file system. Possible values can refer to the dfs_mkfs() function description. | +| rwflag | read and write flag bit | +| data | private data for a specific file system | +|**return** | **——** | +| 0 | file system mounted successfully | +| -1 | file system mount fail to be mounted | + +If there is only one storage device, it can be mounted directly to the root directory `/`. + +### Unmount a file system + +When a file system does not need to be used anymore, it can be unmounted. The interface to unmount the file system is as follows: + +```c +int dfs_unmount(const char *specialfile); +``` + +|**Parameter** |**Description** | +|-------------|--------------------------| +| specialfile | mount path | +|**return** |**——** | +| 0 | unmount the file system successfully | +| -1 | fail to unmount the file system | + +## Document Management + +This section introduces the functions that are related to the operation of the file. The operation of the file is generally based on the file descriptor fd, as shown in the following figure: + +![common function of file management](figures/fs-mg.png) + +### Open and Close Files + +To open or create a file, you can call the following open() function: + +```c +int open(const char *file, int flags, ...); +``` + +|**Parameter** |**Description** | +|------------|--------------------------------------| +| file | file names that are opened or created | +| flags | Specify the way to open the file, and values can refer to the following table. | +|**return** |**——** | +| file descriptor | file opened successfully | +| -1 | fail to open the file | + +A file can be opened in a variety of ways, and multiple open methods can be specified at the same time. For example, if a file is opened by O_WRONLY and O_CREAT, then when the specified file which need to be open does not exist, it will create the file first and then open it as write-only. The file opening method is as follows: + +|**Parameter**|**Description** | +|----------|-----------------------| +| O_RDONLY | open file in read-only mode | +| O_WRONLY | open file in write-only mode | +| O_RDWR | open file in read-write mode | +| O_CREAT | if the file to be open does not exist, then you can create the file | +| O_APPEND | When the file is read or written, it will start from the end of the file, that is, the data written will be added to the end of the file in an additional way. | +| O_TRUNC | empty the contents of the file if it already exists | + +If you no longer need to use the file, you can use the `close()` function to close the file, and `close()` will write the data back to disk and release the resources occupied by the file. 
+ +``` +int close(int fd); +``` + +|**Parameter**|**Description** | +|----------|--------------| +| fd | file descriptor | +|**return**|**——** | +| 0 | file closed successfully | +| -1 | fail to close the file | + +### Read and Write Data + +To read the contents of a file, use the `read()` function: + +```c +int read(int fd, void *buf, size_t len); +``` + +|**Parameter**|**Description** | +|----------|------------------------------------------| +| fd | file descriptor | +| buf | buffer pointer | +| len | read number of bytes of the files | +|**return**|**——** | +| int | the number of bytes actually read | +| 0 | read data has reached the end of the file or there is no readable data | +| -1 | read error, error code to view the current thread's errno | + +This function reads the `len` bytes of the file pointed to by the parameter `fd` into the memory pointed to by the `buf pointer`. In addition, the read/write position pointer of the file moves with the byte read. + +To write data into a file, use the `write()` function: + +```c +int write(int fd, const void *buf, size_t len); +``` + +|**Parameter**|**Description** | +|----------|---------------------------------------| +| fd | file descriptor | +| buf | buffer pointer | +| len | the number of bytes written to the file | +|**return**|**——** | +| int | the number of bytes actually written | +| -1 | write error, error code to view the current thread's errno | + +This function writes `len` bytes in the memory pointed out by the `buf pointer` into the file pointed out by the parameter `fd`. In addition, the read and write location pointer of the file moves with the bytes written. + +### Rename + +To rename a file, use the `rename()` function: + +``` +int rename(const char *old, const char *new); +``` + +|**Parameter**|**Description** | +|----------|--------------| +| old | file's old name | +| new | new name | +|**return**|**——** | +| 0 | change the name successfully | +| -1 | fail to change the name | + +This function changes the file name specified by the parameter `old` to the file name pointed to by the parameter `new`. If the file specified by `new` already exists, the file will be overwritten. + +### Get Status + +To get the file status, use the following `stat()` function: + +```c +int stat(const char *file, struct stat *buf); +``` + +|**Parameter**|**Description** | +|----------|--------------------------------------------| +| file | file name | +| buf | structure pointer to a structure that stores file status information | +|**return**|**——** | +| 0 | access status successfully | +| -1 | fail to access to status | + +### Delete Files + +Delete a file in the specified directory using the `unlink()` function: + +``` +int unlink(const char *pathname); +``` + +|**Parameter**|**Description** | +|----------|------------------------| +| pathname | specify the absolute path to delete the file | +|**return**|**——** | +| 0 | deleted the file successfully | +| -1 | fail to deleted the file | + +### Synchronize File Data to Storage Devices + +Synchronize all modified file data in memory to the storage device using the `fsync()` function: + +```c +int fsync(int fildes); +``` + +|**Parameter**|**Description** | +|----------|--------------| +| fildes | file descriptor | +|**Return**|**——** | +| 0 | synchronize files successfully | +| -1 | fail to synchronize files | + +### Query file system related information + +Use the `statfs()` function to query file system related information. 
+ +```c +int statfs(const char *path, struct statfs *buf); +``` + +|**Parameter**|**Description** | +|----------|----------------------------------| +| path | file system mount path | +| buf | structure pointer for storing file system information | +|**Return**|**——** | +| 0 | query file system information successfully | +| -1 | fail to query file system information | + +### Monitor I/O device status + +To monitor the I/O device for events, use the `select()` function: + +```c +int select( int nfds, + fd_set *readfds, + fd_set *writefds, + fd_set *exceptfds, + struct timeval *timeout); +``` + +|**Parameter** |**Description** | +|-----------|---------------------------------------------------------| +| nfds | The range of all file descriptors in the collection, that is, the maximum value of all file descriptors plus 1 | +| readfds | Collection of file descriptors that need to monitor read changes | +| writefds | Collection of file descriptors that need to monitor write changes | +| exceptfds | Collection of file descriptors that need to be monitored for exceptions | +| timeout | timeout of **select** | +|**return** |**——** | +| positive value | a read/write event or error occurred in the monitored file collection | +| 0 | waiting timeout, no readable or writable or erroneous files | +| negative value | error | + +Use the `select()` interface to block and simultaneously detect whether a group of non-blocking I/O devices have events (such as readable, writable, high-priority error output, errors, etc.) until a device triggered an event or exceeded a specified wait time. + +## Directory management + +This section describes functions that directory management often uses, and operations on directories are generally based on directory addresses, as shown in the following image: + +![functions that directory management often uses](figures/fs-dir-mg.png) + +### Create and Delete Directories + +To create a directory, you can use the mkdir() function: + +```c +int mkdir(const char *path, mode_t mode); +``` + +|**Parameter**|**Description** | +|----------|----------------| +| path | the absolute address of the directory | +| mode | create a pattern | +|**Return**|**——** | +| 0 | create directory successfully | +| -1 | fail to create directory | + +This function is used to create a directory as a folder, the parameter path is the absolute path of the directory, the parameter mode is not enabled in the current version, so just fill in the default parameter 0x777. 
+ +Delete a directory using the rmdir() function: + +```c +int rmdir(const char *pathname); +``` + +|**Parameter**|**Description** | +|----------|------------------------| +| pathname | absolute path to delete the directory | +|**Return**|**——** | +| 0 | delete the directory successfully | +| -1 | fail to delete the directory | + +### Open and Close the Directory + +Open the directory to use the `opendir()` function: + +```c +DIR* opendir(const char* name); +``` + +|**Parameter**|**Description** | +|----------|-----------------------------------------| +| name | absolute address of the directory | +|**Return**|**——** | +| DIR | open the directory successfully, and return to a pointer to the directory stream | +| NULL | fail to open | + +To close the directory, use the `closedir()` function: + +```c +int closedir(DIR* d); +``` + +|**Parameter**|**Description** | +|----------|--------------| +| d | directory stream pointer | +|**Return**|**——** | +| 0 | directory closed successfully | +| -1 | directory closing error | + +This function is used to close a directory and must be used with the `opendir()` function. + +### Read Directory + +To read the directory, use the `readdir()` function: + +```c +struct dirent* readdir(DIR *d); +``` + +|**Parameter**|**Description** | +|----------|---------------------------------------| +| d | directory stream pointer | +|**Return**|**——** | +| dirent | read successfully and return to a structure pointer to a directory entry | +| NULL | read to the end of the directory | + +This function is used to read the directory, and the parameter d is the directory stream pointer. In addition, each time a directory is read, the pointer position of the directory stream is automatically recursed by 1 position backward. + +### Get the Read Position of the Directory Stream + +To get the read location of the directory stream, use the `telldir()` function: + +``` +long telldir(DIR *d); +``` + +|**Parameter**|**Description** | +|----------|------------------| +| d | directory stream pointer | +|**Return**|**——** | +| long | read the offset of the position | + +The return value of this function records the current position of a directory stream. This return value represents the offset from the beginning of the directory file. You can use this value in the following `seekdir()` to reset the directory to the current position. In other words, the `telldir()` function can be used with the `seekdir()` function to reset the read position of the directory stream to the specified offset. + +### Set the Location to Read the Directory Next Time + +Set the location to read the directory next time using the `seekdir()` function: + +``` +void seekdir(DIR *d, off_t offset); +``` + +|**Parameter**|**Description** | +|----------|----------------------------| +| d | directory stream pointer | +| offset | the offset value, displacement from this directory | + +This is used to set the read position of the parameter d directory stream, and starts reading from this new position when readdir() is called. + +### Reset the Position of Reading Directory to the Beginning + +To reset the directory stream's read position to the beginning, use the `rewinddir()` function: + +``` +void rewinddir(DIR *d); +``` + +|**Parameter**|**Description** | +|----------|------------| +| d | directory stream pointer | + +This function can be used to set the current read position of the `d` directory stream to the initial position of the directory stream. 
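+
+As a small complement to the samples later in this chapter, the following sketch (assuming that `<dfs_posix.h>` provides these directory interfaces and that a directory such as `/dir_test` already exists) lists a directory twice, using `rewinddir()` to return to the beginning of the directory stream between the two passes:
+
+```c
+#include <rtthread.h>
+#include <dfs_posix.h> /* assumption: declares opendir()/readdir()/rewinddir()/closedir() */
+
+static void list_twice_sample(void)
+{
+    DIR *dirp;
+    struct dirent *d;
+
+    dirp = opendir("/dir_test");    /* "/dir_test" is a hypothetical, already existing directory */
+    if (dirp == RT_NULL)
+    {
+        rt_kprintf("open directory error!\n");
+        return;
+    }
+
+    /* first pass over the directory entries */
+    while ((d = readdir(dirp)) != RT_NULL)
+    {
+        rt_kprintf("1st pass: %s\n", d->d_name);
+    }
+
+    /* go back to the beginning of the directory stream */
+    rewinddir(dirp);
+
+    /* the second pass starts from the first entry again */
+    while ((d = readdir(dirp)) != RT_NULL)
+    {
+        rt_kprintf("2nd pass: %s\n", d->d_name);
+    }
+
+    closedir(dirp);
+}
+/* export to the msh command list */
+MSH_CMD_EXPORT(list_twice_sample, list a directory twice using rewinddir);
+```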
+ +## DFS Configuration Options + +The specific configuration path of the file system in menuconfig is as follows: + +```c +RT-Thread Components ---> + Device virtual file system ---> +``` + +The configuration menu description and corresponding macro definitions are shown in the following table: + +|**Configuration Options** |**Corresponding Macro Definition**|**Description** | +|-------------------------------|-------------------------------|----------------------| +|[*] Using device virtual file system |RT_USING_DFS |Open DFS virtual file system | +|[*] Using working directory |DFS_USING_WORKDIR |open a relative path | +|(2) The maximal number of mounted file system |DFS_FILESYSTEMS_MAX |maximum number of mounted file systems | +|(2) The maximal number of file system type |DFS_FILESYSTEM_TYPES_MAX |maximum number of supported file systems | +|(4) The maximal number of opened files | DFS_FD_MAX|maximum number of open files | +|[ ] Using mount table for file system|RT_USING_DFS_MNTTABLE |open the automatic mount table | +|[*] Enable elm-chan fatfs |RT_USING_DFS_ELMFAT |open the elm-FatFs file system | +|[*] Using devfs for device objects |RT_USING_DFS_DEVFS | open the DevFS device file system | +|[ ] Enable ReadOnly file system on flash |RT_USING_DFS_ROMFS |open the RomFS file system | +|[ ] Enable RAM file system |RT_USING_DFS_RAMFS |open the RamFS file system | +|[ ] Enable UFFS file system: Ultra-low-cost Flash File System |RT_USING_DFS_UFFS |open the UFFS file system | +|[ ] Enable JFFS2 file system |RT_USING_DFS_JFFS2 |open the JFFS2 file system | +|[ ] Using NFS v3 client file system |RT_USING_DFS_NFS |open the NFS file system | + +By default, the RT-Thread operating system does not turn on the relative path function in order to obtain a small memory footprint. When the Support Relative Paths option is not turned on, you should use an absolute directory when working with files and directory interfaces (because there is no currently working directory in the system). If you need to use the current working directory and the relative directory, you can enable the relative path function in the configuration item of the file system. + +When the option `[*] Use mount table for file system` is selected, the corresponding macro `RT_USING_DFS_MNTTABLE` will be enabled to turn on the automatic mount table function. The automatic `mount_table[]` is provided by the user in the application code. The user needs to specify the device name, mount path, file system type, read and write flag and private data in the table. After that, the system will traverse the mount table to execute the mount. It should be noted that the mount table must end with `{0}` to judge the end of the table. + +The automatic mount table `mount_table []` is shown below, where the five members of `mount_table [0]` are the five parameters of function `dfs_mount ()`. This means that the elm file system is mounted `/` path on the flash 0 device, rwflag is 0, data is 0, `mount_table [1]` is `{0}` as the end to judge the end of the table. + +```c +const struct dfs_mount_tbl mount_table[] = +{ + {"flash0", "/", "elm", 0, 0}, + {0} +}; +``` + +### elm-FatFs File System Configuration Option + +Elm-FatFs can be further configured after opening the elm-FatFs file system in menuconfig. 
The configuration menu description and corresponding macro definitions are as follows: + +|**Configuration Options** |**Corresponding Macro Definition**|**Description** | +|---------------------------------|-----------------------------------|-------------------| +|(437) OEM code page |RT_DFS_ELM_CODE_PAGE |encoding mode | +|[*] Using RT_DFS_ELM_WORD_ACCESS |RT_DFS_ELM_WORD_ACCESS | | +|Support long file name (0: LFN disable) ---> |RT_DFS_ELM_USE_LFN |open long file name submenu | +|(255) Maximal size of file name length |RT_DFS_ELM_MAX_LFN |maximum file name length | +|(2) Number of volumes (logical drives) to be used. |RT_DFS_ELM_DRIVES |number of devices mounting FatFs | +|(4096) Maximum sector size to be handled. |RT_DFS_ELM_MAX_SECTOR_SIZE |the sector size of the file system| +|[ ] Enable sector erase feature |RT_DFS_ELM_USE_ERASE | | +|[*] Enable the reentrancy (thread safe) of the FatFs module |RT_DFS_ELM_REENTRANT |open reentrant| + +#### Long File Name + +By default, FatFs file naming has the following disadvantages: + +- The file name (without suffix) can be up to 8 characters long and the suffix can be up to 3 characters long. The file name and suffix will be truncated when the limit is exceeded. +- File name does not support case sensitivity (displayed in uppercase). + +If you need to support long filenames, you need to turn on the option to support long filenames. The submenu of the long file name is described as follows: + +|**Configuration Options** |**Corresponding Macro Definition**|**Description** | +|----------------------------------|-------------------------|---------------------| +|( ) 0: LFN disable |RT_DFS_ELM_USE_LFN_0 |close the long file name | +|( ) 1: LFN with static LFN working buffer|RT_DFS_ELM_USE_LFN_1 |use static buffers to support long file names, and multi-threaded operation of file names will bring re-entry problems | +|( ) 2: LFN with dynamic LFN working buffer on the stack |RT_DFS_ELM_USE_LFN_2 |long file names are supported by temporary buffers in the stack. Larger demand for stack space. | +|(X) 3: LFN with dynamic LFN working buffer on the heap |RT_DFS_ELM_USE_LFN_3 |use the heap (malloc request) buffer to store long filenames, it is the safest (default) | + +#### Encoding Mode + +When long file name support is turned on, you can set the encoding mode for the file name. RT-Thread/FatFs uses 437 encoding (American English) by default. If you need to store the Chinese file name, you can use 936 encoding (GBK encoding). The 936 encoding requires a font library of approximately 180KB. If you only use English characters as a file, we recommend using 437 encoding (American English), this will save this 180KB of Flash space. + +The file encodings supported by FatFs are as follows: + +```c +/* This option specifies the OEM code page to be used on the target system. +/ Incorrect setting of the code page can cause a file open failure. +/ +/ 1 - ASCII (No extended character. Non-LFN cfg. only) +/ 437 - U.S. 
+/ 720 - Arabic +/ 737 - Greek +/ 771 - KBL +/ 775 - Baltic +/ 850 - Latin 1 +/ 852 - Latin 2 +/ 855 - Cyrillic +/ 857 - Turkish +/ 860 - Portuguese +/ 861 - Icelandic +/ 862 - Hebrew +/ 863 - Canadian French +/ 864 - Arabic +/ 865 - Nordic +/ 866 - Russian +/ 869 - Greek 2 +/ 932 - Japanese (DBCS) +/ 936 - Simplified Chinese (DBCS) +/ 949 - Korean (DBCS) +/ 950 - Traditional Chinese (DBCS) +*/ +``` + +#### File System Sector Size + +Specify the internal sector size of FatFs, which needs to be greater than or equal to the sector size of the actual hardware driver. For example, if a spi flash chip sector is 4096 bytes, the above macro needs to be changed to 4096. Otherwise, when the FatFs reads data from the driver, the array will be out of bounds and the system will crash (the new version gives a warning message when the system is executed) . + +Usually Flash device can be set to 4096, and the common TF card and SD card have a sector size of 512. + +#### Reentrant + +FatFs fully considers the situation of multi-threaded safe read and write security. When reading and writing FafFs in multi-threading, in order to avoid the problems caused by re-entry, you need to open the macro above. If the system has only one thread to operate the file system and there is no reentrancy problem, you can turn it off to save resources. + +#### More Configuration + +FatFs itself supports a lot of configuration options and the configuration is very flexible. The following file is a FatFs configuration file that can be modified to customize FatFs. + +```c +components/dfs/filesystems/elmfat/ffconf.h +``` + +## DFS Application Example + +### FinSH Command + +After the file system is successfully mounted, the files and directories can be operated. The commonly used FinSH commands for file system operations are shown in the following table: + +|**FinSH Command** |**Description** | +|--------|----------------------------------| +| ls | display information about files and directories | +| cd | enter the specified directory | +| cp | copy file | +| rm | delete the file or the directory | +| mv | move the file or rename it | +| echo | write the specified content to the specified file, write the file when it exists, and create a new file and write when the file does not exist. | +| cat | display the contents of the file | +| pwd | print out the current directory address | +| mkdir | create a folder | +| mkfs | formatted the file system | + +Use the `ls` command to view the current directory information, and the results are as follows: + +```c +msh />ls # use the `ls` command to view the current directory information +Directory /: # you can see that the root directory already exists / +``` + +Use the `mkdir` command to create a folder, and the results are as follows: + +```c +msh />mkdir rt-thread # create an rt-thread folder +msh />ls # view directory information as follows +Directory /: +rt-thread +``` + +Use the `echo` command to output the input string to the specified output location. The result is as follows: + +```c +msh />echo "hello rt-thread!!!" # outputs the string to standard output +hello rt-thread!!! +msh />echo "hello rt-thread!!!" hello.txt # output the string output to the hello.txt file +msh />ls +Directory /: +rt-thread +hello.txt 18 +msh /> +``` + +Use the `cat` command to view the contents of the file. The result is as follows: + +```c +msh />cat hello.txt # view the contents of the hello.txt file and output +hello rt-thread!!! +``` + +Use the `rm` command to delete a folder or file. 
+The result is as follows:
+
+```c
+msh />ls                          # view the information of the current directory
+Directory /:
+rt-thread
+hello.txt                   18
+msh />rm rt-thread                # delete the rt-thread folder
+msh />ls
+Directory /:
+hello.txt                   18
+msh />rm hello.txt                # delete the hello.txt file
+msh />ls
+Directory /:
+msh />
+```
+
+### Read and Write File Examples
+
+Once the file system is working, you can run the application examples. In the sample code, you first create a file `text.txt` with the `open()` function, write the string `"RT-Thread Programmer!"` into the file with the `write()` function, and then close the file. The `open()` function is then used again to open the `text.txt` file, the contents are read and printed out, and the file is finally closed.
+
+The sample code is as follows:
+
+```c
+#include <rtthread.h>
+#include <dfs_posix.h> /* this header file needs to be included when you need to operate files */
+
+static void readwrite_sample(void)
+{
+    int fd, size;
+    char s[] = "RT-Thread Programmer!", buffer[80];
+
+    rt_kprintf("Write string %s to text.txt.\n", s);
+
+    /* open the '/text.txt' file in write-only mode; create the file if it does not exist */
+    fd = open("/text.txt", O_WRONLY | O_CREAT);
+    if (fd >= 0)
+    {
+        write(fd, s, sizeof(s));
+        close(fd);
+        rt_kprintf("Write done.\n");
+    }
+
+    /* open the '/text.txt' file in read-only mode */
+    fd = open("/text.txt", O_RDONLY);
+    if (fd >= 0)
+    {
+        size = read(fd, buffer, sizeof(buffer));
+        close(fd);
+        if (size < 0)
+            return;
+        rt_kprintf("Read from file text.txt : %s \n", buffer);
+    }
+}
+/* export to the msh command list */
+MSH_CMD_EXPORT(readwrite_sample, readwrite sample);
+```
+
+### An Example of Changing the File Name
+
+The sample code in this section shows how to modify a file name. The program creates a function `rename_sample()` that manipulates the file and exports it to the msh command list. This function calls the `rename()` function to rename the file named `text.txt` to `text1.txt`. The sample code is as follows:
+
+```c
+#include <rtthread.h>
+#include <dfs_posix.h> /* this header file needs to be included when you need to operate files */
+
+static void rename_sample(void)
+{
+    rt_kprintf("%s => %s", "/text.txt", "/text1.txt");
+
+    if (rename("/text.txt", "/text1.txt") < 0)
+        rt_kprintf("[error!]\n");
+    else
+        rt_kprintf("[ok!]\n");
+}
+/* export to the msh command list */
+MSH_CMD_EXPORT(rename_sample, rename sample);
+```
+
+Run the example in the FinSH console and the results are as follows:
+
+```shell
+msh />echo "hello" text.txt
+msh />ls
+Directory /:
+text.txt                    5
+msh />rename_sample
+/text.txt => /text1.txt [ok!]
+msh />ls
+Directory /:
+text1.txt                   5
+```
+
+In the example demonstration, we first create a file named `text.txt` using the `echo` command, and then run the sample code to change the name of the file `text.txt` to `text1.txt`.
+
+### Get File Status Example
+
+The sample code shows how to get the file status. The program creates a function `stat_sample()` that manipulates the file and exports it to the msh command list. This function calls the `stat()` function to get the size information of the text.txt file.
+The sample code is as follows:
+
+```c
+#include <rtthread.h>
+#include <dfs_posix.h> /* this header file needs to be included when you need to operate files */
+
+static void stat_sample(void)
+{
+    int ret;
+    struct stat buf;
+
+    ret = stat("/text.txt", &buf);
+    if (ret == 0)
+        rt_kprintf("text.txt file size = %d\n", buf.st_size);
+    else
+        rt_kprintf("text.txt file not found\n");
+}
+/* export to the msh command list */
+MSH_CMD_EXPORT(stat_sample, show text.txt stat sample);
+```
+
+Run the example in the FinSH console and the results are as follows:
+
+```c
+msh />echo "hello" text.txt
+msh />stat_sample
+text.txt file size = 5
+```
+
+During the example run, the file `text.txt` is first created with the `echo` command, then the sample code is run, and the size information of the file `text.txt` is printed.
+
+### Create a Directory Example
+
+The sample code in this section shows how to create a directory. The program creates a function `mkdir_sample()` that manipulates the file system and exports it to the msh command list; it calls the `mkdir()` function to create a folder called `dir_test`. The sample code is as follows:
+
+```c
+#include <rtthread.h>
+#include <dfs_posix.h> /* this header file needs to be included when you need to operate files */
+
+static void mkdir_sample(void)
+{
+    int ret;
+
+    /* create a directory */
+    ret = mkdir("/dir_test", 0x777);
+    if (ret < 0)
+    {
+        /* failed to create the directory */
+        rt_kprintf("dir error!\n");
+    }
+    else
+    {
+        /* the directory was created successfully */
+        rt_kprintf("mkdir ok!\n");
+    }
+}
+/* export to the msh command list */
+MSH_CMD_EXPORT(mkdir_sample, mkdir sample);
+```
+
+Run the example in the FinSH console and the result is as follows:
+
+```shell
+msh />mkdir_sample
+mkdir ok!
+msh />ls
+Directory /:
+dir_test                    <DIR>    # <DIR> indicates that the type of dir_test is a directory
+```
+
+This example demonstrates creating a folder named `dir_test` in the root directory.
+
+### Read Directory Example
+
+The sample code shows how to read a directory. The program creates a function `readdir_sample()` that manipulates the file system and exports it to the msh command list. This function calls the `readdir()` function to get the contents of the `dir_test` folder and print them out. The sample code is as follows:
+
+```c
+#include <rtthread.h>
+#include <dfs_posix.h> /* this header file needs to be included when you need to operate files */
+
+static void readdir_sample(void)
+{
+    DIR *dirp;
+    struct dirent *d;
+
+    /* open the /dir_test directory */
+    dirp = opendir("/dir_test");
+    if (dirp == RT_NULL)
+    {
+        rt_kprintf("open directory error!\n");
+    }
+    else
+    {
+        /* read the directory */
+        while ((d = readdir(dirp)) != RT_NULL)
+        {
+            rt_kprintf("found %s\n", d->d_name);
+        }
+
+        /* close the directory */
+        closedir(dirp);
+    }
+}
+/* export to the msh command list */
+MSH_CMD_EXPORT(readdir_sample, readdir sample);
+```
+
+Run the example in the FinSH console and the result is as follows:
+
+```shell
+msh />ls
+Directory /:
+dir_test                    <DIR>
+msh />cd dir_test
+msh /dir_test>echo "hello" hello.txt    # create a hello.txt file
+msh /dir_test>cd ..                     # switch to the parent folder
+msh />readdir_sample
+found hello.txt
+```
+
+In this example, a hello.txt file is first created under the dir_test folder, and then the dir_test folder is exited. At this point, the sample program is run to print out the contents of the dir_test folder.
+
+### An Example of Setting the Location of the Read Directory
+
+The sample code in this section shows how to set the location at which the directory will be read next time.
+The program creates a function `telldir_sample()` that manipulates the file system and exports it to the msh command list. This function first opens the root directory, reads all the directory entries in it, and prints them; while doing so, it uses the `telldir()` function to record the location of the third directory entry. Before reading the root directory a second time, it uses the `seekdir()` function to set the read location back to the previously recorded address of the third directory entry. Finally, it reads the root directory again and prints the directory entries. The sample code is as follows:
+
+```c
+#include <rtthread.h>
+#include <dfs_posix.h> /* this header file needs to be included when you need to operate files */
+
+/* assume that the file operations are done in one thread */
+static void telldir_sample(void)
+{
+    DIR *dirp;
+    long save3 = 0;
+    long cur;
+    int i = 0;
+    struct dirent *dp;
+
+    /* open the root directory */
+    rt_kprintf("the directory is:\n");
+    dirp = opendir("/");
+
+    for (dp = readdir(dirp); dp != RT_NULL; dp = readdir(dirp))
+    {
+        /* save the directory pointer of the third directory entry */
+        i++;
+        if (i == 3)
+            save3 = telldir(dirp);
+
+        rt_kprintf("%s\n", dp->d_name);
+    }
+
+    /* go back to the directory pointer of the third directory entry saved just now */
+    seekdir(dirp, save3);
+
+    /* check whether the current directory pointer equals the saved pointer of the third directory entry */
+    cur = telldir(dirp);
+    if (cur != save3)
+    {
+        rt_kprintf("seekdir (d, %ld); telldir (d) == %ld\n", save3, cur);
+    }
+
+    /* start printing from the third directory entry */
+    rt_kprintf("the result of tell_seek_dir is:\n");
+    for (dp = readdir(dirp); dp != RT_NULL; dp = readdir(dirp))
+    {
+        rt_kprintf("%s\n", dp->d_name);
+    }
+
+    /* close the directory */
+    closedir(dirp);
+}
+/* export to the msh command list */
+MSH_CMD_EXPORT(telldir_sample, telldir sample);
+```
+
+For this demo you first need to manually create the five folders `hello_1` to `hello_5` in the root directory with the `mkdir` command, making sure that there are folders under the root directory for the sample to list.
+
+Run the example in the FinSH console and the results are as follows:
+
+```shell
+msh />ls
+Directory /:
+hello_1
+hello_2
+hello_3
+hello_4
+hello_5
+msh />telldir_sample
+the directory is:
+hello_1
+hello_2
+hello_3
+hello_4
+hello_5
+the result of tell_seek_dir is:
+hello_3
+hello_4
+hello_5
+```
+
+After running the sample, you can see that the first read of the root directory starts from the first folder and prints all the directory entries in the root directory. In the second pass, because the starting position has been set to the position of the third folder with the `seekdir()` function, reading starts from the third folder and continues to the last one, so only the directory entries from `hello_3` to `hello_5` are printed.
+
+## FAQ
+
+### Q: What should I do if I find that the file name or folder name is not displayed properly?
+
+ **A:** Check whether long file name support is enabled in the elm-FatFs configuration options of the DFS configuration.
+
+### Q: What should I do if the file system fails to initialize?
+
+ **A:** Check whether the maximum numbers of mountable file systems and of file system types configured for DFS are large enough.
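+
+For example, with both the elm-FAT and DevFS file systems enabled and one device mounted for each, the related macros generated in `rtconfig.h` (see the DFS configuration options table above) would need to be at least of this order; the values shown are only an illustration:
+
+```c
+/* a sketch of the relevant rtconfig.h macros; the values are examples only */
+#define RT_USING_DFS
+#define DFS_FILESYSTEMS_MAX       2    /* number of file systems that can be mounted at the same time */
+#define DFS_FILESYSTEM_TYPES_MAX  2    /* number of file system types that can be registered */
+#define DFS_FD_MAX                4    /* maximum number of files opened at the same time */
+```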
+ +### Q: What should I do if the file system *mkfs* command fails? + + **A:** Check if the storage device exists. If it exists, check to see if the device driver can pass the function test, if it fails, check the driver error. Check if the libc function is enabled. + +### Q: What should I do if the file system fails to mount? + +**A:** + +- Check if the specified mount path exists. The file system can be mounted directly to the root directory ("/"), but if you want to mount it on another path, such as ("/sdcard"). You need to make sure that the ("/sdcard") path exists. Otherwise, you need to create the `sdcard` folder in the root directory before the mount is successful. +- Check if the file system is created on the storage device. If there is no file system on the storage device, you need to create a file system on the storage using the `mkfs` command. + +### Q: What should I do if SFUD cannot detect the Flash specific model? + + **A:** + +- Check if the hardware pin settings are wrong. +- Whether the SPI device is already registered. +- Whether the SPI device is mounted to the bus. +- Check the `Using auto probe flash JEDEC SFDP parameter` and the `Using defined supported flash chip information table' under the 'RT-Thread Components → Device Drivers -> Using SPI Bus/Device device drivers -> Using Serial Flash Universal Driver` menu, to see whether the configuration item is selected, if it is not selected then you need to enable these two options. +- If the storage device is still not recognized with the above option turned on, then issues can be raised in the [SFUD](https://github.com/armink/SFUD) project. + + ### Q: Why does the benchmark test of the storage device take too long? + +**A:** + + - Compare the [benchmark test data](https://github.com/armink/SFUD/blob/master/docs/zh/benchmark.txt) when the `system tick` is 1000 and the length of time required for this test. If the time lag is too large, you can think that the test work is not working properly. + - Check the settings of the system tick, because some delay operations will be determined according to the tick time, so you need to set the appropriate `system tick` value according to the system conditions. If the system's `system tick` value is no less than 1000, you will need to use a logic analyzer to check the waveform to determine that the communication rate is normal. + +### Q: SPI Flash implements elmfat file system, and how to keep some sectors not used by file system? + + **A:** You can create multiple block devices for the entire storage device using the [partition](https://github.com/RT-Thread-packages/partition) tool package provided by RT-Thread. And block devices can be assigned different functions. + +### Q: What should I do if the program gets stuck during the test file system? + + **A:** Try using the debugger or print some necessary debugging information to determine where the program is stuck and ask questions. + +### Q: How can I check the problem of the file system step by step? + + **A:** You can step through the problem from the bottom to the top. + +- First check if the storage device is successfully registered and the function is normal. +- Check if a file system is created in the storage device. +- Check if the specified file system type is registered to the DFS framework, and often check if the allowed file system types and quantities are sufficient. +- Check if DFS is successfully initialized. The initialization of this step is pure software, so the possibility of error is not high. 
It should be noted that if component auto-initialization is turned on, there is no need to manually initialize it again. diff --git a/documentation/filesystem/figures/elm-fat-mkfs.png b/documentation/filesystem/figures/elm-fat-mkfs.png new file mode 100644 index 0000000000..db4621db87 Binary files /dev/null and b/documentation/filesystem/figures/elm-fat-mkfs.png differ diff --git a/documentation/filesystem/figures/fs-dir-mg.png b/documentation/filesystem/figures/fs-dir-mg.png new file mode 100644 index 0000000000..d5d82fab66 Binary files /dev/null and b/documentation/filesystem/figures/fs-dir-mg.png differ diff --git a/documentation/filesystem/figures/fs-dir.png b/documentation/filesystem/figures/fs-dir.png new file mode 100644 index 0000000000..ecaca5329a Binary files /dev/null and b/documentation/filesystem/figures/fs-dir.png differ diff --git a/documentation/filesystem/figures/fs-layer.png b/documentation/filesystem/figures/fs-layer.png new file mode 100644 index 0000000000..d380192109 Binary files /dev/null and b/documentation/filesystem/figures/fs-layer.png differ diff --git a/documentation/filesystem/figures/fs-mg.png b/documentation/filesystem/figures/fs-mg.png new file mode 100644 index 0000000000..11ef254079 Binary files /dev/null and b/documentation/filesystem/figures/fs-mg.png differ diff --git a/documentation/filesystem/figures/fs-reg-block.png b/documentation/filesystem/figures/fs-reg-block.png new file mode 100644 index 0000000000..c167e95376 Binary files /dev/null and b/documentation/filesystem/figures/fs-reg-block.png differ diff --git a/documentation/filesystem/figures/fs-reg.png b/documentation/filesystem/figures/fs-reg.png new file mode 100644 index 0000000000..d7bf43aca1 Binary files /dev/null and b/documentation/filesystem/figures/fs-reg.png differ diff --git a/documentation/finsh/figures/finsh-hd.png b/documentation/finsh/figures/finsh-hd.png new file mode 100644 index 0000000000..c48840ff58 Binary files /dev/null and b/documentation/finsh/figures/finsh-hd.png differ diff --git a/documentation/finsh/figures/finsh-mdk.png b/documentation/finsh/figures/finsh-mdk.png new file mode 100644 index 0000000000..1f7bbb8941 Binary files /dev/null and b/documentation/finsh/figures/finsh-mdk.png differ diff --git a/documentation/finsh/figures/finsh-run.png b/documentation/finsh/figures/finsh-run.png new file mode 100644 index 0000000000..647068131c Binary files /dev/null and b/documentation/finsh/figures/finsh-run.png differ diff --git a/documentation/finsh/finsh.md b/documentation/finsh/finsh.md new file mode 100644 index 0000000000..1b56c4636e --- /dev/null +++ b/documentation/finsh/finsh.md @@ -0,0 +1,562 @@ +# FinSH Console + +In the early days of computer development, before the advent of graphics systems, there was no mouse or even a keyboard. How did people interact with computers at the time? The earliest computers used a punched note to enter commands into the computer and write the program. Later, with the continuous development of computers, monitors and keyboards became the standard configuration of computers, but the operating system at this time did not support the graphical interface. Computer pioneers developed a software that accepts commands entered by the user, and after interpretation, passes it to The operating system and return the results of the operating system execution to the user. This program wraps around the operating system like a layer of shell, so it's called a shell. 
+ +Embedded devices usually need to connect the development board to the PC for communication. Common connections include: serial port, USB, Ethernet, Wi-Fi, etc. A flexible shell should also support working on multiple connection methods. With the shell, the developer can easily get the system running and control the operation of the system through commands. Especially in the debugging phase, with the shell, in addition to being able to locate the problem more quickly, the developer can also use the shell to call the test function, change the parameters of the test function, reduce the number of times the code is downloaded, and shorten the development time of the project. + +FinSH is the command line component (shell) of RT-Thread. It is based on the above considerations. FinSH is pronounced [ˈfɪnʃ]. After reading this chapter, we will have a deeper understanding of how FinSH works and how to export your own commands to FinSH. + +## Introduction of FinSH + +FinSH is the command line component of RT-Thread. It provides a set of operation interfaces for users to call from the command line. It is mainly used to debug or view system information. It can communicate with a PC using serial/Ethernet/USB, etc. The hardware topology is shown below: + +![FinSH Hardware connection diagram](figures/finsh-hd.png) + +The user inputs a command in the control terminal, and the control terminal transmits the command to the FinSH in the device through the serial port, USB, network, etc., FinSH will read the device input command, parse and automatically scan the internal function table, find the corresponding function name, and execute the function. The response is output, the response is returned through the original path, and the result is displayed on the control terminal. + +When using a serial port to connect a device to a control terminal, the execution flow of the FinSH command is as follows: + +![FinSH FinSH Command execution flow chart](figures/finsh-run.png) + +FinSH supports the rights verification function. After the system is started, the system will perform the rights verification. Only when the rights verification is passed, the FinSH function will be enabled. This improves the security of system input. + +FinSH supports auto-completion, and viewing history commands, etc. These functions can be easily accessed through the keys on the keyboard. The keys supported by FinSH are shown in the following table: + +|Keys| **Functional Description** | +|----------|--------------| +| Tab key | Pressing the Tab key when no characters are entered will print all commands supported by the current system. If you press the Tab key when you have entered some characters, it will find the matching command, and will also complete the file name according to the file system's current directory, and you can continue to input, multiple completions. | +| ↑↓ key | Scroll up and down the recently entered history command | +| Backspace key | Delete character | +| ←→ key | Move the cursor left or right | + +FinSH supports two input modes, the traditional command line mode and the C language interpreter mode. + +### Traditional Command Line Mode + +This mode is also known as msh(module shell). In msh mode, FinSH is implemented in the same way as the traditional shell (dos/bash). For example, you can switch directories to the root directory with the `cd /` command. + +MSH can parse commands into parameters and parameters separated by spaces. Its command execution format is as follows: + +``` +command [arg1] [arg2] [...] 
+``` + +The command can be either a built-in command in RT-Thread or an executable file. + +### C Language Interpreter Mode + +This mode is also known as C-Style mode. In C language interpreter mode, FinSH can solve and parse most C language expressions, and use functions like C to access functions and global variables in the system. In addition, it can create variables through the command line. In this mode, the command entered must be similar to the function call in C language, that is, you must carry the `()` symbol. For example, to output all current threads and their status in the system, type `list_thread()` in FinSH to print out the required information. The output of the FinSH command is the return value of this function. For some functions that do not have a return value (void return value), this printout has no meaning. + +Initially FinSH only supported C-Style mode. Later, with the development of RT-Thread, C-Style mode is not convenient when running scripts or programs, and it is more convenient to use traditional shell method. In addition, in C-Style mode, FinSH takes up a lot of volume. For these reasons, the msh mode has been added to RT-Thread. The msh mode is small and easy to use. It is recommended that you use the msh mode. + +If both modes are enabled in the RT-Thread, they can be dynamically switched. Enter the `exit` in msh mode and press `Enter` to switch to C-Style mode. Enter `msh()` in C-Style mode and press `Enter` to enter msh mode. The commands of the two modes are not common, and the msh command cannot be used in C-Style mode, and vice versa. + +## FinSH Built-in Commands + +Some FinSH commands are built in by default in RT-Thread. You can print all commands supported by the current system by entering help in FinSH and pressing Enter or directly pressing Tab. The built-in commands in C-Style and msh mode are basically the same, so msh is taken as an example here. + +In msh mode, you can list all currently supported commands by pressing the Tab key. The number of default commands is not fixed, and the various components of RT-Thread will output some commands to FinSH. For example, when the DFS component is opened, commands such as `ls`, `cp`, and `cd` are added to FinSH for developers to debug. + +The following are all currently supported commands that display RT-Thread kernel status information after pressing the Tab key. The command name is on the left and the description of the command on the right: + +```c +RT-Thread shell commands: +version - show RT-Thread version information +list_thread - list thread +list_sem - list semaphore in system +list_event - list event in system +list_mutex - list mutex in system +list_mailbox - list mail box in system +list_msgqueue - list message queue in system +list_timer - list timer in system +list_device - list device in system +exit - return to RT-Thread shell mode. +help - RT-Thread shell help. +ps - List threads in the system. +time - Execute command with time. +free - Show the memory usage in the system. +``` + +Here lists the field information returned after entering the common commands, so that the developer can understand the content of the returned information. + +### Display Thread Status + +Use the `ps` or `list_thread` command to list all thread information in the system, including thread priority, state, maximum stack usage, and more. 
+ +```c +msh />list_thread +thread pri status sp stack size max used left tick error +-------- --- ------- ---------- ---------- ------ ---------- --- +tshell 20 ready 0x00000118 0x00001000 29% 0x00000009 000 +tidle 31 ready 0x0000005c 0x00000200 28% 0x00000005 000 +timer 4 suspend 0x00000078 0x00000400 11% 0x00000009 000 +``` +list_thread Return field description: + +|**Field** |**Description** | +|------------|----------------------------| +| thread | Thread name | +| pri | Thread priority | +| status | The current state of the thread | +| sp | The current stack position of the thread | +| stack size | Thread stack size | +| max used | The maximum stack position used in thread history | +| left tick | The number of remaining ticks of the thread | +| error | Thread error code | + +### Display Semaphore Status + +Use the `list_sem` command to display all semaphore information in the system, including the name of the semaphore, the value of the semaphore, and the number of threads waiting for this semaphore. + +```c +msh />list_sem +semaphore v suspend thread +-------- --- -------------- +shrx 000 0 +e0 000 0 +``` + +list_sem Return field description: + +|**Field** | **Description** | +|----------------|--------------------------| +| semaphore | Semaphore name | +| v | The current value of semaphore | +| suspend thread | The number of threads waiting for this semaphore | + +### Display Event Status + +Use the `list_event` command to display all event information in the system, including the event name, the value of the event, and the number of threads waiting for this event. + +```c +msh />list_event +event set suspend thread +----- ---------- -------------- +``` + +list_event Return field description: + +| Field | **Description** | +|----------------|----------------------------------| +| event | Event set name | +| set | The current event in the event set | +| suspend thread | The number of threads waiting for an event in this event set | + +### Display Mutex Status + +Use the `list_mutex` command to display all mutex information in the system, including the mutex name, the owner of the mutex, and the number of nestings the owner holds on the mutex. + +```c +msh />list_mutex +mutex owner hold suspend thread +-------- -------- ---- -------------- +fat0 (NULL) 0000 0 +sal_lock (NULL) 0000 0 +``` + +list_mutex Return field description: + +| **Field** | **Description** | +|----------------|------------------------------------| +| mutxe | Mutex name | +| owner | The thread currently holding the mutex | +| hold | The number of times the holder is nested on this mutex | +| suspend thread | The number of threads waiting for this mutex | + +### Display Mailbox Status + +Use the `list_mailbox` command to display all mailbox information in the system, including the mailbox name, the number of messages in the mailbox, and the maximum number of messages the mailbox can hold. 
+ +```c +msh />list_mailbox +mailbox entry size suspend thread +-------- ---- ---- -------------- +etxmb 0000 0008 1:etx +erxmb 0000 0008 1:erx +``` + +list_mailbox Return field description: + +| Field | **Description** | +|----------------|----------------------------| +| mailbox | Mailbox name | +| entry | The number of messages included in the mailbox | +| size | The maximum number of messages a mailbox can hold | +| suspend thread | The number of threads waiting for this mailbox | + +### Display Message Queue Status + +Use the `list_msgqueue` command to display all message queue information in the system, including the name of the message queue, the number of messages it contains, and the number of threads waiting for this message queue. + +```c +msh />list_msgqueue +msgqueue entry suspend thread +-------- ---- -------------- +``` + +list_msgqueue Return field description: + +| Field | **Description** | +|----------------|----------------------------| +| msgqueue | Message queue name | +| entry | The number of messages currently included in the message queue | +| suspend thread | Number of threads waiting for this message queue | + +### Display Memory Pool Status + +Use the `list_mempool` command to display all the memory pool information in the system, including the name of the memory pool, the size of the memory pool, and the maximum memory size used. + +```c +msh />list_mempool +mempool block total free suspend thread +------- ---- ---- ---- -------------- +signal 0012 0032 0032 0 +``` + +list_mempool Return field description: + +| Field | **Description** | +|----------------|--------------------| +| mempool | Memory pool name | +| block | Memory block size | +| total | Total memory block | +| free | Free memory block | +| suspend thread | The number of threads waiting for this memory pool | + +### Display Timer Status + +Use the `list_timer` command to display all the timer information in the system, including the name of the timer, whether it is the periodic timer, and the number of beats of the timer timeout. + +```c +msh />list_timer +timer periodic timeout flag +-------- ---------- ---------- ----------- +tshell 0x00000000 0x00000000 deactivated +tidle 0x00000000 0x00000000 deactivated +timer 0x00000000 0x00000000 deactivated +``` + +list_timer Return field description: + +| Field | **Description** | +|----------|--------------------------------| +| timer | Timer name | +| periodic | Whether the timer is periodic | +| timeout | The number of beats when the timer expires | +| flag | The state of the timer, activated indicates active, and deactivated indicates inactive | + +### Display Device Status + +Use the `list_device` command to display all device information in the system, including the device name, device type, and the number of times the device was opened. + +```c +msh />list_device +device type ref count +------ ----------------- ---------- +e0 Network Interface 0 +uart0 Character Device 2 +``` + +list_device Return field description: + +| Field | Description | +|-----------|----------------| +| device | Device name | +| type | Device type | +| ref count | The number of times the device was opened | + +### Display Dynamic Memory Status + +Use the `free` command to display all memory information in the system. 
+
+```c
+msh />free
+total memory: 7669836
+used memory : 15240
+maximum allocated memory: 18520
+```
+
+free Return field description:
+
+| Field | Description |
+|--------------------------|------------------|
+| total memory | Total memory size |
+| used memory | Used memory size |
+| maximum allocated memory | Maximum allocated memory |
+
+## Custom FinSH Command
+
+In addition to the commands that come with FinSH, FinSH also provides multiple macro interfaces to export custom commands. The exported commands can be executed directly in FinSH.
+
+### Custom msh Command
+
+A custom msh command can be run in msh mode. To export a command to msh mode, you can use the following macro interface:
+
+```
+MSH_CMD_EXPORT(name, desc);
+```
+
+|**Parameter**|**Description** |
+|----------|----------------|
+| name | The command to export |
+| desc | Description of the exported command |
+
+This macro can export commands with or without parameters. When exporting a parameterless command, the input parameter of the function is void. An example is as follows:
+
+```c
+void hello(void)
+{
+    rt_kprintf("hello RT-Thread!\n");
+}
+
+MSH_CMD_EXPORT(hello , say hello to RT-Thread);
+```
+
+When exporting a command with parameters, the input parameters of the function are `int argc` and `char **argv`. `argc` is the number of arguments, and `argv` is a pointer to an array of command-line argument strings. An example of exporting a command with parameters is as follows:
+
+```c
+static void atcmd(int argc, char **argv)
+{
+    ……
+}
+
+MSH_CMD_EXPORT(atcmd, atcmd sample: atcmd <server|client>);
+```
+
+### Custom C-Style Commands and Variables
+
+To export custom commands to C-Style mode, use the following interface:
+
+```
+FINSH_FUNCTION_EXPORT(name, desc);
+```
+
+|**Parameter**| **Description** |
+|----------|----------------|
+| name | The command to export |
+| desc | Description of the exported command |
+
+The following example defines a `hello` function and exports it as a command in C-Style mode:
+
+```c
+void hello(void)
+{
+    rt_kprintf("hello RT-Thread!\n");
+}
+
+FINSH_FUNCTION_EXPORT(hello , say hello to RT-Thread);
+```
+
+In a similar way, you can also export a variable, which can be accessed through the following interface:
+
+```
+FINSH_VAR_EXPORT(name, type, desc);
+```
+
+| Parameter | **Description** |
+|----------|----------------|
+| name | The variable to be exported |
+| type | Type of the variable |
+| desc | Description of the exported variable |
+
+The following example defines a `dummy` variable and exports it as a variable command in C-Style mode:
+
+```c
+static int dummy = 0;
+FINSH_VAR_EXPORT(dummy, finsh_type_int, dummy variable for finsh)
+```
+
+### Custom Command Rename
+
+The length of a FinSH function name is limited. It is controlled by the macro definition `FINSH_NAME_MAX` in `finsh.h`; the default is 16 bytes, which means that a FinSH command name will not exceed 16 bytes. There is a potential problem here: when a function name is longer than FINSH_NAME_MAX, after the function is exported to the command table with FINSH_FUNCTION_EXPORT, the full function name is seen in the FinSH symbol table, but executing the full function name will result in a *null node* error. This is because although the full function name is displayed, FinSH actually saves only the first 16 bytes as the command, and entering more characters means the command cannot be found correctly. In this case, you can use `FINSH_FUNCTION_EXPORT_ALIAS` to export the command under another name.
+ +``` +FINSH_FUNCTION_EXPORT_ALIAS(name, alias, desc); +``` + +| Parameter | Description | +|----------|-------------------------| +| name | The command to export | +| alias | The name that is displayed when exporting to FinSH | +| desc | Description of the export command | + +The command can be exported to msh mode by adding `__cmd_` to the renamed command name. Otherwise, the command will be exported to C-Style mode. The following example defines a `hello` function and renames it to `ho` and exports it to a command in C-Style mode. + +```c +void hello(void) +{ + rt_kprintf("hello RT-Thread!\n"); +} + +FINSH_FUNCTION_EXPORT_ALIAS(hello , ho, say hello to RT-Thread); +``` +## FinSH Function Configuration + +The FinSH function can be cropped, and the macro configuration options are defined in the rtconfig.h file. The specific configuration items are shown in the following table. + +| **Macro Definition** | **Value Type** | Description | Default | +|------------------------|----|------------|-------| +| #define RT_USING_FINSH | None | Enable FinSH | on | +| #define FINSH_THREAD_NAME | String | FinSH thread name | "tshell" | +| #define FINSH_USING_HISTORY | None | Turn on historical traceback | on | +| #define FINSH_HISTORY_LINES | Integer type | Number of historical command lines that can be traced back | 5| +| #define FINSH_USING_SYMTAB | None | Symbol table can be used in FinSH | on | +|#define FINSH_USING_DESCRIPTION | None | Add a description to each FinSH symbol | on | +| #define FINSH_USING_MSH| None | Enable msh mode | on | +| #define FINSH_USING_MSH_ONLY | None | Use only msh mode | on | +| #define FINSH_ARG_MAX | Integer type | Maximum number of input parameters | 10 | +| #define FINSH_USING_AUTH | None | Enable permission verification | off | +| #define FINSH_DEFAULT_PASSWORD | String | Authority verification password | off | + +The reference configuration example in rtconfig.h is as follows, and can be configured according to actual functional requirements. + +```c +/* Open FinSH */ +#define RT_USING_FINSH + +/* Define the thread name as tshell */ +#define FINSH_THREAD_NAME "tshell" + +/* Open history command */ +#define FINSH_USING_HISTORY +/* Record 5 lines of history commands */ +#define FINSH_HISTORY_LINES 5 + +/* Enable the use of the Tab key */ +#define FINSH_USING_SYMTAB +/* Turn on description */ +#define FINSH_USING_DESCRIPTION + +/* Define FinSH thread priority to 20 */ +#define FINSH_THREAD_PRIORITY 20 +/* Define the stack size of the FinSH thread to be 4KB */ +#define FINSH_THREAD_STACK_SIZE 4096 +/* Define the command character length to 80 bytes */ +#define FINSH_CMD_SIZE 80 + +/* Open msh function */ +#define FINSH_USING_MSH +/* Use msh function by default */ +#define FINSH_USING_MSH_DEFAULT +/* The maximum number of input parameters is 10 */ +#define FINSH_ARG_MAX 10 +``` + +## FinSH Application Examples + +### Examples of msh Command without Arguments + +This section demonstrates how to export a custom command to msh. The sample code is as follows, the hello function is created in the code, and the `hello` function can be exported to the FinSH command list via the `MSH_CMD_EXPORT` command. 
+
+```c
+#include <rtthread.h>
+
+void hello(void)
+{
+    rt_kprintf("hello RT-Thread!\n");
+}
+
+MSH_CMD_EXPORT(hello, say hello to RT-Thread);
+```
+
+Once the system is up and running, press the Tab key in the FinSH console to see the exported command:
+
+```c
+msh />
+RT-Thread shell commands:
+hello - say hello to RT-Thread
+version - show RT-Thread version information
+list_thread - list thread
+……
+```
+
+Run the `hello` command and the result is as follows:
+
+```c
+msh />hello
+hello RT-Thread!
+msh />
+```
+
+### Example of msh Command with Parameters
+
+This section demonstrates how to export a custom command with parameters to FinSH. In the sample code below, the `atcmd()` function is created and exported to the msh command list via the `MSH_CMD_EXPORT` macro.
+
+```c
+#include <rtthread.h>
+
+static void atcmd(int argc, char **argv)
+{
+    if (argc < 2)
+    {
+        rt_kprintf("Please input 'atcmd <server|client>'\n");
+        return;
+    }
+
+    if (!rt_strcmp(argv[1], "server"))
+    {
+        rt_kprintf("AT server!\n");
+    }
+    else if (!rt_strcmp(argv[1], "client"))
+    {
+        rt_kprintf("AT client!\n");
+    }
+    else
+    {
+        rt_kprintf("Please input 'atcmd <server|client>'\n");
+    }
+}
+
+MSH_CMD_EXPORT(atcmd, atcmd sample: atcmd <server|client>);
+```
+
+Once the system is running, press the Tab key in the FinSH console to see the exported command:
+
+```c
+msh />
+RT-Thread shell commands:
+hello - say hello to RT-Thread
+atcmd - atcmd sample: atcmd <server|client>
+version - show RT-Thread version information
+list_thread - list thread
+……
+```
+
+Run the `atcmd` command and the result is as follows:
+
+```c
+msh />atcmd
+Please input 'atcmd <server|client>'
+msh />
+```
+
+Run the `atcmd server` command and the result is as follows:
+
+```c
+msh />atcmd server
+AT server!
+msh />
+```
+
+Run the `atcmd client` command and the result is as follows:
+
+```c
+msh />atcmd client
+AT client!
+msh />
+```
+
+## FinSH Porting
+
+FinSH is written entirely in ANSI C and has excellent portability. It has a small memory footprint, and it will not dynamically request memory unless you use the functions described in the previous sections to dynamically add symbols to FinSH. The FinSH source code is located in the `components/finsh` directory. Porting FinSH requires attention to the following aspects:
+
+* FinSH thread:
+
+Each command is executed in the context of the FinSH thread (that is, the tshell thread). When the RT_USING_FINSH macro is defined, the FinSH thread can be initialized by calling `finsh_system_init()` in the initialization thread. In RT-Thread 1.2.0 and later, you do not have to use the `finsh_set_device(const char* device_name)` function to explicitly specify the device to be used; instead, the `rt_console_get_device()` function is called automatically to use the console device (in 1.1.x and below, `finsh_set_device(const char* device_name)` must be used to specify the device used by FinSH). The FinSH thread is created in the `finsh_system_init()` function, and it waits for the rx_sem semaphore.
+
+* FinSH output:
+
+The output of FinSH depends on the output of the system and relies on the `rt_kprintf()` output in RT-Thread. In the startup function `rt_hw_board_init()`, the `rt_console_set_device(const char* name)` function sets the FinSH output device.
+
+* FinSH input:
+
+After the rx_sem semaphore is obtained, the FinSH thread calls the `rt_device_read()` function to obtain one character from the device (usually a serial device) and then processes it. So porting FinSH requires an implementation of the `rt_device_read()` function.
The release of the rx_sem semaphore completes the input notification to the FinSH thread by calling the `rx_indicate()` function. The usual process is that when the serial port receive interrupt occurs (that is, the serial port has input character), the interrupt service routine calls the `rx_indicate()` function to notify the FinSH thread that there is input, and then the FinSH thread obtains the serial port input and finally performs the corresponding command processing. diff --git a/documentation/interrupt/figures/09fun1.png b/documentation/interrupt/figures/09fun1.png new file mode 100644 index 0000000000..10dc391162 Binary files /dev/null and b/documentation/interrupt/figures/09fun1.png differ diff --git a/documentation/interrupt/figures/09fun2.png b/documentation/interrupt/figures/09fun2.png new file mode 100644 index 0000000000..bed6b18ea7 Binary files /dev/null and b/documentation/interrupt/figures/09fun2.png differ diff --git a/documentation/interrupt/figures/09interrupt_handle.png b/documentation/interrupt/figures/09interrupt_handle.png new file mode 100644 index 0000000000..9110afbcf5 Binary files /dev/null and b/documentation/interrupt/figures/09interrupt_handle.png differ diff --git a/documentation/interrupt/figures/09interrupt_ops.png b/documentation/interrupt/figures/09interrupt_ops.png new file mode 100644 index 0000000000..96c9827ce0 Binary files /dev/null and b/documentation/interrupt/figures/09interrupt_ops.png differ diff --git a/documentation/interrupt/figures/09interrupt_reque.png b/documentation/interrupt/figures/09interrupt_reque.png new file mode 100644 index 0000000000..cf9daaaf46 Binary files /dev/null and b/documentation/interrupt/figures/09interrupt_reque.png differ diff --git a/documentation/interrupt/figures/09interrupt_table.png b/documentation/interrupt/figures/09interrupt_table.png new file mode 100644 index 0000000000..44ee881776 Binary files /dev/null and b/documentation/interrupt/figures/09interrupt_table.png differ diff --git a/documentation/interrupt/figures/09interrupt_work.png b/documentation/interrupt/figures/09interrupt_work.png new file mode 100644 index 0000000000..69e1711e9b Binary files /dev/null and b/documentation/interrupt/figures/09interrupt_work.png differ diff --git a/documentation/interrupt/figures/09interrupt_work_process.png b/documentation/interrupt/figures/09interrupt_work_process.png new file mode 100644 index 0000000000..afe99425a9 Binary files /dev/null and b/documentation/interrupt/figures/09interrupt_work_process.png differ diff --git a/documentation/interrupt/figures/09interrupt_work_sta.png b/documentation/interrupt/figures/09interrupt_work_sta.png new file mode 100644 index 0000000000..f86bdd3d94 Binary files /dev/null and b/documentation/interrupt/figures/09interrupt_work_sta.png differ diff --git a/documentation/interrupt/figures/09relation.png b/documentation/interrupt/figures/09relation.png new file mode 100644 index 0000000000..b4094b4dec Binary files /dev/null and b/documentation/interrupt/figures/09relation.png differ diff --git a/documentation/interrupt/figures/09ths_switch.png b/documentation/interrupt/figures/09ths_switch.png new file mode 100644 index 0000000000..c3066302ac Binary files /dev/null and b/documentation/interrupt/figures/09ths_switch.png differ diff --git a/documentation/interrupt/interrupt.md b/documentation/interrupt/interrupt.md new file mode 100644 index 0000000000..7fe328188f --- /dev/null +++ b/documentation/interrupt/interrupt.md @@ -0,0 +1,568 @@ +Interrupt Management +============== + 
+Interrupts often occur in embedded systems. While the CPU is processing a normal task, an urgent external event may occur, requiring the CPU to suspend the current task and handle the asynchronous event. After the external event has been handled, the CPU returns to the interrupted address and continues the previous task. The mechanism that implements this is called the interrupt system, and the source that requests the CPU interrupt is called the interrupt source. An interrupt is a kind of exception, and an exception is any event that causes the processor to leave normal operation and execute special code. If it is not handled in time, the system will either run into an error or break down completely, so handling exceptions appropriately is an important part of improving software robustness (stability). The following picture is a simple interrupt diagram.
+
+![Interrupt Diagram](figures/09interrupt_work.png)
+
+Interrupt processing is closely related to the CPU architecture. Therefore, this chapter first introduces the ARM Cortex-M CPU architecture, and then introduces the RT-Thread interrupt management mechanism in conjunction with it. After reading this chapter, you will have a better understanding of RT-Thread's interrupt handling process, how to add an interrupt service routine (ISR), and other related matters.
+
+Cortex-M CPU Architecture Foundation
+--------------------
+
+Unlike the older classic ARM processors (such as ARM7 and ARM9), the ARM Cortex-M processor has a very different architecture. The Cortex-M series includes the Cortex-M0/M3/M4/M7 models. There are some differences between the models, for example the Cortex-M4 adds floating point capability compared with the Cortex-M3, but their programming models are basically the same, so the parts of this book that describe interrupt management and porting do not differentiate finely between Cortex-M0/M3/M4/M7. This section focuses on the architectural aspects related to RT-Thread interrupt management.
+
+### Introduction to Registers
+
+The Cortex-M series CPU has 16 general-purpose registers, R0~R15, and several special function registers, as shown in the figure below.
+
+Among the general-purpose registers, R13 is used as the stack pointer register (SP), R14 is used as the link register (LR), which stores the return address when a subroutine is called, and R15 is used as the program counter (PC). The stack pointer register can be either the main stack pointer (MSP) or the process stack pointer (PSP).
+
+![Register Schematic](figures/09interrupt_table.png)
+
+The special function registers include the program status word registers (PSRs), the interrupt mask registers (PRIMASK, FAULTMASK, BASEPRI), and the control register (CONTROL). The special function registers can be accessed through the MSR/MRS instructions, for example:
+
+```
+MRS R0, CONTROL ; Read CONTROL to R0
+MSR CONTROL, R0 ; Write R0 to the CONTROL register
+```
+
+The program status word registers store arithmetic and logic flags, such as the negative flag, zero flag, overflow flag, and so on. The interrupt mask registers control the disabling of Cortex-M interrupts. The control register is used to define the privilege level and to decide which stack pointer is used.
+
+If a Cortex-M4 or Cortex-M7 has a floating point unit, the control register also indicates whether the floating point unit is currently in use. The floating point unit contains 32 floating-point general-purpose registers, S0~S31, and a special FPSCR register (floating point status and control register).
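+
+The special function registers described above can also be inspected from C code. The snippet below is a minimal sketch that assumes a CMSIS-based toolchain: the `__get_PRIMASK()`, `__get_CONTROL()` and `__get_IPSR()` intrinsics come from the CMSIS-Core headers (normally pulled in by the BSP), not from RT-Thread itself.
+
+```c
+#include <rtthread.h>
+#include "board.h"   /* assumption: BSP header that includes the vendor CMSIS device header */
+
+/* Print a few Cortex-M special function registers for inspection. */
+static void dump_core_regs(void)
+{
+    rt_kprintf("PRIMASK : 0x%08x\n", __get_PRIMASK()); /* 1 means global interrupts are masked */
+    rt_kprintf("CONTROL : 0x%08x\n", __get_CONTROL()); /* bit 1 selects PSP, bit 0 selects unprivileged */
+    rt_kprintf("IPSR    : 0x%08x\n", __get_IPSR());    /* non-zero when running in handler mode */
+}
+MSH_CMD_EXPORT(dump_core_regs, dump Cortex-M special registers);
+```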
+
+### Operation Mode and Privilege Level
+
+Cortex-M introduces the concepts of operation mode and privilege level. The operation modes are thread mode and handler mode: the processor enters handler mode when it handles an exception or interrupt, and runs in thread mode otherwise.
+
+![Cortex-M Working Mode Switching Diagram](figures/09interrupt_work_sta.png)
+
+Cortex-M has two privilege levels, privileged level and user level. Thread mode can run at either the privileged level or the user level, while handler mode always runs at the privileged level; the level is controlled by the CONTROL special register. Switching between the different working modes is shown in the figure above.
+
+Cortex-M's stack pointer SP corresponds to two physical registers, MSP and PSP: MSP is the main stack pointer and PSP is the process stack pointer. Handler mode always uses the MSP as its stack; thread mode can choose to use either the MSP or the PSP as its stack, which is also controlled through the CONTROL special register. After reset, Cortex-M enters thread mode at the privileged level and uses the MSP stack by default.
+
+### Nested Vectored Interrupt Controller
+
+The Cortex-M interrupt controller is called the NVIC (nested vectored interrupt controller) and supports interrupt nesting. When an interrupt is triggered and the system responds, the processor hardware automatically pushes the context registers of the currently running code onto the stack. These registers include the PSR, PC, LR, R12, and R3-R0 registers.
+
+![Relationship between Cortex-M Kernel and NVIC Diagram](figures/09relation.png)
+
+When the system is servicing an interrupt and a higher priority interrupt is triggered, the processor also interrupts the currently running interrupt service routine and again saves its context registers PSR, PC, LR, R12, and R3-R0 onto the stack.
+
+### PendSV System Call
+
+PendSV, also known as the pendable service call, is an exception that can be pended like a normal interrupt. It is specifically designed to assist the operating system in context switching. The PendSV exception is initialized to the lowest exception priority. Each time a context switch is required, the PendSV exception is triggered manually, and the context switch is performed in the PendSV exception handler. The detailed process of operating system context switching using the PendSV mechanism is illustrated in the next chapter, *Kernel Porting*.
+
+RT-Thread Interrupt Mechanism
+---------------------
+
+### Interrupt Vector Table
+
+The interrupt vector table is the entry point for all interrupt handlers. The following figure shows how interrupt handling works on the Cortex-M series: a function (the user interrupt service routine) is hooked to an interrupt vector in a virtual interrupt vector table, and when that vector's interrupt occurs, the hooked user interrupt service routine is called.
+
+![Interrupt Processing](figures/09interrupt_handle.png)
+
+On the Cortex-M core, all interrupts are processed through the interrupt vector table, which means that when an interrupt is triggered, the processor directly determines which interrupt source it is and jumps to the corresponding fixed location for processing. The interrupt service routines must be placed together at a uniform address (this address must be set in the NVIC interrupt vector offset register).
The interrupt vector table is generally defined by an array or given in the start code. Given by the start code is applied by default: + +```c + __Vectors DCD __initial_sp ; Top of Stack + DCD Reset_Handler ; Reset processing function + DCD NMI_Handler ; NMI processing function + DCD HardFault_Handler ; Hard Fault processing function + DCD MemManage_Handler ; MPU Fault processing function + DCD BusFault_Handler ; Bus Fault processing function + DCD UsageFault_Handler ; Usage Fault processing function + DCD 0 ; reserve + DCD 0 ; reserve + DCD 0 ; reserve + DCD 0 ; reserved + DCD SVC_Handler ; SVCall processing function + DCD DebugMon_Handler ; Debug Monitor processing function + DCD 0 ; reserve + DCD PendSV_Handler ; PendSV processing function + DCD SysTick_Handler ; SysTick processing function + +… … + +NMI_Handler PROC + EXPORT NMI_Handler [WEAK] + B . + ENDP +HardFault_Handler PROC + EXPORT HardFault_Handler [WEAK] + B . + ENDP +… … +``` + +Note the [WEAK] after the code, which is the symbol weakening identifier. The symbols before [WEAK] (such as NMI_Handler, HardFault_Handler) will be weakened if the entire code encounters symbols with the same names (for example, NMI_Handler function with the same name), then the code will use symbols that are not weakened (functions with the same name as NMI_Handler), and the code associated with the weakened symbols will be automatically discarded. + +Take the SysTick interrupt as an example. In the system startup code, you need to fill in the SysTick_Handler interrupt entry function, and then implement the function to respond to the SysTick interrupt. The interrupt handler function sample program is as follows: + +```c +void SysTick_Handler(void) +{ + /* enter interrupt */ + rt_interrupt_enter(); + + rt_tick_increase(); + + /* leave interrupt */ + rt_interrupt_leave(); +} +``` + +### Interrupt Processing + +In RT-Thread interrupt management, interrupt handler is divided into three parts: interrupt preamble, user interrupt service routine, and interrupt follow-up procedure, as shown in the following figure: + +![3 Parts of the Interrupt Handler](figures/09interrupt_work_process.png) + +#### Interrupt Preamble + +The main job of interrupt preamble is as follows: + +1) Save the CPU interrupt context. This part is related to the CPU architecture. Different CPU architectures are implemented differently. + +For Cortex-M, this part of work is done automatically by hardware. When an interrupt is triggered and the system responds, the processor hardware automatically pushes the context register of the currently running portion into the interrupt stack. The registers in this section include the PSR, PC, LR, R12, and R3-R0 registers. + +2) Inform the kernel to enter the interrupt state, call the rt_interrupt_enter() function, and add 1 to the global variable rt_interrupt_nest to record the number of levels of interrupt nesting. The code is as follows. + +```c +void rt_interrupt_enter(void) +{ + rt_base_t level; + + level = rt_hw_interrupt_disable(); + rt_interrupt_nest ++; + rt_hw_interrupt_enable(level); +} +``` + +#### User Interrupt Service Routine + +In the user interrupt service routine (ISR), there are two cases. The first case is that no thread switching is performed. In this case, after user interrupt service routine and interrupt subsequent program finished running, it exits and return to the interrupted thread. . + +In another case, thread switching is required during interrupt processing. 
In this case, the rt_hw_context_switch_interrupt() function is called for context switching. This function is related to the CPU architecture, and different CPU architectures are implemented differently. + +In Cortex-M architecture, the function implementation of rt_hw_context_switch_interrupt() is shown in the following figure. It sets the thread rt_interrupt_to_thread variable that needs to be switched, and then triggers the PendSV exception (PendSV exception is specifically used to assist context switching and is initialized to the lowest level). After the PendSV exception is triggered, the PendSV exception interrupt handler will not be executed immediately as the interrupt processing is still in progress, the PendSV exception interrupt handler will be entered only after the interrupt subsequent program finishes running and exited the interrupt processing. + +![Function rt_hw_context_switch_interrupt() Implementation Process](figures/09fun1.png) + +#### Interrupt Follow-up Procedure + +The main work done by interrupt follow-up procedure is: + +1) Inform the kernel to leave the interrupt state and reduce the global variable rt_interrupt_nest by 1 through calling the rt_interrupt_leave() function. The code is as follows. + +```c +void rt_interrupt_leave(void) +{ + rt_base_t level; + + level = rt_hw_interrupt_disable(); + rt_interrupt_nest --; + rt_hw_interrupt_enable(level); +} +``` + +2) Restore the CPU context before the interrupt. If thread is not switched during the interrupt processing, the CPU context of the *from* thread is restored. If the thread is switched during the interrupt, the CPU context of the *to* thread is restored. This part of the implementation is related to the CPU architecture. Different CPU architectures are implemented differently. The implementation process in the Cortex-M architecture is shown in the following figure. + +![Function rt_hw_context_switch_interrupt() Implementation Process](figures/09fun2.png) + +### Interrupt Nesting + +In the case of interrupt nesting, in the process of executing the interrupt service routine, if a high priority interrupt occurs, the execution of the current interrupt service routine will be interrupted to execute the interrupt service routine of the high priority interrupt. After the processing of the high priority interrupt is completed, the interrupted interrupt service routine is resumed. If thread scheduling is required, the thread context switch will occur when all interrupt handlers finish running, as shown in the following figure. + +![Thread Switching during Interrupt](figures/09ths_switch.png) + +### Interrupt Stack + +During the interrupt processing, before the system responds to the interrupt, the software code (or processor) needs to save the context of the current thread (usually stored in the thread stack of the current thread), and then call the interrupt service routine for interrupt response and processing. During interrupt processing (essentially calling the user's interrupt service routine function), the interrupt handler function is likely to have its own local variables, which require the corresponding stack space to save, so the interrupt response still needs a stack space as the context to run the interrupt handler. The interrupt stack can be saved in the stack of the interrupted thread. When exiting from the interrupt, the corresponding thread is resumed to be executed. 
+
+The interrupt stack can also be completely separated from the thread stacks. In that case, each time an interrupt is entered and the interrupted thread's context has been saved, execution switches to a dedicated interrupt stack, and when the interrupt exits the original context is restored. Using an independent interrupt stack is relatively easy to implement and also makes thread stack usage easier to understand and estimate (otherwise each thread stack would have to reserve space for interrupt handling, and if the system supports interrupt nesting, it would also have to consider how much extra space nested interrupts need).
+
+RT-Thread adopts the independent interrupt stack approach. When an interrupt occurs, the interrupt preamble replaces the user's stack pointer with the interrupt stack space reserved by the system in advance, and restores the user's stack when the interrupt exits. In this way, the interrupt does not occupy stack space of the thread, which improves memory utilization; as the number of threads increases, the reduction in memory footprint becomes more obvious.
+
+There are two stack pointers in the Cortex-M processor core. One is the main stack pointer (MSP), which is the default stack pointer; it is used before the first thread runs and in interrupt and exception handlers. The other is the thread stack pointer (PSP), which is used in threads. When an interrupt or exception service routine exits, setting bit 2 of the LR register (the EXC_RETURN value) to 1 switches the thread's SP from the MSP to the PSP.
+
+### Processing the Bottom Half of an Interrupt
+
+RT-Thread does not make any assumptions about or place any restrictions on the processing time required by an interrupt service routine, but, as with other real-time operating systems and non-real-time operating systems, users need to ensure that all interrupt service routines complete in the shortest possible time (an interrupt service routine effectively has the highest priority in the system and preempts all threads). While interrupts are nested, or while the corresponding interrupt source is masked, the processing of other nested interrupts and the next interrupt signal from that interrupt source will be delayed.
+
+When an interrupt occurs, the interrupt service routine needs to obtain the corresponding hardware state or data. If the interrupt service routine only performs simple processing on that state or data, for example a CPU clock interrupt that merely adds one to the system tick variable and then ends, the interrupt requires only a short running time. For other interrupts, however, the interrupt service routine needs to perform a series of more time-consuming operations after obtaining the hardware state or data. In that case the interrupt is usually divided into two parts, the **top half** and the **bottom half**. In the top half, after getting the hardware state and data, the ISR re-enables the masked interrupt, sends a notification to the relevant thread (using the semaphore, event, mailbox or message queue provided by RT-Thread), and then ends the interrupt service routine. Afterwards, the relevant thread, having received the notification, further processes the state or data; this part of the processing is called the bottom half.
+
+To illustrate how bottom half processing is implemented in RT-Thread, we take a virtual network device receiving network data packets as an example, as shown in the following code. Assume that, after a data packet is received, analyzing and processing the packet is relatively time consuming, is far less important than the external interrupt source signal, and can be handled without masking the interrupt source signal.
+
+The program in this example creates an nwt thread that blocks on the nw_bh_sem semaphore after it starts to run. Once this semaphore is released, the thread executes nw_packet_parser to begin the *Bottom Half* processing.
+
+```c
+/*
+ * program list: interrupt bottom half processing example
+ */
+
+/* semaphore used to wake up the thread */
+rt_sem_t nw_bh_sem;
+
+/* thread for data reading and analysis */
+void demo_nw_thread(void *param)
+{
+    /* First, perform the necessary initialization work on the device. */
+    device_init_setting();
+
+    /* .. other operations .. */
+
+    /* create a semaphore to respond to Bottom Half events */
+    nw_bh_sem = rt_sem_create("bh_sem", 0, RT_IPC_FLAG_FIFO);
+
+    while (1)
+    {
+        /* Finally, let demo_nw_thread wait on nw_bh_sem. */
+        rt_sem_take(nw_bh_sem, RT_WAITING_FOREVER);
+
+        /* After receiving the semaphore, start the real Bottom Half processing. */
+        nw_packet_parser(packet_buffer);
+        nw_packet_process(packet_buffer);
+    }
+}
+
+int main(void)
+{
+    rt_thread_t thread;
+
+    /* create the processing thread */
+    thread = rt_thread_create("nwt", demo_nw_thread, RT_NULL, 1024, 20, 5);
+
+    if (thread != RT_NULL)
+        rt_thread_startup(thread);
+
+    return 0;
+}
+```
+
+Let's take a look at how the Top Half is handled in demo_nw_isr and how the Bottom Half is started, as in the following example.
+
+```c
+void demo_nw_isr(int vector, void *param)
+{
+    /* When the network device receives data, an interrupt is triggered and this ISR starts executing. */
+    /* Start the Top Half processing, such as reading the hardware status to determine what kind of interrupt occurred. */
+    nw_device_status_read();
+
+    /* .. some other data operations, etc. .. */
+
+    /* Release nw_bh_sem to signal demo_nw_thread, ready to start the Bottom Half */
+    rt_sem_release(nw_bh_sem);
+
+    /* Then exit the Top Half and end the device's ISR */
+}
+```
+
+As can be seen from the two code snippets above, the Top Half and Bottom Half are connected by releasing and waiting on a semaphore object. Since the interrupt processing is divided into a Top part and a Bottom part, the handling becomes an asynchronous process. This introduces extra system overhead, so when using RT-Thread the user needs to consider carefully whether the time spent handling everything in the interrupt service routine would be greater than the time spent notifying the Bottom Half and processing the data there.
+
+RT-Thread Interrupt Management Interface
+---------------------
+
+In order to isolate the operating system from the underlying interrupt and exception hardware, RT-Thread encapsulates interrupts and exceptions into a set of abstract interfaces, as shown in the following figure:
+
+![Interrupt Related Interfaces](figures/09interrupt_ops.png)
+
+### Mount Interrupt Service Routine
+
+The system associates the user's interrupt handler with the specified interrupt number.
+You can call the following interface to mount a new interrupt service routine:
+
+```c
+rt_isr_handler_t rt_hw_interrupt_install(int vector,
+                                         rt_isr_handler_t handler,
+                                         void *param,
+                                         char *name);
+```
+
+After calling rt_hw_interrupt_install(), when the interrupt source generates an interrupt, the system automatically calls the mounted interrupt service routine. The following table describes the input parameters and return value of this function:
+
+Input parameters and return value of rt_hw_interrupt_install()
+
+|**Parameters**|**Description** |
+|----------|--------------------------------------------------|
+| vector   | the interrupt number to mount the routine on |
+| handler  | the newly mounted interrupt service routine |
+| param    | passed as a parameter to the interrupt service routine |
+| name     | name of the interrupt |
+|**Return**| —— |
+| return   | the handle of the interrupt service routine that was previously mounted on this interrupt number |
+
+>This API does not appear in every migration branch. For example, there is usually no such API in the migration branch of Cortex-M0/M3/M4.
+
+The interrupt service routine is a runtime environment that requires special attention. It runs in a non-thread execution environment (generally a special operating mode of the chip, the privileged mode). In this environment, operations that would suspend the current thread cannot be used, because there is no current thread to suspend; if such an operation is executed, a prompt like "Function[abc_func] shall not be used in ISR" is printed, meaning that the function should not be called in the interrupt service routine.
+
+### Interrupt Source Management
+
+Usually, before the ISR starts to process an interrupt signal, we need to mask the interrupt source, and re-enable the previously masked interrupt source in time after the ISR has finished processing the state or data.
+
+Masking the interrupt source ensures that the hardware state or data will not be disturbed during the subsequent processing. The following function interface can be called:
+
+```c
+void rt_hw_interrupt_mask(int vector);
+```
+
+After the rt_hw_interrupt_mask() interface is called, the corresponding interrupt is masked (usually, when this interrupt is triggered, the interrupt status register still changes accordingly, but the interrupt is not delivered to the processor for processing). The following table describes the input parameter of this function:
+
+Input parameters of rt_hw_interrupt_mask()
+
+|**Parameters**|**Description** |
+|----------|----------------|
+| vector   | the interrupt number to be masked |
+
+>This API does not appear in every migration branch. For example, there is usually no such API in the migration branch of Cortex-M0/M3/M4.
+
+In order to avoid losing hardware interrupt signals as much as possible, the following function interface can be called to re-enable a masked interrupt source:
+
+```c
+void rt_hw_interrupt_umask(int vector);
+```
+
+After the rt_hw_interrupt_umask() interface is called, and provided that the interrupt (and the corresponding peripheral) is configured correctly, the interrupt will be delivered to the processor for processing once it is triggered. The following table describes the input parameter of this function:
+
+Input parameters of rt_hw_interrupt_umask()
+
+|**Parameters**|**Description** |
+|----------|--------------------|
+| vector   | the interrupt number to be unmasked |
+
+>This API does not appear in every migration branch. For example, there is usually no such API in the migration branch of Cortex-M0/M3/M4.
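+
+On ports that do provide these interfaces (for example ARM9 or RISC-V BSPs; as noted above they are usually absent on Cortex-M), a typical usage pattern is sketched below. `EXAMPLE_IRQ`, `example_isr()` and the omitted device accesses are hypothetical placeholders; only `rt_hw_interrupt_install()` and `rt_hw_interrupt_umask()` are the RT-Thread interfaces described in this section.
+
+```c
+#include <rtthread.h>
+#include <rthw.h>
+
+#define EXAMPLE_IRQ  12   /* hypothetical interrupt number, depends on the chip */
+
+/* user interrupt service routine: only top half work is done here */
+static void example_isr(int vector, void *param)
+{
+    /* read the hardware status here (device access omitted), then
+       notify a worker thread (the bottom half) with a semaphore, mailbox, etc. */
+}
+
+static int example_irq_init(void)
+{
+    /* mount the ISR on the interrupt number and give it a name */
+    rt_hw_interrupt_install(EXAMPLE_IRQ, example_isr, RT_NULL, "example");
+
+    /* the interrupt source is usually masked by default;
+       unmask it once the peripheral has been configured */
+    rt_hw_interrupt_umask(EXAMPLE_IRQ);
+
+    return 0;
+}
+INIT_DEVICE_EXPORT(example_irq_init);
+```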
+
+### Global Interrupt Switch
+
+The global interrupt switch, also known as the interrupt lock, is the simplest way to protect a critical section from concurrent access by multiple threads. By disabling interrupts it ensures that the current thread is not interrupted by other events (the entire system no longer responds to external events that could trigger thread rescheduling); in other words, the current thread will not be preempted unless it voluntarily gives up control of the processor. When you need to disable interrupts for the entire system, you can call the following function interface:
+
+```c
+rt_base_t rt_hw_interrupt_disable(void);
+```
+
+The following table describes the return value of this function:
+
+ Return value of rt_hw_interrupt_disable()
+
+|**Return**|**Description** |
+|----------|---------------------------------------------|
+| Interrupt Status | the interrupt status before rt_hw_interrupt_disable() was called |
+
+Resuming interrupts can also be understood as re-enabling interrupts. The rt_hw_interrupt_enable() function "enables" interrupts by restoring the interrupt state that existed before rt_hw_interrupt_disable() was called; if interrupts were already disabled before rt_hw_interrupt_disable() was called, interrupts remain disabled after calling this function. Resuming interrupts is normally used in pair with disabling interrupts. The function interface is as follows:
+
+```c
+void rt_hw_interrupt_enable(rt_base_t level);
+```
+
+The following table describes the input parameter of this function:
+
+ Input parameters for rt_hw_interrupt_enable()
+
+|**Parameters**|**Description** |
+|----------|---------------------------------------------|
+| level    | the interrupt status returned by the previous rt_hw_interrupt_disable() |
+
+1) Using the interrupt lock to protect a critical section can be applied in any situation, and the other types of synchronization methods are all implemented on top of the interrupt lock; it can be said that the interrupt lock is the most powerful and most efficient synchronization method. The main problem with interrupt locks is that while interrupts are disabled the system no longer responds to any interrupt and therefore cannot respond to external events, so the interrupt lock has a large impact on the real-time behavior of the system: used improperly, it can make the system completely non-real-time (the system may completely fail to meet its timing requirements); used properly, it is a fast and efficient means of synchronization.
+ +For example, to ensure that a line of code (such as assignments) is running mutually exclusively , the quickest way is to use interrupt locks instead of semaphores or mutexes: + +```c + /* turn off the interrupt */ + level = rt_hw_interrupt_disable(); + a = a + value; + /* resume interrupt */ + rt_hw_interrupt_enable(level); +``` + +When using an interrupt lock, you need to ensure that the interrupt is turned off for a very short time, such as a = a + value in the above code; you can also switch to another method, such as using semaphores: + +```c + /* get semaphore lock */ + rt_sem_take(sem_lock, RT_WAITING_FOREVER); + a = a + value; + /* release the semaphore lock */ + rt_sem_release(sem_lock); +``` + +In the implementation of rt_sem_take and rt_sem_release, this code already has the behavior of using interrupt locks to protect semaphore internal variables, so for operations such as a = a + value;, it is more concise and fast to use interrupt locks. + +2) The function rt_base_t rt_hw_interrupt_disable(void) and the function void rt_hw_interrupt_enable(rt_base_t level) generally need to be used in pairs to ensure correct interrupt status. + +In RT-Thread, the API for switching global interrupts supports multi-level nesting. The code for simple nested interrupts is shown in the following code: + +Simple nested interrupt use + +```c +#include + +void global_interrupt_demo(void) +{ + rt_base_t level0; + rt_base_t level1; + + /* The global interrupt is turned off for the first time. The global interrupt status before being turned off may be turned on or off. */ + level0 = rt_hw_interrupt_disable(); + /* The global interrupt is turned off for the second time. The global interrupt status before being turned off may be turned on or off. */ + level1 = rt_hw_interrupt_disable(); + + do_something(); + + /* Resume the global interrupt to the state before the second turn-off, so the global interrupt is still turned off after this enable. */ + rt_hw_interrupt_enable(level1); + /* Resume the global interrupt to the state before the first turn-off, so the global interrupt status can be on or off. */ + rt_hw_interrupt_enable(level0); +} +``` + +This feature can bring great convenience to the development of the code. For example, if interrupt is turned off in a function, call some sub-functions and then turn on the interrupt. There may also be code for interrupt switch in these subfunctions. Since the API for global interrupts allows the use of nest, users do not need to do special processing for this code. + +### Interrupt Notification + +When the entire system is interrupted by an interrupt and enters the interrupt handler function, it needs to inform the kernel that it has entered the interrupt state. In this case, the following interfaces can be used: + +```c +void rt_interrupt_enter(void); +void rt_interrupt_leave(void); +``` + +These two interfaces are used respectively in the interrupt preamble and interrupt follow-up procedures, and will both modify the values of rt_interrupt_nest (interrupt nesting depth): + +Whenever an interrupt is entered, the rt_interrupt_enter() function can be called to notify the kernel that it has entered the interrupt state and increased the interrupt nesting depth (execute rt_interrupt_nest++); + +Whenever an interrupt is exited, the rt_interrupt_leave() function can be called to notify the kernel that it has exited the interrupt state and reduced the interrupt nesting depth (execute rt_interrupt_nest--). 
+Be careful not to call these two interface functions from application code.
+
+The purpose of rt_interrupt_enter/leave() is that, if a kernel-related function (such as releasing a semaphore) is called in the interrupt service routine, the kernel can adjust its behavior according to the current interrupt status. For example, if a semaphore is released in an interrupt and a thread is thereby awakened, but the system is found to be in the interrupt context, then the thread switch should follow the in-interrupt switching strategy (the actual switch takes effect when the interrupt exits) instead of switching immediately.
+
+However, if the interrupt service routine does not call any kernel-related functions (releasing a semaphore, etc.), it is not necessary to call rt_interrupt_enter/leave().
+
+In upper-layer applications, the rt_interrupt_get_nest() interface can be called when it is necessary to know whether the system is in the interrupt state, or how deeply interrupts are currently nested. It returns rt_interrupt_nest, and its prototype is as follows:
+
+```c
+rt_uint8_t rt_interrupt_get_nest(void);
+```
+
+The following table describes the return value of rt_interrupt_get_nest():
+
+|**Return**|**Description** |
+|----------|--------------------------------|
+| 0 | the current system is not in an interrupt context |
+| 1 | the current system is in an interrupt context |
+| Greater than 1 | the current interrupt nesting level |
+
+Interrupt and Polling
+----------
+
+When writing a driver for a peripheral, whether to use interrupt mode or polling mode is often the first question the driver developer has to consider, and the answer differs between real-time operating systems and time-sharing operating systems. Polling is sequential by nature: the corresponding processing is performed once the corresponding event is detected, so polling is relatively simple and clear to implement. For example, to write data to the serial port, the program writes the next piece of data only after the serial controller has finished sending the previous one (otherwise that data would be discarded). The corresponding code can look like this:
+
+```c
+/* polling mode: write data to the serial port */
+while (size)
+{
+    /* wait until the UART transmit data register is empty */
+    while (!(uart->uart_device->SR & USART_FLAG_TXE));
+    /* then send the next piece of data */
+    uart->uart_device->DR = (*ptr & 0x1FF);
+
+    ++ptr; --size;
+}
+```
+
+In a real-time system, however, polling mode can be very problematic, because in a real-time operating system, while a program executes continuously (polls), the thread it runs in keeps running and threads with lower priority never get to run. In a time-sharing system the situation is the opposite: there is almost no notion of priority, so the polling program runs in one time slice and another program runs in the next.
+
+So in real-time systems, interrupt mode is generally used to drive peripherals. When data arrives, the interrupt wakes up the relevant processing thread, which then carries out the subsequent work. For example, for serial peripherals that have a hardware FIFO (a FIFO queue of a certain depth), the driver can work as shown below:
+
+![Interrupt Mode Drive Peripheral](figures/09interrupt_reque.png)
+
+The thread first writes data to the serial port's FIFO. When the FIFO is full, the thread actively suspends.
The serial controller continuously fetches data from the FIFO and sends it out at a configured baud rate (for example, 115200 bps). When all data in the FIFO is sent, an interrupt is triggered to the processor; when the interrupt service routine is executed, the thread can be awaken. Here is an example of a FIFO type device. In reality, there are also DMA type devices with similar principles. + +For low-speed devices, this mode is very good because the processor can run other threads before the serial peripheral sends the data in the FIFO which improves the overall operating efficiency of the system. (Even for time-sharing systems, such a mode is very necessary.) But for some high-speed devices, such as when the transmission speed reaches 10Mbps, assuming that the amount of data sent at one time is 32 bytes, we can calculate the time required to send such a piece of data: (32 X 8) X 1/10Mbps = 25us. When data needs to be transmitted continuously, the system will trigger an interrupt after 25us to wake up the upper thread to continue the next transmission. Suppose the system's thread switching time is 8us, (usually the real-time operating system's thread context switching only takes a few us) then when the entire system is running, the data bandwidth utilization will be only 25/(25+8) = 75.8%. However, with polling mode, the data bandwidth utilization rate may reach 100%. This is also why people generally think that the data throughput in the real-time system is insufficient. The system overhead is consumed in the thread switching. (some real-time systems may even use the bottom half processing and hierarchical interrupt processing as described earlier in this chapter which means the time overhead of interrupting to the sending thread is lengthened and the efficiency will be further reduced). + +Through the above calculation process, we can see some of the key factors: the smaller the amount of transmitted data, the faster the transmission speed, and the greater the impact on data throughput. Ultimately, it depends on how often the system generates interrupts. When a real-time system wants to increase data throughput, there are several ways that can be considered: + +1) Increase the length of each data volume for each transmission, and try to let the peripherals send as much data as possible every time; + +2) Change the interrupt mode to polling mode if necessary. At the same time, in order to solve the problem that the processor is always preempted with polling mode and other low-priority threads cannot be operated, the priority of the polling thread can be lowered accordingly. 
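+
+A minimal sketch of the interrupt-driven transmission pattern described earlier in this section (the thread fills the hardware FIFO and suspends until the transmit-complete interrupt wakes it) is shown below. The helper `uart_fifo_write()` and the ISR wiring are hypothetical placeholders that depend on the actual hardware; only the semaphore and interrupt-notification calls are standard RT-Thread interfaces.
+
+```c
+#include <rtthread.h>
+
+/* created elsewhere, e.g. rt_sem_create("txdone", 0, RT_IPC_FLAG_FIFO) */
+static rt_sem_t tx_done_sem;
+
+/* hypothetical helper: copies at most 'size' bytes into the hardware FIFO
+   and returns how many bytes actually fitted */
+rt_size_t uart_fifo_write(const rt_uint8_t *buf, rt_size_t size);
+
+/* sending thread: fill the FIFO, then block instead of busy-waiting */
+static void uart_send(const rt_uint8_t *buf, rt_size_t size)
+{
+    while (size)
+    {
+        rt_size_t n = uart_fifo_write(buf, size);
+        buf  += n;
+        size -= n;
+
+        if (size)
+        {
+            /* FIFO full: suspend until the transmit-complete interrupt fires */
+            rt_sem_take(tx_done_sem, RT_WAITING_FOREVER);
+        }
+    }
+}
+
+/* transmit-complete ISR: top half only, wake up the sending thread */
+void uart_tx_isr(void)
+{
+    rt_interrupt_enter();
+    rt_sem_release(tx_done_sem);
+    rt_interrupt_leave();
+}
+```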
+
+Global Interrupt Switch Usage Example
+--------------------
+
+This is an application routine for interrupts: when multiple threads access the same variable, the global interrupt switch is used to protect that variable, as shown in the following code:
+
+Use the global interrupt switch to access a global variable
+
+```c
+#include <rthw.h>
+#include <rtthread.h>
+
+#define THREAD_PRIORITY      20
+#define THREAD_STACK_SIZE    512
+#define THREAD_TIMESLICE     5
+
+/* global variable accessed by both threads */
+static rt_uint32_t cnt;
+void thread_entry(void *parameter)
+{
+    rt_uint32_t no;
+    rt_base_t level;
+
+    no = (rt_uint32_t) parameter;
+    while (1)
+    {
+        /* disable global interrupt */
+        level = rt_hw_interrupt_disable();
+        cnt += no;
+        /* resume global interrupt */
+        rt_hw_interrupt_enable(level);
+
+        rt_kprintf("protect thread[%d]'s counter is %d\n", no, cnt);
+        rt_thread_mdelay(no * 10);
+    }
+}
+
+/* user application entry */
+int interrupt_sample(void)
+{
+    rt_thread_t thread;
+
+    /* create t1 thread */
+    thread = rt_thread_create("thread1", thread_entry, (void *)10,
+                              THREAD_STACK_SIZE,
+                              THREAD_PRIORITY, THREAD_TIMESLICE);
+    if (thread != RT_NULL)
+        rt_thread_startup(thread);
+
+    /* create t2 thread */
+    thread = rt_thread_create("thread2", thread_entry, (void *)20,
+                              THREAD_STACK_SIZE,
+                              THREAD_PRIORITY, THREAD_TIMESLICE);
+    if (thread != RT_NULL)
+        rt_thread_startup(thread);
+
+    return 0;
+}
+
+/* export to the msh command list */
+MSH_CMD_EXPORT(interrupt_sample, interrupt sample);
+```
+
+The simulation results are as follows:
+
+```
+ \ | /
+- RT - Thread Operating System
+ / | \ 3.1.0 build Aug 27 2018
+ 2006 - 2018 Copyright by rt-thread team
+msh >interrupt_sample
+msh >protect thread[10]'s counter is 10
+protect thread[20]'s counter is 30
+protect thread[10]'s counter is 40
+protect thread[20]'s counter is 60
+protect thread[10]'s counter is 70
+protect thread[10]'s counter is 80
+protect thread[20]'s counter is 100
+protect thread[10]'s counter is 110
+protect thread[10]'s counter is 120
+protect thread[20]'s counter is 140
+…
+```
+
+>Since disabling the global interrupt prevents the entire system from responding to any interrupt, when the global interrupt switch is used for exclusive access to a critical section it is necessary to ensure that the time during which interrupts are disabled is very short, for example the time needed to run a few machine instructions.
+
diff --git a/documentation/introduction/figures/02Software_framework_diagram.png b/documentation/introduction/figures/02Software_framework_diagram.png
new file mode 100644
index 0000000000..a236ff9338
Binary files /dev/null and b/documentation/introduction/figures/02Software_framework_diagram.png differ
diff --git a/documentation/introduction/introduction.md b/documentation/introduction/introduction.md
new file mode 100644
index 0000000000..5ed7512151
--- /dev/null
+++ b/documentation/introduction/introduction.md
@@ -0,0 +1,37 @@
+# RT-Thread Introduction
+
+As a beginner of RTOS, you might be new to RT-Thread. However, with a better understanding of it over time, you will gradually discover the charm of RT-Thread and its advantages over other RTOSs of the same type. RT-Thread is an embedded real-time operating system (RTOS). With nearly 12 years of accumulated experience and the rise of the Internet of Things, it is evolving into a powerful, component-rich IoT operating system.
+
+## RT-Thread Overview
+
+RT-Thread, short for Real Time-Thread, as its name implies, is an embedded real-time multi-threaded operating system.
+One of its basic properties is support for multi-tasking. Allowing multiple tasks to run at the same time does not mean that the processor actually executes multiple tasks simultaneously; in fact, a processor core can run only one task at a time. Each task is executed for a short time and the task scheduler (which determines the execution sequence according to priority) switches between tasks rapidly, which gives the illusion that multiple tasks are running at the same time. In the RT-Thread system, tasks are implemented as threads, and the thread scheduler in RT-Thread is the task scheduler mentioned above.
+
+RT-Thread is mainly written in the C language, which makes it easy to understand and easy to port. It applies object-oriented programming methods to real-time system design, making the code elegant, structured, modular, and very tailorable. For resource-constrained microcontroller (MCU) systems, the NANO version (a minimal kernel officially released by RT-Thread in July 2017), which requires only 3 KB of Flash and 1.2 KB of RAM, can be tailored with easy-to-use tools; for resource-rich IoT devices, RT-Thread can use the online software package management tool, together with system configuration tools, to achieve intuitive and rapid modular tailoring and to seamlessly import rich software feature packages, thus achieving complex functions like Android-style graphical interfaces with touch-and-slide effects, smart voice interaction, and so on.
+
+Compared with the Linux operating system, RT-Thread is small in size, low in cost, low in power consumption and fast to start up. In addition, RT-Thread offers strong real-time performance and a small resource footprint, which makes it very suitable for resource-constrained scenarios (in terms of cost, power consumption, etc.). Although 32-bit MCUs are its main operating platform, other CPUs, such as those with an MMU or those based on ARM9, ARM11 and even the Cortex-A series, are also suitable for RT-Thread in specific applications.
+
+## License Agreement
+
+The RT-Thread system is completely open source. Version 3.1.0 and earlier follow the GPL V2+ open source license agreement, while later versions follow the Apache License 2.0 open source license agreement. RT-Thread can be used free of charge in commercial products and does not require opening proprietary code to the public.
+
+## RT-Thread Framework
+
+In recent years, the concept of the Internet of Things (IoT) has become widely known, and the IoT market has developed rapidly. Connecting embedded devices to the network is the trend of the times, and networking terminals has greatly increased the complexity of the software, so the traditional RTOS kernel can hardly meet the needs of the market. In this context, the concept of the Internet of Things operating system (IoT OS) came into being. **An IoT operating system is a software platform that is based on an operating system kernel (RTOS, Linux, etc.) and includes relatively complete middleware components such as a file system and graphics library. It is low in power consumption and high in security, complies with communication protocol standards, and is able to connect to the cloud.** RT-Thread is an IoT OS.
+
+One of the main differences between RT-Thread and many other RTOSs such as FreeRTOS and uC/OS is that it is not only a real-time kernel but also provides rich middle-tier components, as shown in the following figure.
+
+![RT-Thread Software Framework](figures/02Software_framework_diagram.png)
+
+It includes:
+
+- Kernel layer: the RT-Thread kernel, the core part of RT-Thread, includes the implementation of the objects in the kernel system, such as multi-threading and its scheduling, semaphores, mailboxes, message queues, memory management, timers, etc.; libcpu/BSP (chip porting related files / board support package) is closely related to the hardware and consists of peripheral drivers and CPU porting code.
+- Components and Service Layer: components are upper-level software built on top of the RT-Thread kernel, such as the virtual file system, the FinSH command-line interface, network frameworks, the device framework, and more. Their modular design ensures high cohesion inside each component and low coupling between components.
+- RT-Thread software packages: general-purpose software components running on the RT-Thread IoT operating system platform for different application areas, consisting of description information, source code or library files. RT-Thread provides an open package platform with officially available or developer-supplied packages, giving developers a choice of reusable packages that are an important part of the RT-Thread ecosystem. The package ecosystem is critical to the choice of an operating system because these packages are highly reusable and modular, making it easy for application developers to build the system they want in the shortest amount of time. RT-Thread supports more than 60 software packages, for example:
+
+1. Internet of Things related software packages: Paho MQTT, WebClient, mongoose, WebTerminal, etc.
+2. Scripting language related software packages: JerryScript and MicroPython are currently supported.
+3. Multimedia related software packages: Openmv, mupdf.
+4. Tool packages: CmBacktrace, EasyFlash, EasyLogger, SystemView.
+5. System related software packages: RTGUI, Persimmon UI, lwext4, partition, SQLite, etc.
+6. Peripheral library and driver software packages: RealTek RTL8710BN SDK.
+7. Others.
\ No newline at end of file diff --git a/documentation/kernel-porting/figures/10pendsv.png b/documentation/kernel-porting/figures/10pendsv.png new file mode 100644 index 0000000000..95e02834b5 Binary files /dev/null and b/documentation/kernel-porting/figures/10pendsv.png differ diff --git a/documentation/kernel-porting/figures/10stack.png b/documentation/kernel-porting/figures/10stack.png new file mode 100644 index 0000000000..eb3e105156 Binary files /dev/null and b/documentation/kernel-porting/figures/10stack.png differ diff --git a/documentation/kernel-porting/figures/10switch.png b/documentation/kernel-porting/figures/10switch.png new file mode 100644 index 0000000000..b2cb9cc18c Binary files /dev/null and b/documentation/kernel-porting/figures/10switch.png differ diff --git a/documentation/kernel-porting/figures/10switch2.png b/documentation/kernel-porting/figures/10switch2.png new file mode 100644 index 0000000000..293a78ad15 Binary files /dev/null and b/documentation/kernel-porting/figures/10switch2.png differ diff --git a/documentation/kernel-porting/figures/10ths_env1.png b/documentation/kernel-porting/figures/10ths_env1.png new file mode 100644 index 0000000000..60438250aa Binary files /dev/null and b/documentation/kernel-porting/figures/10ths_env1.png differ diff --git a/documentation/kernel-porting/figures/10ths_env2.png b/documentation/kernel-porting/figures/10ths_env2.png new file mode 100644 index 0000000000..4c4ec3b1ef Binary files /dev/null and b/documentation/kernel-porting/figures/10ths_env2.png differ diff --git a/documentation/kernel-porting/kernel-porting.md b/documentation/kernel-porting/kernel-porting.md new file mode 100644 index 0000000000..b8ae87e421 --- /dev/null +++ b/documentation/kernel-porting/kernel-porting.md @@ -0,0 +1,383 @@ +Kernel Porting +=============== + +After learning the previous chapters, everyone has a better understanding of RT-Thread, but many people are not familiar with how to port the RT-Thread kernel to different hardware platforms. Kernel porting refers to the RT-Thread kernel running on different chip architectures and different boards. It can have functions such as thread management and scheduling, memory management, inter-thread synchronization and communication, and timer management. Porting can be divided into two parts: CPU architecture porting and BSP (Board support package) porting . + +This chapter will introduce CPU architecture porting and BSP porting. The CPU architecture porting part will be introduced in conjunction with the Cortex-M CPU architecture. Therefore, it is necessary to review "Cortex-M CPU Architecture Foundation" in the previous chapter ["Interrupt Management"](../interrupt/interrupt.md). After reading this chapter, how to complete the RT-Thread kernel porting will be learned. + +CPU Architecture Porting +----------- + +There are many different CPU architectures in the embedded world, for example, Cortex-M, ARM920T, MIPS32, RISC-V, etc. In order to enable RT-Thread to run on different CPU architecture chips, RT-Thread provides a libcpu abstraction layer to adapt to different CPU architectures. The libcpu layer provides unified interfaces to the kernel, including global interrupt switches, thread stack initialization, context switching, and more. + +RT-Thread's libcpu abstraction layer provides a unified set of CPU architecture porting interfaces downwards. 
This part of the interface includes global interrupt switch functions, thread context switch functions, clock beat configuration and interrupt functions, Cache, and so on. The following table shows the interfaces and variables that the CPU architecture migration needs to implement. + +libcpu porting related API + +| **Functions and Variables** | **Description** | +| ------------------------------------------------------------ | ------------------------------------------------------------ | +| rt_base_t rt_hw_interrupt_disable(void); | disable global interrupt | +| void rt_hw_interrupt_enable(rt_base_t level); | enable global interrupt | +| rt_uint8_t \*rt_hw_stack_init(void \*tentry, void \*parameter, rt_uint8_t \*stack_addr, void \*texit); | The initialization of the thread stack, the kernel will call this function during thread creation and thread initialization. | +| void rt_hw_context_switch_to(rt_uint32 to); | Context switch without source thread, which is called when the scheduler starts the first thread, and is called in the signal. | +| void rt_hw_context_switch(rt_uint32 from, rt_uint32 to); | Switch from *from* thread to *to* thread, used for switch between threads. | +| void rt_hw_context_switch_interrupt(rt_uint32 from, rt_uint32 to); | Switch from *from* thread to *to* thread, used for switch in interrupt. | +| rt_uint32_t rt_thread_switch_interrupt_flag; | A flag indicating that a switch is needed in the interrupt. | +| rt_uint32_t rt_interrupt_from_thread, rt_interrupt_to_thread; | Used to save *from* and *to* threads when the thread is context switching. | + +### Implement Global Interrupt Enable/Disable + +Regardless of kernel code or user code, there may be some variables that need to be used in multiple threads or interrupts. If there is no corresponding protection mechanism, it may lead to critical section problems. In order to solve this problem, RT-Thread provides a series of inter-thread synchronization and communication mechanism. But these mechanisms require the global interrupt enable/disable function provided in libcpu. They are, respectively: + +```c +/* disable global interrupt */ +rt_base_t rt_hw_interrupt_disable(void); + +/* enable global interrupt */ +void rt_hw_interrupt_enable(rt_base_t level); +``` + +The following describes how to implement these two functions on the Cortex-M architecture. As mentioned earlier, the Cortex-M implements the CPS instruction in order to achieve fast switch interrupts, which can be used here. + +```c +CPSID I ;PRIMASK=1, ; disable global interrupt +CPSIE I ;PRIMASK=0, ; enable global interrupt +``` + +#### Disable Global Interrupt + +The functions that need to be done in order in the rt_hw_interrupt_disable() function are: + +1). Save the current global interrupt status and use the status as the return value of the function. + +2). Disable the global interrupt. 
+
+Based on MDK, the global interrupt disable function on the Cortex-M core is shown in the following code:
+
+Disable global interrupt
+
+```c
+;/*
+; * rt_base_t rt_hw_interrupt_disable(void);
+; */
+rt_hw_interrupt_disable    PROC             ;the PROC directive starts the function definition
+    EXPORT  rt_hw_interrupt_disable         ;EXPORT makes the symbol visible externally, similar to extern in C
+    MRS     r0, PRIMASK                     ;read the value of the PRIMASK register into the r0 register
+    CPSID   I                               ;disable the global interrupt
+    BX      LR                              ;function return
+    ENDP                                    ;ENDP ends the function definition
+```
+
+The above code first uses the MRS instruction to save the value of the PRIMASK register into the r0 register, then disables the global interrupt with the "CPSID I" instruction, and finally returns with the BX instruction; the data in r0 is the return value of the function. An interrupt may still occur between the "MRS r0, PRIMASK" instruction and "CPSID I", but this does not corrupt the saved global interrupt status.
+
+Different CPU architectures have different conventions for how registers are managed during function calls and in interrupt handlers. A more detailed introduction to the use of registers on Cortex-M can be found in the official ARM manual, "*Procedure Call Standard for the ARM® Architecture*".
+
+#### Enable Global Interrupt
+
+In `rt_hw_interrupt_enable(rt_base_t level)`, the parameter *level* is the status to be restored; writing it back overrides the current global interrupt status of the chip.
+
+Based on MDK, the global interrupt enable function on the Cortex-M core is shown in the following code:
+
+Enable global interrupt
+
+```c
+;/*
+; * void rt_hw_interrupt_enable(rt_base_t level);
+; */
+rt_hw_interrupt_enable    PROC              ;the PROC directive starts the function definition
+    EXPORT  rt_hw_interrupt_enable          ;EXPORT makes the symbol visible externally, similar to extern in C
+    MSR     PRIMASK, r0                     ;write the value of the r0 register into the PRIMASK register
+    BX      LR                              ;function return
+    ENDP                                    ;ENDP ends the function definition
+```
+
+The above code uses the MSR instruction to write the value of the r0 register into the PRIMASK register, thus restoring the previous interrupt status.
+
+### Implement Thread Stack Initialization
+
+Both dynamic thread creation and static thread initialization go through the internal thread initialization function *_rt_thread_init()*, which calls the stack initialization function *rt_hw_stack_init()* to manually construct a context on the thread's stack. This context is used as the initial register values for the thread's first execution.
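+
+The stack frame that rt_hw_stack_init() builds is described by two structures. The sketch below is consistent with the code that follows; the authoritative definitions live in the Cortex-M libcpu port (cpuport.c):
+
+```c
+/* registers stacked automatically by the Cortex-M hardware on exception entry */
+struct exception_stack_frame
+{
+    rt_uint32_t r0;
+    rt_uint32_t r1;
+    rt_uint32_t r2;
+    rt_uint32_t r3;
+    rt_uint32_t r12;
+    rt_uint32_t lr;
+    rt_uint32_t pc;
+    rt_uint32_t psr;
+};
+
+struct stack_frame
+{
+    /* r4 ~ r11: registers saved and restored by software in PendSV */
+    rt_uint32_t r4;
+    rt_uint32_t r5;
+    rt_uint32_t r6;
+    rt_uint32_t r7;
+    rt_uint32_t r8;
+    rt_uint32_t r9;
+    rt_uint32_t r10;
+    rt_uint32_t r11;
+
+    /* registers restored by hardware when the exception returns */
+    struct exception_stack_frame exception_stack_frame;
+};
+```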
The layout of the context on the stack is shown below: + +![Context Information in the Stack](figures/10stack.png) + +The following code is the stack initialization code: + +Build a context on the stack + +```c +rt_uint8_t *rt_hw_stack_init(void *tentry, + void *parameter, + rt_uint8_t *stack_addr, + void *texit) +{ + struct stack_frame *stack_frame; + rt_uint8_t *stk; + unsigned long i; + + /* align the incoming stack pointer */ + stk = stack_addr + sizeof(rt_uint32_t); + stk = (rt_uint8_t *)RT_ALIGN_DOWN((rt_uint32_t)stk, 8); + stk -= sizeof(struct stack_frame); + + /* obtain the pointer to the stack frame of the context */ + stack_frame = (struct stack_frame *)stk; + + /* set the default value of all registers to 0xdeadbeef */ + for (i = 0; i < sizeof(struct stack_frame) / sizeof(rt_uint32_t); i ++) + { + ((rt_uint32_t *)stack_frame)[i] = 0xdeadbeef; + } + + /* save the first parameter in the r0 register according to the ARM APCS calling standard */ + stack_frame->exception_stack_frame.r0 = (unsigned long)parameter; + /* set the remaining parameter registers to 0 */ + stack_frame->exception_stack_frame.r1 = 0; /* r1 register */ + stack_frame->exception_stack_frame.r2 = 0; /* r2 register */ + stack_frame->exception_stack_frame.r3 = 0; /* r3 register */ + /* set IP (Intra-Procedure-call scratch register.) to 0 */ + stack_frame->exception_stack_frame.r12 = 0; /* r12 register */ + /* save the address of the thread exit function in the lr register */ + stack_frame->exception_stack_frame.lr = (unsigned long)texit; + /* save the address of the thread entry function in the pc register */ + stack_frame->exception_stack_frame.pc = (unsigned long)tentry; + /* Set the value of psr to 0x01000000L, which means that the default switch is Thumb mode. */ + stack_frame->exception_stack_frame.psr = 0x01000000L; + + /* return the stack address of the current thread */ + return stk; +} +``` + +### Implement Context Switching + +In different CPU architectures, context switches between threads and context switches from interrupts to context, the register portion of the context may be different or the same. In Cortex-M, context switching is done uniformly using PendSV exceptions, and there is no difference in the switching parts. However, in order to adapt to different CPU architectures, RT-Thread's libcpu abstraction layer still needs to implement three thread switching related functions: + +1) rt_hw_context_switch_to(): no source thread, switching to the target thread, which is called when the scheduler starts the first thread. + +2) rt_hw_context_switch():In a threaded environment, switch from the current thread to the target thread. + +3) rt_hw_context_switch_interrupt ():In the interrupt environment, switch from the current thread to the target thread. + +There are differences between switching in a threaded environment and switching in an interrupt environment. In the threaded environment, if the rt_hw_context_switch() function is called, the context switch can be performed immediately; in the interrupt environment, it needs to wait for the interrupt handler to complete processing the functions before switching. + +Due to this difference, the implementation of rt_hw_context_switch() and rt_hw_context_switch_interrupt() is not the same on platforms such as ARM9. If the thread's schedule is triggered in the interrupt handler, rt_hw_context_switch_interrupt() is called in the dispatch function to trigger the context switch. 
After the interrupt handler finishes processing the interrupt, the scheduler checks the rt_thread_switch_interrupt_flag variable before it exits. If the variable is 1, the thread context switch is completed according to the rt_interrupt_from_thread and rt_interrupt_to_thread variables.
+
+In the Cortex-M processor architecture, context switching can be made more compact by taking advantage of the automatic partial register stacking performed by the hardware and of the PendSV exception.
+
+Context switching between threads is shown in the following figure:
+
+![Context Switch between Threads](figures/10ths_env1.png)
+
+The hardware automatically saves the PSR, PC, LR, R12, R3-R0 registers of the *from* thread before entering the PendSV interrupt, then the R11\~R4 registers of the *from* thread are saved and the R4\~R11 registers of the *to* thread are restored in PendSV, and finally the hardware automatically restores the R0\~R3, R12, LR, PC, PSR registers of the *to* thread after exiting the PendSV interrupt.
+
+The context switch from an interrupt to a thread can be represented by the following figure:
+
+![Interrupt to Thread Switching](figures/10ths_env2.png)
+
+The hardware automatically saves the PSR, PC, LR, R12, R3-R0 registers of the *from* thread before entering the interrupt, and then a PendSV exception is triggered. The R11\~R4 registers of the *from* thread are saved and the R4\~R11 registers of the *to* thread are restored in the PendSV exception handler. Finally, the hardware automatically restores the R0\~R3, R12, PSR, PC, LR registers of the *to* thread after exiting the PendSV interrupt.
+
+Clearly, on the Cortex-M core, rt_hw_context_switch() and rt_hw_context_switch_interrupt() do the same work, namely saving and restoring the remaining context in PendSV, so only a single implementation is needed, which simplifies the porting.
+
+#### Implement rt_hw_context_switch_to()
+
+rt_hw_context_switch_to() takes only a target thread and no source thread; it switches execution to the specified thread.
The following figure is a flowchart: + +![rt_hw_context_switch_to() Flowchart](figures/10switch.png) + +The rt_hw_context_switch_to() implementation on the Cortex-M3 kernel (based on MDK), as shown in the following code: + +MDK version rt_hw_context_switch_to() implementation + +```c +;/* +; * void rt_hw_context_switch_to(rt_uint32 to); +; * r0 --> to +; * this fucntion is used to perform the first thread switch +; */ +rt_hw_context_switch_to PROC + EXPORT rt_hw_context_switch_to + ; r0 is a pointer pointing to the SP member of the thread control block of the to thread + ; save the value of the r0 register to the rt_interrupt_to_thread variable + LDR r1, =rt_interrupt_to_thread + STR r0, [r1] + + ; set the from thread to empty, indicating that no context is needed to save from + LDR r1, =rt_interrupt_from_thread + MOV r0, #0x0 + STR r0, [r1] + + ; set the flag to 1, indicating that switching is required, this variable will be cleared when switching in the PendSV exception handler + LDR r1, =rt_thread_switch_interrupt_flag + MOV r0, #1 + STR r0, [r1] + + ; set PendSV exception priority to lowest priority + LDR r0, =NVIC_SYSPRI2 + LDR r1, =NVIC_PENDSV_PRI + LDR.W r2, [r0,#0x00] ; read + ORR r1,r1,r2 ; modify + STR r1, [r0] ; write-back + + ; trigger PendSV exception (PendSV exception handler will be executed) + LDR r0, =NVIC_INT_CTRL + LDR r1, =NVIC_PENDSVSET + STR r1, [r0] + + ; abandon the stack from chip startup to before the first context switch, set the value of the MSP as when it is started + LDR r0, =SCB_VTOR + LDR r0, [r0] + LDR r0, [r0] + MSR msp, r0 + + ; enable global interrupts and global exceptions. After enabling, the PendSV exception handler will be entered. + CPSIE F + CPSIE I + + ; will not execute to here + ENDP +``` + +#### Implement rt_hw_context_switch()/ rt_hw_context_switch_interrupt() + +The function rt_hw_context_switch() and the function rt_hw_context_switch_interrupt() have two parameters, the *from* thread and the *to* thread. They implement the function to switch from the *from* thread to the *to* thread. The following figure is a specific flow chart: + +![Rt_hw_context_switch()/ rt_hw_context_switch_interrupt() Flowchart](figures/10switch2.png) + + + +The rt_hw_context_switch() and rt_hw_context_switch_interrupt() implementations on the Cortex-M3 kernel (based on MDK) are shown in the following code: + +Implement rt_hw_context_switch()/rt_hw_context_switch_interrupt() + +```c +;/* +; * void rt_hw_context_switch(rt_uint32 from, rt_uint32 to); +; * r0 --> from +; * r1 --> to +; */ +rt_hw_context_switch_interrupt + EXPORT rt_hw_context_switch_interrupt +rt_hw_context_switch PROC + EXPORT rt_hw_context_switch + + ; check if the rt_thread_switch_interrupt_flag variable is 1 + ; skip updating the contents of the thread from if the variable is 1 + LDR r2, =rt_thread_switch_interrupt_flag + LDR r3, [r2] + CMP r3, #1 + BEQ _reswitch + ; set the rt_thread_switch_interrupt_flag variable to 1 + MOV r3, #1 + STR r3, [r2] + + ; update the rt_interrupt_from_thread variable from parameter r0 + LDR r2, =rt_interrupt_from_thread + STR r0, [r2] + +_reswitch + ; update the rt_interrupt_to_thread variable from parameter r1 + LDR r2, =rt_interrupt_to_thread + STR r1, [r2] + + ; trigger PendSV exception, will enter the PendSV exception handler to complete the context switch + LDR r0, =NVIC_INT_CTRL + LDR r1, =NVIC_PENDSVSET + STR r1, [r0] + BX LR +``` + +#### Implement PendSV Interrupt + +In Cortex-M3, the PendSV interrupt handler is PendSV_Handler(). 
The actual thread switching is done in PendSV_Handler(). The following figure shows the specific flow:
+
+![PendSV Interrupt Handling](figures/10pendsv.png)
+
+The following code is a PendSV_Handler implementation:
+
+```c
+; r0 --> switch from thread stack
+; r1 --> switch to thread stack
+; psr, pc, lr, r12, r3, r2, r1, r0 are pushed into [from] stack
+PendSV_Handler   PROC
+    EXPORT PendSV_Handler
+
+    ; disable global interrupt
+    MRS r2, PRIMASK
+    CPSID   I
+
+    ; check whether the rt_thread_switch_interrupt_flag variable is 0
+    ; if it is zero, jump to pendsv_exit
+    LDR r0, =rt_thread_switch_interrupt_flag
+    LDR r1, [r0]
+    CBZ r1, pendsv_exit         ; pendsv already handled
+
+    ; clear the rt_thread_switch_interrupt_flag variable
+    MOV r1, #0x00
+    STR r1, [r0]
+
+    ; check the rt_interrupt_from_thread variable
+    ; if it is 0, the context of the from thread is not saved
+    LDR r0, =rt_interrupt_from_thread
+    LDR r1, [r0]
+    CBZ r1, switch_to_thread
+
+    ; save the context of the from thread
+    MRS r1, psp                 ; obtain the stack pointer of the from thread
+    STMFD   r1!, {r4 - r11}     ; save r4~r11 onto the thread's stack
+    LDR r0, [r0]
+    STR r1, [r0]                ; update the SP pointer in the thread's control block
+
+switch_to_thread
+    LDR r1, =rt_interrupt_to_thread
+    LDR r1, [r1]
+    LDR r1, [r1]                ; obtain the stack pointer of the to thread
+
+    LDMFD   r1!, {r4 - r11}     ; restore r4~r11 of the to thread from its stack
+    MSR psp, r1                 ; update psp with the value of r1
+
+pendsv_exit
+    ; restore the global interrupt status
+    MSR PRIMASK, r2
+
+    ; set bit 2 of the lr (EXC_RETURN) register so that the PSP is used after the exception returns
+    ORR lr, lr, #0x04
+    ; exit the interrupt handler
+    BX  lr
+    ENDP
+```
+
+### Implement OS Tick
+
+With global interrupt switching and context switching in place, the RTOS can create, run, and schedule threads. With the OS tick, RT-Thread can additionally perform time-slice round-robin scheduling for threads of the same priority, implement timers, implement the rt_thread_delay() delay function, and so on.
+
+In order to finish the porting of libcpu, we need to ensure that the rt_tick_increase() function is called periodically in the clock tick interrupt. The call period is determined by the RT_TICK_PER_SECOND macro in rtconfig.h.
+
+On Cortex-M, the OS tick can be implemented in the SysTick interrupt handler:
+
+```c
+void SysTick_Handler(void)
+{
+    /* enter interrupt */
+    rt_interrupt_enter();
+
+    rt_tick_increase();
+
+    /* leave interrupt */
+    rt_interrupt_leave();
+}
+```
+
+BSP Porting
+-------
+
+In a practical project, different boards may use the same CPU architecture but carry different peripheral resources and target different products, so the kernel also needs to be adapted to the board. RT-Thread provides a BSP abstraction layer to accommodate the boards commonly seen in practice. To use the RT-Thread kernel on a board, in addition to having the corresponding chip architecture port, the corresponding board must be ported as well, that is, a basic BSP must be implemented. The job is to establish a basic environment for the operating system to run. The main tasks to complete are:
+
+1) Initialize the CPU internal registers and set the RAM operation timing.
+
+2) Implement the clock driver, the interrupt controller driver, and interrupt management.
+
+3) Implement UART and GPIO drivers.
+
+4) Initialize the dynamic memory heap to implement dynamic heap memory management.
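+
+On a typical Cortex-M BSP these four tasks are collected in a board-level initialization function. The following is only an illustrative sketch: SystemClock_Config(), hw_uart_init(), HEAP_BEGIN and HEAP_END are hypothetical names standing in for whatever the chip vendor's HAL and the linker script actually provide, SysTick_Config() is the CMSIS call, and rt_system_heap_init() is the kernel interface that hands the free RAM region to the dynamic memory heap:
+
+```c
+#include <rtthread.h>
+
+/* illustrative BSP initialization sketch; replace the hypothetical calls
+ * with the equivalents from your chip HAL and linker script */
+void rt_hw_board_init(void)
+{
+    /* 1) initialize the CPU clocks / RAM timing (hypothetical vendor HAL call) */
+    SystemClock_Config();
+
+    /* 2) configure SysTick so the tick interrupt fires RT_TICK_PER_SECOND times per second */
+    SysTick_Config(SystemCoreClock / RT_TICK_PER_SECOND);
+
+    /* 3) bring up the console UART used by rt_kprintf/FinSH (hypothetical helper) */
+    hw_uart_init();
+
+    /* 4) hand the free RAM region over to the dynamic memory heap */
+#ifdef RT_USING_HEAP
+    rt_system_heap_init((void *)HEAP_BEGIN, (void *)HEAP_END);
+#endif
+}
+```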
+ + + diff --git a/documentation/memory/figures/08Memory_distribution.png b/documentation/memory/figures/08Memory_distribution.png new file mode 100644 index 0000000000..580095fbf8 Binary files /dev/null and b/documentation/memory/figures/08Memory_distribution.png differ diff --git a/documentation/memory/figures/08heap_ops.png b/documentation/memory/figures/08heap_ops.png new file mode 100644 index 0000000000..0cabb23884 Binary files /dev/null and b/documentation/memory/figures/08heap_ops.png differ diff --git a/documentation/memory/figures/08memheap.png b/documentation/memory/figures/08memheap.png new file mode 100644 index 0000000000..b27c818733 Binary files /dev/null and b/documentation/memory/figures/08memheap.png differ diff --git a/documentation/memory/figures/08mempool.png b/documentation/memory/figures/08mempool.png new file mode 100644 index 0000000000..c045d1acda Binary files /dev/null and b/documentation/memory/figures/08mempool.png differ diff --git a/documentation/memory/figures/08mempool_ops.png b/documentation/memory/figures/08mempool_ops.png new file mode 100644 index 0000000000..4e73043558 Binary files /dev/null and b/documentation/memory/figures/08mempool_ops.png differ diff --git a/documentation/memory/figures/08mempool_work.png b/documentation/memory/figures/08mempool_work.png new file mode 100644 index 0000000000..ee9d1ab39f Binary files /dev/null and b/documentation/memory/figures/08mempool_work.png differ diff --git a/documentation/memory/figures/08slab.png b/documentation/memory/figures/08slab.png new file mode 100644 index 0000000000..1b2a28dd19 Binary files /dev/null and b/documentation/memory/figures/08slab.png differ diff --git a/documentation/memory/figures/08smem_work.png b/documentation/memory/figures/08smem_work.png new file mode 100644 index 0000000000..397ddf070c Binary files /dev/null and b/documentation/memory/figures/08smem_work.png differ diff --git a/documentation/memory/figures/08smem_work2.png b/documentation/memory/figures/08smem_work2.png new file mode 100644 index 0000000000..533db94719 Binary files /dev/null and b/documentation/memory/figures/08smem_work2.png differ diff --git a/documentation/memory/figures/08smem_work3.png b/documentation/memory/figures/08smem_work3.png new file mode 100644 index 0000000000..4933c69ce3 Binary files /dev/null and b/documentation/memory/figures/08smem_work3.png differ diff --git a/documentation/memory/memory.md b/documentation/memory/memory.md new file mode 100644 index 0000000000..3812ca978b --- /dev/null +++ b/documentation/memory/memory.md @@ -0,0 +1,666 @@ +Memory Management +============== + +In a computing system, there are usually two types of memory space: internal memory space and external memory space. The internal memory is quick to access and can be accessed randomly according to the variable address. It is what we usually call RAM (Random-Access Memory) and can be understood as the computer's memory. In the external memory, the content stored is relatively fixed, and the data will not be lost even after the power is turned off. It is what we usually call ROM (Read Only Memory) and can be understood as the hard disk of the computer. + +In a computer system, variables and intermediate data are generally stored in RAM, and they are only transferred from RAM to CPU for calculation when actually used. 
The memory size required by some data needs to be determined according to the actual situation during the running of the program, which requires the system to have the ability to dynamically manage the memory space. User applies to the system when he needs a block of memory space, then the system selects a suitable memory space to allocate to the user. After the user finishes using it, the memory space is released back to the system which enables the system to recycle the memory space. + +This chapter introduces two kinds of memory management methods in RT-Thread, namely dynamic memory heap management and static memory pool management. After learning this chapter, readers will understand the memory management principle and usage of RT-Thread. + +Memory Management Functional Features +------------------ + +Because time requirements are very strict in real-time systems, memory management is often much more demanding than in general-purpose operating systems: + +1) Time for allocating memory must be deterministic. The general memory management algorithm is to find a free memory block that is compatible with the data according to the length of the data to be stored, and then store the data therein. The time it takes to find such a free block of memory is uncertain, but for real-time systems, this is unacceptable. Because real-time system requires that the allocation process of the memory block is completed within a predictable certain time, otherwise the response of the real-time task to the external event will become undeterminable. + +2) As memory is constantly being allocated and released, the entire memory area will produce more and more fragments (while using memory, some memory is applied, some of which are released, resulting in some small memory blocks in the memory space, these small memory blocks have inconsecutive addresses and cannot be allocated as a whole large block of memory.) There is enough free memory in the system, but because their addresses are inconsecutive, they cannot form a continuous block of complete memory, which will make the program unable to apply for large memory. For general-purpose systems, this inappropriate memory allocation algorithm can be solved by rebooting the system (once every month or once a few months). But it is unacceptable for embedded systems that need to work continuously in the field work all the year round. + +3) The resource environment of embedded system is also different. Some systems have relatively tight resources, only tens of kilobytes of memory are available for allocation, while some systems have several megabytes of memory. This makes choosing efficient memory allocation algorithm for these different systems more complicated. + +RT-Thread operating system provides different memory allocation management algorithms for memory management according to different upper layer applications and system resources. Generally, it can be divided into two categories: memory heap management and memory pool management. Memory heap management is divided into three cases according to specific memory devices: + +The first is allocation management for small memory blocks (small memory management algorithm); + +The second is allocation management for large memory blocks (slab management algorithm); + +The third is allocation management for multiple memory heaps (memheap management algorithm) + +Memory Heap Management +---------- + +Memory heap management is used to manage a contiguous memory space. 
We introduced the memory distribution of RT-Thread in chapter "Kernel Basics". As shown in the following figure, RT-Thread uses the space at "the end of the ZI segment" to the end of the memory as the memory heap. + +![RT-Thread Memory Distribution](figures/08Memory_distribution.png) + +If current resource allows, memory heap can allocate memory blocks of any size according to the needs of users. When user does not need to use these memory blocks, they can be released back to the heap for other applications to allocate and use. In order to meet different needs, RT-Thread system provides different memory management algorithms, namely small memory management algorithm, slab management algorithm and memheap management algorithm. + +The small memory management algorithm is mainly for system with less resources and with less than 2MB of memory. The slab memory management algorithm mainly provides a fast algorithm similar to multiple memory pool management algorithms when the system resources are rich. In addition to the above, RT-Thread also has a management algorithm for multi-memory heap, namely the memheap management algorithm. The memheap management algorithm is suitable for where there are multiple memory heaps in the system. It can “paste” multiple memories together to form a large memory heap, which is very easy to use for users. + +Either one or none of these memory heap management algorithms can be chosen when the system is running. These memory heap management algorithms provide the same API interface to the application. + +>Because the memory heap manager needs to meet the security allocation in multi-threaded conditions, which means mutual exclusion between multiple threads needs to be taken into consideration, so please do not allocate or release dynamic memory blocks in interrupt service routine, which may result in the current context being suspended. + +### Small Memory Management Algorithm + +The small memory management algorithm is a simple memory allocation algorithm. Initially, it is a large piece of memory. When a memory block needs to be allocated, the matching memory block is segmented from the large memory block, and then this matching free memory block is returned to the heap management system. Each memory block contains data head for management use through which the used block and the free block are linked by a doubly linked list, as shown in the following figure: + +![Small Memory Management Working Mechanism Diagram](figures/08smem_work.png) + +Each memory block (whether it is an allocated memory block or a free memory block) contains a data head, including: + +**1) magic**: Variable (also called magic number). It will be initialized to 0x1ea0 (that is, the English word heap) which is used to mark this memory block as a memory data block for memory management. The variable is not only used to identify that the data block is a memory data block for memory management. It is also a memory protection word: if this area is overridden, it means that the memory block is illegally overridden (normally only the memory manager will operate on this memory). + +**2)used**: Indicates whether the current memory block has been allocated. + +The performance of memory management is mainly reflected in the allocation and release of memory. The small memory management algorithm can be embodied by the following examples. + +As shown in the following figure, the free list pointer lfree initially points to a 32-byte block of memory. 
When the user thread wants to allocate a 64-byte memory block, since the memory block pointed to by this lfree pointer is only 32 bytes and does not meet the requirements, the memory manager will continue to search for the next memory block. When the next memory block with 128 bytes is found, it meets the requirements of the allocation. Because this memory block is large, the allocator will split the memory block, and the remaining memory block(52 bytes) will remain in the lfree linked list, as shown in the following table which is after 64 bytes is allocated. + +![Small Memory Management Algorithm Linked List Structure Diagram 1](figures/08smem_work2.png) + +![Small Memory Management Algorithm Linked List Structure Diagram 2](figures/08smem_work3.png) + +In addition, a 12-byte data head is reserved for `magic, used` information, and linked list nodes before each memory block is allocated. The address returned to the application is actually the address after 12 bytes of this memory block. The 12-byte data head is the part that the user should never use. (Note: The length of the 12-byte data head will be different as it aligns with the system). + +As for releasing, it is the reversed process, but the allocator will check if the adjacent memory blocks are free, and if they are free, the allocator will merge them into one large free memory block. + +### Slab Management Algorithm + +RT-Thread's slab allocator is an optimized memory allocation algorithm for embedded systems based on the slab allocator implemented by DragonFly BSD founder Matthew Dillon. The most primitive slab algorithm is Jeff Bonwick's efficient kernel memory allocation algorithm introduced for the Solaris operating system. + +RT-Thread's slab allocator implementation mainly removes the object construction and destruction process, and only retains the pure buffered memory pool algorithm. The slab allocator is divided into multiple zones according to the size of the object which can also be seen as having a memory pool for each type of object, as shown in the following figure: + +![slab Memory Allocation Structure](figures/08slab.png) + + + +A zone is between 32K and 128K bytes in size, and the allocator will automatically adjust based on the heap size when the heap is initialized. The zone in the system includes up to 72 objects, which can allocate up to 16K of memory at a time. It will directly allocate from the page allocator if it exceeds 16K. The size of the memory block allocated on each zone is fixed. Zones that can allocate blocks of the same size are linked in a linked list. The zone linked lists of the 72 objects are managed in an array. (zone_array[]). + +Here are the two main operations for the memory allocator: + +**(1) Memory Allocation** + +Assuming a 32-byte memory is allocated, the slab memory allocator first finds the corresponding zone linked list from the linked list head of zone array in accordance with the 32-byte value. If the linked list is empty, assign a new zone to the page allocator and return the first free block of memory from the zone. If the linked list is not empty, a free block must exist in the first zone node in the zone linked list, (otherwise it would not have been placed in the linked list), then take the corresponding free block. If all free memory blocks in the zone are used after the allocation, the allocator needs to remove this zone node from the linked list. 
+ +**(2)Memory Release** + +The allocator needs to find the zone node where the memory block is located, and then link the memory block to the zone's free memory block linked list. If the free linked list of the zone indicates that all the memory blocks of the zone have been released, it means that the zone is completely free. The system will release the fully free zone to the page allocator when the number of free zones in the zone linked list reaches a certain number. + +### memheap Management Algorithm + +memheap management algorithm is suitable for systems with multiple memory heaps that are not contiguous. Using memheap memory management can simplify the use of multiple memory heaps in the system: when there are multiple memory heaps in the system, the user only needs to initialize multiple needed memheaps during system initialization and turn on the memheap function to glue multiple memheaps (addresses can be discontinuous) for the system's heap allocation. + +>The original heap function will be turned off after memheap is turned on. Both can only be selected by turning RT_USING_MEMHEAP_AS_HEAP on or off. + +Working mechanism of memheap is shown in the figure below. First, add multiple blocks of memory to the memheap_item linked list to glue. The allocation of a memory block starts with allocating memory from default memory heap. When it can not be allocated, memheap_item linked list is looked up, and an attempt is made to allocate a memory block from another memory heap. The application doesn't care which memory heap the currently allocated memory block is on, as if it were operating a memory heap. + +![memheap Handling Multiple Memory Heaps](figures/08memheap.png) + +### Memory Heap Configuration and Initialization + +When using the memory heap, heap initialization must be done at system initialization, which can be done through the following function interface: + +```c +void rt_system_heap_init(void* begin_addr, void* end_addr); +``` + +This function will use the memory space of the parameters begin_addr, end_addr as a memory heap. The following table describes the input parameters for this function: + +Input parameter for rt_system_heap_init() + +|**Parameters** |**Description** | +|------------|--------------------| +| begin_addr | Start address for heap memory area | +| end_addr | End address for heap memory area | + +When using memheap heap memory, you must initialize the heap memory at system initialization, which can be done through the following function interface: + +```c +rt_err_t rt_memheap_init(struct rt_memheap *memheap, + const char *name, + void *start_addr, + rt_uint32_t size) +``` + +If there are multiple non-contiguous memheaps, the function can be called multiple times to initialize it and join the memheap_item linked list. The following table describes the input parameters and return values for this function: + +Input parameters and return values of rt_memheap_init() + +|**Parameters** |**Description** | +|------------|--------------------| +| memheap | memheap control block | +| name | The name of the memory heap | +| start_addr | Heap memory area start address | +| size | Heap memory size | +|**Return** | —— | +| RT_EOK | Successful | + +### Memory Heap Management + +Operations of the memory heap are as shown in the following figure, including: initialization, application for memory blocks, release of memory. After use, all dynamic memory should be released for future use by other programs. 
+ +![Operations of the Memory Heap ](figures/08heap_ops.png) + +#### Allocate and Release Memory Block + +Allocate a memory block of user-specified size from the memory heap. The function interface is as follows: + +```c +void *rt_malloc(rt_size_t nbytes); +``` + +rt_malloc function finds a memory block of the appropriate size from the system heap space and returns the available address of the memory block to the user. The following table describes the input parameters and return values for this function: + +Input parameters and return values of rt_malloc() + +|**Parameters** |**Description** | +|------------------|------------------------------------| +| nbytes | The size of the memory block to be allocated, in bytes | +|**Return** | —— | +| Allocated memory block address | Successful | +| RT_NULL | Fail | + +After the application uses the memory applied from the memory allocator, it must be released in time, otherwise it will cause a memory leak. The function interface for releasing the memory block is as follows: + +```c +void rt_free (void *ptr); +``` + +The rt_free function will return the to-be-released memory back to the heap manager. When calling this function, user needs to pass the to-be-released pointer of the memory block. If it is a null pointer, it returns directly. The following table describes the input parameters for this function: + +Input parameters of rt_free() + +|**Parameters**|**Description** | +|----------|--------------------| +| ptr | to-be-released memory block pointer | + +#### Re-allocate Memory Block + +Re-allocating the size of the memory block (increase or decrease) based on the allocated memory block can be done through the following function interface: + +```c +void *rt_realloc(void *rmem, rt_size_t newsize); +``` + +When the memory block is re-allocated, the original memory block data remains the same (in the case of reduction, the subsequent data is automatically truncated). The following table describes the input parameters and return values for this function: + +Input parameters and return values of rt_realloc() + +|**Parameters** |**Description** | +|----------------------|--------------------| +| rmem | Point to the allocated memory block | +| newsize | Re-allocated memory size | +|**Return** | —— | +| Re-allocated memory block address | Successful | + +#### Allocate Multiple Memory Blocks + +Allocating multiple memory blocks with contiguous memory addresses from the memory heap can be done through the following function interface: + +```c + void *rt_calloc(rt_size_t count, rt_size_t size); +``` + +The following table describes the input parameters and return values for this function: + +Input parameters and return values of rt_calloc() + +|**Parameters** |**Description** | +|----------------------------|---------------------------------------------| +| count | Number of memory block | +| size | Size of memory block | +|**Return** | —— | +| Pointer pointing to the first memory block address | Successful, all allocated memory blocks are initialized to zero. | +| RT_NULL | Allocation failed | + +#### Set Memory Hook Function + +When allocating memory blocks, user can set a hook function. The function interface called is as follows: + +```c +void rt_malloc_sethook(void (*hook)(void *ptr, rt_size_t size)); +``` + +The hook function set will callback after the memory allocation. During the callback, the allocated memory block address and size are passed as input parameters. 
The following table describes the input parameters for this function:
+
+Input parameters for rt_malloc_sethook()
+
+|**Parameters**|**Description** |
+|----------|--------------|
+| hook | Hook function pointer |
+
+The hook function interface is as follows:
+
+```c
+void hook(void *ptr, rt_size_t size);
+```
+
+The following table describes the input parameters for the hook function:
+
+Allocation hook function interface parameters
+
+|**Parameters**|**Description** |
+|----------|----------------------|
+| ptr | The allocated memory block pointer |
+| size | The size of the allocated memory block |
+
+When releasing memory, the user can also set a hook function, which is registered through the following interface:
+
+```c
+void rt_free_sethook(void (*hook)(void *ptr));
+```
+
+The hook function will be called back before the memory release is completed. During the callback, the address of the memory block being released is passed in as the parameter (the memory block is not yet released at this time). The following table describes the input parameters for this function:
+
+Input parameters for rt_free_sethook()
+
+|**Parameters**|**Description** |
+|----------|--------------|
+| hook | Hook function pointer |
+
+The hook function interface is as follows:
+
+```c
+void hook(void *ptr);
+```
+
+The following table describes the input parameters for the hook function:
+
+Input parameters of the hook function
+
+|**Parameters**|**Description** |
+|----------|--------------------|
+| ptr | Memory block pointer to be released |
+
+### Memory Heap Management Application Example
+
+This is an example of memory heap usage. The program creates a dynamic thread that repeatedly requests and releases memory; each iteration it applies for a block twice as large as the previous one, and it ends when the application fails, as shown in the following code:
+
+Memory heap management
+
+```c
+#include <rtthread.h>
+
+#define THREAD_PRIORITY      25
+#define THREAD_STACK_SIZE    512
+#define THREAD_TIMESLICE     5
+
+/* thread entry */
+void thread1_entry(void *parameter)
+{
+    int i;
+    char *ptr = RT_NULL; /* memory block pointer */
+
+    for (i = 0; ; i++)
+    {
+        /* allocate (1 << i) bytes of memory each time */
+        ptr = rt_malloc(1 << i);
+
+        /* if allocated successfully */
+        if (ptr != RT_NULL)
+        {
+            rt_kprintf("get memory :%d byte\n", (1 << i));
+            /* release the memory block */
+            rt_free(ptr);
+            rt_kprintf("free memory :%d byte\n", (1 << i));
+            ptr = RT_NULL;
+        }
+        else
+        {
+            rt_kprintf("try to get %d byte memory failed!\n", (1 << i));
+            return;
+        }
+    }
+}
+
+int dynmem_sample(void)
+{
+    rt_thread_t tid = RT_NULL;
+
+    /* create thread 1 */
+    tid = rt_thread_create("thread1",
+                           thread1_entry, RT_NULL,
+                           THREAD_STACK_SIZE,
+                           THREAD_PRIORITY,
+                           THREAD_TIMESLICE);
+    if (tid != RT_NULL)
+        rt_thread_startup(tid);
+
+    return 0;
+}
+/* export to the msh command list */
+MSH_CMD_EXPORT(dynmem_sample, dynmem sample);
+```
+
+The simulation results are as follows:
+
+```
+ \ | /
+- RT -     Thread Operating System
+ / | \     3.1.0 build Aug 24 2018
+ 2006 - 2018 Copyright by rt-thread team
+msh >dynmem_sample
+msh >get memory :1 byte
+free memory :1 byte
+get memory :2 byte
+free memory :2 byte
+…
+get memory :16384 byte
+free memory :16384 byte
+get memory :32768 byte
+free memory :32768 byte
+try to get 65536 byte memory failed!
+```
+
+Memory is allocated successfully in the routine and the information is printed; when the thread tries to allocate 65536 bytes (64KB) of memory, the allocation fails because the total RAM size is only 64KB and the available RAM is less than 64KB.
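+
+As a companion to this example, the allocation and release hooks described earlier in this section can be used to trace heap activity. The sketch below only illustrates the two hook-setting interfaces; the printed messages and the heap_trace_init() command name are arbitrary:
+
+```c
+#include <rtthread.h>
+
+/* called back after every rt_malloc(): print the block address and size */
+static void malloc_hook(void *ptr, rt_size_t size)
+{
+    rt_kprintf("malloc: %d bytes at %p\n", size, ptr);
+}
+
+/* called back before every rt_free(): print the block address */
+static void free_hook(void *ptr)
+{
+    rt_kprintf("free  : %p\n", ptr);
+}
+
+int heap_trace_init(void)
+{
+    /* register the hooks with the memory heap manager */
+    rt_malloc_sethook(malloc_hook);
+    rt_free_sethook(free_hook);
+
+    return 0;
+}
+MSH_CMD_EXPORT(heap_trace_init, enable heap allocation trace);
+```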
+ +Memory Pool +------ + +The memory heap manager can allocate blocks of any size, which is very flexible and convenient. But it also has obvious shortcomings. Firstly, the allocation efficiency is not high because free memory block needs to be looked up for each allocation. Secondly, it is easy to generate memory fragmentation. In order to improve the memory allocation efficiency and avoid memory fragmentation, RT-Thread provides another method of memory management: Memory Pool. + +Memory pool is a memory allocation method for allocating a large number of small memory blocks of the same size. It can greatly speed up memory allocation and release, and can avoid memory fragmentation as much as possible. In addition, RT-Thread's memory pool allows thread suspend function. When there is no free memory block in the memory pool, the application thread will be suspended until there is a new available memory block in the memory pool, and then the suspended application thread will be awakened. + +The thread suspend function of the memory pool is very suitable for scenes that need to be synchronized by memory resources. For example, when playing music, the player thread decodes the music file and then sends it to the sound card driver to drive the hardware to play music. + +![Player Thread & Sound Card Driver Relationship](figures/08mempool.png) + +As shown in the figure above, when the player thread needs to decode the data, it will request the memory block from the memory pool. If there is no memory block available, the thread will be suspended, otherwise it will obtain the memory block to place the decoded data. + +The player thread then writes the memory block containing the decoded data to the sound card abstraction device (the thread will return immediately and continue to decode more data); + +After the sound card device is written, the callback function set by the player thread is called to release the written memory block. If the player thread is suspended because there is no memory block in the memory pool available, then it will be awakened to continue to decode. + +### Memory Pool Working Mechanism + +#### Memory Pool Control Block + +The memory pool control block is a data structure used by the operating system to manage the memory pool. It stores some information about the memory pool, such as the start address of the data area in memory pool, memory block size and memory block list. It also includes memory blocks, linked list structure used for the connection between memory blocks, event set of the thread suspended due to the memory block being unavailable, and so on. + +In the RT-Thread real-time operating system, the memory pool control block is represented by the structure `struct rt_mempool`. Another C expression, `rt_mp_t`, represents the memory block handle. The implementation in C language is a pointer pointing to the memory pool control block. 
For details, see the following code:
+
+```c
+struct rt_mempool
+{
+    struct rt_object parent;
+
+    void        *start_address;     /* start address of the memory pool data area */
+    rt_size_t    size;              /* size of the memory pool data area */
+
+    rt_size_t    block_size;        /* size of one memory block */
+    rt_uint8_t  *block_list;        /* list of free memory blocks */
+
+    /* maximum number of memory blocks that the memory pool data area can hold */
+    rt_size_t    block_total_count;
+    /* number of free memory blocks in the memory pool */
+    rt_size_t    block_free_count;
+    /* list of threads suspended because no memory block is available */
+    rt_list_t    suspend_thread;
+    /* number of threads suspended because no memory block is available */
+    rt_size_t    suspend_thread_count;
+};
+typedef struct rt_mempool* rt_mp_t;
+```
+
+#### Memory Block Allocation Mechanism
+
+When a memory pool is created, it first requests a large block of memory from the system, then divides it into multiple small memory blocks of the same size. The small blocks are connected directly by a linked list (also called the free linked list). At each allocation, the first memory block is taken from the head of the free linked list and handed to the applicant. As the figure below shows, multiple memory pools with different block sizes may exist in physical memory at the same time. Each memory pool is composed of multiple free memory blocks, which the kernel uses for memory management. When a memory pool object is created, it is associated with a memory pool control block; the parameters of the control block include the memory pool name, memory buffer, memory block size, number of blocks, and a queue of waiting threads.
+
+![Memory Pool Working Mechanism Diagram](figures/08mempool_work.png)
+
+The kernel is responsible for allocating the memory pool control block to the memory pool. It also receives requests from user threads for memory blocks; with this information, the kernel can allocate memory blocks from the memory pool accordingly. Once the memory pool is initialized, the size of its memory blocks can no longer be adjusted.
+
+Each memory pool object consists of the structure above, in which suspend_thread forms a list of threads waiting for memory blocks: when no memory block is available in the memory pool and the requesting thread is allowed to wait, the thread applying for a memory block is suspended on the suspend_thread linked list.
+
+### Memory Pool Management
+
+The memory pool control block is a structure that contains important parameters related to the memory pool and acts as a link between its various states. The related interfaces of the memory pool are shown in the following figure. The operations on a memory pool include: create/initialize a memory pool, apply for a memory block, release a memory block, and delete/detach a memory pool. It should be noted that not every memory pool will be deleted; that depends on the designer's needs, but used memory blocks should always be released.
+
+![Related Interfaces of Memory Pool](figures/08mempool_ops.png)
+
+#### Create and Delete Memory Pool
+
+To create a memory pool, a memory pool object is created first and then a memory buffer is allocated for it from the heap. Creating a memory pool is the prerequisite for allocating and releasing memory blocks from that pool.
After the memory pool is created, threads can then perform operations on it such as applying for and releasing memory blocks. To create a memory pool, use the following function interface; it returns the memory pool object that has been created.
+
+```c
+rt_mp_t rt_mp_create(const char* name,
+                     rt_size_t block_count,
+                     rt_size_t block_size);
+```
+
+This function interface creates a memory pool that matches the required number and size of memory blocks. The creation succeeds if system resources allow it (most importantly, memory heap resources). When creating a memory pool, you need to give it a name. The kernel then requests a memory pool object from the system, allocates from the memory heap a memory buffer whose size is calculated from the number and size of the blocks, initializes the memory pool object, and finally organizes the successfully allocated buffer into a free linked list used for allocation. The following table describes the input parameters and return values for this function:
+
+Input parameters and return values for rt_mp_create()
+
+|**Parameters** |**Description** |
+|--------------|--------------------|
+| name | Name of the memory pool |
+| block_count | Number of memory blocks |
+| block_size | Size of each memory block |
+|**Return** | —— |
+| Handle of the memory pool | Creation of the memory pool object succeeded |
+| RT_NULL | Creation of the memory pool failed |
+
+Deleting a memory pool deletes the memory pool object and releases the requested memory. Use the following function interface:
+
+```c
+rt_err_t rt_mp_delete(rt_mp_t mp);
+```
+
+When a memory pool is deleted, all threads waiting on the memory pool object are first awakened (they return -RT_ERROR), then the memory pool data storage area allocated from the memory heap is released, and the memory pool object is deleted. The following table describes the input parameters and return values for this function:
+
+Input parameters and return values of rt_mp_delete()
+
+|**Parameters**|**Description** |
+|----------|-----------------------------------|
+| mp | Memory pool object handle |
+|**Return**| —— |
+| RT_EOK | Deletion successful |
+
+#### Initialize and Detach Memory Pool
+
+Memory pool initialization is similar to memory pool creation, except that initialization is used for static memory management: the memory pool control block comes from a static object that the user allocates in the system. In addition, unlike creation, the memory space used by the memory pool object here is a buffer specified by the user; the user passes the pointer of this buffer to the memory pool control block, and the rest of the initialization is the same as memory pool creation. The function interface is as follows:
+
+```c
+rt_err_t rt_mp_init(rt_mp_t mp,
+                    const char* name,
+                    void *start,
+                    rt_size_t size,
+                    rt_size_t block_size);
+```
+
+When initializing the memory pool, pass to the kernel the memory pool object to be initialized, the memory space to be used by the memory pool, the total size of that space, and the size of each memory block, and assign a name to the memory pool. In this way, the kernel can initialize the memory pool and organize the memory space into a free block linked list used for allocation.
The following table describes the input parameters and return values for this function: + +Input parameters and return values of rt_mp_init() + +|**Parameters** |**Description** | +|-------------|--------------------| +| mp | memory pool object | +| name | memory pool name | +| start | starting address of memory pool | +| size | memory pool data area size | +| block_size | memory pool size | +|**Return** | —— | +| RT_EOK | initialization successful | +| \- RT_ERROR | Fail | + +The number of memory pool blocks = size / (block_size + 4-byte, linked list pointer size), the calculation result needs to be rounded (an integer). + +For example, the size of the memory pool data area is set to 4096 bytes, and the memory block size block_size is set to 80 bytes; then the number of memory blocks applied is 4096/(80+4)=48. + +Detaching the memory pool means the memory pool object will be detached from the kernel object manager. Use the following function interface to detach the memory pool: + +```c +rt_err_t rt_mp_detach(rt_mp_t mp); +``` + +After using this function interface, the kernel wakes up all threads waiting on the memory pool object and then detaches the memory pool object from the kernel object manager. The following table describes the input parameters and return values for this function: + +Input parameters and return values for rt_mp_detach() + +|**Parameters**|**Description** | +|----------|------------| +| mp | memory pool object | +|**Return**| —— | +| RT_EOK | Successful | + +#### Allocate and Release Memory Block + +To allocate a memory block from the specified memory pool, use the following interface: + +```c +void *rt_mp_alloc (rt_mp_t mp, rt_int32_t time); +``` + +The time parameter means the timeout period for applying for allocation of memory blocks. If there is a memory block available in the memory pool, remove a memory block from the free linked list of the memory pool, reduce the number of free blocks and return this memory block; if there is no free memory block in the memory pool, determine the timeout time setting: if the timeout period is set to zero, the empty memory block is immediately returned; if the waiting time is greater than zero, the current thread is suspended on the memory pool object until there is free memory block available in the memory pool, or the waiting time elapses. The following table describes the input parameters and return values for this function: + +Input parameters and return values of rt_mp_alloc() + +|**Parameters** |**Description** | +|------------------|------------| +| mp | Memory pool object | +| time | Timeout | +|**Return** | —— | +| Allocated memory block address | Successful | +| RT_NULL | Fail | + +Any memory block must be released after it has been used. Otherwise, memory leaks will occur. The memory block is released using the following interface: + +```c +void rt_mp_free (void *block); +``` + +When using the function interface, firstly, the memory pool object of (or belongs to) the memory block will be calculated by the pointer of the memory block that needs to be released. Secondly, the number of available memory blocks of the memory pool object will be increased. Thirdly, the released memory block to the linked list of free memory blocks will be added. Then, it will be determined whether there is a suspended thread on the memory pool object, if so, the first thread on the suspended thread linked list will be awakened. 
The following table describes the input parameters for this function:
+
+Input parameters of rt_mp_free()
+
+|**Parameters**|**Description** |
+|----------|------------|
+| block | Memory block pointer |
+
+### Memory Pool Application Example
+
+This is a static memory pool application routine. It creates a static memory pool object and two dynamic threads: one thread tries to obtain memory blocks from the memory pool, and the other thread releases memory blocks, as shown in the following code:
+
+Memory pool usage example
+
+```c
+#include <rtthread.h>
+
+static rt_uint8_t *ptr[50];
+static rt_uint8_t mempool[4096];
+static struct rt_mempool mp;
+
+#define THREAD_PRIORITY      25
+#define THREAD_STACK_SIZE    512
+#define THREAD_TIMESLICE     5
+
+/* pointers to the thread control blocks */
+static rt_thread_t tid1 = RT_NULL;
+static rt_thread_t tid2 = RT_NULL;
+
+/* thread 1 entry */
+static void thread1_mp_alloc(void *parameter)
+{
+    int i;
+    for (i = 0 ; i < 50 ; i++)
+    {
+        if (ptr[i] == RT_NULL)
+        {
+            /* try to apply for a memory block 50 times; when no memory block
+               is available, thread 1 suspends and thread 2 runs */
+            ptr[i] = rt_mp_alloc(&mp, RT_WAITING_FOREVER);
+            if (ptr[i] != RT_NULL)
+                rt_kprintf("allocate No.%d\n", i);
+        }
+    }
+}
+
+/* thread 2 entry; thread 2 has a lower priority than thread 1, so thread 1 is executed first */
+static void thread2_mp_release(void *parameter)
+{
+    int i;
+
+    rt_kprintf("thread2 try to release block\n");
+    for (i = 0; i < 50 ; i++)
+    {
+        /* release all successfully allocated memory blocks */
+        if (ptr[i] != RT_NULL)
+        {
+            rt_kprintf("release block %d\n", i);
+            rt_mp_free(ptr[i]);
+            ptr[i] = RT_NULL;
+        }
+    }
+}
+
+int mempool_sample(void)
+{
+    int i;
+    for (i = 0; i < 50; i ++) ptr[i] = RT_NULL;
+
+    /* initialize the memory pool object */
+    rt_mp_init(&mp, "mp1", &mempool[0], sizeof(mempool), 80);
+
+    /* create thread 1: apply for memory blocks from the pool */
+    tid1 = rt_thread_create("thread1", thread1_mp_alloc, RT_NULL,
+                            THREAD_STACK_SIZE,
+                            THREAD_PRIORITY, THREAD_TIMESLICE);
+    if (tid1 != RT_NULL)
+        rt_thread_startup(tid1);
+
+    /* create thread 2: release memory blocks back to the pool */
+    tid2 = rt_thread_create("thread2", thread2_mp_release, RT_NULL,
+                            THREAD_STACK_SIZE,
+                            THREAD_PRIORITY + 1, THREAD_TIMESLICE);
+    if (tid2 != RT_NULL)
+        rt_thread_startup(tid2);
+
+    return 0;
+}
+
+/* export to the msh command list */
+MSH_CMD_EXPORT(mempool_sample, mempool sample);
+```
+
+The simulation results are as follows:
+
+```
+ \ | /
+- RT -     Thread Operating System
+ / | \     3.1.0 build Aug 24 2018
+ 2006 - 2018 Copyright by rt-thread team
+msh >mempool_sample
+msh >allocate No.0
+allocate No.1
+allocate No.2
+allocate No.3
+allocate No.4
+…
+allocate No.46
+allocate No.47
+thread2 try to release block
+release block 0
+allocate No.48
+release block 1
+allocate No.49
+release block 2
+release block 3
+release block 4
+release block 5
+…
+release block 47
+release block 48
+release block 49
+```
+
+This routine creates 4096 / (80 + 4) = 48 memory blocks when initializing the memory pool object.
+
+① After thread 1 has applied for 48 memory blocks, the pool is exhausted, and a block must be released elsewhere before another one can be allocated; at this point thread 1 applies for one more block in the same way, and because it cannot be allocated, thread 1 suspends;
+
+② Thread 2 starts to release memory blocks. When thread 2 releases a memory block, a free memory block becomes available again.
Wake up thread 1 to apply for memory, and then apply again after the application is successful, thread 1 suspends again, and repeats ②; + +③Thread 2 continues to release the remaining memory blocks, release is complete. + diff --git a/documentation/network/figures/net-hello.png b/documentation/network/figures/net-hello.png new file mode 100644 index 0000000000..aec33b859c Binary files /dev/null and b/documentation/network/figures/net-hello.png differ diff --git a/documentation/network/figures/net-layer.png b/documentation/network/figures/net-layer.png new file mode 100644 index 0000000000..9e896f44cf Binary files /dev/null and b/documentation/network/figures/net-layer.png differ diff --git a/documentation/network/figures/net-osi.png b/documentation/network/figures/net-osi.png new file mode 100644 index 0000000000..aecb790e2d Binary files /dev/null and b/documentation/network/figures/net-osi.png differ diff --git a/documentation/network/figures/net-recv.png b/documentation/network/figures/net-recv.png new file mode 100644 index 0000000000..23461838a3 Binary files /dev/null and b/documentation/network/figures/net-recv.png differ diff --git a/documentation/network/figures/net-send.png b/documentation/network/figures/net-send.png new file mode 100644 index 0000000000..aa4777d544 Binary files /dev/null and b/documentation/network/figures/net-send.png differ diff --git a/documentation/network/figures/net-tcp-s.png b/documentation/network/figures/net-tcp-s.png new file mode 100644 index 0000000000..5d07186f96 Binary files /dev/null and b/documentation/network/figures/net-tcp-s.png differ diff --git a/documentation/network/figures/net-tcp.png b/documentation/network/figures/net-tcp.png new file mode 100644 index 0000000000..c13a25c308 Binary files /dev/null and b/documentation/network/figures/net-tcp.png differ diff --git a/documentation/network/figures/net-udp-client.png b/documentation/network/figures/net-udp-client.png new file mode 100644 index 0000000000..c0e916b421 Binary files /dev/null and b/documentation/network/figures/net-udp-client.png differ diff --git a/documentation/network/figures/net-udp-server.png b/documentation/network/figures/net-udp-server.png new file mode 100644 index 0000000000..79d1513c2a Binary files /dev/null and b/documentation/network/figures/net-udp-server.png differ diff --git a/documentation/network/figures/net-udp.png b/documentation/network/figures/net-udp.png new file mode 100644 index 0000000000..cea06a21e3 Binary files /dev/null and b/documentation/network/figures/net-udp.png differ diff --git a/documentation/network/network.md b/documentation/network/network.md new file mode 100644 index 0000000000..3499cec4f8 --- /dev/null +++ b/documentation/network/network.md @@ -0,0 +1,779 @@ +# Network Framework + +With the popularity of the Internet, people's lives are increasingly dependent on the application of the network. More and more products need to connect to the Internet, and device networking has become a trend. To achieve the connection between the device and the network, you need to follow the TCP/IP protocol, you can run the network protocol stack on the device to connect to the network, or you can use devices (chips with hardware network protocol stack interfaces) to connect to the Internet. + +When the device is connected to the network, it is like plugging in the wings. You can use the network to upload data in real time. 
The user can see the current running status and collected data of the device in a hundred thousand miles, and remotely control the device to complete specific tasks. You can also play online music, make online calls, and act as a LAN storage server through your device. + +This chapter will explain the related content of the RT-Thread network framework, and introduce you to the concept, function and usage of the network framework. After reading this chapter, you will be familiar with the concept and implementation principle of the RT-Thread network framework and familiar with network programming using Socket API. + +## TCP/IP Introduction to Network Protocols + +TCP/IP is short for Transmission Control Protocol/Internet Protocol. It is not a single protocol, but a general term for a protocol family. It includes IP protocol, ICMP protocol, TCP protocol, and http and ftp, pop3, https protocol, etc., which define how electronic devices connect to the Internet and the standards by which data is transferred between them. + +### OSI Reference Model + +OSI (Open System Interconnect), which is an open system interconnection. Generally referred to as the OSI reference model, it is a network interconnection model studied by the ISO (International Organization for Standardization) in 1985. The architecture standard defines a seven-layer framework for the network interconnection (physical layer, data link layer, network layer, transport layer, session layer, presentation layer, and application layer), that is, the ISO open system interconnection reference model. The first to third layers belong to the lower three layers of the OSI Reference Model and are responsible for creating links for network communication connections; the fourth to seventh layers are the upper four layers of the OSI reference model and is responsible for end-to-end data communication. The capabilities of each layer are further detailed in this framework to achieve interconnectivity, interoperability, and application portability in an open system environment. + +### TCP/IP Reference Model + +The TCP/IP communication protocol uses a four-layer hierarchical structure, and each layer calls the network provided by its next layer to fulfill its own needs. The four layers are: + +* **Application layer**: Different types of network applications have different communication rules, so the application layer protocols are various, such as Simple Mail Transfer Protocol (SMTP), File Transfer Protocol (FTP), and network remote access protocol (Telnet). +* **Transport layer**: In this layer, it provides data transfer services between nodes, such as Transmission Control Protocol (TCP), User Datagram Protocol (UDP), etc. TCP and UDP add data to the data packet and transmit it to the next layer, this layer is responsible for transmitting data and determining that the data has been delivered and received. +* **Network layer**: responsible for providing basic data packet transfer functions, so that each packet can reach the destination host (but not check whether it is received correctly), such as Internet Protocol (IP). +* **Network interface layer**: Management of actual network media, defining how to use actual networks (such as Ethernet, Serial Line, etc.) to transmit data. 
+ +### Difference between TCP/IP Reference Model and OSI Reference Model + +The following figure shows the TCP/IP reference model and the OSI reference model diagram: + +![TCP/IP Reference Model and OSI Reference Model](figures/net-osi.png) + +Both the OSI reference model and the TCP/IP reference model are hierarchical, based on the concept of a separate protocol stack. The OSI reference model has 7 layers, while the TCP/IP reference model has only 4 layers, that is, the TCP/IP reference model has no presentation layer and session layer, and the data link layer and physical layer are merged into a network interface layer. However, there is a certain correspondence between the two layers. Due to the complexity of the OSI system and the design prior to implementation, many designs are too ideal and not very convenient for software implementation. Therefore, there are not many systems that fully implement the OSI reference model, and the scope of application is limited. The TCP/IP reference model was first implemented in a computer system. It has a stable implementation on UNIX and Windows platforms, and provides a simple and convenient programming interface (API) on which a wide range of applications are developed. The TCP/IP reference model has become the international standard and industry standard for Internet connectivity. + +### IP Address + +The IP address refers to the Internet Protocol Address (also translated as the Internet Protocol Address) and is a uniform address format that assigns a logical address to each network and each host on the Internet to mask physical address differences provided by Internet Protocol. The common LAN IP address is 192.168.X.X. + +### Subnet Mask + +Subnet mask (also called netmask, address mask), which is used to indicate which bits of an IP address identify the subnet where the host is located, and which bits are identified as the bit mask of the host. The subnet mask cannot exist alone, it must be used in conjunction with an IP address. Subnet mask has only one effect, which is to divide an IP address into two parts: network address and host address. The subnet mask is the bit of 1, the IP address is the network address, the subnet mask is the bit of 0, and the IP address is the host address. Taking the IP address 192.168.1.10 and the subnet mask 255.255.255.0 as an example, the first 24 bits of the subnet mask (converting decimal to binary) is 1, so the first 24 bits of the IP address 192.168.1 represent the network address. The remaining 0 is the host address. + +### MAC Address + +MAC (figures Access Control or Medium Access Control) address, which is translated as media access control, or physical address, hardware address, used to define the location of network devices. In OSI model, the third layer network Layer is responsible for IP address, the second layer data link layer is responsible for the MAC address. A host will have at least one MAC address. + +## Introduction to the Network Framework of RT-Thread + +In order to support various network protocol stacks, RT-Thread has developed a **SAL** component, the full name of the **Socket abstraction layer**. RT-Thread can seamlessly access various protocol stacks, including several commonly used TCP/IP protocol stack, such as the LwIP protocol stack commonly used in embedded development and the AT Socket protocol stack component developed by RT-Thread, which complete the conversion of data from the network layer to the transport layer. 
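+
+SAL and the protocol stack beneath it are normally enabled through menuconfig rather than by editing headers by hand. As a rough sketch, with LwIP selected as the underlying stack, the generated `rtconfig.h` would contain entries along the following lines (the exact set of options depends on the BSP configuration):
+
+```c
+/* Sketch only: these macros are normally generated by menuconfig */
+#define RT_USING_SAL            /* enable the socket abstraction layer */
+#define SAL_USING_LWIP          /* let SAL forward socket calls to LwIP */
+#define SAL_USING_POSIX         /* expose the POSIX-style socket interface */
+#define RT_USING_LWIP           /* enable the LwIP component itself */
+```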
+ +The main features of the RT-Thread network framework are as follows: + +* Support for standard network sockets BSD Socket API, support for poll/select +* Abstract, unified multiple network protocol stack interfaces +* Support various physical network cards, network communication module hardware +* The resource occupancy of SAL socket abstraction layer component is small: ROM 2.8K and RAM 0.6K. + +RT-Thread's network framework adopts a layered design with four layers, each layer has different responsibilities. The following figure shows the RT-Thread network framework structure: + +![RT-Thread network framework structure](figures/net-layer.png) + +The network framework provides a standard BSD Socket interface to user applications. Developers use the BSD Socket interface to operate without worrying about how the underlying network is implemented, and no need to care which network protocol stack the network data passes through. The socket abstraction layer provides the upper application layer. The interfaces are: `accept`, `connect`, `send`, `recv`, etc. + +Below the SAL layer is the protocol stack layer. The main protocol stacks supported in the current network framework are as follows: + +* **LwIP** is an open source TCP/IP protocol stack implementation that reduces RAM usage while maintaining the main functionality of the TCP/IP protocol, making the LwIP protocol stack ideal for use in embedded systems. +* **AT Socket** is a component for modules that support AT instructions. The AT command uses a standard serial port for data transmission and reception, and converts complex device communication methods into simple serial port programming, which greatly simplifies the hardware design and software development costs of the product, which makes almost all network modules such as GPRS, 3G/4G, NB-IoT, Bluetooth, WiFi, GPS and other modules are very convenient to access the RT-Thread network framework, develop network applications through the standard BSD Socket method, greatly simplifying the development of upper-layer applications. +* **Socket CAN** is a way of programming CAN, it is easy to use and easy to program. By accessing the SAL layer, developers can implement Socket CAN programming on RT-Thread. + +Below the protocol stack layer is an abstract device layer that is connected to various network protocol stacks by abstracting hardware devices into Ethernet devices or AT devices. + +The bottom layer is a variety of network chips or modules (for example: Ethernet chips with built-in protocol stack such as W5500/CH395, WiFi module with AT command, GPRS module, NB-IoT module, etc.). These hardware modules are the carrier that truly performs the network communication function and is responsible for communicating with various physical networks. + +In general, the RT-Thread network framework allows developers to only care about and use the standard BSD Socket network interface for network application development, without concern for the underlying specific network protocol stack type and implementation, greatly improving system compatibility and convenience. Developers have completed the development of network-related applications, and have greatly improved the compatibility of RT-Thread in different areas of the Internet of Things. 
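+
+To make the point concrete, the fragment below sketches what such stack-independent application code can look like: it only uses the standard BSD Socket calls described later in this chapter, so the same code runs whether SAL forwards it to LwIP or to AT Socket. The header names are placeholders and error handling is reduced to a minimum.
+
+```c
+#include <rtthread.h>
+#include <sys/socket.h>   /* assumed SAL/POSIX socket header */
+#include <netdb.h>
+
+/* Sketch: connect to url:port, send a greeting and close the connection. */
+static void say_hello(const char *url, int port)
+{
+    struct sockaddr_in addr;
+    struct hostent *host;
+    int sock;
+
+    host = gethostbyname(url);               /* resolve the server address */
+    if (host == RT_NULL) return;
+
+    sock = socket(AF_INET, SOCK_STREAM, 0);  /* same call whatever stack is underneath */
+    if (sock < 0) return;
+
+    addr.sin_family = AF_INET;
+    addr.sin_port   = htons(port);
+    addr.sin_addr   = *((struct in_addr *)host->h_addr);
+    rt_memset(&(addr.sin_zero), 0, sizeof(addr.sin_zero));
+
+    if (connect(sock, (struct sockaddr *)&addr, sizeof(struct sockaddr)) == 0)
+        send(sock, "hello", 5, 0);           /* SAL routes the call to the registered stack */
+
+    closesocket(sock);
+}
+```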
+ +In addition, based on the network framework, RT-Thread provides a large number of network software packages, which are various network applications based on the SAL layer, such as **Paho MQTT**, **WebClient**, **cJSON**, **netutils**, etc., which can be obtained from the online package management center. These software packages are web application tools. Using them can greatly simplify the development of network applications and shorten the network application development cycle. At present, there are more than a dozen network software packages. The following table lists some of the network software packages currently supported by RT-Thread, and the number of software packages is still increasing. + +| **Package Name** | **Description** | +| ---------------- | ------------------------------------------------------------ | +| Paho MQTT | Based on Eclipse open source Paho MQTT, it has done a lot of functions and performance optimization, such as: increased automatic reconnection after disconnection, pipe model, support for non-blocking API, support for TLS encrypted transmission, etc. | +| WebClient | Easy-to-use HTTP client with support for HTTP GET/POST and other common request functions, support for HTTPS, breakpoint retransmission, etc. | +| mongoose | Embedded Web server network library, similar to Nginx in the embedded world. Licensing is not friendly enough, business needs to be charged | +| WebTerminal | Access Finsh/MSH Shell packages in the browser or on the mobile | +| cJSON | Ultra-lightweight JSON parsing library | +| ljson | Json to struct parsing, output library | +| ezXML | XML file parsing library, currently does not support parsing XML data | +| nanopb | Protocol Buffers format data parsing library, Protocol Buffers format takes up less resources than JSON, XML format resources | +| GAgent | Software package for accessing Gizwits Cloud Platform | +| Marvell WiFi | Marvell WiFi driver | +| Wiced WiFi | WiFi driver for Wiced interface | +| CoAP | Porting libcoap's CoAP communication package | +| nopoll | Ported open source WebSocket communication package | +| netutils | A collection of useful network debugging gadgets, including: ping, TFTP, iperf, NetIO, NTP, Telnet, etc. | +| OneNet | Software for accessing China Mobile OneNet Cloud | + +## Network Framework Workflow + +Using the RT-Thread network framework, you first need to initialize the SAL, then register various network protocol clusters to ensure that the application can communicate using the socket network socket interface. This section mainly uses LwIP as an example. + +### Register the Network Protocol Cluster + +First use the `sal_init()` interface to initialize resources such as mutex locks used in the component. The interface looks like this: + +```c +int sal_init(void); +``` + +After the SAL is initialized, then use the the `sal_proto_family_register()` interface to register network protocol cluster, for example, the LwIP network protocol cluster is registered to the SAL. The sample code is as follows: + +```c +static const struct proto_family LwIP_inet_family_ops = { + "LwIP", + AF_INET, + AF_INET, + inet_create, + LwIP_gethostbyname, + LwIP_gethostbyname_r, + LwIP_freeaddrinfo, + LwIP_getaddrinfo, +}; + +int LwIP_inet_init(void) +{ + sal_proto_family_register(&LwIP_inet_family_ops); + + return 0; +} +``` + +`AF_INET` stands for IPv4 address, for example 127.0.0.1; `AF` is short for "Address Family" and `INET` is short for "Internet". 
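+
+In a BSP, `LwIP_inet_init()` is typically not called by hand; it can be hooked into RT-Thread's automatic initialization so that the protocol family is registered during startup. A minimal sketch, assuming the component initialization stage is suitable for the board:
+
+```c
+/* Sketch: let the protocol family register itself during system startup */
+INIT_COMPONENT_EXPORT(LwIP_inet_init);
+```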
+ +The `sal_proto_family_register()` interface is defined as follows: + +``` +int sal_proto_family_register(const struct proto_family *pf); +``` + +|**Parameters**|**Description** | +|----------|------------------| +| pf | Protocol cluster structure pointer | +|**Return**|**——** | +| 0 | registration success | +| -1 | registration failed | + +### Network Data Receiving Process + +After the LwIP is registered to the SAL, the application can send and receive network data through the network socket interface. In LwIP, several main threads are created, and they are `tcpip` thread, `erx` receiving thread and `etx` sending thread. The network data receiving process is as shown in the following picture. The application receives data by calling the standard socket interface `recv()` with blocking mode. When the Ethernet hardware device receives the network data packet, it stores the packet in the receiving buffer, and then sends an email to notify the `erx` thread that the data arrives through the Ethernet interrupt program. The `erx` thread applies for the `pbuf` memory block according to the received data length and put the data into the pbuf's `payload` data, then send the `pbuf` memory block to the `tcpip` thread via mailbox, and the `tcpip` thread returns the data to the application that is blocking the receiving data. + +![Data receiving function call flow chart](figures/net-recv.png) + +### Network Data Sending Process + +The network data sending process is shown in the figure below. When there is data to send, the application calls the standard network socket interface `send()` to hand the data to the `tcpip` thread. The `tcpip` thread sends a message to wake up the `etx` thread. The `etx` thread first determines if the Ethernet is sending data. If data is not being sent, it will put the data to be sent into the send buffer, and then send the data through the Ethernet device. If data is being sent, the `etx` thread suspends itself until the Ethernet device is idle before sending the data out. + +![Data sending function call flow chart](figures/net-send.png) + +## Network Socket Programming + +The application uses Socket (network socket) interface programming to implement network communication functions. Socket is a set of application program interface (API), which shields the communication details of each protocol, so that the application does not need to pay attention to the protocol itself, directly using the interfaces provide by socket to communicate between different hosts interconnected. + +### TCP socket Communication Process + +TCP(Tranfer Control Protocol) is a connection-oriented protocol to ensure reliable data transmission. Through the TCP protocol transmission, a sequential error-free data stream is obtained. The TCP-based socket programming flow diagram is shown in the following figure. A connection must be established between the sender and the receiver's two sockets in order to communicate on the basis of the TCP protocol. When a socket (usually a server socket) waits for a connection to be established. Another socket can request a connection. Once the two sockets are connected, they can perform two-way data transmission, and both sides can send or receive data. A TCP connection is a reliable connection that guarantees that packets arrive in order, and if a packet loss occurs, the packet is automatically resent. + +For example, TCP is equivalent to calling in life. When you call the other party, you must wait for the other party to answer. 
Only when the other party answers your call can he/she establish a connection with you. The two parties can talk and pass information to each other. Of course, the information passed at this time is reliable, because the other party can't hear what you said and can ask you to repeat the content again. When either party on the call wants to end the call, they will bid farewell to the other party and wait until the other party bids farewell to them before they hang up and end the call. + +![TCP-based socket programming flow chart](figures/net-tcp.png) + +### UDP socket Communication Process + +UDP is short for User Datagram Protocol. It is a connectionless protocol. Each datagram is a separate information, including the complete source address and destination address. It is transmitted to the destination on the network in any possible path. Therefore, whether the destination can be reached, the time to reach the destination, and the correctness of the content cannot be guaranteed. The UDP-based socket programming flow is shown in the following figure. + +![UDP-based socket programming flow](figures/net-udp.png) + +For example, UDP is equivalent to the walkie-talkie communication in life. After you set up the channel, you can directly say the information you want to express. The data is sent out by the walkie-talkie, but you don't know if your message has been received by others. By the way, unless someone else responds to you with a walkie-talkie. So this method is not reliable. + +### Create a Socket + +Before communicating, the communicating parties first use the `socket()` interface to create a socket, assigning a socket descriptor and its resources based on the specified address family, data type, and protocol. The interface is as follows: + +```c +int socket(int domain, int type, int protocol); +``` + +|**Parameters**|**Description** | +|----------|--------------------------------------------------| +| domain | Protocol family | +| type | Specify the communication type, including values SOCK_STREAM and SOCK_DGRAM. | +| protocol | Protocol allows to specify a protocol for the socket, which is set to 0 by default. | +|**Return**|**——** | +| >=0 | Successful, returns an integer representing the socket descriptor | +| -1 | Fail | + +**Communication types** include SOCK_STREAM and SOCK_DGRAM. + +**SOCK_STREAM** indicates connection-oriented TCP data transfer. Data can arrive at another computer without any errors. If it is damaged or lost, it can be resent, but it is relatively slow. + +**SOCK_DGRAM** indicates the connectionless UDP data transfer method. The computer only transmits data and does not perform data verification. If the data is damaged during transmission or does not reach another computer, there is no way to remedy it. In other words, if the data is wrong, it is wrong and cannot be retransmitted. Because SOCK_DGRAM does less validation work, it is more efficient than SOCK_STREAM. + +The sample code for creating a TCP type socket is as follows: + +```c + /* Create a socket, type is SOCKET_STREAM,TCP type */ + if ((sock = socket(AF_INET, SOCK_STREAM, 0)) == -1) + { + /* failed to create socket*/ + rt_kprintf("Socket error\n"); + + return; + } +``` + +### Binding Socket + +A binding socket is used to bind a port number and an IP address to a specified socket. When using socket() to create a socket, only the protocol family is given, and no address is assigned. Before the socket receives a connection from another host, it must bind it with an address and port number using bind(). 
The interface is as follows: + +```c +int bind(int s, const struct sockaddr *name, socklen_t namelen); +``` + +|**Parameters**|**Description** | +|----------|--------------------------------------------| +| S | Socket descriptor | +| name | Pointer to the sockaddr structure representing the address to be bound | +| namelen | Length of sockaddr structure | +|**Return**|**——** | +| 0 | Successful | +| -1 | Fail | + +### Establishing a TCP Connection + +For server-side programs, after using `bind()` to bind the socket, you also need to use the `listen()` function to make the socket enter the passive listening state, and then call the `accept()` function to respond to the client at any time. + +#### Listening Socket + +The listening socket is used by the TCP server to listen for the specified socket connection. The interface is as follows: + +```c +int listen(int s, int backlog); +``` + +|**Parameters**|**Description** | +|----------|--------------------------------| +| s | Socket descriptor | +| backlog | Indicates the maximum number of connections that can be waited at a time | +|**Return**|**——** | +| 0 | Successful | +| -1 | Fail | + +#### Accept the Connection + +When the application listens for connections from other clients, the connection must be initialized with the `accept()` function, which creates a new socket for each connection and removes the connection from the listen queue. The interface is as follows: + +```c +int accept(int s, struct sockaddr *addr, socklen_t *addrlen); +``` + +|**Parameters**|**Description** | +|----------|--------------------------------| +| s | Socket descriptor | +| addr | Client device address information | +| addrlen | Client device address structure length | +|**Return**|**——** | +| >=0 | Successful, return the newly created socket descriptor | +| -1 | Fail | + +#### Establish Connection + +Used by the client to establish a connection with the specified server. The interface is as follows: + +``` +int connect(int s, const struct sockaddr *name, socklen_t namelen); +``` + +|**Parameters**|**Description** | +|----------|------------------------| +| s | Socket descriptor | +| name | Server address information | +| namelen | Server address structure length | +|**Return**|**——** | +| 0 | Successful | +| -1 | Fail | + +When the client connects to the server, first set the server address and then use the `connect()` function to connect. The sample code is as follows: + +```c +struct sockaddr_in server_addr; +/* Initialize the pre-connected server address */ +server_addr.sin_family = AF_INET; +server_addr.sin_port = htons(port); +server_addr.sin_addr = *((struct in_addr *)host->h_addr); +rt_memset(&(server_addr.sin_zero), 0, sizeof(server_addr.sin_zero)); + +/* Connect to the server */ +if (connect(sock, (struct sockaddr *)&server_addr, sizeof(struct sockaddr)) == -1) +{ + /* Connection failed */ + closesocket(sock); + + return; +} +``` + +### Data Transmission + +TCP and UDP have different data transmission methods. TCP needs to establish a connection before data transmission, use `send()` function for data transmission, use `recv()` function for data reception, and UDP does not need to establish connection. It uses `sendto()` function sends data and receives data using the `recvfrom()` function. + +#### TCP Data Transmission + +After the TCP connection is established, the data is sent using the `send()` function. 
The interface is as follows: + +```c +int send(int s, const void *dataptr, size_t size, int flags); +``` + +|**Parameters**|**Description** | +|----------|----------------------------| +| s | Socket descriptor | +| dataptr | The data pointer to send | +| size | Length of data sent | +| flags | Flag, generally 0 | +|**Return**|**——** | +| >0 | Successful, return the length of the sent data | +| <=0 | Failed | + +#### TCP Data Reception + +After the TCP connection is established, use `recv()` to receive the data. The interface is as follows: + +```c +int recv(int s, void *mem, size_t len, int flags); +``` + +|**Parameters**|**Description** | +|----------|----------------------------| +| s | Socket descriptor | +| mem | Received data pointer | +| len | Received data length | +| flags | Flag, generally 0 | +|**Return**|**Description** | +| >0 | Successful, return the length of the received data | +| =0 | The destination address has been transferred and the connection is closed | +| <0 | Fail | + +#### UDP Data transmission + +In the case where a connection is not established, you can use the `sendto()` function to send UDP data to the specified destination address, as shown below: + +```c +int sendto(int s, const void *dataptr, size_t size, int flags, + const struct sockaddr *to, socklen_t tolen); +``` + +|**Parameters**|**Description** | +|----------|----------------------------| +| s | Socket descriptor | +| dataptr | Data pointer sent | +| size | Length of data sent | +| flags | Flag, generally 0 | +| to | Target address structure pointer | +| tolen | Target address structure length | +|**Return**|**——** | +| >0 | Successful, return the length of the sent data | +| <=0 | Fail | + +#### UDP Data Reception + +To receive UDP data, use the `recvfrom()` function, and the interface is: + +```c +int recvfrom(int s, void *mem, size_t len, int flags, + struct sockaddr *from, socklen_t *fromlen); +``` + +|**Parameters**|**Description** | +|----------|----------------------------| +| s | Socket descriptor | +| mem | Received data pointer | +| len | Received data length | +| flags | Flag, generally 0 | +| from | Receive address structure pointer | +| fromlen | Receive address structure length | +|**Return**|**——** | +| >0 | Successful, return the length of the received data | +| 0 | The receiving address has been transferred and the connection is closed | +| <0 | Fail | + +### Close Network Connection + +After the network communication is over, you need to close the network connection. There are two ways to use `closesocket()` and `shutdown()`. + +The `closesocket()` interface is used to close an existing socket connection, release the socket resource, clear the socket descriptor from memory, and then the socket could not be used again. The connection and cache associated with the socket are also lost the meaning, the TCP protocol will automatically close the connection. The interface is as follows: + +```c +int closesocket(int s); +``` + +|**Parameters**|**Description** | +|----------|--------------| +| s | Socket descriptor | +|**Return**|**——** | +| 0 | Successful | +| -1 | Fail | + +Network connections can also be turned off using the `shutdown()` function. The TCP connection is full-duplex. You can use the `shutdown()` function to implement a half-close. It can close the read or write operation of the connection, or both ends, but it does not release the socket resource. 
The interface is as follows: + +```c +int shutdown(int s, int how); +``` + +|**Parameters**|**Description** | +|----------|-----------------------------| +| s | Socket descriptor | +| how | SHUT_RD closes the receiving end of the connection and no longer receives data.
SHUT_WR closes the sending end of the connection and no longer sends data.
SHUT_RDWR is closed at both ends. | +|**Return**|**——** | +| 0 | Successful | +| -1 | Fail | + +## Network Function Configuration + +The main functional configuration options of the network framework are shown in the following table, which can be configured according to different functional requirements: + +SAL component configuration options: + +|**Macro Definition** |**Value Type**|**Description** | +|------------------------|--------------|--------------------| +| RT_USING_SAL | Boolean | Enable SAL | +| SAL_USING_LWIP | Boolean | Enable LwIP component | +| SAL_USING_AT | Boolean | Enable the AT component | +| SAL_USING_POSIX | Boolean | Enable POSIX interface | +| SAL_PROTO_FAMILIES_NUM | Integer | the maximum number of protocol families supported | + +LwIP Configuration options: + +|**Macro Definition** |**Value Type**|**Description** | +|-----------------------------|--------------|----------------------| +| RT_USING_LWIP | Boolean | Enable LwIP protocol | +| RT_USING_LWIP_IPV6 | Boolean | Enable IPV6 protocol | +| RT_LWIP_IGMP | Boolean | Enable the IGMP protocol | +| RT_LWIP_ICMP | Boolean | Enable the ICMP protocol | +| RT_LWIP_SNMP | Boolean | Enable the SNMP protocol | +| RT_LWIP_DNS | Boolean | Enable DHCP function | +| RT_LWIP_DHCP | Boolean | Enable DHCP function | +| IP_SOF_BROADCAST | Integer | filtering roadcasting Packets Sended by IP | +| IP_SOF_BROADCAST_RECV | Integer | filtering roadcasting Packets received by IP | +| RT_LWIP_IPADDR | String | IP address | +| RT_LWIP_GWADDR | String | Gateway address | +| RT_LWIP_MSKADDR | String | Subnet mask | +| RT_LWIP_UDP | Boolean | Enable UDP protocol | +| RT_LWIP_TCP | Boolean | Enable TCP protocol | +| RT_LWIP_RAW | Boolean | Enable RAW API | +| RT_MEMP_NUM_NETCONN | Integer | Support Numbers of network links | +| RT_LWIP_PBUF_NUM | Integer | pbuf number of memory blocks | +| RT_LWIP_RAW_PCB_NUM | Integer | Maximum number of connections for RAW | +| RT_LWIP_UDP_PCB_NUM | Integer | Maximum number of connections for UDP | +| RT_LWIP_TCP_PCB_NUM | Integer | Maximum number of connections for TCP | +| RT_LWIP_TCP_SND_BUF | Integer | TCP send buffer size | +| RT_LWIP_TCP_WND | Integer | TCP sliding window size | +| RT_LWIP_TCPTHREAD_PRIORITY | Integer | TCP thread priority | +| RT_LWIP_TCPTHREAD_MBOX_SIZE | Integer | TCP thread mailbox size | +| RT_LWIP_TCPTHREAD_STACKSIZE | Integer | TCP thread stack size | +| RT_LWIP_ETHTHREAD_PRIORITY | Integer | Receive/send thread's priority | +| RT_LWIP_ETHTHREAD_STACKSIZE | Integer | Receive/send thread's stack size | +| RT_LwIP_ETHTHREAD_MBOX_SIZE | Integer | Receive/send thread's mailbox size | + +## Network Application Example + +### View IP Address + +In the console, you can use the ifconfig command to check the network status. 
The IP address is 192.168.12.26, and the FLAGS status is LINK_UP, indicating that the network is configured: + +```c +msh >ifconfig +network interface: e0 (Default) +MTU: 1500 +MAC: 00 04 a3 12 34 56 +FLAGS: UP LINK_UP ETHARP BROADCAST IGMP +ip address: 192.168.12.26 +gw address: 192.168.10.1 +net mask : 255.255.0.0· +dns server #0: 192.168.10.1 +dns server #1: 223.5.5.5 +``` + +### Ping Network Test + +Use the ping command for network testing: + +```c +msh />ping rt-thread.org +60 bytes from 116.62.244.242 icmp_seq=0 ttl=49 time=11 ticks +60 bytes from 116.62.244.242 icmp_seq=1 ttl=49 time=10 ticks +60 bytes from 116.62.244.242 icmp_seq=2 ttl=49 time=12 ticks +60 bytes from 116.62.244.242 icmp_seq=3 ttl=49 time=10 ticks +msh />ping 192.168.10.12 +60 bytes from 192.168.10.12 icmp_seq=0 ttl=64 time=5 ticks +60 bytes from 192.168.10.12 icmp_seq=1 ttl=64 time=1 ticks +60 bytes from 192.168.10.12 icmp_seq=2 ttl=64 time=2 ticks +60 bytes from 192.168.10.12 icmp_seq=3 ttl=64 time=3 ticks +msh /> +``` + +Getting the above output indicates that the connection network is successful! + +### TCP Client Example + +After the network is successfully connected, you can run the network example, first run the TCP client example. This example will open a TCP server on the PC, open a TCP client on the IoT Board, and both parties will communicate on the network. + +In the example project, there is already a TCP client program `tcpclient_sample.c`. The function is to implement a TCP client that can receive and display the information sent from the server. If it receives the information starting with 'q' or 'Q', then exit the program directly and close the TCP client. The program exports the tcpclient command to the FinSH console. The command format is `tcpclient URL PORT`, where URL is the server address and PORT is the port number. The sample code is as follows: + +```c +/* + * Program list: tcp client + * + * This is a tcp client routine + * Export the tcpclient command to MSH + * Command call format: tcpclient URL PORT + * URL: server address PORT:: port number + * Program function: Receive and display the information sent from the server, and receive the information that starts with 'q' or 'Q' to exit the program. +*/ + +#include +#include /* To use BSD socket, you need to include the socket.h header file. 
*/ +#include +#include +#include + +#define BUFSZ 1024 + +static const char send_data[] = "This is TCP Client from RT-Thread."; /* Sending used data */ +void tcpclient(int argc, char**argv) +{ + int ret; + char *recv_data; + struct hostent *host; + int sock, bytes_received; + struct sockaddr_in server_addr; + const char *url; + int port; + + /* Received less than 3 parameters */ + if (argc < 3) + { + rt_kprintf("Usage: tcpclient URL PORT\n"); + rt_kprintf("Like: tcpclient 192.168.12.44 5000\n"); + return ; + } + + url = argv[1]; + port = strtoul(argv[2], 0, 10); + + /* Get the host address through the function entry parameter url (if it is a domain name, it will do domain name resolution) */ + host = gethostbyname(url); + + /* Allocate buffers for storing received data */ + recv_data = rt_malloc(BUFSZ); + if (recv_data == RT_NULL) + { + rt_kprintf("No memory\n"); + return; + } + + /* Create a socket of type SOCKET_STREAM, TCP type */ + if ((sock = socket(AF_INET, SOCK_STREAM, 0)) == -1) + { + /* Failed to create socket */ + rt_kprintf("Socket error\n"); + + /* Release receive buffer */ + rt_free(recv_data); + return; + } + + /* Initialize the pre-connected server address */ + server_addr.sin_family = AF_INET; + server_addr.sin_port = htons(port); + server_addr.sin_addr = *((struct in_addr *)host->h_addr); + rt_memset(&(server_addr.sin_zero), 0, sizeof(server_addr.sin_zero)); + + /* Connect to the server */ + if (connect(sock, (struct sockaddr *)&server_addr, sizeof(struct sockaddr)) == -1) + { + /* Connection failed */ + rt_kprintf("Connect fail!\n"); + closesocket(sock); + + /* Release receive buffer */ + rt_free(recv_data); + return; + } + + while (1) + { + /* Receive maximum BUFSZ-1 byte data from a sock connection */ + bytes_received = recv(sock, recv_data, BUFSZ - 1, 0); + if (bytes_received < 0) + { + /* Receive failed, close this connection */ + closesocket(sock); + rt_kprintf("\nreceived error,close the socket.\r\n"); + + /* Release receive buffer */ + rt_free(recv_data); + break; + } + else if (bytes_received == 0) + { + /* Print the recv function returns a warning message with a value of 0 */ + rt_kprintf("\nReceived warning,recv function return 0.\r\n"); + + continue; + } + + /* Received data, clear the end */ + recv_data[bytes_received] = '\0'; + + if (strncmp(recv_data, "q", 1) == 0 || strncmp(recv_data, "Q", 1) == 0) + { + /* If the initial letter is q or Q, close this connection */ + closesocket(sock); + rt_kprintf("\n got a'q'or'Q',close the socket.\r\n"); + + /* Release receive buffer */ + rt_free(recv_data); + break; + } + else + { + /* Display the received data at the control terminal */ + rt_kprintf("\nReceived data = %s", recv_data); + } + + /* Send data to sock connection */ + ret = send(sock, send_data, strlen(send_data), 0); + if (ret < 0) + { + /* Receive failed, close this connection */ + closesocket(sock); + rt_kprintf("\nsend error,close the socket.\r\n"); + + rt_free(recv_data); + break; + } + else if (ret == 0) + { + /* Print the send function returns a warning message with a value of 0 */ + rt_kprintf("\n Send warning,send function return 0.\r\n"); + } + } + return; +} +MSH_CMD_EXPORT(tcpclient, a tcp client sample); +``` + +When running the example, first open a network debugging assistant on your computer and open a TCP server. Select the protocol type as TCP +Server, fill in the local IP address and port 5000, as shown below. 
+ +![Network debugging tool interface](figures/net-tcp-s.png) + +Then start the TCP client to connect to the TCP server by entering the following command in the FinSH console: + +```C +msh />tcpclient 192.168.12.45 5000 // Input according to actual situation +Connect successful +``` + +When the console outputs the log message "Connect successful", it indicates that the TCP connection was successfully established. Next, you can perform data communication. In the network debugging tool window, send Hello RT-Thread!, which means that a data is sent from the TCP server to the TCP client, as shown in the following figure: + +![网络调试工具界面](figures/net-hello.png) + +After receiving the data on the FinSH console, the corresponding log information will be output, you can see: + +```c +msh >tcpclient 192.168.12.130 5000 +Connect successful +Received data = hello world +Received data = hello world +Received data = hello world +Received data = hello world +Received data = hello world + got a 'q' or 'Q',close the socket. +msh > +``` + +The above information indicates that the TCP client received 5 "hello world" data sent from the server. Finally, the exit command 'q' was received from the TCP server, and the TCP client program exited the operation and returned to the FinSH console. + +### UDP Client Example + +This is an example of a UDP client. This example will open a UDP server on the PC and open a UDP client on the IoT Board for network communication. A UDP client program has been implemented in the sample project. The function is to send data to the server. The sample code is as follows: + +```c +/* + * Program list: udp client + * + * This is a udp client routine + * Export the udpclient command to the msh + * Command call format: udpclient URL PORT [COUNT = 10] + * URL: Server Address PORT: Port Number COUNT: Optional Parameter Default is 10 + * Program function: send COUNT datas to the remote end of the service +*/ + +#include +#include /* To use BSD socket, you need to include the sockets.h header file. 
*/ +#include +#include +#include + +const char send_data[] = "This is UDP Client from RT-Thread.\n"; /* data */ + +void udpclient(int argc, char**argv) +{ + int sock, port, count; + struct hostent *host; + struct sockaddr_in server_addr; + const char *url; + + /* Received less than 3 parameters */ + if (argc < 3) + { + rt_kprintf("Usage: udpclient URL PORT [COUNT = 10]\n"); + rt_kprintf("Like: udpclient 192.168.12.44 5000\n"); + return ; + } + + url = argv[1]; + port = strtoul(argv[2], 0, 10); + + if (argc> 3) + count = strtoul(argv[3], 0, 10); + else + count = 10; + + /* Get the host address through the function entry parameter url (if it is a domain name, it will do domain name resolution) */ + host = (struct hostent *) gethostbyname(url); + + /* Create a socket of type SOCK_DGRAM, UDP type */ + if ((sock = socket(AF_INET, SOCK_DGRAM, 0)) == -1) + { + rt_kprintf("Socket error\n"); + return; + } + + /* Initialize the pre-connected server address */ + server_addr.sin_family = AF_INET; + server_addr.sin_port = htons(port); + server_addr.sin_addr = *((struct in_addr *)host->h_addr); + rt_memset(&(server_addr.sin_zero), 0, sizeof(server_addr.sin_zero)); + + /* Send count data in total */ + while (count) + { + /* Send data to the remote end of the service */ + sendto(sock, send_data, strlen(send_data), 0, + (struct sockaddr *)&server_addr, sizeof(struct sockaddr)); + + /* Thread sleep for a while */ + rt_thread_delay(50); + + /* Count value minus one */ + count --; + } + + /* Turn off this socket */ + closesocket(sock); +} +``` + +When running the example, first open a network debugging assistant on your computer and open a UDP server. Select the protocol type as UDP and fill in the local IP address and port 5000, as shown in the figure below. + +![网络调试工具界面](figures/net-udp-server.png) + +Then you can enter the following command in the FinSH console to send data to the UDP server. + +```c +msh />udpclient 192.168.12.45 1001 // Need to enter according to the real situation +``` + +The server will receive 10 messages from This is UDP Client from RT-Thread., as shown below: + +![网络调试工具界面](figures/net-udp-client.png) diff --git a/documentation/pm/figures/pm_architecture.png b/documentation/pm/figures/pm_architecture.png new file mode 100644 index 0000000000..e9e6035fa0 Binary files /dev/null and b/documentation/pm/figures/pm_architecture.png differ diff --git a/documentation/pm/figures/pm_description.png b/documentation/pm/figures/pm_description.png new file mode 100644 index 0000000000..509fad5680 Binary files /dev/null and b/documentation/pm/figures/pm_description.png differ diff --git a/documentation/pm/figures/pm_sequence.png b/documentation/pm/figures/pm_sequence.png new file mode 100644 index 0000000000..57cd0d2bf6 Binary files /dev/null and b/documentation/pm/figures/pm_sequence.png differ diff --git a/documentation/pm/figures/pm_system.png b/documentation/pm/figures/pm_system.png new file mode 100644 index 0000000000..fe224f9ec0 Binary files /dev/null and b/documentation/pm/figures/pm_system.png differ diff --git a/documentation/pm/pm.md b/documentation/pm/pm.md new file mode 100644 index 0000000000..ff4c1b802a --- /dev/null +++ b/documentation/pm/pm.md @@ -0,0 +1,430 @@ +# Power Management: PM + +The purpose of low power management of embedded system is to reduce system energy consumption as much as possible to prolong the standby time of equipment on the premise of satisfying users'performance requirements. 
The contradiction between high performance and limited battery energy is most prominent in embedded systems. The combination of hardware low power design and software low power management has become an effective means to solve the contradiction. Nowadays, all kinds of MCUs provide management interfaces in low power consumption more or less. For example, adjusting the frequency of the main control clock, changing the working voltage, adjusting or even closing the bus frequency, closing the working clock of peripheral equipment, etc. With the support of hardware, reasonable software design becomes the key to energy saving. Generally, low power management can be divided into three categories: + +- Processor Power Management + + - The main ways of realization are dynamic management of CPU frequency and adjustment of working mode when the system is idle. + +- Power management of equipment + + - The main way to achieve this is to shut down individual idle devices + +- Power Management of System Platform + + - The main way to realize it is to specific customization of infrequent devices on specific system platforms. + +With the rise of the Internet of Things (IoT), the demand for power consumption of products becomes more and more intense. Sensor nodes as data acquisition usually need to work long-term when the battery is powered, and SOC connected to the network also needs fast response function and low power consumption. + +In the initial stage of product development, the first consideration is to complete the development of product functions as soon as possible. After the function of the product is gradually improved, it is necessary to add the power management (PM) function. To meet this need of IoT, RT-Thread provides power management components. The idea of power management components is to be as transparent as possible, making it easier for products to add low power functions. + +## Introduction of PM Components + +RT-Thread's PM components adopt a layered design idea, separating architecture and chip-related parts, and extracting common parts as the core. While providing a common interface to the upper layer, it also makes it easier for the bottom driver to adapt components. + +![PM Component Overview](figures/pm_system.png) + +### Main Features + +The main features of RT-Thread PM components are as follows: + +- It manages power consumption based on mode, dynamically adjusts working mode in idle time, and supports multiple levels of sleeping. +- Transparent to applications, components automatically complete power management at the bottom layer. +- It supports dynamic frequency conversion in running mode and updates the frequency configuration of equipment automatically according to the mode to ensure normal operation in different running modes. +- Support equipment power management, automatically manage device suspending and resuming according to the mode, ensure correct suspending and resuming in different sleep mode. +- Optional sleep time compensation is supported to make OS Tick dependent applications transparent. +- Provide the device interface to the upper layer. If the devfs component is opened, it can also be accessed through the file system interface. + +### Working Principle + +The essence of low power consumption is that when the system is idle, the CPU stops working, interrupts or resumes working after the event wakes up. In RTOS, there is usually an IDLE task, which has the lowest priority and remains ready. 
When the high priority task is not ready, the OS executes the IDLE task. Generally, the CPU executes empty instructions in IDLE tasks without low power processing. The power management component of RT-Thread can effectively reduce the power consumption of the system by managing CPU, clock and equipment in IDLE tasks. + +![PM工作原理](figures/pm_description.png) + +As shown in the figure above, when the high priority thread runs out or is suspended, the system enters the IDLE thread . After the IDLE thread is executed, it will determine whether the system can go to sleep (to save power). If the system goes to sleep, Some hardware modules will be shut down depending on the chip condition, and OS Tick is also very likely to enter a pause state. At this time, the power management framework will calculate the next timeout point according to the system timer situation, and set a low-power timer, so that the device can wake up at that point, and carry out follow-up work. When the system is awakened (low power timer interrupt or other wake-up interrupt source), the system also needs to know how long it sleeps, and compensate for OS Tick, so that the OS tick value of the system is adjusted to a correct value. + +## PM Framework + +In RT-Thrad PM components, peripherals or applications vote on the required power consumption mode by voting mechanism. When the system is idle, the appropriate power consumption mode is decided according to the voting number, and the abstract interface is called to control the chip to enter a low power consumption state, so as to reduce the power consumption of the system. When no vote is taken, it is entered in the default mode (usually idle mode). Unlike applications, some peripherals may perform specific operations when they enter a low-power state and take measures to recover when they exit a low-power state, which can be achieved by registering PM devices. By registering PM devices, `suspend` callbacks of registered devices will be triggered before entering a low power state. Developers can perform their own operations in the callbacks. Similarly, `resume` callbacks will be triggered when exiting from a low power state. + +![PM Framework](figures/pm_architecture.png) + +## Low Power State and Mode + +The RT-Thread PM component divides the system into two states: RUN(running state) and Sleep(sleeping state). + +Running state controls the frequency of CPU, which is suitable for frequency conversion scenarios. Sleeping state realizes sleeping CPU according to SOC characteristics to reduce power consumption. The two states are controlled independently using different API interfaces. + +- **Sleeping state** + +Sleeping state is also a low power state in the usual sense. By closing the peripherals and executing the SOC power management interface, the system power consumption is reduced. + +The state of sleeping is divided into six patterns, which take the form of pyramids. As the mode increases, the power consumption decreases step by step. The following is the definition of the mode in the sleeping state. Developers can implement the corresponding mode according to the specific SOC, but they need to follow the characteristics of power consumption decreasing step by step. + +| Patterns | Level | Description | +| ---------------------- | ----- | ------------------------------------------------------------ | +| PM_SLEEP_MODE_NONE | 0 | The system is active without any power reduction. 
| +| PM_SLEEP_MODE_IDLE | 1 | The idle mode, which stops CPU and part of the clock when the system is idle. Any event or interrupt can wake up system. | +| PM_SLEEP_MODE_LIGHT | 2 | Light sleep modes, CPU stops, most clocks and peripherals stop, and time compensation is required after wake-up. | +| PM_SLEEP_MODE_DEEP | 3 | Deep sleep mode, CPU stops, only a few low power peripheral work, can be awakened by special interrupts | +| PM_SLEEP_MODE_STANDBY | 4 | Standby mode, CPU stops, device context loss (can be saved to special peripherals), usually reset after wake-up | +| PM_SLEEP_MODE_SHUTDOWN | 5 | Shutdown mode, lower power consumption than Standby mode, context is usually irrecoverable, reset after wake-up | + +>Note: The implementation of power management varies from chip to chip. The above description only gives some recommended scenarios, not all of which need to be implemented. Developers can choose several of them to implement according to their own situation, but they need to follow the principle of higher level and lower power consumption. + +- **Running state** + +Running state is usually used to change the running frequency of CPU, independent of sleep mode. Current operation status is divided into four levels: high-speed, normal-speed, medium-speed and low-speed, as follows: + +| Patterns | Description | +| ------------------------ | ------------------------------------------------------------ | +| PM_RUN_MODE_HIGH_SPEED | High-speed mode, suitable for some over-frequency scenarios | +| PM_RUN_MODE_NORMAL_SPEED | Normal-speed mode, which is the default running state | +| PM_RUN_MODE_MEDIUM_SPEED | Medium-speed mode, reduce CPU running speed, thereby reducing power consumption | +| PM_RUN_MODE_LOW_SPEED | Low-speed mode, CPU frequency further reduced | + +### Request and release of patterns + +In PM components, upper applications can actively participate in power management by requesting and releasing sleep modes. Applications can request different sleep modes according to scenarios and release them after processing. As long as any application or device requests higher-level power mode, it will not switch to a lower mode. Therefore, the requests and releases of sleep mode usually occur in pairs and can be used to protect a certain stage, such as the peripheral DMA transmission process. + +### Device Sensitive to Mode Changes + +In PM components, switching to a new mode of operation may lead to changes in CPU frequency. If peripherals and CPUs share a part of the clock, the peripheral clock will be affected. When entering the new sleep mode, most clock sources will be stopped. If the peripheral does not support the freezing function of sleep, then the peripheral clock needs to be reconfigured when waking up from sleep. So PM components support PM mode sensitive PM devices. It enables the device to work normally when switching to a new operation mode or a new sleep mode. This function requires the underlying driver to implement the relevant interface and register as a device sensitive to mode changes. + +## The calling process + +![PM Sequence](figures/pm_sequence.png) + +Firstly, the application layer sets the callback function of entering and exiting the dormancy state, and then calls `rt_pm_request` to request the sleeping mode to trigger the sleeping operation. The PM component checks the number of sleeping modes when the system is idle, and gives the recommended mode according to the number of votes. 
Then the PM component calls `notfiy` to inform the application that it is going to enter the sleep mode, and then suspends the registered PM device and executes the sleep mode implemented by SOC after returning to OK. The system enters the sleep state (if the enabling time is compensated, the low-power timer will be started before the sleep). At this point, the CPU stops working and waits for an event or interrupt to wake up. When the system is awakened, because the global interruption is closed, the system continues to execute from there, gets the sleep time to compensate the OS tick of the system, wakes up the device in turn, and notifies the application to exit from the sleep mode. Such a cycle is completed, exits, and waits for the system to be idle next time. + +## Introduction to APIs + +### Request Sleep Mode + +```c +void rt_pm_request(uint8_t sleep_mode); +``` + +| Parameter | Mode | +| ---------- | ------------------------ | +| sleep_mode | Request Sleep mode level | + +Sleep_mode takes the following enumeration values: + +```c +enum +{ + /* sleep modes */ + PM_SLEEP_MODE_NONE = 0, /* active state */ + PM_SLEEP_MODE_IDLE, /* idle mode (default) */ + PM_SLEEP_MODE_LIGHT, /* Light Sleep Mode */ + PM_SLEEP_MODE_DEEP, /* Deep Sleep Mode */ + PM_SLEEP_MODE_STANDBY, /* Standby Mode */ + PM_SLEEP_MODE_SHUTDOWN, /* Shutdowm Mode */ + PM_SLEEP_MODE_MAX, +}; +``` + +Calling this function adds the corresponding pattern count to 1 and locks the pattern. At this point, if a lower level of power mode is requested, it will not be accessible. Only after releasing (unlocking) the previously requested mode, the system can enter a lower level of power mode; requests to higher power mode are not affected by this. This function needs to be used in conjunction with `rt_pm_release` to protect a certain stage or process. + +### Release Sleep Mode + +```c +void rt_pm_release(uint8_t sleep_mode); +``` + +| Parameter | Mode | +| ---------- | ---------------- | +| sleep_mode | Sleep mode level | + +Calling this function decreases the corresponding pattern count by 1, and releases the previously requested pattern in conjunction with `rt_pm_request`. + +### Setting up Running Mode + +```c +int rt_pm_run_enter(uint8_t run_mode); +``` + +| Parameter | Mode | +| --------- | ------------------ | +| run_mode | Running mode level | + +`run_mode` can take the following enumeration values: + +```c +enum +{ + /* run modes*/ + PM_RUN_MODE_HIGH_SPEED = 0, /* high speed */ + PM_RUN_MODE_NORMAL_SPEED, /* Normal speed(default) */ + PM_RUN_MODE_MEDIUM_SPEED, /* Medium speed */ + PM_RUN_MODE_LOW_SPEED, /* low speed */ + PM_RUN_MODE_MAX, +}; +``` + +Calling this function changes the CPU's running frequency, thereby reducing the power consumption at runtime. This function only provides levels, and the specific CPU frequency should depend on the actual situation during the migration phase. + +### Setting up callback notifications for entering/exiting sleep mode + +```c +void rt_pm_notify_set(void (*notify)(uint8_t event, uint8_t mode, void *data), void *data); +``` + +| Parameter | Mode | +| --------- | ----------------- | +| notify | callback function | +| data | Private data | + +Evet is the following two enumeration values, identifying entry/exit sleep mode respectively. 
+ +```c +enum +{ + RT_PM_ENTER_SLEEP = 0, /* entry sleep mode */ + RT_PM_EXIT_SLEEP, /* exit sleep mode */ +}; + +``` + +## Instruction for Use + +### Setting Low Power Level + +If the system needs to enter a specified level of low power consumption, it can be achieved by calling rt_pm_request. For example, into deep sleep mode: + +```c +/* Request for Deep Sleep Mode */ +rt_pm_request(PM_SLEEP_MODE_DEEP); +``` + +> Note: If higher power consumption modes, such as Light Mode or Idle Mode, are requested elsewhere in the program, then the corresponding mode needs to be released before the deep sleep mode can be entered. + +### Protect a stage or process + +In special cases, for example, the system is not allowed to enter a lower power mode at a certain stage, which can be protected by rt_pm_request and rt_pm_release. For example, deep sleep mode (which may cause peripherals to stop working) is not allowed during I2C reading data, so the following processing can be done: + +```c +/* Request light sleep mode (I2C peripheral works normally in this mode) */ +rt_pm_request(PM_SLEEP_MODE_LIGHT); + +/* Reading Data Procedure */ + +/* Release this model */ +rt_pm_release(PM_SLEEP_MODE_LIGHT); + +``` + +### Changing CPU Running Frequency + +Reducing the running frequency can effectively reduce the power consumption of the system, and the running frequency of the CPU can be changed through the `rt_pm_run_enter` interface. Generally speaking, frequency reduction means that CPU performance decreases and processing speed decreases, which may lead to the increase of task execution time and need to be weighed reasonably. + +```c +/* Enter Medium Speed Mode */ +rt_pm_run_enter(PM_RUN_MODE_MEDIUM_SPEED); +``` + +## Migration instructions + +Low power management is a very meticulous task. Developers need not only to fully understand the power management of the chip itself, but also to be familiar with the peripheral circuit of the board and deal with it one by one when they enter the low power state, so as to avoid leakage of the peripheral circuit and pull up the overall power consumption. + +RT-Thread PM components Abstract each part and provide different OPS interfaces for developers to adapt. 
The following points need attention during porting:
+
+```c
+/**
+ * low power mode operations
+ */
+struct rt_pm_ops
+{
+    /* Sleep interface, adapts the chip-specific low-power characteristics */
+    void (*sleep)(struct rt_pm *pm, uint8_t mode);
+    /* Run interface, performs frequency and voltage scaling in running mode */
+    void (*run)(struct rt_pm *pm, uint8_t mode);
+    /* The following three interfaces start a low-power timer while the OS tick is stopped, to compensate the OS tick after wake-up */
+    void (*timer_start)(struct rt_pm *pm, rt_uint32_t timeout);
+    void (*timer_stop)(struct rt_pm *pm);
+    rt_tick_t (*timer_get_tick)(struct rt_pm *pm);
+};
+
+/* This OPS is used to manage the power consumption of peripherals */
+struct rt_device_pm_ops
+{
+    /* Suspend the peripheral before entering sleep mode; a non-zero return value means the peripheral is not ready and sleep cannot be entered */
+    int (*suspend)(const struct rt_device *device, uint8_t mode);
+    /* Resume the peripheral after exiting sleep mode */
+    void (*resume)(const struct rt_device *device, uint8_t mode);
+    /* Notify the peripheral when the running-mode frequency changes */
+    int (*frequency_change)(const struct rt_device *device, uint8_t mode);
+};
+
+/* Register a PM device */
+void rt_pm_device_register(struct rt_device *device, const struct rt_device_pm_ops *ops);
+```
+
+### Power Consumption Characteristics of Chips
+
+```c
+void (*sleep)(struct rt_pm *pm, uint8_t mode);
+```
+
+Each chip defines and manages its low-power modes differently. The PM component abstracts these chip-specific characteristics into the `sleep` interface, which adapts the chip-related low-power management. When entering the different sleep modes, the necessary hardware-related configuration, context saving and other processing should be done here.
+
+### Time Compensation for Sleep Mode
+
+```c
+void (*timer_start)(struct rt_pm *pm, rt_uint32_t timeout);
+void (*timer_stop)(struct rt_pm *pm);
+rt_tick_t (*timer_get_tick)(struct rt_pm *pm);
+```
+
+In some sleep modes (Light Sleep or Deep Sleep), the OS tick timer of the kernel may be stopped. In that case a timer must be started to measure the sleep duration and compensate the OS tick after wake-up. The timer used for this compensation must still work in that sleep mode and must be able to wake up the system; otherwise the compensation is meaningless.
+
+- **timer_start**: Start a low-power timer; the input parameter is the time until the next task is due to become ready.
+- **timer_get_tick**: Get the sleep duration when the system is woken up.
+- **timer_stop**: Stop the low-power timer after the system wakes up.
+
+**Note**: Time compensation for a sleep mode must be enabled at the initialization stage by setting the bit corresponding to that mode in `timer_mask`. For example, to enable time compensation for Deep Sleep mode, set the corresponding bit at initialization after implementing the timer-related OPS interfaces:
+
+```c
+rt_uint8_t timer_mask = 0;
+
+/* Set the bit corresponding to Deep Sleep mode to enable sleep time compensation */
+timer_mask = 1UL << PM_SLEEP_MODE_DEEP;
+
+/* initialize system pm module */
+rt_system_pm_init(&_ops, timer_mask, RT_NULL);
+```
+
+### Frequency Conversion in Running Mode
+
+```c
+void (*run)(struct rt_pm *pm, uint8_t mode);
+```
+
+Frequency scaling in running mode is implemented by adapting the `run` interface in `rt_pm_ops`; an appropriate frequency is chosen for each level according to the use scenario.
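+
+To make the adaptation above more concrete, the following is a minimal, illustrative sketch of how a port might fill in `rt_pm_ops` and register it. The `soc_enter_light_sleep()`, `soc_enter_deep_sleep()` and `soc_set_cpu_frequency()` helpers, as well as the frequency values, are hypothetical placeholders for whatever the chip SDK actually provides; only the structure and the `rt_system_pm_init()` call follow the interfaces described above.
+
+```c
+#include <stdint.h>
+#include <rtthread.h>
+#include <drivers/pm.h>   /* PM component header; the path may differ between RT-Thread versions */
+
+/* Hypothetical chip-SDK helpers, used only for illustration */
+extern void soc_enter_light_sleep(void);
+extern void soc_enter_deep_sleep(void);
+extern void soc_set_cpu_frequency(rt_uint32_t hz);
+
+static void _sleep(struct rt_pm *pm, uint8_t mode)
+{
+    switch (mode)
+    {
+    case PM_SLEEP_MODE_NONE:
+    case PM_SLEEP_MODE_IDLE:
+        break;                       /* stay active / idle, nothing to do */
+    case PM_SLEEP_MODE_LIGHT:
+        soc_enter_light_sleep();     /* e.g. WFI with peripherals kept running */
+        break;
+    case PM_SLEEP_MODE_DEEP:
+        soc_enter_deep_sleep();      /* stop clocks, keep RAM; execution resumes here on wake-up */
+        break;
+    default:
+        break;
+    }
+}
+
+static void _run(struct rt_pm *pm, uint8_t mode)
+{
+    /* Map the abstract run-mode level to a concrete CPU frequency (example values only) */
+    static const rt_uint32_t freq_table[PM_RUN_MODE_MAX] =
+    {
+        168000000, 84000000, 42000000, 16000000,
+    };
+
+    soc_set_cpu_frequency(freq_table[mode]);
+}
+
+static const struct rt_pm_ops _ops =
+{
+    .sleep = _sleep,
+    .run   = _run,
+    /* timer_start / timer_stop / timer_get_tick are only required
+       when sleep-time compensation is enabled via timer_mask */
+};
+
+int rt_hw_pm_init(void)
+{
+    /* No sleep-time compensation in this sketch, so timer_mask is 0 */
+    rt_system_pm_init(&_ops, 0, RT_NULL);
+
+    return 0;
+}
+```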
+ +### Power management of peripherals + +Power processing of peripherals is an important part of low power management system. When entering some level of sleep mode, it is usually necessary to process some peripherals, such as emptying DMA, closing clock or setting IO to reset state, and recover after quitting sleep. + +In this case, PM devices can be registered through rt_pm_device_register interface, suspend and resume callbacks of registered devices will be executed when entering/exiting Sleeping mode, and frequency_change callbacks of devices will also be triggered by frequency changes in Running mode. + +A more detailed migration case can be referred to stm32l476-nucleo BSP in the RT-Thread repository. + +## MSH Commands + +### Request Sleep Mode + +The `pm_request` command can be used to request related patterns, using an example as follows: + +```c +msh />pm_request 0 +msh /> +``` + +The range of parameter values is 0-5, corresponding to the following enumeration values: + +```c +enum +{ + /* sleep modes */ + PM_SLEEP_MODE_NONE = 0, /* active state */ + PM_SLEEP_MODE_IDLE, /* idle mode (default) */ + PM_SLEEP_MODE_LIGHT, /* Light Sleep Mode */ + PM_SLEEP_MODE_DEEP, /* Deep Sleep Mode */ + PM_SLEEP_MODE_STANDBY, /* Standby Mode */ + PM_SLEEP_MODE_SHUTDOWN, /* Shutdowm Mode */ + PM_SLEEP_MODE_MAX, +}; +``` + +### Release Sleep Mode + +You can use the `pm_release` command to release the sleep mode. The range of parameters is 0-5, and the examples are as follows: + +```c +msh />pm_release 0 +msh /> +``` + +### Setting up Running Mode + +You can use the `pm_run` command to switch the mode of operation with parameters ranging from 0 to 3, as shown in the following example + +```c +msh />pm_run 2 +msh /> +``` + +The range of parameters is 0 to 3: + +```c +enum +{ + /* run modes*/ + PM_RUN_MODE_HIGH_SPEED = 0, + PM_RUN_MODE_NORMAL_SPEED, + PM_RUN_MODE_MEDIUM_SPEED, + PM_RUN_MODE_LOW_SPEED, + PM_RUN_MODE_MAX, +}; +``` + +### View mode status + +You can use the `pm_dump` command to view the mode state of the PM component, as shown in the following example + +```c +msh > +msh >pm_dump +| Power Management Mode | Counter | Timer | ++-----------------------+---------+-------+ +| None Mode | 0 | 0 | +| Idle Mode | 0 | 0 | +| LightSleep Mode | 1 | 0 | +| DeepSleep Mode | 0 | 1 | +| Standby Mode | 0 | 0 | +| Shutdown Mode | 0 | 0 | ++-----------------------+---------+-------+ +pm current sleep mode: LightSleep Mode +pm current run mode: Normal Speed +msh > +``` + +In the pattern list of `pm_dump`, the priority of sleep mode is arranged from high to low. + +The `Counter` column identifies the count of requests. The graph shows that the Light Sleep mode is requested once, so the current work is in a slight sleep state. + +The `Timer` column identifies whether to turn on sleep time compensation. In the figure, only Deep Sleep mode is used for time compensation. + +The bottom part identifies the current sleep mode and running mode level respectively. + +## Common problems and debugging methods + +- When the system enters the low power mode, the power consumption is too high. + +According to the peripheral circuit, check whether the equipment is in a reasonable state to avoid leakage of peripheral equipment. + +According to the product's own situation, turn off the peripherals and clocks that are not used during the corresponding sleep mode. + +- Unable to enter lower levels of power consumption + +Check whether the higher-level power consumption mode is not released. 
RT-Thread's PM component uses `rt_pm_request` to request sleep modes. If a higher-power sleep mode is requested and not released afterwards, the system cannot switch to a lower power level.
+
+For example, if Light Sleep mode is requested and then Deep Sleep mode is requested, the system stays in Light Sleep mode. Only after the Light Sleep mode is released by calling `rt_pm_release` will the system automatically switch to Deep Sleep mode.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/documentation/posix/README.md b/documentation/posix/README.md
new file mode 100644
index 0000000000..57e49198c0
--- /dev/null
+++ b/documentation/posix/README.md
@@ -0,0 +1,2604 @@
+# POSIX Interface
+
+## Introduction to Pthreads
+
+POSIX Threads is abbreviated as Pthreads. POSIX ("Portable Operating System Interface") is a set of standards established by the IEEE Computer Society to improve compatibility between operating systems and the portability of applications. Pthreads is the POSIX threading standard, defined in POSIX.1c, Threads extensions (IEEE Std 1003.1c-1995), which specifies a set of C language types, functions, and constants. They are declared in the `pthread.h` header file and provided by a thread library; there are about 100 APIs, all with the "`pthread_`" prefix, which can be divided into 4 categories:
+
+- **Thread management**: Creating, detaching and joining threads, and setting and querying thread attributes.
+
+- **Mutex**: Short for "mutual exclusion"; restricts thread access to shared data and protects the integrity of shared data. This includes creating, destroying, locking, and unlocking mutexes and some functions for setting or modifying mutex attributes.
+
+- **Condition variable**: Used for communication between threads that share a mutex. It includes functions for creating, destroying and waiting on condition variables, and for sending signals.
+
+- **Read/write locks and barriers**: Including the creation, destruction, waiting, and related attribute settings of read-write locks and barriers.
+
+POSIX semaphores are used with Pthreads but are not part of the Pthreads standard; they are defined in POSIX.1b, Real-time extensions (IEEE Std 1003.1b-1993). Therefore the semaphore-related functions use the "`sem_`" prefix instead of "`pthread_`".
+
+Message queues, like semaphores, are used with Pthreads and are not part of the Pthreads standard; they are defined in the IEEE Std 1003.1-2001 standard. The message-queue-related functions use the "`mq_`" prefix.
+
+| Function Prefix        | Function Group                               |
+|------------------------|----------------------------------------------|
+| `pthread_`             | Thread itself and various related functions  |
+| `pthread_attr_`        | Thread attribute object                      |
+| `pthread_mutex_`       | Mutex                                        |
+| `pthread_mutexattr_`   | Mutex attribute object                       |
+| `pthread_cond_`        | Condition variable                           |
+| `pthread_condattr_`    | Condition variable attribute object          |
+| `pthread_rwlock_`      | Read-write lock                              |
+| `pthread_rwlockattr_`  | Read-write lock attribute object             |
+| `pthread_spin_`        | Spin lock                                    |
+| `pthread_barrier_`     | Barrier                                      |
+| `pthread_barrierattr_` | Barrier attribute object                     |
+| `sem_`                 | Semaphore                                    |
+| `mq_`                  | Message queue                                |
+
+Most Pthreads functions return 0 on success and, on failure, an error code defined in the `errno.h` header file.
Many operating systems support Pthreads, such as Linux, MacOSX, Android, and Solaris, so applications written using Pthreads functions are very portable and can be compiled and run directly on many platforms that support Pthreads. + +### Use POSIX in RT-Thread + +Using the POSIX API interface in RT-Thread includes several parts: libc (for example, newlib), filesystem, pthread, and so on. Need to open the relevant options in rtconfig.h: + +``` c +#define RT_USING_LIBC +#define RT_USING_DFS +#define RT_USING_DFS_DEVFS +#define RT_USING_PTHREADS +``` + +RT-Thread implements most of the functions and constants of Pthreads, defined in the pthread.h, mqueue.h, semaphore.h, and sched.h header files according to the POSIX standard. Pthreads is a sublibrary of libc, and Pthreads in RT-Thread are based on the encapsulation of RT-Thread kernel functions, making them POSIX compliant. The Pthreads functions and related functions implemented in RT-Thread are described in detail in the following sections. + +## Thread + +### Thread Handle + +``` c +typedef rt_thread_t pthread_t; +``` + +`Pthread_t` is a redefinition of the `rt_thread_t` type, defined in the `pthread.h` header file. rt_thread_t is the thread handle (or thread identifier) of the RT-Thread and is a pointer to the thread control block. You need to define a variable of type pthread_t before creating a thread. Each thread corresponds to its own thread control block, which is a data structure used by the operating system to control threads. It stores some information about the thread, such as priority, thread name, and thread stack address. Thread control blocks and thread specific information are described in detail in the [Thread Management](../thread/thread.md) chapter. + +### Create Thread + +``` c +int pthread_create (pthread_t *tid, + const pthread_attr_t *attr, + void *(*start) (void *), void *arg); +``` + +| **Parameter** | **Description** | +|----------|------------------------------------------------------| +| tid | Pointer to thread handle (thread identifier), cannot be NULL | +| attr | Pointer to the thread property, if NULL is used, the default thread property is used | +| start | Thread entry function address | +| arg | The argument passed to the thread entry function | +|**return**| —— | +| 0 | succeeded | +| EINVAL | Invalid parameter | +| ENOMEM | Dynamic allocation of memory failed | + +This function creates a pthread thread. This function dynamically allocates the POSIX thread data block and the RT-Thread thread control block, and saves the start address (thread ID) of the thread control block in the memory pointed to by the parameter tid, which can be used to operate in other threads. This thread; and the thread attribute pointed to by attr, the thread entry function pointed to by start, and the entry function parameter arg are stored in the thread data block and the thread control block. If the thread is created successfully, the thread immediately enters the ready state and participates in the scheduling of the system. If the thread creation fails, the resources occupied by the thread are released. + +Thread properties and related functions are described in detail in the *Thread Advanced Programming* chapter. In general, the default properties can be used. + +> After the pthread thread is created, if the thread needs to be created repeatedly, you need to set the pthread thread to detach mode, or use pthread_join to wait for the created pthread thread to finish. 
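+
+If the default attributes are not suitable (for example, when the default stack is too small for the thread's workload), a thread attribute object can be passed as the second argument. The following is a minimal sketch, assuming the standard `pthread_attr_*` interfaces covered in the *Thread Advanced Programming* chapter; the stack size value is only an example.
+
+``` c
+#include <pthread.h>
+
+static pthread_t tid;
+
+static void* worker(void* parameter)
+{
+    /* thread body */
+    return NULL;
+}
+
+static int create_with_attr(void)
+{
+    pthread_attr_t attr;
+    int result;
+
+    pthread_attr_init(&attr);                  /* start from the default attributes */
+    pthread_attr_setstacksize(&attr, 2048);    /* example stack size in bytes */
+
+    result = pthread_create(&tid, &attr, worker, NULL);
+
+    pthread_attr_destroy(&attr);               /* the attribute object may be destroyed after the thread is created */
+    return result;
+}
+```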
+ +#### Example Code for Creating Thread + +The following program initializes two threads, which have a common entry function, but their entry parameters are not the same. Others, they have the same priority and are scheduled for rotation in time slices. + +``` c +#include +#include +#include + +/* Thread control block */ +static pthread_t tid1; +static pthread_t tid2; + +/* Function return value check */ +static void check_result(char* str,int result) +{ + if (0 == result) + { + printf("%s successfully!\n",str); + } + else + { + printf("%s failed! error code is %d\n",str,result); + } +} + +/* Thread entry function */ +static void* thread_entry(void* parameter) +{ + int count = 0; + int no = (int) parameter; /* Obtain the thread's entry parameters */ + + while (1) + { + /* Printout thread count value */ + printf("thread%d count: %d\n", no, count ++); + + sleep(2); /* Sleep for 2 seconds */ + } +} + +/* User application portal */ +int rt_application_init() +{ + int result; + + /* Create thread 1, the property is the default value, the entry function is thread_entry, and the entry function parameter is 1 */ + result = pthread_create(&tid1,NULL,thread_entry,(void*)1); + check_result("thread1 created", result); + + /* Create thread 2, the property is the default value, the entry function is thread_entry, and the entry function parameter is 2 */ + result = pthread_create(&tid2,NULL,thread_entry,(void*)2); + check_result("thread2 created", result); + + return 0; +} +``` + +### Detach Thread + +``` c +int pthread_detach (pthread_t thread); +``` + +| Parameter | Description | +|------|----------------------| +| thread | Thread handle (thread identifier) | +|**return**| —— | +| 0 | succeeded | + +Call this function, If the pthread does not finish running, set the detach state of the thread thread property to detached; when the thread thread has finished, the system will reclaim the resources occupied by the pthread thread. + +Usage: The child thread calls `pthread_detach(pthread_self())` (*pthread_self()* returns the thread handle of the currently calling thread), or another thread calls `pthread_detach(thread_id)`. The separation state of the thread attributes will be described in detail later. + +> Once the detach state of the thread property is set to detached, the thread cannot be waited by the pthread_join() function or re-set to detached. + +#### Example Code for Detaching Thread + +The following program initializes 2 threads, which have the same priority and are scheduled according to the time slice. Both threads will be set to the detached state. The 2 threads will automatically exit after printing 3 times of information. After exiting, the system will automatically reclaim its resources. + +``` c +#include +#include +#include + +/* Thread control block */ +static pthread_t tid1; +static pthread_t tid2; + +/* Function return value check */ +static void check_result(char* str,int result) +{ + if (0 == result) + { + printf("%s successfully!\n",str); + } + else + { + printf("%s failed! 
error code is %d\n",str,result); + } +} + +/* Thread 1 entry function */ +static void* thread1_entry(void* parameter) +{ + int i; + + printf("i'm thread1 and i will detach myself!\n"); + pthread_detach(pthread_self()); /* Thread 1 detach itself */ + + for (i = 0;i < 3;i++) /* Cycle print 3 times */ + { + printf("thread1 run count: %d\n",i); + sleep(2); /* Sleep 2 seconds */ + } + + printf("thread1 exited!\n"); + return NULL; +} + +/* Thread 2 entry function */ +static void* thread2_entry(void* parameter) +{ + int i; + + for (i = 0;i < 3;i++) /* Cycle print 3 times */ + { + printf("thread2 run count: %d\n",i); + sleep(2); /* Sleep 2 seconds */ + } + + printf("thread2 exited!\n"); + return NULL; +} +/* User application portal */ +int rt_application_init() +{ + int result; + + /* Create thread 1, property is the default value, separation state is the default value joinable, + * The entry function is thread1_entry and the entry function parameter is NULL */ + result = pthread_create(&tid1,NULL,thread1_entry,NULL); + check_result("thread1 created",result); + + /* Create thread 2, the property is the default value, the separation state is the default value joinable, + * The entry function is thread2_entry and the entry function parameter is NULL */ + result = pthread_create(&tid2,NULL,thread2_entry,NULL); + check_result("thread2 created",result); + + pthread_detach(tid2); /* detach thread 2 */ + + return 0; +} +``` + +### Waiting for Thread to End + +``` c +int pthread_join (pthread_t thread, void**value_ptr); +``` + +| Parameter | **Description** | +|----------|----------------------| +| thread | Thread handle (thread identifier) | +| value_ptr | User-defined pointer to store the return value of the waiting thread, which can be obtained by the function pthread_join() | +|**Return**| —— | +| 0 | succeeded | +| EDEADLK | Thread join itself | +| EINVAL | Join a thread with a detached state | +| ESRCH | Could not find the thread | + +The thread calling this function blocks and waits for the thread with the joinable property to finish running and gets the return value of the thread. The address of the returned value is stored in `value_ptr` and frees the resources held by thread. + +The pthread_join() and pthread_detach() functions are similar in that they are used to reclaim the resources occupied by threads after the thread running ends. A thread cannot wait for itself to end. The detached state of the thread thread must be `joinable`, and one thread only corresponds to the `pthread_join()` call. A thread with a split state of joinable will only release the resources it occupies when other threads execute `pthread_join()` on it. So in order to avoid memory leaks, all threads that will end up running, either detached or set to detached, or use pthread_join() to reclaim the resources they consume. + +#### Example Code for Waiting for the Thread to End + +The following program code initializes 2 threads, they have the same priority, and the threads of the same priority are scheduled according to the time slice. The separation status of the 2 thread attributes is the default value joinable, and thread 1 starts running first, and ends after printing 3 times of information. Thread 2 calls pthread_join() to block waiting for thread 1 to end, and reclaims the resources occupied by thread 1, and thread 2 prints the message every 2 seconds. 
+ +``` c +#include +#include +#include + +/* Thread control block */ +static pthread_t tid1; +static pthread_t tid2; + +/* Function return value check */ +static void check_result(char* str,int result) +{ + if (0 == result) + { + printf("%s successfully!\n",str); + } + else + { + printf("%s failed! error code is %d\n",str,result); + } +} + +/* Thread 1 entry function */ +static void* thread1_entry(void* parameter) +{ + int i; + + for (int i = 0;i < 3;i++) /* Cycle print 3 times */ + { + printf("thread1 run count: %d\n",i); + sleep(2); /* Sleep 2 seconds */ + } + + printf("thread1 exited!\n"); + return NULL; +} + +/* Thread 2 entry function */ +static void* thread2_entry(void* parameter) +{ + int count = 0; + void* thread1_return_value; + + /* Blocking waiting thread 1 running end */ + pthread_join(tid1, NULL); + + /* Thread 2 print information to start output */ + while(1) + { + /* Print thread count value output */ + printf("thread2 run count: %d\n",count ++); + sleep(2); /* Sleep 2 seconds */ + } + + return NULL; +} + +/* User application portal */ +int rt_application_init() +{ + int result; + /* Create thread 1, property is the default value, separation state is the default value joinable, + * The entry function is thread1_entry and the entry function parameter is NULL */ + result = pthread_create(&tid1,NULL,thread1_entry,NULL); + check_result("thread1 created",result); + + /* Create thread 2, the property is the default value, the separation state is the default value joinable, + * The entry function is thread2_entry and the entry function parameter is NULL */ + result = pthread_create(&tid2,NULL,thread2_entry,NULL); + check_result("thread2 created",result); + + return 0; +} +``` + +### Exit Thread + +``` c +void pthread_exit(void *value_ptr); +``` + +| *Parameter* | **Description** | +|----------|---------------------------| +| value_ptr | User-defined pointer to store the return value of the waiting thread, which can be obtained by the function pthread_join() | + +Calling this function by the pthread thread terminates execution, just as the process calls the exit() function and returns a pointer to the value returned by the thread. The thread exit is initiated by the thread itself. + +> If the split state of the thread is joinable, the resources occupied by the thread will not be released after the thread exits. The pthread_join() function must be called to release the resources occupied by the thread. + +#### Example Code for Exiting Thread + +This program initializes 2 threads, they have the same priority, and the threads of the same priority are scheduled according to the time slice. The separation state of the two thread attributes is the default value joinable, and thread 1 starts running first, sleeps for 2 seconds after printing the information once, and then prints the exit information and then ends the operation. Thread 2 calls pthread_join() to block waiting for thread 1 to end, and reclaims the resources occupied by thread 1, and thread 2 prints the message every 2 seconds. + +``` c +#include +#include +#include + +/* Thread control block */ +static pthread_t tid1; +static pthread_t tid2; + +/* Function return value check function */ +static void check_result(char* str,int result) +{ + if (0 == result) + { + printf("%s successfully!\n",str); + } + else + { + printf("%s failed! 
error code is %d\n",str,result); + } +} + +/* Thread 1 entry function */ +static void* thread1_entry(void* parameter) +{ + int count = 0; + while(1) + { + /* Print thread count value output */ + printf("thread1 run count: %d\n",count ++); + sleep(2); /* Sleep 2 seconds */ + printf("thread1 will exit!\n"); + + pthread_exit(0); /* Thread 1 voluntarily quits */ + } +} + +/* Thread 2 entry function */ +static void* thread2_entry(void* parameter) +{ + int count = 0; + + /* The block waits for thread 1 to finish running */ + pthread_join(tid1,NULL); + /* Thread 2 starts outputting print information */ + while(1) + { + /* Print thread count value output */ + printf("thread2 run count: %d\n",count ++); + sleep(2); /* Sleep 2 seconds */ + } +} + +/* User application portal */ +int rt_application_init() +{ + int result; + + /* Create thread 1, property is the default value, separation state is the default value joinable, + * The entry function is thread1_entry and the entry function parameter is NULL */ + result = pthread_create(&tid1,NULL,thread1_entry,NULL); + check_result("thread1 created",result); + + /* Create thread 2, the property is the default value, the separation state is the default value joinable, + * The entry function is thread2_entry and the entry function parameter is NULL */ + result = pthread_create(&tid2,NULL,thread2_entry,NULL); + check_result("thread2 created",result); + + return 0; +} +``` + +## Mutex + +Mutexes, also known as mutually exclusive semaphores, are a special binary semaphore. Mutexes are used to ensure the integrity of shared resources. Only one thread can access the shared resource at any time. To access shared resources, the thread must first obtain the mutex. After the access is complete, the mutex must be released. Embedded shared resources include memory, IO, SCI, SPI, etc. If two threads access shared resources at the same time, there may be problems because one thread may use the resource while another thread modifies the shared resource and consider sharing. + +There are only two kinds of operations of mutex, locking or unlocking, and only one thread holds a mutex at a time. When a thread holds it, the mutex is latched and its ownership is obtained by this thread. Conversely, when this thread releases it, it unlocks the mutex and loses its ownership. When a thread holds a mutex, other threads will not be able to unlock it or hold it. + +The main APIs of the mutex include: calling `pthread_mutex_init()` to initialize a mutex, `pthread_mutex_destroy()` to destroy the mutex, pthread_mutex_lock() to lock the mutex, and `pthread_mutex_unlock()` to unlock the mutex. + +The rt-thread operating system implements a priority inheritance algorithm to prevent priority inversion.Priority inheritance is the practice of raising the priority of a low-priority thread that occupies a resource to the same level as the highest-priority thread of all the threads waiting for the resource, then executing, and then returning to the initial setting when the low-priority thread releases the resource.Thus, threads that inherit priority prevent system resources from being preempted by any intermediate priority thread. + +For a detailed introduction to priority reversal, please refer to the [Inter-thread Synchronization](../thread-sync/thread-sync.md) Mutex section. + +### Mutex Lock Control Block + +Each mutex corresponds to a mutex control block that contains some information about the control of the mutex. 
Before creating a mutex, you must first define a variable of type `pthread_mutex_t`. pthread_mutex_t is a redefinition of pthread_mutex. The pthread_mutex data structure is defined in the pthread.h header file. The data structure is as follows: + +``` c +struct pthread_mutex +{ + pthread_mutexattr_t attr; /* Mutex attribute */ + struct rt_mutex lock; /* RT-Thread Mutex lock control block */ +}; +typedef struct pthread_mutex pthread_mutex_t; + +//rt_mutex is a data structure defined in the RT-Thread kernel, defined in the rtdef.h header file. The data structure is as follows: + +struct rt_mutex +{ + struct rt_ipc_object parent; /* Inherited from the ipc_object class */ + rt_uint16_t value; /* Mutex value */ + rt_uint8_t original_priority; /* thread's original priority */ + rt_uint8_t hold; /* Mutex lock holding count */ + struct rt_thread *owner; /* Thread that currently has a mutex */ +}; +typedef struct rt_mutex* rt_mutex_t; /* Rt_mutext_t is a pointer to the mutex structure */ +``` + +### Initialize the Mutex + +``` c +int pthread_mutex_init(pthread_mutex_t *mutex, const pthread_mutexattr_t *attr); +``` + +| **Parameter** | **Description** | +|----------|------------------------------------------------------| +| mutex | Mutex lock handle, cannot be NULL | +| attr | Pointer to the mutex attribute, if the pointer is NULL, the default attribute is used. | +|**return**| —— | +| 0 | succeeded | +| EINVAL | Invalid parameter | + +This function initializes the mutex `mutex` and sets the mutex property according to the mutex attribute object pointed to by `attr`. After successful initialization, the mutex is unlocked and the thread can obtain it. This function encapsulates the rt_mutex_init() function. + +In addition to calling the pthread_mutex_init() function to create a mutex, you can also statically initialize the mutex with the macro PTHREAD_MUTEX_INITIALIZER by: `pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER` (structure constant), which is equivalent to specifying attr to NULL when calling pthread_mutex_init(). + +The mutex lock properties and related functions are described in detail in the *thread advanced programming* chapter. In general, the default properties can be used. + +### Destroy Mutex + +``` c +int pthread_mutex_destroy(pthread_mutex_t *mutex); +``` + +| **Parameter** | **Description** | +|----------|----------------------| +| mutex | Mutex lock handle, cannot be NULL | +|**return**| —— | +| 0 | Succeeded | +| EINVAL | Mutex is empty or mutex has been destroyed | +| EBUSY | Mutex is being used | + +This function destroys the mutex `mutex`. Mutex is mutable in an uninitialized state after destruction. After destroying the mutex's properties and control block parameters will not be valid, but you can call pthread_mutex_init() to reinitialize the destroyed mutex. However, there is no need to destroy the mutex that is statically initialized with the macro PTHREAD_MUTEX_INITIALIZER. + +The mutex can be destroyed when it is determined that the mutex is not locked and no thread is blocked on the mutex. + +### Blocking Mode Locks the Mutex + +``` c +int pthread_mutex_lock(pthread_mutex_t *mutex); +``` + +| **Parameter** | **Description** | +|----------|----------------------| +| mutex | Mutex lock handle, cannot be NULL | +|**return**| —— | +| 0 | Succeeded | +| EINVAL | Invalid parameter | +| EDEADLK | Mutexes mutex do not call this function repeatedly for a thread with a nested lock | + +This function locks the mutex `mutex`, which is a wrapper of the rt_mutex_take() function. 
If the mutex has not been locked yet, the thread that requests it locks it successfully. If the mutex is already held by the current thread and its type is a nested lock, the mutex's holding count is incremented by one and the current thread does not suspend waiting (no deadlock occurs), but the thread must later perform a matching number of unlocks. If the mutex is held by another thread, the current thread blocks until that thread unlocks it, and threads waiting for the mutex acquire it on a *first in, first out* basis.
+
+### Non-blocking Mode Locks the Mutex
+
+``` c
+int pthread_mutex_trylock(pthread_mutex_t *mutex);
+```
+
+| **Parameter** | **Description**       |
+|----------|----------------------|
+| mutex    | Mutex lock handle, cannot be NULL |
+|**return**| ——                   |
+| 0        | Succeeded |
+| EINVAL   | Invalid parameter |
+| EDEADLK  | The mutex is not a nested lock, but the thread called this function repeatedly |
+| EBUSY    | The mutex has already been locked by another thread |
+
+This function is the non-blocking version of the pthread_mutex_lock() function. The difference is that if the mutex is already locked, the thread is not blocked; instead the error code is returned immediately.
+
+### Unlock the Mutex
+
+``` c
+int pthread_mutex_unlock(pthread_mutex_t *mutex);
+```
+
+| Parameter | **Description**       |
+|----------|----------------------|
+| mutex    | Mutex lock handle, cannot be NULL |
+|**return**| ——                   |
+| 0        | Succeeded |
+| EINVAL   | Invalid parameter |
+| EPERM    | The mutex is not a nested lock, but the thread called this function repeatedly |
+| EBUSY    | Unlocking a mutex of the error-detection type that is held by another thread |
+
+Call this function to unlock the mutex; it is a wrapper of the rt_mutex_release() function. When a thread has finished accessing the shared resource, it should release the mutex it holds as soon as possible so that other threads can acquire it in time. Only the thread that holds the mutex can release it, and each release decrements its holding count by one. When the holding count reaches zero (i.e. the holding thread has released all of its holds), the mutex becomes available, and threads waiting on the mutex are granted it in first-in, first-out order. If the thread's running priority was raised by the mutex, it reverts to its original priority when the mutex is released.
+
+### Example Code for Mutex Lock
+
+This program initializes 2 threads with the same priority. Both threads call the same printer() function to output their own string; printer() outputs one character at a time and then sleeps for 1 second, so the calling thread sleeps as well. If no mutex is used, then after thread 1 prints a character and goes to sleep, thread 2 runs and prints a character, so neither string is printed completely and the output is garbled. If a mutex is used to protect the shared print function printer(), thread 1 takes the mutex and calls printer() to print a character, then sleeps for 1 second and the system switches to thread 2. Because the mutex is locked by thread 1, thread 2 blocks until thread 1 has printed its whole string and actively released the mutex, after which thread 2 is woken up.
+ +``` c +#include +#include +#include + +/* Thread control block */ +static pthread_t tid1; +static pthread_t tid2; +/* Mutex lock control block */ +static pthread_mutex_t mutex; +/* Thread-sharing print function */ +static void printer(char* str) +{ + while(*str != 0) + { + putchar(*str); /* Output one character */ + str++; + sleep(1); /* Sleep 1 second */ + } + printf("\n"); +} +/* Function return value check */ +static void check_result(char* str,int result) +{ + if (0 == result) + { + printf("%s successfully!\n",str); + } + else + { + printf("%s failed! error code is %d\n",str,result); + } +} +/* Thread entry */ +static void* thread1_entry(void* parameter) +{ + char* str = "thread1 hello RT-Thread"; + while (1) + { + pthread_mutex_lock(&mutex); /* Mutex lock */ + + printer(str); /* Access shared print function */ + + pthread_mutex_unlock(&mutex); /* Unlock after access is complete */ + + sleep(2); /* Sleep 2 seconds */ + } +} +static void* thread2_entry(void* parameter) +{ + char* str = "thread2 hi world"; + while (1) + { + pthread_mutex_lock(&mutex); /* The mutex locks */ + + printer(str); /* Access shared print function */ + + pthread_mutex_unlock(&mutex); /* Unlock after access is complete */ + + sleep(2); /* Sleep 2 seconds */ + } +} +/* User application portal */ +int rt_application_init() +{ + int result; + /* Initialize a mutex */ + pthread_mutex_init(&mutex,NULL); + + /* Create thread 1, the thread entry is thread1_entry, the attribute parameter is NULL, the default value is selected, and the entry parameter is NULL.*/ + result = pthread_create(&tid1,NULL,thread1_entry,NULL); + check_result("thread1 created",result); + + /* Create thread 2, thread entry is thread2_entry, property parameter is NULL select default value, entry parameter is NULL*/ + result = pthread_create(&tid2,NULL,thread2_entry,NULL); + check_result("thread2 created",result); + + return 0; +} +``` + +## Conditional Variable + +A condition variable is actually a semaphore used for synchronization between threads. A condition variable is used to block a thread. When a condition is met, a condition is sent to the blocked thread. The blocking thread is woken up. The condition variable needs to be used with the mutex. The mutex is used to protect the shared data. + +Condition variables can be used to inform shared data status. For example, if a thread that processes a shared resource queue finds that the queue is empty, then the thread can only wait until one node is added to the queue. After adding, a conditional variable signal is sent to activate the waiting thread. + +The main operations of the condition variable include: calling `pthread_cond_init()` to initialize the condition variable, calling `pthread_cond_destroy()` to destroy a condition variable, calling `pthread_cond_wait()` to wait for a condition variable, and calling `pthread_cond_signal()` to send a condition variable. + +### Condition Variable Control Block + +Each condition variable corresponds to a condition variable control block, including some information about the operation of the condition variable. A `pthread_cond_t` condition variable control block needs to be defined before initializing a condition variable. `pthread_cond_t` is a redefinition of the `pthread_cond` structure type, defined in the pthread.h header file. 
+ +``` c +struct pthread_cond +{ + pthread_condattr_t attr; /* Condition variable attribute */ + struct rt_semaphore sem; /* RT-Thread semaphore control block */ +}; +typedef struct pthread_cond pthread_cond_t; + +Rt_semaphore is a data structure defined in the RT-Thread kernel. It is a semaphore control block defined in the rtdef.h header file. + +struct rt_semaphore +{ + struct rt_ipc_object parent; /* Inherited from the ipc_object class */ + rt_uint16_t value; /* Semaphore value */ +}; +``` + +### Initialization Condition Variable + +``` c +int pthread_cond_init(pthread_cond_t *cond, const pthread_condattr_t *attr); +``` + +| **Parameter** | **Description** | +|----|------------------------------------------------| +| cond | Conditional variable handle, cannot be NULL | +| attr | Pointer to the condition variable property, if NULL then use the default property value | +|**return**| —— | +| 0 | succeeded | +| EINVAL | Invalid parameter | + +This function initializes the `cond` condition variable and sets its properties according to the condition variable property pointed to by `attr` , which is a wrapper of the `rt_sem_init()` function, based on semaphore implementation. The condition variable is not available after successful initialization. + +You can also statically initialize a condition variable with the macro PTHREAD_COND_INITIALIZER by: `pthread_cond_t cond = PTHREAD_COND_INITIALIZER` (structural constant), which is equivalent to specifying NULL when calling `pthread_cond_init()`. + +Attr General setting NULL use the default value, as described in the thread advanced programming chapter. + +### Destroy Condition Variables + +``` c +int pthread_cond_destroy(pthread_cond_t *cond); +``` + +| **Parameter** | **Description** | +|----|------------------------| +| cond | Conditional variable handle, cannot be NULL | +|**return**| —— | +| 0 | Succeeded | +| EINVAL | Invalid parameter | +| EPERM | Mutexes are not nested locks, but threads call this function repeatedly | +| EBUSY | Condition variables are being used | + +This function destroys the `cond` condition variable, and the `cond` is uninitialized after destruction. The attribute and control block parameters of the condition variable will not be valid after destruction, but can be reinitialized by calling `pthread_cond_init()` or statically. + +Before destroying a condition variable, you need to make sure that no threads are blocked on the condition variable and will not wait to acquire, signal, or broadcast. + +### Blocking Mode to Obtain Condition Variables + +``` c +int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex); +``` + +| **Parameter** | **Description** | +|----------|----------------------------------| +| cond | Conditional variable handle, cannot be NULL | +| mutex | Pointer to the mutex control block, cannot be NULL | +|**return**| —— | +| 0 | Succeeded | +| EINVAL | Invalid parameter | + +This function gets the `cond` condition variable in blocking mode. The thread needs to lock the mutex before waiting for the condition variable. This function first determines whether the condition variable is available. If it is not available, initializes a condition variable, then unlocks the mutex and then tries to acquire a semaphore when the semaphore's value is greater than zero, it indicates that the semaphore is available, the thread will get the semaphore, and the condition variable will be obtained, and the corresponding semaphore value will be decremented by 1. 
If the value of the semaphore is equal to zero, indicating that the semaphore is not available, the thread will block until the semaphore is available, after which the mutex will be locked again. + +### Specify Blocking Time to Obtain Condition Variables + +``` c +int pthread_cond_timedwait(pthread_cond_t *cond, + pthread_mutex_t *mutex, + const struct timespec *abstime); +``` + +| **Parameter** | **Description** | +|-------|-------------------------------------------------| +| cond | Conditional variable handle, cannot be NULL | +| mutex | Pointer to the mutex control block, cannot be NULL | +| abstime | The specified wait time in operating system clock tick (OS Tick) | +|**return**| —— | +| 0 | Succeeded | +| EINVAL | Invalid parameter | +| EPERM | Mutexes are not nested locks, but threads call this function repeatedly | +| ETIMEDOUT | time out | + +The only difference between this function and the `pthread_cond_wait()` function is that if the condition variable is not available, the thread will be blocked for the `abstime` duration. After the timeout, the function will directly return the ETIMEDOUT error code and the thread will be woken up to the ready state. + +### Send a Conditional Semaphore + +``` c +int pthread_cond_signal(pthread_cond_t *cond); +``` + +| **Parameter** | **Description** | +|----|------------------------| +| cond | Conditional variable handle, cannot be NULL | +|**return**| —— | +| 0 | Succeeded | + +This function sends a signal and wakes up only one thread waiting for the `cond` condition variable, which encapsulates the rt_sem_release() function, which is to send a semaphore. When the value of the semaphore is equal to zero, and a thread waits for this semaphore, it will wake up the first thread waiting in the queue of the semaphore to get the semaphore. Otherwise the value of the semaphore will be increased by 1. + +### Broadcast + +``` c +int pthread_cond_broadcast(pthread_cond_t *cond); +``` + +| **Parameter** | **Description** | +|----|------------------------| +| cond | Conditional variable handle, cannot be NULL | +|**return**| —— | +| 0 | Succeeded | +| EINVAL | Invalid parameter | + +Calling this function will wake up all threads waiting for the `cond` condition variable. + +### Example Code for Condition Variable + +This example is a producer consumer model with a producer thread and a consumer thread that have the same priority. The producer will produce a number every 2 seconds, put it in the list pointed to by the `head`, and then call pthread_cond_signal() to send signal to the consumer thread to inform the consumer that there is data in the thread list. The consumer thread calls pthread_cond_wait() to wait for the producer thread to send a signal. + +``` c +#include +#include +#include +#include + +/* Statically initialize a mutex and a condition variable */ +static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER; +static pthread_cond_t cond = PTHREAD_COND_INITIALIZER; + +/* Pointer to the thread control block */ +static pthread_t tid1; +static pthread_t tid2; + +/* Function return value check */ +static void check_result(char* str,int result) +{ + if (0 == result) + { + printf("%s successfully!\n",str); + } + else + { + printf("%s failed! error code is %d\n",str,result); + } +} + +/* The structure data produced by the producer is stored in the linked list. 
*/ +struct node +{ + int n_number; + struct node* n_next; +}; +struct node* head = NULL; /* Link header, is a shared resource */ + +/* Consumer thread entry function */ +static void* consumer(void* parameter) +{ + struct node* p_node = NULL; + + pthread_mutex_lock(&mutex); /* Lock on mutex */ + + while (1) + { + while (head == NULL) /* Determine if there are elements in the list */ + { + pthread_cond_wait(&cond,&mutex); /* Try to get a condition variable */ + } + /* + Pthread_cond_wait() will unlock the mutex first, then block in the wait queue until the fetch condition variable is awakened. After being woken up, the thread will lock the mutex again and successfully enter the critical section. + */ + + p_node = head; /* Obtain resources */ + head = head->n_next; /* Header pointing to the next resource */ + /* Printout */ + printf("consume %d\n",p_node->n_number); + + free(p_node); /* Release the memory occupied by the node after obtaining the resource */ + } + pthread_mutex_unlock(&mutex); /* Release the mutex */ + return 0; +} +/* Producer thread entry function */ +static void* product(void* patameter) +{ + int count = 0; + struct node *p_node; + + while(1) + { + /* Dynamically allocate a block of structured memory */ + p_node = (struct node*)malloc(sizeof(struct node)); + if (p_node != NULL) + { + p_node->n_number = count++; + pthread_mutex_lock(&mutex); /* To operate on the critical resource head, lock it first */ + + p_node->n_next = head; + head = p_node; /* Insert data into the list header */ + + pthread_mutex_unlock(&mutex); /* Unlock */ + printf("produce %d\n",p_node->n_number); + + pthread_cond_signal(&cond); /* send a Signal to wake up a thread */ + + sleep(2); /* Sleep 2 seconds */ + } + else + { + printf("product malloc node failed!\n"); + break; + } + } +} + +int rt_application_init() +{ + int result; + + /* Create a producer thread, the property is the default value, the entry function is product, and the entry function parameter is NULL*/ + result = pthread_create(&tid1,NULL,product,NULL); + check_result("product thread created",result); + + /* Create a consumer thread, the property is the default value, the entry function is consumer, and the entry function parameter is NULL */ + result = pthread_create(&tid2,NULL,consumer,NULL); + check_result("consumer thread created",result); + + return 0; +} +``` + +## Read-write Lock + +Read-write locks are also known as multi-reader single-writer locks. The read-write lock divides the visitors of the shared resource into readers and writers. The reader only reads and accesses the shared resources, and the writer needs to write the shared resources. Only one thread can occupy the read-write lock of the write mode at the same time, but there can be multiple threads simultaneously occupying the read-write lock of the read mode. Read-write locks are suitable for reading data structures much more often than writes because read patterns can be shared when locked, and write mode locks are exclusive. + +Read-write locks are usually implemented based on mutex locks and condition variables. A thread can lock a read-write lock several times, and it must also have the corresponding number of unlocks. 
+ +The main operations of the read-write lock include: calling `pthread_rwlock_init()` to initialize a read-write lock, the write thread calling `pthread_rwlock_wrlock()` to lock the read-write lock, and the read thread calling `pthread_rwlock_rdlock()` to lock the read-write lock , when this read-write lock is not required, calling `pthread_rwlock_destroy()` to destroys the read-write lock. + +### Read-write Lock Control Block + +Each read-write lock corresponds to a read-write lock control block, including some information about the operation of the read-write lock. `pthread_rwlock_t` is a redefinition of the `pthread_rwlock` data structure, defined in the `pthread.h` header file. Before creating a read-write lock, you need to define a data structure of type `pthread_rwlock_t`. + +``` c +struct pthread_rwlock +{ + pthread_rwlockattr_t attr; /* Read-write lock attribute */ + pthread_mutex_t rw_mutex; /* Mutex lock */ + pthread_cond_t rw_condreaders; /* Conditional variables for the reader thread to use */ + pthread_cond_t rw_condwriters; /* Conditional variable for the writer thread to use */ + int rw_nwaitreaders; /* Reader thread waiting to count */ + int rw_nwaitwriters; /* Writer thread waiting to count */ + /* Read-write lock value, value 0: unlocked, value -1: is locked by the writer thread, greater than 0 value: locked by the reader thread */ + int rw_refcount; +}; +typedef struct pthread_rwlock pthread_rwlock_t; /* Type redefinition */ +``` + +### Initialize Read-write Lock + +``` c +int pthread_rwlock_init (pthread_rwlock_t *rwlock, + const pthread_rwlockattr_t *attr); +``` + +| **Parameter** | **Description** | +|------|-------------------------------------------| +| rwlock | Read-write lock handle, cannot be NULL | +| attr | Pointer to the read-write lock property, RT-Thread does not use this variable | +|**return**| —— | +| 0 | Succeeded | +| EINVAL | Invalid parameter | + +This function initializes an `rwlock` read-write lock. This function initializes the semaphore and condition variables of the read-write lock control block with default values, and the associated count parameter is initially 0. The read-write lock after initialization is in an unlocked state. + +You can also use the macro PTHREAD_RWLOCK_INITIALIZER to statically initialize the read-write lock by: `pthread_rwlock_t mutex = PTHREAD_RWLOCK_INITIALIZER` (structural constant), which is equivalent to specifying `attr` a NULL value when calling pthread_rwlock_init(). + +`attr` generally sets NULL to the default value, as described in the chapter on advanced threading. + +### Destroy Read-write Lock + +``` c +int pthread_rwlock_destroy (pthread_rwlock_t *rwlock); +``` + +| **Parameter** | **Description** | +|------|----------------------| +| rwlock | Read-write lock handle, cannot be NULL | +|**return**| —— | +| 0 | Succeeded | +| EINVAL | Invalid parameter | +| EBUSY | The read-write lock is currently being used or has a thread waiting for the read-write lock | +| EDEADLK | Deadlock | + +This function destroys a `rwlock` read-write lock, which destroys the mutex and condition variables in the read-write lock. After the destruction, the properties of the read-write lock and the control block parameters will not be valid, but you can call pthread_rwlock_init() or re-initialize the read-write lock in static mode. 
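+
+As a short usage sketch, the two initialization styles described above can be combined as follows; the locking calls themselves are described in the following sections, so only initialization and destruction are shown here.
+
+``` c
+#include <pthread.h>
+
+/* Statically initialized read-write lock: no explicit init or destroy is required */
+static pthread_rwlock_t static_rwlock = PTHREAD_RWLOCK_INITIALIZER;
+
+/* Dynamically initialized read-write lock */
+static pthread_rwlock_t dynamic_rwlock;
+
+static int rwlock_setup(void)
+{
+    int result;
+
+    result = pthread_rwlock_init(&dynamic_rwlock, NULL);   /* NULL: use the default attributes */
+    if (result != 0)
+    {
+        return result;
+    }
+
+    /* ... read-lock / write-lock the lock as described below ... */
+
+    /* destroy it only when no thread holds it or is waiting for it */
+    return pthread_rwlock_destroy(&dynamic_rwlock);
+}
+```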
+ +### Read-Lock of Read-Write Lock + +#### Blocking mode Read-lock the read-write locks + +``` c +int pthread_rwlock_rdlock (pthread_rwlock_t *rwlock); +``` + +| **Parameter** | **Description** | +|------|----------------------| +| rwlock | Read-write lock handle, cannot be NULL | +|**return**| —— | +| 0 | Succeeded | +| EINVAL | Invalid parameter | +| EDEADLK | Deadlock | + +The reader thread can call this function to read-lock the `rwlock` read-write lock. If the read-write lock is not write-locked and no writer thread is blocked on the read-write lock, the read-write thread will successfully acquire the read-write lock. If the read-write lock has been write-locked, the reader thread will block until the thread that executes the write-lock unlocks the read-write lock. + +#### Non-blocking Mode Read-lock Read-write Locks + +``` c +int pthread_rwlock_tryrdlock (pthread_rwlock_t *rwlock); +``` + +| **Parameter** | **Description** | +|------|----------------------| +| rwlock | Read-write lock handle, cannot be NULL | +|**return**| —— | +| 0 | Succeeded | +| EINVAL | Invalid parameter | +| EBUSY | The read-write lock is currently being used or has a thread waiting for the read-write lock | +| EDEADLK | Deadlock | + +This function differs from the pthread_rwlock_rdlock() function in that if the read-write lock is already write-locked, the reader thread is not blocked, but instead returns an error code EBUSY. + +#### Specify Blocking Time for the Read-write Lock to be Read-Locked + +``` c +int pthread_rwlock_timedrdlock (pthread_rwlock_t *rwlock, + const struct timespec *abstime); +``` + +| **Parameter** | **Description** | +|-------|-------------------------------------------------| +| rwlock | Read-write lock handle, cannot be NULL | +| abstime | The specified wait time in operating system clock tick (OS Tick) | +|**return**| —— | +| 0 | Succeeded | +| EINVAL | Invalid parameter | +| ETIMEDOUT | Time out | +| EDEADLK | Deadlock | + +The difference between this function and the pthread_rwlock_rdlock() function is that if the read-write lock has been write-locked, the reader thread will block the specified abstime duration. After the timeout, the function will return the error code ETIMEDOUT and the thread will be woken up to the ready state. + +### Write-Lock of Read-Write Lock + +#### Blocking Mode Write-Locks a Read-write Lock + +``` c +int pthread_rwlock_wrlock (pthread_rwlock_t *rwlock); +``` + +| **Parameter** | **Description** | +|------|----------------------| +| rwlock | Read-write lock handle, cannot be NULL | +|**return**| —— | +| 0 | Succeeded | +| EINVAL | Invalid parameter | +| EDEADLK | Deadlock | + +The writer thread calls this function to write-lock the `rwlock` read-write lock. A write-lock read-write lock is similar to a mutex, and only one thread can write-lock a read-write lock at a time. If no thread locks the read-write lock, that is, the read-write lock value is 0, the writer thread that calls this function will write-lock the read-write lock, and other threads cannot acquire the read-write lock at this time. If there is already a thread locked the read-write lock, ie the read/write lock value is not 0, then the writer thread will be blocked until the read-write lock is unlocked. 
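+
+The timed variants above (and `pthread_rwlock_timedwrlock()` below) take an absolute timeout as a `struct timespec`; as noted in the tables, the resulting wait is measured in OS ticks. Below is a minimal sketch of building such a timeout; it assumes `clock_gettime()` is available, which is the case when the libc/POSIX support mentioned earlier is enabled.
+
+``` c
+#include <pthread.h>
+#include <time.h>
+
+static pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;
+
+static int read_with_timeout(void)
+{
+    struct timespec abstime;
+    int result;
+
+    clock_gettime(CLOCK_REALTIME, &abstime);   /* current absolute time */
+    abstime.tv_sec += 1;                       /* give up about 1 second from now */
+
+    result = pthread_rwlock_timedrdlock(&rwlock, &abstime);
+    if (result != 0)
+    {
+        /* ETIMEDOUT: the lock stayed write-locked for the whole second */
+        return result;
+    }
+
+    /* ... read the shared data ... */
+
+    pthread_rwlock_unlock(&rwlock);
+    return 0;
+}
+```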
+
+#### Non-blocking Mode Write-Lock a Read-write Lock
+
+``` c
+int pthread_rwlock_trywrlock (pthread_rwlock_t *rwlock);
+```
+
+| **Parameter** | **Description**       |
+|------|----------------------|
+| rwlock | Read-write lock handle, cannot be NULL |
+|**return**| ——                   |
+| 0        | Succeeded |
+| EINVAL  | Invalid parameter |
+| EBUSY  | The read-write lock is currently write-locked or there are reader threads blocked on the read-write lock |
+| EDEADLK  | Deadlock |
+
+The only difference between this function and the pthread_rwlock_wrlock() function is that if a thread has already locked the read-write lock, i.e. the read-write lock value is not 0, the calling writer thread returns an error code immediately instead of being blocked.
+
+#### Specify Blocking Time for the Read-write Lock to be Write-Locked
+
+``` c
+int pthread_rwlock_timedwrlock (pthread_rwlock_t *rwlock,
+                    const struct timespec *abstime);
+```
+
+| **Parameter** | **Description**  |
+|--------------|---------------------|
+| rwlock   | Read-write lock handle, cannot be NULL |
+| abstime  | The specified wait time in operating system clock ticks (OS Tick) |
+|**return**| ——     |
+| 0        | Succeeded |
+| EINVAL  | Invalid parameter |
+| ETIMEDOUT  | Time out |
+| EDEADLK  | Deadlock |
+
+The only difference between this function and the pthread_rwlock_wrlock() function is that if a thread has already locked the read-write lock, that is, the read-write lock value is not 0, the calling thread blocks for the specified `abstime` duration. After the timeout, the function returns the error code ETIMEDOUT and the thread is woken up and enters the ready state.
+
+### Unlock the Read-write Lock
+
+``` c
+int pthread_rwlock_unlock (pthread_rwlock_t *rwlock);
+```
+
+| **Parameter** | **Description**       |
+|------|----------------------|
+| rwlock | Read-write lock handle, cannot be NULL |
+|**return**| ——                   |
+| 0        | Succeeded |
+| EINVAL  | Invalid parameter |
+| EDEADLK  | Deadlock |
+
+This function unlocks the `rwlock` read-write lock. A thread that locks the same read-write lock multiple times must unlock it the same number of times. If multiple threads are waiting to lock the read-write lock when it is unlocked, the system wakes the waiting threads in first-in, first-out order.
+
+### Example Code for Read-write Lock
+
+This program has two reader threads and one writer thread. The two reader threads first read-lock the read-write lock and then sleep for 2 seconds; during this time the other reader thread can still read-lock the read-write lock and read the shared data.
+
+``` c
+#include <pthread.h>
+#include <unistd.h>
+#include <stdio.h>
+
+/* Thread control block */
+static pthread_t reader1;
+static pthread_t reader2;
+static pthread_t writer1;
+/* Shared data book */
+static int book = 0;
+/* Read-write lock */
+static pthread_rwlock_t rwlock;
+/* Function result check */
+static void check_result(char* str,int result)
+{
+    if (0 == result)
+    {
+        printf("%s successfully!\n",str);
+    }
+    else
+    {
+        printf("%s failed!
error code is %d\n",str,result); + } +} +/* Thread entry */ +static void* reader1_entry(void* parameter) +{ + while (1) + { + + pthread_rwlock_rdlock(&rwlock); /* Try to read-lock the read-write lock */ + + printf("reader1 read book value is %d\n",book); + sleep(2); /* The thread sleeps for 2 seconds, switching to another thread to run */ + + pthread_rwlock_unlock(&rwlock); /* Unlock the read-write lock after the thread runs */ + } +} +static void* reader2_entry(void* parameter) +{ + while (1) + { + pthread_rwlock_rdlock(&rwlock); /* Try to read-lock the read-write lock */ + + printf("reader2 read book value is %d\n",book); + sleep(2); /* The thread sleeps for 2 seconds, switching to another thread to run */ + + pthread_rwlock_unlock(&rwlock); /* Unlock the read-write lock after the thread runs */ + } +} +static void* writer1_entry(void* parameter) +{ + while (1) + { + pthread_rwlock_wrlock(&rwlock); /* Try to write-lock the read-write lock */ + + book++; + printf("writer1 write book value is %d\n",book); + + pthread_rwlock_unlock(&rwlock); /* Unlock the read-write lock */ + + sleep(2); /* The thread sleeps for 2 seconds, switching to another thread to run */ + } +} +/* User application portal */ +int rt_application_init() +{ + int result; + /* Default property initializes read-write lock */ + pthread_rwlock_init(&rwlock,NULL); + + /* Create a reader1 thread, the thread entry is reader1_entry, the thread attribute is the default value, and the entry parameter is NULL*/ + result = pthread_create(&reader1,NULL,reader1_entry,NULL); + check_result("reader1 created",result); + + /* Create a reader2 thread, the thread entry is reader2_entry, the thread attribute is the default value, and the entry parameter is NULL*/ + result = pthread_create(&reader2,NULL,reader2_entry,NULL); + check_result("reader2 created",result); + + /* Create a writer1 thread, the thread entry is writer1_entry, the thread attribute is, and the entry parameter is NULL*/ + result = pthread_create(&writer1,NULL,writer1_entry,NULL); + check_result("writer1 created",result); + + return 0; +} +``` + +## Barrier + +Barriers are a way to synchronize multithreading. Barrier means a barrier or railing that blocks multiple threads that arrive in the same railing until all threads arrived, then remove the railings and let them go at the same time. The thread that arrives first will block, and when all the threads that call the pthread_barrier_wait() function (the number equal to the count specified by the barrier initialization) arrive, the threads will enter the ready state from the blocked state and participate in the system scheduling again. + +Barriers are implemented based on condition variables and mutex locks. The main operations include: calling `pthread_barrier_init()` to initialize a barrier, and other threads calling `pthread_barrier_wait()`. After all threads arrived, the thread wakes up to the ready state. Destroy a barrier by calling pthread_barrier_destroy() when the barrier will not be used. + +### Barrier Control Block + +Before creating a barrier, you need to define a `pthread_barrier_t` barrier control block. `pthread_barrier_t` is a redefinition of the `pthread_barrier` structure type, defined in the pthread.h header file. 
+ +``` c +struct pthread_barrier +{ + int count; /* The number of waiting threads specified */ + pthread_cond_t cond; /* Conditional variable */ + pthread_mutex_t mutex; /* Mutex lock */ +}; +typedef struct pthread_barrier pthread_barrier_t; +``` + +### Create a Barrier + +``` c +int pthread_barrier_init(pthread_barrier_t *barrier, + const pthread_barrierattr_t *attr, + unsigned count); +``` + +| **Parameter** | **Description** | +|-------|-------------------------------| +| attr | Pointer to the barrier property, if passing NULL, use the default value. PTHREAD_PROCESS_PRIVATE must be used as a non-NULL value. | +| barrier | Barrier handle | +| count | The number of waiting threads specified | +|**return**| —— | +| 0 | Succeeded | +| EINVAL | Invalid parameter | + +This function creates a `barrier` barrier and initializes the conditional variables and mutex locks of the barrier control block according to the default parameters. The number of waiting threads specified after initialization is `count`, and pthread_barrier_wait() must be called for `count` threads. + +attr generally sets NULL to the default value, as described in the chapter on *thread advanced programming*. + +### Destruction of Barrier + +``` c +int pthread_barrier_destroy(pthread_barrier_t *barrier); +``` + +| **Parameter** | **Description** | +|-------|--------| +| barrier | Barrier handle | +|**return**| —— | +| 0 | Succeeded | +| EINVAL | Invalid parameter | + +This function destroys a barrier. The barrier's properties and control block parameters will not be valid after destruction, but can be reinitialized by calling pthread_barrier_init(). + +### Wait for Barrier + +``` c +int pthread_barrier_wait(pthread_barrier_t *barrier); +``` + +| **Parameter** | **Description** | +|-------|--------| +| barrier | Barrier handle | +|**return**| —— | +| 0 | Succeeded | +| EINVAL | Invalid parameter | + +This function synchronizes the threads waiting in front of the barrier and called by each thread. If the number of queue waiting threads is not 0, count will be decremented by 1. If the count is 0, indicating that all threads have reached the railing. All arriving threads will be woken up and re-entered into the ready state to participate in system scheduling. If count is not 0 after the decrease, it indicates that there is still threads that do not reach the barrier, and the calling thread will block until all threads reach the barrier. + +### Example Code for Barrier + +This program will create 3 threads, initialize a barrier, and the barrier waits for 3 threads. 3 threads will call pthread_barrier_wait() to wait in front of the barrier. When all 3 threads are arrived, 3 threads will enter the ready state. The output count information is printed every 2 seconds. + +``` c +#include +#include +#include + +/* Thread control block */ +static pthread_t tid1; +static pthread_t tid2; +static pthread_t tid3; +/* Barrier control block */ +static pthread_barrier_t barrier; +/* Function return value check function */ +static void check_result(char* str,int result) +{ + if (0 == result) + { + printf("%s successfully!\n",str); + } + else + { + printf("%s failed! 
error code is %d\n",str,result); + } +} +/* Thread 1 entry function */ +static void* thread1_entry(void* parameter) +{ + int count = 0; + + printf("thread1 have arrived the barrier!\n"); + pthread_barrier_wait(&barrier); /* Reach the barrier and wait for other threads to arrive */ + + while (1) + { + /* Print thread count value output */ + printf("thread1 count: %d\n",count ++); + + /* Sleep 2 seconds */ + sleep(2); + } +} +/* Thread 2 entry function */ +static void* thread2_entry(void* parameter) +{ + int count = 0; + + printf("thread2 have arrived the barrier!\n"); + pthread_barrier_wait(&barrier); + + while (1) + { + /* Print thread count value */ + printf("thread2 count: %d\n",count ++); + + /* Sleep 2 seconds */ + sleep(2); + } +} +/* Thread 3 entry function */ +static void* thread3_entry(void* parameter) +{ + int count = 0; + + printf("thread3 have arrived the barrier!\n"); + pthread_barrier_wait(&barrier); + + while (1) + { + /* Print thread count value */ + printf("thread3 count: %d\n",count ++); + + /* Sleep 2 seconds */ + sleep(2); + } +} +/* User application portal */ +int rt_application_init() +{ + int result; + pthread_barrier_init(&barrier,NULL,3); + + /* Create thread 1, thread entry is thread1_entry, property parameter is set to NULL, select default value, entry parameter is NULL*/ + result = pthread_create(&tid1,NULL,thread1_entry,NULL); + check_result("thread1 created",result); + + /* Create thread 2, thread entry is thread2_entry, property parameter is set to NULL, select default value, entry parameter is NULL*/ + result = pthread_create(&tid2,NULL,thread2_entry,NULL); + check_result("thread2 created",result); + + /* Create thread 3, thread entry is thread3_entry, property parameter is set to NULL Select default value, entry parameter is NULL*/ + result = pthread_create(&tid3,NULL,thread3_entry,NULL); + check_result("thread3 created",result); + +} +``` + +## Semaphore + +Semaphores can be used for communication between processes and processes, or between in-process threads. Each semaphore has a semaphore value that is not less than 0, corresponding to the available amount of semaphore. Call sem_init() or sem_open() to assign an initial value to the semaphore . Call sem_post() to increment the semaphore value by 1. Call sem_wait() to decrement the semaphore value by 1. If the current semaphore is 0, call sem_wait(), the thread will suspended on the wait queue for this semaphore until the semaphore value is greater than 0 and is available. + +Depending on the value of the semaphore (representing the number of available resources), POSIX semaphores can be divided into: + +- **Binary semaphore**: The value of the semaphore is only 0 and 1, and the initial value is specified as 1. This is the same as a mutex. If the resource is locked, the semaphore value is 0. If the resource is available, the semaphore value is 1. Equivalent to only one key, after the thread gets the key, after completing the access to the shared resource, you need to unlock it, put the key back, and use it for other threads that need this key. The method is the same as the mutex lock. The wait semaphore function must be used in pairs with the send semaphore function. It cannot be used alone. + +- **Count semaphore**: The value of the semaphore ranges from 0 to a limit greater than 1 (POSIX indicates that the system's maximum limit is at least 32767). This count indicates the number of available semaphores. 
At this point, the send semaphore function can be called separately to send the semaphore, which is equivalent to having more than one key: a thread takes a key and consumes it, and the used key does not have to be put back.
+
+POSIX semaphores are also divided into named semaphores and unnamed semaphores:
+
+- Named semaphore: its value is stored in a file and it is generally used for inter-process synchronization or mutual exclusion.
+
+- Unnamed semaphore: its value is stored in memory and it is generally used for inter-thread synchronization or mutual exclusion.
+
+The POSIX semaphores of the RT-Thread operating system are mainly a wrapper around RT-Thread kernel semaphores and are mainly used for communication between threads in the system. They are used in the same way as the semaphores of the RT-Thread kernel.
+
+### Semaphore Control Block
+
+Each semaphore corresponds to a semaphore control block. Before creating a semaphore, you need to define a `sem_t` semaphore control block. `sem_t` is a redefinition of the `posix_sem` structure type, defined in the semaphore.h header file.
+
+``` c
+struct posix_sem
+{
+    rt_uint16_t refcount;
+    rt_uint8_t unlinked;
+    rt_uint8_t unamed;
+    rt_sem_t sem;              /* RT-Thread semaphore */
+    struct posix_sem* next;    /* Point to the next semaphore control block */
+};
+typedef struct posix_sem sem_t;
+```
+
+`rt_sem_t` is the RT-Thread semaphore control block, defined in the rtdef.h header file.
+
+``` c
+struct rt_semaphore
+{
+   struct rt_ipc_object parent;  /* Inherited from the ipc_object class */
+   rt_uint16_t value;            /* Semaphore value */
+};
+/* rt_sem_t is a pointer type to the semaphore structure */
+typedef struct rt_semaphore* rt_sem_t;
+```
+
+### Unnamed Semaphore
+
+The value of an unnamed semaphore is stored in memory and is generally used for inter-thread synchronization or mutual exclusion. Before using it, you must first call sem_init() to initialize it.
+
+#### Initialize the Unnamed Semaphore
+
+``` c
+int sem_init(sem_t *sem, int pshared, unsigned int value);
+```
+
+| **Parameter** | **Description** |
+|-------|--------------------------------------|
+| sem | Semaphore handle |
+| value | The initial value of the semaphore, indicating the available amount of semaphore resources |
+| pshared | RT-Thread unimplemented parameter |
+|**return**| —— |
+| 0 | Succeeded |
+| -1 | Failed |
+
+This function initializes an unnamed semaphore `sem`, initializes the semaphore-related data structure according to the given or default parameters, and puts the semaphore into the semaphore list. The semaphore value after initialization is the given initial value. This function is a wrapper of the rt_sem_create() function.
+
+#### Destroy Unnamed Semaphore
+
+``` c
+int sem_destroy(sem_t *sem);
+```
+
+| **Parameter** | **Description** |
+|----|----------|
+| sem | Semaphore handle |
+|**return**| —— |
+| 0 | Succeeded |
+| -1 | Failed |
+
+This function destroys an unnamed semaphore `sem` and releases the resources occupied by the semaphore.
+
+### Named Semaphore
+
+A named semaphore has its value stored in a file and is generally used for inter-process synchronization or mutual exclusion. Two processes can operate on named semaphores of the same name. The named semaphore implementation in the RT-Thread operating system is similar to the unnamed semaphore; it is designed for communication between threads and is similar in usage.
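+
+The subsections below describe the individual calls. As a quick, hedged orientation, the sketch below shows a typical named-semaphore life cycle; the name "demo_sem" is made up, and the extra mode/initial-value arguments to sem_open() follow the usual POSIX convention for O_CREAT:
+
+``` c
+#include <semaphore.h>
+#include <fcntl.h>      /* assumed to provide O_CREAT */
+#include <stdio.h>
+
+static void named_sem_demo(void)
+{
+    /* Create the semaphore if it does not exist yet (initial value 0). */
+    sem_t *sem = sem_open("demo_sem", O_CREAT, 0666, 0);
+    if (sem == NULL)        /* RT-Thread returns NULL on failure */
+    {
+        printf("sem_open failed\n");
+        return;
+    }
+
+    sem_post(sem);          /* release one resource */
+    sem_wait(sem);          /* take it back */
+
+    sem_close(sem);         /* drop this thread's reference */
+    sem_unlink("demo_sem"); /* delete it once the last reference is closed */
+}
+```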
+ +#### Create or Open a Named Semaphore + +``` c +sem_t *sem_open(const char *name, int oflag, ...); +``` + +| **Parameter** | **Description** | +|----------|----------------| +| name | Semaphore name | +| oflag | The way the semaphore is opened | +|**return**| —— | +| Semaphore handle | Succeeded | +| NULL | Failed | + +This function creates a new semaphore based on the semaphore name or opens an existing semaphore. The optional values for Oflag are `0`, `O_CREAT` or `O_CREAT|O_EXCL`. If Oflag is set to `O_CREAT` , a new semaphore is created. If Oflag sets to `O_CREAT|O_EXCL`, it returns NULL if the semaphore already exists, and creates a new semaphore if it does not exist. If Oflag is set to 0, a semaphore does not exist and NULL is returned. + +#### Detach the Named Semaphore + +``` c +int sem_unlink(const char *name); +``` + +| **Parameter** | **Description** | +|----|----------| +| name | Semaphore name | +|**return**| —— | +| 0 | Succeeded | +| -1 | Failed, semaphore does not exist | + +This function looks up the semaphore based on the semaphore name, and marks the semaphore as a detached state if the semaphore is present. Then check the reference count. If the value is 0, the semaphore is deleted immediately. If the value is not 0, it will not be deleted until all threads holding the semaphore close the semaphore. + +#### Close the Named Semaphore + +``` c +int sem_close(sem_t *sem); +``` + +| **Parameter** | **Description** | +|----|----------| +| sem | Semaphore handle | +|**return**| —— | +| 0 | Succeeded | +| -1 | Failed | + +When a thread terminates,it closes the semaphore it occupies. Whether the thread terminates voluntarily or involuntarily, this closing operation is performed. This is equivalent to a reduction of 1 in the number of semaphores held. If the holding count is zero after subtracting 1 and the semaphore is in separated state, the `sem` semaphore will be deleted and the resources it occupies will be released. + +### Obtain Semaphore Value + +``` c +int sem_getvalue(sem_t *sem, int *sval); +``` + +| **Parameter** | **Description** | +|----|---------------------------------| +| sem | Semaphore handle, cannot be NULL | +| sval | Save the obtained semaphore value address, cannot be NULL | +|**return**| —— | +| 0 | Succeeded | +| -1 | Failed | + +This function obtains the value of the semaphore and saves it in the memory pointed to by `sval` to know the amount of semaphore resources. + +### Blocking Mode to Wait Semaphore + +``` c +int sem_wait(sem_t *sem); +``` + +| **Parameter** | **Description** | +|----|----------------------| +| sem | Semaphore handle, cannot be NULL | +|**return**| —— | +| 0 | Succeeded | +| -1 | Failed | + +The thread calls this function to get the semaphore, which is a wrapper of the `rt_sem_take(sem,RT_WAITING_FOREVER)` function. If the semaphore value is greater than zero, the semaphore is available, the thread gets the semaphore, and the semaphore value is decremented by one. If the semaphore value is equal to 0, indicating that the semaphore is not available, the thread is blocked and entering the suspended state and queued in a first-in, first-out manner until the semaphore is available. 
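+
+In addition to the producer-consumer example later in this section, a minimal signalling sketch may help; it pairs sem_wait() with sem_post() (documented below under *Send Semaphore*), and the thread names are made up:
+
+``` c
+#include <semaphore.h>
+#include <pthread.h>
+#include <stdio.h>
+
+static sem_t ready_sem;     /* initial value 0: nothing is ready yet */
+
+static void* waiter(void* parameter)
+{
+    sem_wait(&ready_sem);   /* blocks until another thread posts */
+    printf("waiter: woken up by sem_post\n");
+    return NULL;
+}
+
+static void* notifier(void* parameter)
+{
+    printf("notifier: signalling\n");
+    sem_post(&ready_sem);   /* wakes the waiter */
+    return NULL;
+}
+
+/* Hypothetical entry, mirroring the rt_application_init() style used in this manual. */
+int sem_signal_demo(void)
+{
+    pthread_t t1, t2;
+
+    sem_init(&ready_sem, 0, 0);  /* pshared is not implemented by RT-Thread */
+    pthread_create(&t1, NULL, waiter, NULL);
+    pthread_create(&t2, NULL, notifier, NULL);
+    return 0;
+}
+```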
+ +### Non-blocking Mode to Wait Semaphore + +``` c +int sem_trywait(sem_t *sem); +``` + +| **Parameter** | **Description** | +|----|----------------------| +| sem | Semaphore handle, cannot be NULL | +|**return**| —— | +| 0 | Succeeded | +| -1 | Failed | + +This function is a non-blocking version of the sem_wait() function and is a wrapper of the `rt_sem_take(sem,0)` function. When the semaphore is not available, the thread does not block, but returns directly. + +### Specify the Blocking Time Waiting for the Semaphore + +``` c +int sem_timedwait(sem_t *sem, const struct timespec *abs_timeout); +``` + +| **Parameter** | **Description** | +|------------|-------------------------------------------------| +| sem | Semaphore handle, cannot be NULL | +| abs_timeout | The specified wait time in operating system clock tick (OS Tick) | +|**return**| —— | +| 0 | Succeeded | +| -1 | Failed | + +The difference between this function and `the sem_wait()` function is that if the semaphore is not available, the thread will block the duration of `abs_timeout`. After the timeout, the function returns -1, and the thread will be awakened from the blocking state to the ready state. + +### Send Semaphore + +``` c +int sem_post(sem_t *sem); +``` + +| **Parameter** | **Description** | +|----|----------------------| +| sem | Semaphore handle, cannot be NULL | +|**return**| —— | +| 0 | Succeeded | +| -1 | Failed | + +This function will release a sem semaphore, which is a wrapper of the rt_sem_release() function. If the thread queue waiting for the semaphore is not empty, indicating that there are threads waiting for the semaphore, the first thread waiting for the semaphore will switch from the suspended state to the ready state, waiting for system scheduling. If no thread is waiting for the semaphore, the semaphore value will be incremented by one. + +### Example Code for Unnamed Semaphore + +A typical case of semaphore usage is the producer consumer model. A producer thread and a consumer thread operate on the same block of memory, the producer fills the shared memory, and the consumer reads the data from the shared memory. + +This program creates 2 threads, 2 semaphores, one semaphore indicates that the shared data is empty, one semaphore indicates that the shared data is not empty, and a mutex is used to protect the shared resource. After the producer thread produces the data, it will send a `full_sem` semaphore to the consumer, informing the consumer that the thread has data available, and waiting for the `empty_sem` semaphore sent by the consumer thread after 2 seconds of sleep. The consumer thread processes the shared data after the `full_sem` sent by the producer, and sends an `empty_sem` semaphore to the producer thread after processing. The program will continue to loop like this. + +``` c +#include +#include +#include +#include +#include + +/* Statically initialize a mutex to protect shared resources */ +static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER; +/* 2 semaphore control blocks, one for resource empty signals and one for resource full signals */ +static sem_t empty_sem,full_sem; + +/* Pointer to the thread control block */ +static pthread_t tid1; +static pthread_t tid2; + +/* Function return value check */ +static void check_result(char* str,int result) +{ + if (0 == result) + { + printf("%s successfully!\n",str); + } + else + { + printf("%s failed! error code is %d\n",str,result); + } +} + +/* The structure data produced by the producer is stored in the linked list. 
*/ +struct node +{ + int n_number; + struct node* n_next; +}; +struct node* head = NULL; /* Link header, a shared resource */ + +/* Consumer thread entry function */ +static void* consumer(void* parameter) +{ + struct node* p_node = NULL; + + while (1) + { + sem_wait(&full_sem); + pthread_mutex_lock(&mutex); /* Lock mutex */ + + while (head != NULL) /* Determine if there are elements in the list */ + { + p_node = head; /* Obtain resources */ + head = head->n_next; /* Header pointing to the next resource */ + /* Print */ + printf("consume %d\n",p_node->n_number); + + free(p_node); /* Release the memory occupied by the node after getting the resource */ + } + + pthread_mutex_unlock(&mutex); /* The critical section data operation is completed, and the mutex is released. */ + + sem_post(&empty_sem); /* Send a null semaphore to the producer */ + } +} +/* Producer thread entry function */ +static void* product(void* patameter) +{ + int count = 0; + struct node *p_node; + + while(1) + { + /* Dynamically allocate a block of structured memory */ + p_node = (struct node*)malloc(sizeof(struct node)); + if (p_node != NULL) + { + p_node->n_number = count++; + pthread_mutex_lock(&mutex); /* To operate on the critical resource head, lock it first */ + + p_node->n_next = head; + head = p_node; /* Insert data into the list header */ + + pthread_mutex_unlock(&mutex); /* Unlock */ + printf("produce %d\n",p_node->n_number); + + sem_post(&full_sem); /* Send a full semaphore to the consumer */ + } + else + { + printf("product malloc node failed!\n"); + break; + } + sleep(2); /* Sleep 2 seconds */ + sem_wait(&empty_sem); /* Wait for consumers to send empty semapho */ + } +} + +int rt_application_init() +{ + int result; + + sem_init(&empty_sem,NULL,0); + sem_init(&full_sem,NULL,0); + /* Create a producer thread, the property is the default value, the entry function is product, and the entry function parameter is NULL*/ + result = pthread_create(&tid1,NULL,product,NULL); + check_result("product thread created",result); + + /* Create a consumer thread, the property is the default value, the entry function is consumer, and the entry function parameter is NULL */ + result = pthread_create(&tid2,NULL,consumer,NULL); + check_result("consumer thread created",result); + + return 0; +} +``` + +## Message Queue + +Message Queuing is another commonly used inter-thread communication method that accepts messages of unfixed length from threads or interrupt service routines and caches the messages in their own memory space. Other threads can also read the corresponding message from the message queue, and when the message queue is empty, the reader thread can be suspended. When a new message arrives, the suspended thread will be woken up to receive and process the message. + +The main operations of the message queue include: creating or opening by the function `mq_open()`, calling `mq_send()` to send a message to the message queue, calling `mq_receive()` to get a message from the message queue, and when the message queue is not in use, you can call `mq_unlink()` to delete message queue. + +POSIX message queue is mainly used for inter-process communication. The POSIX message queue of RT-Thread operating system is mainly based on a package of RT-Thread kernel message queue, mainly used for communication between threads in the system. It is used in the same way as the message queue of the RT-Thread kernel. + +### Message Queue Control Block + +Each message queue corresponds to a message queue control block. 
Before creating a message queue, you need to define a message queue control block. The message queue control block is defined in the mqueue.h header file. + +``` c +struct mqdes +{ + rt_uint16_t refcount; /* Reference count */ + rt_uint16_t unlinked; /* Separation status of the message queue, a value of 1 indicates that the message queue has been separated */ + rt_mq_t mq; /* RT-Thread message queue control block */ + struct mqdes* next; /* Point to the next message queue control block */ +}; +typedef struct mqdes* mqd_t; /* Message queue control block pointer type redefinition */ +``` + +### Create or Open a Message Queue + +``` c +mqd_t mq_open(const char *name, int oflag, ...); +``` + +| **Parameter** | **Description** | +|----------|----------------| +| name | Message queue name | +| oflag | Message queue open mode | +|**return**| —— | +| Message queue handle | Succeeded | +| NULL | Failed | + +This function creates a new message queue or opens an existing message queue based on the name of the message queue. The optional values for Oflag are `0`, `O_CREAT` or `O_CREAT\|O_EXCL`. If Oflag is set to `O_CREAT` then a new message queue is created. If Oflag sets `O_CREAT\|O_EXCL`, it returns NULL if the message queue already exists, and creates a new message queue if it does not exist. If Oflag is set to `0`, the message queue does not exist and returns NULL. + +### Detach Message Queue + +``` c +int mq_unlink(const char *name); +``` + +| **Parameter** | **Description** | +|----|------------| +| name | Message queue name | +|**return**| —— | +| 0 | Succeeded | +| -1 | Failed | + +This function finds the message queue based on the message queue name name. If found, it sets the message queue to a detached state. If the hold count is 0, the message queue is deleted and the resources occupied by the message queue are released. + +### Close the Message Queue + +``` c +int mq_close(mqd_t mqdes); +``` + +| **Parameter** | **Description** | +|----------|------------| +| mqdes | Message queue handle | +|**return**| —— | +| 0 | Succeeded | +| -1 | Failed | + +When a thread terminates,it closes the message queue it occupies. Whether the thread terminates voluntarily or involuntarily, this closure is performed. This is equivalent to the message queue holding count minus 1. If the holding count is 0 after the minus 1 and the message queue is in the separated state, the `mqdes` message queue will be deleted and released the resources it occupies. + +### Block Mode to Send a Message + +``` c +int mq_send(mqd_t mqdes, + const char *msg_ptr, + size_t msg_len, + unsigned msg_prio); +``` + +| **Parameter** | **Description** | +|---------|----------------------------------| +| mqdes | Message queue handle, cannot be NULL | +| sg_ptr | Pointer to the message to be sent, cannot be NULL | +| msg_len | The length of the message sent | +| msg_prio | RT-Thread unimplemented this parameter | +|**return**| —— | +| 0 | Succeeded | +| -1 | Failed | + +This function is used to send a message to the `mqdes` message queue, which is a wrapper of the rt_mq_send() function. This function adds the message pointed to by `msg_ptr` to the `mqdes` message queue, and the length of the message sent `msg_len` must be less than or equal to the maximum message length set when the message queue is created. + +If the message queue is full, that is, the number of messages in the message queue is equal to the maximum number of messages, the thread that sent the message or the interrupt program will receive an error code (-RT_EFULL). 
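+
+A short sketch (queue name, depth, and sizes are arbitrary) of the O_CREAT|O_EXCL behaviour described above, followed by one blocking send; O_CREAT and O_EXCL are assumed to come from fcntl.h:
+
+``` c
+#include <mqueue.h>
+#include <fcntl.h>
+#include <string.h>
+
+static mqd_t open_demo_queue(void)
+{
+    struct mq_attr attr;
+    mqd_t mq;
+
+    memset(&attr, 0, sizeof(attr));
+    attr.mq_maxmsg  = 8;    /* arbitrary queue depth */
+    attr.mq_msgsize = 64;   /* arbitrary maximum message size */
+
+    /* Exclusive creation: in RT-Thread this returns NULL if the queue already exists. */
+    mq = mq_open("demo_mq", O_CREAT | O_EXCL, 0666, &attr);
+    if (mq == NULL)
+    {
+        /* The queue already exists, so just open it. */
+        mq = mq_open("demo_mq", 0);
+    }
+
+    if (mq != NULL)
+    {
+        const char msg[] = "hello";
+        mq_send(mq, msg, sizeof(msg), 0);   /* msg_prio is not used by RT-Thread */
+    }
+    return mq;
+}
+```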
+
+### Specify Blocking Time to Send a Message
+
+``` c
+int mq_timedsend(mqd_t mqdes,
+                const char *msg_ptr,
+                size_t msg_len,
+                unsigned msg_prio,
+                const struct timespec *abs_timeout);
+```
+
+| **Parameter** | **Description** |
+|------------|-------------------------------------------------|
+| mqdes | Message queue handle, cannot be NULL |
+| msg_ptr | Pointer to the message to be sent, cannot be NULL |
+| msg_len | The length of the message sent |
+| msg_prio | RT-Thread unimplemented parameter |
+| abs_timeout | The specified wait time in operating system clock tick (OS Tick) |
+|**return**| —— |
+| 0 | Succeeded |
+| -1 | Failed |
+
+Currently RT-Thread does not support sending messages with a specified blocking time, but the function interface has been implemented, which is equivalent to calling mq_send().
+
+### Blocking Mode to Receive Message
+
+``` c
+ssize_t mq_receive(mqd_t mqdes,
+                  char *msg_ptr,
+                  size_t msg_len,
+                  unsigned *msg_prio);
+```
+
+| **Parameter** | **Description** |
+|---------|----------------------------------|
+| mqdes | Message queue handle, cannot be NULL |
+| msg_ptr | Pointer to the buffer that receives the message, cannot be NULL |
+| msg_len | The size of the receive buffer |
+| msg_prio | RT-Thread unimplemented parameter |
+|**return**| —— |
+| Message length | Succeeded |
+| -1 | Failed |
+
+This function removes the oldest message from the `mqdes` message queue and puts the message in the memory pointed to by `msg_ptr`. If the message queue is empty, the thread that called the mq_receive() function will block until a message in the message queue is available.
+
+### Specify Blocking Time to Receive Message
+
+``` c
+ssize_t mq_timedreceive(mqd_t mqdes,
+                       char *msg_ptr,
+                       size_t msg_len,
+                       unsigned *msg_prio,
+                       const struct timespec *abs_timeout);
+```
+
+| **Parameter** | **Description** |
+|------------|-------------------------------------------------|
+| mqdes | Message queue handle, cannot be NULL |
+| msg_ptr | Pointer to the buffer that receives the message, cannot be NULL |
+| msg_len | The size of the receive buffer |
+| msg_prio | RT-Thread unimplemented parameter |
+| abs_timeout | The specified wait time in operating system clock tick (OS Tick) |
+|**return**| —— |
+| Message length | Succeeded |
+| -1 | Failed |
+
+The difference between this function and the mq_receive() function is that if the message queue is empty, the thread will block for the `abs_timeout` duration. After the timeout, the function will return `-1`, and the thread will be awakened from the blocking state to the ready state.
+
+### Example Code for Message Queue
+
+This program creates 3 threads: thread1 receives messages from the message queue, while thread2 and thread3 send messages to the message queue.
+
+``` c
+#include <mqueue.h>
+#include <pthread.h>
+#include <stdio.h>
+#include <string.h>
+#include <unistd.h>
+
+/* Thread control block */
+static pthread_t tid1;
+static pthread_t tid2;
+static pthread_t tid3;
+/* Message queue handle */
+static mqd_t mqueue;
+
+/* Function return value check function */
+static void check_result(char* str,int result)
+{
+    if (0 == result)
+    {
+        printf("%s successfully!\n",str);
+    }
+    else
+    {
+        printf("%s failed! 
error code is %d\n",str,result); + } +} +/* Thread 1 entry function */ +static void* thread1_entry(void* parameter) +{ + char buf[128]; + int result; + + while (1) + { + /* Receive messages from the message queue */ + result = mq_receive(mqueue, &buf[0], sizeof(buf), 0); + if (result != -1) + { + /* Output content */ + printf("thread1 recv [%s]\n", buf); + } + + /* Sleep 1 second */ + // sleep(1); + } +} +/* Thread 2 entry function */ +static void* thread2_entry(void* parameter) +{ + int i, result; + char buf[] = "message2 No.x"; + + while (1) + { + for (i = 0; i < 10; i++) + { + buf[sizeof(buf) - 2] = '0' + i; + + printf("thread2 send [%s]\n", buf); + /* Send a message to the message queue */ + result = mq_send(mqueue, &buf[0], sizeof(buf), 0); + if (result == -1) + { + /* Message queue full, delayed 1s */ + printf("thread2:message queue is full, delay 1s\n"); + sleep(1); + } + } + + /* Sleep 2 seconds */ + sleep(2); + } +} +/* Thread 3 entry function */ +static void* thread3_entry(void* parameter) +{ + int i, result; + char buf[] = "message3 No.x"; + + while (1) + { + for (i = 0; i < 10; i++) + { + buf[sizeof(buf) - 2] = '0' + i; + + printf("thread3 send [%s]\n", buf); + /* Send messages to the message queue */ + result = mq_send(mqueue, &buf[0], sizeof(buf), 0); + if (result == -1) + { + /* Message queue full, delayed 1s */ + printf("thread3:message queue is full, delay 1s\n"); + sleep(1); + } + } + + /* Sleep 2 seconds */ + sleep(2); + } +} +/* User application portal */ +int rt_application_init() +{ + int result; + struct mq_attr mqstat; + int oflag = O_CREAT|O_RDWR; +#define MSG_SIZE 128 +#define MAX_MSG 128 + memset(&mqstat, 0, sizeof(mqstat)); + mqstat.mq_maxmsg = MAX_MSG; + mqstat.mq_msgsize = MSG_SIZE; + mqstat.mq_flags = 0; + mqueue = mq_open("mqueue1",O_CREAT,0777,&mqstat); + + /* Create thread 1, thread entry is thread1_entry, property parameter is set to NULL, select default value, entry parameter is NULL*/ + result = pthread_create(&tid1,NULL,thread1_entry,NULL); + check_result("thread1 created",result); + + /* Create thread 2, thread entry is thread2_entry, property parameter is set to NULL, select default value, entry parameter is NULL*/ + result = pthread_create(&tid2,NULL,thread2_entry,NULL); + check_result("thread2 created",result); + + /* Create thread 3, thread entry is thread3_entry, property parameter is set to NULL Select default value, entry parameter is NULL*/ + result = pthread_create(&tid3,NULL,thread3_entry,NULL); + check_result("thread3 created",result); + + + return 0; +} +``` + +## Thread Advanced Programming + +This section provides a detailed introduction to some of the rarely used property objects and related functions. + +The thread attributes implemented by RT-Thread include thread stack size, thread priority, thread separation status, and thread scheduling policy. `pthread_create()` must initialize the property object before using the property object. APIs such as setting thread properties should be called before the thread is created. Changes of thread attributes do not affect the threads that have been created. + +The thread attribute structure `pthread_attr_t` is defined in the pthread.h header file. 
The thread attribute structure is as follows:
+
+``` c
+/* pthread_attr_t type redefinition */
+typedef struct pthread_attr pthread_attr_t;
+/* Thread attribute structure */
+struct pthread_attr
+{
+    void*       stack_base;     /* Thread stack address */
+    rt_uint32_t stack_size;     /* Thread stack size */
+    rt_uint8_t  priority;       /* Thread priority */
+    rt_uint8_t  detachstate;    /* Thread detached state */
+    rt_uint8_t  policy;         /* Thread scheduling policy */
+    rt_uint8_t  inheritsched;   /* Thread inheritance */
+};
+```
+
+#### Thread Property Initialization and Deinitialization
+
+The thread property initialization and deinitialization functions are as follows:
+
+``` c
+int pthread_attr_init(pthread_attr_t *attr);
+int pthread_attr_destroy(pthread_attr_t *attr);
+```
+
+| **Parameter** | **Description** |
+|----|------------------|
+| attr | Pointer to the thread property |
+|**return**| —— |
+| 0 | Succeeded |
+
+The pthread_attr_init() function initializes the thread attribute structure `attr` with default values, which is equivalent to passing NULL as the attribute parameter when creating the thread. You need to define a `pthread_attr_t` attribute object before use, and this function must be called before the pthread_create() function.
+
+The pthread_attr_destroy() function deinitializes the property pointed to by `attr`; the property object can then be reinitialized by calling the pthread_attr_init() function again.
+
+#### Thread Detached State
+
+The functions for setting or getting the detached state of a thread are as follows. By default, a thread is joinable (non-detached).
+
+``` c
+int pthread_attr_setdetachstate(pthread_attr_t *attr, int state);
+int pthread_attr_getdetachstate(pthread_attr_t const *attr, int *state);
+```
+
+| **Parameter** | **Description** |
+|----------|-------------------|
+| attr | Pointer to the thread property |
+| state | Thread detached state |
+|**return**| —— |
+| 0 | Succeeded |
+
+The detached state value `state` can be `PTHREAD_CREATE_JOINABLE` (joinable) or `PTHREAD_CREATE_DETACHED` (detached).
+
+The detached state of a thread determines how the resources it occupies are reclaimed after it finishes running. A thread is either joinable or detached. After creating a joinable thread, you should call pthread_join() or pthread_detach() so that the resources occupied by the thread are reclaimed after it finishes running. If the thread's detached state is joinable, other threads can call the pthread_join() function to wait for the thread to finish, obtain its return value, and then reclaim the resources it occupied. A thread in the detached state cannot be joined by another thread; its system resources are released immediately after it finishes running.
+
+#### Thread Scheduling Policy
+
+The functions for setting / obtaining the thread scheduling policy are as follows:
+
+``` c
+int pthread_attr_setschedpolicy(pthread_attr_t *attr, int policy);
+int pthread_attr_getschedpolicy(pthread_attr_t const *attr, int *policy);
+```
+
+Only the function interface is implemented.
The default different priorities are based on priority scheduling, and the same priority time slice polling scheduling + +#### Thread Scheduling Parameter + +Set / Obtain the thread's priority function as follows: + +``` c +int pthread_attr_setschedparam(pthread_attr_t *attr, + struct sched_param const *param); +int pthread_attr_getschedparam(pthread_attr_t const *attr, + struct sched_param *param); +``` + +| **Parameter** | **Description** | +|----------|------------------| +| attr | Pointer to the thread property | +| param | Pointer to the dispatch parameter | +|**return**| —— | +| 0 | Succeeded | + +The `pthread_attr_setschedparam()` function sets the priority of the thread. Use `param` to set the thread priority. + +**Parameter** : The `struct sched_param` is defined in sched.h and has the following structure: + +``` c +struct sched_param +{ + int sched_priority; /* Thread priority */ +}; +``` + +The member `sched_paraority` of the `sched_param` controls the priority value of the thread. + +#### Thread Stack Size + +Set / Obtain the stack size of a thread is as follows: + +``` c +int pthread_attr_setstacksize(pthread_attr_t *attr, size_t stack_size); +int pthread_attr_getstacksize(pthread_attr_t const *attr, size_t *stack_size); +``` + +| **Parameter** | **Description** | +|-----------|------------------| +| attr | Pointer to the thread property | +| stack_size | Thread stack size | +|**return**| —— | +| 0 | Succeeded | + +The `pthread_attr_setstacksize()` function sets the stack size in bytes. Stack space address alignment is required on most systems (for example, the ARM architecture needs to be aligned to a 4-byte address). + +#### Thread Stack Size and Address + +Set / Obtain the stack address and stack size of a thread is as follows: + +``` c +int pthread_attr_setstack(pthread_attr_t *attr, + void *stack_base, + size_t stack_size); +int pthread_attr_getstack(pthread_attr_t const *attr, + void**stack_base, + size_t *stack_size); +``` + +| **Parameter** | **Description** | +|-----------|------------------| +| attr | Pointer to the thread property | +| stack_size | Thread stack size | +| stack_base | Thread stack address | +|**return**| —— | +| 0 | Succeeded | + +#### Thread Attribute Related Function + +The function that sets / obtains the scope of the thread is as follows: + +``` c +int pthread_attr_setscope(pthread_attr_t *attr, int scope); +int pthread_attr_getscope(pthread_attr_t const *attr); +``` + +| **Parameter** | **Description** | +|-----------|------------------| +| attr | Pointer to the thread property | +| scope | Thread scope | +|**return**| —— | +| 0 | scope is PTHREAD_SCOPE_SYSTEM | +| EOPNOTSUPP | scope is PTHREAD_SCOPE_PROCESS | +| EINVAL | scope is PTHREAD_SCOPE_SYSTEM | + +#### Example Code for Thread Property + +This program will initialize 2 threads, they have a common entry function, but their entry parameters are not the same. The first thread created will use the provided `attr` thread attribute, and the other thread will use the system default attribute. Thread priority is a very important parameter, so this program will modify the first created thread to have a priority of 8, and the system default priority of 24. + +``` c +#include +#include +#include +#include + +/* Thread control block */ +static pthread_t tid1; +static pthread_t tid2; + +/* Function return value check */ +static void check_result(char* str,int result) +{ + if (0 == result) + { + printf("%s successfully!\n",str); + } + else + { + printf("%s failed! 
error code is %d\n",str,result); + } +} +/* Thread entry function */ +static void* thread_entry(void* parameter) +{ + int count = 0; + int no = (int) parameter; /* Obtain the thread's entry parameters */ + + while (1) + { + /* Printout thread count value */ + printf("thread%d count: %d\n", no, count ++); + + sleep(2); /* Sleep 2 seconds */ + } +} + +/* User application portal */ +int rt_application_init() +{ + int result; + pthread_attr_t attr; /* Thread attribute */ + struct sched_param prio; /* Thread priority */ + + prio.sched_priority = 8; /* Priority is set to 8 */ + pthread_attr_init(&attr); /* Initialize the property with default values first */ + pthread_attr_setschedparam(&attr,&prio); /* Modify the priority corresponding to the attribute */ + + /* Create thread 1, attribute is attr, entry function is thread_entry, and the entry function parameter is 1 */ + result = pthread_create(&tid1,&attr,thread_entry,(void*)1); + check_result("thread1 created",result); + + /* Create thread 2, the property is the default value, the entry function is thread_entry, and the entry function parameter is 2 */ + result = pthread_create(&tid2,NULL,thread_entry,(void*)2); + check_result("thread2 created",result); + + return 0; +} +``` + +### Thread Cancellation + +Cancellation is a mechanism that allows one thread to end other threads. A thread can send a cancel request to another thread. Depending on the settings, the target thread may ignore it and may end immediately or postpone it until the next cancellation point. + +#### Send Cancellation Request + +The cancellation request can be sent using the following function: + +``` c +int pthread_cancel(pthread_t thread); +``` + +| **Parameter** | **Description** | +|------|--------| +| thread | Thread handle | +|**return**| —— | +| 0 | Succeeded | + +This function sends a cancel request to the `thread` thread. Whether the thread will respond to the cancellation request and when it responds depends on the state and type of thread cancellation. + +#### Set Cancel Status + +The cancellation request can be set using the following function: + +``` c +int pthread_setcancelstate(int state, int *oldstate); +``` + +| **Parameter** | **Description** | +|--------|-------------------------------| +| state | There are two values:
`PTHREAD_CANCEL_ENABLE`: Cancel enable.
`PTHREAD_CANCEL_DISABLE`: Cancel disabled (default value when thread is created). | +| oldstate | Save the original cancellation status | +|**return**| —— | +| 0 | Succeeded | +| EINVAL | state is not PTHREAD_CANCEL_ENABLE or PTHREAD_CANCEL_DISABLE | + +This function sets the cancel state and is called by the thread itself. Canceling the enabled thread will react to the cancel request, and canceling the disabled thread will not react to the cancel request. + +#### Set Cancellation Type + +You can use the following function to set the cancellation type, which is called by the thread itself: + +``` c +int pthread_setcanceltype(int type, int *oldtype); +``` + +| **Parameter** | **Description** | +|-------|---------------------------------| +| type | There are 2 values:
`PTHREAD_CANCEL_DEFFERED`: After the thread receives the cancellation request, it will continue to run to the next cancellation point and then end. (Default value when thread is created) .
`PTHREAD_CANCEL_ASYNCHRONOUS`: The thread ends immediately. | +| oldtype | Save the original cancellation type | +|**return**| —— | +| 0 | Succeeded | +| EINVAL | state is neither PTHREAD_CANCEL_DEFFERED nor PTHREAD_CANCEL_ASYNCHRONOUS | + +#### Set Cancellation Point + +The cancellation point can be set using the following function: + +``` c +void pthread_testcancel(void); +``` + +This function creates a cancellation point where the thread is called. Called primarily by a thread that does not contain a cancellation point, it can respond to a cancellation request. This function does not work if pthread_testcancel() is called while the cancel state is disabled. + +#### Cancellation Point + +The cancellation point is where the thread ends when it accepts the cancellation request. According to the POSIX standard, system calls that cause blocking, such as pthread_join(), pthread_testcancel(), pthread_cond_wait(), pthread_cond_timedwait(), and sem_wait(), are cancellation points. + +All cancellation points included in RT-Thread are as follows: + +- mq_receive() + +- mq_send() + +- mq_timedreceive() + +- mq_timedsend() + +- msgrcv() + +- msgsnd() + +- msync() + +- pthread_cond_timedwait() + +- pthread_cond_wait() + +- pthread_join() + +- pthread_testcancel() + +- sem_timedwait() + +- sem_wait() + +- pthread_rwlock_rdlock() + +- pthread_rwlock_timedrdlock() + +- pthread_rwlock_timedwrlock() + +- pthread_rwlock_wrlock() + +#### Example Code for Thread Cancel + +This program creates 2 threads. After thread2 starts running, it sleeps for 8 seconds. Thread1 sets its own cancel state and type, and then prints the run count information in an infinite loop. After thread2 wakes up, it sends a cancel request to thread1, and thread1 ends the run immediately after receiving the cancel request. + +``` c +#include +#include +#include + +/* Thread control block */ +static pthread_t tid1; +static pthread_t tid2; + +/* Function return value check */ +static void check_result(char* str,int result) +{ + if (0 == result) + { + printf("%s successfully!\n",str); + } + else + { + printf("%s failed! error code is %d\n",str,result); + } +} +/* Thread 1 entry function */ +static void* thread1_entry(void* parameter) +{ + int count = 0; + /* Set the cancel state of thread 1 to be enabled. The cancel type is terminated immediately after the thread receives the cancel point. 
*/ + pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, NULL); + pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS, NULL); + + while(1) + { + /* Print thread count value output */ + printf("thread1 run count: %d\n",count ++); + sleep(2); /* Sleep 2 seconds */ + } +} +/* Thread 2 entry function */ +static void* thread2_entry(void* parameter) +{ + int count = 0; + sleep(8); + /* Send a cancel request to thread 1 */ + pthread_cancel(tid1); + /* Waiting for thread 1 to finish in blocking mode */ + pthread_join(tid1,NULL); + printf("thread1 exited!\n"); + /* Thread 2 print information to start output */ + while(1) + { + /* Print thread count value output */ + printf("thread2 run count: %d\n",count ++); + sleep(2); /* Sleep 2 seconds */ + } +} +/* User application portal */ +int rt_application_init() +{ + int result; + /* Create thread 1, the property is the default value, the separation state is the default value joinable, the entry function is thread1_entry, and the entry function parameter is NULL */ + result = pthread_create(&tid1,NULL,thread1_entry,NULL); + check_result("thread1 created",result); + + /* Create thread 2, the property is the default value, the separation state is the default value joinable, the entry function is thread2_entry, and the entry function parameter is NULL */ + result = pthread_create(&tid2,NULL,thread2_entry,NULL); + check_result("thread2 created",result); + + return 0; +} +``` + +### One-time Initialization + +It can be initialized once using the following function: + +``` c +int pthread_once(pthread_once_t * once_control, void (*init_routine) (void)); +``` + +| **Parameter** | **Description** | +|-------------|--------| +| once_control | Control variable | +| init_routine | Execute function | +|**return**| —— | +| 0 | Succeeded | + +Sometimes we need to initialize some variables only once. If we do multiple initialization procedures, it will get an error. In traditional sequential programming, one-time initialization is often managed by using Boolean variables. The control variable is statically initialized to 0, and any code that relies on initialization can test the variable. If the variable value is still 0, it can be initialized and then set the variable to 1. Codes that are checked later will skip initialization. + +### Clean up after the Thread Ends + +The thread cleanup function interface: + +``` c +void pthread_cleanup_pop(int execute); +void pthread_cleanup_push(void (*routine)(void*), void *arg); +``` + +| **Parameter** | **Description** | +|-------|-----------------------------| +| execute | 0 or 1, determin whether to execute the cleanup function | +| routine | Pointer to the cleanup function | +| arg | The parameter passed to the cleanup function | + +pthread_cleanup_push() puts the specified cleanup `routine` into the thread's cleanup function list. pthread_cleanup_pop() takes the first function from the header of the cleanup function list. If `execute` is a non-zero value, then this function is executed. + +### Other Thread Related Functions + +#### Determine if two Threads are Equal + +``` c +int pthread_equal (pthread_t t1, pthread_t t2); +``` + +| **Parameter** | **Description** | +|----------|--------| +| pthread_t | Thread handle | +|**return**| —— | +| 0 | Not equal | +| 1 | Equal | + +#### Obtain Thread Handle + +``` c +pthread_t pthread_self (void); +``` +pthread_self() returns the handle of the calling thread. 
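+
+To tie together the one-time initialization and cleanup-handler interfaces above, here is a hedged sketch; the helper names are invented, and PTHREAD_ONCE_INIT is assumed to be provided by pthread.h:
+
+``` c
+#include <pthread.h>
+#include <stdio.h>
+
+static pthread_once_t init_once = PTHREAD_ONCE_INIT;
+
+/* Runs exactly once, no matter how many threads call do_module_init(). */
+static void init_routine(void)
+{
+    printf("one-time initialization performed\n");
+}
+
+static void do_module_init(void)
+{
+    pthread_once(&init_once, init_routine);
+}
+
+/* Cleanup handler: runs when popped with a non-zero argument, or when the
+ * thread is cancelled or exits while the handler is still pushed. */
+static void release_resource(void* arg)
+{
+    printf("cleanup: releasing %s\n", (char*)arg);
+}
+
+/* A worker entry that would be passed to pthread_create(). */
+static void* worker(void* parameter)
+{
+    do_module_init();
+
+    pthread_cleanup_push(release_resource, "demo resource");
+    /* ... work that might be cancelled goes here ... */
+    pthread_cleanup_pop(1);     /* non-zero: execute the handler now */
+
+    return NULL;
+}
+```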
+ +#### Get the Maximum and Minimum Priority + +``` c +int sched_get_priority_min(int policy); +int sched_get_priority_max(int policy); +``` + +| **Parameter** | **Description** | +|------|---------------------------------| +| policy | 2 values are optional: SCHED_FIFO, SCHED_RR | + +sched_get_priority_min() returns a value of 0, with the highest priority in RT-Thread and sched_get_priority_max() with the lowest priority. + +### Mutex Attribute + +The mutex properties implemented by RT-Thread include the mutex type and the mutex scope. + +#### Mutex Lock Attribute Initialization and Deinitialization + +``` c +int pthread_mutexattr_init(pthread_mutexattr_t *attr); +int pthread_mutexattr_destroy(pthread_mutexattr_t *attr); +``` + +| **Parameter** | **Description** | +|----|------------------------| +| attr | Pointer to the mutex attribute object | +|**return**| —— | +| 0 | Succeeded | +| EINVAL | Invalid parameter | + +The pthread_mutexattr_init() function initializes the property object pointed to by `attr` with the default value, which is equivalent to setting the property parameter to NULL when calling the pthread_mutex_init() function. + +The pthread_mutexattr_destroy() function will initialize the property object pointed to by `attr` and can be reinitialized by calling the pthread_mutexattr_init() function. + +#### Mutex Lock Scope + +``` c +int pthread_mutexattr_setpshared(pthread_mutexattr_t *attr, int pshared); +int pthread_mutexattr_getpshared(pthread_mutexattr_t *attr, int *pshared); +``` + +| **Parameter** | **Description** | +|-------|--------------------| +| type | Mutex type | +| pshared | There are 2 optional values:
`PTHREAD_PROCESS_PRIVATE`: The default value, used to synchronize only threads in the process. `PTHREAD_PROCESS_SHARED`: Used to synchronize threads in this process and other processes. | +|**return**| —— | +| 0 | Succeeded | +| EINVAL | Invalid parameter | + +#### Mutex Type + +``` c +int pthread_mutexattr_settype(pthread_mutexattr_t *attr, int type); +int pthread_mutexattr_gettype(const pthread_mutexattr_t *attr, int *type); +``` + +| **Parameter** | **Description** | +|----|------------------------| +| type | Mutex type | +| attr | Pointer to the mutex attribute object | +|**return**| —— | +| 0 | Succeeded | +| EINVAL | Invalid parameter | + +The type of mutex determines how a thread behaves when it acquires a mutex. RT-Thread implements three mutex types: + +- **PTHREAD_MUTEX_NORMAL**: Normal lock. When a thread is locked, the remaining threads requesting the lock will form a wait queue, and after unlocking, the lock will be acquired in the first-in first-out manner. If a thread attempts to regain the mutex without first releasing the mutex, it does not generate a deadlock, but instead returns an error code, just like the error checking lock. + +- **PTHREAD_MUTEX_RECURSIVE**: Nested locks that allow a thread to successfully acquire the same lock multiple times, requiring the same number of unlocks to release the mutex. + +- **PTHREAD_MUTEX_ERRORCHECK**: Error checking lock, if a thread tries to regain the mutex without first releasing the mutex, an error is returned. This ensures that deadlocks do not occur when multiple locks are not allowed. + +### Condition Variable Attribute + +Use the default value PTHREAD_PROCESS_PRIVATE to initialize the condition variable attribute attr to use the following function: + +``` c +int pthread_condattr_init(pthread_condattr_t *attr); +``` + +| **Parameter** | **Description** | +|----|--------------------------| +| attr | Pointer to a condition variable property object | +|**return**| —— | +| 0 | Succeeded | +| EINVAL | Invalid parameter | + +#### Obtain Condition Variable Scope + +``` c +int pthread_mutexattr_getpshared(pthread_mutexattr_t *attr, int *pshared); +``` + +| **Parameter** | **Description** | +|----|--------------------------| +| attr | Pointer to a condition variable property object | +|**return**| —— | +| 0 | Succeeded | +| EINVAL | Invalid parameter | + +### Read-write Lock Attribute + +#### Initialize Property + +``` c +int pthread_rwlockattr_init (pthread_rwlockattr_t *attr); +``` + +| **Parameter** | **Description** | +|----|--------------------| +| attr | Pointer to the read-write lock property | +|**return**| —— | +| 0 | Succeeded | +|-1 | Invalid parameter | + +This function initializes the read-write lock attribute `attr` with the default value PTHREAD_PROCESS_PRIVATE. + +#### Obtain Scope + +``` c +int pthread_rwlockattr_getpshared (const pthread_rwlockattr_t *attr, int *pshared); +``` + +| **Parameter** | **Description** | +|-------|--------------------------| +| attr | Pointer to the read-write lock property | +| pshared | Pointer to the scope of the read-write lock | +|**return**| —— | +| 0 | Succeeded | +|-1 | Invalid parameter | + +The memory pointed to by pshared is saved as PTHREAD_PROCESS_PRIVATE. 
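+
+As an illustration of the mutex attribute interfaces above (not taken from the manual), the sketch below creates a nested (PTHREAD_MUTEX_RECURSIVE) mutex and locks it twice from the same thread:
+
+``` c
+#include <pthread.h>
+#include <stdio.h>
+
+static pthread_mutex_t lock;
+
+/* Hypothetical setup: create a mutex whose type allows nested locking. */
+static int recursive_mutex_demo(void)
+{
+    pthread_mutexattr_t attr;
+
+    pthread_mutexattr_init(&attr);                             /* default values */
+    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE); /* nested lock */
+    pthread_mutex_init(&lock, &attr);
+    pthread_mutexattr_destroy(&attr);
+
+    /* The same thread may take the lock twice, but must release it twice. */
+    pthread_mutex_lock(&lock);
+    pthread_mutex_lock(&lock);
+    printf("recursive lock taken twice\n");
+    pthread_mutex_unlock(&lock);
+    pthread_mutex_unlock(&lock);
+
+    return 0;
+}
+```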
+ +### Barrier Attribute + +#### Initialize Property + +``` c +int pthread_barrierattr_init(pthread_barrierattr_t *attr); +``` + +| **Parameter** | **Description** | +|----|------------------| +| attr | Pointer to the barrier property | +|**return**| —— | +| 0 | Succeeded | +|-1 | Invalid parameter | + +The modified function initializes the barrier attribute `attr` with the default value PTHREAD_PROCESS_PRIVATE. + +#### Obtain Scope + +``` c +int pthread_barrierattr_getpshared(const pthread_barrierattr_t *attr, int *pshared); +``` + +| **Parameter** | **Description** | +|-------|-----------------------------| +| attr | Pointer to the barrier property | +| pshared | Pointer to save barrier scope data | +|**return**| —— | +| 0 | Succeeded | +|-1 | Invalid parameter | + +### Message Queue Property + +The message queue attribute control block is as follows: + +``` c +struct mq_attr +{ + long mq_flags; /* Message queue flag to indicate whether to block */ + long mq_maxmsg; /* Message queue maximum number of messages */ + long mq_msgsize; /* The maximum number of bytes per message in the message queue */ + long mq_curmsgs; /* Message queue current message number */ +}; +``` +#### Obtain Attribute +``` c +int mq_getattr(mqd_t mqdes, struct mq_attr *mqstat); +``` + +| **Parameter** | **Description** | +|------|------------------------| +| mqdes | Pointer to the message queue control block | +| mqstat | Pointer to save the get data | +|**return**| —— | +| 0 | Succeeded | +|-1 | Invalid parameter | diff --git a/documentation/quick-start/figures/10.png b/documentation/quick-start/figures/10.png new file mode 100644 index 0000000000..92c771a9b9 Binary files /dev/null and b/documentation/quick-start/figures/10.png differ diff --git a/documentation/quick-start/figures/11.png b/documentation/quick-start/figures/11.png new file mode 100644 index 0000000000..d7e995732c Binary files /dev/null and b/documentation/quick-start/figures/11.png differ diff --git a/documentation/quick-start/figures/14.png b/documentation/quick-start/figures/14.png new file mode 100644 index 0000000000..6a20af1021 Binary files /dev/null and b/documentation/quick-start/figures/14.png differ diff --git a/documentation/quick-start/figures/5.png b/documentation/quick-start/figures/5.png new file mode 100644 index 0000000000..90794700ca Binary files /dev/null and b/documentation/quick-start/figures/5.png differ diff --git a/documentation/quick-start/figures/6.png b/documentation/quick-start/figures/6.png new file mode 100644 index 0000000000..d53031df23 Binary files /dev/null and b/documentation/quick-start/figures/6.png differ diff --git a/documentation/quick-start/figures/7.png b/documentation/quick-start/figures/7.png new file mode 100644 index 0000000000..de0207c0c9 Binary files /dev/null and b/documentation/quick-start/figures/7.png differ diff --git a/documentation/quick-start/figures/8.png b/documentation/quick-start/figures/8.png new file mode 100644 index 0000000000..ceed801407 Binary files /dev/null and b/documentation/quick-start/figures/8.png differ diff --git a/documentation/quick-start/figures/9.png b/documentation/quick-start/figures/9.png new file mode 100644 index 0000000000..73d821e705 Binary files /dev/null and b/documentation/quick-start/figures/9.png differ diff --git a/documentation/quick-start/figures/compile.jpg b/documentation/quick-start/figures/compile.jpg new file mode 100644 index 0000000000..962323d256 Binary files /dev/null and b/documentation/quick-start/figures/compile.jpg differ diff --git 
a/documentation/quick-start/figures/debug.jpg b/documentation/quick-start/figures/debug.jpg new file mode 100644 index 0000000000..0a0cfc63f3 Binary files /dev/null and b/documentation/quick-start/figures/debug.jpg differ diff --git a/documentation/quick-start/keil-installation/figures/1.png b/documentation/quick-start/keil-installation/figures/1.png new file mode 100644 index 0000000000..f2df5fc551 Binary files /dev/null and b/documentation/quick-start/keil-installation/figures/1.png differ diff --git a/documentation/quick-start/keil-installation/figures/12.png b/documentation/quick-start/keil-installation/figures/12.png new file mode 100644 index 0000000000..d56139166a Binary files /dev/null and b/documentation/quick-start/keil-installation/figures/12.png differ diff --git a/documentation/quick-start/keil-installation/figures/13.png b/documentation/quick-start/keil-installation/figures/13.png new file mode 100644 index 0000000000..17aaf3f9d4 Binary files /dev/null and b/documentation/quick-start/keil-installation/figures/13.png differ diff --git a/documentation/quick-start/keil-installation/figures/2.png b/documentation/quick-start/keil-installation/figures/2.png new file mode 100644 index 0000000000..b1e54f46d5 Binary files /dev/null and b/documentation/quick-start/keil-installation/figures/2.png differ diff --git a/documentation/quick-start/keil-installation/figures/3.png b/documentation/quick-start/keil-installation/figures/3.png new file mode 100644 index 0000000000..c3d131e715 Binary files /dev/null and b/documentation/quick-start/keil-installation/figures/3.png differ diff --git a/documentation/quick-start/keil-installation/figures/4.png b/documentation/quick-start/keil-installation/figures/4.png new file mode 100644 index 0000000000..9eb7156e29 Binary files /dev/null and b/documentation/quick-start/keil-installation/figures/4.png differ diff --git a/documentation/quick-start/keil-installation/keil-installation.md b/documentation/quick-start/keil-installation/keil-installation.md new file mode 100644 index 0000000000..9eb6ad32ac --- /dev/null +++ b/documentation/quick-start/keil-installation/keil-installation.md @@ -0,0 +1,36 @@ +# Keil MDK Installation + +Before running the RT-Thread operating system, we need to install MDK-ARM 5.24 (either official or evaluation version, version 5.14 and above), this version is also a relatively new version. This version can provide relatively complete debugging functions. Here, we are using evaluation version 5.24 of 16k compiled code limit. If you want to remove the 16k compiled code limit, please purchase the official MDK-ARM. + +Firstly, download the MDK-ARM evaluation version from the official website of www.keil.com: +[http://www.keil.com/download/](http://www.keil.com/download/) + +When downloading, you need to fill in some basic information, please fill in the corresponding complete information, and then start downloading. After it is downloaded, double-click the mouse to start the installation, you will see the software installation as shown: + +![First Step](./figures/1.png) + +This is the MDK-ARM installation instructions, click “Next>>” to enter the next step, as shown. + +![Second Step](./figures/2.png) + +Click "√" in the box next to "I agree to all the terms of the preceding License Agreement" and click "Next >>" to proceed to the next step of installation, as shown: + +![Third Step](./figures/3.png) + +Click "Browse..." 
to select the installation directory of MDK-ARM, or directly input the installation path in the "Destination Folder" box. Here, we keep the default "C:/Keil_v5", then click "Next>>" to proceed to the next step of the installation, as shown:
+
+![Fourth Step](./figures/4.png)
+
+Input your first name after "First Name", your last name after "Last Name", your company name after "Company Name" and your email address after "E-mail", and then click "Next>>" to start the installation. Wait a while for the installation to finish and you will see the following:
+
+![Fifth Step](./figures/12.png)
+
+The default selection does not need to be changed; just click "Next" to enter the next step, as shown.
+
+![MDK-ARM Installation Complete](./figures/13.png)
+
+Here, you can click "Finish" to complete the installation of the entire MDK-ARM software.
+
+With a useful tool like MDK-ARM, you can start the RT-Thread operating system easily and explore real-time operating systems.
+
+>Note: The official version of MDK-ARM is a paid product. If you want to be able to compile larger binaries, please purchase the official version of MDK-ARM. The RT-Thread operating system also supports the open source GNU GCC compiler from the Free Software Foundation. For more information on how to use the GNU-related tools, please refer to the related documentation on the RT-Thread website.
diff --git a/documentation/quick-start/quick-start.md b/documentation/quick-start/quick-start.md
new file mode 100644
index 0000000000..baaae13c12
--- /dev/null
+++ b/documentation/quick-start/quick-start.md
@@ -0,0 +1,150 @@
+# Start Guide: Simulate STM32F103 on the Keil Simulator
+
+Because of its nature, an embedded operating system is often closely tied to the hardware platform, and a specific embedded operating system can only run on specific hardware. For those who are new to the RT-Thread operating system, it is not easy to get a hardware module that is compatible with it. However, with the development of computer technology, we can use software to simulate a hardware module that is able to run the RT-Thread operating system, namely the simulation environment provided by MDK-ARM from ARM.
+
+MDK-ARM (MDK-ARM Microcontroller Development Kit) is a complete integrated development environment (IDE) from ARM. It includes an efficient C/C++ compiler for ARM chips (ARM7, ARM9, Cortex-M series, Cortex-R series, etc.); a project wizard and project management for various ARM devices and evaluation boards; a simulator that simulates the hardware platform in software; and debuggers that connect to in-circuit emulators to debug the target board (common emulators on the market include ST-Link, J-Link, etc.). The simulator software in MDK-ARM uses complete software simulation to interpret and execute ARM machine instructions and implements some peripheral logic to form a complete virtual hardware environment, enabling users to execute the corresponding target program on the computer without using a real hardware platform.
+
+Because it fully simulates the STM32F103 in software, the MDK-ARM integrated development environment gives us the opportunity to run object code directly on the computer without a real hardware environment. This simulator platform can virtualize the various operating modes and peripherals of the ARM Cortex-M3, such as exceptions and interrupts, the clock timer, serial ports, etc., and behaves almost identically to the real hardware environment.
Practice has also proved that the RT-Thread introductory sample used in this article, after compiling into binary code, can not only run on the simulator platform, but also can run normally on the real hardware platform without modification. + +Next, we will select the MDK-ARM integrated development environment as the target hardware platform to observe how the RT-Thread operating system works. + +## Preparation + +MDK development environment: MDK-ARM 5.24 (official or evaluation version, version 5.14 and above) needs to be installed. This version is also a relatively new version, which can provide relatively complete debugging functions. How to install can be referred to the [Keil MDK Installation](./keil-installation/keil-installation.md). + +## First acquaintance with RT-Thread + +As an operating system, what is the code size of RT-Thread? Before we can figure this out, the first thing we need to do is to get an example of RT-Thread that corresponds to this manual. This example can be obtained from the following link: + +[RT-Thread Simulator Sample](./rtthread_simulator_v0.1.0.zip) + +This example is a zip file, unzip it. Here, we decompressed it to D:/. The directory structure after decompression is as shown below: + +![rtthread_simulator_v0.1.0 Code Directory ](./figures/7.png) + +Descriptions of the file types contained in each directory are shown in the following table: + +Directory Name | Description +--- | --- +applications| RT-Thread application. +rt-thread | Source file for RT-Thread. +- components| Respective component directories of RT-Thread. +- include | Header file for RT-Thread kernel. +- libcpu | Porting code for various types of chips, including porting files of STM32. +- src | Source file for RT-Thread kernel. +- tools | Script file of RT-Thread commanding building tool. +drivers | Driver of RT-Thread, implementations of bottom driver of different platforms. +Libraries | ST's STM32 firmware library file. +kernel-sample-0.1.0 | Kernel sample for RT-Thread. + +In the directory, there is project.uvprojx file, which is an MDK5 project file in the sample referenced in this manual. Double-click "project.uvprojx" icon to open the project file: + +![Open the project](./figures/5.png) + +Under the "Project" column on the left side of the main window of the project, you can see the file list of the project. These files are stored in the following groups, respectively: + +| Directory Group | Description | +| :-------------- | ------------------------------------------------------------ | +| Applications | The corresponding directory is rtthread_simulator_v0.1.0/applications, used to store user application code. | +| Drivers | The corresponding directory is rtthread_simulator_v0.1.0/drivers, used to store the bottom driver code for RT-Thread. | +| STM32_HAL | The corresponding directory is rtthread_simulator_v0.1.0/Libraries/CMSIS/Device/ST/STM32F1xx, used to store the firmware library files of STM32. | +| kernel-sample | The corresponding directory is rtthread_simulator_v0.1.0/kernel-sample-0.1.0, used to store kernel samples of RT-Thread. | +| Kernel | The corresponding directory is rtthread_simulator_v0.1.0/src, used to store RT-Thread kernel core code. | +| CORTEX-M3 | The corresponding directory is rtthread_simulator_v0.1.0/rt-thread/libcpu, used to store ARM Cortex-M3 porting code. | +| DeviceDrivers | The corresponding directory is rtthread_simulator_v0.1.0/rt-thread/components/drivers, used to store driver framework source code of RT-Thread. 
|
+| finsh | The corresponding directory is rtthread_simulator_v0.1.0/rt-thread/components/finsh, used to store the source code of the RT-Thread FinSH command-line component. |
+
+Now let's click the compile button ![img](./figures/compile.jpg) on the toolbar at the top of the window to compile the project, as shown:
+
+![compiling](./figures/9.png)
+
+The result of the compilation is displayed in the "Build Output" bar at the bottom of the window. If all goes well, the last line will read "0 Error(s), * Warning(s).", which means the project compiled without errors.
+
+After compiling RT-Thread/STM32, we can simulate running RT-Thread through the MDK-ARM simulator. Click ![img](./figures/debug.jpg) at the top right of the window or press Ctrl+F5 to enter the simulation interface, then press F5 to start running. Next, click the button in the toolbar shown in the screenshot or select "View → Serial Windows → UART#1" in the menu bar to open the serial port 1 window. You can see that the serial output only shows the RT-Thread logo; this is because the user code is empty. The result of the simulation is as shown:
+
+![simulate RT-Thread1](./figures/10.png)
+
+>We can list all the commands supported by the current system by pressing the Tab key or typing `help` followed by Enter, as shown in the following figure.
+
+![simulate RT-Thread2](./figures/6.png)
+
+
+## User Entry Code
+
+The startup code above is mostly internal to the RT-Thread system, so how do users add initialization code for their own applications? RT-Thread uses the main() function as the user code entry; all you need to do is add your own code to the main() function.
+
+```c
+int main(void)
+{
+    /* user app entry */
+    return 0;
+}
+```
+
+>Note: In order to complete the initialization of system functions before entering the main program, you can use the `$sub$$` and `$super$$` function identifiers to run additional initialization code before entering the main program; this way, users can ignore the initialization operations performed before the main() function. See [ARM® Compiler v5.06 for µVision® armlink User Guide](http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0377g/pge1362065967698.html) for details.
+
+## Example of a Marquee
+
+For engineers working on electronics, a marquee (blinking LED) is probably the simplest example; it is like "Hello World", the first program that programmers learn in every programming language. So the following example starts with a marquee that periodically updates (turns on or off) the LED.
+
+In the UART#1 window, input the msh command `led` and press Enter to run it, as shown:
+
+![run led](./figures/11.png)
+
+**Example of a Marquee**
+
+```c
+/*
+ * Program listing: marquee sample
+ *
+ * A marquee is probably the simplest example, much like the "Hello World"
+ * program in every programming language. This sample periodically updates
+ * (turns on or off) the LED; LED_PIN is the LED pin number defined elsewhere
+ * in the sample application.
+ */
+
+int led(void)
+{
+    rt_uint8_t count;
+
+    rt_pin_mode(LED_PIN, PIN_MODE_OUTPUT);
+
+    for (count = 0; count < 10; count++)
+    {
+        rt_pin_write(LED_PIN, PIN_HIGH);
+        rt_kprintf("led on, count : %d\r\n", count);
+        rt_thread_mdelay(500);
+
+        rt_pin_write(LED_PIN, PIN_LOW);
+        rt_kprintf("led off\r\n");
+        rt_thread_mdelay(500);
+    }
+    return 0;
+}
+MSH_CMD_EXPORT(led, RT-Thread first led sample);
+```
+
+## Other Examples
+
+Additional kernel examples can be found in the kernel-sample-0.1.0 directory.
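+
+To give a feel for what those samples look like, here is a minimal sketch in the same style, using the standard RT-Thread thread APIs (`rt_thread_create`/`rt_thread_startup`). The thread name, stack size and priority below are illustrative values and are not taken from the kernel-sample package itself:
+
+```c
+#include <rtthread.h>
+
+/* Entry function of the demo thread: print a message ten times. */
+static void demo_thread_entry(void *parameter)
+{
+    int count;
+
+    for (count = 0; count < 10; count++)
+    {
+        rt_kprintf("demo thread is running, count: %d\n", count);
+        rt_thread_mdelay(500);
+    }
+}
+
+static int demo_thread(void)
+{
+    rt_thread_t tid;
+
+    /* create the thread: name, entry, parameter, stack size, priority, time slice */
+    tid = rt_thread_create("demo", demo_thread_entry, RT_NULL, 1024, 20, 10);
+    if (tid == RT_NULL)
+        return -1;
+
+    /* start the thread so that the scheduler can run it */
+    rt_thread_startup(tid);
+
+    return 0;
+}
+MSH_CMD_EXPORT(demo_thread, create and start a demo thread);
+```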
+ +![more kernel samples](./figures/14.png) + +## Frequently Asked Question + +* Compilation error occurred as following: + +``` +rt-thread\src\kservice.c(823): error: #929: incorrect use of vaarg fieldwidth = aarg(args, int); +rt-thread\src\kservice.c(842): error: #929: incorrect use of vaarg precision = aarg(args, int); +……… +``` + +Cause: This type of problem is usually caused by installation of ADS, when ADS and keil coexist, the header file of va_start points to the ADS folder. + +Solution: + +- Delete ADS environment variables +- Uninstall ADS and keil, restart the computer, reload keil + + + diff --git a/documentation/quick-start/quick_start_qemu/figures/echo-cat.png b/documentation/quick-start/quick_start_qemu/figures/echo-cat.png new file mode 100644 index 0000000000..542a8ef0c0 Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/echo-cat.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/env.png b/documentation/quick-start/quick_start_qemu/figures/env.png new file mode 100644 index 0000000000..fa243e968c Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/env.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/env_menu.png b/documentation/quick-start/quick_start_qemu/figures/env_menu.png new file mode 100644 index 0000000000..dc935517ae Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/env_menu.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/env_menu_ubuntu.png b/documentation/quick-start/quick_start_qemu/figures/env_menu_ubuntu.png new file mode 100644 index 0000000000..a4a7b9e52d Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/env_menu_ubuntu.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/finsh-cmd.png b/documentation/quick-start/quick_start_qemu/figures/finsh-cmd.png new file mode 100644 index 0000000000..1f7a1b30fb Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/finsh-cmd.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/finsh-thread.png b/documentation/quick-start/quick_start_qemu/figures/finsh-thread.png new file mode 100644 index 0000000000..427c887b8b Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/finsh-thread.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/menuconfig.png b/documentation/quick-start/quick_start_qemu/figures/menuconfig.png new file mode 100644 index 0000000000..eaed1390fa Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/menuconfig.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/menuconfig_menu.png b/documentation/quick-start/quick_start_qemu/figures/menuconfig_menu.png new file mode 100644 index 0000000000..7df950dfa2 Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/menuconfig_menu.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/mkfs-sd0.png b/documentation/quick-start/quick_start_qemu/figures/mkfs-sd0.png new file mode 100644 index 0000000000..ee18f51325 Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/mkfs-sd0.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/python3-version.png b/documentation/quick-start/quick_start_qemu/figures/python3-version.png new file mode 100644 index 0000000000..2187db87a3 Binary files /dev/null and 
b/documentation/quick-start/quick_start_qemu/figures/python3-version.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/qemu.bat.png b/documentation/quick-start/quick_start_qemu/figures/qemu.bat.png new file mode 100644 index 0000000000..00276e706a Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/qemu.bat.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/qemu.png b/documentation/quick-start/quick_start_qemu/figures/qemu.png new file mode 100644 index 0000000000..00d07c1255 Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/qemu.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/qemubsp.png b/documentation/quick-start/quick_start_qemu/figures/qemubsp.png new file mode 100644 index 0000000000..eca64825c7 Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/qemubsp.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/scons.png b/documentation/quick-start/quick_start_qemu/figures/scons.png new file mode 100644 index 0000000000..d385f6f58f Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/scons.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/ubuntu-env-menu.png b/documentation/quick-start/quick_start_qemu/figures/ubuntu-env-menu.png new file mode 100644 index 0000000000..4dc06d9500 Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/ubuntu-env-menu.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/ubuntu-filesys.png b/documentation/quick-start/quick_start_qemu/figures/ubuntu-filesys.png new file mode 100644 index 0000000000..5fb9c8e2b5 Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/ubuntu-filesys.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/ubuntu-menuconfig.png b/documentation/quick-start/quick_start_qemu/figures/ubuntu-menuconfig.png new file mode 100644 index 0000000000..d0e15cefe9 Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/ubuntu-menuconfig.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/ubuntu-mkfs-sd0.png b/documentation/quick-start/quick_start_qemu/figures/ubuntu-mkfs-sd0.png new file mode 100644 index 0000000000..e7bdae69f9 Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/ubuntu-mkfs-sd0.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/ubuntu-msh-help.png b/documentation/quick-start/quick_start_qemu/figures/ubuntu-msh-help.png new file mode 100644 index 0000000000..a22bee6cde Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/ubuntu-msh-help.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/ubuntu-pkg-menuconfig.png b/documentation/quick-start/quick_start_qemu/figures/ubuntu-pkg-menuconfig.png new file mode 100644 index 0000000000..a24cc1e2d7 Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/ubuntu-pkg-menuconfig.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/ubuntu-pkg-set.png b/documentation/quick-start/quick_start_qemu/figures/ubuntu-pkg-set.png new file mode 100644 index 0000000000..452056e631 Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/ubuntu-pkg-set.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/ubuntu-pkgs-add-to-menu.png 
b/documentation/quick-start/quick_start_qemu/figures/ubuntu-pkgs-add-to-menu.png new file mode 100644 index 0000000000..838410f23a Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/ubuntu-pkgs-add-to-menu.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/ubuntu-qemu-bsp.png b/documentation/quick-start/quick_start_qemu/figures/ubuntu-qemu-bsp.png new file mode 100644 index 0000000000..d2ed93e999 Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/ubuntu-qemu-bsp.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/ubuntu-qume-sh.png b/documentation/quick-start/quick_start_qemu/figures/ubuntu-qume-sh.png new file mode 100644 index 0000000000..a1a94068a3 Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/ubuntu-qume-sh.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/ubuntu-rtconfig-py.png b/documentation/quick-start/quick_start_qemu/figures/ubuntu-rtconfig-py.png new file mode 100644 index 0000000000..8cec260d8a Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/ubuntu-rtconfig-py.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/ubuntu-save.png b/documentation/quick-start/quick_start_qemu/figures/ubuntu-save.png new file mode 100644 index 0000000000..a859b58252 Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/ubuntu-save.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/ubuntu-scons.png b/documentation/quick-start/quick_start_qemu/figures/ubuntu-scons.png new file mode 100644 index 0000000000..711f57e785 Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/ubuntu-scons.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/ubuntu-select-pkg.png b/documentation/quick-start/quick_start_qemu/figures/ubuntu-select-pkg.png new file mode 100644 index 0000000000..b3907f3957 Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/ubuntu-select-pkg.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/ubuntu-thread-timer.png b/documentation/quick-start/quick_start_qemu/figures/ubuntu-thread-timer.png new file mode 100644 index 0000000000..8fe7ee4661 Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/ubuntu-thread-timer.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/ubuntu-update-pkg.png b/documentation/quick-start/quick_start_qemu/figures/ubuntu-update-pkg.png new file mode 100644 index 0000000000..b83cdf4c97 Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/ubuntu-update-pkg.png differ diff --git a/documentation/quick-start/quick_start_qemu/figures/win-menuconfig.png b/documentation/quick-start/quick_start_qemu/figures/win-menuconfig.png new file mode 100644 index 0000000000..0579503c50 Binary files /dev/null and b/documentation/quick-start/quick_start_qemu/figures/win-menuconfig.png differ diff --git a/documentation/quick-start/quick_start_qemu/quick_start_qemu.md b/documentation/quick-start/quick_start_qemu/quick_start_qemu.md new file mode 100644 index 0000000000..43d72a2398 --- /dev/null +++ b/documentation/quick-start/quick_start_qemu/quick_start_qemu.md @@ -0,0 +1,146 @@ +# Getting Started of QEMU (Windows) + +The development of embedded software is inseparable from the development board. 
Without physical development boards, similar virtual machines like QEMU can be used to simulate the development board. QEMU is a virtual machine that supports cross-platform virtualization. It can virtualize many development boards. To facilitate the experience of RT-Thread without a development board, RT-Thread provides a board-level support package (BSP) for QEMU-simulated **ARM vexpress A9** development board. + +## Preparations + +- [Download RT-Thread Source Code](https://github.com/RT-Thread/rt-thread) +- Download Env Tool +- [Install Git on your PC](https://www.git-scm.com/download/) + + +## Instructions for the Env tool + +When using Env tools, you need to enter the corresponding BSP directory in the Env terminal. + +### Configuration + +``` +menuconfig +``` + +Type the `menuconfig` command in the Env terminal to enter the configuration interface, and then configure the BSP: + +![menuconfig command](figures/win-menuconfig.png) + +![enter the configuration interface](figures/env_menu.png) + +You can use the keyboard `↑` key and `↓` key to look up and down menu items, use the `Enter` key to enter the selected directory, use the `Space` key to select or cancel bool variables, and use the `Esc` key to exit the current directory. + +### Acquisition of software packages + +``` +pkgs --update +``` + +If a package is selected in menuconfig, download the package using the `pkgs --update` command (Git needs to be installed) + +### Compile + +``` +scons +``` + +Compile using the `scons` command. + +### Generate IDE's Project Files + +``` +scons --target=xxx +``` + +If you use the MDK or IAR IDE for development, you need to regenerate project files to make the configuration work after the configuration is completed. The command is `scons --target=xxx`, as shown below, which is the generation of IAR project, MDK4 project and MDK5 project. + +```c +scons --target=iar +scons --target=mdk4 +scons --target=mdk5 +``` + +## Introduction of QEMU BSP Catalogue + +The board-level support package (BSP) provided by RT-Thread simulates ARM vexpress A9 development board is located in the `qemu-vexpress-a9` folder under the BSP directory of RT-Thread source code. This BSP implements LCD, keyboard, mouse, SD card, Ethernet card, serial port and other related drivers. The contents of the folder are shown in the following figure. + +![qemu-vexpress-a9 folder](figures/qemubsp.png) + +The main files and directories of `qemu-vexpress-a9` BSP are described as follows: + +| Fles/Directories | Description | +| ---------------- | ------------------------------------------- | +| .vscode | configuration file of vscode | +| applications | User application code directory | +| drivers | The underlying driver provided by RT-Thread | +| qemu.bat | Script files running on Windows platform | +| qemu.sh | Script files running on Linux platform | +| qemu-dbg.bat | Debugging script files on Windows platform | +| qemu-dbg.sh | Debugging script files on Linux platform | +| README.md | Description document of BSP | +| rtconfig.h | A header file of BSP | + +## Compile and Run + +### Step 1. Use the *scons* Command to Compile the Project + +Open the Env folder and double-click the `env.exe` file to open the Env console: + +![Env folder](figures/env.png) + +Switch to the QEMU BSP directory and enter the `scons` command to compile the project. If the compilation is correct, the `rtthread.elf` file will be generated in the BSP directory, which is a target file required for QEMU to run. 
+ +![compile the project](figures/scons.png) + +### Step 2. Use the *qemu.bat* Command to Run the Project + +After compiling, type `qemu.bat` to start the virtual machine and BSP project. `qemu.bat` is a Windows batch file. This file is located in the BSP folder, mainly including the execution instructions of QEMU. The first run of the project will create a blank `sd.bin` file under the BSP folder, which is a virtual SD card with a size of 64M. The Env command interface displays the initialization information and version number information printed during the start-up of RT-Thread system, and the QEMU virtual machine is also running. As shown in the following picture: + +![run the project](figures/qemu.bat.png) + +![QEMU virtual machine](figures/qemu.png) + +### Run the Finsh Console + +RT-Thread supports Finsh, and users can use command operations in command line mode. + +Type `help` or press `Tab` to view all supported commands. As shown in the figure below, commands are on the left and command descriptions are on the right. + +![view all supported commands](figures/finsh-cmd.png) + +For example, by entering the `list_thread` command, you can see the currently running threads, thread status and stack size; by entering the `list_timer`, you can see the status of the timers. + +![threads and timers](figures/finsh-thread.png) + +### Run the File System + +Type `list_device` to view all devices registered in the system. You can see the virtual SD card "sd0" device as shown in the following picture. Next, we can format the SD card using the `mkfs sd0` command, which will format the SD card into a FatFS file system. FatFs is a Microsoft fat-compatible file system developed for small embedded devices. It is written in ANSI C, uses abstract hardware I/O layer and provides continuous maintenance, so it has good portability. + +For more information on FatFS, click on the link: [http://elm-chan.org/fsw/ff/00index_e.html](http://elm-chan.org/fsw/ff/00index_e.html) + +![format the SD card ](figures/mkfs-sd0.png) + +The file system will not be loaded immediately after the first formatting of the SD card, and the file system will be loaded correctly only after the second boot. So exit the virtual machine, and then restart the virtual machine and project by entering `qemu.bat` on the Env command line interface. Entering `ls` command, you can see that the `Directory` directory has been added, the file system has been loaded, and then you can experience the file system with other commands provided by RT-Thread: + +![commands of file system](figures/echo-cat.png) + +- ls: Display file and directory information +- cd: Switch to the specified directory +- rm: Delete files or directories +- echo: Writes the specified content to the target file +- cat: Displays the details of a file +- mkdir: Create folders + +Please enter `help` to see more commands. + +## More Functions + +Open the Env tool in the BSP directory and enter the `menuconfig` command: + +![menuconfig](figures/menuconfig.png) + +You can configure more functions in the configuration interface. After the configuration is completed, save the configuration first, and then exit the configuration interface: + +![menuconfig interface](figures/menuconfig_menu.png) + +1. If you choose a package, you need to use the command `pkgs --update` to download the package. +2. Compile with `scons`. +3. Then enter `qemu.bat` to run. +4. Use `help` to view all commands of the BSP. And then use the commands. 
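+
+As a closing example, custom commands can also be added to the BSP and run from the msh console in QEMU. The sketch below is only illustrative (the function name is made up); it assumes the code is placed in the BSP's applications directory and the project is rebuilt with `scons` before restarting `qemu.bat`:
+
+```c
+#include <rtthread.h>
+
+/* After rebuilding and restarting QEMU, this command appears in the
+ * output of `help` and can be run from the msh console. */
+static int hello_qemu(void)
+{
+    rt_kprintf("hello from QEMU vexpress-a9!\n");
+    return 0;
+}
+MSH_CMD_EXPORT(hello_qemu, say hello from the msh console);
+```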
diff --git a/documentation/quick-start/quick_start_qemu/quick_start_qemu_linux.md b/documentation/quick-start/quick_start_qemu/quick_start_qemu_linux.md new file mode 100644 index 0000000000..1945e4a333 --- /dev/null +++ b/documentation/quick-start/quick_start_qemu/quick_start_qemu_linux.md @@ -0,0 +1,203 @@ +# Getting Started of QEMU (Ubuntu) + +The development of embedded software is inseparable from the development board. Without physical development boards, similar virtual machines like QEMU can be used to simulate the development board. QEMU is a virtual machine that supports cross-platform virtualization. It can virtualize many development boards. To facilitate the experience of RT-Thread without a development board, RT-Thread provides a board-level support package (BSP) for QEMU-simulated **ARM vexpress A9** development board. + +## 1 Install dependency libraries + +We need to type commands as following in the terminal: + +```shell +sudo apt install gcc +sudo apt install python3 +sudo apt install python3-pip +sudo apt install gcc-arm-none-eabi +sudo apt install gdb-arm-none-eabi +sudo apt install binutils-arm-none-eabi +sudo apt install scons +sudo apt install libncurses5-dev +sudo apt install qemu +sudo apt install qemu-system-arm +sudo apt install git +``` + +## 2 Get RT-Thread source code + +Download RT-Thread Source Code : `git clone https://github.com/RT-Thread/rt-thread.git` + +You can directly ignore the following steps, this is used for setting GCC compiler manually. Usually, you don't need to set this. + +> - Install the compiler. If the compiler version installed with `apt-get` command is too old, it will cause compilation errors. You can download and install the new version using the following command in turn. The download link and the decompression folder name will vary according to the download version. The following Compression Packet will unzip to the `/opt` folder. +> +> - `wget https://armkeil.blob.core.windows.net/developer/Files/downloads/gnu-rm/6-2016q4/gcc-arm-none-eabi-6_2-2016q4-20161216-linux.tar.bz2` +> - `cd /opt` +> - `sudo tar xf ~/gcc-arm-none-eabi-6_2-2016q4-20161216-linux.tar.bz2` +> +> +> - After the compiler is installed, it is necessary to modify the `rtconfig.py` file under `rt-thread/bsp/qemu-vexpress-a9` BSP, modify the corresponding path to the bin directory corresponding to the compiler decompressed into the opt directory. Referring to the following figure, the directory name varies according to the downloaded version of the compiler: +> +> ![edit EXEC_PATH in rtconfig.py](figures/ubuntu-rtconfig-py.png) +> + + + +## 3 Build QEMU Project + +### 3.1 Move into QEMU folder + +``` +cd rt-thread/bsp/qemu-vexpress-a9/ +``` + +### 3.2 Configure the environment of Env tool + +#### 3.2.1 Remap python command + +We need to remap `python` command as python3 by default. + +Using `whereis` command to identify your python3's version: + +![python3-version](figures/python3-version.png) + +For instance, as you can see, in my computer, the python3's version is python 3.9. You need to identify python3's version in your computer. 
Then, we remap the `python` command as python3 by default: + +```shell +sudo rm -rf /usr/bin/python3 +sudo rm -rf /usr/bin/python +sudo ln -s /usr/bin/python3.9 /usr/bin/python3 +sudo ln -s /usr/bin/python3.9 /usr/bin/python +``` + +### 3.3 Install Env and Configure BSP + +Type following the command under `bsp/qemu-vexpress-a9` folder: + +``` +scons --menuconfig +``` + +The Env tool will be installed and initialized after using the `scons --menuconfig` command. Then it will enter the configuration interface, and you could configure the BSP: + +![install env tool](figures/ubuntu-menuconfig.png) + +![enter the configuration interface](figures/ubuntu-env-menu.png) + +You can use the keyboard `↑` key and `↓` key to look up and down menu items, use the `Enter` key to enter the selected directory, use the `Space` key to select or cancel bool variables, and press `Esc Esc` to exit the current directory. + +> Notice: Please make sure that the terminal size is larger than 80x24 character size. + +### 3.4 Configure the QEMU BSP and acquisition of software packages + +``` +source ~/.env/env.sh +scons --menuconfig +pkgs --update +``` + +The `env.sh` file is a file that needs to be executed. It configures the environment variables so that we can update the package with the pkgs command and execute it with the `source ~/.env/env.sh` command. + +Then use `scons --menuconfig` command to enter menuconfig, and you could select the online packages by this time. + +![commands of acquisition of pkgs](figures/ubuntu-pkg-menuconfig.png) + +![add pkg menu](figures/ubuntu-pkgs-add-to-menu.png) + +For example, select the kernel sample package: semphore sample. + +![select a package](figures/ubuntu-select-pkg.png) + +Exit and save the configuration. + +![save the configuration](figures/ubuntu-save.png) + +If you have selected an online package, you can download the package to the packages folder in the BSP directory using the `pkgs --update` command (Git needs to be installed): + +![download the package](figures/ubuntu-update-pkg.png) + +#### 4.1 Tips + +Before you use the `pkgs` command, you need to type command `source ~/.env/env.sh`. This is a annoying work. We can attach this command as a new line at the end of `~/.bashrc`, which can let you to to use `pkgs` command directly. + +### 3.5 Compile the QEMU project + +``` +scons +``` + +Using the `scons` command to compile the BSP. + +![compile the BSP](figures/ubuntu-scons.png) + +## 4 Introduction of QEMU BSP Catalogue + +The board-level support package (BSP) provided by RT-Thread simulates ARM vexpress A9 development board is located in the `qemu-vexpress-a9` folder under the `bsp` directory of RT-Thread source code. This BSP implements LCD, keyboard, mouse, SD card, Ethernet card, serial port and other related drivers. The contents of the folder are shown in the following figure. 
+ +![qemu-vexpress-a9 folder](figures/ubuntu-qemu-bsp.png) + +The main files and directories of `qemu-vexpress-a9` BSP are described as follows: + +| Fles/Directories | Description | +| ---------------- | ------------------------------------------- | +| applications | User application code directory | +| drivers | The underlying driver provided by RT-Thread | +| qemu.bat | Script files running on Windows platform | +| qemu.sh | Script files running on Linux platform | +| qemu-dbg.bat | Debugging script files on Windows platform | +| qemu-dbg.sh | Debugging script files on Linux platform | +| README.md | Description document of BSP | +| rtconfig.h | A header file of BSP | + +## 5 Compile and Run + +### 5.1 Use the *scons* Command to Compile the Project + +Switch to the QEMU BSP directory and enter the `scons` command to compile the project. If the compilation is correct, the `rtthread.elf` file will be generated in the BSP directory, which is a target file required for QEMU to run. + +![compile the project](figures/ubuntu-scons.png) + +### 5.2 Use the *./qemu.sh* Command to Run the Project + +After compiling, type `./qemu.sh` to start the virtual machine and BSP project. `qemu.sh` is a Linux batch file. This file is located in the BSP folder, mainly including the execution instructions of QEMU. The first run of the project will create a blank `sd.bin` file under the BSP folder, which is a virtual SD card with a size of 64M. The Env command interface displays the initialization information and version number information printed during the start-up of RT-Thread system, and the QEMU virtual machine is also running. As shown in the following picture: + +![run the project](figures/ubuntu-qume-sh.png) + +### 5.3 Run the Finsh Console + +RT-Thread supports Finsh, and users can use command operations in command line mode. + +Type `help` or press `Tab` to view all supported commands. As shown in the figure below, commands are on the left and command descriptions are on the right. + +![view all supported commands](figures/ubuntu-msh-help.png) + +For example, by entering the `list_thread` command, you can see the currently running threads, thread status and stack size; by entering the `list_timer`, you can see the status of the timers. + +![threads and timers](figures/ubuntu-thread-timer.png) + +### 5.4 Run the File System + +Type `list_device` to view all devices registered in the system. You can see the virtual SD card "sd0" device as shown in the following picture. Next, we can format the SD card using the `mkfs sd0` command, which will format the SD card into a FatFS file system. FatFs is a Microsoft fat-compatible file system developed for small embedded devices. It is written in ANSI C, uses abstract hardware I/O layer and provides continuous maintenance, so it has good portability. + +> For more information on FatFS, click on the link: [http://elm-chan.org/fsw/ff/00index_e.html](http://elm-chan.org/fsw/ff/00index_e.html) + +![format the SD card](figures/ubuntu-mkfs-sd0.png) + +The file system will not be loaded immediately after the first formatting of the SD card, and the file system will be loaded correctly only after the second boot. So exit the virtual machine, and then restart the virtual machine and project by entering `./qemu.sh` on the command line interface. 
Entering `ls` command, you can see that the `Directory` directory has been added, the file system has been loaded, and then you can experience the file system with other commands provided by RT-Thread: + +![commands of file system](figures/ubuntu-filesys.png) + +- ls: Display the file and directory information +- cd: Switch to the specified directory +- rm: Delete files or directories +- echo: Writes the specified content to the target file +- cat: Displays the details of a file +- mkdir: Create folders + +Please enter `help` to see more commands. + +## 6 More Functions + +You can configure more functions in the menuconfig's configuration interface. use `scons --menuconfig` to config the BSP. After the configuration is completed, save the configuration first, and then exit the configuration interface, then: + +1. If you choose a package, you need to use the command `pkgs --update` to download the package. +2. Compile with `scons`. +3. Then enter `./qemu.sh` to run QEMU. +4. Use `help` to view all commands of the BSP. And then use the commands. diff --git a/documentation/quick-start/rtthread_simulator_v0.1.0.7z b/documentation/quick-start/rtthread_simulator_v0.1.0.7z new file mode 100644 index 0000000000..c36f62e88c Binary files /dev/null and b/documentation/quick-start/rtthread_simulator_v0.1.0.7z differ diff --git a/documentation/quick-start/rtthread_simulator_v0.1.0.zip b/documentation/quick-start/rtthread_simulator_v0.1.0.zip new file mode 100644 index 0000000000..cb6706b04d Binary files /dev/null and b/documentation/quick-start/rtthread_simulator_v0.1.0.zip differ diff --git a/documentation/roadmap-1.2.0.md b/documentation/roadmap-1.2.0.md deleted file mode 100644 index a783d2f286..0000000000 --- a/documentation/roadmap-1.2.0.md +++ /dev/null @@ -1,45 +0,0 @@ -# Roadmap for RT-Thread 1.2.0 # - -The document is the mainly task of RT-Thread 1.2.0. In this series, there will be a full manual document for RT-Thread 1.x series. The format of document is markdown document[0] on github.com and some hardware environment is used in document (RT-Thread Real-Touch[1]). - -The document will be wroten in Chinese firstly. At least when RT-Thread 1.2.0 has officially released, the Chinese edition of manual is ready. The manual includes: - -1. RT-Thread Kernel (The basic facilities in RTOS) -2. How to port RT-Thread in a new architecture. -3. RT-Thread components. -4. How to debug in RT-Thread. - -## Other codes changes in planning ## - -### Improvement on bsp porting ### - -- LPC18xx & LPC43xx - * USB host and device driver; - -- Other BSP. - * welcome contributions. - -### New features on Components ### - -- device IPC - * implement the work queue[2]. - * implement the rwlock[3]. - * The APIs are like *BSD, but implement in RT-Thread - -- finsh shell - * implement a UNIX style shell, and this shell can execute application module. - -- device file system - * implement select[4] API for device object in RT-Thread. - -- lwIP TCP/IP stack - * enable IPv6 feature[5]. 
- -- gdb server or stub - -[0] RT-Thread manual: https://github.com/RT-Thread/manual-doc -[1] RT-Thread Real-Touch: https://github.com/RT-Thread/realtouch-stm32f4 -[2] work queue: http://fxr.watson.org/fxr/source/sys/workqueue.h?v=NETBSD -[3] rwlock: http://fxr.watson.org/fxr/source/sys/rwlock.h?v=NETBSD -[4] select API: http://pubs.opengroup.org/onlinepubs/7908799/xsh/select.html -[5] dual IPv4/v6 stack: http://lwip.wikia.com/wiki/LwIP_IPv4/IPv6_stacks diff --git a/documentation/roadmap-2.1.0.md b/documentation/roadmap-2.1.0.md deleted file mode 100644 index 9e9e1e5cae..0000000000 --- a/documentation/roadmap-2.1.0.md +++ /dev/null @@ -1,50 +0,0 @@ -# The Roadmap of RT-Thread v2.1.0 version # - -Thank all of the developers and contributors, the final version of RT-Thread v2.0.0 has been released. The next version should be a small version, not always similar to the last version, which is a big version:-) The version number should be v2.1.0. There are lot of people ask me what's the features of next version. In fact, I would say, RT-Thread is an open source community, which the development of RT-Thread RTOS is depended by community, belonging to each community participants. If you want to let RT-Thread has some features, please implement it! Then share them to the community. If those part meet the RT-Thread rule (such as no license conflict), there is no reason that not to put them into the upstream of RT-Thread. - -So more representatives mentioned below are my (Bernard) personal point of view: - -1. CloudIDE, which is hosted on http://lab.rt-thread.org/cloudide, is an online IDE, ah, similar to mbed;-) But hope there are some own characteristics, and at least, it's faster in China. (When it's ready, we maybe setup a local IDE version.) It's in alpha phase right now, which introduces multi-tab edit mode; adding the Wi-Fi startkit hardware to update its firmware in fly; sharing snippets and components between developers; integrating help documentation and other information. The intention is to create a convenient way for newcomer, but not troubled by the development environment. Developers only need a web browser and the corresponding hardware, such as Wi-Fi startkit (which also is called as ART-wifi). - -2. In embedded system salon which is held in Shanghai China, December of last year, developer Weety mentioned POSIX compatibility issue, which leading to not easy to port some Linux software to RT-Thread. The main problem here is that the BSD socket interface is entirely in lwIP protocol stack, while the file system interface of RT-Thread is another one. Therefore, RT-Thread has no unified select/read/write/poll function on socket/file descriptor/device; Another implicit problem is, POSIX implementation is not completed standard. There may be some pits inside. This issue is a big problem, since we chose the open source system, then he/she must also take into account the open source ecosystem as well. There are many open source resources you can use or re-use. Therefore, RT-Thread also need to be more open attitude to solve this problem so that it can be more open, to enhance the affinity of the POSIX standard itself. Similarly, it should be better supported for some of the C ++ standard. RT-Thread will be more POSIX, more open. RT-Thread is there, and how to create a better application, it's up to the user's innovation. - -3. 
Some rich feature SoC, such as the number of new pop package ARM9 (With built-in SDRAM/DDR), Cortex-A7/8/9, MIPS32/64, or even x86, these SoC will be certainly and gradually evolved into the RT-Thread target hardware platform, but the work should be heavier. If the above No.2 POSIX issue resolved, it's possible to support them. The primary working is the driver implementation, and then integrated with POSIX interface, it will be easier to port other components. - -From my side of the energy can put into it, I should be focusing my working on the building up the platform, so that RT-Thread can be more POSIX, more standardized, more open and easy to use. The current planning point is to release RT-Thread v2.1.0 alpha version on the end of Q1 2015. This version should include dfs_lwIP file system interface, and then make sure the branch direction. - -The others, the following list are some thought but no obligation feature list, please interested guys come to claim, thanks: - -* CloudIDE related - - Improve the NAT function, turns ART-wifi board into a Wi-Fi repeater (routing). - - Look forward to sharing MQTT/CoAP components on CloudIDE; - - Look forward to adding Wi-Fi/6LoWPAN gateway in the ART-wifi startkit. - - Look forward to adding Wi-Fi/nRF51822 BLE gateway in the ART-wifi startkit. - - Look forward to turning the ART-wifi startkit as a multi-axis flight control, and porting some algorighms in PX4 project; - - Look forward to sharing Lewei50/Yeelink access component on CloudIDE; - - Look forward to sharing SSL component on CloudIDE; - - Look forward to sharing components on CloudIDE to access Ali cloud, Baidu cloud, Tencent cloud etc; - - - Add more sensors driver, e.g., barometer, thermometer, illumination, 9-axis sensor etc; - - - Look forward to integrating RealBoard LPC4088 APP development environment on CloudIDE; - - Look forward to integrating UI design feature on CloudIDE; - - Look forward to turnning CloudIDE become a local desktop application; - -* POSIX-related - - Implement dfs_lwIP file system interface for DFS fd/lwIP socket interface. To implement select/poll interface before DFS; - - More better integration between DFS and device file system interface(devfs). - - Add more POSIX interfaces, including but not limited to aio, signal and other functions etc; - - Improve DeviceDriver framework for device interfaces (rt_device_*). Application layer uses rt_device_* and devfs interface, firmware/driver developer uses device driver framework interface. - -* Others - - Porting TCP/IP protocol stack and POSIX environmemnt in OpenBSD; - - CanOpen component; - - ARM Cortex-A8/A9 + M4/M3 hardware platform; - - Some other hardware porting; - - - - -Bernard Xiong -2015.2.26 - diff --git a/documentation/sal/figures/sal_frame.jpg b/documentation/sal/figures/sal_frame.jpg new file mode 100644 index 0000000000..0fb9299544 Binary files /dev/null and b/documentation/sal/figures/sal_frame.jpg differ diff --git a/documentation/sal/sal.md b/documentation/sal/sal.md new file mode 100644 index 0000000000..98af885541 --- /dev/null +++ b/documentation/sal/sal.md @@ -0,0 +1,826 @@ +# Socket Abstraction Layer: SAL + +## SAL Introduction + +In order to adapt to more network protocol stack types and avoid the system's dependence on a single network protocol stack, the RT-Thread system provides a SAL (Socket Abstraction Layer) components that implement different network protocol stacks or network implementations. 
This abstraction of the interfaces provides a set of standard BSD Socket APIs to the upper layer, so that developers only need to care about and use the network interface provided by the network application layer, without worrying about the type and implementation of the underlying network protocol stack. This greatly improves the system's compatibility and makes it easy for developers to complete protocol stack adaptation and network-related development. The main features of the SAL component are as follows:
+
+- Abstracts and unifies the interfaces of multiple network protocol stacks;
+- Provides a Socket-level TLS encrypted transport feature;
+- Supports the standard BSD Socket APIs;
+- Unified FD management, making it easy to operate network functions using read/write and poll/select;
+
+### SAL Network Framework
+
+The SAL network framework of RT-Thread is mainly structured as follows:
+
+![SAL network framework](figures/sal_frame.jpg)
+
+The top layer is the network application layer, which provides a set of standard BSD Socket APIs, such as `socket`, `connect` and other functions, for most system network development applications.
+
+The second part is the virtual file system layer. In the RT-Thread system, the DFS file system can perform operations on different file systems through standard interface functions. The network socket interface also fits into the file system structure: the network socket descriptor created when using the network socket interface is managed uniformly by the file system, so the network socket descriptor can also use the standard file operation interfaces. The interfaces provided to the upper application layer are `read`, `write`, `close`, `poll`/`select`, and so on.
+
+The third part is the socket abstraction layer, through which the RT-Thread system can adapt to different network protocol stacks in the lower layer and provide a unified network programming interface to the upper layer, facilitating access to different protocol stacks. The socket abstraction layer provides interfaces for the upper application layer, such as `accept`, `connect`, `send`, `recv`, etc.
+
+The fourth part is the protocol stack layer, which includes several commonly used TCP/IP protocol stacks, such as lwIP, a lightweight TCP/IP protocol stack commonly used in embedded development, and the AT Socket network implementation developed by RT-Thread. These protocol stacks or network functions interact directly with the hardware to complete the transformation of data between the network layer and the transport layer.
+
+The network application layer of RT-Thread provides interfaces mainly based on the standard BSD Socket API, which ensures that programs can be written and debugged on a PC and then ported to the RT-Thread operating system.
+
+### Working Principles
+
+The working principle of the SAL component mainly covers the following parts:
+
+- Multi-protocol stack access and unified abstraction of the interface functions;
+- The SAL TLS encrypted transmission function;
+
+#### Multi-Protocol Stack Access and Unified Abstraction of Interface Functions
+
+For different protocol stacks or network function implementations, the names of the network interfaces may differ. Take the `connect` function as an example: the interface name in the lwIP protocol stack is `lwip_connect`, while the interface name in the AT Socket network implementation is `at_connect`. The SAL component abstracts and unifies the interfaces of these different protocol stacks or network implementations.
When the socket is created, the component **determines the protocol stack or network implementation to use from the protocol domain (family) type passed in**, and thus completes the access to multiple protocol stacks within the RT-Thread system.
+
+Currently, the protocol stack or network implementation types supported by the SAL component are: the **lwIP protocol stack**, the **AT Socket protocol stack**, and the **WIZnet hardware TCP/IP protocol stack**.
+
+```c
+int socket(int domain, int type, int protocol);
+```
+
+The above is the definition of the socket creation function in the standard BSD Socket API. The `domain` parameter indicates the protocol family, also called the protocol domain, and is used to determine which protocol stack or network implementation to use. The domain type used by the AT Socket protocol stack is **AF_AT**, the lwIP protocol stack uses the protocol domain type **AF_INET**, and the WIZnet protocol stack uses the protocol domain type **AF_WIZ**.
+
+For different software packages, the protocol domain type passed to the socket may be fixed and will not change depending on how the SAL component is accessed. **In order to dynamically adapt access to different protocol stacks or network implementations**, the SAL component provides two protocol domain type matching methods for each protocol stack or network implementation: the **primary protocol domain type and the secondary protocol domain type**. When a socket is created, SAL first checks whether the incoming protocol domain type matches a supported primary protocol domain type. If it does, the corresponding protocol stack or network implementation is used; if not, SAL then checks whether the secondary protocol domain type is supported. The protocol domain types currently supported by the system are as follows:
+
+1. lwIP protocol stack: family = AF_INET, sec_family = AF_INET
+
+2. AT Socket protocol stack: family = AF_AT, sec_family = AF_INET
+
+3. WIZnet hardware TCP/IP protocol stack: family = AF_WIZ, sec_family = AF_INET
+
+The main function of the SAL component is to unify the underlying BSD Socket API interfaces. The following takes the `connect` call flow as an example to illustrate how the SAL component functions are called:
+
+- `connect`: the abstract BSD Socket API provided by the SAL component, used for unified FD management;
+- `sal_connect`: the `connect` implementation function in the SAL component, which calls the `operation` function registered by the underlying protocol stack;
+- `lwip_connect`: the `connect` function provided by the underlying protocol stack, registered with the SAL component when the network interface is initialized; it is the operation function that is finally called.
+
+```c
+/* SAL component provides BSD Socket APIs for the application layer */
+int connect(int s, const struct sockaddr *name, socklen_t namelen)
+{
+    /* Get the SAL socket descriptor */
+    int socket = dfs_net_getsocket(s);
+
+    /* Execute the sal_connect function with the SAL socket descriptor */
+    return sal_connect(socket, name, namelen);
+}
+
+/* SAL component abstract function interface implementation */
+int sal_connect(int socket, const struct sockaddr *name, socklen_t namelen)
+{
+    struct sal_socket *sock;
+    struct sal_proto_family *pf;
+    int ret;
+
+    /* Check if the SAL socket structure is normal */
+    SAL_SOCKET_OBJ_GET(sock, socket);
+
+    /* Check if the current socket network connection status is normal. */
+    SAL_NETDEV_IS_COMMONICABLE(sock->netdev);
+    /* Check if the underlying operation function corresponding to the current socket is normal.
*/ + SAL_NETDEV_SOCKETOPS_VALID(sock->netdev, pf, connect); + + /* The connect operation function that performs the underlying registration */ + ret = pf->skt_ops->connect((int) sock->user_data, name, namelen); +#ifdef SAL_USING_TLS + if (ret >= 0 && SAL_SOCKOPS_PROTO_TLS_VALID(sock, connect)) + { + if (proto_tls->ops->connect(sock->user_data_tls) < 0) + { + return -1; + } + return ret; + } +#endif + return ret; +} + +/* The underlying connect function of the lwIP protocol stack function */ +int lwip_connect(int socket, const struct sockaddr *name, socklen_t namelen) +{ + ... +} +``` + +#### SAL TLS Encrypted Transmission Function + +**1. SAL TLS Feature** + +In the transmission of protocol data such as TCP and UDP, since the data packet is plaintext, it is likely to be intercepted and parsed by others, which has a great impact on the secure transmission of information. In order to solve such problems, users generally need to add the SSL/TLS protocol between the application layer and the transport layer. + +TLS (Transport Layer Security) is a protocol based on the transport layer TCP protocol. Its predecessor is SSL (Secure Socket Layer). Its main function is to encrypt the application layer message asymmetrically and then transmit it by TCP protocol, so as to achieve secure data encryption interaction. + +Currently used TLS methods: **MbedTLS, OpenSSL, s2n**, etc., but for different encryption methods, you need to use their specified encryption interface and process for encryption, the migration of some application layer protocols is more complicated. Therefore, the SAL TLS function is generated, the main function is to **provide TLS-encrypted transmission characteristics at the Socket level, abstract multiple TLS processing methods, and provide a unified interface for completing TLS data interaction**. + +**2. How to use the SAL TLS feature** + +The process is as follows: + +- Configure to enable any network protocol stack support (such as lwIP protocol stack); + +- Configure to enable the MbedTLS package (currently only supports MbedTLS type encryption); + +- Configure to enable SAL_TLS support (as shown in the configuration options section below); + +After the configuration is complete, as long as the `protocol` type passed in the socket creation uses **PROTOCOL_TLS** or **PROTOCOL_DTLS **, the standard BSD Socket API interface can be used to complete the establishment of the TLS connection and the data transmission and reception. 
The sample code is as follows: + +```c +#include +#include + +#include +#include +#include + +/* RT-Thread offical website,supporting TLS function */ +#define SAL_TLS_HOST "www.rt-thread.org" +#define SAL_TLS_PORT 443 +#define SAL_TLS_BUFSZ 1024 + +static const char *send_data = "GET /download/rt-thread.txt HTTP/1.1\r\n" + "Host: www.rt-thread.org\r\n" + "User-Agent: rtthread/4.0.1 rtt\r\n\r\n"; + +void sal_tls_test(void) +{ + int ret, i; + char *recv_data; + struct hostent *host; + int sock = -1, bytes_received; + struct sockaddr_in server_addr; + + /* Get the host address through the function entry parameter url (if it is a domain name, it will do domain name resolution) */ + host = gethostbyname(SAL_TLS_HOST); + + recv_data = rt_calloc(1, SAL_TLS_BUFSZ); + if (recv_data == RT_NULL) + { + rt_kprintf("No memory\n"); + return; + } + + /* Create a socket of type SOCKET_STREAM, TCP protocol, TLS type */ + if ((sock = socket(AF_INET, SOCK_STREAM, PROTOCOL_TLS)) < 0) + { + rt_kprintf("Socket error\n"); + goto __exit; + } + + /* Initialize the server address */ + server_addr.sin_family = AF_INET; + server_addr.sin_port = htons(SAL_TLS_PORT); + server_addr.sin_addr = *((struct in_addr *)host->h_addr); + rt_memset(&(server_addr.sin_zero), 0, sizeof(server_addr.sin_zero)); + + if (connect(sock, (struct sockaddr *)&server_addr, sizeof(struct sockaddr)) < 0) + { + rt_kprintf("Connect fail!\n"); + goto __exit; + } + + /* Send data to the socket connection */ + ret = send(sock, send_data, strlen(send_data), 0); + if (ret <= 0) + { + rt_kprintf("send error,close the socket.\n"); + goto __exit; + } + + /* Receive and print the response data, using encrypted data transmission */ + bytes_received = recv(sock, recv_data, SAL_TLS_BUFSZ - 1, 0); + if (bytes_received <= 0) + { + rt_kprintf("received error,close the socket.\n"); + goto __exit; + } + + rt_kprintf("recv data:\n"); + for (i = 0; i < bytes_received; i++) + { + rt_kprintf("%c", recv_data[i]); + } + +__exit: + if (recv_data) + rt_free(recv_data); + + if (sock >= 0) + closesocket(sock); +} + +#ifdef FINSH_USING_MSH +#include +MSH_CMD_EXPORT(sal_tls_test, SAL TLS function test); +#endif /* FINSH_USING_MSH */ +``` + +### Configuration Options + +When we use the SAL component we need to define the following macro definition in rtconfig.h: + +| **Macro definition** | **Description** | +|--------------------------------|--------------------------------| +| RT_USING_SAL | Enable the SAL function | +| SAL_USING_LWIP | Enable lwIP stack support | +| SAL_USING_AT | Enable AT Socket protocol stack support | +| SAL_USING_TLS | Enable SAL TLS feature support | +| SAL_USING_POSIX | Enable POSIX file system related function support, such as read, write, select/poll, etc. | + +Currently, the SAL abstraction layer supports the lwIP protocol stack, the AT Socket protocol stack, and the WIZnet hardware TCP/IP protocol stack. To enable SAL in the system, at least one protocol stack support is required. + +The above configuration options can be added directly to the `rtconfig.h` file or added by the component package management tool Env configuration option. 
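+
+If the options are written into `rtconfig.h` by hand instead, the result might look like the following minimal sketch (here lwIP and POSIX support are enabled purely as an example; pick the macros from the table above that match your own configuration):
+
+```c
+/* Example SAL configuration in rtconfig.h (illustrative only) */
+#define RT_USING_SAL            /* enable the SAL component */
+#define SAL_USING_LWIP          /* use the lwIP protocol stack through SAL */
+#define SAL_USING_POSIX         /* allow read/write/poll/select on sockets */
+/* #define SAL_USING_AT  */     /* uncomment to add AT Socket support */
+/* #define SAL_USING_TLS */     /* uncomment to add SAL TLS support */
+```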
The specific configuration path in the Env tool is as follows:
+
+```c
+RT-Thread Components --->
+    Network --->
+        Socket abstraction layer --->
+            [*] Enable socket abstraction layer
+            protocol stack implement --->
+            [ ] Support lwIP stack
+            [ ] Support AT Commands stack
+            [ ] Support MbedTLS protocol
+            [*] Enable BSD socket operated by file system API
+```
+
+After the configuration is complete, you can use the `scons` command to regenerate the project and complete the addition of the SAL component.
+
+## Initialization ##
+
+After the SAL option is enabled in the configuration, it needs to be initialized at startup to enable the SAL function. If component auto-initialization is already used in the program, no separate initialization is required; otherwise, the following function needs to be called in the initialization task:
+
+```c
+int sal_init(void);
+```
+
+This initialization function mainly initializes the SAL component, supports repeated-initialization detection, and completes the initialization of the resources used in the component, such as the mutex. No new thread is created in the SAL component, which means that the resource occupation of the SAL component is very small. Currently, the **SAL component occupies about 2.8 KB of ROM and 0.6 KB of RAM**.
+
+## BSD Socket API Introduction ##
+
+The SAL component abstracts the standard BSD Socket API interfaces. The following is an introduction to the common network interfaces:
+
+### Create a Socket (socket)
+
+```c
+int socket(int domain, int type, int protocol);
+```
+
+| **Parameter** | **Description** |
+|--------|-------------------------------------|
+| domain | protocol domain type |
+| type | protocol type |
+| protocol | transport layer protocol actually used |
+| **back** | -- |
+| >=0 | success, an integer representing the socket descriptor is returned |
+| -1 | fail |
+
+This function is used to allocate a socket descriptor and the resources it uses, based on the specified address family, data type, and protocol.
+
+**domain ( protocol domain type ):**
+
+- AF_INET: IPv4
+- AF_INET6: IPv6
+
+**type ( protocol type ):**
+
+- SOCK_STREAM: stream socket
+- SOCK_DGRAM: datagram socket
+- SOCK_RAW: raw socket
+
+### Bind Socket (bind)
+
+```c
+int bind(int s, const struct sockaddr *name, socklen_t namelen);
+```
+
+| **Parameter** | **Description** |
+|---------|---------------------------------------------|
+| s | socket descriptor |
+| name | a pointer to the sockaddr structure representing the address to bind to |
+| namelen | length of the sockaddr structure |
+| **back** | -- |
+| 0 | success |
+| -1 | fail |
+
+This function is used to bind a port number and IP address to the specified socket.
+
+The SAL component depends on the `netdev` component. When the `bind()` function is used, the IP address information can be obtained through the netdev network interface card name and used to bind the created socket to the specified network interface card object.
The following example completes the process of binding the IP address of the network interface card and connecting to the server through the name of the incoming network interface card:
+
+```c
+#include
+#include
+#include
+
+#define SERVER_HOST   "192.168.1.123"
+#define SERVER_PORT   1234
+
+static int bind_test(int argc, char **argv)
+{
+    struct sockaddr_in client_addr;
+    struct sockaddr_in server_addr;
+    struct netdev *netdev = RT_NULL;
+    int sockfd = -1;
+
+    if (argc != 2)
+    {
+        rt_kprintf("bind_test [netdev_name] --bind network interface device by name.\n");
+        return -RT_ERROR;
+    }
+
+    /* Get the netdev network interface card object by name */
+    netdev = netdev_get_by_name(argv[1]);
+    if (netdev == RT_NULL)
+    {
+        rt_kprintf("get network interface device(%s) failed.\n", argv[1]);
+        return -RT_ERROR;
+    }
+
+    if ((sockfd = socket(AF_INET, SOCK_STREAM, 0)) < 0)
+    {
+        rt_kprintf("Socket create failed.\n");
+        return -RT_ERROR;
+    }
+
+    /* Initializes the client address to bind to */
+    client_addr.sin_family = AF_INET;
+    client_addr.sin_port = htons(8080);
+    /* Gets the IP address information in the network interface card object */
+    client_addr.sin_addr.s_addr = netdev->ip_addr.addr;
+    rt_memset(&(client_addr.sin_zero), 0, sizeof(client_addr.sin_zero));
+
+    if (bind(sockfd, (struct sockaddr *)&client_addr, sizeof(struct sockaddr)) < 0)
+    {
+        rt_kprintf("socket bind failed.\n");
+        closesocket(sockfd);
+        return -RT_ERROR;
+    }
+    rt_kprintf("socket bind network interface device(%s) success!\n", netdev->name);
+
+    /* Initializes the server address for the pre-connection */
+    server_addr.sin_family = AF_INET;
+    server_addr.sin_port = htons(SERVER_PORT);
+    server_addr.sin_addr.s_addr = inet_addr(SERVER_HOST);
+    rt_memset(&(server_addr.sin_zero), 0, sizeof(server_addr.sin_zero));
+
+    /* Connect to the server */
+    if (connect(sockfd, (struct sockaddr *)&server_addr, sizeof(struct sockaddr)) < 0)
+    {
+        rt_kprintf("socket connect failed!\n");
+        closesocket(sockfd);
+        return -RT_ERROR;
+    }
+    else
+    {
+        rt_kprintf("socket connect success!\n");
+    }
+
+    /* Close the connection */
+    closesocket(sockfd);
+    return RT_EOK;
+}
+
+#ifdef FINSH_USING_MSH
+#include
+MSH_CMD_EXPORT(bind_test, bind network interface device test);
+#endif /* FINSH_USING_MSH */
+```
+
+### Listen Socket (listen)
+
+```c
+int listen(int s, int backlog);
+```
+
+| **Parameter** | **Description** |
+|-------|-------------------------------|
+| s | socket descriptor |
+| backlog | the maximum number of connections that can wait at a time |
+| **back** | -- |
+| 0 | success |
+| -1 | fail |
+
+This function is used by a TCP server to listen for connections on a specified socket.
+
+### Accept Connection (accept)
+
+```c
+int accept(int s, struct sockaddr *addr, socklen_t *addrlen);
+```
+
+| **Parameter** | **Description** |
+|-------|-------------------------------|
+| s | socket descriptor |
+| addr | a pointer to the sockaddr structure that receives the client device address |
+| addrlen | length of the client device address structure |
+| **back** | -- |
+| >=0 | success, return the newly created socket descriptor |
+| -1 | fail |
+
+When the application listens for connections from other hosts, the connection is initialized with the `accept()` function, and `accept()` creates a new socket for each connection and removes the connection from the listen queue.
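+
+On the server side, `listen` and `accept` are typically combined with `socket` and `bind`. The following fragment is only a minimal sketch of that flow: error handling is omitted, the port number 5000 and backlog of 5 are arbitrary values chosen for illustration, and it assumes the usual SAL socket headers are included and that the stack provides the standard INADDR_ANY macro.
+
+```c
+/* Minimal TCP server skeleton built from the interfaces described above */
+static void tcp_server_sketch(void)
+{
+    struct sockaddr_in addr, client_addr;
+    socklen_t client_len = sizeof(client_addr);
+    int listen_fd, client_fd;
+
+    /* create a TCP socket */
+    listen_fd = socket(AF_INET, SOCK_STREAM, 0);
+
+    /* bind it to an arbitrary example port on any local address */
+    addr.sin_family = AF_INET;
+    addr.sin_port = htons(5000);
+    addr.sin_addr.s_addr = INADDR_ANY;
+    rt_memset(&(addr.sin_zero), 0, sizeof(addr.sin_zero));
+    bind(listen_fd, (struct sockaddr *)&addr, sizeof(struct sockaddr));
+
+    /* allow at most 5 pending connections, then wait for a client */
+    listen(listen_fd, 5);
+    client_fd = accept(listen_fd, (struct sockaddr *)&client_addr, &client_len);
+
+    /* client_fd is a new socket dedicated to this connection */
+    closesocket(client_fd);
+    closesocket(listen_fd);
+}
+```
+
+In a real server the `accept()` call would normally sit in a loop, and each returned descriptor would be handed to `recv`/`send` for the actual data exchange.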
+
+### Establish Connection (connect)
+
+```c
+int connect(int s, const struct sockaddr *name, socklen_t namelen);
+```
+
+| **Parameter** | **Description** |
+|-------|-------------------------------|
+| s | socket descriptor |
+| name | server address information |
+| namelen | server address structure length |
+| **back** | -- |
+| 0 | success |
+| -1 | fail |
+
+This function is used to establish a connection to the specified socket.
+
+### Send TCP Data (send)
+
+```c
+int send(int s, const void *dataptr, size_t size, int flags);
+```
+
+| **Parameter** | **Description** |
+|-------|---------------------------|
+| s | socket descriptor |
+| dataptr | sent data pointer |
+| size | length of sent data |
+| flags | flags, usually 0 |
+| **back** | -- |
+| >0 | success, the length of the sent data is returned |
+| <=0 | fail |
+
+This function is commonly used to send data over a TCP connection.
+
+### Receive TCP Data (recv)
+
+```c
+int recv(int s, void *mem, size_t len, int flags);
+```
+
+| **Parameter** | **Description** |
+|-----|---------------------------|
+| s | socket descriptor |
+| mem | received data pointer |
+| len | length of received data |
+| flags | flags, usually 0 |
+| **back** | -- |
+| >0 | success, the length of the received data is returned |
+| =0 | the peer has finished transmitting and closed the connection |
+| <0 | fail |
+
+This function is used to receive data over a TCP connection.
+
+### Send UDP Data (sendto)
+
+```c
+int sendto(int s, const void *dataptr, size_t size, int flags, const struct sockaddr *to, socklen_t tolen);
+```
+
+| **Parameter** | **Description** |
+|-------|---------------------------|
+| s | socket descriptor |
+| dataptr | sent data pointer |
+| size | length of sent data |
+| flags | flags, usually 0 |
+| to | target address structure pointer |
+| tolen | length of the target address structure |
+| **back** | -- |
+| >0 | success, the length of the sent data is returned |
+| <=0 | fail |
+
+This function is used to send data over a UDP connection.
+
+### Receive UDP Data (recvfrom)
+
+```c
+int recvfrom(int s, void *mem, size_t len, int flags, struct sockaddr *from, socklen_t *fromlen);
+```
+
+| **Parameter** | **Description** |
+|-------|---------------------------|
+| s | socket descriptor |
+| mem | received data pointer |
+| len | length of data received |
+| flags | flags, usually 0 |
+| from | received address structure pointer |
+| fromlen | length of received address structure |
+| **back** | -- |
+| >0 | success, return the length of the received data |
+| =0 | the peer has finished transmitting and closed the connection |
+| <0 | fail |
+
+This function is used to receive data on a UDP connection.
+
+### Close Socket (closesocket)
+
+```c
+int closesocket(int s);
+```
+
+| **Parameter** | **Description** |
+|----|-------------|
+| s | socket descriptor |
+| **back** | -- |
+| 0 | success |
+| -1 | fail |
+
+This function is used to close the connection and release the resources.
+
+### Shutdown The Socket By Setting (shutdown)
+
+```c
+int shutdown(int s, int how);
+```
+
+| **Parameter** | **Description** |
+|----|-----------------|
+| s | socket descriptor |
+| how | socket control |
+| **back** | -- |
+| 0 | success |
+| -1 | fail |
+
+This function provides more control over how the socket is closed.
+
+**how ( socket control ):**
+
+- 0: Stop receiving current data and reject future data reception;
+- 1: Stop
sending data and discard unsent data; +- 2: Stop receiving and sending data。 + +### Set Socket Options(setsockopt) + +```c +int setsockopt(int s, int level, int optname, const void *optval, socklen_t optlen); +``` + +| **Parameter** | **Description** | +|-------|-----------------------| +| s | socket descriptor | +| level | protocol stack configuration options | +| optname | option name to be set | +| optval | set the buffer address of the option value | +| optlen | set the buffer length of the option value | +| **back** | -- | +| =0 | success | +| <0 | fail | + +This function is used to set the socket mode and modify the socket configuration options.。 + +**level ( protocol stack configuration options ):** + +- SOL_SOCKET:Socket layer +- IPPROTO_TCP:TCP layer +- IPPROTO_IP:IP layer + +**optname ( Option name to be set ):** + +- SO_KEEPALIVE:Set keep alive options +- SO_RCVTIMEO:Set socket data reception timeout +- SO_SNDTIMEO:Set socket data sending timeout + +### Get Socket Options(getsockopt) + +```c +int getsockopt(int s, int level, int optname, void *optval, socklen_t *optlen); +``` + +| **Parameter** | **Description** | +|-------|---------------------------| +| s | socket descriptor | +| level | protocol stack configuration options | +| optname | option name to be set | +| optval | get the buffer address of the option value | +| optlen | get the buffer length address of the option value | +| **back** | -- | +| =0 | success | +| <0 | fail | + +This function is used to get the socket configuration options。 + +### Get Remote Address Information (getpeername) + +```c +int getpeername(int s, struct sockaddr *name, socklen_t *namelen); +``` + +| **Parameter** | **Description** | +|-------|-------------------------| +| s | socket descriptor | +| name | received address structure pointer | +| namelen | length of received address structure | +| **back** | -- | +| =0 | success | +| <0 | fail | + +This function is used to get the remote address information associated with the socket。 + +### Get Local Address Information (getsockname) + +```c +int getsockname(int s, struct sockaddr *name, socklen_t *namelen); +``` + +| **Parameter** | **Description** | +|-------|-------------------------| +| s | Socket descriptor | +| name | received address structure pointer | +| namelen | length of received address structure | +| **back** | -- | +| =0 | success | +| <0 | fail | + +This function is used to get local socket address information。 + +### Configure Socket Parameters (ioctlsocket)) + +```c +int ioctlsocket(int s, long cmd, void *arg); +``` + +| **Parameter** | **Description** | +|-----|-----------------| +| s | socket descriptor | +| cmd | socket operation command | +| arg | operation command's parameters | +| **back** | -- | +| =0 | success | +| <0 | fail | + +This function sets the socket control mode。 + +**The CMD supports following commands** + +- FIONBIO: Turns on or off the socket's non-blocking mode. Arg parameter 1 is open non-blocking and 0 is closed non-blocking. + +## Network Protocol Stack Access + +Access to the network protocol stack or network function implementation is mainly to initialize and register the protocol cluster structure, and add it to the protocol cluster list in SAL component. 
The protocol cluster structure is defined as follows: + +```c +/* network interface socket opreations */ +struct sal_socket_ops +{ + int (*socket) (int domain, int type, int protocol); + int (*closesocket)(int s); + int (*bind) (int s, const struct sockaddr *name, socklen_t namelen); + int (*listen) (int s, int backlog); + int (*connect) (int s, const struct sockaddr *name, socklen_t namelen); + int (*accept) (int s, struct sockaddr *addr, socklen_t *addrlen); + int (*sendto) (int s, const void *data, size_t size, int flags, const struct sockaddr *to, socklen_t tolen); + int (*recvfrom) (int s, void *mem, size_t len, int flags, struct sockaddr *from, socklen_t *fromlen); + int (*getsockopt) (int s, int level, int optname, void *optval, socklen_t *optlen); + int (*setsockopt) (int s, int level, int optname, const void *optval, socklen_t optlen); + int (*shutdown) (int s, int how); + int (*getpeername)(int s, struct sockaddr *name, socklen_t *namelen); + int (*getsockname)(int s, struct sockaddr *name, socklen_t *namelen); + int (*ioctlsocket)(int s, long cmd, void *arg); +#ifdef SAL_USING_POSIX + int (*poll) (struct dfs_fd *file, struct rt_pollreq *req); +#endif +}; + +/* sal network database name resolving */ +struct sal_netdb_ops +{ + struct hostent* (*gethostbyname) (const char *name); + int (*gethostbyname_r)(const char *name, struct hostent *ret, char *buf, size_t buflen, struct hostent **result, int *h_errnop); + int (*getaddrinfo) (const char *nodename, const char *servname, const struct addrinfo *hints, struct addrinfo **res); + void (*freeaddrinfo) (struct addrinfo *ai); +}; + +/* Protocol domain structure definition */ +struct sal_proto_family +{ + int family; /* primary protocol families type */ + int sec_family; /* secondary protocol families type */ + const struct sal_socket_ops *skt_ops; /* socket opreations */ + const struct sal_netdb_ops *netdb_ops; /* network database opreations */ +}; + +``` + +- `family`: Each protocol stack supports the main protocol cluster types, such as AF_INET for lwIP, AF_AT Socket, and AF_WIZ for WIZnet。 +- `sec_family`:The type of sub-protocol domain supported by each protocol stack, used to support a single protocol stack or network implementation, that matches other types of protocol cluster types in the package。 +- `skt_ops`: Define socket related functions, such as connect, send, recv, etc., each protocol cluster has a different set of implementation。 +- `netdb_ops`:Define non-socket-related execution functions, such as `gethostbyname`, `getaddrinfo`, `freeaddrinfo`, etc. Each protocol cluster has a different set of implementations。 + +The following is the access registration process implemented by AT Socket network. 
Developers can refer to other protocol stacks or network implementations for access: + +```c +#include +#include +#include /* SAL component structure holds the header file */ +#include /* AT Socket related header file */ +#include + +#include /*network interface card function related header file */ + +#ifdef SAL_USING_POSIX +#include /* Poll function related header file */ +#endif + +#ifdef SAL_USING_AT + +/* A custom poll execution function that handles the events received in the poll */ +static int at_poll(struct dfs_fd *file, struct rt_pollreq *req) +{ + int mask = 0; + struct at_socket *sock; + struct socket *sal_sock; + + sal_sock = sal_get_socket((int) file->data); + if(!sal_sock) + { + return -1; + } + + sock = at_get_socket((int)sal_sock->user_data); + if (sock != NULL) + { + rt_base_t level; + + rt_poll_add(&sock->wait_head, req); + + level = rt_hw_interrupt_disable(); + if (sock->rcvevent) + { + mask |= POLLIN; + } + if (sock->sendevent) + { + mask |= POLLOUT; + } + if (sock->errevent) + { + mask |= POLLERR; + } + rt_hw_interrupt_enable(level); + } + + return mask; +} +#endif + +/* Define and assign socket execution functions, and the SAL component calls the underlying function of the registration when it executes relevant functions */ +static const struct proto_ops at_inet_stream_ops = +{ + at_socket, + at_closesocket, + at_bind, + NULL, + at_connect, + NULL, + at_sendto, + at_recvfrom, + at_getsockopt, + at_setsockopt, + at_shutdown, + NULL, + NULL, + NULL, + +#ifdef SAL_USING_POSIX + at_poll, +#else + NULL, +#endif /* SAL_USING_POSIX */ +}; + +static const struct sal_netdb_ops at_netdb_ops = +{ + at_gethostbyname, + NULL, + at_getaddrinfo, + at_freeaddrinfo, +}; + +/* define and assign AT Socket protocol domain structure */ +static const struct sal_proto_family at_inet_family = +{ + AF_AT, + AF_INET, + &at_socket_ops, + &at_netdb_ops, +}; + +/* Used to set the protocol domain information in the network interface card device */ +int sal_at_netdev_set_pf_info(struct netdev *netdev) +{ + RT_ASSERT(netdev); + + netdev->sal_user_data = (void *) &at_inet_family; + return 0; +} + +#endif /* SAL_USING_AT */ +``` diff --git a/documentation/scons/figures/hello-menu.png b/documentation/scons/figures/hello-menu.png new file mode 100644 index 0000000000..e123880931 Binary files /dev/null and b/documentation/scons/figures/hello-menu.png differ diff --git a/documentation/scons/figures/hello-rtconfig.png b/documentation/scons/figures/hello-rtconfig.png new file mode 100644 index 0000000000..f3a0ba2726 Binary files /dev/null and b/documentation/scons/figures/hello-rtconfig.png differ diff --git a/documentation/scons/figures/hello-value.png b/documentation/scons/figures/hello-value.png new file mode 100644 index 0000000000..40f592af35 Binary files /dev/null and b/documentation/scons/figures/hello-value.png differ diff --git a/documentation/scons/figures/hello.png b/documentation/scons/figures/hello.png new file mode 100644 index 0000000000..0842ada775 Binary files /dev/null and b/documentation/scons/figures/hello.png differ diff --git a/documentation/scons/figures/kconfig.png b/documentation/scons/figures/kconfig.png new file mode 100644 index 0000000000..4ef28a97dd Binary files /dev/null and b/documentation/scons/figures/kconfig.png differ diff --git a/documentation/scons/figures/scons.png b/documentation/scons/figures/scons.png new file mode 100644 index 0000000000..e54d6611c1 Binary files /dev/null and b/documentation/scons/figures/scons.png differ diff --git a/documentation/scons/scons.md 
b/documentation/scons/scons.md new file mode 100644 index 0000000000..01ee8a444c --- /dev/null +++ b/documentation/scons/scons.md @@ -0,0 +1,596 @@ +# SCons + +## Introduction to SCons + +SCons is an open source build system written in the Python language, similar to GNU Make. It uses a different approach than the usual Makefile, but instead uses SConstruct and SConscript files instead. These files are also Python scripts that can be written using standard Python syntax, so the Python standard library can be called in SConstruct, SConscript files for a variety of complex processing, not limited to the rules set by the Makefile. + +A detailed [SCons user manual](http://www.scons.org/doc/production/HTML/scons-user/index.html) can be found on the SCons website. This section describes the basic usage of SCons and how to use the SCons tool in RT-Thread. + +### What is Construction Tool + +A software construction tool is a piece of software that compiles source code into an executable binary program according to certain rules or instructions. This is the most basic and important feature of building tools. In fact, these are not the only functions of construction tools. Usually these rules have a certain syntax and are organized into files. These files are used to control the behavior of the build tool, and you can do other things besides software building. + +The most popular build tool today is GNU Make. Many well-known open source software, such as the Linux kernel, are built using Make. Make detects the organization and dependencies of the file by reading the Makefile and completes the commands specified in the Makefile. + +Due to historical reasons, the syntax of the Makefile is confusing, which is not conducive to beginners. In addition, it is not convenient to use Make on the Windows platform, you need to install the Cygwin environment. To overcome the shortcomings of Make, other build tools have been developed, such as CMake and SCons. + +### RT-Thread Construciton Tool + +RT-Thread was built using Make/Makefile in the earlier stage. Starting from 0.3.x, the RT-Thread development team gradually introduced the SCons build system. The only goal of introducing SCons is to get everyone out of the complex Makefile configuration, IDE configuration, and focus on RT-Thread function development. + +Some may doubt the difference between the build tools described here and the IDE. The IDE completes the build through the operation of the graphical interface. Most IDEs generate script files like Makefile or SConscript based on the source code added by the user, and call the tools like Make or SCons to build the source code. + +### Install SCons + +It needs to be installed on the PC host before using the SCons system because it is written in the Python language, so you need to install the Python runtime environment before using SCons. + +The Env configuration tool provided by RT-Thread comes with SCons and Python, so using SCons on Windows platforms does not require the installation of these two software. + +In Linux and BSD environments, Python should already be installed by default, which is also the Python environment of the 2.x version. At this time, you only need to install SCons. For example, in Ubuntu you can install SCons with the following command: + +`sudo apt-get install scons` + +## Basic Functions of SCons + +The RT-Thread build system supports multiple compilers, including ARM GCC, MDK, IAR, VisualStudio, and Visual DSP. 
The mainstream ARM Cortex M0, M3, M4 platforms, basically all support ARM GCC, MDK, IAR. Some BSPs may only support one compiler, and readers can read the currently supported compiler by reading the CROSS_TOOL option in rtconfig.py under the BSP directory. + +If it is a chip on the ARM platform, you can use the Env tool and enter the scons command to compile the BSP directly. At this time, the ARM GCC compiler is used by default because the Env tool comes with the ARM GCC compiler. Compile a BSP using the scons command as shown below, and the SCons will be based on this BSP. + +![Compile BSP using scons](figures/scons.png) + +If the user wants to use another compiler that the BSP already supports to compile the project, or if the BSP is a non-ARM platform chip, then you can't compile the project directly with the scons command. You need to install the corresponding compiler yourself and specify the compiler path to use. Before compiling the project, you can use the following two commands in the Env command line interface to specify the compiler path for the MDK and the compiler path to MDK. + +```c +set RTT_CC=keil +set RTT_EXEC_PATH=C:/Keilv5 +``` + +### Commonly Used SCons Commands + +This section describes the SCons commands that are commonly used in RT-Thread. SCons not only completes basic compilation, but also generates MDK/IAR/VS projects. + +#### scons + +Go to the BSP project directory to be compiled in the Env command line window, and then use this command to compile the project directly. If some source files are modified after executing the `scons` command, and the scons command is executed again, SCons will incrementally compile and compile only the modified source files and link them. + + +`scons` can also be followed by a `-s` parameter, the command `scons -s`, which differs from the `scons` command in that it does not print specific internal commands. + +#### scons -c + +Clear the compilation target. This command clears the temporary and target files generated when `scons` is executed. + +#### scons --target=XXX + +If you use mdk/iar for project development, when you open or close some components, you need to use one of the following commands to regenerate the corresponding customized project, then compile and download in mdk/iar. + +```c +scons --target=iar +scons --target=mdk4 +scons --target=mdk5 +``` + +In the Env command line window, enter the BSP project directory to be compiled. After using the `scons --target=mdk5` command, a new MDK project file named project.uvprojx will be generated in the BSP directory. Double-click it to open and you can use MDK to compile and debug. Using the `scons --target=iar` command will generate a new IAR project file named project.eww. Users who are not used to SCons can use this method. If project.uvproj fails to open, please delete project.uvopt and rebuild the project. + +Under `bsp/simulator` directory , you can use the following command to generate a project for vs2012 or a project for vs2005. + +```c +scons --target=vs2012 +Scons --target=vs2005 +``` + +If you provide template files for other IDE projects in the BSP directory, you can also use this command to generate corresponding new projects, such as ua, vs, cb, cdk. + +This command can also be followed by a `-s` parameter, such as the command `scons –target=mdk5 -s`, which does not print specific internal commands when executing this command. 
+ +> To generate a MDK or IAR project file, the prerequisite is that there is a project template file in the BSP directory, and then the scons will add relevant source code, header file search path, compilation parameters, link parameters, etc. according to the template file. As for which chip this project is for, it is directly specified by this engineering template file. So in most cases, this template file is an empty project file that is used to assist SCons in generating project.uvprojx or project.eww. + +#### scons -jN + +Multi-threaded compilation target, you can use this command to speed up compilation on multi-core computers. In general, a cpu core can support 2 threads. Use the `scons -j4` command on a dual-core machine. + +> If you just want to look at compilation errors or warnings, it's best not to use the -j parameter so that the error message won't be mixed with multiple files in parallel. + +#### scons --dist + +Build a project framework. Using this command will generate the `dist` directory in the BSP directory, this is the directory structure of the development project, including RT-Thread source code and BSP related projects, irrelevant BSP folder and libcpu will be removed, and you can freely copy this work to any directory. + +#### scons --verbose + +By default, output compiled with the scons command does not display compilation parameters as follows: + +```c +D:\repository\rt-thread\bsp\stm32f10x>scons +scons: Reading SConscript files ... +scons: done reading SConscript files. +scons: Building targets ... +scons: building associated VariantDir targets: build +CC build\applications\application.o +CC build\applications\startup.o +CC build\components\drivers\serial\serial.o +... +``` + +The effect of using the scons –verbose command is as follows: + +```c +armcc -o build\src\mempool.o -c --device DARMSTM --apcs=interwork -ID:/Keil/ARM/ +RV31/INC -g -O0 -DUSE_STDPERIPH_DRIVER -DSTM32F10X_HD -Iapplications -IF:\Projec +t\git\rt-thread\applications -I. -IF:\Project\git\rt-thread -Idrivers -IF:\Proje +ct\git\rt-thread\drivers -ILibraries\STM32F10x_StdPeriph_Driver\inc -IF:\Project +\git\rt-thread\Libraries\STM32F10x_StdPeriph_Driver\inc -ILibraries\STM32_USB-FS +-Device_Driver\inc -IF:\Project\git\rt-thread\Libraries\STM32_USB-FS-Device_Driv +er\inc -ILibraries\CMSIS\CM3\DeviceSupport\ST\STM32F10x -IF:\Project\git\rt-thre +... +``` + +## SCons Advanced + +SCons uses SConscript and SConstruct files to organize the source structure. Usually a project has only one SConstruct, but there will be multiple SConscripts. In general, an SConscript will be placed in each subdirectory where the source code is stored. + +In order to make RT-Thread better support multiple compilers and to easily adjust compilation parameters, RT-Thread creates a separate file for each BSP called `rtconfig.py`. So the following three files exist in every RT-Thread BSP directory: `rtconfig.py`, `SConstruct`, and `SConscript`, which control the compilation of the BSP. There is only one SConstruct file in a BSP, but there are multiple SConscript files. It can be said that the SConscript file is the main force of the organization source code. + +RT-Thread SConscript files are also present in most source folders. These files are "found" by the SConscript file in the BSP directory to add the source code corresponding to the macro defined in rtconfig.h to the compiler. The following article will take stm32f10x-HAL BSP as an example to explain how SCons builds the project. 
+
+### SCons Built-In Functions
+
+If you want to add some of your own source code to the SCons build environment, you can usually create or modify an existing SConscript file. The SConscript file can control the addition of source files and can specify the group of files (similar to the concept of Groups in IDEs such as MDK/IAR).
+
+SCons provides many built-in functions to help us quickly add source code, and with these simple Python statements we can add source code to or remove it from our project. The following is a brief introduction to some common functions.
+
+#### GetCurrentDir()
+
+Get the current directory.
+
+#### Glob('\*.c')
+
+Get all C files in the current directory. Modify the suffix in the parameter to match all files of that type in the current directory.
+
+#### GetDepend(macro)
+
+This function is defined in the script files in the `tools` directory. It reads the configuration information from the rtconfig.h file, using the macro name as the key. It returns true if the macro is enabled in rtconfig.h; otherwise it returns false.
+
+#### Split(str)
+
+Split the string str into a list.
+
+#### DefineGroup(name, src, depend, **parameters)
+
+This is a method (function) that RT-Thread builds on top of SCons. DefineGroup is used to define a component. A component can be a directory (the files under it or its subdirectories), and it corresponds to a Group or folder in the IDE project files generated later.
+
+Parameter description of `DefineGroup()`:
+
+|**Parameter**|**Description** |
+|-------|------------------------------------|
+| name | name of the Group |
+| src | The files contained in the Group generally refer to C/C++ source files. For convenience, you can also use the Glob function to list the matching files in the directory where the SConscript file is located by using a wildcard. |
+| depend | The options that the Group depends on when compiling (for example, the FinSH component depends on the RT_USING_FINSH macro definition). The compile option generally refers to an RT_USING_xxx macro defined in rtconfig.h. When the corresponding macro is defined in the rtconfig.h configuration file, this Group will be added to the build environment for compilation. If the dependent macro is not defined in rtconfig.h, this Group will not be compiled. Similarly, when using scons to generate an IDE project file, if the dependent macros are not defined, the corresponding Group will not appear in the project file. |
+| parameters | Other configuration parameters; the possible values can be found in the table below. You do not need to configure all parameters in actual use. |
+
+Parameters that can be added:
+
+|**Parameter**|**Description** |
+|------------|--------------------------------------------------|
+| CCFLAGS | source file compilation parameters |
+| CPPPATH | header file path |
+| CPPDEFINES | macro definitions added at compile time |
+| LIBRARY | if this parameter is included, the object files generated by the component will be packaged into a library file |
+
+#### SConscript(dirs,variant_dir,duplicate)
+
+Read a new SConscript file. The parameter description of the SConscript() function is as follows:
+
+|**Parameter** |**Description** |
+|-------------|---------------------------------------|
+| dirs | SConscript file path |
+| variant_dir | specify the path to store the generated target files |
+| duplicate | set whether to copy or link the source files to variant_dir |
+
+## SConscript Examples
+
+Below we will use a few SConscript files as examples to explain how to use the SCons tool.
+ +### SConscript Example 1 + +Let's start with the SConcript file in the stm32f10x-HAL BSP directory. This file manages all the other SConscript files under the BSP, as shown below. + +```c +import os +cwd = str(Dir('#')) +objs = [] +list = os.listdir(cwd) +for d in list: + path = os.path.join(cwd, d) + if os.path.isfile(os.path.join(path, 'SConscript')): + objs = objs + SConscript(os.path.join(d, 'SConscript')) +Return('objs') +``` + +* `import os:` Importing the Python system programming os module, you can call the functions provided by the os module to process files and directories. + +* `cwd = str(Dir('#')):` Get the top-level directory of the project and assign it to the string variable cwd, which is the directory where the project's SConstruct is located, where it has the same effect as `cwd = GetCurrentDir()` . + +* `objs = []:` An empty list variable objs is defined. + +* `list = os.listdir(cwd):` Get all the subdirectories under the current directory and save them to the variable list. + +* This is followed by a python for loop that walks through all the subdirectories of the BSP and runs the SConscript files for those subdirectories. The specific operation is to take a subdirectory of the current directory, use `os.path.join(cwd,d)` to splicing into a complete path, and then determine whether there is a file named SConscript in this subdirectory. If it exists, execute `objs = objs + SConscript(os.path.join(d,'SConscript'))` . This sentence uses a built-in function `SConscript()` provided by SCons, which can read in a new SConscript file and add the source code specified in the SConscript file to the source compilation list objs. + +With this SConscript file, the source code required by the BSP project is added to the compilation list. + +### SConscript Example 2 + +So what about stm32f10x-HAL BSP other SConcript files? Let's take a look at the SConcript file in the drivers directory, which will manage the source code under the drivers directory. The drivers directory is used to store the underlying driver code implemented by the driver framework provided by RT-Thread. + +```c +Import('rtconfig') +from building import * + +cwd = GetCurrentDir() + +# add the general drivers. +src = Split(""" +board.c +stm32f1xx_it.c +""") + +if GetDepend(['RT_USING_PIN']): + src += ['drv_gpio.c'] +if GetDepend(['RT_USING_SERIAL']): + src += ['drv_usart.c'] +if GetDepend(['RT_USING_SPI']): + src += ['drv_spi.c'] +if GetDepend(['RT_USING_USB_DEVICE']): + src += ['drv_usb.c'] +if GetDepend(['RT_USING_SDCARD']): + src += ['drv_sdcard.c'] + +if rtconfig.CROSS_TOOL == 'gcc': + src += ['gcc_startup.s'] + +CPPPATH = [cwd] + +group = DefineGroup('Drivers', src, depend = [''], CPPPATH = CPPPATH) + +Return('group') + +``` + +* `Import('rtconfig'):` Import the rtconfig object, and the rtconfig.CROSS_TOOL used later is defined in this rtconfig module. + +* `from building import *:` All the contents of the building module are imported into the current module, and the DefineGroup used later is defined in this module. + +* `cwd = GetCurrentDir():` Get the current path and save it to the string variable cwd. + +The next line uses the `Split()` function to split a file string into a list, the effect of which is equivalent to + +`src = ['board.c','stm32f1xx_it.c']` + +Later, `if` judgement and `GetDepend ()` are used to check whether a macro in `rtconfig.h` is open or not, and if so, `src += [src_name]` is used to append the source code file to the list variable src. 
+ +* `CPPPATH = [cwd]:` Save the current path to a list variable CPPPATH. + +The last line uses DefineGroup to create a group called Drivers, which corresponds to the grouping in the MDK or IAR. The source code file for this group is the file specified by src, and the dependency is empty to indicate that the group does not depend on any macros of rtconfig.h. + +`CPPPATH =CPPPATH` means to add the current path to the system's header file path. The CPPPATH on the left is a built-in parameter in the DefineGroup that represents the header file path. The CPPPATH on the right is defined in the previous line of this document. This way we can reference the header files in the drivers directory in other source code. + +### SConscript Example 3 + +Let's take a look at the SConcript file in the applications directory, which will manage the source code under the applications directory for the user's own application code. + +```c +from building import * + +cwd = GetCurrentDir() +src = Glob('*.c') +CPPPATH = [cwd, str(Dir('#'))] + +group = DefineGroup('Applications', src, depend = [''], CPPPATH = CPPPATH) + +Return('group') +``` + +`src = Glob('*.c'):` Get all the C files in the current directory. + +`CPPPATH = [cwd, str(Dir('#'))]:` Save the current path and the path of the project's SConstruct to the list variable CPPPATH. + +The last line uses DefineGroup to create a group called `Applications`. The source code file for this group is the file specified by src. If the dependency is empty, the group does not depend on any rtconfig.h macros, and the path saved by CPPPATH is added to the system header search path. Such application directories and header files in the stm32f10x-HAL BSP directory can be referenced elsewhere in the source code. + +To sum up, this source program will add all c programs in the current directory to the group `Applications`, so if you add or delete files in this directory, you can add files to the project or delete them from the project. It is suitable for adding source files in batches. + +### SConscript Example 4 + +Below is the contents of the RT-Thread source code `component/finsh/SConscript` file, which will manage the source code under the finsh directory. + +```c +Import('rtconfig') +from building import * + +cwd = GetCurrentDir() +src = Split(''' +shell.c +symbol.c +cmd.c +''') + +fsh_src = Split(''' +finsh_compiler.c +finsh_error.c +finsh_heap.c +finsh_init.c +finsh_node.c +finsh_ops.c +finsh_parser.c +finsh_var.c +finsh_vm.c +finsh_token.c +''') + +msh_src = Split(''' +msh.c +msh_cmd.c +msh_file.c +''') + +CPPPATH = [cwd] +if rtconfig.CROSS_TOOL == 'keil': + LINKFLAGS = '--keep *.o(FSymTab)' + + if not GetDepend('FINSH_USING_MSH_ONLY'): + LINKFLAGS = LINKFLAGS + '--keep *.o(VSymTab)' +else: + LINKFLAGS = '' + +if GetDepend('FINSH_USING_MSH'): + src = src + msh_src +if not GetDepend('FINSH_USING_MSH_ONLY'): + src = src + fsh_src + +group = DefineGroup('finsh', src, depend = ['RT_USING_FINSH'], CPPPATH = CPPPATH, LINKFLAGS = LINKFLAGS) + +Return('group') +``` + +Let's take a look at the contents of the first Python conditional statement in the file. If the compilation tool is keil, the variable `LINKFLAGS = '--keep *.o(FSymTab)'` left blank. + +DefinGroup also creates the file specified by src in the finsh directory as a finsh group. `depend = ['RT_USING_FINSH']` indicates that this group depends on the macro RT_USING_FINSH in `rtconfig.h`. 
When the macro RT_USING_FINSH is opened in rtconfig.h, the source code in the finsh group will be compiled, otherwise SCons will not compile. + +Then add the finsh directory to the system header directory so that we can reference the header files in the finsh directory in other source code. + +`LINKFLAGS = LINKFLAGS` has the same meaning as `CPPPATH = CPPPATH` . The LINKFLAGS on the left represents the link parameter, and the LINKFLAGS on the right is the value defined by the previous if else statement. That is, specify the link parameters for the project. + +## Manage Projects with SCons + +The previous section on the SConscript related to the RT-Thread source code is explained in detail, you should also know some common ways to write SConscript files, this section will guide you how to use SCons to manage your own projects. + +### Add App Code + +As mentioned earlier, the Applications folder under BSP is used to store the user's own application code. Currently there is only one main.c file. If the user's application code is not a lot, it is recommended that the relevant source files be placed under this folder. Two simple files hello.c and hello.h have been added under the Applications folder, as shown below. + +```c +/* file: hello.h */ + +#ifndef _HELLO_H_ +#define _HELLO_H_ + +int hello_world(void); + +#endif /* _HELLO_H_ */ + +/* file: hello.c */ +#include +#include +#include + +int hello_world(void) +{ + rt_kprintf("Hello, world!\n"); + + return 0; +} + +MSH_CMD_EXPORT(hello_world, Hello world!) +``` + +The SConcript file in the applications directory will add all source files in the current directory to the project. You need to use the `scons --target=xxx` command to add the 2 new files to your project. Note that the project will be regenerated each time a new file is added. + +### Add Module + +As mentioned above, in the case that there are not many source code files, it is recommended that all source code files be placed in the applications folder. If the user has a lot of source code and wants to create your own project module, or need to use other modules that you have obtained, what would be appropriate? + +Take also hello.c and hello.h mentioned above for example, these two files will be managed in a separate folder, and have their own grouping in the MDK project file, and can be selected through menuconfig whether to use This module. Add a hello folder under BSP. + +![New added hello folder](figures/hello.png) + +Noticed that there is an additional SConscript file in the folder. If you want to add some of your own source code to the SCons build environment, you can usually create or modify an existing SConscript file. Refer to the above analysis of the SConscript file for the RT-Thread source code. The contents of this new hello module SConscript file are as follows: + +```c +from building import * + +cwd = GetCurrentDir() +include_path = [cwd] +src = [] + +if GetDepend(['RT_USING_HELLO']): + src += ['hello.c'] + +group = DefineGroup('hello', src, depend = [''], CPPPATH = include_path) + +Return('group') +``` + +Through the above simple lines of code, a new group hello is created, and the source file to be added to the group can be controlled by the macro definition, and the directory where the group is located is added to the system header file path. So how is the custom macro RT_USING_HELLO defined? Here is a new file called Kconfig. 
Kconfig is used to configure the kernel, and the configuration interface generated by the menuconfig command used when configuring the system with Env relies on the Kconfig file. The menuconfig command generates a configuration interface for the user to configure the kernel by reading the various Kconfig files of the project. Finally, all configuration-related macro definitions are automatically saved to the rtconfig.h file in the BSP directory. Each BSP has an rtconfig.h file. That is the configuration information of this BSP. + +There is already a Kconfig file for this BSP in the stm32f10x-HAL BSP directory, and we can add the configuration options we need based on this file. The following configuration options have been added to the hello module. The # sign is followed by a comment. + +```shell +menu "hello module" # create a "hello module" menu + + config RT_USING_HELLO # RT_USING_HELLO configuration options + bool "Enable hello module" # RT_USING_HELLO is a bool variable and display as "Enable hello module" + default y # RT_USING_HELLO can take values y and n, default y + help # If use help, it would display "this hello module only used for test" + this hello module only used for test + + config RT_HELLO_NAME # RT_HELLO_NAME configuration options + string "hello name" # RT_HELLO_NAME is a string variable and the menu show as "hello name" + default "hello" # default name is "hello" + + config RT_HELLO_VALUE # RT_HELLO_VALUE configuration options + int "hello value" # RT_HELLO_VALUE is an int variable and the menu show as "hello value" + default 8 # the default value is 8 + +endmenu # the hello menu is end + +``` + +After entering the stm32f10x-HAL BSP directory using the Env tool, you can use the menuconfig command to see the configuration menu of the new hello module at the bottom of the main page. After entering the menu, the following figure is displayed. + +![hello Module Configuration Menu](figures/hello-menu.png) + +You can also modify the value of the hello value. + +![Modify the Value of Hello Value](figures/hello-value.png) + +After saving the configuration, exit the configuration interface and open the rtconfig.h file in the stm32f10x-HAL BSP directory. You can see that the configuration information of the hello module is available. + +![hello Module Related Macro Definition](figures/hello-rtconfig.png) + +**Note: Use the scons --target=XXX command to generate a new project each time menuconfig is configured.** + +Because the RT_USING_HELLO macro has been defined in rtconfig.h, the source file for hello.c is added to the new project when the project is newly built. + +The above simply enumerates the configuration options for adding your own modules to the Kconfig file. Users can also refer to [*User Manual of Env*](../env/env.md), which also explains how to modify and add configuration options. They can also view the Kconfig documentation in your own Baidu to implement other more complex configuration options. + +### Add Library + +If you want to add an extra library to your project, you need to pay attention to the naming of the binary library by different toolchains. For example, the GCC toolchain, which recognizes library names such as `libabc.a`, specifies `abc` instead of `libabc` when specifying a library. So you need to pay special attention to the SConscript file when linking additional libraries. In addition, when specifying additional libraries, it is also a good idea to specify the corresponding library search path. 
Here is an example: + +```c +Import('rtconfig') +from building import * + +cwd = GetCurrentDir() +src = Split(''' +''') + +LIBPATH = [cwd + '/libs'] +LIBS = ['abc'] + +group = DefineGroup('ABC', src, depend = [''], LIBS = LIBS, LIBPATH=LIBPATH) +``` + +LIBPATH specifies the path to the library, and LIBS specifies the name of the library. If the toolchain is GCC, the name of the library should be libabc.a; if the toolchain is armcc, the name of the library should be abc.lib.`LIBPATH = [cwd + '/libs']` indicates that the search path for the library is the 'libs' directory in the current directory. + +### Compiler Options + +`rtconfig.py` is a RT-Thread standard compiler configuration file that controls most of the compilation options. It is a script file written in Python and is used to do the following: + +* Specify the compiler (choose one of the multiple compilers you support). +- Specify compiler parameters such as compile options, link options, and more. + +When we compile the project using the scons command, the project is compiled according to the compiler configuration options of `rtconfig.py`. The following code is part of the rtconfig.py code in the stm32f10x-HAL BSP directory. + +```c +import os + +# toolchains options +ARCH='arm' +CPU='cortex-m3' +CROSS_TOOL='gcc' + +if os.getenv('RTT_CC'): + CROSS_TOOL = os.getenv('RTT_CC') + +# cross_tool provides the cross compiler +# EXEC_PATH is the compiler execute path, for example, CodeSourcery, Keil MDK, IAR + +if CROSS_TOOL == 'gcc': + PLATFORM = 'gcc' + EXEC_PATH = '/usr/local/gcc-arm-none-eabi-5_4-2016q3/bin/' +elif CROSS_TOOL == 'keil': + PLATFORM = 'armcc' + EXEC_PATH = 'C:/Keilv5' +elif CROSS_TOOL == 'iar': + PLATFORM = 'iar' + EXEC_PATH = 'C:/Program Files/IAR Systems/Embedded Workbench 6.0 Evaluation' + +if os.getenv('RTT_EXEC_PATH'): + EXEC_PATH = os.getenv('RTT_EXEC_PATH') + +BUILD = 'debug' + +if PLATFORM == 'gcc': + # toolchains + PREFIX = 'arm-none-eabi-' + CC = PREFIX + 'gcc' + AS = PREFIX + 'gcc' + AR = PREFIX + 'ar' + LINK = PREFIX + 'gcc' + TARGET_EXT = 'elf' + SIZE = PREFIX + 'size' + OBJDUMP = PREFIX + 'objdump' + OBJCPY = PREFIX + 'objcopy' + + DEVICE = '-mcpu=cortex-m3 -mthumb -ffunction-sections -fdata-sections' + CFLAGS = DEVICE + AFLAGS = '-c' + DEVICE + '-x assembler-with-cpp' + LFLAGS = DEVICE + '-Wl,--gc-sections,-Map=rtthread-stm32.map,-cref,-u,Reset_Handler -T stm32_rom.ld' +``` + +Where CFLAGS is the compile option for C files, AFLAGS is the compile option for assembly files, and LFLAGS is the link option. The BUILD variable controls the level of code optimization. The default BUILD variable takes the value 'debug', which is compiled using debug mode, with an optimization level of 0. If you modify this variable to something else, it will compile with optimization level 2. The following are all possible ways to write (in short, no 'debug'). + +```shell +BUILD = '' +BUILD = 'release' +BUILD = 'hello, world' +``` + +It is recommended to use the debug mode to compile during the development phase, without optimization, and then consider optimization after the product is stable. + +The specific meaning of these options needs to refer to the compiler manual, such as `armcc` used above is the underlying compiler of MDK. The meaning of its compile options is detailed in MDK help. + +As mentioned earlier, if the user wants to compile the project with another compiler when executing the scons command, you can use the relevant commands to specify the compiler and compiler paths on the command line side of Env. 
However, this modification is only valid for the current Env process. When you open it again, you need to re-use the command settings. We can directly modify the `rtconfig.py` file to achieve the purpose of permanently configuring the compiler. In general, we only need to modify the CROSS_TOOL and EXEC_PATH options below. + +* CROSS_TOOL:Specify the compiler. The optional values are keil, gcc, iar. Browse `rtconfig.py` to see the compilers supported by the current BSP. If MDK is installed on your machine, you can modify CROSS_TOOL to keil and use MDK to compile the project. + +* EXEC_PATH: The installation path of the compiler. There are two points to note here: + +(1) When installing the compiler (such as MDK, GNU GCC, IAR, etc.), do not install it in a path with Chinese or spaces. Otherwise, some errors will occur when parsing the path. Some programs are installed by default into the `C:\Program Files` directory with spaces in between. It is recommended to choose other paths during installation to develop good development habits. + +(2) When modifying EXEC_PATH, you need to pay attention to the format of the path. On Windows platforms, the default path splitter is the backslash `“\”`, which is used for escape characters in both C and Python. So when modifying the path, you can change `“\”` to `“/”`or add r (python-specific syntax for raw data). + +Suppose a compiler is installed under `D:\Dir1\Dir2`. The following are the correct ways to write: + +* EXEC_PATH = `r'D:\Dir1\Dir2'` Note that with the string `r` in front of the string, `“\”`can be used normally. + +* EXEC_PATH = `'D:/Dir1/Dir2'` Note that instead of `“/”`, there is no `r` in front. + +* EXEC_PATH = `'D:\\Dir1\\Dir2'` Note that the escapement of `“\”` s used here to escape `“\”` itself. + +* This is the wrong way to write: EXEC_PATH = `'D:\Dir1\Dir2'`。 + +If the rtconfig.py file has the following code, comment out the following code when configuring your own compiler. + +```c +if os.getenv('RTT_CC'): + CROSS_TOOL = os.getenv('RTT_CC') +... ... +if os.getenv('RTT_EXEC_PATH'): + EXEC_PATH = os.getenv('RTT_EXEC_PATH') +``` + +The above 2 `if` judgments will set CROSS_TOOL and EXEC_PATH to the default value of Env. + +After the compiler is configured, we can use SCons to compile the BSP of RT-Thread. Open a command line window in the BSP directory and execute the `scons` command to start the compilation process. + +### RT-Thread Auxiliary Compilation Script + +In the tools directory of the RT-Thread source code, there are some auxiliary compiled scripts defined by RT-Thread, such as the project files for automatically generating RT-Thread for some IDE. The most important of these is the building.py script. + +### SCons Further Usage + +For a complex, large-scale system, it is obviously more than just a few files in a directory. It is probably a combination of several folders at the first level. + +In SCons, you can write SConscript script files to compile files in these relatively independent directories, and you can also use the Export and Import functions in SCons to share data between SConstruct and SConscript files (that is, an object data in Python). For more information on how to use SCons, please refer to the [SCons Official Documentation](https://scons.org/documentation.html). 
+ diff --git a/documentation/thread-comm/figures/07mb_ops.png b/documentation/thread-comm/figures/07mb_ops.png new file mode 100644 index 0000000000..f8abd19148 Binary files /dev/null and b/documentation/thread-comm/figures/07mb_ops.png differ diff --git a/documentation/thread-comm/figures/07mb_work.png b/documentation/thread-comm/figures/07mb_work.png new file mode 100644 index 0000000000..9c014afbda Binary files /dev/null and b/documentation/thread-comm/figures/07mb_work.png differ diff --git a/documentation/thread-comm/figures/07msg_ops.png b/documentation/thread-comm/figures/07msg_ops.png new file mode 100644 index 0000000000..a0163ec39e Binary files /dev/null and b/documentation/thread-comm/figures/07msg_ops.png differ diff --git a/documentation/thread-comm/figures/07msg_syn.png b/documentation/thread-comm/figures/07msg_syn.png new file mode 100644 index 0000000000..70b0bd164a Binary files /dev/null and b/documentation/thread-comm/figures/07msg_syn.png differ diff --git a/documentation/thread-comm/figures/07msg_work.png b/documentation/thread-comm/figures/07msg_work.png new file mode 100644 index 0000000000..01569817bb Binary files /dev/null and b/documentation/thread-comm/figures/07msg_work.png differ diff --git a/documentation/thread-comm/figures/07signal_ops.png b/documentation/thread-comm/figures/07signal_ops.png new file mode 100644 index 0000000000..e408e9f6b5 Binary files /dev/null and b/documentation/thread-comm/figures/07signal_ops.png differ diff --git a/documentation/thread-comm/figures/07signal_work.png b/documentation/thread-comm/figures/07signal_work.png new file mode 100644 index 0000000000..e08adc669d Binary files /dev/null and b/documentation/thread-comm/figures/07signal_work.png differ diff --git a/documentation/thread-comm/thread-comm.md b/documentation/thread-comm/thread-comm.md new file mode 100644 index 0000000000..36ca46c2a8 --- /dev/null +++ b/documentation/thread-comm/thread-comm.md @@ -0,0 +1,1042 @@ +Inter-thread Communication +========== + +In the last chapter, we talked about inter-thread synchronization, concepts such as semaphores, mutexes, and event sets were mentioned. Following the last chapter, this chapter is going to explain inter-thread communication. In bare-metal programming, global variables are often used for communication between functions. For example, some functions may change the value of a global variable due to some operations. Another function reads the global variable and will perform corresponding actions to achieve communication and collaboration according to the global variable values it read. More tools are available in RT-Thread to help pass information between different threads. These tools are covered in more detail in this chapter. After reading this chapter, you will learn how to use mailboxes, message queues, and signals for communication between threads. + +Mailbox +---- + +Mailbox service is a typical inter-thread communication method in real-time operating systems.For example, there are two threads, thread 1 detects the state of the button and sends it's state, and thread 2 reads the state of the button and turns on or off the LED according to the state of the button. Here, a mailbox can be used to communicate. Thread 1 sends the status of the button as an email to the mailbox. Thread 2 reads the message in the mailbox to get the button status and turn on or off the LED accordingly. + +Thread 1 here can also be extended to multiple threads. 
For example, there are three threads: thread 1 detects and sends the button state, thread 2 detects and sends the ADC information, and thread 3 performs different operations according to the type of information received.
+
+### Mailbox Working Mechanism
+
+The mailbox of the RT-Thread operating system is used for inter-thread communication and is characterized by low overhead and high efficiency. Each mail in the mailbox can only hold a fixed 4 bytes of content (on a 32-bit processing system a pointer is 4 bytes in size, so one mail can hold exactly one pointer). A typical mailbox is also called message exchange. As shown in the following figure, a thread or an interrupt service routine sends a 4-byte mail to a mailbox, and one or more threads can receive and process the mail from the mailbox.
+
+![Mailbox Working Mechanism Diagram](figures/07mb_work.png)
+
+The non-blocking mail sending process can be safely used in ISR. It is an effective way for a thread, an interrupt service routine, or a timer to send a message to a thread. In general, receiving mails can be a blocking process, depending on whether there is a mail in the mailbox and on the timeout set when the mail is received. When there is no mail in the mailbox and the set timeout is not 0, receiving becomes blocking; in such cases, mails can only be received by threads.
+
+When a thread sends a mail to a mailbox, if the mailbox is not full, the mail is copied into the mailbox. If the mailbox is full, the sending thread can set a timeout and choose either to wait suspended or to return -RT_EFULL directly. If the sending thread chooses to suspend and wait, then when mails in the mailbox are received and space becomes available again, the sending thread will be awakened and will continue to send.
+
+When a thread receives a mail from a mailbox, if the mailbox is empty, the receiving thread can choose whether to set a timeout or to wait suspended until a new mail arrives and wakes it up. When the set timeout expires and the mailbox still has not received a mail, the thread that chose to wait until timeout will be awakened and will return -RT_ETIMEOUT. If there are mails in the mailbox, the receiving thread copies the 4-byte mail in the mailbox into the receiving cache.
+
+### Mailbox Control Block
+
+In RT-Thread, the mailbox control block is a data structure used by the operating system to manage mailboxes, represented by the structure `struct rt_mailbox`. Another C expression, `rt_mailbox_t`, represents the handle of the mailbox, and its implementation in C is a pointer to the mailbox control block. See the following code for the detailed definition of the mailbox control block structure:
+
+```c
+struct rt_mailbox
+{
+    struct rt_ipc_object parent;
+
+    rt_uint32_t* msg_pool;                /* the start address of the mailbox buffer */
+    rt_uint16_t size;                     /* the size of the mailbox buffer */
+
+    rt_uint16_t entry;                    /* the number of mails in the mailbox */
+    rt_uint16_t in_offset, out_offset;    /* the in-offset and out-offset of the mailbox buffer */
+    rt_list_t suspend_sender_thread;      /* the queue of sending threads suspended and waiting on the mailbox */
+};
+typedef struct rt_mailbox* rt_mailbox_t;
+```
+
+The `rt_mailbox` object is derived from `rt_ipc_object` and is managed by the IPC container.
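+
+As a concrete illustration of this mechanism, the following is a minimal sketch of the button/LED scenario described at the beginning of this chapter. The names `btn_mb`, `button_isr()` and `led_thread_entry()` are hypothetical, and the mailbox is assumed to have already been created or initialized with the interfaces described in the next section:
+
+```c
+#include <rtthread.h>
+
+/* assumed to be initialized elsewhere with rt_mb_init() or created with rt_mb_create() */
+extern struct rt_mailbox btn_mb;
+
+/* hypothetical button interrupt callback: the non-blocking send is safe in ISR context */
+void button_isr(void)
+{
+    rt_uint32_t state = 1;           /* e.g. 1 = pressed, 0 = released */
+
+    rt_mb_send(&btn_mb, state);      /* returns -RT_EFULL if the mailbox is already full */
+}
+
+/* hypothetical LED thread: blocks until a mail arrives */
+static void led_thread_entry(void *parameter)
+{
+    rt_uint32_t state;
+
+    while (1)
+    {
+        if (rt_mb_recv(&btn_mb, &state, RT_WAITING_FOREVER) == RT_EOK)
+        {
+            /* turn the LED on or off according to the received button state */
+        }
+    }
+}
+```
+
+Because each mail is only 4 bytes, the button state is passed by value here; a pointer to a larger buffer can be sent in exactly the same way, as discussed in *Occasions to Use Mails* below.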
+ +### Management of Mailbox + +The mailbox control block is a structure that contains important parameters related to mailbox and it plays an important role in the function implementation of the mailbox. The relevant interfaces of the mailbox are as shown in the following figure. The operation on a mailbox includes: create/initiate a mailbox, send a mail, receive a mail, and delete/detach a mailbox. + +![Mailbox Related Interface](figures/07mb_ops.png) + +#### Create and Delete Mailbox + +To dynamically create a mailbox object, call the following function interface: + +```c +rt_mailbox_t rt_mb_create (const char* name, rt_size_t size, rt_uint8_t flag); +``` + +When a mailbox object is created, a mailbox object is first allocated from the object manager, and then a memory space is dynamically allocated to the mailbox for storing the mail. The size of the memory is equal to the product of the message size (4 bytes) and the mailbox capacity. Then initialize the number of incoming messages and the offset of the outgoing message in the mailbox. The following table describes the input parameters and return values for this function: + +Input parameters and return values of rt_mb_create() + +|**Parameters** |**Description** | +|----------------|------------------------------------------------------------------| +| name | The name of the mailbox | +| size | Mailbox capacity | +| flag | The mailbox flag, which can take the following values: RT_IPC_FLAG_FIFO or RT_IPC_FLAG_PRIO | +|**Return** | —— | +| RT_NULL | Creation failed | +| The handle of the mailbox object | Creation successful | + +When a mailbox created with rt_mb_create() is no longer used, it should be deleted to release the corresponding system resources. Once the operation is completed, the mailbox will be permanently deleted. The function interface for deleting a mailbox is as follows: + +```c +rt_err_t rt_mb_delete (rt_mailbox_t mb); +``` + +When deleting a mailbox, if a thread is suspended on the mailbox object, the kernel first wakes up all threads suspended on the mailbox (the thread return value is -RT_ERROR), then releases the memory used by the mailbox, and finally deletes the mailbox object. The following table describes the input parameters and return values for this function: + +Input parameters and return values of rt_mb_delete() + +|**Parameters**|**Description** | +|----------|----------------| +| mb | The handle of the mailbox object | +|**Return**| —— | +| RT_EOK | Successful | + +#### Initialize and Detach Mailbox + +Initializing a mailbox is similar to creating a mailbox, except that the mailbox initialized is for static mailbox objects. Different from creating a mailbox, the memory of a static mailbox object is allocated by the compiler during system compilation which is usually placed in a read-write data segment or an uninitialized data segment. The rest of the initialization is the same as the creation of a mailbox. The function interface is as follows: + +```c + rt_err_t rt_mb_init(rt_mailbox_t mb, + const char* name, + void* msgpool, + rt_size_t size, + rt_uint8_t flag) +``` + +When the mailbox is initialized, the function interface needs to obtain the mailbox object control block that the user has applied for, the pointer of the buffer, the mailbox name and mailbox capacity (the number of messages that can be stored). 
The following table describes the input parameters and return values for this function: + +Input parameters and return values of rt_mb_init() + +|**Parameters**|**Description** | +|----------|-----------------------------------------------------------------| +| mb | The handle of the mailbox object | +| name | Mailbox name | +| msgpool | Buffer pointer | +| size | Mailbox capacity | +| flag | The mailbox flag, which can take the following values: RT_IPC_FLAG_FIFO or RT_IPC_FLAG_PRIO | +|**Return**| —— | +| RT_EOK | Successful | + +The size parameter here specifies the capacity of the mailbox, if the number of bytes in the buffer pointed to by msgpool is N, then the mailbox capacity should be N/4. + +Detaching the mailbox means to detach the statically initialized mailbox objects from the kernel object manager. Use the following interface to detach the mailbox: + +```c +rt_err_t rt_mb_detach(rt_mailbox_t mb); +``` + +After using this function interface, the kernel wakes up all the threads suspended on the mailbox (the threads return -RT_ERROR), and then detaches the mailbox objects from the kernel object manager. The following table describes the input parameters and return values for this function: + +Input parameters and return values for rt_mb_detach() + +|**Parameters**|**Description** | +|----------|----------------| +| mb | The handle of the mailbox object | +|**Return**| —— | +| RT_EOK | Successful | + +#### Send Mail + +The thread or ISR can send mail to other threads through the mailbox. The function interface of sending mails is as follows: + +```c +rt_err_t rt_mb_send (rt_mailbox_t mb, rt_uint32_t value); +``` + +The message sent can be any data 32-bit formatted, an integer value or a pointer pointing to the buffer. When the mailbox is fully filled with mails, the thread or ISR that sent the mail will receive a return value of -RT_EFULL. The following table describes the input parameters and return values for this function: + +Input parameters and return values of rt_mb_send() + +|**Parameters** |**Description** | +|------------|----------------| +| mb | The handle of the mailbox object | +| value | Content of email | +|**Return** | —— | +| RT_EOK | Sent successfully | +| \-RT_EFULL | The mailbox is filled | + +#### Send Mails with Waiting + +Users can also send mails to specified mailbox through the following function interface: + +```c +rt_err_t rt_mb_send_wait (rt_mailbox_t mb, + rt_uint32_t value, + rt_int32_t timeout); +``` + +The difference between rt_mb_send_wait() and rt_mb_send() is that there is waiting time. If the mailbox is full, the thread sending the message will wait for the mailbox to release space as mails are received according to the set timeout parameter. If by the set timeout, there is still no available space, then the thread sending the message will wake up and return an error code. The following table describes the input parameters and return values for this function: + +Input parameters and return values of rt_mb_send_wait() + +|**Parameters** |**Description** | +|---------------|----------------| +| mb | The handle of the mailbox object | +| value | Mail content | +| timeout | Timeout | +|**Return** | —— | +| RT_EOK | Sent successfully | +| \-RT_ETIMEOUT | Timeout | +| \-RT_ERROR | Failed, return error | + +#### Receive Mails + +Only when there is a mail in the mailbox, the recipient can receive the mail immediately and return RT_EOK. 
Otherwise, the thread receiving the message will suspend on the waiting thread queue of the mailbox or return directly according to the set timeout. The receiving mail function interface is as follows: + +```c +rt_err_t rt_mb_recv (rt_mailbox_t mb, rt_uint32_t* value, rt_int32_t timeout); +``` + +When receiving a mail, the recipient needs to specify the mailbox handle, and specify the location to store the received message and the maximum timeout that it can wait. If a timeout is set at the time of receiving, -RT_ETIMEOUT will be returned when the message has not been received within the specified time. The following table describes the input parameters and return values for this function: + +Input parameters and return values of rt_mb_recv() + +|**Parameters** |**Description** | +|---------------|----------------| +| mb | The handle of the mailbox object | +| value | Mail content | +| timeout | Timeout | +|**Return** | —— | +| RT_EOK | Sent successfully | +| \-RT_ETIMEOUT | Timeout | +| \-RT_ERROR | Failed, return error | + +### Mail Usage Sample + +This is a mailbox application routine that initializes 2 static threads, 1 static mailbox object, one of the threads sends mail to the mailbox, and one thread receives mail from the mailbox. As shown in the following code: + + Mailbox usage routine + +```c +#include + +#define THREAD_PRIORITY 10 +#define THREAD_TIMESLICE 5 + +/* Mailbox control block */ +static struct rt_mailbox mb; +/* Memory pool for mails storage */ +static char mb_pool[128]; + +static char mb_str1[] = "I'm a mail!"; +static char mb_str2[] = "this is another mail!"; +static char mb_str3[] = "over"; + +ALIGN(RT_ALIGN_SIZE) +static char thread1_stack[1024]; +static struct rt_thread thread1; + +/* Thread 1 entry */ +static void thread1_entry(void *parameter) +{ + char *str; + + while (1) + { + rt_kprintf("thread1: try to recv a mail\n"); + + /* Receive mail from the mailbox */ + if (rt_mb_recv(&mb, (rt_uint32_t *)&str, RT_WAITING_FOREVER) == RT_EOK) + { + rt_kprintf("thread1: get a mail from mailbox, the content:%s\n", str); + if (str == mb_str3) + break; + + /* Delay 100ms */ + rt_thread_mdelay(100); + } + } + /* Executing the mailbox object detachment */ + rt_mb_detach(&mb); +} + +ALIGN(RT_ALIGN_SIZE) +static char thread2_stack[1024]; +static struct rt_thread thread2; + +/* Thread 2 entry*/ +static void thread2_entry(void *parameter) +{ + rt_uint8_t count; + + count = 0; + while (count < 10) + { + count ++; + if (count & 0x1) + { + /* Send the mb_str1 address to the mailbox */ + rt_mb_send(&mb, (rt_uint32_t)&mb_str1); + } + else + { + /* Send the mb_str2 address to the mailbox */ + rt_mb_send(&mb, (rt_uint32_t)&mb_str2); + } + + /* Delay 200ms */ + rt_thread_mdelay(200); + } + + /* Send mail to inform thread 1 that thread 2 has finished running */ + rt_mb_send(&mb, (rt_uint32_t)&mb_str3); +} + +int mailbox_sample(void) +{ + rt_err_t result; + + /* Initialize a mailbox */ + result = rt_mb_init(&mb, + "mbt", /* Name is mbt */ + &mb_pool[0], /* The memory pool used by the mailbox is mb_pool */ + sizeof(mb_pool) / 4, /* The number of messages in the mailbox because a message occupies 4 bytes */ + RT_IPC_FLAG_FIFO); /* Thread waiting in FIFO approach */ + if (result != RT_EOK) + { + rt_kprintf("init mailbox failed.\n"); + return -1; + } + + rt_thread_init(&thread1, + "thread1", + thread1_entry, + RT_NULL, + &thread1_stack[0], + sizeof(thread1_stack), + THREAD_PRIORITY, THREAD_TIMESLICE); + rt_thread_startup(&thread1); + + rt_thread_init(&thread2, + "thread2", + thread2_entry, + 
RT_NULL, + &thread2_stack[0], + sizeof(thread2_stack), + THREAD_PRIORITY, THREAD_TIMESLICE); + rt_thread_startup(&thread2); + return 0; +} + +/* Export to the msh command list */ +MSH_CMD_EXPORT(mailbox_sample, mailbox sample); +``` + +The simulation results are as follows: + +``` + \ | / +- RT - Thread Operating System + / | \ 3.1.0 build Aug 27 2018 + 2006 - 2018 Copyright by rt-thread team +msh >mailbox_sample +thread1: try to recv a mail +thread1: get a mail from mailbox, the content:I'm a mail! +msh >thread1: try to recv a mail +thread1: get a mail from mailbox, the content:this is another mail! +… +thread1: try to recv a mail +thread1: get a mail from mailbox, the content:this is another mail! +thread1: try to recv a mail +thread1: get a mail from mailbox, the content:over +``` + +The routine demonstrates how to use the mailbox. Thread 2 sends the mails, for a total of 11 times; thread 1 receives the mails, for a total of 11 mails, prints the contents of the mail, and determines end. + +### Occasions to Use Mails + +Mailbox is a simple inter-thread messaging method, which is characterized by low overhead and high efficiency. In the implementation of the RT-Thread operating system, a 4-byte message can be delivered at a time, and the mailbox has certain storage capabilities, which can cache a certain number of messages (the number of messages is determined by the capacity specified when creating and initializing the mailbox). The maximum length of a message in a mailbox is 4 bytes, so the mailbox can be used for messages less than 4 bytes. Since on the 32 system, 4 bytes can be placed right on a pointer, when a larger message needs to be transferred between threads, a pointer pointing to a buffer can be sent as an mail to the mailbox, which means the mailbox can also delivery a pointer. For example, + +```c +struct msg +{ + rt_uint8_t *data_ptr; + rt_uint32_t data_size; +}; +``` + +For such a message structure, it contains a pointer pointing to data `data_ptr` and a variable `data_size` of the length of the data block. When a thread needs to send this message to another thread, the following operations can be used: + +```c +struct msg* msg_ptr; + +msg_ptr = (struct msg*)rt_malloc(sizeof(struct msg)); +msg_ptr->data_ptr = ...; /* Point to the corresponding data block address */ +msg_ptr->data_size = len; /* Length of data block */ +/* Send this message pointer to the mb mailbox */ +rt_mb_send(mb, (rt_uint32_t)msg_ptr); +``` + +When receiving the thread, because the pointer is what's being received, and msg_ptr is a newly allocated memory block, so after the thread receiving the message finishes processing, the corresponding memory block needs to be released: + +```c +struct msg* msg_ptr; +if (rt_mb_recv(mb, (rt_uint32_t*)&msg_ptr) == RT_EOK) +{ + /* After the thread receiving the message finishes processing, the corresponding memory block needs to be released: */ + rt_free(msg_ptr); +} +``` + +Message Queue +-------- + +Message Queuing is another commonly used inter-thread communication method, which is an extension of the mailbox. Can be used in a variety of occasions, like message exchange between threads, use the serial port to receive variable length data, etc. + +### Message Queue Working Mechanism + +Message queue can receive messages with unfixed length from threads or ISR and cache messages in their own memory space. 
Other threads can also read the corresponding message from the message queue, and when the message queue is empty, the thread reading the messages can be suspended. When a new message arrives, the suspended thread will be awaken to receive and process the message. Message queue is an asynchronous way of communication. + +As shown in the following figure, a thread or interrupt service routine can put one or more messages into the message queue. Similarly, one or more threads can also get messages from the message queue. When multiple messages are sent to the message queue, the message that first enters the message queue is first passed to the thread. That is, the thread first gets the message that first enters the message queue, in other words, it follows the first in first out principle (FIFO). + +![Message Queue Working Mechanism Diagram](figures/07msg_work.png) + +The message queue object of the RT-Thread operating system consists of multiple elements. When a message queue is created, it is assigned a message queue control block: name, memory buffer, message size, and queue length. At the same time, each message queue object contains multiple message boxes, and each message box can store one message. The first and last message boxes in the message queue are respectively called the message linked list header and the message linked list tail, corresponding to `msg_queue_head` and `msg_queue_tail` in the queue control block. Some message boxes may be empty, they form a linked list of idle message boxes via `msg_queue_free`. The total number of message boxes in all message queues is the length of the message queue, which can be specified when creating the message queue. + +### Message Queue Control Block + +In RT-Thread, a message queue control block is a data structure used by the operating system to manage message queues, represented by the structure `struct rt_messagequeue`. Another C expression, `rt_mq_t`, represents the handle of the message queue. The implementation in C language is a pointer to the message queue control block. See the following code for a detailed definition of the message queue control block structure: + +```c +struct rt_messagequeue +{ + struct rt_ipc_object parent; + + void* msg_pool; /* Pointer pointing to the buffer storing the messages */ + + rt_uint16_t msg_size; /* The length of each message */ + rt_uint16_t max_msgs; /* Maximum number of messages that can be stored */ + + rt_uint16_t entry; /* Number of messages already in the queue */ + + void* msg_queue_head; /* Message linked list header */ + void* msg_queue_tail; /* Message linked list tail */ + void* msg_queue_free; /* Idle message linked list */ +}; +typedef struct rt_messagequeue* rt_mq_t; +``` + +rt_messagequeue object is derived from rt_ipc_object and is managed by the IPC container. + +### Management of Message Queue + +The message queue control block is a structure that contains important parameters related to the message queue and plays an important role in the implementation of functions of the message queue. The relevant interfaces of the message queue are shown in the figure below. Operations on a message queue include: create a message queue - send a message - receive a message - delete a message queue. + +![Message Queue Related Interfaces](figures/07msg_ops.png) + +#### Create and Delete Message Queue + +The message queue should be created before it is used, or the existing static message queue objects should be initialized. 
The function interface for creating the message queue is as follows: + +```c +rt_mq_t rt_mq_create(const char* name, rt_size_t msg_size, + rt_size_t max_msgs, rt_uint8_t flag); +``` + +When creating a message queue, first allocate a message queue object from the object manager, then allocate a memory space to the message queue object forming an idle message linked list. The size of this memory = [message size + message header (for linked list connection) Size] X the number of messages in the message queue. Then initialize the message queue, at which point the message queue is empty. The following table describes the input parameters and return values for this function: + +  Input parameters and return values for rt_mq_create() + +|**Parameters** |**Description** | +|--------------------|-------------------------------------------------------------------------------| +| name | The name of the message queue | +| msg_size | The maximum length of a message in the message queue, in bytes | +| max_msgs | The number of messages in the message queue | +| flag | The waiting method took by the message queue, which can take the following values: RT_IPC_FLAG_FIFO or RT_IPC_FLAG_PRIO | +|**Return** | —— | +| RT_EOK | Sent successfully | +| The handle of the message queue object | Successful | +| RT_NULL | Fail | + +When the message queue is no longer in use, it should be deleted to free up system resources, and once the operation is complete, the message queue will be permanently deleted. The function interface for deleting the message queue is as follows: + +```c +rt_err_t rt_mq_delete(rt_mq_t mq); +``` + +When deleting a message queue, if a thread is suspended on the message queue's waiting queue, the kernel wakes up all the threads suspended on the message waiting queue (the thread returns the value is -RT_ERROR), and then releases the memory used by the message queue. Finally, delete the message queue object. The following table describes the input parameters and return values for this function: + +Input parameters and return values of rt_mq_delete() + +|**Parameters**|**Description** | +|----------|--------------------| +| mq | The handle of the message queue object | +|**Return**| —— | +| RT_EOK | Successful | + +#### Initialize and Detach Message Queue + +Initializing a static message queue object is similar to creating a message queue object, except that the memory of a static message queue object is allocated by the compiler during system compilation and is typically placed in a read data segment or an uninitialized data segment. Initialization is required before using such static message queue objects. The function interface for initializing a message queue object is as follows: + +```c +rt_err_t rt_mq_init(rt_mq_t mq, const char* name, + void *msgpool, rt_size_t msg_size, + rt_size_t pool_size, rt_uint8_t flag); +``` + +When the message queue is initialized, the interface requires the handle to the message queue object that the user has requested (that is, a pointer pointing to the message queue object control block), the message queue name, the message buffer pointer, the message size, and the message queue buffer size. As shown in the following figure, all messages after the message queue is initialized are suspended on the idle message list, and the message queue is empty. 
The following table describes the input parameters and return values for this function: + +Input parameters and return values of rt_mq_init() + +|**Parameters** |**Description** | +|-----------|-------------------------------------------------------------------------------| +| mq | The handle of the message queue object | +| name | The name of the message queue | +| msgpool | Pointer pointing to the buffer storing the messages | +| msg_size | The maximum length of a message in the message queue, in bytes | +| pool_size | The buffer size for storing messages | +| flag | The waiting method took by the message queue, which can take the following values: RT_IPC_FLAG_FIFO or RT_IPC_FLAG_PRIO | +|**Return** | —— | +| RT_EOK | Successful | + +Detaching the message queue will cause the message queue object to be detached from the kernel object manager. The following interface is used to detach from the message queue: + +```c +rt_err_t rt_mq_detach(rt_mq_t mq); +``` + +After using this function interface, the kernel wakes up all threads suspended on the message waiting queue object (the thread return value is -RT_ERROR) and then detaches the message queue object from the kernel object manager. The following table describes the input parameters and return values for this function: + +Input parameters and return values for rt_mq_detach() + +|**Parameters**|**Description** | +|----------|--------------------| +| mq | The handle of the message queue object | +|**Return**| —— | +| RT_EOK | Successful | + +#### Send Message + +A thread or ISR can send a message to the message queue. When sending a message, the message queue object first takes an idle message block from the idle message list, then copies the content of the message sent by the thread or the interrupt service program to the message block, and then suspends the message block to the end of the message queue. The sender can successfully send a message if and only if there is an idle message block available on the idle message list; when there is no message block available on the idle message list, it means that the message queue is full, at this time, the thread or interrupt program that sent the message will receive an error code (-RT_EFULL). The function interface for sending messages is as follows: + +```c +rt_err_t rt_mq_send (rt_mq_t mq, void* buffer, rt_size_t size); +``` + +When sending a message, the sender specifies the object handle of the sent message queue (that is, a pointer pointing to the message queue control block) and specifies the content of the message being sent and the size of the message. As shown in the figure below, after sending a normal message, the first message on the idle message list is transferred to the end of the message queue. The following table describes the input parameters and return values for this function: + +Input parameters and return values of rt_mq_send() + +|**Parameter** |**Description** | +|------------|------------------------------------------------------| +| mq | The handle of the message queue object | +| buffer | Message content | +| size | Message size | +|**Return** | —— | +| RT_EOK | Successful | +| \-RT_EFULL | Message queue is full | +| \-RT_ERROR | Failed, indicating that the length of the sent message is greater than the maximum length of the message in the message queue. | + +#### Send an Emergency Message + +The process of sending an emergency message is almost the same as sending a message. 
The only difference is that when an emergency message is sent, the message block taken from the idle message list is not put in the end of the message queue, but the head of the queue. The recipient can receive the emergency message preferentially, so that the message can be processed in time. The function interface for sending an emergency message is as follows: + +```c +rt_err_t rt_mq_urgent(rt_mq_t mq, void* buffer, rt_size_t size); +``` + +The following table describes the input parameters and return values for this function: + +Input parameters and return values of rt_mq_urgent() + +|**Parameters** |**Description** | +|------------|--------------------| +| mq | The handle of the message queue object | +| buffer | Message content | +| size | Message size | +|**Return** | —— | +| RT_EOK | Successful | +| \-RT_EFULL | Message queue is full | +| \-RT_ERROR | Fail | + +#### Receive Message + +Only when there is a message in the message queue, can the receiver receive message, otherwise the receiver will set according to the timeout, or suspend on the waiting queue of the message queue, or return directly. The function interface to receive message is as follows: + +```c +rt_err_t rt_mq_recv (rt_mq_t mq, void* buffer, + rt_size_t size, rt_int32_t timeout); +``` + +When receiving a message, the receiver needs to specify the message queue object handle that stores the message and specify a memory buffer into which the contents of the received message will be copied. In addition, the timeout when the message is not received in time needs to be set. As shown in the figure below, the message on the front of the message queue is transferred to the end of the idle message linked list after receiving a message. The following table describes the input parameters and return values for this function: + +Input parameters and return values of rt_mq_recv() + +|**Parameters** |**Description** | +|---------------|--------------------| +| mq | The handle of the message queue object | +| buffer | Message content | +| size | Message size | +| timeout | Specified timeout | +|**Return** | —— | +| RT_EOK | Received successfully | +| \-RT_ETIMEOUT | Timeout | +| \-RT_ERROR | Fail, return error | + +### Message Queue Application Example + +This is a message queue application routine. 
Two static threads are initialized in the routine, one thread will receive messages from the message queue; and another thread will periodically send regular messages and emergency messages to the message queue, as shown in the following code: + +Message queue usage routine + +```c +#include + +/* Message queue control block */ +static struct rt_messagequeue mq; +/* The memory pool used to place messages in the message queue */ +static rt_uint8_t msg_pool[2048]; + +ALIGN(RT_ALIGN_SIZE) +static char thread1_stack[1024]; +static struct rt_thread thread1; +/* Thread 1 entry function */ +static void thread1_entry(void *parameter) +{ + char buf = 0; + rt_uint8_t cnt = 0; + + while (1) + { + /* Receive messages from the message queue */ + if (rt_mq_recv(&mq, &buf, sizeof(buf), RT_WAITING_FOREVER) == RT_EOK) + { + rt_kprintf("thread1: recv msg from msg queue, the content:%c\n", buf); + if (cnt == 19) + { + break; + } + } + /* Delay 50ms */ + cnt++; + rt_thread_mdelay(50); + } + rt_kprintf("thread1: detach mq \n"); + rt_mq_detach(&mq); +} + +ALIGN(RT_ALIGN_SIZE) +static char thread2_stack[1024]; +static struct rt_thread thread2; +/* Thread 2 entry */ +static void thread2_entry(void *parameter) +{ + int result; + char buf = 'A'; + rt_uint8_t cnt = 0; + + while (1) + { + if (cnt == 8) + { + /* Send emergency message to the message queue */ + result = rt_mq_urgent(&mq, &buf, 1); + if (result != RT_EOK) + { + rt_kprintf("rt_mq_urgent ERR\n"); + } + else + { + rt_kprintf("thread2: send urgent message - %c\n", buf); + } + } + else if (cnt>= 20)/* Exit after sending 20 messages */ + { + rt_kprintf("message queue stop send, thread2 quit\n"); + break; + } + else + { + /* Send a message to the message queue */ + result = rt_mq_send(&mq, &buf, 1); + if (result != RT_EOK) + { + rt_kprintf("rt_mq_send ERR\n"); + } + + rt_kprintf("thread2: send message - %c\n", buf); + } + buf++; + cnt++; + /* Delay 5ms */ + rt_thread_mdelay(5); + } +} + +/* Initialization of the message queue example */ +int msgq_sample(void) +{ + rt_err_t result; + + /* Initialize the message queue */ + result = rt_mq_init(&mq, + "mqt", + &msg_pool[0], /* Memory pool points to msg_pool */ + 1, /* The size of each message is 1 byte */ + sizeof(msg_pool), /* The size of the memory pool is the size of msg_pool */ + RT_IPC_FLAG_FIFO); /* If there are multiple threads waiting, assign messages in first come first get mode. 
*/ + + if (result != RT_EOK) + { + rt_kprintf("init message queue failed.\n"); + return -1; + } + + rt_thread_init(&thread1, + "thread1", + thread1_entry, + RT_NULL, + &thread1_stack[0], + sizeof(thread1_stack), 25, 5); + rt_thread_startup(&thread1); + + rt_thread_init(&thread2, + "thread2", + thread2_entry, + RT_NULL, + &thread2_stack[0], + sizeof(thread2_stack), 25, 5); + rt_thread_startup(&thread2); + + return 0; +} + +/* Export to the msh command list */ +MSH_CMD_EXPORT(msgq_sample, msgq sample); +``` + +The simulation results are as follows: + +``` +\ | / +- RT - Thread Operating System + / | \ 3.1.0 build Aug 24 2018 + 2006 - 2018 Copyright by rt-thread team +msh > msgq_sample +msh >thread2: send message - A +thread1: recv msg from msg queue, the content:A +thread2: send message - B +thread2: send message - C +thread2: send message - D +thread2: send message - E +thread1: recv msg from msg queue, the content:B +thread2: send message - F +thread2: send message - G +thread2: send message - H +thread2: send urgent message - I +thread2: send message - J +thread1: recv msg from msg queue, the content:I +thread2: send message - K +thread2: send message - L +thread2: send message - M +thread2: send message - N +thread2: send message - O +thread1: recv msg from msg queue, the content:C +thread2: send message - P +thread2: send message - Q +thread2: send message - R +thread2: send message - S +thread2: send message - T +thread1: recv msg from msg queue, the content:D +message queue stop send, thread2 quit +thread1: recv msg from msg queue, the content:E +thread1: recv msg from msg queue, the content:F +thread1: recv msg from msg queue, the content:G +… +thread1: recv msg from msg queue, the content:T +thread1: detach mq +``` + +The routine demonstrates how to use message queue. Thread 1 receives messages from the message queue; thread 2 periodically sends regular and emergency messages to the message queue. Since the message "I" that thread 2 sent is an emergency message, it will be inserted directly into the front of the message queue. So after receiving the message "B", thread 1 receives the emergency message and then receives the message "C". + +### Occasions to Use Message Queue + +Message Queue can be used where occasional long messages are sent, including exchanging message between threads and threads and sending message to threads in interrupt service routines (interrupt service routines cannot receive messages). The following sections describe the use of message queues from two perspectives, sending messages and synchronizing messages. + +#### Sending Messages + +The obvious difference between message queue and mailbox is that the length of the message is not limited to 4 bytes. In addition, message queue also includes a function interface for sending emergency messages. But when you create a message queue with a maximum length of 4 bytes for all messages, the message queue object will be reduced to a mailbox. This unrestricted message is also reflected in the case of code writing which is also a mailbox-like code: + +```c +struct msg +{ + rt_uint8_t *data_ptr; /* Data block starting address */ + rt_uint32_t data_size; /* Data block size */ +}; +``` + +Similar to the mailbox example is the message structure definition. Now, let's assume that such a message needs to be sent to thread receiving messages. 
In the mailbox example, this structure can only send pointers pointing to this structure (after the function pointer is sent, the thread receiving messages can access the content pointing to this address, usually this piece of data needs to be left to the thread receiving the messages to release). How message queues is used is quite different: + +```c +void send_op(void *data, rt_size_t length) +{ + struct msg msg_ptr; + + msg_ptr.data_ptr = data; /* Point to the corresponding data block address */ + msg_ptr.data_size = length; /* Datablock length */ + + /* Send this message pointer to the mq message queue */ + rt_mq_send(mq, (void*)&msg_ptr, sizeof(struct msg)); +} +``` + +Note that in the above code, the data content of a local variable is sent to the message queue. In the thread that receives message, the same structure that receives messages using local variables is used: + +```c +void message_handler() +{ + struct msg msg_ptr; /* Local variable used to place the message */ + + /* Receive messages from the message queue into msg_ptr */ + if (rt_mq_recv(mq, (void*)&msg_ptr, sizeof(struct msg)) == RT_EOK) + { + /* Successfully received the message, corresponding data processing is performed */ + } +} +``` + +Because message queue is a direct copy of data content, so in the above example, local structure is used to save the message structure, which eliminates the trouble of dynamic memory allocation (and no need to worry, because message memory space has already been released when the thread receives the message). + +#### Synchronizing Messages + +In general system designs, the problem of sending synchronous messages is often encountered. At this time, corresponding implementations can be selected according to the state of the time: two threads can be implemented in the form of [message queue + semaphore or mailbox]. The thread sending messages sends the corresponding message to the message queue in the form of message sending. After the message is sent, it is hoping to receive the confirmation from the threads receiving messages. The working diagram is as shown below: + +![Synchronizing Messages Diagram](figures/07msg_syn.png) + +Depending on the message confirmation, the message structure can be defined as: + +```c +struct msg +{ + /* Other members of the message structure */ + struct rt_mailbox ack; +}; +/* or */ +struct msg +{ + /* Other members of the message structure */ + struct rt_semaphore ack; +}; +``` + +The first type of message uses a mailbox as a confirmation flag, while the second type of message uses a semaphore as a confirmation flag. The mailbox is used as a confirmation flag, which means that the receiving thread can notify some status values to the thread sending messages; and the semaphore is used as a confirmation flag that can only notify the thread sending messages in a single way, and the message has been confirmed to be received. + +Signal +---- + +A signal (also known as a soft interrupt signal), from a software perspective, is a simulation of interrupt mechanism. When it comes to its principle, thread receiving a signal is similar to processor receiving an interrupt request. + +### Signal Working Mechanism + +Signals are used for asynchronous communication in RT-Thread. The POSIX standard defines that sigset_t type defines a signal set. However, the sigset_t type may be defined differently in different systems. In RT-Thread, sigset_t is defined as unsigned long and named rt_sigset_t, the signals that the application can use are SIGUSR1(10) and SIGUSR2(12). 
+
+A signal is in essence a soft interrupt, used to notify a thread that an asynchronous event has occurred. Signals are used between threads to notify each other of exceptions and to handle urgent events. A thread does not need to do anything to wait for a signal's arrival; in fact, it does not know when a signal will arrive. Threads can send soft interrupt signals to each other by calling `rt_thread_kill()`.
+
+The thread that receives signals can process different signals in different ways, and these ways fall into three categories:
+
+The first is an interrupt-like processing routine: for signals that need to be processed, the thread can specify a function to handle them.
+
+The second is to ignore a signal and do nothing about it, as if it had never happened.
+
+The third is to keep the system default way of processing the signal.
+
+As shown in the following figure, suppose that thread 1 needs to process a signal. First, thread 1 installs the signal and unmasks it; at the same time, it sets how the signal is to be handled. Then any other thread can send the signal to thread 1, triggering thread 1 to process it.
+
+![Signal Working Mechanism](figures/07signal_work.png)
+
+When the signal is delivered to thread 1, if thread 1 is suspended, it will be switched to the ready state to process the corresponding signal. If thread 1 is running, it will create a new stack frame on its current thread stack to process the signal. It should be noted that the thread stack size used will also increase accordingly.
+
+### Management of Signals
+
+Operations on signals include: install a signal, block a signal, unblock a signal, send a signal, and wait for a signal. The interfaces of the signal are shown in the following figure:
+
+![Signal Related Interface](figures/07signal_ops.png)
+
+#### Install Signal
+
+If a thread is to process a signal, the signal needs to be installed in the thread. Installing a signal mainly determines the mapping between the signal value and the thread's action on that signal value, that is, which signal is to be processed and what action will be taken when that signal is delivered to the thread. See the following code for the detailed interface:
+
+```c
+rt_sighandler_t rt_signal_install(int signo, rt_sighandler_t handler);
+```
+
+`rt_sighandler_t` is the function pointer type that defines the signal processing function. The following table describes the input parameters and return values for this function:
+
+Input parameters and return values for rt_signal_install()
+
+|**Parameters** |**Description** |
+|-----------------------|--------------------------------------------------------|
+| signo | Signal value (only SIGUSR1 and SIGUSR2 are open to the user, the same applies below) |
+| handler | Set the approach to process signal values |
+|**Return** | —— |
+| SIG_ERR | Wrong signal |
+| The handler value before the signal is installed | Successful |
+
+Setting the handler parameter during signal installation determines the processing method for this signal. The processing methods can be divided into three types, as sketched after this list:
+
+1) Similar to interrupt processing: the parameter points to a user-defined processing function that is called when the signal occurs.
+
+2) The parameter is set to SIG_IGN: ignore the signal and do nothing about it, as if it had never happened.
+
+3) The parameter is set to SIG_DFL: the system calls the default function _signal_default_handler() to process the signal.
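+
+A minimal sketch of these three options, assuming SIGUSR1 and kernel signal support enabled; `my_handler()` and `signal_install_sketch()` are hypothetical names, and `rt_signal_unmask()` is described below:
+
+```c
+#include <rtthread.h>
+
+/* user-defined processing function, matching the rt_sighandler_t prototype */
+void my_handler(int sig)
+{
+    rt_kprintf("received signal %d\n", sig);
+}
+
+void signal_install_sketch(void)
+{
+    /* 1) interrupt-like processing: run my_handler() when SIGUSR1 is delivered */
+    rt_signal_install(SIGUSR1, my_handler);
+
+    /* 2) or ignore SIGUSR1 entirely */
+    /* rt_signal_install(SIGUSR1, SIG_IGN); */
+
+    /* 3) or restore the system default processing */
+    /* rt_signal_install(SIGUSR1, SIG_DFL); */
+
+    /* unmask the signal so that it can actually be delivered to this thread */
+    rt_signal_unmask(SIGUSR1);
+}
+```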
+
+#### Block Signal
+
+Blocking a signal can also be understood as shielding the signal. If a signal is blocked, it will not be delivered to the thread that installed it and will not cause soft interrupt processing. Call rt_signal_mask() to block a signal:
+
+```c
+void rt_signal_mask(int signo);
+```
+
+The following table describes the input parameter for this function:
+
+rt_signal_mask() function parameters
+
+|**Parameters**|**Description**|
+|----------|----------|
+| signo | Signal value |
+
+#### Unblock Signal
+
+Several signals can be installed in a thread. Unblocking gives "attention" back to certain signals, so that sending these signals will trigger a soft interrupt for the thread. Call rt_signal_unmask() to unblock a signal:
+
+```c
+void rt_signal_unmask(int signo);
+```
+
+The following table describes the input parameter for this function:
+
+rt_signal_unmask() function parameters
+
+|**Parameters**|**Description**|
+|----------|----------|
+| signo | Signal value |
+
+#### Send Signals
+
+When an exception needs to be handled, a signal can be sent to the thread that has been set up to handle it. Call rt_thread_kill() to send a signal to any thread:
+
+```c
+int rt_thread_kill(rt_thread_t tid, int sig);
+```
+
+The following table describes the input parameters and return values for this function:
+
+Input parameters and return values for rt_thread_kill()
+
+|**Parameters** |**Description** |
+|-------------|----------------|
+| tid | The thread receiving the signal |
+| sig | Signal value |
+|**Return** | —— |
+| RT_EOK | Sent successfully |
+| \-RT_EINVAL | Parameter error |
+
+#### Wait for Signal
+
+This function waits for the arrival of the specified signals. If none of them has arrived, the thread is suspended until a signal arrives or the waiting time exceeds the specified timeout. If a signal has arrived, a pointer to the signal information is stored in si. The function to wait for a signal is as follows:
+
+```c
+int rt_signal_wait(const rt_sigset_t *set,
+                   rt_siginfo_t *si, rt_int32_t timeout);
+```
+
+rt_siginfo_t is the data type that defines the signal information, and the following table describes the input parameters and return values of the function:
+
+Input parameters and return values for rt_signal_wait()
+
+|**Parameters** |**Description** |
+|---------------|----------------------------|
+| set | Specify the signal to wait for |
+| si | Pointer pointing to the signal information |
+| timeout | Waiting time |
+|**Return** | —— |
+| RT_EOK | Signal arrives |
+| \-RT_ETIMEOUT | Timeout |
+| \-RT_EINVAL | Parameter error |
+
+### Signal Application Example
+
+This is a signal application routine, as shown in the following code. This routine creates one thread. When the signal is installed, the processing mode is set to custom processing, and the processing function of the signal is defined as thread1_signal_handler(). After the thread is running and the signal has been installed, a signal is sent to this thread. The thread will receive the signal and print a message.
+ +Signal usage routine + +```c +#include + +#define THREAD_PRIORITY 25 +#define THREAD_STACK_SIZE 512 +#define THREAD_TIMESLICE 5 + +static rt_thread_t tid1 = RT_NULL; + +/* Signal process function for thread 1 signal handler */ +void thread1_signal_handler(int sig) +{ + rt_kprintf("thread1 received signal %d\n", sig); +} + +/* Entry function for thread 1 */ +static void thread1_entry(void *parameter) +{ + int cnt = 0; + + /* Install signal */ + rt_signal_install(SIGUSR1, thread1_signal_handler); + rt_signal_unmask(SIGUSR1); + + /* Run for 10 times */ + while (cnt < 10) + { + /* Thread 1 runs with low-priority and prints the count value all through*/ + rt_kprintf("thread1 count : %d\n", cnt); + + cnt++; + rt_thread_mdelay(100); + } +} + +/* Initialization of the signal example */ +int signal_sample(void) +{ + /* Create thread 1 */ + tid1 = rt_thread_create("thread1", + thread1_entry, RT_NULL, + THREAD_STACK_SIZE, + THREAD_PRIORITY, THREAD_TIMESLICE); + + if (tid1 != RT_NULL) + rt_thread_startup(tid1); + + rt_thread_mdelay(300); + + /* Send signal SIGUSR1 to thread 1 */ + rt_thread_kill(tid1, SIGUSR1); + + return 0; +} + +/* Export to the msh command list */ +MSH_CMD_EXPORT(signal_sample, signal sample); +``` + +The simulation results are as follows: + +``` + \ | / +- RT - Thread Operating System + / | \ 3.1.0 build Aug 24 2018 + 2006 - 2018 Copyright by rt-thread team +msh >signal_sample +thread1 count : 0 +thread1 count : 1 +thread1 count : 2 +msh >thread1 received signal 10 +thread1 count : 3 +thread1 count : 4 +thread1 count : 5 +thread1 count : 6 +thread1 count : 7 +thread1 count : 8 +thread1 count : 9 +``` + +In the routine, the thread first installs the signal and unblocks it, then sends a signal to the thread. The thread receives the signal and prints out the signal received: SIGUSR1 (10). 
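+
+Building on this routine, a thread that has installed a signal can also temporarily shield it around a section that must not be interrupted by the soft interrupt. The following is a minimal sketch, assuming SIGUSR1 has been installed as in the example above; `update_without_signal()` and the shared counter are hypothetical names for state that the signal handler is assumed to also touch:
+
+```c
+#include <rtthread.h>
+
+static volatile int shared_counter;
+
+void update_without_signal(void)
+{
+    /* block SIGUSR1: while masked, it will not be delivered to this thread */
+    rt_signal_mask(SIGUSR1);
+
+    /* update state that the signal handler also uses, without being preempted by it */
+    shared_counter++;
+
+    /* allow SIGUSR1 to be delivered to this thread again */
+    rt_signal_unmask(SIGUSR1);
+}
+```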
+ + diff --git a/documentation/thread-sync/figures/06event_ops.png b/documentation/thread-sync/figures/06event_ops.png new file mode 100644 index 0000000000..8b6691632d Binary files /dev/null and b/documentation/thread-sync/figures/06event_ops.png differ diff --git a/documentation/thread-sync/figures/06event_use.png b/documentation/thread-sync/figures/06event_use.png new file mode 100644 index 0000000000..e590b5165c Binary files /dev/null and b/documentation/thread-sync/figures/06event_use.png differ diff --git a/documentation/thread-sync/figures/06event_work.png b/documentation/thread-sync/figures/06event_work.png new file mode 100644 index 0000000000..bf2cb9f658 Binary files /dev/null and b/documentation/thread-sync/figures/06event_work.png differ diff --git a/documentation/thread-sync/figures/06inter_ths_commu1.png b/documentation/thread-sync/figures/06inter_ths_commu1.png new file mode 100644 index 0000000000..240a0d9d48 Binary files /dev/null and b/documentation/thread-sync/figures/06inter_ths_commu1.png differ diff --git a/documentation/thread-sync/figures/06inter_ths_commu2.png b/documentation/thread-sync/figures/06inter_ths_commu2.png new file mode 100644 index 0000000000..774d4d61ce Binary files /dev/null and b/documentation/thread-sync/figures/06inter_ths_commu2.png differ diff --git a/documentation/thread-sync/figures/06mutex_ops.png b/documentation/thread-sync/figures/06mutex_ops.png new file mode 100644 index 0000000000..7c004b9622 Binary files /dev/null and b/documentation/thread-sync/figures/06mutex_ops.png differ diff --git a/documentation/thread-sync/figures/06mutex_work.png b/documentation/thread-sync/figures/06mutex_work.png new file mode 100644 index 0000000000..a9f1f23463 Binary files /dev/null and b/documentation/thread-sync/figures/06mutex_work.png differ diff --git a/documentation/thread-sync/figures/06priority_inherit.png b/documentation/thread-sync/figures/06priority_inherit.png new file mode 100644 index 0000000000..0229975046 Binary files /dev/null and b/documentation/thread-sync/figures/06priority_inherit.png differ diff --git a/documentation/thread-sync/figures/06priority_inversion.png b/documentation/thread-sync/figures/06priority_inversion.png new file mode 100644 index 0000000000..20fc8bab54 Binary files /dev/null and b/documentation/thread-sync/figures/06priority_inversion.png differ diff --git a/documentation/thread-sync/figures/06sem_lock.png b/documentation/thread-sync/figures/06sem_lock.png new file mode 100644 index 0000000000..07e79caf0d Binary files /dev/null and b/documentation/thread-sync/figures/06sem_lock.png differ diff --git a/documentation/thread-sync/figures/06sem_ops.png b/documentation/thread-sync/figures/06sem_ops.png new file mode 100644 index 0000000000..ae1f163958 Binary files /dev/null and b/documentation/thread-sync/figures/06sem_ops.png differ diff --git a/documentation/thread-sync/figures/06sem_work.png b/documentation/thread-sync/figures/06sem_work.png new file mode 100644 index 0000000000..8c34715a41 Binary files /dev/null and b/documentation/thread-sync/figures/06sem_work.png differ diff --git a/documentation/thread-sync/thread-sync.md b/documentation/thread-sync/thread-sync.md new file mode 100644 index 0000000000..c1a832428f --- /dev/null +++ b/documentation/thread-sync/thread-sync.md @@ -0,0 +1,1333 @@ +# Inter-thread Synchronization + +In a multi-threaded real-time system, the completion of a task can often be accomplished through coordination of multiple threads, so how do these multiple threads collaborate well with each 
other to perform without errors? Here is an example. + +For example, two threads in one task: one thread receives data from the sensor and writes the data to shared memory, while another thread periodically reads data from the shared memory and sends it to display. The following figure depicts data transfer between two threads: + +![Diagram of Data Transfer between Threads](figures/06inter_ths_commu1.png) + +If access to shared memory is not exclusive, then it may be accessed simultaneously by each thread, which causes data consistency issues. For example, before thread #2 (thread that can display data) attempts to display data, thread #1 (thread that can receive data) has not yet completed the writing in of data, then the display will contain data sampled at different times, causing the display data to be disordered. + +Thread #1 that writes the sensor data to the shared memory block and thread #2 that reads the sensor data from the shared memory block access the same memory block. In order to prevent data errors, the actions of the two threads must be mutually exclusive. One of threads should only be allowed after another thread completes its operation on the shared memory block. This way, thread #1 and thread #2 can work properly to execute this task correctly. + +Synchronization refers to running in a predetermined order. Thread synchronization refers to multiple threads controlling the execution order between threads through specific mechanisms (such as mutex, event object, critical section). In other words, establish a relationship of execution order by synchronization between threads and if there is no synchronization, the threads will be out-of-order. + +Multiple threads operate / access the same area (code), this block of code is called the critical section, the shared memory block in the above example is the critical section. Thread mutual exclusion refers to the exclusiveness of access to critical section resources. When multiple threads use critical section resources, only one thread is allowed each time. Other threads that want to use the resource must wait until the resource occupant releases the resource. Thread mutex can be seen as a special kind of thread synchronization. + +There are many ways to synchronize threads. The core idea is that **only one (or one kind of) thread is allowed to run when accessing the critical section.** There are many ways to enter/exit the critical section: + +1) Call rt_hw_interrupt_disable() to enter the critical section and call rt_hw_interrupt_enable() to exit the critical section; see the *Global Interrupt Switch* in *Interrupt Management* for details. + +2)Call rt_enter_critical() to enter the critical section and call rt_exit_critical() to exit the critical section. + +This chapter introduces several synchronization methods: **semaphores,** **mutex**, and **event**. After learning this chapter, you will learn how to use semaphores, mutex, and event to synchronize threads. + +Semaphores +------ + +Take parking lot as an example to understand the concept of semaphore: + +①When the parking lot is empty, the administrator of the parking lot finds that there are a lot of empty parking spaces. And then, cars outside will enter the parking lot and get parking spaces. + +②When the parking space of the parking lot is full, the administrator finds that there is no empty parking space. 
As a result, cars outside will be prohibited from entering the parking lot, and they will be waiting in line;
+
+③When cars are leaving the parking lot, the administrator finds that there are empty parking spaces again, and cars outside are allowed to enter the parking lot; after the empty parking spaces are taken, cars outside are prohibited from entering again.
+
+In this example, the administrator is equivalent to the semaphore. The number of empty parking spaces that the administrator is in charge of is the value of the semaphore (non-negative, changing dynamically); the parking spaces are equivalent to the common resource (the critical section), and the cars are equivalent to the threads. Cars access the parking spaces by obtaining permission from the administrator, which is similar to threads accessing a public resource by obtaining the semaphore.
+
+### Semaphore Working Mechanism
+
+A semaphore is a lightweight kernel object that can solve the problem of synchronization between threads. By obtaining or releasing a semaphore, a thread can achieve synchronization or mutual exclusion.
+
+A schematic diagram of the semaphore is shown in the figure below. Each semaphore object has a semaphore value and a thread waiting queue. The semaphore value corresponds to the actual number of instances of the semaphore object, that is, the number of resources. If the semaphore value is 5, it means that there are 5 semaphore instances (resources) that can be used. When the number of semaphore instances is zero, the thread applying for the semaphore will be suspended on the waiting queue of the semaphore, waiting for an available semaphore instance (resource).
+
+![Schematic Diagram of Semaphore](figures/06sem_work.png)
+
+### Semaphore Control Block
+
+In RT-Thread, the semaphore control block is a data structure used by the operating system to manage semaphores, represented by struct rt_semaphore. Another C expression is rt_sem_t, which represents the handle of the semaphore; its implementation in C language is a pointer to the semaphore control block. The detailed definition of the semaphore control block structure is as follows:
+
+```c
+struct rt_semaphore
+{
+   struct rt_ipc_object parent;  /* Inherited from the ipc_object class */
+   rt_uint16_t          value;   /* Semaphore value */
+};
+/* rt_sem_t is the type of pointer pointing to the semaphore structure */
+typedef struct rt_semaphore* rt_sem_t;
+```
+
+The rt_semaphore object is derived from rt_ipc_object and is managed by the IPC container. The maximum semaphore value is 65535.
+
+### Semaphore Management
+
+The semaphore control block contains important parameters related to the semaphore and acts as a link between the various states of the semaphore. The interfaces related to the semaphore are shown in the figure below. Operations on a semaphore include: creating/initializing the semaphore, obtaining the semaphore, releasing the semaphore, and deleting/detaching the semaphore.
+
+![Interfaces Related to Semaphore](figures/06sem_ops.png)
+
+#### Create and Delete Semaphore
+
+When creating a semaphore, the kernel first creates a semaphore control block, then performs basic initialization on the control block. The following function interface is used to create a semaphore:
+
+```c
+ rt_sem_t rt_sem_create(const char *name,
+                        rt_uint32_t value,
+                        rt_uint8_t  flag);
+```
+
+When this function is called, the system will first allocate a semaphore object from the object manager and initialize the object, and then initialize the parent class IPC object and the semaphore-related part.
Among parameters specified in the creation of semaphore, semaphore flag parameter determines the queuing way of how multiple threads wait when the semaphore is not available. When the RT_IPC_FLAG_FIFO (first in, first out) mode is selected, the waiting thread queue will be queued in a first in first out manner. The first thread that goes in will firstly obtain the waiting semaphore. When the RT_IPC_FLAG_PRIO (priority waiting) mode is selected, the waiting threads will be queued in order of priority. Threads waiting with the highest priority will get the wait semaphore first. The following table describes the input parameters and return values for this function: + +Input parameters and return values of rt_sem_create() + +|Parameters |Description | +|--------------------|-------------------------------------------------------------------| +| name | Semaphore Name | +| value | Semaphore Initial Value | +| flag | Semaphore flag, which can be the following values: RT_IPC_FLAG_FIFO or RT_IPC_FLAG_PRIO | +|**Return** | —— | +| RT_NULL | Creation failed | +| semaphore control block pointer | Creation successful | + +For dynamically created semaphores, when the system no longer uses semaphore, they can be removed to release system resources. To delete a semaphore, use the following function interface: + +```c +rt_err_t rt_sem_delete(rt_sem_t sem); +``` + +When this function is called, the system will delete this semaphore. If there is a thread waiting for this semaphore when it is being deleted, the delete operation will first wake up the thread waiting on the semaphore (return value of the waiting thread is - RT_ERROR), then release the semaphore's memory resources. The following table describes the input parameters and return values for this function: + +Input parameters and return values of rt_sem_delete() + +|Parameters|Description | +|----------|----------------------------------| +| sem | Semaphore object created by rt_sem_create() | +|**Return**| —— | +| RT_EOK | Successfully deleted | + +#### Initialize and Detach Semaphore + +For a static semaphore object, its memory space is allocated by the compiler during compiling and placed on the read-write data segment or on the uninitialized data segment. In this case,rt_sem_create interface is no longer needed to create the semaphore to use it, just initialize it before using it. To initialize the semaphore object, use the following function interface: + +```c +rt_err_t rt_sem_init(rt_sem_t sem, + const char *name, + rt_uint32_t value, + rt_uint8_t flag) +``` + +When this function is called, the system will initialize the semaphore object, then initialize the IPC object and parts related to the semaphore. The flag mentioned above in semaphore function creation can be used as the semaphore flag here. The following table describes the input parameters and return values for this function: + +Input parameters and return values of rt_sem_init() + +|**Parameters**|**Description** | +|----------|-------------------------------------------------------------------| +| sem | Semaphore object handle | +| name | Semaphore name | +| value | Semaphore initial value | +| flag | Semaphore flag, which can be the following values: RT_IPC_FLAG_FIFO or RT_IPC_FLAG_PRIO | +|**Return**| —— | +| RT_EOK | Initialization successful | + +For statically initialized semaphore, detaching the semaphore is letting the semaphore object detach from the kernel object manager. 
To detach the semaphore, use the following function interface:
+
+```c
+rt_err_t rt_sem_detach(rt_sem_t sem);
+```
+
+After this function is called, the kernel wakes up all threads suspended in the semaphore waiting queue and then detaches the semaphore from the kernel object manager. The waiting threads that were originally suspended on the semaphore will get the return value -RT_ERROR. The following table describes the input parameters and return values for this function:
+
+ Input parameters and return values of rt_sem_detach()
+
+|Parameters|**Description** |
+|----------|------------------|
+| sem | Semaphore object handle |
+|**Return**| —— |
+| RT_EOK | Successfully detached |
+
+#### Obtain Semaphore
+
+A thread obtains a semaphore resource instance by taking the semaphore. When the semaphore value is greater than zero, the thread will obtain the semaphore, and the corresponding semaphore value will be reduced by 1. The semaphore is obtained using the following function interface:
+
+```c
+rt_err_t rt_sem_take (rt_sem_t sem, rt_int32_t time);
+```
+
+When this function is called, if the value of the semaphore is zero, it means the current semaphore resource instance is not available; the thread applying for the semaphore will, according to the time parameter, either return immediately, suspend and wait for a period of time, or wait forever. While waiting, if another thread or an ISR releases the semaphore, the thread will stop waiting. If the semaphore is still not available within the specified time, the thread will time out and return, and the return value is -RT_ETIMEOUT. The following table describes the input parameters and return values for this function:
+
+ Input parameters and return values of rt_sem_take()
+
+|Parameters |Description |
+|---------------|---------------------------------------------------|
+| sem | Semaphore object handle |
+| time | Specified wait time, in operating system clock ticks (OS ticks) |
+|**Return** | —— |
+| RT_EOK | Semaphore obtained successfully |
+| \-RT_ETIMEOUT | Did not receive the semaphore within the specified time |
+| \-RT_ERROR | Other errors |
+
+#### Obtain Semaphore without Waiting
+
+When the user does not want to suspend the thread on the semaphore and wait, the semaphore can be obtained in wait-free mode, using the following function interface:
+
+```c
+rt_err_t rt_sem_trytake(rt_sem_t sem);
+```
+
+This function has the same effect as rt_sem_take(sem, 0): when the semaphore resource instance requested by the thread is not available, it does not wait on the semaphore but returns -RT_ETIMEOUT directly. The following table describes the input parameters and return values for this function:
+
+ Input parameters and return values for rt_sem_trytake()
+
+|**Parameter** |Description |
+|---------------|------------------|
+| sem | Semaphore object handle |
+|**Return** | —— |
+| RT_EOK | Semaphore successfully obtained |
+| \-RT_ETIMEOUT | Failed to obtain the semaphore |
+
+#### Release Semaphore
+
+Releasing the semaphore wakes up the thread suspended on it. To release the semaphore, use the following function interface:
+
+```c
+rt_err_t rt_sem_release(rt_sem_t sem);
+```
+
+For example, when the semaphore value is zero and a thread is waiting for this semaphore, releasing the semaphore will wake up the first thread waiting in the thread queue of the semaphore, and this thread will obtain the semaphore; otherwise the value of the semaphore will be increased by 1.
The following table describes the input parameters and return values of the function: + + Input parameters and return values of rt_sem_release() + +|**Parameters**|Description | +|----------|------------------| +| sem | Semaphore object handle | +|**Return**| —— | +| RT_EOK | Semaphore successfully released | + +### Semaphore Application Sample + +This is a sample of semaphore usage routine. This routine creates a dynamic semaphore and initializes two threads, one thread sends the semaphore, and one thread receives the semaphore and performs the corresponding operations. As shown in the following code: + +Use of semaphore + + +```c +#include + +#define THREAD_PRIORITY 25 +#define THREAD_TIMESLICE 5 + +/* pointer to semaphore */ +static rt_sem_t dynamic_sem = RT_NULL; + +ALIGN(RT_ALIGN_SIZE) +static char thread1_stack[1024]; +static struct rt_thread thread1; +static void rt_thread1_entry(void *parameter) +{ + static rt_uint8_t count = 0; + + while(1) + { + if(count <= 100) + { + count++; + } + else + return; + + /* count release semaphore every 10 counts */ + if(0 == (count % 10)) + { + rt_kprintf("t1 release a dynamic semaphore.\n"); + rt_sem_release(dynamic_sem); + } + } +} + +ALIGN(RT_ALIGN_SIZE) +static char thread2_stack[1024]; +static struct rt_thread thread2; +static void rt_thread2_entry(void *parameter) +{ + static rt_err_t result; + static rt_uint8_t number = 0; + while(1) + { + /* permanently wait for the semaphore; once obtain the semaphore, perform the number self-add operation */ + result = rt_sem_take(dynamic_sem, RT_WAITING_FOREVER); + if (result != RT_EOK) + { + rt_kprintf("t2 take a dynamic semaphore, failed.\n"); + rt_sem_delete(dynamic_sem); + return; + } + else + { + number++; + rt_kprintf("t2 take a dynamic semaphore. number = %d\n" ,number); + } + } +} + +/* initialization of the semaphore sample */ +int semaphore_sample(void) +{ + /* create a dynamic semaphore with an initial value of 0 */ + dynamic_sem = rt_sem_create("dsem", 0, RT_IPC_FLAG_FIFO); + if (dynamic_sem == RT_NULL) + { + rt_kprintf("create dynamic semaphore failed.\n"); + return -1; + } + else + { + rt_kprintf("create done. dynamic semaphore value = 0.\n"); + } + + rt_thread_init(&thread1, + "thread1", + rt_thread1_entry, + RT_NULL, + &thread1_stack[0], + sizeof(thread1_stack), + THREAD_PRIORITY, THREAD_TIMESLICE); + rt_thread_startup(&thread1); + + rt_thread_init(&thread2, + "thread2", + rt_thread2_entry, + RT_NULL, + &thread2_stack[0], + sizeof(thread2_stack), + THREAD_PRIORITY-1, THREAD_TIMESLICE); + rt_thread_startup(&thread2); + + return 0; +} +/* export to msh command list */ +MSH_CMD_EXPORT(semaphore_sample, semaphore sample); +``` + +Simulation results: + +``` + \ | / +- RT - Thread Operating System + / | \ 3.1.0 build Aug 27 2018 + 2006 - 2018 Copyright by rt-thread team +msh >semaphore_sample +create done. dynamic semaphore value = 0. +msh >t1 release a dynamic semaphore. +t2 take a dynamic semaphore. number = 1 +t1 release a dynamic semaphore. +t2 take a dynamic semaphore. number = 2 +t1 release a dynamic semaphore. +t2 take a dynamic semaphore. number = 3 +t1 release a dynamic semaphore. +t2 take a dynamic semaphore. number = 4 +t1 release a dynamic semaphore. +t2 take a dynamic semaphore. number = 5 +t1 release a dynamic semaphore. +t2 take a dynamic semaphore. number = 6 +t1 release a dynamic semaphore. +t2 take a dynamic semaphore. number = 7 +t1 release a dynamic semaphore. +t2 take a dynamic semaphore. number = 8 +t1 release a dynamic semaphore. +t2 take a dynamic semaphore. 
number = 9
+t1 release a dynamic semaphore.
+t2 take a dynamic semaphore. number = 10
+```
+
+As shown in the result above: thread 1 releases a semaphore whenever the count is a multiple of 10 (the thread exits after the count reaches 100), and thread 2 adds 1 to its number after receiving the semaphore.
+
+Another semaphore application routine is shown below. This sample uses two threads and three semaphores to implement a producer-consumer example.
+
+To be more accurate, the producer-consumer model is actually a "producer-consumer-warehouse" model. We call the available spots in the warehouse "empty seats", and once an available spot ("empty seat") is taken, we call it a "full seat". For this model, the following points should be clarified:
+1. The producer only produces when the warehouse is not full; the producer will stop production when the warehouse is full.
+2. The consumer can consume only when there are products in the warehouse; the consumer will wait if the warehouse is empty.
+3. When the consumer consumes, the warehouse is not full anymore, and the producer will be notified to produce again.
+4. The producer should notify the consumer to consume after it produces consumable products.
+
+The three semaphores in the example are:
+①sem_lock: This semaphore acts as a lock, because both threads operate on the same array; the array is a shared resource and sem_lock is used to protect it.
+②sem_empty: Its value indicates the number of "empty seats" in the "warehouse"; the value of sem_empty is initialized to 5, indicating that there are 5 "empty seats".
+③sem_full: Its value indicates the number of "full seats" in the "warehouse"; the value of sem_full is initialized to 0, indicating that there are 0 "full seats".
+
+The two threads in the example are:
+①Producer thread: after obtaining an "empty seat" (the value of sem_empty is decremented by 1), it generates a number, writes it into the array in a cyclic manner, and then releases a "full seat" (the value of sem_full is increased by 1).
+②Consumer thread: after obtaining a "full seat" (the value of sem_full is decremented by 1), it reads the contents of the array and adds them up, and then releases an "empty seat" (the value of sem_empty is increased by 1).
+ +Producer consumer routine + +```c +#include + +#define THREAD_PRIORITY 6 +#define THREAD_STACK_SIZE 512 +#define THREAD_TIMESLICE 5 + +/* Define a maximum of 5 elements to be generated */ +#define MAXSEM 5 + +/* An array of integers used to place production */ +rt_uint32_t array[MAXSEM]; + +/* Point to the producer and consumer's read-write position in the array */ +static rt_uint32_t set, get; + +/* Pointer to the thread control block */ +static rt_thread_t producer_tid = RT_NULL; +static rt_thread_t consumer_tid = RT_NULL; + +struct rt_semaphore sem_lock; +struct rt_semaphore sem_empty, sem_full; + +/* Pointer to the thread control block */ +void producer_thread_entry(void *parameter) +{ + int cnt = 0; + + /* Run for 10 times*/ + while (cnt < 10) + { + /* Obtain one vacancy */ + rt_sem_take(&sem_empty, RT_WAITING_FOREVER); + + /* Modify array content, lock */ + rt_sem_take(&sem_lock, RT_WAITING_FOREVER); + array[set % MAXSEM] = cnt + 1; + rt_kprintf("the producer generates a number: %d\n", array[set % MAXSEM]); + set++; + rt_sem_release(&sem_lock); + + /* 发布一个满位 */ + rt_sem_release(&sem_full); + cnt++; + + /* Pause for a while */ + rt_thread_mdelay(20); + } + + rt_kprintf("the producer exit!\n"); +} + +/* Consumer thread entry */ +void consumer_thread_entry(void *parameter) +{ + rt_uint32_t sum = 0; + + while (1) + { + /* obtain a "full seat" */ + rt_sem_take(&sem_full, RT_WAITING_FOREVER); + + /* Critical region, locked for operation */ + rt_sem_take(&sem_lock, RT_WAITING_FOREVER); + sum += array[get % MAXSEM]; + rt_kprintf("the consumer[%d] get a number: %d\n", (get % MAXSEM), array[get % MAXSEM]); + get++; + rt_sem_release(&sem_lock); + + /* Release one vacancy */ + rt_sem_release(&sem_empty); + + /* The producer produces up to 10 numbers, stops, and the consumer thread stops accordingly */ + if (get == 10) break; + + /* Pause for a while */ + rt_thread_mdelay(50); + } + + rt_kprintf("the consumer sum is: %d\n", sum); + rt_kprintf("the consumer exit!\n"); +} + +int producer_consumer(void) +{ + set = 0; + get = 0; + + /* Initialize 3 semaphores */ + rt_sem_init(&sem_lock, "lock", 1, RT_IPC_FLAG_FIFO); + rt_sem_init(&sem_empty, "empty", MAXSEM, RT_IPC_FLAG_FIFO); + rt_sem_init(&sem_full, "full", 0, RT_IPC_FLAG_FIFO); + + /* Create producer thread */ + producer_tid = rt_thread_create("producer", + producer_thread_entry, RT_NULL, + THREAD_STACK_SIZE, + THREAD_PRIORITY - 1, + THREAD_TIMESLICE); + if (producer_tid != RT_NULL) + { + rt_thread_startup(producer_tid); + } + else + { + rt_kprintf("create thread producer failed"); + return -1; + } + + /* Create consumer thread */ + consumer_tid = rt_thread_create("consumer", + consumer_thread_entry, RT_NULL, + THREAD_STACK_SIZE, + THREAD_PRIORITY + 1, + THREAD_TIMESLICE); + if (consumer_tid != RT_NULL) + { + rt_thread_startup(consumer_tid); + } + else + { + rt_kprintf("create thread consumer failed"); + return -1; + } + + return 0; +} + +/* Export to msh command list */ +MSH_CMD_EXPORT(producer_consumer, producer_consumer sample); +``` + +The simulation results for this routine are as follows: + +``` +\ | / +- RT - Thread Operating System + / | \ 3.1.0 build Aug 27 2018 + 2006 - 2018 Copyright by rt-thread team +msh >producer_consumer +the producer generates a number: 1 +the consumer[0] get a number: 1 +msh >the producer generates a number: 2 +the producer generates a number: 3 +the consumer[1] get a number: 2 +the producer generates a number: 4 +the producer generates a number: 5 +the producer generates a number: 6 +the consumer[2] get a 
number: 3
+the producer generates a number: 7
+the producer generates a number: 8
+the consumer[3] get a number: 4
+the producer generates a number: 9
+the consumer[4] get a number: 5
+the producer generates a number: 10
+the producer exit!
+the consumer[0] get a number: 6
+the consumer[1] get a number: 7
+the consumer[2] get a number: 8
+the consumer[3] get a number: 9
+the consumer[4] get a number: 10
+the consumer sum is: 55
+the consumer exit!
+```
+
+This routine can be understood as the process of the producer producing products and putting them into the warehouse, and the consumer taking the products out of the warehouse.
+
+(1) Producer thread:
+
+1) Obtain one "empty seat" (to put a product number in); the number of "empty seats" is decremented by 1;
+
+2) Lock protection: the generated number value is cnt+1, and the value is written into the array cyclically; then unlock;
+
+3) Release one "full seat" (one product is put into the warehouse, so there is one more "full seat" in the warehouse); the number of "full seats" is increased by 1.
+
+(2) Consumer thread:
+
+1) Obtain one "full seat" (to take a product number out); the number of "full seats" is decremented by 1;
+
+2) Lock protection: read the number produced by the producer from the array and add it to the previous sum; then unlock;
+
+3) Release one "empty seat" (one product is taken out of the warehouse, so there is one more "empty seat" in the warehouse); the number of "empty seats" is increased by 1.
+
+The producer generates 10 numbers in turn, and the consumer takes them away in turn and sums the values of the 10 numbers. The semaphore lock protects the critical-region resource (the array), ensuring that each number is taken by the consumer exclusively, and inter-thread synchronization is achieved.
+
+### Semaphore Usage Occasion
+
+The semaphore is a very flexible way to synchronize and can be used in a variety of situations, such as forming locks, synchronization, resource counting, etc. It can be conveniently used for synchronization between threads, as well as between interrupts and threads.
+
+#### Thread Synchronization
+
+Thread synchronization is one of the simplest types of semaphore application. For example, when using a semaphore to synchronize two threads, the value of the semaphore is initialized to 0, indicating that there are 0 semaphore resource instances, and the thread attempting to acquire the semaphore waits directly on it.
+
+When the thread holding the semaphore completes the work it is processing, it releases the semaphore. The thread waiting on this semaphore is then woken up and can perform the next part of the work. This occasion can also be seen as using the semaphore as a work-completion flag: the thread holding the semaphore completes its own work and then notifies the thread waiting for the semaphore to continue with the next part of the work.
+
+#### Lock
+
+A single lock is often applied to multiple threads accessing the same shared resource (in other words, a critical region). When a semaphore is used as a lock, the semaphore resource instance should normally be initialized to 1, indicating that the system has one resource available by default; because the semaphore value always varies between 1 and 0, this type of lock is also called a binary semaphore. As shown in the following figure, when a thread needs to access a shared resource, it needs to obtain the resource lock first.
When this thread successfully obtains the resource lock, other threads that intend to access the shared resource will be suspended, because the lock is already taken (the semaphore value is 0) when they try to obtain it. When the thread holding the semaphore finishes its work and leaves the critical region, it will release the semaphore and unlock the lock, and the first waiting thread suspended on the lock will be woken up to gain access to the critical region.
+
+![Lock](figures/06sem_lock.png)
+
+#### Synchronization between Interrupts and Threads
+
+The semaphore can also be conveniently applied to synchronization between an interrupt and a thread, for example, when an interrupt is triggered and the interrupt service routine needs to notify a thread to perform the corresponding data processing. In this case, the initial value of the semaphore can be set to 0. When the thread tries to take this semaphore, since the initial value of the semaphore is 0, the thread will suspend on this semaphore until the semaphore is released. When the interrupt is triggered, hardware-related actions are performed first, such as reading data from the hardware I/O port and acknowledging the interrupt to clear the interrupt source; a semaphore is then released to wake up the corresponding thread for subsequent data processing. For example, the processing of the FinSH thread is shown in the following figure (a minimal code sketch of this pattern is given at the end of this section).
+
+![sync between ISR and FinSH thread](figures/06inter_ths_commu2.png)
+
+The value of the semaphore is initially 0. When the FinSH thread attempts to obtain the semaphore, it will be suspended because the semaphore value is 0. When the console device has data input, an interrupt is generated and the interrupt service routine is entered. In the interrupt service routine, the data of the console device is read, the data is put into the UART buffer, and then the semaphore is released. The semaphore release wakes up the shell thread. After the interrupt service routine has finished, if there is no ready thread in the system with a higher priority than the shell thread, the shell thread will hold the semaphore and run, obtaining the input data from the UART buffer.
+
+>The mutual exclusion between an interrupt and a thread cannot be achieved by means of a semaphore (lock); instead, it is done with the global interrupt switch (disabling/enabling interrupts).
+
+#### Resource Count
+
+A semaphore can also be considered as an incrementing or decrementing counter. Note that the semaphore value is non-negative: for example, if the value of a semaphore is initialized to 5, then the semaphore can be decremented at most 5 consecutive times until the counter is reduced to zero. Resource counting is suitable for occasions where the processing speeds of threads do not match. In this case, the semaphore can record the number of work items completed by the faster thread, so that when the slower thread is dispatched, it can handle several pending items in a row. For example, in the producer and consumer problem, the producer can release the semaphore multiple times, and the consumer can then process several semaphore resources each time it is dispatched.
+
+>Generally, resource counting is mostly used in a hybrid form of inter-thread synchronization, because a single resource is usually still accessed by multiple threads, and access to that single resource still needs to be handled exclusively with a lock (mutual exclusion).
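+
+Below is a minimal sketch of the interrupt-to-thread synchronization pattern described above. It is an illustration only, not the actual FinSH/UART driver code: the function name uart_rx_isr(), the thread entry and the buffering steps are assumptions, and only the semaphore calls are taken from the interfaces introduced in this chapter.
+
+```c
+#include <rtthread.h>
+
+/* semaphore used to signal the processing thread from interrupt context */
+static struct rt_semaphore rx_sem;
+
+/* hypothetical hook called from the real UART RX interrupt handler */
+void uart_rx_isr(void)
+{
+    /* hardware-related work first: read the data, clear the interrupt source */
+    rt_sem_release(&rx_sem);            /* then wake up the processing thread */
+}
+
+/* entry of the data-processing thread */
+static void rx_thread_entry(void *parameter)
+{
+    while (1)
+    {
+        /* block here until the ISR signals that data is available */
+        rt_sem_take(&rx_sem, RT_WAITING_FOREVER);
+        /* fetch the buffered data and process it here */
+    }
+}
+
+/* during initialization: the semaphore starts at 0, so the thread waits first */
+/* rt_sem_init(&rx_sem, "rx", 0, RT_IPC_FLAG_FIFO); */
+```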
+ +Mutex +------ + +Mutexes, also known as mutually exclusive semaphores, are a special binary semaphore. Mutex is similar to a parking lot with only one parking space: when one car enters, the parking lot gate is locked and other vehicles are waiting outside. When the car inside comes out, parking lot gate will open and the next car can enter. + +### Mutex Working Mechanism + +The difference between a mutex and a semaphore is that the thread with a mutex has ownership of the mutex, mutex supports recursive access and prevents thread priority from reversing; and mutex can only be released by the thread holding it, whereas semaphore can be released by any thread. + +There are only two states of mutex, unlocked and locked (two state values). When a thread holds it, then the mutex is locked and its ownership is obtained by this thread. Conversely, when this thread releases it, it unlocks the mutex and loses its ownership. When a thread is holding a mutex, other threads will not be able to unlock this mutex or hold it. The thread holding the mutex can also acquire the lock again without being suspended, as shown in the following figure. This feature is quite different from the general binary semaphore: in semaphore, because there is no instance, the thread will suspend if the thread recursively holds the semaphore (which eventually leads to a deadlock). + +![Mutex Working Mechanism Diagram](figures/06mutex_work.png) + +Another potential problem with using semaphores is thread priority inversion. The so-called priority inversion is when a high-priority thread attempts to access shared resource through the semaphore mechanism, if the semaphore is already held by a low-priority thread which may happen to be preempted by other medium-priority threads while running, this leads to high-priority threads being blocked by many lower-priority threads which means instantaneity is difficult to guarantee. As shown in the following figure: There are three threads with the priority levels A, B, and C, priority A> B > C. Threads A and B are in suspended state, waiting for an event to trigger; thread C is running, and thread C starts using a shared resource M. While using the resource, the event thread A is waiting for occurs and thread A switches to ready state because it has higher priority than thread C, so it executes immediately. But when thread A wants to use shared resource M, because it is being used by thread C, thread A is suspended and thread C is running. If the event thread B is waiting for occurs, thread B switches to ready state. Since thread B has a higher priority than thread C, thread B starts running, thread C won't run until thread B finishes. Thread A is only executed when thread C releases the shared resource M. In this case, the priority has been reversed: thread B runs before thread A. This does not guarantee the response time for high priority threads. + +![Priority Inversion (M is the semaphore)](figures/06priority_inversion.png) + +In the RT-Thread operating system, mutex can solve the priority inversion problem and implement the priority inheritance algorithm. Priority inheritance solves the problems caused by priority inversion by raising the priority of thread C to the priority of thread A during the period during when thread A is suspended trying to access the shared resource. This prevents C (indirectly preventing A) from being preempted by B, as shown in the following figure. 
Priority inheritance refers to raising the priority of the low-priority thread that occupies a resource, making its priority equal to the priority of the highest-priority thread among all the threads waiting for that resource, and then letting it execute. When this low-priority thread releases the resource, its priority returns to the initial setting. Therefore, priority inheritance helps prevent the system resource (and, indirectly, the high-priority thread waiting for it) from being held off by any intermediate-priority thread.
+
+![Priority Inheritance (M is a mutex)](figures/06priority_inherit.png)
+
+>After the mutex is obtained, release it as soon as possible. While holding the mutex, you must not change the priority of the thread holding it.
+
+### Mutex Control Block
+
+In RT-Thread, the mutex control block is a data structure used by the operating system to manage mutexes, represented by struct rt_mutex. Another C expression, rt_mutex_t, represents the handle of the mutex; its implementation in C language is a pointer to the mutex control block. See the following code for the detailed definition of the mutex control block structure:
+
+```c
+struct rt_mutex
+{
+    struct rt_ipc_object parent;             /* inherited from the ipc_object class */
+
+    rt_uint16_t          value;              /* mutex value */
+    rt_uint8_t           original_priority;  /* original priority of the holding thread */
+    rt_uint8_t           hold;               /* number of times the holding thread holds the mutex */
+    struct rt_thread    *owner;              /* thread that currently owns the mutex */
+};
+/* rt_mutex_t is the type of pointer pointing to the mutex structure */
+typedef struct rt_mutex* rt_mutex_t;
+```
+
+The rt_mutex object is derived from rt_ipc_object and is managed by the IPC container.
+
+### Mutex Management
+
+The mutex control block contains important parameters related to the mutex and plays an important role in the implementation of the mutex function. The mutex-related interfaces are shown in the following figure. The operations on a mutex include: creating/initializing a mutex, obtaining a mutex, releasing a mutex, and deleting/detaching a mutex.
+
+![Mutex Related Interface](figures/06mutex_ops.png)
+
+#### Create and Delete Mutex
+
+When creating a mutex, the kernel first creates a mutex control block and then completes the initialization of the control block. Create a mutex using the following function interface:
+
+```c
+rt_mutex_t rt_mutex_create (const char* name, rt_uint8_t flag);
+```
+
+You can call the rt_mutex_create function to create a mutex whose name is designated by name. When this function is called, the system will first allocate a mutex object from the object manager, initialize the object, and then initialize the parent class IPC object and the mutex-related part. If the flag of the mutex is set to RT_IPC_FLAG_PRIO, then when multiple threads are waiting for the resource, the thread with the higher priority will access the resource first; if the flag is set to RT_IPC_FLAG_FIFO, the resource is accessed in a first-come-first-served order.
The following table describes the input parameters and return values for this function: + +Input parameters and return values of rt_mutex_create() + +|Parameters |**Description** | +|------------|-------------------------------------------------------------------| +| name | Mutex name | +| flag | Mutex flag, which can take the following values: RT_IPC_FLAG_FIFO or RT_IPC_FLAG_PRIO | +|**Return** | —— | +| Mutex handle | Created successfully | +| RT_NULL | Creation failed | + +For dynamically created mutex, when the mutex is no longer used, the system resource is released by removing the mutex. To remove a mutex, use the following function interface: + +```c +rt_err_t rt_mutex_delete (rt_mutex_t mutex); +``` + +When a mutex is deleted, all threads waiting for this mutex will be woken up, return value for the waiting threads is - RT_ERROR. The system then removes the mutex from the kernel object manager linked list and releases the memory space occupied by the mutex. The following table describes the input parameters and return values for this function: + +Input parameters and return values of rt_mutex_delete() + +|Parameters|Description | +|----------|------------------| +| mutex | The handle of the mutex object | +|**Return**| —— | +| RT_EOK | Deleted successfully | + +#### Initialize and Detach Mutex + +The memory of a static mutex object is allocated by the compiler during system compilation, and is usually placed in a read-write data segment or an uninitialized data segment. Before using such static mutex objects, you need to initialize them first. To initialize the mutex, use the following function interface: + +```c +rt_err_t rt_mutex_init (rt_mutex_t mutex, const char* name, rt_uint8_t flag); +``` + +When using this function interface, you need to specify the handle of the mutex object (that is, the pointer to the mutex control block), the mutex name, and the mutex flag. The mutex flag can be the flags mentioned in the creation of mutex function above. The following table describes the input parameters and return values for this function: + +Input parameters and return values of rt_mutex_init() + +|Parameters|Description | +|----------|-------------------------------------------------------------------| +| mutex | The handle of the mutex object, which is provided by the user and points to the memory block of the mutex object | +| name | Mutex name | +| flag | Mutex flag, which can take the following values: RT_IPC_FLAG_FIFO or RT_IPC_FLAG_PRIO | +|**Return**| —— | +| RT_EOK | Initialization successful | + +For statically initialized muex, detaching mutex means to remove the mutex object from the kernel object manager. To detach the mutex, use the following function interface: + +```c +rt_err_t rt_mutex_detach (rt_mutex_t mutex); +``` + +After using this function interface, the kernel wakes up all threads suspended on the mutex (the return value of the thread is -RT_ERROR), and then the system detaches the mutex from the kernel object manager. The following table describes the input parameters and return values for this function: + +Input parameters and return values for rt_mutex_detach() + +|Parameters|Description | +|----------|------------------| +| mutex | The handle of the mutex object | +|**Return**| —— | +| RT_EOK | Successful | + +#### Obtain Mutex + +Once the thread obtains the mutex, the thread has ownership of the mutex, that is, a mutex can only be held by one thread at a time. 
To obtain the mutex, use the following function interface: + +```c +rt_err_t rt_mutex_take (rt_mutex_t mutex, rt_int32_t time); +``` + +If the mutex is not controlled by another thread, the thread requesting the mutex will successfully obtain the mutex. If the mutex is already controlled by the current thread, then add one 1 to the number of holds for the mutex, and the current thread will not be suspended to wait. If the mutex is already occupied by another thread, the current thread suspends and waits on the mutex until another thread releases it or until the specified timeout elapses. The following table describes the input parameters and return values for this function: + +Input parameters and return values of rt_mutex_take() + +|**Parameters** |Description | +|---------------|------------------| +| mutex | The handle of the mutex object | +| time | Specified waiting time | +|**Return** | —— | +| RT_EOK | Successfully obtained mutex | +| \-RT_ETIMEOUT | Timeout | +| \-RT_ERROR | Failed to obtain | + +#### Release Mutex + +When a thread completes the access to a mutually exclusive resource, it should release the mutex it occupies as soon as possible, so that other threads can obtain the mutex in time. To release the mutex, use the following function interface: + +```c +rt_err_t rt_mutex_release(rt_mutex_t mutex); +``` + +When using this function interface, only threads that already have control of the mutex can release it. Each time the mutex is released, its holding count is reduced by one. When the mutex's holding count is zero (that is, the holding thread has released all holding operations), it becomes available, threads waiting on the semaphore is awaken. If the thread's priority is increased by the mutex, then when the mutex is released, the thread reverts to the priority level before holding the mutex. The following table describes the input parameters and return values for this function: + +Input parameters and return values of rt_mutex_release() + +|**Parameters**|**Description** | +|----------|------------------| +| mutex | The handle of the mutex object | +|**Return**| —— | +| RT_EOK | Success | + +### Mutex Application Sample + +This is a mutex application routine, and a mutex lock is a way to protect shared resources. When a thread has the mutex lock, it can protect shared resources from being destroyed by other threads. The following example can be used to illustrate. There are two threads: thread 1 and thread 2, thread 1 adds 1 to each of the two numbers; thread 2 also adds 1 to each of the two numbers. mutex is used to ensure that the operation of the thread changing the values of the 2 numbers is not interrupted. As shown in the following code: + +Mutex routine + +```c +#include + +#define THREAD_PRIORITY 8 +#define THREAD_TIMESLICE 5 + +/* Pointer to the mutex */ +static rt_mutex_t dynamic_mutex = RT_NULL; +static rt_uint8_t number1,number2 = 0; + +ALIGN(RT_ALIGN_SIZE) +static char thread1_stack[1024]; +static struct rt_thread thread1; +static void rt_thread_entry1(void *parameter) +{ + while(1) + { + /* After thread 1 obtains the mutex, it adds 1 to number1 and number2, and then releases the mutex. 
*/ + rt_mutex_take(dynamic_mutex, RT_WAITING_FOREVER); + number1++; + rt_thread_mdelay(10); + number2++; + rt_mutex_release(dynamic_mutex); + } +} + +ALIGN(RT_ALIGN_SIZE) +static char thread2_stack[1024]; +static struct rt_thread thread2; +static void rt_thread_entry2(void *parameter) +{ + while(1) + { + /* After thread 2 obtains the mutex, check whether the values of number1 and number2 are the same. If they are the same, it means the mutex succeesfully played the role of a lock. */ + rt_mutex_take(dynamic_mutex, RT_WAITING_FOREVER); + if(number1 != number2) + { + rt_kprintf("not protect.number1 = %d, mumber2 = %d \n",number1 ,number2); + } + else + { + rt_kprintf("mutex protect ,number1 = mumber2 is %d\n",number1); + } + + number1++; + number2++; + rt_mutex_release(dynamic_mutex); + + if(number1>=50) + return; + } +} + +/* Initialization of the mutex sample */ +int mutex_sample(void) +{ + /* Create a dynamic mutex */ + dynamic_mutex = rt_mutex_create("dmutex", RT_IPC_FLAG_FIFO); + if (dynamic_mutex == RT_NULL) + { + rt_kprintf("create dynamic mutex failed.\n"); + return -1; + } + + rt_thread_init(&thread1, + "thread1", + rt_thread_entry1, + RT_NULL, + &thread1_stack[0], + sizeof(thread1_stack), + THREAD_PRIORITY, THREAD_TIMESLICE); + rt_thread_startup(&thread1); + + rt_thread_init(&thread2, + "thread2", + rt_thread_entry2, + RT_NULL, + &thread2_stack[0], + sizeof(thread2_stack), + THREAD_PRIORITY-1, THREAD_TIMESLICE); + rt_thread_startup(&thread2); + return 0; +} + +/* Export to the MSH command list */ +MSH_CMD_EXPORT(mutex_sample, mutex sample); +``` + +Both thread 1 and thread 2 use mutexes to protect the operation on the 2 numbers (if the obtain and release mutex statements in thread 1 are commented out, thread 1 will no longer protect number), the simulation results are as follows : + +``` +\ | / +- RT - Thread Operating System + / | \ 3.1.0 build Aug 24 2018 + 2006 - 2018 Copyright by rt-thread team +msh >mutex_sample +msh >mutex protect ,number1 = mumber2 is 1 +mutex protect ,number1 = mumber2 is 2 +mutex protect ,number1 = mumber2 is 3 +mutex protect ,number1 = mumber2 is 4 +… +mutex protect ,number1 = mumber2 is 48 +mutex protect ,number1 = mumber2 is 49 +``` + +Threads use mutexes to protect the operation on the two numbers, keeping the number values consistent. + +Another example of a mutex is shown in the following code. This example creates three dynamic threads to check if the priority level of the thread holding the mutex is adjusted to the highest priority level among the waiting threads. 
+ +Prevent priority inversion routine + +```c +#include + +/* Pointer to the thread control block */ +static rt_thread_t tid1 = RT_NULL; +static rt_thread_t tid2 = RT_NULL; +static rt_thread_t tid3 = RT_NULL; +static rt_mutex_t mutex = RT_NULL; + + +#define THREAD_PRIORITY 10 +#define THREAD_STACK_SIZE 512 +#define THREAD_TIMESLICE 5 + +/* Thread 1 Entry */ +static void thread1_entry(void *parameter) +{ + /* Let the low priority thread run first */ + rt_thread_mdelay(100); + + /* At this point, thread3 holds the mutex and thread2 is waiting to hold the mutex */ + + /* Check the priority level of thread2 and thread3 */ + if (tid2->current_priority != tid3->current_priority) + { + /* The priority is different, the test fails */ + rt_kprintf("the priority of thread2 is: %d\n", tid2->current_priority); + rt_kprintf("the priority of thread3 is: %d\n", tid3->current_priority); + rt_kprintf("test failed.\n"); + return; + } + else + { + rt_kprintf("the priority of thread2 is: %d\n", tid2->current_priority); + rt_kprintf("the priority of thread3 is: %d\n", tid3->current_priority); + rt_kprintf("test OK.\n"); + } +} + +/* Thread 2 Entry */ +static void thread2_entry(void *parameter) +{ + rt_err_t result; + + rt_kprintf("the priority of thread2 is: %d\n", tid2->current_priority); + + /* Let the low-priority thread run first */ + rt_thread_mdelay(50); + + /* + * Trying to hold a mutex lock. At this point, thread 3 has the mutex lock, so the priority level of thread 3 should be raised + * to the same level of priority as thread 2 + */ + result = rt_mutex_take(mutex, RT_WAITING_FOREVER); + + if (result == RT_EOK) + { + /* Release mutex lock */ + rt_mutex_release(mutex); + } +} + +/* Thread 3 Entry */ +static void thread3_entry(void *parameter) +{ + rt_tick_t tick; + rt_err_t result; + + rt_kprintf("the priority of thread3 is: %d\n", tid3->current_priority); + + result = rt_mutex_take(mutex, RT_WAITING_FOREVER); + if (result != RT_EOK) + { + rt_kprintf("thread3 take a mutex, failed.\n"); + } + + /* Operate a long cycle, 500ms */ + tick = rt_tick_get(); + while (rt_tick_get() - tick < (RT_TICK_PER_SECOND / 2)) ; + + rt_mutex_release(mutex); +} + +int pri_inversion(void) +{ + /* Created a mutex lock */ + mutex = rt_mutex_create("mutex", RT_IPC_FLAG_FIFO); + if (mutex == RT_NULL) + { + rt_kprintf("create dynamic mutex failed.\n"); + return -1; + } + + /* Create thread 1*/ + tid1 = rt_thread_create("thread1", + thread1_entry, + RT_NULL, + THREAD_STACK_SIZE, + THREAD_PRIORITY - 1, THREAD_TIMESLICE); + if (tid1 != RT_NULL) + rt_thread_startup(tid1); + + /* Create thread 2 */ + tid2 = rt_thread_create("thread2", + thread2_entry, + RT_NULL, + THREAD_STACK_SIZE, + THREAD_PRIORITY, THREAD_TIMESLICE); + if (tid2 != RT_NULL) + rt_thread_startup(tid2); + + /* Create thread 3 */ + tid3 = rt_thread_create("thread3", + thread3_entry, + RT_NULL, + THREAD_STACK_SIZE, + THREAD_PRIORITY + 1, THREAD_TIMESLICE); + if (tid3 != RT_NULL) + rt_thread_startup(tid3); + + return 0; +} + +/* Export to the msh command list */ +MSH_CMD_EXPORT(pri_inversion, prio_inversion sample); +``` + +The simulation results are as follows: + +``` +\ | / +- RT - Thread Operating System + / | \ 3.1.0 build Aug 27 2018 + 2006 - 2018 Copyright by rt-thread team +msh >pri_inversion +the priority of thread2 is: 10 +the priority of thread3 is: 11 +the priority of thread2 is: 10 +the priority of thread3 is: 10 +test OK. +``` + +The routine demonstrates how to use the mutex. 
Thread 3 holds the mutex first, and then thread 2 tries to hold the mutex, at which point thread 3's priority is raised to the same level as thread 2. + +>It is important to remember that mutexes cannot be used in interrupt service routines. + +### Occasions to Use Mutex + +The use of a mutex is relatively simple because it is a type of semaphore and it exists in the form of a lock. At the time of initialization, the mutex is always unlocked, and when it is held by the thread, it immediately becomes locked. Mutex is more suitable for: + +(1) When a thread holds a mutex multiple times. This can avoid the problem of deadlock caused by multiple recursive holdings of the same thread. + +(2) A situation in which priority inversion may occur due to multi-thread synchronization. + +Event +------ + +Event set is also one of the mechanisms for synchronization between threads. An event set can contain multiple events. Event set can be used to complete one-to-many, many-to-many thread synchronization. Let's take taking bus as an example to illustrate event. There may be the following situations when waiting for a bus at a bus stop: + +①P1 is taking a bus to a certain place, only one type of bus can reach the destination. P1 can leave for the destination once that bus arrives. + +②P1 is taking a bus to a certain place, 3 types of buses can reach the destination. P1 can leave for the destination once any 1 of the 3 types of bus arrives. + +③P1 is traveling with P2 to a certain place together, P1 can't leave for the destination unless two conditions are met. These two conditions are “P2 arrives at the bus stop” and “bus arrives at the bus stop”. + +Here, P1 leaving for a certain place can be regarded as a thread, and “bus arrives at the bus stop” and “P2 arrives at the bus stop” are regarded as the occurrence of events. Situation ① is a specific event to wakes up the thread; situation ② is any single event to wake up the thread; situation ③ is when multiple events must occur simultaneously to wake up the thread. + +### Event Set Working Mechanism + +The event set is mainly used for synchronization between threads. Unlike the semaphore, it can achieve one-to-many, many-to-many synchronization. That is, the relationship between a thread and multiple events can be set as follows: any one of the events wakes up the thread, or several events wake up the thread for subsequent processing; likewise, the event can be multiple threads to synchronize multiple events. This collection of multiple events can be represented by a 32-bit unsigned integer variable, each bit of the variable representing an event, and the thread associates one or more events by "logical AND" or "logical OR" to form event combination. The "logical OR" of an event is also called independent synchronization, which means that the thread is synchronized with one of the events; the event "logical AND" is also called associative synchronization, which means that the thread is synchronized with several events. + +The event set defined by RT-Thread has the following characteristics: + +1) Events are related to threads only, and events are independent of each other: each thread can have 32 event flags, recorded with a 32-bit unsigned integer, each bit representing an event; + +2) Event is only used for synchronization and does not provide data transfer functionality; + +3) Events are not queuing, that is, sending the same event to the thread multiple times (if the thread has not had time read it), the effect is equivalent to sending only once. 
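+
+As a small illustration of the bit representation described above, the sketch below shows how an event set can be viewed in code: each event is one bit of a 32-bit value, and a thread's interest set is a bitwise combination of those bits. The event names, bit positions and the helper function are assumptions made for this sketch (they are not part of the RT-Thread API); the RT_EVENT_FLAG_AND / RT_EVENT_FLAG_OR receiving options it refers to are introduced in the following paragraph.
+
+```c
+#include <rtthread.h>
+
+/* illustrative event bits: positions are chosen arbitrarily for this sketch */
+#define EVENT_RX_DONE   (1 << 0)    /* event 0: data received      */
+#define EVENT_TX_DONE   (1 << 1)    /* event 1: data sent          */
+#define EVENT_TIMEOUT   (1 << 7)    /* event 7: a periodic timeout */
+
+/* conceptual check (not the kernel's actual code) of whether the pending bits
+ * of an event set satisfy a thread's interest set under the AND/OR options */
+static int event_condition_met(rt_uint32_t pending, rt_uint32_t interest, rt_uint8_t option)
+{
+    if (option & RT_EVENT_FLAG_AND)
+        return (pending & interest) == interest;  /* all interested bits must be set  */
+    else /* RT_EVENT_FLAG_OR */
+        return (pending & interest) != 0;         /* any one interested bit is enough */
+}
+```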
+ +In RT-Thread, each thread has an event information tag with three attributes. They are RT_EVENT_FLAG_AND (logical AND), RT_EVENT_FLAG_OR (logical OR), and RT_EVENT_FLAG_CLEAR (clear flag). When the thread waits for event synchronization, it can determine whether the currently received event satisfies the synchronization condition by 32 event flags and this event information flag. + +![Event Set Work Diagram](figures/06event_work.png) + +As shown in the figure above, the first and 30th bits of the event flag of thread #1 are set. If the event information flag is set to logical AND, it means that thread #1 will be triggered to wake up only after both event 1 and event 30 occur. If the event information flag is set to logical OR, the occurrence of either event 1 or event 30 will trigger to wake up thread #1. If the message flag also sets the clear flag bit, this means event 1 and event 30 will be automatically cleared to zero when thread #1 wakes up, otherwise the event flag will still be present (set to 1). + +### Event Set Control Block + +In RT-Thread, event set control block is a data structure used by the operating system to manage events, represented by the structure struct rt_event. Another C expression, rt_event_t, represents the handle of the event set, and the implementation in C language is a pointer to the event set control block. See the following code for a detailed definition of the event set control block structure: + +```c +struct rt_event +{ + struct rt_ipc_object parent; /* Inherited from the ipc_object class */ + + /* The set of events, each bit represents 1 event, the value of the bit can mark whether an event occurs */ + rt_uint32_t set; +}; +/* rt_event_t is the pointer type poniting to the event structure */ +typedef struct rt_event* rt_event_t; +``` + +rt_event object is derived from rt_ipc_object and is managed by the IPC container. + +### Management of Event Sets + +Event set control block contains important parameters related to the event set and plays an important role in the implementation of the event set function. The event set related interfaces are as shown in the following figure. The operations on an event set include: create/initiate event sets, send events, receive events, and delete/detach event sets. + +![Event Related Interface](figures/06event_ops.png) + +#### Create and Delete Event Set + +When creating an event set, the kernel first creates an event set control block, and then performs basic initialization on the event set control block. The event set is created using the following function interface: + +```c +rt_event_t rt_event_create(const char* name, rt_uint8_t flag); +``` + +When the function interface is called, the system allocates the event set object from the object manager, initializes the object, and then initializes the parent class IPC object. The following table describes the input parameters and return values for this function: + + Input parameters and return values for rt_event_create() + +|**Parameters** |**Description** | +|----------------|---------------------------------------------------------------------| +| name | Name of the event set. | +| flag | The flag of the event set, which can take the following values: RT_IPC_FLAG_FIFO or RT_IPC_FLAG_PRIO | +|**Return** | —— | +| RT_NULL | Creation failed. | +| Handle of the event object | Creation successful | + +When the system no longer uses the event set object created by rt_event_create(), the system resource is released by deleting the event set object control block. 
To delete an event set, you can use the following function interface:
+
+```c
+rt_err_t rt_event_delete(rt_event_t event);
+```
+
+When you call the rt_event_delete() function to delete an event set object, you should ensure that the event set is no longer in use. All threads suspended on the event set will be woken up before the deletion (the return value of those threads is -RT_ERROR), and then the memory block occupied by the event set object is released. The following table describes the input parameters and return values for this function:
+
+Input parameters and return values of rt_event_delete()
+
+|**Parameters**|**Description** |
+|----------|------------------|
+| event | The handle of the event set object |
+|**Return**| —— |
+| RT_EOK | Success |
+
+#### Initialize and Detach Event Set
+
+The memory of a static event set object is allocated by the compiler during system compilation and is usually placed in a read-write data segment or an uninitialized data segment. Before using a static event set object, you need to initialize it first. To initialize the event set, use the following function interface:
+
+```c
+rt_err_t rt_event_init(rt_event_t event, const char* name, rt_uint8_t flag);
+```
+
+When this interface is called, you need to specify the handle of the static event set object (that is, the pointer pointing to the event set control block); the system will then initialize the event set object and add it to the system object container for management. The following table describes the input parameters and return values for this function:
+
+Input parameters and return values of rt_event_init()
+
+|**Parameters**|**Description** |
+|----------|---------------------------------------------------------------------|
+| event | The handle of the event set object |
+| name | The name of the event set |
+| flag | The flag of the event set, which can take the following values: RT_IPC_FLAG_FIFO or RT_IPC_FLAG_PRIO |
+|**Return**| —— |
+| RT_EOK | Success |
+
+When the system no longer uses the event set object initialized by rt_event_init(), the system resources are released by detaching the event set object control block. Detaching an event set means detaching the event set object from the kernel object manager. To detach an event set, use the following function interface:
+
+```c
+rt_err_t rt_event_detach(rt_event_t event);
+```
+
+When the user calls this function, the system first wakes up all the threads suspended on the waiting queue of the event set (the return value of those threads is -RT_ERROR), and then detaches the event set from the kernel object manager. The following table describes the input parameters and return values for this function:
+
+Input parameters and return values for rt_event_detach()
+
+|**Parameters**|**Description** |
+|----------|------------------|
+| event | The handle of the event set object |
+|**Return**| —— |
+| RT_EOK | Success |
+
+#### Send Event
+
+The send event function can send one or more events in the event set, as follows:
+
+```c
+rt_err_t rt_event_send(rt_event_t event, rt_uint32_t set);
+```
+
+When this function interface is used, the event flag value of the event set object is set according to the event flags specified by the parameter set; the system then traverses the list of threads waiting on the event set object to determine whether there is a thread whose event activation requirement matches the current event flag value. If there is, that thread is woken up.
The following table describes the input parameters and return values for this function:
+
+Input parameters and return values of rt_event_send()
+
+|**Parameters**|**Description** |
+|----------|------------------------------|
+| event | The handle of the event set object |
+| set | The flag value of the event(s) to send (one or more) |
+|**Return**| —— |
+| RT_EOK | Success |
+
+#### Receive Event
+
+The kernel uses a 32-bit unsigned integer to identify the event set; each bit represents one event, so an event set object can wait on up to 32 events at the same time, and the kernel decides how to activate the thread according to the "logical AND" or "logical OR" option. Using the "logical AND" option means that the thread is activated only when all the waited-for events occur, while using the "logical OR" option means that the thread is activated as soon as any one of the waited-for events occurs. To receive events, use the following function interface:
+
+```c
+rt_err_t rt_event_recv(rt_event_t event,
+                       rt_uint32_t set,
+                       rt_uint8_t option,
+                       rt_int32_t timeout,
+                       rt_uint32_t* recved);
+```
+
+When the thread calls this function, the system first checks, according to the set parameter and the receiving option, whether the events it wants to receive have occurred. If they have already occurred, the function decides, according to whether RT_EVENT_FLAG_CLEAR is set in the option parameter, whether to clear the corresponding event flags, and then returns (the recved parameter returns the received events); if they have not occurred, the waiting set and option parameters are filled into the thread's own structure, and the thread is suspended on this event set until the events it is waiting for satisfy the condition or the specified timeout elapses. If the timeout is set to zero, it means that when the events the thread wants to receive do not meet its requirements, it does not wait but returns -RT_ETIMEOUT directly. The following table describes the input parameters and return values for this function:
+
+Input parameters and return values of rt_event_recv()
+
+|**Parameters** |**Description** |
+|---------------|----------------------|
+| event | The handle of the event set object |
+| set | The events the thread is interested in receiving |
+| option | Receiving option |
+| timeout | Timeout |
+| recved | Pointer used to return the received events |
+|**Return** | —— |
+| RT_EOK | Successful |
+| \-RT_ETIMEOUT | Timeout |
+| \-RT_ERROR | Error |
+
+The value of option can be:
+
+```c
+/* Select AND or OR to receive events */
+RT_EVENT_FLAG_OR
+RT_EVENT_FLAG_AND
+
+/* Choose to clear the event flags after receiving */
+RT_EVENT_FLAG_CLEAR
+```
+
+### Event Set Application Sample
+
+This is an application routine for the event set: it initializes one event set and two threads.
One thread waits for an event of interest to it, and another thread sends an event, as shown in code listing 6-5: + +Event set usage routine + +```c +#include + +#define THREAD_PRIORITY 9 +#define THREAD_TIMESLICE 5 + +#define EVENT_FLAG3 (1 << 3) +#define EVENT_FLAG5 (1 << 5) + +/* Event control block */ +static struct rt_event event; + +ALIGN(RT_ALIGN_SIZE) +static char thread1_stack[1024]; +static struct rt_thread thread1; + +/* Thread 1 entry function*/ +static void thread1_recv_event(void *param) +{ + rt_uint32_t e; + + /* The first time the event is received, either event 3 or event 5 can trigger thread 1, clearing the event flag after receiving */ + if (rt_event_recv(&event, (EVENT_FLAG3 | EVENT_FLAG5), + RT_EVENT_FLAG_OR | RT_EVENT_FLAG_CLEAR, + RT_WAITING_FOREVER, &e) == RT_EOK) + { + rt_kprintf("thread1: OR recv event 0x%x\n", e); + } + + rt_kprintf("thread1: delay 1s to prepare the second event\n"); + rt_thread_mdelay(1000); + + /* The second time the event is received, both event 3 and event 5 can trigger thread 1, clearing the event flag after receiving */ + if (rt_event_recv(&event, (EVENT_FLAG3 | EVENT_FLAG5), + RT_EVENT_FLAG_AND | RT_EVENT_FLAG_CLEAR, + RT_WAITING_FOREVER, &e) == RT_EOK) + { + rt_kprintf("thread1: AND recv event 0x%x\n", e); + } + rt_kprintf("thread1 leave.\n"); +} + + +ALIGN(RT_ALIGN_SIZE) +static char thread2_stack[1024]; +static struct rt_thread thread2; + +/* Thread 2 Entry */ +static void thread2_send_event(void *param) +{ + rt_kprintf("thread2: send event3\n"); + rt_event_send(&event, EVENT_FLAG3); + rt_thread_mdelay(200); + + rt_kprintf("thread2: send event5\n"); + rt_event_send(&event, EVENT_FLAG5); + rt_thread_mdelay(200); + + rt_kprintf("thread2: send event3\n"); + rt_event_send(&event, EVENT_FLAG3); + rt_kprintf("thread2 leave.\n"); +} + +int event_sample(void) +{ + rt_err_t result; + + /* Initialize event object */ + result = rt_event_init(&event, "event", RT_IPC_FLAG_FIFO); + if (result != RT_EOK) + { + rt_kprintf("init event failed.\n"); + return -1; + } + + rt_thread_init(&thread1, + "thread1", + thread1_recv_event, + RT_NULL, + &thread1_stack[0], + sizeof(thread1_stack), + THREAD_PRIORITY - 1, THREAD_TIMESLICE); + rt_thread_startup(&thread1); + + rt_thread_init(&thread2, + "thread2", + thread2_send_event, + RT_NULL, + &thread2_stack[0], + sizeof(thread2_stack), + THREAD_PRIORITY, THREAD_TIMESLICE); + rt_thread_startup(&thread2); + + return 0; +} + +/* Export to the msh command list */ +MSH_CMD_EXPORT(event_sample, event sample); +``` + +The simulation results are as follows: + +```c + \ | / +- RT - Thread Operating System + / | \ 3.1.0 build Aug 24 2018 + 2006 - 2018 Copyright by rt-thread team +msh >event_sample +thread2: send event3 +thread1: OR recv event 0x8 +thread1: delay 1s to prepare the second event +msh >thread2: send event5 +thread2: send event3 +thread2 leave. +thread1: AND recv event 0x28 +thread1 leave. +``` + +The routine demonstrates how to use the event set. Thread 1 receives events twice before and after, using the "logical OR" and "logical AND" respectively. + +### Occasions to Use Event Set + +Event sets can be used in a variety of situations, and it can replace semaphores to some extent for inter-thread synchronization. A thread or interrupt service routine sends an event to the event set object, and the waiting thread is awaken and the corresponding event is processed. 
However, unlike a semaphore, the event send operation is not cumulative: before the event is cleared, sending the same event several times has the same effect as sending it once, whereas the release operations of a semaphore are cumulative. Another feature of the event set is that the receiving thread can wait for multiple events, meaning that multiple events can correspond to one thread or to multiple threads. At the same time, according to the thread's waiting parameters, either a "logical OR" trigger or a "logical AND" trigger can be chosen. Semaphores do not offer this feature: a semaphore can only recognize a single release action and cannot wait for several kinds of release at the same time. The following figure shows the multi-event receiving diagram:
+
+![Multi-event Receiving Schematic](figures/06event_use.png)
+
+An event set contains 32 events, and a particular thread waits for and receives only the events it is interested in. A single thread may wait for the arrival of multiple events (in the figure, threads 1 and 2 each wait for multiple events, and either logical "AND" or logical "OR" can be used to trigger the thread), or multiple threads may wait for the arrival of the same event (event 25). When events they are interested in occur, the threads are awakened and the subsequent processing is carried out.
+
+
diff --git a/documentation/thread/figures/04Object_container.png b/documentation/thread/figures/04Object_container.png
new file mode 100644
index 0000000000..d79943ae6e
Binary files /dev/null and b/documentation/thread/figures/04Object_container.png differ
diff --git a/documentation/thread/figures/04Task_switching.png b/documentation/thread/figures/04Task_switching.png
new file mode 100644
index 0000000000..4ec4d438ad
Binary files /dev/null and b/documentation/thread/figures/04Task_switching.png differ
diff --git a/documentation/thread/figures/04main_thread.png b/documentation/thread/figures/04main_thread.png
new file mode 100644
index 0000000000..7e86b22aa6
Binary files /dev/null and b/documentation/thread/figures/04main_thread.png differ
diff --git a/documentation/thread/figures/04thread_ops.png b/documentation/thread/figures/04thread_ops.png
new file mode 100644
index 0000000000..29395f1613
Binary files /dev/null and b/documentation/thread/figures/04thread_ops.png differ
diff --git a/documentation/thread/figures/04thread_sta.png b/documentation/thread/figures/04thread_sta.png
new file mode 100644
index 0000000000..3157b70dfc
Binary files /dev/null and b/documentation/thread/figures/04thread_sta.png differ
diff --git a/documentation/thread/figures/04thread_stack.png b/documentation/thread/figures/04thread_stack.png
new file mode 100644
index 0000000000..9e12945c52
Binary files /dev/null and b/documentation/thread/figures/04thread_stack.png differ
diff --git a/documentation/thread/figures/04time_slience.png b/documentation/thread/figures/04time_slience.png
new file mode 100644
index 0000000000..6cd9c4575a
Binary files /dev/null and b/documentation/thread/figures/04time_slience.png differ
diff --git a/documentation/thread/thread.md b/documentation/thread/thread.md
new file mode 100644
index 0000000000..315ca73465
--- /dev/null
+++ b/documentation/thread/thread.md
@@ -0,0 +1,773 @@
+Thread Management
+========================
+
+When we are facing a big task in our daily life, we usually break it down into a number of simple, easy-to-manage smaller tasks. Then we deal with these smaller tasks one by one, and gradually the big task is worked out.
In a multi-threaded operating system, developers likewise need to break a complex application down into multiple small, schedulable, serialized program units. When the tasks are reasonably divided and properly executed, this design allows the system to meet the capacity and timing requirements of a real-time system. For example, suppose an embedded system needs to collect data through a sensor and display that data on a screen. In a multi-threaded real-time system, this task can be decomposed into two subtasks. As shown in the following figure, one subtask continuously reads the sensor data and writes it into shared memory; the other subtask periodically reads the data from shared memory and outputs it onto the screen.
+
+![Switching Execution of Sensor's Data Receiving Task and Display Task](figures/04Task_switching.png)
+
+In RT-Thread, the application entity corresponding to such a subtask is the thread. A thread is the carrier of a task and the most basic scheduling unit in RT-Thread. It describes the running environment of a task execution. It also describes the priority of the task: an important task can be given a relatively high priority, a less important task a lower priority, and different tasks can also be given the same priority and take turns running.
+
+When a thread runs, it behaves as if it had exclusive use of the CPU. The runtime environment in which the thread executes is called its context, specifically the variables and data it uses, including all register values, the stack, memory information, and so on.
+
+This chapter is divided into five sections to introduce thread management in RT-Thread. After reading this chapter, readers will have a deeper understanding of the RT-Thread thread management mechanism and clear answers to questions such as what states a thread has, how to create a thread, and why idle threads exist.
+
+Thread Management Features
+------------------
+
+The main function of RT-Thread thread management is to manage and schedule threads. There are two types of threads in the system, namely system threads and user threads. System threads are threads created by the RT-Thread kernel; user threads are threads created by the application. Both types of thread allocate thread objects from the kernel object container, and when a thread is deleted it is also removed from the object container. As shown in the following figure, each thread has important attributes, such as a thread control block, a thread stack, an entry function, and so on.
+
+![Object Container and Thread Object ](figures/04Object_container.png)
+
+The thread scheduler of RT-Thread is preemptive, and its main job is to find the highest-priority thread in the list of ready threads so as to ensure that the highest-priority thread can run. Once the highest-priority task is ready, it can always obtain the right to use the CPU.
+
+When a running thread makes another thread with a higher priority ready to run, the current thread loses the right to use the CPU, in other words releases it, and the higher-priority thread immediately obtains it.
+
+If it is an interrupt service routine that makes a higher-priority thread ready to run, then when the interrupt completes, the interrupted thread is suspended and the higher-priority thread starts running.
+
+When the scheduler switches threads, the current thread's context is saved first.
When the scheduler switches back to this thread, it restores the thread's context information.
+
+Working Mechanism of Thread
+--------------
+
+### Thread Control Block
+
+In RT-Thread, the thread control block is represented by the structure struct rt_thread, which is the data structure used by the operating system to manage threads. It stores information about the thread, such as its priority, name, and state. It also includes the linked-list structures used to connect threads, the event set the thread is waiting for, and so on. It is defined as follows:
+
+```c
+/* Thread Control Block */
+struct rt_thread
+{
+    /* rt Object */
+    char        name[RT_NAME_MAX];            /* Thread Name */
+    rt_uint8_t  type;                         /* Object Type */
+    rt_uint8_t  flags;                        /* Flag Position */
+
+    rt_list_t   list;                         /* Object List */
+    rt_list_t   tlist;                        /* Thread List */
+
+    /* Stack Pointer and Entry pointer */
+    void       *sp;                           /* Stack Pointer */
+    void       *entry;                        /* Entry Function Pointer */
+    void       *parameter;                    /* Parameter */
+    void       *stack_addr;                   /* Stack Address Pointer */
+    rt_uint32_t stack_size;                   /* Stack Size */
+
+    /* Error Code */
+    rt_err_t    error;                        /* Thread Error Code */
+    rt_uint8_t  stat;                         /* Thread State */
+
+    /* Priority */
+    rt_uint8_t  current_priority;             /* Current Priority */
+    rt_uint8_t  init_priority;                /* Initial Priority */
+    rt_uint32_t number_mask;
+
+    ......
+
+    rt_ubase_t  init_tick;                    /* Thread Initialization Count Value */
+    rt_ubase_t  remaining_tick;               /* Thread Remaining Count Value */
+
+    struct rt_timer thread_timer;             /* Built-in Thread Timer */
+
+    void (*cleanup)(struct rt_thread *tid);   /* Thread Exit Cleanup Function */
+    rt_uint32_t user_data;                    /* User Data */
+};
+```
+
+`init_priority` is the thread priority specified when the thread was created; it is not changed while the thread is running (unless the user calls the thread control function to adjust the priority manually). `cleanup` is called back by the idle thread when the thread exits, to perform user-defined cleanup work. The last member, `user_data`, lets the user attach some data to the thread control block, providing something similar to thread-private data.
+
+### Thread Important Attributes
+
+#### Thread Stack
+
+Each RT-Thread thread has an independent stack. When a thread is switched out, its context is stored on the stack; when the thread resumes, the context information is read back from the stack and restored.
+
+The thread stack is also used to store local variables in functions: local variables are allocated from the thread stack space. They are initially placed in registers (on the ARM architecture), and when the function calls another function these local variables are placed on the stack.
+
+Before a thread runs for the first time, its context can be constructed manually to set up the initial environment: the entry function (PC register), the entry parameter (R0 register), the return address (LR register), and the current machine status (CPSR register).
+
+The growth direction of the thread stack is closely related to the chip architecture. Versions of RT-Thread before 3.1.0 allow only stacks that grow from high addresses to low addresses. For the ARM Cortex-M architecture, the thread stack is constructed as shown below.
+ +![Thread Stack](figures/04thread_stack.png) + +When setting the size for thread stack, a larger thread stack can be designed for a MCU with a relatively large resource; or a larger stack can be set initially, for example, a size of 1K or 2K bytes, then in FinSH, use `list_thread` command to check the size of the stack used by the thread during the running of the thread. With this command, you can see the maximum stack depth used by the thread from the start of the thread to the current point in time, and then add the appropriate margin to form the final thread stack size, and finally modify the size of the stack space. + +#### Thread State + +When the thread is running, there is only one thread allowed to run in the processor at the same time. Divided from the running process, thread has various different operating states, such as initial state, suspended state, ready state, etc. In RT-Thread, thread has five states, and the operating system would automatically adjust its state based on its running condition. + +The five states of a thread in RT-Thread are shown in the following table: + +| **States** | **Description** | +| --------------- | ------------------------------------------------------------ | +| Initial State | Thread is in initial state when it has just been created but has not started running; in initial state, the thread does not participate in scheduling. In RT-Thread, the macro definition of this state is RT_THREAD_INIT | +| Ready State | In ready state, the thread is queued according to priority, waiting to be executed; the processor is available again once the current thread is finished, and the operating system will then immediately find the ready thread with the highest priority to run. In RT-Thread, the macro definition of this state is RT_THREAD_READY | +| Running State | Thread is currently running. In a single-core system, only the thread returned from the rt_thread_self() function is running; in a multi-core system, more than one thread may be running. In RT-Thread, the macro definition of this state is RT_THREAD_RUNNING | +| Suspended State | Also known as the blocking state. It may be suspended and paused because the resource is unavailable, or the thread is suspended because it is voluntarily delayed. In suspended state, threads do not participate in scheduling. In RT-Thread, the macro definition of this state is RT_THREAD_SUSPEND | +| Closed State | It will be turned to closed state when the thread finishes running. The thread in closed state does not participate in the thread's scheduling. In RT-Thread, the macro definition of this state is RT_THREAD_CLOSE | + +#### Thread Priority + +The priority of the RT-Thread thread indicates the thread's priority of being scheduled. Each thread has its priority. The more important the thread, the higher priority should be given, the bigger chance of being scheduled. + +RT-Thread supports a maximum of 256 thread priorities (0~255). The lower the number, the higher the priority and 0 is the highest priority. In some systems with tight resources, you can choose system configurations that only support 8 or 32 priorities according to the actual situation; for the ARM Cortex-M series, 32 priorities are commonly used. The lowest priority is assigned to idle threads by default and is not used by users. In the system, when a thread with a higher priority is ready, the current thread with the lower priority will be swapped out immediately, and the high-priority thread will preempt the processor. 
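+
+As a minimal illustration of the configuration mentioned above (a hypothetical rtconfig.h excerpt, not taken from any particular BSP), switching to a smaller number of priority levels only requires adjusting one macro:
+
+```c
+/* Hypothetical rtconfig.h excerpt: 32 priority levels, the common choice for
+ * ARM Cortex-M ports. Priority 0 is the highest; the lowest level (31 here)
+ * is reserved for the idle thread. */
+#define RT_THREAD_PRIORITY_MAX 32
+```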
+
+#### Time Slice
+
+Each thread has a time slice parameter, but the time slice is only meaningful for ready threads of the same priority. The system schedules ready threads of the same priority using a time-slice round-robin method; in this case the time slice constrains how long a single run of a thread may last, and its unit is one system tick (OS tick). Suppose there are two ready threads, A and B, with the same priority; thread A's time slice is set to 10 and thread B's time slice is set to 5. When there is no ready thread with a higher priority than A in the system, the system switches back and forth between threads A and B, each time running thread A for 10 OS ticks and thread B for 5 OS ticks, as shown below.
+
+![Same Priority Time Slice Round Robin](figures/04time_slience.png)
+
+#### Thread Entry Function
+
+The `entry` field in the thread control block is the thread's entry function, the function in which the thread does its intended work. The thread's entry function is designed by the user, and there are generally two forms of code:
+
+**Infinite Loop Mode**:
+
+In real-time systems, threads are usually passive. This is determined by the characteristics of real-time systems: a thread usually waits for external events to occur and then performs the appropriate service:
+
+```c
+void thread_entry(void* parameter)
+{
+    while (1)
+    {
+        /* waiting for an event to occur */
+
+        /* Serve and process events */
+    }
+}
+```
+
+It may seem that threads have no restrictions on program execution and that all operations can be performed. But in a real-time system with clear priorities, if a program in a thread gets stuck in an infinite loop, threads with lower priorities will never be executed. Therefore, one thing that must be kept in mind in a real-time operating system is that a thread cannot simply spin in an endless loop; there must be an action that relinquishes the CPU, such as calling a delay function in the loop or actively suspending. The purpose of designing an infinite-loop thread is to let the thread be scheduled and run by the system continuously and never be deleted.
+
+**Sequential Execution or Finite-Cycle Mode**:
+
+Examples are simple sequential statements and `do while()` or `for()` loops that do not loop forever. These threads can be described as "one-off" threads: they will surely finish executing, and after execution completes the thread is automatically deleted by the system.
+
+```c
+static void thread_entry(void* parameter)
+{
+    /* Processing Transaction #1 */
+    …
+    /* Processing Transaction #2 */
+    …
+    /* Processing Transaction #3 */
+}
+```
+
+#### Thread Error Code
+
+One thread is one execution scenario. Error codes are closely related to the execution environment, so each thread is equipped with a variable to store its error code.
The error code of the thread includes: + +```c +#define RT_EOK 0 /* No error */ +#define RT_ERROR 1 /* Regular error */ +#define RT_ETIMEOUT 2 /* Timeout error */ +#define RT_EFULL 3 /* Resource is full */ +#define RT_EEMPTY 4 /* No resource */ +#define RT_ENOMEM 5 /* No memory */ +#define RT_ENOSYS 6 /* System does not support */ +#define RT_EBUSY 7 /* System busy */ +#define RT_EIO 8 /* IO error */ +#define RT_EINTR 9 /* Interrupt system call */ +#define RT_EINVAL 10 /* Invalid Parameter */ +``` + +### Switching Thread State + +RT-Thread provides a set of operating system call interfaces that make the state of a thread to switch back and forth between these five states. The conversion relationship between these states is shown in the following figure: + +![Thread State Switching Diagram](figures/04thread_sta.png) + +Thread enters the initial state (RT_THREAD_INIT) by calling the function rt_thread_create/init(); thread in the initial state enters the ready state (RT_THREAD_READY) by calling the function rt_thread_startup(); thread in the ready state is scheduled by the scheduler and enters the running state (RT_THREAD_RUNNING). When a running thread calls a function such as rt_thread_delay(), rt_sem_take(), rt_mutex_take(), rt_mb_recv() or fails to get resources, it will enter the suspended state (RT_THREAD_SUSPEND); if threads in suspended state waited till timeout and still didn't acquire the resources, or other threads released the resources, it will return to the ready state. + +If thread in suspended state called function rt_thread_delete/detach(), it will switch to the closed state (RT_THREAD_CLOSE); as for thread in running state, if operation is completed, function rt_thread_exit() will be executed at the last part of the thread to change the state into closed state. + +>In RT-Thread, thread does not actually have a running state; the ready state and the running state are equivalent. + +### System thread + +As mentioned previously, system thread refers to thread created by the system and user thread is thread created by user program calling the thread management interface. System thread In RT-Thread kernel includes idle thread and main thread. + +#### Idle Thread + +An idle thread is the lowest priority thread created by the system, and its thread state is always ready. When no other ready thread exists in the system, the scheduler will schedule the idle thread, which is usually an infinite loop and can never be suspended. In addition, idle threads have special functions in RT-Thread: + +If a thread finishes running, the system will automatically delete the thread: automatically execute function rt_thread_exit() , first remove the thread from the system ready queue, then change the state of the thread to closed state which means it no longer participates in system scheduling, and then suspend it into the rt_thread_defunct queue (a thread queue that is not reclaimed and in a closed state). Lastly, idle thread reclaims the resources of the deleted thread. + +Idle thread also provides an interface to run the hook function set by the user. The hook function is called when idle thread is running, which is suitable for operations like hooking into power management, watchdog feeding, etc. + +#### Main Thread + +When the system starts, the system will create the main thread. Its entry function is main_thread_entry(). User's application entry function main() starts from here. After the system scheduler starts, the main thread starts running. The process is as follows. 
Users can add their own application initialization code to the main() function.
+
+![Main Thread Calling Process](figures/04main_thread.png)
+
+Thread Management
+--------------
+
+The first two sections of this chapter conceptually explain the function and working mechanism of threads, so by now threads should be familiar. This section delves into the various thread interfaces and gives some source code to help the reader understand threads.
+
+The following figure depicts thread-related operations, including creating/initializing a thread, starting a thread, running a thread, and deleting/detaching a thread. You can use rt_thread_create() to create a dynamic thread and rt_thread_init() to initialize a static thread. The difference between a dynamic thread and a static thread is that for a dynamic thread the system automatically allocates the stack space and thread handle from the dynamic memory heap (a dynamic thread can only be created after the heap has been initialized), while for a static thread it is the user who allocates the stack space and the thread handle.
+
+![Thread Related Operations](figures/04thread_ops.png)
+
+### Create and Delete Thread
+
+To become an executable object, a thread must be created by the kernel of the operating system. You can create a dynamic thread through the following interface:
+
+```c
+rt_thread_t rt_thread_create(const char* name,
+                             void (*entry)(void* parameter),
+                             void* parameter,
+                             rt_uint32_t stack_size,
+                             rt_uint8_t priority,
+                             rt_uint32_t tick);
+```
+
+When this function is called, the system allocates a thread handle from the dynamic heap and then allocates the corresponding stack space from the dynamic heap according to the stack size specified in the parameters. The allocated stack space is aligned according to RT_ALIGN_SIZE configured in rtconfig.h. The parameters and return values of rt_thread_create() are as follows:
+
+|Parameters |Description |
+|------------|----------------------------------------------------------------------------------------|
+| name | The name of the thread; the maximum length of the thread name is specified by the macro RT_NAME_MAX in rtconfig.h, and any extra part is automatically truncated. |
+| entry | Thread entry function. |
+| parameter | Thread entry function's parameter. |
+| stack_size | Thread stack size in bytes. |
+| priority | Priority of the thread. The priority range depends on the system configuration (macro definition RT_THREAD_PRIORITY_MAX in rtconfig.h). If 256 priority levels are supported, the range is from 0 to 255. The smaller the value, the higher the priority, and 0 is the highest priority. |
+| tick | The time slice size of the thread. The unit of the time slice (tick) is the operating system tick. When there are threads with the same priority in the system, this parameter specifies the maximum length of time the thread runs in one scheduling round. At the end of the time slice, the scheduler automatically selects the next ready thread of the same priority to run. |
+|**Return** | —— |
+| thread | Thread creation succeeds, the thread handle is returned. |
+| RT_NULL | Failed to create the thread. |
+
+For a thread created with rt_thread_create(), when it is no longer needed or when an error occurs, the following function interface can be used to remove the thread from the system completely:
+
+```c
+rt_err_t rt_thread_delete(rt_thread_t thread);
+```
+
+After calling this function, the thread object is removed from the thread list and deleted from the kernel object manager.
Consequently, the stack space occupied by the thread is also freed, and the reclaimed space is reused for other memory allocations. In fact, calling rt_thread_delete() only changes the corresponding thread's state to RT_THREAD_CLOSE and puts it into the rt_thread_defunct queue; the actual deletion (releasing the thread control block and the thread stack) is completed later by the idle thread when it runs. The parameters and return values of rt_thread_delete() are shown in the following table:
+
+|**Parameter** |**Description** |
+|------------|------------------|
+| thread | The handle of the thread to delete |
+|**Return** | —— |
+| RT_EOK | Thread deleted successfully. |
+| \-RT_ERROR | Failed to delete the thread. |
+
+This function is only valid when the system dynamic heap is enabled (that is, the RT_USING_HEAP macro is defined).
+
+### Initialize and Detach Thread
+
+A static thread object is initialized using the following function interface:
+
+```c
+rt_err_t rt_thread_init(struct rt_thread* thread,
+                        const char* name,
+                        void (*entry)(void* parameter), void* parameter,
+                        void* stack_start, rt_uint32_t stack_size,
+                        rt_uint8_t priority, rt_uint32_t tick);
+```
+
+The thread handle of a static thread (in other words, the thread control block pointer) and the thread stack are provided by the user. A static thread means that the thread control block and the thread stack are usually global variables, determined and allocated at compile time; the kernel does not dynamically allocate memory for them. Note that the stack start address provided by the user needs to be aligned as required by the system (for example, 4-byte alignment is required on ARM). The parameters and return values of rt_thread_init() are as follows:
+
+|**Parameter** |**Description** |
+|-----------------|---------------------------------------------------------------------------|
+| thread | Thread handle. The thread handle is provided by the user and points to the corresponding thread control block memory. |
+| name | Name of the thread; the maximum length of the thread name is specified by the RT_NAME_MAX macro defined in rtconfig.h, and any extra part is automatically truncated. |
+| entry | Thread entry function. |
+| parameter | Thread entry function parameter. |
+| stack_start | Thread stack start address. |
+| stack_size | Thread stack size in bytes. Stack space address alignment is required on most systems (for example, alignment to 4-byte addresses on the ARM architecture). |
+| priority | The priority of the thread. The priority range depends on the system configuration (macro definition RT_THREAD_PRIORITY_MAX in rtconfig.h). If 256 priority levels are supported, the range is from 0 to 255. The smaller the value, the higher the priority, and 0 is the highest priority. |
+| tick | The time slice size of the thread. The unit of the time slice (tick) is the operating system tick. When there are threads with the same priority in the system, this parameter specifies the maximum length of time the thread runs in one scheduling round. At the end of the time slice, the scheduler automatically selects the next ready thread of the same priority to run. |
+|**Return** | —— |
+| RT_EOK | Thread initialization succeeds. |
+| \-RT_ERROR | Failed to initialize the thread. |
+
+For a thread initialized with rt_thread_init(), calling rt_thread_detach() will detach the thread object from the thread queue and the kernel object manager. The detach function is as follows:
+
+```c
+rt_err_t rt_thread_detach (rt_thread_t thread);
+```
+
+The parameters and return values of rt_thread_detach() are as follows:
+
+|**Parameters** |**Description** |
+|------------|------------------------------------------------------------|
+| thread | Thread handle, which should be a thread handle initialized by rt_thread_init(). |
+|**Return** | —— |
+| RT_EOK | Thread detached successfully. |
+| \-RT_ERROR | Thread detachment failed. |
+
+This function interface corresponds to rt_thread_delete(): the object operated on by rt_thread_delete() is a handle created by rt_thread_create(), while the object operated on by rt_thread_detach() is a thread control block initialized with rt_thread_init(). Note that a thread should not call this interface to detach itself.
+
+### Start Thread
+
+A newly created (initialized) thread is in the initial state and has not entered the scheduling queue of ready threads. After the thread has been created/initialized successfully, call the following function interface to put it into the ready state:
+
+```c
+rt_err_t rt_thread_startup(rt_thread_t thread);
+```
+
+When this function is called, the state of the thread changes to the ready state and the thread is placed in the corresponding priority queue for scheduling. If the newly started thread has a higher priority than the current thread, the system immediately switches to the new thread. The parameters and return values of rt_thread_startup() are as follows:
+
+|**Parameter** |**Description** |
+|------------|--------------|
+| thread | Thread handle. |
+|**Return** | —— |
+| RT_EOK | Thread started successfully. |
+| \-RT_ERROR | Thread start failed. |
+
+### Obtaining Current Thread
+
+While the program is running, the same piece of code may be executed by multiple threads. The handle of the currently executing thread can be obtained through the following function interface:
+
+```c
+rt_thread_t rt_thread_self(void);
+```
+
+The return value of this interface is shown in the following table:
+
+|**Return**|**Description** |
+|----------|----------------------|
+| thread | The currently running thread handle. |
+| RT_NULL | Failed; the scheduler has not started yet. |
+
+### Making Thread Release Processor Resources
+
+When the current thread's time slice runs out, or the thread actively asks to release the processor, it no longer occupies the processor and the scheduler selects the next thread of the same priority to execute. After the thread calls this interface it remains in the ready queue. The thread gives up the processor using the following function interface:
+
+```c
+rt_err_t rt_thread_yield(void);
+```
+
+After calling this function, the current thread first removes itself from its ready-priority thread queue, then appends itself to the end of that priority queue, and then activates the scheduler for a thread context switch (if there is no other thread of the same priority, the thread continues to execute without a context switch).
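+
+As a minimal usage sketch (the worker thread below is hypothetical and not part of the original example code), a thread can call rt_thread_yield() after each unit of work so that other ready threads of the same priority get a chance to run:
+
+```c
+#include <rtthread.h>
+
+/* Hypothetical entry function: do one piece of work per pass, then give up
+ * the processor so other ready threads of the same priority can run. */
+static void worker_entry(void *parameter)
+{
+    while (1)
+    {
+        rt_kprintf("worker: one pass of work done\n");
+
+        /* Move this thread to the tail of its priority queue and reschedule */
+        rt_thread_yield();
+    }
+}
+```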
+
+The rt_thread_yield() function is similar to the rt_schedule() function, but when other ready threads of the same priority exist, the behavior of the two is completely different. After rt_thread_yield() is executed, the current thread is swapped out and the next ready thread of the same priority is executed. After rt_schedule() is executed, the current thread is not necessarily swapped out, and even if it is swapped out it is not placed at the end of the ready-thread list; instead, the thread with the highest priority in the system is selected and executed. (If there is no thread in the system with a higher priority than the current thread, the system continues to execute the current thread after rt_schedule() is executed.)
+
+### Thread Sleep
+
+In practical applications we sometimes need to delay the current thread for a period of time so that it resumes at a specified time. This is called "thread sleep". Thread sleep can use the following three function interfaces:
+
+```c
+rt_err_t rt_thread_sleep(rt_tick_t tick);
+rt_err_t rt_thread_delay(rt_tick_t tick);
+rt_err_t rt_thread_mdelay(rt_int32_t ms);
+```
+
+These three function interfaces have the same effect: calling them suspends the current thread for a specified period of time, after which the thread wakes up and enters the ready state again. Each function takes a parameter that specifies the sleep time of the thread. The parameters and return values of rt_thread_sleep/delay/mdelay() are as follows:
+
+|**Parameters**|Description |
+| -------- | ------------------------------------------------------------ |
+| tick/ms | Thread sleep time:<br>the input parameter tick of rt_thread_sleep/rt_thread_delay is in units of 1 OS tick;<br>the input parameter ms of rt_thread_mdelay is in units of 1 ms. |
+|**Return**| —— |
+| RT_EOK | Successful operation. |
+
+### Suspend and Resume Thread
+
+When a thread calls rt_thread_delay(), it voluntarily suspends itself; when a function such as rt_sem_take() or rt_mb_recv() is called and the resource is not available, the thread is also suspended. If a thread in the suspended state waits for a resource longer than the set timeout, it stops waiting for that resource and returns to the ready state; likewise, when another thread releases the resource the thread is waiting for, the thread returns to the ready state.
+
+A thread is suspended using the following function interface:
+
+```c
+rt_err_t rt_thread_suspend (rt_thread_t thread);
+```
+
+The parameters and return values of rt_thread_suspend() are shown in the following table:
+
+|**Parameters** |Description |
+|------------|----------------------------------------------|
+| thread | Thread handle. |
+|**Return** | —— |
+| RT_EOK | Thread suspended successfully. |
+| \-RT_ERROR | Thread suspension failed because the thread is not in the ready state. |
+
+>Generally, this function should not be used to suspend the thread itself. If you really need to suspend the current task with rt_thread_suspend(), rt_schedule() must be called immediately after rt_thread_suspend() to perform the context switch manually. Users only need to understand how this interface works; using it is not recommended.
+
+Resuming a thread means letting a suspended thread re-enter the ready state and placing it into the system's ready queue; if the resumed thread ranks first in priority, the system performs a context switch. A thread is resumed using the following function interface:
+
+```c
+rt_err_t rt_thread_resume (rt_thread_t thread);
+```
+
+The parameters and return values of rt_thread_resume() are as follows:
+
+|Parameter |**Description** |
+|------------|---------------------------------------------------------------|
+| thread | Thread handle. |
+|**Return** | —— |
+| RT_EOK | Thread resumed successfully. |
+| \-RT_ERROR | Thread resumption failed because the thread is not in the RT_THREAD_SUSPEND state. |
+
+### Control Thread
+
+When you need other kinds of control over a thread, for example dynamically changing its priority, you can call the following function interface:
+
+```c
+rt_err_t rt_thread_control(rt_thread_t thread, rt_uint8_t cmd, void* arg);
+```
+
+The parameters and return values of rt_thread_control() are as follows:
+
+|Function Parameters|**Description** |
+|--------------|--------------|
+| thread | Thread handle. |
+| cmd | Control command. |
+| arg | Control argument. |
+|**Return** | —— |
+| RT_EOK | Control executed correctly. |
+| \-RT_ERROR | Failure. |
+
+Commands supported by cmd include:
+
+- RT_THREAD_CTRL_CHANGE_PRIORITY: dynamically change the priority of the thread;
+
+- RT_THREAD_CTRL_STARTUP: start running the thread, equivalent to calling rt_thread_startup();
+
+- RT_THREAD_CTRL_CLOSE: close the thread, equivalent to calling rt_thread_delete().
+
+### Set and Delete Idle Hooks
+
+The idle hook function is a hook function of the idle thread.
If an idle hook function is set, it is executed automatically when the system runs the idle thread, so it can be used to do other work, such as toggling a system indicator LED. The interfaces for setting and deleting idle hooks are as follows:
+
+```c
+rt_err_t rt_thread_idle_sethook(void (*hook)(void));
+rt_err_t rt_thread_idle_delhook(void (*hook)(void));
+```
+
+The input parameters and return values of rt_thread_idle_sethook() are shown in the following table:
+
+|**Function Parameters**|Description |
+|--------------|----------------|
+| hook | The hook function to set. |
+|**Return** | —— |
+| RT_EOK | Set successfully. |
+| \-RT_EFULL | Failed to set. |
+
+The input parameters and return values of rt_thread_idle_delhook() are shown in the following table:
+
+|Function Parameters|Description |
+|--------------|----------------|
+| hook | The hook function to delete. |
+|**Return** | —— |
+| RT_EOK | Deleted successfully. |
+| \-RT_ENOSYS | Failed to delete. |
+
+>An idle thread is a thread whose state is always ready. Therefore, the hook function must never cause the idle thread to be suspended; functions such as rt_thread_delay() and rt_sem_take() cannot be used in it because they may suspend the thread.
+
+### Set the Scheduler Hook
+
+While the system is running, it is constantly executing threads, triggering and responding to interrupts, and switching between threads; in other words, context switching is the most common event in the system. Sometimes the user may want to know what kind of thread switch occurred at a given moment. A corresponding hook function can be set by calling the following function interface; this hook function is called whenever the system switches threads:
+
+```c
+void rt_scheduler_sethook(void (*hook)(struct rt_thread* from, struct rt_thread* to));
+```
+
+The input parameter for setting the scheduler hook is shown in the following table:
+
+|**Function Parameters**|Description |
+|--------------|----------------------------|
+| hook | A user-defined hook function pointer |
+
+The hook function hook() is declared as follows:
+
+```c
+void hook(struct rt_thread* from, struct rt_thread* to);
+```
+
+The input parameters of the scheduler hook function hook() are shown in the following table:
+
+|Function Parameters|**Description** |
+|--------------|------------------------------------|
+| from | The thread control block pointer of the thread the system is switching out |
+| to | The thread control block pointer of the thread the system is switching to |
+
+>Please write your hook function carefully; any carelessness is likely to cause the entire system to run abnormally. In this hook function, calling system APIs is basically not allowed, and nothing may be done that would cause the current running context to suspend.
+
+Thread Application Sample
+------------
+
+Application examples in the Keil simulator environment are given below.
+
+### Create Thread Sample
+
+This sample creates a dynamic thread and initializes a static thread. One thread is automatically deleted by the system after it finishes running.
The other thread is always printing the counts, as follows: + +```c +#include + +#define THREAD_PRIORITY 25 +#define THREAD_STACK_SIZE 512 +#define THREAD_TIMESLICE 5 + +static rt_thread_t tid1 = RT_NULL; + +/* Entry Function for Thread 1 */ +static void thread1_entry(void *parameter) +{ + rt_uint32_t count = 0; + + while (1) + { + /* Thread 1 runs with low priority and prints the count value all the time */ + rt_kprintf("thread1 count: %d\n", count ++); + rt_thread_mdelay(500); + } +} + +ALIGN(RT_ALIGN_SIZE) +static char thread2_stack[1024]; +static struct rt_thread thread2; +/* Entry for Thread 2 */ +static void thread2_entry(void *param) +{ + rt_uint32_t count = 0; + + /* Thread 2 has a higher priority to preempt thread 1 and get executed */ + for (count = 0; count < 10 ; count++) + { + /* Thread 2 prints count value */ + rt_kprintf("thread2 count: %d\n", count); + } + rt_kprintf("thread2 exit\n"); + /* Thread 2 will also be automatically detached from the system after it finishes running. */ +} + +/* Thread Sample */ +int thread_sample(void) +{ + /* Creat thread 1, Name is thread1,Entry is thread1_entry */ + tid1 = rt_thread_create("thread1", + thread1_entry, RT_NULL, + THREAD_STACK_SIZE, + THREAD_PRIORITY, THREAD_TIMESLICE); + + /* Start this thread if you get the thread control block */ + if (tid1 != RT_NULL) + rt_thread_startup(tid1); + + /* Creat thread 2, Name is thread2,Entry is thread2_entry */ + rt_thread_init(&thread2, + "thread2", + thread2_entry, + RT_NULL, + &thread2_stack[0], + sizeof(thread2_stack), + THREAD_PRIORITY - 1, THREAD_TIMESLICE); + rt_thread_startup(&thread2); + + return 0; +} + +/* Export to msh command list */ +MSH_CMD_EXPORT(thread_sample, thread sample); +``` + +The simulation results are as follows: + +``` +\ | / +- RT - Thread Operating System + / | \ 3.1.0 build Aug 24 2018 + 2006 - 2018 Copyright by rt-thread team +msh >thread_sample +msh >thread2 count: 0 +thread2 count: 1 +thread2 count: 2 +thread2 count: 3 +thread2 count: 4 +thread2 count: 5 +thread2 count: 6 +thread2 count: 7 +thread2 count: 8 +thread2 count: 9 +thread2 exit +thread1 count: 0 +thread1 count: 1 +thread1 count: 2 +thread1 count: 3 +… +``` + +When thread 2 counts to a certain value, it will stop running. Then thread 2 is automatically deleted by the system, and therefore the counting stops. Thread 1 prints the count all the time. + +>About deleting threads: Most threads are executed cyclically without needing to be deleted. For thread that can finish running, RT-Thread automatically deletes the thread after the thread finishes running, and deletes it in rt_thread_exit(). User only needs to understand the role of the interface. It is not recommended to use this interface (this interface can be called by other threads or call this interface in the timer timeout function to delete a thread which is not used very often). 
+ +### Thread Time Slice Round-Robin Scheduling Sample + +This sample is creating two threads that will always print counts when executing, as follows: + +```c +#include + +#define THREAD_STACK_SIZE 1024 +#define THREAD_PRIORITY 20 +#define THREAD_TIMESLICE 10 + +/* Thread Entry */ +static void thread_entry(void* parameter) +{ + rt_uint32_t value; + rt_uint32_t count = 0; + + value = (rt_uint32_t)parameter; + while (1) + { + if(0 == (count % 5)) + { + rt_kprintf("thread %d is running ,thread %d count = %d\n", value , value , count); + + if(count> 200) + return; + } + count++; + } +} + +int timeslice_sample(void) +{ + rt_thread_t tid = RT_NULL; + /* Create Thread 1 */ + tid = rt_thread_create("thread1", + thread_entry, (void*)1, + THREAD_STACK_SIZE, + THREAD_PRIORITY, THREAD_TIMESLICE); + if (tid != RT_NULL) + rt_thread_startup(tid); + + + /* Create Thread 2 */ + tid = rt_thread_create("thread2", + thread_entry, (void*)2, + THREAD_STACK_SIZE, + THREAD_PRIORITY, THREAD_TIMESLICE-5); + if (tid != RT_NULL) + rt_thread_startup(tid); + return 0; +} + +/* Export to msh command list */ +MSH_CMD_EXPORT(timeslice_sample, timeslice sample); +``` + +The simulation results are as follows: + +``` + \ | / +- RT - Thread Operating System + / | \ 3.1.0 build Aug 27 2018 + 2006 - 2018 Copyright by rt-thread team +msh >timeslice_sample +msh >thread 1 is running ,thread 1 count = 0 +thread 1 is running ,thread 1 count = 5 +thread 1 is running ,thread 1 count = 10 +thread 1 is running ,thread 1 count = 15 +… +thread 1 is running ,thread 1 count = 125 +thread 1 is rthread 2 is running ,thread 2 count = 0 +thread 2 is running ,thread 2 count = 5 +thread 2 is running ,thread 2 count = 10 +thread 2 is running ,thread 2 count = 15 +thread 2 is running ,thread 2 count = 20 +thread 2 is running ,thread 2 count = 25 +thread 2 is running ,thread 2 count = 30 +thread 2 is running ,thread 2 count = 35 +thread 2 is running ,thread 2 count = 40 +thread 2 is running ,thread 2 count = 45 +thread 2 is running ,thread 2 count = 50 +thread 2 is running ,thread 2 count = 55 +thread 2 is running ,thread 2 count = 60 +thread 2 is running ,thread 2 cunning ,thread 2 count = 65 +thread 1 is running ,thread 1 count = 135 +… +thread 2 is running ,thread 2 count = 205 +``` + +As can be seen from the running count results, thread 2 runs half the time of thread 1. + +### Thread Scheduler Hook Sample + +When thread is scheduling switch, it executes the schedule. We can set a scheduler hook so that we can do other things when the thread is being switched. This sample is printing switch information between the threads in the scheduler hook function, as shown in the following code. 
+ +```c +#include + +#define THREAD_STACK_SIZE 1024 +#define THREAD_PRIORITY 20 +#define THREAD_TIMESLICE 10 + +/* Counter for each thread */ +volatile rt_uint32_t count[2]; + +/* Threads 1, 2 share an entry, but the entry parameters are different */ +static void thread_entry(void* parameter) +{ + rt_uint32_t value; + + value = (rt_uint32_t)parameter; + while (1) + { + rt_kprintf("thread %d is running\n", value); + rt_thread_mdelay(1000); // Delay for a while + } +} + +static rt_thread_t tid1 = RT_NULL; +static rt_thread_t tid2 = RT_NULL; + +static void hook_of_scheduler(struct rt_thread* from, struct rt_thread* to) +{ + rt_kprintf("from: %s --> to: %s \n", from->name , to->name); +} + +int scheduler_hook(void) +{ + /* Set the scheduler hook */ + rt_scheduler_sethook(hook_of_scheduler); + + /* Create Thread 1 */ + tid1 = rt_thread_create("thread1", + thread_entry, (void*)1, + THREAD_STACK_SIZE, + THREAD_PRIORITY, THREAD_TIMESLICE); + if (tid1 != RT_NULL) + rt_thread_startup(tid1); + + /* Create Thread 2 */ + tid2 = rt_thread_create("thread2", + thread_entry, (void*)2, + THREAD_STACK_SIZE, + THREAD_PRIORITY,THREAD_TIMESLICE - 5); + if (tid2 != RT_NULL) + rt_thread_startup(tid2); + return 0; +} + +/* Export to msh command list */ +MSH_CMD_EXPORT(scheduler_hook, scheduler_hook sample); +``` + +The simulation results are as follows: + +``` + \ | / +- RT - Thread Operating System + / | \ 3.1.0 build Aug 27 2018 + 2006 - 2018 Copyright by rt-thread team +msh > scheduler_hook +msh >from: tshell --> to: thread1 +thread 1 is running +from: thread1 --> to: thread2 +thread 2 is running +from: thread2 --> to: tidle +from: tidle --> to: thread1 +thread 1 is running +from: thread1 --> to: tidle +from: tidle --> to: thread2 +thread 2 is running +from: thread2 --> to: tidle +… +``` + +It can be seen from the simulation results that when the threads are being switched, the set scheduler hook function is working normally and printing the information of the thread switching, including switching to the idle thread. 
diff --git a/documentation/timer/figures/05timer_env.png b/documentation/timer/figures/05timer_env.png new file mode 100644 index 0000000000..f11edc7aa6 Binary files /dev/null and b/documentation/timer/figures/05timer_env.png differ diff --git a/documentation/timer/figures/05timer_linked_list.png b/documentation/timer/figures/05timer_linked_list.png new file mode 100644 index 0000000000..434e46165f Binary files /dev/null and b/documentation/timer/figures/05timer_linked_list.png differ diff --git a/documentation/timer/figures/05timer_linked_list2.png b/documentation/timer/figures/05timer_linked_list2.png new file mode 100644 index 0000000000..06e7282f74 Binary files /dev/null and b/documentation/timer/figures/05timer_linked_list2.png differ diff --git a/documentation/timer/figures/05timer_ops.png b/documentation/timer/figures/05timer_ops.png new file mode 100644 index 0000000000..383a7af750 Binary files /dev/null and b/documentation/timer/figures/05timer_ops.png differ diff --git a/documentation/timer/figures/05timer_skip_list.png b/documentation/timer/figures/05timer_skip_list.png new file mode 100644 index 0000000000..eb0b54bfd3 Binary files /dev/null and b/documentation/timer/figures/05timer_skip_list.png differ diff --git a/documentation/timer/figures/05timer_skip_list2.png b/documentation/timer/figures/05timer_skip_list2.png new file mode 100644 index 0000000000..b16117c97d Binary files /dev/null and b/documentation/timer/figures/05timer_skip_list2.png differ diff --git a/documentation/timer/figures/05timer_skip_list3.png b/documentation/timer/figures/05timer_skip_list3.png new file mode 100644 index 0000000000..d5550cc3e5 Binary files /dev/null and b/documentation/timer/figures/05timer_skip_list3.png differ diff --git a/documentation/timer/timer.md b/documentation/timer/timer.md new file mode 100644 index 0000000000..921719dd7e --- /dev/null +++ b/documentation/timer/timer.md @@ -0,0 +1,525 @@ +Clock Management +======== + +The concept of time is very important. You need to set a time to go out with friends and it takes time to complete tasks. Life is inseparable from time. The same is true for operating systems, which require time to regulate the execution of their tasks. The smallest time unit in operating system is clock tick (OS Tick). This chapter focuses on introduction of clock ticks and clock-based timers. After reading this chapter, we will learn how clock ticks are generated and how to use RT-Thread timers. + +Clock Tick(OS Tick) +-------- + +Any operating system needs to provide a clock tick for the system to handle all time-related events, such as thread latency, thread time slice rotation scheduling, timer timeout, etc. Clock tick is a specific periodic interrupt. This interrupt can be regarded as the system heartbeat. The time interval between interrupts depends on different applications, generally it is 1ms–100ms. The faster the clock tick rate, the greater the overhead of the system. The number of clock ticks counted from the start of the system is called the system time. + +In RT-Thread, the length of clock tick can be adjusted according to the value of RT_TICK_PER_SECOND, which is equal to 1/RT_TICK_PER_SECOND second. + +### Clock Tick Implementation + +Clock tick is generated by a hardware timer configured in interrupt trigger mode. `void rt_tick_increase(void)` will be called when an interrupt occurs,notifying the operating system that a system clock has passed; different hardware driver have different timer interrupt implementations. 
Here is an example of using the STM32 `SysTick_Handler` to implement the clock tick.
+
+```c
+void SysTick_Handler(void)
+{
+    /* Enter interrupt */
+    rt_interrupt_enter();
+    ……
+    rt_tick_increase();
+    /* Leave interrupt */
+    rt_interrupt_leave();
+}
+```
+
+`rt_tick_increase()` is called in the interrupt function to increment the global variable rt_tick. The code is as follows:
+
+```c
+void rt_tick_increase(void)
+{
+    struct rt_thread *thread;
+
+    /* Increment the global variable rt_tick */
+    ++ rt_tick;
+
+    /* Check the time slice */
+    thread = rt_thread_self();
+
+    -- thread->remaining_tick;
+    if (thread->remaining_tick == 0)
+    {
+        /* Reload the initial time slice value */
+        thread->remaining_tick = thread->init_tick;
+
+        /* Yield to the next thread of the same priority */
+        rt_thread_yield();
+    }
+
+    /* Check the timers */
+    rt_timer_check();
+}
+```
+
+You can see that the global variable rt_tick is incremented by one on every clock tick, so the value of rt_tick indicates the total number of clock ticks elapsed since the system started, that is, the system time. In addition, on every clock tick the system checks whether the current thread's time slice is exhausted and whether any timer has timed out.
+
+>In the interrupt, rt_timer_check() is used to check the system timer linked list. If a timer has timed out, the corresponding timeout function is called. All timers that have timed out are removed from the timer linked list; a periodic timer is added back to the timer linked list when it is started again.
+
+### Obtain Clock Tick
+
+Since the global variable rt_tick is incremented by one on every clock tick, calling `rt_tick_get` returns the current value of rt_tick, that is, the current clock tick value. This interface can be used to record how long the system has been running, or to measure the time taken by a task. The interface function is as follows:
+
+```c
+rt_tick_t rt_tick_get(void);
+```
+
+The following table describes the return value of the `rt_tick_get()` function:
+
+|**Return**|Description |
+|----------|----------------|
+| rt_tick | Current clock tick value |
+
+Timer Management
+----------
+
+A timer triggers an event after a specified time has elapsed from a specified starting moment, for example, setting a timer to wake yourself up the next morning. Timers include hardware timers and software timers:
+
+1) **Hardware timer** is the timing function provided by the chip itself. A hardware timer is used by configuring the timer module into timer mode and setting the time; it can be accurate to nanosecond precision and works in interrupt trigger mode.
+
+2) **Software timer** is a timer interface provided by the operating system. It is built on top of the hardware timer so that the system can provide timer services that are not constrained in number.
+
+The RT-Thread operating system provides software-implemented timers in units of clock ticks (OS ticks); that is, the timing value must be an integer multiple of the OS tick. For example, if an OS tick is 10 ms, a software timer can only be set to 10 ms, 20 ms, 100 ms, and so on, but not to 15 ms. RT-Thread timers are based on the clock tick and provide timing at integral multiples of the clock tick.
+
+### RT-Thread Timer Introduction
+
+The RT-Thread timer provides two types of timer mechanisms: the first is a one-shot timer, which triggers a timer event only once after being started and then stops automatically.
Timer Management
----------

A timer triggers an event after a specified time has elapsed from a specified moment, for example, setting a timer to wake yourself up the next morning. Timers include hardware timers and software timers:

1) **Hardware timer** is the timing function provided by the chip itself. A hardware timer is used by configuring the timer module into timer mode and setting the time; it has nanosecond-level precision and works in interrupt trigger mode.

2) **Software timer** is a system interface provided by the operating system. It is built on top of the hardware timer so that the system can provide a timer service that is not constrained in number.

The RT-Thread operating system provides software-implemented timers in units of the clock tick (OS tick), which means the timing value must be an integer multiple of the OS tick. For example, if one OS tick is 10ms, a software timer can time 10ms, 20ms, 100ms, and so on, but not 15ms. Since RT-Thread timers are based on the clock tick, they provide timing capability in integer multiples of the tick.

### RT-Thread Timer Introduction

RT-Thread provides two types of timer mechanisms: the first type is the one-shot timer, which triggers a timer event only once after startup and then stops automatically. The second type is the periodic timer, which triggers a timer event periodically until the user stops it manually.

In addition, according to the context in which the timeout function is executed, RT-Thread timers can be divided into HARD_TIMER mode and SOFT_TIMER mode, as shown in the following figure.

![Timer Context](figures/05timer_env.png)

#### HARD_TIMER Mode

The timeout function of a HARD_TIMER mode timer is executed in the interrupt context. This mode can be specified with the parameter RT_TIMER_FLAG_HARD_TIMER when initializing/creating the timer.

When executed in the interrupt context, the requirements for the timeout function are the same as those for an interrupt service routine: the execution time should be as short as possible, and the execution should not cause the current context to suspend and wait. For example, a timeout function executed in the interrupt context should not attempt to allocate or free dynamic memory.

The default mode of RT-Thread timers is HARD_TIMER mode, which means that after the timer times out, the timeout function runs in the context of the system clock interrupt. Because it runs in the interrupt context, the timeout function must not call any system function that would cause the current context to suspend, nor should it execute for a very long time; otherwise the response time of other interrupts is lengthened or the running time of other threads is preempted.

#### SOFT_TIMER Mode

The SOFT_TIMER mode is configurable; the macro definition RT_USING_TIMER_SOFT determines whether it is enabled. When this mode is enabled, the system creates a timer thread at initialization, and the timeout functions of SOFT_TIMER mode timers are executed in the context of that timer thread. SOFT_TIMER mode can be specified with the parameter RT_TIMER_FLAG_SOFT_TIMER when initializing/creating the timer.

### Timer Working Mechanism

The following example illustrates the working mechanism of RT-Thread timers. Two important global variables are maintained in the RT-Thread timer module:

(1) rt_tick, the number of ticks that have elapsed in the current system (it is incremented by 1 whenever a hardware timer interrupt occurs);

(2) the timer linked list `rt_timer_list`. Newly created and activated timers are inserted into the `rt_timer_list` linked list ordered by timeout value.

As shown in the figure below, the current tick value of the system is 20, and three timers have been created and started: Timer1 with a set time of 50 ticks, Timer2 with 100 ticks, and Timer3 with 500 ticks. The current system time rt_tick=20 is added to each timer's set time, and the timers are linked in ascending order of timeout value in the rt_timer_list linked list, forming the timer linked list structure shown in the figure.

![Timer Linked List Diagram](figures/05timer_linked_list.png)

As the hardware timer keeps triggering, `rt_tick` keeps increasing (the rt_tick variable is incremented by 1 on every hardware timer interrupt). After 50 ticks, rt_tick has increased from 20 to 70, which equals the timeout value of Timer1, so the timeout function associated with Timer1 is triggered and Timer1 is removed from the rt_timer_list linked list.
Similarly, after 100 ticks and 500 ticks, the timeout functions associated with Timer2 and Timer3 are triggered, and Timer2 and Timer3 are removed from the rt_timer_list linked list in turn.

If 10 ticks later (current rt_tick=30) a new task creates and starts Timer4 with a set time of 300 ticks, Timer4's timeout value is rt_tick plus 300, that is, 330. Timer4 is therefore inserted between Timer2 and Timer3, forming the linked list structure shown below:

![Timer Linked List Insertion Diagram](figures/05timer_linked_list2.png)

#### Timer Control Block

In the RT-Thread operating system, the timer control block is defined by the structure `struct rt_timer` and forms a timer kernel object, which is then linked into the kernel object container for management. It is the data structure used by the operating system to manage timers and stores information about a timer, such as the initial number of ticks, the number of ticks at timeout, the linked list structure used to connect timers, the timeout callback function, and so on.

```c
struct rt_timer
{
    struct rt_object parent;
    rt_list_t row[RT_TIMER_SKIP_LIST_LEVEL];  /* timer linked list node */

    void (*timeout_func)(void *parameter);    /* timeout function */
    void      *parameter;                     /* parameter of the timeout function */
    rt_tick_t  init_tick;                     /* initial timeout ticks of the timer */
    rt_tick_t  timeout_tick;                  /* number of ticks when the timer actually times out */
};
typedef struct rt_timer *rt_timer_t;
```

The `row` member is used to link an active (already started) timer into the `rt_timer_list` linked list.

#### Timer Skip List Algorithm

In the introduction to the timer working mechanism above, we mentioned that newly created and activated timers are inserted into the rt_timer_list linked list in order of timeout, that is, rt_timer_list is an ordered list. RT-Thread uses a skip list algorithm to speed up searching for elements in this list.

A skip list is a data structure based on parallel linked lists that is simple to implement, and the time complexity of insertion, deletion, and search is O(log n). A skip list is a kind of linked list, but it adds a "skip" capability to the list, and it is this capability that gives the skip list O(log n) time complexity when looking up elements. For example:

In the ordered list shown in the following figure, searching for the elements {13, 39} takes {3, 5} comparisons respectively, for a total of 3 + 5 = 8 comparisons.

![Ordered Linked List](figures/05timer_skip_list.png)

After applying the skip list algorithm, some nodes can be extracted as an index in a way similar to a binary search tree, producing the structure shown in the following figure:

![Ordered Linked List Index](figures/05timer_skip_list2.png)

In this structure, {3, 18, 77} is extracted as a first-level index, so the number of comparisons during a search is reduced; for example, searching for 39 now takes only 3 comparisons (comparing 3, 18, 39). Of course, we can also extract some elements from the first-level index as a second-level index, which speeds up the search even further.
+ +![Three Layer Skip List](figures/05timer_skip_list3.png) + +Therefore, the timer skip list can pass the index of the upper layer, reducing the number of comparisons during the search and improving the efficiency of the search. This is an algorithm of "space in exchange of time", macro definition RT_TIMER_SKIP_LIST_LEVEL is used to configure the number of layers in skip list. The default value is 1, which means that ordered linked list algorithm for first-order ordered list graph is used. Each additional one means that another level of index is added to the original list. + +### Timer Management + +RT-Thread timer is introduced in the previous sections and the working mechanism of the timer is conceptually explained. This section will go deep into the various interfaces of the timer to help the reader understand the RT-Thread timer at the code level. + +Timer management system needs to be initialized at system startup. This can be done through the following function interface: + +```c +void rt_system_timer_init(void); +``` + +If you need to use SOFT_TIMER, the following function interface should be called when the system is initialized: + +```c +void rt_system_timer_thread_init(void); +``` + +Timer control block contains important parameters related to the timer and acts as a link between various states of the timer. Relevant operations of the timer are as shown in the following figure. Relevant operations of the timer includes: creating/initializing the timer, starting the timer, running the timer, and deleting/detaching the timer. All the timers will be moved from the timer linked list after their timings expire. However, periodic timer is added back to the timer linked list when it is started again, which is related to timer parameter settings. Each time an operating system clock interrupt occurs, a change is made to the timer status parameter that has timed out. + +![Timer Related Operations](figures/05timer_ops.png) + +#### Create and Delete Timer + +When dynamically creating a timer, the following function interface can be used: + +```c +rt_timer_t rt_timer_create(const char* name, + void (*timeout)(void* parameter), + void* parameter, + rt_tick_t time, + rt_uint8_t flag); +``` + +After calling the function interface, kernel first allocates a timer control block from the dynamic memory heap and then performs basic initialization on the control block. The description of each parameter and return value is detailed in the following table: + + Input parameters and return values of `rt_timer_create()` + +|**Parameters** |**Description** | +|---------------------------------|--------------------------------------------------------------------------| +| name | Name of the timer | +| void (timeout) (void parameter) | Timer timeout function pointer (this function is called when the timer expires) | +| parameter | Entry parameter of the timer timeout function (when the timer expires, calling timeout callback function will pass this parameters as the entry parameter to the timeout function) | +| time | Timeout of the timer, the unit is the clock tick | +| flag | Parameters when the timer is created. The supported values include one-shot timing, periodic timing, hardware timer, software timer, etc. (You can use multiple values with "OR") | +|**Return** | —— | +| RT_NULL | Creation failed (usually returning RT_NULL due to insufficient system memory) | +| Timer Handle | Timer was created successfully. 
| + +In include/rtdef.h, some timer related macros are defined, as follows: + +```c +#define RT_TIMER_FLAG_ONE_SHOT     0x0     /* One shot timing     */ +#define RT_TIMER_FLAG_PERIODIC     0x2     /* Periodic timing     */ + +#define RT_TIMER_FLAG_HARD_TIMER   0x0     /* Hardware timer   */ +#define RT_TIMER_FLAG_SOFT_TIMER   0x4     /* Software timer  */ +``` + +The above two sets of values can be assigned to the flag in an "or" logical manner. When the specified flag is RT_TIMER_FLAG_HARD_TIMER, if the timer expires, the timer's callback function will be called in the context of the service routine of the clock interrupt; when the specified flag is RT_TIMER_FLAG_SOFT_TIMER, if the timer expires, the timer's callback function will be called in the context of the system clock timer thread. + +When the system no longer uses dynamic timers, the following function interface can be used: + +```c +rt_err_t rt_timer_delete(rt_timer_t timer); +``` + +After calling this function interface, the system will remove this timer from the rt_timer_list linked list, and then release the memory occupied by the corresponding timer control block. The parameters and return values are detailed in the following table: + +Input parameters and return values of rt_timer_delete() + +|**Parameters**|**Description** | +|----------|-------------------------------------------------------------------------| +| timer | Timer handle, pointing at timer needs to be deleted | +|**Return**| —— | +| RT_EOK | Deletion is successful (if the parameter timer handle is RT_NULL, it will result in an ASSERT assertion) | + +#### Initialize and Detach Timer + +When creating a timer statically , the timer can be initialized by using `rt_timer_init` interface. The function interface is as follows: + +```c +void rt_timer_init(rt_timer_t timer, + const char* name, + void (*timeout)(void* parameter), + void* parameter, + rt_tick_t time, rt_uint8_t flag); +``` + +When using this function interface, the corresponding timer control block, the corresponding timer name, timer timeout function, etc will be initialized. The description of each parameter and return value is shown in the following table: + +Input parameters and return values of `rt_timer_init()` + +|Parameters |**Description** | +|---------------------------------|-------------------------------------------------------------------------------------| +| timer | Timer handle, pointing to the to-be-initialized timer control block | +| name | Name of the timer | +| void (timeout) (void parameter) | Timer timeout function pointer (this function is called when the timer expires) | +| parameter | Entry parameter of the timer timeout function (when the timer expires, the call timeout callback function will pass this parameter as the entry parameter to the timeout function) | +| time | Timeout of the timer, the unit is clock tick | +| flag | Parameters of when the timer is created. The supported values include one shot timing, periodic timing, hardware timer, and software timer (multiple values can be taken with OR). For details, see Creating a Timer. | + +When a static timer does not need to be used again, you can use the following function interface: + +```c +rt_err_t rt_timer_detach(rt_timer_t timer); +``` + +When detaching a timer, the system will detach the timer object from the kernel object container, but the memory occupied by the timer object will not be released. 
The parameters and return values are detailed in the following table: + +Input parameters and return values for `rt_timer_detach()` + +|**Parameters**|Description | +|----------|--------------------------------------| +| timer | Timer handle, pointing to the to-be-detached timer control block | +|**Return **| —— | +| RT_EOK | Successfully detached | + +#### Start and Stop Timer + +When the timer is created or initialized, it will not be started immediately. It will start after timer function interface is called. The timer function interface is started as follows: + +```c +rt_err_t rt_timer_start(rt_timer_t timer); +``` + +After the timer start function interface is called, the state of the timer is changed to activated state (RT_TIMER_FLAG_ACTIVATED) and inserted into rt_timer_list queue linked list, the parameters and return values are detailed in the following table: + +  Input parameters and return values of rt_timer_start() + +|Parameters|Description | +|----------|--------------------------------------| +| timer | Timer handle, pointing to the to-be-initialized timer control block | +|**Return**| —— | +| RT_EOK | Successful startup | + +For an example of starting the timer, please refer to the sample code below. + +After starting the timer, if you want to stop it, you can use the following function interface: + +```c +rt_err_t rt_timer_stop(rt_timer_t timer); +``` + +After the timer stop function interface is called, the timer state will change to the stop state and will be detached from the rt_timer_list linked list without participating in the timer timeout check. When a (periodic) timer expires, this function interface can also be called to stop the (periodic) timer itself. The parameters and return values are detailed in the following table: + +  Input parameters and return values of rt_timer_stop() + +|**Parameters** |Description | +|-------------|--------------------------------------| +| timer | Timer handle, pointing to the to-be-stopped timer control block | +|**Return** | —— | +| RT_EOK | Timer successfully stopped | +| \- RT_ERROR | timer is in stopped state | + +#### Control Timer + +In addition to some of the programming interfaces provided above, RT-Thread additionally provides a timer control function interface to obtain or set more timer information. The control timer function interface is as follows: + +```c +rt_err_t rt_timer_control(rt_timer_t timer, rt_uint8_t cmd, void* arg); +``` + +The control timer function interface can view or change the setting of the timer according to the parameters of the command type. The description of each parameter and return value is as follows: + +Input parameters and return values of rt_timer_control() + +|Parameters|**Description** | +|----------|----------------------------------------------------------------------------------------------------------| +| timer | Timer handle, pointing to the to-be-stopped timer control block | +| cmd | The command for controlling the timer currently supports four command interfaces, which are setting timing, viewing the timing time, setting a one shot trigger, and setting the periodic trigger. | +| arg | Control command parameters corresponding to cmd. For example, when cmd is the set timeout time, the timeout time parameter can be set by arg. 
|
|**Return**| —— |
| RT_EOK | Successful |

Commands supported by the function parameter cmd:

```c
#define RT_TIMER_CTRL_SET_TIME      0x0     /* set the timeout value              */
#define RT_TIMER_CTRL_GET_TIME      0x1     /* obtain the timer timeout value     */
#define RT_TIMER_CTRL_SET_ONESHOT   0x2     /* set the timer as a one-shot timer  */
#define RT_TIMER_CTRL_SET_PERIODIC  0x3     /* set the timer as a periodic timer  */
```

For code that uses the timer control interface, see the dynamic timer routine in the next section and the control sketch that follows it.

Timer Application Sample
--------------

The following example creates two dynamic timers: one is a one-shot timer and the other is a periodic timer that runs for a while and is then stopped, as shown below:

```c
#include <rtthread.h>

/* timer control blocks */
static rt_timer_t timer1;
static rt_timer_t timer2;
static int cnt = 0;

/* timeout function of timer 1 */
static void timeout1(void *parameter)
{
    rt_kprintf("periodic timer is timeout %d\n", cnt);

    /* stop the periodic timer after the 10th run */
    if (cnt++ >= 9)
    {
        rt_timer_stop(timer1);
        rt_kprintf("periodic timer was stopped! \n");
    }
}

/* timeout function of timer 2 */
static void timeout2(void *parameter)
{
    rt_kprintf("one shot timer is timeout\n");
}

int timer_sample(void)
{
    /* create timer 1: a periodic timer */
    timer1 = rt_timer_create("timer1", timeout1,
                             RT_NULL, 10,
                             RT_TIMER_FLAG_PERIODIC);

    /* start timer 1 */
    if (timer1 != RT_NULL) rt_timer_start(timer1);

    /* create timer 2: a one-shot timer */
    timer2 = rt_timer_create("timer2", timeout2,
                             RT_NULL, 30,
                             RT_TIMER_FLAG_ONE_SHOT);

    /* start timer 2 */
    if (timer2 != RT_NULL) rt_timer_start(timer2);

    return 0;
}

/* export to the msh command list */
MSH_CMD_EXPORT(timer_sample, timer sample);
```

The simulation results are as follows:

```
 \ | /
- RT -     Thread Operating System
 / | \     3.1.0 build Aug 24 2018
 2006 - 2018 Copyright by rt-thread team
msh >timer_sample
msh >periodic timer is timeout 0
periodic timer is timeout 1
one shot timer is timeout
periodic timer is timeout 2
periodic timer is timeout 3
periodic timer is timeout 4
periodic timer is timeout 5
periodic timer is timeout 6
periodic timer is timeout 7
periodic timer is timeout 8
periodic timer is timeout 9
periodic timer was stopped!
```

The timeout function of periodic timer 1 runs once every 10 OS ticks, 10 times in total (after the 10th time, rt_timer_stop is called to stop timer 1); the timeout function of one-shot timer 2 runs once, at the 30th OS tick.
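As a complement, the following minimal sketch (not part of the original routine) shows how the control interface from the previous section could be applied to `timer1` created above. It assumes that `arg` is passed as a pointer to an `rt_tick_t` value, and it conservatively stops and restarts the timer so that the new period takes effect on the next start; the function name `timer_retime` is invented for this example:

```c
/* Sketch: change the period of timer1 (created above) at run time */
static void timer_retime(void)
{
    rt_tick_t new_period = 20;   /* new period: 20 OS ticks */
    rt_tick_t cur_period = 0;

    if (timer1 == RT_NULL) return;

    rt_timer_stop(timer1);                                          /* stop before changing the period */
    rt_timer_control(timer1, RT_TIMER_CTRL_SET_TIME, &new_period);  /* set the new timeout value */
    rt_timer_control(timer1, RT_TIMER_CTRL_GET_TIME, &cur_period);  /* read it back to confirm */
    rt_kprintf("timer1 period is now %u ticks\n", (unsigned)cur_period);
    rt_timer_start(timer1);                                         /* restart with the new period */
}
```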
The example of initializing a timer is similar to the example of creating one. This program initializes two static timers, one one-shot and one periodic, as shown in the following code:

```c
#include <rtthread.h>

/* timer control blocks */
static struct rt_timer timer1;
static struct rt_timer timer2;
static int cnt = 0;

/* timeout function of timer 1 */
static void timeout1(void *parameter)
{
    rt_kprintf("periodic timer is timeout\n");
    /* run 10 times, then stop */
    if (cnt++ >= 9)
    {
        rt_timer_stop(&timer1);
    }
}

/* timeout function of timer 2 */
static void timeout2(void *parameter)
{
    rt_kprintf("one shot timer is timeout\n");
}

int timer_static_sample(void)
{
    /* initialize the timers */
    rt_timer_init(&timer1, "timer1",         /* timer name is timer1 */
                  timeout1,                  /* timeout callback */
                  RT_NULL,                   /* entry parameter of the timeout function */
                  10,                        /* timing length: 10 OS ticks */
                  RT_TIMER_FLAG_PERIODIC);   /* periodic timer */
    rt_timer_init(&timer2, "timer2",         /* timer name is timer2 */
                  timeout2,                  /* timeout callback */
                  RT_NULL,                   /* entry parameter of the timeout function */
                  30,                        /* timing length: 30 OS ticks */
                  RT_TIMER_FLAG_ONE_SHOT);   /* one-shot timer */

    /* start the timers */
    rt_timer_start(&timer1);
    rt_timer_start(&timer2);

    return 0;
}
/* export to the msh command list */
MSH_CMD_EXPORT(timer_static_sample, timer_static sample);
```

The simulation results are as follows:

```
 \ | /
- RT -     Thread Operating System
 / | \     3.1.0 build Aug 24 2018
 2006 - 2018 Copyright by rt-thread team
msh >timer_static_sample
msh >periodic timer is timeout
periodic timer is timeout
one shot timer is timeout
periodic timer is timeout
periodic timer is timeout
periodic timer is timeout
periodic timer is timeout
periodic timer is timeout
periodic timer is timeout
periodic timer is timeout
periodic timer is timeout
```

The timeout function of periodic timer1 runs once every 10 OS ticks, 10 times in total (after the 10th time, rt_timer_stop is called to stop timer1); the timeout function of one-shot timer2 runs once, at the 30th OS tick.

High Precision Delay
----------

The minimum precision of an RT-Thread timer is determined by the system clock tick (1 OS tick = 1/RT_TICK_PER_SECOND second; RT_TICK_PER_SECOND is defined in the rtconfig.h file), and the timer must be set to an integer multiple of the OS tick. When a shorter timing length is required, for example when the OS tick is 10ms but the program needs a 1ms timing or delay, the operating system timer cannot meet the requirement. This problem can be solved by reading the counter of one of the system's hardware timers, or by using a hardware timer directly.

In the Cortex-M series, SysTick is used by RT-Thread as the OS tick. It is configured to trigger an interrupt every 1/RT_TICK_PER_SECOND second, and its interrupt handler uses the Cortex-M3 default name `SysTick_Handler`.
In Cortex-M3 CMSIS (Cortex Microcontroller Software Interface Standard) specification, SystemCoreClock represents the dominant frequency of the chip, so based on SysTick and SystemCoreClock, we can use SysTick to obtain an accurate delay function, as shown in the following example, Cortex-M3 SysTick-based precision delay (requires the system to enable SysTick): + +The high-precision delay routine is as follows: + +```c +#include +void rt_hw_us_delay(rt_uint32_t us) +{ + rt_uint32_t delta; + /* Obtain the number of ticks of the delay */ + us = us * (SysTick->LOAD/(1000000/RT_TICK_PER_SECOND)); + /* Obtain current time */ + delta = SysTick->VAL; + /* Loop to obtain the current time until the specified time elapses and exits the loop */ + while (delta - SysTick->VAL< us); +} +``` + +The entry parameter us indicates the number of microseconds that need to be delayed. This function can only support delays shorter than 1 OS Tick, otherwise the SysTick will overflow and not be able to obtain the specified delay time. + diff --git a/documentation/ulog/figures/ulog_async_vs_sync.png b/documentation/ulog/figures/ulog_async_vs_sync.png new file mode 100644 index 0000000000..39d651fa13 Binary files /dev/null and b/documentation/ulog/figures/ulog_async_vs_sync.png differ diff --git a/documentation/ulog/figures/ulog_example.png b/documentation/ulog/figures/ulog_example.png new file mode 100644 index 0000000000..7d4d5b945f Binary files /dev/null and b/documentation/ulog/figures/ulog_example.png differ diff --git a/documentation/ulog/figures/ulog_example_all_format.png b/documentation/ulog/figures/ulog_example_all_format.png new file mode 100644 index 0000000000..70dbb240ed Binary files /dev/null and b/documentation/ulog/figures/ulog_example_all_format.png differ diff --git a/documentation/ulog/figures/ulog_example_async.png b/documentation/ulog/figures/ulog_example_async.png new file mode 100644 index 0000000000..43fd13ecdd Binary files /dev/null and b/documentation/ulog/figures/ulog_example_async.png differ diff --git a/documentation/ulog/figures/ulog_example_filter20.png b/documentation/ulog/figures/ulog_example_filter20.png new file mode 100644 index 0000000000..42c61980dc Binary files /dev/null and b/documentation/ulog/figures/ulog_example_filter20.png differ diff --git a/documentation/ulog/figures/ulog_example_filter30.png b/documentation/ulog/figures/ulog_example_filter30.png new file mode 100644 index 0000000000..daccd700f2 Binary files /dev/null and b/documentation/ulog/figures/ulog_example_filter30.png differ diff --git a/documentation/ulog/figures/ulog_example_filter40.png b/documentation/ulog/figures/ulog_example_filter40.png new file mode 100644 index 0000000000..b5d5f245c9 Binary files /dev/null and b/documentation/ulog/figures/ulog_example_filter40.png differ diff --git a/documentation/ulog/figures/ulog_example_hexdump.png b/documentation/ulog/figures/ulog_example_hexdump.png new file mode 100644 index 0000000000..1d14f050f3 Binary files /dev/null and b/documentation/ulog/figures/ulog_example_hexdump.png differ diff --git a/documentation/ulog/figures/ulog_example_syslog.png b/documentation/ulog/figures/ulog_example_syslog.png new file mode 100644 index 0000000000..996cde3a17 Binary files /dev/null and b/documentation/ulog/figures/ulog_example_syslog.png differ diff --git a/documentation/ulog/figures/ulog_framework.png b/documentation/ulog/figures/ulog_framework.png new file mode 100644 index 0000000000..bc01eb4101 Binary files /dev/null and b/documentation/ulog/figures/ulog_framework.png 
differ diff --git a/documentation/ulog/figures/ulog_framework_backend.png b/documentation/ulog/figures/ulog_framework_backend.png new file mode 100644 index 0000000000..5df4c71cc1 Binary files /dev/null and b/documentation/ulog/figures/ulog_framework_backend.png differ diff --git a/documentation/ulog/figures/ulog_menuconfig_async.png b/documentation/ulog/figures/ulog_menuconfig_async.png new file mode 100644 index 0000000000..7e5f591829 Binary files /dev/null and b/documentation/ulog/figures/ulog_menuconfig_async.png differ diff --git a/documentation/ulog/figures/ulog_menuconfig_format.png b/documentation/ulog/figures/ulog_menuconfig_format.png new file mode 100644 index 0000000000..63bf46019e Binary files /dev/null and b/documentation/ulog/figures/ulog_menuconfig_format.png differ diff --git a/documentation/ulog/figures/ulog_syslog_format.png b/documentation/ulog/figures/ulog_syslog_format.png new file mode 100644 index 0000000000..69b17b71db Binary files /dev/null and b/documentation/ulog/figures/ulog_syslog_format.png differ diff --git a/documentation/ulog/ulog.md b/documentation/ulog/ulog.md new file mode 100644 index 0000000000..78d1e0a1d6 --- /dev/null +++ b/documentation/ulog/ulog.md @@ -0,0 +1,800 @@ +# Ulog Log + +## Ulog Introduction + +**Log definition**:The log is to output the status, process and other information of the software to different media (for example: file, console, display, etc.), display and save. Provide reference for software traceability, performance analysis, system monitoring, fault warning and other functions during software debugging and maintenance. It can be said that the use of logs consumes at least 80% of the software life cycle. + +**The importance of the log**:For the operating system, because the complexity of the software is very large, single-step debugging is not suitable in some scenarios, the log component is almost standard part on the operating system. A sophisticated logging system can also make the debugging of the operating system more effective. + +**The origin of ulog**: RT-Thread has always lacked a small, useful log component, and the birth of ulog complements this short board. It will be open sourced as a basic component of RT-Thread, allowing our developers to use a simple and easy-to-use logging system to improve development efficiency. + +Ulog is a very simple and easy to use C/C++ log component. The first letter u stands for μ, which means micro. It can achieve the lowest **ROM<1K, RAM<0.2K** resource usage. Ulog is not only small in size, but also has very comprehensive functions. Its design concept refers to another C/C++ open source log library: EasyLogger (referred to as elog), and has made many improvements in terms of functions and performance. The main features are as follows: + +* The backend of the log output is diversified and can support, for example, serial port, network, file, flash memory and other backend forms. + +* The log output is designed to be thread-safe and supports asynchronous output mode. + +* The logging system is highly reliable and is still available in complex environments such as interrupted ISRs and Hardfault. + +* The log supports runtime/compilation time to set the output level. + +* The log content supports global filtering by keyword and label. + +* The APIs and log formats are compatible with linux syslog. + +* Support for dumping debug data to the log in hex format. + +* Compatible with `rtdbg` (RTT's early log header file) and EasyLogger's log output API. 
+ +### Ulog Architecture + +The following figure shows the ulog log component architecture diagram: + +![ulog architecture](figures/ulog_framework.png) + +* **Front end**:This layer is the closest layer to the application, and provides users with two types of API interfaces, `syslog` and `LOG_X`, which are convenient for users to use in different scenarios. + +* **Core**:The main work of the middle core layer is to format and filter the logs passed by the upper layer, and then generate log frames, and finally output them to the lowest-end back-end devices through different output modules. + +* **Back end**:After receiving the log frames sent from the core layer, the logs are output to the registered log backend devices, such as files, consoles, log servers, and so on. + +### Configuration Options ### + +The path to configure ulog using menuconfig in the ENV tool is as follows: + +```c + RT-Thread Components → Utilities → Enable ulog +``` + + The ulog configuration options are described below. In general, the default configuration is used: + +```c +[*] Enable ulog /* Enable ulog */ + The static output log level./* Select a static log output level. After the selection is completed, the log level lower than the set level (here specifically the log using the LOG_X API) will not be compiled into the ROM. */ +[ ] Enable ISR log. /* Enable interrupted ISR log, ie log output API can also be used in ISR */ +[*] Enable assert check. /* Enable assertion checks. If fter disabled, the asserted log will not be compiled into ROM */ +(128) The log's max width. /* The maximum length of the log. Since ulog's logging API is in units of rows, this length also represents the maximum length of a row of logs. */ +[ ] Enable async output mode. /* Enable asynchronous log output mode. When this mode is turned on, the log will not be output to the backend immediately, but will be cached first, and then handed to the log output thread (for example: idle thread) to output. */ + log format ---> /* Configure the format of the log, such as time information, color information, thread information, whether to support floating point, etc. */ +[*] Enable console backend. /* Enable the console as a backend. After enabling, the log can be output to the console serial port. It is recommended to keep it on. */ +[ ] Enable runtime log filter. /* Enable the runtime log filter, which is dynamic filtering. After enabling, the log will support dynamic filtering when the system is running, by means of tags, keywords, and so on. */ +``` + +**The configuration log format option description is as follows:** + +```c +[ ] Enable float number support. It will using more thread stack. /* Supporting floating-point variables (traditional rtdbg/rt_kprintf does not support floating-point logs) */ + [*] Enable color log. /* Colored log */ + [*] Enable time information. /* Time information */ + [ ] Enable timestamp format for time. /* Including timestamp */ + [*] Enable level information. /* Level information */ + [*] Enable tag information. /* Label Information */ + [ ] Enable thread information. /* Thread information */ +``` + +### Log Level + +The log level represents the importance of the log, from high to low in ulog, with the following log levels: + +| Level | Name | Description | +| ------------ | ---- | ----------------------- | +| LOG_LVL_ASSERT | assertion | Unhandled and fatal errors occurred, so that the system could not continue to run. These are assertion logs. 
|
| LOG_LVL_ERROR | error | Logs output when a serious, **unrepairable** error occurs are error level logs. |
| LOG_LVL_WARNING | warning | Warning logs are output when some less important errors with **repairability** occur. |
| LOG_LVL_INFO | information | Logs of important prompt information viewed by upper-layer users of this module, for example initialization success, current working status, and so on. This level of log is generally **retained** in mass production. |
| LOG_LVL_DBG | debug | Debug logs viewed by the developers of this module. This level of log is generally **disabled** in mass production. |

The log levels in ulog can also be classified as follows:

* **Static and dynamic levels**: classified by whether the level can be modified while the system is running. A level that can be modified at run time is a dynamic level; a level that can only be set at **compile time** is a static level. Logs below the static level (here, specifically logs using the `LOG_X` API) are not compiled into the ROM and are never output or displayed. The dynamic level can only control logs whose level is higher than or equal to the static level; logs below the dynamic level are filtered out while ulog is running.

* **Global level and module level**: classified by scope. Each file (module) can also be given its own log level in ulog. The global level has a wider scope than the module level, that is, the module level can only control module logs that are higher than or equal to the global level.

As can be seen from the above classification, the output level of a log can be set in the following four places in ulog:

* **Global static** log level: configured in menuconfig, corresponding to the `ULOG_OUTPUT_LVL` macro.

* **Global dynamic** log level: set with the `void ulog_global_filter_lvl_set(rt_uint32_t level)` function.

* **Module static** log level: the `LOG_LVL` macro defined in the module (file), similar to the way the log tag macro `LOG_TAG` is defined.

* **Module dynamic** log level: set with the `int ulog_tag_lvl_filter_set(const char *tag, rt_uint32_t level)` function.

Their scope of action is: **global static** > **global dynamic** > **module static** > **module dynamic**.

### Log Label

As the amount of log output grows, a tag is needed to classify each log and keep the output from becoming indiscriminate. Tags are defined in a **modular** way. For example, the Wi-Fi component includes a device driver (wifi_driver), device management (wifi_mgnt) and other modules; its internal modules can use `wifi.driver` and `wifi.mgnt` as tags to output their logs by category.

The tag attribute of each log can also be output and displayed. At the same time, ulog can set the output level of the logs corresponding to each tag (module); the logs of currently unimportant modules can be selectively disabled, which not only saves ROM resources but also helps developers filter out irrelevant logs.

See the ulog routine file `rt-thread\examples\ulog_example.c`, which has the `LOG_TAG` macro defined at the top of the file:

```c
#define LOG_TAG     "example"     // tag of this module; defaults to NO_TAG when not defined
#define LOG_LVL     LOG_LVL_DBG   // log output level of this module; defaults to the debug level when not defined
#include <ulog.h>                 // this header file must be included below LOG_TAG and LOG_LVL
```
Note that the log tag definition must be above `#include <ulog.h>`, otherwise the default `NO_TAG` will be used (it is not recommended to define these macros in a header file).

The scope of a log tag is the current source file, and project source code is usually organized by module. Therefore, when defining a tag, you can use the module name and sub-module name as the tag name. This is clear and intuitive when the log output is displayed, and it also facilitates dynamically adjusting the level or filtering by tag later on.

## Log Initialization

### Initialization

```c
int ulog_init(void)
```

| **Return** | **Description** |
| :----- | :----- |
|>=0 | Succeeded |
|-5 | Failed, insufficient memory |

This function must be called to complete ulog initialization before ulog is used. It is called automatically if component auto-initialization is enabled.

### Deinitialization

```c
void ulog_deinit(void)
```

This de-initialization releases ulog's resources and can be executed when ulog is no longer needed.

## Log Output API

Ulog mainly has two sets of log output macro APIs, defined in the source code as follows:

```c
#define LOG_E(...)                       ulog_e(LOG_TAG, __VA_ARGS__)
#define LOG_W(...)                       ulog_w(LOG_TAG, __VA_ARGS__)
#define LOG_I(...)                       ulog_i(LOG_TAG, __VA_ARGS__)
#define LOG_D(...)                       ulog_d(LOG_TAG, __VA_ARGS__)
#define LOG_RAW(...)                     ulog_raw(__VA_ARGS__)
#define LOG_HEX(name, width, buf, size)  ulog_hex(name, width, buf, size)
```

* The macro `LOG_X(...)`: `X` is the first letter of the level. The parameter `...` is the log content, and the format is the same as printf. This method is preferred: its API format is simple, only the log content needs to be entered, and it supports static log level filtering by module.

* The macro `ulog_x(LOG_TAG, __VA_ARGS__)`: `x` is the lowercase shorthand of the level. The parameter `LOG_TAG` is the log tag, the parameter `...` is the log content, and the format is the same as printf. This API is useful when logs with different tags are output in one file.

| **API** |**Description** |
|-------------------------|--------------------------|
| LOG_E(...)| Error level log |
| LOG_W(...) | Warning level log |
| LOG_I(...) | Information level log |
| LOG_D(...)| Debug level log |
| LOG_RAW(...) | Output raw log |
| LOG_HEX(name, width, buf, size)| Output data to the log in hexadecimal format |

The `LOG_X` and `ulog_x` APIs output formatted logs. When you need to output a log without any formatting, you can use `LOG_RAW` or `ulog_raw()`. For example:

```c
LOG_RAW("\r");
ulog_raw("\033[2A");
```

You can use `LOG_HEX()` or `ulog_hex` to dump data into the log in hexadecimal (hex) format. The parameters and descriptions are as follows:

| **Parameter** | **Description** |
| ---- | -------------------------- |
| tag | Log tag |
| width | Width (number of bytes) of one line of hex content |
| buf | Data content to be output |
| size | Data size |

The hexdump log is of DEBUG level and supports runtime level filtering; the tag of a hexdump log also supports runtime tag filtering.

Ulog also provides the assertion API `ASSERT(expression)`. When the assertion is triggered, the system stops running and `ulog_flush()` is executed internally, so all log backends flush their output.
If asynchronous mode is enabled, the logs remaining in the buffer are also flushed. An example of using the assertion is as follows:

```c
void show_string(const char *str)
{
    ASSERT(str);
    ...
}
```

## ULog Usage Example

### Example

The following describes the ulog routine. Open `rt-thread\examples\ulog_example.c` and you can see the tag and the static priority defined at the top.

```c
#define LOG_TAG              "example"
#define LOG_LVL              LOG_LVL_DBG
#include <ulog.h>
```

The `LOG_X` API is used in the `void ulog_example(void)` function, roughly as follows:

```c
/* output different level log by LOG_X API */
LOG_D("LOG_D(%d): RT-Thread is an open source IoT operating system from China.", count);
LOG_I("LOG_I(%d): RT-Thread is an open source IoT operating system from China.", count);
LOG_W("LOG_W(%d): RT-Thread is an open source IoT operating system from China.", count);
LOG_E("LOG_E(%d): RT-Thread is an open source IoT operating system from China.", count);
```

These log output APIs support the printf format and automatically append a line break at the end of the log.

The following shows the effect of the ulog routine on QEMU:

- Copy `rt-thread\examples\ulog_example.c` to the `rt-thread\bsp\qemu-vexpress-a9\applications` folder.
- Go to the `rt-thread\bsp\qemu-vexpress-a9` directory in Env.
- After making sure ulog has been configured as described above, execute the `scons` command and wait for the compilation to complete.
- Run `qemu.bat` to open RT-Thread's QEMU simulator.
- Enter the `ulog_example` command to see the results of the ulog routine. The effect is as follows.

![ulog routine](figures/ulog_example.png)

You can see that each log occupies one row and that logs of different levels have different colors. At the beginning of each log is the current system tick, the log level and tag are in the middle, and the specific log content is at the end. These log formats and their configuration are also covered later in this article.

### Used in Interrupt ISR

It is often necessary to output logs in an interrupt ISR, but the ISR may interrupt a thread that is in the middle of outputting a log. To ensure that interrupt logs and thread logs do not interfere with each other, the interrupt case must be handled specially.

Ulog integrates ISR logging, but it is not enabled by default; turn on the `Enable ISR log` option to use it. The log API is the same as the one used in threads, for example:

```c
#define LOG_TAG              "driver.timer"
#define LOG_LVL              LOG_LVL_DBG
#include <ulog.h>

void Timer2_Handler(void)
{
    /* enter interrupt */
    rt_interrupt_enter();

    LOG_D("I'm in timer2 ISR");

    /* leave interrupt */
    rt_interrupt_leave();
}
```

The strategies for interrupt logging in ulog differ between synchronous mode and asynchronous mode:

**In synchronous mode**: if a thread is interrupted while it is outputting a log and the interrupt also has a log to output, the interrupt log is output directly to the console; output to other backends is not supported.

**In asynchronous mode**: in the same situation, the interrupt log is first put into the buffer and is later handed, together with the thread logs, to the log output thread for processing.

### Set the Log Format

The log format supported by ulog can be configured in menuconfig, under `RT-Thread Components` → `Utilities` → `ulog` → `log format`.
The specific configuration is as follows: + +![ulog format configuration](figures/ulog_menuconfig_format.png) + +They can be configured separately: floating-point number support (traditional rtdbg/rt_kprintf does not support floating-point logs), colored logs, time information (including timestamps), level information, tag information, thread information. Below we will **select all of these options**, save and recompile and run the ulog routine again in qemu to see the actual effect: + +![ulog routine (all formats)](figures/ulog_example_all_format.png) + +It can be seen that the time information has been changed from the tick value of the system to the timestamp information compared to the first run routine, and the thread information has also been output. + +### Hexdump Output Using + +Hexdump is also a more common function when logging output. hexdump can output a piece of data in hex format. The corresponding API is: `void ulog_hexdump(const char *tag, rt_size_t width, rt_uint8_t *buf, rt_size_t size)` , see below the specific use method and operation effect: + +```c +/* Define an array of 128 bytes in length */ +uint8_t i, buf[128]; +/* Fill the array with numbers */ +for (i = 0; i < sizeof(buf); i++) +{ + buf[i] = i; +} +/* Dumps the data in the array in hex format with a width of 16 */ +ulog_hexdump("buf_dump_test", 16, buf, sizeof(buf)); +``` + +You can copy the above code into the ulog routine, and then look at the actual running results: + +![ulog routine (hexdump)](figures/ulog_example_hexdump.png) + +It can be seen that the middle is the hexadecimal information of the buf data, and the rightmost is the character information corresponding to each data. + +## Log Advanced Features + +After understanding the introduction of the log in the previous section, the basic functions of ulog can be mastered. In order to let everyone better use ulog, this application note will focus on the advanced features of ulog and some experience and skills in log debugging. After learning these advanced uses, developers can also greatly improve the efficiency of log debugging. + +It also introduces the advanced mode of ulog: syslog mode, which is fully compatible with the Linux syslog from the front-end API to the log format, greatly facilitating the migration of software from Linux. + +### Log Backend + +![Ulog framework](figures/ulog_framework_backend.png) + +Speaking of the backend, let's review the ulog's framework. As can be seen from the above figure, ulog is a design with front and back ends separated, and there is no dependence on the front and back ends. And the backends that are supported are diversified, no matter what kind of backends, as long as they are implemented, they can be registered. + +Currently ulog has integrated the console backend, the traditional device that outputs `rt_kprintf` print logs. Ulog also supports the Flash backend, which seamlessly integrates with EasyFlash. See its package for details.([Click to view](https://github.com/armink-rtt-pkgs/ulog_easyflash_be))。Later ulog will also increase the implementation of backends such as file backends and network backends. Of course, if there are special needs, users can also implement the backend themselves. 
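Before looking at the concrete APIs, the following is a minimal, hypothetical sketch of what a custom backend can look like: it simply copies every log line into a RAM buffer. The registration function and the `struct ulog_backend` fields it relies on are documented in the subsections below; names such as `ram_log_output` and `ram_log_backend_init` are invented for illustration.

```c
#include <rtthread.h>
#include <ulog.h>
#include <string.h>

/* Sketch: a backend that keeps the most recent log output in a RAM buffer */
#define RAM_LOG_BUF_SIZE 2048

static struct ulog_backend ram_log;          /* backend control block */
static char ram_log_buf[RAM_LOG_BUF_SIZE];   /* storage for recent logs */
static rt_size_t ram_log_pos = 0;

static void ram_log_output(struct ulog_backend *backend, rt_uint32_t level,
                           const char *tag, rt_bool_t is_raw,
                           const char *log, size_t len)
{
    /* level, tag and is_raw are ignored in this sketch */
    (void)backend; (void)level; (void)tag; (void)is_raw;

    /* wrap around when the buffer is full (oldest data is overwritten) */
    if (ram_log_pos + len > RAM_LOG_BUF_SIZE)
        ram_log_pos = 0;
    if (len > RAM_LOG_BUF_SIZE)
        len = RAM_LOG_BUF_SIZE;

    memcpy(&ram_log_buf[ram_log_pos], log, len);
    ram_log_pos += len;
}

static int ram_log_backend_init(void)
{
    ram_log.output = ram_log_output;                  /* only output is mandatory */
    ulog_backend_register(&ram_log, "ram", RT_FALSE); /* no color codes wanted in RAM */
    return 0;
}
INIT_COMPONENT_EXPORT(ram_log_backend_init);
```

Because only `output` is implemented, registration alone is enough here; a backend that buffers data somewhere else would typically also implement `flush` so that `ulog_flush()` can empty it during assertions or hardfaults.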
#### Register Backend Device

```c
rt_err_t ulog_backend_register(ulog_backend_t backend, const char *name, rt_bool_t support_color)
```

| **Parameter** | **Description** |
| :----- | :----- |
|backend | Backend device handle |
|name| Backend device name |
|support_color| Whether color logs are supported |
|**Return**|-- |
|>=0 | Succeeded |

This function registers a backend device with ulog. Make sure the function members of the backend device structure are set before registration.

#### Unregister Backend Device

```c
rt_err_t ulog_backend_unregister(ulog_backend_t backend);
```

| **Parameter** | **Description** |
| :----- | :----- |
|backend | Backend device handle |
|**Return**|-- |
|>=0 | Succeeded |

This function unregisters a backend device that has previously been registered.

#### Backend Implementation and Registration Example

The console backend is taken as an example to briefly introduce how a backend is implemented and registered.

Open the `rt-thread/components/utilities/ulog/backend/console_be.c` file and you can see the following:

```c
#include <rthw.h>
#include <ulog.h>

/* define the console backend device */
static struct ulog_backend console;

/* console backend output function */
void ulog_console_backend_output(struct ulog_backend *backend, rt_uint32_t level, const char *tag, rt_bool_t is_raw, const char *log, size_t len)
{
    ...
    /* output the log to the console */
    ...
}

/* console backend initialization */
int ulog_console_backend_init(void)
{
    /* set the output function */
    console.output = ulog_console_backend_output;
    /* register the backend */
    ulog_backend_register(&console, "console", RT_TRUE);

    return 0;
}
INIT_COMPONENT_EXPORT(ulog_console_backend_init);
```

As the code above shows, implementing the console backend is very simple: the `output` function of the backend device is implemented and the backend is registered with ulog, after which ulog's logs are output to the console.

If you want to implement a more complex backend device, you need to understand the backend device structure, shown below:

```c
struct ulog_backend
{
    char name[RT_NAME_MAX];
    rt_bool_t support_color;
    void (*init)  (struct ulog_backend *backend);
    void (*output)(struct ulog_backend *backend, rt_uint32_t level, const char *tag, rt_bool_t is_raw, const char *log, size_t len);
    void (*flush) (struct ulog_backend *backend);
    void (*deinit)(struct ulog_backend *backend);
    rt_slist_t list;
};
```

From the perspective of this structure, the requirements for implementing a backend device are:

* The `name` and `support_color` attributes can be passed in through the `ulog_backend_register()` function.

* `output` is the backend-specific output function; every backend must implement this interface.

* `init`/`deinit` are optional; `init` is called during registration and `deinit` is called in `ulog_deinit`.

* `flush` is also optional; backends that cache output internally need to implement this interface, for example a file system with a RAM cache. The backend's flush is usually called by `ulog_flush` when an exception such as an assertion or a hardfault occurs.

### Asynchronous Log

In ulog, the default output mode is synchronous mode, but in many scenarios users may also need asynchronous mode.
When the user calls the log output API, the log is cached in the buffer, and the thread dedicated to the log output takes out the log and outputs it to the back end. + +Asynchronous mode and synchronous mode are the same for the user, there is no difference in the use of the log API, because ulog will distinguish between the underlying processing. The difference between the two works is as follows: + +![ulog asynchronous VS synchronization](figures/ulog_async_vs_sync.png) + +The advantages and disadvantages of asynchronous mode are as follows: + +**Advantage**: + +* First, the log output will not block the current thread, and some backend output rates are low, so using the synchronous output mode may affect the timing of the current thread. The asynchronous mode does not have this problem. + +* Secondly, since each thread that uses the log omits the action of the backend output, the stack overhead of these threads may also be reduced, and from this perspective, the resource consumption of the entire system can also be reduced. + +* Interrupt logs in synchronous mode can only be output to the console backend, while in asynchronous mode interrupt logs can be output to all backends. + +**Disadvantage**:First, the asynchronous mode requires a log buffer. Furthermore, the output of the asynchronous log needs to be completed by a special thread, such as an idle thread or a user-defined thread, which is slightly more complicated to use. The overall sense of asynchronous mode resource occupancy will be higher than the synchronous mode. + +#### Configuration Option + +Use menuconfig in the Env tool to enter the ulog configuration options: + +```c + RT-Thread Components → Utilities → Enable ulog +``` + +The asynchronous mode related configuration options are described as follows: + +```c +[*] Enable async output mode. /* Enable asynchronous mode */ +(2048) The async output buffer size. /* Asynchronous buffer size, default is 2048*/ +[*] Enable async output by thread. /* Whether to enable the asynchronous log output thread in ulog, the thread will wait for log notification when it runs, and then output the log to all backends. This option is turned on by default, and can be turned off if you want to modify it to another thread, such as an idle thread. */ +(1024) The async output thread stack size. /* Asynchronous output thread stack size, default is 1024 */ +(30) The async output thread stack priority./* The priority of the asynchronous output thread, the default is 30*/ +``` + +When using the idle thread output, the implementation is simple, just call `rt_thread_idle_sethook(ulog_async_output)` at the application layer, but there are some limitations. + +* The idle thread stack size needs to be adjusted based on actual backend usage. + +* Because thread suspend operations are not allowed inside idle threads, backends such as Flash and networking may not be available based on idle threads. + +#### Use Example + +Save the asynchronous output option configuration and copy `rt-thread\examples\ulog_example.c` to the `rt-thread\bsp\qemu-vexpress-a9\applications` folder. + +Execute the `scons` command and wait for the compilation to complete. Run `qemu.bat` to open the qemu emulator for RT-Thread. +Enter the `ulog_example` command to see the results of the ulog routine. 
The approximate effect is as follows:

![ulog asynchronous routine](figures/ulog_example_async.png)

If you look carefully, you can see that with asynchronous mode enabled, logs that are produced very close together in the code have almost identical time information. In synchronous mode, by contrast, the log is output by the calling thread itself, and since output takes a certain amount of time there is a noticeable interval between the logs. This also shows that asynchronous log output is very efficient and costs the caller almost no time.

### Log Dynamic Filter

The previous sections introduced some static filtering functions. Static filtering has advantages such as saving resources, but in many cases users need to adjust the log filtering dynamically while the software is running. This is what ulog's dynamic filter function is for. To use dynamic filtering, turn on the `Enable runtime log filter.` option in menuconfig, which is **turned off by default**.

Ulog supports four types of dynamic filtering, each with a corresponding API function and Finsh/MSH command; they are introduced one by one below.

#### Filter by Module Level

```c
int ulog_tag_lvl_filter_set(const char *tag, rt_uint32_t level)
```

| **Parameter** | **Description** |
| ------- | ------------------------------ |
| tag | Log tag |
| level | Log level to set |
|**Return**|-- |
| >=0 | Succeeded |
| -5 | Failed, not enough memory |

* Command format: `ulog_tag_lvl <tag> <level>`

The **module** referred to here is a class of log code with the same tag attribute. It is sometimes necessary to modify the log output level of a module dynamically at runtime.

The parameter `level` can take the following values:

|**Level** |**Name** |
| --------------------- | ---------------- |
| LOG_LVL_ASSERT | Assertion |
| LOG_LVL_ERROR | Error |
| LOG_LVL_WARNING | Warning |
| LOG_LVL_INFO | Information |
| LOG_LVL_DBG | Debug |
| LOG_FILTER_LVL_SILENT | Silent (stop output) |
| LOG_FILTER_LVL_ALL | All |

An example of function calls and commands is as follows:

| Function | Function Call | Execute an order |
| ---------------- | ------------------------------ | ------------------ |
| Close all logs of the `wifi` module | `ulog_tag_lvl_filter_set("wifi", LOG_FILTER_LVL_SILENT);` | `ulog_tag_lvl wifi 0` |
| Open all logs of the `wifi` module | `ulog_tag_lvl_filter_set("wifi", LOG_FILTER_LVL_ALL);` | `ulog_tag_lvl wifi 7` |
| Set the `wifi` module log level to warning | `ulog_tag_lvl_filter_set("wifi", LOG_LVL_WARNING);` | `ulog_tag_lvl wifi 4` |

#### Global Filtering by Label

```c
void ulog_global_filter_tag_set(const char *tag)
```

| **Parameter** | **Description** |
| :--- | :------------- |
| tag | Filter tag to set |

* Command format: `ulog_tag [tag]`; when the tag is empty, tag filtering is canceled.

This filtering method applies tag filtering to all logs: only logs whose **tag contains the set string** are allowed to be output.

For example, suppose there are three tags: `wifi.driver`, `wifi.mgnt`, and `audio.driver`. When the filter tag is set to `wifi`, only logs whose tags are `wifi.driver` and `wifi.mgnt` are output. Similarly, when the filter tag is set to `driver`, only logs with the tags `wifi.driver` and `audio.driver` are output.
+
+#### Global Filtering by Label
+
+```c
+void ulog_global_filter_tag_set(const char *tag)
+```
+
+| **Parameter** | **Description** |
+| :--- | :------------- |
+| tag | Filter tag to set |
+
+* Command format: `ulog_tag [tag]`; when the tag is empty, tag filtering is canceled.
+
+This filtering method applies tag filtering to all logs, and only logs **whose tag contains the specified string** are allowed to be output.
+
+For example, suppose there are three tags: `wifi.driver`, `wifi.mgnt`, and `audio.driver`. When the filter tag is set to `wifi`, only logs with the tags `wifi.driver` and `wifi.mgnt` will be output. Similarly, when the filter tag is set to `driver`, only logs with the tags `wifi.driver` and `audio.driver` will be output. Examples of function calls and commands are as follows:
+
+| Function | Function Call | Command |
+| -------------| -------------------- | ---------- |
+| Set the filter tag to `wifi` | `ulog_global_filter_tag_set("wifi");` | `ulog_tag wifi` |
+| Set the filter tag to `driver` | `ulog_global_filter_tag_set("driver");` | `ulog_tag driver` |
+| Cancel tag filtering | `ulog_global_filter_tag_set("");` | `ulog_tag` |
+
+#### Global Filtering by Level
+
+```c
+void ulog_global_filter_lvl_set(rt_uint32_t level)
+```
+
+| **Parameter** | **Description** |
+| ---- | -------------------|
+| level | Log level to set |
+
+* Command format: `ulog_lvl <level>`; for the `level` values refer to the following table:
+
+| **Value** | **Description** |
+| :------------ | :--------------- |
+| 0 | assertion |
+| 3 | error |
+| 4 | warning |
+| 6 | information |
+| 7 | debug |
+
+After the global filter level is set by function or command, logs below the **configured level** stop being output. Examples of function calls and commands are as follows:
+
+| Function | Function Call | Command |
+| ----------| ------------------------------ | ------- |
+| Close all logs | `ulog_global_filter_lvl_set(LOG_FILTER_LVL_SILENT);` | `ulog_lvl 0` |
+| Open all logs | `ulog_global_filter_lvl_set(LOG_FILTER_LVL_ALL);` | `ulog_lvl 7` |
+| Set the log level to warning | `ulog_global_filter_lvl_set(LOG_LVL_WARNING);` | `ulog_lvl 4` |
+
+#### Global Filtering by Keyword
+
+```c
+void ulog_global_filter_kw_set(const char *keyword)
+```
+
+| **Parameter** | **Description** |
+| :------ | :--------------- |
+| keyword | Filter keyword to set |
+
+* Command format: `ulog_kw [keyword]`; when the keyword is empty, keyword filtering is canceled.
+
+This filtering method filters all logs by keyword, and only logs **containing the keyword** are allowed to be output. Examples of function calls and commands are as follows:
+
+| Function | Function Call | Command |
+| -------------- | ------------------- | --------- |
+| Set the filter keyword to `wifi` | `ulog_global_filter_kw_set("wifi");` | `ulog_kw wifi` |
+| Clear the filter keyword | `ulog_global_filter_kw_set("");` | `ulog_kw` |
+
+#### View Filter Information
+
+After setting the filter parameters, you can view the current filter information by entering the `ulog_filter` command. The approximate effect is as follows:
+
+```c
+msh />ulog_filter
+--------------------------------------
+ulog global filter:
+level : Debug
+tag : NULL
+keyword : NULL
+--------------------------------------
+ulog tag's level filter:
+wifi : Warning
+audio.driver : Error
+msh />
+```
+
+> The filter parameters can also be stored in Flash and loaded automatically at boot. If you need this feature, please see the instructions for the **ulog_easyflash** package ([click to check](https://github.com/armink-rtt-pkgs/ulog_easyflash_be)).
+
+#### Use Example
+
+Still running in the qemu BSP, first enable the dynamic filter in menuconfig, then save the configuration, compile, and run the routine. After the log has been output about **20** times, the corresponding filter code in ulog_example.c is executed:
+
+```c
+if (count == 20)
+{
+    /* Set the global filer level is INFO. All of DEBUG log will stop output */
+    ulog_global_filter_lvl_set(LOG_LVL_INFO);
+    /* Set the test tag's level filter's level is ERROR. The DEBUG, INFO, WARNING log will stop output. */
+    ulog_tag_lvl_filter_set("test", LOG_LVL_ERROR);
+}
+...
+```
+
+At this point, the global filter level is set to INFO, so logs below the INFO level are no longer visible. At the same time, the log output level of the `test` tag is set to ERROR, so logs below ERROR under the `test` tag also stop. Each log carries a count value of the current log output count. The comparison looks as follows:
+
+![ulog filter routine 20](figures/ulog_example_filter20.png)
+
+After the log has been output about **30** times, the following filter code in ulog_example.c is executed:
+
+```c
+...
+else if (count == 30)
+{
+    /* Set the example tag's level filter's level is LOG_FILTER_LVL_SILENT, the log enter silent mode. */
+    ulog_tag_lvl_filter_set("example", LOG_FILTER_LVL_SILENT);
+    /* Set the test tag's level filter's level is WARNING. The DEBUG, INFO log will stop output. */
+    ulog_tag_lvl_filter_set("test", LOG_LVL_WARNING);
+}
+...
+```
+
+At this point, a filter for the `example` module has been added and all logs of this module are stopped, so its logs are no longer visible. At the same time, the log output level of the `test` tag is lowered to WARNING, so only the WARNING and ERROR level logs of the `test` tag can be seen. The effect is as follows:
+
+![ulog filter routine 30](figures/ulog_example_filter30.png)
+
+After the log has been output about **40** times, the following filter code in ulog_example.c is executed:
+
+```c
+...
+else if (count == 40)
+{
+    /* Set the test tag's level filter's level is LOG_FILTER_LVL_ALL. All level log will resume output. */
+    ulog_tag_lvl_filter_set("test", LOG_FILTER_LVL_ALL);
+    /* Set the global filer level is LOG_FILTER_LVL_ALL. All level log will resume output */
+    ulog_global_filter_lvl_set(LOG_FILTER_LVL_ALL);
+}
+```
+
+At this time, the log output level of the `test` module is adjusted to `LOG_FILTER_LVL_ALL`, that is, logs of any level from this module are no longer filtered. At the same time, the global filter level is set to `LOG_FILTER_LVL_ALL`, so all logs of the `test` module resume output. The effect is as follows:
+
+![ulog filter routine 40](figures/ulog_example_filter40.png)
+
+### Usage when the System is Abnormal
+
+The asynchronous mode of ulog has a caching mechanism, and the registered backends may also cache logs internally. If an error such as a hardfault or an assertion failure occurs while logs are still in the cache and not yet output, those logs may be lost and the cause of the exception cannot be found.
+
+For this scenario, ulog provides a unified log flush function: `void ulog_flush(void)`. When an exception occurs, this function should be called right after the exception information is logged, to ensure that the remaining logs in the cache are also output to the back end.
+
+The following are examples for RT-Thread assertions and CmBacktrace:
+
+#### Assertion
+
+RT-Thread assertions support assertion callbacks. We can define an assertion hook function similar to the following and then register it with the system via the `rt_assert_set_hook(rtt_user_assert_hook);` function (a registration sketch follows the code below).
+
+```c
+static void rtt_user_assert_hook(const char* ex, const char* func, rt_size_t line)
+{
+    rt_enter_critical();
+
+    ulog_output(LOG_LVL_ASSERT, "rtt", RT_TRUE, "(%s) has assert failed at %s:%ld.", ex, func, line);
+    /* flush all log */
+    ulog_flush();
+    while(1);
+}
+```
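+
+The registration step could look like the following minimal sketch (the init function name is illustrative, and `rtt_user_assert_hook` refers to the hook defined above, which must be visible in the same source file):
+
+```c
+#include <rtthread.h>
+
+/* Illustrative registration of the assertion hook defined above. */
+static int assert_hook_init(void)
+{
+    rt_assert_set_hook(rtt_user_assert_hook);
+    return 0;
+}
+INIT_APP_EXPORT(assert_hook_init);
+```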
+
+#### CmBacktrace
+
+CmBacktrace is a fault diagnosis library for ARM Cortex-M series MCUs. It also has a corresponding RT-Thread package, and the latest version of the package has been adapted for ulog. The adaptation code is located in `cmb_cfg.h`:
+
+```c
+...
+/* print line, must config by user */
+#include <rtthread.h>
+
+#ifndef RT_USING_ULOG
+#define cmb_println(...) rt_kprintf(__VA_ARGS__);rt_kprintf("\r\n")
+#else
+#include <ulog.h>
+#define cmb_println(...) ulog_e("cmb", __VA_ARGS__);ulog_flush()
+#endif /* RT_USING_ULOG */
+...
+```
+
+It can be seen that when ulog is enabled, every log output of CmBacktrace uses the error level and `ulog_flush` is executed at the same time; the user does not need to make any modifications.
+
+### Syslog Mode
+
+On Unix-like operating systems, syslog is widely used for system logging. The common backends of syslog are files and the network: syslog logs can be recorded in local files or sent over the network to a server that receives them.
+
+ulog provides support for the syslog mode: not only is the front-end API exactly the same as the syslog API, but the log format is also RFC compliant. Note, however, that once syslog mode is enabled, the output format of the entire ulog will be the syslog format, regardless of which log output API is used.
+
+To use syslog, you need to enable the `Enable syslog format log and API.` option.
+
+#### Log Format
+
+![ulog syslog format](figures/ulog_syslog_format.png)
+
+As shown in the figure above, the ulog syslog log format is divided into the following four parts:
+
+| Format | **Description** |
+| ---- | --------------------- |
+| PRI | The PRI part consists of a number enclosed in angle brackets. This number combines the Facility and Severity information: Facility multiplied by 8, plus Severity. Facility and Severity are passed in by the syslog function; for details, see syslog.h. |
+| Header | The Header part is mainly a timestamp indicating the time of the current log. |
+| TAG | The current log tag, which can be passed in via the `openlog` function. If not specified, `rtt` is used as the default tag. |
+| Content | The specific content of the log |
+
+#### Instruction
+
+The syslog option needs to be enabled in menuconfig before use. The main APIs are:
+
+* Open syslog: `void openlog(const char *ident, int option, int facility)`
+
+* Output a syslog log: `void syslog(int priority, const char *format, ...)`
+
+> Hint: Calling `openlog` is optional. If you do not call `openlog`, it is called automatically the first time syslog is called.
+
+syslog() is very simple to use; the input format is the same as the printf function. There are also syslog routines in ulog_example.c, which run in qemu as follows:
+
+![ulog syslog routine](figures/ulog_example_syslog.png)
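+
+In addition to the routine shown above, a minimal hand-written call sequence might look like the following sketch (the function name and messages are illustrative, and the header name and the standard `LOG_USER` / `LOG_INFO` / `LOG_ERR` macros are assumed to be provided by ulog's syslog support):
+
+```c
+#include <syslog.h>
+
+void syslog_sample(void)
+{
+    /* optional: set the log tag ("ident"); otherwise the default tag "rtt" is used */
+    openlog("sample", 0, LOG_USER);
+
+    /* same formatting rules as printf */
+    syslog(LOG_INFO, "hello syslog, value = %d", 42);
+    syslog(LOG_ERR, "an error-level message");
+}
+```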
+
+### Migrate from *rt_dbg.h* or elog to ulog
+
+If one of these two log components was used in a project before, and ulog is now to be used, the question is how to make the existing code work with ulog as well. The migration process is described below.
+
+#### Migrate from rt_dbg.h
+
+rtdbg is now **seamlessly integrated** with ulog. Once ulog is enabled, the existing rtdbg code in an old project can output logs through ulog without any modification.
+
+#### Migrate from Elog(EasyLogger)
+
+If you are not sure whether a source file will run on a target platform that uses ulog, it is recommended to add the following changes to the file:
+
+```c
+#ifdef RT_USING_ULOG
+#include <ulog.h>
+#else
+#include <elog.h>
+#endif /* RT_USING_ULOG */
+```
+
+If you are certain that only the ulog component is used, simply change the header file reference from `elog.h` to `ulog.h`; no other code needs to be changed.
+
+### Log Usage Tip
+
+If a logging tool is used improperly, logs can become abused and the important information in them is no longer highlighted. Here we share some tips on using the log component so that log information stays intuitive. The main points are:
+
+#### Rational Use of Label Classification
+
+Use the tag function reasonably. Before adding logs to each piece of module code, first decide on the module and sub-module names. This categorizes the logs from the very beginning and prepares them for later log filtering.
+
+#### Rational Use of Log Levels
+
+When first using a log library, people often choose log levels inappropriately: warnings cannot be distinguished from errors, and information cannot be distinguished from debug logs. As a result, important logs may not be visible while unimportant logs are everywhere. Therefore, be sure to read the log level section carefully before use; it gives clear criteria for each level.
+
+#### Avoid Repetitive and Redundant Logs
+
+In some cases, code is executed repeatedly or in a loop, and the same or similar logs are output many times. Such logs not only take up a lot of system resources, but also make it harder for developers to locate problems. It is therefore recommended to add special handling for repetitive logs, for example: let the upper layer output the business-related logs while the lower layer only returns the result status, or de-duplicate identical logs produced at the same point so that they are output only once while the error state remains unchanged.
+
+#### Open More Log Formats
+
+The timestamp and thread information are not enabled in ulog's default log format. These two fields are very useful on an RTOS: they help developers understand the running time of and interval between logs, and clearly show which thread is executing the current code. So, if conditions permit, it is recommended to enable them.
+
+#### Close Unimportant Logs
+
+ulog provides switch and filtering functions for logs in various dimensions, allowing fine-grained control. Therefore, when debugging a particular function module, the log output of other unrelated modules can be turned off, so that you can focus on the module currently being debugged.
+
+## Common Problems
+
+### Q: The log code has been executed, but there is no output.
+
+ **A:** Refer to the Log Levels section for the log level classification and check the log filtering parameters. It is also possible that the console backend was accidentally disabled; re-enable `Enable console backend`.
+
+### Q: After ulog is turned on, the system crashes, for example with a thread stack overflow.
+
+ **A:** ulog occupies more thread stack space than the previous rtdbg or `rt_kprintf` output functions. If floating-point printing support is enabled, ulog uses libc's `vsnprintf` internally, which needs even more stack; it is recommended to reserve at least an additional 250 bytes.
+If the timestamp feature is also enabled, it is recommended to reserve roughly another 100 bytes of stack.
+
+### Q: The end of the log content is missing.
+
+ **A:** This happens when the log content exceeds the configured maximum log width. Check the `The log's max width` option and increase it to an appropriate size.
+
+### Q: After turning on the timestamp, why can't I see the millisecond time?
+
+ **A:** ulog currently supports millisecond timestamps only when the software-emulated RTC is enabled. To display milliseconds, simply enable the RT-Thread software RTC function.
+
+### Q: LOG_TAG and LOG_LVL have to be defined before including the ulog header file in every source file. Can this be simplified?
+
+**A:** If `LOG_TAG` is not defined, the default `NO_TAG` tag is used, so the output logs are easily misinterpreted. Therefore, it is not recommended to omit the tag macro.
+
+If `LOG_LVL` is not defined, the debug level is used by default. This macro can be omitted while a module is still under development, but once the module code is stable it is recommended to define the macro and set the level to the information level.
+
+### Q: Warning when running: Warning: There is no enough buffer for saving async log, please increase the ULOG_ASYNC_OUTPUT_BUF_SIZE option.
+
+**A:** This prompt indicates that the buffer used in asynchronous mode has overflowed, which causes part of the log to be lost. Increasing the ULOG_ASYNC_OUTPUT_BUF_SIZE option solves the problem.
+
+### Q: Compile-time prompt: The idle thread stack size must more than 384 when using async output by idle (ULOG_ASYNC_OUTPUT_BY_IDLE).
+
+**A:** When the idle thread is used as the output thread, its stack size needs to be increased depending on the specific backend devices. For example, with the console as the backend, the idle thread stack must be at least 384 bytes.
diff --git a/documentation/utest/figures/UtestAppStruct-1.png b/documentation/utest/figures/UtestAppStruct-1.png
new file mode 100644
index 0000000000..1925cbe767
Binary files /dev/null and b/documentation/utest/figures/UtestAppStruct-1.png differ
diff --git a/documentation/utest/figures/UtestRunLogShow.png b/documentation/utest/figures/UtestRunLogShow.png
new file mode 100644
index 0000000000..0de5e47c03
Binary files /dev/null and b/documentation/utest/figures/UtestRunLogShow.png differ
diff --git a/documentation/utest/figures/testcase-runflowchart.jpg b/documentation/utest/figures/testcase-runflowchart.jpg
new file mode 100644
index 0000000000..76117ac2c1
Binary files /dev/null and b/documentation/utest/figures/testcase-runflowchart.jpg differ
diff --git a/documentation/utest/utest.md b/documentation/utest/utest.md
new file mode 100644
index 0000000000..d496fbed07
--- /dev/null
+++ b/documentation/utest/utest.md
@@ -0,0 +1,274 @@
+# utest Framework
+
+## utest Introduction
+
+utest (unit test) is a unit testing framework developed by RT-Thread. It is designed to make it easier for RT-Thread developers to write test programs against a unified framework interface, for unit testing, coverage testing, and integration testing.
+
+### Test Case Definition
+
+A test case (tc) is a single test performed to achieve a specific test objective. It is a specification that includes the test input, execution conditions, test procedure, and expected results, and it is a finite loop with clear end conditions and clear test results.
+
+The utest (unit test) framework defines a user-written test program as a **test case**; a test case contains exactly one *testcase* function (similar to a main function), which may contain multiple *test unit* functions.
+
+In other words, the test code written for a given function through the API provided by the utest framework constitutes a test case.
+
+### Test Unit Definition
+
+A test unit is a test point into which the function under test is subdivided. Each test point can be the smallest measurable unit of the function under test; of course, different ways of classification will produce different test units.
+
+### utest Application Block Diagram
+
+![utest Application Block Diagram](./figures/UtestAppStruct-1.png)
+
+As shown in the figure above, test cases are designed on top of the service interface provided by the utest test framework, which supports compiling multiple test cases together for testing. As the figure also shows, each test case corresponds to a unique *testcase* function, and multiple test units are contained in *testcase*.
+
+## utest API
+
+To keep test case code uniform, the utest test framework provides a common set of API interfaces for writing test cases.
+
+### Macros of assertion
+
+> NOTE:
+> Here "assert" only records the number of passes and failures; it does not generate an assertion or terminate program execution. Its function is not equivalent to RT_ASSERT.
+
+| assert Macro | Description |
+| :------ | :------ |
+| uassert_true(value) | If the value is true then the test passes, otherwise the test fails. |
+| uassert_false(value) | If the value is false then the test passes, otherwise the test fails. |
+| uassert_null(value) | If the value is null then the test passes, otherwise the test fails. |
+| uassert_not_null(value)| If the value is non-null then the test passes, otherwise the test fails. |
+| uassert_int_equal(a, b)| If the values of a and b are equal, the test passes, otherwise the test fails. |
+| uassert_int_not_equal(a, b)| If the values of a and b are not equal, the test passes, otherwise the test fails. |
+| uassert_str_equal(a, b) | If the string a and the string b are the same, the test passes, otherwise the test fails. |
+| uassert_str_not_equal(a, b)| If the string a and the string b are not the same, the test passes, otherwise the test fails. |
+| uassert_in_range(value, min, max) | If the value is in the range [min, max], the test passes, otherwise the test fails. |
+| uassert_not_in_range(value, min, max)| If the value is not in the range [min, max], the test passes, otherwise the test fails. |
+
+### Macros for Running Test Units
+
+```c
+UTEST_UNIT_RUN(test_unit_func)
+```
+
+In a test case, the specified test unit function `test_unit_func` is executed using the `UTEST_UNIT_RUN` macro. Test units must be executed with the `UTEST_UNIT_RUN` macro.
+
+### Macros for Exporting Test Cases
+
+```c
+UTEST_TC_EXPORT(testcase, name, init, cleanup, timeout)
+```
+
+| Parameters | Description |
+| :----- | :------ |
+| testcase | The main test case function (it **must** be a function of the form *static void testcase(void)*) |
+| name | Test case name (must be unique). The naming format is the path of the test case relative to the `testcases` directory, joined with `.` |
+| init | Initialization function executed before the test case starts |
+| cleanup | Cleanup function executed after the test case ends |
+| timeout | Expected test time of the test case (in seconds) |
+
+**Test case naming requirements:**
+
+Test cases must be named in the prescribed format: the path of the test case file relative to the `testcases` directory, with the path separators replaced by `.`. The name ends with the file name of the test case file (without the suffix).
+
+**Test case naming example:**
+
+Assuming there is a test case file `testcases\components\filesystem\dfs\dfs_api_tc.c` in the `testcases` directory, the test case in `dfs_api_tc.c` is named `components.filesystem.dfs.dfs_api_tc`.
+
+### Test Case LOG Output Interface
+
+The utest framework relies on the *ulog log module* for log output, and the log output levels of ulog are used within the utest framework. Simply add `#include "utest.h"` to a test case to use all level interfaces (LOG_D/LOG_I/LOG_E) of the ulog log module.
+
+In addition, the utest framework provides an extra log control interface, as follows:
+
+```c
+#define UTEST_LOG_ALL    (1u)
+#define UTEST_LOG_ASSERT (2u)
+
+void utest_log_lv_set(rt_uint8_t lv);
+```
+
+Users can use the `utest_log_lv_set` interface to control the log output level in test cases. The `UTEST_LOG_ALL` configuration outputs all logs, while the `UTEST_LOG_ASSERT` configuration only outputs the logs of failed uassert checks.
+
+## Configuration Enable
+
+Using the utest framework requires the following configuration in menuconfig from the Env tool:
+
+```c
+RT-Thread Kernel --->
+    Kernel Device Object --->
+        (256) the buffer size for console log printf /* The minimum buffer required by the utest log */
+RT-Thread Components --->
+    Utilities --->
+        -*- Enable utest (RT-Thread test framework) /* Enable the utest framework */
+        (4096) The utest thread stack size /* Set the utest thread stack (required for -thread mode) */
+        (20) The utest thread priority /* Set the utest thread priority (required for -thread mode) */
+```
+
+## Application Paradigm
+
+The utest framework and its APIs were introduced above. The basic structure of test case code is described here.
+
+The code blocks necessary for a test case file are as follows:
+
+```c
+/*
+ * Copyright (c) 2006-2019, RT-Thread Development Team
+ *
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Change Logs:
+ * Date           Author       Notes
+ * 2019-01-16     MurphyZhao   the first version
+ */
+
+#include <rtthread.h>
+#include "utest.h"
+
+static void test_xxx(void)
+{
+    uassert_true(1);
+}
+
+static rt_err_t utest_tc_init(void)
+{
+    return RT_EOK;
+}
+
+static rt_err_t utest_tc_cleanup(void)
+{
+    return RT_EOK;
+}
+
+static void testcase(void)
+{
+    UTEST_UNIT_RUN(test_xxx);
+}
+UTEST_TC_EXPORT(testcase, "components.utilities.utest.sample.sample_tc", utest_tc_init, utest_tc_cleanup, 10);
+```
+
+A basic test case must contain the following:
+
+- File comment header (Copyright)
+
+  The test case file must contain a file comment header with `Copyright`, date, author, and description information.
+
+- utest_tc_init(void)
+
+  The initialization function run before the test; it is generally used to set up the environment required by the test.
+
+- utest_tc_cleanup(void)
+
+  The cleanup function run after the test; it is used to clean up the resources (such as memory, threads, semaphores, etc.) allocated during the test.
+
+- testcase(void)
+
+  The main function of the test case; a test case implementation can contain only one testcase function (similar to a main function). Usually this function is only used to run the test units via the `UTEST_UNIT_RUN` macro.
+
+  A testcase can contain multiple test units, each of which is executed by `UTEST_UNIT_RUN`.
+
+- UTEST_UNIT_RUN
+
+  The test unit execution macro.
+
+- test_xxx(void)
+
+  The test implementation of each functional unit. The user determines the function name and implementation based on the requirements.
+
+- uassert_true
+
+  An assertion macro used to determine the test result (this assertion macro does not terminate the program). Test cases must use the `uassert_xxx` macros to judge the test results, otherwise the test framework does not know whether the test passed.
+
+  Only after all of the `uassert_xxx` checks have passed is the whole test case considered passed.
+
+- UTEST_TC_EXPORT
+
+  Exports the test case testcase function to the test framework.
+
+## Requirements for running test cases
+
+The utest framework exports all test cases to the `UtestTcTab` code section. The `UtestTcTab` section does not need to be defined in the link script for the IAR and MDK compilers, but it must be set explicitly in the link script when compiling with GCC.
+
+Therefore, in order for test cases to be compiled and run under GCC, the `UtestTcTab` code section must be defined in the GCC *link script*.
+
+In the `.text` section of the GCC link script, add the definition of the `UtestTcTab` section in the following format:
+
+```c
+/* section information for utest */
+. = ALIGN(4);
+__rt_utest_tc_tab_start = .;
+KEEP(*(UtestTcTab))
+__rt_utest_tc_tab_end = .;
+```
+
+## Running Test Cases
+
+The test framework provides the following commands so that users can conveniently run test cases on the RT-Thread MSH command line:
+
+***utest_list* command**
+
+Lists the test cases supported by the current system, including the test case names and the time required for each test. This command takes no parameters.
+
+***utest_run* command**
+
+Runs test cases; the format of the command is as follows:
+
+```c
+utest_run [-thread or -help] [testcase name] [loop num]
+```
+
+| utest_run Command Parameters | Description |
+| :---- | :----- |
+| -thread | Run the test framework in thread mode |
+| -help | Print help information |
+| testcase name | Specify the test case name. The wildcard `*` is supported, and specifying only the leading part of the test case name is supported. |
+| loop num | The number of iterations of the test cases |
+
+**Example of test command usage:**
+
+```c
+msh />utest_list
+[14875] I/utest: Commands list :
+[14879] I/utest: [testcase name]:components.filesystem.dfs.dfs_api_tc; [run timeout]:30
+[14889] I/utest: [testcase name]:components.filesystem.posix.posix_api_tc; [run timeout]:30
+[14899] I/utest: [testcase name]:packages.iot.netutils.iperf.iperf_tc; [run timeout]:30
+msh />
+msh />utest_run components.filesystem.dfs.dfs_api_tc
+[83706] I/utest: [==========] [ utest ] started
+[83712] I/utest: [----------] [ testcase ] (components.filesystem.dfs.dfs_api_tc) started
+[83721] I/testcase: in testcase func...
+[84615] D/utest: [ OK ] [ unit ] (test_mkfs:26) is passed
+[84624] D/testcase: dfs mount rst: 0
+[84628] D/utest: [ OK ] [ unit ] (test_dfs_mount:35) is passed
+[84639] D/utest: [ OK ] [ unit ] (test_dfs_open:40) is passed
+[84762] D/utest: [ OK ] [ unit ] (test_dfs_write:74) is passed
+[84770] D/utest: [ OK ] [ unit ] (test_dfs_read:113) is passed
+[85116] D/utest: [ OK ] [ unit ] (test_dfs_close:118) is passed
+[85123] I/utest: [ PASSED ] [ result ] testcase (components.filesystem.dfs.dfs_api_tc)
+[85133] I/utest: [----------] [ testcase ] (components.filesystem.dfs.dfs_api_tc) finished
+[85143] I/utest: [==========] [ utest ] finished
+msh />
+```
+
+### Test result analysis
+
+![utest log display](./figures/UtestRunLogShow.png)
+
+As shown in the figure above, the log of a test case run is divided into four columns from left to right: `(1) log header information`, `(2) result bar`, `(3) property bar`, and `(4) detail information display bar`. The test result of the test case (PASSED or FAILED) is identified in the log by the `result` attribute.
+
+## Test Case Run Process
+
+![Test Case Run Process](./figures/testcase-runflowchart.jpg)
+
+From the above flow chart you can see the following:
+
+* The utest framework executes all **test units** in the *testcase* function sequentially.
+* If an assertion in a previous `UTEST_UNIT_RUN` macro has failed, all subsequent `UTEST_UNIT_RUN` calls skip execution.
+
+## NOTE
+
+- Make sure the link script has the `UtestTcTab` section added before compiling with GCC.
+- Make sure `RT-Thread Kernel -> Kernel Device Object -> (256) the buffer size for console log printf` is at least 256 bytes before compiling.
+- The resources (threads, semaphores, timers, memory, etc.) created in a test case need to be released before the test ends.
+- A test case implementation can export only one test body function (the testcase function) using `UTEST_TC_EXPORT`.
+- Write a `README.md` document for your test case to guide users through configuring the test environment.