rt-thread-official/components/drivers/smp_call/smp_call.h

feat: smp_call: added signaling call_req

This patch introduces the `rt_smp_call_request` API, which queues requests across cores with a user-provided data buffer and so offers a non-blocking way to request an IPI. It also resolves several issues in the old implementation:

- Multiple requests from different cores could not be queued in the work object of the target core.
- Data race on the `rt_smp_work` of the same core: if multiple requests arrived in turn, or if the call was in use by the target CPU while a new request came in, the value was overwritten.
- Memory vulnerability: the rt_smp_event was allocated on the stack, yet the caller might not wait until the call was done.
- API naming problem: there was never a way to issue an IPI to ANY one core in the mask; what the API does matches the MANY pattern.
- FUNC_IPI registration with the PIC.

Changes:

- Declared and configured the new `RT_SMP_CALL_IPI` to support functional IPIs for task requests across cores.
- Replaced the single `rt_smp_work` array with `call_req_cores` to manage per-core call requests safely.
- Added `_call_req_take` and `_call_req_release` functions for atomic handling of request lifetimes, preventing data races.
- Replaced single-event handling with a queue-based approach (`call_queue`) for efficient multi-request processing per core.
- Introduced `rt_smp_call_ipi_handler` to process queued requests, reducing IPI contention by sending new IPIs only when needed.
- Implemented `_smp_call_remote_request` to handle remote requests with specific flags, enabling more flexible core-to-core task signaling.
- Refined `rt_smp_call_req_init` to initialize and track requests with atomic usage flags, mitigating potential memory vulnerabilities.

Signed-off-by: Shell <smokewood@qq.com>
/*
 * Copyright (c) 2006-2024 RT-Thread Development Team
 *
 * SPDX-License-Identifier: Apache-2.0
 *
 * Change Logs:
 * Date           Author       Notes
 * 2024/9/12      zhujiale     the first version
 * 2024/10/24     Shell        added non-blocking IPI calling method
 */
#ifndef __SMP_IPI_H__
#define __SMP_IPI_H__
#include <rtthread.h>
/* callback of smp call */
typedef void (*rt_smp_call_cb_t)(void *data);
typedef rt_bool_t (*rt_smp_cond_t)(int cpu, void *info);
#define SMP_CALL_EVENT_GLOB_ASYNC 0x1
#define SMP_CALL_EVENT_GLOB_SYNC 0x2
#define SMP_CALL_EVENT_REQUEST 0x4
#define SMP_CALL_WAIT_ALL (1ul << 0)
#define SMP_CALL_NO_LOCAL (1ul << 1)
#define SMP_CALL_SIGNAL (1ul << 2)
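/*
 * Flag semantics as suggested by the names and the commit message above
 * (an inference; smp_call.c is authoritative):
 *   SMP_CALL_WAIT_ALL - block until every targeted core has run the call
 *   SMP_CALL_NO_LOCAL - skip the calling core even if it is in the mask
 *   SMP_CALL_SIGNAL   - queue a non-blocking, signaling call request
 */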
#define RT_ALL_CPU ((1 << RT_CPUS_NR) - 1)
struct rt_smp_event
{
    int event_id;
    void *data;
    rt_smp_call_cb_t func;
    union
    {
        rt_atomic_t *calling_cpu_mask;
        rt_atomic_t usage_tracer;
    } typed;
};
struct rt_smp_call_req
{
    /* handle the busy status synchronization */
    rt_hw_spinlock_t freed_lock;
    struct rt_smp_event event;
    rt_ll_slist_t slist_node;
};
void rt_smp_call_ipi_handler(int vector, void *param);
void rt_smp_call_each_cpu(rt_smp_call_cb_t func, void *data, rt_uint8_t flags);
void rt_smp_call_each_cpu_cond(rt_smp_call_cb_t func, void *data, rt_uint8_t flag, rt_smp_cond_t cond_func);
void rt_smp_call_cpu_mask(rt_ubase_t cpu_mask, rt_smp_call_cb_t func, void *data, rt_uint8_t flags);
void rt_smp_call_cpu_mask_cond(rt_ubase_t cpu_mask, rt_smp_call_cb_t func, void *data, rt_uint8_t flag, rt_smp_cond_t cond_func);
void rt_smp_call_init(void);
rt_err_t rt_smp_call_request(int callcpu, rt_uint8_t flags, struct rt_smp_call_req *call_req);
void rt_smp_call_req_init(struct rt_smp_call_req *call_req,
                          rt_smp_call_cb_t func, void *data);
void rt_smp_request_wait_freed(struct rt_smp_call_req *req);
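/*
 * Usage sketch (illustrative only; _do_work, _req and target_cpu are
 * hypothetical names, not part of this header). A non-blocking,
 * signaling cross-core request: per the commit message, the
 * rt_smp_call_req must stay valid until the target core releases it,
 * so it must not live on a stack frame that may unwind first.
 *
 *   static void _do_work(void *data)
 *   {
 *       // runs on the target core, typically in IPI context
 *   }
 *
 *   static struct rt_smp_call_req _req;
 *
 *   void kick_remote(int target_cpu)
 *   {
 *       rt_smp_call_req_init(&_req, _do_work, RT_NULL);
 *       rt_smp_call_request(target_cpu, SMP_CALL_SIGNAL, &_req);
 *       // optionally block until the target core has freed the request
 *       rt_smp_request_wait_freed(&_req);
 *   }
 */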
#define rt_smp_for_each_cpu(_iter) for (_iter = 0; (_iter) < RT_CPUS_NR; (_iter)++)
rt_inline size_t rt_smp_get_next_remote(size_t iter, size_t cpuid)
{
    iter++;
    return iter == cpuid ? iter + 1 : iter;
}
#define rt_smp_for_each_remote_cpu(_iter, _cpuid) for (_iter = rt_smp_get_next_remote(-1, _cpuid); (_iter) < RT_CPUS_NR; _iter = rt_smp_get_next_remote(_iter, _cpuid))
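/*
 * Usage sketch (illustrative): visit every core except the caller.
 * rt_smp_for_each_remote_cpu() walks 0..RT_CPUS_NR-1 skipping _cpuid;
 * the same remote set can also be expressed as a mask for
 * rt_smp_call_cpu_mask() (assuming rt_hw_cpu_id() returns the current
 * core id, and reusing the hypothetical _do_work from above):
 *
 *   int cpu;
 *   rt_smp_for_each_remote_cpu(cpu, rt_hw_cpu_id())
 *   {
 *       // cpu takes each value != rt_hw_cpu_id(), in ascending order
 *   }
 *
 *   rt_ubase_t mask = RT_ALL_CPU & ~(1ul << rt_hw_cpu_id());
 *   rt_smp_call_cpu_mask(mask, _do_work, RT_NULL, SMP_CALL_WAIT_ALL);
 */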
#endif