Replaced the k_pipe-based implementation in socketpair with a
ring_buffer-based implementation.
The move to ring_buffer avoids the overhead of k_pipe and aligns with
the new k_pipe API.
This does not add any concurrency risk, as the read and write
functions are protected by semaphores for both spairs.
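A minimal sketch of the shape this takes, assuming one ring buffer per
endpoint guarded by a semaphore (buffer size, names and error handling
are illustrative, not the actual socketpair internals):

  #include <zephyr/kernel.h>
  #include <zephyr/sys/ring_buffer.h>

  RING_BUF_DECLARE(spair_rb, 256);   /* hypothetical per-endpoint buffer */
  K_SEM_DEFINE(spair_sem, 1, 1);     /* serializes access to the buffer */

  static uint32_t spair_write_sketch(const uint8_t *data, size_t len)
  {
          uint32_t written;

          k_sem_take(&spair_sem, K_FOREVER);
          /* ring_buf_put() copies as much as fits and returns that count */
          written = ring_buf_put(&spair_rb, data, len);
          k_sem_give(&spair_sem);

          return written;
  }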
Signed-off-by: Måns Ansgariusson <Mansgariusson@gmail.com>
Convert the z_-prefixed semi-private functions in fdtable.h listed
below to use the zvfs_ prefix.
ZVFS stands for Zephyr Virtual File System and
is intended to be a common library used by the C library,
POSIX API, Networking, Filesystem, and other areas.
There are already a few functions in fdtable.h that use the
zvfs_ prefix, so this change is mostly about unifying them in
a way that uses a suitable prefix ("namespace") so that it can
be considered a public API.
- z_alloc_fd
- z_fdtable_call_ioctl
- z_finalize_fd
- z_finalize_typed_fd
- z_free_fd
- z_get_fd_obj
- z_get_fd_obj_and_vtable
- z_get_obj_lock_and_cond
- z_reserve_fd
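After the rename, a call site looks roughly like the sketch below. The
three-argument zvfs_finalize_fd() form simply mirrors the old
z_finalize_fd(fd, obj, vtable) signature and should be treated as an
assumption here, not a guaranteed API:

  #include <zephyr/sys/fdtable.h>

  static int make_fd_sketch(void *obj, const struct fd_op_vtable *vtable)
  {
          int fd = zvfs_reserve_fd();          /* was z_reserve_fd() */

          if (fd < 0) {
                  return -1;                   /* table is full, errno is set */
          }

          zvfs_finalize_fd(fd, obj, vtable);   /* was z_finalize_fd() */

          return fd;
  }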
Signed-off-by: Chris Friedt <cfriedt@tenstorrent.com>
Fill in the mode field of the fd_entry so that the
implementation can be made aware that the specific file
descriptors created are sockets.
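A hedged sketch of what that looks like at the call site;
zvfs_finalize_typed_fd() and ZVFS_MODE_IFSOCK are the assumed names
for the typed finalize helper and the socket mode bit:

  #include <zephyr/sys/fdtable.h>

  static void finalize_as_socket(int fd, void *ctx,
                                 const struct fd_op_vtable *vtable)
  {
          /* record the descriptor as a socket in its fd_entry mode field */
          zvfs_finalize_typed_fd(fd, ctx, vtable, ZVFS_MODE_IFSOCK);
  }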
Signed-off-by: Chris Friedt <cfriedt@tenstorrent.com>
If network socket tracing is enabled, then the system will track
various socket API calls for usage.
Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
Namespaced the generated headers with `zephyr` to prevent
potential conflict with other headers.
Introduce a temporary Kconfig `LEGACY_GENERATED_INCLUDE_PATH`
that is enabled by default. This allows developers to keep using
the old include paths for the time being, until the option is
deprecated and eventually removed. The Kconfig generates a
build-time warning message, similar to
`CONFIG_TIMER_RANDOM_GENERATOR`.
Updated the include paths of in-tree sources accordingly.
Most of the changes here are scripted, check the PR for more
info.
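For example, an include of a generated header changes roughly as
follows (the specific header shown is illustrative):

  #include <zephyr/syscalls/kernel.h>   /* new, namespaced path */
  /* #include <syscalls/kernel.h> */    /* old path, still accepted while
                                         * LEGACY_GENERATED_INCLUDE_PATH=y
                                         */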
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Move the syscall_handler.h header, used internally only to a dedicated
internal folder that should not be used outside of Zephyr.
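The header now lives under the internal include path, e.g.:

  #include <zephyr/internal/syscall_handler.h>   /* internal to Zephyr only */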
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
In low memory conditions, it's possible for socketpair memory
allocation to fail; the socketpair was then freed before the remote
semaphore was released, causing a crash.
Fix this by freeing the socketpair only after releasing the semaphore.
Add a test case that induces low memory conditions (small heap and a
large socketpair buffer size); with the fix, the issue is not seen.
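A hedged sketch of the corrected teardown order on the
allocation-failure path; the names are illustrative rather than the
actual socketpair internals:

  #include <zephyr/kernel.h>

  static void spair_oom_cleanup_sketch(struct k_sem *remote_sem, void *spair)
  {
          /* wake the peer first, while the object is still valid ... */
          k_sem_give(remote_sem);
          /* ... then free the socketpair memory */
          k_free(spair);
  }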
Signed-off-by: Chaitanya Tata <Chaitanya.Tata@nordicsemi.no>
Modify the signature of the k_mem_slab_free() function with a new one,
replacing the old void **mem with void *mem as a parameter.
The following function:
void k_mem_slab_free(struct k_mem_slab *slab, void **mem);
has the wrong signature. mem is only used as a regular pointer, so there
is no need to use a double-pointer. The correct signature should be:
void k_mem_slab_free(struct k_mem_slab *slab, void *mem);
The issue with the current signature, although functional, is that it is
extremely confusing. I myself, a veteran Zephyr developer, was confused
by this parameter when looking at it recently.
All in-tree uses of the function have been adapted.
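An adapted call site then looks like this (slab and block sizes are
illustrative):

  #include <zephyr/kernel.h>

  K_MEM_SLAB_DEFINE(my_slab, 64, 4, 4);

  static void slab_free_example(void)
  {
          void *block;

          if (k_mem_slab_alloc(&my_slab, &block, K_NO_WAIT) == 0) {
                  /* ... use block ... */
                  k_mem_slab_free(&my_slab, block); /* was: &block with the old API */
          }
  }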
Fixes #61888.
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
In order to get a semi-accurate assessment of how many
bytes are available on a socket prior to performing a read,
BSD and POSIX systems have typically used
`ioctl(fd, FIONREAD, &avail)`.
We can support this in Zephyr as well with little effort, so
add support for `socketpair()` sockets as an example.
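A usage sketch, assuming the POSIX API is enabled so that ioctl() and
FIONREAD are visible:

  #include <sys/ioctl.h>

  static int bytes_available(int fd)
  {
          int avail = 0;

          if (ioctl(fd, FIONREAD, &avail) < 0) {
                  return -1;   /* errno set by ioctl() */
          }

          return avail;
  }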
Signed-off-by: Christopher Friedt <cfriedt@meta.com>
When the target board does not have a heap by default, allow
statically reserving the space for the required socketpairs.
Signed-off-by: Seppo Takalo <seppo.takalo@nordicsemi.no>
* include `<zephyr/posix/fcntl.h>` instead of `<fcntl.h>`
* drop unused logging header and module declaration
* reorder headers alphabetically
Signed-off-by: Chris Friedt <cfriedt@meta.com>
Using a socketpair for communication in an ISR is not a great
solution, but the implementation should be robust in that case
as well.
It is not acceptable to block in ISR context, so robustness here
means to return -1 to indicate an error, setting errno to `EAGAIN`
(which is synonymous with `EWOULDBLOCK`).
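A hedged sketch of the guard this implies at the top of the blocking
paths (not the actual socketpair source):

  #include <zephyr/kernel.h>
  #include <errno.h>

  static int spair_isr_guard_sketch(void)
  {
          if (k_is_in_isr()) {
                  /* never block in ISR context; report EAGAIN instead */
                  errno = EAGAIN;
                  return -1;
          }

          /* ... normal, possibly blocking, read/write path ... */
          return 0;
  }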
Fixes #25417
Signed-off-by: Chris Friedt <cfriedt@meta.com>
In order to bring consistency in-tree, migrate all subsystems code to
the new prefix <zephyr/...>. Note that the conversion has been scripted,
refer to zephyrproject-rtos#45388 for more details.
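For the sockets code this means, for example:

  #include <zephyr/net/socket.h>   /* new, namespaced form */
  /* #include <net/socket.h> */    /* legacy form being replaced */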
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
The irq_lock() usage here is incompatible with SMP systems, and one's
first reaction might be to convert it to a spinlock.
But are those irq_lock() instances really necessary?
Commit 6161ea2542 ("net: socket: socketpair: mitigate possible race
condition") doesn't say much:
> There was a possible race condition between sock_is_nonblock()
> and k_sem_take() in spair_read() and spair_write() that was
> mitigated.
A possible race without the irq_lock would be:
  thread A                               thread B
     |                                   |
     + spair_write():                    |
     + is_nonblock = sock_is_nonblock(spair); [false]
     * [preemption here]                 |
     |                                   + spair_ioctl():
     |                                   + res = k_sem_take(&spair->sem, K_FOREVER);
     |                                   + [...]
     |                                   + spair->flags |= SPAIR_FLAG_NONBLOCK;
     |                                   * [preemption here]
     + res = k_sem_take(&spair->sem, K_NO_WAIT); [-1]
     + if (res < 0) {                    |
     +   if (is_nonblock) { [skipped] }  |
     *   res = k_sem_take(&spair->sem, K_FOREVER); [blocks here]
     |                                   + [...]
But the version with irq_lock() isn't much better:
  thread A                               thread B
     |                                   |
     |                                   + spair_ioctl():
     |                                   + res = k_sem_take(&spair->sem, K_FOREVER);
     |                                   + [...]
     |                                   * [preemption here]
     + spair_write():                    |
     + irq_lock();                       |
     + is_nonblock = sock_is_nonblock(spair); [false]
     + res = k_sem_take(&spair->sem, K_NO_WAIT); [-1]
     + irq_unlock();                     |
     * [preemption here]                 |
     |                                   + spair->flags |= SPAIR_FLAG_NONBLOCK;
     |                                   + [...]
     |                                   + k_sem_give(&spair->sem);
     |                                   + spair_read():
     |                                   + res = k_sem_take(&spair->sem, K_NO_WAIT);
     |                                   * [preemption here]
     + if (res < 0) {                    |
     +   if (is_nonblock) { [skipped] }  |
     *   res = k_sem_take(&spair->sem, K_FOREVER); [blocks here]
In both cases the last k_sem_take(K_FOREVER) will block despite
SPAIR_FLAG_NONBLOCK being set at that moment. Other race scenarios
exist too, and on SMP they are even more likely.
The only guarantee provided by the irq_lock() is that, whenever the
semaphore is acquired, the is_nonblock value is current. A better,
SMP-compatible way to achieve that is to simply move the initial
sock_is_nonblock() *after* the k_sem_take() and remove
those irq_locks().
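A hedged sketch of that reordering (flag constant and names are
illustrative): sample the non-blocking state only after the
non-blocking k_sem_take() has failed, with no irq_lock() around it:

  #include <zephyr/kernel.h>
  #include <errno.h>

  #define SPAIR_FLAG_NONBLOCK BIT(0)   /* assumed flag value for this sketch */

  static int spair_acquire_sketch(struct k_sem *sem, const uint32_t *flags)
  {
          int res = k_sem_take(sem, K_NO_WAIT);

          if (res < 0) {
                  /* read the flag only now, after the non-blocking take failed */
                  if (*flags & SPAIR_FLAG_NONBLOCK) {
                          return -EAGAIN;
                  }
                  res = k_sem_take(sem, K_FOREVER);
          }

          return res;
  }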
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Rename `write_signal` to `readable` and `read_signal` to `writeable`,
which better describe the states they represent and make the code
easier to analyze.
Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
In case read or write were called before the actual poll() call, the
poll() function was not signalled correctly about such events, which
in turn could lead to a deadlock if poll() was called with an infinite
timeout.
Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
When verifying the parameters, check the NULL value separately.
This avoids a nasty warning message being printed.
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Do not route close() calls via ioctl() as that is error prone
and quite pointless. Instead create a callback for close() in
fdtable and use it directly.
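A hedged sketch of the resulting vtable wiring; the member name
follows struct fd_op_vtable, but treat the exact layout and callback
signature as assumptions:

  #include <zephyr/sys/fdtable.h>

  static int spair_close_sketch(void *obj)
  {
          /* release the per-endpoint resources here */
          return 0;
  }

  static const struct fd_op_vtable spair_vtable_sketch = {
          .close = spair_close_sketch,
          /* .read, .write, .ioctl, ... */
  };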
Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
The socketpair file descriptor context objects are heap allocated
and not drawn from a static pool. Register these as kernel objects
when we create them if user mode is enabled.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Used for permission validation when accessing the associated file
descriptors from user mode.
These often get defined in implementation code, so expand the search
to look in drivers/ and subsys/net/.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
In this case, is_nonblock is false and will_block is true.
Therefore, we *may* block, and furthermore we *expect* to
block. Checking is_nonblock is, in fact, redundant, and
passing K_FOREVER to k_sem_take() is justified.
Fixes#25727
Coverity-CID: 210611
Signed-off-by: Christopher Friedt <chrisfriedt@gmail.com>
There was a possible race condition between sock_is_nonblock()
and k_sem_take() in spair_read() and spair_write() that was
mitigated.
Also clarified some of the conditional branching in those
functions.
Signed-off-by: Christopher Friedt <chrisfriedt@gmail.com>