This commit adds support for the `__set_name__` data model method specified
by PEP 487 (Simpler customisation of class creation).
This includes support for methods that mutate the owner class, and avoids
the modify-while-iterating hazard present in a naive implementation like
micropython/micropython#15503.
Note that, based on the benchmarks in micropython/micropython#16825, this is
also as fast as or faster than the naive implementation, thanks to the clever
data layout in `setname_list_t` and the way it allows the capture step to run
during an existing loop through the class dict.
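As a minimal sketch of the behaviour this enables (the class names here are
purely illustrative), a descriptor can both learn its attribute name and
mutate the owner class from `__set_name__`:

    class Descriptor:
        def __set_name__(self, owner, name):
            # Record the attribute name this descriptor was assigned to.
            self.name = name
            # Mutating the owner class here must not corrupt the pass over
            # the class dict that collects the __set_name__ callbacks.
            setattr(owner, name + "_label", name.upper())

    class Widget:
        size = Descriptor()

    print(Widget.size.name)    # size
    print(Widget.size_label)   # SIZE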
Other rejected approaches for dealing with the hazard include:
- python/cpython#72983
During the implementation of this feature for MicroPython, it was
discovered that some versions of CPython also had this hazard. CPython
resolved the bug in BPO-28797 and now makes a complete flat copy of the
class's dict to iterate over. That design decision doesn't make much sense
for a microcontroller, even though it's perfectly reasonable in the desktop
world, where memcpy may actually be cheaper than a hard-to-branch-predict
conditional; it's also motivated in CPython's case by error-tracing
considerations.
- micropython/micropython#16816
This is an equivalent implementation to CPython's approach that places this
copy directly on the stack; however it is both slower and has larger code
size than the approach taken here.
- micropython/micropython#15503
The simplest implementation is to just not worry about the hazard and let
the user face the consequences if they mutate the owner class. That's not
very friendly behavior, though, and it's not actually much more performant
than this implementation in either time or code size.
- micropython/micropython#17693
Another alternative is to do the same as #15503 but leverage MicroPython's
existing `is_fixed` field in its dict type to convert attempted mutations
of the owner dict into `AttributeError`s. This is safer than leaving the
hazard open, but there are still important use cases for owner-mutating
descriptors, and the performance gain is small enough that it isn't worth
losing support for those cases.
- combined micropython/micropython#17693 with this
Another version of this feature used a new feature define,
`MICROPY_PY_METACLASSES_LITE`, to control whether this algorithm or the
naive version is used. This was rejected in favor of simplicity, given the
very limited performance margin the naive version has (which in some cases
even works _against_ it).
Signed-off-by: Anson Mansfield <amansfield@mantaro.com>
This includes the stochastic tests needed to guarantee sensitivity to the
iterate-while-modifying hazard a naive implementation might have.
Signed-off-by: Anson Mansfield <amansfield@mantaro.com>
The current longlong implementation does not allow a float as the RHS of
arithmetic operators, as it lacks the delegation code present in mpz.
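A minimal illustration of the failure (assuming a longlong build where the
value below overflows a small int):

    x = 1 << 50       # stored as a long-long int on such builds
    print(x * 0.5)    # previously raised TypeError; now delegates to float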
Signed-off-by: Yoctopuce dev <dev@yoctopuce.com>
This fixes an outstanding TODO. The test case uses a range, as this exists
in all builds, but `mp_obj_get_int` is used in many different parts of the
code where an overflow is more likely to occur.
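A sketch of the kind of case the test exercises (assuming a longlong build
where machine words are 32 bits, so the value below doesn't fit in
mp_int_t):

    try:
        range(1 << 40)    # range() converts its argument via mp_obj_get_int
    except OverflowError:
        print("OverflowError")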
Signed-off-by: Yoctopuce dev <dev@yoctopuce.com>
XOSC_MHZ and XOSC_KHZ may not be defined if a custom XIN clock is used, by
defining PLL_SYS_REFDIV etc. as calculated by vcocalc.py.
Signed-off-by: Christian Lang <lang.chr86@gmail.com>
On a build like nanbox, `mp_uint_t` is wider than `u/intptr_t`. Using a
signed type when fetching pointer values led to sign-extended output such as
`<function f at 0xfffffffff7a60bc0>` instead of
`<function f at 0xf7a60bc0>`.
Signed-off-by: Jeff Epler <jepler@gmail.com>
All these arguments are of type `mp_{u,}int_t`, but the actual value is
always a small integer. Cast them so that they can be formatted with the
`%d`/`%u` formatters.
Before, the compiler plugin produced an error in the PYBD_SF6 build, which
is a nanboxing build with 64-bit ints.
Signed-off-by: Jeff Epler <jepler@gmail.com>
On the nanbox build, `o->obj` is a 64-bit type but `%p` formats a 32-bit
type, leading to undefined behavior.
Print the cell's ID as a hex integer instead.
This location was found using an experimental gcc plugin for `mp_printf`
error checking.
Signed-off-by: Jeff Epler <jepler@gmail.com>
Before, the compiler plugin produced an error in the PYBD_SF6 build, which
is a nanboxing build with 64-bit ints.
I decided to cast the value here even though some significant bits might be
lost after 49.7 days. However, the format used is "% 8d", which produces
consistent-width output for small tick values (up to about 1.1 days). I
judged that preserving the fixed-width display was more valuable than
accurately representing long time periods.
Signed-off-by: Jeff Epler <jepler@gmail.com>
As timeout is of type `mp_uint_t`, it must be printed with UINT_FMT.
Before, the compiler plugin produced an error in the PYBD_SF6 build, which
is a nanboxing build with 64-bit ints.
Signed-off-by: Jeff Epler <jepler@gmail.com>
During the coverage test, all the values encountered are within the range
of `%d`.
These locations were found using an experimental gcc plugin for `mp_printf`
error checking.
Signed-off-by: Jeff Epler <jepler@gmail.com>
This fixes the following diagnostic produced by the plugin:
error: argument 3: Format ‘%x’ requires a ‘int’ or
‘unsigned int’ (32 bits), not ‘long unsigned int’ [size 64]
[-Werror=format=]
Signed-off-by: Jeff Epler <jepler@gmail.com>
The types of the arguments must match the format string. Add casts to
ensure that they do.
It's possible that casting from `size_t` to `unsigned` loses the correct
values by masking off upper bits, but it seems likely that the quantities
involved in practice are small enough that the `%u` formatter (32 bits on
most platforms, 16 on pic16bit) will in fact hold the correct value.
The alternative, casting to a wider type, adds code size.
These locations were found using an experimental gcc plugin for `mp_printf`
error checking, cross-building for x64 windows on Linux.
In one case there was already a cast, but it was written incorrectly and
did not have the intended effect.
Signed-off-by: Jeff Epler <jepler@gmail.com>
The name field of type objects is of type `uint16_t` for efficiency, but
when it is passed to `mp_printf` it must be cast explicitly to type `qstr`.
These locations were found using an experimental gcc plugin for `mp_printf`
error checking, cross-building for x64 windows on Linux.
Signed-off-by: Jeff Epler <jepler@gmail.com>
These tests all depend on generating arbitrarily long (>64-bit) integers.
It would probably be possible to make these tests work without big-int
support, as the results are always masked to shorter values, but that would
be quite fiddly. So just rename them so they are automatically skipped if
the target doesn't have big-int support.
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
The previous comment was wrong: left-shifting a negative value is undefined
behaviour in C. Use the same approach as small-int shifts (from runtime.c).
Signed-off-by: Angus Gratton <angus@redyak.com.au>
The recently merged 5e9189d6d1 now allows
temporary slices to be allocated on the C stack, which is much better than
allocating them on the GC heap.
Unfortunately there are cases where the C-allocated slice can escape and be
retained as an object, which leads to crashes (because that object points
to the C stack which now has other values on it).
The fix here is to add a new flag, `MP_TYPE_FLAG_SUBSCR_ALLOWS_STACK_SLICE`.
Native types should set this flag if their subscr method is guaranteed not
to hold on to a reference to the slice object.
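A Python-level illustration of the kind of subscr that cannot use a stack
slice, because it retains the slice object past the call (the class here is
just for illustration; the new flag itself applies to native C types):

    class Keeper:
        def __getitem__(self, key):
            self.last_key = key   # the slice object escapes here
            return None

    k = Keeper()
    k[1:3]
    print(type(k.last_key))   # the retained slice must still be a valid object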
Fixes issue #17733 (see also #17723).
Signed-off-by: Damien George <damien@micropython.org>
Fixes a bug in the binding of self/this to JavaScript methods.
The new semantics match Pyodide's behaviour, at least for the included
tests.
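A small example of the kind of call affected (assuming the webassembly
port's `js` module is available): when a method is looked up on a JavaScript
object and then called from Python, `this` must be bound to that object.

    import js

    # Inside log(), `this` must be js.console, matching Pyodide's behaviour.
    js.console.log("hello from MicroPython")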
Signed-off-by: Damien George <damien@micropython.org>
SDK 2.1.1 shipped with PICOTOOL_FETCH_FROM_GIT configured to fetch the
"develop" branch. This broke downstream CI, which was trusting Pico
SDK to fetch the correct version.
RPi have added a "2.1.1-correct-picotool" tag which fixes this.
lib/pico-sdk: Bump to "2.1.1-correct-picotool" tag.
Signed-off-by: Phil Howard <github@gadgetoid.com>
It seems GCC 14 has become stricter with warnings/errors like -Wsign-compare,
and types a "bare number" as a long int that can't be compared to an
(unsigned) size_t.
Signed-off-by: Andrew Leech <andrew.leech@planetinnovation.com.au>
Commit dc2fcfcc55 seems to have accidentally
changed the ruff quote style to "preserve", instead of keeping it at the
default, which is "double".
Put it back to the default and update the relevant .py files to follow this
rule.
Signed-off-by: Damien George <damien@micropython.org>
Without this there's a build error on macOS (at least). This was likely
due to a combination of 9b7d85227e and
df05caea6c.
Signed-off-by: Damien George <damien@micropython.org>
There is currently no build using REPR_C in the unix CI tests. As
discussed in PR #16953, this is something that combines well with the
longlong build.
Signed-off-by: Yoctopuce dev <dev@yoctopuce.com>
The current implementation of REPR_C works by clearing the two lower bits of
the mantissa. As this happens after each floating-point operation, it tends
to bias floating-point numbers towards zero, producing decimals like .9997
instead of rounded numbers. This is visible in test cases involving repeated
computations, for instance `tests/misc/rge_sm.py`.
The suggested fix fills in the missing bits by copying the previous two
bits. Although this cannot recreate missing information, it fixes the bias
by inserting plausible values for the lost bits, at a relatively low cost.
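A rough Python model of the idea (this is only a sketch of the bit
manipulation; the real change lives in the C object-representation code, and
the function name is illustrative):

    import struct

    def repr_c_store(x):
        # Get the 32-bit IEEE-754 pattern of the float.
        bits = struct.unpack("<I", struct.pack("<f", x))[0]
        bits &= ~0x3                 # REPR_C loses the two lowest mantissa bits
        bits |= (bits >> 2) & 0x3    # the fix: fill them from the next two bits
        return struct.unpack("<f", struct.pack("<I", bits))[0]

    print(repr_c_store(0.1))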
Some float tests involving irrational numbers have to be softened for
REPR_C, as 30 bits are not always enough to meet the expectations of the
original test, and the change may affect the last digits.
Such cases have been made explicit by testing for REPR_C or by adding a
clear comment.
The perf_test fft code was also missing a call to round() before casting a
log_2 operation to int, which was causing a failure due to a last-decimal
change.
Signed-off-by: Yoctopuce dev <dev@yoctopuce.com>
When this configuration flag is set, VfsPosix instances can be written to.
Otherwise, they will always be created "read only".
This flag is useful when fuzzing MicroPython: without VfsPosix the fuzzing
input script cannot be read, but with a writable VfsPosix, fuzzing scripts
could potentially perform undesired operations on the host filesystem.
Signed-off-by: Jeff Epler <jepler@gmail.com>
Back in LFS2 version 2.6 the on-disk version was updated from 2.0 to 2.1,
which broke backwards compatibility (i.e. older versions could no longer
read the new disk format); see
https://github.com/littlefs-project/littlefs/releases/tag/v2.6.0
Then in LFS2 v2.7 an optional `config->disk_version` was added to force the
library to use an older disk format instead, see:
https://github.com/littlefs-project/littlefs/releases/tag/v2.7.0
This commit simply exposes `config->disk_version` as a compile-time option
if LFS2_MULTIVERSION is set; otherwise there is no change in behaviour.
This is useful for compatibility with older LFS versions.
Note: LFS2_MULTIVERSION needs to be defined at the make/CFLAGS level;
setting it in mpconfigboard.h doesn't work, as that header isn't included by
`lfs2.c` in any way.
Signed-off-by: Andrew Leech <andrew.leech@planetinnovation.com.au>
This is an annoying regression caused by including mpconfig.h in 36922df:
the mimxrt platform headers define ARRAY_SIZE, and mbedtls also defines it
in some source files using a different parameter name, which triggers a
warning in gcc.
Technically the mimxrt SDK is to blame here, but as this isn't a named
warning in gcc, the only way to work around it in the mimxrt port would be
to disable all warnings when building this particular mbedTLS source file.
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
The original version of this test had to exchange a 1-byte UDP packet
before the DTLS handshake. This is no longer needed thanks to MSG_PEEK
support.
The test also doesn't work with HelloVerify enabled, as the first
connection attempt always fails with an
MBEDTLS_ERR_SSL_HELLO_VERIFY_REQUIRED result. Anticipate this by listening
for the client twice on the server side.
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
- The DTLS spec recommends that HelloVerify and anti-replay protection be
enabled, and these are enabled in the default mbedTLS config. Implement them
here.
- To help compensate for the possible increase in code size, add a
MICROPY_PY_SSL_DTLS build config macro that's enabled for EXTRA and
above by default.
This allows bare metal mbedTLS ports to use DTLS with HelloVerify support.
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
This is already enabled in the ESP-IDF mbedTLS config, so provide an
implementation of the cookie store functions. This allows DTLS connections
between two esp32 boards.
The session cookie store is a very simple dictionary associated with the
SSLContext. To work, the server needs to reuse the same SSLContext (but
cookies are never cleaned up, so a server with a high number of clients
should recycle the context periodically).
Server code still needs to handle the MBEDTLS_ERR_SSL_HELLO_VERIFY_REQUIRED
error by waiting for the next UDP packet from the client.
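A rough sketch of that server-side pattern (this assumes the ssl module's
PROTOCOL_DTLS_SERVER API; the use of MSG_PEEK, the placeholder
SERVER_CERT/SERVER_KEY credentials, and the way the hello-verify condition
surfaces as an OSError are assumptions for illustration, not details
confirmed by this commit):

    import socket, ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_DTLS_SERVER)
    ctx.load_cert_chain(SERVER_CERT, SERVER_KEY)   # placeholder credentials

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("0.0.0.0", 8443))

    conn = None
    while conn is None:
        # Peek at the incoming ClientHello to learn the client's address.
        _, client_addr = s.recvfrom(16, socket.MSG_PEEK)
        s.connect(client_addr)
        try:
            conn = ctx.wrap_socket(s, server_side=True)
        except OSError:
            # The first attempt is rejected with
            # MBEDTLS_ERR_SSL_HELLO_VERIFY_REQUIRED; the client retries with
            # the cookie, so loop and wait for its next packet.
            pass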
Signed-off-by: Angus Gratton <angus@redyak.com.au>
With the recent update to ESP-IDF 5.4.2, there is a change in BLE event
behaviour which makes `tests/multi_bluetooth/ble_mtu.py` and
`tests/multi_bluetooth/ble_mtu_peripheral.py` fail on ESP32.
The change in behaviour is that MTU_EXCHANGE events can now occur before
CENTRAL_CONNECT/PERIPHERAL_CONNECT events. That seems a bit strange,
because the MTU exchange occurs after the connection. Looking at the timing
of the events, there is exactly 100ms between them, i.e. MTU_EXCHANGE fires
and then exactly 100ms later CENTRAL_CONNECT/PERIPHERAL_CONNECT fires.
It's unknown if this is a bug in (Espressif's) NimBLE, a subtle change in
scheduling with still valid behaviour, an intended change, a change allowed
under the BLE spec, or something else.
But in order to move forward with updating to IDF 5.4.2, the relevant tests
have been adjusted so they can pass. The test just needs to wait a bit
between doing the connect and doing the MTU exchange, so the other side
sees the original/correct ordering of events. This wait is done using the
multitest synchronisation primitives (broadcast and wait).
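The pattern in the adjusted tests looks roughly like this (a sketch using
the multitest broadcast/wait primitives provided by the test runner; the
message names and the elided BLE steps are illustrative):

    def instance0():
        # ... connect to the peer here ...
        multitest.broadcast("instance0-connected")
        multitest.wait("instance1-connected")
        # ... now start the MTU exchange ...

    def instance1():
        # ... connect to the peer here ...
        multitest.broadcast("instance1-connected")
        multitest.wait("instance0-connected")
        # ... now respond to the MTU exchange ...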
Signed-off-by: Damien George <damien@micropython.org>
This is a patch release of the IDF. Compared with 5.4.1, firmware size is
up by about 1.5k on ESP32 and 9k on ESP32-S3. But IRAM usage (of the IDF)
is down by about 500 bytes on ESP32, and DRAM usage is down by about 20k on
ESP32 and 10k on ESP32-S3.
Testing on ESP32, ESP32-S2, ESP32-S3 and ESP32-C3 shows no regressions,
except in BLE MTU ordering (the MTU exchange event occurring before the
connect event).
Signed-off-by: Damien George <damien@micropython.org>
Currently, `UART.sendbreak()` on esp32 will reconfigure the UART to a
slower baudrate and send out a null byte, to synthesise a break condition.
That's not great because it changes the baudrate of the RX path as well,
and so incoming bytes could be missed while sending the break.
This commit changes the sendbreak implementation to just reconfigure the TX
pin as GPIO in output mode, and hold the pin low for the required duration.
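Expressed as a rough Python-level equivalent of the new approach (the
actual change is in the esp32 machine.UART C code; the pin numbers and
timing here are illustrative):

    from machine import Pin, UART
    import time

    TX_PIN, RX_PIN, BAUD = 4, 5, 9600
    uart = UART(1, baudrate=BAUD, tx=TX_PIN, rx=RX_PIN)

    def send_break(uart):
        # Take over the TX line as a GPIO and hold it low for longer than
        # one character time; the RX path and baudrate are left untouched.
        tx = Pin(TX_PIN, Pin.OUT, value=0)
        time.sleep_us(13 * 1000000 // BAUD)
        tx(1)
        # Hand the pin back to the UART peripheral.
        uart.init(baudrate=BAUD, tx=TX_PIN, rx=RX_PIN)

    send_break(uart)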
Signed-off-by: Damien George <damien@micropython.org>
This parameter is already used for PC-based tests (e.g. the unix and
webassembly ports), and it makes sense for it to be used for bare-metal
ports as well. That way the timeout is configurable for all targets.
Because this increases the default timeout from 10s to 30s, it fixes some
long-running tests that would previously fail due to a timeout, such as
`thread/stress_aes.py` on ESP32.
Signed-off-by: Damien George <damien@micropython.org>