There are currently a few tests that are excluded when using the native
emitter because they test printing of exception tracebacks, which includes
line numbers. The native emitter doesn't store line numbers, so it gets
these tests wrong.
But we'd still like to run these tests using the native emitter, because
they test useful things even if the line number info is not in the
traceback (eg that threads which crash print out their exception).
This commit adds support for native-specific .exp files, which are of the
form `<test>.py.native.exp`. If such an .exp file exists then it takes
precedence over any normal `<test>.py.exp` file.
(Actually, the implementation here is general enough that it supports
`<test>.py.bytecode.exp` as well, if bytecode ever needs a specific exp
file.)
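The lookup could be pictured roughly like this (an illustrative sketch, not
the actual `run-tests.py` code):

    import os

    def find_exp_file(test_file, emitter):
        # Prefer an emitter-specific expected-output file, eg "foo.py.native.exp",
        # and fall back to the generic "foo.py.exp" otherwise.
        specific = test_file + "." + emitter + ".exp"
        if os.path.isfile(specific):
            return specific
        return test_file + ".exp"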
Signed-off-by: Damien George <damien@micropython.org>
Instead of using a feature check. This is more consistent with how other
optional modules are skipped.
Signed-off-by: Damien George <damien@micropython.org>
The unicode tests are now run on all targets that enable unicode. And
other unicode tests (namely `extmod/json_loads.py`) are now properly
skipped if the target doesn't have unicode support.
Signed-off-by: Damien George <damien@micropython.org>
Ports that now run the stress tests, but didn't prior to this commit, are:
cc3200, esp8266, minimal, nrf, renesas-ra, samd, qemu, webassembly.
Signed-off-by: Damien George <damien@micropython.org>
This simplifies the code by removing the explicit addition of the "float/"
test directory for certain targets. It also means the tests won't be added
incorrectly, eg on a unix build without float.
Signed-off-by: Damien George <damien@micropython.org>
Some tests (currently given by the `special_tests` list) have output which
must be matched via a regex, because it can change from run to run (eg the
address of an object is printed). These tests are currently classified as
`is_special` in the test runner, which means they get special treatment.
In particular they don't set the emitter as specified by `args.emit`. That
means these tests do not run via .mpy or using the native emitter, even if
those options are given on the command line.
This commit fixes that by treating `is_special` as distinct from
`tests_with_regex_output`. The former is used for things like target
feature detection (which are not really tests) and when extra command line
options need to be passed to the unix micropython executable. The latter
(now called `tests_with_regex_output`) are specifically for tests that have
output to be matched via regex.
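For illustration, matching against such expected output might look roughly
like this (a sketch, not the actual `run-tests.py` code):

    import re

    def output_matches_regex(expected_regex, actual_output):
        # The expected output is treated as a regex so that values which vary
        # from run to run (eg printed object addresses) can be matched.
        return re.fullmatch(expected_regex, actual_output, re.DOTALL) is not None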
The `thread_exc2.py` test now needs to be excluded when running with the
native emitter, because the native emitter doesn't print traceback info.
And the `sys_settrace_cov.py` test needs to be excluded because settrace
output is different with the native emitter.
Signed-off-by: Damien George <damien@micropython.org>
This makes `run-tests.py` a little more organised, by putting all the
tests-to-skip-when-using-the-native-emitter in a dedicated list.
This should make it easier to maintain the list, and understand why a test
is there.
Signed-off-by: Damien George <damien@micropython.org>
Following discussions in PR #16666, this commit updates the float
formatting code to improve the `repr` reversibility, i.e. the percentage of
valid floating point numbers that do parse back to the same number when
formatted by `repr` (in CPython it's 100%).
This new code offers a choice of 3 float conversion methods, depending on
the desired tradeoff between code size and conversion precision:
- BASIC method is the smallest code footprint
- APPROX method uses an iterative method to approximate the exact
representation, which is a bit slower but does not have a big impact
on code size. It provides `repr` reversibility on >99.8% of the cases in
double precision, and on >98.5% in single precision (except with REPR_C,
where reversibility is 100% as the last two bits are not taken into
account).
- EXACT method uses higher-precision floats during conversion, which
provides perfect results but has a higher impact on code size. It is
faster than APPROX method, and faster than the CPython equivalent
implementation. It is however not available on all compilers when using
FLOAT_IMPL_DOUBLE.
Here is the table comparing the impact of the three conversion methods on
code footprint on PYBV10 (using single-precision floats) and reversibility
rate for both single-precision and double-precision floats. The table
includes the current situation as a baseline for the comparison:
             PYBV10   REPR_C    FLOAT   DOUBLE
  current =  364688    12.9%    27.6%    37.9%
  basic   =  364812    85.6%    60.5%    85.7%
  approx  =  365080   100.0%    98.5%    99.8%
  exact   =  366408   100.0%   100.0%   100.0%
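For reference, figures of this kind can be estimated with a check along
these lines, run on the MicroPython build under test (a sketch only, not
the benchmark used to produce the table):

    import math, random, struct

    ok = total = 0
    while total < 10000:
        # Draw a random single-precision bit pattern and skip NaN/inf.
        x = struct.unpack("<f", struct.pack("<I", random.getrandbits(32)))[0]
        if math.isnan(x) or math.isinf(x):
            continue
        ok += float(repr(x)) == x  # does the repr parse back to the same value?
        total += 1
    print("reversible: %.1f%%" % (100 * ok / total))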
Signed-off-by: Yoctopuce dev <dev@yoctopuce.com>
Commit dc2fcfcc55 seems to have accidentally
changed the ruff quote style to "preserve", instead of keeping it at the
default which is "double".
Put it back to the default and update relevant .py files with this rule.
Signed-off-by: Damien George <damien@micropython.org>
The current implementation of REPR_C works by clearing the two lower bits
of the mantissa. As this happens after each floating point operation,
this tends to bias floating point numbers towards zero, causing decimals
like .9997 instead of rounded numbers. This is visible in test cases
involving repeated computations, such as `tests/misc/rge_sm.py`.
The suggested fix fills in the missing bits by copying the previous two
bits. Although this cannot recreate missing information, it fixes the bias
by inserting plausible values for the lost bits, at a relatively low cost.
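The idea can be pictured on a 32-bit float bit pattern like this (a Python
illustration of the concept, not the actual C implementation):

    import struct

    def fill_repr_c_bits(x):
        bits = struct.unpack("<I", struct.pack("<f", x))[0]
        # Instead of leaving the two lost low mantissa bits at zero (which
        # biases values towards zero), copy the next two bits down into them.
        bits = (bits & ~0x3) | ((bits >> 2) & 0x3)
        return struct.unpack("<f", struct.pack("<I", bits))[0]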
Some float tests involving irrational numbers have to be softened in case
of REPR_C, as the 30 bits are not always enough to fulfill the expectations
of the original test, and the change may randomly affect the last digits.
Such cases have been made explicit by testing for REPR_C or by adding a
clear comment.
The perf_test fft code was also missing a call to round() before casting a
log_2 operation to int, which was causing a failure due to a last-decimal
change.
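The fixed pattern is essentially the following (illustrative):

    import math

    n = 1024
    levels = int(round(math.log(n) / math.log(2)))
    # Without round(), limited float precision can yield eg 9.999999 here,
    # which int() would truncate to 9 instead of the intended 10.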
Signed-off-by: Yoctopuce dev <dev@yoctopuce.com>
This parameter is already used for PC-based tests (eg unix and webassembly
ports), and it makes sense for it to be used for bare-metal ports as well.
That way the timeout is configurable for all targets.
Because this increases the default timeout from 10s to 30s, it also fixes
some long-running tests that would previously fail due to a timeout, such as
`thread/stress_aes.py` on ESP32.
Signed-off-by: Damien George <damien@micropython.org>
When detecting the target platform, also check if it has threading and
whether the GIL is enabled or not (using the new attribute
`sys.implementation._thread`). If threading is available, add the thread
tests to the set of tests to run (unless the set of tests is explicitly
given).
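As a rough sketch, the probe on the target might look something like this
(the exact format of the attribute's value is not spelled out here):

    import sys

    # If the target was built with threading, sys.implementation has a
    # "_thread" attribute whose value indicates whether the GIL is enabled.
    thread_info = getattr(sys.implementation, "_thread", None)
    if thread_info is not None:
        print("threads supported, GIL info:", thread_info)
    else:
        print("no threading support")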
With this change, the unix port no longer needs to explicitly run the set
of thread tests, so that line has been removed from the Makefile.
This change will make sure thread tests are run with other testing
combinations. In particular, thread tests are now run:
- on the unix port with the native emitter
- on macOS builds
- on the unix port run via qemu, for the MIPS, ARM and RISCV-64 architectures
Signed-off-by: Damien George <damien@micropython.org>
These will run on all ports which support them, but importantly
they'll also run on ports that don't support arbitrary precision
but do support 64-bit long ints.
Includes some test workarounds to account for things which will overflow
once "long long" big integers overflow (added in follow-up commit):
- uctypes_array_load_store test was failing already, now won't parse.
- all the ffi_int tests contain 64-bit unsigned values, that won't parse
as long long.
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
This commit adds a fast-path optimisation for when a BUILD_SLICE is
immediately followed by a LOAD/STORE_SUBSCR for a native type, to avoid
needing to allocate the slice on the heap.
In some cases (e.g. `a[1:3] = x`) this can result in no allocations at all.
We can't do this for instance types because the get/set/delattr
implementation may keep a reference to the slice.
Adds more tests to the basic slice tests to ensure that a stack-allocated
slice never makes it to Python, and also a heapalloc test that verifies
(when using bytecode) that assigning to a slice is no-alloc.
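A sketch in the spirit of that heapalloc test, assuming the bytecode fast
path described above:

    import micropython

    b = bytearray(b"01234567")
    data = b"ab"

    micropython.heap_lock()
    b[1:3] = data  # slice is stack-allocated; same-length store needs no resize
    micropython.heap_unlock()

    print(b)  # bytearray(b'0ab34567')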
This work was funded through GitHub Sponsors.
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
Signed-off-by: Damien George <damien@micropython.org>
The `micropython/ringio_async.py` test requires async keyword support, and
will fail with a SyntaxError on targets that don't support async/await.
Really it should be skipped on such targets, and this
commit makes sure that's the case.
Signed-off-by: Damien George <damien@micropython.org>
The additional overhead of the settrace profiler means that the
`stress_aes.py` test was running too slowly on GitHub CI. Double the
timeout to 60 seconds.
Signed-off-by: Jeff Epler <jepler@gmail.com>
This commit introduces a mechanism to customise the code that is injected
into the board when performing a test file upload and execution.
A new argument, "--begin", is added so regular Python code can be
inserted in the injected fragment between the module file creation and
the effective file import. This is needed for running larger tests
(usually ones that have been pre-compiled with
"--via-mpy --emit native") on ESP8266, as that board does not have
enough memory to fit certain blocks of code unless additional
configuration is performed.
Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
This allows having {\xDD} in tests, which will be expanded to the character
with the given hex code.
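The expansion might be implemented roughly like this (an illustrative
sketch, not the actual implementation):

    import re

    def expand_hex_escapes(text):
        # Replace each "{\xDD}" marker with the character having that hex code.
        return re.sub(
            r"\{\\x([0-9A-Fa-f]{2})\}",
            lambda m: chr(int(m.group(1), 16)),
            text,
        )

    print(expand_hex_escapes(r"value: {\x41}"))  # prints "value: A"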
Signed-off-by: Andrew Leech <andrew.leech@planetinnovation.com.au>
This commit factors existing code in `run-tests.py` into a new helper
function `create_test_report()`. That function prints out a summary of the
test run (eg number of tests passed, number failed, number skipped) and
creates the corresponding `_results.json` file.
This is done so `create_test_report()` can be reused by the other test
runners.
The `test_count` counter is now gone, and instead the number of passed plus
number of failed tests is used as an equivalent count.
For consistency this commit makes a minor change to the printed output of
`run-tests.py`: instead of printing a shorthand name for tests that failed
or skipped, it now prints the full name. Eg what was previously printed as
`attrtuple2` is now printed as `basics/attrtuple2.py`. This makes the
output a little longer (when there are failed/skipped tests) but helps to
disambiguate the test name, eg which directory it's in.
Signed-off-by: Damien George <damien@micropython.org>
This commit lets the test runner enumerate and run native tests if the
feature check fails but native tests were explicitly requested from the
command line.
The old behaviour would disable native tests anyway if the feature check
failed; however, this hid a bug in the x86 native emitter that would be
triggered even during the feature check. That meant the test suite
would pass on x86 even with a broken emitter, as those tests would have
been skipped anyway.
Now, if the user asks for native code, the runner will emit native code no
matter what.
Co-authored-by: Damien George <damien@micropython.org>
Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
Some tests are just too big for targets that don't have much heap memory,
eg `tests/extmod/vfs_rom.py`. Other tests are too large because the target
doesn't have enough IRAM for native code, eg esp8266 running
`tests/micropython/viper_args.py`.
Previously, such tests were explicitly skipped on targets known to have
little memory, eg esp8266. But this doesn't scale to multiple targets, nor
to more and more tests which are too large.
This commit addresses that by adding logic to the test runner so it can
automatically skip tests when they don't fit in the target's memory. It
does this by prepending a `print('START TEST')` to every test, and if a
`MemoryError` occurs before that line is printed then the test was too big.
This works for standard tests, tests that go via .mpy files, and tests that
run in native emitter mode via .mpy files.
For tests that are too big, it prints `lrge <test name>` on the output,
and at the end prints them on a separate line of skipped tests so they can
be distinguished. They are also recorded in the `_results.json` file as
skipped tests with reason "too large".
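Roughly, the detection works like this (a simplified sketch; `run_on_target`
is a hypothetical helper, not the actual `run-tests.py` code):

    # Prepend the marker so a MemoryError raised before the test even starts
    # can be told apart from one raised by the test itself.
    test_script = b"print('START TEST')\n" + test_script

    output = run_on_target(test_script)  # hypothetical helper

    if b"START TEST" not in output and b"MemoryError" in output:
        result = ("skip", "too large")  # test never started: too big for target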
Signed-off-by: Damien George <damien@micropython.org>
The `_results.json` output of `run-tests.py` was recently changed in
7a55cb6b36 to add a list of passed and
skipped tests.
The way this was done turned out to be not general enough, because we want
to add another type of result, namely tests that are skipped because they
are too large.
Instead of having separate lists in `_results.json` for each kind of result
(pass, fail, skip, skip too large, etc), this commit changes the output
form of `_results.json` so that it stores a single list of 3-tuples of all
tests that were run:
[(test_name, result, reason), ...]
That's more general and allows adding a reason for skipped and failed
tests. At the moment this reason is just an empty string, but can be
improved in the future.
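For example, the stored list might look like this (test names and result
strings here are purely illustrative):

    [
        ["basics/attrtuple2.py", "pass", ""],
        ["micropython/viper_args.py", "skip", ""],
        ["extmod/vfs_rom.py", "fail", ""]
    ]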
Signed-off-by: Damien George <damien@micropython.org>
The output `_results.json` file generated by `run-tests.py` currently
contains a list of failed tests. This commit adds to the output a list of
passed and skipped tests, and so now provides full information about which
tests were run and what their results were.
Signed-off-by: Damien George <damien@micropython.org>
This commit fixes three open issues related to the asyncio scheduler
exiting prematurely when the main task queue is empty, in cases where
CPython would not exit (for example, because the main task is not done
because it's on a different queue).
In the first case, the scheduler exits because running a task via
`run_until_complete` did not schedule any dependent tasks.
In the other two cases, the scheduler exits because the tasks are queued in
an event queue.
Tests have been added which reproduce the original issues. These test
cases document the unauthorized use of `Event.set()` from a soft IRQ, and
are skipped in unsupported environments (webassembly and native emitter).
Fixes issues #16759, #16569 and #16318.
Signed-off-by: Yoctopuce dev <dev@yoctopuce.com>
Allocation of a large compression window may fail, and in that case keep
the `DeflateIO` state consistent so its other methods (such as `close()`)
still work. Consistency is kept by only updating the `self->write` member
if the window allocation succeeds.
Thanks to @jimmo for finding the bug.
Signed-off-by: Damien George <damien@micropython.org>
This won't be generated normally, but a failed run (for example, from a
unittest with an error or which doesn't call unittest.main()) will
generate one.
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
When using unittest (for example) with injected mpy files, not only does
the name of the main test module need to be `__main__`, but also the
`__main__` module should correspond to this injected module. Otherwise the
unittest test won't be detected.
Signed-off-by: Damien George <damien@micropython.org>
This commit implements a method to detect at runtime if inline assembler
support is enabled, and if so which platform it targets.
This allows clean test runs even on modified versions of ARM-based ports
where inline assembler support is disabled, allows running inline assembler
tests on ports where the feature is not enabled by default but has been
manually enabled, and makes sure the correct inlineasm tests are always run
on ports that support more than one architecture (esp32, qemu, rp2).
Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
This commit adds support for writing inline assembler functions when
targeting a RV32IMC processor.
Given that this takes up a bit of rodata space due to its large
instruction decoding table and its extensive error messages, it is
enabled by default only on offline targets such as mpy-cross and the
qemu port.
Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
Thumb/Thumb2 tests are now in their own subdirectory, as
RV32IMC-specific tests will be added as part of the RV32 inline
assembler support.
Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
A return value of 0 from Python-level `ioctl()` means success, but if
that's returned unconditionally it means that the method supports all
ioctl calls, which is not true. Returning 0 without doing anything can
potentially lead to a crash, eg for MP_STREAM_SEEK which requires returning
a value in the passed-in struct pointer.
This commit makes it so that all `ioctl()` methods respond only to
MP_STREAM_CLOSE, ie they return -1 (indicating error) for all other ioctl
calls.
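For a Python-level stream class the pattern looks roughly like this (the
class is just an illustration; 4 is the value of MP_STREAM_CLOSE):

    import io

    class MyStream(io.IOBase):
        def ioctl(self, req, arg):
            MP_STREAM_CLOSE = 4
            if req == MP_STREAM_CLOSE:
                return 0  # success
            return -1  # error: this ioctl is not supported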
Signed-off-by: Damien George <damien@micropython.org>
Running unittest-based tests with --via-mpy is currently broken, because
the unittest test needs the module to be named `__main__`, whereas it's
actually called `__injected_test`.
Fix this by changing the name when the file is opened.
Signed-off-by: Damien George <damien@micropython.org>
So that a failing unittest-based test has its entire log printed when using
`run-tests.py --print-failures`.
Signed-off-by: Damien George <damien@micropython.org>
All the existing tests require a .exp file (either manually specified or
generated by running the test first under CPython) that is used to check the
output of running the test under MicroPython. The test passes if the
output matches the expected output exactly.
This has worked very well for a long time now. But some of the newer
hardware tests (eg UART, SPI, PWM) don't really fit this model, for the
following main reasons:
- Some but not all parts of the test should be skipped on certain hardware
targets. With the expected-output approach, skipping tests is either all
or nothing.
- It's often useful to output diagnostics as part of the test, which should
not affect the result of the test (eg the diagnostics change from run to
run, like timing values, or from target to target).
- Sometimes a test will do a complex check and then print False/True if it
passed or not, which obscures the actual test result.
To improve upon this, this commit adds support to `run-tests.py` for a test
to use `unittest`. It detects this by examining the end of the output after
running the test, looking for the test summary printed by `unittest`
(or an error message saying `unittest` was not found). If the test uses
`unittest` then it should not have a .exp file, and it's not run under
CPython. A `unittest` based test passes or fails based on the summary
printed by `unittest`.
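A rough sketch of how that detection might look (not the actual
`run-tests.py` code):

    def unittest_summary(output):
        # Look for the unittest summary ("Ran N tests...", then "OK" or
        # "FAILED (...)") in the last few lines of the captured output.
        lines = output.rstrip().rsplit(b"\n", 3)[-3:]
        if any(line.startswith(b"Ran ") for line in lines):
            return lines[-1].startswith(b"OK")  # True = passed, False = failed
        return None  # not a unittest-based test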
Note that (as long as `unittest` is installed on the target) the tests are
still fully independent and you can still run them without `run-tests.py`:
you just run it as usual, eg `mpremote run <test.py>`. This is very useful
when creating and debugging tests.
Note also that the standard test suite testing Python semantics (eg
everything in `tests/basics/`) will probably never use unittest. Only more
advanced tests will, and ones that are not runnable under CPython.
Signed-off-by: Damien George <damien@micropython.org>
Previously to this commit, running the test suite on a bare-metal board
required specifying the target (really platform) and device, eg:
$ ./run-tests.py --target pyboard --device /dev/ttyACM1
That's quite a lot to type, and you also need to know what the target
platform is, when a lot of the time you either don't care or it doesn't
matter.
This commit makes it easier to run the tests by replacing both of these
options with a single `--test-instance` (`-t` for short) option. That
option specifies the executable/port/device to test. Then the target
platform is automatically detected.
The `--test-instance` can be passed:
- "unix" (the default) to use the unix version of MicroPython
- "webassembly" to test the webassembly port
- anything else is considered a port/device to pass to Pyboard
There are also some shortcuts to specify a port/device, following
`mpremote`:
- a<n> is short for /dev/ttyACM<n>
- u<n> is short for /dev/ttyUSB<n>
- c<n> is short for COM<n>
For example:
$ ./run-tests.py -t a1
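The shortcut expansion could be implemented along these lines (an
illustrative sketch only):

    def resolve_test_instance(instance):
        # "unix" and "webassembly" are handled specially; the single-letter
        # prefixes follow the mpremote shorthand for serial devices.
        for prefix, expansion in (("a", "/dev/ttyACM"), ("u", "/dev/ttyUSB"), ("c", "COM")):
            if instance.startswith(prefix) and instance[1:].isdigit():
                return expansion + instance[1:]
        return instance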
Note that the default test instance is "unix" and so this commit does not
change the standard way to run tests on the unix port, by just doing
`./run-tests.py`.
As part of this change, the platform (and its native architecture if it
supports importing native .mpy files) is shown at the start of the test run.
Signed-off-by: Damien George <damien@micropython.org>
Commit 69c25ea865 made raising `SystemExit`
do a soft reset (on bare-metal targets). This means that any test which is
skipped by a target (by raising `SystemExit`) will trigger a soft reset on
that target, and then it must execute its startup code, such as `boot.py`.
If the timing is right, this startup code can be unintentionally
interrupted by the test runner when preparing the next test, because the
test runner enters the raw REPL again via a Ctrl-C Ctrl-A Ctrl-D sequence
(in `Pyboard.enter_raw_repl()`).
When this happens (`boot.py` is interrupted) the target may not be set up
correctly, and it may (in the case of stm32 boards) flash LEDs and take
extra time, slowing down the test run.
Fix this by explicitly waiting for the target to finish its soft reset when
it skips a test.
Signed-off-by: Damien George <damien@micropython.org>