The HAL test suite
~~~~~~~~~~~~~~~~~~
The tests in these directories serve to test the behavior of HAL components.

Each subdirectory of this directory may contain a test item.  The runtests
script recurses through the directory structure, so multiple tests could
be structured as
	tests/
		xyz.0
		xyz.1
		xyz.2
or
	tests/
		xyz/
			0
			1
			2


Two types of tests are supported: regression tests, in which the output is
compared against a "known good" output, and functional tests, in which the
output is fed to a program that determines whether it is correct.


Running the tests
~~~~~~~~~~~~~~~~~
Currently, tests only work with the "run in place" configuration.  They
can be run by executing (from the top emc2 directory)
	scripts/runtests tests
A subset of the tests can also be run:
	scripts/runtests tests/xyz tests/a*
The directories named on the command line are searched recursively for
'test.hal', 'test.sh', or 'test' files; a directory containing such a file
is assumed to hold a regression test or a functional test.

Tests may contain files other than the ones specified below.  For instance,
when using 'streamer' data as test input, a shell script with
"halstreamer<<EOF" and a "here document" will generally be present.
(see and-or-not-mux.0/runstreamer for an example)
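As a sketch, such a helper might look like this (the channel number and the
data values below are invented placeholders, not taken from a real test):

```sh
#!/bin/bash
# Hypothetical runstreamer-style helper: feeds a fixed sequence of input
# samples to a 'streamer' realtime component via halstreamer, using a
# here document as the data source.  Channel 0 and the sample values are
# illustrative only.
halstreamer -c 0 <<EOF
0
1
1
0
EOF
```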

Regression Tests
~~~~~~~~~~~~~~~~
A regression test should consist of these three files:
	README
		A human-readable file describing the test
	test.hal *or* test.sh *or* test
		The test script to execute.  test.hal is executed with
		'halrun -f', test.sh is executed with 'bash -x', and
		test is executed as ./test
	expected
		A file whose contents are compared with the stdout of
			halrun -f test.hal

A typical regression test will load several components, usually including
'threads' and 'sampler', and often including 'streamer', then collect samples
from some number of calls to the realtime thread, then exit.

Regression test "test.hal" files will almost always include the line
	setexact_for_test_suite_only
which causes HAL to act as though the requested base_period was available.
Otherwise, results will differ slightly depending on the actual base_period
and regression tests will fail.
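Putting the pieces above together, a minimal "test.hal" might look like the
following sketch (the choice of 'siggen' as the component under test, the
thread period, and the sample count are illustrative assumptions, not copied
from an existing test):

```hal
# Pretend the requested base_period was granted exactly, so the output
# does not depend on the machine's real timing.
setexact_for_test_suite_only

# One realtime thread, a component that generates data, and a sampler
# to capture it.
loadrt threads name1=thread1 period1=1000000
loadrt siggen
loadrt sampler cfg=F depth=100

addf siggen.0.update thread1
addf sampler.0 thread1
net sig siggen.0.sine => sampler.0.pin.0

# Run the thread and print ten captured samples on stdout, which
# runtests compares against the 'expected' file.
start
loadusr -w halsampler -n 10
```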

The test passes if the expected and actual output are identical.
Otherwise, the test fails.


Functional Tests
~~~~~~~~~~~~~~~~
A functional test should consist of three files:
	README
		A human-readable file describing the test
	test.hal *or* test.sh *or* test
		The test script to execute.  test.hal is executed with
		'halrun -f', test.sh is executed with 'bash -x', and
		test is executed as ./test
	checkresult
		An executable file (such as a shell or python script)
		which determines if the stdout of
			halrun -f test.hal
		indicates success or failure

Functional test "test.hal" files will often include the line
	setexact_for_test_suite_only
which causes HAL to act as though the requested base_period was available.
Otherwise, results will differ slightly depending on the actual base_period,
which could affect whether 'checkresult' gives an accurate result.

A typical functional test will load several components, usually including
'threads' and 'sampler', then collect samples from some number of calls
to the realtime thread, then exit.  'checkresult' will look at the output
and decide whether it indicates success.

The test passes if the command "checkresult actual" returns a shell
success value (exit code 0).  Otherwise, the test fails.
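For illustration, a 'checkresult' could be written in Python along these
lines (the expected final count of 10 is a made-up value; a real checker
would encode whatever its own test's output should satisfy):

```python
#!/usr/bin/env python
# Hypothetical 'checkresult' sketch: reads the captured test output from
# the file named on the command line ("actual") and exits 0 on success,
# nonzero on failure.
import sys

def output_ok(lines, expected_final=10):
    """Return True if the last non-blank line is the expected final count."""
    values = [line.strip() for line in lines if line.strip()]
    return bool(values) and values[-1] == str(expected_final)

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1]) as f:
        ok = output_ok(f.readlines())
    sys.exit(0 if ok else 1)
```

runtests would invoke it as "checkresult actual"; exit status 0 marks the
test as passed, anything else as failed.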