Compare commits


151 commits

Author SHA1 Message Date
Jeff Epler
c69f31ac08 OK how about 100x 2017-08-16 21:24:54 -05:00
Jeff Epler
9012f5d438 just run test_inverse_attr3 to speed appveyor turnaround 2017-08-16 21:20:34 -05:00
Jeff Epler
e4eabb0108 does opening in binary help? 2017-08-16 17:41:39 -05:00
Jeff Epler
99379a7d0b these test failures are thought to be fixed 2017-08-16 08:24:56 -05:00
Jeff Epler
7f385521a5 Merge remote-tracking branches 'jepler/appveyor-single-thread-test', 'jepler/mac-inverse-attr3-failure', 'jepler/mismatch-new-delete', 'jepler/nullptr-bool', 'jepler/sanitize' and 'jepler/test-parallelism-failures' 2017-08-16 08:21:49 -05:00
Jeff Epler
0b6cd90dd9 Fix test_inverse_attr3 failure
This fixes the failure in test_inverse_attr3 seen on travis ci's osx
build.

Actually, only the change to sectionReader::getRealInstance is
needed to fix the test, but as the reason that 'unget' can fail is
unclear, I changed all instances of 'unget' to use the 'seekg' +
arithmetic method instead.

I searched for a reason why 'unget' could fail in this way, and for
reports of macOS-specific failures in 'unget', but was not
enlightened.

I do not know whether test_inverse_attr3 would *consistently* hang
on Appveyor, but after this change (and modifying .appveyor.yml
to not skip test_inverse_attr3) it did succeed on the first try.
2017-08-16 07:49:13 -05:00
Jeff Epler
4160f3fe8a customize travis for my fork 2017-08-15 21:25:20 -05:00
Jeff Epler
02910b2d38 Match new[] and delete[]
.. this would otherwise cause a memory use error in the unusual case
where a numeric identifier held more letters than expected.

For instance, passing the following file as the input to tst_inverse_attr3:

ISO-10303-21;
HEADER;
FILE_DESCRIPTION(('SCL test file'),'2;1');
FILE_NAME('test_inverse_attr.p21','2012-06-30T',('mp'),(''),'0','1','2');
FILE_SCHEMA(('test_inverse_attr'));
ENDSEC;
DATA;
ENDSEC;
END-ISO-10303-21;

and running it under valgrind can cause a diagnostic similar to the following to be displayed:
Mismatched free() / delete / delete []
   at 0x4C2D2DB: operator delete(void*) (vg_replace_malloc.c:576)
   by 0x507A5A6: sectionReader::readInstanceNumber() (sectionReader.cc:224)
   by 0x507CCC7: lazyP21DataSectionReader::nextInstance() (lazyP21DataSectionReader.cc:53)
   by 0x507C797: lazyP21DataSectionReader::lazyP21DataSectionReader(lazyFileReader*, std::basic_ifstream<char, std::char_traits<char> >&, std::fpos<__mbstate_t>, unsigned short) (lazyP21DataSectionReader.cc:11)
   by 0x50699F2: lazyFileReader::initP21() (lazyFileReader.cc:14)
   by 0x5069E5D: lazyFileReader::lazyFileReader(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, lazyInstMgr*, unsigned short) (lazyFileReader.cc:61)
   by 0x506AAA7: lazyInstMgr::openFile(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) (lazyInstMgr.cc:103)
   by 0x4023A1: main (inverse_attr3.cc:35)
 Address 0x6a861a0 is 0 bytes inside a block of size 21 alloc'd
   at 0x4C2C93F: operator new[](unsigned long) (vg_replace_malloc.c:423)
   by 0x507A3A7: sectionReader::readInstanceNumber() (sectionReader.cc:202)

This problem is also reported in static analysis by clang, and as such can be seen in travis ci build logs.
2017-08-15 21:24:23 -05:00
Jeff Epler
3fd71cf457 appveyor build: don't use ctest parallelism
On Windows, concurrent access to files is severely restricted
compared to standard operating systems.  When ctest invokes cmake
concurrently, each cmake invocation writes to the same files at the
same time, leading to spurious test failures like

  error MSB3491: Could not write lines to file "...".  The process
  cannot access the file '...' because it is being used by another
  process.

Explicitly ask for no parallelism with "-j1", even though it is
probably the default.
2017-08-15 20:38:47 -05:00
Jeff Epler
1ff41f76a4 sc_version_string: Use temporary names resilient against parallel builds
In #359 I identify a race condition between multiple parallel invocations
of cmake, which can arise naturally during ctest runs.  Now that the file
contents will not change without an intervening git commit, it is
sufficient to ensure that the parallel invocations use distinct temporary
file names with high probability.
2017-08-15 19:54:05 -05:00
Jeff Epler
a11c373fc7 sc_version_string: omit the date and time
As analyzed in #359, if the header contains the current time, it will
be updated while running the testsuite; this, in turn, causes multiple
cmake processes to attempt to update targets like lib/libexpress.so.2.0.0
at the same time, causing test failures.
2017-08-15 19:47:26 -05:00
Jeff Epler
6f1b5adc3f errordesc.cc: Correctly append a single character to a std::string
The idiom
    char c = ...;
    _userMsg.append( &c );
is not correct C++: append( const char * ) treats the address of 'c'
as a NUL-terminated C string, but nothing guarantees that a NUL
follows 'c' in memory.

When building and testing on Debian Stretch with AddressSanitizer:
    ASAN_OPTIONS="detect_leaks=false" CXX="clang++" CC=clang CXXFLAGS="-fsanitize=address" LDFLAGS="-fsanitize=address" cmake .. -DSC_ENABLE_TESTING=ON  -DSC_BUILD_SCHEMAS="ifc2x3;ap214e3;ap209"
    ASAN_OPTIONS="detect_leaks=false" make
    ASAN_OPTIONS="detect_leaks=false" ctest . --output-on-failure
an error like the following is encountered:

==15739==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7ffeb2ca7621 at pc 0x00000043c943 bp 0x7ffeb2ca75d0 sp 0x7ffeb2ca6d80
READ of size 33 at 0x7ffeb2ca7621 thread T0
    #0 0x43c942 in __interceptor_strlen.part.45 (/home/jepler/src/stepcode/build/bin/lazy_sdai_ap214e3+0x43c942)
    #1 0x7fb9056e6143 in std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::append(char const*) (/usr/lib/x86_64-linux-gnu/libstdc++.so.6+0x11f143)
    #2 0x7fb905b677c3 in ErrorDescriptor::AppendToDetailMsg(char) /home/jepler/src/stepcode/src/clutils/errordesc.cc:150:5

Address 0x7ffeb2ca7621 is located in stack of thread T0 at offset 33 in frame
    #0 0x7fb905b676af in ErrorDescriptor::AppendToDetailMsg(char) /home/jepler/src/stepcode/src/clutils/errordesc.cc:149

  This frame has 1 object(s):
    [32, 33) '' <== Memory access at offset 33 overflows this variable

A similar problem with AppendToUserMsg is found by inspection.

After this change, all 200 tests pass under the AddressSanitizer
configuration.
2017-08-15 07:51:44 -05:00
Jeff Epler
9df2f19fc6 express/error.c: Ensure the error buffer does not overflow
On Debian Stretch, when configuring stepcode like so:
    ASAN_OPTIONS="detect_leaks=false" CXX="clang++" CXXFLAGS="-fsanitize=address" cmake ..
a fatal error would be detected:

  ==29661==ERROR: AddressSanitizer: heap-buffer-overflow on address
  0x62100001dca0 at pc 0x0000004435e3 bp 0x7ffed6d9cae0 sp 0x7ffed6d9c290

  READ of size 4001 at 0x62100001dca0 thread T0

      #0 0x4435e2 in __interceptor_strlen.part.45 (/home/jepler/src/stepcode/build/bin/schema_scanner+0x4435e2)
      #1 0x501d7b in ERRORreport_with_symbol /home/jepler/src/stepcode/src/express/error.c:413

  0x62100001dca0 is located 0 bytes to the right of 4000-byte region
  [0x62100001cd00,0x62100001dca0)

  allocated by thread T0 here:

      #0 0x4c3ae8 in __interceptor_malloc (/home/jepler/src/stepcode/build/bin/schema_scanner+0x4c3ae8)
      #1 0x5011fc in ERRORinitialize /home/jepler/src/stepcode/src/express/error.c:129

Operations on ERROR_string were unsafe, because they did not guard
against accesses beyond the end of the allocated region.

This patch ensures that all accesses via *printf functions respect
the end of the buffer, and encapsulates the logic that points
ERROR_string at the space where the next error text should start, if
space is available.

Finally, a stray manipulation of ERROR_string within the
print-to-file branch of the code, found during the search and
replace, is removed.  This stray line would have moved ERROR_string
one byte further along at every warning-to-file, which could also
have been a cause of the problem here.
2017-08-15 07:51:44 -05:00
Jeff Epler
0fbc3c0c84 Fix build error with g++ 6.3 (Debian Stretch)
On this platform, TEST_NULLPTR fails, even though nullptr and
nullptr_t are supported:

/home/jepler/src/stepcode/build/CMakeFiles/CMakeTmp/src.cxx:4:23:
    error: converting to 'bool' from 'std::nullptr_t'
    requires direct-initialization [-fpermissive]
 int main() {return !!f();}
                      ~^~

Subsequent to this failure, the workaround definitions in sc_nullptr.h
cause standard C++ headers (which must refer to the real nullptr) to fail.

The failure occurs because the C++ standard apparently does not state
that operator! may be used on nullptr.  Despite this, some compilers
have historically allowed it.  g++ 6.3's behavior appears to be aligned
with the standard.

As requested by @brlcad, ensure that the function 'f' is used from main,
to prevent a clever (but not nullptr-supporting) compiler from somehow
skipping 'f' altogether, creating a false positive for nullptr support.
2017-08-15 06:50:56 -05:00
Mark
a96336ab97 Merge pull request #351 from stepcode/review/327
Review/327
2017-08-13 20:46:34 -04:00
Thomas Paviot
b24680d7e5 Merge pull request #356 from luzpaz/ascii-typos
Fixed typos showing up as ascii chars
2017-03-05 07:41:13 +01:00
Kunda
15afe96d67 Fixed typos showing up as ascii chars
Using http://www.lisi.ensma.fr/ftp/enseignement/A3_Master_Ingenierie_donnees/fonctionsGrammaire_EXPRESS.pdf I was able to fix typos in the text files for Builtin.py
2017-03-04 07:22:46 -05:00
Cliff Yapp
a78ca01b54 Revert "Get latest version of ap242 from http://stepmod.cvs.sourceforge.net/viewvc/stepmod/stepmod/data/modules/ap242_managed_model_based_3d_engineering/mim_lf.exp"
This reverts commit 0b456a833e.

New schema apparently doesn't build.
2016-08-06 16:51:58 -04:00
Cliff Yapp
8627627c5e Add an option to completely bypass the git management of the version header. 2016-08-06 13:53:12 -04:00
Cliff Yapp
9b091756b1 Allow the user to control whether C++11 is used (matters for subbuilds) 2016-08-06 13:02:35 -04:00
Cliff Yapp
dfce2dcf07 For the flags variables, rather annoyingly, they actually need to be managed as strings and not lists. 2016-08-06 12:37:01 -04:00
Cliff Yapp
5c7e63c75b Fix macro comments. This approach to managing the targets varies from the old EXCLUDE_OR_INSTALL setup in that by default the testable targets are not added to the 'all' build - in other words, 'make' will make just the main stepcode targets. The testable targets are available individually, and if SC_ENABLE_TESTING is enabled they *will* be added to the 'all' target, but by default they are not compiled in the basic build. 2016-08-06 12:32:12 -04:00
Cliff Yapp
0b456a833e Get latest version of ap242 from http://stepmod.cvs.sourceforge.net/viewvc/stepmod/stepmod/data/modules/ap242_managed_model_based_3d_engineering/mim_lf.exp 2016-08-06 12:05:18 -04:00
Cliff Yapp
a243b4d8c8 Separate shared and static generated files for better parallel building safety. 2016-08-06 11:55:38 -04:00
Cliff Yapp
daec3e2640 Add option handling to the SC target macros, replacing the EXCLUDE_FROM_INSTALL macro. 2016-08-06 11:43:59 -04:00
Cliff Yapp
06b13bb9af Make a stab at adapting the new, simpler verification to vanilla stepcode 2016-08-06 11:23:34 -04:00
Cliff Yapp
e38519a7f1 Update Find cmake scripts 2016-08-06 10:53:19 -04:00
Cliff Yapp
12def15dd2 Start working on merging BRL-CAD changes back into upstream. 2016-08-06 10:46:23 -04:00
Mark Pictor
a160cc9af6 fix #327 - statically initialize t_sdaiINTEGER etc 2015-08-30 11:57:58 -04:00
Mark Pictor
1dfe76b3b4 README: put CI badges in table 2015-08-23 22:30:36 -04:00
Mark Pictor
c23ba65c41 appveyor log - use JSON-like console data, massaged for parsability 2015-08-23 22:30:16 -04:00
Mark Pictor
274de2a91d appveyor: exclude hanging test 2015-08-10 21:06:51 -04:00
Mark Pictor
1a757024c4 add appveyor badge 2015-08-10 21:04:10 -04:00
Mark Pictor
abae0a7c45 allow downloading log directly from AV 2015-08-10 21:00:35 -04:00
Mark
bfc11face5 Merge pull request #345 from stepcode/review/misc
appveyor isn't at 100%, but it's considerably better
2015-08-10 20:47:30 -04:00
Mark Pictor
677261d4fb fix LNK2004 getEDesc already defined in sectionReader.obj
is this really the only/best way to fix this?!
2015-08-09 23:36:54 -04:00
Mark Pictor
4207a46f07 fix MSVC link error for NilSTEPentity 2015-08-09 23:30:13 -04:00
Mark Pictor
7316fe5070 msvc warnings/errors 2015-08-09 23:29:15 -04:00
Mark Pictor
4d32009592 'register' storage class specifier is deprecated [-Wdeprecated-register] 2015-08-03 22:00:22 -04:00
Mark Pictor
0b6078b72b missing include 2015-08-03 21:55:00 -04:00
Mark Pictor
36e34862cc cllazyfile: work around LNK2005 error. had to tweak class members so MSVC didn't see MgrNodeBase class twice.
suspect there is a better solution, but I'm not sure what it would be
2015-08-03 21:50:19 -04:00
Mark Pictor
9dcb6aa640 build judy array as part of base lib, else import/export macros don't work 2015-08-03 21:38:37 -04:00
Mark Pictor
bc5533bda8 test for, and use, nullptr if we have it 2015-08-03 21:37:28 -04:00
Mark Pictor
4e88ad69eb summarize-appveyor-log: sort before printing 2015-08-02 22:51:22 -04:00
Mark Pictor
b890c156f5 attempt to silence msvc linker errors 2015-08-02 22:50:26 -04:00
Mark Pictor
0ba7343004 no more excuses, build cllazyfile on windows 2015-08-02 15:59:48 -04:00
Mark Pictor
f2247d222f use COMPILE_DEFINITIONS property for definitions 2015-08-02 15:58:04 -04:00
Mark Pictor
5288026043 add windows dll import/export macros to cllazyfile 2015-08-02 15:57:10 -04:00
Mark Pictor
6e8cad223e cmake 2.8.7 needs append_string, not append 2015-08-02 15:37:45 -04:00
Mark Pictor
018e7cfffc reduce delay used in parallel test, as it appears to cause a failure.
TODO: rework the test to not be timing-sensitive
2015-08-02 14:53:36 -04:00
Mark Pictor
458b775f41 simplify cmake logic - use set_property(...APPEND...) rather than get/list append/set 2015-08-02 14:30:01 -04:00
Mark Pictor
dc82923cf1 support cmake 2.8.7 since that's what travis-ci uses 2015-08-02 14:28:53 -04:00
Mark Pictor
aa967b3316 tweak include dir logic, print path 2015-07-26 23:22:32 -04:00
Mark Pictor
17b41da525 add std::chrono test, if available use in thread test 2015-07-26 22:22:44 -04:00
Mark Pictor
135adc76c8 fix warning, improve cmake messages 2015-07-26 20:49:04 -04:00
Mark Pictor
017faa942f make schema-specific tests work on cmake 3.x 2015-07-26 20:49:04 -04:00
Mark Pictor
46c37207da replace c++ style comments recently introduced with c-style comments 2015-07-26 16:59:42 -04:00
Mark Pictor
85f45f38e4 debug message for appveyor 2015-07-26 15:22:24 -04:00
Mark Pictor
be378119f4 piping to grep prevents appveyor from detecting failures 2015-07-26 15:22:09 -04:00
Mark Pictor
ecde882d5a oops, forgot export macro for path2str 2015-07-26 14:32:43 -04:00
Mark Pictor
3d90ffdf83 remove yet more CORBA and ObjectStore stuff... surprised it still exists 2015-07-26 14:22:07 -04:00
Mark Pictor
5dfca2ed78 exp2py - remove unused function USEREFout 2015-07-26 14:20:24 -04:00
Mark Pictor
fbf0272d3b indent a listdo/listod 2015-07-26 13:42:52 -04:00
Mark Pictor
c69f9ebab2 fix length check for keyword detection function 2015-07-26 13:42:25 -04:00
Mark Pictor
942fb89f68 cleanup 2015-07-26 13:41:49 -04:00
Mark Pictor
893936b11f printf(...) -> fprintf( stderr, ...): warnings and errors should not be on stdout 2015-07-26 13:39:40 -04:00
Mark Pictor
e4a8be26da resolve MSVC "unknown escape sequence" warning 2015-07-19 18:29:55 -04:00
Mark Pictor
c63b3cc9b8 add CONTRIBUTING.md 2015-07-19 15:02:54 -04:00
Mark Pictor
2dace5da2e update CI-related stuff 2015-07-19 15:02:54 -04:00
Mark
7568033499 Merge pull request #344 from cshorler/python_p21_lexer_and_parser_improvements
Python p21 lexer and parser improvements
2015-07-12 17:41:14 -04:00
Mark
65b6869b30 Merge pull request #343 from cshorler/python_2_6_compatibility
Python 2.6 compatibility
2015-07-12 17:39:18 -04:00
Christopher Horler
915e0de65f simplification - invocation of t_STRING / t_BINARY guarantees we can use string slicing rather than strip() 2015-07-08 07:59:37 +01:00
Christopher Horler
9c83ba32a9 fix list / params handling 2015-07-07 18:46:19 +01:00
Christopher Horler
13f36c11a8 implement value conversions for simple types 2015-07-07 18:44:58 +01:00
Christopher Horler
6b26410d9d make default implementation take bigger "slurps" looking for PART21_START tokens 2015-07-06 23:55:16 +01:00
Christopher Horler
3e84677ac9 change handling of base_tokens to simplify subclass implementations 2015-07-06 23:52:27 +01:00
Christopher Horler
081cf35855 Update tests function to parse every .stp file in the stepcode tree
(assumes code is under ~/projects/src/stepcode)
2015-07-05 18:36:33 +01:00
Christopher Horler
c1c3bc1077 Lexer improvements
- change way states are used, could give a substantial performance improvement
 - implement a more flexible approach for exchange_file start token search (more extensible for subclassing)
 - rework/standardise keyword implementation for DATA token

Parser improvements
 - implement error handling for duplicate entity instances
   * parser catches the error, logs it
   * resyncs and continues (the duplicate is ignored)
 - rework the exchange_file structure detection
   * added parser.reset() to allow a more flexible approach to subclassing
2015-07-05 18:27:46 +01:00
Christopher Horler
6351ff38d9 Python 2.6
- replace another dict comprehension
 - ensure new style classes are used in Python 2.6
 - change the way the tokens list is used (improves ability to subclass)
2015-07-05 18:08:33 +01:00
Christopher Horler
a82f7497a6 Python 2.6 doesn't have NullHandler or dict comprehensions 2015-07-05 17:30:40 +01:00
Mark
2a3e2a9abf Merge pull request #342 from cshorler/improve_extensibility_for_python_p21_handling
make Python Part21 lexer more extensible for writing custom parser rules
2015-06-26 15:26:40 -04:00
Christopher Horler
eaf9ffc3f3 raise ValueError instead of sys.exit if input doesn't have valid header / or duplicate entity instances 2015-06-24 22:26:34 +01:00
Christopher Horler
b72f4d404a fix typo in rule 2015-06-24 17:59:01 +01:00
Christopher Horler
b31d8ef853 when subclassing due to ply's dir() usage to determine rules a start rule is necessary 2015-06-24 17:24:23 +01:00
Christopher Horler
c865023114 files may contain multiple exchange structures, to allow for this add rudimentary state tracking 2015-06-24 17:22:49 +01:00
Christopher Horler
25ca2a788d make Python Part21 lexer more extensible for writing custom parser rules 2015-06-22 20:38:20 +01:00
Mark
a80489dc96 Merge pull request #340 from stepcode/review/misc
warnings + misc
2015-06-14 21:25:25 -04:00
Mark Pictor
35170b388e give up on msbuild, seems to offer no benefits. go back to cmake --build 2015-06-14 21:23:08 -04:00
Mark Pictor
d2680940ad disable 2 builds so appveyor will go faster 2015-06-14 19:00:28 -04:00
Mark Pictor
e24ac2acd3 appveyor - print results from failed tests 2015-06-14 18:07:04 -04:00
Mark Pictor
903277f288 in appveyor summary, print errors first 2015-06-13 21:29:36 -04:00
Mark Pictor
5b2782a34c appveyor has grep, so use it on build output 2015-06-13 21:28:59 -04:00
Mark Pictor
97731f8611 warning about extra parens 2015-06-13 21:13:04 -04:00
Mark Pictor
272928d6e5 treat 'fatal error' as error; also ran go fmt 2015-06-13 21:13:04 -04:00
Mark Pictor
76944b3a90 powershell's select-string is an abomination... is grep in PATH? 2015-06-13 21:13:04 -04:00
Mark
8a270222c6 Merge pull request #341 from cshorler/mpictor/review-misc
remove strncpy as we're not using it anyway - fixes crash on enumerat…
2015-06-13 20:34:40 -04:00
Christopher Horler
e8afb772f1 remove strncpy as we're not using it anyway - fixes crash on enumeration output 2015-06-13 15:35:32 +01:00
Mark Pictor
0eefd769e3 oops, use our own stdbool 2015-06-08 22:53:45 -04:00
Mark Pictor
1c8225f79a eliminate gcc warnings for exp2py 2015-06-08 22:40:38 -04:00
Mark Pictor
2d8a7373fd fix some MSVC warnings 2015-06-08 22:40:38 -04:00
Mark Pictor
2300021416 MSVC warning - multiple default constructors 2015-06-08 22:40:38 -04:00
Mark Pictor
91a2dcacf3 MSVC lacks __func__, use __FUNCTION__ instead 2015-06-08 22:40:38 -04:00
Mark Pictor
618deae50e add golang program to summarize msvc warnings/errors from appveyor 2015-06-08 22:40:38 -04:00
Mark Pictor
1aa78e7e48 make appveyor happier 2015-06-08 22:40:38 -04:00
Mark Pictor
4129d363b0 add stepcore test for segfault in STEPattribute::set_null() 2015-05-20 20:51:14 -04:00
Mark Pictor
619c927ecb update Doxyfile 2015-05-20 20:03:53 -04:00
Mark Pictor
89684015ae improve comments for doxygen 2015-05-20 20:03:47 -04:00
Mark Pictor
8423811031 delete makefile fragment 2015-05-20 20:02:09 -04:00
Mark
4f9ad1c75a Merge pull request #338 from ShabalinAnton/cllazyfile_fixes
Cllazyfile fixes
2015-05-19 19:30:41 -04:00
Anton Shabalin
0c34752882 1. Heap corruption error fixed at sectionReader
2. Win support added at lazyRefs
2015-05-19 11:21:11 +08:00
Mark
0b3ab924f8 Merge pull request #332 from pcjc2/speedup_instmgr
Speedup InstMgr by running PrettyTmpName() outside of search loops
2015-04-18 21:47:39 -04:00
Mark
fae60fbe8c Merge pull request #334 from stepcode/mp/TypeName_temp_ref
returning pointer to temporary for type name
2015-04-14 22:54:35 -04:00
Mark Pictor
703654ae70 update use of AttrTypeName in test/scl2html.cc 2015-04-14 22:53:40 -04:00
Mark
9fca25bed6 Merge pull request #335 from stepcode/mp/supertype-parens
exppp - supertype parens
2015-04-14 21:07:35 -04:00
Mark Pictor
988d2614be add test for supertype andor parens 2015-04-12 23:23:24 -04:00
Mark Pictor
1a8282d5dc missing parens on supertype andor - #318 2015-04-12 22:59:29 -04:00
Mark Pictor
235d38a80c doxify comments 2015-04-12 22:16:25 -04:00
Mark Pictor
cba6fec8d7 eliminate a reference to temporary, modify 2 funcs
modify AttrDescriptor::TypeName() and TypeDescriptor::AttrTypeName() - don't return
std::string::c_str()
2015-04-12 22:15:59 -04:00
Mark
1d2e33e575 Merge pull request #326 from cshorler/exp2python_remove_exppp_dependency
Exp2python remove exppp dependency
2015-04-12 15:31:56 -04:00
Peter Clifton
2829deb7cc Speedup InstMgr by running PrettyTmpName() outside of search loops
PrettyTmpName() came up fairly hot in at least one of these loops
when profiling loading and iterating over STEP files.
2015-04-12 19:48:08 +01:00
Christopher Horler
c83f0792a1 remove unused TypeDescription function 2015-04-03 21:17:29 +01:00
Christopher Horler
482c7905f3 refactor expression output for Python (remove exppp dependency) 2015-04-03 21:08:47 +01:00
Christopher Horler
46b5e74596 remove exppp runtime dependency 2015-04-03 21:08:32 +01:00
Mark
ec0443a446 Merge pull request #324 from cshorler/ap242ed1_fixes
Python - AP242ed1 fixes
2015-04-01 18:42:39 -05:00
Mark
6708334b37 Merge pull request #325 from ramcdona/FixUps
Fix ups
2015-03-29 18:29:01 -05:00
Christopher Horler
9025643a1b fix generation of TYPE ENUMERATION 2015-03-28 13:32:21 +00:00
Christopher Horler
b996d835f5 use the Python standard library enum module to implement the EXPRESS ENUMERATION 2015-03-28 13:19:01 +00:00
Rob McDonald
48df74932a Remove unused real typedef. 2015-03-27 08:38:46 -07:00
Rob McDonald
3b0f0b33ea Add ExternalProject_add based build of ap203min example 2015-03-27 08:38:38 -07:00
Christopher Sean Morrison
167d1d7fab severity is unused, unname it 2015-03-27 11:19:56 -04:00
Christopher Sean Morrison
abb43fcb6f remove unused (and duplicated) variable, quellage 2015-03-27 11:18:24 -04:00
Christopher Sean Morrison
a2f2f41e66 quell warning about type mismatch. needs to be signed as the sdai instance could be -1. 2015-03-27 11:13:59 -04:00
Christopher Sean Morrison
e68afbad49 respond to the TODO about neg values ever being used. they currently are, which cascades into a signed/unsigned mismatch down the line in the lazy loader. 2015-03-27 11:12:17 -04:00
Christopher Sean Morrison
33ff7c57dc quell compiler warnings (errors in later versions of llvm 3.6) about incompatible type assignment (expecting a pointer) 2015-03-27 10:59:34 -04:00
Christopher Horler
04a57d0f2a avoid subclassing boolean types in Python 2015-03-25 21:56:23 +00:00
Christopher Horler
062b0e6ae3 fix - use of local variable name "pass" in AP242 causes invalid python code 2015-03-23 19:50:59 +00:00
Mark
5c6ffc7dca Merge pull request #322 from cshorler/fix_hdr_install
fix header installation - remove sc_stdbool.h from src/base/CMakeLists.t...
2015-03-20 20:10:05 -04:00
Christopher Horler
ea80885185 fix header installation - remove sc_stdbool.h from src/base/CMakeLists.txt, is already in include/CMakeLists.txt 2015-03-19 19:42:06 +00:00
Mark
8964d2ed05 Merge pull request #320 from stepcode/review/segfault
ap242 segfault
2015-02-23 20:36:36 -05:00
Mark
f25fb9d968 Merge pull request #319 from stepcode/mp/msvc-warn
msvc warn
2015-02-22 22:24:00 -05:00
Mark Pictor
0336649b07 also delete these attrs in dtor 2015-02-22 22:20:56 -05:00
Mark Pictor
bdcd5166e1 was returning Severity from a bool function 2015-02-19 20:23:16 -05:00
Mark Pictor
dbbd0c63b6 cout -> std::cout 2015-02-19 20:23:15 -05:00
Mark Pictor
cb771f628e int->bool in several places 2015-02-19 20:23:15 -05:00
Mark Pictor
2a17763354 fix msvc warnings in express/test/print_schemas.c 2015-02-17 20:19:52 -05:00
Mark Pictor
7f19877b1a fix msvc warnings in express/test/print_attrs.c 2015-02-17 20:17:32 -05:00
Mark Pictor
164355c640 fix LISTdo indentation for 2 loops 2015-02-16 21:48:09 -05:00
Mark Pictor
2c7ed5c826 add another loop to initialize any attrs missed by first loop 2015-02-16 21:46:09 -05:00
Mark Pictor
264db5d03c MSVC warning C4113: 'void (__cdecl *)()' != 'void (__cdecl *)(void)' 2015-02-16 20:58:37 -05:00
Mark Pictor
423d5e08f6 appveyor: run fewer tests 2015-02-16 20:58:37 -05:00
Mark Pictor
ba6d0a1c0c add test and files for a p21read segfault.
like the 210e2 segfault, this one is probably related to SELECTs.
2015-02-16 20:58:15 -05:00
131 changed files with 3104 additions and 2661 deletions

169
.appveyor.yml Normal file

@ -0,0 +1,169 @@
version: '{build}'
# for Appveyor CI (windows)
os: Windows Server 2012 R2
clone_folder: c:\projects\STEPcode
#grab zip instead of git clone
shallow_clone: true
platform: x64
configuration: Debug
# errors or couldn't be found by cmake
# - GENERATOR: "Visual Studio 9 2008"
# ARCH: 32
# - GENERATOR: "Visual Studio 10"
# ARCH: 32
#no point in these without artifact support...
# - GENERATOR: "Visual Studio 11"
#ARCH: 32
#- GENERATOR: "Visual Studio 12"
#ARCH: 32
environment:
matrix:
- GENERATOR: "Visual Studio 12 Win64"
ARCH: 64
# build:
# parallel: true
# project: ALL_BUILD.vcxproj
#appveyor limits compile/test to 30 minutes
# to reduce time, only test schemas with files: ifc2x3, ap214e3, ap209
build_script:
- ps: |
cd c:\projects\STEPcode
mkdir build
cd build
cmake -version
grep --version
cmake .. -DSC_ENABLE_TESTING=ON -G"$env:GENERATOR" -DSC_BUILD_SCHEMAS="ifc2x3"
cmake --build . --config Debug --target tst_inverse_attr3
#msbuld seems to provide no benefits, and I can't filter its output...
#msbuild SC.sln /logger:"C:\Program Files\AppVeyor\BuildAgent\Appveyor.MSBuildLogger.dll" /p:Configuration=Debug /p:Platform=x64
# /toolsversion:14.0 /p:PlatformToolset=v140
test_script:
- cmd: echo inverse_attr3 test 100x
- cmd: cd c:\projects\STEPcode\build
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: echo 10
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: echo 20
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: echo 30
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: echo 40
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: echo 50
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: echo 60
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: echo 70
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: echo 80
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: echo 90
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: bin\tst_inverse_attr3 ..\test\p21\test_inverse_attr.p21
- cmd: echo 100
- cmd: echo done
# - cmd: grep -niB20 "Test Failed" Testing/Temporary/LastTest.log
# we could upload a compiled zip somewhere (see Appveyor artifact documentation)


@@ -2,18 +2,7 @@ sudo: false
language: cpp
compiler:
  - clang
script: mkdir build && cd build && cmake .. -DSC_ENABLE_TESTING=ON && make -j3 && ctest -j2; if [ $? -ne 0 ]; then echo; echo; echo "-----------------------------"; grep -niB20 "Test Failed" Testing/Temporary/LastTest.log && false; fi
branches:
  only:
    - master
notifications:
  irc: "chat.freenode.net#stepcode"
  email: scl-dev@groups.google.com
  on_success: change
  on_failure: always
script: mkdir build && cd build && cmake .. -DSC_ENABLE_TESTING=ON && make -j3 && ctest -j2 --output-on-failure
os:
  - linux
  - osx
matrix:
  allow_failures:
    - os: osx


@@ -62,11 +62,11 @@ endif(COMMAND CMAKE_POLICY)
# CMake derives much of its functionality from modules, typically
# stored in one directory - let CMake know where to find them.
set(SC_CMAKE_DIR "${SC_SOURCE_DIR}/cmake")
if(NOT IS_SUBBUILD)
if(NOT SC_IS_SUBBUILD)
set(CMAKE_MODULE_PATH "${SC_CMAKE_DIR};${CMAKE_MODULE_PATH}")
else(NOT IS_SUBBUILD)
else(NOT SC_IS_SUBBUILD)
set(CMAKE_MODULE_PATH "${CMAKE_MODULE_PATH};${SC_CMAKE_DIR}")
endif(NOT IS_SUBBUILD)
endif(NOT SC_IS_SUBBUILD)
# testing and compilation options, build output dirs, install dirs, uninstall, package creation, etc
include(${SC_CMAKE_DIR}/SC_Build_opts.cmake)
@@ -95,10 +95,12 @@ if(NOT DEFINED SC_BUILD_SCHEMAS)
set(SC_BUILD_SCHEMAS "ALL" CACHE string "Semicolon-separated list of paths to EXPRESS schemas to be built")
endif(NOT DEFINED SC_BUILD_SCHEMAS)
list(APPEND CONFIG_END_MESSAGES
if(NOT SC_IS_SUBBUILD)
list(APPEND CONFIG_END_MESSAGES
".. Don't worry about any messages above about missing headers or failed tests, as long as"
" you see 'Configuring done' below. Headers and features vary by compiler."
".. Generating step can take a while if you are building several schemas.")
endif(NOT SC_IS_SUBBUILD)
# create config headers sc_cf.h and sc_version_string.h
include(${SC_CMAKE_DIR}/SC_Config_Headers.cmake)
@@ -114,6 +116,9 @@ elseif(BORLAND)
add_definitions(-D__BORLAND__ -D__WIN32__)
else()
add_definitions(-pedantic -W -Wall -Wundef -Wfloat-equal -Wshadow -Winline -Wno-long-long)
if(HAVE_NULLPTR)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")
endif(HAVE_NULLPTR)
endif()
include_directories(
@@ -130,9 +135,7 @@ add_subdirectory(src/clstepcore)
add_subdirectory(src/cleditor)
add_subdirectory(src/cldai)
add_subdirectory(src/clutils)
if(NOT WIN32) # don't build cllazyfile on windows until export/import macros are in place
add_subdirectory(src/cllazyfile)
endif(NOT WIN32)
add_subdirectory(src/cllazyfile)
add_subdirectory(include)
add_subdirectory(data)
if(SC_ENABLE_TESTING)
@@ -147,7 +150,7 @@ add_dependencies(core stepdai check-express stepeditor exp2cxx)
# CONFIG_END_MESSAGES - list of messages to be printed after everything else is done.
# THIS MUST BE LAST to ensure that they are visible to the user without scrolling.
foreach(_msg ${CONFIG_END_MESSAGES})
message("${_msg}")
message(STATUS "${_msg}")
endforeach(_msg ${CONFIG_END_MESSAGES})
# Local Variables:

CONTRIBUTING.md (new file, 45 lines)

@@ -0,0 +1,45 @@
# How to contribute
We love contributions!
## Getting started
* Create a github account if you haven't already, and fork the project
* Create a new branch, using a branch name that gives an idea of what the changes are about
* One topic per commit; a number of small commits are better than one big one
* Do not introduce whitespace changes! (**Windows users:** `git config --global core.autocrlf true`)
* Encouraged but not enforced: each commit should stand alone, in the sense that the code should compile and run at that point.
* One major topic per pull request. Commits that fix small things (typos, formatting) are perfectly acceptable in a PR fixing a bug or adding a feature.
* Tests are good. Tests are required unless you're fixing something simple or that was obviously broken.
* Make your changes and push them to your GitHub repo
* Once your branch is pushed, submit a pull request.
* We'll look at the PR and either merge or add feedback. If there isn't any activity within several days, send a message to the mailing list - `scl-dev` AT `groups.google.com`.
## Coding Standards
SC's source has been reformatted with astyle. When making changes, try
to match the current formatting. The main points are:
- compact (java-style) brackets:
```C
if( a == 3 ) {
    c = 5;
    function( a, b );
} else {
    somefunc();
}
```
- indents are 4 spaces
- no tab characters
- line endings are LF (linux), not CRLF (windows)
- brackets around single-line conditionals
- spaces inside parentheses and around operators
- return type on the same line as the function name, unless that's too long
- doxygen-style comments (see http://www.stack.nl/~dimitri/doxygen/docblocks.html)
If in doubt about a large patch, run astyle with the config file
misc/astyle.cfg.
Download astyle from http://sourceforge.net/projects/astyle/files/astyle/


@@ -1,5 +1,7 @@
Travis-CI build status:
[![Build Status](https://travis-ci.org/stepcode/stepcode.svg?branch=master)](https://travis-ci.org/stepcode/stepcode)
Travis-CI | AppVeyor CI
:-------------:|:---------------:
Linux, OSX (LLVM) | Windows (MSVC)
[![Build Status](https://travis-ci.org/stepcode/stepcode.svg?branch=master)](https://travis-ci.org/stepcode/stepcode) | [![Build status](https://ci.appveyor.com/api/projects/status/3fbr9t9gfa812oqu?svg=true)](https://ci.appveyor.com/project/mpictor/stepcode)
***********************************************************************
STEPcode v0.8 -- stepcode.org, github.com/stepcode/stepcode


@@ -1,50 +0,0 @@
version: '{build}'
# for Appveyor CI (windows)
branches:
  only:
    - master
os: Windows Server 2012 R2
clone_folder: c:\projects\STEPcode
#grab zip instead of git clone
shallow_clone: true
platform: x64
configuration: Debug
# errors or couldn't be found by cmake
#  - GENERATOR: "Visual Studio 9 2008"
#    ARCH: 32
#  - GENERATOR: "Visual Studio 10"
#    ARCH: 32
environment:
  matrix:
    - GENERATOR: "Visual Studio 11"
      ARCH: 32
    - GENERATOR: "Visual Studio 12"
      ARCH: 32
    - GENERATOR: "Visual Studio 12 Win64"
      ARCH: 64
# build:
#   parallel: true
#   project: ALL_BUILD.vcxproj
build_script:
  - ps: |
      cd c:\projects\STEPcode
      mkdir build
      cd build
      cmake .. -DSC_ENABLE_TESTING=ON -G"$env:GENERATOR"
      cmake --build . --config Debug
test_script:
  - cmd: echo Running CTest...
  - cmd: cd c:\projects\STEPcode\build
  - cmd: ctest -j2 . -C Debug
# we could upload a compiled zip somewhere (see Appveyor artifact documentation)


@@ -1,91 +0,0 @@
# - Check if the given C source code compiles and runs.
# CHECK_C_SOURCE_RUNS(<code> <var>)
# <code> - source code to try to compile
# <var> - variable to store the result
# (1 for success, empty for failure)
# The following variables may be set before calling this macro to
# modify the way the check is run:
#
# CMAKE_REQUIRED_FLAGS = string of compile command line flags
# CMAKE_REQUIRED_DEFINITIONS = list of macros to define (-DFOO=bar)
# CMAKE_REQUIRED_INCLUDES = list of include directories
# CMAKE_REQUIRED_LIBRARIES = list of libraries to link
#=============================================================================
# Copyright 2006-2009 Kitware, Inc.
#
# Distributed under the OSI-approved BSD License (the "License");
# see accompanying file Copyright.txt for details.
#
# This software is distributed WITHOUT ANY WARRANTY; without even the
# implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the License for more information.
#=============================================================================
# (To distributed this file outside of CMake, substitute the full
# License text for the above reference.)
macro(CHECK_C_FILE_RUNS SOURCE VAR)
if("${VAR}" MATCHES "^${VAR}$")
set(MACRO_CHECK_FUNCTION_DEFINITIONS
"-D${VAR} ${CMAKE_REQUIRED_FLAGS}")
if(CMAKE_REQUIRED_LIBRARIES)
set(CHECK_C_SOURCE_COMPILES_ADD_LIBRARIES
"-DLINK_LIBRARIES:STRING=${CMAKE_REQUIRED_LIBRARIES}")
else(CMAKE_REQUIRED_LIBRARIES)
set(CHECK_C_SOURCE_COMPILES_ADD_LIBRARIES)
endif(CMAKE_REQUIRED_LIBRARIES)
if(CMAKE_REQUIRED_INCLUDES)
set(CHECK_C_SOURCE_COMPILES_ADD_INCLUDES
"-DINCLUDE_DIRECTORIES:STRING=${CMAKE_REQUIRED_INCLUDES}")
else(CMAKE_REQUIRED_INCLUDES)
set(CHECK_C_SOURCE_COMPILES_ADD_INCLUDES)
endif(CMAKE_REQUIRED_INCLUDES)
message(STATUS "Performing Test ${VAR}")
try_run(${VAR}_EXITCODE ${VAR}_COMPILED
${CMAKE_BINARY_DIR}
${SOURCE}
COMPILE_DEFINITIONS ${CMAKE_REQUIRED_DEFINITIONS} ${FILE_RUN_DEFINITIONS}
CMAKE_FLAGS -DCOMPILE_DEFINITIONS:STRING=${MACRO_CHECK_FUNCTION_DEFINITIONS}
-DCMAKE_SKIP_RPATH:BOOL=${CMAKE_SKIP_RPATH}
"${CHECK_C_SOURCE_COMPILES_ADD_LIBRARIES}"
"${CHECK_C_SOURCE_COMPILES_ADD_INCLUDES}"
COMPILE_OUTPUT_VARIABLE OUTPUT)
# if it did not compile make the return value fail code of 1
if(NOT ${VAR}_COMPILED)
set(${VAR}_EXITCODE 1)
endif(NOT ${VAR}_COMPILED)
# if the return value was 0 then it worked
if("${${VAR}_EXITCODE}" EQUAL 0)
set(${VAR} 1 CACHE INTERNAL "Test ${VAR}")
message(STATUS "Performing Test ${VAR} - Success")
file(APPEND ${CMAKE_BINARY_DIR}${CMAKE_FILES_DIRECTORY}/CMakeOutput.log
"Performing C SOURCE FILE Test ${VAR} succeded with the following output:\n"
"${OUTPUT}\n"
"Return value: ${${VAR}}\n"
"Source file was:\n${SOURCE}\n")
else("${${VAR}_EXITCODE}" EQUAL 0)
if(CMAKE_CROSSCOMPILING AND "${${VAR}_EXITCODE}" MATCHES "FAILED_TO_RUN")
set(${VAR} "${${VAR}_EXITCODE}")
else(CMAKE_CROSSCOMPILING AND "${${VAR}_EXITCODE}" MATCHES "FAILED_TO_RUN")
set(${VAR} "" CACHE INTERNAL "Test ${VAR}")
endif(CMAKE_CROSSCOMPILING AND "${${VAR}_EXITCODE}" MATCHES "FAILED_TO_RUN")
message(STATUS "Performing Test ${VAR} - Failed")
file(APPEND ${CMAKE_BINARY_DIR}${CMAKE_FILES_DIRECTORY}/CMakeError.log
"Performing C SOURCE FILE Test ${VAR} failed with the following output:\n"
"${OUTPUT}\n"
"Return value: ${${VAR}_EXITCODE}\n"
"Source file was:\n${SOURCE}\n")
endif("${${VAR}_EXITCODE}" EQUAL 0)
endif("${VAR}" MATCHES "^${VAR}$")
endmacro(CHECK_C_FILE_RUNS)
# Local Variables:
# tab-width: 8
# mode: cmake
# indent-tabs-mode: t
# End:
# ex: shiftwidth=2 tabstop=8


@@ -10,7 +10,7 @@
#
# Originally based off of FindBISON.cmake from Kitware's CMake distribution
#
# Copyright (c) 2010-2012 United States Government as represented by
# Copyright (c) 2010-2016 United States Government as represented by
# the U.S. Army Research Laboratory.
# Copyright 2009 Kitware, Inc.
# Copyright 2006 Tristan Carel
@@ -47,21 +47,158 @@
find_program(LEMON_EXECUTABLE lemon DOC "path to the lemon executable")
mark_as_advanced(LEMON_EXECUTABLE)
if(LEMON_EXECUTABLE AND NOT LEMON_TEMPLATE)
if (LEMON_EXECUTABLE AND NOT LEMON_TEMPLATE)
# look for the template in share
if (DATA_DIR AND EXISTS "${DATA_DIR}/lemon/lempar.c")
set (LEMON_TEMPLATE "${DATA_DIR}/lemon/lempar.c")
elseif (EXISTS "share/lemon/lempar.c")
set (LEMON_TEMPLATE "share/lemon/lempar.c")
elseif (EXISTS "/usr/share/lemon/lempar.c")
set (LEMON_TEMPLATE "/usr/share/lemon/lempar.c")
endif (DATA_DIR AND EXISTS "${DATA_DIR}/lemon/lempar.c")
endif (LEMON_EXECUTABLE AND NOT LEMON_TEMPLATE)
if (LEMON_EXECUTABLE AND NOT LEMON_TEMPLATE)
# look for the template in bin dir
get_filename_component(lemon_path ${LEMON_EXECUTABLE} PATH)
if(lemon_path)
set(LEMON_TEMPLATE "")
if(EXISTS ${lemon_path}/lempar.c)
set(LEMON_TEMPLATE "${lemon_path}/lempar.c")
endif(EXISTS ${lemon_path}/lempar.c)
if(EXISTS /usr/share/lemon/lempar.c)
set(LEMON_TEMPLATE "/usr/share/lemon/lempar.c")
endif(EXISTS /usr/share/lemon/lempar.c)
endif(lemon_path)
if (lemon_path)
if (EXISTS ${lemon_path}/lempar.c)
set (LEMON_TEMPLATE "${lemon_path}/lempar.c")
endif (EXISTS ${lemon_path}/lempar.c)
if (EXISTS /usr/share/lemon/lempar.c)
set (LEMON_TEMPLATE "/usr/share/lemon/lempar.c")
endif (EXISTS /usr/share/lemon/lempar.c)
endif (lemon_path)
endif(LEMON_EXECUTABLE AND NOT LEMON_TEMPLATE)
if (LEMON_EXECUTABLE AND NOT LEMON_TEMPLATE)
# fallback
set (LEMON_TEMPLATE "lempar.c")
if (NOT EXISTS ${LEMON_TEMPLATE})
message(WARNING "Lemon's lempar.c template file could not be found automatically, set LEMON_TEMPLATE")
endif (NOT EXISTS ${LEMON_TEMPLATE})
endif (LEMON_EXECUTABLE AND NOT LEMON_TEMPLATE)
mark_as_advanced(LEMON_TEMPLATE)
include(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(LEMON DEFAULT_MSG LEMON_EXECUTABLE LEMON_TEMPLATE)
mark_as_advanced(LEMON_TEMPLATE)
# Define the macro
# LEMON_TARGET(<Name> <LemonInput> <LemonSource> <LemonHeader>
# [<ArgString>])
# which will create a custom rule to generate a parser. <LemonInput> is
# the path to a lemon file. <LemonSource> is the desired name for the
# generated source file. <LemonHeader> is the desired name for the
# generated header which contains the token list. Anything in the optional
# <ArgString> parameter is appended to the lemon command line.
#
# ====================================================================
# Example:
#
# find_package(LEMON)
# LEMON_TARGET(MyParser parser.y parser.c parser.h)
# add_executable(Foo main.cpp ${LEMON_MyParser_OUTPUTS})
# ====================================================================
include(CMakeParseArguments)
if(NOT COMMAND LEMON_TARGET)
macro(LEMON_TARGET Name Input)
get_filename_component(IN_FILE_WE ${Input} NAME_WE)
set(LVAR_PREFIX ${Name}_${IN_FILE_WE})
if(${ARGC} GREATER 3)
CMAKE_PARSE_ARGUMENTS(${LVAR_PREFIX} "" "OUT_SRC_FILE;OUT_HDR_FILE;WORKING_DIR;EXTRA_ARGS" "" ${ARGN})
endif(${ARGC} GREATER 3)
# Need a working directory
if("${${LVAR_PREFIX}_WORKING_DIR}" STREQUAL "")
set(${LVAR_PREFIX}_WORKING_DIR "${CMAKE_CURRENT_BINARY_DIR}/${LVAR_PREFIX}")
endif("${${LVAR_PREFIX}_WORKING_DIR}" STREQUAL "")
file(MAKE_DIRECTORY ${${LVAR_PREFIX}_WORKING_DIR})
# Output source file
if ("${${LVAR_PREFIX}_OUT_SRC_FILE}" STREQUAL "")
set(${LVAR_PREFIX}_OUT_SRC_FILE ${${LVAR_PREFIX}_WORKING_DIR}/${IN_FILE_WE}.c)
else ("${${LVAR_PREFIX}_OUT_SRC_FILE}" STREQUAL "")
get_filename_component(specified_out_dir ${${LVAR_PREFIX}_OUT_SRC_FILE} PATH)
if(NOT "${specified_out_dir}" STREQUAL "")
message(FATAL_ERROR "\nFull path specified for OUT_SRC_FILE - should be filename only.\n")
endif(NOT "${specified_out_dir}" STREQUAL "")
set(${LVAR_PREFIX}_OUT_SRC_FILE ${${LVAR_PREFIX}_WORKING_DIR}/${${LVAR_PREFIX}_OUT_SRC_FILE})
endif ("${${LVAR_PREFIX}_OUT_SRC_FILE}" STREQUAL "")
# Output header file
if ("${${LVAR_PREFIX}_OUT_HDR_FILE}" STREQUAL "")
set(${LVAR_PREFIX}_OUT_HDR_FILE ${${LVAR_PREFIX}_WORKING_DIR}/${IN_FILE_WE}.h)
else ("${${LVAR_PREFIX}_OUT_HDR_FILE}" STREQUAL "")
get_filename_component(specified_out_dir ${${LVAR_PREFIX}_OUT_HDR_FILE} PATH)
if(NOT "${specified_out_dir}" STREQUAL "")
message(FATAL_ERROR "\nFull path specified for OUT_HDR_FILE - should be filename only.\n")
endif(NOT "${specified_out_dir}" STREQUAL "")
set(${LVAR_PREFIX}_OUT_HDR_FILE ${${LVAR_PREFIX}_WORKING_DIR}/${${LVAR_PREFIX}_OUT_HDR_FILE})
endif ("${${LVAR_PREFIX}_OUT_HDR_FILE}" STREQUAL "")
# input file
get_filename_component(in_full ${Input} ABSOLUTE)
if("${in_full}" STREQUAL "${Input}")
set(lemon_in_file ${Input})
else("${in_full}" STREQUAL "${Input}")
set(lemon_in_file "${CMAKE_CURRENT_SOURCE_DIR}/${Input}")
endif("${in_full}" STREQUAL "${Input}")
# names of lemon output files will be based on the name of the input file
set(LEMON_GEN_SOURCE ${${LVAR_PREFIX}_WORKING_DIR}/${IN_FILE_WE}.c)
set(LEMON_GEN_HEADER ${${LVAR_PREFIX}_WORKING_DIR}/${IN_FILE_WE}.h)
set(LEMON_GEN_OUT ${${LVAR_PREFIX}_WORKING_DIR}/${IN_FILE_WE}.out)
# copy input to bin directory and run lemon
get_filename_component(INPUT_NAME ${Input} NAME)
add_custom_command(
OUTPUT ${LEMON_GEN_OUT} ${LEMON_GEN_SOURCE} ${LEMON_GEN_HEADER}
COMMAND ${CMAKE_COMMAND} -E copy ${lemon_in_file} ${${LVAR_PREFIX}_WORKING_DIR}/${INPUT_NAME}
COMMAND ${LEMON_EXECUTABLE} -T${LEMON_TEMPLATE} ${${LVAR_PREFIX}_WORKING_DIR}/${INPUT_NAME} ${${LVAR_PREFIX}__EXTRA_ARGS}
DEPENDS ${Input} ${LEMON_TEMPLATE} ${LEMON_EXECUTABLE_TARGET}
WORKING_DIRECTORY ${${LVAR_PREFIX}_WORKING_DIR}
COMMENT "[LEMON][${Name}] Building parser with ${LEMON_EXECUTABLE}"
)
# rename generated outputs
if(NOT "${${LVAR_PREFIX}_OUT_SRC_FILE}" STREQUAL "${LEMON_GEN_SOURCE}")
add_custom_command(
OUTPUT ${${LVAR_PREFIX}_OUT_SRC_FILE}
COMMAND ${CMAKE_COMMAND} -E copy ${LEMON_GEN_SOURCE} ${${LVAR_PREFIX}_OUT_SRC_FILE}
DEPENDS ${LemonInput} ${LEMON_EXECUTABLE_TARGET} ${LEMON_GEN_SOURCE}
)
set(LEMON_${Name}_OUTPUTS ${${LVAR_PREFIX}_OUT_SRC_FILE} ${LEMON_${Name}_OUTPUTS})
endif(NOT "${${LVAR_PREFIX}_OUT_SRC_FILE}" STREQUAL "${LEMON_GEN_SOURCE}")
if(NOT "${${LVAR_PREFIX}_OUT_HDR_FILE}" STREQUAL "${LEMON_GEN_HEADER}")
add_custom_command(
OUTPUT ${${LVAR_PREFIX}_OUT_HDR_FILE}
COMMAND ${CMAKE_COMMAND} -E copy ${LEMON_GEN_HEADER} ${${LVAR_PREFIX}_OUT_HDR_FILE}
DEPENDS ${LemonInput} ${LEMON_EXECUTABLE_TARGET} ${LEMON_GEN_HEADER}
)
set(LEMON_${Name}_OUTPUTS ${${LVAR_PREFIX}_OUT_HDR_FILE} ${LEMON_${Name}_OUTPUTS})
endif(NOT "${${LVAR_PREFIX}_OUT_HDR_FILE}" STREQUAL "${LEMON_GEN_HEADER}")
set(LEMON_${Name}_OUTPUTS ${LEMON_${Name}_OUTPUTS} ${LEMON_GEN_OUT})
# make sure we clean up generated output and copied input
set_property(DIRECTORY APPEND PROPERTY ADDITIONAL_MAKE_CLEAN_FILES "${LEMON_${Name}_OUTPUTS}")
set_property(DIRECTORY APPEND PROPERTY ADDITIONAL_MAKE_CLEAN_FILES "${${LVAR_PREFIX}_WORKING_DIR}/${INPUT_NAME}")
# macro ran successfully
set(LEMON_${Name}_DEFINED TRUE)
set(LEMON_${Name}_SRC ${${LVAR_PREFIX}_OUT_SRC_FILE})
set(LEMON_${Name}_HDR ${${LVAR_PREFIX}_OUT_HDR_FILE})
set(LEMON_${Name}_INCLUDE_DIR ${${LVAR_PREFIX}_WORKING_DIR})
endmacro(LEMON_TARGET)
endif(NOT COMMAND LEMON_TARGET)
#============================================================
# FindLEMON.cmake ends here


@@ -10,7 +10,7 @@
#
# Originally based off of FindBISON.cmake from Kitware's CMake distribution
#
# Copyright (c) 2010-2012 United States Government as represented by
# Copyright (c) 2010-2016 United States Government as represented by
# the U.S. Army Research Laboratory.
# Copyright 2009 Kitware, Inc.
# Copyright 2006 Tristan Carel
@@ -68,6 +68,187 @@ include(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(PERPLEX DEFAULT_MSG PERPLEX_EXECUTABLE PERPLEX_TEMPLATE)
mark_as_advanced(PERPLEX_TEMPLATE)
# Defines two macros - PERPLEX_TARGET, which takes perplex inputs and
# runs both perplex and re2c to generate C source code/headers, and
# ADD_PERPLEX_LEMON_DEPENDENCY which is used to set up dependencies between
# scanner and parser targets when necessary.
#
# #====================================================================
# Example:
#
# find_package(LEMON)
# find_package(RE2C)
# find_package(PERPLEX)
#
# LEMON_TARGET(MyParser parser.y "${CMAKE_CURRENT_BINARY_DIR}/parser.cpp")
# PERPLEX_TARGET(MyScanner scanner.re "${CMAKE_CURRENT_BINARY_DIR}/scanner.cpp" "${CMAKE_CURRENT_BINARY_DIR}/scanner_header.hpp")
# ADD_PERPLEX_LEMON_DEPENDENCY(MyScanner MyParser)
#
# include_directories("${CMAKE_CURRENT_BINARY_DIR}")
# add_executable(Foo
# Foo.cc
# ${LEMON_MyParser_OUTPUTS}
# ${PERPLEX_MyScanner_OUTPUTS}
# )
# ====================================================================
#
#=============================================================================
#
# Originally based off of FindBISON.cmake from Kitware's CMake distribution
#
# Copyright (c) 2010-2016 United States Government as represented by
# the U.S. Army Research Laboratory.
# Copyright 2009 Kitware, Inc.
# Copyright 2006 Tristan Carel
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# * The names of the authors may not be used to endorse or promote
# products derived from this software without specific prior written
# permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#=============================================================================
#============================================================
# PERPLEX_TARGET (public macro)
#============================================================
include(CMakeParseArguments)
if(NOT COMMAND PERPLEX_TARGET)
macro(PERPLEX_TARGET Name Input)
get_filename_component(IN_FILE_WE ${Input} NAME_WE)
set(PVAR_PREFIX ${Name}_${IN_FILE_WE})
if(${ARGC} GREATER 3)
CMAKE_PARSE_ARGUMENTS(${PVAR_PREFIX} "" "TEMPLATE;OUT_SRC_FILE;OUT_HDR_FILE;WORKING_DIR" "" ${ARGN})
endif(${ARGC} GREATER 3)
# Need a working directory
if("${${PVAR_PREFIX}_WORKING_DIR}" STREQUAL "")
set(${PVAR_PREFIX}_WORKING_DIR "${CMAKE_CURRENT_BINARY_DIR}/${PVAR_PREFIX}")
endif("${${PVAR_PREFIX}_WORKING_DIR}" STREQUAL "")
file(MAKE_DIRECTORY ${${PVAR_PREFIX}_WORKING_DIR})
# Set up intermediate and final output names
# Output source file
if ("${${PVAR_PREFIX}_OUT_SRC_FILE}" STREQUAL "")
set(${PVAR_PREFIX}_OUT_SRC_FILE ${${PVAR_PREFIX}_WORKING_DIR}/${IN_FILE_WE}.c)
else ("${${PVAR_PREFIX}_OUT_SRC_FILE}" STREQUAL "")
get_filename_component(specified_out_dir ${${PVAR_PREFIX}_OUT_SRC_FILE} PATH)
if(NOT "${specified_out_dir}" STREQUAL "")
message(FATAL_ERROR "\nFull path specified for OUT_SRC_FILE - should be filename only.\n")
endif(NOT "${specified_out_dir}" STREQUAL "")
set(${PVAR_PREFIX}_OUT_SRC_FILE ${${PVAR_PREFIX}_WORKING_DIR}/${${PVAR_PREFIX}_OUT_SRC_FILE})
endif ("${${PVAR_PREFIX}_OUT_SRC_FILE}" STREQUAL "")
# Output header file
if ("${${PVAR_PREFIX}_OUT_HDR_FILE}" STREQUAL "")
set(${PVAR_PREFIX}_OUT_HDR_FILE ${${PVAR_PREFIX}_WORKING_DIR}/${IN_FILE_WE}.h)
else ("${${PVAR_PREFIX}_OUT_HDR_FILE}" STREQUAL "")
get_filename_component(specified_out_dir ${${PVAR_PREFIX}_OUT_HDR_FILE} PATH)
if(NOT "${specified_out_dir}" STREQUAL "")
message(FATAL_ERROR "\nFull path specified for OUT_HDR_FILE - should be filename only.\n")
endif(NOT "${specified_out_dir}" STREQUAL "")
set(${PVAR_PREFIX}_OUT_HDR_FILE ${${PVAR_PREFIX}_WORKING_DIR}/${${PVAR_PREFIX}_OUT_HDR_FILE})
endif ("${${PVAR_PREFIX}_OUT_HDR_FILE}" STREQUAL "")
# input file
get_filename_component(in_full ${Input} ABSOLUTE)
if("${in_full}" STREQUAL "${Input}")
set(perplex_in_file ${Input})
else("${in_full}" STREQUAL "${Input}")
set(perplex_in_file "${CMAKE_CURRENT_SOURCE_DIR}/${Input}")
endif("${in_full}" STREQUAL "${Input}")
# Intermediate file
set(re2c_src "${${PVAR_PREFIX}_WORKING_DIR}/${IN_FILE_WE}.re")
# Make sure we have a template
if ("${${PVAR_PREFIX}_TEMPLATE}" STREQUAL "")
if(PERPLEX_TEMPLATE)
set(${PVAR_PREFIX}_TEMPLATE ${PERPLEX_TEMPLATE})
else(PERPLEX_TEMPLATE)
message(FATAL_ERROR "\nNo Perplex template file specified - please specify the file using the PERPLEX_TEMPLATE variable:\ncmake .. -DPERPLEX_TEMPLATE=/path/to/template_file.c\n")
endif(PERPLEX_TEMPLATE)
endif ("${${PVAR_PREFIX}_TEMPLATE}" STREQUAL "")
get_filename_component(IN_FILE ${Input} NAME)
add_custom_command(
OUTPUT ${re2c_src} ${${PVAR_PREFIX}_OUT_HDR_FILE} ${${PVAR_PREFIX}_WORKING_DIR}/${IN_FILE}
COMMAND ${CMAKE_COMMAND} -E copy ${perplex_in_file} ${${PVAR_PREFIX}_WORKING_DIR}/${IN_FILE}
COMMAND ${PERPLEX_EXECUTABLE} -c -o ${re2c_src} -i ${${PVAR_PREFIX}_OUT_HDR_FILE} -t ${${PVAR_PREFIX}_TEMPLATE} ${${PVAR_PREFIX}_WORKING_DIR}/${IN_FILE}
DEPENDS ${Input} ${${PVAR_PREFIX}_TEMPLATE} ${PERPLEX_EXECUTABLE_TARGET} ${RE2C_EXECUTABLE_TARGET}
WORKING_DIRECTORY ${${PVAR_PREFIX}_WORKING_DIR}
COMMENT "[PERPLEX][${Name}] Generating re2c input with ${PERPLEX_EXECUTABLE}"
)
if(NOT DEBUGGING_GENERATED_SOURCES)
add_custom_command(
OUTPUT ${${PVAR_PREFIX}_OUT_SRC_FILE}
COMMAND ${RE2C_EXECUTABLE} --no-debug-info --no-generation-date -c -o ${${PVAR_PREFIX}_OUT_SRC_FILE} ${re2c_src}
DEPENDS ${Input} ${re2c_src} ${${PVAR_PREFIX}_OUT_HDR_FILE} ${PERPLEX_EXECUTABLE_TARGET} ${RE2C_EXECUTABLE_TARGET}
WORKING_DIRECTORY ${${PVAR_PREFIX}_WORKING_DIR}
COMMENT "[RE2C][${Name}] Building scanner with ${RE2C_EXECUTABLE}"
)
else(NOT DEBUGGING_GENERATED_SOURCES)
add_custom_command(
OUTPUT ${${PVAR_PREFIX}_OUT_SRC_FILE}
COMMAND ${RE2C_EXECUTABLE} --no-generation-date -c -o ${${PVAR_PREFIX}_OUT_SRC_FILE} ${re2c_src}
DEPENDS ${Input} ${re2c_src} ${${PVAR_PREFIX}_OUT_HDR_FILE} ${PERPLEX_EXECUTABLE_TARGET} ${RE2C_EXECUTABLE_TARGET}
WORKING_DIRECTORY ${${PVAR_PREFIX}_WORKING_DIR}
COMMENT "[RE2C][${Name}] Building scanner with ${RE2C_EXECUTABLE}"
)
endif(NOT DEBUGGING_GENERATED_SOURCES)
set(PERPLEX_${Name}_DEFINED TRUE)
set(PERPLEX_${Name}_SRC ${${PVAR_PREFIX}_OUT_SRC_FILE})
set(PERPLEX_${Name}_HDR ${${PVAR_PREFIX}_OUT_HDR_FILE})
set(PERPLEX_${Name}_INCLUDE_DIR ${${PVAR_PREFIX}_WORKING_DIR})
endmacro(PERPLEX_TARGET)
endif(NOT COMMAND PERPLEX_TARGET)
#============================================================
# ADD_PERPLEX_LEMON_DEPENDENCY (public macro)
#============================================================
if(NOT COMMAND ADD_PERPLEX_LEMON_DEPENDENCY)
macro(ADD_PERPLEX_LEMON_DEPENDENCY PERPLEXTarget LemonTarget)
if(NOT PERPLEX_${PERPLEXTarget}_SRC)
message(SEND_ERROR "PERPLEX target `${PERPLEXTarget}' does not exists.")
endif()
if(NOT LEMON_${LemonTarget}_HDR)
message(SEND_ERROR "Lemon target `${LemonTarget}' does not exists.")
endif()
set_source_files_properties(${PERPLEX_${PERPLEXTarget}_SRC}
PROPERTIES OBJECT_DEPENDS ${LEMON_${LemonTarget}_HDR})
endmacro(ADD_PERPLEX_LEMON_DEPENDENCY)
endif(NOT COMMAND ADD_PERPLEX_LEMON_DEPENDENCY)
#============================================================
# FindPERPLEX.cmake ends here


@@ -9,10 +9,137 @@ mark_as_advanced(RE2C_EXECUTABLE)
include(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(RE2C DEFAULT_MSG RE2C_EXECUTABLE)
# Provide a macro to generate custom build rules:
# RE2C_TARGET(Name RE2CInput RE2COutput [COMPILE_FLAGS <string>])
# which creates a custom command to generate the <RE2COutput> file from
# the <RE2CInput> file. If COMPILE_FLAGS option is specified, the next
# parameter is added to the re2c command line. Name is an alias used to
# get details of this custom command.
# This module also defines a macro:
# ADD_RE2C_LEMON_DEPENDENCY(RE2CTarget LemonTarget)
# which adds the required dependency between a scanner and a parser
# where <RE2CTarget> and <LemonTarget> are the first parameters of
# respectively RE2C_TARGET and LEMON_TARGET macros.
#
# ====================================================================
# Example:
#
# find_package(LEMON)
# find_package(RE2C)
#
# LEMON_TARGET(MyParser parser.y "${CMAKE_CURRENT_BINARY_DIR}/parser.cpp")
# RE2C_TARGET(MyScanner scanner.re "${CMAKE_CURRENT_BINARY_DIR}/scanner.cpp")
# ADD_RE2C_LEMON_DEPENDENCY(MyScanner MyParser)
#
# include_directories("${CMAKE_CURRENT_BINARY_DIR}")
# add_executable(Foo
# Foo.cc
# ${LEMON_MyParser_OUTPUTS}
# ${RE2C_MyScanner_OUTPUTS}
# )
# ====================================================================
#
#=============================================================================
# Copyright (c) 2010-2016 United States Government as represented by
# the U.S. Army Research Laboratory.
# Copyright 2009 Kitware, Inc.
# Copyright 2006 Tristan Carel
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# * The names of the authors may not be used to endorse or promote
# products derived from this software without specific prior written
# permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#=============================================================================
#============================================================
# RE2C_TARGET (public macro)
#============================================================
#
# TODO - rework this macro to make use of CMakeParseArguments, see
# http://www.cmake.org/pipermail/cmake/2012-July/051309.html
if(NOT COMMAND RE2C_TARGET)
macro(RE2C_TARGET Name Input Output)
set(RE2C_TARGET_usage "RE2C_TARGET(<Name> <Input> <Output> [COMPILE_FLAGS <string>]")
if(${ARGC} GREATER 3)
if(${ARGC} EQUAL 5)
if("${ARGV3}" STREQUAL "COMPILE_FLAGS")
set(RE2C_EXECUTABLE_opts "${ARGV4}")
SEPARATE_ARGUMENTS(RE2C_EXECUTABLE_opts)
else()
message(SEND_ERROR ${RE2C_TARGET_usage})
endif()
else()
message(SEND_ERROR ${RE2C_TARGET_usage})
endif()
endif()
add_custom_command(OUTPUT ${Output}
COMMAND ${RE2C_EXECUTABLE}
ARGS ${RE2C_EXECUTABLE_opts} -o${Output} ${Input}
DEPENDS ${Input} ${RE2C_EXECUTABLE_TARGET}
COMMENT "[RE2C][${Name}] Building scanner with ${RE2C_EXECUTABLE}"
WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}")
set(RE2C_${Name}_DEFINED TRUE)
set(RE2C_${Name}_OUTPUTS ${Output})
set(RE2C_${Name}_INPUT ${Input})
set(RE2C_${Name}_COMPILE_FLAGS ${RE2C_EXECUTABLE_opts})
set_property(DIRECTORY APPEND PROPERTY ADDITIONAL_MAKE_CLEAN_FILES "${Output}")
endmacro(RE2C_TARGET)
endif(NOT COMMAND RE2C_TARGET)
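The macro records its results in RE2C_<Name>_* variables. As a sketch, a minimal hypothetical invocation (target, file, and flag names below are illustrative, not from the repository) might look like:

```cmake
# Illustrative sketch only - names and paths are hypothetical.
find_program(RE2C_EXECUTABLE re2c)

RE2C_TARGET(MyScanner
    ${CMAKE_CURRENT_SOURCE_DIR}/scanner.re   # re2c input
    ${CMAKE_CURRENT_BINARY_DIR}/scanner.c    # generated scanner
    COMPILE_FLAGS "--no-generation-date")

# RE2C_MyScanner_OUTPUTS now names the generated source:
add_executable(demo main.c ${RE2C_MyScanner_OUTPUTS})
```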
#============================================================
#============================================================
# ADD_RE2C_LEMON_DEPENDENCY (public macro)
#============================================================
#
if(NOT COMMAND ADD_RE2C_LEMON_DEPENDENCY)
macro(ADD_RE2C_LEMON_DEPENDENCY RE2CTarget LemonTarget)
if(NOT RE2C_${RE2CTarget}_OUTPUTS)
message(SEND_ERROR "RE2C target `${RE2CTarget}' does not exist.")
endif()
if(NOT LEMON_${LemonTarget}_HDR)
message(SEND_ERROR "Lemon target `${LemonTarget}' does not exist.")
endif()
set_source_files_properties(${RE2C_${RE2CTarget}_OUTPUTS}
PROPERTIES OBJECT_DEPENDS ${LEMON_${LemonTarget}_HDR})
endmacro(ADD_RE2C_LEMON_DEPENDENCY)
endif(NOT COMMAND ADD_RE2C_LEMON_DEPENDENCY)
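Assuming a lemon parser target and a re2c scanner target have both been declared, the dependency macro ties the scanner's object file to the lemon-generated token header. A hypothetical wiring (target and file names are illustrative):

```cmake
# Illustrative sketch - target and file names are hypothetical.
LEMON_TARGET(MyParser parser.y parser.c parser.h)
RE2C_TARGET(MyScanner scanner.re scanner.c)
# Rebuild the scanner whenever the lemon-generated header changes:
ADD_RE2C_LEMON_DEPENDENCY(MyScanner MyParser)
```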
#============================================================
# RE2C_Util.cmake ends here
# Local Variables:
# tab-width: 8
# mode: cmake
# indent-tabs-mode: t
# End:
# ex: shiftwidth=2 tabstop=8

@ -59,20 +59,6 @@ macro(WRITE_MD5_SUMS filelist outfile)
endforeach(fileitem ${filelist})
endmacro(WRITE_MD5_SUMS)
macro(GET_GENERATOR_EXEC_VERSIONS)
# Read lemon version
execute_process(COMMAND ${LEMON_EXECUTABLE} -x OUTPUT_VARIABLE lemon_version)
string(REPLACE "Lemon version " "" lemon_version "${lemon_version}")
string(STRIP "${lemon_version}" lemon_version)
# Read re2c version
execute_process(COMMAND ${RE2C_EXECUTABLE} -V OUTPUT_VARIABLE re2c_version)
string(STRIP "${re2c_version}" re2c_version)
# Read perplex version
execute_process(COMMAND ${PERPLEX_EXECUTABLE} -v OUTPUT_VARIABLE perplex_version)
string(STRIP "${perplex_version}" perplex_version)
endmacro(GET_GENERATOR_EXEC_VERSIONS)
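As a sketch of what the cleanup above produces: lemon's -x output is assumed to look like "Lemon version 1.0", and the REPLACE plus STRIP pair reduces it to the bare number.

```cmake
# Illustrative: mimic the lemon version cleanup on a hard-coded string.
set(lemon_version "Lemon version 1.0\n")
string(REPLACE "Lemon version " "" lemon_version "${lemon_version}")
string(STRIP "${lemon_version}" lemon_version)
# lemon_version is now just the bare version number, e.g. "1.0"
message(STATUS "lemon version: ${lemon_version}")
```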
# Local Variables:
# tab-width: 8
# mode: cmake

@ -1,146 +0,0 @@
# Defines the macro
# LEMON_TARGET(<Name> <LemonInput> <LemonSource> <LemonHeader>
# [<ArgString>])
# which will create a custom rule to generate a parser. <LemonInput> is
# the path to a lemon file. <LemonSource> is the desired name for the
# generated source file. <LemonHeader> is the desired name for the
# generated header which contains the token list. Anything in the optional
# <ArgString> parameter is appended to the lemon command line.
#
# ====================================================================
# Example:
#
# find_package(LEMON)
# LEMON_TARGET(MyParser parser.y parser.c parser.h)
# add_executable(Foo main.cpp ${LEMON_MyParser_OUTPUTS})
# ====================================================================
#
#=============================================================================
#
# Originally based off of FindBISON.cmake from Kitware's CMake distribution
#
# Copyright (c) 2010-2012 United States Government as represented by
# the U.S. Army Research Laboratory.
# Copyright 2009 Kitware, Inc.
# Copyright 2006 Tristan Carel
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# * The names of the authors may not be used to endorse or promote
# products derived from this software without specific prior written
# permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#=============================================================================
#============================================================
# LEMON_TARGET (public macro)
#============================================================
#
macro(LEMON_TARGET Name LemonInput LemonSource LemonHeader)
if(NOT ${ARGC} EQUAL 4 AND NOT ${ARGC} EQUAL 5)
message(SEND_ERROR "Usage: LEMON_TARGET(<Name> <LemonInput> <LemonSource> <LemonHeader> [<ArgString>])")
else()
get_filename_component(LemonInputFull ${LemonInput} ABSOLUTE)
get_filename_component(LemonSourceFull ${LemonSource} ABSOLUTE)
get_filename_component(LemonHeaderFull ${LemonHeader} ABSOLUTE)
if(NOT ${LemonInput} STREQUAL ${LemonInputFull})
set(LEMON_${Name}_INPUT "${CMAKE_CURRENT_BINARY_DIR}/${LemonInput}")
else(NOT ${LemonInput} STREQUAL ${LemonInputFull})
set(LEMON_${Name}_INPUT "${LemonInput}")
endif(NOT ${LemonInput} STREQUAL ${LemonInputFull})
if(NOT ${LemonSource} STREQUAL ${LemonSourceFull})
set(LEMON_${Name}_OUTPUT_SOURCE "${CMAKE_CURRENT_BINARY_DIR}/${LemonSource}")
else(NOT ${LemonSource} STREQUAL ${LemonSourceFull})
set(LEMON_${Name}_OUTPUT_SOURCE "${LemonSource}")
endif(NOT ${LemonSource} STREQUAL ${LemonSourceFull})
if(NOT ${LemonHeader} STREQUAL ${LemonHeaderFull})
set(LEMON_${Name}_OUTPUT_HEADER "${CMAKE_CURRENT_BINARY_DIR}/${LemonHeader}")
else(NOT ${LemonHeader} STREQUAL ${LemonHeaderFull})
set(LEMON_${Name}_OUTPUT_HEADER "${LemonHeader}")
endif(NOT ${LemonHeader} STREQUAL ${LemonHeaderFull})
set(LEMON_${Name}_EXTRA_ARGS "${ARGV4}")
# get input name minus path
get_filename_component(INPUT_NAME "${LemonInput}" NAME)
set(LEMON_BIN_INPUT ${CMAKE_CURRENT_BINARY_DIR}/${INPUT_NAME})
# names of lemon output files will be based on the name of the input file
string(REGEX REPLACE "^(.*)(\\.[^.]*)$" "\\1.c" LEMON_GEN_SOURCE "${INPUT_NAME}")
string(REGEX REPLACE "^(.*)(\\.[^.]*)$" "\\1.h" LEMON_GEN_HEADER "${INPUT_NAME}")
string(REGEX REPLACE "^(.*)(\\.[^.]*)$" "\\1.out" LEMON_GEN_OUT "${INPUT_NAME}")
# copy input to bin directory and run lemon
add_custom_command(
OUTPUT ${LEMON_GEN_OUT} ${LEMON_GEN_SOURCE} ${LEMON_GEN_HEADER}
COMMAND ${CMAKE_COMMAND} -E copy_if_different ${CMAKE_CURRENT_SOURCE_DIR}/${LemonInput} ${LEMON_BIN_INPUT}
COMMAND ${LEMON_EXECUTABLE} ${INPUT_NAME} ${LEMON_${Name}_EXTRA_ARGS}
DEPENDS ${LemonInput} ${LEMON_TEMPLATE} ${LEMON_EXECUTABLE_TARGET}
COMMENT "[LEMON][${Name}] Building parser with ${LEMON_EXECUTABLE}"
)
# rename generated outputs
if(NOT "${LemonSource}" STREQUAL "${LEMON_GEN_SOURCE}")
add_custom_command(
OUTPUT ${LemonSource}
COMMAND ${CMAKE_COMMAND} -E copy ${LEMON_GEN_SOURCE} ${LemonSource}
DEPENDS ${LemonInput} ${LEMON_EXECUTABLE_TARGET} ${LEMON_GEN_SOURCE}
)
set(LEMON_${Name}_OUTPUTS ${LemonSource} ${LEMON_${Name}_OUTPUTS})
endif(NOT "${LemonSource}" STREQUAL "${LEMON_GEN_SOURCE}")
if(NOT "${LemonHeader}" STREQUAL "${LEMON_GEN_HEADER}")
add_custom_command(
OUTPUT ${LemonHeader}
COMMAND ${CMAKE_COMMAND} -E copy ${LEMON_GEN_HEADER} ${LemonHeader}
DEPENDS ${LemonInput} ${LEMON_EXECUTABLE_TARGET} ${LEMON_GEN_HEADER}
)
set(LEMON_${Name}_OUTPUTS ${LemonHeader} ${LEMON_${Name}_OUTPUTS})
endif(NOT "${LemonHeader}" STREQUAL "${LEMON_GEN_HEADER}")
set(LEMON_${Name}_OUTPUTS ${LEMON_GEN_OUT} ${LemonSource} ${LemonHeader})
# make sure we clean up generated output and copied input
if("${CMAKE_SOURCE_DIR}" STREQUAL "${CMAKE_BINARY_DIR}")
set_property(DIRECTORY APPEND PROPERTY ADDITIONAL_MAKE_CLEAN_FILES "${LEMON_${Name}_OUTPUTS}")
else("${CMAKE_SOURCE_DIR}" STREQUAL "${CMAKE_BINARY_DIR}")
set_property(DIRECTORY APPEND PROPERTY ADDITIONAL_MAKE_CLEAN_FILES "${LEMON_${Name}_OUTPUTS};${LEMON_BIN_INPUT}")
endif("${CMAKE_SOURCE_DIR}" STREQUAL "${CMAKE_BINARY_DIR}")
# macro ran successfully
set(LEMON_${Name}_DEFINED TRUE)
endif(NOT ${ARGC} EQUAL 4 AND NOT ${ARGC} EQUAL 5)
endmacro(LEMON_TARGET)
#
#============================================================
# LEMON_Utils.cmake ends here
# Local Variables:
# tab-width: 8
# mode: cmake
# indent-tabs-mode: t
# End:
# ex: shiftwidth=2 tabstop=8

@ -1,134 +0,0 @@
# Defines two macros - PERPLEX_TARGET, which takes perplex inputs and
# runs both perplex and re2c to generate C source code/headers, and
# ADD_PERPLEX_LEMON_DEPENDENCY which is used to set up dependencies between
# scanner and parser targets when necessary.
#
# #====================================================================
# Example:
#
# find_package(LEMON)
# find_package(RE2C)
# find_package(PERPLEX)
#
# LEMON_TARGET(MyParser parser.y ${CMAKE_CURRENT_BINARY_DIR}/parser.cpp
# PERPLEX_TARGET(MyScanner scanner.re ${CMAKE_CURRENT_BINARY_DIR}/scanner.cpp ${CMAKE_CURRENT_BINARY_DIR}/scanner_header.hpp)
# ADD_PERPLEX_LEMON_DEPENDENCY(MyScanner MyParser)
#
# include_directories(${CMAKE_CURRENT_BINARY_DIR})
# add_executable(Foo
# Foo.cc
# ${LEMON_MyParser_OUTPUTS}
# ${PERPLEX_MyScanner_OUTPUTS}
# )
# ====================================================================
#
#=============================================================================
#
# Originally based off of FindBISON.cmake from Kitware's CMake distribution
#
# Copyright (c) 2010-2012 United States Government as represented by
# the U.S. Army Research Laboratory.
# Copyright 2009 Kitware, Inc.
# Copyright 2006 Tristan Carel
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# * The names of the authors may not be used to endorse or promote
# products derived from this software without specific prior written
# permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#=============================================================================
#============================================================
# PERPLEX_TARGET (public macro)
#============================================================
macro(PERPLEX_TARGET Name Input OutputSrc OutputHeader)
if(${ARGC} GREATER 4)
set(Template ${ARGV4})
else(${ARGC} GREATER 4)
if(PERPLEX_TEMPLATE)
set(Template ${PERPLEX_TEMPLATE})
else(PERPLEX_TEMPLATE)
message(FATAL_ERROR "\nNo Perplex template file specified - please specify the file using the PERPLEX_TEMPLATE variable:\ncmake .. -DPERPLEX_TEMPLATE=/path/to/template_file.c\n")
endif(PERPLEX_TEMPLATE)
endif(${ARGC} GREATER 4)
get_filename_component(OutputName ${OutputSrc} NAME)
set(re2c_src "${CMAKE_CURRENT_BINARY_DIR}/${OutputName}.re")
add_custom_command(
OUTPUT ${re2c_src} ${OutputHeader}
COMMAND ${PERPLEX_EXECUTABLE} -c -o ${re2c_src} -i ${OutputHeader} -t ${Template} ${Input}
DEPENDS ${Input} ${Template} ${PERPLEX_EXECUTABLE_TARGET} ${RE2C_EXECUTABLE_TARGET}
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
COMMENT "[PERPLEX][${Name}] Generating re2c input with ${PERPLEX_EXECUTABLE}"
)
if(NOT DEBUGGING_GENERATED_SOURCES)
add_custom_command(
OUTPUT ${OutputSrc}
COMMAND ${RE2C_EXECUTABLE} --no-debug-info --no-generation-date -c -o ${OutputSrc} ${re2c_src}
DEPENDS ${Input} ${re2c_src} ${OutputHeader} ${PERPLEX_EXECUTABLE_TARGET} ${RE2C_EXECUTABLE_TARGET}
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
COMMENT "[RE2C][${Name}] Building scanner with ${RE2C_EXECUTABLE}"
)
else(NOT DEBUGGING_GENERATED_SOURCES)
add_custom_command(
OUTPUT ${OutputSrc}
COMMAND ${RE2C_EXECUTABLE} --no-generation-date -c -o ${OutputSrc} ${re2c_src}
DEPENDS ${Input} ${re2c_src} ${OutputHeader} ${PERPLEX_EXECUTABLE_TARGET} ${RE2C_EXECUTABLE_TARGET}
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
COMMENT "[RE2C][${Name}] Building scanner with ${RE2C_EXECUTABLE}"
)
endif(NOT DEBUGGING_GENERATED_SOURCES)
set(PERPLEX_${Name}_DEFINED TRUE)
set(PERPLEX_${Name}_OUTPUTS ${OutputSrc})
set(PERPLEX_${Name}_INPUT ${Input})
endmacro(PERPLEX_TARGET)
#============================================================
# ADD_PERPLEX_LEMON_DEPENDENCY (public macro)
#============================================================
macro(ADD_PERPLEX_LEMON_DEPENDENCY PERPLEXTarget LemonTarget)
if(NOT PERPLEX_${PERPLEXTarget}_OUTPUTS)
message(SEND_ERROR "PERPLEX target `${PERPLEXTarget}' does not exist.")
endif()
if(NOT LEMON_${LemonTarget}_OUTPUT_HEADER)
message(SEND_ERROR "Lemon target `${LemonTarget}' does not exist.")
endif()
set_source_files_properties(${PERPLEX_${PERPLEXTarget}_OUTPUTS}
PROPERTIES OBJECT_DEPENDS ${LEMON_${LemonTarget}_OUTPUT_HEADER})
endmacro(ADD_PERPLEX_LEMON_DEPENDENCY)
#============================================================
# PERPLEX_Utils.cmake ends here
# Local Variables:
# tab-width: 8
# mode: cmake
# indent-tabs-mode: t
# End:
# ex: shiftwidth=2 tabstop=8

@ -1,3 +1,12 @@
# BIN and LIB directories
if(NOT DEFINED BIN_DIR)
set(BIN_DIR bin)
endif(NOT DEFINED BIN_DIR)
if(NOT DEFINED LIB_DIR)
set(LIB_DIR lib)
endif(NOT DEFINED LIB_DIR)
# testing and compilation options, build output dirs, install dirs, etc
# included by root CMakeLists
@ -42,6 +51,12 @@ OPTION_WITH_DEFAULT(SC_CPP_GENERATOR "Compile exp2cxx" ON)
OPTION_WITH_DEFAULT(SC_MEMMGR_ENABLE_CHECKS "Enable sc_memmgr's memory leak detection" OFF)
OPTION_WITH_DEFAULT(SC_TRACE_FPRINTF "Enable extra comments in generated code so the code's source in exp2cxx may be located" OFF)
# Should we use C++11?
OPTION_WITH_DEFAULT(SC_ENABLE_CXX11 "Build with C++ 11 features" ON)
# Get version from git
OPTION_WITH_DEFAULT(SC_GIT_VERSION "Build using version from git" ON)
option(SC_BUILD_EXPRESS_ONLY "Only build express parser." OFF)
mark_as_advanced(SC_BUILD_EXPRESS_ONLY)
@ -71,6 +86,13 @@ if(SC_ENABLE_TESTING)
ENABLE_TESTING()
endif(SC_ENABLE_TESTING)
#---------------------------------------------------------------------
# Executable install option
OPTION_WITH_DEFAULT(SC_SKIP_EXEC_INSTALL "Skip installing executables" OFF)
if(SC_SKIP_EXEC_INSTALL)
set(SC_EXEC_NOINSTALL "NO_INSTALL")
endif(SC_SKIP_EXEC_INSTALL)
#---------------------------------------------------------------------
# The following logic is what allows binaries to run successfully in
# the build directory AND install directory. Thanks to plplot for

@ -44,25 +44,53 @@ CHECK_FUNCTION_EXISTS(getopt HAVE_GETOPT)
CHECK_TYPE_SIZE("ssize_t" SSIZE_T)
set( TEST_STD_THREAD "
if(SC_ENABLE_CXX11)
set( TEST_STD_THREAD "
#include <iostream>
#include <thread>
void do_work() {
std::cout << \"thread\" << std::endl;
}
int main() {
std::thread t(do_work);
t.join();
}
" )
cmake_push_check_state()
void do_work() {std::cout << \"thread\" << std::endl;}
int main() {std::thread t(do_work);t.join();}
" )
cmake_push_check_state()
if( UNIX )
set( CMAKE_REQUIRED_FLAGS "-pthread -std=c++0x" )
set( CMAKE_REQUIRED_FLAGS "-pthread -std=c++11" )
else( UNIX )
# vars probably need set for MSVC11, embarcadero, etc
# vars probably need set for embarcadero, etc
endif( UNIX )
CHECK_CXX_SOURCE_RUNS( "${TEST_STD_THREAD}" HAVE_STD_THREAD ) #quotes are *required*!
cmake_pop_check_state()
cmake_pop_check_state()
set( TEST_STD_CHRONO "
#include <iostream>
#include <chrono>
int main() {
std::chrono::seconds sec(1);
std::cout << \"1s is \"<< std::chrono::duration_cast<std::chrono::milliseconds>(sec).count() << \" ms\" << std::endl;
}
" )
cmake_push_check_state()
if( UNIX )
set( CMAKE_REQUIRED_FLAGS "-std=c++11" )
else( UNIX )
# vars probably need set for embarcadero, etc
endif( UNIX )
CHECK_CXX_SOURCE_RUNS( "${TEST_STD_CHRONO}" HAVE_STD_CHRONO ) #quotes are *required*!
cmake_pop_check_state()
set( TEST_NULLPTR "
#include <cstddef>
std::nullptr_t f() {return nullptr;}
int main() {return !(f() == f());}
" )
cmake_push_check_state()
if( UNIX )
set( CMAKE_REQUIRED_FLAGS "-std=c++11" )
else( UNIX )
# vars probably need set for embarcadero, etc
endif( UNIX )
CHECK_CXX_SOURCE_RUNS( "${TEST_NULLPTR}" HAVE_NULLPTR ) #quotes are *required*!
cmake_pop_check_state()
endif(SC_ENABLE_CXX11)
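Further C++11 probes would follow the same push/check/pop pattern shown above; a minimal sketch for one more hypothetical feature test (the test string and result variable are illustrative, not part of the build):

```cmake
# Illustrative: the same push/check/pop pattern for one more C++11 probe.
include(CheckCXXSourceRuns)
include(CMakePushCheckState)

set(TEST_STATIC_ASSERT "
static_assert(sizeof(int) >= 2, \"int too small\");
int main() { return 0; }
")
cmake_push_check_state()
if(UNIX)
  set(CMAKE_REQUIRED_FLAGS "-std=c++11")
endif(UNIX)
CHECK_CXX_SOURCE_RUNS("${TEST_STATIC_ASSERT}" HAVE_STATIC_ASSERT)
cmake_pop_check_state()
```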
# Now that all the tests are done, configure the sc_cf.h file:
get_property(CONFIG_H_FILE_CONTENTS GLOBAL PROPERTY SC_CONFIG_H_CONTENTS)
@ -75,14 +103,23 @@ configure_file(${CONFIG_H_FILE} ${SC_BINARY_DIR}/${INCLUDE_INSTALL_DIR}/sc_cf.h)
# Using 'ver_string' instead of 'sc_version_string.h' is a trick to force the
# command to always execute when the custom target is built. It works because
# a file by that name never exists.
configure_file(${SC_CMAKE_DIR}/sc_version_string.cmake ${SC_BINARY_DIR}/sc_version_string.cmake @ONLY)
add_custom_target(version_string ALL DEPENDS ver_string )
# creates sc_version_string.h using cmake script
add_custom_command(OUTPUT ver_string ${CMAKE_CURRENT_BINARY_DIR}/${INCLUDE_INSTALL_DIR}/sc_version_string.h
COMMAND ${CMAKE_COMMAND} -DSOURCE_DIR=${SC_SOURCE_DIR}
-DBINARY_DIR=${SC_BINARY_DIR}
-P ${SC_BINARY_DIR}/sc_version_string.cmake)
# sc_version_string.h is a generated file
if(SC_GIT_VERSION)
configure_file(${SC_CMAKE_DIR}/sc_version_string.cmake ${SC_BINARY_DIR}/sc_version_string.cmake @ONLY)
add_custom_target(version_string ALL DEPENDS ver_string)
# creates sc_version_string.h using cmake script
add_custom_command(OUTPUT ver_string
COMMAND ${CMAKE_COMMAND} -DSOURCE_DIR=${SC_SOURCE_DIR} -DBINARY_DIR=${SC_BINARY_DIR} -P ${SC_BINARY_DIR}/sc_version_string.cmake
)
# sc_version_string.h is a generated file
else(SC_GIT_VERSION)
set(VER_HDR "
#ifndef SC_VERSION_STRING
#define SC_VERSION_STRING
static char sc_version[512] = {\"${SC_VERSION}\"};
#endif"
)
file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/${INCLUDE_INSTALL_DIR}/sc_version_string.h "${VER_HDR}")
endif(SC_GIT_VERSION)
set_source_files_properties(${CMAKE_CURRENT_BINARY_DIR}/${INCLUDE_INSTALL_DIR}/sc_version_string.h
PROPERTIES GENERATED TRUE
HEADER_FILE_ONLY TRUE )

@ -40,12 +40,6 @@ if(NOT "${SC_GENERATE_LEXER_PARSER}" STREQUAL "OFF")
get_filename_component(PERPLEX_TEMPLATE "${CMAKE_BINARY_DIR}/${PERPLEX_TEMPLATE}" ABSOLUTE)
endif(NOT "${perplex_template_fpath}" STREQUAL "${PERPLEX_TEMPLATE}")
if(NOT COMMAND LEMON_TARGET)
include(${SC_CMAKE_DIR}/LEMON_Util.cmake)
endif(NOT COMMAND LEMON_TARGET)
if(NOT COMMAND PERPLEX_TARGET)
include(${SC_CMAKE_DIR}/PERPLEX_Util.cmake)
endif(NOT COMMAND PERPLEX_TARGET)
set(SC_GENERATE_LP_SOURCES 1)
message(".. Found perplex, re2c, and lemon - can regenerate lexer/parser if necessary")
else(LEMON_EXECUTABLE AND LEMON_TEMPLATE AND PERPLEX_EXECUTABLE AND PERPLEX_TEMPLATE AND RE2C_EXECUTABLE)

@ -10,60 +10,46 @@ macro(DEFINE_DLL_EXPORTS libname)
string(TOUPPER ${LOWERCORE} UPPER_CORE)
set(export "SC_${UPPER_CORE}_DLL_EXPORTS")
endif()
get_target_property(defs ${libname} COMPILE_DEFINITIONS)
if(defs) #if no properties, ${defs} will be defs-NOTFOUND which CMake interprets as false
set(defs "${defs};${export}")
else(defs)
set(defs "${export}")
endif(defs)
set_target_properties(${libname} PROPERTIES COMPILE_DEFINITIONS "${defs}")
set_property(TARGET ${libname} APPEND PROPERTY COMPILE_DEFINITIONS "${export}")
endif(MSVC OR BORLAND)
endmacro(DEFINE_DLL_EXPORTS libname)
# set compile definitions for dll imports on windows
macro(DEFINE_DLL_IMPORTS tgt libs)
if(MSVC OR BORLAND)
get_target_property(defs ${tgt} COMPILE_DEFINITIONS)
if(NOT defs) #if no properties, ${defs} will be defs-NOTFOUND which CMake interprets as false
set(defs "")
endif(NOT defs)
set(imports "")
foreach(lib ${libs})
string(REGEX REPLACE "lib" "" shortname "${lib}")
string(REGEX REPLACE "step" "" LOWERCORE "${shortname}")
string(TOUPPER ${LOWERCORE} UPPER_CORE)
list(APPEND defs "SC_${UPPER_CORE}_DLL_IMPORTS")
list(APPEND imports "SC_${UPPER_CORE}_DLL_IMPORTS")
endforeach(lib ${libs})
if(DEFINED defs)
if(defs)
set_target_properties(${tgt} PROPERTIES COMPILE_DEFINITIONS "${defs}")
endif(defs)
endif(DEFINED defs)
set_property(TARGET ${tgt} APPEND PROPERTY COMPILE_DEFINITIONS "${imports}")
endif(MSVC OR BORLAND)
endmacro(DEFINE_DLL_IMPORTS tgt libs)
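The two macros pair up on Windows toolchains: the library target gets its SC_*_DLL_EXPORTS define while every consumer gets the matching SC_*_DLL_IMPORTS define. An illustrative pairing (library and executable names are hypothetical):

```cmake
# Illustrative only - library and executable names are hypothetical.
add_library(stepcore SHARED core.cc)
DEFINE_DLL_EXPORTS(stepcore)            # adds SC_CORE_DLL_EXPORTS on MSVC/Borland

add_executable(checker main.cc)
target_link_libraries(checker stepcore)
DEFINE_DLL_IMPORTS(checker "stepcore")  # adds SC_CORE_DLL_IMPORTS on MSVC/Borland
```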
#EXCLUDE_OR_INSTALL(target destination ARGV3)
# installs ${target} in ${destination} unless testing is enabled AND ${arg_3} == "TESTABLE",
# in which case the EXCLUDE_FROM_ALL property is set for testing.
# EXCLUDE_FROM_ALL cannot be set on targets that are to be installed,
# so either test the target or install it - but not both
macro(EXCLUDE_OR_INSTALL target dest arg_3)
if(NOT ((SC_ENABLE_TESTING) AND ("${arg_3}" STREQUAL "TESTABLE")))
INSTALL(TARGETS ${target} DESTINATION ${dest})
else(NOT ((SC_ENABLE_TESTING) AND ("${arg_3}" STREQUAL "TESTABLE")))
set_target_properties(${target} PROPERTIES EXCLUDE_FROM_ALL ON)
endif(NOT ((SC_ENABLE_TESTING) AND ("${arg_3}" STREQUAL "TESTABLE")))
endmacro(EXCLUDE_OR_INSTALL target dest arg_3)
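The either/or behavior is easiest to see with two hypothetical targets: one installed normally, and one marked TESTABLE, which is excluded from the default build instead of installed whenever SC_ENABLE_TESTING is on:

```cmake
# Illustrative - target names are hypothetical.
add_executable(sample_tool tool.cc)
EXCLUDE_OR_INSTALL(sample_tool "bin" "")          # installed to bin/

add_executable(sample_check check.cc)
EXCLUDE_OR_INSTALL(sample_check "bin" "TESTABLE") # excluded when testing is enabled
```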
#SC_ADDEXEC(execname "source files" "linked libs" ["TESTABLE"] ["MSVC flag" ...])
# optional 4th argument of "TESTABLE", passed to EXCLUDE_OR_INSTALL macro
# optional args can also be used by MSVC-specific code, but it looks like these two uses
# will not conflict because the MSVC args must contain "STRICT"
#SC_ADDEXEC(execname "source files" "linked libs" ["TESTABLE"] ["NO_INSTALL"])
macro(SC_ADDEXEC execname srcslist libslist)
if(SC_BUILD_SHARED_LIBS)
string(TOUPPER "${execname}" EXECNAME_UPPER)
if(${ARGC} GREATER 3)
CMAKE_PARSE_ARGUMENTS(${EXECNAME_UPPER} "NO_INSTALL;TESTABLE" "" "" ${ARGN})
endif(${ARGC} GREATER 3)
add_executable(${execname} ${srcslist})
target_link_libraries(${execname} ${libslist})
DEFINE_DLL_IMPORTS(${execname} "${libslist}") #add import definitions for all libs that the executable is linked to
EXCLUDE_OR_INSTALL(${execname} "bin" "${ARGV3}")
if(NOT ${EXECNAME_UPPER}_NO_INSTALL AND NOT ${EXECNAME_UPPER}_TESTABLE)
install(TARGETS ${execname}
RUNTIME DESTINATION ${BIN_DIR}
LIBRARY DESTINATION ${LIB_DIR}
ARCHIVE DESTINATION ${LIB_DIR}
)
endif(NOT ${EXECNAME_UPPER}_NO_INSTALL AND NOT ${EXECNAME_UPPER}_TESTABLE)
if(NOT SC_ENABLE_TESTING AND ${EXECNAME_UPPER}_TESTABLE)
set_target_properties( ${execname} PROPERTIES EXCLUDE_FROM_ALL ON )
endif(NOT SC_ENABLE_TESTING AND ${EXECNAME_UPPER}_TESTABLE)
# Enable extra compiler flags if local executables and/or global options dictate
set(LOCAL_COMPILE_FLAGS "")
foreach(extraarg ${ARGN})
@ -74,44 +60,36 @@ macro(SC_ADDEXEC execname srcslist libslist)
if(LOCAL_COMPILE_FLAGS)
set_target_properties(${execname} PROPERTIES COMPILE_FLAGS ${LOCAL_COMPILE_FLAGS})
endif(LOCAL_COMPILE_FLAGS)
endif(SC_BUILD_SHARED_LIBS)
if(SC_BUILD_STATIC_LIBS)
if(NOT SC_BUILD_SHARED_LIBS)
set(staticexecname "${execname}")
else()
set(staticexecname "${execname}-static")
endif(NOT SC_BUILD_SHARED_LIBS)
add_executable(${staticexecname} ${srcslist})
target_link_libraries(${staticexecname} ${libslist})
EXCLUDE_OR_INSTALL(${staticexecname} "bin" "${ARGV3}")
# Enable extra compiler flags if local executables and/or global options dictate
set(LOCAL_COMPILE_FLAGS "")
foreach(extraarg ${ARGN})
if(${extraarg} MATCHES "STRICT" AND SC_ENABLE_STRICT)
set(LOCAL_COMPILE_FLAGS "${LOCAL_COMPILE_FLAGS} ${STRICT_FLAGS}")
endif(${extraarg} MATCHES "STRICT" AND SC_ENABLE_STRICT)
endforeach(extraarg ${ARGN})
if(LOCAL_COMPILE_FLAGS)
set_target_properties(${staticexecname} PROPERTIES COMPILE_FLAGS ${LOCAL_COMPILE_FLAGS})
endif(LOCAL_COMPILE_FLAGS)
endif(SC_BUILD_STATIC_LIBS)
endmacro(SC_ADDEXEC execname srcslist libslist)
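With the keyword arguments parsed by CMAKE_PARSE_ARGUMENTS, a hypothetical call for a test-only executable (all names illustrative) reduces to:

```cmake
# Illustrative - a testable executable that is never installed.
SC_ADDEXEC(tst_sample "tst_sample.cc" "stepcore" TESTABLE NO_INSTALL)
```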
#SC_ADDLIB(libname "source files" "linked libs" ["TESTABLE"] ["MSVC flag" ...])
# optional 4th argument of "TESTABLE", passed to EXCLUDE_OR_INSTALL macro
# optional args can also be used by MSVC-specific code, but it looks like these two uses
# will not conflict because the MSVC args must contain "STRICT"
#SC_ADDLIB(libname "source files" "linked libs" ["TESTABLE"] ["NO_INSTALL"] ["SO_SRCS ..."] ["STATIC_SRCS ..."])
macro(SC_ADDLIB libname srcslist libslist)
string(TOUPPER "${libname}" LIBNAME_UPPER)
if(${ARGC} GREATER 3)
CMAKE_PARSE_ARGUMENTS(${LIBNAME_UPPER} "NO_INSTALL;TESTABLE" "" "SO_SRCS;STATIC_SRCS" ${ARGN})
endif(${ARGC} GREATER 3)
string(REGEX REPLACE "-framework;" "-framework " libslist "${libslist}")
if(SC_BUILD_SHARED_LIBS)
add_library(${libname} SHARED ${srcslist})
add_library(${libname} SHARED ${srcslist} ${${LIBNAME_UPPER}_SO_SRCS})
DEFINE_DLL_EXPORTS(${libname})
if(NOT "${libslist}" MATCHES "NONE")
target_link_libraries(${libname} ${libslist})
DEFINE_DLL_IMPORTS(${libname} "${libslist}")
endif(NOT "${libslist}" MATCHES "NONE")
set_target_properties(${libname} PROPERTIES VERSION ${SC_ABI_VERSION} SOVERSION ${SC_ABI_SOVERSION})
EXCLUDE_OR_INSTALL(${libname} "lib" "${ARGV3}")
if(NOT ${LIBNAME_UPPER}_NO_INSTALL AND NOT ${LIBNAME_UPPER}_TESTABLE)
install(TARGETS ${libname}
RUNTIME DESTINATION ${BIN_DIR}
LIBRARY DESTINATION ${LIB_DIR}
ARCHIVE DESTINATION ${LIB_DIR}
)
endif(NOT ${LIBNAME_UPPER}_NO_INSTALL AND NOT ${LIBNAME_UPPER}_TESTABLE)
if(NOT SC_ENABLE_TESTING AND ${LIBNAME_UPPER}_TESTABLE)
set_target_properties( ${libname} PROPERTIES EXCLUDE_FROM_ALL ON )
endif(NOT SC_ENABLE_TESTING AND ${LIBNAME_UPPER}_TESTABLE)
if(APPLE)
set_target_properties(${libname} PROPERTIES LINK_FLAGS "-flat_namespace -undefined suppress")
endif(APPLE)
@ -122,7 +100,7 @@ macro(SC_ADDLIB libname srcslist libslist)
else()
set(staticlibname "${libname}-static")
endif(NOT SC_BUILD_SHARED_LIBS)
add_library(${staticlibname} STATIC ${srcslist})
add_library(${staticlibname} STATIC ${srcslist} ${${LIBNAME_UPPER}_STATIC_SRCS})
DEFINE_DLL_EXPORTS(${staticlibname})
if(NOT "${libslist}" MATCHES "NONE")
target_link_libraries(${staticlibname} "${libslist}")
@ -137,7 +115,16 @@ macro(SC_ADDLIB libname srcslist libslist)
# http://www.cmake.org/Wiki/CMake_FAQ#How_do_I_make_my_shared_and_static_libraries_have_the_same_root_name.2C_but_different_suffixes.3F
set_target_properties(${staticlibname} PROPERTIES PREFIX "lib")
endif(WIN32)
EXCLUDE_OR_INSTALL(${staticlibname} "lib" "${ARGV3}")
if(NOT ${LIBNAME_UPPER}_NO_INSTALL AND NOT ${LIBNAME_UPPER}_TESTABLE)
install(TARGETS ${libname}-static
RUNTIME DESTINATION ${BIN_DIR}
LIBRARY DESTINATION ${LIB_DIR}
ARCHIVE DESTINATION ${LIB_DIR}
)
endif(NOT ${LIBNAME_UPPER}_NO_INSTALL AND NOT ${LIBNAME_UPPER}_TESTABLE)
if(NOT SC_ENABLE_TESTING AND ${LIBNAME_UPPER}_TESTABLE)
set_target_properties( ${libname}-static PROPERTIES EXCLUDE_FROM_ALL ON )
endif(NOT SC_ENABLE_TESTING AND ${LIBNAME_UPPER}_TESTABLE)
if(APPLE)
set_target_properties(${staticlibname} PROPERTIES LINK_FLAGS "-flat_namespace -undefined suppress")
endif(APPLE)
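Put together, a hypothetical SC_ADDLIB call exercising the new per-variant source lists (library and source names are illustrative) would be:

```cmake
# Illustrative - library and source names are hypothetical.
SC_ADDLIB(stepdai "dai.cc" "stepcore"
    SO_SRCS dai_dllmain.cc     # extra sources for the shared variant only
    STATIC_SRCS dai_static.cc) # extra sources for the static variant only
```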

@ -20,11 +20,3 @@ foreach (file ${files})
message(STATUS "File \"$ENV{DESTDIR}${file}\" does not exist.")
endif (EXISTS "$ENV{DESTDIR}${file}")
endforeach(file)
# Local Variables:
# tab-width: 8
# mode: cmake
# indent-tabs-mode: t
# End:
# ex: shiftwidth=2 tabstop=8

@ -1,102 +0,0 @@
# Inherit the parent CMake setting
set(CURRENT_SOURCE_DIR "@CMAKE_CURRENT_SOURCE_DIR@")
set(LEMON_EXECUTABLE "@LEMON_EXECUTABLE@")
set(RE2C_EXECUTABLE "@RE2C_EXECUTABLE@")
set(PERPLEX_EXECUTABLE "@PERPLEX_EXECUTABLE@")
set(SYNC_SCRIPT "@SYNC_SCRIPT@")
set(SYNC_TARGET_NAME "@SYNC_TARGET_NAME@")
set(DEBUGGING_GENERATED_SOURCES "@DEBUGGING_GENERATED_SOURCES@")
if(NOT DEBUGGING_GENERATED_SOURCES)
# Include the file that provides the baseline against which
# current files will be compared
include("@BASELINE_INFORMATION_FILE@")
# Define a variety of convenience routines
include("@PROJECT_CMAKE_DIR@/Generated_Source_Utils.cmake")
# The following need to be checked:
#
# 1. baseline input MD5 hashes against the current input
# hashes. If the cached sources were generated using
# inputs other than the current inputs, note they are
# out of sync but don't stop. Templates used by perplex
# and lemon are part of this group.
#
# 2. baseline cached source MD5 hashes against current
# cached source MD5 hashes. Making sure no changes
# have been made to the generated sources. If the
# cached sources need to be updated (see #1, for example)
# their MD5 hashes need to be updated at the same time.
#
# 3. MD5 hashes of output generated by the tools against
# the MD5 hashes of the equivalent cached sources, if
# a) the tool versions are the same b) the input MD5
# hash comparisons were the same and c) the baseline
# test from #2 passed. This is done to detect platform
# differences in output sources, but is only valid if
# the input files are in their "pristine" state and the
# toolchain is equivalent to that used for the baseline.
# Individually verify all of the files in question.
set(input_files "@INPUT_FILELIST@")
VERIFY_FILES("${input_files}" 0 input_unchanged)
set(template_files "@TEMPLATE_FILELIST@")
VERIFY_FILES("${template_files}" 1 templates_unchanged)
set(cached_files "@CACHED_FILELIST@")
VERIFY_FILES("${cached_files}" 1 cached_unchanged)
if(cached_unchanged)
message( "Cached generated source code has not been modified.")
else()
message(FATAL_ERROR "Cached generated sources do not match the MD5 hashes present in /home/mark/step/sc/src/express/generated/verification_info.cmake - if updating cached sources, remember that the build enforces the requirement that associated MD5 hashes in /home/mark/step/sc/src/express/generated/verification_info.cmake are current as well. Cached generated sources should not be directly edited.")
endif(cached_unchanged)
GET_GENERATOR_EXEC_VERSIONS()
if("${lemon_version}" VERSION_EQUAL "${baseline_lemon_version}" AND "${perplex_version}" VERSION_EQUAL "${baseline_perplex_version}" AND "${re2c_version}" VERSION_EQUAL "${baseline_re2c_version}")
set(tool_versions_equal 1)
else()
set(tool_versions_equal 0)
endif()
if(NOT input_unchanged)
if(templates_unchanged AND tool_versions_equal)
message("Input files changed - syncing cached outputs")
execute_process(COMMAND ${CMAKE_COMMAND} -P ${SYNC_SCRIPT} OUTPUT_VARIABLE output)
else(templates_unchanged AND tool_versions_equal)
if(NOT templates_unchanged AND NOT tool_versions_equal)
message("Input files have been updated, but templates and current tool versions do not match those previously used to generate cached sources. Automatic syncing will not proceed.")
message("To force syncing, use the build target ${SYNC_TARGET_NAME}")
else(NOT templates_unchanged AND NOT tool_versions_equal)
if(NOT templates_unchanged)
message("Input files have been updated, but templates do not match those previously used to generate cached sources. Automatic syncing will not proceed.")
message("To force syncing, use the build target ${SYNC_TARGET_NAME}")
endif(NOT templates_unchanged)
if(NOT tool_versions_equal)
message("Input files have been updated, but tool versions do not match those previously used to generate cached sources. Automatic syncing will not proceed.")
message("To force syncing, use the build target ${SYNC_TARGET_NAME}")
endif(NOT tool_versions_equal)
endif(NOT templates_unchanged AND NOT tool_versions_equal)
endif(templates_unchanged AND tool_versions_equal)
else(NOT input_unchanged)
if(templates_unchanged AND cached_unchanged AND tool_versions_equal)
# Under these conditions, the uncached generated output should be equal to the cached files.
# Check if it is - a difference here may indicate a platform-specific behavior in one of the
# generators.
set(build_files "@BUILD_OUTPUT_FILELIST@")
VERIFY_FILES("${build_files}" 1 platform_unchanged)
if(NOT platform_unchanged)
message("Note: given these build inputs and tools, source files should be identical to generated files. Differences were still observed - a possible indication of platform-specific generator behavior.")
endif(NOT platform_unchanged)
endif(templates_unchanged AND cached_unchanged AND tool_versions_equal)
endif(NOT input_unchanged)
else(NOT DEBUGGING_GENERATED_SOURCES)
message("\nNote: DEBUGGING_GENERATED_SOURCES is enabled - generated outputs will contain configuration-specific debugging information, so syncing cached output files is not possible. To restore normal behavior, disable DEBUGGING_GENERATED_SOURCES.\n")
endif(NOT DEBUGGING_GENERATED_SOURCES)
# Local Variables:
# tab-width: 8
# mode: cmake
# indent-tabs-mode: t
# End:
# ex: shiftwidth=2 tabstop=8

cmake/md5_gen.cmake.in

@ -0,0 +1,35 @@
# Inherit the parent CMake setting
set(CURRENT_SOURCE_DIR @CMAKE_CURRENT_SOURCE_DIR@)
set(CURRENT_BINARY_DIR @CMAKE_CURRENT_BINARY_DIR@)
# Define a variety of convenience routines
include(@PROJECT_CMAKE_DIR@/Generated_Source_Utils.cmake)
# The following steps are executed to sync generated sources:
#
# 1. Create a new verification_info.cmake file and populate
# it with the MD5 sums for current files.
#
# 2. Overwrite the original cached verification_info.cmake
# and generated files with the new ones. If LOCKED_SOURCE_DIR
# is ON, this step will not be carried out - instead, an
# informational message with manual updating instructions
# will be printed.
set(new_info_file "${CURRENT_BINARY_DIR}/verification_info.cmake")
file(WRITE ${new_info_file} "# Autogenerated verification information\n")
# Handle input files
set(input_files "@MD5_FILELIST@")
WRITE_MD5_SUMS("${input_files}" "${new_info_file}")
message("New verification file created: ${new_info_file}")
# Local Variables:
# tab-width: 8
# mode: cmake
# indent-tabs-mode: t
# End:
# ex: shiftwidth=2 tabstop=8

cmake/md5_verify.cmake.in

@ -0,0 +1,30 @@
# Inherit the parent CMake setting
set(DEBUGGING_GENERATED_SOURCES @DEBUGGING_GENERATED_SOURCES@)
set(CURRENT_SOURCE_DIR "@CMAKE_CURRENT_SOURCE_DIR@")
# Include the file that provides the baseline against which
# current files will be compared
include("@BASELINE_INFORMATION_FILE@")
# Define a variety of convenience routines
include("@PROJECT_CMAKE_DIR@/Generated_Source_Utils.cmake")
# Individually verify all of the files in question.
set(filelist "@MD5_FILELIST@")
VERIFY_FILES("${filelist}" 1 srcs_pass)
if(NOT srcs_pass)
if(NOT DEBUGGING_GENERATED_SOURCES)
message(FATAL_ERROR "Sources have been modified and md5 sums have not been updated. This generally indicates either\n a) an input file has been modified but generated files have not been updated, or\n b) generated files have been edited directly.\nTo clear the error:\n a) Copy the new generated sources from the build directory to the generated/ sources directory, use the <target>_md5gen build target to create a new verification_info.cmake file, and copy verification_info.cmake to generated/ as well.\n b) install Perplex/Re2C/LEMON and make the changes to the input file rather than the generated file.\nNote:\n If this is a debugging situation where multiple sequential tests must be conducted, temporarily set the variable DEBUGGING_GENERATED_SOURCES to ON during the CMake configure to disable this check.\nThis measure is necessary to ensure that compilations using either Perplex/Re2C/LEMON generation or the cached outputs of those tools produce consistent results.")
else(NOT DEBUGGING_GENERATED_SOURCES)
message(WARNING "Note: Sources have been modified and md5 sums have not been updated - build failure condition temporarily overridden by DEBUGGING_GENERATED_SOURCES setting.")
endif(NOT DEBUGGING_GENERATED_SOURCES)
endif(NOT srcs_pass)
# Local Variables:
# tab-width: 8
# mode: cmake
# indent-tabs-mode: t
# End:
# ex: shiftwidth=2 tabstop=8


@ -47,45 +47,24 @@ string(REPLACE "\n" "" GIT_COMMIT_ID ${GIT_COMMIT_ID})
#once cmake_minimum_required is >= 2.8.11, we can use TIMESTAMP:
#string(TIMESTAMP date_time_string)
if(UNIX)
execute_process(COMMAND date "+%d %b %Y %H:%M" OUTPUT_VARIABLE date_time_string OUTPUT_STRIP_TRAILING_WHITESPACE)
elseif(WIN32)
execute_process(COMMAND cmd /c date /t OUTPUT_VARIABLE currentDate OUTPUT_STRIP_TRAILING_WHITESPACE)
execute_process(COMMAND cmd /c time /t OUTPUT_VARIABLE currentTime OUTPUT_STRIP_TRAILING_WHITESPACE)
set (date_time_string "${currentDate} ${currentTime}")
else()
set(date_time_string "\" __DATE__ \" \" __TIME__ \" ")
if(NOT SC_IS_SUBBUILD)
message(STATUS "Unknown platform - using date from preprocessor")
endif(NOT SC_IS_SUBBUILD)
endif()
set(header_string "/* sc_version_string.h - written by cmake. Changes will be lost! */\n"
"#ifndef SC_VERSION_STRING\n"
"#define SC_VERSION_STRING\n\n"
"/*\n** The git commit id looks like \"test-1-g5e1fb47\", where test is the\n"
"** name of the last tagged git revision, 1 is the number of commits since that tag,\n"
"** 'g' is unknown, and 5e1fb47 is the first 7 chars of the git sha1 commit id.\n"
"** timestamp is created from date/time commands on known platforms, and uses\n"
"** preprocessor macros elsewhere.\n*/\n\n"
"*/\n\n"
"static char sc_version[512] = {\n"
" \"git commit id: ${GIT_COMMIT_ID}, build timestamp ${date_time_string}\"\n"
" \"git commit id: ${GIT_COMMIT_ID}\"\n"
"}\;\n\n"
"#endif\n"
)
#compare the new and old commit versions, don't update the file if only the timestamp differs
if(EXISTS ${SC_VERSION_HEADER})
file(READ ${SC_VERSION_HEADER} OLD_VER_STRING LIMIT 600) #file is ~586 bytes
string(FIND "${OLD_VER_STRING}" "git commit id: ${GIT_COMMIT_ID}" COMMIT_MATCH )
# ${COMMIT_MATCH} == -1 if no match
else()
set(COMMIT_MATCH -1)
endif(EXISTS ${SC_VERSION_HEADER})
if(${COMMIT_MATCH} LESS 1 )
file(WRITE ${SC_VERSION_HEADER} ${header_string})
endif(${COMMIT_MATCH} LESS 1)
#don't update the file unless something changed
string(RANDOM tmpsuffix)
file(WRITE ${SC_VERSION_HEADER}.${tmpsuffix} ${header_string})
execute_process(COMMAND ${CMAKE_COMMAND} -E copy_if_different ${SC_VERSION_HEADER}.${tmpsuffix} ${SC_VERSION_HEADER})
execute_process(COMMAND ${CMAKE_COMMAND} -E remove ${SC_VERSION_HEADER}.${tmpsuffix})
if(NOT SC_IS_SUBBUILD)
message("-- sc_version_string.h is up-to-date.")


@ -12,15 +12,15 @@
# this makes compilation faster, but sometimes runs into compiler limitations
if(NOT DEFINED SC_UNITY_BUILD)
if(BORLAND)
message(".. Will not do unity build for this compiler.")
message( STATUS "Will not do unity build for this compiler. (SC_UNITY_BUILD=FALSE)")
set(SC_UNITY_BUILD FALSE)
else()
message(".. Assuming compiler is capable of unity build.")
message( STATUS "Assuming compiler is capable of unity build. (SC_UNITY_BUILD=TRUE)")
set(SC_UNITY_BUILD TRUE)
endif(BORLAND)
message(".. Override by setting SC_UNITY_BUILD; TRUE will result in *huge* translation units, higher memory use in compilation, and faster build times.")
message( STATUS "Override by setting SC_UNITY_BUILD; TRUE will result in faster build times but *huge* translation units and higher memory use in compilation.")
else(NOT DEFINED SC_UNITY_BUILD)
message(".. Respecting user-defined SC_UNITY_BUILD value of ${SC_UNITY_BUILD}.")
message( STATUS "Respecting user-defined SC_UNITY_BUILD value of ${SC_UNITY_BUILD}.")
endif(NOT DEFINED SC_UNITY_BUILD)
@ -47,7 +47,7 @@ set(CMAKE_C_COMPILER \"${CMAKE_C_COMPILER}\" CACHE STRING \"compiler\")
set(CMAKE_CXX_COMPILER \"${CMAKE_CXX_COMPILER}\" CACHE STRING \"compiler\")
")
message("-- Compiling schema scanner...")
message( STATUS "Compiling schema scanner...")
execute_process(COMMAND ${CMAKE_COMMAND} -E make_directory ${SC_BINARY_DIR}/schemas)
execute_process(COMMAND ${CMAKE_COMMAND} -E make_directory ${SCANNER_BUILD_DIR})
@ -72,7 +72,7 @@ if(NOT ${_ss_build_stat} STREQUAL "0")
message(FATAL_ERROR "Scanner build status: ${_ss_build_stat}. stdout:\n${_ss_build_out}\nstderr:\n${_ss_build_err}")
endif(NOT ${_ss_build_stat} STREQUAL "0")
message("-- Schema scanner built. Running it...")
message( STATUS "Schema scanner built. Running it...")
# not sure if it makes sense to install this or not...
if(WIN32)


@ -1,39 +0,0 @@
# Inherit the parent CMake setting
set(DEBUGGING_GENERATED_SOURCES @DEBUGGING_GENERATED_SOURCES@)
set(CURRENT_SOURCE_DIR "@CMAKE_CURRENT_SOURCE_DIR@")
# Include the file that provides the baseline against which
# current files will be compared
if(NOT DEBUGGING_GENERATED_SOURCES)
include("@BASELINE_INFORMATION_FILE@")
# Define a variety of convenience routines
include("@PROJECT_CMAKE_DIR@/Generated_Source_Utils.cmake")
# Individually verify all of the files in question.
set(filelist "@CACHED_FILELIST@")
VERIFY_FILES("${filelist}" 1 srcs_pass)
if( srcs_pass)
message( "Generated source code has not been modified.")
else(srcs_pass)
message(FATAL_ERROR "Generated sources have been modified. These files should never be modified directly except when debugging faulty output from the generators - changes to lexer and parser logic should be made to the generator input files. If this is a debugging situation, set the variable DEBUGGING_GENERATED_SOURCES to ON during the CMake configure.")
endif(srcs_pass)
# If we got past that test, see if it looks like these
# sources came from the current input files. It's not
# a failure condition if they didn't, but warn about it.
set(filelist "@INPUT_FILELIST@")
VERIFY_FILES("${filelist}" 0 inputs_same)
if(NOT inputs_same)
message("Note: cached generated sources are not in sync with input files.")
endif(NOT inputs_same)
endif(NOT DEBUGGING_GENERATED_SOURCES)
# Local Variables:
# tab-width: 8
# mode: cmake
# indent-tabs-mode: t
# End:
# ex: shiftwidth=2 tabstop=8


@ -1,4 +1,4 @@
# Doxyfile 1.7.4
# Doxyfile 1.7.6.1
# This file describes the settings to be used by the documentation system
# doxygen (www.doxygen.org) for a project
@ -22,22 +22,23 @@
DOXYFILE_ENCODING = UTF-8
# The PROJECT_NAME tag is a single word (or a sequence of words surrounded
# by quotes) that should identify the project.
# The PROJECT_NAME tag is a single word (or sequence of words) that should
# identify the project. Note that if you do not use Doxywizard you need
# to put quotes around the project name if it contains spaces.
PROJECT_NAME = scl
PROJECT_NAME = SC
# The PROJECT_NUMBER tag can be used to enter a project or revision number.
# This could be handy for archiving the generated documentation or
# if some version control system is used.
PROJECT_NUMBER = 3.2
PROJECT_NUMBER = 0.8
# Using the PROJECT_BRIEF tag one can provide an optional one line description
# for a project that appears at the top of each page and should give viewer
# a quick idea about the purpose of the project. Keep the description short.
PROJECT_BRIEF = "STEPcode"
PROJECT_BRIEF = STEPcode
# With the PROJECT_LOGO tag one can specify an logo or icon that is
# included in the documentation. The maximum height of the logo should not
@ -51,7 +52,7 @@ PROJECT_LOGO =
# If a relative path is entered, it will be relative to the location
# where doxygen was started. If left blank the current directory will be used.
OUTPUT_DIRECTORY = .
OUTPUT_DIRECTORY = /mnt/raid/mark/sc-doxygen
# If the CREATE_SUBDIRS tag is set to YES, then doxygen will create
# 4096 sub-directories (in 2 levels) under the output directory of each output
@ -204,6 +205,13 @@ TAB_SIZE = 8
ALIASES =
# This tag can be used to specify a number of word-keyword mappings (TCL only).
# A mapping has the form "name=value". For example adding
# "class=itcl::class" will allow you to use the command class in the
# itcl::class meaning.
TCL_SUBST =
# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C
# sources only. Doxygen will then generate output that is more tailored for C.
# For instance, some of the names that are used will be different. The list
@ -293,6 +301,15 @@ SUBGROUPING = YES
INLINE_GROUPED_CLASSES = NO
# When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and
# unions with only public data fields will be shown inline in the documentation
# of the scope in which they are defined (i.e. file, namespace, or group
# documentation), provided this scope is documented. If set to NO (the default),
# structs, classes, and unions are shown on a separate page (for HTML and Man
# pages) or section (for LaTeX and RTF).
INLINE_SIMPLE_STRUCTS = NO
# When TYPEDEF_HIDES_STRUCT is enabled, a typedef of a struct, union, or enum
# is documented as struct, union, or enum with the name of the typedef. So
# typedef struct TypeS {} TypeT, will appear in the documentation as a struct
@ -315,10 +332,21 @@ TYPEDEF_HIDES_STRUCT = NO
# a logarithmic scale so increasing the size by one will roughly double the
# memory usage. The cache size is given by this formula:
# 2^(16+SYMBOL_CACHE_SIZE). The valid range is 0..9, the default is 0,
# corresponding to a cache size of 2^16 = 65536 symbols
# corresponding to a cache size of 2^16 = 65536 symbols.
SYMBOL_CACHE_SIZE = 0
# Similar to the SYMBOL_CACHE_SIZE the size of the symbol lookup cache can be
# set using LOOKUP_CACHE_SIZE. This cache is used to resolve symbols given
# their name and scope. Since this can be an expensive process and often the
# same symbol appear multiple times in the code, doxygen keeps a cache of
# pre-resolved symbols. If the cache is too small doxygen will become slower.
# If the cache is too large, memory is wasted. The cache size is given by this
# formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range is 0..9, the default is 0,
# corresponding to a cache size of 2^16 = 65536 symbols.
LOOKUP_CACHE_SIZE = 0
#---------------------------------------------------------------------------
# Build related configuration options
#---------------------------------------------------------------------------
@ -338,7 +366,7 @@ EXTRACT_PRIVATE = NO
# If the EXTRACT_STATIC tag is set to YES all static members of a file
# will be included in the documentation.
EXTRACT_STATIC = NO
EXTRACT_STATIC = YES
# If the EXTRACT_LOCAL_CLASSES tag is set to YES classes (and structs)
# defined locally in source files will be included in the documentation.
@ -559,6 +587,16 @@ FILE_VERSION_FILTER =
LAYOUT_FILE =
# The CITE_BIB_FILES tag can be used to specify one or more bib files
# containing the references data. This must be a list of .bib files. The
# .bib extension is automatically appended if omitted. Using this command
# requires the bibtex tool to be installed. See also
# http://en.wikipedia.org/wiki/BibTeX for more info. For LaTeX the style
# of the bibliography can be controlled using LATEX_BIB_STYLE. To use this
# feature you need bibtex and perl available in the search path.
CITE_BIB_FILES =
#---------------------------------------------------------------------------
# configuration options related to warning and progress messages
#---------------------------------------------------------------------------
@ -621,7 +659,8 @@ WARN_LOGFILE =
INPUT = src \
include \
build/src/express
build/src/express \
build/include
# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding, which is
@ -677,13 +716,16 @@ FILE_PATTERNS = *.c \
RECURSIVE = YES
# The EXCLUDE tag can be used to specify files and/or directories that should
# The EXCLUDE tag can be used to specify files and/or directories that should be
# excluded from the INPUT source files. This way you can easily exclude a
# subdirectory from a directory tree whose root is specified with the INPUT tag.
# Note that relative paths are relative to the directory from which doxygen is
# run.
EXCLUDE =
EXCLUDE = .git \
cmake/
# The EXCLUDE_SYMLINKS tag can be used select whether or not files or
# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or
# directories that are symbolic links (a Unix file system feature) are excluded
# from the input.
@ -773,7 +815,7 @@ FILTER_SOURCE_PATTERNS =
# Note: To get rid of all source code in the generated output, make sure also
# VERBATIM_HEADERS is set to NO.
SOURCE_BROWSER = NO
SOURCE_BROWSER = YES
# Setting the INLINE_SOURCES tag to YES will include the body
# of functions and classes directly in the documentation.
@ -868,7 +910,7 @@ HTML_FILE_EXTENSION = .html
# standard header. Note that when using a custom header you are responsible
# for the proper inclusion of any scripts and style sheets that doxygen
# needs, which is dependent on the configuration options used.
# It is adviced to generate a default header using "doxygen -w html
# It is advised to generate a default header using "doxygen -w html
# header.html footer.html stylesheet.css YourConfigFile" and then modify
# that header. Note that the header is subject to change so you typically
# have to redo this when upgrading to a newer version of doxygen or when
@ -887,7 +929,7 @@ HTML_FOOTER =
# fine-tune the look of the HTML output. If the tag is left blank doxygen
# will generate a default style sheet. Note that doxygen will try to copy
# the style sheet file to the HTML output directory, so don't put your own
# stylesheet in the HTML output directory as well, or it will be erased!
# style sheet in the HTML output directory as well, or it will be erased!
HTML_STYLESHEET =
@ -901,7 +943,7 @@ HTML_STYLESHEET =
HTML_EXTRA_FILES =
# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output.
# Doxygen will adjust the colors in the stylesheet and background images
# Doxygen will adjust the colors in the style sheet and background images
# according to this color. Hue is specified as an angle on a colorwheel,
# see http://en.wikipedia.org/wiki/Hue for more information.
# For instance the value 0 represents red, 60 is yellow, 120 is green,
@ -1096,19 +1138,14 @@ GENERATE_ECLIPSEHELP = NO
ECLIPSE_DOC_ID = org.doxygen.Project
# The DISABLE_INDEX tag can be used to turn on/off the condensed index at
# top of each HTML page. The value NO (the default) enables the index and
# the value YES disables it.
# The DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs)
# at top of each HTML page. The value NO (the default) enables the index and
# the value YES disables it. Since the tabs have the same information as the
# navigation tree you can set this option to NO if you already set
# GENERATE_TREEVIEW to YES.
DISABLE_INDEX = NO
# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values
# (range [0,1..20]) that doxygen will group on one line in the generated HTML
# documentation. Note that a value of 0 will completely suppress the enum
# values from appearing in the overview section.
ENUM_VALUES_PER_LINE = 4
# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index
# structure should be generated to display hierarchical information.
# If the tag value is set to YES, a side panel will be generated
@ -1116,9 +1153,18 @@ ENUM_VALUES_PER_LINE = 4
# is generated for HTML Help). For this to work a browser that supports
# JavaScript, DHTML, CSS and frames is required (i.e. any modern browser).
# Windows users are probably better off using the HTML help feature.
# Since the tree basically has the same information as the tab index you
# could consider to set DISABLE_INDEX to NO when enabling this option.
GENERATE_TREEVIEW = YES
# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values
# (range [0,1..20]) that doxygen will group on one line in the generated HTML
# documentation. Note that a value of 0 will completely suppress the enum
# values from appearing in the overview section.
ENUM_VALUES_PER_LINE = 4
# By enabling USE_INLINE_TREES, doxygen will generate the Groups, Directories,
# and Class Hierarchy pages using a tree view instead of an ordered list.
@ -1171,6 +1217,11 @@ USE_MATHJAX = NO
MATHJAX_RELPATH = http://www.mathjax.org/mathjax
# The MATHJAX_EXTENSIONS tag can be used to specify one or MathJax extension
# names that should be enabled during MathJax rendering.
MATHJAX_EXTENSIONS =
# When the SEARCHENGINE tag is enabled doxygen will generate a search box
# for the HTML output. The underlying search engine uses javascript
# and DHTML and should work on any modern browser. Note that when using
@ -1179,7 +1230,7 @@ MATHJAX_RELPATH = http://www.mathjax.org/mathjax
# typically be disabled. For large projects the javascript based search engine
# can be slow, then enabling SERVER_BASED_SEARCH may provide a better solution.
SEARCHENGINE = NO
SEARCHENGINE = YES
# When the SERVER_BASED_SEARCH tag is enabled the search engine will be
# implemented using a PHP enabled web server instead of at the web client
@ -1284,6 +1335,12 @@ LATEX_HIDE_INDICES = NO
LATEX_SOURCE_CODE = NO
# The LATEX_BIB_STYLE tag can be used to specify the style to use for the
# bibliography, e.g. plainnat, or ieeetr. The default style is "plain". See
# http://en.wikipedia.org/wiki/BibTeX for more info.
LATEX_BIB_STYLE = plain
#---------------------------------------------------------------------------
# configuration options related to the RTF output
#---------------------------------------------------------------------------
@ -1315,7 +1372,7 @@ COMPACT_RTF = NO
RTF_HYPERLINKS = NO
# Load stylesheet definitions from file. Syntax is similar to doxygen's
# Load style sheet definitions from file. Syntax is similar to doxygen's
# config file, i.e. a series of assignments. You only have to provide
# replacements, missing definitions are set to their default value.
@ -1585,32 +1642,30 @@ HAVE_DOT = YES
DOT_NUM_THREADS = 0
# By default doxygen will write a font called Helvetica to the output
# directory and reference it in all dot files that doxygen generates.
# When you want a differently looking font you can specify the font name
# using DOT_FONTNAME. You need to make sure dot is able to find the font,
# which can be done by putting it in a standard location or by setting the
# DOTFONTPATH environment variable or by setting DOT_FONTPATH to the directory
# containing the font.
# By default doxygen will use the Helvetica font for all dot files that
# doxygen generates. When you want a differently looking font you can specify
# the font name using DOT_FONTNAME. You need to make sure dot is able to find
# the font, which can be done by putting it in a standard location or by setting
# the DOTFONTPATH environment variable or by setting DOT_FONTPATH to the
# directory containing the font.
DOT_FONTNAME = FreeSans
DOT_FONTNAME = Roboto
# The DOT_FONTSIZE tag can be used to set the size of the font of dot graphs.
# The default size is 10pt.
DOT_FONTSIZE = 10
# By default doxygen will tell dot to use the output directory to look for the
# FreeSans.ttf font (which doxygen will put there itself). If you specify a
# different font using DOT_FONTNAME you can set the path where dot
# can find it using this tag.
# By default doxygen will tell dot to use the Helvetica font.
# If you specify a different font using DOT_FONTNAME you can use DOT_FONTPATH to
# set the path where dot can find it.
DOT_FONTPATH =
# If the CLASS_GRAPH and HAVE_DOT tags are set to YES then doxygen
# will generate a graph for each documented class showing the direct and
# indirect inheritance relations. Setting this tag to YES will force the
# the CLASS_DIAGRAMS tag to NO.
# CLASS_DIAGRAMS tag to NO.
CLASS_GRAPH = YES
@ -1681,10 +1736,21 @@ DIRECTORY_GRAPH = YES
# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images
# generated by dot. Possible values are svg, png, jpg, or gif.
# If left blank png will be used.
# If left blank png will be used. If you choose svg you need to set
# HTML_FILE_EXTENSION to xhtml in order to make the SVG files
# visible in IE 9+ (other browsers do not have this requirement).
DOT_IMAGE_FORMAT = png
# If DOT_IMAGE_FORMAT is set to svg, then this option can be set to YES to
# enable generation of interactive SVG images that allow zooming and panning.
# Note that this requires a modern browser other than Internet Explorer.
# Tested and working are Firefox, Chrome, Safari, and Opera. For IE 9+ you
# need to set HTML_FILE_EXTENSION to xhtml in order to make the SVG files
# visible. Older versions of IE do not have SVG support.
INTERACTIVE_SVG = NO
# The tag DOT_PATH can be used to specify the path where the dot tool can be
# found. If left blank, it is assumed the dot tool can be found in the path.
@ -1735,7 +1801,7 @@ DOT_TRANSPARENT = NO
# makes dot run faster, but since only newer versions of dot (>1.8.10)
# support this, this feature is disabled by default.
DOT_MULTI_TARGETS = NO
DOT_MULTI_TARGETS = YES
# If the GENERATE_LEGEND tag is set to YES (the default) Doxygen will
# generate a legend page explaining the meaning of the various boxes and


@ -0,0 +1,80 @@
#
# CMakeLists.txt for AP203 Minimum
#
# This file is released to the public domain. Any part of this file may be
# freely copied in part or in full for any purpose. No acknowledgment is required
# for the use of this file.
#
project(AP203Minimum)
cmake_minimum_required(VERSION 2.8)
INCLUDE( ExternalProject )
set(CMAKE_MODULE_PATH ${AP203Minimum_SOURCE_DIR}/cmake ${CMAKE_MODULE_PATH})
INCLUDE( External_STEPCode )
#####
# Variables ideally set by FindSTEPCode.cmake
IF(NOT WIN32)
set( STEPCODE_LIBRARIES
${STEPCODE_INSTALL_DIR}/lib/libbase.a
${STEPCODE_INSTALL_DIR}/lib/libstepcore.a
${STEPCODE_INSTALL_DIR}/lib/libstepeditor.a
${STEPCODE_INSTALL_DIR}/lib/libstepdai.a
${STEPCODE_INSTALL_DIR}/lib/libsteputils.a
${STEPCODE_INSTALL_DIR}/lib/libsdai_ap203.a
)
ELSE()
set( STEPCODE_LIBRARIES
${STEPCODE_INSTALL_DIR}/lib/libbase.lib
${STEPCODE_INSTALL_DIR}/lib/libstepcore.lib
${STEPCODE_INSTALL_DIR}/lib/libstepeditor.lib
${STEPCODE_INSTALL_DIR}/lib/libstepdai.lib
${STEPCODE_INSTALL_DIR}/lib/libsteputils.lib
${STEPCODE_INSTALL_DIR}/lib/libsdai_ap203.lib
${STEPCODE_INSTALL_DIR}/lib/libexpress.lib
${STEPCODE_INSTALL_DIR}/lib/libexppp.lib
shlwapi.lib
)
ENDIF()
MESSAGE( STATUS "STEPCODE_INSTALL_DIR: " ${STEPCODE_INSTALL_DIR} )
set( STEPCODE_INCLUDE_DIR
${STEPCODE_INSTALL_DIR}/include/stepcode
${STEPCODE_INSTALL_DIR}/include/stepcode/base
${STEPCODE_INSTALL_DIR}/include/stepcode/clstepcore
${STEPCODE_INSTALL_DIR}/include/stepcode/cldai
${STEPCODE_INSTALL_DIR}/include/stepcode/clutils
${STEPCODE_INSTALL_DIR}/include/stepcode/cleditor
${STEPCODE_INSTALL_DIR}/include/schemas/sdai_ap203
)
# End of variables ideally set by FindSTEPCode.cmake
######
include_directories(
${STEPCODE_INCLUDE_DIR}
)
set(SRCS ../ap203min.cpp)
set(HDRS )
add_executable( ${PROJECT_NAME} ${SRCS} ${HDRS})
add_dependencies( ${PROJECT_NAME} STEPCODE )
target_link_libraries( ${PROJECT_NAME}
${STEPCODE_LIBRARIES}
)
# Local Variables:
# tab-width: 8
# mode: cmake
# indent-tabs-mode: t
# End:
# ex: shiftwidth=2 tabstop=8


@ -0,0 +1,29 @@
ExternalProject_Add( STEPCODE
URL ${CMAKE_CURRENT_SOURCE_DIR}/../../..
CMAKE_ARGS -DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
-DCMAKE_CXX_FLAGS=${CMAKE_CXX_FLAGS}
-DCMAKE_C_FLAGS=${CMAKE_C_FLAGS}
-DSC_BUILD_TYPE=Debug
-DSC_BUILD_SCHEMAS=ap203/ap203.exp
-DSC_BUILD_STATIC_LIBS=ON
-DSC_PYTHON_GENERATOR=OFF
-DSC_INSTALL_PREFIX:PATH=<INSTALL_DIR>
)
ExternalProject_Get_Property( STEPCODE SOURCE_DIR )
ExternalProject_Get_Property( STEPCODE BINARY_DIR )
ExternalProject_Get_Property( STEPCODE INSTALL_DIR )
IF( NOT WIN32 )
SET( STEPCODE_INSTALL_DIR ${SOURCE_DIR}/../sc-install )
ELSE()
SET( STEPCODE_INSTALL_DIR ${INSTALL_DIR} )
ENDIF()
SET( STEPCODE_BINARY_DIR ${BINARY_DIR} )
# SC CMake does not honor -DCMAKE_INSTALL_PREFIX:PATH=<INSTALL_DIR>
# Consequently, force Debug so it installs in ../sc-install directory
# instead of /usr/local/lib.
#
# SC's own programs fail to build with -DSC_BUILD_SHARED_LIBS=OFF


@ -24,5 +24,7 @@
#cmakedefine HAVE_SSIZE_T 1
#cmakedefine HAVE_STD_THREAD 1
#cmakedefine HAVE_STD_CHRONO 1
#cmakedefine HAVE_NULLPTR 1
#endif /* SCL_CF_H */


@ -0,0 +1,330 @@
//summarize MSVC errors from an appveyor log
// compile with 'go build summarize-appveyor-log.go'
// takes 0 or 1 args; with 0, gets log from latest
// build. with 1, uses that file as raw json-like log
package main
import (
"bufio"
"encoding/json"
"fmt"
"io"
"net/http"
"os"
"regexp"
"sort"
"strings"
)
const (
headerKey = "Authorization"
headerVal = "Bearer %s"
projUrl = "https://ci.appveyor.com/api/projects/mpictor/stepcode"
//"https://ci.appveyor.com/api/buildjobs/2rjxdv1rnb8jcg8y/log"
logUrl = "https://ci.appveyor.com/api/buildjobs/%s/log"
consoleUrl = "https://ci.appveyor.com/api/buildjobs/%s/console"
)
func main() {
var rawlog io.ReadCloser
var build string
var err error
if len(os.Args) == 2 {
rawlog, build, err = processArgv()
} else {
rawlog, build, err = getLog()
}
if err != nil {
fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
return
}
defer rawlog.Close()
log := decodeConsole(rawlog)
warns, errs := countMessages(log)
fi, err := os.Create(fmt.Sprintf("appveyor-%s.smy", build))
if err != nil {
fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
return
}
printMessages("error", errs, fi)
printMessages("warning", warns, fi)
fmt.Printf("done\n")
}
/* categorizes warnings and errors based upon the MSVC message number (i.e. C4244)
* the regex will match lines like
c:\projects\stepcode\src\base\sc_benchmark.h(45): warning C4251: 'benchmark::descr' : class 'std::basic_string<char,std::char_traits<char>,std::allocator<char>>' needs to have dll-interface to be used by clients of class 'benchmark' [C:\projects\STEPcode\build\src\base\base.vcxproj]
[00:03:48] C:\projects\STEPcode\src\base\sc_benchmark.cc(61): warning C4244: '=' : conversion from 'SIZE_T' to 'long', possible loss of data [C:\projects\STEPcode\build\src\base\base.vcxproj]*
*/
func countMessages(log []string) (warns, errs map[string][]string) {
warns = make(map[string][]string)
errs = make(map[string][]string)
fname := " *(.*)" // $1
fline := `(?:\((\d+)\)| ): ` // $2 - either line number in parenthesis or a space, followed by a colon
msgNr := `([A-Z]+\d+): ` // $3 - C4251, LNK2005, etc
msgTxt := `([^\[]*) ` // $4
tail := `\[[^\[\]]*\]`
warnRe := regexp.MustCompile(fname + fline + `warning ` + msgNr + msgTxt + tail)
errRe := regexp.MustCompile(fname + fline + `(?:fatal )?error ` + msgNr + msgTxt + tail)
for _, line := range log {
if warnRe.MatchString(line) {
key := warnRe.ReplaceAllString(line, "$3")
path := strings.ToLower(warnRe.ReplaceAllString(line, "$1:$2"))
arr := warns[key]
if arr == nil {
arr = make([]string, 1, 5) // length 1 (slot 0 holds the example text), capacity 5
//detailed text as first string in array
text := warnRe.ReplaceAllString(line, "$4")
arr[0] = fmt.Sprintf("%s", text)
}
//eliminate duplicates
match := false
for _, l := range arr {
if l == path {
match = true
}
}
if !match {
warns[key] = append(arr, path)
}
} else if errRe.MatchString(line) {
key := errRe.ReplaceAllString(line, "$3")
path := strings.ToLower(errRe.ReplaceAllString(line, "$1:$2"))
arr := errs[key]
if arr == nil {
arr = make([]string, 1, 5) // length 1 (slot 0 holds the example text), capacity 5
//detailed text as first string in array
text := errRe.ReplaceAllString(line, "$4")
arr[0] = fmt.Sprintf("%s", text)
}
//eliminate duplicates
match := false
for _, l := range arr {
if l == path {
match = true
}
}
if !match {
errs[key] = append(arr, path)
}
}
}
return
}
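The composed patterns can be exercised directly against the sample warning line from the comment above. This self-contained check rebuilds the warning regex exactly as countMessages does and pulls out the same capture groups:

```go
package main

import (
	"fmt"
	"regexp"
)

// msvcWarnRe composes the same sub-patterns countMessages uses for
// MSVC warning lines.
func msvcWarnRe() *regexp.Regexp {
	fname := " *(.*)"                // $1 - file path
	fline := `(?:\((\d+)\)| ): `     // $2 - line number in parens, or a space
	msgNr := `([A-Z]+\d+): `         // $3 - C4251, LNK2005, etc.
	msgTxt := `([^\[]*) `            // $4 - message text
	tail := `\[[^\[\]]*\]`           // trailing [project.vcxproj]
	return regexp.MustCompile(fname + fline + `warning ` + msgNr + msgTxt + tail)
}

func main() {
	line := `c:\projects\stepcode\src\base\sc_benchmark.h(45): warning C4251: 'benchmark::descr' : needs to have dll-interface [C:\projects\STEPcode\build\src\base\base.vcxproj]`
	m := msvcWarnRe().FindStringSubmatch(line)
	// → C4251 c:\projects\stepcode\src\base\sc_benchmark.h:45
	fmt.Println(m[3], m[1]+":"+m[2])
}
```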
func printMessages(typ string, m map[string][]string, w io.Writer) {
//sort keys
keys := make([]string, 0, len(m))
for key := range m {
keys = append(keys, key)
}
sort.Strings(keys)
for _, k := range keys {
for i, l := range m[k] {
//first string is an example, not a location
if i == 0 {
fmt.Fprintf(w, "%s %s (i.e. \"%s\")\n", typ, k, l)
} else if len(l) > 1 { //skip empty or malformed entries
fmt.Fprintf(w, " >> %s\n", l)
}
}
}
}
//structs from http://json2struct.mervine.net/
//{"values":[{"i":0,"t":"Specify a project or solution file. The directory does not contain a project or solution file.\r\n","dt":"00:00:04","bg":12,"fg":15}]}
type AppVeyorConsoleLines struct {
Values []struct {
I int `json:"i"`
Text string `json:"t"`
DateTime string `json:"dt"`
BgColor int `json:"bg"`
FgColor int `json:"fg"`
}
}
type AppVeyorBuild struct {
Build struct {
/*BuildNumber int `json:"buildNumber"`*/
Version string `json:"version"`
Jobs []struct {
JobID string `json:"jobId"`
} `json:"jobs"`
} `json:"build"`
}
func splitAppend(log *[]string, blob string) {
//blob = strings.Replace(blob,"\r\n", "\n",-1)
blob = strings.Replace(blob, "\\", "/", -1)
r := strings.NewReader(blob)
unwrapScanner := bufio.NewScanner(r)
for unwrapScanner.Scan() {
txt := unwrapScanner.Text()
//fmt.Printf("%s\n", txt)
*log = append(*log, txt)
}
}
//calculate length of string without escape chars
// func escapeLen(s string)(l int) {
// //s = strings.Replace(s,"\\\\", "/",-1)
// s = strings.Replace(s,"\\\"", "",-1)
// s = strings.Replace(s,"\r\n", "RN",-1)
// return len(s)
// }
//decode the almost-JSON console data from appveyor
func decodeConsole(r io.Reader) (log []string) {
wrapper := Wrap(r)
dec := json.NewDecoder(wrapper)
var consoleLines AppVeyorConsoleLines
var err error
var txtBlob string
err = dec.Decode(&consoleLines)
if err == io.EOF {
err = nil
}
if err == nil {
for _, l := range consoleLines.Values {
txtBlob += l.Text
//el := escapeLen(l.Text)
//something inserts newlines at 229 chars (+\n\r == 231) (found in CMake output)
lenTwoThreeOne := len(l.Text) == 231
if lenTwoThreeOne {
txtBlob = strings.TrimSuffix(txtBlob, "\r\n")
}
//something else starts new log lines at 1024 chars without inserting newlines (found in CTest error output)
if len(l.Text) != 1024 && !lenTwoThreeOne {
//fmt.Printf("sa for l %d, el %d\n", len(l.Text),el)
splitAppend(&log, txtBlob)
txtBlob = ""
}
}
} else {
fmt.Printf("decode err %s\n", err)
}
if len(txtBlob) > 0 {
splitAppend(&log, txtBlob)
}
return
}
func processArgv() (log io.ReadCloser, build string, err error) {
if len(os.Args) < 2 {
err = fmt.Errorf("Missing filename arg. Run as '%s appveyor-NNN.log'", os.Args[0])
return
}
fname := os.Args[1]
if len(fname) < 14 {
err = fmt.Errorf("Name arg '%s' too short. Run as '%s appveyor-NNN.log'", fname, os.Args[0])
return
}
buildRe := regexp.MustCompile(`appveyor-(.+).log`)
build = buildRe.ReplaceAllString(fname, "$1")
if len(build) == 0 {
err = fmt.Errorf("No build id in %s", fname)
return
}
log, err = os.Open(fname)
return
}
func getLog() (log io.ReadCloser, build string, err error) {
client := &http.Client{}
req, err := http.NewRequest("GET", projUrl, nil)
if err != nil {
return
}
apikey := os.Getenv("APPVEYOR_API_KEY")
//api key isn't necessary for read-only queries on public projects
if len(apikey) > 0 {
req.Header.Add(headerKey, fmt.Sprintf(headerVal, apikey))
} //else {
// fmt.Printf("Env var APPVEYOR_API_KEY is not set.")
//}
resp, err := client.Do(req)
if err != nil {
return
}
build, job := decodeProjInfo(resp.Body)
fmt.Printf("build #%s, jobId %s\n", build, job)
resp, err = http.Get(fmt.Sprintf(consoleUrl, job))
if err != nil {
return
}
logName := fmt.Sprintf("appveyor-%s.log", build)
fi, err := os.Create(logName)
if err != nil {
return
}
_, err = io.Copy(fi, resp.Body)
fi.Close() //flush before reopening the file for reading
if err != nil {
return
}
log, err = os.Open(logName)
if err != nil {
log = nil
}
return
}
func decodeProjInfo(r io.Reader) (vers string, job string) {
dec := json.NewDecoder(r)
var av AppVeyorBuild
err := dec.Decode(&av)
if err != io.EOF && err != nil {
fmt.Printf("err %s\n", err)
return
}
if len(av.Build.Jobs) != 1 {
return
}
vers = av.Build.Version
job = av.Build.Jobs[0].JobID
return
}
//wrap a reader, modifying content to make the json decoder happy
//only tested with data from appveyor console
type jsonWrapper struct {
source io.Reader
begin bool
end bool
}
func Wrap(r io.Reader) *jsonWrapper {
return &jsonWrapper{
source: r,
begin: true,
}
}
// func nonNeg(n int) (int) {
// if n < 0 {
// return 0
// }
// return n
// }
func (w *jsonWrapper) Read(p []byte) (n int, err error) {
if w.end {
return 0, io.EOF
}
if w.begin {
w.begin = false
n = copy(p, []byte(`{"values":[`))
}
m, err := w.source.Read(p[n:])
n += m
if err == io.EOF {
w.end = true
term := []byte(`{"dummy":"data"}]}`)
if len(p)-n >= len(term) {
//append the terminator after any bytes already read, instead of overwriting them
n += copy(p[n:], term)
} else {
err = fmt.Errorf("no room to terminate JSON struct")
}
}
return
}
// kate: indent-width 8; space-indent off; replace-tabs off; replace-tabs-save off; replace-trailing-space-save on; remove-trailing-space on; tab-indents on; tab-width 8; show-tabs off;

View file

@ -5,6 +5,8 @@ set(SC_BASE_SOURCES
sc_getopt.cc
sc_benchmark.cc
sc_mkdir.c
path2str.c
judy/src/judy.c
)
set(SC_BASE_HDRS
@ -12,8 +14,19 @@ set(SC_BASE_HDRS
sc_memmgr.h
sc_getopt.h
sc_trace_fprintf.h
sc_stdbool.h
sc_mkdir.h
sc_nullptr.h
path2str.h
judy/src/judy.h
judy/src/judyLArray.h
judy/src/judyL2Array.h
judy/src/judySArray.h
judy/src/judyS2Array.h
)
include_directories(
${CMAKE_CURRENT_SOURCE_DIR}
${CMAKE_CURRENT_SOURCE_DIR}/judy/src
)
if(MINGW OR MSVC OR BORLAND)

View file

@ -62,10 +62,11 @@
#include "judy.h"
#if defined(STANDALONE) || defined(ASKITIS)
#include <string.h>
#include <stdio.h>
#if defined(STANDALONE) || defined(ASKITIS)
extern unsigned int MaxMem;
// void judy_abort (char *msg) __attribute__ ((noreturn)); // Tell static analyser that this function will not return

View file

@ -166,7 +166,7 @@ class judyL2Array {
kv.value = *_lastSlot;
_success = true;
} else {
kv.value = ( JudyValue ) 0;
kv.value = NULL;
_success = false;
}
kv.key = _buff[0];

View file

@ -191,7 +191,7 @@ class judyS2Array {
kv.value = *_lastSlot;
_success = true;
} else {
kv.value = ( JudyValue ) 0;
kv.value = NULL;
_success = false;
}
kv.key = _buff;

src/base/path2str.c Normal file
View file

@ -0,0 +1,29 @@
#include "path2str.h"
#include "sc_memmgr.h"
#include <string.h>
/* for windows, rewrite backslashes in paths
* that will be written to generated code
*/
const char * path2str_fn( const char * fileMacro ) {
static char * result = 0;
static size_t rlen = 0;
char * p;
if( rlen < strlen( fileMacro ) ) {
if( result ) {
sc_free( result );
}
rlen = strlen( fileMacro );
result = ( char * )sc_malloc( rlen * sizeof( char ) + 1 );
}
strcpy( result, fileMacro );
p = result;
while( *p ) {
if( *p == '\\' ) {
*p = '/';
}
p++;
}
return result;
}

src/base/path2str.h Normal file
View file

@ -0,0 +1,20 @@
#ifndef PATH2STR_H
#define PATH2STR_H
#include <sc_export.h>
/** windows only: rewrite backslashes in paths as forward slashes
* call as path2str(__FILE__) to take advantage of macro
*
* silence "unknown escape sequence" warning when contents of __FILE__
* are fprintf'd into string in generated code
*/
SC_BASE_EXPORT const char * path2str_fn( const char * fileMacro );
#if defined( _WIN32 ) || defined ( __WIN32__ )
# define path2str(path) path2str_fn(path)
#else
# define path2str(path) path
#endif /* defined( _WIN32 ) || defined ( __WIN32__ ) */
#endif /* PATH2STR_H */

View file

@ -6,6 +6,7 @@
#ifdef __cplusplus
#include <iostream>
#include <iosfwd>
#include <string>
#include "sc_memmgr.h"

View file

@ -168,7 +168,7 @@ sc_memmgr::~sc_memmgr( void ) {
// Check if total allocated equals total deallocated
if( _allocated_total != _deallocated_total ) {
// todo: generate warning for possible memory leaks, enable full memory leak checking
printf( "sc_memmgr warning: Possible memory leaks detected (%d of %d bytes)\n", _allocated_total - _deallocated_total, _allocated_total );
fprintf( stderr, "sc_memmgr warning: Possible memory leaks detected (%d of %d bytes)\n", _allocated_total - _deallocated_total, _allocated_total );
}
// Compact leaks into an error list to prevent same leak being reported multiple times.
@ -193,7 +193,7 @@ sc_memmgr::~sc_memmgr( void ) {
ierror != errors.end();
ierror ++ ) {
// todo: generate error for memory leak
printf( "sc_memmgr warning: Possible memory leak in %s line %d\n", ierror->getsrcfile().c_str(), ierror->getsrcline() );
fprintf( stderr, "sc_memmgr warning: Possible memory leak in %s line %d\n", ierror->getsrcfile().c_str(), ierror->getsrcline() );
}
// Clear remaining records
@ -211,7 +211,7 @@ void * sc_memmgr::allocate( size_t size, const char * file, const int line ) {
addr = malloc( size );
if( addr == NULL ) {
// todo: error allocation failed
printf( "sc_memmgr error: Memory allocation failed in %s line %d\n", file, line );
fprintf( stderr, "sc_memmgr error: Memory allocation failed in %s line %d\n", file, line );
}
// Some stl implementations (for example debian gcc) use the new operator to construct
@ -246,7 +246,7 @@ void * sc_memmgr::reallocate( void * addr, size_t size, const char * file, const
record = _records.find( sc_memmgr_record( addr ) );
if( record == _records.end() ) {
// todo: error reallocating memory not allocated?
printf( "sc_memmgr warning: Reallocation of not allocated memory at %s line %d\n", file, line );
fprintf( stderr, "sc_memmgr warning: Reallocation of not allocated memory at %s line %d\n", file, line );
} else {
// Update stats
_allocated -= record->getsize();
@ -264,7 +264,7 @@ void * sc_memmgr::reallocate( void * addr, size_t size, const char * file, const
addr = realloc( addr, size );
if( addr == NULL ) {
// todo: error reallocation failed
printf( "sc_memmgr error: Reallocation failed at %s line %d\n", file, line );
fprintf( stderr, "sc_memmgr error: Reallocation failed at %s line %d\n", file, line );
}
#ifdef SC_MEMMGR_ENABLE_CHECKS
@ -296,7 +296,7 @@ void sc_memmgr::deallocate( void * addr, const char * file, const int line ) {
record = _records.find( sc_memmgr_record( addr ) );
if( record == _records.end() ) {
// todo: error free called for not allocated memory?
printf( "sc_memmgr warning: Deallocate of not allocated memory at %s line %d\n", file, line );
fprintf( stderr, "sc_memmgr warning: Deallocate of not allocated memory at %s line %d\n", file, line );
} else {
// Update stats
_allocated -= record->getsize();

src/base/sc_nullptr.h Normal file
View file

@ -0,0 +1,13 @@
#ifndef NULLPTR_H
#define NULLPTR_H
#include <sc_cf.h>
#ifdef HAVE_NULLPTR
#include <cstddef>
#else
# define nullptr_t void*
# define nullptr NULL
#endif //HAVE_NULLPTR
#endif //NULLPTR_H

View file

@ -9,17 +9,17 @@
*
* This header must be included *after* all other headers, otherwise the compiler will
* report errors in system headers.
* \sa trace_fprintf
* \sa trace_fprintf()
**/
#include "sc_export.h"
/** used to find where generated c++ originates from in exp2cxx
* To enable, configure with 'cmake .. -DSC_TRACE_FPRINTF=ON'
*/
#ifdef __cplusplus
extern "C" {
#endif
/** Used to find where generated c++ originates from in exp2cxx.
* To enable, configure with 'cmake .. -DSC_TRACE_FPRINTF=ON'
*/
SC_BASE_EXPORT void trace_fprintf( char const * sourcefile, int line, FILE * file, const char * format, ... );
#ifdef __cplusplus
}

View file

@ -31,6 +31,8 @@
#include <math.h>
#include "sc_memmgr.h"
#include "sdaiApplication_instance.h"
// to help ObjectCenter
#ifndef HAVE_MEMMOVE
extern "C"

View file

@ -9,17 +9,11 @@
typedef char * SDAI_DAObjectID;
// interface PID (ISO/DIS 10303-23:1996(E) 5.3.10.1)
// Also, CORBA POS specification, Section 5.4
//
// The PID class maintains the persistent object identifier for every
// persistent object, objects of class DAObject, and objects of any class
// derived directly or indirectly from DAObject.
//
// POS: The PID identifies one or more locations within a Datastore that
// represent the persistent data of an object and generates a string
// identifier for that data. An object must have a PID in order to store
// its data persistently.
//
/*
The EXPRESS ENTITY application_instance from the SDAI_data_type_schema from
@ -29,7 +23,7 @@ typedef char * SDAI_DAObjectID;
The class DAObject is supported by the classes PID, PID_SDAI and the type
SDAI_DAObjectID as follows:
*/
/// interface PID (ISO/DIS 10303-23:1996(E) 5.3.10.1)
class SC_DAI_EXPORT SDAI_PID : public SDAI_sdaiObject {
public:
@ -67,8 +61,6 @@ typedef SDAI_PID * SDAI_PID_ptr;
typedef SDAI_PID_ptr SDAI_PID_var;
// interface PID_DA (ISO/DIS 10303-23:1996(E) 5.3.10.3)
// Also, CORBA POS specification, Direct Access Protocol, Section 5.10.1
//
// The Direct Access Protocol supports direct access to persistent data
// through typed attributes organized in data objects that are defined
@ -88,7 +80,7 @@ typedef SDAI_PID_ptr SDAI_PID_var;
// may be accessed through this extension to the CosPersistencePID
// interface.
//
/// interface PID_DA (ISO/DIS 10303-23:1996(E) 5.3.10.3)
class SC_DAI_EXPORT SDAI_PID_DA: public SDAI_PID {
public:
@ -120,11 +112,11 @@ class SC_DAI_EXPORT SDAI_PID_DA: public SDAI_PID {
typedef SDAI_PID_DA * SDAI_PID_DA_ptr;
typedef SDAI_PID_DA_ptr SDAI_PID_DA_var;
// interface PID_SDAI (ISO/DIS 10303-23:1996(E) 5.3.10.2)
//
// The PID_SDAI class maintains the persistent object identifier for
// a Model_contents object.
//
/// interface PID_SDAI (ISO/DIS 10303-23:1996(E) 5.3.10.2)
class SC_DAI_EXPORT SDAI_PID_SDAI : public SDAI_PID {
public:
SDAI_String _modelid;
@ -150,8 +142,6 @@ class SC_DAI_EXPORT SDAI_PID_SDAI : public SDAI_PID {
typedef SDAI_PID_SDAI * SDAI_PID_SDAI_ptr;
typedef SDAI_PID_SDAI_ptr SDAI_PID_SDAI_var;
// interface DAObject (ISO/DIS 10303-23:1996(E) 5.3.10.5)
// Also, CORBA POS Section 5.10.2, Direct Access Protocol.
//
// From POS: The DAObject interface provides operations that many data
// object clients need. A Datastore implementation may provide support
@ -166,6 +156,7 @@ class SDAI_DAObject;
typedef SDAI_DAObject * SDAI_DAObject_ptr;
typedef SDAI_DAObject_ptr SDAI_DAObject_var;
/// interface DAObject (ISO/DIS 10303-23:1996(E) 5.3.10.5)
class SC_DAI_EXPORT SDAI_DAObject : public SDAI_sdaiObject {
public:
@ -262,7 +253,7 @@ class SC_DAI_EXPORT SDAI_DAObject : public SDAI_sdaiObject {
void dado_free() { }
};
/*
/**
5.3.10.1 DAObject_SDAI
*/

View file

@ -357,6 +357,7 @@ int SDAI_Enum::put( const char * n ) {
}
/// return 0 if unset otherwise return 1
/// WARNING it appears that exists() will return true after a call to nullify(). is this intended?
int SDAI_Enum::exists() const {
return !( v > no_elements() );
}

View file

@ -12,6 +12,7 @@
* and is not subject to copyright.
*/
#include <iostream>
#include <sc_export.h>
class SC_DAI_EXPORT SDAI_Enum {
@ -27,7 +28,7 @@ class SC_DAI_EXPORT SDAI_Enum {
public:
virtual ~SDAI_Enum() {};
void PrintContents( ostream & out = cout ) const {
void PrintContents( ostream & out = std::cout ) const {
DebugDisplay( out );
}
@ -51,7 +52,7 @@ class SC_DAI_EXPORT SDAI_Enum {
}
const char * asStr( std::string & s ) const;
void STEPwrite( ostream & out = cout ) const;
void STEPwrite( ostream & out = std::cout ) const;
const char * STEPwrite( std::string & s ) const;
Severity StrToVal( const char * s, ErrorDescriptor * err, int optional = 1 );
@ -60,8 +61,8 @@ class SC_DAI_EXPORT SDAI_Enum {
virtual int put( int val );
virtual int put( const char * n );
int is_null() const {
return !( exists() );
bool is_null() const {
return ( exists() == 0 );
}
void set_null() {
nullify();
@ -69,9 +70,11 @@ class SC_DAI_EXPORT SDAI_Enum {
SDAI_Enum & operator= ( const int );
SDAI_Enum & operator= ( const SDAI_Enum & );
/// WARNING it appears that exists() will return true after a call to nullify(). is this intended?
///FIXME need to rewrite this function, but strange implementation...
virtual int exists() const;
virtual void nullify();
void DebugDisplay( ostream & out = cout ) const;
void DebugDisplay( ostream & out = std::cout ) const;
protected:
virtual Severity ReadEnum( istream & in, ErrorDescriptor * err,

View file

@ -1307,7 +1307,7 @@ SDAI_Application_instance * STEPfile::ReadInstance( istream & in, ostream & out,
case SEVERITY_BUG:
case SEVERITY_INCOMPLETE:
if( ( _fileType == VERSION_CURRENT ) ) {
if( _fileType == VERSION_CURRENT ) {
cerr << "ERROR in EXCHANGE FILE: incomplete instance #"
<< obj -> STEPfile_id << ".\n";
if( _fileType != WORKING_SESSION ) {

View file

@ -6,9 +6,6 @@ set( clLazyFile_SRCS
p21HeaderSectionReader.cc
sectionReader.cc
lazyP21DataSectionReader.cc
../base/judy/src/judy.c
../base/judy/src/judyLArray.h
../base/judy/src/judySArray.h
)
set( SC_CLLAZYFILE_HDRS
@ -33,11 +30,11 @@ include_directories(
${SC_SOURCE_DIR}/src/base/judy/src
)
SC_ADDLIB(steplazyfile "${clLazyFile_SRCS};${clLazyFile_HDRS}" "stepcore;stepdai;steputils;base")
SC_ADDEXEC(lazy_test "lazy_test.cc" "steplazyfile;stepeditor" )
set_target_properties(lazy_test PROPERTIES COMPILE_FLAGS "-DNO_REGISTRY" )
SC_ADDLIB(steplazyfile "${clLazyFile_SRCS};${clLazyFile_HDRS}" "stepcore;stepdai;steputils;base;stepeditor")
SC_ADDEXEC(lazy_test "lazy_test.cc" "steplazyfile;stepeditor" NO_INSTALL)
set_property(TARGET lazy_test APPEND PROPERTY COMPILE_DEFINITIONS "NO_REGISTRY")
if(TARGET lazy_test-static)
set_target_properties(lazy_test-static PROPERTIES COMPILE_FLAGS "-DNO_REGISTRY" )
set_property(TARGET lazy_test-static APPEND PROPERTY COMPILE_DEFINITIONS "NO_REGISTRY")
endif(TARGET lazy_test-static)
install(FILES ${SC_CLLAZYFILE_HDRS}

View file

@ -1,6 +1,9 @@
#ifndef INSTMGRHELPER_H
#define INSTMGRHELPER_H
#include <sc_export.h>
#include <mgrnode.h>
#include <lazyInstMgr.h>
#include <instmgr.h>
@ -14,7 +17,7 @@
* This class is used when creating SDAI_Application_instance's and using a lazyInstMgr. It is returned
* by instMgrAdapter. SDAI_Application_instance only uses the GetSTEPentity function.
*/
class mgrNodeHelper: public MgrNodeBase {
class SC_LAZYFILE_EXPORT mgrNodeHelper: public MgrNodeBase {
protected:
lazyInstMgr * _lim;
instanceID _id;
@ -40,7 +43,7 @@ class mgrNodeHelper: public MgrNodeBase {
* when an instance is looked up, this uses lazyInstMgr to load it, and then returns a pointer to it.
*/
class instMgrAdapter: public InstMgrBase {
class SC_LAZYFILE_EXPORT instMgrAdapter: public InstMgrBase {
protected:
mgrNodeHelper _mn;
public:

View file

@ -13,7 +13,7 @@
* \sa lazyP21DataSectionReader
* \sa lazyP28DataSectionReader
*/
class lazyDataSectionReader: public sectionReader {
class SC_LAZYFILE_EXPORT lazyDataSectionReader: public sectionReader {
protected:
bool _error, _completelyLoaded;
std::string _sectionIdentifier;

View file

@ -50,7 +50,7 @@ instancesLoaded_t * lazyFileReader::getHeaderInstances() {
}
lazyFileReader::lazyFileReader( std::string fname, lazyInstMgr * i, fileID fid ): _fileName( fname ), _parent( i ), _fileID( fid ) {
_file.open( _fileName.c_str() );
_file.open( _fileName.c_str(), std::ios::binary );
_file.imbue( std::locale::classic() );
_file.unsetf( std::ios_base::skipws );
assert( _file.is_open() && _file.good() );

View file

@ -6,6 +6,8 @@
#include "instMgrHelper.h"
#include "lazyRefs.h"
#include "sdaiApplication_instance.h"
lazyInstMgr::lazyInstMgr() {
_headerRegistry = new Registry( HeaderSchemaInit );
_instanceTypes = new instanceTypes_t( 255 ); //NOTE arbitrary max of 255 chars for a type name
@ -137,7 +139,7 @@ SDAI_Application_instance * lazyInstMgr::loadInstance( instanceID id, bool reSee
std::cerr << "Instance #" << id << " exists in multiple sections. This is not yet supported." << std::endl;
break;
}
if( ( inst ) && ( inst != & NilSTEPentity ) ) {
if( !isNilSTEPentity( inst ) ) {
_instancesLoaded.insert( id, inst );
_loadedInstanceCount++;
lazyRefs lr( this, inst );

View file

@ -11,6 +11,7 @@
#include "Registry.h"
#include "sc_memmgr.h"
#include "sc_export.h"
#include "judyLArray.h"
#include "judySArray.h"
@ -20,7 +21,7 @@
class Registry;
class instMgrAdapter;
class lazyInstMgr {
class SC_LAZYFILE_EXPORT lazyInstMgr {
protected:
/** multimap from instance number to instances that it refers to
* \sa instanceRefs_pair
@ -72,8 +73,8 @@ class lazyInstMgr {
void openFile( std::string fname );
void addLazyInstance( namedLazyInstance inst );
instMgrAdapter * getAdapter() {
return _ima;
InstMgrBase * getAdapter() {
return ( InstMgrBase * ) _ima;
}
instanceRefs_t * getFwdRefs() {

View file

@ -6,14 +6,19 @@
#include <utility>
#include <vector>
#include "sc_export.h"
#include "lazyTypes.h"
#include "lazyInstMgr.h"
#include "ExpDict.h"
#include "sdaiApplication_instance.h"
#include "SubSuperIterators.h"
#include <STEPattribute.h>
#include <STEPaggregate.h>
#ifdef _WIN32
#define strcasecmp _strcmpi
#endif // _WIN32
class SDAI_Application_instance;
/*
* given inverted attr ia:
* attr method value
@ -48,7 +53,7 @@
//TODO what about complex instances? scanning each on disk could be a bitch; should the compositional types be scanned during lazy loading?
//TODO/FIXME in generated code, store ia data in map and eliminate data members that are currently used. modify accessors to use map.
class lazyRefs {
class SC_LAZYFILE_EXPORT lazyRefs {
public:
typedef std::set< instanceID > referentInstances_t;
protected:
@ -103,7 +108,7 @@ class lazyRefs {
if( !ai ) {
ias.i = rinst;
_inst->setInvAttr( ia, ias );
} else if( ai->GetFileId() != inst ) {
} else if( ai->GetFileId() != (int)inst ) {
std::cerr << "ERROR: two instances (" << rinst << ", #" << rinst->GetFileId() << "=" << rinst->getEDesc()->Name();
std::cerr << " and " << ai << ", #" << ai->GetFileId() <<"=" << ai->getEDesc()->Name() << ") refer to inst ";
std::cerr << _inst->GetFileId() << ", but its inverse attribute is not an aggregation type!" << std::endl;
@ -126,7 +131,7 @@ class lazyRefs {
bool found = false;
if( sa.getADesc()->IsAggrType() ) {
//aggregate - search for current inst id
EntityAggregate * aggr = dynamic_cast< EntityAggregate * >( sa.Aggregate());
EntityAggregate * aggr = dynamic_cast< EntityAggregate * >( sa.Aggregate() );
assert( aggr );
EntityNode * en = ( EntityNode * ) aggr->GetHead();
while( en ) {
@ -174,7 +179,7 @@ class lazyRefs {
}
std::cerr << "Error! inverse attr " << ia->Name() << " (" << ia << ") not found in iAMap for entity " << inst->getEDesc()->Name() << std::endl;
abort();
iAstruct nil;
iAstruct nil = {nullptr};
return nil;
}

View file

@ -17,7 +17,6 @@
#include "lazyFileReader.h"
#include "lazyInstMgr.h"
#include "lazyTypes.h"
#include "instMgrHelper.h"
#include "current_function.hpp"
@ -48,7 +47,7 @@ std::streampos sectionReader::findNormalString( const std::string & str, bool se
}
if( c == '\'' ) {
//push past string
_file.unget();
_file.seekg( _file.tellg() - std::streampos(1) );
GetLiteralStr( _file, _lazyFile->getInstMgr()->getErrorDesc() );
}
if( ( c == '/' ) && ( _file.peek() == '*' ) ) {
@ -130,7 +129,7 @@ std::streampos sectionReader::seekInstanceEnd( instanceRefs ** refs ) {
}
break;
case '\'':
_file.unget();
_file.seekg( _file.tellg() - std::streampos(1) );
GetLiteralStr( _file, _lazyFile->getInstMgr()->getErrorDesc() );
break;
case '=':
@ -156,7 +155,7 @@ std::streampos sectionReader::seekInstanceEnd( instanceRefs ** refs ) {
if( _file.get() == ';' ) {
return _file.tellg();
} else {
_file.unget();
_file.seekg( _file.tellg() - std::streampos(1) );
}
}
default:
@ -187,7 +186,7 @@ instanceID sectionReader::readInstanceNumber() {
if( ( c == '/' ) && ( _file.peek() == '*' ) ) {
findNormalString( "*/" );
} else {
_file.unget();
_file.seekg( _file.tellg() - std::streampos(1) );
}
skipWS();
c = _file.get();
@ -200,7 +199,7 @@ instanceID sectionReader::readInstanceNumber() {
assert( std::numeric_limits<instanceID>::max() <= std::numeric_limits<unsigned long long int>::max() );
size_t instanceIDLength = std::numeric_limits<instanceID>::digits10 + 1;
char * buffer = new char( instanceIDLength + 1 ); // +1 for the terminating character
char * buffer = new char[ instanceIDLength + 1 ]; // +1 for the terminating character
std::stringstream errorMsg;
@ -211,7 +210,7 @@ instanceID sectionReader::readInstanceNumber() {
digits++;
} else {
_file.unget();
_file.seekg( _file.tellg() - std::streampos(1) );
break;
}
@ -222,7 +221,7 @@ instanceID sectionReader::readInstanceNumber() {
_error->UserMsg( "A very large instance ID encountered" );
_error->DetailMsg( errorMsg.str() );
delete buffer;
delete [] buffer;
return 0;
}
@ -243,7 +242,7 @@ instanceID sectionReader::readInstanceNumber() {
assert( id > 0 );
}
delete buffer;
delete [] buffer;
return id;
}
@ -303,15 +302,14 @@ SDAI_Application_instance * sectionReader::getRealInstance( const Registry * reg
inst = reg->ObjCreate( tName, sName );
break;
}
if( inst != & NilSTEPentity ) {
if( !isNilSTEPentity( inst ) ) {
if( !comment.empty() ) {
inst->AddP21Comment( comment );
}
assert( inst->getEDesc() );
_file.seekg( begin );
findNormalString( "(" );
_file.unget();
_file.seekg( _file.tellg() - std::streampos(1) );
sev = inst->STEPread( instance, 0, _lazyFile->getInstMgr()->getAdapter(), _file, sName, true, false );
//TODO do something with 'sev'
inst->InitIAttrs();
@ -319,7 +317,7 @@ SDAI_Application_instance * sectionReader::getRealInstance( const Registry * reg
return inst;
}
STEPcomplex * sectionReader::CreateSubSuperInstance( const Registry * reg, instanceID fileid, Severity & sev ) {
STEPcomplex * sectionReader::CreateSubSuperInstance( const Registry * reg, instanceID fileid, Severity & ) {
std::string buf;
ErrorDescriptor err;
std::vector<std::string *> typeNames;

View file

@ -13,13 +13,15 @@
#include <Registry.h>
#include "sc_memmgr.h"
const TypeDescriptor * t_sdaiINTEGER = NULL;
const TypeDescriptor * t_sdaiREAL = NULL;
const TypeDescriptor * t_sdaiNUMBER = NULL;
const TypeDescriptor * t_sdaiSTRING = NULL;
const TypeDescriptor * t_sdaiBINARY = NULL;
const TypeDescriptor * t_sdaiBOOLEAN = NULL;
const TypeDescriptor * t_sdaiLOGICAL = NULL;
/* these may be shared between multiple Registry instances, so don't create/destroy in Registry ctor/dtor
* Name, FundamentalType, Originating Schema, Description */
const TypeDescriptor * const t_sdaiINTEGER = new TypeDescriptor( "INTEGER", sdaiINTEGER, 0, "INTEGER" );
const TypeDescriptor * const t_sdaiREAL = new TypeDescriptor( "REAL", sdaiREAL, 0, "Real" );
const TypeDescriptor * const t_sdaiNUMBER = new TypeDescriptor( "NUMBER", sdaiNUMBER, 0, "Number" );
const TypeDescriptor * const t_sdaiSTRING = new TypeDescriptor( "STRING", sdaiSTRING, 0, "String" );
const TypeDescriptor * const t_sdaiBINARY = new TypeDescriptor( "BINARY", sdaiBINARY, 0, "Binary" );
const TypeDescriptor * const t_sdaiBOOLEAN = new TypeDescriptor( "BOOLEAN", sdaiBOOLEAN, 0, "Boolean" );
const TypeDescriptor * const t_sdaiLOGICAL = new TypeDescriptor( "LOGICAL", sdaiLOGICAL, 0, "Logical" );
static int uniqueNames( const char *, const SchRename * );
@ -30,43 +32,6 @@ Registry::Registry( CF_init initFunct )
active_schemas = SC_HASHcreate( 10 );
active_types = SC_HASHcreate( 100 );
if( !t_sdaiINTEGER ) {
t_sdaiINTEGER = new TypeDescriptor( "INTEGER", // Name
sdaiINTEGER, // FundamentalType
0, // Originating Schema
"INTEGER" ); // Description;
}
if( !t_sdaiREAL ) {
t_sdaiREAL = new TypeDescriptor( "REAL", sdaiREAL,
0, // Originating Schema
"Real" );
}
if( !t_sdaiSTRING ) {
t_sdaiSTRING = new TypeDescriptor( "STRING", sdaiSTRING,
0, // Originating Schema
"String" );
}
if( !t_sdaiBINARY ) {
t_sdaiBINARY = new TypeDescriptor( "BINARY", sdaiBINARY,
0, // Originating Schema
"Binary" );
}
if( !t_sdaiBOOLEAN ) {
t_sdaiBOOLEAN = new TypeDescriptor( "BOOLEAN", sdaiBOOLEAN,
0, // Originating Schema
"Boolean" );
}
if( !t_sdaiLOGICAL ) {
t_sdaiLOGICAL = new TypeDescriptor( "LOGICAL", sdaiLOGICAL,
0, // Originating Schema
"Logical" );
}
if( !t_sdaiNUMBER ) {
t_sdaiNUMBER = new TypeDescriptor( "NUMBER", sdaiNUMBER,
0, // Originating Schema
"Number" );
}
initFunct( *this );
SC_HASHlistinit( active_types, &cur_type );
SC_HASHlistinit( primordialSwamp, &cur_entity ); // initialize cur's
@ -80,35 +45,6 @@ Registry::~Registry() {
SC_HASHdestroy( active_schemas );
SC_HASHdestroy( active_types );
delete col;
if( t_sdaiINTEGER ) {
delete t_sdaiINTEGER;
t_sdaiINTEGER = NULL;
}
if( t_sdaiREAL ) {
delete t_sdaiREAL;
t_sdaiREAL = NULL;
}
if( t_sdaiSTRING ) {
delete t_sdaiSTRING;
t_sdaiSTRING = NULL;
}
if( t_sdaiBINARY ) {
delete t_sdaiBINARY;
t_sdaiBINARY = NULL;
}
if( t_sdaiBOOLEAN ) {
delete t_sdaiBOOLEAN;
t_sdaiBOOLEAN = NULL;
}
if( t_sdaiLOGICAL ) {
delete t_sdaiLOGICAL;
t_sdaiLOGICAL = NULL;
}
if( t_sdaiNUMBER ) {
delete t_sdaiNUMBER;
t_sdaiNUMBER = NULL;
}
}
void Registry::DeleteContents() {

View file

@ -21,13 +21,13 @@
// defined and created in Registry.cc
extern SC_CORE_EXPORT const TypeDescriptor * t_sdaiINTEGER;
extern SC_CORE_EXPORT const TypeDescriptor * t_sdaiREAL;
extern SC_CORE_EXPORT const TypeDescriptor * t_sdaiNUMBER;
extern SC_CORE_EXPORT const TypeDescriptor * t_sdaiSTRING;
extern SC_CORE_EXPORT const TypeDescriptor * t_sdaiBINARY;
extern SC_CORE_EXPORT const TypeDescriptor * t_sdaiBOOLEAN;
extern SC_CORE_EXPORT const TypeDescriptor * t_sdaiLOGICAL;
extern SC_CORE_EXPORT const TypeDescriptor * const t_sdaiINTEGER;
extern SC_CORE_EXPORT const TypeDescriptor * const t_sdaiREAL;
extern SC_CORE_EXPORT const TypeDescriptor * const t_sdaiNUMBER;
extern SC_CORE_EXPORT const TypeDescriptor * const t_sdaiSTRING;
extern SC_CORE_EXPORT const TypeDescriptor * const t_sdaiBINARY;
extern SC_CORE_EXPORT const TypeDescriptor * const t_sdaiBOOLEAN;
extern SC_CORE_EXPORT const TypeDescriptor * const t_sdaiLOGICAL;
typedef struct Hash_Table * HashTable;

View file

@ -83,9 +83,10 @@ Severity EntityAggregate::ReadValue( istream & in, ErrorDescriptor * err,
item->StrToVal( in, &errdesc, elem_type, insts, addFileId );
}
elem_type->AttrTypeName( buf );
// read up to the next delimiter and set errors if garbage is
// found before specified delims (i.e. comma and quote)
CheckRemainingInput( in, &errdesc, elem_type->AttrTypeName( buf ), ",)" );
CheckRemainingInput( in, &errdesc, buf, ",)" );
if( errdesc.severity() < SEVERITY_INCOMPLETE ) {
sprintf( errmsg, " index: %d\n", value_cnt );

View file

@ -81,9 +81,10 @@ Severity SelectAggregate::ReadValue( istream & in, ErrorDescriptor * err,
item->StrToVal( in, &errdesc, elem_type, insts, addFileId, currSch );
}
elem_type->AttrTypeName( buf );
// read up to the next delimiter and set errors if garbage is
// found before specified delims (i.e. comma and quote)
CheckRemainingInput( in, &errdesc, elem_type->AttrTypeName( buf ), ",)" );
CheckRemainingInput( in, &errdesc, buf, ",)" );
if( errdesc.severity() < SEVERITY_INCOMPLETE ) {
sprintf( errmsg, " index: %d\n", value_cnt );

View file

@ -19,8 +19,6 @@
#include <ExpDict.h>
#include "sc_memmgr.h"
const int Real_Num_Precision = REAL_NUM_PRECISION; // from STEPattribute.h
/**
* \file STEPaggregate.cc Functions for manipulating aggregate attributes
@ -37,7 +35,7 @@ STEPaggregate NilSTEPaggregate;
STEPaggregate::STEPaggregate() {
_null = 1;
_null = true;
}
STEPaggregate::~STEPaggregate() {
@ -72,7 +70,8 @@ Severity STEPaggregate::AggrValidLevel( const char * value, ErrorDescriptor * er
istringstream in( ( char * )value ); // sz defaults to length of s
ReadValue( in, err, elem_type, insts, addFileId, 0, 0 );
CheckRemainingInput( in, err, elem_type->AttrTypeName( buf ), tokenList );
elem_type->AttrTypeName( buf );
CheckRemainingInput( in, err, buf, tokenList );
if( optional && ( err->severity() == SEVERITY_INCOMPLETE ) ) {
err->severity( SEVERITY_NULL );
}
@ -90,7 +89,8 @@ Severity STEPaggregate::AggrValidLevel( istream & in, ErrorDescriptor * err,
}
ReadValue( in, err, elem_type, insts, addFileId, 0, 1 );
CheckRemainingInput( in, err, elem_type->AttrTypeName( buf ), tokenList );
elem_type->AttrTypeName( buf );
CheckRemainingInput( in, err, buf, tokenList );
if( optional && ( err->severity() == SEVERITY_INCOMPLETE ) ) {
err->severity( SEVERITY_NULL );
}
@ -121,7 +121,7 @@ Severity STEPaggregate::ReadValue( istream & in, ErrorDescriptor * err,
c = in.peek(); // does not advance input
if( in.eof() || c == '$' ) {
_null = 1;
_null = true;
err->GreaterSeverity( SEVERITY_INCOMPLETE );
return SEVERITY_INCOMPLETE;
}
@ -171,7 +171,8 @@ Severity STEPaggregate::ReadValue( istream & in, ErrorDescriptor * err,
// read up to the next delimiter and set errors if garbage is
// found before specified delims (i.e. comma and quote)
CheckRemainingInput( in, &errdesc, elem_type->AttrTypeName( buf ), ",)" );
elem_type->AttrTypeName( buf );
CheckRemainingInput( in, &errdesc, buf, ",)" );
if( errdesc.severity() < SEVERITY_INCOMPLETE ) {
sprintf( errmsg, " index: %d\n", value_cnt );
@@ -195,7 +196,7 @@ Severity STEPaggregate::ReadValue( istream & in, ErrorDescriptor * err,
}
}
if( c == ')' ) {
_null = 0;
_null = false;
} else { // expectation for end paren delim has not been met
err->GreaterSeverity( SEVERITY_INPUT_ERROR );
err->AppendToUserMsg( "Missing close paren for aggregate value" );
@@ -264,12 +265,12 @@ SingleLinkNode * STEPaggregate::NewNode() {
void STEPaggregate::AddNode( SingleLinkNode * n ) {
SingleLinkList::AppendNode( n );
_null = 0;
_null = false;
}
void STEPaggregate::Empty() {
SingleLinkList::Empty();
_null = 1;
_null = true;
}

View file

@@ -36,7 +36,7 @@ typedef STEPaggregate_ptr STEPaggregate_var;
class SC_CORE_EXPORT STEPaggregate : public SingleLinkList {
protected:
int _null;
bool _null;
protected:
@@ -47,7 +47,7 @@ class SC_CORE_EXPORT STEPaggregate : public SingleLinkList {
const char * currSch = 0 );
public:
int is_null() {
bool is_null() {
return _null;
}

View file

@@ -10,7 +10,6 @@
*/
#include <iomanip>
#include <sstream>
#include <string>
#include <read_func.h>
@@ -815,10 +814,8 @@ bool STEPattribute::is_null() const {
case REFERENCE_TYPE:
case GENERIC_TYPE:
cerr << "Internal error: " << __FILE__ << ": " << __LINE__
<< "\n" << _POC_ "\n";
return SEVERITY_BUG;
//should be an error, but this is a const function - no way to report.
return true;
case UNKNOWN_TYPE:
default:
return ( ptr.u -> is_null() );
@@ -1377,7 +1374,7 @@ const char * STEPattribute::TypeName() const {
if( _redefAttr ) {
return _redefAttr->TypeName();
}
return aDesc->TypeName();
return aDesc->TypeName().c_str();
}
BASE_TYPE STEPattribute::Type() const {

View file

@@ -26,8 +26,6 @@
*/
#define REAL_NUM_PRECISION 15
typedef double real;
class InstMgrBase;
class SDAI_Application_instance;
class STEPaggregate;
@@ -198,8 +196,8 @@ class SC_CORE_EXPORT STEPattribute {
////////////// Return info on attr
bool Nullable() const; // may this attribute be null?
bool is_null() const; // is this attribute null?
bool Nullable() const; ///< may this attribute be null?
bool is_null() const; ///< is this attribute null?
bool IsDerived() const {
return _derive;
}

View file

@@ -11,8 +11,8 @@
*/
#include <stdio.h> // to get the BUFSIZ #define
#include <STEPundefined.h>
#include <STEPattribute.h>
#include <STEPundefined.h>
#include "sc_memmgr.h"
/** \class SCLundefined
@@ -149,7 +149,7 @@ int SCLundefined::set_null() {
return 1;
}
int SCLundefined::is_null() {
bool SCLundefined::is_null() {
return ( val.empty() );
}

View file

@@ -35,7 +35,7 @@ class SC_CORE_EXPORT SCLundefined {
virtual void STEPwrite( ostream & out = cout );
int set_null();
int is_null();
bool is_null();
SCLundefined & operator= ( const SCLundefined & );
SCLundefined & operator= ( const char * str );
SCLundefined();

View file

@@ -39,8 +39,6 @@ class recursiveEntDescripIterator {
recursiveEntDescripIterator( const EntityDescriptor * t = 0 ): startEntity( t ), position( 0 ) {
//NOTE due to pure virtual functions, derived class constructor *must* call reset(t)
}
recursiveEntDescripIterator( ): startEntity( 0 ), position( 0 ) {
}
~recursiveEntDescripIterator( ) {
}

View file

@@ -47,7 +47,8 @@ const char * AttrDescriptor::AttrExprDefStr( std::string & s ) const {
s.append( "OPTIONAL " );
}
if( DomainType() ) {
s.append( DomainType()->AttrTypeName( buf ) );
DomainType()->AttrTypeName( buf );
s.append( buf );
}
return const_cast<char *>( s.c_str() );
}
@@ -104,14 +105,13 @@ AttrDescriptor::Type() const {
* right side of attr def
* NOTE this returns a \'const char * \' instead of an std::string
*/
const char * AttrDescriptor::TypeName() const {
const std::string AttrDescriptor::TypeName() const {
std::string buf;
if( _domainType ) {
return _domainType->AttrTypeName( buf );
} else {
return "";
_domainType->AttrTypeName( buf );
}
return buf;
}
/// an expanded right side of attr def

View file

@@ -99,11 +99,13 @@ class SC_CORE_EXPORT AttrDescriptor {
PrimitiveType AggrElemType() const;
const TypeDescriptor * AggrElemTypeDescriptor() const;
// The type of the attributes TypeDescriptor
/// The type of the attributes TypeDescriptor
PrimitiveType Type() const;
const char * TypeName() const; // right side of attr def
// an expanded right side of attr def
/// right side of attr def
const std::string TypeName() const;
/// an expanded right side of attr def
const char * ExpandedTypeName( std::string & s ) const;
int RefersToType() const {

View file

@@ -19,7 +19,8 @@ const char * Derived_attribute::AttrExprDefStr( std::string & s ) const {
s.append( Name() );
s.append( " : " );
if( DomainType() ) {
s.append( DomainType()->AttrTypeName( buf ) );
DomainType()->AttrTypeName( buf );
s.append( buf );
}
if( _initializer ) { // this is supposed to exist for a derived attribute.
s.append( " \n\t\t:= " );

View file

@@ -308,11 +308,11 @@ InstMgr::EntityKeywordCount( const char * name ) {
MgrNode * node;
SDAI_Application_instance * se;
int n = InstanceCount();
const char *pretty_name = PrettyTmpName( name );
for( int j = 0; j < n; ++j ) {
node = GetMgrNode( j );
se = node->GetApplication_instance();
if( !strcmp( se->EntityName(),
PrettyTmpName( name ) ) ) {
if( !strcmp( se->EntityName(), pretty_name ) ) {
++count;
}
}
@@ -357,13 +357,13 @@ SDAI_Application_instance *
InstMgr::GetApplication_instance( const char * entityKeyword, int starting_index ) {
MgrNode * node;
SDAI_Application_instance * se;
const char *pretty_name = PrettyTmpName( entityKeyword );
int count = InstanceCount();
for( int j = starting_index; j < count; ++j ) {
node = GetMgrNode( j );
se = node->GetApplication_instance();
if( !strcmp( se->EntityName(),
PrettyTmpName( entityKeyword ) ) ) {
if( !strcmp( se->EntityName(), pretty_name ) ) {
return se;
}
}
@@ -374,13 +374,13 @@ SDAI_Application_instance *
InstMgr::GetSTEPentity( const char * entityKeyword, int starting_index ) {
MgrNode * node;
SDAI_Application_instance * se;
const char *pretty_name = PrettyTmpName( entityKeyword );
int count = InstanceCount();
for( int j = starting_index; j < count; ++j ) {
node = GetMgrNode( j );
se = node->GetApplication_instance();
if( !strcmp( se->EntityName(),
PrettyTmpName( entityKeyword ) ) ) {
if( !strcmp( se->EntityName(), pretty_name ) ) {
return se;
}
}

View file

@@ -10,7 +10,8 @@ const char * Inverse_attribute::AttrExprDefStr( std::string & s ) const {
s.append( "OPTIONAL " );
}
if( DomainType() ) {
s.append( DomainType()->AttrTypeName( buf ) );
DomainType()->AttrTypeName( buf );
s.append( buf );
}
s.append( " FOR " );
s.append( _inverted_attr_id );

View file

@@ -19,20 +19,21 @@
class GenericNode;
class DisplayNode;
#include <sdai.h>
//class SDAI_Application_instance;
#include <gennode.h>
#include <gennodelist.h>
//#include <gennode.inline.h>
#include <editordefines.h>
#include <sc_nullptr.h>
class InstMgr;
class SC_CORE_EXPORT MgrNodeBase : public GenericNode {
public:
virtual inline SDAI_Application_instance * GetSTEPentity() {
abort();
return nullptr;
};
virtual ~MgrNodeBase() {};
};

View file

@@ -213,6 +213,8 @@ typedef SDAI_Model_contents_ptr SDAI_Model_contents_var;
// ENTITY
extern SC_CORE_EXPORT SDAI_Application_instance NilSTEPentity;
//FIXME why 3 of these? remove 2?
//in libexpress, ENTITY_NULL is also used but refers to something different
#define ENTITY_NULL &NilSTEPentity
#define NULL_ENTITY &NilSTEPentity
#define S_ENTITY_NULL &NilSTEPentity

View file

@@ -20,8 +20,17 @@
#include "sdaiApplication_instance.h"
#include "superInvAttrIter.h"
#include <sc_nullptr.h>
SDAI_Application_instance NilSTEPentity;
bool isNilSTEPentity( const SDAI_Application_instance * ai ) {
if( ai && ai == &NilSTEPentity ) {
return true;
}
return false;
}
/**************************************************************//**
** \file sdaiApplication_instance.cc Functions for manipulating entities
**
@@ -163,8 +172,11 @@ void SDAI_Application_instance::AppendMultInstance( SDAI_Application_instance *
}
}
// BUG implement this -- FIXME function is never used
const EntityDescriptor* SDAI_Application_instance::getEDesc() const {
return eDesc;
}
// BUG implement this -- FIXME function is never used
SDAI_Application_instance * SDAI_Application_instance::GetMiEntity( char * entName ) {
std::string s1, s2;
@@ -944,7 +956,7 @@ const SDAI_Application_instance::iAMap_t::value_type SDAI_Application_instance::
}
iAstruct z;
memset( &z, 0, sizeof z );
iAMap_t::value_type nil( NULL, z );
iAMap_t::value_type nil( (Inverse_attribute *) nullptr, z );
return nil;
}

View file

@@ -1,5 +1,5 @@
#ifndef STEPENTITY_H
#define STEPENTITY_H 1
#define STEPENTITY_H
/*
* NIST STEP Core Class Library
@@ -12,15 +12,15 @@
* and is not subject to copyright.
*/
#include <sc_export.h>
#include <map>
#include <iostream>
//class STEPinvAttrList;
#include <sc_export.h>
#include <sdaiDaObject.h>
class EntityAggregate;
class Inverse_attribute;
typedef struct {
// bool aggregate;
union {
EntityAggregate * a;
SDAI_Application_instance * i;
@@ -43,7 +43,13 @@ class SC_CORE_EXPORT SDAI_Application_instance : public SDAI_DAObject_SDAI {
public: //TODO make these private?
STEPattributeList attributes;
int STEPfile_id; //TODO are neg values ever used (signalling)? if not, make unsigned?
/* see mgrnode.cc where -1 is returned when there is no sdai
* instance. might be possible to treat 0 for this purpose
* instead of negative so the ID's can become unsigned.
*/
int STEPfile_id;
ErrorDescriptor _error;
std::string p21Comment;
@@ -72,9 +78,7 @@ class SC_CORE_EXPORT SDAI_Application_instance : public SDAI_DAObject_SDAI {
void setEDesc( const EntityDescriptor * const ed ) {
eDesc = ed;
}
const EntityDescriptor * getEDesc() const {
return eDesc;
}
const EntityDescriptor * getEDesc() const;
void StepFileId( int fid ) {
STEPfile_id = fid;
}
@@ -190,5 +194,6 @@ class SC_CORE_EXPORT SDAI_Application_instance : public SDAI_DAObject_SDAI {
// current style of CORBA handles for Part 23 - NOTE - used for more than CORBA
typedef SDAI_Application_instance * SDAI_Application_instance_ptr;
typedef SDAI_Application_instance_ptr SDAI_Application_instance_var;
SC_CORE_EXPORT bool isNilSTEPentity( const SDAI_Application_instance * ai );
#endif
#endif //STEPENTITY_H

View file

@@ -592,11 +592,11 @@ const char * SDAI_Select::STEPwrite( std::string & s, const char * currSch ) co
return const_cast<char *>( s.c_str() );
}
int SDAI_Select::set_null() {
bool SDAI_Select::set_null() {
nullify();
return 1;
return true;
}
int SDAI_Select::is_null() {
bool SDAI_Select::is_null() {
return ( !exists() );
}

View file

@@ -95,8 +95,9 @@ class SC_CORE_EXPORT SDAI_Select {
//linux has a regression if the pure virtual operator= is commented out
virtual SDAI_Select & operator =( const SDAI_Select & other );
int set_null();
int is_null();
//FIXME set_null always returns true. why not void?!
bool set_null();
bool is_null();
}; /** end class **/

View file

@@ -24,6 +24,7 @@ endfunction(add_stepcore_test name libs)
add_stepcore_test("SupertypesIterator" "stepcore;steputils;stepeditor;stepdai;base") #all these libs are necessary?
add_stepcore_test("operators_STEPattribute" "stepcore;steputils;stepeditor;stepdai;base")
add_stepcore_test("operators_SDAI_Select" "stepcore;steputils;stepeditor;stepdai;base")
add_stepcore_test("null_attr" "stepcore;steputils;stepeditor;stepdai;base")
# Local Variables:
# tab-width: 8

View file

@@ -0,0 +1,24 @@
#include <ExpDict.h>
#include <STEPattribute.h>
#include <sdaiString.h>
AttrDescriptor *ad = 0;
EntityDescriptor *ed = 0;
TypeDescriptor *td;
Schema *sch = 0;
int main () {
SDAI_String _description;
sch = new Schema( "Ifc2x3" );
td = new TypeDescriptor( "Ifctext", sdaiSTRING, sch, "STRING" );
ed = new EntityDescriptor( "Ifcroot", sch, LTrue, LFalse );
ad = new AttrDescriptor( "description", td, LTrue, LFalse, AttrType_Explicit, *ed );
ed->AddExplicitAttr( ad );
STEPattribute *a = new STEPattribute(*ad, &_description);
a -> set_null();
delete a;
//delete ad; //this is deleted in EntityDescriptor dtor
delete ed;
delete td;
delete sch;
}

View file

@@ -122,10 +122,13 @@ void TypeDescriptor::addAltName( const char * schnm, const char * newnm ) {
}
}
const char * TypeDescriptor::AttrTypeName( std::string & buf, const char * schnm ) const {
std::string sstr;
buf = Name( schnm ) ? StrToLower( Name( schnm ), sstr ) : _description;
return const_cast<char *>( buf.c_str() );
void TypeDescriptor::AttrTypeName( std::string & buf, const char * schnm ) const {
const char * sn = Name( schnm );
if( sn ) {
StrToLower( sn , buf );
} else {
buf = _description;
}
}
const char * TypeDescriptor::GenerateExpress( std::string & buf ) const {

View file

@@ -97,43 +97,43 @@ class SC_CORE_EXPORT TypeDescriptor {
protected:
// the name of the type (see above)
//
// NOTE - memory is not allocated for this, or for _description
// below. It is assumed that at creation, _name will be made
// to point to a static location in memory. The exp2cxx
// generated code, for example, places a literal string in its
// TypeDesc constructor calls. This creates a location in me-
// mory static throughout the lifetime of the calling program.
/// the name of the type (see above)
///
/// NOTE - memory is not allocated for this, or for _description
/// below. It is assumed that at creation, _name will be made
/// to point to a static location in memory. The exp2cxx
/// generated code, for example, places a literal string in its
/// TypeDesc constructor calls. This creates a location in me-
/// mory static throughout the lifetime of the calling program.
const char * _name ;
// an alternate name of type - such as one given by a different
// schema which USEs/ REFERENCEs this. (A complete list of
// alternate names is stored in altNames below. _altname pro-
// vides storage space for the currently used one.)
/// an alternate name of type - such as one given by a different
/// schema which USEs/ REFERENCEs this. (A complete list of
/// alternate names is stored in altNames below. _altname pro-
/// vides storage space for the currently used one.)
char _altname[BUFSIZ];
// contains list of renamings of type - used by other schemas
// which USE/ REFERENCE this
/// contains list of renamings of type - used by other schemas
/// which USE/ REFERENCE this
const SchRename * altNames;
// the type of the type (see above).
// it is an enum see file clstepcore/baseType.h
/// the type of the type (see above).
/// it is an enum see file clstepcore/baseType.h
PrimitiveType _fundamentalType;
const Schema * _originatingSchema;
// further describes the type (see above)
// most often (or always) points at a subtype.
/// further describes the type (see above)
/// most often (or always) points at a subtype.
const TypeDescriptor * _referentType;
// Express file description (see above)
// e.g. the right side of an Express TYPE stmt
// (See note above by _name regarding memory allocation.)
/// Express file description (see above)
/// e.g. the right side of an Express TYPE stmt
/// (See note above by _name regarding memory allocation.)
const char * _description;
public:
// a Where_rule may contain only a comment
/// a Where_rule may contain only a comment
Where_rule__list_var _where_rules; // initially a null pointer
Where_rule__list_var & where_rules_() {
@@ -145,8 +145,8 @@ class SC_CORE_EXPORT TypeDescriptor {
}
protected:
// Functions used to check the current name of the type (may
// != _name if altNames has diff name for current schema).
/// Functions used to check the current name of the type (may
/// != _name if altNames has diff name for current schema).
bool PossName( const char * ) const;
bool OurName( const char * ) const;
bool AltName( const char * ) const;
@@ -161,34 +161,34 @@ class SC_CORE_EXPORT TypeDescriptor {
virtual const char * GenerateExpress( std::string & buf ) const;
// The name of this type. If schnm != NULL, the name we're
// referred to by schema schnm (may be diff name in our alt-
// names list (based on schnm's USE/REF list)).
/// The name of this type. If schnm != NULL, the name we're
/// referred to by schema schnm (may be diff name in our alt-
/// names list (based on schnm's USE/REF list)).
const char * Name( const char * schnm = NULL ) const;
// The name that would be found on the right side of an
// attribute definition. In the case of a type defined like
// TYPE name = STRING END_TYPE;
// with attribute definition employee_name : name;
// it would be the _name member variable. If it was a type
// defined in an attribute it will be the _description
// member variable since _name will be null. e.g. attr. def.
// project_names : ARRAY [1..10] name;
const char * AttrTypeName( std::string & buf, const char * schnm = NULL ) const;
/// The name that would be found on the right side of an
/// attribute definition. In the case of a type defined like
/// TYPE name = STRING END_TYPE;
/// with attribute definition employee_name : name;
/// it would be the _name member variable. If it was a type
/// defined in an attribute it will be the _description
/// member variable since _name will be null. e.g. attr. def.
/// project_names : ARRAY [1..10] name;
void AttrTypeName( std::string & buf, const char * schnm = NULL ) const;
// Linked link of alternate names for the type:
/// Linked link of alternate names for the type:
const SchRename * AltNameList() const {
return altNames;
}
// This is a fully expanded description of the type.
// This returns a string like the _description member variable
// except it is more thorough of a description where possible
// e.g. if the description contains a TYPE name it will also
// be explained.
/// This is a fully expanded description of the type.
/// This returns a string like the _description member variable
/// except it is more thorough of a description where possible
/// e.g. if the description contains a TYPE name it will also
/// be explained.
const char * TypeString( std::string & s ) const;
// This TypeDescriptor's type
/// This TypeDescriptor's type
PrimitiveType Type() const {
return _fundamentalType;
}
@@ -196,27 +196,27 @@ class SC_CORE_EXPORT TypeDescriptor {
_fundamentalType = type;
}
// This is the underlying Express base type of this type. It will
// be the type of the last TypeDescriptor following the
// _referentType member variable pointers. e.g.
// TYPE count = INTEGER;
// TYPE ref_count = count;
// TYPE count_set = SET OF ref_count;
// each of the above will generate a TypeDescriptor and for
// each one, PrimitiveType BaseType() will return INTEGER_TYPE.
// TypeDescriptor *BaseTypeDescriptor() returns the TypeDescriptor
// for Integer.
/// This is the underlying Express base type of this type. It will
/// be the type of the last TypeDescriptor following the
/// _referentType member variable pointers. e.g.
/// TYPE count = INTEGER;
/// TYPE ref_count = count;
/// TYPE count_set = SET OF ref_count;
/// each of the above will generate a TypeDescriptor and for
/// each one, PrimitiveType BaseType() will return INTEGER_TYPE.
/// TypeDescriptor *BaseTypeDescriptor() returns the TypeDescriptor
/// for Integer.
PrimitiveType BaseType() const;
const TypeDescriptor * BaseTypeDescriptor() const;
const char * BaseTypeName() const;
// the first PrimitiveType that is not REFERENCE_TYPE (the first
// TypeDescriptor *_referentType that does not have REFERENCE_TYPE
// for it's fundamentalType variable). This would return the same
// as BaseType() for fundamental types. An aggregate type
// would return AGGREGATE_TYPE then you could find out the type of
// an element by calling AggrElemType(). Select types
// would work the same?
/// the first PrimitiveType that is not REFERENCE_TYPE (the first
/// TypeDescriptor *_referentType that does not have REFERENCE_TYPE
/// for it's fundamentalType variable). This would return the same
/// as BaseType() for fundamental types. An aggregate type
/// would return AGGREGATE_TYPE then you could find out the type of
/// an element by calling AggrElemType(). Select types
/// would work the same?
PrimitiveType NonRefType() const;
const TypeDescriptor * NonRefTypeDescriptor() const;
@@ -232,7 +232,7 @@ class SC_CORE_EXPORT TypeDescriptor {
_fundamentalType = ftype;
}
// The TypeDescriptor for the type this type is based on
/// The TypeDescriptor for the type this type is based on
const TypeDescriptor * ReferentType() const {
return _referentType;
}
@@ -255,9 +255,9 @@ class SC_CORE_EXPORT TypeDescriptor {
}
}
// A description of this type's type. Basically you
// get the right side of a TYPE statement minus END_TYPE.
// For base type TypeDescriptors it is the same as _name.
/// A description of this type's type. Basically you
/// get the right side of a TYPE statement minus END_TYPE.
/// For base type TypeDescriptors it is the same as _name.
const char * Description() const {
return _description;
}
@@ -280,9 +280,8 @@ class SC_CORE_EXPORT TypeDescriptor {
return ( CurrName( n, schNm ) ? this : 0 );
}
bool CurrName( const char *, const char * = 0 ) const;
/// Adds an additional name, newnm, to be use when schema schnm is USE/REFERENCE'ing us (added to altNames).
void addAltName( const char * schnm, const char * newnm );
// Adds an additional name, newnm, to be use when schema schnm
// is USE/REFERENCE'ing us (added to altNames).
};
#endif //TYPEDESCRIPTOR_H

View file

@@ -318,3 +318,7 @@ Severity CheckRemainingInput( istream & in, ErrorDescriptor * err,
}
return err->severity();
}
Severity CheckRemainingInput( std::istream & in, ErrorDescriptor * err, const std::string typeName, const char * tokenList ) {
return CheckRemainingInput( in, err, typeName.c_str(), tokenList );
}

View file

@@ -39,10 +39,9 @@ SC_UTILS_EXPORT char * EntityClassName( char * oldname );
SC_UTILS_EXPORT bool StrEndsWith( const std::string & s, const char * suffix );
SC_UTILS_EXPORT std::string GetLiteralStr( istream & in, ErrorDescriptor * err );
extern SC_UTILS_EXPORT Severity CheckRemainingInput
( istream & in, ErrorDescriptor * err,
extern SC_UTILS_EXPORT Severity CheckRemainingInput( std::istream & in, ErrorDescriptor * err,
const char * typeName, // used in error message
const char * tokenList ); // e.g. ",)"
extern SC_UTILS_EXPORT Severity CheckRemainingInput( std::istream & in, ErrorDescriptor * err, const std::string typeName, const char * tokenList );
#endif

View file

@@ -131,7 +131,7 @@ void ErrorDescriptor::PrependToUserMsg( const char * msg ) {
}
void ErrorDescriptor::AppendToUserMsg( const char c ) {
_userMsg.append( &c );
_userMsg.push_back( c );
}
void ErrorDescriptor::AppendToUserMsg( const char * msg ) {
@@ -147,7 +147,7 @@ void ErrorDescriptor::PrependToDetailMsg( const char * msg ) {
}
void ErrorDescriptor::AppendToDetailMsg( const char c ) {
_detailMsg.append( &c );
_detailMsg.push_back( c );
}
void ErrorDescriptor::AppendToDetailMsg( const char * msg ) {

View file

@@ -1,6 +1,4 @@
/* "$Id: sc_hash.cc,v 3.0.1.2 1997/11/05 22:33:50 sauderd DP3.1 $"; */
/*
/** \file sc_hash.cc
* Dynamic hashing, after CACM April 1988 pp 446-457, by Per-Ake Larson.
* Coded into C, with minor code improvements, and with hsearch(3) interface,
* by ejp@ausmelb.oz, Jul 26, 1988: 13:16;
@@ -13,48 +11,31 @@
#include <string.h>
#include <sc_memmgr.h>
/*************/
/* constants */
/*************/
#define HASH_NULL (Hash_TableP)NULL
#define SEGMENT_SIZE 256
#define SEGMENT_SIZE_SHIFT 8 /* log2(SEGMENT_SIZE) */
#define PRIME1 37
#define PRIME2 1048583
#define MAX_LOAD_FACTOR 5
/************/
/* typedefs */
/************/
typedef unsigned long Address;
/******************************/
/* macro function definitions */
/******************************/
/*
** Fast arithmetic, relying on powers of 2
*/
#define MUL(x,y) ((x) << (y##_SHIFT))
#define DIV(x,y) ((x) >> (y##_SHIFT))
#define MOD(x,y) ((x) & ((y)-1))
#define SC_HASH_Table_new() new Hash_Table
#define SC_HASH_Table_destroy(x) delete x
#define SC_HASH_Element_new() new Element
#define SC_HASH_Element_destroy(x) delete x
/* Macros for fast arithmetic, relying on powers of 2 */
#define MUL(x,y) ((x) << (y##_SHIFT))
#define DIV(x,y) ((x) >> (y##_SHIFT))
#define MOD(x,y) ((x) & ((y)-1))
/* typedefs */
typedef unsigned long Address;
typedef struct Element * ElementP;
typedef struct Hash_Table * Hash_TableP;
/*
** Internal routines
*/
/* Internal routines */
Address SC_HASHhash( char *, Hash_TableP );
static void SC_HASHexpand_table( Hash_TableP );
@@ -62,8 +43,8 @@ static void SC_HASHexpand_table( Hash_TableP );
static long HashAccesses, HashCollisions;
# endif
void *
SC_HASHfind( Hash_TableP t, char * s ) {
/// find entry in given hash table
void * SC_HASHfind( Hash_TableP t, char * s ) {
struct Element e;
struct Element * ep;
@@ -73,8 +54,8 @@ SC_HASHfind( Hash_TableP t, char * s ) {
return( ep ? ep->data : 0 );
}
void
SC_HASHinsert( Hash_TableP t, char * s, void * data ) {
/// insert entry into given hash table
void SC_HASHinsert( Hash_TableP t, char * s, void * data ) {
struct Element e, *e2;
e.key = s;
@@ -82,12 +63,12 @@ SC_HASHinsert( Hash_TableP t, char * s, void * data ) {
e.symbol = 0;
e2 = SC_HASHsearch( t, &e, HASH_INSERT );
if( e2 ) {
printf( "Redeclaration of %s\n", s );
fprintf( stderr, "%s: Redeclaration of %s\n", __FUNCTION__, s );
}
}
Hash_TableP
SC_HASHcreate( unsigned count ) {
/// create a hash table
Hash_TableP SC_HASHcreate( unsigned count ) {
unsigned int i;
Hash_TableP table;
@@ -138,10 +119,9 @@ SC_HASHcreate( unsigned count ) {
return( table );
}
/* initialize pointer to beginning of hash table so we can step through it */
/* on repeated calls to HASHlist - DEL */
void
SC_HASHlistinit( Hash_TableP table, HashEntry * he ) {
/** initialize pointer to beginning of hash table so we can
* step through it on repeated calls to HASHlist - DEL */
void SC_HASHlistinit( Hash_TableP table, HashEntry * he ) {
he->i = he->j = 0;
he->p = 0;
he->table = table;
@@ -149,8 +129,7 @@ SC_HASHlistinit( Hash_TableP table, HashEntry * he ) {
he->e = 0;
}
void
SC_HASHlistinit_by_type( Hash_TableP table, HashEntry * he, char type ) {
void SC_HASHlistinit_by_type( Hash_TableP table, HashEntry * he, char type ) {
he->i = he->j = 0;
he->p = 0;
he->table = table;
@@ -158,9 +137,8 @@ SC_HASHlistinit_by_type( Hash_TableP table, HashEntry * he, char type ) {
he->e = 0;
}
/* provide a way to step through the hash */
struct Element *
SC_HASHlist( HashEntry * he ) {
/** provide a way to step through the hash */
struct Element * SC_HASHlist( HashEntry * he ) {
int i2 = he->i;
int j2 = he->j;
struct Element ** s;
@@ -202,8 +180,8 @@ SC_HASHlist( HashEntry * he ) {
return( he->e );
}
void
SC_HASHdestroy( Hash_TableP table ) {
/// destroy all elements in given table, then the table itself
void SC_HASHdestroy( Hash_TableP table ) {
struct Element ** s;
struct Element * p, *q;
@@ -226,16 +204,13 @@ SC_HASHdestroy( Hash_TableP table ) {
}
SC_HASH_Table_destroy( table );
# if defined(HASH_STATISTICS) && defined(DEBUG)
fprintf( stderr,
"[hdestroy] Accesses %ld Collisions %ld\n",
HashAccesses,
HashCollisions );
fprintf( stderr, "[hdestroy] Accesses %ld Collisions %ld\n", HashAccesses, HashCollisions );
# endif
}
}
struct Element *
SC_HASHsearch( Hash_TableP table, const struct Element * item, Action action ) {
/// search table for 'item', perform 'action' (find/insert/delete)
struct Element * SC_HASHsearch( Hash_TableP table, const struct Element * item, Action action ) {
Address h;
struct Element ** CurrentSegment;
int SegmentIndex;
@@ -317,10 +292,9 @@ SC_HASHsearch( Hash_TableP table, const struct Element * item, Action action ) {
** Internal routines
*/
Address
SC_HASHhash( char * Key, Hash_TableP table ) {
Address SC_HASHhash( char * Key, Hash_TableP table ) {
Address h, address;
register unsigned char * k = ( unsigned char * )Key;
unsigned char * k = ( unsigned char * )Key;
h = 0;
/*
@@ -337,9 +311,7 @@ SC_HASHhash( char * Key, Hash_TableP table ) {
return( address );
}
static
void
SC_HASHexpand_table( Hash_TableP table ) {
static void SC_HASHexpand_table( Hash_TableP table ) {
struct Element ** OldSegment, **NewSegment;
struct Element * Current, **Previous, **LastOfNew;
@@ -407,7 +379,7 @@ SC_HASHexpand_table( Hash_TableP table ) {
}
}
/* following code is for testing hash package */
/* for testing sc_hash */
#ifdef HASHTEST
struct Element e1, e2, e3, *e;
struct Hash_Table * t;

View file

@@ -39,7 +39,9 @@ include_directories(
SC_ADDEXEC(exp2cxx "${exp2cxx_SOURCES}" "libexppp;express;base")
add_dependencies(exp2cxx version_string)
if(NOT SC_IS_SUBBUILD AND SC_GIT_VERSION)
add_dependencies(exp2cxx version_string)
endif(NOT SC_IS_SUBBUILD AND SC_GIT_VERSION)
if(SC_ENABLE_TESTING)
add_subdirectory(test)

View file

@@ -126,8 +126,7 @@ void DataMemberPrintAttr( Entity entity, Variable a, FILE * file ) {
ctype = TYPEget_ctype( VARget_type( a ) );
generate_attribute_name( a, attrnm );
if( !strcmp( ctype, "SCLundefined" ) ) {
printf( "WARNING: in entity %s, ", ENTITYget_name( entity ) );
printf( " the type for attribute %s is not fully implemented\n", attrnm );
fprintf( stderr, "Warning: in entity %s, the type for attribute %s is not fully implemented\n", ENTITYget_name( entity ), attrnm );
}
if( TYPEis_entity( VARget_type( a ) ) ) {
fprintf( file, " SDAI_Application_instance_ptr _%s;", attrnm );

View file

@@ -36,7 +36,7 @@ FILE * FILEcreate( const char * filename ) {
const char * fn;
if( ( file = fopen( filename, "w" ) ) == NULL ) {
printf( "**Error in SCHEMAprint: unable to create file %s ** \n", filename );
fprintf( stderr, "**Error in SCHEMAprint: unable to create file %s ** \n", filename );
return ( NULL );
}
@@ -262,7 +262,7 @@ const char * GetTypeDescriptorName( Type t ) {
case generic_:
return "TypeDescriptor";
default:
printf( "Error in %s, line %d: type %d not handled by switch statement.", __FILE__, __LINE__, TYPEget_body( t )->type );
fprintf( stderr, "Error at %s:%d - type %d not handled by switch statement.", __FILE__, __LINE__, TYPEget_body( t )->type );
abort();
}
/* NOTREACHED */
@@ -318,9 +318,7 @@ Entity ENTITYput_superclass( Entity entity ) {
ignore = e;
}
if( ignore ) {
printf( "WARNING: multiple inheritance not implemented.\n" );
printf( "\tin ENTITY %s\n\tSUPERTYPE %s IGNORED.\n\n",
ENTITYget_name( entity ), ENTITYget_name( e ) );
fprintf( stderr, "WARNING: multiple inheritance not implemented. In ENTITY %s, SUPERTYPE %s ignored.\n", ENTITYget_name( entity ), ENTITYget_name( e ) );
}
LISTod;
}

View file

@@ -15,6 +15,7 @@ N350 ( August 31, 1993 ) of ISO 10303 TC184/SC4/WG7.
/* #define NEWDICT */
#include <sc_memmgr.h>
#include <path2str.h>
#include <stdlib.h>
#include <assert.h>
#include <sc_mkdir.h>
@@ -42,6 +43,10 @@ int isMultiDimAggregateType( const Type t );
void Type_Description( const Type, char * );
void TypeBody_Description( TypeBody body, char * buf );
/** write representation of expression to end of buf
*
* TODO: add buflen arg and check for overflow
*/
void strcat_expr( Expression e, char * buf ) {
if( e == LITERAL_INFINITY ) {
strcat( buf, "?" );
@@ -64,7 +69,10 @@ void strcat_expr( Expression e, char * buf ) {
}
}
/** print t's bounds to end of buf */
/** print t's bounds to end of buf
*
* TODO: add buflen arg and check for overflow
*/
void strcat_bounds( TypeBody b, char * buf ) {
if( !b->upper ) {
return;
@@ -93,7 +101,6 @@
** Change Date: 5/22/91 CD
******************************************************************/
const char * EnumCElementName( Type type, Expression expr ) {
static char buf [BUFSIZ];
sprintf( buf, "%s__",
EnumName( TYPEget_name( type ) ) );
@@ -103,7 +110,6 @@ const char * EnumCElementName( Type type, Expression expr ) {
}
char * CheckEnumSymbol( char * s ) {
static char b [BUFSIZ];
if( strcmp( s, "sdaiTRUE" )
&& strcmp( s, "sdaiFALSE" )
@@ -114,8 +120,7 @@ char * CheckEnumSymbol( char * s ) {
} else {
strcpy( b, s );
strcat( b, "_" );
printf( "** warning: the enumerated value %s is already being used ", s );
printf( " and has been changed to %s **\n", b );
fprintf( stderr, "Warning in %s: the enumerated value %s is already being used and has been changed to %s\n", __FUNCTION__, s, b );
return ( b );
}
}
@@ -362,6 +367,9 @@ void TYPEPrint( const Type type, FILES *files, Schema schema ) {
* Prints a bunch of lines for enumeration creation functions (i.e., "cre-
* ate_SdaiEnum1()"). Since this is done both for an enum and for "copies"
* of it (when "TYPE enum2 = enum1"), I placed this code in a separate fn.
*
* NOTE - "Print ObjectStore Access Hook function" comment seen at one of
* the calls seems to imply it's ObjectStore specific...
*/
static void printEnumCreateHdr( FILE * inc, const Type type ) {
const char * nm = TYPEget_ctype( type );
@@ -1279,7 +1287,7 @@ char * TYPEget_express_type( const Type t ) {
/* default returns undefined */
printf( "WARNING2: type %s is undefined\n", TYPEget_name( t ) );
fprintf( stderr, "Warning in %s: type %s is undefined\n", __FUNCTION__, TYPEget_name( t ) );
return ( "SCLundefined" );
}
@@ -1305,7 +1313,7 @@ void AGGRprint_bound( FILE * header, FILE * impl, const char * var_name, const c
fprintf( header, " break;\n" );
fprintf( header, " }\n" );
fprintf( header, " }\n" );
fprintf( header, " assert( a->NonRefType() == INTEGER_TYPE && \"Error in schema or in exp2cxx at %s:%d %s\" );\n", __FILE__,
fprintf( header, " assert( a->NonRefType() == INTEGER_TYPE && \"Error in schema or in exp2cxx at %s:%d %s\" );\n", path2str( __FILE__ ),
__LINE__, "(incorrect assumption of integer type?) Please report error to STEPcode: scl-dev at groups.google.com." );
fprintf( header, " return *( a->Integer() );\n" ); /* always an integer? if not, would need to translate somehow due to return type... */
fprintf( header, "}\n" );
@@ -1325,8 +1333,7 @@ void AGGRprint_bound( FILE * header, FILE * impl, const char * var_name, const c
*/
void AGGRprint_init( FILE * header, FILE * impl, const Type t, const char * var_name, const char * aggr_name ) {
if( !header ) {
fprintf( stderr, "ERROR at %s:%d! 'header' is null for aggregate %s.",
__FILE__, __LINE__, t->symbol.name );
fprintf( stderr, "ERROR at %s:%d! 'header' is null for aggregate %s.", __FILE__, __LINE__, t->symbol.name );
abort();
}
if( !TYPEget_head( t ) ) {

View file

@@ -83,10 +83,9 @@
extern void print_fedex_version( void );
static void exp2cxx_usage( void ) {
fprintf( stderr, "usage: %s [-s|-S] [-a|-A] [-c|-C] [-L] [-v] [-d # | -d 9 -l nnn -u nnn] [-n] [-p <object_type>] {-w|-i <warning>} express_file\n", EXPRESSprogram_name );
fprintf( stderr, "usage: %s [-s|-S] [-a|-A] [-L] [-v] [-d # | -d 9 -l nnn -u nnn] [-n] [-p <object_type>] {-w|-i <warning>} express_file\n", EXPRESSprogram_name );
fprintf( stderr, "where\t-s or -S uses only single inheritance in the generated C++ classes\n" );
fprintf( stderr, "\t-a or -A generates the early bound access functions for entity classes the old way (without an underscore)\n" );
fprintf( stderr, "\t-c or -C generates C++ classes for use with CORBA (Orbix)\n" );
fprintf( stderr, "\t-L prints logging code in the generated C++ classes\n" );
fprintf( stderr, "\t-v produces the version description below\n" );
fprintf( stderr, "\t-d turns on debugging (\"-d 0\" describes this further\n" );
@@ -132,7 +131,7 @@ void EXPRESSinit_init( void ) {
EXPRESSsucceed = success;
EXPRESSgetopt = Handle_FedPlus_Args;
/* so the function getopt (see man 3 getopt) will not report an error */
strcat( EXPRESSgetopt_options, "sSlLcCaA" );
strcat( EXPRESSgetopt_options, "sSlLaA" );
ERRORusage_function = exp2cxx_usage;
}

View file

@@ -733,8 +733,17 @@ void TYPEselect_lib_print_part_one( const Type type, FILE * f,
if( isAggregateType( t ) && t->u.type->body->base ) {
fprintf( f, " _%s = new %s;\n", SEL_ITEMget_dmname( t ), TYPEget_utype( t ) );
}
} LISTod
/* above misses some attr's that are initialized in part 1 ctor below.
* hopefully this won't add duplicates...
*/
LISTdo( SEL_TYPEget_items( type ), t, Type ) {
if( ( TYPEis_entity( t ) ) || ( !utype_member( dups, t, 1 ) ) ) {
if( isAggregateType( t ) && ( t->u.type->body->base ) ) {
fprintf( f, " _%s = new %s;\n", SEL_ITEMget_dmname( t ), TYPEget_utype( t ) );
}
LISTod;
}
} LISTod
fprintf( f, " nullify();\n" );
fprintf( f, "#ifdef SC_LOGGING\n if( *logStream )\n {\n" );
fprintf( f, "// *logStream << \"DAVE ERR exiting %s constructor.\" << std::endl;\n", n );
@@ -743,9 +752,8 @@ void TYPEselect_lib_print_part_one( const Type type, FILE * f,
/* constructors with underlying types */
fprintf( f, "\n // part 1\n" );
LISTdo( SEL_TYPEget_items( type ), t, Type )
if( ( TYPEis_entity( t ) )
|| ( !utype_member( dups, t, 1 ) ) ) {
LISTdo( SEL_TYPEget_items( type ), t, Type ) {
if( ( TYPEis_entity( t ) ) || ( !utype_member( dups, t, 1 ) ) ) {
/* if there is not more than one underlying type that maps to the same
* base type print out the constructor using the type from the TYPE
* statement as the underlying type. Also skip enums/sels which are
@@ -760,15 +768,12 @@ void TYPEselect_lib_print_part_one( const Type type, FILE * f,
been to make it all pretty - DAR. ;-) */
fprintf( f, "const SelectTypeDescriptor *typedescript )\n" );
fprintf( f, " : " BASE_SELECT " (typedescript, %s)",
TYPEtd_name( t ) );
fprintf( f, " : " BASE_SELECT " (typedescript, %s)", TYPEtd_name( t ) );
initSelItems( type, f );
fprintf( f, "\n{\n" );
fprintf( f, "#ifdef SC_LOGGING\n if( *logStream )\n {\n" );
fprintf( f,
" *logStream << \"DAVE ERR entering %s constructor.\""
" << std::endl;\n", n );
fprintf( f, " }\n#endif\n" );
fprintf( f, "#ifdef SC_LOGGING\n if( *logStream ) { " );
fprintf( f, "*logStream << \"DAVE ERR entering %s constructor.\" << std::endl; }\n", n );
fprintf( f, "#endif\n" );
if( isAggregateType( t ) ) {
if( t->u.type->body->base ) {
@@ -779,16 +784,14 @@ void TYPEselect_lib_print_part_one( const Type type, FILE * f,
} else {
fprintf( f, " _%s = o;\n", SEL_ITEMget_dmname( t ) );
}
fprintf( f, "#ifdef SC_LOGGING\n if( *logStream )\n {\n" );
fprintf( f,
"// *logStream << \"DAVE ERR exiting %s constructor.\""
" << std::endl;\n", n );
fprintf( f, " }\n#endif\n" );
fprintf( f, "#ifdef SC_LOGGING\n if( *logStream ) { " );
fprintf( f, "*logStream << \"DAVE ERR exiting %s constructor.\" << std::endl; }\n", n );
fprintf( f, "#endif\n" );
fprintf( f, "}\n\n" );
}
LISTod;
LISTdo( dups, t, Type )
} LISTod
LISTdo( dups, t, Type ) {
/* if there is more than one underlying type that maps to the
* same base type, print a constructor using the base type.
*/
@@ -834,10 +837,10 @@ void TYPEselect_lib_print_part_one( const Type type, FILE * f,
fprintf( f, " }\n#endif\n" );
fprintf( f, "}\n\n" );
}
LISTod;
} LISTod
/* dtor */
fprintf( f, "%s::~%s()\n{\n", n, n );
fprintf( f, "%s::~%s() {\n", n, n );
/* delete objects that data members point to */
LISTdo( dups, t, Type ) {
if( isAggregateType( t ) && t->u.type->body->base ) {
@@ -847,6 +850,16 @@ void TYPEselect_lib_print_part_one( const Type type, FILE * f,
}
}
LISTod;
LISTdo( SEL_TYPEget_items( type ), t, Type ) {
if( ( TYPEis_entity( t ) ) || ( !utype_member( dups, t, 1 ) ) ) {
if( isAggregateType( t ) && ( t->u.type->body->base ) ) {
fprintf( f, " if( _%s ) {\n", SEL_ITEMget_dmname( t ) );
fprintf( f, " delete _%s;\n", SEL_ITEMget_dmname( t ) );
fprintf( f, " _%s = 0;\n }\n", SEL_ITEMget_dmname( t ) );
}
}
} LISTod
fprintf( f, "}\n\n" );
fprintf( f, "%s_agg::%s_agg( SelectTypeDescriptor *s)\n"
@@ -1033,8 +1046,7 @@ void TYPEselect_lib_part_three_getter( const Type type, const char * classnm, co
* a select class -- access functions for the data members of underlying entity
* types.
*/
void TYPEselect_lib_print_part_three( const Type type, FILE * f,
char * classnm ) {
void TYPEselect_lib_print_part_three( const Type type, FILE * f, char * classnm ) {
#define ENTITYget_type(e) ((e)->u.entity->type)
char uent[BUFSIZ], /* name of underlying entity type */

View file

@@ -2,7 +2,6 @@ if(SC_PYTHON_GENERATOR)
include_directories(
${SC_SOURCE_DIR}/include
${SC_SOURCE_DIR}/include/exppp
${SC_SOURCE_DIR}/include/express
${SC_SOURCE_DIR}/src/base
)
@@ -30,9 +29,11 @@ if(SC_PYTHON_GENERATOR)
../exp2cxx/write.cc
../exp2cxx/print.cc
)
SC_ADDEXEC(exp2python "${exp2python_SOURCES}" "libexppp;express;base")
SC_ADDEXEC(exp2python "${exp2python_SOURCES}" "express;base")
if(NOT SC_IS_SUBBUILD AND SC_GIT_VERSION)
add_dependencies(exp2python version_string)
endif(NOT SC_IS_SUBBUILD AND SC_GIT_VERSION)
endif(SC_PYTHON_GENERATOR)
# Local Variables:

View file

@@ -69,7 +69,7 @@ class ARRAY(BaseType.Type, BaseType.Aggregate):
"""
EXPRESS definition:
==================
An array data type has as its domain indexed, fixed-size collections of like elements. The lower
An array data type has as its domain indexed, fixed-size collections of like elements. The lower
and upper bounds, which are integer-valued expressions, define the range of index values, and
thus the size of each array collection.
An array data type definition may optionally specify
@@ -93,21 +93,21 @@ class ARRAY(BaseType.Type, BaseType.Aggregate):
NOTE 1 { The bounds may be positive, negative or zero, but may not be indeterminate (?) (see
14.2).
Rules and restrictions:
a) Both expressions in the bound speci cation, bound_1 and bound_2, shall evaluate to
a) Both expressions in the bound specification, bound_1 and bound_2, shall evaluate to
integer values. Neither shall evaluate to the indeterminate (?) value.
b) bound_1 gives the lower bound of the array. This shall be the lowest index which is
valid for an array value of this data type.
c) bound_2 gives the upper bound of the array. This shall be the highest index which is
valid for an array value of this data type.
d) bound_1 shall be less than or equal to bound_2.
e) If the optional keyword is speci ed, an array value of this data type may have the
e) If the optional keyword is specified, an array value of this data type may have the
indeterminate (?) value at one or more index positions.
f) If the optional keyword is not speci ed, an array value of this data type shall not
f) If the optional keyword is not specified, an array value of this data type shall not
contain an indeterminate (?) value at any index position.
g) If the unique keyword is speci ed, each element in an array value of this data type
shall be di erent from (i.e., not instance equal to) every other element in the same array
g) If the unique keyword is specified, each element in an array value of this data type
shall be different from (i.e., not instance equal to) every other element in the same array
value.
NOTE 2 : Both optional and unique may be speci ed in the same array data type definition.
NOTE 2 : Both optional and unique may be specified in the same array data type definition.
This does not preclude multiple indeterminate (?) values from occurring in a single array value.
This is because comparisons between indeterminate (?) values result in unknown so the uniqueness
constraint is not violated.
@@ -115,7 +115,7 @@ class ARRAY(BaseType.Type, BaseType.Aggregate):
sectors : ARRAY [ 1 : 10 ] OF -- first dimension
ARRAY [ 11 : 14 ] OF -- second dimension
UNIQUE something;
The first array has 10 elements of data type ARRAY[11:14] OF UNIQUE something. There is
The first array has 10 elements of data type ARRAY[11:14] OF UNIQUE something. There is
a total of 40 elements of data type something in the attribute named sectors. Within each
ARRAY[11:14], no duplicates may occur; however, the same something instance may occur in two
different ARRAY[11:14] values within a single value for the attribute named sectors.
@@ -223,10 +223,10 @@ class LIST(BaseType.Type, BaseType.Aggregate):
If this value is indeterminate (?) the number of elements in a list value of this data type is
not bounded from above.
c) If the bound_spec is omitted, the limits are [0:?].
d) If the unique keyword is speci ed, each element in a list value of this data type shall
be di erent from (i.e., not instance equal to) every other element in the same list value.
EXAMPLE 28 { This example de nes a list of arrays. The list can contain zero to ten arrays. Each
array of ten integers shall be di erent from all other arrays in a particular list.
d) If the unique keyword is specified, each element in a list value of this data type shall
be different from (i.e., not instance equal to) every other element in the same list value.
EXAMPLE 28 { This example defines a list of arrays. The list can contain zero to ten arrays. Each
array of ten integers shall be different from all other arrays in a particular list.
complex_list : LIST[0:10] OF UNIQUE ARRAY[1:10] OF INTEGER;
Python definition:
@@ -374,7 +374,7 @@ class BAG(BaseType.Type, BaseType.Aggregate):
==================
A bag data type has as its domain unordered collections of like elements. The optional lower
and upper bounds, which are integer-valued expressions, define the minimum and maximum
number of elements that can be held in the collection de ned by a bag data type.
number of elements that can be held in the collection defined by a bag data type.
Syntax:
170 bag_type = BAG [ bound_spec ] OF base_type .
@@ -490,8 +490,8 @@ class SET(BaseType.Type, BaseType.Aggregate):
==================
A set data type has as its domain unordered collections of like elements. The set data type is
a specialization of the bag data type. The optional lower and upper bounds, which are integer-
valued expressions, de ne the minimum and maximum number of elements that can be held in
the collection de ned by a set data type. The collection de ned by set data type shall not
valued expressions, define the minimum and maximum number of elements that can be held in
the collection defined by a set data type. The collection defined by set data type shall not
contain two or more elements which are instance equal.
Syntax:
285 set_type = SET [ bound_spec ] OF base_type .
@@ -509,14 +509,14 @@ class SET(BaseType.Type, BaseType.Aggregate):
If this value is indeterminate (?) the number of elements in a set value of this data type is
not be bounded from above.
c) If the bound_spec is omitted, the limits are [0:?].
d) Each element in an occurrence of a set data type shall be di erent from (i.e., not
d) Each element in an occurrence of a set data type shall be different from (i.e., not
instance equal to) every other element in the same set value.
EXAMPLE 30 { This example de nes an attribute as a set of points (a named data type assumed
EXAMPLE 30 { This example defines an attribute as a set of points (a named data type assumed
to have been declared elsewhere).
a_set_of_points : SET OF point;
The attribute named a_set_of_points can contain zero or more points. Each point instance (in
the set value) is required to be di erent from every other point in the set.
If the value is required to have no more than 15 points, the speci cation can provide an upper bound,
the set value) is required to be different from every other point in the set.
If the value is required to have no more than 15 points, the specification can provide an upper bound,
as in:
a_set_of_points : SET [0:15] OF point;
The value of the attribute named a_set_of_points now may contain no more than 15 points.

View file

@@ -50,8 +50,8 @@ CONST_E = REAL(math.pi)
#14.2 Indeterminate
#The indeterminate symbol (?) stands for an ambiguous value. It is compatible with all data
#types.
#NOTE - The most common use of indeterminate (?) is as the upper bound speci cation of a bag,
#list or set. This usage represents the notion that the size of the aggregate value de ned by the
#NOTE - The most common use of indeterminate (?) is as the upper bound specification of a bag,
#list or set. This usage represents the notion that the size of the aggregate value defined by the
#aggregation data type is unbounded.
# python note: indeterminate value is mapped to None in aggregate bounds
@@ -65,7 +65,7 @@ FALSE = False
# EXPRESS definition:
# ===================
#14.4 Pi
#PI is a REAL constant representing the mathematical value , the ratio of a circle's circumference
#PI is a REAL constant representing the mathematical value π, the ratio of a circle's circumference
#to its diameter.
PI = REAL(math.pi)
@@ -74,7 +74,7 @@ PI = REAL(math.pi)
#14.5 Self
#SELF refers to the current entity instance or type value. self may appear within an entity
#declaration, a type declaration or an entity constructor.
#NOTE - sSELF is not a constant, but behaves as one in every context in which it can appear.
#NOTE - SELF is not a constant, but behaves as one in every context in which it can appear.
# python note: SELF is not mapped to any constant, but is mapper to self
# EXPRESS definition:
@@ -87,7 +87,7 @@ TRUE = True
# EXPRESS definition:
# ===================
#14.7 Unknown
#unknown is a logical constant representing that there is insucient information available to
#unknown is a logical constant representing that there is insufficient information available to
#be able to evaluate a logical condition. It is compatible with the logical data type, but not
#with the boolean data type.
# @TODO: define UNKNOWN in python
@@ -122,7 +122,7 @@ def ABS(V):
#FUNCTION ACOS ( V:NUMBER ) : REAL;
#The acos function returns the angle given a cosine value.
#Parameters : V is a number which is the cosine of an angle.
#Result : The angle in radians (0  result  ) whose cosine is V.
#Result : The angle in radians (0 <= result <= pi) whose cosine is V.
#Conditions : -1.0=<V<=1.0
#EXAMPLE 126 { ACOS ( 0.3 ) --> 1.266103...
# Python definition:
@@ -149,7 +149,7 @@ def ASIN(V):
#a) V1 is a number.
#b) V2 is a number.
#Result : The angle in radians (-pi/2<=result<=pi/2) whose tangent is V. If V2 is zero, the result
#is pi/2 or -pi/2 depending on the sign of V1.
#is pi/2 or -pi/2 depending on the sign of V1.
#Conditions : Both V1 and V2 shall not be zero.
#EXAMPLE 128 { ATAN ( -5.5, 3.0 ) --> -1.071449...
def ATAN(V1,V2):
@@ -197,7 +197,7 @@ def BLENGTH(V):
#FUNCTION SIN ( V:NUMBER ) : REAL;
#The sin function returns the sine of an angle.
#Parameters : V is a number representing an angle expressed in radians.
#Result : The sine of V (-1.0  result  1.0).
#Result : The sine of V (-1.0 <= result <= 1.0).
#EXAMPLE 144 { SIN ( PI ) --> 0.0
#
def COS(V):
@@ -487,14 +487,14 @@ def ODD(V):
# ===================
#15.20 RolesOf - general function
#FUNCTION ROLESOF ( V:GENERIC ) : SET OF STRING;
#The rolesof function returns a set of strings containing the fully quali ed names of the roles
#played by the speci ed entity instance. A fully quali ed name is de ned to be the name of the
#attribute quali ed by the name of the schema and entity in which this attribute is declared (i.e.
#The rolesof function returns a set of strings containing the fully qualified names of the roles
#played by the specified entity instance. A fully qualified name is defined to be the name of the
#attribute qualified by the name of the schema and entity in which this attribute is declared (i.e.
#'SCHEMA.ENTITY.ATTRIBUTE').
#Parameters : V is any instance of an entity data type.
#Result : A set of string values (in upper case) containing the fully quali ed names of the
#Result : A set of string values (in upper case) containing the fully qualified names of the
#attributes of the entity instances which use the instance V.
#When a named data type is use'd or reference'd, the schema and the name in that schema,
#When a named data type is used or referenced, the schema and the name in that schema,
#if renamed, are also returned. Since use statements may be chained, all the chained schema
#names and the name in each schema are returned.
#EXAMPLE 143 { This example shows that a point might be used as the centre of a circle. The
@@ -567,7 +567,7 @@ def SIZEOF(V):
#The sqrt function returns the non-negative square root of a number.
#Parameters : V is any non-negative number.
#Result : The non-negative square root of V.
#Conditions : V  0:0
#Conditions : V >= 0:0
#EXAMPLE 146 - SQRT ( 121 ) --> 11.0
def SQRT(V):
if not isinstance(V,NUMBER):
@@ -602,16 +602,16 @@ def TAN(V):
#The typeof function returns a set of strings that contains the names of all the data types
#of which the parameter is a member. Except for the simple data types (binary, boolean,
#integer, logical, number, real, and string) and the aggregation data types (array, bag,
#list, set) these names are quali ed by the name of the schema which contains the de nition of
#list, set) these names are qualified by the name of the schema which contains the definition of
#the type.
#NOTE 1 { The primary purpose of this function is to check whether a given value (variable, at-
#tribute value) can be used for a certain purpose, e.g. to ensure assignment compatibility between
#two values. It may also be used if di erent subtypes or specializations of a given type have to be
#treated di erently in some context.
#two values. It may also be used if different subtypes or specializations of a given type have to be
#treated differently in some context.
#Parameters : V is a value of any type.
#Result : The contents of the returned set of string values are the names (in upper case) of all
#types the value V is a member of. Such names are quali ed by the name of the schema which
#contains the de nition of the type ('SCHEMA.TYPE') if it is neither a simple data type nor an
#types the value V is a member of. Such names are qualified by the name of the schema which
#contains the definition of the type ('SCHEMA.TYPE') if it is neither a simple data type nor an
#aggregation data type. It may be derived by the following algorithm (which is given here for
#specification purposes rather than to prescribe any particular type of implementation)
def TYPEOF(V):
@@ -636,8 +636,8 @@ def TYPEOF(V):
# ===================
#15.26 UsedIn - general function
#FUNCTION USEDIN ( T:GENERIC; R:STRING) : BAG OF GENERIC;
#The usedin function returns each entity instance that uses a speci ed entity instance in a
#speci ed role.
#The usedin function returns each entity instance that uses a specified entity instance in a
#specified role.
def USEDIN(T,R):
raise NotImplemented("USEDIN function not yet implemented.")
@@ -654,8 +654,8 @@ def USEDIN(T,R):
#VALUE ( 'abc' ) --> ? null
def VALUE(V):
if not isinstance(V,STRING):
raise TypeError("VALULE function takes a NUMBER parameter")
# first try to instanciate an INTEGER from the string:
raise TypeError("VALUE function takes a NUMBER parameter")
# first try to instantiate an INTEGER from the string:
try:
return INTEGER(V)
except:
@@ -691,7 +691,7 @@ def VALUE(V):
def VALUE_IN(C,V):
if not isinstance(C,Aggregate):
raise TypeError("VALUE_IN method takes an aggregate as first parameter")
raise NotImplemented("VALUE_IN function not et implemented")
raise NotImplemented("VALUE_IN function not yet implemented")
# EXPRESS definition:
# ===================
@@ -705,8 +705,8 @@ def VALUE_IN(C,V):
#b) If any any two elements of V are value equal, false is returned.
#c) If any element of V is indeterminate (?), unknown is returned.
#d) Otherwise true is returned.
#EXAMPLE 153 { The following test ensures tht each point is a set is at a di erent position, (by
#de nition they are distinct, i.e., instance unique).
#EXAMPLE 153 { The following test ensures that each point is placed at a different position, (by
#definition they are distinct, i.e., instance unique).
#IF VALUE_UNIQUE(points) THEN ...
def VALUE_UNIQUE(V):
if not isinstance(V,Aggregate):

View file
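Beyond the typo fixes above ('VALULE' → 'VALUE', 'instanciate' → 'instantiate'), the VALUE builtin's control flow — first try to build an INTEGER from the string, fall back on failure — is worth a sketch. This stand-in uses plain `int`/`float` rather than the module's INTEGER/REAL wrappers, and the REAL fallback plus `None` for unparseable input (the "`? null`" in the docstring) are assumptions about the part of the function the diff elides:

```python
def value_sketch(v):
    # First try an integer, as VALUE does in the diff above...
    try:
        return int(v)
    except ValueError:
        pass
    # ...then (assumed) fall back to a real, else the indeterminate value,
    # matching the docstring's "VALUE ( 'abc' ) --> ? null" example.
    try:
        return float(v)
    except ValueError:
        return None
```

This keeps the EXPRESS semantics of VALUE — numeric where possible, indeterminate otherwise — without committing to the wrapper types.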

@@ -30,19 +30,10 @@
# THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import sys
from enum import Enum
import BaseType
class EnumerationId(object):
"""
EXPRESS definition:
===================
An enumeration data type has as its domain an ordered set of names. The names represent
values of the enumeration data type. These names are designated by enumeration_ids and are
referred to as enumeration items.
"""
pass
class ENUMERATION(object):
class ENUMERATION(Enum):
"""
EXPRESS definition:
===================
@@ -58,49 +49,15 @@ class ENUMERATION(object):
behind);
END_TYPE; -- ahead_or_behind
is implemented in python with the line:
>>> ahead_of_behind = ENUMERATION('ahead','behind', the_current_scope)
>>> ahead_or_behind.ahead
>>> ahead_of_behind.behind
Scoping and visibility of ENUMERATIONS is similar in EXPRESS and Python
And, if and only if ahead and/or behind are not in scope (e.g. they are not entity names,
and/or many enums define the same enumeration identifier):
>>> ahead
>>> behind
Enum implemented as per Standard Library / PEP 435
>>> ahead_or_behind = ENUMERATION('ahead_or_behind', 'ahead behind')
>>> race_position = ahead_or_behind.ahead
>>> if race_position == ahead_or_behind.ahead:
... # do stuff!
"""
def __init__(self,*kargs,**args):
# first defining the scope
if args.has_key('scope'):
self._scope = args['scope']
else:
self._scope = None
# store passed enum identifiers
self._enum_id_names = list(kargs)
self._enum_ids = []
# we create enums id from names, and create attributes
# for instance, from the identifier name 'ahead',
# we create an attribute ahead with which is a new
# instance of EnumerationId
for enum_id_name in self._enum_id_names:
setattr(self,enum_id_name,EnumerationId())
# we store this new attributes to the enum_ids list, which
# will be accessed by the type checker with the get_enum_ids method
self._enum_ids.append(self.__getattribute__(enum_id_name))
#
# Then we check if the enums names can be added to the current scope:
# if the name is already in the scope, then another enums id or select
# has the same name -> we do nothing, enums will be called
# with ahead_of_behind.ahead or ahead_or_behind.behind.
# otherwise, they can be called as only ahead or behind
# Note: since ENUMERATIONS are defined *before* entities, if an entity
# has the same name as an enum id, it will replace it in the current scope.
#
for enum_id_name in self._enum_id_names:
if not vars(self._scope).has_key(enum_id_name):
vars(self._scope)[enum_id_name] = self.__getattribute__(enum_id_name)
def get_enum_ids(self):
return self._enum_ids
pass
class SELECT(object):
""" A select data type has as its domain the union of the domains of the named data types in

View file

@@ -34,34 +34,52 @@
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
# THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import sys
import logging
import ply.lex as lex
import ply.yacc as yacc
from ply.lex import LexError
logger = logging.getLogger(__name__)
# ensure Python 2.6 compatibility
if not hasattr(logging, 'NullHandler'):
class NullHandler(logging.Handler):
def handle(self, record):
pass
def emit(self, record):
pass
def createLock(self):
self.lock = None
setattr(logging, 'NullHandler', NullHandler)
logger.addHandler(logging.NullHandler())
####################################################################################################
# Common Code for Lexer / Parser
####################################################################################################
class Base:
tokens = ('INTEGER', 'REAL', 'USER_DEFINED_KEYWORD', 'STANDARD_KEYWORD', 'STRING', 'BINARY',
base_tokens = ['INTEGER', 'REAL', 'USER_DEFINED_KEYWORD', 'STANDARD_KEYWORD', 'STRING', 'BINARY',
'ENTITY_INSTANCE_NAME', 'ENUMERATION', 'PART21_END', 'PART21_START', 'HEADER_SEC',
'ENDSEC', 'DATA_SEC')
'ENDSEC', 'DATA']
####################################################################################################
# Lexer
####################################################################################################
class Lexer(Base):
def __init__(self, debug=0, optimize=0, compatibility_mode=False, header_limit=1024):
self.lexer = lex.lex(module=self, debug=debug, debuglog=logger, optimize=optimize,
errorlog=logger)
self.entity_keywords = []
class Lexer(object):
tokens = list(base_tokens)
states = (('slurp', 'exclusive'),)
def __init__(self, debug=0, optimize=0, compatibility_mode=False, header_limit=4096):
self.base_tokens = list(base_tokens)
self.schema_dict = {}
self.active_schema = {}
self.input_length = 0
self.compatibility_mode = compatibility_mode
self.header_limit = header_limit
states = (('compatibility', 'inclusive'),)
self.lexer = lex.lex(module=self, debug=debug, debuglog=logger, optimize=optimize,
errorlog=logger)
self.reset()
def __getattr__(self, name):
if name == 'lineno':
@@ -72,16 +90,12 @@ class Lexer(Base):
raise AttributeError
def input(self, s):
startidx = s.find('ISO-10303-21;', 0, self.header_limit)
if startidx == -1:
sys.exit('Aborting... ISO-10303-21; header not found')
self.lexer.input(s[startidx:])
self.lexer.lineno += s[0:startidx].count('\n')
self.lexer.input(s)
self.input_length += len(s)
if self.compatibility_mode:
self.lexer.begin('compatibility')
else:
self.lexer.begin('INITIAL')
def reset(self):
self.lexer.lineno = 1
self.lexer.begin('slurp')
def token(self):
try:
@@ -89,63 +103,105 @@ class Lexer(Base):
except StopIteration:
return None
def register_entities(self, entities):
self.entity_keywords.extend(entities)
def activate_schema(self, schema_name):
if schema_name in self.schema_dict:
self.active_schema = self.schema_dict[schema_name]
else:
raise ValueError('schema not registered')
def register_schema(self, schema_name, entities):
if schema_name in self.schema_dict:
raise ValueError('schema already registered')
for k in entities:
if k in self.base_tokens: raise ValueError('schema cannot override base_tokens')
if isinstance(entities, list):
entities = dict((k, k) for k in entities)
self.schema_dict[schema_name] = entities
def t_slurp_PART21_START(self, t):
r'ISO-10303-21;'
t.lexer.begin('INITIAL')
return t
def t_slurp_error(self, t):
offset = t.value.find('\nISO-10303-21;', 0, self.header_limit)
if offset == -1 and self.header_limit < len(t.value): # not found within header_limit
raise LexError("Scanning error. try increasing lexer header_limit parameter",
"{0}...".format(t.value[0:20]))
elif offset == -1: # not found before EOF
t.lexer.lexpos = self.input_length
else: # found ISO-10303-21;
offset += 1 # also skip the \n
t.lexer.lineno += t.value[0:offset].count('\n')
t.lexer.skip(offset)
# Comment (ignored)
def t_ANY_COMMENT(self, t):
def t_COMMENT(self, t):
r'/\*(.|\n)*?\*/'
t.lexer.lineno += t.value.count('\n')
def t_ANY_PART21_START(self, t):
r'ISO-10303-21;'
return t
def t_ANY_PART21_END(self, t):
def t_PART21_END(self, t):
r'END-ISO-10303-21;'
t.lexer.begin('slurp')
return t
def t_ANY_HEADER_SEC(self, t):
def t_HEADER_SEC(self, t):
r'HEADER;'
return t
def t_ANY_ENDSEC(self, t):
def t_ENDSEC(self, t):
r'ENDSEC;'
return t
# Keywords
def t_compatibility_STANDARD_KEYWORD(self, t):
r'(?:!|)[A-Z_][0-9A-Za-z_]*'
def t_STANDARD_KEYWORD(self, t):
r'(?:!|)[A-Za-z_][0-9A-Za-z_]*'
if self.compatibility_mode:
t.value = t.value.upper()
if t.value == 'DATA':
t.type = 'DATA_SEC'
elif not t.value.isupper():
raise LexError('Scanning error. Mixed/lower case keyword detected, please use compatibility_mode=True', t.value)
if t.value in self.base_tokens:
t.type = t.value
elif t.value in self.active_schema:
t.type = self.active_schema[t.value]
elif t.value.startswith('!'):
t.type = 'USER_DEFINED_KEYWORD'
elif t.value in self.entity_keywords:
t.type = t.value
return t
def t_ANY_STANDARD_KEYWORD(self, t):
r'(?:!|)[A-Z_][0-9A-Z_]*'
if t.value == 'DATA':
t.type = 'DATA_SEC'
elif t.value.startswith('!'):
t.type = 'USER_DEFINED_KEYWORD'
elif t.value in self.entity_keywords:
t.type = t.value
return t
def t_ANY_newline(self, t):
def t_newline(self, t):
r'\n+'
t.lexer.lineno += len(t.value)
# Simple Data Types
t_ANY_REAL = r'[+-]*[0-9][0-9]*\.[0-9]*(?:E[+-]*[0-9][0-9]*)?'
t_ANY_INTEGER = r'[+-]*[0-9][0-9]*'
t_ANY_STRING = r"'(?:[][!\"*$%&.#+,\-()?/:;<=>@{}|^`~0-9a-zA-Z_\\ ]|'')*'"
t_ANY_BINARY = r'"[0-3][0-9A-F]*"'
t_ANY_ENTITY_INSTANCE_NAME = r'\#[0-9]+'
t_ANY_ENUMERATION = r'\.[A-Z_][A-Z0-9_]*\.'
def t_REAL(self, t):
r'[+-]*[0-9][0-9]*\.[0-9]*(?:E[+-]*[0-9][0-9]*)?'
t.value = float(t.value)
return t
def t_INTEGER(self, t):
r'[+-]*[0-9][0-9]*'
t.value = int(t.value)
return t
def t_STRING(self, t):
r"'(?:[][!\"*$%&.#+,\-()?/:;<=>@{}|^`~0-9a-zA-Z_\\ ]|'')*'"
t.value = t.value[1:-1]
return t
def t_BINARY(self, t):
r'"[0-3][0-9A-F]*"'
try:
t.value = int(t.value[2:-1], base=16)
except ValueError:
t.value = None
return t
t_ENTITY_INSTANCE_NAME = r'\#[0-9]+'
t_ENUMERATION = r'\.[A-Z_][A-Z0-9_]*\.'
# Punctuation
literals = '()=;,*$'
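The reworked `t_STANDARD_KEYWORD` above folds the old `compatibility` lexer state into a flag and then dispatches on base tokens, the active schema, and the `!` prefix. A simplified, ply-free sketch of that classification (the `PRODUCT` entry is a hypothetical registered schema keyword; only the base-token and prefix handling is taken from the diff):

```python
base_tokens = {'HEADER_SEC', 'ENDSEC', 'DATA'}
active_schema = {'PRODUCT': 'PRODUCT'}   # hypothetical entity from register_schema()

def classify_keyword(value, compatibility_mode=False):
    # Mirrors t_STANDARD_KEYWORD: upper-case in compatibility mode,
    # reject mixed/lower case otherwise, then pick a token type.
    if compatibility_mode:
        value = value.upper()
    elif not value.isupper():
        raise ValueError('mixed/lower case keyword; use compatibility_mode=True')
    if value in base_tokens:
        return value, value                      # e.g. DATA -> DATA token
    if value in active_schema:
        return value, active_schema[value]       # schema-defined entity keyword
    if value.startswith('!'):
        return value, 'USER_DEFINED_KEYWORD'
    return value, 'STANDARD_KEYWORD'
```

The ordering matters: a schema is forbidden from overriding base tokens at registration time (`register_schema` raises), so checking `base_tokens` first is safe.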
@@ -171,7 +227,7 @@ class P21Header:
class HeaderEntity:
def __init__(self, type_name, *params):
self.type_name = type_name
self.params = list(*params) if params else []
self.params = list(params) if params else []
class Section:
def __init__(self, entities):
@@ -181,43 +237,64 @@ class SimpleEntity:
def __init__(self, ref, type_name, *params):
self.ref = ref
self.type_name = type_name
self.params = list(*params) if params else []
self.params = list(params) if params else []
class ComplexEntity:
def __init__(self, ref, *params):
self.ref = ref
self.params = list(*params) if params else []
self.params = list(params) if params else []
class TypedParameter:
def __init__(self, type_name, *params):
self.type_name = type_name
self.params = list(*params) if params else None
self.params = list(params) if params else None
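The `list(*params)` to `list(params)` changes above fix a real bug: inside a `*params` function, `params` is already a tuple, so unpacking it again either raises a TypeError (more than one argument) or silently iterates a lone iterable argument instead of wrapping it. A minimal demonstration:

```python
def broken(*params):
    return list(*params) if params else []   # re-unpacks the tuple

def fixed(*params):
    return list(params) if params else []    # copies the tuple

print(fixed(1, 2, 3))    # [1, 2, 3]
print(fixed('abc'))      # ['abc']
print(broken('abc'))     # ['a', 'b', 'c']  (iterated, not wrapped)
```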
####################################################################################################
# Parser
####################################################################################################
class Parser(Base):
def __init__(self, lexer=None, debug=0):
self.parser = yacc.yacc(module=self, debug=debug, debuglog=logger, errorlog=logger)
class Parser(object):
tokens = list(base_tokens)
start = 'exchange_file'
if lexer is None:
lexer = Lexer()
self.lexer = lexer
def __init__(self, lexer=None, debug=0):
self.lexer = lexer if lexer else Lexer()
try: self.tokens = lexer.tokens
except AttributeError: pass
self.parser = yacc.yacc(module=self, debug=debug, debuglog=logger, errorlog=logger)
self.reset()
def parse(self, p21_data, **kwargs):
#TODO: will probably need to change this function if the lexer is ever to support t_eof
self.lexer.reset()
self.lexer.input(p21_data)
self.refs = {}
if 'debug' in kwargs:
result = self.parser.parse(lexer=self.lexer, debug=logger,
**{ k: kwargs[k] for k in kwargs if k != 'debug'})
** dict((k, v) for k, v in kwargs.iteritems() if k != 'debug'))
else:
result = self.parser.parse(lexer=self.lexer, **kwargs)
return result
def reset(self):
self.refs = {}
self.is_in_exchange_structure = False
def p_exchange_file(self, p):
"""exchange_file : PART21_START header_section data_section_list PART21_END"""
"""exchange_file : check_p21_start_token header_section data_section_list check_p21_end_token"""
p[0] = P21File(p[2], p[3])
def p_check_start_token(self, p):
"""check_p21_start_token : PART21_START"""
self.is_in_exchange_structure = True
p[0] = p[1]
def p_check_end_token(self, p):
"""check_p21_end_token : PART21_END"""
self.is_in_exchange_structure = False
p[0] = p[1]
# TODO: Specialise the first 3 header entities
def p_header_section(self, p):
"""header_section : HEADER_SEC header_entity header_entity header_entity ENDSEC"""
@@ -235,8 +312,8 @@ class Parser(Base):
def p_check_entity_instance_name(self, p):
"""check_entity_instance_name : ENTITY_INSTANCE_NAME"""
if p[1] in self.refs:
logger.error('Line %i, duplicate entity instance name: %s', p.lineno(1), p[1])
sys.exit('Aborting...')
logger.error('Line: {0}, SyntaxError - Duplicate Entity Instance Name: {1}'.format(p.lineno(1), p[1]))
raise SyntaxError
else:
self.refs[p[1]] = None
p[0] = p[1]
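Raising SyntaxError instead of calling sys.exit, as the change above does, gives yacc's error-recovery productions a chance to keep parsing after a duplicate name. The check in isolation might look like this (a sketch, not the module's API):

```python
def check_entity_instance_name(refs, name, lineno):
    # SyntaxError lets the grammar's error rules resume after the bad
    # instance; sys.exit() would abort the run on the first duplicate
    if name in refs:
        raise SyntaxError('Line: {0}, Duplicate Entity Instance Name: {1}'
                          .format(lineno, name))
    refs[name] = None        # reserve the name, value filled in later
    return name
```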
@@ -245,6 +322,11 @@ class Parser(Base):
"""simple_entity_instance : check_entity_instance_name '=' simple_record ';'"""
p[0] = SimpleEntity(p[1], *p[3])
def p_entity_instance_error(self, p):
"""simple_entity_instance : error '=' simple_record ';'
complex_entity_instance : error '=' subsuper_record ';'"""
pass
def p_complex_entity_instance(self, p):
"""complex_entity_instance : check_entity_instance_name '=' subsuper_record ';'"""
p[0] = ComplexEntity(p[1], p[3])
@@ -302,12 +384,12 @@ class Parser(Base):
p[0] = []
def p_data_start(self, p):
"""data_start : DATA_SEC '(' parameter_list ')' ';'"""
"""data_start : DATA '(' parameter_list ')' ';'"""
pass
def p_data_start_empty(self, p):
"""data_start : DATA_SEC '(' ')' ';'
| DATA_SEC ';'"""
"""data_start : DATA '(' ')' ';'
| DATA ';'"""
pass
def p_data_section(self, p):
@@ -316,10 +398,13 @@ class Parser(Base):
def p_entity_instance_list(self, p):
"""entity_instance_list : entity_instance_list entity_instance
| empty"""
| entity_instance"""
try: p[0] = p[1] + [p[2],]
except IndexError: pass # p[2] doesn't exist, p[1] is None
except TypeError: p[0] = [p[2],] # p[1] is None, p[2] is valid
except IndexError: p[0] = [p[1],]
def p_entity_instance_list_empty(self, p):
"""entity_instance_list : empty"""
p[0] = []
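The reworked entity_instance_list rules above are the standard left-recursive list pattern in yacc grammars. Stripped of PLY, the three reductions accumulate like this (a sketch, not the module's API):

```python
def p_list_item(accumulated, item):
    # entity_instance_list : entity_instance_list entity_instance
    return accumulated + [item]

def p_list_single(item):
    # entity_instance_list : entity_instance
    return [item]

def p_list_empty():
    # entity_instance_list : empty
    return []

print(p_list_item(p_list_item(p_list_single('#1'), '#2'), '#3'))
```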
def p_entity_instance(self, p):
"""entity_instance : simple_entity_instance
@@ -346,34 +431,60 @@ class Parser(Base):
pass
def test_debug():
import os.path
logging.basicConfig()
logger.setLevel(logging.DEBUG)
s = open('io1-tu-203.stp', 'r').read()
parser = Parser()
parser.reset()
logger.info("***** parser debug *****")
p = os.path.expanduser('~/projects/src/stepcode/data/ap214e3/s1-c5-214/s1-c5-214.stp')
with open(p, 'rU') as f:
s = f.read()
try:
r = parser.parse(s, debug=1)
parser.parse(s, debug=1)
except SystemExit:
pass
return (parser, r)
logger.info("***** finished *****")
def test():
import os, os.path, itertools, codecs
logging.basicConfig()
logger.setLevel(logging.ERROR)
logger.setLevel(logging.INFO)
s = open('io1-tu-203.stp', 'r').read()
parser = Parser()
compat_list = []
def parse_check(p):
logger.info("processing {0}".format(p))
parser.reset()
with open(p, 'rU') as f:
iso_wrapper = codecs.EncodedFile(f, 'iso-8859-1')
s = iso_wrapper.read()
parser.parse(s)
logger.info("***** standard test *****")
for d, _, files in os.walk(os.path.expanduser('~/projects/src/stepcode')):
for f in itertools.ifilter(lambda x: x.endswith('.stp'), files):
p = os.path.join(d, f)
try:
r = parser.parse(s)
except SystemExit:
pass
parse_check(p)
except LexError:
logger.exception('Lexer issue, adding {0} to compatibility test list'.format(os.path.basename(p)))
compat_list.append(p)
return (parser, r)
lexer = Lexer(compatibility_mode=True)
parser = Parser(lexer=lexer)
logger.info("***** compatibility test *****")
for p in compat_list:
parse_check(p)
logger.info("***** finished *****")
if __name__ == '__main__':
test()
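The test() driver above uses a two-pass strategy: parse every file strictly, collect the ones whose lexing fails, then retry only those in compatibility mode. The retry logic in isolation (the parse callables are stand-ins, and ValueError stands in for the module's LexError):

```python
def two_pass_parse(paths, strict_parse, compat_parse):
    # first pass: strict; collect files whose lexing fails
    compat_list = []
    for p in paths:
        try:
            strict_parse(p)
        except ValueError:          # stand-in for LexError
            compat_list.append(p)
    # second pass: retry only the failures in compatibility mode
    for p in compat_list:
        compat_parse(p)
    return compat_list
```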

View file

@@ -38,7 +38,7 @@ class NUMBER:
EXPRESS definition:
===================
The number data type has as its domain all numeric values in the language. The number data
type shall be used when a more speci c numeric representation is not important.
type shall be used when a more specific numeric representation is not important.
Syntax:
248 number_type = NUMBER .
EXAMPLE 15 - Since we may not know the context of size we do not know how to correctly
@@ -56,7 +56,7 @@ class REAL(float,NUMBER):
"""
EXPRESS definition:
===================
The real data type has as its domain all rational, irrational and scientfic real numbers. It is
The real data type has as its domain all rational, irrational and scientific real numbers. It is
a specialization of the number data type.
Syntax:
265 real_type = REAL [ '(' precision_spec ')' ] .
@@ -110,23 +110,23 @@ class INTEGER(int,NUMBER):
class STRING(str):
"""
The string data type has as its domain sequences of characters. The characters which are
permitted to form part of a string value are de ned in ISO 10646.
permitted to form part of a string value are defined in ISO 10646.
Syntax:
293 string_type = STRING [ width_spec ] .
318 width_spec = '(' width ')' [ FIXED ] .
317 width = numeric_expression .
A string data type may be de ned as either xed or varying width (number of characters). If
A string data type may be defined as either fixed or varying width (number of characters). If
it is not specifically defined as fixed width (by using the fixed reserved word in the definition)
the string has varying width.
The domain of a xed width string data type is the set of all character sequences of exactly
the width speci ed in the type de nition.
The domain of a fixed width string data type is the set of all character sequences of exactly
the width specified in the type definition.
The domain of a varying width string data type is the set of all character sequences of width
less than or equal to the maximum width speci ed in the type de nition.
If no width is speci ed, the domain is the set of all character sequences, with no constraint on
less than or equal to the maximum width specified in the type definition.
If no width is specified, the domain is the set of all character sequences, with no constraint on
the width of these sequences.
Substrings and individual characters may be addressed using subscripts as described in 12.5.
The case (upper or lower) of letters within a string is signi cant.
The case (upper or lower) of letters within a string is significant.
Python mapping: STRING is mapped to the 'str' type. An additional width_spec parameter can be passed
to handle the FIXED length constraint
@@ -166,10 +166,10 @@ class BINARY(str):
A binary data type may be defined as either fixed or varying width (number of bits). If it is
not specifically defined as fixed width (by using the fixed reserved word in the definition) the
binary data type has varying width.
The domain of a fixed width binary data type is the set of all bit sequences of exactly the width
speci ed in the type definition.
The domain of a fixed width binary data type is the set of all bit sequences of exactly the width
specified in the type definition.
The domain of a varying width binary data type is the set of all bit sequences of width less
than or equal to the maximum width speci ed in the type de nition. If no width is specified,
than or equal to the maximum width specified in the type definition. If no width is specified,
the domain is the set of all bit sequences, with no constraint on the width of these sequences.
Subbinaries and individual bits may be addressed using subscripts as described in 12.3.
@@ -180,7 +180,7 @@ class BINARY(str):
return str.__new__(self, value)
def __init__(self, value, width=-1, fixed=False):
""" By default, lenght is set to None"""
""" By default, length is set to None"""
self._specified_width = width
self._fixed = fixed
# Check implicit width

View file

@@ -23,7 +23,6 @@ N350 ( August 31, 1993 ) of ISO 10303 TC184/SC4/WG7.
#include "express.h"
#include "exppp.h"
#include "dict.h"
#define MAX_LEN 240
@@ -104,9 +103,9 @@ FILE * FILEcreate( const char * );
void FILEclose( FILE * );
const char * ClassName( const char * );
const char * ENTITYget_classname( Entity );
void FUNCPrint( Function, FILES *, Schema );
void RULEPrint( Rule, FILES *, Schema );
void ENTITYPrint( Entity, FILES *, Schema );
void FUNCPrint( Function function, FILES* files );
void RULEPrint( Rule rule, FILES* files );
void ENTITYPrint( Entity, FILES * );
const char * StrToConstant( const char * );
void TYPEselect_print( Type, FILES *, Schema );
void ENTITYprint_new( Entity, FILES *, Schema, int );
@@ -130,7 +129,7 @@ const char * TYPEget_ctype( const Type t );
void print_file( Express );
void resolution_success( void );
void SCHEMAprint( Schema schema, FILES* files, int suffix );
Type TYPEget_ancestor( Type );
Type TYPEget_ancestor( Type t );
const char * FundamentalType( const Type t, int report_reftypes );
/*Variable*/
@@ -146,7 +145,5 @@ void print_schemas_separate( Express, FILES * );
void getMCPrint( Express, FILE *, FILE * );
int sameSchema( Scope, Scope );
void USEREFout( Schema schema, Dictionary refdict, Linked_List reflist, char * type, FILE * file );
#endif

View file

@@ -19,7 +19,6 @@ N350 ( August 31, 1993 ) of ISO 10303 TC184/SC4/WG7.
*******************************************************************/
extern int multiple_inheritance;
/*extern int corba_binding; */
/******************************************************************
** The following functions will be used ***
@@ -60,7 +59,7 @@ CheckWord( const char * word ) {
high = nwords - 1;
/* word is obviously not in list, if it is longer than any of the words in the list */
if( strlen( word ) > 12 ) {
if( strlen( word ) > 18 ) {
return ( word );
}
@@ -71,10 +70,8 @@ CheckWord( const char * word ) {
} else if( cond > 0 ) {
low = i + 1;
} else { /* word is a reserved word, capitalize it */
printf( "** warning: reserved word %s ", word );
*( word + 0 ) = toupper( *( word + 0 ) );
printf( "is changed to %s **\n", word );
fprintf( stderr, "Warning: reserved word %s capitalized\n", word );
*word = toupper( *word );
}
}
#endif
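CheckWord above binary-searches a sorted reserved-word table and upper-cases the first letter of any hit. The same lookup expressed in Python via bisect (the word list is a made-up sample, not exp2python's real table):

```python
import bisect

RESERVED_WORDS = sorted(['begin', 'case', 'end', 'entity', 'schema'])  # sample

def check_word(word):
    if len(word) > 18:                     # longer than any reserved word
        return word
    i = bisect.bisect_left(RESERVED_WORDS, word)
    if i < len(RESERVED_WORDS) and RESERVED_WORDS[i] == word:
        return word[0].upper() + word[1:]  # reserved: capitalize first letter
    return word

print(check_word('entity'))   # Entity
```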
@@ -128,8 +125,7 @@ StrToLower( const char * word ) {
}
const char *
StrToUpper( const char * word ) {
const char * StrToUpper( const char * word ) {
static char newword [MAX_LEN];
int i = 0;
char ToUpper( char c );
@@ -143,8 +139,7 @@ StrToUpper( const char * word ) {
return ( newword );
}
const char *
StrToConstant( const char * word ) {
const char * StrToConstant( const char * word ) {
static char newword [MAX_LEN];
int i = 0;
@@ -161,28 +156,15 @@ StrToConstant( const char * word ) {
return ( newword );
}
/******************************************************************
** Procedure: FILEcreate
** Description: creates a file for c++ with header definitions
** Parameters: filename
** Returns: FILE* pointer to file created or NULL
** Side Effects: creates a file with name filename
** Status: complete
******************************************************************/
FILE *
FILEcreate( const char * filename ) {
/** creates a file for python */
FILE * FILEcreate( const char * filename ) {
FILE * file;
//const char * fn;
if( ( file = fopen( filename, "w" ) ) == NULL ) {
printf( "**Error in SCHEMAprint: unable to create file %s ** \n", filename );
fprintf( stderr, "Error in SCHEMAprint: unable to create file %s\n", filename );
return ( NULL );
}
//fprintf( file, "#ifndef %s\n", fn = StrToConstant( filename ) );
//fprintf( file, "#define %s\n", fn );
fprintf( file, "# This file was generated by exp2python. You probably don't want to edit\n" );
fprintf( file, "# it since your modifications will be lost if exp2python is used to\n" );
fprintf( file, "# regenerate it.\n" );
@@ -190,69 +172,32 @@ FILEcreate( const char * filename ) {
}
/******************************************************************
** Procedure: FILEclose
** Description: closes a file opened with FILEcreate
** Parameters: FILE* file -- pointer to file to close
** Returns:
** Side Effects:
** Status: complete
******************************************************************/
void
FILEclose( FILE * file ) {
/** closes a file opened with FILEcreate */
void FILEclose( FILE * file ) {
fclose( file );
}
/******************************************************************
** Procedure: isAggregate
** Parameters: Attribute a
** Returns: int indicates whether the attribute is an aggregate
** Description: indicates whether the attribute is an aggregate
** Side Effects: none
** Status: complete 1/15/91
******************************************************************/
int
isAggregate( Variable a ) {
/** indicates whether the attribute is an aggregate */
int isAggregate( Variable a ) {
return( TYPEinherits_from( VARget_type( a ), aggregate_ ) );
}
int
isAggregateType( const Type t ) {
/** indicates whether the type is an aggregate type */
int isAggregateType( const Type t ) {
return( TYPEinherits_from( t, aggregate_ ) );
}
/******************************************************************
** Procedure: TypeName
** Parameters: Type t
** Returns: name of type as defined in SDAI C++ binding 4-Nov-1993
** Status: 4-Nov-1993
******************************************************************/
const char *
TypeName( Type t ) {
}
/******************************************************************
** Procedure: ClassName
** Parameters: const char * oldname
** Returns: temporary copy of name suitable for use as a class name
** Side Effects: erases the name created by a previous call to this function
** Status: complete
******************************************************************/
const char *
ClassName( const char * oldname ) {
/** returns temporary copy of name suitable for use as a class name
*
* each call erases the name created by a previous call to this function
*/
const char * ClassName( const char * oldname ) {
int i = 0, j = 0;
static char newname [BUFSIZ];
if( !oldname ) {
return ( "" );
}
strcpy( newname, ENTITYCLASS_PREFIX ) ;
j = strlen( ENTITYCLASS_PREFIX ) ;
newname [j] = ToUpper( oldname [i] );
@@ -260,9 +205,6 @@ ClassName( const char * oldname ) {
++j;
while( oldname [i] != '\0' ) {
newname [j] = ToLower( oldname [i] );
/* if (oldname [i] == '_') */
/* character is '_' */
/* newname [++j] = ToUpper (oldname [++i]);*/
++i;
++j;
}
@@ -270,38 +212,14 @@ ClassName( const char * oldname ) {
return ( newname );
}
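ClassName above prepends a prefix, upper-cases the first letter, and lower-cases the rest of the name. A Python equivalent (the empty prefix is an assumption; the real ENTITYCLASS_PREFIX is defined elsewhere in the generator):

```python
ENTITYCLASS_PREFIX = ''   # assumption: the real prefix comes from the build

def class_name(oldname):
    if not oldname:
        return ''
    # first letter upper-cased, remainder lower-cased, prefix prepended
    return ENTITYCLASS_PREFIX + oldname[0].upper() + oldname[1:].lower()

print(class_name('advanced_face'))   # Advanced_face
```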
const char *
ENTITYget_CORBAname( Entity ent ) {
static char newname [BUFSIZ];
strcpy( newname, ENTITYget_name( ent ) );
newname[0] = ToUpper( newname [0] );
return newname;
}
/******************************************************************
** Procedure: ENTITYget_classname
** Parameters: Entity ent
** Returns: the name of the c++ class representing the entity
** Status: complete
******************************************************************/
const char *
ENTITYget_classname( Entity ent ) {
/** returns the name of the c++ class representing the entity */
const char * ENTITYget_classname( Entity ent ) {
const char * oldname = ENTITYget_name( ent );
return ( ClassName( oldname ) );
}
/******************************************************************
** Procedure: PrettyTmpName (char * oldname)
** Procedure: PrettyNewName (char * oldname)
** Parameters: oldname
** Returns: a new capitalized name
** Description: creates a new name with first character's in caps
** Side Effects: PrettyNewName allocates memory for the new name
** Status: OK 7-Oct-1992 kcm
******************************************************************/
const char *
PrettyTmpName( const char * oldname ) {
/** returns a new capitalized name, in internal static buffer */
const char * PrettyTmpName( const char * oldname ) {
int i = 0;
static char newname [BUFSIZ];
newname [0] = '\0';
@@ -321,9 +239,8 @@ PrettyTmpName( const char * oldname ) {
return newname;
}
/* This function is out of date DAS */
const char *
EnumName( const char * oldname ) {
/** This function is out of date DAS */
const char * EnumName( const char * oldname ) {
int j = 0;
static char newname [MAX_LEN];
if( !oldname ) {
@@ -339,8 +256,7 @@ EnumName( const char * oldname ) {
return ( newname );
}
const char *
SelectName( const char * oldname ) {
const char * SelectName( const char * oldname ) {
int j = 0;
static char newname [MAX_LEN];
if( !oldname ) {
@@ -357,8 +273,7 @@ SelectName( const char * oldname ) {
return ( newname );
}
const char *
FirstToUpper( const char * word ) {
const char * FirstToUpper( const char * word ) {
static char newword [MAX_LEN];
strncpy( newword, word, MAX_LEN );
@@ -366,11 +281,11 @@ FirstToUpper( const char * word ) {
return ( newword );
}
/* return fundamental type but as the string which corresponds to */
/* the appropriate type descriptor */
/* if report_reftypes is true, report REFERENCE_TYPE when appropriate */
const char *
FundamentalType( const Type t, int report_reftypes ) {
/** return fundamental type but as the string which corresponds to
* the appropriate type descriptor
* if report_reftypes is true, report REFERENCE_TYPE when appropriate
*/
const char * FundamentalType( const Type t, int report_reftypes ) {
if( report_reftypes && TYPEget_head( t ) ) {
return( "REFERENCE_TYPE" );
}
@@ -412,18 +327,16 @@ FundamentalType( const Type t, int report_reftypes ) {
}
}
/* this actually gets you the name of the variable that will be generated to
be a TypeDescriptor or subtype of TypeDescriptor to represent Type t in
the dictionary. */
const char *
TypeDescriptorName( Type t ) {
/** this actually gets you the name of the variable that will be generated to
* be a TypeDescriptor or subtype of TypeDescriptor to represent Type t in
* the dictionary.
*/
const char * TypeDescriptorName( Type t ) {
static char b [BUFSIZ];
Schema parent = t->superscope;
/* NOTE - I corrected a prev bug here in which the *current* schema was
** passed to this function. Now we take "parent" - the schema in which
** Type t was defined - which was actually used to create t's name. DAR */
if( !parent ) {
parent = TYPEget_body( t )->entity->superscope;
/* This works in certain cases that don't work otherwise (basically a
@@ -437,11 +350,10 @@ TypeDescriptorName( Type t ) {
return b;
}
/* this gets you the name of the type of TypeDescriptor (or subtype) that a
variable generated to represent Type t would be an instance of. */
const char *
GetTypeDescriptorName( Type t ) {
/** this gets you the name of the type of TypeDescriptor (or subtype) that a
* variable generated to represent Type t would be an instance of.
*/
const char * GetTypeDescriptorName( Type t ) {
switch( TYPEget_body( t )->type ) {
case aggregate_:
return "AggrTypeDescriptor";
@@ -477,13 +389,12 @@ GetTypeDescriptorName( Type t ) {
case generic_:
return "TypeDescriptor";
default:
printf( "Error in %s, line %d: type %d not handled by switch statement.", __FILE__, __LINE__, TYPEget_body( t )->type );
fprintf( stderr, "Error in %s, line %d: type %d not handled by switch statement.", __FILE__, __LINE__, TYPEget_body( t )->type );
abort();
}
}
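GetTypeDescriptorName above is one large switch from a type tag to a descriptor class name. In Python the same dispatch is naturally a dict lookup with an explicit failure branch mirroring the abort() default (the tag strings here are an illustrative subset):

```python
DESCRIPTOR_NAMES = {                      # illustrative subset of the switch
    'aggregate_': 'AggrTypeDescriptor',
    'generic_': 'TypeDescriptor',
}

def get_type_descriptor_name(type_tag):
    try:
        return DESCRIPTOR_NAMES[type_tag]
    except KeyError:                      # mirrors the abort() default branch
        raise SystemExit('type %r not handled' % type_tag)

print(get_type_descriptor_name('aggregate_'))   # AggrTypeDescriptor
```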
int
ENTITYhas_explicit_attributes( Entity e ) {
int ENTITYhas_explicit_attributes( Entity e ) {
Linked_List l = ENTITYget_attributes( e );
int cnt = 0;
LISTdo( l, a, Variable )
@@ -495,8 +406,7 @@ ENTITYhas_explicit_attributes( Entity e ) {
}
Entity
ENTITYput_superclass( Entity entity ) {
Entity ENTITYput_superclass( Entity entity ) {
#define ENTITYget_type(e) ((e)->u.entity->type)
Linked_List l = ENTITYget_supertypes( entity );
@@ -518,7 +428,7 @@ ENTITYput_superclass( Entity entity ) {
/* find the first parent that has attributes (in the parent or any of its
ancestors). Make super point at that parent and print warnings for
all the rest of the parents. DAS */
LISTdo( l, e, Entity )
LISTdo( l, e, Entity ) {
/* if there's no super class yet,
or if the entity super class [the variable] super is pointing at
doesn't have any attributes: make super point at the current parent.
@@ -537,9 +447,8 @@ ENTITYput_superclass( Entity entity ) {
printf( "\tin ENTITY %s\n\tSUPERTYPE %s IGNORED.\n\n",
ENTITYget_name( entity ), ENTITYget_name( e ) );
}
LISTod;
} LISTod
}
tag = ( EntityTag ) malloc( sizeof( struct EntityTag_ ) );
tag -> superclass = super;
TYPEput_clientData( ENTITYget_type( entity ), tag );
@@ -548,8 +457,7 @@ ENTITYput_superclass( Entity entity ) {
return 0;
}
Entity
ENTITYget_superclass( Entity entity ) {
Entity ENTITYget_superclass( Entity entity ) {
EntityTag tag;
tag = TYPEget_clientData( ENTITYget_type( entity ) );
return ( tag ? tag -> superclass : 0 );
@@ -597,15 +505,14 @@ void ENTITYget_first_attribs( Entity entity, Linked_List result ) {
** // tell it to be * for reading and writing
**/
Variable
VARis_type_shifter( Variable a ) {
Variable VARis_type_shifter( Variable a ) {
char * temp;
if( VARis_derived( a ) || VARget_inverse( a ) ) {
return 0;
}
temp = EXPRto_string( VARget_name( a ) );
temp = strdup( VARget_name( a )->symbol.name );
if( ! strncmp( StrToLower( temp ), "self\\", 5 ) ) {
/* a is a type shifter */
free( temp );
@@ -615,8 +522,7 @@ VARis_type_shifter( Variable a ) {
return 0;
}
Variable
VARis_overrider( Entity e, Variable a ) {
Variable VARis_overrider( Entity e, Variable a ) {
Variable other;
char * tmp;
@@ -632,13 +538,11 @@ VARis_overrider( Entity e, Variable a ) {
return 0;
}
Type
TYPEget_ancestor( Type t )
/*
/**
* For a renamed type, returns the original (ancestor) type from which t
* descends. Return NULL if t is top level.
*/
{
Type TYPEget_ancestor( Type t ) {
Type i = t;
if( !TYPEget_head( i ) ) {

File diff suppressed because it is too large

View file

@@ -4,8 +4,6 @@
#include "complexSupport.h"
extern int corba_binding;
void use_ref( Schema, Express, FILES * );
/******************************************************************
@@ -106,7 +104,8 @@ void SCOPEPrint( Scope scope, FILES * files, Schema schema ) {
if( t->search_id == CANPROCESS ) {
// Only selects haven't been processed yet and may still be set to
// CANPROCESS.
TYPEselect_print( t, files, schema );
//FIXME this function is not implemented!
// TYPEselect_print( t, files, schema );
t->search_id = PROCESSED;
}
SCOPEod;
@@ -114,7 +113,7 @@ void SCOPEPrint( Scope scope, FILES * files, Schema schema ) {
// process each entity. This must be done *before* typedefs are defined
LISTdo( list, e, Entity );
if( e->search_id == CANPROCESS ) {
ENTITYPrint( e, files, schema );
ENTITYPrint( e, files );
e->search_id = PROCESSED;
}
LISTod;
@@ -122,13 +121,13 @@ void SCOPEPrint( Scope scope, FILES * files, Schema schema ) {
// process each function. This must be done *before* typedefs are defined
LISTdo( function_list, f, Function );
FUNCPrint( f, files, schema );
FUNCPrint( f, files );
LISTod;
LISTfree( function_list );
// process each rule. This must be done *before* typedefs are defined
LISTdo( rule_list, r, Rule );
RULEPrint( r, files, schema );
RULEPrint( r, files );
LISTod;
LISTfree( rule_list );

View file

@@ -1,75 +1,9 @@
/************************************************************************
** Driver for Fed-X Express parser.
************************************************************************/
/* Driver for exp2python (generation of python from EXPRESS) */
/*
* This software was developed by U.S. Government employees as part of
* their official duties and is not subject to copyright.
*
* $Log: fedex_main.c,v $
* Revision 3.0.1.3 1997/11/05 23:12:18 sauderd
* Adding a new state DP3.1 and associated revision
*
* Revision 3.0.1.2 1997/09/26 15:59:10 sauderd
* Finished implementing the -a option (changed from -e) to generate the early
* bound access functions the old way. Implemented the change to generate them
* the new correct way by default.
*
* Revision 3.0.1.1 1997/09/18 21:18:41 sauderd
* Added a -e or -E option to generate attribute get and put functions the old
* way (without an underscore). It sets the variable old_accessors. This doesn't
* currently do anything. It needs to be implemented to generate attr funcs
* correctly.
*
* Revision 3.0.1.0 1997/04/16 19:29:03 dar
* Setting the first branch
*
* Revision 3.0 1997/04/16 19:29:02 dar
* STEP Class Library Pre Release 3.0
*
* Revision 2.1.0.5 1997/03/11 15:33:59 sauderd
* Changed code so that if exp2python is passed the -c or -C option it would
* generate implementation objects for Orbix (CORBA). Look for code that is
* inside stmts such as if(corba_binding)
*
* Revision 2.1.0.4 1996/09/25 22:56:55 sauderd
* Added a command line argument for logging SCL use. The option added is -l or
* -L. It also says the option in the usage stmt when you run exp2python without
* an argument. Added the options to the EXPRESSgetopt_options string.
*
* Revision 2.1.0.3 1996/06/18 18:14:17 sauderd
* Changed the line that gets printed when you run exp2python with no
* arguments to include the option for single inheritance.
*
* Revision 2.1.0.2 1995/05/19 22:40:03 sauderd
* Added a command line argument -s or -S for generating code based on the old
* method as opposed to the new method of multiple inheritance.
*
* Revision 2.1.0.1 1995/05/16 19:52:18 lubell
* setting state to dp21
*
* Revision 2.1.0.0 1995/05/12 18:53:48 lubell
* setting branch
*
* Revision 2.1 1995/05/12 18:53:47 lubell
* changing version to 2.1
*
* Revision 1.7 1995/03/16 20:58:50 sauderd
* ? changes.
*
* Revision 1.6 1992/09/29 15:46:55 libes
* added messages for KC
*
* Revision 1.5 1992/08/27 23:28:52 libes
* moved Descriptor "new"s to precede assignments
* punted SELECT type
*
* Revision 1.4 1992/08/19 18:49:59 libes
* registry support
*
* Revision 1.3 1992/06/05 19:55:28 libes
* Added * to typedefs. Replaced warning kludges with ERRORoption.
*/
#include <stdlib.h>
@@ -81,14 +15,8 @@ extern void print_fedex_version( void );
static void exp2python_usage( void ) {
fprintf( stderr, "usage: %s [-v] [-d # | -d 9 -l nnn -u nnn] [-n] [-p <object_type>] {-w|-i <warning>} express_file\n", EXPRESSprogram_name );
//fprintf( stderr, "where\t-s or -S uses only single inheritance in the generated C++ classes\n" );
//fprintf( stderr, "\t-a or -A generates the early bound access functions for entity classes the old way (without an underscore)\n" );
//fprintf( stderr, "\t-c or -C generates C++ classes for use with CORBA (Orbix)\n" );
//fprintf( stderr, "\t-L prints logging code in the generated C++ classes\n" );
fprintf( stderr, "\t-v produces the version description below\n" );
fprintf( stderr, "\t-d turns on debugging (\"-d 0\" describes this further\n" );
//fprintf( stderr, "\t-p turns on printing when processing certain objects (see below)\n" );
//fprintf( stderr, "\t-n do not pause for internal errors (useful with delta script)\n" );
fprintf( stderr, "\t-w warning enable\n" );
fprintf( stderr, "\t-i warning ignore\n" );
fprintf( stderr, "and <warning> is one of:\n" );
@@ -117,6 +45,7 @@ void resolution_success( void ) {
}
int success( Express model ) {
(void) model; /* unused */
printf( "Done.\n" );
return( 0 );
}

View file

@@ -48,7 +48,7 @@ static void markDescs( Entity );
static int checkItem( Type, Scope, Schema, int *, int );
static int ENUMcanBeProcessed( Type, Schema );
static int inSchema( Scope, Scope );
static void addRenameTypedefs( Schema, FILE * );
/* static void addRenameTypedefs( Schema, FILE * ); */
static void addAggrTypedefs( Schema schema );
static void addUseRefNames( Schema, FILE * );
@@ -70,8 +70,8 @@ void print_schemas_separate( Express express, FILES * files )
/* First set all marks we'll be using to UNPROCESSED/NOTKNOWN: */
initializeMarks( express );
//FIXME SdaiAll.cc:12:24: warning: unused variable is [-Wunused-variable] (also for ui & ri)
//fprintf( files->create, " Interface_spec_ptr is;\n Used_item_ptr ui;\n Referenced_item_ptr ri;\n Uniqueness_rule_ptr ur;\n Where_rule_ptr wr;\n Global_rule_ptr gr;\n" );
/* FIXME SdaiAll.cc:12:24: warning: unused variable is [-Wunused-variable] (also for ui & ri) */
/* fprintf( files->create, " Interface_spec_ptr is;\n Used_item_ptr ui;\n Referenced_item_ptr ri;\n Uniqueness_rule_ptr ur;\n Where_rule_ptr wr;\n Global_rule_ptr gr;\n" ); */
while( !complete ) {
complete = TRUE;
DICTdo_type_init( express->symbol_table, &de, OBJ_SCHEMA );
@@ -116,9 +116,7 @@
}
}
/*******************
*******************/
/*
DICTdo_type_init( express->symbol_table, &de, OBJ_SCHEMA );
while( ( schema = ( Scope )DICTdo( &de ) ) != 0 ) {
//fprintf( files->create,
@@ -129,8 +127,7 @@ void print_schemas_separate( Express express, FILES * files )
// "\t//////////////// REFERENCE statements\n" );
//USEREFout( schema, schema->u.schema->refdict, schema->u.schema->ref_schemas, "REFERENCE", files->create );
}
/*****************
*****************/
*/
/* Before closing, we have three more situations to deal with (i.e., three
// types of declarations etc. which could only be printed at the end).
// Each is explained in the header section of its respective function. */
@@ -144,26 +141,25 @@ void print_schemas_separate( Express express, FILES * files )
// of addAggrTypedefs.) */
DICTdo_type_init( express->symbol_table, &de, OBJ_SCHEMA );
while( ( schema = ( Scope )DICTdo( &de ) ) != 0 ) {
//addAggrTypedefs( schema, files->classes );
/* addAggrTypedefs( schema, files->classes ); */
addAggrTypedefs( schema );
}
/* On our way out, print the necessary statements to add support for
// complex entities. (The 1st line below is a part of SchemaInit(),
// which hasn't been closed yet. (That's done on 2nd line below.)) */
// which hasn't been closed yet. (That's done on 2nd line below.)) * /
//fprintf( files->initall, "\t reg.SetCompCollect( gencomplex() );\n" );
//fprintf( files->initall, "}\n\n" );
//fprintf( files->incall, "\n#include <complexSupport.h>\n" );
//fprintf( files->incall, "ComplexCollect *gencomplex();\n" );
//fprintf( files->incall, "ComplexCollect *gencomplex();\n" ); */
/* Function GetModelContents() is printed at the end of the schema.xx
// files. This is done in a separate loop through the schemas, in function
// below. */
//getMCPrint( express, files->incall, files->initall );
// below. * /
//getMCPrint( express, files->incall, files->initall ); */
}
static void initializeMarks( Express express )
/*
/**
* Set all schema->search_id's to UNPROCESSED, meaning we haven't processed
* all the ents and types in it yet. Also, put an int=0 in each schema's
* clientData field. We'll use it to record what # file we're generating
@@ -172,7 +168,7 @@ static void initializeMarks( Express express )
* an attribute/item which comes from another schema. All other types can
* be processed the first time, but that will be caught in checkTypes().)
*/
{
static void initializeMarks( Express express ) {
DictionaryEntry de_sch, de_ent, de_type;
Schema schema;
@@ -190,8 +186,7 @@ static void initializeMarks( Express express )
}
}
static void unsetObjs( Schema schema )
/*
/**
* Resets all the ents & types of schema which had been set to CANTPROCESS
* to NOTKNOWN. This function is called every time print_schemas_separate
* iterates through the schemas, printing to file what can be printed. At
@@ -200,7 +195,7 @@ static void unsetObjs( Schema schema )
* types which have already been marked PROCESSED will not have to be
* revisited, and are not changed.
*/
{
static void unsetObjs( Schema schema ) {
DictionaryEntry de;
SCOPEdo_types( schema, t, de )
@@ -215,8 +210,7 @@ static void unsetObjs( Schema schema )
SCOPEod
}
static int checkTypes( Schema schema )
/*
/**
* Goes through the types contained in this schema checking for ones which
* can't be processed. This may be the case if: (1) We have a select type
* which has enumeration or select items which have not yet been defined
@@ -228,7 +222,7 @@ static int checkTypes( Schema schema )
* CANTPROCESS. If some types in schema *can* be processed now, we return
* TRUE. (See relevant header comments of checkEnts() below.)
*/
{
static int checkTypes( Schema schema ) {
DictionaryEntry de;
int retval = FALSE, unknowncnt;
Type i;
@@ -259,9 +253,9 @@ static int checkTypes( Schema schema )
schema->search_id = UNPROCESSED;
}
} else if( TYPEis_select( type ) ) {
LISTdo( SEL_TYPEget_items( type ), i, Type )
if( !TYPEis_entity( i ) ) {
if( checkItem( i, type, schema, &unknowncnt, 0 ) ) {
LISTdo( SEL_TYPEget_items( type ), ii, Type ) {
if( !TYPEis_entity( ii ) ) {
if( checkItem( ii, type, schema, &unknowncnt, 0 ) ) {
break;
}
/* checkItem does most of the work of determining if
@@ -269,13 +263,13 @@ static int checkTypes( Schema schema )
// processable. It checks for conditions which would
// make this true and sets values in type, schema, and
// unknowncnt accordingly. (See checkItem's commenting
// below.) It also returns TRUE if i has made type un-
// below.) It also returns TRUE if ii has made type un-
// processable. If so, we break - there's no point
// checking the other items of type any more. */
} else {
/* Check if our select has an entity item which itself
// has unprocessed selects or enums. */
ent = ENT_TYPEget_entity( i );
ent = ENT_TYPEget_entity( ii );
if( ent->search_id == PROCESSED ) {
continue;
}
@@ -287,15 +281,15 @@ static int checkTypes( Schema schema )
// cessed object), while it will contain actual objects
// for the enum and select attributes of ent.) */
attribs = ENTITYget_all_attributes( ent );
LISTdo( attribs, attr, Variable )
LISTdo_n( attribs, attr, Variable, b ) {
if( checkItem( attr->type, type, schema,
&unknowncnt, 1 ) ) {
break;
}
LISTod
} LISTod
LISTfree( attribs );
}
LISTod
} LISTod
/* One more condition - if we're a select which is a rename of
// another select - we must also make sure the original select
// is in this schema or has been processed. Since a rename-
@@ -599,7 +593,7 @@ static void addAggrTypedefs( Schema schema )
// 2D aggr's and higher only need type GenericAggr defined
// which is built-in. */
printf( "in addAggrTypedefs. %s is enum or select.\n", TYPEget_name( t ) );
//strncpy( nm, ClassName( TYPEget_name( t ) ), BUFSIZ );
/* strncpy( nm, ClassName( TYPEget_name( t ) ), BUFSIZ );
//printf("%s;%s",nm,TYPEget_ctype( t ));
//if( firsttime ) {
// fprintf( classes, "The first TIME\n" );
@@ -610,7 +604,7 @@ static void addAggrTypedefs( Schema schema )
//fprintf( classes, "typedef %s\t%s;\n",
// TYPEget_ctype( t ), nm );
//fprintf( classes, "typedef %s *\t%sH;\n", nm, nm );
//fprintf( classes, "typedef %s *\t%s_ptr;\n", nm, nm );
//fprintf( classes, "typedef %s *\t%s_ptr;\n", nm, nm ); */
}
}
SCOPEod
