Zio Tom's blog
Thoughts about programming languages
2023-12-23
This blog is no longer active
2015-01-21
LFI data analysis with FPC
In this post I describe my attempts to write a program for doing scientific data analysis on a High-Performance Cluster (HPC) using Free Pascal 2.6.4. The code is parallelized using MPI and is ~5 kLOC (thousands of lines of code) long.
Why Free Pascal?
This summer I had to implement a data analysis software for the forthcoming Planck/LFI polarization papers (due to appear in 2015). Since the task was quite orthogonal to what I had written before (in the sense that nobody else was going to help me in writing it, and I was not foreseeing the possibility of re-using old code), I felt free to pick the language I wanted, instead of sticking with C++ as usual. My first thoughts went to Nim (see my post Experiments with Nimrod — meanwhile, the name has been changed into Nim), but at the time I feared it was too unstable for the task. (I must say that the Nim team has released version 0.10.2, which feels much more mature than any previous release, and they have announced that version 1.0 will soon be delivered. I'll talk more about Nim in an upcoming post.)
In the end I decided to give Free Pascal a try, while waiting for Nim 1.0 to appear. The Pascal language has always been close to my heart: I started doing "serious" programming using Turbo Pascal 3.0, and I followed its evolution till Borland Pascal 7.0, which I never forgot (while I'm writing this, I'm gazing at a couple of Borland Pascal 7.0 manuals that are still standing on my bookshelf). What I most vividly remember of Borland Pascal 7.0 are two characteristics that C++ still lacks in 2015:
- A good module system. It's incredible how Turbo Pascal got it right already in Turbo Pascal 4.0 (1987)!
- Amazing compilation times.
Free Pascal provides a very good level of compatibility with Borland Pascal, and it shares the two advantages I just mentioned. Moreover, Free Pascal's dialect is an improvement over Turbo Pascal, because it provides some modern language constructs (e.g., generics, classes, …: these were first introduced in the Pascal language by Borland Pascal's successor, Delphi). Another big plus is its standard library, which is larger than C++'s and includes a few useful facilities:
- There is a proper `Format` function, similar to Python's `format`: it is incredible how plain C++ forces you either to install Boost Format or rely on weird stream functions like `setw`, `setfill`, `setprecision`, and so on. (See the short example after this list.)
- There is a class for parsing INI files (these are usually used for keeping configuration parameters, and I needed something similar for my project); a JSON library is included as well.
- I found it particularly handy to have a set of SQLite bindings bundled with the compiler, because part of the data I needed for my calculations was kept in this format.
- There is also an open-source clone of the Turbo Vision library bundled with Free Pascal! (However, I did not use this.)
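Here is a minimal example of a `Format` call, just to show the flavour (the format string and the values below are made up for this post, not taken from my code):

program FormatDemo;

{$mode objfpc}

uses SysUtils;  { Format lives in the SysUtils unit }

begin
  { Width/precision specifiers and the argument list work much like printf }
  WriteLn(Format('Temperature: %8.3f K (channel %d, "%s")',
                 [2.72548, 27, 'LFI27M']));
end.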
Characteristics of the code
Since the 2015 Planck data release hasn’t been published yet, I cannot reveal too many details about the code. Here is a short description of what it does:
- It must process a huge quantity of data (of the order of a terabyte, stored in thousands of files);
- It must be parallelized using MPI (because it is going to be run in a distributed-memory environment);
- The mathematics behind it is not too complicated: it just uses the basic arithmetic operations;
- It must produce FITS maps in the Healpix format.
Bindings
Before starting to write the code, I first had to implement the following units:
- A wrapper to the CFITSIO library (the standard library used to read/write FITS files). This was not hard, since it is extremely easy to write bindings to C libraries using Free Pascal;
- A set of functions to handle Healpix maps (it was not too difficult to write these functions in Pascal, rather than create a set of bindings);
- A wrapper to the MPI functions. This was the toughest part, as I discovered that different MPI libraries implement a few basic data structures using widely different memory representations (see this answer of mine on Stack Overflow). Compatibility with the MPI standard is typically granted by a set of C macros. Therefore, instead of having straight bindings to C functions, I had to implement an intermediate layer of C code that abstracted the interface to MPI itself. (Free Pascal provides bindings to MPICH, which is an implementation of MPI. Unfortunately, the clusters I am using support other MPI libraries that are too different for these bindings to be easily ported.)
Writing wrappers to C libraries is not difficult. Free Pascal implements the keyword `external`, which directly binds a function in a static library to a Pascal name. For example, you just have to use the following definition to bind the `ffopen` name to a function with the same name in the library `cfitsio` (under Unix, the name of the library file will of course be `libcfitsio.a`):
function ffopen(out Ptr : Pointer;
                FileName : PChar;
                IoMode : CInt;
                var Status : CInt) : CInt;
  cdecl; external 'cfitsio';
Creating bindings to CFITSIO was one of the most boring parts. Since a FITS file can contain tables of data of various types (integers, single/double precision floating-point numbers, booleans, strings…), for every function that accesses the rows of a table the CFITSIO library (a pure C library) provides a generic implementation that accepts a void pointer plus an integer variable specifying how to cast the pointer to something meaningful. For instance, the `ffgcv` function reads a number of rows from a file, and its prototype is the following:
int ffgcv(fitsfile * f, int datatype, int colnum, LONGLONG firstrow,
          LONGLONG firstelem, LONGLONG nelems, void * nullval,
          void * dest, int * anynull, int * status);
The interesting parts in this definition are the parameters `datatype`, `nullval`, and `dest`. The first one must be equal to one of the constants `CSHORT`, `CLONG`, `CSTRING`…, and it specifies how to interpret `nullval` (a pointer to a value to be used as default for those rows in the table which do not provide a value) and `dest` (which must point to an array of `nelems` elements that will be filled by `ffgcv`).
This kind of prototype is quite representative of CFITSIO's API: it is quite easy to make mistakes (e.g., by passing `datatype=CLONG` and making `dest` point to an array of `short` values), so I decided to make my bindings less error-prone. I implemented a number of `ReadColumn` procedures that accept different types of arrays as their input parameter (note: Free Pascal's extended mode allows function overloading):
procedure ReadColumn(var F : TFitsFile; ColumnNumber : Integer;
                     FirstRow : Int64; FirstElement : Int64;
                     var Elements : Array of String);
procedure ReadColumn(var F : TFitsFile; ColumnNumber : Integer;
                     FirstRow : Int64; FirstElement : Int64;
                     var Elements : Array of Boolean);
procedure ReadColumn(var F : TFitsFile; ColumnNumber : Integer;
                     FirstRow : Int64; FirstElement : Int64;
                     var Elements : Array of Uint8);
{ etc. }
Of course, this has been really boring to do! I am not sure whether I could have used Free Pascal's preprocessor to ease this implementation, but I preferred not to dig into this. Here I wished Free Pascal had metaprogramming capabilities: this is something Lisp has provided for ages (see Seibel's chapter about Common Lisp macros) and which new languages like Julia and Nim have wisely adopted. (To be fair, I think that allowing metaprogramming constructs in an existing language like Pascal is much harder than designing a new language with such constructs built in.)
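To give an idea of what each overload looks like, here is a minimal sketch of the `Double` variant. The `TFitsFile` record, the exact `ffgcv` binding, and the value of the datatype constant are assumptions made for this illustration and do not necessarily match my actual bindings:

{ Sketch only: the real unit in my code differs in several details }
unit FitsColumns;

{$mode objfpc}

interface

uses ctypes;

type
  TFitsFile = record
    Ptr : Pointer; { the fitsfile* handle returned by CFITSIO }
  end;

procedure ReadColumn(var F : TFitsFile; ColumnNumber : Integer;
                     FirstRow : Int64; FirstElement : Int64;
                     var Elements : Array of Double);

implementation

uses SysUtils;

const
  FITS_TYPE_DOUBLE = 82; { assumed CFITSIO type code for double-precision columns }

{ Raw binding to the generic CFITSIO routine }
function ffgcv(f : Pointer; datatype : CInt; colnum : CInt;
               firstrow, firstelem, nelems : Int64;
               nullval : Pointer; dest : Pointer;
               var anynull : CInt; var status : CInt) : CInt;
  cdecl; external 'cfitsio';

procedure ReadColumn(var F : TFitsFile; ColumnNumber : Integer;
                     FirstRow : Int64; FirstElement : Int64;
                     var Elements : Array of Double);
var
  NullVal : Double;
  AnyNull, Status : CInt;
begin
  NullVal := 0.0;
  Status := 0;
  { The datatype code is now tied to the element type of "Elements",
    so callers can no longer pass a mismatched array }
  ffgcv(F.Ptr, FITS_TYPE_DOUBLE, ColumnNumber, FirstRow, FirstElement,
        Length(Elements), @NullVal, @Elements[Low(Elements)], AnyNull, Status);
  if Status <> 0 then
    raise Exception.CreateFmt('CFITSIO error %d while reading a column', [Status]);
end;

end.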
The code
Writing the code was a really smooth process. Compilation was so fast that whenever I was unsure about a language feature or a function's syntax, I simply tried several variants of the code in quick write/compile cycles instead of Googling around. It's incredible how much your productivity can be boosted by negligible compilation times! (See below for some quantitative arguments.)
The code runs fast, probably as fast as C++, although I cannot provide a quantitative comparison as I do not have a C++ program which runs the same analysis for making measurements.
What I like about FP
Compilation times
As I mentioned before, the time required to compile code with Free Pascal is incredible, especially if you're used to the speed of a typical C++ compiler (I use GCC). For instance, recompiling 5838 LOC requires 0.4 seconds on my laptop. (I used FPC's `-B` flag to force the recompilation of everything: it is similar to the `-B` parameter in GNU Make.)
As a comparison, recompiling the legacy C++ code I was referring to above (10623 lines of code) requires 83 seconds. I’ll repeat it: 83 seconds. In this case the compilation speed is 127 LOC/sec, while Free Pascal in the example above was able to reach a speed of 14.6 kLOC/sec. Therefore, Free Pascal is two orders of magnitude faster than GCC in compiling my codes!
To understand why this happens, you should consider that my C++ code uses the C++ Standard Template Library and a few of the Boost libraries. Both make heavy use of templates, so they cannot be compiled separately but are instead literally copied into each of my source files by the C++ preprocessor (the details are not exact, but this should provide a fair representation of what's really happening). So the actual number of LOCs processed by GCC is far higher than the 10623 figure I provided above, because I should add the many thousands of lines taken from my `/usr/include` path. However, my point still holds: if I compile a program consisting of ~10000 LOC, I can expect to wait ~100 times longer with GCC than with Free Pascal.
You might object that GCC takes this huge amount of time only when it needs to recompile everything, and that this is not always the case. However, if I need to modify a header file that is included by many other source files in the project, this will trigger a recompilation of all of them, and the compilation times will not be much different. (The same would happen if I needed to change some compilation switch.) I checked this by touching just one file in my C++ project and triggering a `make` (which therefore recompiled just the touched source file and rebuilt the executable). There are 34 `.cpp` files and 38 `.hpp` (header) files, and I iterated this touch-and-recompile loop over all of them. The results are the following:
- If one `.cpp` file is modified, my computer takes on average 3.7 seconds to rebuild the executable, with a maximum of 7.8 seconds and a minimum of 0.8 seconds.
- If one `.hpp` file is modified, things obviously get worse. On average the compilation time is 10.7 seconds, and the extremes are 2.8 seconds and 21 seconds.
Given these numbers, the 0.4 seconds required by FPC to recompile a 5 kLOC project from scratch are even more amazing!
Units
Part of the huge speed achieved by Free Pascal is due to its sound module system, which is directly derived from Turbo Pascal (which in turn inherited it from Modula-2). It may feel a bit outdated to Python users, but it is still much better than C++'s archaic reliance on the `cpp` preprocessor.
Free Pascal programs are divided into source files called "units". The Free Pascal compiler is able to create a dependency tree of the units used by a program or by another unit, and it triggers recompilations only if needed. Thus, GNU Make is not strictly needed for producing an executable out of a set of source files: you just run `fpc main.pas`, and the compiler will automatically recompile every module (unit) used by `main.pas`. Moreover, FPC is smart enough to recompile only those source files that are newer than the corresponding object file. (I still use a Makefile because I need to compile the MPI middleware I mentioned above, and I want to pass a number of flags to the `fpc` compiler; but it is an extremely simple and short Makefile.)
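To make this concrete, here is a toy example (not taken from my analysis code): a unit and a program sitting in the same directory. Running `fpc main.pas` compiles `utils.pas` automatically, and later recompiles it only if it has changed:

{ File: utils.pas }
unit Utils;

{$mode objfpc}

interface

function Square(x : Double) : Double;

implementation

function Square(x : Double) : Double;
begin
  Result := x * x;
end;

end.

{ File: main.pas }
program Main;

{$mode objfpc}

uses Utils;  { FPC locates and (re)compiles utils.pas on its own }

begin
  WriteLn(Square(3.0) : 0 : 1);  { prints 9.0 }
end.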
What I don’t like about FP
FPC is really great, but like everything else it has its shortcomings. In this section I will describe a couple of them.
Bad segmentation faults can still happen
The first thing I discovered while programming in Free Pascal is that
"bad" segmentation faults can still happen! (By "bad" segmentation
fault I mean one of those crashes that leave you with a SIGSEGV
message on the terminal and no other clue about what triggered it.)
There are many reasons for this. One that is particularly subtle is the way static and dynamic arrays are implemented. A "static array" is a plain sequence of values that lie on adjacent memory locations and whose size is specified at compilation time, like the following:
var arr : Array[1..3] of Integer;
On the other hand, the size of a "dynamic array" is decided at runtime. (It does not need to change during the execution of the program: the important point here is that the compiler has no way of knowing its size before the program runs.) Here is an example:
var arr : Array of Integer;
begin
  SetLength(arr, 3);
  { ... }
end;
The former type of array is useful e.g. when you want to define a 3-D vector, as you already know that your vector will have 3 elements. But if you want to read a set of numbers from a file provided by the user, you surely need dynamic arrays.
Here comes the problem with my segmentation fault. I had some code which set the elements of an array to zero:
var arr : Array[1..10] of Integer;
begin
  FillChar(arr, SizeOf(Integer) * Length(arr), 0);
  { Now use "arr" }
end;
Then I changed the code so that the number of elements was decided at runtime:
var arr : Array of Integer;
begin
  SetLength(arr, getNumOfElementsFromSomewhereElse);
  FillChar(arr, SizeOf(Integer) * Length(arr), 0);  { Bang! }
end;
After this change, the code crashed. The problem, I learned, is due to the fact that the internal representation of a dynamic array is a record (i.e., the analogue of a C `struct` type) with two elements, like in the following example:
type
  DynamicArray = record
    numOfElements : Integer;
    ptr : Pointer;
  end;
So, when I call `FillChar` on a static array, the address of the actual array is passed to the procedure. But if the array is dynamic, `FillChar` gets the address of the record and overwrites `numOfElements`, `ptr`, plus a number of bytes that follow them!
Once I discovered the cause of this bug, it was easy to fix the code: just pass the first element of the array to `FillChar`:
{ This works both for static and dynamic arrays }
FillChar(arr[0], SizeOf(Integer) * Length(arr), 0);
The point is that the compiler does not produce any warning/hint if you forget the `[0]` and pass a dynamic array as the first parameter. And the program will crash without telling you which line caused the problem…
No advanced programming features
The language has improved a lot since Borland Pascal 7, but it still lacks some important features. One of the things I missed most is generic procedures/functions/records: Free Pascal 2.6.4 still supports only generic classes. (A class is not very different from a record, but the latter can be allocated on the stack, while a class always has to be allocated on the heap and properly freed when it is no longer needed.)
In the code I wrote I had these two structures:
type
  IntRange = record
    first, last : Integer;
  end;

  DoubleRange = record
    first, last : Double;
  end;
They are equivalent, yet in FPC 2.6.4 there is no mechanism to define them both using a single generic construct.
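The only workaround in FPC 2.6.4 is a generic class, which forces heap allocation (and explicit deallocation). The sketch below was written for this post, just to show what the workaround looks like:

program GenericRangeDemo;

{$mode objfpc}

type
  { FPC 2.6.4 supports generics only for classes, so instances live on the heap }
  generic TRange<T> = class
  public
    var
      First, Last : T;
    constructor Create(AFirst, ALast : T);
  end;

constructor TRange.Create(AFirst, ALast : T);
begin
  First := AFirst;
  Last := ALast;
end;

type
  TIntRange = specialize TRange<Integer>;
  TDoubleRange = specialize TRange<Double>;

var
  r : TIntRange;
begin
  r := TIntRange.Create(1, 10);
  WriteLn(r.First, '..', r.Last);
  r.Free;  { unlike a record on the stack, the instance must be freed explicitly }
end.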
Another thing I missed is metaprogramming facilities. (See above.)
Languages like C++ allow you to define variables wherever you want, and the newest ones also have some form of type inference. Even Ada (which is clearly derived from Pascal) has limited type inference for the variables used in `for` loops; in the following example, the `Elem` variable is declared within the body of a `for` loop, and the variable `i` does not need to be declared in advance because the compiler infers its type:
-- This is written in Ada
procedure LoopOverArr is
begin
   for i in Arr'Range loop
      declare
         Elem : Arr'Type := Arr (i);
      begin
         -- "Elem" is the i-th element of the array Arr
      end;
   end loop;
end LoopOverArr;
Unfortunately, Free Pascal does not provide any of these, and the loop above must be implemented in the following way:
procedure LoopOverArr;
var
  i : Integer;
  Elem : ArrType;
begin
  for i := Low(Arr) to High(Arr) do
  begin
    Elem := Arr[i];
    { ... }
  end;
end;
This does not look particularly annoying for short procedures like the one above. But if your procedure body is longer than a few lines, having the declaration of a variable widely separated from the place you’re actually using it affects readability.
Weirdness in the standard library
One thing that struck me is the lack of a sorting function in the standard library. The same applies to other handy algorithms as well (binary searches, looking for the minimum/maximum value in an array of arbitrary type…). I figure the motivation is the lack of support for generic procedures, but it is still quite amazing that in 2015 a language does not provide such facilities and forces you to roll your own implementation.
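For what it is worth, this is roughly what "rolling your own" looks like. The insertion sort below is a toy example written for this post, not the routine I actually used; without generic procedures, it must be duplicated for every element type:

program SortDemo;

{$mode objfpc}

{ Sort an array of Double in place using insertion sort }
procedure SortDoubles(var Values : Array of Double);
var
  i, j : Integer;
  Current : Double;
begin
  for i := Low(Values) + 1 to High(Values) do
  begin
    Current := Values[i];
    j := i - 1;
    { Shift larger elements to the right, then drop Current in place }
    while (j >= Low(Values)) and (Values[j] > Current) do
    begin
      Values[j + 1] := Values[j];
      Dec(j);
    end;
    Values[j + 1] := Current;
  end;
end;

var
  Data : Array[0..4] of Double = (3.0, 1.0, 4.0, 1.5, 2.0);
  k : Integer;
begin
  SortDoubles(Data);
  for k := Low(Data) to High(Data) do
    Write(Data[k] : 0 : 1, ' ');
  WriteLn;  { prints: 1.0 1.5 2.0 3.0 4.0 }
end.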
Another minor point is that a few low-level functions do not declare their arguments as you would expect. One nice feature of Free Pascal is the ability to qualify a procedure argument using `in`, `var`, or `out`:

- An `in` parameter is an input parameter which cannot be modified by the procedure.
- A `var` parameter is a parameter that is modified by the procedure (and it is therefore internally passed by reference, much like C++ `&` parameters).
- Finally, an `out` parameter is like a `var` parameter, but there is no need to initialize it before calling the procedure, because it is assumed that the initialization will be done by the procedure itself.
The problem I faced is the fact that a few low-level functions like `FillChar`, `Move`, and other similar procedures do not make proper use of such qualifiers, leading the compiler to produce strange messages. Consider this example:
program FillCharDemo;

{$mode objfpc}

procedure ZeroArray(out arr : array of Double);
begin
  FillChar(arr[Low(arr)], Length(arr) * SizeOf(Double), 0);
end;

var
  arr : Array[0..10] of Double;

begin
  ZeroArray(arr);
end.
Compiling this code using the `-vh` flag (show hints) produces the following hint:
$ fpc -vh fillchar_demo.pas
Hint: Start of reading config file /etc/fpc.cfg
Hint: End of reading config file /etc/fpc.cfg
Free Pascal Compiler version 2.6.4+dfsg-3 [2014/07/13] for x86_64
Copyright (c) 1993-2014 by Florian Klaempfl and others
Target OS: Linux for x86-64
Compiling fillchar_demo.pas
fillchar_demo.pas(7,14) Hint: Variable "arr" does not seem to be initialized
Linking fillchar_demo
/usr/bin/ld.bfd: warning: link.res contains output sections; did you forget -T?
16 lines compiled, 0.1 sec
3 hint(s) issued
The message `Variable "arr" does not seem to be initialized` refers to the fact that within `ZeroArray` we have used the `arr` variable without properly initializing it first. But the purpose of `FillChar` is to initialize every byte of its first parameter to some value! The problem is that the prototype of the `FillChar` procedure marks the first argument as `var` instead of `out` (had it been the latter, no hint would have been produced). There are motivations for this, but the basic fact is that almost every time you use functions like `FillChar` and `Move` you will get useless hints.
Possible workarounds are either to ignore hints (by avoiding the `-vh` flag) or to suppress hints within the body of `ZeroArray`:
procedure ZeroArray(out arr : array of Double);
begin
  {$hints off}
  FillChar(arr[Low(arr)], Length(arr) * SizeOf(Double), 0);
  {$hints on}
end;
None of these is completely satisfying, IMHO. But, as I said before, this is not a huge problem, once you know a workaround.
Conclusions
My overall experience with Free Pascal is positive. Its amazing compilation times and the good quality of the generated code make it a clear winner over GCC. I don't regret having used it for this new project of mine.
There are a few quirks in the language and in its standard library that might well be improved. But I understand that it is difficult to implement such changes when one of the compiler's selling features is a very high compatibility with legacy code (Free Pascal is able to painlessly compile code written for Borland Turbo Pascal for DOS!).
The number of available libraries does not match C's, nor could it. However, the amount of work required to create bindings to C libraries is quite limited, as Pascal is not that different from C in terms of data type representation and program logic. (It is surely easier than writing bindings for languages like, e.g., OCaml or Haskell.)
It will be interesting to see what the next releases bring: support for Android and for the JVM is being prepared.
Edit: As explained by TimJYoung on Hacker News, my explanation of why FillChar crashed when I passed a dynamic array to it is wrong. What really happens is that the dynamic array variable is simply a pointer to its first element. However, FillChar does not follow the pointer, but instead sets it, and all the bytes that follow it, to zero. Also, I would like to mention that generic procedures and functions are going to be incorporated into Free Pascal (unfortunately, they did not make it into the new 3.0.0 release).
Edit: I released the source code here: https://github.com/ziotom78/phi_d_phi_sky. Enjoy!
2014-07-21
C++11
In these weeks I am struggling with a large C++ codebase that I need to extend and port to a new architecture. Its original developers did not bother to comment the code, nor did they produce any documentation explaining the inner details of this beast. Variable and function names are sometimes unrelated to the true purpose of the objects they name. So my task is quite painful, and the characteristics of the C++ language are not helping me much. In this post I'll describe several problems I am finding with C++. It is a follow-up to my previous post, What Ada and D taught me about C++.
C++11 + Boost? Nice! (In theory)
I got quite interested when I saw the new features in the C++11 standard: the `auto` keyword, `enum class`, and vector initializers. Since C++11 is compatible with the old C++98 used in the code I am porting, I decided to use it whenever I had to modify/add something. In addition to the facilities provided by bare C++11, I knew I would need the Boost libraries, as I had to add support for property trees and for formatted messages, as well as automated tests.
"Standard" libraries? Oh my!
I started developing a few new modules using my laptop (Linux Mint 17, with GCC 4.8.2). Things went quite smoothly: C++11 is much nicer than the old C++98 I was used to, apart from the slow compilation times (once you have used Free Pascal, you keep comparing every compiler's speed to it!). But once I tried to deploy the code on the production platform, things immediately got much worse!
The platform for which I am porting this code is a cluster with a quite old version of GCC (4.4.6). I already knew that C++11 was not an option with GCC 4.4, so I had already compiled and installed GCC 4.8.0 under my home directory a few months ago. So I set up the environment variables in order to use my own GCC, and fired `make`.
BANG! A series of error messages involving `Boost::Spirit` (which I never used directly, but it is included by `Boost::Property_tree`) popped up on the screen. I think I set a new world record: how can an error message be thousands of characters long?!?
/usr/include/boost/property_tree/detail/json_parser_read.hpp: In
instantiation of ‘void boost::property_tree::json_parser::contex
t<Ptree>::a_literal_val::operator()(boost::property_tree::json_p
arser::context<Ptree>::It, boost::property_tree::json_parser::co
ntext<Ptree>::It) const [with Ptree = boost::property_tree::basi
c_ptree<std::basic_string<char>, std::basic_string<char> >; boos
t::property_tree::json_parser::context<Ptree>::It = __gnu_cxx::_
_normal_iterator<char*, std::vector<char, std::allocator<char> > >]’:
/usr/include/boost/spirit/home/classic/core/scanner/scanner.hpp:
148:30: required from ‘static void boost::spirit::classic::att
ributed_action_policy<boost::spirit::classic::nil_t>::call(const
ActorT&, boost::spirit::classic::nil_t, const IteratorT&, const
IteratorT&) [with ActorT = boost::property_tree::json_parser::co
ntext<boost::property_tree::basic_ptree<std::basic_string<char>,
std::basic_string<char> > >::a_literal_val; IteratorT = __gnu_cx
x::__normal_iterator<char*, std::vector<char, std::allocator<cha
r> > >]’
/usr/include/boost/spirit/home/classic/core/scanner/scanner.hpp:
163:74: required from ‘void boost::spirit::classic::action_pol
icy::do_action(const ActorT&, AttrT&, const IteratorT&, const It
eratorT&) const [with ActorT = boost::property_tree::json_parser
::context<boost::property_tree::basic_ptree<std::basic_string<ch
ar>, std::basic_string<char> > >::a_literal_val; AttrT = boost::
spirit::classic::nil_t; IteratorT = __gnu_cxx::__normal_iterator
<char*, std::vector<char, std::allocator<char> > >]’
/usr/include/boost/spirit/home/classic/core/composite/actions.hp
p:116:17: required from ‘typename boost::spirit::classic::pars
er_result<boost::spirit::classic::action<ParserT, ActionT>, Scan
nerT>::type boost::spirit::classic::action<ParserT, ActionT>::pa
rse(const ScannerT&) const [with ScannerT = boost::spirit::class
ic::scanner<__gnu_cxx::__normal_iterator<char*, std::vector<char
, std::allocator<char> > >, boost::spirit::classic::scanner_poli
cies<boost::spirit::classic::skip_parser_iteration_policy<boost:
:spirit::classic::alternative<boost::spirit::classic::alternativ
e<boost::spirit::classic::space_parser, boost::spirit::classic::
confix_parser<boost::spirit::classic::strlit<const char*>, boost
::spirit::classic::kleene_star<boost::spirit::classic::anychar_p
arser>, boost::spirit::classic::alternative<boost::spirit::class
ic::eol_parser, boost::spirit::classic::end_parser>, boost::spir
it::classic::unary_parser_category, boost::spirit::classic::non_
nested, boost::spirit::classic::is_lexeme> >, boost::spirit::cla
ssic::confix_parser<boost::spirit::classic::strlit<const char*>,
boost::spirit::classic::kleene_star<boost::spirit::classic::anyc
har_parser>, boost::spirit::classic::strlit<const char*>, boost:
:spirit::classic::unary_parser_category, boost::spirit::classic:
:non_nested, boost::spirit::classic::is_lexeme> >, boost::spirit
::classic::iteration_policy>, boost::spirit::classic::match_poli
cy, boost::spirit::classic::action_policy> >; ParserT = boost::s
pirit::classic::alternative<boost::spirit::classic::alternative<
boost::spirit::classic::alternative<boost::spirit::classic::rule
<boost::spirit::classic::scanner<__gnu_cxx::__normal_iterator<ch
ar*, std::vector<char, std::allocator<char> > >, boost::spirit::
classic::scanner_policies<boost::spirit::classic::skip_parser_it
eration_policy<boost::spirit::classic::alternative<boost::spirit
::classic::alternative<boost::spirit::classic::space_parser, boo
st::spirit::classic::confix_parser<boost::spirit::classic::strli
t<const char*>, boost::spirit::classic::kleene_star<boost::spiri
t::classic::anychar_parser>, boost::spirit::classic::alternative
<boost::spirit::classic::eol_parser, boost::spirit::classic::end
_parser>, boost::spirit::classic::unary_parser_category, boost::
spirit::classic::non_nested, boost::spirit::classic::is_lexeme>
>, boost::spirit::classic::confix_parser<boost::spirit::classic:
:strlit<const char*>, boost::spirit::classic::kleene_star<boost:
:spirit::classic::anychar_parser>, boost::spirit::classic::strli
t<const char*>, boost::spirit::classic::unary_parser_category, b
oost::spirit::classic::non_nested, boost::spirit::classic::is_le
xeme> >, boost::spirit::classic::iteration_policy>, boost::spiri
t::classic::match_policy, boost::spirit::classic::action_policy>
>, boost::spirit::classic::nil_t, boost::spirit::classic::nil_t>
, boost::spirit::classic::strlit<const char*> >, boost::spirit::
classic::strlit<const char*> >, boost::spirit::classic::strlit<c
onst char*> >; ActionT = boost::property_tree::json_parser::cont
ext<boost::property_tree::basic_ptree<std::basic_string<char>, s
td::basic_string<char> > >::a_literal_val; typename boost::spiri
t::classic::parser_result<boost::spirit::classic::action<ParserT
, ActionT>, ScannerT>::type = boost::spirit::classic::match<boos
t::spirit::classic::nil_t>]’
/usr/include/boost/spirit/home/classic/core/composite/alternativ
e.hpp:67:59: recursively required from ‘typename boost::spirit
::classic::parser_result<boost::spirit::classic::alternative<A,
B>, ScannerT>::type boost::spirit::classic::alternative<A, B>::p
arse(const ScannerT&) const [with ScannerT = boost::spirit::clas
sic::scanner<__gnu_cxx::__normal_iterator<char*, std::vector<cha
r, std::allocator<char> > >, boost::spirit::classic::scanner_pol
icies<boost::spirit::classic::skip_parser_iteration_policy<boost
::spirit::classic::alternative<boost::spirit::classic::alternati
ve<boost::spirit::classic::space_parser, boost::spirit::classic:
:confix_parser<boost::spirit::classic::strlit<const char*>, boos
t::spirit::classic::kleene_star<boost::spirit::classic::anychar_
parser>, boost::spirit::classic::alternative<boost::spirit::clas
sic::eol_parser, boost::spirit::classic::end_parser>, boost::spi
rit::classic::unary_parser_category, boost::spirit::classic::non
_nested, boost::spirit::classic::is_lexeme> >, boost::spirit::cl
assic::confix_parser<boost::spirit::classic::strlit<const char*>
, boost::spirit::classic::kleene_star<boost::spirit::classic::an
ychar_parser>, boost::spirit::classic::strlit<const char*>, boos
t::spirit::classic::unary_parser_category, boost::spirit::classi
c::non_nested, boost::spirit::classic::is_lexeme> >, boost::spir
it::classic::iteration_policy>, boost::spirit::classic::match_po
licy, boost::spirit::classic::action_policy> >; A = boost::spiri
t::classic::alternative<boost::spirit::classic::action<boost::sp
irit::classic::rule<boost::spirit::classic::scanner<__gnu_cxx::_
_normal_iterator<char*, std::vector<char, std::allocator<char> >
>, boost::spirit::classic::scanner_policies<boost::spirit::class
ic::skip_parser_iteration_policy<boost::spirit::classic::alterna
tive<boost::spirit::classic::alternative<boost::spirit::classic:
:space_parser, boost::spirit::classic::confix_parser<boost::spir
it::classic::strlit<const char*>, boost::spirit::classic::kleene
_star<boost::spirit::classic::anychar_parser>, boost::spirit::cl
assic::alternative<boost::spirit::classic::eol_parser, boost::sp
irit::classic::end_parser>, boost::spirit::classic::unary_parser
_category, boost::spirit::classic::non_nested, boost::spirit::cl
assic::is_lexeme> >, boost::spirit::classic::confix_parser<boost
::spirit::classic::strlit<const char*>, boost::spirit::classic::
kleene_star<boost::spirit::classic::anychar_parser>, boost::spir
it::classic::strlit<const char*>, boost::spirit::classic::unary_
parser_category, boost::spirit::classic::non_nested, boost::spir
it::classic::is_lexeme> >, boost::spirit::classic::iteration_pol
icy>, boost::spirit::classic::match_policy, boost::spirit::class
ic::action_policy> >, boost::spirit::classic::nil_t, boost::spir
it::classic::nil_t>, boost::property_tree::json_parser::context<
boost::property_tree::basic_ptree<std::basic_string<char>, std::
basic_string<char> > >::a_string_val>, boost::spirit::classic::a
ction<boost::spirit::classic::alternative<boost::spirit::classic
::alternative<boost::spirit::classic::alternative<boost::spirit:
:classic::rule<boost::spirit::classic::scanner<__gnu_cxx::__norm
al_iterator<char*, std::vector<char, std::allocator<char> > >, b
oost::spirit::classic::scanner_policies<boost::spirit::classic::
skip_parser_iteration_policy<boost::spirit::classic::alternative
<boost::spirit::classic::alternative<boost::spirit::classic::spa
ce_parser, boost::spirit::classic::confix_parser<boost::spirit::
classic::strlit<const char*>, boost::spirit::classic::kleene_sta
r<boost::spirit::classic::anychar_parser>, boost::spirit::classi
c::alternative<boost::spirit::classic::eol_parser, boost::spirit
::classic::end_parser>, boost::spirit::classic::unary_parser_cat
egory, boost::spirit::classic::non_nested, boost::spirit::classi
c::is_lexeme> >, boost::spirit::classic::confix_parser<boost::sp
irit::classic::strlit<const char*>, boost::spirit::classic::klee
ne_star<boost::spirit::classic::anychar_parser>, boost::spirit::
classic::strlit<const char*>, boost::spirit::classic::unary_pars
er_category, boost::spirit::classic::non_nested, boost::spirit::
classic::is_lexeme> >, boost::spirit::classic::iteration_policy>
, boost::spirit::classic::match_policy, boost::spirit::classic::
action_policy> >, boost::spirit::classic::nil_t, boost::spirit::
classic::nil_t>, boost::spirit::classic::strlit<const char*> >,
boost::spirit::classic::strlit<const char*> >, boost::spirit::cl
assic::strlit<const char*> >, boost::property_tree::json_parser:
:context<boost::property_tree::basic_ptree<std::basic_string<cha
r>, std::basic_string<char> > >::a_literal_val> >; B = boost::sp
irit::classic::rule<boost::spirit::classic::scanner<__gnu_cxx::_
_normal_iterator<char*, std::vector<char, std::allocator<char> >
>, boost::spirit::classic::scanner_policies<boost::spirit::class
ic::skip_parser_iteration_policy<boost::spirit::classic::alterna
tive<boost::spirit::classic::alternative<boost::spirit::classic:
:space_parser, boost::spirit::classic::confix_parser<boost::spir
it::classic::strlit<const char*>, boost::spirit::classic::kleene
_star<boost::spirit::classic::anychar_parser>, boost::spirit::cl
assic::alternative<boost::spirit::classic::eol_parser, boost::sp
irit::classic::end_parser>, boost::spirit::classic::unary_parser
_category, boost::spirit::classic::non_nested, boost::spirit::cl
assic::is_lexeme> >, boost::spirit::classic::confix_parser<boost
::spirit::classic::strlit<const char*>, boost::spirit::classic::
kleene_star<boost::spirit::classic::anychar_parser>, boost::spir
it::classic::strlit<const char*>, boost::spirit::classic::unary_
parser_category, boost::spirit::classic::non_nested, boost::spir
it::classic::is_lexeme> >, boost::spirit::classic::iteration_pol
icy>, boost::spirit::classic::match_policy, boost::spirit::class
ic::action_policy> >, boost::spirit::classic::nil_t, boost::spir
it::classic::nil_t>; typename boost::spirit::classic::parser_res
ult<boost::spirit::classic::alternative<A, B>, ScannerT>::type =
boost::spirit::classic::match<boost::spirit::classic::nil_t>]’
/usr/include/boost/spirit/home/classic/core/composite/alternativ
e.hpp:67:59: required from ‘typename boost::spirit::classic::p
arser_result<boost::spirit::classic::alternative<A, B>, ScannerT
>::type boost::spirit::classic::alternative<A, B>::parse(const S
cannerT&) const [with ScannerT = boost::spirit::classic::scanner
<__gnu_cxx::__normal_iterator<char*, std::vector<char, std::allo
cator<char> > >, boost::spirit::classic::scanner_policies<boost:
:spirit::classic::skip_parser_iteration_policy<boost::spirit::cl
assic::alternative<boost::spirit::classic::alternative<boost::sp
irit::classic::space_parser, boost::spirit::classic::confix_pars
er<boost::spirit::classic::strlit<const char*>, boost::spirit::c
lassic::kleene_star<boost::spirit::classic::anychar_parser>, boo
st::spirit::classic::alternative<boost::spirit::classic::eol_par
ser, boost::spirit::classic::end_parser>, boost::spirit::classic
::unary_parser_category, boost::spirit::classic::non_nested, boo
st::spirit::classic::is_lexeme> >, boost::spirit::classic::confi
x_parser<boost::spirit::classic::strlit<const char*>, boost::spi
rit::classic::kleene_star<boost::spirit::classic::anychar_parser
>, boost::spirit::classic::strlit<const char*>, boost::spirit::c
lassic::unary_parser_category, boost::spirit::classic::non_neste
d, boost::spirit::classic::is_lexeme> >, boost::spirit::classic:
:iteration_policy>, boost::spirit::classic::match_policy, boost:
:spirit::classic::action_policy> >; A = boost::spirit::classic::
alternative<boost::spirit::classic::alternative<boost::spirit::c
lassic::action<boost::spirit::classic::rule<boost::spirit::class
ic::scanner<__gnu_cxx::__normal_iterator<char*, std::vector<char
, std::allocator<char> > >, boost::spirit::classic::scanner_poli
cies<boost::spirit::classic::skip_parser_iteration_policy<boost:
:spirit::classic::alternative<boost::spirit::classic::alternativ
e<boost::spirit::classic::space_parser, boost::spirit::classic::
confix_parser<boost::spirit::classic::strlit<const char*>, boost
::spirit::classic::kleene_star<boost::spirit::classic::anychar_p
arser>, boost::spirit::classic::alternative<boost::spirit::class
ic::eol_parser, boost::spirit::classic::end_parser>, boost::spir
it::classic::unary_parser_category, boost::spirit::classic::non_
nested, boost::spirit::classic::is_lexeme> >, boost::spirit::cla
ssic::confix_parser<boost::spirit::classic::strlit<const char*>,
boost::spirit::classic::kleene_star<boost::spirit::classic::anyc
har_parser>, boost::spirit::classic::strlit<const char*>, boost:
:spirit::classic::unary_parser_category, boost::spirit::classic:
:non_nested, boost::spirit::classic::is_lexeme> >, boost::spirit
::classic::iteration_policy>, boost::spirit::classic::match_poli
cy, boost::spirit::classic::action_policy> >, boost::spirit::cla
ssic::nil_t, boost::spirit::classic::nil_t>, boost::property_tre
e::json_parser::context<boost::property_tree::basic_ptree<std::b
asic_string<char>, std::basic_string<char> > >::a_string_val>, b
oost::spirit::classic::action<boost::spirit::classic::alternativ
e<boost::spirit::classic::alternative<boost::spirit::classic::al
ternative<boost::spirit::classic::rule<boost::spirit::classic::s
canner<__gnu_cxx::__normal_iterator<char*, std::vector<char, std
::allocator<char> > >, boost::spirit::classic::scanner_policies<
boost::spirit::classic::skip_parser_iteration_policy<boost::spir
it::classic::alternative<boost::spirit::classic::alternative<boo
st::spirit::classic::space_parser, boost::spirit::classic::confi
x_parser<boost::spirit::classic::strlit<const char*>, boost::spi
rit::classic::kleene_star<boost::spirit::classic::anychar_parser
>, boost::spirit::classic::alternative<boost::spirit::classic::e
ol_parser, boost::spirit::classic::end_parser>, boost::spirit::c
lassic::unary_parser_category, boost::spirit::classic::non_neste
d, boost::spirit::classic::is_lexeme> >, boost::spirit::classic:
:confix_parser<boost::spirit::classic::strlit<const char*>, boos
t::spirit::classic::kleene_star<boost::spirit::classic::anychar_
parser>, boost::spirit::classic::strlit<const char*>, boost::spi
rit::classic::unary_parser_category, boost::spirit::classic::non
_nested, boost::spirit::classic::is_lexeme> >, boost::spirit::cl
assic::iteration_policy>, boost::spirit::classic::match_policy,
boost::spirit::classic::action_policy> >, boost::spirit::classic
::nil_t, boost::spirit::classic::nil_t>, boost::spirit::classic:
:strlit<const char*> >, boost::spirit::classic::strlit<const cha
r*> >, boost::spirit::classic::strlit<const char*> >, boost::pro
perty_tree::json_parser::context<boost::property_tree::basic_ptr
ee<std::basic_string<char>, std::basic_string<char> > >::a_liter
al_val> >, boost::spirit::classic::rule<boost::spirit::classic::
scanner<__gnu_cxx::__normal_iterator<char*, std::vector<char, st
d::allocator<char> > >, boost::spirit::classic::scanner_policies
<boost::spirit::classic::skip_parser_iteration_policy<boost::spi
rit::classic::alternative<boost::spirit::classic::alternative<bo
ost::spirit::classic::space_parser, boost::spirit::classic::conf
ix_parser<boost::spirit::classic::strlit<const char*>, boost::sp
irit::classic::kleene_star<boost::spirit::classic::anychar_parse
r>, boost::spirit::classic::alternative<boost::spirit::classic::
eol_parser, boost::spirit::classic::end_parser>, boost::spirit::
classic::unary_parser_category, boost::spirit::classic::non_nest
ed, boost::spirit::classic::is_lexeme> >, boost::spirit::classic
::confix_parser<boost::spirit::classic::strlit<const char*>, boo
st::spirit::classic::kleene_star<boost::spirit::classic::anychar
_parser>, boost::spirit::classic::strlit<const char*>, boost::sp
irit::classic::unary_parser_category, boost::spirit::classic::no
n_nested, boost::spirit::classic::is_lexeme> >, boost::spirit::c
lassic::iteration_policy>, boost::spirit::classic::match_policy,
boost::spirit::classic::action_policy> >, boost::spirit::classic
::nil_t, boost::spirit::classic::nil_t> >; B = boost::spirit::cl
assic::rule<boost::spirit::classic::scanner<__gnu_cxx::__normal_
iterator<char*, std::vector<char, std::allocator<char> > >, boos
t::spirit::classic::scanner_policies<boost::spirit::classic::ski
p_parser_iteration_policy<boost::spirit::classic::alternative<bo
ost::spirit::classic::alternative<boost::spirit::classic::space_
parser, boost::spirit::classic::confix_parser<boost::spirit::cla
ssic::strlit<const char*>, boost::spirit::classic::kleene_star<b
oost::spirit::classic::anychar_parser>, boost::spirit::classic::
alternative<boost::spirit::classic::eol_parser, boost::spirit::c
lassic::end_parser>, boost::spirit::classic::unary_parser_catego
ry, boost::spirit::classic::non_nested, boost::spirit::classic::
is_lexeme> >, boost::spirit::classic::confix_parser<boost::spiri
t::classic::strlit<const char*>, boost::spirit::classic::kleene_
star<boost::spirit::classic::anychar_parser>, boost::spirit::cla
ssic::strlit<const char*>, boost::spirit::classic::unary_parser_
category, boost::spirit::classic::non_nested, boost::spirit::cla
ssic::is_lexeme> >, boost::spirit::classic::iteration_policy>, b
oost::spirit::classic::match_policy, boost::spirit::classic::act
ion_policy> >, boost::spirit::classic::nil_t, boost::spirit::cla
ssic::nil_t>; typename boost::spirit::classic::parser_result<boo
st::spirit::classic::alternative<A, B>, ScannerT>::type = boost:
:spirit::classic::match<boost::spirit::classic::nil_t>]’
/usr/include/boost/spirit/home/classic/core/non_terminal/impl/ru
le.ipp:240:36: required from ‘typename boost::spirit::classic:
:match_result<ScannerT, ContextResultT>::type boost::spirit::cla
ssic::impl::concrete_parser<ParserT, ScannerT, AttrT>::do_parse_
virtual(const ScannerT&) const [with ParserT = boost::spirit::cl
assic::alternative<boost::spirit::classic::alternative<boost::sp
irit::classic::alternative<boost::spirit::classic::action<boost:
:spirit::classic::rule<boost::spirit::classic::scanner<__gnu_cxx
::__normal_iterator<char*, std::vector<char, std::allocator<char
> > >, boost::spirit::classic::scanner_policies<boost::spirit::c
lassic::skip_parser_iteration_policy<boost::spirit::classic::alt
ernative<boost::spirit::classic::alternative<boost::spirit::clas
sic::space_parser, boost::spirit::classic::confix_parser<boost::
spirit::classic::strlit<const char*>, boost::spirit::classic::kl
eene_star<boost::spirit::classic::anychar_parser>, boost::spirit
::classic::alternative<boost::spirit::classic::eol_parser, boost
::spirit::classic::end_parser>, boost::spirit::classic::unary_pa
rser_category, boost::spirit::classic::non_nested, boost::spirit
::classic::is_lexeme> >, boost::spirit::classic::confix_parser<b
oost::spirit::classic::strlit<const char*>, boost::spirit::class
ic::kleene_star<boost::spirit::classic::anychar_parser>, boost::
spirit::classic::strlit<const char*>, boost::spirit::classic::un
ary_parser_category, boost::spirit::classic::non_nested, boost::
spirit::classic::is_lexeme> >, boost::spirit::classic::iteration
_policy>, boost::spirit::classic::match_policy, boost::spirit::c
lassic::action_policy> >, boost::spirit::classic::nil_t, boost::
spirit::classic::nil_t>, boost::property_tree::json_parser::cont
ext<boost::property_tree::basic_ptree<std::basic_string<char>, s
td::basic_string<char> > >::a_string_val>, boost::spirit::classi
c::action<boost::spirit::classic::alternative<boost::spirit::cla
ssic::alternative<boost::spirit::classic::alternative<boost::spi
rit::classic::rule<boost::spirit::classic::scanner<__gnu_cxx::__
normal_iterator<char*, std::vector<char, std::allocator<char> >
>, boost::spirit::classic::scanner_policies<boost::spirit::class
ic::skip_parser_iteration_policy<boost::spirit::classic::alterna
tive<boost::spirit::classic::alternative<boost::spirit::classic:
:space_parser, boost::spirit::classic::confix_parser<boost::spir
it::classic::strlit<const char*>, boost::spirit::classic::kleene
_star<boost::spirit::classic::anychar_parser>, boost::spirit::cl
assic::alternative<boost::spirit::classic::eol_parser, boost::sp
irit::classic::end_parser>, boost::spirit::classic::unary_parser
_category, boost::spirit::classic::non_nested, boost::spirit::cl
assic::is_lexeme> >, boost::spirit::classic::confix_parser<boost
::spirit::classic::strlit<const char*>, boost::spirit::classic::
kleene_star<boost::spirit::classic::anychar_parser>, boost::spir
it::classic::strlit<const char*>, boost::spirit::classic::unary_
parser_category, boost::spirit::classic::non_nested, boost::spir
it::classic::is_lexeme> >, boost::spirit::classic::iteration_pol
icy>, boost::spirit::classic::match_policy, boost::spirit::class
ic::action_policy> >, boost::spirit::classic::nil_t, boost::spir
it::classic::nil_t>, boost::spirit::classic::strlit<const char*>
>, boost::spirit::classic::strlit<const char*> >, boost::spirit:
:classic::strlit<const char*> >, boost::property_tree::json_pars
er::context<boost::property_tree::basic_ptree<std::basic_string<
char>, std::basic_string<char> > >::a_literal_val> >, boost::spi
rit::classic::rule<boost::spirit::classic::scanner<__gnu_cxx::__
normal_iterator<char*, std::vector<char, std::allocator<char> >
>, boost::spirit::classic::scanner_policies<boost::spirit::class
ic::skip_parser_iteration_policy<boost::spirit::classic::alterna
tive<boost::spirit::classic::alternative<boost::spirit::classic:
:space_parser, boost::spirit::classic::confix_parser<boost::spir
it::classic::strlit<const char*>, boost::spirit::classic::kleene
_star<boost::spirit::classic::anychar_parser>, boost::spirit::cl
assic::alternative<boost::spirit::classic::eol_parser, boost::sp
irit::classic::end_parser>, boost::spirit::classic::unary_parser
_category, boost::spirit::classic::non_nested, boost::spirit::cl
assic::is_lexeme> >, boost::spirit::classic::confix_parser<boost
::spirit::classic::strlit<const char*>, boost::spirit::classic::
kleene_star<boost::spirit::classic::anychar_parser>, boost::spir
it::classic::strlit<const char*>, boost::spirit::classic::unary_
parser_category, boost::spirit::classic::non_nested, boost::spir
it::classic::is_lexeme> >, boost::spirit::classic::iteration_pol
icy>, boost::spirit::classic::match_policy, boost::spirit::class
ic::action_policy> >, boost::spirit::classic::nil_t, boost::spir
it::classic::nil_t> >, boost::spirit::classic::rule<boost::spiri
t::classic::scanner<__gnu_cxx::__normal_iterator<char*, std::vec
tor<char, std::allocator<char> > >, boost::spirit::classic::scan
ner_policies<boost::spirit::classic::skip_parser_iteration_polic
y<boost::spirit::classic::alternative<boost::spirit::classic::al
ternative<boost::spirit::classic::space_parser, boost::spirit::c
lassic::confix_parser<boost::spirit::classic::strlit<const char*
>, boost::spirit::classic::kleene_star<boost::spirit::classic::a
nychar_parser>, boost::spirit::classic::alternative<boost::spiri
t::classic::eol_parser, boost::spirit::classic::end_parser>, boo
st::spirit::classic::unary_parser_category, boost::spirit::class
ic::non_nested, boost::spirit::classic::is_lexeme> >, boost::spi
rit::classic::confix_parser<boost::spirit::classic::strlit<const
char*>, boost::spirit::classic::kleene_star<boost::spirit::class
ic::anychar_parser>, boost::spirit::classic::strlit<const char*>
, boost::spirit::classic::unary_parser_category, boost::spirit::
classic::non_nested, boost::spirit::classic::is_lexeme> >, boost
::spirit::classic::iteration_policy>, boost::spirit::classic::ma
tch_policy, boost::spirit::classic::action_policy> >, boost::spi
rit::classic::nil_t, boost::spirit::classic::nil_t> >; ScannerT
= boost::spirit::classic::scanner<__gnu_cxx::__normal_iterator<c
har*, std::vector<char, std::allocator<char> > >, boost::spirit:
:classic::scanner_policies<boost::spirit::classic::skip_parser_i
teration_policy<boost::spirit::classic::alternative<boost::spiri
t::classic::alternative<boost::spirit::classic::space_parser, bo
ost::spirit::classic::confix_parser<boost::spirit::classic::strl
it<const char*>, boost::spirit::classic::kleene_star<boost::spir
it::classic::anychar_parser>, boost::spirit::classic::alternativ
e<boost::spirit::classic::eol_parser, boost::spirit::classic::en
d_parser>, boost::spirit::classic::unary_parser_category, boost:
:spirit::classic::non_nested, boost::spirit::classic::is_lexeme>
>, boost::spirit::classic::confix_parser<boost::spirit::classic:
:strlit<const char*>, boost::spirit::classic::kleene_star<boost:
:spirit::classic::anychar_parser>, boost::spirit::classic::strli
t<const char*>, boost::spirit::classic::unary_parser_category, b
oost::spirit::classic::non_nested, boost::spirit::classic::is_le
xeme> >, boost::spirit::classic::iteration_policy>, boost::spiri
t::classic::match_policy, boost::spirit::classic::action_policy>
>; AttrT = boost::spirit::classic::nil_t; typename boost::spirit
::classic::match_result<ScannerT, ContextResultT>::type = boost:
:spirit::classic::match<boost::spirit::classic::nil_t>]’
(I artificially inserted newlines every 64 characters.) This is pure madness.
After some googling, I discovered that this kind of problem is most likely caused by my Boost distribution (1.41) being older than GCC (4.8.0). So I had to download and install Boost's latest version (1.55) in order to make my code compile in the production environment. And this was not easy: despite the fact that AutoConf was able to use the newer Boost library installed under my home directory, the compiler kept trying to `#include` files from `/usr/include`. I had to uninstall the system Boost header files in order to make it compile!
I think this story shows that a de facto standard like Boost cannot be kept separated from the C++ compiler. I know how C++ standardisation works, but from the point of view of a user, having such a huge number of useful libraries not included in the C++ standard is utterly unacceptable. Boost provides a number of indispensable facilities:
- Unit test library
- JSON parsing (sort of)
- Formatted strings
- File system access routines
- Priority queues
- Logging library
- Multiprecision arithmetic
- Library to parse program options provided through the command line
- Regular expressions (I know, the C++11 standard has them, but GCC has yet to implement them.)
I cannot resist underlining that the standard libraries provided by Free Pascal, D, and Ada already provide many (if not all) of these. Even languages yet to be released like Nimrod and Rust provide much more than C++11!
Headers and dependencies
As I mentioned in one of my previous posts, one of C++'s most serious problems is the way header files are handled. As a short recap, consider this tiny library to add two numbers:
// File: sum.hpp
int sum(int a, int b);
// File: sum.cpp
int sum(int a, int b) {
return a + b;
}
This library is used by the following program:
// File: main.cpp
#include "sum.hpp"
#include <iostream>
int main() {
int a = 1, b = 2;
std::cout << "The sum of " << a << " and " << b << " is "
<< sum(a, b);
return 0;
}
A very simple `Makefile` is used to compile the program and the library:
main: main.o sum.o
(There is no need to call `g++` explicitly: GNU Make is smart enough to do that.) Now, suppose somebody changes file `sum.hpp` in order to make calculations using floating-point variables, but they forget to update `main.cpp`:
// File: sum.hpp
double sum(double a, double b);
// File: sum.cpp
double sum(double a, double b) {
return a + b;
}
When I run `make`, it will detect that `sum.cpp` has changed and will therefore recompile it. However, `make` has no way to know that `main.cpp` needs to be recompiled too (because it depends on `sum.hpp`, which has changed). Therefore, `main.o` will be left untouched, still believing that `sum`'s arguments are two four-byte integers instead of two floating-point variables. Random errors are very likely to occur!
Several methods have been devised to make GNU Make aware of the dependency of source files on headers. In the case of my codebase, I chose to use GNU Automake (this was far from trivial too, but I will not discuss this topic here). Such a burden should be removed from the programmer's back and be managed by the compiler itself.
Compare this situation with Ada's packages, D's modules, and Free Pascal's units. In these cases, the compiler is able to understand exactly which modules need to be recompiled, without even the need for a `Makefile`. (Nimrod has the same ability.)
Compilation times
I already mentioned above that compilation times are still horrible with C++11. This has a number of causes. In order of importance, they are:
- Boost heavily relies on templates, which in principle should be compiled every time you `#include` them in your code. (Modern C++ compilers can use precompiled headers; however, their use is tricky.)
- Apart from Boost, every header file needs to be compiled every time you include it. (But the header files used in my app are not as long as Boost's.)
- C++ is a complex language, and therefore it takes more time to parse it than others.
The primary cause for this is the baroque usage of the C/C++ preprocessor to `#include` header files. A new module system able to replace the preprocessor would surely help here! Unfortunately, despite some interesting attempts, this was not included in the C++11 standard. (This is apparently not due to a lack of motivation, but to the difficulty of devising a solution which does not break compatibility when dealing with preprocessor macros. So the preprocessor is to be blamed here too.) The approach followed by the Clang developers seems promising (by the way, their page nicely sums up the current state of things with the C++11 preprocessor), and the C++ standardisation committee is trying to have such a module system ready in time for C++17 (they already know it will not be ready for C++14).
Clang's proposal is described in this presentation by Doug Gregor. That's very nice, and it is exactly what C++ needs. But I cannot help thinking that similar solutions were already present in the first specification of the Ada language (1983, the same year C++ appeared). And Borland introduced units in Turbo Pascal 4 (1987). Is it possible that the latest C++ standard, released in 2011, still lacks a decent solution to a problem for which much sounder solutions were already available 30 years ago?!?
In the last few weeks, I have often found myself thinking of porting this gargantuan C++ codebase to Pascal or Ada. Surely it would require a huge amount of work to rewrite everything from scratch, but it would nevertheless be a very interesting exercise!
2014-01-17
Experiments with Nimrod
In the last months I have studied with great interest three new (imperative) languages: Nimrod, Go, and Rust. I find them interesting because they promise to reach the same speed as C++, but without some of its most annoying features. They are all less verbose than Ada, but at the same time retain some of its clarity and strictness. At the moment I am trying to use them in some small/medium-sized projects, in order to better understand their advantages and their limits.
In this post I am going to describe my experience with Nimrod. The language is still in development (the latest release is 0.9.2), but it seems it has reached a pretty stable state. It has a number of interesting features:
- Its syntax is similar to C and Pascal, but it uses indentation as a way to delimit blocks of code (in the same way as Python does).
- The performance of Nimrod programs is usually the same as their C counterparts.
- It is a strongly typed language (i.e., you cannot sum integers and floats without using casting).
- Large programs can be split into packages, which work pretty much like Python.
- The language supports garbage collection, so that memory management is easier than in C.
- It features a LISP-like macro system!
- Like my old friend Chicken Scheme, it compiles sources to C and then invokes a C compiler. This means that writing bindings to C libraries is extremely easy.
I decided to use Nimrod to write a small program (~500 lines of code) to perform a simple task: determine at which time a celestial source in the sky was observed by the Planck/LFI radiometers. The process is quite easy to understand:
- The program reads the so-called "pointing information", which tells which sky direction each LFI horn was pointing to at a given time. It is a very large table (tens of GB), with four columns: the time of the observation, the ecliptic latitude, the ecliptic longitude, and the rotation of the antenna around the observing direction.
- Pointing information is divided into many files, one per day, and I only need to know on which days an object was observed. Therefore, for each file X the program computes how far every observing direction it contains is from the celestial source. If at least one of these distances is smaller than the angular resolution of the instrument, the name of file X is saved in a list.
- When all the files have been read, the program prints the list built during the process.
Let's see how I implemented this using Nimrod.
Basic trigonometry
Implementing the trigonometric functions used to calculate the angular distance between two positions was fairly trivial: I first convert each direction into a vector and then take the dot product of the two vectors. So I implemented two types: the first one represents an angular direction in ecliptic coordinates, and the other one represents a vector in 3D space:
import math
type
TAngularPosition* = tuple[colatitude, longitude: float] # In radians
TVector* = tuple[x, y, z: float]
Then I overloaded the multiplication operator * in order to compute the dot product between two vectors:
proc `*` (v1, v2: TVector): float {. inline .} =
return (v1.x * v2.x) + (v1.y * v2.y) + (v1.z * v2.z)
Note the use of {. inline .}, which avoids the cost of a function call.
Then I implemented a function to convert a TAngularPosition object into a TVector:
proc AngToVec*(pos: TAngularPosition): TVector {. inline .} =
let sin_theta = math.sin(pos.colatitude)
result = (sin_theta * math.cos(pos.longitude),
sin_theta * math.sin(pos.longitude),
math.cos(pos.colatitude))
(The purpose of let is to introduce a constant whose value is defined at runtime. Changing the value of sin_theta after its initial assignment is forbidden.)
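Here is a tiny example of mine (not taken from the program) that contrasts let with an ordinary variable:

import math

let sinValue = math.sin(0.5)   # computed at runtime, then frozen
var accum = sinValue           # a true variable: reassignment is allowed
accum = accum + 1.0
# sinValue = 0.0               # rejected by the compiler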
Finally, I implemented the function which computes the angular distance (in radians) between two TAngularPosition values (it implicitly relies on the fact that the vectors returned by AngToVec have length one):
proc AngularDistance*(pos1, pos2 : TAngularPosition): float =
let vec1 = AngToVec(pos1)
let vec2 = AngToVec(pos2)
result = math.arccos(vec1 * vec2)
At the end of the file I implemented a number of unit tests to check the validity of my implementation:
when isMainModule:
const TOLERANCE = 1.0e-7
let ang1 = (colatitude: 0.1, longitude: 0.2)
let ang2 = (colatitude: 0.4, longitude: 0.6)
let vec1 = AngToVec(ang1)
let vec2 = AngToVec(ang2)
assert(abs(vec1.x - 0.09784340) < TOLERANCE)
assert(abs(vec1.y - 0.01983384) < TOLERANCE)
assert(abs(vec1.z - 0.99500417) < TOLERANCE)
assert(abs(vec2.x - 0.32140083) < TOLERANCE)
assert(abs(vec2.y - 0.21988214) < TOLERANCE)
assert(abs(vec2.z - 0.92106099) < TOLERANCE)
let dist = AngularDistance(ang1, ang2)
assert(abs(dist - 0.31021624) < TOLERANCE)
Once I saved these definitions into positions.nim, I was able to run the unit tests simply by running
$ nimrod compile --run positions.nim
On the other hand, including the line
import positions
in another Nimrod source file imports these definitions. This is almost the same as in Python, with the difference that Nimrod does not include the unit tests in the compilation if positions.nim is imported into another file. (It is able to do this because when isMainModule is evaluated at compile time: it is like C's #if, but much less error-prone.)
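For instance, a made-up client file like the following (it is not part of the real program) imports the definitions and skips the tests:

# checkdist.nim -- a made-up example
import positions

let ang1 = (colatitude: 0.1, longitude: 0.2)
let ang2 = (colatitude: 0.4, longitude: 0.6)
echo AngularDistance(ang1, ang2)   # prints a value close to 0.31021624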
Binding the program to CFITSIO
Since LFI's pointing information is saved in FITS files, I had to write a set of bindings for the CFITSIO library. However, as I said above, in Nimrod it is extremely easy to write bindings to C libraries.
First of all, I defined the name of the dynamic library for CFITSIO. I used the when construct already seen above (although I ran this code only on a Linux machine):
when defined(windows):
const LibraryName = "cfitsio.dll"
elif defined(macosx):
const LibraryName = "libcfitsio(|.0).dylib"
else:
const LibraryName = "libcfitsio.so"
(const declares a constant whose value must be fixed at compilation time.)
Then I defined a data type for CFITSIO's fitsptr, the basic data structure that represents a FITS file:
type
TInternalFitsStruct = pointer
PInternalFitsStruct = ptr TInternalFitsStruct
TFitsFile* {. final .} = object
file: TInternalFitsStruct
The type TInternalFitsStruct is a raw pointer to a fitsptr object, while TFitsFile is a high-level type which is going to be exported outside the module (hence the * at the end of its name).
The next step is to define how errors must be handled. Each CFITSIO function returns an integer error code, but we want to take advantage of Nimrod's exception mechanism:
type
EFitsException* = object of E_Base
code: int
message: string
proc fitsGetErrStatus(statusCode: int,
errorText: array[0..30, char]) {. cdecl,
dynlib: LibraryName,
importc: "ffgerr" .}
proc raiseFitsException(statusCode: int) {. noinline, noreturn .} =
var errorText: array[0..30, char]
fitsGetErrStatus(statusCode, errorText)
raise newException(EFitsException, $errorText)
This code shows how to bind to a C function (in this case, ffgerr, which fills errorText with a string describing the error represented by statusCode): as you can see, it is enough to append {. cdecl, dynlib: LibraryName, importc: "ffgerr" .} to the definition. Note that fitsGetErrStatus is not a wrapper around ffgerr: it is ffgerr. (Unlike let, var declares a true variable, i.e., one whose value can change during the execution of the program.)
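To show how little boilerplate is needed, here is a toy binding of mine (it is not part of the program) to the C function strlen; in this case I link through the C header instead of a dynamic library:

proc cStringLength(s: cstring): int {. cdecl, importc: "strlen",
                                       header: "<string.h>" .}

echo cStringLength("Planck/LFI")   # prints 10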
At this point we can write a binding to the ffopen function:
type
TIOMode* = enum
ReadOnly = 0, ReadWrite = 1
proc fitsOpenFile(filePtr: PInternalFitsStruct,
fileName: cstring,
ioMode: TIOMode,
status: ptr int): int {. cdecl,
dynlib: LibraryName,
importc: "ffopen" .}
proc OpenFile*(fileName: string, ioMode: TIOMode): TFitsFile =
var status = 0
result.file = nil
if fitsOpenFile(addr(result.file), fileName, ioMode, addr(status)) != 0:
raiseFitsException(status)
The function OpenFile is the high-level equivalent of ffopen, since it uses Nimrod's native data types (string and enum) and it raises exceptions when errors occur. As above, OpenFile is the only procedure to be exported, because of the *.
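Here is a minimal usage sketch (the file name is made up):

try:
  var f = OpenFile("pointings_day_0091.fits", ReadOnly)
  # ... read the tables, then CloseFile(f) ...
except EFitsException:
  echo("Unable to open the file: ", getCurrentExceptionMsg())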
I wrote bindings for many other CFITSIO functions, but what I've shown here should already give an idea of how easy it is to interface with C.
Reading data from FITS files
It is quite easy to read the FITS files that contain Planck/LFI pointings using CFITSIO: the files contain one large table with the time and the ecliptic coordinates in separate columns, and CFITSIO provides a number of functions to read columns into C arrays.
It is however quite tricky to read multiple columns efficiently: CFITSIO provides functions to read one column of data (or a subset of rows in that column), but FITS tables are stored row by row. Therefore, in order to read N columns, CFITSIO must traverse the file from beginning to end N times: this is clearly inefficient.
CFITSIO provides a trick to overcome this limitation: since it reads data in chunks of a few KB, many elements from contiguous columns are present at the same time in its cache. CFITSIO is able to tell how many rows fill its cache (using the ffgrsz function, which I bound to GetOptimalNumberOfRowsForIO), so I implemented the following strategy:
type TPositions = seq[positions.TAngularPosition]
proc ReadAnglesFromFile(fileName: string): TPositions =
var f = OpenTable(fileName, ReadOnly)
let numOfRows = GetNumberOfRows(f)
newSeq(result, numOfRows)
let rowsPerChunk = GetOptimalNumberOfRowsForIO(f)
var chunkOfData: seq[float]
newSeq(chunkOfData, rowsPerChunk)
var firstRow = 1
var rowsToRead = numOfRows
var done = false
while not done:
if rowsToRead < rowsPerChunk:
setLen(chunkOfData, rowsToRead)
ReadColumnOfFloats(f,
colNum = 3,
firstRow = firstRow,
firstElem = 1,
destArray = chunkOfData)
for idx in 0..len(chunkOfData)-1:
result[firstRow - 1 + idx].colatitude = chunkOfData[idx]
ReadColumnOfFloats(f,
colNum = 4,
firstRow = firstRow,
firstElem = 1,
destArray = chunkOfData)
for idx in 0..len(chunkOfData)-1:
result[firstRow - 1 + idx].longitude = chunkOfData[idx]
firstRow += len(chunkOfData)
rowsToRead -= len(chunkOfData)
if rowsToRead <= 0:
break
CloseFile(f)
(The functions OpenTable, GetNumberOfRows, GetOptimalNumberOfRowsForIO, ReadColumnOfFloats, and CloseFile are my bindings to analogous functions in CFITSIO.) The logic of the code should be easy to follow: the number of rows read by each call to ReadColumnOfFloats (the length of chunkOfData) is always smaller than or equal to the value returned by GetOptimalNumberOfRowsForIO, so that each call either reads exactly one chunk and stores it into the cache, or it reuses the information already present in the cache without further I/O operations. In each iteration of the while not done: ... loop, ReadColumnOfFloats is called twice, first to read the ecliptic latitude (column 3) and then the longitude (column 4). (There is also a column containing times, but you might recall that I am not interested in reading it for the purpose of this analysis.)
To store the arrays I used the seq type, which is basically a dynamic array. Unfortunately there does not seem to be a reliable way to use array slices in Nimrod. It would have been very useful to rewrite this code:
for idx in 0..len(chunkOfData)-1:
result[firstRow - 1 + idx].colatitude = chunkOfData[idx]
into something like this:
result[firstRow:(firstRow + len(chunkOfData))] = chunkOfData
(this would have allowed Nimrod to easily vectorize such expressions on modern CPUs). Such slicing features are present in Ada and Fortran, and they are handy in number-crunching programs like this one.
The last thing to do is to sew things together and build the main loop of the program.
Building the list of matches
Here is the (abridged) main loop of the program. I am skipping the definitions at the beginning of the file: there I simply initialize a few variables like objectPosition and tolerance according to the values specified on the command line (a sketch of what this might look like follows after the loop).
var matchFound = false
var matchFileNames: seq[string] = @[]
for paramIndex in 4..os.ParamCount():
let curFile = os.ParamStr(paramIndex)
echo("Reading file ", curFile)
var angles: seq[TAngularPosition]
try:
angles = ReadAnglesFromFile(curFile)
except EFitsException:
let msg = getCurrentExceptionMsg()
echo "Error reading file " & curFile & ", reason: ", msg
continue
var minPos: positions.TAngularPosition
var minDist = 2 * math.pi
for idx in 0..len(angles)-1:
let dist = positions.AngularDistance(angles[idx], objectPosition)
if dist < minDist:
minPos = angles[idx]
minDist = dist
if minDist < tolerance:
matchFound = true
add(matchFileNames, curFile)
if matchFound:
echo "Matches found:"
for i in 0..len(matchFileNames)-1:
echo " " & matchFileNames[i]
Nimrod: the good parts
Overall, my experience with Nimrod was nice. I was able to write this program in one afternoon, which is not bad considering that I had no previous experience with the language and had to implement a set of bindings for CFITSIO. The features that most impressed me are the following:
- It is extremely easy to create bindings to C libraries. (I wonder how difficult it would be to produce a Nimrod executable which uses many processes communicating via MPI.)
- Its syntax is clean and elegant: I found that typing Nimrod code is a very quick process, because you do not have to bother with a number of details (e.g., declaring the type of each variable, putting { and } everywhere, …).
- Nimrod programs are very fast. I did some quick tests writing short programs in C and Nimrod, and I found that the difference in speed between them is negligible (< 1%).
Nimrod: what I do not like
However, the language has some minor problems too. First of all, Nimrod's documentation is quite poor. At the moment the best introduction to the language is given in two tutorials, which however do not cover everything. There is also a manual, which however is still considered a "draft" and is not as readable as the tutorials.
Given the lack of good documentation, I would have expected Nimrod to follow the "principle of least surprise" in entity naming. Yet there are a number of small things that are different from other languages:
- You print to the terminal using echo (only shell languages use this verb, while the rest of the world uses something like print or put). The existence of echo is however well explained in the tutorial from its very beginning.
- You append elements to a dynamic array using add. No other language I know uses this verb: the most common choice is append, with the exception of C++'s push_back. This is annoying, as the tutorial does not say a word about it: I had to search the documentation of the system module to discover the existence of add. (I discovered some days later that this is briefly mentioned in the manual.) A tiny snippet illustrating both points follows after this list.
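A trivial snippet of mine that shows both verbs in action:

var fileNames: seq[string] = @[]
add(fileNames, "pointings_day_0091.fits")   # most languages would call this "append"
echo(fileNames[0])                          # most languages would call this "print"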
Another thing that I miss in Nimrod is an easy way to use format strings, similar to C's printf and Python's format. Apparently, when you want such a facility you have to bind to the C library's implementation of printf. This helps only if you want to write very basic types (e.g., integers and floating-point numbers), since of course printf knows nothing about Nimrod's data types. (Update: I discovered the existence of the strfmt module in the standard library. However, such a fundamental module does not seem to be advertised in the tutorials.)
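For reference, here is what such a binding looks like (a sketch of mine, using the varargs pragma; the format string and the values are only an example):

proc printf(formatstr: cstring) {. cdecl, importc: "printf",
                                   varargs, header: "<stdio.h>" .}

printf("files = %d, distance = %.5f\n", cint(12), 0.31022)   # note the explicit cint conversion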
The compiler shows a couple of weird behaviors too:
- Why does the compiler emit error messages using parentheses to identify the line and column, like fits.nim(54, 2) Error: ...? The problem is that the majority of compilers and tools around (e.g., GCC, Clang, Go, Rust, …) use a different format for error messages, namely filename:line:column: message. This means that a few handy Emacs commands (e.g. next-error) do not work without some tweaking, and neither do some scripts I regularly use (e.g. to highlight errors and warnings using different colors). This is a minor problem, but it is nevertheless annoying, especially considering that Rust and Go follow the more widely used convention. (I volunteered to add a command-line flag to the compiler for switching to the more common syntax, but people weren't interested.)
Update: too bad for my scripts, but at least I was able to make Emacs understand this syntax. Unfortunately, my tweak only works when I run nimrod directly: if I call make, it doesn't. I've spent too much time trying to fix it, so in the end I gave up. Anyway, I added the following to my .emacs file:
(setq compilation-error-regexp-alist
(push '("^\\([a-zA-Z0-9\\.]+\\)(\\([0-9]+\\),\\([0-9]+\\))\s\\(.*$\\)"
1 2 3)
compilation-error-regexp-alist))
- Compile error messages might be improved a bit. When I was implementing ReadColumnOfFloats I missed one parameter (it has six parameters, and it is a wrapper around a CFITSIO function with nine parameters!). The error message was not very helpful, since it did not point out which parameter I forgot:
fits.nim(339, 22) Error: type mismatch: got \
(TInternalFitsStruct, int, int64, int64, \
float, ptr float, ptr int, ptr int)
but expected one of:
fits.fitsReadDblColumn(filePtr: TInternalFitsStruct, \
colNum: int, firstRow: int64, firstElem: int64, \
numOfElems: int64, nullValue: float, \
destArray: ptr float, anyNull: ptr int, \
status: ptr int): int
Conclusions
My overall impression of Nimrod is very good. The language is extremely easy to use, yet it has some powerful features. The ability to easily create bindings for C libraries is a big plus. It still shows some minor problems, which are probably due either to its early state (no version 1.0 has been released yet) or to the small number of people contributing to it.