I saw a run in which I caught an MPI task hanging in ctime() here. Switching
to ctime_r() fixes that. This function is only called from rank==0, but it
hangs anyway.
This provides an HDFS back-end, allowing IOR to exercise a Hadoop
Distributed File System, plus corresponding changes throughout to
integrate the new module into the build. The commit compiles at LANL, but
hasn't been run yet. We're currently waiting for some configuration on
machines that will eventually provide HDFS. By default, configure ignores
the HDFS module. You have to explicitly add --with-hdfs.
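A sketch of the opt-in build, assuming the usual autotools flow (the bootstrap step applies only to a git checkout):

```shell
# HDFS support is off by default; it must be requested explicitly
./bootstrap            # regenerate configure from a git checkout
./configure --with-hdfs
make
```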
Only print total summary after all tests run.
Put the calculated results from each iteration of a test in a separate
IOR_results_t structure. Clean up the allocation and freeing code
for these calculated bits, allowing us to hang onto the results
until the end of all tests. That in turn allows us to perform one
big summary at the end of all of the tests.
Clean up the header files to only contain those things that
need to be shared between .c files.
Functions that are not shared are now declared static to
make their file scope explicit. Functions that ARE shared
are declared in appropriate headers.
I am not going to claim that I caught everything, but at
least it is a good start.