On systems where numTasks is not evenly divisible by 'tasksPerNode', we were
seeing some nodes reading multiple files while others read none after
reordering.
Commonly all nodes have the same number of tasks, but there is nothing
requiring that to be the case. Imagine having 64 tasks running against 4
nodes which can each run 20 tasks. Here you get three groups of 20 and one
group of 4. On this system, nodes running in the group of 4 were previously
getting a tasksPerNode of 4, which meant they reordered tasks differently than
the nodes which got a tasksPerNode of 20.
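To make the failure concrete, here is a small illustrative C program. The
offset formula (rank + tasksPerNode) % numTasks is a simplified stand-in for
the real reordering arithmetic, and the 20/20/20/4 split matches the example
above; the point is only that mixing two different tasksPerNode values breaks
the 1:1 mapping:

    #include <stdio.h>

    int main(void)
    {
        const int numTasks = 64;
        int hits[64] = {0};

        for (int rank = 0; rank < numTasks; rank++) {
            /* ranks 0-59 saw tasksPerNode == 20, ranks 60-63 saw 4 */
            int localTasksPerNode = (rank < 60) ? 20 : 4;
            int target = (rank + localTasksPerNode) % numTasks;
            hits[target]++;
        }

        for (int t = 0; t < numTasks; t++)
            if (hits[t] != 1)
                printf("target %2d is read by %d task(s)\n", t, hits[t]);
        /* Prints the collisions (hits == 2) and holes (hits == 0): the
         * mapping is no longer 1:1, matching the "some nodes read multiple
         * files while others read none" symptom. */
        return 0;
    }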
The key to fixing this is ensuring that every node reorders tasks the same
way, which means ensuring they all use the same input values. Obviously, on
systems where the number of tasks per node is inconsistent, the reordering
will also be imperfect (some tasks may end up on the same node, or not as far
separated as desired, etc.), but at least this way you'll always end up with a
1:1 reordering.
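One way to get that uniform value, assuming MPI-3 is available, is to have
every rank adopt the task count of the node hosting rank 0. The sketch below
is illustrative only; the function name and details are not necessarily those
of the actual change:

    #include <mpi.h>

    /* Illustrative sketch of the idea behind a "tasks on node 0" query:
     * every rank ends up with rank 0's on-node task count, so the reorder
     * input is identical everywhere even when the last node holds fewer
     * tasks. */
    static int GetNumTasksOnNode0_sketch(MPI_Comm comm)
    {
        MPI_Comm nodeComm;
        int tasksOnMyNode, tasksOnNode0;

        /* MPI-3: group ranks by shared-memory domain (i.e. by node) */
        MPI_Comm_split_type(comm, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL,
                            &nodeComm);
        MPI_Comm_size(nodeComm, &tasksOnMyNode);
        MPI_Comm_free(&nodeComm);

        /* Take rank 0's answer everywhere */
        tasksOnNode0 = tasksOnMyNode;
        MPI_Bcast(&tasksOnNode0, 1, MPI_INT, 0, comm);
        return tasksOnNode0;
    }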
- Renamed nodes/nodeCount to numNodes
- Renamed tasksPerNode to numTasksOnNode0
- Ensured that numTasksOnNode0 will always have the same value regardless of
which node you're on
- Removed inconsistently used globals numTasksWorld and tasksPerNode and
replaced with per-test params equivalents
- Added utility functions for setting these values:
- numNodes -> GetNumNodes
- numTasks -> GetNumTasks
- numTasksOnNode0 -> GetNumTasksOnNode0
- Improved MPI_VERSION < 3 logic for GetNumNodes so it works when numTasks is
  not evenly divisible by numTasksOnNode0 (see the sketch after this list)
- Left 'nodes' and 'tasksPerNode' in output alone to not break compatibility
- Allowed command-line params to override numTasks, numNodes, and
numTasksOnNode0 but default to using the MPI-calculated values
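For the MPI_VERSION < 3 case mentioned above, the fallback can count distinct
host names instead of dividing numTasks by numTasksOnNode0, which is wrong
when the division is not exact. The sketch below is illustrative only; names
and details may differ from the actual code:

    #include <mpi.h>
    #include <stdlib.h>
    #include <string.h>

    static int GetNumNodes_fallback_sketch(MPI_Comm comm)
    {
        int numTasks, numNodes = 0, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Comm_size(comm, &numTasks);
        memset(name, 0, sizeof(name));
        MPI_Get_processor_name(name, &len);

        char *all = malloc((size_t)numTasks * MPI_MAX_PROCESSOR_NAME);
        MPI_Allgather(name, MPI_MAX_PROCESSOR_NAME, MPI_CHAR,
                      all, MPI_MAX_PROCESSOR_NAME, MPI_CHAR, comm);

        /* Count each host the first time it appears; correct for any task
         * layout, including 20/20/20/4. */
        for (int i = 0; i < numTasks; i++) {
            int seen = 0;
            for (int j = 0; j < i; j++) {
                if (strncmp(all + i * MPI_MAX_PROCESSOR_NAME,
                            all + j * MPI_MAX_PROCESSOR_NAME,
                            MPI_MAX_PROCESSOR_NAME) == 0) {
                    seen = 1;
                    break;
                }
            }
            if (!seen)
                numNodes++;
        }
        free(all);
        return numNodes;
    }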
Changed the parser to fix #98. Does _not_ yet have the HDF5 backend args updated. They were reverted to allow this to make the 3.2 release and should be re-applied as a separate commit.
Currently, the calculation for the barriers case looks a bit
surprising. For example, the Min statistic is taken as the minimum of the
per-iteration Max across MPI processes, which makes Min and Max very similar
in our testing, so we also end up with a small Std value.
I am not an MPI expert, but the question is: why don't we use a more natural
calculation that computes the statistics from all results across iterations
and MPI processes?
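To make the two schemes concrete, here is an illustrative sketch (variable
and function names are not taken from the IOR source): scheme A reduces each
iteration to the slowest rank's time and computes Min/Max/Std over those
maxima, so Min is a "min of the max"; scheme B keeps every rank's time from
every iteration and computes the statistics over the full sample.

    #include <math.h>
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct stats { double min, max, std; };

    static struct stats compute_stats(const double *x, int n)
    {
        struct stats s = { x[0], x[0], 0.0 };
        double sum = 0.0, sumsq = 0.0;
        for (int i = 0; i < n; i++) {
            if (x[i] < s.min) s.min = x[i];
            if (x[i] > s.max) s.max = x[i];
            sum += x[i];
            sumsq += x[i] * x[i];
        }
        s.std = sqrt(sumsq / n - (sum / n) * (sum / n));
        return s;
    }

    /* times[iter] is this rank's measured time for each of 'reps' iterations */
    static void report_both_schemes(const double *times, int reps, MPI_Comm comm)
    {
        int rank, numTasks;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &numTasks);

        /* Scheme A: per-iteration max across ranks, stats over those maxima */
        double *iterMax = malloc(reps * sizeof(double));
        MPI_Reduce(times, iterMax, reps, MPI_DOUBLE, MPI_MAX, 0, comm);

        /* Scheme B: gather every rank's time for every iteration */
        double *all = NULL;
        if (rank == 0)
            all = malloc((size_t)reps * numTasks * sizeof(double));
        MPI_Gather(times, reps, MPI_DOUBLE, all, reps, MPI_DOUBLE, 0, comm);

        if (rank == 0) {
            struct stats a = compute_stats(iterMax, reps);
            struct stats b = compute_stats(all, reps * numTasks);
            printf("scheme A (min of per-iteration max): min=%g max=%g std=%g\n",
                   a.min, a.max, a.std);
            printf("scheme B (all %d samples): min=%g max=%g std=%g\n",
                   reps * numTasks, b.min, b.max, b.std);
            free(all);
        }
        free(iterMax);
    }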
Signed-off-by: Wang Shilong <wshilong@ddn.com>