Merge remote-tracking branch 'upstream/master' into daos

Signed-off-by: Mohamad Chaarawi <mohamad.chaarawi@intel.com>

Conflicts:
	src/aiori.c
	src/aiori.h
	src/ior.c
	src/mdtest-main.c
	src/mdtest.c
	src/option.c
Mohamad Chaarawi 2019-01-24 00:31:12 +00:00
commit 2c87b5e0f5
47 changed files with 1686 additions and 1032 deletions


@@ -28,11 +28,4 @@ install:
# aiori-S3.c to achieve this.
# GPFS
# NOTE: GPFS is thought to need a license and is therefore not testable with travis.
before_script: ./bootstrap
script: mkdir build && cd build && ../configure --with-hdf5 && make && cd .. && ./testing/basic-tests.sh
# notifications:
# email:
# on_success: change # default: change
# on_failure: always # default: always
script: ./travis-build.sh && CONFIGURE_OPTS="--with-hdf5" ./travis-test.sh

ChangeLog

@@ -1,137 +0,0 @@
Changes in IOR-3.0.0
* Reorganization of the build system. Now uses autoconf/automake.
N.B. Windows support is not included. Patches welcome.
* Much code refactoring.
* Final summary table is printed after all tests have finished.
* Error messages significantly improved.
* Drop all "undocumented changes". If they are worth having, they
need to be implemented well and documented.
Changes in IOR-2.10.3
* bug 2962326 "Segmentation Fault When Summarizing Results" fixed
* bug 2786285 "-Wrong number of parameters to function H5Dcreate" fixed
(NOTE: to compile for HDF5 1.6 libs use "-D H5_USE_16_API")
* bug 1992514 "delay (-d) doesn't work" fixed
Contributed by demyn@users.sourceforge.net
* Ported to Windows. Required changes related to 'long' types, which on Windows
are always 32-bits, even on 64-bit systems. Missing system headers and
functions account for most of the remaining changes.
New files for Windows:
- IOR/ior.vcproj - Visual C project file
- IOR/src/C/win/getopt.{h,c} - GNU getopt() support
See updates in the USER_GUIDE for build instructions on Windows.
* Fixed bug in incrementing transferCount
* Fixed bugs in SummarizeResults with mismatched format specifiers
* Fixed inconsistencies between option names, -h output, and the USER_GUIDE.
Changes in IOR-2.10.2:
Hodson, 8/18/2008:
* extend existing random I/O capabilities and enhance performance
output statistics.
Fixes in IOR-2.10.1:
* added '-J' setAlignment option for HDF5 alignment in bytes; default value
is 1, which does not set alignment
* changed how HDF5 and PnetCDF calculate performance -- formerly each used
the size of the stat()'ed file; changed it to be number of data bytes
transferred. These library-generated files can have large headers and
filler as well as sparse file content
* found potential overflow error in cast -- using IOR_offset_t, not int now
* replaced HAVE_HDF5_NO_FILL macro to instead directly check if H5_VERS_MAJOR
H5_VERS_MINOR are defined; if so, then must be HDF5-1.6.x or higher for
no-fill usage.
* corrected IOR_GetFileSize() function to point to HDF5 and NCMPI versions of
IOR_GetFileSize() calls
* changed the netcdf dataset from 1D array to 4D array, where the 4 dimensions
are: [segmentCount][numTasksWorld][numTransfers][transferSize]
This patch from Wei-keng Liao allows for file sizes > 4GB (provided no
single dimension is > 4GB).
* finalized random-capability release
* changed statvfs() to be for __sun (SunOS) only
* retired Python GUI
Fixes in IOR-2.10.0.1:
* Cleaned up WriteOrRead(), reducing much to smaller functions.
* Added random capability for transfer offset.
* modified strtok(NULL, " \t\r\n") in ExtractHints() so no trailing characters
* added capability to set hints in NCMPI
Fixes in IOR-2.9.6.1:
* for 'pvfs2:' filename prefix, now skips DisplayFreeSpace(); formerly this
caused a problem with statvfs()
* changed gethostname() to MPI_Get_processor_name() to handle cases where
gethostname() would only return the frontend node name
* added SunOS compiler settings for makefile
* updated O_DIRECT usage for SunOS compliance
* changed statfs() to instead use statvfs() for SunOS compliance
* renamed compiler directive _USE_LUSTRE to _MANUALLY_SET_LUSTRE_STRIPING
Fixes in IOR-2.9.5:
* Wall clock deviation time relabeled to be "Start time skew across all tasks".
* Added notification for "Using reorderTasks '-C' (expecting block, not cyclic,
task assignment)"
* Corrected bug with read performance with stonewalling (was using full size,
stat'ed file instead of bytes transferred).
Fixes in IOR-2.9.4:
* Now using IOR_offset_t instead of int for tmpOffset in IOR.c:WriteOrRead().
Formerly, this would cause error in file(s) > 2GB for ReadCheck. The
more-commonly-used WriteCheck option was not affected by this.
Fixes in IOR-2.9.3:
* Changed FILE_DELIMITER from ':' to '@'.
* Any time skew between nodes is automatically adjusted so that all nodes
are calibrated to the root node's time.
* Wall clock deviation time is still reported, but have removed the warning
message that was generated for this. (Didn't add much new information,
just annoyed folks.)
* The '-j' outlierThreshold option now is set to 0 (off) as default. To use
this, set to a positive integer N, and any task whose time (for open,
access, etc.) is not within N seconds of the mean of all the tasks will show
up as an outlier.
Fixes in IOR-2.9.2:
* Simple cleanup of error messages, format, etc.
* Changed error reporting so that with VERBOSITY=2 (-vv) any error encountered
is displayed. Previously, this was with VERBOSITY=3, along with full test
parameters, environment, and all access times of operations.
* Added deadlineForStonewalling option (-D). This option is used for measuring
the amount of data moved in a fixed time. After the barrier, each task
starts its own timer, begins moving data, and then stops moving data at a
prearranged time. Instead of measuring the amount of time to move a fixed
amount of data, this option measures the amount of data moved in a fixed
amount of time. The objective is to prevent tasks slow to complete from
skewing the performance.
Fixes in IOR-2.9.1:
* Updated test script to run through file1:file2 cases.
* Program exit code is now the total number of errors (both writecheck and
readcheck for all iterations), unless quitOnError (-q) is set.
* For various failure situations, replace abort with warning, including:
- failed uname() for platform name now gives warning
- failed unlink() of data file now gives warning
- failed fsync() of data file now gives warning
- failed open() of nonexistent script file now gives warning
* Changed labelling of error-checking output to be (hopefully) clearer
about error details for diagnostics.
* Another fix for -o file1:file2 option.
* Corrected bug in GetTestFileName() -- now possible to handle -o file1:file2
cases for file-per-proc.
Fixes in IOR-2.9.0:
* Improved checkRead option to reread data from different node (to avoid
cache) and then compare both reads.
* Added outlierThreshold (-j) option to give warning if any task is more than
this number of seconds from the mean of all participating tasks. If so, the
task is identified, its time (start, elapsed create, elapsed transfer,
elapsed close, or end) is reported, as is the mean and standard deviation for
all tasks. The default for this is 5, i.e. any task not within 5 seconds of
the mean for those times is displayed. This value can be set with
outlierThreshold=<value> or -j <value>.
* Correct for clock skew between nodes - if skew greater than wallclock
deviation threshold (WC_DEV_THRESHOLD) in seconds, then broadcast root
node's timestamp and adjust by the difference. WC_DEV_THRESHOLD currently
set to 5.
* Added a Users Guide.

META

@@ -1,3 +1,3 @@
Package: ior
Version: 3.1.0
Version: 3.2.0
Release: 0


@@ -1,5 +1,13 @@
MAKEFLAGS = --no-print-directory
SUBDIRS = src doc contrib
EXTRA_DIST = META COPYRIGHT README.md ChangeLog
EXTRA_DIST = META COPYRIGHT README.md NEWS testing
# ACLOCAL_AMFLAGS needed for autoconf < 2.69
ACLOCAL_AMFLAGS = -I config
# The basic-tests.sh scripts run MPI versions of IOR/mdtest and are therefore
# too complicated to run in the context of distclean. As such we reserve
# `make dist` and `make test` for simple test binaries that do not require any
# special environment.
#TESTS = testing/basic-tests.sh
#DISTCLEANFILES = -r test test_out

NEWS

@@ -1,9 +1,204 @@
IOR NEWS
========
Version 3.2.0
--------------------------------------------------------------------------------
Last updated 2017-06
New major features:
3.0.2
- mdtest now included as a frontend for the IOR aiori backend (Nathan Hjelm,
LANL)
- Added mmap AIORI (Li Dongyang, DDN)
- Added RADOS AIORI (Shane Snyder, ANL)
- Added IME AIORI (Jean-Yves Vet, DDN)
- Added stonewalling for mdtest (Julian Kunkel, U Reading)
New minor features:
- Dropped support for PLFS AIORI (John Bent)
- Added stoneWallingWearOut functionality to allow stonewalling to ensure that
each MPI rank writes the same amount of data and captures the effects of slow
processes (Julian Kunkel, DKRZ)
- Added support for JSON output (Enno Zickler, U Hamburg; Julian Kunkel, U
Reading)
- Added dummy AIORI (Julian Kunkel, U Reading)
- Added support for HDF5 collective metadata operations (Rob Latham, ANL)
General user improvements:
- BeeGFS parameter support (Oliver Steffen, ThinkParQ)
- Semantics of `-R` now compares to expected signature (`-G`) (Julian Kunkel,
DKRZ)
- Improved macOS support for ncmpi (Vinson Leung)
- Added more complete documentation (Enno Zickler, U Hamburg)
- Assorted bugfixes and code refactoring (Adam Moody, LLNL; Julian Kunkel, U
Reading; Enno Zickler, U Hamburg; Nathan Hjelm, LANL; Rob Latham, ANL;
Jean-Yves Vet, DDN)
- More robust support for non-POSIX-backed AIORIs (Shane Snyder, ANL)
General developer improvements:
- Improvements to build process (Nathan Hjelm, LANL; Ian Kirker, UCL)
- Added continuous integration support (Enno Zickler, U Hamburg)
- Added better support for automated testing (Julian Kunkel, U Reading)
- Rewritten option handling to improve support for backend-specific options
(Julian Kunkel, U Reading)
Known issues:
- `api=RADOS` cannot work correctly if specified in a config file (via `-f`)
because `-u`/`-c`/`-p` cannot be specified (issue #98)
- `writeCheck` cannot be enabled for write-only tests using some AIORIs such as
MPI-IO (pull request #89)
Version 3.0.2
--------------------------------------------------------------------------------
- IOR and mdtest now share a common codebase. This will make it easier to
run performance benchmarks on new hardware.
- Note: this version was never properly released
Version 3.0.0
--------------------------------------------------------------------------------
- Reorganization of the build system. Now uses autoconf/automake. N.B. Windows
support is not included. Patches welcome.
- Much code refactoring.
- Final summary table is printed after all tests have finished.
- Error messages significantly improved.
- Drop all "undocumented changes". If they are worth having, they need to be
implemented well and documented.
Version 2.10.3
--------------------------------------------------------------------------------
- bug 2962326 "Segmentation Fault When Summarizing Results" fixed
- bug 2786285 "-Wrong number of parameters to function H5Dcreate" fixed
(NOTE: to compile for HDF5 1.6 libs use "-D H5_USE_16_API")
- bug 1992514 "delay (-d) doesn't work" fixed
Contributed by demyn@users.sourceforge.net
- Ported to Windows. Required changes related to 'long' types, which on Windows
are always 32-bits, even on 64-bit systems. Missing system headers and
functions account for most of the remaining changes.
New files for Windows:
- IOR/ior.vcproj - Visual C project file
- IOR/src/C/win/getopt.{h,c} - GNU getopt() support
See updates in the USER_GUIDE for build instructions on Windows.
- Fixed bug in incrementing transferCount
- Fixed bugs in SummarizeResults with mismatched format specifiers
- Fixed inconsistencies between option names, -h output, and the USER_GUIDE.
Version 2.10.2
--------------------------------------------------------------------------------
- Extend existing random I/O capabilities and enhance performance
output statistics. (Hodson, 8/18/2008)
Version 2.10.1
--------------------------------------------------------------------------------
- Added '-J' setAlignment option for HDF5 alignment in bytes; default value
is 1, which does not set alignment
- Changed how HDF5 and PnetCDF calculate performance -- formerly each used
the size of the stat()'ed file; changed it to be number of data bytes
transferred. These library-generated files can have large headers and
filler as well as sparse file content
- Found potential overflow error in cast -- using IOR_offset_t, not int now
- Replaced HAVE_HDF5_NO_FILL macro to instead directly check if H5_VERS_MAJOR
H5_VERS_MINOR are defined; if so, then must be HDF5-1.6.x or higher for
no-fill usage.
- Corrected IOR_GetFileSize() function to point to HDF5 and NCMPI versions of
IOR_GetFileSize() calls
- Changed the netcdf dataset from 1D array to 4D array, where the 4 dimensions
are: [segmentCount][numTasksWorld][numTransfers][transferSize]
This patch from Wei-keng Liao allows for file sizes > 4GB (provided no
single dimension is > 4GB).
- Finalized random-capability release
- Changed statvfs() to be for __sun (SunOS) only
- Retired Python GUI
Version 2.10.0.1
--------------------------------------------------------------------------------
- Cleaned up WriteOrRead(), reducing much to smaller functions.
- Added random capability for transfer offset.
- Modified strtok(NULL, " \t\r\n") in ExtractHints() so no trailing characters
- Added capability to set hints in NCMPI
Version 2.9.6.1
--------------------------------------------------------------------------------
- For 'pvfs2:' filename prefix, now skips DisplayFreeSpace(); formerly this
caused a problem with statvfs()
- Changed gethostname() to MPI_Get_processor_name() to handle cases where
gethostname() would only return the frontend node name
- Added SunOS compiler settings for makefile
- Updated O_DIRECT usage for SunOS compliance
- Changed statfs() to instead use statvfs() for SunOS compliance
- Renamed compiler directive _USE_LUSTRE to _MANUALLY_SET_LUSTRE_STRIPING
Version 2.9.5
--------------------------------------------------------------------------------
- Wall clock deviation time relabeled to be "Start time skew across all tasks".
- Added notification for "Using reorderTasks '-C' (expecting block, not cyclic,
task assignment)"
- Corrected bug with read performance with stonewalling (was using full size,
stat'ed file instead of bytes transferred).
Version 2.9.4
--------------------------------------------------------------------------------
- Now using IOR_offset_t instead of int for tmpOffset in IOR.c:WriteOrRead().
Formerly, this would cause error in file(s) > 2GB for ReadCheck. The
more-commonly-used WriteCheck option was not affected by this.
Version 2.9.3
--------------------------------------------------------------------------------
- Changed FILE_DELIMITER from ':' to '@'.
- Any time skew between nodes is automatically adjusted so that all nodes
are calibrated to the root node's time.
- Wall clock deviation time is still reported, but have removed the warning
message that was generated for this. (Didn't add much new information,
just annoyed folks.)
- The '-j' outlierThreshold option now is set to 0 (off) as default. To use
this, set to a positive integer N, and any task whose time (for open,
access, etc.) is not within N seconds of the mean of all the tasks will show
up as an outlier.
Version 2.9.2
--------------------------------------------------------------------------------
- Simple cleanup of error messages, format, etc.
- Changed error reporting so that with VERBOSITY=2 (-vv) any error encountered
is displayed. Previously, this was with VERBOSITY=3, along with full test
parameters, environment, and all access times of operations.
- Added deadlineForStonewalling option (-D). This option is used for measuring
the amount of data moved in a fixed time. After the barrier, each task
starts its own timer, begins moving data, and then stops moving data at a
prearranged time. Instead of measuring the amount of time to move a fixed
amount of data, this option measures the amount of data moved in a fixed
amount of time. The objective is to prevent tasks slow to complete from
skewing the performance.
Version 2.9.1
--------------------------------------------------------------------------------
- Updated test script to run through file1:file2 cases.
- Program exit code is now the total number of errors (both writecheck and
readcheck for all iterations), unless quitOnError (-q) is set.
- For various failure situations, replace abort with warning, including:
- failed uname() for platform name now gives warning
- failed unlink() of data file now gives warning
- failed fsync() of data file now gives warning
- failed open() of nonexistent script file now gives warning
- Changed labelling of error-checking output to be (hopefully) clearer
about error details for diagnostics.
- Another fix for -o file1:file2 option.
- Corrected bug in GetTestFileName() -- now possible to handle -o file1:file2
cases for file-per-proc.
Version 2.9.0
--------------------------------------------------------------------------------
- Improved checkRead option to reread data from different node (to avoid cache)
and then compare both reads.
- Added outlierThreshold (-j) option to give warning if any task is more than
this number of seconds from the mean of all participating tasks. If so, the
task is identified, its time (start, elapsed create, elapsed transfer,
elapsed close, or end) is reported, as is the mean and standard deviation for
all tasks. The default for this is 5, i.e. any task not within 5 seconds of
the mean for those times is displayed. This value can be set with
outlierThreshold=<value> or -j <value>.
- Correct for clock skew between nodes - if skew greater than wallclock
deviation threshold (WC_DEV_THRESHOLD) in seconds, then broadcast root
node's timestamp and adjust by the difference. WC_DEV_THRESHOLD currently
set to 5.
- Added a Users Guide.


@@ -20,9 +20,7 @@ with both IOR and mdtest.
Running with DAOS API
---------------------
Driver specific options are specified at the end after "--". For example:
ior -a DAOS [ior_options] -- [daos_options]
ior -a DAOS [ior_options] [daos_options]
In the IOR options, the file name should be specified as a container uuid using
"-o <container_uuid>". If the "-E" option is given, then this UUID shall denote
@@ -33,21 +31,18 @@ uuidgen(1) to generate the UUID of the new container.
The DAOS options include:
Required Options:
-p <pool_uuid>: pool uuid to connect to (has to be created beforehand)
-v <pool_svcl>: pool svcl list (: separated)
--daos.pool <pool_uuid>: pool uuid to connect to (has to be created beforehand)
--daos.svcl <pool_svcl>: pool svcl list (: separated)
Optional Options:
-g <group_name>: group name of servers with the pool
-r <record_size>: object record size for IO
-s <stripe_size>
-c <stripe_count>
-m <max_stripe_size>
-a <num>: number of concurrent async IOs
-w : Flag to indicate no commit, just update
-e <epoch_number>
-t <epoch_number>: wait for specific epoch before read
-k : flag to kill a rank during IO
-o <object_class>: specific object class
--daos.group <group_name>: group name of servers with the pool
--daos.recordSize <record_size>: object record size for IO
--daos.stripeSize <stripe_size>
--daos.stripeCount <stripe_count>
--daos.stripeMax <max_stripe_size>
--daos.aios <num>: number of concurrent async IOs
--daos.kill: flag to kill a rank during IO
--daos.objectClass <object_class>: specific object class
When benchmarking write performance, one likely does not want "-W", which causes
the write phase to do one additional memory copy for every I/O. This is due to
@@ -64,34 +59,28 @@ the epoch to access automatically on each iteration.
Examples that should work include:
- "ior -a DAOS -w -W -o <container_uuid> -- -p <pool_uuid> -v <svc_ranks>"
- "ior -a DAOS -w -W -o <container_uuid> --daos.pool <pool_uuid> --daos.svcl <svc_ranks>"
writes into a new container and verifies the data, using default
daosRecordSize, transferSize, daosStripeSize, blockSize, daosAios, etc.
- "ior -a DAOS -w -W -r -R -o <container_uuid> -b 1g -t 4m -C --
-p <pool_uuid> -v <svc_ranks> -r 1m -s 4m -c 256 -a 8"
- "ior -a DAOS -w -W -r -R -o <container_uuid> -b 1g -t 4m -C \
--daos.pool <pool_uuid> --daos.svcl <svc_ranks> --daos.recordSize 1m --daos.stripeSize 4m \
--daos.stripeCount 256 --daos.aios 8
does all IOR tests and shifts ranks during checkWrite and checkRead.
- "ior -a DAOS -w -r -o <container_uuid> -b 8g -t 1m -C --
-p <pool_uuid> -v <svc_ranks> -r 1m -s 4m -c 256 -a 8"
may serve as a baseline to be tuned for performance benchmarking.
Running with DFS API
---------------------
Driver specific options are specified at the end after "--". For example:
ior -a DFS [ior_options] -- [dfs_options]
mdtest -a DFS [mdtest_options] -- [dfs_options]
ior -a DFS [ior_options] [dfs_options]
mdtest -a DFS [mdtest_options] [dfs_options]
Required Options:
-p <pool_uuid>: pool uuid to connect to (has to be created beforehand)
-v <pool_svcl>: pool svcl list (: separated)
-c <co_uuid>: container uuid that will hold the encapsulated namespace
--daos.pool <pool_uuid>: pool uuid to connect to (has to be created beforehand)
--daos.svcl <pool_svcl>: pool svcl list (: separated)
--daos.cont <co_uuid>: container uuid that will hold the encapsulated namespace
Optional Options:
-g <group_name>: group name of servers with the pool
--daos.group <group_name>: group name of servers with the pool
In the IOR options, the file name should be specified on the root dir directly
since ior does not create directories and the DFS container representing the
@@ -99,12 +88,12 @@ encapsulated namespace is not the same as the system namespace the user is
executing from.
Examples that should work include:
- "ior -a DFS -w -W -o /test1 -- -p <pool_uuid> -v <svc_ranks> -c <co_uuid>"
- "ior -a DFS -w -W -r -R -o /test2 -b 1g -t 4m -C -- -p <pool_uuid> -v <svc_ranks> -c <co_uuid>"
- "ior -a DFS -w -r -o /test3 -b 8g -t 1m -C -- -p <pool_uuid> -v <svc_ranks> -c <co_uuid>"
- "ior -a DFS -w -W -o /test1 --daos.pool <pool_uuid> --daos.svcl <svc_ranks> --daos.cont <co_uuid>"
- "ior -a DFS -w -W -r -R -o /test2 -b 1g -t 4m -C --daos.pool <pool_uuid> --daos.svcl <svc_ranks> --daos.cont <co_uuid>"
- "ior -a DFS -w -r -o /test3 -b 8g -t 1m -C --daos.pool <pool_uuid> --daos.svcl <svc_ranks> --daos.cont <co_uuid>"
When running mdtest, the user needs to specify a directory with -d where the
test tree will be created. Some examples:
- "mdtest -a DFS -n 100 -F -D -d /bla -- -p <pool_uuid> -v <svc_ranks> -c <co_uuid>"
- "mdtest -a DFS -n 1000 -F -C -d /bla -- -p <pool_uuid> -v <svc_ranks> -c <co_uuid>"
- "mdtest -a DFS -I 10 -z 5 -b 2 -L -d /bla -- -p <pool_uuid> -v <svc_ranks> -c <co_uuid>"
- "mdtest -a DFS -n 100 -F -D -d /bla --daos.pool <pool_uuid> --daos.svcl <svc_ranks> --daos.cont <co_uuid>"
- "mdtest -a DFS -n 1000 -F -C -d /bla --daos.pool <pool_uuid> --daos.svcl <svc_ranks> --daos.cont <co_uuid>"
- "mdtest -a DFS -I 10 -z 5 -b 2 -L -d /bla --daos.pool <pool_uuid> --daos.svcl <svc_ranks> --daos.cont <co_uuid>"


@@ -19,11 +19,22 @@ AM_INIT_AUTOMAKE([check-news dist-bzip2 gnu no-define foreign subdir-objects])
m4_ifdef([AM_SILENT_RULES], [AM_SILENT_RULES([yes])])
AM_MAINTAINER_MODE
# Check for system-specific stuff
case "${host_os}" in
*linux*)
;;
*darwin*)
CPPFLAGS="${CPPFLAGS} -D_DARWIN_C_SOURCE"
;;
*)
;;
esac
# Checks for programs
# We can't do anything without a working MPI
AX_PROG_CC_MPI(,,[
AC_MSG_FAILURE([MPI compiler requested, but couldn't use MPI.])
AC_MSG_FAILURE([MPI compiler requested, but could not use MPI.])
])
AC_PROG_RANLIB
@@ -39,7 +50,7 @@ AC_CHECK_HEADERS([fcntl.h libintl.h stdlib.h string.h strings.h sys/ioctl.h sys/
AC_TYPE_SIZE_T
# Checks for library functions.
AC_CHECK_FUNCS([getpagesize gettimeofday memset mkdir pow putenv realpath regcomp sqrt strcasecmp strchr strerror strncasecmp strstr uname statfs statvfs])
AC_CHECK_FUNCS([sysconf gettimeofday memset mkdir pow putenv realpath regcomp sqrt strcasecmp strchr strerror strncasecmp strstr uname statfs statvfs])
AC_SEARCH_LIBS([sqrt], [m], [],
[AC_MSG_ERROR([Math library not found])])
@@ -65,14 +76,18 @@ AS_IF([test "$ac_cv_header_gpfs_h" = "yes" -o "$ac_cv_header_gpfs_fcntl_h" = "ye
# Check for system capabilities
AC_SYS_LARGEFILE
AC_DEFINE([_XOPEN_SOURCE], [700], [C99 compatibility])
# Check for lustre availability
AC_ARG_WITH([lustre],
[AS_HELP_STRING([--with-lustre],
[support configurable Lustre striping values @<:@default=check@:>@])],
[], [with_lustre=check])
AS_IF([test "x$with_lustre" != xno], [
AC_CHECK_HEADERS([lustre/lustre_user.h], [], [
if test "x$with_lustre" != xcheck; then
AC_CHECK_HEADERS([linux/lustre/lustre_user.h lustre/lustre_user.h], break, [
if test "x$with_lustre" != xcheck -a \
"x$ac_cv_header_linux_lustre_lustre_user_h" = "xno" -a \
"x$ac_cv_header_lustre_lustre_user_h" = "xno" ; then
AC_MSG_FAILURE([--with-lustre was given, <lustre/lustre_user.h> not found])
fi
])
@@ -98,8 +113,12 @@ AC_ARG_WITH([hdf5],
AM_CONDITIONAL([USE_HDF5_AIORI], [test x$with_hdf5 = xyes])
AM_COND_IF([USE_HDF5_AIORI],[
AC_DEFINE([USE_HDF5_AIORI], [], [Build HDF5 backend AIORI])
AC_SEARCH_LIBS([H5Pset_all_coll_metadata_ops], [hdf5])
AC_CHECK_FUNCS([H5Pset_all_coll_metadata_ops])
])
# HDFS support
AC_ARG_WITH([hdfs],
[AS_HELP_STRING([--with-hdfs],
@@ -259,12 +278,6 @@ Consider --with-aws4c=, CPPFLAGS, LDFLAGS, etc])
])
# Enable building "IOR", in all capitals
AC_ARG_ENABLE([caps],
[AS_HELP_STRING([--enable-caps],


@@ -170,6 +170,8 @@ GENERAL:
NOTE: it does not delay before a check write or
check read
* interIODelay - delay in us (microseconds) after each I/O, simulating compute time.
* outlierThreshold - gives warning if any task is more than this number
of seconds from the mean of all participating tasks.
If so, the task is identified, its time (start,


@@ -289,6 +289,9 @@ HDF5-ONLY
* setAlignment - HDF5 alignment in bytes (e.g.: 8, 4k, 2m, 1g) [1]
* collectiveMetadata - enable HDF5 collective metadata (available since
HDF5-1.10.0)
MPIIO-, HDF5-, AND NCMPI-ONLY
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* collective - uses collective operations for access [0=FALSE]


@@ -5,7 +5,7 @@ if USE_CAPS
bin_PROGRAMS += IOR MDTEST
endif
noinst_HEADERS = ior.h utilities.h parse_options.h aiori.h iordef.h ior-internal.h option.h
noinst_HEADERS = ior.h utilities.h parse_options.h aiori.h iordef.h ior-internal.h option.h mdtest.h
lib_LIBRARIES = libaiori.a
libaiori_a_SOURCES = ior.c mdtest.c utilities.c parse_options.c ior-output.c option.c


@@ -24,6 +24,7 @@
#include <stdint.h>
#include <assert.h>
#include <unistd.h>
#include <strings.h>
#include <sys/types.h>
#include <libgen.h>
#include <stdbool.h>
@@ -63,23 +64,23 @@ static struct daos_options o = {
};
static option_help options [] = {
{'p', "daosPool", "pool uuid", OPTION_REQUIRED_ARGUMENT, 's', &o.daosPool},
{'v', "daosPoolSvc", "pool SVCL", OPTION_REQUIRED_ARGUMENT, 's', &o.daosPoolSvc},
{'g', "daosGroup", "server group", OPTION_OPTIONAL_ARGUMENT, 's', &o.daosGroup},
{'r', "daosRecordSize", "Record Size", OPTION_OPTIONAL_ARGUMENT, 'd', &o.daosRecordSize},
{'s', "daosStripeSize", "Stripe Size", OPTION_OPTIONAL_ARGUMENT, 'd', &o.daosStripeSize},
{'c', "daosStripeCount", "Stripe Count", OPTION_OPTIONAL_ARGUMENT, 'u', &o.daosStripeCount},
{'m', "daosStripeMax", "Max Stripe",OPTION_OPTIONAL_ARGUMENT, 'u', &o.daosStripeMax},
{'a', "daosAios", "Concurrent Async IOs",OPTION_OPTIONAL_ARGUMENT, 'd', &o.daosAios},
{'k', "daosKill", "Kill target while running",OPTION_FLAG, 'd', &o.daosKill},
{'o', "daosObjectClass", "object class", OPTION_OPTIONAL_ARGUMENT, 's', &o.daosObjectClass},
{0, "daos.pool", "pool uuid", OPTION_REQUIRED_ARGUMENT, 's', &o.daosPool},
{0, "daos.svcl", "pool SVCL", OPTION_REQUIRED_ARGUMENT, 's', &o.daosPoolSvc},
{0, "daos.group", "server group", OPTION_OPTIONAL_ARGUMENT, 's', &o.daosGroup},
{0, "daos.recordSize", "Record Size", OPTION_OPTIONAL_ARGUMENT, 'd', &o.daosRecordSize},
{0, "daos.stripeSize", "Stripe Size", OPTION_OPTIONAL_ARGUMENT, 'd', &o.daosStripeSize},
{0, "daos.stripeCount", "Stripe Count", OPTION_OPTIONAL_ARGUMENT, 'u', &o.daosStripeCount},
{0, "daos.stripeMax", "Max Stripe",OPTION_OPTIONAL_ARGUMENT, 'u', &o.daosStripeMax},
{0, "daos.aios", "Concurrent Async IOs",OPTION_OPTIONAL_ARGUMENT, 'd', &o.daosAios},
{0, "daos.kill", "Kill target while running",OPTION_FLAG, 'd', &o.daosKill},
{0, "daos.objectClass", "object class", OPTION_OPTIONAL_ARGUMENT, 's', &o.daosObjectClass},
LAST_OPTION
};
/**************************** P R O T O T Y P E S *****************************/
static void DAOS_Init(IOR_param_t *);
static void DAOS_Fini(IOR_param_t *);
static void DAOS_Init();
static void DAOS_Fini();
static void *DAOS_Create(char *, IOR_param_t *);
static void *DAOS_Open(char *, IOR_param_t *);
static IOR_offset_t DAOS_Xfer(int, void *, IOR_size_t *,
@@ -141,6 +142,7 @@ static daos_pool_info_t poolInfo;
static daos_oclass_id_t objectClass = DAOS_OC_LARGE_RW;
static CFS_LIST_HEAD(aios);
static IOR_offset_t total_size;
static bool daos_initialized = false;
/***************************** F U N C T I O N S ******************************/
@@ -158,9 +160,9 @@ do { \
} \
} while (0)
#define INFO(level, param, format, ...) \
#define INFO(level, format, ...) \
do { \
if (verbose >= level) \
if (verbose >= level) \
printf("[%d] "format"\n", rank, ##__VA_ARGS__); \
} while (0)
@@ -172,8 +174,7 @@ do { \
} while (0)
/* Distribute process 0's pool or container handle to others. */
static void HandleDistribute(daos_handle_t *handle, enum handleType type,
IOR_param_t *param)
static void HandleDistribute(daos_handle_t *handle, enum handleType type)
{
daos_iov_t global;
int rc;
@@ -194,7 +195,7 @@ static void HandleDistribute(daos_handle_t *handle, enum handleType type,
}
MPI_CHECK(MPI_Bcast(&global.iov_buf_len, 1, MPI_UINT64_T, 0,
param->testComm),
MPI_COMM_WORLD),
"Failed to bcast global handle buffer size");
global.iov_buf = malloc(global.iov_buf_len);
@@ -210,7 +211,7 @@ static void HandleDistribute(daos_handle_t *handle, enum handleType type,
}
MPI_CHECK(MPI_Bcast(global.iov_buf, global.iov_buf_len, MPI_BYTE, 0,
param->testComm),
MPI_COMM_WORLD),
"Failed to bcast global pool handle");
if (rank != 0) {
@@ -241,15 +242,14 @@ static void ContainerOpen(char *testFileName, IOR_param_t *param,
if (param->open == WRITE &&
param->useExistingTestFile == FALSE) {
INFO(VERBOSE_2, param, "Creating container %s",
testFileName);
INFO(VERBOSE_2, "Creating container %s", testFileName);
rc = daos_cont_create(pool, uuid, NULL /* ev */);
DCHECK(rc, "Failed to create container %s",
testFileName);
}
INFO(VERBOSE_2, param, "Opening container %s", testFileName);
INFO(VERBOSE_2, "Opening container %s", testFileName);
if (param->open == WRITE)
dFlags = DAOS_COO_RW;
@@ -261,7 +261,7 @@
DCHECK(rc, "Failed to open container %s", testFileName);
}
HandleDistribute(container, CONTAINER_HANDLE, param);
HandleDistribute(container, CONTAINER_HANDLE);
MPI_CHECK(MPI_Bcast(info, sizeof *info, MPI_BYTE, 0, param->testComm),
"Failed to broadcast container info");
@@ -364,8 +364,8 @@ static void AIOInit(IOR_param_t *param)
cfs_list_add(&aio->a_list, &aios);
INFO(VERBOSE_3, param, "Allocated AIO %p: buffer %p", aio,
aio->a_iov.iov_buf);
INFO(VERBOSE_3, "Allocated AIO %p: buffer %p", aio,
aio->a_iov.iov_buf);
}
nAios = o.daosAios;
@@ -383,7 +383,7 @@ static void AIOFini(IOR_param_t *param)
free(events);
cfs_list_for_each_entry_safe(aio, tmp, &aios, a_list) {
INFO(VERBOSE_3, param, "Freeing AIO %p: buffer %p", aio,
INFO(VERBOSE_3, "Freeing AIO %p: buffer %p", aio,
aio->a_iov.iov_buf);
cfs_list_del_init(&aio->a_list);
daos_event_fini(&aio->a_event);
@@ -424,11 +424,11 @@ static void AIOWait(IOR_param_t *param)
nAios++;
if (param->verbose >= VERBOSE_3)
INFO(VERBOSE_3, param, "Completed AIO %p: buffer %p", aio,
INFO(VERBOSE_3, "Completed AIO %p: buffer %p", aio,
aio->a_iov.iov_buf);
}
INFO(VERBOSE_3, param, "Found %d completed AIOs (%d free %d busy)", rc,
INFO(VERBOSE_3, "Found %d completed AIOs (%d free %d busy)", rc,
nAios, o.daosAios - nAios);
}
@ -466,7 +466,7 @@ static void ObjectClassParse(const char *string)
GERR("Invalid 'daosObjectClass' argument: '%s'", string);
}
static void ParseService(IOR_param_t *param, int max, d_rank_list_t *ranks)
static void ParseService(int max, d_rank_list_t *ranks)
{
char *s;
@ -490,21 +490,18 @@ static option_help * DAOS_options(){
return options;
}
static void DAOS_Init(IOR_param_t *param)
static void DAOS_Init()
{
int rc;
if (daos_initialized)
return;
if (o.daosPool == NULL || o.daosPoolSvc == NULL)
return;
if (o.daosObjectClass)
ObjectClassParse(o.daosObjectClass);
if (param->filePerProc)
GERR("'filePerProc' not yet supported");
if (o.daosStripeMax % o.daosStripeSize != 0)
GERR("'daosStripeMax' must be a multiple of 'daosStripeSize'");
if (o.daosStripeSize % param->transferSize != 0)
GERR("'daosStripeSize' must be a multiple of 'transferSize'");
if (param->transferSize % o.daosRecordSize != 0)
GERR("'transferSize' must be a multiple of 'daosRecordSize'");
if (o.daosKill && ((objectClass != DAOS_OC_R2_RW) &&
(objectClass != DAOS_OC_R3_RW) &&
(objectClass != DAOS_OC_R4_RW) &&
@ -515,7 +512,7 @@ static void DAOS_Init(IOR_param_t *param)
GERR("'daosKill' only makes sense with 'daosObjectClass=repl'");
if (rank == 0)
INFO(VERBOSE_0, param, "WARNING: USING daosStripeMax CAUSES READS TO RETURN INVALID DATA");
INFO(VERBOSE_0, "WARNING: USING daosStripeMax CAUSES READS TO RETURN INVALID DATA");
rc = daos_init();
if (rc != -DER_ALREADY)
@ -529,18 +526,12 @@ static void DAOS_Init(IOR_param_t *param)
d_rank_t d_rank[13];
d_rank_list_t ranks;
if (o.daosPool == NULL)
GERR("'daosPool' must be specified");
if (o.daosPoolSvc == NULL)
GERR("'daosPoolSvc' must be specified");
INFO(VERBOSE_2, param, "Connecting to pool %s %s",
o.daosPool, o.daosPoolSvc);
INFO(VERBOSE_2, "Connecting to pool %s %s", o.daosPool, o.daosPoolSvc);
rc = uuid_parse(o.daosPool, uuid);
DCHECK(rc, "Failed to parse 'daosPool': %s", o.daosPool);
ranks.rl_ranks = d_rank;
ParseService(param, sizeof(d_rank) / sizeof(d_rank[0]), &ranks);
ParseService(sizeof(d_rank) / sizeof(d_rank[0]), &ranks);
rc = daos_pool_connect(uuid, o.daosGroup, &ranks,
DAOS_PC_RW, &pool, &poolInfo,
@ -548,20 +539,24 @@ static void DAOS_Init(IOR_param_t *param)
DCHECK(rc, "Failed to connect to pool %s", o.daosPool);
}
HandleDistribute(&pool, POOL_HANDLE, param);
HandleDistribute(&pool, POOL_HANDLE);
MPI_CHECK(MPI_Bcast(&poolInfo, sizeof poolInfo, MPI_BYTE, 0,
param->testComm),
MPI_CHECK(MPI_Bcast(&poolInfo, sizeof poolInfo, MPI_BYTE, 0, MPI_COMM_WORLD),
"Failed to bcast pool info");
if (o.daosStripeCount == -1)
o.daosStripeCount = poolInfo.pi_ntargets * 64UL;
daos_initialized = true;
}
static void DAOS_Fini(IOR_param_t *param)
static void DAOS_Fini()
{
int rc;
if (!daos_initialized)
return;
rc = daos_pool_disconnect(pool, NULL /* ev */);
DCHECK(rc, "Failed to disconnect from pool %s", o.daosPool);
@ -570,6 +565,8 @@ static void DAOS_Fini(IOR_param_t *param)
rc = daos_fini();
DCHECK(rc, "Failed to finalize daos");
daos_initialized = false;
}
static void *DAOS_Create(char *testFileName, IOR_param_t *param)
@ -625,7 +622,7 @@ kill_daos_server(IOR_param_t *param)
targets.rl_ranks = &d_rank;
svc.rl_ranks = svc_ranks;
ParseService(param, sizeof(svc_ranks)/ sizeof(svc_ranks[0]), &svc);
ParseService(sizeof(svc_ranks)/ sizeof(svc_ranks[0]), &svc);
rc = daos_pool_exclude(uuid, NULL, &svc, &targets, NULL);
DCHECK(rc, "Error in excluding pool from poolmap\n");
@ -668,6 +665,15 @@ static IOR_offset_t DAOS_Xfer(int access, void *file, IOR_size_t *buffer,
uint64_t round;
int rc;
if (!daos_initialized)
GERR("DAOS is not initialized!");
if (param->filePerProc)
GERR("'filePerProc' not yet supported");
if (o.daosStripeSize % param->transferSize != 0)
GERR("'daosStripeSize' must be a multiple of 'transferSize'");
if (param->transferSize % o.daosRecordSize != 0)
GERR("'transferSize' must be a multiple of 'daosRecordSize'");
assert(length == param->transferSize);
assert(param->offset % length == 0);
@ -717,7 +723,7 @@ static IOR_offset_t DAOS_Xfer(int access, void *file, IOR_size_t *buffer,
else if (access == WRITECHECK || access == READCHECK)
memset(aio->a_iov.iov_buf, '#', length);
INFO(VERBOSE_3, param, "Starting AIO %p (%d free %d busy): access %d "
INFO(VERBOSE_3, "Starting AIO %p (%d free %d busy): access %d "
"dkey '%s' iod <%llu, %llu> sgl <%p, %lu>", aio, nAios,
o.daosAios - nAios, access, (char *) aio->a_dkey.iov_buf,
(unsigned long long) aio->a_iod.iod_recxs->rx_idx,
@ -756,6 +762,8 @@ static void DAOS_Close(void *file, IOR_param_t *param)
struct fileDescriptor *fd = file;
int rc;
if (!daos_initialized)
return;
while (o.daosAios - nAios > 0)
AIOWait(param);
AIOFini(param);
@ -772,7 +780,10 @@ static void DAOS_Delete(char *testFileName, IOR_param_t *param)
uuid_t uuid;
int rc;
INFO(VERBOSE_2, param, "Deleting container %s", testFileName);
if (!daos_initialized)
GERR("DAOS is not initialized!");
INFO(VERBOSE_2, "Deleting container %s", testFileName);
rc = uuid_parse(testFileName, uuid);
DCHECK(rc, "Failed to parse 'testFile': %s", testFileName);


@ -62,10 +62,10 @@ static struct dfs_options o = {
};
static option_help options [] = {
{'p', "pool", "DAOS pool uuid", OPTION_REQUIRED_ARGUMENT, 's', & o.pool},
{'v', "svcl", "DAOS pool SVCL", OPTION_REQUIRED_ARGUMENT, 's', & o.svcl},
{'g', "group", "DAOS server group", OPTION_OPTIONAL_ARGUMENT, 's', & o.group},
{'c', "cont", "DFS container uuid", OPTION_REQUIRED_ARGUMENT, 's', & o.cont},
{0, "dfs.pool", "DAOS pool uuid", OPTION_REQUIRED_ARGUMENT, 's', & o.pool},
{0, "dfs.svcl", "DAOS pool SVCL", OPTION_REQUIRED_ARGUMENT, 's', & o.svcl},
{0, "dfs.group", "DAOS server group", OPTION_OPTIONAL_ARGUMENT, 's', & o.group},
{0, "dfs.cont", "DFS container uuid", OPTION_REQUIRED_ARGUMENT, 's', & o.cont},
LAST_OPTION
};
@ -84,8 +84,8 @@ static int DFS_Stat (const char *, struct stat *, IOR_param_t *);
static int DFS_Mkdir (const char *, mode_t, IOR_param_t *);
static int DFS_Rmdir (const char *, IOR_param_t *);
static int DFS_Access (const char *, int, IOR_param_t *);
static void DFS_Init(IOR_param_t *param);
static void DFS_Finalize(IOR_param_t *param);
static void DFS_Init();
static void DFS_Finalize();
static option_help * DFS_options();
/************************** D E C L A R A T I O N S ***************************/
@ -122,7 +122,7 @@ do { \
format"\n", __FILE__, __LINE__, rank, _rc, \
##__VA_ARGS__); \
fflush(stderr); \
MPI_Abort(MPI_COMM_WORLD, -1); \
exit(-1); \
} \
} while (0)
@ -222,7 +222,7 @@ static option_help * DFS_options(){
}
static void
DFS_Init(IOR_param_t *param) {
DFS_Init() {
uuid_t pool_uuid, co_uuid;
daos_pool_info_t pool_info;
daos_cont_info_t co_info;
@ -231,7 +231,7 @@ DFS_Init(IOR_param_t *param) {
int rc;
if (o.pool == NULL || o.svcl == NULL || o.cont == NULL)
ERR("Invalid Arguments to DFS\n");
ERR("Invalid pool or container options\n");
rc = uuid_parse(o.pool, pool_uuid);
DCHECK(rc, "Failed to parse 'Pool uuid': %s", o.pool);
@ -275,7 +275,7 @@ DFS_Init(IOR_param_t *param) {
}
static void
DFS_Finalize(IOR_param_t *param)
DFS_Finalize()
{
int rc;
@ -341,11 +341,12 @@ DFS_Open(char *testFileName, IOR_param_t *param)
{
char *name = NULL, *dir_name = NULL;
dfs_obj_t *obj = NULL, *parent = NULL;
mode_t pmode;
mode_t pmode, mode;
int rc;
int fd_oflag = 0;
fd_oflag |= O_RDWR;
mode = S_IFREG | param->mode;
rc = parse_filename(testFileName, &name, &dir_name);
DERR(rc, "Failed to parse path %s", testFileName);
@ -356,7 +357,7 @@ DFS_Open(char *testFileName, IOR_param_t *param)
rc = dfs_lookup(dfs, dir_name, O_RDWR, &parent, &pmode);
DERR(rc, "dfs_lookup() of %s Failed", dir_name);
rc = dfs_open(dfs, parent, name, S_IFREG, fd_oflag, 0, NULL, &obj);
rc = dfs_open(dfs, parent, name, mode, fd_oflag, 0, NULL, &obj);
DERR(rc, "dfs_open() of %s Failed", name);
out:
@ -412,8 +413,7 @@ DFS_Xfer(int access, void *file, IOR_size_t *buffer, IOR_offset_t length,
if (ret < remaining) {
if (param->singleXferAttempt == TRUE)
MPI_CHECK(MPI_Abort(MPI_COMM_WORLD, -1),
"barrier error");
exit(-1);
if (xferRetries > MAX_RETRY)
ERR("too many retries -- aborting");
}
@ -625,7 +625,6 @@ DFS_Access(const char *path, int mode, IOR_param_t * param)
name = NULL;
}
rc = dfs_stat(dfs, parent, name, &stbuf);
DERR(rc, "dfs_stat() of %s Failed", name);
out:
if (name)


@ -9,6 +9,7 @@
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <time.h>
#include "ior.h"
#include "aiori.h"
@ -29,9 +30,9 @@ static struct dummy_options o = {
};
static option_help options [] = {
{'c', "delay-create", "Delay per create in usec", OPTION_OPTIONAL_ARGUMENT, 'l', & o.delay_creates},
{'x', "delay-xfer", "Delay per xfer in usec", OPTION_OPTIONAL_ARGUMENT, 'l', & o.delay_xfer},
{'z', "delay-only-rank0", "Delay only Rank0", OPTION_FLAG, 'd', & o.delay_rank_0_only},
{0, "dummy.delay-create", "Delay per create in usec", OPTION_OPTIONAL_ARGUMENT, 'l', & o.delay_creates},
{0, "dummy.delay-xfer", "Delay per xfer in usec", OPTION_OPTIONAL_ARGUMENT, 'l', & o.delay_xfer},
{0, "dummy.delay-only-rank0", "Delay only Rank0", OPTION_FLAG, 'd', & o.delay_rank_0_only},
LAST_OPTION
};
@ -48,7 +49,8 @@ static void *DUMMY_Create(char *testFileName, IOR_param_t * param)
}
if (o.delay_creates){
if (! o.delay_rank_0_only || (o.delay_rank_0_only && rank == 0)){
usleep(o.delay_creates);
struct timespec wait = { o.delay_creates / 1000 / 1000, 1000l * (o.delay_creates % 1000000)};
nanosleep( & wait, NULL);
}
}
return current++;
@ -102,7 +104,8 @@ static IOR_offset_t DUMMY_Xfer(int access, void *file, IOR_size_t * buffer, IOR_
}
if (o.delay_xfer){
if (! o.delay_rank_0_only || (o.delay_rank_0_only && rank == 0)){
usleep(o.delay_xfer);
struct timespec wait = {o.delay_xfer / 1000 / 1000, 1000l * (o.delay_xfer % 1000000)};
nanosleep( & wait, NULL);
}
}
return length;
@ -136,6 +139,7 @@ static int DUMMY_stat (const char *path, struct stat *buf, IOR_param_t * param){
ior_aiori_t dummy_aiori = {
"DUMMY",
NULL,
DUMMY_Create,
DUMMY_Open,
DUMMY_Xfer,


@ -98,6 +98,7 @@ static int HDF5_Access(const char *, int, IOR_param_t *);
ior_aiori_t hdf5_aiori = {
.name = "HDF5",
.name_legacy = NULL,
.create = HDF5_Create,
.open = HDF5_Open,
.xfer = HDF5_Xfer,
@ -228,14 +229,27 @@ static void *HDF5_Open(char *testFileName, IOR_param_t * param)
param->setAlignment),
"cannot set alignment");
#ifdef HAVE_H5PSET_ALL_COLL_METADATA_OPS
if (param->collective_md) {
/* more scalable metadata */
HDF5_CHECK(H5Pset_all_coll_metadata_ops(accessPropList, 1),
"cannot set collective md read");
HDF5_CHECK(H5Pset_coll_metadata_write(accessPropList, 1),
"cannot set collective md write");
}
#endif
/* open file */
if (param->open == WRITE) { /* WRITE */
*fd = H5Fcreate(testFileName, fd_mode,
createPropList, accessPropList);
HDF5_CHECK(*fd, "cannot create file");
} else { /* READ or CHECK */
*fd = H5Fopen(testFileName, fd_mode, accessPropList);
HDF5_CHECK(*fd, "cannot open file");
if(! param->dryRun){
if (param->open == WRITE) { /* WRITE */
*fd = H5Fcreate(testFileName, fd_mode,
createPropList, accessPropList);
HDF5_CHECK(*fd, "cannot create file");
} else { /* READ or CHECK */
*fd = H5Fopen(testFileName, fd_mode, accessPropList);
HDF5_CHECK(*fd, "cannot open file");
}
}
/* show hints actually attached to file handle */
@ -260,6 +274,8 @@ static void *HDF5_Open(char *testFileName, IOR_param_t * param)
HDF5_CHECK(H5Fget_vfd_handle
(*fd, apl, (void **)&fd_mpiio),
"cannot get MPIIO file handle");
if (mpiHintsCheck != MPI_INFO_NULL)
MPI_Info_free(&mpiHintsCheck);
MPI_CHECK(MPI_File_get_info
(*fd_mpiio, &mpiHintsCheck),
"cannot get info object through MPIIO");
@ -267,6 +283,8 @@ static void *HDF5_Open(char *testFileName, IOR_param_t * param)
"\nhints returned from opened file (MPIIO) {\n");
ShowHints(&mpiHintsCheck);
fprintf(stdout, "}\n");
if (mpiHintsCheck != MPI_INFO_NULL)
MPI_Info_free(&mpiHintsCheck);
}
}
MPI_CHECK(MPI_Barrier(testComm), "barrier error");
@ -328,6 +346,8 @@ static void *HDF5_Open(char *testFileName, IOR_param_t * param)
and shape of data set, and open it for access */
dataSpace = H5Screate_simple(NUM_DIMS, dataSetDims, NULL);
HDF5_CHECK(dataSpace, "cannot create simple data space");
if (mpiHints != MPI_INFO_NULL)
MPI_Info_free(&mpiHints);
return (fd);
}
@ -377,6 +397,9 @@ static IOR_offset_t HDF5_Xfer(int access, void *fd, IOR_size_t * buffer,
}
}
if(param->dryRun)
return length;
/* create new data set */
if (startNewDataSet == TRUE) {
/* if just opened this file, no data set to close yet */
@ -422,6 +445,8 @@ static void HDF5_Fsync(void *fd, IOR_param_t * param)
*/
static void HDF5_Close(void *fd, IOR_param_t * param)
{
if(param->dryRun)
return;
if (param->fd_fppReadCheck == NULL) {
HDF5_CHECK(H5Dclose(dataSet), "cannot close data set");
HDF5_CHECK(H5Sclose(dataSpace), "cannot close data space");
@ -441,7 +466,10 @@ static void HDF5_Close(void *fd, IOR_param_t * param)
*/
static void HDF5_Delete(char *testFileName, IOR_param_t * param)
{
return(MPIIO_Delete(testFileName, param));
if(param->dryRun)
	return;
MPIIO_Delete(testFileName, param);
}
/*
@ -573,7 +601,9 @@ static void SetupDataSet(void *fd, IOR_param_t * param)
static IOR_offset_t
HDF5_GetFileSize(IOR_param_t * test, MPI_Comm testComm, char *testFileName)
{
return(MPIIO_GetFileSize(test, testComm, testFileName));
if(test->dryRun)
return 0;
return(MPIIO_GetFileSize(test, testComm, testFileName));
}
/*
@ -581,5 +611,7 @@ HDF5_GetFileSize(IOR_param_t * test, MPI_Comm testComm, char *testFileName)
*/
static int HDF5_Access(const char *path, int mode, IOR_param_t *param)
{
return(MPIIO_Access(path, mode, param));
if(param->dryRun)
return 0;
return(MPIIO_Access(path, mode, param));
}


@ -115,6 +115,7 @@ static IOR_offset_t HDFS_GetFileSize(IOR_param_t *, MPI_Comm, char *);
ior_aiori_t hdfs_aiori = {
.name = "HDFS",
.name_legacy = NULL,
.create = HDFS_Create,
.open = HDFS_Open,
.xfer = HDFS_Xfer,
@ -289,9 +290,9 @@ static void *HDFS_Create_Or_Open( char *testFileName, IOR_param_t *param, unsign
* truncate each other's writes
*/
if (( param->openFlags & IOR_WRONLY ) &&
( !param->filePerProc ) &&
( rank != 0 )) {
if (( param->openFlags & IOR_WRONLY ) &&
( !param->filePerProc ) &&
( rank != 0 )) {
MPI_CHECK(MPI_Barrier(testComm), "barrier error");
}
@ -308,7 +309,7 @@ static void *HDFS_Create_Or_Open( char *testFileName, IOR_param_t *param, unsign
param->transferSize,
param->hdfs_replicas,
param->hdfs_block_size);
}
}
hdfs_file = hdfsOpenFile( param->hdfs_fs,
testFileName,
fd_oflags,
@ -323,12 +324,12 @@ static void *HDFS_Create_Or_Open( char *testFileName, IOR_param_t *param, unsign
* For N-1 write, Rank 0 waits for the other ranks to open the file after it has.
*/
if (( param->openFlags & IOR_WRONLY ) &&
( !param->filePerProc ) &&
( rank == 0 )) {
if (( param->openFlags & IOR_WRONLY ) &&
( !param->filePerProc ) &&
( rank == 0 )) {
MPI_CHECK(MPI_Barrier(testComm), "barrier error");
}
}
if (param->verbose >= VERBOSE_4) {
printf("<- HDFS_Create_Or_Open\n");
@ -404,7 +405,7 @@ static IOR_offset_t HDFS_Xfer(int access, void *file, IOR_size_t * buffer,
}
if (param->verbose >= VERBOSE_4) {
printf("\thdfsWrite( 0x%llx, 0x%llx, 0x%llx, %lld)\n",
printf("\thdfsWrite( 0x%llx, 0x%llx, 0x%llx, %lld)\n",
hdfs_fs, hdfs_file, ptr, remaining ); /* DEBUGGING */
}
rc = hdfsWrite( hdfs_fs, hdfs_file, ptr, remaining );
@ -426,7 +427,7 @@ static IOR_offset_t HDFS_Xfer(int access, void *file, IOR_size_t * buffer,
}
if (param->verbose >= VERBOSE_4) {
printf("\thdfsRead( 0x%llx, 0x%llx, 0x%llx, %lld)\n",
printf("\thdfsRead( 0x%llx, 0x%llx, 0x%llx, %lld)\n",
hdfs_fs, hdfs_file, ptr, remaining ); /* DEBUGGING */
}
rc = hdfsRead( hdfs_fs, hdfs_file, ptr, remaining );


@ -63,6 +63,7 @@ extern MPI_Comm testComm;
ior_aiori_t ime_aiori = {
.name = "IME",
.name_legacy = "IM",
.create = IME_Create,
.open = IME_Open,
.xfer = IME_Xfer,
@ -271,10 +272,10 @@ static char *IME_GetVersion()
/*
* XXX: statfs call is currently not exposed by IME native interface.
*/
static int IME_StatFS(const char *oid, ior_aiori_statfs_t *stat_buf,
static int IME_StatFS(const char *path, ior_aiori_statfs_t *stat_buf,
IOR_param_t *param)
{
(void)oid;
(void)path;
(void)stat_buf;
(void)param;
@ -282,29 +283,33 @@ static int IME_StatFS(const char *oid, ior_aiori_statfs_t *stat_buf,
return -1;
}
/*
* XXX: mkdir call is currently not exposed by IME native interface.
*/
static int IME_MkDir(const char *oid, mode_t mode, IOR_param_t *param)
static int IME_MkDir(const char *path, mode_t mode, IOR_param_t *param)
{
(void)oid;
(void)mode;
(void)param;
WARN("mkdir is currently not supported in IME backend!");
#if (IME_NATIVE_API_VERSION >= 130)
return ime_native_mkdir(path, mode);
#else
(void)path;
(void)mode;
WARN("mkdir not supported in IME backend!");
return -1;
#endif
}
/*
* XXX: rmdir call is currently not exposed by IME native interface.
*/
static int IME_RmDir(const char *oid, IOR_param_t *param)
static int IME_RmDir(const char *path, IOR_param_t *param)
{
(void)oid;
(void)param;
WARN("rmdir is currently not supported in IME backend!");
#if (IME_NATIVE_API_VERSION >= 130)
return ime_native_rmdir(path);
#else
(void)path;
WARN("rmdir not supported in IME backend!");
return -1;
#endif
}
/*


@ -46,6 +46,7 @@ static void MPIIO_Fsync(void *, IOR_param_t *);
ior_aiori_t mpiio_aiori = {
.name = "MPIIO",
.name_legacy = NULL,
.create = MPIIO_Create,
.open = MPIIO_Open,
.xfer = MPIIO_Xfer,
@ -68,6 +69,9 @@ ior_aiori_t mpiio_aiori = {
*/
int MPIIO_Access(const char *path, int mode, IOR_param_t *param)
{
if(param->dryRun){
return MPI_SUCCESS;
}
MPI_File fd;
int mpi_mode = MPI_MODE_UNIQUE_OPEN;
@ -92,7 +96,10 @@ int MPIIO_Access(const char *path, int mode, IOR_param_t *param)
*/
static void *MPIIO_Create(char *testFileName, IOR_param_t * param)
{
return MPIIO_Open(testFileName, param);
if(param->dryRun){
return 0;
}
return MPIIO_Open(testFileName, param);
}
/*
@ -170,11 +177,13 @@ static void *MPIIO_Open(char *testFileName, IOR_param_t * param)
ShowHints(&mpiHints);
fprintf(stdout, "}\n");
}
MPI_CHECK(MPI_File_open(comm, testFileName, fd_mode, mpiHints, fd),
if(! param->dryRun){
MPI_CHECK(MPI_File_open(comm, testFileName, fd_mode, mpiHints, fd),
"cannot open file");
}
/* show hints actually attached to file handle */
if (rank == 0 && param->showHints) {
if (rank == 0 && param->showHints && ! param->dryRun) {
if (mpiHints != MPI_INFO_NULL)
MPI_CHECK(MPI_Info_free(&mpiHints), "MPI_Info_free failed");
MPI_CHECK(MPI_File_get_info(*fd, &mpiHints),
@ -185,7 +194,7 @@ static void *MPIIO_Open(char *testFileName, IOR_param_t * param)
}
/* preallocate space for file */
if (param->preallocate && param->open == WRITE) {
if (param->preallocate && param->open == WRITE && ! param->dryRun) {
MPI_CHECK(MPI_File_preallocate(*fd,
(MPI_Offset) (param->segmentCount
*
@ -231,11 +240,13 @@ static void *MPIIO_Open(char *testFileName, IOR_param_t * param)
MPI_CHECK(MPI_Type_commit(&param->fileType),
"cannot commit datatype");
MPI_CHECK(MPI_File_set_view(*fd, (MPI_Offset) 0,
if(! param->dryRun){
MPI_CHECK(MPI_File_set_view(*fd, (MPI_Offset) 0,
param->transferType,
param->fileType, "native",
(MPI_Info) MPI_INFO_NULL),
"cannot set file view");
}
}
if (mpiHints != MPI_INFO_NULL)
MPI_CHECK(MPI_Info_free(&mpiHints), "MPI_Info_free failed");
@ -253,6 +264,9 @@ static IOR_offset_t MPIIO_Xfer(int access, void *fd, IOR_size_t * buffer,
will get "assignment from incompatible pointer-type" warnings,
if we only use this one set of signatures. */
if(param->dryRun)
return length;
int (MPIAPI * Access) (MPI_File, void *, int,
MPI_Datatype, MPI_Status *);
int (MPIAPI * Access_at) (MPI_File, MPI_Offset, void *, int,
@ -381,8 +395,10 @@ static IOR_offset_t MPIIO_Xfer(int access, void *fd, IOR_size_t * buffer,
*/
static void MPIIO_Fsync(void *fdp, IOR_param_t * param)
{
if (MPI_File_sync(*(MPI_File *)fdp) != MPI_SUCCESS)
EWARN("fsync() failed");
if(param->dryRun)
return;
if (MPI_File_sync(*(MPI_File *)fdp) != MPI_SUCCESS)
EWARN("fsync() failed");
}
/*
@ -390,7 +406,9 @@ static void MPIIO_Fsync(void *fdp, IOR_param_t * param)
*/
static void MPIIO_Close(void *fd, IOR_param_t * param)
{
MPI_CHECK(MPI_File_close((MPI_File *) fd), "cannot close file");
if(! param->dryRun){
MPI_CHECK(MPI_File_close((MPI_File *) fd), "cannot close file");
}
if ((param->useFileView == TRUE) && (param->fd_fppReadCheck == NULL)) {
/*
* need to free the datatype, so done in the close process
@ -408,8 +426,10 @@ static void MPIIO_Close(void *fd, IOR_param_t * param)
*/
void MPIIO_Delete(char *testFileName, IOR_param_t * param)
{
MPI_CHECK(MPI_File_delete(testFileName, (MPI_Info) MPI_INFO_NULL),
"cannot delete file");
if(param->dryRun)
return;
MPI_CHECK(MPI_File_delete(testFileName, (MPI_Info) MPI_INFO_NULL),
"cannot delete file");
}
/*
@ -472,6 +492,8 @@ static IOR_offset_t SeekOffset(MPI_File fd, IOR_offset_t offset,
IOR_offset_t MPIIO_GetFileSize(IOR_param_t * test, MPI_Comm testComm,
char *testFileName)
{
if(test->dryRun)
return 0;
IOR_offset_t aggFileSizeFromStat, tmpMin, tmpMax, tmpSum;
MPI_File fd;
MPI_Comm comm;


@ -53,7 +53,7 @@ static IOR_offset_t NCMPI_Xfer(int, void *, IOR_size_t *,
IOR_offset_t, IOR_param_t *);
static void NCMPI_Close(void *, IOR_param_t *);
static void NCMPI_Delete(char *, IOR_param_t *);
static void NCMPI_SetVersion(IOR_param_t *);
static char *NCMPI_GetVersion();
static void NCMPI_Fsync(void *, IOR_param_t *);
static IOR_offset_t NCMPI_GetFileSize(IOR_param_t *, MPI_Comm, char *);
static int NCMPI_Access(const char *, int, IOR_param_t *);
@ -62,6 +62,7 @@ static int NCMPI_Access(const char *, int, IOR_param_t *);
ior_aiori_t ncmpi_aiori = {
.name = "NCMPI",
.name_legacy = NULL,
.create = NCMPI_Create,
.open = NCMPI_Open,
.xfer = NCMPI_Xfer,
@ -175,7 +176,7 @@ static void *NCMPI_Open(char *testFileName, IOR_param_t * param)
static IOR_offset_t NCMPI_Xfer(int access, void *fd, IOR_size_t * buffer,
IOR_offset_t length, IOR_param_t * param)
{
char *bufferPtr = (char *)buffer;
signed char *bufferPtr = (signed char *)buffer;
static int firstReadCheck = FALSE, startDataSet;
int var_id, dim_id[NUM_DIMS];
MPI_Offset bufSize[NUM_DIMS], offset[NUM_DIMS];
@ -343,7 +344,7 @@ static void NCMPI_Delete(char *testFileName, IOR_param_t * param)
*/
static char* NCMPI_GetVersion()
{
return ncmpi_inq_libvers();
return (char *)ncmpi_inq_libvers();
}
/*


@ -31,7 +31,10 @@
#include <sys/stat.h>
#include <assert.h>
#ifdef HAVE_LUSTRE_LUSTRE_USER_H
#ifdef HAVE_LINUX_LUSTRE_LUSTRE_USER_H
# include <linux/lustre/lustre_user.h>
#elif defined(HAVE_LUSTRE_LUSTRE_USER_H)
# include <lustre/lustre_user.h>
#endif
#ifdef HAVE_GPFS_H
@ -73,6 +76,7 @@ static void POSIX_Fsync(void *, IOR_param_t *);
ior_aiori_t posix_aiori = {
.name = "POSIX",
.name_legacy = NULL,
.create = POSIX_Create,
.open = POSIX_Open,
.xfer = POSIX_Xfer,
@ -273,7 +277,16 @@ void *POSIX_Create(char *testFileName, IOR_param_t * param)
if (param->useO_DIRECT == TRUE)
set_o_direct_flag(&fd_oflag);
if(param->dryRun)
return 0;
#ifdef HAVE_LUSTRE_LUSTRE_USER_H
/* Add a #define for FASYNC if not available, as it forms part of
* the Lustre O_LOV_DELAY_CREATE definition. */
#ifndef FASYNC
#define FASYNC 00020000 /* fcntl, for BSD compatibility */
#endif
if (param->lustre_set_striping) {
/* In the single-shared-file case, task 0 has to create the
file with the Lustre striping options before any other processes
@ -294,7 +307,8 @@ void *POSIX_Create(char *testFileName, IOR_param_t * param)
opts.lmm_stripe_count = param->lustre_stripe_count;
/* File needs to be opened O_EXCL because we cannot set
Lustre striping information on a pre-existing file. */
* Lustre striping information on a pre-existing file.*/
fd_oflag |=
O_CREAT | O_EXCL | O_RDWR | O_LOV_DELAY_CREATE;
*fd = open64(testFileName, fd_oflag, 0664);
@ -378,6 +392,10 @@ void *POSIX_Open(char *testFileName, IOR_param_t * param)
set_o_direct_flag(&fd_oflag);
fd_oflag |= O_RDWR;
if(param->dryRun)
return 0;
*fd = open64(testFileName, fd_oflag);
if (*fd < 0)
ERR("open64 failed");
@ -414,6 +432,9 @@ static IOR_offset_t POSIX_Xfer(int access, void *file, IOR_size_t * buffer,
long long rc;
int fd;
if(param->dryRun)
return length;
fd = *(int *)file;
#ifdef HAVE_GPFS_FCNTL_H
@ -495,6 +516,8 @@ static void POSIX_Fsync(void *fd, IOR_param_t * param)
*/
void POSIX_Close(void *fd, IOR_param_t * param)
{
if(param->dryRun)
return;
if (close(*(int *)fd) != 0)
ERR("close() failed");
free(fd);
@ -505,11 +528,14 @@ void POSIX_Close(void *fd, IOR_param_t * param)
*/
void POSIX_Delete(char *testFileName, IOR_param_t * param)
{
char errmsg[256];
sprintf(errmsg, "[RANK %03d]: unlink() of file \"%s\" failed\n",
rank, testFileName);
if (unlink(testFileName) != 0)
if(param->dryRun)
return;
if (unlink(testFileName) != 0){
char errmsg[256];
sprintf(errmsg, "[RANK %03d]: unlink() of file \"%s\" failed\n",
rank, testFileName);
EWARN(errmsg);
}
}
/*
@ -518,6 +544,8 @@ void POSIX_Delete(char *testFileName, IOR_param_t * param)
IOR_offset_t POSIX_GetFileSize(IOR_param_t * test, MPI_Comm testComm,
char *testFileName)
{
if(test->dryRun)
return 0;
struct stat stat_buf;
IOR_offset_t aggFileSizeFromStat, tmpMin, tmpMax, tmpSum;


@ -41,9 +41,9 @@ static struct rados_options o = {
};
static option_help options [] = {
{'u', "user", "Username for the RADOS cluster", OPTION_REQUIRED_ARGUMENT, 's', & o.user},
{'c', "conf", "Config file for the RADOS cluster", OPTION_REQUIRED_ARGUMENT, 's', & o.conf},
{'p', "pool", "RADOS pool to use for I/O", OPTION_REQUIRED_ARGUMENT, 's', & o.pool},
{0, "rados.user", "Username for the RADOS cluster", OPTION_REQUIRED_ARGUMENT, 's', & o.user},
{0, "rados.conf", "Config file for the RADOS cluster", OPTION_REQUIRED_ARGUMENT, 's', & o.conf},
{0, "rados.pool", "RADOS pool to use for I/O", OPTION_REQUIRED_ARGUMENT, 's', & o.pool},
LAST_OPTION
};
@ -67,6 +67,7 @@ static option_help * RADOS_options();
/************************** D E C L A R A T I O N S ***************************/
ior_aiori_t rados_aiori = {
.name = "RADOS",
.name_legacy = NULL,
.create = RADOS_Create,
.open = RADOS_Open,
.xfer = RADOS_Xfer,


@ -167,6 +167,7 @@ static void S3_finalize();
// N:N fails if "transfer-size" != "block-size" (because that requires "append")
ior_aiori_t s3_aiori = {
.name = "S3",
.name_legacy = NULL,
.create = S3_Create,
.open = S3_Open,
.xfer = S3_Xfer,


@ -12,6 +12,17 @@
*
\******************************************************************************/
#ifdef HAVE_CONFIG_H
# include "config.h"
#endif
#include <assert.h>
#include <stdbool.h>
#if defined(HAVE_STRINGS_H)
#include <strings.h>
#endif
#include "aiori.h"
#if defined(HAVE_SYS_STATVFS_H)
@ -65,13 +76,35 @@ ior_aiori_t *available_aiori[] = {
NULL
};
void aiori_supported_apis(char * APIs){
void airoi_parse_options(int argc, char ** argv, option_help * global_options){
int airoi_c = aiori_count();
options_all opt;
opt.module_count = airoi_c + 1;
opt.modules = malloc(sizeof(option_module) * (airoi_c + 1));
opt.modules[0].prefix = NULL;
opt.modules[0].options = global_options;
ior_aiori_t **tmp = available_aiori;
for (int i=1; *tmp != NULL; ++tmp, i++) {
opt.modules[i].prefix = (*tmp)->name;
if((*tmp)->get_options != NULL){
opt.modules[i].options = (*tmp)->get_options();
}else{
opt.modules[i].options = NULL;
}
}
option_parse(argc, argv, &opt);
free(opt.modules);
}
void aiori_supported_apis(char * APIs, char * APIs_legacy){
ior_aiori_t **tmp = available_aiori;
if(*tmp != NULL){
APIs += sprintf(APIs, "%s", (*tmp)->name);
tmp++;
for (; *tmp != NULL; ++tmp) {
APIs += sprintf(APIs, "|%s", (*tmp)->name);
if ((*tmp)->name_legacy != NULL)
APIs_legacy += sprintf(APIs_legacy, "|%s", (*tmp)->name_legacy);
}
}
}
@ -135,73 +168,141 @@ char* aiori_get_version()
return "";
}
static int is_initialized = FALSE;
static bool is_initialized = false;
void aiori_initialize(IOR_test_t *tests_head){
if (is_initialized) return;
is_initialized = TRUE;
/* Sanity check, we were compiled with SOME backend, right? */
if (0 == aiori_count ()) {
ERR("No IO backends compiled into aiori. "
"Run 'configure --with-<backend>', and recompile.");
}
for (ior_aiori_t **tmp = available_aiori ; *tmp != NULL; ++tmp) {
if((*tmp)->initialize){
(*tmp)->initialize(tests_head ? &tests_head->params : NULL);
}
}
static void init_or_fini_internal(const ior_aiori_t *test_backend,
const bool init)
{
if (init)
{
if (test_backend->initialize)
test_backend->initialize();
}
else
{
if (test_backend->finalize)
test_backend->finalize();
}
}
void aiori_finalize(IOR_test_t *tests_head){
if (! is_initialized) return;
is_initialized = FALSE;
static void init_or_fini(IOR_test_t *tests, const bool init)
{
/* Sanity check, we were compiled with SOME backend, right? */
if (0 == aiori_count ()) {
ERR("No IO backends compiled into aiori. "
"Run 'configure --with-<backend>', and recompile.");
}
for (ior_aiori_t **tmp = available_aiori ; *tmp != NULL; ++tmp) {
if((*tmp)->finalize){
(*tmp)->finalize(tests_head ? &tests_head->params : NULL);
}
}
/* Pointer to the initialize or finalize function */
/* if tests is NULL, initialize or finalize all available backends */
if (tests == NULL)
{
for (ior_aiori_t **tmp = available_aiori ; *tmp != NULL; ++tmp)
init_or_fini_internal(*tmp, init);
return;
}
for (IOR_test_t *t = tests; t != NULL; t = t->next)
{
IOR_param_t *params = &t->params;
assert(params != NULL);
const ior_aiori_t *test_backend = params->backend;
assert(test_backend != NULL);
init_or_fini_internal(test_backend, init);
}
}
/**
* Initialize IO backends.
*
* @param[in] tests Pointer to the first test
*
* This function initializes all backends which will be used. If tests is NULL
* all available backends are initialized.
*/
void aiori_initialize(IOR_test_t *tests)
{
if (is_initialized)
return;
init_or_fini(tests, true);
is_initialized = true;
}
/**
* Finalize IO backends.
*
* @param[in] tests Pointer to the first test
*
* This function finalizes all backends which were used. If tests is NULL
* all available backends are finalized.
*/
void aiori_finalize(IOR_test_t *tests)
{
if (!is_initialized)
return;
is_initialized = false;
init_or_fini(tests, false);
}
const ior_aiori_t *aiori_select (const char *api)
{
char warn_str[256] = {0};
for (ior_aiori_t **tmp = available_aiori ; *tmp != NULL; ++tmp) {
if (NULL == api || strcasecmp(api, (*tmp)->name) == 0) {
if (NULL == (*tmp)->statfs) {
(*tmp)->statfs = aiori_posix_statfs;
snprintf(warn_str, 256, "assuming POSIX-based backend for"
" %s statfs call", api);
WARN(warn_str);
}
if (NULL == (*tmp)->mkdir) {
(*tmp)->mkdir = aiori_posix_mkdir;
snprintf(warn_str, 256, "assuming POSIX-based backend for"
" %s mkdir call", api);
WARN(warn_str);
}
if (NULL == (*tmp)->rmdir) {
(*tmp)->rmdir = aiori_posix_rmdir;
snprintf(warn_str, 256, "assuming POSIX-based backend for"
" %s rmdir call", api);
WARN(warn_str);
}
if (NULL == (*tmp)->access) {
(*tmp)->access = aiori_posix_access;
snprintf(warn_str, 256, "assuming POSIX-based backend for"
" %s access call", api);
WARN(warn_str);
}
if (NULL == (*tmp)->stat) {
(*tmp)->stat = aiori_posix_stat;
snprintf(warn_str, 256, "assuming POSIX-based backend for"
" %s stat call", api);
WARN(warn_str);
}
return *tmp;
char *name_leg = (*tmp)->name_legacy;
if (NULL != api &&
(strcasecmp(api, (*tmp)->name) != 0) &&
(name_leg == NULL || strcasecmp(api, name_leg) != 0))
continue;
if (name_leg != NULL && strcasecmp(api, name_leg) == 0)
{
snprintf(warn_str, 256, "%s backend is deprecated use %s"
" instead", api, (*tmp)->name);
WARN(warn_str);
}
if (NULL == (*tmp)->statfs) {
(*tmp)->statfs = aiori_posix_statfs;
snprintf(warn_str, 256, "assuming POSIX-based backend for"
" %s statfs call", api);
WARN(warn_str);
}
if (NULL == (*tmp)->mkdir) {
(*tmp)->mkdir = aiori_posix_mkdir;
snprintf(warn_str, 256, "assuming POSIX-based backend for"
" %s mkdir call", api);
WARN(warn_str);
}
if (NULL == (*tmp)->rmdir) {
(*tmp)->rmdir = aiori_posix_rmdir;
snprintf(warn_str, 256, "assuming POSIX-based backend for"
" %s rmdir call", api);
WARN(warn_str);
}
if (NULL == (*tmp)->access) {
(*tmp)->access = aiori_posix_access;
snprintf(warn_str, 256, "assuming POSIX-based backend for"
" %s access call", api);
WARN(warn_str);
}
if (NULL == (*tmp)->stat) {
(*tmp)->stat = aiori_posix_stat;
snprintf(warn_str, 256, "assuming POSIX-based backend for"
" %s stat call", api);
WARN(warn_str);
}
return *tmp;
}
return NULL;


@ -65,6 +65,7 @@ typedef struct ior_aiori_statfs {
typedef struct ior_aiori {
char *name;
char *name_legacy;
void *(*create)(char *, IOR_param_t *);
void *(*open)(char *, IOR_param_t *);
IOR_offset_t (*xfer)(int, void *, IOR_size_t *,
@ -79,8 +80,8 @@ typedef struct ior_aiori {
int (*rmdir) (const char *path, IOR_param_t * param);
int (*access) (const char *path, int mode, IOR_param_t * param);
int (*stat) (const char *path, struct stat *buf, IOR_param_t * param);
void (*initialize)(IOR_param_t *); /* called once per program before MPI is started */
void (*finalize)(IOR_param_t *); /* called once per program after MPI is shutdown */
void (*initialize)(); /* called once per program before MPI is started */
void (*finalize)(); /* called once per program after MPI is shutdown */
option_help * (*get_options)();
} ior_aiori_t;
@ -99,11 +100,12 @@ extern ior_aiori_t rados_aiori;
extern ior_aiori_t daos_aiori;
extern ior_aiori_t dfs_aiori;
void aiori_initialize(IOR_test_t *th);
void aiori_finalize(IOR_test_t *th);
void aiori_initialize(IOR_test_t * tests);
void aiori_finalize(IOR_test_t * tests);
const ior_aiori_t *aiori_select (const char *api);
int aiori_count (void);
void aiori_supported_apis(char * APIs);
void aiori_supported_apis(char * APIs, char * APIs_legacy);
void airoi_parse_options(int argc, char ** argv, option_help * global_options);
const char *aiori_default (void);
/* some generic POSIX-based backend calls */


@ -6,7 +6,6 @@
#define _IOR_INTERNAL_H
/* Part of ior-output.c */
void PrintEarlyHeader();
void PrintHeader(int argc, char **argv);
void ShowTestStart(IOR_param_t *params);
void ShowTestEnd(IOR_test_t *tptr);


@ -11,8 +11,6 @@
extern char **environ;
static struct results *bw_values(int reps, IOR_results_t * measured, int offset, double *vals);
static struct results *ops_values(int reps, IOR_results_t * measured, int offset, IOR_offset_t transfer_size, double *vals);
static double mean_of_array_of_doubles(double *values, int len);
static void PPDouble(int leftjustify, double number, char *append);
static void PrintNextToken();
@ -153,6 +151,9 @@ static void PrintNamedArrayStart(char * key){
}
static void PrintEndSection(){
if (rank != 0)
return;
indent--;
if(outputFormat == OUTPUT_JSON){
fprintf(out_resultfile, "\n");
@ -163,6 +164,8 @@ static void PrintEndSection(){
}
static void PrintArrayStart(){
if (rank != 0)
return;
PrintNextToken();
needNextToken = 0;
if(outputFormat == OUTPUT_JSON){
@ -171,6 +174,8 @@ static void PrintArrayStart(){
}
static void PrintArrayNamedStart(char * key){
if (rank != 0)
return;
PrintNextToken();
needNextToken = 0;
if(outputFormat == OUTPUT_JSON){
@ -179,6 +184,9 @@ static void PrintArrayNamedStart(char * key){
}
static void PrintArrayEnd(){
if (rank != 0)
return;
indent--;
if(outputFormat == OUTPUT_JSON){
fprintf(out_resultfile, "]\n");
@ -187,10 +195,14 @@ static void PrintArrayEnd(){
}
void PrintRepeatEnd(){
if (rank != 0)
return;
PrintArrayEnd();
}
void PrintRepeatStart(){
if (rank != 0)
return;
if( outputFormat == OUTPUT_DEFAULT){
return;
}
@ -233,20 +245,6 @@ void PrintReducedResult(IOR_test_t *test, int access, double bw, double *diff_su
fflush(out_resultfile);
}
/*
* Message to print immediately after MPI_Init so we know that
* ior has started.
*/
void PrintEarlyHeader()
{
if (rank != 0)
return;
fprintf(out_resultfile, "IOR-" META_VERSION ": MPI Coordinated Test of Parallel I/O\n");
fflush(out_resultfile);
}
void PrintHeader(int argc, char **argv)
{
struct utsname unamebuf;
@ -254,8 +252,13 @@ void PrintHeader(int argc, char **argv)
if (rank != 0)
return;
PrintStartSection();
PrintStartSection();
if (outputFormat != OUTPUT_DEFAULT){
PrintKeyVal("Version", META_VERSION);
}else{
printf("IOR-" META_VERSION ": MPI Coordinated Test of Parallel I/O\n");
}
PrintKeyVal("Began", CurrentTimeString());
PrintKeyValStart("Command line");
fprintf(out_resultfile, "%s", argv[0]);
@ -336,6 +339,7 @@ void ShowTestStart(IOR_param_t *test)
PrintKeyValInt("outlierThreshold", test->outlierThreshold);
PrintKeyVal("options", test->options);
PrintKeyValInt("dryRun", test->dryRun);
PrintKeyValInt("nodes", test->nodes);
PrintKeyValInt("memoryPerTask", (unsigned long) test->memoryPerTask);
PrintKeyValInt("memoryPerNode", (unsigned long) test->memoryPerNode);
@ -388,10 +392,12 @@ void ShowTestStart(IOR_param_t *test)
void ShowTestEnd(IOR_test_t *tptr){
if(rank == 0 && tptr->params.stoneWallingWearOut){
size_t pairs_accessed = tptr->results->write.pairs_accessed;
if (tptr->params.stoneWallingStatusFile){
StoreStoneWallingIterations(tptr->params.stoneWallingStatusFile, tptr->results->pairs_accessed);
StoreStoneWallingIterations(tptr->params.stoneWallingStatusFile, pairs_accessed);
}else{
fprintf(out_logfile, "Pairs deadlineForStonewallingaccessed: %lld\n", (long long) tptr->results->pairs_accessed);
fprintf(out_logfile, "Pairs deadlineForStonewallingaccessed: %ld\n", pairs_accessed);
}
}
PrintEndSection();
@ -438,6 +444,9 @@ void ShowSetup(IOR_param_t *params)
PrintKeyVal("xfersize", HumanReadable(params->transferSize, BASE_TWO));
PrintKeyVal("blocksize", HumanReadable(params->blockSize, BASE_TWO));
PrintKeyVal("aggregate filesize", HumanReadable(params->expectedAggFileSize, BASE_TWO));
if(params->dryRun){
PrintKeyValInt("dryRun", params->dryRun);
}
#ifdef HAVE_LUSTRE_LUSTRE_USER_H
if (params->lustre_set_striping) {
@ -457,14 +466,63 @@ void ShowSetup(IOR_param_t *params)
fflush(out_resultfile);
}
static struct results *bw_ops_values(const int reps, IOR_results_t *measured,
IOR_offset_t transfer_size,
const double *vals, const int access)
{
struct results *r;
int i;
r = (struct results *)malloc(sizeof(struct results)
+ (reps * sizeof(double)));
if (r == NULL)
ERR("malloc failed");
r->val = (double *)&r[1];
for (i = 0; i < reps; i++, measured++) {
IOR_point_t *point = (access == WRITE) ? &measured->write :
&measured->read;
r->val[i] = ((double) (point->aggFileSizeForBW))
/ transfer_size / vals[i];
if (i == 0) {
r->min = r->val[i];
r->max = r->val[i];
r->sum = 0.0;
}
r->min = MIN(r->min, r->val[i]);
r->max = MAX(r->max, r->val[i]);
r->sum += r->val[i];
}
r->mean = r->sum / reps;
r->var = 0.0;
for (i = 0; i < reps; i++) {
r->var += pow((r->mean - r->val[i]), 2);
}
r->var = r->var / reps;
r->sd = sqrt(r->var);
return r;
}
static struct results *bw_values(const int reps, IOR_results_t *measured,
const double *vals, const int access)
{
return bw_ops_values(reps, measured, 1, vals, access);
}
static struct results *ops_values(const int reps, IOR_results_t *measured,
IOR_offset_t transfer_size,
const double *vals, const int access)
{
return bw_ops_values(reps, measured, transfer_size, vals, access);
}
/*
* Summarize results
*
* operation is typically "write" or "read"
*/
static void PrintLongSummaryOneOperation(IOR_test_t *test, int times_offset, char *operation)
static void PrintLongSummaryOneOperation(IOR_test_t *test, const int access)
{
IOR_param_t *params = &test->params;
IOR_results_t *results = test->results;
@ -479,14 +537,20 @@ static void PrintLongSummaryOneOperation(IOR_test_t *test, int times_offset, cha
double * times = malloc(sizeof(double)* reps);
for(int i=0; i < reps; i++){
times[i] = *(double*)((char*) & results[i] + times_offset);
IOR_point_t *point = (access == WRITE) ? &results[i].write :
&results[i].read;
times[i] = point->time;
}
bw = bw_values(reps, results, offsetof(IOR_results_t, aggFileSizeForBW), times);
ops = ops_values(reps, results, offsetof(IOR_results_t, aggFileSizeForBW), params->transferSize, times);
bw = bw_values(reps, results, times, access);
ops = ops_values(reps, results, params->transferSize, times, access);
IOR_point_t *point = (access == WRITE) ? &results[0].write :
&results[0].read;
if(outputFormat == OUTPUT_DEFAULT){
fprintf(out_resultfile, "%-9s ", operation);
fprintf(out_resultfile, "%-9s ", access == WRITE ? "write" : "read");
fprintf(out_resultfile, "%10.2f ", bw->max / MEBIBYTE);
fprintf(out_resultfile, "%10.2f ", bw->min / MEBIBYTE);
fprintf(out_resultfile, "%10.2f ", bw->mean / MEBIBYTE);
@ -508,13 +572,13 @@ static void PrintLongSummaryOneOperation(IOR_test_t *test, int times_offset, cha
fprintf(out_resultfile, "%6lld ", params->segmentCount);
fprintf(out_resultfile, "%8lld ", params->blockSize);
fprintf(out_resultfile, "%8lld ", params->transferSize);
fprintf(out_resultfile, "%9.1f ", (float)results[0].aggFileSizeForBW / MEBIBYTE);
fprintf(out_resultfile, "%9.1f ", (float)point->aggFileSizeForBW / MEBIBYTE);
fprintf(out_resultfile, "%3s ", params->api);
fprintf(out_resultfile, "%6d", params->referenceNumber);
fprintf(out_resultfile, "\n");
}else if (outputFormat == OUTPUT_JSON){
PrintStartSection();
PrintKeyVal("operation", operation);
PrintKeyVal("operation", access == WRITE ? "write" : "read");
PrintKeyVal("API", params->api);
PrintKeyValInt("TestID", params->id);
PrintKeyValInt("ReferenceNumber", params->referenceNumber);
@ -541,7 +605,7 @@ static void PrintLongSummaryOneOperation(IOR_test_t *test, int times_offset, cha
PrintKeyValDouble("OPsMean", ops->mean);
PrintKeyValDouble("OPsSD", ops->sd);
PrintKeyValDouble("MeanTime", mean_of_array_of_doubles(times, reps));
PrintKeyValDouble("xsizeMiB", (double) results[0].aggFileSizeForBW / MEBIBYTE);
PrintKeyValDouble("xsizeMiB", (double) point->aggFileSizeForBW / MEBIBYTE);
PrintEndSection();
}else if (outputFormat == OUTPUT_CSV){
@ -559,9 +623,9 @@ void PrintLongSummaryOneTest(IOR_test_t *test)
IOR_param_t *params = &test->params;
if (params->writeFile)
PrintLongSummaryOneOperation(test, offsetof(IOR_results_t, writeTime), "write");
PrintLongSummaryOneOperation(test, WRITE);
if (params->readFile)
PrintLongSummaryOneOperation(test, offsetof(IOR_results_t, readTime), "read");
PrintLongSummaryOneOperation(test, READ);
}
void PrintLongSummaryHeader()
@ -613,8 +677,8 @@ void PrintShortSummary(IOR_test_t * test)
{
IOR_param_t *params = &test->params;
IOR_results_t *results = test->results;
double max_write = 0.0;
double max_read = 0.0;
double max_write_bw = 0.0;
double max_read_bw = 0.0;
double bw;
int reps;
int i;
@ -626,33 +690,31 @@ void PrintShortSummary(IOR_test_t * test)
reps = params->repetitions;
max_write = results[0].writeTime;
max_read = results[0].readTime;
for (i = 0; i < reps; i++) {
bw = (double)results[i].aggFileSizeForBW / results[i].writeTime;
max_write = MAX(bw, max_write);
bw = (double)results[i].aggFileSizeForBW / results[i].readTime;
max_read = MAX(bw, max_read);
bw = (double)results[i].write.aggFileSizeForBW / results[i].write.time;
max_write_bw = MAX(bw, max_write_bw);
bw = (double)results[i].read.aggFileSizeForBW / results[i].read.time;
max_read_bw = MAX(bw, max_read_bw);
}
if(outputFormat == OUTPUT_DEFAULT){
if (params->writeFile) {
fprintf(out_resultfile, "Max Write: %.2f MiB/sec (%.2f MB/sec)\n",
max_write/MEBIBYTE, max_write/MEGABYTE);
max_write_bw/MEBIBYTE, max_write_bw/MEGABYTE);
}
if (params->readFile) {
fprintf(out_resultfile, "Max Read: %.2f MiB/sec (%.2f MB/sec)\n",
max_read/MEBIBYTE, max_read/MEGABYTE);
max_read_bw/MEBIBYTE, max_read_bw/MEGABYTE);
}
}else if (outputFormat == OUTPUT_JSON){
PrintNamedSectionStart("max");
if (params->writeFile) {
PrintKeyValDouble("writeMiB", max_write/MEBIBYTE);
PrintKeyValDouble("writeMB", max_write/MEGABYTE);
PrintKeyValDouble("writeMiB", max_write_bw/MEBIBYTE);
PrintKeyValDouble("writeMB", max_write_bw/MEGABYTE);
}
if (params->readFile) {
PrintKeyValDouble("readMiB", max_read/MEBIBYTE);
PrintKeyValDouble("readMB", max_read/MEGABYTE);
PrintKeyValDouble("readMiB", max_read_bw/MEBIBYTE);
PrintKeyValDouble("readMB", max_read_bw/MEGABYTE);
}
PrintEndSection();
}
@ -738,78 +800,6 @@ static void PPDouble(int leftjustify, double number, char *append)
fprintf(out_resultfile, format, number, append);
}
static struct results *bw_values(int reps, IOR_results_t * measured, int offset, double *vals)
{
struct results *r;
int i;
r = (struct results *) malloc(sizeof(struct results) + (reps * sizeof(double)));
if (r == NULL)
ERR("malloc failed");
r->val = (double *)&r[1];
for (i = 0; i < reps; i++, measured++) {
r->val[i] = (double) *((IOR_offset_t*) ((char*)measured + offset)) / vals[i];
if (i == 0) {
r->min = r->val[i];
r->max = r->val[i];
r->sum = 0.0;
}
r->min = MIN(r->min, r->val[i]);
r->max = MAX(r->max, r->val[i]);
r->sum += r->val[i];
}
r->mean = r->sum / reps;
r->var = 0.0;
for (i = 0; i < reps; i++) {
r->var += pow((r->mean - r->val[i]), 2);
}
r->var = r->var / reps;
r->sd = sqrt(r->var);
return r;
}
static struct results *ops_values(int reps, IOR_results_t * measured, int offset,
IOR_offset_t transfer_size,
double *vals)
{
struct results *r;
int i;
r = (struct results *)malloc(sizeof(struct results)
+ (reps * sizeof(double)));
if (r == NULL)
ERR("malloc failed");
r->val = (double *)&r[1];
for (i = 0; i < reps; i++, measured++) {
r->val[i] = (double) *((IOR_offset_t*) ((char*)measured + offset))
/ transfer_size / vals[i];
if (i == 0) {
r->min = r->val[i];
r->max = r->val[i];
r->sum = 0.0;
}
r->min = MIN(r->min, r->val[i]);
r->max = MAX(r->max, r->val[i]);
r->sum += r->val[i];
}
r->mean = r->sum / reps;
r->var = 0.0;
for (i = 0; i < reps; i++) {
r->var += pow((r->mean - r->val[i]), 2);
}
r->var = r->var / reps;
r->sd = sqrt(r->var);
return r;
}
static double mean_of_array_of_doubles(double *values, int len)
{
double tot = 0.0;

src/ior.c

@ -20,6 +20,11 @@
#include <math.h>
#include <mpi.h>
#include <string.h>
#if defined(HAVE_STRINGS_H)
#include <strings.h>
#endif
#include <sys/stat.h> /* struct stat */
#include <time.h>
@ -36,6 +41,7 @@
#include "utilities.h"
#include "parse_options.h"
#define IOR_NB_TIMERS 6
/* file scope globals */
extern char **environ;
@ -48,8 +54,9 @@ static char **ParseFileName(char *, int *);
static void InitTests(IOR_test_t * , MPI_Comm);
static void TestIoSys(IOR_test_t *);
static void ValidateTests(IOR_param_t *);
static IOR_offset_t WriteOrRead(IOR_param_t * test, IOR_results_t * results, void *fd, int access, IOR_io_buffers* ioBuffers);
static void WriteTimes(IOR_param_t *, double **, int, int);
static IOR_offset_t WriteOrRead(IOR_param_t *test, IOR_results_t *results,
void *fd, const int access,
IOR_io_buffers *ioBuffers);
IOR_test_t * ior_run(int argc, char **argv, MPI_Comm world_com, FILE * world_out){
IOR_test_t *tests_head;
@ -60,7 +67,6 @@ IOR_test_t * ior_run(int argc, char **argv, MPI_Comm world_com, FILE * world_out
MPI_CHECK(MPI_Comm_size(mpi_comm_world, &numTasksWorld), "cannot get number of tasks");
MPI_CHECK(MPI_Comm_rank(mpi_comm_world, &rank), "cannot get rank");
PrintEarlyHeader();
/* setup tests, and validate parameters */
tests_head = ParseCommandLine(argc, argv);
@ -111,8 +117,6 @@ int ior_main(int argc, char **argv)
"cannot get number of tasks");
MPI_CHECK(MPI_Comm_rank(mpi_comm_world, &rank), "cannot get rank");
PrintEarlyHeader();
/* set error-handling */
/*MPI_CHECK(MPI_Errhandler_set(mpi_comm_world, MPI_ERRORS_RETURN),
"cannot set errhandler"); */
@ -121,6 +125,8 @@ int ior_main(int argc, char **argv)
InitTests(tests_head, mpi_comm_world);
verbose = tests_head->params.verbose;
aiori_initialize(tests_head);
PrintHeader(argc, argv);
/* perform each test */
@ -150,10 +156,12 @@ int ior_main(int argc, char **argv)
/* display finish time */
PrintTestEnds();
DestroyTests(tests_head);
aiori_finalize(tests_head);
MPI_CHECK(MPI_Finalize(), "cannot finalize MPI");
DestroyTests(tests_head);
return totalErrorCount;
}
@ -253,44 +261,38 @@ DisplayOutliers(int numTasks,
/*
* Check for outliers in start/end times and elapsed create/xfer/close times.
*/
static void CheckForOutliers(IOR_param_t * test, double **timer, int rep,
int access)
static void
CheckForOutliers(IOR_param_t *test, const double *timer, const int access)
{
int shift;
if (access == WRITE) {
shift = 0;
} else { /* READ */
shift = 6;
}
DisplayOutliers(test->numTasks, timer[shift + 0][rep],
DisplayOutliers(test->numTasks, timer[0],
"start time", access, test->outlierThreshold);
DisplayOutliers(test->numTasks,
timer[shift + 1][rep] - timer[shift + 0][rep],
timer[1] - timer[0],
"elapsed create time", access, test->outlierThreshold);
DisplayOutliers(test->numTasks,
timer[shift + 3][rep] - timer[shift + 2][rep],
timer[3] - timer[2],
"elapsed transfer time", access,
test->outlierThreshold);
DisplayOutliers(test->numTasks,
timer[shift + 5][rep] - timer[shift + 4][rep],
timer[5] - timer[4],
"elapsed close time", access, test->outlierThreshold);
DisplayOutliers(test->numTasks, timer[shift + 5][rep], "end time",
DisplayOutliers(test->numTasks, timer[5], "end time",
access, test->outlierThreshold);
}
/*
* Check if actual file size equals expected size; if not, use actual for
* calculating performance rate.
*/
static void CheckFileSize(IOR_test_t *test, IOR_offset_t dataMoved, int rep)
static void CheckFileSize(IOR_test_t *test, IOR_offset_t dataMoved, int rep,
const int access)
{
IOR_param_t *params = &test->params;
IOR_results_t *results = test->results;
IOR_point_t *point = (access == WRITE) ? &results[rep].write :
&results[rep].read;
MPI_CHECK(MPI_Allreduce(&dataMoved, & results[rep].aggFileSizeFromXfer,
MPI_CHECK(MPI_Allreduce(&dataMoved, &point->aggFileSizeFromXfer,
1, MPI_LONG_LONG_INT, MPI_SUM, testComm),
"cannot total data moved");
@ -298,18 +300,18 @@ static void CheckFileSize(IOR_test_t *test, IOR_offset_t dataMoved, int rep)
strcasecmp(params->api, "DAOS") != 0) {
if (verbose >= VERBOSE_0 && rank == 0) {
if ((params->expectedAggFileSize
!= results[rep].aggFileSizeFromXfer)
|| (results[rep].aggFileSizeFromStat
!= results[rep].aggFileSizeFromXfer)) {
!= point->aggFileSizeFromXfer)
|| (point->aggFileSizeFromStat
!= point->aggFileSizeFromXfer)) {
fprintf(out_logfile,
"WARNING: Expected aggregate file size = %lld.\n",
(long long) params->expectedAggFileSize);
fprintf(out_logfile,
"WARNING: Stat() of aggregate file size = %lld.\n",
(long long) results[rep].aggFileSizeFromStat);
(long long) point->aggFileSizeFromStat);
fprintf(out_logfile,
"WARNING: Using actual aggregate bytes moved = %lld.\n",
(long long) results[rep].aggFileSizeFromXfer);
(long long) point->aggFileSizeFromXfer);
if(params->deadlineForStonewalling){
fprintf(out_logfile,
"WARNING: maybe caused by deadlineForStonewalling\n");
@ -317,7 +319,8 @@ static void CheckFileSize(IOR_test_t *test, IOR_offset_t dataMoved, int rep)
}
}
}
results[rep].aggFileSizeForBW = results[rep].aggFileSizeFromXfer;
point->aggFileSizeForBW = point->aggFileSizeFromXfer;
}
/*
@ -459,12 +462,16 @@ static int CountErrors(IOR_param_t * test, int access, int errors)
*/
static void *aligned_buffer_alloc(size_t size)
{
size_t pageSize;
size_t pageMask;
char *buf, *tmp;
char *aligned;
pageSize = getpagesize();
#ifdef HAVE_SYSCONF
long pageSize = sysconf(_SC_PAGESIZE);
#else
size_t pageSize = getpagesize();
#endif
pageMask = pageSize - 1;
buf = malloc(size + pageSize + sizeof(void *));
if (buf == NULL)
@ -497,7 +504,7 @@ static void* safeMalloc(uint64_t size){
return d;
}
static void AllocResults(IOR_test_t *test)
void AllocResults(IOR_test_t *test)
{
int reps;
if (test->results != NULL)
@ -531,7 +538,6 @@ IOR_test_t *CreateTest(IOR_param_t *init_params, int test_num)
newTest->next = NULL;
newTest->results = NULL;
AllocResults(newTest);
return newTest;
}
@ -825,14 +831,15 @@ static char *PrependDir(IOR_param_t * test, char *rootDir)
sprintf(dir, "%s%d", dir, (rank + rankOffset) % test->numTasks);
/* dir doesn't exist, so create */
if (access(dir, F_OK) != 0) {
if (mkdir(dir, S_IRWXU) < 0) {
if (backend->access(dir, F_OK, test) != 0) {
if (backend->mkdir(dir, S_IRWXU, test) < 0) {
ERR("cannot create directory");
}
/* check if correct permissions */
} else if (access(dir, R_OK) != 0 || access(dir, W_OK) != 0 ||
access(dir, X_OK) != 0) {
} else if (backend->access(dir, R_OK, test) != 0 ||
backend->access(dir, W_OK, test) != 0 ||
backend->access(dir, X_OK, test) != 0) {
ERR("invalid directory permissions");
}
@ -847,54 +854,46 @@ static char *PrependDir(IOR_param_t * test, char *rootDir)
/*
* Reduce test results, and show if verbose set.
*/
static void ReduceIterResults(IOR_test_t *test, double **timer, int rep,
int access)
static void
ReduceIterResults(IOR_test_t *test, double *timer, const int rep, const int access)
{
double reduced[12] = { 0 };
double diff[6];
double *diff_subset;
double totalTime;
double bw;
int i;
MPI_Op op;
double reduced[IOR_NB_TIMERS] = { 0 };
double diff[IOR_NB_TIMERS / 2 + 1];
double totalTime;
double bw;
int i;
MPI_Op op;
assert(access == WRITE || access == READ);
assert(access == WRITE || access == READ);
/* Find the minimum start time of the even numbered timers, and the
maximum finish time for the odd numbered timers */
for (i = 0; i < 12; i++) {
for (i = 0; i < IOR_NB_TIMERS; i++) {
op = i % 2 ? MPI_MAX : MPI_MIN;
MPI_CHECK(MPI_Reduce(&timer[i][rep], &reduced[i], 1, MPI_DOUBLE,
MPI_CHECK(MPI_Reduce(&timer[i], &reduced[i], 1, MPI_DOUBLE,
op, 0, testComm), "MPI_Reduce()");
}
if (rank != 0) {
/* Only rank 0 tallies and prints the results. */
return;
}
/* Only rank 0 tallies and prints the results. */
if (rank != 0)
return;
/* Calculate elapsed times and throughput numbers */
for (i = 0; i < 6; i++) {
diff[i] = reduced[2 * i + 1] - reduced[2 * i];
}
if (access == WRITE) {
totalTime = reduced[5] - reduced[0];
test->results[rep].writeTime = totalTime;
diff_subset = &diff[0];
} else { /* READ */
totalTime = reduced[11] - reduced[6];
test->results[rep].readTime = totalTime;
diff_subset = &diff[3];
}
/* Calculate elapsed times and throughput numbers */
for (i = 0; i < IOR_NB_TIMERS / 2; i++)
diff[i] = reduced[2 * i + 1] - reduced[2 * i];
if (verbose < VERBOSE_0) {
return;
}
totalTime = reduced[5] - reduced[0];
bw = (double)test->results[rep].aggFileSizeForBW / totalTime;
IOR_point_t *point = (access == WRITE) ? &test->results[rep].write :
&test->results[rep].read;
PrintReducedResult(test, access, bw, diff_subset, totalTime, rep);
point->time = totalTime;
if (verbose < VERBOSE_0)
return;
bw = (double)point->aggFileSizeForBW / totalTime;
PrintReducedResult(test, access, bw, diff, totalTime, rep);
}
/*
@ -1117,7 +1116,72 @@ static void *HogMemory(IOR_param_t *params)
return buf;
}
/*
* Write times taken during each iteration of the test.
*/
static void
WriteTimes(IOR_param_t *test, const double *timer, const int iteration,
const int access)
{
char timerName[MAX_STR];
for (int i = 0; i < IOR_NB_TIMERS; i++) {
if (access == WRITE) {
switch (i) {
case 0:
strcpy(timerName, "write open start");
break;
case 1:
strcpy(timerName, "write open stop");
break;
case 2:
strcpy(timerName, "write start");
break;
case 3:
strcpy(timerName, "write stop");
break;
case 4:
strcpy(timerName, "write close start");
break;
case 5:
strcpy(timerName, "write close stop");
break;
default:
strcpy(timerName, "invalid timer");
break;
}
}
else {
switch (i) {
case 0:
strcpy(timerName, "read open start");
break;
case 1:
strcpy(timerName, "read open stop");
break;
case 2:
strcpy(timerName, "read start");
break;
case 3:
strcpy(timerName, "read stop");
break;
case 4:
strcpy(timerName, "read close start");
break;
case 5:
strcpy(timerName, "read close stop");
break;
default:
strcpy(timerName, "invalid timer");
break;
}
}
fprintf(out_logfile, "Test %d: Iter=%d, Task=%d, Time=%f, %s\n",
test->id, iteration, (int)rank, timer[i],
timerName);
}
}
/*
* Using the test parameters, run iteration(s) of single test.
*/
@ -1126,10 +1190,10 @@ static void TestIoSys(IOR_test_t *test)
IOR_param_t *params = &test->params;
IOR_results_t *results = test->results;
char testFileName[MAX_STR];
double *timer[12];
double timer[IOR_NB_TIMERS];
double startTime;
int pretendRank;
int i, rep;
int rep;
void *fd;
MPI_Group orig_group, new_group;
int range[3];
@ -1176,18 +1240,10 @@ static void TestIoSys(IOR_test_t *test)
}
params->tasksPerNode = CountTasksPerNode(testComm);
/* setup timers */
for (i = 0; i < 12; i++) {
timer[i] = (double *)malloc(params->repetitions * sizeof(double));
if (timer[i] == NULL)
ERR("malloc failed");
}
/* bind I/O calls to specific API */
backend = aiori_select(params->api);
if (backend->initialize)
backend->initialize(params);
if (backend == NULL)
ERR_SIMPLE("unrecognized I/O API");
/* show test setup */
if (rank == 0 && verbose >= VERBOSE_0)
@ -1261,9 +1317,9 @@ static void TestIoSys(IOR_test_t *test)
params->stoneWallingWearOutIterations = params_saved_wearout;
MPI_CHECK(MPI_Barrier(testComm), "barrier error");
params->open = WRITE;
timer[0][rep] = GetTimeStamp();
timer[0] = GetTimeStamp();
fd = backend->create(testFileName, params);
timer[1][rep] = GetTimeStamp();
timer[1] = GetTimeStamp();
if (params->intraTestBarriers)
MPI_CHECK(MPI_Barrier(testComm),
"barrier error");
@ -1272,40 +1328,40 @@ static void TestIoSys(IOR_test_t *test)
"Commencing write performance test: %s",
CurrentTimeString());
}
timer[2][rep] = GetTimeStamp();
dataMoved = WriteOrRead(params, & results[rep], fd, WRITE, &ioBuffers);
timer[2] = GetTimeStamp();
dataMoved = WriteOrRead(params, &results[rep], fd, WRITE, &ioBuffers);
if (params->verbose >= VERBOSE_4) {
fprintf(out_logfile, "* data moved = %llu\n", dataMoved);
fflush(out_logfile);
}
timer[3][rep] = GetTimeStamp();
timer[3] = GetTimeStamp();
if (params->intraTestBarriers)
MPI_CHECK(MPI_Barrier(testComm),
"barrier error");
timer[4][rep] = GetTimeStamp();
timer[4] = GetTimeStamp();
backend->close(fd, params);
timer[5][rep] = GetTimeStamp();
timer[5] = GetTimeStamp();
MPI_CHECK(MPI_Barrier(testComm), "barrier error");
/* get the size of the file just written */
results[rep].aggFileSizeFromStat =
results[rep].write.aggFileSizeFromStat =
backend->get_file_size(params, testComm, testFileName);
/* check if stat() of file doesn't equal expected file size,
use actual amount of bytes moved */
CheckFileSize(test, dataMoved, rep);
CheckFileSize(test, dataMoved, rep, WRITE);
if (verbose >= VERBOSE_3)
WriteTimes(params, timer, rep, WRITE);
ReduceIterResults(test, timer, rep, WRITE);
if (params->outlierThreshold) {
CheckForOutliers(params, timer, rep, WRITE);
CheckForOutliers(params, timer, WRITE);
}
/* check if in this round we run write with stonewalling */
if(params->deadlineForStonewalling > 0){
params->stoneWallingWearOutIterations = results[rep].pairs_accessed;
params->stoneWallingWearOutIterations = results[rep].write.pairs_accessed;
}
}
@ -1333,7 +1389,7 @@ static void TestIoSys(IOR_test_t *test)
GetTestFileName(testFileName, params);
params->open = WRITECHECK;
fd = backend->open(testFileName, params);
dataMoved = WriteOrRead(params, & results[rep], fd, WRITECHECK, &ioBuffers);
dataMoved = WriteOrRead(params, &results[rep], fd, WRITECHECK, &ioBuffers);
backend->close(fd, params);
rankOffset = 0;
}
@ -1364,7 +1420,7 @@ static void TestIoSys(IOR_test_t *test)
/* random process offset reading */
if (params->reorderTasksRandom) {
/* this should not intefere with randomOffset within a file because GetOffsetArrayRandom */
/* seeds every random() call */
/* seeds every rand() call */
int nodeoffset;
unsigned int iseed0;
nodeoffset = params->taskPerNodeOffset;
@ -1400,9 +1456,9 @@ static void TestIoSys(IOR_test_t *test)
DelaySecs(params->interTestDelay);
MPI_CHECK(MPI_Barrier(testComm), "barrier error");
params->open = READ;
timer[6][rep] = GetTimeStamp();
timer[0] = GetTimeStamp();
fd = backend->open(testFileName, params);
timer[7][rep] = GetTimeStamp();
timer[1] = GetTimeStamp();
if (params->intraTestBarriers)
MPI_CHECK(MPI_Barrier(testComm),
"barrier error");
@ -1411,30 +1467,30 @@ static void TestIoSys(IOR_test_t *test)
"Commencing read performance test: %s",
CurrentTimeString());
}
timer[8][rep] = GetTimeStamp();
dataMoved = WriteOrRead(params, & results[rep], fd, operation_flag, &ioBuffers);
timer[9][rep] = GetTimeStamp();
timer[2] = GetTimeStamp();
dataMoved = WriteOrRead(params, &results[rep], fd, operation_flag, &ioBuffers);
timer[3] = GetTimeStamp();
if (params->intraTestBarriers)
MPI_CHECK(MPI_Barrier(testComm),
"barrier error");
timer[10][rep] = GetTimeStamp();
timer[4] = GetTimeStamp();
backend->close(fd, params);
timer[11][rep] = GetTimeStamp();
timer[5] = GetTimeStamp();
/* get the size of the file just read */
results[rep].aggFileSizeFromStat =
results[rep].read.aggFileSizeFromStat =
backend->get_file_size(params, testComm,
testFileName);
/* check if stat() of file doesn't equal expected file size,
use actual amount of bytes moved */
CheckFileSize(test, dataMoved, rep);
CheckFileSize(test, dataMoved, rep, READ);
if (verbose >= VERBOSE_3)
WriteTimes(params, timer, rep, READ);
ReduceIterResults(test, timer, rep, READ);
if (params->outlierThreshold) {
CheckForOutliers(params, timer, rep, READ);
CheckForOutliers(params, timer, READ);
}
}
@ -1469,12 +1525,7 @@ static void TestIoSys(IOR_test_t *test)
if (hog_buf != NULL)
free(hog_buf);
for (i = 0; i < 12; i++) {
free(timer[i]);
}
if (backend->finalize)
backend->finalize(NULL);
/* Sync with the tasks that did not participate in this test */
MPI_CHECK(MPI_Barrier(mpi_comm_world), "barrier error");
@ -1721,11 +1772,11 @@ static IOR_offset_t *GetOffsetArrayRandom(IOR_param_t * test, int pretendRank,
/* set up seed for random() */
if (access == WRITE || access == READ) {
test->randomSeed = seed = random();
test->randomSeed = seed = rand();
} else {
seed = test->randomSeed;
}
srandom(seed);
srand(seed);
fileSize = test->blockSize * test->segmentCount;
if (test->filePerProc == FALSE) {
@ -1737,7 +1788,7 @@ static IOR_offset_t *GetOffsetArrayRandom(IOR_param_t * test, int pretendRank,
if (test->filePerProc == FALSE) {
// this counts which process gets how many transfers in
// a shared file
if ((random() % test->numTasks) == pretendRank) {
if ((rand() % test->numTasks) == pretendRank) {
offsets++;
}
} else {
@ -1759,9 +1810,9 @@ static IOR_offset_t *GetOffsetArrayRandom(IOR_param_t * test, int pretendRank,
}
} else {
/* fill with offsets (pass 2) */
srandom(seed); /* need same seed to get same transfers as counted in the beginning*/
srand(seed); /* need same seed to get same transfers as counted in the beginning*/
for (i = 0; i < fileSize; i += test->transferSize) {
if ((random() % test->numTasks) == pretendRank) {
if ((rand() % test->numTasks) == pretendRank) {
offsetArray[offsetCnt] = i;
offsetCnt++;
}
@ -1769,7 +1820,7 @@ static IOR_offset_t *GetOffsetArrayRandom(IOR_param_t * test, int pretendRank,
}
/* reorder array */
for (i = 0; i < offsets; i++) {
value = random() % offsets;
value = rand() % offsets;
tmp = offsetArray[value];
offsetArray[value] = offsetArray[i];
offsetArray[i] = tmp;
@ -1801,11 +1852,19 @@ static IOR_offset_t WriteOrReadSingle(IOR_offset_t pairCnt, IOR_offset_t *offset
backend->xfer(access, fd, buffer, transfer, test);
if (amtXferred != transfer)
ERR("cannot write to file");
if (test->interIODelay > 0){
struct timespec wait = {test->interIODelay / 1000 / 1000, 1000l * (test->interIODelay % 1000000)};
nanosleep( & wait, NULL);
}
} else if (access == READ) {
amtXferred =
backend->xfer(access, fd, buffer, transfer, test);
if (amtXferred != transfer)
ERR("cannot read from file");
if (test->interIODelay > 0){
struct timespec wait = {test->interIODelay / 1000 / 1000, 1000l * (test->interIODelay % 1000000)};
nanosleep( & wait, NULL);
}
} else if (access == WRITECHECK) {
memset(checkBuffer, 'a', transfer);
@ -1837,7 +1896,8 @@ static IOR_offset_t WriteOrReadSingle(IOR_offset_t pairCnt, IOR_offset_t *offset
* Write or Read data to file(s). This loops through the strides, writing
* out the data to each block in transfer sizes, until the remainder left is 0.
*/
static IOR_offset_t WriteOrRead(IOR_param_t * test, IOR_results_t * results, void *fd, int access, IOR_io_buffers* ioBuffers)
static IOR_offset_t WriteOrRead(IOR_param_t *test, IOR_results_t *results,
void *fd, const int access, IOR_io_buffers *ioBuffers)
{
int errors = 0;
IOR_offset_t transferCount = 0;
@ -1847,6 +1907,8 @@ static IOR_offset_t WriteOrRead(IOR_param_t * test, IOR_results_t * results, voi
IOR_offset_t dataMoved = 0; /* for data rate calculation */
double startForStonewall;
int hitStonewall;
IOR_point_t *point = ((access == WRITE) || (access == WRITECHECK)) ?
&results->write : &results->read;
/* initialize values */
pretendRank = (rank + rankOffset) % test->numTasks;
@ -1875,35 +1937,35 @@ static IOR_offset_t WriteOrRead(IOR_param_t * test, IOR_results_t * results, voi
}
long long data_moved_ll = (long long) dataMoved;
long long pairs_accessed_min = 0;
MPI_CHECK(MPI_Allreduce(& pairCnt, &results->pairs_accessed,
MPI_CHECK(MPI_Allreduce(& pairCnt, &point->pairs_accessed,
1, MPI_LONG_LONG_INT, MPI_MAX, testComm), "cannot reduce pairs moved");
double stonewall_runtime = GetTimeStamp() - startForStonewall;
results->stonewall_time = stonewall_runtime;
point->stonewall_time = stonewall_runtime;
MPI_CHECK(MPI_Reduce(& pairCnt, & pairs_accessed_min,
1, MPI_LONG_LONG_INT, MPI_MIN, 0, testComm), "cannot reduce pairs moved");
MPI_CHECK(MPI_Reduce(& data_moved_ll, & results->stonewall_min_data_accessed,
MPI_CHECK(MPI_Reduce(& data_moved_ll, &point->stonewall_min_data_accessed,
1, MPI_LONG_LONG_INT, MPI_MIN, 0, testComm), "cannot reduce pairs moved");
MPI_CHECK(MPI_Reduce(& data_moved_ll, & results->stonewall_avg_data_accessed,
MPI_CHECK(MPI_Reduce(& data_moved_ll, &point->stonewall_avg_data_accessed,
1, MPI_LONG_LONG_INT, MPI_SUM, 0, testComm), "cannot reduce pairs moved");
if(rank == 0){
fprintf(out_logfile, "stonewalling pairs accessed min: %lld max: %zu -- min data: %.1f GiB mean data: %.1f GiB time: %.1fs\n",
pairs_accessed_min, results->pairs_accessed,
results->stonewall_min_data_accessed /1024.0 / 1024 / 1024, results->stonewall_avg_data_accessed / 1024.0 / 1024 / 1024 / test->numTasks , results->stonewall_time);
results->stonewall_min_data_accessed *= test->numTasks;
pairs_accessed_min, point->pairs_accessed,
point->stonewall_min_data_accessed /1024.0 / 1024 / 1024, point->stonewall_avg_data_accessed / 1024.0 / 1024 / 1024 / test->numTasks , point->stonewall_time);
point->stonewall_min_data_accessed *= test->numTasks;
}
if(pairs_accessed_min == pairCnt){
results->stonewall_min_data_accessed = 0;
results->stonewall_avg_data_accessed = 0;
point->stonewall_min_data_accessed = 0;
point->stonewall_avg_data_accessed = 0;
}
if(pairCnt != results->pairs_accessed){
if(pairCnt != point->pairs_accessed){
// some work needs still to be done !
for(; pairCnt < results->pairs_accessed; pairCnt++ ) {
for(; pairCnt < point->pairs_accessed; pairCnt++ ) {
dataMoved += WriteOrReadSingle(pairCnt, offsetArray, pretendRank, & transferCount, & errors, test, fd, ioBuffers, access);
}
}
}else{
results->pairs_accessed = pairCnt;
point->pairs_accessed = pairCnt;
}
@ -1916,73 +1978,3 @@ static IOR_offset_t WriteOrRead(IOR_param_t * test, IOR_results_t * results, voi
}
return (dataMoved);
}
/*
* Write times taken during each iteration of the test.
*/
static void
WriteTimes(IOR_param_t * test, double **timer, int iteration, int writeOrRead)
{
char accessType[MAX_STR];
char timerName[MAX_STR];
int i, start = 0, stop = 0;
if (writeOrRead == WRITE) {
start = 0;
stop = 6;
strcpy(accessType, "WRITE");
} else if (writeOrRead == READ) {
start = 6;
stop = 12;
strcpy(accessType, "READ");
} else {
ERR("incorrect WRITE/READ option");
}
for (i = start; i < stop; i++) {
switch (i) {
case 0:
strcpy(timerName, "write open start");
break;
case 1:
strcpy(timerName, "write open stop");
break;
case 2:
strcpy(timerName, "write start");
break;
case 3:
strcpy(timerName, "write stop");
break;
case 4:
strcpy(timerName, "write close start");
break;
case 5:
strcpy(timerName, "write close stop");
break;
case 6:
strcpy(timerName, "read open start");
break;
case 7:
strcpy(timerName, "read open stop");
break;
case 8:
strcpy(timerName, "read start");
break;
case 9:
strcpy(timerName, "read stop");
break;
case 10:
strcpy(timerName, "read close start");
break;
case 11:
strcpy(timerName, "read close stop");
break;
default:
strcpy(timerName, "invalid timer");
break;
}
fprintf(out_logfile, "Test %d: Iter=%d, Task=%d, Time=%f, %s\n",
test->id, iteration, (int)rank, timer[i][iteration],
timerName);
}
}

View File

@ -77,9 +77,11 @@ typedef struct IO_BUFFERS
* USER_GUIDE
*/
struct ior_aiori;
typedef struct
{
const void * backend;
const struct ior_aiori * backend;
char * debug; /* debug info string */
unsigned int mode; /* file permissions */
unsigned int openFlags; /* open flags (see also <open>) */
@ -91,6 +93,7 @@ typedef struct
char * testFileName_fppReadCheck;/* filename for fpp read check */
char * hintsFileName; /* full name for hints file */
char * options; /* options string */
int dryRun; /* do not perform any I/O; just parse eventual inputs and print dummy output */
int numTasks; /* number of tasks for test */
int nodes; /* number of nodes for test */
int tasksPerNode; /* number of tasks per node */
@ -98,6 +101,7 @@ typedef struct
int repCounter; /* rep counter */
int multiFile; /* multiple files */
int interTestDelay; /* delay between reps in seconds */
int interIODelay; /* delay after each I/O in us */
int open; /* flag for writing or reading */
int readFile; /* read of existing file */
int writeFile; /* write of file */
@ -175,6 +179,8 @@ typedef struct
char* URI; /* "path" to target object */
size_t part_number; /* multi-part upload increment (PER-RANK!) */
char* UploadId; /* key for multi-part-uploads */
int collective_md; /* use collective metadata optimization */
/* RADOS variables */
rados_t rados_cluster; /* RADOS cluster handle */
@ -202,12 +208,9 @@ typedef struct
int intraTestBarriers; /* barriers between open/op and op/close */
} IOR_param_t;
/* each pointer is to an array, each of length equal to the number of
repetitions in the test */
/* each pointer for a single test */
typedef struct {
double writeTime;
double readTime;
int errors;
double time;
size_t pairs_accessed; // number of I/Os done, useful for deadlineForStonewalling
double stonewall_time;
@ -217,17 +220,24 @@ typedef struct {
IOR_offset_t aggFileSizeFromStat;
IOR_offset_t aggFileSizeFromXfer;
IOR_offset_t aggFileSizeForBW;
} IOR_point_t;
typedef struct {
int errors;
IOR_point_t write;
IOR_point_t read;
} IOR_results_t;
/* define the queuing structure for the test parameters */
typedef struct IOR_test_t {
IOR_param_t params;
IOR_results_t *results; /* This is an array of reps times IOR_results_t */
IOR_results_t *results;
struct IOR_test_t *next;
} IOR_test_t;
IOR_test_t *CreateTest(IOR_param_t *init_params, int test_num);
void AllocResults(IOR_test_t *test);
char * GetPlatformName();
void init_IOR_Param_t(IOR_param_t *p);

View File

@ -45,7 +45,7 @@
# define srandom srand
# define random() (rand() * (RAND_MAX+1) + rand()) /* Note: only 30 bits */
# define sleep(X) Sleep((X)*1000)
# define getpagesize() 4096
# define sysconf(X) 4096
#else
# include <sys/param.h> /* MAXPATHLEN */
# include <unistd.h>
@ -96,7 +96,6 @@ enum OutputFormat_t{
#define WRITECHECK 1
#define READ 2
#define READCHECK 3
#define CHECK 4
/* verbosity settings */
#define VERBOSE_0 0

View File

@ -3,9 +3,8 @@
int main(int argc, char **argv) {
MPI_Init(&argc, &argv);
mdtest_run(argc, argv, MPI_COMM_WORLD, stdout);
MPI_Finalize();
return 0;
}

View File

@ -28,7 +28,6 @@
* $Date: 2013/11/27 17:05:31 $
* $Author: brettkettering $
*/
#include <limits.h>
#include <math.h>
#include <stdio.h>
@ -59,6 +58,11 @@
#include <fcntl.h>
#include <string.h>
#if HAVE_STRINGS_H
#include <strings.h>
#endif
#include <unistd.h>
#include <dirent.h>
#include <errno.h>
@ -123,6 +127,7 @@ static uint64_t num_dirs_in_tree;
*/
static uint64_t items;
static uint64_t items_per_dir;
static uint64_t num_dirs_in_tree_calc; /* this is a workaround until the overall code is refactored */
static int directory_loops;
static int print_time;
static int random_seed;
@ -179,7 +184,7 @@ void offset_timers(double * t, int tcount) {
fflush( out_logfile );
}
toffset = MPI_Wtime() - t[tcount];
toffset = GetTimeStamp() - t[tcount];
for (i = 0; i < tcount+1; i++) {
t[i] += toffset;
}
@ -299,7 +304,6 @@ static void remove_file (const char *path, uint64_t itemNum) {
fprintf(out_logfile, "V-3: create_remove_items_helper (non-dirs remove): curr_item is \"%s\"\n", curr_item);
fflush(out_logfile);
}
if (!(shared_file && rank != 0)) {
backend->delete (curr_item, &param);
}
@ -319,7 +323,7 @@ static void create_file (const char *path, uint64_t itemNum) {
//create files
sprintf(curr_item, "%s/file.%s"LLU"", path, mk_name, itemNum);
if (rank == 0 && verbose >= 3) {
if ((rank == 0 && verbose >= 3) || verbose >= 5) {
fprintf(out_logfile, "V-3: create_remove_items_helper (non-dirs create): curr_item is \"%s\"\n", curr_item);
fflush(out_logfile);
}
@ -343,6 +347,7 @@ static void create_file (const char *path, uint64_t itemNum) {
} else {
param.openFlags = IOR_CREAT | IOR_WRONLY;
param.filePerProc = !shared_file;
param.mode = FILEMODE;
if (rank == 0 && verbose >= 3) {
fprintf(out_logfile, "V-3: create_remove_items_helper (non-collective, shared): open...\n" );
@ -400,7 +405,9 @@ void create_remove_items_helper(const int dirs, const int create, const char *pa
create_remove_dirs (path, create, itemNum + i);
}
if(CHECK_STONE_WALL(progress)){
progress->items_done = i + 1;
if(progress->items_done == 0){
progress->items_done = i + 1;
}
return;
}
}
@ -432,6 +439,7 @@ void collective_helper(const int dirs, const int create, const char* path, uint6
//create files
param.openFlags = IOR_WRONLY | IOR_CREAT;
param.mode = FILEMODE;
aiori_fh = backend->create (curr_item, &param);
if (NULL == aiori_fh) {
FAIL("unable to create file");
@ -543,7 +551,7 @@ void mdtest_stat(const int random, const int dirs, const long dir_iter, const ch
uint64_t stop_items = items;
if( directory_loops != 1 ){
if( directory_loops != 1 || leaf_only ){
stop_items = items_per_dir;
}
@ -845,7 +853,7 @@ void directory_test(const int iteration, const int ntasks, const char *path, ran
}
MPI_Barrier(testComm);
t[0] = MPI_Wtime();
t[0] = GetTimeStamp();
/* create phase */
if(create_only) {
@ -880,7 +888,7 @@ void directory_test(const int iteration, const int ntasks, const char *path, ran
if (barriers) {
MPI_Barrier(testComm);
}
t[1] = MPI_Wtime();
t[1] = GetTimeStamp();
/* stat phase */
if (stat_only) {
@ -912,7 +920,7 @@ void directory_test(const int iteration, const int ntasks, const char *path, ran
if (barriers) {
MPI_Barrier(testComm);
}
t[2] = MPI_Wtime();
t[2] = GetTimeStamp();
/* read phase */
if (read_only) {
@ -944,7 +952,7 @@ void directory_test(const int iteration, const int ntasks, const char *path, ran
if (barriers) {
MPI_Barrier(testComm);
}
t[3] = MPI_Wtime();
t[3] = GetTimeStamp();
if (remove_only) {
for (int dir_iter = 0; dir_iter < directory_loops; dir_iter ++){
@ -977,7 +985,7 @@ void directory_test(const int iteration, const int ntasks, const char *path, ran
if (barriers) {
MPI_Barrier(testComm);
}
t[4] = MPI_Wtime();
t[4] = GetTimeStamp();
if (remove_only) {
if (unique_dir_per_task) {
@ -1047,7 +1055,7 @@ int updateStoneWallIterations(int iteration, rank_progress_t * progress, double
uint64_t done = progress->items_done;
long long unsigned max_iter = 0;
MPI_Allreduce(& progress->items_done, & max_iter, 1, MPI_LONG_LONG_INT, MPI_MAX, testComm);
summary_table[iteration].stonewall_time[MDTEST_FILE_CREATE_NUM] = MPI_Wtime() - tstart;
summary_table[iteration].stonewall_time[MDTEST_FILE_CREATE_NUM] = GetTimeStamp() - tstart;
// continue to the maximum...
long long min_accessed = 0;
@ -1055,13 +1063,13 @@ int updateStoneWallIterations(int iteration, rank_progress_t * progress, double
long long sum_accessed = 0;
MPI_Reduce(& progress->items_done, & sum_accessed, 1, MPI_LONG_LONG_INT, MPI_SUM, 0, testComm);
if(items != (sum_accessed / size) && rank == 0){
if(items != (sum_accessed / size)){
summary_table[iteration].stonewall_item_sum[MDTEST_FILE_CREATE_NUM] = sum_accessed;
summary_table[iteration].stonewall_item_min[MDTEST_FILE_CREATE_NUM] = min_accessed * size;
fprintf( out_logfile, "Continue stonewall hit min: %lld max: %lld avg: %.1f \n", min_accessed, max_iter, ((double) sum_accessed) / size);
fflush( out_logfile );
}
if( done != max_iter ){
if (rank == 0){
fprintf( out_logfile, "Continue stonewall hit min: %lld max: %lld avg: %.1f \n", min_accessed, max_iter, ((double) sum_accessed) / size);
fflush( out_logfile );
}
hit = 1;
}
progress->items_start = done;
@ -1082,7 +1090,7 @@ void file_test(const int iteration, const int ntasks, const char *path, rank_pro
}
MPI_Barrier(testComm);
t[0] = MPI_Wtime();
t[0] = GetTimeStamp();
/* create phase */
if (create_only ) {
@ -1118,14 +1126,19 @@ void file_test(const int iteration, const int ntasks, const char *path, rank_pro
if (hit){
progress->stone_wall_timer_seconds = 0;
printf("stonewall rank %d: %lld of %lld \n", rank, (long long) progress->items_start, (long long) progress->items_per_dir);
if (verbose > 1){
printf("stonewall rank %d: %lld of %lld \n", rank, (long long) progress->items_start, (long long) progress->items_per_dir);
}
create_remove_items(0, 0, 1, 0, temp_path, 0, progress);
// now reset the values
progress->stone_wall_timer_seconds = stone_wall_timer_seconds;
items = progress->items_done;
}
if (stoneWallingStatusFile){
StoreStoneWallingIterations(stoneWallingStatusFile, progress->items_done);
}
// reset stone wall timer to allow proper cleanup
progress->stone_wall_timer_seconds = 0;
}
}
}else{
@ -1150,7 +1163,7 @@ void file_test(const int iteration, const int ntasks, const char *path, rank_pro
if (barriers) {
MPI_Barrier(testComm);
}
t[1] = MPI_Wtime();
t[1] = GetTimeStamp();
/* stat phase */
if (stat_only ) {
@ -1178,7 +1191,7 @@ void file_test(const int iteration, const int ntasks, const char *path, rank_pro
if (barriers) {
MPI_Barrier(testComm);
}
t[2] = MPI_Wtime();
t[2] = GetTimeStamp();
/* read phase */
if (read_only ) {
@ -1210,9 +1223,11 @@ void file_test(const int iteration, const int ntasks, const char *path, rank_pro
if (barriers) {
MPI_Barrier(testComm);
}
t[3] = MPI_Wtime();
t[3] = GetTimeStamp();
if (remove_only) {
progress->items_start = 0;
for (int dir_iter = 0; dir_iter < directory_loops; dir_iter ++){
prep_testdir(iteration, dir_iter);
if (unique_dir_per_task) {
@ -1242,7 +1257,7 @@ void file_test(const int iteration, const int ntasks, const char *path, rank_pro
if (barriers) {
MPI_Barrier(testComm);
}
t[4] = MPI_Wtime();
t[4] = GetTimeStamp();
if (remove_only) {
if (unique_dir_per_task) {
unique_dir_access(RM_UNI_DIR, temp_path);
@ -1260,6 +1275,10 @@ void file_test(const int iteration, const int ntasks, const char *path, rank_pro
offset_timers(t, 4);
}
if(num_dirs_in_tree_calc){ /* this is a temporary fix needed when using -n and -i together */
items *= num_dirs_in_tree_calc;
}
/* calculate times */
if (create_only) {
summary_table[iteration].rate[4] = items*size/(t[1] - t[0]);
@ -1303,7 +1322,8 @@ void print_help (void) {
int j;
char APIs[1024];
aiori_supported_apis(APIs);
char APIs_legacy[1024];
aiori_supported_apis(APIs, APIs_legacy);
char apiStr[1024];
sprintf(apiStr, "API for I/O [%s]", APIs);
@ -1408,68 +1428,7 @@ void summarize_results(int iterations) {
start = stop = 0;
}
/* calculate aggregates */
if (barriers) {
double maxes[iterations];
/* Because each proc times itself, in the case of barriers we
* have to backwards calculate the time to simulate the use
* of barriers.
*/
for (i = start; i < stop; i++) {
for (j=0; j<iterations; j++) {
maxes[j] = all[j*tableSize + i];
for (k=0; k<size; k++) {
curr = all[(k*tableSize*iterations) + (j*tableSize) + i];
if (maxes[j] < curr) {
maxes[j] = curr;
}
}
}
min = max = maxes[0];
for (j=0; j<iterations; j++) {
if (min > maxes[j]) {
min = maxes[j];
}
if (max < maxes[j]) {
max = maxes[j];
}
sum += maxes[j];
}
mean = sum / iterations;
for (j=0; j<iterations; j++) {
var += pow((mean - maxes[j]), 2);
}
var = var / iterations;
sd = sqrt(var);
switch (i) {
case 0: strcpy(access, "Directory creation:"); break;
case 1: strcpy(access, "Directory stat :"); break;
/* case 2: strcpy(access, "Directory read :"); break; */
case 2: ; break; /* N/A */
case 3: strcpy(access, "Directory removal :"); break;
case 4: strcpy(access, "File creation :"); break;
case 5: strcpy(access, "File stat :"); break;
case 6: strcpy(access, "File read :"); break;
case 7: strcpy(access, "File removal :"); break;
default: strcpy(access, "ERR"); break;
}
if (i != 2) {
fprintf(out_logfile, " %s ", access);
fprintf(out_logfile, "%14.3f ", max);
fprintf(out_logfile, "%14.3f ", min);
fprintf(out_logfile, "%14.3f ", mean);
fprintf(out_logfile, "%14.3f\n", sd);
fflush(out_logfile);
}
sum = var = 0;
}
} else {
for (i = start; i < stop; i++) {
for (i = start; i < stop; i++) {
min = max = all[i];
for (k=0; k < size; k++) {
for (j = 0; j < iterations; j++) {
@ -1515,7 +1474,6 @@ void summarize_results(int iterations) {
}
sum = var = 0;
}
}
/* calculate tree create/remove rates */
@ -1615,9 +1573,9 @@ void valid_tests() {
FAIL("-c not compatible with -B");
}
if ( strcasecmp(backend_name, "POSIX") != 0 && strcasecmp(backend_name, "DUMMY") != 0 &&
strcasecmp(backend_name, "DFS") != 0) {
FAIL("-a only supported interface is POSIX, DFS and DUMMY right now!");
if (strcasecmp(backend_name, "POSIX") != 0 && strcasecmp(backend_name, "DUMMY") != 0 &&
strcasecmp(backend_name, "DFS") != 0) {
FAIL("-a only supported interface is POSIX, DFS (and DUMMY) right now!");
}
/* check for shared file incompatibilities */
@ -1891,7 +1849,7 @@ static void mdtest_iteration(int i, int j, MPI_Group testgroup, mdtest_results_t
/* create hierarchical directory structure */
MPI_Barrier(testComm);
startCreate = MPI_Wtime();
startCreate = GetTimeStamp();
for (int dir_iter = 0; dir_iter < directory_loops; dir_iter ++){
prep_testdir(j, dir_iter);
@ -1953,7 +1911,7 @@ static void mdtest_iteration(int i, int j, MPI_Group testgroup, mdtest_results_t
}
}
MPI_Barrier(testComm);
endCreate = MPI_Wtime();
endCreate = GetTimeStamp();
summary_table->rate[8] =
num_dirs_in_tree / (endCreate - startCreate);
summary_table->time[8] = (endCreate - startCreate);
@ -2024,7 +1982,8 @@ static void mdtest_iteration(int i, int j, MPI_Group testgroup, mdtest_results_t
MPI_Barrier(testComm);
if (remove_only) {
startCreate = MPI_Wtime();
progress->items_start = 0;
startCreate = GetTimeStamp();
for (int dir_iter = 0; dir_iter < directory_loops; dir_iter ++){
prep_testdir(j, dir_iter);
if (unique_dir_per_task) {
@ -2086,7 +2045,7 @@ static void mdtest_iteration(int i, int j, MPI_Group testgroup, mdtest_results_t
}
MPI_Barrier(testComm);
endCreate = MPI_Wtime();
endCreate = GetTimeStamp();
summary_table->rate[9] = num_dirs_in_tree / (endCreate - startCreate);
summary_table->time[9] = endCreate - startCreate;
summary_table->items[9] = num_dirs_in_tree;
@ -2138,6 +2097,7 @@ void mdtest_init_args(){
unique_dir_per_task = 0;
time_unique_dir_overhead = 0;
items = 0;
num_dirs_in_tree_calc = 0;
collective_creates = 0;
write_bytes = 0;
stone_wall_timer_seconds = 0;
@ -2174,7 +2134,8 @@ mdtest_results_t * mdtest_run(int argc, char **argv, MPI_Comm world_com, FILE *
char * path = "./out";
int randomize = 0;
char APIs[1024];
aiori_supported_apis(APIs);
char APIs_legacy[1024];
aiori_supported_apis(APIs, APIs_legacy);
char apiStr[1024];
sprintf(apiStr, "API for I/O [%s]", APIs);
@ -2191,7 +2152,7 @@ mdtest_results_t * mdtest_run(int argc, char **argv, MPI_Comm world_com, FILE *
{'e', NULL, "bytes to read from each file", OPTION_OPTIONAL_ARGUMENT, 'l', & read_bytes},
{'f', NULL, "first number of tasks on which the test will run", OPTION_OPTIONAL_ARGUMENT, 'd', & first},
{'F', NULL, "perform test on files only (no directories)", OPTION_FLAG, 'd', & files_only},
{'i', NULL, "number of iterations the test will run", OPTION_OPTIONAL_ARGUMENT, 'i', & iterations},
{'i', NULL, "number of iterations the test will run", OPTION_OPTIONAL_ARGUMENT, 'd', & iterations},
{'I', NULL, "number of items per directory in tree", OPTION_OPTIONAL_ARGUMENT, 'l', & items_per_dir},
{'l', NULL, "last number of tasks on which the test will run", OPTION_OPTIONAL_ARGUMENT, 'd', & last},
{'L', NULL, "files only at leaf level of tree", OPTION_FLAG, 'd', & leaf_only},
@ -2215,41 +2176,19 @@ mdtest_results_t * mdtest_run(int argc, char **argv, MPI_Comm world_com, FILE *
{'Z', NULL, "print time instead of rate", OPTION_FLAG, 'd', & print_time},
LAST_OPTION
};
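Several option types in the table above ('d', 'l', 'u') funnel their arguments through string_to_bytes(), which accepts size suffixes. A hedged sketch of such a parser (the suffix set and binary multiples here are assumptions for illustration, not taken from the real implementation):

```c
#include <assert.h>
#include <ctype.h>
#include <stdint.h>
#include <stdlib.h>

/* Sketch of a string_to_bytes-style parser: a decimal number with an
 * optional k/m/g suffix, interpreted as binary multiples (KiB/MiB/GiB).
 * The accepted suffixes are illustrative, not lifted from IOR. */
static int64_t parse_bytes(const char *s)
{
    char *end;
    int64_t v = strtoll(s, &end, 10);
    switch (tolower((unsigned char) *end)) {
    case 'k': v <<= 10; break;
    case 'm': v <<= 20; break;
    case 'g': v <<= 30; break;
    default:  break;           /* bare number: bytes */
    }
    return v;
}
```

The 'd' case above additionally range-checks the result against INT_MIN/INT_MAX before storing it into an int.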
int printhelp = 0;
int parsed_options = option_parse(argc, argv, options, & printhelp);
airoi_parse_options(argc, argv, options);
backend = aiori_select(backend_name);
if (NULL == backend) {
FAIL("Could not find suitable backend to use");
}
if(backend->get_options != NULL){
option_parse(argc - parsed_options, argv + parsed_options, backend->get_options(), & printhelp);
}
if (backend->initialize)
backend->initialize(NULL);
if(printhelp != 0){
printf("Usage: %s ", argv[0]);
option_print_help(options, 0);
if(backend->get_options != NULL){
printf("\nPlugin options for backend %s\n", backend_name);
option_print_help(backend->get_options(), 1);
}
if(printhelp == 1){
exit(0);
}else{
exit(1);
}
}
MPI_Comm_rank(testComm, &rank);
MPI_Comm_size(testComm, &size);
if (backend->initialize)
backend->initialize();
pid = getpid();
uid = getuid();
@ -2337,13 +2276,14 @@ mdtest_results_t * mdtest_run(int argc, char **argv, MPI_Comm world_com, FILE *
} else if (branch_factor == 1) {
num_dirs_in_tree = depth + 1;
} else {
num_dirs_in_tree =
(1 - pow(branch_factor, depth+1)) / (1 - branch_factor);
num_dirs_in_tree = (pow(branch_factor, depth+1) - 1) / (branch_factor - 1);
}
}
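The corrected formula above sizes a complete directory tree with the geometric series 1 + b + ... + b^depth; both the old and new forms are algebraically equal, but the new one keeps numerator and denominator positive. An integer sketch (assumes the result fits in 64 bits; the branch_factor < 1 fallback is my choice, not shown in the diff):

```c
#include <assert.h>
#include <stdint.h>

/* Directories in a complete tree with the given branching factor and
 * depth, counting the root: 1 + b + b^2 + ... + b^depth, i.e. the
 * geometric-series sum (b^(depth+1) - 1) / (b - 1). */
static uint64_t dirs_in_tree(uint64_t branch_factor, uint64_t depth)
{
    if (branch_factor < 1)
        return 1;                    /* just the root (assumed fallback) */
    if (branch_factor == 1)
        return depth + 1;            /* a single chain of directories */
    uint64_t power = 1;
    for (uint64_t i = 0; i <= depth; i++)
        power *= branch_factor;      /* branch_factor^(depth+1) */
    return (power - 1) / (branch_factor - 1);
}
```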
if (items_per_dir > 0) {
if(unique_dir_per_task){
if(items == 0){
items = items_per_dir * num_dirs_in_tree;
}else{
num_dirs_in_tree_calc = num_dirs_in_tree;
}
} else {
if (leaf_only) {
@ -2487,17 +2427,21 @@ mdtest_results_t * mdtest_run(int argc, char **argv, MPI_Comm world_com, FILE *
MPI_Group_range_incl(worldgroup, 1, (void *)&range, &testgroup);
MPI_Comm_create(testComm, testgroup, &testComm);
if (rank == 0) {
uint64_t items_all = i * items;
if(num_dirs_in_tree_calc){
items_all *= num_dirs_in_tree_calc;
}
if (files_only && dirs_only) {
fprintf(out_logfile, "\n%d tasks, "LLU" files/directories\n", i, i * items);
fprintf(out_logfile, "\n%d tasks, "LLU" files/directories\n", i, items_all);
} else if (files_only) {
if (!shared_file) {
fprintf(out_logfile, "\n%d tasks, "LLU" files\n", i, i * items);
fprintf(out_logfile, "\n%d tasks, "LLU" files\n", i, items_all);
}
else {
fprintf(out_logfile, "\n%d tasks, 1 file\n", i);
}
} else if (dirs_only) {
fprintf(out_logfile, "\n%d tasks, "LLU" directories\n", i, i * items);
fprintf(out_logfile, "\n%d tasks, "LLU" directories\n", i, items_all);
}
}
if (rank == 0 && verbose >= 1) {

View File

@ -139,40 +139,7 @@ static void print_help_section(option_help * args, option_value_type type, char
}
}
void option_print_help(option_help * args, int is_plugin){
option_help * o;
int optionalArgs = 0;
for(o = args; o->shortVar != 0 || o->longVar != 0 ; o++){
if(o->arg != OPTION_REQUIRED_ARGUMENT){
optionalArgs = 1;
}
switch(o->arg){
case (OPTION_OPTIONAL_ARGUMENT):
case (OPTION_FLAG):{
if(o->shortVar != 0){
printf("[-%c] ", o->shortVar);
}else if(o->longVar != 0){
printf("[--%s] ", o->longVar);
}
break;
}case (OPTION_REQUIRED_ARGUMENT):{
if(o->shortVar != 0){
printf("-%c ", o->shortVar);
}else if(o->longVar != 0){
printf("--%s ", o->longVar);
}
break;
}
}
}
if (optionalArgs){
//printf(" [Optional Args]");
}
if (! is_plugin){
printf(" -- <Plugin options, see below>\n");
}
void option_print_help(option_help * args){
print_help_section(args, OPTION_REQUIRED_ARGUMENT, "Required arguments");
print_help_section(args, OPTION_FLAG, "Flags");
print_help_section(args, OPTION_OPTIONAL_ARGUMENT, "Optional arguments");
@ -261,17 +228,23 @@ void option_print_current(option_help * args){
print_current_option_section(args, OPTION_FLAG);
}
int option_parse(int argc, char ** argv, option_help * args, int * printhelp){
int option_parse(int argc, char ** argv, options_all * opt_all){
int error = 0;
int requiredArgsSeen = 0;
int requiredArgsNeeded = 0;
int i;
int printhelp = 0;
for(option_help * o = args; o->shortVar != 0 || o->longVar != 0 ; o++ ){
if(o->arg == OPTION_REQUIRED_ARGUMENT){
requiredArgsNeeded++;
for(int m = 0; m < opt_all->module_count; m++ ){
option_help * args = opt_all->modules[m].options;
if(args == NULL) continue;
for(option_help * o = args; o->shortVar != 0 || o->longVar != 0 ; o++ ){
if(o->arg == OPTION_REQUIRED_ARGUMENT){
requiredArgsNeeded++;
}
}
}
for(i=1; i < argc; i++){
char * txt = argv[i];
int foundOption = 0;
@ -282,109 +255,108 @@ int option_parse(int argc, char ** argv, option_help * args, int * printhelp){
arg++;
replaced_equal = 1;
}
if(strcmp(txt, "--") == 0){
// we found plugin options
break;
}
// try to find matching option help
for(option_help * o = args; o->shortVar != 0 || o->longVar != 0 || o->help != NULL ; o++ ){
if( o->shortVar == 0 && o->longVar == 0 ){
// section
continue;
}
for(int m = 0; m < opt_all->module_count; m++ ){
option_help * args = opt_all->modules[m].options;
if(args == NULL) continue;
// try to find matching option help
for(option_help * o = args; o->shortVar != 0 || o->longVar != 0 || o->help != NULL ; o++ ){
if( o->shortVar == 0 && o->longVar == 0 ){
// section
continue;
}
if ( (txt[0] == '-' && o->shortVar == txt[1]) || (strlen(txt) > 2 && txt[0] == '-' && txt[1] == '-' && o->longVar != NULL && strcmp(txt + 2, o->longVar) == 0)){
foundOption = 1;
if ( (txt[0] == '-' && o->shortVar == txt[1]) || (strlen(txt) > 2 && txt[0] == '-' && txt[1] == '-' && o->longVar != NULL && strcmp(txt + 2, o->longVar) == 0)){
foundOption = 1;
// now process the option.
switch(o->arg){
case (OPTION_FLAG):{
assert(o->type == 'd');
(*(int*) o->variable)++;
break;
}
case (OPTION_OPTIONAL_ARGUMENT):
case (OPTION_REQUIRED_ARGUMENT):{
// check if next is an argument
if(arg == NULL){
if(o->shortVar == txt[1] && txt[2] != 0){
arg = & txt[2];
}else{
// simply take the next value as argument
i++;
arg = argv[i];
}
// now process the option.
switch(o->arg){
case (OPTION_FLAG):{
assert(o->type == 'd');
(*(int*) o->variable)++;
break;
}
if(arg == NULL){
const char str[] = {o->shortVar, 0};
printf("Error, argument missing for option %s\n", (o->longVar != NULL) ? o->longVar : str);
exit(1);
}
switch(o->type){
case('p'):{
// call the function in the variable
void(*fp)() = o->variable;
fp(arg);
break;
}
case('F'):{
*(double*) o->variable = atof(arg);
break;
}
case('f'):{
*(float*) o->variable = atof(arg);
break;
}
case('d'):{
int64_t val = string_to_bytes(arg);
if (val > INT_MAX || val < INT_MIN){
printf("WARNING: parsing the number %s to integer, this produced an overflow!\n", arg);
case (OPTION_OPTIONAL_ARGUMENT):
case (OPTION_REQUIRED_ARGUMENT):{
// check if next is an argument
if(arg == NULL){
if(o->shortVar == txt[1] && txt[2] != 0){
arg = & txt[2];
}else{
// simply take the next value as argument
i++;
arg = argv[i];
}
*(int*) o->variable = val;
break;
}
case('H'):
case('s'):{
(*(char **) o->variable) = strdup(arg);
break;
if(arg == NULL){
const char str[] = {o->shortVar, 0};
printf("Error, argument missing for option %s\n", (o->longVar != NULL) ? o->longVar : str);
exit(1);
}
case('c'):{
(*(char *)o->variable) = arg[0];
if(strlen(arg) > 1){
printf("Error, ignoring remainder of string for option %c (%s).\n", o->shortVar, o->longVar);
switch(o->type){
case('p'):{
// call the function in the variable
void(*fp)() = o->variable;
fp(arg);
break;
}
break;
case('F'):{
*(double*) o->variable = atof(arg);
break;
}
case('f'):{
*(float*) o->variable = atof(arg);
break;
}
case('d'):{
int64_t val = string_to_bytes(arg);
if (val > INT_MAX || val < INT_MIN){
printf("WARNING: parsing the number %s to integer, this produced an overflow!\n", arg);
}
*(int*) o->variable = val;
break;
}
case('H'):
case('s'):{
(*(char **) o->variable) = strdup(arg);
break;
}
case('c'):{
(*(char *)o->variable) = arg[0];
if(strlen(arg) > 1){
printf("Error, ignoring remainder of string for option %c (%s).\n", o->shortVar, o->longVar);
}
break;
}
case('l'):{
*(long long*) o->variable = string_to_bytes(arg);
break;
}
case('u'):{
*(uint64_t*) o->variable = string_to_bytes(arg);
break;
}
default:
printf("ERROR: Unknown option type %c\n", o->type);
}
case('l'):{
*(long long*) o->variable = string_to_bytes(arg);
break;
}
case('u'):{
*(uint64_t*) o->variable = string_to_bytes(arg);
break;
}
default:
printf("ERROR: Unknown option type %c\n", o->type);
}
}
}
if(replaced_equal){
arg[-1] = '=';
}
if(replaced_equal){
arg[-1] = '=';
}
if(o->arg == OPTION_REQUIRED_ARGUMENT){
requiredArgsSeen++;
}
if(o->arg == OPTION_REQUIRED_ARGUMENT){
requiredArgsSeen++;
}
break;
break;
}
}
}
if (! foundOption){
if(strcmp(txt, "-h") == 0 || strcmp(txt, "--help") == 0){
*printhelp=1;
printhelp = 1;
}else{
printf("Error invalid argument: %s\n", txt);
error = 1;
@ -392,14 +364,23 @@ int option_parse(int argc, char ** argv, option_help * args, int * printhelp){
}
}
if( requiredArgsSeen != requiredArgsNeeded ){
printf("Error: Missing some required arguments\n\n");
*printhelp = -1;
}
if(error != 0){
printf("Invalid options\n");
*printhelp = -1;
printhelp = -1;
}
if(printhelp == 1){
printf("Synopsis %s\n", argv[0]);
for(int m = 0; m < opt_all->module_count; m++ ){
option_help * args = opt_all->modules[m].options;
if(args == NULL) continue;
char * prefix = opt_all->modules[m].prefix;
if(prefix != NULL){
printf("\n\nModule %s\n", prefix);
}
option_print_help(args);
}
exit(0);
}
return i;

View File

@ -4,7 +4,7 @@
#include <stdint.h>
/*
* Initial revision by JK
* Initial version by JK
*/
typedef enum{
@ -23,13 +23,22 @@ typedef struct{
void * variable;
} option_help;
typedef struct{
char * prefix; // may be NULL to include it in the standard name
option_help * options;
} option_module;
typedef struct{
int module_count;
option_module * modules;
} options_all;
#define LAST_OPTION {0, 0, 0, (option_value_type) 0, 0, NULL}
int64_t string_to_bytes(char *size_str);
void option_print_help(option_help * args, int is_plugin);
void option_print_current(option_help * args);
//@return the number of parsed arguments
int option_parse(int argc, char ** argv, option_help * args, int * print_help);
int option_parse(int argc, char ** argv, options_all * args);
#endif

View File

@ -21,6 +21,9 @@
#include <ctype.h>
#include <string.h>
#if defined(HAVE_STRINGS_H)
#include <strings.h>
#endif
#include "utilities.h"
#include "ior.h"
@ -48,7 +51,12 @@ static size_t NodeMemoryStringToBytes(char *size_str)
if (percent > 100 || percent < 0)
ERR("percentage must be between 0 and 100");
#ifdef HAVE_SYSCONF
page_size = sysconf(_SC_PAGESIZE);
#else
page_size = getpagesize();
#endif
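The new code prefers sysconf(_SC_PAGESIZE) over getpagesize() and, below, multiplies by _SC_PHYS_PAGES to size the node's memory. A sketch of resolving a memory percentage to bytes (assumes a platform where both sysconf queries are available; POSIX does not guarantee _SC_PHYS_PAGES):

```c
#include <stdint.h>
#include <unistd.h>

/* Resolve "N% of node memory" to bytes via sysconf(), as the hunk
 * above does. Returns 0 when the queries are unsupported. */
static uint64_t percent_of_node_memory(int percent)
{
    long page_size = sysconf(_SC_PAGESIZE);
    long num_pages = sysconf(_SC_PHYS_PAGES);
    if (page_size <= 0 || num_pages <= 0)
        return 0;                 /* sysconf not supported here */
    return (uint64_t) page_size * (uint64_t) num_pages
           / 100u * (uint64_t) percent;
}
```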
#ifdef _SC_PHYS_PAGES
num_pages = sysconf(_SC_PHYS_PAGES);
if (num_pages == -1)
@ -88,9 +96,8 @@ static void CheckRunSettings(IOR_test_t *tests)
* (We assume int-valued params are exclusively 0 or 1.)
*/
if ((params->openFlags & IOR_RDWR)
&& ((params->readFile | params->checkRead)
^ (params->writeFile | params->checkWrite))
&& (params->openFlags & IOR_RDWR)) {
&& ((params->readFile | params->checkRead | params->checkWrite)
^ params->writeFile)) {
params->openFlags &= ~(IOR_RDWR);
if (params->readFile | params->checkRead) {
@ -100,7 +107,6 @@ static void CheckRunSettings(IOR_test_t *tests)
else
params->openFlags |= IOR_WRONLY;
}
}
}
@ -112,12 +118,17 @@ void DecodeDirective(char *line, IOR_param_t *params)
char option[MAX_STR];
char value[MAX_STR];
int rc;
int initialized;
rc = sscanf(line, " %[^=# \t\r\n] = %[^# \t\r\n] ", option, value);
if (rc != 2 && rank == 0) {
fprintf(out_logfile, "Syntax error in configuration options: %s\n",
line);
MPI_CHECK(MPI_Abort(MPI_COMM_WORLD, -1), "MPI_Abort() error");
MPI_CHECK(MPI_Initialized(&initialized), "MPI_Initialized() error");
if (initialized)
MPI_CHECK(MPI_Abort(MPI_COMM_WORLD, -1), "MPI_Abort() error");
else
exit(-1);
}
if (strcasecmp(option, "api") == 0) {
params->api = strdup(value);
@ -167,6 +178,8 @@ void DecodeDirective(char *line, IOR_param_t *params)
params->repetitions = atoi(value);
} else if (strcasecmp(option, "intertestdelay") == 0) {
params->interTestDelay = atoi(value);
} else if (strcasecmp(option, "interiodelay") == 0) {
params->interIODelay = atoi(value);
} else if (strcasecmp(option, "readfile") == 0) {
params->readFile = atoi(value);
} else if (strcasecmp(option, "writefile") == 0) {
@@ -304,7 +317,11 @@ void DecodeDirective(char *line, IOR_param_t *params)
if (rank == 0)
fprintf(out_logfile, "Unrecognized parameter \"%s\"\n",
option);
MPI_CHECK(MPI_Abort(MPI_COMM_WORLD, -1), "MPI_Abort() error");
MPI_CHECK(MPI_Initialized(&initialized), "MPI_Initialized() error");
if (initialized)
MPI_CHECK(MPI_Abort(MPI_COMM_WORLD, -1), "MPI_Abort() error");
else
exit(-1);
}
}
@@ -363,6 +380,7 @@ IOR_test_t *ReadConfigScript(char *scriptName)
int runflag = 0;
char linebuf[MAX_STR];
char empty[MAX_STR];
char *ptr;
FILE *file;
IOR_test_t *head = NULL;
IOR_test_t *tail = NULL;
@@ -385,19 +403,27 @@ IOR_test_t *ReadConfigScript(char *scriptName)
/* Iterate over a block of IOR commands */
while (fgets(linebuf, MAX_STR, file) != NULL) {
/* skip over leading whitespace */
ptr = linebuf;
while (isspace(*ptr))
ptr++;
/* skip empty lines */
if (sscanf(linebuf, "%s", empty) == -1)
if (sscanf(ptr, "%s", empty) == -1)
continue;
/* skip lines containing only comments */
if (sscanf(linebuf, " #%s", empty) == 1)
if (sscanf(ptr, " #%s", empty) == 1)
continue;
if (contains_only(linebuf, "ior stop")) {
if (contains_only(ptr, "ior stop")) {
break;
} else if (contains_only(linebuf, "run")) {
} else if (contains_only(ptr, "run")) {
if (runflag) {
/* previous line was a "run" as well;
create duplicate test */
tail->next = CreateTest(&tail->params, test_num++);
AllocResults(tail);
tail = tail->next;
}
runflag = 1;
@@ -406,16 +432,18 @@ IOR_test_t *ReadConfigScript(char *scriptName)
create and initialize a new test structure */
runflag = 0;
tail->next = CreateTest(&tail->params, test_num++);
AllocResults(tail);
tail = tail->next;
ParseLine(linebuf, &tail->params);
ParseLine(ptr, &tail->params);
} else {
ParseLine(linebuf, &tail->params);
ParseLine(ptr, &tail->params);
}
}
/* close the script */
if (fclose(file) != 0)
ERR("fclose() of script file failed");
AllocResults(tail);
return head;
}
@@ -440,7 +468,8 @@ IOR_test_t *ParseCommandLine(int argc, char **argv)
parameters = & initialTestParams;
char APIs[1024];
aiori_supported_apis(APIs);
char APIs_legacy[1024];
aiori_supported_apis(APIs, APIs_legacy);
char apiStr[1024];
sprintf(apiStr, "API for I/O [%s]", APIs);
@@ -503,14 +532,14 @@ IOR_test_t *ParseCommandLine(int argc, char **argv)
{'Z', NULL, "reorderTasksRandom -- changes task ordering to random ordering for readback", OPTION_FLAG, 'd', & initialTestParams.reorderTasksRandom},
{.help=" -O summaryFile=FILE -- store result data into this file", .arg = OPTION_OPTIONAL_ARGUMENT},
{.help=" -O summaryFormat=[default,JSON,CSV] -- use the format for outputting the summary", .arg = OPTION_OPTIONAL_ARGUMENT},
{0, "dryRun", "do not perform any I/O; process any inputs and print dummy output", OPTION_FLAG, 'd', & initialTestParams.dryRun},
LAST_OPTION,
};
IOR_test_t *tests = NULL;
GetPlatformName(initialTestParams.platform);
int printhelp = 0;
int parsed_options = option_parse(argc, argv, options, & printhelp);
airoi_parse_options(argc, argv, options);
if (toggleG){
initialTestParams.setTimeStampSignature = toggleG;
@@ -538,35 +567,18 @@ IOR_test_t *ParseCommandLine(int argc, char **argv)
if (memoryPerNode){
initialTestParams.memoryPerNode = NodeMemoryStringToBytes(optarg);
}
const ior_aiori_t * backend = aiori_select(initialTestParams.api);
if (backend == NULL)
ERR_SIMPLE("unrecognized I/O API");
initialTestParams.backend = backend;
initialTestParams.apiVersion = backend->get_version();
if(backend->get_options != NULL){
option_parse(argc - parsed_options, argv + parsed_options, backend->get_options(), & printhelp);
}
if(printhelp != 0){
printf("Usage: %s ", argv[0]);
option_print_help(options, 0);
if(backend->get_options != NULL){
printf("\nPlugin options for backend %s (%s)\n", initialTestParams.api, backend->get_version());
option_print_help(backend->get_options(), 1);
}
if(printhelp == 1){
exit(0);
}else{
exit(1);
}
}
if (testscripts){
tests = ReadConfigScript(testscripts);
}else{
tests = CreateTest(&initialTestParams, 0);
AllocResults(tests);
}
CheckRunSettings(tests);


@@ -16,6 +16,10 @@
# include "config.h"
#endif
#ifdef __linux__
# define _GNU_SOURCE /* Needed for O_DIRECT in fcntl */
#endif /* __linux__ */
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>


@@ -6,12 +6,15 @@
# You can override the defaults by setting the variables before invoking the script, or simply set them here...
# Example: export IOR_EXTRA="-v -v -v"
ROOT=${0%/*}
ROOT="$(dirname ${BASH_SOURCE[0]})"
TYPE="basic"
source $ROOT/test-lib.sh
MDTEST 1 -a POSIX
MDTEST 2 -a POSIX -W 2
MDTEST 1 -C -T -r -F -I 1 -z 1 -b 1 -L -u
MDTEST 1 -C -T -I 1 -z 1 -b 1 -u
IOR 1 -a POSIX -w -z -F -Y -e -i1 -m -t 100k -b 1000k
IOR 1 -a POSIX -w -z -F -k -e -i2 -m -t 100k -b 100k
@@ -23,4 +26,7 @@ IOR 2 -a POSIX -r -z -Z -Q 2 -F -k -e -i1 -m -t 100k -b 100k
IOR 2 -a POSIX -r -z -Z -Q 3 -X 13 -F -k -e -i1 -m -t 100k -b 100k
IOR 2 -a POSIX -w -z -Z -Q 1 -X -13 -F -e -i1 -m -t 100k -b 100k
IOR 2 -f "$ROOT/test_comments.ior"
END
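The script above replaces `ROOT=${0%/*}` with `ROOT="$(dirname ${BASH_SOURCE[0]})"`. The difference matters when a script is *sourced*: `$0` then names the caller, not the sourced file, while `BASH_SOURCE[0]` still names the file itself. A small sketch under bash (the `/tmp/locate_root.sh` path is illustrative):

```shell
# Write a helper that resolves its own directory, then source it.
cat > /tmp/locate_root.sh <<'EOF'
# BASH_SOURCE[0] names this file even when sourced.
ROOT="$(dirname "${BASH_SOURCE[0]}")"
echo "$ROOT"
EOF
OUT="$(bash -c 'source /tmp/locate_root.sh')"
echo "resolved ROOT: $OUT"   # /tmp
```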


@@ -5,6 +5,7 @@
# Example: export IOR_EXTRA="-v -v -v"
ROOT=${0%/*}
TYPE="advanced"
source $ROOT/test-lib.sh


@@ -0,0 +1,93 @@
V-3: main (before display_freespace): testdirpath is "/dev/shm/mdest"
V-3: testdirpath is "/dev/shm/mdest"
V-3: Before show_file_system_size, dirpath is "/dev/shm"
V-3: After show_file_system_size, dirpath is "/dev/shm"
V-3: main (after display_freespace): testdirpath is "/dev/shm/mdest"
V-3: main (create hierarchical directory loop-!unque_dir_per_task): Calling create_remove_directory_tree with "/dev/shm/mdest/#test-dir.0-0"
V-3: main: Using unique_mk_dir, "mdtest_tree.0"
V-3: main: Copied unique_mk_dir, "mdtest_tree.0", to topdir
V-3: directory_test: create path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19"
V-3: file_test: create path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.0"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.1"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.2"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.3"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.4"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.5"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.6"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.7"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.8"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.9"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.10"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.11"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.12"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.13"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.14"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.15"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.16"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.17"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.18"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.19"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: main: Using testdir, "/dev/shm/mdest/#test-dir.0-0"

View File

@@ -0,0 +1,50 @@
V-3: main (before display_freespace): testdirpath is "/dev/shm/mdest"
V-3: testdirpath is "/dev/shm/mdest"
V-3: Before show_file_system_size, dirpath is "/dev/shm"
V-3: After show_file_system_size, dirpath is "/dev/shm"
V-3: main (after display_freespace): testdirpath is "/dev/shm/mdest"
V-3: main: Using unique_mk_dir, "mdtest_tree.0"
V-3: main: Copied unique_mk_dir, "mdtest_tree.0", to topdir
V-3: directory_test: stat path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19
V-3: file_test: stat path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.0
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.1
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.2
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.3
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.4
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.5
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.6
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.7
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.8
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.9
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.10
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.11
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.12
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.13
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.14
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.15
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.16
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.17
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.18
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.19
V-3: main: Using testdir, "/dev/shm/mdest/#test-dir.0-0"


@@ -0,0 +1,77 @@
V-3: main (before display_freespace): testdirpath is "/dev/shm/mdest"
V-3: testdirpath is "/dev/shm/mdest"
V-3: Before show_file_system_size, dirpath is "/dev/shm"
V-3: After show_file_system_size, dirpath is "/dev/shm"
V-3: main (after display_freespace): testdirpath is "/dev/shm/mdest"
V-3: main (create hierarchical directory loop-!unque_dir_per_task): Calling create_remove_directory_tree with "/dev/shm/mdest/#test-dir.0-0"
V-3: main: Using unique_mk_dir, "mdtest_tree.0"
V-3: main: Copied unique_mk_dir, "mdtest_tree.0", to topdir
V-3: directory_test: create path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19"
V-3: directory_test: stat path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19
V-3: directory_test: read path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: directory_test: remove directories path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0"
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1"
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2"
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3"
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4"
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5"
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6"
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7"
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8"
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9"
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10"
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11"
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12"
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13"
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14"
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15"
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16"
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17"
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18"
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19"
V-3: directory_test: remove unique directories path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: main: Using testdir, "/dev/shm/mdest/#test-dir.0-0"
V-3: main (remove hierarchical directory loop-!unique_dir_per_task): Calling create_remove_directory_tree with "/dev/shm/mdest/#test-dir.0-0"


@@ -0,0 +1,24 @@
V-3: main (before display_freespace): testdirpath is "/dev/shm/mdest"
V-3: testdirpath is "/dev/shm/mdest"
V-3: Before show_file_system_size, dirpath is "/dev/shm"
V-3: After show_file_system_size, dirpath is "/dev/shm"
V-3: main (after display_freespace): testdirpath is "/dev/shm/mdest"
V-3: main (create hierarchical directory loop-!unque_dir_per_task): Calling create_remove_directory_tree with "/dev/shm/mdest/#test-dir.0-0"
V-3: main: Using unique_mk_dir, "mdtest_tree.0"
V-3: main: Copied unique_mk_dir, "mdtest_tree.0", to topdir
V-3: directory_test: create path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: directory_test: stat path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: directory_test: read path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: directory_test: remove directories path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: directory_test: remove unique directories path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: file_test: create path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: file_test: stat path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: file_test: read path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: file_test: rm directories path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: file_test: rm unique directories path is "mdtest_tree.0"
V-3: main: Using testdir, "/dev/shm/mdest/#test-dir.0-0"
V-3: main (remove hierarchical directory loop-!unique_dir_per_task): Calling create_remove_directory_tree with "/dev/shm/mdest/#test-dir.0-0"


@@ -0,0 +1,24 @@
V-3: main (before display_freespace): testdirpath is "/dev/shm/mdest"
V-3: testdirpath is "/dev/shm/mdest"
V-3: Before show_file_system_size, dirpath is "/dev/shm"
V-3: After show_file_system_size, dirpath is "/dev/shm"
V-3: main (after display_freespace): testdirpath is "/dev/shm/mdest"
V-3: main (create hierarchical directory loop-!unque_dir_per_task): Calling create_remove_directory_tree with "/dev/shm/mdest/#test-dir.0-0"
V-3: main: Using unique_mk_dir, "mdtest_tree.0"
V-3: main: Copied unique_mk_dir, "mdtest_tree.0", to topdir
V-3: directory_test: create path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: directory_test: stat path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: directory_test: read path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: directory_test: remove directories path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: directory_test: remove unique directories path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: file_test: create path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: file_test: stat path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: file_test: read path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: file_test: rm directories path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: file_test: rm unique directories path is "mdtest_tree.0"
V-3: main: Using testdir, "/dev/shm/mdest/#test-dir.0-0"
V-3: main (remove hierarchical directory loop-!unique_dir_per_task): Calling create_remove_directory_tree with "/dev/shm/mdest/#test-dir.0-0"


@@ -0,0 +1,25 @@
V-3: main (before display_freespace): testdirpath is "/dev/shm/mdest"
V-3: testdirpath is "/dev/shm/mdest"
V-3: Before show_file_system_size, dirpath is "/dev/shm"
V-3: After show_file_system_size, dirpath is "/dev/shm"
V-3: main (after display_freespace): testdirpath is "/dev/shm/mdest"
V-3: main (create hierarchical directory loop-!collective_creates): Calling create_remove_directory_tree with "/dev/shm/mdest/#test-dir.0-0"
V-3: main: Copied unique_mk_dir, "mdtest_tree.0.0", to topdir
V-3: file_test: create path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: create_remove_items (for loop): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/"
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1//file.mdtest.0.1"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/"
V-3: file_test: stat path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/file.mdtest.0.1
V-3: file_test: rm directories path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: create_remove_items (for loop): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/"
V-3: create_remove_items_helper (non-dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1//file.mdtest.0.1"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/"
V-3: file_test: rm unique directories path is "/dev/shm/mdest/#test-dir.0-0/"
V-3: main (remove hierarchical directory loop-!collective): Calling create_remove_directory_tree with "/dev/shm/mdest/#test-dir.0-0"


@@ -0,0 +1,31 @@
V-3: main (before display_freespace): testdirpath is "/dev/shm/mdest"
V-3: testdirpath is "/dev/shm/mdest"
V-3: Before show_file_system_size, dirpath is "/dev/shm"
V-3: After show_file_system_size, dirpath is "/dev/shm"
V-3: main (after display_freespace): testdirpath is "/dev/shm/mdest"
V-3: main (create hierarchical directory loop-!collective_creates): Calling create_remove_directory_tree with "/dev/shm/mdest/#test-dir.0-0"
V-3: main: Copied unique_mk_dir, "mdtest_tree.0.0", to topdir
V-3: directory_test: create path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/dir.mdtest.0.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: create_remove_items (for loop): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1//dir.mdtest.0.1"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/"
V-3: directory_test: stat path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/dir.mdtest.0.0
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/dir.mdtest.0.1
V-3: file_test: create path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/file.mdtest.0.0"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: create_remove_items (for loop): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/"
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1//file.mdtest.0.1"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/"
V-3: file_test: stat path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/file.mdtest.0.0
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/file.mdtest.0.1


@@ -1,18 +1,22 @@
# Test script for basic IOR functionality testing various patterns
# It is kept as simple as possible and outputs the parameters used such that any test can be rerun easily.
# It is kept as simple as possible and outputs the parameters used such that any
# test can be rerun easily.
# You can override the defaults by setting the variables before invoking the script, or simply set them here...
# You can override the defaults by setting the variables before invoking the
# script, or simply set them here...
# Example: export IOR_EXTRA="-v -v -v"
IOR_MPIRUN=${IOR_MPIRUN:-mpiexec -np}
IOR_BIN_DIR=${IOR_BIN_DIR:-./build/src}
IOR_OUT=${IOR_OUT:-./build/test}
IOR_BIN_DIR=${IOR_BIN_DIR:-./src}
IOR_OUT=${IOR_OUT:-./test_logs}
IOR_TMP=${IOR_TMP:-/dev/shm}
IOR_EXTRA=${IOR_EXTRA:-} # Add global options like verbosity
MDTEST_EXTRA=${MDTEST_EXTRA:-}
MDTEST_TEST_PATTERNS=${MDTEST_TEST_PATTERNS:-../testing/mdtest-patterns/$TYPE}
################################################################################
mkdir -p ${IOR_OUT}
mkdir -p /dev/shm/mdest
mkdir -p ${IOR_TMP}/mdest
## Sanity check
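The `${VAR:-default}` assignments added above are what let callers override any setting from the environment before invoking the script. A minimal, self-contained sketch of that idiom (variable names here are illustrative, not the script's):

```shell
# ${VAR:-default} keeps an existing value and falls back only when unset/empty.
unset DEMO_TMP
DEMO_TMP=${DEMO_TMP:-/dev/shm}   # unset, so the default applies
FIRST="$DEMO_TMP"
DEMO_TMP=/tmp/my-scratch         # a caller-supplied value...
DEMO_TMP=${DEMO_TMP:-/dev/shm}   # ...survives the defaulting expansion
echo "$FIRST -> $DEMO_TMP"
```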
@@ -36,8 +40,8 @@ I=0
function IOR(){
RANKS=$1
shift
WHAT="${IOR_MPIRUN} $RANKS ${IOR_BIN_DIR}/ior ${@} ${IOR_EXTRA} -o /dev/shm/ior"
$WHAT 1>${IOR_OUT}/$I 2>&1
WHAT="${IOR_MPIRUN} $RANKS ${IOR_BIN_DIR}/ior ${@} ${IOR_EXTRA} -o ${IOR_TMP}/ior"
$WHAT 1>"${IOR_OUT}/test_out.$I" 2>&1
if [[ $? != 0 ]]; then
echo -n "ERR"
ERRORS=$(($ERRORS + 1))
@@ -51,12 +55,27 @@ function IOR(){
function MDTEST(){
RANKS=$1
shift
WHAT="${IOR_MPIRUN} $RANKS ${IOR_BIN_DIR}/mdtest ${@} ${MDTEST_EXTRA} -d /dev/shm/mdest"
$WHAT 1>${IOR_OUT}/$I 2>&1
rm -rf ${IOR_TMP}/mdest
WHAT="${IOR_MPIRUN} $RANKS ${IOR_BIN_DIR}/mdtest ${@} ${MDTEST_EXTRA} -d ${IOR_TMP}/mdest -V=4"
$WHAT 1>"${IOR_OUT}/test_out.$I" 2>&1
if [[ $? != 0 ]]; then
echo -n "ERR"
ERRORS=$(($ERRORS + 1))
else
# compare basic pattern
if [[ -r ${MDTEST_TEST_PATTERNS}/$I.txt ]] ; then
grep "V-3" "${IOR_OUT}/test_out.$I" > "${IOR_OUT}/tmp"
cmp -s "${IOR_OUT}/tmp" ${MDTEST_TEST_PATTERNS}/$I.txt
if [[ $? != 0 ]]; then
mv "${IOR_OUT}/tmp" ${IOR_OUT}/tmp.$I
echo -n "Pattern differs! check: diff -u ${MDTEST_TEST_PATTERNS}/$I.txt ${IOR_OUT}/tmp.$I "
fi
else
if [[ ! -e ${MDTEST_TEST_PATTERNS} ]] ; then
mkdir -p ${MDTEST_TEST_PATTERNS}
fi
grep "V-3" "${IOR_OUT}/test_out.$I" > ${MDTEST_TEST_PATTERNS}/$I.txt
fi
echo -n "OK "
fi
echo " $WHAT"

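The new pattern check in `MDTEST()` filters the `V-3` trace lines out of a run log and byte-compares them against a stored reference file, recording a fresh reference when none exists. A self-contained sketch of that comparison step (paths and log contents here are fabricated for illustration):

```shell
# Fabricate a run log and a reference pattern, then compare as MDTEST() does:
# keep only the V-3 lines, and cmp them against the stored pattern.
DEMO=$(mktemp -d)
printf 'V-3: step one\nsome other output\nV-3: step two\n' > "$DEMO/test_out.1"
printf 'V-3: step one\nV-3: step two\n' > "$DEMO/1.txt"
grep "V-3" "$DEMO/test_out.1" > "$DEMO/tmp"
if cmp -s "$DEMO/tmp" "$DEMO/1.txt"; then
  RESULT="OK"
else
  RESULT="Pattern differs!"
fi
echo "$RESULT"
rm -rf "$DEMO"
```

Comparing only the `V-3` lines makes the check robust against timing and throughput numbers that differ between runs.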
testing/test_comments.ior Normal file

@@ -0,0 +1,17 @@
# test to ensure that leading whitespace is ignored
IOR START
api=posix
writeFile =1
randomOffset=1
reorderTasks=1
filePerProc=1
keepFile=1
fsync=1
repetitions=1
multiFile=1
# tab-prefixed comment
transferSize=100k
blockSize=100k
# space-prefixed comment
run
ior stop

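The test file above exercises a config parser that must ignore leading tabs and spaces before both directives and comments. A rough sketch of such a normalization pass (this is a hypothetical illustration, not IOR's actual parser):

```shell
# Hypothetical normalization: strip leading whitespace, then skip blank
# lines and comments, which is the behavior test_comments.ior checks for.
parse_line() {
  local line="$1"
  line="${line#"${line%%[![:space:]]*}"}"   # trim leading spaces/tabs
  case "$line" in
    ''|'#'*) return 1 ;;                    # ignore blanks and comments
  esac
  printf '%s\n' "$line"
}
parse_line "$(printf '\ttransferSize=100k')"      # tab-prefixed directive
parse_line '   # space-prefixed comment' || true  # skipped
parse_line 'blockSize=100k'
```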
travis-build.sh Executable file

@@ -0,0 +1,26 @@
#!/usr/bin/env bash
#
# Build the IOR source distribution tarball and move it into the source directory.
#
BASE_DIR="$(cd "${0%/*}" && pwd)"
if [ -z "$BASE_DIR" -o ! -d "$BASE_DIR" ]; then
echo "Cannot determine BASE_DIR (${BASE_DIR})" >&2
exit 2
fi
BUILD_DIR="${BASE_DIR}/build"
PACKAGE="$(awk '/^Package/ {print $2}' $BASE_DIR/META)"
VERSION="$(awk '/^Version/ {print $2}' $BASE_DIR/META)"
DIST_TGZ="${PACKAGE}-${VERSION}.tar.gz"
# Build the distribution
set -e
./bootstrap
test -d "$BUILD_DIR" && rm -rf "$BUILD_DIR"
mkdir -p "$BUILD_DIR"
cd "$BUILD_DIR"
$BASE_DIR/configure
set +e
make dist && mv -v "${BUILD_DIR}/${DIST_TGZ}" "${BASE_DIR}/${DIST_TGZ}"

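The `awk` lines above assume a `META` file at the repository root containing `Package` and `Version` records, from which the dist tarball name is derived. A sketch with fabricated contents (the version string here is illustrative):

```shell
# Fabricated META file; the real one ships at the top of the IOR tree.
META_DEMO=$(mktemp)
printf 'Package ior\nVersion 3.2.0\n' > "$META_DEMO"
PACKAGE="$(awk '/^Package/ {print $2}' "$META_DEMO")"
VERSION="$(awk '/^Version/ {print $2}' "$META_DEMO")"
DIST_TGZ="${PACKAGE}-${VERSION}.tar.gz"
echo "$DIST_TGZ"
rm -f "$META_DEMO"
```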
travis-test.sh Executable file

@@ -0,0 +1,48 @@
#!/usr/bin/env bash
#
# Test the IOR source package. This is a complicated alternative to
# the `make distcheck` option.
#
# These options will be passed directly to the autoconf configure script
CONFIGURE_OPTS="${CONFIGURE_OPTS:-"CFLAGS=-std=c99 --disable-silent-rules"}"
BASE_DIR="$(cd "${0%/*}" && pwd)"
if [ -z "$BASE_DIR" -o ! -d "$BASE_DIR" ]; then
echo "Cannot determine BASE_DIR (${BASE_DIR})" >&2
exit 2
fi
PACKAGE="$(awk '/^Package/ {print $2}' $BASE_DIR/META)"
VERSION="$(awk '/^Version/ {print $2}' $BASE_DIR/META)"
DIST_TGZ="${BASE_DIR}/${PACKAGE}-${VERSION}.tar.gz"
TEST_DIR="${BASE_DIR}/test"
INSTALL_DIR="${TEST_DIR}/_inst"
if [ -z "$DIST_TGZ" -o ! -f "$DIST_TGZ" ]; then
echo "Cannot find DIST_TGZ ($DIST_TGZ)" >&2
exit 1
fi
test -d "$TEST_DIR" && rm -rf "$TEST_DIR"
mkdir -p "$TEST_DIR"
tar -C "$TEST_DIR" -zxf "${DIST_TGZ}"
# Configure, make, and install from the source distribution
set -e
cd "$TEST_DIR/${PACKAGE}-${VERSION}"
./configure $CONFIGURE_OPTS "--prefix=$INSTALL_DIR"
make install
set +e
# Run the MPI tests
export IOR_BIN_DIR="${INSTALL_DIR}/bin"
export IOR_OUT="${TEST_DIR}/test_logs"
export IOR_TMP="$(mktemp -d)"
source "${TEST_DIR}/${PACKAGE}-${VERSION}/testing/basic-tests.sh"
# Clean up residual temporary directories (if this isn't running as root)
if [ -d "$IOR_TMP" -a "$(id -u)" -ne 0 -a ! -z "$IOR_TMP" ]; then
rm -rvf "$IOR_TMP"
fi
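Both travis scripts locate their own directory with `BASE_DIR="$(cd "${0%/*}" && pwd)"` before doing anything path-relative. A small sketch of that idiom with a fabricated path, showing how `${0%/*}` strips the script name and `cd`+`pwd` canonicalizes the remainder:

```shell
# ${path%/*} removes the final path component (the script's file name);
# cd + pwd then resolves the remaining directory to an absolute path.
DEMO_BASE="$(mktemp -d)"
mkdir -p "$DEMO_BASE/sub"
FAKE_SCRIPT="$DEMO_BASE/sub/run.sh"          # stand-in for $0
DIR="$(cd "${FAKE_SCRIPT%/*}" && pwd)"
echo "resolved directory: $(basename "$DIR")"
rm -rf "$DEMO_BASE"
```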