Merge remote-tracking branch 'upstream/master' into daos

Signed-off-by: Mohamad Chaarawi <mohamad.chaarawi@intel.com>

Conflicts:
	src/aiori.c
	src/aiori.h
	src/ior.c
	src/mdtest-main.c
	src/mdtest.c
	src/option.c
master
Mohamad Chaarawi 2019-01-24 00:31:12 +00:00
commit 2c87b5e0f5
47 changed files with 1686 additions and 1032 deletions


@ -28,11 +28,4 @@ install:
# aiori-S3.c to achieve this.
# GPFS
# NOTE: We think GPFS needs a license and is therefore not testable with Travis.
before_script: ./bootstrap
script: mkdir build && cd build && ../configure --with-hdf5 && make && cd .. && ./testing/basic-tests.sh
# notifications:
# email:
# on_success: change # default: change
# on_failure: always # default: always
script: ./travis-build.sh && CONFIGURE_OPTS="--with-hdf5" ./travis-test.sh

ChangeLog

@ -1,137 +0,0 @@
Changes in IOR-3.0.0
* Reorganization of the build system. Now uses autoconf/automake.
N.B. Windows support is not included. Patches welcome.
* Much code refactoring.
* Final summary table is printed after all tests have finished.
* Error messages significantly improved.
* Drop all "undocumented changes". If they are worth having, they
need to be implemented well and documented.
Changes in IOR-2.10.3
* bug 2962326 "Segmentation Fault When Summarizing Results" fixed
* bug 2786285 "-Wrong number of parameters to function H5Dcreate" fixed
(NOTE: to compile for HDF5 1.6 libs use "-D H5_USE_16_API")
* bug 1992514 "delay (-d) doesn't work" fixed
Contributed by demyn@users.sourceforge.net
* Ported to Windows. Required changes related to 'long' types, which on Windows
are always 32-bits, even on 64-bit systems. Missing system headers and
functions account for most of the remaining changes.
New files for Windows:
- IOR/ior.vcproj - Visual C project file
- IOR/src/C/win/getopt.{h,c} - GNU getopt() support
See updates in the USER_GUIDE for build instructions on Windows.
* Fixed bug in incrementing transferCount
* Fixed bugs in SummarizeResults with mismatched format specifiers
* Fixed inconsistencies between option names, -h output, and the USER_GUIDE.
Changes in IOR-2.10.2:
Hodson, 8/18/2008:
* extend existing random I/O capabilities and enhance performance
output statistics.
Fixes in IOR-2.10.1:
* added '-J' setAlignment option for HDF5 alignment in bytes; default value
is 1, which does not set alignment
* changed how HDF5 and PnetCDF calculate performance -- formerly each used
the size of the stat()'ed file; changed it to be number of data bytes
transferred. these library-generated files can have large headers and
filler as well as sparse file content
* found potential overflow error in cast -- using IOR_offset_t, not int now
* replaced HAVE_HDF5_NO_FILL macro to instead directly check if H5_VERS_MAJOR
H5_VERS_MINOR are defined; if so, then must be HDF5-1.6.x or higher for
no-fill usage.
* corrected IOR_GetFileSize() function to point to HDF5 and NCMPI versions of
IOR_GetFileSize() calls
* changed the netcdf dataset from 1D array to 4D array, where the 4 dimensions
are: [segmentCount][numTasksWorld][numTransfers][transferSize]
This patch from Wei-keng Liao allows for file sizes > 4GB (provided no
single dimension is > 4GB).
* finalized random-capability release
* changed statvfs() to be for __sun (SunOS) only
* retired Python GUI
Fixes in IOR-2.10.0.1:
* Cleaned up WriteOrRead(), reducing much to smaller functions.
* Added random capability for transfer offset.
* modified strtok(NULL, " \t\r\n") in ExtractHints() so no trailing characters
* added capability to set hints in NCMPI
Fixes in IOR-2.9.6.1:
* for 'pvfs2:' filename prefix, now skips DisplayFreeSpace(); formerly this
caused a problem with statvfs()
* changed gethostname() to MPI_Get_processor_name(), since in certain cases
gethostname() would only return the frontend node name
* added SunOS compiler settings for makefile
* updated O_DIRECT usage for SunOS compliance
* changed statfs() to instead use statvfs() for SunOS compliance
* renamed compiler directive _USE_LUSTRE to _MANUALLY_SET_LUSTRE_STRIPING
Fixes in IOR-2.9.5:
* Wall clock deviation time relabeled to be "Start time skew across all tasks".
* Added notification for "Using reorderTasks '-C' (expecting block, not cyclic,
task assignment)"
* Corrected bug with read performance with stonewalling (was using full size,
stat'ed file instead of bytes transferred).
Fixes in IOR-2.9.4:
* Now using IOR_offset_t instead of int for tmpOffset in IOR.c:WriteOrRead().
Formerly, this would cause error in file(s) > 2GB for ReadCheck. The
more-commonly-used WriteCheck option was not affected by this.
Fixes in IOR-2.9.3:
* Changed FILE_DELIMITER from ':' to '@'.
* Any time skew between nodes is automatically adjusted so that all nodes
are calibrated to the root node's time.
* Wall clock deviation time is still reported, but have removed the warning
message that was generated for this. (Didn't add much new information,
just annoyed folks.)
* The '-j' outlierThreshold option now is set to 0 (off) as default. To use
this, set to a positive integer N, and any task whose time (for open,
access, etc.) is not within N seconds of the mean of all the tasks will show
up as an outlier.
Fixes in IOR-2.9.2:
* Simple cleanup of error messages, format, etc.
* Changed error reporting so that with VERBOSITY=2 (-vv) any error encountered
is displayed. Previously, this was with VERBOSITY=3, along with full test
parameters, environment, and all access times of operations.
* Added deadlineForStonewalling option (-D). This option is used for measuring
the amount of data moved in a fixed time. After the barrier, each task
starts its own timer, begins moving data, and then stops moving data at a
prearranged time. Instead of measuring the amount of time to move a fixed
amount of data, this option measures the amount of data moved in a fixed
amount of time. The objective is to prevent tasks slow to complete from
skewing the performance.
Fixes in IOR-2.9.1:
* Updated test script to run through file1:file2 cases.
* Program exit code is now the total number of errors (both writecheck and
readcheck for all iterations), unless quitOnError (-q) is set.
* For various failure situations, replace abort with warning, including:
- failed uname() for platform name now gives warning
- failed unlink() of data file now gives warning
- failed fsync() of data file now gives warning
- failed open() of nonexistent script file now gives warning
* Changed labelling of error-checking output to be (hopefully) clearer
about error details for diagnostics.
* Another fix for -o file1:file2 option.
* Corrected bug in GetTestFileName() -- now possible to handle -o file1:file2
cases for file-per-proc.
Fixes in IOR-2.9.0:
* Improved checkRead option to reread data from different node (to avoid
cache) and then compare both reads.
* Added outlierThreshold (-j) option to give warning if any task is more than
this number of seconds from the mean of all participating tasks. If so, the
task is identified, its time (start, elapsed create, elapsed transfer,
elapsed close, or end) is reported, as is the mean and standard deviation for
all tasks. The default for this is 5, i.e. any task not within 5 seconds of
the mean for those times is displayed. This value can be set with
outlierThreshold=<value> or -j <value>.
* Correct for clock skew between nodes - if skew greater than wallclock
deviation threshold (WC_DEV_THRESHOLD) in seconds, then broadcast root
node's timestamp and adjust by the difference. WC_DEV_THRESHOLD currently
set to 5.
* Added a Users Guide.

META

@ -1,3 +1,3 @@
Package: ior
Version: 3.1.0
Version: 3.2.0
Release: 0


@ -1,5 +1,13 @@
MAKEFLAGS = --no-print-directory
SUBDIRS = src doc contrib
EXTRA_DIST = META COPYRIGHT README.md ChangeLog
EXTRA_DIST = META COPYRIGHT README.md NEWS testing
# ACLOCAL_AMFLAGS needed for autoconf < 2.69
ACLOCAL_AMFLAGS = -I config
# The basic-tests.sh scripts run MPI versions of IOR/mdtest and are therefore
# too complicated to run in the context of distclean. As such we reserve
# `make dist` and `make test` for simple test binaries that do not require any
# special environment.
#TESTS = testing/basic-tests.sh
#DISTCLEANFILES = -r test test_out

NEWS

@ -1,9 +1,204 @@
IOR NEWS
========
Version 3.2.0
--------------------------------------------------------------------------------
Last updated 2017-06
New major features:
- mdtest now included as a frontend for the IOR aiori backend (Nathan Hjelm,
LANL)
- Added mmap AIORI (Li Dongyang, DDN)
- Added RADOS AIORI (Shane Snyder, ANL)
- Added IME AIORI (Jean-Yves Vet, DDN)
- Added stonewalling for mdtest (Julian Kunkel, U Reading)
New minor features:
- Dropped support for PLFS AIORI (John Bent)
- Added stoneWallingWearOut functionality to allow stonewalling to ensure that
each MPI rank writes the same amount of data and captures the effects of slow
processes (Julian Kunkel, DKRZ)
- Added support for JSON output (Enno Zickler, U Hamburg; Julian Kunkel, U
Reading)
- Added dummy AIORI (Julian Kunkel, U Reading)
- Added support for HDF5 collective metadata operations (Rob Latham, ANL)
General user improvements:
- BeeGFS parameter support (Oliver Steffen, ThinkParQ)
- Semantics of `-R` now compares to expected signature (`-G`) (Julian Kunkel,
DKRZ)
- Improved macOS support for ncmpi (Vinson Leung)
- Added more complete documentation (Enno Zickler, U Hamburg)
- Assorted bugfixes and code refactoring (Adam Moody, LLNL; Julian Kunkel, U
Reading; Enno Zickler, U Hamburg; Nathan Hjelm, LANL; Rob Latham, ANL;
Jean-Yves Vet, DDN)
- More robust support for non-POSIX-backed AIORIs (Shane Snyder, ANL)
General developer improvements:
- Improvements to build process (Nathan Hjelm, LANL; Ian Kirker, UCL)
- Added continuous integration support (Enno Zickler, U Hamburg)
- Added better support for automated testing (Julian Kunkel, U Reading)
- Rewritten option handling to improve support for backend-specific options
(Julian Kunkel, U Reading)
Known issues:
- `api=RADOS` cannot work correctly if specified in a config file (via `-f`)
because `-u`/`-c`/`-p` cannot be specified (issue #98)
- `writeCheck` cannot be enabled for write-only tests using some AIORIs such as
MPI-IO (pull request #89)
Version 3.0.2
--------------------------------------------------------------------------------
- IOR and mdtest now share a common codebase. This will make it easier to
run performance benchmarks on new hardware.
- Note: this version was never properly released
Version 3.0.0
--------------------------------------------------------------------------------
- Reorganization of the build system. Now uses autoconf/automake. N.B. Windows
support is not included. Patches welcome.
- Much code refactoring.
- Final summary table is printed after all tests have finished.
- Error messages significantly improved.
- Drop all "undocumented changes". If they are worth having, they need to be
implemented well and documented.
Version 2.10.3
--------------------------------------------------------------------------------
- bug 2962326 "Segmentation Fault When Summarizing Results" fixed
- bug 2786285 "-Wrong number of parameters to function H5Dcreate" fixed
(NOTE: to compile for HDF5 1.6 libs use "-D H5_USE_16_API")
- bug 1992514 "delay (-d) doesn't work" fixed
Contributed by demyn@users.sourceforge.net
- Ported to Windows. Required changes related to 'long' types, which on Windows
are always 32-bits, even on 64-bit systems. Missing system headers and
functions account for most of the remaining changes.
New files for Windows:
- IOR/ior.vcproj - Visual C project file
- IOR/src/C/win/getopt.{h,c} - GNU getopt() support
See updates in the USER_GUIDE for build instructions on Windows.
- Fixed bug in incrementing transferCount
- Fixed bugs in SummarizeResults with mismatched format specifiers
- Fixed inconsistencies between option names, -h output, and the USER_GUIDE.
Version 2.10.2
--------------------------------------------------------------------------------
- Extend existing random I/O capabilities and enhance performance
output statistics. (Hodson, 8/18/2008)
Version 2.10.1
--------------------------------------------------------------------------------
- Added '-J' setAlignment option for HDF5 alignment in bytes; default value
is 1, which does not set alignment
- Changed how HDF5 and PnetCDF calculate performance -- formerly each used
the size of the stat()'ed file; changed it to be number of data bytes
transferred. these library-generated files can have large headers and
filler as well as sparse file content
- Found potential overflow error in cast -- using IOR_offset_t, not int now
- Replaced HAVE_HDF5_NO_FILL macro to instead directly check if H5_VERS_MAJOR
H5_VERS_MINOR are defined; if so, then must be HDF5-1.6.x or higher for
no-fill usage.
- Corrected IOR_GetFileSize() function to point to HDF5 and NCMPI versions of
IOR_GetFileSize() calls
- Changed the netcdf dataset from 1D array to 4D array, where the 4 dimensions
are: [segmentCount][numTasksWorld][numTransfers][transferSize]
This patch from Wei-keng Liao allows for file sizes > 4GB (provided no
single dimension is > 4GB).
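The aggregate size of that dataset is the product of the four dimensions. As a minimal sketch (hypothetical helper name; 64-bit arithmetic, as the patch implies), the total can exceed 4 GB even when every single dimension stays below it:

```c
#include <assert.h>
#include <stdint.h>

/* Aggregate size of the 4-D netcdf dataset
 * [segmentCount][numTasksWorld][numTransfers][transferSize].
 * Doing the multiplication in 64 bits lets the total pass 4 GiB
 * as long as no single dimension does. */
static int64_t dataset_bytes(int64_t segments, int64_t tasks,
                             int64_t transfers, int64_t transfer_size)
{
    return segments * tasks * transfers * transfer_size;
}
```

For example, 2 segments x 64 tasks x 1024 transfers of 1 MiB each is 128 GiB in total, with each individual dimension far below the 4 GB limit.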
- Finalized random-capability release
- Changed statvfs() to be for __sun (SunOS) only
- Retired Python GUI
Version 2.10.0.1
--------------------------------------------------------------------------------
- Cleaned up WriteOrRead(), reducing much to smaller functions.
- Added random capability for transfer offset.
- Modified strtok(NULL, " \t\r\n") in ExtractHints() so no trailing characters
- Added capability to set hints in NCMPI
Version 2.9.6.1
--------------------------------------------------------------------------------
- For 'pvfs2:' filename prefix, now skips DisplayFreeSpace(); formerly this
caused a problem with statvfs()
- Changed gethostname() to MPI_Get_processor_name(), since in certain cases
gethostname() would only return the frontend node name
- Added SunOS compiler settings for makefile
- Updated O_DIRECT usage for SunOS compliance
- Changed statfs() to instead use statvfs() for SunOS compliance
- Renamed compiler directive _USE_LUSTRE to _MANUALLY_SET_LUSTRE_STRIPING
Version 2.9.5
--------------------------------------------------------------------------------
- Wall clock deviation time relabeled to be "Start time skew across all tasks".
- Added notification for "Using reorderTasks '-C' (expecting block, not cyclic,
task assignment)"
- Corrected bug with read performance with stonewalling (was using full size,
stat'ed file instead of bytes transferred).
Version 2.9.4
--------------------------------------------------------------------------------
- Now using IOR_offset_t instead of int for tmpOffset in IOR.c:WriteOrRead().
Formerly, this would cause error in file(s) > 2GB for ReadCheck. The
more-commonly-used WriteCheck option was not affected by this.
Version 2.9.3
--------------------------------------------------------------------------------
- Changed FILE_DELIMITER from ':' to '@'.
- Any time skew between nodes is automatically adjusted so that all nodes
are calibrated to the root node's time.
- Wall clock deviation time is still reported, but have removed the warning
message that was generated for this. (Didn't add much new information,
just annoyed folks.)
- The '-j' outlierThreshold option now is set to 0 (off) as default. To use
this, set to a positive integer N, and any task whose time (for open,
access, etc.) is not within N seconds of the mean of all the tasks will show
up as an outlier.
Version 2.9.2
--------------------------------------------------------------------------------
- Simple cleanup of error messages, format, etc.
- Changed error reporting so that with VERBOSITY=2 (-vv) any error encountered
is displayed. Previously, this was with VERBOSITY=3, along with full test
parameters, environment, and all access times of operations.
- Added deadlineForStonewalling option (-D). This option is used for measuring
the amount of data moved in a fixed time. After the barrier, each task
starts its own timer, begins moving data, and then stops moving data at a
prearranged time. Instead of measuring the amount of time to move a fixed
amount of data, this option measures the amount of data moved in a fixed
amount of time. The objective is to prevent tasks slow to complete from
skewing the performance.
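The deadline logic above can be sketched as follows (a hypothetical, MPI-free helper, not IOR's actual implementation): each task loops issuing transfers until its own timer passes the deadline, and reports bytes moved rather than elapsed time.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of deadlineForStonewalling (-D): transfer until the per-task
 * deadline passes, then report bytes moved in that fixed time window.
 * The clock and transfer callbacks are hypothetical stand-ins. */
static size_t stonewalled_bytes(double deadline_secs, size_t transfer_size,
                                double (*now)(void),
                                void (*do_transfer)(size_t))
{
    size_t moved = 0;
    double start = now();
    while (now() - start < deadline_secs) {
        do_transfer(transfer_size);   /* one I/O of transferSize bytes */
        moved += transfer_size;
    }
    return moved;
}

/* Deterministic demo: a fake clock that advances 0.5 s per query,
 * so exactly one 4-byte transfer fits inside a 1.0 s deadline. */
static double demo_t;
static double demo_now(void) { demo_t += 0.5; return demo_t; }
static void demo_xfer(size_t n) { (void)n; }

size_t stonewall_demo(void)
{
    demo_t = 0.0;
    return stonewalled_bytes(1.0, 4, demo_now, demo_xfer);
}
```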
Version 2.9.1
--------------------------------------------------------------------------------
- Updated test script to run through file1:file2 cases.
- Program exit code is now the total number of errors (both writecheck and
readcheck for all iterations), unless quitOnError (-q) is set.
- For various failure situations, replace abort with warning, including:
- failed uname() for platform name now gives warning
- failed unlink() of data file now gives warning
- failed fsync() of data file now gives warning
- failed open() of nonexistent script file now gives warning
- Changed labelling of error-checking output to be (hopefully) clearer
about error details for diagnostics.
- Another fix for -o file1:file2 option.
- Corrected bug in GetTestFileName() -- now possible to handle -o file1:file2
cases for file-per-proc.
Version 2.9.0
--------------------------------------------------------------------------------
- Improved checkRead option to reread data from different node (to avoid cache)
and then compare both reads.
- Added outlierThreshold (-j) option to give warning if any task is more than
this number of seconds from the mean of all participating tasks. If so, the
task is identified, its time (start, elapsed create, elapsed transfer,
elapsed close, or end) is reported, as is the mean and standard deviation for
all tasks. The default for this is 5, i.e. any task not within 5 seconds of
the mean for those times is displayed. This value can be set with
outlierThreshold=<value> or -j <value>.
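The outlier test reduces to a distance-from-the-mean check; a minimal sketch (hypothetical helper, omitting the standard deviation that IOR also reports):

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

/* Sketch of the outlierThreshold (-j) check: task `task` is an outlier
 * when its time lies more than `threshold` seconds from the mean time
 * of all participating tasks. */
static int is_outlier(const double *times, size_t ntasks, size_t task,
                      double threshold)
{
    double sum = 0.0;
    for (size_t i = 0; i < ntasks; i++)
        sum += times[i];
    return fabs(times[task] - sum / (double)ntasks) > threshold;
}
```

With times {1, 1, 1, 13} the mean is 4, so only the 13-second task is flagged at the default threshold of 5.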
- Correct for clock skew between nodes - if skew greater than wallclock
deviation threshold (WC_DEV_THRESHOLD) in seconds, then broadcast root
node's timestamp and adjust by the difference. WC_DEV_THRESHOLD currently
set to 5.
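The correction amounts to an additive offset; a minimal sketch (hypothetical helper, with the root's timestamp assumed to arrive via MPI_Bcast in the real code):

```c
#include <assert.h>
#include <math.h>

#define WC_DEV_THRESHOLD 5.0  /* seconds, per the entry above */

/* Sketch of the skew correction: if this task's clock differs from the
 * root node's by more than the threshold, return the additive offset
 * that calibrates local timestamps to the root's time; else zero. */
static double clock_offset(double root_time, double local_time)
{
    double skew = root_time - local_time;
    return fabs(skew) > WC_DEV_THRESHOLD ? skew : 0.0;
}
```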
- Added a Users Guide.


@ -20,9 +20,7 @@ with both IOR and mdtest.
Running with DAOS API
---------------------
Driver specific options are specified at the end after "--". For example:
ior -a DAOS [ior_options] -- [daos_options]
ior -a DAOS [ior_options] [daos_options]
In the IOR options, the file name should be specified as a container uuid using
"-o <container_uuid>". If the "-E" option is given, then this UUID shall denote
@ -33,21 +31,18 @@ uuidgen(1) to generate the UUID of the new container.
The DAOS options include:
Required Options:
-p <pool_uuid>: pool uuid to connect to (has to be created beforehand)
-v <pool_svcl>: pool svcl list (: separated)
--daos.pool <pool_uuid>: pool uuid to connect to (has to be created beforehand)
--daos.svcl <pool_svcl>: pool svcl list (: separated)
Optional Options:
-g <group_name>: group name of servers with the pool
-r <record_size>: object record size for IO
-s <stripe_size>
-c <stripe_count>
-m <max_stripe_size>
-a <num>: number of concurrent async IOs
-w : Flag to indicate no commit, just update
-e <epoch_number>
-t <epoch_number>: wait for specific epoch before read
-k : flag to kill a rank during IO
-o <object_class>: specific object class
--daos.group <group_name>: group name of servers with the pool
--daos.recordSize <record_size>: object record size for IO
--daos.stripeSize <stripe_size>
--daos.stripeCount <stripe_count>
--daos.stripeMax <max_stripe_size>
--daos.aios <num>: number of concurrent async IOs
--daos.kill flag to kill a rank during IO
--daos.objectClass <object_class>: specific object class
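The ':'-separated svcl value can be split with a small helper; a hedged sketch (hypothetical name, loosely modeled on the driver's ParseService() that appears later in this diff):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Sketch: parse a ':'-separated service rank list such as "0:1:2"
 * (the --daos.svcl argument) into `ranks`, returning the count (<= max). */
static int parse_svcl(const char *svcl, int *ranks, int max)
{
    char buf[256];
    strncpy(buf, svcl, sizeof(buf) - 1);  /* strtok modifies its input */
    buf[sizeof(buf) - 1] = '\0';
    int n = 0;
    for (char *tok = strtok(buf, ":"); tok != NULL && n < max;
         tok = strtok(NULL, ":"))
        ranks[n++] = atoi(tok);
    return n;
}
```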
When benchmarking write performance, one likely does not want "-W", which causes
the write phase to do one additional memory copy for every I/O. This is due to
@ -64,34 +59,28 @@ the epoch to access automatically on each iteration.
Examples that should work include:
- "ior -a DAOS -w -W -o <container_uuid> -- -p <pool_uuid> -v <svc_ranks>"
- "ior -a DAOS -w -W -o <container_uuid> --daos.pool <pool_uuid> --daos.svcl <svc_ranks>"
writes into a new container and verifies the data, using default
daosRecordSize, transferSize, daosStripeSize, blockSize, daosAios, etc.
- "ior -a DAOS -w -W -r -R -o <container_uuid> -b 1g -t 4m -C --
-p <pool_uuid> -v <svc_ranks> -r 1m -s 4m -c 256 -a 8"
- "ior -a DAOS -w -W -r -R -o <container_uuid> -b 1g -t 4m -C \
--daos.pool <pool_uuid> --daos.svcl <svc_ranks> --daos.recordSize 1m --daos.stripeSize 4m \
--daos.stripeCount 256 --daos.aios 8
does all IOR tests and shifts ranks during checkWrite and checkRead.
- "ior -a DAOS -w -r -o <container_uuid> -b 8g -t 1m -C --
-p <pool_uuid> -v <svc_ranks> -r 1m -s 4m -c 256 -a 8"
may be a base to be tuned for performance benchmarking.
Running with DFS API
---------------------
Driver specific options are specified at the end after "--". For example:
ior -a DFS [ior_options] -- [dfs_options]
mdtest -a DFS [mdtest_options] -- [dfs_options]
ior -a DFS [ior_options] [dfs_options]
mdtest -a DFS [mdtest_options] [dfs_options]
Required Options:
-p <pool_uuid>: pool uuid to connect to (has to be created beforehand)
-v <pool_svcl>: pool svcl list (: separated)
-c <co_uuid>: container uuid that will hold the encapsulated namespace
--daos.pool <pool_uuid>: pool uuid to connect to (has to be created beforehand)
--daos.svcl <pool_svcl>: pool svcl list (: separated)
--daos.cont <co_uuid>: container uuid that will hold the encapsulated namespace
Optional Options:
-g <group_name>: group name of servers with the pool
--daos.group <group_name>: group name of servers with the pool
In the IOR options, the file name should be specified directly under the root
directory, since ior does not create directories and the DFS container representing the
@ -99,12 +88,12 @@ encapsulated namespace is not the same as the system namespace the user is
executing from.
Examples that should work include:
- "ior -a DFS -w -W -o /test1 -- -p <pool_uuid> -v <svc_ranks> -c <co_uuid>"
- "ior -a DFS -w -W -r -R -o /test2 -b 1g -t 4m -C -- -p <pool_uuid> -v <svc_ranks> -c <co_uuid>"
- "ior -a DFS -w -r -o /test3 -b 8g -t 1m -C -- -p <pool_uuid> -v <svc_ranks> -c <co_uuid>"
- "ior -a DFS -w -W -o /test1 --daos.pool <pool_uuid> --daos.svcl <svc_ranks> --daos.cont <co_uuid>"
- "ior -a DFS -w -W -r -R -o /test2 -b 1g -t 4m -C --daos.pool <pool_uuid> --daos.svcl <svc_ranks> --daos.cont <co_uuid>"
- "ior -a DFS -w -r -o /test3 -b 8g -t 1m -C --daos.pool <pool_uuid> --daos.svcl <svc_ranks> --daos.cont <co_uuid>"
Running mdtest, the user needs to specify a directory with -d where the test
tree will be created. Some examples:
- "mdtest -a DFS -n 100 -F -D -d /bla -- -p <pool_uuid> -v <svc_ranks> -c <co_uuid>"
- "mdtest -a DFS -n 1000 -F -C -d /bla -- -p <pool_uuid> -v <svc_ranks> -c <co_uuid>"
- "mdtest -a DFS -I 10 -z 5 -b 2 -L -d /bla -- -p <pool_uuid> -v <svc_ranks> -c <co_uuid>"
- "mdtest -a DFS -n 100 -F -D -d /bla --daos.pool <pool_uuid> --daos.svcl <svc_ranks> --daos.cont <co_uuid>"
- "mdtest -a DFS -n 1000 -F -C -d /bla --daos.pool <pool_uuid> --daos.svcl <svc_ranks> --daos.cont <co_uuid>"
- "mdtest -a DFS -I 10 -z 5 -b 2 -L -d /bla --daos.pool <pool_uuid> --daos.svcl <svc_ranks> --daos.cont <co_uuid>"


@ -19,11 +19,22 @@ AM_INIT_AUTOMAKE([check-news dist-bzip2 gnu no-define foreign subdir-objects])
m4_ifdef([AM_SILENT_RULES], [AM_SILENT_RULES([yes])])
AM_MAINTAINER_MODE
# Check for system-specific stuff
case "${host_os}" in
*linux*)
;;
*darwin*)
CPPFLAGS="${CPPFLAGS} -D_DARWIN_C_SOURCE"
;;
*)
;;
esac
# Checks for programs
# We can't do anything without a working MPI
AX_PROG_CC_MPI(,,[
AC_MSG_FAILURE([MPI compiler requested, but couldn't use MPI.])
AC_MSG_FAILURE([MPI compiler requested, but could not use MPI.])
])
AC_PROG_RANLIB
@ -39,7 +50,7 @@ AC_CHECK_HEADERS([fcntl.h libintl.h stdlib.h string.h strings.h sys/ioctl.h sys/
AC_TYPE_SIZE_T
# Checks for library functions.
AC_CHECK_FUNCS([getpagesize gettimeofday memset mkdir pow putenv realpath regcomp sqrt strcasecmp strchr strerror strncasecmp strstr uname statfs statvfs])
AC_CHECK_FUNCS([sysconf gettimeofday memset mkdir pow putenv realpath regcomp sqrt strcasecmp strchr strerror strncasecmp strstr uname statfs statvfs])
AC_SEARCH_LIBS([sqrt], [m], [],
[AC_MSG_ERROR([Math library not found])])
@ -65,14 +76,18 @@ AS_IF([test "$ac_cv_header_gpfs_h" = "yes" -o "$ac_cv_header_gpfs_fcntl_h" = "ye
# Check for system capabilities
AC_SYS_LARGEFILE
AC_DEFINE([_XOPEN_SOURCE], [700], [C99 compatibility])
# Check for lustre availability
AC_ARG_WITH([lustre],
[AS_HELP_STRING([--with-lustre],
[support configurable Lustre striping values @<:@default=check@:>@])],
[], [with_lustre=check])
AS_IF([test "x$with_lustre" != xno], [
AC_CHECK_HEADERS([lustre/lustre_user.h], [], [
if test "x$with_lustre" != xcheck; then
AC_CHECK_HEADERS([linux/lustre/lustre_user.h lustre/lustre_user.h], break, [
if test "x$with_lustre" != xcheck -a \
"x$ac_cv_header_linux_lustre_lustre_user_h" = "xno" -a \
"x$ac_cv_header_lustre_lustre_user_h" = "xno" ; then
AC_MSG_FAILURE([--with-lustre was given, <lustre/lustre_user.h> not found])
fi
])
@ -98,8 +113,12 @@ AC_ARG_WITH([hdf5],
AM_CONDITIONAL([USE_HDF5_AIORI], [test x$with_hdf5 = xyes])
AM_COND_IF([USE_HDF5_AIORI],[
AC_DEFINE([USE_HDF5_AIORI], [], [Build HDF5 backend AIORI])
AC_SEARCH_LIBS([H5Pset_all_coll_metadata_ops], [hdf5])
AC_CHECK_FUNCS([H5Pset_all_coll_metadata_ops])
])
# HDFS support
AC_ARG_WITH([hdfs],
[AS_HELP_STRING([--with-hdfs],
@ -259,12 +278,6 @@ Consider --with-aws4c=, CPPFLAGS, LDFLAGS, etc])
])
# Enable building "IOR", in all capitals
AC_ARG_ENABLE([caps],
[AS_HELP_STRING([--enable-caps],


@ -170,6 +170,8 @@ GENERAL:
NOTE: it does not delay before a check write or
check read
* interIODelay - this time in us (microseconds) after each I/O simulates computing time.
* outlierThreshold - gives warning if any task is more than this number
of seconds from the mean of all participating tasks.
If so, the task is identified, its time (start,


@ -289,6 +289,9 @@ HDF5-ONLY
* setAlignment - HDF5 alignment in bytes (e.g.: 8, 4k, 2m, 1g) [1]
* collectiveMetadata - enable HDF5 collective metadata (available since
HDF5-1.10.0)
MPIIO-, HDF5-, AND NCMPI-ONLY
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* collective - uses collective operations for access [0=FALSE]


@ -5,7 +5,7 @@ if USE_CAPS
bin_PROGRAMS += IOR MDTEST
endif
noinst_HEADERS = ior.h utilities.h parse_options.h aiori.h iordef.h ior-internal.h option.h
noinst_HEADERS = ior.h utilities.h parse_options.h aiori.h iordef.h ior-internal.h option.h mdtest.h
lib_LIBRARIES = libaiori.a
libaiori_a_SOURCES = ior.c mdtest.c utilities.c parse_options.c ior-output.c option.c


@ -24,6 +24,7 @@
#include <stdint.h>
#include <assert.h>
#include <unistd.h>
#include <strings.h>
#include <sys/types.h>
#include <libgen.h>
#include <stdbool.h>
@ -63,23 +64,23 @@ static struct daos_options o = {
};
static option_help options [] = {
{'p', "daosPool", "pool uuid", OPTION_REQUIRED_ARGUMENT, 's', &o.daosPool},
{'v', "daosPoolSvc", "pool SVCL", OPTION_REQUIRED_ARGUMENT, 's', &o.daosPoolSvc},
{'g', "daosGroup", "server group", OPTION_OPTIONAL_ARGUMENT, 's', &o.daosGroup},
{'r', "daosRecordSize", "Record Size", OPTION_OPTIONAL_ARGUMENT, 'd', &o.daosRecordSize},
{'s', "daosStripeSize", "Stripe Size", OPTION_OPTIONAL_ARGUMENT, 'd', &o.daosStripeSize},
{'c', "daosStripeCount", "Stripe Count", OPTION_OPTIONAL_ARGUMENT, 'u', &o.daosStripeCount},
{'m', "daosStripeMax", "Max Stripe",OPTION_OPTIONAL_ARGUMENT, 'u', &o.daosStripeMax},
{'a', "daosAios", "Concurrent Async IOs",OPTION_OPTIONAL_ARGUMENT, 'd', &o.daosAios},
{'k', "daosKill", "Kill target while running",OPTION_FLAG, 'd', &o.daosKill},
{'o', "daosObjectClass", "object class", OPTION_OPTIONAL_ARGUMENT, 's', &o.daosObjectClass},
{0, "daos.pool", "pool uuid", OPTION_REQUIRED_ARGUMENT, 's', &o.daosPool},
{0, "daos.svcl", "pool SVCL", OPTION_REQUIRED_ARGUMENT, 's', &o.daosPoolSvc},
{0, "daos.group", "server group", OPTION_OPTIONAL_ARGUMENT, 's', &o.daosGroup},
{0, "daos.recordSize", "Record Size", OPTION_OPTIONAL_ARGUMENT, 'd', &o.daosRecordSize},
{0, "daos.stripeSize", "Stripe Size", OPTION_OPTIONAL_ARGUMENT, 'd', &o.daosStripeSize},
{0, "daos.stripeCount", "Stripe Count", OPTION_OPTIONAL_ARGUMENT, 'u', &o.daosStripeCount},
{0, "daos.stripeMax", "Max Stripe",OPTION_OPTIONAL_ARGUMENT, 'u', &o.daosStripeMax},
{0, "daos.aios", "Concurrent Async IOs",OPTION_OPTIONAL_ARGUMENT, 'd', &o.daosAios},
{0, "daos.kill", "Kill target while running",OPTION_FLAG, 'd', &o.daosKill},
{0, "daos.objectClass", "object class", OPTION_OPTIONAL_ARGUMENT, 's', &o.daosObjectClass},
LAST_OPTION
};
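The table above maps prefixed long names ("daos.pool", "daos.svcl", ...) to struct fields; the lookup itself can be sketched as (hypothetical helper, not IOR's actual option.c parser):

```c
#include <assert.h>
#include <string.h>

/* Sketch of matching prefixed backend options ("--daos.pool <uuid>"):
 * scan argv for "--<name>" and return the value that follows, or NULL. */
static const char *find_long_opt(int argc, char **argv, const char *name)
{
    for (int i = 1; i + 1 < argc; i++)
        if (argv[i][0] == '-' && argv[i][1] == '-' &&
            strcmp(argv[i] + 2, name) == 0)
            return argv[i + 1];
    return NULL;
}
```

Dropping the short-option letters (the leading 0 in each row) frees them for the frontends and keeps every backend option namespaced under its driver prefix.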
/**************************** P R O T O T Y P E S *****************************/
static void DAOS_Init(IOR_param_t *);
static void DAOS_Fini(IOR_param_t *);
static void DAOS_Init();
static void DAOS_Fini();
static void *DAOS_Create(char *, IOR_param_t *);
static void *DAOS_Open(char *, IOR_param_t *);
static IOR_offset_t DAOS_Xfer(int, void *, IOR_size_t *,
@ -141,6 +142,7 @@ static daos_pool_info_t poolInfo;
static daos_oclass_id_t objectClass = DAOS_OC_LARGE_RW;
static CFS_LIST_HEAD(aios);
static IOR_offset_t total_size;
static bool daos_initialized = false;
/***************************** F U N C T I O N S ******************************/
@ -158,9 +160,9 @@ do { \
} \
} while (0)
#define INFO(level, param, format, ...) \
#define INFO(level, format, ...) \
do { \
if (verbose >= level) \
if (verbose >= level) \
printf("[%d] "format"\n", rank, ##__VA_ARGS__); \
} while (0)
@ -172,8 +174,7 @@ do { \
} while (0)
/* Distribute process 0's pool or container handle to others. */
static void HandleDistribute(daos_handle_t *handle, enum handleType type,
IOR_param_t *param)
static void HandleDistribute(daos_handle_t *handle, enum handleType type)
{
daos_iov_t global;
int rc;
@ -194,7 +195,7 @@ static void HandleDistribute(daos_handle_t *handle, enum handleType type,
}
MPI_CHECK(MPI_Bcast(&global.iov_buf_len, 1, MPI_UINT64_T, 0,
param->testComm),
MPI_COMM_WORLD),
"Failed to bcast global handle buffer size");
global.iov_buf = malloc(global.iov_buf_len);
@ -210,7 +211,7 @@ static void HandleDistribute(daos_handle_t *handle, enum handleType type,
}
MPI_CHECK(MPI_Bcast(global.iov_buf, global.iov_buf_len, MPI_BYTE, 0,
param->testComm),
MPI_COMM_WORLD),
"Failed to bcast global pool handle");
if (rank != 0) {
@ -241,15 +242,14 @@ static void ContainerOpen(char *testFileName, IOR_param_t *param,
if (param->open == WRITE &&
param->useExistingTestFile == FALSE) {
INFO(VERBOSE_2, param, "Creating container %s",
testFileName);
INFO(VERBOSE_2, "Creating container %s", testFileName);
rc = daos_cont_create(pool, uuid, NULL /* ev */);
DCHECK(rc, "Failed to create container %s",
testFileName);
}
INFO(VERBOSE_2, param, "Opening container %s", testFileName);
INFO(VERBOSE_2, "Opening container %s", testFileName);
if (param->open == WRITE)
dFlags = DAOS_COO_RW;
@ -261,7 +261,7 @@ static void ContainerOpen(char *testFileName, IOR_param_t *param,
DCHECK(rc, "Failed to open container %s", testFileName);
}
HandleDistribute(container, CONTAINER_HANDLE, param);
HandleDistribute(container, CONTAINER_HANDLE);
MPI_CHECK(MPI_Bcast(info, sizeof *info, MPI_BYTE, 0, param->testComm),
"Failed to broadcast container info");
@ -364,8 +364,8 @@ static void AIOInit(IOR_param_t *param)
cfs_list_add(&aio->a_list, &aios);
INFO(VERBOSE_3, param, "Allocated AIO %p: buffer %p", aio,
aio->a_iov.iov_buf);
INFO(VERBOSE_3, "Allocated AIO %p: buffer %p", aio,
aio->a_iov.iov_buf);
}
nAios = o.daosAios;
@ -383,7 +383,7 @@ static void AIOFini(IOR_param_t *param)
free(events);
cfs_list_for_each_entry_safe(aio, tmp, &aios, a_list) {
INFO(VERBOSE_3, param, "Freeing AIO %p: buffer %p", aio,
INFO(VERBOSE_3, "Freeing AIO %p: buffer %p", aio,
aio->a_iov.iov_buf);
cfs_list_del_init(&aio->a_list);
daos_event_fini(&aio->a_event);
@ -424,11 +424,11 @@ static void AIOWait(IOR_param_t *param)
nAios++;
if (param->verbose >= VERBOSE_3)
INFO(VERBOSE_3, param, "Completed AIO %p: buffer %p", aio,
INFO(VERBOSE_3, "Completed AIO %p: buffer %p", aio,
aio->a_iov.iov_buf);
}
INFO(VERBOSE_3, param, "Found %d completed AIOs (%d free %d busy)", rc,
INFO(VERBOSE_3, "Found %d completed AIOs (%d free %d busy)", rc,
nAios, o.daosAios - nAios);
}
@ -466,7 +466,7 @@ static void ObjectClassParse(const char *string)
GERR("Invalid 'daosObjectClass' argument: '%s'", string);
}
static void ParseService(IOR_param_t *param, int max, d_rank_list_t *ranks)
static void ParseService(int max, d_rank_list_t *ranks)
{
char *s;
@ -490,21 +490,18 @@ static option_help * DAOS_options(){
return options;
}
static void DAOS_Init(IOR_param_t *param)
static void DAOS_Init()
{
int rc;
if (daos_initialized)
return;
if (o.daosPool == NULL || o.daosPoolSvc == NULL)
return;
if (o.daosObjectClass)
ObjectClassParse(o.daosObjectClass);
if (param->filePerProc)
GERR("'filePerProc' not yet supported");
if (o.daosStripeMax % o.daosStripeSize != 0)
GERR("'daosStripeMax' must be a multiple of 'daosStripeSize'");
if (o.daosStripeSize % param->transferSize != 0)
GERR("'daosStripeSize' must be a multiple of 'transferSize'");
if (param->transferSize % o.daosRecordSize != 0)
GERR("'transferSize' must be a multiple of 'daosRecordSize'");
if (o.daosKill && ((objectClass != DAOS_OC_R2_RW) ||
(objectClass != DAOS_OC_R3_RW) ||
(objectClass != DAOS_OC_R4_RW) ||
@ -515,7 +512,7 @@ static void DAOS_Init(IOR_param_t *param)
GERR("'daosKill' only makes sense with 'daosObjectClass=repl'");
if (rank == 0)
INFO(VERBOSE_0, param, "WARNING: USING daosStripeMax CAUSES READS TO RETURN INVALID DATA");
INFO(VERBOSE_0, "WARNING: USING daosStripeMax CAUSES READS TO RETURN INVALID DATA");
rc = daos_init();
if (rc != -DER_ALREADY)
@ -529,18 +526,12 @@ static void DAOS_Init(IOR_param_t *param)
d_rank_t d_rank[13];
d_rank_list_t ranks;
if (o.daosPool == NULL)
GERR("'daosPool' must be specified");
if (o.daosPoolSvc == NULL)
GERR("'daosPoolSvc' must be specified");
INFO(VERBOSE_2, param, "Connecting to pool %s %s",
o.daosPool, o.daosPoolSvc);
INFO(VERBOSE_2, "Connecting to pool %s %s", o.daosPool, o.daosPoolSvc);
rc = uuid_parse(o.daosPool, uuid);
DCHECK(rc, "Failed to parse 'daosPool': %s", o.daosPool);
ranks.rl_ranks = d_rank;
ParseService(param, sizeof(d_rank) / sizeof(d_rank[0]), &ranks);
ParseService(sizeof(d_rank) / sizeof(d_rank[0]), &ranks);
rc = daos_pool_connect(uuid, o.daosGroup, &ranks,
DAOS_PC_RW, &pool, &poolInfo,
@ -548,20 +539,24 @@ static void DAOS_Init(IOR_param_t *param)
DCHECK(rc, "Failed to connect to pool %s", o.daosPool);
}
HandleDistribute(&pool, POOL_HANDLE, param);
HandleDistribute(&pool, POOL_HANDLE);
MPI_CHECK(MPI_Bcast(&poolInfo, sizeof poolInfo, MPI_BYTE, 0,
param->testComm),
MPI_CHECK(MPI_Bcast(&poolInfo, sizeof poolInfo, MPI_BYTE, 0, MPI_COMM_WORLD),
"Failed to bcast pool info");
if (o.daosStripeCount == -1)
o.daosStripeCount = poolInfo.pi_ntargets * 64UL;
daos_initialized = true;
}
static void DAOS_Fini(IOR_param_t *param)
static void DAOS_Fini()
{
int rc;
if (!daos_initialized)
return;
rc = daos_pool_disconnect(pool, NULL /* ev */);
DCHECK(rc, "Failed to disconnect from pool %s", o.daosPool);
@ -570,6 +565,8 @@ static void DAOS_Fini(IOR_param_t *param)
rc = daos_fini();
DCHECK(rc, "Failed to finalize daos");
daos_initialized = false;
}
static void *DAOS_Create(char *testFileName, IOR_param_t *param)
@ -625,7 +622,7 @@ kill_daos_server(IOR_param_t *param)
targets.rl_ranks = &d_rank;
svc.rl_ranks = svc_ranks;
ParseService(param, sizeof(svc_ranks)/ sizeof(svc_ranks[0]), &svc);
ParseService(sizeof(svc_ranks)/ sizeof(svc_ranks[0]), &svc);
rc = daos_pool_exclude(uuid, NULL, &svc, &targets, NULL);
DCHECK(rc, "Error in excluding pool from poolmap\n");
@ -668,6 +665,15 @@ static IOR_offset_t DAOS_Xfer(int access, void *file, IOR_size_t *buffer,
uint64_t round;
int rc;
if (!daos_initialized)
GERR("DAOS is not initialized!");
if (param->filePerProc)
GERR("'filePerProc' not yet supported");
if (o.daosStripeSize % param->transferSize != 0)
GERR("'daosStripeSize' must be a multiple of 'transferSize'");
if (param->transferSize % o.daosRecordSize != 0)
GERR("'transferSize' must be a multiple of 'daosRecordSize'");
assert(length == param->transferSize);
assert(param->offset % length == 0);
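The hunk above moves the size-relationship checks from DAOS_Init into DAOS_Xfer: each size must evenly divide the next larger one so the access pattern maps cleanly onto fixed-size records. A minimal sketch of that validation chain (helper name is illustrative, not from the IOR source):

```c
#include <assert.h>

/* Returns 1 when the sizes nest evenly, 0 otherwise:
 * a stripe must hold a whole number of transfers, and a
 * transfer must hold a whole number of records. */
static int xfer_sizes_valid(unsigned long long stripe_size,
                            unsigned long long transfer_size,
                            unsigned long long record_size)
{
    if (stripe_size % transfer_size != 0)
        return 0;   /* stripe not a whole number of transfers */
    if (transfer_size % record_size != 0)
        return 0;   /* transfer not a whole number of records */
    return 1;
}
```

Doing the checks at transfer time (rather than at init) lets DAOS_Init succeed even when the backend is selected but unused.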
@ -717,7 +723,7 @@ static IOR_offset_t DAOS_Xfer(int access, void *file, IOR_size_t *buffer,
else if (access == WRITECHECK || access == READCHECK)
memset(aio->a_iov.iov_buf, '#', length);
INFO(VERBOSE_3, param, "Starting AIO %p (%d free %d busy): access %d "
INFO(VERBOSE_3, "Starting AIO %p (%d free %d busy): access %d "
"dkey '%s' iod <%llu, %llu> sgl <%p, %lu>", aio, nAios,
o.daosAios - nAios, access, (char *) aio->a_dkey.iov_buf,
(unsigned long long) aio->a_iod.iod_recxs->rx_idx,
@ -756,6 +762,8 @@ static void DAOS_Close(void *file, IOR_param_t *param)
struct fileDescriptor *fd = file;
int rc;
if (!daos_initialized)
return;
while (o.daosAios - nAios > 0)
AIOWait(param);
AIOFini(param);
@ -772,7 +780,10 @@ static void DAOS_Delete(char *testFileName, IOR_param_t *param)
uuid_t uuid;
int rc;
INFO(VERBOSE_2, param, "Deleting container %s", testFileName);
if (!daos_initialized)
GERR("DAOS is not initialized!");
INFO(VERBOSE_2, "Deleting container %s", testFileName);
rc = uuid_parse(testFileName, uuid);
DCHECK(rc, "Failed to parse 'testFile': %s", testFileName);
@ -62,10 +62,10 @@ static struct dfs_options o = {
};
static option_help options [] = {
{'p', "pool", "DAOS pool uuid", OPTION_REQUIRED_ARGUMENT, 's', & o.pool},
{'v', "svcl", "DAOS pool SVCL", OPTION_REQUIRED_ARGUMENT, 's', & o.svcl},
{'g', "group", "DAOS server group", OPTION_OPTIONAL_ARGUMENT, 's', & o.group},
{'c', "cont", "DFS container uuid", OPTION_REQUIRED_ARGUMENT, 's', & o.cont},
{0, "dfs.pool", "DAOS pool uuid", OPTION_REQUIRED_ARGUMENT, 's', & o.pool},
{0, "dfs.svcl", "DAOS pool SVCL", OPTION_REQUIRED_ARGUMENT, 's', & o.svcl},
{0, "dfs.group", "DAOS server group", OPTION_OPTIONAL_ARGUMENT, 's', & o.group},
{0, "dfs.cont", "DFS container uuid", OPTION_REQUIRED_ARGUMENT, 's', & o.cont},
LAST_OPTION
};
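The options table above switches from single-letter flags (`-p`, `-v`, ...) to module-prefixed long names (`dfs.pool`, `dfs.svcl`, ...), so different backend plugins cannot collide on the same letter. A minimal sketch of the namespace idea (the helper name is hypothetical, not from the IOR option parser):

```c
#include <string.h>
#include <assert.h>

/* 1 if the option name belongs to the given module namespace,
 * e.g. option_in_module("dfs.pool", "dfs.") -> 1 */
static int option_in_module(const char *name, const char *module_prefix)
{
    return strncmp(name, module_prefix, strlen(module_prefix)) == 0;
}
```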
@ -84,8 +84,8 @@ static int DFS_Stat (const char *, struct stat *, IOR_param_t *);
static int DFS_Mkdir (const char *, mode_t, IOR_param_t *);
static int DFS_Rmdir (const char *, IOR_param_t *);
static int DFS_Access (const char *, int, IOR_param_t *);
static void DFS_Init(IOR_param_t *param);
static void DFS_Finalize(IOR_param_t *param);
static void DFS_Init();
static void DFS_Finalize();
static option_help * DFS_options();
/************************** D E C L A R A T I O N S ***************************/
@ -122,7 +122,7 @@ do { \
format"\n", __FILE__, __LINE__, rank, _rc, \
##__VA_ARGS__); \
fflush(stderr); \
MPI_Abort(MPI_COMM_WORLD, -1); \
exit(-1); \
} \
} while (0)
@ -222,7 +222,7 @@ static option_help * DFS_options(){
}
static void
DFS_Init(IOR_param_t *param) {
DFS_Init() {
uuid_t pool_uuid, co_uuid;
daos_pool_info_t pool_info;
daos_cont_info_t co_info;
@ -231,7 +231,7 @@ DFS_Init(IOR_param_t *param) {
int rc;
if (o.pool == NULL || o.svcl == NULL || o.cont == NULL)
ERR("Invalid Arguments to DFS\n");
ERR("Invalid pool or container options\n");
rc = uuid_parse(o.pool, pool_uuid);
DCHECK(rc, "Failed to parse 'Pool uuid': %s", o.pool);
@ -275,7 +275,7 @@ DFS_Init(IOR_param_t *param) {
}
static void
DFS_Finalize(IOR_param_t *param)
DFS_Finalize()
{
int rc;
@ -341,11 +341,12 @@ DFS_Open(char *testFileName, IOR_param_t *param)
{
char *name = NULL, *dir_name = NULL;
dfs_obj_t *obj = NULL, *parent = NULL;
mode_t pmode;
mode_t pmode, mode;
int rc;
int fd_oflag = 0;
fd_oflag |= O_RDWR;
mode = S_IFREG | param->mode;
rc = parse_filename(testFileName, &name, &dir_name);
DERR(rc, "Failed to parse path %s", testFileName);
@ -356,7 +357,7 @@ DFS_Open(char *testFileName, IOR_param_t *param)
rc = dfs_lookup(dfs, dir_name, O_RDWR, &parent, &pmode);
DERR(rc, "dfs_lookup() of %s Failed", dir_name);
rc = dfs_open(dfs, parent, name, S_IFREG, fd_oflag, 0, NULL, &obj);
rc = dfs_open(dfs, parent, name, mode, fd_oflag, 0, NULL, &obj);
DERR(rc, "dfs_open() of %s Failed", name);
out:
@ -412,8 +413,7 @@ DFS_Xfer(int access, void *file, IOR_size_t *buffer, IOR_offset_t length,
if (ret < remaining) {
if (param->singleXferAttempt == TRUE)
MPI_CHECK(MPI_Abort(MPI_COMM_WORLD, -1),
"barrier error");
exit(-1);
if (xferRetries > MAX_RETRY)
ERR("too many retries -- aborting");
}
@ -625,7 +625,6 @@ DFS_Access(const char *path, int mode, IOR_param_t * param)
name = NULL;
}
rc = dfs_stat(dfs, parent, name, &stbuf);
DERR(rc, "dfs_stat() of %s Failed", name);
out:
if (name)
@ -9,6 +9,7 @@
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <time.h>
#include "ior.h"
#include "aiori.h"
@ -29,9 +30,9 @@ static struct dummy_options o = {
};
static option_help options [] = {
{'c', "delay-create", "Delay per create in usec", OPTION_OPTIONAL_ARGUMENT, 'l', & o.delay_creates},
{'x', "delay-xfer", "Delay per xfer in usec", OPTION_OPTIONAL_ARGUMENT, 'l', & o.delay_xfer},
{'z', "delay-only-rank0", "Delay only Rank0", OPTION_FLAG, 'd', & o.delay_rank_0_only},
{0, "dummy.delay-create", "Delay per create in usec", OPTION_OPTIONAL_ARGUMENT, 'l', & o.delay_creates},
{0, "dummy.delay-xfer", "Delay per xfer in usec", OPTION_OPTIONAL_ARGUMENT, 'l', & o.delay_xfer},
{0, "dummy.delay-only-rank0", "Delay only Rank0", OPTION_FLAG, 'd', & o.delay_rank_0_only},
LAST_OPTION
};
@ -48,7 +49,8 @@ static void *DUMMY_Create(char *testFileName, IOR_param_t * param)
}
if (o.delay_creates){
if (! o.delay_rank_0_only || (o.delay_rank_0_only && rank == 0)){
usleep(o.delay_creates);
struct timespec wait = { o.delay_creates / 1000 / 1000, 1000l * (o.delay_creates % 1000000)};
nanosleep( & wait, NULL);
}
}
return current++;
@ -102,7 +104,8 @@ static IOR_offset_t DUMMY_Xfer(int access, void *file, IOR_size_t * buffer, IOR_
}
if (o.delay_xfer){
if (! o.delay_rank_0_only || (o.delay_rank_0_only && rank == 0)){
usleep(o.delay_xfer);
struct timespec wait = {o.delay_xfer / 1000 / 1000, 1000l * (o.delay_xfer % 1000000)};
nanosleep( & wait, NULL);
}
}
return length;
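The DUMMY backend hunks replace `usleep()` (obsolescent in POSIX.1-2008) with `nanosleep()`, which requires splitting the microsecond delay into whole seconds and a nanosecond remainder. A sketch of that conversion, matching the arithmetic in the diff:

```c
#include <time.h>
#include <assert.h>

/* Convert a delay in microseconds into the struct timespec
 * that nanosleep() expects. */
static struct timespec usec_to_timespec(long usec)
{
    struct timespec ts;
    ts.tv_sec  = usec / 1000 / 1000;        /* whole seconds */
    ts.tv_nsec = 1000L * (usec % 1000000);  /* remainder, in nanoseconds */
    return ts;
}
```

Usage mirrors the diff: `struct timespec wait = usec_to_timespec(o.delay_xfer); nanosleep(&wait, NULL);`. Note `tv_nsec` must stay below 1000000000, which the modulo guarantees.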
@ -136,6 +139,7 @@ static int DUMMY_stat (const char *path, struct stat *buf, IOR_param_t * param){
ior_aiori_t dummy_aiori = {
"DUMMY",
NULL,
DUMMY_Create,
DUMMY_Open,
DUMMY_Xfer,
@ -98,6 +98,7 @@ static int HDF5_Access(const char *, int, IOR_param_t *);
ior_aiori_t hdf5_aiori = {
.name = "HDF5",
.name_legacy = NULL,
.create = HDF5_Create,
.open = HDF5_Open,
.xfer = HDF5_Xfer,
@ -228,14 +229,27 @@ static void *HDF5_Open(char *testFileName, IOR_param_t * param)
param->setAlignment),
"cannot set alignment");
#ifdef HAVE_H5PSET_ALL_COLL_METADATA_OPS
if (param->collective_md) {
/* more scalable metadata */
HDF5_CHECK(H5Pset_all_coll_metadata_ops(accessPropList, 1),
"cannot set collective md read");
HDF5_CHECK(H5Pset_coll_metadata_write(accessPropList, 1),
"cannot set collective md write");
}
#endif
/* open file */
if (param->open == WRITE) { /* WRITE */
*fd = H5Fcreate(testFileName, fd_mode,
createPropList, accessPropList);
HDF5_CHECK(*fd, "cannot create file");
} else { /* READ or CHECK */
*fd = H5Fopen(testFileName, fd_mode, accessPropList);
HDF5_CHECK(*fd, "cannot open file");
if(! param->dryRun){
if (param->open == WRITE) { /* WRITE */
*fd = H5Fcreate(testFileName, fd_mode,
createPropList, accessPropList);
HDF5_CHECK(*fd, "cannot create file");
} else { /* READ or CHECK */
*fd = H5Fopen(testFileName, fd_mode, accessPropList);
HDF5_CHECK(*fd, "cannot open file");
}
}
/* show hints actually attached to file handle */
@ -260,6 +274,8 @@ static void *HDF5_Open(char *testFileName, IOR_param_t * param)
HDF5_CHECK(H5Fget_vfd_handle
(*fd, apl, (void **)&fd_mpiio),
"cannot get MPIIO file handle");
if (mpiHintsCheck != MPI_INFO_NULL)
MPI_Info_free(&mpiHintsCheck);
MPI_CHECK(MPI_File_get_info
(*fd_mpiio, &mpiHintsCheck),
"cannot get info object through MPIIO");
@ -267,6 +283,8 @@ static void *HDF5_Open(char *testFileName, IOR_param_t * param)
"\nhints returned from opened file (MPIIO) {\n");
ShowHints(&mpiHintsCheck);
fprintf(stdout, "}\n");
if (mpiHintsCheck != MPI_INFO_NULL)
MPI_Info_free(&mpiHintsCheck);
}
}
MPI_CHECK(MPI_Barrier(testComm), "barrier error");
@ -328,6 +346,8 @@ static void *HDF5_Open(char *testFileName, IOR_param_t * param)
and shape of data set, and open it for access */
dataSpace = H5Screate_simple(NUM_DIMS, dataSetDims, NULL);
HDF5_CHECK(dataSpace, "cannot create simple data space");
if (mpiHints != MPI_INFO_NULL)
MPI_Info_free(&mpiHints);
return (fd);
}
@ -377,6 +397,9 @@ static IOR_offset_t HDF5_Xfer(int access, void *fd, IOR_size_t * buffer,
}
}
if(param->dryRun)
return length;
/* create new data set */
if (startNewDataSet == TRUE) {
/* if just opened this file, no data set to close yet */
@ -422,6 +445,8 @@ static void HDF5_Fsync(void *fd, IOR_param_t * param)
*/
static void HDF5_Close(void *fd, IOR_param_t * param)
{
if(param->dryRun)
return;
if (param->fd_fppReadCheck == NULL) {
HDF5_CHECK(H5Dclose(dataSet), "cannot close data set");
HDF5_CHECK(H5Sclose(dataSpace), "cannot close data space");
@ -441,7 +466,10 @@ static void HDF5_Close(void *fd, IOR_param_t * param)
*/
static void HDF5_Delete(char *testFileName, IOR_param_t * param)
{
return(MPIIO_Delete(testFileName, param));
if(param->dryRun)
return;
MPIIO_Delete(testFileName, param);
return;
}
/*
@ -573,7 +601,9 @@ static void SetupDataSet(void *fd, IOR_param_t * param)
static IOR_offset_t
HDF5_GetFileSize(IOR_param_t * test, MPI_Comm testComm, char *testFileName)
{
return(MPIIO_GetFileSize(test, testComm, testFileName));
if(test->dryRun)
return 0;
return(MPIIO_GetFileSize(test, testComm, testFileName));
}
/*
@ -581,5 +611,7 @@ HDF5_GetFileSize(IOR_param_t * test, MPI_Comm testComm, char *testFileName)
*/
static int HDF5_Access(const char *path, int mode, IOR_param_t *param)
{
return(MPIIO_Access(path, mode, param));
if(param->dryRun)
return 0;
return(MPIIO_Access(path, mode, param));
}
@ -115,6 +115,7 @@ static IOR_offset_t HDFS_GetFileSize(IOR_param_t *, MPI_Comm, char *);
ior_aiori_t hdfs_aiori = {
.name = "HDFS",
.name_legacy = NULL,
.create = HDFS_Create,
.open = HDFS_Open,
.xfer = HDFS_Xfer,
@ -289,9 +290,9 @@ static void *HDFS_Create_Or_Open( char *testFileName, IOR_param_t *param, unsign
* truncate each other's writes
*/
if (( param->openFlags & IOR_WRONLY ) &&
( !param->filePerProc ) &&
( rank != 0 )) {
if (( param->openFlags & IOR_WRONLY ) &&
( !param->filePerProc ) &&
( rank != 0 )) {
MPI_CHECK(MPI_Barrier(testComm), "barrier error");
}
@ -308,7 +309,7 @@ static void *HDFS_Create_Or_Open( char *testFileName, IOR_param_t *param, unsign
param->transferSize,
param->hdfs_replicas,
param->hdfs_block_size);
}
}
hdfs_file = hdfsOpenFile( param->hdfs_fs,
testFileName,
fd_oflags,
@ -323,12 +324,12 @@ static void *HDFS_Create_Or_Open( char *testFileName, IOR_param_t *param, unsign
* For N-1 write, Rank 0 waits for the other ranks to open the file after it has.
*/
if (( param->openFlags & IOR_WRONLY ) &&
( !param->filePerProc ) &&
( rank == 0 )) {
if (( param->openFlags & IOR_WRONLY ) &&
( !param->filePerProc ) &&
( rank == 0 )) {
MPI_CHECK(MPI_Barrier(testComm), "barrier error");
}
}
if (param->verbose >= VERBOSE_4) {
printf("<- HDFS_Create_Or_Open\n");
@ -404,7 +405,7 @@ static IOR_offset_t HDFS_Xfer(int access, void *file, IOR_size_t * buffer,
}
if (param->verbose >= VERBOSE_4) {
printf("\thdfsWrite( 0x%llx, 0x%llx, 0x%llx, %lld)\n",
printf("\thdfsWrite( 0x%llx, 0x%llx, 0x%llx, %lld)\n",
hdfs_fs, hdfs_file, ptr, remaining ); /* DEBUGGING */
}
rc = hdfsWrite( hdfs_fs, hdfs_file, ptr, remaining );
@ -426,7 +427,7 @@ static IOR_offset_t HDFS_Xfer(int access, void *file, IOR_size_t * buffer,
}
if (param->verbose >= VERBOSE_4) {
printf("\thdfsRead( 0x%llx, 0x%llx, 0x%llx, %lld)\n",
printf("\thdfsRead( 0x%llx, 0x%llx, 0x%llx, %lld)\n",
hdfs_fs, hdfs_file, ptr, remaining ); /* DEBUGGING */
}
rc = hdfsRead( hdfs_fs, hdfs_file, ptr, remaining );
@ -63,6 +63,7 @@ extern MPI_Comm testComm;
ior_aiori_t ime_aiori = {
.name = "IME",
.name_legacy = "IM",
.create = IME_Create,
.open = IME_Open,
.xfer = IME_Xfer,
@ -271,10 +272,10 @@ static char *IME_GetVersion()
/*
* XXX: statfs call is currently not exposed by IME native interface.
*/
static int IME_StatFS(const char *oid, ior_aiori_statfs_t *stat_buf,
static int IME_StatFS(const char *path, ior_aiori_statfs_t *stat_buf,
IOR_param_t *param)
{
(void)oid;
(void)path;
(void)stat_buf;
(void)param;
@ -282,29 +283,33 @@ static int IME_StatFS(const char *oid, ior_aiori_statfs_t *stat_buf,
return -1;
}
/*
* XXX: mkdir call is currently not exposed by IME native interface.
*/
static int IME_MkDir(const char *oid, mode_t mode, IOR_param_t *param)
static int IME_MkDir(const char *path, mode_t mode, IOR_param_t *param)
{
(void)oid;
(void)mode;
(void)param;
WARN("mkdir is currently not supported in IME backend!");
#if (IME_NATIVE_API_VERSION >= 130)
return ime_native_mkdir(path, mode);
#else
(void)path;
(void)mode;
WARN("mkdir not supported in IME backend!");
return -1;
#endif
}
/*
* XXX: rmdir call is currently not exposed by IME native interface.
*/
static int IME_RmDir(const char *oid, IOR_param_t *param)
static int IME_RmDir(const char *path, IOR_param_t *param)
{
(void)oid;
(void)param;
WARN("rmdir is currently not supported in IME backend!");
#if (IME_NATIVE_API_VERSION >= 130)
return ime_native_rmdir(path);
#else
(void)path;
WARN("rmdir not supported in IME backend!");
return -1;
#endif
}
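The IME hunks above gate mkdir/rmdir on `IME_NATIVE_API_VERSION >= 130` at compile time, falling back to a warning and `-1` on older APIs. A self-contained sketch of that version-gating pattern (the macro and function names here are illustrative stand-ins, not the real IME API):

```c
#include <assert.h>

#define FAKE_API_VERSION 130   /* hypothetical version macro for illustration */

static int backend_mkdir(const char *path, unsigned int mode)
{
#if (FAKE_API_VERSION >= 130)
    (void)path; (void)mode;
    return 0;               /* new enough: would call the native mkdir here */
#else
    (void)path; (void)mode;
    return -1;              /* operation unsupported on older API versions */
#endif
}
```

Because the check is preprocessor-level, builds against an old client library never reference the missing symbol at all.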
/*
@ -46,6 +46,7 @@ static void MPIIO_Fsync(void *, IOR_param_t *);
ior_aiori_t mpiio_aiori = {
.name = "MPIIO",