Merge pull request #4 from hpc/main

Merging in master
Adrian Jackson 2021-05-18 13:00:22 +01:00 committed by GitHub
commit 6445267e10
65 changed files with 6620 additions and 4318 deletions

View File

@ -20,12 +20,12 @@ install:
# TODO: Not in repos for 14.04 trusty but comes with 16.04 xenial
#- sudo apt-get install -y libpnetcdf-dev pnetcdf-bin
# Install HDFS
# TODO: Not sure with which c libray hdfs should be used and if it is in
# TODO: Not sure with which c library hdfs should be used and if it is in
# the ubuntu repos
# Probably hadoop needs to be installed and provides the native API.
# Install Amazon S3
# TODO: The needed library needs to be installed. Follow the instructions in
# aiori-S3.c to achive this.
# aiori-S3.c to achieve this.
# GPFS
# NOTE: Think GPFS needs a license and is therefore not testable with travis.
script:

View File

@ -10,4 +10,5 @@ ACLOCAL_AMFLAGS = -I config
# `make dist` and `make test` for simple test binaries that do not require any
# special environment.
#TESTS = testing/basic-tests.sh
#DISTCLEANFILES = -r test test_out
DISTCLEANFILES = ./src/build.conf

NEWS
View File

@ -1,4 +1,4 @@
Version 3.3.0+dev
Version 3.4.0+dev
--------------------------------------------------------------------------------
New major features:
@ -7,6 +7,54 @@ New minor features:
Bugfixes:
Version 3.3.0
--------------------------------------------------------------------------------
New major features:
- Add CephFS AIORI (Mark Nelson)
- Add Gfarm AIORI (Osamu Tatebe)
- Add DAOS AIORI (Mohamad Chaarawi)
- Add DAOS DFS AIORI (Mohamad Chaarawi)
- -B option has been replaced with --posix.odirect
New minor features:
- Display outlier host names (Jean-Yves Vet)
- Enable global default dir layout for subdirs in Lustre (Petros Koutoupis)
- Removed pound signs (#) from mdtest output file names (Julian Kunkel)
- Print I/O hints from NCMPI (Wei-keng Liao)
- Add mknod support to mdtest (Gu Zheng)
- Refactor AIORI-specific options (Julian Kunkel)
- Enable IME native backend for mdtest (Jean-Yves Vet)
- Enable mkdir/rmdir to IME AIORI (Jean-Yves Vet)
- Add HDF5 collective metadata option (Rob Latham)
- Add support for sync to AIORIs (Julian Kunkel)
General user improvements and bug fixes:
- Allocate aligned buffers to support DirectIO for BeeGFS (Sven Breuner)
- Added IOPS and latency results to json output (Robert LeBlanc)
- Fixed case where numTasks is not evenly divisible by tasksPerNode (J. Schwartz)
- Fix several memory leaks and buffer alignment problems (J. Schwartz, Axel Huebl, Sylvain Didelot)
- Add mdtest data verification (Julian Kunkel)
- Clean up functionality of stonewall (Julian Kunkel)
- Fix checks for lustre_user.h (Andreas Dilger)
- Make write verification work without read test (Jean-Yves Vet)
- Documentation updates (Vaclav Hapla, Glenn Lockwood)
- Add more debugging support (J. Schwartz)
General developer improvements:
- Fix type casting errors (Vaclav Hapla)
- Add basic test infrastructure (Julian Kunkel, Glenn Lockwood)
- Conform to strict C99 (Glenn Lockwood)
Known issues:
- S3 and HDFS backends may not compile with new versions of respective libraries
Version 3.2.1
--------------------------------------------------------------------------------
@ -63,7 +111,7 @@ Known issues:
because `-u`/`-c`/`-p` cannot be specified (issue #98)
- `writeCheck` cannot be enabled for write-only tests using some AIORIs such as
MPI-IO (pull request #89)
Version 3.0.2
--------------------------------------------------------------------------------
@ -91,7 +139,7 @@ Version 2.10.3
Contributed by demyn@users.sourceforge.net
- Ported to Windows. Required changes related to 'long' types, which on Windows
are always 32-bits, even on 64-bit systems. Missing system headers and
functions acount for most of the remaining changes.
functions account for most of the remaining changes.
New files for Windows:
- IOR/ior.vcproj - Visual C project file
- IOR/src/C/win/getopt.{h,c} - GNU getopt() support
@ -151,7 +199,7 @@ Version 2.9.5
- Added notification for "Using reorderTasks '-C' (expecting block, not cyclic,
task assignment)"
- Corrected bug with read performance with stonewalling (was using full size,
stat'ed file instead of bytes transfered).
stat'ed file instead of bytes transferred).
Version 2.9.4
--------------------------------------------------------------------------------

View File

@ -1,8 +1,8 @@
# HPC IO Benchmark Repository [![Build Status](https://travis-ci.org/hpc/ior.svg?branch=master)](https://travis-ci.org/hpc/ior)
# HPC IO Benchmark Repository [![Build Status](https://travis-ci.org/hpc/ior.svg?branch=main)](https://travis-ci.org/hpc/ior)
This repository contains the IOR and mdtest parallel I/O benchmarks. The
[official IOR/mdtest documention][] can be found in the `docs/` subdirectory or
on Read the Docs.
[official IOR/mdtest documentation][] can be found in the `docs/` subdirectory
or on Read the Docs.
## Building
@ -28,4 +28,4 @@ on Read the Docs.
distributions at once.
[official IOR release]: https://github.com/hpc/ior/releases
[official IOR/mdtest documention]: http://ior.readthedocs.org/
[official IOR/mdtest documentation]: http://ior.readthedocs.org/

View File

@ -4,55 +4,13 @@ Building
The DAOS library must be installed on the system.
./bootstrap
./configure --prefix=iorInstallDir --with-daos=DIR --with-cart=DIR
One must specify "--with-daos=/path/to/daos/install and --with-cart". When that
is specified the DAOS and DFS driver will be built.
The DAOS driver uses the DAOS API to open a container (or create it if it
doesn't exist first) then create an array object in that container (file) and
read/write to the array object using the daos Array API. The DAOS driver works
with IOR only (no mdtest support yet). The file name used by IOR (passed by -o
option) is hashed to an object ID that is used as the array oid.
./configure --prefix=iorInstallDir --with-daos=DIR
The DFS (DAOS File System) driver creates an encapsulated namespace and emulates
the POSIX driver using the DFS API directly on top of DAOS. The DFS driver works
with both IOR and mdtest.
Running with DAOS API
---------------------
ior -a DAOS [ior_options] [daos_options]
In the IOR options, the file name should be specified as a container uuid using
"-o <container_uuid>". If the "-E" option is given, then this UUID shall denote
an existing container created by a "matching" IOR run. Otherwise, IOR will
create a new container with this UUID. In the latter case, one may use
uuidgen(1) to generate the UUID of the new container.
The DAOS options include:
Required Options:
--daos.pool <pool_uuid>: pool uuid to connect to (has to be created beforehand)
--daos.svcl <pool_svcl>: pool svcl list (: separated)
--daos.cont <cont_uuid>: container for the IOR files/objects (can use `uuidgen`)
Optional Options:
--daos.group <group_name>: group name of servers with the pool
--daos.chunk_size <chunk_size>: Chunk size of the array object controlling striping over DKEYs
--daos.destroy flag to destory the container on finalize
--daos.oclass <object_class>: specific object class for array object
Examples that should work include:
- "ior -a DAOS -w -W -o file_name --daos.pool <pool_uuid> --daos.svcl <svc_ranks>\
--daos.cont <cont_uuid>"
- "ior -a DAOS -w -W -r -R -o file_name -b 1g -t 4m \
--daos.pool <pool_uuid> --daos.svcl <svc_ranks> --daos.cont <cont_uuid>\
--daos.chunk_size 1024 --daos.oclass R2"
Running with DFS API
Running
---------------------
ior -a DFS [ior_options] [dfs_options]
@ -64,15 +22,17 @@ Required Options:
--dfs.cont <co_uuid>: container uuid that will hold the encapsulated namespace
Optional Options:
--dfs.group <group_name>: group name of servers with the pool
--dfs.chunk_size <chunk_size>: Chunk size of the files
--dfs.destroy flag to destory the container on finalize
--dfs.oclass <object_class>: specific object class for files
--dfs.group <group_name>: group name of servers with the pool (default: daos_server)
--dfs.chunk_size <chunk_size>: Chunk size of the files (default: 1MiB)
--dfs.destroy: flag to destroy the container on finalize (default: no)
--dfs.oclass <object_class>: specific object class for files (default: SX)
--dfs.dir_oclass <object_class>: specific object class for directories (default: SX)
--dfs.prefix <path>: absolute path to account for DFS files/dirs before the cont root
In the IOR options, the file name should be specified on the root dir directly
since ior does not create directories and the DFS container representing the
encapsulated namespace is not the same as the system namespace the user is
executing from.
If prefix is not set, the file name in the IOR options should be specified
directly under the root dir, since ior does not create directories and the DFS
container representing the encapsulated namespace is not the same as the system
namespace the user is executing from.
Examples that should work include:
- "ior -a DFS -w -W -o /test1 --dfs.pool <pool_uuid> --dfs.svcl <svc_ranks> --dfs.cont <co_uuid>"
@ -80,7 +40,8 @@ Examples that should work include:
- "ior -a DFS -w -r -o /test3 -b 8g -t 1m -C --dfs.pool <pool_uuid> --dfs.svcl <svc_ranks> --dfs.cont <co_uuid>"
Running mdtest, the user needs to specify a directory with -d where the test
tree will be created. Some examples:
- "mdtest -a DFS -n 100 -F -D -d /bla --dfs.pool <pool_uuid> --dfs.svcl <svc_ranks> --dfs.cont <co_uuid>"
- "mdtest -a DFS -n 1000 -F -C -d /bla --dfs.pool <pool_uuid> --dfs.svcl <svc_ranks> --dfs.cont <co_uuid>"
- "mdtest -a DFS -I 10 -z 5 -b 2 -L -d /bla --dfs.pool <pool_uuid> --dfs.svcl <svc_ranks> --dfs.cont <co_uuid>"
tree will be created (set '/' if writing to the root of the DFS container). Some
examples:
- "mdtest -a DFS -n 100 -F -D -d / --dfs.pool <pool_uuid> --dfs.svcl <svc_ranks> --dfs.cont <co_uuid>"
- "mdtest -a DFS -n 1000 -F -C -d / --dfs.pool <pool_uuid> --dfs.svcl <svc_ranks> --dfs.cont <co_uuid>"
- "mdtest -a DFS -I 10 -z 5 -b 2 -L -d / --dfs.pool <pool_uuid> --dfs.svcl <svc_ranks> --dfs.cont <co_uuid>"

View File

@ -73,6 +73,53 @@ AS_IF([test "$ac_cv_header_gpfs_h" = "yes" -o "$ac_cv_header_gpfs_fcntl_h" = "ye
])
])
# Check for CUDA
AC_ARG_WITH([cuda],
[AS_HELP_STRING([--with-cuda],
[support configurable CUDA @<:@default=check@:>@])],
[], [with_cuda=check])
AS_IF([test "x$with_cuda" != xno], [
LDFLAGS="$LDFLAGS -L$with_cuda/lib64 -Wl,--enable-new-dtags -Wl,-rpath=$with_cuda/lib64"
CPPFLAGS="$CPPFLAGS -I$with_cuda/include"
AC_CHECK_HEADERS([cuda_runtime.h], [AC_DEFINE([HAVE_CUDA], [], [CUDA GPU API found])], [
if test "x$with_cuda" != xcheck; then
AC_MSG_FAILURE([--with-cuda was given, <cuda_runtime.h> not found])
fi
])
AS_IF([test "$ac_cv_header_cuda_runtime_h" = "yes"], [
AC_SEARCH_LIBS([cudaMalloc], [cudart cudart_static], [],
[AC_MSG_ERROR([Library containing cudaMalloc symbol not found])])
])
])
AM_CONDITIONAL([HAVE_CUDA], [test x$with_cuda = xyes])
AM_COND_IF([HAVE_CUDA],[AC_DEFINE([HAVE_CUDA], [], [CUDA GPU API found])])
# Check for GPUDirect
AC_ARG_WITH([gpuDirect],
[AS_HELP_STRING([--with-gpuDirect],
[support configurable GPUDirect @<:@default=check@:>@])],
[], [with_gpuDirect=check])
AS_IF([test "x$with_gpuDirect" != xno], [
LDFLAGS="$LDFLAGS -L$with_gpuDirect/lib64 -Wl,--enable-new-dtags -Wl,-rpath=$with_gpuDirect/lib64"
CPPFLAGS="$CPPFLAGS -I$with_gpuDirect/include"
AC_CHECK_HEADERS([cufile.h], [AC_DEFINE([HAVE_GPU_DIRECT], [], [GPUDirect API found])], [
if test "x$with_gpuDirect" != xcheck; then
AC_MSG_FAILURE([--with-gpuDirect was given, <cufile.h> not found])
fi
])
AS_IF([test "$ac_cv_header_cufile_h" = "yes"], [
AC_SEARCH_LIBS([cuFileDriverOpen], [cufile], [],
[AC_MSG_ERROR([Library containing cuFileDriverOpen symbol not found])])
])
])
AM_CONDITIONAL([HAVE_GPU_DIRECT], [test x$with_gpuDirect = xyes])
AM_COND_IF([HAVE_GPU_DIRECT],[AC_DEFINE([HAVE_GPU_DIRECT], [], [GPUDirect API found])])
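# Example (the install path below is an assumption): with the CUDA toolkit
# under /usr/local/cuda, both of the checks above can be exercised via
#   ./configure --with-cuda=/usr/local/cuda --with-gpuDirect=/usr/local/cuda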
# Check for system capabilities
AC_SYS_LARGEFILE
@ -84,7 +131,7 @@ AC_ARG_WITH([lustre],
[support configurable Lustre striping values @<:@default=check@:>@])],
[], [with_lustre=check])
AS_IF([test "x$with_lustre" = xyes ], [
AC_CHECK_HEADERS([linux/lustre/lustre_user.h lustre/lustre_user.h], break, [
AC_CHECK_HEADERS([linux/lustre/lustre_user.h lustre/lustre_user.h], [AC_DEFINE([HAVE_LUSTRE_USER], [], [Lustre user API available in some shape or form])], [
if test "x$with_lustre" != xcheck -a \
"x$ac_cv_header_linux_lustre_lustre_user_h" = "xno" -a \
"x$ac_cv_header_lustre_lustre_user_h" = "xno" ; then
@ -160,8 +207,10 @@ AC_ARG_WITH([ncmpi],
[],
[with_ncmpi=no])
AM_CONDITIONAL([USE_NCMPI_AIORI], [test x$with_ncmpi = xyes])
AM_COND_IF([USE_NCMPI_AIORI],[
AC_DEFINE([USE_NCMPI_AIORI], [], [Build NCMPI backend AIORI])
AS_IF([test "x$with_ncmpi" = xyes ], [
AC_CHECK_HEADERS([pnetcdf.h], [AC_DEFINE([USE_NCMPI_AIORI], [], [PNetCDF available])], [
AC_MSG_FAILURE([--with-ncmpi was given but pnetcdf.h not found])
])
])
# MMAP IO support
@ -200,6 +249,19 @@ AS_IF([test "x$with_pmdk" != xno], [
[AC_MSG_ERROR([Library containing pmdk symbols not found])])
])
# LINUX AIO support
AC_ARG_WITH([aio],
[AS_HELP_STRING([--with-aio],
[support Linux AIO @<:@default=no@:>@])],
[],
[with_aio=no])
AM_CONDITIONAL([USE_AIO_AIORI], [test x$with_aio = xyes])
AS_IF([test "x$with_aio" != xno], [
AC_DEFINE([USE_AIO_AIORI], [], [Build AIO backend])
AC_CHECK_HEADERS(libaio.h,, [unset AIO])
AC_SEARCH_LIBS([io_setup], [aio], [], [AC_MSG_ERROR([Library containing AIO symbol io_setup not found])])
])
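# Example (package name is illustrative and applies to Debian/Ubuntu): the AIO
# backend needs the libaio headers and library, e.g.
#   apt-get install -y libaio-dev && ./configure --with-aio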
# RADOS support
AC_ARG_WITH([rados],
@ -226,40 +288,25 @@ AM_COND_IF([USE_CEPHFS_AIORI],[
AC_DEFINE([USE_CEPHFS_AIORI], [], [Build CEPHFS backend AIORI])
])
# DAOS Backends (DAOS and DFS) IO support require DAOS and CART/GURT
AC_ARG_WITH([cart],
[AS_HELP_STRING([--with-cart],
[support IO with DAOS backends @<:@default=no@:>@])],
[], [with_daos=no])
AS_IF([test "x$with_cart" != xno], [
CART="yes"
LDFLAGS="$LDFLAGS -L$with_cart/lib64 -Wl,--enable-new-dtags -Wl,-rpath=$with_cart/lib64"
LDFLAGS="$LDFLAGS -L$with_cart/lib -Wl,--enable-new-dtags -Wl,-rpath=$with_cart/lib"
CPPFLAGS="$CPPFLAGS -I$with_cart/include/"
AC_CHECK_HEADERS(gurt/common.h,, [unset CART])
AC_CHECK_LIB([gurt], [d_hash_murmur64],, [unset CART])
])
# DAOS-FS Backend (DFS)
AC_ARG_WITH([daos],
[AS_HELP_STRING([--with-daos],
[support IO with DAOS backends @<:@default=no@:>@])],
[support IO with DAOS backend @<:@default=no@:>@])],
[], [with_daos=no])
AS_IF([test "x$with_daos" != xno], [
DAOS="yes"
LDFLAGS="$LDFLAGS -L$with_daos/lib64 -Wl,--enable-new-dtags -Wl,-rpath=$with_daos/lib64"
CPPFLAGS="$CPPFLAGS -I$with_daos/include"
AC_CHECK_HEADERS(daos_types.h,, [unset DAOS])
AC_CHECK_HEADERS(gurt/common.h,, [unset DAOS])
AC_CHECK_HEADERS(daos.h,, [unset DAOS])
AC_CHECK_LIB([gurt], [d_hash_murmur64],, [unset DAOS])
AC_CHECK_LIB([uuid], [uuid_generate],, [unset DAOS])
AC_CHECK_LIB([daos_common], [daos_sgl_init],, [unset DAOS])
AC_CHECK_LIB([daos], [daos_init],, [unset DAOS])
AC_CHECK_LIB([dfs], [dfs_mkdir],, [unset DAOS])
])
AM_CONDITIONAL([USE_DAOS_AIORI], [test x$DAOS = xyes])
AM_COND_IF([USE_DAOS_AIORI],[
AC_DEFINE([USE_DAOS_AIORI], [], [Build DAOS backends AIORI])
AC_DEFINE([USE_DAOS_AIORI], [], [Build DAOS-FS backend AIORI])
])
# Gfarm support
@ -308,19 +355,54 @@ AM_COND_IF([AWS4C_DIR],[
])
# Amazon S3 support [see also: --with-aws4c]
AC_ARG_WITH([S3],
[AS_HELP_STRING([--with-S3],
[support IO with Amazon S3 backend @<:@default=no@:>@])],
# Amazon S3 support using the libs3 API
AC_ARG_WITH([S3-libs3],
[AS_HELP_STRING([--with-S3-libs3],
[support IO with Amazon libS3 @<:@default=no@:>@])],
[],
[with_S3=no])
AM_CONDITIONAL([USE_S3_AIORI], [test x$with_S3 = xyes])
AM_COND_IF([USE_S3_AIORI],[
AC_DEFINE([USE_S3_AIORI], [], [Build Amazon-S3 backend AIORI])
[with_S3_libs3=no])
AM_CONDITIONAL([USE_S3_LIBS3_AIORI], [test x$with_S3_libs3 = xyes])
AM_COND_IF([USE_S3_LIBS3_AIORI],[
AC_DEFINE([USE_S3_LIBS3_AIORI], [], [Build Amazon-S3 backend AIORI using libs3])
])
err=0
AS_IF([test "x$with_S3" != xno], [
AS_IF([test "x$with_S3_libs3" != xno], [
AC_MSG_NOTICE([beginning of S3-related checks])
ORIG_CPPFLAGS=$CPPFLAGS
ORIG_LDFLAGS=$LDFLAGS
AC_CHECK_HEADERS([libs3.h], [], [err=1])
# Autotools thinks searching for a library means I want it added to LIBS
ORIG_LIBS=$LIBS
AC_CHECK_LIB([s3], [S3_initialize], [], [err=1])
LIBS=$ORIG_LIBS
AC_MSG_NOTICE([end of S3-related checks])
if test "$err" == 1; then
AC_MSG_FAILURE([S3 support is missing. dnl Make sure you have access to libs3. dnl])
fi
# restore user's values
CPPFLAGS=$ORIG_CPPFLAGS
LDFLAGS=$ORIG_LDFLAGS
])
# Amazon S3 support [see also: --with-aws4c]
AC_ARG_WITH([S3-4c],
[AS_HELP_STRING([--with-S3-4c],
[support IO with Amazon S3 backend @<:@default=no@:>@])],
[],
[with_S3_4c=no])
AM_CONDITIONAL([USE_S3_4C_AIORI], [test x$with_S3_4c = xyes])
AM_COND_IF([USE_S3_4C_AIORI],[
AC_DEFINE([USE_S3_4C_AIORI], [], [Build Amazon-S3 backend AIORI using lib4c])
])
err=0
AS_IF([test "x$with_S3_4c" != xno], [
AC_MSG_NOTICE([beginning of S3-related checks])
# save user's values, while we use AC_CHECK_HEADERS with $AWS4C_DIR
@ -352,6 +434,30 @@ Consider --with-aws4c=, CPPFLAGS, LDFLAGS, etc])
LDFLAGS=$ORIG_LDFLAGS
])
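# Example (install locations are assumptions): the two S3 switches can be
# exercised with
#   ./configure --with-S3-libs3 CPPFLAGS=-I$HOME/libs3/include LDFLAGS=-L$HOME/libs3/lib
#   ./configure --with-S3-4c --with-aws4c=$HOME/aws4c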
# Check for existence of the function to detect the CPU socket ID (for multi-socket systems)
AC_COMPILE_IFELSE(
[AC_LANG_SOURCE([[
int main(){
unsigned long a,d,c;
__asm__ volatile("rdtscp" : "=a" (a), "=d" (d), "=c" (c));
return 0;
}
]])],
AC_DEFINE([HAVE_RDTSCP_ASM], [], [Has ASM to detect CPU socket ID]))
AC_COMPILE_IFELSE(
[AC_LANG_SOURCE([[
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>
unsigned long GetProcessorAndCore(int *chip, int *core){
return syscall(SYS_getcpu, core, chip, NULL);
}
int main(){
}
]])],
AC_DEFINE([HAVE_GETCPU_SYSCALL], [], [Has syscall to detect CPU socket ID]))
# Enable building "IOR", in all capitals
AC_ARG_ENABLE([caps],

View File

@ -47,7 +47,7 @@ Two ways to run IOR:
E.g., to execute: IOR -W -f script
This defaults all tests in 'script' to use write data checking.
* The Command line supports to specify additional parameters for the choosen API.
* The command line supports specifying additional parameters for the chosen API.
For example, username and password for the storage.
Available options are listed in the help text after selecting the API when running with -h.
For example, 'IOR -a DUMMY -h' shows the supported options for the DUMMY backend.
@ -164,7 +164,7 @@ GENERAL:
* numTasks - number of tasks that should participate in the test
[0]
NOTE: 0 denotes all tasks
NOTE: -1 denotes all tasks
* interTestDelay - this is the time in seconds to delay before
beginning a write or read in a series of tests [0]
@ -361,7 +361,7 @@ GPFS-SPECIFIC:
* gpfsReleaseToken - immediately after opening or creating file, release
all locks. Might help mitigate lock-revocation
traffic when many proceses write/read to same file.
traffic when many processes write/read to same file.
BeeGFS-SPECIFIC (POSIX only):
================
@ -499,7 +499,7 @@ zip, gzip, and bzip.
3) bzip2: For bziped files a transfer size of 1k is insufficient (~50% compressed).
To avoid compression a transfer size of greater than the bzip block size is required
(default = 900KB). I suggest a transfer size of greather than 1MB to avoid bzip2 compression.
(default = 900KB). I suggest a transfer size of greater than 1MB to avoid bzip2 compression.
Be aware of the block size your compression algorithm will look at, and adjust the transfer size
accordingly.
@ -660,7 +660,7 @@ HOW DO I USE HINTS?
'setenv IOR_HINT__MPI__<hint> <value>'
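For example, with a ROMIO-based MPI-IO implementation one might set (the hint
name here is only illustrative, not something IOR itself defines):

'setenv IOR_HINT__MPI__romio_cb_write enable'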
HOW DO I EXPLICITY SET THE FILE DATA SIGNATURE?
HOW DO I EXPLICITLY SET THE FILE DATA SIGNATURE?
The data signature for a transfer contains the MPI task number, transfer-
buffer offset, and also timestamp for the start of iteration. As IOR works

View File

@ -28,7 +28,7 @@ Use ``collective creates'', meaning task 0 does all the creates.
Only perform the create phase of the tests.
.TP
.I "-d" testdir[@testdir2]
The directory in which the tests will run. For multiple pathes, must use fully-qualified pathnames.
The directory in which the tests will run. For multiple paths, must use fully-qualified pathnames.
[default: working directory of mdtest].
.TP
.I "-D"
@ -78,6 +78,9 @@ Stride # between neighbor tasks for file/dir stat, 0 = local
.I "-p" seconds
Pre-iteration delay (in seconds).
.TP
.I "-P"
Print both the file creation rate and the elapsed time.
.TP
.I "-r"
Only perform the remove phase of the tests.
.TP
@ -121,6 +124,19 @@ Set verbosity value
Set the number of Bytes to write to each file after it is created
[default: 0].
.TP
.I "-W" seconds
Specify the stonewall time in seconds. When the stonewall timer has elapsed,
the rank with the highest number of creates sets
.I number_of_items
for the other ranks, so that all ranks create the same number of files.
.TP
.I "-x" filename
Filename to use for stonewall synchronization between processes.
.TP
.I "Y"
Call the sync command after each phase, which is included in the
timing. Note that it causes all IO to be flushed from the nodes.
.TP
.I "-z" tree_depth
The depth of the hierarchical directory tree [default: 0].
.SH EXAMPLES

View File

@ -1,38 +1,156 @@
Release Process
===============
To build a new version of IOR::
General release process
-----------------------
The versioning for IOR is encoded in the ``META`` file in the root of the
repository. The nomenclature is
* 3.2.0 designates a proper release
* 3.2.0rc1 designates the first release candidate in preparation for the 3.2.0
release
* 3.2.0+dev indicates development towards 3.2.0 prior to a feature freeze
* 3.2.0rc1+dev indicates development towards 3.2.0's first release candidate
after a feature freeze
Building a release of IOR
-------------------------
To build a new version of IOR, e.g., from the 3.2 release branch::
$ docker run -it ubuntu bash
$ apt-get update
$ apt-get install -y git automake autoconf make gcc mpich
$ git clone -b rc https://github.com/hpc/ior
$ git clone -b 3.2 https://github.com/hpc/ior
$ cd ior
$ ./travis-build.sh
To create a new release candidate from RC,
Alternatively you can build an arbitrary branch in Docker using a bind mount.
This will be wrapped into a build-release Dockerfile in the future::
1. Disable the ``check-news`` option in ``AM_INIT_AUTOMAKE`` inside configure.ac
2. Append "rcX" to the ``Version:`` field in META where X is the release
candidate number
3. Build a release package as described above
$ docker run -it --mount type=bind,source=$PWD,target=/ior ubuntu
$ apt-get update
$ apt-get install -y git automake autoconf make gcc mpich
$ cd /ior
$ ./travis-build.sh
To create a new minor release of IOR,
Feature freezing for a new release
----------------------------------
1. Build the rc branch as described above
2. Create a release on GitHub which creates the appropriate tag
3. Upload the source distributions generated by travis-build.sh
1. Branch `major.minor` from the commit at which the feature freeze should take
effect.
2. Append the "rc1+dev" designator to the Version field in the META file, and
update the NEWS file to have this new version as the topmost heading
3. Commit and push this new branch
4. Update the ``Version:`` field in META `of the main branch` to be the `next`
release version, not the one whose features have just been frozen, and update
the NEWS file as you did in step 2.
To create a micro branch of IOR (e.g., if a release needs a hotfix),
For example, to feature-freeze for version 3.2::
1. Check out the relevant release tagged in the rc branch (e.g., ``3.2.0``)
2. Create a branch with the major.minor name (e.g., ``3.2``) from that tag
3. Update the ``Version:`` in META
4. Apply hotfix(es) to that major.minor branch
5. Create the major.minor.micro release on GitHub
$ git checkout 11469ac
$ git checkout -B 3.2
$ vim META # update the ``Version:`` field to 3.2.0rc1+dev
$ vim NEWS # update the topmost version number to 3.2.0rc1+dev
$ git add NEWS META
$ git commit -m "Update version for feature freeze"
$ git push upstream 3.2
$ git checkout main
$ vim META # update the ``Version:`` field to 3.3.0+dev
$ vim NEWS # update the topmost version number to 3.3.0+dev
$ git add NEWS META
$ git commit -m "Update version number"
$ git push upstream main
To initiate a feature freeze,
Creating a new release candidate
--------------------------------
1. Merge the master branch into the rc branch
2. Update the ``Version:`` field in META `of the master branch` to be the `next`
release version, not the one whose features have just been frozen
1. Check out the appropriate commit from the `major.minor` branch
2. Disable the ``check-news`` option in ``AM_INIT_AUTOMAKE`` inside configure.ac
3. Remove the "+dev" designator from the Version field in META
4. Build a release package as described above
5. Revert the change from #2 (it was just required to build a non-release tarball)
6. Tag and commit the updated META so one can easily recompile this rc from git
7. Update the "rcX" number and add "+dev" back to the ``Version:`` field in
META. This will allow anyone playing with the tip of this branch to see that
the state is in preparation for the next rc, but is unreleased because of
+dev.
8. Commit
For example to release 3.2.0rc1::
$ git checkout 3.2
$ # edit configure.ac and remove the check-news option
$ # remove +dev from the Version field in META (Version: 3.2.0rc1)
$ # build
$ git checkout configure.ac
$ git add META
$ git commit -m "Release candidate for 3.2.0rc1"
$ git tag 3.2.0rc1
$ # uptick rc number and re-add +dev to META (Version: 3.2.0rc2+dev)
$ git add META # should contain Version: 3.2.0rc2+dev
$ git commit -m "Uptick version after release"
$ git push && git push --tags
Applying patches to a new microrelease
--------------------------------------
If a released version 3.2.0 has bugs, cherry-pick the fixes from main into the
3.2 branch::
$ git checkout 3.2
$ git cherry-pick cb40c99
$ git cherry-pick aafdf89
$ git push upstream 3.2
Once you've accumulated enough bugs, move on to issuing a new release below.
Creating a new release
----------------------
This is a two-phase process because we need to ensure that NEWS in main
contains a full history of releases, and we achieve this by always merging
changes from main into a release branch.
1. Check out main
2. Ensure that the latest release notes for this release are reflected in NEWS
3. Commit that to main
Then work on the release branch:
1. Check out the relevant `major.minor` branch
2. Remove any "rcX" and "+dev" from the Version field in META
3. Cherry-pick your NEWS update commit from main into this release branch.
Resolve conflicts and get rid of news that reflect future releases.
4. Build a release package as described above
5. Tag and commit the updated NEWS and META so one can easily recompile this
release from git
6. Update the Version field to the next rc version and re-add "+dev"
7. Commit
8. Create the major.minor.micro release on GitHub from the associated tag
For example to release 3.2.0::
$ git checkout main
$ vim NEWS # add release notes from ``git log --oneline 3.2.0rc1..``
$ git commit
Let's say the above generated commit abc345e on main. Then::
$ git checkout 3.2
$ vim META # 3.2.0rc2+dev -> 3.2.0
$ git cherry-pick abc345e
$ vim NEWS # resolve conflicts, delete stuff for e.g., 3.4
$ # build
$ git add NEWS META
$ git commit -m "Release v3.2.0"
$ git tag 3.2.0
$ vim META # 3.2.0 -> 3.2.1rc1+dev
$ vim NEWS # add a placeholder for 3.2.1rc1+dev so automake is happy
$ git add NEWS META
$ git commit -m "Uptick version after release"
Then push your main and your release branch and also push tags::
$ git checkout main && git push && git push --tags
$ git checkout 3.2 && git push && git push --tags

View File

@ -146,7 +146,7 @@ HOW DO I USE HINTS?
'setenv IOR_HINT__MPI__<hint> <value>'
HOW DO I EXPLICITY SET THE FILE DATA SIGNATURE?
HOW DO I EXPLICITLY SET THE FILE DATA SIGNATURE?
The data signature for a transfer contains the MPI task number, transfer-
buffer offset, and also timestamp for the start of iteration. As IOR works

View File

@ -6,19 +6,19 @@ Install
Building
--------
0. If "configure" is missing from the top level directory, you
0. If ``configure`` is missing from the top level directory, you
probably retrieved this code directly from the repository.
Run "./bootstrap".
Run ``./bootstrap``.
If your versions of the autotools are not new enough to run
this script, download an official tarball in which the
configure script is already provided.
1. Run "./configure"
1. Run ``./configure``
See "./configure --help" for configuration options.
See ``./configure --help`` for configuration options.
2. Run "make"
2. Run ``make``
3. Optionally, run "make install". The installation prefix
can be changed as an option to the "configure" script.
3. Optionally, run ``make install``. The installation prefix
can be changed as an option to the ``configure`` script.
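
Putting the steps together, a typical from-scratch build might look like
this (the installation prefix is an arbitrary example):

  ./bootstrap
  ./configure --prefix=$HOME/ior
  make
  make install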

View File

@ -302,7 +302,7 @@ GPFS-SPECIFIC
* ``gpfsReleaseToken`` - release all locks immediately after opening or
creating file. Might help mitigate lock-revocation traffic when many
proceses write/read to same file. (default: 0)
processes write/read to same file. (default: 0)
Verbosity levels
----------------
@ -338,7 +338,7 @@ bzip.
3) bzip2: For bziped files a transfer size of 1k is insufficient (~50% compressed).
To avoid compression a transfer size of greater than the bzip block size is required
(default = 900KB). I suggest a transfer size of greather than 1MB to avoid bzip2 compression.
(default = 900KB). I suggest a transfer size of greater than 1MB to avoid bzip2 compression.
Be aware of the block size your compression algorithm will look at, and adjust
the transfer size accordingly.

View File

@ -4,30 +4,31 @@ First Steps with IOR
====================
This is a short tutorial for the basic usage of IOR and some tips on how to use
IOR to handel caching effects as these are very likely to affect your
IOR to handle caching effects as these are very likely to affect your
measurements.
Running IOR
-----------
There are two ways of running IOR:
1) Command line with arguments -- executable followed by command line
options.
1) Command line with arguments -- executable followed by command line options.
::
$ ./IOR -w -r -o filename
.. code-block:: shell
This performs a write and a read to the file 'filename'.
$ ./IOR -w -r -o filename
This performs a write and a read to the file 'filename'.
2) Command line with scripts -- any arguments on the command line will
establish the default for the test run, but a script may be used in
conjunction with this for varying specific tests during an execution of
the code. Only arguments before the script will be used!
establish the default for the test run, but a script may be used in
conjunction with this for varying specific tests during an execution of
the code. Only arguments before the script will be used!
::
$ ./IOR -W -f script
.. code-block:: shell
This defaults all tests in 'script' to use write data checking.
$ ./IOR -W -f script
This defaults all tests in 'script' to use write data checking.
In this tutorial the first one is used as it is much easier to toy around with
@ -40,10 +41,10 @@ Getting Started with IOR
IOR writes data sequentially with the following parameters:
* blockSize (-b)
* transferSize (-t)
* segmentCount (-s)
* numTasks (-n)
* ``blockSize`` (``-b``)
* ``transferSize`` (``-t``)
* ``segmentCount`` (``-s``)
* ``numTasks`` (``-n``)
which are best illustrated with a diagram:
@ -52,30 +53,34 @@ which are best illustrated with a diagram:
These four parameters are all you need to get started with IOR. However,
naively running IOR usually gives disappointing results. For example, if we run
a four-node IOR test that writes a total of 16 GiB::
a four-node IOR test that writes a total of 16 GiB:
$ mpirun -n 64 ./ior -t 1m -b 16m -s 16
...
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
write 427.36 16384 1024.00 0.107961 38.34 32.48 38.34 2
read 239.08 16384 1024.00 0.005789 68.53 65.53 68.53 2
remove - - - - - - 0.534400 2
.. code-block:: shell
$ mpirun -n 64 ./ior -t 1m -b 16m -s 16
...
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
write 427.36 16384 1024.00 0.107961 38.34 32.48 38.34 2
read 239.08 16384 1024.00 0.005789 68.53 65.53 68.53 2
remove - - - - - - 0.534400 2
we can only get a couple hundred megabytes per second out of a Lustre file
system that should be capable of a lot more.
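(For reference, the 16 GiB aggregate follows directly from the parameters:
64 tasks × 16 MiB blockSize × 16 segments = 16384 MiB = 16 GiB.)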
Switching from writing to a single-shared file to one file per process using the
-F (filePerProcess=1) option changes the performance dramatically::
``-F`` (``filePerProcess=1``) option changes the performance dramatically:
$ mpirun -n 64 ./ior -t 1m -b 16m -s 16 -F
...
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
write 33645 16384 1024.00 0.007693 0.486249 0.195494 0.486972 1
read 149473 16384 1024.00 0.004936 0.108627 0.016479 0.109612 1
remove - - - - - - 6.08 1
.. code-block:: shell
$ mpirun -n 64 ./ior -t 1m -b 16m -s 16 -F
...
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
write 33645 16384 1024.00 0.007693 0.486249 0.195494 0.486972 1
read 149473 16384 1024.00 0.004936 0.108627 0.016479 0.109612 1
remove - - - - - - 6.08 1
This is in large part because letting each MPI process work on its own file cuts
@ -123,7 +128,7 @@ There are a couple of ways to measure the read performance of the underlying
Lustre file system. The most crude way is to simply write more data than will
fit into the total page cache so that by the time the write phase has completed,
the beginning of the file has already been evicted from cache. For example,
increasing the number of segments (-s) to write more data reveals the point at
increasing the number of segments (``-s``) to write more data reveals the point at
which the nodes' page cache on my test system runs over very clearly:
.. image:: tutorial-ior-overflowing-cache.png
@ -142,17 +147,19 @@ written by node N-1.
Since page cache is not shared between compute nodes, shifting tasks this way
ensures that each MPI process is reading data it did not write.
IOR provides the -C option (reorderTasks) to do this, and it forces each MPI
IOR provides the ``-C`` option (``reorderTasks``) to do this, and it forces each MPI
process to read the data written by its neighboring node. Running IOR with
this option gives much more credible read performance::
this option gives much more credible read performance:
$ mpirun -n 64 ./ior -t 1m -b 16m -s 16 -F -C
...
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
write 41326 16384 1024.00 0.005756 0.395859 0.095360 0.396453 0
read 3310.00 16384 1024.00 0.011786 4.95 4.20 4.95 1
remove - - - - - - 0.237291 1
.. code-block:: shell
$ mpirun -n 64 ./ior -t 1m -b 16m -s 16 -F -C
...
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
write 41326 16384 1024.00 0.005756 0.395859 0.095360 0.396453 0
read 3310.00 16384 1024.00 0.011786 4.95 4.20 4.95 1
remove - - - - - - 0.237291 1
But now it should seem obvious that the write performance is also ridiculously
@ -166,16 +173,18 @@ pages we just wrote to flush out to Lustre. Including the time it takes for
fsync() to finish gives us a measure of how long it takes for our data to write
to the page cache and for the page cache to write back to Lustre.
IOR provides another convenient option, -e (fsync), to do just this. And, once
again, using this option changes our performance measurement quite a bit::
IOR provides another convenient option, ``-e`` (fsync), to do just this. And, once
again, using this option changes our performance measurement quite a bit:
$ mpirun -n 64 ./ior -t 1m -b 16m -s 16 -F -C -e
...
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
write 2937.89 16384 1024.00 0.011841 5.56 4.93 5.58 0
read 2712.55 16384 1024.00 0.005214 6.04 5.08 6.04 3
remove - - - - - - 0.037706 0
.. code-block:: shell
$ mpirun -n 64 ./ior -t 1m -b 16m -s 16 -F -C -e
...
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
write 2937.89 16384 1024.00 0.011841 5.56 4.93 5.58 0
read 2712.55 16384 1024.00 0.005214 6.04 5.08 6.04 3
remove - - - - - - 0.037706 0
and we finally have a believable bandwidth measurement for our file system.
@ -192,16 +201,17 @@ the best choice. There are several ways in which we can get clever and defeat
page cache in a more general sense to get meaningful performance numbers.
When measuring write performance, bypassing page cache is actually quite simple;
opening a file with the O_DIRECT flag going directly to disk. In addition,
the fsync() call can be inserted into applications, as is done with IOR's -e
opening a file with the ``O_DIRECT`` flag going directly to disk. In addition,
the ``fsync()`` call can be inserted into applications, as is done with IOR's ``-e``
option.
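
As noted in the NEWS entry above, recent IOR versions expose ``O_DIRECT``
through the POSIX backend's ``--posix.odirect`` option (which replaces the old
``-B`` flag). A hedged example reusing the earlier test geometry:

.. code-block:: shell

    $ mpirun -n 64 ./ior -a POSIX --posix.odirect -e -w -r -t 1m -b 16m -s 16 -F -C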
Measuring read performance is a lot trickier. If you are fortunate enough to
have root access on a test system, you can force the Linux kernel to empty out
its page cache by doing
::
# echo 1 > /proc/sys/vm/drop_caches
.. code-block:: shell
# echo 1 > /proc/sys/vm/drop_caches
and in fact, this is often good practice before running any benchmark
(e.g., Linpack) because it ensures that you aren't losing performance to the
@ -210,23 +220,25 @@ memory for its own use.
Unfortunately, many of us do not have root on our systems, so we have to get
even more clever. As it turns out, there is a way to pass a hint to the kernel
that a file is no longer needed in page cache::
that a file is no longer needed in page cache:
#define _XOPEN_SOURCE 600
#include <unistd.h>
#include <fcntl.h>
int main(int argc, char *argv[]) {
int fd;
fd = open(argv[1], O_RDONLY);
fdatasync(fd);
posix_fadvise(fd, 0,0,POSIX_FADV_DONTNEED);
close(fd);
return 0;
}
.. code-block:: c
The effect of passing POSIX_FADV_DONTNEED using posix_fadvise() is usually that
#define _XOPEN_SOURCE 600
#include <unistd.h>
#include <fcntl.h>
int main(int argc, char *argv[]) {
int fd;
fd = open(argv[1], O_RDONLY);
fdatasync(fd);
posix_fadvise(fd, 0,0,POSIX_FADV_DONTNEED);
close(fd);
return 0;
}
The effect of passing POSIX_FADV_DONTNEED using ``posix_fadvise()`` is usually that
all pages belonging to that file are evicted from page cache in Linux. However,
this is just a hint--not a guarantee--and the kernel evicts these pages
this is just a hint --not a guarantee-- and the kernel evicts these pages
asynchronously, so it may take a second or two for pages to actually leave page
cache. Fortunately, Linux also provides a way to probe pages in a file to see
if they are resident in memory.

View File

@ -1,20 +1,25 @@
SUBDIRS = . test
bin_PROGRAMS = ior mdtest
bin_PROGRAMS = ior mdtest md-workbench
if USE_CAPS
bin_PROGRAMS += IOR MDTEST
bin_PROGRAMS += IOR MDTEST MD-WORKBENCH
endif
noinst_HEADERS = ior.h utilities.h parse_options.h aiori.h iordef.h ior-internal.h option.h mdtest.h
noinst_HEADERS = ior.h utilities.h parse_options.h aiori.h iordef.h ior-internal.h option.h mdtest.h aiori-debug.h aiori-POSIX.h md-workbench.h
lib_LIBRARIES = libaiori.a
libaiori_a_SOURCES = ior.c mdtest.c utilities.c parse_options.c ior-output.c option.c
libaiori_a_SOURCES = ior.c mdtest.c utilities.c parse_options.c ior-output.c option.c md-workbench.c
extraSOURCES = aiori.c aiori-DUMMY.c
extraLDADD =
extraLDFLAGS =
extraCPPFLAGS =
md_workbench_SOURCES = md-workbench-main.c
md_workbench_LDFLAGS =
md_workbench_LDADD = libaiori.a
md_workbench_CPPFLAGS =
ior_SOURCES = ior-main.c
ior_LDFLAGS =
ior_LDADD = libaiori.a
@ -36,6 +41,14 @@ extraLDFLAGS += -L/opt/hadoop-2.2.0/lib/native
extraLDADD += -lhdfs
endif
if HAVE_CUDA
extraLDADD += -lcudart
endif
if HAVE_GPU_DIRECT
extraLDADD += -lcufile
endif
if USE_HDF5_AIORI
extraSOURCES += aiori-HDF5.c
extraLDADD += -lhdf5 -lz
@ -65,6 +78,11 @@ if USE_POSIX_AIORI
extraSOURCES += aiori-POSIX.c
endif
if USE_AIO_AIORI
extraSOURCES += aiori-aio.c
extraLDADD += -laio
endif
if USE_PMDK_AIORI
extraSOURCES += aiori-PMDK.c
extraLDADD += -lpmem
@ -82,7 +100,8 @@ endif
if USE_DAOS_AIORI
extraSOURCES += aiori-DAOS.c aiori-DFS.c
extraSOURCES += aiori-DFS.c
extraLDADD += -lgurt -ldaos_common -ldaos -ldfs -luuid
endif
if USE_GFARM_AIORI
@ -90,8 +109,8 @@ extraSOURCES += aiori-Gfarm.c
extraLDADD += -lgfarm
endif
if USE_S3_AIORI
extraSOURCES += aiori-S3.c
if USE_S3_4C_AIORI
extraSOURCES += aiori-S3-4c.c
if AWS4C_DIR
extraCPPFLAGS += $(AWS4C_CPPFLAGS)
extraLDFLAGS += $(AWS4C_LDFLAGS)
@ -100,6 +119,12 @@ extraLDADD += -lcurl
extraLDADD += -lxml2
extraLDADD += -laws4c
extraLDADD += -laws4c_extra
extraLDADD += -lcrypto
endif
if USE_S3_LIBS3_AIORI
extraSOURCES += aiori-S3-libs3.c
extraLDADD += -ls3
endif
if WITH_LUSTRE
@ -116,6 +141,16 @@ mdtest_LDFLAGS += $(extraLDFLAGS)
mdtest_LDADD += $(extraLDADD)
mdtest_CPPFLAGS += $(extraCPPFLAGS)
md_workbench_SOURCES += $(extraSOURCES)
md_workbench_LDFLAGS += $(extraLDFLAGS)
md_workbench_LDADD += $(extraLDADD)
md_workbench_CPPFLAGS += $(extraCPPFLAGS)
MD_WORKBENCH_SOURCES = $(md_workbench_SOURCES)
MD_WORKBENCH_LDFLAGS = $(md_workbench_LDFLAGS)
MD_WORKBENCH_LDADD = $(md_workbench_LDADD)
MD_WORKBENCH_CPPFLAGS = $(md_workbench_CPPFLAGS)
IOR_SOURCES = $(ior_SOURCES)
IOR_LDFLAGS = $(ior_LDFLAGS)
IOR_LDADD = $(ior_LDADD)
@ -128,3 +163,10 @@ MDTEST_CPPFLAGS = $(mdtest_CPPFLAGS)
libaiori_a_SOURCES += $(extraSOURCES)
libaiori_a_CPPFLAGS = $(extraCPPFLAGS)
# Generate a config file with the build flags to allow the reuse of library
.PHONY: build.conf
all-local: build.conf
build.conf:
@echo LDFLAGS=$(LDFLAGS) $(extraLDFLAGS) $(extraLDADD) $(LIBS) > build.conf
@echo CFLAGS=$(CFLAGS) $(extraCPPFLAGS) >> build.conf
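# One way an external program might reuse these flags when linking against
# libaiori.a (a sketch only; my_tool and its source file are made up):
#   CFLAGS=$(grep ^CFLAGS= src/build.conf | cut -d= -f2-)
#   LDFLAGS=$(grep ^LDFLAGS= src/build.conf | cut -d= -f2-)
#   mpicc $CFLAGS -o my_tool my_tool.c src/libaiori.a $LDFLAGS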

View File

@ -1,570 +0,0 @@
/* -*- mode: c; c-basic-offset: 8; indent-tabs-mode: nil; -*-
* vim:expandtab:shiftwidth=8:tabstop=8:
*/
/*
* Copyright (C) 2018-2020 Intel Corporation
* See the file COPYRIGHT for a complete copyright notice and license.
*/
/*
* This file implements the abstract I/O interface for DAOS Array API.
*/
#define _BSD_SOURCE
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#include <stdint.h>
#include <assert.h>
#include <unistd.h>
#include <strings.h>
#include <sys/types.h>
#include <libgen.h>
#include <stdbool.h>
#include <mpi.h>
#include <gurt/common.h>
#include <daos.h>
#include "aiori.h"
#include "utilities.h"
#include "iordef.h"
/************************** O P T I O N S *****************************/
typedef struct {
char *pool;
char *svcl;
char *group;
char *cont;
int chunk_size;
int destroy;
char *oclass;
} DAOS_options_t;
static option_help * DAOS_options(aiori_mod_opt_t ** init_backend_options,
aiori_mod_opt_t * init_values){
DAOS_options_t * o = malloc(sizeof(DAOS_options_t));
if (init_values != NULL) {
memcpy(o, init_values, sizeof(DAOS_options_t));
} else {
memset(o, 0, sizeof(DAOS_options_t));
/* initialize the options properly */
o->chunk_size = 1048576;
}
*init_backend_options = (aiori_mod_opt_t *) o;
option_help h [] = {
{0, "daos.pool", "pool uuid", OPTION_OPTIONAL_ARGUMENT, 's', &o->pool},
{0, "daos.svcl", "pool SVCL", OPTION_OPTIONAL_ARGUMENT, 's', &o->svcl},
{0, "daos.group", "server group", OPTION_OPTIONAL_ARGUMENT, 's', &o->group},
{0, "daos.cont", "container uuid", OPTION_OPTIONAL_ARGUMENT, 's', &o->cont},
{0, "daos.chunk_size", "chunk size", OPTION_OPTIONAL_ARGUMENT, 'd', &o->chunk_size},
{0, "daos.destroy", "Destroy Container", OPTION_FLAG, 'd', &o->destroy},
{0, "daos.oclass", "object class", OPTION_OPTIONAL_ARGUMENT, 's', &o->oclass},
LAST_OPTION
};
option_help * help = malloc(sizeof(h));
memcpy(help, h, sizeof(h));
return help;
}
/**************************** P R O T O T Y P E S *****************************/
static void DAOS_Init(aiori_mod_opt_t *);
static void DAOS_Fini(aiori_mod_opt_t *);
static aiori_fd_t *DAOS_Create(char *, int, aiori_mod_opt_t *);
static aiori_fd_t *DAOS_Open(char *, int, aiori_mod_opt_t *);
static int DAOS_Access(const char *, int, aiori_mod_opt_t *);
static IOR_offset_t DAOS_Xfer(int, aiori_fd_t *, IOR_size_t *, IOR_offset_t,
IOR_offset_t, aiori_mod_opt_t *);
static void DAOS_Close(aiori_fd_t *, aiori_mod_opt_t *);
static void DAOS_Delete(char *, aiori_mod_opt_t *);
static char* DAOS_GetVersion();
static void DAOS_Fsync(aiori_fd_t *, aiori_mod_opt_t *);
static IOR_offset_t DAOS_GetFileSize(aiori_mod_opt_t *, MPI_Comm, char *);
static option_help * DAOS_options();
static void DAOS_init_xfer_options(aiori_xfer_hint_t *);
static int DAOS_check_params(aiori_mod_opt_t *);
/************************** D E C L A R A T I O N S ***************************/
ior_aiori_t daos_aiori = {
.name = "DAOS",
.initialize = DAOS_Init,
.finalize = DAOS_Fini,
.create = DAOS_Create,
.open = DAOS_Open,
.access = DAOS_Access,
.xfer = DAOS_Xfer,
.close = DAOS_Close,
.delete = DAOS_Delete,
.get_version = DAOS_GetVersion,
.xfer_hints = DAOS_init_xfer_options,
.fsync = DAOS_Fsync,
.get_file_size = DAOS_GetFileSize,
.statfs = aiori_posix_statfs,
.mkdir = aiori_posix_mkdir,
.rmdir = aiori_posix_rmdir,
.stat = aiori_posix_stat,
.get_options = DAOS_options,
.xfer_hints = DAOS_init_xfer_options,
.check_params = DAOS_check_params,
.enable_mdtest = false,
};
#define IOR_DAOS_MUR_SEED 0xDEAD10CC
enum handleType {
POOL_HANDLE,
CONT_HANDLE,
ARRAY_HANDLE
};
static daos_handle_t poh;
static daos_handle_t coh;
static daos_handle_t aoh;
static daos_oclass_id_t objectClass = OC_SX;
static bool daos_initialized = false;
/***************************** F U N C T I O N S ******************************/
/* For DAOS methods. */
#define DCHECK(rc, format, ...) \
do { \
int _rc = (rc); \
\
if (_rc < 0) { \
fprintf(stderr, "ior ERROR (%s:%d): %d: %d: " \
format"\n", __FILE__, __LINE__, rank, _rc, \
##__VA_ARGS__); \
fflush(stdout); \
MPI_Abort(MPI_COMM_WORLD, -1); \
} \
} while (0)
#define INFO(level, format, ...) \
do { \
if (verbose >= level) \
printf("[%d] "format"\n", rank, ##__VA_ARGS__); \
} while (0)
/* For generic errors like invalid command line options. */
#define GERR(format, ...) \
do { \
fprintf(stderr, format"\n", ##__VA_ARGS__); \
MPI_CHECK(MPI_Abort(MPI_COMM_WORLD, -1), "MPI_Abort() error"); \
} while (0)
static aiori_xfer_hint_t * hints = NULL;
void DAOS_init_xfer_options(aiori_xfer_hint_t * params)
{
hints = params;
}
static int DAOS_check_params(aiori_mod_opt_t * options){
DAOS_options_t *o = (DAOS_options_t *) options;
if (o->pool == NULL || o->svcl == NULL || o->cont == NULL)
ERR("Invalid pool or container options\n");
return 0;
}
/* Distribute process 0's pool or container handle to others. */
static void
HandleDistribute(daos_handle_t *handle, enum handleType type)
{
d_iov_t global;
int rc;
global.iov_buf = NULL;
global.iov_buf_len = 0;
global.iov_len = 0;
if (rank == 0) {
/* Get the global handle size. */
if (type == POOL_HANDLE)
rc = daos_pool_local2global(*handle, &global);
else if (type == CONT_HANDLE)
rc = daos_cont_local2global(*handle, &global);
else
rc = daos_array_local2global(*handle, &global);
DCHECK(rc, "Failed to get global handle size");
}
MPI_CHECK(MPI_Bcast(&global.iov_buf_len, 1, MPI_UINT64_T, 0,
MPI_COMM_WORLD),
"Failed to bcast global handle buffer size");
global.iov_len = global.iov_buf_len;
global.iov_buf = malloc(global.iov_buf_len);
if (global.iov_buf == NULL)
ERR("Failed to allocate global handle buffer");
if (rank == 0) {
if (type == POOL_HANDLE)
rc = daos_pool_local2global(*handle, &global);
else if (type == CONT_HANDLE)
rc = daos_cont_local2global(*handle, &global);
else
rc = daos_array_local2global(*handle, &global);
DCHECK(rc, "Failed to create global handle");
}
MPI_CHECK(MPI_Bcast(global.iov_buf, global.iov_buf_len, MPI_BYTE, 0,
MPI_COMM_WORLD),
"Failed to bcast global pool handle");
if (rank != 0) {
if (type == POOL_HANDLE)
rc = daos_pool_global2local(global, handle);
else if (type == CONT_HANDLE)
rc = daos_cont_global2local(poh, global, handle);
else
rc = daos_array_global2local(coh, global, 0, handle);
DCHECK(rc, "Failed to get local handle");
}
free(global.iov_buf);
}
static void
DAOS_Init(aiori_mod_opt_t * options)
{
DAOS_options_t *o = (DAOS_options_t *)options;
int rc;
if (daos_initialized)
return;
if (o->pool == NULL || o->svcl == NULL || o->cont == NULL)
return;
if (o->oclass) {
objectClass = daos_oclass_name2id(o->oclass);
if (objectClass == OC_UNKNOWN)
GERR("Invalid DAOS Object class %s\n", o->oclass);
}
rc = daos_init();
if (rc)
DCHECK(rc, "Failed to initialize daos");
if (rank == 0) {
uuid_t uuid;
d_rank_list_t *svcl = NULL;
static daos_pool_info_t po_info;
static daos_cont_info_t co_info;
INFO(VERBOSE_1, "Connecting to pool %s", o->pool);
rc = uuid_parse(o->pool, uuid);
DCHECK(rc, "Failed to parse 'pool': %s", o->pool);
svcl = daos_rank_list_parse(o->svcl, ":");
if (svcl == NULL)
ERR("Failed to allocate svcl");
rc = daos_pool_connect(uuid, o->group, svcl, DAOS_PC_RW,
&poh, &po_info, NULL);
d_rank_list_free(svcl);
DCHECK(rc, "Failed to connect to pool %s", o->pool);
INFO(VERBOSE_1, "Create/Open Container %s", o->cont);
uuid_clear(uuid);
rc = uuid_parse(o->cont, uuid);
DCHECK(rc, "Failed to parse 'cont': %s", o->cont);
rc = daos_cont_open(poh, uuid, DAOS_COO_RW, &coh, &co_info,
NULL);
/* If NOEXIST we create it */
if (rc == -DER_NONEXIST) {
INFO(VERBOSE_2, "Creating DAOS Container...\n");
rc = daos_cont_create(poh, uuid, NULL, NULL);
if (rc == 0)
rc = daos_cont_open(poh, uuid, DAOS_COO_RW,
&coh, &co_info, NULL);
}
DCHECK(rc, "Failed to create container");
}
HandleDistribute(&poh, POOL_HANDLE);
HandleDistribute(&coh, CONT_HANDLE);
aoh.cookie = 0;
daos_initialized = true;
}
static void
DAOS_Fini(aiori_mod_opt_t *options)
{
DAOS_options_t *o = (DAOS_options_t *)options;
int rc;
if (!daos_initialized)
return;
MPI_Barrier(MPI_COMM_WORLD);
rc = daos_cont_close(coh, NULL);
if (rc) {
DCHECK(rc, "Failed to close container %s (%d)", o->cont, rc);
MPI_Abort(MPI_COMM_WORLD, -1);
}
MPI_Barrier(MPI_COMM_WORLD);
if (o->destroy) {
if (rank == 0) {
uuid_t uuid;
double t1, t2;
INFO(VERBOSE_1, "Destroying DAOS Container %s", o->cont);
uuid_parse(o->cont, uuid);
t1 = MPI_Wtime();
rc = daos_cont_destroy(poh, uuid, 1, NULL);
t2 = MPI_Wtime();
if (rc == 0)
INFO(VERBOSE_1, "Container Destroy time = %f secs", t2-t1);
}
MPI_Bcast(&rc, 1, MPI_INT, 0, MPI_COMM_WORLD);
if (rc) {
if (rank == 0)
DCHECK(rc, "Failed to destroy container %s (%d)", o->cont, rc);
MPI_Abort(MPI_COMM_WORLD, -1);
}
}
if (rank == 0)
INFO(VERBOSE_1, "Disconnecting from DAOS POOL..");
rc = daos_pool_disconnect(poh, NULL);
DCHECK(rc, "Failed to disconnect from pool %s", o->pool);
MPI_CHECK(MPI_Barrier(MPI_COMM_WORLD), "barrier error");
if (rank == 0)
INFO(VERBOSE_1, "Finalizing DAOS..");
rc = daos_fini();
DCHECK(rc, "Failed to finalize daos");
daos_initialized = false;
}
static void
gen_oid(const char *name, daos_obj_id_t *oid)
{
oid->lo = d_hash_murmur64(name, strlen(name), IOR_DAOS_MUR_SEED);
oid->hi = 0;
daos_array_generate_id(oid, objectClass, true, 0);
}
static aiori_fd_t *
DAOS_Create(char *testFileName, int flags, aiori_mod_opt_t *param)
{
DAOS_options_t *o = (DAOS_options_t*) param;
daos_obj_id_t oid;
int rc;
/** Convert file name into object ID */
gen_oid(testFileName, &oid);
/** Create the array */
if (hints->filePerProc || rank == 0) {
rc = daos_array_create(coh, oid, DAOS_TX_NONE, 1, o->chunk_size,
&aoh, NULL);
DCHECK(rc, "Failed to create array object\n");
}
/** Distribute the array handle if not FPP */
if (!hints->filePerProc)
HandleDistribute(&aoh, ARRAY_HANDLE);
return (aiori_fd_t*)(&aoh);
}
static int
DAOS_Access(const char *testFileName, int mode, aiori_mod_opt_t * param)
{
daos_obj_id_t oid;
daos_size_t cell_size, chunk_size;
int rc;
/** Convert file name into object ID */
gen_oid(testFileName, &oid);
rc = daos_array_open(coh, oid, DAOS_TX_NONE, DAOS_OO_RO,
&cell_size, &chunk_size, &aoh, NULL);
if (rc)
return rc;
if (cell_size != 1)
GERR("Invalid DAOS Array object.\n");
rc = daos_array_close(aoh, NULL);
aoh.cookie = 0;
return rc;
}
static aiori_fd_t *
DAOS_Open(char *testFileName, int flags, aiori_mod_opt_t *param)
{
daos_obj_id_t oid;
/** Convert file name into object ID */
gen_oid(testFileName, &oid);
/** Open the array */
if (hints->filePerProc || rank == 0) {
daos_size_t cell_size, chunk_size;
int rc;
rc = daos_array_open(coh, oid, DAOS_TX_NONE, DAOS_OO_RW,
&cell_size, &chunk_size, &aoh, NULL);
DCHECK(rc, "Failed to create array object\n");
if (cell_size != 1)
GERR("Invalid DAOS Array object.\n");
}
/** Distribute the array handle if not FPP */
if (!hints->filePerProc)
HandleDistribute(&aoh, ARRAY_HANDLE);
return (aiori_fd_t*)(&aoh);
}
static IOR_offset_t
DAOS_Xfer(int access, aiori_fd_t *file, IOR_size_t *buffer, IOR_offset_t length,
IOR_offset_t off, aiori_mod_opt_t *param)
{
daos_array_iod_t iod;
daos_range_t rg;
d_sg_list_t sgl;
d_iov_t iov;
int rc;
/** set array location */
iod.arr_nr = 1;
rg.rg_len = length;
rg.rg_idx = off;
iod.arr_rgs = &rg;
/** set memory location */
sgl.sg_nr = 1;
d_iov_set(&iov, buffer, length);
sgl.sg_iovs = &iov;
if (access == WRITE) {
rc = daos_array_write(aoh, DAOS_TX_NONE, &iod, &sgl, NULL);
DCHECK(rc, "daos_array_write() failed (%d).", rc);
} else {
rc = daos_array_read(aoh, DAOS_TX_NONE, &iod, &sgl, NULL);
DCHECK(rc, "daos_array_read() failed (%d).", rc);
}
return length;
}
static void
DAOS_Close(aiori_fd_t *file, aiori_mod_opt_t *param)
{
int rc;
if (!daos_initialized)
GERR("DAOS is not initialized!");
rc = daos_array_close(aoh, NULL);
DCHECK(rc, "daos_array_close() failed (%d).", rc);
aoh.cookie = 0;
}
static void
DAOS_Delete(char *testFileName, aiori_mod_opt_t *param)
{
daos_obj_id_t oid;
daos_size_t cell_size, chunk_size;
int rc;
if (!daos_initialized)
GERR("DAOS is not initialized!");
/** Convert file name into object ID */
gen_oid(testFileName, &oid);
/** open the array to verify it exists */
rc = daos_array_open(coh, oid, DAOS_TX_NONE, DAOS_OO_RW,
&cell_size, &chunk_size, &aoh, NULL);
DCHECK(rc, "daos_array_open() failed (%d).", rc);
if (cell_size != 1)
GERR("Invalid DAOS Array object.\n");
rc = daos_array_destroy(aoh, DAOS_TX_NONE, NULL);
DCHECK(rc, "daos_array_destroy() failed (%d).", rc);
rc = daos_array_close(aoh, NULL);
DCHECK(rc, "daos_array_close() failed (%d).", rc);
aoh.cookie = 0;
}
static char *
DAOS_GetVersion()
{
static char ver[1024] = {};
sprintf(ver, "%s", "DAOS");
return ver;
}
static void
DAOS_Fsync(aiori_fd_t *file, aiori_mod_opt_t *param)
{
return;
}
static IOR_offset_t
DAOS_GetFileSize(aiori_mod_opt_t *param, MPI_Comm comm, char *testFileName)
{
daos_obj_id_t oid;
daos_size_t size;
int rc;
if (!daos_initialized)
GERR("DAOS is not initialized!");
/** Convert file name into object ID */
gen_oid(testFileName, &oid);
/** open the array to verify it exists */
if (hints->filePerProc || rank == 0) {
daos_size_t cell_size, chunk_size;
rc = daos_array_open(coh, oid, DAOS_TX_NONE, DAOS_OO_RO,
&cell_size, &chunk_size, &aoh, NULL);
DCHECK(rc, "daos_array_open() failed (%d).", rc);
if (cell_size != 1)
GERR("Invalid DAOS Array object.\n");
rc = daos_array_get_size(aoh, DAOS_TX_NONE, &size, NULL);
DCHECK(rc, "daos_array_get_size() failed (%d).", rc);
rc = daos_array_close(aoh, NULL);
DCHECK(rc, "daos_array_close() failed (%d).", rc);
aoh.cookie = 0;
}
if (!hints->filePerProc)
MPI_Bcast(&size, 1, MPI_LONG, 0, MPI_COMM_WORLD);
return size;
}

View File

@ -39,8 +39,8 @@
dfs_t *dfs;
static daos_handle_t poh, coh;
static daos_oclass_id_t objectClass = OC_SX;
static daos_oclass_id_t dir_oclass = OC_SX;
static daos_oclass_id_t objectClass;
static daos_oclass_id_t dir_oclass;
static struct d_hash_table *dir_hash;
static bool dfs_init;
@ -59,7 +59,9 @@ enum handleType {
/************************** O P T I O N S *****************************/
typedef struct {
char *pool;
#if !defined(DAOS_API_VERSION_MAJOR) || DAOS_API_VERSION_MAJOR < 1
char *svcl;
#endif
char *group;
char *cont;
int chunk_size;
@ -85,7 +87,9 @@ static option_help * DFS_options(aiori_mod_opt_t ** init_backend_options,
option_help h [] = {
{0, "dfs.pool", "pool uuid", OPTION_OPTIONAL_ARGUMENT, 's', &o->pool},
#if !defined(DAOS_API_VERSION_MAJOR) || DAOS_API_VERSION_MAJOR < 1
{0, "dfs.svcl", "pool SVCL", OPTION_OPTIONAL_ARGUMENT, 's', &o->svcl},
#endif
{0, "dfs.group", "server group", OPTION_OPTIONAL_ARGUMENT, 's', &o->group},
{0, "dfs.cont", "DFS container uuid", OPTION_OPTIONAL_ARGUMENT, 's', &o->cont},
{0, "dfs.chunk_size", "chunk size", OPTION_OPTIONAL_ARGUMENT, 'd', &o->chunk_size},
@ -114,7 +118,7 @@ static void DFS_Delete(char *, aiori_mod_opt_t *);
static char* DFS_GetVersion();
static void DFS_Fsync(aiori_fd_t *, aiori_mod_opt_t *);
static void DFS_Sync(aiori_mod_opt_t *);
static IOR_offset_t DFS_GetFileSize(aiori_mod_opt_t *, MPI_Comm, char *);
static IOR_offset_t DFS_GetFileSize(aiori_mod_opt_t *, char *);
static int DFS_Statfs (const char *, ior_aiori_statfs_t *, aiori_mod_opt_t *);
static int DFS_Stat (const char *, struct stat *, aiori_mod_opt_t *);
static int DFS_Mkdir (const char *, mode_t, aiori_mod_opt_t *);
@ -188,9 +192,13 @@ void DFS_init_xfer_options(aiori_xfer_hint_t * params)
static int DFS_check_params(aiori_mod_opt_t * options){
DFS_options_t *o = (DFS_options_t *) options;
if (o->pool == NULL || o->svcl == NULL || o->cont == NULL)
if (o->pool == NULL || o->cont == NULL)
ERR("Invalid pool or container options\n");
#if !defined(DAOS_API_VERSION_MAJOR) || DAOS_API_VERSION_MAJOR < 1
if (o->svcl == NULL)
ERR("Invalid SVCL\n");
#endif
return 0;
}
@ -247,8 +255,7 @@ HandleDistribute(enum handleType type)
DCHECK(rc, "Failed to get global handle size");
}
MPI_CHECK(MPI_Bcast(&global.iov_buf_len, 1, MPI_UINT64_T, 0,
MPI_COMM_WORLD),
MPI_CHECK(MPI_Bcast(&global.iov_buf_len, 1, MPI_UINT64_T, 0, testComm),
"Failed to bcast global handle buffer size");
global.iov_len = global.iov_buf_len;
@ -266,8 +273,7 @@ HandleDistribute(enum handleType type)
DCHECK(rc, "Failed to create global handle");
}
MPI_CHECK(MPI_Bcast(global.iov_buf, global.iov_buf_len, MPI_BYTE, 0,
MPI_COMM_WORLD),
MPI_CHECK(MPI_Bcast(global.iov_buf, global.iov_buf_len, MPI_BYTE, 0, testComm),
"Failed to bcast global pool handle");
if (rank != 0) {
@ -374,6 +380,45 @@ out:
return rc;
}
static void
share_file_handle(dfs_obj_t **file, MPI_Comm comm)
{
d_iov_t global;
int rc;
global.iov_buf = NULL;
global.iov_buf_len = 0;
global.iov_len = 0;
if (rank == 0) {
rc = dfs_obj_local2global(dfs, *file, &global);
DCHECK(rc, "Failed to get global handle size");
}
MPI_CHECK(MPI_Bcast(&global.iov_buf_len, 1, MPI_UINT64_T, 0, testComm),
"Failed to bcast global handle buffer size");
global.iov_len = global.iov_buf_len;
global.iov_buf = malloc(global.iov_buf_len);
if (global.iov_buf == NULL)
ERR("Failed to allocate global handle buffer");
if (rank == 0) {
rc = dfs_obj_local2global(dfs, *file, &global);
DCHECK(rc, "Failed to create global handle");
}
MPI_CHECK(MPI_Bcast(global.iov_buf, global.iov_buf_len, MPI_BYTE, 0, testComm),
"Failed to bcast global pool handle");
if (rank != 0) {
rc = dfs_obj_global2local(dfs, 0, global, file);
DCHECK(rc, "Failed to get local handle");
}
free(global.iov_buf);
}
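share_file_handle() follows the usual local2global/global2local scheme: rank 0 serializes the open handle, the serialized length is broadcast so every rank can allocate a buffer, the bytes are broadcast, and the non-root ranks deserialize. A generic sketch of that two-phase broadcast, using a plain byte blob in place of the dfs_obj_t handle (the serialize/deserialize steps are what dfs_obj_local2global/global2local provide):

#include <mpi.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Broadcast a variable-length buffer owned by rank 0: length first so all
 * ranks can allocate, then the payload. Every rank returns a malloc'd copy
 * that the caller frees. Handles are small, so the int cast is safe here. */
static void *bcast_blob(const void *blob, uint64_t len, uint64_t *out_len,
                        MPI_Comm comm)
{
    int r;
    void *buf;

    MPI_Comm_rank(comm, &r);
    MPI_Bcast(&len, 1, MPI_UINT64_T, 0, comm);      /* phase 1: size */
    buf = malloc(len);
    if (buf == NULL)
        MPI_Abort(comm, 1);
    if (r == 0)
        memcpy(buf, blob, len);
    MPI_Bcast(buf, (int) len, MPI_BYTE, 0, comm);   /* phase 2: payload */
    *out_len = len;
    return buf;
}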
static dfs_obj_t *
lookup_insert_dir(const char *name, mode_t *mode)
{
@ -418,9 +463,14 @@ DFS_Init(aiori_mod_opt_t * options)
return;
/** shouldn't be fatal since it can be called with POSIX backend selection */
if (o->pool == NULL || o->svcl == NULL || o->cont == NULL)
if (o->pool == NULL || o->cont == NULL)
return;
#if !defined(DAOS_API_VERSION_MAJOR) || DAOS_API_VERSION_MAJOR < 1
if (o->svcl == NULL)
return;
#endif
rc = daos_init();
DCHECK(rc, "Failed to initialize daos");
@ -441,7 +491,6 @@ DFS_Init(aiori_mod_opt_t * options)
if (rank == 0) {
uuid_t pool_uuid, co_uuid;
d_rank_list_t *svcl = NULL;
daos_pool_info_t pool_info;
daos_cont_info_t co_info;
@ -451,17 +500,25 @@ DFS_Init(aiori_mod_opt_t * options)
rc = uuid_parse(o->cont, co_uuid);
DCHECK(rc, "Failed to parse 'Cont uuid': %s", o->cont);
INFO(VERBOSE_1, "Pool uuid = %s", o->pool);
INFO(VERBOSE_1, "DFS Container namespace uuid = %s", o->cont);
#if !defined(DAOS_API_VERSION_MAJOR) || DAOS_API_VERSION_MAJOR < 1
d_rank_list_t *svcl = NULL;
svcl = daos_rank_list_parse(o->svcl, ":");
if (svcl == NULL)
ERR("Failed to allocate svcl");
INFO(VERBOSE_1, "Pool uuid = %s, SVCL = %s\n", o->pool, o->svcl);
INFO(VERBOSE_1, "DFS Container namespace uuid = %s\n", o->cont);
INFO(VERBOSE_1, "Pool svcl = %s", o->svcl);
/** Connect to DAOS pool */
rc = daos_pool_connect(pool_uuid, o->group, svcl, DAOS_PC_RW,
&poh, &pool_info, NULL);
d_rank_list_free(svcl);
#else
rc = daos_pool_connect(pool_uuid, o->group, DAOS_PC_RW,
&poh, &pool_info, NULL);
#endif
DCHECK(rc, "Failed to connect to pool");
rc = daos_cont_open(poh, co_uuid, DAOS_COO_RW, &coh, &co_info,
@ -498,23 +555,23 @@ DFS_Finalize(aiori_mod_opt_t *options)
DFS_options_t *o = (DFS_options_t *)options;
int rc;
MPI_Barrier(MPI_COMM_WORLD);
MPI_Barrier(testComm);
d_hash_table_destroy(dir_hash, true /* force */);
rc = dfs_umount(dfs);
DCHECK(rc, "Failed to umount DFS namespace");
MPI_Barrier(MPI_COMM_WORLD);
MPI_Barrier(testComm);
rc = daos_cont_close(coh, NULL);
DCHECK(rc, "Failed to close container %s (%d)", o->cont, rc);
MPI_Barrier(MPI_COMM_WORLD);
MPI_Barrier(testComm);
if (o->destroy) {
if (rank == 0) {
uuid_t uuid;
double t1, t2;
INFO(VERBOSE_1, "Destorying DFS Container: %s\n", o->cont);
INFO(VERBOSE_1, "Destroying DFS Container: %s\n", o->cont);
uuid_parse(o->cont, uuid);
t1 = MPI_Wtime();
rc = daos_cont_destroy(poh, uuid, 1, NULL);
@ -523,7 +580,7 @@ DFS_Finalize(aiori_mod_opt_t *options)
INFO(VERBOSE_1, "Container Destroy time = %f secs", t2-t1);
}
MPI_Bcast(&rc, 1, MPI_INT, 0, MPI_COMM_WORLD);
MPI_Bcast(&rc, 1, MPI_INT, 0, testComm);
if (rc) {
if (rank == 0)
DCHECK(rc, "Failed to destroy container %s (%d)", o->cont, rc);
@ -537,7 +594,7 @@ DFS_Finalize(aiori_mod_opt_t *options)
rc = daos_pool_disconnect(poh, NULL);
DCHECK(rc, "Failed to disconnect from pool");
MPI_CHECK(MPI_Barrier(MPI_COMM_WORLD), "barrier error");
MPI_CHECK(MPI_Barrier(testComm), "barrier error");
if (rank == 0)
INFO(VERBOSE_1, "Finalizing DAOS..\n");
@ -547,21 +604,23 @@ DFS_Finalize(aiori_mod_opt_t *options)
/** reset tunables */
o->pool = NULL;
#if !defined(DAOS_API_VERSION_MAJOR) || DAOS_API_VERSION_MAJOR < 1
o->svcl = NULL;
o->group = NULL;
#endif
o->group = NULL;
o->cont = NULL;
o->chunk_size = 1048576;
o->oclass = NULL;
o->dir_oclass = NULL;
o->prefix = NULL;
o->destroy = 0;
objectClass = OC_SX;
dir_oclass = OC_SX;
objectClass = 0;
dir_oclass = 0;
dfs_init = false;
}
/*
* Creat and open a file through the DFS interface.
* Create and open a file through the DFS interface.
*/
static aiori_fd_t *
DFS_Create(char *testFileName, int flags, aiori_mod_opt_t *param)
@ -578,26 +637,21 @@ DFS_Create(char *testFileName, int flags, aiori_mod_opt_t *param)
assert(dir_name);
assert(name);
parent = lookup_insert_dir(dir_name, NULL);
if (parent == NULL)
GERR("Failed to lookup parent dir");
mode = S_IFREG | mode;
if (hints->filePerProc || rank == 0) {
fd_oflag |= O_CREAT | O_RDWR | O_EXCL;
parent = lookup_insert_dir(dir_name, NULL);
if (parent == NULL)
GERR("Failed to lookup parent dir");
rc = dfs_open(dfs, parent, name, mode, fd_oflag,
objectClass, o->chunk_size, NULL, &obj);
DCHECK(rc, "dfs_open() of %s Failed", name);
}
if (!hints->filePerProc) {
MPI_Barrier(MPI_COMM_WORLD);
if (rank != 0) {
fd_oflag |= O_RDWR;
rc = dfs_open(dfs, parent, name, mode, fd_oflag,
objectClass, o->chunk_size, NULL, &obj);
DCHECK(rc, "dfs_open() of %s Failed", name);
}
share_file_handle(&obj, testComm);
}
if (name)
@ -629,13 +683,19 @@ DFS_Open(char *testFileName, int flags, aiori_mod_opt_t *param)
assert(dir_name);
assert(name);
parent = lookup_insert_dir(dir_name, NULL);
if (parent == NULL)
GERR("Failed to lookup parent dir");
if (hints->filePerProc || rank == 0) {
parent = lookup_insert_dir(dir_name, NULL);
if (parent == NULL)
GERR("Failed to lookup parent dir");
rc = dfs_open(dfs, parent, name, mode, fd_oflag, objectClass,
o->chunk_size, NULL, &obj);
DCHECK(rc, "dfs_open() of %s Failed", name);
rc = dfs_open(dfs, parent, name, mode, fd_oflag, objectClass,
o->chunk_size, NULL, &obj);
DCHECK(rc, "dfs_open() of %s Failed", name);
}
if (!hints->filePerProc) {
share_file_handle(&obj, testComm);
}
if (name)
free(name);
@ -675,14 +735,14 @@ DFS_Xfer(int access, aiori_fd_t *file, IOR_size_t *buffer, IOR_offset_t length,
if (access == WRITE) {
rc = dfs_write(dfs, obj, &sgl, off, NULL);
if (rc) {
fprintf(stderr, "dfs_write() failed (%d)", rc);
fprintf(stderr, "dfs_write() failed (%d)\n", rc);
return -1;
}
ret = remaining;
} else {
rc = dfs_read(dfs, obj, &sgl, off, &ret, NULL);
if (rc || ret == 0)
fprintf(stderr, "dfs_read() failed(%d)", rc);
fprintf(stderr, "dfs_read() failed(%d)\n", rc);
}
if (ret < remaining) {
@ -774,43 +834,36 @@ static char* DFS_GetVersion()
* Use DFS stat() to return aggregate file size.
*/
static IOR_offset_t
DFS_GetFileSize(aiori_mod_opt_t * test, MPI_Comm comm, char *testFileName)
DFS_GetFileSize(aiori_mod_opt_t * test, char *testFileName)
{
dfs_obj_t *obj;
daos_size_t fsize, tmpMin, tmpMax, tmpSum;
MPI_Comm comm;
daos_size_t fsize;
int rc;
rc = dfs_lookup(dfs, testFileName, O_RDONLY, &obj, NULL, NULL);
if (rc) {
fprintf(stderr, "dfs_lookup() of %s Failed (%d)", testFileName, rc);
return -1;
if (hints->filePerProc == TRUE) {
comm = MPI_COMM_SELF;
} else {
comm = testComm;
}
rc = dfs_get_size(dfs, obj, &fsize);
if (rc)
return -1;
dfs_release(obj);
if (hints->filePerProc == TRUE) {
MPI_CHECK(MPI_Allreduce(&fsize, &tmpSum, 1,
MPI_LONG_LONG_INT, MPI_SUM, comm),
"cannot total data moved");
fsize = tmpSum;
} else {
MPI_CHECK(MPI_Allreduce(&fsize, &tmpMin, 1,
MPI_LONG_LONG_INT, MPI_MIN, comm),
"cannot total data moved");
MPI_CHECK(MPI_Allreduce(&fsize, &tmpMax, 1,
MPI_LONG_LONG_INT, MPI_MAX, comm),
"cannot total data moved");
if (tmpMin != tmpMax) {
if (rank == 0) {
WARN("inconsistent file size by different tasks");
}
/* incorrect, but now consistent across tasks */
fsize = tmpMin;
if (hints->filePerProc || rank == 0) {
rc = dfs_lookup(dfs, testFileName, O_RDONLY, &obj, NULL, NULL);
if (rc) {
fprintf(stderr, "dfs_lookup() of %s Failed (%d)", testFileName, rc);
return -1;
}
rc = dfs_get_size(dfs, obj, &fsize);
dfs_release(obj);
if (rc)
return -1;
}
if (!hints->filePerProc) {
rc = MPI_Bcast(&fsize, 1, MPI_UINT64_T, 0, comm);
if (rc)
return rc;
}
return (fsize);
@ -914,7 +967,6 @@ DFS_Stat(const char *path, struct stat *buf, aiori_mod_opt_t * param)
GERR("Failed to lookup parent dir");
rc = dfs_stat(dfs, parent, name, buf);
DCHECK(rc, "dfs_stat() of Failed (%d)", rc);
if (name)
free(name);

View File

@ -108,7 +108,7 @@ static char * DUMMY_getVersion()
return "0.5";
}
static IOR_offset_t DUMMY_GetFileSize(aiori_mod_opt_t * options, MPI_Comm testComm, char *testFileName)
static IOR_offset_t DUMMY_GetFileSize(aiori_mod_opt_t * options, char *testFileName)
{
if(verbose > 4){
fprintf(out_logfile, "DUMMY getFileSize: %s\n", testFileName);
@ -156,6 +156,11 @@ static int DUMMY_stat (const char *path, struct stat *buf, aiori_mod_opt_t * opt
return 0;
}
static int DUMMY_rename (const char *path, const char *path2, aiori_mod_opt_t * options){
return 0;
}
static int DUMMY_check_params(aiori_mod_opt_t * options){
return 0;
}
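The new rename callback is a stub here; a real backend typically maps it directly onto its namespace call. A sketch of what a POSIX-style backend could register, assuming the same callback signature and that aiori.h is included for aiori_mod_opt_t (the name EXAMPLE_rename is hypothetical):

#include <stdio.h>      /* rename() */

/* Hypothetical POSIX-style rename callback matching the new aiori hook. */
static int EXAMPLE_rename(const char *oldpath, const char *newpath,
                          aiori_mod_opt_t *options)
{
    (void) options;                   /* module options are not needed here */
    return rename(oldpath, newpath);  /* 0 on success, -1 and errno on error */
}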
@ -188,6 +193,7 @@ ior_aiori_t dummy_aiori = {
.statfs = DUMMY_statfs,
.mkdir = DUMMY_mkdir,
.rmdir = DUMMY_rmdir,
.rename = DUMMY_rename,
.access = DUMMY_access,
.stat = DUMMY_stat,
.initialize = DUMMY_init,

View File

@ -14,6 +14,14 @@ struct gfarm_file {
GFS_File gf;
};
static aiori_xfer_hint_t *hints = NULL;
void
Gfarm_xfer_hints(aiori_xfer_hint_t *params)
{
hints = params;
}
void
Gfarm_initialize()
{
@ -26,14 +34,14 @@ Gfarm_finalize()
gfarm_terminate();
}
void *
Gfarm_create(char *fn, IOR_param_t *param)
aiori_fd_t *
Gfarm_create(char *fn, int flag, aiori_mod_opt_t *param)
{
GFS_File gf;
struct gfarm_file *fp;
gfarm_error_t e;
if (param->dryRun)
if (hints->dryRun)
return (NULL);
e = gfs_pio_create(fn, GFARM_FILE_RDWR, 0664, &gf);
@ -43,17 +51,17 @@ Gfarm_create(char *fn, IOR_param_t *param)
if (fp == NULL)
ERR("no memory");
fp->gf = gf;
return (fp);
return ((aiori_fd_t *)fp);
}
void *
Gfarm_open(char *fn, IOR_param_t *param)
aiori_fd_t *
Gfarm_open(char *fn, int flag, aiori_mod_opt_t *param)
{
GFS_File gf;
struct gfarm_file *fp;
gfarm_error_t e;
if (param->dryRun)
if (hints->dryRun)
return (NULL);
e = gfs_pio_open(fn, GFARM_FILE_RDWR, &gf);
@ -63,14 +71,14 @@ Gfarm_open(char *fn, IOR_param_t *param)
if (fp == NULL)
ERR("no memory");
fp->gf = gf;
return (fp);
return ((aiori_fd_t *)fp);
}
IOR_offset_t
Gfarm_xfer(int access, void *fd, IOR_size_t *buffer, IOR_offset_t len,
IOR_param_t *param)
Gfarm_xfer(int access, aiori_fd_t *fd, IOR_size_t *buffer,
IOR_offset_t len, IOR_offset_t offset, aiori_mod_opt_t *param)
{
struct gfarm_file *fp = fd;
struct gfarm_file *fp = (struct gfarm_file *)fd;
IOR_offset_t rem = len;
gfarm_off_t off;
gfarm_error_t e;
@ -78,7 +86,7 @@ Gfarm_xfer(int access, void *fd, IOR_size_t *buffer, IOR_offset_t len,
int sz, n;
char *buf = (char *)buffer;
if (param->dryRun)
if (hints->dryRun)
return (len);
if (len > MAX_SZ)
@ -86,7 +94,7 @@ Gfarm_xfer(int access, void *fd, IOR_size_t *buffer, IOR_offset_t len,
else
sz = len;
e = gfs_pio_seek(fp->gf, param->offset, GFARM_SEEK_SET, &off);
e = gfs_pio_seek(fp->gf, offset, GFARM_SEEK_SET, &off);
if (e != GFARM_ERR_NO_ERROR)
ERR("gfs_pio_seek failed");
while (rem > 0) {
@ -105,11 +113,11 @@ Gfarm_xfer(int access, void *fd, IOR_size_t *buffer, IOR_offset_t len,
}
void
Gfarm_close(void *fd, IOR_param_t *param)
Gfarm_close(aiori_fd_t *fd, aiori_mod_opt_t *param)
{
struct gfarm_file *fp = fd;
struct gfarm_file *fp = (struct gfarm_file *)fd;
if (param->dryRun)
if (hints->dryRun)
return;
if (gfs_pio_close(fp->gf) != GFARM_ERR_NO_ERROR)
@ -118,11 +126,11 @@ Gfarm_close(void *fd, IOR_param_t *param)
}
void
Gfarm_delete(char *fn, IOR_param_t *param)
Gfarm_delete(char *fn, aiori_mod_opt_t *param)
{
gfarm_error_t e;
if (param->dryRun)
if (hints->dryRun)
return;
e = gfs_unlink(fn);
@ -137,11 +145,11 @@ Gfarm_version()
}
void
Gfarm_fsync(void *fd, IOR_param_t *param)
Gfarm_fsync(aiori_fd_t *fd, aiori_mod_opt_t *param)
{
struct gfarm_file *fp = fd;
struct gfarm_file *fp = (struct gfarm_file *)fd;
if (param->dryRun)
if (hints->dryRun)
return;
if (gfs_pio_sync(fp->gf) != GFARM_ERR_NO_ERROR)
@ -149,12 +157,12 @@ Gfarm_fsync(void *fd, IOR_param_t *param)
}
IOR_offset_t
Gfarm_get_file_size(IOR_param_t *param, MPI_Comm comm, char *fn)
Gfarm_get_file_size(aiori_mod_opt_t *param, char *fn)
{
struct gfs_stat st;
IOR_offset_t size, sum, min, max;
if (param->dryRun)
if (hints->dryRun)
return (0);
if (gfs_stat(fn, &st) != GFARM_ERR_NO_ERROR)
@ -162,34 +170,17 @@ Gfarm_get_file_size(IOR_param_t *param, MPI_Comm comm, char *fn)
size = st.st_size;
gfs_stat_free(&st);
if (param->filePerProc == TRUE) {
MPI_CHECK(MPI_Allreduce(&size, &sum, 1, MPI_LONG_LONG_INT,
MPI_SUM, comm), "cannot total data moved");
size = sum;
} else {
MPI_CHECK(MPI_Allreduce(&size, &min, 1, MPI_LONG_LONG_INT,
MPI_MIN, comm), "cannot total data moved");
MPI_CHECK(MPI_Allreduce(&size, &max, 1, MPI_LONG_LONG_INT,
MPI_MAX, comm), "cannot total data moved");
if (min != max) {
if (rank == 0)
WARN("inconsistent file size by different "
"tasks");
/* incorrect, but now consistent across tasks */
size = min;
}
}
return (size);
}
int
Gfarm_statfs(const char *fn, ior_aiori_statfs_t *st, IOR_param_t *param)
Gfarm_statfs(const char *fn, ior_aiori_statfs_t *st, aiori_mod_opt_t *param)
{
gfarm_off_t used, avail, files;
gfarm_error_t e;
int bsize = 4096;
if (param->dryRun)
if (hints->dryRun)
return (0);
e = gfs_statfs_by_path(fn, &used, &avail, &files);
@ -206,11 +197,11 @@ Gfarm_statfs(const char *fn, ior_aiori_statfs_t *st, IOR_param_t *param)
}
int
Gfarm_mkdir(const char *fn, mode_t mode, IOR_param_t *param)
Gfarm_mkdir(const char *fn, mode_t mode, aiori_mod_opt_t *param)
{
gfarm_error_t e;
if (param->dryRun)
if (hints->dryRun)
return (0);
e = gfs_mkdir(fn, mode);
@ -221,11 +212,11 @@ Gfarm_mkdir(const char *fn, mode_t mode, IOR_param_t *param)
}
int
Gfarm_rmdir(const char *fn, IOR_param_t *param)
Gfarm_rmdir(const char *fn, aiori_mod_opt_t *param)
{
gfarm_error_t e;
if (param->dryRun)
if (hints->dryRun)
return (0);
e = gfs_rmdir(fn);
@ -236,12 +227,12 @@ Gfarm_rmdir(const char *fn, IOR_param_t *param)
}
int
Gfarm_access(const char *fn, int mode, IOR_param_t *param)
Gfarm_access(const char *fn, int mode, aiori_mod_opt_t *param)
{
struct gfs_stat st;
gfarm_error_t e;
if (param->dryRun)
if (hints->dryRun)
return (0);
e = gfs_stat(fn, &st);
@ -259,12 +250,12 @@ Gfarm_access(const char *fn, int mode, IOR_param_t *param)
#define STAT_BLKSIZ 512 /* for st_blocks */
int
Gfarm_stat(const char *fn, struct stat *buf, IOR_param_t *param)
Gfarm_stat(const char *fn, struct stat *buf, aiori_mod_opt_t *param)
{
struct gfs_stat st;
gfarm_error_t e;
if (param->dryRun)
if (hints->dryRun)
return (0);
e = gfs_stat(fn, &st);
@ -293,11 +284,22 @@ Gfarm_stat(const char *fn, struct stat *buf, IOR_param_t *param)
return (0);
}
void
Gfarm_sync(aiori_mod_opt_t *param)
{
if (hints->dryRun)
return;
/* no cache in libgfarm */
return;
}
ior_aiori_t gfarm_aiori = {
.name = "Gfarm",
.name_legacy = NULL,
.create = Gfarm_create,
.open = Gfarm_open,
.xfer_hints = Gfarm_xfer_hints,
.xfer = Gfarm_xfer,
.close = Gfarm_close,
.delete = Gfarm_delete,
@ -312,5 +314,6 @@ ior_aiori_t gfarm_aiori = {
.initialize = Gfarm_initialize,
.finalize = Gfarm_finalize,
.get_options = NULL,
.sync = Gfarm_sync,
.enable_mdtest = true,
};

View File

@ -91,7 +91,7 @@ static void HDF5_Close(aiori_fd_t *, aiori_mod_opt_t *);
static void HDF5_Delete(char *, aiori_mod_opt_t *);
static char* HDF5_GetVersion();
static void HDF5_Fsync(aiori_fd_t *, aiori_mod_opt_t *);
static IOR_offset_t HDF5_GetFileSize(aiori_mod_opt_t *, MPI_Comm, char *);
static IOR_offset_t HDF5_GetFileSize(aiori_mod_opt_t *, char *);
static int HDF5_Access(const char *, int, aiori_mod_opt_t *);
static void HDF5_init_xfer_options(aiori_xfer_hint_t * params);
static int HDF5_check_params(aiori_mod_opt_t * options);
@ -171,6 +171,8 @@ static aiori_xfer_hint_t * hints = NULL;
static void HDF5_init_xfer_options(aiori_xfer_hint_t * params){
hints = params;
/** HDF5 utilizes the MPIIO backend too, so init hints there */
MPIIO_xfer_hints(params);
}
static int HDF5_check_params(aiori_mod_opt_t * options){
@ -660,11 +662,11 @@ static void SetupDataSet(void *fd, int flags, aiori_mod_opt_t * param)
* Use MPIIO call to get file size.
*/
static IOR_offset_t
HDF5_GetFileSize(aiori_mod_opt_t * test, MPI_Comm testComm, char *testFileName)
HDF5_GetFileSize(aiori_mod_opt_t * test, char *testFileName)
{
if(hints->dryRun)
return 0;
return(MPIIO_GetFileSize(test, testComm, testFileName));
return(MPIIO_GetFileSize(test, testFileName));
}
/*

View File

@ -77,14 +77,13 @@
#include <sys/stat.h>
#include <assert.h>
/*
#ifdef HAVE_LUSTRE_LUSTRE_USER_H
#ifdef HAVE_LUSTRE_USER
#include <lustre/lustre_user.h>
#endif
*/
#include "ior.h"
#include "aiori.h"
#include "iordef.h"
#include "utilities.h"
#ifndef open64 /* necessary for TRU64 -- */
# define open64 open /* unlikely, but may pose */
@ -101,15 +100,23 @@
#include "hdfs.h"
/**************************** P R O T O T Y P E S *****************************/
static void *HDFS_Create(char *, IOR_param_t *);
static void *HDFS_Open(char *, IOR_param_t *);
static IOR_offset_t HDFS_Xfer(int, void *, IOR_size_t *,
IOR_offset_t, IOR_param_t *);
static void HDFS_Close(void *, IOR_param_t *);
static void HDFS_Delete(char *, IOR_param_t *);
static void HDFS_SetVersion(IOR_param_t *);
static void HDFS_Fsync(void *, IOR_param_t *);
static IOR_offset_t HDFS_GetFileSize(IOR_param_t *, MPI_Comm, char *);
static aiori_fd_t *HDFS_Create(char *testFileName, int flags, aiori_mod_opt_t * param);
static aiori_fd_t *HDFS_Open(char *testFileName, int flags, aiori_mod_opt_t * param);
static IOR_offset_t HDFS_Xfer(int access, aiori_fd_t *file, IOR_size_t * buffer,
IOR_offset_t length, IOR_offset_t offset, aiori_mod_opt_t * param);
static void HDFS_Close(aiori_fd_t *, aiori_mod_opt_t *);
static void HDFS_Delete(char *testFileName, aiori_mod_opt_t * param);
static void HDFS_Fsync(aiori_fd_t *, aiori_mod_opt_t *);
static IOR_offset_t HDFS_GetFileSize(aiori_mod_opt_t *,char *);
static void hdfs_xfer_hints(aiori_xfer_hint_t * params);
static option_help * HDFS_options(aiori_mod_opt_t ** init_backend_options, aiori_mod_opt_t * init_values);
static int HDFS_mkdir (const char *path, mode_t mode, aiori_mod_opt_t * options);
static int HDFS_rmdir (const char *path, aiori_mod_opt_t * options);
static int HDFS_access (const char *path, int mode, aiori_mod_opt_t * options);
static int HDFS_stat (const char *path, struct stat *buf, aiori_mod_opt_t * options);
static int HDFS_statfs (const char * path, ior_aiori_statfs_t * stat, aiori_mod_opt_t * options);
static aiori_xfer_hint_t * hints = NULL;
/************************** D E C L A R A T I O N S ***************************/
@ -121,13 +128,120 @@ ior_aiori_t hdfs_aiori = {
.xfer = HDFS_Xfer,
.close = HDFS_Close,
.delete = HDFS_Delete,
.set_version = HDFS_SetVersion,
.get_options = HDFS_options,
.get_version = aiori_get_version,
.xfer_hints = hdfs_xfer_hints,
.fsync = HDFS_Fsync,
.get_file_size = HDFS_GetFileSize,
.statfs = HDFS_statfs,
.mkdir = HDFS_mkdir,
.rmdir = HDFS_rmdir,
.access = HDFS_access,
.stat = HDFS_stat,
.enable_mdtest = true
};
/***************************** F U N C T I O N S ******************************/
void hdfs_xfer_hints(aiori_xfer_hint_t * params){
hints = params;
}
/************************** O P T I O N S *****************************/
typedef struct {
char * user;
char * name_node;
int replicas; /* n block replicas. (0 gets default) */
int direct_io;
IOR_offset_t block_size; /* internal blk-size. (0 gets default) */
// runtime options
hdfsFS fs; /* file-system handle */
tPort name_node_port; /* (uint16_t) */
} hdfs_options_t;
static void hdfs_connect( hdfs_options_t* o );
option_help * HDFS_options(aiori_mod_opt_t ** init_backend_options, aiori_mod_opt_t * init_values){
hdfs_options_t * o = malloc(sizeof(hdfs_options_t));
if (init_values != NULL){
memcpy(o, init_values, sizeof(hdfs_options_t));
}else{
memset(o, 0, sizeof(hdfs_options_t));
char *hdfs_user;
hdfs_user = getenv("USER");
if (!hdfs_user){
hdfs_user = "";
}
o->user = strdup(hdfs_user);
o->name_node = "default";
}
*init_backend_options = (aiori_mod_opt_t*) o;
option_help h [] = {
{0, "hdfs.odirect", "Direct I/O Mode", OPTION_FLAG, 'd', & o->direct_io},
{0, "hdfs.user", "Username", OPTION_OPTIONAL_ARGUMENT, 's', & o->user},
{0, "hdfs.name_node", "Namenode", OPTION_OPTIONAL_ARGUMENT, 's', & o->name_node},
{0, "hdfs.replicas", "Number of replicas", OPTION_OPTIONAL_ARGUMENT, 'd', & o->replicas},
{0, "hdfs.block_size", "Blocksize", OPTION_OPTIONAL_ARGUMENT, 'l', & o->block_size},
LAST_OPTION
};
option_help * help = malloc(sizeof(h));
memcpy(help, h, sizeof(h));
return help;
}
int HDFS_mkdir (const char *path, mode_t mode, aiori_mod_opt_t * options){
hdfs_options_t * o = (hdfs_options_t*) options;
hdfs_connect(o);
return hdfsCreateDirectory(o->fs, path);
}
int HDFS_rmdir (const char *path, aiori_mod_opt_t * options){
hdfs_options_t * o = (hdfs_options_t*) options;
hdfs_connect(o);
return hdfsDelete(o->fs, path, 1);
}
int HDFS_access (const char *path, int mode, aiori_mod_opt_t * options){
hdfs_options_t * o = (hdfs_options_t*) options;
hdfs_connect(o);
return hdfsExists(o->fs, path);
}
int HDFS_stat (const char *path, struct stat *buf, aiori_mod_opt_t * options){
hdfsFileInfo * stat;
hdfs_options_t * o = (hdfs_options_t*) options;
hdfs_connect(o);
stat = hdfsGetPathInfo(o->fs, path);
if(stat == NULL){
return 1;
}
memset(buf, 0, sizeof(struct stat));
buf->st_atime = stat->mLastAccess;
buf->st_size = stat->mSize;
buf->st_mtime = stat->mLastMod;
buf->st_mode = stat->mPermissions;
hdfsFreeFileInfo(stat, 1);
return 0;
}
int HDFS_statfs (const char * path, ior_aiori_statfs_t * stat, aiori_mod_opt_t * options){
hdfs_options_t * o = (hdfs_options_t*) options;
hdfs_connect(o);
stat->f_bsize = hdfsGetDefaultBlockSize(o->fs);
stat->f_blocks = hdfsGetCapacity(o->fs) / hdfsGetDefaultBlockSize(o->fs);
stat->f_bfree = stat->f_blocks - hdfsGetUsed(o->fs) / hdfsGetDefaultBlockSize(o->fs);
stat->f_bavail = 1;
stat->f_files = 1;
stat->f_ffree = 1;
return 0;
}
/* This is identical to the one in aiori-POSIX.c. Doesn't seem like
* it would be appropriate in utilities.c.
*/
@ -159,16 +273,16 @@ void hdfs_set_o_direct_flag(int *fd)
* NOTE: It's okay to call this thing whenever you need to be sure the HDFS
* filesystem is connected.
*/
static void hdfs_connect( IOR_param_t* param ) {
if (param->verbose >= VERBOSE_4) {
void hdfs_connect( hdfs_options_t* o ) {
if (verbose >= VERBOSE_4) {
printf("-> hdfs_connect [nn:\"%s\", port:%d, user:%s]\n",
param->hdfs_name_node,
param->hdfs_name_node_port,
param->hdfs_user );
o->name_node,
o->name_node_port,
o->user );
}
if ( param->hdfs_fs ) {
if (param->verbose >= VERBOSE_4) {
if ( o->fs ) {
if (verbose >= VERBOSE_4) {
printf("<- hdfs_connect [nothing to do]\n"); /* DEBUGGING */
}
return;
@ -176,34 +290,35 @@ static void hdfs_connect( IOR_param_t* param ) {
/* initialize a builder, holding parameters for hdfsBuilderConnect() */
struct hdfsBuilder* builder = hdfsNewBuilder();
if ( ! builder )
ERR_SIMPLE("couldn't create an hdfsBuilder");
if ( ! builder ){
ERR("couldn't create an hdfsBuilder");
}
hdfsBuilderSetForceNewInstance ( builder ); /* don't use cached instance */
hdfsBuilderSetNameNode ( builder, param->hdfs_name_node );
hdfsBuilderSetNameNodePort( builder, param->hdfs_name_node_port );
hdfsBuilderSetUserName ( builder, param->hdfs_user );
hdfsBuilderSetNameNode ( builder, o->name_node );
hdfsBuilderSetNameNodePort( builder, o->name_node_port );
hdfsBuilderSetUserName ( builder, o->user );
/* NOTE: hdfsBuilderConnect() frees the builder */
param->hdfs_fs = hdfsBuilderConnect( builder );
if ( ! param->hdfs_fs )
ERR_SIMPLE("hdsfsBuilderConnect failed");
o->fs = hdfsBuilderConnect( builder );
if ( ! o->fs )
ERR("hdsfsBuilderConnect failed");
if (param->verbose >= VERBOSE_4) {
if (verbose >= VERBOSE_4) {
printf("<- hdfs_connect [success]\n");
}
}
static void hdfs_disconnect( IOR_param_t* param ) {
if (param->verbose >= VERBOSE_4) {
static void hdfs_disconnect( hdfs_options_t* o ) {
if (verbose >= VERBOSE_4) {
printf("-> hdfs_disconnect\n");
}
if ( param->hdfs_fs ) {
hdfsDisconnect( param->hdfs_fs );
param->hdfs_fs = NULL;
if ( o->fs ) {
hdfsDisconnect( o->fs );
o->fs = NULL;
}
if (param->verbose >= VERBOSE_4) {
if (verbose >= VERBOSE_4) {
printf("<- hdfs_disconnect\n");
}
}
@ -214,16 +329,17 @@ static void hdfs_disconnect( IOR_param_t* param ) {
* Return an hdfsFile.
*/
static void *HDFS_Create_Or_Open( char *testFileName, IOR_param_t *param, unsigned char createFile ) {
if (param->verbose >= VERBOSE_4) {
static void *HDFS_Create_Or_Open( char *testFileName, int flags, aiori_mod_opt_t *param, unsigned char createFile ) {
if (verbose >= VERBOSE_4) {
printf("-> HDFS_Create_Or_Open\n");
}
hdfs_options_t * o = (hdfs_options_t*) param;
hdfsFile hdfs_file = NULL;
int fd_oflags = 0, hdfs_return;
/* initialize file-system handle, if needed */
hdfs_connect( param );
hdfs_connect( o );
/*
* Check for unsupported flags.
@ -234,15 +350,15 @@ static void *HDFS_Create_Or_Open( char *testFileName, IOR_param_t *param, unsign
* The other two, we just note that they are not supported and don't do them.
*/
if ( param->openFlags & IOR_RDWR ) {
if ( flags & IOR_RDWR ) {
ERR( "Opening or creating a file in RDWR is not implemented in HDFS" );
}
if ( param->openFlags & IOR_EXCL ) {
if ( flags & IOR_EXCL ) {
fprintf( stdout, "Opening or creating a file in Exclusive mode is not implemented in HDFS\n" );
}
if ( param->openFlags & IOR_APPEND ) {
if ( flags & IOR_APPEND ) {
fprintf( stdout, "Opening or creating a file for appending is not implemented in HDFS\n" );
}
@ -254,8 +370,8 @@ static void *HDFS_Create_Or_Open( char *testFileName, IOR_param_t *param, unsign
fd_oflags = O_CREAT;
}
if ( param->openFlags & IOR_WRONLY ) {
if ( !param->filePerProc ) {
if ( flags & IOR_WRONLY ) {
if ( ! hints->filePerProc ) {
// in N-1 mode, only rank 0 truncates the file
if ( rank != 0 ) {
@ -279,7 +395,7 @@ static void *HDFS_Create_Or_Open( char *testFileName, IOR_param_t *param, unsign
* Now see if O_DIRECT is needed.
*/
if ( param->useO_DIRECT == TRUE ) {
if ( o->direct_io == TRUE ) {
hdfs_set_o_direct_flag( &fd_oflags );
}
@ -290,10 +406,7 @@ static void *HDFS_Create_Or_Open( char *testFileName, IOR_param_t *param, unsign
* truncate each other's writes
*/
if (( param->openFlags & IOR_WRONLY ) &&
( !param->filePerProc ) &&
( rank != 0 )) {
if (( flags & IOR_WRONLY ) && ( ! hints->filePerProc ) && ( rank != 0 )) {
MPI_CHECK(MPI_Barrier(testComm), "barrier error");
}
@ -301,21 +414,16 @@ static void *HDFS_Create_Or_Open( char *testFileName, IOR_param_t *param, unsign
* Now rank zero can open and truncate, if necessary.
*/
if (param->verbose >= VERBOSE_4) {
printf("\thdfsOpenFile(0x%llx, %s, 0%o, %d, %d, %d)\n",
param->hdfs_fs,
if (verbose >= VERBOSE_4) {
printf("\thdfsOpenFile(%p, %s, 0%o, %lld, %d, %lld)\n",
o->fs,
testFileName,
fd_oflags, /* shown in octal to compare w/ <bits/fcntl.h> */
param->transferSize,
param->hdfs_replicas,
param->hdfs_block_size);
hints->transferSize,
o->replicas,
o->block_size);
}
hdfs_file = hdfsOpenFile( param->hdfs_fs,
testFileName,
fd_oflags,
param->transferSize,
param->hdfs_replicas,
param->hdfs_block_size);
hdfs_file = hdfsOpenFile( o->fs, testFileName, fd_oflags, hints->transferSize, o->replicas, o->block_size);
if ( ! hdfs_file ) {
ERR( "Failed to open the file" );
}
@ -324,14 +432,14 @@ static void *HDFS_Create_Or_Open( char *testFileName, IOR_param_t *param, unsign
* For N-1 write, Rank 0 waits for the other ranks to open the file after it has.
*/
if (( param->openFlags & IOR_WRONLY ) &&
( !param->filePerProc ) &&
if (( flags & IOR_WRONLY ) &&
( !hints->filePerProc ) &&
( rank == 0 )) {
MPI_CHECK(MPI_Barrier(testComm), "barrier error");
}
if (param->verbose >= VERBOSE_4) {
if (verbose >= VERBOSE_4) {
printf("<- HDFS_Create_Or_Open\n");
}
return ((void *) hdfs_file );
@ -341,36 +449,36 @@ static void *HDFS_Create_Or_Open( char *testFileName, IOR_param_t *param, unsign
* Create and open a file through the HDFS interface.
*/
static void *HDFS_Create( char *testFileName, IOR_param_t * param ) {
if (param->verbose >= VERBOSE_4) {
static aiori_fd_t *HDFS_Create(char *testFileName, int flags, aiori_mod_opt_t * param) {
if (verbose >= VERBOSE_4) {
printf("-> HDFS_Create\n");
}
if (param->verbose >= VERBOSE_4) {
if (verbose >= VERBOSE_4) {
printf("<- HDFS_Create\n");
}
return HDFS_Create_Or_Open( testFileName, param, TRUE );
return HDFS_Create_Or_Open( testFileName, flags, param, TRUE );
}
/*
* Open a file through the HDFS interface.
*/
static void *HDFS_Open( char *testFileName, IOR_param_t * param ) {
if (param->verbose >= VERBOSE_4) {
static aiori_fd_t *HDFS_Open(char *testFileName, int flags, aiori_mod_opt_t * param) {
if (verbose >= VERBOSE_4) {
printf("-> HDFS_Open\n");
}
if ( param->openFlags & IOR_CREAT ) {
if (param->verbose >= VERBOSE_4) {
if ( flags & IOR_CREAT ) {
if (verbose >= VERBOSE_4) {
printf("<- HDFS_Open( ... TRUE)\n");
}
return HDFS_Create_Or_Open( testFileName, param, TRUE );
return HDFS_Create_Or_Open( testFileName, flags, param, TRUE );
}
else {
if (param->verbose >= VERBOSE_4) {
if (verbose >= VERBOSE_4) {
printf("<- HDFS_Open( ... FALSE)\n");
}
return HDFS_Create_Or_Open( testFileName, param, FALSE );
return HDFS_Create_Or_Open( testFileName, flags, param, FALSE );
}
}
@ -378,19 +486,18 @@ static void *HDFS_Open( char *testFileName, IOR_param_t * param ) {
* Write or read to file using the HDFS interface.
*/
static IOR_offset_t HDFS_Xfer(int access, void *file, IOR_size_t * buffer,
IOR_offset_t length, IOR_param_t * param) {
if (param->verbose >= VERBOSE_4) {
printf("-> HDFS_Xfer(acc:%d, file:0x%llx, buf:0x%llx, len:%llu, 0x%llx)\n",
static IOR_offset_t HDFS_Xfer(int access, aiori_fd_t *file, IOR_size_t * buffer,
IOR_offset_t length, IOR_offset_t offset, aiori_mod_opt_t * param) {
if (verbose >= VERBOSE_4) {
printf("-> HDFS_Xfer(acc:%d, file:%p, buf:%p, len:%llu, %p)\n",
access, file, buffer, length, param);
}
hdfs_options_t * o = (hdfs_options_t*) param;
int xferRetries = 0;
long long remaining = (long long)length;
char* ptr = (char *)buffer;
long long rc;
off_t offset = param->offset;
hdfsFS hdfs_fs = param->hdfs_fs; /* (void*) */
hdfsFS hdfs_fs = o->fs; /* (void*) */
hdfsFile hdfs_file = (hdfsFile)file; /* (void*) */
@ -401,37 +508,34 @@ static IOR_offset_t HDFS_Xfer(int access, void *file, IOR_size_t * buffer,
if (verbose >= VERBOSE_4) {
fprintf( stdout, "task %d writing to offset %lld\n",
rank,
param->offset + length - remaining);
offset + length - remaining);
}
if (param->verbose >= VERBOSE_4) {
printf("\thdfsWrite( 0x%llx, 0x%llx, 0x%llx, %lld)\n",
if (verbose >= VERBOSE_4) {
printf("\thdfsWrite( %p, %p, %p, %lld)\n",
hdfs_fs, hdfs_file, ptr, remaining ); /* DEBUGGING */
}
rc = hdfsWrite( hdfs_fs, hdfs_file, ptr, remaining );
if ( rc < 0 ) {
ERR( "hdfsWrite() failed" );
}
offset += rc;
if ( param->fsyncPerWrite == TRUE ) {
HDFS_Fsync( hdfs_file, param );
if ( hints->fsyncPerWrite == TRUE ) {
HDFS_Fsync( file, param );
}
}
else { /* READ or CHECK */
if (verbose >= VERBOSE_4) {
fprintf( stdout, "task %d reading from offset %lld\n",
rank,
param->offset + length - remaining );
rank, offset + length - remaining );
}
if (param->verbose >= VERBOSE_4) {
printf("\thdfsRead( 0x%llx, 0x%llx, 0x%llx, %lld)\n",
if (verbose >= VERBOSE_4) {
printf("\thdfsRead( %p, %p, %p, %lld)\n",
hdfs_fs, hdfs_file, ptr, remaining ); /* DEBUGGING */
}
rc = hdfsRead( hdfs_fs, hdfs_file, ptr, remaining );
rc = hdfsPread(hdfs_fs, hdfs_file, offset, ptr, remaining);
if ( rc == 0 ) {
ERR( "hdfs_read() returned EOF prematurely" );
}
@ -449,9 +553,9 @@ static IOR_offset_t HDFS_Xfer(int access, void *file, IOR_size_t * buffer,
rank,
access == WRITE ? "hdfsWrite()" : "hdfs_read()",
rc, remaining,
param->offset + length - remaining );
offset + length - remaining );
if ( param->singleXferAttempt == TRUE ) {
if ( hints->singleXferAttempt == TRUE ) {
MPI_CHECK( MPI_Abort( MPI_COMM_WORLD, -1 ), "barrier error" );
}
@ -467,7 +571,16 @@ static IOR_offset_t HDFS_Xfer(int access, void *file, IOR_size_t * buffer,
xferRetries++;
}
if (param->verbose >= VERBOSE_4) {
if(access == WRITE){
// flush the user buffer so the write becomes visible to readers,
// which is the expected semantics of reads and writes
rc = hdfsHFlush(hdfs_fs, hdfs_file);
if(rc != 0){
WARN("Error during flush");
}
}
if (verbose >= VERBOSE_4) {
printf("<- HDFS_Xfer\n");
}
return ( length );
@ -476,67 +589,38 @@ static IOR_offset_t HDFS_Xfer(int access, void *file, IOR_size_t * buffer,
/*
* Perform hdfs_sync().
*/
static void HDFS_Fsync( void *fd, IOR_param_t * param ) {
if (param->verbose >= VERBOSE_4) {
printf("-> HDFS_Fsync\n");
}
hdfsFS hdfs_fs = param->hdfs_fs; /* (void *) */
static void HDFS_Fsync(aiori_fd_t * fd, aiori_mod_opt_t * param) {
hdfs_options_t * o = (hdfs_options_t*) param;
hdfsFS hdfs_fs = o->fs; /* (void *) */
hdfsFile hdfs_file = (hdfsFile)fd; /* (void *) */
#if 0
if (param->verbose >= VERBOSE_4) {
printf("\thdfsHSync(0x%llx, 0x%llx)\n", hdfs_fs, hdfs_file);
if (verbose >= VERBOSE_4) {
printf("\thdfsFlush(%p, %p)\n", hdfs_fs, hdfs_file);
}
if ( hdfsHSync( hdfs_fs, hdfs_file ) != 0 ) {
EWARN( "hdfsHSync() failed" );
}
#elif 0
if (param->verbose >= VERBOSE_4) {
printf("\thdfsHFlush(0x%llx, 0x%llx)\n", hdfs_fs, hdfs_file);
}
if ( hdfsHFlush( hdfs_fs, hdfs_file ) != 0 ) {
EWARN( "hdfsHFlush() failed" );
}
#else
if (param->verbose >= VERBOSE_4) {
printf("\thdfsFlush(0x%llx, 0x%llx)\n", hdfs_fs, hdfs_file);
}
if ( hdfsFlush( hdfs_fs, hdfs_file ) != 0 ) {
// Hsync is implemented to flush out data with newer Hadoop versions
EWARN( "hdfsFlush() failed" );
}
#endif
if (param->verbose >= VERBOSE_4) {
printf("<- HDFS_Fsync\n");
}
}
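The rewritten HDFS_Fsync keeps only hdfsFlush(); the stronger hdfsHFlush()/hdfsHSync() calls that the removed #if branches used remain available when more durability is wanted. A hedged sketch of a fallback chain, assuming the linked libhdfs exports all three calls (the helper name is hypothetical):

/* Sketch: try the strongest flush first and fall back on failure.
 * hdfsHSync() also asks the datanodes to sync to disk, hdfsHFlush() makes
 * the data visible to new readers, hdfsFlush() only pushes client buffers. */
static void hdfs_flush_strong(hdfsFS fs, hdfsFile f)
{
    if (hdfsHSync(fs, f) == 0)
        return;
    if (hdfsHFlush(fs, f) == 0)
        return;
    if (hdfsFlush(fs, f) != 0)
        EWARN("flushing HDFS file failed");
}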
/*
* Close a file through the HDFS interface.
*/
static void HDFS_Close( void *fd, IOR_param_t * param ) {
if (param->verbose >= VERBOSE_4) {
static void HDFS_Close(aiori_fd_t * fd, aiori_mod_opt_t * param) {
if (verbose >= VERBOSE_4) {
printf("-> HDFS_Close\n");
}
hdfs_options_t * o = (hdfs_options_t*) param;
hdfsFS hdfs_fs = param->hdfs_fs; /* (void *) */
hdfsFS hdfs_fs = o->fs; /* (void *) */
hdfsFile hdfs_file = (hdfsFile)fd; /* (void *) */
int open_flags;
if ( param->openFlags & IOR_WRONLY ) {
open_flags = O_CREAT | O_WRONLY;
} else {
open_flags = O_RDONLY;
}
if ( hdfsCloseFile( hdfs_fs, hdfs_file ) != 0 ) {
ERR( "hdfsCloseFile() failed" );
}
if (param->verbose >= VERBOSE_4) {
if (verbose >= VERBOSE_4) {
printf("<- HDFS_Close\n");
}
}
@ -547,119 +631,66 @@ static void HDFS_Close( void *fd, IOR_param_t * param ) {
* NOTE: The signature for ior_aiori.delete doesn't include a parameter to
* select recursive deletes. We'll assume that that is never needed.
*/
static void HDFS_Delete( char *testFileName, IOR_param_t * param ) {
if (param->verbose >= VERBOSE_4) {
static void HDFS_Delete( char *testFileName, aiori_mod_opt_t * param ) {
if (verbose >= VERBOSE_4) {
printf("-> HDFS_Delete\n");
}
hdfs_options_t * o = (hdfs_options_t*) param;
char errmsg[256];
/* initialize file-system handle, if needed */
hdfs_connect( param );
hdfs_connect(o);
if ( ! param->hdfs_fs )
ERR_SIMPLE( "Can't delete a file without an HDFS connection" );
if ( ! o->fs )
ERR( "Can't delete a file without an HDFS connection" );
if ( hdfsDelete( param->hdfs_fs, testFileName, 0 ) != 0 ) {
sprintf(errmsg,
"[RANK %03d]: hdfsDelete() of file \"%s\" failed\n",
if ( hdfsDelete( o->fs, testFileName, 0 ) != 0 ) {
sprintf(errmsg, "[RANK %03d]: hdfsDelete() of file \"%s\" failed\n",
rank, testFileName);
EWARN( errmsg );
}
if (param->verbose >= VERBOSE_4) {
if (verbose >= VERBOSE_4) {
printf("<- HDFS_Delete\n");
}
}
/*
* Determine api version.
*/
static void HDFS_SetVersion( IOR_param_t * param ) {
if (param->verbose >= VERBOSE_4) {
printf("-> HDFS_SetVersion\n");
}
strcpy( param->apiVersion, param->api );
if (param->verbose >= VERBOSE_4) {
printf("<- HDFS_SetVersion\n");
}
}
/*
* Use hdfsGetPathInfo() to get info about file?
* Is there an fstat we can use on hdfs?
* Should we just use POSIX fstat?
*/
static IOR_offset_t
HDFS_GetFileSize(IOR_param_t * param,
MPI_Comm testComm,
static IOR_offset_t HDFS_GetFileSize(aiori_mod_opt_t * param,
char * testFileName) {
if (param->verbose >= VERBOSE_4) {
if (verbose >= VERBOSE_4) {
printf("-> HDFS_GetFileSize(%s)\n", testFileName);
}
hdfs_options_t * o = (hdfs_options_t*) param;
IOR_offset_t aggFileSizeFromStat;
IOR_offset_t tmpMin, tmpMax, tmpSum;
/* make sure file-system is connected */
hdfs_connect( param );
hdfs_connect( o );
/* file-info struct includes size in bytes */
if (param->verbose >= VERBOSE_4) {
printf("\thdfsGetPathInfo(%s) ...", testFileName);fflush(stdout);
if (verbose >= VERBOSE_4) {
printf("\thdfsGetPathInfo(%s) ...", testFileName);
fflush(stdout);
}
hdfsFileInfo* info = hdfsGetPathInfo( param->hdfs_fs, testFileName );
hdfsFileInfo* info = hdfsGetPathInfo( o->fs, testFileName );
if ( ! info )
ERR_SIMPLE( "hdfsGetPathInfo() failed" );
if (param->verbose >= VERBOSE_4) {
ERR( "hdfsGetPathInfo() failed" );
if (verbose >= VERBOSE_4) {
printf("done.\n");fflush(stdout);
}
aggFileSizeFromStat = info->mSize;
if ( param->filePerProc == TRUE ) {
if (param->verbose >= VERBOSE_4) {
printf("\tall-reduce (1)\n");
}
MPI_CHECK(
MPI_Allreduce(
&aggFileSizeFromStat, &tmpSum, 1, MPI_LONG_LONG_INT, MPI_SUM, testComm ),
"cannot total data moved" );
aggFileSizeFromStat = tmpSum;
}
else {
if (param->verbose >= VERBOSE_4) {
printf("\tall-reduce (2a)\n");
}
MPI_CHECK(
MPI_Allreduce(
&aggFileSizeFromStat, &tmpMin, 1, MPI_LONG_LONG_INT, MPI_MIN, testComm ),
"cannot total data moved" );
if (param->verbose >= VERBOSE_4) {
printf("\tall-reduce (2b)\n");
}
MPI_CHECK(
MPI_Allreduce(
&aggFileSizeFromStat, &tmpMax, 1, MPI_LONG_LONG_INT, MPI_MAX, testComm ),
"cannot total data moved" );
if ( tmpMin != tmpMax ) {
if ( rank == 0 ) {
WARN( "inconsistent file size by different tasks" );
}
/* incorrect, but now consistent across tasks */
aggFileSizeFromStat = tmpMin;
}
}
if (param->verbose >= VERBOSE_4) {
if (verbose >= VERBOSE_4) {
printf("<- HDFS_GetFileSize [%llu]\n", aggFileSizeFromStat);
}
return ( aggFileSizeFromStat );

View File

@ -21,8 +21,8 @@
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <errno.h> /* sys_errlist */
#include <fcntl.h> /* IO operations */
#include <errno.h> /* sys_errlist */
#include <fcntl.h> /* IO operations */
#include "ior.h"
#include "iordef.h"
@ -30,63 +30,68 @@
#include "utilities.h"
#include "ime_native.h"
#ifndef O_BINARY /* Required on Windows */
#define IME_UNUSED(x) (void)(x) /* Silence compiler warnings */
#ifndef O_BINARY /* Required on Windows */
# define O_BINARY 0
#endif
/**************************** P R O T O T Y P E S *****************************/
static void *IME_Create(char *, IOR_param_t *);
static void *IME_Open(char *, IOR_param_t *);
static void IME_Close(void *, IOR_param_t *);
static void IME_Delete(char *, IOR_param_t *);
static char *IME_GetVersion();
static void IME_Fsync(void *, IOR_param_t *);
static int IME_Access(const char *, int, IOR_param_t *);
static IOR_offset_t IME_GetFileSize(IOR_param_t *, MPI_Comm, char *);
static IOR_offset_t IME_Xfer(int, void *, IOR_size_t *,
IOR_offset_t, IOR_param_t *);
static int IME_StatFS(const char *, ior_aiori_statfs_t *,
IOR_param_t *);
static int IME_RmDir(const char *, IOR_param_t *);
static int IME_MkDir(const char *, mode_t, IOR_param_t *);
static int IME_Stat(const char *, struct stat *, IOR_param_t *);
aiori_fd_t *IME_Create(char *, int, aiori_mod_opt_t *);
aiori_fd_t *IME_Open(char *, int, aiori_mod_opt_t *);
void IME_Close(aiori_fd_t *, aiori_mod_opt_t *);
void IME_Delete(char *, aiori_mod_opt_t *);
char *IME_GetVersion();
void IME_Fsync(aiori_fd_t *, aiori_mod_opt_t *);
int IME_Access(const char *, int, aiori_mod_opt_t *);
IOR_offset_t IME_GetFileSize(aiori_mod_opt_t *, char *);
IOR_offset_t IME_Xfer(int, aiori_fd_t *, IOR_size_t *, IOR_offset_t,
IOR_offset_t, aiori_mod_opt_t *);
int IME_Statfs(const char *, ior_aiori_statfs_t *,
aiori_mod_opt_t *);
int IME_Rmdir(const char *, aiori_mod_opt_t *);
int IME_Mkdir(const char *, mode_t, aiori_mod_opt_t *);
int IME_Stat(const char *, struct stat *, aiori_mod_opt_t *);
void IME_Xferhints(aiori_xfer_hint_t *params);
#if (IME_NATIVE_API_VERSION >= 132)
static int IME_Mknod(char *);
static void IME_Sync(IOR_param_t *);
int IME_Mknod(char *);
void IME_Sync(aiori_mod_opt_t *param);
#endif
static void IME_Initialize();
static void IME_Finalize();
void IME_Initialize();
void IME_Finalize();
/****************************** O P T I O N S *********************************/
/************************** O P T I O N S *****************************/
typedef struct{
int direct_io;
int direct_io;
} ime_options_t;
option_help *IME_Options(aiori_mod_opt_t **init_backend_options,
aiori_mod_opt_t *init_values)
{
ime_options_t *o = malloc(sizeof(ime_options_t));
option_help * IME_options(void ** init_backend_options, void * init_values){
ime_options_t * o = malloc(sizeof(ime_options_t));
if (init_values != NULL)
memcpy(o, init_values, sizeof(ime_options_t));
else
o->direct_io = 0;
if (init_values != NULL){
memcpy(o, init_values, sizeof(ime_options_t));
}else{
o->direct_io = 0;
}
*init_backend_options = (aiori_mod_opt_t*)o;
*init_backend_options = o;
option_help h[] = {
{0, "ime.odirect", "Direct I/O Mode", OPTION_FLAG, 'd', & o->direct_io},
LAST_OPTION
};
option_help *help = malloc(sizeof(h));
memcpy(help, h, sizeof(h));
option_help h [] = {
{0, "ime.odirect", "Direct I/O Mode", OPTION_FLAG, 'd', & o->direct_io},
LAST_OPTION
};
option_help * help = malloc(sizeof(h));
memcpy(help, h, sizeof(h));
return help;
return help;
}
/************************** D E C L A R A T I O N S ***************************/
extern int rank;
@ -100,19 +105,20 @@ ior_aiori_t ime_aiori = {
.create = IME_Create,
.open = IME_Open,
.xfer = IME_Xfer,
.xfer_hints = IME_Xferhints,
.close = IME_Close,
.delete = IME_Delete,
.get_version = IME_GetVersion,
.fsync = IME_Fsync,
.get_file_size = IME_GetFileSize,
.access = IME_Access,
.statfs = IME_StatFS,
.rmdir = IME_RmDir,
.mkdir = IME_MkDir,
.statfs = IME_Statfs,
.rmdir = IME_Rmdir,
.mkdir = IME_Mkdir,
.stat = IME_Stat,
.initialize = IME_Initialize,
.finalize = IME_Finalize,
.get_options = IME_options,
.get_options = IME_Options,
#if (IME_NATIVE_API_VERSION >= 132)
.sync = IME_Sync,
.mknod = IME_Mknod,
@ -120,72 +126,92 @@ ior_aiori_t ime_aiori = {
.enable_mdtest = true,
};
static aiori_xfer_hint_t *hints = NULL;
static bool ime_initialized = false;
/***************************** F U N C T I O N S ******************************/
void IME_Xferhints(aiori_xfer_hint_t *params)
{
hints = params;
}
/*
* Initialize IME (before MPI is started).
*/
static void IME_Initialize()
void IME_Initialize()
{
if (ime_initialized)
return;
ime_native_init();
ime_initialized = true;
}
/*
 * Finalize IME (after MPI is shut down).
*/
static void IME_Finalize()
void IME_Finalize()
{
if (!ime_initialized)
return;
(void)ime_native_finalize();
ime_initialized = false;
}
/*
* Try to access a file through the IME interface.
*/
static int IME_Access(const char *path, int mode, IOR_param_t *param)
int IME_Access(const char *path, int mode, aiori_mod_opt_t *module_options)
{
(void)param;
IME_UNUSED(module_options);
return ime_native_access(path, mode);
}
/*
* Creat and open a file through the IME interface.
* Create and open a file through the IME interface.
*/
static void *IME_Create(char *testFileName, IOR_param_t *param)
aiori_fd_t *IME_Create(char *testFileName, int flags, aiori_mod_opt_t *param)
{
return IME_Open(testFileName, param);
return IME_Open(testFileName, flags, param);
}
/*
* Open a file through the IME interface.
*/
static void *IME_Open(char *testFileName, IOR_param_t *param)
aiori_fd_t *IME_Open(char *testFileName, int flags, aiori_mod_opt_t *param)
{
int fd_oflag = O_BINARY;
int *fd;
if (hints->dryRun)
return NULL;
fd = (int *)malloc(sizeof(int));
if (fd == NULL)
ERR("Unable to malloc file descriptor");
ime_options_t * o = (ime_options_t*) param->backend_options;
if (o->direct_io == TRUE){
set_o_direct_flag(&fd_oflag);
}
ime_options_t *o = (ime_options_t*) param;
if (o->direct_io == TRUE)
set_o_direct_flag(&fd_oflag);
if (param->openFlags & IOR_RDONLY)
if (flags & IOR_RDONLY)
fd_oflag |= O_RDONLY;
if (param->openFlags & IOR_WRONLY)
if (flags & IOR_WRONLY)
fd_oflag |= O_WRONLY;
if (param->openFlags & IOR_RDWR)
if (flags & IOR_RDWR)
fd_oflag |= O_RDWR;
if (param->openFlags & IOR_APPEND)
if (flags & IOR_APPEND)
fd_oflag |= O_APPEND;
if (param->openFlags & IOR_CREAT)
if (flags & IOR_CREAT)
fd_oflag |= O_CREAT;
if (param->openFlags & IOR_EXCL)
if (flags & IOR_EXCL)
fd_oflag |= O_EXCL;
if (param->openFlags & IOR_TRUNC)
if (flags & IOR_TRUNC)
fd_oflag |= O_TRUNC;
*fd = ime_native_open(testFileName, fd_oflag, 0664);
@ -194,14 +220,14 @@ static void *IME_Open(char *testFileName, IOR_param_t *param)
ERR("cannot open file");
}
return((void *)fd);
return (aiori_fd_t*) fd;
}
/*
 * Write or read access to a file using the IME interface.
*/
static IOR_offset_t IME_Xfer(int access, void *file, IOR_size_t *buffer,
IOR_offset_t length, IOR_param_t *param)
IOR_offset_t IME_Xfer(int access, aiori_fd_t *file, IOR_size_t *buffer,
IOR_offset_t length, IOR_offset_t offset, aiori_mod_opt_t *param)
{
int xferRetries = 0;
long long remaining = (long long)length;
@ -209,25 +235,28 @@ static IOR_offset_t IME_Xfer(int access, void *file, IOR_size_t *buffer,
int fd = *(int *)file;
long long rc;
if (hints->dryRun)
return length;
while (remaining > 0) {
/* write/read file */
if (access == WRITE) { /* WRITE */
if (verbose >= VERBOSE_4) {
fprintf(stdout, "task %d writing to offset %lld\n",
rank, param->offset + length - remaining);
rank, offset + length - remaining);
}
rc = ime_native_pwrite(fd, ptr, remaining, param->offset);
rc = ime_native_pwrite(fd, ptr, remaining, offset);
if (param->fsyncPerWrite)
IME_Fsync(&fd, param);
if (hints->fsyncPerWrite)
IME_Fsync(file, param);
} else { /* READ or CHECK */
if (verbose >= VERBOSE_4) {
fprintf(stdout, "task %d reading from offset %lld\n",
rank, param->offset + length - remaining);
rank, offset + length - remaining);
}
rc = ime_native_pread(fd, ptr, remaining, param->offset);
rc = ime_native_pread(fd, ptr, remaining, offset);
if (rc == 0)
ERR("hit EOF prematurely");
else if (rc < 0)
@ -238,9 +267,9 @@ static IOR_offset_t IME_Xfer(int access, void *file, IOR_size_t *buffer,
fprintf(stdout, "WARNING: Task %d, partial %s, %lld of "
"%lld bytes at offset %lld\n",
rank, access == WRITE ? "write" : "read", rc,
remaining, param->offset + length - remaining );
remaining, offset + length - remaining );
if (param->singleXferAttempt) {
if (hints->singleXferAttempt) {
MPI_CHECK(MPI_Abort(MPI_COMM_WORLD, -1),
"barrier error");
}
@ -264,7 +293,7 @@ static IOR_offset_t IME_Xfer(int access, void *file, IOR_size_t *buffer,
/*
* Perform fsync().
*/
static void IME_Fsync(void *fd, IOR_param_t *param)
void IME_Fsync(aiori_fd_t *fd, aiori_mod_opt_t *param)
{
if (ime_native_fsync(*(int *)fd) != 0)
WARN("cannot perform fsync on file");
@ -273,33 +302,34 @@ static void IME_Fsync(void *fd, IOR_param_t *param)
/*
* Close a file through the IME interface.
*/
static void IME_Close(void *fd, IOR_param_t *param)
void IME_Close(aiori_fd_t *file, aiori_mod_opt_t *param)
{
if (ime_native_close(*(int *)fd) != 0)
{
free(fd);
ERR("cannot close file");
}
else
free(fd);
if (hints->dryRun)
return;
if (ime_native_close(*(int*)file) != 0)
ERRF("Cannot close file descriptor: %d", *(int*)file);
free(file);
}
/*
* Delete a file through the IME interface.
*/
static void IME_Delete(char *testFileName, IOR_param_t *param)
void IME_Delete(char *testFileName, aiori_mod_opt_t *param)
{
char errmsg[256];
sprintf(errmsg, "[RANK %03d]:cannot delete file %s\n",
rank, testFileName);
if (hints->dryRun)
return;
if (ime_native_unlink(testFileName) != 0)
WARN(errmsg);
EWARNF("[RANK %03d]: cannot delete file \"%s\"\n",
rank, testFileName);
}
/*
* Determine API version.
*/
static char *IME_GetVersion()
char *IME_GetVersion()
{
static char ver[1024] = {};
#if (IME_NATIVE_API_VERSION >= 120)
@ -310,18 +340,17 @@ static char *IME_GetVersion()
return ver;
}
static int IME_StatFS(const char *path, ior_aiori_statfs_t *stat_buf,
IOR_param_t *param)
int IME_Statfs(const char *path, ior_aiori_statfs_t *stat_buf,
aiori_mod_opt_t *module_options)
{
(void)param;
IME_UNUSED(module_options);
#if (IME_NATIVE_API_VERSION >= 130)
struct statvfs statfs_buf;
int ret = ime_native_statvfs(path, &statfs_buf);
if (ret)
return ret;
return ret;
stat_buf->f_bsize = statfs_buf.f_bsize;
stat_buf->f_blocks = statfs_buf.f_blocks;
stat_buf->f_bfree = statfs_buf.f_bfree;
@ -330,38 +359,37 @@ static int IME_StatFS(const char *path, ior_aiori_statfs_t *stat_buf,
return 0;
#else
(void)path;
(void)stat_buf;
IME_UNUSED(path);
IME_UNUSED(stat_buf);
WARN("statfs is currently not supported in IME backend!");
return -1;
#endif
}
static int IME_MkDir(const char *path, mode_t mode, IOR_param_t *param)
int IME_Mkdir(const char *path, mode_t mode, aiori_mod_opt_t * module_options)
{
(void)param;
IME_UNUSED(module_options);
#if (IME_NATIVE_API_VERSION >= 130)
return ime_native_mkdir(path, mode);
#else
(void)path;
(void)mode;
IME_UNUSED(path);
IME_UNUSED(mode);
WARN("mkdir not supported in IME backend!");
return -1;
#endif
}
static int IME_RmDir(const char *path, IOR_param_t *param)
int IME_Rmdir(const char *path, aiori_mod_opt_t *module_options)
{
(void)param;
IME_UNUSED(module_options);
#if (IME_NATIVE_API_VERSION >= 130)
return ime_native_rmdir(path);
#else
(void)path;
IME_UNUSED(path);
WARN("rmdir not supported in IME backend!");
return -1;
@ -371,9 +399,10 @@ static int IME_RmDir(const char *path, IOR_param_t *param)
/*
* Perform stat() through the IME interface.
*/
static int IME_Stat(const char *path, struct stat *buf, IOR_param_t *param)
int IME_Stat(const char *path, struct stat *buf,
aiori_mod_opt_t *module_options)
{
(void)param;
IME_UNUSED(module_options);
return ime_native_stat(path, buf);
}
@ -381,62 +410,39 @@ static int IME_Stat(const char *path, struct stat *buf, IOR_param_t *param)
/*
* Use IME stat() to return aggregate file size.
*/
static IOR_offset_t IME_GetFileSize(IOR_param_t *test, MPI_Comm testComm,
char *testFileName)
IOR_offset_t IME_GetFileSize(aiori_mod_opt_t *test, char *testFileName)
{
struct stat stat_buf;
IOR_offset_t aggFileSizeFromStat, tmpMin, tmpMax, tmpSum;
if (ime_native_stat(testFileName, &stat_buf) != 0) {
ERR("cannot get status of written file");
}
aggFileSizeFromStat = stat_buf.st_size;
if (hints->dryRun)
return 0;
if (test->filePerProc) {
MPI_CHECK(MPI_Allreduce(&aggFileSizeFromStat, &tmpSum, 1,
MPI_LONG_LONG_INT, MPI_SUM, testComm),
"cannot total data moved");
aggFileSizeFromStat = tmpSum;
} else {
MPI_CHECK(MPI_Allreduce(&aggFileSizeFromStat, &tmpMin, 1,
MPI_LONG_LONG_INT, MPI_MIN, testComm),
"cannot total data moved");
MPI_CHECK(MPI_Allreduce(&aggFileSizeFromStat, &tmpMax, 1,
MPI_LONG_LONG_INT, MPI_MAX, testComm),
"cannot total data moved");
if (tmpMin != tmpMax) {
if (rank == 0) {
WARN("inconsistent file size by different tasks");
}
/* incorrect, but now consistent across tasks */
aggFileSizeFromStat = tmpMin;
}
}
return(aggFileSizeFromStat);
if (ime_native_stat(testFileName, &stat_buf) != 0)
ERRF("cannot get status of written file %s",
testFileName);
return stat_buf.st_size;
}
#if (IME_NATIVE_API_VERSION >= 132)
/*
 * Create a file through the mknod interface.
*/
static int IME_Mknod(char *testFileName)
int IME_Mknod(char *testFileName)
{
int ret = ime_native_mknod(testFileName, S_IFREG | S_IRUSR, 0);
if (ret < 0)
ERR("mknod failed");
int ret = ime_native_mknod(testFileName, S_IFREG | S_IRUSR, 0);
if (ret < 0)
ERR("mknod failed");
return ret;
return ret;
}
/*
* Use IME sync to flush page cache of all opened files.
*/
static void IME_Sync(IOR_param_t * param)
void IME_Sync(aiori_mod_opt_t *param)
{
int ret = ime_native_sync(0);
if (ret != 0)
FAIL("Error executing the sync command.");
int ret = ime_native_sync(0);
if (ret != 0)
FAIL("Error executing the sync command.");
}
#endif

View File

@ -22,6 +22,7 @@
#include "ior.h"
#include "aiori.h"
#include "aiori-POSIX.h"
#include "iordef.h"
#include "utilities.h"
@ -86,7 +87,7 @@ static aiori_xfer_hint_t * hints = NULL;
static void MMAP_xfer_hints(aiori_xfer_hint_t * params){
hints = params;
aiori_posix_xfer_hints(params);
POSIX_xfer_hints(params);
}
static int MMAP_check_params(aiori_mod_opt_t * options){
@ -128,7 +129,7 @@ static void ior_mmap_file(int *file, int mflags, void *param)
}
/*
* Creat and open a file through the POSIX interface, then setup mmap.
* Create and open a file through the POSIX interface, then setup mmap.
*/
static aiori_fd_t *MMAP_Create(char *testFileName, int flags, aiori_mod_opt_t * param)
{

View File

@ -40,7 +40,6 @@ static IOR_offset_t MPIIO_Xfer(int, aiori_fd_t *, IOR_size_t *,
static void MPIIO_Close(aiori_fd_t *, aiori_mod_opt_t *);
static char* MPIIO_GetVersion();
static void MPIIO_Fsync(aiori_fd_t *, aiori_mod_opt_t *);
static void MPIIO_xfer_hints(aiori_xfer_hint_t * params);
static int MPIIO_check_params(aiori_mod_opt_t * options);
/************************** D E C L A R A T I O N S ***************************/
@ -48,6 +47,7 @@ static int MPIIO_check_params(aiori_mod_opt_t * options);
typedef struct{
MPI_File fd;
MPI_Datatype transferType; /* datatype for transfer */
MPI_Datatype contigType; /* elem datatype */
MPI_Datatype fileType; /* filetype for file view */
} mpiio_fd_t;
@ -73,7 +73,7 @@ static option_help * MPIIO_options(aiori_mod_opt_t ** init_backend_options, aior
{0, "mpiio.hintsFileName","Full name for hints file", OPTION_OPTIONAL_ARGUMENT, 's', & o->hintsFileName},
{0, "mpiio.showHints", "Show MPI hints", OPTION_FLAG, 'd', & o->showHints},
{0, "mpiio.preallocate", "Preallocate file size", OPTION_FLAG, 'd', & o->preallocate},
{0, "mpiio.useStridedDatatype", "put strided access into datatype [not working]", OPTION_FLAG, 'd', & o->useStridedDatatype},
{0, "mpiio.useStridedDatatype", "put strided access into datatype", OPTION_FLAG, 'd', & o->useStridedDatatype},
//{'P', NULL, "useSharedFilePointer -- use shared file pointer [not working]", OPTION_FLAG, 'd', & params->useSharedFilePointer},
{0, "mpiio.useFileView", "Use MPI_File_set_view", OPTION_FLAG, 'd', & o->useFileView},
LAST_OPTION
@ -108,7 +108,7 @@ ior_aiori_t mpiio_aiori = {
/***************************** F U N C T I O N S ******************************/
static aiori_xfer_hint_t * hints = NULL;
static void MPIIO_xfer_hints(aiori_xfer_hint_t * params){
void MPIIO_xfer_hints(aiori_xfer_hint_t * params){
hints = params;
}
@ -121,8 +121,6 @@ static int MPIIO_check_params(aiori_mod_opt_t * module_options){
ERR("segment size must be < 2GiB");
if (param->useSharedFilePointer)
ERR("shared file pointer not implemented");
if (param->useStridedDatatype)
ERR("strided datatype not implemented");
if (param->useStridedDatatype && (hints->blockSize < sizeof(IOR_size_t)
|| hints->transferSize <
sizeof(IOR_size_t)))
@ -140,10 +138,10 @@ static int MPIIO_check_params(aiori_mod_opt_t * module_options){
*/
int MPIIO_Access(const char *path, int mode, aiori_mod_opt_t *module_options)
{
mpiio_options_t * param = (mpiio_options_t*) module_options;
if(hints->dryRun){
return MPI_SUCCESS;
}
mpiio_options_t * param = (mpiio_options_t*) module_options;
MPI_File fd;
int mpi_mode = MPI_MODE_UNIQUE_OPEN;
MPI_Info mpiHints = MPI_INFO_NULL;
@ -185,9 +183,7 @@ static aiori_fd_t *MPIIO_Open(char *testFileName, int flags, aiori_mod_opt_t * m
offsetFactor,
tasksPerFile,
transfersPerBlock = hints->blockSize / hints->transferSize;
struct fileTypeStruct {
int globalSizes[2], localSizes[2], startIndices[2];
} fileTypeStruct;
mpiio_fd_t * mfd = malloc(sizeof(mpiio_fd_t));
memset(mfd, 0, sizeof(mpiio_fd_t));
@ -272,15 +268,18 @@ static aiori_fd_t *MPIIO_Open(char *testFileName, int flags, aiori_mod_opt_t * m
hints->numTasks)),
"cannot preallocate file");
}
/* create file view */
if (param->useFileView) {
/* Create in-memory datatype */
MPI_CHECK(MPI_Type_contiguous (hints->transferSize / sizeof(IOR_size_t), MPI_LONG_LONG_INT, & mfd->contigType), "cannot create contiguous datatype");
MPI_CHECK(MPI_Type_create_resized( mfd->contigType, 0, 0, & mfd->transferType), "cannot create resized type");
MPI_CHECK(MPI_Type_commit(& mfd->contigType), "cannot commit datatype");
MPI_CHECK(MPI_Type_commit(& mfd->transferType), "cannot commit datatype");
/* create contiguous transfer datatype */
MPI_CHECK(MPI_Type_contiguous
(hints->transferSize / sizeof(IOR_size_t),
MPI_LONG_LONG_INT, & mfd->transferType),
"cannot create contiguous datatype");
MPI_CHECK(MPI_Type_commit(& mfd->transferType),
"cannot commit datatype");
if (hints->filePerProc) {
offsetFactor = 0;
tasksPerFile = 1;
@ -289,33 +288,39 @@ static aiori_fd_t *MPIIO_Open(char *testFileName, int flags, aiori_mod_opt_t * m
tasksPerFile = hints->numTasks;
}
/*
* create file type using subarray
*/
fileTypeStruct.globalSizes[0] = 1;
fileTypeStruct.globalSizes[1] =
transfersPerBlock * tasksPerFile;
fileTypeStruct.localSizes[0] = 1;
fileTypeStruct.localSizes[1] = transfersPerBlock;
fileTypeStruct.startIndices[0] = 0;
fileTypeStruct.startIndices[1] =
transfersPerBlock * offsetFactor;
if(! hints->dryRun) {
if(! param->useStridedDatatype){
struct fileTypeStruct {
int globalSizes[2], localSizes[2], startIndices[2];
} fileTypeStruct;
MPI_CHECK(MPI_Type_create_subarray
(2, fileTypeStruct.globalSizes,
fileTypeStruct.localSizes,
fileTypeStruct.startIndices, MPI_ORDER_C,
mfd->transferType, & mfd->fileType),
"cannot create subarray");
MPI_CHECK(MPI_Type_commit(& mfd->fileType),
"cannot commit datatype");
if(! hints->dryRun){
MPI_CHECK(MPI_File_set_view(mfd->fd, (MPI_Offset) 0,
mfd->transferType,
mfd->fileType, "native",
/*
* create file type using subarray
*/
fileTypeStruct.globalSizes[0] = 1;
fileTypeStruct.globalSizes[1] = transfersPerBlock * tasksPerFile;
fileTypeStruct.localSizes[0] = 1;
fileTypeStruct.localSizes[1] = transfersPerBlock;
fileTypeStruct.startIndices[0] = 0;
fileTypeStruct.startIndices[1] = transfersPerBlock * offsetFactor;
MPI_CHECK(MPI_Type_create_subarray
(2, fileTypeStruct.globalSizes,
fileTypeStruct.localSizes,
fileTypeStruct.startIndices, MPI_ORDER_C,
mfd->contigType, & mfd->fileType),
"cannot create subarray");
MPI_CHECK(MPI_Type_commit(& mfd->fileType), "cannot commit datatype");
MPI_CHECK(MPI_File_set_view(mfd->fd, 0,
mfd->contigType,
mfd->fileType,
"native",
(MPI_Info) MPI_INFO_NULL),
"cannot set file view");
}else{
MPI_CHECK(MPI_Type_create_resized(mfd->contigType, 0, tasksPerFile * hints->blockSize, & mfd->fileType), "cannot create MPI_Type_create_hvector");
MPI_CHECK(MPI_Type_commit(& mfd->fileType), "cannot commit datatype");
}
}
}
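/* For illustration (assuming the shared-file case): with 4 tasks, -b 1m -t 256k,
 * transfersPerBlock is 4, so the subarray file type exposes each task's 4 contiguous
 * 256 KiB transfers at offset offsetFactor * 1 MiB inside every 4 MiB stripe. The
 * --mpiio.useStridedDatatype variant instead resizes contigType to an extent of
 * tasksPerFile * blockSize and resets the file view per transfer in MPIIO_Xfer() below. */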
if (mpiHints != MPI_INFO_NULL)
@ -380,7 +385,7 @@ static IOR_offset_t MPIIO_Xfer(int access, aiori_fd_t * fdp, IOR_size_t * buffer
* Access_ordered = MPI_File_read_ordered;
*/
}
/*
* 'useFileView' uses derived datatypes and individual file pointers
*/
@ -391,16 +396,28 @@ static IOR_offset_t MPIIO_Xfer(int access, aiori_fd_t * fdp, IOR_size_t * buffer
/* if unsuccessful */
length = -1;
} else {
/*
* 'useStridedDatatype' fits multi-strided pattern into a datatype;
* must use 'length' to determine repetitions (fix this for
* multi-segments someday, WEL):
* e.g., 'IOR -s 2 -b 32K -t 32K -a MPIIO -S'
*/
* 'useStridedDatatype' fits multi-strided pattern into a datatype;
* must use 'length' to determine repetitions (fix this for
* multi-segments someday, WEL):
* e.g., 'IOR -s 2 -b 32K -t 32K -a MPIIO --mpiio.useStridedDatatype --mpiio.useFileView'
*/
if (param->useStridedDatatype) {
length = hints->segmentCount;
} else {
length = 1;
if(offset >= (rank+1) * hints->blockSize){
/* we shall write only once per transferSize */
/* printf("FAKE access %d %lld\n", rank, offset); */
return hints->transferSize;
}
length = hints->segmentCount;
MPI_CHECK(MPI_File_set_view(mfd->fd, offset,
mfd->contigType,
mfd->fileType,
"native",
(MPI_Info) MPI_INFO_NULL), "cannot set file view");
/* printf("ACCESS %d %lld -> %lld\n", rank, offset, length); */
}else{
length = 1;
}
if (hints->collective) {
/* individual, collective call */
@ -415,7 +432,12 @@ static IOR_offset_t MPIIO_Xfer(int access, aiori_fd_t * fdp, IOR_size_t * buffer
mfd->transferType, &status),
"cannot access noncollective");
}
length *= hints->transferSize; /* for return value in bytes */
                /* MPI-IO driver does "noncontiguous" by transferring
                 * 'segment' regions of 'transferSize' bytes, but
                 * our caller WriteOrReadSingle does not know how to
                 * deal with us reporting that we wrote N times more
                 * data than requested. */
length = hints->transferSize;
}
} else {
/*
@ -456,7 +478,7 @@ static IOR_offset_t MPIIO_Xfer(int access, aiori_fd_t * fdp, IOR_size_t * buffer
}
}
}
return (length);
return hints->transferSize;
}
/*
@ -483,11 +505,12 @@ static void MPIIO_Close(aiori_fd_t *fdp, aiori_mod_opt_t * module_options)
MPI_CHECK(MPI_File_close(& mfd->fd), "cannot close file");
}
if (param->useFileView == TRUE) {
/*
* need to free the datatype, so done in the close process
*/
MPI_CHECK(MPI_Type_free(& mfd->fileType), "cannot free MPI file datatype");
MPI_CHECK(MPI_Type_free(& mfd->transferType), "cannot free MPI transfer datatype");
/*
* need to free the datatype, so done in the close process
*/
MPI_CHECK(MPI_Type_free(& mfd->fileType), "cannot free MPI file datatype");
MPI_CHECK(MPI_Type_free(& mfd->transferType), "cannot free MPI transfer datatype");
MPI_CHECK(MPI_Type_free(& mfd->contigType), "cannot free type");
}
free(fdp);
}
@ -562,8 +585,7 @@ static IOR_offset_t SeekOffset(MPI_File fd, IOR_offset_t offset,
* Use MPI_File_get_size() to return aggregate file size.
* NOTE: This function is used by the HDF5 and NCMPI backends.
*/
IOR_offset_t MPIIO_GetFileSize(aiori_mod_opt_t * module_options, MPI_Comm testComm,
char *testFileName)
IOR_offset_t MPIIO_GetFileSize(aiori_mod_opt_t * module_options, char *testFileName)
{
mpiio_options_t * test = (mpiio_options_t*) module_options;
if(hints->dryRun)
@ -589,26 +611,5 @@ IOR_offset_t MPIIO_GetFileSize(aiori_mod_opt_t * module_options, MPI_Comm testCo
if (mpiHints != MPI_INFO_NULL)
MPI_CHECK(MPI_Info_free(&mpiHints), "MPI_Info_free failed");
if (hints->filePerProc == TRUE) {
MPI_CHECK(MPI_Allreduce(&aggFileSizeFromStat, &tmpSum, 1,
MPI_LONG_LONG_INT, MPI_SUM, testComm),
"cannot total data moved");
aggFileSizeFromStat = tmpSum;
} else {
MPI_CHECK(MPI_Allreduce(&aggFileSizeFromStat, &tmpMin, 1,
MPI_LONG_LONG_INT, MPI_MIN, testComm),
"cannot total data moved");
MPI_CHECK(MPI_Allreduce(&aggFileSizeFromStat, &tmpMax, 1,
MPI_LONG_LONG_INT, MPI_MAX, testComm),
"cannot total data moved");
if (tmpMin != tmpMax) {
if (rank == 0) {
WARN("inconsistent file size by different tasks");
}
/* incorrect, but now consistent across tasks */
aggFileSizeFromStat = tmpMin;
}
}
return (aggFileSizeFromStat);
}

View File

@ -45,20 +45,57 @@
/**************************** P R O T O T Y P E S *****************************/
static int GetFileMode(IOR_param_t *);
static int GetFileMode(int flags);
static void *NCMPI_Create(char *, IOR_param_t *);
static void *NCMPI_Open(char *, IOR_param_t *);
static IOR_offset_t NCMPI_Xfer(int, void *, IOR_size_t *,
IOR_offset_t, IOR_param_t *);
static void NCMPI_Close(void *, IOR_param_t *);
static void NCMPI_Delete(char *, IOR_param_t *);
static aiori_fd_t *NCMPI_Create(char *, int iorflags, aiori_mod_opt_t *);
static aiori_fd_t *NCMPI_Open(char *, int iorflags, aiori_mod_opt_t *);
static IOR_offset_t NCMPI_Xfer(int, aiori_fd_t *, IOR_size_t *,
IOR_offset_t, IOR_offset_t, aiori_mod_opt_t *);
static void NCMPI_Close(aiori_fd_t *, aiori_mod_opt_t *);
static void NCMPI_Delete(char *, aiori_mod_opt_t *);
static char *NCMPI_GetVersion();
static void NCMPI_Fsync(void *, IOR_param_t *);
static IOR_offset_t NCMPI_GetFileSize(IOR_param_t *, MPI_Comm, char *);
static int NCMPI_Access(const char *, int, IOR_param_t *);
static void NCMPI_Fsync(aiori_fd_t *, aiori_mod_opt_t *);
static IOR_offset_t NCMPI_GetFileSize(aiori_mod_opt_t *, char *);
static int NCMPI_Access(const char *, int, aiori_mod_opt_t *);
/************************** D E C L A R A T I O N S ***************************/
static aiori_xfer_hint_t * hints = NULL;
static void NCMPI_xfer_hints(aiori_xfer_hint_t * params){
hints = params;
MPIIO_xfer_hints(params);
}
typedef struct {
int showHints; /* show hints */
char * hintsFileName; /* full name for hints file */
/* runtime variables */
int var_id; /* variable id handle for data set */
int firstReadCheck;
int startDataSet;
} ncmpi_options_t;
static option_help * NCMPI_options(aiori_mod_opt_t ** init_backend_options, aiori_mod_opt_t * init_values){
ncmpi_options_t * o = malloc(sizeof(ncmpi_options_t));
if (init_values != NULL){
memcpy(o, init_values, sizeof(ncmpi_options_t));
}else{
memset(o, 0, sizeof(ncmpi_options_t));
}
*init_backend_options = (aiori_mod_opt_t*) o;
option_help h [] = {
{0, "mpiio.hintsFileName","Full name for hints file", OPTION_OPTIONAL_ARGUMENT, 's', & o->hintsFileName},
{0, "mpiio.showHints", "Show MPI hints", OPTION_FLAG, 'd', & o->showHints},
LAST_OPTION
};
option_help * help = malloc(sizeof(h));
memcpy(help, h, sizeof(h));
return help;
}
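/* Note: the option names reuse the "mpiio." prefix, and NCMPI_xfer_hints() above
 * forwards the transfer hints to MPIIO_xfer_hints(), so the PnetCDF backend shares
 * the MPI-IO hint handling (it also reuses MPIIO_Delete, MPIIO_GetFileSize and
 * MPIIO_Access below). */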
ior_aiori_t ncmpi_aiori = {
.name = "NCMPI",
@ -76,6 +113,8 @@ ior_aiori_t ncmpi_aiori = {
.rmdir = aiori_posix_rmdir,
.access = NCMPI_Access,
.stat = aiori_posix_stat,
.get_options = NCMPI_options,
.xfer_hints = NCMPI_xfer_hints,
};
/***************************** F U N C T I O N S ******************************/
@ -83,15 +122,16 @@ ior_aiori_t ncmpi_aiori = {
/*
* Create and open a file through the NCMPI interface.
*/
static void *NCMPI_Create(char *testFileName, IOR_param_t * param)
static aiori_fd_t *NCMPI_Create(char *testFileName, int iorflags, aiori_mod_opt_t * param)
{
int *fd;
int fd_mode;
MPI_Info mpiHints = MPI_INFO_NULL;
ncmpi_options_t * o = (ncmpi_options_t*) param;
/* read and set MPI file hints from hintsFile */
SetHints(&mpiHints, param->hintsFileName);
if (rank == 0 && param->showHints) {
SetHints(&mpiHints, o->hintsFileName);
if (rank == 0 && o->showHints) {
fprintf(stdout, "\nhints passed to MPI_File_open() {\n");
ShowHints(&mpiHints);
fprintf(stdout, "}\n");
@ -101,7 +141,7 @@ static void *NCMPI_Create(char *testFileName, IOR_param_t * param)
if (fd == NULL)
ERR("malloc() failed");
fd_mode = GetFileMode(param);
fd_mode = GetFileMode(iorflags);
NCMPI_CHECK(ncmpi_create(testComm, testFileName, fd_mode,
mpiHints, fd), "cannot create file");
@ -111,7 +151,7 @@ static void *NCMPI_Create(char *testFileName, IOR_param_t * param)
#if defined(PNETCDF_VERSION_MAJOR) && (PNETCDF_VERSION_MAJOR > 1 || PNETCDF_VERSION_MINOR >= 2)
/* ncmpi_get_file_info is first available in 1.2.0 */
if (rank == 0 && param->showHints) {
if (rank == 0 && o->showHints) {
MPI_Info info_used;
MPI_CHECK(ncmpi_get_file_info(*fd, &info_used),
"cannot inquire file info");
@ -123,21 +163,22 @@ static void *NCMPI_Create(char *testFileName, IOR_param_t * param)
}
#endif
return (fd);
return (aiori_fd_t*)(fd);
}
/*
* Open a file through the NCMPI interface.
*/
static void *NCMPI_Open(char *testFileName, IOR_param_t * param)
static aiori_fd_t *NCMPI_Open(char *testFileName, int iorflags, aiori_mod_opt_t * param)
{
int *fd;
int fd_mode;
MPI_Info mpiHints = MPI_INFO_NULL;
ncmpi_options_t * o = (ncmpi_options_t*) param;
/* read and set MPI file hints from hintsFile */
SetHints(&mpiHints, param->hintsFileName);
if (rank == 0 && param->showHints) {
SetHints(&mpiHints, o->hintsFileName);
if (rank == 0 && o->showHints) {
fprintf(stdout, "\nhints passed to MPI_File_open() {\n");
ShowHints(&mpiHints);
fprintf(stdout, "}\n");
@ -147,7 +188,7 @@ static void *NCMPI_Open(char *testFileName, IOR_param_t * param)
if (fd == NULL)
ERR("malloc() failed");
fd_mode = GetFileMode(param);
fd_mode = GetFileMode(iorflags);
NCMPI_CHECK(ncmpi_open(testComm, testFileName, fd_mode,
mpiHints, fd), "cannot open file");
@ -157,7 +198,7 @@ static void *NCMPI_Open(char *testFileName, IOR_param_t * param)
#if defined(PNETCDF_VERSION_MAJOR) && (PNETCDF_VERSION_MAJOR > 1 || PNETCDF_VERSION_MINOR >= 2)
/* ncmpi_get_file_info is first available in 1.2.0 */
if (rank == 0 && param->showHints) {
if (rank == 0 && o->showHints) {
MPI_Info info_used;
MPI_CHECK(ncmpi_get_file_info(*fd, &info_used),
"cannot inquire file info");
@ -169,51 +210,43 @@ static void *NCMPI_Open(char *testFileName, IOR_param_t * param)
}
#endif
return (fd);
return (aiori_fd_t*)(fd);
}
/*
* Write or read access to file using the NCMPI interface.
*/
static IOR_offset_t NCMPI_Xfer(int access, void *fd, IOR_size_t * buffer,
IOR_offset_t length, IOR_param_t * param)
static IOR_offset_t NCMPI_Xfer(int access, aiori_fd_t *fd, IOR_size_t * buffer, IOR_offset_t transferSize, IOR_offset_t offset, aiori_mod_opt_t * param)
{
signed char *bufferPtr = (signed char *)buffer;
static int firstReadCheck = FALSE, startDataSet;
ncmpi_options_t * o = (ncmpi_options_t*) param;
int var_id, dim_id[NUM_DIMS];
MPI_Offset bufSize[NUM_DIMS], offset[NUM_DIMS];
MPI_Offset bufSize[NUM_DIMS], offsets[NUM_DIMS];
IOR_offset_t segmentPosition;
int segmentNum, transferNum;
/* determine by offset if need to start data set */
if (param->filePerProc == TRUE) {
if (hints->filePerProc == TRUE) {
segmentPosition = (IOR_offset_t) 0;
} else {
segmentPosition =
(IOR_offset_t) ((rank + rankOffset) % param->numTasks)
* param->blockSize;
segmentPosition = (IOR_offset_t) ((rank + rankOffset) % hints->numTasks) * hints->blockSize;
}
if ((int)(param->offset - segmentPosition) == 0) {
startDataSet = TRUE;
if ((int)(offset - segmentPosition) == 0) {
o->startDataSet = TRUE;
/*
* this toggle is for the read check operation, which passes through
* this function twice; note that this function will open a data set
* only on the first read check and close only on the second
*/
if (access == READCHECK) {
if (firstReadCheck == TRUE) {
firstReadCheck = FALSE;
} else {
firstReadCheck = TRUE;
}
o->firstReadCheck = ! o->firstReadCheck;
}
}
if (startDataSet == TRUE &&
(access != READCHECK || firstReadCheck == TRUE)) {
if (o->startDataSet == TRUE &&
(access != READCHECK || o->firstReadCheck == TRUE)) {
if (access == WRITE) {
int numTransfers =
param->blockSize / param->transferSize;
int numTransfers = hints->blockSize / hints->transferSize;
/* reshape 1D array to 3D array:
[segmentCount*numTasks][numTransfers][transferSize]
@ -229,7 +262,7 @@ static IOR_offset_t NCMPI_Xfer(int access, void *fd, IOR_size_t * buffer,
"cannot define data set dimensions");
NCMPI_CHECK(ncmpi_def_dim
(*(int *)fd, "transfer_size",
param->transferSize, &dim_id[2]),
hints->transferSize, &dim_id[2]),
"cannot define data set dimensions");
NCMPI_CHECK(ncmpi_def_var
(*(int *)fd, "data_var", NC_BYTE, NUM_DIMS,
@ -244,77 +277,72 @@ static IOR_offset_t NCMPI_Xfer(int access, void *fd, IOR_size_t * buffer,
"cannot retrieve data set variable");
}
if (param->collective == FALSE) {
if (hints->collective == FALSE) {
NCMPI_CHECK(ncmpi_begin_indep_data(*(int *)fd),
"cannot enable independent data mode");
}
param->var_id = var_id;
startDataSet = FALSE;
o->var_id = var_id;
o->startDataSet = FALSE;
}
var_id = param->var_id;
var_id = o->var_id;
/* calculate the segment number */
segmentNum = param->offset / (param->numTasks * param->blockSize);
segmentNum = offset / (hints->numTasks * hints->blockSize);
/* calculate the transfer number in each block */
transferNum = param->offset % param->blockSize / param->transferSize;
transferNum = offset % hints->blockSize / hints->transferSize;
/* read/write the 3rd dim of the dataset, each is of
amount param->transferSize */
bufSize[0] = 1;
bufSize[1] = 1;
bufSize[2] = param->transferSize;
bufSize[2] = transferSize;
offset[0] = segmentNum * param->numTasks + rank;
offset[1] = transferNum;
offset[2] = 0;
offsets[0] = segmentNum * hints->numTasks + rank;
offsets[1] = transferNum;
offsets[2] = 0;
/* access the file */
if (access == WRITE) { /* WRITE */
if (param->collective) {
if (hints->collective) {
NCMPI_CHECK(ncmpi_put_vara_schar_all
(*(int *)fd, var_id, offset, bufSize,
bufferPtr),
(*(int *)fd, var_id, offsets, bufSize, bufferPtr),
"cannot write to data set");
} else {
NCMPI_CHECK(ncmpi_put_vara_schar
(*(int *)fd, var_id, offset, bufSize,
bufferPtr),
(*(int *)fd, var_id, offsets, bufSize, bufferPtr),
"cannot write to data set");
}
} else { /* READ or CHECK */
if (param->collective == TRUE) {
if (hints->collective == TRUE) {
NCMPI_CHECK(ncmpi_get_vara_schar_all
(*(int *)fd, var_id, offset, bufSize,
bufferPtr),
(*(int *)fd, var_id, offsets, bufSize, bufferPtr),
"cannot read from data set");
} else {
NCMPI_CHECK(ncmpi_get_vara_schar
(*(int *)fd, var_id, offset, bufSize,
bufferPtr),
(*(int *)fd, var_id, offsets, bufSize, bufferPtr),
"cannot read from data set");
}
}
return (length);
return (transferSize);
}
/*
* Perform fsync().
*/
static void NCMPI_Fsync(void *fd, IOR_param_t * param)
static void NCMPI_Fsync(aiori_fd_t *fd, aiori_mod_opt_t * param)
{
;
}
/*
* Close a file through the NCMPI interface.
*/
static void NCMPI_Close(void *fd, IOR_param_t * param)
static void NCMPI_Close(aiori_fd_t *fd, aiori_mod_opt_t * param)
{
if (param->collective == FALSE) {
if (hints->collective == FALSE) {
NCMPI_CHECK(ncmpi_end_indep_data(*(int *)fd),
"cannot disable independent data mode");
}
@ -325,7 +353,7 @@ static void NCMPI_Close(void *fd, IOR_param_t * param)
/*
* Delete a file through the NCMPI interface.
*/
static void NCMPI_Delete(char *testFileName, IOR_param_t * param)
static void NCMPI_Delete(char *testFileName, aiori_mod_opt_t * param)
{
return(MPIIO_Delete(testFileName, param));
}
@ -341,35 +369,35 @@ static char* NCMPI_GetVersion()
/*
* Return the correct file mode for NCMPI.
*/
static int GetFileMode(IOR_param_t * param)
static int GetFileMode(int flags)
{
int fd_mode = 0;
/* set IOR file flags to NCMPI flags */
/* -- file open flags -- */
if (param->openFlags & IOR_RDONLY) {
if (flags & IOR_RDONLY) {
fd_mode |= NC_NOWRITE;
}
if (param->openFlags & IOR_WRONLY) {
fprintf(stdout, "File write only not implemented in NCMPI\n");
if (flags & IOR_WRONLY) {
WARN("File write only not implemented in NCMPI");
}
if (param->openFlags & IOR_RDWR) {
if (flags & IOR_RDWR) {
fd_mode |= NC_WRITE;
}
if (param->openFlags & IOR_APPEND) {
fprintf(stdout, "File append not implemented in NCMPI\n");
if (flags & IOR_APPEND) {
WARN("File append not implemented in NCMPI");
}
if (param->openFlags & IOR_CREAT) {
if (flags & IOR_CREAT) {
fd_mode |= NC_CLOBBER;
}
if (param->openFlags & IOR_EXCL) {
fprintf(stdout, "Exclusive access not implemented in NCMPI\n");
if (flags & IOR_EXCL) {
WARN("Exclusive access not implemented in NCMPI");
}
if (param->openFlags & IOR_TRUNC) {
fprintf(stdout, "File truncation not implemented in NCMPI\n");
if (flags & IOR_TRUNC) {
WARN("File truncation not implemented in NCMPI");
}
if (param->openFlags & IOR_DIRECT) {
fprintf(stdout, "O_DIRECT not implemented in NCMPI\n");
if (flags & IOR_DIRECT) {
WARN("O_DIRECT not implemented in NCMPI");
}
/* to enable > 4GB file size */
@ -381,16 +409,16 @@ static int GetFileMode(IOR_param_t * param)
/*
* Use MPIIO call to get file size.
*/
static IOR_offset_t NCMPI_GetFileSize(IOR_param_t * test, MPI_Comm testComm,
static IOR_offset_t NCMPI_GetFileSize(aiori_mod_opt_t * opt,
char *testFileName)
{
return(MPIIO_GetFileSize(test, testComm, testFileName));
return(MPIIO_GetFileSize(opt, testFileName));
}
/*
* Use MPIIO call to check for access.
*/
static int NCMPI_Access(const char *path, int mode, IOR_param_t *param)
static int NCMPI_Access(const char *path, int mode, aiori_mod_opt_t *param)
{
return(MPIIO_Access(path, mode, param));
}

View File

@ -28,14 +28,19 @@ static option_help options [] = {
/**************************** P R O T O T Y P E S *****************************/
static option_help * PMDK_options();
static void *PMDK_Create(char *, IOR_param_t *);
static void *PMDK_Open(char *, IOR_param_t *);
static IOR_offset_t PMDK_Xfer(int, void *, IOR_size_t *, IOR_offset_t, IOR_param_t *);
static void PMDK_Fsync(void *, IOR_param_t *);
static void PMDK_Close(void *, IOR_param_t *);
static void PMDK_Delete(char *, IOR_param_t *);
static IOR_offset_t PMDK_GetFileSize(IOR_param_t *, MPI_Comm, char *);
static aiori_fd_t *PMDK_Create(char *,int iorflags, aiori_mod_opt_t *);
static aiori_fd_t *PMDK_Open(char *, int iorflags, aiori_mod_opt_t *);
static IOR_offset_t PMDK_Xfer(int, aiori_fd_t *, IOR_size_t *, IOR_offset_t, IOR_offset_t, aiori_mod_opt_t *);
static void PMDK_Fsync(aiori_fd_t *, aiori_mod_opt_t *);
static void PMDK_Close(aiori_fd_t *, aiori_mod_opt_t *);
static void PMDK_Delete(char *, aiori_mod_opt_t *);
static IOR_offset_t PMDK_GetFileSize(aiori_mod_opt_t *, char *);
static aiori_xfer_hint_t * hints = NULL;
static void PMDK_xfer_hints(aiori_xfer_hint_t * params){
hints = params;
}
/************************** D E C L A R A T I O N S ***************************/
@ -55,6 +60,7 @@ ior_aiori_t pmdk_aiori = {
.delete = PMDK_Delete,
.get_version = aiori_get_version,
.fsync = PMDK_Fsync,
.xfer_hints = PMDK_xfer_hints,
.get_file_size = PMDK_GetFileSize,
.statfs = aiori_posix_statfs,
.mkdir = aiori_posix_mkdir,
@ -78,18 +84,18 @@ static option_help * PMDK_options(){
/*
* Create and open a memory space through the PMDK interface.
*/
static void *PMDK_Create(char * testFileName, IOR_param_t * param){
static aiori_fd_t *PMDK_Create(char * testFileName, int iorflags, aiori_mod_opt_t * param){
char *pmemaddr = NULL;
int is_pmem;
size_t mapped_len;
size_t open_length;
if(!param->filePerProc){
if(! hints->filePerProc){
fprintf(stdout, "\nPMDK functionality can only be used with filePerProc functionality\n");
MPI_CHECK(MPI_Abort(MPI_COMM_WORLD, -1), "MPI_Abort() error");
}
open_length = param->blockSize * param->segmentCount;
open_length = hints->blockSize * hints->segmentCount;
if((pmemaddr = pmem_map_file(testFileName, open_length,
PMEM_FILE_CREATE|PMEM_FILE_EXCL,
@ -98,7 +104,7 @@ static void *PMDK_Create(char * testFileName, IOR_param_t * param){
perror("pmem_map_file");
MPI_CHECK(MPI_Abort(MPI_COMM_WORLD, -1), "MPI_Abort() error");
}
if(!is_pmem){
fprintf(stdout, "\n is_pmem is %d\n",is_pmem);
fprintf(stdout, "\npmem_map_file thinks the hardware being used is not pmem\n");
@ -106,7 +112,7 @@ static void *PMDK_Create(char * testFileName, IOR_param_t * param){
}
return((void *)pmemaddr);
} /* PMDK_Create() */
@ -115,20 +121,19 @@ static void *PMDK_Create(char * testFileName, IOR_param_t * param){
/*
* Open a memory space through the PMDK interface.
*/
static void *PMDK_Open(char * testFileName, IOR_param_t * param){
static aiori_fd_t *PMDK_Open(char * testFileName,int iorflags, aiori_mod_opt_t * param){
char *pmemaddr = NULL;
int is_pmem;
size_t mapped_len;
size_t open_length;
if(!param->filePerProc){
if(!hints->filePerProc){
fprintf(stdout, "\nPMDK functionality can only be used with filePerProc functionality\n");
MPI_CHECK(MPI_Abort(MPI_COMM_WORLD, -1), "MPI_Abort() error");
}
open_length = param->blockSize * param->segmentCount;
open_length = hints->blockSize * hints->segmentCount;
if((pmemaddr = pmem_map_file(testFileName, 0,
PMEM_FILE_EXCL,
@ -138,12 +143,12 @@ static void *PMDK_Open(char * testFileName, IOR_param_t * param){
fprintf(stdout, "\n %ld %ld\n",open_length, mapped_len);
MPI_CHECK(MPI_Abort(MPI_COMM_WORLD, -1), "MPI_Abort() error");
}
if(!is_pmem){
fprintf(stdout, "pmem_map_file thinks the hardware being used is not pmem\n");
MPI_CHECK(MPI_Abort(MPI_COMM_WORLD, -1), "MPI_Abort() error");
}
return((void *)pmemaddr);
} /* PMDK_Open() */
@ -153,8 +158,8 @@ static void *PMDK_Open(char * testFileName, IOR_param_t * param){
* Write or read access to a memory space created with PMDK. Include drain/flush functionality.
*/
static IOR_offset_t PMDK_Xfer(int access, void *file, IOR_size_t * buffer,
IOR_offset_t length, IOR_param_t * param){
static IOR_offset_t PMDK_Xfer(int access, aiori_fd_t *file, IOR_size_t * buffer,
IOR_offset_t length, IOR_offset_t offset, aiori_mod_opt_t * param){
int xferRetries = 0;
long long remaining = (long long)length;
char * ptr = (char *)buffer;
@ -162,11 +167,11 @@ static IOR_offset_t PMDK_Xfer(int access, void *file, IOR_size_t * buffer,
long long i;
long long offset_size;
offset_size = param->offset;
offset_size = offset;
if(access == WRITE){
if(param->fsync){
pmem_memcpy_nodrain(&file[offset_size], ptr, length);
if(hints->fsyncPerWrite){
pmem_memcpy_nodrain(&file[offset_size], ptr, length);
}else{
pmem_memcpy_persist(&file[offset_size], ptr, length);
}
@ -183,7 +188,7 @@ static IOR_offset_t PMDK_Xfer(int access, void *file, IOR_size_t * buffer,
* Perform fsync().
*/
static void PMDK_Fsync(void *fd, IOR_param_t * param)
static void PMDK_Fsync(aiori_fd_t *fd, aiori_mod_opt_t * param)
{
pmem_drain();
} /* PMDK_Fsync() */
@ -194,11 +199,10 @@ static void PMDK_Fsync(void *fd, IOR_param_t * param)
* Stub for close functionality that is not required for PMDK
*/
static void PMDK_Close(void *fd, IOR_param_t * param){
static void PMDK_Close(aiori_fd_t *fd, aiori_mod_opt_t * param){
size_t open_length;
open_length = param->transferSize;
open_length = hints->transferSize;
pmem_unmap(fd, open_length);
} /* PMDK_Close() */
@ -207,38 +211,25 @@ static void PMDK_Close(void *fd, IOR_param_t * param){
* Delete the file backing a memory space through PMDK
*/
static void PMDK_Delete(char *testFileName, IOR_param_t * param)
static void PMDK_Delete(char *testFileName, aiori_mod_opt_t * param)
{
char errmsg[256];
sprintf(errmsg,"[RANK %03d]:cannot delete file %s\n",rank,testFileName);
if (unlink(testFileName) != 0) WARN(errmsg);
} /* PMDK_Delete() */
/******************************************************************************/
/*
* Determine api version.
*/
static void PMDK_SetVersion(IOR_param_t *test)
{
strcpy(test->apiVersion, test->api);
} /* PMDK_SetVersion() */
/******************************************************************************/
/*
* Use POSIX stat() to return aggregate file size.
*/
static IOR_offset_t PMDK_GetFileSize(IOR_param_t * test,
MPI_Comm testComm,
static IOR_offset_t PMDK_GetFileSize(aiori_mod_opt_t * test,
char * testFileName)
{
struct stat stat_buf;
IOR_offset_t aggFileSizeFromStat,
tmpMin, tmpMax, tmpSum;
if (test->filePerProc == FALSE) {
if (hints->filePerProc == FALSE) {
fprintf(stdout, "\nPMDK functionality can only be used with filePerProc functionality\n");
MPI_CHECK(MPI_Abort(MPI_COMM_WORLD, -1), "MPI_Abort() error");
}
@ -248,10 +239,5 @@ static IOR_offset_t PMDK_GetFileSize(IOR_param_t * test,
}
aggFileSizeFromStat = stat_buf.st_size;
MPI_CHECK(MPI_Allreduce(&aggFileSizeFromStat, &tmpSum, 1,
MPI_LONG_LONG_INT, MPI_SUM, testComm),
"cannot total data moved");
aggFileSizeFromStat = tmpSum;
return(aggFileSizeFromStat);
} /* PMDK_GetFileSize() */

View File

@ -34,7 +34,7 @@
#ifdef HAVE_LINUX_LUSTRE_LUSTRE_USER_H
# include <linux/lustre/lustre_user.h>
#elif defined(HAVE_LUSTRE_LUSTRE_USER_H)
#elif defined(HAVE_LUSTRE_USER)
# include <lustre/lustre_user.h>
#endif
#ifdef HAVE_GPFS_H
@ -55,6 +55,22 @@
#include "iordef.h"
#include "utilities.h"
#include "aiori-POSIX.h"
#ifdef HAVE_GPU_DIRECT
typedef long long loff_t;
#include <cuda_runtime.h>
#include <cufile.h>
#endif
typedef struct {
int fd;
#ifdef HAVE_GPU_DIRECT
CUfileHandle_t cf_handle;
#endif
} posix_fd;
#ifndef open64 /* necessary for TRU64 -- */
# define open64 open /* unlikely, but may pose */
#endif /* not open64 */ /* conflicting prototypes */
@ -67,35 +83,32 @@
# define O_BINARY 0
#endif
#ifdef HAVE_GPU_DIRECT
static const char* cuFileGetErrorString(CUfileError_t status){
if(IS_CUDA_ERR(status)){
return cudaGetErrorString(status.err);
}
return strerror(status.err);
}
static void init_cufile(posix_fd * pfd){
CUfileDescr_t cf_descr = (CUfileDescr_t){
.handle.fd = pfd->fd,
.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD
};
CUfileError_t status = cuFileHandleRegister(& pfd->cf_handle, & cf_descr);
if(status.err != CU_FILE_SUCCESS){
EWARNF("Could not register handle %s", cuFileGetErrorString(status));
}
}
#endif
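/* GPUDirect call sequence used by this backend (see the functions below):
 * cuFileDriverOpen() in POSIX_Initialize(), cuFileHandleRegister() via init_cufile()
 * when a file is created or opened, cuFileWrite()/cuFileRead() in POSIX_Xfer(),
 * cuFileHandleDeregister() in POSIX_Close(), and cuFileDriverClose() in
 * POSIX_Finalize(). */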
/**************************** P R O T O T Y P E S *****************************/
static void POSIX_Initialize(aiori_mod_opt_t * options);
static void POSIX_Finalize(aiori_mod_opt_t * options);
static IOR_offset_t POSIX_Xfer(int, aiori_fd_t *, IOR_size_t *,
IOR_offset_t, IOR_offset_t, aiori_mod_opt_t *);
static void POSIX_Fsync(aiori_fd_t *, aiori_mod_opt_t *);
static void POSIX_Sync(aiori_mod_opt_t * );
static int POSIX_check_params(aiori_mod_opt_t * options);
/************************** O P T I O N S *****************************/
typedef struct{
/* in case of a change, please update depending MMAP module too */
int direct_io;
/* Lustre variables */
int lustre_set_striping; /* flag that we need to set lustre striping */
int lustre_stripe_count;
int lustre_stripe_size;
int lustre_start_ost;
int lustre_ignore_locks;
/* gpfs variables */
int gpfs_hint_access; /* use gpfs "access range" hint */
int gpfs_release_token; /* immediately release GPFS tokens after
creating or opening a file */
/* beegfs variables */
int beegfs_numTargets; /* number of storage targets to use */
int beegfs_chunkSize; /* stripe pattern for new files */
} posix_options_t;
option_help * POSIX_options(aiori_mod_opt_t ** init_backend_options, aiori_mod_opt_t * init_values){
posix_options_t * o = malloc(sizeof(posix_options_t));
@ -105,6 +118,7 @@ option_help * POSIX_options(aiori_mod_opt_t ** init_backend_options, aiori_mod_o
}else{
memset(o, 0, sizeof(posix_options_t));
o->direct_io = 0;
o->lustre_stripe_count = -1;
o->lustre_start_ost = -1;
o->beegfs_numTargets = -1;
o->beegfs_chunkSize = -1;
@ -123,11 +137,14 @@ option_help * POSIX_options(aiori_mod_opt_t ** init_backend_options, aiori_mod_o
{0, "posix.gpfs.releasetoken", "", OPTION_OPTIONAL_ARGUMENT, 'd', & o->gpfs_release_token},
#endif
#ifdef HAVE_LUSTRE_LUSTRE_USER_H
#ifdef HAVE_LUSTRE_USER
{0, "posix.lustre.stripecount", "", OPTION_OPTIONAL_ARGUMENT, 'd', & o->lustre_stripe_count},
{0, "posix.lustre.stripesize", "", OPTION_OPTIONAL_ARGUMENT, 'd', & o->lustre_stripe_size},
{0, "posix.lustre.startost", "", OPTION_OPTIONAL_ARGUMENT, 'd', & o->lustre_start_ost},
{0, "posix.lustre.ignorelocks", "", OPTION_FLAG, 'd', & o->lustre_ignore_locks},
#endif
#ifdef HAVE_GPU_DIRECT
{0, "gpuDirect", "allocate I/O buffers on the GPU", OPTION_FLAG, 'd', & o->gpuDirect},
#endif
LAST_OPTION
};
@ -143,19 +160,22 @@ option_help * POSIX_options(aiori_mod_opt_t ** init_backend_options, aiori_mod_o
ior_aiori_t posix_aiori = {
.name = "POSIX",
.name_legacy = NULL,
.initialize = POSIX_Initialize,
.finalize = POSIX_Finalize,
.create = POSIX_Create,
.mknod = POSIX_Mknod,
.open = POSIX_Open,
.xfer = POSIX_Xfer,
.close = POSIX_Close,
.delete = POSIX_Delete,
.xfer_hints = aiori_posix_xfer_hints,
.xfer_hints = POSIX_xfer_hints,
.get_version = aiori_get_version,
.fsync = POSIX_Fsync,
.get_file_size = POSIX_GetFileSize,
.statfs = aiori_posix_statfs,
.mkdir = aiori_posix_mkdir,
.rmdir = aiori_posix_rmdir,
.rename = POSIX_Rename,
.access = aiori_posix_access,
.stat = aiori_posix_stat,
.get_options = POSIX_options,
@ -168,16 +188,24 @@ ior_aiori_t posix_aiori = {
static aiori_xfer_hint_t * hints = NULL;
void aiori_posix_xfer_hints(aiori_xfer_hint_t * params){
void POSIX_xfer_hints(aiori_xfer_hint_t * params){
hints = params;
}
static int POSIX_check_params(aiori_mod_opt_t * param){
int POSIX_check_params(aiori_mod_opt_t * param){
posix_options_t * o = (posix_options_t*) param;
if (o->beegfs_chunkSize != -1 && (!ISPOWEROFTWO(o->beegfs_chunkSize) || o->beegfs_chunkSize < (1<<16)))
ERR("beegfsChunkSize must be a power of two and >64k");
if(o->lustre_stripe_count != -1 || o->lustre_stripe_size != 0)
o->lustre_set_striping = 1;
if(o->gpuDirect && ! o->direct_io){
ERR("GPUDirect required direct I/O to be used!");
}
#ifndef HAVE_GPU_DIRECT
if(o->gpuDirect){
ERR("GPUDirect support is not compiled");
}
#endif
return 0;
}
@ -203,7 +231,7 @@ void gpfs_free_all_locks(int fd)
EWARNF("gpfs_fcntl(%d, ...) release all locks hint failed.", fd);
}
}
void gpfs_access_start(int fd, IOR_offset_t length, int access)
void gpfs_access_start(int fd, IOR_offset_t length, IOR_offset_t offset, int access)
{
int rc;
struct {
@ -217,7 +245,7 @@ void gpfs_access_start(int fd, IOR_offset_t length, int access)
take_locks.access.structLen = sizeof(take_locks.access);
take_locks.access.structType = GPFS_ACCESS_RANGE;
take_locks.access.start = hints->offset;
take_locks.access.start = offset;
take_locks.access.length = length;
take_locks.access.isWrite = (access == WRITE);
@ -227,7 +255,7 @@ void gpfs_access_start(int fd, IOR_offset_t length, int access)
}
}
void gpfs_access_end(int fd, IOR_offset_t length, int access)
void gpfs_access_end(int fd, IOR_offset_t length, IOR_offset_t offset, int access)
{
int rc;
struct {
@ -242,7 +270,7 @@ void gpfs_access_end(int fd, IOR_offset_t length, int access)
free_locks.free.structLen = sizeof(free_locks.free);
free_locks.free.structType = GPFS_FREE_RANGE;
free_locks.free.start = hints->offset;
free_locks.free.start = offset;
free_locks.free.length = length;
rc = gpfs_fcntl(fd, &free_locks);
@ -368,42 +396,39 @@ bool beegfs_createFilePath(char* filepath, mode_t mode, int numTargets, int chun
/*
* Creat and open a file through the POSIX interface.
* Create and open a file through the POSIX interface.
*/
aiori_fd_t *POSIX_Create(char *testFileName, int flags, aiori_mod_opt_t * param)
{
int fd_oflag = O_BINARY;
int mode = 0664;
int *fd;
fd = (int *)malloc(sizeof(int));
if (fd == NULL)
ERR("Unable to malloc file descriptor");
posix_fd * pfd = safeMalloc(sizeof(posix_fd));
posix_options_t * o = (posix_options_t*) param;
if (o->direct_io == TRUE){
set_o_direct_flag(&fd_oflag);
set_o_direct_flag(& fd_oflag);
}
if(hints->dryRun)
return (aiori_fd_t*) 0;
#ifdef HAVE_LUSTRE_LUSTRE_USER_H
#ifdef HAVE_LUSTRE_USER
/* Add a #define for FASYNC if not available, as it forms part of
* the Lustre O_LOV_DELAY_CREATE definition. */
#ifndef FASYNC
#define FASYNC 00020000 /* fcntl, for BSD compatibility */
#endif
if (o->lustre_set_striping) {
/* In the single-shared-file case, task 0 has to creat the
file with the Lustre striping options before any other processes
open the file */
/* In the single-shared-file case, task 0 has to create the
file with the Lustre striping options before any other
processes open the file */
if (!hints->filePerProc && rank != 0) {
MPI_CHECK(MPI_Barrier(testComm), "barrier error");
fd_oflag |= O_RDWR;
*fd = open64(testFileName, fd_oflag, mode);
if (*fd < 0)
ERRF("open64(\"%s\", %d, %#o) failed",
testFileName, fd_oflag, mode);
pfd->fd = open64(testFileName, fd_oflag, mode);
if (pfd->fd < 0){
ERRF("open64(\"%s\", %d, %#o) failed. Error: %s",
testFileName, fd_oflag, mode, strerror(errno));
}
} else {
struct lov_user_md opts = { 0 };
@ -416,30 +441,24 @@ aiori_fd_t *POSIX_Create(char *testFileName, int flags, aiori_mod_opt_t * param)
/* File needs to be opened O_EXCL because we cannot set
* Lustre striping information on a pre-existing file.*/
fd_oflag |=
O_CREAT | O_EXCL | O_RDWR | O_LOV_DELAY_CREATE;
*fd = open64(testFileName, fd_oflag, mode);
if (*fd < 0) {
fprintf(stdout, "\nUnable to open '%s': %s\n",
fd_oflag |= O_CREAT | O_EXCL | O_RDWR | O_LOV_DELAY_CREATE;
pfd->fd = open64(testFileName, fd_oflag, mode);
if (pfd->fd < 0) {
ERRF("Unable to open '%s': %s\n",
testFileName, strerror(errno));
MPI_CHECK(MPI_Abort(MPI_COMM_WORLD, -1),
"MPI_Abort() error");
} else if (ioctl(*fd, LL_IOC_LOV_SETSTRIPE, &opts)) {
} else if (ioctl(pfd->fd, LL_IOC_LOV_SETSTRIPE, &opts)) {
char *errmsg = "stripe already set";
if (errno != EEXIST && errno != EALREADY)
errmsg = strerror(errno);
fprintf(stdout,
"\nError on ioctl for '%s' (%d): %s\n",
testFileName, *fd, errmsg);
MPI_CHECK(MPI_Abort(MPI_COMM_WORLD, -1),
"MPI_Abort() error");
ERRF("Error on ioctl for '%s' (%d): %s\n",
testFileName, pfd->fd, errmsg);
}
if (!hints->filePerProc)
MPI_CHECK(MPI_Barrier(testComm),
"barrier error");
}
} else {
#endif /* HAVE_LUSTRE_LUSTRE_USER_H */
#endif /* HAVE_LUSTRE_USER */
fd_oflag |= O_CREAT | O_RDWR;
@ -458,34 +477,40 @@ aiori_fd_t *POSIX_Create(char *testFileName, int flags, aiori_mod_opt_t * param)
}
#endif /* HAVE_BEEGFS_BEEGFS_H */
*fd = open64(testFileName, fd_oflag, mode);
if (*fd < 0)
ERRF("open64(\"%s\", %d, %#o) failed",
testFileName, fd_oflag, mode);
pfd->fd = open64(testFileName, fd_oflag, mode);
if (pfd->fd < 0){
ERRF("open64(\"%s\", %d, %#o) failed. Error: %s",
testFileName, fd_oflag, mode, strerror(errno));
}
#ifdef HAVE_LUSTRE_LUSTRE_USER_H
#ifdef HAVE_LUSTRE_USER
}
if (o->lustre_ignore_locks) {
int lustre_ioctl_flags = LL_FILE_IGNORE_LOCK;
if (ioctl(*fd, LL_IOC_SETFLAGS, &lustre_ioctl_flags) == -1)
ERRF("ioctl(%d, LL_IOC_SETFLAGS, ...) failed", *fd);
if (ioctl(pfd->fd, LL_IOC_SETFLAGS, &lustre_ioctl_flags) == -1)
ERRF("ioctl(%d, LL_IOC_SETFLAGS, ...) failed", pfd->fd);
}
#endif /* HAVE_LUSTRE_LUSTRE_USER_H */
#endif /* HAVE_LUSTRE_USER */
#ifdef HAVE_GPFS_FCNTL_H
/* in the single shared file case, immediately release all locks, with
* the intent that we can avoid some byte range lock revocation:
* everyone will be writing/reading from individual regions */
if (o->gpfs_release_token ) {
gpfs_free_all_locks(*fd);
gpfs_free_all_locks(pfd->fd);
}
#endif
return (aiori_fd_t*) fd;
#ifdef HAVE_GPU_DIRECT
if(o->gpuDirect){
init_cufile(pfd);
}
#endif
return (aiori_fd_t*) pfd;
}
/*
* Creat a file through mknod interface.
* Create a file through mknod interface.
*/
int POSIX_Mknod(char *testFileName)
{
@ -504,43 +529,48 @@ int POSIX_Mknod(char *testFileName)
aiori_fd_t *POSIX_Open(char *testFileName, int flags, aiori_mod_opt_t * param)
{
int fd_oflag = O_BINARY;
int *fd;
fd = (int *)malloc(sizeof(int));
if (fd == NULL)
ERR("Unable to malloc file descriptor");
if(flags & IOR_RDONLY){
fd_oflag |= O_RDONLY;
}else if(flags & IOR_WRONLY){
fd_oflag |= O_WRONLY;
}else{
fd_oflag |= O_RDWR;
}
posix_fd * pfd = safeMalloc(sizeof(posix_fd));
posix_options_t * o = (posix_options_t*) param;
if (o->direct_io == TRUE)
if (o->direct_io == TRUE){
set_o_direct_flag(&fd_oflag);
fd_oflag |= O_RDWR;
}
if(hints->dryRun)
return (aiori_fd_t*) 0;
*fd = open64(testFileName, fd_oflag);
if (*fd < 0)
ERRF("open64(\"%s\", %d) failed", testFileName, fd_oflag);
pfd->fd = open64(testFileName, fd_oflag);
if (pfd->fd < 0)
ERRF("open64(\"%s\", %d) failed: %s", testFileName, fd_oflag, strerror(errno));
#ifdef HAVE_LUSTRE_LUSTRE_USER_H
#ifdef HAVE_LUSTRE_USER
if (o->lustre_ignore_locks) {
int lustre_ioctl_flags = LL_FILE_IGNORE_LOCK;
if (verbose >= VERBOSE_1) {
fprintf(stdout,
"** Disabling lustre range locking **\n");
EINFO("** Disabling lustre range locking **\n");
}
if (ioctl(*fd, LL_IOC_SETFLAGS, &lustre_ioctl_flags) == -1)
ERRF("ioctl(%d, LL_IOC_SETFLAGS, ...) failed", *fd);
if (ioctl(pfd->fd, LL_IOC_SETFLAGS, &lustre_ioctl_flags) == -1)
ERRF("ioctl(%d, LL_IOC_SETFLAGS, ...) failed", pfd->fd);
}
#endif /* HAVE_LUSTRE_LUSTRE_USER_H */
#endif /* HAVE_LUSTRE_USER */
#ifdef HAVE_GPFS_FCNTL_H
if(o->gpfs_release_token) {
gpfs_free_all_locks(*fd);
gpfs_free_all_locks(pfd->fd);
}
#endif
return (aiori_fd_t*) fd;
#ifdef HAVE_GPU_DIRECT
if(o->gpuDirect){
init_cufile(pfd);
}
#endif
return (aiori_fd_t*) pfd;
}
/*
@ -559,11 +589,12 @@ static IOR_offset_t POSIX_Xfer(int access, aiori_fd_t *file, IOR_size_t * buffer
if(hints->dryRun)
return length;
fd = *(int *)file;
posix_fd * pfd = (posix_fd *) file;
fd = pfd->fd;
#ifdef HAVE_GPFS_FCNTL_H
if (o->gpfs_hint_access) {
gpfs_access_start(fd, length, access);
gpfs_access_start(fd, length, offset, access);
}
#endif
@ -571,17 +602,24 @@ static IOR_offset_t POSIX_Xfer(int access, aiori_fd_t *file, IOR_size_t * buffer
/* seek to offset */
if (lseek64(fd, offset, SEEK_SET) == -1)
ERRF("lseek64(%d, %lld, SEEK_SET) failed", fd, offset);
off_t mem_offset = 0;
while (remaining > 0) {
/* write/read file */
if (access == WRITE) { /* WRITE */
if (verbose >= VERBOSE_4) {
fprintf(stdout,
"task %d writing to offset %lld\n",
EINFO("task %d writing to offset %lld\n",
rank,
offset + length - remaining);
}
rc = write(fd, ptr, remaining);
#ifdef HAVE_GPU_DIRECT
if(o->gpuDirect){
rc = cuFileWrite(pfd->cf_handle, ptr, remaining, offset + mem_offset, mem_offset);
}else{
#endif
rc = write(fd, ptr, remaining);
#ifdef HAVE_GPU_DIRECT
}
#endif
if (rc == -1)
ERRF("write(%d, %p, %lld) failed",
fd, (void*)ptr, remaining);
@ -590,12 +628,19 @@ static IOR_offset_t POSIX_Xfer(int access, aiori_fd_t *file, IOR_size_t * buffer
}
} else { /* READ or CHECK */
if (verbose >= VERBOSE_4) {
fprintf(stdout,
"task %d reading from offset %lld\n",
EINFO("task %d reading from offset %lld\n",
rank,
offset + length - remaining);
}
rc = read(fd, ptr, remaining);
#ifdef HAVE_GPU_DIRECT
if(o->gpuDirect){
rc = cuFileRead(pfd->cf_handle, ptr, remaining, offset + mem_offset, mem_offset);
}else{
#endif
rc = read(fd, ptr, remaining);
#ifdef HAVE_GPU_DIRECT
}
#endif
if (rc == 0)
ERRF("read(%d, %p, %lld) returned EOF prematurely",
fd, (void*)ptr, remaining);
@ -604,43 +649,38 @@ static IOR_offset_t POSIX_Xfer(int access, aiori_fd_t *file, IOR_size_t * buffer
fd, (void*)ptr, remaining);
}
if (rc < remaining) {
fprintf(stdout,
"WARNING: Task %d, partial %s, %lld of %lld bytes at offset %lld\n",
EWARNF("task %d, partial %s, %lld of %lld bytes at offset %lld\n",
rank,
access == WRITE ? "write()" : "read()",
rc, remaining,
offset + length - remaining);
if (hints->singleXferAttempt == TRUE)
MPI_CHECK(MPI_Abort(MPI_COMM_WORLD, -1),
"barrier error");
if (xferRetries > MAX_RETRY)
if (xferRetries > MAX_RETRY || hints->singleXferAttempt)
ERR("too many retries -- aborting");
}
assert(rc >= 0);
assert(rc <= remaining);
remaining -= rc;
ptr += rc;
mem_offset += rc;
xferRetries++;
}
#ifdef HAVE_GPFS_FCNTL_H
if (o->gpfs_hint_access) {
gpfs_access_end(fd, length, param, access);
gpfs_access_end(fd, length, offset, access);
}
#endif
return (length);
}
/*
* Perform fsync().
*/
static void POSIX_Fsync(aiori_fd_t *fd, aiori_mod_opt_t * param)
void POSIX_Fsync(aiori_fd_t *afd, aiori_mod_opt_t * param)
{
if (fsync(*(int *)fd) != 0)
EWARNF("fsync(%d) failed", *(int *)fd);
int fd = ((posix_fd*) afd)->fd;
if (fsync(fd) != 0)
EWARNF("fsync(%d) failed", fd);
}
static void POSIX_Sync(aiori_mod_opt_t * param)
void POSIX_Sync(aiori_mod_opt_t * param)
{
int ret = system("sync");
if (ret != 0){
@ -652,13 +692,21 @@ static void POSIX_Sync(aiori_mod_opt_t * param)
/*
* Close a file through the POSIX interface.
*/
void POSIX_Close(aiori_fd_t *fd, aiori_mod_opt_t * param)
void POSIX_Close(aiori_fd_t *afd, aiori_mod_opt_t * param)
{
if(hints->dryRun)
return;
if (close(*(int *)fd) != 0)
ERRF("close(%d) failed", *(int *)fd);
free(fd);
posix_options_t * o = (posix_options_t*) param;
int fd = ((posix_fd*) afd)->fd;
#ifdef HAVE_GPU_DIRECT
if(o->gpuDirect){
cuFileHandleDeregister(((posix_fd*) afd)->cf_handle);
}
#endif
if (close(fd) != 0){
ERRF("close(%d) failed", fd);
}
free(afd);
}
/*
@ -669,16 +717,25 @@ void POSIX_Delete(char *testFileName, aiori_mod_opt_t * param)
if(hints->dryRun)
return;
if (unlink(testFileName) != 0){
EWARNF("[RANK %03d]: unlink() of file \"%s\" failed\n",
rank, testFileName);
EWARNF("[RANK %03d]: unlink() of file \"%s\" failed", rank, testFileName);
}
}
int POSIX_Rename(const char * oldfile, const char * newfile, aiori_mod_opt_t * module_options){
if(hints->dryRun)
return 0;
if(rename(oldfile, newfile) != 0){
EWARNF("[RANK %03d]: rename() of file \"%s\" to \"%s\" failed", rank, oldfile, newfile);
return -1;
}
return 0;
}
/*
* Use POSIX stat() to return aggregate file size.
*/
IOR_offset_t POSIX_GetFileSize(aiori_mod_opt_t * test, MPI_Comm testComm,
char *testFileName)
IOR_offset_t POSIX_GetFileSize(aiori_mod_opt_t * test, char *testFileName)
{
if(hints->dryRun)
return 0;
@ -690,26 +747,17 @@ IOR_offset_t POSIX_GetFileSize(aiori_mod_opt_t * test, MPI_Comm testComm,
}
aggFileSizeFromStat = stat_buf.st_size;
if (hints->filePerProc == TRUE) {
MPI_CHECK(MPI_Allreduce(&aggFileSizeFromStat, &tmpSum, 1,
MPI_LONG_LONG_INT, MPI_SUM, testComm),
"cannot total data moved");
aggFileSizeFromStat = tmpSum;
} else {
MPI_CHECK(MPI_Allreduce(&aggFileSizeFromStat, &tmpMin, 1,
MPI_LONG_LONG_INT, MPI_MIN, testComm),
"cannot total data moved");
MPI_CHECK(MPI_Allreduce(&aggFileSizeFromStat, &tmpMax, 1,
MPI_LONG_LONG_INT, MPI_MAX, testComm),
"cannot total data moved");
if (tmpMin != tmpMax) {
if (rank == 0) {
WARN("inconsistent file size by different tasks");
}
/* incorrect, but now consistent across tasks */
aggFileSizeFromStat = tmpMin;
}
}
return (aggFileSizeFromStat);
}
void POSIX_Initialize(aiori_mod_opt_t * options){
#ifdef HAVE_GPU_DIRECT
CUfileError_t err = cuFileDriverOpen();
#endif
}
void POSIX_Finalize(aiori_mod_opt_t * options){
#ifdef HAVE_GPU_DIRECT
CUfileError_t err = cuFileDriverClose();
#endif
}

43
src/aiori-POSIX.h Normal file
View File

@ -0,0 +1,43 @@
#ifndef AIORI_POSIX_H
#define AIORI_POSIX_H
#include "aiori.h"
/************************** O P T I O N S *****************************/
typedef struct{
/* in case of a change, please update depending MMAP module too */
int direct_io;
/* Lustre variables */
int lustre_set_striping; /* flag that we need to set lustre striping */
int lustre_stripe_count;
int lustre_stripe_size;
int lustre_start_ost;
int lustre_ignore_locks;
/* gpfs variables */
int gpfs_hint_access; /* use gpfs "access range" hint */
int gpfs_release_token; /* immediately release GPFS tokens after
creating or opening a file */
/* beegfs variables */
int beegfs_numTargets; /* number of storage targets to use */
int beegfs_chunkSize; /* stripe pattern for new files */
int gpuDirect;
} posix_options_t;
void POSIX_Sync(aiori_mod_opt_t * param);
int POSIX_check_params(aiori_mod_opt_t * param);
void POSIX_Fsync(aiori_fd_t *, aiori_mod_opt_t *);
int POSIX_check_params(aiori_mod_opt_t * options);
aiori_fd_t *POSIX_Create(char *testFileName, int flags, aiori_mod_opt_t * module_options);
int POSIX_Mknod(char *testFileName);
aiori_fd_t *POSIX_Open(char *testFileName, int flags, aiori_mod_opt_t * module_options);
IOR_offset_t POSIX_GetFileSize(aiori_mod_opt_t * test, char *testFileName);
void POSIX_Delete(char *testFileName, aiori_mod_opt_t * module_options);
int POSIX_Rename(const char *oldfile, const char *newfile, aiori_mod_opt_t * module_options);
void POSIX_Close(aiori_fd_t *fd, aiori_mod_opt_t * module_options);
option_help * POSIX_options(aiori_mod_opt_t ** init_backend_options, aiori_mod_opt_t * init_values);
void POSIX_xfer_hints(aiori_xfer_hint_t * params);
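/* These prototypes are shared with backends layered on POSIX; for example the MMAP
 * module forwards its transfer hints through POSIX_xfer_hints() (see aiori-MMAP.c). */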
#endif

File diff suppressed because it is too large

586
src/aiori-S3-libs3.c Normal file
View File

@ -0,0 +1,586 @@
/*
* S3 implementation using the newer libs3
* https://github.com/bji/libs3
* Use one object per file chunk
*/
#ifdef HAVE_CONFIG_H
# include "config.h"
#endif
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <time.h>
#include <libs3.h>
#include "ior.h"
#include "aiori.h"
#include "aiori-debug.h"
#include "utilities.h"
static aiori_xfer_hint_t * hints = NULL;
static void s3_xfer_hints(aiori_xfer_hint_t * params){
hints = params;
}
/************************** O P T I O N S *****************************/
typedef struct {
int bucket_per_file;
char * access_key;
char * secret_key;
char * host;
char * bucket_prefix;
char * bucket_prefix_cur;
char * locationConstraint;
char * authRegion;
int timeout;
int dont_suffix;
int s3_compatible;
int use_ssl;
S3BucketContext bucket_context;
S3Protocol s3_protocol;
} s3_options_t;
static option_help * S3_options(aiori_mod_opt_t ** init_backend_options, aiori_mod_opt_t * init_values){
s3_options_t * o = malloc(sizeof(s3_options_t));
if (init_values != NULL){
memcpy(o, init_values, sizeof(s3_options_t));
}else{
memset(o, 0, sizeof(s3_options_t));
}
*init_backend_options = (aiori_mod_opt_t*) o;
o->bucket_prefix = "ior";
o->bucket_prefix_cur = "b";
option_help h [] = {
{0, "S3-libs3.bucket-per-file", "Use one bucket to map one file/directory, otherwise one bucket is used to store all dirs/files.", OPTION_FLAG, 'd', & o->bucket_per_file},
{0, "S3-libs3.bucket-name-prefix", "The prefix of the bucket(s).", OPTION_OPTIONAL_ARGUMENT, 's', & o->bucket_prefix},
{0, "S3-libs3.dont-suffix-bucket", "By default a hash will be added to the bucket name to increase uniqueness, this disables the option.", OPTION_FLAG, 'd', & o->dont_suffix },
{0, "S3-libs3.s3-compatible", "to be selected when using S3 compatible storage", OPTION_FLAG, 'd', & o->s3_compatible },
{0, "S3-libs3.use-ssl", "used to specify that SSL is needed for the connection", OPTION_FLAG, 'd', & o->use_ssl },
{0, "S3-libs3.host", "The host optionally followed by:port.", OPTION_OPTIONAL_ARGUMENT, 's', & o->host},
{0, "S3-libs3.secret-key", "The secret key.", OPTION_OPTIONAL_ARGUMENT, 's', & o->secret_key},
{0, "S3-libs3.access-key", "The access key.", OPTION_OPTIONAL_ARGUMENT, 's', & o->access_key},
{0, "S3-libs3.region", "The region used for the authorization signature.", OPTION_OPTIONAL_ARGUMENT, 's', & o->authRegion},
{0, "S3-libs3.location", "The bucket geographic location.", OPTION_OPTIONAL_ARGUMENT, 's', & o->locationConstraint},
LAST_OPTION
};
option_help * help = malloc(sizeof(h));
memcpy(help, h, sizeof(h));
return help;
}
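/* A minimal invocation sketch (hypothetical endpoint and credentials; the backend
 * name is assumed to match this module's aiori registration):
 *   ior -a S3-libs3 -t 1m -b 16m \
 *       --S3-libs3.host=127.0.0.1:9000 \
 *       --S3-libs3.access-key=AK --S3-libs3.secret-key=SK \
 *       --S3-libs3.bucket-per-file
 */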
static void def_file_name(s3_options_t * o, char * out_name, char const * path){
if(o->bucket_per_file){
out_name += sprintf(out_name, "%s-", o->bucket_prefix_cur);
}
// copy the path, mapping '/' to '_' and encoding other special characters
while(*path != 0){
char c = *path;
if(((c >= '0' && c <= '9') || (c >= 'a' && c <= 'z') )){
*out_name = *path;
out_name++;
}else if(c >= 'A' && c <= 'Z'){
*out_name = *path + ('a' - 'A');
out_name++;
}else if(c == '/'){
*out_name = '_';
out_name++;
}else{
// encode special characters
*out_name = 'a' + (c / 26);
out_name++;
*out_name = 'a' + (c % 26);
out_name++;
}
path++;
}
*out_name = 'b';
out_name++;
*out_name = '\0';
}
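/* Worked example: def_file_name(o, out, "/data/File-1") with bucket_per_file
 * disabled yields "_data_filebt1b": '/' becomes '_', 'F' is lowercased, '-'
 * (ASCII 45) is encoded as "bt" ('a'+45/26, 'a'+45%26), and a trailing 'b'
 * is always appended. */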
static void def_bucket_name(s3_options_t * o, char * out_name, char const * path){
// S3_MAX_BUCKET_NAME_SIZE
if(o->bucket_per_file){
out_name += sprintf(out_name, "%s-", o->bucket_prefix_cur);
}
// copy the path, keeping only [0-9a-z] and lowercasing [A-Z]; other characters (including '/') are dropped
while(*path != 0){
char c = *path;
if(((c >= '0' && c <= '9') || (c >= 'a' && c <= 'z') )){
*out_name = *path;
out_name++;
}else if(c >= 'A' && c <= 'Z'){
*out_name = *path + ('a' - 'A');
out_name++;
}
path++;
}
*out_name = '\0';
// S3Status S3_validate_bucket_name(const char *bucketName, S3UriStyle uriStyle);
}
struct data_handling{
IOR_size_t * buf;
int64_t size;
};
static S3Status s3status = S3StatusInterrupted;
static S3ErrorDetails s3error = {NULL};
static S3Status responsePropertiesCallback(const S3ResponseProperties *properties, void *callbackData){
s3status = S3StatusOK;
return s3status;
}
static void responseCompleteCallback(S3Status status, const S3ErrorDetails *error, void *callbackData) {
s3status = status;
if (error == NULL){
s3error.message = NULL;
}else{
s3error = *error;
}
return;
}
#define CHECK_ERROR(p) \
if (s3status != S3StatusOK){ \
EWARNF("S3 %s:%d (path:%s) \"%s\": %s %s", __FUNCTION__, __LINE__, p, S3_get_status_name(s3status), s3error.message, s3error.furtherDetails ? s3error.furtherDetails : ""); \
}
static S3ResponseHandler responseHandler = { &responsePropertiesCallback, &responseCompleteCallback };
static char * S3_getVersion()
{
return "0.5";
}
static void S3_Fsync(aiori_fd_t *fd, aiori_mod_opt_t * options)
{
// Not needed
}
static void S3_Sync(aiori_mod_opt_t * options)
{
// Not needed
}
static S3Status S3ListResponseCallback(const char *ownerId, const char *ownerDisplayName, const char *bucketName, int64_t creationDateSeconds, void *callbackData){
uint64_t * count = (uint64_t*) callbackData;
*count += 1;
return S3StatusOK;
}
static S3ListServiceHandler listhandler = { { &responsePropertiesCallback, &responseCompleteCallback }, & S3ListResponseCallback};
static int S3_statfs (const char * path, ior_aiori_statfs_t * stat, aiori_mod_opt_t * options){
stat->f_bsize = 1;
stat->f_blocks = 1;
stat->f_bfree = 1;
stat->f_bavail = 1;
stat->f_ffree = 1;
s3_options_t * o = (s3_options_t*) options;
// use the number of buckets as the file count
uint64_t buckets = 0;
S3_list_service(o->s3_protocol, o->access_key, o->secret_key, NULL, o->host,
o->authRegion, NULL, o->timeout, & listhandler, & buckets);
stat->f_files = buckets;
CHECK_ERROR(o->authRegion);
return 0;
}
static S3Status S3multipart_handler(const char *upload_id, void *callbackData){
*((char const**)(callbackData)) = upload_id;
return S3StatusOK;
}
static S3MultipartInitialHandler multipart_handler = { {&responsePropertiesCallback, &responseCompleteCallback }, & S3multipart_handler};
typedef struct{
char * object;
} S3_fd_t;
static int putObjectDataCallback(int bufferSize, char *buffer, void *callbackData){
struct data_handling * dh = (struct data_handling *) callbackData;
const int64_t size = dh->size > bufferSize ? bufferSize : dh->size;
if(size == 0) return 0;
memcpy(buffer, dh->buf, size);
dh->buf = (IOR_size_t*) ((char*)(dh->buf) + size);
dh->size -= size;
return size;
}
static S3PutObjectHandler putObjectHandler = { { &responsePropertiesCallback, &responseCompleteCallback }, & putObjectDataCallback };
static aiori_fd_t *S3_Create(char *path, int iorflags, aiori_mod_opt_t * options)
{
char * upload_id;
s3_options_t * o = (s3_options_t*) options;
char p[FILENAME_MAX];
def_file_name(o, p, path);
if(iorflags & IOR_CREAT){
if(o->bucket_per_file){
S3_create_bucket(o->s3_protocol, o->access_key, o->secret_key, NULL, o->host, p, o->authRegion, S3CannedAclPrivate, o->locationConstraint, NULL, o->timeout, & responseHandler, NULL);
}else{
struct data_handling dh = { .buf = NULL, .size = 0 };
S3_put_object(& o->bucket_context, p, 0, NULL, NULL, o->timeout, &putObjectHandler, & dh);
}
if (s3status != S3StatusOK){
CHECK_ERROR(p);
return NULL;
}
}
S3_fd_t * fd = malloc(sizeof(S3_fd_t));
fd->object = strdup(p);
return (aiori_fd_t*) fd;
}
static S3Status statResponsePropertiesCallback(const S3ResponseProperties *properties, void *callbackData){
// check the size
struct stat *buf = (struct stat*) callbackData;
if(buf != NULL){
buf->st_size = properties->contentLength;
buf->st_mtime = properties->lastModified;
}
s3status = S3StatusOK;
return s3status;
}
static S3ResponseHandler statResponseHandler = { &statResponsePropertiesCallback, &responseCompleteCallback };
static aiori_fd_t *S3_Open(char *path, int flags, aiori_mod_opt_t * options)
{
if(flags & IOR_CREAT){
return S3_Create(path, flags, options);
}
if(flags & IOR_WRONLY){
WARN("S3 IOR_WRONLY is not supported");
}
if(flags & IOR_RDWR){
WARN("S3 IOR_RDWR is not supported");
}
s3_options_t * o = (s3_options_t*) options;
char p[FILENAME_MAX];
def_file_name(o, p, path);
if (o->bucket_per_file){
S3_test_bucket(o->s3_protocol, S3UriStylePath, o->access_key, o->secret_key,
NULL, o->host, p, o->authRegion, 0, NULL,
NULL, o->timeout, & responseHandler, NULL);
}else{
struct stat buf;
S3_head_object(& o->bucket_context, p, NULL, o->timeout, & statResponseHandler, & buf);
}
if (s3status != S3StatusOK){
CHECK_ERROR(p);
return NULL;
}
S3_fd_t * fd = malloc(sizeof(S3_fd_t));
fd->object = strdup(p);
return (aiori_fd_t*) fd;
}
static S3Status getObjectDataCallback(int bufferSize, const char *buffer, void *callbackData){
struct data_handling * dh = (struct data_handling *) callbackData;
const int64_t size = dh->size > bufferSize ? bufferSize : dh->size;
memcpy(dh->buf, buffer, size);
dh->buf = (IOR_size_t*) ((char*)(dh->buf) + size);
dh->size -= size;
return S3StatusOK;
}
static S3GetObjectHandler getObjectHandler = { { &responsePropertiesCallback, &responseCompleteCallback }, & getObjectDataCallback };
static IOR_offset_t S3_Xfer(int access, aiori_fd_t * afd, IOR_size_t * buffer, IOR_offset_t length, IOR_offset_t offset, aiori_mod_opt_t * options){
S3_fd_t * fd = (S3_fd_t *) afd;
struct data_handling dh = { .buf = buffer, .size = length };
s3_options_t * o = (s3_options_t*) options;
char p[FILENAME_MAX];
if(o->bucket_per_file){
o->bucket_context.bucketName = fd->object;
if(offset != 0){
sprintf(p, "%ld-%ld", (long) offset, (long) length);
}else{
sprintf(p, "0");
}
}else{
if(offset != 0){
sprintf(p, "%s-%ld-%ld", fd->object, (long) offset, (long) length);
}else{
sprintf(p, "%s", fd->object);
}
}
if(access == WRITE){
S3_put_object(& o->bucket_context, p, length, NULL, NULL, o->timeout, &putObjectHandler, & dh);
}else{
S3_get_object(& o->bucket_context, p, NULL, 0, length, NULL, o->timeout, &getObjectHandler, & dh);
}
if (! o->s3_compatible){
CHECK_ERROR(p);
}
return length;
}
static void S3_Close(aiori_fd_t * afd, aiori_mod_opt_t * options)
{
S3_fd_t * fd = (S3_fd_t *) afd;
free(fd->object);
free(afd);
}
typedef struct {
int status; // do not reorder!
s3_options_t * o;
int truncated;
char const *nextMarker;
} s3_delete_req;
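/* libs3 list callback: delete every key returned in this batch and remember
   whether the listing was truncated so the caller can fetch the next page. */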
S3Status list_delete_cb(int isTruncated, const char *nextMarker, int contentsCount, const S3ListBucketContent *contents, int commonPrefixesCount, const char **commonPrefixes, void *callbackData){
s3_delete_req * req = (s3_delete_req*) callbackData;
for(int i=0; i < contentsCount; i++){
S3_delete_object(& req->o->bucket_context, contents[i].key, NULL, req->o->timeout, & responseHandler, NULL);
}
req->truncated = isTruncated;
if(isTruncated){
req->nextMarker = nextMarker;
}
return S3StatusOK;
}
static S3ListBucketHandler list_delete_handler = {{&responsePropertiesCallback, &responseCompleteCallback }, list_delete_cb};
static void S3_Delete(char *path, aiori_mod_opt_t * options)
{
s3_options_t * o = (s3_options_t*) options;
char p[FILENAME_MAX];
def_file_name(o, p, path);
if(o->bucket_per_file){
o->bucket_context.bucketName = p;
s3_delete_req req = {0, o, 0, NULL};
do{
S3_list_bucket(& o->bucket_context, NULL, req.nextMarker, NULL, INT_MAX, NULL, o->timeout, & list_delete_handler, & req);
}while(req.truncated);
S3_delete_bucket(o->s3_protocol, S3UriStylePath, o->access_key, o->secret_key, NULL, o->host, p, o->authRegion, NULL, o->timeout, & responseHandler, NULL);
}else{
char * del_heuristics = getenv("S3LIB_DELETE_HEURISTICS");
if(del_heuristics){
struct stat buf;
S3_head_object(& o->bucket_context, p, NULL, o->timeout, & statResponseHandler, & buf);
if(s3status != S3StatusOK){
// The object does not exist, so we can return safely
CHECK_ERROR(p);
return;
}
int threshold = atoi(del_heuristics);
if (buf.st_size > threshold){
// there may exist fragments, so try to delete them
s3_delete_req req = {0, o, 0, NULL};
do{
S3_list_bucket(& o->bucket_context, p, req.nextMarker, NULL, INT_MAX, NULL, o->timeout, & list_delete_handler, & req);
}while(req.truncated);
}
S3_delete_object(& o->bucket_context, p, NULL, o->timeout, & responseHandler, NULL);
}else{
// Regular deletion, must remove all created fragments
S3_delete_object(& o->bucket_context, p, NULL, o->timeout, & responseHandler, NULL);
if(s3status != S3StatusOK){
// The object does not exist, so we can return safely
CHECK_ERROR(p);
return;
}
s3_delete_req req = {0, o, 0, NULL};
do{
S3_list_bucket(& o->bucket_context, p, req.nextMarker, NULL, INT_MAX, NULL, o->timeout, & list_delete_handler, & req);
}while(req.truncated);
}
}
CHECK_ERROR(p);
}
static int S3_mkdir (const char *path, mode_t mode, aiori_mod_opt_t * options){
s3_options_t * o = (s3_options_t*) options;
char p[FILENAME_MAX];
def_bucket_name(o, p, path);
if (o->bucket_per_file){
S3_create_bucket(o->s3_protocol, o->access_key, o->secret_key, NULL, o->host, p, o->authRegion, S3CannedAclPrivate, o->locationConstraint, NULL, o->timeout, & responseHandler, NULL);
CHECK_ERROR(p);
return 0;
}else{
struct data_handling dh = { .buf = NULL, .size = 0 };
S3_put_object(& o->bucket_context, p, 0, NULL, NULL, o->timeout, & putObjectHandler, & dh);
if (! o->s3_compatible){
CHECK_ERROR(p);
}
return 0;
}
}
static int S3_rmdir (const char *path, aiori_mod_opt_t * options){
s3_options_t * o = (s3_options_t*) options;
char p[FILENAME_MAX];
def_bucket_name(o, p, path);
if (o->bucket_per_file){
S3_delete_bucket(o->s3_protocol, S3UriStylePath, o->access_key, o->secret_key, NULL, o->host, p, o->authRegion, NULL, o->timeout, & responseHandler, NULL);
CHECK_ERROR(p);
return 0;
}else{
S3_delete_object(& o->bucket_context, p, NULL, o->timeout, & responseHandler, NULL);
CHECK_ERROR(p);
return 0;
}
}
static int S3_stat(const char *path, struct stat *buf, aiori_mod_opt_t * options){
s3_options_t * o = (s3_options_t*) options;
char p[FILENAME_MAX];
def_file_name(o, p, path);
memset(buf, 0, sizeof(struct stat));
// TODO: sum the sizes of the individual fragments to report the full file size
if (o->bucket_per_file){
S3_test_bucket(o->s3_protocol, S3UriStylePath, o->access_key, o->secret_key,
NULL, o->host, p, o->authRegion, 0, NULL,
NULL, o->timeout, & responseHandler, NULL);
}else{
S3_head_object(& o->bucket_context, p, NULL, o->timeout, & statResponseHandler, buf);
}
if (s3status != S3StatusOK){
return -1;
}
return 0;
}
static int S3_access (const char *path, int mode, aiori_mod_opt_t * options){
struct stat buf;
return S3_stat(path, & buf, options);
}
static IOR_offset_t S3_GetFileSize(aiori_mod_opt_t * options, char *testFileName)
{
struct stat buf;
if(S3_stat(testFileName, & buf, options) != 0) return -1;
return buf.st_size;
}
static int S3_check_params(aiori_mod_opt_t * options){
s3_options_t * o = (s3_options_t*) options;
if(o->access_key == NULL){
o->access_key = "";
}
if(o->secret_key == NULL){
o->secret_key = "";
}
if(o->host == NULL){
WARN("The S3 hostname should be specified");
}
return 0;
}
static void S3_init(aiori_mod_opt_t * options){
s3_options_t * o = (s3_options_t*) options;
int ret = S3_initialize(NULL, S3_INIT_ALL, o->host);
if(ret != S3StatusOK)
FAIL("Could not initialize S3 library");
// create a bucket ID suffix from the access key using a trivial checksum
if(! o->dont_suffix){
uint64_t c = 0;
char * r = o->access_key;
for(uint64_t pos = 1; (*r) != '\0' ; r++, pos*=10) {
c += (*r) * pos;
}
int count = snprintf(NULL, 0, "%s%lu", o->bucket_prefix, c % 1000);
char * old_prefix = o->bucket_prefix;
o->bucket_prefix_cur = malloc(count + 1);
sprintf(o->bucket_prefix_cur, "%s%lu", old_prefix, c % 1000);
}else{
o->bucket_prefix_cur = o->bucket_prefix;
}
// init bucket context
memset(& o->bucket_context, 0, sizeof(o->bucket_context));
o->bucket_context.hostName = o->host;
o->bucket_context.bucketName = o->bucket_prefix_cur;
if (o->use_ssl){
o->s3_protocol = S3ProtocolHTTPS;
}else{
o->s3_protocol = S3ProtocolHTTP;
}
o->bucket_context.protocol = o->s3_protocol;
o->bucket_context.uriStyle = S3UriStylePath;
o->bucket_context.accessKeyId = o->access_key;
o->bucket_context.secretAccessKey = o->secret_key;
if (! o->bucket_per_file && rank == 0){
S3_create_bucket(o->s3_protocol, o->access_key, o->secret_key, NULL, o->host, o->bucket_context.bucketName, o->authRegion, S3CannedAclPrivate, o->locationConstraint, NULL, o->timeout, & responseHandler, NULL);
CHECK_ERROR(o->bucket_context.bucketName);
}
if ( ret != S3StatusOK ){
FAIL("S3 error %s", S3_get_status_name(ret));
}
}
static void S3_final(aiori_mod_opt_t * options){
s3_options_t * o = (s3_options_t*) options;
if (! o->bucket_per_file && rank == 0){
S3_delete_bucket(o->s3_protocol, S3UriStylePath, o->access_key, o->secret_key, NULL, o->host, o->bucket_context.bucketName, o->authRegion, NULL, o->timeout, & responseHandler, NULL);
CHECK_ERROR(o->bucket_context.bucketName);
}
S3_deinitialize();
}
ior_aiori_t S3_libS3_aiori = {
.name = "S3-libs3",
.name_legacy = NULL,
.create = S3_Create,
.open = S3_Open,
.xfer = S3_Xfer,
.close = S3_Close,
.delete = S3_Delete,
.get_version = S3_getVersion,
.fsync = S3_Fsync,
.xfer_hints = s3_xfer_hints,
.get_file_size = S3_GetFileSize,
.statfs = S3_statfs,
.mkdir = S3_mkdir,
.rmdir = S3_rmdir,
.access = S3_access,
.stat = S3_stat,
.initialize = S3_init,
.finalize = S3_final,
.get_options = S3_options,
.check_params = S3_check_params,
.sync = S3_Sync,
.enable_mdtest = true
};
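Since S3 objects cannot be updated in place, S3_Xfer above maps every transfer at a non-zero offset to its own object. A minimal sketch of the resulting key names (illustrative only and not part of the backend; it assumes the IOR test file maps to the object name "testfile" and models the default, non-bucket-per-file case):
#include <stdio.h>
/* mimics the key construction in S3_Xfer for bucket_per_file == 0 */
static void s3_key_for(const char *object, long offset, long length, char *out)
{
    if (offset != 0)
        sprintf(out, "%s-%ld-%ld", object, offset, length);
    else
        sprintf(out, "%s", object);
}
int main(void)
{
    char key[256];
    s3_key_for("testfile", 0, 262144, key);
    printf("%s\n", key);   /* "testfile" */
    s3_key_for("testfile", 1048576, 262144, key);
    printf("%s\n", key);   /* "testfile-1048576-262144" */
    return 0;
}
With bucket_per_file enabled, the object name becomes the bucket name and the keys shrink to "0" and "1048576-262144". This per-transfer layout is why S3_Delete has to list and remove fragments and why S3_stat currently reports only the size of the base object.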

258
src/aiori-aio.c Normal file
View File

@ -0,0 +1,258 @@
/*
This backend uses linux-aio
Requires: libaio-dev
*/
#ifdef HAVE_CONFIG_H
# include "config.h"
#endif
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <errno.h>
#include <sys/stat.h>
#include <assert.h>
#include <unistd.h>
#include "ior.h"
#include "aiori.h"
#include "iordef.h"
#include "utilities.h"
#include "aiori-POSIX.h"
/************************** O P T I O N S *****************************/
typedef struct{
aiori_mod_opt_t * p; // posix options
int max_pending;
int granularity; // how frequently to submit: submit every *granularity* queued elements
// runtime data
io_context_t ioctx; // one context per fs
struct iocb ** iocbs;
int iocbs_pos; // how many are pending in iocbs
int in_flight; // total pending ops
IOR_offset_t pending_bytes; // track pending IO volume for error checking
} aio_options_t;
option_help * aio_options(aiori_mod_opt_t ** init_backend_options, aiori_mod_opt_t * init_values){
aio_options_t * o = malloc(sizeof(aio_options_t));
if (init_values != NULL){
memcpy(o, init_values, sizeof(aio_options_t));
}else{
memset(o, 0, sizeof(aio_options_t));
o->max_pending = 128;
o->granularity = 16;
}
option_help * p_help = POSIX_options((aiori_mod_opt_t**)& o->p, init_values == NULL ? NULL : (aiori_mod_opt_t*) ((aio_options_t*)init_values)->p);
*init_backend_options = (aiori_mod_opt_t*) o;
option_help h [] = {
{0, "aio.max-pending", "Max number of pending ops", OPTION_OPTIONAL_ARGUMENT, 'd', & o->max_pending},
{0, "aio.granularity", "How frequent to submit pending IOs, submit every *granularity* elements", OPTION_OPTIONAL_ARGUMENT, 'd', & o->granularity},
LAST_OPTION
};
option_help * help = option_merge(h, p_help);
free(p_help);
return help;
}
/************************** D E C L A R A T I O N S ***************************/
typedef struct{
aiori_fd_t * pfd; // the underlying POSIX fd
} aio_fd_t;
/***************************** F U N C T I O N S ******************************/
static aiori_xfer_hint_t * hints = NULL;
static void aio_xfer_hints(aiori_xfer_hint_t * params){
hints = params;
POSIX_xfer_hints(params);
}
static void aio_initialize(aiori_mod_opt_t * param){
aio_options_t * o = (aio_options_t*) param;
if(io_setup(o->max_pending, & o->ioctx) != 0){
ERRF("Couldn't initialize io context %s", strerror(errno));
}
printf("%d\n", (o->max_pending));
o->iocbs = malloc(sizeof(struct iocb *) * o->granularity);
o->iocbs_pos = 0;
o->in_flight = 0;
}
static void aio_finalize(aiori_mod_opt_t * param){
aio_options_t * o = (aio_options_t*) param;
io_destroy(o->ioctx);
}
static int aio_check_params(aiori_mod_opt_t * param){
aio_options_t * o = (aio_options_t*) param;
POSIX_check_params((aiori_mod_opt_t*) o->p);
if(o->max_pending < 8){
ERRF("AIO max-pending = %d < 8", o->max_pending);
}
if(o->granularity > o->max_pending){
ERRF("AIO granularity must be < max-pending, is %d > %d", o->granularity, o->max_pending);
}
return 0;
}
static aiori_fd_t *aio_Open(char *testFileName, int flags, aiori_mod_opt_t * param){
aio_options_t * o = (aio_options_t*) param;
aio_fd_t * fd = malloc(sizeof(aio_fd_t));
fd->pfd = POSIX_Open(testFileName, flags, o->p);
return (aiori_fd_t*) fd;
}
static aiori_fd_t *aio_create(char *testFileName, int flags, aiori_mod_opt_t * param){
aio_options_t * o = (aio_options_t*) param;
aio_fd_t * fd = malloc(sizeof(aio_fd_t));
fd->pfd = POSIX_Create(testFileName, flags, o->p);
return (aiori_fd_t*) fd;
}
/* called whenever the granularity is met */
static void submit_pending(aio_options_t * o){
if(o->iocbs_pos == 0){
return;
}
int res;
res = io_submit(o->ioctx, o->iocbs_pos, o->iocbs);
//printf("AIO submit %d jobs\n", o->iocbs_pos);
if(res != o->iocbs_pos){
if(errno == EAGAIN){
ERR("AIO: errno == EAGAIN; this should't happen");
}
ERRF("AIO: submitted %d, error: \"%s\" ; this should't happen", res, strerror(errno));
}
o->iocbs_pos = 0;
}
/* complete all pending ops */
static void complete_all(aio_options_t * o){
submit_pending(o);
struct io_event events[o->in_flight];
int num_events;
num_events = io_getevents(o->ioctx, o->in_flight, o->in_flight, events, NULL);
for (int i = 0; i < num_events; i++) {
struct io_event event = events[i];
if(event.res == -1){
ERR("AIO, error in io_getevents(), IO incomplete!");
}else{
o->pending_bytes -= event.res;
}
free(event.obj);
}
if(o->pending_bytes != 0){
ERRF("AIO, error in flushing data, pending bytes: %lld", o->pending_bytes);
}
o->in_flight = 0;
}
/* called if we must make *some* progress */
static void process_some(aio_options_t * o){
if(o->in_flight == 0){
return;
}
struct io_event events[o->in_flight];
int num_events;
int mn = o->in_flight < o->granularity ? o->in_flight : o->granularity;
num_events = io_getevents(o->ioctx, mn, o->in_flight, events, NULL);
//printf("Completed: %d\n", num_events);
for (int i = 0; i < num_events; i++) {
struct io_event event = events[i];
if(event.res == -1){
ERR("AIO, error in io_getevents(), IO incomplete!");
}else{
o->pending_bytes -= event.res;
}
free(event.obj);
}
o->in_flight -= num_events;
}
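/* Queue a single read or write: requests are batched and submitted every
   *granularity* operations, and some completions are reaped once *max_pending*
   operations are in flight (see process_some() above). */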
static IOR_offset_t aio_Xfer(int access, aiori_fd_t *fd, IOR_size_t * buffer,
IOR_offset_t length, IOR_offset_t offset, aiori_mod_opt_t * param){
aio_options_t * o = (aio_options_t*) param;
aio_fd_t * afd = (aio_fd_t*) fd;
if(o->in_flight >= o->max_pending){
process_some(o);
}
o->pending_bytes += length;
struct iocb * iocb = malloc(sizeof(struct iocb));
if(access == WRITE){
io_prep_pwrite(iocb, *(int*)afd->pfd, buffer, length, offset);
}else{
io_prep_pread(iocb, *(int*)afd->pfd, buffer, length, offset);
}
o->iocbs[o->iocbs_pos] = iocb;
o->iocbs_pos++;
o->in_flight++;
if(o->iocbs_pos == o->granularity){
submit_pending(o);
}
return length;
}
static void aio_Close(aiori_fd_t *fd, aiori_mod_opt_t * param){
aio_options_t * o = (aio_options_t*) param;
aio_fd_t * afd = (aio_fd_t*) fd;
complete_all(o);
POSIX_Close(afd->pfd, o->p);
}
static void aio_Fsync(aiori_fd_t *fd, aiori_mod_opt_t * param){
aio_options_t * o = (aio_options_t*) param;
complete_all(o);
aio_fd_t * afd = (aio_fd_t*) fd;
POSIX_Fsync(afd->pfd, o->p);
}
static void aio_Sync(aiori_mod_opt_t * param){
aio_options_t * o = (aio_options_t*) param;
complete_all(o);
POSIX_Sync((aiori_mod_opt_t*) o->p);
}
ior_aiori_t aio_aiori = {
.name = "AIO",
.name_legacy = NULL,
.create = aio_create,
.get_options = aio_options,
.initialize = aio_initialize,
.finalize = aio_finalize,
.xfer_hints = aio_xfer_hints,
.fsync = aio_Fsync,
.open = aio_Open,
.xfer = aio_Xfer,
.close = aio_Close,
.sync = aio_Sync,
.check_params = aio_check_params,
.delete = POSIX_Delete,
.get_version = aiori_get_version,
.get_file_size = POSIX_GetFileSize,
.statfs = aiori_posix_statfs,
.mkdir = aiori_posix_mkdir,
.rmdir = aiori_posix_rmdir,
.access = aiori_posix_access,
.stat = aiori_posix_stat,
.enable_mdtest = true
};
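To make the libaio calls above easier to follow in isolation, here is a minimal, self-contained sketch of the same submit/reap pattern (illustrative only, not part of the repository; requires libaio-dev, links with -laio, error handling reduced to asserts):
#include <assert.h>
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
int main(void)
{
    char buf[4096];
    memset(buf, 'x', sizeof(buf));
    int fd = open("aio-demo.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    assert(fd >= 0);
    io_context_t ctx;
    memset(&ctx, 0, sizeof(ctx));
    int rc = io_setup(8, &ctx);              /* like aio_initialize() above */
    assert(rc == 0);
    struct iocb cb;
    struct iocb *cbs[1] = { &cb };
    io_prep_pwrite(&cb, fd, buf, sizeof(buf), 0);
    rc = io_submit(ctx, 1, cbs);             /* like submit_pending() */
    assert(rc == 1);
    struct io_event ev;
    rc = io_getevents(ctx, 1, 1, &ev, NULL); /* like complete_all() */
    assert(rc == 1 && ev.res == sizeof(buf));
    io_destroy(ctx);
    close(fd);
    unlink("aio-demo.bin");
    printf("wrote %zu bytes via libaio\n", sizeof(buf));
    return 0;
}
The backend above does the same thing, but keeps up to max_pending iocbs in flight and submits them in batches of granularity.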

131
src/aiori-debug.h Normal file
View File

@ -0,0 +1,131 @@
#ifndef _AIORI_UTIL_H
#define _AIORI_UTIL_H
/* This file contains only debug-relevant helpers */
#include <stdio.h>
#include <mpi.h>
extern FILE * out_logfile;
extern int verbose; /* verbose output */
#define FAIL(...) FailMessage(rank, ERROR_LOCATION, __VA_ARGS__)
void FailMessage(int rank, const char *location, char *format, ...);
/******************************** M A C R O S *********************************/
/******************************************************************************/
/*
* WARN_RESET displays a custom warning message and resets the value to its default
*/
#define WARN_RESET(MSG, TO_STRUCT_PTR, FROM_STRUCT_PTR, MEMBER) do { \
(TO_STRUCT_PTR)->MEMBER = (FROM_STRUCT_PTR)->MEMBER; \
if (rank == 0) { \
fprintf(out_logfile, "WARNING: %s. Using value of %d.\n", \
MSG, (TO_STRUCT_PTR)->MEMBER); \
} \
fflush(out_logfile); \
} while (0)
extern int aiori_warning_as_errors;
#define WARN(MSG) do { \
if(aiori_warning_as_errors){ ERR(MSG); } \
if (verbose > VERBOSE_2) { \
fprintf(out_logfile, "WARNING: %s, (%s:%d).\n", \
MSG, __FILE__, __LINE__); \
} else { \
fprintf(out_logfile, "WARNING: %s.\n", MSG); \
} \
fflush(out_logfile); \
} while (0)
/* warning with a printf-style format string */
#define EWARNF(FORMAT, ...) do { \
if(aiori_warning_as_errors){ ERRF(FORMAT, __VA_ARGS__); } \
if (verbose > VERBOSE_2) { \
fprintf(out_logfile, "WARNING: " FORMAT ", (%s:%d).\n", \
__VA_ARGS__, __FILE__, __LINE__); \
} else { \
fprintf(out_logfile, "WARNING: " FORMAT "\n", \
__VA_ARGS__); \
} \
fflush(out_logfile); \
} while (0)
/* warning from a plain message string */
#define EWARN(MSG) do { \
EWARNF("%s", MSG); \
} while (0)
/* info message with a printf-style format string */
#define EINFO(FORMAT, ...) do { \
if (verbose > VERBOSE_2) { \
fprintf(out_logfile, "INFO: " FORMAT ", (%s:%d).\n", \
__VA_ARGS__, __FILE__, __LINE__); \
} else { \
fprintf(out_logfile, "INFO: " FORMAT "\n", \
__VA_ARGS__); \
} \
fflush(out_logfile); \
} while (0)
/* display error message with format string and terminate execution */
#define ERRF(FORMAT, ...) do { \
fprintf(out_logfile, "ERROR: " FORMAT ", (%s:%d)\n", \
__VA_ARGS__, __FILE__, __LINE__); \
fflush(out_logfile); \
MPI_Abort(MPI_COMM_WORLD, -1); \
} while (0)
/* display error message and terminate execution */
#define ERR_ERRNO(MSG) do { \
ERRF("%s", MSG); \
} while (0)
/* display a simple error message (i.e. errno is not set) and terminate execution */
#define ERR(MSG) do { \
fprintf(out_logfile, "ERROR: %s, (%s:%d)\n", \
MSG, __FILE__, __LINE__); \
fflush(out_logfile); \
MPI_Abort(MPI_COMM_WORLD, -1); \
} while (0)
/******************************************************************************/
/*
* MPI_CHECKF will display a custom format string as well as an error string
* from the MPI_STATUS and then exit the program
*/
#define MPI_CHECKF(MPI_STATUS, FORMAT, ...) do { \
char resultString[MPI_MAX_ERROR_STRING]; \
int resultLength; \
int checkf_mpi_status = MPI_STATUS; \
\
if (checkf_mpi_status != MPI_SUCCESS) { \
MPI_Error_string(checkf_mpi_status, resultString, &resultLength);\
fprintf(out_logfile, "ERROR: " FORMAT ", MPI %s, (%s:%d)\n", \
__VA_ARGS__, resultString, __FILE__, __LINE__); \
fflush(out_logfile); \
MPI_Abort(MPI_COMM_WORLD, -1); \
} \
} while(0)
/******************************************************************************/
/*
* MPI_CHECK will display a custom error message as well as an error string
* from the MPI_STATUS and then exit the program
*/
#define MPI_CHECK(MPI_STATUS, MSG) do { \
MPI_CHECKF(MPI_STATUS, "%s", MSG); \
} while(0)
#endif
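A short, hypothetical usage sketch for these helpers (a fragment rather than a standalone program: it assumes it is compiled inside IOR, where rank, verbose, out_logfile and aiori_warning_as_errors are defined; the function and its arguments are made up for illustration):
/* hypothetical backend snippet, for illustration only */
static void demo_error_reporting(const char *path, int fd)
{
    if (fd < 0)
        ERRF("open(\"%s\") failed", path);   /* prints and calls MPI_Abort() */
    if (path[0] != '/')
        WARN("relative path given");         /* becomes fatal with warningAsErrors */
    MPI_CHECK(MPI_Barrier(MPI_COMM_WORLD), "barrier failed");
}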

View File

@ -42,11 +42,13 @@ ior_aiori_t *available_aiori[] = {
#ifdef USE_POSIX_AIORI
&posix_aiori,
#endif
#ifdef USE_AIO_AIORI
&aio_aiori,
#endif
#ifdef USE_PMDK_AIORI
&pmdk_aiori,
#endif
#ifdef USE_DAOS_AIORI
&daos_aiori,
&dfs_aiori,
#endif
& dummy_aiori,
@ -68,8 +70,11 @@ ior_aiori_t *available_aiori[] = {
#ifdef USE_MMAP_AIORI
&mmap_aiori,
#endif
#ifdef USE_S3_AIORI
&s3_aiori,
#ifdef USE_S3_LIBS3_AIORI
&S3_libS3_aiori,
#endif
#ifdef USE_S3_4C_AIORI
&s3_4c_aiori,
&s3_plus_aiori,
&s3_emc_aiori,
#endif
@ -100,6 +105,7 @@ void * airoi_update_module_options(const ior_aiori_t * backend, options_all_t *
}
options_all_t * airoi_create_all_module_options(option_help * global_options){
if(! out_logfile) out_logfile = stdout;
int airoi_c = aiori_count();
options_all_t * opt = malloc(sizeof(options_all_t));
opt->module_count = airoi_c + 1;
@ -122,6 +128,8 @@ void aiori_supported_apis(char * APIs, char * APIs_legacy, enum bench_type type)
{
ior_aiori_t **tmp = available_aiori;
char delimiter = ' ';
*APIs = 0;
*APIs_legacy = 0;
while (*tmp != NULL)
{
@ -130,7 +138,6 @@ void aiori_supported_apis(char * APIs, char * APIs_legacy, enum bench_type type)
tmp++;
continue;
}
if (delimiter == ' ')
{
APIs += sprintf(APIs, "%s", (*tmp)->name);
@ -142,6 +149,7 @@ void aiori_supported_apis(char * APIs, char * APIs_legacy, enum bench_type type)
if ((*tmp)->name_legacy != NULL)
APIs_legacy += sprintf(APIs_legacy, "%c%s",
delimiter, (*tmp)->name_legacy);
tmp++;
}
}

View File

@ -15,16 +15,11 @@
#ifndef _AIORI_H
#define _AIORI_H
#include <mpi.h>
#ifndef MPI_FILE_NULL
# include <mpio.h>
#endif /* not MPI_FILE_NULL */
#include <sys/stat.h>
#include <stdbool.h>
#include "iordef.h" /* IOR Definitions */
#include "aiori-debug.h"
#include "option.h"
/*************************** D E F I N I T I O N S ****************************/
@ -81,9 +76,9 @@ typedef struct aiori_xfer_hint_t{
} aiori_xfer_hint_t;
/* this is a dummy structure to create some type safety */
typedef struct aiori_mod_opt_t{
struct aiori_mod_opt_t{
void * dummy;
} aiori_mod_opt_t;
};
typedef struct aiori_fd_t{
void * dummy;
@ -100,12 +95,12 @@ typedef struct ior_aiori {
*/
void (*xfer_hints)(aiori_xfer_hint_t * params);
IOR_offset_t (*xfer)(int access, aiori_fd_t *, IOR_size_t *,
IOR_offset_t size, IOR_offset_t offset, aiori_mod_opt_t *);
void (*close)(aiori_fd_t *, aiori_mod_opt_t *);
void (*delete)(char *, aiori_mod_opt_t *);
IOR_offset_t size, IOR_offset_t offset, aiori_mod_opt_t * module_options);
void (*close)(aiori_fd_t *, aiori_mod_opt_t * module_options);
void (*delete)(char *, aiori_mod_opt_t * module_options);
char* (*get_version)(void);
void (*fsync)(aiori_fd_t *, aiori_mod_opt_t *);
IOR_offset_t (*get_file_size)(aiori_mod_opt_t * module_options, MPI_Comm, char *);
void (*fsync)(aiori_fd_t *, aiori_mod_opt_t * module_options);
IOR_offset_t (*get_file_size)(aiori_mod_opt_t * module_options, char * filename);
int (*statfs) (const char *, ior_aiori_statfs_t *, aiori_mod_opt_t * module_options);
int (*mkdir) (const char *path, mode_t mode, aiori_mod_opt_t * module_options);
int (*rmdir) (const char *path, aiori_mod_opt_t * module_options);
@ -113,6 +108,7 @@ typedef struct ior_aiori {
int (*stat) (const char *path, struct stat *buf, aiori_mod_opt_t * module_options);
void (*initialize)(aiori_mod_opt_t * options); /* called once per program before MPI is started */
void (*finalize)(aiori_mod_opt_t * options); /* called once per program after MPI is shutdown */
int (*rename) (const char *oldpath, const char *newpath, aiori_mod_opt_t * module_options);
option_help * (*get_options)(aiori_mod_opt_t ** init_backend_options, aiori_mod_opt_t* init_values); /* initializes the backend options as well and returns the pointer to the option help structure */
int (*check_params)(aiori_mod_opt_t *); /* check if the provided parameters for the given test and the module options are correct; if they aren't, print a message and exit(1) or return 1 */
void (*sync)(aiori_mod_opt_t * ); /* synchronize every pending operation for this storage */
@ -125,6 +121,7 @@ enum bench_type {
};
extern ior_aiori_t dummy_aiori;
extern ior_aiori_t aio_aiori;
extern ior_aiori_t daos_aiori;
extern ior_aiori_t dfs_aiori;
extern ior_aiori_t hdf5_aiori;
@ -135,7 +132,8 @@ extern ior_aiori_t ncmpi_aiori;
extern ior_aiori_t posix_aiori;
extern ior_aiori_t pmdk_aiori;
extern ior_aiori_t mmap_aiori;
extern ior_aiori_t s3_aiori;
extern ior_aiori_t S3_libS3_aiori;
extern ior_aiori_t s3_4c_aiori;
extern ior_aiori_t s3_plus_aiori;
extern ior_aiori_t s3_emc_aiori;
extern ior_aiori_t rados_aiori;
@ -158,20 +156,12 @@ int aiori_posix_mkdir (const char *path, mode_t mode, aiori_mod_opt_t * module_o
int aiori_posix_rmdir (const char *path, aiori_mod_opt_t * module_options);
int aiori_posix_access (const char *path, int mode, aiori_mod_opt_t * module_options);
int aiori_posix_stat (const char *path, struct stat *buf, aiori_mod_opt_t * module_options);
void aiori_posix_xfer_hints(aiori_xfer_hint_t * params);
aiori_fd_t *POSIX_Create(char *testFileName, int flags, aiori_mod_opt_t * module_options);
int POSIX_Mknod(char *testFileName);
aiori_fd_t *POSIX_Open(char *testFileName, int flags, aiori_mod_opt_t * module_options);
IOR_offset_t POSIX_GetFileSize(aiori_mod_opt_t * test, MPI_Comm testComm, char *testFileName);
void POSIX_Delete(char *testFileName, aiori_mod_opt_t * module_options);
void POSIX_Close(aiori_fd_t *fd, aiori_mod_opt_t * module_options);
option_help * POSIX_options(aiori_mod_opt_t ** init_backend_options, aiori_mod_opt_t * init_values);
/* NOTE: these 3 MPI-IO functions are exported for reuse by HDF5/PNetCDF */
/* NOTE: these 4 MPI-IO functions are exported for reuse by HDF5/PNetCDF */
void MPIIO_Delete(char *testFileName, aiori_mod_opt_t * module_options);
IOR_offset_t MPIIO_GetFileSize(aiori_mod_opt_t * options, MPI_Comm testComm, char *testFileName);
int MPIIO_Access(const char *, int, aiori_mod_opt_t *);
IOR_offset_t MPIIO_GetFileSize(aiori_mod_opt_t * options, char *testFileName);
int MPIIO_Access(const char *, int, aiori_mod_opt_t * module_options);
void MPIIO_xfer_hints(aiori_xfer_hint_t * params);
#endif /* not _AIORI_H */

View File

@ -25,8 +25,7 @@ void PrintTestEnds();
void PrintTableHeader();
/* End of ior-output */
IOR_offset_t *GetOffsetArraySequential(IOR_param_t * test, int pretendRank);
IOR_offset_t *GetOffsetArrayRandom(IOR_param_t * test, int pretendRank, int access);
IOR_offset_t *GetOffsetArrayRandom(IOR_param_t * test, int pretendRank, IOR_offset_t * out_count);
struct results {
double min;

View File

@ -20,6 +20,8 @@ void PrintTableHeader(){
fprintf(out_resultfile, "\n");
fprintf(out_resultfile, "access bw(MiB/s) IOPS Latency(s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter\n");
fprintf(out_resultfile, "------ --------- ---- ---------- ---------- --------- -------- -------- -------- -------- ----\n");
}else if(outputFormat == OUTPUT_CSV){
fprintf(out_resultfile, "access,bw(MiB/s),IOPS,Latency,block(KiB),xfer(KiB),open(s),wr/rd(s),close(s),total(s),numTasks,iter\n");
}
}
@ -45,8 +47,6 @@ static void PrintKeyValStart(char * key){
}
if(outputFormat == OUTPUT_JSON){
fprintf(out_resultfile, "\"%s\": \"", key);
}else if(outputFormat == OUTPUT_CSV){
}
}
@ -84,7 +84,7 @@ static void PrintKeyVal(char * key, char * value){
if(outputFormat == OUTPUT_JSON){
fprintf(out_resultfile, "\"%s\": \"%s\"", key, value);
}else if(outputFormat == OUTPUT_CSV){
fprintf(out_resultfile, "%s", value);
fprintf(out_resultfile, "%s,", value);
}
}
@ -98,7 +98,7 @@ static void PrintKeyValDouble(char * key, double value){
if(outputFormat == OUTPUT_JSON){
fprintf(out_resultfile, "\"%s\": %.4f", key, value);
}else if(outputFormat == OUTPUT_CSV){
fprintf(out_resultfile, "%.4f", value);
fprintf(out_resultfile, "%.4f,", value);
}
}
@ -113,7 +113,7 @@ static void PrintKeyValInt(char * key, int64_t value){
if(outputFormat == OUTPUT_JSON){
fprintf(out_resultfile, "\"%s\": %lld", key, (long long) value);
}else if(outputFormat == OUTPUT_CSV){
fprintf(out_resultfile, "%lld", (long long) value);
fprintf(out_resultfile, "%lld,", (long long) value);
}
}
@ -203,13 +203,16 @@ void PrintRepeatEnd(){
void PrintRepeatStart(){
if (rank != 0)
return;
if( outputFormat == OUTPUT_DEFAULT){
if(outputFormat == OUTPUT_DEFAULT){
return;
}
PrintArrayStart();
}
void PrintTestEnds(){
if (outputFormat == OUTPUT_CSV){
return;
}
if (rank != 0 || verbose <= VERBOSE_0) {
PrintEndSection();
return;
@ -246,7 +249,21 @@ void PrintReducedResult(IOR_test_t *test, int access, double bw, double iops, do
PrintKeyValDouble("closeTime", diff_subset[2]);
PrintKeyValDouble("totalTime", totalTime);
PrintEndSection();
}else if (outputFormat == OUTPUT_CSV){
PrintKeyVal("access", access == WRITE ? "write" : "read");
PrintKeyValDouble("bwMiB", bw / MEBIBYTE);
PrintKeyValDouble("iops", iops);
PrintKeyValDouble("latency", latency);
PrintKeyValDouble("blockKiB", (double)test->params.blockSize / KIBIBYTE);
PrintKeyValDouble("xferKiB", (double)test->params.transferSize / KIBIBYTE);
PrintKeyValDouble("openTime", diff_subset[0]);
PrintKeyValDouble("wrRdTime", diff_subset[1]);
PrintKeyValDouble("closeTime", diff_subset[2]);
PrintKeyValDouble("totalTime", totalTime);
PrintKeyValInt("Numtasks", test->params.numTasks);
fprintf(out_resultfile, "%d\n", rep);
}
fflush(out_resultfile);
}
@ -258,6 +275,10 @@ void PrintHeader(int argc, char **argv)
if (rank != 0)
return;
if (outputFormat == OUTPUT_CSV){
return;
}
PrintStartSection();
if (outputFormat != OUTPUT_DEFAULT){
PrintKeyVal("Version", META_VERSION);
@ -284,23 +305,6 @@ void PrintHeader(int argc, char **argv)
}
PrintKeyValEnd();
}
#ifdef _NO_MPI_TIMER
if (verbose >= VERBOSE_2)
fprintf(out_logfile, "Using unsynchronized POSIX timer\n");
#else /* not _NO_MPI_TIMER */
if (MPI_WTIME_IS_GLOBAL) {
if (verbose >= VERBOSE_2)
fprintf(out_logfile, "Using synchronized MPI timer\n");
} else {
if (verbose >= VERBOSE_2)
fprintf(out_logfile, "Using unsynchronized MPI timer\n");
}
#endif /* _NO_MPI_TIMER */
if (verbose >= VERBOSE_1) {
fprintf(out_logfile, "Start time skew across all tasks: %.02f sec\n",
wall_clock_deviation);
}
if (verbose >= VERBOSE_3) { /* show env */
fprintf(out_logfile, "STARTING ENVIRON LOOP\n");
for (i = 0; environ[i] != NULL; i++) {
@ -319,11 +323,16 @@ void PrintHeader(int argc, char **argv)
*/
void ShowTestStart(IOR_param_t *test)
{
if (outputFormat == OUTPUT_CSV){
return;
}
PrintStartSection();
PrintKeyValInt("TestID", test->id);
PrintKeyVal("StartTime", CurrentTimeString());
ShowFileSystemSize(test);
char filename[MAX_PATHLEN];
GetTestFileName(filename, test);
ShowFileSystemSize(filename, test->backend, test->backend_options);
if (verbose >= VERBOSE_3 || outputFormat == OUTPUT_JSON) {
char* data_packets[] = {"g","t","o","i"};
@ -362,19 +371,19 @@ void ShowTestStart(IOR_param_t *test)
PrintKeyValInt("randomOffset", test->randomOffset);
PrintKeyValInt("checkWrite", test->checkWrite);
PrintKeyValInt("checkRead", test->checkRead);
PrintKeyValInt("storeFileOffset", test->storeFileOffset);
PrintKeyValInt("dataPacketType", test->dataPacketType);
PrintKeyValInt("keepFile", test->keepFile);
PrintKeyValInt("keepFileWithError", test->keepFileWithError);
PrintKeyValInt("quitOnError", test->quitOnError);
PrintKeyValInt("warningAsErrors", test->warningAsErrors);
PrintKeyValInt("verbose", verbose);
PrintKeyVal("data packet type", data_packets[test->dataPacketType]);
PrintKeyValInt("setTimeStampSignature/incompressibleSeed", test->setTimeStampSignature); /* Seed value was copied into setTimeStampSignature as well */
PrintKeyValInt("collective", test->collective);
PrintKeyValInt("segmentCount", test->segmentCount);
#ifdef HAVE_GPFS_FCNTL_H
PrintKeyValInt("gpfsHintAccess", test->gpfs_hint_access);
PrintKeyValInt("gpfsReleaseToken", test->gpfs_release_token);
#endif
//#ifdef HAVE_GPFS_FCNTL_H
//PrintKeyValInt("gpfsHintAccess", test->gpfs_hint_access);
//PrintKeyValInt("gpfsReleaseToken", test->gpfs_release_token);
//#endif
PrintKeyValInt("transferSize", test->transferSize);
PrintKeyValInt("blockSize", test->blockSize);
PrintEndSection();
@ -401,6 +410,9 @@ void ShowTestEnd(IOR_test_t *tptr){
*/
void ShowSetup(IOR_param_t *params)
{
if (outputFormat == OUTPUT_CSV){
return;
}
if (params->debug) {
fprintf(out_logfile, "\n*** DEBUG MODE ***\n");
fprintf(out_logfile, "*** %s ***\n\n", params->debug);
@ -594,9 +606,6 @@ static void PrintLongSummaryOneOperation(IOR_test_t *test, const int access)
PrintKeyValInt("taskPerNodeOffset", params->taskPerNodeOffset);
PrintKeyValInt("reorderTasksRandom", params->reorderTasksRandom);
PrintKeyValInt("reorderTasksRandomSeed", params->reorderTasksRandomSeed);
PrintKeyValInt("segmentCount", params->segmentCount);
PrintKeyValInt("blockSize", params->blockSize);
PrintKeyValInt("transferSize", params->transferSize);
PrintKeyValDouble("bwMaxMIB", bw->max / MEBIBYTE);
PrintKeyValDouble("bwMinMIB", bw->min / MEBIBYTE);
PrintKeyValDouble("bwMeanMIB", bw->mean / MEBIBYTE);
@ -612,8 +621,6 @@ static void PrintLongSummaryOneOperation(IOR_test_t *test, const int access)
}
PrintKeyValDouble("xsizeMiB", (double) point->aggFileSizeForBW / MEBIBYTE);
PrintEndSection();
}else if (outputFormat == OUTPUT_CSV){
}
fflush(out_resultfile);
@ -638,7 +645,7 @@ void PrintLongSummaryHeader()
if (rank != 0 || verbose <= VERBOSE_0)
return;
if(outputFormat != OUTPUT_DEFAULT){
return;
return;
}
fprintf(out_resultfile, "\n");
@ -665,8 +672,6 @@ void PrintLongSummaryAllTests(IOR_test_t *tests_head)
fprintf(out_resultfile, "Summary of all tests:");
}else if (outputFormat == OUTPUT_JSON){
PrintNamedArrayStart("summary");
}else if (outputFormat == OUTPUT_CSV){
}
PrintLongSummaryHeader();

895
src/ior.c

File diff suppressed because it is too large

View File

@ -39,19 +39,19 @@
#include "iordef.h"
#include "aiori.h"
#include <mpi.h>
#ifndef MPI_FILE_NULL
# include <mpio.h>
#endif /* not MPI_FILE_NULL */
#define ISPOWEROFTWO(x) ((x != 0) && !(x & (x - 1)))
/******************** DATA Packet Type ***************************************/
/* Holds the types of data packets: generic, offset, timestamp, incompressible */
enum PACKET_TYPE
{
generic = 0, /* No packet type specified */
timestamp=1, /* Timestamp packet set with -l */
offset=2, /* Offset packet set with -l */
incompressible=3 /* Incompressible packet set with -l */
};
typedef enum{
IOR_MEMORY_TYPE_CPU = 0,
IOR_MEMORY_TYPE_GPU_MANAGED = 1,
IOR_MEMORY_TYPE_GPU_DEVICE_ONLY = 2,
} ior_memory_flags;
/***************** IOR_BUFFERS *************************************************/
@ -92,9 +92,13 @@ typedef struct
char * options; /* options string */
// intermediate options
int collective; /* collective I/O */
MPI_Comm testComm; /* MPI communicator */
MPI_Comm testComm; /* Current MPI communicator */
MPI_Comm mpi_comm_world; /* The global MPI communicator */
int dryRun; /* do not perform any I/O, just process any inputs and print dummy output */
int dualMount; /* dual mount points */
int dualMount; /* dual mount points */
ior_memory_flags gpuMemoryFlags; /* use the GPU to store the data */
int gpuDirect; /* use gpuDirect, this influences gpuMemoryFlags as well */
int gpuID; /* the GPU to use for gpuDirect or memory options */
int numTasks; /* number of tasks for test */
int numNodes; /* number of nodes for test */
int numTasksOnNode0; /* number of tasks on node 0 (usually all the same, but don't have to be, use with caution) */
@ -117,18 +121,18 @@ typedef struct
int keepFile; /* don't delete the testfile on exit */
int keepFileWithError; /* don't delete the testfile with errors */
int errorFound; /* error found in data check */
int quitOnError; /* quit code when error in check */
IOR_offset_t segmentCount; /* number of segments (or HDF5 datasets) */
IOR_offset_t blockSize; /* contiguous bytes to write per task */
IOR_offset_t transferSize; /* size of transfer in bytes */
IOR_offset_t expectedAggFileSize; /* calculated aggregate file size */
IOR_offset_t randomPrefillBlocksize; /* prefill option for random IO, the amount of data used for prefill */
char * saveRankDetailsCSV; /* save the details about the performance to a file */
int summary_every_test; /* flag to print summary every test, not just at end */
int uniqueDir; /* use unique directory for each fpp */
int useExistingTestFile; /* do not delete test file before access */
int storeFileOffset; /* use file offset as stored signature */
int deadlineForStonewalling; /* max time in seconds to run any test phase */
int stoneWallingWearOut; /* wear out the stonewalling, once the timout is over, each process has to write the same amount */
int stoneWallingWearOut; /* wear out the stonewalling, once the timeout is over, each process has to write the same amount */
uint64_t stoneWallingWearOutIterations; /* the number of iterations for the stonewallingWearOut, needed for readBack */
char * stoneWallingStatusFile;
@ -145,7 +149,7 @@ typedef struct
char * memoryPerNodeStr; /* for parsing */
char * testscripts; /* for parsing */
char * buffer_type; /* for parsing */
enum PACKET_TYPE dataPacketType; /* The type of data packet. */
ior_dataPacketType_e dataPacketType; /* The type of data packet. */
void * backend_options; /* Backend-specific options */
@ -154,27 +158,15 @@ typedef struct
int fsyncPerWrite; /* fsync() after each write */
int fsync; /* fsync() after write */
/* HDFS variables */
char * hdfs_user; /* copied from ENV, for now */
const char* hdfs_name_node;
tPort hdfs_name_node_port; /* (uint16_t) */
hdfsFS hdfs_fs; /* file-system handle */
int hdfs_replicas; /* n block replicas. (0 gets default) */
int hdfs_block_size; /* internal blk-size. (0 gets default) */
char* URI; /* "path" to target object */
size_t part_number; /* multi-part upload increment (PER-RANK!) */
char* UploadId; /* key for multi-part-uploads */
/* RADOS variables */
rados_t rados_cluster; /* RADOS cluster handle */
rados_ioctx_t rados_ioctx; /* I/O context for our pool in the RADOS cluster */
/* NCMPI variables */
int var_id; /* variable id handle for data set */
int id; /* test's unique ID */
int intraTestBarriers; /* barriers between open/op and op/close */
int warningAsErrors; /* treat any warning as an error */
aiori_xfer_hint_t hints;
} IOR_param_t;
@ -185,8 +177,9 @@ typedef struct {
size_t pairs_accessed; // number of I/Os done, useful for deadlineForStonewalling
double stonewall_time;
long long stonewall_min_data_accessed;
long long stonewall_avg_data_accessed;
long long stonewall_min_data_accessed; // of all processes
long long stonewall_avg_data_accessed; // across all processes
long long stonewall_total_data_accessed; // sum across all processes
IOR_offset_t aggFileSizeFromStat;
IOR_offset_t aggFileSizeFromXfer;
@ -210,7 +203,7 @@ IOR_test_t *CreateTest(IOR_param_t *init_params, int test_num);
void AllocResults(IOR_test_t *test);
char * GetPlatformName(void);
void init_IOR_Param_t(IOR_param_t *p);
void init_IOR_Param_t(IOR_param_t *p, MPI_Comm global_com);
/*
* This function runs IOR given by command line, useful for testing

View File

@ -18,8 +18,12 @@
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <mpi.h>
typedef enum {
DATA_TIMESTAMP, /* Will not include any offset, hence each buffer will be the same */
DATA_OFFSET,
DATA_INCOMPRESSIBLE /* Will include the offset as well */
} ior_dataPacketType_e;
#ifdef _WIN32
# define _CRT_SECURE_NO_WARNINGS
@ -52,13 +56,6 @@
# include <limits.h>
#endif
/************************** D E C L A R A T I O N S ***************************/
extern int numTasks; /* MPI variables */
extern int rank;
extern int rankOffset;
extern int verbose; /* verbose output */
/*************************** D E F I N I T I O N S ****************************/
enum OutputFormat_t{
@ -115,117 +112,11 @@ enum OutputFormat_t{
#define DELIMITERS " \t\r\n=" /* ReadScript() */
#define FILENAME_DELIMITER '@' /* ParseFileName() */
/* MACROs for debugging */
#define HERE fprintf(stdout, "** LINE %d (TASK=%d) **\n", \
__LINE__, rank);
typedef long long int IOR_offset_t;
typedef long long int IOR_size_t;
#define IOR_format "%016llx"
/******************************** M A C R O S *********************************/
/******************************************************************************/
/*
* WARN_RESET will display a custom error message and set value to default
*/
#define WARN_RESET(MSG, TO_STRUCT_PTR, FROM_STRUCT_PTR, MEMBER) do { \
(TO_STRUCT_PTR)->MEMBER = (FROM_STRUCT_PTR)->MEMBER; \
if (rank == 0) { \
fprintf(stdout, "ior WARNING: %s. Using value of %d.\n", \
MSG, (TO_STRUCT_PTR)->MEMBER); \
} \
fflush(stdout); \
} while (0)
#define WARN(MSG) do { \
if (verbose > VERBOSE_2) { \
fprintf(stdout, "ior WARNING: %s, (%s:%d).\n", \
MSG, __FILE__, __LINE__); \
} else { \
fprintf(stdout, "ior WARNING: %s.\n", MSG); \
} \
fflush(stdout); \
} while (0)
/* warning with format string and errno printed */
#define EWARNF(FORMAT, ...) do { \
if (verbose > VERBOSE_2) { \
fprintf(stdout, "ior WARNING: " FORMAT ", errno %d, %s (%s:%d).\n", \
__VA_ARGS__, errno, strerror(errno), __FILE__, __LINE__); \
} else { \
fprintf(stdout, "ior WARNING: " FORMAT ", errno %d, %s \n", \
__VA_ARGS__, errno, strerror(errno)); \
} \
fflush(stdout); \
} while (0)
/* warning with errno printed */
#define EWARN(MSG) do { \
EWARNF("%s", MSG); \
} while (0)
/* display error message with format string and terminate execution */
#define ERRF(FORMAT, ...) do { \
fprintf(stdout, "ior ERROR: " FORMAT ", errno %d, %s (%s:%d)\n", \
__VA_ARGS__, errno, strerror(errno), __FILE__, __LINE__); \
fflush(stdout); \
MPI_Abort(MPI_COMM_WORLD, -1); \
} while (0)
/* display error message and terminate execution */
#define ERR_ERRNO(MSG) do { \
ERRF("%s", MSG); \
} while (0)
/* display a simple error message (i.e. errno is not set) and terminate execution */
#define ERR(MSG) do { \
fprintf(stdout, "ior ERROR: %s, (%s:%d)\n", \
MSG, __FILE__, __LINE__); \
fflush(stdout); \
MPI_Abort(MPI_COMM_WORLD, -1); \
} while (0)
/******************************************************************************/
/*
* MPI_CHECKF will display a custom format string as well as an error string
* from the MPI_STATUS and then exit the program
*/
#define MPI_CHECKF(MPI_STATUS, FORMAT, ...) do { \
char resultString[MPI_MAX_ERROR_STRING]; \
int resultLength; \
\
if (MPI_STATUS != MPI_SUCCESS) { \
MPI_Error_string(MPI_STATUS, resultString, &resultLength); \
fprintf(stdout, "ior ERROR: " FORMAT ", MPI %s, (%s:%d)\n", \
__VA_ARGS__, resultString, __FILE__, __LINE__); \
fflush(stdout); \
MPI_Abort(MPI_COMM_WORLD, -1); \
} \
} while(0)
/******************************************************************************/
/*
* MPI_CHECK will display a custom error message as well as an error string
* from the MPI_STATUS and then exit the program
*/
#define MPI_CHECK(MPI_STATUS, MSG) do { \
MPI_CHECKF(MPI_STATUS, "%s", MSG); \
} while(0)
/******************************************************************************/
/*
* System info for Windows.

13
src/md-workbench-main.c Normal file
View File

@ -0,0 +1,13 @@
#include <mpi.h>
#include "md-workbench.h"
int main(int argc, char ** argv){
MPI_Init(& argc, & argv);
//phase_stat_t* results =
md_workbench_run(argc, argv, MPI_COMM_WORLD, stdout);
// API check, access the results of the first phase, which is precreate.
//printf("Max op runtime: %f\n", results->max_op_time);
MPI_Finalize();
return 0;
}

1053
src/md-workbench.c Normal file

File diff suppressed because it is too large

42
src/md-workbench.h Normal file
View File

@ -0,0 +1,42 @@
#ifndef IOR_MD_WORKBENCH_H
#define IOR_MD_WORKBENCH_H
#include <stdint.h>
#include <stdio.h>
#include <mpi.h>
typedef struct{
float min;
float q1;
float median;
float q3;
float q90;
float q99;
float max;
} time_statistics_t;
// statistics for running a single phase
typedef struct{ // NOTE: if this type is changed, adjust end_phase() !!!
time_statistics_t stats_create;
time_statistics_t stats_read;
time_statistics_t stats_stat;
time_statistics_t stats_delete;
int errors;
double rate;
double max_op_time;
double runtime;
uint64_t iterations_done;
} mdworkbench_result_t;
typedef struct{
int count; // the number of results
int errors;
mdworkbench_result_t result[];
} mdworkbench_results_t;
// @Return The first result is for the precreate phase, followed by one result per benchmark iteration; the last one is for the cleanup phase
mdworkbench_results_t* md_workbench_run(int argc, char ** argv, MPI_Comm world_com, FILE * out_logfile);
#endif
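A minimal sketch of calling this API from another program (illustrative only; it mirrors src/md-workbench-main.c above and relies on the @Return note, so result[0] is the precreate phase):
#include <mpi.h>
#include <stdio.h>
#include "md-workbench.h"
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    mdworkbench_results_t *res = md_workbench_run(argc, argv, MPI_COMM_WORLD, stdout);
    if (res != NULL && res->count > 0) {
        /* result[0] is the precreate phase, see the @Return note above */
        printf("precreate: rate=%.2f ops/s, max op time=%.4fs, errors=%d\n",
               res->result[0].rate, res->result[0].max_op_time, res->errors);
    }
    MPI_Finalize();
    return 0;
}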

File diff suppressed because it is too large

View File

@ -8,28 +8,31 @@
typedef enum {
MDTEST_DIR_CREATE_NUM = 0,
MDTEST_DIR_STAT_NUM = 1,
MDTEST_DIR_READ_NUM = 1,
MDTEST_DIR_REMOVE_NUM = 3,
MDTEST_FILE_CREATE_NUM = 4,
MDTEST_FILE_STAT_NUM = 5,
MDTEST_FILE_READ_NUM = 6,
MDTEST_FILE_REMOVE_NUM = 7,
MDTEST_TREE_CREATE_NUM = 8,
MDTEST_TREE_REMOVE_NUM = 9,
MDTEST_DIR_READ_NUM = 2,
MDTEST_DIR_RENAME_NUM = 3,
MDTEST_DIR_REMOVE_NUM = 4,
MDTEST_FILE_CREATE_NUM = 5,
MDTEST_FILE_STAT_NUM = 6,
MDTEST_FILE_READ_NUM = 7,
MDTEST_FILE_REMOVE_NUM = 8,
MDTEST_TREE_CREATE_NUM = 9,
MDTEST_TREE_REMOVE_NUM = 10,
MDTEST_LAST_NUM
} mdtest_test_num_t;
typedef struct
{
double rate[MDTEST_LAST_NUM]; /* Calculated throughput */
double rate[MDTEST_LAST_NUM]; /* Calculated throughput after the barrier */
double rate_before_barrier[MDTEST_LAST_NUM]; /* Calculated throughput before the barrier */
double time[MDTEST_LAST_NUM]; /* Time */
uint64_t items[MDTEST_LAST_NUM]; /* Number of operations done */
double time_before_barrier[MDTEST_TREE_CREATE_NUM]; /* individual time before executing the barrier */
uint64_t items[MDTEST_LAST_NUM]; /* Number of operations done in this process */
/* Statistics when hitting the stonewall */
double stonewall_time[MDTEST_LAST_NUM]; /* runtime until completion / hit of the stonewall */
uint64_t stonewall_last_item[MDTEST_LAST_NUM]; /* Max number of items a process has accessed */
uint64_t stonewall_item_min[MDTEST_LAST_NUM]; /* Min number of items a process has accessed */
uint64_t stonewall_item_sum[MDTEST_LAST_NUM]; /* Total number of items accessed until stonewall */
double stonewall_time[MDTEST_LAST_NUM]; /* Max runtime of any process until completion / hit of the stonewall */
uint64_t stonewall_last_item[MDTEST_LAST_NUM]; /* The number of items a process has accessed */
uint64_t stonewall_item_min[MDTEST_LAST_NUM]; /* Min number of items any process has accessed */
uint64_t stonewall_item_sum[MDTEST_LAST_NUM]; /* Total number of items accessed by all processes until stonewall */
} mdtest_results_t;
mdtest_results_t * mdtest_run(int argc, char **argv, MPI_Comm world_com, FILE * out_logfile);

View File

@ -7,6 +7,23 @@
#include <option.h>
/* merge two option lists and return the merged list */
option_help * option_merge(option_help * a, option_help * b){
int count_a = 0;
for(option_help * i = a; i->type != 0; i++){
count_a++;
}
int count = count_a + 1; // +1 for the terminating LAST_OPTION
for(option_help * i = b; i->type != 0; i++){
count++;
}
option_help * h = malloc(sizeof(option_help) * count);
memcpy(h, a, sizeof(option_help) * count_a);
memcpy(h + count_a, b, sizeof(option_help) * (count - count_a));
return h;
}
/*
* Takes a string of the form 64, 8m, 128k, 4g, etc. and converts to bytes.
*/
@ -236,8 +253,10 @@ static void option_parse_token(char ** argv, int * flag_parsed_next, int * requi
int i = 0;
if(arg != NULL){
arg[0] = 0;
arg++;
replaced_equal = 1;
// Check empty value
arg = (arg[1] == 0) ? NULL : arg + 1;
}
*flag_parsed_next = 0;
@ -247,11 +266,13 @@ static void option_parse_token(char ** argv, int * flag_parsed_next, int * requi
return;
}
txt++;
int parsed = 0;
// printf("Parsing: %s : %s\n", txt, arg);
// support groups of multiple flags like -vvv or -vq
for(int flag_index = 0; flag_index < strlen(txt); ++flag_index){
// don't loop looking for multiple flags if we already processed a long option
if(txt[0] == '-' && flag_index > 0)
if(txt[flag_index] == '=' || (txt[0] == '-' && flag_index > 0))
break;
for(int m = 0; m < opt_all->module_count; m++ ){
@ -264,6 +285,7 @@ static void option_parse_token(char ** argv, int * flag_parsed_next, int * requi
continue;
}
if ( (o->shortVar == txt[flag_index]) || (strlen(txt) > 2 && txt[0] == '-' && o->longVar != NULL && strcmp(txt + 1, o->longVar) == 0)){
// printf("Found %s %c=%c? %d %d\n", o->help, o->shortVar, txt[flag_index], (o->shortVar == txt[flag_index]), (strlen(txt) > 2 && txt[0] == '-' && o->longVar != NULL && strcmp(txt + 1, o->longVar) == 0));
// now process the option.
switch(o->arg){
case (OPTION_FLAG):{
@ -279,7 +301,7 @@ static void option_parse_token(char ** argv, int * flag_parsed_next, int * requi
case (OPTION_OPTIONAL_ARGUMENT):
case (OPTION_REQUIRED_ARGUMENT):{
// check if next is an argument
if(arg == NULL){
if(arg == NULL && replaced_equal != 1){
if(o->shortVar == txt[0] && txt[1] != 0){
arg = & txt[1];
}else{
@ -353,12 +375,13 @@ static void option_parse_token(char ** argv, int * flag_parsed_next, int * requi
(*requiredArgsSeen)++;
}
return;
parsed = 1;
}
}
}
}
if(parsed) return;
if(strcmp(txt, "h") == 0 || strcmp(txt, "-help") == 0){
*print_help = 1;
}else{

View File

@ -43,6 +43,7 @@ void option_print_current(option_help * args);
//@return the number of parsed arguments
int option_parse(int argc, char ** argv, options_all_t * args);
int option_parse_str(char*val, options_all_t * opt_all);
option_help * option_merge(option_help * a, option_help * b);
/* Parse a single line */
int option_parse_key_value(char * key, char * value, options_all_t * opt_all);

View File

@ -32,7 +32,7 @@
#include "option.h"
#include "aiori.h"
IOR_param_t initialTestParams;
static IOR_param_t initialTestParams;
option_help * createGlobalOptions(IOR_param_t * params);
@ -62,7 +62,17 @@ static void CheckRunSettings(IOR_test_t *tests)
}
if(params->dualMount && !params->filePerProc) {
MPI_CHECK(MPI_Abort(MPI_COMM_WORLD, -1), "Dual Mount can only be used with File Per Process");
ERR("Dual Mount can only be used with File Per Process");
}
if(params->gpuDirect){
if(params->gpuMemoryFlags == IOR_MEMORY_TYPE_GPU_MANAGED){
ERR("GPUDirect cannot be used with managed memory");
}
params->gpuMemoryFlags = IOR_MEMORY_TYPE_GPU_DEVICE_ONLY;
if(params->checkRead || params->checkWrite){
ERR("GPUDirect data cannot yet be checked");
}
}
}
}
@ -103,6 +113,21 @@ void DecodeDirective(char *line, IOR_param_t *params, options_all_t * module_opt
}
printf("Writing output to %s\n", value);
}
} else if (strcasecmp(option, "saveRankPerformanceDetailsCSV") == 0){
if (rank == 0){
// check that the file is writeable, truncate it and add header
FILE* fd = fopen(value, "w");
if (fd == NULL){
FAIL("Cannot open saveRankPerformanceDetailsCSV file for write!");
}
char buff[] = "access,rank,runtime-with-openclose,runtime,throughput-withopenclose,throughput\n";
int ret = fwrite(buff, strlen(buff), 1, fd);
if(ret != 1){
FAIL("Cannot write header to saveRankPerformanceDetailsCSV file");
}
fclose(fd);
}
params->saveRankDetailsCSV = strdup(value);
} else if (strcasecmp(option, "summaryFormat") == 0) {
if(strcasecmp(value, "default") == 0){
outputFormat = OUTPUT_DEFAULT;
@ -123,6 +148,12 @@ void DecodeDirective(char *line, IOR_param_t *params, options_all_t * module_opt
params->testFileName = strdup(value);
} else if (strcasecmp(option, "dualmount") == 0){
params->dualMount = atoi(value);
} else if (strcasecmp(option, "allocateBufferOnGPU") == 0) {
params->gpuMemoryFlags = atoi(value);
} else if (strcasecmp(option, "GPUid") == 0) {
params->gpuID = atoi(value);
} else if (strcasecmp(option, "GPUDirect") == 0) {
params->gpuDirect = atoi(value);
} else if (strcasecmp(option, "deadlineforstonewalling") == 0) {
params->deadlineForStonewalling = atoi(value);
} else if (strcasecmp(option, "stoneWallingWearOut") == 0) {
@ -175,8 +206,8 @@ void DecodeDirective(char *line, IOR_param_t *params, options_all_t * module_opt
params->keepFileWithError = atoi(value);
} else if (strcasecmp(option, "multiFile") == 0) {
params->multiFile = atoi(value);
} else if (strcasecmp(option, "quitonerror") == 0) {
params->quitOnError = atoi(value);
} else if (strcasecmp(option, "warningAsErrors") == 0) {
params->warningAsErrors = atoi(value);
} else if (strcasecmp(option, "segmentcount") == 0) {
params->segmentCount = string_to_bytes(value);
} else if (strcasecmp(option, "blocksize") == 0) {
@ -191,8 +222,8 @@ void DecodeDirective(char *line, IOR_param_t *params, options_all_t * module_opt
params->verbose = atoi(value);
} else if (strcasecmp(option, "settimestampsignature") == 0) {
params->setTimeStampSignature = atoi(value);
} else if (strcasecmp(option, "storefileoffset") == 0) {
params->storeFileOffset = atoi(value);
} else if (strcasecmp(option, "dataPacketType") == 0) {
params->dataPacketType = parsePacketType(value[0]);
} else if (strcasecmp(option, "uniqueDir") == 0) {
params->uniqueDir = atoi(value);
} else if (strcasecmp(option, "useexistingtestfile") == 0) {
@ -282,7 +313,7 @@ int contains_only(char *haystack, char *needle)
/* check for "needle" */
if (strncasecmp(ptr, needle, strlen(needle)) != 0)
return 0;
/* make sure the rest of the line is only whitspace as well */
/* make sure the rest of the line is only whitespace as well */
for (ptr += strlen(needle); ptr < end; ptr++) {
if (!isspace(*ptr))
return 0;
@ -384,7 +415,7 @@ option_help * createGlobalOptions(IOR_param_t * params){
char APIs[1024];
char APIs_legacy[1024];
aiori_supported_apis(APIs, APIs_legacy, IOR);
char apiStr[1024];
char * apiStr = safeMalloc(1024);
sprintf(apiStr, "API for I/O [%s]", APIs);
option_help o [] = {
@ -395,9 +426,16 @@ option_help * createGlobalOptions(IOR_param_t * params){
{'C', NULL, "reorderTasks -- changes task ordering for readback (useful to avoid client cache)", OPTION_FLAG, 'd', & params->reorderTasks},
{'d', NULL, "interTestDelay -- delay between reps in seconds", OPTION_OPTIONAL_ARGUMENT, 'd', & params->interTestDelay},
{'D', NULL, "deadlineForStonewalling -- seconds before stopping write or read phase", OPTION_OPTIONAL_ARGUMENT, 'd', & params->deadlineForStonewalling},
{.help=" -O stoneWallingWearOut=1 -- once the stonewalling timout is over, all process finish to access the amount of data", .arg = OPTION_OPTIONAL_ARGUMENT},
{.help=" -O stoneWallingWearOut=1 -- once the stonewalling timeout is over, all process finish to access the amount of data", .arg = OPTION_OPTIONAL_ARGUMENT},
{.help=" -O stoneWallingWearOutIterations=N -- stop after processing this number of iterations, needed for reading data back written with stoneWallingWearOut", .arg = OPTION_OPTIONAL_ARGUMENT},
{.help=" -O stoneWallingStatusFile=FILE -- this file keeps the number of iterations from stonewalling during write and allows to use them for read", .arg = OPTION_OPTIONAL_ARGUMENT},
#ifdef HAVE_CUDA
{.help=" -O allocateBufferOnGPU=X -- allocate I/O buffers on the GPU: X=1 uses managed memory, X=2 device memory.", .arg = OPTION_OPTIONAL_ARGUMENT},
{.help=" -O GPUid=X -- select the GPU to use.", .arg = OPTION_OPTIONAL_ARGUMENT},
#ifdef HAVE_GPU_DIRECT
{0, "gpuDirect", "allocate I/O buffers on the GPU and use gpuDirect to store data; this option is incompatible with any option requiring CPU access to data.", OPTION_FLAG, 'd', & params->gpuDirect},
#endif
#endif
{'e', NULL, "fsync -- perform a fsync() operation at the end of each read/write phase", OPTION_FLAG, 'd', & params->fsync},
{'E', NULL, "useExistingTestFile -- do not remove test file before write access", OPTION_FLAG, 'd', & params->useExistingTestFile},
{'f', NULL, "scriptFile -- test script name", OPTION_OPTIONAL_ARGUMENT, 's', & params->testscripts},
@ -412,13 +450,12 @@ option_help * createGlobalOptions(IOR_param_t * params){
{'j', NULL, "outlierThreshold -- warn on outlier N seconds from mean", OPTION_OPTIONAL_ARGUMENT, 'd', & params->outlierThreshold},
{'k', NULL, "keepFile -- don't remove the test file(s) on program exit", OPTION_FLAG, 'd', & params->keepFile},
{'K', NULL, "keepFileWithError -- keep error-filled file(s) after data-checking", OPTION_FLAG, 'd', & params->keepFileWithError},
{'l', NULL, "datapacket type-- type of packet that will be created [offset|incompressible|timestamp|o|i|t]", OPTION_OPTIONAL_ARGUMENT, 's', & params->buffer_type},
{'l', "dataPacketType", "datapacket type-- type of packet that will be created [offset|incompressible|timestamp|o|i|t]", OPTION_OPTIONAL_ARGUMENT, 's', & params->buffer_type},
{'m', NULL, "multiFile -- use number of reps (-i) for multiple file count", OPTION_FLAG, 'd', & params->multiFile},
{'M', NULL, "memoryPerNode -- hog memory on the node (e.g.: 2g, 75%)", OPTION_OPTIONAL_ARGUMENT, 's', & params->memoryPerNodeStr},
{'N', NULL, "numTasks -- number of tasks that are participating in the test (overrides MPI)", OPTION_OPTIONAL_ARGUMENT, 'd', & params->numTasks},
{'o', NULL, "testFile -- full name for test", OPTION_OPTIONAL_ARGUMENT, 's', & params->testFileName},
{'O', NULL, "string of IOR directives (e.g. -O checkRead=1,lustreStripeCount=32)", OPTION_OPTIONAL_ARGUMENT, 'p', & decodeDirectiveWrapper},
{'q', NULL, "quitOnError -- during file error-checking, abort on error", OPTION_FLAG, 'd', & params->quitOnError},
{'Q', NULL, "taskPerNodeOffset for read tests use with -C & -Z options (-C constant N, -Z at least N)", OPTION_OPTIONAL_ARGUMENT, 'd', & params->taskPerNodeOffset},
{'r', NULL, "readFile -- read existing file", OPTION_FLAG, 'd', & params->readFile},
{'R', NULL, "checkRead -- verify that the output of read matches the expected signature (used with -G)", OPTION_FLAG, 'd', & params->checkRead},
@ -434,9 +471,13 @@ option_help * createGlobalOptions(IOR_param_t * params){
{'y', NULL, "dualMount -- use dual mount points for a filesystem", OPTION_FLAG, 'd', & params->dualMount},
{'Y', NULL, "fsyncPerWrite -- perform sync operation after every write operation", OPTION_FLAG, 'd', & params->fsyncPerWrite},
{'z', NULL, "randomOffset -- access is to random, not sequential, offsets within a file", OPTION_FLAG, 'd', & params->randomOffset},
{0, "randomPrefill", "For random -z access only: Prefill the file with this blocksize, e.g., 2m", OPTION_OPTIONAL_ARGUMENT, 'l', & params->randomPrefillBlocksize},
{0, "random-offset-seed", "The seed for -z", OPTION_OPTIONAL_ARGUMENT, 'd', & params->randomSeed},
{'Z', NULL, "reorderTasksRandom -- changes task ordering to random ordering for readback", OPTION_FLAG, 'd', & params->reorderTasksRandom},
{0, "warningAsErrors", "Any warning should lead to an error.", OPTION_FLAG, 'd', & params->warningAsErrors},
{.help=" -O summaryFile=FILE -- store result data into this file", .arg = OPTION_OPTIONAL_ARGUMENT},
{.help=" -O summaryFormat=[default,JSON,CSV] -- use the format for outputing the summary", .arg = OPTION_OPTIONAL_ARGUMENT},
{.help=" -O summaryFormat=[default,JSON,CSV] -- use the format for outputting the summary", .arg = OPTION_OPTIONAL_ARGUMENT},
{.help=" -O saveRankPerformanceDetailsCSV=<FILE> -- store the performance of each rank into the named CSV file.", .arg = OPTION_OPTIONAL_ARGUMENT},
{0, "dryRun", "do not perform any I/Os just run evtl. inputs print dummy output", OPTION_FLAG, 'd', & params->dryRun},
LAST_OPTION,
};
@ -449,9 +490,9 @@ option_help * createGlobalOptions(IOR_param_t * params){
/*
* Parse Commandline.
*/
IOR_test_t *ParseCommandLine(int argc, char **argv)
IOR_test_t *ParseCommandLine(int argc, char **argv, MPI_Comm com)
{
init_IOR_Param_t(& initialTestParams);
init_IOR_Param_t(& initialTestParams, com);
IOR_test_t *tests = NULL;

View File

@ -13,8 +13,6 @@
#include "ior.h"
extern IOR_param_t initialTestParams;
IOR_test_t *ParseCommandLine(int argc, char **argv);
IOR_test_t *ParseCommandLine(int argc, char **argv, MPI_Comm com);
#endif /* !_PARSE_OPTIONS_H */

View File

@ -1,8 +1,10 @@
#include <assert.h>
#include <ior.h>
#include <ior-internal.h>
#include "../ior.h"
#include "../ior-internal.h"
// Run all tests via:
// make distcheck
// build a single test via, e.g., mpicc example.c -I ../src/ ../src/libaiori.a -lm
int main(){
@ -16,16 +18,6 @@ int main(){
// having an individual file
test.filePerProc = 1;
IOR_offset_t * offsets;
offsets = GetOffsetArraySequential(& test, 0);
assert(offsets[0] == 0);
assert(offsets[1] == 10);
assert(offsets[2] == 20);
assert(offsets[3] == 30);
assert(offsets[4] == 40);
// for(int i = 0; i < test.segmentCount; i++){
// printf("%lld\n", (long long int) offsets[i]);
// }
printf("OK\n");
return 0;
}

View File

@ -16,6 +16,12 @@
# include "config.h"
#endif
#ifdef HAVE_GETCPU_SYSCALL
# define _GNU_SOURCE
# include <unistd.h>
# include <sys/syscall.h>
#endif
#ifdef __linux__
# define _GNU_SOURCE /* Needed for O_DIRECT in fcntl */
#endif /* __linux__ */
@ -31,6 +37,10 @@
#include <sys/types.h>
#include <time.h>
#ifdef HAVE_CUDA
#include <cuda_runtime.h>
#endif
#ifndef _WIN32
# include <regex.h>
# ifdef __sun /* SunOS does not support statfs(), instead uses statvfs() */
@ -59,13 +69,87 @@ int rank = 0;
int rankOffset = 0;
int verbose = VERBOSE_0; /* verbose output */
MPI_Comm testComm;
MPI_Comm mpi_comm_world;
FILE * out_logfile;
FILE * out_resultfile;
FILE * out_logfile = NULL;
FILE * out_resultfile = NULL;
enum OutputFormat_t outputFormat;
/***************************** F U N C T I O N S ******************************/
void update_write_memory_pattern(uint64_t item, char * buf, size_t bytes, int rand_seed, int pretendRank, ior_dataPacketType_e dataPacketType){
if(dataPacketType == DATA_TIMESTAMP || bytes < 8) return;
int k=1;
uint64_t * buffi = (uint64_t*) buf;
for(size_t i=0; i < bytes/sizeof(uint64_t); i+=512, k++){
buffi[i] = ((uint32_t) item * k) | ((uint64_t) pretendRank) << 32;
}
}
void generate_memory_pattern(char * buf, size_t bytes, int rand_seed, int pretendRank, ior_dataPacketType_e dataPacketType){
uint64_t * buffi = (uint64_t*) buf;
// first half of 64 bits use the rank
const size_t size = bytes / 8;
// the first 8 bytes of each 4k block are updated at runtime
unsigned seed = rand_seed + pretendRank;
for(size_t i=0; i < size; i++){
switch(dataPacketType){
case(DATA_INCOMPRESSIBLE):{
uint64_t hi = ((uint64_t) rand_r(& seed) << 32);
uint64_t lo = (uint64_t) rand_r(& seed);
buffi[i] = hi | lo;
break;
}case(DATA_OFFSET):{
}case(DATA_TIMESTAMP):{
buffi[i] = ((uint64_t) pretendRank) << 32 | rand_seed + i;
break;
}
}
}
for(size_t i=size*8; i < bytes; i++){
buf[i] = (char) i;
}
}
int verify_memory_pattern(uint64_t item, char * buffer, size_t bytes, int rand_seed, int pretendRank, ior_dataPacketType_e dataPacketType){
int error = 0;
// always read all data to ensure that performance numbers stay the same
uint64_t * buffi = (uint64_t*) buffer;
// the first 8 bytes are set to item number
int k=1;
unsigned seed = rand_seed + pretendRank;
const size_t size = bytes / 8;
for(size_t i=0; i < size; i++){
uint64_t exp;
switch(dataPacketType){
case(DATA_INCOMPRESSIBLE):{
uint64_t hi = ((uint64_t) rand_r(& seed) << 32);
uint64_t lo = (uint64_t) rand_r(& seed);
exp = hi | lo;
break;
}case(DATA_OFFSET):{
}case(DATA_TIMESTAMP):{
exp = ((uint64_t) pretendRank) << 32 | rand_seed + i;
break;
}
}
if(i % 512 == 0 && dataPacketType != DATA_TIMESTAMP){
exp = ((uint32_t) item * k) | ((uint64_t) pretendRank) << 32;
k++;
}
if(buffi[i] != exp){
error = 1;
}
}
for(size_t i=size*8; i < bytes; i++){
if(buffer[i] != (char) i){
error = 1;
}
}
return error;
}
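The three pattern helpers above are meant to work as a pair of paths: the write path fills a buffer once and then re-stamps the per-item header words before each transfer, while the read path recomputes the expected contents from the same seed and pretend rank. The following is a minimal illustrative sketch of that pairing, not part of the IOR sources; the transfer size, seed, pretend rank, and item number are made-up example values, and it would be built like the example test above (mpicc ... -I ../src ../src/libaiori.a -lm).
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include "utilities.h"   /* pattern helpers and ior_dataPacketType_e, as declared above */
int main(void)
{
        const size_t xfer = 4096;              /* example transfer size */
        const int seed = 42, pretendRank = 0;  /* example seed and pretend rank */
        const uint64_t item = 7;               /* example item/transfer number */
        char *buf = malloc(xfer);
        /* write path: fill the whole buffer once, then stamp the per-item header words */
        generate_memory_pattern(buf, xfer, seed, pretendRank, DATA_OFFSET);
        update_write_memory_pattern(item, buf, xfer, seed, pretendRank, DATA_OFFSET);
        /* read path: recompute the expected pattern for the same item/seed/rank */
        if (verify_memory_pattern(item, buf, xfer, seed, pretendRank, DATA_OFFSET))
                printf("pattern mismatch\n");
        else
                printf("pattern OK\n");
        free(buf);
        return 0;
}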
void* safeMalloc(uint64_t size){
void * d = malloc(size);
if (d == NULL){
@ -81,8 +165,8 @@ void FailMessage(int rank, const char *location, char *format, ...) {
va_start(args, format);
vsnprintf(msg, 4096, format, args);
va_end(args);
fprintf(out_logfile, "%s: Process %d: FAILED in %s, %s: %s\n",
PrintTimestamp(), rank, location, msg, strerror(errno));
fprintf(out_logfile, "%s: Process %d: FAILED in %s, %s\n",
PrintTimestamp(), rank, location, msg);
fflush(out_logfile);
MPI_Abort(testComm, 1);
}
@ -119,28 +203,28 @@ size_t NodeMemoryStringToBytes(char *size_str)
return mem / 100 * percent;
}
ior_dataPacketType_e parsePacketType(char t){
switch(t) {
case '\0': return DATA_TIMESTAMP;
case 'i': /* Incompressible */
return DATA_INCOMPRESSIBLE;
case 't': /* timestamp */
return DATA_TIMESTAMP;
case 'o': /* offset packet */
return DATA_OFFSET;
default:
ERRF("Unknown packet type \"%c\"; generic assumed\n", t);
return DATA_OFFSET;
}
}
void updateParsedOptions(IOR_param_t * options, options_all_t * global_options){
if (options->setTimeStampSignature){
options->incompressibleSeed = options->setTimeStampSignature;
}
if (options->buffer_type && options->buffer_type[0] != 0){
switch(options->buffer_type[0]) {
case 'i': /* Incompressible */
options->dataPacketType = incompressible;
break;
case 't': /* timestamp */
options->dataPacketType = timestamp;
break;
case 'o': /* offset packet */
options->storeFileOffset = TRUE;
options->dataPacketType = offset;
break;
default:
fprintf(out_logfile,
"Unknown argument for -l %s; generic assumed\n", options->buffer_type);
break;
}
options->dataPacketType = parsePacketType(options->buffer_type[0]);
}
if (options->memoryPerNodeStr){
options->memoryPerNode = NodeMemoryStringToBytes(options->memoryPerNodeStr);
@ -158,7 +242,7 @@ void updateParsedOptions(IOR_param_t * options, options_all_t * global_options){
/* Used in aiori-POSIX.c and aiori-PLFS.c
*/
void set_o_direct_flag(int *fd)
void set_o_direct_flag(int *flag)
{
/* note that TRU64 needs O_DIRECTIO, SunOS uses directio(),
and everyone else needs O_DIRECT */
@ -171,7 +255,7 @@ void set_o_direct_flag(int *fd)
# endif /* not O_DIRECTIO */
#endif /* not O_DIRECT */
*fd |= O_DIRECT;
*flag |= O_DIRECT;
}
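As a rough illustration of how a caller can use set_o_direct_flag(), here is a sketch that is not taken from aiori-POSIX.c; the helper name, file name, and mode are invented for the example.
#include <fcntl.h>
#include <stdio.h>
#include "utilities.h"   /* set_o_direct_flag() */
/* hypothetical helper: open a file for writing with direct I/O requested */
static int open_for_direct_write(const char *name)
{
        int flags = O_CREAT | O_WRONLY;  /* base open flags */
        set_o_direct_flag(&flags);       /* ORs in O_DIRECT (or O_DIRECTIO on TRU64) */
        int fd = open(name, flags, 0664);
        if (fd < 0)
                perror("open with direct I/O");
        return fd;  /* transfers on fd must then use page-aligned buffers, see aligned_buffer_alloc() below */
}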
@ -566,16 +650,14 @@ IOR_offset_t StringToBytes(char *size_str)
/*
* Displays size of file system and percent of data blocks and inodes used.
*/
void ShowFileSystemSize(IOR_param_t * test) // this might be converted to an AIORI call
void ShowFileSystemSize(char * filename, const struct ior_aiori * backend, void * backend_options) // this might be converted to an AIORI call
{
ior_aiori_statfs_t stat;
if(! test->backend->statfs){
if(! backend->statfs){
WARN("Backend doesn't implement statfs");
return;
}
char filename[MAX_PATHLEN];
GetTestFileName(filename, test);
int ret = test->backend->statfs(filename, & stat, test->backend_options);
int ret = backend->statfs(filename, & stat, backend_options);
if( ret != 0 ){
WARN("Backend returned error during statfs");
return;
@ -648,27 +730,6 @@ int Regex(char *string, char *pattern)
return (retValue);
}
/*
* Seed random generator.
*/
void SeedRandGen(MPI_Comm testComm)
{
unsigned int randomSeed;
if (rank == 0) {
#ifdef _WIN32
rand_s(&randomSeed);
#else
struct timeval randGenTimer;
gettimeofday(&randGenTimer, (struct timezone *)NULL);
randomSeed = randGenTimer.tv_usec;
#endif
}
MPI_CHECK(MPI_Bcast(&randomSeed, 1, MPI_INT, 0,
testComm), "cannot broadcast random seed value");
srandom(randomSeed);
}
/*
* System info for Windows.
*/
@ -691,10 +752,6 @@ int uname(struct utsname *name)
}
#endif /* _WIN32 */
double wall_clock_deviation;
double wall_clock_delta = 0;
/*
* Get time stamp. Use MPI_Timer() unless _NO_MPI_TIMER is defined,
* in which case use gettimeofday().
@ -702,55 +759,46 @@ double wall_clock_delta = 0;
double GetTimeStamp(void)
{
double timeVal;
#ifdef _NO_MPI_TIMER
struct timeval timer;
if (gettimeofday(&timer, (struct timezone *)NULL) != 0)
ERR("cannot use gettimeofday()");
timeVal = (double)timer.tv_sec + ((double)timer.tv_usec / 1000000);
#else /* not _NO_MPI_TIMER */
timeVal = MPI_Wtime(); /* no MPI_CHECK(), just check return value */
if (timeVal < 0)
ERR("cannot use MPI_Wtime()");
#endif /* _NO_MPI_TIMER */
/* wall_clock_delta is difference from root node's time */
timeVal -= wall_clock_delta;
return (timeVal);
}
/*
* Determine any spread (range) between node times.
* Obsolete
*/
static double TimeDeviation(void)
static double TimeDeviation(MPI_Comm com)
{
double timestamp;
double min = 0;
double max = 0;
double roottimestamp;
MPI_CHECK(MPI_Barrier(mpi_comm_world), "barrier error");
MPI_CHECK(MPI_Barrier(com), "barrier error");
timestamp = GetTimeStamp();
MPI_CHECK(MPI_Reduce(&timestamp, &min, 1, MPI_DOUBLE,
MPI_MIN, 0, mpi_comm_world),
MPI_MIN, 0, com),
"cannot reduce tasks' times");
MPI_CHECK(MPI_Reduce(&timestamp, &max, 1, MPI_DOUBLE,
MPI_MAX, 0, mpi_comm_world),
MPI_MAX, 0, com),
"cannot reduce tasks' times");
/* delta between individual nodes' time and root node's time */
roottimestamp = timestamp;
MPI_CHECK(MPI_Bcast(&roottimestamp, 1, MPI_DOUBLE, 0, mpi_comm_world),
MPI_CHECK(MPI_Bcast(&roottimestamp, 1, MPI_DOUBLE, 0, com),
"cannot broadcast root's time");
wall_clock_delta = timestamp - roottimestamp;
// wall_clock_delta = timestamp - roottimestamp;
return max - min;
}
void init_clock(){
/* check for skew between tasks' start times */
wall_clock_deviation = TimeDeviation();
void init_clock(MPI_Comm com){
}
char * PrintTimestamp() {
@ -768,16 +816,16 @@ char * PrintTimestamp() {
return datestring;
}
int64_t ReadStoneWallingIterations(char * const filename){
int64_t ReadStoneWallingIterations(char * const filename, MPI_Comm com){
long long data;
if(rank != 0){
MPI_Bcast( & data, 1, MPI_LONG_LONG_INT, 0, mpi_comm_world);
MPI_Bcast( & data, 1, MPI_LONG_LONG_INT, 0, com);
return data;
}else{
FILE * out = fopen(filename, "r");
if (out == NULL){
data = -1;
MPI_Bcast( & data, 1, MPI_LONG_LONG_INT, 0, mpi_comm_world);
MPI_Bcast( & data, 1, MPI_LONG_LONG_INT, 0, com);
return data;
}
int ret = fscanf(out, "%lld", & data);
@ -785,7 +833,7 @@ int64_t ReadStoneWallingIterations(char * const filename){
return -1;
}
fclose(out);
MPI_Bcast( & data, 1, MPI_LONG_LONG_INT, 0, mpi_comm_world);
MPI_Bcast( & data, 1, MPI_LONG_LONG_INT, 0, com);
return data;
}
}
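For context, a small sketch of how the stonewall status file round-trips between a write and a read phase; this is illustrative only and not taken from ior.c. The file name matches the -O stoneWallingStatusFile=stonewall.log used in the test scripts further below, and the explicit barrier is an assumption.
#include <stdio.h>
#include <stdint.h>
#include <mpi.h>
#include "utilities.h"   /* Store/ReadStoneWallingIterations(), extern int rank */
void stonewall_roundtrip(MPI_Comm com, int64_t iterations_done)
{
        char statusFile[] = "stonewall.log";   /* example name, as used with -O stoneWallingStatusFile= */
        if (rank == 0)
                StoreStoneWallingIterations(statusFile, iterations_done);
        MPI_Barrier(com);   /* assumed: make sure the file is visible before reading it back */
        int64_t cap = ReadStoneWallingIterations(statusFile, com);  /* broadcasts the value to all ranks */
        if (cap < 0)
                fprintf(stderr, "no stonewall status file found\n");
        /* a wear-out read phase would then stop after 'cap' accesses per rank */
}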
@ -869,17 +917,15 @@ char *HumanReadable(IOR_offset_t value, int base)
return valueStr;
}
#if defined(__aarch64__)
// TODO: This might be general enough to provide the functionality for any system
// regardless of processor type given we aren't worried about thread/process migration.
#if defined(HAVE_GETCPU_SYSCALL)
// Assume we aren't worried about thread/process migration.
// Test on Intel systems and see if we can get rid of the architecture specificity
// of the code.
unsigned long GetProcessorAndCore(int *chip, int *core){
return syscall(SYS_getcpu, core, chip, NULL);
}
// TODO: Add in AMD function
#else
// If we're not on an ARM processor assume we're on an intel processor and use the
#elif defined(HAVE_RDTSCP_ASM)
// We're on an intel processor and use the
// rdtscp instruction.
unsigned long GetProcessorAndCore(int *chip, int *core){
unsigned long a,d,c;
@ -888,5 +934,81 @@ unsigned long GetProcessorAndCore(int *chip, int *core){
*core = c & 0xFFF;
return ((unsigned long)a) | (((unsigned long)d) << 32);
}
#else
// TODO: Add in AMD function
unsigned long GetProcessorAndCore(int *chip, int *core){
#warning GetProcessorAndCore is implemented as a dummy
*chip = 0;
*core = 0;
return 1;
}
#endif
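A brief, hypothetical usage sketch for GetProcessorAndCore(), not part of the IOR sources; on builds that fall through to the dummy variant above it simply reports socket 0, core 0.
#include <stdio.h>
#include "utilities.h"   /* GetProcessorAndCore() */
void report_cpu_placement(int mpi_rank)
{
        int chip = -1, core = -1;
        GetProcessorAndCore(&chip, &core);   /* getcpu/rdtscp where available, dummy otherwise */
        printf("rank %d runs on socket %d, core %d\n", mpi_rank, chip, core);
}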
/*
* Allocate a page-aligned (required by O_DIRECT) buffer.
*/
void *aligned_buffer_alloc(size_t size, ior_memory_flags type)
{
size_t pageMask;
char *buf, *tmp;
char *aligned;
if(type == IOR_MEMORY_TYPE_GPU_MANAGED){
#ifdef HAVE_CUDA
// use unified memory here to allow drop-in-replacement
if (cudaMallocManaged((void**) & buf, size, cudaMemAttachGlobal) != cudaSuccess){
ERR("Cannot allocate buffer on GPU");
}
return buf;
#else
ERR("No CUDA supported, cannot allocate on the GPU");
#endif
}else if(type == IOR_MEMORY_TYPE_GPU_DEVICE_ONLY){
#ifdef HAVE_GPU_DIRECT
if (cudaMalloc((void**) & buf, size) != cudaSuccess){
ERR("Cannot allocate buffer on GPU");
}
return buf;
#else
ERR("No GPUDirect supported, cannot allocate on the GPU");
#endif
}
#ifdef HAVE_SYSCONF
long pageSize = sysconf(_SC_PAGESIZE);
#else
size_t pageSize = getpagesize();
#endif
pageMask = pageSize - 1;
buf = safeMalloc(size + pageSize + sizeof(void *));
/* find the aligned buffer */
tmp = buf + sizeof(char *);
aligned = tmp + pageSize - ((size_t) tmp & pageMask);
/* write a pointer to the original malloc()ed buffer into the bytes
preceding "aligned", so that the aligned buffer can later be free()ed */
tmp = aligned - sizeof(void *);
*(void **)tmp = buf;
return (void *)aligned;
}
/*
* Free a buffer allocated by aligned_buffer_alloc().
*/
void aligned_buffer_free(void *buf, ior_memory_flags gpu)
{
if(gpu){
#ifdef HAVE_CUDA
if (cudaFree(buf) != cudaSuccess){
WARN("Cannot free buffer on GPU");
}
return;
#else
ERR("No CUDA supported, cannot free on the GPU");
#endif
}
free(*(void **)((char *)buf - sizeof(char *)));
}
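A minimal sketch of the intended alloc/free pairing for a host buffer; it is illustrative, not part of the sources. The 1 MiB size is an example, and since only the GPU flags appear above, the host value of ior_memory_flags is assumed here to be 0 (the non-GPU path).
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include "utilities.h"   /* aligned_buffer_alloc()/aligned_buffer_free() */
int main(void)
{
        size_t size = 1048576;                         /* example: 1 MiB */
        ior_memory_flags host = (ior_memory_flags) 0;  /* assumed host-memory value (non-GPU path) */
        void *buf = aligned_buffer_alloc(size, host);
        long page = sysconf(_SC_PAGESIZE);
        /* the returned pointer is page aligned, as O_DIRECT requires */
        printf("page aligned: %s\n", ((uintptr_t) buf % page) == 0 ? "yes" : "no");
        aligned_buffer_free(buf, host);
        return 0;
}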

View File

@ -22,8 +22,6 @@ extern int rank;
extern int rankOffset;
extern int verbose;
extern MPI_Comm testComm;
extern MPI_Comm mpi_comm_world;
extern FILE * out_logfile;
extern FILE * out_resultfile;
extern enum OutputFormat_t outputFormat; /* format of the output */
@ -31,25 +29,22 @@ extern enum OutputFormat_t outputFormat; /* format of the output */
* Try using the system's PATH_MAX, which is what realpath and such use.
*/
#define MAX_PATHLEN PATH_MAX
#ifdef __linux__
#define ERROR_LOCATION __func__
#else
#define ERROR_LOCATION __LINE__
#endif
#define FAIL(...) FailMessage(rank, ERROR_LOCATION, __VA_ARGS__)
void FailMessage(int rank, const char *location, char *format, ...);
void* safeMalloc(uint64_t size);
void set_o_direct_flag(int *fd);
ior_dataPacketType_e parsePacketType(char t);
void update_write_memory_pattern(uint64_t item, char * buf, size_t bytes, int rand_seed, int rank, ior_dataPacketType_e dataPacketType);
void generate_memory_pattern(char * buf, size_t bytes, int rand_seed, int rank, ior_dataPacketType_e dataPacketType);
/* check a data buffer, @return 0 if all is correct, otherwise 1 */
int verify_memory_pattern(uint64_t item, char * buffer, size_t bytes, int rand_seed, int pretendRank, ior_dataPacketType_e dataPacketType);
char *CurrentTimeString(void);
int Regex(char *, char *);
void ShowFileSystemSize(IOR_param_t * test);
void ShowFileSystemSize(char * filename, const struct ior_aiori * backend, void * backend_options);
void DumpBuffer(void *, size_t);
void SeedRandGen(MPI_Comm);
void SetHints (MPI_Info *, char *);
void ShowHints (MPI_Info *);
char *HumanReadable(IOR_offset_t value, int base);
@ -62,14 +57,13 @@ void updateParsedOptions(IOR_param_t * options, options_all_t * global_options);
size_t NodeMemoryStringToBytes(char *size_str);
/* Returns -1, if cannot be read */
int64_t ReadStoneWallingIterations(char * const filename);
int64_t ReadStoneWallingIterations(char * const filename, MPI_Comm com);
void StoreStoneWallingIterations(char * const filename, int64_t count);
void init_clock(void);
void init_clock(MPI_Comm com);
double GetTimeStamp(void);
char * PrintTimestamp(); // TODO remove this function
unsigned long GetProcessorAndCore(int *chip, int *core);
extern double wall_clock_deviation;
extern double wall_clock_delta;
void *aligned_buffer_alloc(size_t size, ior_memory_flags type);
void aligned_buffer_free(void *buf, ior_memory_flags type);
#endif /* !_UTILITIES_H */

View File

@ -15,18 +15,39 @@ MDTEST 1 -a POSIX
MDTEST 2 -a POSIX -W 2
MDTEST 1 -C -T -r -F -I 1 -z 1 -b 1 -L -u
MDTEST 1 -C -T -I 1 -z 1 -b 1 -u
MDTEST 2 -n 1 -f 1 -l 2
IOR 1 -a POSIX -w -z -F -Y -e -i1 -m -t 100k -b 1000k
IOR 1 -a POSIX -w -z -F -k -e -i2 -m -t 100k -b 100k
IOR 1 -a MMAP -r -z -F -k -e -i1 -m -t 100k -b 100k
IOR 1 -a POSIX -w -z -F -Y -e -i1 -m -t 100k -b 2000k
IOR 1 -a POSIX -w -z -F -k -e -i2 -m -t 100k -b 200k
IOR 1 -a MMAP -r -z -F -k -e -i1 -m -t 100k -b 200k
IOR 2 -a POSIX -w -z -C -F -k -e -i1 -m -t 100k -b 100k
IOR 2 -a POSIX -w -z -C -Q 1 -F -k -e -i1 -m -t 100k -b 100k
IOR 2 -a POSIX -r -z -Z -Q 2 -F -k -e -i1 -m -t 100k -b 100k
IOR 2 -a POSIX -r -z -Z -Q 3 -X 13 -F -k -e -i1 -m -t 100k -b 100k
IOR 2 -a POSIX -w -z -Z -Q 1 -X -13 -F -e -i1 -m -t 100k -b 100k
IOR 2 -a POSIX -w -C -k -e -i1 -m -t 100k -b 200k
IOR 2 -a POSIX -w -z -C -F -k -e -i1 -m -t 100k -b 200k
IOR 2 -a POSIX -w -z -C -Q 1 -F -k -e -i1 -m -t 100k -b 200k
IOR 2 -a POSIX -r -z -Z -Q 2 -F -k -e -i1 -m -t 100k -b 200k
IOR 2 -a POSIX -r -z -Z -Q 3 -X 13 -F -k -e -i1 -m -t 100k -b 200k
IOR 3 -a POSIX -w -z -Z -Q 1 -X -13 -F -e -i1 -m -t 100k -b 200k
IOR 2 -f "$ROOT/test_comments.ior"
# Test for JSON output
IOR 2 -a DUMMY -e -F -t 1m -b 1m -A 328883 -O summaryFormat=JSON -O summaryFile=OUT.json
python -mjson.tool OUT.json >/dev/null && echo "JSON OK"
# MDWB
MDWB 3 -a POSIX -O=1 -D=1 -G=10 -P=1 -I=1 -R=2 -X
MDWB 3 -a POSIX -O=1 -D=4 -G=10 -P=4 -I=1 -R=2 -X -t=0.001 -L=latency.txt
MDWB 3 -a POSIX -O=1 -D=2 -G=10 -P=4 -I=3 -R=2 -X -W -w 1
MDWB 3 -a POSIX -O=1 -D=2 -G=10 -P=4 -I=3 -1 -W -w 1 --run-info-file=mdw.tst --print-detailed-stats
MDWB 3 -a POSIX -O=1 -D=2 -G=10 -P=4 -I=3 -2 -W -w 1 --run-info-file=mdw.tst --print-detailed-stats
MDWB 3 -a POSIX -O=1 -D=2 -G=10 -P=4 -I=3 -2 -W -w 1 --read-only --run-info-file=mdw.tst --print-detailed-stats
MDWB 3 -a POSIX -O=1 -D=2 -G=10 -P=4 -I=3 -2 -W -w 1 --read-only --run-info-file=mdw.tst --print-detailed-stats
MDWB 3 -a POSIX -O=1 -D=2 -G=10 -P=4 -I=3 -3 -W -w 1 --run-info-file=mdw.tst --print-detailed-stats
MDWB 2 -a POSIX -O=1 -D=1 -G=3 -P=2 -I=2 -R=2 -X -S 772 --dataPacketType=t
DELETE=0
MDWB 2 -a POSIX -D=1 -P=2 -I=2 -R=2 -X -G=2252 -S 772 --dataPacketType=i -1
MDWB 2 -a POSIX -D=1 -P=2 -I=2 -R=2 -X -G=2252 -S 772 --dataPacketType=i -2
MDWB 2 -a POSIX -D=1 -P=2 -I=2 -R=2 -X -G=2252 -S 772 --dataPacketType=i -3
END

18
testing/build-hdfs.sh Executable file
View File

@ -0,0 +1,18 @@
#!/bin/bash
mkdir build-hdfs
cd build-hdfs
VER=hadoop-3.2.1
if [[ ! -e $VER.tar.gz ]] ; then
wget https://www.apache.org/dyn/closer.cgi/hadoop/common/$VER/$VER.tar.gz
tar -xf $VER.tar.gz
fi
../configure --with-hdfs CFLAGS="-I$PWD/$VER/include/ -O0 -g3" LDFLAGS="-L$PWD/$VER/lib/native -Wl,-rpath=$PWD/$VER/lib/native"
make -j
echo "To run execute:"
echo export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/
echo export CLASSPATH=$(find $VER/ -name "*.jar" -printf "%p:")
echo ./src/ior -a HDFS

View File

@ -10,22 +10,22 @@ TYPE="advanced"
source $ROOT/test-lib.sh
#stonewalling tests
IOR 2 -a DUMMY -w -O stoneWallingStatusFile=stonewall.log -O stoneWallingWearOut=1 -D 1 -t 1000 -b 1000 -s 15
IOR 2 -a DUMMY -r -O stoneWallingStatusFile=stonewall.log -D 1 -t 1000 -b 1000 -s 30 # max 15 still!
IOR 2 -a DUMMY -r -O stoneWallingStatusFile=stonewall.log -t 1000 -b 1000 -s 30
IOR 2 -a DUMMY -w -O stoneWallingStatusFile=stonewall.log -O stoneWallingWearOut=1 -D 1 -t 1000 -b 1000 -s 15 -k
IOR 2 -a DUMMY -r -O stoneWallingStatusFile=stonewall.log -D 1 -t 1000 -b 1000 -s 30 -k # max 15 still!
IOR 2 -a DUMMY -r -O stoneWallingStatusFile=stonewall.log -t 1000 -b 1000 -s 30 -k
MDTEST 2 -I 20 -a DUMMY -W 1 -x stonewall-md.log -C
MDTEST 2 -I 20 -a DUMMY -x stonewall-md.log -T -v
MDTEST 2 -I 20 -a DUMMY -x stonewall-md.log -D -v
#shared tests
IOR 2 -a POSIX -w -z -Y -e -i1 -m -t 100k -b 100k
IOR 2 -a POSIX -w -k -e -i1 -m -t 100k -b 100k
IOR 2 -a POSIX -r -z-k -e -i1 -m -t 100k -b 100k
IOR 2 -a POSIX -w -z -Y -e -i1 -m -t 100k -b 200k
IOR 2 -a POSIX -w -k -e -i1 -m -t 100k -b 200k
IOR 2 -a POSIX -r -z -k -e -i1 -m -t 100k -b 200k
#test mutually exclusive options
IOR 2 -a POSIX -w -z -k -e -i1 -m -t 100k -b 100k
IOR 2 -a POSIX -w -z -k -e -i1 -m -t 100k -b 100k
IOR 2 -a POSIX -w -z -k -e -i1 -m -t 100k -b 200k
IOR 2 -a POSIX -w -z -k -e -i1 -m -t 100k -b 200k
IOR 2 -a POSIX -w -Z -i1 -m -t 100k -b 100k -d 0.1
# Now set the num tasks per node to 1:

View File

@ -7,7 +7,7 @@ Following are basic notes on how to deploy the 'ceph/demo' docker container. The
Run `docker pull ceph/demo` to download the image to your system.
################################
# Deploy 'ceph/demo' conatiner #
# Deploy 'ceph/demo' container #
################################
To deploy the Ceph cluster, execute the following command:

View File

@ -46,7 +46,7 @@ for IMAGE in $(find -type d | cut -b 3- |grep -v "^$") ; do
done
if [[ $ERROR != 0 ]] ; then
echo "Errors occured!"
echo "Errors occurred!"
else
echo "OK: all tests passed!"
fi

View File

@ -1,95 +1,92 @@
V-3: Rank 0 Line 2082 main (before display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1506 Entering display_freespace on /dev/shm/mdest...
V-3: Rank 0 Line 1525 Before show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 1527 After show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 2097 main (after display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1656 main (create hierarchical directory loop-!unque_dir_per_task): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 Line 1683 V-3: main: Using unique_mk_dir, 'mdtest_tree.0'
V-3: Rank 0 Line 1704 V-3: main: Copied unique_mk_dir, 'mdtest_tree.0', to topdir
V-3: Rank 0 Line 801 directory_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19'
V-3: Rank 0 Line 1716 will file_test on mdtest_tree.0
V-3: Rank 0 Line 990 Entering file_test on mdtest_tree.0
V-3: Rank 0 Line 1012 file_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.0'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.1'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.2'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.3'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.4'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.5'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.6'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.7'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.8'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.9'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.10'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.11'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.12'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.13'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.14'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.15'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.16'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.17'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.18'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.19'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 1723 main: Using testdir, '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 main (before display_freespace): o.testdirpath is '/dev/shm/mdest'
V-3: Rank 0 main (after display_freespace): o.testdirpath is '/dev/shm/mdest'
V-3: Rank 0 main (create hierarchical directory loop-!unque_dir_per_task): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 V-3: main: Using unique_mk_dir, 'mdtest_tree.0'
V-3: Rank 0 V-3: main: Copied unique_mk_dir, 'mdtest_tree.0', to topdir
V-3: Rank 0 directory_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19'
V-3: Rank 0 will file_test on mdtest_tree.0
V-3: Rank 0 Entering file_test on mdtest_tree.0
V-3: Rank 0 file_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.0'
V-3: Rank 0 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 create_remove_items_helper: close...
V-3: Rank 0 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.1'
V-3: Rank 0 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 create_remove_items_helper: close...
V-3: Rank 0 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.2'
V-3: Rank 0 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 create_remove_items_helper: close...
V-3: Rank 0 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.3'
V-3: Rank 0 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 create_remove_items_helper: close...
V-3: Rank 0 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.4'
V-3: Rank 0 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 create_remove_items_helper: close...
V-3: Rank 0 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.5'
V-3: Rank 0 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 create_remove_items_helper: close...
V-3: Rank 0 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.6'
V-3: Rank 0 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 create_remove_items_helper: close...
V-3: Rank 0 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.7'
V-3: Rank 0 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 create_remove_items_helper: close...
V-3: Rank 0 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.8'
V-3: Rank 0 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 create_remove_items_helper: close...
V-3: Rank 0 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.9'
V-3: Rank 0 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 create_remove_items_helper: close...
V-3: Rank 0 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.10'
V-3: Rank 0 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 create_remove_items_helper: close...
V-3: Rank 0 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.11'
V-3: Rank 0 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 create_remove_items_helper: close...
V-3: Rank 0 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.12'
V-3: Rank 0 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 create_remove_items_helper: close...
V-3: Rank 0 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.13'
V-3: Rank 0 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 create_remove_items_helper: close...
V-3: Rank 0 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.14'
V-3: Rank 0 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 create_remove_items_helper: close...
V-3: Rank 0 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.15'
V-3: Rank 0 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 create_remove_items_helper: close...
V-3: Rank 0 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.16'
V-3: Rank 0 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 create_remove_items_helper: close...
V-3: Rank 0 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.17'
V-3: Rank 0 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 create_remove_items_helper: close...
V-3: Rank 0 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.18'
V-3: Rank 0 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 create_remove_items_helper: close...
V-3: Rank 0 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.19'
V-3: Rank 0 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 create_remove_items_helper: close...
V-3: Rank 0 main: Using o.testdir, '/dev/shm/mdest/test-dir.0-0'

View File

@ -1,52 +1,49 @@
V-3: Rank 0 Line 2082 main (before display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1506 Entering display_freespace on /dev/shm/mdest...
V-3: Rank 0 Line 1525 Before show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 1527 After show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 2097 main (after display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1683 V-3: main: Using unique_mk_dir, 'mdtest_tree.0'
V-3: Rank 0 Line 1704 V-3: main: Copied unique_mk_dir, 'mdtest_tree.0', to topdir
V-3: Rank 0 Line 833 stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19
V-3: Rank 0 Line 1716 will file_test on mdtest_tree.0
V-3: Rank 0 Line 990 Entering file_test on mdtest_tree.0
V-3: Rank 0 Line 1079 file_test: stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.0
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.1
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.2
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.3
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.4
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.5
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.6
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.7
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.8
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.9
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.10
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.11
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.12
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.13
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.14
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.15
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.16
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.17
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.18
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.19
V-3: Rank 0 Line 1723 main: Using testdir, '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 main (before display_freespace): o.testdirpath is '/dev/shm/mdest'
V-3: Rank 0 main (after display_freespace): o.testdirpath is '/dev/shm/mdest'
V-3: Rank 0 V-3: main: Using unique_mk_dir, 'mdtest_tree.0'
V-3: Rank 0 V-3: main: Copied unique_mk_dir, 'mdtest_tree.0', to topdir
V-3: Rank 0 stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19
V-3: Rank 0 will file_test on mdtest_tree.0
V-3: Rank 0 Entering file_test on mdtest_tree.0
V-3: Rank 0 file_test: stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.0
V-3: Rank 0 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.1
V-3: Rank 0 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.2
V-3: Rank 0 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.3
V-3: Rank 0 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.4
V-3: Rank 0 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.5
V-3: Rank 0 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.6
V-3: Rank 0 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.7
V-3: Rank 0 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.8
V-3: Rank 0 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.9
V-3: Rank 0 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.10
V-3: Rank 0 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.11
V-3: Rank 0 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.12
V-3: Rank 0 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.13
V-3: Rank 0 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.14
V-3: Rank 0 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.15
V-3: Rank 0 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.16
V-3: Rank 0 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.17
V-3: Rank 0 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.18
V-3: Rank 0 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.19
V-3: Rank 0 main: Using o.testdir, '/dev/shm/mdest/test-dir.0-0'

View File

@ -1,77 +1,95 @@
V-3: Rank 0 Line 2082 main (before display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1506 Entering display_freespace on /dev/shm/mdest...
V-3: Rank 0 Line 1525 Before show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 1527 After show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 2097 main (after display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1656 main (create hierarchical directory loop-!unque_dir_per_task): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 Line 1683 V-3: main: Using unique_mk_dir, 'mdtest_tree.0'
V-3: Rank 0 Line 1704 V-3: main: Copied unique_mk_dir, 'mdtest_tree.0', to topdir
V-3: Rank 0 Line 801 directory_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19'
V-3: Rank 0 Line 833 stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19
V-3: Rank 0 Line 862 directory_test: read path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 890 directory_test: remove directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19'
V-3: Rank 0 Line 915 directory_test: remove unique directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1723 main: Using testdir, '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 Line 1764 V-3: main (remove hierarchical directory loop-!unique_dir_per_task): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 main (before display_freespace): o.testdirpath is '/dev/shm/mdest'
V-3: Rank 0 main (after display_freespace): o.testdirpath is '/dev/shm/mdest'
V-3: Rank 0 main (create hierarchical directory loop-!unque_dir_per_task): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 V-3: main: Using unique_mk_dir, 'mdtest_tree.0'
V-3: Rank 0 V-3: main: Copied unique_mk_dir, 'mdtest_tree.0', to topdir
V-3: Rank 0 directory_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19'
V-3: Rank 0 stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19
V-3: Rank 0 directory_test: read path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 rename path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 mdtest_rename dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0
V-3: Rank 0 mdtest_rename dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1
V-3: Rank 0 mdtest_rename dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2
V-3: Rank 0 mdtest_rename dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3
V-3: Rank 0 mdtest_rename dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4
V-3: Rank 0 mdtest_rename dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5
V-3: Rank 0 mdtest_rename dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6
V-3: Rank 0 mdtest_rename dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7
V-3: Rank 0 mdtest_rename dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8
V-3: Rank 0 mdtest_rename dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9
V-3: Rank 0 mdtest_rename dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10
V-3: Rank 0 mdtest_rename dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11
V-3: Rank 0 mdtest_rename dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12
V-3: Rank 0 mdtest_rename dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13
V-3: Rank 0 mdtest_rename dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14
V-3: Rank 0 mdtest_rename dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15
V-3: Rank 0 mdtest_rename dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16
V-3: Rank 0 mdtest_rename dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17
V-3: Rank 0 mdtest_rename dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18
V-3: Rank 0 mdtest_rename dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19
V-3: Rank 0 directory_test: remove directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0'
V-3: Rank 0 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1'
V-3: Rank 0 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2'
V-3: Rank 0 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3'
V-3: Rank 0 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4'
V-3: Rank 0 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5'
V-3: Rank 0 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6'
V-3: Rank 0 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7'
V-3: Rank 0 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8'
V-3: Rank 0 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9'
V-3: Rank 0 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10'
V-3: Rank 0 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11'
V-3: Rank 0 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12'
V-3: Rank 0 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13'
V-3: Rank 0 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14'
V-3: Rank 0 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15'
V-3: Rank 0 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16'
V-3: Rank 0 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17'
V-3: Rank 0 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18'
V-3: Rank 0 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19'
V-3: Rank 0 directory_test: remove unique directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 main: Using o.testdir, '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 V-3: main (remove hierarchical directory loop-!unique_dir_per_task): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'

View File

@ -1,27 +1,25 @@
V-3: Rank 0 Line 2082 main (before display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1506 Entering display_freespace on /dev/shm/mdest...
V-3: Rank 0 Line 1525 Before show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 1527 After show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 2097 main (after display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1656 main (create hierarchical directory loop-!unque_dir_per_task): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 Line 1683 V-3: main: Using unique_mk_dir, 'mdtest_tree.0'
V-3: Rank 0 Line 1704 V-3: main: Copied unique_mk_dir, 'mdtest_tree.0', to topdir
V-3: Rank 0 Line 801 directory_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 833 stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 862 directory_test: read path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 890 directory_test: remove directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 915 directory_test: remove unique directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1716 will file_test on mdtest_tree.0
V-3: Rank 0 Line 990 Entering file_test on mdtest_tree.0
V-3: Rank 0 Line 1012 file_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1079 file_test: stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1104 file_test: read path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1134 file_test: rm directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1141 gonna create /dev/shm/mdest/test-dir.0-0/mdtest_tree.0
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1158 file_test: rm unique directories path is 'mdtest_tree.0'
V-3: Rank 0 Line 1723 main: Using testdir, '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 Line 1764 V-3: main (remove hierarchical directory loop-!unique_dir_per_task): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 main (before display_freespace): o.testdirpath is '/dev/shm/mdest'
V-3: Rank 0 main (after display_freespace): o.testdirpath is '/dev/shm/mdest'
V-3: Rank 0 main (create hierarchical directory loop-!unque_dir_per_task): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 V-3: main: Using unique_mk_dir, 'mdtest_tree.0'
V-3: Rank 0 V-3: main: Copied unique_mk_dir, 'mdtest_tree.0', to topdir
V-3: Rank 0 directory_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 directory_test: read path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 rename path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 directory_test: remove directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 directory_test: remove unique directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 will file_test on mdtest_tree.0
V-3: Rank 0 Entering file_test on mdtest_tree.0
V-3: Rank 0 file_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 file_test: stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 file_test: read path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 file_test: rm directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 gonna create /dev/shm/mdest/test-dir.0-0/mdtest_tree.0
V-3: Rank 0 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 file_test: rm unique directories path is 'mdtest_tree.0'
V-3: Rank 0 main: Using o.testdir, '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 V-3: main (remove hierarchical directory loop-!unique_dir_per_task): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'

View File

@ -1,27 +1,25 @@
V-3: Rank 0 Line 2082 main (before display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1506 Entering display_freespace on /dev/shm/mdest...
V-3: Rank 0 Line 1525 Before show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 1527 After show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 2097 main (after display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1656 main (create hierarchical directory loop-!unque_dir_per_task): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 Line 1683 V-3: main: Using unique_mk_dir, 'mdtest_tree.0'
V-3: Rank 0 Line 1704 V-3: main: Copied unique_mk_dir, 'mdtest_tree.0', to topdir
V-3: Rank 0 Line 801 directory_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 833 stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 862 directory_test: read path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 890 directory_test: remove directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 915 directory_test: remove unique directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1716 will file_test on mdtest_tree.0
V-3: Rank 0 Line 990 Entering file_test on mdtest_tree.0
V-3: Rank 0 Line 1012 file_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1079 file_test: stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1104 file_test: read path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1134 file_test: rm directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1141 gonna create /dev/shm/mdest/test-dir.0-0/mdtest_tree.0
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1158 file_test: rm unique directories path is 'mdtest_tree.0'
V-3: Rank 0 Line 1723 main: Using testdir, '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 Line 1764 V-3: main (remove hierarchical directory loop-!unique_dir_per_task): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 main (before display_freespace): o.testdirpath is '/dev/shm/mdest'
V-3: Rank 0 main (after display_freespace): o.testdirpath is '/dev/shm/mdest'
V-3: Rank 0 main (create hierarchical directory loop-!unque_dir_per_task): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 V-3: main: Using unique_mk_dir, 'mdtest_tree.0'
V-3: Rank 0 V-3: main: Copied unique_mk_dir, 'mdtest_tree.0', to topdir
V-3: Rank 0 directory_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 directory_test: read path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 rename path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 directory_test: remove directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 directory_test: remove unique directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 will file_test on mdtest_tree.0
V-3: Rank 0 Entering file_test on mdtest_tree.0
V-3: Rank 0 file_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 file_test: stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 file_test: read path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 file_test: rm directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 gonna create /dev/shm/mdest/test-dir.0-0/mdtest_tree.0
V-3: Rank 0 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 file_test: rm unique directories path is 'mdtest_tree.0'
V-3: Rank 0 main: Using o.testdir, '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 V-3: main (remove hierarchical directory loop-!unique_dir_per_task): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'

View File

@ -1,29 +1,26 @@
V-3: Rank 0 Line 2082 main (before display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1506 Entering display_freespace on /dev/shm/mdest...
V-3: Rank 0 Line 1525 Before show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 1527 After show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 2097 main (after display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1647 main (create hierarchical directory loop-!collective_creates): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 Line 1694 i 1 nstride 0
V-3: Rank 0 Line 1704 V-3: main: Copied unique_mk_dir, 'mdtest_tree.0.0', to topdir
V-3: Rank 0 Line 1716 will file_test on mdtest_tree.0.0
V-3: Rank 0 Line 990 Entering file_test on mdtest_tree.0.0
V-3: Rank 0 Line 1012 file_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 483 create_remove_items (for loop): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/'
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1//file.mdtest.0.1'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/'
V-3: Rank 0 Line 1079 file_test: stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/file.mdtest.0.1
V-3: Rank 0 Line 1134 file_test: rm directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 1141 gonna create /dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 483 create_remove_items (for loop): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/'
V-3: Rank 0 Line 310 create_remove_items_helper (non-dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1//file.mdtest.0.1'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/'
V-3: Rank 0 Line 1158 file_test: rm unique directories path is '/dev/shm/mdest/test-dir.0-0/'
V-3: Rank 0 Line 1754 main (remove hierarchical directory loop-!collective): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 main (before display_freespace): o.testdirpath is '/dev/shm/mdest'
V-3: Rank 0 main (after display_freespace): o.testdirpath is '/dev/shm/mdest'
V-3: Rank 0 main (create hierarchical directory loop-!collective_creates): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 i 1 nstride 0
V-3: Rank 0 V-3: main: Copied unique_mk_dir, 'mdtest_tree.0.0', to topdir
V-3: Rank 0 will file_test on mdtest_tree.0.0
V-3: Rank 0 Entering file_test on mdtest_tree.0.0
V-3: Rank 0 file_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 create_remove_items (for loop): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/'
V-3: Rank 0 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1//file.mdtest.0.1'
V-3: Rank 0 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 create_remove_items_helper: close...
V-3: Rank 0 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/'
V-3: Rank 0 file_test: stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/file.mdtest.0.1
V-3: Rank 0 file_test: rm directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 gonna create /dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0
V-3: Rank 0 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 create_remove_items (for loop): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/'
V-3: Rank 0 create_remove_items_helper (non-dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1//file.mdtest.0.1'
V-3: Rank 0 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/'
V-3: Rank 0 file_test: rm unique directories path is '/dev/shm/mdest/test-dir.0-0/'
V-3: Rank 0 main (remove hierarchical directory loop-!collective): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'

View File

@ -1,34 +1,31 @@
V-3: Rank 0 Line 2082 main (before display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1506 Entering display_freespace on /dev/shm/mdest...
V-3: Rank 0 Line 1525 Before show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 1527 After show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 2097 main (after display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1647 main (create hierarchical directory loop-!collective_creates): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 Line 1694 i 1 nstride 0
V-3: Rank 0 Line 1704 V-3: main: Copied unique_mk_dir, 'mdtest_tree.0.0', to topdir
V-3: Rank 0 Line 801 directory_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/dir.mdtest.0.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 483 create_remove_items (for loop): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1//dir.mdtest.0.1'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/'
V-3: Rank 0 Line 833 stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/dir.mdtest.0.0
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/dir.mdtest.0.1
V-3: Rank 0 Line 1716 will file_test on mdtest_tree.0.0
V-3: Rank 0 Line 990 Entering file_test on mdtest_tree.0.0
V-3: Rank 0 Line 1012 file_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/file.mdtest.0.0'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 483 create_remove_items (for loop): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/'
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1//file.mdtest.0.1'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/'
V-3: Rank 0 Line 1079 file_test: stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/file.mdtest.0.0
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/file.mdtest.0.1
V-3: Rank 0 main (before display_freespace): o.testdirpath is '/dev/shm/mdest'
V-3: Rank 0 main (after display_freespace): o.testdirpath is '/dev/shm/mdest'
V-3: Rank 0 main (create hierarchical directory loop-!collective_creates): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 i 1 nstride 0
V-3: Rank 0 V-3: main: Copied unique_mk_dir, 'mdtest_tree.0.0', to topdir
V-3: Rank 0 directory_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/dir.mdtest.0.0'
V-3: Rank 0 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 create_remove_items (for loop): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/'
V-3: Rank 0 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1//dir.mdtest.0.1'
V-3: Rank 0 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/'
V-3: Rank 0 stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/dir.mdtest.0.0
V-3: Rank 0 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/dir.mdtest.0.1
V-3: Rank 0 will file_test on mdtest_tree.0.0
V-3: Rank 0 Entering file_test on mdtest_tree.0.0
V-3: Rank 0 file_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/file.mdtest.0.0'
V-3: Rank 0 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 create_remove_items_helper: close...
V-3: Rank 0 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 create_remove_items (for loop): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/'
V-3: Rank 0 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1//file.mdtest.0.1'
V-3: Rank 0 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 create_remove_items_helper: close...
V-3: Rank 0 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/'
V-3: Rank 0 file_test: stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/file.mdtest.0.0
V-3: Rank 0 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/file.mdtest.0.1

33
testing/s3.sh Executable file
View File

@ -0,0 +1,33 @@
#!/bin/bash
# Test basic S3 behavior using minio.
ROOT="$(dirname ${BASH_SOURCE[0]})"
TYPE="basic"
if [[ ! -e $ROOT/minio ]] ; then
  wget https://dl.min.io/server/minio/release/linux-amd64/minio
  mv minio $ROOT
  chmod +x $ROOT/minio
fi
export MINIO_ACCESS_KEY=accesskey
export MINIO_SECRET_KEY=secretkey
$ROOT/minio --quiet server /dev/shm &
export IOR_EXTRA="-o test"
export MDTEST_EXTRA="-d test"
source $ROOT/test-lib.sh
I=100 # Start with this ID
IOR 2 -a S3-libs3 --S3.host=localhost:9000 --S3.secret-key=secretkey --S3.access-key=accesskey -b $((10*1024*1024)) -t $((10*1024*1024))
MDTEST 2 -a S3-libs3 -L --S3.host=localhost:9000 --S3.secret-key=secretkey --S3.access-key=accesskey -n 10
MDTEST 2 -a S3-libs3 --S3.host=localhost:9000 --S3.secret-key=secretkey --S3.access-key=accesskey -n 5 -w 1024 -e 1024
IOR 1 -a S3-libs3 --S3.host=localhost:9000 --S3.secret-key=secretkey --S3.access-key=accesskey -b $((10*1024)) -t $((10*1024)) --S3.bucket-per-file
MDTEST 1 -a S3-libs3 -L --S3.host=localhost:9000 --S3.secret-key=secretkey --S3.access-key=accesskey --S3.bucket-per-file -n 5
MDTEST 1 -a S3-libs3 --S3.host=localhost:9000 --S3.secret-key=secretkey --S3.access-key=accesskey --S3.bucket-per-file -n 10 -w 1024 -e 1024
kill -9 %1
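A minimal usage sketch for this script, assuming ./src/ior and ./src/mdtest are already built with the S3-libs3 backend and TCP port 9000 is free for minio; these invocation details are an assumption and not part of the patch:

    bash testing/s3.sh                 # fetches minio on first use, serves /dev/shm, runs the IOR/mdtest cases above
    cat test_logs/basic/test_out.*     # per-case logs from test-lib.sh; TYPE="basic" above selects the subdirectory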

View File

@ -7,12 +7,17 @@
# Example: export IOR_EXTRA="-v -v -v"
IOR_MPIRUN=${IOR_MPIRUN:-mpiexec -np}
if ${IOR_MPIRUN} 1 --oversubscribe true ; then
  IOR_MPIRUN="mpiexec --oversubscribe -np"
fi
IOR_BIN_DIR=${IOR_BIN_DIR:-./src}
IOR_OUT=${IOR_OUT:-./test_logs}
IOR_OUT=${IOR_OUT:-./test_logs/$TYPE}
IOR_TMP=${IOR_TMP:-/dev/shm}
IOR_EXTRA=${IOR_EXTRA:-} # Add global options like verbosity
MDTEST_EXTRA=${MDTEST_EXTRA:-}
MDTEST_TEST_PATTERNS=${MDTEST_TEST_PATTERNS:-../testing/mdtest-patterns/$TYPE}
MDWB_EXTRA=${MDWB_EXTRA:-}
################################################################################
mkdir -p ${IOR_OUT}
@ -40,7 +45,7 @@ I=0
function IOR(){
RANKS=$1
shift
WHAT="${IOR_MPIRUN} $RANKS ${IOR_BIN_DIR}/ior ${@} ${IOR_EXTRA} -o ${IOR_TMP}/ior"
WHAT="${IOR_MPIRUN} $RANKS ${IOR_BIN_DIR}/ior ${@} -o ${IOR_TMP}/ior ${IOR_EXTRA}"
$WHAT 1>"${IOR_OUT}/test_out.$I" 2>&1
if [[ $? != 0 ]]; then
echo -n "ERR"
@ -56,15 +61,15 @@ function MDTEST(){
RANKS=$1
shift
rm -rf ${IOR_TMP}/mdest
WHAT="${IOR_MPIRUN} $RANKS ${IOR_BIN_DIR}/mdtest ${@} ${MDTEST_EXTRA} -d ${IOR_TMP}/mdest -V=4"
WHAT="${IOR_MPIRUN} $RANKS ${IOR_BIN_DIR}/mdtest ${@} -d ${IOR_TMP}/mdest ${MDTEST_EXTRA} -V=4"
$WHAT 1>"${IOR_OUT}/test_out.$I" 2>&1
if [[ $? != 0 ]]; then
echo -n "ERR"
ERRORS=$(($ERRORS + 1))
else
# compare basic pattern
grep "V-3" "${IOR_OUT}/test_out.$I" | sed "s/Line *[0-9]*//" > "${IOR_OUT}/tmp"
if [[ -r ${MDTEST_TEST_PATTERNS}/$I.txt ]] ; then
grep "V-3" "${IOR_OUT}/test_out.$I" > "${IOR_OUT}/tmp"
cmp -s "${IOR_OUT}/tmp" ${MDTEST_TEST_PATTERNS}/$I.txt
if [[ $? != 0 ]]; then
mv "${IOR_OUT}/tmp" ${IOR_OUT}/tmp.$I
@ -74,7 +79,7 @@ function MDTEST(){
if [[ ! -e ${MDTEST_TEST_PATTERNS} ]] ; then
mkdir -p ${MDTEST_TEST_PATTERNS}
fi
grep "V-3" "${IOR_OUT}/test_out.$I" > ${MDTEST_TEST_PATTERNS}/$I.txt
mv "${IOR_OUT}/tmp" ${MDTEST_TEST_PATTERNS}/$I.txt
fi
echo -n "OK "
fi
@ -82,6 +87,25 @@ function MDTEST(){
I=$((${I}+1))
}
function MDWB(){
  RANKS=$1
  shift
  if [[ "$DELETE" != "0" ]] ; then
    rm -rf "${IOR_TMP}/md-workbench"
  fi
  WHAT="${IOR_MPIRUN} $RANKS ${IOR_BIN_DIR}/md-workbench ${@} -o ${IOR_TMP}/md-workbench ${MDWB_EXTRA}"
  LOG="${IOR_OUT}/test_out.$I"
  $WHAT 1>"$LOG" 2>&1
  if [[ $? != 0 ]] || grep '!!!' "$LOG" ; then
    echo -n "ERR"
    ERRORS=$(($ERRORS + 1))
  else
    echo -n "OK "
  fi
  echo " $WHAT"
  I=$((${I}+1))
}
function END(){
if [[ ${ERRORS} == 0 ]] ; then
echo "PASSED"

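The MDWB helper added above mirrors the existing IOR and MDTEST helpers: arguments after the rank count are forwarded to md-workbench as given, with MDWB_EXTRA appended and -o ${IOR_TMP}/md-workbench supplied by the helper itself. A hypothetical caller, written under the same conventions as s3.sh and not part of this patch, might look like:

    TYPE="basic"                 # selects test_logs/<TYPE>/ for the output logs
    source $ROOT/test-lib.sh     # $ROOT assumed to point at the testing/ directory
    MDWB 1                       # run md-workbench on one rank
    DELETE=0                     # optional: keep the working directory so a follow-up run can reuse it
    MDWB 1
    END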
View File

@ -2,16 +2,16 @@
IOR START
api=posix
writeFile =1
randomOffset=1
randomOffset=1
reorderTasks=1
filePerProc=1
filePerProc=1
keepFile=1
fsync=1
repetitions=1
multiFile=1
# tab-prefixed comment
transferSize=100k
blockSize=100k
transferSize=10k
blockSize=20k
# space-prefixed comment
run
--dummy.delay-create=1000
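Scripts in this format are consumed through ior's -f option rather than expanded on the command line; each directive after IOR START sets an option for the run statements that follow it. A hypothetical invocation (the script's file name is assumed, since the diff does not show it) would be:

    mpiexec -np 2 ./src/ior -f testing/example.cfg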