Merge branch 'master' into fix-194
commit ccba3b11e5

@@ -1,8 +1,11 @@
tags
Makefile
Makefile.in
aclocal.m4
config.log
config.status
COPYING
INSTALL
config/compile
config/config.guess
config/config.sub
@@ -12,11 +15,14 @@ config/missing
config/test-driver
configure
contrib/.deps/
contrib/cbif
contrib/Makefile
contrib/Makefile.in
contrib/cbif
doc/Makefile
doc/Makefile.in
src/.deps/
src/mdtest
src/Makefile
src/Makefile.in
src/config.h
@@ -29,7 +35,14 @@ contrib/cbif.o
src/*.o
src/*.i
src/*.s
src/*.a
src/ior
src/mdtest
src/testlib
src/test/.deps/
src/test/.dirstamp
src/test/lib.o
build/

doc/doxygen/build
doc/sphinx/_*/
NEWS
@@ -120,7 +120,7 @@ Version 2.10.1
 - Corrected IOR_GetFileSize() function to point to HDF5 and NCMPI versions of
   IOR_GetFileSize() calls
 - Changed the netcdf dataset from 1D array to 4D array, where the 4 dimensions
-  are: [segmentCount][numTasksWorld][numTransfers][transferSize]
+  are: [segmentCount][numTasks][numTransfers][transferSize]
   This patch from Wei-keng Liao allows for file sizes > 4GB (provided no
   single dimension is > 4GB).
 - Finalized random-capability release
@@ -0,0 +1,86 @@
Building
----------------------

The DAOS library must be installed on the system.

./bootstrap
./configure --prefix=iorInstallDir --with-daos=DIR --with-cart=DIR

One must specify both "--with-daos=/path/to/daos/install" and "--with-cart".
When both are specified, the DAOS and DFS drivers will be built.

The DAOS driver uses the DAOS API to open a container (or create it if it
does not exist yet), create an array object in that container (the file), and
read/write the array object using the DAOS Array API. The DAOS driver works
with IOR only (no mdtest support yet). The file name used by IOR (passed via
the -o option) is hashed to an object ID that is used as the array oid.

The DFS (DAOS File System) driver creates an encapsulated namespace and emulates
the POSIX driver using the DFS API directly on top of DAOS. The DFS driver works
with both IOR and mdtest.

Running with DAOS API
---------------------

ior -a DAOS [ior_options] [daos_options]

In the IOR options, the file name should be specified as a container uuid using
"-o <container_uuid>". If the "-E" option is given, then this UUID shall denote
an existing container created by a "matching" IOR run. Otherwise, IOR will
create a new container with this UUID. In the latter case, one may use
uuidgen(1) to generate the UUID of the new container.
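
The steps above can be sketched as a short command sequence. The pool uuid and
svcl ranks below are placeholders (left as "<...>" since they depend on your
deployment); only the uuid generation actually runs as-is:

```shell
# Generate a fresh container UUID for a new run (uuidgen from util-linux,
# with a Linux procfs fallback if uuidgen is unavailable).
cont_uuid=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)

# The corresponding IOR invocation would then be (shown via echo because
# <pool_uuid>/<svc_ranks> are placeholders, not real values):
echo ior -a DAOS -w -r -o "$cont_uuid" \
    --daos.pool "<pool_uuid>" --daos.svcl "<svc_ranks>" --daos.cont "$cont_uuid"
```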

The DAOS options include:

Required Options:
--daos.pool <pool_uuid>: pool uuid to connect to (has to be created beforehand)
--daos.svcl <pool_svcl>: pool svcl list (: separated)
--daos.cont <cont_uuid>: container for the IOR files/objects (can use `uuidgen`)

Optional Options:
--daos.group <group_name>: group name of servers with the pool
--daos.chunk_size <chunk_size>: chunk size of the array object controlling striping over DKEYs
--daos.destroy: flag to destroy the container on finalize
--daos.oclass <object_class>: specific object class for the array object

Examples that should work include:

  - "ior -a DAOS -w -W -o file_name --daos.pool <pool_uuid> --daos.svcl <svc_ranks> \
       --daos.cont <cont_uuid>"

  - "ior -a DAOS -w -W -r -R -o file_name -b 1g -t 4m \
       --daos.pool <pool_uuid> --daos.svcl <svc_ranks> --daos.cont <cont_uuid> \
       --daos.chunk_size 1024 --daos.oclass R2"

Running with DFS API
---------------------

ior -a DFS [ior_options] [dfs_options]
mdtest -a DFS [mdtest_options] [dfs_options]

Required Options:
--dfs.pool <pool_uuid>: pool uuid to connect to (has to be created beforehand)
--dfs.svcl <pool_svcl>: pool svcl list (: separated)
--dfs.cont <co_uuid>: container uuid that will hold the encapsulated namespace

Optional Options:
--dfs.group <group_name>: group name of servers with the pool
--dfs.chunk_size <chunk_size>: chunk size of the files
--dfs.destroy: flag to destroy the container on finalize
--dfs.oclass <object_class>: specific object class for files

In the IOR options, the file name should be specified directly under the root
dir, since ior does not create directories and the DFS container representing
the encapsulated namespace is not the same as the system namespace the user is
executing from.

Examples that should work include:
  - "ior -a DFS -w -W -o /test1 --dfs.pool <pool_uuid> --dfs.svcl <svc_ranks> --dfs.cont <co_uuid>"
  - "ior -a DFS -w -W -r -R -o /test2 -b 1g -t 4m -C --dfs.pool <pool_uuid> --dfs.svcl <svc_ranks> --dfs.cont <co_uuid>"
  - "ior -a DFS -w -r -o /test3 -b 8g -t 1m -C --dfs.pool <pool_uuid> --dfs.svcl <svc_ranks> --dfs.cont <co_uuid>"

When running mdtest, the user needs to specify a directory with -d where the
test tree will be created. Some examples:
  - "mdtest -a DFS -n 100 -F -D -d /bla --dfs.pool <pool_uuid> --dfs.svcl <svc_ranks> --dfs.cont <co_uuid>"
  - "mdtest -a DFS -n 1000 -F -C -d /bla --dfs.pool <pool_uuid> --dfs.svcl <svc_ranks> --dfs.cont <co_uuid>"
  - "mdtest -a DFS -I 10 -z 5 -b 2 -L -d /bla --dfs.pool <pool_uuid> --dfs.svcl <svc_ranks> --dfs.cont <co_uuid>"
configure.ac
@@ -185,7 +185,60 @@ AM_COND_IF([USE_RADOS_AIORI],[
        AC_DEFINE([USE_RADOS_AIORI], [], [Build RADOS backend AIORI])
])

# DAOS Backends (DAOS and DFS) IO support require DAOS and CART/GURT
AC_ARG_WITH([cart],
 [AS_HELP_STRING([--with-cart],
    [support IO with DAOS backends @<:@default=no@:>@])],
 [],
 [with_cart=no])

AS_IF([test "x$with_cart" != xno],
      CART="yes"
      LDFLAGS="$LDFLAGS -L$with_cart/lib"
      CPPFLAGS="$CPPFLAGS -I$with_cart/include/"
      AC_CHECK_HEADERS(gurt/common.h,, [unset CART])
      AC_CHECK_LIB([gurt], [d_hash_murmur64],, [unset CART]))

AC_ARG_WITH([daos],
 [AS_HELP_STRING([--with-daos],
    [support IO with DAOS backends @<:@default=no@:>@])],
 [],
 [with_daos=no])

AS_IF([test "x$with_daos" != xno],
      DAOS="yes"
      LDFLAGS="$LDFLAGS -L$with_daos/lib"
      CPPFLAGS="$CPPFLAGS -I$with_daos/include"
      AC_CHECK_HEADERS(daos_types.h,, [unset DAOS])
      AC_CHECK_LIB([uuid], [uuid_generate],, [unset DAOS])
      AC_CHECK_LIB([daos_common], [daos_sgl_init],, [unset DAOS])
      AC_CHECK_LIB([daos], [daos_init],, [unset DAOS])
      AC_CHECK_LIB([dfs], [dfs_mkdir],, [unset DAOS]))

AM_CONDITIONAL([USE_DAOS_AIORI], [test x$DAOS = xyes])
AM_COND_IF([USE_DAOS_AIORI],[
        AC_DEFINE([USE_DAOS_AIORI], [], [Build DAOS backends AIORI])
])

# Gfarm support
AC_MSG_CHECKING([for Gfarm file system])
AC_ARG_WITH([gfarm],
  [AS_HELP_STRING([--with-gfarm=GFARM_ROOT],
    [support IO with Gfarm backend @<:@default=no@:>@])],
  [], [with_gfarm=no])
AC_MSG_RESULT([$with_gfarm])
AM_CONDITIONAL([USE_GFARM_AIORI], [test x$with_gfarm != xno])
if test x$with_gfarm != xno; then
  AC_DEFINE([USE_GFARM_AIORI], [], [Build Gfarm backend AIORI])
  case x$with_gfarm in
  xyes) ;;
  *)
    CPPFLAGS="$CPPFLAGS -I$with_gfarm/include"
    LDFLAGS="$LDFLAGS -L$with_gfarm/lib" ;;
  esac
  AC_CHECK_LIB([gfarm], [gfarm_initialize],, [AC_MSG_ERROR([libgfarm not found])])
  AC_CHECK_MEMBERS([struct stat.st_mtim.tv_nsec])
fi

# aws4c is needed for the S3 backend (see --with-S3, below).
# Version 0.5.2 of aws4c is available at https://github.com/jti-lanl/aws4c.git
|
@ -550,6 +550,17 @@ HOW DOES IOR CALCULATE PERFORMANCE?
|
|||
operations (-g), the sum of the open, transfer, and close times may not equal
|
||||
the elapsed time from the first open to the last close.
|
||||
|
||||
After each iteration (-i) IOR reports performance for that iteration, and
|
||||
those numbers include:
|
||||
|
||||
- Bandwidth (described above)
|
||||
|
||||
- IOPS: I/O rate (operations per second) achieved by all tasks given the total
|
||||
time spent in reading and writing the data.
|
||||
|
||||
- Latency: computed by taking the average latency of all I/O operations from a
|
||||
single task. If ior is run with multiple tasks, then the latency reported is
|
||||
the minimum that was computed between all tasks.
|

HOW DO I ACCESS MULTIPLE FILE SYSTEMS IN IOR?

@@ -181,6 +181,7 @@ again, using this option changes our performance measurement quite a bit::
and we finally have a believable bandwidth measurement for our file system.

Defeating Page Cache
--------------------
Since IOR is specifically designed to benchmark I/O, it provides these options
that make it as easy as possible to ensure that you are actually measuring the
performance of your file system and not your compute nodes' memory. That being
@@ -70,6 +70,15 @@ extraSOURCES += aiori-RADOS.c
extraLDADD += -lrados
endif

if USE_DAOS_AIORI
extraSOURCES += aiori-DAOS.c aiori-DFS.c
endif

if USE_GFARM_AIORI
extraSOURCES += aiori-Gfarm.c
extraLDADD += -lgfarm
endif

if USE_S3_AIORI
extraSOURCES += aiori-S3.c
if AWS4C_DIR
@@ -0,0 +1,551 @@
/*
 * -*- mode: c; c-basic-offset: 8; indent-tabs-mode: nil; -*-
 * vim:expandtab:shiftwidth=8:tabstop=8:
 */
/*
 * Copyright (C) 2018-2019 Intel Corporation
 *
 * GOVERNMENT LICENSE RIGHTS-OPEN SOURCE SOFTWARE
 * The Government's rights to use, modify, reproduce, release, perform, display,
 * or disclose this software are subject to the terms of the Apache License as
 * provided in Contract No. 8F-30005.
 * Any reproduction of computer software, computer software documentation, or
 * portions thereof marked with this legend must also reproduce the markings.
 */

/*
 * This file implements the abstract I/O interface for the DAOS Array API.
 */

#define _BSD_SOURCE

#ifdef HAVE_CONFIG_H
#include "config.h"
#endif

#include <stdint.h>
#include <assert.h>
#include <unistd.h>
#include <strings.h>
#include <sys/types.h>
#include <libgen.h>
#include <stdbool.h>

#include <gurt/common.h>
#include <daos.h>

#include "ior.h"
#include "aiori.h"
#include "iordef.h"

/************************** O P T I O N S *****************************/
struct daos_options {
        char    *pool;
        char    *svcl;
        char    *group;
        char    *cont;
        int      chunk_size;
        int      destroy;
        char    *oclass;
};

static struct daos_options o = {
        .pool           = NULL,
        .svcl           = NULL,
        .group          = NULL,
        .cont           = NULL,
        .chunk_size     = 1048576,
        .destroy        = 0,
        .oclass         = NULL,
};

static option_help options [] = {
        {0, "daos.pool", "pool uuid", OPTION_OPTIONAL_ARGUMENT, 's', &o.pool},
        {0, "daos.svcl", "pool SVCL", OPTION_OPTIONAL_ARGUMENT, 's', &o.svcl},
        {0, "daos.group", "server group", OPTION_OPTIONAL_ARGUMENT, 's', &o.group},
        {0, "daos.cont", "container uuid", OPTION_OPTIONAL_ARGUMENT, 's', &o.cont},
        {0, "daos.chunk_size", "chunk size", OPTION_OPTIONAL_ARGUMENT, 'd', &o.chunk_size},
        {0, "daos.destroy", "Destroy Container", OPTION_FLAG, 'd', &o.destroy},
        {0, "daos.oclass", "object class", OPTION_OPTIONAL_ARGUMENT, 's', &o.oclass},
        LAST_OPTION
};

/**************************** P R O T O T Y P E S *****************************/

static void DAOS_Init();
static void DAOS_Fini();
static void *DAOS_Create(char *, IOR_param_t *);
static void *DAOS_Open(char *, IOR_param_t *);
static int DAOS_Access(const char *, int, IOR_param_t *);
static IOR_offset_t DAOS_Xfer(int, void *, IOR_size_t *,
                              IOR_offset_t, IOR_param_t *);
static void DAOS_Close(void *, IOR_param_t *);
static void DAOS_Delete(char *, IOR_param_t *);
static char* DAOS_GetVersion();
static void DAOS_Fsync(void *, IOR_param_t *);
static IOR_offset_t DAOS_GetFileSize(IOR_param_t *, MPI_Comm, char *);
static option_help * DAOS_options();

/************************** D E C L A R A T I O N S ***************************/

ior_aiori_t daos_aiori = {
        .name           = "DAOS",
        .create         = DAOS_Create,
        .open           = DAOS_Open,
        .access         = DAOS_Access,
        .xfer           = DAOS_Xfer,
        .close          = DAOS_Close,
        .delete         = DAOS_Delete,
        .get_version    = DAOS_GetVersion,
        .fsync          = DAOS_Fsync,
        .get_file_size  = DAOS_GetFileSize,
        .initialize     = DAOS_Init,
        .finalize       = DAOS_Fini,
        .get_options    = DAOS_options,
        .statfs         = aiori_posix_statfs,
        .mkdir          = aiori_posix_mkdir,
        .rmdir          = aiori_posix_rmdir,
        .stat           = aiori_posix_stat,
};

#define IOR_DAOS_MUR_SEED 0xDEAD10CC

enum handleType {
        POOL_HANDLE,
        CONT_HANDLE,
        ARRAY_HANDLE
};

static daos_handle_t    poh;
static daos_handle_t    coh;
static daos_handle_t    aoh;
static daos_oclass_id_t objectClass = OC_SX;
static bool             daos_initialized = false;

/***************************** F U N C T I O N S ******************************/

/* For DAOS methods. */
#define DCHECK(rc, format, ...)                                         \
do {                                                                    \
        int _rc = (rc);                                                 \
                                                                        \
        if (_rc < 0) {                                                  \
                fprintf(stderr, "ior ERROR (%s:%d): %d: %d: "           \
                        format"\n", __FILE__, __LINE__, rank, _rc,      \
                        ##__VA_ARGS__);                                 \
                fflush(stdout);                                         \
                MPI_Abort(MPI_COMM_WORLD, -1);                          \
        }                                                               \
} while (0)

#define INFO(level, format, ...)                                        \
do {                                                                    \
        if (verbose >= level)                                           \
                printf("[%d] "format"\n", rank, ##__VA_ARGS__);         \
} while (0)

/* For generic errors like invalid command line options. */
#define GERR(format, ...)                                               \
do {                                                                    \
        fprintf(stderr, format"\n", ##__VA_ARGS__);                     \
        MPI_CHECK(MPI_Abort(MPI_COMM_WORLD, -1), "MPI_Abort() error");  \
} while (0)

/* Distribute process 0's pool or container handle to others. */
static void
HandleDistribute(daos_handle_t *handle, enum handleType type)
{
        d_iov_t global;
        int     rc;

        global.iov_buf = NULL;
        global.iov_buf_len = 0;
        global.iov_len = 0;

        if (rank == 0) {
                /* Get the global handle size. */
                if (type == POOL_HANDLE)
                        rc = daos_pool_local2global(*handle, &global);
                else if (type == CONT_HANDLE)
                        rc = daos_cont_local2global(*handle, &global);
                else
                        rc = daos_array_local2global(*handle, &global);
                DCHECK(rc, "Failed to get global handle size");
        }

        MPI_CHECK(MPI_Bcast(&global.iov_buf_len, 1, MPI_UINT64_T, 0,
                            MPI_COMM_WORLD),
                  "Failed to bcast global handle buffer size");

        global.iov_len = global.iov_buf_len;
        global.iov_buf = malloc(global.iov_buf_len);
        if (global.iov_buf == NULL)
                ERR("Failed to allocate global handle buffer");

        if (rank == 0) {
                if (type == POOL_HANDLE)
                        rc = daos_pool_local2global(*handle, &global);
                else if (type == CONT_HANDLE)
                        rc = daos_cont_local2global(*handle, &global);
                else
                        rc = daos_array_local2global(*handle, &global);
                DCHECK(rc, "Failed to create global handle");
        }

        MPI_CHECK(MPI_Bcast(global.iov_buf, global.iov_buf_len, MPI_BYTE, 0,
                            MPI_COMM_WORLD),
                  "Failed to bcast global pool handle");

        if (rank != 0) {
                if (type == POOL_HANDLE)
                        rc = daos_pool_global2local(global, handle);
                else if (type == CONT_HANDLE)
                        rc = daos_cont_global2local(poh, global, handle);
                else
                        rc = daos_array_global2local(coh, global, 0, handle);
                DCHECK(rc, "Failed to get local handle");
        }

        free(global.iov_buf);
}

static option_help *
DAOS_options()
{
        return options;
}

static void
DAOS_Init()
{
        int rc;

        if (daos_initialized)
                return;

        if (o.pool == NULL || o.svcl == NULL || o.cont == NULL) {
                GERR("Invalid DAOS pool/cont\n");
                return;
        }

        if (o.oclass) {
                objectClass = daos_oclass_name2id(o.oclass);
                if (objectClass == OC_UNKNOWN)
                        GERR("Invalid DAOS Object class %s\n", o.oclass);
        }

        rc = daos_init();
        if (rc)
                DCHECK(rc, "Failed to initialize daos");

        if (rank == 0) {
                uuid_t                  uuid;
                d_rank_list_t          *svcl = NULL;
                static daos_pool_info_t po_info;
                static daos_cont_info_t co_info;

                INFO(VERBOSE_1, "Connecting to pool %s", o.pool);

                rc = uuid_parse(o.pool, uuid);
                DCHECK(rc, "Failed to parse 'pool': %s", o.pool);

                svcl = daos_rank_list_parse(o.svcl, ":");
                if (svcl == NULL)
                        ERR("Failed to allocate svcl");

                rc = daos_pool_connect(uuid, o.group, svcl, DAOS_PC_RW,
                                       &poh, &po_info, NULL);
                d_rank_list_free(svcl);
                DCHECK(rc, "Failed to connect to pool %s", o.pool);

                INFO(VERBOSE_1, "Create/Open Container %s", o.cont);

                uuid_clear(uuid);
                rc = uuid_parse(o.cont, uuid);
                DCHECK(rc, "Failed to parse 'cont': %s", o.cont);

                rc = daos_cont_open(poh, uuid, DAOS_COO_RW, &coh, &co_info,
                                    NULL);
                /* If NOEXIST we create it */
                if (rc == -DER_NONEXIST) {
                        INFO(VERBOSE_2, "Creating DAOS Container...\n");
                        rc = daos_cont_create(poh, uuid, NULL, NULL);
                        if (rc == 0)
                                rc = daos_cont_open(poh, uuid, DAOS_COO_RW,
                                                    &coh, &co_info, NULL);
                }
                DCHECK(rc, "Failed to create container");
        }

        HandleDistribute(&poh, POOL_HANDLE);
        HandleDistribute(&coh, CONT_HANDLE);
        aoh.cookie = 0;

        daos_initialized = true;
}

static void
DAOS_Fini()
{
        int rc;

        if (!daos_initialized)
                return;

        MPI_Barrier(MPI_COMM_WORLD);
        rc = daos_cont_close(coh, NULL);
        if (rc) {
                DCHECK(rc, "Failed to close container %s (%d)", o.cont, rc);
                MPI_Abort(MPI_COMM_WORLD, -1);
        }
        MPI_Barrier(MPI_COMM_WORLD);

        if (o.destroy) {
                if (rank == 0) {
                        uuid_t uuid;
                        double t1, t2;

                        INFO(VERBOSE_1, "Destroying DAOS Container %s", o.cont);
                        uuid_parse(o.cont, uuid);
                        t1 = MPI_Wtime();
                        rc = daos_cont_destroy(poh, uuid, 1, NULL);
                        t2 = MPI_Wtime();
                        if (rc == 0)
                                INFO(VERBOSE_1, "Container Destroy time = %f secs", t2-t1);
                }

                MPI_Bcast(&rc, 1, MPI_INT, 0, MPI_COMM_WORLD);
                if (rc) {
                        if (rank == 0)
                                DCHECK(rc, "Failed to destroy container %s (%d)", o.cont, rc);
                        MPI_Abort(MPI_COMM_WORLD, -1);
                }
        }

        if (rank == 0)
                INFO(VERBOSE_1, "Disconnecting from DAOS POOL..");

        rc = daos_pool_disconnect(poh, NULL);
        DCHECK(rc, "Failed to disconnect from pool %s", o.pool);

        MPI_CHECK(MPI_Barrier(MPI_COMM_WORLD), "barrier error");
        if (rank == 0)
                INFO(VERBOSE_1, "Finalizing DAOS..");

        rc = daos_fini();
        DCHECK(rc, "Failed to finalize daos");

        daos_initialized = false;
}

static void
gen_oid(const char *name, daos_obj_id_t *oid)
{
        oid->lo = d_hash_murmur64(name, strlen(name), IOR_DAOS_MUR_SEED);
        oid->hi = 0;

        daos_array_generate_id(oid, objectClass, true, 0);
}

static void *
DAOS_Create(char *testFileName, IOR_param_t *param)
{
        daos_obj_id_t oid;
        int           rc;

        /** Convert file name into object ID */
        gen_oid(testFileName, &oid);

        /** Create the array */
        if (param->filePerProc || rank == 0) {
                rc = daos_array_create(coh, oid, DAOS_TX_NONE, 1, o.chunk_size,
                                       &aoh, NULL);
                DCHECK(rc, "Failed to create array object\n");
        }

        /** Distribute the array handle if not FPP */
        if (!param->filePerProc)
                HandleDistribute(&aoh, ARRAY_HANDLE);

        return &aoh;
}

static int
DAOS_Access(const char *testFileName, int mode, IOR_param_t *param)
{
        daos_obj_id_t oid;
        daos_size_t   cell_size, chunk_size;
        int           rc;

        /** Convert file name into object ID */
        gen_oid(testFileName, &oid);

        rc = daos_array_open(coh, oid, DAOS_TX_NONE, DAOS_OO_RO,
                             &cell_size, &chunk_size, &aoh, NULL);
        if (rc)
                return rc;

        if (cell_size != 1)
                GERR("Invalid DAOS Array object.\n");

        rc = daos_array_close(aoh, NULL);
        aoh.cookie = 0;
        return rc;
}

static void *
DAOS_Open(char *testFileName, IOR_param_t *param)
{
        daos_obj_id_t oid;

        /** Convert file name into object ID */
        gen_oid(testFileName, &oid);

        /** Open the array */
        if (param->filePerProc || rank == 0) {
                daos_size_t cell_size, chunk_size;
                int         rc;

                rc = daos_array_open(coh, oid, DAOS_TX_NONE, DAOS_OO_RW,
                                     &cell_size, &chunk_size, &aoh, NULL);
                DCHECK(rc, "Failed to open array object\n");

                if (cell_size != 1)
                        GERR("Invalid DAOS Array object.\n");
        }

        /** Distribute the array handle if not FPP */
        if (!param->filePerProc)
                HandleDistribute(&aoh, ARRAY_HANDLE);

        return &aoh;
}

static IOR_offset_t
DAOS_Xfer(int access, void *file, IOR_size_t *buffer,
          IOR_offset_t length, IOR_param_t *param)
{
        daos_array_iod_t iod;
        daos_range_t     rg;
        d_sg_list_t      sgl;
        d_iov_t          iov;
        int              rc;

        /** set array location */
        iod.arr_nr = 1;
        rg.rg_len = length;
        rg.rg_idx = param->offset;
        iod.arr_rgs = &rg;

        /** set memory location */
        sgl.sg_nr = 1;
        d_iov_set(&iov, buffer, length);
        sgl.sg_iovs = &iov;

        if (access == WRITE) {
                rc = daos_array_write(aoh, DAOS_TX_NONE, &iod, &sgl, NULL, NULL);
                DCHECK(rc, "daos_array_write() failed (%d).", rc);
        } else {
                rc = daos_array_read(aoh, DAOS_TX_NONE, &iod, &sgl, NULL, NULL);
                DCHECK(rc, "daos_array_read() failed (%d).", rc);
        }

        return length;
}

static void
DAOS_Close(void *file, IOR_param_t *param)
{
        int rc;

        if (!daos_initialized)
                GERR("DAOS is not initialized!");

        rc = daos_array_close(aoh, NULL);
        DCHECK(rc, "daos_array_close() failed (%d).", rc);

        aoh.cookie = 0;
}

static void
DAOS_Delete(char *testFileName, IOR_param_t *param)
{
        daos_obj_id_t oid;
        daos_size_t   cell_size, chunk_size;
        int           rc;

        if (!daos_initialized)
                GERR("DAOS is not initialized!");

        /** Convert file name into object ID */
        gen_oid(testFileName, &oid);

        /** open the array to verify it exists */
        rc = daos_array_open(coh, oid, DAOS_TX_NONE, DAOS_OO_RW,
                             &cell_size, &chunk_size, &aoh, NULL);
        DCHECK(rc, "daos_array_open() failed (%d).", rc);

        if (cell_size != 1)
                GERR("Invalid DAOS Array object.\n");

        rc = daos_array_destroy(aoh, DAOS_TX_NONE, NULL);
        DCHECK(rc, "daos_array_destroy() failed (%d).", rc);

        rc = daos_array_close(aoh, NULL);
        DCHECK(rc, "daos_array_close() failed (%d).", rc);
        aoh.cookie = 0;
}

static char *
DAOS_GetVersion()
{
        static char ver[1024] = {};

        sprintf(ver, "%s", "DAOS");
        return ver;
}

static void
DAOS_Fsync(void *file, IOR_param_t *param)
{
        return;
}

static IOR_offset_t
DAOS_GetFileSize(IOR_param_t *param, MPI_Comm testComm, char *testFileName)
{
        daos_obj_id_t oid;
        daos_size_t   size;
        int           rc;

        if (!daos_initialized)
                GERR("DAOS is not initialized!");

        /** Convert file name into object ID */
        gen_oid(testFileName, &oid);

        /** open the array to verify it exists */
        if (param->filePerProc || rank == 0) {
                daos_size_t cell_size, chunk_size;

                rc = daos_array_open(coh, oid, DAOS_TX_NONE, DAOS_OO_RO,
                                     &cell_size, &chunk_size, &aoh, NULL);
                DCHECK(rc, "daos_array_open() failed (%d).", rc);

                if (cell_size != 1)
                        GERR("Invalid DAOS Array object.\n");

                rc = daos_array_get_size(aoh, DAOS_TX_NONE, &size, NULL);
                DCHECK(rc, "daos_array_get_size() failed (%d).", rc);

                rc = daos_array_close(aoh, NULL);
                DCHECK(rc, "daos_array_close() failed (%d).", rc);
                aoh.cookie = 0;
        }

        if (!param->filePerProc)
                MPI_Bcast(&size, 1, MPI_LONG, 0, MPI_COMM_WORLD);

        return size;
}
@ -0,0 +1,859 @@
|
|||
/* -*- mode: c; c-basic-offset: 8; indent-tabs-mode: nil; -*-
|
||||
* vim:expandtab:shiftwidth=8:tabstop=8:
|
||||
*/
|
||||
/*
|
||||
* Copyright (C) 2018-2019 Intel Corporation
|
||||
*
|
||||
* GOVERNMENT LICENSE RIGHTS-OPEN SOURCE SOFTWARE
|
||||
* The Government's rights to use, modify, reproduce, release, perform, display,
|
||||
* or disclose this software are subject to the terms of the Apache License as
|
||||
* provided in Contract No. 8F-30005.
|
||||
* Any reproduction of computer software, computer software documentation, or
|
||||
* portions thereof marked with this legend must also reproduce the markings.
|
||||
*/
|
||||
|
||||
/*
|
||||
* This file implements the abstract I/O interface for DAOS FS API.
|
||||
*/
|
||||
|
||||
#define _BSD_SOURCE
|
||||
|
||||
#ifdef HAVE_CONFIG_H
|
||||
#include "config.h"
|
||||
#endif
|
||||
|
||||
#include <string.h>
|
||||
#include <assert.h>
|
||||
#include <errno.h>
|
||||
#include <stdio.h>
|
||||
#include <dirent.h>
|
||||
#include <sys/types.h>
|
||||
#include <sys/stat.h>
|
||||
#include <unistd.h>
|
||||
#include <fcntl.h>
|
||||
#include <libgen.h>
|
||||
|
||||
#include <gurt/common.h>
|
||||
#include <gurt/hash.h>
|
||||
#include <daos.h>
|
||||
#include <daos_fs.h>
|
||||
|
||||
#include "ior.h"
|
||||
#include "iordef.h"
|
||||
#include "aiori.h"
|
||||
#include "utilities.h"
|
||||
|
||||
dfs_t *dfs;
|
||||
static daos_handle_t poh, coh;
|
||||
static daos_oclass_id_t objectClass = OC_SX;
|
||||
static struct d_hash_table *dir_hash;
|
||||
|
||||
struct aiori_dir_hdl {
|
||||
d_list_t entry;
|
||||
dfs_obj_t *oh;
|
||||
char name[PATH_MAX];
|
||||
};
|
||||
|
||||
enum handleType {
|
||||
POOL_HANDLE,
|
||||
CONT_HANDLE,
|
||||
ARRAY_HANDLE
|
||||
};
|
||||
|
||||
/************************** O P T I O N S *****************************/
|
||||
struct dfs_options{
|
||||
char *pool;
|
||||
char *svcl;
|
||||
char *group;
|
||||
char *cont;
|
||||
int chunk_size;
|
||||
char *oclass;
|
||||
int destroy;
|
||||
};
|
||||
|
||||
static struct dfs_options o = {
|
||||
.pool = NULL,
|
||||
.svcl = NULL,
|
||||
.group = NULL,
|
||||
.cont = NULL,
|
||||
.chunk_size = 1048576,
|
||||
.oclass = NULL,
|
||||
.destroy = 0,
|
||||
};
|
||||
|
||||
static option_help options [] = {
|
||||
{0, "dfs.pool", "pool uuid", OPTION_OPTIONAL_ARGUMENT, 's', & o.pool},
|
||||
{0, "dfs.svcl", "pool SVCL", OPTION_OPTIONAL_ARGUMENT, 's', & o.svcl},
|
||||
{0, "dfs.group", "server group", OPTION_OPTIONAL_ARGUMENT, 's', & o.group},
|
||||
{0, "dfs.cont", "DFS container uuid", OPTION_OPTIONAL_ARGUMENT, 's', & o.cont},
|
||||
{0, "dfs.chunk_size", "chunk size", OPTION_OPTIONAL_ARGUMENT, 'd', &o.chunk_size},
|
||||
{0, "dfs.oclass", "object class", OPTION_OPTIONAL_ARGUMENT, 's', &o.oclass},
|
||||
{0, "dfs.destroy", "Destroy DFS Container", OPTION_FLAG, 'd', &o.destroy},
|
||||
LAST_OPTION
|
||||
};
|
||||
|
||||
/**************************** P R O T O T Y P E S *****************************/
|
||||
static void *DFS_Create(char *, IOR_param_t *);
|
||||
static void *DFS_Open(char *, IOR_param_t *);
|
||||
static IOR_offset_t DFS_Xfer(int, void *, IOR_size_t *,
|
||||
IOR_offset_t, IOR_param_t *);
|
||||
static void DFS_Close(void *, IOR_param_t *);
|
||||
static void DFS_Delete(char *, IOR_param_t *);
|
||||
static char* DFS_GetVersion();
|
||||
static void DFS_Fsync(void *, IOR_param_t *);
|
||||
static IOR_offset_t DFS_GetFileSize(IOR_param_t *, MPI_Comm, char *);
|
||||
static int DFS_Statfs (const char *, ior_aiori_statfs_t *, IOR_param_t *);
|
||||
static int DFS_Stat (const char *, struct stat *, IOR_param_t *);
|
||||
static int DFS_Mkdir (const char *, mode_t, IOR_param_t *);
|
||||
static int DFS_Rmdir (const char *, IOR_param_t *);
|
||||
static int DFS_Access (const char *, int, IOR_param_t *);
|
||||
static void DFS_Init();
|
||||
static void DFS_Finalize();
|
||||
static option_help * DFS_options();
|
||||
|
||||
/************************** D E C L A R A T I O N S ***************************/
|
||||
|
||||
ior_aiori_t dfs_aiori = {
|
||||
.name = "DFS",
|
||||
.create = DFS_Create,
|
||||
.open = DFS_Open,
|
||||
.xfer = DFS_Xfer,
|
||||
.close = DFS_Close,
|
||||
.delete = DFS_Delete,
|
||||
.get_version = DFS_GetVersion,
|
||||
.fsync = DFS_Fsync,
|
||||
.get_file_size = DFS_GetFileSize,
|
||||
.statfs = DFS_Statfs,
|
||||
.mkdir = DFS_Mkdir,
|
||||
.rmdir = DFS_Rmdir,
|
||||
.access = DFS_Access,
|
||||
.stat = DFS_Stat,
|
||||
.initialize = DFS_Init,
|
||||
.finalize = DFS_Finalize,
|
||||
.get_options = DFS_options,
|
||||
};
|
||||
|
||||
/***************************** F U N C T I O N S ******************************/

/* For DAOS methods. */
#define DCHECK(rc, format, ...)                                         \
do {                                                                    \
        int _rc = (rc);                                                 \
                                                                        \
        if (_rc != 0) {                                                 \
                fprintf(stderr, "ERROR (%s:%d): %d: %d: "               \
                        format"\n", __FILE__, __LINE__, rank, _rc,      \
                        ##__VA_ARGS__);                                 \
                fflush(stderr);                                         \
                exit(-1);                                               \
        }                                                               \
} while (0)

#define INFO(level, format, ...)                                        \
do {                                                                    \
        if (verbose >= level)                                           \
                printf("[%d] "format"\n", rank, ##__VA_ARGS__);         \
} while (0)

#define GERR(format, ...)                                               \
do {                                                                    \
        fprintf(stderr, format"\n", ##__VA_ARGS__);                     \
        MPI_CHECK(MPI_Abort(MPI_COMM_WORLD, -1), "MPI_Abort() error");  \
} while (0)

static inline struct aiori_dir_hdl *
hdl_obj(d_list_t *rlink)
{
        return container_of(rlink, struct aiori_dir_hdl, entry);
}

static bool
key_cmp(struct d_hash_table *htable, d_list_t *rlink,
        const void *key, unsigned int ksize)
{
        struct aiori_dir_hdl *hdl = hdl_obj(rlink);

        return (strcmp(hdl->name, (const char *)key) == 0);
}

static void
rec_free(struct d_hash_table *htable, d_list_t *rlink)
{
        struct aiori_dir_hdl *hdl = hdl_obj(rlink);

        assert(d_hash_rec_unlinked(&hdl->entry));
        dfs_release(hdl->oh);
        free(hdl);
}

static d_hash_table_ops_t hdl_hash_ops = {
        .hop_key_cmp = key_cmp,
        .hop_rec_free = rec_free
};

/* Distribute process 0's pool or container handle to others. */
static void
HandleDistribute(daos_handle_t *handle, enum handleType type)
{
        d_iov_t global;
        int rc;

        global.iov_buf = NULL;
        global.iov_buf_len = 0;
        global.iov_len = 0;

        assert(type == POOL_HANDLE || type == CONT_HANDLE);
        if (rank == 0) {
                /* Get the global handle size. */
                if (type == POOL_HANDLE)
                        rc = daos_pool_local2global(*handle, &global);
                else
                        rc = daos_cont_local2global(*handle, &global);
                DCHECK(rc, "Failed to get global handle size");
        }

        MPI_CHECK(MPI_Bcast(&global.iov_buf_len, 1, MPI_UINT64_T, 0,
                            MPI_COMM_WORLD),
                  "Failed to bcast global handle buffer size");

        global.iov_len = global.iov_buf_len;
        global.iov_buf = malloc(global.iov_buf_len);
        if (global.iov_buf == NULL)
                ERR("Failed to allocate global handle buffer");

        if (rank == 0) {
                if (type == POOL_HANDLE)
                        rc = daos_pool_local2global(*handle, &global);
                else
                        rc = daos_cont_local2global(*handle, &global);
                DCHECK(rc, "Failed to create global handle");
        }

        MPI_CHECK(MPI_Bcast(global.iov_buf, global.iov_buf_len, MPI_BYTE, 0,
                            MPI_COMM_WORLD),
                  "Failed to bcast global pool handle");

        if (rank != 0) {
                if (type == POOL_HANDLE)
                        rc = daos_pool_global2local(global, handle);
                else
                        rc = daos_cont_global2local(poh, global, handle);
                DCHECK(rc, "Failed to get local handle");
        }

        free(global.iov_buf);
}

static int
parse_filename(const char *path, char **_obj_name, char **_cont_name)
{
        char *f1 = NULL;
        char *f2 = NULL;
        char *fname = NULL;
        char *cont_name = NULL;
        int rc = 0;

        if (path == NULL || _obj_name == NULL || _cont_name == NULL)
                return -EINVAL;

        if (strcmp(path, "/") == 0) {
                *_cont_name = strdup("/");
                if (*_cont_name == NULL)
                        return -ENOMEM;
                *_obj_name = NULL;
                return 0;
        }

        f1 = strdup(path);
        if (f1 == NULL) {
                rc = -ENOMEM;
                goto out;
        }

        f2 = strdup(path);
        if (f2 == NULL) {
                rc = -ENOMEM;
                goto out;
        }

        fname = basename(f1);
        cont_name = dirname(f2);

        if (cont_name[0] == '.' || cont_name[0] != '/') {
                char cwd[1024];

                if (getcwd(cwd, 1024) == NULL) {
                        rc = -ENOMEM;
                        goto out;
                }

                if (strcmp(cont_name, ".") == 0) {
                        cont_name = strdup(cwd);
                        if (cont_name == NULL) {
                                rc = -ENOMEM;
                                goto out;
                        }
                } else {
                        char *new_dir = calloc(strlen(cwd) + strlen(cont_name)
                                               + 1, sizeof(char));
                        if (new_dir == NULL) {
                                rc = -ENOMEM;
                                goto out;
                        }

                        strcpy(new_dir, cwd);
                        if (cont_name[0] == '.') {
                                strcat(new_dir, &cont_name[1]);
                        } else {
                                strcat(new_dir, "/");
                                strcat(new_dir, cont_name);
                        }
                        cont_name = new_dir;
                }
                *_cont_name = cont_name;
        } else {
                *_cont_name = strdup(cont_name);
                if (*_cont_name == NULL) {
                        rc = -ENOMEM;
                        goto out;
                }
        }

        *_obj_name = strdup(fname);
        if (*_obj_name == NULL) {
                free(*_cont_name);
                *_cont_name = NULL;
                rc = -ENOMEM;
                goto out;
        }

out:
        if (f1)
                free(f1);
        if (f2)
                free(f2);
        return rc;
}

static dfs_obj_t *
lookup_insert_dir(const char *name)
{
        struct aiori_dir_hdl *hdl;
        d_list_t *rlink;
        int rc;

        rlink = d_hash_rec_find(dir_hash, name, strlen(name));
        if (rlink != NULL) {
                hdl = hdl_obj(rlink);
                return hdl->oh;
        }

        hdl = calloc(1, sizeof(struct aiori_dir_hdl));
        if (hdl == NULL)
                GERR("failed to alloc dir handle");

        strncpy(hdl->name, name, PATH_MAX-1);
        hdl->name[PATH_MAX-1] = '\0';

        rc = dfs_lookup(dfs, name, O_RDWR, &hdl->oh, NULL, NULL);
        DCHECK(rc, "dfs_lookup() of %s Failed", name);

        rc = d_hash_rec_insert(dir_hash, hdl->name, strlen(hdl->name),
                               &hdl->entry, true);
        DCHECK(rc, "Failed to insert dir handle in hashtable");

        return hdl->oh;
}

static option_help * DFS_options(){
        return options;
}

static void
DFS_Init() {
        int rc;

        if (o.pool == NULL || o.svcl == NULL || o.cont == NULL)
                ERR("Invalid pool or container options\n");

        if (o.oclass) {
                objectClass = daos_oclass_name2id(o.oclass);
                if (objectClass == OC_UNKNOWN)
                        GERR("Invalid DAOS Object class %s\n", o.oclass);
        }

        rc = daos_init();
        DCHECK(rc, "Failed to initialize daos");

        rc = d_hash_table_create(0, 16, NULL, &hdl_hash_ops, &dir_hash);
        DCHECK(rc, "Failed to initialize dir hashtable");

        if (rank == 0) {
                uuid_t pool_uuid, co_uuid;
                d_rank_list_t *svcl = NULL;
                daos_pool_info_t pool_info;
                daos_cont_info_t co_info;

                rc = uuid_parse(o.pool, pool_uuid);
                DCHECK(rc, "Failed to parse 'Pool uuid': %s", o.pool);

                rc = uuid_parse(o.cont, co_uuid);
                DCHECK(rc, "Failed to parse 'Cont uuid': %s", o.cont);

                svcl = daos_rank_list_parse(o.svcl, ":");
                if (svcl == NULL)
                        ERR("Failed to allocate svcl");

                INFO(VERBOSE_1, "Pool uuid = %s, SVCL = %s\n", o.pool, o.svcl);
                INFO(VERBOSE_1, "DFS Container namespace uuid = %s\n", o.cont);

                /** Connect to DAOS pool */
                rc = daos_pool_connect(pool_uuid, o.group, svcl, DAOS_PC_RW,
                                       &poh, &pool_info, NULL);
                d_rank_list_free(svcl);
                DCHECK(rc, "Failed to connect to pool");

                rc = daos_cont_open(poh, co_uuid, DAOS_COO_RW, &coh, &co_info,
                                    NULL);
                /* If NOEXIST we create it */
                if (rc == -DER_NONEXIST) {
                        INFO(VERBOSE_1, "Creating DFS Container ...\n");

                        rc = dfs_cont_create(poh, co_uuid, NULL, &coh, NULL);
                        if (rc)
                                DCHECK(rc, "Failed to create container");
                } else if (rc) {
                        DCHECK(rc, "Failed to open container");
                }
        }

        HandleDistribute(&poh, POOL_HANDLE);
        HandleDistribute(&coh, CONT_HANDLE);

        rc = dfs_mount(poh, coh, O_RDWR, &dfs);
        DCHECK(rc, "Failed to mount DFS namespace");
}

static void
DFS_Finalize()
{
        int rc;

        MPI_Barrier(MPI_COMM_WORLD);
        d_hash_table_destroy(dir_hash, true /* force */);

        rc = dfs_umount(dfs);
        DCHECK(rc, "Failed to umount DFS namespace");
        MPI_Barrier(MPI_COMM_WORLD);

        rc = daos_cont_close(coh, NULL);
        DCHECK(rc, "Failed to close container %s (%d)", o.cont, rc);
        MPI_Barrier(MPI_COMM_WORLD);

        if (o.destroy) {
                if (rank == 0) {
                        uuid_t uuid;
                        double t1, t2;

                        INFO(VERBOSE_1, "Destroying DFS Container: %s\n", o.cont);
                        uuid_parse(o.cont, uuid);
                        t1 = MPI_Wtime();
                        rc = daos_cont_destroy(poh, uuid, 1, NULL);
                        t2 = MPI_Wtime();
                        if (rc == 0)
                                INFO(VERBOSE_1, "Container Destroy time = %f secs", t2-t1);
                }

                MPI_Bcast(&rc, 1, MPI_INT, 0, MPI_COMM_WORLD);
                if (rc) {
                        if (rank == 0)
                                DCHECK(rc, "Failed to destroy container %s (%d)", o.cont, rc);
                        MPI_Abort(MPI_COMM_WORLD, -1);
                }
        }

        if (rank == 0)
                INFO(VERBOSE_1, "Disconnecting from DAOS POOL\n");

        rc = daos_pool_disconnect(poh, NULL);
        DCHECK(rc, "Failed to disconnect from pool");

        MPI_CHECK(MPI_Barrier(MPI_COMM_WORLD), "barrier error");

        if (rank == 0)
                INFO(VERBOSE_1, "Finalizing DAOS..\n");

        rc = daos_fini();
        DCHECK(rc, "Failed to finalize DAOS");
}

/*
 * Create and open a file through the DFS interface.
 */
static void *
DFS_Create(char *testFileName, IOR_param_t *param)
{
        char *name = NULL, *dir_name = NULL;
        dfs_obj_t *obj = NULL, *parent = NULL;
        mode_t mode;
        int fd_oflag = 0;
        int rc;

        assert(param);

        rc = parse_filename(testFileName, &name, &dir_name);
        DCHECK(rc, "Failed to parse path %s", testFileName);
        assert(dir_name);
        assert(name);

        parent = lookup_insert_dir(dir_name);
        if (parent == NULL)
                GERR("Failed to lookup parent dir");

        mode = S_IFREG | param->mode;
        if (param->filePerProc || rank == 0) {
                fd_oflag |= O_CREAT | O_RDWR | O_EXCL;

                rc = dfs_open(dfs, parent, name, mode, fd_oflag,
                              objectClass, o.chunk_size, NULL, &obj);
                DCHECK(rc, "dfs_open() of %s Failed", name);
        }
        if (!param->filePerProc) {
                MPI_Barrier(MPI_COMM_WORLD);
                if (rank != 0) {
                        fd_oflag |= O_RDWR;
                        rc = dfs_open(dfs, parent, name, mode, fd_oflag,
                                      objectClass, o.chunk_size, NULL, &obj);
                        DCHECK(rc, "dfs_open() of %s Failed", name);
                }
        }

        if (name)
                free(name);
        if (dir_name)
                free(dir_name);

        return ((void *)obj);
}

/*
 * Open a file through the DFS interface.
 */
static void *
DFS_Open(char *testFileName, IOR_param_t *param)
{
        char *name = NULL, *dir_name = NULL;
        dfs_obj_t *obj = NULL, *parent = NULL;
        mode_t mode;
        int rc;
        int fd_oflag = 0;

        fd_oflag |= O_RDWR;
        mode = S_IFREG | param->mode;

        rc = parse_filename(testFileName, &name, &dir_name);
        DCHECK(rc, "Failed to parse path %s", testFileName);

        assert(dir_name);
        assert(name);

        parent = lookup_insert_dir(dir_name);
        if (parent == NULL)
                GERR("Failed to lookup parent dir");

        rc = dfs_open(dfs, parent, name, mode, fd_oflag, objectClass,
                      o.chunk_size, NULL, &obj);
        DCHECK(rc, "dfs_open() of %s Failed", name);

        if (name)
                free(name);
        if (dir_name)
                free(dir_name);

        return ((void *)obj);
}

/*
 * Write or read access to file using the DFS interface.
 */
static IOR_offset_t
DFS_Xfer(int access, void *file, IOR_size_t *buffer, IOR_offset_t length,
         IOR_param_t *param)
{
        int xferRetries = 0;
        long long remaining = (long long)length;
        char *ptr = (char *)buffer;
        daos_size_t ret;
        int rc;
        dfs_obj_t *obj;

        obj = (dfs_obj_t *)file;

        while (remaining > 0) {
                d_iov_t iov;
                d_sg_list_t sgl;

                /** set memory location */
                sgl.sg_nr = 1;
                sgl.sg_nr_out = 0;
                d_iov_set(&iov, (void *)ptr, remaining);
                sgl.sg_iovs = &iov;

                /* write/read file */
                if (access == WRITE) {
                        rc = dfs_write(dfs, obj, sgl, param->offset);
                        if (rc) {
                                fprintf(stderr, "dfs_write() failed (%d)", rc);
                                return -1;
                        }
                        ret = remaining;
                } else {
                        rc = dfs_read(dfs, obj, sgl, param->offset, &ret);
                        if (rc || ret == 0)
                                fprintf(stderr, "dfs_read() failed (%d)", rc);
                }

                if (ret < remaining) {
                        if (param->singleXferAttempt == TRUE)
                                exit(-1);
                        if (xferRetries > MAX_RETRY)
                                ERR("too many retries -- aborting");
                }

                assert(ret >= 0);
                assert(ret <= remaining);
                remaining -= ret;
                ptr += ret;
                xferRetries++;
        }

        return (length);
}

/*
 * Perform fsync().
 */
static void
DFS_Fsync(void *fd, IOR_param_t * param)
{
        dfs_sync(dfs);
        return;
}

/*
 * Close a file through the DFS interface.
 */
static void
DFS_Close(void *fd, IOR_param_t * param)
{
        dfs_release((dfs_obj_t *)fd);
}

/*
 * Delete a file through the DFS interface.
 */
static void
DFS_Delete(char *testFileName, IOR_param_t * param)
{
        char *name = NULL, *dir_name = NULL;
        dfs_obj_t *parent = NULL;
        int rc;

        rc = parse_filename(testFileName, &name, &dir_name);
        DCHECK(rc, "Failed to parse path %s", testFileName);

        assert(dir_name);
        assert(name);

        parent = lookup_insert_dir(dir_name);
        if (parent == NULL)
                GERR("Failed to lookup parent dir");

        rc = dfs_remove(dfs, parent, name, false, NULL);
        DCHECK(rc, "dfs_remove() of %s Failed", name);

        if (name)
                free(name);
        if (dir_name)
                free(dir_name);
}

static char* DFS_GetVersion()
{
        static char ver[1024] = {};

        sprintf(ver, "%s", "DAOS");
        return ver;
}

/*
 * Use DFS stat() to return aggregate file size.
 */
static IOR_offset_t
DFS_GetFileSize(IOR_param_t * test, MPI_Comm comm, char *testFileName)
{
        dfs_obj_t *obj;
        daos_size_t fsize, tmpMin, tmpMax, tmpSum;
        int rc;

        rc = dfs_lookup(dfs, testFileName, O_RDONLY, &obj, NULL, NULL);
        if (rc) {
                fprintf(stderr, "dfs_lookup() of %s Failed (%d)", testFileName, rc);
                return -1;
        }

        rc = dfs_get_size(dfs, obj, &fsize);
        if (rc)
                return -1;

        dfs_release(obj);

        if (test->filePerProc == TRUE) {
                MPI_CHECK(MPI_Allreduce(&fsize, &tmpSum, 1,
                                        MPI_LONG_LONG_INT, MPI_SUM, comm),
                          "cannot total data moved");
                fsize = tmpSum;
        } else {
                MPI_CHECK(MPI_Allreduce(&fsize, &tmpMin, 1,
                                        MPI_LONG_LONG_INT, MPI_MIN, comm),
                          "cannot total data moved");
                MPI_CHECK(MPI_Allreduce(&fsize, &tmpMax, 1,
                                        MPI_LONG_LONG_INT, MPI_MAX, comm),
                          "cannot total data moved");
                if (tmpMin != tmpMax) {
                        if (rank == 0) {
                                WARN("inconsistent file size by different tasks");
                        }
                        /* incorrect, but now consistent across tasks */
                        fsize = tmpMin;
                }
        }

        return (fsize);
}

static int
DFS_Statfs(const char *path, ior_aiori_statfs_t *sfs, IOR_param_t * param)
{
        return 0;
}

static int
DFS_Mkdir(const char *path, mode_t mode, IOR_param_t * param)
{
        dfs_obj_t *parent = NULL;
        char *name = NULL, *dir_name = NULL;
        int rc;

        rc = parse_filename(path, &name, &dir_name);
        DCHECK(rc, "Failed to parse path %s", path);

        assert(dir_name);
        if (!name)
                return 0;

        parent = lookup_insert_dir(dir_name);
        if (parent == NULL)
                GERR("Failed to lookup parent dir");

        rc = dfs_mkdir(dfs, parent, name, mode);
        DCHECK(rc, "dfs_mkdir() of %s Failed", name);

        if (name)
                free(name);
        if (dir_name)
                free(dir_name);
        if (rc)
                return -1;
        return rc;
}

static int
DFS_Rmdir(const char *path, IOR_param_t * param)
{
        dfs_obj_t *parent = NULL;
        char *name = NULL, *dir_name = NULL;
        int rc;

        rc = parse_filename(path, &name, &dir_name);
        DCHECK(rc, "Failed to parse path %s", path);

        assert(dir_name);
        assert(name);

        parent = lookup_insert_dir(dir_name);
        if (parent == NULL)
                GERR("Failed to lookup parent dir");

        rc = dfs_remove(dfs, parent, name, false, NULL);
        DCHECK(rc, "dfs_remove() of %s Failed", name);

        if (name)
                free(name);
        if (dir_name)
                free(dir_name);
        if (rc)
                return -1;
        return rc;
}

static int
DFS_Access(const char *path, int mode, IOR_param_t * param)
{
        dfs_obj_t *parent = NULL;
        char *name = NULL, *dir_name = NULL;
        struct stat stbuf;
        int rc;

        rc = parse_filename(path, &name, &dir_name);
        DCHECK(rc, "Failed to parse path %s", path);

        assert(dir_name);

        parent = lookup_insert_dir(dir_name);
        if (parent == NULL)
                GERR("Failed to lookup parent dir");

        if (name && strcmp(name, ".") == 0) {
                free(name);
                name = NULL;
        }
        rc = dfs_stat(dfs, parent, name, &stbuf);

        if (name)
                free(name);
        if (dir_name)
                free(dir_name);
        if (rc)
                return -1;
        return rc;
}

static int
DFS_Stat(const char *path, struct stat *buf, IOR_param_t * param)
{
        dfs_obj_t *parent = NULL;
        char *name = NULL, *dir_name = NULL;
        int rc;

        rc = parse_filename(path, &name, &dir_name);
        DCHECK(rc, "Failed to parse path %s", path);

        assert(dir_name);
        assert(name);

        parent = lookup_insert_dir(dir_name);
        if (parent == NULL)
                GERR("Failed to lookup parent dir");

        rc = dfs_stat(dfs, parent, name, buf);
        DCHECK(rc, "dfs_stat() of %s Failed (%d)", name, rc);

        if (name)
                free(name);
        if (dir_name)
                free(dir_name);
        if (rc)
                return -1;
        return rc;
}

@@ -143,6 +143,10 @@ static int DUMMY_stat (const char *path, struct stat *buf, IOR_param_t * param){
   return 0;
 }
 
+static int DUMMY_check_params(IOR_param_t * test){
+  return 1;
+}
+
 ior_aiori_t dummy_aiori = {
         .name = "DUMMY",
         .name_legacy = NULL,

@@ -163,4 +167,5 @@ ior_aiori_t dummy_aiori = {
         .finalize = NULL,
         .get_options = DUMMY_options,
         .enable_mdtest = true,
+        .check_params = DUMMY_check_params
 };

@@ -0,0 +1,316 @@
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <errno.h>
#include <gfarm/gfarm.h>

#undef PACKAGE_NAME
#undef PACKAGE_STRING
#undef PACKAGE_TARNAME
#undef PACKAGE_VERSION

#include "ior.h"
#include "aiori.h"

struct gfarm_file {
        GFS_File gf;
};

void
Gfarm_initialize()
{
        gfarm_initialize(NULL, NULL);
}

void
Gfarm_finalize()
{
        gfarm_terminate();
}

void *
Gfarm_create(char *fn, IOR_param_t *param)
{
        GFS_File gf;
        struct gfarm_file *fp;
        gfarm_error_t e;

        if (param->dryRun)
                return (NULL);

        e = gfs_pio_create(fn, GFARM_FILE_RDWR, 0664, &gf);
        if (e != GFARM_ERR_NO_ERROR)
                ERR("gfs_pio_create failed");
        GFARM_MALLOC(fp);
        if (fp == NULL)
                ERR("no memory");
        fp->gf = gf;
        return (fp);
}

void *
Gfarm_open(char *fn, IOR_param_t *param)
{
        GFS_File gf;
        struct gfarm_file *fp;
        gfarm_error_t e;

        if (param->dryRun)
                return (NULL);

        e = gfs_pio_open(fn, GFARM_FILE_RDWR, &gf);
        if (e != GFARM_ERR_NO_ERROR)
                ERR("gfs_pio_open failed");
        GFARM_MALLOC(fp);
        if (fp == NULL)
                ERR("no memory");
        fp->gf = gf;
        return (fp);
}

IOR_offset_t
Gfarm_xfer(int access, void *fd, IOR_size_t *buffer, IOR_offset_t len,
        IOR_param_t *param)
{
        struct gfarm_file *fp = fd;
        IOR_offset_t rem = len;
        gfarm_off_t off;
        gfarm_error_t e;
#define MAX_SZ  (1024 * 1024 * 1024)
        int sz, n;
        char *buf = (char *)buffer;

        if (param->dryRun)
                return (len);

        if (len > MAX_SZ)
                sz = MAX_SZ;
        else
                sz = len;

        e = gfs_pio_seek(fp->gf, param->offset, GFARM_SEEK_SET, &off);
        if (e != GFARM_ERR_NO_ERROR)
                ERR("gfs_pio_seek failed");
        while (rem > 0) {
                if (access == WRITE)
                        e = gfs_pio_write(fp->gf, buf, sz, &n);
                else
                        e = gfs_pio_read(fp->gf, buf, sz, &n);
                if (e != GFARM_ERR_NO_ERROR)
                        ERR("xfer failed");
                if (n == 0)
                        ERR("EOF encountered");
                rem -= n;
                buf += n;
        }
        return (len);
}

void
Gfarm_close(void *fd, IOR_param_t *param)
{
        struct gfarm_file *fp = fd;

        if (param->dryRun)
                return;

        if (gfs_pio_close(fp->gf) != GFARM_ERR_NO_ERROR)
                ERR("gfs_pio_close failed");
        free(fp);
}

void
Gfarm_delete(char *fn, IOR_param_t *param)
{
        gfarm_error_t e;

        if (param->dryRun)
                return;

        e = gfs_unlink(fn);
        if (e != GFARM_ERR_NO_ERROR)
                errno = gfarm_error_to_errno(e);
}

char *
Gfarm_version()
{
        return ((char *)gfarm_version());
}

void
Gfarm_fsync(void *fd, IOR_param_t *param)
{
        struct gfarm_file *fp = fd;

        if (param->dryRun)
                return;

        if (gfs_pio_sync(fp->gf) != GFARM_ERR_NO_ERROR)
                ERR("gfs_pio_sync failed");
}

IOR_offset_t
Gfarm_get_file_size(IOR_param_t *param, MPI_Comm comm, char *fn)
{
        struct gfs_stat st;
        IOR_offset_t size, sum, min, max;

        if (param->dryRun)
                return (0);

        if (gfs_stat(fn, &st) != GFARM_ERR_NO_ERROR)
                ERR("gfs_stat failed");
        size = st.st_size;
        gfs_stat_free(&st);

        if (param->filePerProc == TRUE) {
                MPI_CHECK(MPI_Allreduce(&size, &sum, 1, MPI_LONG_LONG_INT,
                    MPI_SUM, comm), "cannot total data moved");
                size = sum;
        } else {
                MPI_CHECK(MPI_Allreduce(&size, &min, 1, MPI_LONG_LONG_INT,
                    MPI_MIN, comm), "cannot total data moved");
                MPI_CHECK(MPI_Allreduce(&size, &max, 1, MPI_LONG_LONG_INT,
                    MPI_MAX, comm), "cannot total data moved");
                if (min != max) {
                        if (rank == 0)
                                WARN("inconsistent file size by different "
                                    "tasks");
                        /* incorrect, but now consistent across tasks */
                        size = min;
                }
        }
        return (size);
}

int
Gfarm_statfs(const char *fn, ior_aiori_statfs_t *st, IOR_param_t *param)
{
        gfarm_off_t used, avail, files;
        gfarm_error_t e;
        int bsize = 4096;

        if (param->dryRun)
                return (0);

        e = gfs_statfs_by_path(fn, &used, &avail, &files);
        if (e != GFARM_ERR_NO_ERROR) {
                errno = gfarm_error_to_errno(e);
                return (-1);
        }
        st->f_bsize = bsize;
        st->f_blocks = (used + avail) / bsize;
        st->f_bfree = avail / bsize;
        st->f_files = 2 * files;        /* XXX */
        st->f_ffree = files;            /* XXX */
        return (0);
}

int
Gfarm_mkdir(const char *fn, mode_t mode, IOR_param_t *param)
{
        gfarm_error_t e;

        if (param->dryRun)
                return (0);

        e = gfs_mkdir(fn, mode);
        if (e == GFARM_ERR_NO_ERROR)
                return (0);
        errno = gfarm_error_to_errno(e);
        return (-1);
}

int
Gfarm_rmdir(const char *fn, IOR_param_t *param)
{
        gfarm_error_t e;

        if (param->dryRun)
                return (0);

        e = gfs_rmdir(fn);
        if (e == GFARM_ERR_NO_ERROR)
                return (0);
        errno = gfarm_error_to_errno(e);
        return (-1);
}

int
Gfarm_access(const char *fn, int mode, IOR_param_t *param)
{
        struct gfs_stat st;
        gfarm_error_t e;

        if (param->dryRun)
                return (0);

        e = gfs_stat(fn, &st);
        if (e != GFARM_ERR_NO_ERROR) {
                errno = gfarm_error_to_errno(e);
                return (-1);
        }
        gfs_stat_free(&st);
        return (0);
}

/* XXX FIXME */
#define GFS_DEV         ((dev_t)-1)
#define GFS_BLKSIZE     8192
#define STAT_BLKSIZ     512     /* for st_blocks */

int
Gfarm_stat(const char *fn, struct stat *buf, IOR_param_t *param)
{
        struct gfs_stat st;
        gfarm_error_t e;

        if (param->dryRun)
                return (0);

        e = gfs_stat(fn, &st);
        if (e != GFARM_ERR_NO_ERROR) {
                errno = gfarm_error_to_errno(e);
                return (-1);
        }
        buf->st_dev = GFS_DEV;
        buf->st_ino = st.st_ino;
        buf->st_mode = st.st_mode;
        buf->st_nlink = st.st_nlink;
        buf->st_uid = getuid();         /* XXX */
        buf->st_gid = getgid();         /* XXX */
        buf->st_size = st.st_size;
        buf->st_blksize = GFS_BLKSIZE;
        buf->st_blocks = (st.st_size + STAT_BLKSIZ - 1) / STAT_BLKSIZ;
        buf->st_atime = st.st_atimespec.tv_sec;
        buf->st_mtime = st.st_mtimespec.tv_sec;
        buf->st_ctime = st.st_ctimespec.tv_sec;
#if defined(HAVE_STRUCT_STAT_ST_MTIM_TV_NSEC)
        buf->st_atim.tv_nsec = st.st_atimespec.tv_nsec;
        buf->st_mtim.tv_nsec = st.st_mtimespec.tv_nsec;
        buf->st_ctim.tv_nsec = st.st_ctimespec.tv_nsec;
#endif
        gfs_stat_free(&st);
        return (0);
}

ior_aiori_t gfarm_aiori = {
        .name = "Gfarm",
        .name_legacy = NULL,
        .create = Gfarm_create,
        .open = Gfarm_open,
        .xfer = Gfarm_xfer,
        .close = Gfarm_close,
        .delete = Gfarm_delete,
        .get_version = Gfarm_version,
        .fsync = Gfarm_fsync,
        .get_file_size = Gfarm_get_file_size,
        .statfs = Gfarm_statfs,
        .mkdir = Gfarm_mkdir,
        .rmdir = Gfarm_rmdir,
        .access = Gfarm_access,
        .stat = Gfarm_stat,
        .initialize = Gfarm_initialize,
        .finalize = Gfarm_finalize,
        .get_options = NULL,
        .enable_mdtest = true,
};

@@ -74,6 +74,7 @@ int MPIIO_Access(const char *path, int mode, IOR_param_t *param)
     }
     MPI_File fd;
     int mpi_mode = MPI_MODE_UNIQUE_OPEN;
+    MPI_Info mpiHints = MPI_INFO_NULL;
 
     if ((mode & W_OK) && (mode & R_OK))
         mpi_mode |= MPI_MODE_RDWR;

@@ -82,12 +83,15 @@ int MPIIO_Access(const char *path, int mode, IOR_param_t *param)
     else
         mpi_mode |= MPI_MODE_RDONLY;
 
-    int ret = MPI_File_open(MPI_COMM_SELF, path, mpi_mode,
-                            MPI_INFO_NULL, &fd);
+    SetHints(&mpiHints, param->hintsFileName);
+
+    int ret = MPI_File_open(MPI_COMM_SELF, path, mpi_mode, mpiHints, &fd);
 
     if (!ret)
         MPI_File_close(&fd);
 
+    if (mpiHints != MPI_INFO_NULL)
+        MPI_CHECK(MPI_Info_free(&mpiHints), "MPI_Info_free failed");
     return ret;
 }

@@ -178,8 +182,8 @@ static void *MPIIO_Open(char *testFileName, IOR_param_t * param)
         fprintf(stdout, "}\n");
     }
     if(! param->dryRun){
-      MPI_CHECK(MPI_File_open(comm, testFileName, fd_mode, mpiHints, fd),
-                "cannot open file");
+      MPI_CHECKF(MPI_File_open(comm, testFileName, fd_mode, mpiHints, fd),
+                 "cannot open file: %s", testFileName);
     }
 
     /* show hints actually attached to file handle */

@@ -428,8 +432,8 @@ void MPIIO_Delete(char *testFileName, IOR_param_t * param)
 {
     if(param->dryRun)
         return;
-    MPI_CHECK(MPI_File_delete(testFileName, (MPI_Info) MPI_INFO_NULL),
-              "cannot delete file");
+    MPI_CHECKF(MPI_File_delete(testFileName, (MPI_Info) MPI_INFO_NULL),
+               "cannot delete file: %s", testFileName);
 }
 
 /*

@@ -497,6 +501,7 @@ IOR_offset_t MPIIO_GetFileSize(IOR_param_t * test, MPI_Comm testComm,
     IOR_offset_t aggFileSizeFromStat, tmpMin, tmpMax, tmpSum;
     MPI_File fd;
     MPI_Comm comm;
+    MPI_Info mpiHints = MPI_INFO_NULL;
 
     if (test->filePerProc == TRUE) {
         comm = MPI_COMM_SELF;

@@ -504,12 +509,15 @@ IOR_offset_t MPIIO_GetFileSize(IOR_param_t * test, MPI_Comm testComm,
         comm = testComm;
     }
 
+    SetHints(&mpiHints, test->hintsFileName);
     MPI_CHECK(MPI_File_open(comm, testFileName, MPI_MODE_RDONLY,
-                            MPI_INFO_NULL, &fd),
+                            mpiHints, &fd),
               "cannot open file to get file size");
     MPI_CHECK(MPI_File_get_size(fd, (MPI_Offset *) & aggFileSizeFromStat),
               "cannot get file size");
     MPI_CHECK(MPI_File_close(&fd), "cannot close file");
+    if (mpiHints != MPI_INFO_NULL)
+        MPI_CHECK(MPI_Info_free(&mpiHints), "MPI_Info_free failed");
 
     if (test->filePerProc == TRUE) {
         MPI_CHECK(MPI_Allreduce(&aggFileSizeFromStat, &tmpSum, 1,

@@ -216,7 +216,7 @@ static IOR_offset_t NCMPI_Xfer(int access, void *fd, IOR_size_t * buffer,
             param->blockSize / param->transferSize;
 
         /* reshape 1D array to 3D array:
-           [segmentCount*numTasksWorld][numTransfers][transferSize]
+           [segmentCount*numTasks][numTransfers][transferSize]
            Requirement: none of these dimensions should be > 4G,
          */
         NCMPI_CHECK(ncmpi_def_dim

@@ -267,7 +267,7 @@ static IOR_offset_t NCMPI_Xfer(int access, void *fd, IOR_size_t * buffer,
         bufSize[1] = 1;
         bufSize[2] = param->transferSize;
 
-        offset[0] = segmentNum * numTasksWorld + rank;
+        offset[0] = segmentNum * param->numTasks + rank;
         offset[1] = transferNum;
         offset[2] = 0;

@@ -71,6 +71,7 @@
 static IOR_offset_t POSIX_Xfer(int, void *, IOR_size_t *,
                                IOR_offset_t, IOR_param_t *);
 static void POSIX_Fsync(void *, IOR_param_t *);
+static void POSIX_Sync(IOR_param_t * );

 /************************** O P T I O N S *****************************/
 typedef struct{

@@ -122,6 +123,7 @@ ior_aiori_t posix_aiori = {
         .stat = aiori_posix_stat,
         .get_options = POSIX_options,
         .enable_mdtest = true,
+        .sync = POSIX_Sync
 };

 /***************************** F U N C T I O N S ******************************/

@@ -146,7 +148,7 @@ void gpfs_free_all_locks(int fd)
         rc = gpfs_fcntl(fd, &release_all);
         if (verbose >= VERBOSE_0 && rc != 0) {
-                EWARN("gpfs_fcntl release all locks hint failed.");
+                EWARNF("gpfs_fcntl(%d, ...) release all locks hint failed.", fd);
         }
 }
 void gpfs_access_start(int fd, IOR_offset_t length, IOR_param_t *param, int access)

@@ -169,7 +171,7 @@ void gpfs_access_start(int fd, IOR_offset_t length, IOR_param_t *param, int acce
         rc = gpfs_fcntl(fd, &take_locks);
         if (verbose >= VERBOSE_2 && rc != 0) {
-                EWARN("gpfs_fcntl access range hint failed.");
+                EWARNF("gpfs_fcntl(%d, ...) access range hint failed.", fd);
         }
 }

@@ -193,7 +195,7 @@ void gpfs_access_end(int fd, IOR_offset_t length, IOR_param_t *param, int access
         rc = gpfs_fcntl(fd, &free_locks);
         if (verbose >= VERBOSE_2 && rc != 0) {
-                EWARN("gpfs_fcntl free range hint failed.");
+                EWARNF("gpfs_fcntl(%d, ...) free range hint failed.", fd);
         }
 }

@@ -260,14 +262,14 @@ bool beegfs_createFilePath(char* filepath, mode_t mode, int numTargets, int chun
     char* dir = dirname(dirTmp);
     DIR* parentDirS = opendir(dir);
     if (!parentDirS) {
-        ERR("Failed to get directory");
+        ERRF("Failed to get directory: %s", dir);
     }
     else
     {
         int parentDirFd = dirfd(parentDirS);
         if (parentDirFd < 0)
         {
-            ERR("Failed to get directory descriptor");
+            ERRF("Failed to get directory descriptor: %s", dir);
         }
         else
         {

@@ -319,6 +321,7 @@ bool beegfs_createFilePath(char* filepath, mode_t mode, int numTargets, int chun
 void *POSIX_Create(char *testFileName, IOR_param_t * param)
 {
         int fd_oflag = O_BINARY;
+        int mode = 0664;
         int *fd;

         fd = (int *)malloc(sizeof(int));

@@ -346,9 +349,10 @@ void *POSIX_Create(char *testFileName, IOR_param_t * param)
         if (!param->filePerProc && rank != 0) {
                 MPI_CHECK(MPI_Barrier(testComm), "barrier error");
                 fd_oflag |= O_RDWR;
-                *fd = open64(testFileName, fd_oflag, 0664);
+                *fd = open64(testFileName, fd_oflag, mode);
                 if (*fd < 0)
-                        ERR("open64() failed");
+                        ERRF("open64(\"%s\", %d, %#o) failed",
+                             testFileName, fd_oflag, mode);
         } else {
                 struct lov_user_md opts = { 0 };

@@ -363,7 +367,7 @@ void *POSIX_Create(char *testFileName, IOR_param_t * param)
                         fd_oflag |=
                             O_CREAT | O_EXCL | O_RDWR | O_LOV_DELAY_CREATE;
-                        *fd = open64(testFileName, fd_oflag, 0664);
+                        *fd = open64(testFileName, fd_oflag, mode);
                         if (*fd < 0) {
                                 fprintf(stdout, "\nUnable to open '%s': %s\n",
                                         testFileName, strerror(errno));

@@ -392,7 +396,7 @@ void *POSIX_Create(char *testFileName, IOR_param_t * param)
         if (beegfs_isOptionSet(param->beegfs_chunkSize)
             || beegfs_isOptionSet(param->beegfs_numTargets)) {
                 bool result = beegfs_createFilePath(testFileName,
-                                                    0664,
+                                                    mode,
                                                     param->beegfs_numTargets,
                                                     param->beegfs_chunkSize);
                 if (result) {

@@ -403,9 +407,10 @@ void *POSIX_Create(char *testFileName, IOR_param_t * param)
         }
 #endif /* HAVE_BEEGFS_BEEGFS_H */

-        *fd = open64(testFileName, fd_oflag, 0664);
+        *fd = open64(testFileName, fd_oflag, mode);
         if (*fd < 0)
-                ERR("open64() failed");
+                ERRF("open64(\"%s\", %d, %#o) failed",
+                     testFileName, fd_oflag, mode);

 #ifdef HAVE_LUSTRE_LUSTRE_USER_H
         }

@@ -413,7 +418,7 @@ void *POSIX_Create(char *testFileName, IOR_param_t * param)
         if (param->lustre_ignore_locks) {
                 int lustre_ioctl_flags = LL_FILE_IGNORE_LOCK;
                 if (ioctl(*fd, LL_IOC_SETFLAGS, &lustre_ioctl_flags) == -1)
-                        ERR("ioctl(LL_IOC_SETFLAGS) failed");
+                        ERRF("ioctl(%d, LL_IOC_SETFLAGS, ...) failed", *fd);
         }
 #endif /* HAVE_LUSTRE_LUSTRE_USER_H */

@@ -469,7 +474,7 @@ void *POSIX_Open(char *testFileName, IOR_param_t * param)
         *fd = open64(testFileName, fd_oflag);
         if (*fd < 0)
-                ERR("open64 failed");
+                ERRF("open64(\"%s\", %d) failed", testFileName, fd_oflag);

 #ifdef HAVE_LUSTRE_LUSTRE_USER_H
         if (param->lustre_ignore_locks) {

@@ -479,7 +484,7 @@ void *POSIX_Open(char *testFileName, IOR_param_t * param)
                                 "** Disabling lustre range locking **\n");
                 }
                 if (ioctl(*fd, LL_IOC_SETFLAGS, &lustre_ioctl_flags) == -1)
-                        ERR("ioctl(LL_IOC_SETFLAGS) failed");
+                        ERRF("ioctl(%d, LL_IOC_SETFLAGS, ...) failed", *fd);
         }
 #endif /* HAVE_LUSTRE_LUSTRE_USER_H */

@@ -517,7 +522,7 @@ static IOR_offset_t POSIX_Xfer(int access, void *file, IOR_size_t * buffer,
         /* seek to offset */
         if (lseek64(fd, param->offset, SEEK_SET) == -1)
-                ERR("lseek64() failed");
+                ERRF("lseek64(%d, %lld, SEEK_SET) failed", fd, param->offset);

         while (remaining > 0) {
                 /* write/read file */

@@ -530,7 +535,8 @@ static IOR_offset_t POSIX_Xfer(int access, void *file, IOR_size_t * buffer,
                         }
                         rc = write(fd, ptr, remaining);
                         if (rc == -1)
-                                ERR("write() failed");
+                                ERRF("write(%d, %p, %lld) failed",
+                                     fd, (void*)ptr, remaining);
                         if (param->fsyncPerWrite == TRUE)
                                 POSIX_Fsync(&fd, param);
                 } else { /* READ or CHECK */

@@ -542,9 +548,11 @@ static IOR_offset_t POSIX_Xfer(int access, void *file, IOR_size_t * buffer,
                         }
                         rc = read(fd, ptr, remaining);
                         if (rc == 0)
-                                ERR("read() returned EOF prematurely");
+                                ERRF("read(%d, %p, %lld) returned EOF prematurely",
+                                     fd, (void*)ptr, remaining);
                         if (rc == -1)
-                                ERR("read() failed");
+                                ERRF("read(%d, %p, %lld) failed",
+                                     fd, (void*)ptr, remaining);
                 }
                 if (rc < remaining) {
                         fprintf(stdout,

@@ -579,9 +587,19 @@ static IOR_offset_t POSIX_Xfer(int access, void *file, IOR_size_t * buffer,
 static void POSIX_Fsync(void *fd, IOR_param_t * param)
 {
         if (fsync(*(int *)fd) != 0)
-                EWARN("fsync() failed");
+                EWARNF("fsync(%d) failed", *(int *)fd);
 }


+static void POSIX_Sync(IOR_param_t * param)
+{
+        int ret = system("sync");
+        if (ret != 0){
+                FAIL("Error executing the sync command, ensure it exists.");
+        }
+}
+
+
 /*
  * Close a file through the POSIX interface.
  */

@@ -590,7 +608,7 @@ void POSIX_Close(void *fd, IOR_param_t * param)
         if(param->dryRun)
                 return;
         if (close(*(int *)fd) != 0)
-                ERR("close() failed");
+                ERRF("close(%d) failed", *(int *)fd);
         free(fd);
 }

@@ -602,10 +620,8 @@ void POSIX_Delete(char *testFileName, IOR_param_t * param)
         if(param->dryRun)
                 return;
         if (unlink(testFileName) != 0){
-                char errmsg[256];
-                sprintf(errmsg, "[RANK %03d]: unlink() of file \"%s\" failed\n",
-                        rank, testFileName);
-                EWARN(errmsg);
+                EWARNF("[RANK %03d]: unlink() of file \"%s\" failed\n",
+                       rank, testFileName);
         }
 }

@@ -621,7 +637,7 @@ IOR_offset_t POSIX_GetFileSize(IOR_param_t * test, MPI_Comm testComm,
         IOR_offset_t aggFileSizeFromStat, tmpMin, tmpMax, tmpSum;

         if (stat(testFileName, &stat_buf) != 0) {
-                ERR("stat() failed");
+                ERRF("stat(\"%s\", ...) failed", testFileName);
         }
         aggFileSizeFromStat = stat_buf.st_size;

@@ -159,6 +159,8 @@ static void S3_Fsync(void*, IOR_param_t*);
 static IOR_offset_t S3_GetFileSize(IOR_param_t*, MPI_Comm, char*);
 static void S3_init();
 static void S3_finalize();
+static int S3_check_params(IOR_param_t *);


 /************************** D E C L A R A T I O N S ***************************/

@@ -177,7 +179,8 @@ ior_aiori_t s3_aiori = {
         .fsync = S3_Fsync,
         .get_file_size = S3_GetFileSize,
         .initialize = S3_init,
-        .finalize = S3_finalize
+        .finalize = S3_finalize,
+        .check_params = S3_check_params
 };

 // "S3", plus EMC-extensions enabled

@@ -228,6 +231,22 @@ static void S3_finalize(){
         aws_cleanup();
 }

+static int S3_check_params(IOR_param_t * test){
+        /* N:1 and N:N */
+        IOR_offset_t NtoN = test->filePerProc;
+        IOR_offset_t Nto1 = ! NtoN;
+        IOR_offset_t s = test->segmentCount;
+        IOR_offset_t t = test->transferSize;
+        IOR_offset_t b = test->blockSize;
+
+        if (Nto1 && (s != 1) && (b != t)) {
+                ERR("N:1 (strided) requires xfer-size == block-size");
+                return 0;
+        }
+
+        return 1;
+}
+
 /* modelled on similar macros in iordef.h */
 #define CURL_ERR(MSG, CURL_ERRNO, PARAM) \
     do { \

@@ -41,6 +41,10 @@
 ior_aiori_t *available_aiori[] = {
 #ifdef USE_POSIX_AIORI
         &posix_aiori,
 #endif
+#ifdef USE_DAOS_AIORI
+        &daos_aiori,
+        &dfs_aiori,
+#endif
         & dummy_aiori,
 #ifdef USE_HDF5_AIORI

@@ -68,6 +72,9 @@ ior_aiori_t *available_aiori[] = {
 #endif
 #ifdef USE_RADOS_AIORI
         &rados_aiori,
 #endif
+#ifdef USE_GFARM_AIORI
+        &gfarm_aiori,
+#endif
         NULL
 };

@@ -86,6 +86,8 @@ typedef struct ior_aiori {
         void (*finalize)(); /* called once per program after MPI is shutdown */
         option_help * (*get_options)(void ** init_backend_options, void* init_values); /* initializes the backend options as well and returns the pointer to the option help structure */
         bool enable_mdtest;
+        int (*check_params)(IOR_param_t *); /* check that the provided parameters for the given test and the module options are correct; if they aren't, print a message and exit(1) or return 0 */
+        void (*sync)(IOR_param_t * ); /* synchronize every pending operation for this storage */
 } ior_aiori_t;

 enum bench_type {

@@ -94,6 +96,8 @@ enum bench_type {
 };

 extern ior_aiori_t dummy_aiori;
+extern ior_aiori_t daos_aiori;
+extern ior_aiori_t dfs_aiori;
 extern ior_aiori_t hdf5_aiori;
 extern ior_aiori_t hdfs_aiori;
 extern ior_aiori_t ime_aiori;

@@ -105,6 +109,7 @@ extern ior_aiori_t s3_aiori;
 extern ior_aiori_t s3_plus_aiori;
 extern ior_aiori_t s3_emc_aiori;
 extern ior_aiori_t rados_aiori;
+extern ior_aiori_t gfarm_aiori;

 void aiori_initialize(IOR_test_t * tests);
 void aiori_finalize(IOR_test_t * tests);

@@ -20,7 +20,8 @@ void PrintLongSummaryOneTest(IOR_test_t *test);
 void DisplayFreespace(IOR_param_t * test);
 void GetTestFileName(char *, IOR_param_t *);
 void PrintRemoveTiming(double start, double finish, int rep);
-void PrintReducedResult(IOR_test_t *test, int access, double bw, double *diff_subset, double totalTime, int rep);
+void PrintReducedResult(IOR_test_t *test, int access, double bw, double iops, double latency,
+                        double *diff_subset, double totalTime, int rep);
 void PrintTestEnds();
 void PrintTableHeader();
 /* End of ior-output */

@@ -18,8 +18,8 @@ static void PrintNextToken();
 void PrintTableHeader(){
   if (outputFormat == OUTPUT_DEFAULT){
     fprintf(out_resultfile, "\n");
-    fprintf(out_resultfile, "access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter\n");
-    fprintf(out_resultfile, "------ --------- ---------- --------- -------- -------- -------- -------- ----\n");
+    fprintf(out_resultfile, "access bw(MiB/s) IOPS Latency(s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter\n");
+    fprintf(out_resultfile, "------ --------- ---- ---------- ---------- --------- -------- -------- -------- -------- ----\n");
   }
 }

@@ -219,10 +219,13 @@ void PrintTestEnds(){
   PrintEndSection();
 }

-void PrintReducedResult(IOR_test_t *test, int access, double bw, double *diff_subset, double totalTime, int rep){
+void PrintReducedResult(IOR_test_t *test, int access, double bw, double iops, double latency,
+                        double *diff_subset, double totalTime, int rep){
   if (outputFormat == OUTPUT_DEFAULT){
     fprintf(out_resultfile, "%-10s", access == WRITE ? "write" : "read");
     PPDouble(1, bw / MEBIBYTE, " ");
+    PPDouble(1, iops, " ");
+    PPDouble(1, latency, " ");
     PPDouble(1, (double)test->params.blockSize / KIBIBYTE, " ");
     PPDouble(1, (double)test->params.transferSize / KIBIBYTE, " ");
     PPDouble(1, diff_subset[0], " ");

@@ -318,7 +321,8 @@ void ShowTestStart(IOR_param_t *test)
   PrintKeyValInt("TestID", test->id);
   PrintKeyVal("StartTime", CurrentTimeString());
   /* if pvfs2:, then skip */
-  if (Regex(test->testFileName, "^[a-z][a-z].*:") == 0) {
+  if (strcasecmp(test->api, "DFS") &&
+      Regex(test->testFileName, "^[a-z][a-z].*:") == 0) {
     DisplayFreespace(test);
   }

@@ -339,10 +343,10 @@ void ShowTestStart(IOR_param_t *test)
   PrintKeyVal("options", test->options);
   PrintKeyValInt("dryRun", test->dryRun);
-  PrintKeyValInt("nodes", test->nodes);
+  PrintKeyValInt("nodes", test->numNodes);
   PrintKeyValInt("memoryPerTask", (unsigned long) test->memoryPerTask);
   PrintKeyValInt("memoryPerNode", (unsigned long) test->memoryPerNode);
-  PrintKeyValInt("tasksPerNode", tasksPerNode);
+  PrintKeyValInt("tasksPerNode", test->numTasksOnNode0);
   PrintKeyValInt("repetitions", test->repetitions);
   PrintKeyValInt("multiFile", test->multiFile);
   PrintKeyValInt("interTestDelay", test->interTestDelay);

@@ -430,8 +434,9 @@ void ShowSetup(IOR_param_t *params)
     PrintKeyValInt("task offset", params->taskPerNodeOffset);
     PrintKeyValInt("reorder random seed", params->reorderTasksRandomSeed);
   }
+  PrintKeyValInt("nodes", params->numNodes);
   PrintKeyValInt("tasks", params->numTasks);
-  PrintKeyValInt("clients per node", params->tasksPerNode);
+  PrintKeyValInt("clients per node", params->numTasksOnNode0);
   if (params->memoryPerTask != 0){
     PrintKeyVal("memoryPerTask", HumanReadable(params->memoryPerTask, BASE_TWO));
   }

@@ -574,7 +579,7 @@ static void PrintLongSummaryOneOperation(IOR_test_t *test, const int access)
   }
   fprintf(out_resultfile, "%5d ", params->id);
   fprintf(out_resultfile, "%6d ", params->numTasks);
-  fprintf(out_resultfile, "%3d ", params->tasksPerNode);
+  fprintf(out_resultfile, "%3d ", params->numTasksOnNode0);
   fprintf(out_resultfile, "%4d ", params->repetitions);
   fprintf(out_resultfile, "%3d ", params->filePerProc);
   fprintf(out_resultfile, "%5d ", params->reorderTasks);

@@ -598,7 +603,7 @@ static void PrintLongSummaryOneOperation(IOR_test_t *test, const int access)
     PrintKeyValInt("blockSize", params->blockSize);
     PrintKeyValInt("transferSize", params->transferSize);
     PrintKeyValInt("numTasks", params->numTasks);
-    PrintKeyValInt("tasksPerNode", params->tasksPerNode);
+    PrintKeyValInt("tasksPerNode", params->numTasksOnNode0);
     PrintKeyValInt("repetitions", params->repetitions);
     PrintKeyValInt("filePerProc", params->filePerProc);
     PrintKeyValInt("reorderTasks", params->reorderTasks);

@@ -774,7 +779,7 @@ void PrintRemoveTiming(double start, double finish, int rep)
     return;

   if (outputFormat == OUTPUT_DEFAULT){
-    fprintf(out_resultfile, "remove - - - - - - ");
+    fprintf(out_resultfile, "remove - - - - - - - - ");
     PPDouble(1, finish-start, " ");
     fprintf(out_resultfile, "%-4d\n", rep);
   }else if (outputFormat == OUTPUT_JSON){

176 src/ior.c
@@ -65,7 +65,6 @@ IOR_test_t * ior_run(int argc, char **argv, MPI_Comm world_com, FILE * world_out
         out_resultfile = world_out;
         mpi_comm_world = world_com;

-        MPI_CHECK(MPI_Comm_size(mpi_comm_world, &numTasksWorld), "cannot get number of tasks");
         MPI_CHECK(MPI_Comm_rank(mpi_comm_world, &rank), "cannot get rank");

         /* setup tests, and validate parameters */

@@ -113,8 +112,6 @@ int ior_main(int argc, char **argv)
         MPI_CHECK(MPI_Init(&argc, &argv), "cannot initialize MPI");

         mpi_comm_world = MPI_COMM_WORLD;
-        MPI_CHECK(MPI_Comm_size(mpi_comm_world, &numTasksWorld),
-                  "cannot get number of tasks");
         MPI_CHECK(MPI_Comm_rank(mpi_comm_world, &rank), "cannot get rank");

         /* set error-handling */

@@ -133,7 +130,8 @@ int ior_main(int argc, char **argv)
         for (tptr = tests_head; tptr != NULL; tptr = tptr->next) {
                 verbose = tptr->params.verbose;
                 if (rank == 0 && verbose >= VERBOSE_0) {
-                        ShowTestStart(&tptr->params);
+                        backend = tptr->params.backend;
+                        ShowTestStart(&tptr->params);
                 }

                 // This is useful for trapping a running MPI process.  While

@@ -143,6 +141,7 @@ int ior_main(int argc, char **argv)
                         sleep(5);
                         fprintf(out_logfile, "\trank %d: awake.\n", rank);
                 }

                 TestIoSys(tptr);
+                ShowTestEnd(tptr);
         }

@@ -155,10 +154,10 @@ int ior_main(int argc, char **argv)
         /* display finish time */
         PrintTestEnds();

-        MPI_CHECK(MPI_Finalize(), "cannot finalize MPI");
-
         aiori_finalize(tests_head);

+        MPI_CHECK(MPI_Finalize(), "cannot finalize MPI");
+
         DestroyTests(tests_head);

         return totalErrorCount;

@@ -188,8 +187,14 @@ void init_IOR_Param_t(IOR_param_t * p)
         p->writeFile = p->readFile = FALSE;
         p->checkWrite = p->checkRead = FALSE;

-        p->nodes = 1;
-        p->tasksPerNode = 1;
+        /*
+         * These can be overridden from the command-line but otherwise will be
+         * set from MPI.
+         */
+        p->numTasks = -1;
+        p->numNodes = -1;
+        p->numTasksOnNode0 = -1;
+
         p->repetitions = 1;
         p->repCounter = -1;
         p->open = WRITE;

@@ -293,7 +298,8 @@ static void CheckFileSize(IOR_test_t *test, IOR_offset_t dataMoved, int rep,
                             1, MPI_LONG_LONG_INT, MPI_SUM, testComm),
                   "cannot total data moved");

-        if (strcasecmp(params->api, "HDF5") != 0 && strcasecmp(params->api, "NCMPI") != 0) {
+        if (strcasecmp(params->api, "HDF5") != 0 && strcasecmp(params->api, "NCMPI") != 0 &&
+            strcasecmp(params->api, "DAOS") != 0) {
                 if (verbose >= VERBOSE_0 && rank == 0) {
                         if ((params->expectedAggFileSize
                              != point->aggFileSizeFromXfer)

@@ -785,8 +791,7 @@ void GetTestFileName(char *testFileName, IOR_param_t * test)
 static char *PrependDir(IOR_param_t * test, char *rootDir)
 {
         char *dir;
-        char fname[MAX_STR + 1];
-        char *p;
+        char *fname;
         int i;

         dir = (char *)malloc(MAX_STR + 1);

@@ -806,35 +811,27 @@ static char *PrependDir(IOR_param_t * test, char *rootDir)
         }

         /* get file name */
-        strcpy(fname, rootDir);
-        p = fname;
-        while (i > 0) {
-                if (fname[i] == '\0' || fname[i] == '/') {
-                        p = fname + (i + 1);
-                        break;
-                }
-                i--;
-        }
+        fname = rootDir + i + 1;

         /* create directory with rank as subdirectory */
-        sprintf(dir, "%s%d", dir, (rank + rankOffset) % test->numTasks);
+        sprintf(dir + i + 1, "%d", (rank + rankOffset) % test->numTasks);

         /* dir doesn't exist, so create */
         if (backend->access(dir, F_OK, test) != 0) {
                 if (backend->mkdir(dir, S_IRWXU, test) < 0) {
-                        ERR("cannot create directory");
+                        ERRF("cannot create directory: %s", dir);
                 }

         /* check if correct permissions */
         } else if (backend->access(dir, R_OK, test) != 0 ||
                    backend->access(dir, W_OK, test) != 0 ||
                    backend->access(dir, X_OK, test) != 0) {
-                ERR("invalid directory permissions");
+                ERRF("invalid directory permissions: %s", dir);
         }

         /* concatenate dir and file names */
         strcat(dir, "/");
-        strcat(dir, p);
+        strcat(dir, fname);

         return dir;
 }

@@ -848,8 +845,9 @@ ReduceIterResults(IOR_test_t *test, double *timer, const int rep, const int acce
 {
         double reduced[IOR_NB_TIMERS] = { 0 };
         double diff[IOR_NB_TIMERS / 2 + 1];
-        double totalTime;
-        double bw;
+        double totalTime, accessTime;
+        IOR_param_t *params = &test->params;
+        double bw, iops, latency, minlatency;
         int i;
         MPI_Op op;

@@ -863,15 +861,12 @@ ReduceIterResults(IOR_test_t *test, double *timer, const int rep, const int acce
                           op, 0, testComm), "MPI_Reduce()");
         }

-        /* Only rank 0 tallies and prints the results. */
-        if (rank != 0)
-                return;
-
         /* Calculate elapsed times and throughput numbers */
         for (i = 0; i < IOR_NB_TIMERS / 2; i++)
                 diff[i] = reduced[2 * i + 1] - reduced[2 * i];

         totalTime = reduced[5] - reduced[0];
+        accessTime = reduced[3] - reduced[2];

         IOR_point_t *point = (access == WRITE) ? &test->results[rep].write :
                                                  &test->results[rep].read;

@@ -882,7 +877,25 @@ ReduceIterResults(IOR_test_t *test, double *timer, const int rep, const int acce
                 return;

         bw = (double)point->aggFileSizeForBW / totalTime;
-        PrintReducedResult(test, access, bw, diff, totalTime, rep);
+
+        /* For IOPS in this iteration, we divide the total amount of IOs from
+         * all ranks over the entire access time (first start -> last end). */
+        iops = (point->aggFileSizeForBW / params->transferSize) / accessTime;
+
+        /* For latency, we divide the total access time for each task by the
+         * number of I/Os issued from that task, then reduce and display the
+         * minimum (best) latency achieved: the average latency of all ops
+         * from a single task, minimized over all tasks. */
+        latency = (timer[3] - timer[2]) / (params->blockSize / params->transferSize);
+        MPI_CHECK(MPI_Reduce(&latency, &minlatency, 1, MPI_DOUBLE,
+                             MPI_MIN, 0, testComm), "MPI_Reduce()");
+
+        /* Only rank 0 tallies and prints the results. */
+        if (rank != 0)
+                return;
+
+        PrintReducedResult(test, access, bw, iops, minlatency, diff, totalTime, rep);
 }

 /*

@@ -900,6 +913,10 @@ static void RemoveFile(char *testFileName, int filePerProc, IOR_param_t * test)
                         GetTestFileName(testFileName, test);
                 }
                 if (backend->access(testFileName, F_OK, test) == 0) {
+                        if (verbose >= VERBOSE_3) {
+                                fprintf(out_logfile, "task %d removing %s\n", rank,
+                                        testFileName);
+                        }
                         backend->delete(testFileName, test);
                 }
                 if (test->reorderTasksRandom == TRUE) {

@@ -908,6 +925,10 @@ static void RemoveFile(char *testFileName, int filePerProc, IOR_param_t * test)
                 }
         } else {
                 if ((rank == 0) && (backend->access(testFileName, F_OK, test) == 0)) {
+                        if (verbose >= VERBOSE_3) {
+                                fprintf(out_logfile, "task %d removing %s\n", rank,
+                                        testFileName);
+                        }
                         backend->delete(testFileName, test);
                 }
         }

@@ -919,12 +940,17 @@ static void RemoveFile(char *testFileName, int filePerProc, IOR_param_t * test)
  */
 static void InitTests(IOR_test_t *tests, MPI_Comm com)
 {
-        int size;
+        int mpiNumNodes = 0;
+        int mpiNumTasks = 0;
+        int mpiNumTasksOnNode0 = 0;

-        MPI_CHECK(MPI_Comm_size(com, & size), "MPI_Comm_size() error");
-
-        /* count the tasks per node */
-        tasksPerNode = CountTasksPerNode(com);
+        /*
+         * These default values are the same for every test and expensive to
+         * retrieve so just do it once.
+         */
+        mpiNumNodes = GetNumNodes(com);
+        mpiNumTasks = GetNumTasks(com);
+        mpiNumTasksOnNode0 = GetNumTasksOnNode0(com);

         /*
          * Since there is no guarantee that anyone other than

@@ -937,12 +963,28 @@ static void InitTests(IOR_test_t *tests, MPI_Comm com)
         while (tests != NULL) {
                 IOR_param_t *params = & tests->params;
                 params->testComm = com;
-                params->nodes = params->numTasks / tasksPerNode;
-                params->tasksPerNode = tasksPerNode;
-                params->tasksBlockMapping = QueryNodeMapping(com,false);
-                if (params->numTasks == 0) {
-                        params->numTasks = size;
+
+                /* use MPI values if not overridden on the command-line */
+                if (params->numNodes == -1) {
+                        params->numNodes = mpiNumNodes;
                 }
+                if (params->numTasks == -1) {
+                        params->numTasks = mpiNumTasks;
+                } else if (params->numTasks > mpiNumTasks) {
+                        if (rank == 0) {
+                                fprintf(out_logfile,
+                                        "WARNING: More tasks requested (%d) than available (%d),",
+                                        params->numTasks, mpiNumTasks);
+                                fprintf(out_logfile, " running with %d tasks.\n",
+                                        mpiNumTasks);
+                        }
+                        params->numTasks = mpiNumTasks;
+                }
+                if (params->numTasksOnNode0 == -1) {
+                        params->numTasksOnNode0 = mpiNumTasksOnNode0;
+                }
+
+                params->tasksBlockMapping = QueryNodeMapping(com,false);
                 params->expectedAggFileSize =
                         params->blockSize * params->segmentCount * params->numTasks;

@@ -1090,7 +1132,7 @@ static void *HogMemory(IOR_param_t *params)
                 if (verbose >= VERBOSE_3)
                         fprintf(out_logfile, "This node hogging %ld bytes of memory\n",
                                 params->memoryPerNode);
-                size = params->memoryPerNode / params->tasksPerNode;
+                size = params->memoryPerNode / params->numTasksOnNode0;
         } else {
                 return NULL;
         }

@@ -1190,16 +1232,6 @@ static void TestIoSys(IOR_test_t *test)
         IOR_io_buffers ioBuffers;

         /* set up communicator for test */
-        if (params->numTasks > numTasksWorld) {
-                if (rank == 0) {
-                        fprintf(out_logfile,
-                                "WARNING: More tasks requested (%d) than available (%d),",
-                                params->numTasks, numTasksWorld);
-                        fprintf(out_logfile, " running on %d tasks.\n",
-                                numTasksWorld);
-                }
-                params->numTasks = numTasksWorld;
-        }
         MPI_CHECK(MPI_Comm_group(mpi_comm_world, &orig_group),
                   "MPI_Comm_group() error");
         range[0] = 0; /* first rank */

|
|||
"Using reorderTasks '-C' (useful to avoid read cache in client)\n");
|
||||
fflush(out_logfile);
|
||||
}
|
||||
params->tasksPerNode = CountTasksPerNode(testComm);
|
||||
backend = params->backend;
|
||||
/* show test setup */
|
||||
if (rank == 0 && verbose >= VERBOSE_0)
|
||||
|
@ -1363,7 +1394,7 @@ static void TestIoSys(IOR_test_t *test)
|
|||
/* move two nodes away from writing node */
|
||||
int shift = 1; /* assume a by-node (round-robin) mapping of tasks to nodes */
|
||||
if (params->tasksBlockMapping) {
|
||||
shift = params->tasksPerNode; /* switch to by-slot (contiguous block) mapping */
|
||||
shift = params->numTasksOnNode0; /* switch to by-slot (contiguous block) mapping */
|
||||
}
|
||||
rankOffset = (2 * shift) % params->numTasks;
|
||||
}
|
||||
|
@ -1388,7 +1419,7 @@ static void TestIoSys(IOR_test_t *test)
|
|||
if(params->stoneWallingStatusFile){
|
||||
params->stoneWallingWearOutIterations = ReadStoneWallingIterations(params->stoneWallingStatusFile);
|
||||
if(params->stoneWallingWearOutIterations == -1 && rank == 0){
|
||||
fprintf(out_logfile, "WARNING: Could not read back the stonewalling status from the file!");
|
||||
fprintf(out_logfile, "WARNING: Could not read back the stonewalling status from the file!\n");
|
||||
params->stoneWallingWearOutIterations = 0;
|
||||
}
|
||||
}
|
||||
|
@ -1403,7 +1434,7 @@ static void TestIoSys(IOR_test_t *test)
|
|||
/* move one node away from writing node */
|
||||
int shift = 1; /* assume a by-node (round-robin) mapping of tasks to nodes */
|
||||
if (params->tasksBlockMapping) {
|
||||
shift=params->tasksPerNode; /* switch to a by-slot (contiguous block) mapping */
|
||||
shift=params->numTasksOnNode0; /* switch to a by-slot (contiguous block) mapping */
|
||||
}
|
||||
rankOffset = (params->taskPerNodeOffset * shift) % params->numTasks;
|
||||
}
|
||||
|
@ -1414,7 +1445,7 @@ static void TestIoSys(IOR_test_t *test)
|
|||
int nodeoffset;
|
||||
unsigned int iseed0;
|
||||
nodeoffset = params->taskPerNodeOffset;
|
||||
nodeoffset = (nodeoffset < params->nodes) ? nodeoffset : params->nodes - 1;
|
||||
nodeoffset = (nodeoffset < params->numNodes) ? nodeoffset : params->numNodes - 1;
|
||||
if (params->reorderTasksRandomSeed < 0)
|
||||
iseed0 = -1 * params->reorderTasksRandomSeed + rep;
|
||||
else
|
||||
|
@ -1424,7 +1455,7 @@ static void TestIoSys(IOR_test_t *test)
|
|||
rankOffset = rand() % params->numTasks;
|
||||
}
|
||||
while (rankOffset <
|
||||
(nodeoffset * params->tasksPerNode)) {
|
||||
(nodeoffset * params->numTasksOnNode0)) {
|
||||
rankOffset = rand() % params->numTasks;
|
||||
}
|
||||
/* Get more detailed stats if requested by verbose level */
|
||||
|
@ -1454,7 +1485,7 @@ static void TestIoSys(IOR_test_t *test)
|
|||
"barrier error");
|
||||
if (rank == 0 && verbose >= VERBOSE_1) {
|
||||
fprintf(out_logfile,
|
||||
"Commencing read performance test: %s",
|
||||
"Commencing read performance test: %s\n",
|
||||
CurrentTimeString());
|
||||
}
|
||||
timer[2] = GetTimeStamp();
|
||||
|
@@ -1588,6 +1619,7 @@ static void ValidateTests(IOR_param_t * test)
              && (strcasecmp(test->api, "MPIIO") != 0)
              && (strcasecmp(test->api, "MMAP") != 0)
              && (strcasecmp(test->api, "HDFS") != 0)
+             && (strcasecmp(test->api, "Gfarm") != 0)
              && (strcasecmp(test->api, "RADOS") != 0)) && test->fsync)
                 WARN_RESET("fsync() not supported in selected backend",
                            test, &defaults, fsync);

@@ -1667,11 +1699,8 @@ static void ValidateTests(IOR_param_t * test)
 #if (H5_VERS_MAJOR > 0 && H5_VERS_MINOR > 5)
                 ;
 #else
-                char errorString[MAX_STR];
-                sprintf(errorString,
-                        "'no fill' option not available in %s",
-                        test->apiVersion);
-                ERR(errorString);
+                ERRF("'no fill' option not available in %s",
+                     test->apiVersion);
 #endif
 #else
                 WARN("unable to determine HDF5 version for 'no fill' usage");

@@ -1681,15 +1710,12 @@ static void ValidateTests(IOR_param_t * test)
         if (test->useExistingTestFile && test->lustre_set_striping)
                 ERR("Lustre stripe options are incompatible with useExistingTestFile");

-        /* N:1 and N:N */
-        IOR_offset_t NtoN = test->filePerProc;
-        IOR_offset_t Nto1 = ! NtoN;
-        IOR_offset_t s = test->segmentCount;
-        IOR_offset_t t = test->transferSize;
-        IOR_offset_t b = test->blockSize;
-
-        if (Nto1 && (s != 1) && (b != t)) {
-                ERR("N:1 (strided) requires xfer-size == block-size");
+        /* allow the backend to validate the options */
+        if(test->backend->check_params){
+                int check = test->backend->check_params(test);
+                if (check == 0){
+                        ERR("The backend returned that the test parameters are invalid.");
+                }
         }
 }

@@ -1870,14 +1896,16 @@ static IOR_offset_t WriteOrReadSingle(IOR_offset_t pairCnt, IOR_offset_t *offset
                                            *transferCount, test,
                                            WRITECHECK);
         } else if (access == READCHECK) {
-                amtXferred = backend->xfer(access, fd, buffer, transfer, test);
+                memset(checkBuffer, 'a', transfer);
+
+                amtXferred = backend->xfer(access, fd, checkBuffer, transfer, test);
                 if (amtXferred != transfer){
                         ERR("cannot read from file");
                 }
                 if (test->storeFileOffset == TRUE) {
                         FillBuffer(readCheckBuffer, test, test->offset, pretendRank);
                 }
-                *errors += CompareBuffers(readCheckBuffer, buffer, transfer, *transferCount, test, READCHECK);
+                *errors += CompareBuffers(readCheckBuffer, checkBuffer, transfer, *transferCount, test, READCHECK);
         }
         return amtXferred;
 }

@@ -98,8 +98,8 @@ typedef struct
     // intermediate options
     int dryRun; /* do not perform any I/O; just evaluate the inputs and print dummy output */
     int numTasks; /* number of tasks for test */
-    int nodes; /* number of nodes for test */
-    int tasksPerNode; /* number of tasks per node */
+    int numNodes; /* number of nodes for test */
+    int numTasksOnNode0; /* number of tasks on node 0 (usually all the same, but they don't have to be; use with caution) */
     int tasksBlockMapping; /* are the tasks in contiguous blocks across nodes or round-robin */
     int repetitions; /* number of repetitions of test */
     int repCounter; /* rep counter */

src/iordef.h (52 changes)

@@ -151,28 +151,41 @@ typedef long long int IOR_size_t;
         fflush(stdout); \
 } while (0)
 
-/* warning with errno printed */
-#define EWARN(MSG) do { \
+/* warning with format string and errno printed */
+#define EWARNF(FORMAT, ...) do { \
         if (verbose > VERBOSE_2) { \
-                fprintf(stdout, "ior WARNING: %s, errno %d, %s (%s:%d).\n", \
-                        MSG, errno, strerror(errno), __FILE__, __LINE__); \
+                fprintf(stdout, "ior WARNING: " FORMAT ", errno %d, %s (%s:%d).\n", \
+                        __VA_ARGS__, errno, strerror(errno), __FILE__, __LINE__); \
         } else { \
-                fprintf(stdout, "ior WARNING: %s, errno %d, %s \n", \
-                        MSG, errno, strerror(errno)); \
+                fprintf(stdout, "ior WARNING: " FORMAT ", errno %d, %s \n", \
+                        __VA_ARGS__, errno, strerror(errno)); \
         } \
         fflush(stdout); \
 } while (0)
 
-/* display error message and terminate execution */
-#define ERR(MSG) do { \
-        fprintf(stdout, "ior ERROR: %s, errno %d, %s (%s:%d)\n", \
-                MSG, errno, strerror(errno), __FILE__, __LINE__); \
+/* warning with errno printed */
+#define EWARN(MSG) do { \
+        EWARNF("%s", MSG); \
+} while (0)
+
+
+/* display error message with format string and terminate execution */
+#define ERRF(FORMAT, ...) do { \
+        fprintf(stdout, "ior ERROR: " FORMAT ", errno %d, %s (%s:%d)\n", \
+                __VA_ARGS__, errno, strerror(errno), __FILE__, __LINE__); \
         fflush(stdout); \
         MPI_Abort(MPI_COMM_WORLD, -1); \
 } while (0)
 
+
+/* display error message and terminate execution */
+#define ERR(MSG) do { \
+        ERRF("%s", MSG); \
+} while (0)
+
+
 /* display a simple error message (i.e. errno is not set) and terminate execution */
 #define ERR_SIMPLE(MSG) do { \
         fprintf(stdout, "ior ERROR: %s, (%s:%d)\n", \
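The refactor above keeps one variadic macro (`EWARNF`/`ERRF`) that splices its `FORMAT` argument into the string literal and forwards `__VA_ARGS__`, while the old message-only macros become thin `"%s"` wrappers. A buffer-based sketch of that pattern (no MPI; the `WARNF`/`WARN1` names are hypothetical, and like `EWARNF` they require at least one variadic argument):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical analog of EWARNF/EWARN writing into a caller-supplied
 * array instead of stdout: FORMAT is pasted into the literal via string
 * concatenation, extra args pass through __VA_ARGS__. */
#define WARNF(buf, FORMAT, ...) \
        snprintf(buf, sizeof(buf), "ior WARNING: " FORMAT, __VA_ARGS__)

/* Message-only form delegates, exactly like EWARN -> EWARNF("%s", MSG). */
#define WARN1(buf, MSG) WARNF(buf, "%s", MSG)
```

Note `sizeof(buf)` only works because the macro is expanded where `buf` is a real array, which mirrors how these macros rely on expansion context.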
@@ -184,24 +197,35 @@ typedef long long int IOR_size_t;
 
 /******************************************************************************/
 /*
- * MPI_CHECK will display a custom error message as well as an error string
+ * MPI_CHECKF will display a custom format string as well as an error string
  * from the MPI_STATUS and then exit the program
  */
 
-#define MPI_CHECK(MPI_STATUS, MSG) do { \
+#define MPI_CHECKF(MPI_STATUS, FORMAT, ...) do { \
         char resultString[MPI_MAX_ERROR_STRING]; \
         int resultLength; \
         \
         if (MPI_STATUS != MPI_SUCCESS) { \
                 MPI_Error_string(MPI_STATUS, resultString, &resultLength); \
-                fprintf(stdout, "ior ERROR: %s, MPI %s, (%s:%d)\n", \
-                        MSG, resultString, __FILE__, __LINE__); \
+                fprintf(stdout, "ior ERROR: " FORMAT ", MPI %s, (%s:%d)\n", \
+                        __VA_ARGS__, resultString, __FILE__, __LINE__); \
                 fflush(stdout); \
                 MPI_Abort(MPI_COMM_WORLD, -1); \
         } \
 } while(0)
+
+
+/******************************************************************************/
+/*
+ * MPI_CHECK will display a custom error message as well as an error string
+ * from the MPI_STATUS and then exit the program
+ */
+
+#define MPI_CHECK(MPI_STATUS, MSG) do { \
+        MPI_CHECKF(MPI_STATUS, "%s", MSG); \
+} while(0)
 
 /******************************************************************************/
 /*
  * System info for Windows.
@@ -2,12 +2,9 @@
 #include "aiori.h"
 
 int main(int argc, char **argv) {
-    aiori_initialize(NULL);
     MPI_Init(&argc, &argv);
 
     mdtest_run(argc, argv, MPI_COMM_WORLD, stdout);
 
     MPI_Finalize();
-    aiori_finalize(NULL);
 
     return 0;
 }

src/mdtest.c (79 changes)

@@ -79,7 +79,7 @@
 #define FILEMODE S_IRUSR|S_IWUSR|S_IRGRP|S_IWGRP|S_IROTH
 #define DIRMODE S_IRUSR|S_IWUSR|S_IXUSR|S_IRGRP|S_IWGRP|S_IXGRP|S_IROTH|S_IXOTH
 #define RELEASE_VERS META_VERSION
-#define TEST_DIR "#test-dir"
+#define TEST_DIR "test-dir"
 #define ITEM_COUNT 25000
 
 #define LLU "%lu"
@@ -148,6 +148,7 @@ static size_t write_bytes;
 static int stone_wall_timer_seconds;
 static size_t read_bytes;
 static int sync_file;
+static int call_sync;
 static int path_count;
 static int nstride; /* neighbor stride */
 static int make_node = 0;
@@ -263,6 +264,19 @@ static void prep_testdir(int j, int dir_iter){
   pos += sprintf(& testdir[pos], ".%d-%d", j, dir_iter);
 }
 
+static void phase_end(){
+  if (call_sync){
+    if(! backend->sync){
+      FAIL("Error, backend does not provide the sync method, but your requested to use sync.");
+    }
+    backend->sync(& param);
+  }
+
+  if (barriers) {
+    MPI_Barrier(testComm);
+  }
+}
+
 /*
  * This function copies the unique directory name for a given option to
  * the "to" parameter. Some memory must be allocated to the "to" parameter.
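`phase_end()` centralizes the per-phase teardown that the later hunks substitute for the repeated barrier blocks: an optional backend sync (a hard failure if sync was requested but the backend has no sync method), then the barrier. A runnable sketch of the same logic with function-pointer stand-ins for the backend vtable and `MPI_Barrier` (all names here are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for mdtest's backend vtable. */
typedef struct { void (*sync)(void); } backend_t;

static int sync_calls, barrier_calls;
static void fake_sync(void)    { sync_calls++; }
static void fake_barrier(void) { barrier_calls++; } /* MPI_Barrier stand-in */

/* Mirrors phase_end(): returns -1 where mdtest would FAIL(). */
int phase_end_sim(const backend_t *b, int call_sync, int barriers) {
        if (call_sync) {
                if (!b->sync)
                        return -1;  /* sync requested, backend can't provide it */
                b->sync();
        }
        if (barriers)
                fake_barrier();
        return 0;
}
```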
@@ -353,6 +367,7 @@ static void create_file (const char *path, uint64_t itemNum) {
         } else {
                 param.openFlags = IOR_CREAT | IOR_WRONLY;
+                param.filePerProc = !shared_file;
                 param.mode = FILEMODE;
 
                 VERBOSE(3,5,"create_remove_items_helper (non-collective, shared): open..." );
@@ -430,6 +445,7 @@ void collective_helper(const int dirs, const int create, const char* path, uint6
 
                 //create files
                 param.openFlags = IOR_WRONLY | IOR_CREAT;
+                param.mode = FILEMODE;
                 aiori_fh = backend->create (curr_item, &param);
                 if (NULL == aiori_fh) {
                         FAIL("unable to create file %s", curr_item);
@@ -836,9 +852,7 @@ void directory_test(const int iteration, const int ntasks, const char *path, ran
                 }
         }
 
-        if (barriers) {
-                MPI_Barrier(testComm);
-        }
+        phase_end();
         t[1] = GetTimeStamp();
 
         /* stat phase */
@@ -864,10 +878,7 @@ void directory_test(const int iteration, const int ntasks, const char *path, ran
                         }
                 }
         }
 
-        if (barriers) {
-                MPI_Barrier(testComm);
-        }
+        phase_end();
         t[2] = GetTimeStamp();
 
         /* read phase */
@@ -894,9 +905,7 @@ void directory_test(const int iteration, const int ntasks, const char *path, ran
                 }
         }
 
-        if (barriers) {
-                MPI_Barrier(testComm);
-        }
+        phase_end();
         t[3] = GetTimeStamp();
 
         if (remove_only) {
@@ -924,9 +933,7 @@ void directory_test(const int iteration, const int ntasks, const char *path, ran
                 }
         }
 
-        if (barriers) {
-                MPI_Barrier(testComm);
-        }
+        phase_end();
         t[4] = GetTimeStamp();
 
         if (remove_only) {
@@ -1082,9 +1089,7 @@ void file_test(const int iteration, const int ntasks, const char *path, rank_pro
                 }
         }
 
-        if (barriers) {
-                MPI_Barrier(testComm);
-        }
+        phase_end();
         t[1] = GetTimeStamp();
 
         /* stat phase */
@@ -1107,9 +1112,7 @@ void file_test(const int iteration, const int ntasks, const char *path, rank_pro
                 }
         }
 
-        if (barriers) {
-                MPI_Barrier(testComm);
-        }
+        phase_end();
         t[2] = GetTimeStamp();
 
         /* read phase */
@@ -1136,9 +1139,7 @@ void file_test(const int iteration, const int ntasks, const char *path, rank_pro
                 }
         }
 
-        if (barriers) {
-                MPI_Barrier(testComm);
-        }
+        phase_end();
         t[3] = GetTimeStamp();
 
         if (remove_only) {
@@ -1168,9 +1169,7 @@ void file_test(const int iteration, const int ntasks, const char *path, rank_pro
                 }
         }
 
-        if (barriers) {
-                MPI_Barrier(testComm);
-        }
+        phase_end();
         t[4] = GetTimeStamp();
         if (remove_only) {
                 if (unique_dir_per_task) {
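In both `directory_test()` and `file_test()` each phase boundary records a timestamp (`t[0]` before create, `t[1]..t[4]` after create/stat/read/remove), and per-phase durations come from adjacent differences. A tiny sketch of that bookkeeping (`phase_seconds` is a hypothetical helper, not mdtest code):

```c
#include <assert.h>

/* Duration of phase 0..3 given the five phase-boundary timestamps
 * t[0]..t[4] used by directory_test()/file_test(). */
double phase_seconds(const double t[5], int phase) {
        return t[phase + 1] - t[phase];
}
```

Because `phase_end()` runs before each timestamp is taken, the optional sync and the barrier are included in the timed phase, as the `-Y` option help in this commit warns.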
@@ -1549,6 +1548,9 @@ void display_freespace(char *testdirpath)
                 strcpy(dirpath, ".");
         }
 
+        if (param.api && strcasecmp(param.api, "DFS") == 0)
+                return;
+
         VERBOSE(3,5,"Before show_file_system_size, dirpath is '%s'", dirpath );
         show_file_system_size(dirpath);
         VERBOSE(3,5, "After show_file_system_size, dirpath is '%s'\n", dirpath );
@@ -1853,6 +1855,7 @@ void mdtest_init_args(){
         stone_wall_timer_seconds = 0;
         read_bytes = 0;
         sync_file = 0;
+        call_sync = 0;
         path_count = 0;
         nstride = 0;
         make_node = 0;
@@ -1867,7 +1870,8 @@ mdtest_results_t * mdtest_run(int argc, char **argv, MPI_Comm world_com, FILE *
 
         mdtest_init_args();
         int i, j;
-        int nodeCount;
+        int numNodes;
+        int numTasksOnNode0 = 0;
         MPI_Group worldgroup, testgroup;
         struct {
                 int first;
@@ -1925,6 +1929,7 @@ mdtest_results_t * mdtest_run(int argc, char **argv, MPI_Comm world_com, FILE *
         {'x', NULL, "StoneWallingStatusFile; contains the number of iterations of the creation phase, can be used to split phases across runs", OPTION_OPTIONAL_ARGUMENT, 's', & stoneWallingStatusFile},
         {'X', "verify-read", "Verify the data read", OPTION_FLAG, 'd', & verify_read},
         {'y', NULL, "sync file after writing", OPTION_FLAG, 'd', & sync_file},
+        {'Y', NULL, "call the sync command after each phase (included in the timing; note it causes all IO to be flushed from your node)", OPTION_FLAG, 'd', & call_sync},
         {'z', NULL, "depth of hierarchical directory structure", OPTION_OPTIONAL_ARGUMENT, 'd', & depth},
         {'Z', NULL, "print time instead of rate", OPTION_FLAG, 'd', & print_time},
         LAST_OPTION
@@ -1940,11 +1945,14 @@ mdtest_results_t * mdtest_run(int argc, char **argv, MPI_Comm world_com, FILE *
         MPI_Comm_rank(testComm, &rank);
         MPI_Comm_size(testComm, &size);
 
+        if (backend->initialize)
+                backend->initialize();
+
         pid = getpid();
         uid = getuid();
 
-        tasksPerNode = CountTasksPerNode(testComm);
-        nodeCount = size / tasksPerNode;
+        numNodes = GetNumNodes(testComm);
+        numTasksOnNode0 = GetNumTasksOnNode0(testComm);
 
         char cmd_buffer[4096];
         strncpy(cmd_buffer, argv[0], 4096);
@@ -1953,7 +1961,7 @@ mdtest_results_t * mdtest_run(int argc, char **argv, MPI_Comm world_com, FILE *
         }
 
         VERBOSE(0,-1,"-- started at %s --\n", PrintTimestamp());
-        VERBOSE(0,-1,"mdtest-%s was launched with %d total task(s) on %d node(s)", RELEASE_VERS, size, nodeCount);
+        VERBOSE(0,-1,"mdtest-%s was launched with %d total task(s) on %d node(s)", RELEASE_VERS, size, numNodes);
         VERBOSE(0,-1,"Command line used: %s", cmd_buffer);
 
         /* adjust special variables */
@@ -2008,6 +2016,7 @@ mdtest_results_t * mdtest_run(int argc, char **argv, MPI_Comm world_com, FILE *
         VERBOSE(1,-1, "unique_dir_per_task : %s", ( unique_dir_per_task ? "True" : "False" ));
         VERBOSE(1,-1, "write_bytes : "LLU"", write_bytes );
         VERBOSE(1,-1, "sync_file : %s", ( sync_file ? "True" : "False" ));
+        VERBOSE(1,-1, "call_sync : %s", ( call_sync ? "True" : "False" ));
         VERBOSE(1,-1, "depth : %d", depth );
         VERBOSE(1,-1, "make_node : %d", make_node );
@@ -2120,10 +2129,10 @@ mdtest_results_t * mdtest_run(int argc, char **argv, MPI_Comm world_com, FILE *
 
         /* set the shift to mimic IOR and shift by procs per node */
         if (nstride > 0) {
-                if ( nodeCount > 1 && tasksBlockMapping ) {
+                if ( numNodes > 1 && tasksBlockMapping ) {
                         /* the user set the stride presumably to get the consumer tasks on a different node than the producer tasks
                            however, if the mpirun scheduler placed the tasks by-slot (in a contiguous block) then we need to adjust the shift by ppn */
-                        nstride *= tasksPerNode;
+                        nstride *= numTasksOnNode0;
                 }
                 VERBOSE(0,5,"Shifting ranks by %d for each phase.", nstride);
         }
@@ -2148,7 +2157,7 @@ mdtest_results_t * mdtest_run(int argc, char **argv, MPI_Comm world_com, FILE *
 
         /* setup summary table for recording results */
         summary_table = (mdtest_results_t *) malloc(iterations * sizeof(mdtest_results_t));
-        memset(summary_table, 0, sizeof(mdtest_results_t));
+        memset(summary_table, 0, iterations * sizeof(mdtest_results_t));
         for(int i=0; i < iterations; i++){
                 for(int j=0; j < MDTEST_LAST_NUM; j++){
                         summary_table[i].rate[j] = 0.0;
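The memset fix above zeroes the whole table rather than only its first `sizeof(mdtest_results_t)` bytes, so entries 1..iterations-1 no longer start uninitialized. A minimal reproduction of the corrected allocation pattern (the `result_t`/`alloc_results` names are illustrative, not mdtest's):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

typedef struct { double rate[4]; int items; } result_t;

/* Allocate and zero 'n' results. Passing only sizeof(result_t) to
 * memset (the old bug) would leave entries 1..n-1 uninitialized. */
result_t *alloc_results(int n) {
        result_t *tab = malloc(n * sizeof(result_t));
        if (tab)
                memset(tab, 0, n * sizeof(result_t)); /* zero the full array */
        return tab;
}
```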
@@ -2224,5 +2233,9 @@ mdtest_results_t * mdtest_run(int argc, char **argv, MPI_Comm world_com, FILE *
         if (random_seed > 0) {
                 free(rand_array);
         }
 
+        if (backend->finalize)
+                backend->finalize(NULL);
+
         return summary_table;
 }

src/option.c (13 changes)

@@ -89,6 +89,10 @@ static int print_value(option_help * o){
                 pos += printf("=%lld", *(long long*) o->variable);
                 break;
               }
+              case('u'):{
+                pos += printf("=%lu", *(uint64_t*) o->variable);
+                break;
+              }
             }
           }
           if (o->arg == OPTION_FLAG && (*(int*)o->variable) != 0){
@@ -180,6 +184,10 @@ static int print_option_value(option_help * o){
                 pos += printf("=%lld", *(long long*) o->variable);
                 break;
               }
+              case('u'):{
+                pos += printf("=%lu", *(uint64_t*) o->variable);
+                break;
+              }
             }
           }else{
             //printf(" ");
@@ -327,8 +335,13 @@ static void option_parse_token(char ** argv, int * flag_parsed_next, int * requi
                 *(long long*) o->variable = string_to_bytes(arg);
                 break;
               }
+              case('u'):{
+                *(uint64_t*) o->variable = string_to_bytes(arg);
+                break;
+              }
               default:
                 printf("ERROR: Unknown option type %c\n", o->type);
                 break;
             }
           }
         }
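The new `'u'` option type stores an unsigned 64-bit value parsed by `string_to_bytes()`, which accepts size suffixes. A simplified sketch of such a suffix parser (`parse_bytes` is a hypothetical reduced version; the real `string_to_bytes` handles more forms):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Simplified sketch of string_to_bytes(): parse "64k", "2m", "1g"
 * into a byte count using binary (power-of-two) multipliers. */
uint64_t parse_bytes(const char *arg) {
        char *end = NULL;
        uint64_t v = strtoull(arg, &end, 10);
        switch (*end) {
        case 'k': case 'K': return v << 10;
        case 'm': case 'M': return v << 20;
        case 'g': case 'G': return v << 30;
        default:            return v;   /* no suffix: plain bytes */
        }
}
```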
@@ -151,8 +151,12 @@ void DecodeDirective(char *line, IOR_param_t *params, options_all_t * module_opt
                 params->maxTimeDuration = atoi(value);
         } else if (strcasecmp(option, "outlierthreshold") == 0) {
                 params->outlierThreshold = atoi(value);
-        } else if (strcasecmp(option, "nodes") == 0) {
-                params->nodes = atoi(value);
+        } else if (strcasecmp(option, "numnodes") == 0) {
+                params->numNodes = atoi(value);
+        } else if (strcasecmp(option, "numtasks") == 0) {
+                params->numTasks = atoi(value);
+        } else if (strcasecmp(option, "numtasksonnode0") == 0) {
+                params->numTasksOnNode0 = atoi(value);
         } else if (strcasecmp(option, "repetitions") == 0) {
                 params->repetitions = atoi(value);
         } else if (strcasecmp(option, "intertestdelay") == 0) {
@@ -286,8 +290,6 @@ void DecodeDirective(char *line, IOR_param_t *params, options_all_t * module_opt
                 params->beegfs_chunkSize = string_to_bytes(value);
                 if (!ISPOWEROFTWO(params->beegfs_chunkSize) || params->beegfs_chunkSize < (1<<16))
                         ERR("beegfsChunkSize must be a power of two and >64k");
-        } else if (strcasecmp(option, "numtasks") == 0) {
-                params->numTasks = atoi(value);
         } else if (strcasecmp(option, "summaryalways") == 0) {
                 params->summary_every_test = atoi(value);
         } else {
@@ -477,7 +479,7 @@ option_help * createGlobalOptions(IOR_param_t * params){
         {.help="  -O stoneWallingWearOut=1 -- once the stonewalling timout is over, all process finish to access the amount of data", .arg = OPTION_OPTIONAL_ARGUMENT},
         {.help="  -O stoneWallingWearOutIterations=N -- stop after processing this number of iterations, needed for reading data back written with stoneWallingWearOut", .arg = OPTION_OPTIONAL_ARGUMENT},
         {.help="  -O stoneWallingStatusFile=FILE -- this file keeps the number of iterations from stonewalling during write and allows to use them for read", .arg = OPTION_OPTIONAL_ARGUMENT},
-        {'e', NULL, "fsync -- perform sync operation after each block write", OPTION_FLAG, 'd', & params->fsync},
+        {'e', NULL, "fsync -- perform a fsync() operation at the end of each read/write phase", OPTION_FLAG, 'd', & params->fsync},
         {'E', NULL, "useExistingTestFile -- do not remove test file before write access", OPTION_FLAG, 'd', & params->useExistingTestFile},
         {'f', NULL, "scriptFile -- test script name", OPTION_OPTIONAL_ARGUMENT, 's', & params->testscripts},
         {'F', NULL, "filePerProc -- file-per-process", OPTION_FLAG, 'd', & params->filePerProc},
@@ -498,7 +500,7 @@ option_help * createGlobalOptions(IOR_param_t * params){
         {'m', NULL, "multiFile -- use number of reps (-i) for multiple file count", OPTION_FLAG, 'd', & params->multiFile},
         {'M', NULL, "memoryPerNode -- hog memory on the node (e.g.: 2g, 75%)", OPTION_OPTIONAL_ARGUMENT, 's', & params->memoryPerNodeStr},
         {'n', NULL, "noFill -- no fill in HDF5 file creation", OPTION_FLAG, 'd', & params->noFill},
-        {'N', NULL, "numTasks -- number of tasks that should participate in the test", OPTION_OPTIONAL_ARGUMENT, 'd', & params->numTasks},
+        {'N', NULL, "numTasks -- number of tasks that are participating in the test (overrides MPI)", OPTION_OPTIONAL_ARGUMENT, 'd', & params->numTasks},
         {'o', NULL, "testFile -- full name for test", OPTION_OPTIONAL_ARGUMENT, 's', & params->testFileName},
         {'O', NULL, "string of IOR directives (e.g. -O checkRead=1,lustreStripeCount=32)", OPTION_OPTIONAL_ARGUMENT, 'p', & decodeDirectiveWrapper},
         {'p', NULL, "preallocate -- preallocate file size", OPTION_FLAG, 'd', & params->preallocate},
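`DecodeDirective()` dispatches on case-insensitive option names, so the new `numNodes`/`numTasksOnNode0` directives accept any casing. A runnable sketch of that name=value decode chain (the `params_t`/`decode_directive` names are illustrative; `strcasecmp` is POSIX, not ISO C):

```c
#include <assert.h>
#include <stdlib.h>
#include <strings.h>   /* strcasecmp (POSIX) */

/* Illustrative subset of IOR_param_t. */
typedef struct { int numNodes, numTasks, numTasksOnNode0; } params_t;

/* Minimal analog of DecodeDirective() for the new numeric directives.
 * Returns 0 on an unrecognized option name (IOR would report an error). */
int decode_directive(params_t *p, const char *option, const char *value) {
        if (strcasecmp(option, "numnodes") == 0)
                p->numNodes = atoi(value);
        else if (strcasecmp(option, "numtasks") == 0)
                p->numTasks = atoi(value);
        else if (strcasecmp(option, "numtasksonnode0") == 0)
                p->numTasksOnNode0 = atoi(value);
        else
                return 0;
        return 1;
}
```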

src/utilities.c (119 changes)

@@ -53,11 +53,9 @@
 extern int errno;
 extern int numTasks;
 
-/* globals used by other files, also defined "extern" in ior.h */
-int numTasksWorld = 0;
+/* globals used by other files, also defined "extern" in utilities.h */
 int rank = 0;
 int rankOffset = 0;
 int tasksPerNode = 0;           /* tasks per node */
 int verbose = VERBOSE_0;        /* verbose output */
 MPI_Comm testComm;
 MPI_Comm mpi_comm_world;
@@ -77,15 +75,15 @@ void* safeMalloc(uint64_t size){
 }
 
 void FailMessage(int rank, const char *location, char *format, ...) {
-    char msg[4096];
+        char msg[4096];
         va_list args;
         va_start(args, format);
         vsnprintf(msg, 4096, format, args);
         va_end(args);
-    fprintf(out_logfile, "%s: Process %d: FAILED in %s, %s: %s\n",
-        PrintTimestamp(), rank, location, msg, strerror(errno));
-    fflush(out_logfile);
-    MPI_Abort(testComm, 1);
+        fprintf(out_logfile, "%s: Process %d: FAILED in %s, %s: %s\n",
+                PrintTimestamp(), rank, location, msg, strerror(errno));
+        fflush(out_logfile);
+        MPI_Abort(testComm, 1);
 }
 
 size_t NodeMemoryStringToBytes(char *size_str)
@@ -265,35 +263,108 @@ int QueryNodeMapping(MPI_Comm comm, int print_nodemap) {
         return ret;
 }
 
+/*
+ * There is a more direct way to determine the node count in modern MPI
+ * versions so we use that if possible.
+ *
+ * For older versions we use a method which should still provide accurate
+ * results even if the total number of tasks is not evenly divisible by the
+ * tasks on node rank 0.
+ */
+int GetNumNodes(MPI_Comm comm) {
+#if MPI_VERSION >= 3
-int CountTasksPerNode(MPI_Comm comm) {
-        /* modern MPI provides a simple way to get the local process count */
-        MPI_Comm shared_comm;
-        int count;
+        MPI_Comm shared_comm;
+        int shared_rank = 0;
+        int local_result = 0;
+        int numNodes = 0;
+
+        MPI_CHECK(MPI_Comm_split_type(comm, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, &shared_comm),
+                  "MPI_Comm_split_type() error");
+        MPI_CHECK(MPI_Comm_rank(shared_comm, &shared_rank), "MPI_Comm_rank() error");
+        local_result = shared_rank == 0 ? 1 : 0;
+        MPI_CHECK(MPI_Allreduce(&local_result, &numNodes, 1, MPI_INT, MPI_SUM, comm),
+                  "MPI_Allreduce() error");
+        MPI_CHECK(MPI_Comm_free(&shared_comm), "MPI_Comm_free() error");
 
-        MPI_Comm_split_type (comm, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, &shared_comm);
-        MPI_Comm_size (shared_comm, &count);
-        MPI_Comm_free (&shared_comm);
+        return numNodes;
+#else
+        int numTasks = 0;
+        int numTasksOnNode0 = 0;
 
-        return count;
+        numTasks = GetNumTasks(comm);
+        numTasksOnNode0 = GetNumTasksOnNode0(comm);
+
+        return ((numTasks - 1) / numTasksOnNode0) + 1;
+#endif
+}
+
+
+int GetNumTasks(MPI_Comm comm) {
+        int numTasks = 0;
+
+        MPI_CHECK(MPI_Comm_size(comm, &numTasks), "cannot get number of tasks");
+
+        return numTasks;
+}
+
+
+/*
+ * It's very important that this method provide the same result to every
+ * process as it's used for redistributing which jobs read from which files.
+ * It was renamed accordingly.
+ *
+ * If different nodes get different results from this method then jobs get
+ * redistributed unevenly and you no longer have a 1:1 relationship with some
+ * nodes reading multiple files while others read none.
+ *
+ * In the common case the number of tasks on each node (MPI_Comm_size on an
+ * MPI_COMM_TYPE_SHARED communicator) will be the same.  However, there is
+ * nothing which guarantees this.  It's valid to have, for example, 64 jobs
+ * across 4 systems which can run 20 jobs each.  In that scenario you end up
+ * with 3 MPI_COMM_TYPE_SHARED groups of 20, and one group of 4.
+ *
+ * In the (MPI_VERSION < 3) implementation of this method consistency is
+ * ensured by asking specifically about the number of tasks on the node with
+ * rank 0.  In the original implementation for (MPI_VERSION >= 3) this was
+ * broken by using the LOCAL process count which differed depending on which
+ * node you were on.
+ *
+ * This was corrected below by first splitting the comm into groups by node
+ * (MPI_COMM_TYPE_SHARED) and then having only the node with world rank 0 and
+ * shared rank 0 return the MPI_Comm_size of its shared subgroup.  This yields
+ * the original consistent behavior no matter which node asks.
+ *
+ * In the common case where every node has the same number of tasks this
+ * method will return the same value it always has.
+ */
+int GetNumTasksOnNode0(MPI_Comm comm) {
+#if MPI_VERSION >= 3
+        MPI_Comm shared_comm;
+        int shared_rank = 0;
+        int tasks_on_node_rank0 = 0;
+        int local_result = 0;
+
+        MPI_CHECK(MPI_Comm_split_type(comm, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, &shared_comm),
+                  "MPI_Comm_split_type() error");
+        MPI_CHECK(MPI_Comm_rank(shared_comm, &shared_rank), "MPI_Comm_rank() error");
+        if (rank == 0 && shared_rank == 0) {
+                MPI_CHECK(MPI_Comm_size(shared_comm, &local_result), "MPI_Comm_size() error");
+        }
+        MPI_CHECK(MPI_Allreduce(&local_result, &tasks_on_node_rank0, 1, MPI_INT, MPI_SUM, comm),
+                  "MPI_Allreduce() error");
+        MPI_CHECK(MPI_Comm_free(&shared_comm), "MPI_Comm_free() error");
+
+        return tasks_on_node_rank0;
+#else
         /*
          * Count the number of tasks that share a host.
          *
-         * This function employees the gethostname() call, rather than using
+         * This version employs the gethostname() call, rather than using
          * MPI_Get_processor_name().  We are interested in knowing the number
          * of tasks that share a file system client (I/O node, compute node,
          * whatever that may be).  However on machines like BlueGene/Q,
         * MPI_Get_processor_name() uniquely identifies a cpu in a compute node,
          * not the node where the I/O is function shipped to.  gethostname()
          * is assumed to identify the shared filesystem client in more situations.
          *
         * NOTE: This also assumes that the task count on all nodes is equal
         * to the task count on the host running MPI task 0.
          */
-int CountTasksPerNode(MPI_Comm comm) {
         int size;
         MPI_Comm_size(comm, & size);
         /* for debugging and testing */
@@ -336,8 +407,8 @@ int CountTasksPerNode(MPI_Comm comm) {
         MPI_Bcast(&count, 1, MPI_INT, 0, comm);
 
         return(count);
-}
+#endif
 }
 
 /*
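The pre-MPI-3 path of `GetNumNodes()` derives the node count from the total task count and the tasks on node 0 with a ceiling division, which stays correct even when the last node is only partially filled (the comment's example: 64 tasks across nodes that hold 20 each gives 20+20+20+4, i.e. 4 nodes). A sketch of just that arithmetic (`nodes_from_tasks` is a hypothetical helper name):

```c
#include <assert.h>

/* Fallback node-count arithmetic from GetNumNodes(): ceiling division
 * of total tasks by the task count on node 0. */
int nodes_from_tasks(int numTasks, int numTasksOnNode0) {
        return ((numTasks - 1) / numTasksOnNode0) + 1;
}
```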
@@ -18,10 +18,8 @@
 #include <mpi.h>
 #include "ior.h"
 
-extern int numTasksWorld;
 extern int rank;
 extern int rankOffset;
 extern int tasksPerNode;
 extern int verbose;
 extern MPI_Comm testComm;
 extern MPI_Comm mpi_comm_world;
@@ -55,8 +53,10 @@ void SeedRandGen(MPI_Comm);
 void SetHints (MPI_Info *, char *);
 void ShowHints (MPI_Info *);
 char *HumanReadable(IOR_offset_t value, int base);
-int CountTasksPerNode(MPI_Comm comm);
 int QueryNodeMapping(MPI_Comm comm, int print_nodemap);
+int GetNumNodes(MPI_Comm);
+int GetNumTasks(MPI_Comm);
+int GetNumTasksOnNode0(MPI_Comm);
 void DelaySecs(int delay);
 void updateParsedOptions(IOR_param_t * options, options_all_t * global_options);
 size_t NodeMemoryStringToBytes(char *size_str);
@@ -1,93 +1,95 @@
-V-3: main (before display_freespace): testdirpath is "/dev/shm/mdest"
+V-3: testdirpath is "/dev/shm/mdest"
 V-3: Before show_file_system_size, dirpath is "/dev/shm"
 V-3: After show_file_system_size, dirpath is "/dev/shm"
 V-3: main (after display_freespace): testdirpath is "/dev/shm/mdest"
 V-3: main (create hierarchical directory loop-!unque_dir_per_task): Calling create_remove_directory_tree with "/dev/shm/mdest/#test-dir.0-0"
 V-3: main: Using unique_mk_dir, "mdtest_tree.0"
 V-3: main: Copied unique_mk_dir, "mdtest_tree.0", to topdir
 V-3: directory_test: create path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
 V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
 V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0"
 V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1"
 V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2"
 V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3"
 V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4"
 V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5"
 V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6"
 V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7"
 V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8"
 V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9"
 V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10"
 V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11"
 V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12"
 V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13"
 V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14"
 V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15"
 V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16"
 V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17"
 V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18"
 V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19"
 V-3: file_test: create path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
 V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
 V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.0"
 V-3: create_remove_items_helper (non-collective, shared): open...
 V-3: create_remove_items_helper: close...
 V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.1"
 V-3: create_remove_items_helper (non-collective, shared): open...
 V-3: create_remove_items_helper: close...
 V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.2"
 V-3: create_remove_items_helper (non-collective, shared): open...
 V-3: create_remove_items_helper: close...
 V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.3"
 V-3: create_remove_items_helper (non-collective, shared): open...
 V-3: create_remove_items_helper: close...
 V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.4"
 V-3: create_remove_items_helper (non-collective, shared): open...
 V-3: create_remove_items_helper: close...
 V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.5"
 V-3: create_remove_items_helper (non-collective, shared): open...
 V-3: create_remove_items_helper: close...
 V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.6"
 V-3: create_remove_items_helper (non-collective, shared): open...
 V-3: create_remove_items_helper: close...
 V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.7"
 V-3: create_remove_items_helper (non-collective, shared): open...
 V-3: create_remove_items_helper: close...
 V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.8"
 V-3: create_remove_items_helper (non-collective, shared): open...
 V-3: create_remove_items_helper: close...
 V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.9"
 V-3: create_remove_items_helper (non-collective, shared): open...
 V-3: create_remove_items_helper: close...
 V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.10"
 V-3: create_remove_items_helper (non-collective, shared): open...
 V-3: create_remove_items_helper: close...
 V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.11"
 V-3: create_remove_items_helper (non-collective, shared): open...
 V-3: create_remove_items_helper: close...
 V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.12"
 V-3: create_remove_items_helper (non-collective, shared): open...
 V-3: create_remove_items_helper: close...
 V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.13"
 V-3: create_remove_items_helper (non-collective, shared): open...
 V-3: create_remove_items_helper: close...
 V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.14"
 V-3: create_remove_items_helper (non-collective, shared): open...
 V-3: create_remove_items_helper: close...
 V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.15"
 V-3: create_remove_items_helper (non-collective, shared): open...
 V-3: create_remove_items_helper: close...
 V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.16"
 V-3: create_remove_items_helper (non-collective, shared): open...
 V-3: create_remove_items_helper: close...
 V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.17"
|
||||
V-3: create_remove_items_helper (non-collective, shared): open...
|
||||
V-3: create_remove_items_helper: close...
|
||||
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.18"
|
||||
V-3: create_remove_items_helper (non-collective, shared): open...
|
||||
V-3: create_remove_items_helper: close...
|
||||
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.19"
|
||||
V-3: create_remove_items_helper (non-collective, shared): open...
|
||||
V-3: create_remove_items_helper: close...
|
||||
V-3: main: Using testdir, "/dev/shm/mdest/#test-dir.0-0"
|
||||
V-3: Rank 0 Line 2082 main (before display_freespace): testdirpath is '/dev/shm/mdest'
|
||||
V-3: Rank 0 Line 1506 Entering display_freespace on /dev/shm/mdest...
|
||||
V-3: Rank 0 Line 1525 Before show_file_system_size, dirpath is '/dev/shm'
|
||||
V-3: Rank 0 Line 1527 After show_file_system_size, dirpath is '/dev/shm'
|
||||
V-3: Rank 0 Line 2097 main (after display_freespace): testdirpath is '/dev/shm/mdest'
|
||||
V-3: Rank 0 Line 1656 main (create hierarchical directory loop-!unque_dir_per_task): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'
|
||||
V-3: Rank 0 Line 1683 V-3: main: Using unique_mk_dir, 'mdtest_tree.0'
|
||||
V-3: Rank 0 Line 1704 V-3: main: Copied unique_mk_dir, 'mdtest_tree.0', to topdir
|
||||
V-3: Rank 0 Line 801 directory_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
|
||||
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
|
||||
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0'
|
||||
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1'
|
||||
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2'
|
||||
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3'
|
||||
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4'
|
||||
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5'
|
||||
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6'
|
||||
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7'
|
||||
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8'
|
||||
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9'
|
||||
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10'
|
||||
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11'
|
||||
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12'
|
||||
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13'
|
||||
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14'
|
||||
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15'
|
||||
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16'
|
||||
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17'
|
||||
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18'
|
||||
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19'
|
||||
V-3: Rank 0 Line 1716 will file_test on mdtest_tree.0
|
||||
V-3: Rank 0 Line 990 Entering file_test on mdtest_tree.0
|
||||
V-3: Rank 0 Line 1012 file_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
|
||||
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
|
||||
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.0'
|
||||
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
|
||||
V-3: Rank 0 Line 373 create_remove_items_helper: close...
|
||||
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.1'
|
||||
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
|
||||
V-3: Rank 0 Line 373 create_remove_items_helper: close...
|
||||
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.2'
|
||||
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
|
||||
V-3: Rank 0 Line 373 create_remove_items_helper: close...
|
||||
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.3'
|
||||
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
|
||||
V-3: Rank 0 Line 373 create_remove_items_helper: close...
|
||||
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.4'
|
||||
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
|
||||
V-3: Rank 0 Line 373 create_remove_items_helper: close...
|
||||
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.5'
|
||||
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
|
||||
V-3: Rank 0 Line 373 create_remove_items_helper: close...
|
||||
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.6'
|
||||
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
|
||||
V-3: Rank 0 Line 373 create_remove_items_helper: close...
|
||||
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.7'
|
||||
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
|
||||
V-3: Rank 0 Line 373 create_remove_items_helper: close...
|
||||
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.8'
|
||||
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
|
||||
V-3: Rank 0 Line 373 create_remove_items_helper: close...
|
||||
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.9'
|
||||
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
|
||||
V-3: Rank 0 Line 373 create_remove_items_helper: close...
|
||||
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.10'
|
||||
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
|
||||
V-3: Rank 0 Line 373 create_remove_items_helper: close...
|
||||
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.11'
|
||||
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
|
||||
V-3: Rank 0 Line 373 create_remove_items_helper: close...
|
||||
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.12'
|
||||
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
|
||||
V-3: Rank 0 Line 373 create_remove_items_helper: close...
|
||||
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.13'
|
||||
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
|
||||
V-3: Rank 0 Line 373 create_remove_items_helper: close...
|
||||
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.14'
|
||||
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
|
||||
V-3: Rank 0 Line 373 create_remove_items_helper: close...
|
||||
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.15'
|
||||
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
|
||||
V-3: Rank 0 Line 373 create_remove_items_helper: close...
|
||||
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.16'
|
||||
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
|
||||
V-3: Rank 0 Line 373 create_remove_items_helper: close...
|
||||
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.17'
|
||||
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
|
||||
V-3: Rank 0 Line 373 create_remove_items_helper: close...
|
||||
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.18'
|
||||
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
|
||||
V-3: Rank 0 Line 373 create_remove_items_helper: close...
|
||||
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.19'
|
||||
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
|
||||
V-3: Rank 0 Line 373 create_remove_items_helper: close...
|
||||
V-3: Rank 0 Line 1723 main: Using testdir, '/dev/shm/mdest/test-dir.0-0'
@@ -1,50 +1,52 @@
V-3: main (before display_freespace): testdirpath is "/dev/shm/mdest"
V-3: testdirpath is "/dev/shm/mdest"
V-3: Before show_file_system_size, dirpath is "/dev/shm"
V-3: After show_file_system_size, dirpath is "/dev/shm"
V-3: main (after display_freespace): testdirpath is "/dev/shm/mdest"
V-3: main: Using unique_mk_dir, "mdtest_tree.0"
V-3: main: Copied unique_mk_dir, "mdtest_tree.0", to topdir
V-3: directory_test: stat path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19
V-3: file_test: stat path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.0
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.1
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.2
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.3
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.4
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.5
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.6
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.7
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.8
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.9
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.10
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.11
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.12
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.13
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.14
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.15
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.16
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.17
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.18
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/file.mdtest.0.19
V-3: main: Using testdir, "/dev/shm/mdest/#test-dir.0-0"
V-3: Rank 0 Line 2082 main (before display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1506 Entering display_freespace on /dev/shm/mdest...
V-3: Rank 0 Line 1525 Before show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 1527 After show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 2097 main (after display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1683 V-3: main: Using unique_mk_dir, 'mdtest_tree.0'
V-3: Rank 0 Line 1704 V-3: main: Copied unique_mk_dir, 'mdtest_tree.0', to topdir
V-3: Rank 0 Line 833 stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19
V-3: Rank 0 Line 1716 will file_test on mdtest_tree.0
V-3: Rank 0 Line 990 Entering file_test on mdtest_tree.0
V-3: Rank 0 Line 1079 file_test: stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.0
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.1
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.2
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.3
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.4
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.5
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.6
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.7
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.8
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.9
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.10
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.11
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.12
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.13
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.14
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.15
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.16
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.17
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.18
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/file.mdtest.0.19
V-3: Rank 0 Line 1723 main: Using testdir, '/dev/shm/mdest/test-dir.0-0'
@@ -1,77 +1,77 @@
V-3: main (before display_freespace): testdirpath is "/dev/shm/mdest"
|
||||
V-3: testdirpath is "/dev/shm/mdest"
|
||||
V-3: Before show_file_system_size, dirpath is "/dev/shm"
|
||||
V-3: After show_file_system_size, dirpath is "/dev/shm"
|
||||
V-3: main (after display_freespace): testdirpath is "/dev/shm/mdest"
|
||||
V-3: main (create hierarchical directory loop-!unque_dir_per_task): Calling create_remove_directory_tree with "/dev/shm/mdest/#test-dir.0-0"
|
||||
V-3: main: Using unique_mk_dir, "mdtest_tree.0"
|
||||
V-3: main: Copied unique_mk_dir, "mdtest_tree.0", to topdir
|
||||
V-3: directory_test: create path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
|
||||
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
|
||||
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0"
|
||||
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1"
|
||||
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2"
|
||||
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3"
|
||||
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4"
|
||||
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5"
|
||||
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6"
|
||||
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7"
|
||||
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8"
|
||||
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9"
|
||||
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10"
|
||||
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11"
|
||||
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12"
|
||||
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13"
|
||||
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14"
|
||||
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15"
|
||||
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16"
|
||||
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17"
|
||||
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18"
|
||||
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19"
|
||||
V-3: directory_test: stat path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
|
||||
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0
|
||||
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1
|
||||
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2
|
||||
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3
|
||||
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4
|
||||
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5
|
||||
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6
|
||||
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7
|
||||
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8
|
||||
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9
|
||||
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10
|
||||
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11
|
||||
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12
|
||||
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13
|
||||
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14
|
||||
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15
|
||||
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16
|
||||
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17
|
||||
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18
|
||||
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19
|
||||
V-3: directory_test: read path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
|
||||
V-3: directory_test: remove directories path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
|
||||
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
|
||||
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0"
|
||||
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1"
|
||||
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2"
|
||||
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3"
|
||||
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4"
|
||||
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5"
|
||||
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6"
|
||||
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7"
|
||||
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8"
|
||||
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9"
|
||||
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10"
|
||||
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11"
|
||||
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12"
|
||||
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13"
|
||||
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14"
|
||||
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15"
|
||||
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16"
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17"
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18"
V-3: create_remove_items_helper (dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19"
V-3: directory_test: remove unique directories path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: main: Using testdir, "/dev/shm/mdest/#test-dir.0-0"
V-3: main (remove hierarchical directory loop-!unique_dir_per_task): Calling create_remove_directory_tree with "/dev/shm/mdest/#test-dir.0-0"
V-3: Rank 0 Line 2082 main (before display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1506 Entering display_freespace on /dev/shm/mdest...
V-3: Rank 0 Line 1525 Before show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 1527 After show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 2097 main (after display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1656 main (create hierarchical directory loop-!unque_dir_per_task): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 Line 1683 V-3: main: Using unique_mk_dir, 'mdtest_tree.0'
V-3: Rank 0 Line 1704 V-3: main: Copied unique_mk_dir, 'mdtest_tree.0', to topdir
V-3: Rank 0 Line 801 directory_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19'
V-3: Rank 0 Line 833 stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19
V-3: Rank 0 Line 862 directory_test: read path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 890 directory_test: remove directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.0'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.1'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.2'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.3'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.4'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.5'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.6'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.7'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.8'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.9'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.10'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.11'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.12'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.13'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.14'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.15'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.16'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.17'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.18'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0/dir.mdtest.0.19'
V-3: Rank 0 Line 915 directory_test: remove unique directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1723 main: Using testdir, '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 Line 1764 V-3: main (remove hierarchical directory loop-!unique_dir_per_task): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'

@ -1,24 +1,27 @@
V-3: main (before display_freespace): testdirpath is "/dev/shm/mdest"
V-3: testdirpath is "/dev/shm/mdest"
V-3: Before show_file_system_size, dirpath is "/dev/shm"
V-3: After show_file_system_size, dirpath is "/dev/shm"
V-3: main (after display_freespace): testdirpath is "/dev/shm/mdest"
V-3: main (create hierarchical directory loop-!unque_dir_per_task): Calling create_remove_directory_tree with "/dev/shm/mdest/#test-dir.0-0"
V-3: main: Using unique_mk_dir, "mdtest_tree.0"
V-3: main: Copied unique_mk_dir, "mdtest_tree.0", to topdir
V-3: directory_test: create path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: directory_test: stat path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: directory_test: read path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: directory_test: remove directories path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: directory_test: remove unique directories path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: file_test: create path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: file_test: stat path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: file_test: read path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: file_test: rm directories path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: file_test: rm unique directories path is "mdtest_tree.0"
V-3: main: Using testdir, "/dev/shm/mdest/#test-dir.0-0"
V-3: main (remove hierarchical directory loop-!unique_dir_per_task): Calling create_remove_directory_tree with "/dev/shm/mdest/#test-dir.0-0"
V-3: Rank 0 Line 2082 main (before display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1506 Entering display_freespace on /dev/shm/mdest...
V-3: Rank 0 Line 1525 Before show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 1527 After show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 2097 main (after display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1656 main (create hierarchical directory loop-!unque_dir_per_task): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 Line 1683 V-3: main: Using unique_mk_dir, 'mdtest_tree.0'
V-3: Rank 0 Line 1704 V-3: main: Copied unique_mk_dir, 'mdtest_tree.0', to topdir
V-3: Rank 0 Line 801 directory_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 833 stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 862 directory_test: read path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 890 directory_test: remove directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 915 directory_test: remove unique directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1716 will file_test on mdtest_tree.0
V-3: Rank 0 Line 990 Entering file_test on mdtest_tree.0
V-3: Rank 0 Line 1012 file_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1079 file_test: stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1104 file_test: read path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1134 file_test: rm directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1141 gonna create /dev/shm/mdest/test-dir.0-0/mdtest_tree.0
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1158 file_test: rm unique directories path is 'mdtest_tree.0'
V-3: Rank 0 Line 1723 main: Using testdir, '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 Line 1764 V-3: main (remove hierarchical directory loop-!unique_dir_per_task): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'

@ -1,24 +1,27 @@
V-3: main (before display_freespace): testdirpath is "/dev/shm/mdest"
V-3: testdirpath is "/dev/shm/mdest"
V-3: Before show_file_system_size, dirpath is "/dev/shm"
V-3: After show_file_system_size, dirpath is "/dev/shm"
V-3: main (after display_freespace): testdirpath is "/dev/shm/mdest"
V-3: main (create hierarchical directory loop-!unque_dir_per_task): Calling create_remove_directory_tree with "/dev/shm/mdest/#test-dir.0-0"
V-3: main: Using unique_mk_dir, "mdtest_tree.0"
V-3: main: Copied unique_mk_dir, "mdtest_tree.0", to topdir
V-3: directory_test: create path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: directory_test: stat path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: directory_test: read path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: directory_test: remove directories path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: directory_test: remove unique directories path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: file_test: create path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: file_test: stat path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: file_test: read path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: file_test: rm directories path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0"
V-3: file_test: rm unique directories path is "mdtest_tree.0"
V-3: main: Using testdir, "/dev/shm/mdest/#test-dir.0-0"
V-3: main (remove hierarchical directory loop-!unique_dir_per_task): Calling create_remove_directory_tree with "/dev/shm/mdest/#test-dir.0-0"
V-3: Rank 0 Line 2082 main (before display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1506 Entering display_freespace on /dev/shm/mdest...
V-3: Rank 0 Line 1525 Before show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 1527 After show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 2097 main (after display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1656 main (create hierarchical directory loop-!unque_dir_per_task): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 Line 1683 V-3: main: Using unique_mk_dir, 'mdtest_tree.0'
V-3: Rank 0 Line 1704 V-3: main: Copied unique_mk_dir, 'mdtest_tree.0', to topdir
V-3: Rank 0 Line 801 directory_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 833 stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 862 directory_test: read path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 890 directory_test: remove directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 915 directory_test: remove unique directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1716 will file_test on mdtest_tree.0
V-3: Rank 0 Line 990 Entering file_test on mdtest_tree.0
V-3: Rank 0 Line 1012 file_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1079 file_test: stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1104 file_test: read path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1134 file_test: rm directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1141 gonna create /dev/shm/mdest/test-dir.0-0/mdtest_tree.0
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0'
V-3: Rank 0 Line 1158 file_test: rm unique directories path is 'mdtest_tree.0'
V-3: Rank 0 Line 1723 main: Using testdir, '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 Line 1764 V-3: main (remove hierarchical directory loop-!unique_dir_per_task): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'

@ -1,25 +1,29 @@
V-3: main (before display_freespace): testdirpath is "/dev/shm/mdest"
V-3: testdirpath is "/dev/shm/mdest"
V-3: Before show_file_system_size, dirpath is "/dev/shm"
V-3: After show_file_system_size, dirpath is "/dev/shm"
V-3: main (after display_freespace): testdirpath is "/dev/shm/mdest"
V-3: main (create hierarchical directory loop-!collective_creates): Calling create_remove_directory_tree with "/dev/shm/mdest/#test-dir.0-0"
V-3: main: Copied unique_mk_dir, "mdtest_tree.0.0", to topdir
V-3: file_test: create path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: create_remove_items (for loop): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/"
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1//file.mdtest.0.1"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/"
V-3: file_test: stat path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/file.mdtest.0.1
V-3: file_test: rm directories path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: create_remove_items (for loop): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/"
V-3: create_remove_items_helper (non-dirs remove): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1//file.mdtest.0.1"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/"
V-3: file_test: rm unique directories path is "/dev/shm/mdest/#test-dir.0-0/"
V-3: main (remove hierarchical directory loop-!collective): Calling create_remove_directory_tree with "/dev/shm/mdest/#test-dir.0-0"
V-3: Rank 0 Line 2082 main (before display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1506 Entering display_freespace on /dev/shm/mdest...
V-3: Rank 0 Line 1525 Before show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 1527 After show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 2097 main (after display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1647 main (create hierarchical directory loop-!collective_creates): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 Line 1694 i 1 nstride 0
V-3: Rank 0 Line 1704 V-3: main: Copied unique_mk_dir, 'mdtest_tree.0.0', to topdir
V-3: Rank 0 Line 1716 will file_test on mdtest_tree.0.0
V-3: Rank 0 Line 990 Entering file_test on mdtest_tree.0.0
V-3: Rank 0 Line 1012 file_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 483 create_remove_items (for loop): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/'
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1//file.mdtest.0.1'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/'
V-3: Rank 0 Line 1079 file_test: stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/file.mdtest.0.1
V-3: Rank 0 Line 1134 file_test: rm directories path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 1141 gonna create /dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 483 create_remove_items (for loop): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/'
V-3: Rank 0 Line 310 create_remove_items_helper (non-dirs remove): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1//file.mdtest.0.1'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/'
V-3: Rank 0 Line 1158 file_test: rm unique directories path is '/dev/shm/mdest/test-dir.0-0/'
V-3: Rank 0 Line 1754 main (remove hierarchical directory loop-!collective): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'

@ -1,31 +1,34 @@
V-3: main (before display_freespace): testdirpath is "/dev/shm/mdest"
V-3: testdirpath is "/dev/shm/mdest"
V-3: Before show_file_system_size, dirpath is "/dev/shm"
V-3: After show_file_system_size, dirpath is "/dev/shm"
V-3: main (after display_freespace): testdirpath is "/dev/shm/mdest"
V-3: main (create hierarchical directory loop-!collective_creates): Calling create_remove_directory_tree with "/dev/shm/mdest/#test-dir.0-0"
V-3: main: Copied unique_mk_dir, "mdtest_tree.0.0", to topdir
V-3: directory_test: create path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/dir.mdtest.0.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: create_remove_items (for loop): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/"
V-3: create_remove_items_helper (dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1//dir.mdtest.0.1"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/"
V-3: directory_test: stat path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/dir.mdtest.0.0
V-3: mdtest_stat dir : /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/dir.mdtest.0.1
V-3: file_test: create path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/file.mdtest.0.0"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: create_remove_items (for loop): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/"
V-3: create_remove_items_helper (non-dirs create): curr_item is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1//file.mdtest.0.1"
V-3: create_remove_items_helper (non-collective, shared): open...
V-3: create_remove_items_helper: close...
V-3: create_remove_items (start): temp_path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/"
V-3: file_test: stat path is "/dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0"
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/file.mdtest.0.0
V-3: mdtest_stat file: /dev/shm/mdest/#test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/file.mdtest.0.1
V-3: Rank 0 Line 2082 main (before display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1506 Entering display_freespace on /dev/shm/mdest...
V-3: Rank 0 Line 1525 Before show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 1527 After show_file_system_size, dirpath is '/dev/shm'
V-3: Rank 0 Line 2097 main (after display_freespace): testdirpath is '/dev/shm/mdest'
V-3: Rank 0 Line 1647 main (create hierarchical directory loop-!collective_creates): Calling create_remove_directory_tree with '/dev/shm/mdest/test-dir.0-0'
V-3: Rank 0 Line 1694 i 1 nstride 0
V-3: Rank 0 Line 1704 V-3: main: Copied unique_mk_dir, 'mdtest_tree.0.0', to topdir
V-3: Rank 0 Line 801 directory_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/dir.mdtest.0.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 483 create_remove_items (for loop): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/'
V-3: Rank 0 Line 288 create_remove_items_helper (dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1//dir.mdtest.0.1'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/'
V-3: Rank 0 Line 833 stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/dir.mdtest.0.0
V-3: Rank 0 Line 588 mdtest_stat dir: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/dir.mdtest.0.1
V-3: Rank 0 Line 1716 will file_test on mdtest_tree.0.0
V-3: Rank 0 Line 990 Entering file_test on mdtest_tree.0.0
V-3: Rank 0 Line 1012 file_test: create path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/file.mdtest.0.0'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 483 create_remove_items (for loop): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/'
V-3: Rank 0 Line 326 create_remove_items_helper (non-dirs create): curr_item is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1//file.mdtest.0.1'
V-3: Rank 0 Line 348 create_remove_items_helper (non-collective, shared): open...
V-3: Rank 0 Line 373 create_remove_items_helper: close...
V-3: Rank 0 Line 457 create_remove_items (start): temp_path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/'
V-3: Rank 0 Line 1079 file_test: stat path is '/dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0'
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/file.mdtest.0.0
V-3: Rank 0 Line 588 mdtest_stat file: /dev/shm/mdest/test-dir.0-0/mdtest_tree.0.0/mdtest_tree.0.1/file.mdtest.0.1