All ranks locally capture and accumulate ETags for the parts they are
writing. In the N:1 case, these are then collected by rank 0, via
MPI_Gather. This is effectively an organization matching the "segmented"
layout. If data was written segmented, then rank 0 assigns part-numbers
with the offsets each rank would have used when writing a given ETag. If
data was written strided, then ETags must also be accessed in strided
order, to build the XML that will be sent.
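Here's a minimal sketch of the segmented path, not the actual aiori-S3.c
code; ETAG_SIZE and parts_per_rank are invented for illustration, and the
XML is the standard S3 CompleteMultipartUpload body:

    /* Gather fixed-width ETag strings at rank 0, then emit the
     * CompleteMultipartUpload XML.  In the "segmented" layout, rank r's
     * parts are contiguous, so gathered order == part-number order. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define ETAG_SIZE 33    /* 32 hex chars + NUL, padded to fixed width */

    void complete_upload(char (*local_etags)[ETAG_SIZE], int parts_per_rank,
                         int rank, int nranks)
    {
        char *all = NULL;
        if (rank == 0)
            all = malloc((size_t)nranks * parts_per_rank * ETAG_SIZE);

        MPI_Gather(local_etags, parts_per_rank * ETAG_SIZE, MPI_CHAR,
                   all,         parts_per_rank * ETAG_SIZE, MPI_CHAR,
                   0, MPI_COMM_WORLD);

        if (rank == 0) {
            printf("<CompleteMultipartUpload>\n");
            for (int p = 0; p < nranks * parts_per_rank; p++)
                printf("  <Part><PartNumber>%d</PartNumber><ETag>%s</ETag></Part>\n",
                       p + 1, all + (size_t)p * ETAG_SIZE);
            printf("</CompleteMultipartUpload>\n");
            free(all);
        }
    }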
TBD: Once the total volume of ETag data exceeds the size of memory at
rank 0, we'll need to impose a more sophisticated technique. One idea is
to thread the MPI comms separately from the libcurl comms, so that
multiple gathers can be staged incrementally, while sending a single
stream of XML data to the servers. For example, the libcurl
write-function could interact with the MPI side to present the
appearance of a single stream of data.
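A rough sketch of that idea, hedged accordingly: in libcurl terms the
function that supplies upload data is the read callback, and
next_gather() below is a hypothetical helper that stages one more
MPI_Gather's worth of formatted <Part> elements:

    /* Refill the staging buffer from incremental gathers, so rank 0
     * streams the XML without ever holding all ETags in memory. */
    #include <curl/curl.h>
    #include <stddef.h>
    #include <string.h>

    struct stream_state {
        char   buf[65536];  /* staging buffer for formatted XML */
        size_t len, off;    /* valid bytes, bytes already consumed */
    };

    size_t next_gather(struct stream_state *st);  /* hypothetical helper */

    static size_t xml_read_cb(char *dst, size_t size, size_t nitems, void *userp)
    {
        struct stream_state *st = userp;
        if (st->off == st->len) {           /* buffer drained ...       */
            st->len = next_gather(st);      /* ... stage another gather */
            st->off = 0;
            if (st->len == 0)
                return 0;                   /* end of stream            */
        }
        size_t n = size * nitems;
        if (n > st->len - st->off)
            n = st->len - st->off;
        memcpy(dst, st->buf + st->off, n);
        st->off += n;
        return n;
    }
    /* installed via CURLOPT_READFUNCTION/CURLOPT_READDATA, with
     * CURLOPT_UPLOAD enabled */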
These are variants on S3. "S3" uses the pure S3 interface, e.g. using
Multi-Part Upload. The "plus" variant enables the EMC extensions in the
aws4c library. This allows the N:N case to use "append", in the case
where "transfer_size" != "block_size" for IOR. In pure S3, the N:N case
will fail, because the EMC extensions won't be enabled, and appending
(which attempts to use the EMC byte-range tricks) will throw an error.
The S3_EMC alg uses EMC's byte-range tricks to write the different parts
of an N:1 file, and uses append to write the parts of an N:N file.
Preliminary tests suggest these EMC extensions improve bandwidth by
~20%.
I put all three algs in aiori-S3.c, because some code is shared among
them. Not sure whether that will still make sense after the TBD below.
TBD: Recently realized that "pure" S3 shouldn't be trying to use appends
for anything. In the N:N case, it should just use MPU within each file.
Then there's no need for S3_plus; we just have S3, which does MPU for
all writes where transfer_size != block_size, and uses (standard)
byte-range reads for reading. Then S3_EMC uses "append" for N:N writes,
and byte-range writes for N:1 writes. This separates the code for the
two algs a little more, but we might still want them in the same file.
We are testing on an EMC ViPR installation, so we also have some EMC
extensions available. For example, EMC supports a special byte-range
header-option ("Range: bytes=-1-") which allows appending to an object.
This is not needed for N:1 (where every write creates an independent
part), but is vital for N:N (where every write is considered an append,
unless "transfer_size" is the same as "block_size").
We also use a LANL-extended implementation of aws4c 0.5, which provides
some special features and allows greater efficiency. That is included in
this commit as a tarball. Untar it somewhere else and build it to
produce a library, which is then linked with IOR (configure with
--with-S3).
TBD: EMC also supports a simpler alternative to Multi-Part Upload, which
appears to have several advantages. We'll add that next, but wanted to
capture this as-is, before I break it.
Along the way, added a bunch of diagnostic output in the HDFS calls, which
only shows up at verbosity >= 4. I'll probably remove this stuff before
merging with master. Also, there's an #ifdef'ed-out sleep() in place,
which I used to attach gdb to a running MPI task. I'll get rid of that
later, too.
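The gdb hook is just a variant of the usual attach idiom, roughly like
the sketch below (a generic version, not the exact #ifdef'ed code):

    /* Park the task so gdb can attach by pid; from the debugger:
     *   attach <pid>;  set var go = 1;  continue */
    #include <stdio.h>
    #include <unistd.h>

    static volatile int go = 0;

    static void wait_for_gdb(int rank)
    {
        if (rank == 0) {
            fprintf(stderr, "rank 0 pid %d waiting for gdb\n", getpid());
            while (!go)
                sleep(1);
        }
    }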
Also, added another HDFS-related parameter to the IOR_param_t structure:
hdfs_user_name gets the value of the USER environment variable as the
default HDFS user for connections. Does this cause portability problems?
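The defaulting amounts to something like the following sketch;
hdfsConnectAsUser() is the stock libhdfs call, but the namenode/port
arguments here are stand-ins for the corresponding IOR parameters:

    #include <hdfs.h>
    #include <stdlib.h>

    hdfsFS connect_hdfs(const char *namenode, tPort port, const char *user_opt)
    {
        /* fall back to $USER when no explicit hdfs_user_name is given */
        const char *user = user_opt ? user_opt : getenv("USER");
        return hdfsConnectAsUser(namenode, port, user);
    }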
I saw a run in which I caught an MPI task hanging in ctime() here.
Switching to ctime_r() fixes that. The function is only called from
rank == 0, but it hangs anyway.
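The reentrant replacement writes into a caller-supplied buffer instead
of static storage; e.g.:

    #include <stdio.h>
    #include <time.h>

    void print_timestamp(time_t t)
    {
        char buf[26];               /* fixed size required by ctime_r() */
        printf("%s", ctime_r(&t, buf));
    }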
This is not a problem for most backends, but HDFS doesn't support
opening RDWR. If you use only write-oriented or read-oriented flags on
the command-line, CheckRunSettings() will undo the default IOR_RDWR flag
and install the appropriate IOR_WRONLY or IOR_RDONLY open-flags,
respectively.
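Condensed, the adjustment looks like the sketch below; writeFile and
readFile stand in for IOR's run-setting booleans, and the flag constants
come from ior.h:

    /* collapse the default IOR_RDWR to the narrowest flag the run needs */
    int adjust_open_flags(int openFlags, int writeFile, int readFile)
    {
        if (openFlags & IOR_RDWR) {
            if (writeFile && !readFile)
                openFlags = (openFlags & ~IOR_RDWR) | IOR_WRONLY;
            else if (readFile && !writeFile)
                openFlags = (openFlags & ~IOR_RDWR) | IOR_RDONLY;
        }
        return openFlags;
    }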
This provides an HDFS back-end, allowing IOR to exercise a Hadoop
Distributed File System, plus corresponding changes throughout to
integrate the new module into the build. The commit compiles at LANL,
but hasn't been run yet; we're currently waiting for some configuration
on the machines that will eventually provide HDFS. By default, configure
ignores the HDFS module; you have to explicitly add --with-hdfs.
GPFS supports a "gpfs_fcntl" method for hinting various things,
including "I'm about to write this block of data". Let's see if, for
the cost of a few system calls, we can wrangle the GPFS locking system
into allowing concurrent access with less overhead. (new IOR parameter
gpfsHintAccess)
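The hint itself is a header-plus-payload passed to gpfs_fcntl(); a
sketch using the stock gpfs_fcntl.h types, with error handling trimmed:

    /* tell GPFS "I'm about to write [offset, offset+length)" */
    #include <stdio.h>
    #include <gpfs_fcntl.h>

    void hint_access_range(int fd, long long offset, long long length)
    {
        struct {
            gpfsFcntlHeader_t hdr;
            gpfsAccessRange_t acc;
        } hint;

        hint.hdr.totalLength   = sizeof(hint);
        hint.hdr.fcntlVersion  = GPFS_FCNTL_CURRENT_VERSION;
        hint.hdr.fcntlReserved = 0;
        hint.acc.structLen     = sizeof(hint.acc);
        hint.acc.structType    = GPFS_ACCESS_RANGE;
        hint.acc.start         = offset;
        hint.acc.length        = length;
        hint.acc.isWrite       = 1;     /* about to write this range */

        if (gpfs_fcntl(fd, &hint) != 0)
            perror("gpfs_fcntl(GPFS_ACCESS_RANGE)");
    }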
Also, drop all locks on a file immediately after open/creation in the
shared-file case, since we know all processes will touch unique regions
of the file. It may or may not be a good idea to release all file locks
this way: processes will then have to re-acquire any locks they would
otherwise already hold. (new IOR parameter gpfsReleaseToken)
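The release is the companion gpfs_fcntl() call; a free-range of start=0,
length=0 relinquishes byte-range tokens over the whole file (again a
trimmed sketch):

    #include <stdio.h>
    #include <gpfs_fcntl.h>

    void release_all_tokens(int fd)
    {
        struct {
            gpfsFcntlHeader_t hdr;
            gpfsFreeRange_t   fr;
        } rel;

        rel.hdr.totalLength   = sizeof(rel);
        rel.hdr.fcntlVersion  = GPFS_FCNTL_CURRENT_VERSION;
        rel.hdr.fcntlReserved = 0;
        rel.fr.structLen      = sizeof(rel.fr);
        rel.fr.structType     = GPFS_FREE_RANGE;
        rel.fr.start          = 0;
        rel.fr.length         = 0;      /* 0,0 = the entire file */

        if (gpfs_fcntl(fd, &rel) != 0)
            perror("gpfs_fcntl(GPFS_FREE_RANGE)");
    }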
Improve the scalability of CountTasksPerNode() by using
a Broadcast and AllReduce, rather than flooding task zero
with MPI_Send() messages.
Also change the hostname lookup function from MPI_Get_processor_name
to gethostname(), which should work on most systems that I know of,
including BlueGene/Q.
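The shape of the counting is roughly this sketch (it assumes, as IOR
does, that every node runs the same number of tasks):

    /* rank 0 broadcasts its hostname; everyone compares; an Allreduce
     * sums the matches to give tasks-per-node */
    #include <mpi.h>
    #include <string.h>
    #include <unistd.h>

    int count_tasks_per_node(MPI_Comm comm)
    {
        char mine[256], root[256];
        int  match, count;

        gethostname(mine, sizeof(mine));    /* works on BG/Q, too */
        strncpy(root, mine, sizeof(root));
        MPI_Bcast(root, sizeof(root), MPI_CHAR, 0, comm);

        match = (strcmp(mine, root) == 0);
        MPI_Allreduce(&match, &count, 1, MPI_INT, MPI_SUM, comm);
        return count;   /* tasks sharing rank 0's node */
    }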
Removing AC_FUNC_MALLOC from configure.ac, to allow compilation
on BG/P systems. This check can fail in cross-compilation environments,
which unnecessarily forces autoconf to require an rpl_malloc()
replacement for malloc(). We could implement the conditional addition
of rpl_malloc(), but removing AC_FUNC_MALLOC is a quick work-around.
Fixes #4
Allows every task to allocate a specified amount of memory as
a rough simulation of a real application's memory usage.
Every page of the allocated memory is touched to defeat lazy
memory allocation.
Original patch by Michael Kluge <michael.kluge@tu-dresden.de>
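The touch loop amounts to the following sketch (hog_memory is an
illustrative name, not the patch's):

    /* allocate the requested bytes and touch one byte per page so the
     * kernel actually backs the allocation */
    #include <stdlib.h>
    #include <unistd.h>

    void *hog_memory(size_t bytes)
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        char  *buf  = malloc(bytes);
        if (buf != NULL)
            for (size_t i = 0; i < bytes; i += page)
                buf[i] = (char)i;   /* defeat lazy allocation */
        return buf;
    }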