We cannot have a static rpc->inbuf buffer since that will no longer guarantee
that the received buffer is valid for the duration of callbacks.
One of the problems is that if we issue new (sync) RPCs from within a
callback, that will overwrite and invalidate the receive buffer that
we passed to the callback.
Revert "init: do not leak rpc->inbuf"
This reverts commit f7bc4c8bb1.
Revert "socket: we have to use memmove in rpc_read_from_socket"
This reverts commit 24429e95b8.
Revert "socket: make rpc->inbuf static and simplify receive logic"
This reverts commit 7000a0aa04.
There is no guarantee that we get the same fd again when
reestablishing a session. But if the fd changes during a
reconnect we might end up with a client application busy polling
on the old fd.
QEMU registers a read handler on the current fd but does not
notice fd changes, so we keep busy polling on the old fd for good.
Things work (except for the busy polling) until
a drain all is issued. At this point QEMU deadlocks.
Signed-off-by: Peter Lieven <pl@kamp.de>
Only logging to stderr is supported at the moment. By default
there is no output. It is possible to set the log level via the
debug URL parameter.
Example:
nfs-ls nfs://127.0.0.1/export?debug=2
Signed-off-by: Peter Lieven <pl@kamp.de>
The write limit of libnfs has been 1M for a long time.
Restrict rtmax and wrmax to 1M and error out otherwise.
Limit the PDU size when reading from socket to rule out
malicious servers forcing us to allocate a lot of memory.
Signed-off-by: Peter Lieven <pl@kamp.de>
Update the configure script to add some sanity -W arguments.
A good start is probably :
-Wall -Werror -Wshadow -Wno-write-strings -Wstrict-prototypes
-Wpointer-arith -Wcast-align -Wno-strict-aliasing
Fix up the places in the code that trigger warnings.
(one of which is the readahead code, which is perhaps broken?)
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
Add nfs_access2(), like nfs_access() but it returns the individual
statuses of R_OK, W_OK and X_OK rather than a single success or failure
status. This saves the latency and overhead of multiple lookups if an
application tries to determine the status of each of R_OK, W_OK and
X_OK.
Signed-off-by: Ross Lagerwall <rosslagerwall@gmail.com>
All current platforms have a quad type that maps to a 64bit scalar.
But there are platforms where quad maps to a 64bit non-scalar.
Replace quad with int64 in the protocol definitions and the ZDR layer
so that these fields will map to a 64 bit scalar also on those platforms
where quad cannot be used.
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
Add lchmod which is like chmod but operates on the symbolic link itself
if the destination is a symbolic link.
Signed-off-by: Ross Lagerwall <rosslagerwall@gmail.com>
Add lutimes which is like utimes but operates on the symbolic link
itself if the destination is a symbolic link.
Signed-off-by: Ross Lagerwall <rosslagerwall@gmail.com>
Add lstat which is like stat but operates on the symbolic link itself if
the destination is a symbolic link.
Signed-off-by: Ross Lagerwall <rosslagerwall@gmail.com>
Add lchown which is like chown but operates on the symbolic link itself
if the destination is a symbolic link.
Signed-off-by: Ross Lagerwall <rosslagerwall@gmail.com>
Set as much stat information as possible for stat, stat64, fstat and
readdir.
Fill in dev to the given fsid.
Fill in rdev to the given major and minor numbers.
Set the file type bits in the mode from the type returned by the server.
Set the number of blocks used based on the number of bytes used in
blocks of size 512 (which is what stat(2) uses), rounded up.
Fill in the nanosecond timestamps.
Signed-off-by: Ross Lagerwall <rosslagerwall@gmail.com>
Add a new family of functions, nfs_create, like nfs_creat but taking an
additional flags argument that allows extra flags such as O_SYNC, O_EXCL
and O_APPEND to be specified.
This patch adds support for an internal readahead mechanism. The maximum
readahead size can be specified via the readahead URL parameter. This should
significantly speed up small sequential reads.
Signed-off-by: Peter Lieven <pl@kamp.de>
NFS servers can respond to requests in any order, and they do. In our
tests there is also some clustering in the responses; it could be
because, e.g., requests are served synchronously when the data is in the cache.
Introduce a hash table so that we are able to find the pdu quickly in
all cases, assuming random distribution of the responses.
When making many concurrent requests (as is likely in any performance
critical application), the use of SLIST_REMOVE and SLIST_ADD_END is
a severe bottleneck because of their linear search.
I considered using a doubly-linked list, but it was unnecessary to
allocate the additional memory for each list entry.
Instead, continue to use a single-linked list but retain:
* a pointer to the end of the list; and
* a pointer to the previous entry during a linear search.
The former makes append operations O(1) time, and the latter
does the same for removal. We can do this because removal only happens
within the linear search, and there is no random access to the queue.
From http://en.cppreference.com/w/cpp/keyword/export :
Until C++11:
"Used to mark a template definition exported, which allows the same
template to be declared, but not defined, in other translation units."
Since C++11:
"The keyword is unused and reserved."
Signed-off-by: Arne Redlich <arne.redlich@googlemail.com>
O_TRUNC will attempt to truncate the file when it is opened with O_RDWR
or O_WRONLY.
Normal POSIX open(O_RDONLY|O_TRUNC) is undefined.
libnfs nfs_open() only honours the O_TRUNC flag when it is used in combination
with either O_RDWR or O_WRONLY.
When O_TRUNC is used together with O_RDONLY, libnfs silently ignores the
O_TRUNC flag.
libnfs nfs_open(O_RDONLY|O_TRUNC) is thus the same as nfs_open(O_RDONLY).
This is mainly needed when having to track and control the file descriptors
that are used by libnfs, for example when trying to emulate dup2() on top
of libnfs.
Add chdir and getcwd and store cwd in the nfs_context.
Add functions to process the paths specified and normalize them
by performing the transforms :
// -> /
/./ -> /
^/../ -> error
^[^/] -> error
/string/../ -> /
/$ -> \0
/.$ -> \0
^/..$ -> error
/string/..$ -> /
Update the path lookup function to allow specifying relative paths based on
cwd for all functions.
- Use _stat64 on windows so file sizes become 64bit always.
- Increase default marshalling buffer so we can marshall large PDUs.
- RPC layer support for NFSv2
- Win32 updates and fixes
- Add URL parsing functions and URL argument support.
- New utility: nfs-io
- nfs-ls enhancements
- RPC layer support for NSM
- Add example FUSE filesystem.
- Minor fixes.
We also get uid/gid for free when using READDIRPLUS3 (and READDIRPLUS3
emulation), so store these too so that applications that need to look at
the uid/gid can avoid the extra call to nfs_stat().
This allows connecting with a uid or gid other than that
of the current user.
Example:
examples/nfs-ls nfs://10.0.0.1/export?uid=1000&gid=33
Signed-off-by: Peter Lieven <pl@kamp.de>
This allows indirect support for a configurable connect timeout.
Linux uses an exponential backoff for SYN retries, starting
with 1 second.
This means for a value n for TCP_SYNCNT, the connect will
effectively timeout after 2^(n+1)-1 seconds.
Example:
examples/nfs-ls nfs://10.0.0.1/export?tcp-syncnt=1
Signed-off-by: Peter Lieven <pl@kamp.de>
This makes it possible for multiple processes/contexts to use the same
target and (with some synchronization) avoid XID collisions across
processes/contexts.