For NFS sessions, change the autoreconnect options to be:
-1: (default) retry connecting to the server forever, just like
normal NFS clients do.
0: Do not attempt reconnecting at all. Immediately fail and return an
error back to the application on session loss.
>=1: Retry connecting to the server this many times before giving up.
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
If wait_for_nfs_reply() times out, nfs_mount can return with RPCs
still pending. In that case when the RPCs complete (perhaps because
someone calls destroy_context()), the callbacks run, and private_data
is pointing at what was the stack-allocated cb_data structure. Stack
smashing and segfaulty fun ensue.
Fix by disconnecting on errors, ensuring that no RPCs are still
pending when nfs_mount() returns.
Clamp the maximum read/write size we handle to NFS_MAX_XFER_SIZE for
servers that advertise very large PDU support, instead of erroring out.
Fix for issue #188
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
Limit the number of retries when autoreconnecting (to an arbitrary 10)
and return an error to the application if this limit is reached.
Without this, libnfs retries indefinitely and consumes 100% CPU.
See also: https://bugzilla.gnome.org/show_bug.cgi?id=762544
Signed-off-by: Ross Lagerwall <rosslagerwall@gmail.com>
If a callback does anything fishy that modifies the linked list,
libnfs may crash after it returns. So it is safer to do any pending
list removals before invoking the callbacks.
Enables callers to pass any opaque data chunk without having to cast
it explicitly.
A write never modifies the source buffer, and thus the pointer should
be const.
Signed-off-by: Max Kellermann <max.kellermann@gmail.com>
rpc_read_from_socket can currently only read one PDU in each rpc_service invocation even
if there is more data available on the socket. This patch reads all PDUs until the socket
would block.
Signed-off-by: Peter Lieven <pl@kamp.de>
The ioctl version breaks Qemu. I will post an update once we have found
a good solution in libiscsi and then adapt it to libnfs.
This reverts commit 003b3c7ce2.
rpc_read_from_socket can currently only read one PDU in each rpc_service invocation even
if there is more data available on the socket. This patch reads all PDUs available on
the socket when rpc_read_from_socket is entered.
Signed-off-by: Peter Lieven <pl@kamp.de>
We always read 4 bytes to get the PDU size and then realloc
these 4 bytes to the full size of the PDU. Avoid this by
using a static buffer for the record marker.
Signed-off-by: Peter Lieven <pl@kamp.de>
There is no need to allocate and deallocate this structure every time
we update the UDP destination.
For the client side, where we set the destination just once per lifetime
of the context, it might not matter much, but once we add UDP server
support we will need to update the sockaddr for every RPC we receive.
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
Add a flags field to rpc_pdu and add a flag that indicates that the PDU
should be discarded as soon as it has been written to the socket.
We do not put it on the waitpdu queue, nor do we wait for a reply.
This will later be used when sending replies back to a client
while operating in a server context.
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
This allows us to call the NULL procedure for any arbitrary
program/version from rpc_connect_program(), instead of the hardcoded
support for mount v3 and NFS v3.
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
In zdr_array we cannot use the check that num_elements * element_size
will fit inside the remaining bytes in the ZDR buffer.
The reason for this is that IF it is an array of unions, then
element_size will be the size of the largest arm in that union.
If the array consists of union items that are smaller than the largest arm,
then the data will likely pack into less than num_elements * element_size
bytes, and thus it is possible that the array WILL fit in the remaining
bytes.
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
- Reduce the number of memory allocations in the ZDR layer.
- Check both seconds and nanoseconds field when validating dir cache.
- Invalidate the dir cache immediately if we do something that would cause
it to become stale, such as adding/removing objects in the cached directory.
- Add options to enable/disable dir caching.
- Discard readahead cache on [p]write and truncate.
- Android fixes
- Windows fixes
- Support timeouts for sync functions
- Add an internal pagecache
- Add nfs_rewinddir(), nfs_seekdir() and nfs_telldir()
- Fix crash in nfs_truncate()
- Fix segfault that can trigger if we rpc_disconnect() during the mount.
- Add support for binding to a specific interface (Linux only)
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
Add support for binding to a specific interface via the
`nfs_set_interface` and `rpc_set_interface` APIs, or via the
NFS URL `if=<interface>` parameter. This feature requires
`root` permissions.
NOTE: This has only been compiled and tested on Ubuntu 14.04. It's
unlikely that it'll work on other platforms without modification,
particularly around the inclusion of <net/if.h> and the IFNAMSIZ
define in `libnfs-private.h`.
This addresses a bug causing a segfault if we destroy the nfs context
or disconnect the session while the mount_8_cb callbacks for checking
the filehandle for nested mountpoints are still in flight.
Issue found and reported by doktorstick.
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>