For NFS sessions, change the autoreconnect options to:
-1: (default) Retry connecting back to the server forever, just like
normal NFS clients do.
0: Do not attempt reconnecting at all. Immediately fail and return an
error back to the application on session loss.
>=1: Retry connecting to the server this many times before giving up.
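A minimal usage sketch, assuming rpc_set_autoreconnect() is the setter
that takes this retry count (check libnfs-raw.h for the exact signature
in your version):

```c
#include <nfsc/libnfs.h>
#include <nfsc/libnfs-raw.h>

/* Configure reconnect behaviour on the underlying RPC context. */
void configure_reconnect(struct nfs_context *nfs)
{
        struct rpc_context *rpc = nfs_get_rpc_context(nfs);

        rpc_set_autoreconnect(rpc, -1); /* default: retry forever */
        rpc_set_autoreconnect(rpc, 0);  /* fail immediately on session loss */
        rpc_set_autoreconnect(rpc, 5);  /* give up after 5 attempts */
}
```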
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
Limit the number of retries when autoreconnecting (to an arbitrary 10)
and return an error to the application if this limit is reached.
Without this, libnfs retries indefinitely and consumes 100% CPU.
See also: https://bugzilla.gnome.org/show_bug.cgi?id=762544
Signed-off-by: Ross Lagerwall <rosslagerwall@gmail.com>
rpc_read_from_socket can currently only read one PDU per rpc_service
invocation, even if there is more data available on the socket. This
patch makes it keep reading PDUs until the socket would block, as
sketched below.
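A minimal sketch of the drain loop, with illustrative names rather than
the real libnfs internals:

```c
#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Keep pulling data off the socket until it would block, so a single
 * rpc_service() call can consume every PDU that has already arrived. */
static int drain_socket(int fd)
{
        char buf[65536];

        for (;;) {
                ssize_t count = recv(fd, buf, sizeof(buf), MSG_DONTWAIT);

                if (count > 0) {
                        /* feed 'count' bytes into PDU reassembly here */
                        continue;
                }
                if (count == 0) {
                        return -1;      /* peer closed the connection */
                }
                if (errno == EAGAIN || errno == EWOULDBLOCK) {
                        return 0;       /* drained for now */
                }
                return -1;              /* real socket error */
        }
}
```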
Signed-off-by: Peter Lieven <pl@kamp.de>
The ioctl version breaks QEMU. I will post an update once we have
found a good solution in libiscsi and can then adapt it to libnfs.
This reverts commit 003b3c7ce2.
rpc_read_from_socket can currently only read one PDU per rpc_service
invocation, even if there is more data available on the socket. This
patch makes it read all PDUs that are available on the socket when
rpc_read_from_socket is entered.
Signed-off-by: Peter Lieven <pl@kamp.de>
We always read 4 bytes to get the PDU size and then realloc this
4-byte buffer to the full size of the PDU. Avoid this by using a
static buffer for the record marker, as sketched below.
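A sketch of the idea, with simplified names (the real context keeps
more state than this):

```c
#include <arpa/inet.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Keep the 4-byte RPC record marker in a fixed buffer and allocate the
 * PDU buffer exactly once, at its final size, instead of reallocing a
 * 4-byte allocation. */
struct rm_state {
        char     rm[4];         /* static record-marker buffer */
        uint32_t rm_pos;        /* marker bytes received so far */
        char    *pdu;           /* allocated once the size is known */
        uint32_t pdu_size;
};

static int rm_complete(struct rm_state *s)
{
        uint32_t marker;

        if (s->rm_pos < sizeof(s->rm))
                return 0;                               /* marker incomplete */

        memcpy(&marker, s->rm, sizeof(marker));
        s->pdu_size = ntohl(marker) & 0x7fffffff;       /* low 31 bits = length */
        s->pdu = malloc(s->pdu_size);                   /* single allocation */
        return s->pdu ? 1 : -1;
}
```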
Signed-off-by: Peter Lieven <pl@kamp.de>
There is no need to allocate and deallocate this structure every time
we update the UDP destination.
For the client side, where we set the destination just once per
lifetime of the context, it might not matter too much, but once we add
UDP server support we will need to update the sockaddr for every RPC
we receive.
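A sketch of the change, with simplified names: embed the destination in
the context so an update is a plain copy rather than a free()/malloc()
pair:

```c
#include <string.h>
#include <sys/socket.h>

struct udp_ctx_sketch {
        struct sockaddr_storage udp_dest;       /* embedded in the context */
        socklen_t udp_dest_len;
};

/* Updating the destination no longer allocates anything. */
static void udp_set_dest(struct udp_ctx_sketch *ctx,
                         const struct sockaddr *sa, socklen_t salen)
{
        memcpy(&ctx->udp_dest, sa, salen);
        ctx->udp_dest_len = salen;
}
```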
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
Add a flags field to rpc_pdu and add a flag that indicates that the PDU
should be discarded as soon as it has been written to the socket.
We do not put it on the waitpdu queue nor do we wait for a reply.
This will later be used when sending replies back to a client while
operating in a server context, as sketched below.
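A sketch with hypothetical names, not the actual libnfs structures:

```c
#include <stdint.h>
#include <stdlib.h>

#define PDU_DISCARD_AFTER_SENDING 0x00000001u   /* assumed flag name */

struct pdu_sketch {
        uint32_t flags;
        /* ... encoded data, callback, private data ... */
};

/* Called once the PDU has been fully written to the socket. */
static void pdu_written(struct pdu_sketch *pdu)
{
        if (pdu->flags & PDU_DISCARD_AFTER_SENDING) {
                free(pdu);      /* server reply: nothing to wait for */
                return;
        }
        /* otherwise: move the PDU onto the waitpdu queue and wait for
         * the matching reply from the server */
}
```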
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
Add support for binding to a specific local interface via the new
`nfs_set_interface` and `rpc_set_interface` APIs, or via the
NFS URL `if=<interface>` parameter. This feature requires
`root` permissions.
NOTE: This has only been compiled and tested on Ubuntu 14.04. It's
unlikely that it'll work on other platforms without modification,
particularly around the inclusion of <net/if.h> and the IFNAMSIZ
define in `libnfs-private.h`.
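The root requirement and the <net/if.h>/IFNAMSIZ dependency suggest
SO_BINDTODEVICE as the underlying mechanism; a sketch under that
assumption (the helper below is hypothetical, not the actual
implementation behind `rpc_set_interface`):

```c
#include <net/if.h>
#include <string.h>
#include <sys/socket.h>

/* Bind the socket to a named interface. SO_BINDTODEVICE is
 * Linux-specific and needs elevated privileges, which matches the
 * `root` requirement above. */
static int bind_to_interface(int fd, const char *ifname)
{
        char dev[IFNAMSIZ];

        strncpy(dev, ifname, sizeof(dev) - 1);
        dev[sizeof(dev) - 1] = '\0';
        return setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE, dev, sizeof(dev));
}
```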
This addresses a bug causing a segfault if we destroy the NFS context
or disconnect the session while the mount_8_cb callbacks that check
the filehandle for nested mountpoints are still in flight.
Issue found and reported by doktorstick.
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
In commit b319b97 a check for count == 0 was introduced, but it was
accidentally reverted in commit f681a2c for the pdu->inpos < 4 case.
This patch fixes the issue, which resulted in deadlocks, and removes
the somewhat redundant receive code.
Signed-off-by: Peter Lieven <pl@kamp.de>
The POLLERR and POLLHUP handling in rpc_service() could not deal with
session failures or drive the auto-reconnect logic.
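A sketch with hypothetical names of the intended routing: error and
hangup events feed the same path that drives auto-reconnect:

```c
#include <poll.h>

struct rpc_context;                             /* opaque for this sketch */
int rpc_session_lost(struct rpc_context *rpc);  /* hypothetical helper:
                                                   reconnect or fail */

static int service_revents(struct rpc_context *rpc, short revents)
{
        if (revents & (POLLERR | POLLHUP))
                return rpc_session_lost(rpc);
        /* ... POLLIN: read PDUs; POLLOUT: flush the output queue ... */
        return 0;
}
```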
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
It makes no sense for socket.c to keep invoking this callback over and
over. Change it to be one-shot instead.
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
The Linux kernel does not check the UDP checksum until the application
tries to read the data from the socket.
This means that the socket might be readable, but when we try to read
the data, or inspect how much data is available, the packets may be
discarded by the kernel.
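A sketch of the consequence for the read path: an EAGAIN after poll()
reported the socket readable must be treated as "nothing to read", not
as an error:

```c
#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/* With UDP, the kernel verifies the checksum only at read time and
 * silently drops bad datagrams, so recv() can fail with EAGAIN even
 * though poll() just flagged the socket readable. */
static ssize_t read_udp(int fd, char *buf, size_t len)
{
        ssize_t count = recv(fd, buf, len, MSG_DONTWAIT);

        if (count < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
                return 0;       /* datagram was discarded; try again later */
        return count;           /* data, or a real error (-1) */
}
```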
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
If we are trying to read the RM (or part of it), we cannot assume that
we have the full RM just because recv() returned without error.
We must check that the RM is complete before we proceed to read the
actual PDU data, as sketched below.
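A sketch of the check, with simplified state:

```c
#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/* recv() returning without error does not mean all 4 record-marker
 * bytes arrived; accumulate until the marker is complete before
 * trusting it as a PDU length. */
static int read_record_marker(int fd, char rm[4], int *pos)
{
        while (*pos < 4) {
                ssize_t count = recv(fd, rm + *pos, 4 - *pos, MSG_DONTWAIT);

                if (count < 0)
                        return (errno == EAGAIN || errno == EWOULDBLOCK)
                                ? 0 : -1;       /* not ready yet / error */
                if (count == 0)
                        return -1;              /* peer closed */
                *pos += count;                  /* may still be partial */
        }
        return 1;                               /* marker complete */
}
```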
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>