This fixes the build with linker options such as --as-needed that require arguments to be passed in the correct positional order. It also ensures that the right dependency library is used.
POLLERR and POLLHUP handling in rpc_service() could not deal with
session failures or auto reconnect.
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
It makes no sense to have socket.c keep invoking this callback over and over.
Just change it to become one-shot.
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
The Linux kernel does not check the UDP checksum until the application tries
to read it from the socket.
This means that the socket might be readable, but when we try to read
the data, or inspect how much data is available, the packets will be discarded
by the kernel.
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
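A minimal sketch of the handling this implies (the helper name is hypothetical, not the libnfs one): on a UDP socket that poll() reported readable, a recv() that fails with EAGAIN should be treated as a spurious wakeup rather than a fatal error, since the kernel may have discarded the datagram on a bad checksum at read time.

```c
#include <assert.h>
#include <errno.h>
#include <sys/types.h>

/* Hypothetical helper: classify the result of a non-blocking recv()
 * on a UDP socket that poll() reported as readable. */
static int udp_recv_fatal(ssize_t count, int err)
{
        if (count >= 0)
                return 0;   /* got a datagram */
        if (err == EAGAIN || err == EWOULDBLOCK || err == EINTR)
                return 0;   /* datagram discarded (e.g. bad checksum); retry */
        return 1;           /* a real socket error */
}
```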
When we are reading the RM (possibly only part of it), we cannot assume
that a non-error return from recv() means we have received the full RM.
We must check before we proceed to try to read the actual PDU data.
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
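The required check can be sketched as follows (struct and function names are hypothetical stand-ins, not the libnfs ones): accumulate the 4-byte record marker across possibly-short reads, and only decode the PDU length once all 4 bytes are in hand.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical state for accumulating the 4-byte record marker. */
struct rm_state {
        uint8_t buf[4];
        size_t  have;    /* bytes of the RM received so far */
};

/* Feed up to len received bytes into the accumulator.  Returns the
 * number of bytes consumed; *complete is set once all 4 have arrived. */
static size_t rm_feed(struct rm_state *s, const uint8_t *data, size_t len,
                      int *complete)
{
        size_t want = 4 - s->have;
        size_t take = len < want ? len : want;

        memcpy(s->buf + s->have, data, take);
        s->have += take;
        *complete = (s->have == 4);
        return take;
}

/* Once complete: high bit = last fragment, low 31 bits = PDU length. */
static uint32_t rm_length(const struct rm_state *s)
{
        uint32_t rm = ((uint32_t)s->buf[0] << 24) |
                      ((uint32_t)s->buf[1] << 16) |
                      ((uint32_t)s->buf[2] << 8)  |
                       (uint32_t)s->buf[3];
        return rm & 0x7fffffff;
}
```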
We cannot have a static rpc->inbuf buffer, since that no longer guarantees
that the received buffer stays valid for the duration of the callbacks.
One of the problems is that if we issue new (sync) RPCs from within a
callback, that will overwrite and invalidate the receive buffer that
we passed to the callback.
Revert "init: do not leak rpc->inbuf"
This reverts commit f7bc4c8bb1.
Revert "socket: we have to use memmove in rpc_read_from_socket"
This reverts commit 24429e95b8.
Revert "socket: make rpc->inbuf static and simplify receive logic"
This reverts commit 7000a0aa04.
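A sketch of the ownership model this implies (names hypothetical, not the libnfs ones): each complete PDU gets its own heap buffer, and the dispatch code detaches it from the context before invoking the callback, so a nested (sync) RPC issued from inside the callback cannot clobber it.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical receive context with a per-PDU buffer. */
struct rpc_ctx {
        char *inbuf;     /* buffer for the PDU currently being received */
};

/* Detach the buffer before running the callback: the receive path will
 * malloc() a fresh inbuf for the next PDU, so nested RPCs issued from
 * the callback cannot overwrite this one. */
static char *detach_inbuf(struct rpc_ctx *rpc)
{
        char *buf = rpc->inbuf;

        rpc->inbuf = NULL;   /* next PDU gets its own allocation */
        return buf;          /* caller frees after the callback returns */
}
```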
Remove the fuse module from the examples subdirectory.
This module is now a standalone repo:
https://github.com/sahlberg/fuse-nfs
which comes with proper build rules, documentation, etc.
It is a useful module and has now graduated to become its own
repo.
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
This function is called from rpc_service() when it has detected that
a socket has errored out during reading/writing.
However, since this function returns 0 (==success) when autoreconnect
is not enabled, an errored socket still makes rpc_service() return
0 (==success) back to the application.
Change rpc_reconnect_requeue to return -1 when invoked and autoreconnect
is disabled so that applications will receive an error back from rpc_service.
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
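The fix can be sketched like this (simplified, hypothetical signature; the real function also tears down the socket and requeues outstanding PDUs):

```c
#include <assert.h>

/* Sketch of the corrected behaviour: with autoreconnect disabled, a
 * socket error must surface as a failure from rpc_service(). */
static int reconnect_requeue(int autoreconnect)
{
        if (!autoreconnect)
                return -1;   /* propagate the error to the application */

        /* ... tear down the socket, reconnect, requeue outstanding PDUs ... */
        return 0;
}
```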
There is no guarantee that we get the same fd again when
reestablishing a session, and if the fd changes during a
reconnect we might end up with a client application busy
polling on the old fd.
Qemu registers a read handler on the current fd but does not
notice when the fd changes, so it busy polls on the old fd for good.
Things are working (except for the busy polling) until
a drain all is issued. At this point Qemu deadlocks.
Signed-off-by: Peter Lieven <pl@kamp.de>
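The application-side pattern this requires can be sketched as follows (stubbed types, not the libnfs API; conn_service() simulates a reconnect that lands on a new fd): the event loop must re-query the fd on every iteration instead of registering it once.

```c
#include <assert.h>

/* Hypothetical stand-in for the connection handle. */
struct conn {
        int fd;
};

static int conn_get_fd(const struct conn *c)
{
        return c->fd;
}

/* Stub: servicing the connection may reconnect on a different fd. */
static void conn_service(struct conn *c)
{
        c->fd = c->fd + 1;
}

/* Run n iterations of the event loop; return the fd polled last.
 * The fd is re-queried each iteration, never cached across reconnects. */
static int event_loop(struct conn *c, int n)
{
        int polled_fd = -1;

        for (int i = 0; i < n; i++) {
                polled_fd = conn_get_fd(c);   /* re-query: may have changed */
                conn_service(c);              /* may reconnect on a new fd */
        }
        return polled_fd;
}
```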
Otherwise we end up eating all socket errors in rpc_service() and then
believe we are still connected, but the next call to rpc_read_from_socket()
fails because the socket is closed. We then reconnect anyway.
Signed-off-by: Peter Lieven <pl@kamp.de>
The fuse framework allows us to directly expose symlinks from NFS to the user,
just like a real NFS mount would. All we need to do is call lstat rather
than stat and implement a readlink function.
With this patch I can successfully chroot into a rootfs mounted using
fuse_nfs.
Signed-off-by: Alexander Graf <agraf@suse.de>
At least in my version of glibc the members st_mtim, st_ctim and st_atim
are defined as struct timespec rather than struct timeval, thus
containing a tv_nsec field rather than tv_usec.
Use the proper struct fields instead, fixing compilation on Linux.
Signed-off-by: Alexander Graf <agraf@suse.de>
The requeueing code is broken because we access pdu->next
after rpc_return_to_queue() has mangled it.
This leads to the loss of waitqueue elements and, more severely,
to a deadlock as soon as more than one waitpdu queue has elements.
The reason is that the first elements of the first
two queues become linked to each other.
Example:
waitpdu[0]->head = pduA ; pduA->next = pduB; pduB->next = NULL;
waitpdu[1]->head = pduC ; pduC->next = NULL;
outqueue->head = NULL;
After the for loop for waitpdu[0] queue the outqueue looks like
outqueue->head = pduA; pduA->next = NULL;
At this point pduB is lost!
In the for loop for waitpdu[1] queue the outqueue looks like this
after the first iteration:
outqueue->head = pduC; pduC->next = pduA; pduA->next = NULL;
We now fetch pdu->next of pduC which is pduA.
In the next iteration we put pduA in front of pduC. pduA->next
is then pduC and pduC->next is pduA. => Deadlock.
Signed-off-by: Peter Lieven <pl@kamp.de>
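The fix is the classic safe-iteration pattern: save pdu->next before the requeue call mangles it. A minimal sketch with stand-in types (return_to_queue() plays the role of rpc_return_to_queue()):

```c
#include <assert.h>
#include <stddef.h>

struct pdu {
        struct pdu *next;
};

struct queue {
        struct pdu *head;
};

/* Stand-in for rpc_return_to_queue(): pushes a pdu onto the front of
 * the queue, overwriting pdu->next in the process. */
static void return_to_queue(struct queue *q, struct pdu *pdu)
{
        pdu->next = q->head;
        q->head = pdu;
}

/* Move every pdu from the wait queue to the out queue.  The next
 * pointer is saved BEFORE return_to_queue() mangles it; reading
 * pdu->next afterwards is exactly the bug described above. */
static void requeue_all(struct queue *wait, struct queue *out)
{
        struct pdu *pdu = wait->head;

        wait->head = NULL;
        while (pdu != NULL) {
                struct pdu *next = pdu->next;   /* save before mangling */

                return_to_queue(out, pdu);
                pdu = next;
        }
}
```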