Discussion:
WSASend() on remotely closed socket
B***@gmail.com
2008-02-01 22:20:08 UTC
Permalink
I cannot seem to find any posts addressing this. I am sure they are
somewhere, but I am just tired of looking.

I am using WSASend() with a completion routine, and I need to be
reasonably sure whether the data actually arrived. However, WSASend()
seems to behave essentially the same as send().

I create a listening socket with socket() which, according to MSDN,
creates a socket that can use overlapped I/O. Also, according to
MSDN, accept() creates a child socket that is also overlapped if the
listening socket is overlapped. I make every socket I create
non-blocking with ioctlsocket().
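
For reference, here is roughly how I set the sockets up (the helper
name is mine; socket() creates an overlapped-capable socket by
default, the WSASocket() form below just makes that explicit):

    #include <winsock2.h>

    /* Assumes WSAStartup() has already been called. */
    SOCKET create_listener(void)
    {
        /* Explicitly request an overlapped socket; plain socket()
           gives the same attribute by default. */
        SOCKET s = WSASocket(AF_INET, SOCK_STREAM, IPPROTO_TCP,
                             NULL, 0, WSA_FLAG_OVERLAPPED);
        if (s == INVALID_SOCKET)
            return INVALID_SOCKET;

        /* Put the socket in non-blocking mode. */
        u_long nonblocking = 1;
        if (ioctlsocket(s, FIONBIO, &nonblocking) == SOCKET_ERROR) {
            closesocket(s);
            return INVALID_SOCKET;
        }
        return s;
    }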

Ok, so the original problem I was trying to solve is that, if the
client closes its socket, send() returns as if it actually sent the
data when, in fact, it only copied the data into a buffer and never
actually sends it. My server, however, believes the data was
delivered and goes on its merry way under that assumption.

As I understand it, this is how TCP is designed. Fine.

No problem. So, I read some more and find out that WSASend() allows
you to specify a callback function that is notified when an I/O
operation completes successfully or otherwise.
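
Trimmed down, my send path looks roughly like this (the helper and
callback names are mine):

    #include <winsock2.h>

    /* Completion routine: runs during an alertable wait (SleepEx,
       WSAWaitForMultipleEvents with fAlertable = TRUE, etc.). */
    void CALLBACK send_done(DWORD error, DWORD transferred,
                            LPWSAOVERLAPPED ov, DWORD flags)
    {
        /* error and transferred describe the completed send. */
    }

    int send_async(SOCKET s, char *buf, u_long len, LPWSAOVERLAPPED ov)
    {
        WSABUF wb;
        wb.buf = buf;
        wb.len = len;

        DWORD sent = 0;
        int rc = WSASend(s, &wb, 1, &sent, 0, ov, send_done);
        if (rc == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING)
            return SOCKET_ERROR;  /* immediate failure */
        return 0;  /* completed inline or pending; the callback
                      fires either way */
    }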

The problem is, if I put a breakpoint in the server to stop it from
sending the data and close the connection on the client side,
WSASend() returns zero (not SOCKET_ERROR with WSA_IO_PENDING, which
would indicate that it queued the data) and my callback function is
always called
with an error value of zero and the number of bytes transferred
indicating that the entire message was sent.

So...I'm confused. Is it just impossible to tell if the client closed
its connection without calling WSARecv() or recv()? I mean, I just
assumed that I/O completion ports or an I/O completion callback would
give me useful information instead of just telling me that it
successfully buffered the data for sending. It seems pointless to try
to use WSARecv() or recv() before a send call since the socket can be
closed between the calls.

Am I missing something? I have to be missing something. There has to
be a way to make the I/O completion callback wait until TCP times out
and tell me that no data was sent.
Alexander Nickolov
2008-02-15 20:49:09 UTC
Permalink
The WSASend callback is called when your data is copied into the
socket send buffer. That is of little value to you, I suspect.

The only way to know that the remote party has received the data is
via a reply over the socket (or via other means). The TCP layer
does receive ACKs, but the socket API does not expose that
information to the developer.
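
In practice that means an application-level acknowledgment: only
treat the data as delivered once the peer's reply arrives. A rough
sketch (blocking calls for brevity; the same idea applies with
WSARecv()):

    #include <winsock2.h>

    /* Send, then wait for the peer's one-byte application-level
       reply before treating the data as delivered. */
    int send_and_confirm(SOCKET s, const char *msg, int len)
    {
        if (send(s, msg, len, 0) == SOCKET_ERROR)
            return SOCKET_ERROR;

        char reply;
        int n = recv(s, &reply, 1, 0);
        if (n <= 0)               /* 0 = closed, <0 = reset/error */
            return SOCKET_ERROR;  /* delivery unconfirmed */
        return 0;
    }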
--
=====================================
Alexander Nickolov
Microsoft MVP [VC], MCSD
email: ***@mvps.org
MVP VC FAQ: http://vcfaq.mvps.org
=====================================
Post by B***@gmail.com
[...]
B***@gmail.com
2008-02-20 21:06:31 UTC
Permalink
Post by Alexander Nickolov
The WSASend callback is called when your data is copied into the
socket send buffer. That is of little value to you, I suspect.
The only way to know that the remote party has received the data is
via a reply over the socket (or via other means). The TCP layer
does receive ACKs, but the socket API does not expose that
information to the developer.
I think the primary problem is that I am trying to synchronize the
actions of two applications, and a three-way handshake is not enough.

Just a general overview:

App A (client) contacts App B (server) to request permission to
perform an action.
App B sends a grant/deny message.
App A sends an ACK and performs the action if App B grants permission.
App B logs that App A performed the action when it receives the ACK.

If App A takes too long to send the ACK back, App B will terminate the
connection. When that happens, App A can "successfully" send an ACK
to a closed connection and go on its merry way without App B logging
it.
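
To illustrate the race, App A's side looks roughly like this (GRANTED
and perform_action() are placeholders, not our real protocol):

    #include <winsock2.h>

    #define GRANTED 'G'           /* placeholder grant byte */
    extern void perform_action(void);

    void client_step(SOCKET s)
    {
        char grant;
        if (recv(s, &grant, 1, 0) == 1 && grant == GRANTED) {
            /* If App B has timed out and closed by now, this send()
               can still report success: the bytes only reached the
               local socket buffer. */
            send(s, "ACK", 3, 0);
            perform_action();     /* performed, but App B never logs it */
        }
    }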

The only way I could get the behavior I want is to turn lingering on
(SO_LINGER) with a timeout of zero. However, I am not sure whether
that will have any adverse consequences.
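
For reference, this is the setting I mean. As I understand it, a zero
linger timeout makes closesocket() send an RST instead of a graceful
FIN, so the peer's next send fails instead of quietly succeeding;
that abortive close is also why I am wary of side effects:

    #include <winsock2.h>

    /* Abortive close: SO_LINGER on with a zero timeout makes
       closesocket() reset the connection rather than shut it
       down gracefully. */
    void hard_close(SOCKET s)
    {
        struct linger lg;
        lg.l_onoff  = 1;   /* enable lingering */
        lg.l_linger = 0;   /* zero timeout => RST on close */
        setsockopt(s, SOL_SOCKET, SO_LINGER,
                   (const char *)&lg, sizeof(lg));
        closesocket(s);
    }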

In practice, App A should always be able to send an ACK immediately
since it does not do any processing between receiving the grant
message and sending the ACK. The probability of the connection
breaking in those few micro-/milliseconds is pretty low, so, for now,
I am not worrying about it. We will be testing this system in the
real world for quite a while anyway, with a backup logging method to
see if we miss anything.
