The proposal is for a simple document server that could be placed on an 'exposed network' without fear of compromise, because (1) it would keep no state about incoming connections, and (2) with one exception, outgoing packets would be no larger than incoming packets; even in that case, they would not be much larger.
Because the server would use the TCP/IP protocol, people behind firewalls would be able to access it if the required port was open, without any special handling by their firewall.
The server would respond to ARP packets in the usual fashion, and to TCP/IP packets destined for the proper port. All non-reset TCP packets would generate an identically-sized TCP packet in return, with one exception.
For every TCP packet, the reply would have the source and destination addresses and ports swapped relative to the original. Except as noted, the seq and ack numbers would be passed back exactly as received.
A SYN packet would be answered with a SYN+ACK whose sequence number equaled the received sequence number, whose ack number was one above the received sequence number, and whose MSS option was set to 512 bytes [this is the one packet that may be larger than the original, because of the need to include the MSS option].
An RST packet would be dropped on the floor.
Other packets would be sent back with seq and ack numbers as received, and with the same-sized data payload as received.
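The reply rules above can be sketched as follows. This is only an illustration, assuming a parsed packet represented as a plain dict with hypothetical field names (real code would build raw IP/TCP headers):

```python
# Sketch of the stateless reply rules, assuming a parsed TCP packet
# represented as a dict with hypothetical keys (src, dst, sport, dport,
# seq, ack, flags, payload). Real code would craft raw headers instead.

def make_reply(pkt):
    """Return the fields of the reply packet, or None to drop it."""
    if pkt["flags"] == {"RST"}:
        return None  # reset packets are dropped on the floor
    reply = {
        # source and destination addresses/ports are swapped
        "src": pkt["dst"], "dst": pkt["src"],
        "sport": pkt["dport"], "dport": pkt["sport"],
        # seq and ack are echoed as received (SYN case adjusted below)
        "seq": pkt["seq"], "ack": pkt["ack"],
        "flags": set(pkt["flags"]),
        # the reply's data payload is the same size as the one received
        "payload_len": len(pkt.get("payload", b"")),
    }
    if "SYN" in pkt["flags"]:
        # SYN -> SYN+ACK: seq echoed, ack = received seq + 1, MSS = 512
        reply["flags"] = {"SYN", "ACK"}
        reply["ack"] = (pkt["seq"] + 1) & 0xFFFFFFFF
        reply["mss"] = 512
    return reply
```

Echoing the client's own sequence number (rather than choosing a fresh ISN, as a normal TCP stack would) is what lets the server answer without keeping any per-connection state.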
For the server to return useful information, the documents to be served would be encoded so as not to contain any 0xFF bytes. Assuming each file has a 32-bit ID, the entity requesting data would simply send out a stream of [file-id][counter] records, where 'counter' was a file-offset counter that increased by eight each time it was sent.
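A minimal sketch of the client's request stream, assuming each 8-byte record is two big-endian 32-bit words (the original does not specify a byte order; this layout is an assumption):

```python
import struct

def request_stream(file_id, start, nrecords):
    """Build the request payload: repeated [file-id][counter] records.

    Assumes (not specified in the proposal) that each record is two
    big-endian 32-bit words, so a record is 8 bytes and the file-offset
    counter advances by 8 per record.
    """
    return b"".join(
        struct.pack(">II", file_id, start + 8 * i) for i in range(nrecords)
    )
```

Because the records are 8 bytes long, the counter's low byte recurs every 8 bytes and (barring carries) increases by 8 each time, which is exactly the pattern the server scans for.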
The server, upon receipt of each data packet, would look through the data payload for two bytes, eight bytes apart, that differed by eight. It would recognize these as successive values of the LSB of the counter, and could thus tell what portion of the file was being requested. It would then return a data packet, identical in size to the one received, containing the appropriate portion of the requested file.
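The server's scan can be sketched as below. Note this is a heuristic: the repeated file-id bytes differ by zero between records, so only the counter's low byte matches the pattern, though file data that happened to mimic it could in principle produce a false positive (which the FF-reply fallback covers):

```python
def find_counter_lsb(payload):
    """Find the offset of the counter's least-significant byte.

    Scans for two bytes, eight bytes apart, that differ by eight
    (mod 256, to tolerate the byte wrapping past 0xFF).
    Returns the index of the first such byte, or None if no pair matches.
    """
    for i in range(len(payload) - 8):
        if (payload[i + 8] - payload[i]) % 256 == 8:
            return i
    return None
```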
If, for some reason, the server could not make sense of the received packet's data payload, it would return FF's. The client would recognize this occurrence and request the missing data later.
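Since the documents are encoded to contain no 0xFF bytes, an all-FF reply is unambiguous, and the client's check is trivial (a sketch):

```python
def is_failure_reply(payload):
    """True if the server returned all 0xFF bytes, i.e. it could not
    make sense of the request and the client should re-request later.
    Unambiguous because served documents never contain 0xFF."""
    return len(payload) > 0 and all(b == 0xFF for b in payload)
```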
Although the protocol would not be terribly efficient, the server would be able to handle as many connections as bandwidth permitted without difficulty. If someone wanted to open up 1,000 connections each from 1,000 computers and just leave them open, they could. All of the connections would remain open for as long as the clients wanted them, but the server wouldn't care one iota.
Anyone ever heard of anything like that?