If I have 1000 Mbits to send, then gigE is 10 times faster. But in the article we are transferring only 2K across the network, so we're mixing latency and bandwidth here. The latency to send 2K across an empty network isn't 10 times greater on fastE versus gigE, right?
2K is going to be 2 packets, a full-size and a short packet, roughly 1.5K and 0.5K.
For any transfer there is the per-packet overhead (running it through the software stack) plus the time to put the packet on the wire.
The first packet will, in practice, transfer very close to 10x faster, unless your software stack really sucks.
The second packet is a 1/3rd-size packet, so the stack overhead will be proportionally larger.
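To put rough numbers on that, here's a back-of-envelope sketch. The 10 us per-packet stack cost is a number I made up for illustration (plug in whatever your stack actually costs), and propagation delay, ACKs and interrupt coalescing are all ignored:

    # Toy model: each packet pays a fixed stack cost plus its wire time.
    PACKETS = [1500, 548]   # bytes: the full-size and the short packet (2K total)
    OVERHEAD = 10e-6        # assumed per-packet software-stack cost, seconds

    def transfer_time(bits_per_sec):
        return sum(OVERHEAD + size * 8 / bits_per_sec for size in PACKETS)

    fast_e = transfer_time(100e6)   # fastE
    gig_e = transfer_time(1e9)      # gigE
    print(f"fastE: {fast_e * 1e6:6.1f} us")
    print(f"gigE:  {gig_e * 1e6:6.1f} us")
    print(f"ratio: {fast_e / gig_e:.1f}x")

With those assumed numbers the wire time alone is exactly 10x apart, but the end-to-end ratio comes out around 5x, because the fixed per-packet cost doesn't shrink when the link gets faster.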
And it matters _a lot_ whether you are counting the connect time for a TCP socket. If this is a hot potato type of test, then the TCP connection is already hot. If it is connect, send the data, disconnect, you are going to get a very different answer.
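If you want to see the two shapes side by side, here's a minimal sketch against a throwaway localhost sink; the port, payload size and iteration count are all arbitrary choices for the demo:

    import socket
    import threading
    import time

    PORT = 50007            # arbitrary port for the demo
    PAYLOAD = b"x" * 2048   # the 2K transfer under discussion
    ITERATIONS = 200

    def sink_server():
        # Accept connections one at a time and drain whatever arrives.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("127.0.0.1", PORT))
            srv.listen()
            while True:
                conn, _ = srv.accept()
                with conn:
                    while conn.recv(65536):  # read until the client closes
                        pass

    threading.Thread(target=sink_server, daemon=True).start()
    time.sleep(0.2)  # crude wait for the listener to come up

    # Hot: connect once, send the 2K over and over.
    with socket.create_connection(("127.0.0.1", PORT)) as s:
        start = time.perf_counter()
        for _ in range(ITERATIONS):
            s.sendall(PAYLOAD)
        hot = time.perf_counter() - start

    # Cold: connect, send 2K, disconnect, every single time.
    start = time.perf_counter()
    for _ in range(ITERATIONS):
        with socket.create_connection(("127.0.0.1", PORT)) as s:
            s.sendall(PAYLOAD)
    cold = time.perf_counter() - start

    print(f"hot:  {hot / ITERATIONS * 1e6:8.1f} us per 2K transfer")
    print(f"cold: {cold / ITERATIONS * 1e6:8.1f} us per 2K transfer")

Keep in mind localhost understates the cold case: on a real wire each connect also eats at least one round trip for the SYN/SYN-ACK before any data moves.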
Not sure if I'm helping here or not; ask again if I'm not.