No, the Max Attainable figure already takes this 6dB into account.
My figures don't reflect this: they currently show
Attainable rate (kbit/s) 87124
SNR margin (dB) 5.7
Line rate (kbit/s) 75260
Maybe it depends on which modem you have and how it reports (mine is an HG612).
Mine is an HG612 too, so they should be comparable.
The attainable rate seems to be quoted as a speed without FEC, while the actual line speed will have FEC taken into account if the line is currently using it. The difference in yours, around 14%, suggests that FEC has been put on your line.
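Just to put a rough number on it (this is simple arithmetic on the two figures quoted above, nothing more):

# Rough check of the gap between the attainable rate and the actual sync rate.
attainable = 87124   # kbit/s, "Attainable rate" from the stats above
line_rate = 75260    # kbit/s, "Line rate" from the stats above
gap_percent = 100 * (attainable - line_rate) / attainable
print(f"Sync rate is {gap_percent:.1f}% below the attainable figure")  # about 13.6%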
There goes my reputation if it hasn't got FEC
I thought that the amount of error correction used was always the same, with interleaving spreading it out, as you say, to reduce problems from burst noise. Does VDSL ever work with no error correction?
The modem sync negotiation can vary both the amount of interleaving and the amount of FEC, and yes - there can be no interleaving and no FEC on the line.
How can you find out about FEC and interleaving? There are three places where they can be seen in the output of the "--show" and "--stats" variants of the xdslcmd command.
First, you can see the demand that DLM has placed on the modems from this part:
INP: 3.00 0.00
PER: 2.21 8.87
delay: 8.00 0.00
OR: 86.72 55.88
Those are from an old line of mine, which had FEC and interleaving activated downstream.
"INP" (impulse noise protection) indicates how many symbols need to be protected against - which tells the negotiation how many consecutive faulty bits it must protect against. This shows 3 symbols.
"delay" dictates the maximum delay that is acceptable for the additional latency. In this case 8 milliseconds.
The negotiation for that line resulted in these figures:
B: 57 111
M: 1 2
T: 64 50
R: 16 16
S: 0.0461 0.7101
L: 12835 2704
D: 701 1
I: 74 120
N: 74 240
Interleaving is shown by properties "I" and "D" - bytes to be sent out get written into a 74x701 array, while the interleaved data is read out of the same array in the other direction (written in rows of length 701, read out from columns of length 74). "D" is otherwise known as the depth, and shows how spread out the data is.
A depth of 1 means no interleaving (like the upstream figure).
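To picture what that row/column description means, here is a toy sketch in Python. It is only an illustration of the idea with small made-up numbers - the real modem uses a convolutional interleaver defined by the standard, so don't treat this as the actual algorithm:

# Toy block interleaver: write bytes in rows, read them back out by column.
I, D = 4, 6                          # rows and depth (tiny stand-ins for 74 and 701)
data = list(range(I * D))            # 24 dummy "bytes"
rows = [data[r * D:(r + 1) * D] for r in range(I)]              # written row by row
interleaved = [rows[r][c] for c in range(D) for r in range(I)]  # read column by column
print(interleaved)
# A noise burst that wipes out several consecutive bytes on the wire now only
# damages one byte from each original row, which FEC can then repair.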
FEC is shown by properties N and R, where R is the number of bytes of parity, while N is the total number of bytes in a "Reed Solomon" (RS) block. In this case, the block is 74 bytes, but 16 are used for parity overhead while the remaining 58 are for the user's data - so 21.6% of the line capacity is used for FEC overhead.
Note that there is FEC being applied upstream (16/240, or about 6.7%), even though there is no interleaving (D=1) and there was no requirement to do so either (INP=0 and delay=0).
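Those overhead percentages are just R divided by N - a quick sketch to reproduce them:

# FEC overhead is the parity bytes (R) as a fraction of the RS block size (N).
def fec_overhead(R, N):
    return 100 * R / N

print(f"downstream: {fec_overhead(16, 74):.1f}%")   # about 21.6%
print(f"upstream:   {fec_overhead(16, 240):.1f}%")  # about 6.7%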
Finally, a clue that FEC is taking place comes from the line statistics:
OHF: 492092661 362863
OHFErr: 2162 254
RS: 4287526116 2051247
RSCorr: 7773977 19758
RSUnCorr: 260868 0
The RS count is non-zero, so FEC is taking place. RSCorr shows how many RS blocks were corrected using the parity data, while RSUnCorr shows how many RS blocks could not be fixed with the parity data (i.e. were too badly corrupted).
The uncorrectable RS blocks then become corruptions in the larger blocks protected by CRC checks, which shows up as an OHFErr. Because several of the smaller RS blocks fit inside each CRC frame, uncorrectable RS blocks can occur in batches within a single frame, so the count of CRC failures will usually be smaller than the RSUnCorr count.
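To get a feel for the scale of those counters, you can turn them into percentages (my own rough reading of the downstream figures above, nothing official):

# Rough reading of the downstream RS counters from the stats above.
rs_total = 4287526116    # RS blocks received
rs_corr = 7773977        # blocks repaired using the parity bytes
rs_uncorr = 260868       # blocks too damaged to repair
print(f"corrected:     {100 * rs_corr / rs_total:.3f}% of blocks")    # ~0.18%
print(f"uncorrectable: {100 * rs_uncorr / rs_total:.4f}% of blocks")  # ~0.006%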
My current line is shorter (so it can get close to 80/20) and better quality, so it has little need for interleaving.
Last January, when it was still on a 40/10 package, the figures looked like this:
Max: Upstream rate = 27209 Kbps, Downstream rate = 90520 Kbps
Path: 0, Upstream rate = 10000 Kbps, Downstream rate = 39999 Kbps
INP: 0.00 0.00
delay: 1.00 0.00
R: 14 0
D: 19 1
I: 255 120
N: 255 240
OHF: 422503933 663284
OHFErr: 114 616
RS: 1270464109 23430
RSCorr: 9376 0
RSUnCorr: 1759 0
Note that interleaving was turned on only fractionally (D=19), and FEC lightly (about 5.5%). But the RSCorr count is very low, so it wasn't really needed much!
Later in February, it was converted to 80/20. The "max attainable" estimate dropped quite a lot, but FEC and interleaving were turned off.
Max: Upstream rate = 26428 Kbps, Downstream rate = 84792 Kbps
Path: 0, Upstream rate = 20000 Kbps, Downstream rate = 79999 Kbps
INP: 0.00 0.00
delay: 0.00 0.00
R: 0 16
D: 1 1
I: 240 120
N: 240 240
OHF: 179858276 2859890
OHFErr: 69202 773
RS: 0 658425
RSCorr: 0 4870
RSUnCorr: 0 0
Strangely, the line has gained FEC upstream (but not interleaving), while it lost it downstream.
TCP window sizes should adjust to cope
I agree, as long as the protocol asks for all the data and leaves it to TCP. I think download protocols often have additional higher-level request/response handshaking to fetch data in blocks, which means that the latency effect is more marked.
The TCP protocol is responsible for making sure that data is received in the right order and nothing goes missing - and most "download" protocols (such as FTP and HTTP) are built on top of it. TCP uses handshaking to acknowledge safe receipt of individual blocks of data, so it is certainly susceptible to latency when packets are lost and to the delays in getting back on track.
However, TCP uses a "window" to allow many blocks to be in transit at once, so the re-transmission of one block will not unduly delay the others. But if lots of blocks start to need retransmission, the overall throughput drops, because the recovery does introduce delays.
The TCP window should adjust itself to take account of the end-to-end latency and the speed of the link. This was a limitation in older Windows versions, and settings needed to be tweaked to allow full throughput on links that were either high speed or long latency (eg satellite links).
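As a rough illustration of the sizing involved (the bandwidth and latency here are just example figures I've picked, not measurements from these lines):

# TCP needs roughly bandwidth x round-trip time "in flight" to keep a link full.
link_mbps = 80        # example: an 80 Mbit/s downstream
rtt_ms = 20           # example round-trip time
window_bytes = (link_mbps * 1_000_000 / 8) * (rtt_ms / 1000)
print(f"~{window_bytes / 1024:.0f} KiB of window needed")  # ~195 KiB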
As an example of the impact on throughput...
My first line suffered errors from the moment it was turned on (and you can see the FEC/interleaving setting that DLM chose in my first example above). However, DLM only intervened after 2 days, so I got to see the effect of the errors directly for those first 2 days.
The line synced at the full 40Mbps, with a profile of 38.7Mbps. I was getting about 4% packet loss (as measured on the TBB "ping graph", or BQM).
Without packet loss, you would expect such a line to show speeds of around 37Mbps (using speedtest.net). My line would show speeds of around 33.5Mbps (IIRC).
So the random 4% packet loss (seen using unprotected UDP packets) caused a throughput reduction of around 10% on the protected TCP traffic.
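Putting rough numbers on that (just the arithmetic behind the figures above):

# Rough comparison of expected vs observed speedtest results on the lossy line.
expected_mbps = 37.0   # what a clean 38.7 Mbps profile would normally show
observed_mbps = 33.5   # what the line actually showed (from memory)
drop_percent = 100 * (1 - observed_mbps / expected_mbps)
print(f"~{drop_percent:.1f}% of throughput lost to ~4% packet loss")  # about 9.5%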
Other cases I have seen suggest that a link starts to become unusable when the packet loss is around 10%.