In other words, what it considers to be above-normal error rates may not be noticeable to the end user; if left alone, they might never become so, since the errors are often not enough to cause problems with throughput or stability. DLM is triggered well before the line is actually impacted.
It depends on what you are doing with the data.
If everything is transferred under a protocol that notices the failure and requests a re-transmission, such as TCP (used by downloads), then you'll just get a drop in throughput. If the error rate gets high enough, the throughput drops dramatically, but at more typical error rates you'll barely notice the difference.
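To put rough numbers on "drops dramatically", here's a minimal Python sketch of the well-known Mathis approximation for steady-state TCP throughput (roughly MSS / (RTT × √p), where p is the packet loss rate); the MSS and RTT values below are illustrative assumptions, not measurements of any real line:

```python
import math

def tcp_throughput_bps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    # Mathis et al. approximation: throughput ~ MSS / (RTT * sqrt(p)).
    # It ignores slow start and timeouts, so treat it as a rough ceiling.
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate))

# Assumed values: 1460-byte MSS, 20 ms round-trip time.
for p in (1e-6, 1e-4, 1e-2):
    mbps = tcp_throughput_bps(1460, 0.020, p) / 1e6
    print(f"loss rate {p:.0e}: ~{mbps:.1f} Mbit/s ceiling")
```

Because of the square root, a hundred-fold rise in loss only cuts the ceiling tenfold, which is why low error rates go unnoticed on downloads while very high ones are felt immediately.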
If, however, the data is transferred under a protocol that does not re-transmit, you just get gaps in the transmission. For most people this affects the protocols used for streaming audio and video, and for online gaming. As the error rate increases, the gaps become more noticeable (and, for streaming, the glitches they cause in the audio and video become more intolerable). Any errors here are noticeable to the eye and ear, even if they don't cause you to miss the plot. We humans are rather less tolerant of these errors than a computer is of a re-transmitted packet.
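As a rough sketch of why unrecovered loss turns into gaps, here's a toy Python receiver that, in the manner of an RTP-style player, tracks packet sequence numbers and merely notes what went missing rather than asking for a resend; the sequence numbers are made up for illustration:

```python
def detect_gaps(seq_numbers):
    """Yield (expected, got) pairs wherever the sequence skips packets."""
    expected = None
    for seq in seq_numbers:
        if expected is not None and seq != expected:
            # Packets expected..seq-1 never arrived and never will.
            yield (expected, seq)
        expected = seq + 1

# Simulated arrival sequence with two loss events:
received = [0, 1, 2, 5, 6, 7, 11]
for expected, got in detect_gaps(received):
    print(f"lost packets {expected}..{got - 1}; "
          f"player must conceal {got - expected} packet(s)")
```

Each gap is time the player has to paper over with silence or a frozen frame, which is exactly the glitch you see and hear.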
Under FTTC, DLM is undoubtedly tuned to give a good experience for streamed video - and particularly to make sure that subscription TV gets somewhere close to broadcast quality. That means it has a lower tolerance for errors than would be needed for "just" downloading.