Agree with Mr Saffron, but that is the entire point. FireBricks don't have proper QoS, unfortunately; instead they make do with a general heuristic that prioritises small packets. So surely PPP LCP pings ought to qualify? Then they should queue-jump and go first, no? So it would seem that the queuing must have been elsewhere? That would certainly have been the case before, when running flat-out at 100% upstream.
I took Andrew from AA's advice. I recently made some extensive, detailed timing measurements of uploads on three lines, plus a lot of slog in a spreadsheet, to examine RTT in different scenarios and compare near-zero-load RTT against a flat-out upload of MTU-sized packets. From this I think/hope I managed to work out the queuing delay, either in the modems or in the FireBrick, as the FB's rate-limiter settings are varied and it throttles upstream packets before they reach the modem. The conclusion: the modem seems to have a buffer of 16 slots, each of 1536 bytes (best guess), which comes to 24 KiB exactly. Once you get to 16 packets in the queue (each being full MTU size in this particular test), that's when packet loss really kicks in, as you would expect. Btw, 1536 is an interesting number: it's the worst-case AAL5 payload size, excluding the CPCS trailer, for PPPoEoA.
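To put the 16 × 1536-byte figure in perspective, here's a rough Python sketch of the worst-case drain time of a full modem buffer, counting the on-wire ATM cells each slot occupies. The sync rate is just a placeholder, not any of my actual lines:

```python
from math import ceil

SLOTS = 16                 # buffer slots observed in the measurements
SLOT_BYTES = 1536          # bytes per slot, best guess
SYNC_BPS = 1_000_000       # hypothetical upstream sync rate, bits/s

buffer_bytes = SLOTS * SLOT_BYTES            # 24576 bytes = 24 KiB exactly
cells_per_slot = ceil(SLOT_BYTES / 48)       # 32 whole ATM cells per slot
wire_bytes = SLOTS * cells_per_slot * 53     # on-wire bytes to drain a full buffer
delay_ms = wire_bytes * 8 / SYNC_BPS * 1e3   # worst-case queuing delay, ms

print(buffer_bytes, round(delay_ms, 1))
```

At 1 Mbit/s sync that comes out at roughly 217 ms of added delay when the buffer is full, which is the right order of magnitude for the RTTs I was seeing.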
By cutting the rates down to 97.5%, instead of 100%, of the theoretical protocol efficiency for PPPoEoA with a 1532-byte ATM payload (that is, including the AAL5 CPCS trailer and all overheads of higher protocols), I brought the queuing delay right down, so that the RTT was around 150 ms on a flat-out upload. (I calculate the protocol efficiency as roughly 0.884434 × sync rate, from the number of whole ATM cell payloads required to carry 1532 bytes, times 48/53 for ATM cell-header inefficiency.)
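For anyone who wants to check the arithmetic, here's a quick Python sketch of where the 0.884434 factor comes from. The 1500-byte useful IP payload per packet is my assumption about what counts as "goodput":

```python
from math import ceil

IP_MTU = 1500        # useful payload per packet, bytes (my assumption)
AAL5_PAYLOAD = 1532  # ATM payload incl. AAL5 CPCS trailer and higher overheads

cells = ceil(AAL5_PAYLOAD / 48)   # whole ATM cells needed -> 32
wire_bytes = cells * 53           # bytes on the wire per packet -> 1696
efficiency = IP_MTU / wire_bytes  # ~0.884434 of the sync rate

print(cells, wire_bytes, round(efficiency, 6))
```

So the FB rate limit I used was 0.975 × 0.884434 × sync rate.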
Anyone who is interested can have the spreadsheet. It's from the Apple iOS "Numbers" app, but that app does claim, iirc, to be able to export Excel files.