Re: speed
I remember seeing somewhere (long, long ago) that taking the various transmission protocol overheads into account you should expect (on average) on a WAN link to need 9.5 bits for every 8 bits of data.
That sort of rule of thumb would have applied to serial communications, for example directly between two PCs, or from a PC to a modem etc. In this case you "lose" some bandwidth to framing overhead (e.g. start bits, stop bits, parity bits). However, this does not simply mean that a 56Kbps dial-up connection will generally max out at around 47.2Kbps (56 x 8 / 9.5), because there are other technologies that come into play (e.g. modem compression etc.).
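To make the arithmetic concrete, here's a small sketch (the helper name is my own) that applies a "bits on the wire per data byte" ratio to a line rate, using both the 9.5-bit rule of thumb and classic 8N1 serial framing:

```python
def effective_throughput(line_rate_bps, bits_per_byte_on_wire, data_bits_per_byte=8):
    """Data throughput after framing overhead: the link carries
    bits_per_byte_on_wire bits for every data_bits_per_byte bits of payload."""
    return line_rate_bps * data_bits_per_byte / bits_per_byte_on_wire

# The "9.5 bits for every 8 bits" rule applied to a 56Kbps link:
print(round(effective_throughput(56_000, 9.5) / 1000, 1))  # ~47.2 (Kbps)

# Classic 8N1 serial framing: 1 start + 8 data + 1 stop = 10 bits on the wire:
print(round(effective_throughput(56_000, 10) / 1000, 1))   # 44.8 (Kbps)
```

Of course, as noted above, compression and the like can push real-world figures well away from either number.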
The same general principle applies to most protocols in that not all of the bandwidth is actually used to transmit data all of the time due to the need for header fields, retries, collision detection/avoidance etc. As I mentioned earlier, where intermediate devices (e.g. modems, switches etc.) do compression you may actually get higher speeds than the nominal rated throughput because the data is compressed before transmission.
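As a rough illustration of why compression can beat the nominal rate: if the device on the link compresses before sending, repetitive data takes far fewer bytes on the wire than the application handed over. Here zlib simply stands in for whatever scheme the device uses (e.g. V.42bis on modems); the function name is my own:

```python
import zlib

def wire_size(payload: bytes) -> int:
    """Bytes actually sent if the link compresses the payload first
    (zlib here is just a stand-in for link-level compression)."""
    return len(zlib.compress(payload))

text = b"the quick brown fox jumps over the lazy dog " * 100  # highly repetitive
print(len(text), wire_size(text))  # compressed size is far smaller
```

Random or already-compressed data, by contrast, sees little or no gain, which is why "up to 4x" style compression claims only ever applied to favourable data.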
Basically there is no generally applicable rule of thumb for all protocols and environments. I'm not sure what the average empirical protocol overhead might be, but the "9.5 bits for every 8 bits of data" idea doesn't really make much sense in the context of ethernet, 802.11 etc., even if the bandwidth "loss" due to protocol overhead does happen to come out around 15% (1 - 8/9.5 is roughly a 15.8% loss, i.e. about 84.2% efficiency).
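To show how protocol-specific the overhead really is, here's the per-frame cost for standard Ethernet (preamble/SFD, header, FCS, inter-frame gap are all fixed by the spec), where efficiency depends heavily on payload size rather than being a flat ratio:

```python
# Fixed per-frame overhead for standard Ethernet, in bytes:
PREAMBLE_SFD = 8   # preamble + start-of-frame delimiter
HEADER = 14        # dest MAC (6) + src MAC (6) + EtherType (2)
FCS = 4            # frame check sequence
IFG = 12           # minimum inter-frame gap
OVERHEAD = PREAMBLE_SFD + HEADER + FCS + IFG  # 38 bytes per frame

def efficiency(payload_bytes: int) -> float:
    """Fraction of wire time carrying payload rather than framing."""
    return payload_bytes / (payload_bytes + OVERHEAD)

print(f"{efficiency(1500):.1%}")  # max payload: about 97.5%
print(f"{efficiency(46):.1%}")    # min payload: about 54.8%
```

So large frames lose only ~2.5% to framing while minimum-size frames lose ~45%, and none of that accounts for higher-layer headers (IP, TCP), retries or contention, which is why a single WAN-style ratio doesn't transfer.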
I believe Cable Internet is more reliable than telephone line internet?
Reliable in what way precisely? As far as I know, one characteristic of cable networks compared to telecom networks is that the former often suffer from higher latencies (i.e. delays in dispatching packets/frames).