Multihoming - Resilience or Independence
Havard Eidnes he at uninett.no
Fri Oct 12 11:36:55 CEST 2001
> On Wed, 10 Oct 2001, Randy Bush wrote:
> > > the basic issue is that multi-homing is *the demand*. And it's
> > > not the ISP who has to evaluate whether it's the right one but
> > > the *customer*. We live in a customer-driven world. Money
> > > makes the world go round, not techies.
> >
> > not a problem. they can demand all they want. but i will listen
> > to their flakey routes when they *pay* me to do so.
>
> No flame intended here, but aren't your customers already paying you
> to get the best connectivity to remote sites?

Presumably the customers also want this connectivity to be reasonably
stable? If there is a conflict between "stable connectivity to most of
the Internet" and "not quite so stable connectivity to everywhere", I
think it's a no-brainer for an ISP to choose between the two. This, in
combination with a certain dose of defensive conservatism, is what
caused the birth of "route filtering on RIR allocation boundaries".

> Of course I see the point in filling up 128 megs of RAM with routing
> tables, but I ask myself what costs more: 128 megs of extra RAM, or
> customers running off to another ISP because that one has better
> connectivity to their favorite site?

As has been said in other venues many times: it's not just the cost of
the DRAM that's the issue here. First, a given CPU board or router
system typically has a maximum amount of memory which can be
installed; there is a "step function" associated with crossing that
limit, and the price increase is quite a bit higher than the cost of
an additional 128MB module.

Secondly, trying to solve this problem by just throwing more memory at
it is likely to expose other scalability problems caused by a large
default-free routing table, such as excessive convergence times and
insufficient CPU horsepower to keep up with the route computation in
the face of an increasing stream of routing updates.
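[Editor's sketch: the "route filtering on RIR allocation boundaries"
mentioned above amounts to rejecting announcements more specific than
the registry's minimum allocation size for the address block they fall
in. A minimal Python illustration, assuming hypothetical per-block
boundaries; real RIR allocation policies differ and change over time.]

```python
import ipaddress

# Hypothetical minimum-allocation boundaries per address block;
# real registry boundaries vary and are updated over time.
MIN_ALLOC = {
    ipaddress.ip_network("193.0.0.0/8"): 19,  # e.g. a /19 allocation boundary
    ipaddress.ip_network("62.0.0.0/8"): 19,
}

# Fallback for blocks with no known boundary: accept nothing longer than /24.
DEFAULT_MAX_PREFIXLEN = 24

def accept_route(prefix: str) -> bool:
    """Accept an announced prefix only if it is no more specific than
    the registry's minimum allocation size for its address block."""
    net = ipaddress.ip_network(prefix)
    for block, max_len in MIN_ALLOC.items():
        if net.subnet_of(block):
            return net.prefixlen <= max_len
    return net.prefixlen <= DEFAULT_MAX_PREFIXLEN

# An allocation-sized announcement passes the filter...
assert accept_route("193.10.0.0/19")
# ...while a more-specific (e.g. multihoming) announcement is dropped.
assert not accept_route("193.10.32.0/24")
```

Such filters keep the default-free table bounded at the cost of
dropping the more-specific announcements that multihomed customers
rely on, which is exactly the tension this thread is about.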
The real worry, however, has been that the growth of the default-free
routing table has in the past exceeded the growth predicted by
"Moore's law" (which has historically been a reasonably good predictor
of the electronics component manufacturers' ability to produce faster
or higher-density components, be that CPUs, DRAMs or what have you).
That does not bode well for the longer-term success of the approach of
"throwing hardware at the problem".

So, it is some people's opinion that if this problem is going to be
solved properly (in a properly scalable fashion), we probably need a
new routing and addressing paradigm. It is my current personal opinion
that in this context IPv6 doesn't really contribute anything over and
above IPv4's routing architecture except "more of the same" (i.e.
longer addresses, while still holding on to most other aspects of the
IPv4 routing architecture). However, let's just say that I'm not
holding my breath while waiting...

Regards,

- Håvard
[ lir-wg Archives ]