OSPF uses a simple formula to calculate the cost for an interface:
cost = reference bandwidth / interface bandwidth
The reference bandwidth is a value in Mbps that we can set ourselves. By default, this is 100 Mbps on Cisco IOS routers. The interface bandwidth is something we can look up.
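As a quick illustration of how the math works, here is a minimal Python sketch (the function name is my own; the truncation to an integer and the minimum cost of 1 match how OSPF uses the result):

def ospf_cost(reference_bw_kbps, interface_bw_kbps):
    # cost = reference bandwidth / interface bandwidth,
    # truncated to an integer; OSPF never uses a cost below 1.
    return max(reference_bw_kbps // interface_bw_kbps, 1)

# Default reference bandwidth on Cisco IOS: 100 Mbps = 100,000 kbps
print(ospf_cost(100_000, 100_000))    # FastEthernet (100 Mbps)  -> 1
print(ospf_cost(100_000, 1_000_000))  # GigabitEthernet (1 Gbps) -> 1 as well

With the default reference bandwidth, anything of 100 Mbps or faster ends up with the same cost of 1, which is why the reference bandwidth is often increased on networks with gigabit (or faster) links.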
Let’s take a look at an example of how this works. I’ll use a single router, R1, which has two interfaces, a FastEthernet and a serial interface:
R1#show ip interface brief
Interface              IP-Address      OK? Method Status  Protocol
FastEthernet0/0        192.168.1.1     YES manual up      up
Serial0/0              192.168.2.1     YES manual up      up
Let’s enable OSPF on these interfaces:
R1(config)#router ospf 1
R1(config-router)#network 192.168.1.0 0.0.0.255 area 0
R1(config-router)#network 192.168.2.0 0.0.0.255 area 0
After enabling OSPF, we can check what the reference bandwidth is:
R1#show ip ospf | include Reference
Reference bandwidth unit is 100 mbps
By default, this is 100 Mbps. Let’s see what cost values OSPF has calculated for our two interfaces:
R1#show interfaces FastEthernet 0/0 | include BW
MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec
R1#show ip ospf interface FastEthernet 0/0 | include Cost
Process ID 1, Router ID 192.168.1.1, Network Type BROADCAST, Cost: 1
The FastEthernet interface has a bandwidth of 100,000 kbps (100 Mbps), and the OSPF cost is 1. The formula to calculate the cost looks like this:
100,000 kbps reference bandwidth / 100,000 kbps interface bandwidth = 1
What about the serial interface? Let’s find out:
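Assuming the serial interface is still at its default bandwidth of 1544 kbit/sec (the Cisco IOS default for a serial interface), we can already predict the result with the same formula:

100,000 kbps reference bandwidth / 1,544 kbps interface bandwidth ≈ 64.77

OSPF rounds this down, so we would expect a cost of 64.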