Friday, April 27, 2012

10Gig options within the rack

This is a roundup of my preliminary look into the current state of 10 Gigabit Ethernet cabling within a rack.  While 10GBASE-SR is the common compatibility choice, it's also one of the more expensive options.  I've seen it commonly supported in devices ranging from firewalls to storage.

While a bit dated now, Lisa Huff gives a great analysis of the copper lay of the land in 10GBASE-T vs 10GBASE-CR:
        "Both SFP+ DAC and 10GBASE-T products will be needed in the long term – 10GBASE-T for inexpensive connections and 10GBASE-CR (SFP+ DAC) for lower latency and lower power consumption."

Howard Marks gives a great update on what's happening with the previous power-hog champion, 10GBASE-T, which originally weighed in at 8W PER PORT:
All I Want For Christmas Is 10Gbase-T - Network Computing

...which is partially a response to the original (and convincing) attack on 10GBASE-T by the illustrious Mr. Ferro.

Monday, April 23, 2012

DIY Router or Switch Management port

Say you've become very fond of that dedicated management port on your Cisco Nexus switch. Now, how do you connect your Catalyst 6500 to that same out-of-band management network?

Local policy to the rescue!  This works like a champ and is not to be confused with interface Policy-Based Routing, which acts on transit traffic. (Although PBR does have a great ring to it!)

A local policy is triggered by packets the device itself originates, not traffic being routed through it.  Here's a sample:


ip local policy route-map local-mgmt
!
ip access-list extended mgmt
 permit icmp host 10.2.3.4 10.161.161.0 0.0.0.255
!
route-map local-mgmt permit 10
 match ip address mgmt
 set ip next-hop 10.2.161.1
!
end

Where 10.2.3.4 is a loopback or other management-related interface on your switch, and 10.2.161.1 is the next-hop router that gets you to your management networks.  Of course, you could alternatively just make your OOB management a single, large, flat subnet.  Connected routes for the win!
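To verify it's working, assuming the config above (addresses are yours to substitute, and 10.161.161.10 here is just a hypothetical host on the management subnet), source a ping from the loopback and check that the route-map counters increment:

ping 10.161.161.10 source 10.2.3.4
show route-map local-mgmt
show ip local policy

One caveat: the sample ACL matches only ICMP, so only pings get policy-routed.  To send SSH, SNMP, and the rest down the same path, broaden it to something like 'permit ip host 10.2.3.4 10.161.161.0 0.0.0.255'.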


For more information, Petr Lapukhov has several other examples; his fourth coincides with mine above.

Wednesday, April 18, 2012

Brief Netflix analysis


Firing up Netflix in a browser, I decided to see what could be found.  Wireshark was my main cohort in this exercise.  Selecting Statistics > Conversations and then sorting by Bytes surfaced the top remote endpoints by volume.
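If you'd rather skip the GUI, tshark (Wireshark's command-line sibling) produces roughly the same table.  A minimal sketch, assuming the capture was saved to a file I'm calling netflix.pcap:

>tshark -r netflix.pcap -q -z conv,tcp

Each row shows frames and bytes in both directions per TCP conversation, which makes the heavy hitters easy to spot.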
Back in the main capture, I looked at the order in which these remote IPs were referenced, then tried a reverse DNS lookup on each:

>nslookup 50.19.81.73
Name:    ec2-50-19-81-73.compute-1.amazonaws.com
Address:  50.19.81.73

>nslookup 65.126.84.9
Name:    a65-126-84-9.deploy.akamaitechnologies.com
Address:  65.126.84.9

>nslookup 65.126.84.11
Name:    a65-126-84-11.deploy.akamaitechnologies.com
Address:  65.126.84.11

>nslookup 65.126.84.18
Name:    a65-126-84-18.deploy.akamaitechnologies.com
Address:  65.126.84.18
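Side note: on a *nix box, a one-line loop covers all four lookups at once; this is just shell sugar over the same commands:

$ for ip in 50.19.81.73 65.126.84.9 65.126.84.11 65.126.84.18; do nslookup $ip; done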

So it seems the main 'logic' of the website, such as account login, pulling up my favorites, and searching for movies, is hosted on Amazon.  My guess is that the movie poster images are hosted on Akamai.  The HTTP headers of one such request confirm it:

GET /en_us/boxshots/166/60020865.jpg HTTP/1.1\r\n
Host: cdn-5.nflximg.com\r\n
Keep-Alive: 115\r\n
Connection: keep-alive\r\n
Referer: http://movies.netflix.com/WiHome\r\n

Finally, I pulled out the TCP stream for the top flow based on Bytes.  


A 'whois' on the IP reveals it is from Level3:

NetRange:  8.0.0.0 - 8.255.255.255
CIDR:      8.0.0.0/8
NetType:   Direct Allocation
OrgName:   Level 3 Communications, Inc.


The Wireshark decode shows an HTTP/1.1 stream (Content-Type: application/octet-stream, Server: Level-3 Origin Storage/1.5).
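For the record, those same two headers can be pulled out with tshark.  Again assuming the capture was saved as netflix.pcap (my filename, not Netflix's); older tshark builds take -R for the display filter where newer ones take -Y:

>tshark -r netflix.pcap -Y http.response -T fields -e http.server -e http.content_type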

A little bit of packet capture tells us they use Amazon Web Services for the front-end and account logic, Akamai for static content, and Level3 origin storage for the media streams.  Some to-do items: check browser data for the application type that plays the media, and look at TCP header data to learn more about flow control and media streaming.

Monday, April 16, 2012

North American IPv6 Summit

Last week, Denver hosted the 2012 North American IPv6 Summit.

The keynote by John Curran of ARIN kicked things off nicely.  My favorite points were:

  1. A few years ago, NAT was sounding the death knell for IPv6 adoption.
  2. Today, media providers are hindered by NAT. (Think tracking and targeting...)  Follow the money, people - IPv6 is gaining support.

Other interesting tidbits overheard:

  1. Enable a client for IPv6 and see content come in that way as much as 10% of the time.
  2. Enable a web server for IPv6 and see less than 1% of traffic arrive over it.
  3. Over 30% of DNS domains are IPv6-enabled, largely due to the efforts of GoDaddy and other leading registrars.

I learned from Oracle's Paul Zawacky that dual stack is becoming the transition method of choice.  Owen DeLong of Hurricane Electric advised sticking with a /64 per VLAN and staying away from "IPv4 thinking" when considering alternate subnet masks.  Finally, NAT64 is a very interesting method for moving all hosts to a pure-IPv6 environment while maintaining v4 Internet access.
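The dual-stack config itself is refreshingly small.  A minimal sketch using made-up documentation addresses (2001:db8::/32 is reserved for exactly this), with one /64 per VLAN as DeLong suggests:

ipv6 unicast-routing
!
interface Vlan161
 ip address 10.2.161.2 255.255.255.0
 ipv6 address 2001:db8:161::1/64

No tunnels, no translation - v4 and v6 ride side by side until the day v4 can be switched off.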

@scotthogg organized and summarized the event quite nicely.  Thanks!