Hey Nolan, I love Cumulus' approach and I bet you guys will go very far!
I thought about adopting Cumulus for a large telco project, but for the moment it doesn't seem like a good fit, as we do lots of OpenFlow, L3, and custom application development.
We don't do OpenFlow, but L3 and custom apps are our main focus.
That said, the subset of things that can be done with OpenFlow on actual existing hardware are often things we can do natively. Feel free to email me if you want to discuss!
There are 48-port 1-gig switches on Cumulus's HCL; they're around $2,500, plus $700/yr for Cumulus support. Unlike Cisco or Juniper, you must maintain support in order to continue using the product (legally). I can't imagine getting enough utility from a switch for this to make sense for home use.
It is substantially better pricing than list for comparable Juniper / Cisco / etc. equipment. However, Cumulus has no truly low end 1 gig switch (single power supply, limited L3 capability), and you can absolutely negotiate Juniper / Cisco / etc. down to be close to or even below the pricing of the Cumulus solution. That is tougher for 10 gig or 40 gig equipment, which is where the value proposition kicks in for Cumulus.
Maybe Cumulus's approach is enough of a value add for it to make sense to pay a premium, but everyone I talk to is interested in cost savings first and better manageability a distant second. You'll still have to have Juniper / Cisco / etc. in your life to an extent: Cumulus doesn't do routers, and they don't have a full range of switch models.
All of these things are based on the Broadcom Trident II chip. If you want it cheap, don't go to Juniper - get one from Quanta. You can have a 32x40G switch for $6,000.
On a similar note, I just recently became aware of Cumulus[1] Debian-based switches (though the good bits are closed source) from, among others, edge-core[2] (via a presentation by PaaS provider http://zetta.io) -- e.g.:
To clarify, the only part that is closed is "switchd", which is a userspace program that watches the kernel data structures (route tables, neighbor tables, bridges, ports, vlans, etc.) and programs the hardware to match. It links against proprietary silicon vendor SDKs and programs registers whose descriptions were given to us under NDA.
Without this part, everything works the same, but is of course not hardware accelerated. So the 100% open source parts of Cumulus Linux would still make a great Network OS for a router/switch VM.
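Conceptually, switchd's job can be pictured as a reconciliation loop. Here's a hypothetical, much-simplified sketch of that pattern -- the `Asic` class and the route tables are stand-ins for illustration only; the real switchd reads kernel state via netlink and programs ASIC registers through the vendor SDK:

```python
# Toy sketch of the reconcile pattern: watch the kernel's view of
# forwarding state and program the hardware to match. Everything here
# (Asic, the route dicts) is hypothetical, for illustration only.

class Asic:
    """Stand-in for the proprietary SDK: a table of prefix -> next hop."""
    def __init__(self):
        self.routes = {}

    def program(self, prefix, nexthop):
        self.routes[prefix] = nexthop   # real code would write TCAM entries

    def invalidate(self, prefix):
        del self.routes[prefix]

def diff(kernel_routes, asic_routes):
    """Compute what to add and what to delete to make the ASIC match."""
    to_add = {p: nh for p, nh in kernel_routes.items()
              if asic_routes.get(p) != nh}
    to_del = [p for p in asic_routes if p not in kernel_routes]
    return to_add, to_del

def reconcile(kernel_routes, asic):
    to_add, to_del = diff(kernel_routes, asic.routes)
    for prefix in to_del:
        asic.invalidate(prefix)
    for prefix, nexthop in to_add.items():
        asic.program(prefix, nexthop)

asic = Asic()
reconcile({"10.0.0.0/24": "swp1", "10.0.1.0/24": "swp2"}, asic)
reconcile({"10.0.0.0/24": "swp3"}, asic)  # one route changed, one removed
print(asic.routes)  # {'10.0.0.0/24': 'swp3'}
```

Without hardware present, the same loop degenerates to "do nothing", which is why the rest of the system keeps working unaccelerated.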
We don't yet have an official VM version, but that is something we will have in the future.
What is the flexibility with "open" switches? To get line-rate switching, I'm guessing you're still limited by the hardware? Is the benefit that you can more easily set up routing tables (instead of depending on the switch vendor's capabilities), vlans, etc., just by creating them in userspace and then pushing them over to the hardware part?
Or can you actually get fairly low level, like implementing your own algorithms for channel bonding? A while back I wanted to do some L7 inspection, but could only get like 10G per server, and we had 40G coming in. EtherChannel didn't acceptably balance out the traffic. Doing so would have required dealing with one of the network processor vendors and all that mess. Would an open switch platform make this a straightforward exercise?
You are limited by the hardware, and what our code supports programming into it.
The big advantages are reusing config management tools like puppet/chef/ansible/etc, and monitoring tools like collectd/graphite/nagios/etc.
Also, it is super easy to run services on the switches. For example, you can easily run isc-dhcpd on each ToR, instead of DHCP relaying back to one mega DHCP server. Distributing services like this scales better, and reduces the blast radius of service failure.
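To illustrate why reusing server tooling works: the switch is configured through the same files and daemons as any Debian box, so the same templates your config management already renders for servers apply. A hypothetical sketch (the port list and VLAN are made up; Cumulus names front-panel ports swp1, swp2, ...) of generating interface stanzas the way a Puppet/Ansible template would:

```python
from string import Template

# Hypothetical example: render ifupdown-style stanzas for a handful of
# front-panel ports, exactly the way a config-management template would.
# The stanza syntax here is a simplified sketch, not an exact config.
STANZA = Template(
    "auto $port\n"
    "iface $port\n"
    "    bridge-access $vlan\n"
)

ports = {f"swp{n}": 100 for n in range(1, 4)}   # access ports in VLAN 100
config = "".join(STANZA.substitute(port=p, vlan=v)
                 for p, v in ports.items())
print(config)
```

The point isn't this particular snippet: it's that there is no vendor CLI between you and the config files, so any templating/monitoring tool that works on your servers works here too.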
I've been experimenting with the idea of a transparent caching TFTP proxy server running on the top of rack switch, to make PXE scale better to large clusters.
The important thing is that anyone who has the know-how to write a transparent caching TFTP proxy server for Linux can just go ahead and do that on a Cumulus Linux switch! You don't need to come to us and convince us that it is a good idea and then wait for us to actually implement it. Compare that to asking for features from a traditional switch vendor...
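As a taste of how approachable that is, here is a hypothetical sketch of the core cache-or-fetch decision such a proxy would make: parse the TFTP read request (per RFC 1350, an RRQ is a 2-byte opcode followed by NUL-terminated filename and mode) and serve from a local cache, hitting the central server only on a miss. The upstream fetch is stubbed out; a real proxy would also have to speak the full DATA/ACK transfer protocol:

```python
import struct

OP_RRQ = 1  # TFTP read request opcode (RFC 1350)

def parse_rrq(packet: bytes):
    """Parse a TFTP RRQ: 2-byte opcode, then NUL-terminated filename
    and transfer mode. Returns None for non-RRQ packets."""
    (opcode,) = struct.unpack("!H", packet[:2])
    if opcode != OP_RRQ:
        return None
    filename, mode, _rest = packet[2:].split(b"\x00", 2)
    return filename.decode(), mode.decode().lower()

class CachingProxy:
    """Sketch of the cache-or-fetch logic only; fetch_upstream is a stub
    standing in for a real TFTP client talking to the central server."""
    def __init__(self, fetch_upstream):
        self.cache = {}
        self.fetch_upstream = fetch_upstream

    def handle(self, packet: bytes) -> bytes:
        req = parse_rrq(packet)
        if req is None:
            raise ValueError("not a read request")
        filename, _mode = req
        if filename not in self.cache:           # miss: one upstream fetch
            self.cache[filename] = self.fetch_upstream(filename)
        return self.cache[filename]              # later nodes hit the cache

fetches = []
def fake_upstream(name):
    fetches.append(name)
    return b"fake pxelinux.0 contents"

proxy = CachingProxy(fake_upstream)
rrq = struct.pack("!H", OP_RRQ) + b"pxelinux.0\x00octet\x00"
proxy.handle(rrq)
proxy.handle(rrq)    # second boot request is served from the ToR's cache
print(len(fetches))  # 1 -- only one hit on the central server
```

With this on every ToR, a thousand nodes PXE-booting at once generate one upstream fetch per rack instead of a thousand against the central server.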
We've been loving Cumulus + Quanta for 10Gb and 40Gb, in that it's more manageable than Cisco (for our environment) and a fraction of the cost. We end up using it at 1Gb too, but it's just a price match there, instead of a win.
While I wasn't aware you could get them that cheap, I think I'd still look into second-hand InfiniBand for home use. It allows the use of copper for short (PC-to-PC, point-to-point) distances. While optical is absolutely cool, as far as I can tell a single 1 m patch cable will cost ~80 USD -- quite a lot considering the cost of the NIC... And even in IBM's sales brochure for their optical switch, InfiniBand comes out a little ahead:
40GbE uses approximately the same copper cables as InfiniBand. There's probably more used IB equipment floating around than used 40GbE, but otherwise I'd go with Ethernet.
Ah, of course. I was misled by Wikipedia[1], but it should've been obvious that the same connection could be used for both InfiniBand and Ethernet. So, indeed, there are copper interconnects:
Actually, it seems the price is finally coming down a bit (compared to what I remember these used to cost, years ago -- but maybe I've just upped my budget ;-).
"The Quad Small Form-factor Pluggable (QSFP) is a compact, hot-pluggable transceiver used for data communications applications. It interfaces networking hardware to a fiber optic cable."
But it also interfaces hardware to copper cables, as I gather.
The cable you're looking for is called a direct attach cable: "fake" optical modules on both sides, plugged together with a permanently wired copper cable.
DAC is a common term for this. Direct Attached Copper.
It is an SFP+ plug on each end (but without all the optical magic), connected with twinax cable, which is like coax but with two signal paths, one for each direction.
However, at no single site (especially my home) do I need a 6U chassis full of switch ports.
Is there a 1U version of this on the horizon ?