Julien Goodwin
Juniper QFabric, what's missing?
17th-Sep-2011 11:49 pm
Juniper have finally released QFabric, and although it's only day one there are a few things that, for me, are missing before this becomes a really nice solution.

Given that my job allows me to not care about datacenter networks these days, this is a somewhat academic exercise, but I still think about them.

1. Common XRE - Juniper now has several external routing engines: the QFX3100 (for QFabric), the XRE200 (for the EX8200), and the JCS1200 (for T-series and TX-Matrix; this one is an IBM blade chassis). As they're all so similar, why not make them a single SKU with multiple software loads? Ideally that would include a BGP route reflector, something Juniper operators are crying out for, but the only official option, the JCS1200, is too physically large and expensive for the job. (A rough sketch of the route-reflector scaling argument follows after this list.)

2. Single-box management plane switch. Build a big box with just the needed gig-e ports and a single pair of power supplies; even if it's just four EX4200s internally it would make things neater.

3. MX/SRX5k interface module. Make a four-port 40GbE module for the MX and SRX5k that uplinks directly into QFabric, even if it's really most of a QFabric node from the QFabric side and some weird aggregate interface from the MX/SRX side. This would allow external connectivity and security to live closer to the fabric. A node module with four 40GbE ports up to the fabric and four 40GbE/4x10GbE combo ports would also work, although it may not be worth producing.

4. Offer a fibre control plane option. Currently the management network is copper only, which limits the furthest rack to 100m of cable run from the fabric interconnect (assuming the management switches sit next to the fabric interconnect). Going to multimode fibre only moves that limit to 150m on OM4, due to the reach limits of 40GbE optics. If you could use single mode for both the control and data planes, cable distance would essentially stop being an issue. The real win here is eliminating bulky copper runs, including cross-rack copper, which is important to avoid in some situations. (A quick reach comparison is sketched after this list.)

5. A 4RU(ish) interconnect, to support ~32 nodes. This would be enough for plenty of situations and would allow a fully redundant setup (control switches, directors, and interconnects) in less than half a rack. Building anything smaller is probably not worth it. (Juniper has stated that "smaller" interconnects are coming, but no solid sizes have been announced.) These days that would buy at least 32 redundantly connected racks of blade chassis, which, once they're running hypervisors, is a huge amount of capacity (and power, for that matter) and would suffice for many situations. (Some back-of-the-envelope numbers follow below.)
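
On item 1's route-reflector point, the scaling argument is simple: a full iBGP mesh needs a session between every pair of routers, while clients of a redundant pair of route reflectors need only two sessions each. A minimal sketch of that arithmetic (the router counts are illustrative, not from anything above):

    # Why operators want a small route-reflector appliance: a full iBGP mesh
    # needs n*(n-1)/2 sessions, while clients of a redundant pair of route
    # reflectors need only two sessions each (plus the reflectors peering
    # with each other).

    def full_mesh_sessions(n_routers: int) -> int:
        """iBGP full mesh: every router peers with every other router."""
        return n_routers * (n_routers - 1) // 2

    def route_reflector_sessions(n_clients: int, n_reflectors: int = 2) -> int:
        """Each client peers with every reflector; reflectors mesh among themselves."""
        return n_clients * n_reflectors + n_reflectors * (n_reflectors - 1) // 2

    for n in (10, 50, 200):
        print(f"{n:>4} routers: full mesh {full_mesh_sessions(n):>6} sessions, "
              f"two RRs {route_reflector_sessions(n):>4} sessions")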
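
On item 4, a quick sketch of the reach comparison, using the commonly quoted maximums (100m for 1000BASE-T copper, 150m for 40GBASE-SR4 over OM4, 10km for 40GBASE-LR4 over single mode); the specific rack distances are just examples:

    # Which cabling options can reach a rack at a given cable distance from
    # the fabric interconnect. Reach figures are commonly quoted maximums:
    # 1000BASE-T copper 100m, 40GBASE-SR4 on OM4 150m, 40GBASE-LR4 on
    # single mode 10km.

    REACH_M = {
        "copper control plane (1000BASE-T)": 100,
        "multimode 40GbE (40GBASE-SR4 on OM4)": 150,
        "single mode 40GbE (40GBASE-LR4)": 10_000,
    }

    def usable_media(rack_distance_m: float) -> list[str]:
        """Media whose maximum reach covers a rack at this cable distance."""
        return [name for name, reach in REACH_M.items() if rack_distance_m <= reach]

    for distance in (80, 120, 300):
        options = ", ".join(usable_media(distance)) or "out of reach"
        print(f"rack at {distance:>3}m of cable run: {options}")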
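
And on item 5's capacity claim, some back-of-the-envelope numbers. The per-node port count matches the QFX3500 (48 x 10GbE access ports); the blades-per-rack and VMs-per-blade figures are purely illustrative assumptions, not anything from Juniper:

    # Back-of-the-envelope capacity for a ~32-node fabric. Port counts match
    # the QFX3500 (48 x 10GbE access ports per node); blade and VM densities
    # are illustrative assumptions only.

    NODES = 32                   # nodes a 4RU-ish interconnect might support
    ACCESS_PORTS_PER_NODE = 48   # 10GbE server-facing ports per node
    RACKS = 32                   # "at least 32 redundantly connected racks"
    BLADES_PER_RACK = 16         # assumed: one 16-blade chassis per rack
    VMS_PER_BLADE = 20           # assumed hypervisor density

    access_ports = NODES * ACCESS_PORTS_PER_NODE
    hypervisors = RACKS * BLADES_PER_RACK
    vms = hypervisors * VMS_PER_BLADE

    print(f"{access_ports} x 10GbE access ports across {NODES} nodes")
    print(f"{RACKS} racks of blades -> {hypervisors} hypervisors -> ~{vms} VMs")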