If you're totally unfamiliar with ACI (formerly Insieme), I recommend listening to Episode 12 of the Class C Block podcast with guest Joe Onisick. This was far more informative than anything I encountered at the actual launch event, probably because the Tech Field Day crew went straight from the John Chambers presentation into a room where we recorded a roundtable discussion. There may have been some technical discussion going on next door, but I missed it.
There's no shortage of people expressing opinions about ACI and what it will or won't do for you, most of whom have beaten me to the punch by several days. I'm going to post instead about a few details of the launch that I found interesting.
Defining Policy Might Not Be Easy
ACI requires that applications (really, application owners) express to it the relationships between nodes before any traffic is allowed to flow. There are countless ways this might happen, but they all boil down to figuring out which ports on virtual or physical switches need to communicate to make the application run. There are the obvious flows, like the ones between middleware and database, and then the less obvious ones: syslog, DHCP, DNS, RADIUS, and so on. All of these communications will need to be identified to the ACI controller, and I worry that identifying them all will turn out to be nontrivial.
While commercial applications generally have their requirements spelled out pretty clearly for firewall purposes, I've found that in-house applications often do not. The application developers know their components, but often don't know what's really happening under the covers. Too often I have conversations about firewall policy that include statements like the following: "It's not UDP or TCP. I keep telling you, I'm using an MQ library!"
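To make the discovery problem concrete, here's a minimal sketch of the gap between what an application team declares and what their app actually does on the wire. This is plain Python with invented group names, protocols, and ports, not anything resembling a real ACI policy format:

```python
# A sketch of the gap between what an app team *thinks* their
# application needs and what it actually does on the wire.
# All group names, protocols, and ports here are invented.

# What the developers put on the firewall request form:
declared = {
    ("web",        "middleware",     "tcp",  8080),
    ("middleware", "database",       "tcp",  1521),
}

# What a flow collector or packet capture actually observed:
observed = declared | {
    ("web",        "dns-servers",    "udp",    53),
    ("middleware", "syslog-servers", "udp",   514),
    ("middleware", "radius-servers", "udp",  1812),
    ("database",   "backup-servers", "tcp", 10000),
}

# Every one of these flows would silently break in a default-deny fabric:
for flow in sorted(observed - declared):
    print("undeclared flow:", flow)
```

Every flow in that diff is something nobody mentions until it breaks.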
Canned Policies?
Somebody (I think it was Pete Welcher?) speculated that software vendors might decide to ship ACI policies with their products. In the same way that we deploy virtual appliances today, large software packages could ship with canned ACI policies. That would certainly ease the rollout of a multi-tier software package into an ACI environment.
Ultimately, The Policy Expresses What Now?
If I understand things correctly, the policy exists to fit switchports into categories and then to express which categories of ports may talk to one another. The categorization can be based on all sorts of criteria including things that might be reminiscent of firewall policy. The Cisco folks even evoke firewall notions when they explain that the system defaults to no communication: "Hosts can't talk without a policy explicitly allowing it."
The security angle is compelling, but don't confuse it with a packet filter. Think of it more like a policy which assigns assets to VLANs in a router-free environment. Once the assignment is done, any and all communication is possible between assets which have been bucket-ized together. The "express communication requirements to the network in application terms" business isn't a packet filter.
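A toy model of that distinction, in plain Python with made-up hosts and groups (emphatically not the real policy schema):

```python
# Toy model of the "bucket" view of ACI policy: endpoints are assigned
# to groups, and the policy says which groups may talk. Note there is
# no protocol or port matching here at all -- once two groups may
# communicate, everything between them is permitted, much like putting
# hosts in the same VLAN. All names are invented.
GROUP_OF = {
    "web01": "web",
    "web02": "web",
    "app01": "middleware",
    "db01":  "database",
}

# Which pairs of groups may communicate; anything unlisted is denied.
ALLOWED_PAIRS = {
    ("web", "middleware"),
    ("middleware", "database"),
}

def may_talk(host_a, host_b):
    """Default deny between groups; everything goes within an allowed pair."""
    a, b = GROUP_OF[host_a], GROUP_OF[host_b]
    return a == b or (a, b) in ALLOWED_PAIRS or (b, a) in ALLOWED_PAIRS

print(may_talk("web01", "app01"))  # True -- any protocol, any port
print(may_talk("web01", "db01"))   # False -- no policy between these groups
```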
About ASICs
Cisco's "merchant silicon plus" strategy with the Nexus 9500 speaks volumes about the future of packet forwarding hardware. Cisco has a proud tradition of building in-house ASICs, and they're backing away from it. Sure, they're shipping ACI-specific hardware alongside the Broadcom Trident II, and it does things the Trident II can't, like routing between VXLAN and NVGRE overlays. But I'm betting the ACI-specific hardware will sit completely idle at a large percentage of customers, meaning that for them the Nexus 9500 is a commodity switch running a stripped-down version of NX-OS (there was a slide about this, but I couldn't find a citation - edit: Gideon Tam shares this slide from the presentation).
Fabric Modules
Nexus 7000 fabric modules aren't much to look at: they're inexpensive (compared to the rest of the platform components), they don't generate much heat, and so on. The Nexus 9500 represents a big departure from the arbiter-controlled fabric of the 7000 series, because its fabric modules have switching ASICs onboard (two to four Trident IIs per module), making the 95xx a "Clos-in-a-box" architecture. I'm anxious to see the packet walk for multicast and broadcast traffic in next year's Cisco Live presentation.
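For anyone who hasn't seen Clos outside of a leaf/spine diagram, here's the idea applied inside the chassis. The counts below are illustrative guesses on my part (every line card linking to every fabric ASIC is the defining property of a Clos stage; I haven't seen the actual 9500 internal wiring documented):

```python
# Rough model of a "Clos-in-a-box" chassis: every line-card ASIC links
# to every fabric-module ASIC, so traffic between any two line cards
# can be sprayed across all fabric ASICs. All counts below are
# illustrative guesses, not published Nexus 9500 specifics.
LINECARDS    = 8      # e.g., an 8-slot chassis
FABRIC_ASICS = 6 * 2  # six fabric modules, two Trident IIs each

# In a Clos fabric, the number of equal-cost internal paths between any
# two line cards equals the number of middle-stage (fabric) ASICs:
print(f"{FABRIC_ASICS} internal paths between any pair of line cards")

# And the total number of linecard-to-fabric links in the box:
print(f"{LINECARDS * FABRIC_ASICS} first-stage-to-middle-stage links")
```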
No Midplane
Apparently the 9500 chassis is entirely open from front to back, which is new for Cisco. I assume the line cards and fabric modules connect directly to one another (I haven't seen it yet). This is nifty because it allows front-to-back airflow without the crazy ducting of the Nexus 7010, which seems like it's half duct, half switch.
BiDi Optics
This is cool stuff. Is it unique to Cisco? There's no QSFP module with an LC connector listed on Finisar's site right now.
These modules run 40Gb/s Ethernet for up to 150m over just two strands of MMF. Prior to the BiDi module's introduction, 40Gb/s Ethernet required a 12-strand MPO connector. I've been advising customers for years to populate their data centers with pre-terminated MPO plant; I particularly like the high-density offerings from Corning.
Customers who already have MPO in place have nothing to worry about, but for folks with nothing but duplex LC at top of rack, Cisco claims the QSFP-40G-SR-BD module is a big cost saver, because no new fiber needs to be installed for the upgrade from 10Gb/s to 40Gb/s. Yes, the idea of saving money with Cisco optics is hilarious. :) Pricing isn't out yet.
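The strand math behind the claim is simple. A back-of-the-envelope comparison (the 48-link count is an invented example; the per-link figures are the standard ones for SR4 and BiDi):

```python
# Back-of-the-envelope fiber math for upgrading existing 10Gb/s duplex-LC
# links to 40Gb/s. The link count is an invented example; the per-link
# strand counts are the standard figures for SR4 (12-fiber MPO trunk,
# 8 fibers lit) and BiDi (the same two strands the 10Gb/s link used).
N_LINKS = 48  # hypothetical top-of-rack uplink count

strands_sr4  = N_LINKS * 12  # new MPO trunk per link
strands_bidi = N_LINKS * 2   # existing duplex LC pairs, nothing new pulled

print(f"SR4 over MPO: {strands_sr4} strands of new trunk fiber")
print(f"BiDi over LC: {strands_bidi} strands, all already in place")
```

Whether the module price eats those savings is the question that can't be answered until pricing lands.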