After writing a post yesterday about advancing the state of the art by taking an applied science-based approach, I found this tweet interesting:
The purpose of showing your work is to get better at it – a 'to the point' post from Seth Godin #showyourwork #wol https://t.co/YdlbHJzK77
— Michelle Ockers (@MichelleOckers) March 17, 2016
So I went to check out the post in question, and it struck a further chord. As Seth Godin says:
What works is evolving in public, with the team. Showing your work. Thinking out loud. Failing on the way to succeeding, imperfecting on your way to better than good enough.
Do people want to be stuck with the first version of the iPhone, the Ford, the Chanel dress? Do they want to read the first draft of that novel, see the rough cut of that film? Of course not.
Ship before you’re ready, because you will never be ready. Ready implies you know it’s going to work, and you can’t know that. You should ship when you’re prepared, when it’s time to show your work, but not a minute later.
The purpose isn’t to please the critics. The purpose is to make your work better.
Polish with your peers, your true fans, the market. Because when we polish together, we make better work.
This. Is how cloud computing is evolving. In further related news, I also just saw this:
Looks like we're lifting the veil on Maglev. Take note if you care about software load balancing. https://t.co/JJiHXORZVw
— William Vambenepe (@vambenepe) March 16, 2016
At NSDI ‘16, we’re revealing the details of Maglev, our software network load balancer that enables Google Compute Engine load balancing to serve a million requests per second with no pre-warming.
Google has a long history of building our own networking gear, and perhaps unsurprisingly, we build our own network load balancers as well, which have been handling most of the traffic to Google services since 2008. Unlike the custom Jupiter fabrics that carry traffic around Google’s data centers, Maglev load balancers run on ordinary servers — the same hardware that the services themselves use.
Hardware load balancers are often deployed in an active-passive configuration to provide failover, wasting at least half of the load balancing capacity. Maglev load balancers don’t run in active-passive configuration. Instead, they use Equal-Cost Multi-Path routing (ECMP) to spread incoming packets across all Maglevs, which then use consistent hashing techniques to forward packets to the correct service backend servers, no matter which Maglev receives a particular packet. All Maglevs in a cluster are active, performing useful work.
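The consistent hashing scheme described above is what lets any Maglev forward a given packet to the same backend. The paper's approach builds a fixed-size, prime-length lookup table in which each backend claims slots in round-robin order according to its own permutation of the table. The sketch below is a minimal illustration of that idea, not Google's implementation; the hash functions, table size, and backend names are placeholder assumptions.

```python
import hashlib

def _h(s: str, seed: str) -> int:
    # Stable placeholder hash; the real system uses two distinct hash functions.
    return int(hashlib.md5((seed + s).encode()).hexdigest(), 16)

def maglev_table(backends, m=13):
    """Build a Maglev-style lookup table of prime size m mapping
    slot index -> backend. Each backend gets a permutation of the
    slots (offset + j * skip mod m) and backends take turns claiming
    the next unfilled slot in their permutation, which keeps the
    table nearly evenly balanced."""
    n = len(backends)
    offsets = [_h(b, "offset") % m for b in backends]
    skips = [_h(b, "skip") % (m - 1) + 1 for b in backends]
    table = [None] * m
    next_idx = [0] * n
    filled = 0
    while filled < m:
        for i in range(n):
            # Walk backend i's permutation until a free slot is found.
            while True:
                slot = (offsets[i] + next_idx[i] * skips[i]) % m
                next_idx[i] += 1
                if table[slot] is None:
                    break
            table[slot] = backends[i]
            filled += 1
            if filled == m:
                break
    return table

def pick_backend(table, packet_hash):
    # Any Maglev instance hashing the same 5-tuple picks the same backend.
    return table[packet_hash % len(table)]
```

Because every Maglev builds the identical table from the backend list alone, ECMP can spray packets across instances arbitrarily and each one still maps a given connection hash to the same backend.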
It is worth noting here that this is research-paper sharing rather than a code drop. Google didn't, of course, open source Borg, but it did open source an implementation of Borg's ideas in the shape of Kubernetes. I am wondering whether that team will build its own implementation of Maglev to be open sourced. Load balancers like Maglev would, in any case, be beyond the scale needs of most organisations.
Google though isn't the only one opening the kimono on cloud network architecture. Microsoft just open sourced Software for Open Networking in the Cloud (SONiC), which builds on Azure Cloud Switch (ACS), a Debian-based software switch.
We’re talking about ACS publicly as we believe this approach of disaggregating the switch software from the switch hardware will continue to be a growing trend in the networking industry and we would like to contribute our insights and experiences of this journey starting here.
The challenge for traditional networking gear suppliers will become increasingly severe as the collaborative applied-science approach I described yesterday, underpinned by cloud-scale providers, takes hold in that market. Enterprises and web companies, however, stand to benefit significantly from all of this innovation, in both cost and capability.