My first encounter with the term “fail fast” came in 1999 as I was raising venture capital money for the company that would eventually become Hatteras Networks. Venture capitalists, of course, invest millions of dollars in a number of deals in the hope that one or more will reach an exit (liquidity event) that nets the investors at least a 10x return. Now, if a company is not going to become wildly successful, the investors want to know very quickly so they don’t continue to invest round after round of cash only to see it all vanish when the company eventually fails. In this context, “failing fast” would save the investors lots of money and heartburn.
Fast forward five years or so, and we arrive at my second significant encounter with the phrase “fail fast.” This time it is in the context of Web 2.0 and the new model of rapid software prototyping. This model is best illustrated by looking at the Google application delivery model. Those who are paying attention probably notice that every new application or feature that Google introduces is first tested as a “beta” and is refined continuously. This perpetual beta model allows Google to quickly add new features as users request them and pivot on the successful ones to develop new, previously unimagined capabilities. But just as importantly, it allows Google to remove the features that are not used or have proven ineffective – that have failed. This “fail fast” model is actually a well-documented part of the Google culture.
Now, traditional communications service providers have never been known for being on the cutting edge of popular software development methodologies. In fact, failure has historically been penalized by regulators and customers alike. This is for good reason. The public communications network is considered critical infrastructure, and too important to fail. On top of this cultural fear of failure, add proprietary and dated software tools and methodologies, and you have a recipe for “epoxy that greases the wheels of progress”.
But this year, it seems the tide may be shifting. New technologies such as cloud computing, software-defined networking and network functions virtualization have finally given service providers the tools to develop services rapidly. Modern RESTful APIs, JSON-based interfaces and other Web 2.0 development environments have freed service providers’ IT staff from the stranglehold of proprietary systems. Automation and integration with partners are now straightforward, making the dreams of creative CTOs and product managers finally within reach. This has set the stage for my most recent encounter with the phrase “fail fast.”
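To make the idea concrete, the kind of service automation those RESTful, JSON-based interfaces enable might be sketched as below. The endpoint, field names and values here are purely hypothetical, for illustration only; any real orchestration or OSS northbound API will define its own schema.

```python
import json

# Hypothetical orchestration endpoint -- not a real Overture or carrier API.
API_ENDPOINT = "https://orchestrator.example.com/api/v1/services"

def build_provision_request(service_type, bandwidth_mbps, customer_id):
    """Assemble an illustrative JSON body for a service-creation request.

    Deploying the service as a 'trial' reflects the fail-fast idea:
    the offering can be withdrawn quickly if the market rejects it,
    with no change to the underlying network hardware.
    """
    return {
        "serviceType": service_type,
        "bandwidthMbps": bandwidth_mbps,
        "customerId": customer_id,
        "lifecycle": "trial",
    }

body = build_provision_request("ethernet-vpn", 100, "CUST-0042")
print(json.dumps(body, indent=2))
```

In practice the body would be POSTed to the orchestrator, but the point here is simply that a new service offer is a small, machine-readable document rather than a change to proprietary network equipment.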
Recently, while in discussions with a major US service provider about our Ensemble OSA™, a senior technical representative from the company explained that they planned to shift their service development process to a “fail fast” model. One that allows them to rapidly prototype new service offerings without changing the underlying network hardware. One that allows them to test the services in the real marketplace, quickly weed out the failures and build on the successes.
One of the key benefits of new technologies like SDN, NFV and virtualization is that they enable creation of new services much faster and at a lower cost than traditional models. As a result, the cost of failure is dramatically reduced. Because the service can be delivered more quickly, there is less risk that the market will shift between the start of the project and the delivery. How will this change in the speed and cost of development affect service providers? I predict that within two years, we will see the seeds of this revolution begin to bear fruit. And “fail fast” will, in fact, be the proud mantra of a winning crop of service providers.
Contact Overture to learn how you can increase profitability by transforming service delivery.
Copyright © 2016 | All Rights Reserved | Overture Networks