In the data center, the value of automation is most apparent in manually demanding tasks that must be repeated at scale. If we organize machines to execute these tasks, we typically see huge increases in speed combined with repeatable precision. Humans simply cannot compete with a machine’s capacity to execute the same task over and over again with unfailing accuracy.
So in the quest for greater productivity, do we simply look to automate everything and ditch the business liability that is the human workforce?
Well, no. First, the point of automation is to allow a business to refocus its workforce on tackling higher-level issues that will have a far greater impact on that business’ chance of success. Second, through automated design, there is a new opportunity to re-engineer how parts of the business operate so that they are better aligned with overall business goals. Third, and perhaps the most immediately tangible benefit, is the elimination of operational risk.
Automation is often discussed as though it were a set of tools, but that is only part of the story. The most powerful results come when you consider automation in a top-down fashion, affecting how you design, build, and operate the network.
One example of this is in the realm of Network Design Verification Testing (N-DVT). As engineers, we typically create test environments to validate a prospective network design. We build heavyweight test plans of many hundreds of tests to validate the various features enabled and to stress and discover limits of scale and availability. Upon completion of the plan, provided enough tests have passed, we satisfy ourselves that the design can be put into production.
But wait—do we really understand the potential behaviour of this new network?
The psychology behind this approach is what many have documented as ‘optimism bias’, which, whilst a well-meaning evolutionary trait for tackling adversity, can significantly skew our ability to assess true levels of risk.
This bias is introduced in several ways.
A fully automated approach protects you from this bias in all its forms. It places test execution, result analysis, and result reporting into the machine space, bringing all the benefits machines have to offer: speed of execution, consistency across multiple executions, and results reported in an easily consumed format.
This frees engineers to do what they are good at: devising new tests to automate, analyzing the test results and, most importantly, communicating with the various stakeholders to ensure effective coordination.
But how much faster than a person can a machine be at performing testing? Based on our own experience, a machine can do in one hour what would take a person one month: a speed-up of roughly 160 times, since a working month is around 160 hours. Such a massive reduction in time enables 100% coverage of all tests in every execution. Assuming it takes 12 hours to execute all tests, you could evaluate a different version of software every single day.
This approach can be truly transformative for the business. No longer is testing a cumbersome, incomplete exercise in which resources seemingly disappear for months at a time. Instead, it becomes responsive and transparent, and it demonstrates that the network delivers the capabilities promised to the business.
When you consider the operational lifecycle as a whole, from design through build to operation, Juniper Networks has identified six key areas of focus for any business looking to fundamentally de-risk its network operation whilst accelerating its service deployment process.
There are six use cases for automation in the Operational Lifecycle:
As outlined in our N-DVT example above, an automated approach allows more effective characterization of your network’s behaviour by weeding out the potential for human short-cuts or oversights, and permits human focus to be on analysis of data rather than test execution. This should be allied with a test environment that is as close to your production environment as possible - or your production should mirror your lab, it rather depends on your point of view! In addition, a ‘test everything, every time’ approach should be taken regardless of how trivial the design change is considered to be. Furthermore, the automated test cases should be created to test behaviour, not state: this allows you to build the tests once and use them for the lifespan of the network.
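To make the behaviour-versus-state distinction concrete, here is a minimal sketch in Python. It asserts a property that should hold for the life of the network (“every configured BGP neighbor is Established”) rather than a point-in-time state (“neighbor 10.0.0.2 holds exactly 1,204 prefixes”). The input data structure is a hypothetical stand-in for what a device API would return.

```python
# Behavioural test sketch: checks a durable property of the network,
# not a snapshot of state. The parsed_neighbors input is hypothetical;
# in practice it would be retrieved from the device (e.g. via NETCONF).

def check_bgp_neighbors(parsed_neighbors):
    """Return the addresses of any neighbors that are not Established."""
    return [n["address"] for n in parsed_neighbors
            if n["state"] != "Established"]

# Sample data standing in for a live retrieval:
neighbors = [
    {"address": "10.0.0.2", "state": "Established"},
    {"address": "10.0.0.6", "state": "Active"},
]
failing = check_bgp_neighbors(neighbors)
print(failing)  # ['10.0.0.6'] -> the test fails and flags this peer
```

Because the test encodes expected behaviour rather than a hard-coded state snapshot, the same test remains valid as neighbors are added or removed over the life of the network.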
Version control is a concept long familiar to software developers, and it should be applied to the network design process too. Wouldn’t it be great if a change to a configuration template within your network ‘source code’ resulted in automated provisioning of the new ‘network version’? How about following that up with automated traffic flow generation and test battery execution, with all results captured and normalized for trend analysis whilst highlighting failures that require deeper inspection?
It sounds too good to be true, but much of the heavy lifting here (source control, automatically building the network, running the tests, and accessing, graphing, and centrally archiving the results) has already been provided by the Open Source community. Combine this with the power of Junos to present all configuration and state in a structured (XML) format that can be retrieved using simple API calls (via NETCONF), and you are there.
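To illustrate what “structured (XML) format” buys you, the sketch below parses an interface-status reply using only the Python standard library. The XML imitates the shape of Junos operational output; element names on a real device may differ, so treat this as an illustrative sketch rather than a verbatim device reply.

```python
# Parsing structured operational state with the standard library.
# The reply string below imitates Junos "show interfaces" XML as
# retrieved over NETCONF; it is illustrative sample data.
import xml.etree.ElementTree as ET

reply = """
<interface-information>
  <physical-interface>
    <name>ge-0/0/0</name>
    <oper-status>up</oper-status>
  </physical-interface>
  <physical-interface>
    <name>ge-0/0/1</name>
    <oper-status>down</oper-status>
  </physical-interface>
</interface-information>
"""

root = ET.fromstring(reply)
# One dictionary comprehension replaces the screen-scraping regexes
# that CLI-based tooling would need.
status = {
    intf.findtext("name"): intf.findtext("oper-status")
    for intf in root.findall("physical-interface")
}
print(status)  # {'ge-0/0/0': 'up', 'ge-0/0/1': 'down'}
```

Contrast this with scraping CLI output: the structured reply needs no regexes, and the parsing survives cosmetic changes to the human-readable formatting.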
The automated provisioning used within the Certification step can be re-used upon arrival at that shiny new data center. This not only accelerates the network stand-up but ensures you are deploying your network as designed. It also obviates the need for someone to stage the equipment (another potential source of risk, and therefore delay). At the time of writing, there is at least one Open Source example of configuration management tooling that can drive configuration changes via the console, thus eliminating the need for IP management before you can begin.
Products are the devices in your network, whether physical or virtual. We understand how important an inventory of these devices is, but how well is it maintained? Is it always synchronized with the live network for additions and removals? Does it have full visibility of network software version changes? To get the most accurate readings, your network inventory should be ‘plugged’ into the live network so that new devices and any software changes are detected in real time, i.e. automatically. This is absolutely key to your organization getting a handle on potential issues with specific hardware or software before they manifest themselves.
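A sketch of what that synchronization might look like: reconcile the recorded inventory against live discovery and report additions, removals, and software version changes. Both inputs are hypothetical hostname-to-version mappings; in practice the “live” side would be populated automatically from the devices themselves.

```python
# Inventory reconciliation sketch. The recorded and live inputs are
# illustrative; a real system would discover the live side from the
# network (e.g. via NETCONF device facts).

def reconcile(recorded, live):
    """Compare recorded inventory against live discovery."""
    added = sorted(set(live) - set(recorded))
    removed = sorted(set(recorded) - set(live))
    changed = sorted(h for h in set(recorded) & set(live)
                     if recorded[h] != live[h])
    return {"added": added, "removed": removed, "version_changed": changed}

recorded = {"edge1": "20.4R3", "edge2": "20.4R3", "core1": "21.2R1"}
live     = {"edge1": "20.4R3", "edge2": "21.2R1", "leaf1": "21.2R1"}
print(reconcile(recorded, live))
# {'added': ['leaf1'], 'removed': ['core1'], 'version_changed': ['edge2']}
```

Run on a schedule (or triggered by device events), a check like this keeps the inventory honest without anyone having to remember to update a spreadsheet.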
Probably the greatest opportunity for network destabilization is in the provision and tear-down of services, simply because these instances constitute the bulk of operational changes. Add to that the fact that many enterprises make changes via the Command Line Interface (CLI) without strict control of service configuration templates, and you have a disaster waiting to happen. We might blame lack of skill for introducing these problems, with remediation focused on better training and more rigor in peer review. However, the automation illuminati would argue that the narrow corridor of Service Delivery operation should have been clearly identified as part of Certification, and then enforced through system-driven workflow or even fully software-driven service rollout.
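A minimal sketch of template-driven service delivery: the configuration is generated from a controlled template and validated before it ever reaches a device, rather than being typed at the CLI. The template, service name, and helper function are hypothetical illustrations; real deployments would more likely use a templating engine such as Jinja2.

```python
# Template-driven service provisioning sketch. The VLAN service template
# and its variables are hypothetical; the point is that the only way to
# produce service configuration is through a validated, version-controlled
# template, never ad-hoc CLI.
from string import Template

VLAN_SERVICE = Template("""\
set vlans $name vlan-id $vlan_id
set interfaces $interface unit 0 family ethernet-switching vlan members $name
""")

def render_service(name, vlan_id, interface):
    # Validate inputs up front so bad values never reach the network.
    if not 1 <= vlan_id <= 4094:
        raise ValueError(f"invalid VLAN id: {vlan_id}")
    return VLAN_SERVICE.substitute(name=name, vlan_id=vlan_id,
                                   interface=interface)

print(render_service("customer-a", 100, "ge-0/0/5"))
```

The validation step is the point: a typo that would have slipped through an interactive CLI session is rejected before any device sees it.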
Assuming you have a solid view of your device inventory, the next step in capitalizing on it is to cross-check it regularly against any potential software or hardware issues published by your vendors. This is an important proactive step that allows you to eliminate risks before they become the production issues that so often collapse network operation into a state of fire-fighting. In this panic state, all optimization activities are nearly always put on hold, so events like this stagnate progress, not to mention the time spent chewing over how you got there in the first place! If you can put an automated mechanism in place to provide this cross-check then, in our opinion at least, you will be one step closer to network nirvana.
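The cross-check itself is simple once both sides are structured data. Below, advisories are modelled as platform/version pairs; real advisory feeds and their formats vary by vendor, so this structure is an assumption for illustration.

```python
# Advisory cross-check sketch: flag inventory devices whose platform
# and software version match a published advisory. The advisory format
# here is a simplified assumption, not a real vendor feed schema.

ADVISORIES = [
    {"id": "ADV-001", "platform": "MX480", "versions": {"20.4R3"}},
    {"id": "ADV-002", "platform": "QFX5100", "versions": {"19.4R1", "20.2R2"}},
]

def exposed_devices(inventory, advisories):
    """Return (hostname, advisory id) pairs for matching devices."""
    hits = []
    for host, info in sorted(inventory.items()):
        for adv in advisories:
            if (info["platform"] == adv["platform"]
                    and info["version"] in adv["versions"]):
                hits.append((host, adv["id"]))
    return hits

inventory = {
    "edge1": {"platform": "MX480", "version": "20.4R3"},
    "leaf1": {"platform": "QFX5100", "version": "20.4R3"},
}
print(exposed_devices(inventory, ADVISORIES))  # [('edge1', 'ADV-001')]
```

Run nightly against a live-synchronized inventory, this turns “did anyone read the advisory?” into a report that arrives before the outage does.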
Automated auditing should also be applied to configuration, to understand which parts of your network have drifted from your golden configuration (even with automated provisioning this is still likely, owing to break-fix work or the odd industrious operative electing to deviate from best practice). It should also be applied to network state, as state ultimately defines how traffic is forwarded through the network. Network state is the heart rate and blood pressure of your network, and it should be observed and recorded before and after you give the patient its dose of service re-configuration.
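Detecting configuration drift is, at its core, a diff. The sketch below compares a retrieved configuration against the golden copy using the standard library; the configuration snippets are toy stand-ins for what you would pull from a device.

```python
# Golden-configuration drift detection sketch using difflib from the
# standard library. The two configurations are illustrative samples.
import difflib

golden = """\
set system services netconf ssh
set protocols bgp group peers type external
"""

running = """\
set system services netconf ssh
set system services telnet
set protocols bgp group peers type external
"""

# Keep only added/removed lines, dropping the diff header lines.
drift = [line for line in difflib.unified_diff(
             golden.splitlines(), running.splitlines(),
             fromfile="golden", tofile="running", lineterm="")
         if line.startswith(("+", "-"))
         and not line.startswith(("+++", "---"))]
print(drift)  # ['+set system services telnet']
```

Here the audit surfaces an industrious operative’s extra telnet service, exactly the kind of quiet deviation from best practice that accumulates between formal changes.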
How else can you track whether your intervention has created any potentially unsavoury side-effects? For the most robust assessment, you should check every network vital statistic for deviation against pre-determined limits and thresholds, and this is where automation can help manage the scale. An automated network state validation process should therefore be intrinsic to any Service Delivery process if you are to be sure of the continuing health of the network.
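One way to sketch that validation: snapshot the vital statistics before and after a change, then flag any metric that moves beyond an allowed tolerance. The metric names and tolerances below are illustrative assumptions, not a fixed schema.

```python
# Pre/post-change state validation sketch. Metric names and the allowed
# fractional changes are hypothetical examples.

TOLERANCES = {"bgp_routes": 0.05, "ospf_neighbors": 0.0}

def validate_state(pre, post, tolerances):
    """Return metrics whose post-change value drifts beyond tolerance."""
    violations = {}
    for metric, limit in tolerances.items():
        before, after = pre[metric], post[metric]
        drift = abs(after - before) / before if before else float("inf")
        if drift > limit:
            violations[metric] = (before, after)
    return violations

pre  = {"bgp_routes": 800000, "ospf_neighbors": 12}
post = {"bgp_routes": 798500, "ospf_neighbors": 11}
print(validate_state(pre, post, TOLERANCES))
# {'ospf_neighbors': (12, 11)} -> BGP drift of ~0.19% is within the 5% limit
```

The doctor’s analogy holds: the pre-change snapshot is the baseline reading, and any vital sign outside its tolerance after the dose triggers a closer look (or a rollback).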
Bad things can happen (sorry), and when they do, you will want immediate and relevant data capture. The best way to achieve this is through on-device event handling that alerts your operations team to the specific device(s) actually experiencing the issue. Firstly, this eliminates the time needed to hunt for the problem’s source; secondly, it provides all the relevant data, captured at the time of the issue, so that your vendor’s support organization can respond with full root-cause analysis in the fastest possible time.
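On Junos this role is played by event policies and event scripts; the sketch below is a simplified, hypothetical analogue showing the shape of the idea: match incoming syslog events against registered patterns and run the associated data-capture action at the moment the event fires.

```python
# On-device event handling sketch (a simplified analogue of a Junos
# event policy). Patterns, event text, and the capture action are all
# hypothetical illustrations.
import re

HANDLERS = []

def on_event(pattern):
    """Register a capture action for syslog lines matching pattern."""
    def register(func):
        HANDLERS.append((re.compile(pattern), func))
        return func
    return register

@on_event(r"BGP_NEIGHBOR_STATE_CHANGED .* to Idle")
def capture_bgp(line):
    # In production this would collect command output, traces, and core
    # data at the instant of failure, ready for the vendor's support team.
    return f"captured BGP diagnostics for: {line}"

def dispatch(line):
    """Run every registered handler whose pattern matches the event."""
    return [func(line) for regex, func in HANDLERS if regex.search(line)]

results = dispatch("BGP_NEIGHBOR_STATE_CHANGED peer 10.0.0.6 moved to Idle")
print(results[0])
```

Because the capture runs on the device at the instant of failure, the evidence is gathered before the transient state disappears, rather than hours later when someone reproduces the fault.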
In conclusion, an automation philosophy that spans your operational lifecycle will improve your speed by orders of magnitude, but its fundamental value lies in eliminating risk and permitting the organization to focus on the science of analysis and optimisation.
Introducing something new is difficult, and typically the first time you do something in a new way, the old way would have been faster. Where automation really pays off is on tasks that will be executed a large number of times. Unfortunately, there is no magic number of iterations beyond which automation should be considered; you will need to weigh the investment in effort against the reward, in terms of speed of execution and consistency of results, for your specific requirement.
Here are some best practices for getting started:
Written by David Gethings, Solutions Consultant at Juniper Networks
Written by Richard Balmer, Senior Systems Engineer at Juniper Networks Financial Services