Solution focus – Infrastructure Automation
Automation is a big part of LSD’s solution toolbox, so we decided to take a closer look at it and explain the concept in a little more detail. Customers and people interested in the solution tend to ask the same questions, and they deserve proper answers – so we roped in one of our automation experts, Andrew Hill (aka “Krow” in-house), to tackle some common questions and give some insight into infrastructure automation projects.
A little bit about Andrew’s background in infrastructure automation:
Krow has been in this space for quite a while, mostly working on automation projects focused on provisioning infrastructure and pipeline tooling. He gets called in when companies want to reduce the time their technical people spend provisioning, deploying and maintaining infrastructure. Andrew also works with many different automation tools depending on the task at hand, so he has a deep understanding of both the problems companies face and the tools that can fix them.
What does a common solution look like?
This all depends on the customer’s problems. Typically, you’d start by building and configuring a basic (‘vanilla’) server setup, which is captured as an image – in many of these projects, a Docker container image. That image becomes the template for all the other infrastructure you’ll be provisioning (depending on the requirements), so once the process is complete, it can be deployed thousands of times with exactly the same configuration. In Andrew’s projects, Red Hat Satellite normally features as well, giving him a complete view of the estate (all the infrastructure involved). Servers can be tagged by function – a web server, for example – so that all the tools a web server requires are installed automatically, with the result kept as an image for future web server deployments. This creates an internal standard, so configurations don’t differ depending on which technician built the machine. With a standard base image, plus other images configured for specific roles, your tech people can deploy in a matter of minutes from a single interface.
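To make the role-tagging idea concrete, here is a minimal Ansible playbook sketch of what “install everything a web server needs, identically, every time” might look like. The group name, packages and service are illustrative assumptions, not details from Andrew’s projects:

```yaml
# Hypothetical sketch: apply a standard web-server build to every host
# in the "webservers" group, so each deployment comes out identical.
# Group name, package list and service name are illustrative only.
- name: Configure web servers from the standard baseline
  hosts: webservers
  become: true
  tasks:
    - name: Install the web server toolset
      ansible.builtin.package:
        name:
          - httpd
          - firewalld
        state: present

    - name: Ensure the web server is enabled and running
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: true
```

Because the playbook describes a desired state rather than a sequence of manual steps, running it against one server or a thousand produces the same configuration.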
According to Andrew, there are a few challenges when starting out with an automation project. Most teething problems revolve around the customer environment: getting all the credentials and access to sources can hold the process up a bit, though this depends entirely on the customer and their policies. As with any new solution or technology entering such an environment, security personnel will sometimes want to verify that everything in the automation solution adheres to internal policy. In one case, Andrew explained, security personnel sat down with him and went through the automation process step by step to see how it functions and what it accesses. That isn’t doom and gloom, though – once everything is green-lit, newly deployed machines don’t need to be inspected, because they’re built to be identical to the original.
The good stuff: results that customers are seeing by starting the process.
The main benefit that always gets mentioned is the valuable time saved by the technical resources who used to provision, configure and maintain infrastructure. Deployment times drop drastically, freeing those people up for other revenue-generating work. The environment also gains a benchmark, so infrastructure is standardized – which means less troubleshooting time, since every server is configured the same way. Another big benefit is rolling out updates, patches and other files to infrastructure automatically, instead of someone applying them via group policy or walking from desk to desk. And when something breaks? Just deploy a new one in minutes.
There are quite a few generic tasks that automation is used for, but a particularly interesting idea from Andrew is to automate tasks that are either done on a regular basis (like running a couple of queries or scripts) or run with long stretches of time between events. For example, someone might need to run a script once a week to generate a particular result. Even if the task only takes 15 minutes, doing it weekly adds up to an hour of someone’s time a month – around 13 hours, or almost two full working days, a year. Similarly, if a script only runs every six months, with other projects in between, a technical resource sometimes has to get re-acquainted with the process before it can even be kicked off. Automating these tasks gives that time back. It may seem like a small amount, but imagine how many of those tasks are going on in your business right now – together they can amount to DAYS worth of saved time a year.
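The weekly-script case above is a classic fit for scheduled automation. As a hedged sketch, an Ansible task using the built-in cron module could run the script on a schedule instead of a person doing it by hand – the host group, script path and timing below are assumptions for illustration:

```yaml
# Hypothetical sketch: schedule the weekly report script with cron via
# Ansible, so no technician has to run it manually. The host group,
# script path and schedule are illustrative assumptions.
- name: Automate the weekly reporting script
  hosts: reporting_server
  become: true
  tasks:
    - name: Run the weekly report every Monday at 07:00
      ansible.builtin.cron:
        name: "weekly report"
        weekday: "1"
        hour: "7"
        minute: "0"
        job: "/opt/scripts/weekly_report.sh"
```

Managing the cron entry through Ansible (rather than editing crontabs by hand) also keeps the schedule itself standardized and reproducible across rebuilds.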
We asked Andrew if he’s worked on a cool automation project that was a little different from the standard use case, and he had a great one in mind. On one project, automation was used to generate reports from multiple sources – but the real win was speed: the time taken to produce a report shrank from three hours to just over twenty minutes. That mattered because some of these reports were needed for discussion in meetings, and at three hours apiece it took real planning to have them ready at the right times. At twenty minutes, a report can be generated on much shorter notice. Another handy project was a custom front-end for a customer, with infrastructure specifications in drop-down menus so that users could select their specs and deploy machines through a wizard-like process.
The technology used in Automation
At LSD, our automation projects normally include tools like Ansible and Jenkins – so we had Andrew tell us what they do and what he likes about them. First off, Ansible is a configuration manager with an orchestration component, whereas Jenkins is a pipeline tool with hundreds of plugins for performing tasks in the pipeline process. What Andrew likes about Ansible is that it feels as though it was built from a DevOps person’s point of view, whereas similar tools like SaltStack and Chef have a strong developer focus. That makes it easy for people in the DevOps environment to make sense of everything, keeping the learning curve to a minimum. All of this depends, of course, on the person setting up the automation tasks and what the project requires – many different tools can get the job done.
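The “configuration manager with an orchestration component” idea can be sketched in a few lines: a task declares the desired state of a file, and a handler orchestrates a service restart only when that file actually changes. The file paths and service name here are illustrative assumptions, not from a real project:

```yaml
# Minimal sketch of Ansible's declarative style: state what should be
# true, and let a handler react only when something changes.
# Paths and service name are illustrative assumptions.
- name: Keep a config file in its desired state
  hosts: all
  become: true
  tasks:
    - name: Deploy the standard sshd configuration
      ansible.builtin.copy:
        src: files/sshd_config
        dest: /etc/ssh/sshd_config
        mode: "0600"
      notify: Restart sshd

  handlers:
    - name: Restart sshd
      ansible.builtin.service:
        name: sshd
        state: restarted
```

Plain YAML like this is part of why the learning curve stays low for operations teams: the playbook reads as a checklist of desired outcomes rather than a program.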
Hopefully this has given you a better idea of infrastructure automation: the tools used, how it’s implemented, how projects typically work and the benefits your business can expect from having automation in place. We’ll have more automation content up soon, including a comparison of some of the popular tools.
You can find Andrew Hill on LinkedIn for more of his automation expertise, and we’d like to thank him for taking the time to share this info with us. If you have any questions or would like to add anything, please get in touch on our Contact Us page!