As part of Digital Transformation, enterprises are rethinking their infrastructure, applications, and other software-based systems and service platforms to enable innovation, business agility, performance improvement, resilience, and secure operations. “Automation, while considered a key driver for DevOps, if leveraged in a holistic manner, can be the shortest path to driving related change with confidence,” claims Puppet. With humble roots in Open Source, Puppet is now positioning itself as a major player in Automation. With multiple established Automation providers already serving Global Enterprises, how does Puppet match up? We speak with Puppet’s Chief Technical Strategist to find out.
Top 5 Learning Points
- Most IT operations are repetitive and non-unique. How do we set up systems that may use new technologies to make a difference to the process?
- How can technology teams help deliver business value?
- How does automation enable innovation and business agility, the business needs of today?
- How do CIOs arrive at the ROI for automation?
- What are the security issues automation can help solve?
Show Notes
- Automation by itself is not sufficient to guarantee the successful outcomes of any digital transformation initiative.
- Today, delivering business value is everyone’s job.
- Senior leadership now has an understanding of the strategic core of automation, and that ultimately enables you to deliver higher-quality software faster.
- Automation provides transparency and speed of response and helps handle security issues better, and that is what the business needs.
- The ROI of automation really depends on its ability to increase delivery speed while increasing quality and reducing the defect rate.
- Most of the data from the security industry shows that the biggest security risk in most companies is actually inconsistency across production environments.
Summary
As part of Digital Transformation, enterprises are rethinking their infrastructure, applications, and other software-based systems and service platforms to enable innovation, business agility, performance improvement, resilience, and secure operations. “Automation, while considered a key driver for DevOps, if leveraged in a holistic manner, can be the shortest path to driving related change with confidence,” claims Puppet. With humble roots in Open Source, Puppet is now positioning itself as a major player in Automation. With multiple established Automation providers already serving Global Enterprises, how does Puppet match up? We speak with Puppet’s Chief Technical Strategist to find out.
Transcript
Sanjog: We shall talk about evaluating Puppet’s automation capabilities with Nigel Kersten, Chief Technical Strategist at Puppet. Today, many organizations are using digital transformation to address their next wave of needs and challenges. You claim that while automation is a key driver for DevOps, it can also be the shortest way to drive digital transformation. Is that positioning justified?
Nigel: Automation has been around for quite a while, but with the DevOps movement picking up in the enterprise and corporate sectors, it has gathered steam. However, automation is necessary but not sufficient for DevOps. One of the earlier definitions of DevOps was around CAMS: Culture, Automation, Measurement, and Sharing. Packaged together, these drive a cultural change where silos within organizations break down. This systems-thinking approach to optimizing the software delivery lifecycle spans from testing code and running it in production to actually measuring the results of the automation. And it institutes a culture of sharing, both within the organization and externally.
One of the things we often tell people who are embarking on this journey is that 80% to 90% of what you’re doing in IT is the same as what everyone else is doing. So the prepackaged work in existing Puppet modules covers pretty much the same ground as many of the other tools in the space.
…the way I always think about DevOps and automation and their relationship is that automation is necessary, but it’s not actually sufficient… I totally agree that automation by itself is not sufficient to guarantee the successful outcome of any digital transformation initiative.
We try to standardize operations to align with how everyone else is doing them, then focus on unique differentiation around the 10% of your infrastructure that is actually different from everyone else’s. But I totally agree that automation by itself is not sufficient to guarantee the successful outcome of any digital transformation initiative. This is why we think DevOps really picks up when you take a more holistic, systems-thinking approach and do more than just agile, automated infrastructure management. That’s where you start seeing much higher rates of success.
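As a hedged illustration of that reuse, this is roughly what consuming one of those prepackaged modules looks like; it assumes the puppetlabs-apache module from the Puppet Forge, and the site name and path are hypothetical:

```puppet
# A minimal sketch, assuming the puppetlabs-apache Forge module is
# installed, e.g. via: puppet module install puppetlabs-apache
class { 'apache':
  default_vhost => false,  # we declare our own vhost below
}

# The shared module handles packages, config, and the service;
# only this vhost definition is specific to the organization.
apache::vhost { 'example.com':
  port    => 80,
  docroot => '/var/www/example',
}
```

Everything about installing and running Apache itself is the shared 80 to 90%; the vhost is the differentiated remainder.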
Sanjog: Clearly, automation just enables select IT capabilities. How do you see it enabling innovation and business agility, which are the business needs of today?
Nigel: This has been the other really great aspect of the whole DevOps movement in the enterprise. For too long, IT operations and systems administration were considered a cost center and seen as a necessary evil, without much connection to actual business value. Now we have very tech-savvy senior leaders who have an understanding of the strategic core of automation, and that ultimately enables you to deliver higher-quality software faster.
But on the flip side, people who work in the DevOps space can’t just ignore what the business does. Today, no technical person can opt out of the business; delivering business value is everyone’s job. And that’s been an amazing mindset shift that we’ve seen amongst practitioners and team managers, particularly in large organizations.
We see both: senior leadership has an understanding of the strategic core of automation, and that ultimately enables you to deliver higher-quality software faster.
They now care about the actual value being delivered to the business, and both sides align on any project in terms of delivering that actual business value. It also helps them communicate the changes they’re making. You need buy-in from higher management, but projects that deliver business value and enable the business to be more agile and more innovative get budget support. Teams that show a culture of continuous improvement and innovation will continue getting funded for it, creating that sort of momentum within an organization.
Sanjog: But Nigel, many organizations have already embraced automation, and they’re also talking about DevOps as part of their IT portfolio. In many cases, they have actually been reaping the benefits. What gaps do you think Puppet is uniquely positioned to fill?
Nigel: Yes, I think there are a few things. If you take the whole infrastructure-as-code movement as a proxy for automation, that was one of the earliest trends, starting I’d say 10 or 15 years ago, before the DevOps movement. We have seen a few people offer solutions in that space, but we are unique in how thoroughly we follow an infrastructure-as-code process. Rather than just building automation operators, or creating a series of proprietary systems where you might describe automation as part of the application, what you can do with a tool like Puppet is actually express your infrastructure in a text-based format.
You can start taking advantage of decades of learning from the software engineering side: check your infrastructure code into version control, use tagged releases and branches, and start doing peer review around your code changes.
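As a hedged sketch of what “infrastructure expressed as text” means here, this is the classic package/file/service pattern in Puppet’s language; the application name and paths are hypothetical:

```puppet
# A minimal, hypothetical manifest describing one service as code.
package { 'myapp':
  ensure => installed,
}

file { '/etc/myapp/config.ini':
  ensure  => file,
  owner   => 'root',
  mode    => '0644',
  source  => 'puppet:///modules/myapp/config.ini',  # file shipped in the module
  require => Package['myapp'],
}

service { 'myapp':
  ensure    => running,
  enable    => true,
  subscribe => File['/etc/myapp/config.ini'],  # restart when the config changes
}
```

Because this is plain text, it can be branched, tagged, diffed, and peer-reviewed exactly like application code.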
DevOps isn’t about operations people having to become software engineers. It’s about understanding the software delivery lifecycle and how to take advantage of software engineering practices.
We know that all of these things result in more reliable code, and they have the same general benefits in the infrastructure-as-code space: more reliable infrastructure. We are uniquely positioned to take advantage of those things. We also provide an accessible language; you don’t need to be a full-time programmer. DevOps isn’t about operations people having to become software engineers. It’s about understanding the software delivery lifecycle and how to take advantage of software engineering practices.
We have a very clear domain-specific language in Puppet. It lets you express your infrastructure in a way that is accessible to both developers and operations people. To developers it feels like working with code and software, and operations people who perhaps have limited programming experience find it easy to pick up. I think that’s one of the really big differentiators.
The language is also very robust, and its usage is quite rigorous. When people express their infrastructure in code using Puppet, it can be compiled into a catalog and stored historically. I think we’re increasingly seeing that the DevOps movement provided all sorts of benefits from Dev and Ops teams working together toward a common goal. But inside enterprises, particularly in financial services, retail, and healthcare, there are other constraints. People like to paint the security team as the boogeyman, the bad people who just want to say no to everything. But there are real reasons why those people exist and why processes and policies exist the way they do, and we actually provide a very strong and rigorous system around all of that.
It’s accessible to people who are just embarking on learning programming and this sort of automation journey, but rigorous enough to fulfill the actual business needs around security and compliance.
Say we discovered that we had a vulnerability in the first week of November last year. By reporting on the changes that were made, we can actually go back and see what the state description of that part of the infrastructure was at that time. I think the way Puppet is uniquely positioned is that it’s accessible to people who are just embarking on learning programming and this sort of automation journey, but rigorous enough to fulfill the actual business needs around security and compliance.
Sanjog: Puppet also claims that the automation solutions you deliver ensure best-in-class security and DevOps compared to the competitors. What’s the basis of this claim? And what has been added or changed in your service that allows you to make this assertion or even strengthen it? If it’s a new offering, are there any disclaimers?
Nigel: I’d say it’s the other aspect of the infrastructure-as-code work. What we see from the State of DevOps Report that we put out each year, whose 2017 results will be published in the next few months, is that you get much, much better results if security is involved early in the pipeline, rather than security turning into a toll gate where, to validate production, you run through a checklist of which ports are open and which applications are installed. Once you’ve actually got an infrastructure-as-code approach and a high degree of automation, security can rest with confidence that all of your production systems are configured the same way. There is no inconsistency across them, which is where we see most vulnerabilities and most production outages actually come from. And security can get engaged earlier, in the design phase.
Just as you want your operations teams and developers collaborating early in the design phase, before anything hits production, we’ve seen security increasingly being involved in that process as well. What we’ve done over the last few years, as adoption has increased in the enterprise space, is make those interfaces more accessible: not just getting a bunch of machine metadata out for reports, but making the reporting interfaces much more accessible to people, really improving the speed of answering simple questions about your infrastructure.
At the same time, I think we’ve moved well beyond the point where most enterprises are homogeneous environments with a single tool. Because we’ve been succeeding in the enterprise, we have a lot of people wanting to work with us on integration. Either way, we make the data that we’re exposing more visible and available in other decision-making contexts.
The really big one over the last year, I would say, has been us focusing on giving people the ability to observe the difference between desired and undesired change.
So when a Puppet run makes a change, it will come through on the report saying this has been changed, and we can draw a whole trail all the way from that change through to production. That’s a desired change. Then maybe someone did something manually, there’s a bad actor on the server, or perhaps it was an unwitting dependency, where someone upgraded something and caused a dependent service to change its version. That’s an undesired change. We’ve been really focused on making that distinction available through APIs and data, but also in the interface.
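A hedged sketch of how that preview works in practice: Puppet’s no-op mode reports pending changes without applying them, which is one way to separate desired from undesired change before enforcement.

```puppet
# From the command line, an agent run in no-op mode only reports
# what would change:
#
#   puppet agent --test --noop
#
# A single resource can also be marked no-op while the rest of the
# catalog is enforced, so drift on it is reported but never corrected:
file { '/etc/motd':
  ensure  => file,
  content => "Managed by Puppet\n",
  noop    => true,  # report drift on this file, do not fix it
}
```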
Sanjog: Puppet is still a third party, and when it comes to customer security, when you’re dealing with mission-critical data, systems, or applications, you face different and evolving expectations. How much security have you incorporated into your product? Are you actually well prepared to match up to all the expectations that those decision-makers may have at any given point?
Nigel: Sure. I think this is something where we’ve always had a big advantage over competitors, even though there’s a little bit more work to get going with Puppet, because we use SSL certificates to verify all of the endpoints and the actual services that people connect to. We’ve always had a very strong and rigorous approach to security. We’ve also been quite lucky: I first joined Puppet when we were 14 or 15 people seven years ago, and now we’re getting close to 550.
We have a lot of people who have been in security roles at various organizations. We have people who came from banks in Australia; I worked at Google, closely with the security teams; we have people from Caterpillar. We have a lot of people who have actually had to do security in the enterprise, so we have a pretty good understanding of what you’re actually looking for from a vendor. Ultimately, I think everyone is looking for transparency, speed of response, and a good trail around how you actually handled security issues in the past.
Ultimately, I think it all comes down to this: you’re looking for transparency, you’re looking for speed of response, and you’re looking for a good trail around how you actually handled security issues in the past.
Honestly, most enterprises are in such a terrible place when it comes to security. That is not through the fault of anyone on the security team, but because they’re not automated and there are lots of silos, so they struggle to get visibility across everything. But once we actually show them how seriously we take security and we talk about secure architecture, the security people are often the ones clamoring the most to adopt the solution.
Sanjog: We can see reasons why people choose automation for enterprise IT needs. Are there any not-so-obvious enterprise IT needs that automation solves, and is there a chance of overkill? Are there areas where automation would be overkill? And if you were a customer trying to calculate ROI for automation, would you say that ROI will be all in hard dollars, or would you see the returns being primarily soft benefits?
Nigel: I come from the automation space, and even before I started with Puppet, I was one of those people who was a very strong proponent of automation solving everything. Even if you’ve only got one server running a given service, it’s worth automating it. Not just because of scale, not just because of the speed of deploying features, but for a better disaster recovery strategy. Would you rather have someone follow a checklist and set up a whole box by hand, or would you rather just be able to deploy another instance? What’s your plan when you actually need to scale out that particular service?
I think there are very few spots where automation is actually overkill, but sometimes it is a question of the degree of investment in automation. A good example is legacy systems, which we often run into. Teams are usually working on greenfield deployments, where they want to follow modern principles, and they don’t think there is a huge benefit in investing in automation for the legacy estate. But sometimes really small investments can have a really huge impact. An example would be the foundational layer, like time synchronization and authentication. How often are you having trouble investigating or doing forensics around security issues because log timestamps are out of sync?
Very simple things, like ensuring all of your servers are synchronized in terms of time, ensuring your authentication credentials are up to date, and automating the delivery of credentials to machines and access control, can actually deliver a huge win.
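As a hedged example of how small that investment can be, keeping clocks synchronized across an estate is a few lines with the widely used puppetlabs-ntp Forge module:

```puppet
# A minimal sketch, assuming the puppetlabs-ntp module is installed
# (puppet module install puppetlabs-ntp). Applied to every node, it
# keeps clocks, and therefore log timestamps, in sync for forensics.
class { 'ntp':
  servers => ['0.pool.ntp.org', '1.pool.ntp.org', '2.pool.ntp.org'],
}
```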
I think very rarely is there overkill in automation.
One story I often tell people is about a customer who had 12 instances of a legacy application and decided they weren’t going to bother automating any of them. Instead, they took all of the configuration files for those servers and checked them into version control, then set up simple automation to deliver those config files onto the boxes. It takes two or three minutes in Puppet to define file resources that grab a file and deploy it as the config file. Once they got this insight into things, they realized their 12 services were almost all exactly the same, yet they had ten different variations of the config file. By simplifying, they suddenly freed up a lot of time every time they did an upgrade, and they reduced the differences and inconsistency across those services.
That’s my general answer for whether there’s any place where automation would be overkill. I think very rarely is there overkill in automation.
Now, as far as calculating the ROI for automation, you ultimately have to come back to your actual business goals. They really rest upon increasing delivery speed while increasing quality and reducing the defect rate. Increasing speed matters both for delivering new features and for recovering from failure. How you actually calculate your ROI for automation is by measuring all of these things. Whether that comes out in hard dollars or soft benefits depends upon your business goals, but most people should be able to quantify it in hard dollars. What is the actual cost of running that service? What’s the cost in terms of people’s time spent sitting with the service and managing it all by hand?
…Calculating the ROI for automation, you ultimately have to come back to this: what are your actual business goals? They really rest upon increasing delivery speed while increasing quality and reducing the defect rate.
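To make that concrete with purely illustrative numbers: if engineers spend 10 hours a week hand-configuring a service at a loaded cost of $100 an hour, that is about $52,000 a year; automation that reclaims 80% of that time returns roughly $41,600 annually, before counting the harder-to-price benefits of fewer defects and faster recovery.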
Sanjog: The more you automate, the more risk there would be of cascading issues. What has Puppet done to prevent something like that from happening with its automation suite? Does Puppet have anything in its automation suite to ensure that the very ROI you were looking for in the first place isn’t offset?
Nigel: I want to challenge some of the premises in that question. I feel there is a perceived risk sometimes with automation, where people feel that if we let the robots control everything, we’re going to end up in the Terminator zone. But the reality in most environments is that they’re currently carrying an unknown amount of risk. They don’t actually have great insight into what their infrastructure is doing, and they may be comfortable with that.
They may be used to a world where customers, whether internal or external, report an issue and it gets escalated through the service desk. So someone wakes up at 2:00 in the morning, scrambles through the whole recovery process, and fixes it in 20 minutes before more customers notice. But often that’s not actually a well-quantified risk. Most of the data from the security industry shows that the biggest security risk in most companies is actually inconsistency across production environments.
…The reality in most environments is that they’re currently carrying an unknown amount of risk. They don’t actually have great insight into what their infrastructure is doing, and they may be comfortable with that.
When you look at it from a whole-business perspective, the biggest risks for most companies are overall service availability, being able to ship features securely, because in many cases shipping a feature is the same process as applying a security fix or patch from a vendor, and their overall security posture. I think the biggest driver of risk for all of those is actually the inconsistency caused by manual deployment.
Before I joined Puppet, when I was at Google, Puppet was one of the tools that did more to secure services than anything else I had come across. And we have a no-op mode in Puppet, which allows you to do a simulation run where you can see all the changes that are going to happen before they happen. Contrast that with the sort of processes most enterprises use to reduce risk: a checklist signed off as final by 25 different people, where one step inevitably breaks down and no one person is to blame because they all shared the risk.
What we say is: double down on your automation, and on automating your test environments. We know how this worked with software; people didn’t start writing less code to get their applications working more reliably. They started building infrastructure: unit tests, acceptance tests, automated CI pipelines, and continuous delivery. Automation means applying a lot of software engineering principles, and not being afraid of sharing, or of doing an internal postmortem when things actually go wrong, to build a more fully automated solution. We can actually test changes before they go to production.
This is, I think, one of the biggest benefits of an automated environment: your production environment and your dev, test, and UAT environments can all be exactly the same. The number of times we’ve talked to enterprises who are adopting our tooling entirely because of a major outage caused by the fact that their testing environment was nothing like their production environment is very large. The more you can build self-service around this, where developers can actually run something that looks exactly like production and develop their code against it in QA, the better. When the code under test looks exactly the way it will in production, you’re minimizing risk. And particularly if you’re adopting agile principles around working in small chunks, small changes happening more frequently reduce risk more than anything else.
Actually, I think one of the biggest benefits of an automated environment is that the production environment and your dev, test, and UAT environments can all be exactly the same.
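One hedged sketch of keeping those environments converged: the same Puppet code runs everywhere, and only the data bound to each environment differs. The profile name and lookup key below are hypothetical.

```puppet
# A single profile shared by dev, test, UAT, and production;
# per-environment Hiera data supplies the only differences.
class profile::webapp {
  # e.g. '2.3.1' in production, '2.4.0-rc1' in test
  $app_version = lookup('profile::webapp::version')

  package { 'webapp':
    ensure => $app_version,
  }

  service { 'webapp':
    ensure  => running,
    enable  => true,
    require => Package['webapp'],
  }
}
```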
Sanjog: When there is a perceived lack of control and visibility, which could very well happen when we are talking about automation, people might have security concerns. Do you have something in your architecture, and in the controls you put in your solution suite, that automatically lends itself to higher visibility, in turn allowing people the flexibility to enable security, such that you can claim that adopting Puppet’s automation solution is going to result in secure operations?
Nigel: I think it’s about the whole infrastructure-as-code approach and involving security in the design phase, rather than security just validating production, or indeed being a bottleneck and a gateway for code being shipped to production. Once you actually have a decent contract between the security, application, and operations people, they can all collaborate on the same code and reports. They can verify changes before those reach production, and the security team can have the same kind of access to a self-service environment that the developers and operations people have. And the contract goes multiple ways. One of the biggest frustrations we see among operations people is when they just get handed a folder or a zip file of code: this is the thing that needs to be deployed to production.
A contract we often see work really well is having application developers package all of their code and produce RPMs or Debs, or whatever package format is native to your platform. So on the application side, they can guarantee: this is the payload that should exist. On the operations side, they have the infrastructure-as-code form: this is what the infrastructure is going to look like once it’s deployed.
If you open that up to self-service deployment for all of your teams, including security, they can actually test changes early. In addition, we have features in Puppet that let you do things like tag a resource as being particularly security-sensitive. So when someone modifies user access control, firewall configuration, SELinux, or anything else that affects security posture, the security team can get a report, and that can happen before the change actually reaches production. With a software development lifecycle where changes are built and tested seamlessly, you can also test the changes that impact security. There isn’t even necessarily a need for a manual code review by the security team, since the system can be set up to send them a report automatically.
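A hedged sketch of that tagging idea; the "security" tag name is a convention invented for this example, and the firewall resource assumes the puppetlabs-firewall module:

```puppet
# Flag security-sensitive resources so reports can single them out.
user { 'deploy':
  ensure => present,
  groups => ['wheel'],
  tag    => ['security'],
}

# Assumes the puppetlabs-firewall Forge module is installed.
firewall { '100 allow https':
  proto  => 'tcp',
  dport  => 443,
  action => 'accept',
  tag    => ['security'],
}
```

Reports and PuppetDB queries can then filter on that tag, so the security team reviews exactly these changes before they are promoted.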
Sanjog: What is Puppet doing to learn and innovate in these areas and to differentiate itself from other automation solution providers? If someone has to buy or adopt a solution, he or she needs to know, at the very least, that you are a thought leader and a go-to resource. What is Puppet doing to become that go-to resource, that thought leader, in the automation space?
Nigel: I would definitely argue we’re already the market leader in what Gartner calls the Continuous Configuration Automation space, where we’ve been steadily taking over deployments from the BMC BladeLogics and HP Server Automations of the world.
People are moving to a new world by adopting a tool like Puppet, and I’d say we’re definitely already the market leader and innovator in the enterprise DevOps automation space. One of the things that backs that up is the State of DevOps Report we publish each year, in partnership with people like Gene Kim, who is the ex-CTO of Tripwire and wrote The Phoenix Project, the DevOps enterprise story in novel form; Jez Humble, who is one of the authors of Continuous Delivery and Lean Enterprise; and Dr. Nicole Forsgren, who has been a researcher in the field of IT organizational performance for a long time.
They have a company called DevOps Research and Assessment, and we partner with them on producing the State of DevOps Report each year. We increasingly see that this is the sort of information business leaders look for. We survey a large number of practitioners and ask them about their practices and how their organizations work. We also ask them about metrics: how long changes take, how often they have failures, how often they promote changes. Then, using statistical analysis, the respondents separate into different cohorts each year, and those adopting DevOps practices and tooling achieve significantly higher performance. This shows that companies adopting these practices turn out to be higher-performing IT organizations, which has a really concrete impact both on the bottom line and on the degree to which they achieve internal goals.
I would say the fact that we’re increasingly focused on integration with the other components of the DevOps enterprise toolchain as it emerges ensures that we will remain a market leader.
Sanjog: If you look at digital transformation as the end, then automation and DevOps are the means. But where digital transformation itself is evolving, you may not really have all the elements ready for the future. What would you say your solution doesn’t have? It does a lot of things, but what doesn’t it do?
Nigel: If you look at the adoption curve of Puppet and Puppet Enterprise in the business space, we were very much at the bleeding, or hemorrhaging, edge of the market with very early adopters who were entirely comfortable in textual interfaces. But we’ve been crossing the chasm for a while now. We’re being pulled into more enterprise environments where people have higher requirements around visibility, around multi-tenancy, and particularly around the out-of-the-box experience. We’ve been very much focused on the out-of-the-box experience for developers and operations folks.
But over the last few years we’ve really seen increased interest from the security side, and we haven’t yet solved all of the out-of-the-box requirements of security teams in a DevOps world. I think you’re going to see us continue to focus on that space: giving you, say, 80% of the way toward your HIPAA or PCI DSS compliance standards out of the box, because people are really looking for a short time to value. In short, I’d say more out-of-the-box experience, and making it even more accessible for people.
Sanjog: How should an enterprise select an automation solutions provider, and how would you guide and mentor prospects when the customer is a bigger enterprise with decision-making leadership? How should they go about picking a solution, and how could they best use automation, and, as you mentioned, DevOps, toward meeting that end goal of driving change with confidence when it comes to the changes they want to make for digital transformation?
Nigel: In general, the approach you want to take is to make sure you’re making a globally optimized decision, not just a locally optimized one. It can be very easy to let one team within your organization take a prime spot and dictate tooling to all of the other teams. It’s important to make sure that you get support from all of your people: your application developers, your operations staff, and your security team. But you also need to take into account the needs of your management chain, because one of the biggest improvements in velocity we see among all these groups is when you have a tool that gives management the kind of reporting they actually need without their having to ask for it every time. I think this is where self-service meets a whole bunch of different demands.
The acceleration we’re seeing, the adoption of public cloud, containers, and new kinds of architecture, all of these things are hugely disruptive to the traditional ways of working.
Are you picking a tool that will actually get used? Too often we see software just sitting on the shelf, getting renewed year after year, with no one having any idea whether it has actually been deployed. Make sure it’s a tool that people actually enjoy using, that the user experience is good, and that you’ve defined a common interface across all of your teams. But I think there are a few things about the future that are pretty uncertain. In terms of infrastructure, we are seeing change happen faster, and to a greater degree, than I think we have ever seen. The acceleration we’re seeing, the adoption of public cloud, containers, and new kinds of architecture, all of these things are hugely disruptive to the traditional ways of working.
And no one really knows where it’s all going to go. I think if you had said ten years ago that a bookseller would go on to become the biggest provider of automated infrastructure in the world, it would have been difficult to look into your crystal ball and see that. But I think you want to focus on a few things. One, look for vendors who have experience in managing that sort of change and who have been around for more than a little while. Two, look for people who have open APIs and interfaces, so that you can integrate with them. People want to pick best-of-breed tooling to build cutting-edge infrastructure, and that will change over time, so pick a vendor that has open APIs and integration points and a wide variety of applications it actually integrates with, so that your investment can continue as the rest of the world changes.
Ultimately, I think I’d sum it up as: are you picking a tool that is actually capable of being a bridge to the future and of managing all of the infrastructure you have now, and does the vendor actually have their eyes on where things are going, and do they look like they’re going to be ready to deal with the disruptive changes that are on the way?
Sanjog: Thank you, Nigel, for sharing your thoughts and insights in our Solutions Spotlight segment.
Explore More
- From Automation to Autonomous Testing
- Can Kofax deliver on its Intelligent Automation promise?
- From Traditional IT Ops to Automated AI Ops
- Evaluating Security Intelligence and Analytics Solutions
- Evaluating HP’s Virtual Private Cloud
- Automate Everything—The new cupid for BPOs
- Charting the Path to Automating the Data Center