August 31, 2018
Among today’s most exciting digital transformation strategies, DevOps is one that is growing in momentum, relevance, and competitive necessity. The term DevOps was coined to address the urgent need to continuously improve the capabilities of the software development and information technology (IT) operations teams, enabling them to build secure, compliant, and optimal applications. Research has shown that organizations that internalize the core DevOps principles are often able to respond to customer needs more quickly, lower their IT costs, deliver products and services of higher quality, and assert their competitive advantage in the marketplace.

Regardless of how organizations define digital transformation or how seriously they take it, at its heart, a DevOps transformation represents the undeniable transformative role of technology in business. In today’s marketplace, nimble and disruptive companies are challenging long-stagnant markets at increasing rates, and the change they bring to the status quo can no longer be ignored. Not only are these disruptors using many DevOps strategies, they are doing so with less risk. DevOps is one of the most effective ways for organizations to remain competitive in an ever-shifting digital landscape. DevOps methodology and software enable true, lasting, holistic transformation for businesses and their market standing. They also help prepare businesses for the inevitable technology innovations of the future by implementing more flexible and scalable frameworks in their IT departments.

However, DevOps is not a quick fix. It requires adjusting not only organizations’ current operating techniques, but also the mindsets around those operations. It is important to consider both the executive and individual practitioner views, as there is often a significant disconnect between their DevOps goals, and the success of a DevOps transformation depends upon a shared understanding and vision.
This report will examine the use of DevOps in the marketplace, including trends around current development methodologies, automation adoption rates, and the top factors driving current innovators’ DevOps transformations. It will review the current state of leading DevOps technologies, including an overview of features, functionality, and services. Finally, it will provide a guide to gaining buy-in for a DevOps transformation.
DevOps, as it is known today, began to take form in the 2000s. One of its underlying foundations is the principle of “lean manufacturing,” a set of best practices meant to optimize manufacturing processes on the production floor. Lean manufacturing promotes continuous improvement by keeping inventory levels and order queues low while maximizing efficiency throughout the workflow with frequent, consistent evaluations. These best practices crossed over to the technology world when IT teams began to see the value of faster, iterative development strategies over more traditional methods. The creation of the term “DevOps” and its general methodology is often credited to Patrick Debois. In 2008, Debois was engaged in a difficult consulting project that centered around a common problem: how to create harmony and increase efficiency between development and operations teams. His experience and thought leadership helped foster discussion around this problem, and have been driving factors in the evolution of DevOps.

It is important to note that DevOps capabilities are typically created through broader transformation or change activities. While many of the complexities of DevOps transformations are beyond the scope of this report, it is beneficial to understand two main requirements that are essential for true, holistic, and lasting transformation. First, it is important to align all IT activities with the business outcomes that are critical to the organization. Many organizations believe that merely adopting a few Agile and DevOps methods is sufficient to deliver those business outcomes. Though one should expect improvements in bottom-line metrics from these transformative efforts, the efforts alone do not guarantee the organization’s desired state of efficiency.
Organizations that are focused on business outcomes (e.g., lowering IT costs, increasing customer satisfaction, and rapidly responding to changes in security and compliance postures) often realize that they have a better chance of thriving in the competitive marketplace if they develop the necessary capabilities among their teams: capabilities that lower lead times, improve deployment throughput, encourage a generative culture, and employ an automated test and deployment process. In other words, organizations must realize the necessity of aligning business outcomes with the capabilities of the IT ecosystem in order to achieve true success with DevOps.

Second, organizations must define and measure the key performance indicators (KPIs) that indicate the current state of the IT teams’ capabilities. The marketplace is constantly evolving in response to changing customer needs and to disruptive technologies that redefine the relevance of the products and services offered to customers. In such a challenging landscape, organizations that develop the necessary capabilities are always interested in identifying process, technology, and cultural candidates that can be iteratively improved. However, a lack of true indicators of performance, efficiency, and impact often takes teams down the wrong track with transformative efforts that do not deliver the business impact expected of them. Organizations that realize that capabilities are not static, but must be continually reexamined and improved upon, are often better positioned to iteratively identify and improve IT processes, technologies, and culture to their long-run competitive advantage.

DevOps capabilities are generally aligned with and support the activities that are executed as part of systems development life cycle (SDLC) processes and infrastructure operations practices. These capabilities, outlined in the bullets below, are also illustrated in a DevOps process flow in Figure 1.
Code — The process of writing software, performing the necessary unit and functional tests, and committing code to source control.
Build — The process by which source code is compiled and/or packaged for deployment.
Test — The automated execution of tests and code scans to address security and compliance requirements.
Release and Deploy — The process of validating that the packaged software can be successfully deployed by installing it and performing integration testing in a non-production environment. After validation, the packaged software is deployed in a production environment.
Operate — The ongoing operation of the deployed application, which may include automatic actions in response to failures, and automatic infrastructure scaling in response to demand.
Monitor — The instrumentation of the environment in a way that allows for both automated and manual responses to changes in the environment.
Learn and Plan — The iterative tasks of observing and improving the SDLC process.
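The gated, sequential nature of this flow can be sketched in a few lines of Python. The stage names and the `run_pipeline` helper below are illustrative only, not part of any real DevOps tool:

```python
def run_pipeline(stage_results):
    """Simulate the DevOps flow above: each stage runs only if every earlier
    stage succeeded, and a failure feeds back into the "learn and plan" loop.

    stage_results maps a stage name to a simulated pass/fail outcome;
    unlisted stages are assumed to pass.
    """
    order = ["code", "build", "test", "release_deploy", "operate", "monitor"]
    executed = []
    for stage in order:
        executed.append(stage)
        if not stage_results.get(stage, True):
            break  # fail fast: downstream stages never run
    return executed

# A failed test stage stops the flow before release and deployment:
# run_pipeline({"test": False}) -> ["code", "build", "test"]
```

The sketch captures the key property of the flow: a defect caught at the test stage never reaches production, and the record of executed stages becomes input to the learn-and-plan step.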
DevOps is a mix of philosophy, skills, and technology tools. It is based on a few key principles:
Continuous Improvement – This is an iterative technique that entails measuring, evaluating, and improving upon delivery processes. One of the most important ways this is accomplished is by moving away from more traditional development management strategies (e.g., the linear “waterfall” development method) that dictate larger projects and longer coding and release timelines. By shortening and breaking down the project lifecycles, and continually improving upon processes, organizations can better identify inefficiencies in code, process, and structure, and address them much more quickly.
Development Predictability – If an organization can maintain predictable development, it is better able to grow and scale over time. The DevOps approach to predictability involves putting tools and processes in place that streamline, standardize, and control development and operations so that unforeseen challenges, bugs, and process breakdowns are greatly reduced or completely eliminated. This often entails teams building out process guidelines and controls, and leveraging tools that not only help to enforce those rules, but also automate many parts of the controlled processes.
Development Consistency – Similar to predictability, consistency speaks to the standardization of code and product quality, and is also dependent on processes and tools put in place that enable such consistency. Consistent code and production processes enhance strategic product development planning, and also increase the overall productivity of development and operations teams.
Development Velocity – This involves arranging the development process to enable fast, frequent delivery cycles, which is particularly important for companies that have many successive releases and deployments. A DevOps approach dictates that organizations structure their development and operations teams in a way that not only facilitates rapid releases, but also enables responsiveness and faster reaction times to any issues.
A few technical solutions designed to meet the requirements of the principles above are Agile methodology, Continuous Integration / Continuous Delivery (CI/CD), and configuration management. Agile methodology is a key component of the DevOps quality assurance (QA) process. The processes, tools, and philosophies behind DevOps, when implemented properly, work together to ensure the delivery of a high quality product and to lower the risk to that product’s success.
Agile is an iterative development approach that involves collaboration between different teams around development requirements, changes, and solutions. An Agile project management (APM) method typically promotes frequent review while still encouraging self-management and accountability within a development team. The goal of Agile is to create a transparent development ecosystem that enables quick correction, continuous enhancement, and high quality code. Agile often uses practices such as scrums and Kanban boards. Agile methods are brought up in conjunction with DevOps not because they are one and the same, but because Agile competency is one of the clearest signs that an organization is ready for, is working toward, or has achieved some level of DevOps transformation. Additionally, as the rate of software change increases, automating the release of software becomes more critical to ensuring that these changes do not become bogged down by manual processes or human error.
The combination of the two concepts, Continuous Integration (CI) and Continuous Delivery (CD), as reflected in the acronym “CI/CD”, is important because successful continuous delivery is dependent on the continuous integration of code. This integration includes the process used with source code to build, test, scan, analyze, distribute, retest, and deploy the results, while keeping all relevant stakeholders informed regarding progress. Continuous integration focuses on testing software in isolation, then introducing external and internal software collaborators that perform tests at certain points during the coding process. The goal of this functionality is to ensure that defects are detected and corrected soon after introduction, when they are cheapest to fix. Continuous deployment coordinates the release of the tested software artifacts into target environments — first, non-production environments for testing or additional certification, and finally, into production environments for customer use. Some teams and practitioners prefer to associate the CD acronym with Continuous Delivery rather than Continuous Deployment, which reflects a commitment to keeping software in a highly deployable state at all times. Fundamentally speaking, a team must practice continuous delivery in order to offer continuous deployment.
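The delivery-versus-deployment distinction can be expressed as a small sketch. The function and state names here are hypothetical, not any real tool’s API:

```python
def ci_cd(commit_passes_tests, auto_promote):
    """Sketch of the CI-to-CD hand-off: continuous delivery keeps every
    passing build in a deployable state; continuous deployment additionally
    promotes it to production without a manual step.
    """
    if not commit_passes_tests:
        return "rejected"  # CI: the defect is stopped close to its introduction
    state = "ready-to-deploy"  # continuous delivery: artifact held at readiness
    if auto_promote:
        state = "deployed-to-production"  # continuous deployment
    return state

# ci_cd(True, False) -> "ready-to-deploy"        (delivery only)
# ci_cd(True, True)  -> "deployed-to-production" (full continuous deployment)
```

Note how continuous deployment is strictly a superset of continuous delivery in this sketch: the automatic promotion step only ever runs on an artifact that has already reached the ready-to-deploy state, mirroring the report’s point that a team must practice continuous delivery in order to offer continuous deployment.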
Configuration management helps ensure that a particular machine—whether a general purpose server, networking device, or something else—is adequately prepared to perform its duties within the target architecture, both initially and over time. Common uses of these tools include operational tasks like ongoing patch management and system hardening, as well as tasks that may involve developers, like preparing the operating system to run a certain workload for a given piece of software or configuring it to run correctly in a given environment.
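A minimal sketch of the declarative, idempotent approach these tools take follows; the state model and setting names are invented for illustration, and real configuration management platforms operate at far greater depth:

```python
def converge(current, desired):
    """Illustrative declarative configuration management: compare a machine's
    current state to the desired state and apply only what differs. Running
    converge again after the machine matches yields no actions (idempotence).
    """
    actions = []
    for key, want in desired.items():
        if current.get(key) != want:
            actions.append(f"set {key}={want}")
            current[key] = want  # a real tool would change the system here
    return actions

# First run applies the drifted settings; a second run is a no-op:
# converge({"ntp": "off"}, {"ntp": "on", "firewall": "strict"})
# -> ["set ntp=on", "set firewall=strict"]
```

The idempotence property is what lets these tools handle both initial preparation and ongoing maintenance: the same desired-state definition can be applied repeatedly, whether for first-time setup, patching, or correcting drift.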
When considering the value of DevOps, it is important to remember that while its basic principles can all be tied to a specific software method or tool within DevOps automation, these philosophies also work toward achieving many of an organization’s other goals. Velocity, for example, essentially translates to a faster time-to-market; while a CFO may not understand the specific technology application and change that must be applied to enable velocity in the software development process, the CFO will understand that enabling velocity with a DevOps approach speeds up the company’s time-to-market on its products. Consistency speaks to a more reliable product, which can lead to a more satisfied and loyal customer base. Automating tasks that are often tactical, mundane, and prone to human error also raises the value of labor by allowing organizations to reallocate existing employees to more strategic — and often more profitable — activities.
In order to illustrate how some of the above topics are relevant to North American organizations, Levvel Research conducted a market survey across many different industries and revenue segments. The survey respondents’ revenue segments were made up primarily of middle market and enterprise companies, see Table 1.
Surveyed Organizations' Revenue Segmentation - "What is your organization's annual revenue in the most recent 12-month reporting period?"
| Segment | Annual Revenue | Share of Respondents |
| --- | --- | --- |
| Enterprise | More than $5 billion | 40% |
| Enterprise | $2 billion to $5 billion | 16% |
| Middle Market | $501 million to $2 billion | 20% |
| Middle Market | $101 million to $500 million | 6% |
| Small and Medium-Sized Enterprise (SME) | $30 million to $100 million | 8% |
| Small and Medium-Sized Enterprise (SME) | Less than $30 million | 10% |
Over two dozen industries were surveyed; the eight most common industries among respondents are Finance/Banking, Internet and Cloud Software, Education, Telecommunications, Healthcare / Medical, Manufacturing, Computer, and Transportation and Distribution. The roles were made up of Decision Makers and Contributors, see Table 2.
Respondents' Role Segmentation - "What is your role within your organization?"
| Role Group | Titles | Share of Respondents |
| --- | --- | --- |
| Decision Makers | Owner, Founder, CEO, Director; Upper Management (SVP, Director, CTO, COO, Partner, Principal) | 28% |
| Contributors | Middle Management (Manager, Scrum Master, Coach, Software Development Manager) | 57% |
| Contributors | Development Team Member (Software Engineer, Developer, QA, Tester, Consultant) | 15% |
Levvel Research has separated the respondents into two categories: Decision Makers and Contributors. Decision Makers align their goals and opinions concerning DevOps with the greater goals of the business, such as being more competitive or reducing IT costs. Decision Makers have recognized DevOps as a strategic initiative after evaluating their competitors and the state of the market. Contributors are more attuned to tangible process inefficiencies within the IT organization. Therefore, they are drawn to DevOps for its capacity to improve and maintain the quality of software code and to consistently monitor and maintain recovery times and software deployment times.
To assess the current state of development and operations teams in North American businesses, respondents were asked about the location of these teams. Data shows that the majority of organizations have teams in more than one location, and almost one-half of respondents’ team members are not in the same physical location, see Figure 2.
"Are your software development / test / infrastructure / production support teams physically co-located?"
Traditional IT departments have typically required all team members to be in the same office, but now more companies are supporting teams located across many different cities and regions. One reason for this shift is the move toward competitive talent acquisition, making businesses more interested in the quality of their team members than the individual members’ locations. This new business environment can be problematic for organizations without the proper tools in place, as dispersed development teams must deal with issues around communication and collaboration that teams in the same location do not. Automation in development and operations is critical in order to ensure synchronization and efficient communication among separated team members. The importance of DevOps transformations is even greater for companies with more widespread teams and shorter release cycles.
Another factor that can affect the efficiency of development team management is how the teams are organized—according to function or according to the deliverables. Data shows that companies are almost evenly split between organizing their teams by function or deliverable, see Figure 3. The method in which a team is organized may vary based on the software product the company delivers, the infrastructure environment, and the size of the organization, as well as its maturity / age. The way teams are organized also affects how an organization applies DevOps processes and technology, as each model will have different requirements.
“Are your teams organized based on function or as multi-dimensional teams aligned with particular deliverables?”
Figure 4 shows that the majority of organizations deploy code once a month, while 11 percent deploy code once a day or more. Organizations typically determine their deployment schedules by the restrictions of their current state. In other words, some companies would likely prefer to deploy code at much faster rates in order to stay competitive, but structural, workforce, or technology restrictions may prohibit them from doing this efficiently.
“For the primary project/application you work on, how often does your team deploy code?”
When IT environments are not run efficiently, unforeseen challenges arise at higher rates. The majority of organizations report experiencing outages once a quarter, see Figure 5. While outages are inevitable for any software team, the strategies and tools in place to deal with outages can have a significant impact on how long it takes to restore services.
“How frequently does your production environment experience unexpected outages resulting in failed customer interactions?”
The majority of organizations report that they are able to restore services in less than half a day, see Figure 6. Any outage can have a detrimental impact on customer satisfaction and a company’s market standing, particularly when it takes longer than a few minutes to restore services. With the proper tools and processes in place, organizations are not only able to recover much more quickly, they also reduce the number of outages.
“How quickly are your teams typically able to restore services (i.e., such that customers are no longer affected by the issue)?”
A key process in every development team’s function—and an essential step to ensuring a quality product—is testing. Many organizations use several different types of testing methods depending on their industry, product type, coding languages and technology systems, and many other factors. For example, more regulated industries are more likely to automate their unit and functional testing. Research shows that the most common tests used are functional/integration testing, unit tests, performance tests, and compliance and security tests, see Figure 7.
“Which forms of software testing do you currently perform? (Select all that apply)”
As organizations employ more effective testing processes, they also increase the chance that they will deploy high quality code. However, it is important to note that while some organizations report they use multiple forms of testing, not all achieve high success rates in deployment and production due to inadequacy in various aspects of the testing process. Organizations might not always perform the appropriate tests for the software requirements, write high quality test code, or achieve complete test coverage. Efficient and effective testing not only entails performing various forms of testing; it also requires that the process to test code is adaptive, and is based on learning from previously observed errors.
Inadequate testing is a good example of how some DevOps methods are not a quick fix or a one-tool solution. Organizations must also employ the correct strategies to ensure DevOps methodologies benefit the organization. Another way to gain the full benefits of automated testing involves using Agile methods to automate different tests throughout the development process, ultimately enabling teams to deliver sprints within a few weeks, rather than the months they can take under more traditional development methods.
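The adaptive, learn-from-errors approach described above often takes the concrete form of regression tests: when a defect is observed, a test capturing it is added so the defect cannot silently recur. The toy example below illustrates the pattern; the function and failure scenario are invented for this sketch, not drawn from any respondent’s codebase:

```python
import unittest

def parse_price(text):
    """Toy function under test. The replace() call was added after an observed
    defect: inputs containing thousands separators ("1,299.99") raised
    ValueError in production.
    """
    return float(text.replace(",", ""))

class PriceTests(unittest.TestCase):
    def test_plain_value(self):
        # Original unit test: covers the common case.
        self.assertEqual(parse_price("19.99"), 19.99)

    def test_thousands_separator_regression(self):
        # Added after the production failure, encoding the lesson learned.
        self.assertEqual(parse_price("1,299.99"), 1299.99)

if __name__ == "__main__":
    unittest.main()
```

In a CI/CD pipeline, this growing suite runs automatically on every change, which is how automated testing converts individual failures into durable, organization-wide quality gains.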
Research shows that many organizations are using some of the more progressive development methods like sprint planning, daily standup, and CI/CD-based processes, see Figure 8. The move toward these methods is a fairly recent trend, as traditional IT operations have been largely dependent on less efficient methods. However, while methods like sprint planning are much better than older methods, they do not indicate on their own that an organization is high performing and efficient—the appropriate cultural, technology, and process changes are also required. This data also does not mean an organization’s entire development and IT functions are operating with more progressive methods. However, when it comes to DevOps transformations, organizations that are familiar with or have adopted some sprint or CI/CD-based methods are a few steps ahead of their more traditional peers.
“What methods do you employ in your software development process? (Select all that apply)”
Since Agile is a key enabler of DevOps and a good measure of an organization’s preparedness for further DevOps applications, Levvel Research asked respondents which methodologies they were currently employing, see Figure 9. Scrum is the most popular method, likely due to its relatively simple adoption requirements. As Agile is a framework that houses many different strategies, an organization may claim to be Agile while only employing part of the methodology.
“Which of the following Agile methodologies, if any, do you use in your software development process?" (Select all that apply)
The popularity of scrum among respondents is not surprising given that it is closely tied to sprint planning, which is another popular Agile activity. Scrum and sprint planning are often where many companies begin when moving toward additional Agile development methods. Other popular Agile practices are Kanban boards, which promote high visibility across continuous, small changes, and Agile Unified Process (AUP), which is a simplified version of the Rational Unified Process (RUP). Because RUP is a long-established IBM product, the barrier to adoption for AUP is relatively low among some organizations, particularly companies dependent on IBM technologies. It should be noted that although most companies surveyed report using some form of Agile methodology, Levvel Research believes that very few organizations in the North American market today would classify as fully Agile.
Survey results show that respondents’ top pain points in the software development process concern siloed processes, poor communication, and late detection of defects, see Figure 10. When broken down by revenue, the data indicates that regardless of size, most companies share the same problems under manual processes.
“What are the top 3 pain points of your software development process?” & “What is your organization’s annual revenue?”
When analyzing the survey data, Levvel Research found that many of the organizations claiming to use Agile methodologies still experience many pain points in their software development process, including knowledge silos and inconsistencies in deployments. This is because automation and special methodologies do not have effective or guaranteed outcomes if they are not employed properly. A DevOps transformation is complex and extensive, touching many different areas and requiring the proper methods in each of those areas to bring about widespread results. An organization may be using one important element of DevOps, but this does not necessarily mean that they will achieve all the potential benefits of DevOps—they must also employ the correct strategies and tools elsewhere in the development-to-deployment lifecycle.
Automation is one of the key elements of DevOps. While most companies have implemented some sort of automation in their software testing process, only a small fraction are fully automated, see Figure 11.
“Is the software testing process manual or automated in your organization?”
Industry plays a role in automation. Most companies in the finance industry have partially automated software testing, which is indicative of the relatively high technology adoption rates seen in the industry overall. There is a similar trend in the internet and cloud software and computer (e.g., hardware, desktop software) industries. This is likely due to these organizations’ lower barrier to entry when it comes to automating their IT environments, as they have much of the technical infrastructure needed to support DevOps tools already in place.
Organizations in manufacturing and healthcare were some of the greatest laggards in terms of automating the software testing process. In the healthcare industry, this is likely due to requirements around how organizations manage patient data, including strict government regulations that make things like integrations for digital transmission of patient data (e.g., x-rays, labs) very difficult. Manufacturing is an industry that has traditionally been slow to embrace automation, but as in healthcare, companies in this industry must also manage a great deal of information and adhere to regulations, particularly around international supply chain compliance. Manufacturing companies also tend to be more spread out than many other organizations, and technology implementation projects can seem too costly and complex. When asked about other parts of the development lifecycle organizations have automated, the most common methods were automated test cycles, deployment to test, and deployment to production, see Figure 12.
“What portions of the software development lifecycle has your organization chosen to automate?" (Select all that apply)
While a DevOps approach is functionally designed to break down process silos and enable integration between development and operations, it also improves the business as a whole. Among organizations that have applied a DevOps approach within their organization, data shows that the top improvements are those that benefit the entire organization—from both a functional perspective and a competitive, business perspective. For example, the top benefit—improved flow from development to production—most directly impacts those at the Contributor level, while improved time-to-market is highly valuable to those at the Decision Maker level, see Figure 13. Many other benefits, such as improved software pipeline visibility and higher software quality, benefit both groups. This speaks to the holistic power of a DevOps transformation and its cross-functional capabilities like improving overall product delivery, giving employees more time to innovate, boosting team morale, and improving the bottom line.
“What are the top benefits you have seen from a DevOps approach?”
However, organizations do not typically automate all DevOps functions at one time; they often begin with the piece that is easiest to deploy or very crucial to improving the current state. One example of a current-state parameter that may impact what an organization chooses to automate first is team location, see Figure 14. Organizations with teams that are not co-located are more likely to have automated their test cycles than those in the same location, which speaks to the importance of this process for controlling code development when communication and synchronization are more difficult. Organizations that are not co-located are also much more likely to have automated infrastructure for developers, which is because dispersed teams are less likely to always have in-house infrastructure management personnel at their disposal; investing in a tool greatly reduces the time and hassle of requisitioning additional technology infrastructure across widespread locations.
“What portions of the software development lifecycle has your organization chosen to automate?" (Select all that apply) & “Are your software development / test / infrastructure / production support teams physically co-located?”
Research shows that automating different steps of the software development process is likely to yield different benefits, see Table 3. For example, while improved flow from development to production is the top benefit across every automation method, organizations that automate deployment to production and deployment to test are most likely to achieve this benefit. Organizations that automate validation of production deployment are more likely to improve time-to-test, and automated deployment to test is most likely to produce high quality software.
| Automated Process | Top Three Benefits |
| --- | --- |
| Automated validation of production deployment | 1. Improved flow from development to production (51%) 2. Improved time-to-test (37%) 3. Improved software pipeline visibility (32%) |
| Provisioning infrastructure for test | 1. Improved flow from development to production (53%) 2. Improved software pipeline visibility (36%) 3. Reduced project cost (31%) |
| Provisioning infrastructure for developers | 1. Improved flow from development to production (48%) 2. Improved software pipeline visibility (36%) 3. Improved time-to-test (33%) |
| Automated deployment to production | 1. Improved flow from development to production (59%) 2. Improved software pipeline visibility (33%) 3. Improved time-to-test (28%) |
| Automated deployment to test | 1. Improved flow from development to production (59%) 2. Improved software pipeline visibility (32%) 3. Higher software quality (28%) |
| Automated test cycles | 1. Improved flow from development to production (51%) 2. Improved software pipeline visibility (39%) 3. Improved time-to-test (29%) |
The length of time an organization has used DevOps is another factor that affects the benefits it achieves, see Figure 15. A few benefits, such as attracting/retaining talent, improved time-to-market and test, and increased capacity for innovation are likely seen within six months to a year. Others, such as enhanced customer experience, higher software quality, improved flow from development to production, improved software pipeline visibility, improved team morale, and reduced project cost are more likely to take at least one year, and in some cases more than two years, to be fully realized.
Figure 15 - Time Using DevOps Affects the Benefits Achieved - “What are the greatest benefits to adopting a DevOps approach? (Select up to 3)” and “How long has your organization been using DevOps practices?”
This relatively lengthy amount of time it takes to realize the value of DevOps is largely because DevOps transformations are complex, extensive projects, and a holistic approach cannot be hastily applied. These projects require time for proper implementation, change management, infrastructure re-architecture, and training. One benefit seen in the short run, an ability to attract better talent, is something that requires very little action—more innovative, forward-thinking, and skilled developers will be attracted to more progressive IT teams. Other longer term aspects, like improved software pipeline visibility, are dependent upon implementing DevOps techniques and technologies across many different environments, including physical and technological, which are projects that take a while to execute. It should be noted that improved time-to-market, a benefit that has a direct impact on a company’s competitive advantage, was often reported as a benefit within the first year.

Research also shows that the amount of time an organization has used a DevOps approach has an effect on outage times, see Figure 16. The respondents that have been using DevOps for over 2 years experience the shortest outage time, as 68 percent of these organizations are able to alleviate outages in less than a day. While there is some fluctuation in restoration times between the first six months and two years, this is largely attributed again to the complexity of a DevOps transformation, and the time required to fully implement and realize the benefits of the strategies and tools.
Figure 16 - Organizations Experience Fewer Outages If They Have Been Using DevOps for More Than Two Years - “How long has your organization been using DevOps practices?” and “How frequently does your production environment experience unexpected outages resulting in failed customer interactions?”
It is important to identify what is prompting organizations to move toward a DevOps approach. For many organizations, it is simply the desire to reduce high costs or improve time-to-market. For others, it is alignment with IT trends in the market. Sometimes the driver varies by the goals of different groups, such as developers versus C-suite executives. However, research shows that there are similarities in drivers across different roles; many Contributors share the same big-picture goals as those at the Decision Maker level, and vice versa (see Figure 17). In some cases, those at the Contributor level showed more enthusiasm for business-focused drivers, like the desire to stay competitive within a market.
Figure 17 - Organizations’ Top Drivers Are to Speed Up Software Development, Improve Software Quality, and Stay Competitive Within the Market - “Which factors are driving DevOps activity in your organization? (Select up to 3)” and “What is your role within your organization?”
A company’s industry also plays a role in its motivation to adopt DevOps (see Figure 18). Companies in highly automated industries, including finance, internet and cloud software, and computers, tend to be driven more by non-financial motivators, such as the desire to stay competitive, than are companies in less automated industries. Those in low-automation industries (e.g., manufacturing, healthcare) are more than twice as likely to be motivated by reducing project costs than those in high-automation industries, and slightly more likely to be driven by a desire to increase revenue. Levvel Research has found that when costs are the only driver, the comparative costs of actually implementing DevOps technology and change management might ultimately dissuade a company from moving forward. When innovation and internal improvement are the leading drivers, organizations are more likely to think of a transformation as a long-run competitive advantage—one that is critical to business success.
Figure 18 - High Automation Industries Are Most Likely to Adopt DevOps to Speed Up Software Development - “Which factors are driving DevOps activity in your organization? (Select up to 3)” and “Please select the standard industry description that best fits your organization.”
Research shows that despite its proven benefits, DevOps is simply not a priority for many companies. The top three barriers to adopting a DevOps approach are resources tied up in existing projects, the belief that current processes are working, and a lack of budget (see Figure 19). These barriers vary only slightly by company size, which shows that the issues preventing adoption—and their solutions—will be familiar to almost any company. Barriers vary slightly more by industry: low-automation industries are more likely to cite a lack of technical resources, satisfaction with the current process state, and resistance to change. This is typical of industries like manufacturing, where many organizations have outdated approaches and mindsets regarding technology. In healthcare, lack of budget becomes more of a concern, as these organizations must balance highly complex technical architecture against limited budgets.
Figure 19 - Organizations’ Top Adoption Barriers Are That Resources Are Tied Up in Existing Processes, Belief That Current Processes Work, and Lack of Budget - “What are the greatest barriers to adopting a DevOps approach? (Select up to 3)” and “What is your organization’s annual revenue?”
Whatever the barriers may be to DevOps adoption, there are many ways to overcome them. Addressing these challenges first requires properly understanding the reasons behind the barriers, and then understanding how to overcome them. For example:
Resources Are Tied Up in Existing Projects — The top barrier to adoption is that company resources are tied up in already existing projects. This might be due to a view of DevOps as an expensive, arduous, and drawn-out “money pit” that requires a great deal of time and effort and places risk on business operations.
Instead, DevOps should be viewed as a long-run strategic advantage that will strengthen the organization’s ability to stay competitive for years to come. DevOps technology providers and consultants are aware of the importance of DevOps for improving competitive advantage, and work hard to prevent implementation and adoption from becoming a hindrance to the organization in any way. Many service providers help organizations strategically plan their DevOps transformation in a way that places the smallest burden on their existing teams and other resources. Organizations should also keep in mind they do not have to adopt every aspect of a DevOps approach at once, but can adopt tools and processes as their financial and business constraints allow. In addition, if the correct message about DevOps’ long-run benefits is spread and adopted, it will lead some organizations to reevaluate their current projects and potentially re-prioritize and restructure their resource allocation.
Current Processes Are Working — The belief that the current process is working can be detrimental to the long-run success of the company. DevOps affects so many aspects of service and product delivery, as well as customer retention and satisfaction, that inefficient methods of development and delivery can mean life or death for a company. If an organization stays in its current state, its more innovative competitors will move on without it, adapting to market changes and becoming more successful by significantly reducing costs, improving the quality of their services, and releasing new products faster.
Many organizations fail to realize that DevOps is not just a technological transformation, but a cultural one as well. One way to overcome internal opposition to DevOps change is to properly outline the current state, including process bottlenecks and structural/workflow inefficiencies, and then compare that current state to improvement goals. With the inefficiency of the current state brought to light, those resisting change will have less desire to leave things as they are—and more motivation to explore DevOps.
Lack of Budget — Another benefit of outlining the current state is that it reveals high costs and lost revenue, and allows an organization to understand the potential return on investment (ROI) of a DevOps transformation. It is also important to understand that many organizations have exaggerated expectations of the costs of DevOps transformation. In order to properly determine if a transformation is feasible, organizations should speak with DevOps consultants about the actual costs. They should also remember that they can build a scalable transformation roadmap that allows them to adopt tools as their budgets allow.
Lack of Technical Resources / Lack of Understanding — One of the main challenges Decision Makers grapple with is the momentum of change in the DevOps space in general. The problems traditional IT departments face, like escalating costs, significant complexity, market-related workforce turnover, and knowledge silos, are made worse by the introduction of new technologies that many do not understand. Even if these organizations know which tools they should adopt, they are often unable to execute because they lack the proper skill sets and knowledge, and because they cannot find the right resources to help.
The following content offers an overview of DevOps technology tools in order to help organizations better understand the current space, and where to start with a DevOps transformation.
DevOps is essentially a methodology, not a technology, and therefore cannot be provided by one software provider or consultancy. It requires a mix of cultural shifts, process updates, and technology. However, the technological tools are still an essential component of a DevOps transformation, and organizations must be properly educated on how these tools work before they embark on a DevOps transformation journey.
Most people think about DevOps from either an infrastructure perspective or a developer-centric view. Those with an infrastructure perspective are concerned with how DevOps automation supports the IT environment, such as hardware, servers, and the software that runs on them, and are less familiar with its impact on development. Developers, on the other hand, understand what happens in the background and are interested in the things that support their development functions.
From the infrastructure perspective, organizations are interested in infrastructure automation, and ask questions like “How does one automate parts of the IT environment so that team members don’t have to run manual processes?” In this particular instance, they are speaking of configuration management automation: the ability to quickly configure environments in ways that converge to a “known good” state. Essentially, configuration management is the ability to treat infrastructure as code, and tooling now exists for operators to automate aspects of their work that had traditionally been performed manually. This automation can greatly reduce instances of human error, enable infrastructure to respond to change more quickly and accurately, and make multiple infrastructure environments easier to govern.
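The convergence idea at the heart of configuration management can be sketched in a few lines of Python. This is a simplified illustration only, with hypothetical resource names; real tools such as Ansible, Chef, and Puppet express the desired state declaratively and handle far more resource types. The key property shown is idempotence: applying the same desired state twice produces no new actions.

```python
# Sketch of configuration management: converge a machine toward a
# declared "known good" state, idempotently. Resource names are
# hypothetical; real tools (Ansible, Chef, Puppet) work declaratively.

DESIRED_STATE = {
    "packages": {"nginx", "ntp"},
    "services_running": {"nginx"},
}

def converge(current_state: dict) -> list:
    """Compare current state to the desired state and return the
    actions needed to close the gap, applying them as it goes.
    Running it a second time yields no new actions."""
    actions = []
    for pkg in DESIRED_STATE["packages"] - current_state.get("packages", set()):
        actions.append(f"install {pkg}")
        current_state.setdefault("packages", set()).add(pkg)
    for svc in DESIRED_STATE["services_running"] - current_state.get("services_running", set()):
        actions.append(f"start {svc}")
        current_state.setdefault("services_running", set()).add(svc)
    return actions

server = {"packages": {"ntp"}, "services_running": set()}
converge(server)  # first run installs nginx and starts it
converge(server)  # second run: already converged, returns no actions
```

Because the tool only acts on the difference between actual and desired state, the same definition can safely govern many machines at once, which is what makes multiple environments easier to keep consistent.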
Another aspect of DevOps from the infrastructure operation viewpoint is monitoring and running the technical environment. This viewpoint actually affects both infrastructure and developers, which is where platform as a service (PaaS) and containers also come into play. Containers serve as a foundation for multiple applications, while PaaS helps organizations manage them more effectively. Developers also benefit from containers and PaaS because they make their work easier and their delivery standup timelines much shorter. For example, in a manual environment, it can take anywhere from a few weeks to several months for a new developer to begin contributing code. If the development environment is based on containers, however, the developer may be productive in a much shorter time span. This enables developers to contribute to their company’s code infrastructure in a faster, more seamless, and more strategic manner. Within the PaaS and container space, there are a few different tools. Some are very lightweight and require a moderate amount of customization on the part of the customer in order to automate their processes. Others are pre-configured and offer extensive services and features out of the box.
In many cases, developers are attracted to DevOps automation for its ability to automate builds and testing, carry out continuous integration testing, and deploy code to either test or production environments. This general process is known as CI/CD. Rather than having someone manually build the code, test it, and deploy it into test environments, CI/CD, when integrated with an infrastructure-as-code approach or container platforms, allows an organization to automate the entire process. Levvel Research has found that many companies have only taken basic steps with CI/CD and have not fully leveraged its capabilities. For example, when continuous delivery and infrastructure are integrated, a company can continuously deliver software in an automated manner. However, many companies automate everything up to the point that code is actually released into production—but when it comes time to expose the code to customers, these companies still operate manually because they are not comfortable allowing that part of the process to be automated. It is important that organizations take advantage of the power of CI/CD throughout the product’s entire lifecycle.
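The staged promotion that CI/CD automates can be pictured as a simple pipeline: each stage runs automatically, and a failure at any stage stops the build from being promoted further. The sketch below is a toy model only; the stage functions are hypothetical stand-ins for real build, test, and deploy tooling such as a Jenkins pipeline.

```python
# Toy model of a CI/CD pipeline: a commit flows through build, test,
# and deployment stages automatically. Any exception halts promotion,
# so only artifacts that pass every stage reach production.
# Stage functions are hypothetical stand-ins for real tooling.

def build(artifact: dict) -> dict:
    return dict(artifact, built=True)

def unit_test(artifact: dict) -> dict:
    if not artifact.get("built"):
        raise RuntimeError("cannot test an unbuilt artifact")
    return dict(artifact, tested=True)

def deploy_to(env: str):
    def deploy(artifact: dict) -> dict:
        return dict(artifact, deployed_to=env)
    return deploy

PIPELINE = [build, unit_test, deploy_to("staging"), deploy_to("production")]

def run_pipeline(commit: dict) -> dict:
    artifact = commit
    for stage in PIPELINE:
        artifact = stage(artifact)  # a raised exception stops the pipeline
    return artifact

result = run_pipeline({"commit": "abc123"})
```

The point made in the text above maps directly onto the `PIPELINE` list: many companies automate everything up to `deploy_to("staging")` and still perform the final production step by hand.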
The DevOps technology market is relatively new, constantly evolving, and still years away from being in a place that makes it easy to define, measure, or predict. This means there are really only a few leading technology providers, and these providers do not currently have monopolistic advantages. These providers’ standing as leaders is not secure, as the offerings of companies that hold a large share of the market today may not even be relevant in a decade because the space is changing so quickly.
Levvel Research identifies one of the main challenges for some of the older, larger DevOps technology vendors as the relative ease of entering the market. There are an increasing number of “disruptor” companies without the baggage of existing clients and existing products that larger providers carry. One example of this is the steady rise and growing relevance of open source technologies: new vendors can pick up an open source platform or project and quickly enter and impact the market.
Customers themselves are also major drivers shaping the products of DevOps technology providers. Customers in the DevOps software space have relatively more power in shaping products than in many other markets, as what they ask for in the next few years has the potential to make some solutions offered by small companies essential tools, and some giant technologies obsolete.
Another challenge for providers is operationalizing their platforms. Many of today’s technologies are new, and many providers have only implemented their own products a handful of times. The technologies in general are still evolving, as are the best practices and methods around the proper way to implement them, such as the proper ways of backing up and restoring platforms, managing, maintaining, and updating the platforms on a regular basis, and performing disaster recovery. Some of the widely used and accepted strategies for these functions may be completely different in a few years, and this creates a difficult problem for DevOps technology providers trying to plan their product roadmaps in a competitive, strategic way. Some of the main players are not adjusting to market changes quickly enough, creating a disconnect between the core technologies that many clients depend on and the new solutions that are changing the DevOps space. In all, much of the challenge for DevOps vendors is simply keeping up—with competitors, customers’ expectations, and the technology itself.
When it comes to DevOps automation software, there is not one holistic solution that can solve every problem or automate every aspect. Under the DevOps umbrella are many specialized software tools. Sometimes a vendor will focus on one particular DevOps tool; others will offer several complementary tools or a modular set of tools that a company can adopt over time. Not every provider that can help bring about DevOps automation will be a direct competitor, as many work in conjunction with each other.
DevOps technology software usually includes several complementary capabilities, including:
Cloud Support — Public cloud providers are a famously powerful driver of technical innovation, and private cloud services lend added control with a comparable feature set.
Version Control — This provides the ability to control when and how changes to a software system are applied, combined, and released. Version control tools leverage a methodical approach to change integration that helps keep programmers accountable, isolates issues as they arise, and makes the system resilient to mistakes or errors.
Build Management — These tools build, compile, package, or version code so that it can be easily redistributed. Keeping builds standardized and compliant with internal policies allows developers to uniformly configure development environments, and better familiarize themselves with the environment within which their software needs to work. The more uniformity within a development pipeline, the faster software can move through it with minimal friction.
Test Automation — This is software that features automated unit and/or integration testing. Increased efficiency in testing stages maintains high software quality as development cycles become shorter.
Deployment — These features allow software to be made available quickly and updated with minimal effort. Being able to quickly and seamlessly deliver updated software is a key part of providing more value to end users.
Infrastructure Monitoring — This is software that provides a collection of real-time, relevant data about the health status of host machines. Such data allows IT management to predict potential issues or respond quickly to existing ones.
Application Monitoring — Similar to infrastructure monitoring, application monitoring software aggregates and presents data about the system, but focuses on a system’s performance, providing a means for evaluating the overall user experience.
Serverless — A continuation of the evolution that led IT organizations from physical machines to PaaS, a serverless architecture combines API gateways to receive requests and ephemeral compute resources to formulate responses.
Infrastructure Automation — The most mature public cloud offerings are sometimes categorized as Infrastructure as a Service (IaaS). Their core offerings approximate physical equivalents like servers, disks, load balancers, etc.
Configuration Management — Roughly picking up where infrastructure automation leaves off, configuration management helps ensure that a particular machine is appropriately prepared to perform its duties within the target architecture, both initially and over time.
Infrastructure Capacity Planning — This entails replacing the advance purchase and capital expenditure of expensive datacenters with the operational expenditure of leasing the resources demanded in a given moment. It also offers the ability to manage resources and forecast expenditures to meet budgets. Infrastructure capacity planning has been a major driver of cloud adoption.
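To make one of the capabilities above concrete, consider Test Automation. The sketch below uses Python’s standard `unittest` framework; the function under test is hypothetical, but the pattern is the real one: tests run automatically on every commit, so shorter development cycles do not erode quality.

```python
# Minimal illustration of the Test Automation capability: automated
# unit tests that run on every build, catching regressions without a
# manual QA pass. The function under test is a hypothetical example.
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Business logic under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    # A CI server would invoke this suite automatically on every commit.
    unittest.main(exit=False)
```

The same shape scales up to the functional, integration, and end-to-end tests discussed later in this report; what changes is what the tests exercise, not how they are triggered.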
It is important to note that many of the items that are termed “capabilities” could also be viewed simply as processes, and with a small enough team, they could be enacted without the need for technology investment at all. However, even though many functions could be transformed manually in theory, without the right software tools in place, small efficient teams will not remain so—manual processes are simply not scalable. This distinction is important for its ability to highlight the holistic and diverse makeup of DevOps—it is not just technology, but a balance of process change, cultural change, and infrastructure change. The following items describe the main categories of DevOps automation software:
A wide range of factors from programming language and development environments to cloud and container adoption can impact how source code is packaged and distributed to its runtime environment. This represents the first stage of preparations that a development team will undertake for a particular codebase on its path to delivering value to customers.
As automated unit testing has risen in popularity for developers, development teams have begun adopting additional phases of automated testing and screening to ensure quality within a tight feedback loop. Functional, integration, and end-to-end tests check for correctness of behavior, while security and code quality scans help support the non-functional requirements of software development and operation. Conducting these tests automatically and safely involves integration with the operational footprint and CI/CD installation, making it a DevOps concept.
Most development teams operate their software in more than one environment. QA/User Acceptance Testing (UAT) might allow project owners or business stakeholders to review work in progress. Staging or pre-production environments might serve to validate the infrastructure automation and configuration management instructions without disruption to customers. The production environment ultimately delivers the customer value of the application. Coordinating and automating releases across these environments always requires technical preparation. Additionally, teams that have embraced containers and microservices have a separate, larger set of responsibilities: ensuring that each microservice collaborates appropriately with other microservices while meeting the appropriate quality-of-service expectations. Tools in this space are labeled “service discovery” and “service management.”
Cloud-based workloads, whether they include virtual machines, container hosts, or serverless functions, still require creation and coordination of multiple kinds of resources. Whenever and wherever additional resources can be acquired with an API invocation, products exist to help make that process a better experience for customers.
Traditional accounting, procurement, and datacenter operations processes acquire and prepare physical servers, load balancers, storage devices, and other machines for their useful life. Even when using the modern technical processes that provision virtual equivalents, technical staff need to prepare each machine, from the base operating system to the particular workload it will operate. Networking devices have a similar requirement, driven more by architectural concerns, whether they reside in a datacenter or with a cloud provider.
The health of the underlying machines, both virtual and otherwise, and the applications that run atop them are of paramount importance in a business environment where high speed and availability are table stakes. The ability to continue operating even during service outages of major providers is a growing expectation, and monitoring products help diagnose and respond when outside factors begin to impact operations.
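The core mechanism behind such monitoring is simple to sketch: poll a set of probes and flag any host whose metrics breach a threshold, so operators can respond before customers notice. The hosts, metrics, and thresholds below are simulated; production systems such as Prometheus or Nagios add scheduling, alert routing, and historical storage on top of this same idea.

```python
# Sketch of infrastructure monitoring's health-check core: compare
# polled metrics against thresholds and emit alerts for breaches.
# Hosts, metric values, and thresholds here are simulated examples.

THRESHOLDS = {"cpu_percent": 90.0, "disk_percent": 85.0}

def check_host(name: str, metrics: dict) -> list:
    """Return an alert string for each metric that exceeds its limit."""
    alerts = []
    for metric, limit in THRESHOLDS.items():
        value = metrics.get(metric, 0.0)
        if value > limit:
            alerts.append(f"{name}: {metric}={value} exceeds {limit}")
    return alerts

# Simulated fleet snapshot: web-2 is both CPU- and disk-constrained.
fleet = {
    "web-1": {"cpu_percent": 45.0, "disk_percent": 60.0},
    "web-2": {"cpu_percent": 97.5, "disk_percent": 88.0},
}
alerts = [a for host, m in fleet.items() for a in check_host(host, m)]
```

Application monitoring follows the same pattern but substitutes user-facing metrics, such as request latency or error rates, for the machine-level ones shown here.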
With all the different components of DevOps technology, many organizations wonder where to start. Often organizations focus too narrowly rather than looking at the big picture, investing in one main technology tool that addresses their top pain point and hoping it will go a long way. However, when it comes to DevOps, ignoring one part of the development workflow will only delay a pain point or move it to another part of the workflow. For example, an organization may speed up code production with build automation technology, but if it is still testing the code manually, it has only sped up the inflow of code waiting to go through the testing process. Or a company may automate testing, but without a ready place to host the new code, it takes several days to build a server. In short, development lifecycles are only as fast as their slowest parts.
Addressing the most urgent problem is a good place to start, but it will not greatly relieve many of the organization’s most pressing challenges around product quality, costs, and time-to-market. Organizations must understand DevOps as a holistic transformation in order to adopt it effectively, and they must plan their adoption initiative according to this mentality. The following section outlines some ways to gain buy-in for and plan a DevOps transformation with long-run success in mind.
Who should adopt DevOps? Levvel Research believes that most companies should work to implement at least some element of a DevOps approach. The main variation lies in what that DevOps recommendation looks like.
Size plays a significant role in how a company adopts DevOps, its experience with it, and its barriers to adoption. For example, a small, young software company with somewhat modern back-office technology will find it relatively easy to adopt a DevOps methodology and automation tools to solve its IT challenges. However, a Fortune 100 company will face many more obstacles, as it has many different development teams leveraging different technology, including varying coding languages, IT environments, and servers. Unfortunately, while there are greater barriers to adoption within a larger organization, there is much less urgency to adopt within smaller organizations. Forward-thinking SMEs should view DevOps as an investment in the future, as the problems that DevOps was created to fix only grow more complex as the organization becomes larger.
There will also be different approaches to DevOps depending on the current structure between the development team and operations team and where they fit in conjunction with the overall business structure. For example, the DevOps approach, and its appeal to the company, may be different if the organization is a large company with scrum managers that operate outside of the organizational chart, as opposed to a company with multiple lines of business and shared IT. These varying structures can help determine where one should seek buy-in for launching a DevOps initiative. Under the former structure, the key parties to get input from are the CIO or IT managers. Under the shared IT structure, some of the most prominent members to gain feedback and buy-in from are development executives, software executives, and sometimes, software developers. With that being said, the general ways in which a company should approach DevOps are mostly the same across all business types. For any business, gaining buy-in entails current state assessments, communication, education, enthusiasm, and a focus on ROI.
Gaining buy-in is not just about engaging with C-suite executives. There are many different stakeholders in the decision, and each one has a story. For example, a developer’s story differs from that of an application development executive or CIO/CTO, just as a network administrator’s differs from that of middle management.
When building a business case, it is important to consider the stories of each of these demographics, and to figure out which element of DevOps resonates most with each one. The interdisciplinary nature of DevOps, the focus on customer value, and its dependence on the people and technologies involved mean that no two stories will be alike in inception, duration, or methodology. It’s important to begin from a place of agreement on present state and future goals. Chances of a successful transformation increase as the range of participating stakeholders broadens.
Ultimately, a DevOps story must focus on value delivery. Questions of business value are often absent from the daily and weekly lives of technical workers. Additionally, technical solutions can take quite a different shape if they are not subjected to the realities of the business opportunity that necessitates them. When considering a DevOps transformation, it is important to open an honest and accurate dialogue about present and future states. In such an environment, group ownership can help involve the entire team in finding the best possible solutions.
Building software is a stressful activity, both professionally and financially. Existing teams often have to deal with the complexities of building and operating software over time, which can result in inefficiencies during creation and operation. In many cases, those inefficiencies require significant commitment to remove; not only must the IT organization plan and implement the technical solutions, but it must also find and maintain financial and leadership backing to sanction change. Since technical projects carry non-zero risk, it is not always clear that incurring expenses in the short term will, in fact, lower expenses in the long term.
When it comes to measuring ROI with a DevOps transformation, some of the best metrics to target are around time, productivity, product roll-out, and competitive advantage. While there are countless ways that DevOps processes and tools save organizations time, one useful example is in provisioning servers. Under a traditional process, it can take weeks to provision a new server, but with a configuration management tool in place, a company could reduce a six-week provisioning time to twenty minutes.
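The provisioning example above makes for a simple back-of-the-envelope ROI calculation. The figures (six weeks down to twenty minutes) come from the report's example; the assumption that "six weeks" means six calendar weeks of elapsed time is ours.

```python
# Back-of-the-envelope ROI arithmetic for the provisioning example:
# a six-week manual provisioning process reduced to twenty minutes.
# Assumes "six weeks" means elapsed calendar time (our assumption).

WEEK_MINUTES = 7 * 24 * 60              # minutes in one calendar week

before_minutes = 6 * WEEK_MINUTES       # manual provisioning: ~6 weeks
after_minutes = 20                      # with configuration management
saved_minutes = before_minutes - after_minutes

speedup = before_minutes / after_minutes
print(f"Provisioning speedup: ~{speedup:,.0f}x")
print(f"Elapsed time saved per server: {saved_minutes / WEEK_MINUTES:.1f} weeks")
```

Multiplying the per-server savings by the number of servers provisioned per year, and by the fully loaded cost of the staff time involved, turns this elapsed-time figure into a dollar estimate a finance stakeholder can evaluate.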
Along with time, the automation, efficiency, and control that come with many DevOps automation tools helps increase the productivity—and therefore, hourly value—of staff by allowing them to focus on more strategic tasks. Other areas of savings are in security, governance, and risk. Many technology tools help to improve a company’s compliance with regulators, which lowers the risk of financial penalties.
The DevOps principles of consistency and predictability are other key areas where organizations can gain ROI. Not only do these principles help improve time-to-market, but they also empower a company to be nimble—able to adjust to changing market demands at rates far ahead of its competitors. For example, if a company applies a DevOps approach that reduces the release timeline for a new, key product from 14 months to 6 months, that is a tremendous competitive advantage. This is a key value factor for stakeholders in particular, as it shows that with a DevOps transformation, they will be able to plan new business initiatives with a greater chance of success and a faster rollout than ever before. Technology is one of the largest focuses of successful CEOs, and if an organization can quantify a productivity boost in one of the biggest areas of the business, it becomes much easier to gain buy-in on technology investments.
Roadmaps should be based on shared views and goals of future states, and on discussions about the components of DevOps that are most strategic for the organization in both the short and long term. Roadmaps can include exploratory time, training time, field tests, and anything else the stakeholders feel is appropriate. The important part is to create a mutually agreeable plan that business stakeholders can manage and technical stakeholders can execute.
Selecting the right provider is one of the most important aspects of a DevOps transformation, as it helps to ensure long-run success. Levvel Research has seen many instances where organizations’ DevOps implementations have failed because the organization did not truly understand what type of systems they needed, or what those systems looked like.
It is important to identify providers, both of technology and services, that have actual experience in the space. Because DevOps is a relatively new technology and methodology—one that is still evolving—there are no “veterans” in the market, that is, no providers or experts with decades of expertise and more than a handful of implementations or transformations under their belt. With that being said, there are still leaders in both software and professional services; organizations should simply be mindful when engaging with potential vendors in order to select the ones with actual experience and success in the space. Along those same lines, the market for DevOps technology changes rapidly, and new, innovative tools and approaches appear often. It is also important that businesses evaluate products—particularly from newer vendors—carefully against their current state analysis and actual business requirements so as not to invest in a tool that is not appropriate for their needs. This will only become more difficult as the market evolves, so it can be strategic to leverage the services of DevOps software analysts to help build out the evaluation criteria and RFP when selecting a vendor.
When evaluating a software provider, it is important to gauge the provider’s ability to track market trends, to stay nimble, and to adapt and expand their offerings in accordance with what the customer may need in the next few years. A few questions an organization can keep in mind as they evaluate a vendor are:
After product selection comes execution. This is when it is important to have a plan—and to consider leveraging the services of a change management expert to assist with creating and executing that plan. Often organizations will spend several months and a great deal of money preparing for DevOps and purchasing a solution, but when it comes to implementing the technology and approach, they find themselves without the proper resources and unable to deliver upon the promised business objectives.
One of the most important aspects of structuring and enacting a change management plan is understanding the organization’s big picture and specific needs. For example, a business that must release software every week is going to have very different needs than one releasing every six months—and will require a much different starting point and approach to DevOps. Business needs will differ across organizations, and it is important to find a service provider that can differentiate DevOps approaches according to varying needs, and that can help an organization prioritize its roadmap.
Organizations should look for a provider that can address the holistic business problem, that is able to outline a DevOps strategy to solve that problem, and that has expertise to back it up. These service providers look at a business and determine what aspect of DevOps will deliver the most value for them, and how DevOps will bring the business closer to their objectives. The provider will integrate those goals with their own expert knowledge on the technology tooling that is available and their experiences with change management. They will then provide the business with a tailored roadmap for achieving what it wants, and a strategy for how to target the most value with DevOps in the shortest amount of time.
Although the numerous tools, strategies, and paradigms that DevOps employs pertain to improving development and operations processes, it is critical that organizations view it as more than just a technology transformation. Instead, they should recognize the DevOps methodology’s holistic value for building long-run, sustainable business success and competitive advantage. DevOps is the modern-day benchmark for best-in-class organizations, and it is the key to successful digital transformation in the modern era.
CloudBees®, founded in 2010, provides capabilities to orchestrate and automate continuous delivery and DevOps, empowering teams while providing unified governance across the application portfolio. CloudBees is one of the largest commercial supporters of the Jenkins open source project, and the company employs certified Jenkins® experts in engineering, support, services, and other technical roles. CloudBees acquired CodeShip in 2018, which allows the company to offer a fully managed CI/CD platform for small teams. With the addition of CloudBees CodeShip, the CloudBees Suite (CloudBees Core™, CloudBees DevOptics™, and CloudBees CodeShip™) provides an end-to-end software delivery system for teams and organizations of all sizes.
Headquarters: San Jose, CA
Other Locations: Raleigh, NC; Richmond, VA; Neuchatel, Switzerland; London, UK
Target Verticals: Automotive, Financial Services, Retail & Telecom
Partners/Resellers: Alliance Partners: Red Hat, AWS, Google, Microsoft, VMware; Reseller Partners: Levvel, Zivra, iTMethods, Column Technologies
Awards/Recognitions: Dev-Insider 2017 IT Awards: Gold Award for DevOps Tools; DevOps.com – The DevOps Dozen 2017 (Jenkins Project – Best DevOps Open Source Project, KK/Sacha – Best DevOps Executives, Capital One – Best DevOps Transformation); 2018 IDC Innovators – Agile Code Development; SD100 2018 (7th year in a row)
The CloudBees Suite is an integrated collection of solutions designed to address the needs of all software-driven organizations, whether large or small, new to DevOps or expert. CloudBees offers several capabilities to enable DevOps in an organization; these capabilities are available in-house, through an integration with a software provider partner, or both. CloudBees provides functionality around version control, build management, test automation, deployment automation, infrastructure and application monitoring, infrastructure automation, configuration management, and infrastructure capacity planning. The CloudBees Suite comprises CloudBees Core, CloudBees DevOptics, and CloudBees CodeShip.
CloudBees Core, for organizations seeking to scale CD and DevOps, is a flexible CI/CD automation engine able to support diverse software portfolios and the unified governance required by growing organizations. CloudBees Core is cloud-native and leverages both Docker and Kubernetes to enable CI/CD for applications developed on and deployed to the cloud.
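Pipelines on Jenkins-based platforms such as CloudBees Core are typically defined as code in a Jenkinsfile. The following is a minimal, hypothetical sketch in Jenkins declarative pipeline syntax—the stage names, build tool, and deploy script are illustrative assumptions, not taken from CloudBees documentation:

```groovy
// Minimal declarative pipeline sketch; commands and stage contents are illustrative.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn -B package' }      // assumes a Maven project
        }
        stage('Test') {
            steps { sh 'mvn test' }
        }
        stage('Deploy') {
            when { branch 'main' }             // deploy only from the main branch
            steps { sh './deploy.sh staging' } // hypothetical deploy script
        }
    }
}
```

Checking a file like this into the application’s repository is what allows the CI/CD process itself to be versioned, reviewed, and governed alongside the code.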
CloudBees DevOptics, for organizations that need to drive and measure DevOps adoption and performance, provides visibility and actionable insights into the software delivery process, helping companies diagnose bottlenecks, fine-tune team practices, and maintain a CI/CD infrastructure. The tool provides DevOps performance metrics, real-time value-stream insights, and CD platform metrics.

CloudBees CodeShip, for small to medium organizations and teams in need of an easy-to-use SaaS solution for CI and CD, is a cloud-based, externally managed CD platform targeted at smaller teams. The tool is simple to configure and comes with native Docker support. As CI/CD processes scale within a company, the CloudBees Suite provides a clear migration path to enterprise-level capabilities.
CloudBees Jenkins Support is offered specifically for open source users. CloudBees provides access to Jenkins support engineers, who are available to answer questions and troubleshoot issues. Along with support, CloudBees provides a verified distribution of the Jenkins core and a curated set of the most popular plugins—all thoroughly tested by the CloudBees team for compatibility with Jenkins and interoperability with each other.
Jenkins X, a downstream project for Jenkins Core, provides a Kubernetes-based continuous delivery platform for developing cloud-native applications using a distribution of Jenkins as the core automation engine. Jenkins X enables developers to quickly establish continuous delivery best practices for their cloud-native applications by automating the creation of applications, environments, and pipelines to promote and deploy an application from development, testing, and staging to production.
A typical installation of a CloudBees product takes 30 minutes to an hour, and the services engagement covers setup, integration, and mentoring on operational use cases around the product and platform. A typical CloudBees Quickstart is a four-day, on-site engagement with a customer, plus a one-day remote engagement to assemble the customer’s runbook for the platform along with any notes and go-forward recommendations.
CloudBees offers paid training classes based on clients’ preferences. Training class types include admin, certification, and user training, as well as Jenkins-specific training. After implementation, CloudBees provides expert support based on the support package the client selects.
CloudBees products are sold according to the number of users participating in continuous delivery services.
NGINX is an open source technology that powers the web and digital offerings of over 450 million sites. The original NGINX server was developed in 2003 by Igor Sysoev, who created it in an effort to solve the basic problem of how to handle customer growth in web applications. After successfully running NGINX in production, he open-sourced the technology, which led to global adoption and eventually an incorporation—NGINX Inc. was founded in 2011. Today, the NGINX Application Platform helps companies operate dynamically in the cloud with load balancing, API gateways, and microservices, collapsing many disparate application development and delivery technologies into a single platform.
Headquarters: San Francisco, CA
Other Locations: Singapore; Sydney, Australia; Cork, Ireland; Moscow, Russia
Number of Employees: 230+
Number of Customers: 1,600+
Target Verticals: High tech/software, telco, finance, retail
Partners/Resellers: Sampling of alliance/technology partnerships: Red Hat, Docker, AWS, Google
Awards/Recognitions: Gold in 2017 Stevie® Awards for Sales & Customer Service – NGINX; frequent speaker at KubeCon; frequent speaker at O’Reilly open source, cloud, and DevOps events; 1 of only 2 infrastructure investments made by Goldman Sachs in 2018
The NGINX Application Platform consists of four main products to help with DevOps processes—NGINX Plus, NGINX WAF, NGINX Controller, and NGINX Unit. While security and integration capabilities vary slightly across these products, the NGINX Web Application Firewall (WAF) can be leveraged specifically to protect applications against SQL injection, local file inclusion (LFI), remote file inclusion (RFI), and other Layer 7 attacks. The NGINX WAF module is based on the ModSecurity open source software. Other security capabilities include the ability for NGINX Plus to handle web and API encryption, authentication, and authorization. Another example is NGINX Unit memory isolation, which ensures that compromised applications cannot crash neighboring apps on the same server.
NGINX has several technology and integration partners that bring additional value to customer deployments. NGINX’s certified module program offers customers dynamically loadable software modules from third parties that have been tested and certified by NGINX. NGINX works with third parties offering technologies in the areas of identity and access management, dynamic application and API security, web application firewalls, DDoS mitigation, bot detection and remediation, encryption, and others.
Across the four tools within the NGINX Application Platform, capabilities and support include load balancing, reverse proxying, content caching, web serving, security controls, dynamic modules, monitoring, and scalable and reliable HA deployments.
NGINX Plus, NGINX’s flagship product, is a commercial version of the company’s open source technology, and is built for large enterprises. NGINX Plus is a software load balancer, web server, and content cache built on top of open source NGINX. It offers several exclusive features in addition to the open source offerings, including session persistence, configuration via API, and active health checks. It also offers a Kubernetes Ingress controller that allows users to create Kubernetes applications with NGINX Plus in front, and includes support for load balancing with SSL/TLS termination, as well as webSocket and HTTP/2.
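As a rough illustration of how these features appear in practice, the sketch below shows an NGINX Plus upstream configured with session persistence and active health checks. The hostnames, ports, and intervals are illustrative assumptions, not a configuration from NGINX documentation:

```nginx
# Hypothetical two-server upstream; hostnames, ports, and timings are illustrative.
upstream backend {
    zone backend 64k;                 # shared memory zone for runtime state
    server app1.example.com:8080;
    server app2.example.com:8080;
    sticky cookie srv_id expires=1h;  # NGINX Plus session persistence
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
        health_check interval=5s;     # NGINX Plus active health checks
    }
}
```

The `sticky` and `health_check` directives shown here are among the commercial features layered on top of the open source load-balancing core.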
NGINX Controller is a centralized management platform built to help organizations manage NGINX Plus instances across a multi‑cloud environment. It includes a wizard-style interface that allows users to configure features such as load balancing, URL routing, and SSL termination. Controller also has monitoring and alerting capabilities, and provides visibility into 200 key metrics and preemptive recommendations based on best practices. Controller enables NGINX users to keep track of infrastructure assets and improve configuration with static analysis. Controller also monitors the underlying OS, application servers (like PHP-FPM), databases, and other components that NGINX Plus interacts with. A lighter-weight, SaaS version of Controller, NGINX Amplify, is available for monitoring and configuration analysis.
NGINX Unit is a web and application server that helps organizations manage distributed applications, allowing users to deploy configuration changes with no service disruption. The solution’s application server offers multi-language support, including Go, Perl, PHP, Python, and Ruby, and allows users to run multiple applications written in different languages on the same server as well as use multiple versions of a language simultaneously on the same server (PHP 5 and PHP 7, Python 2.7 and Python 3). Organizations can use Unit as the foundation for their service mesh, with access to an integrated network stack for service‑to‑service communication, and offload network configuration from application code to NGINX Unit.
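NGINX Unit is configured dynamically through a JSON object submitted to its control API, rather than through static files. The fragment below is a minimal sketch of that shape for a single PHP application; the listener address, application name, and paths are illustrative, and the exact schema depends on the Unit version in use:

```json
{
  "listeners": {
    "*:8300": { "pass": "applications/blog" }
  },
  "applications": {
    "blog": {
      "type": "php",
      "root": "/www/blog",
      "index": "index.php"
    }
  }
}
```

Because the configuration is applied over the control API at runtime, changes like adding an application or switching language versions take effect without restarting the server—which is what enables the zero-disruption deployments described above.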
NGINX is lightweight software designed to run in containers, in virtual machines, on bare metal, in any public cloud environment, or across any combination of these environments. It allows developers to use the same solution across development, test, and production. NGINX does not require cloud vendor lock-in, which allows customers to change or add cloud providers as needed without changing their cloud tools.
NGINX Plus can be installed on bare metal servers, virtual machines, containers, or in a compute instance on a public cloud. NGINX technologies have extremely small footprints (e.g., 2.5 MB for NGINX Plus), and many instances can be implemented quickly. NGINX training and documentation is available online, and NGINX offers a range of formal training sessions, from basic to intermediate to advanced.
Products within the NGINX Application Platform are packaged with one of three levels of commercial support: Basic, Professional, or Enterprise. Product and module subscription pricing is based on the support SLA chosen, which differ in the areas of business hours, phone support availability, response time, and support for both third-party modules as well as NGINX Unit and NGINX Controller.
Red Hat® is one of the world’s leading providers of open source solutions. The company offers a comprehensive portfolio of open source technologies for the enterprise, including solutions for infrastructure automation, cloud, integration, and application development. These technologies form the bedrock of modern DevOps capabilities, and the software and professional services provided to implement them cover many of today’s most important IT areas, including operating systems, virtualization, middleware, storage, and cloud computing—as well as the tools necessary to manage and automate complex environments. Red Hat offers its customers access to one of the largest developer ecosystems built around JBoss and OpenShift technologies and partners. It also offers numerous certified applications available on any Linux® platform and, most recently, a robust certification program for containerized applications. Red Hat offers consulting services to help companies accelerate DevOps adoption by assessing their software delivery environments and introducing tools and methodologies for improving application lifecycle management. These services also address areas like standardization, optimization, automation, monitoring, management, and delivery pipelines.
Other Locations: More than 95 offices spanning the globe
Number of Employees: ~12,212
Number of Customers: Serves over 90 percent of Fortune 500 companies
Target Verticals: Government, FSI, Telco, Tech & Medical
Partners/Resellers: Intel, AWS, Google, Microsoft
Awards/Recognitions: Software Vendor of the Year, European IT & Software Excellence Awards 2018; Cloud Innovator of the Year, awarded by Dynatrace at its EMEA Partner Summit 2018; Red Hat OpenShift awarded Best Cloud Agile Technology in the second annual Computing DevOps Excellence Awards; Red Hat OpenShift awarded Best Cloud Platform in the TechXLR8 2018 awards
Red Hat’s open source model supplies enterprise computing solutions across physical, virtual, and cloud environments. Red Hat also offers numerous support, training, and consulting services to its customers worldwide and through top-tier partners. Red Hat’s primary products providing DevOps capabilities are Red Hat Ansible Automation®, Red Hat OpenShift Container Platform, Red Hat CloudForms®, and Red Hat Satellite.
Red Hat Ansible is an IT automation technology that helps companies automate and optimize application delivery, providing a single language for DevOps practices across an organization. The first component of Ansible Automation is Red Hat Ansible Engine, which automates repetitive IT tasks using a simple language the entire IT organization can understand. Ansible is agentless, so there is no software to install on the systems being managed, and IT teams can get started and collaborate quickly. Some of the features offered with Ansible include role-based access control, automated deployment, centralized logging and auditing, and system tracking.
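To illustrate the “simple language” point: Ansible tasks are written as human-readable YAML playbooks. The sketch below is a minimal, hypothetical example—the host group, package, and modules chosen are illustrative assumptions rather than a recommended configuration:

```yaml
# Hypothetical playbook: host group, package, and module choices are illustrative.
- hosts: webservers
  become: yes
  tasks:
    - name: Ensure nginx is installed
      package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: yes
```

Because tasks are named and declarative, playbooks like this can double as documentation that operations, development, and even non-technical stakeholders can read.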
The second main module is Red Hat Ansible Tower, which helps teams manage complex multi-tier deployments by adding control and delegation to Ansible-powered environments. Teams can automate by centralizing and controlling Ansible infrastructure with a user interface, role-based access controls, job scheduling, and graphical inventory management. Ansible Tower’s REST application programming interface (API) and command-line interface (CLI) make it easy to embed into existing tools and processes.
Red Hat OpenShift is a container application platform that brings Docker and Kubernetes to the enterprise. Red Hat OpenShift integrates the architecture, processes, platforms, and services needed to empower development and operations teams. Regardless of the organization’s applications architecture, OpenShift allows the organization to easily and quickly build, develop, and deploy in nearly any infrastructure, public or private.
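Because OpenShift builds on Kubernetes, workloads are described declaratively in manifests that the platform reconciles. The following is a minimal sketch of such a manifest; the application name, image path, and port are illustrative assumptions, not from Red Hat documentation:

```yaml
# Hypothetical Deployment manifest; names, image, and port are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                 # platform keeps three pods running
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: registry.example.com/team/web-app:1.0  # illustrative image
          ports:
            - containerPort: 8080
```

Declaring the desired state this way—rather than scripting individual servers—is what lets the same application definition move unchanged between public and private infrastructure.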
OpenShift is available via three delivery models: Red Hat OpenShift Container Platform, OpenShift Dedicated, and OpenShift Online. The OpenShift Container Platform provides enterprise-grade Kubernetes environments for building, deploying, and managing container-based applications across any public or private datacenter where Red Hat Enterprise Linux® is supported. OpenShift Dedicated provides managed, single-tenant OpenShift environments on the public cloud. OpenShift Online is Red Hat’s hosted public Platform-as-a-Service (PaaS) that offers an application development, build, deployment, and hosting solution in the cloud.
Red Hat OpenShift runs and supports both stateful and stateless applications. It delivers built-in security for container-based applications, including role-based access controls, Security-Enhanced Linux (SELinux)-enabled isolation, and checks throughout the container build process. In order to help clients create more modern applications, Red Hat combines OpenShift with Red Hat JBoss Middleware to provide composable cloud-native services, including developer tools, integration, business automation, and data management. Red Hat OpenShift provides development and operations teams with a common platform and set of tools. This aligns both teams with a common, continuous application development and maintenance workflow.
The OpenShift tool includes an enterprise foundation in Red Hat Enterprise Linux. This allows organizations to deploy and support OpenShift anywhere Red Hat Enterprise Linux is deployed and supported, including Amazon Web Services, Microsoft Azure, Google Cloud Platform, VMware, and more. Organizations can use a single container application platform across these public and private clouds, and with Red Hat OpenShift Container Platform on Microsoft Azure, they can build, deploy, and manage containerized services and applications.

Red Hat CloudForms is an infrastructure management platform that allows IT departments to provision, manage, and maintain compliance across virtual machines and private clouds. With CloudForms, organizations can discover, monitor, and track all private cloud and virtual resources and their relationships, as well as automate provisioning processes and management policies. They can also apply corporate governance policies across environments and customize automated remediation processes.
Red Hat Satellite is a system management platform for organizations with growing Linux infrastructure. Red Hat Satellite is built on open standards and based on functional modules that let teams enhance management capabilities for Red Hat Enterprise Linux on virtualized or bare metal deployments. Red Hat Satellite adds extensive lifecycle management capabilities, including patching, subscription management, provisioning, and configuration management.
Pricing and support packages vary across Red Hat products. Pricing is based on the number of nodes (systems, hosts, instances, VMs, containers, or devices) that organizations are managing. Red Hat Ansible Engine is available in two editions that are differentiated by support and features, while Ansible Tower is available in three editions. Red Hat OpenShift pricing varies across delivery models: Online, Dedicated, and Container Platform.
Levvel Research, formerly PayStream Advisors, is a research and advisory firm that operates within the IT consulting company Levvel. Levvel Research is focused on many areas of innovative technology, including business process automation, DevOps, emerging payment technologies, full-stack software development, mobile application development, cloud infrastructure, and content publishing automation. Levvel Research’s team of experts provides targeted research content to address the changing technology and business process needs of competitive organizations across a range of verticals. In short, Levvel Research is dedicated to maximizing returns and minimizing risks associated with technology investment. Levvel Research’s reports, white papers, webinars, and tools are available free of charge at www.levvel.io.
Anna Barnett, Research Senior Manager
Major Bottoms Jr., Research Consultant
Jamie Kim, Research Content Specialist
Anna Barnett is a Research Senior Manager for Levvel Research. She manages Levvel's team of analysts and all research content delivery, and helps lead research development strategy for the firm's many technology focus areas. Anna joined Levvel through the acquisition of PayStream Advisors, and for the past several years has served as an expert in several facets of business process automation software. She also covers digital transformation trends and technology, including DevOps strategy, design systems, application development, and cloud migration. Anna has extensive experience in research-based analytical writing and editing, as well as sales and marketing content creation.
Major Bottoms Jr. is a Research Consultant for Levvel Research based in Charlotte, NC. He plays a key role in the analysis and presentation of data for Levvel’s research reports, webinars, and consulting engagements. Major’s expertise lies in the Procure-to-Pay, Source-to-Settle, and travel and expense management processes and software, as well as technologies and strategies across DevOps, digital payments, design systems, and application development. Prior to joining Levvel, Major held various roles in the mortgage finance field at Bank of America and Wells Fargo. Major graduated with a degree in Finance from the Robert H. Smith School of Business at the University of Maryland.
Jamie Kim is a Research Content Specialist for Levvel Research based in New York City. She develops and writes research-based content, including data-driven reports, whitepapers, and case studies, as well as market insights within various digital transformation spaces. Jamie’s research focus is on business automation processes, including Procure-to-Pay, as well as DevOps, design practices, and cloud platforms. In addition to her research skills and content creation, Jamie has expertise in design and front-end development. She came to Levvel with a research and technical writing background at an IT consulting company focused on upcoming AI and machine learning technologies, as well as academic book editorial experience at Oxford University Press working on its music list.