A DevOps Maturity Model to Monitor Your Progress

This article was derived from a DevOps Maturity Level Assessment tool that we use internally to help our clients benchmark their proficiency in DevOps. The assessment places clients in one of four categories determined by the maturity of their DevOps processes.

In the past, we have discussed at length the cultural, organizational, and procedural changes that organizations adopting DevOps need to undergo to achieve their desired results. We also discussed some best practices for facilitating these changes and avoiding common pitfalls. Finally, we outlined some critical metrics with which to measure improvement over current results. If you are just beginning your DevOps journey, take a moment to understand the path ahead of you with the resources below.

Finished catching up? Were you already well on your way? Great! Now we can begin to explore the topic at hand.

We’ll aim to answer some common questions many organizations have and clearly define the final destination of the transformative journey you are now on. A few of the questions that will be answered are:

  • What does a successful adoption effort look like?
  • When can an organization claim they are practicing DevOps?
  • Or rather, when can they claim that the level of DevOps they are practicing is genuinely mature?
  • And where do we go once we get there?

We’ll also discuss four critical components of DevOps and provide a clear view of the processes that mature DevOps companies have implemented that can serve as a benchmark for your efforts while you strive to achieve DevOps maturity. Then we discuss what’s on the horizon for DevOps. We also share a client’s story and how we assisted them in maturing their DevOps practices.

DevOps Maturity Model Key Factors

We believe there are four fundamental areas that organizations should focus on when adopting DevOps. They are culture and organization, CI/CD, testing, and architecture. We see many organizations that focus primarily on CI/CD and automation, but without the right culture, architecture, and testing practices, these organizations will never get the full benefits of DevOps.

In the following four sections, we discuss why each of these key factors is critical for getting the most out of your efforts, and show you what DevOps maturity looks like. Before you begin this journey, take the time to compare your own organization’s maturity in these areas against the best practices listed in each section, and take note of the areas you need to focus on. This will provide you with the best possible roadmap for adoption efforts.

Culture & Organization

DevOps is considered by most to be a cultural shift rather than a technological one. Enabling the benefits of DevOps requires deep collaboration across functions, as well as a pervasive mentality that embraces rapid failure. Most importantly, getting buy-in from all stakeholders is critical to ensure that the transition isn’t perceived as negative or purposefully sabotaged by members of the organization.

A report by Gartner indicated that by 2022, three-quarters of DevOps initiatives would fail to meet expectations due to an organization’s inability to resolve issues around organizational and cultural change. Gartner cites a lack of consideration of business outcomes, lack of buy-in from staff, lack of collaboration, and unrealistic expectations as the primary causes of these failures.

“For me, DevOps is a cultural shift. Developers not only build it, but ship it, and also observe it and monitor it while it’s out in production. So it’s their baby, from birth out into the wild.”

Sean Sullivan – CBT Nuggets

In fact, in a recent survey we conducted of over 200 IT decision-makers, cultural and organizational issues were the most frequently cited challenge experienced during DevOps implementations.

While this does mean that most companies will fail to achieve their desired results, it also means that some companies succeed.

Below we provide some key guidelines that should be followed in order to give your organization the best chance at achieving the desired results:

  • A separate team exists for each product.
  • Each team has its own backlog.
  • The team is responsible for the product all the way to production.
  • Work is prioritized per release needs.
  • There is no boundary between development and testing.
  • Requirements are clearly defined, including acceptance criteria.

You can learn more about best practices for adopting a DevOps culture in our recent eBook: A Business Leaders Guide to DevOps. Read now.

CI/CD (Build, Deployment & Release)

The goal of CI/CD is to deliver better-quality software by testing earlier and preventing issues before they reach production. This comes from the ability to identify defects and quality issues on smaller code changes, earlier in the process. Ultimately, this shortens the feedback loop between end-users and the development team.

It also makes merge conflicts far less likely when several developers are changing the same code, and it allows developers to commit changes more often while still maintaining stability.

CI/CD is not only a best practice for agile development; its adoption is also widely understood to be a prerequisite for any DevOps initiative. Some might say it is the best proxy for measuring the entire DevOps initiative. In any case, too many manual steps or layers of bureaucracy will make your processes too slow to succeed.

It is best practice to automate the build and testing processes in order to find bugs early and avoid wasting time on needless manual activities. However, it is important to have a well-defined process before automating: automating an undefined or suboptimal process will only exacerbate its inherent flaws. The pipeline must also be designed to scale over time, so that new features and requirements can be added to the automated build process transparently.
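
As a rough illustration of the fail-fast principle behind an automated build, the Python sketch below runs a series of pipeline stages in order and marks the whole build as failed the moment any stage returns a non-zero exit code. The stage names and commands are placeholders, not a prescription for any particular CI tool.

```python
import subprocess
import sys

# Ordered pipeline stages; the commands here are placeholders -- substitute
# your own compile, test, and scan commands.
STAGES = [
    ("compile", ["python", "-c", "print('compiling...')"]),
    ("unit-tests", ["python", "-c", "print('running unit tests...')"]),
    ("security-scan", ["python", "-c", "print('scanning dependencies...')"]),
]

def run_pipeline() -> int:
    """Run each stage in order and fail fast on the first error."""
    for name, command in STAGES:
        print(f"--> stage: {name}")
        result = subprocess.run(command)
        if result.returncode != 0:
            # Mark the whole build as failed as soon as any stage fails.
            print(f"build FAILED at stage '{name}'")
            return result.returncode
    print("build SUCCEEDED")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```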

In a recent survey, 69% of IT decision-makers indicated that they were shipping new features to production once per day or more. This highlights the importance of automating manual steps in order to keep pace with the competition.

Rate of Application Deployment

“We don’t want a lapse in time where something has been built and hasn’t been brought out to the customer. By putting smaller changes into production more frequently, we decrease volatility and hesitancy. We’ve all seen where there are two weeks where the code changes are backed up in a branch. But to get that out, there was a bit of trepidation because all these things changed. If you can reduce that cognitive load of shipping to a minimum and it’s less likely something’s going to not be optimal because you know exactly what changed. It’s gonna be a smaller thing to digest and the customer gets it sooner. So we all win.”
Sean Sullivan, CTO – CBT Nuggets

You can feel good that your CI/CD processes are mature when you are practicing each of the processes below.

Efficient Build Process

  • A good build process produces artifacts, logs, and a status for each execution. It is triggered automatically with each code commit, and its history is available for the team to review what has happened over recent executions. Because the code is analyzed on every run, the build is marked as failed whenever a problem is found, such as a test failure or a security vulnerability.
  • The application is built only once, using a dedicated server, and the output artifact can be deployed to each environment without being rebuilt. Each artifact is tagged and versioned so the build can be traced across the pipeline (see the sketch after this list). With every build run, metrics are gathered and analyzed so the process can be improved.
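
To make the "build once, deploy everywhere" idea concrete, here is a minimal sketch of how a build step might stamp its single output artifact with the application version and commit SHA so it can be traced through every environment. The version number, artifact name, and file handling are hypothetical; real pipelines typically delegate this to their CI tooling.

```python
import subprocess
from pathlib import Path

VERSION = "1.4.2"  # hypothetical application version

def current_commit() -> str:
    """Return the short commit SHA (assumes the build runs inside a git checkout)."""
    out = subprocess.run(
        ["git", "rev-parse", "--short", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def package_artifact(build_output: Path) -> Path:
    """Name the single build output with version + commit so the same artifact
    can be promoted across environments instead of being rebuilt per environment."""
    sha = current_commit()
    suffix = "".join(build_output.suffixes)  # e.g. ".tar.gz"
    artifact = build_output.with_name(f"myapp-{VERSION}+{sha}{suffix}")
    build_output.rename(artifact)
    print(f"artifact ready for all environments: {artifact.name}")
    return artifact

if __name__ == "__main__":
    # Placeholder: pretend the build step produced this archive.
    demo = Path("myapp.tar.gz")
    demo.write_bytes(b"demo build output")
    package_artifact(demo)
```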

Deployment

  • A deployment pipeline exists and can deploy to all environments using the same standard process, whether or not the target is production. No manual tasks are required, which makes the process easily measurable and predictable. Each deployment involves little or no human intervention (zero-touch), and deployments are executed continually.
  • Releases are decoupled from deployments, and features can be hidden using flags or configuration values (see the feature-flag sketch after this list). Getting a new version to production requires no downtime, and once it’s there, application health is measured at different intensities and from different angles to ensure everything is working correctly. Whenever a problem is detected in production, the deployment process can be used to rapidly roll forward a fix rather than rolling back previous changes or making manual changes, since each deployment is immutable; there are even self-healing tools in place.
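
The following sketch illustrates how a simple feature flag can decouple release from deployment: the new code path ships with every deployment but stays hidden until the flag is flipped. The flag file, flag name, and checkout function are all hypothetical; production systems usually read flags from a configuration service or a dedicated flag platform.

```python
import json
from pathlib import Path

# Hypothetical flag store; in practice this could be a config service or env vars.
FLAGS_FILE = Path("feature_flags.json")

def is_enabled(flag: str) -> bool:
    """Return True if the named feature flag is switched on.
    Deployed-but-hidden code paths stay dark until the flag flips."""
    if not FLAGS_FILE.exists():
        return False  # default: new features stay hidden
    flags = json.loads(FLAGS_FILE.read_text())
    return bool(flags.get(flag, False))

def checkout(cart: list[str]) -> str:
    # The new flow ships with every deployment, but users only see it
    # once the flag is enabled -- release is decoupled from deployment.
    if is_enabled("new-checkout-flow"):
        return f"new checkout for {len(cart)} items"
    return f"classic checkout for {len(cart)} items"

if __name__ == "__main__":
    FLAGS_FILE.write_text(json.dumps({"new-checkout-flow": False}))
    print(checkout(["book", "pen"]))   # classic path
    FLAGS_FILE.write_text(json.dumps({"new-checkout-flow": True}))
    print(checkout(["book", "pen"]))   # new path, no redeploy needed
```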

Code Management

  • To ensure rapid release cadence, there is no (or minimal) branching in source control, and no feature branch lives longer than a day. The team performs frequent commits, multiple times a day. All changes related to the application are stored in version control, including infrastructure, configuration, and database.
  • All deployment environments and dev boxes are production-like and exist only while they are needed; they are created on demand and automatically, including all required adjustments (see the sketch after this list).
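
As one possible illustration of on-demand, production-like environments, the sketch below spins up and tears down a throwaway environment with Docker Compose from Python. It assumes Docker and a docker-compose.yml describing the application already exist; the project naming is arbitrary.

```python
import subprocess
import uuid

def create_environment() -> str:
    """Spin up a throwaway, production-like environment from the same
    docker-compose.yml used everywhere else (assumed to exist in the repo)."""
    project = f"review-{uuid.uuid4().hex[:8]}"
    subprocess.run(
        ["docker", "compose", "-p", project, "up", "-d", "--build"],
        check=True,
    )
    return project

def destroy_environment(project: str) -> None:
    """Tear the environment (and its volumes) down as soon as it is no longer needed."""
    subprocess.run(
        ["docker", "compose", "-p", project, "down", "-v"],
        check=True,
    )

if __name__ == "__main__":
    env = create_environment()
    try:
        print(f"environment '{env}' is up -- run your tests against it")
    finally:
        destroy_environment(env)
```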

Data Management

  • To ensure repeatability and control, database changes are made through code (migrations) or scripts stored in version control, fully automated, versioned, and performed as part of the deployment process (a minimal sketch follows).
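
Here is a minimal sketch of migrations-as-code, assuming plain SQL scripts checked into a hypothetical migrations/ directory and a SQLite database for simplicity. Dedicated tools such as Flyway, Liquibase, or Alembic implement the same idea with many more safeguards.

```python
import sqlite3
from pathlib import Path

# Migration scripts live in version control and are applied in order,
# e.g. migrations/001_create_users.sql, migrations/002_add_email.sql.
MIGRATIONS_DIR = Path("migrations")

def apply_migrations(db_path: str = "app.db") -> None:
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (version TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_migrations")}
    for script in sorted(MIGRATIONS_DIR.glob("*.sql")):
        if script.stem in applied:
            continue  # already applied by a previous deployment
        conn.executescript(script.read_text())
        conn.execute("INSERT INTO schema_migrations (version) VALUES (?)", (script.stem,))
        conn.commit()
        print(f"applied {script.name}")
    conn.close()

if __name__ == "__main__":
    apply_migrations()
```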

Continuous Testing

Many organizations are now releasing code to production weekly, daily, or even hourly. As a result, testing and maintenance need to be performed much more quickly to maintain the desired cadence. Continuous testing has evolved out of this need.

Continuous testing is a type of software testing characterized by a combination of testing early, testing often, testing everywhere, and automation, in order to address the business risks associated with a software release as early as possible.

As applications gain prevalence as a source of competitive advantage, business leaders are becoming more aware of how critical speed and quality are when delivering applications to users. Issues with build quality or performance can negatively impact the user experience. At the same time, delays in delivery can result in lagging behind the competition. These factors are increasingly presenting themselves as significant business risks highlighting the importance of implementing continuous testing.

Getting continuous testing right results in improved code quality, accelerated time-to-market, and a continuous feedback mechanism, and it eliminates the disconnect between development, testing, and operations.

The list of processes below represents an extremely high level of maturity in your continuous testing capabilities and will ensure you are achieving the maximum value DevOps can offer; a minimal sketch of one such automated check follows the list.

  • The testing team does not need to wait until the end of sprint/release to verify quality.
  • There is a dedicated test environment per product.
  • Automated unit tests exist and are run manually.
  • All committed changes are security tested automatically.
  • All committed changes are unit tested automatically.
  • Unit test coverage is constantly analyzed and validated.
  • Functional tests are automated.
  • Integration tests are executed automatically.
  • Performance tests are executed manually.
  • Security tests are executed automatically.
  • Acceptance tests are executed manually.
  • Regression testing is defined and fully automated.
  • Exploratory testing is executed manually based on risk analysis.
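
As a small, concrete instance of one item above, unit-testing every committed change automatically, here is a hypothetical function and the pytest tests a CI pipeline would run on each commit. The pricing logic and names are invented purely for illustration; the function and its tests would normally live in separate files.

```python
# A hypothetical function under test (think pricing.py) and its tests
# (think test_pricing.py), shown together here for brevity.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_basic():
    # Runs automatically on every commit as part of the pipeline.
    assert apply_discount(100.0, 25) == 75.0

def test_apply_discount_rejects_bad_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```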

Architecture & Design

While culture, CI/CD, and continuous testing are critical for achieving the maximum value from a DevOps initiative, your ability to fully mature in DevOps (or even begin utilizing DevOps methodologies) will be based on the foundation you have laid with your application architecture and design.

Application architecture is one of the main factors that determines whether a company can achieve a rapid release cadence through DevOps. If the system is not designed to be tested quickly, easily, and frequently, you’ll end up with a bottleneck that keeps you from moving as fast as you’d like. The same goes for deployment. Therefore, it is critical to focus on the key non-functional requirements associated with the benefits you want to achieve, such as modularity, testability, and agility.

Different architecture styles can support these goals, such as microservices, service-oriented, or event-driven architectures. The challenge is choosing the one that best fits your needs and aligning it with the infrastructure and development technologies that will support it.

Below we outline the architecture and design best practices that you should strive for.

  • No (or minimal) business logic is placed in hard-to-test places (e.g., stored procedures or messaging infrastructure).
  • The system is divided into modules with clear boundaries and physical separation (e.g., assemblies).
  • The architecture allows teams to work on modules independently and safely, without affecting others.
  • Application architecture enables meaningful unit tests.
  • Applications are architected as products, instead of solutions for projects.
  • Desired quality attributes are clearly defined.
  • Application architecture allows every component of the application to be tested as soon as it is developed (shift left).
  • The application is designed so that data can be imported for testing purposes.
  • The system is divided into components that can be tested and deployed independently (e.g., services, queues).
  • Releasing a code change in the application does not require a full regression cycle, nor does it carry the risk of global failure.
  • Application architecture is loosely coupled, and modules talk to each other through well-defined interfaces.
  • Application architecture allows for meaningful monitoring.
  • The application is designed to support automated test data generation and aging.
  • Constant feedback about the architecture state is received automatically.
  • Components can scale independently.
  • Service virtualization is leveraged for testing components that are unavailable or difficult to access for development purposes.
  • The application is designed to easily extract sanitized production data.
  • The system is designed to prevent cascading failures (e.g., via a circuit breaker; a minimal sketch follows this list).
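
To make the cascading-failure item concrete, below is a minimal, illustrative circuit breaker in Python: after a configurable number of consecutive failures it stops calling the failing dependency for a cool-down period instead of letting every caller pile on. Real systems typically rely on a resilience library or a service mesh feature rather than hand-rolled code like this.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: stop calling a failing dependency for a
    cool-down period so failures don't cascade through the system."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = 0.0

    def call(self, func, *args, **kwargs):
        if self.failures >= self.max_failures:
            if time.time() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: dependency is failing, skipping call")
            self.failures = 0  # cool-down elapsed, try again (half-open)
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()
            raise
        self.failures = 0
        return result

if __name__ == "__main__":
    breaker = CircuitBreaker(max_failures=2, reset_after=5.0)

    def flaky_service():
        raise ConnectionError("downstream service unavailable")

    for attempt in range(4):
        try:
            breaker.call(flaky_service)
        except Exception as exc:
            print(f"attempt {attempt + 1}: {exc}")
```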

But DevOps Maturity is Just the Beginning of Your Journey…

Whew! You made it to the end. You surely must have completed your DevOps journey by this point… The reality is there really is no end to the path towards DevOps maturity. DevOps is about continuous improvement, and with each new day, DevOps continues to evolve.

You will need to continually evolve with it.

That said, there are some trends and technologies on the horizon that will extend the current scope and capabilities of DevOps. While these trends are currently seeing adoption only at the bleeding edge of software development, they will continue to gain steam and inevitably go mainstream as organizations keep pushing the bounds of software quality and delivery speed. Your organization will be pushed to keep pace or fall behind.

To prepare you for this brave new world, we outline a few of the trends we see bubbling up on the horizon.

  • DevSecOps – Skipping out on security puts customers, brand, and bottom line at risk. That’s where DevSecOps comes in, bringing security to the entire application without slowing down the production pipeline. Learn more about DevSecOps.
  • AIOps – The need for AIOps has arisen out of the ever-increasing complexity and scale of IT operations. Massive streams of data generated by IoT devices, the shift of processing to the edge, and the criticality of delivering a fast, disruption-free customer experience have made monitoring, managing, and maintaining systems manually exceedingly difficult. AI and machine learning help address some of these challenges by identifying patterns and cutting through the noise in IT data, enabling IT teams to identify probable causes of failure and prevent them before they occur.
  • Serverless Architectures – Serverless reduces the burden of maintaining infrastructure and allows the use of more flexible and reliable practices, with increased agility and reduced TCO.
  • Edge Computing – The edge offers several advantages: cost savings, low latency, improved security protections, and real-time access to accurate information. Organizations adopting this approach will need to find a way to extend DevOps to the edge. Learn more about edge computing.
  • Chaos Engineering – Chaos engineering is the practice of experimenting on a system to test its resiliency, driven by the certainty that any system will, at some point, fail. This is especially true with the uncertainty introduced by the rapid and frequent releases of DevOps. Adopting chaos engineering helps organizations become more accepting of failure and learn from those failures to implement the processes needed to prevent them in the future.
  • Continuous Deployment – Continuous deployment goes one step further than continuous delivery: each build forgoes a manual check and is automatically pushed to production. This has the potential to greatly accelerate the delivery of features to end-users. Continuous deployment also frees up developers’ valuable time by eliminating yet another layer of manual testing. However, this approach carries significant risk if adequate automated testing is not in place.
  • Cloud-Native – Cloud-native applications allow organizations to deploy new features quickly. They offer enormous benefits, including cost advantages offered by pay-as-you-go pricing models and the horizontal scalability provided by on-demand virtual resources. When cloud-native applications are implemented using a DevOps approach with CI/CD, they can produce substantial ROI. Learn more about cloud-native applications.
  • Multi-Cloud Deployment – Organizations are increasingly looking to adopt a multi-cloud strategy because it can help minimize data sprawl and loss, sidestep lock-in to a specific vendor, reduce downtime through added redundancy, enable the versatility to use tools from multiple providers, and reduce costs. Organizations practicing DevOps will have to learn to address the challenges of building, testing, and deploying applications in multi-cloud environments in order to leverage these benefits.
  • Service Mesh – A service mesh is a dedicated infrastructure layer for aiding inter-service communications between microservices. A service mesh improves the collaboration between development and operations by providing a centralized place to manage microservices at runtime. This enables developers to focus on the code, while operations focus on the underlying infrastructure. This results in an environment that is more resilient, scalable, and secure.
