Imagine this scenario. Your team built a continuous delivery pipeline. Team members deploy multiple times a day. Telemetry warns the team about production issues before they become outages. Automated tests ensure known regressions don't enter production. Team velocity is consistent and fast. All in all, things are looking up.
What’s next? Is this the top or just a plateau?
It's a plateau. If you've read my previous posts on this blog, you'll know that I frequently say that DevOps has no end state. The scenario above describes a successful implementation of the first and second ways of DevOps. The next step is implementing the third way: continuous learning and experimentation. The idea is to experiment with different ways of working to improve different aspects of the value stream. Improvements may target technology, people, or the processes themselves.
A new definition of “done”
Let's step back to a point in the software development life cycle (SDLC) before the backlog, where product ideas are born. The product owner has an idea that may help users or differentiate the product from the competition. The product owner is happy because the development team (equipped with its new DevOps shine) can ship to production faster than ever. The product owner throws an item into the backlog, and soon afterward the new functionality is available in production. Everyone celebrates. A week later, other stakeholders follow up with the product owner by asking, "How many users are using the new feature?" Unfortunately, the product owner doesn't have an answer. It just seemed like a good idea.
Good ideas are not the same thing as validated ideas. Software teams spend too much effort shipping good ideas when they should be shipping validated ideas. This practice comes from the Lean school of thought and is referred to as hypothesis-driven development. Hypothesis-driven development treats each change as a scientific experiment that can be confirmed or refuted with data. That sounds more like software engineering, right? This idea can move teams off the plateau and onto the next level. Eliminate wasteful hunch-based thinking first.
Ronny Kohavi, from the Analysis and Experimentation Group at Microsoft, found that only one-third of their work succeeded in improving key metrics. Conversely, two-thirds of the work was waste (assuming that work matters if and only if it improves key business metrics)!
If we are not performing user research, the odds are that two-thirds of the features we are building deliver zero or negative value to our organization, even as they make our codebase ever more complex, thus increasing our maintenance costs over time and making our software more difficult to change. Furthermore, the effort to build these features is often made at the expense of delivering features that would deliver value (i.e., opportunity cost). — The DevOps Handbook
This information is disturbing. What's the point of developing the technical architecture (continuous delivery, telemetry, automated testing, etc.) if it's used to ship waste? It's even more worrying that all of these ideas were considered good ideas, proposed with the best intentions. Kohavi's research demonstrates the importance of hypothesis-driven development backed by user research and A/B testing.
Teams can adopt hypothesis-driven development by modifying their definition of done to include "validated XXX metrics." Metrics may be decided on a task-by-task basis or drawn from global business metrics such as sign-ups, purchases, or traffic. The run-up to big ideas begins with small-scale experiments backed by A/B tests. An experiment may be as simple as adding a new button that displays a "coming soon" message when clicked and counting the clicks. If this results in, say, 30% engagement, then it may be worth a follow-up experiment. When the team has conducted enough experiments (and gained the confidence that follows) they can craft a change intended to improve key business metrics, with a validation metric such as a 20% increase in sign-ups.
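Validating a hypothesis like the sign-up example ultimately comes down to a simple statistical check. Here is a minimal sketch in Python (the function name and conversion numbers are hypothetical) of a two-proportion z-test, a standard way to decide whether an A/B variant's conversion rate differs from the control's by more than chance:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does variant B's conversion rate differ
    from control A's by more than chance would explain?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: control converted 120/1000, the variant 150/1000.
z, p = two_proportion_z(conv_a=120, n_a=1000, conv_b=150, n_b=1000)
print(f"z = {z:.2f}, p-value = {p:.3f}")
```

If the p-value falls below the team's chosen significance level, the hypothesis is confirmed; otherwise the change is a candidate for the waste bin, exactly the judgment Kohavi's group applies to its experiments.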
Hypothesis-driven development combats waste (a recurring villain in Lean thinking) by keeping the team focused on what actually delivers the intended business outcomes. It also enables a new approach to big-ticket project planning, as we'll learn from Mark Schwartz.
Preferring objectives over requirements
Mark Schwartz, then CIO at USCIS, describes a project management strategy in his 2019 book War and Peace and IT. The USCIS E-Verify system verifies the employment authorization of U.S. residents and citizens. Use of the system had been picking up, and Schwartz was tasked with scaling it up and increasing its efficiency.
The department had spent four years producing a one-inch-thick binder full of requirements documents. Schwartz's first action was to discard them (well-intentioned as they were; remember Kohavi from earlier?). Next, he set three clear objectives for the project:
- Raise the number of cases an individual could process per day
- Increase the share of cases processed with automation from 98.6% to closer to 100%
- Improve the registration conversion rate, since a large number of companies started registration but did not complete it
The team used impact mapping, in coordination with a mix of stakeholders, to create hypotheses that could achieve the objectives. They already had dashboards and metrics in place, so there was an existing structure for assessing progress. The team agreed to allocate capital every three months toward these goals, with informal follow-ups each month and formal quarterly reviews, and reported progress to a steering committee biweekly for course corrections. Capital investments were granted only as long as there was progress toward the objective; investments could be increased, decreased, or halted depending on the situation. Schwartz calls this approach "staged investments."
The results were impressive. The team achieved the first two objectives. Through multiple experiments, they realized that their registration rate had plateaued and wasn't likely to change (they concluded that many companies had registered to test the system out rather than intending to complete the process). The project was originally planned for four years; it ended in two and a half with better-than-expected results. Schwartz was able to eliminate waste by replacing requirements with objectives, then taking data-validated steps toward those objectives.
You could say that the project had achieved the Agile ideal: maximizing outcomes while minimizing output, or in other words, maximizing the amount of work not done. — War and Peace and IT
This approach is radical compared to the approaches in many companies. It demonstrates what’s possible when leadership trusts the teams closest to the problems to achieve objectives using data-backed methods. It’s rare that we can cite the government as a technological best practice leader, so we may at least revel in that for a moment.
Schwartz accomplished something special here: he applied the coaching kata. It's never mentioned by name; it's revealed in Schwartz's behavior.
Two katas to drive experimentation
Mike Rother describes the coaching and improvement katas in Toyota Kata. A "kata" is a practiced form or routine in Japanese. Toyota lives and breathes these katas; they help Toyota continuously improve in a disciplined, scientific manner.
The improvement kata identifies a target condition (i.e., the goal) and a framework for navigating from the current condition to the target condition. The coaching kata teaches the improvement kata to new practitioners. Senior leaders are the most experienced coaching kata practitioners; they help propagate and institutionalize the katas. Everyone at Toyota also has a mentor whose role is to develop the improvement kata in the mentee.
As tempting as it seems, you cannot reorganize your way to continuous improvement and adaptiveness. What is decisive is not the form of the organization, but how people act and react. The roots of Toyota’s success lie not in its organizational structures, but in developing capability and habits in its people. — Mike Rother, Toyota Kata
The same should be said about your organization. How are you internalizing behaviors? How is everyone in your team adopting hypothesis-driven development? What objectives and metrics align the team? What will give leadership the confidence to replace requirements with objectives?
These questions will help you move off the plateau and into the next phase of DevOps: continuous learning and experimentation.
Internally, we described our goal as creating “buoys, not boundaries.” Instead of drawing hard boundaries that everyone has to stay within, we put buoys that indicate deep areas of the channel where you’re safe and supported. You can go past the buoys as long as you follow the organizational principles. After all, how are we ever going to see the next innovation that helps us win if we’re not exploring and testing at the edges? — Ralph Loura, CIO at HP, in The DevOps Handbook
You'll need a new set of organizational principles, ones that focus on the entire value stream rather than on technological particulars. Project to Product offers the Flow Framework as a way to track and measure the value stream across the organization. It's an excellent framework for assessing your progress toward business value, and the insight derived from implementing the Flow Framework will reveal more waste-reduction opportunities. The next leap comes not from technical changes but from organizational change.
You don’t have to navigate this path on your own. Mark Andresen, Director of DevOps at Capital One, shares his experience transforming culture and leading change in the Recipe for DevOps Success Webinar.