When I was at HP Discover earlier this year we recorded a podcast about continuous deployment and DevOps with Dana Gardner of Interarbor Solutions and Ashish Kuthiala, Senior Director for Strategy at Hewlett Packard Enterprise. You can listen to the podcast here. Here are some edits from the transcript, with hopefully a couple of nuggets. I am fascinated, for example, by the convergence of social monitoring, product management and, if necessary, product recall, as exemplified by the GM example.
Kuthiala: The continuous assessment term, despite my objections to the word continuous all the time, is a term that we’ve been talking about at HPE. The idea is that when software development teams and production teams start to collaborate well, they take the user experience, the bugs, and what’s not working on the production end, in the users’ hands where the software is being used, and feed those bugs and that user experience back to the development teams.
When companies actually get to that stage, it’s a significant improvement. It’s not the support teams telling you that five users were screaming at us today about this feature or that feature. It’s the idea that you start to have this feedback directly from the users’ hands.
We should stretch this assessment piece a little further. Why assess the application or the software only when it’s in the hands of the end users? The developers, the enterprise architects, and the planners design an application and they know best how it should function. Whether it’s monitoring tools or the health and availability of the application, start to shift left, as we call it.
Governor: One notion of quality I was very taken with was when I was reading about the history of ship-building and the roles and responsibilities involved in building a ship. One of the things they found was that if you have a team doing the riveting separate from doing the quality assurance (QA) on the riveting, the results are not as good. Someone will happily just go along — rivet, rivet, rivet, rivet — and not really care if they’re doing a great job, because somebody else is going to have to worry about the quality.
As they moved forward with this, they realized that you needed to have the person doing the riveting also doing the QA. That’s a powerful notion of how things have changed. Certainly the notion of shifting left and doing more testing earlier in the process, whether that be in terms of integration, load testing, whatever, all the testing needs to happen up front and it needs to be something that the developers are doing.
Governor: We’re making reference to manufacturing modes and models. Lean manufacturing is something that led to fewer defects, apart from (at least) one catastrophic example to the contrary. And we’re looking at that and asking how we can learn from that.
So lean manufacturing ties into lean startups, which ties into lean and continuous assessment.
What’s interesting is that now we’re beginning to see some interplay between the two and paying that forward. If you look at GM, they just announced a team explicitly looking at Twitter to find user complaints very, very early in the process, rather than waiting until 10,000 people were affected before you did the recall.
Last year was the worst year ever for recalls in American car manufacturing, which is interesting, because if we have continuous improvement and everything, why did that happen? They’re actually using social tooling to try to identify early, so that they can recall 100 cars or 1,000 cars, rather than 50,000.
It’s that monitoring really early in the process, testing early in the process, and most importantly, garnering user feedback early in the process. If GM can improve and we can improve, yes.
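As an aside, the early-warning idea Governor describes can be sketched in a few lines. This is purely illustrative and not GM's actual system: a keyword scan over social posts that flags complaints about a specific model, so a pattern can surface after dozens of reports rather than tens of thousands. The model name, posts, and complaint terms are all hypothetical.

```python
# Illustrative sketch (not GM's actual tooling) of scanning social posts
# for early complaint signals about a specific product.
COMPLAINT_TERMS = {"stall", "stalled", "brake", "ignition", "recall", "fire"}

def complaint_posts(posts: list[str], model: str) -> list[str]:
    """Keep posts that mention the model plus at least one complaint term."""
    hits = []
    for post in posts:
        text = post.lower()
        if model.lower() in text and any(term in text for term in COMPLAINT_TERMS):
            hits.append(post)
    return hits

posts = [
    "My Model X stalled twice this week, anyone else?",
    "Loving the new Model X colour options!",
    "Model X ignition feels sticky in the cold.",
]
# Keeps the stall and ignition posts; the enthusiastic one is ignored.
print(complaint_posts(posts, "Model X"))
```

A real system would add sentiment analysis and de-duplication, but even this crude filter illustrates the shift from waiting for support tickets to actively mining user feedback.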
Gardner: I remember in the late ’80s, when the Japanese car makers were really kicking the pants out of Detroit, that we started to hear a lot about simultaneous engineering. You wouldn’t just design something, but you designed for its manufacturability at the same time. So it’s a similar concept.
But going back to the software process, Ashish, we see a level of functionality in software that needs to be rigorous with security and performance, but we’re also seeing more and more the need for that user experience for features and functions that we can’t even guess at, that we need to put into place in the field and see what happens.
How does an enterprise get to that point, where they can so rapidly do software that they’re willing to take a chance and put something out to the users, perhaps a mobile app, and learn from its actual behavior? We can get the data, but we have to change our processes before we can utilize it.
Kuthiala: Absolutely. Let me be a little provocative here, but I think it’s a well-known fact that the era of the three-year, forward-looking roadmap is gone. It’s good to have a vision of where you’re headed, but which feature and function will you release in which month so that users will find it useful? I think that’s just gone, replaced by the concept of the minimum viable product (MVP) that more startups launch with, building the product and funding themselves as they gain success.
It’s an approach that even bigger enterprises need to take. You don’t know what the end users’ tastes are.
I change my taste on the applications I use and the user experience I get, the features and functionality. I’m always looking at different products, and I switch my mind quite often. But if I like something and they’re always delivering the right user experience for me, I stick with them.
The way for an enterprise to figure out what to build next is to capture this experience, whether it’s through social media channels or engineering your codes so that you can figure out what the user behavior actually is.
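The "engineering your code" half of that sentence can be made concrete with a small sketch. Assuming a hypothetical analytics pipeline, the idea is simply to emit a structured usage event whenever a feature is used, so product decisions are driven by observed behaviour rather than guesswork; the event and field names here are invented for illustration.

```python
# Hypothetical sketch of instrumenting code to capture user behaviour:
# each feature use becomes a small structured event that an analytics
# pipeline could aggregate. Names and fields are illustrative only.
import json
import time

def track(event: str, user_id: str, **props) -> str:
    """Serialize a usage event; a real system would ship this downstream."""
    record = {"event": event, "user": user_id, "ts": int(time.time()), **props}
    return json.dumps(record, sort_keys=True)

# Called wherever the feature is actually exercised in the product code.
line = track("feature_used", "u123", feature="quick_export")
print(line)
```

Aggregating these events is what lets a team say "80 percent of users never touch this feature" with actual numbers behind it.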
The days of business planners and developers sitting in cubicles thinking “this is the coolest thing I’m going to invent and roll out” are gone; that approach is not going to work anymore. You definitely need that kind of thinking for innovation, but you need to test it fairly quickly.
Also gone are the days of rolling something back when it doesn’t work. If you can deliver software really quickly into the hands of end users, you just roll forward when something doesn’t work. You don’t roll back anymore.
It could be a feature that’s buggy. So go and fix it, because you can fix it in two days or two hours, versus the three- to six-month cycle. If you release a feature and you see that most users, 80 percent of them, don’t even bother with it, turn it off, and introduce the new feature that you were thinking about.
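One common way teams implement this "turn it off" roll-forward behaviour is a feature flag: the code for both paths ships, and configuration decides which one users see. Here is a minimal sketch under assumed names (the `new_checkout` flag and in-memory store are hypothetical; real systems back this with a config service).

```python
# Minimal feature-flag sketch: instead of rolling back a deploy, ship the
# code and toggle the feature off when production feedback says it is
# buggy or unused. Flag names and the in-memory store are illustrative.

class FeatureFlags:
    """In-memory flag store; a real system would use a config service."""
    def __init__(self, flags=None):
        self._flags = dict(flags or {})

    def is_enabled(self, name: str) -> bool:
        return self._flags.get(name, False)

    def enable(self, name: str) -> None:
        self._flags[name] = True

    def disable(self, name: str) -> None:
        # "Rolling forward": the code stays deployed, only the flag changes.
        self._flags[name] = False


def render_checkout(flags: "FeatureFlags") -> str:
    if flags.is_enabled("new_checkout"):
        return "new checkout flow"
    return "classic checkout flow"


flags = FeatureFlags({"new_checkout": True})
print(render_checkout(flags))   # new checkout flow
flags.disable("new_checkout")   # usage data shows most users ignore it
print(render_checkout(flags))   # classic checkout flow
```

The flip takes effect without a redeploy, which is exactly the two-hours-not-three-months turnaround Kuthiala describes.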
This assessment from development, testing, and production that you’re always doing starts to benefit you when you’re standing up for that daily sprint and deciding which three features you’re going to work on as a team, whether it’s the two things your CEO told you you absolutely have to do because “I think it’s the greatest thing since sliced bread,” a feature a developer thinks you should build, or a use case coming out of the business analysts or enterprise architects.
Gardner: For organizations that grok this, that say, “I want continuous delivery. I want continuous assessment,” what do we need to put in place to actually execute on it to make it happen?
Governor: We’ve spoken a lot about cultural change, and that’s going to be important. One of the things, frankly, that is an underpinning, if we’re talking about data and being data-driven, is just that we have new platforms that enable us to store a lot more data than we could before at a reasonable cost.
There were many business problems that were stymied by the fact that you would have to spend the GDP of a country to do the kind of processing you wanted in order to truly understand how something was working. If we’re going to model the experiences, if we’re going to collect all this data, then thinking about the infrastructure that lets you analyze that data is going to be super important. There’s no point in talking about being data-driven if you don’t have an architecture for delivering on it.
Kuthiala: You’re right. We have a very rich portfolio across the entire software development cycle. You’ve heard about our Big Data Platform. What can it really do, if you think about it? James just referred to this. It’s cheaper and easier to store data with the new technologies, whether it’s structured, unstructured, video, social, etc., and you can start to make sense out of it when you put it all together.
There is a lot of rich data in the planning and testing processes and all the different lifecycle stages. A simple example is a technology we’ve worked on internally: when you start to deliver software faster and you change one line of code that you want to go out, you really can’t afford to run the 20,000 tests you think you need, because you’re not sure what’s going to happen.
We’ve actually had data scientists working internally in our labs, studying the patterns, looking at the data, and testing concepts such as intelligent testing. If I change this one line of code, even before I check it in, what parts of the code does it really affect, and what functionality? Does it affect all the regions of the world, all the demographics? Which feature or function does it touch? It’s narrowing it down and helping you say, “Okay, I only need to run these 50 tests, not these 10,000, because I need to run through this test cycle fast and have the confidence that it will not break something else.”
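The core of that narrowing-down step, in any test-impact-analysis tool, is a map from source files to the tests that can observe them. This toy sketch (not HPE's actual technology) hand-writes that map as reverse dependency edges and walks them from a changed file; a real system would derive the map from coverage data or static analysis.

```python
# Toy sketch of "intelligent testing": given one changed file, select only
# the tests whose transitive dependencies include it, instead of running
# the full suite. The dependency map is hand-written for illustration.
from collections import deque

# module -> modules that import it (reverse dependency edges)
REVERSE_DEPS = {
    "pricing.py":  ["checkout.py", "invoices.py"],
    "checkout.py": ["test_checkout.py"],
    "invoices.py": ["test_invoices.py"],
    "search.py":   ["test_search.py"],
}

def affected_tests(changed_file: str) -> set[str]:
    """Walk reverse dependencies from the changed file; keep test modules."""
    seen, queue, tests = set(), deque([changed_file]), set()
    while queue:
        module = queue.popleft()
        for dependent in REVERSE_DEPS.get(module, []):
            if dependent in seen:
                continue
            seen.add(dependent)
            if dependent.startswith("test_"):
                tests.add(dependent)
            else:
                queue.append(dependent)
    return tests

# Changing one line of pricing.py triggers only the two suites that can
# actually observe it; test_search.py is skipped.
print(sorted(affected_tests("pricing.py")))
```

This is the 50-tests-instead-of-10,000 effect: the selection is driven by what the change can reach, not by running everything defensively.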
So it’s a cultural thing, like James said, but the technologies are also helping make it easier.
Kuthiala: We were talking about Lean Functional Testing (LeanFT) at HP Discover. The idea is that the developer, like James said, knows his code well. He can test it well himself and doesn’t throw it over the wall to let another team take a shot at it. It’s his responsibility. If he writes a line of code, he should be responsible for its quality.
Governor: The RedMonk view of the world is that, increasingly, developers are making the choices, and then we find ways to support the choices they’re making. The term continuous integration began as a developer term, and then the next wave of that began to be called continuous deployment. That’s quite scary for a lot of organizations. They say, “These developers are talking about continuous deployment. How is that going to work?”
The circle was squared when I had somebody come in and say that what we’re talking to customers about is continuous improvement, which, of course, is a term we first saw in manufacturing. But the developer aesthetic is tremendously influential here, and this change has been driven by them. My favourite “continuous” is that great phrase “continuous partial attention,” which is the world we all live in now.