So how do you get to Continuous Delivery?
The following sections suggest a road map.
Separate the myths from the realities
There are many myths about CD, and they need to be brought to the surface and either dispelled or properly managed. Here are some of the most dangerous:
|CD is too risky.||It’s true that any organisation that builds a Continuous Delivery system is taking a risk, especially early on. Some aspect of the test and authorisation cycles (which becomes largely invisible under Continuous Delivery) is quite likely not to work properly until the team has had a good deal of practice, and as a result faulty solutions may well be released into the operational environment.|
But in that respect, CD is not very different from any other automation initiative, be it database management, testing or critical operational tasks. In fact it isn’t very different from engineering in general – whether or not building a bridge is too risky surely depends on how well you do it. There isn’t anything inherently risky about bridges themselves, but they become terribly risky in the wrong hands!
|Getting to CD is too hard.||This would be a persuasive argument, except:|
Also, there is no need to do CD in a single step. Like any other Agile process, it can (and probably should) be approached incrementally. Then, as each successive step is taken, the remaining steps are very likely to become more achievable.
|Maintaining the CD system increases overhead.||Continuous Delivery isn’t free, either to create or to run. But that does not mean it is any more expensive than the build, integration and deployment processes you’re already using.|
When all the existing issues with the typical manual or semi-automated approach have been taken into account – the errors and omissions, the waste and rework, the delays and conflicts, the resource cost of manual processes, and so on – you may well find that a fully automated process would soon be cheaper to operate and easier to maintain.
|There are too many failed builds.||Often this happens when developers fail to execute a private build before committing their code to the mainline. As a result, files may be missing or defects allowed to enter the mainline.|
In other words, this is not a failure of continuous build or integration but rather a failure to use them properly. ‘Quality First’ and a rapid response to bugs is imperative when implementing CI.
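In practice, a ‘private build’ need not be elaborate: a short script that runs the same compile and test steps as the mainline build, and refuses to let you proceed if any step fails, is enough to keep most broken commits off the mainline. The sketch below illustrates the idea; the `make` commands are placeholders for whatever build and test tools your project actually uses.

```python
import subprocess

# Placeholder steps - substitute your project's real build and test commands.
PRIVATE_BUILD_STEPS = [
    ["make", "compile"],   # hypothetical compile step
    ["make", "test"],      # hypothetical unit-test step
]

def private_build(steps=PRIVATE_BUILD_STEPS, runner=subprocess.call):
    """Run each build step in order; stop and report at the first failure."""
    for step in steps:
        if runner(step) != 0:
            print(f"Private build FAILED at: {' '.join(step)} - do not commit.")
            return False
    print("Private build passed - safe to commit.")
    return True
```

Wiring a script like this into a pre-commit hook makes the discipline automatic rather than a matter of individual conscientiousness.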
|The technology is too expensive.||Even allowing for the many widely and freely available tools, the costs of CD technology are certainly not negligible. However, this argument often seems convincing precisely because the costs of CD technology are not compared with the real alternative – the cost of not doing CD.|
So the reply to this argument is still: expensive compared to what?
For example, to use CI effectively a separate integration machine should certainly be acquired. However, this is a nominal expense compared with the cost of finding problems later in the development lifecycle – which is part of the price of not implementing CI.
|We already have people performing all these activities.||Managers sometimes interpret the successive phases of CD as merely duplicating what should be being done anyway. And it is true that much of what makes up Continuous Delivery is already being done by someone.|
However, when individuals perform such tasks manually they are far more likely to suffer from errors, omissions and delays. These can all be minimised (and often completely avoided) if the tasks are carried out in a separate, highly automated environment.
In other words, the aim of CD is not to do anything radically new. It is to take activities that are often already being done, automate them, and make them more efficient and effective in the process.
Finally, it would be a strange thing if the industry that has helped so many other industries slash their costs through automation could not reap the same benefits for itself!
|It will never be economic to automate some areas.||In principle, this is true, and CD should only be implemented where there is a robust business case for doing so.|
But it should also be remembered that each individual activity doesn’t need to have a positive business case. In fact, since the basic benefits of CD come from integrating the end-to-end process rather than the individual steps, it is important that management is willing to incur costs in some areas in order to gain much greater benefits in others, or in the overall process.
So cost/benefit calculations should always take into account the overall process, not just individual tasks or phases.
|Some things can’t be automated.||Some stages are almost unavoidably manual – UAT for one, and regulatory approval for another. But this is not for technical reasons – they could be automated – but for organisational and political ones. Business owners and regulators are often very reluctant to trust the development team to verify systems for them. In the case of the user community, it is perhaps too ambitious to expect them to accept automated acceptance early on, but that does not mean it could not be achieved in the long run.|
|Our processes and/or infrastructure are too unstable to automate.||Automation is always a problem when the processes being automated change frequently or unpredictably. In fact it should be a basic rule of Continuous Delivery that it should not be attempted in such circumstances – CD will only result in automating chaos. Not only is it often as expensive to change an automated process as it is to create it in the first place, but the delays and disruption new changes cause can become disproportionate and generate a great deal of friction.|
A similar argument applies to the widespread use of ‘non-standard’ infrastructure.
Nonstandard infrastructure is costly: The effort and cost to diagnose production incidents that result from configuration differences between the development, testing and production environments are another form of unnecessary waste. Standardized environment configurations and automated provisioning dramatically reduce or eliminate the problem.
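Once environment configurations are held as data rather than set up by hand, surfacing those differences is itself trivial to automate. A minimal sketch of configuration-drift detection follows; the setting names and values are invented for illustration.

```python
def config_drift(reference: dict, actual: dict) -> dict:
    """Return every setting whose value differs between a reference
    environment configuration and an actual one."""
    keys = reference.keys() | actual.keys()
    return {
        k: (reference.get(k), actual.get(k))
        for k in keys
        if reference.get(k) != actual.get(k)
    }

# Hypothetical environment configurations.
production = {"java_version": "17", "heap_mb": 4096, "tls": "1.3"}
test_env   = {"java_version": "17", "heap_mb": 2048, "tls": "1.2"}

# Each differing setting is reported as (reference_value, actual_value),
# so a drifting test environment is caught before it causes an incident.
drift = config_drift(production, test_env)
```

A check like this, run automatically whenever an environment is provisioned, turns ‘it works on my machine’ incidents from a diagnosis exercise into a report.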
So neither instability nor weak standards means that CD cannot be achieved. In fact, setting Continuous Delivery as a goal can be used as a driver and model for process improvement, whose first aim should be to stabilise and standardise any organisation, process or other factor that prevents the alignment of the end-to-end CD flow or the automation of any of its components.
|Our releases are too unpredictable to automate.||There are two versions of this problem:|
In both these cases, either the structure of the delivery or the way it is managed prevents the organisation from defining a clear pathway from the first builds to the final deployment.
But as with unstable processes, this only means that the organisation first needs to stabilise its releases before trying to integrate or automate them. This is a very real problem, but not an insoluble one, and stabilised deliveries will almost certainly deliver other benefits too.
|We’re just different.||It is always tempting to argue that any real Continuous Delivery system would have to include multiple, organisation-specific additional steps and controls, and that these make any attempt at CD impossibly difficult. And to an extent this is true.|
But it is equally tempting to reply: how many of these unique extra steps exist only because of the manual nature of the existing process, or because of the now-irrelevant structure of the old delivery organisation? In fact it’s likely that mature CD systems will resemble each other much more than they resemble their non-continuous ancestors, so the peculiarity of your existing systems should not be a barrier to adopting CD.
To summarise, Continuous Delivery is a realistic goal for almost every organisation. But the obstacles to achieving it should not be ignored or minimised because of enthusiasm for the principle of Continuous Delivery.
Create a strategy for CD
Once the myths of Continuous Delivery have been understood and the organisation is committed to rejecting or managing them, the key issue is to define an overall strategy for Continuous Delivery. For although Continuous Delivery can be approached incrementally and broken into steps that can be carried out separately, it is vital that there should be a strategy – covering data, systems, processes, organisation and governance – in place.
The reason for this is simple: if you make a lot of localised changes based on locally-defined methods, tools, designs, etc., they are unlikely to be consistent with one another or to support true end-to-end integration. In fact, localised changes may even cause enough local improvements in efficiency and effectiveness to support the argument for keeping local silos. So the radical improvements that Continuous Delivery is capable of delivering will never be realised.
So what would a Continuous Delivery strategy contain?
Change what you deliver
Fortunately, one of the key things that needs to change when adopting CD has probably already changed when you moved to Agile: the structure of what you deliver and how you deliver it.
In a traditional delivery based on a waterfall lifecycle, implementations tend to be big and infrequent. This means that the overall delivery process tends to be equally big – and with that, complex and risky. But with CD (and Agile generally), big, infrequent implementations are replaced by small, rapid updates.
This has its own problems – users may suffer change fatigue, for example – but from the point of view of delivery it means individual changes tend to be far less of a problem, and any problems that do arise are far easier to deal with.
‘Not wrong for long’
The corollary of incremental delivery is that, although the quality of deliverables must remain as high as ever, you don’t always have to get it absolutely right first time.
Like any other aspect of delivery, you need to specify pass/fail criteria in advance. But when implementing CD – and again, Agile in general – you should bear in mind that there will probably be another delivery along very soon. Like Agile, CD allows delivery to happen far more rapidly than in, say, a waterfall lifecycle. So although Continuous Delivery retains the same overall goals as traditional delivery, it should probably not be held to exactly the same standards. Unless you are dealing with (mission/safety/security) critical software, you may decide that a higher level of production defects is more acceptable than it was in the past. After all, if a failure is found tomorrow, you can rejig your backlog to ensure that it is eliminated in a couple of weeks.
Changing your organisation’s design
Most of Agile focuses on the development team and its relationships with the organisation, carefully channelled via the Product Owner. But as Figure 3 (Traditional delivery silos) shows, Continuous Delivery is very different. It requires the organisation to restructure the relationships between many different groups, which usually include at least:
- The Agile Core Team.
- User acceptance teams.
- Operational acceptance teams.
- Business owners, process management and operations.
- IT management and operations.
- Representatives of the users.
- Audit and compliance.
These groups may be located in many different places, and may comprise multiple units and teams, depending on which applications are being delivered and the business units to which they are being delivered.
In addition, the relationships between all these groups need to be restructured and coordinated on almost every level:
- Policy and standards.
- Process and procedures.
- Organisation, roles and responsibilities.
- Day-to-day operations.
- Data and systems.
And so on.
In addition to the sheer number of relationships this creates, all these elements need to be:
- Made fully explicit, so that the end-to-end process becomes visible, and with it the challenges to a unified Continuous Delivery system.
- Aligned into a single coherent and consistent process, including the whole end-to-end flow of information and decisions.
- Integrated into a single, uniform but flexible system.
Change your organisation’s expectations
Perhaps most difficult of all, many parts of the existing organisation that are directly involved in delivery will have to accept lower levels of visibility of, authority over or even contact with the delivery process. In most cases, the span of the local management’s control over the process will be radically reduced, and many existing silos will be redesigned. In some cases, they will no longer serve any useful purpose, and be abolished completely.
The political and cultural impact of Continuous Delivery is thus much higher than that of other aspects of Agile, and should be treated with corresponding caution and seriousness. So implementing Continuous Delivery should be treated like a large-scale change management process – not a collection of local ‘fixes’.
Map the delivery process
The first step in delivering any Continuous Delivery strategy is to define what you mean by Continuous Delivery – what the target Continuous Delivery process, system and organisation looks like.
Exactly what this means depends on your organisation. But any organisation will need to construct a map of its different delivery ‘patterns’.
Mapping the end-to-end delivery process
Where you do have the option of implementing a more integrated approach, the following list may make the necessary steps a little clearer. For each delivery pattern, map:
- The applications you want to deliver continuously.
- The organisations and user groups using these applications.
- Current development processes for each application.
- Current acceptance processes.
- Current deployment processes.
- Current operational procedures and systems for running each application.
You do not need to complete a single map all at once. Some areas may be obvious candidates for early Continuous Delivery:
- Areas where Continuous Integration is already well established.
- Areas where the relationship between the delivery team and its operational and business stakeholders is close enough to make migrating to Continuous Delivery a (relatively) painless process.
- Areas where you would be starting from scratch, and new environments, tools, policies, organisation, etc. are a realistic option.
In other areas, a programme of progressive prototyping and piloting is a more Agile – and more sustainable – strategy. However, make sure you avoid locking yourself into short-term solutions that make it harder to automate other areas later.
What exactly needs to be defined for each of these areas? Starting again from the end-to-end delivery process, you should agree for each task in that process the current details of:
- Output (flag, data, etc.).
- Input data and source.
- Analysis and/or decision-making process.
- Tool or system.
- Responsible role(s).
- Pre/post interface.
- Records created.
- Reports and notifications generated (formal and informal).
If there are multiple levels or variants of each task, then for each option within each task, repeat this model recursively.
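One lightweight way to capture this model is as a simple record per task, which can be reviewed, versioned, and eventually fed into automation. The sketch below uses the fields from the list above; the example entry and its values are purely hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DeliveryTask:
    """Current-state record for one task in the end-to-end delivery process."""
    name: str
    output: str                       # flag, data, etc.
    input_source: str                 # input data and where it comes from
    decision_process: str             # analysis and/or decision-making process
    tool: str                         # tool or system used
    responsible_roles: list[str]
    pre_post_interface: str           # what precedes and follows this task
    records_created: list[str] = field(default_factory=list)
    reports: list[str] = field(default_factory=list)
    # Variants of the task repeat the same model recursively.
    variants: list["DeliveryTask"] = field(default_factory=list)

# A hypothetical example entry:
uat_signoff = DeliveryTask(
    name="UAT sign-off",
    output="approval flag",
    input_source="UAT test results",
    decision_process="manual review against acceptance criteria",
    tool="issue tracker",
    responsible_roles=["Product Owner"],
    pre_post_interface="system test -> staging deployment",
)
```

Even before any automation exists, a task inventory in this form makes gaps, overlaps and undocumented decision points in the end-to-end process immediately visible.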
From human to machine decision-making
Many of the decision-making steps in the traditional delivery process remain in a Continuous Delivery environment. The delivered software still needs to be checked against a similar set of criteria – functionality, integrity, performance, security, acceptability to users, operations, regulators and others, and so on.
However, how this is checked in a fully automated CD environment need not – in fact probably should not – exactly mimic any existing manual process. Many aspects of a manual organisation exist because they are manual.
- In some cases these exist because of the limitations of human nature – compared to computers, people are slow, have a far more limited span of control and only a fraction of the data processing capacity of even the simplest machine.
- In these situations, what seems a large, complex, multiple-step process to human beings may be a simple data processing step to a properly designed automated CD program.
- In other cases the reverse is true: human beings bring to bear special capacities that it is hard, and sometimes impossible, for machines to mimic.
- These include especially the ability to make qualitative judgements based on limited data, and a similar ability to make professional judgements based on knowledge and experience that would be extremely hard to automate.
- For example, a good deal of testing actually relies on the skills, experience and insight of intelligent testers, and is not actually documented in scripts or present in the test data. But if it is to be automated, all aspects of the testing must be brought to the surface, defined and automated.
- The structure of the deployment process especially often reflects the structure of the organisation more than the objective needs of testing.
- For example, tests that are currently repeated because they need to satisfy different groups (e.g., different business units, or user vs operational acceptance) will only need to be performed once.
- Likewise the sequencing and implementation of particular tests and evaluations may change substantially.
So it’s unlikely that automated CD will resemble existing processes at all closely. Achieving this rationalisation can be very demanding. Every governance step – every test, every review and approval, every go/no-go decision – currently relies on human intervention, and automating them all is likely to be a serious challenge. In the case of the final governance steps, the cultural and political difficulties are also likely to be much greater.
Building the new lifecycle
A fully automated Continuous Delivery process is practically invisible – press a button and delivery just happens. There should be very few procedures or roles or anything else left over, and so very little to document in a formal ‘lifecycle’.
But until that final state has been achieved, it is important that everyone should be very clear what the new process is and what everyone’s role in using it should be. So:
- The end-to-end process should have a named owner. If this isn’t possible, make sure that:
- It is actively owned by its major stakeholders.
- It has a single manager with overall responsibility for its successful implementation.
- If you choose to move to a full CD environment incrementally, each new increment will change someone’s role and responsibilities. So:
- The process should be documented (though in as lightweight a way as possible), including decision points, outputs, tasks, responsibilities, links to tools, etc.
- It is crucial that changes are properly designed, generally accepted, well understood and fully communicated to everyone in the end-to-end delivery cycle.
- Its development should be an integral part of delivering the Continuous Delivery process:
- The end-to-end process is effectively the functional specification for what each new version of the process should add and change to the existing delivery process.
- The process should be a key tool for communicating these changes to the delivery teams, to their management, to the delivery stakeholders and to executive management.
- Be careful to acknowledge and fully re-design areas where fundamental change takes place – and there will be many! For example:
- The budgetary separation between development costs and operational costs (including maintenance and support) is largely removed by CD. Is it clear how the new approach will be supported by your finance department? Do you have the Chief Finance Officer’s blessing for these changes?
- Long-dreamed-of methods of measurement and reporting, such as true Total Cost of Ownership, are much closer to realisation. Are they part of your case for CD?
- The end-to-end process as a whole should be subject to regular review at senior level to see how effectively it is realising the objectives of the Continuous Delivery programme as a whole.
- Even in its final form, the CD process as a whole should not be rigid or closed.
- There will always be situations where flexibility is required. Stuff happens.
- There will always be change – new products, new opportunities, new strategies, new technologies – any of which will require the ability to adapt or even override automated processes, CD included.
Organise for Continuous Delivery
CD requires professionalism
The end-goal of Continuous Delivery is a fully automated system for moving working software directly onto user desktops. However, much more important than having the right systems is having the right people, teams, organisation, culture and mindset. The key concept here is professionalism.
Most IT developers would consider themselves to be professionals. And from a strictly technical point of view, this is quite realistic. But full professionalism has non-technical dimensions, without which Continuous Delivery is unlikely to be achieved. This means:
- A very high level of self-discipline, so that the various practices involved in Continuous Delivery are carried out reliably, even if they offer little direct benefit to the individual or the local team.
- A high level of technical knowledge, including not only knowledge of one’s own specialism but also quickly developing a practical knowledge of the full range of tasks in which the team is engaged. Without this the detailed demands and specialised requirements of Continuous Delivery are unlikely to be properly understood.
- A full commitment to the goals of your CD programme – to the planned outputs, schedule and costs, but also to quality, value, delivery and above all to success – rather than just to doing one’s job.
Continuous Delivery requires Agile
Of course, these are also the prerequisites of creating an effective Agile organisation, so hopefully you should not find these requirements too challenging! Conversely, not all working environments are suited to Continuous Delivery. In fact Agile is essential, not only for maximising the benefits Continuous Delivery provides but also for implementing it in the first place.
What does Continuous Delivery offer? In brief:
- Rapid movement of code from developers to users.
- Minimal manual intervention.
- Highly reliable delivery.
- High frequency of delivery.
So what type of development and delivery organisation would be able to reap these benefits? Probably not a more traditional (e.g., waterfall) project or programme, which would aim to deliver seldom and typically only towards the end of a long, complex process.
In such an environment, absolute speed is less valuable, and the process is unlikely to be repeated more than a few times. There are also so many opportunities for and risks of delay and outright failure that the extreme refinement of Continuous Delivery is unlikely to be the best possible investment.
In an Agile organisation, by contrast, the above benefits are precisely what is needed. For example, CD means that, if the mainline build fails, it is fixed right away, including a rapid fix, rebuild, re-integration and re-delivery to users. But how is this to be done if the movement of code from developers to users is consistently delayed or disrupted by multiple manual interventions by very fallible and imperfectly coordinated human beings? Plainly, CD is simply the logical next step once the goal of building working software has been achieved.
Communicate the value of the change
The best antidote to the sheer scale and difficulty of moving to Continuous Delivery is making sure that the people and organisations affected by the change are convinced of four things:
- The long-term benefits of Continuous Delivery will be much greater than the short-term costs.
- The change is vital to the organisation as a whole, and is regarded by senior management as being of strategic value.
- It also is in their own interests as individuals to make the change – not just the organisation’s.
- They will be supported throughout the process – at all stages, and in all aspects of the change.
Of course, the above messages must all be true: if any of them is incorrect, the transformation is very unlikely to succeed.
Align everyone and everything
And align them at all levels.
If Continuous Delivery is to work, the first thing it needs is that the flow of information, decisions and software components should be smooth and continuous from end to end of the delivery pipeline. This means more than just data and tools: the goals, targets, processes, organisations, standards, governance and decision-making of the various groups involved in the release process all need to be compatible with one another and to lead actively to the next step – with no exceptions or omissions.
If exceptions are unavoidable, they should be fully identified in the CD system and exception-handling should be built into the tools – even if it is only to ensure that the exception is rapidly identified, flagged up, its impact limited and the exception itself is routed to the right places for a manual action, decision or intervention in the system itself, so that the automated delivery process can get back on track as quickly as possible.
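In code, that exception-handling pattern is straightforward: trap the failure, flag it immediately, limit its impact to the step concerned, and route it to a named owner so the automated flow can resume once it is resolved. A minimal sketch follows; the step names, the notification mechanism and the escalation role are all purely illustrative.

```python
def run_step(name, action, notify, escalate_to="release manager"):
    """Run one pipeline step. On failure, flag the exception and route it
    to a human owner rather than letting it silently derail the pipeline."""
    try:
        action()
        return {"step": name, "status": "ok"}
    except Exception as exc:
        # Flag it up immediately, limit impact to this step, and route
        # the exception to the right place for a manual decision.
        notify(f"[CD EXCEPTION] step '{name}': {exc} -> routed to {escalate_to}")
        return {"step": name, "status": "held", "owner": escalate_to}
```

The essential point is that the exception path is designed into the tooling from the start, not improvised ad hoc when something goes wrong mid-release.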
It’s important not to underestimate the scale of this alignment, or to treat it as a purely technical problem.
One of the most daunting aspects of CD is the sheer number and diversity of stakeholders involved: the business, its users, developers, testers, DBAs, systems administrators, compliance authorities, regulators, and others. In many organisations:
- These stakeholders have little in common, and they often have different and even conflicting priorities and agendas.
- Both sides of any given interface need to be ‘clean’ and reliable for CD to work, yet there can be:
- Overlaps between what various teams do.
- Gaps between what they do.
- Conflicts between what they do (e.g., incompatible technology).
- No clear interfaces at all with neighbouring functions or groups (e.g., they rely on ad hoc or even personal relationships).
- Both the internal mechanics of an existing function and its external interfaces may rely on a mixture of manual and automated practices.
- But often they are based on unrelated procedures and technologies.
- There are variations – often undefined and even unconscious – depending on the situation, the product or even the individual.
- And they rely on the personal knowledge and experience of individual staff members – knowledge that is often completely implicit.
- There may also be corporate interests to consider.
Manage the stakeholders
So stakeholder management is a key skill at this point. Key components of a stakeholder management strategy include:
- Make sure you have the support of whoever has overall responsibility for the end-to-end delivery process.
- Ensure that all stakeholders are fully briefed on the (true) benefits and costs of implementing CD.
- Offer implementation options that minimise the risks and impact and prove the value of the approach early, such as careful piloting.
- Make sure you understand the real issues for each group:
- They will often be as much cultural and political (e.g., changes to span of control, loss of budgets or perceived authority within the organisation as a whole) as functional or technical.
- For senior stakeholders the key issues are more likely to be the financials, time to market, resourcing and productivity than functional or technical improvements in delivery as such. So a convincing business case is fundamental, even if CD does begin as a technical issue.
- Analyse your various stakeholder groups separately:
- For each group of supporters, provide a robust case for implementation.
- In the short term, key issues are likely to be disruption, continuing to deliver existing commitments, staff and training costs, etc.
- For the long term, the key issues are likely to be economic (ROI, cashflow, budgets, etc.), headcounts, etc.
- Try to move hostile groups to a neutral position.
- At all costs avoid appearing to favour any single group. The value of CD should not only be visible to all but it should, as far as possible, be shared widely. And the ultimate ‘beneficiary’ of CD should be – and should be seen to be – the organisation as a whole.
Ensure everyone can see what’s happening
On one level, all phases of Continuous Delivery are about communication:
- Continuous Build is about making sure that the developer and their machine are still on the same wavelength.
- Continuous Integration does the same for the development team as a whole.
- Continuous Delivery creates an astonishingly intensive and extensive communications channel between the development team and the organisation as a whole.
This explains one of the many paradoxical effects of Continuous Delivery: the more delivery is automated (and therefore hidden in the workings of machines), the clearer it becomes to everyone what is really happening.
- The delivery process as a whole is much clearer and more straightforward.
- Progress from development to users is simple and transparent.
- Bugs and other faults are brought to the surface quickly.
- The results of CD – i.e., the delivery of working software (and therefore of value) – are visible immediately.
Many of the tools that have been developed in this area make it easy to communicate. In Continuous Integration, for example, depending on which tool you choose, you might find that it comes with an interface (or even a whole website) that shows:
- Whether a build is in progress.
- Whose build it is.
- The changes that are actually being made.
- The state of the most recent mainline build.
- How long it’s been in that state.
- The history of previous changes (which generates a powerful sense of progress and the team’s overall status).
Websites and other tools work well for distributed teams too, of course, and for interested stakeholders such as the Product Owner, higher level groups (e.g., programme managers and PMOs) and technical support teams (DBAs, system administrators, etc.).
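Whatever CI tool you choose, the underlying status information is very simple, which is why it is so easy to publish. The sketch below shows the kind of one-line summary such a dashboard presents; the field names and build record are invented for illustration.

```python
from datetime import datetime, timezone

def build_status_summary(build: dict, now: datetime) -> str:
    """Render a one-line status of the most recent mainline build:
    its state, how long it has been in that state, and whose change it was."""
    age = now - build["finished_at"]
    minutes = int(age.total_seconds() // 60)
    return (f"mainline: {build['state'].upper()} for {minutes} min "
            f"(last change by {build['committer']})")

# Hypothetical record of the most recent mainline build.
latest = {
    "state": "green",
    "committer": "asmith",
    "finished_at": datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc),
}
summary = build_status_summary(
    latest, datetime(2024, 1, 1, 12, 30, tzinfo=timezone.utc)
)
# -> "mainline: GREEN for 30 min (last change by asmith)"
```

Because the data is this small and this structured, radiating it to a wall monitor, a website or a chat channel is a trivial extension.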
This visibility is not just a useful side-effect of implementing CD (or any of its individual phases). The success of any change programme invariably rests on the quality of its communications. So you want to ensure that everyone can easily see the state of the delivery system and the changes that have been made to it. That’s why CD’s built-in communications channels are invaluable not only for day-to-day operations once the CD system is in place but also for making a success of migrating to CD in the first place.
Support the changes
As has already been said, Continuous Delivery is much more than a technical change. In fact it represents a real transformation. So if you focus solely on the technical aspects of the change, you are likely to fail.
On the other hand, one of the most familiar notions in large-scale change management is that it is relatively easy to get change going – there are some people who love change – but difficult to sustain. This problem is summarised in the following diagram:
The key point in this sequence – so important that it is often called The Chasm – lies between the Early Adopter and the Early Majority phases. Innovators and Early Adopters like change and are willing to invest effort in it. Not only will they adopt changes but they are frequently knowledgeable about them and will go out of their way to make them succeed. So they need relatively little support.
This can create an illusion, however. When a change of the kind envisioned here has just begun, it may proceed smoothly, exceeding everyone’s expectations. The pilots and then the first 10%–15% of the organisation take the change in their stride. So little effort is put into making the change work for the rest of the organisation.
Unfortunately, the essential difference between Early Adopters and the next group, the Early Majority, is that the Early Majority are not so willing to go out of their way or to invest effort and time in making the change work. They have no problem with the idea of improvement, but they do not find change for its own sake interesting or attractive. So they may be perfectly happy to go through the change, but you have to give them a good deal of support to get them through it.
Depending on how far to the right you go with this, each successive group is likely to need more and more of the following:
- A clear change roadmap that makes the main phases and events in the change explicit.
- A clear case for why it is in their individual interest to accept the change.
- A thorough explanation of their role in the change.
- An equally thorough explanation (and justification) of the impact the change will have on their day-to-day work, their overall role and their prospects in the organisation.
- Training, coaching and mentoring through the change.
- Explicit (and often repeated) reassurance that senior management is committed to this change.
- Explicit permission for themselves to operate and behave differently (which is not the same as the previous item!).
- And in the last resort (especially for Laggards), threats of what will happen if they do not collaborate.
This is all complicated by the fact that you are unlikely to meet these groups in exactly this order. Changes often start in areas where they are likely to succeed, since these are frequently the areas where the idea of change was first floated. But that is not always so – many changes start with senior management, come from consultants, and so on – so the change is just as likely to be initiated where it is unwanted: in ‘problem areas’ where the difficulty of driving change is precisely the problem.
So again, it is important that the move to CD be very carefully prepared, led, supported and sustained.
Build your Continuous Delivery system
Ultimately, the purpose of Continuous Delivery is to make as much as possible of build, integration and deployment simply disappear – into machines, into code, into the click of a mouse. Ideally, the organisation should no more have to think about how delivery is done than it now needs to understand how its spreadsheet does all that calculating and analysis. It just does.
How then is the Continuous Delivery system actually built?
Everything is code
For the CD process to be fully aligned, integrated and automated, it must be possible to complete the entire process from build to the solution’s arrival on the user’s desk at the click of a mouse. For that to be possible, not only the solution but also the entire build, integration and deployment environment must work on a single principle:
everything is code
This phrase (which seems to have been popularised by Kohsuke Kawaguchi) means that every element that affects the delivery process from build onwards (tests, infrastructure, approvals, services, etc.) must be programmable. To achieve this, every element and factor in the delivery process needs to be virtualised in the form of data being managed by code.
You also need tools that are able to treat everything as code, and every aspect of the delivery process must be set up so that it can be treated as code. This includes moving systems on- and offline, building and provisioning environments, setting up and validating configurations, migrating users and data, selecting, executing and checking tests, setting and applying governance criteria, reporting progress, status and results, and much else.
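As a minimal illustration of the ‘everything is code’ principle, the sketch below describes an environment purely as data and lets code turn that description into provisioning steps. All names here (`Environment`, `provision`, the packages and images) are invented for the example; real provisioning tools apply the same pattern at far greater scale.

```python
from dataclasses import dataclass, field

@dataclass
class Environment:
    """An environment captured entirely as data."""
    name: str
    os_image: str
    packages: list = field(default_factory=list)
    services: dict = field(default_factory=dict)  # service name -> port

def provision(env: Environment) -> list:
    """Return the ordered steps a provisioning tool would execute."""
    steps = [f"create VM from {env.os_image}"]
    steps += [f"install {p}" for p in env.packages]
    steps += [f"start {s} on port {port}" for s, port in env.services.items()]
    return steps

# A hypothetical integration-test environment, defined as data.
test_env = Environment(
    name="integration-test",
    os_image="ubuntu-22.04",
    packages=["openjdk-17", "postgresql-15"],
    services={"app": 8080},
)

for step in provision(test_env):
    print(step)
```

Because the environment is data, it can be versioned, reviewed and rebuilt on demand like any other artefact.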
Plainly this is not a trivial activity:
- You must have a very clear and exhaustive model of what the delivery process as a whole will achieve, including not only velocity as such but also countless secondary and indirect effects.
- You must have a very clear and exhaustive model of exactly how the delivery process as a whole will be executed.
- Each item in that process, and every option within that process, must be controllable by code.
- The tools for exercising that control must be acquired, set up and operated by an appropriately skilled group.
- Libraries of functions for exercising that control need to be prepared and managed.
Existing delivery processes will almost certainly involve a staggering number of components whose complexities, interconnections, options, overlaps, gaps, variants, bespoke adaptations, localisations and misalignments were previously managed by local teams, hard-won experience and many other factors that are hard even to recognise and bring to the surface, never mind articulate, align and automate.
The other side of ‘everything is code’ is that, like any other code-based system, Continuous Delivery is likely to need to change. So the aim of Continuous Delivery should never be a single, rigid, linear solution that will be hard to change. On the contrary, not only will undefinable types and amounts of change be needed in future but any realistic CD system will certainly need to be highly configurable right away. So all the standard disciplines of flexible and adaptable architecture, design, programming and coding apply to Continuous Delivery.
(This is another way in which Agile is especially well suited to the implementation of Continuous Delivery, of course. Its conversion of large-scale, abstract requirements into concrete, functional stories encourages a highly modular and encapsulated approach to functionality, which is exactly what a flexible and adaptable Continuous Delivery system needs.)
Hence the value of the ‘everything is code’ approach. Managing hardware, networks and other machinery directly is difficult, cumbersome and extremely complicated. It’s also a job few IT developers have any experience of. But managing the same technology through code and data is much the same as the configuration management activities with which they have long been familiar, and basically the same solutions apply – right down to using the same CMDBs.
The basic outcome of Continuous Delivery is a set of automated, configurable tools for managing the end-to-end delivery process. This section gives a very high-level overview of what you are likely to find in your CD architecture. It resembles a traditional architecture in many ways, with the crucial difference that an automated version of each key area or function would differ very substantially from the traditional version. So what follows is really a checklist of all the areas you need to consider when moving to CD.
The core architecture
In some ways – though only some – it is hard to define a complete CD system. Individual organisations have different needs, and CD itself is constantly evolving. However, such a system would certainly include an automated and integrated architecture for the following:
- Code access and distribution.
- Version control.
- Code build.
- Unit testing (e.g., XUnit test-driven development tools).
- Functional testing.
- Non-functional testing.
- Integration and system testing.
- Performance, etc.
- Operational acceptance.
- Environment management and provisioning.
- Multiple development and testing environments.
- Operational acceptance, controls, provisioning, etc.
- Desktop/on-site provisioning, monitoring and control.
- Product certification (typically at multiple levels – internal compliance, regulatory, etc.).
- Product release.
In fact anything that represents repeated effort or introduces either risk or delay should be a candidate for automation.
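To make the checklist concrete, here is a hedged sketch of a few of the stages above chained into a pipeline defined entirely as code. The stage functions are placeholders standing in for real build and test steps, not any particular CD tool’s API.

```python
# Each pipeline stage is a function returning True on success. In a real
# system these would invoke the build tool, test runners and deployers.
def build():          return True   # compile and package
def unit_tests():     return True   # e.g. an XUnit-style suite
def integration():    return True   # integration and system testing
def deploy_staging(): return True   # push to a staging environment

PIPELINE = [build, unit_tests, integration, deploy_staging]

def run_pipeline(stages):
    """Run each stage in order; stop and report at the first failure."""
    for stage in stages:
        if not stage():
            return f"FAILED at {stage.__name__}"
    return "SUCCESS"

def flaky():
    """A deliberately failing stage, purely for illustration."""
    return False

print(run_pipeline(PIPELINE))            # SUCCESS
print(run_pipeline(PIPELINE + [flaky]))  # FAILED at flaky
```

Because the pipeline is just a list, adding a certification or provisioning stage is a one-line change – which is the point of treating the process as code.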
The architecture on which these tools operate might include environments for all of the following:
- Mainline source.
- Local development, build and test.
- Continuous integration.
- Integration testing.
- Automated testing (e.g., overnight integration and regression testing).
- Other (automated and non-automated) tests (performance, security, etc.).
- Staging areas for successive phases of deployment.
- One or more production environments.
- Administrative tasks such as tracking, analysis and reporting.
Executables, data and other components will need to be reset or moved between environments, often several times each day, and the configuration and provisioning of each environment will also need to be changed, refreshed and checked repeatedly.
This again needs to be performed automatically. So a key part of Continuous Delivery is the creation and maintenance of scripts for all these tasks. This will simplify, accelerate and raise the quality of these processes. Of course, scripts (and their maintenance) demand investment, but the return will almost certainly be much greater than the initial outlay.
A natural precaution in such a highly automated environment would be automated rollback, to restore everything to the last known healthy state if an error does occur. The ability to revert reliably and automatically will also encourage the team to deploy frequently, so users receive improvements frequently and the organisation as a whole benefits accordingly.
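A minimal sketch of how such a deploy-with-rollback script might behave: record the last known healthy release and restore it automatically if the new deployment fails its post-deploy check. Everything here is illustrative – the health check in particular stands in for real smoke tests, probes or metric thresholds.

```python
class Deployer:
    """Toy deployer illustrating automated rollback to a healthy state."""

    def __init__(self):
        self.current = None       # release currently in production
        self.last_healthy = None  # last release that passed its checks

    def health_check(self, release) -> bool:
        # Placeholder: in practice, run smoke tests and check monitoring.
        return not release.endswith("-bad")

    def deploy(self, release) -> str:
        previous = self.current
        self.current = release
        if self.health_check(release):
            self.last_healthy = release
            return f"deployed {release}"
        # Automated rollback to the last known healthy state.
        self.current = previous
        return f"rolled back to {previous}"

d = Deployer()
print(d.deploy("v1.0"))      # deployed v1.0
print(d.deploy("v1.1-bad"))  # rolled back to v1.0
```

The useful property is that rollback is a routine, tested code path rather than a panicked manual procedure.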
Build a single repository
Talk of rapid integration, automated testing and delivery directly to the user assumes that code, database schemas, test data and dozens – or thousands – of other connected artefacts called for by the system to be delivered are straightforwardly accessible – to the developer, to the developer’s build and integration scripts, and to one another.
There’s also a second issue: security. As the development process becomes more and more complex, with resources of any number of types (code, images, interfaces, webpages…) flowing in through multiple channels and from many teams and locations, a fundamental problem is to ensure that you know exactly where each item is, how it is accessed – and how unwarranted access is prevented.
Hence the centrality of integrated code management tools, including version control, configuration management and other functions. And hence also the absolute rule that everything – everything – should be stored in it:
- The code mainline.
- Development components.
- Database schemas.
- Configuration scripts.
- Code and file assets.
- Test materials.
- Working environment.
- Operating system.
- Development environment (including the team or the product’s standard development configuration).
- Other tools.
- System documentation.
– and anything else you need to build a fully-functional system of any appropriate type at the click of a single mouse button.
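The ‘everything in the repository’ rule above can itself be automated: a pre-build check that every category of artefact the build needs is actually present. The categories and paths below are invented for illustration, not a prescribed layout.

```python
# Hypothetical mapping from artefact category to its expected repository path.
REQUIRED = {
    "mainline":       "src/",
    "schemas":        "db/schemas/",
    "config scripts": "config/",
    "test materials": "tests/",
    "documentation":  "docs/",
}

def missing_artefacts(repo_paths):
    """Return the categories whose expected path is absent from the repo."""
    return [name for name, path in REQUIRED.items() if path not in repo_paths]

# A repository snapshot that is missing its documentation directory.
repo = {"src/", "db/schemas/", "config/", "tests/"}
print(missing_artefacts(repo))  # ['documentation']
```

Run as the first pipeline stage, a check like this turns ‘everything should be in the repository’ from a policy into an enforced precondition for every build.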
Oddly, there is some dispute about whether the actual builds that result from all this activity should be kept in the repository. For some it is obvious; for others it is a sign (in the current parlance, a ‘smell’) that the team can’t actually create builds reliably, and so prefer to store the ones they do manage to complete safely away, as though they were prized exceptions.
So the repository sits at the heart of a vital distribution network, to which many other groups and individuals need to have access. The value this delivers becomes clear when the realities of software development are taken into account. In particular, code needs to be widely available outside the team (though only in a controlled way). For example, at the same time as a piece of code is being updated, an earlier version of the executable may be being showcased, another team may be investigating how they can best make use of its interesting new capabilities, and an independent tester may be pulling it to pieces.
That the relationship between all these instances should be explicit and rigorously controlled is hopefully obvious.
Build a staging environment
The need for one or more staging environments may be obvious, but it is worth examining exactly what a staging environment is and what value it adds.
A staging environment is a specialised environment that permits the software to be tested, packaged and generally prepared for production. Just as this preparation process has several stages, so several staging environments may be needed.
For example, there are situations where the final approval cannot be reduced to technical issues. A regulated product may need its own environment for regulatory checks and approvals, and many organisations (banks, etc.) have demanding internal certification requirements too, which are unlikely to be fully automated.
Another type of staging environment is common where Continuous Delivery is not quite fully implemented. It is also used where, as is often the case, the code cannot in fact be released directly to users:
- A firmware-based, embedded system will require a separate manufacturing or ‘burning’ stage that is unlikely to be totally automated.
- Commercial off-the-shelf (COTS) software that has to be packaged and sold on to a mass market cannot be distributed directly to its end-users. However, the growth of software-as-a-service (SaaS) systems does make this option available.
- Internal requirements (often political or cultural) make it essential to allow stakeholders to validate the end-product before it is finally released.
- Where the business cycles are not perfectly synchronised with the development cycle, and software becomes available before it is due for release, the staging environment can be used to ‘store’ the code temporarily. This also makes it easier for the development teams to maintain a regular cadence.
- Where the code is being produced as part of a larger programme but the availability of resources means that the team cannot adopt the standard Agile ‘just in time’ approach, the product needs to be stored somewhere, and perhaps also be available for use in testing other systems being built by the programme.
Automate testing
One of the most striking features of software development is the huge amount of effort and technology invested in testing. 30%-40% of total development costs is not unusual, yet this is an activity that by definition adds no value. True, testing is fundamental to avoiding loss of value, in the form of omissions, errors, defects and failures, but testing itself adds nothing. So performing this activity as economically as possible should be a basic aim of Continuous Delivery – not only to make it cheaper but to accelerate and de-risk it too.
CD testing is automated on multiple levels and in multiple ways.
- Test-driven development (TDD) aims to embed detailed self-running tests in the code itself.
- Agile development assumes that the user performs a more or less implicit form of functional testing while collaborating with the developer. Where that is not the case, such testing is readily automated by conventional test automation methods.
- Acceptance and non-functional testing (performance, load, security, etc.) are likewise readily automated using conventional test automation tools and techniques.
- Since it is largely a subset of the previous tests, automated regression testing is as straightforward as any other test class.
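As a small illustration of the XUnit style mentioned above, here is a self-running test that could execute unattended in the pipeline. The function under test (`parse_version`) is invented for the example.

```python
import unittest

def parse_version(tag: str) -> tuple:
    """Turn a release tag like 'v1.2.3' into a comparable tuple."""
    return tuple(int(part) for part in tag.lstrip("v").split("."))

class ParseVersionTests(unittest.TestCase):
    """An XUnit-style suite: runnable by machine, no human judgement needed."""

    def test_basic_tag(self):
        self.assertEqual(parse_version("v1.2.3"), (1, 2, 3))

    def test_versions_compare_numerically(self):
        # Tuple comparison avoids the classic string-sort bug ('9' > '10').
        self.assertLess(parse_version("v1.9.0"), parse_version("v1.10.0"))

# Run the suite explicitly so it works both standalone and when embedded.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ParseVersionTests)
unittest.TextTestRunner().run(suite)
```

Tests written this way cost a little more up front but can then be run on every commit at no marginal cost – which is precisely what Continuous Delivery requires.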
There are limits to test automation, of course.
- The absence of test failures does not prove the absence of bugs, especially not when automation is relatively immature, be it in coverage, quality or sophistication.
- In areas where the software is relatively unstable – areas where new functionality is regularly introduced, for example – automation is difficult and probably a waste of effort.
- However, once automation becomes an option, it is often a good idea to restructure the system so that the highly unstable areas are isolated from more stable areas.
- This will make it easy to automate the stable areas, and so to minimise the ongoing investment in manual testing and repeated disruptions to Continuous Delivery.
- To the extent that a good tester brings originality and insight to the verification process, it is unlikely that testing will ever be totally automated.
- But at the same time, it should usually be possible to automate the new tests – and new testing concepts – such creative testers introduce.
- So automated testing does not mean the total absence of manual testing; but if novel manual tests may always be needed, there should also always be a drive to automate them as quickly as possible.
- If you are building public-facing systems – online business systems, SaaS, COTS, etc. – it takes an extremely brave organisation to allow a machine to make the final decision.
- Some things just aren’t right, even if they do pass every possible test.
- But that does not mean that the last step needs to be a test (as that term is normally used in IT): a product manager checking ‘look and feel’ or a marketing expert scanning for branding details is equally likely to be the last word.
In all these cases, once automation becomes a specific objective, the system will quickly evolve towards greater testability, and the team will become more sensitive to regular (i.e., automatable) testing patterns.
Build in measurement
The aim of measuring Continuous Delivery is the same as that of measuring the more traditional approach. It helps you to see what has succeeded and what has failed, to evaluate hypotheses about CD, reduce uncertainty and make better decisions – if, that is, the right metrics are selected and they are interpreted the right way. In short, metrics provide you with data, and some facts and information, but are not, in themselves, knowledge. As for wisdom, metrics can provide some insight, but if they are relied on to the exclusion of all else, they become positively dangerous.
Again, exactly what this means will depend on the individual organisation’s needs, wants and expectations. But in general terms, metrics that are usually worth tracking include:
- Elapsed time from working software to production.
- Throughput and stability of the delivery process.
- Defects in production.
- An effective Continuous Delivery system will have already identified any problems arising after the code was written (e.g., caused by deployment and implementation).
- So what is the Mean Time Between Failures?
- Time to resolve production defects.
- Automation removes human supervision, which may allow defects that would previously have been caught to escape into the production environment.
- So what is the Mean Time To Recover?
- Production downtime.
- Especially important for interactive (transactional, non-batch) systems.
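The two ‘mean time’ metrics above are simple to compute once failures and recoveries are logged. The sketch below assumes a hypothetical list of (failure_time, recovery_time) pairs expressed in hours; the numbers are purely illustrative.

```python
def mtbf(incidents, period_hours):
    """Mean Time Between Failures: operating time divided by failure count."""
    downtime = sum(rec - fail for fail, rec in incidents)
    return (period_hours - downtime) / len(incidents)

def mttr(incidents):
    """Mean Time To Recover: average outage duration."""
    return sum(rec - fail for fail, rec in incidents) / len(incidents)

# Two outages in a 720-hour (30-day) month: one of 2 hours, one of 4.
incidents = [(100, 102), (500, 504)]
print(mtbf(incidents, 720))  # (720 - 6) / 2 = 357.0
print(mttr(incidents))       # 6 / 2 = 3.0
```

Tracked over successive releases, the trend in these two figures says more about the health of the delivery process than any single value does.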
Make Continuous Delivery sustainable
In principle, once established, Continuous Delivery should require little further attention. It is, after all, a fully integrated, fully automated system, and as such should become invisible (or at least, no more visible than any other standard tool).
But of course, that is never true: all systems require periodic updating, maintenance and support.
- No system is perfect.
- There will be bugs and gaps in your CD system too.
- The first version of your Continuous Delivery system will certainly not be your last.
- In fact it would be a massive risk to try to adopt a complete system from scratch, so it will be a long time after you launch your first Continuous Delivery system before you think it is complete.
- Users come and go, and new users (or users changing their role) will need assistance or will make the system do unintended things.
- No system is fool-proof, even for experienced users.
- Executive management, IT, the business, operations and other parts of the organisation will always want you to change, speed up, extend and improve the system.
- The system will always be capable of technical improvement, so further development must be expected.
- The individual components of your overall CD system will be upgraded and replaced.
- Each change creates its own challenges.
- Every time you improve the system, expect your improvements to break it in various ways, and so to have to support it further.
And so on. Sustainability is an often unplanned-for aspect of system automation. In the case of Continuous Delivery, the sheer importance and complexity of the process it supports makes actively planning for sustainability (owning, managing, resourcing and governing it) especially important.
- Assign a team to its maintenance and further development.
- Do not try to make supporting such a critical system a secondary responsibility of another, non-specialist team.
- Have a proper Agile structure to manage it – with Product Owner, backlogs, plans, and so on.
 ‘Competitive Pressures Drive The Business Case For Modern Application Delivery’. Forrester Research, October 2014.