Modelling a Distributed Continuous Delivery Pipeline for Serverless Architectures

Przemek Sempruch
6 min read · Sep 20, 2018


Serverless architectures are getting more and more attention due to their simplicity and short time to delivery. As with every architecture, there are concerns that need to be addressed. This article focuses on modelling a Continuous Integration and Delivery pipeline for a generic serverless architecture using a top-down approach.

For starters, let us remind ourselves what the typical delivery pipeline looks like…

[Figure: Typical delivery pipeline]

The above is simple and may fit many projects. However, when there are many sources of change at both the application and infrastructure levels, such a design may prove restrictive and make innovation hard. Ideally, the pipeline should be agile.

Serverless makes this a challenge, as it is a kind of architecture that relies heavily on cloud services, which are inherently hard to replicate in a local environment. A certain amount of testing can be done locally, but real-life behaviour, especially around integration, can only really be verified in the cloud. That is time-consuming, and a developer who has to wait ages for a build is not a happy one. It is a waste of productivity and morale that could instead be invested in activities with a better ROI.

Ideally, as a developer I want my feature to be built and tested in an environment resembling production as closely as possible, so that not only can I boost my own confidence, but also that of my Product Owner, who can explore the feature once it is deployed in the cloud. What is more, I do not want to queue behind other developers.

Requirements have been set. Where do we start, then?

Let us imagine we have an example serverless environment architecture as below:

[Figure: Serverless environment]

The environment definition is composed of:

  • application code delivered as ZIP packages
  • infrastructure templates, be they Terraform or, in the case of AWS, CloudFormation templates
  • environment configuration that parameterises the templates

The templates are fully configurable, so it is possible to restrict capacity/cost for developer environments without affecting the production configuration. All of these are applied as input to an Infrastructure Orchestration Engine (IOE) such as Terraform which, in turn, produces a copy of the environment.

The application code, infrastructure templates and configuration live on specific branch(es). Assuming each new feature is represented by a set of changes, each set needs its own cloud resources, and the pipeline should be able to ensure they are isolated. In AWS, most non-VPC resources (the ones that lend themselves best to serverless architectures) live in one namespace within a single account. To avoid conflicts, one can take a template approach, under which the name of every environment-specific resource is the result of expanding a template. For example, a lambda function responsible for publishing news will receive an environment-specific suffix, e.g. news-publisher-${env}. The same applies to the S3 bucket storing the news: news-bucket-${env}.
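
A minimal Terraform sketch of this template approach might look as follows. The variable names, runtime and role wiring are illustrative assumptions, not a definitive implementation:

```hcl
# Environment suffix, e.g. "DEV-jsmith", "INT", "PP", "PRD"
variable "env" {
  type = string
}

# Where the pipeline stored the application ZIP package (hypothetical inputs)
variable "artifact_bucket" {
  type = string
}

variable "news_publisher_s3_key" {
  type = string
}

variable "lambda_exec_role_arn" {
  type = string
}

# Every environment-specific resource name is expanded from a template
resource "aws_s3_bucket" "news" {
  bucket = "news-bucket-${lower(var.env)}" # S3 bucket names must be lowercase
}

resource "aws_lambda_function" "news_publisher" {
  function_name = "news-publisher-${var.env}"
  s3_bucket     = var.artifact_bucket
  s3_key        = var.news_publisher_s3_key
  handler       = "index.handler"
  runtime       = "nodejs18.x"
  role          = var.lambda_exec_role_arn
}
```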

How the ${env} variable is expanded is up to you. What we have practised so far is the following pattern:

  • DEV-${developer_name} for environments specific to each developer
  • INT for the integration environment, i.e. code built off the integration branch
  • PP for the preproduction environment, with code and infrastructure templates promoted from the integration environment
  • PRD for the production environment, with code and infrastructure templates promoted from the preproduction environment
[Figure: Serverless environments layout]

The PRD environment is usually extracted to a separate account to guarantee that Service Limits are not shared, and an additional security lockdown is applied.

By now, we have come up with a high-level layout of serverless environments, which allows for isolation and high agility. What is left is the implementation of the pipeline.

Based on my experience, the delivery pipeline should address the following needs:

  • build and test application code
  • store application code
  • create deployment manifest
  • deploy environment
  • test environment
  • tear down environment
  • migrate environment state from one to another
  • roll back the migration

Need #1: Build and test code artefacts

This need is all about being able to build and test code (at a unit level) in a given language. It is the same as in any other architecture.

Need #2: Store code artefacts

Once the application code is packaged, it needs to be stored in a repository. S3 serves well as a serverless code repository, since infrastructure templates usually refer to S3 as the source of application packages.
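
A sketch of such a repository bucket, with versioning enabled so that a manifest can pin an exact package revision (the bucket name is illustrative):

```hcl
# Shared repository bucket for application ZIP packages (name is illustrative)
resource "aws_s3_bucket" "artifacts" {
  bucket = "my-team-serverless-artifacts"
}

# Versioning means an overwritten key can still be pinned by object version
resource "aws_s3_bucket_versioning" "artifacts" {
  bucket = aws_s3_bucket.artifacts.id

  versioning_configuration {
    status = "Enabled"
  }
}
```

The pipeline then only ever uploads new, immutable keys (e.g. news-publisher/1.4.2.zip) for the deployment manifest to refer to.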

Need #3: Create deployment manifest

One or more artefacts have been built so far; together they form the to-be-deployed image of the new environment. The deployment manifest contains a description of those artefacts. The key information is their location in the repository, so they can be picked up during infrastructure set-up. It can also include a pointer to the revision of the infrastructure templates and configuration that will be applied by the IOE.

To give an example, in the case of Terraform the deployment manifest will be a tfvars file. For something more descriptive, one can use JSON or YAML.
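
For illustration, a tfvars manifest for the INT environment might look like this; every name and version below is hypothetical:

```hcl
# manifest-INT.tfvars: the to-be-deployed image of the INT environment
env                   = "INT"
artifact_bucket       = "my-team-serverless-artifacts"
news_publisher_s3_key = "news-publisher/1.4.2.zip" # immutable package key
infra_revision        = "3f2a1c9"                  # git revision of templates/config
```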

Need #4: Deploy the environment

At this stage the deployment manifest is applied to the IOE by composing:

  • deployment manifest
  • infrastructure templates
  • environment configuration

The final result is an environment with its interfaces (Web, API, etc.) up and running.
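
One way to sketch that composition, assuming a hypothetical module that holds the templates and an environment configuration that scales capacity down outside production:

```hcl
# Compose the manifest inputs, the templates and the environment configuration
module "news_service" {
  source = "./modules/news-service" # hypothetical module holding the templates

  env                   = var.env
  artifact_bucket       = var.artifact_bucket
  news_publisher_s3_key = var.news_publisher_s3_key

  # Environment configuration: restrict capacity/cost outside production
  lambda_memory_mb = var.env == "PRD" ? 1024 : 128
}
```

The apply itself is then a matter of terraform apply -var-file=manifest-INT.tfvars against the right state.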

Need #5: Test the environment

When the interfaces are up, the deployment can be tested functionally and non-functionally. This can involve security tests, UI tests, load tests, API tests, exploratory tests, etc. Provided they are successful, it is confirmed that the just-deployed revision of the environment, including the new feature, represents shippable software.
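
To make the freshly deployed interfaces reachable by the test stage, the templates can expose them as outputs. A sketch, assuming the templates define an API Gateway deployment named news:

```hcl
# Published after apply; the test stage can read it via:
#   terraform output -raw api_base_url
output "api_base_url" {
  value = aws_api_gateway_deployment.news.invoke_url
}
```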

Need #6: Tear down the environment

Assuming the environment is no longer needed, it can be torn down to avoid incurring extra cost. What we have practised is destroying each development environment at the end of the day (with Terraform, a scheduled terraform destroy), which frees developers from having to remember which resources they created.

Need #7: Migrate environment state from one to another

The last need is to facilitate promotion from one environment to another, which is the foundation of our delivery pipeline. If we want to migrate the state of environment A to B (an environment being built of resources that each have their own state), one can apply the previously created deployment manifest for state B to the orchestration engine. This resembles Need #4; however, in this instance we migrate from the existing state to the new one instead of creating the environment from scratch. It is a very important exercise because, when it comes to applying a new environment state to production, it is rarely practical to spin up the environment from scratch.
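
For this to work, each environment needs its own durable state that successive applies migrate in place. With Terraform that is typically a remote backend keyed per environment; the bucket and key below are illustrative:

```hcl
# Remote, durable state per environment; successive applies migrate it in place
terraform {
  backend "s3" {
    bucket = "my-team-terraform-state" # illustrative state bucket
    key    = "environments/INT/terraform.tfstate"
    region = "eu-west-1"
  }
}
```

In practice the key is injected per environment, e.g. via terraform init -backend-config=... or workspaces, so that applying the state-B manifest updates environment A's existing resources rather than creating new ones.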

Need #8: Roll back the migration

From time to time it is necessary to roll back a migration. With the concept of the deployment manifest and an IOE like Terraform, it comes down to applying the deployment manifest from the previous release. Simples!

[Figure: Promotion and Rollback]

I hope the above gives you enough material to shape the pipeline for your serverless solution. It is presented at a high level, so it should work for you regardless of the implementation. Best of luck!
