Introduction to Amazon Code Pipeline with Java part 24: summary
August 28, 2016
In the previous post we finalised the code for the job worker daemon thread. The job worker daemon is the central workhorse of the Code Pipeline third party action. It includes all the functions and objects needed to poll the CP endpoint for new jobs and act on each one. The job worker is started as soon as the servlet context listener has run at application startup. The daemon then sets up the polling action, which is executed at specific intervals. Whenever a new job appears, the third party action acts upon it in some way; what exactly it does is entirely up to what your plug-in is meant to accomplish.
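The startup-and-poll lifecycle described above can be sketched in a few lines. This is a minimal, self-contained illustration using a ScheduledExecutorService; the class name, the pollOnce method and the 10-second interval are illustrative stand-ins for the actual job worker code from the series, which is wired up via a ServletContextListener.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: a job poller started once at application startup,
// running on a daemon thread so it does not keep the JVM alive on shutdown.
public class JobPollerSketch {

    private final AtomicInteger pollCount = new AtomicInteger();

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(runnable -> {
                Thread t = new Thread(runnable, "job-worker");
                t.setDaemon(true); // daemon thread, like the one in the series
                return t;
            });

    // In the series this is triggered from the servlet context listener's
    // contextInitialized method; here it is just a plain start() method.
    public void start() {
        scheduler.scheduleAtFixedRate(this::pollOnce, 0, 10, TimeUnit.SECONDS);
    }

    private void pollOnce() {
        // Here the real worker would poll the CP endpoint for new jobs
        // and act on each job it receives.
        pollCount.incrementAndGet();
    }

    public int pollCount() {
        return pollCount.get();
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```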
The previous post was also the last part in this series. In this post we’ll quickly summarise what we’ve gone through in this series.
What is Code Pipeline? It is a Continuous Delivery (CD) tool that enables users to run builds, tests and deployments automatically. Its purpose is similar to that of other CI tools such as TeamCity, Jenkins or Atlassian Bamboo, but there are some fundamental differences in the architecture and customisation options. Continuous Delivery usually means a software production process in which continuous integration and continuous deployment are two substeps. There can be other substeps as well, such as automated unit tests, integration tests, builds etc. From Wikipedia: "Continuous delivery (CD) is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time."
In practice, CD means approximately the following steps:
- The developer checks in the changes to the code repository, e.g. GitHub
- A CI/CD tool, such as TeamCity can “see” that new code was pushed to GitHub which in turn triggers one or more actions in TeamCity
- The actions in TeamCity can be whatever was set up using the available plugins: build the project, run the unit tests, run a custom script, execute a load test etc.
- TeamCity can even deploy the project to a deployment server
- TeamCity can also perform additional tests on the deployed application such as GUI tests with Selenium
This is where Amazon has entered the scene with its Code Pipeline service.
Key differences between CP and Jenkins/TeamCity/Bamboo
- Installation: TC, Jenkins and Bamboo are all web-based services that you can download and run on your CI server, or even test on your local machine. We can add our own projects and custom build runners after the installation. There's nothing like that with CodePipeline: it cannot be installed on a CI server. Instead, it's a service offered by AWS and you'll need an AWS account to use it. It's also a paid service, though you can try it for free for a trial period.
- Build agents: TC, Jenkins and Bamboo come with their own build agents. We have our TC CI server on a Windows machine, where a TC Windows service monitors the progress of each build. CP has a built-in job monitoring service that triggers the stages and actions in the pipeline. However, if you build your own custom job runner for CP then you must also build, deploy and operate the agent that monitors the job execution yourself.
- Custom build runners: TC, Jenkins and Bamboo allow you to build your own plugins of various types. The UI is extremely extensible: you can add your own elements to various menus, tabs, pages, popup windows etc. The possibilities are almost endless, you just need to find the right extension point. CP plugin development, on the other hand, is very different. The CP UI is not customisable at all. All custom elements required for the job runner must be custom built and hosted outside CP.
Basic concepts of Code Pipeline
- Pipeline: a pipeline is a chain of steps that describes how a piece of software goes through the release process
- Artifact: the “things” that are either passed through the pipeline, such as the piece of software, or some result of the process like a deployable package
- Stage: each pipeline consists of two or more stages. The outer containers in a pipeline are the stages. The stage names are unique within a pipeline. A single stage can only process one artifact at a time and must finish its task with the artifact before it can act on the next artifact.
- Action: each stage consists of one or more actions. An action represents a task performed on an artifact
- Transition: transitions are represented by the arrows that connect two stages. They simply show the order in which the stages will be executed.
- Job agent: a key component that third party action developers will need to work on is the job agent. A job agent is a process that continuously monitors a CodePipeline endpoint, polling it for new jobs periodically, e.g. once every 10 seconds. For example, if the pipeline has reached the Apica Loadtest action then our job agent will be able to pull that job and process it. It's important to keep in mind that the job agent polls CP for new jobs. CP doesn't push notifications about a new job to an agent; it's the other way around.
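The pull-based model in the last bullet point can be shown with a small self-contained sketch. The FakeEndpoint and JobAgent classes below are illustrative stand-ins, not the real AWS SDK types (in the actual SDK the agent would call methods such as pollForThirdPartyJobs on the CodePipeline client); the point is simply that every job transfer is initiated by the agent, never by the endpoint.

```java
import java.util.ArrayDeque;
import java.util.Optional;
import java.util.Queue;

// Illustrative stand-in for the CodePipeline endpoint: it never pushes jobs,
// it only answers poll requests coming from the agent.
class FakeEndpoint {
    private final Queue<String> pendingJobs = new ArrayDeque<>();

    void enqueue(String jobId) {
        pendingJobs.add(jobId);
    }

    // The only way to obtain a job is to ask for one.
    Optional<String> pollForJobs() {
        return Optional.ofNullable(pendingJobs.poll());
    }
}

// Illustrative job agent: it runs the polling cycle and processes whatever
// job the endpoint hands back, if any.
class JobAgent {
    private final FakeEndpoint endpoint;
    private int processed;

    JobAgent(FakeEndpoint endpoint) {
        this.endpoint = endpoint;
    }

    // One polling cycle: ask the endpoint, process the job if there is one.
    void pollOnce() {
        endpoint.pollForJobs().ifPresent(jobId -> {
            // A real agent would acknowledge the job, run the third party
            // action and then report success or failure back to CP.
            processed++;
        });
    }

    int processedCount() {
        return processed;
    }
}
```

In a real job runner the pollOnce cycle would run on the daemon thread discussed earlier in the series, at a fixed interval such as every 10 seconds.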
What we have learnt
In this series we’ve gone through quite a lot around CodePipeline:
- Basic setup of a new pipeline
- Adding a third party action to a stage
- Detailed communication flow between the job agent and a Code Pipeline endpoint
- Detailed code examples to get you started with your own third party action
I hope you’ve found this series interesting and that you’ve been able to start building your CP plug-in based on the provided code examples.
View all posts related to Amazon Web Services and Big Data here.