Introduction to Amazon Code Pipeline with Java part 6: third party action overview

Introduction

In the previous post we looked at the most important keywords related to AWS CodePipeline. A pipeline is a workflow of stages that together describe the delivery process for a piece of software. This piece of software is called an artifact and goes through various revisions as it is passed from one stage to another. Each stage can consist of one or more actions. An action is a task performed on an artifact. If all actions in a stage complete without a failure then the pipeline transitions to the next stage. It's also possible to disable a transition, e.g. to enforce a manual approval before the next stage runs.
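The vocabulary above can be summarised in a small conceptual sketch. Note that this is not the actual CodePipeline API, just an illustrative Java model of pipelines, stages, actions and disabled transitions:

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch of the CodePipeline vocabulary: a pipeline is an ordered
// list of stages, each stage holds one or more action results, and the
// transition into a stage can be disabled to enforce a manual approval.
public class PipelineModel {
    enum ActionResult { SUCCEEDED, FAILED }

    static class Stage {
        final String name;
        final List<ActionResult> actionResults = new ArrayList<>();
        boolean inboundTransitionEnabled = true;

        Stage(String name) { this.name = name; }

        boolean succeeded() {
            return actionResults.stream().allMatch(r -> r == ActionResult.SUCCEEDED);
        }
    }

    // Walks the stages in order and returns the name of the stage where
    // execution stopped, or "DONE" if every stage succeeded.
    static String run(List<Stage> stages) {
        for (Stage stage : stages) {
            if (!stage.inboundTransitionEnabled) return stage.name; // halted by a disabled transition
            if (!stage.succeeded()) return stage.name;              // halted by a failed action
        }
        return "DONE";
    }
}
```

Disabling `inboundTransitionEnabled` on a stage mimics clicking the arrow between two stages in the console: upstream stages still run, but execution never enters the disabled stage.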

In this post we’ll start with a high-level overview of third-party action development.

Read more of this post

Introduction to Amazon Code Pipeline with Java part 5: architecture key terms

Introduction

In the previous post we looked at some key differences between TeamCity/Jenkins and AWS CodePipeline. There are a number of aspects where these CI tools differ such as the installation, the deployment and custom build runner development.

In this post we’ll start looking at the CP architecture. We’ll concentrate on the key terms to begin with.

Read more of this post

Introduction to Amazon Code Pipeline with Java part 4: comparison with TeamCity and Jenkins

Introduction

In the previous post we saw how to add a custom job runner of type Test to an existing pipeline. These runners cannot be added during the initial setup process; they can only be added when updating an existing pipeline. We went through an example with the Apica Loadtest job runner and saw how to specify the necessary inputs for the job that the runner executes.

In this post we’ll discuss some of the key differences between TeamCity/Jenkins and CodePipeline (CP). TeamCity (TC) and Jenkins are quite similar so I will treat them as a group for this discussion.

Read more of this post

Introduction to Amazon Code Pipeline with Java part 3: adding custom job runners

Introduction

In the previous post we went through the steps to set up a brand new pipeline in AWS Code Pipeline. If you are entirely new to the AWS tools then you might find it overwhelming at first, as you have to learn about other AWS tools as well, such as S3 and Elastic Beanstalk. However, you might not need all of it since a pipeline can be quite small and consist of only 2-3 steps. The pipeline will start executing as soon as it has been set up. The arrows connecting the steps enable us to halt the execution of the pipeline at a specific step.

In this post we’ll see how to update a pipeline. Updating the pipeline also makes it possible to add custom build runners to it. I’ll use the load test job runner I referred to in the first part. Note that you’ll need an Apica Loadtest account to fully follow along with these steps. The main point is to demonstrate the process of adding a new custom job runner to an existing pipeline. The implementation details are not important at this moment.

Read more of this post

Introduction to Amazon Code Pipeline with Java part 2: setup

Introduction

In the previous post we set out the main topic of this series. We’ll be talking about Amazon Code Pipeline and what it can do for you in terms of Continuous Delivery. The larger part of the post dealt with the differences between 3 related topics: Continuous Integration, Continuous Deployment and Continuous Delivery.

Code Pipeline is an example of an automated Continuous Delivery tool which can help you with the often tedious steps of builds, test runs and deployments. The idea is that the developer can concentrate on the exciting stuff, such as writing some fantastically well-written code which is then pushed into a code repository, like GitHub. The rest of the steps are then handled by a CI/CD tool like Jenkins.

In this post we’ll take a quick visual tour of Code Pipeline so that you get the idea how to set up a new pipeline. We won’t yet add a custom job runner as that is not part of the initial setup process.

Read more of this post

Introduction to Amazon Code Pipeline with Java part 1: basics of CI/CD

Introduction

Amazon has a relatively new service out called Code Pipeline (CP). It is a Continuous Delivery tool that enables users to run builds, tests and deploys automatically. Its purpose is similar to other CI tools such as TeamCity or Jenkins but there are some fundamental differences in the architecture and customisation options.

The company I work for had the honour to team up with Amazon and be among the first to integrate a custom job processor in CP before it was made public in June 2015. I was very fortunate to take part in this project as the developer who was responsible for writing the job processor. Our tool is a selectable option among the job processor tools of type “Test”:

Apica Loadtest in Code Pipeline

In this series we’ll take a closer look at Code Pipeline and also how a new job processor can be integrated with it using Java. It’s a large topic so the series will also consist of many posts. At this point I’m not sure yet how many there will be but 15-18 is my initial estimate since I’d like to be as detailed as possible. The AWS CP home page provides a lot of details both about the general architecture and setup. The developer pages provide the API for the CP related classes and functions.

You’ll need at least a test AWS account if you want to try the tool and have a go at building a custom job processor. However, even if you don’t have an account it can be interesting for you to learn about this new technology.

Read more of this post

Using Amazon DynamoDb for IP and co-ordinate based geo-location services part 12: querying the geolocation range to DynamoDb

Introduction

In the previous post we created the DynamoDb source file with a reduced set of geolocations from the full MaxMind data source. We saw how we could reuse the same process as before in the case of the IP and coordinate range tables.

In this final post of the series we’ll close the loop by actually extracting the geographic properties of a geoname ID. After all, you’d like to know that a visitor comes from New York rather than from “geoname ID 3452334”.

Read more of this post

Using Amazon DynamoDb for IP and co-ordinate based geo-location services part 11: uploading the geolocation range to DynamoDb

Introduction

In the previous post we successfully queried the coordinate range database in DynamoDb. We used the query endpoints that are built into the AWS geo-location library to find the data records within the radius around a centre point.
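The core idea behind a radius query is simple: compute the great-circle distance between the centre point and each candidate record, keeping only those within the radius. The AWS geo-location library does the heavy lifting over DynamoDb, but the underlying distance check can be sketched in plain Java with the haversine formula (a self-contained illustration, not the library's own code):

```java
public class GeoRadius {
    static final double EARTH_RADIUS_M = 6_371_000.0; // mean Earth radius in metres

    // Great-circle (haversine) distance in metres between two lat/lng points.
    static double distanceMetres(double lat1, double lng1, double lat2, double lng2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLng = Math.toRadians(lng2 - lng1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLng / 2) * Math.sin(dLng / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    // True if the given point lies within radiusMetres of the centre point.
    static boolean withinRadius(double centreLat, double centreLng,
                                double lat, double lng, double radiusMetres) {
        return distanceMetres(centreLat, centreLng, lat, lng) <= radiusMetres;
    }
}
```

In practice the library first narrows the candidate set using a geohash-based index so that only nearby records need this exact distance check.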

Where are we now?

We’ve got quite far with our project. We have the ability to query an IPv4 and coordinate range table in DynamoDb. We can extract a geoname ID that belongs to either an IP or a latitude-longitude pair. The next step is to dress up those IDs with real location data such as “Stockholm” or “Tehran”.
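Conceptually, this last "dress up" step is just a lookup from a geoname ID to its location properties. A minimal sketch, using an in-memory map as a stand-in for the DynamoDb geolocation table (the sample IDs and cities are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for the DynamoDb geolocation properties table: resolves a geoname
// ID, obtained from an IP or coordinate query, to a human-readable city name.
public class GeonameLookup {
    static final Map<Long, String> CITY_BY_GEONAME_ID = new HashMap<>();
    static {
        // Sample entries for illustration only.
        CITY_BY_GEONAME_ID.put(2673730L, "Stockholm");
        CITY_BY_GEONAME_ID.put(112931L, "Tehran");
    }

    static String cityOf(long geonameId) {
        return CITY_BY_GEONAME_ID.getOrDefault(geonameId, "unknown");
    }
}
```

In the real project the map is replaced by a DynamoDb table keyed on the geoname ID, queried with the AWS Java SDK.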

Read more of this post

Using Amazon DynamoDb for IP and co-ordinate based geo-location services part 10: querying the coordinate range table

Introduction

In the previous post we loaded the limited lng/lat range records into DynamoDb. As we’re only talking about 50 records we could have added them in code one by one. However, that strategy would never work for the full MaxMind data set, even after discarding the duplicates. So instead we looked at the built-in Import/Export functionality in DynamoDb. You’ll be able to go through the same process when you’re ready to import the full data set.

In this post we’ll see how to query the lnglat range database to extract the ID of the nearest geolocation. We’ll get to use the AWS Java SDK.

Read more of this post

Using Amazon DynamoDb for IP and co-ordinate based geo-location services part 9: uploading the co-ordinate range to DynamoDb

Introduction

In the previous post we successfully created the lng/lat import file that DynamoDb can understand and process.

In this post we’ll upload this file to DynamoDb. The process will be the same as what we saw in this post where we inserted the demo data into the IPv4 range table. If necessary, re-read that post to refresh your memory about the process. We’ll follow the strategy we laid out in this post.

Read more of this post
