Big Data series: a summary of Amazon Big Data tools we have discussed

Introduction

We have gone through a lot of material about Big Data on this blog. This post summarises each Amazon Cloud component we have covered, one by one: what it does and what role it plays in a Big Data architecture.

The components

Read more of this post

Using Amazon RedShift with the AWS .NET API Part 10: RedShift in Big Data

Introduction

In the previous post we discussed how to calculate the more complex parts of the aggregation script: the median and nth percentile of the URL response time.

This post will take up the Big Data thread where we left off at the end of the series on Amazon S3. We'll also refer to what we built at the end of the series on Elastic MapReduce. That post showed how to run an aggregation job via the AWS .NET SDK on an available EMR cluster. The prerequisite for following the code examples in this post is therefore familiarity with what we discussed in those posts.

In this post our goal is to show how RedShift can serve as an alternative to EMR. We'll also see how to import the raw data source from S3 into RedShift.
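
As a quick preview, here's a minimal sketch of such an import: RedShift's COPY command can load a delimited file straight from S3. The cluster endpoint, table, bucket, key and credential values below are all placeholder assumptions for illustration, not the values used later in the post.

```csharp
using System;
using System.Data.Odbc;

public class RedShiftS3Import
{
    public static void Main()
    {
        // Placeholder ODBC connection string: RedShift speaks the Postgresql
        // wire protocol, so the standard Postgresql ODBC driver is used on port 5439.
        string connectionString = "Driver={PostgreSQL Unicode(x64)};" +
            "Server=mycluster.abcdefgh1234.us-east-1.redshift.amazonaws.com;" +
            "Port=5439;Database=urls;UID=masteruser;PWD=mypassword";

        // COPY reads the pipe-delimited raw data file directly from S3 into the
        // raw data table. Table, bucket and key names are made up for this sketch.
        string copyStatement =
            "copy url_response_times from 's3://my-log-bucket/raw/response-times.txt' " +
            "credentials 'aws_access_key_id=MY_KEY_ID;aws_secret_access_key=MY_SECRET' " +
            "delimiter '|';";

        using (OdbcConnection connection = new OdbcConnection(connectionString))
        {
            connection.Open();
            using (OdbcCommand command = new OdbcCommand(copyStatement, connection))
            {
                command.ExecuteNonQuery();
                Console.WriteLine("Raw data imported from S3.");
            }
        }
    }
}
```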

Read more of this post

Using Amazon RedShift with the AWS .NET API Part 9: data warehousing and the star schema 3

Introduction

In the previous post we started formulating a couple of Postgresql statements to fill in the dimension tables and the aggregation values. We saw that it wasn't particularly difficult to calculate some basic aggregations over combinations of URL and Customer. We ignored the calculation of the median and percentile values and simply set them to 0. I've decided to dedicate a post just to those functions, as they are considerably more complex than min, max and average.
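
For reference, those basic aggregations follow roughly the shape sketched below; the connection string, table and column names are illustrative stand-ins rather than the exact ones from the series.

```csharp
using System;
using System.Data.Odbc;

public class BasicAggregations
{
    public static void Main()
    {
        // Placeholder connection string, built the same way as in part 6 of the series.
        string connectionString = "Driver={PostgreSQL Unicode(x64)};" +
            "Server=mycluster.example.redshift.amazonaws.com;" +
            "Port=5439;Database=urls;UID=masteruser;PWD=mypassword";

        // Min, max and average response times per URL and Customer combination.
        // The median and percentile columns are hard-coded to 0 at this stage.
        string aggregationSql =
            "select url, customer, min(response_time) as min_rt, " +
            "max(response_time) as max_rt, avg(response_time) as avg_rt, " +
            "0 as median_rt, 0 as percentile95_rt " +
            "from url_response_times group by url, customer;";

        using (OdbcConnection connection = new OdbcConnection(connectionString))
        {
            connection.Open();
            using (OdbcCommand command = new OdbcCommand(aggregationSql, connection))
            using (OdbcDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0} | {1} | min: {2}, max: {3}, avg: {4}",
                        reader[0], reader[1], reader[2], reader[3], reader[4]);
                }
            }
        }
    }
}
```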

Median in RedShift

The median is itself a percentile value: it is the 50th percentile. So we could use the percentile function for the median as well, but the median has its own dedicated function in RedShift. It's not a compact function like min(), where you pass in one or more arguments and get back a single value.
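
Here's a minimal sketch of how the two functions can be combined, assuming the same illustrative table and column names as before. MEDIAN and PERCENTILE_CONT are window functions in RedShift, so they need an OVER clause with a partition rather than a simple GROUP BY, which is why they aren't compact like min().

```csharp
using System;
using System.Data.Odbc;

public class MedianAndPercentile
{
    public static void Main()
    {
        // Placeholder connection string as in the earlier parts of the series.
        string connectionString = "Driver={PostgreSQL Unicode(x64)};" +
            "Server=mycluster.example.redshift.amazonaws.com;" +
            "Port=5439;Database=urls;UID=masteruser;PWD=mypassword";

        // median() and percentile_cont() are window functions, so they emit one
        // value per row; select distinct collapses the duplicates per group.
        string medianSql =
            "select distinct url, customer, " +
            "median(response_time) over (partition by url, customer) as median_rt, " +
            "percentile_cont(0.95) within group (order by response_time) " +
            "over (partition by url, customer) as percentile95_rt " +
            "from url_response_times;";

        using (OdbcConnection connection = new OdbcConnection(connectionString))
        {
            connection.Open();
            using (OdbcCommand command = new OdbcCommand(medianSql, connection))
            using (OdbcDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0} | {1} | median: {2}, 95th percentile: {3}",
                        reader[0], reader[1], reader[2], reader[3]);
                }
            }
        }
    }
}
```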

Read more of this post

Using Amazon RedShift with the AWS .NET API Part 8: data warehousing and the star schema 2

Introduction

In the previous post we discussed the basics of data warehousing and the most commonly used database schemas associated with it. We also set up the necessary tables: a raw data table, which we filled with some raw data records, two dimension tables and a fact table.

In this post we’ll build upon the existing tables and present a couple of useful Postgresql statements in RedShift. Keep in mind that Postgresql in RedShift is very limited compared to the full version, so you often need to be resourceful.

Fill in the dimension tables

Recall that we have two dimension tables: DimUrl and DimCustomer. Both are referenced from the fact table by their primary keys. We haven’t added any data to them yet. We’ll do that now.
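
A minimal sketch of the idea: each dimension table is filled with the distinct values found in the raw data table. The connection string and column names are illustrative assumptions, not necessarily the ones used in the series.

```csharp
using System;
using System.Data.Odbc;

public class FillDimensionTables
{
    public static void Main()
    {
        // Placeholder connection string as in part 6 of the series.
        string connectionString = "Driver={PostgreSQL Unicode(x64)};" +
            "Server=mycluster.example.redshift.amazonaws.com;" +
            "Port=5439;Database=urls;UID=masteruser;PWD=mypassword";

        // Insert every distinct URL and customer from the raw data table.
        // The raw table and column names are made up for this sketch.
        string[] fillStatements =
        {
            "insert into DimUrl (url) select distinct url from url_response_times;",
            "insert into DimCustomer (name) select distinct customer from url_response_times;"
        };

        using (OdbcConnection connection = new OdbcConnection(connectionString))
        {
            connection.Open();
            foreach (string statement in fillStatements)
            {
                using (OdbcCommand command = new OdbcCommand(statement, connection))
                {
                    int rows = command.ExecuteNonQuery();
                    Console.WriteLine("{0} row(s) inserted.", rows);
                }
            }
        }
    }
}
```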

Read more of this post

Using Amazon RedShift with the AWS .NET API Part 7: data warehousing and the star schema

Introduction

In the previous post we dived into Postgresql statement execution on a RedShift cluster using C# and ODBC. We saw how to execute a single statement or many of them at once. We also tested a parameterised query, which can protect us from SQL injection.
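
As a reminder of that last point, a parameterised ODBC query looks roughly like the sketch below. ODBC uses positional '?' markers, so the parameter name passed to AddWithValue is informational only. The connection string, table and column names are placeholders.

```csharp
using System;
using System.Data.Odbc;

public class ParameterisedQuery
{
    public static void Main()
    {
        // Placeholder connection string as in the previous post.
        string connectionString = "Driver={PostgreSQL Unicode(x64)};" +
            "Server=mycluster.example.redshift.amazonaws.com;" +
            "Port=5439;Database=urls;UID=masteruser;PWD=mypassword";

        // The '?' is an ODBC positional parameter marker; the value is bound
        // separately, so it cannot inject SQL into the statement.
        string query = "select url, response_time from url_response_times where customer = ?;";

        using (OdbcConnection connection = new OdbcConnection(connectionString))
        {
            connection.Open();
            using (OdbcCommand command = new OdbcCommand(query, connection))
            {
                // ODBC binds parameters by position, not by this name.
                command.Parameters.AddWithValue("@customer", "Favourite Customer Ltd");
                using (OdbcDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("{0}: {1}", reader[0], reader[1]);
                    }
                }
            }
        }
    }
}
```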

In this post we’ll deviate from .NET a little and concentrate on the basics of data warehousing and data mining in RedShift. In particular we’ll learn about a popular schema type often used in conjunction with data mining: the star schema.

Star and snowflake schemas

I went through the basic characteristics of star and snowflake schemas elsewhere on this blog, so I’ll copy the relevant parts here.
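
Before that, here’s a minimal sketch of what a star schema for our URL response times could look like in RedShift DDL: a fact table in the middle referencing the dimension tables around it. All table and column names are placeholders that only echo those used later in the series, and note that RedShift accepts primary and foreign key declarations but does not enforce them.

```csharp
using System;
using System.Data.Odbc;

public class StarSchemaSetup
{
    public static void Main()
    {
        // Placeholder connection string as in part 6 of the series.
        string connectionString = "Driver={PostgreSQL Unicode(x64)};" +
            "Server=mycluster.example.redshift.amazonaws.com;" +
            "Port=5439;Database=urls;UID=masteruser;PWD=mypassword";

        // Two dimension tables and one fact table referencing them by key.
        // RedShift records these constraints for planning but does not enforce them.
        string[] ddlStatements =
        {
            "create table DimUrl (id int identity(0,1) primary key, url varchar(512));",
            "create table DimCustomer (id int identity(0,1) primary key, name varchar(256));",
            "create table FactAggregations (" +
                "url_id int references DimUrl(id), " +
                "customer_id int references DimCustomer(id), " +
                "date_utc date, min_rt float, max_rt float, avg_rt float, " +
                "median_rt float, percentile95_rt float);"
        };

        using (OdbcConnection connection = new OdbcConnection(connectionString))
        {
            connection.Open();
            foreach (string ddl in ddlStatements)
            {
                using (OdbcCommand command = new OdbcCommand(ddl, connection))
                {
                    command.ExecuteNonQuery();
                }
            }
        }
    }
}
```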

Read more of this post

Using Amazon RedShift with the AWS .NET API Part 6: Postgresql to master node using ODBC

Introduction

In the previous post we tested how to connect to the master node in code using the .NET AWS SDK and ODBC. We also executed our first simple Postgresql statement remotely. In this post we’ll continue along the same lines and execute some more Postgresql statements on our master node.
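
As a refresher, here's the connection recipe in a nutshell: a minimal sketch, assuming a placeholder cluster endpoint and the standard Postgresql ODBC driver, which is what RedShift's master node speaks.

```csharp
using System;
using System.Data.Odbc;

public class MasterNodeConnection
{
    public static void Main()
    {
        // Placeholder endpoint and credentials; the master node listens on
        // port 5439 and understands the Postgresql wire protocol.
        string connectionString = "Driver={PostgreSQL Unicode(x64)};" +
            "Server=mycluster.abcdefgh1234.us-east-1.redshift.amazonaws.com;" +
            "Port=5439;Database=urls;UID=masteruser;PWD=mypassword";

        using (OdbcConnection connection = new OdbcConnection(connectionString))
        {
            connection.Open();
            // A trivial statement to verify that the connection works.
            using (OdbcCommand command = new OdbcCommand("select current_date;", connection))
            {
                object result = command.ExecuteScalar();
                Console.WriteLine("Master node says the date is {0}.", result);
            }
        }
    }
}
```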

Preparation

We’ll execute most of the scripts we saw in this blog post. Prepare a text file called postgresscript.txt with the following content and save it somewhere on your hard drive:

Read more of this post
