What is Amazon Redshift? How to use it?

Last updated on 11th Mar, 2021
Published on 30th Aug, 2019

Amazon Web Services (AWS) is a cloud platform with more than 165 fully featured services. From startups to large enterprises to government agencies, millions of customers use AWS to power their infrastructure at a lower cost. Amazon Redshift brings the same approach to big data analytics and data warehousing.

It is a columnar data store holding billions of rows of data that are processed in parallel, and it is one of the fastest-growing services offered by AWS. But what exactly is Amazon Redshift? At a fundamental level, it is a combination of two technologies: a column-oriented data store and massively parallel processing (MPP).

What is a column-oriented database?

This type of database management system stores data by column rather than by row. It is mainly used in big data, analytics, and data warehouse applications. Other benefits of using a column-oriented database are that the need for joins is reduced and queries are resolved quickly.

Row-oriented databases are not as efficient at performing these analytical operations. Columnar databases flip the dataset on its side, which makes such operations easy to perform. Amazon Redshift is an affordable, fast, and easy way to get this kind of operation up and running.
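The difference between the two layouts can be sketched in a few lines of Python. This is a toy illustration (not Redshift itself, and the table contents are made up): aggregating one column in a columnar layout touches only that column's values, not every field of every row.

```python
# Row-oriented layout: each record is stored together.
rows = [
    {"id": 1, "region": "us-east-1", "sales": 120},
    {"id": 2, "region": "eu-west-1", "sales": 340},
    {"id": 3, "region": "us-east-1", "sales": 75},
]

# Column-oriented layout: each column is stored together.
columns = {
    "id": [1, 2, 3],
    "region": ["us-east-1", "eu-west-1", "us-east-1"],
    "sales": [120, 340, 75],
}

# SUM(sales): the row store scans every whole record; the column
# store reads a single contiguous list.
total_row_store = sum(r["sales"] for r in rows)
total_col_store = sum(columns["sales"])

print(total_row_store, total_col_store)  # 535 535
```

Both layouts give the same answer, but the columnar one reads far less data per analytical query, which is why warehouses favor it.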

What is Massively Parallel Processing (MPP)?

This means that a large number of computers or processors perform computations simultaneously, in parallel. As with Amazon EC2, using Amazon Redshift involves deploying a cluster; deploying a single server or node is not possible in Redshift. The cluster has a leader node followed by compute nodes. Depending on the distribution style and sort key you have specified for a table, the data is spread across the cluster, optimizing its ability to resolve queries.
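The idea of spreading rows across nodes by a key can be sketched as follows. This is a hypothetical illustration of key-based distribution in general, not Redshift's actual internal hashing; the node count and key values are made up.

```python
import hashlib

NUM_NODES = 2  # e.g. two compute nodes in a small cluster

def node_for(dist_key: str) -> int:
    """Map a distribution-key value to a compute node deterministically."""
    digest = hashlib.md5(dist_key.encode()).hexdigest()
    return int(digest, 16) % NUM_NODES

# Rows with the same key always land on the same node, so each node can
# work on its own slice of the data in parallel.
keys = ["customer-1", "customer-2", "customer-3", "customer-4"]
placement = {key: node_for(key) for key in keys}
print(placement)
```

Because the mapping is deterministic, a query touching a single key can be routed to a single node, while scans fan out across all nodes at once.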


What is Amazon Redshift?

This is a data warehouse service that uses MPP and column orientation to handle data warehousing, ELT, big data, and analytics workloads. It is a linearly scalable database system that runs easily, quickly, and cheaply. You can start with a couple of hundred gigabytes of data and scale up to petabytes, helping you acquire insights for your organization.

If you haven’t used Amazon Redshift before, start with the official getting-started guides and documentation.

The AWS Command Line Interface or the Amazon Redshift console can be used to manage clusters interactively. If you want to manage clusters programmatically, you can use an AWS Software Development Kit (SDK) or the Amazon Redshift Query API.
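As a small taste of programmatic management, the sketch below extracts cluster statuses from the response shape that boto3's `redshift.describe_clusters()` returns. The sample response is hard-coded so the snippet runs without AWS credentials; the real call is shown commented out.

```python
# Hard-coded sample matching the shape of a describe_clusters() response.
sample_response = {
    "Clusters": [
        {"ClusterIdentifier": "examplecluster", "ClusterStatus": "available"},
    ]
}

def cluster_statuses(response: dict) -> dict:
    """Map each cluster identifier to its current status."""
    return {c["ClusterIdentifier"]: c["ClusterStatus"]
            for c in response["Clusters"]}

print(cluster_statuses(sample_response))  # {'examplecluster': 'available'}

# With real credentials, the response would come from:
#   import boto3
#   response = boto3.client("redshift").describe_clusters()
```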

Amazon Redshift was made to handle large-scale datasets and database migrations. It is based on an older version of PostgreSQL (8.0.2). A preview beta was released in November 2012, and the full release of Redshift followed three months later, on 15th February 2013. Redshift has more than 6,500 deployments, making it one of the largest fleets of cloud data warehouse deployments.

Amazon's APN Partner program lists a number of vendors whose tools have been tested and certified for use with Redshift, such as Actuate Corporation, Qlik, Looker, Logi Analytics, IBM Cognos, InetSoft, and Actian.

Using Amazon Redshift over traditional data warehouses will offer you the following benefits:

  1. It uses techniques such as an MPP architecture and distributed SQL operations to achieve a high level of query performance.
  2. With a simple API call or a few clicks in the AWS Management Console, you can scale Amazon Redshift up or down.
  3. Services provided by Redshift, such as upgrades, patching, and automatic data backups, make monitoring and managing the warehouse easier.
  4. Tasks like creating a cluster and defining its size, underlying node type, and security profile can be done through the AWS Management Console or a simple API call in no time.
  5. It saves time and resources by loading data smoothly into Redshift.
  6. Redshift has one of the fastest speeds among data warehouse architectures; it is claimed to be up to 10x faster than Hadoop-based systems.
  7. Redshift works with familiar PostgreSQL-compatible tools through standard JDBC and ODBC drivers.
  8. Like other AWS services, Redshift is a cost-effective solution that gives companies the flexibility to keep their data warehousing costs under control.
  9. When you work with sensitive data, you need protection tools in your data warehouse to lock the data down. Redshift offers security and encryption features such as Amazon VPC for network isolation.

Data types used in Amazon RedShift

Every value used in Amazon Redshift has a data type with a fixed set of associated properties. The data type constrains the values that a given column or argument can contain. You declare data types when you create a table. The following data types are used in Amazon Redshift tables:

  • SMALLINT (alias INT2) – Signed two-byte integer
  • INTEGER (aliases INT, INT4) – Signed four-byte integer
  • BIGINT (alias INT8) – Signed eight-byte integer
  • DECIMAL (alias NUMERIC) – Exact numeric of selectable precision
  • REAL (alias FLOAT4) – Single-precision floating-point number
  • DOUBLE PRECISION (aliases FLOAT8, FLOAT) – Double-precision floating-point number
  • BOOLEAN (alias BOOL) – Logical Boolean (true/false)
  • CHAR (aliases CHARACTER, NCHAR, BPCHAR) – Fixed-length character string
  • VARCHAR (aliases CHARACTER VARYING, NVARCHAR, TEXT) – Variable-length character string with a user-defined limit
  • DATE – Calendar date (year, month, day)
  • TIMESTAMP (alias TIMESTAMP WITHOUT TIME ZONE) – Date and time (without time zone)
  • TIMESTAMPTZ (alias TIMESTAMP WITH TIME ZONE) – Date and time (with time zone)

How to Get Started with Amazon Redshift?

Steps to Setup Redshift

The following steps will help you in setting up a Redshift instance, loading data, and running basic queries on the dataset.

Step 1: Prerequisites

To get started with Amazon Redshift, you need to have the following prerequisites:

  1. Signing up for AWS: Visit http://portal.aws.amazon.com/billing/signup and follow the instructions. During the sign-up process, you will receive a phone call and will have to enter a verification code.
  2. Determining firewall rules: This includes specifying a port to use when launching the Redshift cluster. To allow access, you create an inbound ingress rule. If the client’s system is behind a firewall, you have to open a port that you can use. This lets SQL client tools connect to the cluster and run queries.

Step 2: Creating an IAM role

Your cluster needs permission to access data and resources. AWS Identity and Access Management (IAM) is used to provide these permissions. You can either provide an IAM user's AWS access key or use an IAM role that is attached to the cluster. Creating an IAM role safeguards your AWS access credentials and protects your sensitive data. Here are the steps you need to follow:

  1. Open up the IAM console by signing into the AWS Management Console.
  2. Select Roles from the navigation pane and select Create role.
  3. Choose Redshift option from the AWS Service group.
  4. Select Redshift – Customizable present under Select your use case. Next, select Next: Permissions.
  5. You will be redirected to the Attach permissions policies page, where you have to select the AmazonS3ReadOnlyAccess option.
  6. For Set permissions boundary, keep the default setting and then select Next: Tags.
  7. On the Add Tags page, you can add tags optionally. After this, select Next: Review.
  8. Enter a name for the role in Role name, such as myRedshiftRole.
  9. Select Create role after reviewing the information.
  10. Select the role that you just created.
  11. Copy the Role ARN somewhere. You will be using this value for loading data.
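Behind the scenes, the "Redshift – Customizable" use case attaches a trust policy that lets the Redshift service assume the role on your cluster's behalf. A sketch of that standard trust policy document:

```python
import json

# Trust policy allowing the Amazon Redshift service to assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "redshift.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

The AmazonS3ReadOnlyAccess policy you attached in step 5 then governs what the role can do once assumed (here, read the sample data from Amazon S3).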

Step 3: Launching a Sample Amazon Redshift Cluster

Before you launch the cluster, remember that it is live and a standard usage fee will be charged to you until you delete the cluster. Here is what you need to do for launching an Amazon Redshift Cluster:

  • Open the Amazon Redshift console by signing in to the AWS Management Console.
  • From the main menu, select the region in which you want to create the cluster.
  • Select Quick launch cluster from the Amazon Redshift Dashboard.
  • You will be taken to the Cluster specifications page, where you need to select Launch cluster after entering the following values:
    • Node type: dc2.large
    • Number of compute nodes: 2
    • Cluster identifier: examplecluster
    • Master user name: awsuser
    • Master user password: a password of your choice
    • Database port: 5439
    • Available IAM roles: myRedshiftRole

Quick launch creates a default database named dev.

  • The cluster takes a few minutes to launch, after which a confirmation page appears. To return to the list of clusters, select Close.
  • You will be redirected to the Clusters page, where you can select the cluster that was just launched. Make sure that the database health is good and the cluster status is available before connecting to the database.
  • Click Modify cluster. Select the VPC security groups to associate with the cluster, then select Modify. Before continuing to the next step, ensure that the VPC security groups are displayed in the Cluster properties.
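The same quick-launch settings can be expressed programmatically. The sketch below builds the parameters for boto3's `redshift.create_cluster()`; the password and account ID are placeholders, and the call itself is commented out so running the snippet does not create (and bill for) a live cluster.

```python
# Parameters mirroring the quick-launch values above (placeholders marked).
cluster_params = {
    "ClusterIdentifier": "examplecluster",
    "NodeType": "dc2.large",
    "NumberOfNodes": 2,
    "MasterUsername": "awsuser",
    "MasterUserPassword": "<your-password>",          # placeholder
    "DBName": "dev",
    "Port": 5439,
    "IamRoles": ["arn:aws:iam::<account-id>:role/myRedshiftRole"],  # placeholder
}

# With real credentials and values filled in:
#   import boto3
#   boto3.client("redshift").create_cluster(**cluster_params)
print(sorted(cluster_params))
```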

Step 4: Authorizing access to the cluster

Configuring a security group for authorizing access is required before connecting the cluster. Follow the below-mentioned steps if you used the EC2-VPC platform for launching the cluster:

  • Open the Amazon Redshift Console. Select Clusters present in the navigation pane.
  • Make sure that you are on the Configuration tab and then select your cluster (for example, examplecluster).
  • Select your security group from under the Cluster properties.
  • Select the Inbound tab after security group has opened up in the Amazon EC2 console.
  • Select Edit, then Add Rule, enter the following, and choose Save:
    • Type: Custom TCP Rule
    • Protocol: TCP
    • Port Range: the same port number used when launching the cluster
    • Source: Custom, then 0.0.0.0/0 (note that 0.0.0.0/0 allows access from any IP address; restrict this in production)
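The same inbound rule can be expressed for boto3's `ec2.authorize_security_group_ingress()`. The security group ID below is a placeholder, and the call is commented out; as with the console steps, 0.0.0.0/0 opens the port to any IP, so narrow the CIDR range for real deployments.

```python
# Inbound rule matching the console values above (GroupId is a placeholder).
ingress_rule = {
    "GroupId": "sg-xxxxxxxx",   # placeholder security group ID
    "IpProtocol": "tcp",
    "FromPort": 5439,           # the cluster's database port
    "ToPort": 5439,
    "CidrIp": "0.0.0.0/0",      # any IP; restrict this in production
}

# With real credentials and a real group ID:
#   import boto3
#   boto3.client("ec2").authorize_security_group_ingress(**ingress_rule)
print(ingress_rule["FromPort"], ingress_rule["ToPort"])
```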

Step 5: Connecting to the cluster and running queries

To use your Amazon Redshift cluster as a host for querying databases, you have the following two options:

1. Using the Query Editor

You need permission to access the Query editor. To enable access, attach the AmazonRedshiftQueryEditor and AmazonRedshiftReadOnlyAccess IAM policies to the AWS IAM user you use to access the cluster. Here is how you can do that:

  • Open up the IAM console.
  • Select Users and then choose the user that requires access.
  • Select Add permissions and then Attach existing policies directly.
  • Choose AmazonRedshiftReadOnlyAccess and AmazonRedshiftQueryEditor for Policy names.
  • Select Next: Review and, finally, select Add permissions.

The Query editor lets you perform the following tasks:

  • Running SQL commands
  • Viewing details of query execution
  • Saving the query
  • Downloading the result set of the query

2. Using a SQL Client

Using a SQL client to connect to the cluster involves the following steps:

  • Installing the SQL Client tools and drivers
  • Getting the connection string
  • Connecting the SQL workbench to the cluster
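A SQL client such as SQL Workbench/J connects through a JDBC URL of the form `jdbc:redshift://endpoint:port/database`. The helper below assembles one; the endpoint shown is a made-up example, and you would copy the real one from your cluster's connection details in the console.

```python
def redshift_jdbc_url(endpoint: str, port: int, database: str) -> str:
    """Build a Redshift JDBC connection string."""
    return f"jdbc:redshift://{endpoint}:{port}/{database}"

# Example with a placeholder endpoint and the defaults used in this guide.
url = redshift_jdbc_url(
    "examplecluster.abc123xyz.us-west-2.redshift.amazonaws.com", 5439, "dev"
)
print(url)
```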

Step 6: Loading sample data from Amazon S3

Right now you are connected to a database named dev. The next steps are creating tables, loading data into those tables, and trying out a query. Here are the steps you need to follow:

  • Create tables

See the Amazon Redshift Database Developer Guide for the syntax of CREATE TABLE statements.

  • Use the COPY command for loading the sample data from Amazon S3.

To load the data, you can provide either key-based or role-based authentication.
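A role-based COPY statement follows the pattern `COPY table FROM 's3://…' IAM_ROLE 'arn'`. The sketch below builds one; the table name, bucket path, account ID, and delimiter are placeholders for illustration.

```python
def build_copy(table: str, s3_path: str, iam_role_arn: str) -> str:
    """Build a role-authenticated COPY statement for Amazon Redshift."""
    return (
        f"COPY {table} FROM '{s3_path}' "
        f"IAM_ROLE '{iam_role_arn}' "
        "DELIMITER '|';"
    )

# Placeholder values; substitute your own table, bucket, and role ARN.
sql = build_copy(
    "sales",
    "s3://my-sample-bucket/sales/",
    "arn:aws:iam::123456789012:role/myRedshiftRole",
)
print(sql)
```

You would run the resulting statement from the Query editor or your SQL client while connected to the dev database.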

  • To review the queries, open the Amazon Redshift console.

Joydip Kumar

Solution Architect

Joydip is passionate about building cloud-based applications and has been providing solutions to various multinational clients. As a Java programmer and an AWS-certified cloud architect, he loves to design, develop, and integrate solutions. Amidst his busy work schedule, Joydip loves to spend time writing blogs and contributing to the open-source community.