What is Amazon Redshift? How to use it?

  • by Joydip Kumar
  • 30th Aug, 2019
  • Last updated on 11th Mar, 2021
  • 8 mins read

Amazon Web Services is a cloud platform with more than 165 fully-featured services. From startups to large enterprises to government agencies, millions of customers use AWS to power their infrastructure at a lower cost. Amazon Redshift brings the same model to big data analytics and data warehousing.

It is built around a columnar data store that can hold billions of rows and process them in parallel, and it is one of the fastest-growing services AWS offers. But what exactly is Amazon Redshift? At the fundamental level, it is a combination of two technologies – column-oriented storage (a columnar data store) and MPP (massively parallel processing).

What is a column-oriented database?

This type of database management system stores data in sections of columns instead of rows. It is mainly used in big data, analytics, and data warehouse applications. Other benefits of using a column-oriented database are that the need for joins is reduced and queries are resolved quickly.

Row-oriented databases are not very efficient at these analytical operations. Columnar databases flip the dataset on its side, which makes such operations easy to perform. Amazon Redshift is an affordable, fast, and easy way to get this kind of workload up and running.

What is Massively Parallel Processing (MPP)?

This means that a large number of computers or processors perform computations simultaneously, in parallel. As with EC2 and other AWS services, using Amazon Redshift involves deploying a cluster; deploying a single server or node is not possible in Redshift. The cluster has a leader node followed by compute nodes. Depending on the distribution key you specify for a table, its data is spread across the cluster, optimizing the cluster's ability to resolve queries. 


What is Amazon Redshift?

This is a data warehouse service that uses MPP and column orientation to handle data warehousing, ELT, big data, and analytics workloads. It is a linearly scalable database system that is easy, fast, and cheap to run. You can start with a couple of hundred gigabytes of data and scale up to petabytes, which helps you acquire insights for your organization.

If you haven’t used Amazon Redshift before, you should start with the following guides:

  • Amazon Redshift Management Overview – for an overview of Amazon Redshift.
  • Service Highlights and Pricing – for its pricing, highlights, and value proposition.
  • Amazon Redshift Getting Started – how to create a cluster and a database, upload data, and test queries.
  • Amazon Redshift Cluster Management Guide – for creating and managing clusters.
  • Amazon Redshift Database Developer Guide – for designing, building, querying, and maintaining databases.
You can manage clusters interactively with the AWS Command Line Interface or the Amazon Redshift console. If you want to manage clusters programmatically, you can use the AWS Software Development Kit or the Amazon Redshift Query API.
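As a rough sketch of the programmatic route, the snippet below assembles the kind of parameter set you would hand to the SDK, for example boto3's Redshift client via `boto3.client("redshift").create_cluster(**params)`. The identifier, password, and role ARN here are placeholders, not real resources, and the helper function is illustrative, not part of any AWS API.

```python
# Sketch only: build the keyword arguments for creating a small cluster
# programmatically. In practice you would pass this dict to the SDK's
# create_cluster call; every value below is a placeholder.

def build_cluster_params(identifier, iam_role_arn, num_nodes=2):
    """Assemble parameters for a two-node dc2.large cluster."""
    return {
        "ClusterIdentifier": identifier,
        "NodeType": "dc2.large",
        "NumberOfNodes": num_nodes,
        "MasterUsername": "awsuser",
        "MasterUserPassword": "ChangeMe123",  # placeholder, never hardcode
        "DBName": "dev",
        "Port": 5439,
        "IamRoles": [iam_role_arn],
    }

params = build_cluster_params(
    "examplecluster", "arn:aws:iam::123456789012:role/myRedshiftRole"
)
print(params["NodeType"], params["Port"])
```

These values mirror the ones used in the Quick launch walkthrough later in this article, so the console and SDK paths produce an equivalent cluster.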

Amazon Redshift was built to handle large-scale datasets and database migrations. It is based on an older version of PostgreSQL (8.0.2). A preview beta was released in November 2012, and the full release of Redshift followed three months later, on 15th February 2013. Redshift has more than 6,500 deployments, making it one of the largest cloud data warehouse installed bases. 

Amazon's APN Partner program lists a number of vendors whose tools have been tested with Redshift, such as Actuate Corporation, Qlik, Looker, Logi Analytics, IBM Cognos, InetSoft, and Actian.

Using Amazon Redshift over traditional data warehouses will offer you the following benefits:

  1. It uses techniques such as MPP architecture and distributed SQL operations to achieve a high level of query performance.
  2. With just a simple API call or a few clicks in the AWS Management Console, you can scale Amazon Redshift.
  3. Managed services provided by Redshift, such as upgrades, patches, and automatic data backups, make monitoring and managing the warehouse easier.
  4. Tasks like creating a cluster and defining its size, underlying node type, and security profile can be done through the AWS Management Console or a simple API call in no time.
  5. It saves you time and resources by making it easy to load data into Redshift.
  6. Redshift is among the fastest data warehouse architectures; Amazon claims it is up to 10x faster than Hadoop.
  7. Because Redshift is based on PostgreSQL, it works with familiar SQL tools through standard JDBC and ODBC drivers.
  8. Like other AWS services, Redshift is a cost-effective solution that gives companies the flexibility to manage their data warehousing costs.
  9. When you are working with sensitive data, you need tools in your data warehouse to lock the data down. Redshift offers security and encryption features such as VPC for network isolation.

Data types used in Amazon RedShift

Every value used in Amazon Redshift has a data type with a fixed set of associated properties. The data type constrains the values that a given argument or column can contain. You declare data types when creating a table. The following data types are used in Amazon Redshift tables:

Data Type          Aliases                              Description
SMALLINT           INT2                                 Signed two-byte integer
INTEGER            INT, INT4                            Signed four-byte integer
BIGINT             INT8                                 Signed eight-byte integer
DECIMAL            NUMERIC                              Exact numeric of selectable precision
REAL               FLOAT4                               Single precision floating-point number
DOUBLE PRECISION   FLOAT8, FLOAT                        Double precision floating-point number
BOOLEAN            BOOL                                 Logical Boolean (true/false)
CHAR               CHARACTER, NCHAR, BPCHAR             Fixed-length character string
VARCHAR            CHARACTER VARYING, NVARCHAR, TEXT    Variable-length character string with a user-defined limit
DATE               (none)                               Calendar date (year, month, day)
TIMESTAMP          TIMESTAMP WITHOUT TIME ZONE          Date and time (without time zone)
TIMESTAMPTZ        TIMESTAMP WITH TIME ZONE             Date and time (with time zone)
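To see several of these types in context, here is a sketch of a CREATE TABLE statement. The table and column names are made up for illustration; the DISTKEY and SORTKEY clauses show how Redshift lets you control how rows are distributed across compute nodes and ordered on disk.

```python
# Sketch: a Redshift CREATE TABLE statement exercising several of the
# data types from the table above. Names are hypothetical examples.
create_sales = """
CREATE TABLE sales (
    sale_id    BIGINT        NOT NULL,
    store_id   INTEGER       NOT NULL,
    amount     DECIMAL(12,2),
    is_online  BOOLEAN,
    sku        VARCHAR(32),
    sold_at    TIMESTAMP
)
DISTKEY (store_id)   -- spreads rows across nodes by store
SORTKEY (sold_at);   -- orders rows on disk by sale time
"""
print(create_sales.strip())
```

You would run this DDL through the Query editor or any SQL client connected to the cluster.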

How to Get Started with Amazon Redshift?

Steps to Set Up Redshift

The following steps will help you in setting up a Redshift instance, loading data, and running basic queries on the dataset.

Step 1: Prerequisites

To get started with Amazon Redshift, you need to have the following prerequisites:

  1. Signing up for AWS: Visit http://portal.aws.amazon.com/billing/signup and follow the instructions. During the sign-up process, you will receive a phone call and be asked to enter a verification code.
  2. Determining firewall rules: This includes specifying the port you will use when launching the Redshift cluster. To allow access, you will have to create an inbound ingress rule. If your client system is behind a firewall, you need an open port that you can use, so that SQL client tools can connect to the cluster and run queries.

Step 2: Creating an IAM role

Your cluster needs permission to access data and resources. AWS Identity and Access Management (IAM) is used to grant those permissions. To do this, you can either provide an IAM user's AWS access key or use an IAM role that is attached to the cluster. Creating an IAM role safeguards your AWS access credentials and protects your sensitive data. Here are the steps you need to follow:

  1. Open up the IAM console by signing into the AWS Management Console.
  2. Select Roles from the navigation pane and select Create role.
  3. Choose Redshift option from the AWS Service group.
  4. Under Select your use case, choose Redshift – Customizable, then select Next: Permissions.
  5. On the Attach permissions policies page, select the AmazonS3ReadOnlyAccess policy.
  6. For Set permissions boundary, keep the default setting, then select Next: Tags.
  7. On the Add Tags page, optionally add tags. Then select Next: Review.
  8. Enter a name for the role in Role name, such as myRedshiftRole.
  9. Select Create Role after reviewing the information.
  10. Select the role that you just created.
  11. Copy the Role ARN and save it; you will use this value when loading data.
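Behind the scenes, the console steps above attach a trust policy that allows the Redshift service to assume the role. The sketch below builds an equivalent trust-policy document; the structure is standard IAM, and only the context (that this is what the wizard produces) is an assumption.

```python
import json

# Sketch: a trust policy allowing Amazon Redshift to assume the role.
# The console's "Redshift - Customizable" use case generates an
# equivalent document when you create the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "redshift.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}
print(json.dumps(trust_policy, indent=2))
```

If you prefer the CLI or SDK, this JSON is what you would pass as the role's assume-role policy document.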

Step 3: Launching a Sample Amazon Redshift Cluster

Before you launch the cluster, remember that it will be live, and you will be charged the standard usage fee until you delete it. Here is what you need to do to launch an Amazon Redshift cluster:

  • Open the Amazon Redshift console by signing in to the AWS Management Console.
  • From the main menu, select a region from where you will be creating the cluster.
  • Select Quick launch cluster from the Amazon Redshift Dashboard.
  • You will be taken to the Cluster specifications page, where you need to select Launch cluster after entering the following values:
    • dc2.large – Node type
    • 2 – Number of compute nodes
    • examplecluster – Cluster identifier
    • awsuser – Master user name
    • A password of your choice – Master user password
    • 5439 – Database port
    • myRedshiftRole – Available IAM roles

Quick launch creates a default database named dev.

  • Cluster creation takes a few minutes, after which a confirmation page appears. To return to the list of clusters, select Close.
  • You will be redirected to the Clusters page, where you can select the cluster that was just launched. Make sure that the database health is good and the cluster status is available before connecting to the database.
  • Click Modify cluster. Select the VPC security groups to associate with the cluster, then select Modify. Before continuing to the next step, ensure that the VPC security groups are displayed in the Cluster properties.

Step 4: Authorizing access to the cluster

You must configure a security group to authorize access before you can connect to the cluster. Follow the steps below if you used the EC2-VPC platform to launch the cluster:

  • Open the Amazon Redshift console and select Clusters in the navigation pane.
  • Make sure that you are on the Configuration tab, then select examplecluster.
  • Select your security group under Cluster properties.
  • After the security group opens in the Amazon EC2 console, select the Inbound tab.
  • Select Edit, then Add Rule, and choose Save after entering the following:
    • Custom TCP Rule – Type
    • TCP – Protocol
    • The same port number used when launching the cluster – Port Range
    • Custom, then 0.0.0.0/0 – Source
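The same ingress rule can be expressed programmatically. The sketch below shows the rule as the parameters you might pass to an SDK call such as boto3's EC2 `authorize_security_group_ingress`; the group ID is a placeholder. Note that 0.0.0.0/0 opens the port to the entire internet, so in practice you should restrict CidrIp to your own address range.

```python
# Sketch: the inbound rule from step 4 as an SDK parameter set.
# "sg-0123456789abcdef0" is a placeholder security group ID.
ingress_params = {
    "GroupId": "sg-0123456789abcdef0",
    "IpProtocol": "tcp",
    "FromPort": 5439,   # same port the cluster was launched on
    "ToPort": 5439,
    "CidrIp": "0.0.0.0/0",  # wide open; narrow this in production
}
print(ingress_params["IpProtocol"], ingress_params["FromPort"])
```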

Step 5: Connecting to the cluster and running queries

To use the Amazon Redshift cluster to query databases, you have the following two options:

1. Using the Query Editor

You need permission to access the Query editor. To enable access, attach the AmazonRedshiftQueryEditor and AmazonRedshiftReadOnlyAccess IAM policies to the AWS IAM user you use to access the cluster. Here is how you can do that:

  • Open up the IAM console.
  • Select Users and then choose the user that requires access.
  • Select Add permissions and then Attach existing policies directly.
  • Choose AmazonRedshiftReadOnlyAccess and AmazonRedshiftQueryEditor for Policy names.
  • Select Next: Review and, finally, select Add permissions.

Using the Query editor, you can perform the following tasks:

  • Running SQL commands
  • Viewing details of query execution
  • Saving the query
  • Downloading the result set of the query

2. Using a SQL Client

Connecting to the cluster with a SQL client involves the following steps:

  • Installing the SQL Client tools and drivers
  • Getting the connection string
  • Connecting the SQL workbench to the cluster
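The connection string follows the standard Redshift JDBC form, `jdbc:redshift://endpoint:port/database`. The sketch below builds one; the endpoint host is a made-up example, and your real endpoint is shown on the cluster's configuration page in the console.

```python
# Sketch: building a Redshift JDBC connection string.
# The host below is a hypothetical endpoint for illustration.
def jdbc_url(host, port=5439, database="dev"):
    """Return a Redshift JDBC URL in the standard form."""
    return f"jdbc:redshift://{host}:{port}/{database}"

url = jdbc_url("examplecluster.abc123xyz.us-west-2.redshift.amazonaws.com")
print(url)
```

Paste the resulting URL into SQL Workbench (or any JDBC client), together with the master user name and password you chose at launch.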

Step 6: Loading sample data from Amazon S3

Right now you are connected to a database named dev. Next comes creating tables, uploading data to those tables, and trying a query. Here are the steps you need to follow:

  • Create tables

Study the Amazon Redshift Database Developer Guide for the syntax of the CREATE TABLE statement. 

  • Use the COPY command for loading the sample data from Amazon S3.

To load the data, you can use either key-based or role-based authentication.

  • To review the queries, open the Amazon Redshift console. 
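The steps above can be sketched as a role-based COPY command. The bucket, file, and role ARN below are placeholders; the IAM_ROLE clause is the role-based alternative to supplying access keys, and it uses the Role ARN you copied when creating the IAM role.

```python
# Sketch: assembling a Redshift COPY command that loads data from S3
# using IAM-role authentication. All identifiers are placeholders.
def build_copy(table, s3_path, role_arn, delimiter="|"):
    """Return a COPY statement with role-based authentication."""
    return (
        f"COPY {table} FROM '{s3_path}' "
        f"IAM_ROLE '{role_arn}' "
        f"DELIMITER '{delimiter}';"
    )

cmd = build_copy(
    "sales",
    "s3://my-sample-bucket/tickit/sales_tab.txt",
    "arn:aws:iam::123456789012:role/myRedshiftRole",
)
print(cmd)
```

Run the generated statement from the Query editor or your SQL client while connected to the dev database.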

Joydip Kumar

Solution Architect

Joydip is passionate about building cloud-based applications and has been providing solutions to various multinational clients. Being a java programmer and an AWS certified cloud architect, he loves to design, develop, and integrate solutions. Amidst his busy work schedule, Joydip loves to spend time on writing blogs and contributing to the opensource community.


Website : https://geeks18.com/

Join the Discussion

Your email address will not be published. Required fields are marked *

Suggested Blogs

A Glimpse Of The Major Leading SAFe® Versions

A Quick view of SAFe® Agile has gained popularity in recent years, and with good reason. Teams love this approach that allows them to get a value to the customer faster while learning and adjusting to change as needed. But teams often don’t work in isolation. Many teams work in the context of larger organizations.  Often Agile doesn’t fit their needs. Some teams need an Agile approach that scales to larger projects that involve multiple teams.   It’s possible to do this. That’s where the Scaled Agile Framework, or SAFe®, can help.Why SAFe® is the best scalable framework?The Scaled Agile Framework is a structured Agile approach for large enterprises. It’s prescriptive and provides a path for interdependent teams to gain the benefits of using an Agile approach.Scaled Agile provides guidance not only at the team level but also at the Program and Portfolio levels. It also has built-in coordinated planning across related teams who are working in Release Trains.These planning increments allow teams to plan together to work with customers and release value frequently in a way that’s sustainable to teams.And it supports continuous improvement.It’s a great way for large companies to maintain structure and roll out Agile at a large scale.  What is SAFe® 4.5? Scaled Agile, otherwise known as SAFe®, was initially released in 2011 by Dean Leffingwell as a knowledge base for enterprises to adopt Agile. Over the years it has grown and evolved. SAFe® 4.5 was released on June 22, 2017, to accommodate improvements to the framework. Following are some of the key improvements in SAFe® 4.5:Essential SAFe® and ConfigurabilityInnovation with Lean Startup and Lean UXScalable DevOps and Continuous DeliveryImplementation roadmapBenefits of SAFe® 4.5 to companies:Organizations who adopt SAFe® 4.5 will be able to gain the following benefits:1) Test ideas more quickly. SAFe® 4.5 has a build-in iterative development and testing. 
This lets teams get faster feedback to learn and adjust more quickly.2) Deliver much faster. The changes to SAFe® 4.5 allow teams to move complex work through the pipeline and deliver value to the customer faster.3) Simplify governance and improve portfolio performance. Guidance and support have been added at the Portfolio level to guide organizations in addressing Portfolio-level concerns in a scaled agile context. SAFe® 4.5 - Key areas of improvements:A. Essential SAFe® and ConfigurabilityFour configurations of SAFe® that provide a more configurable and scalable approach:Essential SAFe®: The most basic level that teams can use. It contains just the essentials that a team needs to get the benefits of SAFe®.Portfolio SAFe®: For enterprises that implement multiple solutions that have portfolio responsibilities such as governance, strategy, and portfolio funding.Large Solution: Complex solutions that involve multiple Agile Release Trains. These initiatives don’t require Portfolio concerns, but only include the Large Solution and Essential SAFe® elements.  SAFe® Full SAFe®: The most comprehensive level that can be applied to huge enterprise initiatives requiring hundreds of people to complete.Because SAFe® is a framework, that provides the flexibility to choose the level of SAFe® that best fits your organization’s needs.B. Innovation with Lean Startup and Lean UXRather than creating an entire project plan up-front, SAFe® teams focus on features. They create a hypothesis about what a new feature will deliver and then use an iterative approach to develop and test their hypothesis along the way. As teams move forward through development, they perform this development and test approach repeatedly and adjust as needed, based on feedback. Teams also work closely with end users to identify the Minimum Viable Product (MVP) to focus on first. They identify what will be most valuable to the customer most immediately. 
Then they rely on feedback and learning as they develop the solution incrementally. They adjust as needed to incorporate what they’ve learned into the features. This collaboration and fast feedback and adjustment cycle result in a more successful product.  C. Scalable DevOps & Continuous DeliveryThe addition of a greater focus on DevOps allows teams to innovate faster. Like Agile, DevOps is a mindset. And like Agile, it allows teams to learn, adjust, and deliver value to users incrementally. The continuous delivery pipeline allows teams to move value through the pipeline faster through continuous exploration, continuous integration, continuous deployment, and released on demand. DevOps breaks down silos and supports Agile teams to work together more seamlessly. This results in more efficient delivery of value to the end users faster. It’s a perfect complement to Scaled Agile.D. Implementation RoadmapSAFe® now offers a suggested roadmap to SAFe® adoption. While change can be challenging, the implementation roadmap provides guidance that can help with that organizational change.Critical Role of the SAFe® Program ConsultantSAFe® Program Consultants, or SPCs, are critical change agents in the transition to Scaled Agile.Because of the depth of knowledge required to gain SPC certification, they’re perfectly positioned to help the organization move through challenges of change.They can train and coach all levels of SAFe® participants, from team members to executive leaders. 
They can also train the Scrum Master, Product Owners, and Agile Release Train Engineers, which are critical roles in SAFe®.The SPC can also train teams and help them launch their Agile Release Trains (ARTs).And they can support teams on the path to continued improvement as they continue to learn and grow.The SPC can also help identify value streams in the organization that may be ready to launch Agile Release Trains.The can also help develop rollout plans for SAFe® in the enterprise.Along with this, they can provide important communications that help the enterprise understand the drivers and value behind the SAFe® transition.       How SAFe® 4.5 is backward compatible with SAFe® 4.0?Even if your organization has already adopted SAFe® 4.0, SAFe® 4.5 has been developed in a way that can be easily adopted without disruption. Your organization can adopt the changes at the pace that works best.Few Updates in the new courseware The courseware for SAFe® 4.5 has incorporated changes to support the changes in SAFe® 4.5.They include Implementing SAFe®, Leading SAFe®, and SAFe® for Teams.Some of the changes you’ll see are as follows:Two new lessons for Leading SAFe®Student workbookTrainer GuideNew look and feelUpdated LPM contentSmoother lesson flowNEW Course Delivery Enablement (CDE) Changes were made to improve alignment between SAFe® and Scrum:Iteration Review: Increments previously known as Sprints now have reviews added. This allows more opportunities for teams to incorporate improvements. Additionally, a Team Demo has been added in each iteration review. This provides more opportunity for transparency, sharing, and feedback.Development Team: The Development team was specifically identified at the team level in SAFe® 4.5. The development team is made up of three to nine people who can move an element of work from development through the test. 
This development team contains software developers, testers, and engineers, and does not include the Product Owner and Scrum Master. Each of those roles is shown separately at the team level in SAFe® 4.5.Scrum events: The list of scrum events are shown next to the ScrumXP icon and include Plan, Execute, Review, and Retro (for a retrospective.)Combined SAFe® Foundation Elements SAFe® 4.0 had the foundational elements of Core Values, Lean-Agile Mindset, SAFe® Principles, and Implementing SAFe® at a basic level.SAFe® 4.5 adds to the foundation elements by also including Lean-Agile Leaders, the Implementation Roadmap, and the support of the SPC in the successful implementation of SAFe®.Additional changes include: Communities of Practice: This was moved to the spanning palette to show support at all levels: team, program, large solution, and portfolio.Lean-Agile Leaders: This role is now included in the foundational level. Supportive leadership is critical to a successful SAFe® adoption.SAFe® Program Consultant: This role was added to the Foundational Layer. The SPC can play a key leadership role in a successful transition to Scaled Agile.Implementation Roadmap: The implementation roadmap replaces the basic implementation information in SAFe® 4.0. It provides more in-depth information on the elements to a successful enterprise transition to SAFe®.Benefits of upgrading to SAFe® 4.5With the addition of Lean Startup approaches, along with a deeper focus on DevOps and Continuous Delivery, teams will be situated to deliver quality and value to users more quickly.With improvements at the Portfolio level, teams get more guidance on Portfolio governance and other portfolio levels concerns, such as budgeting and compliance.  Reasons to Upgrade to SAFe® 4.5 Enterprises who’ve been using SAFe® 4.0 will find greater flexibility with the added levels in SAFe® 4.5. 
Smaller groups in the enterprise can use the team level, while groups working on more complex initiatives can create Agile Release Trains with many teams.

Your teams can innovate faster by using the Lean Startup approach. Work with end users to identify the Minimum Viable Product (MVP), then iterate as you get fast feedback and adjust. This also makes your customer more of a partner in development, resulting in better collaboration and a better end product.

Get features and value to your user community faster with DevOps and the Continuous Delivery pipeline. Your teams can continuously hypothesize, build, measure, and learn to continuously release value. This also allows large organizations to innovate more quickly.

Most Recent Changes in the SAFe® Series: SAFe® 4.6

Because Scaled Agile continues to improve, new changes have been incorporated in SAFe® 4.6, with the addition of five core competencies that enable enterprises to respond to technology and market changes:

Lean Portfolio Management: How to use a Lean-Agile approach to portfolio strategy, funding, and governance.

Business Solutions and Lean Systems: Optimizing activities to implement large, complex initiatives using a Scaled Agile approach while still addressing necessary activities such as designing, testing, deployment, and even retiring old solutions.

DevOps and Release on Demand: The skills needed to release value as needed through a continuous delivery pipeline.

Team and Technical Agility: The skills needed to establish successful teams who consistently deliver value and quality to meet customer needs.

Lean-Agile Leadership: How leadership enables a successful agile transformation by supporting empowered teams in implementing agile practices.
Leaders carry out the Agile principles and practices and ensure teams have the support they need to succeed.

SAFe® Agilist (SA) Certification Exam

The SAFe® Agilist certification is for the change leaders in an organization to learn about SAFe® practices that support change at the team, program, and portfolio levels. These change agents can play a positive role in an enterprise transition to SAFe®. In order to become certified as a SAFe® Agilist (SA), you must first take the Leading SAFe® class and pass the SAFe® certification exam. To learn more, see this article on How To Pass the Leading SAFe® 4.5 Exam.

SAFe® Certification Exam: KnowledgeHut provides Leading SAFe® training in multiple locations. Check the site for locations and dates.

SAFe® Agile Certification Cost: Check KnowledgeHut’s scheduled training offerings to see the course cost. Each course includes the exam fee in the cost.

Scaled Agile Framework Certification Cost: There are multiple levels of SAFe® certification, including Scrum Master, Release Train Engineer, and Product Owner. Courses range in cost, but each includes the chance to sit for the corresponding SAFe® certification exam.

SAFe® Classes: SAFe® classes are offered by various organizations. To see if KnowledgeHut is offering SAFe® training near you, check the SAFe® training schedule on our website.

Training

KnowledgeHut provides multiple Scaled Agile courses to give both leaders and team members in your organization the information they need for a successful transition to Scaled Agile. Check the site for the list of classes to find those that are right for your organization as you make the journey. All course fees cover examination costs for certification.

SAFe® 4.5 Scrum Master with SSM Certification Training

Learn the core competencies of implementing Agile across the enterprise, along with how to lead high-performing teams to deliver successful solutions.
You’ll also learn how to implement DevOps practices. Completing this course will prepare you to obtain your SAFe® 4 Scrum Master certificate.

SAFe® 4 Advanced Scrum Master (SASM)

This two-day course teaches you how to apply Scrum at the enterprise level and prepares you to lead high-performing teams in a Scaled Agile environment. At course completion, you’ll be prepared to manage interactions not only on your team but also across teams and with stakeholders. You’ll also be prepared to take the SAFe® Advanced Scrum Master exam.

Leading SAFe® 4.5 Training Course (SA)

This two-day Leading SAFe® class prepares you to become a Certified SAFe® 4 Agilist, ready to lead the agile transformation in your enterprise. By the end of this course, you’ll be able to take the SAFe® Agilist (SA) certification exam.

SAFe® 4.5 for Teams (SP)

This two-day course teaches Scrum fundamentals, principles, tools, and processes. You’ll learn about the software engineering practices needed to scale agile and deliver quality solutions in a Scaled Agile environment. Teams new to Scaled Agile will find value in this course. Attending the class prepares you for the certification exam to become a certified SAFe® 4 Practitioner (SP).

DevOps Foundation Certification Training

This course teaches you the DevOps framework, along with the practices that prepare you to apply its principles in your work environment. Completing this course will also prepare you to take the DevOps Foundation exam for certification.

How Start Ups Can Benefit From Cloud Computing?

From nebulous beginnings, the cloud has grown into a platform that has gained universal acceptance and is transforming businesses across industries. Companies that have adopted cloud technology have seen significant payoffs, with cloud-based tools redefining their data storage, data sharing, marketing, and project management capabilities. The easy availability of affordable cloud infrastructure has made it so easy to set up new businesses that the economy is all set for a start-up boom which has its head, so to speak, in the cloud!

With the advent of this new technology, complete newbies in the market are able to hold their own against established market players, achieving an amazing quantum of work with skeleton manpower resources. Recently, a popular ad doing the rounds on TV showed a long-haired youth conducting business from a cafe on his HP Pavilion laptop, where he is ridiculed by some well-heeled, middle-aged businessmen on their coffee break. Back at their office, they find that this youngster is the new investor their boss has been heaping accolades on. “Where’s your office?” one of them asks the young man, only to be laughingly told that he carries his entire office in his laptop! And that, typically, is how the new-age start-up business looks. We have heard many stories of how a clever idea has turned a tidy profit for a smart entrepreneur working out of his laptop.

While cloud computing is pushing the boundaries of science and innovation into a new realm, it is also laying the foundation for a new wave of business start-ups. New ventures in general suffer from a lack of infrastructure, manpower, and funding, and all three of these concerns are categorically addressed by the cloud. Moving to the cloud minimizes the need for huge capital investments in expensive infrastructure. For nascent entrepreneurs, physical hardware and server costs used to be formidable given the limited budgets at their disposal.
Seed money was also required to hire office space, promote the business, and hire workers. Today, thanks to cloud technology, getting a new business off the ground and running costs virtually nothing. Most of the resources and tools that new ventures need are available on the cloud at minimal cost, quite often at zero cost, making this a powerful value proposition for small businesses. A cloud hosting provider such as AWS can enable you to go live immediately, and will scale up to your requirements as your business expands. Small businesses can think and dream big with the cloud.

When it comes to manpower, it takes just a handful of people to work wonders using the online resources at their disposal. If you have a brilliant idea and a workable plan for execution, you can comfortably compete neck and neck with market leaders. The messaging sensation WhatsApp was started in 2009 by just two former Yahoo employees who leveraged the power of the internet, which goes to show that clever use of technology can completely eliminate the need for a sizeable manpower pool.

Start-ups have always been more agile than their large-scale counterparts, and the cloud helps them take this a step further. Resources can be scaled up or down in no time, whereas in traditional environments it would have taken many days, considerable planning, and funds to add hardware and software.

Cloud computing also improves collaboration across teams, often across geographies. Data sharing is instantaneous, and teams can work on a task together in real time regardless of their location. Powered by the cloud, small businesses operate with shoestring budgets and key players on different continents. All their accounting, client data, marketing, and other business-critical files can be stored online and accessed from anywhere.
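The elasticity described above, where resources are scaled up or down as demand changes, can be sketched as a simple threshold policy. The following is an illustrative toy in Python, not any cloud provider’s actual API; the function name, thresholds, and limits are all assumptions for the sake of the example:

```python
# Illustrative sketch only: a toy threshold-based autoscaling rule.
# Real services (e.g. AWS Auto Scaling) use comparable policies, but with
# richer metrics, cooldown periods, and target tracking.

def desired_instances(current: int, cpu_utilization: float,
                      scale_up_at: float = 0.75, scale_down_at: float = 0.25,
                      minimum: int = 1, maximum: int = 10) -> int:
    """Return the instance count a simple threshold policy would target."""
    if cpu_utilization > scale_up_at:
        target = current + 1          # under load: add capacity
    elif cpu_utilization < scale_down_at:
        target = current - 1          # idle: release capacity to save cost
    else:
        target = current              # steady state: no change
    return max(minimum, min(maximum, target))

if __name__ == "__main__":
    print(desired_instances(2, 0.90))  # busy: scales up to 3
    print(desired_instances(2, 0.10))  # idle: scales down to 1
```

The point for a start-up is that this decision runs in seconds and costs nothing, whereas the traditional equivalent, buying and racking a new server, took days and capital.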
These online tools can be accessed and utilised instantly, and underpin all the crucial processes on which these businesses thrive. Strategic financial decisions are made after garnering insights from cloud-based accounting software. E-invoicing helps settle bills in a fraction of the time of traditional billing systems, and client queries are answered quickly through cloud-based management systems, saving precious time and raising customer satisfaction to an all-time high. Whether at home, on vacation, or on the phone, business owners can oversee sales, replenish products, and plan new sales strategies. That’s a whole new way of doing business, and it seems to be very successful!

An estimate by Cloudworks put the anticipated cloud computing market at over $200 billion by the year 2018. As Jeff Weiner, CEO of LinkedIn, succinctly put it, the cloud “makes it easier and cheaper than ever for anyone anywhere to be an entrepreneur and to have access to all the best infrastructure of innovation.” With cloud technology rapidly levelling the playing field between nascent and established businesses, it is anybody’s guess just how many new start-ups will burst onto the scene in the next few years.

We hope this blog has helped you gain a clear understanding of the importance of cloud computing. To learn more about what cloud computing has to offer, take a look at our other blogs and the AWS certifications we offer, or enrol in the AWS Certification Training course by KnowledgeHut.

Business Transformation through Enterprise Cloud Computing

The Cloud Best Practices Network is an industry solutions group and best-practices catalogue of how-to information for Cloud Computing. While we cover all aspects of the technology, our primary goal is to explain the enabling relationship between this new IT trend and business transformation. Our materials include:

Core Competencies: The mix of new skills and technologies required to successfully implement new Cloud-based IT applications.

Reference Documents: The core articles that define what Cloud Computing is and what the best practices are for implementation, predominantly referring to the NIST schedule of information.

Case Studies: Best practices derived from analysis of pioneer adopters, such as the State of Michigan and their ‘MiCloud‘ framework. Read the article ‘Make MiCloud Your Cloud‘ as an introduction to Cloud and business transformation capability.

e-Guides: These package up collections of best-practice resources directed towards a particular topic or industry. For example, our GovCloud.info site specializes in Cloud Computing for the public sector.

White Papers: Educational documents from vendors and other experts, such as the IT Value mapping paper from VMware.

Core Competencies

The mix of new skills and technologies required to successfully implement new Cloud-based IT applications, and the new capabilities that these platforms make possible:

- Virtualization
- Cloud Identity and Security
- Cloud Privacy
- Cloud 2.0
- Cloud Configuration Management
- Cloud Migration Management
- DevOps
- Cloud BCP
- ITaaS Procurement

Cloud Identity and Security

Cloud Identity and Security best practices (CloudIDSec) provide a comprehensive framework for ensuring the safe and compliant use of Cloud systems.
This is achieved by combining a focus on the core references for Cloud Security, the Cloud Security Alliance, with those of Cloud Identity best practices:

- IDaaS: Identity Management 2.0
- Federated Identity Ecosystems

Cloud Privacy

A common critical focus area for Cloud computing is data privacy, particularly with regard to the international aspects of Cloud hosting. Cloud Privacy refers to the combination of technologies and legal frameworks that ensure the privacy of personal information held in Cloud systems, and a ‘Cloud Privacy-by-Design’ process can then be used to identify the locally legislated privacy requirements of information. Tools for designing these types of privacy controls have been developed by global privacy experts such as Ann Cavoukian, the current Privacy Commissioner for Ontario, who stipulates a range of ‘Cloud Privacy by Design‘ best practices. The Privacy by Design Cloud Computing Architecture document (26-page PDF) provides a base reference for how to combine traditional PIAs (Privacy Impact Assessments) with Cloud Computing. As the Privacy Framework presentation explains, the regulatory mechanisms that Kantara enables can then provide the foundations for securing information in a manner that encompasses all the legacy, privacy, and technical requirements needed to make it suitable for e-Government scenarios.

Cloud 2.0

Cloud is as much a business model as it is a technology, and this model is best described through the term ‘Cloud 2.0′. As the saying goes, a picture tells a thousand words, and Cloud 2.0 represents the intersection between social media, Cloud computing, and crowdsourcing.
The Social Cloud

In short, Cloud 2.0 marries the emergent online world of Twitter, LinkedIn et al., and the technologies powering them, with the traditional back-end world of mainframe systems, mini-computers, and legacy data centres of all shapes and sizes. “Socializing” these applications means moving them ‘into the Cloud’ in the sense of connecting them into this social data world, as much as it means virtualizing the applications to run on new hardware. This is a simple but powerful mix that can act as a catalyst for an exciting new level of business process capability. It can provide a platform for modernizing business processes in a significant and highly innovative manner, a breath of fresh air that many government agency programs are crying out for. Government agencies operate many older technology platforms for their services, making it difficult to amend them for new ways of working and, in particular, to connect them to the web for self-service options.

Crowdsourcing

Social media encourages better collaboration between users and information, and tools for open data and back-end legacy integration can pull the transactional systems information needed to make this functional and valuable. Crowdsourcing is a distributed problem-solving and production process that involves outsourcing tasks to a network of people, also known as the crowd. Although not a component of the technologies of Cloud Computing, crowdsourcing is a fundamental concept inherent to the success of the Cloud 2.0 model. The commercial success of migration to Cloud Computing will be amplified when there is a strong focus on the new Web 2.0 type business models that the technology is ideal for enabling.

Case Study: Peer to Patent

One such example is the White House Peer to Patent portal, a headline example of Open Government, led by one of its keynote experts, Beth Noveck.
This project illustrates the huge potential for business transformation that Cloud 2.0 offers. It’s not just about migrating data-centre apps to a Cloud provider, connecting an existing IT system to a web interface, or publishing Open Data reports online, but rather about utilizing the nature of the web to entirely reinvent the core process itself. It’s about moving the process into the Cloud.

In a 40-page Harvard white paper, Beth describes how the US Patent Office had built up a backlog of over one million patent applications due to a ‘closed’ approach where only staff from the USPTO could review, contribute to, and decide upon applications. To address this bottleneck, she migrated the process to an online, open version where contributors from across multiple organizations could help move an application through the process via open-participation web site features.

Peer to Patent is a headline example of the power of Open Government because it demonstrates that Open Government is about far more than simply publishing reporting information online in an open manner so that the public can inspect data like procurement spending numbers. Rather, it’s about changing the core decision-making processes entirely, reinventing how Government itself works from the inside out, from a centralized hierarchical monolith to an agile, distributed peer-to-peer network. In essence, it transforms the process from ‘closed’ to ‘open’ in terms of who can participate and how, utilizing the best practice of ‘Open Innovation‘ to break the gridlock that had occurred due to the constraints of private, traditional ways of working.

Open Grantmaking: Sharing Cloud Best Practices

Beth has subsequently advised on how these principles can be applied across Government. For example, in an article on her own blog she describes ‘Open Grantmaking‘: how the Peer to Patent crowdsourcing model might be applied to the workflows for government grant applications.
She touches on the important factor about these new models: their ability to accelerate continual improvement within organizations through repeatedly sharing and refining best practices: “In practice, this means that if a community college wins a grant to create a videogame to teach how to install solar panels, everyone will have the benefit of that knowledge. They will be able to play the game for free. In addition, anyone can translate it into Spanish or Russian or use it as the basis to create a new game to teach how to do a home energy retrofit.” In another blog, Beth describes how Open Grantmaking might be utilized to improve community investing, enabling more transparency and related improvements. As the underlying technology, Cloud 2.0 caters for both the hosting of the software and the social media 2.0 features that enable the cross-enterprise collaboration that Beth describes.

Cloud Configuration Management

CCM is the best practice for change and configuration management within Cloud environments, illustrated through vendors such as Evolven.

Problem Statement

One of the key goals and perceived benefits of Cloud computing is a simplified IT environment: a reduction of complexity through virtualizing applications into a single overall environment. However, complexity actually increases. Virtual Machines (VMs) encapsulate application and infrastructure configurations; they package up a combination of applications and their settings, obscuring this data from traditional configuration management tools. Furthermore, the ease of self-service creation of VMs results in their widespread proliferation, so the adoption of Cloud technologies actually creates the need for a new, extra dimension of systems management.
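The configuration-drift problem described above can be illustrated with a toy comparison of two VM configuration snapshots. This is a minimal sketch in Python with hypothetical setting names, not how any specific CCM product (such as Evolven) actually works:

```python
# Minimal sketch of configuration-drift detection between two snapshots.
# The keys and values are hypothetical; real CCM tools compare thousands
# of parameters across applications, OS, and infrastructure layers.

def config_drift(baseline: dict, current: dict) -> dict:
    """Return settings that were added, removed, or changed vs. the baseline."""
    added   = {k: current[k] for k in current.keys() - baseline.keys()}
    removed = {k: baseline[k] for k in baseline.keys() - current.keys()}
    changed = {k: (baseline[k], current[k])
               for k in baseline.keys() & current.keys()
               if baseline[k] != current[k]}
    return {"added": added, "removed": removed, "changed": changed}

if __name__ == "__main__":
    baseline = {"java_heap": "2g", "tls": "1.2", "log_level": "INFO"}
    current  = {"java_heap": "4g", "tls": "1.2", "debug": "on"}
    print(config_drift(baseline, current))
```

A report like this, produced automatically across every VM, is the kind of visibility that self-service VM proliferation otherwise takes away.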
This extra dimension is called CCM, and it incorporates Release and Incident Management. The increased complexity makes troubleshooting technical problems more difficult, and thus requires an updated set of tools as well as updates to best practices like the use of ITIL procedures. ‘Release into production’ is a particularly sensitive process within software teams, as major upgrades and patches are transitioned from test to live environments. Any number of configuration-related errors could cause the move to fail, so CCM software delivers the core competency of identifying and resolving these issues more quickly, reducing the MTTR significantly.

DevOps

DevOps is a set of principles, methods, and practices for communication, collaboration, and integration between software development and IT operations. Through the implementation of a shared Lean adoption program and QMS (Quality Management System), the two groups can work together to minimize downtime while improving the speed and quality of software development. It is therefore directly linked to Business Agility: higher speed and quality mean a faster ability to react to market changes, deploy new products and processes, and in general adapt the organization, achieved by increasing the frequency of ‘release events’.

ITaaS Procurement

The fundamental shift that Cloud Computing represents is illustrated in one key implementation area: procurement. Moving to Cloud services means changing from a financial model where you buy your own hardware and software and pay for it up front, to an approach where you access it as a rental, utility service where you “Pay As You Go” (PAYG).
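The shift from up-front purchase to PAYG can be made concrete with a toy cost comparison. All figures below are hypothetical illustrations for the sake of the arithmetic, not any provider’s real pricing:

```python
# Toy comparison of up-front (capex) vs pay-as-you-go (PAYG) cost models.
# All prices are hypothetical illustrations, not real provider rates.

def upfront_cost(hardware: float, months: int, monthly_upkeep: float) -> float:
    """Buy-your-own model: pay for hardware at once, plus upkeep every month."""
    return hardware + months * monthly_upkeep

def payg_cost(hours_used: float, rate_per_hour: float) -> float:
    """Utility model: pay only for the hours actually consumed."""
    return hours_used * rate_per_hour

if __name__ == "__main__":
    # A small team running a server 8 hours a day for a year (~2,920 hours):
    print(upfront_cost(hardware=5000, months=12, monthly_upkeep=100))  # 6200.0
    print(payg_cost(hours_used=2920, rate_per_hour=0.25))              # 730.0
```

The utility model wins here precisely because the workload is intermittent; for a machine running flat out around the clock, the comparison can tip the other way, which is why ITaaS procurement decisions hinge on usage patterns rather than sticker prices.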
To encompass all the different ‘as a Service’ models, this is known at an overall level as ‘ITaaS’: IT as a Service. Any type of IT can be virtualized and delivered via this service model.

By now, I hope you have gained a clear understanding of how businesses transform through enterprise Cloud computing. If this article has helped you clear your fundamentals and you wish to learn more about Cloud computing by getting certified, you can take the AWS certification course offered by KnowledgeHut.