Test Drive Your First Istio Deployment using Play with Kubernetes Platform- Cloud Computing

As a full-stack developer, if you have been spending a lot of time developing apps recently, you already understand a whole new set of challenges related to microservice architecture. Although there has been a shift from bloated monolithic apps to compact, focused microservices for faster implementation and improved resiliency, developers now have to worry about the challenges of integrating these services in distributed systems, which includes accounting for service discovery, load balancing, registration, fault tolerance, monitoring, routing, compliance, and security.

Let us understand the challenges faced by developers and operators with the microservice architecture in detail. Consider a simple 1st-generation service mesh scenario. As shown below, Service (A) communicates with Service (B). Instead of communicating directly, the request gets routed via Nginx. Nginx looks up a route in Consul (a service discovery tool) and automatically retries the connection when an HTTP 502 occurs.
Figure 1.0 – 1st Gen Service Mesh
Figure 1.1 – Cascading failure demonstrated with the increase in the number of services

With the advent of microservices architecture, the number of services keeps growing. Below are the challenges encountered by both developers and operations teams:

  • How to make these growing microservices communicate with each other?
  • Enabling load-balancing architectures over these microservices.
  • Providing role-based routing for the microservices.
  • How to manage outgoing traffic on these microservices and test canary deployments?
  • Managing complexity around these growing pieces of microservices.
  • Implementing fine-grained control of traffic behavior with rich routing rules.
  • Implementing traffic encryption, service-to-service authentication, and strong identity assertions.

In a nutshell, although you could build service discovery and retry logic into the application or networking middleware, service discovery is tricky to get right.

Enter Istio’s Service Mesh

“Service Mesh” is one of the hottest buzzwords of 2018. As the name suggests, it’s a configurable infrastructure layer for a microservices app. It lays out the network of microservices that make up applications and enables interactions between them. It makes communication between service instances flexible, reliable, and fast. The mesh provides service discovery, load balancing, encryption, authentication and authorization, support for the circuit breaker pattern, and other capabilities.

Istio is a completely open-source service mesh that layers transparently onto existing distributed applications. Istio v1.0 was announced last month and is ready for production. It is written entirely in Go, and it is a fully grown platform that provides APIs to integrate with any logging, telemetry, or policy system. The project adds very little overhead to your system and is hosted on GitHub. Istio’s diverse feature set lets you successfully, and efficiently, run a distributed microservice architecture, and provides a uniform way to secure, connect, and monitor microservices.

Figure-1.2: Istio’s Capability


What benefits does Istio bring?

  • Istio lets you connect, secure, control, and observe services.
  • It helps to reduce the complexity of service deployments and eases the strain on your development teams.
  • It provides developers and DevOps fine-grained visibility and control over traffic without requiring any changes to application code.
  • It provides CIOs with the tools needed to help enforce security and compliance requirements across the enterprise.
  • It provides behavioral insights & operational control over the service mesh as a whole.
  • Istio makes it easy to create a network of deployed services with automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic.
  • It provides fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection.
  • It enables a pluggable policy layer and configuration API supporting access controls, rate limits and quotas.
  • Istio provides automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.
  • It provides secure service-to-service communication in a cluster with strong identity-based authentication and authorization.

If you want to deep-dive into Istio architecture, I highly recommend the official Istio website.
It’s Demo Time !!!

In this blog post, I will showcase how Istio can be set up on the Play with Kubernetes (PWK) platform free of cost. In case you’re new to it, Play with Kubernetes, aka PWK, is a labs site provided by Docker. It is a playground that allows users to run K8s clusters in a matter of seconds. It gives the experience of having a free CentOS Linux virtual machine in the browser. Under the hood, Docker-in-Docker (DinD) is used to give the effect of multiple VMs/PCs.

  • Open https://labs.play-with-k8s.com/ to access the Kubernetes Playground.
  • Click on the Login button to authenticate with a Docker Hub or GitHub ID.
  • Once you start the session, you will have your own lab environment.

Adding First Kubernetes Node

Click on “Add New Instance” on the left to build your first Kubernetes cluster node. It is automatically named “node1”. Each instance has Docker Community Edition (CE) and kubeadm pre-installed. This node will serve as the master node for our cluster.
Bootstrapping the Master Node

You can bootstrap the Kubernetes cluster by initializing the master node (node1) with the below script. Copy the script content into a bootstrap.sh file and make it executable using the “chmod +x bootstrap.sh” command.
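The script body did not survive this page’s formatting, so here is a minimal sketch of what a typical PWK bootstrap.sh contains. The advertise-address flag, the kubeconfig copy, and the kube-router add-on URL are assumptions based on common Play-with-Kubernetes setups, not the author’s exact script:

```shell
#!/bin/bash
# Hypothetical bootstrap.sh for a Play-with-Kubernetes master node.
# A sketch of the usual flow, not the author's exact script.

# Initialize the control plane, advertising the node's own IP address
kubeadm init --apiserver-advertise-address "$(hostname -i)"

# Point kubectl at the new cluster
mkdir -p "$HOME/.kube"
cp /etc/kubernetes/admin.conf "$HOME/.kube/config"

# Install a pod network add-on (PWK labs commonly use kube-router)
kubectl apply -f \
  "https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml"
```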
When you execute this script, as part of initialization, kubeadm writes several required configuration files, sets up RBAC, and deploys the Kubernetes control plane components (such as kube-apiserver, kube-dns, kube-proxy, and etcd). The control plane components are deployed as Docker containers.
Copy the kubeadm join command from the output and save it for the next step. This command will be used to join other nodes to your cluster.

Adding Worker Nodes
Click on “Add New Instance” to add a new worker node, then run the saved kubeadm join command on it.
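On each worker, the join step boils down to running the command captured from the master’s kubeadm init output. A sketch, where the endpoint, token, and hash are placeholders rather than real values:

```shell
# Run on the new worker instance; substitute the values printed by kubeadm init.
kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```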
Checking the Cluster Status

Verifying the running Pods
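The original screenshots are not reproduced here, but both checks can be performed on the master with standard kubectl commands:

```shell
# All nodes should eventually report a "Ready" status
kubectl get nodes -o wide

# The control-plane pods in kube-system should all be "Running"
kubectl get pods --all-namespaces
```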
Installing Istio 1.0.0

Istio is deployed in a separate Kubernetes namespace, istio-system; we will verify this later. For now, copy the below content into a file called install_istio.sh and save it. Make it executable and run it to install Istio and the related tools.
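The script body was lost in this page’s formatting. A sketch of what install_istio.sh typically contains for Istio 1.0.0 follows; the download URL and the demo-profile manifest path follow the Istio 1.0 release layout, but treat them as assumptions rather than the author’s exact script:

```shell
#!/bin/bash
# Hypothetical install_istio.sh — a sketch, not the author's exact script.

# Download and unpack the Istio 1.0.0 release
curl -L https://github.com/istio/istio/releases/download/1.0.0/istio-1.0.0-linux.tar.gz | tar xz
cd istio-1.0.0

# Make istioctl available for later steps
export PATH="$PWD/bin:$PATH"

# The demo profile installs the control plane plus Prometheus, Grafana,
# Jaeger, ServiceGraph and Zipkin into the istio-system namespace
kubectl apply -f install/kubernetes/istio-demo.yaml
```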
You should see the screen flooded with output as the components install.

As shown above, it enables Prometheus, ServiceGraph, Jaeger, Grafana, and Zipkin by default.

Please note – While executing this script, it might end up with the below error message –

unable to recognize "install/kubernetes/istio-demo.yaml": no matches for admissionregistration.k8s.io/, Kind=MutatingWebhookConfiguration

The error message is expected.

As soon as the command completes, you should see a long list of ports displayed at the top center of the page.
Verifying the Services
Exposing the Services

To expose the Prometheus, Grafana, and ServiceGraph services, you will need to delete the existing services and recreate them using NodePort instead of ClusterIP, so that each service can be accessed via the port displayed at the top of the instance page (as shown below).
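The post deletes and recreates each service; an equivalent shortcut, sketched below, is to patch the service type in place. The service names assume the istio-demo install:

```shell
# Switch the dashboards from ClusterIP to NodePort so PWK publishes their ports
kubectl -n istio-system patch svc grafana      -p '{"spec": {"type": "NodePort"}}'
kubectl -n istio-system patch svc prometheus   -p '{"spec": {"type": "NodePort"}}'
kubectl -n istio-system patch svc servicegraph -p '{"spec": {"type": "NodePort"}}'

# Confirm the node ports that were assigned
kubectl -n istio-system get svc grafana prometheus servicegraph
```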
You should be able to access the Grafana page by clicking on port “30004” and the Prometheus page by clicking on “30003”.
You can check Prometheus metrics by selecting the desired option as shown below:
Under the Grafana page, you can add a “Data Source” for Prometheus and ensure that the dashboard is up and running:
Congratulations! You have installed Istio on a Kubernetes cluster. The below-listed services have been installed on the K8s playground:

  • Istio Controllers and related RBAC rules
  • Istio Custom Resource Definitions
  • Prometheus and Grafana for Monitoring
  • Jaeger for Distributed Tracing
  • Istio Sidecar Injector (we'll take a look in the next section)


Installing Istioctl

Istioctl is the configuration command-line utility of Istio. It helps to create, list, modify, and delete configuration resources in the Istio system.
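Assuming the istio-1.0.0 release was unpacked in the current directory during installation (an assumption, since the original commands were lost), putting istioctl on the PATH and smoke-testing it looks roughly like this:

```shell
# Path assumes the istio-1.0.0 download from the install step
export PATH="$PWD/istio-1.0.0/bin:$PATH"

istioctl version               # verify the client runs
istioctl get virtualservices   # list routing rules (empty on a fresh install)
```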
Deploying the Sample BookInfo Application

Now that Istio is installed and verified, you can deploy one of the sample applications provided with the installation: BookInfo. This is a simple mock bookstore application made up of four services that provide a web product page, book details, reviews (with several versions of the review service), and ratings, all managed using Istio.

Deploying BookInfo Services
Defining the Ingress Gateway
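A sketch of both steps, using the sample manifests shipped with the Istio 1.0.0 release. Manual sidecar injection via istioctl kube-inject is one common approach; the manifest paths follow the 1.0 release layout and are assumptions, since the original commands were not preserved:

```shell
cd istio-1.0.0

# Deploy the four BookInfo services with Envoy sidecars injected
kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml)

# Expose the application through the Istio ingress gateway
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
```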

Verifying BookInfo Application

Accessing it via Web URL
You should now be able to see the BookInfo sample as shown below:
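A rough sketch of the verification and access steps; the jsonpath expression assumes the standard istio-ingressgateway service created by the demo install:

```shell
# The productpage, details, ratings and reviews services should be running
kubectl get pods
kubectl get services

# Find the HTTP node port of the ingress gateway, then fetch the product page
PORT=$(kubectl -n istio-system get svc istio-ingressgateway \
    -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
curl -s "http://127.0.0.1:${PORT}/productpage" | grep -o "<title>.*</title>"
```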

I hope this Istio deployment tutorial helped you successfully install Istio on Kubernetes. In a future blog post, I will deep-dive into Istio's internal architecture, traffic management, policies, and telemetry in detail.


Ajeet Singh Raina

Blog Author

Ajeet Singh Raina is a Docker Captain and a {code} Catalyst by DellEMC. He is currently working as a Technical Lead Engineer in the Enterprise Solution Group at Dell R&D. He has over 10 years of solid understanding of a diverse range of IT infrastructure, systems management, systems integration, and quality assurance. He is a frequent blogger at www.collabnix.com and has contributed 150+ blogs on new and upcoming Docker releases and features. His personal blog attracts thousands of visitors and tons of page views every month. His areas of interest include Docker Swarm Mode, IoT, and legacy applications & cloud.


Website : https://www.docker.com/captains/ajeet-singh-raina


Suggested Blogs

Delivering Messages Made Easy With Azure Service Bus

Integrating two different systems is often complicated and comes with many challenges with respect to the availability of both systems, processing speed, scaling, and more. Among the many recommendations for designing and developing applications for the cloud, enabling asynchronous communication between multiple services plays a vital role in achieving the reliability, scalability, and efficiency of the system.

What are Message Queues?

Message queues are the solution to the challenges faced during integration in distributed systems. They are an efficient way of enabling asynchronous communication between different software services. Following are the three most important benefits a queuing solution comes with:

1. Decoupling: Message queues provide persistent storage and asynchronous communication, so the availability of one service does not impact another. They can work in a disconnected fashion.
2. High reliability: Message queues use transactions to manage messages and help roll back a transaction to recover a message in case of a failure.
3. Granular scalability: Message queues help achieve granular scalability, where the producer or consumer can scale independently without impacting the other.

Azure Service Bus – A Managed Queuing System on Azure Cloud

Azure Service Bus is a highly scalable service that helps achieve asynchronous messaging and data exchange among decoupled systems. Moreover, since it is a Platform as a Service (PaaS) offering from Microsoft, you don’t have to manage the infrastructure and configuration; the Azure cloud manages all this for you.

The most important feature of Azure Service Bus queues is that they guarantee messages are delivered in FIFO order, which many other queuing solutions, even Azure Storage Queues, fail to provide. This makes Service Bus a more suitable choice than many other message queues, though not the only choice. Other features include high availability, auditing, geo-redundancy, etc.

Azure Service Bus has three offerings:

1. Service Bus Queues: The queue is an optimum choice when we are implementing one-directional messaging and want to ensure that only one consumer can fetch each message. This is generally used when the producer and the consumer are not online at the same point in time. All messages sent by the producer are stored in the queue until consumed by the consumer or expired, and each message in the queue is identified by a unique Message-ID. Queues assume that a message needs to be consumed by only one service. In practical scenarios, however, one message might need to be delivered to multiple consumers based on business decisions, or broadcast. To meet those requirements, Service Bus has a different offering: Topics.

2. Topics and Subscriptions: Topics also provide one-directional communication. However, they work on the publish-subscribe principle, where the same message can be consumed by more than one consumer. A single topic may have multiple subscriptions associated with it. A subscription is somewhat like a queue: when the topic receives a message, it delivers it to all the relevant subscriptions, or distributes it based on the subscription filters.

3. Relays: Unlike queues and topics, relays provide bi-directional communication. Relays do not support brokered messaging, i.e., they don’t store any messages but simply pass the message from one service to the other. Therefore, both the publisher and the subscriber need to be active at the same point in time. Relays are automatically created and deleted in a Service Bus namespace, i.e., they need not be created beforehand or deleted after use.

Azure Service Bus Architecture

The Azure Service Bus architecture is depicted in the figure below, along with some important limits and quotas.

Advanced Features of Azure Service Bus

Azure Service Bus also has advanced features that can help you solve the most complex messaging problems. The key features are listed below:

1. Dead lettering: Service Bus provides a dead-letter subqueue to store messages that could not be delivered or processed. Dead-letter queues can be used to move expired or poisoned messages from the parent queue; those messages can then be retrieved for further investigation. Dead-letter queues need not be created manually; they are automatically created with the queue.
2. Transactions: Service Bus provides transactions to group multiple operations into one execution scope. This ensures that all the operations within a group either succeed or fail together.
3. Duplicate detection: Enabling duplicate detection helps identify duplicate messages based on the unique message ID. A duplicate message could be added by an application on restart, on unexpected failure, or in unhandled exception scenarios. Such messages need not be handled manually by the application, because Service Bus handles them automatically.
4. Batch processing: The batch-processing feature of Azure Service Bus helps to add and retrieve messages in batches instead of one message at a time. This helps systems that have to process bulk messages.
5. Sessions: Sometimes messages are bigger than the maximum message size of queues (say, more than 1 MB). Sessions help in such scenarios by sending the message in parts and allowing processing only when all the parts have been received at the consumer end.

Summary

Microsoft’s PaaS offering, Azure Service Bus, is really helpful in developing and implementing highly scalable services without worrying about infrastructure. It provides asynchronous communication and ensures greater reliability. Azure also lets you select from different options in Service Bus: for brokered, one-directional messaging we have Queues and Topics, and for non-persistent, bi-directional messaging we have Relays.

Monitoring AWS Machine using Azure Log Analytics - Cloud Computing

What is Azure Log Analytics?Logs make your life easier and help everyone in the organization understand what you do and what's Actually going on with the software. Ensure that you follow best practices of Azure Log Analytics for monitoring AWS machines effectively.Logs are created by network devices, applications, operating systems, and programmable or smart devices. They comprise several messages that are chronologically arranged and stored on a disk, in files, or in an application like a log collector.Firstly, we will try to understand why we need logs to monitor AWS services.We need logs because, with Azure Log Analytics, you can gather and search log data from all your resources, whether they reside on Azure, on another cloud host, or on-premises. Then you can transform the gathered data into rich analytics with AI-enhanced insights into your environment.  How Log Analytics worksThe log data from Virtual machines and other cloud resources, capture via some Agent which we need to install on VMs and Resource can be named as Connected Sources which further will be gathered in records and sent to OMS repository that is the part of Azure Cloud resource and stored in the Azure cloud.After receiving logs to log analytics it further is available for log search and other activity like Set Alerts, Dashboard View, Power BI View, and Export into Excel and Data files. 
Data collection in Azure Log AnalyticsData can be collected in azure Log Analytics in different ways –Agents on Windows and Linux virtual machines send telemetry from the guest operating system and applications to Log Analytics according to Data Sources that you configure.Connect a System Center Operations Manager management group to Log Analytics to collect data from its agents.Azure services such as Application Insights and Azure Security Center store their data directly in Log Analytics without any configuration.Write data from PowerShell command line or Azure Automation runbook using Log Analytics cmdlets.If you have custom requirements, then you can use the HTTP Data Collector API to write data to Log Analytics from any REST API client.Azure Monitor                   Data sources for events and performanceVirtual Machines               Data sources for events and performanceOperations Manager         Data from management group agentsApplication Insights           Application requests and exceptionsAzure Security Center       Security eventsPowerShell                        PowerShell command line or runbookData Collector API            Rest API for custom dataIncoming data automatically indexed. 
Data types and tables automatically created.Log AnalyticsData available through log search and smart analytics to multiple channels.Design and test queries and analyze data         AnalyticsVisualize data in Azure portal                            DashboardsWorkflows consuming Log Analytics data         Logic AppsAutomatically respond critical conditions           AlertsExport for visualization with other sources        Power BIPowerShell command line or runbook               PowerShellRest API for custom application                         Log Search APIWork flow of Log AnalyticsNow let's find out the workflow of log analytics like how it collects data, analyze, visualize and alerts.So first look into Collect part–The logs can be collected via: -Event LogsCustom App LogsIIS LogsCrash DumpsPerformance DataWe can also filter the type of logs by just checking and unchecking the log types.Once the collection has done we need to Analyse the data which can be done by: -Filter based on attributesAnalyze data with Kusto Query languageSort dataExport log data to Excel and Power BIConditional FilteringAfter that, we can Visualize all the logs in an attractive DashboardAlerts can be configured on the basis of event conditions like once the value will be reached greater than or less than the limit threshold it will automatically generate the alerts and take configured actions like sending main, sending a message and start runbook (part of azure automation).Steps to collect Data and log from AWS MachinesAWS VMs could be of Windows and Linux, so we can install the agent which can be download from azure log analytics page by just selecting the type of the OS and the Bit of the VMs.After installing the Agent we need to configure the agent by entering the workspace id and key which will be provided by the azure after configuring. 
The logs then appear in the portal within a few hours.

Supported AWS OS and Versions

Let's look at the VM operating systems and versions that are supported on AWS Cloud.

If you are using Windows:

- Windows Server 2008 Service Pack 1 (SP1) or later
- Windows 7 SP1 and later

If you are using Linux:

- Amazon Linux 2012.09 to 2015.09 (x86/x64)
- CentOS Linux 5, 6, and 7 (x86/x64)
- Oracle Linux 5, 6, and 7 (x86/x64)
- Red Hat Enterprise Linux Server 5, 6, and 7 (x86/x64)
- Debian GNU/Linux 6, 7, and 8 (x86/x64)
- Ubuntu 12.04 LTS, 14.04 LTS, and 16.04 LTS (x86/x64)
- SUSE Linux Enterprise Server 11 and 12 (x86/x64)

After verifying a supported configuration, we can install the agent and start receiving logs.

Kusto to Query AWS Machine Logs

Kusto is the analytics engine behind Log Analytics, optimized for ad-hoc big-data queries. The Kusto Query Language (KQL) is used to query data across Azure services.

Kusto query reference portal: https://docs.loganalytics.io

OMS (Operations Management Suite)

Microsoft Operations Management Suite (OMS), previously known as Azure Operational Insights, is a software-as-a-service platform that allows an administrator to manage on-premises and cloud IT assets from one console. Azure OMS provides four types of services:

- Log Analytics: monitor and analyze the availability and performance of different resources, including physical and virtual machines
- Automation: automate manual processes and enforce configurations for physical and virtual machines
- Backup: back up and restore critical data
- Site Recovery: provide high availability for critical applications

Management Solutions

Management solutions leverage Azure services to provide additional insight into the operation of a particular application or service. We can manually select a required solution and add it to the home page, where we then see only the logs of that solution type.

Reports

The final log reports can be exported to Excel and Power BI and displayed in table and chart format.

Conclusion

Azure Log Analytics is a very powerful tool for capturing different types of system logs. Kusto queries play a very important role in extracting insights from the log data, and custom reports prepared with Kusto can save an organization many man-hours. Detailed reports that are easy to export to Excel and Power BI keep troubleshooting and diagnosis handy, and reports can be embedded in any website with live-refreshing data using a code snippet generated within Power BI. Choose the right Azure logging service for AWS monitoring and use it to save man-hours and reduce the time spent on troubleshooting and diagnosis.
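To make the Kusto querying discussed above concrete, here is a minimal Python sketch that builds a request for the Log Analytics Query REST API. The workspace ID is a placeholder, and the sample query uses the standard Heartbeat table populated by the agent; actually sending the request would additionally require an Azure AD bearer token, which is out of scope for this sketch.

```python
import json

# Hypothetical workspace ID -- substitute your own from the Azure portal.
WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"

# Sample KQL: count distinct connected machines, grouped by operating system.
# 'Heartbeat' is a standard Log Analytics table populated by the agent.
KUSTO_QUERY = "Heartbeat | summarize agents = dcount(Computer) by OSType"


def build_query_request(workspace_id: str, query: str):
    """Build the URL and JSON body for the Log Analytics Query REST API.

    The real call is an authenticated POST with an Azure AD bearer token;
    here we only construct the request, we do not send it.
    """
    url = f"https://api.loganalytics.io/v1/workspaces/{workspace_id}/query"
    body = json.dumps({"query": query})
    return url, body


url, body = build_query_request(WORKSPACE_ID, KUSTO_QUERY)
print(url)
print(body)
```

The same query can be pasted directly into the Log Search blade in the portal; the REST form is useful when feeding the results into a custom application, as the Log Search API channel above suggests.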

Impact On IT Service Management From Cloud Computing

In IT Service Management, the ITIL V3 philosophy stresses that "IT is the business." This is truly realized for providers offering cloud computing services. A number of considerations affect Service Management processes when moving to a cloud services model.

Service Desk

These traditional ITIL Service Support processes are tightly linked. In the cloud computing model, high expectations of availability are part of the model's selling point, so rapid restoration of service becomes critical through the use of these processes and the Service Desk that performs them.

Change Management

In addition, Change Management workflow activities can sometimes be done best by the Service Delivery Architects. They are the ones who determine the rules used by the automation tools, rather than the Service Management team, who traditionally performed the tasks now being done by those tools.

Configuration and Asset Management

With the cloud service model's standardized infrastructure and specialized tool sets, configuration is typically much simpler than in an enterprise environment, with its extensive variety of hardware and software that must be orchestrated together. Many service-specific tools provide configuration capability for that service, reducing the amount of manual coordination required when compared to the Enterprise IT model. Asset management is related to configuration management and, in a cloud service, has both a virtual component (e.g., tracking virtual resources) and a dynamic component (i.e., assets can change every hour) to its management process. Configuration Management needs to address a consumer view (i.e., what assets belong to the service being consumed), a service view (since assets equal revenue), and an enterprise view (showing the business status of all cloud services being offered).
Service Level Management

In a cloud environment, a single SLM process can exist, but separate SLAs and Service Level Packages should be defined, monitored, and managed for each service. The monitoring components for SLA-related performance data will require tools that do not rely solely on knowledge of the infrastructure, given the unpredictability of which cloud infrastructure components will actually be used. Instead, monitoring service performance and availability for SLA compliance must be done from the perspective of the user, not from the perspective of the infrastructure.

Availability, Capacity, Continuity, and Security

The cloud environment's "provider-consumer" model breaks the link between IT continuity and the customer's business continuity. The cloud service provider must offer its customers a warranty of service continuity and make it part of the SLA that comprises the Service Level Packages offered by the provider. With a cloud service provider, the hardware and software environments are much more uniform and more easily managed. Because the provider's more homogeneous infrastructure offers a much more stable environment, the risks from a change to production are significantly reduced. With reduced risk, cloud service providers can deliver modifications to services much faster, so their agility becomes part of the business model and a distinguishing capability in the market. It also implies that the Release and Deployment Management (RDM) process is often replaced by a different paradigm. In the cloud model, scalability of capacity and performance is a core offering of cloud service providers, and their SLAs should reflect this; accomplishing such real-time scalability places significant demands on the service provider. For cloud services, availability is vital, and much of that availability must be architected into the service.
Where once the technical developers of a service could ignore the availability issues of their applications, leaving that to the IT organization, cloud service availability is a key factor in commercial success and hence must be built in by the service developers and service architects working with the IT organization. With a combination of tools and resiliency built into the design and implementation of the service itself, the responsibility for availability must shift into the development lifecycle, long before a service goes into production.