AWS serverless services, including AWS Lambda, AWS Glue, AWS Fargate, Amazon EventBridge, Amazon Athena, Amazon Simple Notification Service (Amazon SNS), Amazon Simple Queue Service (Amazon SQS), and Amazon Simple Storage Service (Amazon S3), have become the building blocks of the serverless data lake, providing key mechanisms to ingest and transform data without fixed provisioning or the ongoing burden of patching servers. Running a data lake in a serverless paradigm brings significant cost and performance benefits. However, the rapid adoption of serverless data lake architectures, with ever-growing datasets ingested from a variety of sources and fed into complex data transformation and machine learning (ML) pipelines, can present a monitoring challenge. Likewise, because application logs in Amazon CloudWatch are sourced from many participating services, tracing lineage across those logs can be difficult. To successfully manage a serverless data lake, you need mechanisms to perform the following actions:
- Reinforce data accuracy with every data ingestion
- Holistically measure and analyze ETL (extract, transform, and load) performance at the individual processing component level
- Proactively capture log messages and notify failures as they occur in near-real time
In this post, we walk you through a solution to efficiently track and analyze ETL jobs in a serverless data lake environment. By monitoring application logs, you can gain insight into job runs and troubleshoot issues promptly, ensuring the overall health and reliability of your data pipelines.
Overview of solution
The serverless monitoring solution focuses on achieving the following goals:
- Capture state changes across all steps and tasks in the data lake
- Measure service reliability across a data lake
- Quickly notify operations of failures as they happen
To illustrate the solution, we create a serverless data lake with a monitoring solution. For simplicity, we create a serverless data lake with the following components:
- Storage layer – Amazon S3 is the natural choice, in this case with the following buckets:
- Landing – Where raw data is stored
- Processed – Where transformed data is stored
- Ingestion layer – For this post, we use Lambda and AWS Glue for data ingestion, with the following resources:
- Lambda functions – Two Lambda functions that run to simulate a success state and failure state, respectively
- AWS Glue crawlers – Two AWS Glue crawlers that run to simulate a success state and failure state, respectively
- AWS Glue jobs – Two AWS Glue jobs that run to simulate a success state and failure state, respectively
- Reporting layer – An Athena database to persist the tables created via the AWS Glue crawlers and AWS Glue jobs
- Alerting layer – Slack is used to notify stakeholders
The serverless monitoring solution is designed as loosely coupled, plug-and-play components that complement an existing data lake. State changes for Lambda-based ETL tasks are tracked using AWS Lambda Destinations, with a single SNS topic routing both success and failure states. For AWS Glue-based tasks, we configure EventBridge rules to capture state changes; these events are routed to the same SNS topic. For demonstration purposes, this post covers state monitoring for Lambda and AWS Glue only, but you can extend the solution to other AWS services.
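To make the EventBridge side concrete, the following sketch shows an event pattern of the kind such a rule would use for AWS Glue job state changes, along with a simplified matcher to illustrate which events it captures. The pattern and matcher here are illustrative assumptions, not the exact code in the repo:

```python
# Hypothetical event pattern for an EventBridge rule that captures AWS Glue
# job state changes; the exact pattern shipped in the repo may differ.
GLUE_JOB_STATE_PATTERN = {
    "source": ["aws.glue"],
    "detail-type": ["Glue Job State Change"],
    "detail": {"state": ["SUCCEEDED", "FAILED", "TIMEOUT", "STOPPED"]},
}

def matches(event: dict, pattern: dict = GLUE_JOB_STATE_PATTERN) -> bool:
    """Simplified matcher: every field in the pattern must be satisfied."""
    return (
        event.get("source") in pattern["source"]
        and event.get("detail-type") in pattern["detail-type"]
        and event.get("detail", {}).get("state") in pattern["detail"]["state"]
    )
```

A rule with this pattern would forward both successful and failed Glue runs to the SNS topic, leaving the downstream subscriber to decide which ones warrant an alert.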
The following figure illustrates the architecture of the solution.
The architecture contains the following components:
- EventBridge rules – EventBridge rules that capture the state change for the ETL tasks—in this case AWS Glue tasks. This can be extended to other supported services as the data lake grows.
- SNS topic – An SNS topic that serves to catch all state events from the data lake.
- Lambda function – The Lambda function is the subscriber to the SNS topic. It’s responsible for analyzing the state of the task run to do the following:
- Persist the status of the task run.
- Notify any failures to a Slack channel.
- Athena database – The database where the monitoring metrics are persisted for analysis.
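As a sketch of that subscriber, the following hypothetical handler normalizes the two event shapes relayed through SNS (Lambda Destinations invocation records and AWS Glue state-change events) into a single status and flags failures for Slack notification. The field names and normalization logic are assumptions for illustration, not the repo's actual code:

```python
import json

def handler(event: dict, context=None) -> dict:
    """Sketch of the SNS-subscribed monitor function: extract the task state
    from each SNS record and flag failures for Slack notification."""
    results = []
    for record in event.get("Records", []):
        message = json.loads(record["Sns"]["Message"])
        # Glue events carry the state in detail.state; Lambda Destinations
        # invocation records carry it in requestContext.condition.
        detail = message.get("detail", {})
        status = detail.get("state") or message.get("requestContext", {}).get("condition", "UNKNOWN")
        results.append({
            "service": message.get("source", "aws.lambda"),
            "status": status,
            "notify_slack": status.upper() not in ("SUCCEEDED", "SUCCESS"),
        })
    return {"results": results}
```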
Deploy the solution
The source code to implement this solution uses AWS Cloud Development Kit (AWS CDK) and is available on the GitHub repo monitor-serverless-datalake. This AWS CDK stack provisions required network components and the following:
- Three S3 buckets (the bucket names are prefixed with the AWS account name and Region; for example, the landing bucket is
- Three Lambda functions:
- Two AWS Glue crawlers:
- Two AWS Glue jobs:
- An SNS topic named datalake-monitor-sns
- Three EventBridge rules:
- An AWS Secrets Manager secret named datalake-monitoring
- Athena artifacts:
- monitor database
- monitor-table table
You can also follow the instructions in the GitHub repo to deploy the serverless monitoring solution. It takes about 10 minutes to deploy this solution.
Connect to a Slack channel
We still need a Slack channel to which the alerts are delivered. Complete the following steps:
- Set up a workflow automation to route messages to the Slack channel using webhooks.
- Note the webhook URL.
The following screenshot shows the field names to use.
The following is a sample message for the preceding template.
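For reference, posting to a Slack incoming webhook is a plain HTTPS POST with a JSON body. The following is a minimal sketch, assuming a simple text-only message layout rather than the exact template above:

```python
import json
import urllib.request

def build_slack_payload(service: str, task: str, status: str) -> dict:
    """Shape the alert message; this layout is illustrative, not the
    solution's actual template."""
    return {"text": f":rotating_light: {service} task `{task}` finished with status *{status}*"}

def post_to_slack(webhook_url: str, payload: dict) -> int:
    """POST the payload to a Slack incoming webhook; returns the HTTP status."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```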
- On the Secrets Manager console, navigate to the datalake-monitoring secret.
- Add the webhook URL to the secret value.
Load sample data
The next step is to load some sample data. Copy the sample data files to the landing bucket using the following command:
In the next sections, we show how Lambda functions, AWS Glue crawlers, and AWS Glue jobs work for data ingestion.
Test the Lambda functions
On the EventBridge console, enable the rules that trigger the lambda-success and lambda-fail functions every 5 minutes:
After a few minutes, the failure events are relayed to the Slack channel. The following screenshot shows an example message.
Disable the rules after testing to avoid repeated messages.
Test the AWS Glue crawlers
On the AWS Glue console, navigate to the Crawlers page. Here you can start the following crawlers:
In a minute, the glue-crawler-fail crawler’s status changes to Failed, which triggers a notification in Slack in near-real time.
Test the AWS Glue jobs
On the AWS Glue console, navigate to the Jobs page, where you can start the following jobs:
In a few minutes, the glue-job-fail job status changes to Failed, which triggers a notification in Slack in near-real time.
Analyze the monitoring data
The monitoring metrics are persisted in Amazon S3, where they can be used for historical analysis.
On the Athena console, navigate to the monitor database and run the following query to find the service that failed the most often:
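The exact SQL depends on the monitor-table schema the stack creates; assuming hypothetical service and status columns, a query along these lines would rank services by failed runs:

```python
# The service and status column names are assumptions; check them against
# the monitor-table schema created by the CDK stack before running.
def failure_count_query(database: str = "monitor", table: str = "monitor-table") -> str:
    """Build the Athena SQL that ranks services by number of failed runs."""
    return (
        f'SELECT service, COUNT(*) AS failure_count '
        f'FROM "{database}"."{table}" '
        f"WHERE status = 'FAILED' "
        f'GROUP BY service '
        f'ORDER BY failure_count DESC'
    )
```

You can paste the resulting string into the Athena query editor, or pass it to the Athena API via a client such as boto3.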
Over time, as rich observability data accumulates, time series-based analysis of the monitoring data can yield interesting findings.
Clean up
The overall cost of running this solution is less than one dollar. To avoid future costs, make sure to clean up the resources created as part of this post.
Conclusion
This post provided an overview of a serverless data lake monitoring solution that you can configure and deploy to integrate with enterprise serverless data lakes in just a few hours. With this solution, you can monitor a serverless data lake, send alerts in near-real time, and analyze performance metrics for all ETL tasks operating in the data lake. The design was intentionally kept simple to demonstrate the idea; you can further extend this solution with Athena and Amazon QuickSight to generate custom visuals and reporting. Check out the GitHub repo for a sample solution and further customize it for your monitoring needs.
About the Authors
Virendhar (Viru) Sivaraman is a strategic Senior Big Data & Analytics Architect with Amazon Web Services. He is passionate about building scalable big data and analytics solutions in the cloud. Besides work, he enjoys spending time with family, hiking & mountain biking.
Vivek Shrivastava is a Principal Data Architect, Data Lake in AWS Professional Services. He is a big data enthusiast and holds 14 AWS certifications. He is passionate about helping customers build scalable and high-performance data analytics solutions in the cloud. In his spare time, he loves reading and finding new areas for home automation.