
Power data ingestion into Splunk using Amazon Data Firehose


Last updated: December 17, 2025

Originally published: December 18, 2017

Amazon Data Firehose supports Splunk Enterprise and Splunk Cloud as a delivery destination. This native integration between Splunk Enterprise, Splunk Cloud, and Amazon Data Firehose is designed to make AWS data ingestion setup seamless, while offering a secure and fault-tolerant delivery mechanism. We want to enable customers to monitor and analyze machine data from any source and use it to deliver operational intelligence and optimize IT, security, and business performance.

With Amazon Data Firehose, customers get a fully managed, reliable, and scalable data streaming solution to Splunk. In this post, we tell you a bit more about the Amazon Data Firehose and Splunk integration. We also show you how to ingest large amounts of data into Splunk using Amazon Data Firehose.

Push vs. pull data ingestion

Currently, customers use a combination of two ingestion patterns, based primarily on data source and volume, in addition to existing company infrastructure and expertise:

  1. Pull-based approach: Using dedicated pollers running the popular Splunk Add-on for AWS to pull data from various AWS services such as Amazon CloudWatch or Amazon S3.
  2. Push-based approach: Streaming data directly from AWS to the Splunk HTTP Event Collector (HEC) by using Amazon Data Firehose. Examples of applicable data sources include CloudWatch Logs and Amazon Kinesis Data Streams.

The pull-based approach offers data delivery guarantees such as retries and checkpointing out of the box. However, it requires more operational effort to manage and orchestrate the dedicated pollers, which commonly run on Amazon EC2 instances. With this setup, you pay for the infrastructure even when it's idle.

On the other hand, the push-based approach offers a low-latency, scalable data pipeline made up of serverless resources like Amazon Data Firehose sending directly to Splunk indexers (by using Splunk HEC). This approach translates into lower operational complexity and cost. However, if you need guaranteed data delivery, you have to design your solution to handle issues such as a Splunk connection failure or a Lambda execution failure. To do so, you might use, for example, AWS Lambda Dead Letter Queues.
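For instance, if a Lambda function in your push pipeline is invoked asynchronously (as it is by a CloudWatch Logs subscription), you can attach a Dead Letter Queue so failed invocations are not lost. The following is a minimal boto3 sketch; the function name and queue ARN are hypothetical placeholders, not part of this walkthrough.

import boto3

lambda_client = boto3.client("lambda")

# Route events from failed asynchronous invocations to an SQS queue so
# they can be inspected and replayed later. Function and queue names/ARNs
# are placeholders for illustration only.
lambda_client.update_function_configuration(
    FunctionName="cloudwatch-logs-to-splunk",  # hypothetical function name
    DeadLetterConfig={
        "TargetArn": "arn:aws:sqs:us-east-1:123456789012:splunk-ingest-dlq"
    },
)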

How about getting the best of both worlds?

Let's go over the integration's end-to-end solution and see how Amazon Data Firehose and Splunk together turn the push-based approach into a native AWS solution for applicable data sources.

By using a managed service like Amazon Data Firehose for data ingestion into Splunk, you get out-of-the-box reliability and scalability. One of the pain points of the previous approach was the overhead of managing the data collection nodes (Splunk heavy forwarders). With the Amazon Data Firehose to Splunk integration, there are no forwarders to manage or set up. Data producers (1) are configured through the AWS Management Console to drop data into Amazon Data Firehose.

You can also create your own data producers. For example, you can drop data into a Firehose stream by using Amazon Kinesis Agent, by using the Firehose API (PutRecord(), PutRecordBatch()), or by writing to a Kinesis data stream configured as the data source of a Firehose stream. For more details, refer to Sending Data to an Amazon Data Firehose Stream.
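If you write your own producer, a single call to the Firehose API is enough to get a record into the stream. The boto3 sketch below assumes a Firehose stream named FirehoseSplunkDeliveryStream (the name used later in this walkthrough); PutRecordBatch works the same way with a list of records.

import json
import boto3

firehose = boto3.client("firehose")

# Send one record to the Firehose stream. Splunk HEC receives whatever
# bytes you put in "Data", so a newline-terminated JSON event works well.
event = {"message": "hello from a custom producer", "level": "INFO"}
firehose.put_record(
    DeliveryStreamName="FirehoseSplunkDeliveryStream",
    Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
)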

You might need to transform the data before it goes into Splunk for analysis. For example, you might want to enrich it, or filter or anonymize sensitive data. You can do so by using AWS Lambda and enabling data transformation in Amazon Data Firehose. In this scenario, Amazon Data Firehose also decompresses the Amazon CloudWatch Logs data when you enable the decompression feature.
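A Firehose transformation function receives a batch of base64-encoded records and must return each record with the same recordId, a result of Ok, Dropped, or ProcessingFailed, and the re-encoded data. The masking logic below is only an illustration of that contract under the assumption of JSON payloads with a hypothetical source_ip field; it is not the blueprint code Firehose provides.

import base64
import json

def lambda_handler(event, context):
    """Minimal Firehose data-transformation handler: redact one field."""
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))

        # Example anonymization: drop a hypothetical sensitive field.
        payload.pop("source_ip", None)

        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(
                (json.dumps(payload) + "\n").encode("utf-8")
            ).decode("utf-8"),
        })
    return {"records": output}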

Systems fail all the time. Let's see how this integration handles outside failures to guarantee data durability. In cases when Amazon Data Firehose can't deliver data to the Splunk cluster, data is automatically backed up to an S3 bucket. You can configure this feature while creating the Firehose stream (2). You can choose to back up all data or only the data that failed during delivery to Splunk.

In addition to using S3 for data backup, this Firehose integration with Splunk supports Splunk indexer acknowledgments to guarantee event delivery. This feature is configured on Splunk's HTTP Event Collector (HEC) (3). It ensures that HEC returns an acknowledgment to Amazon Data Firehose only after data has been indexed and is available in the Splunk cluster (4).

Now let's look at a hands-on exercise that shows how to forward VPC flow logs to Splunk.

How-to guide

To process VPC flow logs, we implement the following architecture.

Amazon Virtual Private Cloud (Amazon VPC) delivers flow log records into an Amazon CloudWatch Logs group. Using a CloudWatch Logs subscription filter, we set up real-time delivery of CloudWatch Logs to an Amazon Data Firehose stream.

Data coming from CloudWatch Logs is gzip-compressed. To work with this compression, we enable decompression for the Firehose stream. Firehose then delivers the raw logs to the Splunk HTTP Event Collector (HEC).

If delivery to the Splunk HEC fails, Firehose deposits the logs into an Amazon S3 bucket. You can then ingest the events from S3 using an alternate mechanism such as a Lambda function.
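One option is a Lambda function triggered by the backup bucket that re-posts the failed events to HEC. The sketch below assumes the failed-delivery objects contain one JSON document per line with the original payload in a base64-encoded rawData field, and that the HEC endpoint and token are supplied through environment variables; verify the object format and field names in your own bucket before relying on this.

import base64
import json
import os
import urllib.request

import boto3

s3 = boto3.client("s3")
HEC_URL = os.environ["SPLUNK_HEC_URL"]      # e.g. https://host:8088/services/collector/event
HEC_TOKEN = os.environ["SPLUNK_HEC_TOKEN"]

def lambda_handler(event, context):
    """Replay Firehose delivery failures from S3 back into Splunk HEC."""
    for rec in event["Records"]:
        obj = s3.get_object(Bucket=rec["s3"]["bucket"]["name"],
                            Key=rec["s3"]["object"]["key"])
        for line in obj["Body"].read().decode("utf-8").splitlines():
            if not line.strip():
                continue
            failed = json.loads(line)
            # "rawData" holds the original record, base64-encoded (assumption).
            payload = base64.b64decode(failed["rawData"]).decode("utf-8")
            req = urllib.request.Request(
                HEC_URL,
                data=json.dumps({"event": payload}).encode("utf-8"),
                headers={"Authorization": f"Splunk {HEC_TOKEN}"},
            )
            urllib.request.urlopen(req)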

When data reaches Splunk (Enterprise or Cloud), Splunk parsing configurations (packaged in the Splunk Add-on for Amazon Data Firehose) extract and parse all fields. They make the data ready for querying and visualization using Splunk Enterprise and Splunk Cloud.

Walkthrough

Install the Splunk Add-on for Amazon Data Firehose

The Splunk Add-on for Amazon Data Firehose enables Splunk (be it Splunk Enterprise, Splunk App for AWS, or Splunk Enterprise Security) to use data ingested from Amazon Data Firehose. Install the add-on on all the indexers with an HTTP Event Collector (HEC). The add-on is available for download from Splunkbase. For troubleshooting help, refer to the AWS Data Firehose troubleshooting documentation and Splunk's official troubleshooting guide.

HTTP Event Collector (HEC)

Before you can use Amazon Data Firehose to deliver data to Splunk, set up the Splunk HEC to receive the data. From Splunk Web, go to the Settings menu, choose Data Inputs, and choose HTTP Event Collector. Choose Global Settings, ensure All Tokens is enabled, and then choose Save. Then choose New Token to create a new HEC endpoint and token. When you create a new token, make sure that Enable indexer acknowledgment is checked.

When prompted to select a source type, select aws:cloudwatchlogs:vpcflow.
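Before wiring up Firehose, it can help to confirm the token works. The snippet below posts a test event to the HEC endpoint; because indexer acknowledgment is enabled, the request also carries a channel identifier in the X-Splunk-Request-Channel header. The hostname and token are placeholders, and the default HEC port of 8088 is assumed.

import json
import urllib.request
import uuid

HEC_URL = "https://your-splunk-host:8088/services/collector/event"  # placeholder
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                   # placeholder

req = urllib.request.Request(
    HEC_URL,
    data=json.dumps({
        "event": "HEC smoke test",
        "sourcetype": "aws:cloudwatchlogs:vpcflow",
    }).encode("utf-8"),
    headers={
        "Authorization": f"Splunk {HEC_TOKEN}",
        # Required when indexer acknowledgment is enabled on the token.
        "X-Splunk-Request-Channel": str(uuid.uuid4()),
    },
)
print(urllib.request.urlopen(req).read().decode("utf-8"))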

Create an S3 backsplash bucket

To provide for situations in which Amazon Data Firehose can't deliver data to the Splunk cluster, we use an S3 bucket to back up the data. You can configure this feature to back up all data or only the data that failed during delivery to Splunk.

Note: S3 bucket names must be globally unique.

aws s3api create-bucket --bucket <your-bucket-name> --create-bucket-configuration LocationConstraint=<your-region>

Create an Amazon Data Firehose stream

On the AWS console, open the Amazon Data Firehose console, and choose Create Firehose stream.

Select Direct PUT as the source and Splunk as the destination.

Create Firehose Stream

If you are using Firehose to send CloudWatch Logs and want to deliver decompressed data to your Firehose stream destination, or want to use Firehose Data Format Conversion (Parquet, ORC) or dynamic partitioning, you need to enable decompression for your Firehose stream. For details, check out Deliver decompressed Amazon CloudWatch Logs to Amazon S3 and Splunk using Amazon Data Firehose.

Enter your Splunk HTTP Event Collector (HEC) information in the destination settings.

Firehose destination settings

Note: Amazon Data Firehose requires the Splunk HTTP Event Collector (HEC) endpoint to be terminated with a valid CA-signed certificate matching the DNS hostname used to connect to your HEC endpoint. You receive delivery errors if you are using a self-signed certificate.

In this example, we only back up logs that fail during delivery.

Backsplash S3 settings
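If you prefer to script the stream instead of using the console, the boto3 sketch below captures the same choices: a Direct PUT source, a Splunk destination with an indexer acknowledgment timeout, and backup of failed events only. The endpoint, token, role, and bucket values are placeholders to replace with your own.

import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="FirehoseSplunkDeliveryStream",
    DeliveryStreamType="DirectPut",
    SplunkDestinationConfiguration={
        "HECEndpoint": "https://your-splunk-host:8088",          # placeholder
        "HECEndpointType": "Raw",
        "HECToken": "00000000-0000-0000-0000-000000000000",       # placeholder
        "HECAcknowledgmentTimeoutInSeconds": 300,
        "RetryOptions": {"DurationInSeconds": 300},
        "S3BackupMode": "FailedEventsOnly",   # back up only failed events
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::YOUR-AWS-ACCT-NUM:role/FirehoseSplunkRole",
            "BucketARN": "arn:aws:s3:::your-backsplash-bucket",
        },
    },
)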

To monitor your Firehose stream, enable error logging. Doing this means that you can track record delivery errors. Create an IAM role for the Firehose stream by choosing Create new, or choose an existing IAM role.

Advanced settings for CloudWatch logging

You now get a chance to review and change the Firehose stream settings. When you are satisfied, choose Create Firehose stream.

Create a VPC flow log

To send events from Amazon VPC, you need to set up a VPC flow log. If you already have a VPC flow log you want to use, you can skip to the "Publish CloudWatch to Amazon Data Firehose" section.

On the AWS console, open the Amazon VPC service. Choose Your VPCs, and select the VPC you want to send flow logs from. Choose Flow Logs, and then choose Create flow log. If you don't have an IAM role that allows your VPC to publish logs to CloudWatch, choose Create and use a new service role.

VPC Flow Logs Settings
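The same step can be done programmatically. The sketch below assumes an existing log group and an IAM role that lets VPC Flow Logs write to CloudWatch Logs; the VPC ID, log group name, and role ARN are placeholders.

import boto3

ec2 = boto3.client("ec2")

ec2.create_flow_logs(
    ResourceType="VPC",
    ResourceIds=["vpc-0123456789abcdef0"],             # placeholder VPC ID
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="/vpc/flowlogs/FirehoseSplunkDemo",   # placeholder log group
    DeliverLogsPermissionArn=(
        "arn:aws:iam::YOUR-AWS-ACCT-NUM:role/VPCFlowLogsRole"  # placeholder role
    ),
)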

Once active, your VPC flow log should look like the following.

Flow logs

Publish CloudWatch to Amazon Data Firehose

When you generate traffic to or from your VPC, the log group is created in Amazon CloudWatch. We create an IAM role to allow CloudWatch to publish logs to the Amazon Data Firehose stream.

To allow CloudWatch to publish to your Firehose stream, you need to give it permissions.

$ aws iam create-role --role-name CWLtoFirehoseRole --assume-role-policy-document file://TrustPolicyForCWLToFireHose.json



Here is the content of TrustPolicyForCWLToFireHose.json.

{
  "Statement": {
    "Effect": "Allow",
    "Principal": { "Service": "logs.us-east-1.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }
}

Attach the policy to the newly created role.

$ aws iam put-role-policy \
    --role-name CWLtoFirehoseRole \
    --policy-name Permissions-Policy-For-CWL \
    --policy-document file://PermissionPolicyForCWLToFireHose.json

Here is the content of PermissionPolicyForCWLToFireHose.json.

{
    "Statement":[
      {
        "Effect":"Allow",
        "Action":["firehose:*"],
        "Resource":["arn:aws:firehose:us-east-1:YOUR-AWS-ACCT-NUM:deliverystream/FirehoseSplunkDeliveryStream"]
      },
      {
        "Effect":"Allow",
        "Action":["iam:PassRole"],
        "Resource":["arn:aws:iam::YOUR-AWS-ACCT-NUM:role/CWLtoFirehoseRole"]
      }
    ]
}

The new log group has no subscription filter, so set one up. Setting this up establishes a real-time data feed from the log group to your Firehose stream. Select the VPC flow log group and choose Actions. Then choose Subscription filters followed by Create Amazon Data Firehose subscription filter.

Subscription Filter option

Subscription filter details
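You can also create the subscription filter programmatically, using the CWLtoFirehoseRole created above. The log group name and account number below are placeholders; the filter pattern is left empty so that every log event is forwarded.

import boto3

logs = boto3.client("logs")

logs.put_subscription_filter(
    logGroupName="/vpc/flowlogs/FirehoseSplunkDemo",   # placeholder log group
    filterName="Destination",
    filterPattern="",                                  # empty pattern forwards everything
    destinationArn=(
        "arn:aws:firehose:us-east-1:YOUR-AWS-ACCT-NUM:"
        "deliverystream/FirehoseSplunkDeliveryStream"
    ),
    roleArn="arn:aws:iam::YOUR-AWS-ACCT-NUM:role/CWLtoFirehoseRole",
)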

Whether you create the subscription filter in the console or programmatically, you don't get a confirmation. To validate that your CloudWatch log group is subscribed to your Firehose stream, check the CloudWatch console.

As soon as the subscription filter is created, real-time log data from the log group flows into your Firehose stream. The stream then delivers it to your Splunk Enterprise or Splunk Cloud environment for querying and visualization. The following screenshot is from Splunk Enterprise.

In addition, you can monitor and view metrics associated with your Firehose stream using the AWS console.
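You can also pull the same delivery metrics programmatically. The sketch below reads the DeliveryToSplunk.Success metric for the stream from the AWS/Firehose CloudWatch namespace; confirm the metric name against the metrics listed for your own stream, since it is an assumption here.

from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

now = datetime.now(timezone.utc)
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/Firehose",
    MetricName="DeliveryToSplunk.Success",   # delivery success for the Splunk destination
    Dimensions=[{"Name": "DeliveryStreamName",
                 "Value": "FirehoseSplunkDeliveryStream"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])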

Conclusion

Although our walkthrough uses VPC Flow Logs, the pattern can be used in many other scenarios. These include ingesting data from AWS IoT, other CloudWatch logs and events, Kinesis streams, or other data sources using the Kinesis Agent or Kinesis Producer Library. You might use a Lambda blueprint or disable record transformation entirely depending on your use case. For an additional use case for Amazon Data Firehose, check out the This Is My Architecture video, which discusses how to securely centralize cross-account data analytics using Kinesis and Splunk.

If you found this post useful, be sure to check out Integrating Splunk with Amazon Kinesis Streams.


About the Authors

Tarik Makota

Tarik is a solutions architect with the Amazon Web Services Partner Network. He provides technical guidance, design advice, and thought leadership to AWS' most strategic software partners. His career includes work in an extremely broad set of software development and architecture roles across ERP, financial printing, benefit delivery and administration, and financial services. He holds an M.S. in Software Development and Management from Rochester Institute of Technology.

Roy Arsan

Roy is a solutions architect on the Splunk Partner Integrations team. He has a background in product development, cloud architecture, and building consumer and enterprise cloud applications. More recently, he has architected Splunk solutions on major cloud providers, including an AWS Quick Start for Splunk that enables AWS users to easily deploy distributed Splunk Enterprise straight from their AWS console. He is also the co-author of the AWS Lambda blueprints for Splunk. He holds an M.S. in Computer Science Engineering from the University of Michigan.

Yashika Jain

Yashika is a Senior Cloud Analytics Engineer at AWS, specializing in real-time analytics and event-driven architectures. She is dedicated to helping customers by providing deep technical guidance, driving best practices across real-time data platforms, and solving complex issues related to their streaming data architectures.

Mitali Sheth

Mitali is a Streaming Data Engineer on the AWS Professional Services team, specializing in real-time analytics and event-driven architectures for AWS' most strategic software customers. More recently, she has focused on data governance with AWS Lake Formation, building reliable data pipelines with AWS Glue, and modernizing streaming infrastructure with Amazon MSK and Amazon Managed Flink for large-scale enterprise deployments. She holds an M.S. in Computer Science from the University of Florida.
