
Forward Amazon CloudFront access logs

You can enable logging from your Amazon CloudFront distribution to send web access logs to New Relic.

Options for sending logs

There are two options for sending logs from CloudFront: standard logs or real-time logs.

Standard logs are sent from CloudFront to an S3 bucket, where they are stored. You then use our AWS Lambda trigger to send the logs from S3 to New Relic. Review the Amazon CloudFront docs for configuring and using standard logs (access logs); they include detailed information about creating your S3 bucket and setting the appropriate permissions. Standard logs include all data fields and are sent to S3, and then on to New Relic, every five minutes. For detailed instructions on how to configure standard logs in the Amazon console and send them to New Relic via our S3 Lambda trigger, see the Enable standard logging section.

Real-time logs are sent within seconds of CloudFront receiving requests, using a Kinesis Data Stream consumer and Kinesis Data Firehose for delivery to New Relic. Review the Amazon CloudFront docs for configuring real-time logs. Real-time logs are configurable: you can set a sampling rate (what percentage of logs to send), select the specific fields to receive in the log records, and define the cache behaviors (path patterns) you want to receive logs for. Real-time logs also require an S3 backup bucket for storing either all data or failed data only, so to enable real-time logs you must also follow our instructions for creating an S3 bucket. For detailed instructions on how to configure real-time logs in the Amazon console and send them to New Relic via Kinesis Data Firehose, see the Enable real-time logs section.

For both options, you can use our built-in log parsing rules to automatically parse your CloudFront access logs, and our quickstart dashboard to immediately gain insight into your data. For the built-in parsing rules to work and to populate the widgets of the quickstart dashboard, you must configure the logtype attribute as described in the instructions. For details on parsing real-time logs when you select only some of the logging fields, see the Real-time logs parsing section.

Create S3 bucket for storing CloudFront logs

To enable standard logs or real-time logs for your CloudFront distribution, you must first create an S3 bucket for storing the CloudFront access logs (a scripted sketch follows the list below):

  1. In the AWS Management Console, choose Services > All services > S3.
  2. Select Create bucket.
  3. Enter a Bucket name and select the AWS region of your choice. (Note that if you're using standard logging, your S3 bucket must be in the same region as the Lambda function you create in the next section.)
  4. In the Object ownership section, select ACLs enabled.
  5. Select Create bucket.
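
If you prefer to script this step, here's a minimal sketch using the AWS SDK for Python (boto3). The bucket name and region are hypothetical placeholders; adjust them to your environment.

  # A minimal boto3 sketch of the console steps above (names and region are hypothetical).
  import boto3

  REGION = "us-west-2"                  # for standard logging, use the same region as your Lambda function
  BUCKET = "my-cloudfront-access-logs"  # hypothetical bucket name

  s3 = boto3.client("s3", region_name=REGION)
  s3.create_bucket(
      Bucket=BUCKET,
      CreateBucketConfiguration={"LocationConstraint": REGION},
      ObjectOwnership="BucketOwnerPreferred",  # corresponds to "ACLs enabled" in the console
  )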

Now you can enable either standard or real-time logging. For instructions for those options, keep reading.

Enable standard logging

Update your CloudFront distribution to enable standard logs (an API-based sketch follows these steps):

  1. In the AWS Management Console, choose Services > All Services > CloudFront.
  2. Click your distribution ID. On the General tab, select Edit in the Settings section.
  3. In the Standard logging section, select On to enable logging and display the logging configuration settings.
  4. For S3 bucket, search for and choose the S3 bucket name you created above.
  5. Optional: You can add a log prefix like cloudfront_logs.
  6. Choose Save changes.

Within five minutes, you should begin to see log files appearing in your S3 bucket with the following file name format:

<optional prefix>/<distribution ID>.YYYY-MM-DD-HH.unique-ID.gz
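
If you manage your distribution programmatically, the console steps above roughly correspond to the following boto3 sketch. The distribution ID, bucket name, and prefix are hypothetical; the Logging block mirrors only the standard logging settings, not a full distribution configuration.

  # A sketch of enabling standard logging via the CloudFront API (distribution ID and bucket are hypothetical).
  import boto3

  cf = boto3.client("cloudfront")
  DIST_ID = "E1234567890ABC"  # hypothetical distribution ID

  resp = cf.get_distribution_config(Id=DIST_ID)
  config, etag = resp["DistributionConfig"], resp["ETag"]

  # Point standard logging at the bucket created earlier; the prefix is optional
  config["Logging"] = {
      "Enabled": True,
      "IncludeCookies": False,
      "Bucket": "my-cloudfront-access-logs.s3.amazonaws.com",
      "Prefix": "cloudfront_logs/",
  }

  cf.update_distribution(Id=DIST_ID, DistributionConfig=config, IfMatch=etag)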

Next, you must install and configure our AWS Lambda function NewRelic-log-ingestion-s3 to send the access logs in S3 to New Relic. You'll need a Lambda function dedicated to the Amazon CloudFront logs so you can set the appropriate LOG_TYPE environment variable; this allows you to use our CloudFront built-in parsing rules and the quickstart dashboard. If you already have this Lambda function installed in a region and are using it to send other S3 logs to New Relic (for example, ALB/NLB logs), you'll need to install the Lambda function again in another region. Also, as noted earlier, your S3 bucket for storing access logs and your Lambda function for sending S3 logs to New Relic must be in the same region.

To do these steps (a scripted sketch of the trigger and environment variable setup follows the list):

  1. Follow the instructions here to Configure AWS Lambda for sending logs from S3.
  2. After the Lambda function is installed, select Functions > NewRelic-log-ingestion-s3.
  3. Select the + Add trigger button under S3.
  4. For Trigger Configuration, select S3.
  5. For Bucket, search for and choose the S3 bucket you created above.
  6. In the Recursive invocation section, check the acknowledgement box and then choose Add.
  7. On the Configuration tab for the function, choose the Environment variables option on the left.
  8. Select Edit, and for LOG_TYPE, enter cloudfront-web.
  9. Select Save.
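
The same trigger and environment variable setup can be scripted. The sketch below assumes the function was installed as NewRelic-log-ingestion-s3 in the same region as the bucket; the bucket name and region are hypothetical.

  # A sketch of adding the S3 trigger and LOG_TYPE variable with boto3 (bucket name is hypothetical).
  import boto3

  REGION = "us-west-2"
  FUNCTION = "NewRelic-log-ingestion-s3"
  BUCKET = "my-cloudfront-access-logs"

  lam = boto3.client("lambda", region_name=REGION)
  s3 = boto3.client("s3", region_name=REGION)

  # Allow S3 to invoke the function, then register the bucket notification (the "trigger")
  lam.add_permission(
      FunctionName=FUNCTION,
      StatementId="AllowS3Invoke",
      Action="lambda:InvokeFunction",
      Principal="s3.amazonaws.com",
      SourceArn=f"arn:aws:s3:::{BUCKET}",
  )
  fn_arn = lam.get_function_configuration(FunctionName=FUNCTION)["FunctionArn"]
  s3.put_bucket_notification_configuration(
      Bucket=BUCKET,
      NotificationConfiguration={
          "LambdaFunctionConfigurations": [
              {"LambdaFunctionArn": fn_arn, "Events": ["s3:ObjectCreated:*"]}
          ]
      },
  )

  # Merge LOG_TYPE into the existing environment so other variables are preserved
  env = lam.get_function_configuration(FunctionName=FUNCTION).get("Environment", {}).get("Variables", {})
  env["LOG_TYPE"] = "cloudfront-web"
  lam.update_function_configuration(FunctionName=FUNCTION, Environment={"Variables": env})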

Within five minutes, you should begin to see your logs in the logs UI. To confirm you're receiving logs, search for logtype:cloudfront-web in the logs UI search bar, or run an NRQL query like FROM Log SELECT * WHERE logtype='cloudfront-web'.

Enable real-time logs

To enable real-time logging for your CloudFront distribution, you must first create a Kinesis Data Stream to receive the CloudFront logs (an API-based sketch follows these steps):

  1. In the AWS Management Console, choose Services > Kinesis.
  2. Select Data streams, and then Create data stream.
  3. Enter a Data stream name. For example, CloudFront-DataStream.
  4. Select a Data Stream capacity mode of your choice.
  5. Select Create data stream.
  6. In the Consumers section, select Process with delivery stream.
  7. For the Destination, select New Relic.
  8. Enter a Delivery stream name. For example, CloudFront-DeliveryStream.
  9. In the Destination settings section, for the HTTP endpoint URL, select New Relic logs - US or New Relic logs - EU.
  10. For API key, enter the license key for your New Relic account.
  11. Select Add parameter.
  12. If you will select All fields in the Fields step of the real-time logging configuration below, for Key, enter logtype, and for Value, enter cloudfront-rtl. If you plan to select only a subset of fields, enter cloudfront-rtl-custom instead. If you don't select all the fields, see the Real-time logs parsing section below for information on how to create a custom parsing rule for your logs.
  13. In the Backup settings section, you may choose either Failed data only or All data for the Source record backup in Amazon S3 option, based on your preference.
  14. For the S3 backup bucket, select Browse to search for and select the S3 bucket name you created above for storing CloudFront logs.
  15. Choose Create delivery stream.
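
For reference, the console steps above map roughly onto the following boto3 sketch. The role ARNs, account ID, and region are hypothetical, and the endpoint URL shown is the one behind the New Relic logs - US option; substitute the EU endpoint and your own license key as appropriate.

  # A sketch of creating the data stream and delivery stream with boto3 (ARNs and keys are hypothetical).
  import boto3

  REGION = "us-east-1"
  kinesis = boto3.client("kinesis", region_name=REGION)
  firehose = boto3.client("firehose", region_name=REGION)

  kinesis.create_stream(
      StreamName="CloudFront-DataStream",
      StreamModeDetails={"StreamMode": "ON_DEMAND"},
  )

  firehose.create_delivery_stream(
      DeliveryStreamName="CloudFront-DeliveryStream",
      DeliveryStreamType="KinesisStreamAsSource",
      KinesisStreamSourceConfiguration={
          "KinesisStreamARN": f"arn:aws:kinesis:{REGION}:123456789012:stream/CloudFront-DataStream",
          "RoleARN": "arn:aws:iam::123456789012:role/firehose-kinesis-read",   # hypothetical role
      },
      HttpEndpointDestinationConfiguration={
          "EndpointConfiguration": {
              "Name": "New Relic logs - US",
              "Url": "https://aws-api.newrelic.com/firehose/v1",               # US endpoint
              "AccessKey": "YOUR_NEW_RELIC_LICENSE_KEY",
          },
          "RequestConfiguration": {
              "CommonAttributes": [
                  {"AttributeName": "logtype", "AttributeValue": "cloudfront-rtl"}
              ]
          },
          "S3BackupMode": "FailedDataOnly",
          "S3Configuration": {
              "RoleARN": "arn:aws:iam::123456789012:role/firehose-s3-backup",  # hypothetical role
              "BucketARN": "arn:aws:s3:::my-cloudfront-access-logs",
          },
      },
  )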

Next, create a real-time logging configuration and attach it to your CloudFront distribution (an API-based sketch follows these steps):

  1. In the AWS Management Console, choose Services > CloudFront.
  2. In the Telemetry section on the left, select Logs.
  3. Click on the Real-time configurations tab, and then choose Create configuration.
  4. Enter a configuration name. For example: CloudFront-RealTimeLogs.
  5. Enter a sampling rate from 1 to 100.
  6. For Fields, select All fields or choose the fields you wish to include in your logs.
  7. For Endpoint, select the data stream you created earlier. For example: CloudFront-DataStream.
  8. In the Distributions section, select your CloudFront distribution ID.
  9. For Cache behaviors, check Default (*).
  10. Choose Create configuration.
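
The same configuration can be created and attached through the CloudFront API. The sketch below uses hypothetical ARNs and a shortened field list for illustration; use the full field list if you chose All fields.

  # A sketch of creating a real-time log configuration and attaching it to the default cache behavior.
  import boto3

  cf = boto3.client("cloudfront")

  rtl = cf.create_realtime_log_config(
      Name="CloudFront-RealTimeLogs",
      SamplingRate=100,
      Fields=["timestamp", "c-ip", "sc-status", "cs-method", "cs-uri-stem"],  # subset for illustration
      EndPoints=[{
          "StreamType": "Kinesis",
          "KinesisStreamConfig": {
              "RoleARN": "arn:aws:iam::123456789012:role/cloudfront-realtime-logs",            # hypothetical
              "StreamARN": "arn:aws:kinesis:us-east-1:123456789012:stream/CloudFront-DataStream",
          },
      }],
  )["RealtimeLogConfig"]

  # Attach the configuration to the distribution's default cache behavior (Default (*))
  DIST_ID = "E1234567890ABC"  # hypothetical distribution ID
  resp = cf.get_distribution_config(Id=DIST_ID)
  config, etag = resp["DistributionConfig"], resp["ETag"]
  config["DefaultCacheBehavior"]["RealtimeLogConfigArn"] = rtl["ARN"]
  cf.update_distribution(Id=DIST_ID, DistributionConfig=config, IfMatch=etag)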

Within a couple of minutes, you should begin to see your logs in the logs UI. To confirm you're receiving real-time logs, search for logtype:cloudfront-rtl* in the logs UI search bar, or run an NRQL query like FROM Log SELECT * WHERE logtype LIKE 'cloudfront-rtl%'.

Real-time logs parsing

Our built-in parsing rule for real-time logs assumes that all fields will be logged. If you choose to only log a subset of the fields, you must define a custom parsing rule that matches your log format. This is required for the logs to parse correctly and to use our quickstart dashboard.

To create a custom parsing rule, select Parsing in the Manage data section of the logs UI:

  1. For the Rule name, enter CloudFront custom parsing rule.
  2. For Filter logs based on NRQL, enter logtype='cloudfront-rtl-custom'.
  3. Update the Grok parsing logic below so that it includes only the fields you've selected in the real-time logs configuration. For example, if you choose not to include the cs_headers field, remove %{SPACE}%{NOTSPACE:cs_headers} from the Grok, and so on.
    ^%{NOTSPACE:timestamp}.\d{3}%{SPACE}%{NOTSPACE:c_ip}%{SPACE}%{NUMBER:time_to_first_byte:INT}%{SPACE}%{NUMBER:sc_status:INT}%{SPACE}%{NUMBER:sc_bytes:INT}%{SPACE}%{WORD:cs_method}%{SPACE}%{NOTSPACE:cs_protocol}%{SPACE}%{NOTSPACE:cs_host}%{SPACE}%{NOTSPACE:cs_uri_stem}%{SPACE}%{NUMBER:cs_bytes:INT}%{SPACE}%{NOTSPACE:x_edge_location}%{SPACE}%{NOTSPACE:x_edge_request_id}%{SPACE}%{NOTSPACE:x_host_header}%{SPACE}%{NUMBER:time_taken:INT}%{SPACE}%{NOTSPACE:cs_protocol_version}%{SPACE}%{NOTSPACE:cs_ip_version}%{SPACE}%{NOTSPACE:cs_user_agent}%{SPACE}%{NOTSPACE:cs_referer}%{SPACE}%{NOTSPACE:cs_cookie}%{SPACE}%{NOTSPACE:cs_uri_query}%{SPACE}%{NOTSPACE:x_edge_response_result_type}%{SPACE}%{NOTSPACE:x_forwarded_for}%{SPACE}%{NOTSPACE:ssl_protocol}%{SPACE}%{NOTSPACE:ssl_cipher}%{SPACE}%{NOTSPACE:x_edge_result_type}%{SPACE}%{NOTSPACE:fle_encrypted_fields}%{SPACE}%{NOTSPACE:fle_status}%{SPACE}%{NOTSPACE:sc_content_type}%{SPACE}%{NOTSPACE:sc_content_len}%{SPACE}%{NOTSPACE:sc_range_start}%{SPACE}%{NOTSPACE:sc_range_end}%{SPACE}%{NUMBER:c_port:INT}%{SPACE}%{NOTSPACE:x_edge_detailed_result_type}%{SPACE}%{NOTSPACE:c_country}%{SPACE}%{NOTSPACE:cs_accept_encoding}%{SPACE}%{NOTSPACE:cs_accept}%{SPACE}%{NOTSPACE:cache_behavior_path_pattern}%{SPACE}%{NOTSPACE:cs_headers}%{SPACE}%{NOTSPACE:cs_header_names}%{SPACE}%{NOTSPACE:cs_headers_count}$
  4. Paste the updated Grok into the parsing logic section, and then select Test grok to validate that your parser is working.
  5. Enable the rule and then choose Save parsing rule.

Screenshot: creating a CloudFront custom parsing rule in the logs UI.

What's next?

Our quickstart for Amazon CloudFront Access Logs includes a pre-built dashboard.


Disable log forwarding

To disable log forwarding, you have two options. If you just want to stop sending the S3 logs to New Relic, delete the S3 trigger on the NewRelic-log-ingestion-s3 Lambda function. If you want to disable Amazon CloudFront access logs completely, delete the trigger and also turn off logging in the general settings of the CloudFront distribution.
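
As a sketch, removing the trigger can also be done with boto3 by clearing the bucket's notification configuration (the bucket name and region are hypothetical):

  # Clearing the bucket notification removes the S3 trigger, which stops forwarding to New Relic.
  import boto3

  s3 = boto3.client("s3", region_name="us-west-2")
  s3.put_bucket_notification_configuration(
      Bucket="my-cloudfront-access-logs",   # hypothetical bucket from earlier
      NotificationConfiguration={},         # empty config clears the Lambda trigger
  )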
