With our streaming export feature, available with Data Plus, you can send your data to an AWS Kinesis Firehose or Azure Event Hub as it's ingested by New Relic. We'll explain how to create and update a streaming rule using NerdGraph and how to view existing rules. You can use the NerdGraph explorer to make these calls.
What is streaming export?
As data is ingested by your New Relic organization, our streaming export feature sends that data to an AWS Kinesis Firehose or Azure Event Hub. You can set up custom rules, which are defined using NRQL, to govern what kinds of New Relic data you'll export. You can also elect to have this data compressed before it's exported, using our new Export Compression feature.
Some examples of things you can use streaming export for:
- Populating a data lake
- Enhancing AI/ML training
- Retaining data long term for compliance, legal, or security reasons
You can disable or enable streaming export rules whenever you want. But note that streaming export only sends currently ingested data, which means that if you disable it and re-enable it, the data ingested when it was off won't be sent with this feature. For exporting past data, you can use Historical data export.
Requirements and limits
Limits on streamed data: The amount of data you can stream per month is limited by your total ingested data per month. If your streaming data amount exceeds your ingested data amount, we may suspend your access to and use of streaming export.
Permissions-related requirements:
- Pro or Enterprise edition with Data Plus option
- User type: core user or full platform user
- The streaming data permission
You must have an AWS Kinesis Firehose or Azure Event Hub set up to receive New Relic data. If you haven't already done this, you can follow our steps below for AWS or Azure.
NRQL requirements:
- Must be flat queries with no aggregation. For example, `SELECT *` or `SELECT column1, column2` forms are supported.
- Applicable for anything in the `WHERE` clause, except subqueries.
- The query can't have a `FACET` clause, `COMPARE WITH`, or `LOOKUP`.
- Nested queries are not supported.
- Supports data types stored in NRDB, but not metric timeslice data.
Set up an AWS Kinesis Firehose
To set up streaming data export to AWS, you must first set up an Amazon Kinesis Firehose. We'll walk you through that procedure in the next three steps.
Create a Firehose for streaming export
Create a dedicated Firehose to stream your New Relic data to:
- Go to Amazon Kinesis Data Firehose.
- Create a delivery stream.
- Name the stream (you will use this name later in the rule registration).
- Use Direct PUT or other sources and specify a destination compatible with New Relic's JSON event format (for example, S3, Redshift, or OpenSearch).
Create IAM Firehose write access policy
- Go to the IAM console and sign in with your user.
- Click Policies in the left navigation, and then click Create policy.
- Select the Firehose service, and then select `PutRecord` and `PutRecordBatch`.
- For Resources, select the delivery stream, add ARN, and select the region of your stream.
- Enter your AWS account number, and then enter your desired delivery stream name in the name box.
- Create the policy.
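If you prefer to script this step instead of using the console, the same policy can be created with the AWS SDK. This is a minimal sketch, assuming Python with boto3; the account ID, region, stream name, and policy name are placeholders you should replace with your own values.

```python
# Sketch: create the Firehose write-access policy with boto3 (placeholder values).
import json
import boto3

AWS_ACCOUNT_ID = "YOUR_AWS_ACCOUNT_ID"   # placeholder
REGION = "us-east-1"                     # placeholder: the region of your stream
STREAM_NAME = "firehose-test-stream"     # placeholder: your delivery stream name

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["firehose:PutRecord", "firehose:PutRecordBatch"],
            "Resource": f"arn:aws:firehose:{REGION}:{AWS_ACCOUNT_ID}:deliverystream/{STREAM_NAME}",
        }
    ],
}

iam = boto3.client("iam")
response = iam.create_policy(
    PolicyName="newrelic-firehose-write-access",  # hypothetical name; choose your own
    PolicyDocument=json.dumps(policy_document),
)
print(response["Policy"]["Arn"])  # keep this ARN for the role in the next step
```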
Create IAM role for granting New Relic write access
To set up the IAM role:
- Navigate to the IAM console and click Roles.
- Create a role; for the trusted entity type, select AWS account, and then select Another AWS account.
- Enter the New Relic export account ID: `888632727556`.
- Select Require external ID and enter the account ID of the New Relic account you want to export from.
- Click Permissions, and then select the policy you created above.
- Add a role name (this will be used in the export registration) and description.
- Create the role.
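Equivalently, the role and its trust relationship can be scripted. The boto3 sketch below is illustrative only: the policy ARN is a placeholder from the previous step, the external ID must be your New Relic account ID, and the sts:ExternalId condition mirrors what the console's Require external ID option configures.

```python
# Sketch: create the cross-account role New Relic assumes, then attach the write policy.
import json
import boto3

NEW_RELIC_EXPORT_ACCOUNT_ID = "888632727556"     # New Relic export account ID (from the docs)
NR_ACCOUNT_ID = "YOUR_NR_ACCOUNT_ID"             # your New Relic account ID, used as the external ID
POLICY_ARN = "ARN_OF_POLICY_FROM_PREVIOUS_STEP"  # placeholder

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{NEW_RELIC_EXPORT_ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": NR_ACCOUNT_ID}},
        }
    ],
}

iam = boto3.client("iam")
iam.create_role(
    RoleName="firehose-role",  # you'll reference this role name when registering the export rule
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Grants New Relic streaming export write access to Kinesis Firehose",
)
iam.attach_role_policy(RoleName="firehose-role", PolicyArn=POLICY_ARN)
```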
When you're done with these steps, you can set up your export rules using NerdGraph. For more on that, jump to Important fields for NerdGraph calls.
Set up an Azure Event Hub
To set up streaming data export to Azure, you must first set up an Event Hub. We'll walk you through that procedure in the next three steps.
Alternatively, you can follow the Azure guide here.
Create an Event Hubs namespace
- Navigate to Event Hubs within your Microsoft Azure account.
- Follow the steps to create an Event Hubs namespace. We recommend enabling auto-inflate to ensure you receive all of your data.
- Ensure public access is enabled; we'll be using a shared access policy to authenticate securely with your Event Hub.
- Once your Event Hubs namespace is deployed, click Go to resource.
Create an Event Hub
- In the left column, click Event Hubs.
- Click +Event Hub to create an Event Hub.
- Enter the desired Event Hub Name. Save this, as you'll need it later to create the streaming export rule.
- For Retention, select the Delete Cleanup policy and the desired Retention time (hrs). Important: streaming export is currently not supported for Event Hubs with the Compact retention policy.
- Once the Event Hub is created, click on the Event Hub.
Create and attach a shared access policy
- In the left column, go to Shared access policies.
- Click +Add near the top of the page.
- Choose a name for your shared access policy.
- Check Send, and click Create.
- Click the created policy, and copy the Connection string–primary key. Save this, as we'll use this to authenticate and send data to your Event Hub.
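As an optional sanity check before registering the export rule, you can confirm that the connection string and its Send permission work by publishing a test event. This is a minimal sketch, assuming the azure-eventhub Python package; the connection string and Event Hub name are placeholders for your own values.

```python
# Sketch: send a test event to confirm the shared access policy can publish to the Event Hub.
from azure.eventhub import EventHubProducerClient, EventData

CONNECTION_STRING = "YOUR_EVENT_HUB_SAS_CONNECTION_STRING"  # the Connection string-primary key you copied
EVENT_HUB_NAME = "YOUR_EVENT_HUB_NAME"

producer = EventHubProducerClient.from_connection_string(
    conn_str=CONNECTION_STRING, eventhub_name=EVENT_HUB_NAME
)
with producer:
    batch = producer.create_batch()
    batch.add(EventData('{"test": "streaming export connectivity check"}'))
    producer.send_batch(batch)
print("Test event sent")
```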
When you're done with these steps, see the next section about important fields for your NerdGraph calls.
Important fields for NerdGraph calls
Most of the streaming data export NerdGraph calls we'll be covering use a few fields that are related to your account:
For AWS Kinesis Firehose:
- `awsAccountId`: The AWS account ID. For example: `10000000000`.
- `deliveryStreamName`: The Kinesis stream name. For example: `firehose-test-stream`.
- `region`: The AWS region. For example: `us-east-1`.
- `role`: The AWS IAM role for Kinesis Firehose. This will always be `firehose-role`.
For Azure Event Hubs:
- `eventHubConnectionString`: The Azure Event Hub connection string. It looks similar to: `Endpoint=sb://<NamespaceName>.servicebus.windows.net/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<KeyValue>;EntityPath=<EventHubName>`
- `eventHubName`: The Event Hub name. For example: `my-event-hub`.
How to create a streaming export rule
First, decide which data you want to export. Then, with a NerdGraph call, you'll create the streaming rules you want using NRQL. We'll give some examples.
Create a stream
When you create a new streaming rule, you'll need all of the following fields. Here's an example of creating a streaming rule exporting to an AWS Kinesis Firehose:
```graphql
mutation {
  streamingExportCreateRule(
    accountId: YOUR_NR_ACCOUNT_ID
    ruleParameters: {
      description: "ADD_RULE_DESCRIPTION"
      name: "PROVIDE_RULE_NAME"
      nrql: "SELECT * FROM NodeStatus"
      payloadCompression: DISABLED
    }
    awsParameters: {
      awsAccountId: "YOUR_AWS_ACCOUNT_ID"
      deliveryStreamName: "FIREHOSE_STREAM_NAME"
      region: "SPECIFY_AWS_REGION"
      role: "firehose-role"
    }
  ) {
    id
    status
  }
}
```
Here's an example of creating a streaming rule exporting to an Azure Event Hub:
```graphql
mutation {
  streamingExportCreateRule(
    accountId: YOUR_NR_ACCOUNT_ID
    ruleParameters: {
      description: "ADD_RULE_DESCRIPTION"
      name: "PROVIDE_RULE_NAME"
      nrql: "SELECT * FROM NodeStatus"
      payloadCompression: DISABLED
    }
    azureParameters: {
      eventHubConnectionString: "YOUR_EVENT_HUB_SAS_CONNECTION_STRING"
      eventHubName: "YOUR_EVENT_HUB_NAME"
    }
  ) {
    id
    status
  }
}
```
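If you'd rather script these calls than use the NerdGraph explorer, you can POST the same mutation to the NerdGraph endpoint. This is a minimal sketch, assuming Python with the requests library and a New Relic user API key; the mutation string is the AWS example above, with the same placeholder values.

```python
# Sketch: submit a streamingExportCreateRule mutation to NerdGraph with the requests library.
import requests

NERDGRAPH_URL = "https://api.newrelic.com/graphql"
API_KEY = "YOUR_USER_API_KEY"  # a New Relic user key

mutation = """
mutation {
  streamingExportCreateRule(
    accountId: YOUR_NR_ACCOUNT_ID
    ruleParameters: {
      description: "ADD_RULE_DESCRIPTION"
      name: "PROVIDE_RULE_NAME"
      nrql: "SELECT * FROM NodeStatus"
      payloadCompression: DISABLED
    }
    awsParameters: {
      awsAccountId: "YOUR_AWS_ACCOUNT_ID"
      deliveryStreamName: "FIREHOSE_STREAM_NAME"
      region: "SPECIFY_AWS_REGION"
      role: "firehose-role"
    }
  ) {
    id
    status
  }
}
"""

response = requests.post(
    NERDGRAPH_URL,
    headers={"API-Key": API_KEY, "Content-Type": "application/json"},
    json={"query": mutation},
)
response.raise_for_status()
print(response.json())  # contains the new rule's id and status
```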
You'll get a result immediately with a rule ID and status. The status will be shown as `CREATION_IN_PROGRESS`. You can use the rule ID to check whether the rule was created successfully.

It can take up to six minutes for the rule to complete creation because policy validation takes some time.
Before the rule finishes registering, you can't initiate another mutation action (like `Enable`, `Disable`, or `Update`) because the rule is locked for the creation process. If you try another mutation action before the rule completes the registration process, you'll get a message like, "The export rule is currently being updated by another request, please wait and try again later."

You can use `Delete` at any time.
The creation can finish and change the status at any time within the roughly six minutes required for rule creation. The status will change to `ENABLED`, `DISABLED`, or `CREATION_FAILED`.

See these details on the values:
- `ENABLED`: The rule was created successfully and data has started to be streamed.
- `CREATION_FAILED`: The rule failed on creation. This can happen for several reasons, but is often due to AWS policy or Azure SAS validation failing.
- `DISABLED`: The rule was created but isn't enabled yet, for reasons such as the filter stream limit being reached or a failure during filter stream rule creation.

If the status remains `CREATION_IN_PROGRESS` after six minutes, the rule creation failed due to a system error on our service. You can delete the rule and try to create a new one.
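Because creation can take several minutes, one way to wait for the final status is to poll the streamingRule query until it leaves `CREATION_IN_PROGRESS`. This is a sketch under the same assumptions as the earlier requests-based example (NerdGraph endpoint, user API key, and your account and rule IDs substituted in):

```python
# Sketch: poll a rule's status until creation finishes (or times out after ~6 minutes).
import time
import requests

NERDGRAPH_URL = "https://api.newrelic.com/graphql"
API_KEY = "YOUR_USER_API_KEY"

status_query = """
{
  actor {
    account(id: YOUR_NR_ACCOUNT_ID) {
      streamingExport {
        streamingRule(id: "RULE_ID") {
          id
          status
          message
        }
      }
    }
  }
}
"""

deadline = time.time() + 6 * 60
while time.time() < deadline:
    resp = requests.post(
        NERDGRAPH_URL,
        headers={"API-Key": API_KEY, "Content-Type": "application/json"},
        json={"query": status_query},
    )
    resp.raise_for_status()
    rule = resp.json()["data"]["actor"]["account"]["streamingExport"]["streamingRule"]
    if rule["status"] != "CREATION_IN_PROGRESS":
        print(rule["status"], rule.get("message"))
        break
    time.sleep(30)  # check again in 30 seconds
else:
    print("Rule is still in CREATION_IN_PROGRESS; consider deleting it and creating a new one.")
```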
Once a streaming rule is created, you can view it.
Update a stream
When you update a streaming rule, you'll need all of the following fields. Here's an example of updating a streaming rule:
AWS Kinesis Firehose:
```graphql
mutation {
  streamingExportUpdateRule(
    id: RULE_ID
    ruleParameters: {
      description: "ADD_RULE_DESCRIPTION"
      name: "PROVIDE_RULE_NAME"
      nrql: "YOUR_NRQL_QUERY"
      payloadCompression: DISABLED
    }
    awsParameters: {
      awsAccountId: "YOUR_AWS_ACCOUNT_ID"
      deliveryStreamName: "FIREHOSE_STREAM_NAME"
      region: "SPECIFY_AWS_REGION"
      role: "firehose-role"
    }
  ) {
    id
    status
  }
}
```
Azure Event Hub:
```graphql
mutation {
  streamingExportUpdateRule(
    id: RULE_ID
    ruleParameters: {
      description: "ADD_RULE_DESCRIPTION"
      name: "PROVIDE_RULE_NAME"
      nrql: "YOUR_NRQL_QUERY"
      payloadCompression: DISABLED
    }
    azureParameters: {
      eventHubConnectionString: "YOUR_EVENT_HUB_SAS_CONNECTION_STRING"
      eventHubName: "YOUR_EVENT_HUB_NAME"
    }
  ) {
    id
    status
  }
}
```
When updating, you'll get a message in the message field: “The export rule is being updated and the process may take a few minutes to complete. Please check again later.” It can take up to six minutes to be fully updated.
You can check whether the rule has been updated by calling `streamingRule` to retrieve the rule. While the rule is being updated, it's locked, and no other mutation action can act on it. If you try to perform another mutation action on the same rule, you'll get a message saying, "The export rule is currently being updated by another request, please wait and try again later." You can update a rule of any status except a deleted rule.
Disable a stream
To disable a rule, you only need to provide the rule ID. Here's an example of disabling a stream:
```graphql
mutation {
  streamingExportDisableRule(id: RULE_ID) {
    id
    status
    message
  }
}
```
You can only disable a rule that has a status of `ENABLED`. If you try to disable a rule that's in another state, it returns the error message, "The export rule can't be enabled or disabled due to status not being allowed." You can't disable the rule while it's locked because another mutation is in progress.
Enable a stream
If you want to enable a rule, you only need to provide the rule ID. Here's an example of enabling a stream:
```graphql
mutation {
  streamingExportEnableRule(id: RULE_ID) {
    id
    status
    message
  }
}
```
You can only enable a rule that has a status of `DISABLED`. If you try to enable a rule that's in another state, it returns an error message like, "The export rule can't be enabled or disabled due to status not being allowed." You can't enable the rule while it's locked because another mutation is in progress.
Delete a stream
You'll need to provide a rule ID to delete a stream. Here's an example:
```graphql
mutation {
  streamingExportDeleteRule(id: RULE_ID) {
    id
    ...
  }
}
```
Deleting can be performed on a rule of any status unless it's already deleted. Once a rule is deleted, it can't be reactivated. The rule can still be viewed within the first 24 hours after deletion by calling the `streamingRule` API with the rule ID. After 24 hours, the rule is no longer searchable through NerdGraph.
View streams
You can query information about a specific stream rule by querying for the account ID and rule ID. Here's an example:
AWS Kinesis Firehose:
```graphql
{
  actor {
    account(id: YOUR_NR_ACCOUNT_ID) {
      streamingExport {
        streamingRule(id: "RULE_ID") {
          aws {
            awsAccountId
            deliveryStreamName
            region
            role
          }
          createdAt
          description
          id
          message
          name
          nrql
          status
          updatedAt
          payloadCompression
        }
      }
    }
  }
}
```
Azure Event Hub:
```graphql
{
  actor {
    account(id: YOUR_NR_ACCOUNT_ID) {
      streamingExport {
        streamingRule(id: "RULE_ID") {
          azure {
            eventHubConnectionString
            eventHubName
          }
          createdAt
          description
          id
          message
          name
          nrql
          status
          updatedAt
          payloadCompression
        }
      }
    }
  }
}
```
You can also query for all existing streams. Here's an example:
```graphql
{
  actor {
    account(id: YOUR_NR_ACCOUNT_ID) {
      streamingExport {
        streamingRules {
          aws {
            awsAccountId
            region
            deliveryStreamName
            role
          }
          azure {
            eventHubConnectionString
            eventHubName
          }
          createdAt
          description
          id
          message
          name
          nrql
          status
          updatedAt
          payloadCompression
        }
      }
    }
  }
}
```
Understand export compression
Optionally, payloads can be compressed before they're exported, though compression is disabled by default. This can help you avoid hitting your ingested data limit and save money on egress costs.
You can enable compression using the `payloadCompression` field under `ruleParameters`. This field can be any of the following values:
- `DISABLED`: Payloads are not compressed before being exported. If unspecified, `payloadCompression` defaults to this value.
- `GZIP`: Payloads are compressed with the GZIP format before being exported.

GZIP is the only compression format currently available, though we may choose to make more formats available in the future.
When compression is enabled on an existing AWS export rule, the next message from Kinesis Firehose may contain both compressed and uncompressed data. This is due to buffering within Kinesis Firehose. To avoid this, you can temporarily disable the export rule before enabling compression, or create a new Kinesis Firehose stream for the compressed data alone to flow through.
If you do encounter this issue and you're exporting to S3 or another file storage system, you can view the compressed part of the data by following these steps:
- Manually download the object.
- Separate the object into two separate files by copying the compressed data into a new file.
- Decompress the new, compressed-only data file.
Once you have the compressed data, you can re-upload it to S3 (or whatever other service you're using) and delete the old file.
Please be aware that in S3 or another file storage system, objects may consist of multiple GZIP-encoded payloads that are appended consecutively. Therefore, your decompression library should have the capability to handle such concatenated GZIP payloads.
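For example, Python's zlib can walk concatenated GZIP members explicitly. This is a minimal sketch, assuming you've already downloaded an exported object to a hypothetical local file named exported-object.gz:

```python
# Sketch: decompress an exported object made of consecutively appended GZIP payloads.
import zlib

def decompress_concatenated_gzip(data: bytes) -> bytes:
    """Decompress every GZIP member in `data`, even if several are appended back to back."""
    chunks = []
    while data:
        decompressor = zlib.decompressobj(wbits=zlib.MAX_WBITS | 16)  # +16 = expect a gzip wrapper
        chunks.append(decompressor.decompress(data))
        chunks.append(decompressor.flush())
        data = decompressor.unused_data  # bytes belonging to the next GZIP member, if any
    return b"".join(chunks)

with open("exported-object.gz", "rb") as f:  # hypothetical local copy of the exported object
    decompressed = decompress_concatenated_gzip(f.read())
print(decompressed[:200])  # JSON event payloads
```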
Automatic Decompression in AWS
Once your data has arrived in AWS, you may want options to automatically decompress it. If you're streaming that data to an S3 bucket, there are two ways to enable automatic decompression.
Automatic Decompression in Azure
If you're exporting data to Azure, it's possible to view decompressed versions of the objects stored in your event hub using a Stream Analytics Job. To do so, follow these steps:
- Follow this guide up to step 16.
- On step 13, you may choose to use the same event hub as the output without breaking anything, though we don't recommend this if you intend to proceed to step 17 and start the job, as doing so has not been tested.
- In the left pane of your streaming analytics job, click Inputs, then click on the input you set up.
- Scroll down to the bottom of the pane that appears on the right, and configure the input with these settings:
- Event serialization format: JSON
- Encoding: UTF-8
- Event compression type: GZip
- Click Save at the bottom of the pane.
- Click Query on the side of the screen. Using the Input preview tab, you should now be able to query the event hub from this screen.