One way to manage your data ingestion is by using Pipeline Control cloud rules. To create Pipeline Control cloud rules, you must be on New Relic Compute usage-based pricing.
There are two categories of rules you can create:
Drop data rule
- Drop entire data types or a data subset (with optional filter), with NRQL in the form of: `DELETE FROM DATA_TYPE_1, DATA_TYPE_2 (WHERE OPTIONAL_FILTER)`
Drop attribute rule
- Drop attributes from data types (with optional filter), with NRQL in the form of: `DELETE dropAttr1, dropAttr2 FROM DATA_TYPE (WHERE OPTIONAL_FILTER)`
- For this type of rule, you must pass in a non-empty list of raw attribute names in the `DELETE` clause.
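For example, here's a hypothetical pair of rules; the event type `MyEvent`, the attribute names, and the `environment` filter are illustrative placeholders for your own data:

```
DELETE FROM MyEvent WHERE environment = 'test'
DELETE jvmId, targetAttr FROM MyEvent WHERE environment = 'test'
```

The first is a drop data rule that discards every `MyEvent` data point from the test environment; the second is a drop attribute rule that keeps those events but strips two attributes from them.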
Tip
Pipeline Control cloud rules only apply to data that arrives from the moment you create the rule; they don't delete data that's already been ingested.
To learn more about what data counts as billable or not, see Data ingest.
Cloud rules data scope
Use cloud rules to target the following data types:
- APM-reported events
- Browser-reported events
- Mobile-reported events
- Synthetics-reported events
- Custom events (like those generated by the APM agent APIs or the Event API)
- Log data (you can also use the UI to drop data)
- Distributed tracing spans
- Default infrastructure monitoring events and infrastructure integrations events. Some caveats:
  - When you drop this data, the raw data is deleted, but the aggregated `SystemSample`, `ProcessSample`, `NetworkSample`, and `StorageSample` events are still available (for more on this, see Data retention). Though still available, this data doesn't count towards ingest and is not billable.
  - Raw infrastructure data is used for alerting, so if you drop that data, you can't alert on it. Because the aggregated data is still available, you may still see that data in charts with time ranges above 59 minutes.
Important
On January 7, 2026, drop rules targeting infrastructure events in `SystemSample`, `ProcessSample`, `NetworkSample`, and `StorageSample` will drop aggregated data.
- Dimensional metrics. Some caveats:
  - For metrics generated by the events-to-metrics service: cloud rules won't work, but these metrics can be stopped, or their attributes pruned, by disabling or reconfiguring the events-to-metrics rule.
  - Metric timeslice data can't be dropped with cloud rules. For more information about APM metric timeslice data, see this doc.
NRQL restrictions
- The limit on NRQL query length is 4096 characters. If a query exceeds that length, the `INVALID_NRQL_TOO_LONG` error occurs. If you need to drop data based on a longer query that cannot be split, contact New Relic support.
- `JOIN` and subqueries are not supported.
- You can provide a `WHERE` clause to select data with specific attributes.
- You shouldn't use entity tags in a `WHERE` clause to drop data, because entity tags are fetched in a separate process and are not attributes of the MELT data evaluated by the pipeline cloud rule.
- Features such as `LIMIT`, `TIMESERIES`, `COMPARE WITH`, `FACET`, and other clauses cannot be used.
- `SINCE` and `UNTIL` are not supported. If you have time-specific rules (say, drop everything until a time in the future), use `WHERE timestamp < (epoch milliseconds in the future)`, as shown in the sketch after this list.
- You can't use `SINCE` to drop historical data. Cloud rules only apply to data reported after the rule was created. If you need to delete data that has already been reported, refer to Delete existing data or contact New Relic support.
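For example, instead of `UNTIL`, a hypothetical time-bounded rule (the event type and cutoff are illustrative) looks like this; once the cutoff passes, the rule simply stops matching and can be deleted:

```
DELETE FROM MyEvent WHERE timestamp < 1767225600000
```

Here `1767225600000` is epoch milliseconds for January 1, 2026 00:00:00 UTC, so the rule drops only data points whose event timestamp falls before that moment.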
Audit rule history
To see who created and deleted cloud rules, query your account audit logs. The list endpoint also includes the user ID of the person who created the rule.
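As a minimal sketch, you could query the `NrAuditEvent` audit log data type; the exact `actionIdentifier` values under which cloud rule changes are recorded aren't documented here, so start broad and filter from what you see in your account:

```
SELECT timestamp, actorEmail, actionIdentifier, description FROM NrAuditEvent SINCE 1 week ago
```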
Cautions when dropping data
Cloud rules apply to each data point independently. For example, let's look at the following three data drop rules:
Important
When creating rules, you are responsible for ensuring that the rules accurately identify and discard the data that meets the conditions that you have established. You are also responsible for monitoring the rule, as well as the data you disclose to New Relic.
1. `DELETE FROM MyEvent WHERE myAttr not in ('staging')`
2. `DELETE FROM MyEvent WHERE myAttr not in ('production')`
3. `DELETE FROM MyEvent WHERE myAttr in ('development')`
These three rules are applied independently to each data point; in summary, all `MyEvent` events containing `myAttr` with any value will be dropped:
- `myAttr: 'staging'` -> matches rule 2
- `myAttr: 'production'` -> matches rule 1
- `myAttr: 'development'` -> matches rules 1, 2, and 3
- `myAttr: 'uuid-random-string'` -> matches rules 1 and 2
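If the intent was simply to keep staging and production events, a single rule (a sketch, assuming `myAttr` carries the environment name) expresses that without the overlapping matches above:

```
DELETE FROM MyEvent WHERE myAttr NOT IN ('staging', 'production')
```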
New Relic cannot guarantee that this functionality will completely resolve data disclosure concerns you may have. New Relic does not review or monitor how effective the rules you develop are. Always test and retest your queries and, after the drop rule is created, make sure it works as intended.
Creating rules about sensitive data can leak information about what kinds of data you maintain, including the format of your data or systems (for example, through referencing email addresses or specific credit card numbers). Rules you create, including all information in those rules, can be viewed and edited by any user with the relevant role-based access control permissions.
Only new data will be dropped. Existing data cannot be edited or deleted.
Managing cloud rules
To create and edit rules, you can either use the Pipeline Control UI or the NerdGraph API explorer (one.newrelic.com > Apps > NerdGraph API explorer).
Caution
Use caution when deciding to drop data. The data you drop can't be recovered. For more details on potential issues, see Caution notes.
Use case examples
Verify your rule works
After you create a cloud rule, you might wish to verify that it's working as expected. The rule should take effect quickly after a successful registration, so try running a `TIMESERIES` version of the query you registered to see that the data drops off.
Note: Timeseries data is rendered with event time (not processing time) as the x-axis. Since New Relic accepts data with a timestamp up to twenty-four hours in the future, you might see some data that was sent to New Relic before the rule was created but with an event timestamp past rule creation.
| Cloud rule type | NRQL |
|---|---|
| Drop data rule | Cloud rule NRQL: your `DELETE FROM ...` rule. Validation NRQL: a `SELECT count(*) ... TIMESERIES` query with the same `WHERE` filter. This should drop to 0. To verify that it did not affect anything else, invert the `WHERE` clause. |
| Drop attribute rule | Cloud rule NRQL: your `DELETE attr1, attr2 FROM ...` rule. Validation NRQL: a `SELECT count(attr1), count(attr2) ... TIMESERIES` query with the same `WHERE` filter. Both lines should drop to 0. To verify that it did not affect events that contain these attributes and still should, invert the `WHERE` clause. |
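As a concrete sketch, assuming a drop data rule of `DELETE FROM MyEvent WHERE environment = 'test'` (the event type and attribute are illustrative), the validation and inverted queries could look like:

```
SELECT count(*) FROM MyEvent WHERE environment = 'test' TIMESERIES SINCE 30 minutes ago
SELECT count(*) FROM MyEvent WHERE environment != 'test' TIMESERIES SINCE 30 minutes ago
```

The first query should drop to 0 shortly after the rule is registered; the second should continue unchanged, confirming the rule didn't catch more data than intended.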
NerdGraph examples
Create cloud rules
Drop data:
```graphql
mutation {
  entityManagementCreatePipelineCloudRule(
    pipelineCloudRuleEntity: {
      description: "Since we only care about MyEvent in staging and production, let's drop all MyEvent data in the test environment"
      name: "Drop MyEvent in test environment"
      nrql: "DELETE FROM MyEvent WHERE environment = 'test'"
      scope: { id: "your_nr_account_id", type: ACCOUNT }
    }
  ) {
    entity {
      id
      name
      nrql
    }
  }
}
```
Drop attributes:
```graphql
mutation {
  entityManagementCreatePipelineCloudRule(
    pipelineCloudRuleEntity: {
      description: "We don't care about jvmId and targetAttr in the test environment, let's drop those attributes"
      name: "Drop jvmId and targetAttr from MyEvent in test environment"
      nrql: "DELETE jvmId, targetAttr FROM MyEvent WHERE environment = 'test'"
      scope: { id: "your_nr_account_id", type: ACCOUNT }
    }
  ) {
    entity {
      id
      name
      nrql
    }
  }
}
```
Update a cloud rule
You can modify existing cloud rules to adjust their filtering criteria, data types, or deployment scope.
Importante
Cloud rules only apply to data that arrives from the moment you create or update the rule. They do not affect data that has already been ingested. Always test and monitor your updated rules to ensure they're working as intended.
To update an existing cloud rule, use the `updatePipelineCloudRule` mutation. This allows you to change the rule's name, description, and, crucially, its NRQL.
Identify the rule
You'll need the unique entity ID of the cloud rule you wish to update. You can find this ID by querying your rules using NerdGraph. For example:
```graphql
{
  actor {
    entitySearch(query: "type = 'PIPELINE_CLOUD_RULE'") {
      results {
        entities {
          id
          name
        }
      }
    }
  }
}
```
Construct your mutation:
Use the `updatePipelineCloudRule` mutation, providing the rule's `id` and a `pipelineCloudRuleEntity` object containing the fields you want to change.
- `id`: The entity ID of the cloud rule you identified in step 1.
- `pipelineCloudRuleEntity`: An object containing the updated details. You can provide any or all of the below fields:
  - `nrql`: The updated NRQL query that defines what data or attributes to drop. This should be a `DELETE` NRQL query (for example, `DELETE FROM MyEvent` or `DELETE attribute FROM MyEvent`).
  - `name`: The new name for your rule (optional).
  - `description`: The new description for your rule (optional).
Suppose you have an existing rule that drops `dropAttr1` from the `MyEventToDrop` event type. If you want to change it to drop `dropAttr2` instead, call the following mutation:
```graphql
mutation UpdatePipelineCloudRule {
  updatePipelineCloudRule(
    id: "MTAyNTY1MHxOR0VQfFBJUEVMSU5FX0NMT1VEX1JVTEV8MDE5ODgwOTUtMmZjNC03MTQ3LTkxMjQtZDk3YjhiY2Y4NGNj" # Replace with your rule's actual entity ID
    pipelineCloudRuleEntity: {
      description: "Since we do not need dropAttr2 ingested we would drop this attribute."
      name: "Update for Dropping dropAttr2 from MyEventToDrop"
      nrql: "DELETE dropAttr2 FROM MyEventToDrop"
    }
  ) {
    entity {
      id
      type
      name
      description
      nrql
    }
  }
}
```
Verify the update
Check the API response: The entity object in the response will reflect the updated name, description, and NRQL.
Monitor your data: Send new data that would be affected by the rule and confirm that the dropping behavior has changed as expected. For example, if you changed from dropping `dropAttr1` to `dropAttr2`, ensure `dropAttr1` is now ingested and `dropAttr2` no longer appears.
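For instance, a hypothetical pair of validation queries for the example rule above (the attribute and event type names are illustrative):

```
SELECT count(dropAttr1) FROM MyEventToDrop TIMESERIES SINCE 30 minutes ago
SELECT count(dropAttr2) FROM MyEventToDrop TIMESERIES SINCE 30 minutes ago
```

After the update, the first line should recover, since `dropAttr1` is no longer dropped, and the second should fall to 0.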
Delete a cloud rule
```graphql
mutation {
  entityManagementDelete(
    id: "MTAyNTY1MHxOR0VQfFBJUEVMSU5FX0NMT1VEX1JVTEV8MDE5NWI0NDYtNjk5My03NGE5LWEyYjktMzBjMzQ1ODM0NTUz"
  ) {
    id
  }
}
```
View cloud rules
Get a single cloud rule:
```graphql
{
  actor {
    entityManagement {
      entity(
        id: "MTAyNTY1MHxOR0VQfFBJUEVMSU5FX0NMT1VEX1JVTEV8MDE5NWI0M2UtYmFhNy03NDk3LWI0N2ItNjUyMmEzZDFmZTFi"
      ) {
        id
        ... on EntityManagementPipelineCloudRuleEntity {
          id
          name
          description
          nrql
          metadata {
            createdBy {
              id
            }
            createdAt
          }
        }
      }
    }
  }
}
```
List all cloud rules:
```graphql
{
  actor {
    entityManagement {
      entitySearch(query: "type = 'PIPELINE_CLOUD_RULE'") {
        entities {
          id
          type
          ... on EntityManagementPipelineCloudRuleEntity {
            id
            name
            nrql
          }
          metadata {
            createdBy {
              id
            }
          }
        }
      }
    }
  }
}
```
Non-droppable events and attributes
You cannot drop the following events and attributes using cloud rules:
Drop attributes on dimensional metric rollups
Dimensional metrics aggregate metrics into rollups for long-term storage and as a way to optimize longer-term queries. Metric cardinality limits are applied to this data.
You can use this feature to decide which attributes you don't need for long-term storage and query, but would like to keep for real-time queries.
For example, adding `containerId` as an attribute can be useful for live troubleshooting or recent analysis, but may not be needed when querying over longer periods of time for larger trends. Because something like `containerId` can be highly unique, it can quickly drive you towards your metric cardinality limits, which, when hit, stop the synthesis of rollups for the remainder of that UTC day.
This feature also lets you keep high-cardinality attributes on the raw data and drop them from rollups, which gives you more control over how quickly you approach your cardinality limits.
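For example, a sketch of a rule for the `containerId` scenario above (assuming `containerId` is an attribute reported on your metrics):

```
DELETE containerId FROM MetricAggregate
```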
Usage
Drop attributes from dimensional metric rollups (with optional filter). This uses NRQL of the form:

```
DELETE dropAttr1, dropAttr2 FROM MetricAggregate (WHERE OPTIONAL_FILTER)
```
Here is an example NerdGraph request:
```graphql
mutation {
  entityManagementCreatePipelineCloudRule(
    pipelineCloudRuleEntity: {
      description: "We don't care about targetAttr in the test environment in dimensional metric rollups, let's drop those attributes"
      name: "Drop targetAttr from Metric aggregate rollups in test environment"
      nrql: "DELETE targetAttr FROM MetricAggregate WHERE environment = 'test'"
      scope: { id: "your_nr_account_id", type: ACCOUNT }
    }
  ) {
    entity {
      id
      name
      nrql
    }
  }
}
```
To verify it's working, wait 3 to 5 minutes for the rule to be picked up and for aggregate data to be generated. Then assuming the example NRQL above is your pipeline control cloud rule, run the following queries:
```
SELECT count(targetAttr) FROM Metric WHERE metricName = 'some.metric' TIMESERIES SINCE 2 hours ago
SELECT count(targetAttr) FROM MetricRaw WHERE metricName = 'some.metric' TIMESERIES SINCE 2 hours ago
```
The first query retrieves metric rollups and should drop to 0, since `targetAttr` has been dropped per the new drop rule. The second query retrieves raw metric data points using the `MetricRaw` event type and should continue to hold steady, since raw data is not impacted by the new drop rule. For more information on how to see the impact this will have on your cardinality, check out Understand and query high cardinality metrics.
Restrictions
All restrictions that apply to drop attribute rules also apply to drop attributes from dimensional metric rollup rules, with the additional restriction that you can only target the `MetricAggregate` data type. They also do not work on `Metric` queries targeting data created by an events-to-metrics rule or on `Metric` queries targeting timeslice data.
Learn more
Recommendations for learning more:
- NerdGraph basics and terminology
- NRQL basics
- Browse the Support Forum for community discussions about cloud rules.
- For a deep dive into managing data ingest for a complex organization, see Data ingest governance.