Logpush API configuration

Endpoints

The table below summarizes the available job operations. All the examples on this page are for zone-scoped datasets. Account-scoped datasets should use /accounts/<ACCOUNT_ID> instead of /zones/<ZONE_ID>. For more information, refer to the Log fields page.

The <ZONE_ID> argument is the zone ID (hexadecimal string). The <ACCOUNT_ID> argument is the account ID (hexadecimal string). These arguments can be found using the API's zones endpoint. The <JOB_ID> argument is the numeric job ID. The <DATASET> argument indicates the log category (such as http_requests, spectrum_events, firewall_events, nel_reports, or dns_logs).

Operation | Description | URL
POST | Create job | https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs
GET | Retrieve job | https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs/<JOB_ID>
GET | Retrieve all jobs for all datasets | https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs
GET | Retrieve all jobs for a dataset | https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/datasets/<DATASET>/jobs
GET | Retrieve all available fields for a dataset | https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/datasets/<DATASET>/fields
PUT | Update job | https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs/<JOB_ID>
DELETE | Delete job | https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs/<JOB_ID>
POST | Check whether destination exists | https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/validate/destination/exists
POST | Get ownership challenge | https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/ownership
POST | Validate ownership challenge | https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/ownership/validate
POST | Validate log options | https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/validate/origin

For concrete examples, see the tutorial Manage Logpush with cURL.

Connecting

The Logpush API requires credentials like any other Cloudflare API.

$ curl -s -H "X-Auth-Email: <EMAIL>" -H "X-Auth-Key: <API_KEY>" \
'https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs'
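
If you authenticate with API tokens rather than the legacy API key, the same request works with a Bearer header; <API_TOKEN> here is a placeholder for a token with the appropriate Logs permissions:

$ curl -s -H "Authorization: Bearer <API_TOKEN>" \
'https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs'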

Ownership

Before creating a new job, ownership of the destination must be proven.

To issue an ownership challenge token to your destination:

$ curl -s -X POST https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/ownership \
-H "X-Auth-Email: <EMAIL>" \
-H "X-Auth-Key: <API_KEY>" \
-H "Content-Type: application/json" \
--data '{"destination_conf":"s3://<BUCKET_PATH>?region=us-west-2"}' | jq .

A challenge file will be written to the destination, and its filename will be returned in the response (the filename may be expressed as a path, if appropriate for your destination):

{
  "errors": [],
  "messages": [],
  "result": {
    "valid": true,
    "message": "",
    "filename": "<path-to-challenge-file>.txt"
  },
  "success": true
}

You will need to provide the token contained in the file when creating a job.
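
With the token in hand, you can create the job. The following is a minimal sketch, assuming the S3 destination above and the http_requests dataset; <OWNERSHIP_CHALLENGE_TOKEN> stands for the token read from the challenge file, and the job name is an arbitrary placeholder:

$ curl -s -X POST https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs \
-H "X-Auth-Email: <EMAIL>" \
-H "X-Auth-Key: <API_KEY>" \
-H "Content-Type: application/json" \
--data '{"name":"<DOMAIN_NAME>","destination_conf":"s3://<BUCKET_PATH>?region=us-west-2","dataset":"http_requests","logpull_options":"fields=RayID,ClientIP,EdgeStartTimestamp&timestamps=rfc3339","ownership_challenge":"<OWNERSHIP_CHALLENGE_TOKEN>"}' | jq .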

Destination

You can specify your cloud service provider destination via the required destination_conf parameter.

  • AWS S3: bucket + optional directory + region + optional encryption parameter (if required by your policy); for example: s3://bucket/[dir]?region=<REGION>[&sse=AES256]
  • Datadog: Datadog endpoint URL + Datadog API key + optional parameters; for example: datadog://<DATADOG_ENDPOINT_URL>?header_DD-API-KEY=<DATADOG_API_KEY>&ddsource=cloudflare&service=<SERVICE>&host=<HOST>&ddtags=<TAGS>
  • Google Cloud Storage: bucket + optional directory; for example: gs://bucket/[dir]
  • Microsoft Azure: service-level SAS URL with https replaced by azure + optional directory added before query string; for example: azure://<BlobContainerPath>/[dir]?<QueryString>
  • New Relic: New Relic endpoint URL, which is https://log-api.newrelic.com/log/v1 for the US or https://log-api.eu.newrelic.com/log/v1 for the EU, + a license key + a format; for example, for US: "https://log-api.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare" and for EU: "https://log-api.eu.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare"
  • Splunk: Splunk endpoint URL + Splunk channel ID + insecure-skip-verify flag + Splunk sourcetype + Splunk authorization token; for example: splunk://<SPLUNK_ENDPOINT_URL>?channel=<SPLUNK_CHANNEL_ID>&insecure-skip-verify=<INSECURE_SKIP_VERIFY>&sourcetype=<SOURCE_TYPE>&header_Authorization=<SPLUNK_AUTH_TOKEN>
  • Sumo Logic: HTTP source address URL with https replaced by sumo; for example: sumo://<SumoEndpoint>/receiver/v1/http/<UniqueHTTPCollectorCode>

For S3, Google Cloud Storage, and Azure, logs can be separated into daily subdirectories by using the special string {DATE} in the URL path; for example: s3://mybucket/logs/{DATE}?region=us-east-1&sse=AES256 or azure://myblobcontainer/logs/{DATE}?[QueryString]. It will be substituted with the date in YYYYMMDD format, like 20180523.

For more information on the destination value for your cloud storage provider, consult that provider's documentation on bucket and path naming conventions.

To check if a destination is already in use:

$ curl -s -X POST https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/validate/destination/exists \
-H "X-Auth-Email: <EMAIL>" \
-H "X-Auth-Key: <API_KEY>" \
-H "Content-Type: application/json" \
--data '{"destination_conf":"s3://foo"}' | jq .

Response

{
  "errors": [],
  "messages": [],
  "result": {
    "exists": false
  },
  "success": true
}

Job object

Options

Logpush repeatedly collects logs on your behalf and uploads them to your destination.

The options that you can customize are:

  1. Fields: Refer to Log fields for the currently available fields. The list of fields is also accessible directly from the API: https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/datasets/<DATASET>/fields. Default fields: https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/datasets/<DATASET>/fields/default.
  2. Filter: Use filters to select the events to include in or remove from your logs. For more information, refer to Filters.
  3. Sampling rate: Value can range from 0.001 to 1.0 (inclusive). sample=0.1 means return 10% (1 in 10) of all records. The default value is 1, meaning logs will be unsampled.
  4. Timestamp format: The format in which timestamp fields will be returned. Value options: unixnano (default), unix, rfc3339.
  5. Redaction for CVE-2021-44228: This option will replace every occurrence of ${ with x{. To enable it, set CVE-2021-44228=true.
  6. max_upload_bytes (optional): The maximum uncompressed file size of a batch of logs. This must be at least 5 MB. Note that you cannot set a minimum file size; this means that log files may be much smaller than this batch size.
  7. max_upload_records (optional): The maximum number of log lines per batch. This must be at least 1,000 lines. Note that you cannot specify a minimum number of log lines per batch; this means that log files may contain many fewer lines than this.
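
These options are combined into a single logpull_options string on the job. As an illustrative sketch, the following updates an existing job to sample 10% of records, use RFC 3339 timestamps, and enable the CVE-2021-44228 redaction:

$ curl -s -X PUT https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs/<JOB_ID> \
-H "X-Auth-Email: <EMAIL>" \
-H "X-Auth-Key: <API_KEY>" \
-H "Content-Type: application/json" \
--data '{"logpull_options":"fields=RayID,ClientIP,EdgeStartTimestamp&timestamps=rfc3339&sample=0.1&CVE-2021-44228=true"}' | jq .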

To check if the selected logpull_options are valid:

$ curl -s -X POST https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/validate/origin \
-H "X-Auth-Email: <EMAIL>" \
-H "X-Auth-Key: <API_KEY>" \
-H "Content-Type: application/json" \
--data '{"logpull_options":"fields=RayID,ClientIP,EdgeStartTimestamp&timestamps=rfc3339&CVE-2021-44228=true","dataset":"http_requests"}' | jq .

Response

{
  "errors": [],
  "messages": [],
  "result": {
    "valid": true,
    "message": ""
  },
  "success": true
}

Custom fields

You can add custom fields to your HTTP request log entries in the form of HTTP request headers, HTTP response headers, and cookies. Custom fields configuration applies to all Logpush jobs in a zone that use the HTTP requests dataset. To learn more, refer to Configure custom fields.
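
As a rough sketch of what Configure custom fields describes, custom fields are managed through a rule in the http_log_custom_fields ruleset phase; the header and cookie names below are placeholders only:

$ curl -s -X PUT https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/rulesets/phases/http_log_custom_fields/entrypoint \
-H "X-Auth-Email: <EMAIL>" \
-H "X-Auth-Key: <API_KEY>" \
-H "Content-Type: application/json" \
--data '{"rules":[{"action":"log_custom_field","expression":"true","description":"Log extra headers and a cookie","action_parameters":{"request_fields":[{"name":"content-type"}],"response_fields":[{"name":"server"}],"cookie_fields":[{"name":"session_id"}]}}]}' | jq .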

Audit

The following actions are recorded in Cloudflare Audit Logs: create, update, and delete job.