The Scalyr service is designed to let you work with all of your monitoring data — not just logs or system metrics — in one place. This page lists all of the ways in which you can get data into Scalyr. Note that all options use secure encryption and real-time data streaming, so your data is always fresh and secure.
- System metrics are automatically uploaded when you install the Scalyr Agent.
- Log files are also covered in the Scalyr Agent installation instructions. You can upload web access logs, system logs, app server logs, and more.
- If you use MySQL, PostgreSQL, Apache, or nginx, install the corresponding Scalyr Agent Plugin to gather performance and usage data.
- The agent can also provide process metrics (per-process resource usage). See the Linux Process Metrics page for instructions.
- If you use Amazon Web Services, you can import CloudWatch metrics, CloudTrail logs, CloudFront logs, ELB access logs, S3 bucket access logs, Redshift audit logs, EC2 spot instance data feeds, other log files in S3, and RDS database logs (e.g. slow query logs).
- Heroku and AppHarbor users can import logs securely using a Logplex drain.
- For Graphite based tools, the Scalyr Agent can masquerade as a Graphite server. See the Graphite Relay section for instructions.
- For applications or devices that use Syslog to transmit logs, the Scalyr Agent can act as a Syslog server.
- Use the HTTP-based Scalyr API or our Java API to build your own custom integrations. From Java, we also provide a logback appender, allowing any log4j or logback-based code to send logs directly to Scalyr. See https://github.com/scalyr/scalyr-logback for instructions.
- To log structured data directly from .NET applications, you can also use Serilog in conjunction with the Scalyr sink.
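For a custom integration over the HTTP API, the core task is assembling an event payload. The sketch below builds a JSON body in the general shape used by Scalyr's addEvents endpoint; the exact field names and the token value are assumptions here, so check them against the current API reference before use.

```python
import json
import time

def build_add_events_payload(token, session_id, message, attrs=None):
    """Build a JSON payload in the general shape of Scalyr's addEvents API.
    Field names are assumed from the public API docs; verify before use."""
    event_attrs = {"message": message}
    if attrs:
        event_attrs.update(attrs)
    return json.dumps({
        "token": token,                  # a "Write Logs" API token
        "session": session_id,           # stable ID for this client session
        "events": [{
            "ts": str(time.time_ns()),   # nanoseconds since epoch, as a string
            "attrs": event_attrs,
        }],
    })

# Hypothetical token and session ID, for illustration only.
payload = build_add_events_payload("WRITE_LOGS_TOKEN", "session-1234",
                                   "user login", {"user": "alice"})
print(payload)
```

You would POST this body to the addEvents endpoint with an HTTP client of your choice; the Java API and logback appender handle this plumbing for you.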
Heroku and AppHarbor
Importing logs from a Heroku application is quick and easy. Simply type the following command on a system where you have the Heroku Toolbelt installed:
heroku drains:add 'https://log.scalyr.com/api/logplex?host=APPNAME&token=(log in to view API tokens)' -a APPNAME
For APPNAME, use the name of your Heroku application. Note that APPNAME appears in two places in the command.
This creates a Heroku "log drain" telling Heroku to deliver log messages to Scalyr. You can list drains by typing heroku drains. To remove a drain, enter the same command you used to add it, but with drains:add changed to drains:remove.
If you're using AppHarbor, configure it to send logs via LogPlex to this URL:
https://log.scalyr.com/api/logplex?token=(log in to view API tokens)&host=AppHarbor&logfile=logplex&parser=AppHarbor
Your logs should begin appearing in Scalyr within seconds. Refresh the Overview page, and look for a server named "heroku".
If you have multiple applications, you can include "host", "logfile", and/or "parser" parameters in the logplex URL. The first two parameters specify the hostname and log file name under which your logs will appear on Scalyr's Overview page. The parser parameter identifies the log format; with Heroku, you can usually omit this parameter and accept the default ("heroku-logplex").
You can use at most 20 distinct Logplex URLs, i.e. 20 different host/logfile/parser combinations.
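As an illustration, a logplex URL that sets all three parameters might look like the following. The host, logfile, and parser values shown here are hypothetical examples; substitute names appropriate for your application.

```
https://log.scalyr.com/api/logplex?token=(log in to view API tokens)&host=web-frontend&logfile=app.log&parser=heroku-logplex
```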
Graphite
Graphite is an open-source system for storing and graphing timeseries data. The Scalyr Agent can masquerade as a Graphite server, acting as a relay to the Scalyr service. This allows you to use Scalyr with any tool that reports metrics using the Graphite network protocol.
To begin with, follow the agent installation instructions to install the Scalyr Agent. It's generally best to install the agent on each server you want to monitor, so that it can provide system metrics and log files, and so that the Graphite data will automatically be tagged as having come from that server.
Then go to the Graphite Monitor page for instructions on enabling Graphite support in the Scalyr Agent.
The Scalyr Agent converts Graphite and OpenTSDB measurements into the Scalyr data model. Consider the following Graphite value:
requests.500.host1 1.03 123456789
This is converted into a Scalyr event with the following fields:
path = requests.500.host1
value = 1.03
timestamp = 123456789
path1 = requests
path2 = 500
path3 = host1
The first three fields are a direct representation of the Graphite data. The pathN fields break down the path into components, enabling flexible queries and aggregation. For instance, the search path1='requests' path3='host1' will match all requests on host1.
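The conversion above can be sketched in a few lines of Python. This is an illustrative reimplementation of the mapping, not the agent's actual code:

```python
def graphite_to_scalyr(line):
    """Convert a plaintext Graphite line ("path value timestamp") into the
    flat Scalyr field set described above (illustrative sketch)."""
    path, value, timestamp = line.split()
    event = {"path": path, "value": float(value), "timestamp": int(timestamp)}
    # Break the dotted path into path1, path2, ... components.
    for i, component in enumerate(path.split("."), start=1):
        event["path%d" % i] = component
    return event

print(graphite_to_scalyr("requests.500.host1 1.03 123456789"))
# → {'path': 'requests.500.host1', 'value': 1.03, 'timestamp': 123456789,
#    'path1': 'requests', 'path2': '500', 'path3': 'host1'}
```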
An OpenTSDB data point looks like this when sent over the network:
put mysql.bytes_received 1287333217 327810227706 schema=foo host=db1
and is transformed to:
metric = mysql.bytes_received
timestamp = 1287333217
value = 327810227706
schema = foo
host = db1
path1 = mysql
path2 = bytes_received
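The OpenTSDB mapping follows the same pattern: the metric, timestamp, and value are carried over directly, trailing key=value tags become fields, and the metric name is split into pathN components. Again, this is an illustrative sketch rather than the agent's actual code:

```python
def tsdb_to_scalyr(line):
    """Convert an OpenTSDB "put" line into the Scalyr field set described
    above (illustrative sketch)."""
    parts = line.split()
    if parts[0] != "put":
        raise ValueError("expected an OpenTSDB 'put' line")
    metric, timestamp, value = parts[1], int(parts[2]), float(parts[3])
    event = {"metric": metric, "timestamp": timestamp, "value": value}
    # Trailing key=value tags become top-level fields.
    for tag in parts[4:]:
        key, _, val = tag.partition("=")
        event[key] = val
    # Break the dotted metric name into path1, path2, ... components.
    for i, component in enumerate(metric.split("."), start=1):
        event["path%d" % i] = component
    return event

print(tsdb_to_scalyr(
    "put mysql.bytes_received 1287333217 327810227706 schema=foo host=db1"))
```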
Use the standard graph page to view Graphite or OpenTSDB data. In the Expression box, enter a search query using the fields described above. Some examples for Graphite data:
$source='graphite' path = 'requests.500.host1' (a single metric)
$source='graphite' path1 = 'requests' path2=500 (requests.500.*)
$source='graphite' path1 = 'requests' path3='host1' (requests.*.host1)
And examples for TSDB data:
$source='tsdb' metric = 'mysql.bytes_received' host='db1'
$source='tsdb' metric = 'mysql.bytes_received'
The Variable field must be set to one of the following:
- value graphs the mean reported value.
- function(value) applies a function to the reported value. See the Graph Functions reference for a full list of available functions.
- rate graphs the number of data values reported (the rate at which events are sent to the relay daemon).