LevelDB Dashboard

This page provides a dashboard for viewing data extracted from LevelDB log files. It is designed to work with our LevelDB log parser.

Setup

To begin, import the LOG file from your LevelDB database directory by adding a stanza like this to the logs section of your agent.json file:

{
  path: "ABSOLUTE_PATH_TO_LEVELDB_DIRECTORY/LOG",
  attributes: {parser: "leveldbLog"}
},

Be sure to leave the "parser" field set to "leveldbLog". This will cause the log to be processed by the built-in LevelDB log parser.
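
For context, here is a minimal sketch of what a complete agent.json might look like with this stanza in place. The api_key value and the exact set of surrounding fields are illustrative placeholders; your existing file will contain your real API key and possibly other log entries:

// Minimal illustrative agent.json (placeholder values)
{
  // Your "Write Logs" API key goes here (placeholder).
  api_key: "YOUR_API_KEY",

  logs: [
    {
      // Absolute path to the LOG file in the LevelDB database directory.
      path: "ABSOLUTE_PATH_TO_LEVELDB_DIRECTORY/LOG",
      // Route these messages through the built-in LevelDB log parser.
      attributes: {parser: "leveldbLog"}
    }
  ]
}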

Next, you'll need to install the dashboard. Click the Dashboards dropdown in the navigation bar, choose "New Dashboard", and give the dashboard an appropriate name (e.g. "LevelDB"). Copy the dashboard definition from this page (see below) and paste it into the dashboard editor, replacing the default text. Then click Update File.

When this is done, click the View Dashboard link. Note that data begins to accumulate only after you've updated agent.json, so at first the dashboard may look empty. If you have multiple hosts, choose the appropriate host from the dropdown menu.

Unfortunately, the data available in the LevelDB LOG file is relatively limited. Most messages relate directly or indirectly to compactions. Therefore, the dashboard is only able to display statistics about compactions. However, this does shed light on the amount of data you are writing to LevelDB, as well as the amount of system load that compactions may be imposing.
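
For a sense of the raw material, compaction-related lines in a LevelDB LOG file look roughly like the following (the exact format varies by LevelDB version). Which fields the leveldbLog parser extracts from each line is our assumption, based on the field names used in the dashboard definition below:

2024/01/15-10:23:41.002211 7f2ab3c0 Compacting 4@0 + 1@1 files
2024/01/15-10:23:41.104378 7f2ab3c0 Generated table #12345: 8042 keys, 2103456 bytes
2024/01/15-10:23:41.110532 7f2ab3c0 Compacted 4@0 + 1@1 files => 2103456 bytes

The "Compacting" line would supply the per-level file counts (compact0Files, compact1Files, and so on), the "Generated table" line the newTableKeys and newTableBytes values, and the "Compacted ... => N bytes" line the compactionSizeBytes value.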

Dashboard Definition

(Paste all of the following text into the dashboard editor, replacing any existing text.)

// Dashboard for data parsed from LevelDB logs.

{
  "parameters" : [ {
    "name" : "host",
    "values" : [ "__serverHosts" ] // allow choosing any host from which Scalyr is receiving data
  } ],

  graphs: [
    {
      label: "Bytes Compacted",
      plots: [
        {
          filter: "dataset='leveldb' $serverHost='#host#'",
          facet: "sumPerSecond(compactionSizeBytes)",
          label: "bytes / second (compaction end)"
        },
        {
          filter: "dataset='leveldb' $serverHost='#host#'",
          facet: "sumPerSecond(newTableBytes)",
          label: "bytes / second (tables)"
        }
      ]
    },
    {
      label: "Keys Compacted",
      plots: [
        {
          filter: "dataset='leveldb' $serverHost='#host#'",
          facet: "sumPerSecond(newTableKeys)",
          label: "keys / second"
        }
      ]
    },
    {
      label: "Files Compacted (levels 0-3)",
      graphStyle: "stacked",
      plots: [
        {
          filter: "dataset='leveldb' $serverHost='#host#'",
          facet: "sumPerSecond(compact0Files)",
          label: "L0 files / second"
        },
        {
          filter: "dataset='leveldb' $serverHost='#host#'",
          facet: "sumPerSecond(compact1Files)",
          label: "L1 files / second"
        },
        {
          filter: "dataset='leveldb' $serverHost='#host#'",
          facet: "sumPerSecond(compact2Files)",
          label: "L2 files / second"
        },
        {
          filter: "dataset='leveldb' $serverHost='#host#'",
          facet: "sumPerSecond(compact3Files)",
          label: "L3 files / second"
        }
      ]
    },
    {
      label: "Files Compacted (levels 4-7)",
      graphStyle: "stacked",
      plots: [
        {
          filter: "dataset='leveldb' $serverHost='#host#'",
          facet: "sumPerSecond(compact4Files)",
          label: "L4 files / second"
        },
        {
          filter: "dataset='leveldb' $serverHost='#host#'",
          facet: "sumPerSecond(compact5Files)",
          label: "L5 files / second"
        },
        {
          filter: "dataset='leveldb' $serverHost='#host#'",
          facet: "sumPerSecond(compact6Files)",
          label: "L6 files / second"
        },
        {
          filter: "dataset='leveldb' $serverHost='#host#'",
          facet: "sumPerSecond(compact7Files)",
          label: "L7 files / second"
        }
      ]
    }
  ]
}