Overview
An operational XVM continuously collects raw statistics. The XVM can also be configured to spin up a background thread that periodically performs the following:
- Performs higher level statistical computations such as calculating message rates and average latencies.
- Emits heartbeat messages to be processed by handlers.
- Optionally outputs rendered stats to a trace logger, which is useful in testing and diagnostic situations.
- Optionally writes heartbeat messages containing useful server-wide statistics to a binary transaction log (with zero steady-state allocations), which is useful for zero-garbage capture of performance data in production.
The raw metrics collected by the server are used by the background statistical thread for its computations and can also be retrieved programmatically by an application for its own use.
In this document, we describe:
- how to enable and configure XVM stats collection and emission,
- the higher level statistics calculations performed by the statistics thread,
- and the format of the output of the statistics thread.
Enabling Heartbeats
Heartbeats for an XVM can be enabled via DDL XML using the <heartbeats> element:
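As a point of reference, here is a minimal, hedged sketch of what this can look like in DDL. The setting names are taken from the table below; whether a given setting is expressed as an attribute or a child element should be verified against the DDL Reference:

```xml
<xvm name="my-xvm">
    <!-- Illustrative sketch only: setting names mirror the table below;
         consult the DDL Reference for the exact schema. -->
    <heartbeats enabled="true">
        <collectIndividualThreadStats>true</collectIndividualThreadStats>
        <collectPoolStats>true</collectPoolStats>
        <logging enabled="false"/>
        <tracing enabled="false"/>
    </heartbeats>
</xvm>
```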
Configuration Setting | Default | Description |
---|---|---|
enabled | false | Enable or disable server stats collection and heartbeat emission. Collection of stats and emission of heartbeats can impact application performance from both a latency and throughput standpoint. For applications that are particularly sensitive to performance, it is a good idea to compare performance with and without heartbeats enabled to understand the overhead that is incurred by enabling heartbeats. |
interval | 1000 | The interval in seconds at which server stats will be collected and emitted. |
collectNonZGStats | true | Some statistics collected by the stats collection thread require creating a small amount of garbage. This can be set to false to suppress collection of these stats. |
collectIndividualThreadStats | true | Indicates whether heartbeats will contain stats for each active thread in the JVM. Individual thread stats are useful for diagnosing per-thread CPU usage and wait time (see Thread Stats below). |
collectSeriesStats | true | Indicates whether or not series stats should be included in heartbeats. |
collectSeriesDatapoints | false | Indicates whether or not series stats should report the data points captured for a series statistic. |
maxTrackableSeriesValue | 10 minutes | The maximum value (in microseconds) that can be tracked for reported series histogram timings. Datapoints above this value will be downsampled to this value, but will be reflected in the max value reported in an interval. |
includeMessageTypeStats | false | Sets whether or not message type stats are included in heartbeats (when enabled for the app). When captureMessageTypeStats is enabled for an app, the AepEngine will record select statistics on a per message type basis. Because inclusion of per message type stats can significantly increase the size of heartbeats, inclusion in heartbeats is disabled by default. |
collectPoolStats | true | Indicates whether or not pool stats are collected by the XVM. |
poolDepletionThreshold | 1.0 | Sets the percentage decrement at which a preallocated pool's count must drop to be included in a server heartbeat. This gives monitoring applications advance warning that a preallocated pool may soon be exhausted. By default the depletion threshold triggers inclusion in heartbeats at every 1% depletion of the preallocated count; this can be changed by setting the configuration property nv.server.stats.pool.depletionThreshold to a float value between 0 and 100. Setting it to a value greater than 100 or less than or equal to 0 disables depletion threshold reporting. |
logging | See below | Configures binary logging of heartbeats. Binary heartbeat logging provides a means by which heartbeat data can be captured in a zero garbage fashion. Collection of such heartbeats can be useful in diagnosing performance issues in running apps. |
tracing | See below | Configures trace logging of heartbeats. Enabling textual tracing of heartbeats is a useful way to quickly capture data from server heartbeats for applications that aren't monitoring XVM heartbeats remotely. Textual tracing of heartbeats is not zero garbage and is therefore not suitable for latency-sensitive applications. |
Enabling Global Stats
An XVM collects stats that are enabled for the applications that it contains. The following stats can be enabled and reported in heartbeats:
Environment Properties
Environment Prop | Description |
---|---|
nv.stats.series.samplesize | Controls the default sample size for Series stats. This is the global default size used for capturing latencies; latency stats are collected in a ring buffer which is sampled by the stats thread at each collection interval. Default value: 10240 |
nv.msg.latency.stats | Instructs the platform to collect latency statistics for messages passing through various points in the process pipeline. |
nv.ods.latency.stats | Globally enables collection of application store latencies. |
nv.link.network.stampiots | Instructs low level socket I/O to stamp input/output times on written data. |
Engine Stats
The XVM statistics thread will collect the following latency stats from the apps it contains when they are enabled:
See Also:
- DDL Reference for more details about the above stats.
- AEP Engine Statistics for more detail about app-level stats.
Handling XVM Heartbeats
When heartbeats are enabled, they can be emitted and consumed in several ways, as discussed below.
Heartbeat Tracing
By default, all server statistics tracers are disabled, as trace logging is not zero garbage and introduces CPU overhead in computing statistics. While tracing heartbeats isn't recommended in production, enabling server statistics trace output can be useful for debugging and performance tuning. To enable it, configure the appropriate tracers at the debug level. See the Heartbeat Trace Output section for more detail.
Heartbeat Logging
Applications that are latency sensitive might prefer to leave all tracers disabled to avoid unnecessary allocations and the associated GC activity. As an alternative, it's possible to enable logging of zero-garbage heartbeat messages to a binary transaction log:
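For example, a hedged DDL sketch (patterned on the <heartbeats> settings described above; verify the exact schema against the DDL Reference):

```xml
<heartbeats enabled="true">
    <!-- Writes heartbeats to a binary transaction log with zero
         steady-state allocations; see below for the default log location. -->
    <logging enabled="true"/>
</heartbeats>
```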
When a storeRoot is not set, an XVM will log heartbeats to {XRuntime.getDataDirectory}/server-heartbeats/<xvm-name>-heartbeats.log, which can then be queried and traced from a separate process using the Stats Dump Tool.
Note that at this time binary heartbeat logs do not support rolling collection. Consequently this mechanism is not suitable for long running application instances.
See Also:
- DDL Reference for more options that can be used to configure heartbeat logging.
Heartbeat Event Handlers
Your application can register an event handler for server heartbeats to handle them in process.
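A minimal sketch of such a handler is shown below, assuming annotated event handler dispatch; the import paths and handler signature here are assumptions, so consult the JavaDoc referenced below for the exact API:

```java
// Sketch only: import paths are assumptions to be checked against the JavaDoc.
import com.neeve.aep.annotations.EventHandler;
import com.neeve.server.mon.SrvMonHeartbeatMessage;

public class MyApp {

    // Invoked in-process for each heartbeat emitted by the XVM stats thread.
    @EventHandler
    public void onHeartbeat(final SrvMonHeartbeatMessage heartbeat) {
        // Inspect the heartbeat here, e.g. to alert on pool depletion or
        // thread CPU saturation carried in the heartbeat's stats.
        System.out.println("Received XVM heartbeat: " + heartbeat);
    }
}
```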
See the SrvMonHeartbeatMessage JavaDoc for API details.
Admin Clients
A Lumino or Robin controller can also connect to a server via a direct admin connection over TCP to listen for heartbeats for monitoring purposes. The XVM's stats thread will queue a copy of each emitted heartbeat to each connected admin client.
Heartbeat Trace Output
Heartbeat trace is emitted to the nv.server.heartbeat logger at the INFO level. Trace is only emitted for the types of heartbeat trace that have been enabled. This section discusses the various types of heartbeat trace, how each type is enabled, and how to interpret the trace output for each type.
See Also:
- X Platform Tracing and Logging for general information on trace logging.
System Stats
Sample Trace Output
    [System Stats] Sat May 13 12:14:03 PDT 2017 'market' server (pid=54449) 2 apps (collection time=0 ns)
    System: 20 processors, load average: 0.73 (load 0.10 process, 0.10 total system)
    Memory (system): 94.4G total, 89.8G free, 5.5G committed (Swap: 96.6G total, 96.6G free)
    Memory (proc): HEAP 1.5G init, 522M used, 1.5G commit, 1.5G max NON-HEAP 2M init, 47M used, 48M commit, 0K max
    Disk: [/ Total: 49.2GB, Usable: 18GB, Free: 18GB] [/dev/shm Total: 47.2GB, Usable: 47.2GB, Free: 47.2GB] [/boot Total: 484.2MB, Usable: 422.4MB, Free: 422.4MB] [/home Total: 405.2GB, Usable: 267GB, Free: 267GB] [/distributions Total: 196.9GB, Usable: 8.1GB, Free: 8.1GB]
    Threads: 20 total (16 daemon) 21 peak
    JIT: HotSpot 64-Bit Tiered Compilers, time: 2959 ms
    GC: ...ParNew [0 collections, commulative time: 0 ms] ...MarkSweepCompact [1 collections, commulative time: 54 ms]
The above trace can be interpreted as follows:
General Info
- Date and time that statistics gathering started
- Server name
- Server PID
- Number of apps running in the server
- Time spent gathering server statistics (for the current interval, excluding formatting)
System Info
- Number of available processors
- System load average
Memory Info
For the entire system:
- Total available memory
- The free memory
- Committed memory
- Swap total/free
For the process:
- Initial heap size
- Heap used
- Heap committed
- Max heap size
- Initial non-heap size
- Non-heap memory used
- Non-heap memory committed
- Non-heap memory max size
For more info regarding the process statistics above, you can reference the Oracle JavaDoc on MemoryUsage.
JDK 7 or newer is needed to collect all available memory stats. In addition, some stats are not available on all JVMs.
Disk
For each volume available:
- Total space
- Usable space
- Available space.
Listing of disk system roots requires JDK 7+; with JDK 6 or below, some disk information may not be available.
Thread Info
- Total thread count
- Daemon thread count
- Peak thread count
JIT Info
- JIT name
- Total compilation time
Compare 2 consecutive intervals to determine whether JIT compilation occurred in the interval.
GC Info
- Collection count (for all GCs)
- Collection time (for all GCs)
Compare 2 consecutive intervals to determine if a GC occurred in the interval.
Thread Stats
SINCE 3.7
Individual thread stats can be traced by setting the following in DDL:
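For example (a hedged sketch; the traceThreadStats element name is patterned on this section's stat categories and should be verified against the DDL Reference):

```xml
<heartbeats enabled="true">
    <tracing enabled="true">
        <!-- Assumed element name: enables per-thread stats trace. -->
        <traceThreadStats>true</traceThreadStats>
    </tracing>
</heartbeats>
```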
Sample Trace Output
    [Thread Stats]
    ID  CPU      DCPU     DUSER   CPU%  USER%  WAIT%  STATE          NAME
    1   6.0s     982.8us  0       1     0      0      RUNNABLE       X-Server-blackbird1-Main (aff=[])
    2   9.3ms    0        0       0     0      0      WAITING        Reference Handler
    3   8.7ms    0        0       0     0      0      WAITING        Finalizer
    4   43.8us   0        0       0     0      0      RUNNABLE       Signal Dispatcher
    23  53.9ms   722.7us  0       1     0      0      RUNNABLE       X-EDP-McastReceiver (aff=[1(s0c1t0)])
    24  26.3ms   426.5us  0       1     0      0      TIMED_WAITING  X-EDP-Timer (aff=[1(s0c1t0)])
    26  1.9s     33.9ms   30.0ms  1     1      0      RUNNABLE       X-Server-blackbird1-StatsRunner (aff=[1(s0c1t0)])
    28  6.9m     10.2s    4.8s    100   48     0      RUNNABLE       X-Server-blackbird1-IOThread-1 (aff=[8(s0c11t0)])
    30  236.6us  0        0       0     0      0      TIMED_WAITING  X-EventMultiplexer-Wakeup-admin (aff=[1(s0c1t0)])
    34  685.4ms  11.5ms   0       1     0      0      TIMED_WAITING  X-EventMultiplexer-Wakeup-blackbird (aff=[1(s0c1t0)])
    35  9.2m     10.3s    10.3s   100   100    100    RUNNABLE       X-ODS-StoreLog-blackbird-1 (aff=[4(s0c4t0)])
    40  9.2m     10.3s    10.3s   100   100    0      RUNNABLE       SorProcessor (aff=[5(s0c8t0)])
    41  11.7ms   0        0       0     0      100    WAITING        X-STEMux-admin-1 (aff=[])
    42  9.0m     10.3s    10.2s   100   99     90     RUNNABLE       X-STEMux-blackbird-2 (aff=[2(s0c2t0)])
    43  7.0m     10.2s    4.8s    100   47     0      RUNNABLE       X-ODS-StoreReplicatorLinkReader-myapp-93323c0d-5e4c-48d7-8cd4-f251963a6310 (aff=[3(s0c3t0)])
    44  52.0ms   973.7us  0       1     0      0      RUNNABLE       X-ODS-StoreLinkAcceptor-1 (aff=[1(s0c1t0)])
    45  58.9ms   1.0ms    0       1     0      0      RUNNABLE       X-EDP-McastReceiver (aff=[1(s0c1t0)])
    46  41.9ms   592.2us  0       1     0      0      TIMED_WAITING  X-EDP-Timer (aff=[1(s0c1t0)])
    48  9.1m     10.3s    10.1s   100   98     98     RUNNABLE       X-AEP-BusManager-IO-blackbird.market (aff=[7(s0c10t0)])
    49  1.1s     0        0       0     0      0      RUNNABLE       X-Client-LinkManagerReader[c43b3977-572f-4366-8524-f17678e71515] (aff=[9(s0c12t0)])
    50  9.1m     10.3s    10.3s   100   100    93     RUNNABLE       X-AEP-BusManager-IO-blackbird.blackbird (aff=[6(s0c9t0)])
Where columns can be interpreted as:
Column | Description |
---|---|
ID | The thread's id |
CPU | The total amount of time in nanoseconds that the thread has executed (as reported by the JMX thread bean) |
DCPU | The amount of time that the thread has executed in user or system mode in the given interval, in nanoseconds (as reported by the JMX thread bean) |
DUSER | The amount of time that the thread has executed in user mode in the given interval in nanoseconds (as reported by the JMX thread bean) |
CPU% | The percentage of CPU time the thread used during the interval (i.e. DCPU * 100 / interval time) |
USER% | The percentage of user mode CPU time the thread used during the interval (i.e. DUSER * 100 / interval time) |
WAIT% | The percentage of the interval during which the thread was recorded in a wait state, such as a busy spin loop or a disruptor wait. Wait times are proactively captured by the platform via code instrumentation that takes a timestamp before and after entering/exiting the wait condition. This means that, unlike CPU% or USER%, this percentage can include time when the thread was not scheduled and consuming CPU resources. Consequently, it is not generally possible to simply subtract WAIT% from CPU% to calculate the amount of time the thread actually executed. For example, if CPU% is 50, WAIT% is also 50, and the interval is 5 seconds, it could be that 2.5 seconds of real work was done while the 2.5 seconds of wait time occurred while the thread was context switched out, or it could be that all 2.5 seconds of wait time coincided with the 2.5 seconds of CPU time and all of the CPU time was spent busy spinning. In other words, WAIT% gives a definitive indication of time that the thread was not doing active work during the interval; how the remaining CPU time overlaps with it is at the mercy of the operating system's thread scheduler. |
STATE | The thread's runnable state at the time of collection |
NAME | The thread name. Note that when affinitization is enabled and the thread has been affinitized, the affinitization information is appended to the thread name. |
CPU times are reported according to the most appropriate short form of:
Unit | Abbreviation |
---|---|
Days | d |
Hours | h |
Minutes | m |
Seconds | s |
Milliseconds | ms |
Microseconds | us |
Nanoseconds | ns |
Pool Stats
Pool stats can be traced by setting the following in DDL:
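For example (a hedged sketch; the tracePoolStats element name is an assumption to be verified against the DDL Reference):

```xml
<heartbeats enabled="true">
    <tracing enabled="true">
        <!-- Assumed element name: enables pool stats trace. -->
        <tracePoolStats>true</tracePoolStats>
    </tracing>
</heartbeats>
```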
To reduce the size of heartbeats, Pool Stats for a given pool are only included when:
- A miss has been recorded for the pool in a given interval and it results in a new object being allocated.
- The number of preallocated objects taken from a pool drops below the configured value for the pool depletion threshold.
Sample Trace Output
    [Pool Stats]
    PUT  DPUT  GET    DGET  HIT  DHIT  MISS   DMISS  GROW  DGROW  EVIC  DEVIC  DWSH  DDWSH  SIZE  PRE  CAP   NAME
    38   0     16.8M  0     38   0     16.8M  0      0     0      0     0      0     0      0     0    1024  iobuf.native-32.20
    1    0     62     0     1    0     61     0      0     0      0     0      0     0      0     0    1024  iobuf.native-64.21
    1    0     1.0M   0     1    0     1.0M   0      0     0      0     0      0     0      0     0    1024  iobuf.native-256.23
    7    0     75     0     7    0     68     0      0     0      0     0      0     0      0     0    1024  iobuf.heap-32.1
Stat | Description |
---|---|
PUT | The overall number of times items were put (returned) to a pool. |
DPUT | The number of times items were put (returned) to a pool since the last time the pool was reported in a heartbeat (the delta). |
GET | The overall number of times an item was taken from a pool. |
DGET | The number of times an item was taken from a pool since the last time the pool was reported in a heartbeat (the delta). |
HIT | The overall number of times that a get from a pool was satisfied by an available item in the pool. |
DHIT | The number of times that a get from a pool was satisfied by an available item in the pool since the last time the pool was reported in a heartbeat (the delta). |
MISS | The overall number of times that a get from a pool could not be satisfied by an available item, resulting in a new allocation. |
DMISS | The number of times that a get from a pool could not be satisfied by an available item, resulting in a new allocation, since the last time the pool was reported in a heartbeat (the delta). |
GROW | The overall number of times the capacity of a pool had to be increased to accommodate returned items. |
DGROW | The number of times the capacity of a pool had to be increased to accommodate returned items since the last time the pool was reported in a heartbeat (the delta). |
EVIC | The overall number of items that were evicted from the pool because the pool did not have adequate capacity to store them. |
DEVIC | The number of items that were evicted from the pool because the pool did not have adequate capacity to store them since the last time the pool was reported in a heartbeat (the delta). |
DWSH | The overall number of times that an item returned to the pool was washed (e.g. fields reset) in the detached pool washer thread. |
DDWSH | The number of times that an item returned to the pool was washed (e.g. fields reset) in the detached pool washer thread since the last time the pool was reported in a heartbeat (the delta). |
SIZE | The number of items currently in the pool and available for pool gets. This number will be 0 if all objects that have been allocated by the pool have been taken. |
PRE | The number of items initially preallocated for the pool. |
CAP | The capacity of the backing array that is allocated to hold available pool items that have been preallocated or returned to the pool. |
NAME | The unique identifier for the pool. |
Engine Stats
Stats collected by the AEP engine underlying your application are also included in heartbeats. Tracing of engine stats can be enabled with the following:
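For example (a hedged sketch; the traceAppStats element name is an assumption to be verified against the DDL Reference):

```xml
<heartbeats enabled="true">
    <tracing enabled="true">
        <!-- Assumed element name: enables engine (app) stats trace. -->
        <traceAppStats>true</traceAppStats>
    </tracing>
</heartbeats>
```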
See AEP Engine Statistics for more detail about engine stats.
User Stats
User stats collected by your application are also included in heartbeats. Tracing of user stats can be enabled with the following:
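For example (a hedged sketch; the traceUserStats element name is an assumption to be verified against the DDL Reference):

```xml
<heartbeats enabled="true">
    <tracing enabled="true">
        <!-- Assumed element name: enables user stats trace. -->
        <traceUserStats>true</traceUserStats>
    </tracing>
</heartbeats>
```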
Sample Trace Output
    [App (ems) User Stats]
    ...Gauges{
    ......EMS Messages Received: 142604
    ......EMS Orders Received: 35651
    ...}
    ...Series{
    ......[In Proc Tick To Trade(sno=35651, #points=150, #skipped=0)
    .........In Proc Tick To Trade(interval): [sample=150, min=72 max=84 mean=75 median=75 75%ile=77 90%ile=79 99%ile=83 99.9%ile=84 99.99%ile=84]
    .........In Proc Tick To Trade (running): [sample=35651, min=72 max=2000 mean=93 median=76 75%ile=82 90%ile=111 99%ile=227 99.9%ile=805 99.99%ile=1197]
    ......[In Proc Time To First Slice(sno=35651, #points=150, #skipped=0)
    .........In Proc Time To First Slice(interval): [sample=150, min=85 max=98 mean=88 median=88 75%ile=90 90%ile=92 99%ile=95 99.9%ile=98 99.99%ile=98]
    .........In Proc Time To First Slice (running): [sample=35651, min=84 max=4469 mean=249 median=88 75%ile=95 90%ile=133 99%ile=283 99.9%ile=3628 99.99%ile=4143]
    ...}
See Also:
- User Defined App Stats for adding stats specific to your application to heartbeats.