An operational AEP engine and its underlying components, such as its HA Store and Bus Bindings, can be configured to continuously collect many raw statistics during the course of its operation. Such stats can be reported by a Talon Server in a zero garbage fashion in the form of server heartbeats. Additionally, for quick debugging and diagnostic purposes, the engine can spin up a background statistics thread that periodically computes and traces statistics.
The raw metrics collected by the engine are used by the background statistics thread for its computations and can also be retrieved programmatically by an application for its own use.
In this document, we describe how to configure statistics collection and trace output, the raw metrics collected by an engine, its buses and its store, how the engine statistics thread is started and stopped, and the format of the statistics trace output.
As collection of AEPEngine statistics involves overhead, the engine provides fairly granular controls over the types of stats that are collected, allowing applications to balance the cost of collecting statistics against their usefulness. The table below summarizes the types of stats collection that can be enabled.
Configuration Setting | Default | Description |
---|---|---|
nv.aep.<engine>.latency.stats | false | Indicates whether or not latency stats for the engine should be collected. The latency stats include transaction level latencies that can be used in monitoring the overall latencies of the various legs of a transaction. |
nv.stats.series.samplesize | 10240 | Property that can be used to control the default sampling size for series stats. If the number of datapoints collected in a stats interval exceeds this size, the computation of histogram data will be lossy. Increasing the value reduces loss of datapoints, but results in greater overhead in stats collection in terms of both memory usage and pressure on processor caches. |
nv.aep.<engine>.msgtype.stats | false | Indicates whether or not per-message-type statistics should be collected. When enabled, processing latencies are recorded on a per message basis. Per-message-type stats introduce a fair amount of overhead, so they are typically enabled only during application profiling for applications with stringent performance requirements. |
nv.aep.<engine>.event.latency.stats | false | Indicates whether or not per-event-type latency statistics should be included in a given engine's stats trace output. Event latency statistics can be used to capture the processing latencies for each type of event being processed in the engine. This is useful in determining if particular event types are consuming considerable engine thread processing time. Per-event-type stats introduce a fair amount of overhead, so they are typically enabled only during application profiling for applications with stringent performance requirements. |
nv.msg.latency.stats | false | Property that enables message latency stats tracking. When set to true, timings for messages are captured as they flow through the system. Enablement of these stats is required to collect message bus latency stats. Enabling this property can increase latency due to the overhead of tracking timestamps. |
nv.msgtype.latency.stats | false | Property that enables per-message-type latency stats tracking. When set to true, timings for each message type are individually tracked as separate stats. This can be useful in tracking down issues in which a particular message type is problematic (for example, tracking down a high application handler message processing time). However, it results in higher overhead. |
nv.ods.latency.stats | false | Indicates whether or not store latency statistics are captured. Store latency stats expose statistics related to serializing, replicating, and persisting transactions. |
nv.event.latency.stats | false | Indicates whether or not event latency statistics are captured. Event latency stats record timestamps for enqueue and dequeue of events across event multiplexers, such as the AepEngine's input multiplexer queue. Enabling event latency stats is useful for determining if an engine's event multiplexer queue is backing up by recording the time that events remain on the input queue. |
nv.link.network.stampiots | false | Indicates whether or not timestamps should be stamped on inbound and outbound messages. Disabled by default. Enabling this setting allows engines to provide more detail, in the form of transaction legs, in message latency statistics. |
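As an illustration, the following sketch shows one way to supply these collection properties as system properties before the engine is configured. The engine name "myengine" and the 65536 sample size are placeholders; the same properties can also be passed as -D JVM arguments or environment variables.

// Sketch: enabling stats collection via system properties before engine startup.
// "myengine" is a placeholder engine name; substitute your configured engine name.
public class EnableStatsCollection {
    public static void main(String[] args) {
        // Engine-level transaction latency stats.
        System.setProperty("nv.aep.myengine.latency.stats", "true");
        // Message latency timings (required for message bus latency stats).
        System.setProperty("nv.msg.latency.stats", "true");
        // Stamp inbound/outbound wire timestamps for more detailed transaction legs.
        System.setProperty("nv.link.network.stampiots", "true");
        // A larger series sample size reduces lossy histogram data at high message
        // rates, at the cost of more memory (65536 is an arbitrary illustrative value).
        System.setProperty("nv.stats.series.samplesize", "65536");
        // ... create and start the AEP engine / Talon application here ...
    }
}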
The following output threads can be enabled to trace statistics, which is useful for testing and performance tuning. Enabling these output threads is not required for collecting stats. Statistics trace output is not zero garbage, so in a production scenario it usually makes more sense to collect stats via Xvm Heartbeats, which emit zero garbage heartbeats containing the above statistics.
Configuration Setting | Default | Description |
---|---|---|
nv.aep.<engine>.stats.interval | 0 | The interval (in seconds) at which engine stats will be traced for a given engine. Can be set to a positive integer to indicate the period in seconds at which the engine's stats dump thread will dump recorded engine statistics. Setting a value of 0 disables creation of the stats thread. When enabled, engine stats are traced to the logger 'nv.aep.engine.stats'. NOTE: disabling the engine stats thread only stops stats from being periodically traced. It does not stop the engine from collecting stats; stats can still be collected by an external thread (such as the Talon Server, which reports the stats in server heartbeats). In other words, enabling the stats thread is not a prerequisite for collecting stats, and disabling the stats reporting thread does not stop them from being collected. NOTE: while collection of engine stats is a zero garbage operation, tracing engine stats is not zero garbage when performed by this stats thread. For latency sensitive apps, it is recommended to run in a Talon server, which can collect engine stats and report them in heartbeats in a zero garbage fashion. |
nv.aep.<engine>.sysstats.interval | 0 | The interval (in seconds) at which engine sys stats will be reported. Set to 0 (the default) to completely disable sys stats tracing for a given engine. In most cases, AEP sys stats will not be used; system level stats are instead recorded in the Server Statistics of the server in which the AEPEngine is running. |
nv.event.mux.<name>.stats.interval | 0 | The interval (in seconds) at which multiplexer stats will be traced. Multiplexer stats can also be reported as part of the overall engine stats from the engine stats thread, so there is no need to set this to a non-zero value if nv.aep.<engine>.stats.interval is greater than zero. |
nv.msg.latency.stats.interval | 0 | The interval (in seconds) at which message latency stats are traced. This setting has no effect if nv.msg.latency.stats is false. This allows granular tracing of just message latency stats on a per bus basis. Message latency stats can also be reported as part of the overall engine stats from the engine stats thread, so there is no need to set this to a non-zero value if nv.aep.<engine>.stats.interval is greater than zero. |
nv.aep.busmanager.<engine>.<bus>.stats.interval | 0 | The interval (in seconds) at which bus stats will be traced. Bus stats are reported as part of the overall engine stats from the engine stats thread, so there is no need to set this to a non-zero value if nv.aep.<engine>.stats.interval is greater than zero. When engine stats output is disabled, this can be used to trace only bus stats for a particular message bus. |
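Continuing the sketch above, the trace threads could then be enabled with interval properties (again using placeholder engine and bus names; the 5 second intervals are arbitrary). Note that these intervals only control periodic tracing; the collection properties in the previous table control what is actually gathered.

// Sketch (continues the main method above): enabling periodic stats trace threads.
// "myengine" and "mybus" are placeholder engine/bus names.
System.setProperty("nv.aep.myengine.stats.interval", "5");                   // engine stats every 5 seconds
System.setProperty("nv.aep.busmanager.myengine.mybus.stats.interval", "5");  // bus stats every 5 seconds
System.setProperty("nv.msg.latency.stats.interval", "5");                    // message latency stats every 5 seconds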
An AEP engine collects the following raw metrics during the course of its operation.
Metric | Description |
---|---|
NumFlows | Total number of message flows functioning in the engine. |
NumMsgsRcvdBestEffort | Total number of messages received by the engine on best-effort channels. |
NumMsgsRcvdGuaranteed | Total number of messages received by the engine on guaranteed channels. |
NumMsgsSourced | [EventSourcing Only] |
NumMsgsFiltered | The number of messages that were filtered. |
NumDupMsgsRcvd | Total number of duplicate messages received and discarded by an engine. This metric will always be 0 if duplicate detection has been disabled via the nv.aep.duplicate.checking configuration property. |
NumMsgsSentBestEffort | Total number of messages sent by the engine on best-effort channels. |
NumMsgsSentGuaranteed | Total number of messages sent by the engine on guaranteed channels. |
NumMsgsResent | Total number of messages retransmitted by an engine. |
NumEventsRcvd | Total number of events received by an engine. |
NumFlowEventsRcvd | Total number of flow events received by the engine. |
NumFlowEventsProcSuccess | Total number of successfully processed flow events. |
NumFlowEventsProcFail | Total number of failed flow events. |
NumFlowEventsProcComplete | Total number of flow events whose transactions have completed. |
NumTransactions | Total number of transactions that have been committed or rolled back. |
NumCommitsStarted | Total number of transactions whose commits have been started. |
NumCommitsCompleted | Total number of transactions whose commits have completed. |
NumSendCommitsStarted | Total number of transactions whose send commits have been started. |
NumSendCommitsCompleted | Total number of transactions whose send commits have completed. |
SendCommitCompletionQueueSize | Number of transactions in the send commit completion queue. |
NumStoreCommitsStarted | Total number of transactions whose store commits have been started. |
NumStoreCommitsCompleted | Total number of transactions whose store commits have completed. |
StoreCommitCompletionQueueSize | Number of transactions in the store commit completion queue. |
NumRollbacks | Number of transactions that have been rolled back. |
BackupOutboundQueueSize | [EventSourcing Only] |
BackupOutboundLogQueueSize | [EventSourcing Only] |
OutboundSno | The current outbound sequence number in use by the engine. |
OutboundStableSno | The current 'stable' outbound sequence number. |
Message Type Specific Statistics | Message type specific statistics. |
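These raw metrics can also be sampled programmatically by the application. The sketch below assumes an already started engine; only AepEngine.getStats() is documented here, and the individual getter names shown are hypothetical illustrations that may differ from the actual AepEngineStats API.

// Sketch: reading raw engine metrics from application code.
// NOTE: getNumMsgsRcvdGuaranteed() and getNumTransactions() are hypothetical
// accessor names used for illustration only.
void sampleEngineStats(AepEngine engine) {
    AepEngineStats stats = engine.getStats();
    long guaranteedIn = stats.getNumMsgsRcvdGuaranteed();  // hypothetical accessor
    long transactions = stats.getNumTransactions();        // hypothetical accessor
    System.out.println("guaranteed in=" + guaranteedIn + ", transactions=" + transactions);
}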
The AEP engine's statistics thread periodically computes statistics from the engine's raw metrics and traces the raw and computed statistics to the engine's statistics logger.
The statistics thread can be started/stopped either programmatically or via configuration parameters.
Each AepEngine object is associated with an AepEngineStats object that can be obtained from the engine as follows:
AepEngine.getStats()
An application can start and stop the statistics thread using the startPeriodicOutput() and stopPeriodicOutput() methods, respectively, exported by each AepEngineStats object. For example, the following would start the statistics thread with a periodic output and alert dispatch every 1 second.
AepEngineStats.startPeriodicOutput(1)
Correspondingly, the following would stop the statistics thread:
AepEngineStats.stopPeriodicOutput()
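Putting these calls together, a minimal sketch of driving the statistics thread programmatically (assuming 'engine' is an AepEngine instance that has already been created and started):

// Sketch: programmatically controlling the engine statistics thread.
AepEngineStats stats = engine.getStats();
stats.startPeriodicOutput(1);   // trace stats (and dispatch alerts) every 1 second
// ... run the application workload ...
stats.stopPeriodicOutput();     // stop the periodic stats output thread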
An AEP engine's statistics thread can also be started via the following environment variable or system property.
nv.aep.<engineName>.stats.interval=<interval in seconds>
For example, the following would start the 'forwarder' AEP engine's statistics thread with a periodic output and alert frequency of 5 seconds.
nv.aep.forwarder.stats.interval=5
Once started administratively, the statistics thread remains active until the engine is stopped or the thread is programmatically stopped.
Note: When configuring using environment variables on Unix based systems where the shell does not support "." in environment variables, the "." can be replaced by "_". For example, nv.aep.forwarder.stats.interval=5 can be specified as nv_aep_forwarder_stats_interval=5.
By default, an AEP engine does not collect message type specific stats. To enable message type specific stats, the following should be set in your DDL config:
<apps>
    <app name="MyApp">
        <captureMessageTypeStats>true</captureMessageTypeStats>
    </app>
</apps>
<xvms>
    <xvm name="MyXVM">
        <heartbeats enabled="true" interval="5">
            <includeMessageTypeStats>true</includeMessageTypeStats>
        </heartbeats>
    </xvm>
</xvms>
Transaction latencies traced by an engine stats thread include summary statistics for various phases within the transaction processing pipeline. The meaning of these summary statistics is as follows:
Phase | Description |
---|---|
mpproc | Records the time (in microseconds) spent by the engine dispatching the message to an application. |
mproc | Records latencies for application message process times (in an EventHandler). |
mfilt | Records latencies for application message filtering times (by a message filter). |
msend | Time spent in AepEngine.sendMessage(). The time in the AepEngine's send call. This latency will be a subset of mproc for solicited sends and it includes msendc. |
msendc | Time spent in the AepEngine's core send logic. This leg includes enqueuing the message for delivery in the corresponding bus manager. |
cstart | Time spent from the point the first message of a transaction is received to the time the transaction commit is started. |
cprolo | Time spent from the point where transaction commit is started to send or store commit, whichever occurs first. This latency measures the time taken in any bookkeeping done by the engine prior to committing the transaction to the store (or, for an engine without a store, until outbound messages are released for delivery). |
csend | The send commit latency: i.e. time from when send commit is initiated, to receipt of send completion event. This latency represents the time from when outbound messages for a transaction are released to the time that all acknowledgements for the messages are received. Because this latency includes acknowledgement time, a high value for csend does not necessarily indicate that downstream latency will be affected. The Message Latencies listed below allow this value to be decomposed further. |
ctrans | Time spent from the point the store commit completes to the beginning of the send commit which releases a transaction's outbound messages for delivery. If the engine doesn't have a store, then this statistic is not captured as messages are released immediately. |
cstore | The store commit latency, i.e. time from when store commit is initiated to receipt of store completion event. This latency includes the time spent serializing transaction contents, persisting to the store's transaction log, inter-cluster replication, and replication to backup members including the replication ack. |
cepilo | Time spent from the point the store or the send commit completes, whichever is last, to commit completion. |
cfull | Time spent from the time the first message of a transaction is received to commit completion. |
tleg1 | Records latencies for the first transaction processing leg. Transaction Leg One includes time spent from the point where the first message of a transaction is received to submission of send/store commit. It includes message processing and any overhead associated with transactional bookkeeping done by the engine. |
tleg2 | Records latencies for the second transaction processing leg. Transaction Leg Two includes time spent from the point where the send/store commit completion is received to the submission of store/send commit. |
tleg3 | Records latencies for the third transaction processing leg. Transaction Leg Three includes time spent from the point where the store/send commit completion is received to the completion of the transaction commit. |
inout | Records latencies for receipt of a message to transmission of the last outbound message. |
inack | Records latencies for receipt of a message to stabilization (and upstream acknowledgement for Guaranteed). |
The engine stats thread can also trace summary statistics for its message buses. Each message bus is wrapped by a Bus Manager which handles bus connect and reconnect, and also provides transactional semantics around the bus by queuing up messages that will be sent as part of an engine transaction. Each Bus Manager maintains its statistics across bus binding reconnects, allowing continuous stats even as the underlying binding reconnects. The following sections break these statistics down in more detail.
Immediately following the Bus Manager "header" are statistics that relate to message volumes and rates across the managed bus:
[Bus Manager (<Engine>.<Bus>)]
In.........{
...Msg{<NumMsgsRcvd>(<DeltaNumMsgsRcvd>) <MsgRecvRate>(<DeltaMsgRecvRate>) <NumMsgsInBatches>(<DeltaNumMsgsInBatches>) <MsgsInBatchRecvRate>(<DeltaMsgsInBatchRecvRate>) <NumMsgBatchesRcvd>(<DeltaNumMsgBatchesRcvd>) <MsgBatchRecvRate>(<DeltaMsgBatchRecvRate>) <AvgRecvBatchSize>(<DeltaAvgRecvBatchSize>)}
...Ack{<NumAcksSent>(<DeltaNumAcksSent>) <AcksSentRate>(<DeltaAcksSentRate>)}
...Pkt{<NumPacketsRcvd>(<DeltaNumPacketsRcvd>) <PacketRecvRate>(<DeltaPacketRecvRate>)}
...Stab{<NumStabilityRcvd>(<DeltaNumStabilityRcvd>) <StabilityRate>(<DeltaStabilityRate>) <NumStabilityBatchesRcvd>(<DeltaNumStabilityBatchesRcvd>) <StabilityBatchesRate>(<DeltaStabilityBatchesRate>) <AvgStabilityBatchSize>(<DeltaAvgStabilityBatchSize>)}
}
Out........{
...Msg{<NumMsgsEnqueued>(<DeltaNumMsgsEnqueued>) <MsgEnqueueRate>(<DeltaMsgEnqueueRate>) <NumMsgsSent>(<DeltaNumMsgsSent>) <MsgSendRate>(<DeltaMsgSendRate>) <BacklogSize>}
...Flush{<NumFlushes>(<DeltaNumFlushes>) <FlushRate>(<DeltaFlushRate>) <NumFlushesSync>(<SyncFlushPct>%)(<DeltaNumFlushesSync>(<DeltaSyncFlushPct>%)) <FlushSyncRate>(<DeltaFlushSyncRate>) <NumFlushesAsync>(<AsyncFlushPct>%)(<DeltaNumFlushesAsync>(<DeltaAsyncFlushPct>%)) <FlushAsyncRate>(<DeltaFlushAsyncRate>) <NumAsyncFlushCompletions>(<DeltaNumAsyncFlushCompletions>) <AsyncFlushCompletionRate>(<DeltaFlushAsyncRate>)}
...Msg{<NumMsgsFlushed>(<DeltaNumMsgsFlushed>) <MsgFlushRate>(<DeltaMsgFlushRate>) <NumMsgsFlushedSync>(<SyncMsgFlushPct>%)(<DeltaNumMsgsFlushedSync>(<DeltaSyncMsgFlushPct>%)) <MsgFlushRateSync>(<DeltaMsgFlushRateSync>) <NumMsgsFlushedAsync>(<AsyncMsgFlushPct>%)(<DeltaNumMsgsFlushedAsync>(<DeltaAsyncMsgFlushPct>%)) <MsgFlushAsyncRate>(<DeltaMsgFlushAsyncRate>)}
}
Txn........{<NumCommits>(<DeltaNumCommits>) <CommitRate>(<DeltaCommitRate>) <NumRollbacks>(<DeltaNumRollbacks>) <RollbackRate>(<DeltaRollbackRate>)}
The raw metrics from which these statistics are computed are as follows:
Field | Description |
---|---|
NumMsgsRcvd | The number of messages received by the bus. |
NumMsgsInBatches | The number of messages received by the bus that were part of a batch. |
NumMsgBatchesRcvd | The number of message batches received by the bus. |
NumPacketsRcvd | The number of raw packets received by the bus. |
NumAcksSent | The total number of acknowledgements sent upstream for received messages by this bus. |
NumStabilityRcvd | The number of stability events (acks) received by this bus. |
NumStabilityBatchesRcvd | The number of batched stability events received by this bus. |
NumMsgsEnqueued | The total number of batch messages enqueued for delivery by this bus. |
NumMsgsSent | The total number of enqueued messages that were actually sent by the bus. |
NumFlushesSync | The number of times this bus has been synchronously flushed. |
NumFlushesAsync | The number of times this bus has been asynchronously flushed. |
NumMsgsFlushedSync | The number of messages flushed by synchronous flushes. |
NumMsgsFlushedAsync | The number of messages flushed by asynchronous flushes. |
NumAsyncFlushCompletions | The number of asynchronous flushes for this bus that have completed. |
NumCommits | The number of transactions committed by the bus. |
NumRollbacks | The number of transactions rolled back by the bus. |
Disruptor statistics follow the message and transaction statistics:
Disruptor..{[<DisruptorNumUsed> of <DisruptorNumAvailable>] <DisruptorUsagePct>% (<DisruptorClaimStrategy>, <DisruptorWaitStrategy>)}
When a Bus Manager is configured for detached send (aka detached commit), a transaction's outbound messages are dispatched on the Bus Manager's I/O thread. The "Offer to Poll" latency measures the time from when messages are released for delivery, after being stabilized in the store, until the detached bus manager picks them up for send. High o2p latencies in a Bus Manager may indicate that messages are being released for send faster than the bus can actually send them.
After the disruptor statistics are counters indicating the number of connected clients, active channels and binding failures:
Clients....{<NumClients>}
The raw metrics from which these statistics are computed are as follows:
Field | Description |
---|---|
NumClients | The number of connected clients (if applicable). |
NumChannels | The number of channels brought up by this bus. |
NumFails | The number of binding failures that have occurred for this bus. |
Messaging latencies follow the clients, channels and fails output. The following latency statistics relate to the bus manager's message handling pipeline:
Phase | Description |
---|---|
c2o | The create to send latencies in microseconds: the time from message creation to when send was called for it. |
o2s | The send to serialize latencies in microseconds, the time from when the message was sent until it was serialized in preparation for transmission on the wire. For an engine with a store this will include the time from the application's send call, the replication hop (if there is a store), and time through the bus manager's disruptor if detached commit is enabled for the bus manager. |
s | The serialize latencies in microseconds, the time spent serializing the MessageView to its transport encoding. |
s2w | The serialize to wire latencies in microseconds, the time post serialize to just before the message is written to the wire. |
w | The wire latencies in microseconds, the time an inbound message spent on the wire: the time from when the message was written to the wire by the sender to the time it was received off the wire by the receiver. Note that this metric is subject to clock skew when the sending and receiving sides are on different hosts. |
w2d | The time from when the serialized form was received from the wire to deserialization. |
d | The time (in microseconds) spent deserializing the message and wrapping it in a MessageView. |
d2i | The time (in microseconds) from when the message was deserialized to when it is received by the engine. This measures the time from when the message has been deserialized by the bus to when the app's engine picks it up from its input queue, before it dispatches it to an application handler (it includes the o2p time of the engine's disruptor). Additional time spent by the engine dispatching the message to the application handler is covered by mpproc (see the Transaction Latencies table). |
o2i | The origin to receive latencies in microseconds. The time from when a message was originally created to when it was received by the binding. |
w2w | The wire to wire latencies in microseconds, for outbound messages the time from when the corresponding inbound message was received off the wire to when the outbound message was written to the wire. |
The Event Multiplexer reports statistics describing the latency between:
the point at which an event is offered to the multiplexer by a thread and the point at which it is actually enqueued for processing – i.e. the o2p statistics traced per Feeder Queue.
Feeder Queues (max=16, lastDecongest=0)
the time between an event being enqueued for processing and actually being dequeued for processing by the multiplexer – i.e. the o2p statistics traced for the Event Multiplexer's Disruptor.
[Event Multiplexer (ems)]
The engine stats thread also traces summary statistics related to an engine's store (when the engine has one). The following sections break these statistics down in more detail.
[Store (<Store>)] {role=<Role> state=Open size=0}
Commit..{
...In{<NumCommitsRcvd>(<CommitRecvRate> <DeltaCommitRecvRate>) <PacketsRcvd>(<PacketRecvRate> <DeltaPacketRecvRate>) }
...Out{<NumCommitsSent>(<CommitSendRate> <DeltaCommitSendRate>) <PacketsSent>(<PacketSendRate> <DeltaPacketSendRate>) }
...Complete{<CommitCompletionsRcvd>(<CommitCompletionRecvRate> <DeltaCommitCompletionRecvRate>) <CommitCompletionsSent>(<CommitCompletionSendRate> <DeltaCommitCompletionSendRate>) }
}
The raw metrics from which these statistics are computed are as follows:
Field | Description |
---|---|
NumCommitsRcvd | The number of committed transactions replicated to this store. |
PacketsRcvd | The number of committed entries replicated to this store. |
NumCommitsSent | The number of committed transactions replicated by the store. |
PacketsSent | The number of committed entries replicated by the store. |
CommitCompletionsRcvd | The number of commit acknowledgements received by this store member from followers. |
CommitCompletionsSent | The number of commit acknowledgements sent by this store member to the leader. |
Phase | Description |
---|---|
cqs | The number of entries committed per commit. |
s | The amount of time spent serializing transaction entries in preparation for replication and persistence. |
s2w | The time between serializing transaction entries to the last entry being written to the wire (but not stabilized) for replication. |
s2p | The time between serializing transaction entries to the time that entries have been passed to the persister for write to disk (but not yet synced). |
w | The commit wire time. For a store in the primary role, wire latency captures the time from the last commit entry being replicated until the last commit ack is received for the transaction; in other words, the round trip time. For a store in the backup role, wire latency captures the amount of time from when the primary wrote the last commit entry to the wire to the time it was received by the backup. When primary and backup are on different hosts, this statistic is subject to clock skew and could even be negative. |
w2d | The time between receipt of a commit packet to the point at which deserialization of the entries has started (by a backup store member). |
d | The time spent deserializing a transaction's entries (by a backup store member). |
per | The amount of time spent persisting transaction entries to disk. |
icr | The amount of time spent sending transaction entries via an ICR Sender. |
idx | The index latency records the time spent indexing records during commit. |
c | The commit latency records the latency from commit start to the commit being stabilized to follower members and / or to disk. |
A started engine statistics thread computes the following at the output frequency at which it was configured (programmatically or administratively).
See Appendix A
An AEP engine's statistics thread uses a named trace logger to log the raw and computed statistics output. The following is the name of the logger used by an engine:
nv.aep.<engineName>.stats
For example, the 'forwarder' engine uses the following logger to log statistics output:
nv.aep.forwarder.stats
The configured logger can be either a native logger or an SLF4J logger. Refer to the X Platform Tracing and Logging document for details on X Platform trace loggers.
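As an illustrative sketch (assuming SLF4J is the logging facade in use), an application can address the stats logger by this name; whether its output actually appears depends on how the bound logging backend is configured for that logger.

// Sketch: resolving the 'forwarder' engine's stats trace logger via SLF4J.
org.slf4j.Logger statsLogger = org.slf4j.LoggerFactory.getLogger("nv.aep.forwarder.stats");
System.out.println("stats logger info enabled: " + statsLogger.isInfoEnabled());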
The following is a sample of the statistics output by an AEP engine's statistics thread:
<11,33440,wsip-24-120-50-130.lv.lv.cox.net> 20130204-03:09:43:338 (inf)...[ <nv.aep.aep.forwarder.stats> STATS] Flows{1} Msg{In{25,901(364 0) 25,901(364 0) 0(0 0) 0(0 0) 0X(0 0) (0)} Out{25,901(364 0) 25,901(364 0) 0(0 0) (25,901 25,901 0 0) (0)} Latency{InOut{0 us} InAck{0 us}}} Ev{51,806(728 0) 25,901[25,901,0,25,901](364 0)} Txn{25,902[(25,902,25,902),(25,902,25,902 (0)),(0,0 (0)),0](364 0) 0} Store{-1} [Message Type Specific] OrderEvent In{0(0 0) 0(0 0) 0(0 0) 0(0 0) (0)} Out{25,901(364 0) 25,901(364 0) 0(0 0) (0)} Trade In{25,901(364 0) 25,901(364 0) 0(0 0) 0(0 0) (0)} Out{0(0 0) 0(0 0) 0(0 0) (0)}
The above output comprises three sections:
The trace header is the standard trace header.
<11,33440,wsip-24-120-50-130.lv.lv.cox.net> 20130204-03:09:43:338 (inf)...[ <nv.aep.aep.forwarder.stats> STATS]

The header above is the header output by the native X trace logger. When using SLF4J, the header will be appropriate to the configuration of the concrete logger bound to SLF4J.
The next part of the output is the raw and computed statistics output by the statistics thread. The next sections explain the different sections of this output:
Flows{<NumFlows>} Msg{In{25,901(364 0) 25,901(364 0) 0(0 0) 0(0 0) 0X(0 0) (0)} Out{25,901(364 0) 25,901(364 0) 0(0 0) (25,901 25,901 0 0) (0)} Latency{InOut{0 us} InAck{0 us}}} Ev{51,806(728 0) 25,901[25,901,0,25,901](364 0)} Txn{25,902[(25,902,25,902),(25,902,25,902 (0)),(0,0 (0)),0](364 0) 0} Store{-1}
Flows{1} Msg{In{<NumMsgsRcvd>(<MsgRecvRate> <DeltaMsgRecvRate>) 25,901(364 0) 0(0 0) 0(0 0) 0X(0 0) (0)} Out{25,901(364 0) 25,901(364 0) 0(0 0) (25,901 25,901 0 0) (0)} Latency{InOut{0 us} InAck{0 us}}} Ev{51,806(728 0) 25,901[25,901,0,25,901](364 0)} Txn{25,902[(25,902,25,902),(25,902,25,902 (0)),(0,0 (0)),0](364 0) 0} Store{-1}
Flows{1} Msg{In{25,901(364 0) <NumBestEffortMsgsRcvd>(<BestEffortMsgRecvRate> <DeltaBestEffortMsgRecvRate>) 0(0 0) 0(0 0) 0X(0 0) (0)} Out{25,901(364 0) 25,901(364 0) 0(0 0) (25,901 25,901 0 0) (0)} Latency{InOut{0 us} InAck{0 us}}} Ev{51,806(728 0) 25,901[25,901,0,25,901](364 0)} Txn{25,902[(25,902,25,902),(25,902,25,902 (0)),(0,0 (0)),0](364 0) 0} Store{-1}
Flows{1} Msg{In{25,901(364 0) 25,901(364 0) <NumGuaranteedMsgsRcvd>(<GuaranteedMsgRecvRate> <DeltaGuaranteedMsgRecvRate>) 0(0 0) 0X(0 0) (0)} Out{25,901(364 0) 25,901(364 0) 0(0 0) (25,901 25,901 0 0) (0)} Latency{InOut{0 us} InAck{0 us}}} Ev{51,806(728 0) 25,901[25,901,0,25,901](364 0)} Txn{25,902[(25,902,25,902),(25,902,25,902 (0)),(0,0 (0)),0](364 0) 0} Store{-1}
Flows{1} Msg{In{25,901(364 0) 25,901(364 0) 0(0 0) <NumMsgsSourced>(<MsgSourceRate> <DeltaMsgSourceRate>) 0X(0 0) (0)} Out{25,901(364 0) 25,901(364 0) 0(0 0) (25,901 25,901 0 0) (0)} Latency{InOut{0 us} InAck{0 us}}} Ev{51,806(728 0) 25,901[25,901,0,25,901](364 0)} Txn{25,902[(25,902,25,902),(25,902,25,902 (0)),(0,0 (0)),0](364 0) 0} Store{-1}
Flows{1} Msg{In{25,901(364 0) 25,901(364 0) 0(0 0) 0(0 0) <NumMsgsFiltered>X(<MsgFilterRate> <DeltaMsgFilterRate>) (0)} Out{25,901(364 0) 25,901(364 0) 0(0 0) (25,901 25,901 0 0) (0)} Latency{InOut{0 us} InAck{0 us}}} Ev{51,806(728 0) 25,901[25,901,0,25,901](364 0)} Txn{25,902[(25,902,25,902),(25,902,25,902 (0)),(0,0 (0)),0](364 0) 0} Store{-1}
Flows{1} Msg{In{25,901(364 0) 25,901(364 0) 0(0 0) 0(0 0) 0X(0 0) (<NumDupMsgsRcvd>)} Out{25,901(364 0) 25,901(364 0) 0(0 0) (25,901 25,901 0 0) (0)} Latency{InOut{0 us} InAck{0 us}}} Ev{51,806(728 0) 25,901[25,901,0,25,901](364 0)} Txn{25,902[(25,902,25,902),(25,902,25,902 (0)),(0,0 (0)),0](364 0) 0} Store{-1}
Flows{1} Msg{In{25,901(364 0) 25,901(364 0) 0(0 0) 0(0 0) 0X(0 0) (0)} Out{<NumMsgsSent>(<MsgSendRate> <DeltaMsgSendRate>) 25,901(364 0) 0(0 0) (25,901 25,901 0 0) (0)} Latency{InOut{0 us} InAck{0 us}}} Ev{51,806(728 0) 25,901[25,901,0,25,901](364 0)} Txn{25,902[(25,902,25,902),(25,902,25,902 (0)),(0,0 (0)),0](364 0) 0} Store{-1}
Flows{1} Msg{In{25,901(364 0) 25,901(364 0) 0(0 0) 0(0 0) 0X(0 0) (0)} Out{25,901(364 0) <NumBestEffortMsgsSent>(<BestEffortMsgSendRate> <DeltaBestEffortMsgSendRate>) 0(0 0) (25,901 25,901 0 0) (0)} Latency{InOut{0 us} InAck{0 us}}} Ev{51,806(728 0) 25,901[25,901,0,25,901](364 0)} Txn{25,902[(25,902,25,902),(25,902,25,902 (0)),(0,0 (0)),0](364 0) 0} Store{-1}
Flows{1} Msg{In{25,901(364 0) 25,901(364 0) 0(0 0) 0(0 0) 0X(0 0) (0)} Out{25,901(364 0) 25,901(364 0) <NumGuaranteedMsgsSent>(<GuaranteedMsgSendRate> <DeltaGuaranteedMsgSendRate>) (25,901 25,901 0 0) (0)} Latency{InOut{0 us} InAck{0 us}}} Ev{51,806(728 0) 25,901[25,901,0,25,901](364 0)} Txn{25,902[(25,902,25,902),(25,902,25,902 (0)),(0,0 (0)),0](364 0) 0} Store{-1}
Flows{1} Msg{In{25,901(364 0) 25,901(364 0) 0(0 0) 0(0 0) 0X(0 0) (0)} Out{25,901(364 0) 25,901(364 0) 0(0 0) (<OutboundSno> <OutboundStableSno> 0 0) (0)} Latency{InOut{0 us} InAck{0 us}}} Ev{51,806(728 0) 25,901[25,901,0,25,901](364 0)} Txn{25,902[(25,902,25,902),(25,902,25,902 (0)),(0,0 (0)),0](364 0) 0} Store{-1}
Flows{1} Msg{In{25,901(364 0) 25,901(364 0) 0(0 0) 0(0 0) 0X(0 0) (0)} Out{25,901(364 0) 25,901(364 0) 0(0 0) (25,901 25,901 <BackupOutboundQueueSize> <BackupOutboundLogQueueSize>) (0)} Latency{InOut{0 us} InAck{0 us}}} Ev{51,806(728 0) 25,901[25,901,0,25,901](364 0)} Txn{25,902[(25,902,25,902),(25,902,25,902 (0)),(0,0 (0)),0](364 0) 0} Store{-1}
Flows{1} Msg{In{25,901(364 0) 25,901(364 0) 0(0 0) 0(0 0) 0X(0 0) (0)} Out{25,901(364 0) 25,901(364 0) 0(0 0) (25,901 25,901 0 0) (0)} Latency{InOut{0 us} InAck{0 us}}} Ev{<NumEventsRcvd>(<EventRecvRate> <DeltaEventRecvRate>) 25,901[25,901,0,25,901](364 0)} Txn{25,902[(25,902,25,902),(25,902,25,902 (0)),(0,0 (0)),0](364 0) 0} Store{-1}
Flows{1} Msg{In{25,901(364 0) 25,901(364 0) 0(0 0) 0(0 0) 0X(0 0) (0)} Out{25,901(364 0) 25,901(364 0) 0(0 0) (25,901 25,901 0 0) (0)} Latency{InOut{0 us} InAck{0 us}}} Ev{51,806(728 0) <NumFlowEventsRcvd>[<NumFlowEventsProcSuccess>,<NumFlowEventsProcFail>,<NumFlowEventsProcComplete>](<FlowEventRecvRate> <DeltaFlowEventRecvRate>)} Txn{25,902[(25,902,25,902),(25,902,25,902 (0)),(0,0 (0)),0](364 0) 0} Store{-1}
Flows{1} Msg{In{25,901(364 0) 25,901(364 0) 0(0 0) 0(0 0) 0X(0 0) (0)} Out{25,901(364 0) 25,901(364 0) 0(0 0) (25,901 25,901 0 0) (0)} Latency{InOut{0 us} InAck{0 us}}} Ev{51,806(728 0) 25,901[25,901,0,25,901](364 0)} Txn{<NumTransactions>[(<NumCommitsStarted>,<NumCommitsCompleted>),(<NumSendCommitsStarted>,<NumSendCommitsCompleted> (<SendCommitCompletionQueueSize>)),(<NumStoreCommitsStarted>,<NumStoreCommitsCompleted> (<StoreCommitCompletionQueueSize>)),<NumRollbacks>](<TransactionRate> <DeltaTransactionRate>) <NumFlowEventsPerTransaction>} Store{-1}
Flows{1} Msg{In{25,901(364 0) 25,901(364 0) 0(0 0) 0(0 0) 0X(0 0) (0)} Out{25,901(364 0) 25,901(364 0) 0(0 0) (25,901 25,901 0 0) (0)} Latency{InOut{0 us} InAck{0 us}}} Ev{51,806(728 0) 25,901[25,901,0,25,901](364 0)} Txn{25,902[(25,902,25,902),(25,902,25,902 (0)),(0,0 (0)),0](364 0) 0} Store{<EngineStoreSize>}
[Transaction Latencies]
[Event Multiplexer (ems)]
[Store (ems)] [Store Latencies]
[Bus Manager (ems.market)]