In This Section
Overview
This section contains detailed reference information for the X-DDL (Domain Descriptor Language) schema. An X-DDL XML document describes and configures an application deployment, and is used to seed the configuration repository from which an application is loaded. See X Platform Configuration for a general overview of configuration.
A main tenet of the X Platform is to separate out (and shield) an application's business logic from the complex machinery underpinning high availability, performance and scalability. As such, there is a wealth of tuning knobs and configuration options that can be applied without making application code changes. In most cases, Optional values can be omitted.
Optional Values
Many of the configuration options listed here need not be specified by most applications, and in most cases values listed as 'Optional' below should be omitted from an application's configuration as the platform will apply reasonable defaults. A good strategy is to start with a minimal configuration and only add additional configuration options as needed.
Overriding X-DDL Values at Runtime
A JVM is configured from DDL XML via the VMConfigurer class, which accepts the XML either as a file or an input stream. The VMConfigurer parses the XML and seeds the configuration repository that will subsequently be used to configure components as they are loaded.
The values in a DDL XML document can be overridden in two ways: standard variable substitution and X DDL Overrides. In both cases, the properties used for substitution are provided to the parser passed to the VMConfigurer. When running in non-embedded mode, the substitution values provided by the Talon Server to the VMConfigurer are sourced from the properties files specified by 'nv.app.propfile', System Properties, and environment variables, in that order. When running in embedded mode, the application provides the substitution values to the VMConfigurer directly via API, giving the application full control of the property source.
Standard Variable Substitution
VMConfigurer will first substitute any ${VARNAME::DEFAULT} values using the ISubstResolver passed into the configurer (or from the environment if no resolver is passed in).
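For example, a bus descriptor might be declared with a substitutable value (a sketch; the bus name and default descriptor mirror the example discussed next):

```xml
<buses>
    <bus name="frontoffice"
         descriptor="${FRONTOFFICE_BUSDESCRIPTOR::falcon://fastmachine:8040}"/>
</buses>
```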
If the substitution resolver contains a value for FRONTOFFICE_BUSDESCRIPTOR, then that value will be used for the bus descriptor. Otherwise, the default value of "falcon://fastmachine:8040" will be used. This substitution is performed before the XML is parsed, so the ${} syntax can be used even in positions where it would otherwise make the document invalid XML.
SINCE 3.4
Special XML characters in properties that originate from property files will be escaped before they are substituted into the DDL XML. In particular, <, >, &, ", and ' will respectively be replaced by &lt;, &gt;, &amp;, &quot;, and &apos;. Users who prefer to do their own XML escaping can disable this behavior by setting the following property:
SINCE 3.4
Properties defined in the <env> section of an X-DDL document can be used in variable substitution elsewhere in the document. For example:
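A hedged sketch (the property name and values below are illustrative, not platform-defined):

```xml
<env>
    <myapp.host>fastmachine</myapp.host>
</env>
<buses>
    <bus name="frontoffice" descriptor="falcon://${myapp.host}:8040"/>
</buses>
```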
Properties defined in the <env> section of an X-DDL document have the lowest priority of all configuration sources and can easily be overridden using system properties and environment variables. Properties defined in the <env> section may reference other system properties and environment variables, but not other properties defined in the <env> section.
X DDL Overrides
Any attribute or value element listed in this document that has an X DDL Override property can be overridden by passing the corresponding value into the substitution environment used by the VMConfigurer. DDL overrides are particularly useful for applications that internally bundle DDL on the classpath, making it difficult to change at runtime.
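As a concrete sketch, suppose the 'frontoffice' bus from the substitution example above is declared as:

```xml
<buses>
    <bus name="frontoffice"
         descriptor="${FRONTOFFICE_BUSDESCRIPTOR::falcon://fastmachine:8040}"/>
</buses>
```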
In the above case, the value for descriptor can also be overridden using the DDL Override, 'x.buses.bus.frontoffice.descriptor'. So given the following values in the substitution resolver:
- FRONTOFFICE_BUSDESCRIPTOR=falcon://slowmachine:8040
- x.buses.bus.frontoffice.descriptor=p2p://client
the effective descriptor would be 'p2p://client': initial substitution would substitute "falcon://slowmachine:8040", and the DDL override would then replace that value with 'p2p://client', resulting in the bus being localized as a p2p bus.
'enabled' attributes
Throughout the schema, you will notice several elements that have enabled attributes. Specifying a value of "false" for these elements will cause the X-DDL parser to ignore them. This pattern allows such elements to be configured but then disabled at runtime via an environment variable, System property, or DDL override. For example, if the DDL were to configure a storage persister in an app named "forwarderapp":
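A sketch of such a configuration, reconstructed from the override names below (the mainClass and the default value of true are illustrative):

```xml
<apps>
    <app name="forwarderapp" mainClass="com.sample.ForwarderApp">
        <storage>
            <persistence enabled="${FORWARDER_PERSISTER_ENABLED::true}"/>
        </storage>
    </app>
</apps>
```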
then at runtime it could be disabled by launching the application with:
-Dx.apps.app.forwarderapp.storage.persister.enabled=false
or
-DFORWARDER_PERSISTER_ENABLED=false
Changing the override prefix
DDL overrides (except for those in the <env> element) are prefixed with 'x.' to avoid conflicts with other configuration property names in the environment. It is possible to change this prefix by setting:
-Dnv.ddl.override.prefix=myprefix
In this case, 'x.apps.app.forwarderapp.storage.persister.enabled'
would become:
'myprefix.apps.app.forwarderapp.storage.persister.enabled'
Troubleshooting DDL parsing
DDL trace can be enabled by setting -Dnv.ddl.trace=debug (or, when using SLF4J, by setting the 'nv.ddl' logger to TRACE level).
XML Reference
- Overview
- Overriding X-DDL Values at Runtime
- Troubleshooting DDL parsing
- XML Reference
- Environment Configuration
- Message Bus Configuration
- Applications (AepEngine) Configuration
- <apps>
- Application Messaging Configuration
- Application Storage Configuration
- General Application Configuration
- <inboundEventMultiplexing
- <inboundMessageLogging
- <outboundMessageLogging
- </outboundMessageLogging>
- <startupExpectations
- </startupExpectations>
- <messageHandlingPolicy>
- <replicationPolicy>
- <messageSendPolicy>
- <appExceptionHandlingPolicy>
- <quarantineChannel>
- <quarantineMessageKey>
- <messageSendExceptionHandlingPolicy>
- <replicationPolicy>
- <adaptiveCommitBatchCeiling>
- <enableTransactionCommitSuspension>
- <dispatchTransactionStageEvents>
- <replicateSolicitedSends>
- <replicateUnsolicitedSends>
- <sequenceUnsolicitedSends>
- <sequenceUnsolicitedSendsWithSolicitedSends>
- <disposeOnSend>
- <clusterHeartbeatInterval>
- <administrative>
- <stuckAlertEventThreshold>
- <performDuplicateChecking>
- <performMidstreamInitializationValidation>
- <enableSequenceNumberTrace>
- <enableEventTrace>
- <enableTransactionTrace>
- <enableScheduleTrace>
- <enableMessageTrace>
- <messageTraceInJson>
- <messageTraceJsonStyle>
- <messageTraceFilterUnsetFields>
- <messageTraceMetadataDisplayPolicy>
- <maxEnvironmentProviders>
- <enableSendCommitCompleteSequenceAlerts>
- <captureMessageTypeStats>
- <captureTransactionLatencyStats>
- <captureEventLatencyStats>
- <replicateInParallel>
- <preserveChannelJoinsOnStop>
- </app>
- </apps>
- Server Configuration
- Enums Reference
- ChannelQos
- CheckpointingType
- ICRRole
- InboundMessageLoggingPolicy
- InboundMessageLoggingFailurePolicy
- LogEmptinessExpectation
- MessageHandlingPolicy
- MessageSendPolicy
- AppExceptionHandlingPolicy
- MessageSendExceptionHandlingPolicy
- OutboundMessageLoggingPolicy
- OutboundMessageLoggingFailurePolicy
- QueueOfferStrategy
- QueueWaitStrategy
- StoreBindingRoleExpectation
- ReplicationPolicy
- Groups Reference
Environment Configuration
The <env> section allows configuration of the runtime properties accessible through the XRuntime class. The X Platform reserves the prefix 'nv.' for platform configuration; applications are otherwise free to set arbitrary properties in the <env> section. The properties defined in <env> will be stored in the configuration repository and later loaded into XRuntime. Environment properties can be listed either in '.'-separated form, or by breaking the dot-separated levels into hierarchical nodes.

Sample XML Snippet
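A sketch of an <env> section (property names and values are illustrative), showing both the dotted and hierarchical forms:

```xml
<env>
    <!-- '.' separated form -->
    <myapp.maxRetries>3</myapp.maxRetries>
    <!-- hierarchical form: yields the property 'myapp.timeout' -->
    <myapp>
        <timeout>30</timeout>
    </myapp>
</env>
```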
Element | Description |
---|---|
<env></env> | Any XML element with text content will be treated as a property by concatenating its parent node names with '.' separators. If the value is already defined in the set of DDL overrides passed into the parser, the value in XML will be overridden. Unlike other DDL overridable values mentioned in this document, overrides for <env> values aren't prefixed with the DDL override prefix; they are overridden directly with the value passed in. |
Message Bus Configuration
The 'buses' section configures the various messaging buses used globally in the deployment. For example, the sample below configures a messaging bus named 'frontoffice'.
Sample XML Snippet
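A sketch of a <buses> section (the descriptor default, channel names, and qos value are illustrative):

```xml
<buses>
    <bus name="frontoffice"
         descriptor="${FRONTOFFICE_BUSDESCRIPTOR::falcon://fastmachine:8040}"
         enabled="true">
        <channels>
            <channel name="orders" id="1">
                <qos>Guaranteed</qos>
            </channel>
            <channel name="events" id="2"/>
        </channels>
    </bus>
</buses>
```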
Element | Description |
---|---|
<buses><bus | |
name | Defines the bus name which must be unique within a configuration repository. Applications reference their bus by this name. Usage: Required |
descriptor | Defines the bus descriptor. A bus descriptor is used to lookup and configure a message bus provider. Usage: Required |
enabled | If set to false, this bus will not be added to the configuration repository and will not be available for application use. Usage: Optional |
<channels><channel | Configures the channels for this bus. Individual applications that use the bus may use some or all of the channels according to their own configuration and interaction patterns. |
name | Defines and configures a channel within a message bus. An SMA message channel is a named conduit for message exchange between SMA messaging participants. An application's AepEngine will start messaging channels prior to signaling to the application that messaging has started. Usage: Required |
id | The channel id is a numerical identifier of a channel uniquely identifying the channel in its bus. Some bus binding implementations may use this on the wire as a replacement for the string channel name for efficiency, so it is important that the id is consistent across configuration domains. Usage: Optional |
<qos> | When the qos element is not provided, the platform's default QoS value will be used if not specified programmatically by the application. Usage: Optional |
<key> | Specifies the channel's key. Usage: Optional |
</channel></channels></bus></buses> | |
Applications (AepEngine) Configuration
Element | Description |
<apps><app | The <apps> section configures the various applications in the deployment. An application is synonymous with an AEP engine. |
name | Defines the application name which must be unique within an application's configuration domain. A common practice for a clustered application is to use the same name for the application and its store (see storage configuration below). It is best practice not to use a name containing spaces, as the name is used in many contexts, such as scripting, where it is operationally simpler to avoid spaces and special characters. Usage: Required |
mainClass | Specifies the application's main class (e.g. com.acme.MyApplication). An application's main class serves as the main entry point for a Talon Server application. Usage: Required |
enabled | If set to false, this app will be ignored and not saved to the configuration repository. This can be used to disable an application at runtime. However, note that if a persistent configuration repository is in use, this will not cause a previously configured application to be deleted. Usage: Optional |
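A minimal sketch of an application declaration (the app name and main class are illustrative):

```xml
<apps>
    <app name="forwarderapp" mainClass="com.sample.ForwarderApp" enabled="true"/>
</apps>
```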
Application Messaging Configuration
An app's <messaging> element configures the buses used by the application.

Sample XML Snippet
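A sketch of a <messaging> section (the factory and channel names are illustrative):

```xml
<messaging>
    <factories>
        <factory name="com.sample.messages.MessageFactory"/>
    </factories>
    <bus name="frontoffice" enabled="true">
        <channels>
            <channel name="orders" join="true"/>
        </channels>
    </bus>
</messaging>
```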
Element | Description |
<messaging> | Configures messaging for an application. |
<factories> | Configures message factories for an application. Each message factory defined under this element is registered with the application's underlying engine. Registered message factories are used by the message buses to materialize message instances from factory and type ids received over the wire. It is not mandatory to configure factories via DDL, they can also be registered programmatically by the application during application initialization. SINCE 3.4 |
<factory | Configures a message factory used by the app. |
name | The message factory's fully qualified name. Usage: Required |
</factory> | |
</factories> | |
<bus | Configures and registers a bus from the <buses> section for use with the application and registers it with the underlying engine. Each application in the deployment will create its own bus instance, and may configure channel interest in that bus differently depending on their participation in the message flow. |
name | Specifies the name of the bus which should reference a bus from the buses section of this descriptor or one already created and saved in the configuration repository. Usage: Required |
enabled | If set to false, this bus will be ignored and not added to the application's list of buses. Usage: Optional |
<channels> <channel | Configures the bus channels used by the application which will be a subset of those defined for the bus in the descriptor's <buses> section. |
name | Specifies the name of the channel which references a channel defined in the <bus> element in the <buses> section. Usage: Optional |
join | An application that should receive messages on the channel should specify true. Usage: Optional |
</channel> </channels> | Additional channel configurations can be added here. |
<nonBlockingInboundMessageDispatch> | Specifies whether or not enqueue of inbound messages for this bus should block on the application's main inbound event multiplexer. In most cases, this value should either not be specified or set to false. Usage: Optional |
<inboundMessageEventPriority> | Specifies the priority at which messages from this bus should be dispatched to the application's inbound event multiplexer. A negative value is interpreted as higher priority. A positive value will result in delayed processing by the number of milliseconds specified. If not set or set to 0, messages will be dispatched at normal priority. Usage: Optional |
<scheduleSendCommitCompletionEvents> | Indicates whether the bus manager's send commit completion events should be scheduled. Scheduling the completion events allows them to be added to the application's inbound event queue's feeder queue, which can reduce contention with message events. Usage: Optional |
<sendCommitCompletionEventPriority> | Specifies the priority at which send commit completion events from this bus should be dispatched to the application's inbound event multiplexer. A negative value is interpreted as higher priority. A positive value will result in delayed processing by the number of milliseconds specified. If not set or set to 0, events will be dispatched at normal priority. Setting this value higher than message events can reduce message processing latency in some cases. Usage: Optional |
<detachedSend | Configures the detached send event multiplexer thread for the bus. When detached send is disabled, outbound send of messages is performed by the commit processing thread (typically the engine's inbound event multiplexer thread). Enabling detached send can reduce the workload on the commit processing thread, allowing it to process more inbound messages, but this can also incur additional latency. |
enabled | Specifies whether or not detached send is enabled for this bus. Usage: Optional |
<queueDepth> | The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used. See <queueDepth> X DDL Override: |
<queueOfferStrategy> | Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: |
<queueWaitStrategy> | Controls the wait strategy used by the queue's drainer thread when the queue is empty. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: |
<queueDrainerCpuAffinityMask> | Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpu, or a square bracket enclosed comma separated list enumerating the logical CPUs. See <queueDrainerCpuAffinityMask> X DDL Override: |
<queueFeedMaxConcurrency> | Sets the maximum number of threads that will feed the multiplexer's queue. Usage: Optional |
</detachedSend> </bus></messaging> |
Application Storage Configuration
Configures storage options for the application. An application's store provides the foundation for HA and fault tolerance. Applications achieve clustering by configuring the store, which will discover other application members and elect a single primary member through a leader election algorithm. The store serves as the foundation for HA by replicating changes from the primary application member to backups in a highly efficient, pipelined, asynchronous manner - a core requirement for In-Memory Computing. While the primary mechanism for HA is memory-to-memory replication, an application's storage configuration may also configure disk-based persistence as a fallback mechanism in the event that connections to backup instances fail. An application that runs standalone without any persistence does not need to include a store, which is a perfectly valid configuration for an application that does not have HA requirements.
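A sketch of a <storage> section for a clustered, persistent application (the app name, main class, and values are illustrative; element names are drawn from the reference below):

```xml
<apps>
    <app name="forwarderapp" mainClass="com.sample.ForwarderApp">
        <storage enabled="true">
            <clustering enabled="true">
                <storeName>forwarderapp</storeName>
            </clustering>
            <persistence enabled="true">
                <storeRoot>${myapp.storeRoot}</storeRoot>
                <flushOnCommit>true</flushOnCommit>
            </persistence>
        </storage>
    </app>
</apps>
```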
Element | Description |
<storage | See Also: StoreDescriptor |
descriptor | The store descriptor which is used to localize the store to a specific provider. Usage: Required prior to 3.4, Deprecated in 3.4+
Starting with the 3.4 release, this attribute has been deprecated. Store clustering should now be configured via the <clustering> element instead. |
enabled | Can be set to false to disable the store for this application. Usage: Optional |
<factories> | Configures state factories for an application. Each state factory defined under this element is registered with the application's underlying engine and store. Registered state factories are used by the store's replication receiver and transaction log to materialize entity instances from factory and type ids received over the wire. It is not mandatory to configure factories via DDL; they can also be registered programmatically by the application during application initialization. SINCE 3.4 |
<factory | Configures a state factory used by the app. |
name | The state factory's fully qualified name. Usage: Required |
</factory> | |
</factories> | |
<persistenceQuorum> | Sets a store's persistence quorum. The persistence quorum is the minimum number of store members running in a cluster that determines whether persister commits are executed synchronously or not. If the number of members is greater than or equal to the quorum, then persistence commits are always performed asynchronously. Otherwise, they are performed synchronously. Usage: Optional |
<maxPersistSyncBacklog> | When set to a value greater than 0 (in seconds), the store's persister will be periodically synced to disk. This limits the amount of data (in time) that remains unsynced in the event of a JVM failure, which can be useful for low volume applications that are operating above their persistence quorum. Usage: Optional |
<icrQuorum> | Sets a store's ICR quorum. The ICR quorum is the minimum number of store members running in a cluster that determines whether ICR send commits are executed synchronously or not. If the number of members is greater than or equal to the quorum, then ICR commits are always performed asynchronously. Otherwise, they are performed synchronously. Usage: Optional |
<maxIcrSyncBacklog> | When set to a value greater than 0 (in seconds), the store's ICR sender is periodically synced. This limits the amount of data (in time) that remains unsynced in the event of a JVM failure, which can be useful for low volume applications that are operating above their ICR quorum. Usage: Optional |
<checkpointingType> | Sets the store's checkpoint controller type. A checkpoint controller determines the checkpoint boundaries within a transaction log by incrementing the checkpoint version for log entries. The checkpoint version is used by CDC and Log Compaction algorithms as the boundaries upon which those operations occur. Usage: Optional |
<checkpointThreshold> | Sets the store's checkpoint threshold. The threshold controls the maximum number of entries before a transaction log's checkpoint version is increased, a checkpoint controller keeps track of the number of entries that count towards reaching this threshold. Usage: Optional |
<checkpointMaxInterval> | Sets the max time interval (in millis) that can occur before triggering a new checkpoint. Usage: Optional |
<discoveryDescriptor> | Sets the custom discovery descriptor for the store. This element is replaced by <discoveryDescriptor> under the <clustering> element. The documentation below only applies to versions 3.2 and older.
When set, this descriptor is used to load the discovery provider for the store (provided the store implementation uses discovery). Store implementations such as the default (native) store implementation use the discovery provider to broadcast their presence and connection information to cluster peers. In most cases, an application will simply want to use the default discovery configured for the server JVM in which it is running. In such cases, the store descriptor of 'native://.' should be used, and this value need not be set; the store will simply use the default discovery provider. However, in some cases where discovery within the same JVM must be partitioned, it can be useful to specify a separate discovery provider for the store, and this property facilitates that. NOTE on native store binding discovery configuration: When the discovery descriptor is not set by this value in the descriptor, the default (native) store binding implementation will fall back to computing the discovery descriptor from the address of this store descriptor by appending the following:
where, in the above, the 'discovery' portion of the property names is stripped when appending to the discovery descriptor. Unless:
For example, native://mcast://224.0.1.200:4060&discoveryInitWaitTime=5&localIfAddress=myserverhost would yield a discovery descriptor of mcast://224.0.1.200:4060&initWaitTime=5. For discovery descriptors that need more provider configuration parameters than those listed above, this element should be used to set the discovery descriptor.
Usage: Optional |
<failOnMultiplePrimaries> | This property has been deprecated and should be set under the clustering element. Usage: Optional |
<clustering | SINCE 3.4 The clustering element, when enabled, is used to configure store clustering which provides the ability for applications' store members to discover one another and form an HA cluster. |
enabled | Can be set to false to disable store clustering. Usage: Optional |
<storeName> | Sets the name of the store. Applications with the same store name automatically form a cluster. If this configuration parameter is not specified, then the application name is used as the store name. Usage: Optional |
<localIfAddr> | Sets the local network interface to bind to when establishing cluster network connections. Usage: Optional |
<localPort> | Sets the TCP port to bind to when listening for cluster connections. Usage: Optional |
<linkParams> | A comma-separated set of key=value pairs that serve as additional configuration parameters for the network connections between the cluster members. Usage: Optional |
<linkReaderCpuAffinityMask> | Sets the CPU affinity mask to use for the cluster connection reader thread. Each cluster member uses a single thread to read replication traffic from other cluster members. The affinity string can either be a long that represents a mask of logical cpu, or a square bracket enclosed comma separated list enumerating the logical cpus. For example, specifying "1" or "[0]" indicates Core 0. "3" or "[0, 1]" would indicate Core 0 or Core 1. Specifying a value of "0" indicates that the thread should be affinitized to the platform's default cpu, and omitting this value indicates that the thread should be affinitized according to the platform's default policy for the multiplexer. See com.neeve.util.UtlThread.setCPUAffinityMask(String) for details. Usage: Optional |
<discoveryDescriptor> | Sets the custom discovery descriptor for the store. When set, this descriptor is used to load the discovery provider for the store. In most cases, an application will simply want to use the default discovery configured for the server JVM in which it is running; in such cases this value need not be set, and the store will simply use the default discovery provider. However, in some cases where discovery within the same JVM must be partitioned, it can be useful to specify a separate discovery provider for the store, and this property facilitates that. Usage: Optional |
<initWaitTime> | Sets the time, in milliseconds, that the store cluster manager will wait on open for the cluster to stabilize. When a store binding opens its binding to the store, it joins the discovery network to discover other store cluster members. Once discovered, the members need to connect to each other, perform handshakes and elect roles. This parameter governs how long, after the binding has joined the discovery network, the cluster manager waits for the store cluster to "stabilize". Usage: Optional |
<failOnMultiplePrimaries> | Sets whether a store cluster manager should fail the store binding on detecting multiple primaries in a cluster. The default policy is to fail on detecting multiple primaries. This means that if multiple primaries are detected, the members detected as primaries will shut down to prevent a "split-brain" situation. If this parameter is set to false, then the members detected as primaries will not establish connectivity with each other and will continue to operate independently as primaries. Usage: Optional |
<detachedSend | Configures whether outbound replication traffic is sent by the engine thread or handed off to a detached replicator sender thread. Usage: Optional |
enabled | Can be set to true to enable detached sends for store replication. Usage: Optional |
<queueDepth> | The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified the platform's default value for the multiplexer will be used. See <queueDepth> X DDL Override: |
<queueOfferStrategy> | Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: |
<queueWaitStrategy> | Controls the wait strategy used by the queue's drainer thread when the queue is empty. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: |
<queueDrainerCpuAffinityMask> | Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpu, or a square bracket enclosed comma separated list enumerating the logical cpus. See <queueDrainerCpuAffinityMask> X DDL Override: |
<queueFeedMaxConcurrency> | Sets the maximum number of threads that will feed the multiplexer's queue. Usage: Optional |
</detachedSend> | |
<detachedDispatch | Configures whether or not to dispatch the inbound replication traffic and events by the replication link reader thread or pass the dispatch off to a detached replicator dispatcher thread. Usage: Optional This feature is currently an experimental feature and is not supported for production usage. It is expected to be supported in a future release. |
enabled | Can be set to true to enable detached dispatch for store replication. Usage: Optional |
<queueDepth> | The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified the platform's default value for the multiplexer will be used. See <queueDepth> X DDL Override: |
<queueOfferStrategy> | Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: |
<queueWaitStrategy> | Controls the wait strategy used by the queue's drainer thread when the queue is empty. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: |
<queueDrainerCpuAffinityMask> | Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpu or a square bracket enclosed comma separated list enumerating the logical cpus. See <queueDrainerCpuAffinityMask> X DDL Override: |
<queueFeedMaxConcurrency> | Sets the maximum number of threads that will feed the multiplexer's queue. Usage: Optional |
</detachedDispatch> </clustering> | |
<persistence | Configures the persister for this store. A persister is responsible for storing the store's transactional updates to disk (or some other recoverable storage medium). Persisters typically serve as a secondary fault tolerance mechanism for clustered applications, but for an application that will only operate standalone, this can serve as the primary mechanism for fault tolerance. Usage: Optional See Also: StorePersisterDescriptor |
class | Can be set to the fully qualified classname of a custom implementation of a store persister class. If omitted or "native" is specified, then the platform's default persister will be used (recommended). Usage: Optional |
enabled | Can be set to false to disable the persister for this store. Usage: Optional |
<autoFlushSize> | In the absence of explicit flushes (e.g. flushOnCommit) of written entries, the size at which a flush is automatically triggered for queued writes. If not set, the platform default (8192) is used. Usage: Optional |
<flushOnCommit> | Whether or not the persister should be flushed on commit. By default a persister buffers writes into an internal buffer and doesn't write to disk until that buffer has filled. Enabling flush on commit will flush the persister regardless of whether the buffer has filled. Usage: Optional |
<autoRepair> | Whether or not an attempt will be made to automatically repair non-empty logs by truncating malformed entries at the end of the log that are part of incomplete transactions. Usage: Optional |
<storeRoot> | Specifies the root folder in which the persister's transaction log files are located. It is common to parameterize this value, e.g. <storeRoot>${myapp.storeRoot}</storeRoot>, so that you can customize its location at runtime as appropriate to the environment in which you are launching. Usage: Optional |
<shared> | Whether or not the persister is shared. If omitted or false, the application will use shared-nothing persistence. If true, it indicates that the persister is using the same physical storage for primaries and backups, meaning that instances in a backup role will not persist to disk and will leave persistence to the primary. In most cases applications will leave this as false. Usage: Optional |
<cdcEnabled> | Whether CDC is enabled on the log. If CDC is not enabled, then a CDC processor run on a log will not dispatch any events. If CDC is disabled on a log and then re-enabled later, CDC will start from the live log at the time CDC is enabled; if a compaction occurred while CDC was disabled, then the change events that occurred during that time will be lost. In other words, CDC enablement instructs the compactor to preserve data on disk necessary for performing CDC rather than deleting it on compaction. CDC is only supported for applications using StateReplication as an HAPolicy. Usage: Optional |
<compactionThreshold> | Sets the log compaction threshold. The log compaction threshold is the size (in megabytes) that triggers a log compaction. The act of compacting a log will compact as many complete checkpoints in the log as possible and switch the live log over to the compacted log. A threshold value less than or equal to 0 disables live log compaction. Usage: Optional
|
<maxCompactionWindowSize> | The log compaction window is the approximate maximum size (in megabytes), rounded up to the end of the nearest checkpoint, that a compact operation uses to determine how many log entries it will hold in memory. The more entries the compactor can hold in memory while performing a compaction, the more efficient the compact operation will be. Note: The minimum compaction window is a checkpoint. Therefore, if the system is configured such that a checkpoint covers entries that cumulatively exceed the value of this parameter, then this parameter will not reduce the compaction memory usage; rather, the compactor will load the entire checkpoint into memory when performing the compaction operation. Note: When calculating memory needed by the compaction operation, one should multiply this parameter by a factor of 2; i.e. the memory used by compaction will be twice the memory specified via this parameter. Usage: Optional |
<logScavengePolicy> | Sets the policy used to scavenge logs. A log with number N is considered a candidate for scavenging when N is less than the live log number and N is less than the CDC log number. This parameter specifies how such logs are scavenged. Currently, the only permissible value is 'Delete'. Usage: Optional |
<initialLogLength> | Sets the initial file size of the persister's transaction log in gigabytes. Preallocating the transaction log can save costs in growing the file size over time, since the operation of growing a log file may actually result in a write of file data plus the metadata operation of updating the file size, and may also benefit from allocating contiguous sectors on disk. Usage: Optional |
<zeroOutInitial> | Whether the log file should be explicitly zeroed out (to force commit all disk pages) if newly created. Usage: Optional |
<pageSize> | Sets the page size for the disk in bytes. The persister will use this as a hint in several areas to optimize its operation. Usage: Optional |
<detachedPersist | Configures whether or not persister writes are done by the store commit thread or passed off to a detached persister write thread. Offloading the persist to a persister thread can increase store throughput but requires an extra processor core for the persister thread. |
enabled> | Can be set to true to enable detached persister for the persister. Usage: Optional |
<queueDepth> | The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used. See <queueDepth> X DDL Override: x.apps.app.<appname>.storage.persistence.detachedPersist.queueDepth |
<queueOfferStrategy> | Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: x.apps.app.<appname>.storage.persistence.detachedPersist.queueOfferStrategy |
<queueWaitStrategy> | Controls the wait strategy used by the queue's drainer thread when the queue is empty. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: x.apps.app.<appname>.storage.persistence.detachedPersist.queueWaitStrategy |
<queueDrainerCpuAffinityMask> | Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpus, or a square bracket enclosed comma separated list enumerating the logical cpus. See <queueDrainerCpuAffinityMask> X DDL Override: x.apps.app.<appname>.storage.persistence.detachedPersist.queueDrainerCpuAffinityMask |
<queueFeedMaxConcurrency> | Sets the maximum number of threads that will feed the multiplexer's queue. Usage: Optional |
</detachedPersist> </persistence> | |
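Putting the persistence elements above together, a minimal configuration might look like the sketch below. The specific values (queue depth, CPU number, and the ${myapp.storeRoot} property name) are illustrative assumptions, not platform defaults:

```xml
<persistence>
  <autoRepair>true</autoRepair>
  <flushOnCommit>false</flushOnCommit>
  <!-- substitutable so each environment can choose its own disk -->
  <storeRoot>${myapp.storeRoot}</storeRoot>
  <initialLogLength>1</initialLogLength> <!-- preallocate 1 Gb -->
  <detachedPersist enabled="true">
    <queueDepth>1024</queueDepth> <!-- power of 2 -->
    <!-- pin the drainer thread to logical cpu 4 (list form) -->
    <queueDrainerCpuAffinityMask>[4]</queueDrainerCpuAffinityMask>
  </detachedPersist>
</persistence>
```

As recommended above, start with only the elements you need and let the platform default the rest.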
<icr | Configures Inter-cluster Replication (ICR) for the application. |
role | Configures the inter-cluster replication role. See DDL Config Reference Usage: Required |
busDescriptor | Configures the bus descriptor for ICR. ICR uses its own private bus instance created from this descriptor. See DDL Config Reference Usage: Required |
enabled> | Can be set to true to enable inter cluster replication. Usage: Optional |
<shared> | Whether or not an ICR Sender is a shared sender. Applications should set this to true when using ICR with a Standalone Receiver, i.e. when only the primary instance should send updates to the ICR queue. Usage: Optional |
<flushOnCommit> | Whether or not the ICR sender should be flushed on commit. Setting this value to true will flush all updates to the underlying message bus on commit. With a value of false, the bus may buffer some messages until new updates are sent on subsequent commits. Usage: Optional |
<detachedSend | Configures whether or not ICR sends are done by the store commit thread or passed off to a detached send thread. Offloading the send to a sender thread can increase store throughput but requires an extra processor core for the sender thread. When enabled the properties here configure the multiplexer for the detached send thread. |
enabled | Configures whether or not detached ICR send is enabled. Usage: Optional |
<queueDepth> | The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used. See <queueDepth> X DDL Override: x.apps.app.<appname>.storage.icr.detachedSend.queueDepth |
<queueOfferStrategy> | Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: x.apps.app.<appname>.storage.icr.detachedSend.queueOfferStrategy |
<queueWaitStrategy> | Controls the wait strategy used by the queue's drainer thread when the queue is empty. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: x.apps.app.<appname>.storage.icr.detachedSend.queueWaitStrategy |
<queueDrainerCpuAffinityMask> | Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpus, or a square bracket enclosed comma separated list enumerating the logical cpus. See <queueDrainerCpuAffinityMask> X DDL Override: x.apps.app.<appname>.storage.icr.detachedSend.queueDrainerCpuAffinityMask |
<queueFeedMaxConcurrency> | Sets the maximum number of threads that will feed the multiplexer's queue. Usage: Optional |
</detachedSend> </icr></storage> | End of storage configuration (only one storage configuration may be specified per application). |
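As a sketch, an ICR sender configured for a Standalone Receiver with detached send might look like the following. The role value shown and the ${icr.busDescriptor} property are assumptions for illustration; consult the DDL Config Reference for the permissible role values and descriptor syntax:

```xml
<icr role="Sender" busDescriptor="${icr.busDescriptor}" enabled="true">
  <shared>true</shared> <!-- only the primary instance sends updates -->
  <flushOnCommit>true</flushOnCommit>
  <detachedSend enabled="true">
    <queueDepth>1024</queueDepth> <!-- power of 2 -->
  </detachedSend>
</icr>
```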
General Application Configuration | |
---|---|
The remaining elements under the <app> element configure the operation of the application's AepEngine | |
<inboundEventMultiplexing | Configures the AepEngine's inbound event multiplexer. Configures the single AepEngine multiplexer thread that serializes processing of messages, timers, acks and other events for the application. |
<queueDepth> | The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used. See <queueDepth> X DDL Override: x.apps.app.<appname>.inboundEventMultiplexing.queueDepth |
<queueOfferStrategy> | Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: x.apps.app.<appname>.inboundEventMultiplexing.queueOfferStrategy A value of SingleThreaded is almost never appropriate for an AepEngine because many threads dispatch events to an engine. |
<queueWaitStrategy> | Controls the wait strategy used by the queue's drainer thread when the queue is empty. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: x.apps.app.<appname>.inboundEventMultiplexing.queueWaitStrategy |
<queueDrainerCpuAffinityMask> | Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpus, or a square bracket enclosed comma separated list enumerating the logical cpus. See <queueDrainerCpuAffinityMask> X DDL Override: x.apps.app.<appname>.inboundEventMultiplexing.queueDrainerCpuAffinityMask |
<queueFeedMaxConcurrency> | Sets the maximum number of threads that will feed the multiplexer's queue. Usage: Optional |
</inboundEventMultiplexing> | |
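A sketch of tuning the inbound event multiplexer follows; the values are illustrative, and MultiThreaded is an assumed strategy name (the reference above confirms only that SingleThreaded exists and is almost never appropriate here):

```xml
<inboundEventMultiplexing>
  <queueDepth>2048</queueDepth> <!-- power of 2 -->
  <!-- many threads dispatch events to an engine, so a multi-threaded offer strategy -->
  <queueOfferStrategy>MultiThreaded</queueOfferStrategy>
  <!-- pin the engine's drainer thread to logical cpu 2 -->
  <queueDrainerCpuAffinityMask>[2]</queueDrainerCpuAffinityMask>
</inboundEventMultiplexing>
```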
<inboundMessageLogging | Configures inbound message logging for the engine. An inbound message logger logs inbound messages to a transaction log file. Inbound logging does not play a role in HA for the application, but can be useful for auditing purposes. |
policy | The inbound message logging policy for the application. Usage: Required |
failurePolicy | SINCE 3.2 Usage: Optional |
<autoFlushSize> | In the absence of explicit flushes (e.g. flushOnCommit) of written entries, the size at which flush is automatically triggered for queued writes. If not set, the platform default (8192) is used. Usage: Optional |
<flushOnCommit> | Whether or not the logger should be flushed on commit. By default the logger buffers writes into an internal buffer and doesn't write to disk until that buffer has filled. Enabling flush on commit will flush the logger regardless of whether the buffer has filled. Usage: Optional |
<autoRepair> | Whether or not an attempt will be made to automatically repair a non-empty log on open by truncating malformed entries at the end of the log that are part of incomplete transactions. Usage: Optional |
<storeRoot> | Specifies the root folder in which the logger's transaction log files are located. Usage: Optional If the expected value of NVROOT on your target deployment host is not on the device where you want to place your transaction logs (e.g. a slow or small disk), then consider making this a substitutable value such as: <storeRoot>${myapp.storeroot}</storeRoot>, so that you can customize its location at runtime appropriate to the environment in which you are launching. |
<initialLogLength> | Sets the initial file size of the logger's transaction log in gigabytes. Preallocating the transaction log can save costs in growing the file size over time, since the operation of growing a log file may actually result in a write of file data plus the metadata operation of updating the file size, and may also benefit from allocating contiguous sectors on disk. Usage: Optional The log size is specified in Gb. For an initial size of less than 1 Gb, specify a float value. For example, a value of .01 would result in a preallocated size of ~10Mb, which can be useful for test environments. |
<zeroOutInitial> | Whether the log file should be explicitly zeroed out (to force commit all disk pages) if newly created. Usage: Optional |
<pageSize> | Sets the page size for the disk in bytes. The logger will use this as a hint in several areas to optimize its operation. Usage: Optional |
<detachedWrite | Configures whether or not logger writes are done by the committing thread or passed off to a detached writer thread. Offloading to a writer thread can increase application throughput but requires an extra processor core for the logger thread. |
enabled> | Can be set to true to enable detached logging for the logger. Usage: Required |
<queueDepth> | The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used. See <queueDepth> X DDL Override: |
<queueOfferStrategy> | Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: x.apps.app.<appname>.inboundMessageLogging.detachedWrite.queueOfferStrategy |
<queueWaitStrategy> | Controls the wait strategy used by the queue's drainer thread when the queue is empty. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: x.apps.app.<appname>.inboundMessageLogging.detachedWrite.queueWaitStrategy |
<queueDrainerCpuAffinityMask> | Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpus, or a square bracket enclosed comma separated list enumerating the logical cpus. See <queueDrainerCpuAffinityMask> X DDL Override: x.apps.app.<appname>.inboundMessageLogging.detachedWrite.queueDrainerCpuAffinityMask |
<queueFeedMaxConcurrency> | Sets the maximum number of threads that will feed the multiplexer's queue. Usage: Optional |
</detachedWrite> </inboundMessageLogging> | End of application's inbound message logging properties. |
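A sketch of an inbound message logging configuration using the elements above follows. The policy value "UseDedicated" and the ${myapp.storeroot} property are assumptions for illustration; check the policy's reference entry for its actual permissible values:

```xml
<inboundMessageLogging policy="UseDedicated">
  <storeRoot>${myapp.storeroot}</storeRoot>
  <initialLogLength>.01</initialLogLength> <!-- ~10Mb preallocation, handy for test environments -->
  <flushOnCommit>false</flushOnCommit>
  <detachedWrite enabled="true">
    <queueDepth>1024</queueDepth> <!-- power of 2 -->
  </detachedWrite>
</inboundMessageLogging>
```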
<outboundMessageLogging | Configures outbound message logging for the engine. An outbound message logger logs sent messages to a transaction log file. An outbound message log file does not play a role in HA for the application, but can be useful for auditing purposes. |
policy | The outbound message logging policy for the application. Usage: Required |
failurePolicy | SINCE 3.2 See OutboundMessageLoggingFailurePolicy Usage: Optional |
<autoFlushSize> | In the absence of explicit flushes (e.g. flushOnCommit) of written entries, the size at which flush is automatically triggered for queued writes. If not set, the platform default (8192) is used. Usage: Optional |
<flushOnCommit> | Whether or not the logger should be flushed on commit. By default the logger buffers writes into an internal buffer and doesn't write to disk until that buffer has filled. Enabling flush on commit will flush the logger regardless of whether the buffer has filled. Usage: Optional |
<autoRepair> | Whether or not an attempt will be made to automatically repair a non-empty log on open by truncating malformed entries at the end of the log that are part of incomplete transactions. Usage: Optional |
<storeRoot> | Specifies the root folder in which the logger's transaction log files are located. Usage: Optional If the expected value of NVROOT on your target deployment host is not on the device where you want to place your transaction logs (e.g. a slow or small disk), then consider making this a substitutable value such as: <storeRoot>${myapp.storeroot}</storeRoot>, so that you can customize its location at runtime appropriate to the environment in which you are launching. |
<initialLogLength> | Sets the initial file size of the logger's transaction log in gigabytes. Preallocating the transaction log can save costs in growing the file size over time, since the operation of growing a log file may actually result in a write of file data plus the metadata operation of updating the file size, and may also benefit from allocating contiguous sectors on disk. Usage: Optional The log size is specified in Gb. For an initial size of less than 1 Gb, specify a float value. For example, a value of .01 would result in a preallocated size of ~10Mb, which can be useful for test environments. |
<zeroOutInitial> | Whether the log file should be explicitly zeroed out (to force commit all disk pages) if newly created. Usage: Optional |
<pageSize> | Sets the page size for the disk in bytes. The logger will use this as a hint in several areas to optimize its operation. Usage: Optional |
<detachedWrite | Configures whether or not logger writes are done by the committing thread or passed off to a detached writer thread. Offloading to a writer thread can increase application throughput but requires an extra processor core for the logger thread. |
enabled> | Can be set to true to enable detached logging for the logger. Usage: Required |
<queueDepth> | The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used. See <queueDepth> X DDL Override: |
<queueOfferStrategy> | Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: |
<queueWaitStrategy> | Controls the wait strategy used by the queue's drainer thread when the queue is empty. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: x.apps.app.<appname>.outboundMessageLogging.detachedWrite.queueWaitStrategy |
<queueDrainerCpuAffinityMask> | Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpus, or a square bracket enclosed comma separated list enumerating the logical cpus. See <queueDrainerCpuAffinityMask> X DDL Override: x.apps.app.<appname>.outboundMessageLogging.detachedWrite.queueDrainerCpuAffinityMask |
<queueFeedMaxConcurrency> | Sets the maximum number of threads that will feed the multiplexer's queue. Usage: Optional |
</detachedWrite> </outboundMessageLogging> | End of application's outbound message logging properties. |
<startupExpectations | Specifies expectations that must be met on application startup. Unmet startup expectations will prevent the application from starting, ensuring that operational conditions are met. Usage: Optional |
<role> | Checks the HA Role of the application on startup. The role of an application is defined by the underlying role of its store. If the application has no store configured, its role will be 'Primary'. See StoreBindingRoleExpectation Usage: Optional |
<logEmptiness> | Enforces log emptiness expectations at startup. Usage: Optional |
</startupExpectations> | |
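For instance, a designated primary instance might declare its expectations as follows. The element values shown ('Primary', 'Empty') are assumptions for illustration; see StoreBindingRoleExpectation and the log emptiness reference for the actual permissible values:

```xml
<startupExpectations>
  <!-- fail startup unless this instance assumes the Primary role -->
  <role>Primary</role>
  <!-- fail startup if a non-empty recovery log is found -->
  <logEmptiness>Empty</logEmptiness>
</startupExpectations>
```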
<messageHandlingPolicy> | Specifies the application's message handling policy. It is rare that an application would want to set anything other than 'Normal' for the message handling policy outside of a diagnostic or debug context. Usage: Optional |
<replicationPolicy> | Specifies the application's replication policy. The replication policy controls how messages are replicated to backup members (or disk). In most cases an application should specify a policy of Pipelined. Specifying the wrong value for this property can compromise recovery and cause message loss or duplication. Usage: Optional |
<messageSendPolicy> | Specifies the application's message send policy. The message send policy controls at what point during transaction commit processing application-sent messages are transmitted out of the application. In most cases, an application should specify a policy of Pipelined. Specifying the wrong value for this property can compromise recovery and cause message loss or duplication. Usage: Optional |
<appExceptionHandlingPolicy> | Sets the engine's application exception handling policy, which determines how the engine handles unchecked exceptions thrown by an application handler. Usage: Optional SINCE 3.4 |
<quarantineChannel> | Sets the engine's quarantine channel, i.e. the channel on which quarantined messages are transmitted. It must take the form of channelName@busName. This applies when the application throws an exception and the application exception handling policy is configured to quarantine and stop. Usage: Optional SINCE 3.4 |
<quarantineMessageKey> | Sets the engine's quarantine message key. Used to explicitly set the message key to be associated with outbound quarantine messages. If the key is set using this method, the sending of the quarantine message will bypass the dynamic key resolution machinery. Usage: Optional SINCE 3.4 |
<messageSend | The policy used by an application's AepEngine to determine how to handle unchecked exceptions thrown on message sends. Note that this policy covers only the send of the message through the underlying bus binding during transaction commit. In particular, it does not cover:
Usage: Optional |
<adaptiveCommitBatchCeiling> | Sets the application's AepEngine's adaptive commit batch ceiling. The adaptive commit batch ceiling controls the maximum number of inbound messages to group into a single transaction, which can improve throughput. A value less than or equal to 1 disables adaptive commit. Usage: Optional Adaptive commit cannot be used if transaction commit suspension is enabled. |
<enableTransactionCommitSuspension> | Sets whether transaction commit suspension is enabled or disabled. Transaction commit suspension is an experimental feature that allows an application to temporarily suspend commit of a transaction. Usage: Optional TransactionCommitSuspension is currently an experimental feature. It is not supported for production use. |
<dispatchTransactionStageEvents> | Sets whether transaction stage events are emitted by the application's AepEngine. Controls whether or not AepTransactionStageEvents are emitted by the application's engine. An AepTransactionStageEvent is used to suspend a transaction. Usage: Optional TransactionCommitSuspension is currently an experimental feature. It is not supported for production use. Because AepTransactionStageEvents are only useful for transaction commit suspension, this property should not be enabled unless this feature is being tested. |
<replicateSolicitedSends> | Sets whether or not to replicate solicited sends to a backup. Usage: Optional Default: true X DDL Override: x.apps.app.<appname>.replicateSolicitedSends Constraints: true | false This parameter should be changed with extreme caution. Disabling replication of outbound messages will likely result in loss of outbound messages in the event of a failover. |
<replicateUnsolicitedSends> | Sets whether to replicate unsolicited sends. This parameter governs whether unsolicited sends performed on clustered engines are replicated. It has no effect on engines that are not clustered. An unsolicited send is a send done outside of an event handler via an AepEngine.send method. Because unsolicited sends aren't part of an engine's transactional message processing, they are not considered to be part of the application's HA state. To treat unsolicited sends as part of an application's HA state, see sequenceUnsolicitedSendsWithSolicitedSends. Usage: Optional |
<sequenceUnsolicitedSends> | Sets whether to sequence unsolicited sends. By default, unsolicited sends are sent with a sequence number of 0. Specifying true causes sequence numbers to also be attached to unsolicited sends. Usage: Optional Be careful about attaching sequence numbers to unsolicited sends, especially if the application will be performing both unsolicited and solicited sends concurrently. Doing so can cause messages to be sent on the wire in a sequence different from the sequence in which sequence numbers were assigned to the messages, causing legitimate messages to be dropped due to incorrect duplicate determination. For such applications, use sequenceUnsolicitedSendsWithSolicitedSends instead to ensure that not only are unsolicited sends sequenced, but that they are also correctly sequenced vis-a-vis solicited sends. |
<sequenceUnsolicitedSendsWithSolicitedSends | Sets whether to sequence unsolicited sends with solicited sends. This parameter is applicable for applications that perform concurrent solicited and unsolicited sends and want the unsolicited sends to be sequenced. Setting this parameter ensures that unsolicited and solicited sends are sequenced on the wire in the same order in which the sequence numbers were attached to the messages. In effect, this causes an unsolicited send to be injected into the underlying engine's transactional event processing stream, promoting it to a transactional event. Usage: Optional |
<disposeOnSend> | Sets whether or not the engine disposes sent messages. If set, the AepEngine.sendMessage method will dispose a message after it has been sent. This means that the caller must not hold onto or reference a message beyond the call to the send method. If unset, a zero garbage application must call dispose on each sent message to ensure it is returned to its pool. Usage: Optional |
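The replication and send policies above are commonly left at their safe defaults; when set explicitly, they might be combined as in this sketch (the numeric ceiling is an illustrative value, not a recommendation):

```xml
<replicationPolicy>Pipelined</replicationPolicy>
<messageSendPolicy>Pipelined</messageSendPolicy>
<!-- batch up to 32 inbound messages per transaction commit -->
<adaptiveCommitBatchCeiling>32</adaptiveCommitBatchCeiling>
<!-- engine returns sent messages to their pools; callers must not reference them after send -->
<disposeOnSend>true</disposeOnSend>
```

Remember that the wrong replication or send policy can compromise recovery, so changes here warrant careful review.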
<clusterHeartbeatInterval> | Sets the cluster heartbeat interval for the application in milliseconds. A value of 0 (default) disables the cluster heartbeat interval Usage: Optional |
<administrative> | Marks the application as an 'administrative' application. Usage: Optional |
<stuckAlertEventThreshold> | Sets the threshold, in seconds, after which an AepStuckAlertEvent is dispatched to the application's IAepAsynchronousEventHandler. An AepStuckAlertEvent event is intended to alert that the engine's transaction pipeline is "stuck" i.e. there are one or more transaction commits in the pipeline and the event multiplexer thread is not processing any events. For example, the event multiplexer thread could be flow controlled on the replication TCP connection due to an issue in the backup or could be spinning in a loop in the business logic due to a bug in a business logic handler. See Stuck Engine Alerts for more information Usage: Optional |
<performDuplicateChecking> | Sets whether the application's engine should perform duplicate checking. When duplicate checking is enabled, received messages that are deemed duplicates are discarded by the application's engine. Whether a message is considered a duplicate is determined from the sequence id assigned to it by the sending AepEngine (see sequence id assignment by an AepEngine). Usage: Optional SINCE 3.4 |
<performMidstreamInitializationValidation> | Sets whether the engine checks that initial transactions are not missing during recovery or replication. This parameter is only applicable to event sourced engines. Usage: Optional |
<enableSequenceNumberTrace> | Enables diagnostic trace logging related to message sequencing. Enabling this trace can assist in diagnosing issues related to loss, duplication, or out-of-order delivery of events. When enabled, trace will be emitted at debug level (TRACE level for SLF4J) to the logger named 'nv.aep.sno'. Usage: Optional |
<enableEventTrace> | Enables diagnostic trace logging of events received and dispatched by an engine. Enabling this trace is useful in determining the sequence of events processed by the engine. When enabled, trace will be emitted at debug level (TRACE level for SLF4J) to the logger named 'nv.aep.event'. Usage: Optional |
<enableTransactionTrace> | Enables diagnostic trace logging related to transactions processed by an engine. Enabling this trace is useful in determining the relative sequencing and timing of transaction commits as the commits are executed by the engine. When enabled, trace will be emitted at debug level (TRACE level for SLF4J) to the logger named 'nv.aep.txn'. Usage: Optional |
<enableScheduleTrace> | Enables diagnostic trace logging related to schedules (timers) managed by an engine. Enabling this trace is useful for diagnosing issues related to engine timer execution and scheduling. When enabled, trace will be emitted at debug level (TRACE level for SLF4J) to the logger named 'nv.aep.sched'. Usage: Optional |
<enableMessageTrace> | Enables diagnostic trace logging for messages as they pass through the engine. Enabling this trace is useful for tracing the contents of messages at different stages of execution within the engine. When enabled, trace will be emitted at debug level (TRACE level for SLF4J) to the logger named 'nv.aep.msg'. Usage: Optional |
<messageTraceInJson> | Sets whether messages are traced in Json or toString format. When enabled, messages will be printed in Json format; otherwise messages will be traced using their toString method. This parameter is only applicable if message trace is enabled. Usage: Optional |
<messageTraceJsonStyle> | Sets the styling for Json formatted message trace. This parameter is only applicable if message trace in Json is enabled. Valid options are:
Usage: Optional |
<messageTraceFilterUnsetFields> | Sets whether unset fields are filtered for json formatted objects when json message tracing is enabled. Usage: Optional Constraints: true | false |
<messageTraceMetadataDisplayPolicy> | Sets whether metadata, payload or both will be traced when message tracing is enabled. Valid Options are:
Usage: Optional |
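As a sketch, enabling Json message tracing combines the elements above as follows. The style value shown is an assumption, since the permissible style and display-policy values are not enumerated here:

```xml
<enableMessageTrace>true</enableMessageTrace>
<messageTraceInJson>true</messageTraceInJson>
<!-- 'PrettyPrint' is an assumed style name; check the reference for valid options -->
<messageTraceJsonStyle>PrettyPrint</messageTraceJsonStyle>
<messageTraceFilterUnsetFields>true</messageTraceFilterUnsetFields>
```

The trace is emitted at debug level to the 'nv.aep.msg' logger, so that logger must also be enabled for output to appear.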
<maxEnvironmentProviders> | Sets the maximum number of environment providers that can be registered with the engine. Usage: Optional |
<enableSendCommitCompleteSequenceAlerts> | Sets whether or not to enable out of order send commit completion detection. When enabled, the engine will check that stability events (acknowledgements) from the underlying messaging provider are received in an ordered fashion. If acknowledgements are received out of order, then the engine will dispatch appropriate alerts. Usage: Optional |
<captureMessageTypeStats> | Sets whether statistics are additionally recorded on a per-message-type basis. Collection of message-type-specific statistics records counts and rates per type, as well as message processing time statistics for each message type, which can be useful in finding particular handlers that have high execution times. Usage: Optional |
<captureTransactionLatencyStats> | Sets whether or not the engine records transaction latency stats. Usage: Optional |
<captureEventLatencyStats> | Sets whether or not the engine records event latency stats (such as the amount of time of events spent in its input queue). Usage: Optional |
<replicateInParallel> | Enables parallel replication. When parallel replication is enabled, the engine replicates inbound messages to the cluster backups in parallel with the processing of the message by the message handler. This parameter only applies to Event Sourced engines. This parameter is particularly useful for Event Sourced applications that have higher message processing times because in this case it may be possible to replicate the message prior to completion of the message handler. Usage: Optional |
<preserveChannelJoinsOnStop> | Sets whether or not to preserve joined channels when the engine stops normally. By default, when an engine is stopped without an error, bus channels that were 'joined' will be 'left', meaning that any subscriptions or interests created by the message bus will be unsubscribed or unregistered. Setting this value to true causes the engine to preserve channel interest even on a clean shutdown. Note that this property has no effect for the case where an engine shuts down with an error (e.g. AepEngine.stop(Exception) with a non-null cause). In this case channel joins are left intact, allowing a backup to take over. Note that this behavior can be overridden programmatically on a case by case basis by a handler for AepEngineStoppingEvent by calling AepEngineStoppingEvent.setPreserveChannelJoins(boolean). Usage: Optional SINCE 3.4 |
</app> </apps> | |
Server Configuration | |
The 'servers' section configures the various Talon Servers used globally in the deployment. A server hosts one or more applications, controlling each application's lifecycle, and provides connectivity in the form of acceptors that allow management clients to monitor and administer the applications that it hosts. For example, the below configures two servers named 'forwarder-1' and 'forwarder-2' that both host the 'forwarder' app. Launching both servers would start two instances of the 'forwarder' app that would form a clustered instance of the forwarder app. Sample XML Snippet | |
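The two-server example described above can be sketched as follows; the element and attribute placement follows the reference entries below, while the group name 'forwarders' is an illustrative assumption:

```xml
<servers>
  <server name="forwarder-1" group="forwarders">
    <apps>
      <app name="forwarder" autostart="true"/>
    </apps>
  </server>
  <server name="forwarder-2" group="forwarders">
    <apps>
      <app name="forwarder" autostart="true"/>
    </apps>
  </server>
</servers>
```

Note that the servers themselves do not cluster; the two hosted 'forwarder' app instances discover each other and form the cluster.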
Element | Description |
---|---|
<servers> | Defines the Talon Servers used globally in the deployment. A Talon 'Server' is synonymous with the term 'XVM' and the terms are used interchangeably in the X documentation. A server hosts one or more applications, controls each application's lifecycle, and implements direct network connection acceptance machinery to (1) allow management clients to connect to the server to monitor and administer the applications that it hosts and (2) accept direct connections for apps that are configured to use the Talon 'direct' bus provider. |
<server | Defines and configures a server. |
enabled | If set to false, this server will be ignored and not saved to the configuration repository. This can be used to disable a server at runtime. However, note that if a persistent configuration repository is in use, this will not cause a previously configured server to be deleted. Usage: Optional |
discoveryDescriptor | Defines the server's discovery descriptor. Usage: Optional When the p2p message bus binding is used, the discovery descriptor for the server must be part of the same discovery domain as the default discovery descriptor configured via nv.discovery.descriptor because the server itself facilitates advertising and accepting point to point connections. |
group | Defines the server's application group. Usage: Optional |
name> | Defines the server name, which must be unique within the server's configuration and discovery domain. A common practice is to use a server name that is indicative of the application or set of applications that it hosts and the normal application role. For example, if the server will host shard 1 of an order processing app that normally assumes the primary role, a name such as "order-processing-p-1" might be used. In this fashion, an instance of an application can be uniquely addressed within a discovery domain as a combination of the server name and app name. Usage: Required X DDL Override: Not overridable (key) Constraints: String |
<clientHandShakeTimeout> | Sets the timeout (in seconds) allowed for connecting clients to complete the server connection handshake. Usage: Optional |
<autoStopOnLastAppStop> | SINCE 3.7 Configures whether or not the server will automatically stop after the last app is stopped. Disabling auto stop on last app stop leaves the server running and manageable even when all applications have stopped. The XVM's internal admin app does not count as a running app. Usage: Optional |
<adminClientOutputQueueCapacity> | SINCE 3.7 Sets the capacity (in MB) of the server controller's admin client output queues. Outbound packets are dropped once the queue size reaches or exceeds the configured capacity. Usage: Optional |
<apps> | Configures the apps hosted by this server. Multiple servers can host the same application. Each clustered application will discover its peers and form its own independent cluster. In other words, servers don't cluster, but their applications do. |
<app | Configures an app hosted by this server. |
autostart | Sets whether the server automatically starts the app when the server is started. Usage: Optional |
enabled | If set to false, the app will be ignored and not saved to the configuration repository. This can be used to suppress addition of an application at runtime. Usage: Optional |
name> | The name of the application as defined in the 'apps' element. Usage: Required |
</app> | |
</apps> | |
<acceptors> | Configures this server's acceptors. By default, each server will create an acceptor on 'tcp://0.0.0.0:0' to listen on all interfaces at an auto-assigned port (which is advertised). If any acceptors are explicitly added to the server, the default acceptor is removed and replaced with the first configured acceptor. |
<acceptor | Defines and configures an acceptor. |
descriptor | The acceptor descriptor. Acceptor descriptors are of the form [protocol]://[host]:[port], e.g. 'tcp://myhost:12000', and are used to specify the network protocol, interface and port through which to accept inbound network connection requests. 'tcp' is the only currently supported protocol, [host] can be the host name or IP address of an interface on which this server will be running, and [port] is the protocol-specific server port on which the server will listen for inbound connections. Usage: Required |
enabled> | If set to false, the acceptor will be ignored and not saved to the configuration repository. This can be used to suppress addition of an acceptor at runtime. However, note that if a persistent configuration repository is in use, this will not cause a previously configured acceptor for this server to be removed. Usage: Optional |
<linkParams> | A comma-separated set of key=value pairs that serve as additional configuration parameters for the network connections accepted by this acceptor. Usage: Optional |
</acceptor> | |
</acceptors> | |
<multithreading | |
enabled> | Sets whether the server should operate in multi-threaded mode. In most cases this value should be set to true. Setting this value to false will set the IO thread count to 1 regardless of the number of IO threads listed for the server. Usage: Required |
<ioThreads> | Configures IO threads for the server. |
<ioThread | Defines and configures an IOThread. |
id | The thread id. IO Thread ids are zero based and must be defined in monotonically increasing order. Usage: Required |
affinity | Sets the cpu affinity mask for the thread. The affinity string can either be a long that represents a mask of logical cpus, or a square bracket enclosed comma separated list enumerating the logical cpus. For example, specifying "1" or "[0]" indicates Core 0; "3" or "[0, 1]" would indicate Core 0 or Core 1. Specifying a value of "0" indicates that the thread should be affinitized to the platform's default cpu, and omitting this value indicates that the thread should be affinitized according to the platform's default policy for the multiplexer. See UtlThread.setCpuAffinityMask Usage: Optional |
enabled | Sets the thread as enabled or disabled. This can be used at runtime to disable an IO Thread. Disabling an IO thread has the effect of setting all threads with a higher id to enabled=false. Note that if a persistent configuration repository is in use, this will not cause previously configured IO threads for this server to be removed. Usage: Required |
</ioThreads> | |
</multithreading> | |
<heartbeatLogging | Configures heartbeat logging for the server. When configured, server heartbeats are written to disk. SINCE 3.1 |
enabled> | Whether or not to enable heartbeat logging. Usage: Required |
<autoFlushSize> | In the absence of explicit flushes (e.g. flushOnCommit) of written entries, the size at which flush is automatically triggered for queued writes. If not set, the platform default (8192) is used. Usage: Optional |
<flushOnCommit> | Whether or not the logger should be flushed on commit. By default, the logger buffers writes into an internal buffer and doesn't write to disk until that buffer has filled. Enabling flush on commit will flush the logger regardless of whether the buffer has filled. Usage: Optional |
<autoRepair> | Whether or not an attempt will be made to automatically repair a non empty log on open by truncating malformed entries at the end of the log that are part of incomplete transactions. Usage: Optional |
<storeRoot> | Specifies the root folder in which the logger's transaction log files are located. Usage: Optional If the expected value of NVROOT on your target deployment host is not on the device where you want to place your transaction logs (e.g. slow or small disk), then consider making this a substitutable value such as: <storeRoot>${myapp.storeroot}</storeRoot>, so that you can customize its location at runtime as appropriate to the environment in which you are launching. |
<initialLogLength> | Sets the initial file size of the logger's transaction log in gigabytes. Preallocating the transaction log can save costs in growing the file size over time, since the operation of growing a log file may actually result in a write of file data plus the metadata operation of updating the file size, and may also benefit from allocating contiguous sectors on disk. Usage: Optional The log size is specified in GB. For an initial size of less than 1 GB, specify a float value. For example, a value of .01 would result in a preallocated size of ~10MB, which can be useful for test environments. |
<zeroOutInitial> | Whether the log file should be explicitly zeroed out (to force commit all disk pages) if newly created. Usage: Optional |
<pageSize> | Sets the page size for the disk in bytes. The logger will use this as a hint in several areas to optimize its operation. Usage: Optional |
<detachedWrite | Configures whether or not logger writes are done by the committing thread or passed off to a detached writer thread. Offloading to a writer thread can increase application throughput but requires an extra processor core for the logger thread. |
enabled> | Can be set to true to enable detached logging for the logger. Usage: Required |
<queueDepth> | The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used. See <queueDepth> X DDL Override: |
<queueOfferStrategy> | Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: x.servers.server.<servername>.heartbeatLogging.detachedWrite.queueOfferStrategy |
<queueWaitStrategy> | Controls the wait strategy used by the queue's draining thread(s). When not specified, the platform's default value for the multiplexer will be used. X DDL Override: |
<queueDrainerCpuAffinityMask> | Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpus, or a square bracket enclosed comma separated list enumerating the logical cpus. See <queueDrainerCpuAffinityMask> X DDL Override: |
<queueFeedMaxConcurrency> | Sets the maximum number of threads that will feed the multiplexer's queue. Usage: Optional |
</detachedWrite> </heartbeatLogging> | End of the server's heartbeat logging properties. |
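Pulling the elements above together, the sketch below shows a minimal `<server>` definition using only elements documented in this section. The server name, group, app name, host, port, core number, and substitution variable are illustrative placeholders, not prescribed values.

```xml
<!-- Illustrative sketch only: names, host, port, and substitution
     variables are placeholders chosen for this example. -->
<servers>
  <server name="order-processing-p-1" group="order-processing">
    <apps>
      <!-- 'order-processing' must be an app defined in the 'apps' element -->
      <app name="order-processing" autostart="true"/>
    </apps>
    <acceptors>
      <!-- replaces the default 'tcp://0.0.0.0:0' acceptor -->
      <acceptor descriptor="tcp://myhost:12000"/>
    </acceptors>
    <multithreading enabled="true">
      <ioThreads>
        <ioThread id="0" affinity="[2]" enabled="true"/>
      </ioThreads>
    </multithreading>
    <heartbeatLogging enabled="true">
      <!-- substitutable so the log location can be set per environment -->
      <storeRoot>${myapp.storeroot}</storeRoot>
      <detachedWrite enabled="true">
        <queueDepth>1024</queueDepth>
      </detachedWrite>
    </heartbeatLogging>
  </server>
</servers>
```

Since most values are optional, a first cut of this configuration could omit everything but the server name and its `<apps>` list and rely on platform defaults.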
Enums Reference
ChannelQos
Enumerates the different supported Qualities of Service used for transmitting messages over a messaging channel.
See also: MessageChannel.Qos.
Valid Values
Value | Description |
---|---|
BestEffort | Specifies Best Effort quality of service. Messages sent Best Effort are not acknowledged, and in the event of a binding failure may be lost. |
Guaranteed | Specifies Guaranteed quality of service. Messages sent Guaranteed are held until acknowledged by the message bus binding, and are retransmitted in the event of a failure. |
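As a hypothetical illustration of where this enum's values appear in DDL configuration, a channel's quality of service might be declared as below. The surrounding `<channels>`/`<channel>` element and attribute names are assumptions made for this sketch and are not part of the reference material above; only the Qos values themselves are normative.

```xml
<!-- Hypothetical sketch: channel element names are assumed for
     illustration; only the ChannelQos values are normative. -->
<channels>
  <channel name="orders">
    <!-- Guaranteed: held until acknowledged, retransmitted on failure.
         BestEffort: unacknowledged, may be lost on binding failure. -->
    <qos>Guaranteed</qos>
  </channel>
</channels>
```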
CheckpointingType
Enumerates the types of checkpointing controllers.
See also IStoreCheckpointingController.Type.
Valid Values
Value | Description |
---|---|
Default | Indicates that the default checkpoint controller should be used. The default checkpoint controller counts all entry types (Puts, Updates, Removes, and Sends) against the threshold trigger for writing a new checkpoint. |
CDC | Indicates that the CDC Checkpoint controller should be used. The CDC checkpoint controller only counts Puts, Updates and Removes against the checkpoint trigger threshold (because these are the only types of interest for CDC). |
Conflation | Indicates that the Conflation Checkpoint controller should be used. The Conflation checkpoint controller does not count Puts against the new checkpoint trigger threshold (because puts cannot be conflated). |
ICRRole
Enumerates the different inter-cluster replication roles of an AepEngine's store binding.
In inter-cluster replication, the store contents of a cluster are replicated to one or more receiving clusters. This enumerates the different replication roles that can be assigned to clusters. Assigning a replication role to a cluster amounts to assigning the same inter-cluster replication role to all members of the cluster.
See also: IStoreBinding.InterClusterReplicationRole
Valid Values
Value | Description |
---|---|
Sender | Cluster members designated with this role serve as the inter-cluster replication senders. |
StandaloneReceiver | Cluster members designated with this role serve as standalone inter-cluster replication receivers. Standalone implies that the receive side members designated with this role do not form clusters while operating in this mode. From the perspective of the user, the member operates as a backup cluster member, but there is no intra-cluster replication actually occurring. There can be multiple simultaneous standalone replication receivers. |
InboundMessageLoggingPolicy
Enumerates an engine's inbound message logging policies.
This enumerates the policy that determines if and where to log inbound messages.
See also: AepEngine.InboundMessageLoggingPolicy
Valid Values
Value | Description |
---|---|
Default | The default inbound message logging policy is determined by the HA and persistence mode at play. With this policy, if event sourcing & cluster persistence are enabled, then inbound message logging is implicitly switched on and inbound messages are logged through the store's persister. All other configurations switch off inbound message logging. |
Off | Disables inbound message logging. With this policy, inbound message logging is disabled. This is the default policy with State Replication and Standalone mode of operation. The Standalone mode of operation is one where an engine has not been configured for HA: i.e. configured without a store. This option is invalid for use with engines configured to be clustered and use Event Sourcing since, in that mode, inbound messages are logged in the store's event log by virtue of inbound message replication. |
UseDedicated | Use a dedicated log for inbound message logging. With this policy, the engine uses a dedicated logger to log inbound messages. This option is invalid for use with engines configured to be clustered and use Event Sourcing since, in that mode, inbound messages are logged in the store's event log by virtue of inbound message replication. |
InboundMessageLoggingFailurePolicy
SINCE 3.2
Enumerates policies for handling inbound message logging failures.
This enumerates the policy that determines what to do in the event of an inbound message logging failure.
Valid Values
Value | Description |
---|---|
StopEngine | This policy specifies that a failure in inbound logging will be treated as a failure which will result in shutdown of the engine. |
StopLogging | This policy specifies that inbound logging errors will be trapped and cause the engine to discontinue inbound message logging. |
LogEmptinessExpectation
Enumerates the set of values permissible with the log emptiness expectation.
See Also: IStoreJournallingPersister.LogEmptinessExpectation
Valid Values
Value | Description |
---|---|
None | Used to specify that there is no expectation regarding emptiness of a transaction log. |
Empty | Used to specify that a transaction log is expected to be empty. |
NotEmpty | Used to specify that a transaction log is expected to exist and contain at least one entry. SINCE 3.4 |
MessageHandlingPolicy
Enumerates an application's AepEngine's inbound message handling policy.
See also: AepEngine.MessageHandlingPolicy
Valid Values
Value | Description |
---|---|
Normal | This policy represents normal message processing operation. This is the default message handling policy. |
Noop | This policy causes inbound messages to be discarded before dispatch to the application: i.e. they are not dispatched to the application. The messages are acknowledged if received on a guaranteed channel. |
Discard | This policy causes inbound messages to be blindly discarded. No acknowledgements are dispatched if received on a guaranteed channel. |
MessageSendPolicy
Enumerates an application's AepEngine outbound message send policies.
The message send policy controls at what point during transaction commit processing application-sent messages are transmitted out of the application.
See also: AepEngine.MessageSendPolicy
Valid Values
Value | Description |
---|---|
ReplicateBeforeSend | This policy causes state/messages to be replicated before sending outbound messages triggered by the processing of inbound messages. In other words, for event sourcing, this policy causes an inbound message to be processed, the message to be replicated for processing to the backup instance(s), and then the outbound messages triggered by the processing of the message to be sent outbound (after processing acknowledgments have been received from all backup instance(s)). For state replication, this policy causes an inbound message to be processed, the state changes triggered by the processing of the inbound message to be replicated to the backup instance(s), and then the outbound messages triggered by the processing of the inbound message to be sent (after receiving state replication stability notifications from the backup instance(s)). |
SendBeforeReplicate | This policy causes outbound messages triggered by the processing of inbound messages to be sent outbound first, before replicating the state/inbound messages. In other words, for event sourcing, this policy causes an inbound message to be processed, the outbound messages triggered by the processing of the inbound message to be dispatched outbound, and then the inbound message replicated to the backup instance(s) for parallel processing (after outbound send stability notifications have been received from downstream agents). For state replication, this policy causes an inbound message to be processed, the outbound messages triggered by the processing of the inbound message to be dispatched outbound, and then the state changes affected by the processing of the inbound messages to be replicated for stability to the backup instance(s). In most circumstances, this mode of operation is unsafe from an HA standpoint: a failover to a backup instance may result in duplicate processing of the source message with different outbound message results, e.g. duplicate outbound messages that are different in content. |
Noop | This policy causes outbound messages to be silently discarded. No stability notifications are dispatched for this policy for messages sent through guaranteed channels. |
AppExceptionHandlingPolicy
SINCE 3.4
Enumerates an engine's app exception handling policies.
This enumerates the policy using which an engine determines how to handle unchecked exceptions from an application message handler or message filter.
See also: AepEngine.AppExceptionHandlingPolicy
Valid Values
Value | Description |
---|---|
RollbackAndStop | Stop the engine. With this policy, upon receipt of an unchecked exception from an application handler, the engine rolls back the in-progress transaction and stops. If the engine cannot complete prior transactions due to a subsequent error, the engine is still stopped with an exception and a backup will reprocess messages from incomplete transactions as well. This is the default policy. |
LogExceptionAndContinue | Log an exception and continue operating. With this policy, upon receipt of an unchecked exception from an application's event/message handler, the engine logs the exception and continues operating. Essentially, message processing stops where it is, and from an HA standpoint, the message is removed from the processing stream. When applied to an exception thrown from a message filter, the message will not be dispatched to application event handlers (see AepEngine.setMessageFilter). In all cases, the message will not be considered to be part of the transaction and is acknowledged upstream. |
QuarantineAndStop | Quarantine the offending message and stop the engine. With this policy, upon receipt of an unchecked exception from an application handler, the engine quarantines the offending message and stops. If the engine cannot complete prior transactions due to a subsequent error, the engine is still stopped with an exception and a backup will reprocess messages from incomplete transactions as well. |
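As a hypothetical sketch of how one of these values might be applied, the policy could be set in an app's configuration as below. The placement of the element inside `<app>` and the app name are assumptions for illustration; only the policy values themselves are normative here.

```xml
<!-- Hypothetical sketch: element placement and app name are assumed;
     only the AppExceptionHandlingPolicy values are normative. -->
<app name="order-processing">
  <!-- trade strict fail-fast behavior (RollbackAndStop, the default)
       for availability by logging and continuing -->
  <appExceptionHandlingPolicy>LogExceptionAndContinue</appExceptionHandlingPolicy>
</app>
```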
MessageSendExceptionHandlingPolicy
Enumerates an engine's message send exception handling policy.
This enumerates the policy using which an engine determines how to handle unchecked exceptions received on message sends.
Note: There are two types of send failures that an engine can encounter during its operation. The first are exceptions thrown during the message send operation. Such exceptions are typically thrown by the underlying message bus bindings. The other, applicable only to guaranteed channels, is where the message send operation succeeds but could not be stabilized by the underlying messaging provider. This policy only applies to the first type of send failures.
Additionally, this does not cover exceptions thrown to the application as the result of a send call from a message handler. Such exceptions are covered by the AppExceptionHandlingPolicy.
See also: AepEngine.MessageSendExceptionHandlingPolicy
Valid Values
Value | Description |
---|---|
TreatAsStabilityFailure | Treat the failure as a stability failure. Converts the send failure to a message stability failure (a fatal error). This is the default policy. |
LogExceptionAndContinue | Log an exception and continue operating. With this policy, upon receipt of an unchecked exception from the underlying send machinery, the engine logs the exception and continues operating. This policy can be dangerous for an application using Event Sourcing, because it is possible that such an exception is one that is indicative of a problem specific to the primary engine instance that would not occur on the backup if it were to take over and begin processing messages. |
OutboundMessageLoggingPolicy
Enumerates an engine's outbound message logging policies.
This enumerates the policy that determines if and where to log outbound messages.
See also: AepEngine.OutboundMessageLoggingPolicy
Valid Values
Value | Description |
---|---|
Default | Disable outbound message logging. With this policy, outbound message logging is disabled. This is the default policy. When the application's HA Policy is StateReplication, outbound messages are logged to the store transaction log as required by State Replication to retransmit in doubt messages after a failure. However, the outbound messages in the store's transaction log will be discarded if log compaction is enabled, so an application may still want to log a copy to a dedicated logger as well. |
UseDedicated | Use a dedicated log for outbound message logging. With this policy, the engine uses a dedicated logger to log outbound messages. |
OutboundMessageLoggingFailurePolicy
SINCE 3.2
Enumerates policies for handling outbound message logging failures.
This enumerates the policy that determines what to do in the event of an outbound message logging failure.
Valid Values
Value | Description |
---|---|
StopEngine | This policy specifies that a failure in outbound logging will be treated as a failure, which will result in shutdown of the engine. |
StopLogging | This policy specifies that outbound logging errors will be trapped and cause the engine to discontinue outbound message logging. |
QueueOfferStrategy
Specifies the offer strategy for threads publishing to an event multiplexer's queue. When not specified, the platform's default value for the multiplexer will be used, which is computed based on a number of factors depending on the event multiplexer in question and the optimization parameters in play for the application as a whole.
Valid Values
Value | Description |
---|---|
SingleThreaded | An optimized strategy that can be used when it can be guaranteed that there is only a single thread feeding the queue. |
MultiThreaded | Strategy that can be used when multiple threads can concurrently enqueue events for the multiplexer. |
MultiThreadedSufficientCores | Strategy to be used when there are multiple publisher threads claiming sequences. This strategy requires sufficient cores to allow multiple publishers to be concurrently claiming sequences. |
QueueWaitStrategy
Specifies the strategy used by an event multiplexer's queue draining thread(s).
Valid Values
Value | Description |
---|---|
Blocking | The BlockingWaitStrategy is the slowest of the available wait strategies, but is the most conservative with respect to CPU usage and will give the most consistent behaviour across the widest variety of deployment options. However, knowledge of the deployed system can allow for additional performance. |
Sleeping | Like the BlockingWaitStrategy, the SleepingWaitStrategy attempts to be conservative with CPU usage by using a simple busy wait loop that calls LockSupport.parkNanos(1) in the middle of the loop. On a typical Linux system, this pauses the thread for around 60us. It has the benefit that the producing thread does not need to take any action other than incrementing the appropriate counter, and does not require the cost of signaling a condition variable. However, the mean latency of moving an event between the producer and consumer threads will be higher. It works best in situations where low latency is not required, but a low impact on the producing thread is desired. |
Yielding | The YieldingWaitStrategy is one of two wait strategies that can be used in low latency systems, where there is the option to burn CPU cycles with the goal of improving latency. The YieldingWaitStrategy will busy spin waiting for the sequence to increment to the appropriate value. Inside the body of the loop, Thread.yield() is called, allowing other queued threads to run. This is the recommended wait strategy when you need very high performance and the number of event handler threads is less than the total number of logical cores, e.g. when you have hyper-threading enabled. |
BusySpin | The BusySpinWaitStrategy is the highest performing Wait Strategy, but puts the highest constraints on the deployment environment. This wait strategy should only be used if the number of Event Handler threads is smaller than the number of physical cores on the box, or when the thread has been affinitized and is known not to be sharing a core with another thread (including a thread operating on a hyperthreaded core sibling). |
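For example, a low latency deployment that dedicates a core to the heartbeat logger's drainer thread could pair BusySpin with an explicit affinity, using the `<detachedWrite>` elements documented in the server section above. The core number and the choice of BusySpin are illustrative; BusySpin assumes the drainer does not share a physical core with another busy thread.

```xml
<!-- Illustrative values: core [3] is a placeholder; BusySpin assumes a
     dedicated, non-shared physical core for the drainer thread. -->
<heartbeatLogging enabled="true">
  <detachedWrite enabled="true">
    <queueWaitStrategy>BusySpin</queueWaitStrategy>
    <queueDrainerCpuAffinityMask>[3]</queueDrainerCpuAffinityMask>
  </detachedWrite>
</heartbeatLogging>
```

A deployment without spare cores would instead choose Blocking or Sleeping and omit the affinity mask, letting the platform's default policy place the thread.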
StoreBindingRoleExpectation
Enumerates the different roles that an application's store can assume.
Valid Values
Value | Description |
---|---|
Primary | Indicates that this binding is the primary binding in a store cluster. A store cluster has a single primary member, which is elected through a leader election algorithm. The primary member replicates messages and state to its backup peers according to an application's configured HA Policy. |
Backup | Indicates that a binding is a backup binding in a store cluster. When operating in backup mode, objects can be retrieved from the store but not updated or added. |
None | Indicates no expectation regarding a store binding's role. SINCE 3.4 |
ReplicationPolicy
Enumerates the different replication policies for an AepEngine.
See Also: AepEngine.ReplicationPolicy
Valid Values
Value | Description |
---|---|
Pipelined | With this replication policy, message/state is replicated, soliciting acknowledgements from the backup engine cluster instance(s), but inbound message processing is not blocked while waiting for the acknowledgement to be received. |
Asynchronous | With this replication policy, message/state is replicated without soliciting an acknowledgement from the backup engine cluster instances. |
Groups Reference
EventMultiplexer Properties
Event Multiplexer properties configure the event multiplexer threads that are used throughout the platform for highly efficient inter-thread communication.
Elements
Value | Description |
---|---|
<queueDepth> | The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified the platform's default value for the multiplexer will be used. Usage: Optional |
<queueOfferStrategy> | Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used. Usage: Optional |
<queueWaitStrategy> | Controls the wait strategy used by the queue's draining thread(s). When not specified, the platform's default value for the multiplexer will be used. Usage: Optional |
<queueDrainerCpuAffinityMask> | Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpus, or a square bracket enclosed comma separated list enumerating the logical cpus. For example, specifying "1" or "[0]" indicates Core 0; "3" or "[0, 1]" would indicate Core 0 or Core 1. Specifying a value of "0" indicates that the thread should be affinitized to the platform's default cpu, and omitting this value indicates that the thread should be affinitized according to the platform's default policy for the multiplexer. Usage: Optional |
<queueFeedMaxConcurrency> | Sets the maximum number of threads that will feed the multiplexer's queue. If this value is set too low, it will result in a runtime error. Typically, applications need not specify this value. Usage: Optional |
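Tying the group together: the sketch below configures a multiplexer queue using the elements in this group as they appear under the heartbeat logger's `<detachedWrite>` element documented above. The values shown are illustrative, not recommendations.

```xml
<!-- Illustrative values only; omit any element to take the
     platform default for the multiplexer in question. -->
<detachedWrite enabled="true">
  <queueDepth>4096</queueDepth> <!-- typically a power of 2 -->
  <queueOfferStrategy>MultiThreaded</queueOfferStrategy>
  <queueWaitStrategy>Blocking</queueWaitStrategy>
</detachedWrite>
```

Because DDL values can be overridden at runtime, a value such as the offer strategy could later be changed without editing the XML, e.g. via a system property using the X DDL Override key documented for `<queueOfferStrategy>` above: `-Dx.servers.server.<servername>.heartbeatLogging.detachedWrite.queueOfferStrategy=SingleThreaded` (substituting the actual server name).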