In This Section
Overview
This section contains detailed reference information for the X-DDL (Domain Descriptor Language) schema. Your application's config.xml is an XML document that adheres to the X-DDL schema and describes and configures a system of buses, applications and XVMs that together constitute a multi-agent 'system' that is managed and deployed together to perform a higher level business function. The DDL document is used to configure each XVM prior to launch and is used by the X Platform's deployment and monitoring tools to assist in managing the system as a whole.
There are 7 main elements in a DDL configuration model file:
- <systemDetails>: Defines system metadata that is used by tools and emitted by an XVM in heartbeats.
- <env>: Defines environment variables that can be used for substitution of values in the document and are exposed to the application and talon runtimes.
- <busProviders>: Defines custom message bus provider implementations.
- <buses>: Defines the message buses that are used for applications to communicate with one another.
- <apps>: Defines and configures the applications that make up the system.
- <xvms>: Defines the XVMs (Talon lightweight containers) that host one or more applications.
- <profiles>: Defines profiles that can be dynamically activated to localize the configuration for different operating environments.
A main tenet of the X Platform is to separate out (and shield) an application's business logic from the complex machinery underpinning high availability, performance, and scalability. As such, there is a wealth of tuning knobs and configuration options that can be applied without making application code changes. Most applications will not use the majority of the settings described here unless they are being tuned for special cases.
A good way to get started with DDL configuration is to look at some existing projects. Consider:
- Starting with the simple configuration generated with one of the Talon Maven Archetypes.
- For more robust examples take a look at the conf/config.xml in the reference applications on GitHub.
DDL Features
The platform parses config.xml as follows:
- Apply DDL Profiles: Activated DDL <profiles> elements are merged into the main DDL XML.
DDL profiles provide a means of making a single DDL configuration document more portable to different environments by allowing the structure of the configuration to be augmented and overridden.
- Apply DDL Templates: Apply <app>, <bus>, and <xvm> templates.
Templates provide a means of reducing duplicate configuration across the above element types. Template values may be supplied by profiles activated in step 1 above.
- Apply DDL Substitutions and Overrides: ${varName::defaultValue} substitutions.
Substitution values come from values in the environment or <env> elements in the DDL document itself. Substitution values can come from profiles or templates applied in steps 1 and/or 2 above.
The result is a DDL XML document with no profiles, templates or unresolved ${} variable values.
Substituting Values at Runtime
For application portability across environments, it is often convenient to define some values in the DDL as variables that are later localized to the environment in which an application is to run. Examples of such values include host names, ports and message bus addresses.
The values in a DDL XML document can be overridden in two ways: Environment Variable Substitution and via DDL override properties. When launching an XVM, the substitution values are sourced from the bootstrap environment which consists of the following sources (in increasing order of precedence):
Bootstrap Environment:
- The application properties file. This file can be specified in the environment as nv_app_propfile, or as a System property as '-Dnv.app.propfile'. A value found in the environment takes higher precedence.
- System properties (System.getProperties)
- The host's environment (System.getenv())
When running in embedded mode or with a bootstrap configurer, the application may provide an alternate substitution environment using the VMConfigurer API, giving the application full control over the bootstrap property source. In this case, the application may provide VMConfigurer with the above bootstrap environment by passing in UtlTailoring.ENV_SUBSTITUTION_RESOLVER as the VMConfigurer's valueResolver.
Environment Variable Substitution
VMConfigurer will first substitute any ${VARNAME::DEFAULT} values using the ISubstResolver passed into the configurer (or from the environment if no resolver is passed in):
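For example, a bus might declare a substitutable descriptor (a sketch based on the frontoffice bus and default value discussed below):

```xml
<buses>
  <bus name="frontoffice" descriptor="${FRONTOFFICE_BUSDESCRIPTOR::falcon://fastmachine:8040}"/>
</buses>
```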
If the substitution resolver contains a value for FRONTOFFICE_BUSDESCRIPTOR, then that will be used for the bus descriptor. Otherwise, the default value of "falcon://fastmachine:8040" will be used. This substitution is done before the XML is validated, so in cases where the ${} syntax yields invalid XML, it will be substituted before parsing the document.
Handling Special Characters
SINCE 3.4
Special XML characters in properties that originate from the environment are automatically escaped before being substituted into the XML. In particular, <, >, &, ", and ' will be replaced by &lt;, &gt;, &amp;, &quot;, and &apos; respectively. For example, if running with:
-DFRONTOFFICE_BUSDESCRIPTOR=activemq://localhost:61616&topic_starts_with_channel=false, the DDL parser will substitute it as valid XML like:
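A sketch of the resulting (escaped) substitution in the bus descriptor attribute:

```xml
<bus name="frontoffice" descriptor="activemq://localhost:61616&amp;topic_starts_with_channel=false"/>
```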
Users who prefer to do their own XML escaping can disable this behavior by setting the following property:
In which case the property would need to be passed in pre-escaped as:
-DFRONTOFFICE_BUSDESCRIPTOR=activemq://localhost:61616&amp;topic_starts_with_channel=false
Substituting from DDL <env> Elements
SINCE 3.4
Properties defined in the <env> section of an X-DDL document can be used in variable substitution elsewhere in the document. For example:
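A sketch (the env property simply mirrors the bus descriptor variable used earlier in this document):

```xml
<env>
  <FRONTOFFICE_BUSDESCRIPTOR>falcon://fastmachine:8040</FRONTOFFICE_BUSDESCRIPTOR>
</env>

<buses>
  <bus name="frontoffice" descriptor="${FRONTOFFICE_BUSDESCRIPTOR::loopback://frontoffice}"/>
</buses>
```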
Properties defined in the <env> section of an X-DDL document have the lowest priority of all configuration sources and can be easily overridden using system properties and environment variables. Property values defined in the <env> section may use ${} variables, but variable values are only resolved from the property source passed in, not from other properties defined in the <env> section. If a value defined in an <env> section was already defined in the property source passed into VMConfigurer, that <env> value is replaced by (overridden by) the value passed in.
XVM Specific Env Properties
SINCE 3.8
The 3.8 release introduced the ability to configure XVM specific <env> properties. When DDL is being localized for a specific XVM, properties defined for that XVM are included and override <env> properties defined in the top-level portion of the DDL when the environment variables are resolved. An XVM template may also provide <env> properties. Template provided properties are merged with those defined in the <xvm>, with the <xvm> values overriding any values defined in the template.
The target XVM used to localize configuration can be passed to the VMConfigurer by supplying nv.ddl.targetxvm=<xvm-name> as a value resolver property. This property is automatically set by an XVM prior to invoking the VMConfigurer, but applications running in an embedded mode or using a bootstrap configurer must set this value manually for the environment to be available for ${} substitution.
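For example, with nv.ddl.targetxvm=my-xvm, a configuration along the following lines (a sketch reconstructed from the resolved values listed below; element placement is illustrative):

```xml
<env>
  <prop1>top-level-value1</prop1>
  <prop2>top-level-value2</prop2>
  <prop3>top-level-value3</prop3>
</env>

<xvms>
  <templates>
    <template name="xvm-template">
      <env>
        <prop2>xvm-template-value2</prop2>
        <prop3>xvm-template-value3</prop3>
      </env>
    </template>
  </templates>
  <xvm name="my-xvm" template="xvm-template">
    <env>
      <prop3>my-xvm-value3</prop3>
    </env>
  </xvm>
</xvms>
```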
would resolve to an XVM specific env of:
- prop1=top-level-value1 (as defined in top-level env since not overridden).
- prop2=xvm-template-value2 (picked up from xvm-template to override top-level <env>).
- prop3=my-xvm-value3 (picked up from my-xvm, overrides the template and top-level value).
Note that if any of the above properties were specified in the environment passed in (e.g. System.getenv() or System.getProperties()), the values passed in would take precedence.
Profile Provided Env Properties
The 3.8 release introduced the ability to define DDL profiles. DDL profiles may also contribute or override <env> properties, either as top-level <env> properties or as <xvm> specific <env> properties. Think of each active profile as being merged on top of the top-level DDL XML, overriding any values already defined.
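For example, layering a PROD profile on top of the previous sketch (the layout is illustrative):

```xml
<profiles>
  <profile name="PROD">
    <env>
      <prop1>global-profile-top-level-value1</prop1>
    </env>
    <xvms>
      <xvm name="my-xvm">
        <env>
          <prop3>profile-my-xvm-value3</prop3>
        </env>
      </xvm>
    </xvms>
  </profile>
</profiles>
```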
would resolve to an XVM specific env of:
- prop1=global-profile-top-level-value1 (overrides the top-level value with the PROD profile's top-level env).
- prop2=xvm-template-value2 (picked up from xvm-template to override the top-level <env>).
- prop3=profile-my-xvm-value3 (picked up from the profile's my-xvm entry, overriding the template and top-level value).
DDL Overrides
Any attribute or value element listed in this document that has an X DDL Override property can be overridden by passing the corresponding value into the substitution environment used by the VMConfigurer. DDL overrides are particularly useful for applications that internally bundle DDL on the classpath, making it difficult to edit by hand at runtime.
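For example, consider the frontoffice bus again (a sketch repeated from the substitution example above):

```xml
<buses>
  <bus name="frontoffice" descriptor="${FRONTOFFICE_BUSDESCRIPTOR::falcon://fastmachine:8040}"/>
</buses>
```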
In the above case, the value for descriptor can also be overridden using the DDL Override 'x.buses.frontoffice.descriptor'. So given the following values in the substitution resolver:
- FRONTOFFICE_BUSDESCRIPTOR=falcon://slowmachine:8040
- x.buses.frontoffice.descriptor=direct://frontoffice
would yield:
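Continuing the sketch, the localized bus would be:

```xml
<bus name="frontoffice" descriptor="direct://frontoffice"/>
```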
Even though the initial env substitution would substitute "falcon://slowmachine:8040", the DDL override takes precedence and overrides that value with direct://frontoffice, resulting in the bus being localized as a direct bus.
'enabled' attributes
Throughout the schema, you will notice several elements that have enabled attributes. Specifying a value of "false" for these elements will cause the X-DDL parser to ignore them. This pattern allows these elements to be configured but then disabled at runtime via an environment variable, System property, or DDL override. For example, if the DDL were to configure a persister in an app named "forwarderapp":
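A sketch of such a configuration (the main class is hypothetical; the persister element follows the storage configuration described later in this section):

```xml
<apps>
  <app name="forwarderapp" mainClass="com.example.ForwarderApp">
    <storage>
      <!-- the enabled flag is sourced from the environment with a default of true -->
      <persistence enabled="${FORWARDER_PERSISTER_ENABLED::true}"/>
    </storage>
  </app>
</apps>
```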
then at runtime, it could be disabled by launching the application with:
-Dx.apps.forwarderapp.storage.persister.enabled=false
or
-DFORWARDER_PERSISTER_ENABLED=false
Changing the override prefix
For example, with an override prefix of 'myprefix', the override 'x.apps.forwarderapp.storage.persister.enabled' would become:
'myprefix.apps.forwarderapp.storage.persister.enabled'
DDL Templates
SINCE 3.8
The buses, apps, and xvms sections each support specifying templates. Templates allow common configuration elements to be specified in a single place to reduce repetition in the configuration. For example, if all applications defined in a system will be configured to collect statistics, putting this configuration in a template is more compact than specifying the configuration in each application.
Using Templates
- Templates are defined under the <templates> element of the section to which they apply (buses, apps, xvms)
- A template can specify the same configuration as the element type to which it applies (bus, app or xvm)
- You can define multiple templates, but each bus, app, xvm can only specify a single template.
- When the DDL is parsed, each element in the template is applied to the element using the template unless the element using the template overrides the same element or attribute in its own configuration.
- DDL template values can be overridden with DDL overrides. For example, 'x.buses.templates.orders-bus-template.descriptor' can be used to override the descriptor attribute defined in the 'orders-bus-template' bus template.
Templating Example
This example shows how templating can be used to reduce configuration repetition:
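For example, the following DDL repeats the same statistics configuration in two applications (a sketch; the application names and main classes are hypothetical):

```xml
<apps>
  <app name="order-processing-app" mainClass="com.example.OrderProcessingApp">
    <captureMessageTypeStats>true</captureMessageTypeStats>
    <captureTransactionLatencyStats>true</captureTransactionLatencyStats>
  </app>
  <app name="order-routing-app" mainClass="com.example.OrderRoutingApp">
    <captureMessageTypeStats>true</captureMessageTypeStats>
    <captureTransactionLatencyStats>true</captureTransactionLatencyStats>
  </app>
</apps>
```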
In the above DDL, both applications share the same configuration for what statistics are collected. We can instead define a template to hold this configuration and configure each application to reference the template:
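A sketch of the same configuration using a template:

```xml
<apps>
  <templates>
    <template name="stats-template">
      <captureMessageTypeStats>true</captureMessageTypeStats>
      <captureTransactionLatencyStats>true</captureTransactionLatencyStats>
    </template>
  </templates>
  <app name="order-processing-app" mainClass="com.example.OrderProcessingApp" template="stats-template"/>
  <app name="order-routing-app" mainClass="com.example.OrderRoutingApp" template="stats-template"/>
</apps>
```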
Templating Override Example
If an element using a template defines an element or attribute defined in the template, the value in the element sourcing the template takes precedence. Consider the following:
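A sketch of such a case (the <heartbeats> element and its attributes are illustrative assumptions about XVM heartbeat configuration; the interval of 5 and includeMessageTypeStats of true come from the explanation below):

```xml
<xvms>
  <templates>
    <template name="xvm-template">
      <heartbeats enabled="true" interval="10">
        <includeMessageTypeStats>false</includeMessageTypeStats>
      </heartbeats>
    </template>
  </templates>
  <xvm name="order-processing-vm" template="xvm-template">
    <heartbeats interval="5">
      <includeMessageTypeStats>true</includeMessageTypeStats>
    </heartbeats>
  </xvm>
</xvms>
```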
The above would resolve to:
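Continuing the sketch, the resolved XVM would be:

```xml
<xvms>
  <xvm name="order-processing-vm">
    <heartbeats enabled="true" interval="5">
      <includeMessageTypeStats>true</includeMessageTypeStats>
    </heartbeats>
  </xvm>
</xvms>
```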
Note in the above that:
- order-processing-vm retains the heartbeat interval attribute of 5 that it specified explicitly.
- order-processing-vm retains the includeMessageTypeStats element of true that it specified explicitly.
DDL Profiles
SINCE 3.8
DDL profiles provide a means of making a single DDL configuration document more portable to different environments, by allowing the specification of profiles that can be activated to augment and override the structure of the DDL.
Some good use cases for profiles include:
- Localizing the system to a specific environment by overriding ports, paths and hostnames to the target environment.
- Creating a test profile to be used when unit testing an application.
Each profile defined in a DDL document can specify all of the elements that can be specified in the DDL model. When a profile is activated all of the configuration it specifies is overlaid on top of the main DDL configuration. Values that already exist in the main DDL are overridden - for example, if an <app> is defined in the DDL document and an <app> with the same name attribute is defined in a profile, then all of the elements and attributes in the profile are applied on top of the <app> defined at the top level.
Activated profiles are applied before variable substitution or templating is done. This means that any ${} variable substitution will pick up <env> elements defined in profiles. It also means that template contributions from activated profiles are merged in before any templates are applied.
Activating DDL Profiles
By default, profiles don't augment DDL configuration; they must be activated in order to contribute to the DDL. This can be achieved either through explicit activation or via a profile's activation element.
Explicit Activation
The property nv.ddl.profiles can be used to explicitly activate profiles by passing it in with the set of substitution values in the VMConfigurer. The value of nv.ddl.profiles is a comma separated string of profile names to activate. For example, one might configure a "test" profile that would set message buses to use loopback when running unit tests:
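A sketch of such a configuration (the loopback descriptor is illustrative):

```xml
<buses>
  <bus name="frontoffice" descriptor="falcon://fastmachine:8040"/>
</buses>

<profiles>
  <profile name="test">
    <buses>
      <bus name="frontoffice" descriptor="loopback://frontoffice"/>
    </buses>
  </profile>
</profiles>
```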
would resolve to the following when -Dnv.ddl.profiles=test
is set in the bootstrap environment.
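Continuing the sketch, the frontoffice bus would be localized as:

```xml
<bus name="frontoffice" descriptor="loopback://frontoffice"/>
```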
And would resolve to the following if no nv.ddl.profiles were set (or none matching the "test" profile name):
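Continuing the sketch:

```xml
<bus name="frontoffice" descriptor="falcon://fastmachine:8040"/>
```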
Profile Activation Element
A profile can specify an <activation> element that accepts a list of properties that must all match values in the bootstrap environment (e.g. system properties and environment variables) in order to activate the profile. This can be useful to localize an application based on the environment it is running in. For example, the following profile could be automatically activated to set the discovery address to use when running in the prod environment, based on the environment variable ENVIRONMENT_NAME=PROD being set in that environment.
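A sketch of such a profile (the <properties> wrapper under <activation> and the multicast discovery descriptor are illustrative assumptions; nv.discovery.descriptor is the default discovery property mentioned under Application Storage Configuration below):

```xml
<profiles>
  <profile name="prod">
    <activation>
      <properties>
        <!-- all listed properties must match the bootstrap environment -->
        <ENVIRONMENT_NAME>PROD</ENVIRONMENT_NAME>
      </properties>
    </activation>
    <env>
      <nv>
        <discovery>
          <!-- illustrative discovery descriptor -->
          <descriptor>mcast://224.0.1.200:12000</descriptor>
        </discovery>
      </nv>
    </env>
  </profile>
</profiles>
```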
Profile Activation Order
Profiles are applied in the order in which they are specified in nv.ddl.profiles followed by any additional profiles activated by an activation element (which may be applied in any order). If the order of profile activation is important, the order attribute may be set on profiles to control the order in which they are applied:
- Profiles with a lower order are applied first.
- If the order attribute is not set it defaults to 0
- If two profiles have the same order value, they are applied in the order in which they are specified in nv.ddl.profiles; otherwise they are applied in an unspecified order after the ordered profiles.
Troubleshooting DDL parsing
DDL trace can be enabled by setting -Dnv.ddl.trace=debug (or when using SLF4J setting the logger nv.ddl to 'Trace' Level).
XML Reference
If you are working in an IDE such as Eclipse, try importing the DDL XSD schema into your Eclipse XML catalog so that you can get usage tips on configuration elements directly in the IDE by pressing ctrl-space.
The x-ddl.xsd schema is published online with each release and also included at the root of talon jars.
Optional Values
Many of the configuration options listed here need not be specified by most applications, and in most cases values listed as 'Optional' below should be omitted from an application's configuration as the platform will apply reasonable defaults. A good strategy is to start with a minimal configuration and only add additional configuration options as needed.
DDL Model Sections
System Details
System details provides metadata about the overall system. It can be used by tools to better identify the system being configured.
Sample XML Snippet
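A sketch of a <systemDetails> section (the values are illustrative):

```xml
<systemDetails>
  <name>order-management</name>
  <displayName>Order Management System</displayName>
  <version>1.0</version>
</systemDetails>
```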
Settings
Element | Description |
---|---|
<systemDetails> | Holds metadata for the system described by the DDL.
|
<name> | The unique identifier for the system described by this deployment descriptor. The identifier should be a short, descriptive identifier that identifies it and the collection of XVMs and applications it groups together. |
<displayName> | A human readable name for the system intended for use by tools. |
<version> | The version of this system. A system version should be incremented as changes are made to the configuration or composition of the system (xvm and apps) and should also be changed when any of the binaries for the application have changed. |
</systemDetails> |
Environment Configuration
The <env> section allows configuration of the runtime properties accessible to Talon and the application through the XRuntime class. The X Platform reserves the use of the prefix 'nv.' for platform configuration; applications are otherwise free to set arbitrary properties in the <env> section. The properties defined in <env> will be stored in the configuration repository and later loaded into XRuntime and made available to the application.
Since 3.4, values specified in an <env> section can be used for variable substitution on ${prop::value} specified values outside of the <env> element.
Environment properties can be listed in either '.' separated form, or by breaking the dot separated levels into hierarchical nodes.
Sample XML Snippet
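A sketch showing a property in '.' separated form and another broken into hierarchical nodes (the property names are illustrative):

```xml
<env>
  <!-- '.' separated form -->
  <myapp.ordersBusDescriptor>falcon://fastmachine:8040</myapp.ordersBusDescriptor>
  <!-- hierarchical form: defines myapp.storeRoot -->
  <myapp>
    <storeRoot>/data/myapp/rdat</storeRoot>
  </myapp>
</env>
```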
Settings
Element | Description |
---|---|
<env> | Any XML element with text content will be treated as a property by concatenating its parent node names with '.' separators. If the value is already defined in the set of DDL overrides passed into the parser, the value in the XML will be overridden. Unlike other DDL values mentioned in this document, overrides for <env> values need not be prefixed with 'x.' ... the values passed in will directly override the values specified in the <env> section without a prefix.
|
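For example (a sketch; the property name is illustrative), given:

```xml
<env>
  <myapp>
    <storeRoot>/data/myapp/rdat</storeRoot>
  </myapp>
</env>
```

passing -Dmyapp.storeRoot=/fastdisk/myapp/rdat into the bootstrap environment overrides the value above; no 'x.' prefix is used.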
Message Bus Provider Configuration
The 'busProviders' section is optional and allows registration of custom message bus implementations.
When working with custom message binding types, a messaging provider implementing com.neeve.sma.MessagingProvider must be registered with the runtime to serve as a factory for creating new message bus instances.
Providers are registered by name and must match the provider name used when configuring the bus (or the scheme portion of the message bus descriptor).
To avoid conflicts with potential future bus implementations provided by the platform itself, it is recommended that users prefix custom provider names with a prefix such as 'x-' to mark the binding as a custom bus extension. For example, if you were to implement a bus binding that communicates over AMQP (which is not currently implemented by the platform), use 'x-amqp' as the binding name.
SINCE 3.8
Sample XML Snippet
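A sketch of a provider registration (the provider class is hypothetical; the 'x-amqp' name follows the naming recommendation above):

```xml
<busProviders>
  <!-- providerClass is a hypothetical custom implementation of com.neeve.sma.MessagingProvider -->
  <provider name="x-amqp" providerClass="com.example.sma.AmqpMessagingProvider" displayName="AMQP Bus Provider" enabled="true"/>
</busProviders>
```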
Settings
Element | Description |
---|---|
<busProviders><provider | |
name | The bus provider name is the provider name used when configuring a bus of this type. For example a bus provider registered as 'foo' would be used for a bus configured with 'foo://address:port' Usage: Required |
providerClass |
The provider class instance used to create message bus binding instances for this bus type. A provider instance must implement 'com.neeve.sma.MessagingProvider', and will typically extend 'com.neeve.sma.impl.MessagingProviderBase' Usage: Optional |
enabled | Can be used to disable the bus provider. A disabled bus provider will not be registered in the system. Usage: Optional |
displayName | A user friendly name that can be used by tools for displaying the messaging provider. Usage: Optional |
/></busProviders> |
Message Bus Configuration
The 'buses' section configures the various messaging buses used globally in the deployment. For example, the below configures a messaging bus named 'frontoffice' that:
- Uses a Falcon publish-subscribe bus for message transport between application agents.
- The bus to use can be overridden from the environment via the FRONTOFFICE_BUSDESCRIPTOR variable.
- Contains two channels:
- 'orders' channel with guaranteed quality of delivery.
- 'events' channel with best effort quality of delivery.
Sample XML Snippet
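A sketch matching the description above (the qos values 'Guaranteed' and 'BestEffort' are assumptions about the enumeration names):

```xml
<buses>
  <bus name="frontoffice" descriptor="${FRONTOFFICE_BUSDESCRIPTOR::falcon://fastmachine:8040}">
    <channels>
      <channel name="orders" id="1">
        <qos>Guaranteed</qos>
      </channel>
      <channel name="events" id="2">
        <qos>BestEffort</qos>
      </channel>
    </channels>
  </bus>
</buses>
```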
Settings
Element | Description |
---|---|
<buses> | |
<templates> | Holds bus templates |
<template | Defines a message bus template. |
name | The name of the template. Usage: Required |
* | Any bus attribute defined below except for 'template' X DDL Override: x.buses.templates.<templatename>.* |
> | |
* | Any bus element described below. X DDL Override: x.buses.templates.<templatename>.* |
</template> | |
</templates> | |
<bus | Configures a bus. |
name | Defines the bus name which must be unique within a configuration repository. Applications reference their bus by this name. Usage: Required |
descriptor | Defines the bus descriptor. A bus descriptor is used to lookup and configure a message bus provider. Usage: Required |
template | The name of the bus template to use. Usage: Optional |
enabled | If set to false, this bus will not be added to the configuration repository and will not be available for application use. Usage: Optional |
> | |
<channels><channel | Configures the channels for this bus. Individual applications that use the bus may use some or all of the channels according to their own configuration and interaction patterns. |
name | Defines and configures a channel within a message bus. An SMA message channel is a named conduit for message exchange between SMA messaging participants. An application's AepEngine will start messaging channels prior to signaling to the application that messaging has started. Usage: Required |
id | The channel id is a numerical identifier of a channel uniquely identifying the channel in its bus. Some bus binding implementations may use this on the wire as a replacement for the string channel name for efficiency, so it is important that the id is consistent across configuration domains. Usage: Optional |
<qos> | Specifies the channel's quality of service. When the qos element is not provided, the platform's default QoS value will be used if not specified programmatically by the application. Usage: Optional |
<key> | Specifies the channel's key. Usage: Optional |
</channel></channels></bus></buses> |
Application (AepEngine) Configuration
Applications are configured under the <apps> element.
Sample XML Snippet
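A sketch of an <apps> section with three applications (the names and main classes are hypothetical):

```xml
<apps>
  <app name="order-gateway" mainClass="com.example.OrderGatewayApp"/>
  <app name="order-processing" mainClass="com.example.OrderProcessingApp"/>
  <app name="order-reporting" mainClass="com.example.OrderReportingApp"/>
</apps>
```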
Settings
Element | Description |
---|---|
<apps> | The <apps> section configures the various applications in the scenario. An application is synonymous with an AEP engine. For example, the above configures three applications (i.e. engines). |
<templates> | Holds application templates Usage: Optional |
<template | Defines a Talon application template. Template applications cannot be used at runtime ... they serve only as templates for actual apps' configuration. |
name | The name of the template. Usage: Required |
* | Any app attribute defined below except for 'template', and prior to 3.9, mainClass. X DDL Override: x.apps.templates.<templatename>.* |
> | |
* | Any app element described below. X DDL Override: x.apps.templates.<templatename>.* |
</template> | |
</templates> | |
<app | |
name | Defines the application name which must be unique within an application's configuration domain. Usage: Required
A common practice for a clustered application is to use the same name for an application and its store (see storage configuration below). It is best practice not to use a name with spaces as the name is used in many contexts such as scripting where it is operationally simpler to avoid spaces and special characters.
|
mainClass | Specifies the application's main class (e.g. com.acme.MyApplication). An application's main class serves as the main entry point for a Talon application (it is loaded by a Talon XVM and provides lifecycle hooks to it). This is not to be confused with a java main class. When running in a Talon XVM the java main class will be the Talon XVM main (com.neeve.server.Main). Usage: Required |
enabled> | If set to false, this app will be ignored and not saved to the configuration repository. This can be used to disable an application at runtime. However, note that if a persistent configuration repository is in use, this will not cause a previously configured application to be deleted. Usage: Optional |
> | |
<messaging> | Configures messaging for the application. |
<storage> | Configures clustering and persistence for the application. |
... | General / Miscellaneous application Configuration |
</app></apps> |
Application Messaging Configuration
An app's <messaging> element:
- Declares the message factories used by the application (to deserialize messages by factory id and type)
- Configures which buses from the <buses> section are used by this application and configures them.
- Configures runtime settings used to configure the bus when it is created.
Sample XML Snippet
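A sketch of an application's <messaging> section (the factory name is hypothetical; the bus and channels reference the frontoffice bus defined above):

```xml
<messaging>
  <factories>
    <!-- hypothetical message factory registered with the engine -->
    <factory name="com.example.messages.OrderMessageFactory"/>
  </factories>
  <buses>
    <bus name="frontoffice">
      <channels>
        <channel name="orders" join="true"/>
        <channel name="events" join="false"/>
      </channels>
    </bus>
  </buses>
</messaging>
```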
Settings
Element | Description |
---|---|
<messaging> | Configures messaging for an application. |
<factories> | Configures message factories for an application. Each message factory defined under this element is registered with the application's underlying engine. Registered message factories are used by the message buses to materialize message instances from factory and type ids received over the wire. It is not mandatory to configure factories via DDL, they can also be registered programmatically by the application during application initialization. SINCE 3.4 |
<factory | Configures a message factory used by the app. |
name | The message factory's fully qualified name. Usage: Required |
</factory> | |
</factories> |
<buses> | Configures the buses that the application will use. Each bus defined in this section should have the same name as a bus defined in the global <buses> section. |
<bus | Configures a bus from the <buses> section for use with the application and registers it with the underlying engine. Each application in the deployment will create its own bus instance, and may configure channel interest in that bus differently depending on its participation in the message flow. |
name | Specifies the name of the bus which should reference a bus from the buses section of this descriptor or one already created and saved in the configuration repository. Usage: Required |
enabled> | If set to false, this bus will be ignored and not added to the application list of buses. Usage: Optional |
<channels> <channel | Configures the bus channels used by the application which will be a subset of those defined for the bus in the descriptor's <buses> section. |
name | Specifies the name of the channel which references a channel defined in the <bus> element in the <buses> section. Usage: Optional |
join> | An application that should receive messages on the channel should specify true. When true, subscriptions are issued for the channel based on the channel's key and filter. Usage: Optional |
<filter> | When specified and the channel is joined, this can be used to specify a channel key based filter that filters the messages received. See Channel Filters. Usage: Optional |
<preserveJoinsOnClose> | Sets whether or not to preserve subscriptions for this channel when the channel is closed normally. By default, when an engine is stopped without an error, bus channels that were 'joined' will be 'left', meaning that any subscriptions or interests created by the message bus will be unsubscribed or unregistered. Whether or not channels' interests are preserved can be configured at the application level using the app's preserveChannelJoinsOnStop setting. The preserveJoinsOnClose channel level property allows the application configured behavior to be overridden on a channel by channel basis. Valid Options
Note that this property has no effect for the case where an engine shuts down with an error with a non null cause [e.g. AepEngine.stop(new Exception())]. In this case channel joins are left intact allowing a backup to take over. Default: Default SINCE 3.12 |
</channel> </channels> | Additional channel configurations can be added here. |
<nonBlockingInboundMessageDispatch> | Specifies whether or not enqueue of inbound messages for this bus should block on the application's main inbound event multiplexer. In most cases, this value should either not be specified or set to false. Usage: Optional |
<inboundMessageEventPriority> | Specifies the priority at which messages from this bus should be dispatched to the application's inbound event multiplexer. A negative value is interpreted as higher priority. A positive value will result in delayed processing by the number of milliseconds specified. If not set or 0, messages will be dispatched at normal priority. Usage: Optional |
<scheduleSendCommitCompletionEvents> | Indicates whether the bus manager's send commit completion events should be scheduled. Scheduling of the completion events allows them to be added to the application's inbound event queue's feeder queue, which can reduce contention with message events. Usage: Optional |
<sendCommitCompletionEventPriority> | Specifies the priority at which send commit completion events from this bus should be dispatched to the application's inbound event multiplexer. A negative value is interpreted as higher priority. A positive value will result in delayed processing by the number of milliseconds specified. If not set or 0, events will be dispatched at normal priority. Setting this value higher than message events can reduce message processing latency in some cases. Usage: Optional |
<detachedSend | Configures the detached send event multiplexer thread for the bus. When detached send is disabled, outbound send of messages is performed by the commit processing thread (typically the engine's inbound event multiplexer thread). Enabling detached send can reduce the workload on the commit processing thread, allowing it to process more inbound messages, but this can also incur additional latency. |
enabled> | Specifies whether or not detached send is enabled for this bus. Usage: Optional |
<queueDepth> | The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used. See <queueDepth> X DDL Override: |
<queueOfferStrategy> | Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: |
<queueWaitStrategy> | Controls the wait strategy for the queue's drainer thread. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: |
<queueDrainerCpuAffinityMask> | Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpu, or a square bracket enclosed comma separated list enumerating the logical CPUs. See <queueDrainerCpuAffinityMask> X DDL Override: |
<queueFeedMaxConcurrency> | Sets the maximum number of threads that will feed the multiplexer's queue. Usage: Optional |
</detachedSend> <bus> </buses></messaging> |
Application Storage Configuration
Configures storage options for the application. An application's store provides the foundation for HA and Fault Tolerance. Applications achieve clustering through configuring the store, which will discover other application members and elect a single primary member through a leader election algorithm. The store serves as the foundation for HA by replicating changes from the primary application member to backups in a highly efficient, pipelined, asynchronous manner - a core requirement for In Memory Computing. While the primary mechanism for HA is memory to memory replication, an application's storage configuration may also configure disk-based persistence as a fall back mechanism in the event that connections to backup instance fail.
An application that runs standalone without any persistence does not need to include a store, which is a perfectly valid configuration for an application that does not have HA requirements.
Sample XML Snippet
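A sketch of an application's <storage> section (the values are illustrative):

```xml
<storage>
  <clustering enabled="true">
    <storeName>order-processing</storeName>
  </clustering>
  <persistence enabled="true">
    <flushOnCommit>true</flushOnCommit>
    <storeRoot>${myapp.storeRoot::/data/myapp/rdat}</storeRoot>
  </persistence>
</storage>
```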
Settings
- <storage
- descriptor
- enabled>
- <factories>
- </factories>
- <persistenceQuorum>
- <maxPersistSyncBacklog>
- <icrQuorum>
- <maxIcrSyncBacklog>
- <checkpointingType>
- <checkpointThreshold>
- <checkpointMaxInterval>
- <detachedMemberInitialization>
- <detachedMemberInitializerCpuAffinityMask>
- <discoveryDescriptor>
- <failOnMultiplePrimaries>
- <clustering
- </clustering>
- <persistence
- </persistence>
- <icr
- </icr>
- </storage>
Element | Description |
---|---|
<storage | See Also: StoreDescriptor |
descriptor | The store descriptor which is used to localize the store to a specific provider. Usage: Required prior to 3.4, Deprecated in 3.4+ |
enabled> | Can be set to false to disable the store for this application. Usage: Optional |
<factories> | Configures state factories for an application. Each state factory defined under this element is registered with the application's underlying engine and store. Registered state factories are used by the store's replication receiver and transaction log to materialize entity instances from factory and type ids received over the wire. It is not mandatory to configure factories via DDL, they can also be registered programmatically by the application during application initialization. SINCE 3.4 |
<factory | Configures a state factory used by the app. |
name | The state factory's fully qualified name. Usage: Required |
</factory> | |
</factories> |
<persistenceQuorum> | Sets a store's persistence quorum. The persistence quorum is the minimum number of store members running in a cluster that determines whether persister commits are executed synchronously or not. If the number of members is greater or equal to the quorum, then persistence commits are always performed asynchronously. Otherwise, they are persisted synchronously. Usage: Optional |
<maxPersistSyncBacklog> | When set to a value greater than 0 in seconds, the store's persister will be periodically synced to disk. This limits the amount of unsynced data (in time) that hasn't been synced in the event of a JVM failure, which can be useful for low volume applications that are operating above their persistence quorum. Usage: Optional |
<icrQuorum> | Sets a store's ICR quorum. The ICR quorum is the minimum number of store members running in a cluster that determines whether ICR send commits are executed synchronously or not. If the number of members is greater than or equal to the quorum, then ICR commits are always performed asynchronously. Otherwise, they are performed synchronously. Usage: Optional |
<maxIcrSyncBacklog> | When set to a value greater than 0 in seconds, the store's ICR sender is periodically synced. This limits the amount of unsynced data (in time) in the event of a JVM failure, which can be useful for low volume applications that are operating above their ICR quorum. Usage: Optional |
<checkpointingType> | Sets the store's checkpoint controller type. A checkpoint controller determines the checkpoint boundaries within a transaction log by incrementing the checkpoint version for log entries. The checkpoint version is used by CDC and Log Compaction algorithms as the boundaries upon which those operations occur. Usage: Optional |
<checkpointThreshold> | Sets the store's checkpoint threshold. The threshold controls the maximum number of entries before a transaction log's checkpoint version is increased, a checkpoint controller keeps track of the number of entries that count towards reaching this threshold. Usage: Optional |
<checkpointMaxInterval> | Sets the max time interval (in millis) that can occur before triggering a new checkpoint. Usage: Optional |
<detachedMemberInitialization> | SINCE 3.12.5 Sets whether backup member initialization is performed in a detached manner or not. When member initialization is set to detached, then the member initialization executes concurrently with the store operation. Detached member initialization is another name for non-blocking cluster join. Usage: Optional |
<detachedMemberInitializerCpuAffinityMask> | SINCE 3.12.5 Sets the CPU affinity mask to use for the detached member initializer thread. The affinity string can either be a long that represents a mask of logical cpu or a square bracket enclosed comma separated list enumerating the logical cpus. For example specifying "1" or "[0]" indicate Core 0. "3" or "[0, 1]" would indicate Core 0 or Core 1. Specifying a value of "0" indicates that the thread should be affinitized to the platform's default cpu, and omitting this value indicates that the thread should be affinitized according to the platform's default policy. Usage: Optional |
<discoveryDescriptor> | Sets the custom discovery descriptor for the store.
Usage: Optional |
<failOnMultiplePrimaries> | This property has been deprecated and should be set under the clustering element. Usage: Optional |
<clustering | SINCE 3.4 The clustering element, when enabled, is used to configure store clustering which provides the ability for applications' store members to discover one another and form an HA cluster. |
enabled> | Can be set to false to disable store clustering. Usage: Optional |
<storeName> | Sets the name of the store. Applications with the same store name automatically form a cluster. If this configuration parameter is not specified, then the application name is used as the store name Usage: Optional |
<localIfAddr> | Sets the local network interface to bind to when establishing cluster network connections. Usage: Optional |
<localPort> | Sets the TCP port to bind to when listening for cluster connections. Usage: Optional |
<linkParams> | A comma separated set of key=value pairs that serve as additional configuration parameters for the network connections between the cluster members. Usage: Optional |
<linkReaderCpuAffinityMask> | Sets the CPU affinity mask to use for the cluster connection reader thread. Each cluster member uses a single thread to read replication traffic from other cluster members. The affinity string can either be a long that represents a mask of logical cpu, or a square bracket enclosed comma separated list enumerating the logical cpus. For example, specifying "1" or "[0]" indicates Core 0. "3" or "[0, 1]" would indicate Core 0 or Core 1. Specifying a value of "0" indicates that the thread should be affinitized to the platform's default cpu, and omitting this value indicates that the thread should be affinitized according to the platform's default policy for the multiplexer. See com.neeve.util.UtlThread.setCPUAffinityMask(String) for details. Usage: Optional |
<discoveryDescriptor> | Sets the custom discovery descriptor for the store. When set, this descriptor is used to load the discovery provider for the store. In most cases, an application will simply want to use the default discovery provider configured for the JVM, which is set via the nv.discovery.descriptor property. In such cases this value need not be set, and the store will simply use the default discovery provider. However, in some cases where discovery within the same JVM must be partitioned, it can be useful to specify a separate discovery provider for the store, and this property facilitates that. Usage: Optional |
<discovery> | Configures the custom discovery descriptor for the store in decomposed form. The discovery descriptor is composed as <provider>://<address>[:<port>][&prop1=val1][&propN=valN]
|
<provider> | The discovery provider's name which is used to locate the discovery implementation Usage: Required |
<address> | The discovery provider's address Usage: Required |
<port> | The discovery provider's port Usage: Optional |
<properties> | List the discovery descriptor parameters in key=value pairs:
|
</discovery> | |
<initWaitTime> | Sets the time, in milliseconds, that the store cluster manager will wait on open for the cluster to stabilize. When a store binding opens its binding to the store, it joins the discovery network to discover other store cluster members. Once discovered, the members need to connect to each other, perform handshakes and elect roles. This parameter governs how long, after the binding has joined the discovery network, does the cluster manager wait for the store cluster to "stabilize"
Usage: Optional |
<failOnMultiplePrimaries> | Sets whether a store cluster manager should fail the store binding on detecting multiple primaries in a cluster. The default policy is to fail on detecting multiple primaries. This means that if multiple primaries are detected, the members detected as primaries will shut down to prevent a "split-brain" situation. If this parameter is set to false, then the members detected as primaries will not establish connectivity with each other and will continue to operate independently as primaries. Usage: Optional |
<memberElectionPriority> | When two members connect and neither has already assumed the primary role, then the member with the lower election priority will be elected primary. Configured values of less than 0 are set to 0 and values configured greater than 255 are set to 255. The default election priority (if not configured here in the DDL) is 255. When two members have the same priority either one may assume the primary role. Usage: Optional SINCE 3.8 |
<detachedSend | Configures whether or not to send the outbound replication traffic by the engine thread or pass off the send to a detached replicator sender thread.
Usage: Optional |
enabled> | Can be set to true to enable detached sends for store replication. Usage: Optional |
<queueDepth> | The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified the platform's default value for the multiplexer will be used. See <queueDepth> X DDL Override: |
<queueOfferStrategy> | Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: |
<queueWaitStrategy> | Controls the wait strategy for the queue's drainer thread. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: |
<queueDrainerCpuAffinityMask> | Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpu, or a square bracket enclosed comma separated list enumerating the logical cpus. See <queueDrainerCpuAffinityMask> X DDL Override: |
<queueFeedMaxConcurrency> | Sets the maximum number of threads that will feed the multiplexer's queue. Usage: Optional |
</detachedSend> | |
<detachedDispatch | Configures whether or not to dispatch the inbound replication traffic and events by the replication link reader thread or pass the dispatch off to a detached replicator dispatcher thread. Usage: Optional |
enabled> | Can be set to true to enable detached dispatch of inbound replication traffic. Usage: Optional |
<queueDepth> | The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified the platform's default value for the multiplexer will be used. See <queueDepth> X DDL Override: |
<queueOfferStrategy> | Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: |
<queueWaitStrategy> | Controls the wait strategy for the queue's drainer thread. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: |
<queueDrainerCpuAffinityMask> | Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpu or a square bracket enclosed comma separated list enumerating the logical cpus. See <queueDrainerCpuAffinityMask> X DDL Override: |
<queueFeedMaxConcurrency> | Sets the maximum number of threads that will feed the multiplexer's queue. Usage: Optional |
</detachedDispatch> </clustering> | |
<persistence | Configures the persister for this store. A persister is responsible for storing the store's transactional updates to disk (or some other recoverable storage medium). Persisters typically serve as a secondary fault tolerance mechanism for clustered applications, but for an application that will only operate standalone this can serve as the primary mechanism for fault tolerance. Usage: Optional See Also: StorePersisterDescriptor |
class | Can be set to the fully qualified classname of a custom implementation of a store persister class. If omitted or "native" is specified, then the platform's default persister will be used (recommended). Usage: Optional |
enabled> | Can be set to false to disable the persister for this store. Usage: Optional |
<autoFlushSize> | In the absence of explicit flushes (e.g. flushOnCommit) of written entries, the size at which flush is automatically triggered for queued writes. If not set, the platform default (8192) is used. Usage: Optional |
<flushOnCommit> | Whether or not the persister should be flushed on commit. By default a persister buffers writes into an internal buffer and doesn't write to disk until that buffer has filled. Enabling flush on commit will flush the persister regardless of whether the buffer has filled. Usage: Optional |
<flushUsingMappedMemory> | Whether flushes to the log file should be performed using a memory mapped file buffer. Usage: Optional
|
<autoRepair> | Whether or not an attempt will be made to automatically repair non-empty logs by truncating malformed entries at the end of the log that are part of incomplete transactions. Usage: Optional |
<storeRoot> | Specifies the root folder in which the persister's transaction log files are located. Usage: Optional If the expected value of NVROOT on your target deployment host is not on the device where you want to place your transaction logs (e.g. slow or small disk), then consider making this a substitutable value such as <storeRoot>${myapp.storeRoot}</storeRoot>, so that you can customize its location at runtime appropriate to the environment in which you are launching. |
<shared> | Whether or not the persister is shared. If omitted or false, the application will use shared-nothing persistence. If true, it indicates that the persister is using the same physical storage between backups and primaries, meaning that instances in a backup role will not persist to disk and will leave that to the primary. In most cases applications will leave this as false. Usage: Optional |
<cdcEnabled> | Whether CDC is enabled on the log. If CDC is not enabled, then a CDC processor run on a log will not dispatch any events. If CDC is not enabled on a log and then reenabled later, CDC will start from the live log at the time the CDC is enabled. If a compaction occurred while CDC was disabled, then the change events that occurred during that time will be lost; in other words, CDC enablement instructs the compactor to preserve data on disk necessary for performing CDC rather than deleting it on compaction. CDC enabled is only supported for applications using StateReplication as an HAPolicy. Usage: Optional |
<compactionThreshold> | Sets the log compaction threshold. The log compaction threshold is the size (in megabytes) that triggers a log compaction. The act of compacting a log will compact as many complete checkpoints in the log and switch the live log over to the compacted log. A threshold value of less than or equal to 0 disables live log compaction. Usage: Optional |
<maxCompactionWindowSize> | The log compaction window is the approximate maximum size (in megabytes) rounded up to the end of the nearest checkpoint that a compact operation uses to determine how many log entries it will hold in memory. The more entries the compactor can hold in memory while performing a compaction, the more efficient the compact operation will be. Note: The minimum compaction window is a checkpoint. Therefore, if the system is configured such that a checkpoint covers entries that cumulatively exceed the value of this parameter, then this parameter will not reduce the compaction memory usage; rather, the compactor will load the entire checkpoint into memory when performing the checkpoint operation. Note: When calculating memory needed by the compaction operation, one should multiply this parameter by a factor of 2: i.e. the memory used by compaction will be twice the memory specified via this parameter. Usage: Optional |
<logScavengePolicy> | Sets policy used to scavenge logs. A log with number N is considered a candidate for scavenging when N is less than the live log number and N less than the CDC log number. This parameter specifies how such logs need to be scavenged. Currently, the only recommended value is 'Delete' ... the 'Disabled' policy is currently used by tools to ensure that they don't erroneously delete files still needed for CDC. Usage: Optional
|
<initialLogLength> | Sets the initial file size of the persister's transaction log in gigabytes. Preallocating the transaction log can save costs in growing the file size over time since the operation of growing a log file may actually result in a write of file data + the metadata operation of updating the file size, and may also benefit from allocating contiguous sectors on disk. Usage: Optional |
<zeroOutInitial> | Whether the log file should be explicitly zeroed out (to force commit all disk pages) if newly created. Usage: Optional |
<pageSize> | Sets the page size for the disk in bytes. The persister will use this as a hint in several areas to optimize its operation. Usage: Optional |
<detachedPersist | Configures whether or not persister writes are done by the store commit thread or passed off to a detached persister write thread. Offloading the persist to a persister thread can increase store throughput but requires an extra processor core for the persister thread. |
enabled> | Can be set to true to enable detached persister for the persister. Usage: Optional |
<queueDepth> | The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used. See <queueDepth> X DDL Override: x.apps.<appname>.storage.persistence.detachedPersist.queueDepth |
<queueOfferStrategy> | Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: x.apps.<appname>.storage.persistence.detachedPersist.queueOfferStrategy |
<queueWaitStrategy> | Controls the wait strategy for the queue's drainer thread. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: |
<queueDrainerCpuAffinityMask> | Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpu, or a square bracket enclosed comma separated list enumerating the logical cpus. See <queueDrainerCpuAffinityMask> X DDL Override: |
<queueFeedMaxConcurrency> | Sets the maximum number of threads that will feed the multiplexer's queue. Usage: Optional |
</detachedPersist> </persistence> | |
<icr | Configures Inter-cluster Replication (ICR) for the application. |
role | Configures the inter-cluster replication role. See ICRRole Usage: Required |
busDescriptor | Configures the bus descriptor for ICR. ICR uses its own private bus instance created from this descriptor. Usage: Required |
enabled> | Can be set to true to enable inter cluster replication. Usage: Optional |
<bus | Defines and configures the private bus instance used for ICR. The ICR bus can be configured by either the bus element or the bus descriptor attribute. It is illegal to use both mechanisms. |
<provider> | The bus provider name. Usage: Required |
<address> | The bus address. Usage: Required |
<port> | The bus port for bus providers that accept a port. Usage: Required |
<properties> </properties> | List the bus descriptor parameters in key=value pairs e.g.
Usage: Optional |
</bus> | |
<shared> | Whether or not an ICR Sender is a shared sender. Applications should set this to true when using ICR to a Standalone Receiver, e.g. only the primary instance should send updates to the ICR queue. Usage: Optional |
<flushOnCommit> | Whether or not the icr sender should be flushed on commit. Setting this value to true will flush all updates to the underlying message bus on commit. With a value of false the bus may buffer some messages until new updates are sent on subsequent commits. Usage: Optional |
<detachedSend | Configures whether or not ICR sends are done by the store commit thread or passed off to a detached send thread. Offloading the send to a sender thread can increase store throughput but requires an extra processor core for the sender thread. When enabled the properties here configure the multiplexer for the detached send thread. |
enabled> | Configures whether or not detached ICR send is enabled. Usage: Optional |
<queueDepth> | The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used. See <queueDepth> X DDL Override: x.apps.<appname>.storage .icr. detachedSend. queueDepth |
<queueOfferStrategy> | Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: |
<queueWaitStrategy> | Controls the wait strategy for the queue's drainer thread. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: |
<queueDrainerCpu AffinityMask> | Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpu or a square bracket enclosed comma separated list enumerating the logical cpus. See <queueDrainerCpuAffinityMask> X DDL Override: |
<queueFeedMax Concurrency> | Sets the maximum number of threads that will feed the multiplexer's queue. Usage: Optional |
</detachedSend> </icr></storage> | End of storage configuration (only one storage configuration may be specified per application). |
General Application Configuration
The remaining elements under the <app> element configure the operation of the application's AepEngine.
Sample XML
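A sketch showing a few of the general engine settings described below (the values are illustrative):

```xml
<app name="order-processing" mainClass="com.example.OrderProcessingApp">
  <inboundEventMultiplexing>
    <queueDepth>1024</queueDepth>
  </inboundEventMultiplexing>
  <captureMessageTypeStats>true</captureMessageTypeStats>
  <captureTransactionLatencyStats>true</captureTransactionLatencyStats>
</app>
```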
Settings
- <inboundEventMultiplexing
- <inboundMessageLogging
- <outboundMessageLogging
- </outboundMessageLogging>
- <perTransactionStatsLogging
- </perTransactionStatsLogging>
- <startupExpectations
- </startupExpectations>
- <messageHandlingPolicy>
- <messagingStartFailPolicy>
- <messageBusBindingFailPolicy>
- <replicationPolicy>
- <messageSendPolicy>
- <inboundEventAcknowledgementPolicy>
- <appExceptionHandlingPolicy>
- <quarantineChannel>
- <quarantineMessageKey>
- <messageSendExceptionHandlingPolicy>
- <messageSendStabilityFailureHandlingPolicy>
- <replicationPolicy>
- <adaptiveCommitBatchCeiling>
- <enableTransactionCommitSuspension>
- <dispatchTransactionStageEvents>
- <replicateSolicitedSends>
- <replicateUnsolicitedSends>
- <sequenceUnsolicitedSends>
- <sequenceUnsolicitedWithSolicitedSends>
- <dispatchSendStabilityEvents>
- <disposeOnSend>
- <clusterHeartbeatInterval>
- <administrative>
- <stuckAlertEventThreshold>
- <performDuplicateChecking>
- <setOutboundSequenceNumbers>
- <syncInjectedMessages>
- <stopOnJVMShutdown>
- <performMidstreamInitializationValidation>
- <enableSequenceNumberTrace>
- <enableEventTrace>
- <enableTransactionTrace>
- <enableScheduleTrace>
- <enableAlertTrace>
- <enableMessageTrace>
- <messageTraceInJson>
- <messageTraceJsonStyle>
- <messageTraceFilterUnsetFields>
- <messageTraceMetadataDisplayPolicy>
- <maxEnvironmentProviders>
- <enableSendCommitCompleteSequenceAlerts>
- <captureMessageTypeStats>
- <messageTypeStatsLatenciesToCapture>
- <captureTransactionLatencyStats>
- <capturePerTransactionStats>
- <captureEventLatencyStats>
- <replicateInParallel>
- <preserveChannelJoinsOnStop>
- <setSupportMetadata>
Element | Description |
---|---|
<inboundEventMultiplexing | Configures the AepEngine's inbound event multiplexer: the single AepEngine multiplexer thread that serializes processing of messages, timers, acks and other events for the application. |
<queueDepth> | The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used. See <queueDepth> X DDL Override: |
<queueOfferStrategy> | Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: x.apps.<appname>.inboundEventMultiplexing.queueOfferStrategy A value of SingleThreaded is almost never appropriate for an AepEngine because many threads dispatch events to an engine. |
<queueWaitStrategy> | Controls the wait strategy used by the queue's drainer thread when waiting for entries. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: |
<queueDrainerCpuAffinityMask> | Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpus, or a square bracket enclosed comma separated list enumerating the logical cpus. See <queueDrainerCpuAffinityMask> X DDL Override: |
<queueFeedMaxConcurrency> | Sets the maximum number of threads that will feed the multiplexer's queue. Usage: Optional |
</inboundEventMultiplexing> | |
<inboundMessageLogging | Configures inbound message logging for the engine. An inbound message logger logs inbound messages to a transaction log file. Inbound logging does not play a role in HA for the application, but can be useful for auditing purposes. |
policy | The inbound message logging policy for the application. See InboundMessageLoggingPolicy Usage: Required |
failurePolicy | SINCE 3.2 See InboundMessageLoggingFailurePolicy Usage: Optional |
<autoFlushSize> | In the absence of explicit flushes (e.g. flushOnCommit) of written entries, the size at which flush is automatically triggered for queued writes. If not set, the platform default (8192) is used. Usage: Optional |
<flushOnCommit> | Whether or not the logger should be flushed on commit. By default the logger buffers writes into an internal buffer and doesn't write to disk until that buffer has filled. Enabling flush on commit will flush the logger regardless of whether the buffer has filled. Usage: Optional |
<flushUsingMappedMemory> | Whether flushes to the log file should be performed using a memory mapped file buffer. Usage: Optional |
<autoRepair> | Whether or not an attempt will be made to automatically repair a non-empty log on open by truncating malformed entries at the end of the log that are part of incomplete transactions. Usage: Optional |
<storeRoot> | Specifies the root folder in which the logger's transaction log files are located. Usage: Optional If the expected value of NVROOT on your target deployment host is not on the device where you want to place your transaction logs (e.g. slow or small disk), then consider making this a substitutable value such as: <storeRoot>${myapp.storeroot}</storeRoot>, so that you can customize its location at runtime appropriate to the environment in which you are launching. |
<initialLogLength> | Sets the initial file size of logger's transaction log in gigabytes. Preallocating the transaction log can save costs in growing the file size over time since the operation of growing a log file may actually result in a write of file data + the metadata operation of updating the file size, and may also benefit from allocating contiguous sectors on disk. Usage: Optional The log size is specified in Gb. For an initial size of less than 1 Gb, specify a float value. For example, a value of .01 would result in a preallocated size of ~10Mb, which can be useful for test environments. |
<zeroOutInitial> | Whether the log file should be explicitly zeroed out (to force commit all disk pages) if newly created. Usage: Optional |
<pageSize> | Sets the page size for the disk in bytes. The logger will use this as a hint in several areas to optimize its operation. Usage: Optional |
<detachedWrite | Configures whether or not logger writes are done by the committing thread or passed off to a detached writer thread. Offloading to a writer thread can increase application throughput but requires an extra processor core for the logger thread. |
enabled> | Can be set to true to enable detached logging for the logger. Usage: Required |
<queueDepth> | The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used. See <queueDepth> X DDL Override: |
<queueOfferStrategy> | Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: x.apps.<appname>.inboundMessageLogging.detachedWrite.queueOfferStrategy |
<queueWaitStrategy> | Controls the wait strategy used by the queue's drainer thread when waiting for entries. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: |
<queueDrainerCpuAffinityMask> | Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpus, or a square bracket enclosed comma separated list enumerating the logical cpus. See <queueDrainerCpuAffinityMask> X DDL Override: |
<queueFeedMaxConcurrency> | Sets the maximum number of threads that will feed the multiplexer's queue. Usage: Optional |
</detachedWrite> </inboundMessageLogging> | End of application's inbound message logging properties. |
<outboundMessageLogging | Configures outbound message logging for the engine. An outbound message logger logs sent messages to a transaction log file. An outbound message log file does not play a role in HA for the application, but can be useful for auditing purposes. |
policy | The outbound message logging policy for the application. See OutboundMessageLoggingPolicy Usage: Required |
failurePolicy | SINCE 3.2 See OutboundMessageLoggingFailurePolicy Usage: Optional |
<autoFlushSize> | In the absence of explicit flushes (e.g. flushOnCommit) of written entries, the size at which flush is automatically triggered for queued writes. If not set, the platform default (8192) is used. Usage: Optional |
<flushOnCommit> | Whether or not the logger should be flushed on commit. By default the logger buffers writes into an internal buffer and doesn't write to disk until that buffer has filled. Enabling flush on commit will flush the logger regardless of whether the buffer has filled. Usage: Optional |
<flushUsingMappedMemory> | Whether flushes to the log file should be performed using a memory mapped file buffer. Usage: Optional |
<autoRepair> | Whether or not an attempt will be made to automatically repair a non-empty log on open by truncating malformed entries at the end of the log that are part of incomplete transactions. Usage: Optional |
<storeRoot> | Specifies the root folder in which the logger's transaction log files are located. Usage: Optional If the expected value of NVROOT on your target deployment host is not on the device where you want to place your transaction logs (e.g. slow or small disk), then consider making this a substitutable value such as: <storeRoot>${myapp.storeroot}</storeRoot>, so that you can customize its location at runtime appropriate to the environment in which you are launching. |
<initialLogLength> | Sets the initial file size of logger's transaction log in gigabytes. Preallocating the transaction log can save costs in growing the file size over time since the operation of growing a log file may actually result in a write of file data + the metadata operation of updating the file size, and may also benefit from allocating contiguous sectors on disk. Usage: Optional The log size is specified in Gb. For an initial size of less than 1 Gb, specify a float value. For example, a value of .01 would result in a preallocated size of ~10Mb, which can be useful for test environments. |
<zeroOutInitial> | Whether the log file should be explicitly zeroed out (to force commit all disk pages) if newly created. Usage: Optional |
<pageSize> | Sets the page size for the disk in bytes. The logger will use this as a hint in several areas to optimize its operation. Usage: Optional |
<detachedWrite | Configures whether or not logger writes are done by the committing thread or passed off to a detached writer thread. Offloading to a writer thread can increase application throughput but requires an extra processor core for the logger thread. |
enabled> | Can be set to true to enable detached logging for the logger. Usage: Required |
<queueDepth> | The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used. See <queueDepth> X DDL Override: |
<queueOfferStrategy> | Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: |
<queueWaitStrategy> | Controls the wait strategy used by the queue's drainer thread when waiting for entries. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: |
<queueDrainerCpuAffinityMask> | Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpus, or a square bracket enclosed comma separated list enumerating the logical cpus. See <queueDrainerCpuAffinityMask> X DDL Override: x.apps.<appname>.outboundMessageLogging. |
<queueFeedMaxConcurrency> | Sets the maximum number of threads that will feed the multiplexer's queue. Usage: Optional |
</detachedWrite> </outboundMessageLogging> | End of application's outbound message logging properties. |
<perTransactionStatsLogging | Configures per transaction stats binary logging for the engine. A per transaction stats logger logs per transaction stats to a transaction log when capturePerTransactionStats is enabled for an AEP engine. SINCE 3.7 |
policy | The per transaction stats logging policy for the application. See PerTransactionStatsLoggingPolicy Usage: Required Constraints: Default | Off | UseDedicated |
failurePolicy | See PerTransactionStatsLoggingFailurePolicy Usage: Optional |
<autoFlushSize> | In the absence of explicit flushes (e.g. flushOnCommit) of written entries, the size at which flush is automatically triggered for queued writes. If not set, the platform default (8192) is used. Usage: Optional |
<flushOnCommit> | Whether or not the logger should be flushed on commit. By default the logger buffers writes into an internal buffer and doesn't write to disk until that buffer has filled. Enabling flush on commit will flush the logger regardless of whether the buffer has filled. Usage: Optional |
<flushUsingMappedMemory> | Whether flushes to the log file should be performed using a memory mapped file buffer. Usage: Optional |
<autoRepair> | Whether or not an attempt will be made to automatically repair a non-empty log on open by truncating malformed entries at the end of the log that are part of incomplete transactions. Usage: Optional |
<storeRoot> | Specifies the root folder in which the logger's transaction log files are located. Usage: Optional If the expected value of NVROOT on your target deployment host is not on the device where you want to place your transaction logs (e.g. slow or small disk), then consider making this a substitutable value such as: <storeRoot>${myapp.storeroot}</storeRoot>, so that you can customize its location at runtime appropriate to the environment in which you are launching. |
<initialLogLength> | Sets the initial file size of logger's transaction log in gigabytes. Preallocating the transaction log can save costs in growing the file size over time since the operation of growing a log file may actually result in a write of file data + the metadata operation of updating the file size, and may also benefit from allocating contiguous sectors on disk. Usage: Optional The log size is specified in Gb. For an initial size of less than 1 Gb, specify a float value. For example, a value of .01 would result in a preallocated size of ~10Mb, which can be useful for test environments. |
<zeroOutInitial> | Whether the log file should be explicitly zeroed out (to force commit all disk pages) if newly created. Usage: Optional |
<pageSize> | Sets the page size for the disk in bytes. The logger will use this as a hint in several areas to optimize its operation. Usage: Optional |
<detachedWrite | Configures whether or not logger writes are done by the committing thread or passed off to a detached writer thread. Offloading to a writer thread can increase application throughput but requires an extra processor core for the logger thread. |
enabled> | Can be set to true to enable detached logging for the logger. Usage: Required |
<queueDepth> | The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used. See <queueDepth> X DDL Override: |
<queueOfferStrategy> | Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: |
<queueWaitStrategy> | Controls the wait strategy used by the queue's drainer thread when waiting for entries. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: |
<queueDrainerCpuAffinityMask> | Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpus, or a square bracket enclosed comma separated list enumerating the logical cpus. See <queueDrainerCpuAffinityMask> X DDL Override: x.apps.<appname>.perTransactionStatsLogging. |
<queueFeedMaxConcurrency> | Sets the maximum number of threads that will feed the multiplexer's queue. Usage: Optional Constraints: positive integer |
</detachedWrite> </perTransactionStatsLogging> | End of application's per transaction stats logging properties. |
<startupExpectations | Specifies expectations that must be met on application startup. Unmet startup expectations will prevent the application from starting, ensuring that operational conditions are met. Usage: Optional |
<role> | Checks the HA Role of the application on startup. The role of an application is defined by the underlying role of its store. If the application has no store configured, its role will be 'Primary'. See StoreBindingRoleExpectation Usage: Optional |
<logEmptiness> | Enforces log emptiness expectations at startup. Usage: Optional |
</startupExpectations> | |
<messageHandlingPolicy> | Specifies the application's message handling policy. It is rare that an application would want to set anything other than 'Normal' for the message handling policy outside of a diagnostic or debug context. Usage: Optional |
<messagingStartFailPolicy> | SINCE 3.7 Specifies an engine's messaging start fail policy. The messaging start operation establishes the bindings to the various buses that an engine is configured to bind to. This policy determines the conditions under which a messaging start operation is considered to have failed. The NeverFail option causes a start operation to be considered successful as long as all bind attempts do not result in permanent exceptions (a permanent exception reported by a bind attempt causes the bind operation to not be retried, while a non-permanent exception causes the bind attempt to be periodically retried). In other words, the NeverFail option causes a messaging start operation to be reported as successful as long as at least one bind attempt was successful or failed with a non-permanent exception. Using a policy which does not shut down the engine if a binding fails requires that the application is coded such that it can handle message channels being down during message processing. Bus implementations often have their own retry logic built into initial connection establishment, so it is worth bearing in mind that a failure to establish a connection may not be resolved in a timely fashion by subsequent retries made by the engine. Usage: Optional |
<messageBusBindingFailPolicy> | SINCE 3.7 Specifies the policy that determines what action an engine takes when a message bus binding fails. Using a policy which does not shut down the engine if a binding fails requires that the application is coded such that it can handle message channels being down during message processing. Bus implementations often have their own retry logic built in to perform transparent reconnect, so it is worth bearing in mind that a failure to establish a connection may not be resolved in a timely fashion by subsequent retries made by the engine. See MessageBusBindingFailPolicy Usage: Optional |
<replicationPolicy> | Specifies the application's replication policy. The replication policy controls how messages are replicated to backup members (or disk). In most cases an application should specify a policy of Pipelined. Specifying the wrong value for this property can compromise recovery and cause message loss or duplication. Usage: Optional |
<messageSendPolicy> | Specifies the application's AepEngine's outbound message send policy. The message send policy controls at what point during transaction commit processing application sent messages are transmitted out of the application. In most cases, an application should specify a policy of Pipelined. Specifying the wrong value for this property can compromise recovery and cause message loss or duplication. Usage: Optional |
<inboundEventAcknowledgementPolicy> | Specifies an engine's inbound event acknowledgement policy. The general contract of an AepEngine is that it cannot acknowledge upstream events (such as message events) in a transaction until such time as the transaction has been stabilized to the point that, in the event of a failure, the message will not be lost. When the engine is not configured with a store, this property has no effect and events are acknowledged when the entire transaction is committed (e.g. when downstream acknowledgements are received.) See InboundEventAcknowledgementPolicy Usage: Optional |
<appExceptionHandlingPolicy> | Sets an engine's application exception handling policy, which determines how the engine handles unchecked exceptions thrown by an application handler. See AppExceptionHandlingPolicy Usage: Optional SINCE 3.4 |
<quarantineChannel> | Sets an engine's quarantine channel: the channel on which quarantined messages are transmitted. It must take the form of channelName@busName. This applies when the application throws an exception and the application exception policy is configured to be 'quarantine and stop'. Usage: Optional SINCE 3.4 |
<quarantineMessageKey> | Set an engine's quarantine message key. Used to explicitly set the message key to be associated with outbound quarantine messages. If the key is set using this method, the sending of the quarantine message will bypass the dynamic key resolution machinery. Usage: Optional SINCE 3.4 |
<messageSendExceptionHandlingPolicy> | The policy used by an application's AepEngine to determine how to handle unchecked exceptions thrown on message sends. Note that this policy covers only the send of the message through the underlying bus binding during transaction commit. In particular, it does not cover:
See MessageSendExceptionHandlingPolicy Usage: Optional |
<messageSendStabilityFailureHandlingPolicy> | The policy used by an application's AepEngine to determine how to handle send stability failure notifications. Note that this policy covers only the send stability failures received from the underlying bus binding. In particular, it does not cover:
See MessageSendStabilityFailureHandlingPolicy Usage: Optional SINCE 3.12.6 |
<replicationPolicy> | Specifies the application's replication policy. Usage: Optional |
<adaptiveCommitBatchCeiling> | Sets the application's AepEngine's adaptive commit batch ceiling. The adaptive commit batch ceiling controls the maximum number of inbound messages to group into a single transaction, which can improve throughput. A value less than or equal to 1 disables adaptive commit. Usage: Optional Adaptive commit cannot be used if transaction commit suspension is enabled. Auto Tuning When nv.optimizefor=throughput this value is set to 64 if not explicitly set to a positive value. |
<enableTransactionCommitSuspension> | Sets whether transaction commit suspension is enabled or disabled. Transaction commit suspension is an experimental feature that allows an application to temporarily suspend commit of a transaction. Usage: Optional TransactionCommitSuspension is currently an experimental feature. It is not supported for production use. |
<dispatchTransactionStageEvents> | Sets whether transaction stage events are emitted by the application's AepEngine. Controls whether or not AepTransactionStageEvent are emitted by the application's engine. An AepTransactionStageEvent is used to notify an application as the transaction commit executes through its various phases. The various transaction stages are
The transaction stage is present in the dispatched AepTransactionStageEvent. The AepTransactionStageEvent dispatched in the Start stage can be used by an application to suspend the transaction. It is illegal to suspend a transaction in any stage other than Start. Transaction stage events are only dispatched on the primary cluster member. Usage: Optional |
<replicateSolicitedSends> | Sets whether or not to replicate solicited sends to a backup.
Usage: Optional Default: true X DDL Override: x.apps.<appname>.replicateSolicitedSends Constraints: true | false
This parameter should be changed with extreme caution. The act of disabling replication of outbound messages will likely result in a loss of outbound messages in the event of a fail over. |
<replicateUnsolicitedSends> | Set whether to replicate unsolicited sends. This parameter governs whether unsolicited sends performed on clustered engines will be replicated or not. This setting has no effect on engines that are not clustered. An unsolicited send is a send done outside of an event handler via an AepEngine.send method. Because unsolicited sends aren't part of an engine's transactional message processing, they are not considered to be part of the application's HA state. To treat unsolicited sends as part of an application's HA state, see sequenceUnsolicitedWithSolicitedSends. Usage: Optional |
<sequenceUnsolicitedSends> | Set whether to sequence unsolicited sends. By default, unsolicited sends are sent with a sequence number of 0. Specifying true in this parameter will cause sequence numbers to also be attached to unsolicited sends. Usage: Optional Be careful about attaching sequence numbers to unsolicited sends, especially if the application is going to be doing both unsolicited and solicited sends concurrently, since that can cause messages to be sent on the wire in a sequence different from the sequence in which sequence numbers were assigned to the messages, thus causing legitimate messages to be dropped due to incorrect duplicate determination. For such applications, use sequenceUnsolicitedWithSolicitedSends instead to ensure that not only are unsolicited sends sequenced but that they are also correctly sequenced vis-a-vis solicited sends. |
<sequenceUnsolicitedWithSolicitedSends> | Set whether to sequence unsolicited sends with solicited sends. This parameter is applicable for applications that perform concurrent solicited and unsolicited sends and want the unsolicited sends to be sequenced. Setting this parameter ensures that unsolicited and solicited sends are sequenced on the wire in the same order in which the sequence numbers were attached to the messages. In effect, this causes an unsolicited send to be injected into the underlying engine's transactional event processing stream, promoting it to a transaction event. Usage: Optional SINCE 3.9 |
<dispatchSendStabilityEvents> | Set whether or not the engine dispatches AepSendStabilityEvent for unsolicited sends. When an application that sends messages through the engine from outside of a message handler (an unsolicited send) would like to receive notification when the send has been stabilized, this setting can be enabled to instruct the engine to dispatch an AepSendStabilityEvent when the engine can provide guarantees that the message will be delivered. This functionality is useful for gateway applications that input messages into the system from an external source. Usage: Optional |
<disposeOnSend> | Set whether or not the engine disposes sent messages. If set, then the AepEngine.sendMessage method will dispose a message after it has been sent. This means that the caller must not hold onto or reference a message beyond the call to the send message method. If unset, then a zero garbage application must call dispose on each sent message to ensure it is returned to its pool. Usage: Optional |
<clusterHeartbeatInterval> | Sets the cluster heartbeat interval for the application in milliseconds. A value of 0 (default) disables cluster heartbeating. Usage: Optional |
<administrative> | Marks the application as an 'administrative' application. Usage: Optional |
<stuckAlertEventThreshold> | Sets the threshold, in seconds, after which an AepStuckAlertEvent is dispatched to the application's IAepAsynchronousEventHandler. An AepStuckAlertEvent event is intended to alert that the engine's transaction pipeline is "stuck" i.e. there are one or more transaction commits in the pipeline and the event multiplexer thread is not processing any events. For example, the event multiplexer thread could be flow controlled on the replication TCP connection due to an issue in the backup or could be spinning in a loop in the business logic due to a bug in a business logic handler. See Stuck Engine Alerts for more information Usage: Optional |
<performDuplicateChecking> | Set whether the application's engine should perform duplicate checking. When duplicate checking is enabled, received messages that are deemed duplicates are discarded by the application's engine. A message is considered to be a duplicate under the following circumstances:
Sequence id assignment by an AepEngine
SINCE 3.4 Usage: Optional |
<setOutboundSequenceNumbers> | Disables the setting of sequence numbers on outbound messages. The setting of sequence numbers in outbound messages comes at a very slight performance penalty that may not be tolerated by ultra low latency applications. This property can be used to switch off the setting of sequence numbers in outbound messages for such performance critical applications. However, note that this effectively disables duplicate checking by downstream apps for messages sent by an engine configured this way. Usage: Optional |
<syncInjectedMessages> | Sets whether MessageView.sync() is called during AepEngine.injectMessage. SINCE 3.7 Usage: Optional |
<stopOnJVMShutdown> | Sets whether the engine will automatically stop when the JVM shuts down. By default, the AEP engine registers a JVM shutdown hook by which it automatically stops when the JVM shuts down. This property allows that behavior to be disabled: if set to false, the engine will not automatically stop when the JVM shuts down. SINCE 3.16.20 Usage: Optional |
<performMidstreamInitializationValidation> | Sets whether the engine checks that initial transactions are not missing during recovery or replication. This parameter is only applicable to event sourced engines. Usage: Optional |
<enableSequenceNumberTrace> | Enables diagnostic trace logging related to message sequencing. Enabling this trace can assist in diagnosing issues related to lost, duplicate or out of order events. When enabled, trace will be emitted at debug level (TRACE level for SLF4J) to the logger named 'nv.aep.sno'. Usage: Optional |
<enableEventTrace> | Enables diagnostic trace logging of events received and dispatched by an engine. Enabling this trace is useful in determining the sequence of events processed by the engine. When enabled, trace will be emitted at debug level (TRACE level for SLF4J) to the logger named 'nv.aep.event'. Usage: Optional |
<enableTransactionTrace> | Enables diagnostic trace logging related to transactions processed by an engine. Enabling this trace is useful in determining the relative sequencing and timing of transaction commits as the commits are executed by the engine. When enabled, trace will be emitted at debug level (TRACE level for SLF4J) to the logger named 'nv.aep.txn'. Usage: Optional |
<enableScheduleTrace> | Enable diagnostic trace logging related to schedules (timers) managed by an engine. Enabling this trace is useful for diagnosing issues related to engine timer execution and scheduling. When enabled, trace will be emitted at debug level (TRACE level for SLF4J) to the logger named 'nv.aep.sched'. Usage: Optional |
<enableAlertTrace> | Enable diagnostic trace logging related to events emitted by the AepEngine that implement IAlertEvent. When enabled, trace will be emitted at warning level (WARN level for SLF4J) to the logger named 'nv.aep.alert'. The engine may suppress logging of certain alert event types in cases where it makes sense to do so. For example, the AepEngineStoppedEvent implements IAlertEvent, but when this event is dispatched during a clean shutdown (no error message), it isn't deemed an Alert. Applications may disable the engine alert and provide their own IAlertEvent EventHandler for finer grained control over alert logging. Usage: Optional |
<enableMessageTrace> | Enables diagnostic trace logging for messages as they pass through the engine. Enabling this trace is useful for tracing the contents of messages at different stages of execution within the engine. When enabled, trace will be emitted at debug level (TRACE level for SLF4J) to the logger named 'nv.aep.msg'. Usage: Optional |
<messageTraceInJson> | Sets whether messages are traced in JSON or toString format. When enabled, messages will be printed in JSON format, otherwise messages will be traced using their toString method. This parameter is only applicable if message trace is enabled. Usage: Optional |
<messageTraceJsonStyle> | Sets the styling for JSON formatted message trace. This parameter is only applicable if message trace in JSON is enabled. Valid options are:
Usage: Optional |
<messageTraceFilterUnsetFields> | Sets whether unset fields are filtered for JSON formatted objects when JSON message tracing is enabled. Usage: Optional Constraints: true | false |
<messageTraceMetadataDisplayPolicy> | Sets whether metadata, payload or both will be traced when message tracing is enabled. Valid Options are:
Usage: Optional |
<maxEnvironmentProviders> | Sets the maximum number of environment providers that can be registered with the engine. Usage: Optional |
<enableSendCommitCompleteSequenceAlerts> | Set whether or not to enable out of order send commit completion detection. When enabled, the engine will check that stability events (acknowledgements) from the underlying messaging provider are received in an ordered fashion. If acknowledgements are received out of order, then the engine will dispatch appropriate alerts. Usage: Optional |
<captureMessageTypeStats> | Sets whether statistics are additionally recorded on a per message type basis. Collection of message type specific statistics records counts and rates per type as well as message processing time statistics for each message, which can be useful in finding particular handlers that have high execution times. Usage: Optional |
<messageTypeStatsLatenciesToCapture> | Controls which latency statistics are tracked on a per message type basis. This property is specified as a comma separated list of values. Valid values include 'all' and 'none'; the values 'all' or 'none' may not be combined with other values. This setting only applies when captureMessageTypeStats is true. When not specified, the value defaults to all. Usage: Optional SINCE 3.11 |
<captureTransactionLatencyStats> | Sets whether or not the engine records transaction latency stats. Usage: Optional |
<capturePerTransactionStats> | Sets whether or not the engine records per transaction stats. Unlike captureTransactionLatencyStats, which records histographical latencies, this setting is much more expensive in that it records and emits individual timestamps for operations that occurred in the transaction, including store commit timestamps and individual message timestamps. In most cases, capturing this level of detail is not worth the overhead it incurs, as the histographical latency captured via captureTransactionLatencyStats is usually sufficient for inferring timings within a given sampling interval. However, in cases where it is critical to determine the exact timings of transaction processing to better understand product behavior it can be useful. If used in production (which is not recommended), applications should undergo stress testing under maximum peak load to determine the impact of enabling collection of per transaction stats. You must also configure perTransactionStatsLogging to write the captured stats to a transaction log on disk. At this time per transaction stats are not emitted via trace loggers or over a messaging bus. Usage: Optional |
<captureEventLatencyStats> | Sets whether or not the engine records event latency stats (such as the amount of time of events spent in its input queue). Usage: Optional |
<replicateInParallel> | Enables parallel replication. When parallel replication is enabled, the engine replicates inbound messages to the cluster backups in parallel with the processing of the message by the message handler. This parameter only applies to Event Sourced engines. This parameter is particularly useful for Event Sourced applications that have higher message processing times because in this case it may be possible to replicate the message prior to completion of the message handler. Usage: Optional |
<preserveChannelJoinsOnStop> | Sets whether or not to preserve joined channels when the engine stops normally. By default, when an engine is stopped without an error, bus channels that were 'joined' will be 'left', meaning that any subscriptions or interests created by the message bus will be unsubscribed or unregistered. Setting this value to true causes the engine to preserve channel interest even on a clean shutdown. Note that this property has no effect for the case where an engine shuts down with an error (e.g. AepEngine.stop(Exception) with a non-null cause). In this case channel joins are left intact allowing a backup to take over. Note that this behavior can be overridden programmatically on a case by case basis by a handler for AepEngineStoppingEvent by setting AepEngineStoppingEvent.setPreserveChannelJoins(boolean). Usage: Optional SINCE 3.4 |
<setSupportMetadata> | Enables setting of support related metadata on inbound and outbound messages. Support related metadata is not critical to the operation of an engine and is set by the engine in inbound and outbound messages to aid in support and troubleshooting activities via the metadata being persisted in the application's transaction log. However, this setting of support related metadata information comes at a slight performance penalty (a couple of microseconds) that may not be tolerated by ultra low latency applications. This property can be used to switch off the setting of support related metadata information in inbound and outbound messages for such performance critical applications. Usage: Optional SINCE 3.5 |
</app> </apps> | |
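To tie a few of the rows above together, here is a hedged sketch (app, channel and bus names are hypothetical) showing startup expectations combined with an application exception handling policy and a quarantine channel. The policy value shown follows the 'quarantine and stop' behavior referenced in the quarantineChannel row; confirm the exact enum spelling against AppExceptionHandlingPolicy in the Enums Reference.

```xml
<app name="order-processor">
  <!-- refuse to start unless this instance comes up as the Primary -->
  <startupExpectations>
    <role>Primary</role>
  </startupExpectations>
  <!-- on an unchecked handler exception, quarantine the offending message and stop -->
  <appExceptionHandlingPolicy>QuarantineAndStop</appExceptionHandlingPolicy>
  <!-- quarantined messages are sent on channelName@busName -->
  <quarantineChannel>quarantine@order-bus</quarantineChannel>
</app>
```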
XVM Configuration
The 'xvms' section defines the Talon XVMs used globally in the deployment.
A Talon XVM (also known as a Talon Server) hosts one or more applications and controls each application's lifecycle. It also implements monitoring machinery for the apps it manages: passive monitoring in the form of periodic heartbeats containing statistics, trace output, alerts and notifications, and active monitoring by exposing command and control facilities that allow administrative applications to execute commands against the XVM and its applications.
In version 3.8, the <xvms> and <xvm> elements were introduced to replace the <servers> and <server> elements which are now deprecated. Developers are advised to update their configuration to use <xvms> and <xvm> as soon as possible as support for <servers> and <server> elements will be dropped in a future release.
- Mixing of <server(s)> and <xvm(s)> within the same DDL is not supported and will result in an exception. Users should take particular care when composing DDL from multiple locations to ensure that all DDL is using the same elements.
- When using the <servers> element, DDL override values defined below will honor the old syntax, namely x.servers.*
Sample XML Snippet
The below configures two xvms named 'forwarder-1' and 'forwarder-2' that both host the 'forwarder' app. Launching both xvms would start two instances of the 'forwarder' app that would form a clustered instance of the forwarder app.
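(Representative sketch only; host names and ports are illustrative, and the attribute names follow the table below.)

```xml
<xvms>
  <xvm name="forwarder-1">
    <acceptors>
      <acceptor descriptor="tcp://host-1:12000"/>
    </acceptors>
    <apps>
      <app name="forwarder" autostart="true"/>
    </apps>
  </xvm>
  <xvm name="forwarder-2">
    <acceptors>
      <acceptor descriptor="tcp://host-2:12000"/>
    </acceptors>
    <apps>
      <app name="forwarder" autostart="true"/>
    </apps>
  </xvm>
</xvms>
```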
Settings
- <xvms>
- <templates>
- </templates>
- <xvm
- </xvm>
- </xvms>
Element | Description |
---|---|
<xvms> | Defines the Talon XVMs used globally in the deployment. A 'Talon XVM' can also be referred to as a 'Talon Server' as it not only acts as a runtime container for X applications, but also as a server that accepts management and direct messaging connections. An xvm hosts one or more applications, controls each application's lifecycle and implements direct network connection acceptance machinery to (1) allow management clients to connect to the xvm to monitor and administer the applications that it hosts and (2) accept direct connections for apps that are configured to use the Talon 'direct' bus provider. |
<templates> | Holds xvm templates Usage: Optional |
<template | Defines an xvm template. Template xvms cannot be used at runtime; they serve only as templates for actual xvms' configuration. |
name | The name of the template. Usage: Required |
* | Any xvm attribute defined below except for 'template' X DDL Override: x.xvms.templates.<templatename>.* |
> | |
* | Any xvm element described below. X DDL Override: x.xvms.templates.<templatename>.* |
</template> | |
</templates> | |
<xvm | Defines and configures an xvm. |
name> | Defines the xvm name, which must be unique within the xvm's configuration and discovery domain. A common practice is to use an xvm name that is indicative of the application or set of applications that it is hosting and the normal application role. For example, if the xvm will host shard 1 of an order processing app that normally assumes the primary role, a name such as "order-processing-p-1" might be used. In this fashion, an instance of an application can be uniquely addressed within a discovery domain as a combination of the xvm name and app name. Usage: Required X DDL Override: Not overridable (key) Constraints: String |
enabled | If set to false, this xvm will be ignored and not saved to the configuration repository. This can be used to disable an xvm at runtime. However, note that if a persistent configuration repository is in use, this will not cause a previously configured xvm to be deleted. Usage: Optional |
discoveryDescriptor | Defines the xvm's discovery descriptor. Usage: Optional When the p2p message bus binding is used, the discovery descriptor for the xvm must be part of the same discovery domain as the default discovery descriptor configured via nv.discovery.descriptor because the xvm itself facilitates advertising and accepting point to point connections. |
<env> | XVM scoped environment variables. Any XML element with text content will be treated as a property by concatenating its parent's node names with '.' separators. Env values scoped to the XVM override those defined in the global <env> section, and are applied only to the xvm for which they are defined. If the value is already defined in the set of ddl overrides passed into the parser, the value in XML will be overridden. Unlike other DDL values mentioned in this document, overrides for xvm scoped <env> values do not need to be prefixed with 'x.' ... the values passed will directly override the values specified in the <env> section without prefix. For example, given: |
<discovery> | Configures the custom discovery descriptor for the xvm in decomposed form. The discovery descriptor is composed as <provider>://<address>[:<port>][&prop1=val1][&propN=valN] |
<provider> | The discovery provider's name which is used to locate the discovery implementation Usage: Required |
<address> | The discovery provider's address Usage: Required |
<port> | The discovery provider's port Usage: Optional |
<properties></properties> | Lists the discovery descriptor parameters as key=value pairs: |
</discovery> | |
<group> | Defines the xvm's application group. Usage: Optional |
<clientHandShakeTimeout> | Sets the timeout, in seconds, allowed for connecting clients to complete the xvm connection handshake. Usage: Optional |
<autoStopOnLastAppStop> | SINCE 3.7 Configures whether or not the xvm will automatically stop after the last app is stopped.
Disabling auto stop on last app stop leaves the xvm running and manageable even when all applications have stopped. The xvm's internal admin app does not count as a running app. Usage: Optional |
<adminClientOutputQueueCapacity> | SINCE 3.7 Configuration property used to set the capacity (in MB) for the size of the xvm controller admin clients' output queue sizes. Outbound packets are dropped once the queue size reaches and/or exceeds the configured capacity. Usage: Optional |
<apps> | Configures the apps hosted by this xvm. Multiple xvms can host the same application. Each clustered application will discover its peers and form its own independent cluster. In other words, xvms don't cluster, but their applications do. |
<app | Configures an app hosted by this xvm. |
autostart | Sets whether the xvm automatically starts the app when the xvm is started. Usage: Optional |
enabled | If set to false, the app will be ignored and not saved to the configuration repository. This can be used to suppress addition of an application at runtime. Usage: Optional |
name> | The name of the application as defined in the 'apps' element. Usage: Required |
</app> | |
</apps> | |
<acceptors> | Configures this XVM's advertised server acceptors. By default, each xvm will create an acceptor on 'tcp://0.0.0.0:0' to listen on all interfaces at an auto assigned port. If you are running in an environment where only specific ports are opened for traffic then you can set this to a specific network interface address. XVM acceptors are advertised over discovery and are used by:
Only the first acceptor defined in this section is advertised for the above purposes. Additional acceptors may be configured, but they are not currently used by Talon. |
<acceptor | Defines and configures an acceptor. |
descriptor | The acceptor descriptor. Accept descriptors are of the form [protocol]://[host]:[port], e.g. 'tcp://myhost:12000', and are used to specify the network protocol, interface and port through which to accept inbound connection requests. 'tcp' is the only currently supported protocol, [host] can be the host name or IP address of a specified interface on which this xvm will be running, and [port] is the protocol specific server port on which the server will listen for inbound connections. Usage: Required |
enabled> | If set to false, the acceptor will be ignored and not saved to the configuration repository. This can be used to suppress addition of an acceptor at runtime. However, note that if a persistent configuration repository is in use, this will not cause a previously configured acceptor for this xvm to be removed. Usage: Optional |
<linkParams> | A comma separated set of key=value pairs that serve as additional configuration parameters for the network connections accepted by this acceptor. Usage: Optional |
</acceptor> | |
</acceptors> | |
<multithreading | |
enabled> | Sets whether the server should operate in multi-threaded mode. In most cases this value should be set to true. Setting this value to false will set the IO thread count to 1 regardless of the number of IO threads listed for the server. Usage: Required |
<ioThreads> | Configures IO threads for the xvm. |
<ioThread | Defines and configures an IOThread. |
id | The thread id. IO Thread ids are zero based and must be defined in monotonically increasing order. Usage: Required |
affinity | Sets the cpu affinity mask for the thread. The affinity string can either be a long that represents a mask of logical cpus or a square bracket enclosed comma separated list enumerating the logical cpus. For example, specifying "1" or "[0]" indicates Core 0; "3" or "[0, 1]" would indicate Core 0 and Core 1. Specifying a value of "0" indicates that the thread should be affinitized to the platform's default cpu, and omitting this value indicates that the thread should be affinitized according to the platform's default policy for the multiplexer. See UtlThread.setCpuAffinityMask Usage: Optional |
enabled | Sets the thread as enabled or disabled. This can be used at runtime to disable an IO Thread. Disabling an IO thread has the effect of setting all threads with a higher id to enabled=false. Note that if a persistent configuration repository is in use, this will not cause a previously configured IO thread for this xvm to be removed. Usage: Required |
</ioThreads> | |
</multithreading> | |
<heartbeats> | Configuration for the XVM's stats thread, which periodically emits heartbeats containing stats. |
enabled | Indicates whether xvm heartbeats are enabled. Usage: Required |
interval> | Indicates the xvm heartbeat interval in seconds. Usage: Optional |
<collectSeriesStats> | Configures whether series stats are collected in heartbeats. Usage: Optional |
<collectSeriesDatapoints> | Configures whether series stats data points are included in heartbeats. Series statistics such as latency statistics are reported as histogram when series stats collection is enabled. Enabling this property also includes the collected data points. Enabling this property can be extremely bandwidth intensive and is not typically recommended. Usage: Optional |
<maxTrackableSeriesValue> | Series data is reported using an HDR Histogram. This property controls the maximum value that the histogram can record. Usage: Optional |
<collectPoolStats> | Configures whether pool stats are collected and reported in heartbeats. When pool stats are enabled, stats are included for pools that experienced a pool miss in the collection interval, or whose preallocated pool items fall below the poolDepletionThreshold. For applications that don't expect to operate in a zero garbage mode, this can be disabled to prevent heartbeats from becoming too large. Usage: Optional |
<poolDepletionThreshold> | Configuration property used to set the percentage decrement by which a preallocated pool must drop to be included in an xvm heartbeat. Normal pool stats are only included in a heartbeat if there were pool misses in the interval. For preallocated pools, however, misses are not expected until the preallocated items are exhausted. For such pools it is generally of interest from a monitoring perspective to observe the rate of depletion of such items. If a pool is preallocated with 1000 items and this property is set to 10, pool stats will be emitted for the pool when its size drops below 900, 800, 700, and so on, until its size reaches 0 (at which point subsequent misses would cause it to be included on every heartbeat). Setting this to a value greater than 100 or less than or equal to 0 disables depletion threshold reporting. Usage: Optional |
<collectIndividualThreadStats> | Configures whether individual thread stats are collected. Collecting stats for individual threads can lead to larger heartbeats. For applications that don't need such stats this collection can be disabled. Usage: Optional |
<collectNonZGStats> | Sets whether or not stats that produce garbage as a result of being collected are enabled. Some stats involve using reflection or 3rd party apis that create garbage. This property can be set to false to suppress collection of those stats. Currently the list of stats that may produce garbage include:
Usage: Optional |
<includeMessageTypeStats> | Sets whether or not message type stats are included in heartbeats (when enabled for the app). When captureMessageTypeStats is enabled for an app, the AepEngine will record select statistics on a per message type basis. Because inclusion of per message type stats can significantly increase the size of heartbeats, inclusion in heartbeats is disabled by default. Note: For message type stats to be included in heartbeats, both captureMessageTypeStats for the app must be set to true (capture is disabled by default because recording them is costly), and includeMessageTypeStats must be set to true (inclusion is disabled by default because emitting them is costly). Usage: Optional SINCE 3.7 |
<inactiveMessageTypeStatsInclusionFrequency> | This setting can be used to control how frequently message type stats are reported for message types without any activity. By default this value is set to 1, meaning that inactive types are included in every heartbeat even if there was no activity related to that type in the interval being reported. It can be set to 0 to exclude inactive types in heartbeats, or set to a value N, greater than 1, so that inactive message type stats are included in every Nth heartbeat. Setting this value to 0 can cause monitoring applications that start listening to the heartbeat stream late not to 'see' counts and latencies related to messaging that occurred in the past, so it is often desirable to at least periodically include inactive types. On the other hand, for applications that work with a large number of message types that are not used frequently, it can be costly in terms of heartbeat size to always include them. This setting has no effect if message type stats are not enabled or not included in heartbeats to begin with. Usage: Optional SINCE 3.8 |
<logging> | Configures heartbeat logging for the xvm. When configured, xvm heartbeats are written to disk. SINCE 3.1 |
enabled> | Whether or not to enable heartbeat logging. Usage: Required |
<autoFlushSize> | In the absence of explicit flushes (e.g. flushOnCommit) of written entries, the size at which flush is automatically triggered for queued writes. If not set, the platform default (8192) is used. Usage: Optional |
<flushOnCommit> | Whether or not the logger should be flushed on commit. By default, the logger buffers writes into an internal buffer and doesn't write to disk until that buffer has filled. Enabling flush on commit will flush the logger regardless of whether the buffer has filled. Usage: Optional |
<flushUsingMappedMemory> | Whether flushes to the log file should be performed using a memory mapped file buffer. Usage: Optional |
<autoRepair> | Whether or not an attempt will be made to automatically repair a non empty log on open by truncating malformed entries at the end of the log that are part of incomplete transactions. Usage: Optional |
<storeRoot> | Specifies the root folder in which the logger's transaction log files are located. Usage: Optional If the expected value of NVROOT on your target deployment host is not on the device where you want to place your transaction logs (e.g. slow or small disk), then consider making this a substitutable value such as: <storeRoot>${myapp.storeroot}</storeRoot>, so that you can customize its location at runtime appropriate to the environment in which you are launching. |
<initialLogLength> | Sets the initial file size of logger's transaction log in gigabytes. Preallocating the transaction log can save costs in growing the file size over time, since the operation of growing a log file may actually result in a write of file data + the metadata operation of updating the file size, and may also benefit from allocating contiguous sectors on disk. Usage: Optional The log size is specified in Gb. For an initial size of less than 1 Gb, specify a float value. For example, a value of .01 would result in a preallocated size of ~10Mb, which can be useful for test environments. |
<zeroOutInitial> | Whether the log file should be explicitly zeroed out (to force commit all disk pages) if newly created. Usage: Optional |
<pageSize> | Sets the page size for the disk in bytes. The logger will use this as a hint in several areas to optimize its operation. Usage: Optional |
<detachedWrite | Configures whether or not logger writes are done by the committing thread or passed off to a detached writer thread. Offloading to a writer thread can increase application throughput but requires an extra processor core for the logger thread. |
enabled> | Can be set to true to enable detached logging for the logger. Usage: Optional |
<queueDepth> | The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used. See <queueDepth> X DDL Override: |
<queueOfferStrategy> | Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: x.xvms.<xvmname>.heartbeats.logging.detachedWrite.queueOfferStrategy |
<queueWaitStrategy> | Controls the wait strategy used by the queue's drainer thread when waiting for entries. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: |
<queueDrainerCpuAffinityMask> | Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpus, or a square bracket enclosed comma separated list enumerating the logical cpus. See <queueDrainerCpuAffinityMask> X DDL Override: |
<queueFeedMaxConcurrency> | Sets the maximum number of threads that will feed the multiplexer's queue. Usage: Optional |
</detachedWrite> </logging> | End of the xvm's heartbeat logging properties. |
<tracing | Configures trace logging of heartbeats. |
enabled> | Whether or not to enable heartbeat tracing. Usage: Optional |
<traceAdminClientStats> | Controls whether admin client stats are traced (when tracing is enabled). Usage: Optional |
<traceAppStats> | Controls whether app stats (AEP engine level stats) are traced (when tracing is enabled). Usage: Optional |
<tracePoolStats> | Controls whether pool stats are traced (when tracing is enabled). Usage: Optional |
<traceSysStats> | Controls whether system stats are traced (when tracing is enabled). Usage: Optional |
<traceThreadStats> | Controls whether thread stats are traced (when tracing is enabled). Usage: Optional |
<traceUserStats> | Controls whether user app stats are traced (when tracing is enabled). If traceAppStats is enabled then user stats are reported as part of the application stats unless this property is disabled. If traceAppStats is false then this property can be used to trace only the user stats. Usage: Optional |
</tracing> </heartbeats> | |
<provisioning> | The provisioning section of DDL holds information for provisioning tools such as Robin, that provision and launch the XVM. |
<host> | Configures the host or ip address to which this xvm should be provisioned. Usage: Optional |
<qualifyPathsWithSystem> | When true (default), this setting indicates that installation paths should be qualified with /systemName/xvmName. For example, if an installRoot is specified as '/usr/local/' an XVM named 'order-processor' in a system named 'order-processing-1.0' would be provisioned to '/usr/local/run/order-processing-1.0/order-processor'. This setting ensures that when multiple xvms / systems are deployed to the same host they don't collide. Usage: Optional |
<rootDirectory> | Configures the root directory to which the xvm should be provisioned. This directory should be specified using only '/' characters for file separators. Provisioning tools are expected to perform path translations when deploying to windows systems. A path that starts with either '/' or contains a ':' character is interpreted as an absolute path. Usage: Optional |
<dataDirectory> | Configures the directory in which data files should be stored. This directory path serves as the root directory for runtime data files such as recovery logs. This directory should be specified using only '/' characters for file separators. Provisioning tools are expected to perform path translations when deploying to windows systems. A path that starts with either '/' or contains a ':' character is interpreted as an absolute path. When specified as a relative path the path should be interpreted as being relative to the rootDirectory directory. When not specified this defaults to the platform's default runtime data directory, the 'rdat' subdirectory in the root folder. Usage: Optional |
<traceLogDirectory> | Configures the directory to which xvm trace output should be logged. This directory should be specified using only '/' characters for file separators. Provisioning tools are expected to perform path translations when deploying to windows systems. A path that starts with either '/' or contains a ':' character is interpreted as an absolute path. When specified as a relative path the path should be interpreted as being relative to the dataRoot directory. When not specified the logging directory is left up to launcher. Usage: Optional |
<jvm> | Configures the JVM used to launch the xvm. |
<javaHome> | Configures the JVM's home directory which contains the bin/java executable to use. Usage: Optional |
<jvmParams> | A list of JVM parameters used to launch the JVM. Parameters can be specified on a single line, or broken across multiple lines for readability. When parameters are split across multiple lines they are appended to one another with a single whitespace character (see the sketch following this table). Note that the DDL parser doesn't merge JVM params when overriding a template or when merging profiles. One can however use the jvmParamSets element described below to source JVM parameters from templates and/or multiple activated profiles. Usage: Optional |
<jvmParamSets> | A list of named JVM parameter sets (appended to jvmParams if provided). JVM parameter sets are useful in the context of config composition, as they allow portions of the JVM parameters to be overridden by name based on active profiles or templates (see the sketch following this table). While the DDL parser doesn't merge jvmParams when overriding a template or when merging profiles, jvmParamSets allow JVM parameters to be sourced from templates and/or multiple activated profiles. Usage: Optional |
<jvmParamSet | |
name | Sets the name of this set of JVM parameters. Usage: Required |
order | Can be set to control the order in which the parameters from this JVM parameter set are appended to the jvmParams relative to other JVM parameter sets. Lower ordered JVM parameter sets are placed first. The ordering of two parameter sets with the same order value is unspecified. Usage: Optional |
enabled> | Can be set to false to disable this set of JVM params. When a JVM param set is disabled, its JVM parameters are not appended to the JVM parameters string. Usage: Optional |
<jvmParams> | The JVM params for this JVM param set. JVM parameters can be specified here in the same format as the main JVM parameters for the JVM. Multi-line JVM parameters are appended together with a single white space between lines. Usage: Optional |
</jvmParamSet> | |
</jvmParamSets> | |
</jvm> | |
</provisioning> | |
</xvm></xvms> | |
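The examples referenced for <jvmParams> and <jvmParamSets> above were not reproduced here, so the following is a minimal sketch assembled from the element names in the table above; the JVM flags and set names are illustrative placeholders, not recommended values.

```xml
<jvm>
  <javaHome>${JAVA_HOME}</javaHome>
  <!-- Multi-line parameters are joined with a single space when parsed -->
  <jvmParams>
    -Xms1g -Xmx1g
    -XX:+UseG1GC
  </jvmParams>
  <jvmParamSets>
    <!-- Named sets can be supplied or overridden by templates and activated profiles -->
    <jvmParamSet name="gc-logging" order="1" enabled="true">
      <jvmParams>-verbose:gc -Xloggc:gc.log</jvmParams>
    </jvmParamSet>
  </jvmParamSets>
</jvm>
```

Because a set such as "gc-logging" is addressed by name, a profile can disable or replace it without restating the full jvmParams string.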
Enums Reference
Enums
- ChannelQos
- CheckpointingType
- ICRRole
- InboundMessageLoggingPolicy
- InboundMessageLoggingFailurePolicy
- InboundEventAcknowledgementPolicy
- LogEmptinessExpectation
- MessageHandlingPolicy
- MessagingStartFailPolicy
- MessageBusBindingFailPolicy
- MessageSendPolicy
- AppExceptionHandlingPolicy
- MessageSendExceptionHandlingPolicy
- MessageSendStabilityFailureHandlingPolicy
- OutboundMessageLoggingPolicy
- OutboundMessageLoggingFailurePolicy
- PerTransactionStatsLoggingPolicy
- PerTransactionStatsLoggingFailurePolicy
- QueueOfferStrategy
- QueueWaitStrategy
- StoreBindingRoleExpectation
- ReplicationPolicy
ChannelQos
Enumerates the different supported Qualities of Service used for transmitting messages over a messaging channel.
See also: MessageChannel.Qos.
Valid Values
Value | Description |
---|---|
BestEffort | Specifies Best Effort quality of service. Messages sent Best Effort are not acknowledged, and in the event of a binding failure may be lost. |
Guaranteed | Specifies Guaranteed quality of service. Messages sent Guaranteed are held until acknowledged by the message bus binding, and are retransmitted in the event of a failure. |
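As a purely illustrative sketch (the enclosing <buses>/<bus>/<channels>/<channel> structure and the <qos> element name are assumptions, not confirmed by this table), a channel's quality of service would be selected by supplying one of the values above:

```xml
<buses>
  <bus name="order-bus">
    <channels>
      <channel name="orders" id="1">
        <!-- Assumed element; value must be one of the ChannelQos values above -->
        <qos>Guaranteed</qos>
      </channel>
    </channels>
  </bus>
</buses>
```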
CheckpointingType
Enumerates the types of checkpointing controllers.
See also IStoreCheckpointingController.Type.
Valid Values
Value | Description |
---|---|
Default | Indicates that the default checkpoint controller should be used. The default checkpoint controller counts all entry types (Puts, Updates, Removes, and Sends) against the threshold trigger for writing a new checkpoint. |
CDC | Indicates that the CDC Checkpoint controller should be used. The CDC checkpoint controller only counts Puts, Updates and Removes against the checkpoint trigger threshold (because these are the only types of interest for CDC). |
Conflation | Indicates that the Conflation Checkpoint controller should be used. The Conflation checkpoint controller does not count Puts against the new checkpoint trigger threshold (because puts cannot be conflated). |
ICRRole
Enumerates the different inter-cluster replication roles of an AepEngine's store binding.
In inter-cluster replication, the store contents of a cluster are replicated to one or more receiving clusters. This enumeration defines the replication roles that can be assigned to clusters; assigning a replication role to a cluster amounts to assigning the same inter-cluster replication role to all members of the cluster.
See also: IStoreBinding.InterClusterReplicationRole
Valid Values
Value | Description |
---|---|
Sender | Cluster members designated with this role serve as the inter-cluster replication senders. |
StandaloneReceiver | Cluster members designated with this role serve as standalone inter-cluster replication receivers. Standalone implies that the receive side members designated with this role do not form clusters while operating in this mode. From the perspective of the user, the member operates as a backup cluster member, but there is no intra-cluster replication actually occurring. There can be multiple simultaneous standalone replication receivers. |
InboundMessageLoggingPolicy
Enumerates an engine's inbound message logging policies.
This enumerates the policy that determines if and where to log inbound messages.
See also: AepEngine.InboundMessageLoggingPolicy
Valid Values
Value | Description |
---|---|
Default | The default inbound message logging policy is determined by the HA and persistence mode at play. With this policy, if event sourcing & cluster persistence are enabled, then inbound message logging is implicitly switched on and inbound messages are logged through the store's persister. All other configurations switch off inbound message logging. |
Off | Disables inbound message logging. With this policy, inbound message logging is disabled. This is the default policy with State Replication and Standalone mode of operation. The Standalone mode of operation is one where an engine has not been configured for HA: i.e. configured without a store. This option is invalid for use with engines configured to be clustered and use Event Sourcing since, in that mode, inbound messages are logged in the store's event log by virtue of inbound message replication. |
UseDedicated | Use a dedicated log for inbound message logging. With this policy, the engine uses a dedicated logger to log inbound messages. This option is invalid for use with engines configured to be clustered and use Event Sourcing since, in that mode, inbound messages are logged in the store's event log by virtue of inbound message replication. |
InboundMessageLoggingFailurePolicy
SINCE 3.2
Enumerates policies for handling inbound message logging failures.
This enumerates the policy that determines what to do in the event of an inbound message logging failure.
Valid Values
Value | Description |
---|---|
StopEngine | This policy specifies that a failure in inbound logging will be treated as a failure which will result in shutdown of the engine. |
StopLogging | This policy specifies that inbound logging errors will be trapped and cause the engine to discontinue inbound message logging. |
InboundEventAcknowledgementPolicy
SINCE 3.7
Enumerates an engine's inbound event acknowledgement policy.
The general contract of an AepEngine is that it cannot acknowledge upstream events (such as message events) in a transaction until the transaction has been stabilized to the point that, in the event of a failure, the message will not be lost.
When the engine is not configured with a store, this property has no effect and events are acknowledged when the entire transaction is committed (e.g. when downstream acknowledgements are received).
Valid Values
Value | Description |
---|---|
Default | This policy allows the engine to select the inbound event acknowledgement policy based on its configuration. At present, setting this policy results in OnSendStability being used, but this behavior could change in future releases. |
OnSendStability | With this policy, inbound events are acknowledged once all downstream acknowledgements for outbound messages and events have been received. With this policy, messages would not be lost even if a backup and primary member were to fail unrecoverably. |
OnStoreStability | With this experimental policy inbound events are acknowledged once they are committed to the store without waiting for acknowledgement for the transaction's outbound messages. Once an inbound event has been successfully stored it can be recovered from a backup or a standalone instance's transaction log, making this policy safe across failover and recovery. Note: this policy is currently in an experimental phase. It is not recommended for use in production without guidance from support. |
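Most enum values in this reference are supplied as the text content of a like-named DDL element. Assuming that convention applies to this policy (an assumption, since the owning element is documented elsewhere in this reference), opting into OnSendStability might look like the following sketch, where the app name and mainClass are placeholders:

```xml
<apps>
  <app name="order-processor" mainClass="com.example.OrderProcessor">
    <!-- Assumed element name mirroring the enum name -->
    <inboundEventAcknowledgementPolicy>OnSendStability</inboundEventAcknowledgementPolicy>
  </app>
</apps>
```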
LogEmptinessExpectation
Enumerates the set of values permissible with the log emptiness expectation.
See Also: IStoreJournallingPersister.LogEmptinessExpectation
Valid Values
Value | Description |
---|---|
None | Used to specify that there is no expectation regarding emptiness of a transaction log |
Empty | Used to specify that a transaction log is expected to be empty. |
NotEmpty | Used to specify that a transaction log is expected to exist and contain at least one entry. SINCE 3.4 |
MessageHandlingPolicy
Enumerates an application's AepEngine's inbound message handling policy.
See also: AepEngine.MessageHandlingPolicy
Valid Values
Value | Description |
---|---|
Normal | This policy represents normal message processing operation. This is the default message handling policy. |
Noop | This policy causes inbound messages to be discarded before dispatch to the application: i.e. they are not dispatched to the application. The messages are acknowledged if received on a guaranteed channel. |
Discard | This policy causes inbound messages to be blindly discarded. No acknowledgements are dispatched if received on a guaranteed channel. |
MessagingStartFailPolicy
Enumerates an engine's messaging start fail policy.
See also: AepEngine.MessagingStartFailPolicy
Valid Values
Value | Description |
---|---|
FailIfOneBindingFails | This policy causes a messaging start operation to be considered successful only if all binding attempts are successful, i.e. with this option a messaging start operation is reported as failed if one or more of the binding attempts fails. This is the default messaging start fail policy. |
NeverFail | This policy causes a start operation to be considered successful as long as all bind attempts do not result in permanent exceptions (a permanent exception reported by a bind attempt causes the bind operation to not be retried while a non-permanent exception causes the bind attempt to be periodically retried). In other words, the NeverFail option causes a messaging start operation to be reported as successful as long as at least one bind attempt was successful or failed with a non-permanent exception. |
FailIfAllBindingsFail | This policy causes a messaging start operation to be considered successful if one or more binding attempts is successful i.e. with this option, a messaging start operation is reported as failed if all the binding attempts fail. |
MessageBusBindingFailPolicy
This enumerates the policy that determines what action an engine takes when a message bus binding fails.
See also: AepEngine.MessageBusBindingFailPolicy
Valid Values
Value | Description |
---|---|
FailIfAnyBindingFails | With this policy, when a binding fails, the engine shuts down all other operational bindings (if any) and dispatches a messaging failure event to the application. This is the default message bus binding fail policy. |
Reconnect | With this policy, when a binding fails, the engine dispatches channel down events for all channels in the failed binding. It then starts the reconnect process on the failed binding periodically retrying the binding. Channel up events are then dispatched for channels in the binding once the binding has been successfully reestablished. |
MessageSendPolicy
Enumerates an application's AepEngine outbound message send policies.
The message send policy controls at what point during transaction commit processing that application sent messages are transmitted out of the application.
See also: AepEngine.MessageSendPolicy
Valid Values
Value | Description |
---|---|
ReplicateBeforeSend | This policy causes state/messages to be replicated before sending outbound messages triggered by the processing of inbound messages. In other words, for event sourcing, this policy causes an inbound message to be processed, the message replicated for processing to the backup instance(s), and then outbound messages triggered by the processing of the message to be sent outbound (after processing acknowledgments have been received from all backup instance(s)). For state replication, this policy causes inbound message(s) to be processed, the state changes triggered by the processing of the inbound message to be replicated to the backup instance(s), and then the outbound messages triggered by the processing of the inbound message to be sent (after receiving state replication stability notifications from the backup instance(s)). |
SendBeforeReplicate | This policy causes outbound messages triggered by the processing of inbound messages to be sent outbound first, before replicating the state/inbound messages. In other words, for event sourcing, this policy causes an inbound message to be processed, the outbound messages triggered by the processing of the inbound message to be dispatched outbound, and then the inbound message replicated to the backup instance(s) for parallel processing (after outbound send stability notifications have been received from downstream agents). For state replication, this policy causes an inbound message to be processed, the outbound messages triggered by the processing of the inbound message to be dispatched outbound, and then the state changes affected by the processing of the inbound messages to be replicated for stability to the backup instance(s). In most circumstances, this mode of operation is unsafe from an HA standpoint: a failover to a backup instance may result in duplicate processing of the source message with different outbound message results: e.g. duplicate outbound messages that are different in content. |
Noop | This policy causes outbound messages to be silently discarded. No stability notifications are dispatched for this policy for messages sent through guaranteed channels. |
AppExceptionHandlingPolicy
SINCE 3.4
Enumerates an engine's app exception handling policies.
This enumerates the policy using which an engine determines how to handle unchecked exceptions from an application message handler or message filter.
See also: AepEngine.AppExceptionHandlingPolicy
Valid Values
Value | Description |
---|---|
RollbackAndStop | Stop the engine. With this policy, upon receipt of an unchecked exception from an application handler, the engine rolls back the offending transaction, completes prior transactions, and stops with the exception. If the engine cannot complete prior transactions due to a subsequent error, the engine is still stopped with an exception and a backup will reprocess messages from incomplete transactions as well. This is the default policy. |
LogExceptionAndContinue | Log an exception and continue operating. With this policy, upon receipt of an unchecked exception from an application's event/message handler, the engine logs the exception and continues operating. So essentially message processing stops where it is, and from an HA standpoint, the message is removed from the processing stream. When applied to an exception thrown from a message filter, the message will not be dispatched to application event handlers (see AepEngine.setMessageFilter). In all cases, the message will not be considered to be part of the transaction and is acknowledged upstream. |
QuarantineAndStop | Quarantine the offending message and stop the engine. With this policy, upon receipt of an unchecked exception from an application handler, the engine quarantines the offending message, completes prior transactions, and stops with the exception. If the engine cannot complete prior transactions due to a subsequent error, the engine is still stopped with an exception and a backup will reprocess messages from incomplete transactions as well. |
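Following the same assumed convention as the sketch above (a like-named element inside <app>, which should be verified against the app element reference), selecting a non-default app exception handling policy might look like:

```xml
<app name="order-processor" mainClass="com.example.OrderProcessor">
  <!-- Assumed element name; RollbackAndStop is the documented default -->
  <appExceptionHandlingPolicy>LogExceptionAndContinue</appExceptionHandlingPolicy>
</app>
```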
MessageSendExceptionHandlingPolicy
Enumerates an engine's message send exception handling policy.
This enumerates the policy using which an engine determines how to handle unchecked exceptions received on message sends.
Note: There are two types of send failures that an engine can encounter during its operation. The first are exceptions thrown during the message send operation. Such exceptions are typically thrown by the underlying message bus bindings. The other, applicable only to guaranteed channels, is where the message send operation succeeds but could not be stabilized by the underlying messaging provider. This policy applies to the former type of send failures.
Additionally, this does not cover exceptions thrown to the application as the result of a send call from a message handler. Such exceptions are covered by the AppExceptionHandlingPolicy.
See also: AepEngine.MessageSendExceptionHandlingPolicy
Valid Values
Value | Description |
---|---|
TreatAsStabilityFailure | Treat the failure as a stability failure. Converts the send failure to a message stability failure (a fatal error). This is the default policy. |
LogExceptionAndContinue | Log an exception and continue operating. With this policy, upon receipt of an unchecked exception from the underlying send machinery, the engine logs the exception and continues operating. This policy can be dangerous for an application using Event Sourcing, because it is possible that such an exception is one that is indicative of a problem specific to the primary engine instance that would not occur on the backup if it were to take over and begin processing messages. |
MessageSendStabilityFailureHandlingPolicy
SINCE 3.12.6
Enumerates an engine's message send stability failure handling policy.
This enumerates the policy using which an engine determines how to handle stability failure notifications for outbound sends.
Note: There are two types of send failures that an engine can encounter during its operation. The first are exceptions thrown during the message send operation. Such exceptions are typically thrown by the underlying message bus bindings. The other, applicable only to guaranteed channels, is where the message send operation succeeds but could not be stabilized by the underlying messaging provider. This policy applies to the latter type of send failures.
Additionally, this does not cover exceptions thrown to the application as the result of a send call from a message handler. Such exceptions are covered by the AppExceptionHandlingPolicy.
See also: AepEngine.MessageSendStabilityFailureHandlingPolicy
Valid Values
Value | Description |
---|---|
LogExceptionAndContinue | Log an exception and continue operating. With this policy, the engine logs an exception and continues operating after receiving the send stability failure notification. When using this configuration, note that the engine will NOT retry sends that could not be stabilized by the messaging provider. This will very likely result in the sent messages that could not be stabilized being lost. Recovery of such messages is the application's responsibility. |
StopEngine | Stop the engine on encountering such a failure. With this policy, the engine shuts down when it receives a stability failure notification. This is the default policy. |
OutboundMessageLoggingPolicy
Enumerates an engine's outbound message logging policies.
This enumerates the policy that determines if and where to log outbound messages.
See also: AepEngine.OutboundMessageLoggingPolicy
Valid Values
Value | Description |
---|---|
Default | Disable outbound message logging. With this policy, outbound message logging is disabled. This is the default policy. When the application's HA Policy is StateReplication, outbound messages are logged to the store transaction log as required by State Replication to retransmit in doubt messages after a failure. However, the outbound messages in the store's transaction log will be discarded if log compaction is enabled, so an application may still want to log a copy to a dedicated logger as well. |
UseDedicated | Use a dedicated log for outbound message logging. With this policy, the engine uses a dedicated logger to log outbound messages. |
OutboundMessageLoggingFailurePolicy
SINCE 3.2
Enumerates policies for handling outbound message logging failures.
This enumerates the policy that determines what to do in the event of an outbound message logging failure.
Valid Values
Value | Description |
---|---|
StopEngine | This policy specifies that a failure in outbound logging will be treated as a failure, which will result in shutdown of the engine. |
StopLogging | This policy specifies that outbound logging errors will be trapped and cause the engine to discontinue outbound message logging. |
PerTransactionStatsLoggingPolicy
Enumerates an engine's per transaction stats logging policies.
This enumerates the policy that determines if and where to log per transaction stats.
See also: AepEngine.PerTransactionStatsLoggingPolicy
Valid Values
Value | Description |
---|---|
Off | Disable per transaction stats logging. With this policy, per transaction stats logging is disabled. This is the default policy. |
UseDedicated | Use a dedicated log for per transaction stats logging. With this policy, the engine uses a dedicated logger to log per transaction stats. |
PerTransactionStatsLoggingFailurePolicy
SINCE 3.2
Enumerates policies for handling per transaction stats logging failures.
This enumerates the policy that determines what to do in the event of a per transaction stats logging failure.
Valid Values
Value | Description |
---|---|
StopEngine | This policy specifies that a failure in per transaction stats logging will be treated as a failure, which will result in shutdown of the engine. |
StopLogging | This policy specifies that per transaction stats logging errors will be trapped and cause the engine to discontinue per transaction stats logging. |
QueueOfferStrategy
Specifies the offer strategy for threads publishing to an event multiplexer's queue. When not specified, the platform's default value for the multiplexer will be used, which is computed based on a number of factors depending on the event multiplexer in question and the optimization parameters in play for the application as a whole.
Valid Values
Value | Description |
---|---|
SingleThreaded | An optimized strategy that can be used when it can be guaranteed that there is only a single thread feeding the queue. |
MultiThreaded | Strategy that can be used when multiple threads can concurrently enqueue events for the multiplexer. |
MultiThreadedSufficientCores | Strategy to be used when there are multiple publisher threads claiming sequences. This strategy requires sufficient cores to allow multiple publishers to be concurrently claiming sequences. |
QueueWaitStrategy
Specifies the strategy used by an event multiplexer's queue draining thread(s).
Valid Values
Value | Description |
---|---|
Blocking | The BlockingWaitStrategy is the slowest of the available wait strategies, but is the most conservative with respect to CPU usage and will give the most consistent behavior across the widest variety of deployment options. However, knowledge of the deployed system can allow for additional performance. |
Sleeping | Like the BlockingWaitStrategy, the SleepingWaitStrategy attempts to be conservative with CPU usage by using a simple busy wait loop, but inserts a call to LockSupport.parkNanos(1) in the middle of the loop. On a typical Linux system, this will pause the thread for around 60us. It has the benefit that the producing thread does not need to take any action other than incrementing the appropriate counter, and does not incur the cost of signaling a condition variable. However, the mean latency of moving the event between the producer and consumer threads will be higher. It works best in situations where low latency is not required, but a low impact on the producing thread is desired. |
Yielding | The YieldingWaitStrategy is one of two wait strategies that can be used in low latency systems, where there is the option to burn CPU cycles with the goal of improving latency. The YieldingWaitStrategy will busy spin waiting for the sequence to increment to the appropriate value. Inside the body of the loop, Thread.yield() will be called, allowing other queued threads to run. This is the recommended wait strategy when you need very high performance and the number of Event Handler threads is less than the total number of logical cores, e.g. when you have hyper-threading enabled. |
BusySpin | The BusySpinWaitStrategy is the highest performing Wait Strategy, but puts the highest constraints on the deployment environment. This wait strategy should only be used if the number of Event Handler threads is smaller than the number of physical cores on the box, or when the thread has been affinitized and is known not to be sharing a core with another thread (including a thread operating on a hyper-threaded core sibling). |
StoreBindingRoleExpectation
Enumerates the different roles that an application's store can assume.
Valid Values
Value | Description |
---|---|
Primary | Indicates that this binding is the primary binding in a store cluster. A store cluster can have a single primary member which is elected through a leader election algorithm. The single primary member replicates messages and state to its backup peers according to an application's configured HA Policy. |
Backup | Indicates that a binding is a backup binding in a store cluster. When operating in backup mode, objects can be retrieved from the store but not updated or added. |
None | Indicates no expectation regarding a store binding's role. SINCE 3.4 |
ReplicationPolicy
Enumerates the different replication policies for an AepEngine.
See Also: AepEngine.ReplicationPolicy
Valid Values
Value | Description |
---|---|
Pipelined | With this replication policy, message/state is replicated soliciting acknowledgements from the backup engine cluster instance(s), but inbound message processing is not blocked while waiting for the acknowledgement to be received. |
Asynchronous | With this replication policy, message/state is replicated without soliciting an acknowledgement from the backup engine cluster instances. |
Groups Reference
Groups
EventMultiplexer Properties
Event Multiplexer properties configure the event multiplexer threads that are used throughout the platform for highly efficient inter-thread communication.
Elements
Value | Description |
---|---|
<queueDepth> | The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified the platform's default value for the multiplexer will be used. Usage: Optional |
<queueOfferStrategy> | Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used. Usage: Optional |
<queueWaitStrategy> | Controls the wait strategy used by the queue's drainer thread(s). When not specified, the platform's default value for the multiplexer will be used. Usage: Optional |
<queueDrainerCpuAffinityMask> | Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpus, or a square bracket enclosed comma separated list enumerating the logical cpus. For example, specifying "1" or "[0]" indicates Core 0, and "3" or "[0, 1]" indicates Core 0 or Core 1. Specifying a value of "0" indicates that the thread should be affinitized to the platform's default cpu, and omitting this value indicates that the thread should be affinitized according to the platform's default policy for the multiplexer. Usage: Optional |
<queueFeedMaxConcurrency> | Sets the maximum number of threads that will feed the multiplexer's queue. If this value is set too low, it will result in a runtime error. Typically, applications need not specify this value. Usage: Optional |
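Pulling the elements above together, a minimal sketch might look like the following; the enclosing <inboundEventMultiplexing> element name is a placeholder for whichever multiplexer is being configured, and the values shown are illustrative rather than recommended.

```xml
<inboundEventMultiplexing>
  <!-- Queue depth should typically be a power of 2 -->
  <queueDepth>1024</queueDepth>
  <queueOfferStrategy>MultiThreaded</queueOfferStrategy>
  <queueWaitStrategy>BusySpin</queueWaitStrategy>
  <!-- Pin the drainer thread to logical core 2 -->
  <queueDrainerCpuAffinityMask>[2]</queueDrainerCpuAffinityMask>
  <queueFeedMaxConcurrency>4</queueFeedMaxConcurrency>
</inboundEventMultiplexing>
```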
nv.optimizefor
Several DDL settings can be tuned based on whether an application should be optimized for either throughput or latency, by setting the environment variable nv.optimizefor:
throughput
When nv.optimizefor=throughput the following settings are set.
- Adaptive Commit Batch Ceiling -> 64 (unless set explicitly)
- Low level network I/O tuned:
- I/O buffer sizes tuned (adaptively sized larger to accommodate more bytes read per read).
- native network I/O libraries enabled (when available).
- tcp_no_delay -> false on replications and bus connections that support it (unless set explicitly)
- eager socket reads enabled for cluster replication connection – keep reading from socket for up to 1 sec to avoid select / poll (unless linkParams are explicitly set).
- Native file I/O enabled (when available).
- Critical Path Threads waitPolicy -> Yielding (unless set explicitly)
- Critical Path Threads detached = true (unless set to attached explicitly)
- Pooling enabled for certain platform objects. (unless disabled explicitly)
Note that this list is not exhaustive, and the settings above may change over time.
latency
When nv.optimizefor=latency the following settings are set.
- Low level network I/O tuned:
- I/O buffer sizes adaptively tuned
- native network I/O libraries enabled (when available).
- tcp_no_delay -> true on replications and bus connections that support it (unless set explicitly)
- eager socket reads enabled for cluster replication connection – keep reading from socket for up to 1 sec to avoid select / poll (unless linkParams are explicitly set).
- Native file I/O enabled (when available).
- Critical Path Threads waitPolicy -> BusySpin (unless set explicitly)
- Critical Path Threads detached = true (unless set to attached explicitly)
- Pooling enabled for certain platform objects such as packets. (unless disabled explicitly)
Note that this list is not exhaustive, and the settings above may change over time.
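Because nv.optimizefor is read from the bootstrap environment, one way to set it is through the DDL <env> section; the entry syntax below follows the property-name-as-element convention and should be verified against the <env> documentation earlier in this reference.

```xml
<env>
  <!-- Optimize platform defaults for latency; the alternative preset is 'throughput' -->
  <nv.optimizefor>latency</nv.optimizefor>
</env>
```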