The Talon Manual


Overview

This section contains detailed reference information for the X-DDL (Domain Descriptor Language) schema. Your application's config.xml is an XML document that adheres to the X-DDL schema and describes and configures a system of buses, applications and XVMs that together constitute a multi-agent 'system' that is managed and deployed together to perform a higher level business function. The DDL document is used to configure each XVM prior to launch and is used by the X Platform's deployment and monitoring tools to assist in managing the system as a whole.

There are 7 main elements in a DDL configuration model file:

  • <systemDetails>: Defines system metadata that is used by tools and emitted by an XVM in heartbeats. 
  • <env>: Defines environment variables that can be used for substitution of values in the document and are exposed to the application and talon runtimes.
  • <busProviders>: Defines custom message bus provider implementations.
  • <buses>: Defines the message buses that are used for applications to communicate with one another.
  • <apps>: Defines and configures the applications that make up the system. 
  • <xvms>: Defines the XVMs (Talon lightweight containers) that host one or more applications. 
  • <profiles>: Defines profiles that can be dynamically activated to localize the configuration for different operating environments.

A main tenet of the X Platform is to separate out (and shield) an application's business logic from the complex machinery underpinning high availability, performance, and scalability. As such, there is a wealth of tuning knobs and configuration options that can be applied without making application code changes. Most applications will not use the majority of the settings described here unless they are being tuned for special cases.

A good way to get started with DDL configuration is to look at some existing projects.

DDL Features

The platform parses config.xml as follows:

  1. Apply DDL Profiles: Activated DDL <profiles> elements are merged into the main DDL XML. 
    DDL profiles provide a means of making a single DDL configuration document more portable across environments by allowing the structure of the configuration to be augmented and overridden.
  2. Apply DDL Templates: Apply <app>, <bus> and <xvm> templates. 
    Templates provide a means of reducing duplicate configuration across the above element types. Template values may be supplied by profiles activated in step 1 above. 
  3. Apply DDL Substitutions and Overrides: ${varName::defaultValue} substitutions.
    Substitution values come from the environment or from <env> elements in the DDL document itself, and may in turn come from profiles or templates applied in steps 1 and 2 above.

The result is a DDL XML document with no profiles, templates or unresolved ${} variable values. 

Substituting Values at Runtime

For application portability across environments, it is often convenient to define some values in the DDL as variables that are later localized to the environment in which an application is to run. Examples of such values include host names, ports and message bus addresses.

The values in a DDL XML document can be overridden in two ways: via environment variable substitution and via DDL override properties. When launching an XVM, the substitution values are sourced from the bootstrap environment, which consists of the following sources (in increasing order of precedence):

Bootstrap Environment:

  • The application properties file. This file can be specified in the environment as nv_app_propfile, or as a System property as '-Dnv.app.propfile'. A value found in the environment takes precedence.
  • System properties (System.getProperties)
  • The host's environment (System.getenv())

When running in embedded mode or with a bootstrap configurer, the application may provide an alternate substitution environment using the VMConfigurer API, giving the application full control over the bootstrap property source. In this case, the application may provide VMConfigurer with the above bootstrap environment by passing in UtlTailoring.ENV_SUBSTITUTION_RESOLVER as the VMConfigurer's valueResolver.

Environment Variable Substitution

VMConfigurer will first substitute any ${VARNAME::DEFAULT} values using the ISubstResolver passed into the configurer (or from the environment if no resolver is passed in):
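
A sketch of such a declaration (modeled on the 'frontoffice' bus referenced below):

<buses>
  <bus name="frontoffice" descriptor="${FRONTOFFICE_BUSDESCRIPTOR::falcon://fastmachine:8040}"/>
</buses>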

If the substitution resolver contains a value for FRONTOFFICE_BUSDESCRIPTOR, then that value will be used for the bus descriptor. Otherwise, the default value of "falcon://fastmachine:8040" will be used. This substitution is done before the XML is validated, so even in cases where the raw ${} syntax would yield invalid XML, it is substituted away before the document is parsed.

Handling Special Characters

SINCE 3.4 

Special XML characters in properties that originate from the environment are automatically escaped before being substituted into the XML. In particular, <, >, &, ", and ' will be replaced by &lt;, &gt;, &amp;, &quot;, and &apos;, respectively. For example, if running with:

-DFRONTOFFICE_BUSDESCRIPTOR=activemq://localhost:61616&topic_starts_with_channel=false, the DDL parser will substitute it as valid XML like:
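
<!-- illustrative result; the bus element shape is assumed from the earlier frontoffice example -->
<bus name="frontoffice" descriptor="activemq://localhost:61616&amp;topic_starts_with_channel=false"/>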

Users who prefer to do their own XML escaping can disable this behavior by setting the following property:

In which case the property would need to be passed in pre-escaped as:

-DFRONTOFFICE_BUSDESCRIPTOR=activemq://localhost:61616&amp;topic_starts_with_channel=false

 

Substituting from DDL <env> Elements

SINCE 3.4

Properties defined in the <env> section of an X-DDL document can be used in variable substitution elsewhere in the document. For example:
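
A sketch (property and bus names are illustrative):

<env>
  <FRONTOFFICE_HOST>fastmachine</FRONTOFFICE_HOST>
</env>
<buses>
  <bus name="frontoffice" descriptor="falcon://${FRONTOFFICE_HOST}:8040"/>
</buses>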

Properties defined in the <env> section of an X-DDL document have the lowest priority of all configuration sources and can easily be overridden using system properties and environment variables. Property values defined in the <env> section may use ${} variables, but variable values are only resolved from the property source passed in, not from other properties defined in the <env> section. If a value defined in an <env> section is already defined in the property source passed into VMConfigurer, that <env> value is replaced (overridden) by the value passed in.

XVM Specific Env Properties

SINCE 3.8

The 3.8 release introduced the ability to configure XVM specific <env> properties. When DDL is being localized for a specific XVM, properties defined for that XVM are included and override <env> properties defined in the top-level portion of the DDL when resolving environment variables. An XVM template may also provide <env> properties. Template provided properties are merged with those defined in the <xvm>, with the <xvm> values overriding any values defined in the template.

The target XVM used to localize configuration can be passed to the VMConfigurer by supplying nv.ddl.targetxvm=<xvm-name> as a value resolver property. This property is automatically set by an XVM prior to invoking the VMConfigurer, but applications running in an embedded mode or using a bootstrap configurer must set this value manually for the environment to be available for ${} substitution.

XVM Specific Env Example
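
The following sketch illustrates the resolution rules (property and template names are illustrative); assume the DDL is being localized with nv.ddl.targetxvm=my-xvm:

<env>
  <prop1>top-level-value1</prop1>
  <prop2>top-level-value2</prop2>
  <prop3>top-level-value3</prop3>
</env>
<xvms>
  <templates>
    <template name="xvm-template">
      <env>
        <prop2>xvm-template-value2</prop2>
        <prop3>xvm-template-value3</prop3>
      </env>
    </template>
  </templates>
  <xvm name="my-xvm" template="xvm-template">
    <env>
      <prop3>my-xvm-value3</prop3>
    </env>
  </xvm>
</xvms>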

With nv.ddl.targetxvm=my-xvm, the above would resolve to an XVM specific env of:

  • prop1=top-level-value1 (as defined in top-level env since not overridden).
  • prop2=xvm-template-value2 (picked up from xvm-template to override top-level <env>).
  • prop3=my-xvm-value3 (picked up from my-xvm, overrides the template and top-level value).

Note that if any of the above properties were specified in the environment passed in (e.g. System.getenv or System.getProperty), the values passed in would take precedence.

Profile Provided Env Properties

The 3.8 release introduced the ability to define DDL profiles. DDL profiles may also contribute or override <env> properties, both top-level <env> properties and <xvm> specific <env> properties. Think of each active profile as being merged on top of the top-level DDL XML, overriding any values already defined.

Profile Supplied Env Example
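
Extending the sketch above with a PROD profile (again, names are illustrative):

<profiles>
  <profile name="PROD">
    <env>
      <prop1>global-profile-top-level-value1</prop1>
    </env>
    <xvms>
      <xvm name="my-xvm">
        <env>
          <prop3>profile-my-xvm-value3</prop3>
        </env>
      </xvm>
    </xvms>
  </profile>
</profiles>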

With the PROD profile active, the above would resolve to an XVM specific env of:

  • prop1=global-profile-top-level-value1 (overrides the top-level value with the PROD profile's top-level env).
  • prop2=xvm-template-value2 (picked up from xvm-template to override the top-level <env>).
  • prop3=profile-my-xvm-value3 (picked up from my-xvm in the profile, overriding the template and top-level value).

DDL Overrides

Any attribute or value element listed in this document that has an X DDL Override property can be overridden by passing the corresponding value into the substitution environment used by the VMConfigurer. DDL overrides are particularly useful for applications that internally bundle DDL on the classpath, making it difficult to edit by hand at runtime. 

In the above case, the value for descriptor can also be overridden using the DDL Override, 'x.buses.bus.frontoffice.descriptor'. So given the following values in the substitution resolver:

  • FRONTOFFICE_BUSDESCRIPTOR=falcon://slowmachine:8040
  • x.buses.bus.frontoffice.descriptor=direct://frontoffice

the resolved configuration would be:
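
<!-- illustrative resolved form -->
<bus name="frontoffice" descriptor="direct://frontoffice"/>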

That is, initial env substitution would substitute "falcon://slowmachine:8040", but the DDL override would then override that value with direct://frontoffice, resulting in the bus being localized as a direct bus.

'enabled' attributes

Throughout the schema, you will notice several elements that have enabled attributes. Specifying a value of "false" for these elements will cause the X-DDL parser to ignore them. This pattern allows such elements to be configured but then disabled at runtime via an environment variable, System property, or DDL override. For example, suppose the DDL were to configure the following in an app named "forwarderapp":
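
A hypothetical sketch (the mainClass and persister details are illustrative, chosen to match the override keys shown below):

<apps>
  <app name="forwarderapp" mainClass="com.example.Forwarder">
    <storage>
      <persister enabled="${FORWARDER_PERSISTER_ENABLED::true}"/>
    </storage>
  </app>
</apps>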

then at runtime, it could be disabled by launching the application with:

-Dx.apps.forwarderapp.storage.persister.enabled=false
or 

-DFORWARDER_PERSISTER_ENABLED=false

Changing the override prefix

DDL overrides (except for those in the <env> element) are prefixed with 'x.' to avoid conflicts with other configuration property names in the environment. It is possible to change this prefix by setting:

-Dnv.ddl.override.prefix=myprefix

In this case, 'x.apps.forwarderapp.storage.persister.enabled'
would become:
'myprefix.apps.forwarderapp.storage.persister.enabled'

DDL Templates

SINCE 3.8

The buses, apps, and xvms sections each support specifying templates. Templates allow common configuration elements to be specified in a single place to reduce repetition in the configuration. For example, if all applications defined in a system will be configured to collect statistics, putting this configuration in a template is more compact than specifying the configuration in each application.

Using Templates

  • Templates are defined under the <templates> element of the section to which they apply (buses, apps, xvms).
  • A template can specify the same configuration as the elements to which it applies (bus, app or xvm).
  • You can define multiple templates, but each bus, app, or xvm can only specify a single template.
  • When the DDL is parsed, each element in the template is applied to the element using the template, unless the element using the template overrides the same element or attribute in its own configuration. 
  • DDL template values can be overridden with DDL overrides. For example, 'x.buses.templates.orders-bus-template.descriptor' can be used to override the descriptor attribute defined in the 'orders-bus-template' bus template.

Templating Example

This example shows how templating can be used to reduce configuration repetition: 

Configuration without templates
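
A sketch of such a configuration (application names and statistics elements are illustrative):

<apps>
  <app name="order-processing-app" mainClass="com.example.OrderProcessingApp">
    <captureMessageTypeStats>true</captureMessageTypeStats>
    <captureEventLatencyStats>true</captureEventLatencyStats>
  </app>
  <app name="order-gateway-app" mainClass="com.example.OrderGatewayApp">
    <captureMessageTypeStats>true</captureMessageTypeStats>
    <captureEventLatencyStats>true</captureEventLatencyStats>
  </app>
</apps>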

In the above DDL, both applications share the same configuration for which statistics are collected. We can instead define a template to hold this configuration and configure each application to reference the template:

Equivalent Configuration using templating
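
Continuing the sketch above:

<apps>
  <templates>
    <template name="stats-template">
      <captureMessageTypeStats>true</captureMessageTypeStats>
      <captureEventLatencyStats>true</captureEventLatencyStats>
    </template>
  </templates>
  <app name="order-processing-app" mainClass="com.example.OrderProcessingApp" template="stats-template"/>
  <app name="order-gateway-app" mainClass="com.example.OrderGatewayApp" template="stats-template"/>
</apps>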

Templating Override Example

If an element using a template defines an element or attribute that is also defined in the template, the value in the element using the template takes precedence. Consider the following:

Overriding Template Values
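
A sketch (the heartbeats configuration shown is illustrative of an xvm setting):

<xvms>
  <templates>
    <template name="xvm-template">
      <heartbeats enabled="true" interval="10">
        <includeMessageTypeStats>false</includeMessageTypeStats>
      </heartbeats>
    </template>
  </templates>
  <xvm name="order-processing-vm" template="xvm-template">
    <heartbeats interval="5">
      <includeMessageTypeStats>true</includeMessageTypeStats>
    </heartbeats>
  </xvm>
</xvms>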

The above would resolve to:

Resolved DDL
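
<xvms>
  <xvm name="order-processing-vm">
    <heartbeats enabled="true" interval="5">
      <includeMessageTypeStats>true</includeMessageTypeStats>
    </heartbeats>
  </xvm>
</xvms>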

Note in the above that:

  • order-processing-vm retains the heartbeat interval attribute of 5 that it specified explicitly.
  • order-processing-vm retains the includeMessageTypeStats element of true that it specified explicitly.

DDL Profiles


SINCE 3.8

DDL profiles provide a means of making a single DDL configuration document more portable to different environments, by allowing the specification of profiles that can be activated to augment and override the structure of the DDL.

Some good use cases for profiles include:

  • Localizing the system to a specific environment by overriding ports, paths and hostnames to the target environment.
  • Creating a test profile to be used when unit testing an application. 

Each profile defined in a DDL document can specify all of the elements that can be specified in the DDL model. When a profile is activated all of the configuration it specifies is overlaid on top of the main DDL configuration. Values that already exist in the main DDL are overridden - for example, if an <app> is defined in the DDL document and an <app> with the same name attribute is defined in a profile, then all of the elements and attributes in the profile are applied on top of the <app> defined at the top level. 

Activated profiles are applied before variable substitution or templating is performed. This means that ${} variable substitution will pick up <env> elements defined in profiles, and that template contributions from activated profiles are merged in before any templates are applied.

Activating DDL Profiles

By default, profiles don't augment DDL configuration; they must be activated in order to contribute to the DDL. This can be achieved either through explicit activation or via a profile's activation element.

Explicit Activation

The property nv.ddl.profiles can be used to explicitly activate profiles by passing it in with the set of substitution values in the VMConfigurer. The value of nv.ddl.profiles is a comma separated string of profile names to activate. For example, one might configure a "test" profile that would set message buses to use loopback when running unit tests:

DDL With Profile
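
A sketch (bus name and descriptors are illustrative):

<buses>
  <bus name="frontoffice" descriptor="falcon://fastmachine:8040"/>
</buses>
<profiles>
  <profile name="test">
    <buses>
      <bus name="frontoffice" descriptor="loopback://frontoffice"/>
    </buses>
  </profile>
</profiles>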

This would resolve to the following when -Dnv.ddl.profiles=test is set in the bootstrap environment:

Resolved DDL (With Profile Applied and Removed)
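
<buses>
  <bus name="frontoffice" descriptor="loopback://frontoffice"/>
</buses>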

And it would resolve to the following if no nv.ddl.profiles were set (or none matching the "test" profile name):

Resolved DDL (With no Profile Match)
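
<buses>
  <bus name="frontoffice" descriptor="falcon://fastmachine:8040"/>
</buses>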

Profile Activation Element

A profile can specify an <activation> element that accepts a list of properties that must all match values in the bootstrap environment in order to activate the profile. This can be useful to localize an application based on the environment in which it is running. For example, the following profile could be automatically activated to set the discovery address to use when running in the prod environment, based on the environment variable ENVIRONMENT_NAME=PROD being set in that environment.

A Profile Activation Element
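
A sketch of such a profile (the activation and env element shapes, and the DISCOVERY_ADDRESS property, are illustrative):

<profiles>
  <profile name="prod">
    <activation>
      <properties>
        <ENVIRONMENT_NAME>PROD</ENVIRONMENT_NAME>
      </properties>
    </activation>
    <env>
      <DISCOVERY_ADDRESS>prod-discovery-host</DISCOVERY_ADDRESS>
    </env>
  </profile>
</profiles>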

Profile Activation Order

Profiles are applied in the order in which they are specified in nv.ddl.profiles, followed by any additional profiles activated by an activation element (which may be applied in any order). If the order of profile activation is important, the order attribute may be set on profiles to control the order in which they are applied:

  • Profiles with a lower order are applied first.
  • If the order attribute is not set, it defaults to 0.
  • If two profiles have the same order value, they are applied in the order they are specified in nv.ddl.profiles; otherwise they are applied in an unspecified order after ordered profiles.

Troubleshooting DDL parsing

DDL trace can be enabled by setting -Dnv.ddl.trace=debug (or, when using SLF4J, by setting the 'nv.ddl' logger to 'Trace' level).

XML Reference

This section provides an exhaustive list of all of the configuration elements that can be defined in DDL. Much of the reference documentation below is also contained in the x-ddl.xsd itself.

 

If you are working in an IDE such as Eclipse, try importing the DDL XSD schema into your Eclipse XML catalog so that you can get usage tips on configuration elements directly in the IDE by pressing ctrl-space.



The x-ddl.xsd schema is published online with each release and also included at the root of talon jars.

Optional Values

Many of the configuration options listed here need not be specified by most applications, and in most cases values listed as 'Optional' below should be omitted from an application's configuration as the platform will apply reasonable defaults. A good strategy is to start with a minimal configuration and only add additional configuration options as needed.  


DDL Model Sections

System Details

The system details section provides metadata about the overall system. It can be used by tools to better identify the system being configured.

Sample XML Snippet
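
A sketch (values are illustrative):

<systemDetails>
  <name>order-processing-system</name>
  <displayName>Order Processing System</displayName>
  <version>1.0</version>
</systemDetails>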

 

 

Settings

Element | Description

<systemDetails>

Holds metadata for the system described by the DDL.

System details are used by deployment and monitoring tools to identify the system.

 

 <name>

The unique identifier for the system described by this deployment descriptor.

The identifier should be short and descriptive, identifying the system and the collection of xvms and applications that it groups together.

<displayName>
A human readable name for the system intended for use by tools.
<version>

The version of this system.

A system version should be incremented as changes are made to the configuration or composition of the system (xvm and apps) and should also be changed when any of the binaries for the application have changed.

</systemDetails>

 

 

Environment Configuration

The <env> section allows configuration of the runtime properties accessible to Talon and the application through the XRuntime class. The X Platform reserves the prefix 'nv.' for platform configuration; applications are otherwise free to set arbitrary properties in the <env> section. The properties defined in <env> will be stored in the configuration repository and later loaded into XRuntime and made available to the application.

Since 3.4, values specified in an <env> section can be used for variable substitution of ${prop::value} style values outside of the <env> element.

Environment properties can be listed either in '.' separated form, or by breaking the dot separated levels into hierarchical nodes.

Sample XML Snippet
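
A sketch showing both forms (property names are illustrative; the nested form below defines 'myapp.storeRoot'):

<env>
  <CLUSTERING_IFADDRESS>192.168.1.1</CLUSTERING_IFADDRESS>
  <myapp>
    <storeRoot>/data/rdat</storeRoot>
  </myapp>
</env>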

Settings

Element | Description

<env>
...
</env>

Any XML element with text content will be treated as a property by concatenating its parent node names with '.' separators.

If the value is already defined in the set of ddl overrides passed into the parser, the value in XML will be overridden.

Unlike other DDL values mentioned in this document, overrides for <env> values need not be prefixed with 'x.' ... the values passed in directly override the values specified in the <env> section without a prefix. For example, given the sample snippet above:

-DCLUSTERING_IFADDRESS=192.168.1.1 should be used to override the value for CLUSTERING_IFADDRESS rather than -Dx.env.CLUSTERING_IFADDRESS=192.168.1.1.

 

 

Message Bus Provider Configuration

The 'busProviders' section is optional and allows registration of custom message bus implementations.

When working with custom message binding types, a messaging provider implementing com.neeve.sma.MessagingProvider must be registered with the runtime to serve as a factory for creating new message bus instances.

Providers are registered by name, which must match the providerName used when configuring the bus (or the scheme portion of the message bus descriptor).

To avoid conflicts with potential future bus implementations provided by the platform itself, it is recommended that users prefix custom provider names with a prefix such as 'x-' to mark the binding as a custom bus extension. For example, if you were to implement a bus binding that communicates over amqp (which is not currently implemented by the platform), use 'x-amqp' as the binding name.

SINCE 3.8

Sample XML Snippet
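
A sketch (the provider class is illustrative):

<busProviders>
  <provider name="x-amqp" providerClass="com.example.sma.AmqpMessagingProvider" enabled="true" displayName="AMQP Bus Provider"/>
</busProviders>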

Settings

Element | Description

<busProviders>

<provider

 
name

The bus provider name is the provider name used when configuring a bus of this type. For example, a bus provider registered as 'foo' would be used for a bus configured with 'foo://address:port'.

Usage: Required
X DDL Override: Not overridable (key)
Constraints: String, 32 characters or less. Must start with lowercase character and contain lowercase letters, digits, or '+', '-', '.' ([a-z][a-z0-9+\-.]+)

providerClass

 

The provider class used to create message bus binding instances for this bus type. A provider instance must implement 'com.neeve.sma.MessagingProvider', and will typically extend 'com.neeve.sma.impl.MessagingProviderBase'.

Usage: Optional
Default: N/A
X DDL Override: x.busProviders.<providerName>.providerClass
Constraints: String.

enabled

Can be used to disable the bus provider. A disabled bus provider will not be registered in the system.

Usage: Optional
Default: true
X DDL Override: x.busProviders.<providerName>.enabled
Constraints: true | false 

displayName

A user friendly name that can be used by tools for displaying the messaging provider.

Usage: Optional
Default: N/A
X DDL Override: x.busProviders.<providerName>.displayName
Constraints: A short display name

 

/>

</busProviders>

 

Message Bus Configuration

The 'buses' section configures the various messaging buses used globally in the deployment. For example, the below configures a messaging bus named 'frontoffice' that:

  • Uses a Falcon publish-subscribe bus for message transport between application agents.
    • The bus to use can be overridden from the environment via the FRONTOFFICE_BUSDESCRIPTOR variable.
  • Contains two channels:
    • an 'orders' channel with guaranteed quality of delivery.
    • an 'events' channel with best effort quality of delivery.

Sample XML Snippet
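
A sketch of the 'frontoffice' bus described above (channel ids and qos values are illustrative):

<buses>
  <bus name="frontoffice" descriptor="${FRONTOFFICE_BUSDESCRIPTOR::falcon://fastmachine:8040}">
    <channels>
      <channel name="orders" id="1">
        <qos>Guaranteed</qos>
      </channel>
      <channel name="events" id="2">
        <qos>BestEffort</qos>
      </channel>
    </channels>
  </bus>
</buses>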

Settings

Element | Description

<buses>

 

<templates>

Holds bus templates
<template

Defines a message bus template.
Template buses cannot be used at runtime; they serve only as templates for actual buses' configuration.

name

The name of the template.

Usage: Required
X DDL Override: Not overridable (key)

*

Any bus attribute defined below except for 'template'

X DDL Override: x.buses.templates.<templatename>.*

>
 
*

Any bus element described below.

X DDL Override: x.buses.templates.<templatename>.*

</template>
 

</templates>

 
<bus
Configures a bus.
name

Defines the bus name which must be unique within a configuration repository. Applications reference their bus by this name.

Usage: Required
X DDL Override: Not overridable (key)

descriptor

Defines the bus descriptor. A bus descriptor is used to look up and configure a message bus provider.

Usage: Required
X DDL Override: x.buses.bus.<busname>.descriptor
Constraints: String (a valid bus descriptor)

template

The name of a bus template to apply to this bus.

Usage: Optional
Default: No template when not set
X DDL Override: x.buses.bus.<busname>.template
Constraints: String (the name of a template defined under <templates>)

enabled

If set to false, this bus will not be added to the configuration repository and will not be available for application use.

Usage: Optional
Default: true
X DDL Override: x.buses.bus.<busname>.enabled
Constraints: true | false 

>
 
<channels>
<channel
Configures the channels for this bus. Individual applications that use the bus may use some or all of the channels according to their own configuration and interaction patterns.

name

Defines and configures a channel within a message bus. An SMA message channel is a named conduit for message exchange between SMA messaging participants. An application's AepEngine will start messaging channels prior to signaling to the application that messaging has started.

Usage: Required
X DDL Override: Not overridable (key)

id

The channel id is a numerical identifier of a channel uniquely identifying the channel in its bus. Some bus binding implementations may use this on the wire as a replacement for the string channel name for efficiency, so it is important that the id is consistent across configuration domains.

Usage: Optional
X DDL Override: x.buses.<busname>.<channelname>.id
Constraints: positive integer

<qos>

When the qos element is not provided, the platform's default QoS value will be used if not specified programmatically by the application.

Usage: Optional
X DDL Override:
x.buses.<busname>.<channelname>.qos

Constraints: See ChannelQos

<key>

Specifies the channel's key.

Usage: Optional
X DDL Override: x.buses.<busname>.<channelname>.key
Constraints: String

</channel>
</channels>

</bus>

</buses>

 

Application (AepEngine) Configuration

Applications are configured under the <apps> element. 

Sample XML Snippet
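
A sketch of an <apps> section configuring three applications (names and main classes are illustrative):

<apps>
  <app name="order-gateway" mainClass="com.example.OrderGateway">
    ...
  </app>
  <app name="order-matcher" mainClass="com.example.OrderMatcher">
    ...
  </app>
  <app name="order-auditor" mainClass="com.example.OrderAuditor">
    ...
  </app>
</apps>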

Settings

 

Element | Description

<apps>

The <apps> section configures the various applications in the scenario. An application is synonymous with an AEP engine. For example, the above configures three applications (i.e. engines).

<templates>

Holds app templates

Usage: Optional

<template

Defines a Talon application template.

Template applications cannot be used at runtime; they serve only as templates for actual apps' configuration.

name

The name of the template.

Usage: Required
X DDL Override: Not overridable (key)

*

Any app attribute defined below except for 'template' (and, prior to 3.9, mainClass).

X DDL Override: x.apps.templates.<templatename>.*

>
 
*

Any app element described below.

X DDL Override: x.apps.templates.<templatename>.*

</template>
 

</templates>

 

<app

 
name

Defines the application name which must be unique within an application's configuration domain.

Usage: Required
X DDL Override: Not overridable (key)

 

A common practice for a clustered application is to use the same name for an application and its store (see storage configuration below). It is best practice not to use a name with spaces, as the name is used in many contexts, such as scripting, where it is operationally simpler to avoid spaces and special characters.

 

mainClass

Specifies the application's main class (e.g. com.acme.MyApplication). An application's main class serves as the main entry point for a Talon application (it is loaded by a Talon XVM and provides lifecycle hooks to it). This is not to be confused with a java main class. When running in a Talon XVM the java main class will be the Talon XVM main (com.neeve.server.Main).

Usage: Required
X DDL Override:
x.apps.<appname>.mainClass

enabled

If set to false, this app will be ignored and not saved to the configuration repository. This can be used to disable an application at runtime. However, note that if a persistent configuration repository is in use, this will not cause a previously configured application to be deleted.

Usage: Optional
Default: true
X DDL Override:
x.apps.<appname>.enabled

Constraints: true | false 

>

 

<messaging>

Configures messaging for the application.

<storage>

Configures clustering and persistence for the application.

...

General / Miscellaneous Application Configuration

</app>

</apps>

 

 

 

Application Messaging Configuration

An app's <messaging> element:

  • Declares the message factories used by the application (to deserialize messages by factory id and type).
  • Configures which buses from the <buses> section are used by this application and how the application uses them.
  • Configures runtime settings applied to the bus when it is created.


Sample XML Snippet
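
A sketch (factory and channel names are illustrative):

<messaging>
  <factories>
    <factory name="com.example.messages.OrderMessageFactory"/>
  </factories>
  <buses>
    <bus name="frontoffice">
      <channels>
        <channel name="orders" join="true"/>
      </channels>
    </bus>
  </buses>
</messaging>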

Settings

Element | Description
<messaging>

Configures messaging for an application.

<factories>

Configures message factories for an application. Each message factory defined under this element is registered with the application's underlying engine. Registered message factories are used by the message buses to materialize message instances from factory and type ids received over the wire.

It is not mandatory to configure factories via DDL; they can also be registered programmatically by the application during application initialization.

SINCE 3.4

<factory

Configures a message factory used by the app.

name

The message factory's fully qualified name.

Usage: Required
X DDL Override: Not overridable (key)

</factory>

 
</factories>
 
<buses>
Configures the buses that the application will use. Each bus defined in this section should have the same name as a bus defined in the global <buses> section.

<bus>

Configures a bus from the <buses> section for use with the application and registers it with the underlying engine. Each application in the deployment will create its own bus instance, and may configure channel interest in that bus differently depending on its participation in the message flow.

name

Specifies the name of the bus which should reference a bus from the buses section of this descriptor or one already created and saved in the configuration repository.

Usage: Required
X DDL Override: Not overridable (key)

enabled>

If set to false, this bus will be ignored and not added to the application's list of buses.

Usage: Optional
Default: true
X DDL Override:
x.apps.<appname>.messaging.buses.<busname>.enabled

Constraints: true | false

<channels>

  <channel

Configures the bus channels used by the application, which will be a subset of those defined for the bus in the descriptor's <buses> section.

name

Specifies the name of the channel which references a channel defined in the <bus> element in the <buses> section.

Usage: Optional
DDL Override:
x.apps.<appname>.messaging.buses.<busname>.enabled

join>

An application that should receive messages on the channel should specify true. When true, subscriptions are issued for the channel based on the channel's key and filters.

Usage: Optional
Default: false

X DDL Override:
x.apps.<appname>.messaging.buses.<busname>.<channelname>.join
Constraints: true | false

<filter>

When specified and the channel is joined, this can be used to specify a channel key based filter that filters the messages received. See Channel Filters.

Usage: Optional
X DDL Override:
x.apps.<appname>.messaging.buses.<busname>.<channelname>.filter

<preserveJoinsOnClose>

Sets whether or not to preserve subscriptions for this channel when the channel is closed normally.

By default, when an engine is stopped without an error, bus channels that were 'joined' will be 'left', meaning that any subscriptions or interests created by the message bus will be unsubscribed or unregistered. Whether or not channels' interests are preserved can be configured at the application level using the app's preserveChannelJoinsOnStop setting. The preserveJoinsOnClose channel level property allows the application configured behavior to be overridden on a channel by channel basis.

Valid Options

  • Default: Use the value of preserveChannelJoinsOnStop configured for the AepEngine.
  • Leave: Unsubscribes any subscriptions for the channel
  • Preserve: Preserve any subscriptions established for the channel.

Note that this property has no effect in the case where an engine shuts down with an error with a non null cause [e.g. AepEngine.stop(new Exception())]. In this case channel joins are left intact, allowing a backup to take over.

Default: Default
X DDL Override:
x.apps.<appname>.messaging.buses.<busname>.<channelname>.preserveJoinsOnClose
Constraints: Default | Leave | Preserve

 SINCE 3.12

  </channel>

</channels>

Additional channel configurations can be added here.

<nonBlockingInboundMessageDispatch>


Specifies whether or not enqueue of inbound messages from this bus into the application's main inbound event multiplexer should be done in a non-blocking fashion. In most cases, this value should either not be specified or set to false.

Usage: Optional
X DDL Override:
x.apps.<appname>.messaging.buses.<busname>.nonBlockingInboundMessageDispatch

Constraints: true | false

<inboundMessageEventPriority>

Specifies the priority at which messages from this bus should be dispatched to the application's inbound event multiplexer. A negative value is interpreted as higher priority. A positive value will result in delayed processing by the number of milliseconds specified. If not set or 0, messages will be dispatched at normal priority.

Usage: Optional
X DDL Override:
x.apps.<appname>.messaging.buses.<busname>.inboundMessageEventPriority

Constraints: integer

<scheduleSendCommitCompletionEvents>

Indicates whether the bus manager's send commit completion events should be scheduled. Scheduling the completion events allows them to be added to the application's inbound event queue's feeder queue, which can reduce contention with message events.

Usage: Optional
X DDL Override:
x.apps.<appname>.messaging.buses.<busname>.scheduleSendCommitCompletionEvents

Constraints: true | false

<sendCommitCompletionEventPriority>

Specifies the priority at which send commit completion events from this bus should be dispatched to the application's inbound event multiplexer. A negative value is interpreted as higher priority. A positive value will result in delayed processing by the number of milliseconds specified. If not set or 0, events will be dispatched at normal priority. Setting this value higher than message events can reduce message processing latency in some cases.

Usage: Optional
X DDL Override:
x.apps.<appname>.messaging.buses.<busname>.sendCommitCompletionEventPriority

Constraints: integer

<detachedSend

Configures the detached send event multiplexer thread for the bus. When detached send is disabled, outbound send of messages is performed by the commit processing thread (typically the engine's inbound event multiplexer thread). Enabling detached send can reduce the workload on the commit processing thread, allowing it to process more inbound messages, but this can also incur additional latency.

enabled>

Specifies whether or not detached send is enabled for this bus.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.messaging.buses.<busname>.detachedSend.enabled

Constraints: true | false

<queueDepth>

The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used.

See <queueDepth>

X DDL Override:
x.apps.<appname>.messaging.buses.<busname>.detachedSend.queueDepth
Constraints: positive integer

<queueOfferStrategy>

Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used.

See <queueOfferStrategy> 

X DDL Override:
x.apps.<appname>.messaging.buses.<busname>.detachedSend.queueOfferStrategy

Constraints: See QueueOfferStrategy

<queueWaitStrategy>

Controls the wait strategy used by the queue's drainer thread when the queue is empty. When not specified, the platform's default value for the multiplexer will be used.

See <queueWaitStrategy>  

X DDL Override:
x.apps.<appname>.messaging.buses.<busname>.detachedSend.queueWaitStrategy

Constraints: See QueueWaitStrategy

<queueDrainerCpuAffinityMask>

Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpu, or a square bracket enclosed comma separated list enumerating the logical CPUs.

See <queueDrainerCpuAffinityMask>   

X DDL Override:
x.apps.<appname>.messaging.buses.<busname>.detachedSend.queueDrainerCpuAffinityMask

<queueFeedMaxConcurrency>

Sets the maximum number of threads that will feed the multiplexer's queue.

See <queueFeedMaxConcurrency>     

Usage: Optional
Default: 16
X DDL Override:
x.apps.<appname>.messaging.buses.<busname>.detachedSend.queueFeedMaxConcurrency

Constraints: positive integer

</detachedSend>

</bus>

</buses>
</messaging>
 

Application Storage Configuration

Configures storage options for the application. An application's store provides the foundation for HA and fault tolerance. Applications achieve clustering by configuring the store, which will discover other application members and elect a single primary member through a leader election algorithm. The store serves as the foundation for HA by replicating changes from the primary application member to backups in a highly efficient, pipelined, asynchronous manner - a core requirement for In Memory Computing. While the primary mechanism for HA is memory to memory replication, an application's storage configuration may also configure disk-based persistence as a fallback mechanism in the event that connections to backup instances fail.

An application that runs standalone without any persistence does not need to include a store, which is a perfectly valid configuration for an application that does not have HA requirements.

Sample XML Snippet
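
A sketch (store name and paths are illustrative):

<storage enabled="true">
  <clustering enabled="true">
    <storeName>order-processing</storeName>
  </clustering>
  <persistence enabled="true">
    <storeRoot>${myapp.storeRoot::/data/rdat}</storeRoot>
  </persistence>
</storage>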

Settings

Element | Description
<storage
See Also: StoreDescriptor
descriptor

The store descriptor which is used to localize the store to a specific provider.

Usage: Required prior to 3.4, Deprecated in 3.4+
Default: native://.
X DDL Override:
x.apps.<appname>.storage.descriptor

(warning) Starting with the 3.4 release, this attribute has been deprecated. Store clustering should now be configured via the <clustering> element instead.

enabled>

Can be set to false to disable the store for this application.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.storage.enabled
Constraints: true | false

<factories>

Configures state factories for an application. Each state factory defined under this element is registered with the application's underlying engine and store. Registered state factories are used by the store's replication receiver and transaction log to materialize entity instances from factory and type ids received over the wire.

It is not mandatory to configure factories via DDL; they can also be registered programmatically by the application during application initialization.

SINCE 3.4

<factory

Configures a state factory used by the app.

name

The state factory's fully qualified name.

Usage: Required
X DDL Override: Not overridable (key)

</factory>

 
</factories>
 
<persistenceQuorum>

Sets a store's persistence quorum. The persistence quorum is the minimum number of store members running in a cluster that determines whether persister commits are executed synchronously or not. If the number of members is greater than or equal to the quorum, then persistence commits are performed asynchronously. Otherwise, they are performed synchronously.

Usage: Optional
Default: 2
X DDL Override:
x.apps.<appname>.storage.persistenceQuorum

Constraints: non negative integer

<maxPersistSyncBacklog>

When set to a value greater than 0 (in seconds), the store's persister will be periodically synced to disk. This limits the amount of data (in time) that can remain unsynced in the event of a JVM failure, which can be useful for low volume applications that are operating above their persistence quorum.

Usage: Optional
Default: 0
X DDL Override:
x.apps.<appname>.storage.maxPersistSyncBacklog

Constraints: non negative integer

<icrQuorum>

Sets a store's ICR quorum. The ICR quorum is the minimum number of store members running in a cluster that determines whether ICR send commits are executed synchronously or not. If the number of members is greater than or equal to the quorum, then ICR commits are performed asynchronously. Otherwise, they are performed synchronously.

Usage: Optional
Default: 2
X DDL Override:
x.apps.<appname>.storage.icrQuorum

Constraints: non negative integer

<maxIcrSyncBacklog>

When set to a value greater than 0 (in seconds), the store's ICR sender is periodically synced. This limits the amount of data (in time) that can remain unsynced in the event of a JVM failure, which can be useful for low volume applications that are operating above their ICR quorum.

Usage: Optional
Default: 0
X DDL Override:
x.apps.<appname>.storage.maxIcrSyncBacklog

Constraints: non negative integer

<checkpointingType>

Sets the store's checkpoint controller type. A checkpoint controller determines the checkpoint boundaries within a transaction log by incrementing the checkpoint version for log entries. The checkpoint version is used by CDC and Log Compaction algorithms as the boundaries upon which those operations occur.

Usage: Optional
Default: 'Default'
X DDL Override:
x.apps.<appname>.storage.checkpointingType

Constraints: See CheckpointingType

<checkpointThreshold>

Sets the store's checkpoint threshold. The threshold controls the maximum number of entries before a transaction log's checkpoint version is increased; a checkpoint controller keeps track of the number of entries that count towards reaching this threshold.

Usage: Optional
Default: 'Default'
X DDL Override:
x.apps.<appname>.storage.checkpointThreshold

Constraints: positive integer

<checkpointMaxInterval>

Sets the max time interval (in millis) that can occur before triggering a new checkpoint.

Usage: Optional
Default: 'Default'
X DDL Override:
x.apps.<appname>.storage.checkpointMaxInterval

Constraints: positive integer

<detachedMemberInitialization>

SINCE 3.12.5

Sets whether backup member initialization is performed in a detached manner or not. When member initialization is detached, it executes concurrently with store operation. Detached member initialization is another name for non-blocking cluster join.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.storage.detachedMemberInitialization

Constraints: true | false

<detachedMemberInitializerCpuAffinityMask>

SINCE 3.12.5

Sets the CPU affinity mask to use for the detached member initializer thread. The affinity string can either be a long that represents a mask of logical cpus, or a square bracket enclosed comma separated list enumerating the logical cpus. For example, specifying "1" or "[0]" indicates Core 0, and "3" or "[0, 1]" indicates Core 0 and Core 1. Specifying a value of "0" indicates that the thread should be affinitized to the platform's default cpu, and omitting this value indicates that the thread should be affinitized according to the platform's default policy.

Usage: Optional
Default: unset
X DDL Override:
x.apps.<appname>.storage.detachedMemberInitializerCpuAffinityMask

Constraints: String

<discoveryDescriptor>

Sets the custom discovery descriptor for the store.

(warning) As of version 3.3, this element is replaced by <discoveryDescriptor> under the <clustering> element.

Usage: Optional
Default: unset
X DDL Override:
x.apps.<appname>.storage.discoveryDescriptor

Constraints: String

(warning) This element can be used as an alternative to the <discovery> element. Only one of <discovery> or <discoveryDescriptor> may be used to configure store discovery. When no discovery information is provided, the value set in the environment for nv.discovery.descriptor is used.

<failOnMultiplePrimaries>

This property has been deprecated and should be set under the clustering element.

Usage: Optional
Default: true
X DDL Override:
x.apps.<appname>.storage.failOnMultiplePrimaries

Constraints: true | false

<clustering

SINCE 3.4

The clustering element, when enabled, is used to configure store clustering which provides the ability for applications' store members to discover one another and form an HA cluster.

enabled>

Can be set to false to disable store clustering.

Usage: Optional
Default: true
X DDL Override:
x.apps.<appname>.storage.clustering.enabled

Constraints: true | false

<storeName>

Sets the name of the store. Applications with the same store name automatically form a cluster. If this configuration parameter is not specified, then the application name is used as the store name.

Usage: Optional
Default: The app name
X DDL Override:
x.apps.<appname>.storage.clustering.storeName

Constraints: String

<localIfAddr>

Sets the local network interface to bind to when establishing cluster network connections.

Usage: Optional
Default: unset
X DDL Override:
x.apps.<appname>.storage.clustering.localIfAddr

Constraints: String

<localPort>

Sets the TCP port to bind to when listening for cluster connections.

Usage: Optional
Default: unset
X DDL Override:
x.apps.<appname>.storage.clustering.localPort

Constraints: integer

<linkParams>

A comma separated set of key=value pairs that serve as additional configuration parameters for the network connections between the cluster members.

Usage: Optional
Default: unset
X DDL Override:
x.apps.<appname>.storage.clustering.linkParams

Constraints: String (a comma separated list of key=value pairs)

<linkReaderCpuAffinityMask>

Sets the CPU affinity mask to use for the cluster connection reader thread. Each cluster member uses a single thread to read replication traffic from other cluster members.

The affinity string can either be a long that represents a mask of logical cpu, or a square bracket enclosed comma separated list enumerating the logical cpus.

For example, specifying "1" or "[0]" indicates Core 0, and "3" or "[0, 1]" indicates Core 0 and Core 1. Specifying a value of "0" indicates that the thread should be affinitized to the platform's default cpu, and omitting this value indicates that the thread should be affinitized according to the platform's default policy for the multiplexer.

See com.neeve.util.UtlThread.setCPUAffinityMask(String) for details.

Usage: Optional
Default: unset
X DDL Override:
x.apps.<appname>.storage.clustering.linkReaderCpuAffinityMask

Constraints: String

<discoveryDescriptor>

Sets the custom discovery descriptor for the store.

When set, this descriptor is used to load the discovery provider for the store. In most cases, an application will simply want to use the default discovery provider configured for the JVM, which is set via the nv.discovery.descriptor property. In such cases this value need not be set, and the store will simply use the default discovery provider returned by DiscoveryCacheFactory.getDefaultCache().

However, in some cases where discovery within the same JVM must be partitioned, it can be useful to specify a separate discovery provider for the store, and this property facilitates that.

Usage: Optional
Default: unset
X DDL Override:
x.apps.<appname>.storage.clustering.discoveryDescriptor

Constraints: String

(warning) This element can be used as an alternative to the <discovery> element. Only one of <discovery> or <discoveryDescriptor> may be used to configure store discovery. When no discovery information is provided, the value set in the environment for nv.discovery.descriptor is used.

<discovery>

Configures the custom discovery descriptor for the store in decomposed form.

The discovery descriptor is composed as <provider>://<address>[:<port>][&prop1=val1][&propN=valN]

(warning) This element can be used as an alternative to the <discoveryDescriptor> element. Only one of <discovery> or <discoveryDescriptor> may be used to configure store discovery. When no discovery information is provided, the value set in the environment for nv.discovery.descriptor is used.

<provider>

The discovery provider's name which is used to locate the discovery implementation

Usage: Required
Default: unset
X DDL Override:
x.apps.<appname>.storage.clustering.discovery.provider

Constraints: String

<address>

The discovery provider's address

Usage: Required
Default: unset
X DDL Override:
x.apps.<appname>.storage.clustering.discovery.address

Constraints: String

<port>

The discovery provider's port

Usage: Optional
Default: unset
X DDL Override:
x.apps.<appname>.storage.clustering.discovery.port

Constraints: positive short

<properties>

Lists the discovery descriptor parameters as key=value pairs:
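
For example (the parameter name is hypothetical):

<properties>
  <someProp>someValue</someProp>
</properties>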


Usage: Optional
Default: unset
X DDL Override:
x.apps.<appname>.storage.clustering.discovery.properties.<propName>

Constraints: non empty

</discovery>

 

<initWaitTime>

Sets the time, in milliseconds, that the store cluster manager will wait on open for the cluster to stabilize. When a store member opens its binding to the store, it joins the discovery network to discover other store cluster members. Once discovered, the members need to connect to each other, perform handshakes and elect roles. This parameter governs how long, after the binding has joined the discovery network, the cluster manager waits for the store cluster to "stabilize".

(lightbulb) A longer initWaitTime will cause the store open operation to take at least that amount of time; it will continue to wait for additional instances to be discovered even if one other member is found. initWaitTime should be set high enough to accommodate the maximum amount of time that it is reasonable to expect another member to take to broadcast itself via discovery and accept a connection. For example, if the store or its peers are subject to stop-the-world VM pauses as a result of long garbage collections, the initWaitTime should be set high enough to accommodate this, to avoid two members entering the primary role.

Usage: Optional
Default: unset
X DDL Override:
x.apps.<appname>.storage.clustering.initWaitTime

Constraints: integer

<failOnMultiplePrimaries>

Sets whether a store cluster manager should fail the store binding on detecting multiple primaries in a cluster. The default policy is to fail on detecting multiple primaries. This means that if multiple primaries are detected, the members detected as primaries will shut down to prevent a "split-brain" situation. If this parameter is set to false, then the members detected as primaries will not establish connectivity with each other and will continue to operate independently as primaries.

(warning) This parameter should only be set to false if the application or messaging layer possesses a distributed locking mechanism that will cause one of the two dual primaries to be unable to start messaging when elected as primary (because there is another primary that has acquired the lock). Specifying this parameter as false without such a mechanism can have significant operational impact on the running of the application.

Usage: Optional
Default: true
X DDL Override:
x.apps.<appname>.storage.clustering.failOnMultiplePrimaries

Constraints: true | false

<memberElectionPriority>

When two members connect and neither has already assumed the primary role, the member with the lower election priority will be elected primary. Configured values of less than 0 are set to 0 and values configured greater than 255 are set to 255. The default election priority (if not configured here in the DDL) is 255. When two members have the same priority, either one may assume the primary role.

Usage: Optional
Default: 255
X DDL Override:
x.apps.<appname>.storage.clustering.memberElectionPriority

Constraints: 0-255

SINCE 3.8

<detachedSend

Configures whether outbound replication traffic is sent by the engine thread or handed off to a detached replicator sender thread.

(lightbulb) Offloading the send to a sender thread can increase store throughput and reduce latency (in some complex flows) but requires an extra processor core for the sender thread.

Usage: Optional

enabled>

Can be set to true to enable detached sends for store replication.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.storage.clustering.detachedSend.enabled

Constraints: true | false

<queueDepth>

The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used.

See <queueDepth>

X DDL Override:
x.apps.<appname>.storage.clustering.detachedSend.queueDepth

Constraints: positive integer

<queueOfferStrategy>

Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used.

See <queueOfferStrategy> 

X DDL Override:
x.apps.<appname>.storage.clustering.detachedSend.queueOfferStrategy

Constraints: See QueueOfferStrategy

<queueWaitStrategy>

Controls the wait strategy used by the queue's drainer thread when the queue is empty. When not specified, the platform's default value for the multiplexer will be used.

See <queueWaitStrategy>  

X DDL Override:
x.apps.<appname>.storage.clustering.detachedSend.queueWaitStrategy

Constraints: See QueueWaitStrategy

<queueDrainerCpuAffinityMask>

Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpu, or a square bracket enclosed comma separated list enumerating the logical cpus.

See <queueDrainerCpuAffinityMask>   

X DDL Override:
x.apps.<appname>.storage.clustering.detachedSend
.queueDrainerCpuAffinityMask

<queueFeedMaxConcurrency>

Sets the maximum number of threads that will feed the multiplexer's queue.

See <queueFeedMaxConcurrency>     

Usage: Optional
Default: 16
X DDL Override:
x.apps.<appname>.storage.clustering.detachedSend.queueFeedMaxConcurrency

Constraints: positive integer

</detachedSend>

 

<detachedDispatch

Configures whether inbound replication traffic and events are dispatched by the replication link reader thread or handed off to a detached replicator dispatcher thread.

Offloading the dispatch to a dispatcher thread can increase store throughput but requires an extra processor core for the dispatcher thread.

Usage: Optional

(warning) This feature is currently an experimental feature and is not supported for production usage. It is expected to be supported in a future release.

enabled>

Can be set to true to enable detached dispatch for store replication.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.storage.clustering.detachedDispatch.enabled

Constraints: true | false

<queueDepth>

The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used.

See <queueDepth>

X DDL Override:
x.apps.<appname>.storage.clustering.detachedDispatch.queueDepth

Constraints: positive integer

<queueOfferStrategy>

Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used.

See <queueOfferStrategy> 

X DDL Override:
x.apps.<appname>.storage.clustering.detachedDispatch.queueOfferStrategy

Constraints: See QueueOfferStrategy

<queueWaitStrategy>

Controls the wait strategy used by the queue's drainer thread when the queue is empty. When not specified, the platform's default value for the multiplexer will be used.

See <queueWaitStrategy>  

X DDL Override:
x.apps.<appname>.storage.clustering.detachedDispatch.queueWaitStrategy

Constraints: See QueueWaitStrategy

<queueDrainerCpuAffinityMask>

Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpu or a square bracket enclosed comma separated list enumerating the logical cpus.

See <queueDrainerCpuAffinityMask>   

X DDL Override:
x.apps.<appname>.storage.clustering.detachedDispatch
.queueDrainerCpuAffinityMask

<queueFeedMaxConcurrency>

Sets the maximum number of threads that will feed the multiplexer's queue.

See <queueFeedMaxConcurrency>     

Usage: Optional
Default: 16
X DDL Override:
x.apps.<appname>.storage.clustering.detachedDispatch.queueFeedMaxConcurrency

Constraints: positive integer

</detachedDispatch>

</clustering>
 
<persistence

Configures the persister for this store. A persister is responsible for storing the store's transactional updates to disk (or some other recoverable storage medium). Persisters typically serve as a secondary fault tolerance mechanism for clustered applications, but for an application that will only operate standalone this can serve as the primary mechanism for fault tolerance.

Usage: Optional

See Also: StorePersisterDescriptor

class

Can be set to the fully qualified classname of a custom implementation of a store persister class. If omitted or "native" is specified, then the platform's default persister will be used (recommended).

Usage: Optional
Default: native
X DDL Override:
x.apps.<appname>.storage.persistence.class

enabled>

Can be set to false to disable the persister for this store.

Usage: Optional
Default: true
X DDL Override:
x.apps.<appname>.storage.persistence.enabled

Constraints: true | false

<autoFlushSize>

In the absence of explicit flushes (e.g. flushOnCommit) of written entries, the size at which a flush is automatically triggered for queued writes. If not set, the platform default (8192) is used.

Usage: Optional
Default: 8192
X DDL Override:
x.apps.<appname>.storage.persistence.autoFlushSize

Constraints: positive integer

<flushOnCommit>

Whether or not the persister should be flushed on commit. By default, a persister buffers writes into an internal buffer and doesn't write to disk until that buffer has filled. Enabling flush on commit will flush the persister on each commit regardless of whether the buffer has filled.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.storage.persistence.flushOnCommit

Constraints: true | false

<flushUsingMappedMemory>

Whether flushes to the log file should be performed using a memory mapped file buffer.

Usage: Optional 
Default: false 
X DDL Override:
x.apps.<appname>.storage.persistence.flushUsingMappedMemory
 
Constraints: true | false

(warning) There are known issues on some platforms, such as Windows, where using this setting can cause file locking issues. Therefore, enabling this setting should be tested on the target platform being used.

<autoRepair>

Whether or not an attempt will be made to automatically repair non-empty logs by truncating malformed entries at the end of the log that are part of incomplete transactions.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.storage.persistence.autoRepair

Constraints: true | false

<storeRoot>

Specifies the root folder in which the persister's transaction log files are located.

Usage: Optional
Default: ${NVROOT}/rdat
X DDL Override:
x.apps.<appname>.storage.persistence.storeRoot

Constraints: a file path (possibly relative to the working directory).

 

(tick) If the expected value of NVROOT on your target deployment host is not on the device where you want to place your transaction logs (e.g. a slow or small disk), then consider making this a substitutable value such as:

<storeRoot>${myapp.storeRoot}</storeRoot>, so that you can customize its location at runtime appropriate to the environment in which you are launching.

 <shared>

Whether or not the persister is shared. If omitted or false, the application will use shared-nothing persistence. If true, it indicates that the persister uses the same physical storage for backups and primaries, meaning that instances in a backup role will not persist to disk and will leave persistence to the primary. In most cases applications will leave this as false.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.storage.persistence.shared

Constraints: true | false

<cdcEnabled>

Whether CDC is enabled on the log.

If CDC is not enabled, then a CDC processor run on a log will not dispatch any events. If CDC is not enabled on a log and then reenabled later, CDC will start from the live log at the time the CDC is enabled. If a compaction occurred while CDC was disabled, then the change events that occurred during that time will be lost; in other words, CDC enablement instructs the compactor to preserve data on disk necessary for performing CDC rather than deleting it on compaction.

CDC enabled is only supported for applications using StateReplication as an HAPolicy.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.storage.persistence.cdcEnabled

Constraints: true | false

<compactionThreshold>

Sets the log compaction threshold.

The log compaction threshold is the size (in megabytes) that triggers a log compaction. The act of compacting a log will compact as many complete checkpoints in the log as possible and switch the live log over to the compacted log. A threshold value of less than or equal to 0 disables live log compaction.

Usage: Optional
Default: 0
X DDL Override:
x.apps.<appname>.storage.persistence.compactionThreshold

Constraints: integer

(lightbulb) Log Compaction is currently only applicable to applications using StateReplication as an HAPolicy.

<maxCompactionWindowSize>

The log compaction window is the approximate maximum size (in megabytes) rounded up to the end of the nearest checkpoint that a compact operation uses to determine how many log entries it will hold in memory. The more entries the compactor can hold in memory while performing a compaction, the more efficient the compact operation will be.

Note: The minimum compaction window is a checkpoint. Therefore, if the system is configured such that a checkpoint covers entries that cumulatively exceed the value of this parameter, then this parameter will not reduce the compaction memory usage; rather, the compactor will load the entire checkpoint into memory when performing the compaction operation.

Note: When calculating memory needed by the compaction operation, one should multiply this parameter by a factor of 2: i.e. the memory used by compaction will be twice the memory specified via this parameter.

Usage: Optional
Default: 1024 (1Gb)
X DDL Override:
x.apps.<appname>.storage.persistence.maxCompactionWindowSize

Constraints: integer

(lightbulb) Log Compaction is currently only applicable to applications using StateReplication as an HAPolicy.

<logScavengePolicy>

Sets policy used to scavenge logs.

A log with number N is considered a candidate for scavenging when N is less than the live log number and N is less than the CDC log number. This parameter specifies how such logs are scavenged. Currently, the only recommended value is 'Delete'; the 'Disabled' policy is used by tools to ensure that they don't erroneously delete files still needed for CDC.

Usage: Optional
Default: Delete
X DDL Override:
x.apps.<appname>.storage.persistence.logScavengePolicy
Constraints: Delete|Disabled

(lightbulb) The log scavenge policy only makes sense for applications using StateReplication as an HAPolicy, as it pertains only to stale transaction logs left over from CDC or Compaction.

(lightbulb) The Disabled policy was introduced in the 3.11 release.

<initialLogLength>

Sets the initial file size of the persister's transaction log in gigabytes.

Preallocating the transaction log can save costs in growing the file size over time since the operation of growing a log file may actually result in a write of file data + the metadata operation of updating the file size, and may also benefit from allocating contiguous sectors on disk.

Usage: Optional
Default: 1 (1Gb)
X DDL Override:
x.apps.<appname>.storage.persistence.initialLogLength

Constraints: positive float

(tick) The log size is specified in Gb. For an initial size of less than 1 Gb, specify a float value. For example, a value of .01 would result in a preallocated size of ~10Mb, which can be useful for test environments.

<zeroOutInitial>

Whether the log file should be explicitly zeroed out (to force commit all disk pages) if newly created.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.storage.persistence.zeroOutInitial

Constraints: true | false

<pageSize>

Sets the page size for the disk in bytes. The persister will use this as a hint in several areas to optimize its operation.

Usage: Optional
Default: 8192
X DDL Override:
x.apps.<appname>.storage.persistence.pageSize

Constraints: positive integer

<detachedPersist

Configures whether or not persister writes are done by the store commit thread or passed off to a detached persister write thread. Offloading the persist to a persister thread can increase store throughput but requires an extra processor core for the persister thread.

enabled>

Can be set to true to enable detached persist for this persister.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.storage.persistence.detachedPersist.enabled

Constraints: true | false

<queueDepth>

The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used.

See <queueDepth>

X DDL Override: x.apps.<appname>.storage.persistence.detachedPersist.queueDepth
Constraints: positive integer

<queueOfferStrategy>

Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used.

See <queueOfferStrategy> 

X DDL Override: x.apps.<appname>.storage.persistence.detachedPersist.queueOfferStrategy
Constraints: See QueueOfferStrategy

<queueWaitStrategy>

Controls the wait strategy for the queue. When not specified, the platform's default value for the multiplexer will be used.

See <queueWaitStrategy>  

X DDL Override:
x.apps.<appname>.storage.persistence.detachedPersist.queueWaitStrategy

Constraints: See QueueWaitStrategy

<queueDrainerCpuAffinityMask>

Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical CPUs, or a square bracket enclosed comma separated list enumerating the logical CPUs.

See <queueDrainerCpuAffinityMask>   

X DDL Override:
x.apps.<appname>.storage.persistence.detachedPersist.queueDrainerCpuAffinityMask

<queueFeedMaxConcurrency>

Sets the maximum number of threads that will feed the multiplexer's queue.

See <queueFeedMaxConcurrency>     

Usage: Optional
Default: 16
X DDL Override:
x.apps.<appname>.storage.persistence.detachedPersist.queueFeedMaxConcurrency

Constraints: positive integer

</detachedPersist>

</persistence>
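
Putting the above together, a hedged sketch of a persistence configuration for a standalone application that favors durability (values are illustrative only; ${myapp.storeRoot} is a hypothetical substitution variable):

    <persistence enabled="true">
        <!-- flush every commit rather than waiting for the internal buffer to fill -->
        <flushOnCommit>true</flushOnCommit>
        <!-- place transaction logs on a fast device, localized per environment -->
        <storeRoot>${myapp.storeRoot::/data/rdat}</storeRoot>
        <!-- preallocate only ~10Mb, e.g. for a test environment -->
        <initialLogLength>.01</initialLogLength>
    </persistence>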
 
<icr

Configures Inter-cluster Replication (ICR) for the application.

role

Configures the inter-cluster replication role for this application instance.

See ICRRole

Usage: Required
Default: N/A
X DDL Override:
x.apps.<appname>.storage.icr.role

Constraints: Sender | StandaloneReceiver

busDescriptor

Configures the bus descriptor for ICR. ICR uses its own private bus instance created from this descriptor.

This can be used instead of specifying the provider, address, and port properties.

Usage: Required
Default: N/A
X DDL Override:
x.apps.<appname>.storage.icr.busDescriptor

Constraints: String

enabled>

Can be set to true to enable inter cluster replication.

Usage: Optional
Default: true
X DDL Override:
x.apps.<appname>.storage.icr.enabled
Constraints: true | false

<bus

Defines and configures the private bus instance used for ICR.

The ICR bus can be configured by either the bus element or the busDescriptor attribute.

It is illegal to use both mechanisms.

<provider>

The bus provider name.

Usage: Required
Default: N/A
X DDL Override:
x.apps.<appname>.storage.icr.bus.provider

Constraints: String

<address> 

The bus address.

Usage: Required
Default: N/A
X DDL Override:
x.apps.<appname>.storage.icr.bus.address

Constraints: String

<port>

The bus port for bus providers that accept a port.

Usage: Required
Default: N/A
X DDL Override:
x.apps.<appname>.storage.icr.bus.port

Constraints: short

<properties>

Lists the bus descriptor parameters as key=value pairs, one per line (see the sketch following </bus> below).

Usage: Optional
X DDL Override:
x.apps.<appname>.storage.icr.bus.<propname>

Constraints: String (key=value pairs)

</bus>
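
By way of example, a hedged sketch of an ICR configuration using the nested <bus> element (the provider, address, port and property values shown are hypothetical placeholders sourced from substitution variables; consult your bus provider's documentation for its actual parameters):

    <icr role="Sender" enabled="true">
        <bus>
            <provider>${myapp.icr.busProvider}</provider>
            <address>${myapp.icr.busAddress}</address>
            <port>${myapp.icr.busPort}</port>
            <properties>
                <!-- bus descriptor parameters, one key=value pair per line -->
                someProviderParam=someValue
            </properties>
        </bus>
    </icr>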

 

 <shared>

Whether or not an ICR Sender is a shared sender. Applications should set this to true when using ICR to a Standalone Receiver, e.g. so that only the primary instance sends updates to the ICR queue.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.storage.icr.shared
Constraints: true | false

(lightbulb) In most cases applications will want to set this to true.

<flushOnCommit>

Whether or not the ICR sender should be flushed on commit. Setting this value to true will flush all updates to the underlying message bus on commit. With a value of false, the bus may buffer some messages until new updates are sent on subsequent commits.

Usage: Optional
Default: false
X DDL Override: x.apps.<appname>.storage.icr.flushOnCommit
Constraints: true | false

<detachedSend

Configures whether or not ICR sends are done by the store commit thread or passed off to a detached send thread. Offloading the send to a sender thread can increase store throughput but requires an extra processor core for the sender thread. When enabled the properties here configure the multiplexer for the detached send thread.

enabled>

Configures whether or not detached ICR send is enabled.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.storage.icr.detachedSend.enabled

Constraints: true | false

<queueDepth>

The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used.

See <queueDepth>

X DDL Override: x.apps.<appname>.storage.icr.detachedSend.queueDepth
Constraints: positive integer

<queueOfferStrategy>

Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used.

See <queueOfferStrategy> 

X DDL Override:
x.apps.<appname>.storage.icr.detachedSend.queueOfferStrategy

Constraints: See QueueOfferStrategy

<queueWaitStrategy>

Controls the wait strategy for the queue. When not specified, the platform's default value for the multiplexer will be used.

See <queueWaitStrategy>  

X DDL Override:
x.apps.<appname>.storage.icr.detachedSend.queueWaitStrategy

Constraints: See QueueWaitStrategy

<queueDrainerCpuAffinityMask>

Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical CPUs, or a square bracket enclosed comma separated list enumerating the logical CPUs.

See <queueDrainerCpuAffinityMask>   

X DDL Override:
x.apps.<appname>.storage.icr.detachedSend.queueDrainerCpuAffinityMask

<queueFeedMaxConcurrency>

Sets the maximum number of threads that will feed the multiplexer's queue.

See <queueFeedMaxConcurrency>     

Usage: Optional
Default: 16
X DDL Override:
x.apps.<appname>.storage.icr.detachedSend.queueFeedMaxConcurrency

Constraints: positive integer

</detachedSend>

</icr>
</storage>
End of storage configuration (only one storage configuration may be specified per application).

General Application Configuration

The remaining elements under the <app> element configure the operation of the application's AepEngine.

Sample XML
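
(The following is a hedged sketch assembled from the elements documented in this section; values are illustrative, and the <app> element's attributes, which are documented elsewhere in this manual, are elided.)

    <app>
        ...
        <inboundEventMultiplexing>
            <queueDepth>1024</queueDepth>
        </inboundEventMultiplexing>
        <inboundMessageLogging policy="Off"/>
        <outboundMessageLogging policy="Off"/>
        <startupExpectations>
            <role>Primary</role>
        </startupExpectations>
        <messageHandlingPolicy>Normal</messageHandlingPolicy>
        <replicationPolicy>Pipelined</replicationPolicy>
    </app>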

Settings

Element

Description

<inboundEventMultiplexing

Configures the AepEngine's inbound event multiplexer: the single AepEngine multiplexer thread that serializes processing of messages, timers, acks and other events for the application.

<queueDepth>

The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used.

See <queueDepth>

X DDL Override:
x.apps.<appname>.inboundEventMultiplexing.queueDepth
Constraints: positive integer

<queueOfferStrategy>

Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used.

See <queueOfferStrategy> 

X DDL Override: x.apps.<appname>.inboundEventMultiplexing.queueOfferStrategy
Constraints: See QueueOfferStrategy

A value of SingleThreaded is almost never appropriate for an AepEngine because many threads dispatch events to an engine.

<queueWaitStrategy>

Controls the wait strategy for the queue. When not specified, the platform's default value for the multiplexer will be used.

See <queueWaitStrategy>  

X DDL Override:
x.apps.<appname>.inboundEventMultiplexing.queueWaitStrategy

Constraints: See QueueWaitStrategy

<queueDrainerCpuAffinityMask>

Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical CPUs, or a square bracket enclosed comma separated list enumerating the logical CPUs. (See the example following </inboundEventMultiplexing> below.)

See <queueDrainerCpuAffinityMask>   

X DDL Override:
x.apps.<appname>.inboundEventMultiplexing.queueDrainerCpuAffinityMask

<queueFeedMaxConcurrency>

Sets the maximum number of threads that will feed the multiplexer's queue.

See <queueFeedMaxConcurrency>     

Usage: Optional
Default: 16
X DDL Override:
x.apps.<appname>.inboundEventMultiplexing.queueFeedMaxConcurrency

Constraints: positive integer

</inboundEventMultiplexing>
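
As an illustration of the two affinity string forms described above, both of the following pin the drainer thread to logical CPU 2 (a mask of 4 is binary 100, i.e. bit 2 set):

    <queueDrainerCpuAffinityMask>4</queueDrainerCpuAffinityMask>
    <queueDrainerCpuAffinityMask>[2]</queueDrainerCpuAffinityMask>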

 

<inboundMessageLogging

Configures inbound message logging for the engine. An inbound message logger logs inbound messages to a transaction log file. Inbound logging does not play a role in HA for the application, but can be useful for auditing purposes.

policy

The inbound message logging policy for the application.

See InboundMessageLoggingPolicy

Usage: Required
Default: "Default"
X DDL Override:
x.apps.<appname>.inboundMessageLogging.policy

Constraints: Default | Off | UseDedicated

failurePolicy

SINCE 3.2
The inbound message logging failure policy for the application.

See InboundMessageLoggingFailurePolicy

Usage: Optional
Default: StopEngine
X DDL Override:
x.apps.<appname>.inboundMessageLogging.failurePolicy

Constraints: StopEngine | StopLogging

<autoFlushSize>

In the absence of explicit flushes (e.g. flushOnCommit) of written entries, the size at which flush is automatically triggered for queued writes. If not set, the platform default (8192) is used.

Usage: Optional
Default: 8192
X DDL Override: x.apps.<appname>.inboundMessageLogging.autoFlushSize
Constraints: positive integer

<flushOnCommit>

Whether or not the logger should be flushed on commit. By default the logger buffers writes into an internal buffer and doesn't write to disk until that buffer has filled. Enabling flush on commit will flush the logger regardless of whether the buffer has filled.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.inboundMessageLogging.flushOnCommit

Constraints: true | false

<flushUsingMappedMemory>

Whether flushes to the log file should be performed using a memory mapped file buffer.

Usage: Optional 
Default: false 
X DDL Override:
x.apps.<appname>.inboundMessageLogging.flushUsingMappedMemory
 
Constraints: true | false

(warning) There are known issues on some platforms, such as Windows, where using this setting can cause file locking issues. Therefore, enabling this setting should be tested on the target platform being used.

<autoRepair>

Whether or not an attempt will be made to automatically repair a non-empty log on open by truncating malformed entries at the end of the log that are part of incomplete transactions.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.inboundMessageLogging.autoRepair

Constraints: true | false

<storeRoot>

Specifies the root folder in which the logger's transaction log files are located.

Usage: Optional
Default: ${NVROOT}/rdat
X DDL Override:
x.apps.<appname>.inboundMessageLogging.storeRoot

Constraints: a file path (possibly relative to the working directory).

If the expected value of NVROOT on your target deployment host is not on the device where you want to place your transaction logs (e.g. a slow or small disk), then consider making this a substitutable value such as:

<storeRoot>${myapp.storeRoot}</storeRoot>, so that you can customize its location at runtime appropriate to the environment in which you are launching.


<initialLogLength>

Sets the initial file size of the logger's transaction log in gigabytes.

Preallocating the transaction log can save costs in growing the file size over time, since the operation of growing a log file may actually result in a write of file data + the metadata operation of updating the file size, and may also benefit from allocating contiguous sectors on disk.

Usage: Optional
Default: 1 (1Gb)
X DDL Override:
x.apps.<appname>.inboundMessageLogging.initialLogLength

Constraints: positive float

The log size is specified in Gb. For an initial size of less than 1 Gb, specify a float value. For example, a value of .01 would result in a preallocated size of ~10Mb, which can be useful for test environments.

<zeroOutInitial>

Whether the log file should be explicitly zeroed out (to force commit all disk pages) if newly created.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.inboundMessageLogging.zeroOutInitial
Constraints: true | false

<pageSize>

Sets the page size for the disk in bytes. The logger will use this as a hint in several areas to optimize its operation.

Usage: Optional
Default: 8192
X DDL Override:
x.apps.<appname>.inboundMessageLogging.pageSize

Constraints: positive integer

<detachedWrite

Configures whether or not logger writes are done by the committing thread or passed off to a detached writer thread. Offloading to a writer thread can increase application throughput but requires an extra processor core for the logger thread.

enabled>

Can be set to true to enable detached logging for the logger.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.inboundMessageLogging.detachedWrite.enabled

Constraints: true | false

<queueDepth>

The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used.

See <queueDepth>

X DDL Override:
x.apps.<appname>.inboundMessageLogging.detachedWrite.queueDepth

Constraints: positive integer

<queueOfferStrategy>

Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used.

See <queueOfferStrategy>

X DDL Override: x.apps.<appname>.inboundMessageLogging.detachedWrite.queueOfferStrategy
Constraints: See QueueOfferStrategy

<queueWaitStrategy>

Controls the wait strategy for the queue. When not specified, the platform's default value for the multiplexer will be used.

See <queueWaitStrategy>  

X DDL Override:
x.apps.<appname>.inboundMessageLogging.detachedWrite.queueWaitStrategy

Constraints: See QueueWaitStrategy

<queueDrainerCpuAffinityMask>

Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical CPUs, or a square bracket enclosed comma separated list enumerating the logical CPUs.

See <queueDrainerCpuAffinityMask>   

X DDL Override:
x.apps.<appname>.inboundMessageLogging.detachedWrite.queueDrainerCpuAffinityMask

<queueFeedMaxConcurrency>

Sets the maximum number of threads that will feed the multiplexer's queue.

See <queueFeedMaxConcurrency>     

Usage: Optional
Default: 16
X DDL Override:
x.apps.<appname>.inboundMessageLogging.detachedWrite.queueFeedMaxConcurrency

Constraints: positive integer

</detachedWrite>

</inboundMessageLogging>

End of application's inbound message logging properties.
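
For example, a hedged sketch of an inbound message logging configuration that trades some throughput for audit durability (values are illustrative; ${myapp.auditRoot} is a hypothetical substitution variable):

    <inboundMessageLogging policy="UseDedicated" failurePolicy="StopLogging">
        <!-- flush each commit so the audit trail is not held in the write buffer -->
        <flushOnCommit>true</flushOnCommit>
        <storeRoot>${myapp.auditRoot}</storeRoot>
        <!-- offload log writes to a detached writer thread -->
        <detachedWrite enabled="true">
            <queueDepth>1024</queueDepth>
        </detachedWrite>
    </inboundMessageLogging>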

<outboundMessageLogging

Configures outbound message logging for the engine. An outbound message logger logs sent messages to a transaction log file. An outbound message log file does not play a role in HA for the application, but can be useful for auditing purposes.

policy

The outbound message logging policy for the application.

See OutboundMessageLoggingPolicy

Usage: Required
Default: "Default"
X DDL Override:
x.apps.<appname>.outboundMessageLogging.policy

Constraints: Default | Off | UseDedicated

failurePolicy

SINCE 3.2
The outbound message logging failure policy for the application.

See OutboundMessageLoggingFailurePolicy

Usage: Optional
Default: StopEngine
X DDL Override:
x.apps.<appname>.outboundMessageLogging.failurePolicy

Constraints: StopEngine | StopLogging

<autoFlushSize>

In the absence of explicit flushes (e.g. flushOnCommit) of written entries, the size at which flush is automatically triggered for queued writes. If not set, the platform default (8192) is used.

Usage: Optional
Default: 8192
X DDL Override:
x.apps.<appname>.outboundMessageLogging.autoFlushSize

Constraints: positive integer

<flushOnCommit>

Whether or not the logger should be flushed on commit. By default the logger buffers writes into an internal buffer and doesn't write to disk until that buffer has filled. Enabling flush on commit will flush the logger regardless of whether the buffer has filled.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.outboundMessageLogging.flushOnCommit

Constraints: true | false

<flushUsingMappedMemory>

Whether flushes to the log file should be performed using a memory mapped file buffer.

Usage: Optional 
Default: false 
X DDL Override:
x.apps.<appname>.outboundMessageLogging.flushUsingMappedMemory
 
Constraints: true | false

(warning) There are known issues on some platforms, such as Windows, where using this setting can cause file locking issues. Therefore, enabling this setting should be tested on the target platform being used.

<autoRepair>

Whether or not an attempt will be made to automatically repair a non-empty log on open by truncating malformed entries at the end of the log that are part of incomplete transactions.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.outboundMessageLogging.autoRepair

Constraints: true | false

<storeRoot>

Specifies the root folder in which the logger's transaction log files are located.

Usage: Optional
Default: ${NVROOT}/rdat
X DDL Override: x.apps.<appname>.outboundMessageLogging.storeRoot
Constraints: a file path (possibly relative to the working directory).

If the expected value of NVROOT on your target deployment host is not on the device where you want to place your transaction logs (e.g. a slow or small disk), then consider making this a substitutable value such as:

<storeRoot>${myapp.storeRoot}</storeRoot>, so that you can customize its location at runtime appropriate to the environment in which you are launching.


<initialLogLength>

Sets the initial file size of the logger's transaction log in gigabytes.

Preallocating the transaction log can save costs in growing the file size over time, since the operation of growing a log file may actually result in a write of file data + the metadata operation of updating the file size, and may also benefit from allocating contiguous sectors on disk.

Usage: Optional
Default: 1 (1Gb)
X DDL Override:
x.apps.<appname>.outboundMessageLogging.initialLogLength

Constraints: positive float

The log size is specified in Gb. For an initial size of less than 1 Gb, specify a float value. For example, a value of .01 would result in a preallocated size of ~10Mb, which can be useful for test environments.

<zeroOutInitial>

Whether the log file should be explicitly zeroed out (to force commit all disk pages) if newly created.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.outboundMessageLogging.zeroOutInitial
Constraints: true | false

<pageSize>

Sets the page size for the disk in bytes. The logger will use this as a hint in several areas to optimize its operation.

Usage: Optional
Default: 8192
X DDL Override:
x.apps.<appname>.outboundMessageLogging.pageSize

Constraints: positive integer

<detachedWrite

Configures whether or not logger writes are done by the committing thread or passed off to a detached writer thread. Offloading to a writer thread can increase application throughput but requires an extra processor core for the logger thread.

enabled>

Can be set to true to enable detached logging for the logger.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.outboundMessageLogging.detachedWrite.enabled

Constraints: true | false

<queueDepth>

The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used.

See <queueDepth>

X DDL Override:
x.apps.<appname>.outboundMessageLogging.detachedWrite.queueDepth

Constraints: positive integer

<queueOfferStrategy>

Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used.

See <queueOfferStrategy> 

X DDL Override:
x.apps.<appname>.outboundMessageLogging.detachedWrite.queueOfferStrategy

Constraints: See QueueOfferStrategy

<queueWaitStrategy>

Controls the wait strategy for the queue. When not specified, the platform's default value for the multiplexer will be used.

See <queueWaitStrategy>  

X DDL Override:
x.apps.<appname>.outboundMessageLogging.detachedWrite.queueWaitStrategy

Constraints: See QueueWaitStrategy

<queueDrainerCpuAffinityMask>

Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical CPUs, or a square bracket enclosed comma separated list enumerating the logical CPUs.

See <queueDrainerCpuAffinityMask>   

X DDL Override: x.apps.<appname>.outboundMessageLogging.detachedWrite.queueDrainerCpuAffinityMask

<queueFeedMaxConcurrency>

Sets the maximum number of threads that will feed the multiplexer's queue.

See <queueFeedMaxConcurrency>     

Usage: Optional
Default: 16
X DDL Override:
x.apps.<appname>.outboundMessageLogging.detachedWrite.queueFeedMaxConcurrency
Constraints: positive integer

</detachedWrite>

</outboundMessageLogging>

End of application's outbound message logging properties.

<perTransactionStatsLogging


Configures per transaction stats binary logging for the engine.

A per transaction stats logger logs per transaction stats to a transaction log when capturePerTransactionStats is enabled for an AEP engine.

(warning) Per transaction stats logging is currently classified as an experimental feature.

SINCE 3.7

policy

The per transaction stats logging policy for the application.

See PerTransactionStatsLoggingPolicy

Usage: Required
Default: "Default"
X DDL Override:
x.apps.<appname>.perTransactionStatsLogging.policy

Constraints: Default | Off | UseDedicated

failurePolicy


The per transaction stats logging failure policy for the application.

See PerTransactionStatsLoggingFailurePolicy

Usage: Optional
Default: StopEngine
X DDL Override:
x.apps.<appname>.perTransactionStatsLogging.failurePolicy

Constraints: StopEngine | StopLogging

<autoFlushSize>

In the absence of explicit flushes (e.g. flushOnCommit) of written entries, the size at which flush is automatically triggered for queued writes. If not set, the platform default (8192) is used.

Usage: Optional
Default: 8192
X DDL Override:
x.apps.<appname>.perTransactionStatsLogging.autoFlushSize

Constraints: positive integer

<flushOnCommit>

Whether or not the logger should be flushed on commit. By default the logger buffers writes into an internal buffer and doesn't write to disk until that buffer has filled. Enabling flush on commit will flush the logger regardless of whether the buffer has filled.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.perTransactionStatsLogging.flushOnCommit

Constraints: true | false

<flushUsingMappedMemory>

Whether flushes to the log file should be performed using a memory mapped file buffer.

Usage: Optional 
Default: false 
X DDL Override:
x.apps.<appname>.perTransactionStatsLogging.flushUsingMappedMemory
 
Constraints: true | false

(warning) There are known issues on some platforms, such as Windows, where using this setting can cause file locking issues. Therefore, enabling this setting should be tested on the target platform being used.

<autoRepair>

Whether or not an attempt will be made to automatically repair a non-empty log on open by truncating malformed entries at the end of the log that are part of incomplete transactions.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.perTransactionStatsLogging.autoRepair

Constraints: true | false

<storeRoot>

Specifies the root folder in which the logger's transaction log files are located.

Usage: Optional
Default: ${NVROOT}/rdat
X DDL Override: x.apps.<appname>.perTransactionStatsLogging.storeRoot
Constraints: a file path (possibly relative to the working directory).

If the expected value of NVROOT on your target deployment host is not on the device where you want to place your transaction logs (e.g. a slow or small disk), then consider making this a substitutable value such as:

<storeRoot>${myapp.storeRoot}</storeRoot>, so that you can customize its location at runtime appropriate to the environment in which you are launching.


<initialLogLength>

Sets the initial file size of the logger's transaction log in gigabytes.

Preallocating the transaction log can save costs in growing the file size over time, since the operation of growing a log file may actually result in a write of file data + the metadata operation of updating the file size, and may also benefit from allocating contiguous sectors on disk.

Usage: Optional
Default: 1 (1Gb)
X DDL Override:
x.apps.<appname>.perTransactionStatsLogging.initialLogLength

Constraints: positive float

The log size is specified in Gb. For an initial size of less than 1 Gb, specify a float value. For example, a value of .01 would result in a preallocated size of ~10Mb, which can be useful for test environments.

<zeroOutInitial>

Whether the log file should be explicitly zeroed out (to force commit all disk pages) if newly created.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.perTransactionStatsLogging.zeroOutInitial
Constraints: true | false

<pageSize>

Sets the page size for the disk in bytes. The logger will use this as a hint in several areas to optimize its operation.

Usage: Optional
Default: 8192
X DDL Override:
x.apps.<appname>.perTransactionStatsLogging.pageSize

Constraints: positive integer

<detachedWrite

Configures whether or not logger writes are done by the committing thread or passed off to a detached writer thread. Offloading to a writer thread can increase application throughput but requires an extra processor core for the logger thread.

enabled>

Can be set to true to enable detached logging for the logger.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.perTransactionStatsLogging.detachedWrite.enabled

Constraints: true | false

<queueDepth>

The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used.

See <queueDepth>

X DDL Override:
x.apps.<appname>.perTransactionStatsLogging.detachedWrite.queueDepth

Constraints: positive integer

<queueOfferStrategy>

Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used.

See <queueOfferStrategy> 

X DDL Override:
x.apps.<appname>.perTransactionStatsLogging.detachedWrite.queueOfferStrategy

Constraints: See QueueOfferStrategy

<queueWaitStrategy>

Controls the wait strategy for the queue. When not specified, the platform's default value for the multiplexer will be used.

See <queueWaitStrategy>  

X DDL Override:
x.apps.<appname>.perTransactionStatsLogging.detachedWrite.queueWaitStrategy

Constraints: See QueueWaitStrategy

<queueDrainerCpuAffinityMask>

Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical CPUs, or a square bracket enclosed comma separated list enumerating the logical CPUs.

See <queueDrainerCpuAffinityMask>   

X DDL Override: x.apps.<appname>.perTransactionStatsLogging.detachedWrite.queueDrainerCpuAffinityMask

<queueFeedMaxConcurrency>

Sets the maximum number of threads that will feed the multiplexer's queue.

See <queueFeedMaxConcurrency>     

Usage: Optional
Default: 16
X DDL Override:
x.apps.<appname>.perTransactionStatsLogging.detachedWrite.queueFeedMaxConcurrency

Constraints: positive integer

</detachedWrite>

</perTransactionStatsLogging>

End of application's per transaction stats logging properties.

<startupExpectations

Specifies expectations that must be met on application startup.

Unmet startup expectations will prevent the application from starting, ensuring that operational conditions are met.

Usage: Optional

<role>

Checks the HA Role of the application on startup.

The role of an application is defined by the underlying role of its store. If the application has no store configured, its role will be 'Primary'.

See StoreBindingRoleExpectation  

Usage: Optional
X DDL Override: x.apps.<appname>.startupExpectations.role
Constraints: Primary | Backup | None

<logEmptiness>

Enforces log emptiness expectations at startup.

See LogEmptinessExpectation  

Usage: Optional
Default: None
X DDL Override: x.apps.<appname>.startupExpectations.logEmptiness
Constraints: None | Empty | NotEmpty

</startupExpectations>
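
For example, a backup instance that expects to join an already-running primary with existing state might declare (illustrative only):

    <startupExpectations>
        <role>Backup</role>
        <logEmptiness>NotEmpty</logEmptiness>
    </startupExpectations>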

 

<messageHandlingPolicy>

Specifies the application's message handling policy.

It is rare that an application would want to set anything other than 'Normal' for the message handling policy outside of a diagnostic or debug context.

See MessageHandlingPolicy   

Usage: Optional
Default: Normal
X DDL Override: x.apps.<appname>.messageHandlingPolicy
Constraints: Normal | Noop | Discard

<messagingStartFailPolicy>

SINCE 3.7

Specifies an engine's messaging start fail policy.

The messaging start operation establishes the bindings to the various buses that an engine is configured to bind to. This policy determines the conditions under which a messaging start operation is considered to have failed.

The NeverFail option causes a start operation to be considered successful as long as all bind attempts do not result in permanent exceptions (a permanent exception reported by a bind attempt causes the bind operation to not be retried while a non-permanent exception causes the bind attempt to be periodically retried). In other words, the NeverFail option causes a messaging start operation to be reported as successful as long as at least one bind attempt was successful or failed with a non-permanent exception. 

Using a policy that does not shut down the engine when a binding fails requires that the application be coded to handle message channels being down during message processing.

Bus implementations often have their own retry logic built into initial connection establishment, so it is worth bearing in mind that a failure to establish a connection may not be resolved in a timely fashion by subsequent retries made by the engine.

See MessagingStartFailPolicy   

Usage: Optional
Default: FailIfOneBindingFails
X DDL Override: x.apps.<appname>.messagingStartFailPolicy
Constraints: NeverFail | FailIfOneBindingFails | FailIfAllBindingsFail

(lightbulb) The default policy of FailIfOneBindingFails favors prompt failover to a backup instance which may have connectivity to the message bus in question.

(lightbulb) Applications that run without a backup and use only a single message bus, or that can function adequately with one message bus down, may want to use a policy of NeverFail.

<messageBusBindingFailPolicy>

SINCE 3.7

Specifies the policy that determines what action an engine takes when a message bus binding fails. 

Using a policy that does not shut down the engine when a binding fails requires that the application be coded to handle message channels being down during message processing.

Bus implementations often have their own retry logic built in to perform transparent reconnects, so it is worth bearing in mind that a failure to establish a connection may not be resolved in a timely fashion by subsequent retries made by the engine.

See MessageBusBindingFailPolicy   

Usage: Optional
Default: FailIfAnyBindingFails
X DDL Override: x.apps.<appname>.messageBusBindingFailPolicy
Constraints: Reconnect | FailIfAnyBindingFails

(lightbulb) The default policy of FailIfAnyBindingFails favors prompt failover to a backup instance which may have connectivity to the message bus in question. If the failure is caused by a failure of the messaging provider (rather than a networking failure local to the Primary instance), the backup instance of the application will use the configured messagingStartFailPolicy along with whatever message bus specific initial connection establishment logic is available in the binding.
(lightbulb) Applications that run without a backup may want to use a policy of Reconnect.
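
To make the tradeoff concrete, a hedged sketch for a standalone (no backup) application that prefers to ride out bus outages rather than fail, per the notes above (verify these values against your release):

    <messagingStartFailPolicy>FailIfAllBindingsFail</messagingStartFailPolicy>
    <messageBusBindingFailPolicy>Reconnect</messageBusBindingFailPolicy>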

<replicationPolicy>

Specifies the application's replication policy.

The replication policy controls how messages are replicated to backup members (or disk).

In most cases an application should specify a policy of Pipelined. Specifying the wrong value for this property can compromise recovery and cause message loss or duplication.

See ReplicationPolicy   

Usage: Optional
Default: Pipelined
X DDL Override:
x.apps.<appname>.replicationPolicy
Constraints: Pipelined | Asynchronous

<messageSendPolicy>

Specifies the application's AepEngine's outbound message send policy.

The message send policy controls at what point during transaction commit processing application-sent messages are transmitted out of the application.

In most cases, an application should specify a policy of ReplicateBeforeSend. Specifying the wrong value for this property can compromise recovery and cause message loss or duplication.

See MessageSendPolicy   

Usage: Optional
Default: ReplicateBeforeSend
X DDL Override:
x.apps.<appname>.messageSendPolicy
Constraints: ReplicateBeforeSend | SendBeforeReplicate | Noop

<inboundEventAcknowledgementPolicy>

Specifies an engine's inbound event acknowledgement policy.

The general contract of an AepEngine is that it cannot acknowledge upstream events (such as message events) in a transaction until such time as the transaction has been stabilized to the point that, in the event of a failure, the message will not be lost.

When the engine is not configured with a store, this property has no effect and events are acknowledged when the entire transaction is committed (e.g. when downstream acknowledgements are received).

See InboundEventAcknowledgementPolicy   

Usage: Optional
Default: Default
X DDL Override:
x.apps.<appname>.inboundEventAcknowledgementPolicy
Constraints: Default | OnSendStability | OnStoreStability

<appExceptionHandlingPolicy>

Sets an engine's application exception handling policy, which determines how the engine handles unchecked exceptions thrown by an application handler.

See AppExceptionHandlingPolicy

Usage: Optional
Default: RollbackAndStop
X DDL Override:
x.apps.<appname>.appExceptionHandlingPolicy
Constraints: RollbackAndStop | QuarantineAndStop | LogExceptionAndContinue

SINCE 3.4

<quarantineChannel>

Sets an engine's quarantine channel.

This is the channel on which quarantined messages are transmitted. It must take the form of channelName@busName.

This applies when the application throws an exception and the application exception handling policy is configured to QuarantineAndStop.

Usage: Optional
Default: null
X DDL Override:
x.apps.<appname>.quarantineChannel
Constraints: (.)+@(.)+

SINCE 3.4
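
For example, to quarantine poison messages instead of rolling back and stopping, an application might combine the two settings above (the channel and bus names shown are hypothetical):

    <appExceptionHandlingPolicy>QuarantineAndStop</appExceptionHandlingPolicy>
    <quarantineChannel>quarantine@mybus</quarantineChannel>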

<quarantineMessageKey>

Sets an engine's quarantine message key.

Used to explicitly set the message key to be associated with outbound quarantine messages. If the key is set using this parameter, the sending of the quarantine message will bypass the dynamic key resolution machinery.

Usage: Optional
Default: null
X DDL Override:
x.apps.<appname>.quarantineMessageKey
Constraints: valid channel key

SINCE 3.4

<messageSendExceptionHandlingPolicy>

The policy used by an application's AepEngine to determine how to handle unchecked exceptions thrown on message sends.

Note that this policy covers only the send of the message through the underlying bus binding during transaction commit. In particular, it does not cover:

  • Exceptions thrown from send calls made by the application during message processing. That policy must be set via the AppExceptionHandlingPolicy.
  • Exceptions that result from processing of acknowledgements for guaranteed message traffic from the message bus. That policy must be set via the MessageSendStabilityFailureHandlingPolicy

See MessageSendExceptionHandlingPolicy

Usage: Optional
Default: TreatAsStabilityFailure
X DDL Override:
x.apps.<appname>.messageSendExceptionHandlingPolicy
Constraints: TreatAsStabilityFailure | LogExceptionAndContinue

<messageSendStabilityFailureHandlingPolicy>

The policy used by an application's AepEngine to determine how to handle send stability failure notifications.

Note that this policy covers only send stability failures received from the underlying bus binding. In particular, it does not cover:

  • Exceptions thrown from send calls made by the application during message processing. That policy must be set via the AppExceptionHandlingPolicy.
  • Exceptions thrown by the underlying bus binding for send calls made during transaction commit. That policy must be set via the MessageSendExceptionHandlingPolicy.

See MessageSendStabilityFailureHandlingPolicy

Usage: Optional
Default: StopEngine
X DDL Override:
x.apps.<appname>.messageSendStabilityFailureHandlingPolicy
Constraints: StopEngine | LogExceptionAndContinue

SINCE 3.12.6

<adaptiveCommitBatchCeiling>

Sets the application's AepEngine's adaptive commit batch ceiling.

The adaptive commit batch ceiling controls the maximum number of inbound messages grouped into a single transaction, which can improve throughput.

A value less than or equal to 1 disables adaptive commit.

Usage: Optional
Default: 0
X DDL Override:
x.apps.<appname>.adaptiveCommitBatchCeiling
Constraints: non negative integer

Adaptive commit cannot be used if transaction commit suspension is enabled. 

Auto Tuning

When nv.optimizefor=throughput, this value is set to 64 if not explicitly set to a positive value.

<enableTransactionCommitSuspension>

Sets whether transaction commit suspension is enabled or disabled.

Transaction commit suspension is an experimental feature that allows an application to temporarily suspend commit of a transaction.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.enableTransactionCommitSuspension
Constraints: true | false

TransactionCommitSuspension is currently an experimental feature. It is not supported for production use.

It cannot be used with:

  • clustered engines.
  • engines using StateReplication
  • engines using adaptive commit batching.

<dispatchTransactionStageEvents>

Sets whether transaction stage events are emitted by the application's AepEngine.

Controls whether or not AepTransactionStageEvents are emitted by the application's engine. An AepTransactionStageEvent is used to notify an application as the transaction commit executes through its various phases. The transaction stages are:

  • Start
  • StoreCommitComplete
  • SendCommitComplete
  • Complete

The transaction stage is present in the dispatched AepTransactionStageEvent. The AepTransactionStageEvent dispatched in the Start stage can be used by an application to suspend the transaction. It is illegal to suspend a transaction in any stage other than Start.

Transaction stage events are only dispatched on the primary cluster member.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.dispatchTransactionStageEvents
Constraints: true | false


SINCE 3.15

<replicateSolicitedSends>

Sets whether or not to replicate solicited sends to a backup.


This parameter governs whether solicited sends (sends triggered by the processing of inbound messages) performed on clustered State Replication engines will be replicated or not. This setting has no effect on Event Sourced engines or engines that are not clustered.

 

Usage: Optional
Default: true
X DDL Override:
x.apps.<appname>.replicateSolicitedSends
Constraints: true | false

 

This parameter should be changed with extreme caution. Disabling replication of outbound messages will likely result in a loss of outbound messages in the event of a failover.

<replicateUnsolicitedSends>

Set whether to replicate unsolicited sends.

This parameter governs whether unsolicited sends performed on clustered engines will be replicated or not. This setting has no effect on engines that are not clustered. An unsolicited send is a send done outside of an event handler via an AepEngine.send method. Because unsolicited sends aren't part of an engine's transactional message processing, they are not considered to be part of the application's HA state. To treat unsolicited sends as part of an application's HA state, see sequenceUnsolicitedWithSolicitedSends.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.replicateUnsolicitedSends
Constraints: true | false

<sequenceUnsolicitedSends>

Set whether to sequence unsolicited sends.

By default, unsolicited sends are sent with a sequence number of 0. Specifying true in this parameter will cause sequence numbers to also be attached to unsolicited sends.

Usage: Optional
Default: false
X DDL Override:
 x.apps.<appname>.sequenceUnsolicitedSends
Constraints: true | false

Be careful about attaching sequence numbers to unsolicited sends, especially if the application will perform unsolicited and solicited sends concurrently: messages can go out on the wire in a different order from that in which sequence numbers were assigned, causing legitimate messages to be dropped due to incorrect duplicate detection. For such applications, use sequenceUnsolicitedWithSolicitedSends instead to ensure that unsolicited sends are not only sequenced but also correctly sequenced vis-a-vis solicited sends.

<sequenceUnsolicitedWithSolicitedSends>

Set whether to sequence unsolicited sends with solicited sends.

This parameter is applicable for applications that perform concurrent solicited and unsolicited sends and want the unsolicited sends to be sequenced. Setting this parameter ensures that unsolicited and solicited sends are sequenced on the wire in the same order in which the sequence numbers were attached to the messages. In effect, this causes an unsolicited send to be injected into the underlying engine's transactional event processing stream, promoting it to a transaction event.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.sequenceUnsolicitedWithSolicitedSends
Constraints: true | false

SINCE 3.9
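
For example, a gateway-style application that performs both solicited and unsolicited sends and wants its unsolicited sends treated as part of its HA state might combine (a sketch, not a recommendation):

    <replicateUnsolicitedSends>true</replicateUnsolicitedSends>
    <sequenceUnsolicitedWithSolicitedSends>true</sequenceUnsolicitedWithSolicitedSends>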

<dispatchSendStabilityEvents>

Set whether or not the engine dispatches AepSendStabilityEvent for unsolicited sends.

When an application that sends messages through the engine from outside of a message handler (an unsolicited send) would like to receive notification when the send has been stabilized, this setting can be enabled to instruct the engine to dispatch an AepSendStabilityEvent when the engine can provide guarantees that the message will be delivered. This functionality is useful for gateway applications that input messages into the system from an external source.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.dispatchSendStabilityEvents
Constraints: true | false

<disposeOnSend>

Set whether or not the engine disposes sent messages.

If set, then the AepEngine.sendMessage method will dispose a message after it has been sent. This means that the caller must not hold onto or reference a message beyond the call to the send message method. If unset, then a zero garbage application must call dispose on each sent message to ensure it is returned to its pool.

Usage: Optional
Default: true
X DDL Override:
x.apps.<appname>.disposeOnSend
Constraints: true | false

<clusterHeartbeatInterval>

Sets the cluster heartbeat interval for the application in milliseconds.

When replicating messages and state to its cluster peers, the AepEngine piggybacks internal state, such as outbound message acknowledgements and other internal control data. Setting a cluster heartbeat interval can be useful for low throughput applications to keep the backup in closer sync with the primary. The most common use case for setting this is to bound the window of in-doubt messages outstanding in backup peers.

A value of 0 (default) disables the cluster heartbeat interval.

Usage: Optional
Default: 0
X DDL Override:
x.apps.<appname>.clusterHeartbeatInterval
Constraints: non negative integer

<administrative>

Marks the application as an 'administrative' application.

Marking an application as 'administrative' excludes it from being included in latency or throughput optimizations that demand system resources.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.administrative
Constraints: true | false

<stuckAlertEventThreshold>

Sets the threshold, in seconds, after which an AepStuckAlertEvent is dispatched to the application's IAepAsynchronousEventHandler.  

An AepStuckAlertEvent event is intended to alert that the engine's transaction pipeline is "stuck" i.e. there are one or more transaction commits in the pipeline and the event multiplexer thread is not processing any events. For example, the event multiplexer thread could be flow controlled on the replication TCP connection due to an issue in the backup or could be spinning in a loop in the business logic due to a bug in a business logic handler.

See Stuck Engine Alerts for more information.

Usage: Optional
Default: 0
X DDL Override:
x.apps.<appname>.stuckAlertEventThreshold
Constraints: non negative integer

<performDuplicateChecking>

Set whether the application's engine should perform duplicate checking.

When duplicate checking is enabled, received messages that are deemed duplicates are discarded by the application's engine. A message is considered to be a duplicate under the following circumstances:

  • The message has a sequence number that is greater than 1.
  • The sequence number is less than the last received sequence number for the flowId and senderId in the message.
    • A sequence number of 1 is interpreted as a restart of a messaging stream.
    • A value of 0 or less means the message is not sequenced and will be ignored for duplicate checks.

Sequence id assignment by an AepEngine:

  • MessageSender: assigned by the AepEngine using the hash code of the engine name.
  • MessageFlow: left unset as 0 (default). At present it is not recommended for applications to set a flow.
  • MessageSequenceNumber: set for each send from an event handler by incrementing a sequence number, or for each unsolicited send (outside of an event handler) when sequenceUnsolicitedSends is true.

SINCE 3.4

Usage: Optional
Default: true
X DDL Override:
x.apps.<appname>.performDuplicateChecking
Constraints: boolean

<setOutboundSequenceNumbers>

Controls whether sequence numbers are set on outbound messages.

The setting of sequence numbers in outbound messages comes at a very slight performance penalty that may not be tolerated by ultra low latency applications. This property can be used to switch off the setting of sequence numbers in outbound messages for such performance critical applications. However, note that this effectively disables checking for duplicates for messages sent by such a configured engine by downstream apps.

Usage: Optional
Default: true
X DDL Override:
x.apps.<appname>.setOutboundSequenceNumbers
Constraints: boolean

<syncInjectedMessages>

Sets whether MessageView.sync() is called during AepEngine.injectMessage.

When using Event Sourcing or inbound message logging, injected messages may be replicated and/or persisted to disk, which requires that the contents of the message be synced. Doing the sync() during the inject call can save CPU cycles on the engine's processor thread.

SINCE 3.7

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.syncInjectedMessages
Constraints: boolean

<stopOnJVMShutdown>

Sets whether the engine will automatically stop when the JVM shuts down.

By default, the AEP engine registers a JVM shutdown hook through which it automatically stops when the JVM shuts down. This property allows that behavior to be disabled: if set to false, the engine will not automatically stop when the JVM shuts down.

 SINCE 3.16.20

 

Usage: Optional
Default: true
X DDL Override:
x.apps.<appname>.stopOnJVMShutdown
Constraints: boolean

<performMidstreamInitializationValidation>

Set whether the engine checks that initial transactions are not missing during recovery or replication. This parameter is only applicable to event sourced engines.

(warning) It is not recommended that you disable this check unless advised to do so by Neeve Support.

Usage: Optional
Default: true
X DDL Override:
x.apps.<appname>.performMidstreamInitializationValidation
Constraints: true | false
SINCE 3.4

<enableSequenceNumberTrace>

Enables diagnostic trace logging related to message sequencing. Enabling this trace can assist in diagnosing issues related to lost, duplicate or out-of-order events. When enabled, trace will be emitted at debug level (TRACE level for SLF4J) to the logger named 'nv.aep.sno'.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.enableSequenceNumberTrace
Constraints: true | false
SINCE 3.4

<enableEventTrace>

Enables diagnostic trace logging of events received and dispatched by an engine. Enabling this trace is useful in determining the sequence of events processed by the engine. When enabled, trace will be emitted at debug level (TRACE level for SLF4J) to the logger named 'nv.aep.event'.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.enableEventTrace
Constraints: true | false
SINCE 3.4

<enableTransactionTrace>

Enables diagnostic trace logging related to transactions processed by an engine. Enabling this trace is useful in determining the relative sequencing and timing of transaction commits as the commits are executed by the engine. When enabled, trace will be emitted at debug level (TRACE level for SLF4J) to the logger named 'nv.aep.txn'.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.enableTransactionTrace
Constraints: true | false
SINCE 3.4

<enableScheduleTrace>

Enable diagnostic trace logging related to schedules (timers) managed by an engine. Enabling this trace is useful for diagnosing issues related to engine timer execution and scheduling. When enabled, trace will be emitted at debug level (TRACE level for SLF4J) to the logger named 'nv.aep.sched'.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.enableScheduleTrace
Constraints: true | false
SINCE 3.4

<enableAlertTrace>

Enable diagnostic trace logging related to events emitted by the AepEngine that implement IAlertEvent. When enabled, trace will be emitted at warning level (WARN level for SLF4J) to the logger named 'nv.aep.alert'.

The engine may suppress logging of certain alert event types in cases where it makes sense to do so. For example, the AepEngineStoppedEvent implements IAlertEvent, but when this event is dispatched during a clean shutdown (no error message), it isn't deemed an Alert. Applications may disable the engine alert and provide their own IAlertEvent EventHandler for finer grained control over alert logging.

Usage: Optional
Default: true
X DDL Override:
x.apps.<appname>.enableAlertTrace
Constraints: true | false
SINCE 3.11.61

<enableMessageTrace>

Enables diagnostic trace logging for messages as they pass through the engine. Enabling this trace is useful for tracing the contents of messages at different stages of execution within the engine. When enabled, trace will be emitted at debug level (TRACE level for SLF4J) to the logger named 'nv.aep.msg'.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.enableMessageTrace
Constraints: true | false
SINCE 3.4

<messageTraceInJson>

Sets whether messages are traced in JSON or toString format. When enabled, messages will be printed in JSON format; otherwise the message will be traced using its toString method. This parameter is only applicable if message trace is enabled.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.messageTraceInJson
Constraints: true | false
SINCE 3.4

<messageTraceJsonStyle>

Sets the styling for JSON formatted message trace. This parameter is only applicable if message trace in JSON is enabled.

Valid options are:

  • Default: Specifies that the default Jackson pretty printer should be used (DefaultPrettyPrinter).
  • PrettyPrint: Same as Default.
  • Minimal: A minimal single line pretty printer.
  • SingleLine: Same as Minimal, but with a space after the colon separating a field name from its value (e.g. "intField": 1 vs "intField":1).

Usage: Optional
Default: Default
X DDL Override:
x.apps.<appname>.messageTraceJsonStyle
Constraints: Default | PrettyPrint | Minimal | SingleLine
SINCE 3.4

<messageTraceFilterUnsetFields>

Sets whether unset fields are filtered for JSON formatted objects when JSON message tracing is enabled.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.messageTraceFilterUnsetFields
Constraints: true | false

<messageTraceMetadataDisplayPolicy>

Sets whether metadata, payload or both will be traced when message tracing is enabled.

Valid Options are:

  • On: Specifies that metadata should be displayed along with the message payload.
  • Off: Specifies that only the message payload should be displayed.
  • Only: Specifies that only the metadata should be displayed.

Usage: Optional
Default: On
X DDL Override:
x.apps.<appname>.messageTraceMetadataDisplayPolicy
Constraints: On | Off | Only
SINCE 3.4
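
Putting the message trace settings together, the following sketch (hypothetical app name; values are illustrative) traces message payloads as single line JSON with unset fields filtered out and metadata omitted:

<app name="my-app">
    <enableMessageTrace>true</enableMessageTrace>
    <messageTraceInJson>true</messageTraceInJson>
    <messageTraceJsonStyle>SingleLine</messageTraceJsonStyle>
    <messageTraceFilterUnsetFields>true</messageTraceFilterUnsetFields>
    <messageTraceMetadataDisplayPolicy>Off</messageTraceMetadataDisplayPolicy>
</app>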

<maxEnvironmentProviders>

Sets the maximum number of environment providers that can be registered with the engine.

Usage: Optional
Default: 256
X DDL Override:
x.apps.<appname>.maxEnvironmentProviders
Constraints: positive integer.
SINCE 3.4

<enableSendCommitCompleteSequenceAlerts>

Set whether or not to enable out of order send commit completion detection. When enabled, the engine will check that stability events (acknowledgements) from the underlying messaging provider are received in an ordered fashion. If acknowledgements are received out of order, then the engine will dispatch appropriate alerts.

(warning) This check can introduce overhead and is only meaningful in situations in which the message bus is expected to return acknowledgements in an ordered fashion. A raised alert is not necessarily an indicator of a problem, as the engine will ensure that transactions are completed in order regardless. In general, it is not recommended that this property be enabled unless diagnosing an issue where a bus binding appears not to be acknowledging messages.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.enableSendCommitCompleteSequenceAlerts
Constraints: true | false
SINCE 3.4

<captureMessageTypeStats>

Sets whether statistics are additionally recorded on a per message type basis. Collection of message type specific statistics records counts and rates per type, as well as message processing time statistics for each message type, which can be useful in finding particular handlers that have high execution times.

(warning) Enabling message type stats introduces a fair amount of overhead. Before running with this in production be sure to gauge its performance impact for your application to determine whether or not the overhead is worth the additional visibility.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.captureMessageTypeStats
Constraints: true | false
SINCE 3.4

<messageTypeStatsLatenciesToCapture>

Controls which latency statistics are tracked on a per message type basis. This property is specified as a comma separated list of values.

Valid values include:

  • all: Indicates that all available per message type latency stats should be collected.
  • none: Indicates that no message type latency stats should be collected.
  • c2o: Indicates that create to offer latencies should be captured.
  • o2p: Indicates that offer to poll (input queueing) time should be captured.
  • mfilt: Indicates that time spent in application message filters should be captured.
  • mpproc: Indicates that time spent in the engine prior to message dispatch should be captured.
  • mproc: Indicates that time spent in application message handlers should be captured.

The values 'all' or 'none' may not be combined with other values.

This value only applies when captureMessageTypeStats is true. When not specified the value defaults to all.

(warning) Each latency stat results in a fair amount of overhead - every latency stat results in a large array being created to hold the data points, and for a large number of message types this can add up quickly in terms of putting pressure on the processor cache. Consider testing the performance impact on your application under load with and without per message type latency stats enabled. Consider also that each enabled statistic is reported in heartbeats, which can increase their size significantly.

Usage: Optional
Default: all
X DDL Override:
x.apps.<appname>.messageTypeStatsLatenciesToCapture
Constraints: see above.

SINCE 3.11
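
As an illustration, the sketch below (hypothetical app name) enables per message type statistics but restricts latency capture to input queueing and handler time to limit overhead:

<app name="my-app">
    <captureMessageTypeStats>true</captureMessageTypeStats>
    <!-- capture only offer to poll and message processing latencies -->
    <messageTypeStatsLatenciesToCapture>o2p,mproc</messageTypeStatsLatenciesToCapture>
</app>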

<captureTransactionLatencyStats>

Sets whether or not the engine records transaction latency stats.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.captureTransactionLatencyStats
Constraints: true | false
SINCE 3.4

<capturePerTransactionStats>

Sets whether or not the engine records per transaction stats.

Unlike captureTransactionLatencyStats, which records histogram based latencies, this setting is much more expensive in that it records and emits individual timestamps for operations that occurred in the transaction, including store commit timestamps and individual message timestamps.

In most cases, capturing this level of detail is not worth the overhead it incurs, as the histogram based latencies captured via captureTransactionLatencyStats are usually sufficient for inferring timings within a given sampling interval. However, in cases where it is critical to determine the exact timings of transaction processing to better understand product behavior, it can be useful.

If used in production (which is not recommended), applications should undergo stress testing under maximum peak load to determine the impact of enabling collection of per transaction stats.

You must also configure perTransactionStatsLogging to write the captured stats to a transaction log on disk. At this time per transaction stats are not emitted via trace loggers or over a messaging bus.

 

(warning) Per transaction stats collection is currently classified as an experimental feature. It is subject to change without notice.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.capturePerTransactionStats
Constraints: true | false
SINCE 3.7

<captureEventLatencyStats>

Sets whether or not the engine records event latency stats (such as the amount of time events spend in the engine's input queue).

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.captureEventLatencyStats
Constraints: true | false
SINCE 3.4

<replicateInParallel>

Enables parallel replication. When parallel replication is enabled, the engine replicates inbound messages to the cluster backups in parallel with the processing of the message by the message handler. This parameter only applies to Event Sourced engines.

This parameter is particularly useful for Event Sourced applications that have higher message processing times because in this case it may be possible to replicate the message prior to completion of the message handler.

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.replicateInParallel
Constraints: true | false
SINCE 3.4

<preserveChannelJoinsOnStop>

Sets whether or not to preserve joined channels when the engine stops normally.

By default, when an engine is stopped without an error, bus channels that were 'joined' will be 'left', meaning that any subscriptions or interests created by the message bus will be unsubscribed or unregistered. Setting this value to true causes the engine to preserve channel interest even on a clean shutdown.

Note that this property has no effect for the case where an engine shuts down with an error (e.g. AepEngine.stop(Exception) with a non-null cause). In this case channel joins are left intact, allowing a backup to take over.

Note that this behavior can be overridden programmatically on a case by case basis by a handler for AepEngineStoppingEvent by setting AepEngineStoppingEvent.setPreserveChannelJoins(boolean).

Usage: Optional
Default: false
X DDL Override:
x.apps.<appname>.preserveChannelJoinsOnStop
Constraints: true | false

SINCE 3.4

<setSupportMetadata>

Enables setting of support related metadata on inbound and outbound messages.

Support related metadata is not critical to the operation of an engine and is set by the engine in inbound and outbound messages to aid in support and troubleshooting activities via the metadata being persisted in the application's transaction log.

However, setting support related metadata comes at a slight performance penalty (a couple of microseconds) that may not be tolerable for ultra low latency applications. This property can be used to switch off the setting of support related metadata in inbound and outbound messages for such performance critical applications.

Usage: Optional
Default: true
X DDL Override:
x.apps.<appname>.setSupportMetadata
Constraints: true | false

SINCE 3.5

</app>

</apps>

XVM Configuration

The 'xvms' section defines the Talon XVMs used globally in the deployment.

A Talon XVM (also known as a Talon Server) hosts one or more applications and controls each application's lifecycle. It implements monitoring machinery for the apps it manages: passive monitoring in the form of periodic heartbeats containing statistics, trace output, alerts and notifications, as well as active monitoring that exposes command and control facilities allowing administrative applications to execute commands against the XVM and its applications.

In version 3.8, the <xvms> and <xvm> elements were introduced to replace the <servers> and <server> elements which are now deprecated. Developers are advised to update their configuration to use <xvms> and <xvm> as soon as possible as support for <servers> and <server> elements will be dropped in a future release.

  • Mixing of <server(s)> and <xvm(s)> within the same DDL is not supported and will result in an exception. Users should take particular care when composing DDL from multiple locations to ensure that all DDL is using the same elements.
  • When using the <servers> element, the DDL override values defined below honor the old syntax, namely x.servers.*

Sample XML Snippet

The below configures two xvms named 'forwarder-1' and 'forwarder-2' that both host the 'forwarder' app. Launching both xvms would start two instances of the 'forwarder' app that would form a clustered instance of the forwarder app.
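
A minimal sketch of such a configuration (the 'forwarder' app is assumed to be defined in the <apps> section):

<xvms>
    <xvm name="forwarder-1">
        <apps>
            <app name="forwarder" autostart="true"/>
        </apps>
    </xvm>
    <xvm name="forwarder-2">
        <apps>
            <app name="forwarder" autostart="true"/>
        </apps>
    </xvm>
</xvms>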

Settings

ElementDescription

<xvms>

Defines the Talon XVMs used globally in the deployment. A 'Talon XVM' can also be referred to as a 'Talon Server': it not only acts as a runtime container for X applications, but also acts as a server that accepts management and direct messaging connections. An xvm hosts one or more applications, controls each application's lifecycle and implements network connection acceptance machinery to (1) allow management clients to connect to the xvm to monitor and administer the applications that it hosts and (2) accept direct connections for apps that are configured to use the Talon 'direct' bus provider.

<templates>

Holds xvm templates

Usage: Optional

<template

Defines an xvm template.

Template xvms cannot be used at runtime; they serve only as templates for actual xvms' configuration.

name

The name of the template.

Usage: Required
X DDL Override: Not overridable (key)

*

Any xvm attribute defined below except for 'template'

X DDL Override: x.xvms.templates.<templatename>.*

>
 
*

Any xvm element described below.

X DDL Override: x.xvms.templates.<templatename>.*

</template>
 

</templates>

 

<xvm

Defines and configures an xvm.
name>

Defines the xvm name which must be unique within the xvm's configuration and discovery domain.

A common practice is to use an xvm name that is indicative of the application or set of applications that it is hosting and the normal application role. For example, if the xvm will host shard 1 of an order processing app that normally assumes the primary role, a name such as "order-processing-p-1" might be used. In this fashion, an instance of an application can be uniquely addressed within a discovery domain as a combination of the xvm name and app name.

It is a best practice not to use a name with spaces, as the name is used in many contexts, such as scripting, where it is operationally simpler to avoid spaces and special characters.

Usage: Required

X DDL Override: Not overridable (key)
Constraints: String
 
enabled

If set to false, this xvm will be ignored and not saved to the configuration repository. This can be used to disable an xvm at runtime. However, note that if a persistent configuration repository is in use, this will not cause a previously configured xvm to be deleted.

Usage: Optional
Default: true
X DDL Override:
x.xvms.<xvmname>.enabled
Constraints: true | false

discoveryDescriptor

Defines the xvm's discovery descriptor.

Sets the discovery descriptor which is used to load and configure the discovery provider used to advertise this xvm and its acceptors. Setting this property is only useful when the intent is to separate out xvm discovery from cluster discovery. When not set, the default discovery descriptor set in the global environment will be used (which is simpler and sufficient for most use cases).

xvm discovery is important for administrative applications to be able to locate running applications to allow monitoring and control.

An xvm must have a unique name within its discovery domain.

Usage: Optional
Default: <unset> (uses default discovery descriptor from environment)
X DDL Override:
x.xvms.<xvmname>.discoveryDescriptor
Constraints: String

When the p2p message bus binding is used, the discovery descriptor for the xvm must be part of the same discovery domain as the default discovery descriptor configured via nv.discovery.descriptor because the xvm itself facilitates advertising and accepting point to point connections.

(warning) The <discovery> element described below can be used as an alternative to this element. Only one of <discovery> or <discoveryDescriptor> may be used to configure xvm discovery. When no discovery information is provided, the value set in the environment for nv.discovery.descriptor is used.

<env>
...
</env>

XVM scoped environment variables.

Any XML element with text content will be treated as a property by concatenating its parent's node names with '.' separators. Env values scoped to the XVM override those defined in the global <env> section, and are applied only to the xvm for which they are defined.

If the value is already defined in the set of ddl overrides passed into the parser, the value in XML will be overridden.

Unlike other DDL values mentioned in this document, overrides for xvm scoped <env> values do not need to be prefixed with 'x.'; the values passed will directly override the values specified in the <env> section without a prefix. For example, given an xvm named 'processor-1' that defines CLUSTERING_IFADDRESS in its <env> section:

-DCLUSTERING_IFADDRESS=192.168.1.1 should be used to override the value for CLUSTERING_IFADDRESS rather than -Dx.xvms.processor-1.env.CLUSTERING_IFADDRESS.
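
A sketch of such an xvm scoped <env> section (the element nesting shown is illustrative):

<xvm name="processor-1">
    <env>
        <CLUSTERING_IFADDRESS>192.168.1.2</CLUSTERING_IFADDRESS>
    </env>
    ...
</xvm>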

<discovery>

Configures the custom discovery descriptor for the xvm in decomposed form.

The discovery descriptor is composed as <provider>://<address>[:<port>][&prop1=val1][&propN=valN]

(warning) This element can be used as an alternative to the discoveryDescriptor element. Only one of <discovery> or <discoveryDescriptor> may be used to configure xvm discovery. When no discovery information is provided, the value set in the environment for nv.discovery.descriptor is used.

<provider>

The discovery provider's name which is used to locate the discovery implementation

Usage: Required
Default: unset
X DDL Override:
x.xvms.<xvmname>.discovery.provider

Constraints: String

<address>

The discovery provider's address

Usage: Required
Default: unset
X DDL Override:
x.xvms.<xvmname>.discovery.address

Constraints: String

<port>

The discovery provider's port

Usage: Optional
Default: unset
X DDL Override:
x.xvms.<xvmname>.discovery.port

Constraints: positive short

<properties>

</properties>

Lists the discovery descriptor parameters as key=value pairs:


Usage: Optional
Default: unset
X DDL Override:
x.xvms.<xvmname>.discovery.properties.<propName>

Constraints: non empty

</discovery>
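
For example, a decomposed discovery configuration might look like the following sketch (the provider name, address, port and property are illustrative placeholders):

<discovery>
    <provider>multicast</provider>
    <address>224.0.1.200</address>
    <port>4090</port>
    <properties>
        <initWaitTime>2000</initWaitTime>
    </properties>
</discovery>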
 
<group>

Defines the xvm's application group.

The group is used to logically organize a set of xvms that operate in concert.

Usage: Optional
Default: <none>
X DDL Override:
x.xvms.<xvmname>.group
Constraints: String

<clientHandShakeTimeout>

Sets the timeout (in seconds) allowed for connecting clients to complete the xvm connection handshake.

Usage: Optional
Default: 10
X DDL Override:
x.xvms.<xvmname>.clientHandShakeTimeout
Constraints: positive integer

<autoStopOnLastAppStop>
SINCE 3.7

Configures whether or not the xvm will automatically stop after the last app is stopped.

 

Disabling auto stop on last app stop leaves the xvm running and manageable even when all applications have stopped. The xvm's internal admin app does not count as a running app. 

Usage: Optional
Default: true
X DDL Override:
x.xvms.<xvmname>.autoStopOnLastAppStop
Constraints: true | false

<adminClientOutputQueueCapacity>

SINCE 3.7

Sets the capacity (in MB) of the xvm controller admin clients' output queues. Outbound packets are dropped once the queue size reaches or exceeds the configured capacity.

Usage: Optional
Default: 10.0
X DDL Override:
x.xvms.<xvmname>.adminClientOutputQueueCapacity
Constraints: positive float

<apps>

Configures the apps hosted by this xvm. 

Multiple xvms can host the same application. Each clustered application will discover its peers and form its own independent cluster. In other words, xvms don't cluster, but their applications do.

<app
Configures an app hosted by this xvm.

autostart

Sets whether the xvm automatically starts the app when the xvm is started.

If set to false, the application will be loaded at xvm start, but it will not be started without an administrative command.

Usage: Optional
Default: true
X DDL Override:
x.xvms.<xvmname>.apps.app.<appname>.autostart
Constraints: true|false

enabled

If set to false, the app will be ignored and not saved to the configuration repository. This can be used to suppress addition of an application at runtime.

However, note that if a persistent configuration repository is in use, this will not cause a previously configured app for this xvm to be removed.

Usage: Optional
Default: <none>
X DDL Override:
x.xvms.<xvmname>.apps.app.<appname>.enabled
Constraints: true|false

name>

The name of the application as defined in the 'apps' element.

Usage: Required
X DDL Override: Not overridable (key)
Constraints: String 

</app>
 
</apps>
 
<acceptors>

Configures this XVM's advertised server acceptors. By default, each xvm will create an acceptor on 'tcp://0.0.0.0:0' to listen on all interfaces at an auto assigned port.

If you are running in an environment where only specific ports are opened for traffic then you can set this to a specific network interface address.

XVM acceptors are advertised over discovery and are used by:

  • Administrative clients (tools like Robin and Lumino) to connect to and manage the XVM.
  • The direct message bus bindings to allow establishment of point to point connections between applications.

Only the first acceptor defined in this section is advertised for the above purposes. Additional acceptors may be configured, but they are not currently used by Talon.

(lightbulb) If any acceptors are explicitly added to the server, the default acceptor is removed and replaced with the first configured acceptor.

<acceptor
Defines and configures an acceptor.

descriptor

The acceptor descriptor.

Acceptor descriptors are of the form [protocol]://[host]:[port], e.g. 'tcp://myhost:12000', and are used to specify the network protocol, interface and port through which to accept inbound network connection requests. 'tcp' is the only currently supported protocol, [host] can be the host name or IP address of a specified interface on which this xvm will be running, and [port] is the protocol specific server port on which the server will listen for inbound connections.

Usage: Required
X DDL Override: <none>
Constraints: String

enabled>

If set to false, the acceptor will be ignored and not saved to the configuration repository. This can be used to suppress addition of an acceptor at runtime.

However, note that if a persistent configuration repository is in use, this will not cause a previously configured acceptor for this xvm to be removed.

Usage: Optional
Default: <none>
X DDL Override: <none>
Constraints: true|false

<linkParams>

A comma separated set of key=value pairs that serve as additional configuration parameters for the network connections accepted by this acceptor.

Usage: Optional
Default: <none>
X DDL Override: <none>
Constraints: String, a comma separated list of key=value pairs.

</acceptor>
 
</acceptors>
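
A sketch of an explicit acceptor configuration (the host, port and link parameter are illustrative placeholders):

<acceptors>
    <acceptor descriptor="tcp://myhost:12000" enabled="true">
        <linkParams>tcpnodelay=true</linkParams>
    </acceptor>
</acceptors>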
 
<multithreading
 
enabled>

Sets whether the server should operate in multi-threaded mode.

In most cases this value should be set to true. Setting this value to false will set the IO thread count to 1 regardless of the number of IO threads listed for the server.

Usage: Required
X DDL Override:
x.xvms.<xvmname>.multithreading.enabled
Constraints: true|false

<ioThreads>
Configures IO threads for the xvm.

<ioThread

Defines and configures an IOThread.

id

The thread id.

IO Thread ids are zero based and must be defined in monotonically increasing order.

Usage: Required
X DDL Override: Not overridable (key)
Constraints: non-negative integer

affinity

Sets the cpu affinity mask for the thread.

The affinity string can either be a long that represents a mask of logical cpu or a square bracket enclosed comma separated list enumerating the logical cpus.

For example, specifying "1" or "[0]" indicates Core 0. "3" or "[0, 1]" would indicate Core 0 or Core 1. Specifying a value of "0" indicates that the thread should be affinitized to the platform's default cpu, and omitting this value indicates that the thread should be affinitized according to the platform's default policy for the multiplexer.

See UtlThread.setCpuAffinityMask

Usage: Optional
Default: 0 (no affinity / default)
X DDL Override:
x.xvms.<xvmname>.multithreading.ioThreads.ioThread.<id>.affinity
Constraints: String per above

enabled

Sets the thread as enabled or disabled.

This can be used at runtime to disable an IO Thread. Disabling an IO thread has the effect of setting all threads with a higher id to enabled=false.

Note that if a persistent configuration repository is in use, this will not cause previously configured IO threads for this xvm to be removed.

Usage: Required
X DDL Override:
x.xvms.<xvmname>.multithreading.ioThreads.ioThread.<id>.enabled
Constraints: true|false

</ioThreads>
 
</multithreading>
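
A sketch of a multithreading configuration that pins two IO threads to specific cores (the affinity values are illustrative):

<multithreading enabled="true">
    <ioThreads>
        <ioThread id="0" affinity="[2]" enabled="true"/>
        <ioThread id="1" affinity="[3]" enabled="true"/>
    </ioThreads>
</multithreading>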
 
<heartbeats>
Configuration for the XVM's stats thread which periodically emits heartbeats containing stats.
enabled

Indicates whether xvm heartbeats are enabled. 

Usage: Required
Default: "false"
X DDL Override:
x.xvms.<xvmname>.heartbeats.enabled
Constraints: true | false

interval>

Indicates the xvm heartbeat interval in seconds.

Usage: Optional
Default: "30"
X DDL Override:
x.xvms.<xvmname>.heartbeats.interval
Constraints: positive int

<collectSeriesStats>

Configures whether series stats are collected in heartbeats.
Latency statistics and other series type statistics are more cpu and bandwidth intensive to collect and emit. This property can be set to false to disable their collection.

Usage: Optional
Default: "true"
X DDL Override:
x.xvms.<xvmname>.heartbeats.collectSeriesStats
Constraints: true | false

<collectSeriesDatapoints>

Configures whether series stats data points are included in heartbeats.

Series statistics such as latency statistics are reported as histograms when series stats collection is enabled. Enabling this property also includes the collected data points.

Enabling this property can be extremely bandwidth intensive and is not typically recommended.

Usage: Optional
Default: "false"
X DDL Override:
x.xvms.<xvmname>.heartbeats.collectSeriesDatapoints
Constraints: true | false

<maxTrackableSeriesValue>

Series data is reported using an HDR Histogram. This property controls the maximum value that the histogram can record.

When this property is not specified the value is set high enough that a 10 minute latency can be recorded in microseconds.

Usage: Optional
X DDL Override:
x.xvms.<xvmname>.heartbeats.maxTrackableSeriesValue
Constraints: positive int

<collectPoolStats>

Configures whether pool stats are collected and reported in heartbeats.

When pool stats are enabled, stats are included for pools that experienced a pool miss in the collection interval, or whose preallocated pool items fall below the poolDepletionThreshold.

For applications that don't expect to operate in a zero garbage mode, this can be disabled to prevent heartbeats from becoming too large.

Usage: Optional
Default: "true"
X DDL Override:
x.xvms.<xvmname>.heartbeats.collectPoolStats
Constraints: true | false

<poolDepletionThreshold>

Sets the percentage decrement by which a preallocated pool's size must drop for the pool to be included in an xvm heartbeat.

Normal pool stats are only included in a heartbeat if there were pool misses in the interval. For preallocated pools, however, misses are not expected until the preallocated items are exhausted. For such pools it is generally of interest from a monitoring perspective to observe the rate of depletion of such items.

If a pool is preallocated with 1000 items and this property is set to 10, pool stats will be emitted for the pool when its size drops below 900, 800, 700, and so on, until its size reaches 0 (at which point subsequent misses would cause it to be included in every heartbeat).

Setting this to a value greater than 100 or less than or equal to 0 disables depletion threshold reporting.

Usage: Optional
X DDL Override:
x.xvms.<xvmname>.heartbeats.poolDepletionThreshold
Constraints: positive int [0 - 100]

<collectIndividualThreadStats>

Configures whether individual thread stats are collected.

Collecting stats for individual threads can lead to larger heartbeats. For applications that don't need such stats this collection can be disabled.

Usage: Optional
Default: "false"
X DDL Override:
x.xvms.<xvmname>.heartbeats.collectIndividualThreadStats
Constraints: true | false

<collectNonZGStats>

Sets whether or not stats that produce garbage as a result of being collected are enabled.

Some stats involve using reflection or 3rd party apis that create garbage. This property can be set to false to suppress collection of those stats.

Currently the list of stats that may produce garbage include:

  • Process and System CPU usage. When disabled, process and system cpu usage may be reported as -1.
  • Enhanced disk usage statistics. When disabled, heartbeats will fall back to reporting the space for disk roots returned from File.listRoots().

Usage: Optional
Default: "true"
X DDL Override:
x.xvms.<xvmname>.heartbeats.collectNonZGStats
Constraints: true | false

<includeMessageTypeStats>

Sets whether or not message type stats are included in heartbeats (when enabled for the app).

When captureMessageTypeStats is enabled for an app, the AepEngine will record select statistics on a per message type basis. Because inclusion of per message type stats can significantly increase the size of heartbeats, inclusion in heartbeats is disabled by default.

Note: For message type stats to be included in heartbeats, both captureMessageTypeStats for the app must be set to true (capture is disabled by default because recording them is costly), and includeMessageTypeStats must be set to true (inclusion is disabled by default because emitting them is costly).

Usage: Optional
Default: "false"
X DDL Override:
x.xvms.<xvmname>.heartbeats.includeMessageTypeStats
Constraints: true | false

SINCE 3.7

<inactiveMessageTypeStatsInclusionFrequency>

This setting can be used to control how frequently message type stats are reported for message types without any activity. By default this value is set to 1, meaning that inactive types are included in every heartbeat even if there was no activity related to that type in the interval being reported. It can be set to 0 to exclude inactive types from heartbeats, or to a value N greater than 1 so that inactive message type stats are included in every Nth heartbeat.

Setting this value to 0 can cause monitoring applications that start listening to the heartbeat stream late not to 'see' counts and latencies related to messaging that occurred in the past, so it is often desirable to at least periodically include inactive types. On the other hand, for applications that work with a large number of infrequently used message types, it can be costly in terms of heartbeat size to always include them.

This setting has no effect if message type stats are not enabled or not included in heartbeats to begin with.

Usage: Optional
Default: 1
X DDL Override:
x.xvms.<xvmname>.heartbeats.inactiveMessageTypeStatsInclusionFrequency
Constraints: positive integer

SINCE 3.8

<logging>

Configures heartbeat logging for the xvm. When configured, xvm heartbeats are written to disk.

SINCE 3.1

enabled>

Whether or not to enable heartbeat logging.

Usage: Required
Default: "false"
X DDL Override:
x.xvms.<xvmname>.heartbeats.logging.enabled
Constraints: true | false

<autoFlushSize>

In the absence of explicit flushes (e.g. flushOnCommit) of written entries, the size at which flush is automatically triggered for queued writes. If not set, the platform default (8192) is used.

Usage: Optional
Default: 8192
X DDL Override: x.xvms.<xvmname>.heartbeats.logging.autoFlushSize
Constraints: positive integer

<flushOnCommit>

Whether or not the logger should be flushed on commit. By default, the logger buffers writes into an internal buffer and doesn't write to disk until that buffer has filled. Enabling flush on commit will flush the logger regardless of whether the buffer has filled.

Usage: Optional
Default: false
X DDL Override:
x.xvms.<xvmname>.heartbeats.logging.flushOnCommit

Constraints: true | false

<flushUsingMappedMemory>

Whether flushes to the log file should be performed using a memory mapped file buffer.

Usage: Optional 
Default: false 
X DDL Override:
x.xvms.<xvmname>.heartbeats.logging.flushUsingMappedMemory
 
Constraints: true | false

(warning) There are known issues on some platforms such as windows in which using this setting can cause file locking issues. Therefore, enabling this setting should be tested on the target platform being used.

<autoRepair>

Whether or not an attempt will be made to automatically repair a non empty log on open by truncating malformed entries at the end of the log that are part of incomplete transactions.

Usage: Optional
Default: false
X DDL Override:
x.xvms.<xvmname>.heartbeats.logging.autoRepair

Constraints: true | false

<storeRoot>

Specifies the root folder in which the logger's transaction log files are located.

Usage: Optional
Default: ${NVROOT}/rdat
X DDL Override:
x.xvms.<xvmname>.heartbeats.logging.storeRoot

Constraints: a file path (possibly relative to the working directory).

If the expected value of NVROOT on your target deployment host is not on the device where you want to place your transaction logs (e.g. slow or small disk), then consider making this a substitutable value such as:

<storeRoot>${myapp.storeroot}</storeRoot>, so that you can customize its location at runtime appropriate to the environment in which you are launching.


<initialLogLength>

Sets the initial file size of the logger's transaction log in gigabytes.

Preallocating the transaction log can save costs in growing the file size over time, since the operation of growing a log file may actually result in a write of file data + the metadata operation of updating the file size, and may also benefit from allocating contiguous sectors on disk.

Usage: Optional
Default: 1 (1Gb)
X DDL Override:
x.xvms.<xvmname>.heartbeats.logging.initialLogLength

Constraints: positive float

The log size is specified in Gb. For an initial size of less than 1 Gb, specify a float value. For example, a value of .01 would result in a preallocated size of ~10Mb, which can be useful for test environments.

<zeroOutInitial>

Whether the log file should be explicitly zeroed out (to force commit all disk pages) if newly created.

Usage: Optional
Default: false
X DDL Override:
x.xvms.<xvmname>.heartbeats.logging.zeroOutInitial
Constraints: true | false

<pageSize>

Sets the page size for the disk in bytes. The logger will use this as a hint in several areas to optimize its operation.

Usage: Optional
Default: 8192
X DDL Override:
x.xvms.<xvmname>.heartbeats.logging.pageSize

Constraints: positive int

<detachedWrite

Configures whether or not logger writes are done by the committing thread or passed off to a detached writer thread. Offloading to a writer thread can increase application throughput but requires an extra processor core for the logger thread.

enabled>

Can be set to true to enable detached logging for the logger.

Usage: Optional
Default: false
X DDL Override:
x.xvms.<xvmname>.heartbeats.logging.detachedWrite.enabled

Constraints: true | false

<queueDepth>

The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used.

See <queueDepth>

X DDL Override:
x.xvms.<xvmname>.heartbeats.logging.detachedWrite.queueDepth

Constraints: positive integer

<queueOfferStrategy>

Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used.

See <queueOfferStrategy>

X DDL Override: x.xvms.<xvmname>.heartbeats.logging.detachedWrite.queueOfferStrategy
Constraints: See QueueOfferStrategy

<queueWaitStrategy>

Controls the wait strategy for the queue. When not specified, the platform's default value for the multiplexer will be used.

See <queueWaitStrategy>  

X DDL Override:
x.xvms.<xvmname>.heartbeats.logging.detachedWrite.queueWaitStrategy

Constraints: See QueueWaitStrategy

<queueDrainerCpuAffinityMask>

Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpu, or a square bracket enclosed comma separated list enumerating the logical cpus.

See <queueDrainerCpuAffinityMask>   

X DDL Override:
x.xvms.<xvmname>.heartbeats.logging.detachedWrite.queueDrainerCpuAffinityMask

<queueFeedMaxConcurrency>

Sets the maximum number of threads that will feed the multiplexer's queue.

See <queueFeedMaxConcurrency>     

Usage: Optional
Default: 16
X DDL Override:
x.xvms.<xvmname>.heartbeats.logging.detachedWrite.queueFeedMaxConcurrency

Constraints: positive integer

</detachedWrite>

</logging>
End of application's heartbeat logging properties.
<tracing  
Configures trace logging of heartbeats.

enabled>

Whether or not to enable heartbeat tracing.

Usage: Optional
Default: "false"
X DDL Override:
x.xvms.<xvmname>.heartbeats.tracing.enabled
Constraints: true | false

<traceAdminClientStats>

Controls whether admin client stats are traced (when tracing is enabled).

<traceAppStats>

Controls whether app stats (AEP engine level stats) are traced (when tracing is enabled).

Usage: Optional
Default: "true"
X DDL Override:
x.xvms.<xvmname>.heartbeats.tracing.traceAppStats
Constraints: true | false

<tracePoolStats>

Controls whether pool stats are traced (when tracing is enabled).

Usage: Optional
Default: "true"
X DDL Override:
x.xvms.<xvmname>.heartbeats.tracing.tracePoolStats
Constraints: true | false

<traceSysStats>

Controls whether system stats are traced (when tracing is enabled).

Usage: Optional
Default: "true"
X DDL Override:
x.xvms.<xvmname>.heartbeats.tracing.traceSysStats
Constraints: true | false

<traceThreadStats>

Controls whether thread stats are traced (when tracing is enabled).

Usage: Optional
Default: "true"
X DDL Override:
x.xvms.<xvmname>.heartbeats.tracing.traceThreadStats
Constraints: true | false

<traceUserStats>

Controls whether user app stats are traced (when tracing is enabled).

If traceAppStats is enabled then user stats are reported as part of the application stats unless this property is disabled. If traceAppStats is false then this property can be used to trace only the user stats.

Usage: Optional
Default: "true"
X DDL Override:
x.xvms.<xvmname>.heartbeats.tracing.traceUserStats
Constraints: true | false

</tracing>
</heartbeats>
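
Tying the above together, a sketch of a heartbeats configuration that emits heartbeats every 10 seconds, logs them to a substitutable store root and traces them (the values shown are illustrative):

<heartbeats enabled="true" interval="10">
    <collectSeriesStats>true</collectSeriesStats>
    <logging enabled="true">
        <storeRoot>${myapp.storeroot::/data/hb-logs}</storeRoot>
    </logging>
    <tracing enabled="true"/>
</heartbeats>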

 

 
<provisioning>
The provisioning section of DDL holds information for provisioning tools, such as Robin, that provision and launch the XVM.
<host>

Configures the host or ip address to which this xvm should be provisioned. 

Usage: Optional 
X DDL Override:
x.xvms.<xvmname>.provisioning.host

<qualifyPathsWithSystem>

When true (the default), this setting indicates that installation paths should be qualified by appending /systemName/xvmName. For example, if an installRoot is specified as '/usr/local/', an XVM named 'order-processor' in a system named 'order-processing-1.0' would be provisioned to '/usr/local/run/order-processing-1.0/order-processor'. This setting ensures that when multiple xvms / systems are deployed to the same host they don't collide.

Usage: Optional 
Default: "true" 
X DDL Override:
x.xvms.<xvmname>.provisioning.qualifyPathsWithSystem
Constraints: true | false

<rootDirectory>

Configures the root directory to which the xvm should be provisioned.

This directory should be specified using only '/' characters for file separators. Provisioning tools are expected to perform path translations when deploying to windows systems. A path that starts with either '/' or contains a ':' character is interpreted as an absolute path.

Usage: Optional 
X DDL Override:
x.xvms.<xvmname>.provisioning.rootDirectory

<dataDirectory>

Configures the directory in which data files should be stored.

This directory path serves as the root directory for runtime data files such as recovery logs.

This directory should be specified using only '/' characters for file separators. Provisioning tools are expected to perform path translations when deploying to windows systems. A path that starts with either '/' or contains a ':' character is interpreted as an absolute path.

When specified as a relative path the path should be interpreted as being relative to the rootDirectory directory.

When not specified, this defaults to the platform's default runtime data directory, the 'rdat' subdirectory in the root folder.

Usage: Optional 
Default: "rdat" 
X DDL Override:
x.xvms.<xvmname>.provisioning.dataDirectory

<traceLogDirectory>

Configures the directory to which xvm trace output should be logged.

This directory should be specified using only '/' characters for file separators. Provisioning tools are expected to perform path translations when deploying to windows systems. A path that starts with either '/' or contains a ':' character is interpreted as an absolute path.

When specified as a relative path the path should be interpreted as being relative to the dataRoot directory.

When not specified, the logging directory is left up to the launcher.

Usage: Optional 
Default: "rdat/logs"
X DDL Override:
x.xvms.<xvmname>.provisioning.traceLogDirectory

<jvm>
Configures the JVM used to launch the xvm. 

<javaHome>

Configures the JVM's home directory which contains the bin/java executable to use.

Usage: Optional
X DDL Override:
x.xvms.<xvmname>.provisioning.jvm.javaHome

<jvmParams>

A list of JVM parameters used to launch the JVM.

Parameters can be specified on a single line, or broken across multiple lines for readability. When parameters are split across multiple lines they are appended to one another with a single whitespace character.

For example:
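
(A sketch with hypothetical parameter values; the multi-line parameters below are appended with single spaces.)

<jvmParams>
    -Xms2g -Xmx2g
    -XX:+PrintGCDetails
</jvmParams>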

Note that the DDL parser doesn't merge JVM params when overriding a template or when merging profiles. One can however use jvmParamSets described below to allow sourcing JVM parameters from templates and/or multiple activated profiles.

Usage: Optional
X DDL Override:
x.xvms.<xvmname>.provisioning.jvm.jvmParams

<jvmParamSets>

A list of named JVM parameter sets (that are appended to jvmParams if provided).

JVM parameter sets are useful in the context of config composition as they allow portions of JVM parameters to be overridden by name based on active profiles or templates.

For example:
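
(A sketch using hypothetical set names 'memory' and 'gc'; the sets are appended to jvmParams in order.)

<jvmParamSets>
    <jvmParamSet name="memory" order="0">
        <jvmParams>-Xms2g -Xmx2g</jvmParams>
    </jvmParamSet>
    <jvmParamSet name="gc" order="1">
        <jvmParams>-XX:+UseG1GC</jvmParams>
    </jvmParamSet>
</jvmParamSets>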


Usage: Optional
X DDL Override:
x.xvms.<xvmname>.provisioning.jvm.jvmParamSet

 <jvmParamSet

 

name

Sets the name of this set of JVM parameters.

If a template is overridden or a profile is activated and the overriding xml has a JVM param set with the same name, its JVM parameters will replace those defined previously.

Usage: Required  
X DDL Override: Not overridable (key) 
Constraints: String

order

Can be set to control the order in which the parameters from this JVM parameter set are appended to the jvmParams relative to other JVM parameter sets. Lower ordered JVM parameters are placed first. The ordering for 2 parameter sets with the same order is unspecified.

Usage: Optional
Default: 0
X DDL Override:
x.xvms.<xvmname>.provisioning.jvm.<param-set-name>.order
Constraints: short

enabled>

Can be set to false to disable this set of JVM params.

When a JVM param set is disabled its JVM parameters are not appended to the JVM parameters string.

Usage: Optional
Default: true
X DDL Override:
x.xvms.<xvmname>.provisioning.jvm.<param-set-name>.enabled
Constraints: true|false

<jvmParams>

The JVM params for this JVM param set.

JVM parameters can be specified here in the same format as the main JVM parameters for the JVM. Multi-line JVM parameters are appended together with a single white space between lines.

Usage: Optional
X DDL Override:
x.xvms.<xvmname>.provisioning.jvm.<param-set-name>.jvmParams

</jvmParamSet>

 

</jvmParamSets>
</jvm>
 
</provisioning>
 

</xvm>

</xvms>

 

Enums Reference

Enums

ChannelQos

Enumerates the different supported Qualities of Service used for transmitting messages over a messaging channel.

See also: MessageChannel.Qos.

Valid Values

ValueDescription

BestEffort

Specifies Best Effort quality of service. Messages sent Best Effort are not acknowledged, and in the event of a binding failure may be lost.

Guaranteed

Specifies Guaranteed quality of service. Messages sent Guaranteed are held until acknowledged by the message bus binding, and are retransmitted in the event of a failure.

CheckpointingType

Enumerates the types of checkpointing controllers. 

See also IStoreCheckpointingController.Type.

Valid Values

ValueDescription

Default

Indicates that the default checkpoint controller should be used.

The default checkpoint controller counts all entry types (Puts, Updates, Removes, and Sends) against the threshold trigger for writing a new checkpoint.

CDC

Indicates that the CDC Checkpoint controller should be used.

The CDC checkpoint controller only counts Puts, Updates and Removes against the checkpoint trigger threshold (because these are the only types of interest for CDC).

Conflation

Indicates that the Conflation Checkpoint controller should be used.

The Conflation checkpoint controller does not count Puts against the new checkpoint trigger threshold (because puts cannot be conflated).

ICRRole

Enumerates the different inter-cluster replication roles of an AepEngine's store binding.

In inter-cluster replication, the store contents of a cluster are replicated to one or more receiving clusters. This enumeration enumerates the different replication roles that can be assigned to clusters. Assigning a replication role to a cluster amounts to assigning the same inter-cluster replication role to all members of the cluster.

See also: IStoreBinding.InterClusterReplicationRole

Valid Values

ValueDescription

Sender

Cluster members designated with this role serve as the inter-cluster replication senders.

StandaloneReceiver

Cluster members designated with this role serve as standalone inter-cluster replication receivers. Standalone implies that the receive side members designated with this role do not form clusters while operating in this mode. From the perspective of the user, the member operates as a backup cluster member, but there is no intra-cluster replication actually occurring. There can be multiple simultaneous standalone replication receivers.

InboundMessageLoggingPolicy

Enumerates an engine's inbound message logging policies.

This enumerates the policy that determines if and where to log inbound messages.

See also: AepEngine.InboundMessageLoggingPolicy

Valid Values

ValueDescription

Default

The default inbound message logging policy is determined by the HA and persistence mode at play. With this policy, if event sourcing & cluster persistence are enabled, then inbound message logging is implicitly switched on and inbound messages are logged through the store's persister. All other configurations switch off inbound message logging.

Off

Disables inbound message logging.

With this policy, inbound message logging is disabled.

This is the default policy with State Replication and Standalone mode of operation. The Standalone mode of operation is one where an engine has not been configured for HA: i.e. configured without a store. 

This option is invalid for use with engines configured to be clustered and use Event Sourcing since, in that mode, inbound messages are logged in the store's event log by virtue of inbound message replication.

UseDedicated

Use a dedicated log for inbound message logging.

With this policy, the engine uses a dedicated logger to log inbound messages.

This option is invalid for use with engines configured to be clustered and use Event Sourcing since, in that mode, inbound messages are logged in the store's event log by virtue of inbound message replication.

InboundMessageLoggingFailurePolicy

SINCE 3.2

Enumerates policies for handling inbound message logging failures.

This enumerates the policy that determines what to do in the event of an inbound message logging failure.

Valid Values

ValueDescription

StopEngine

This policy specifies that a failure in inbound logging will be treated as a failure which will result in shutdown of the engine.

StopLogging

This policy specifies that inbound logging errors will be trapped and cause the engine to discontinue inbound message logging. 

InboundEventAcknowledgementPolicy

SINCE 3.7

Enumerates an engine's inbound event acknowledgement policy.

The general contract of an AepEngine is that it cannot acknowledge upstream events (such as message events) in a transaction until such time as the transaction has been stabilized to the point that, in the event of a failure, the message will not be lost.

When the engine is not configured with a store, this property has no effect and events are acknowledged when the entire transaction is committed (e.g. when downstream acknowledgements are received).

ValueDescription

Default

This policy allows the engine to select the inbound event acknowledgement policy based on its configuration.

At present, setting this policy results in OnSendStability being used, but this behavior could change in future releases.

OnSendStability

With this policy, inbound events are acknowledged once all downstream acknowledgements for outbound messages and events have been received.

With this policy messages would not be lost even if a backup and primary member were to fail unrecoverably.

OnStoreStability

With this experimental policy inbound events are acknowledged once they are committed to the store without waiting for acknowledgement for the transaction's outbound messages.

Once an inbound event has been successfully stored it can be recovered from a backup or a standalone instance's transaction log, making this policy safe across failover and recovery. Note: this policy is currently in an experimental phase. It is not recommended for use in production without guidance from support.

LogEmptinessExpectation

Enumerates the set of values permissible with the log emptiness expectation.

See Also: IStoreJournallingPersister.LogEmptinessExpectation  

Valid Values

ValueDescription

None

Used to specify that there is no expectation regarding emptiness of a transaction log.

Empty

Used to specify that a transaction log is expected to be empty.

NotEmpty

Used to specify that a transaction log is expected to exist and contain at least one entry.

SINCE 3.4

MessageHandlingPolicy

Enumerates an application's AepEngine's inbound message handling policy.

See also: AepEngine.MessageHandlingPolicy  

Valid Values

ValueDescription

Normal

This policy represents normal message processing operation.

This is the default message handling policy.

Noop

This policy causes inbound messages to be discarded before dispatch to the application: i.e. they are not dispatched to the application. The messages are acknowledged if received on a guaranteed channel.

Discard

This policy causes inbound messages to be blindly discarded. No acknowledgements are dispatched if received on a guaranteed channel.

MessagingStartFailPolicy

 Enumerates an engine's messaging start fail policy. 

See also: AepEngine.MessagingStartFailPolicy  

Valid Values

ValueDescription

FailIfOneBindingFails

This policy causes a messaging start operation to be considered successful only if all binding attempts are successful, i.e. with this option a messaging start operation is reported as failed if one or more of the binding attempts fails.

This is the default messaging start fail policy.

NeverFail

This policy causes a start operation to be considered successful as long as all bind attempts do not result in permanent exceptions (a permanent exception reported by a bind attempt causes the bind operation to not be retried while a non-permanent exception causes the bind attempt to be periodically retried). In other words, the NeverFail option causes a messaging start operation to be reported as successful as long as at least one bind attempt was successful or failed with a non-permanent exception. 

FailIfAllBindingsFail

This policy causes a messaging start operation to be considered successful if one or more binding attempts is successful i.e. with this option, a messaging start operation is reported as failed if all the binding attempts fail. 

MessageBusBindingFailPolicy

This enumerates the policy that determines what action an engine takes when a message bus binding fails. 

See also: AepEngine.MessageBusBindingFailPolicy  

Valid Values

ValueDescription

FailIfAnyBindingFails

With this policy, when a binding fails, the engine shuts down all other operational bindings (if any) and dispatches an AepMessagingFailedEvent to the application. A binding shut down in this manner preserves the binding's channel interests on the messaging subsystem for all the binding's guaranteed channels while the binding is offline. This ensures no message loss when the binding is reestablished at a later point.

This is the default message bus binding fail policy.

Reconnect

With this policy, when a binding fails, the engine dispatches channel down events for all channels in the failed binding. It then starts the reconnect process on the failed binding periodically retrying the binding. Channel up events are then dispatched for channels in the binding once the binding has been successfully reestablished. 

MessageSendPolicy

Enumerates an application's AepEngine outbound message send policies.

The message send policy controls at what point during transaction commit processing that application sent messages are transmitted out of the application. 

See also: AepEngine.MessageSendPolicy  

Valid Values

ValueDescription

ReplicateBeforeSend

This policy causes state/messages to be replicated before sending outbound messages triggered by the processing of inbound messages.

In other words, for event sourcing, this policy causes an inbound message to be processed, the message replicated to the backup instance(s) for processing, and then the outbound messages triggered by the processing of the message to be sent outbound (after processing acknowledgments have been received from all backup instance(s)). For state replication, this policy causes inbound message(s) to be processed, the state changes triggered by the processing of the inbound message to be replicated to the backup instance(s), and then the outbound messages triggered by the processing of the inbound message to be sent (after receiving state replication stability notifications from the backup instance(s)).

SendBeforeReplicate

This policy causes outbound messages triggered by the processing of inbound messages to be sent outbound first, before replicating the state/inbound messages.

In other words, for event sourcing, this policy causes an inbound message to be processed, the outbound messages triggered by the processing of the inbound message to be dispatched outbound, and then the inbound message replicated to the backup instance(s) for parallel processing (after outbound send stability notifications have been received from downstream agents). For state replication, this policy causes an inbound message to be processed, the outbound messages triggered by the processing of the inbound message to be dispatched outbound, and then the state changes affected by the processing of the inbound messages to be replicated for stability to the backup instance(s).

In most circumstances, this mode of operation is unsafe from an HA standpoint: a failover to a backup instance may result in duplicate processing of the source message with different outbound message results, e.g. duplicate outbound messages that differ in content.


Noop

This policy causes outbound messages to be silently discarded. No stability notifications are dispatched for this policy for messages sent through guaranteed channels.

This mode of operation is useful in debugging or diagnostic situations only and should not be used in production.
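
To make the tradeoff concrete, a minimal sketch of selecting a send policy in config.xml follows; the <messageSendPolicy> element name and its placement are assumptions for illustration.

  <app name="my-app">
    <!-- Assumed element name: ReplicateBeforeSend holds outbound sends
         until replication to the backup(s) is acknowledged (the HA-safe
         choice). -->
    <messageSendPolicy>ReplicateBeforeSend</messageSendPolicy>
  </app>

SendBeforeReplicate trades that safety for lower latency to downstream consumers, and Noop is for diagnostics only, as described above.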

AppExceptionHandlingPolicy

SINCE 3.4

Enumerates an engine's app exception handling policies.

This enumerates the policy by which an engine determines how to handle unchecked exceptions thrown from an application message handler or message filter. 

See also: AepEngine.AppExceptionHandlingPolicy

Valid Values

RollbackAndStop

Stop the engine.

With this policy, upon receipt of an unchecked exception from an application handler, the engine:

  1. Rolls back the transaction in which the exception was thrown (leaving any previous messages in the transaction, if adaptively batched, unacknowledged and uncommitted).

  2. Schedules an engine stop to be triggered after in-flight transactions have completed (excluding those in the current adaptive batch).
    • completion of prior transactions allows messages in prior transactions to be acknowledged upstream.
    • because the engine is stopped with an exception, channel joins are preserved and a backup, if running, will take over.
    • the backup will reprocess any of the unacknowledged messages.

If the engine cannot complete prior transactions due to a subsequent error, the engine is still stopped with an exception and a backup will reprocess messages from incomplete transactions as well.

Note that even if all prior transactions complete successfully, it is currently possible that not all stabilized transactions will be reported to a backup or the transaction log. In such cases it is possible that outbound message redelivery can occur on failover or recovery, though those messages will be filtered by AEP duplicate checking at the downstream receiver if enabled. In this regard, a RollbackAndStop failure in an application that isn't using duplicate checking has the same redelivery guarantees for outbound messaging as a process failure, except that inbound messages are acknowledged upstream on a best-effort basis. This doesn't impose any additional coding requirements on applications not using duplicate checking, as they must tolerate duplicates to protect against process failure anyway.

This is the default policy.

LogExceptionAndContinue

Log an exception and continue operating.

With this policy, upon receipt of an unchecked exception from an application's event/message handler, the engine:

  • logs the exception received from the application,
  • stops processing the message,
  • acknowledges it,
  • and continues to process new events queued for execution.

So essentially message processing stops where it is, and from an HA standpoint, the message is removed from the processing stream.

When applied to an exception thrown from a message filter, the message will not be dispatched to application event handlers (see AepEngine.setMessageFilter).

In all cases, the message will not be considered to be part of the transaction and is acknowledged upstream.

QuarantineAndStop

Quarantine offending message and stop engine.

With this policy, upon receipt of an unchecked exception from an application handler, the engine:

  1. Rolls back the transaction in which the exception was thrown (leaving any previous messages in the transaction, if adaptively batched, unacknowledged).

  2. Starts a new transaction that consists solely of sending the offending message through the configured quarantine channel.
  3. Stops the engine with the exception thrown by the application after the completion of the transaction. Stopping after the completion of the quarantine transaction implies that:
    • the engine ensures successful delivery of the quarantine message (i.e. it waits for the send acknowledgement to be received).
    • it acknowledges the offending message upstream before shutting down.
    • because the engine is stopped with an exception, channel joins are preserved and a backup, if running, will take over.

If the engine cannot complete prior transactions due to a subsequent error, the engine is still stopped with an exception and a backup will reprocess messages from incomplete transactions as well.

In all of the above cases, an exception handled by the AppExceptionHandlingPolicy will result in the emission of an AepApplicationExceptionEvent that alerts registered handlers that an exception has occurred. 
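
The hedged sketch below illustrates how an application might opt into quarantining. The <appExceptionHandlingPolicy> and <quarantineChannel> element names are assumptions for illustration, and 'deadLetter' is a hypothetical channel name.

  <app name="my-app">
    <!-- Assumed element names: quarantine the offending message on the
         hypothetical 'deadLetter' channel, then stop the engine. -->
    <appExceptionHandlingPolicy>QuarantineAndStop</appExceptionHandlingPolicy>
    <quarantineChannel>deadLetter</quarantineChannel>
  </app>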

MessageSendExceptionHandlingPolicy

Enumerates an engine's message send exception handling policy.

This enumerates the policy by which an engine determines how to handle unchecked exceptions received on message sends.

Note: There are two types of send failures that an engine can encounter during its operation. The first are exceptions thrown during the message send operation. Such exceptions are typically thrown by the underlying message bus bindings. The other, applicable only to guaranteed channels, is where the message send operation succeeds but could not be stabilized by the underlying messaging provider. This policy applies to the former type of send failures.

Additionally, this does not cover exceptions thrown to the application as the result of a send call from a message handler. Such exceptions are covered by the AppExceptionHandlingPolicy.

See also: AepEngine.MessageSendExceptionHandlingPolicy

Valid Values

TreatAsStabilityFailure

Treat the failure as a stability failure.

Converts the send failure to a message stability failure (a fatal error).

This is the default policy.

LogExceptionAndContinue

Log an exception and continue operating.

With this policy, upon receipt of an unchecked exception from the underlying send machinery, the engine logs the exception and continues operating.

This policy can be dangerous for an application using Event Sourcing, because it is possible that such an exception is one that is indicative of a problem specific to the primary engine instance that would not occur on the backup if it were to take over and begin processing messages.

MessageSendStabilityFailureHandlingPolicy

SINCE 3.12.6

Enumerates an engine's message send stability failure handling policy.

This enumerates the policy by which an engine determines how to handle stability failure notifications for outbound sends.

Note: There are two types of send failures that an engine can encounter during its operation. The first are exceptions thrown during the message send operation. Such exceptions are typically thrown by the underlying message bus bindings. The other, applicable only to guaranteed channels, is where the message send operation succeeds but could not be stabilized by the underlying messaging provider. This policy applies to the latter type of send failures. 

Additionally, this does not cover exceptions thrown to the application as the result of a send call from a message handler. Such exceptions are covered by the AppExceptionHandlingPolicy.

See also: AepEngine.MessageSendStabilityFailureHandlingPolicy

Valid Values

LogExceptionAndContinue

Log an exception and continue operating.

With this policy, the engine logs an exception and continues operating after receiving the send stability failure notification.

When using this configuration, note that the engine will NOT retry sends that could not be stabilized by the messaging provider. This will very likely result in the loss of the sent messages that could not be stabilized. Recovery of such messages is the application's responsibility. 

StopEngine

Stop the engine on encountering such a failure.

With this policy, the engine shuts down when it receives a stability failure notification.

This is the default policy.
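
Because these two policies govern the two distinct failure types described in the notes above, they are often considered together. A minimal sketch, with element names assumed for illustration:

  <app name="my-app">
    <!-- Exceptions thrown by the send call itself (assumed element name): -->
    <messageSendExceptionHandlingPolicy>TreatAsStabilityFailure</messageSendExceptionHandlingPolicy>
    <!-- Sends that succeed but are never stabilized by the provider
         (assumed element name): -->
    <messageSendStabilityFailureHandlingPolicy>StopEngine</messageSendStabilityFailureHandlingPolicy>
  </app>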

OutboundMessageLoggingPolicy

Enumerates an engine's outbound message logging policies.

This enumerates the policy that determines if and where to log outbound messages.

See also: AepEngine.OutboundMessageLoggingPolicy  

Valid Values

Default

Disable outbound message logging.

With this policy, outbound message logging is disabled.

This is the default policy.

When the application's HA Policy is StateReplication, outbound messages are logged to the store transaction log as required by State Replication to retransmit in-doubt messages after a failure. However, the outbound messages in the store's transaction log will be discarded if log compaction is enabled, so an application may still want to log a copy to a dedicated logger as well.


UseDedicated

Use a dedicated log for outbound message logging.

With this policy, the engine uses a dedicated logger to log outbound messages.

OutboundMessageLoggingFailurePolicy

SINCE 3.2

Enumerates policies for handling outbound message logging failures.

This enumerates the policy that determines what to do in the event of an outbound message logging failure.

Valid Values

StopEngine

This policy specifies that a failure in outbound message logging will be treated as fatal, resulting in shutdown of the engine.

StopLogging

This policy specifies that outbound logging errors will be trapped and cause the engine to discontinue outbound message logging. 
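
A sketch of enabling dedicated outbound message logging together with a failure policy follows. The element and attribute names are assumptions for illustration; the actual logging configuration elements are documented elsewhere in this reference.

  <app name="my-app">
    <!-- Assumed names: log outbound messages to a dedicated logger and
         stop the engine if that logging ever fails. -->
    <outboundMessageLogging policy="UseDedicated" failurePolicy="StopEngine"/>
  </app>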

PerTransactionStatsLoggingPolicy

Enumerates an engine's per transaction stats logging policies.

This enumerates the policy that determines if and where to log per transaction stats.

See also: AepEngine.PerTransactionStatsLoggingPolicy  

Valid Values

Off

Disable per transaction stats logging.

With this policy, per transaction stats logging is disabled.

This is the default policy.


UseDedicated

Use a dedicated log for per transaction stats logging.

With this policy, the engine uses a dedicated logger to log per transaction stats.

PerTransactionStatsLoggingFailurePolicy

SINCE 3.2

Enumerates policies for handling per transaction stats logging failures.

This enumerates the policy that determines what to do in the event of a per transaction stats logging failure.

Valid Values

StopEngine

This policy specifies that a failure in per transaction stats logging will be treated as fatal, resulting in shutdown of the engine.

StopLogging

This policy specifies that per transaction stats logging errors will be trapped and cause the engine to discontinue per transaction stats logging. 
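
As with outbound message logging, the logging policy and its failure policy are typically configured side by side. A sketch with assumed element and attribute names:

  <app name="my-app">
    <!-- Assumed names: record per transaction stats to a dedicated logger,
         but keep the engine running if stats logging fails. -->
    <perTransactionStatsLogging policy="UseDedicated" failurePolicy="StopLogging"/>
  </app>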

QueueOfferStrategy

Specifies the offer strategy for threads publishing to an event multiplexer's queue. When not specified, the platform's default value for the multiplexer will be used, which is computed based on a number of factors depending on the event multiplexer in question and the optimization parameters in play for the application as a whole.

Valid Values

SingleThreaded

An optimized strategy that can be used when it can be guaranteed that there is only a single thread feeding the queue.

MultiThreaded

Strategy that can be used when multiple threads can concurrently enqueue events for the multiplexer.

MultiThreadedSufficientCores

Strategy to be used when there are multiple publisher threads claiming sequences. This strategy requires sufficient cores to allow multiple publishers to be concurrently claiming sequences, and assumes that those threads contend relatively infrequently.

QueueWaitStrategy

Specifies the strategy used by an event multiplexer's queue draining thread(s).

Valid Values

Blocking

The BlockingWaitStrategy is the slowest of the available wait strategies, but it is the most conservative with respect to CPU usage and gives the most consistent behavior across the widest variety of deployment options. As with the other strategies, knowledge of the deployed system can allow for additional performance.

Sleeping

Like the BlockingWaitStrategy, the SleepingWaitStrategy attempts to be conservative with CPU usage. It uses a simple busy-wait loop, but with a call to LockSupport.parkNanos(1) in the middle of the loop; on a typical Linux system this pauses the thread for around 60us. It has the benefit that the producing thread does not need to take any action other than incrementing the appropriate counter, and does not incur the cost of signaling a condition variable. However, the mean latency of moving an event between the producer and consumer threads will be higher. It works best in situations where low latency is not required but a low impact on the producing thread is desired.

Yielding

The YieldingWaitStrategy is one of two wait strategies that can be used in low-latency systems, where there is the option to burn CPU cycles with the goal of improving latency. The YieldingWaitStrategy busy spins, waiting for the sequence to increment to the appropriate value. Inside the body of the loop, Thread.yield() is called, allowing other queued threads to run. This is the recommended wait strategy when you need very high performance and the number of event handler threads is less than the total number of logical cores, e.g. when you have hyper-threading enabled.

BusySpin

The BusySpinWaitStrategy is the highest performing wait strategy, but it puts the highest constraints on the deployment environment. This wait strategy should only be used if the number of event handler threads is smaller than the number of physical cores on the box, or when the thread has been affinitized and is known not to be sharing a core with another thread (including a thread operating on a hyper-threaded core sibling).
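
These two enumerations are used together when tuning an event multiplexer (see the EventMultiplexer Properties group below). A latency-oriented sketch, assuming a hypothetical <eventMultiplexer> placement:

  <eventMultiplexer>
    <!-- Multiple application threads may enqueue events concurrently. -->
    <queueOfferStrategy>MultiThreaded</queueOfferStrategy>
    <!-- BusySpin burns a core; only appropriate when the drainer thread
         has a dedicated physical core. -->
    <queueWaitStrategy>BusySpin</queueWaitStrategy>
  </eventMultiplexer>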

StoreBindingRoleExpectation

Enumerates the different roles that an application's store can assume.

Valid Values

Primary

Indicates that this binding is the primary binding in a store cluster.

A store cluster can have a single primary member which is elected through a leader election algorithm. The single primary member replicates messages and state to its backup peers according to an application's configured HA Policy.

Backup

Indicates that a binding is a backup binding in a store cluster.

When operating in backup mode, objects can be retrieved from the store but not updated or added.

None

Indicates no expectation regarding a store binding's role. SINCE 3.4

ReplicationPolicy

Enumerates the different replication policies for an AepEngine. 

See also: AepEngine.ReplicationPolicy  

Valid Values

Pipelined

With this replication policy, message/state is replicated soliciting acknowledgements from the backup engine cluster instance(s), but inbound message processing is not blocked while waiting for the acknowledgement to be received.

Asynchronous

With this replication policy, message/state is replicated without soliciting an acknowledgement from the backup engine cluster instances.
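
A sketch combining the two store-related settings above; the <replicationPolicy> and <bindingRoleExpectation> element names and their placement are assumptions for illustration.

  <app name="my-app">
    <!-- Assumed element name: replicate with acknowledgements, but do not
         block inbound processing while waiting for them. -->
    <replicationPolicy>Pipelined</replicationPolicy>
    <storage>
      <clustering enabled="true">
        <!-- Assumed element name: expect this member to start as primary. -->
        <bindingRoleExpectation>Primary</bindingRoleExpectation>
      </clustering>
    </storage>
  </app>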

Groups Reference

Groups

EventMultiplexer Properties

Event Multiplexer properties configure the event multiplexer threads that are used throughout the platform for highly efficient inter-thread communication.

Elements

<queueDepth>

The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used.

Usage: Optional
Default: 1024
Constraints: positive integer

<queueOfferStrategy>

Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used.

Usage: Optional
Default: MultiThreadedSufficientCores (unless otherwise noted)

Constraints: See QueueOfferStrategy

<queueWaitStrategy>

Controls the wait strategy used by the event multiplexer's queue draining thread(s). When not specified, the platform's default value for the multiplexer will be used.

Usage: Optional
Default: Blocking
Constraints: See QueueWaitStrategy

<queueDrainerCpuAffinityMask>

Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical CPUs, or a square-bracket-enclosed, comma-separated list enumerating the logical CPUs.

For example, specifying "1" or "[0]" indicates Core 0, while "3" or "[0, 1]" indicates Cores 0 and 1. Specifying a value of "0" indicates that the thread should be affinitized to the platform's default CPU, and omitting this value indicates that the thread should be affinitized according to the platform's default policy for the multiplexer.

Examples:

  • "0" no affinity specified
  • "[]" no affinity specified
  • "1" specifies logical cpu 0
  • "[0]" specifies logical cpu 0
  • "4" specifies logical cpu 2
  • "[2]" list specifying logical cpu 2
  • "6" mask specifying logical cpu 1 and 2
  • "4294967296" specifies logical cpu 32
  • "[32]" specifies logical cpu 32
  • "[1,2]" list specifying logical cpu 1 and 2 

Usage: Optional
Default: 0 (no affinity / default)
Constraints: String per above

<queueFeedMaxConcurrency>

Sets the maximum number of threads that will feed the multiplexer's queue.

If this value is set too low, it will result in a runtime error. Typically, applications need not specify this value.

Usage: Optional
Default: 16
Constraints: positive integer
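
Pulling the elements above together, a sketch of a fully specified event multiplexer configuration follows. The enclosing <eventMultiplexer> element and its placement are assumptions for illustration; the child element names are as documented above.

  <eventMultiplexer>
    <queueDepth>2048</queueDepth>                                   <!-- a power of 2 -->
    <queueOfferStrategy>MultiThreadedSufficientCores</queueOfferStrategy>
    <queueWaitStrategy>Blocking</queueWaitStrategy>                 <!-- conservative CPU usage -->
    <queueDrainerCpuAffinityMask>[2]</queueDrainerCpuAffinityMask>  <!-- pin drainer to logical cpu 2 -->
    <queueFeedMaxConcurrency>16</queueFeedMaxConcurrency>
  </eventMultiplexer>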

nv.optimizefor

Several DDL settings can be tuned based on whether an application should be optimized for either throughput or latency, by setting the environment variable nv.optimizefor: 


throughput

When nv.optimizefor=throughput, the following settings are applied: 

  • Adaptive Commit Batch Ceiling -> 64 (unless set explicitly)
  • Low level network I/O tuned:
    • I/O buffer sizes tuned (adaptively sized larger to accommodate more bytes read per read). 
    • native network I/O libraries enabled (when available). 
    • tcp_no_delay -> false on replication and bus connections that support it (unless set explicitly)
    • eager socket reads enabled for cluster replication connection – keep reading from socket for up to 1 sec to avoid select / poll (unless linkParams are explicitly set).
  • Native file I/O enabled (when available).
  • Critical Path Threads waitPolicy -> Yielding (unless set explicitly)
  • Critical Path Threads detached = true (unless set to attached explicitly)
  • Pooling enabled for certain platform objects. (unless disabled explicitly)

Note that this list is not exhaustive, and the settings above may change over time. 

latency

When nv.optimizefor=latency, the following settings are applied:

  • Low level network I/O tuned:
    • I/O buffer sizes adaptively tuned
    • native network I/O libraries enabled (when available). 
    • tcp_no_delay -> true on replication and bus connections that support it (unless set explicitly)
    • eager socket reads enabled for cluster replication connection – keep reading from socket for up to 1 sec to avoid select / poll (unless linkParams are explicitly set).
  • Native file I/O enabled (when available).
  • Critical Path Threads waitPolicy -> BusySpin (unless set explicitly)
  • Critical Path Threads detached = true (unless set to attached explicitly)
  • Pooling enabled for certain platform objects such as packets. (unless disabled explicitly)

Note that this list is not exhaustive, and the settings above may change over time.
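
A minimal sketch of selecting an optimization profile via the DDL <env> section follows, assuming the property can be declared as a nested element of <env>; it can equally be supplied as a system property or environment variable in the bootstrap environment.

  <env>
    <!-- Tune platform defaults for latency; use 'throughput' for
         throughput-oriented defaults. Settings configured explicitly
         elsewhere in the DDL still take precedence over these defaults. -->
    <nv.optimizefor>latency</nv.optimizefor>
  </env>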
