This section contains detailed reference information for the x-ddl schema (Domain Descriptor Language). An X-DDL XML document describes and configures an application deployment, and is used to seed the configuration repository from which an application is loaded. See X Platform Configuration for a general overview of configuration.
A main tenet of the X Platform is to separate out (and shield) an application's business logic from the complex machinery underpinning high availability, performance and scalability. As such, there is a wealth of tuning knobs and configuration options that can be applied without making application code changes. In most cases, values marked Optional can be omitted.
A JVM is configured from DDL XML using the VMConfigurer class, which accepts the XML either as a file or as an input stream. The VMConfigurer parses the XML and seeds the configuration repository that will subsequently be used to configure components as they are loaded.
The values in a DDL XML document can be overridden in two ways: standard variable substitution and X DDL Overrides. In both cases, the source of properties for substitution is provided to the VMConfigurer. When running in non-embedded mode, the substitution values provided by the Talon Server to the VMConfigurer are sourced from the properties file specified by 'nv.app.propfile', System properties, and environment variables, in that order. When running in embedded mode, the application provides the substitution values to the VMConfigurer directly via API, giving the application full control of the property source.
Standard Variable Substitution
VMConfigurer will first substitute any ${VARNAME::DEFAULT} values using the ISubstResolver passed into the configurer (or from the environment if no resolver is passed in):
<model xmlns="http://www.neeveresearch.com/schema/x-ddl" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <buses>
    <bus name="frontoffice" descriptor="${FRONTOFFICE_BUSDESCRIPTOR::falcon://fastmachine:8040}">
      ...
If the substitution resolver contains a value for FRONTOFFICE_BUSDESCRIPTOR, then that will be used for the bus descriptor. Otherwise, the default value of "falcon://fastmachine:8040" will be used. This substitution is done before the XML is parsed, so even in cases where the ${} syntax would otherwise yield invalid XML, the value is substituted before the document is parsed.
Special XML characters in properties that originate from property files will be escaped before they are substituted into the DDL XML. In particular, <, >, &, ", and ' will be respectively replaced by &lt;, &gt;, &amp;, &quot;, and &apos;. Users who prefer to do their own XML escaping can disable this behavior by setting the following property:
x.escapesubstitutionvalues=false
Properties defined in the <env> section of an X-DDL document can be used in variable substitution elsewhere in the document. For example:
<model xmlns="http://www.neeveresearch.com/schema/x-ddl" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <env>
    <FRONTOFFICE_BUSDESCRIPTOR>falcon://fastmachine:8040</FRONTOFFICE_BUSDESCRIPTOR>
  </env>
  <buses>
    <bus name="frontoffice" descriptor="${FRONTOFFICE_BUSDESCRIPTOR}">
      ...
Properties defined in the <env> section of an X-DDL document have the lowest priority of all configuration sources and can be easily overridden using system properties and environment vars. Properties defined in the <env> section may reference other system properties and environment vars, but not other properties defined in the <env> section.
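For example, a minimal sketch (the MYAPP_DATA_ROOT variable and myapp.datadir property are hypothetical) of an <env> property composed from a system property or environment variable with a default:
<model xmlns="http://www.neeveresearch.com/schema/x-ddl" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <env>
    <!-- MYAPP_DATA_ROOT is resolved from system properties or environment variables, falling back to /tmp -->
    <myapp.datadir>${MYAPP_DATA_ROOT::/tmp}/data</myapp.datadir>
  </env>
</model>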
X DDL Overrides
Any attribute or value element listed in this document that has an X DDL Override property can be overridden by passing the corresponding value into the substitution environment used by the VMConfigurer. DDL overrides are particularly useful for applications that internally bundle DDL on the classpath, making it difficult to change at runtime.
<model xmlns="http://www.neeveresearch.com/schema/x-ddl" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <buses>
    <bus name="frontoffice" descriptor="${FRONTOFFICE_BUSDESCRIPTOR::falcon://fastmachine:8040}">
      ...
In the above case, the value for descriptor can also be overridden using the DDL Override 'x.buses.bus.frontoffice.descriptor'. So, given the following values in the substitution resolver:
FRONTOFFICE_BUSDESCRIPTOR=falcon://slowmachine:8040
x.buses.bus.frontoffice.descriptor=p2p://frontoffice
parsing would yield:
<bus name="frontoffice" descriptor="p2p://frontoffice">
because initial substitution would substitute "falcon://slowmachine:8040", and the DDL override would then replace that value with "p2p://frontoffice", resulting in the bus being localized as a p2p bus.
Throughout the schema, you will notice several elements that have enabled attributes. Specifying a value of "false" for these elements will cause the X-DDL parser to ignore them. This pattern allows these elements to be configured but then disabled at runtime via an environment variable, System property, or DDL override. For example, suppose the DDL were to configure a persister in an app named "forwarderapp":
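A minimal sketch (the mainClass and the FORWARDER_PERSISTER_ENABLED variable name with its default are illustrative; the element nesting follows the override path shown below):
<apps>
  <app name="forwarderapp" mainClass="com.example.Forwarder">
    <storage>
      <!-- enabled defaults to true unless FORWARDER_PERSISTER_ENABLED is supplied -->
      <persister enabled="${FORWARDER_PERSISTER_ENABLED::true}">
        ...
      </persister>
    </storage>
  </app>
</apps>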
Then at runtime the persister could be disabled by launching the application with:
-Dx.apps.app.forwarderapp.storage.persister.enabled=false
or
-DFORWARDER_PERSISTER_ENABLED=false
DDL overrides (except for those in the <env> element) are prefixed with 'x.' to avoid conflicts with other configuration property names in the environment. It is possible to change this prefix by setting:
-Dnv.ddl.override.prefix=myprefix
In this case, 'x.apps.app.forwarderapp.storage.persister.enabled' would instead be specified as 'myprefix.apps.app.forwarderapp.storage.persister.enabled'.
DDL trace can be enabled by setting -Dnv.ddl.trace=debug (or when using SLF4J setting the logger nv.ddl to 'Trace' Level).
Environment Configuration
The <env> section allows configuration of the runtime properties accessible through the XRuntime class. The X Platform reserves the prefix 'nv.' for platform configuration; applications are otherwise free to set arbitrary properties in the <env> section. The properties defined in <env> will be stored in the configuration repository and later loaded into XRuntime. Environment properties can be listed either in '.' separated form, or by breaking the dot separated levels into hierarchical nodes.
Sample XML Snippet
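A minimal sketch (property names are illustrative) showing both forms:
<env>
  <!-- '.' separated form: defines myapp.region -->
  <myapp.region>us-east</myapp.region>
  <!-- hierarchical form: defines myapp.tradingDesk -->
  <myapp>
    <tradingDesk>desk-1</tradingDesk>
  </myapp>
</env>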
Element | Description | |
---|---|---|
<env></env> | Any XML element with text content will be treated as a property formed by concatenating the names of its enclosing nodes (beneath <env>) and its own name with '.' separators. If the property is already defined in the set of DDL overrides passed into the parser, the value in the XML will be overridden by the DDL override.
Message Bus Configuration
The 'buses' section configures the various messaging buses used globally in the deployment. For example, the snippet below configures a messaging bus named 'frontoffice'.
Sample XML Snippet
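A minimal sketch (the descriptor default and the channel details are illustrative):
<buses>
  <bus name="frontoffice" descriptor="${FRONTOFFICE_BUSDESCRIPTOR::falcon://fastmachine:8040}">
    <channels>
      <channel name="orders" id="1">
        <qos>Guaranteed</qos>
      </channel>
    </channels>
  </bus>
</buses>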
Element | Description | |
---|---|---|
<buses><bus | ||
name | Defines the bus name which must be unique within a configuration repository. Applications reference their bus by this name. Usage: Required | |
descriptor | Defines the bus descriptor. A bus descriptor is used to lookup and configure a message bus provider. Usage: Required | |
enabled | If set to false, this bus will not be added to the configuration repository and will not be available for application use. Usage: Optional | |
<channels><channel | Configures the channels for this bus. Individual applications that use the bus may use some or all of the channels according to their own configuration and interaction patterns. | |
name | Defines and configures a channel within a message bus. An SMA message channel is a named conduit for message exchange between SMA messaging participants. An application's AepEngine will start messaging channels prior to signaling to the application that messaging has started. Usage: Required | |
id | The channel id is a numerical identifier of a channel uniquely identifying the channel in its bus. Some bus binding implementations may use this on the wire as a replacement for the string channel name for efficiency, so it is important that the id is consistent across configuration domains. Usage: Optional | |
<qos> | Specifies the channel's quality of service (see the Qos enumeration below). When the qos element is not provided, the platform's default QoS value will be used unless specified programmatically by the application. Usage: Optional | |
<key> | Specifies the channel's key. Usage: Optional | |
</channel></channels></bus></buses> | ||
Applications (AepEngine) Configuration
Element | Description | |
<apps><app | The <apps> section configures the various applications in the deployment. An application is synonymous with an AEP engine: each <app> element defines and configures one application (i.e. engine). | |
name | Defines the application name which must be unique within an application's configuration domain. Usage: Required | |
mainClass | Specifies the application's main class (e.g. com.acme.MyApplication). An application's main class serves as the main entry point for a Talon Server application. Usage: Required | |
enabled> | If set to false, this app will be ignored and not saved to the configuration repository. This can be used to disable an application at runtime. However, note that if a persistent configuration repository is in use, this will not cause a previously configured application to be deleted. Usage: Optional | |
Application Messaging Configuration
An app's <messaging> element configures the buses used by the application.
Sample XML Snippet
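A minimal sketch (the factory, mainClass, and bus/channel names are illustrative):
<app name="forwarderapp" mainClass="com.example.Forwarder">
  <messaging>
    <factories>
      <factory name="com.example.messages.OrderMessageFactory"/>
    </factories>
    <buses>
      <bus name="frontoffice">
        <channels>
          <!-- join="true": this app receives messages on the channel -->
          <channel name="orders" join="true"/>
        </channels>
      </bus>
    </buses>
  </messaging>
</app>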
Element | Description | |
<messaging> | Configures messaging for an application. | |
<factories> | Configures message factories for an application. Each message factory defined under this element is registered with the application's underlying engine. Registered message factories are used by the message buses to materialize message instances from factory and type ids received over the wire. It is not mandatory to configure factories via DDL, they can also be registered programmatically by the application during application initialization. | |
<factory | Configures a message factory used by the app. | |
name | The message factory's fully qualified name. Usage: Required | |
</factory> | ||
</factories> | ||
<buses> | Configures the buses that the application will use. Each bus defined in this section should have the same name as a bus defined in the global <buses> section. | |
<bus> | Configures a bus from the <buses> section for use with the application and registers it with the underlying engine. Each application in the deployment will create its own bus instance, and may configure channel interest in that bus differently depending on its participation in the message flow. | |
name | Specifies the name of the bus which should reference a bus from the buses section of this descriptor or one already created and saved in the configuration repository. Usage: Required | |
enabled> | If set to false, this bus will be ignored and not added to the application's list of buses. Usage: Optional | |
<channels> <channel | Configures the bus channels used by the application which will be a subset of those defined for the bus in the descriptor's <buses> section. | |
name | Specifies the name of the channel which references a channel defined in the <bus> element in the <buses> section. Usage: Optional | |
join> | An application that should receive messages on the channel should specify true. Usage: Optional | |
</channel> </channels> | Additional channel configurations can be added here. | |
<nonBlockingInboundMessageDispatch> | Specifies whether or not enqueue of inbound messages for this bus should block on the application's main inbound event multiplexer. In most cases, this value should either not be specified or set to false. Usage: Optional | |
<inboundMessageEventPriority> | Specifies the priority at which messages from this bus should be dispatched to the application's inbound event multiplexer. A negative value is interpreted as higher priority. A positive value will result in delayed processing by the number of milliseconds specified. If not set or 0, messages will be dispatched at normal priority. Usage: Optional | |
<scheduleSendCommitCompletionEvents> | Indicates whether the bus manager's send commit completion events should be scheduled. Scheduling the completion events allows them to be added to the application's inbound event queue's feeder queue, which can reduce contention with message events. Usage: Optional | |
<sendCommitCompletionEventPriority> | Specifies the priority at which send commit completion events from this bus should be dispatched to the application's inbound event multiplexer. A negative value is interpreted as higher priority. A positive value will result in delayed processing by the number of milliseconds specified. If not set or 0, events will be dispatched at normal priority. Setting this value to a higher priority than message events can reduce message processing latency in some cases. Usage: Optional | |
<detachedSend | Configures the detached send event multiplexer thread for the bus. When detached send is disabled, outbound send of messages is performed by the commit processing thread (typically the engine's inbound event multiplexer thread). Enabling detached send can reduce the workload on the commit processing thread, allowing it to process more inbound messages, but this can also incur additional latency. | |
enabled> | Specifies whether or not detached send is enabled for this bus. Usage: Optional | |
<queueDepth> | The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used. See <queueDepth> X DDL Override: | |
<queueOfferStrategy> | Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: | |
<queueWaitStrategy> | Controls the wait strategy used by the multiplexer's queue draining thread(s). When not specified, the platform's default value for the multiplexer will be used. X DDL Override: | |
<queueDrainerCpuAffinityMask> | Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpu, or a square bracket enclosed comma separated list enumerating the logical CPUs. See <queueDrainerCpuAffinityMask> X DDL Override: | |
<queueFeedMaxConcurrency> | Sets the maximum number of threads that will feed the multiplexer's queue. Usage: Optional | |
</detachedSend> </bus> </buses></messaging> |
Server Configuration
The 'servers' section configures the various Talon Servers used globally in the deployment. A server hosts one or more applications, controls each application's lifecycle, and provides connectivity in the form of acceptors that allow management clients to monitor and administer the applications that it hosts. For example, the snippet below configures two servers named 'forwarder-1' and 'forwarder-2' that both host the 'forwarder' app. Launching both servers would start two instances of the 'forwarder' app that would form a clustered instance of the forwarder app.
Sample XML Snippet
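A minimal sketch (host names, ports, and the group name are illustrative):
<servers>
  <server name="forwarder-1" group="forwarders">
    <apps>
      <app name="forwarder" autostart="true"/>
    </apps>
    <acceptors>
      <acceptor descriptor="tcp://host1:12000"/>
    </acceptors>
  </server>
  <server name="forwarder-2" group="forwarders">
    <apps>
      <app name="forwarder" autostart="true"/>
    </apps>
    <acceptors>
      <acceptor descriptor="tcp://host2:12000"/>
    </acceptors>
  </server>
</servers>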
Element | Description | |
---|---|---|
<servers> | Defines the Talon Servers used globally in the deployment. A Talon 'Server' is synonymous with the term 'XVM' and the terms are used interchangeably in the X documentation. A server hosts one or more applications, controls each application's lifecycle, and implements direct network connection acceptance machinery to (1) allow management clients to connect to the server to monitor and administer the applications that it hosts and (2) accept direct connections for apps that are configured to use the Talon 'direct' bus provider. | |
<server | Defines and configures a server. | |
enabled | If set to false, this server will be ignored and not saved to the configuration repository. This can be used to disable a server at runtime. However, note that if a persistent configuration repository is in use, this will not cause a previously configured server to be deleted. Usage: Optional | |
discoveryDescriptor | Defines the server's discovery descriptor. Usage: Optional | |
group | Defines the server's application group. Usage: Optional | |
name> | Defines the server name which must be unique within the server's configuration and discovery domain. A common practice is to use a server name that is indicative of the application or set of applications that it is hosting and the normal application role. For example, if the server will host shard 1 of an order processing app that normally assumes the primary role, a name such as "order-processing-p-1" might be used. In this fashion, an instance of an application can be uniquely addressed within a discovery domain as a combination of the server name and app name. Usage: Required X DDL Override: Not overridable (key). Constraints: String | |
<clientHandShakeTimeout> | Sets the timeout (in seconds) allowed for connecting clients to complete the server connection handshake. Usage: Optional | |
<autoStopOnLastAppStop> | Configures whether or not the server will automatically stop after the last app is stopped. Disabling auto stop on last app stop leaves the server running and manageable even when all applications have stopped. The XVM's internal admin app does not count as a running app. Usage: Optional | |
<adminClientOutputQueueCapacity> | Sets the capacity (in MB) of each server controller admin client's output queue. Outbound packets are dropped once the queue size reaches the configured capacity. Usage: Optional | |
<apps> | Configures the apps hosted by this server. Multiple servers can host the same application. Each clustered application will discover its peers and form its own independent cluster. In other words, servers don't cluster, but their applications do. | |
<app | Configures an app hosted by this server. | |
autostart | Sets whether the server automatically starts the app when the server is started. Usage: Optional | |
enabled | If set to false, the app will be ignored and not saved to the configuration repository. This can be used to suppress addition of an application at runtime. Usage: Optional | |
name> | The name of the application as defined in the 'apps' element. Usage: Required | |
</app> | ||
</apps> | ||
<acceptors> | Configures this server's acceptors. By default, each server will create an acceptor on 'tcp://0.0.0.0:0' to listen on all interfaces at an auto-assigned port (which is advertised). If any acceptors are explicitly added to the server, the default acceptor is removed and replaced with the first configured acceptor. | |
<acceptor | Defines and configures an acceptor. | |
descriptor | The acceptor descriptor. Acceptor descriptors are of the form [protocol]://[host]:[port], e.g. 'tcp://myhost:12000', and are used to specify the network protocol, interface, and port through which to accept inbound network connection requests. 'tcp' is the only currently supported protocol, [host] can be the host name or IP address of an interface on the machine on which this server will be running, and [port] is the protocol specific server port on which the server will listen for inbound connections. Usage: Required | |
enabled> | If set to false, the acceptor will be ignored and not saved to the configuration repository. This can be used to suppress addition of an acceptor at runtime. However, note that if a persistent configuration repository is in use, this will not cause a previously configured acceptor for this server to be removed. Usage: Optional | |
<linkParams> | A comma separated set of key=value pairs that serve as additional configuration parameters for the network connections accepted by this acceptor. Usage: Optional | |
</acceptor> | ||
</acceptors> | ||
<multithreading | ||
enabled> | Sets whether the server should operate in multi-threaded mode. In most cases this value should be set to true. Setting this value to false will set the IO thread count to 1 regardless of the number of IO threads listed for the server. Usage: Required | |
<ioThreads> | Configures IO threads for the server. | |
<ioThread | Defines and configures an IOThread. | |
id | The thread id. IO Thread ids are zero based and must be defined in monotonically increasing order. Usage: Required | |
affinity | Sets the cpu affinity mask for the thread. The affinity string can either be a long that represents a mask of logical cpus or a square bracket enclosed comma separated list enumerating the logical cpus. For example, specifying "1" or "[0]" indicates Core 0; "3" or "[0, 1]" would indicate Core 0 or Core 1. Specifying a value of "0" indicates that the thread should be affinitized to the platform's default cpu, and omitting this value indicates that the thread should be affinitized according to the platform's default policy for the multiplexer. See UtlThread.setCpuAffinityMask Usage: Optional | |
enabled | Sets the thread as enabled or disabled. This can be used at runtime to disable an IO Thread. Disabling an IO thread has the effect of setting all threads with a higher id to enabled=false. Note that if a persistent configuration repository is in use, this will not cause previously configured IO threads for this server to be removed. Usage: Required | |
</ioThreads> | ||
</multithreading> | ||
<heartbeatLogging | Configures heartbeat logging for the server. When configured, server heartbeats are written to disk. | |
enabled> | Whether or not to enable heartbeat logging. Usage: Required | |
<autoFlushSize> | In the absence of explicit flushes (e.g. flushOnCommit) of written entries, the size at which flush is automatically triggered for queued writes. If not set, the platform default (8192) is used. Usage: Optional | |
<flushOnCommit> | Whether or not the logger should be flushed on commit. By default, the logger buffers writes into an internal buffer and doesn't write to disk until that buffer has filled. Enabling flush on commit will flush the logger regardless of whether the buffer has filled. Usage: Optional | |
<autoRepair> | Whether or not an attempt will be made to automatically repair a non empty log on open by truncating malformed entries at the end of the log that are part of incomplete transactions. Usage: Optional | |
<storeRoot> | Specifies the root folder in which the logger's transaction log files are located. Usage: Optional | |
<initialLogLength> | Sets the initial file size of the logger's transaction log in gigabytes. Preallocating the transaction log can avoid the cost of growing the file over time, since growing a log file may result in a write of file data plus the metadata operation of updating the file size; preallocation may also benefit from allocating contiguous sectors on disk. Usage: Optional | |
<zeroOutInitial> | Whether the log file should be explicitly zeroed out (to force commit all disk pages) if newly created. Usage: Optional | |
<pageSize> | Sets the page size for the disk in bytes. The logger will use this as a hint in several areas to optimize its operation. Usage: Optional | |
<detachedWrite | Configures whether or not logger writes are done by the committing thread or passed off to a detached writer thread. Offloading to a writer thread can increase application throughput but requires an extra processor core for the logger thread. | |
enabled> | Can be set to true to enable detached logging for the logger. Usage: Required | |
<queueDepth> | The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified, the platform's default value for the multiplexer will be used. See <queueDepth> X DDL Override: x.servers.server.<servername>.heartbeatLogging.detachedWrite.queueDepth | |
<queueOfferStrategy> | Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used. X DDL Override: x.servers.server.<servername>.heartbeatLogging.detachedWrite.queueOfferStrategy | |
<queueWaitStrategy> | Controls the wait strategy used by the multiplexer's queue draining thread(s). When not specified, the platform's default value for the multiplexer will be used. X DDL Override: x.servers.server.<servername>.heartbeatLogging.detachedWrite.queueWaitStrategy | |
<queueDrainerCpuAffinityMask> | Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpus, or a square bracket enclosed comma separated list enumerating the logical cpus. See <queueDrainerCpuAffinityMask> X DDL Override: x.servers.server.<servername>.heartbeatLogging.detachedWrite.queueDrainerCpuAffinityMask | |
<queueFeedMaxConcurrency> | Sets the maximum number of threads that will feed the multiplexer's queue. Usage: Optional | |
</detachedWrite> </heartbeatLogging> | End of the server's heartbeat logging properties. |
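For example, heartbeat logging with detached writes might be configured as follows (a sketch; the store root path and queue depth are illustrative):
<server name="forwarder-1">
  <heartbeatLogging enabled="true">
    <storeRoot>/var/xvm/hb</storeRoot>
    <flushOnCommit>false</flushOnCommit>
    <detachedWrite enabled="true">
      <queueDepth>1024</queueDepth>
    </detachedWrite>
  </heartbeatLogging>
</server>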
Enumerates the different supported Qualities of Service used for transmitting messages over a messaging channel.
See also: MessageChannel.Qos.
Valid Values
Value | Description |
---|---|
BestEffort | Specifies Best Effort quality of service. Messages sent Best Effort are not acknowledged, and in the event of a binding failure may be lost. |
Guaranteed | Specifies Guaranteed quality of service. Messages sent Guaranteed are held until acknowledged by the message bus binding, and are retransmitted in the event of a failure. |
Enumerates the types of checkpointing controllers.
See also IStoreCheckpointingController.Type.
Valid Values
Value | Description |
---|---|
Default | Indicates that the default checkpoint controller should be used. The default checkpoint controller counts all entry types (Puts, Updates, Removes, and Sends) against the threshold trigger for writing a new checkpoint. |
CDC | Indicates that the CDC Checkpoint controller should be used. The CDC checkpoint controller only counts Puts, Updates and Removes against the checkpoint trigger threshold (because these are the only types of interest for CDC). |
Conflation | Indicates that the Conflation Checkpoint controller should be used. The Conflation checkpoint controller does not count Puts against the new checkpoint trigger threshold (because puts cannot be conflated). |
Enumerates the different inter-cluster replication roles of an AepEngine's store binding.
In inter-cluster replication, the store contents of a cluster are replicated to one or more receiving clusters. This enumeration lists the different replication roles that can be assigned to clusters. Assigning a replication role to a cluster amounts to assigning the same inter-cluster replication role to all members of the cluster.
See also: IStoreBinding.InterClusterReplicationRole
Valid Values
Value | Description |
---|---|
Sender | Cluster members designated with this role serve as the inter-cluster replication senders. |
StandaloneReceiver | Cluster members designated with this role serve as standalone inter-cluster replication receivers. Standalone implies that the receive side members designated with this role do not form clusters while operating in this mode. From the perspective of the user, the member operates as a backup cluster member, but there is no intra-cluster replication actually occurring. There can be multiple simultaneous standalone replication receivers. |
Enumerates an engine's inbound message logging policies.
This enumerates the policy that determines if and where to log inbound messages.
See also: AepEngine.InboundMessageLoggingPolicy
Valid Values
Value | Description | |
---|---|---|
Default | The default inbound message logging policy is determined by the HA and persistence mode at play. With this policy, if event sourcing & cluster persistence are enabled, then inbound message logging is implicitly switched on and inbound messages are logged through the store's persister. All other configurations switch off inbound message logging. | |
Off | Disables inbound message logging. With this policy, inbound message logging is disabled. This is the default policy with State Replication and the Standalone mode of operation. The Standalone mode of operation is one where an engine has not been configured for HA, i.e. configured without a store. | |
UseDedicated | Use a dedicated log for inbound message logging. With this policy, the engine uses a dedicated logger to log inbound messages. |
Enumerates policies for handling inbound message logging failures.
This enumerates the policy that determines what to do in the event of an inbound message logging failure.
Valid Values
Value | Description |
---|---|
StopEngine | This policy specifies that a failure in inbound logging will be treated as a failure which will result in shutdown of the engine. |
StopLogging | This policy specifies that inbound logging errors will be trapped and cause the engine to discontinue inbound message logging. |
Enumerates an engine's inbound event acknowledgement policy.
The general contract of an AepEngine is that it cannot acknowledge upstream events (such as message events) in a transaction until the transaction has been stabilized to the point that, in the event of a failure, the message will not be lost.
When the engine is not configured with a store, this property has no effect and events are acknowledged when the entire transaction is committed (e.g. when downstream acknowledgements are received).
Valid Values
Value | Description |
---|---|
Default | This policy allows the engine to select the inbound event acknowledgement policy based on its configuration. At present, setting this policy results in OnSendStability being used, but this behavior could change in future releases. |
OnSendStability | With this policy, inbound events are acknowledged once all downstream acknowledgements for outbound messages and events have been received. With this policy, messages would not be lost even if a backup and primary member were to fail unrecoverably. |
OnStoreStability | With this experimental policy, inbound events are acknowledged once they are committed to the store, without waiting for acknowledgement of the transaction's outbound messages. Once an inbound event has been successfully stored, it can be recovered from a backup or a standalone instance's transaction log, making this policy safe across failover and recovery. Note: this policy is currently in an experimental phase. It is not recommended for use in production without guidance from support. |
Enumerates the set of values permissible with the log emptiness expectation.
See Also: IStoreJournallingPersister.LogEmptinessExpectation
Valid Values
Value | Description |
---|---|
None | Used to specify that there is no expectation regarding the emptiness of a transaction log. |
Empty | Used to specify that a transaction log is expected to be empty. |
NotEmpty | Used to specify that a transaction log is expected to exist and contain at least one entry. |
Enumerates an application's AepEngine's inbound message handling policy.
See also: AepEngine.MessageHandlingPolicy
Valid Values
Value | Description |
---|---|
Normal | This policy represents normal message processing operation. This is the default message handling policy. |
Noop | This policy causes inbound messages to be discarded before dispatch to the application: i.e. they are not dispatched to the application. The messages are acknowledged if received on a guaranteed channel. |
Discard | This policy causes inbound messages to be blindly discarded. No acknowledgements are dispatched if received on a guaranteed channel. |
Enumerates an engine's messaging start fail policy.
See also: AepEngine.MessagingStartFailPolicy
Valid Values
Value | Description |
---|---|
FailIfOneBindingFails | This policy causes a messaging start operation to be considered successful only if all binding attempts are successful, i.e. with this option a messaging start operation is reported as failed if one or more of the binding attempts fails. This is the default messaging start fail policy. |
NeverFail | This policy causes a start operation to be considered successful as long as all bind attempts do not result in permanent exceptions (a permanent exception reported by a bind attempt causes the bind operation to not be retried while a non-permanent exception causes the bind attempt to be periodically retried). In other words, the NeverFail option causes a messaging start operation to be reported as successful as long as at least one bind attempt was successful or failed with a non-permanent exception. |
FailIfAllBindingsFail | This policy causes a messaging start operation to be considered successful if one or more binding attempts is successful i.e. with this option, a messaging start operation is reported as failed if all the binding attempts fail. |
This enumerates the policy that determines what action an engine takes when a message bus binding fails.
See also: AepEngine.MessageBusBindingFailPolicy
Valid Values
Value | Description |
---|---|
FailIfAnyBindingFails | With this policy, when a binding fails, the engine shuts down all other operational bindings (if any) and dispatches a messaging failed event to the application. This is the default policy. |
Reconnect | With this policy, when a binding fails, the engine dispatches channel down events for all channels in the failed binding. It then starts the reconnect process on the failed binding periodically retrying the binding. Channel up events are then dispatched for channels in the binding once the binding has been successfully reestablished. |
Enumerates an application's AepEngine outbound message send policies.
The message send policy controls the point during transaction commit processing at which application sent messages are transmitted out of the application.
See also: AepEngine.MessageSendPolicy
Valid Values
Value | Description | |
---|---|---|
ReplicateBeforeSend | This policy causes state/messages to be replicated before sending outbound messages triggered by the processing of inbound messages. In other words, for event sourcing, this policy causes an inbound message to be processed, the message to be replicated to the backup instance(s) for processing, and then the outbound messages triggered by the processing of the message to be sent (after processing acknowledgments have been received from all backup instance(s)). For state replication, this policy causes an inbound message to be processed, the state changes triggered by the processing of the inbound message to be replicated to the backup instance(s), and then the outbound messages triggered by the processing of the inbound message to be sent (after receiving state replication stability notifications from the backup instance(s)). | |
SendBeforeReplicate | This policy causes outbound messages triggered by the processing of inbound messages to be sent first, before replicating the state/inbound messages. In other words, for event sourcing, this policy causes an inbound message to be processed, the outbound messages triggered by the processing of the inbound message to be dispatched outbound, and then the inbound message to be replicated to the backup instance(s) for parallel processing (after outbound send stability notifications have been received from downstream agents). For state replication, this policy causes an inbound message to be processed, the outbound messages triggered by the processing of the inbound message to be dispatched outbound, and then the state changes resulting from the processing of the inbound message to be replicated for stability to the backup instance(s). | |
Noop | This policy causes outbound messages to be silently discarded. No stability notifications are dispatched for this policy for messages sent through guaranteed channels. | |
Enumerates an engine's app exception handling policies.
This enumerates the policy using which an engine determines how to handle unchecked exceptions from an application message handler or message filter.
See also: AepEngine.AppExceptionHandlingPolicy
Valid Values
Value | Description |
---|---|
RollbackAndStop | Stop the engine. With this policy, upon receipt of an unchecked exception from an application handler, the engine rolls back the transaction in progress and initiates an orderly stop of the engine. If the engine cannot complete prior transactions due to a subsequent error, the engine is still stopped with an exception and a backup will reprocess messages from incomplete transactions as well. This is the default policy. |
LogExceptionAndContinue | Log an exception and continue operating. With this policy, upon receipt of an unchecked exception from an application's event/message handler, the engine logs the exception and continues operating. So essentially message processing stops where it is, and from an HA standpoint, the message is removed from the processing stream. When applied to an exception thrown from a message filter, the message will not be dispatched to application event handlers (see AepEngine.setMessageFilter). In all cases, the message will not be considered part of the transaction and is acknowledged upstream. |
QuarantineAndStop | Quarantine the offending message and stop the engine. With this policy, upon receipt of an unchecked exception from an application handler, the engine quarantines the offending message and then stops the engine. If the engine cannot complete prior transactions due to a subsequent error, the engine is still stopped with an exception and a backup will reprocess messages from incomplete transactions as well. |
In all of the above cases, an exception handled by the AppExceptionHandlingPolicy will result in the emission of an AepApplicationExceptionEvent that alerts registered handlers that an exception has occurred. |
Enumerates an engine's message send exception handling policy.
This enumerates the policy using which an engine determines how to handle unchecked exceptions received on message sends.
Note: There are two types of send failures that an engine can encounter during its operation. The first are exceptions thrown during the message send operation. Such exceptions are typically thrown by the underlying message bus bindings. The other, applicable only to guaranteed channels, is where the message send operation succeeds but could not be stabilized by the underlying messaging provider. This policy only applies to the first type of send failures.
Additionally, this does not cover exceptions thrown to the application as the result of a send call from a message handler. Such exceptions are covered by the AppExceptionHandlingPolicy.
See also: AepEngine.MessageSendExceptionHandlingPolicy
Valid Values
Value | Description | |
---|---|---|
TreatAsStabilityFailure | Treat the failure as a stability failure. Converts the send failure to a message stability failure (a fatal error). This is the default policy. | |
LogExceptionAndContinue | Log an exception and continue operating. With this policy, upon receipt of an unchecked exception from the underlying send machinery, the engine logs the exception and continues operating. | |
Enumerates an engine's outbound message logging policies.
This enumerates the policy that determines if and where to log outbound messages.
See also: AepEngine.OutboundMessageLoggingPolicy
Valid Values
Value | Description | |
---|---|---|
Default | Disable outbound message logging. With this policy, outbound message logging is disabled. This is the default policy. | |
UseDedicated | Use a dedicated log for outbound message logging. With this policy, the engine uses a dedicated logger to log outbound messages. |
Enumerates policies for handling outbound message logging failures.
This enumerates the policy that determines what to do in the event of an outbound message logging failure.
Valid Values
Value | Description |
---|---|
StopEngine | This policy specifies that a failure in outbound logging will be treated as a failure, which will result in shutdown of the engine. |
StopLogging | This policy specifies that outbound logging errors will be trapped and cause the engine to discontinue outbound message logging. |
Specifies the offer strategy for threads publishing to an event multiplexer's queue. When not specified, the platform's default value for the multiplexer will be used, which is computed based on a number of factors depending on the event multiplexer in question and the optimization parameters in play for the application as a whole.
Valid Values
Value | Description |
---|---|
SingleThreaded | An optimized strategy that can be used when it can be guaranteed that there is only a single thread feeding the queue. |
MultiThreaded | Strategy that can be used when multiple threads can concurrently enqueue events for the multiplexer. |
MultiThreadedSufficientCores | Strategy to be used when there are multiple publisher threads claiming sequences. This strategy requires sufficient cores to allow multiple publishers to be concurrently claiming sequences. |
Specifies the strategy used by an event multiplexer's queue draining thread(s).
Valid Values
Value | Description |
---|---|
Blocking | The BlockingWaitStrategy is the slowest of the available wait strategies, but is the most conservative with respect to CPU usage and will give the most consistent behaviour across the widest variety of deployment options. Knowledge of the deployed system can, however, allow for additional performance. |
Sleeping | Like the BlockingWaitStrategy, the SleepingWaitStrategy attempts to be conservative with CPU usage by using a simple busy wait loop, but uses a call to LockSupport.parkNanos(1) in the middle of the loop. On a typical Linux system, this will pause the thread for around 60us. It has the benefit that the producing thread does not need to take any action other than incrementing the appropriate counter and does not incur the cost of signaling a condition variable. The trade-off is that the mean latency of moving an event between the producer and consumer threads will be higher. It works best in situations where low latency is not required, but a low impact on the producing thread is desired. |
Yielding | The YieldingWaitStrategy is one of two wait strategies that can be used in low latency systems, where there is the option to burn CPU cycles with the goal of improving latency. The YieldingWaitStrategy will busy spin waiting for the sequence to increment to the appropriate value. Inside the body of the loop, Thread.yield() will be called, allowing other queued threads to run. This is the recommended wait strategy when you need very high performance and the number of event handler threads is less than the total number of logical cores, e.g. when hyper-threading is enabled. |
BusySpin | The BusySpinWaitStrategy is the highest performing Wait Strategy, but puts the highest constraints on the deployment environment. This wait strategy should only be used if the number of Event Handler threads is smaller than the number of physical cores on the box, or when the thread has been affinitized and is known not to be sharing a core with another thread (including a thread operating on a hyperthreaded core sibling). |
Enumerates the different roles that an application's store can assume.
Valid Values
Value | Description |
---|---|
Primary | Indicates that this binding is the primary binding in a store cluster. A store cluster can have a single primary member which is elected through a leader election algorithm. The single primary member replicates messages and state to its backup peers according to an application's configured HA Policy. |
Backup | Indicates that a binding is a backup binding in a store cluster. When operating in backup mode, objects can be retrieved from the store but not updated or added. |
None | Indicates no expectation regarding a store binding's role. |
Enumerates the different replication policies for an AepEngine.
See Also: AepEngine.ReplicationPolicy
Valid Values
Value | Description |
---|---|
Pipelined | With this replication policy, message/state is replicated with acknowledgements solicited from the backup engine cluster instance(s), but inbound message processing is not blocked while waiting for the acknowledgement to be received. |
Asynchronous | With this replication policy, message/state is replicated without soliciting an acknowledgement from the backup engine cluster instances. |
Event Multiplexer properties configure the event multiplexer threads that are used throughout the platform for highly efficient inter-thread communication.
Elements
Value | Description |
---|---|
<queueDepth> | The size of the feeder queue for the event multiplexer. Typically this value should be a power of 2. When not specified the platform's default value for the multiplexer will be used. Usage: Optional |
<queueOfferStrategy> | Controls the offer strategy for threads publishing to the queue. When not specified, the platform's default value for the multiplexer will be used. Usage: Optional |
<queueWaitStrategy> | Controls the wait strategy used by the multiplexer's queue draining thread(s). When not specified, the platform's default value for the multiplexer will be used. Usage: Optional |
<queueDrainerCpuAffinityMask> | Sets the CPU affinity mask to use for the drainer thread. The affinity string can either be a long that represents a mask of logical cpus, or a square bracket enclosed comma separated list enumerating the logical cpus. For example, specifying "1" or "[0]" indicates Core 0; "3" or "[0, 1]" would indicate Core 0 or Core 1. Specifying a value of "0" indicates that the thread should be affinitized to the platform's default cpu, and omitting this value indicates that the thread should be affinitized according to the platform's default policy for the multiplexer. Usage: Optional |
<queueFeedMaxConcurrency> | Sets the maximum number of threads that will feed the multiplexer's queue. If this value is set too low, it will result in a runtime error. Typically, applications need not specify this value. Usage: Optional |
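For example, a bus's detached send multiplexer might tune these elements as follows (a sketch; the values shown are illustrative, not recommendations):
<detachedSend enabled="true">
  <queueDepth>4096</queueDepth>
  <queueOfferStrategy>MultiThreaded</queueOfferStrategy>
  <queueWaitStrategy>BusySpin</queueWaitStrategy>
  <!-- pin the drainer thread to logical core 2 -->
  <queueDrainerCpuAffinityMask>[2]</queueDrainerCpuAffinityMask>
  <queueFeedMaxConcurrency>2</queueFeedMaxConcurrency>
</detachedSend>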