The Talon Manual



Overview

To achieve zero-garbage operation in steady state, Talon pools objects to avoid allocations. Pools created by the platform start out with no objects preallocated, so it is important for performance-sensitive applications to drive warmup traffic at startup, not only to allow JIT optimization to kick in, but also to make sure pools grow to an equilibrium state in which enough instances have been allocated to satisfy the number of objects in use at any given time. To reach such an equilibrium state, applications should push traffic at rates higher than the expected volume to ensure that pool sizes grow to a level that can accommodate spikes in traffic. In cases where application warmup isn't sufficient to reach pool equilibrium, pools can be manually configured as described below.
Some cases where it is desirable to manually configure pool parameters include:

  • When performance (throughput and latency) is less important than memory footprint, disabling pools can avoid memory overhead in cases where objects might otherwise sit in a pool unused.
  • Cases where it is challenging to drive warmup traffic to the point where pools reach their optimal capacity. In such cases it is desirable to manually configure a pool for preallocation.
  • Pools that are used purely in a preallocated fashion. For example, an application that expects a given number of Order objects in a day may want to start with those objects preallocated.
  • In some cases, initial bursts of traffic at application startup cause pools of seldom-used types to grow to a large size, then remain dormant, wasting memory. In this case it is desirable to limit the pool's capacity.

General Pooling Configuration

Pooling of platform internal objects and embedded entities generated with Xbuf encoding is enabled by default. ADM Messages and State Entities can also be pooled when generated with Xbuf encoding, but pooling of these types is currently only enabled when nv.optimizefor=throughput or nv.optimizefor=latency is set. In certain cases it may be desirable to change this behavior at a more granular level.
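As a hedged sketch (the property names below come from this document; where they are set — system properties, environment, or DDL — depends on your configuration mechanism), enabling pooling of ADM Messages and State Entities might look like:

```
# Enables pooling of Xbuf-generated ADM Messages and State Entities
# as a side effect of optimizing for throughput (or latency):
nv.optimizefor=throughput

# Alternatively, packet pooling (which backs these types) can be toggled directly:
nv.packet.shouldpool=true
```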

nv.pool.shouldpool
Default: true
This property can be set to false to globally disable all pooling, including platform internal pools. Setting this value to false can adversely affect throughput and latency, but can, in some cases, provide a fast way to reduce memory footprint.

nv.packet.shouldpool
Default: false (true when nv.optimizefor=throughput|latency)
This property globally enables or disables packet pooling. ADM generated Xbuf Messages and Entities are backed by, and pooled with, their backing packet objects; consequently, this parameter controls pooling for those types. Note that embedded entities are not backed by a packet and are therefore pooled independently of the Message or Entity in which they are contained, so this property does not affect embedded entities. Packet pooling also impacts transaction logs and cluster replication, as both operate using packets to frame their datagrams.

Configuring Specific Pools

A pool is uniquely named in a JVM as <pooltype>.<poolname>.<poolinstanceid>. For example, a native 256 byte platform IOBuffer pool named "iobuf.native-256.23" has a pooltype of "iobuf", a poolname of "native-256", and is suffixed with a JVM-unique instance id that is assigned monotonically within each JVM.

The following output from enabling pool stats trace shows some example pool names: 

Pool Stats
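To make the naming scheme concrete, the following standalone sketch (PoolNameExample is a hypothetical class, not part of the platform) splits a pool name into its three components as described above:

```java
// Illustrates the <pooltype>.<poolname>.<poolinstanceid> naming scheme.
// Note that the poolname itself may contain dots in general, but the
// instance id is always the final dot-separated token.
public class PoolNameExample {

    /** Returns {pooltype, poolname, poolinstanceid} for a pool name. */
    static String[] parse(String pool) {
        int lastDot = pool.lastIndexOf('.');
        String instanceId = pool.substring(lastDot + 1);   // e.g. "23"
        String rest = pool.substring(0, lastDot);          // e.g. "iobuf.native-256"
        int firstDot = rest.indexOf('.');
        String poolType = rest.substring(0, firstDot);     // e.g. "iobuf"
        String poolName = rest.substring(firstDot + 1);    // e.g. "native-256"
        return new String[] { poolType, poolName, instanceId };
    }

    public static void main(String[] args) {
        String[] parts = parse("iobuf.native-256.23");
        System.out.println("pooltype=" + parts[0]
                + " poolname=" + parts[1]
                + " instance=" + parts[2]);
    }
}
```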


Pools are configured by specifying nv.pool.<poolidentifier>.<propertyname>, where the identifier can be either the 'pooltype' or the 'pooltype.poolname'.

Pool Config Identifiers

Example pool identifiers from the above stats output would be:

  • "iobuf" applies to all IOBuffer pools
  • "iobuf.native-256" applies only to native 256 byte IOBuffer pools
  • "packet" applies to all packet pools

  • "packet.MyMessageXbufPacket.71.1" applies to the packet type backing the Xbuf generated MyMessage class with a factory id of 71 and type id of 1

  • "xbuf.entity" applies to all Xbuf embedded entities. 

  • "xbuf.entity.MyEntityXbufEntity.301" applies to the embedded Xbuf MyEntity class with a factory id of 301 and a type id of 199.

When both a pooltype-level property and a pooltype.poolname-level property are configured, the finer-grained 'poolname' property takes precedence.
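As a hedged illustration of the precedence rule (the identifiers below are taken from the examples above; the property values are arbitrary), the finer-grained setting wins when both are present:

```
# Applies to all IOBuffer pools:
nv.pool.iobuf.initialCapacity=512

# Takes precedence over the above for the native 256 byte IOBuffer pools only:
nv.pool.iobuf.native-256.initialCapacity=2048
```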

Pool Configuration Properties

The following environment variables can be used to configure a specific pool type using the above identifier: 

nv.pool.<poolIdentifier>.initialCapacity
Default: 1024
The initial size of the array that holds pool items not in use.

nv.pool.<poolIdentifier>.maxCapacity
Default: Integer.MAX_VALUE
The maximum number of slots for items not in use. If an item is returned to a pool and there are no slots left, it is evicted and becomes eligible for garbage collection.

nv.pool.<poolIdentifier>.threaded
Default: true
Whether or not the pool will be safe for access by multiple threads.

nv.pool.<poolIdentifier>.preallocate
Default: false
When true, the pool is filled to its initial capacity with newly created items when the pool is created. When false, items are created on demand.

nv.pool.<poolIdentifier>.detachedWash
Default: false
When items are returned to a pool, their fields are reset. When true, items returned to the pool are eligible for cleanup on a detached thread.

Note on the above defaults: each pool is created programmatically, and each pool instance may set its own default values; the defaults above apply to pools that haven't overridden them.
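A hedged sketch of how these properties might be passed on the command line (this assumes they are settable as JVM system properties via -D, and myapp.jar is a placeholder; consult your deployment's configuration mechanism):

```
java -Dnv.pool.iobuf.native-256.preallocate=true \
     -Dnv.pool.iobuf.native-256.initialCapacity=4096 \
     -Dnv.pool.iobuf.native-256.maxCapacity=8192 \
     -jar myapp.jar
```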

DDL Pool Configuration Examples

The following configuration shows an example of configuring preallocation for the MyEntity Xbuf entity type:

Configuring Preallocation
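The example block above is empty in this version of the page; the following is a hedged sketch of what such a configuration might look like, composing the property names from the tables above with the MyEntity identifier from the earlier examples (verify the exact placement against your DDL schema — these could equally be supplied as system properties):

```
# Preallocate the pool backing the embedded Xbuf MyEntity type (factory id 301):
nv.pool.xbuf.entity.MyEntityXbufEntity.301.preallocate=true
nv.pool.xbuf.entity.MyEntityXbufEntity.301.initialCapacity=1024
```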

See Also
