The Talon Manual


...


To achieve zero-garbage operation in steady state, Talon pools objects to avoid allocations. Pools created by the platform start out with no objects preallocated, so it is important for performance sensitive applications to drive warm-up traffic at startup, not only to allow JIT optimization to kick in, but also to ensure that pools grow to an equilibrium state in which enough instances have been allocated to satisfy the number of objects in use at any given time. To reach such an equilibrium state, applications should push traffic at rates higher than the expected volume to ensure that pool sizes grow to a level that can accommodate spikes in traffic. In cases where application warm-up isn't sufficient to reach pool equilibrium, pools can be manually configured as described below.
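The growth-to-equilibrium behavior can be illustrated with a minimal generic object pool. This is not Talon's pool implementation, just a sketch of the concept: the pool allocates on a miss, reuses returned objects on a hit, and stops allocating once its capacity covers peak concurrent usage.

```java
import java.util.ArrayDeque;

// Minimal illustrative pool: grows on a miss, reuses returned objects on a hit.
// Not Talon's implementation -- just a sketch of warm-up reaching equilibrium.
final class SketchPool {
    private final ArrayDeque<byte[]> free = new ArrayDeque<>();
    int allocated; // total objects ever created (the pool's size after warm-up)

    byte[] get() {
        byte[] obj = free.poll();
        if (obj == null) {   // miss: the pool must grow
            allocated++;
            obj = new byte[256];
        }
        return obj;          // hit: reuse with no allocation
    }

    void put(byte[] obj) {
        free.push(obj);
    }
}

public class WarmupDemo {
    public static void main(String[] args) {
        SketchPool pool = new SketchPool();
        // Warm-up: drive bursts at least as large as peak concurrent usage
        // so the pool grows to equilibrium before real traffic arrives.
        byte[][] inFlight = new byte[64][];
        for (int burst = 0; burst < 10; burst++) {
            for (int i = 0; i < inFlight.length; i++) inFlight[i] = pool.get();
            for (byte[] obj : inFlight) pool.put(obj);
        }
        // Only the first burst allocates; the rest are served from the pool.
        System.out.println("allocated=" + pool.allocated); // allocated=64, not 640
    }
}
```

After the first burst the pool holds 64 objects and every subsequent get is a hit, which is the equilibrium state the paragraph above describes.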


Some cases where it is desirable to manually configure pool parameters include:

  • When performance (throughput and latency) is less important than memory footprint, disabling pools can avoid memory overhead in cases where objects might otherwise sit in a pool unused. 
  • Cases where it is challenging to drive warm-up traffic to the point where pools reach their optimal capacity. In such cases it is desirable to manually configure a pool for preallocation.
  • Pools that are used purely in a preallocated fashion. For example, applications that expect a given number of Order objects in a day may want to start with those objects preallocated.
  • In some cases, initial bursts of traffic at application startup cause pools of seldom used types to grow large, but then remain dormant, wasting memory. In this case it is desirable to limit the pool's capacity. 

...

Parameter: nv.pool.shouldpool
Default: true

This property can be set to false to globally disable all pooling, including platform internal pools.

Setting this value to false can adversely affect throughput and latency, but can, in some cases, be useful in lowering memory overhead for lower performance use cases in which garbage collection costs are low.
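For example, pooling can be disabled globally by passing the property to the JVM on the command line (the application jar name here is just a placeholder):

```
java -Dnv.pool.shouldpool=false -jar myapp.jar
```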

Parameter: nv.pkt.shouldpool
Default: false (true when nv.optimizefor=throughput|latency)

This property globally enables or disables packet pooling.

ADM generated Xbuf Messages or Entities are backed by, and pooled with, their backing packet objects; consequently, this parameter also controls pooling for these types.

Tip

Note that embedded entities are not backed by a packet and consequently are pooled independently of the Message or Entity in which they are contained, so this property does not affect embedded entities.

Packet pooling impacts several areas of the platform, including transaction logs and cluster replication, as both operate using packets to frame their data.

...

A pool is uniquely named in a JVM as <pooltype>.<poolname>.<poolinstanceid>. For example, a native 256 byte platform IOBuffer pool named "iobuf.native-256.23" has a pooltype of "iobuf", a pool name of "native-256", and is suffixed with a JVM-unique instance id that is assigned monotonically in each JVM. 
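The naming convention can be shown with a small parsing sketch (illustrative only; Talon does not expose such a parser): the first dot-delimited segment is the pool type, the last is the instance id, and everything between is the pool name, which may itself contain dots.

```java
public class PoolNameDemo {
    // Split "<pooltype>.<poolname>.<poolinstanceid>" positionally: the pool
    // name itself may contain dots, so only the first and last dots delimit.
    static String[] parse(String fullName) {
        int firstDot = fullName.indexOf('.');
        int lastDot = fullName.lastIndexOf('.');
        return new String[] {
            fullName.substring(0, firstDot),           // pooltype
            fullName.substring(firstDot + 1, lastDot), // poolname
            fullName.substring(lastDot + 1)            // pool instance id
        };
    }

    public static void main(String[] args) {
        String[] parts = parse("iobuf.native-256.23");
        System.out.println(parts[0] + " / " + parts[1] + " / " + parts[2]);
        // iobuf / native-256 / 23

        parts = parse("packet.MyMessageXbufPacket.71.1.266");
        System.out.println(parts[0] + " / " + parts[1] + " / " + parts[2]);
        // packet / MyMessageXbufPacket.71.1 / 266
    }
}
```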

...

Code Block
[Pool Stats]
PUT   DPUT  GET   DGET  HIT   DHIT  MISS  DMISS GROW  DGROW EVIC  DEVIC DWSH  DDWSH SIZE  PRE   CAP   NAME
612K  937   11.3M 947   612K  936   10.7M 11    0     0     0     0     0     0     1     0     1024  iobuf.native-256.23
8302  40    50670 41    8302  40    42368 1     0     0     0     0     0     0     0     0     1024  iobuf.native-512.24
62    20    1.5M  0     31    0     1.5M  0     0     0     0     0     0     0     31    0     1024  packet.MyMessageXbufPacket.71.1.266
62    20    1.5M  0     31    0     1.5M  0     0     0     0     0     0     0     31    0     1024  xbuf.entity.MyEntityXbufEntity.301.199.267


Pools are configured by specifying nv.pool.<poolidentifier>.<propertyname>, where the identifier can be either the 'pooltype' or the 'pooltype.poolname'.

...

  • "iobuf" applies to all IOBuffer pools
  • "iobuf.native-256" applies only to native 256 byte IOBuffer pools
  • "packet" applies to all packet pools

  • "packet.MyMessageXbufPacket.71.1" applies to the packet type backing the Xbuf generated MyMessage class with a factory id of 71 and type id of 1. The pool instance id of 266 in the above example has no bearing on configuration and will change from run to run of an application. 

  • "xbuf.entity" applies to all Xbuf embedded entities. 

  • "xbuf.entity.MyEntityXbufEntity.301.199" applies to the embedded Xbuf MyEntity class (which has a factory id of 301 and type id of 199). The pool instance id of 267 in the above example has no bearing on configuration and will change from run to run of an application. 

When both a type-level and a type+poolname property are configured, the more granular 'poolname' property takes precedence. 
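The precedence rule can be sketched as a lookup that tries the most specific key first. This is an illustrative sketch, not Talon's resolution code, and the property suffix "somesetting" is a hypothetical name used only for the example:

```java
import java.util.HashMap;
import java.util.Map;

public class PoolConfigDemo {
    // Resolve nv.pool.<poolidentifier>.<propertyname>: a key qualified with
    // "pooltype.poolname" wins over one qualified with "pooltype" alone.
    static String resolve(Map<String, String> props,
                          String poolType, String poolName, String property) {
        String specific = props.get("nv.pool." + poolType + "." + poolName + "." + property);
        if (specific != null) {
            return specific; // more granular 'poolname' key takes precedence
        }
        return props.get("nv.pool." + poolType + "." + property);
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        // "somesetting" is a hypothetical property name used for illustration.
        props.put("nv.pool.iobuf.somesetting", "1024");
        props.put("nv.pool.iobuf.native-256.somesetting", "4096");

        // The pool-name-qualified value wins for native-256...
        System.out.println(resolve(props, "iobuf", "native-256", "somesetting")); // 4096
        // ...while other iobuf pools fall back to the type-level value.
        System.out.println(resolve(props, "iobuf", "native-512", "somesetting")); // 1024
    }
}
```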

...