To achieve zero-garbage operation in steady state, Talon pools objects to avoid allocations. Pools created by the platform start out with no objects preallocated, so it is important for performance sensitive applications to drive warm-up traffic at startup, not only to allow JIT optimization to kick in, but also to grow the pools to an equilibrium state in which enough instances have been allocated to satisfy the number of objects in use at any given time. To reach such an equilibrium, applications should push warm-up traffic at rates higher than the expected volume so that pool sizes grow to a level that can accommodate spikes in traffic. In cases where application warm-up isn't sufficient to reach pool equilibrium, pools can be manually configured as described below.
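The warm-up dynamic can be illustrated with a minimal grow-on-demand pool sketch. This is a conceptual illustration only, not Talon's actual pool implementation; the `SketchPool` class and its `misses` counter are hypothetical. Once the pool has grown to the peak number of objects in flight, subsequent gets are satisfied from the pool without further allocation:

```java
import java.util.ArrayDeque;
import java.util.function.Supplier;

// Conceptual sketch only (hypothetical class, not Talon's pool implementation):
// a pool that grows on demand and reaches equilibrium once it holds as many
// items as are ever in flight at one time.
final class SketchPool<T> {
    private final ArrayDeque<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;
    int misses; // allocations forced because the pool was empty

    SketchPool(Supplier<T> factory) { this.factory = factory; }

    T get() {
        T item = free.poll();
        if (item == null) { misses++; item = factory.get(); } // pool grows
        return item;
    }

    void put(T item) { free.push(item); } // return item for reuse
}

public class WarmupDemo {
    public static void main(String[] args) {
        SketchPool<byte[]> pool = new SketchPool<>(() -> new byte[256]);
        // Warm-up: hold 32 items in flight at once so the pool grows
        // to the peak concurrent demand.
        byte[][] inFlight = new byte[32][];
        for (int i = 0; i < 32; i++) inFlight[i] = pool.get();
        for (int i = 0; i < 32; i++) pool.put(inFlight[i]);
        int missesAfterWarmup = pool.misses; // 32 allocations during warm-up
        // Steady state at the same concurrency: every get() is a pool hit.
        for (int round = 0; round < 1000; round++) {
            for (int i = 0; i < 32; i++) inFlight[i] = pool.get();
            for (int i = 0; i < 32; i++) pool.put(inFlight[i]);
        }
        System.out.println(pool.misses == missesAfterWarmup); // prints "true"
    }
}
```

If warm-up traffic never reaches the peak in-flight count seen in production, the pool keeps allocating (and generating garbage on eviction) under live load, which is why pushing traffic above expected volume at startup matters.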
Pooling of platform internal objects and of embedded entities generated with Xbuf encoding is enabled by default. ADM Messages and State Entities can also be pooled when generated with Xbuf encoding, but pooling of these types is currently only enabled when nv.optimizefor=throughput or nv.optimizefor=latency is set. In certain cases it may be desirable to change this behavior at a more granular level.
Parameter | Default | Comments
---|---|---
nv.pool.shouldpool | true | This property can be set to false to globally disable all pooling, including platform internal pools. Setting this value to false can adversely affect throughput and latency, but can be useful in lowering memory overhead for lower performance use cases in which garbage collection costs are low.
nv.pkt.shouldpool | false (true when nv.optimizefor is throughput or latency) | This property globally enables or disables packet pooling. ADM generated Xbuf Messages and Entities are backed by and pooled with their backing packet objects; consequently, this parameter also controls pooling for those types. Packet pooling impacts several areas of the platform, including transaction logs and cluster replication, as both operate on packets to frame their data.
A pool is uniquely named within a JVM as <pooltype>.<poolname>.<poolinstanceid>. For example, a native 256 byte platform IOBuffer pool named "iobuf.native-256.23" has a pooltype of "iobuf", a poolname of "native-256", and is suffixed with a JVM-unique instance id that is assigned monotonically as pools are created in each JVM.
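Because the pooltype itself may contain dots (e.g. "xbuf.entity"), only the trailing instance id can be split off a full pool name unambiguously. The following illustrative sketch (hypothetical helper, not a platform API) strips the instance id to recover the <pooltype>.<poolname> prefix used for configuration:

```java
// Illustrative sketch only (not a platform API): drop the trailing
// JVM-unique instance id from a full pool name, leaving the
// <pooltype>.<poolname> prefix usable as a configuration identifier.
public class PoolNames {
    static String configIdentifier(String fullPoolName) {
        int lastDot = fullPoolName.lastIndexOf('.');
        return fullPoolName.substring(0, lastDot); // drop ".<poolinstanceid>"
    }

    public static void main(String[] args) {
        // prints "iobuf.native-256"
        System.out.println(configIdentifier("iobuf.native-256.23"));
        // prints "packet.MyMessageXbufPacket.71.1"
        System.out.println(configIdentifier("packet.MyMessageXbufPacket.71.1.266"));
    }
}
```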
The following output from enabling pool stats trace shows some example pool names:
```
[Pool Stats]
   PUT DPUT   GET DGET   HIT DHIT  MISS DMISS GROW DGROW EVIC DEVIC DWSH DDWSH SIZE PRE  CAP NAME
  612K  937 11.3M  947  612K  936 10.7M    11    0     0    0     0    0     0    1   0 1024 iobuf.native-256.23
  8302   40 50670   41  8302   40 42368     1    0     0    0     0    0     0    0   0 1024 iobuf.native-512.24
    62   20  1.5M    0    31    0  1.5M     0    0     0    0     0    0     0   31   0 1024 packet.MyMessageXbufPacket.71.1.266
    62   20  1.5M    0    31    0  1.5M     0    0     0    0     0    0     0   31   0 1024 xbuf.entity.MyEntityXbufEntity.301.199.267
```
Pools are configured by specifying nv.pool.<poolidentifier>.<propertyname>, where the identifier can be either the pooltype or the pooltype.poolname.
Example pool identifiers from the above stats output:

- "packet" applies to all packet pools.
- "packet.MyMessageXbufPacket.71.1" applies to the packet type backing the Xbuf generated MyMessage class with a factory id of 71 and a type id of 1. The pool instance id of 266 in the above example has no bearing on configuration and will change from run to run of an application.
- "xbuf.entity" applies to all Xbuf embedded entities.
When both a pooltype level property and a pooltype.poolname level property are configured, the more granular pooltype.poolname property takes precedence.
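For example, using the nv.pool.<poolidentifier>.<propertyname> form described above (the capacity values here are hypothetical), the following sets an initial capacity for all packet pools and a larger one for the MyMessage packet pool; for that pool the second, more granular property wins:

```
nv.pool.packet.initialCapacity=2048
nv.pool.packet.MyMessageXbufPacket.71.1.initialCapacity=8192
```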
The following environment variables can be used to configure a specific pool using the above identifiers:
Parameter | Default | Comments |
---|---|---|
nv.pool.<poolIdentifier>.initialCapacity | 1024 | The initial size of the array to hold pool items that are not in use. |
nv.pool.<poolIdentifier>.maxCapacity | Integer.MAX_VALUE | The maximum number of slots for items not in use. If an item is returned to a pool and there are no slots left, it is evicted and becomes eligible for garbage collection. |
nv.pool.<poolIdentifier>.threaded | true | Whether or not the pool will be safe for access by multiple threads. |
nv.pool.<poolIdentifier>.preallocate | false | When true, the pool is filled to its initial capacity with newly created items when the pool is created. When false, items are created on demand. |
nv.pool.<poolIdentifier>.detachedWash | false | When items are returned to a pool, their fields are reset. When true, items returned to the pool become eligible for this cleanup on a detached thread. |
Note on the above defaults: each pool is created programmatically, and each pool instance may set its own default values; the defaults above apply to pools that haven't altered the default value.
The following configuration shows an example of configuring preallocation for the MyEntity Xbuf entity type:
```xml
<env>
    <!-- pool parameters -->
    <nv>
        <pool>
            <xbuf.entity.MyEntityXbufEntity.301>
                <initialCapacity>16384</initialCapacity>
                <preallocate>true</preallocate>
            </xbuf.entity.MyEntityXbufEntity.301>
        </pool>
    </nv>
</env>
```