
Friday, March 22, 2013

Infinispan performance tweaks

This article is a follow-up to Getting started: Infinispan as remote cache cluster.

The out-of-the-box Infinispan configuration works well for a low to medium volume of GET/PUT operations. But in distributed mode, under a heavy GET/PUT load, you may frequently see locking failures like this one:


2013-03-22 00:14:20,033 [DEBUG] org.infinispan.server.hotrod.HotRodDecoder HotRodClientMaster-63 - Exception caught
org.infinispan.server.hotrod.HotRodException: org.infinispan.util.concurrent.TimeoutException: Unable to acquire lock after [10 seconds] on key [ByteArrayKey{data=ByteArray{size=18, hashCode=48079ac7, array=0x033e0f3134354065..}}] for requestor [Thread[HotRodClientMaster-63,5,main]]! Lock held by [(another thread)]
        at org.infinispan.server.hotrod.HotRodDecoder.createServerException(HotRodDecoder.scala:214)
        at org.infinispan.server.core.AbstractProtocolDecoder.decode(AbstractProtocolDecoder.scala:75)
        at org.infinispan.server.core.AbstractProtocolDecoder.decode(AbstractProtocolDecoder.scala:45)

Infinispan uses locking to maintain cache consistency. Optimizing locking settings can help improve overall performance. Here are some configuration tips to avoid locking issues and improve concurrency:


    <default>
        <locking concurrencyLevel="1000" isolationLevel="READ_COMMITTED"
                 lockAcquisitionTimeout="500" useLockStriping="false" />
        <jmxStatistics enabled="true" />
        <!-- Configure an asynchronous distributed cache -->
        <clustering mode="distribution">
            <async />
            <hash numOwners="2" />
        </clustering>
    </default>

Explanation:
  • concurrencyLevel: Adjust this value according to the number of concurrent threads expected to interact with Infinispan.
  • lockAcquisitionTimeout: Maximum time to attempt a particular lock acquisition. Set this based on your application needs.
  • useLockStriping: If true, a pool of shared locks is maintained for all entries that need to be locked. Otherwise, a lock is created per entry in the cache. Lock striping helps control memory footprint but may reduce concurrency in the system.
Another configuration worth looking at is the Level 1 (L1) cache. An L1 cache prevents unnecessary remote fetching of entries mapped to remote nodes by storing them locally for a short time after they are first accessed. See the Infinispan documentation for more details.
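
The snippets above are declarative XML; if you configure Infinispan programmatically instead, the same knobs are available on the fluent ConfigurationBuilder API. The sketch below is only an illustration (it assumes the org.infinispan.configuration.cache builder introduced in 5.1, and the 60-second L1 lifespan is an arbitrary example value), showing the locking and L1 settings discussed above:

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.util.concurrent.IsolationLevel;

public class TunedCacheConfig {

    // Sketch only: mirrors the XML snippet above, plus an illustrative L1 setting.
    public static Configuration build() {
        ConfigurationBuilder builder = new ConfigurationBuilder();

        // Locking: high concurrency level, short lock acquisition timeout,
        // no lock striping (one lock per entry).
        builder.locking()
               .concurrencyLevel(1000)
               .isolationLevel(IsolationLevel.READ_COMMITTED)
               .lockAcquisitionTimeout(500)
               .useLockStriping(false);

        // Asynchronous distributed cache with two owners per entry.
        builder.clustering().cacheMode(CacheMode.DIST_ASYNC);
        builder.clustering().hash().numOwners(2);

        // L1: keep remotely-owned entries locally for a short time (60s here).
        builder.clustering().l1().enable().lifespan(60000L);

        return builder.build();
    }
}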

Sunday, July 22, 2012

Getting started: Infinispan as remote cache cluster


This guide will walk you through configuring and running Infinispan as a remote distributed cache cluster. There is straightforward documentation for running Infinispan in embedded mode, but no complete documentation for running it in client/server (remote) mode. This guide helps bridge that gap.

Infinispan offers four modes of operation, which determine how and where the data is stored:
  • Local, where entries are stored on the local node only, regardless of whether a cluster has formed. In this mode Infinispan is typically operating as a local cache
  • Invalidation, where all entries are stored into a cache store (such as a database) only, and invalidated from all nodes. When a node needs the entry it will load it from a cache store. In this mode Infinispan is operating as a distributed cache, backed by a canonical data store such as a database
  • Replication, where all entries are replicated to all nodes. In this mode Infinispan is typically operating as a data grid or a temporary data store, but doesn't offer an increased heap space
  • Distribution, where entries are distributed to a subset of the nodes only. In this mode Infinispan is typically operating as a data grid providing an increased heap space
Invalidation, Replication, and Distribution can all use synchronous or asynchronous communication, as the short sketch below illustrates.
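
For illustration only (this is not part of the original guide), these modes map onto the CacheMode constants of Infinispan's programmatic API, with the clustered modes coming in synchronous and asynchronous flavours:

import org.infinispan.configuration.cache.CacheMode;

public class Modes {
    public static void main(String[] args) {
        // Local: entries stay on the local node only.
        System.out.println(CacheMode.LOCAL.isClustered());                // false
        // Invalidation, Replication and Distribution each have SYNC/ASYNC variants.
        System.out.println(CacheMode.INVALIDATION_ASYNC.isSynchronous()); // false
        System.out.println(CacheMode.REPL_SYNC.isReplicated());           // true
        System.out.println(CacheMode.DIST_ASYNC.isDistributed());         // true
    }
}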

Infinispan offers two access patterns, both of which are available in any runtime:
  • Embedded into your application code
  • As a Remote server accessed by a client (REST, memcached or Hot Rod)
In this guide, we will configure an Infinispan server with a Hot Rod endpoint and access it via a Java Hot Rod client. One reason to use the Hot Rod protocol is that it provides automatic load balancing and failover.

1. Download the full distribution of Infinispan. I will use version 5.1.5.
2. Configure Infinispan to run in distributed mode. Create infinispan-distributed.xml.



<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns="urn:infinispan:config:5.1"
    xsi:schemaLocation="urn:infinispan:config:5.1 http://www.infinispan.org/schemas/infinispan-config-5.1.xsd">
  <global>
    <globalJmxStatistics enabled="true" />
    <transport>
      <properties>
        <property name="configurationFile" value="jgroups.xml" />
      </properties>
    </transport>
  </global>

  <default>
    <jmxStatistics enabled="true" />
    <clustering mode="distribution">
      <async />
      <hash numOwners="2" />
    </clustering>
  </default>

  <namedCache name="myCache">
    <clustering mode="distribution">
      <sync />
      <hash numOwners="2" />
    </clustering>
  </namedCache>
</infinispan>


We will use JGroups to set up cluster communication. Copy etc/jgroups-tcp.xml as jgroups.xml.

3. Place infinispan-distributed.xml and jgroups.xml in the bin folder. Start two Infinispan instances on the same or different machines.

Starting an Infinispan server is easy: from the distribution you unzipped in step 1, use the startServer script.


bin\startServer.bat --help // Print all available options
bin\startServer.bat -r hotrod -c infinispan-distributed.xml -p 11222
bin\startServer.bat -r hotrod -c infinispan-distributed.xml -p 11223

The two server instances will start talking to each other via JGroups.

4. Create a simple Remote HotRod Java Client.



import java.net.URL;
import java.util.Map;

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.ServerStatistics;


public class Quickstart {

 public static void main(String[] args) {

  URL resource = Thread.currentThread().getContextClassLoader()
                                     .getResource("hotrod-client.properties");
  RemoteCacheManager cacheContainer = new RemoteCacheManager(resource, true);

  //obtain a handle to the remote default cache
  RemoteCache<String, String> cache = cacheContainer.getCache("myCache");

  //now add something to the cache and make sure it is there
  cache.put("car", "ferrari");
  if(cache.get("car").equals("ferrari")){
   System.out.println("Found");
  } else {
   System.out.println("Not found!");
  }

  //remove the data
  cache.remove("car");

  //Print cache statistics
  ServerStatistics stats = cache.stats();
  for (Map.Entry<String, String> stat : stats.getStatsMap().entrySet()) {
   System.out.println(stat.getKey() + " : " + stat.getValue());
  }

  // Print Cache properties
  System.out.println(cacheContainer.getProperties());

  cacheContainer.stop();
 }
}


5. Define hotrod-client.properties and place it on the classpath (the client loads it as a classpath resource).


infinispan.client.hotrod.server_list = localhost:11222;localhost:11223;
infinispan.client.hotrod.socket_timeout = 500
infinispan.client.hotrod.connect_timeout = 10

## below is connection pooling config
maxActive=-1
maxTotal = -1
maxIdle = -1
whenExhaustedAction = 1
timeBetweenEvictionRunsMillis=120000
minEvictableIdleTimeMillis=1800000
testWhileIdle = true
minIdle = 1


See the RemoteCacheManager Javadoc for all available properties.
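
If you would rather not ship a properties file, the same settings can be passed programmatically. A minimal sketch, assuming the Properties-based RemoteCacheManager constructor of this client version:

import java.util.Properties;

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

public class ProgrammaticQuickstart {

    public static void main(String[] args) {
        // Same settings as hotrod-client.properties, built in code.
        Properties props = new Properties();
        props.setProperty("infinispan.client.hotrod.server_list",
                          "localhost:11222;localhost:11223");
        props.setProperty("infinispan.client.hotrod.socket_timeout", "500");
        props.setProperty("infinispan.client.hotrod.connect_timeout", "10");

        RemoteCacheManager cacheContainer = new RemoteCacheManager(props, true);
        RemoteCache<String, String> cache = cacheContainer.getCache("myCache");

        cache.put("car", "ferrari");
        System.out.println(cache.get("car"));

        cacheContainer.stop();
    }
}

The connection pool settings from the properties file can be added to the same Properties object.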

6. Run Quickstart.java. You will see something like this on the console:



Jul 22, 2012 9:40:39 PM org.infinispan.client.hotrod.impl.protocol.Codec10 
readNewTopologyAndHash
INFO: ISPN004006: localhost/127.0.0.1:11223 sent new topology view (id=3) 
containing 2 addresses: [/127.0.0.1:11223, /127.0.0.1:11222]
Found

hits : 3
currentNumberOfEntries : 1
totalBytesRead : 332
timeSinceStart : 1281
totalNumberOfEntries : 8
totalBytesWritten : 926
removeMisses : 0
removeHits : 0
retrievals : 3
stores : 8
misses : 0
{whenExhaustedAction=1, maxIdle=-1, infinispan.client.hotrod.connect_timeout=10, 
maxActive=-1, testWhileIdle=true, minEvictableIdleTimeMillis=1800000, maxTotal=-1, 
minIdle=1, infinispan.client.hotrod.server_list=localhost:11222;localhost:11223;, 
timeBetweenEvictionRunsMillis=120000, infinispan.client.hotrod.socket_timeout=500}



As you will notice, the cache server returns the cluster topology when the connection is established. You can start more Infinispan instances and watch the topology view update quickly.

That's it!

Some useful links:

http://docs.jboss.org/infinispan/5.1/configdocs/
https://github.com/infinispan/infinispan-quickstart
https://github.com/infinispan/infinispan/tree/master/client/hotrod-client
https://docs.jboss.org/author/display/ISPN/Using+Hot+Rod+Server
https://docs.jboss.org/author/display/ISPN/Java+Hot+Rod+client