If you use the distributed cache scheme in a Coherence cluster, all data is distributed across the storage nodes. The partition-count should be sized to the data; by default it is 257, which is suitable for less than 100 MB of data.
The partition-count is a setting in the cache scheme, and you can always override it.
Here I changed the partition-count to 13 and set the backup-count to 1 (each piece of data will have one backup copy).
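For reference, a minimal `distributed-scheme` fragment with these two overrides might look like the sketch below (the scheme name and service name are placeholders, not from the original post):

```xml
<distributed-scheme>
  <!-- placeholder names for illustration -->
  <scheme-name>example-distributed</scheme-name>
  <service-name>DistributedCache</service-name>
  <!-- override the default 257 partitions -->
  <partition-count>13</partition-count>
  <!-- one backup copy of each partition -->
  <backup-count>1</backup-count>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```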
Then hook up the BTrace script to any node:
@OnMethod(clazz = "com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache", method = "getStorageAssignments")
Let’s start with one proxy node and two storage nodes.
Here is the partition output (keys 0-12). Node 1 is the proxy node; nodes 2 and 3 are the storage nodes, and each keeps the backup copies of the other's primary partitions.
If I add one more storage node with ID=5, the new node takes over some of the partitions (this ensures each storage node holds a roughly equal share of the data).
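The rebalancing above can be sketched with a toy model (this is not Coherence's actual assignment algorithm, just an illustration of equal ownership): spread the 13 partitions round-robin over the current storage members, so each member owns roughly partition-count / member-count partitions, and a joining member picks up its share.

```java
import java.util.*;

public class PartitionBalanceSketch {
    // Toy model: assign each partition to a storage member round-robin,
    // so ownership stays roughly equal as members join.
    static Map<Integer, Integer> assign(int partitionCount, List<Integer> memberIds) {
        Map<Integer, Integer> owners = new HashMap<>();
        for (int p = 0; p < partitionCount; p++) {
            owners.put(p, memberIds.get(p % memberIds.size()));
        }
        return owners;
    }

    public static void main(String[] args) {
        // Two storage members (IDs 2 and 3), then member 5 joins.
        Map<Integer, Integer> after = assign(13, List.of(2, 3, 5));
        long ownedBy5 = after.values().stream().filter(m -> m == 5).count();
        System.out.println("partitions owned by member 5 after join: " + ownedBy5);
    }
}
```

With 13 partitions and three members, each member ends up owning four or five partitions.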
Key 0 Primary 5 Backup 2
This partition table is shared by all nodes (both the proxy node and the storage nodes). So when the proxy gets a put/get request, it runs the partition logic first to locate the primary member that owns the data, then dispatches the request to that node for storage or query.
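That routing step can be sketched as follows. The partition table contents here are hypothetical (a round-robin spread over members 2, 3, and 5, not the actual table from the trace output), and the hash-modulo mapping is a simplification of Coherence's key partitioning:

```java
import java.util.*;

public class ProxyRoutingSketch {
    // Toy partition table: partition -> primary member id.
    // In Coherence this table is shared by all cluster members.
    static final int PARTITION_COUNT = 13;
    static final Map<Integer, Integer> PRIMARY = new HashMap<>();
    static {
        // hypothetical ownership: members 2, 3 and 5 share the 13 partitions
        int[] members = {2, 3, 5};
        for (int p = 0; p < PARTITION_COUNT; p++) {
            PRIMARY.put(p, members[p % members.length]);
        }
    }

    // What the proxy conceptually does for a put/get:
    // hash the key into a partition, then look up the primary owner.
    static int route(Object key) {
        int partition = Math.abs(key.hashCode() % PARTITION_COUNT);
        return PRIMARY.get(partition);
    }

    public static void main(String[] args) {
        System.out.println("key 0 goes to member " + route(0));
    }
}
```

The proxy then forwards the request to that member; reads and writes for the same key always land on the same primary.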
The storage node stores the data and maintains the key index.