Tuesday, October 25, 2011

Coherence push replication, com.tangosol.net.DefaultConfigurableCacheFactory cannot be cast to com.oracle.coherence.environment.Environment

Coherence has several incubator projects; Push Replication is one of them. It lets you turn on replication between separate standalone clusters, e.g. cross-WAN replication between data centers.

After reading their rather limited documentation, I tried to set up master-slave replication between two separate clusters on my PC. On the master side, here is the cache configuration: basically it loads the default Coherence config and references the incubator POF file.
image
Next, the configuration picks up the remote cluster publisher.

Using the distributed-scheme-with-publishing-cachestore scheme lets the runtime capture each entry change into a queue, which is then flushed to the remote cluster via the remote invocation service.
image

All set. But when I tried to feed some data into the local (master) cache, a long stack trace appeared.

Map (master): put a 2
2011-10-25 09:17:19.862/22.446 Oracle Coherence GE 3.7.1.0 <Error> (thread=main, member=2):
Portable(com.tangosol.util.WrapperException): (Wrapped: Failed request execution for DistributedCacheWithPublishingCacheStore service on Member(Id=1, Timestamp=
706, Address=192.168.137.1:8090, MachineId=63704, Location=site:E3,machine:androidyou-PC,process:6004, Role=CoherenceServer)) null
        at com.tangosol.util.Base.ensureRuntimeException(Base.java:288)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.tagException(Grid.CDB:36)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onPutRequest(PartitionedCache.CDB:50)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$PutRequest.run(PartitionedCache.CDB:1)
        at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:1)
        at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
        at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:63)
        at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
        at java.lang.Thread.run(Thread.java:619)
        at <process boundary>
        at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:57)
        at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
        at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
        at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:368)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:1)
        at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
        at com.tangosol.coherence.component.net.message.SimpleResponse.read(SimpleResponse.CDB:6)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.deserializeMessage(Grid.CDB:19)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:31)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
        at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
        at java.lang.Thread.run(Thread.java:619)
Caused by: Portable(java.lang.UnsupportedOperationException)
        at java.util.AbstractMap.put(AbstractMap.java:186)
        at com.tangosol.util.WrapperObservableMap.put(WrapperObservableMap.java:151)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.postPut(PartitionedCache.CDB:70)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.put(PartitionedCache.CDB:17)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onPutRequest(PartitionedCache.CDB:25)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$PutRequest.run(PartitionedCache.CDB:1)
        at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:1)
        at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
        at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:63)
        at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
        at java.lang.Thread.run(Thread.java:619)
        at <process boundary>
        at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:57)


On the storage node, here is the error that may have brought you to this page:

</class-scheme>) java.lang.reflect.InvocationTargetException
        at com.tangosol.util.Base.ensureRuntimeException(Base.java:288)
        at com.tangosol.run.xml.XmlHelper.createInstance(XmlHelper.java:2652)
        at com.tangosol.run.xml.XmlHelper.createInstance(XmlHelper.java:2536)
        at com.tangosol.net.DefaultConfigurableCacheFactory.instantiateAny(DefaultConfigurableCacheFactory.java:3476)
        at com.tangosol.net.DefaultConfigurableCacheFactory.instantiateCacheStore(DefaultConfigurableCacheFactory.java:3324)
        at com.tangosol.net.DefaultConfigurableCacheFactory.instantiateReadWriteBackingMap(DefaultConfigurableCacheFactory.java:1753)
        at com.tangosol.net.DefaultConfigurableCacheFactory.configureBackingMap(DefaultConfigurableCacheFactory.java:1500)
        at com.tangosol.net.DefaultConfigurableCacheFactory$Manager.instantiateBackingMap(DefaultConfigurableCacheFactory.java:4111)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.instantiateBackingMap(Partitione
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.setCacheName(PartitionedCache.CD
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ServiceConfig$ConfigListener.entryInsert
CDB:17)
        at com.tangosol.util.MapEvent.dispatch(MapEvent.java:266)
        at com.tangosol.util.MapEvent.dispatch(MapEvent.java:226)
        at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:567)
        at com.tangosol.util.ObservableHashMap.dispatchEvent(ObservableHashMap.java:229)
        at com.tangosol.util.ObservableHashMap$Entry.onAdd(ObservableHashMap.java:270)
        at com.tangosol.util.SafeHashMap.put(SafeHashMap.java:244)
        at com.tangosol.coherence.component.util.ServiceConfig$Map.put(ServiceConfig.CDB:43)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$StorageIdRequest.onReceived(PartitionedC
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:34)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:33)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
        at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
        at java.lang.Thread.run(Thread.java:619)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at com.tangosol.util.ClassHelper.newInstance(ClassHelper.java:694)
        at com.tangosol.run.xml.XmlHelper.createInstance(XmlHelper.java:2611)
        ... 23 more
Caused by: java.lang.ClassCastException: com.tangosol.net.DefaultConfigurableCacheFactory cannot be cast to com.oracle.coherence.environment.Environment
        at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore.<init>(PublishingCacheStore.java:179)

        ... 29 more
2011-10-25 09:17:16.120/18.685 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member 2 joined Service DistributedCacheWithPublishingCacheStore wit


It is trying to cast DefaultConfigurableCacheFactory to Environment? Where is the Environment class located: in standard Coherence, or in the incubator project? Let me find out in Eclipse.
It is in the common library used by Push Replication:

image

Checking the class hierarchy, it is another cache factory:
image

So change the cache factory from the default DefaultConfigurableCacheFactory to the incubator cache factory; it will then pick up the incubator settings such as the sync namespace.

Old:
image
should be:
image
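As a sketch of that change: the incubator's coherence-common library provides ExtensibleEnvironment, which extends DefaultConfigurableCacheFactory and implements Environment, so the cast in PublishingCacheStore succeeds. Swapping it in via tangosol-coherence-override.xml might look like the following (verify the class name against your incubator version):

```xml
<!-- tangosol-coherence-override.xml: replace the default cache factory with
     the incubator's ExtensibleEnvironment (a sketch; check the class name
     against the coherence-common jar you are using). -->
<coherence>
  <configurable-cache-factory-config>
    <class-name>com.oracle.coherence.environment.extensible.ExtensibleEnvironment</class-name>
    <init-params>
      <init-param>
        <param-type>java.lang.String</param-type>
        <param-value system-property="tangosol.coherence.cacheconfig">coherence-cache-config.xml</param-value>
      </init-param>
    </init-params>
  </configurable-cache-factory-config>
</coherence>
```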

Then everything is back to normal. Hope it helps.

Tuesday, October 18, 2011

Android, Get the Process ID and Thread ID

When you debug a multi-threaded app on Android, it helps to be able to tell which process and thread a log line came from. So I use one simple helper that dumps a logcat message with the time, process ID, thread ID, and the message itself.

import android.os.Process;
import android.util.Log;
import java.util.Date;

public static void MLog(String msg) {
        // Tag "xxxnew" makes the output easy to filter in logcat
        Log.d("xxxnew", String.format("at %s Process ID %d, Thread ID %d %s",
                new Date(), Process.myPid(), Process.myTid(), msg));
}
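For quick experiments outside Android, the same idea can be sketched in plain Java (assuming Java 9+ for ProcessHandle; the class and method names below are my own, not part of the helper above):

```java
import java.util.Date;

public class PidTidDemo {
    // Plain-JVM equivalent of the Android helper: same format string,
    // with ProcessHandle/Thread standing in for android.os.Process.
    static String logLine(String msg) {
        long pid = ProcessHandle.current().pid();      // process ID
        long tid = Thread.currentThread().getId();     // thread ID
        return String.format("at %s Process ID %d, Thread ID %d %s",
                new Date(), pid, tid, msg);
    }

    public static void main(String[] args) {
        System.out.println(logLine("hello from main"));
    }
}
```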


image

Saturday, October 15, 2011

HP Notebook, stuck on booting screen, Status: 0xc00000e9

One bad day, my HP Pavilion got stuck on the boot screen. Here is what it looks like:
Status: 0xc00000e9
Info: An unexpected I/O error has occurred.
image

So basically it is an I/O error, almost certainly a disk I/O error. I went into the BIOS and ran Diagnostics -> Hard Drive Test,
and got a 303 error, which looks like a 404 for geeks:
image

Here is the list of HP error codes: http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=c01443317

No magic: the hard drive died, and replacing it with a new one fixes the issue.

Friday, October 14, 2011

Hadoop Hbase, Test JAVA client and inspect the network flow

I just set up three VMs with the following roles:

192.168.209.130 /HOME NameNode, HbaseMaster(Standby server), Region Server, Zookeeper
192.168.209.132 /LA DataNode, HbaseMaster (Active One), Region Server, Zookeeper
192.168.209.133 /NJ Zookeeper

Then I created a basic table called ‘customer’ with Info as its only column family.

image
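In the hbase shell, that table creation is the standard one-liner (column family name as shown above):

```
create 'customer', 'Info'
```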

Then, on the client machine (Windows 7, running at 192.168.209.1), I wrote a simple HBase client to push some data into HBase. Here is the code.

image 
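Since the code above is only a screenshot, here is a minimal sketch of such a client against the HBase client API of that era (HTable has since been replaced by Table/Connection in later releases; the row keys and values below are my own invention):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class CustomerLoader {
    public static void main(String[] args) throws Exception {
        // Picks up hbase-site.xml from the classpath (the conf folder)
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "customer");
        for (int i = 1; i <= 5; i++) {
            Put put = new Put(Bytes.toBytes("row" + i));
            // column family "Info", qualifier "name"
            put.add(Bytes.toBytes("Info"), Bytes.toBytes("name"),
                    Bytes.toBytes("customer-" + i));
            table.put(put);
        }
        table.close();
    }
}
```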

On the client side, you need to reference the HBase jars and put the conf folder on the classpath. In the client's hbase-site.xml, just point to the ZooKeeper instances; here I list all three ZooKeeper quorum members.
image
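The client-side hbase-site.xml can be as small as the quorum list; a sketch using the three VM addresses above (hostnames work too if they resolve):

```xml
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>192.168.209.130,192.168.209.132,192.168.209.133</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```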

Then run the program. Once it is done, you can verify from the hbase shell that the 5 records are there.
image

From the client's console log, you can tell that it first talks to ZooKeeper to find which master is active, then queries the root region, then the meta region information.
image

After that, it writes the data directly to the corresponding region server.

Here is the network flow layout.
First, the client sets up a TCP connection with ZooKeeper to locate the master server; 2181 is the ZooKeeper listening port.
image

Then it talks to the root region server (which is LA) to get the region allocation.
image
Once it has that information, it is cached in memory, and the client puts the data directly to that region server.

image

That is the entire client conversation (client, ZooKeeper, region server).

The general communication flow is that a new client contacts the ZooKeeper ensemble
(a separate cluster of ZooKeeper nodes) first when trying to access a particular row. It
does so by retrieving the server name (i.e., hostname) that hosts the -ROOT- region from
ZooKeeper. With this information it can query that region server to get the server name
that hosts the .META. table region containing the row key in question. Both of these
details are cached and only looked up once. Lastly, it can query the reported .META.
server and retrieve the server name that has the region containing the row key the client
is looking for.

 