Wednesday, December 14, 2011

Cassandra OpsCenter Community features

I played with OpsCenter for several hours; here are the major features, in my opinion.

Monitoring the status of the whole cluster: requests/s, disk capacity, I/O.
image

Nodetool visualization, with maintenance jobs just a click away, such as draining a node, compacting, or taking a node out of the cluster (decommission). Sadly, rebalancing the cluster is only available in the Enterprise version.
image

Clicking a node, you can do a lot of stuff, such as viewing the multi-DC replication.

image

Data modeling and exploring (creating keyspaces and viewing data), and updating column families.
image

Exploring data,
image

Monitoring again,
image

Actions on a node (like nodetool):

image

Cassandra OpsCenter: connected to the cluster, but no node information is listed

Configuration is pretty simple for OpsCenter: just change the files under the conf folder to point to the correct seed server. After that, I can query data from OpsCenter.
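For reference, a minimal sketch of the kind of change involved; treat the section and option names below as assumptions and check them against the conf files shipped with your OpsCenter version, and note the host is just a placeholder:

# conf/opscenterd.conf (sketch; option names may differ per version)
[jmx]
port = 7199

[cassandra]
seed_hosts = 192.168.137.101   # placeholder: one of the cluster's seed nodes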

image
However, on the dashboard there is no node information.
image
When I checked the logs under opscenter, I saw it gets an error when it tries to fetch node information from the Thrift API; that explains why no results are returned.
image

After a lot of testing, I noticed this is because of a version mismatch: the Cassandra cluster runs version 1.0,
image

while the OpsCenter version is 1.3.1, as you can tell from the console at the bottom left.
image

Then I updated Cassandra to the latest version, 1.0.5, and it works without any problem.
image

Cassandra OpsCenter: Failed to load application: libpython2.6.so.1.0

I unzipped the OpsCenter file, installed Python 2.7, then ran OpsCenter; it failed with the following error.

[root@e3 opscenter]# bin/opscenter
/usr/local/bin/python2.7
Traceback (most recent call last):
File "/usr/lib/cassandra/opscenter-1.3.1/lib/py-redhat/2.6/shared/amd64/twisted/application/app.py", line 631, in run
runApp(config)
File "/usr/lib/cassandra/opscenter-1.3.1/lib/py-redhat/2.6/shared/amd64/twisted/scripts/twistd.py", line 23, in runApp
_SomeApplicationRunner(config).run()
File "/usr/lib/cassandra/opscenter-1.3.1/lib/py-redhat/2.6/shared/amd64/twisted/application/app.py", line 374, in run
self.application = self.createOrGetApplication()
File "/usr/lib/cassandra/opscenter-1.3.1/lib/py-redhat/2.6/shared/amd64/twisted/application/app.py", line 439, in createOrGetApplication
application = getApplication(self.config, passphrase)
--- <exception caught here> ---
File "/usr/lib/cassandra/opscenter-1.3.1/lib/py-redhat/2.6/shared/amd64/twisted/application/app.py", line 450, in getApplication
application = service.loadApplication(filename, style, passphrase)
File "/usr/lib/cassandra/opscenter-1.3.1/lib/py-redhat/2.6/shared/amd64/twisted/application/service.py", line 400, in loadApplication
application = sob.loadValueFromFile(filename, 'application', passphrase)
File "/usr/lib/cassandra/opscenter-1.3.1/lib/py-redhat/2.6/shared/amd64/twisted/persisted/sob.py", line 210, in loadValueFromFile
exec fileObj in d, d
File "bin/start_opscenter.py", line 1, in <module>
from opscenterd import opscenterd_tap
File "build/lib/python2.7/site-packages/opscenterd/opscenterd_tap.py", line 16, in <module>

File "build/lib/python2.7/site-packages/opscenterd/Config.py", line 392, in init_config

File "build/lib/python2.7/site-packages/opscenterd/events/plugins/CassandraStore.py", line 12, in <module>

File "build/lib/python2.7/site-packages/opscenterd/CassandraService.py", line 17, in <module>

File "build/lib/python2.7/site-packages/opscenterd/Cluster.py", line 14, in <module>

File "build/lib/python2.7/site-packages/opscenterd/AgentServer.py", line 23, in <module>

File "build/lib/python2.7/site-packages/opscenterd/HttpUtils.py", line 10, in <module>

File "/usr/lib/cassandra/opscenter-1.3.1/lib/py-redhat/2.6/5/amd64/OpenSSL/__init__.py", line 11, in <module>
import rand, crypto, SSL, tsafe
exceptions.ImportError: libpython2.6.so.1.0: cannot open shared object file: No such file or directory

Failed to load application: libpython2.6.so.1.0: cannot open shared object file: No such file or directory


When I list the libpython modules under /usr/lib64/libpython*, there is only the 2.4 module. So we need to install the 2.6 module required by OpsCenter.

1. First, check the OS; mine is CentOS 5.

image

Go to http://download.fedora.redhat.com/pub/epel/5/x86_64/ and download epel-release-5-4.noarch.rpm.

2. Install the rpm: “rpm -Uvh epel-release*rpm”.

3. yum install python26-libs
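
Putting the three steps together (CentOS 5 x86_64; the exact EPEL rpm name may vary):

wget http://download.fedora.redhat.com/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
rpm -Uvh epel-release-5-4.noarch.rpm
yum install python26-libs
ls /usr/lib64/libpython2.6*    # verify the 2.6 shared library is now present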

image

Once done, you can see the 2.6 modules under the lib folder,
image

Now we can run OpsCenter.

Tuesday, December 6, 2011

How to: write a key mapper or key-transformation utility in 10 minutes

Question: we need a utility to map or transform keystrokes; say, when you press F6, it types a mailing address, and when you press F7, the zip code of your city.

Answer: in C#, it’s both easy and not easy. You have to wrap several native APIs to hook the mapping point into the system; the easy part is that there is one great method called SendKeys.Send.

So eventually we need one file to keep the mapping logic; here I just pick the .config file. The code is below; you can copy it, save it as a .cs file, and compile it.

using System;
using System.Diagnostics;
using System.Windows.Forms;
using System.Runtime.InteropServices;
using System.Text;
using System.Collections.Specialized;

public class KeyMapper
{
    private const int WH_KEYBOARD_LL = 13;
    private const int WM_KEYDOWN = 0x0100;

    private static LowLevelKeyboardProc _proc = HookCallback;
    private static IntPtr _hookID = IntPtr.Zero;
    private static NameValueCollection mappings;
    public static void Main()
    {
        //Exit if another instance is already running
        //(the process name must match the compiled exe name, here "af")
        if(Process.GetProcessesByName("af").Length>1)
        {
            return;
        }
        //keep the console window small
        Console.SetWindowSize(1,2);

        mappings = System.Configuration.ConfigurationManager.AppSettings;

        _hookID = SetHook(_proc);
        Application.Run();
        UnhookWindowsHookEx(_hookID);
    }

    private static IntPtr SetHook(LowLevelKeyboardProc proc)
    {
        using (Process curProcess = Process.GetCurrentProcess())
        using (ProcessModule curModule = curProcess.MainModule)
        {
            return SetWindowsHookEx(WH_KEYBOARD_LL, proc,
                GetModuleHandle(curModule.ModuleName), 0);
        }
    }

    private delegate IntPtr LowLevelKeyboardProc(
        int nCode, IntPtr wParam, IntPtr lParam);

    private static IntPtr HookCallback(
        int nCode, IntPtr wParam, IntPtr lParam)
    {
        if (nCode >= 0 && wParam == (IntPtr)WM_KEYDOWN)
        {
            int vkCode = Marshal.ReadInt32(lParam);
            String key = ((Keys)vkCode).ToString();

            if (mappings[key] != null)
                SendKeys.Send(mappings[key]);
        }
        return CallNextHookEx(_hookID, nCode, wParam, lParam);
    }

 


    [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)]
    private static extern IntPtr SetWindowsHookEx(int idHook,
        LowLevelKeyboardProc lpfn, IntPtr hMod, uint dwThreadId);

    [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    private static extern bool UnhookWindowsHookEx(IntPtr hhk);

    [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)]
    private static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode,
        IntPtr wParam, IntPtr lParam);

    [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
    private static extern IntPtr GetModuleHandle(string lpModuleName);
}
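
To compile it, the C# compiler needs a reference to System.Configuration for ConfigurationManager; note the output name af.exe, which matches the duplicate-process check in Main:

csc /reference:System.Configuration.dll /out:af.exe KeyMapper.cs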

Config file, like mine:
image
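
In case the screenshot doesn't load, here is a hypothetical af.exe.config; the keys are Keys-enum names as produced by the hook, and the values are what SendKeys.Send will type (remember that SendKeys treats characters such as +, ^, % and braces specially):

<configuration>
  <appSettings>
    <!-- hypothetical mappings: press F6 for an address, F7 for a zip code -->
    <add key="F6" value="123 Main Street, Springfield" />
    <add key="F7" value="12345" />
  </appSettings>
</configuration>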

Then, once you start the console application, wherever you press F6, it will put the sample address there.
image

If you can’t compile the code, leave a comment and I will email you the compiled bits.

Wednesday, November 30, 2011

Error: VMware Player, No Disk: “There is no disk in the drive. Please insert a disk into drive \Device\Harddisk2\DR4.”

One day I got this error right after opening VMware Player; it kept popping up whenever I clicked one of the VMs.
image

Then I ran diskpart from the cmd shell and called “list disk”, which shows all the hard disks and their sizes.
image
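For reference, the commands are simply:

C:\> diskpart
DISKPART> list disk
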
Disk 2 is a weird one: its size is 0 B.

Then I checked all my hard disks: I had plugged my phone into the USB port. After unplugging the cable, Disk 2 was gone,
image

and the No Disk error was fixed too.

Wednesday, November 16, 2011

How to: Install and Test apache mahout on hadoop

Mahout and Hadoop are basically both Java libraries; Mahout uses Maven to build the source code and manage its dependencies.
So we need to make sure we have the following bits ready:

  • JDK
  • Maven
  • Hadoop
  • Mahout

 

I will start from a fresh CentOS install, then get all that stuff ready step by step.

Install the JDK.
Go to the Oracle JDK download site, http://www.oracle.com/technetwork/java/javase/downloads/index.html. I still prefer Java 6 over 7; pick one of the .bin links, download it, and run it directly. I will put Java under the /usr/lib/jdk6 folder.
image 
Export the bin directory to PATH, and point the JAVA_HOME environment variable at the jdk6 folder.
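
For example, appended to ~/.bashrc (assuming the JDK landed in /usr/lib/jdk6 as above):

export JAVA_HOME=/usr/lib/jdk6
export PATH=$JAVA_HOME/bin:$PATH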

Install Maven.
Download the binary package from http://maven.apache.org/download.html; here I chose version 2.2.1, which is more stable.
image

Extract the archive and link it to /usr/lib/maven. Then export /usr/lib/maven/bin to the PATH. Now you can run mvn --version to make sure it works; at the very least we should get the version:
image
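
Roughly (the archive name depends on the exact download):

tar -xzf apache-maven-2.2.1-bin.tar.gz
ln -s $(pwd)/apache-maven-2.2.1 /usr/lib/maven
export PATH=/usr/lib/maven/bin:$PATH
mvn --version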
For settings like proxy, check it out here, http://maven.apache.org/download.html#Maven_Documentation

Install Hadoop.
If you want to install Hadoop in fully distributed mode, check this out: How to: install and config hadoop to run in fully distributed Mode, Centos.

Here we just have one VM, so let's keep it easy for the Mahout testing. I will use the Cloudera distribution.

Download the repo file for CentOS 5, http://archive.cloudera.com/redhat/cdh/cloudera-cdh3.repo, and copy it to the yum repo directory (/etc/yum.repos.d).
image
Now just search for hadoop and you will see all the components; we will use the hadoop-0.20-conf-pseudo package:
yum install hadoop-0.20-conf-pseudo

image

Once done, go to the /usr/lib/hadoop/conf directory and change JAVA_HOME to /usr/lib/jdk6 in hadoop-env.sh.

image
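The changed line in hadoop-env.sh looks like this:

# /usr/lib/hadoop/conf/hadoop-env.sh
export JAVA_HOME=/usr/lib/jdk6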

Then, as the hdfs user, format the namenode:

image
Then start the daemons under /etc/init.d/hadoop-*; run jps and you should see all the Java processes there:
image
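
A sketch of those two steps on CDH3 (the init-script names may differ slightly between releases):

sudo -u hdfs hadoop namenode -format
for svc in /etc/init.d/hadoop-0.20-*; do sudo $svc start; done
jps    # should list NameNode, DataNode, JobTracker, TaskTracker, SecondaryNameNode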

Now we can run a simple test: go to /usr/lib/hadoop and run

image
We can just copy one file there:
image
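For example (somefile.txt is any local file, just for illustration):

hadoop fs -mkdir /test
hadoop fs -put somefile.txt /test
hadoop fs -ls /test
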
Open a browser and go to http://localhost:50070; you can see that the file we just uploaded is there.
image
Now HDFS is ready. We can run a MapReduce job to make sure Hadoop is fully working:
image
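
For example, the pi estimator from the bundled examples jar (the jar's exact name and path vary by CDH release):

hadoop jar /usr/lib/hadoop/hadoop-examples.jar pi 2 1000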

If there is no error, we are all set; Hadoop is ready.

Install Mahout.

Download the source code; you can use svn to check out a copy of trunk:

svn co http://svn.apache.org/repos/asf/mahout/trunk

and copy the checkout to /usr/lib/mahout.

Then run mvn install -DskipTests to compile the source; mvn will figure out the dependencies and fetch the jars for you automatically.

image

It takes time to download all the jars; mileage depends.

Here is my MPG; it took several minutes,

image

Now, export /usr/lib/mahout/bin to PATH; then we can run mahout from the shell.
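
Putting that together with the permission fix mentioned below:

export PATH=$PATH:/usr/lib/mahout/bin
chmod +x /usr/lib/mahout/bin/mahout   # in case the script is missing the execute bit
mahout                                # with no arguments, lists the available programs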

If you can’t execute mahout, give it execute permission (chmod +x).

Running mahout will list all the options for the different algorithms.

image

Then go to the examples folder and run mvn compile.

image

Now you can run some examples, like the one that classifies the newsgroups dataset.

image

Here we didn’t specify HADOOP_HOME, so it will run locally. The shell script will download the data, prepare it, then run the classifier.

image

When done, it will show the confusion matrix against the test data.

image

Monday, November 14, 2011

How to: recover the Root password for Centos

You may forget the root password, which is hard to believe, but it does happen every day. Here are the tips to recover the root password for CentOS. Bad day: no way to get root access.

image

Now restart the VM and boot into geek mode: press any key (or Ctrl+X) to get to the GRUB menu.
image

Now press ‘e’ to edit the command line; we need to tell the kernel to boot into single-user mode instead of graphical mode.
Use the up/down arrow keys to select the kernel line,
image

Press e to edit the command line, and you will see the following screen.
Right now it is in graphical mode (rhgb = Red Hat graphical boot).

Let’s change it to single-user mode by replacing “rhgb quiet” with “single”.
Before,
image


After,
image
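
The kernel line looks something like this; the version string and root device below are just examples from a typical CentOS 5 install, so yours will differ:

Before:  kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
After:   kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/LogVol00 single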

Press Enter to go back to the parent screen, then press b to boot the single-user-mode kernel; you can tell “single” is on the line now.
image
Once done, you are in single-user mode.
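From the single-user shell, reset the password:

passwd    # set the new root password, then reboot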
Remember the password this time; no typos please. ;)
image



Tuesday, October 25, 2011

Coherence push replication, com.tangosol.net.DefaultConfigurableCacheFactory cannot be cast to com.oracle.coherence.environment.Environment

Coherence has several incubator projects; Push Replication is one of them, and it enables you to turn on replication between several standalone clusters, i.e. cross-WAN replication between data centers.

I just read their limited documentation, trying to set up master-slave replication between two separate clusters on my PC. On the master side, here is the cache configuration: basically, load the default Coherence config and reference the incubator POF file.
image
Here we just pick the remote cluster publisher.

Using the distributed-scheme-with-publishing-cachestore scheme enables the runtime to capture entry changes into a queue, which is then flushed to the remote cluster using the remote invocation service.
image

All set. But when I try to feed some data into the local cache (the master), a long stack trace appears.

Map (master): put a 2
2011-10-25 09:17:19.862/22.446 Oracle Coherence GE 3.7.1.0 <Error> (thread=main, member=2):
Portable(com.tangosol.util.WrapperException): (Wrapped: Failed request execution for DistributedCacheWithPublishingCacheStore service on Member(Id=1, Timestamp=
706, Address=192.168.137.1:8090, MachineId=63704, Location=site:E3,machine:androidyou-PC,process:6004, Role=CoherenceServer)) null
        at com.tangosol.util.Base.ensureRuntimeException(Base.java:288)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.tagException(Grid.CDB:36)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onPutRequest(PartitionedCache.CDB:50)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$PutRequest.run(PartitionedCache.CDB:1)
        at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:1)
        at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
        at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:63)
        at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
        at java.lang.Thread.run(Thread.java:619)
        at <process boundary>
        at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:57)
        at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
        at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
        at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:368)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:1)
        at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
        at com.tangosol.coherence.component.net.message.SimpleResponse.read(SimpleResponse.CDB:6)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.deserializeMessage(Grid.CDB:19)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:31)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
        at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
        at java.lang.Thread.run(Thread.java:619)
Caused by: Portable(java.lang.UnsupportedOperationException)
        at java.util.AbstractMap.put(AbstractMap.java:186)
        at com.tangosol.util.WrapperObservableMap.put(WrapperObservableMap.java:151)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.postPut(PartitionedCache.CDB:70)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.put(PartitionedCache.CDB:17)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onPutRequest(PartitionedCache.CDB:25)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$PutRequest.run(PartitionedCache.CDB:1)
        at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:1)
        at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
        at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:63)
        at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
        at java.lang.Thread.run(Thread.java:619)
        at <process boundary>
        at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:57)


On the storage node, here is the error that may have brought you here:

</class-scheme>) java.lang.reflect.InvocationTargetException
        at com.tangosol.util.Base.ensureRuntimeException(Base.java:288)
        at com.tangosol.run.xml.XmlHelper.createInstance(XmlHelper.java:2652)
        at com.tangosol.run.xml.XmlHelper.createInstance(XmlHelper.java:2536)
        at com.tangosol.net.DefaultConfigurableCacheFactory.instantiateAny(DefaultConfigurableCacheFactory.java:3476)
        at com.tangosol.net.DefaultConfigurableCacheFactory.instantiateCacheStore(DefaultConfigurableCacheFactory.java:3324)
        at com.tangosol.net.DefaultConfigurableCacheFactory.instantiateReadWriteBackingMap(DefaultConfigurableCacheFactory.java:1753)
        at com.tangosol.net.DefaultConfigurableCacheFactory.configureBackingMap(DefaultConfigurableCacheFactory.java:1500)
        at com.tangosol.net.DefaultConfigurableCacheFactory$Manager.instantiateBackingMap(DefaultConfigurableCacheFactory.java:4111)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.instantiateBackingMap(Partitione
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.setCacheName(PartitionedCache.CD
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ServiceConfig$ConfigListener.entryInsert
CDB:17)
        at com.tangosol.util.MapEvent.dispatch(MapEvent.java:266)
        at com.tangosol.util.MapEvent.dispatch(MapEvent.java:226)
        at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:567)
        at com.tangosol.util.ObservableHashMap.dispatchEvent(ObservableHashMap.java:229)
        at com.tangosol.util.ObservableHashMap$Entry.onAdd(ObservableHashMap.java:270)
        at com.tangosol.util.SafeHashMap.put(SafeHashMap.java:244)
        at com.tangosol.coherence.component.util.ServiceConfig$Map.put(ServiceConfig.CDB:43)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$StorageIdRequest.onReceived(PartitionedC
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:34)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:33)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
        at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
        at java.lang.Thread.run(Thread.java:619)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at com.tangosol.util.ClassHelper.newInstance(ClassHelper.java:694)
        at com.tangosol.run.xml.XmlHelper.createInstance(XmlHelper.java:2611)
        ... 23 more
Caused by: java.lang.ClassCastException: com.tangosol.net.DefaultConfigurableCacheFactory cannot be cast to com.oracle.coherence.environment.Environment
        at com.oracle.coherence.patterns.pushreplication.PublishingCacheStore.<init>(PublishingCacheStore.java:179)

        ... 29 more
2011-10-25 09:17:16.120/18.685 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member 2 joined Service DistributedCacheWithPublishingCacheStore wit


Trying to cast DefaultConfigurableCacheFactory to Environment? Where is the Environment class located: in standard Coherence, or in the incubator project? Let me find out in Eclipse.
It is in the common lib used by Push Replication.

image

Check the class hierarchy: it’s another cache factory.
image

Then change our cache factory from the default DefaultConfigurableCacheFactory to the incubator cache factory; it will pick up settings like the sync namespace.

Old:
image

Should be:
image
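
In my setup that meant pointing the cache-factory system property at the incubator's ExtensibleEnvironment from the coherence-common jar; double-check the class name against your incubator version:

-Dtangosol.coherence.cachefactory=com.oracle.coherence.environment.extensible.ExtensibleEnvironment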

Then everything is back to normal. Hope it helps.

 