Monday, March 10, 2014

Apache Camel tutorial: Quick start from scratch (console app, without Spring)

Apache Camel is a great implementation of the EIP (Enterprise Integration Patterns). If you are not a seasoned Spring/Java developer, you might find it hard to get started; the goal of this post is to simplify that process and show how easy it is to get started without memorizing any code snippets. We will use a plain console app to host and start one Camel route.

To get started, simply open Eclipse and create a Maven project (the purpose of Maven here is to add the dependencies automatically). If you don't have any Maven experience, you can simply download the required jars and add them to your build path.

Create a Maven project using the simple project type, and we get a project with Maven ready.
[screenshot]

Create a simple class with a main method.
[screenshot]

Now, right-click pom.xml, open the Maven menu, and add a dependency on camel-core; you can simply search for camel-core.

[screenshot]

Now we can start coding.
[screenshot]

We've initialized the CamelContext; the next step is adding our routes, either via XML or Java code.

Let's start with Java code first, since the IDE gives us content assist and life is a little bit easier.

[screenshot]

It looks like we can just add a RoutesBuilder to the context. Is there an existing implementation, or do we have to build our own? Ask Eclipse to create a local variable for the RoutesBuilder, and let's see whether an existing one turns up.

[screenshot]

It turns out RoutesBuilder is an interface, with the abstract class RouteBuilder as an implementation, so let's extend that one.
[screenshot]

Then the code looks like this:

[screenshot]

In configure(), we just drop in the code snippets listed on the Camel component pages. Here is a demo that dumps out a message every 2 seconds.

[screenshot]
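Putting the pieces together, the whole console app might look roughly like this sketch (it assumes camel-core 2.x on the classpath; the timer period and log endpoint names are my own example values, not necessarily what the screenshot showed):

```java
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class CamelConsoleApp {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // fire every 2 seconds and hand the exchange to the log component
                from("timer:demo?period=2000")
                    .to("log:demoLogger?level=INFO");
            }
        });
        context.start();
        Thread.sleep(60000); // keep the console app alive so the route can run
        context.stop();
    }
}
```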

If we run this, we can see the message shown in the console every 2 seconds.

For the log component we can't see the messages, because SLF4J defaults to a NOP logger; so let's add slf4j-simple to the pom.xml.

[screenshot]
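For reference, the dependency added here would look something like this in the pom.xml (the version is just an example from that era):

```xml
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-simple</artifactId>
  <version>1.7.5</version>
</dependency>
```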

Run again, and you can see the messages listed in the console window.
[screenshot]

If we want to export this as an executable jar, go to the pom.xml and use the Maven menu to add the maven-assembly-plugin.
[screenshot]

Change the pom.xml a little bit to point to the main class.
[screenshot]
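The plugin configuration would look roughly like this (the mainClass value is a placeholder; use your own class with the main method):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
    <archive>
      <manifest>
        <mainClass>com.example.CamelConsoleApp</mainClass>
      </manifest>
    </archive>
  </configuration>
</plugin>
```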

Then go to the project folder and run mvn compile assembly:single.

[screenshot]

You will see a jar in the target folder.
[screenshot]

Now we can run the jar directly by typing java -jar pathtothejar.

[screenshot]

For the source code, check it out here: https://github.com/ryandh/apacheCamelTutorial/tree/DemoConsole

Maven tutorial: how to create a Maven plugin that renames the packaged jar files

It turns out it is super easy to create a plugin using the Maven archetype in Eclipse. Here is a quick tutorial to create a plugin that we can add to any project; the plugin will rename the packaged jars to our desired format. I will put a date in the jar names as an example.

For the demo, I will create a parent project named Container and two modules under it: one named MyPlugin, and another named Client which references the plugin. So: one parent project with pom packaging, and two modules.

[screenshot]

For the Client, it's just a regular simple Maven module.
[screenshot]

For the MyPlugin project, we want it to use the archetype maven-archetype-mojo as the base type.
[screenshot]

Once done, we can right-click the pom.xml in the Client module and add a plugin dependency; you can see our new plugin is there.
[screenshot]

Click OK to add the MyPlugin dependency.
Here is the pom.xml for the Client project:
[screenshot]

Now the plugin is ready, and we can go to the Client folder and run it.
Before we do, run "mvn install" under the Container folder to install the plugin into the local repository.

In the Client folder, run "mvn MyPlugin:touch".
You can see touch.txt is there, created by our plugin.
[screenshot]

Going back to the file-renaming demo: we want the plugin to rename the jar from Client-0.0.1-SNAPSHOT.jar to Client-0.0.1-SNAPSHOT-20140310.jar.

So let's rewrite our plugin's execution logic.

[screenshot]
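Stripped of the Mojo wiring, the naming logic inside execute() boils down to something like this sketch (stampedName is my own helper, and the yyyyMMdd format simply matches the example name above):

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class JarRenamer {
    // Insert a date stamp before the ".jar" extension:
    // Client-0.0.1-SNAPSHOT.jar -> Client-0.0.1-SNAPSHOT-20140310.jar
    static String stampedName(String jarName, Date date) {
        String stamp = new SimpleDateFormat("yyyyMMdd").format(date);
        int dot = jarName.lastIndexOf(".jar");
        return jarName.substring(0, dot) + "-" + stamp + jarName.substring(dot);
    }

    public static void main(String[] args) {
        System.out.println(stampedName("Client-0.0.1-SNAPSHOT.jar", new Date()));
    }
}
```

In the real Mojo, the jar to rename would be located under the project's build directory and renamed with File.renameTo (or java.nio.file.Files.move).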

Run this goal again, and we can see the file renamed to our format.

[screenshot]

If we want this plugin to run during our package build, we can bind the plugin's execution to the package phase.

[screenshot]
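The binding in the Client pom.xml would look something like this (the groupId and goal name are placeholders for whatever your plugin declares):

```xml
<plugin>
  <groupId>com.example</groupId>
  <artifactId>MyPlugin</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>touch</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```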

Then, once we run package, the plugin will be invoked automatically.

[screenshot]

Monday, December 23, 2013

MVC API Controller Error

You may get this very weird error when you test an API controller using the Entity Framework code-first approach. Here is my error:

Type 'System.Data.Entity.DynamicProxies.Speaker_78C9094BC0D38F33C0C4EEAD409D4FABB88C631E3F9CDA236EF6BE10DD027500' with data contract name 'Speaker_78C9094BC0D38F33C0C4EEAD409D4FABB88C631E3F9CDA236EF6BE10DD027500:http://schemas.datacontract.org/2004/07/System.Data.Entity.DynamicProxies' is not expected. Consider using a DataContractResolver or add any types not known statically to the list of known types - for example, by using the KnownTypeAttribute attribute or by adding them to the list of known types passed to DataContractSerializer.


and the Stacktrace,
at System.Runtime.Serialization.XmlObjectSerializerWriteContext.SerializeAndVerifyType(DataContract dataContract, XmlWriterDelegator xmlWriter, Object obj, Boolean verifyKnownType, RuntimeTypeHandle declaredTypeHandle, Type declaredType)
at System.Runtime.Serialization.XmlObjectSerializerWriteContext.SerializeWithXsiType(XmlWriterDelegator xmlWriter, Object obj, RuntimeTypeHandle objectTypeHandle, Type objectType, Int32 declaredTypeID, RuntimeTypeHandle declaredTypeHandle, Type declaredType)
at System.Runtime.Serialization.XmlObjectSerializerWriteContext.InternalSerialize(XmlWriterDelegator xmlWriter, Object obj, Boolean isDeclaredType, Boolean writeXsiType, Int32 declaredTypeID, RuntimeTypeHandle declaredTypeHandle)
at WriteArrayOfSpeakerToXml(XmlWriterDelegator , Object , XmlObjectSerializerWriteContext , CollectionDataContract )
at System.Runtime.Serialization.CollectionDataContract.WriteXmlValue(XmlWriterDelegator xmlWriter, Object obj, XmlObjectSerializerWriteContext context)
at System.Runtime.Serialization.XmlObjectSerializerWriteContext.WriteDataContractValue(DataContract dataContract, XmlWriterDelegator xmlWriter, Object obj, RuntimeTypeHandle declaredTypeHandle)
at System.Runtime.Serialization.XmlObjectSerializerWriteContext.SerializeWithoutXsiType(DataContract dataContract, XmlWriterDelegator xmlWriter, Object obj, RuntimeTypeHandle declaredTypeHandle)
at System.Runtime.Serialization.DataContractSerializer.InternalWriteObjectContent(XmlWriterDelegator writer, Object graph, DataContractResolver dataContractResolver)
at System.Runtime.Serialization.DataContractSerializer.InternalWriteObject(XmlWriterDelegator writer, Object graph, DataContractResolver dataContractResolver)
at System.Runtime.Serialization.XmlObjectSerializer.WriteObjectHandleExceptions(XmlWriterDelegator writer, Object graph, DataContractResolver dataContractResolver)
at System.Runtime.Serialization.DataContractSerializer.WriteObject(XmlWriter writer, Object graph)
at System.Net.Http.Formatting.XmlMediaTypeFormatter.<>c__DisplayClass7.<WriteToStreamAsync>b__6()
at System.Threading.Tasks.TaskHelpers.RunSynchronously(Action action, CancellationToken token)


And my code:

[screenshot]

[screenshot]

To fix this, just turn off proxy creation for this DbContext.

[screenshot]
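The fix amounts to one line in the DbContext constructor; a minimal sketch (the context class and DbSet names here are illustrative):

```csharp
public class ConferenceContext : DbContext
{
    public ConferenceContext()
    {
        // Without lazy-loading proxies, EF materializes plain Speaker instances
        // that the DataContractSerializer can serialize normally.
        this.Configuration.ProxyCreationEnabled = false;
    }

    public DbSet<Speaker> Speakers { get; set; }
}
```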

Thursday, December 19, 2013

How to remove dead nodes in SolrCloud manually

Somehow, you may find SolrCloud keeps track of all the nodes, even dead or testing ones, in the cluster map.

For example, there is one dead node here:
[screenshot]

If you hover over it, it shows the IP and port.
[screenshot]

So here we go: remove the 8983 one. Basically, the cluster state is kept in ZooKeeper; all we need to do is download the file, remove the dead/testing nodes, then upload it back. To do this, download a full distribution of ZooKeeper; under bin there is a full-featured client, zkCli, which lets us get and set the data.

So save clusterstate.json to a local file.
[screenshot]
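Fetching the current state with zkCli might look like this (adjust the -server address to your ensemble; note that zkCli's get output appends znode stat lines, so trim the file down to just the JSON):

```shell
./zkCli.sh -server 127.0.0.1:9984 get /clusterstate.json > clusterstate.json
```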

Find the dead nodes, remove them, and save the result as a new file like new.txt.
[screenshot]

then upload it back,

./zkCli.sh -server 127.0.0.1:9984 set /clusterstate.json "`cat new.txt`"

Then it's gone!

[screenshot]

Tuesday, September 17, 2013

JAVA: ByteBuffer.allocate vs allocateDirect

Allocation on the heap, with GC involved:
[screenshot]

The process used 2 GB of RAM.
[screenshot]

The buffer is on the heap's old space.
[screenshot]

If we allocateDirect instead, the buffer goes to native RAM:

[screenshot]

Still 2 GB overall,

[screenshot]

but not on the heap.

[screenshot]


So a heavily used buffer can appropriately be a non-heap, non-garbage-collected direct buffer.
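The difference is easy to see from code; a minimal sketch (the buffer sizes are arbitrary):

```java
import java.nio.ByteBuffer;

public class BufferDemo {
    static ByteBuffer heap = ByteBuffer.allocate(1024 * 1024);         // backed by a byte[] on the GC heap
    static ByteBuffer direct = ByteBuffer.allocateDirect(1024 * 1024); // native memory, outside the Java heap

    public static void main(String[] args) {
        System.out.println(heap.isDirect());   // false: GC-managed, can be moved/collected
        System.out.println(direct.isDirect()); // true: not managed by the garbage collector
        System.out.println(heap.hasArray());   // true: exposes its backing byte[]
        System.out.println(direct.hasArray()); // false: no accessible backing array
    }
}
```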

Cassandra: InvalidRequestException(why:Key may not be empty)

You may get this exception when you try to submit some changes through the Thrift gateway to the cluster. The code is simple:

client.remove(KEYSPACE, key, cp, System.currentTimeMillis(), ConsistencyLevel.ALL);

And when you check key == null, it's not null for sure; so how come we still get the key-empty error?

Then check the ThriftValidation class:
[screenshot]

Now you know: the key is not null, but remaining() is zero.
Typically this means the key has already been read and the position is at the end; so before you read the key, duplicate it first.
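The idea can be shown with plain ByteBuffers; a sketch (the helper names are mine):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class KeyRemaining {
    // Reading through a duplicate leaves the original buffer's position untouched,
    // so a later remove() call still sees a non-empty key.
    static String readWithoutConsuming(ByteBuffer key) {
        ByteBuffer copy = key.duplicate();   // shares content, but has an independent position
        byte[] bytes = new byte[copy.remaining()];
        copy.get(bytes);                     // advances only the duplicate's position
        return new String(bytes, StandardCharsets.UTF_8);
    }

    static int remainingAfterRead(ByteBuffer key) {
        readWithoutConsuming(key);
        return key.remaining();              // unchanged: the original was never consumed
    }

    public static void main(String[] args) {
        ByteBuffer key = ByteBuffer.wrap("row1".getBytes(StandardCharsets.UTF_8));
        System.out.println(readWithoutConsuming(key)); // row1
        System.out.println(key.remaining());           // 4: still readable
    }
}
```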

Use this utility lib instead: http://svn.apache.org/repos/asf/cassandra/trunk/src/java/org/apache/cassandra/utils/ByteBufferUtil.java

Friday, September 13, 2013

How-to: test an HBase coprocessor

Here is a quick tutorial to set up an HBase instance and hook up a basic coprocessor.

Download the tar file from http://mirror.tcpdiag.net/apache/hbase/hbase-0.94.11/ and unzip it.
Change hbase-site.xml to point hbase.rootdir to a local folder; by default HBase loads the defaults embedded in hbase-x.jar, and the default setting is a local temp folder.

[screenshot]

Change it to a folder you feel comfortable with.
Also change hbase.tmp.dir to a well-known folder, so you can see all the underlying folders it uses.
My final change:

[screenshot]
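For reference, the two properties changed here look like this in hbase-site.xml (the paths are examples; use your own):

```xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///home/me/hbase/data</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/home/me/hbase/tmp</value>
  </property>
</configuration>
```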

Then you can start the HBase server: ./bin/start-hbase.sh

Check the ports it listens on; there should be one on 60030.
[screenshot]

Then you can go to http://ip:60030 to see the HBase console for the given region server, and 60010 for the HBase master.
[screenshot]

[screenshot]

Then create a table and put a record.

[screenshot]

Check the folder.

[screenshot]

Now let's write a basic Java program to read the data in HBase.

Create a new Maven project via the Eclipse wizard.
[screenshot]
Open the pom.xml and right-click to add a dependency on hbase.

[screenshot]

A simple query from another server; make sure the ZooKeeper IP and host names are accessible from the client machine.

[screenshot]
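A sketch of such a client query against the 0.94 API (the quorum host, table, row, and column names are my own placeholders):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class SimpleQuery {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // must resolve from the client machine
        conf.set("hbase.zookeeper.quorum", "zkhost");
        HTable table = new HTable(conf, "t1");
        Result result = table.get(new Get(Bytes.toBytes("row1")));
        System.out.println(Bytes.toString(result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"))));
        table.close();
    }
}
```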

The coprocessors are basically observers at three levels.
Master level: you can hook into all the DDL, like create table, update column, etc. (check BaseMasterObserver).
[screenshot]

DML level, in the region servers:
[screenshot]

And WAL level: all changes go through the WAL, so you can see every change; you can even reject a change.
[screenshot]
I will use a WALObserver as the example: just dump out any changes to the cluster. We could send the changeset to another server for secondary indexing, or use it for logging/auditing purposes.

[screenshot]
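An observer in that spirit might look roughly like this sketch against the 0.94 coprocessor API (class name is mine; treat the exact hook signature as an assumption to check against your HBase version):

```java
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.coprocessor.BaseWALObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.WALCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.HLogKey;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;

public class LoggingWALObserver extends BaseWALObserver {
    @Override
    public boolean preWALWrite(ObserverContext<WALCoprocessorEnvironment> ctx,
                               HRegionInfo info, HLogKey logKey, WALEdit logEdit) {
        // dump every edit heading to the WAL
        for (KeyValue kv : logEdit.getKeyValues()) {
            System.out.println("WAL edit: " + kv);
        }
        return false; // false = do not bypass the default WAL write
    }
}
```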

Compile and package it into a jar file, then copy it to the HBase lib folder, or any folder on the HBASE_CLASSPATH.

Then change hbase-site.xml to point to our WALObserver.

[screenshot]
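The registration goes through the hbase.coprocessor.wal.classes property (the class name here matches my sketch above; use your own fully qualified name):

```xml
<property>
  <name>hbase.coprocessor.wal.classes</name>
  <value>com.example.LoggingWALObserver</value>
</property>
```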

Then restart the instance; in the region server log, you can see our WALObserver loaded.
[screenshot]

Then make some changes to the data.
[screenshot]

From the log, you should see our messages; the data changes are captured here.
[screenshot]

Please note the WALObserver runs in the same JVM as HBase; make sure it cannot fail unsafely and does not add a big performance hit.
