Saturday, January 22, 2011

Copy MacBook MAC and Windows Partition from one Disk to another one

I have a 60 GB SSD with macOS and a Boot Camp Windows 7 64-bit partition. One day I got a new 160 GB SSD, so I needed a way to copy all the bits of both the Mac and Windows partitions to the new drive.

First, copy the Mac OS from the old disk to the new one. This is much easier than the Windows partition. Just connect the second drive using a USB adapter or an external enclosure, boot into the Mac OS, and run Disk Utility. Format the new drive as Mac OS Extended (Journaled), then restore the existing Mac partition to the new drive (drag and drop). That's it.


For the Windows partition, you need to back it up to an external drive using Winclone, then restore it. Download and install Winclone (it's free).

Click Preferences and disable compression.

Then click the Image button to dump the Windows partition to a DMG file.

It may take a while depending on your I/O and CPU.

Once done, boot into the Mac OS from the new drive and create a Windows partition using the Boot Camp Assistant.

Then run Winclone to restore the Windows image to the new partition.

After that, you're done. On the first Windows boot, it may prompt you to run a disk check; you can skip it if you believe your drive is reliable. Now you can boot both the Mac and Windows from the new drive.

Wednesday, January 12, 2011

Hello world EJB 3.1 with JBOSS

I've been in the Microsoft world for a while, and there are a lot of distributed technologies in the Microsoft stack: DCOM, COM+, Remoting, ASMX/WCF. What's the equivalent of EJB? I would say .NET Remoting.

In a typical Remoting project, we define the MBR (marshal-by-reference) object and host it (as SingleCall, Singleton, or client-activated) using different options (IIS or in-process hosting); the runtime takes care of the proxy/stub plumbing.

Here I will build a hello-world EJB; JBoss will play the role of IIS.

As with Remoting, first define an interface (shared as a contract between server and client):

public interface ICalculatorBusinessObjects {
     public int Add(int ... args);
}

Then implement the bean, much like an MBR class in Remoting:

@Stateless
@Remote
public class CalculatorBeanBase implements ICalculatorBusinessObjects {

    @Override
    public int Add(int... args) {
        System.out.println("------------------ got hit on server side, my hashCode: " + this.hashCode());
        int sum = 0;
        for (int j : args) {
            sum += j;
        }
        return sum;
    }
}

    

Like Microsoft attributes, those @Xxx markers are annotations. The runtime inspects them and applies the corresponding behavior, such as hosting options, transaction control, etc.
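As a tiny illustration of that mechanism, here is a sketch of how a container can discover annotated classes at runtime. Note this uses a toy @Stateless annotation defined inline so the snippet is self-contained; it is not the real EJB deployment scanner.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Toy stand-in for javax.ejb.Stateless, so the snippet is self-contained.
@Retention(RetentionPolicy.RUNTIME)
@interface Stateless {
}

@Stateless
class MyBean {
}

public class AnnotationScan {
    public static void main(String[] args) {
        // A container scans classes like this, then applies hosting/transaction logic.
        System.out.println(MyBean.class.isAnnotationPresent(Stateless.class));
    }
}
```

Running this prints `true`, which is essentially the check a container performs before deciding how to host the bean.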

Once done, it needs to be deployed to a container or server. For .NET, that's just an assembly (a DLL); for Java EE, it's a zip file called an EJB JAR. In Eclipse you can export the build to a JAR file, firstEjb.jar.


Then deploy the JAR to JBoss: drag and drop it into the c:\jboss6\server\default\deploy folder (or c:\jboss6\server\all\deploy if you run with the -c all option).

The server discovers the new bits, hosts the bean, and registers a JNDI entry, much like Remoting's .rem endpoints. The EJB is now hosted and ready for client connections.

For the client code, just obtain the proxy by querying the JNDI server and call the business logic:

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class Main {

    public static void main(String[] args) throws NamingException {
        Context context = new InitialContext();
        ICalculatorBusinessObjects o = (ICalculatorBusinessObjects) context
                .lookup("java:CalculatorBeanBase/remote");
        System.out.println(o.getClass().getCanonicalName());
        System.out.println(o.Add(1, 2, 3, 4, 5));
    }
}

When you run JBoss, it also acts as a JNDI server listening on a specific port, 1099 by default.

So you need to tell the client which JNDI server to talk to by setting some system properties like the following:

-Djava.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
-Djava.naming.provider.url=jnp://localhost:1099
-Djava.naming.factory.url.pkgs=org.jnp.interfaces

-Djnp.timeout=0
-Djnp.sotimeout=0
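Alternatively, the same settings can be passed to the InitialContext programmatically instead of via -D flags. A minimal sketch (the property values match the JBoss defaults above; the actual lookup would still need the JBoss client JARs on the classpath):

```java
import java.util.Properties;
import javax.naming.Context;

public class JndiEnv {

    // Builds the environment normally supplied via -D flags.
    static Properties jbossEnv(String host, int port) {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
        env.put(Context.PROVIDER_URL, "jnp://" + host + ":" + port);
        env.put(Context.URL_PKG_PREFIXES, "org.jnp.interfaces");
        return env;
    }

    public static void main(String[] args) {
        Properties env = jbossEnv("localhost", 1099);
        // The client would then call: new InitialContext(env)
        System.out.println(env.getProperty(Context.PROVIDER_URL));
    }
}
```

This is handy when one JVM needs to talk to several JNDI servers, since -D flags are global to the process.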

When you run the client, you get the expected result from this simple calculator, but all the logic runs on the server, in the JBoss JVM.


This concludes the basic tutorial. Have fun.

How to: test Hibernate data access with SQL Server 2008

There are some prerequisites for testing Hibernate data access with SQL Server; I will provide the snippets below. Create a Java project and add the Hibernate JARs to the build path. For a Hibernate 101, first create a POJO (a basic entity bean with attributes); here it is a class called TODO with two attributes, ID and Title.

package Domain;

public class TODO {

    private int ID;

    private String Title;

    public void setID(int iD) {
        ID = iD;
    }

    public int getID() {
        return ID;
    }

    public void setTitle(String title) {
        Title = title;
    }

    public String getTitle() {
        return Title;
    }
}

Then add an XML mapping for this class (Hibernate uses this file to interpret the field-to-column mapping and the ID generation rules). The conventional file name is [Entity].hbm.xml, though that naming is optional. The entity TODO will be mapped to a table called TODOs, and ID generation is delegated to SQL Server (an identity column).

<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC        "-//Hibernate/Hibernate Mapping DTD 3.0//EN"        "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping package="Domain">

    <class table="TODOs" name="TODO">
        <id name="ID" column="ID">
            <generator class="native">
            </generator>
        </id>

        <property name="Title"></property>

    </class>
</hibernate-mapping>

Then create a file hibernate.cfg.xml (it configures, among other things, the driver used to communicate with SQL Server):

<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE hibernate-configuration PUBLIC
        "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
        "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
    <session-factory>
        <!-- Database connection settings -->
        <property name="connection.driver_class">com.microsoft.sqlserver.jdbc.SQLServerDriver</property>
        <property name="connection.url">jdbc:sqlserver://localhost;databaseName=test;integratedSecurity=true;</property>
        <property name="connection.username">not required</property>
        <property name="connection.password"></property>
        <!-- JDBC connection pool (use the built-in) -->
        <property name="connection.pool_size">1</property>
        <!-- SQL dialect -->
        <property name="dialect">org.hibernate.dialect.SQLServer2008Dialect</property>
        <!-- Enable Hibernate's automatic session context management -->
        <property name="current_session_context_class">thread</property>
        <!-- Disable the second-level cache -->
        <property name="cache.provider_class">org.hibernate.cache.NoCacheProvider</property>
        <!-- Echo all executed SQL to stdout -->
        <property name="show_sql">true</property>
        <!-- Drop and re-create the database schema on startup -->
        <property name="hbm2ddl.auto">create-drop</property>
        <mapping resource="test/TODO.hbm.xml" />
    </session-factory>
</hibernate-configuration>

Then read from and write to the DB using the Hibernate API:

package test;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.cfg.Configuration;

public class Main {

    public static void main(String[] args) {

        SessionFactory factory = new Configuration().configure(
                "test/hibernate.cfg.xml").buildSessionFactory();
        Session session = factory.getCurrentSession();
        Transaction trans = session.beginTransaction();

        Domain.TODO newobj = new Domain.TODO();
        newobj.setTitle("FirstTODO");
        session.save(newobj);

        Domain.TODO newobj2 = new Domain.TODO();
        newobj2.setTitle("SecondTODO");
        session.save(newobj2);
        // list all objects

        java.util.List lists = session.createQuery("from TODO").list();
        System.out.println(lists.size());
        for (int i = 0; i < lists.size(); i++) {
            System.out.println(((Domain.TODO) lists.get(i)).getTitle());
        }

        trans.commit();

    }

}

When you run the program, it creates two records and saves them to the DB. Here is the SQL captured by SQL Server Profiler.

If you get this error when you run the app:

Jan 12, 2011 2:19:19 PM com.microsoft.sqlserver.jdbc.AuthenticationJNI <clinit>
WARNING: Failed to load the sqljdbc_auth.dll cause :- no sqljdbc_auth in java.library.path

it means the JDBC driver needs to load a native DLL to talk to SQL Server. Just copy sqljdbc_auth.dll (located in the driver's auth folder) to a folder on the library path, or simply to c:\windows. This only applies if you use Windows authentication against SQL Server; with SQL authentication, set integratedSecurity=false instead:

<property name="connection.url">jdbc:sqlserver://localhost;databaseName=test;integratedSecurity=false; </property>
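With SQL authentication you would also supply credentials in hibernate.cfg.xml; a sketch with placeholder values (the login name and password are not from the original setup):

```xml
<property name="connection.username">your_sql_login</property>
<property name="connection.password">your_password</property>
```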


Monday, January 10, 2011

Varnish Cache: Block Proxy Access to Your Server / IP Address

Varnish is a great reverse proxy for caching and accelerating your web application. If you deploy a Varnish instance facing the Internet, you may be exposed to some security threats: for example, somebody may try to use your instance as an open proxy, or send junk URLs to the backend. You can simply neutralize those junk requests:

sub vcl_recv {
    // only allow the IP address (for global DNS-based load balancing) or your hostname
    if (!(req.http.host ~ "yourwebsite") &&
        !(req.http.host ~ "([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})")) {
        set req.http.host = "yourwebsite";
        set req.url = "/"; // rewrite to the home page instead of forwarding junk to the backend
    }
}
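Alternatively, instead of rewriting junk requests to the home page, you could reject them outright. An untested sketch in Varnish 2.x VCL syntax:

```
sub vcl_recv {
    if (!(req.http.host ~ "yourwebsite")) {
        error 403 "Forbidden";
    }
}
```

Rewriting keeps bots from probing your backend while still returning a page; rejecting with 403 saves the backend round trip entirely.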

Wednesday, January 5, 2011

using btrace to inspect coherence cluster partition tables

If you use the distributed cache scheme of a Coherence cluster, all data is distributed across the storage nodes. The partition-count is sized to the data set; the default is 257 for data sets under 100 MB.


The partition-count is a setting in the cache scheme, and you can always override it. Here I changed the partition-count to 13 and set the backup-count to 1 (each piece of data will have one backup).

Then hook the following BTrace script onto any node:

    // BTrace scripts may not contain loops, hence the unrolled prints below.
    @OnMethod(clazz = "com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache",
            method = "getStorageAssignments",
            location = @Location(value = Kind.RETURN))
    public static void getStorageAssignments(@Return int[][] assignment) {
        println("getStorageAssignments");
        print("Key \t0\t Primary ");    print(assignment[0][0]);    print("\t Backup ");    println(assignment[0][1]);
        print("Key \t1\t Primary ");    print(assignment[1][0]);    print("\t Backup ");    println(assignment[1][1]);
        print("Key \t2\t Primary ");    print(assignment[2][0]);    print("\t Backup ");    println(assignment[2][1]);
        print("Key \t3\t Primary ");    print(assignment[3][0]);    print("\t Backup ");    println(assignment[3][1]);
        print("Key \t4\t Primary ");    print(assignment[4][0]);    print("\t Backup ");    println(assignment[4][1]);
        print("Key \t5\t Primary ");    print(assignment[5][0]);    print("\t Backup ");    println(assignment[5][1]);
        print("Key \t6\t Primary ");    print(assignment[6][0]);    print("\t Backup ");    println(assignment[6][1]);
        print("Key \t7\t Primary ");    print(assignment[7][0]);    print("\t Backup ");    println(assignment[7][1]);
        print("Key \t8\t Primary ");    print(assignment[8][0]);    print("\t Backup ");    println(assignment[8][1]);
        print("Key \t9\t Primary ");    print(assignment[9][0]);    print("\t Backup ");    println(assignment[9][1]);
        print("Key \t10\t Primary ");    print(assignment[10][0]);    print("\t Backup ");    println(assignment[10][1]);
        print("Key \t11\t Primary ");    print(assignment[11][0]);    print("\t Backup ");    println(assignment[11][1]);
        print("Key \t12\t Primary ");    print(assignment[12][0]);    print("\t Backup ");    println(assignment[12][1]);

    }

Let's start with one proxy node and two storage nodes. Here is the partition output (keys 0-12). Node 1 is the proxy node; nodes 2 and 3 are the storage nodes, and they keep backups of each other's data.

getStorageAssignments
Key     0        Primary 3       Backup 2
Key     1        Primary 3       Backup 2
Key     2        Primary 3       Backup 2
Key     3        Primary 3       Backup 2
Key     4        Primary 3       Backup 2
Key     5        Primary 3       Backup 2
Key     6        Primary 2       Backup 3
Key     7        Primary 2       Backup 3
Key     8        Primary 2       Backup 3
Key     9        Primary 2       Backup 3
Key     10       Primary 2       Backup 3
Key     11       Primary 2       Backup 3
Key     12       Primary 2       Backup 3

If I add one more storage node with ID=5, the new node takes over some of the data (this keeps each storage node holding a roughly equal share):

Key     0        Primary 5       Backup 2
Key     1        Primary 3       Backup 5
Key     2        Primary 5       Backup 2
Key     3        Primary 3       Backup 5
Key     4        Primary 3       Backup 2
Key     5        Primary 3       Backup 2
Key     6        Primary 5       Backup 3
Key     7        Primary 5       Backup 3
Key     8        Primary 2       Backup 5
Key     9        Primary 2       Backup 5
Key     10       Primary 2       Backup 3
Key     11       Primary 2       Backup 3
Key     12       Primary 2       Backup 3

This partition table is shared by all nodes (proxy and storage). So when the proxy gets a put/get request, it first runs the partitioning logic to locate the primary member holding the data, then dispatches the request to that node for storage or query.
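Conceptually, the proxy's lookup against that table can be sketched as follows. This is an illustrative simplification: real Coherence hashes the serialized binary key with its own algorithm, not hashCode(), and the hard-coded table here is just the three-node assignment printed above.

```java
public class PartitionLookup {

    // The 13-partition assignment from the run above: [partition][0] = primary, [1] = backup.
    static final int[][] PARTITION_TABLE = {
        {5, 2}, {3, 5}, {5, 2}, {3, 5}, {3, 2}, {3, 2}, {5, 3},
        {5, 3}, {2, 5}, {2, 5}, {2, 3}, {2, 3}, {2, 3}
    };

    // Illustrative only: Coherence hashes the binary key, not a plain int hash.
    static int partitionFor(int keyHash, int partitionCount) {
        return Math.floorMod(keyHash, partitionCount);
    }

    public static void main(String[] args) {
        int p = partitionFor(42, PARTITION_TABLE.length);
        System.out.println("partition " + p + " -> primary node " + PARTITION_TABLE[p][0]);
    }
}
```

For a hash of 42, 42 mod 13 = 3, so the request goes to partition 3's primary, node 3 (with node 5 holding the backup).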

The proxy gets the key bucket; the storage node stores the data and maintains the key index. The proxy node queries the partition table every time it gets a put request, and the storage nodes keep exchanging this information among themselves.

Free video format converter tool, verified free

If you want to convert MP4 to WMV and google for a free converter tool, you will find a lot, but most of them are bundled with adware. I just found one that is genuinely free and clean: Format Factory, at http://www.pcfreetime.com/. It supports a lot of file formats.

Format Factory is a multifunctional media converter.
Provides functions below:
All to MP4/3GP/MPG/AVI/WMV/FLV/SWF.
All to MP3/WMA/AMR/OGG/AAC/WAV.
All to JPG/BMP/PNG/TIF/ICO/GIF/TGA.
Rip DVD to video file , Rip Music CD to audio file.
MP4 files support iPod/iPhone/PSP/BlackBerry format.
Supports RMVB,Watermark, AV Mux.

Format Factory's features:
1. Supports converting all popular video, audio, and picture formats to others.
2. Repairs damaged video and audio files.
3. Reduces multimedia file size.
4. Supports iPhone/iPod multimedia file formats.
5. Picture conversion supports zoom, rotate/flip, and tags.
6. DVD ripper.
7. Supports 56 languages.
