Saturday, 18 October 2014

Ignite Openfire Plugin


Openfire version :- Openfire 3.9.3
Maven version :- Maven-2.0.10

Download the Maven Openfire Plugin master branch from this link
                        https://github.com/srt/maven-openfire-plugin

Go to the Maven Openfire Plugin folder and execute the below command.
 mvn clean install
To write an Ignite Openfire plugin we need openfire.jar. To get it, go to the Openfire server's lib folder and copy it into a local folder.

Now add the openfire.jar into the local Maven repository using the below command. I downloaded openfire.jar into F:\ and my Openfire version is 3.9.3.
mvn install:install-file -DgroupId=org.igniterealtime.openfire -DartifactId=openfire -Dversion=3.9.3 -Dpackaging=jar -DgeneratePom=true -Dfile=F:\openfire.jar
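
With the jar installed, the plugin project's pom.xml can reference it roughly as below. This is a sketch: the com.example groupId is a placeholder, and the openfire-plugin packaging with the extensions flag follows the maven-openfire-plugin convention.

<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>OpenfireSamplePlugin</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>openfire-plugin</packaging>

    <dependencies>
        <dependency>
            <groupId>org.igniterealtime.openfire</groupId>
            <artifactId>openfire</artifactId>
            <version>3.9.3</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>com.reucon.maven.plugins</groupId>
                <artifactId>maven-openfire-plugin</artifactId>
                <version>1.0.2-SNAPSHOT</version>
                <extensions>true</extensions>
            </plugin>
        </plugins>
    </build>
</project>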

The Maven project structure and a sample application are available at the below link.
            http://1drv.ms/1tB3avH
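
For reference, the heart of an Openfire plugin is a class implementing Openfire's Plugin interface. A minimal sketch follows; the class name is arbitrary, and the plugin also needs a plugin.xml descriptor.

import java.io.File;

import org.jivesoftware.openfire.container.Plugin;
import org.jivesoftware.openfire.container.PluginManager;

public class SamplePlugin implements Plugin {

    @Override
    public void initializePlugin(PluginManager manager, File pluginDirectory) {
        // Called by the Openfire server when the plugin is loaded.
        System.out.println("SamplePlugin initialized");
    }

    @Override
    public void destroyPlugin() {
        // Called when the plugin is unloaded; release any resources here.
        System.out.println("SamplePlugin destroyed");
    }
}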

After downloading the OpenfireSamplePlugin, go to the root folder of OpenfireSamplePlugin and execute the below command.
                        mvn clean install

This command will create the target folder with OpenfireSamplePlugin-0.0.1-SNAPSHOT.jar.

Now go to the Openfire server management portal and upload the jar file.

NOTE:- If you get an error like the below, please check the Maven project structure. If you add a webapp folder under src/main, the error will go away.
Failed to execute goal com.reucon.maven.plugins:maven-openfire-plugin:1.0.2-SNAPSHOT:jspc (default-jspc) on project OpenfireSamplePlugin: Failure processing jsps
                       

Monday, 21 July 2014

Azure Configure Software RAID on Linux

   
Following are the steps to create a RAID 5 array on an Ubuntu 12.04 LTS machine.

→ Attach 5 hard disks, each with 500 GB.

→ Partition all hard disks.
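             For example, with fdisk (assuming the disks appear as /dev/sdc through /dev/sdg, as in the mdadm command below):
             sudo fdisk /dev/sdc   [repeat for each disk: n to create a primary partition, then t and fd to set the type to Linux RAID autodetect, w to write]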

→ Install mdadm using the below command.
             apt-get install mdadm

→ Create the RAID 5 array using the below command.
             mdadm --create /dev/md127 --level=5 --raid-devices=5 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1

→ Partition the RAID 5 array /dev/md127; this gives the /dev/md127p1 partition.
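             For example (following the same fdisk steps as above; an assumption):
             sudo fdisk /dev/md127   [create a single primary partition, which appears as /dev/md127p1]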

→ Format using the ext4 file system.
             mkfs -t ext4 /dev/md127p1

→ Add the configuration to mdadm.conf
             mdadm --detail --scan >> /etc/mdadm/mdadm.conf
             Edit the mdadm.conf file to contain only the device name and UUID like below.
             ARRAY /dev/md127 UUID=*********

→ You need to update initramfs so it contains your mdadm.conf settings during boot.
             sudo update-initramfs -u

→ Get the UUID of the RAID 5 partition to add an entry in fstab. Execute the below command to get the UUID.
             sudo /sbin/blkid
             Copy the UUID of /dev/md127p1

→ Add the entry in fstab like below.
UUID=****  /mountpoint  ext4  defaults,nobootwait  0  2

→ Change the bootdegraded value to true in the below file.
             /etc/initramfs-tools/conf.d/mdadm
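             The file should then contain the below line (based on Ubuntu's mdadm initramfs hook; treat the exact variable name as an assumption for your release):
             BOOT_DEGRADED=true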

→ Add the bootdegraded=true property to GRUB_CMDLINE_LINUX_DEFAULT in the /etc/default/grub file, like below.
            GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0 earlyprintk=ttyS0 rootdelay=300 bootdegraded=true"

→ After updating the above file, execute the below command.
            sudo update-grub

→ Check the mount configuration using the below command.
           mount -a

Friday, 4 April 2014

JASIG CAS REST API Java Client

Code for user authentication using the CAS REST API in Java.

Download from here

JASIG CAS REST Activation

Using the Jasig CAS REST API we can authenticate a user without the CAS login form.

CAS Version:- cas-server-3.5.2-release

Required libraries :-
cas.server.core 3.5.1
cas-server-integration-restlet-3.5.1
spring-beans ${spring.version}

cglib-nodep-3.1.jar
com.noelios.restlet.ext.servlet-1.1.1.jar
com.noelios.restlet.ext.spring-1.1.1.jar
com.noelios.restlet-1.1.1.jar
org.restlet.ext.spring-1.1.1.jar
org.restlet-1.1.1.jar


Open the web.xml file of CAS and add the below servlet configuration.


<servlet>
    <servlet-name>restlet</servlet-name>
    <servlet-class>com.noelios.restlet.ext.spring.RestletFrameworkServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
</servlet>

<servlet-mapping>
    <servlet-name>restlet</servlet-name>
    <url-pattern>/v1/*</url-pattern>
</servlet-mapping>


Now we can authenticate the user using the CAS REST API.
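
A minimal Java sketch of the two-step REST flow follows. The CAS base URL, credentials and service URL are placeholder assumptions; the /v1/tickets path matches the servlet mapping above.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class CasRestAuthentication {

    public static void main(String[] args) throws Exception {
        // Step 1: POST the credentials to /v1/tickets to obtain a Ticket Granting Ticket (TGT).
        URL ticketsUrl = new URL("http://localhost:8080/cas/v1/tickets");
        HttpURLConnection tgtConn = (HttpURLConnection) ticketsUrl.openConnection();
        tgtConn.setRequestMethod("POST");
        tgtConn.setDoOutput(true);
        tgtConn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        OutputStreamWriter out = new OutputStreamWriter(tgtConn.getOutputStream());
        out.write("username=" + URLEncoder.encode("casuser", "UTF-8")
                + "&password=" + URLEncoder.encode("secret", "UTF-8"));
        out.close();

        // A successful authentication returns 201 Created with the TGT URL in the Location header.
        String tgtUrl = tgtConn.getHeaderField("Location");
        System.out.println("TGT: " + tgtUrl);

        // Step 2: POST the service URL to the TGT URL to obtain a Service Ticket (ST).
        HttpURLConnection stConn = (HttpURLConnection) new URL(tgtUrl).openConnection();
        stConn.setRequestMethod("POST");
        stConn.setDoOutput(true);
        stConn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        OutputStreamWriter stOut = new OutputStreamWriter(stConn.getOutputStream());
        stOut.write("service=" + URLEncoder.encode("http://myapp.example.com/", "UTF-8"));
        stOut.close();

        // The response body is the service ticket; if we get one, the credentials are valid.
        BufferedReader in = new BufferedReader(new InputStreamReader(stConn.getInputStream()));
        System.out.println("Service ticket: " + in.readLine());
        in.close();
    }
}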

Friday, 31 January 2014

Remote Desktop Windows Azure OpenSUSE VM

Based on your openSUSE version, select the xrdp package. For different versions refer to the below link.

http://software.opensuse.org/download.html?project=home%3Atwotaps%3Aremotedesktop&package=xrdp
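
After adding the repository matching your version, xrdp can be installed with zypper (a sketch; the package name follows the link above):

sudo zypper install xrdp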

1. Open the 3389 port from Windows Azure portal.

2. Install GNOME using the below command.

sudo zypper install gnome-session

3. Now open /etc/sysconfig/displaymanager and change the DISPLAYMANAGER property to "gdm".

4. Reboot the VM from Windows Azure portal.

5. Now connect over Remote Desktop and select the GNOME session type.

Friday, 16 August 2013

The underlying connection was closed: An unexpected error occurred on receive.


ERROR:-Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.

The underlying connection was closed: An unexpected error occurred on receive.

To solve the above error, add the below configuration to the endpoint behavior or service behavior.

       <dataContractSerializer maxItemsInObjectGraph="6553600" />
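
For context, the element goes inside a named endpoint behavior in web.config, which the endpoint then references via behaviorConfiguration. A minimal sketch (the behavior name is a placeholder):

<system.serviceModel>
  <behaviors>
    <endpointBehaviors>
      <behavior name="largeGraphBehavior">
        <dataContractSerializer maxItemsInObjectGraph="6553600" />
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>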

If the above configuration does not solve the issue, the problem is with the JSON ResponseFormat.
Set the ResponseFormat to XML and it will work without error.


If you have a DateTime property in your class, set it to some date other than the default [01-01-0001 00:00:00]; that solves the problem. It is a problem with JSON, and it works well with XML.

Saturday, 18 May 2013

Running Multiple Web Application in Different Ports in Single Web Role

We can run multiple web applications in a single web role.
Select the web role where you want to add the new web application on a different port.
Add a new endpoint with port 9000 (the endpoint declaration is sketched after the Site example below).
Now open the csdef file.
Under sites section of the web role add the below configuration.

<Site name="WebApp2" physicalDirectory="path to the web application">
<Bindings>
     <Binding name="Endpoint2" endpointName="Endpoint2"/>
</Bindings>
</Site>
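
Here the endpointName must match an input endpoint declared under the Endpoints section of the same web role in the csdef. A minimal sketch (protocol http is an assumption):

<Endpoints>
    <InputEndpoint name="Endpoint2" protocol="http" port="9000" />
</Endpoints>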

If we add the project to the solution we can use a relative path as well. For example, if your ServiceDefinition.csdef file is located at C:\projects\CloudProject\ServiceDefinition.csdef and the folder containing your new web site is located in C:\projects\NewWebSite, then the relative path in the physicalDirectory attribute would be "..\NewWebSite".

Associating the Sales Literature to Product in CRM

We can associate the Sales Literature to the Product using the below code.

var xrm = new XrmServiceContext("Xrm");

Guid salesLiteratureId = new Guid();

EntityReferenceCollection relatedEntitiesSalesLiterature = new EntityReferenceCollection();
                           
relatedEntitiesSalesLiterature.Add(new EntityReference(SalesLiterature.EntityLogicalName, salesLiteratureId));
                           
Relationship relationshipSalesLiterature = new Relationship("productsalesliterature_association");
                           
xrm.Associate("product", newProduct.Id, relationshipSalesLiterature, 
relatedEntitiesSalesLiterature);

Here the productsalesliterature_association is the relationship between the Sales Literature and the Product. We can see the relationship in the CRM portal. The steps are
Settings->Customizations->Customize the System->Entities->Select the Entity ->Click on Relationship.

Associating the Competitor to Product in CRM

We can associate the Competitor to the Product using the below code.

var xrm = new XrmServiceContext("Xrm");

Guid competitorId = new Guid();

EntityReferenceCollection relatedEntitiesCompetitor = new EntityReferenceCollection();

relatedEntitiesCompetitor.Add(new EntityReference(Competitor.EntityLogicalName, competitorId));

Relationship relationshipCompetitor = new Relationship("competitorproduct_association");

xrm.Associate("product", newProduct.Id, relationshipCompetitor, relatedEntitiesCompetitor);

Here the competitorproduct_association is the relationship between the Competitor and the Product. We can see the relationship in the CRM portal. The steps are
Settings->Customizations->Customize the System->Entities->Select the Entity ->Click on Relationship.

Associating the Default Price List to Product in CRM

We can associate the Default Price List to the Product using the below code.

var xrm = new XrmServiceContext("Xrm");

Guid priceLevelId = new Guid();

EntityReferenceCollection relatedEntities = new EntityReferenceCollection();

relatedEntities.Add(new EntityReference(PriceLevel.EntityLogicalName, priceLevelId));

Relationship relationship = new Relationship("price_level_products");

xrm.Associate("product", newProduct.Id, relationship, relatedEntities);

Here the price_level_products is the relationship between the Price List and the Product. We can see the relationship in the CRM portal. The steps are
Settings->Customizations->Customize the System->Entities->Select the Entity ->Click on Relationship.

Saturday, 5 January 2013

HBase REST From C#


HBase supports REST for non-Java front-ends.

To start the HBase REST server, use the below command.
hbase rest start [the REST server starts listening on port 8080].

We can choose our own port using the below command.
hbase rest start -p 9090

Create a table user with column family info.

create 'user','info'

Now use the REST API to insert data into the user table.

//Requires: using System.Net; using System.IO; using System.Text; using Newtonsoft.Json.Linq;
//Create the HttpWebRequest object with the HBase REST URL (table user, row key 1).
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://---:8080/user/1");

//Method type is POST.
request.Method = "POST";

//Content type is JSON. Set this to application/json if you send JSON data; otherwise you get an error like Unsupported Media Type.
request.ContentType = "application/json";

//To store data in HBase, the row key, column and column value must be Base64-encoded.

//Id column.
string idcolumn = System.Convert.ToBase64String(Encoding.UTF8.GetBytes("info:id"));
string idvalue = System.Convert.ToBase64String(Encoding.UTF8.GetBytes("1"));

//Name column.
string namecolumn = System.Convert.ToBase64String(Encoding.UTF8.GetBytes("info:name"));
string namevalue = System.Convert.ToBase64String(Encoding.UTF8.GetBytes("A"));

//Rowkey.
string rowkey = System.Convert.ToBase64String(Encoding.UTF8.GetBytes("1"));

//I used JSON.NET (Newtonsoft.Json) to construct the JSON string.
JObject objrow = new JObject();
JProperty key = new JProperty("key", rowkey);
JArray arr = new JArray();
JObject objcolum1 = new JObject();
objcolum1.Add(new JProperty("column", idcolumn));
objcolum1.Add(new JProperty("$", idvalue));
JObject objcolum2 = new JObject();
objcolum2.Add(new JProperty("column", namecolumn));
objcolum2.Add(new JProperty("$", namevalue));
arr.Add(objcolum1);
arr.Add(objcolum2);
JProperty cell = new JProperty("Cell", arr);
objrow.Add(key);
objrow.Add(cell);
JObject main = new JObject();
//The REST API expects "Row" to be an array of row objects.
JProperty row = new JProperty("Row", new JArray(objrow));
main.Add(row);

//Get the json string.
string input = main.ToString();

//Get the bytes (UTF-8, matching the JSON content).
byte[] by = Encoding.UTF8.GetBytes(input);

//Set the content length.
request.ContentLength = by.Length;

//Get the request stream.
Stream stream = request.GetRequestStream();

//Write bytes to stream.
stream.Write(by, 0, by.Length);

//Execute the request.
HttpWebResponse response = (HttpWebResponse)request.GetResponse();

//Get the response stream.
Stream responsestream = response.GetResponseStream();

//Read the response.
using (StreamReader reader = new StreamReader(responsestream))
{
     //Read the response.
     string res = reader.ReadToEnd();

     //Write the response.
     Response.Write(res);
}

Now you can see the data in the HBase user table.
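
To verify, you can read the row back over REST (a sketch; replace the host placeholder, and note that the Accept header selects the response format):

curl -H "Accept: application/json" http://---:8080/user/1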

Installation Of HBase In Ubuntu


Use Apache HBase when you need random, realtime read/write access to your Big Data. This project's goal is the hosting of very large tables -- billions of rows X millions of columns -- atop clusters of commodity hardware. Apache HBase is an open-source, distributed, versioned, column-oriented store modeled after Google's Bigtable: A Distributed Storage System for Structured Data by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google File System, Apache HBase provides Bigtable-like capabilities on top of Hadoop and HDFS.

The HBase 0.94.2 installation was done with the below versions of Linux, Java and Hadoop respectively.

UBUNTU 12.04 LTS
JAVA 1.7.0_09
HADOOP 1.1.0

I use hduser as a dedicated Hadoop system user, with Hadoop installed in the /home/hduser/hadoop folder. Now I am going to install HBase in the /home/hduser folder. Change to the hduser home directory and execute the below commands.

Download the HBase release tarball using wget.
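For example, from the Apache archive (the exact mirror path is an assumption):
wget https://archive.apache.org/dist/hbase/hbase-0.94.2/hbase-0.94.2.tar.gz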

Unzip the tar file.
sudo tar xzf  hbase-0.94.2.tar.gz

Change the name to hbase.
sudo mv hbase-0.94.2 hbase

Set the JAVA_HOME and HBASE_CLASSPATH in hbase-env.sh.
The hbase-env.sh file exists in the conf folder of hbase. [/home/hduser/hbase/conf/hbase-env.sh]

Change
 # The java implementation to use.  Required.
 # export JAVA_HOME=/usr/lib/j2sdk1.5-sun
to
 # The java implementation to use.  Required.
 export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-amd64
Add the hbase conf directory to the HBASE_CLASSPATH.
export HBASE_CLASSPATH=/home/hduser/hbase/conf/

Set the  HBASE_HOME path.
export HBASE_HOME=/home/hduser/hbase
export PATH=${PATH}:${HBASE_HOME}/bin

hbase-site.xml.
The hbase-site.xml file exists in the conf folder of hbase. [/home/hduser/hbase/conf/hbase-site.xml]

I have a 3-node Hadoop cluster: one master and two slaves. The master node runs the namenode, secondarynamenode and jobtracker; the slave nodes run the datanode and tasktracker.

<configuration>
   <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:54310/hbase</value>
    <description>
       The directory shared by region servers. Should be fully-qualified to include the filesystem to use.
       E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR
    </description>   
   </property>
   <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be in. Possible values are
      false: standalone and pseudo-distributed setups with managed Zookeeper
      true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
    </description>
   </property>
   <property>
    <name>hbase.zookeeper.quorum</name>
    <value>10.146.244.133</value>
    <description>Comma separated list of servers in the ZooKeeper Quorum.
      For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
      By default this is set to localhost for local and pseudo-distributed modes
      of operation. For a fully-distributed setup, this should be set to a full
      list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
      this is the list of servers which we will start/stop ZooKeeper on.
      </description>
   </property>
   <property>
    <name>hbase.zookeeper.dns.nameserver</name>
    <value>10.146.244.133</value>
    <description> The host name or IP address of the name server (DNS) which a ZooKeeper server should use to determine the host name used by the master for communication and display purposes.
    </description>
  </property>

 <property>
    <name>hbase.regionserver.dns.nameserver</name>
    <value>10.146.244.133</value>
    <description> The host name or IP address of the name server (DNS) which a region server should use to determine the host name used by the master for communication and display purposes.
    </description>
  </property>

 <property>
    <name>hbase.master.dns.nameserver</name>
    <value>10.146.244.133</value>
    <description> The host name or IP address of the name server (DNS) which a master should use to determine the host name used for communication and display purposes.    </description>
  </property>
</configuration>

In the below properties I used the master's IP address.
hbase.zookeeper.quorum
hbase.zookeeper.dns.nameserver
hbase.master.dns.nameserver
hbase.regionserver.dns.nameserver

Specify RegionServers.
The regionservers file exists in the conf folder of hbase.
[/home/hduser/hbase/conf/regionservers]

master
slave1
slave2

Here I have specified the master also as a regionserver.

Remote copy the hbase folder from the master node to the slave nodes.
scp -r /home/hduser/hbase 10.146.244.62:/home/hduser/hbase      [Slave1]
scp -r /home/hduser/hbase 10.146.242.32:/home/hduser/hbase      [Slave2]

Run HBase.
Start HBase with the start-hbase.sh script, then open the shell with the hbase shell command.
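
As a quick smoke test, you can create, write and scan a table from the shell (the table and column family names below are arbitrary examples):

create 'test', 'cf'
put 'test', 'row1', 'cf:a', 'value1'
scan 'test'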

NOTE:- If you get an error like DNS name not found, you have to create forward and reverse lookup zones for the IPs. Use bind9 to create the lookups.

REST API.
To start the HBase REST server, use the below command.
hbase rest start [the REST server starts listening on port 8080].
We can choose our own port using the below command.
hbase rest start -p 9090

HBase MapReduce
To run MapReduce jobs in Hadoop that use HBase for input and output, you have to add the HBase jar files to the Hadoop classpath; otherwise you get NoClassDefFoundError errors.

I added the below jars to the HADOOP_CLASSPATH to get it working.

export HADOOP_CLASSPATH=$HBASE_HOME/hbase-0.94.2.jar:$HBASE_HOME/hbase-0.94.2-tests.jar:$HBASE_HOME/lib/zookeeper-3.4.3.jar:$HBASE_HOME/lib/avro-1.5.3.jar:$HBASE_HOME/lib/avro-ipc-1.5.3.jar:$HBASE_HOME/lib/commons-cli-1.2.jar:$HBASE_HOME/lib/jackson-core-asl-1.8.8.jar:$HBASE_HOME/lib/jackson-mapper-asl-1.8.8.jar:$HBASE_HOME/lib/commons-httpclient-3.1.jar:$HBASE_HOME/lib/jetty-6.1.26.jar:$HBASE_HOME/lib/hadoop-core-1.0.3.jar:$HBASE_HOME/lib/com.google.protobuf_2.3.0.jar

All the above jars come with HBase, but com.google.protobuf_2.3.0.jar does not. You have to download it explicitly from the internet and add it to the Hadoop classpath.