Friday 16 August 2013

The underlying connection was closed: An unexpected error occurred on receive.


ERROR: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.

The underlying connection was closed: An unexpected error occurred on receive.

To solve the above error, add the configuration below to the endpoint behavior or service behavior.

       <dataContractSerializer maxItemsInObjectGraph="6553600" />
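For context, that element sits inside a behavior under system.serviceModel; a minimal sketch (the behavior name here is illustrative, and the limit usually has to be raised on both the client and the service side):

<behaviors>
  <endpointBehaviors>
    <behavior name="LargeObjectGraphBehavior">
      <dataContractSerializer maxItemsInObjectGraph="6553600" />
    </behavior>
  </endpointBehaviors>
</behaviors>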

If the above configuration does not solve the issue, the problem is with ResponseFormat Json.
Just set the ResponseFormat to Xml and it will work without error.
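Where that setting lives, as a hedged sketch (the operation, contract, and Item type are made up for illustration; WebGet comes from System.ServiceModel.Web):

[OperationContract]
[WebGet(UriTemplate = "items", ResponseFormat = WebMessageFormat.Xml)]
List<Item> GetItems();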


If you have a DateTime property in your class, set it to some date other than the default [{01-01-0001 00:00:00}]. That solves the problem. It is a problem with Json only; it works well with Xml.
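A minimal C# sketch of the workaround (the Order type and CreatedOn property are hypothetical):

[DataContract]
public class Order
{
    [DataMember]
    public DateTime CreatedOn { get; set; }
}

//default(DateTime) is 01-01-0001 00:00:00; assign a real value before the object is serialized to JSON.
var order = new Order { CreatedOn = DateTime.UtcNow };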

Saturday 18 May 2013

Running Multiple Web Applications on Different Ports in a Single Web Role

We can run multiple web applications in a single web role.
Select the web role where you want to add the new web application on a different port.
Add a new endpoint with port 9000.
Now open the csdef file.
Under the Sites section of the web role, add the configuration below.

<Site name="WebApp2" physicalDirectory="path to the web application">
  <Bindings>
    <Binding name="Endpoint2" endpointName="Endpoint2" />
  </Bindings>
</Site>

If we add the project to the solution we can use a relative path as well. For example, if your ServiceDefinition.csdef file is located at C:\projects\CloudProject\ServiceDefinition.csdef and the folder containing your new web site is located at C:\projects\NewWebSite, then the relative path in the physicalDirectory attribute would be "..\NewWebSite".
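Putting the pieces together, a minimal sketch of how the endpoint and the extra site can look in the csdef (role, site, and endpoint names are illustrative):

<WebRole name="WebRole1">
  <Sites>
    <Site name="WebApp2" physicalDirectory="..\NewWebSite">
      <Bindings>
        <Binding name="Endpoint2" endpointName="Endpoint2" />
      </Bindings>
    </Site>
  </Sites>
  <Endpoints>
    <InputEndpoint name="Endpoint2" protocol="http" port="9000" />
  </Endpoints>
</WebRole>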

Associating Sales Literature to a Product in CRM

We can associate Sales Literature to a Product using the code below.

var xrm = new XrmServiceContext("Xrm");

//Id of the existing Sales Literature record (new Guid() is only a placeholder).
Guid salesLiteratureId = new Guid();

EntityReferenceCollection relatedEntitiesSalesLiterature = new EntityReferenceCollection();

relatedEntitiesSalesLiterature.Add(new EntityReference(SalesLiterature.EntityLogicalName, salesLiteratureId));

Relationship relationshipSalesLiterature = new Relationship("productsalesliterature_association");

xrm.Associate("product", newProduct.Id, relationshipSalesLiterature, relatedEntitiesSalesLiterature);

Here productsalesliterature_association is the relationship between the Sales Literature and the Product. We can see the relationship in the CRM portal. The steps are
Settings->Customizations->Customize the System->Entities->Select the Entity->Click on Relationships.

Associating a Competitor to a Product in CRM

We can associate a Competitor to a Product using the code below.

var xrm = new XrmServiceContext("Xrm");

//Id of the existing Competitor record (new Guid() is only a placeholder).
Guid competitorId = new Guid();

EntityReferenceCollection relatedEntitiesCompetitor = new EntityReferenceCollection();

relatedEntitiesCompetitor.Add(new EntityReference(Competitor.EntityLogicalName, competitorId));

Relationship relationshipCompetitor = new Relationship("competitorproduct_association");

xrm.Associate("product", newProduct.Id, relationshipCompetitor, relatedEntitiesCompetitor);

Here competitorproduct_association is the relationship between the Competitor and the Product. We can see the relationship in the CRM portal. The steps are
Settings->Customizations->Customize the System->Entities->Select the Entity->Click on Relationships.

Associating the Default Price List to a Product in CRM

We can associate the Default Price List to a Product using the code below.

var xrm = new XrmServiceContext("Xrm");

//Id of the existing Price List record (new Guid() is only a placeholder).
Guid priceLevelId = new Guid();

EntityReferenceCollection relatedEntities = new EntityReferenceCollection();

relatedEntities.Add(new EntityReference(PriceLevel.EntityLogicalName, priceLevelId));

Relationship relationship = new Relationship("price_level_products");

xrm.Associate("product", newProduct.Id, relationship, relatedEntities);

Here price_level_products is the relationship between the Price List and the Product. We can see the relationship in the CRM portal. The steps are
Settings->Customizations->Customize the System->Entities->Select the Entity->Click on Relationships.

Saturday 5 January 2013

HBase REST From C#


HBase supports REST for non-Java front-ends.

To start the HBase REST server, use the command below.
hbase rest start [the REST server starts listening on port 8080].

We can specify our own port using the command below.
hbase rest start -p 9090
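Once the server is up, a quick sanity check (the /version resource reports the REST server version):

curl http://localhost:9090/version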

Create a table user with column family info.

create 'user','info'

Now use the REST API to insert data into the user table.

//Namespaces needed: System, System.IO, System.Net, System.Text, and Newtonsoft.Json.Linq (JSON.NET).

//Create the HttpWebRequest object with the HBase REST URL (table names are case-sensitive, so use 'user').
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://---:8080/user/1");

//Method type is POST.
request.Method = "POST";

//Content type is JSON. Set this to application/json if you send JSON data; otherwise you get an error like "Unsupported Media Type".
request.ContentType = "application/json";

//To store data in HBase, the row key, column, and column value must be Base64-encoded strings.

//Id column.
string idcolumn = System.Convert.ToBase64String(Encoding.UTF8.GetBytes("info:id"));
string idvalue = System.Convert.ToBase64String(Encoding.UTF8.GetBytes("1"));

//Name column.
string namecolumn = System.Convert.ToBase64String(Encoding.UTF8.GetBytes("info:name"));
string namevalue = System.Convert.ToBase64String(Encoding.UTF8.GetBytes("A"));

//Rowkey.
string rowkey = System.Convert.ToBase64String(Encoding.UTF8.GetBytes("1"));

//I used JSON.NET to construct the JSON string.
JObject objrow = new JObject();
JProperty key = new JProperty("key", rowkey);
JArray arr = new JArray();
JObject objcolum1 = new JObject();
objcolum1.Add(new JProperty("column", idcolumn));
objcolum1.Add(new JProperty("$", idvalue));
JObject objcolum2 = new JObject();
objcolum2.Add(new JProperty("column", namecolumn));
objcolum2.Add(new JProperty("$", namevalue));
arr.Add(objcolum1);
arr.Add(objcolum2);
JProperty cell = new JProperty("Cell", arr);
objrow.Add(key);
objrow.Add(cell);
//The REST schema expects "Row" to be a list of row objects, so wrap the row in an array.
JArray rows = new JArray();
rows.Add(objrow);
JObject main = new JObject();
main.Add(new JProperty("Row", rows));

//Get the json string.
string input = main.ToString();

//Get the UTF-8 bytes of the JSON payload.
byte[] by = Encoding.UTF8.GetBytes(input);

//Set the content length.
request.ContentLength = by.Length;

//Write the payload to the request stream.
using (Stream stream = request.GetRequestStream())
{
     stream.Write(by, 0, by.Length);
}

//Execute the request.
HttpWebResponse response = (HttpWebResponse)request.GetResponse();

//Get the response stream.
Stream responsestream = response.GetResponseStream();

//Read the response.
using (StreamReader reader = new StreamReader(responsestream))
{
     string res = reader.ReadToEnd();

     //Write the response to the page.
     Response.Write(res);
}

Now you can see the data in the HBase user table.
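Reading the row back works the same way; a minimal sketch (same placeholder host as above; cell values in the response come back Base64-encoded):

//Request the row, asking for JSON with the Accept header.
HttpWebRequest get = (HttpWebRequest)WebRequest.Create("http://---:8080/user/1");
get.Method = "GET";
get.Accept = "application/json";

using (HttpWebResponse resp = (HttpWebResponse)get.GetResponse())
using (StreamReader reader = new StreamReader(resp.GetResponseStream()))
{
     Response.Write(reader.ReadToEnd());
}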

Installation Of HBase In Ubuntu


Use Apache HBase when you need random, realtime read/write access to your Big Data. This project's goal is the hosting of very large tables -- billions of rows X millions of columns -- atop clusters of commodity hardware. Apache HBase is an open-source, distributed, versioned, column-oriented store modeled after Google's Bigtable: A Distributed Storage System for Structured Data by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google File System, Apache HBase provides Bigtable-like capabilities on top of Hadoop and HDFS.

The HBase 0.94.2 installation was done on the below versions of Linux, Java, and Hadoop respectively.

UBUNTU 12.04 LTS
JAVA 1.7.0_09
HADOOP 1.1.0

I have hduser as a dedicated Hadoop system user, and Hadoop is installed in the /home/hduser/hadoop folder. Now I am going to install HBase in the /home/hduser folder. Change directory to /home/hduser and execute the commands below.

Download the HBase 0.94.2 tarball (hbase-0.94.2.tar.gz) using wget.

Extract the tar file.
sudo tar xzf  hbase-0.94.2.tar.gz

Rename the folder to hbase.
sudo mv hbase-0.94.2 hbase

Set JAVA_HOME and HBASE_CLASSPATH in hbase-env.sh.
The hbase-env.sh file exists in the conf folder of HBase [/home/hduser/hbase/conf/hbase-env.sh].

Change
 # The java implementation to use.  Required.
 # export JAVA_HOME=/usr/lib/j2sdk1.5-sun
to
 # The java implementation to use.  Required.
 export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-amd64
Add the HBase conf directory to the HBASE_CLASSPATH:
export HBASE_CLASSPATH=/home/hduser/hbase/conf/

Set the HBASE_HOME path (for example in ~/.bashrc):
export HBASE_HOME=/home/hduser/hbase
export PATH=${PATH}:${HBASE_HOME}/bin

Edit hbase-site.xml.
The hbase-site.xml file exists in the conf folder of HBase [/home/hduser/hbase/conf/hbase-site.xml].

I had a 3-node Hadoop cluster, one master and two slaves. The master node ran the namenode, secondarynamenode, and jobtracker; the slave nodes ran the datanode and tasktracker.

<configuration>
   <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:54310/hbase</value>
    <description>
       The directory shared by region servers. Should be fully-qualified to include the filesystem to use.
       E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR
    </description>   
   </property>
   <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be in. Possible values are
      false: standalone and pseudo-distributed setups with managed Zookeeper
      true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
    </description>
   </property>
   <property>
    <name>hbase.zookeeper.quorum</name>
    <value>10.146.244.133</value>
    <description>Comma separated list of servers in the ZooKeeper Quorum.
      For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
      By default this is set to localhost for local and pseudo-distributed modes
      of operation. For a fully-distributed setup, this should be set to a full
      list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
      this is the list of servers which we will start/stop ZooKeeper on.
      </description>
   </property>
   <property>
    <name>hbase.zookeeper.dns.nameserver</name>
    <value>10.146.244.133</value>
    <description> The host name or IP address of the name server (DNS) which a ZooKeeper server should use to determine the host name used by the master for communication and display purposes.
    </description>
  </property>

 <property>
    <name>hbase.regionserver.dns.nameserver</name>
    <value>10.146.244.133</value>
    <description> The host name or IP address of the name server (DNS) which a region server should use to determine the host name used by the master for communication and display purposes.
    </description>
  </property>

 <property>
    <name>hbase.master.dns.nameserver</name>
    <value>10.146.244.133</value>
    <description> The host name or IP address of the name server (DNS) which a master should use to determine the host name used for communication and display purposes.    </description>
  </property>
</configuration>

In the below properties I used the master's IP address.
hbase.zookeeper.quorum
hbase.zookeeper.dns.nameserver
hbase.master.dns.nameserver
hbase.regionserver.dns.nameserver

Specify the RegionServers.
The regionservers file exists in the conf folder of HBase
[/home/hduser/hbase/conf/regionservers].

master
slave1
slave2

Here I have specified the master also as a regionserver.

Remote-copy the hbase folder from the master node to the slave nodes.
scp -r /home/hduser/hbase 10.146.244.62:/home/hduser/hbase      [Slave1]
scp -r /home/hduser/hbase 10.146.242.32:/home/hduser/hbase      [Slave2]

Run HBase.
Start the cluster with start-hbase.sh (in the HBase bin folder), then run the hbase shell command.
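As a quick check that the daemons came up (assuming HBASE_MANAGES_ZK is left at its default of true, so HBase starts its own ZooKeeper):

# jps on the master should list HMaster (and HQuorumPeer);
# each regionserver node should list HRegionServer.
jps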

NOTE: If you get an error like "DNS name not found", you have to create forward and reverse lookup zones for the IPs. Use bind9 to create the lookups.

REST API.
To start the HBase REST server, use the command below.
hbase rest start [the REST server starts listening on port 8080].
We can specify our own port using the command below.
hbase rest start -p 9090

HBase MapReduce
To run MapReduce jobs in Hadoop that use HBase for input and output, you have to add the HBase jar files to the Hadoop classpath; otherwise you get NoClassDefFoundError errors.

I added the below jars to the HADOOP_CLASSPATH to get it working.

export HADOOP_CLASSPATH=$HBASE_HOME/hbase-0.94.2.jar:\
$HBASE_HOME/hbase-0.94.2-tests.jar:\
$HBASE_HOME/lib/zookeeper-3.4.3.jar:\
$HBASE_HOME/lib/avro-1.5.3.jar:\
$HBASE_HOME/lib/avro-ipc-1.5.3.jar:\
$HBASE_HOME/lib/commons-cli-1.2.jar:\
$HBASE_HOME/lib/jackson-core-asl-1.8.8.jar:\
$HBASE_HOME/lib/jackson-mapper-asl-1.8.8.jar:\
$HBASE_HOME/lib/commons-httpclient-3.1.jar:\
$HBASE_HOME/lib/jetty-6.1.26.jar:\
$HBASE_HOME/lib/hadoop-core-1.0.3.jar:\
$HBASE_HOME/lib/com.google.protobuf_2.3.0.jar

All the above jars ship with HBase except com.google.protobuf_2.3.0.jar, which you have to download separately and add to the Hadoop classpath.
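To confirm the jars are actually picked up, one hedged check (the classpath subcommand prints the effective Hadoop classpath):

hadoop classpath | tr ':' '\n' | grep hbase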