Install RazorSQL on Ubuntu 12.04 LTS x64


RazorSQL is a GUI tool for working with Postgresql.

Install RazorSQL

First, create the razorsql directory.

mkdir ~/Downloads/razorsql

Download RazorSQL into the ~/Downloads/razorsql directory.

  1. Open a web browser to the RazorSQL download page.
  2. Click Download next to Linux (64-bit).

After the download completes, open a command prompt and enter the following commands. I have assumed that you downloaded the .zip file to the Downloads/razorsql directory; unzipping it should produce a razorsql directory containing the razorsql.sh launcher script.

cd ~/Downloads/razorsql
unzip razorsql*.zip
mv razorsql ~/bin/
cd ~/bin/razorsql
chmod 755 razorsql.sh
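
You can then start RazorSQL with the bundled launcher script (the script name is an assumption based on the zip contents; adjust it if yours differs):

~/bin/razorsql/razorsql.sh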

Connect to a Database

The following steps are performed in the RazorSQL GUI.

  • From the menu, select Connections, then click Add Connection Profile.
  • In the Connection Wizard, select Postgresql, then click Continue.
  • On the second screen, enter your connection details (host, port, database name, user name, and password).
  • Click Connect.

Enable TCP/IP Connections to Postgresql 9.1


The default installation of Postgresql does not allow TCP/IP connections, which also means JDBC connections won’t work.

Enable TCP/IP Connections

cd /etc/postgresql/9.1/main

Let’s make a backup of the original config file (the .orig suffix is just a convention), then open it for editing.

sudo cp postgresql.conf postgresql.conf.orig
sudo vi postgresql.conf

Find the line that contains

#listen_addresses = 'localhost'

and change it to:

listen_addresses = 'localhost'

If you want to allow remote connections, then you need to set listen_addresses to '*' (listen on all interfaces) or to a comma-separated list of addresses that includes your machine’s IP address, for example:

listen_addresses = '*'
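
Remote clients also need a matching rule in pg_hba.conf, which lives in the same directory. A minimal sketch, assuming your clients are on the 192.168.1.0/24 subnet and authenticate with md5 passwords:

host    all    all    192.168.1.0/24    md5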

You will also need to update the firewall to allow connections to this machine.
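
For example, if you use Ubuntu’s ufw firewall (an assumption; your firewall may differ), you could open the default Postgresql port:

sudo ufw allow 5432/tcp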

Restart the Postgresql Server

sudo service postgresql restart

Test the TCP/IP Connection

psql -h localhost -p 5433 -U postgres -W

Adjust -p to match the port set in postgresql.conf; a default single-cluster install listens on 5432.

Initial Configuration of Postgresql on Ubuntu 12.04 LTS


After installing Postgresql on Ubuntu, you will need to complete a few basic setup steps.

Set the Password for the postgres User

The first step is to update the password for the postgres user.

sudo -u postgres psql postgres

Then enter the following command at the postgres=# prompt.

\password postgres

Create a Database

If you exited the postgres=# prompt, then enter the following command to connect to Postgresql. However, if your prompt still looks like postgres=#, then do not enter the psql command below.

sudo -u postgres psql postgres

Enter the following commands at the postgres=# prompt.

First we’ll list all of the databases, which should be the following: postgres, template0, and template1.

\l

Next, create the practice database and list the databases again:

CREATE DATABASE practicedb;
\l

Now you should see the practicedb database.

Press Ctrl + D to exit the Postgres prompt.
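
Later, you can connect directly to the new database (a sketch using the postgres superuser account):

sudo -u postgres psql practicedb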

Install Postgresql on Ubuntu 12.04 LTS


Postgresql is an open source object relational database. It is often thought of as an alternative to MySQL.

In this post I’ll provide the steps required to install Postgresql on a developer laptop, assuming that work will be done in both SQL and Python.

You can learn more at the official PostgreSQL site, postgresql.org.


Open a command prompt and enter the following commands.

sudo apt-get install postgresql postgresql-common postgresql-contrib \
postgresql-client postgresql-client-common \
pgsnap pgadmin3 pgpool2 ptop pgtune pgloader pgagent \
python-pygresql postgresql-plpython-9.1 python-psycopg2

You’ll see a list of additional packages that will be installed by default.

The following extra packages will be installed:
libossp-uuid16 libpgpool0 libpq5 libwxbase2.8-0 libwxgtk2.8-0 pgadmin3-data php5-cli php5-common php5-pgsql postgresql-9.1 postgresql-client-9.1 postgresql-contrib-9.1 python-support

Installed Packages

Each of the installed packages is described below; the names are matched to the descriptions in the order of the install command.

postgresql
This is the core Postgresql object relational database server.

postgresql-common
Adds the ability to set up a cluster of Postgresql database servers.

postgresql-contrib
Additional extensions and utilities that ship with the Postgresql distribution.

postgresql-client
Command line client for interacting with a Postgresql database, including the psql command.

postgresql-client-common
Allows multiple client versions to be installed as part of a Postgresql cluster.

pgsnap
Generates a Postgresql performance report in HTML format.

pgadmin3
GUI tool to work with Postgresql. Personally, I prefer some of the commercial tools, such as Navicat on Linux or EMS SQL Manager on Windows.

pgpool2
Middleware between a Postgresql client and a Postgresql server that provides connection pooling, replication and load balancing, among other things.

ptop
CLI based performance monitoring tool that’s analogous to the Linux top command.

pgtune
Automatically tunes the Postgresql configuration file postgresql.conf based on the system’s hardware.

pgloader
Utility for loading flat files and CSV files into a Postgresql table.

pgagent
Job scheduler for Postgresql.

python-pygresql
Python module that allows you to query a Postgresql database from a python script. Basically, this is for python developers who need to query Postgresql.

postgresql-plpython-9.1
Allows SQL developers to extend their SQL scripts by writing procedural functions in python.

python-psycopg2
Similar to python-pygresql, except that python-psycopg2 is designed for heavily threaded python scripts that create and destroy a large number of cursors, and execute a high volume of INSERTs and UPDATEs.
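
As a quick sanity check that the Python modules installed correctly, you can import them from the shell (pgdb is the DB-API module provided by python-pygresql):

python -c 'import pgdb, psycopg2; print "ok"'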

How to write professional emails (Top 10 list)

I was going through my old notes while cleaning out my closet. I came across the following notes, which I like a lot, on how to write a professional email.

  1. Use email macros to both improve your efficiency in replying to email, and to improve quality.
  2. End each sentence with the correct punctuation.
  3. Write short sentences.
  4. Each sentence should discuss only one topic.
  5. Use bulleted lists to simplify the information you are trying to convey.
    1. Bulleted lists are easy to read.
    2. Each point is separated from other points.
    3. Lists help to organize thoughts.
  6. Use the following correctly:
    1. Capitalization
    2. Spacing
    3. Formatting
  7. Do not use SMS fake words – ur, dk, lol, how ru, etc.
  8. Make the Subject descriptive.
  9. Limit the email to the subject matter described in the Subject.
  10. Attach files before writing the email. Nobody likes a forgotten attachment.

There are more, but I would put these as my top 10 list.

How to add numbers with Pig


We’re going to start with a very simple Pig script that reads a file that contains 2 numbers per line separated by a tab (the default delimiter for PigStorage). The Pig script will first read the line, store each of the 2 numbers in separate fields, and will then add the numbers together.

Create the Sample Input File

vi pig-practice01.txt

Paste the following into pig-practice01.txt.

5	1
6	4
3	2
1	1
9	2
3	8

Create the Input and Output Directories in HDFS

We’re going to create 2 directories to store the input to and output from our first pig script.

hadoop fs -mkdir pig01-input
hadoop fs -mkdir pig01-output

Put Data File into HDFS

hadoop fs -put pig-practice01.txt pig01-input

Now, let’s check that our file was put from our local file system to HDFS correctly.

hadoop fs -ls pig01-input
hadoop fs -cat pig01-input/pig-practice01.txt

Write the Pig Latin Script

vi practice01.pig

Paste the following code into practice01.pig.

-- Add 2 numbers together

-- Load the practice file from HDFS
A = LOAD 'pig01-input/pig-practice01.txt' USING PigStorage() AS (x:int, y:int);

-- Add x and y
B = FOREACH A GENERATE x + y;

-- Store the output in HDFS
STORE B INTO 'pig01-output/results' USING PigStorage();

Run the Pig Script

pig practice01.pig
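
If you want to test a script without a Hadoop cluster, Pig can also run in local mode, where the input and output paths resolve against the local filesystem instead of HDFS:

pig -x local practice01.pig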

View the Results

hadoop fs -ls pig01-output/results

The results are stored in the part* file.

hadoop fs -cat pig01-output/results/part-m-00000
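
Given the sample input above, the results file should contain one sum per line:

6
10
5
2
11
11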


Install Pig 0.9.2 for CDH4 on Ubuntu 12.04 LTS x64


Installing Pig is drop dead simple.


sudo apt-get install pig

Check the Pig version.

pig --version

Setup the Environment

We’re going to set the environment variables system-wide for Pig programming.

sudo vi /etc/environment

Paste the following environment variables into the environment file.
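
A minimal sketch for the CDH4 pig package, assuming Pig lives in /usr/lib/pig and the Hadoop configuration in /etc/hadoop/conf (verify both paths on your system):

PIG_HOME="/usr/lib/pig"
PIG_CLASSPATH="/etc/hadoop/conf"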

Then reload the environment file:

source /etc/environment

That’s it. You can now start to write and run pig jobs.

Install Pentaho Analysis Services (Mondrian) 3.4.1 with MySQL on Ubuntu 12.04 LTS


Pentaho Analysis Services is called Mondrian, which is the name I’ll use for the remainder of this post. Mondrian is Pentaho’s OLAP server.

In this post I’ll provide step-by-step instructions on how to install Mondrian 3.4.1 on Ubuntu Linux 12.04 LTS x64. We’ll use MySQL as the database.


Let’s first create a directory to download Mondrian into.

mkdir -p ~/Downloads/pentaho/analysis
cd ~/Downloads/pentaho/analysis

Download the mondrian-3.4.1 .zip distribution from the Mondrian project page on SourceForge into this directory, using either wget or a web browser.

After the download completes, unzip the archive (the archive name pattern below is an assumption; match it to what you downloaded):

cd ~/Downloads/pentaho/analysis
unzip mondrian-*.zip

Install the Mondrian Server

cd ~/Downloads/pentaho/analysis/mondrian-3.4.1/lib
sudo unzip mondrian.war -d /opt/pentaho/biserver-ce/tomcat/webapps/mondrian
sudo chown -R pentaho:pentaho /opt/pentaho/biserver-ce/tomcat/webapps/mondrian

Create the FoodMart Database

FoodMart is a sample database that can be used to learn Mondrian.

mysql -u root -p

Enter the following commands at the MySQL command prompt.

mysql> CREATE DATABASE foodmart;

Please use a real password by changing pass.word below.

mysql> GRANT ALL ON foodmart.* TO foodmart@localhost IDENTIFIED BY 'pass.word';
mysql> quit;
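
You can verify the new account by connecting to the foodmart database as the foodmart user (you will be prompted for the password):

mysql -u foodmart -p foodmart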

Import the Foodmart Schema and Data

cd /opt/pentaho/biserver-ce/tomcat/webapps/mondrian

Change akbar to your user name in the inputFile option in the command below. The class path shown here is a sketch: it assumes the jars unpacked from mondrian.war into the webapp’s WEB-INF/lib directory, so adjust the names, versions, and paths to match your install, and add any other jars from that directory (the eigenbase and logging jars, for example) if the loader complains about missing classes.

java -cp "\
/opt/pentaho/biserver-ce/tomcat/webapps/mondrian/WEB-INF/lib/mondrian.jar:\
/opt/pentaho/biserver-ce/tomcat/webapps/mondrian/WEB-INF/lib/commons-collections-3.1.jar:\
/opt/pentaho/biserver-ce/tomcat/webapps/mondrian/WEB-INF/lib/olap4j.jar:\
/opt/pentaho/biserver-ce/tomcat/lib/mysql-connector-java-5.1.17.jar" \
mondrian.test.loader.MondrianFoodMartLoader -verbose -tables -data -indexes \
-jdbcDrivers=com.mysql.jdbc.Driver \
-inputFile=/home/akbar/Downloads/pentaho/analysis/mondrian-3.4.1/demo/FoodMartCreateData.sql \
-jdbcURL="jdbc:mysql://localhost/foodmart?user=foodmart&password=pass.word"

If you get an error, there is a 99% probability that the jar files cannot be found at the paths specified in the command above. So, the first thing you should do is check that the jar files actually exist at the class path locations listed above, then update the paths as necessary.

Update the Sample MDX Query Files

cd /opt/pentaho/biserver-ce/tomcat/webapps/mondrian/WEB-INF/queries

Edit each of the following files and make the change shown in the From / To instructions below.

Change each of the JSP files from:

<jp:mondrianQuery id="query01" jdbcDriver="org.apache.derby.jdbc.EmbeddedDriver" jdbcUrl="jdbc:derby:classpath:/foodmart" catalogUri="/WEB-INF/queries/FoodMart.xml"
   jdbcUser="sa" jdbcPassword="sa" connectionPooling="false">


<jp:mondrianQuery id="query01" jdbcDriver="com.mysql.jdbc.Driver" jdbcUrl="jdbc:mysql://localhost/foodmart?user=foodmart&password=pass.word" catalogUri="/WEB-INF/queries/FoodMart.xml">

sudo vi fourhier.jsp
sudo vi mondrian.jsp
sudo vi arrows.jsp
sudo vi colors.jsp

Then restart Tomcat so the changes take effect (assuming the standard Tomcat control scripts):

cd /opt/pentaho/biserver-ce/tomcat/bin
sudo ./shutdown.sh
sudo ./startup.sh

I have changed the default Pentaho BI Server port from 8080 to 8585.

Open a web browser to http://localhost:8585/mondrian.

Click each of the sample query links on that page (they correspond to the fourhier, mondrian, arrows, and colors JSP pages edited above) to view the same cube.

The cube will be populated the first time you click one of the links. In other words, the first page will take a long time to load while the cube is populating. On my system, I actually had to reboot and restart Tomcat before the pages displayed.

Mondrian Configuration Files

Nothing needs to be done to set up the FoodMart database; however, you will need to edit the following configuration files when you create your own database.

The first is most likely web.xml, the webapp’s deployment descriptor, which holds the Mondrian servlet configuration (the exact filename is an assumption):

cd /opt/pentaho/biserver-ce/tomcat/webapps/mondrian/WEB-INF
sudo cp web.xml web.xml.orig
sudo vi web.xml

The second is datasources.xml, which defines the data sources Mondrian serves:

cd /opt/pentaho/biserver-ce/tomcat/webapps/mondrian/WEB-INF
sudo cp datasources.xml datasources.xml.orig
sudo vi datasources.xml


I was initially using another set of instructions; however, I got the following error when I ran the loader command.

Exception in thread "main" java.lang.NoClassDefFoundError: org/olap4j/mdx/IdentifierSegment
	at mondrian.test.loader.MondrianFoodMartLoader.<clinit>(MondrianFoodMartLoader.java)
Caused by: java.lang.ClassNotFoundException: org.olap4j.mdx.IdentifierSegment
	at java.net.URLClassLoader$1.run(Native Method)
	at java.lang.ClassLoader.loadClass(ClassLoader.java)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java)
	at java.lang.ClassLoader.loadClass(ClassLoader.java)
	... 1 more
Could not find the main class: mondrian.test.loader.MondrianFoodMartLoader.  Program will exit.

The solution was to add the following missing .jar files to the class path:

  • commons-collections-3.1.jar
  • olap4j.jar
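
If you are unsure which jar provides a missing class, you can search a jar’s contents from the shell. For example, from the WEB-INF/lib directory:

unzip -l olap4j.jar | grep -i IdentifierSegment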

How to view files in HDFS (hadoop fs -ls)


The hadoop fs -ls command allows you to view the files and directories in your HDFS filesystem, much as the ls command works on Linux / OS X / *nix.

Default Home Directory in HDFS
A user’s home directory in HDFS is located at /user/userName. For example, my home directory is /user/akbar.
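
If a user’s home directory does not exist yet, the HDFS superuser can create it. A quick sketch, using akbar as the example user name:

sudo -u hdfs hadoop fs -mkdir /user/akbar
sudo -u hdfs hadoop fs -chown akbar:akbar /user/akbar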

List the Files in Your Home Directory

hadoop fs -ls defaults to /user/userName, so you can leave the path blank to view the contents of your home directory.

hadoop fs -ls

Recursively List Files

The following command will recursively list all files in the /tmp/hadoop-yarn directory.

hadoop fs -ls -R /tmp/hadoop-yarn

Show List Output in Human Readable Format

Human readable format will show each file’s size, such as 1461, as 1.4k.

hadoop fs -ls -h /user/akbar/input

You will see output similar to:
-rw-r--r--   1 akbar akbar       1.4k 2012-06-25 16:45 /user/akbar/input/core-site.xml
-rw-r--r--   1 akbar akbar       1.8k 2012-06-25 16:45 /user/akbar/input/hdfs-site.xml
-rw-r--r--   1 akbar akbar       1.3k 2012-06-25 16:45 /user/akbar/input/mapred-site.xml
-rw-r--r--   1 akbar akbar       2.2k 2012-06-25 16:45 /user/akbar/input/yarn-site.xml

List Information About a Directory

By default, hadoop fs -ls shows the contents of a directory. But what if you want to view information about the directory, not the directory’s contents?

To show information about a directory, use the -d option.

hadoop fs -ls -d /user/akbar

drwxr-xr-x   - akbar akbar          0 2012-07-07 02:28 /user/akbar/

Compare the output above to the output without the -d option:
drwxr-xr-x   - akbar akbar          0 2012-06-25 16:45 /user/akbar/input
drwxr-xr-x   - akbar akbar          0 2012-06-25 17:09 /user/akbar/output
-rw-r--r--   1 akbar akbar          3 2012-07-07 02:28 /user/akbar/text.hdfs

Show the Usage Statement

hadoop fs -usage ls

The output will be:

Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [<path> ...]

hdfs dfsadmin -metasave


hdfs dfsadmin -metasave provides additional information compared to hdfs dfsadmin -report. In particular, it reports block-level details, including:

  • blocks waiting for replication
  • blocks currently being replicated
  • total number of blocks

hdfs dfsadmin -metasave filename.txt

Run the command with sudo -u hdfs prefixed to ensure you don’t get a permission denied error. CDH4 runs the namenode as the hdfs user by default; however, if you have changed the user the namenode runs as, substitute that user instead.

sudo -u hdfs hdfs dfsadmin -metasave metasave-report.txt

You will see output similar to:

Created file metasave-report.txt on server hdfs://localhost:8020

The output above initially confused me, as I thought the metasave report was saved to the HDFS filesystem. However, it’s stating that the metasave report is saved into the /var/log/hadoop-hdfs directory on localhost.

cd /var/log/hadoop-hdfs
cat metasave-report.txt

You will see output similar to:

58 files and directories, 17 blocks = 75 total
Live Datanodes: 1
Dead Datanodes: 0
Metasave: Blocks waiting for replication: 0
Mis-replicated blocks that have been postponed:
Metasave: Blocks being replicated: 0
Metasave: Blocks 0 waiting deletion from 0 datanodes.
Metasave: Number of datanodes: 1 IN 247241674752(230.26 GB) 323584(316 KB) 0% 220983930880(205.81 GB) Sat Jul 14 18:52:49 PDT 2012