Setting up JAVA_HOME on a Mac

Installing and setting up JAVA_HOME took a bit of research for me, so I thought I would post it here for the next time anyone else, or I, wonders how to do it.

Run the command /usr/libexec/java_home -V to get the list of installed JDKs. It will print something like the following, depending on the JDKs available on your computer. On my Mac I have this version of Java:
/usr/libexec/java_home -V
Matching Java Virtual Machines (1):
    1.8.0_152, x86_64: "Java SE 8" /Library/Java/JavaVirtualMachines/jdk1.8.0_152.jdk/Contents/Home
If you have multiple JDKs installed, it will list all of them.
From the list above, pick the version you want as the default JDK. For example, I will choose version 1.8.0_152 as my default. To set it, run the command below.
export JAVA_HOME=`/usr/libexec/java_home -v 1.8.0_152`
If the major version of the available JDK is unique, you can use just the major version, like:
export JAVA_HOME=`/usr/libexec/java_home -v 1.8`
After setting JAVA_HOME, running the java -version command will show that JDK 1.8 is now the default JDK on your computer.
java version "1.8.0_152"
Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)
The change above is only active in the current shell. If you close or terminate the shell, you will need to set it again the next time you open one. To make the change permanent, set it in your shell init file. For example, if you are using bash, you can put the command in .bash_profile. Add the following lines at the end of the file.
# Setting default JDK to version 1.8.
export JAVA_HOME=`/usr/libexec/java_home -v 1.8`
To activate this configuration right away, you can run source .bash_profile. This command reads and executes .bash_profile in the current shell.
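Putting it all together, here is a sketch of the whole flow, assuming bash is your login shell (zsh users would use ~/.zshrc instead):

```shell
# Append the JAVA_HOME export to ~/.bash_profile so every new shell picks it up.
cat >> "$HOME/.bash_profile" <<'EOF'
# Setting default JDK to version 1.8.
export JAVA_HOME=`/usr/libexec/java_home -v 1.8`
EOF

# Apply it to the current shell right away. The java_home helper only exists
# on macOS, so guard the call if you try this elsewhere.
if [ -x /usr/libexec/java_home ]; then
  . "$HOME/.bash_profile"
  echo "JAVA_HOME is now: $JAVA_HOME"
fi
```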

Creating a new file system in Linux

Here is how you would create a new file system:

  1. First create a new partition using fdisk, e.g.:
    fdisk /dev/sdb     -> options: m, n, p, 1, accept the sector defaults, t, 8e, w (type 8e marks the partition as Linux LVM)
  2. Create a volume group:
    vgcreate myvg /dev/sdb1
  3. Create a logical volume:
    lvcreate -L 512G -n my_lv myvg
  4. Create the file system on it:
    mkfs.ext4 /dev/myvg/my_lv
  5. Optionally add the file system to /etc/fstab so it is mounted automatically at boot.
  6. Mount it, e.g. mount /myfilesystem
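The steps above can be strung together into one script. This is only a sketch: the device, volume group, logical volume, and mount point names are placeholders, everything requires root, and it wipes the device, so the destructive commands are guarded behind a check that the device actually exists.

```shell
# Placeholder names -- change these for your system.
DEV=/dev/sdb
VG=myvg
LV=my_lv
MNT=/myfilesystem

if [ -b "$DEV" ]; then
  # Non-interactive partitioning (an alternative to answering fdisk's prompts):
  # one primary partition spanning the disk, flagged as Linux LVM.
  parted -s "$DEV" mklabel msdos mkpart primary 1MiB 100% set 1 lvm on
  pvcreate "${DEV}1"                 # initialize the partition for LVM
  vgcreate "$VG" "${DEV}1"           # step 2: volume group
  lvcreate -L 512G -n "$LV" "$VG"    # step 3: logical volume
  mkfs.ext4 "/dev/$VG/$LV"           # step 4: file system
  mkdir -p "$MNT"
  echo "/dev/$VG/$LV $MNT ext4 defaults 0 2" >> /etc/fstab   # step 5
  mount "$MNT"                       # step 6
else
  echo "skipping: $DEV not present"
fi
```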

Drill your data with Apache Drill !

Feeling the Drill

I have been using Apache Drill to explore data for a while now. Apache Drill is a low-latency distributed query engine for large-scale datasets, including structured and semi-structured/nested data. Drill supports a variety of NoSQL databases and file systems, including HBase, MongoDB, MapR-DB, HDFS, MapR-FS, Amazon S3, Azure Blob Storage, Google Cloud Storage, Swift, NAS, and local files. To be clear, Drill is not limited to Hadoop: you can query NoSQL databases like MongoDB or HBase, cloud storage like Amazon S3 or Azure Blob Storage, or even local files on your computer. I have it installed on my laptop and use it in embedded mode to query my txt and csv files. Apache Drill can be installed on Windows, Linux, and macOS with a JDK.

Drill data like a table even when it's not - schema on read

Drill is based on schema on read: unlike traditional query engines, which require a predefined schema and structure, Drill lets you define the schema as you query the data. Cool, huh? Wait, there is more: with Drill there is no need to load or transform the data before it can be processed. Simply point the query at the file or database you want and start querying the data.
For instance, let's say you have a file customers.csv in the directory /data/customer/. Once you have Drill installed (which takes about 3 minutes), all you have to do from a Drill prompt is:
select * from dfs.`/data/customer/customers.csv`;
and Drill gets you the data. You can even pull out specific columns:
select columns[0], columns[1], columns[6] from dfs.`/data/customer/customers.csv`;

Drill also allows you to query against wildcarded file names:
select * from dfs.`/data/orders/orders-08-*-2016.csv`;
Drill also lets you create views and static tables to improve both ease of use and performance. You can check out the documentation for more options.
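To try this quickly, here is a sketch that fabricates a tiny customers.csv under /tmp (the path and data are made up); the query you would then run from the Drill prompt is shown in the comment:

```shell
# Create a small sample CSV to drill into; path and contents are made up.
mkdir -p /tmp/data/customer
cat > /tmp/data/customer/customers.csv <<'EOF'
1,Alice,alice@example.com
2,Bob,bob@example.com
3,Carol,carol@example.com
EOF

# From the Drill prompt you would then run:
#   select columns[0], columns[1] from dfs.`/tmp/data/customer/customers.csv`;
# (delimited files expose their fields through the `columns` array)
wc -l < /tmp/data/customer/customers.csv
```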

In love with your query or BI tool? No problemo

Apache Drill supports standard SQL, so you can keep using the query tools and SQL you already know. Drill provides ODBC and JDBC drivers, so you can access it from the tool of your choice. Data users can use standard BI/analytics tools such as Tableau, Qlik, and MicroStrategy to interact with non-relational datastores by leveraging Drill's JDBC and ODBC drivers. Developers can leverage Drill's simple REST API in their custom applications to create beautiful visualizations. Drill comes with a web interface when you install it in distributed mode, and it also provides a native tool called Drill Explorer, which I find really useful. You can find all the details on how to configure your tool to access Drill in the documentation.

Let's get it going …

Apache Drill is easy to download and run on your computer. It runs on all standard OSes and takes a few minutes to install. Drill can also be installed on a cluster of servers to provide a scalable, high-performance execution engine. Drill has two install options:
1. Installing in embedded mode
2. Installing in distributed mode

Installing on a computer that has a JDK involves:
1. Downloading the tar file
2. Untarring the file
3. cd into the apache-drill-<version> directory
4. Running bin/drill-embedded (Mac and Linux). On Windows: bin\sqlline.bat -u "jdbc:drill:zk=local;schema=dfs"
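For Mac and Linux, the four steps look roughly like this. The version number and archive URL are assumptions, so check the Drill download page for the current release; the download is large, so the sketch only runs it when you opt in:

```shell
# Version and mirror are assumptions -- verify on the Apache Drill download page.
DRILL_VERSION=1.21.1
DRILL_URL="https://archive.apache.org/dist/drill/drill-${DRILL_VERSION}/apache-drill-${DRILL_VERSION}.tar.gz"

if [ "${RUN_DRILL_INSTALL:-0}" = "1" ]; then
  curl -fSLO "$DRILL_URL"                          # step 1: download the tar file
  tar -xzf "apache-drill-${DRILL_VERSION}.tar.gz"  # step 2: untar the file
  cd "apache-drill-${DRILL_VERSION}"               # step 3: cd into the directory
  bin/drill-embedded                               # step 4: start the Drill shell
else
  echo "set RUN_DRILL_INSTALL=1 to download and run Drill ${DRILL_VERSION}"
fi
```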


Drill into your data with Apache Drill, and hopefully you will enjoy drilling as much as I do.



Script to check index fragmentation in SQL Server

Today I needed to find the index fragmentation on one of my SQL Server databases. I found this script buried on my computer and thought I would share it here:

DECLARE @DatabaseID int

SET @DatabaseID = DB_ID()

SELECT DB_NAME(@DatabaseID) AS DatabaseName,
schemas.[name] AS SchemaName,
objects.[name] AS ObjectName,
indexes.[name] AS IndexName,
objects.type_desc AS ObjectType,
indexes.type_desc AS IndexType,
dm_db_index_physical_stats.partition_number AS PartitionNumber,
dm_db_index_physical_stats.page_count AS [PageCount],
dm_db_index_physical_stats.avg_fragmentation_in_percent AS AvgFragmentationInPercent
FROM sys.dm_db_index_physical_stats (@DatabaseID, NULL, NULL, NULL, 'LIMITED') dm_db_index_physical_stats
INNER JOIN sys.indexes indexes ON dm_db_index_physical_stats.[object_id] = indexes.[object_id] AND dm_db_index_physical_stats.index_id = indexes.index_id
INNER JOIN sys.objects objects ON indexes.[object_id] = objects.[object_id]
INNER JOIN sys.schemas schemas ON objects.[schema_id] = schemas.[schema_id]
WHERE objects.[type] IN('U','V')
AND objects.is_ms_shipped = 0
AND indexes.[type] IN(1,2,3,4)
AND indexes.is_disabled = 0
AND indexes.is_hypothetical = 0
AND dm_db_index_physical_stats.alloc_unit_type_desc = 'IN_ROW_DATA'
AND dm_db_index_physical_stats.index_level = 0
AND dm_db_index_physical_stats.page_count >= 1000


As a best practice, rebuild the index if the fragmentation is more than 40%; otherwise, a reorganize will suffice.
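If you want to run the report and the follow-up maintenance from the command line, here is a sketch using sqlcmd. The server, database, index, and script file names are all placeholders, and -E (Windows authentication) is an assumption; use -U/-P on other setups.

```shell
# Placeholders -- substitute your own server, database, index, and script file.
SERVER=localhost
DB=MyDatabase
SCRIPT=index_fragmentation.sql

if command -v sqlcmd >/dev/null 2>&1 && [ -f "$SCRIPT" ]; then
  # Run the fragmentation report (the query above saved as $SCRIPT).
  sqlcmd -S "$SERVER" -d "$DB" -E -i "$SCRIPT"
  # Then act on the results: REBUILD heavily fragmented indexes
  # (or swap REBUILD for REORGANIZE below the threshold).
  sqlcmd -S "$SERVER" -d "$DB" -E -Q "ALTER INDEX IX_Example ON dbo.MyTable REBUILD;"
else
  echo "sqlcmd or $SCRIPT not available; skipping"
fi
```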
