Copy S3 objects across AWS Accounts

This post shows how to copy objects between S3 buckets across different AWS accounts. It's not an easy drag and drop, and I'm not sure why Amazon doesn't provide a simple "SFTP"-like feature for it. Here are the steps:

Prerequisites

  1. You need access to both AWS accounts.
  2. You need an IAM user on the destination account.
  3. You need the AWS account number of the destination account.
  4. You need the AWS CLI configured on your machine with the IAM user you created or used in the previous step (a quick setup sketch follows below).
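If the CLI is not set up yet, here is a quick sketch; the profile name destination-user is just an example:

# Configure the AWS CLI with the destination account's IAM user credentials.
aws configure --profile destination-user
# AWS Access Key ID [None]: <access key of the destination IAM user>
# AWS Secret Access Key [None]: <secret key>
# Default region name [None]: us-east-1
# Default output format [None]: json

If you use a named profile like this, remember to pass --profile destination-user to the commands later in this post, or configure the default profile instead.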

Get AWS Account number

  1. Log in to the destination AWS account
  2. Go to the My Account page and copy the Account ID (or grab it from the CLI as sketched below)
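If you would rather stay on the command line, STS can return the account ID as well, assuming the CLI is already configured for that account:

# Print just the account ID of the currently configured credentials.
aws sts get-caller-identity --query Account --output text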

Set S3 policy on source account

  1. Log in to the source AWS account
  2. Go to the S3 bucket
  3. Attach the following bucket policy to the bucket

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DelegateS3Access",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::DESTINATION_BUCKET_ACCOUNT_NUMBER:root"
            },
            "Action": [
                "s3:ListBucket",
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::SOURCE_BUCKET_NAME/*",
                "arn:aws:s3:::SOURCE_BUCKET_NAME"
            ]
        }
    ]
}

Replace DESTINATION_BUCKET_ACCOUNT_NUMBER with the account ID you copied earlier, and SOURCE_BUCKET_NAME with the actual bucket name.
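If you prefer the CLI over the console for this step, the same policy can be applied with s3api. Note this must run with the source account's credentials, and policy.json is just an example file name for the JSON above:

# Apply the bucket policy from a local JSON file to the source bucket.
aws s3api put-bucket-policy --bucket SOURCE_BUCKET_NAME --policy file://policy.json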

Attach a policy on the destination account

  1. Log in to the destination AWS account
  2. Go to the IAM console
  3. Select Policies
  4. Add the following as a new policy and attach it to the IAM user

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::SOURCE_BUCKET_NAME",
                "arn:aws:s3:::SOURCE_BUCKET_NAME/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::DESTINATION_BUCKET_NAME",
                "arn:aws:s3:::DESTINATION_BUCKET_NAME/*"
            ]
        }
    ]
}

Replace DESTINATION_BUCKET_NAME with the actual destination bucket name, and SOURCE_BUCKET_NAME with the actual source bucket name.
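Alternatively, the policy can be attached from the CLI as an inline user policy. This runs under the destination account's admin credentials; the user name copy-user and the policy name are placeholders:

# Attach the policy above (saved locally as user-policy.json) inline to the IAM user.
aws iam put-user-policy --user-name copy-user --policy-name S3CrossAccountCopy --policy-document file://user-policy.json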

Sync the S3 buckets from the AWS CLI

Using the AWS CLI on your computer, issue the following command after replacing the bucket names with the actual names.
It's important to use the destination account's IAM user credentials.

aws s3 sync s3://SOURCE-BUCKET-NAME s3://DESTINATION-BUCKET-NAME --source-region SOURCE-REGION-NAME --region DESTINATION-REGION-NAME

This syncs the two S3 buckets. As usual, use due diligence before running this against a production system.
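Before running it for real, sync supports a dry run that lists what would be copied without transferring anything:

# Preview the sync without copying any objects.
aws s3 sync s3://SOURCE-BUCKET-NAME s3://DESTINATION-BUCKET-NAME --dryrun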


Setting up JAVA_HOME on a mac

Installing Java and setting up JAVA_HOME took a bit of research for me, so I thought I would post it here for the next time anyone else (or I) wonders how to do it.

Run the command /usr/libexec/java_home -V to get the list of installed JDKs. It prints something like the following, depending on the JDKs available on your computer.
On my Mac I have the following version of Java:
/usr/libexec/java_home -V
Matching Java Virtual Machines (1):
    1.8.0_152, x86_64: "Java SE 8" /Library/Java/JavaVirtualMachines/jdk1.8.0_152.jdk/Contents/Home
/Library/Java/JavaVirtualMachines/jdk1.8.0_152.jdk/Contents/Home
If you have multiple JDKs, it will list all of them.
From the list above, pick the version you want as the default JDK. For example, I will choose version 1.8.0_152 as my default JDK. To set it, run the command below.
export JAVA_HOME=`/usr/libexec/java_home -v 1.8.0_152`
If only one installed JDK has a given major version, you can just use the major version, like:
export JAVA_HOME=`/usr/libexec/java_home -v 1.8`
After setting JAVA_HOME, running the java -version command will show that JDK 1.8 is the new default JDK on your computer:
java version "1.8.0_152"
Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)
The change above is only active in the current shell. If you close or terminate the shell, you will need to set it again the next time you open one. To make the change permanent, set it in your shell init file. For example, if you are using bash, put the command in .bash_profile by adding the following lines at the end of the file:
# Setting default JDK to version 1.8.
export JAVA_HOME=`/usr/libexec/java_home -v 1.8`
To activate this configuration right away, you can run source .bash_profile. This command reads and executes .bash_profile in the current shell.
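To confirm the setting stuck, open a new terminal and check both the variable and the version the java binary resolves to:

# Run in a fresh shell to verify the permanent setting.
echo $JAVA_HOME
java -version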

Creating a new file system in Linux

Here's how you would create a new file system:

  1. First create a new partition using fdisk, e.g.:
    fdisk /dev/sdb    -> options: m (help), n, p, 1, <accept defaults>, t, 8e (Linux LVM), w
  2. Create a volume group:
    vgcreate myvg /dev/sdb1
  3. Create a logical volume:
    lvcreate -L 512G -n my_lv myvg
  4. Build the file system:
    mkfs.ext4 /dev/myvg/my_lv
  5. Optionally add the file system to /etc/fstab so it mounts automatically at boot
  6. Mount it, e.g. mount /dev/myvg/my_lv /myfilesystem (see the consolidated sketch below)
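Putting it all together, a minimal end-to-end sketch; the device /dev/sdb1, the 512G size and the /myfilesystem mount point are examples, so adjust them for your system:

# Create an LVM-backed ext4 file system on /dev/sdb1 and mount it.
pvcreate /dev/sdb1                     # initialize the partition for LVM
vgcreate myvg /dev/sdb1                # create volume group "myvg"
lvcreate -L 512G -n my_lv myvg         # carve out a 512G logical volume
mkfs.ext4 /dev/myvg/my_lv              # build the ext4 file system
mkdir -p /myfilesystem                 # create the mount point
mount /dev/myvg/my_lv /myfilesystem    # mount it
# Optional: persist the mount across reboots.
echo '/dev/myvg/my_lv /myfilesystem ext4 defaults 0 2' >> /etc/fstab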

Drill your data with Apache Drill!

Feeling the Drill

I have been using Apache Drill to explore data for a while now. Apache Drill is a low-latency distributed query engine for large-scale datasets, including structured and semi-structured/nested data. Drill supports a variety of NoSQL databases and file systems, including HBase, MongoDB, MapR-DB, HDFS, MapR-FS, Amazon S3, Azure Blob Storage, Google Cloud Storage, Swift, NAS and local files. To be clear, Drill is not limited to Hadoop: you can query NoSQL databases like MongoDB or HBase, cloud storage like Amazon S3 or Azure Blob Storage, or even local files on your computer. I have it installed on my laptop and use it in embedded mode to query my txt and csv files. Apache Drill can be installed on Windows, Linux and macOS with a JDK.

Drill data like a table even when it's not – schema on read

Drill is based on schema-on-read: unlike traditional query engines that require a predefined schema and structure, Drill lets you define the schema as you query the data. Cool, huh? Wait, there is more: with Drill there's no need to load or transform the data before it can be processed. Simply point the query at the file or database you want to query and start querying the data.
For instance, let's say you have a file customers.csv in a directory /data/customer/. Once you have Drill installed (which takes about 3 minutes), all you have to do from a Drill prompt is:
select * from dfs.`/data/customer/customers.csv`;
and Drill gets you the data. You can even bring back specific columns (with the default settings, Drill exposes the fields of a headerless CSV file as a columns array):
select columns[0], columns[1], columns[6] from dfs.`/data/customer/customers.csv`;

Drill also allows you to query against wildcarded file names:
select * from dfs.`/data/orders/orders-08-*-2016.csv`;
Drill also lets you create views and static tables to further increase ease of use and improve performance. You can check out the documentation for more options.
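For example, here is a rough sketch of a view over the CSV file above that gives the positional columns real names; the view name customers_v and the column positions are made up:

$ bin/drill-embedded
0: jdbc:drill:zk=local> CREATE VIEW dfs.tmp.customers_v AS
. . . . . . . . . . . > SELECT columns[0] AS name, columns[6] AS city
. . . . . . . . . . . > FROM dfs.`/data/customer/customers.csv`;
0: jdbc:drill:zk=local> SELECT name, city FROM dfs.tmp.customers_v LIMIT 10;

The view goes into dfs.tmp because that workspace is writable out of the box.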

In love with your query or BI tool? No problemo

Apache Drill supports standard SQL, so you can continue to use the query tools and SQL you have been using. Drill ships ODBC and JDBC drivers, so it will let you access Drill from the tool of your choice. Data users can use standard BI/analytics tools such as Tableau, Qlik, MicroStrategy and so on to interact with non-relational datastores by leveraging Drill's JDBC and ODBC drivers. Developers can leverage Drill's simple REST API in their custom applications to create beautiful visualizations. Drill comes with a web interface when you install it in distributed mode, and it also provides a native tool called Drill Explorer, which I find really useful. You can find all the details on how to configure your tool to access Drill in the documentation.
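As a quick taste of the JDBC route, the sqlline client that ships with Drill connects with a plain JDBC URL; the ZooKeeper host and cluster ID in the second command are examples:

# Embedded (local) Drill:
bin/sqlline -u "jdbc:drill:zk=local"
# Distributed Drill, pointing at the ZooKeeper quorum the drillbits register with:
bin/sqlline -u "jdbc:drill:zk=zk1.example.com:2181/drill/drillbits1"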

Let's get it going…

Apache Drill is easy to download and run on your computer. It runs on all standard operating systems and takes a few minutes to install. Drill can also be installed on a cluster of servers to provide a scalable, high-performance execution engine. Drill has two install options:
1. Installing in embedded mode
2. Installing in distributed mode

Installing on a computer that already has a JDK involves (see the sketch below):
1. Downloading the tar file
2. Untarring the file
3. cd into the apache-drill-<version> directory
4. Running bin/drill-embedded (Mac and Linux). On Windows, from the bin directory: sqlline.bat -u "jdbc:drill:zk=local;schema=dfs"
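On Mac or Linux the whole install comes down to a handful of commands; the 1.8.0 version number and archive URL below are examples, so grab the current release from the Drill download page:

# Download, unpack and start Apache Drill in embedded mode.
curl -O https://archive.apache.org/dist/drill/drill-1.8.0/apache-drill-1.8.0.tar.gz
tar -xzf apache-drill-1.8.0.tar.gz
cd apache-drill-1.8.0
bin/drill-embedded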


Drill into your data with Apache Drill, and hopefully you will enjoy drilling as much as I do.

