muSOAing for 4/17/11 – Write once Read Many?

One of the defining features of a Big Data setup is its Write once Read Many paradigm. A Big Data infrastructure like Hadoop is still a data warehousing infrastructure used for analyzing historical information. Your relational store remains the repository for ongoing OLTP needs, with data being ETLd into your Big Data infrastructure, written to the file system, and analyzed using map/reduce at the lowest level. Advocates encourage the use of higher level tools like Pig and Hive to perform analytics. These tools execute map/reduce for you but provide higher level, SQL-like interfaces that you are already familiar with; the commands you issue are translated into map/reduce jobs under the covers.
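To make the "under the covers" part concrete, here is a minimal sketch of the kind of map/reduce job that a Hive query such as SELECT region, COUNT(*) FROM sales GROUP BY region gets translated into. It uses the standard org.apache.hadoop.mapreduce API; the class name, the tab-delimited input, and the assumption that region is the first column are all hypothetical, chosen just for illustration.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class CountByRegion {

    // Map phase: emit (region, 1) for every input record.
    public static class RegionMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text region = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Assumption: tab-delimited rows with region in the first column.
            String[] fields = value.toString().split("\t");
            region.set(fields[0]);
            context.write(region, ONE);
        }
    }

    // Reduce phase: sum the 1s per region -- the GROUP BY / COUNT(*) step.
    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int count = 0;
            for (IntWritable v : values) {
                count += v.get();
            }
            context.write(key, new IntWritable(count));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "count by region");
        job.setJarByClass(CountByRegion.class);
        job.setMapperClass(RegionMapper.class);
        job.setCombinerClass(SumReducer.class); // partial sums on the map side
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Even this trivial grouping takes fifty-odd lines of Java, which is exactly why Pig and Hive are attractive: one line of a familiar SQL-like query stands in for all of the boilerplate above.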

With the adoption of Hadoop increasing by the day across all verticals, the need for these skills is only going to grow. It also has something for everybody, from the technology nerd who can get started on the cheap to the CIO who can now have a multi-node Big Data infrastructure up and running in no time, churning out useful and timely business analytics.
