muSOAing for 7/29/10 – Snowflakes

Getting back on track. Let us examine Big Data in a bit more detail. The first question is: what is the magic marker that catapults data into the Big League? There is no set industry standard, but a very commonly used yardstick is around 1 TB of data. The goal here is to convert data into meaningful information, and ultimately into saleable intelligence that has a bearing on the bottom line and ROI.

These days, the problem is really twofold. One part is dealing with such vast amounts of data — being able to park it somewhere on a cost-effective infrastructure. The other is being able to analyze it, to slice and dice it to obtain that intelligence. With even very small, under-ten-person businesses generating enormous amounts of information on a daily basis, affordable storage has indeed become a very big issue.

Traditional relational databases simply don't cut it anymore, not only because of their size limitations but also because the SQL technology used to access and process data was never designed for information at this scale. And since each database is tied to a single server, you will hit hard technology limits. Hence the popularity of grid-based storage: you can have any number of physical storage nodes, on top of which runs a Big Data oriented framework like Hadoop or Cassandra, and both layers can expand elastically based on your needs. You add physical and/or framework nodes as demand dictates, and there is no single point of failure, at least from a hardware standpoint.
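To make the grid idea concrete, here is a minimal Python sketch of how keys might be spread across storage nodes with replication, so that no single node is a point of failure and new nodes can join elastically. This is not Hadoop or Cassandra code — the class, node names, and hashing scheme are illustrative assumptions only.

```python
# Toy sketch of grid-based storage (illustrative only, not Hadoop/Cassandra):
# keys hash onto a ring of physical nodes, each key is replicated onto
# several distinct nodes, and nodes can be added elastically as demand grows.
import hashlib

class StorageGrid:
    def __init__(self, nodes, replication=2):
        self.nodes = list(nodes)        # physical storage nodes
        self.replication = replication  # copies kept per key

    def add_node(self, node):
        """Elastic expansion: add a node as demand dictates."""
        self.nodes.append(node)

    def nodes_for(self, key):
        """Pick `replication` distinct nodes for a key by hashing it."""
        start = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(self.nodes)
        count = min(self.replication, len(self.nodes))
        return [self.nodes[(start + i) % len(self.nodes)] for i in range(count)]

grid = StorageGrid(["node-a", "node-b", "node-c"], replication=2)
placement = grid.nodes_for("customer-42")  # two distinct nodes hold this key
grid.add_node("node-d")                    # capacity grows; no data is lost
```

Real frameworks layer failure detection, rebalancing, and consistency protocols on top of this basic placement idea, but the elasticity and redundancy properties are the same ones described above.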

More on this later.
