Archive

Big Data

We talk about big data quite often these days, so I wanted to put down some fundas about the basics of data. Do you know the singular form of data? How does data differ from information and knowledge? How do insights convert to actions? Here is my attempt at answering some of these.
Data is raw: it is represented as a number, a character or a string. Data is the plural of datum. Information is anything that puts context around data. For example, the number 89 by itself doesn't mean anything unless we add context to say that the car's speed is 89 kmph. Knowledge, on the other hand, is about knowing how things around us work and how the larger world is interconnected; it is obtained through experience, experiments, research, etc. In the infographic below I have tried to explain these terms using a simplified version of the intelligence that can be embedded in cars. More on the infographic follows.
Data 2 Actions
I have taken a very simplified version here. Now just imagine: when we talk about intelligent cars, we are usually talking about hundreds of such parameters instead of a single variable (the speed of the car), all collected from multiple sensors as real-time streams, to make such decisions. Knowledge is obtained through predictive algorithms that continuously learn (in AI terms, "adapt") from data and help make recommendations about the safety of the vehicle. Now imagine these hundreds of parameters collected every millisecond from thousands of connected cars around you – this is what forms "Big Data".
Hope you liked this post. I will be writing more articles – I have been getting requests from friends around the globe. Stay tuned!
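As a toy illustration of the data-to-actions flow in the infographic, here is a minimal sketch of my own. The threshold and the rule below are made-up assumptions for a single variable (speed), not taken from any real car system.

```python
# Toy sketch of the Data -> Information -> Knowledge -> Action flow described above.
# The speed limit and the safety rule are hypothetical, purely for illustration.

def to_information(datum):
    """Attach context to a raw number: it is the car's speed in kmph."""
    return {"metric": "speed", "value": datum, "unit": "kmph"}

def apply_knowledge(info, speed_limit=80):
    """A simple known rule standing in for a predictive model: above the limit is unsafe."""
    return "unsafe" if info["value"] > speed_limit else "safe"

def to_action(insight):
    """Convert the insight into an action for the driver."""
    return "alert driver to slow down" if insight == "unsafe" else "no action"

datum = 89                       # raw data: just a number
info = to_information(datum)     # information: the car's speed is 89 kmph
insight = apply_knowledge(info)  # knowledge: 89 kmph exceeds the limit
print(to_action(insight))        # action: alert driver to slow down
```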

I have spent most of my life working on data and its applications to problems. Now, when I look at some patterns in the algorithms we use to analyze data, one thing that emerges is the increased use of meta-algorithms. Boosting (like AdaBoost) is one such meta-algorithm, which combines multiple weak learners (classifiers) to improve prediction accuracy. Random forests, a very prominent Read More
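To make the meta-algorithm idea concrete, here is a minimal sketch of boosting with scikit-learn's AdaBoost, which by default combines many depth-1 decision trees (weak learners). The synthetic dataset and parameters are illustrative assumptions, not from the post.

```python
# Boosting as a meta-algorithm: many weak learners combined into a stronger classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification data, purely for illustration.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

stump = DecisionTreeClassifier(max_depth=1)       # a single weak learner
boosted = AdaBoostClassifier(n_estimators=100)    # defaults to depth-1 trees as weak learners

print("weak learner accuracy:", cross_val_score(stump, X, y).mean())
print("boosted accuracy:     ", cross_val_score(boosted, X, y).mean())
```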

Clustering is an unsupervised classification (learning) technique, where the objective is to maximize the inter-cluster distance while minimizing the intra-cluster distance. By unsupervised, we mean clustering, segmenting or classifying data based on all the available attributes, with no class information available. A supervised classification, on the other hand, uses class information.
As usual, before we jump into the 'how', let's answer the 'why'. Clustering is applied to a variety of problems, ranging from biological systems to exploratory analysis of data (as a pre-processing technique). Many predictive analytics algorithms use clustering solutions as one of their components. Major brands use it in CRM to understand their customers better. Another use of clustering is in outlier detection or fraudulent transaction identification. If you have heard of a site called www.similarsites.com, it works extensively on clustering algorithms: sites are segmented/clustered based on website attributes like category of domain, number of users, traffic, content type, corporate or personal, blog, image blog, video blog, etc. For example, if you entered INMOBI, you would get a list of companies in this space, mainly its competitors – mojiva, Millenialmedia, Admob, Quattro, Mobclix, etc. If you are looking for an image hosting site and want to know the alternatives/options, this is helpful.
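As a minimal sketch of the idea, here is k-means (one common clustering algorithm) on synthetic, unlabeled data; the data and parameters are assumptions for illustration, not from the post.

```python
# k-means clustering: group points so intra-cluster distances are small
# and the clusters themselves are well separated.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Unlabeled synthetic points; no class information is used (unsupervised).
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)

print(km.labels_[:10])       # cluster assignment for the first few points
print(km.cluster_centers_)   # one centroid per cluster
print(km.inertia_)           # total within-cluster (intra-cluster) squared distance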

We talk about similarity in terms of distance measures like

(i) Euclidean Distance

(ii) Manhattan Distance
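
Here is a small sketch of these two measures for a pair of points given as equal-length lists of numeric attributes; the example values are my own.

```python
# Euclidean vs. Manhattan distance between two points.
import math

def euclidean(a, b):
    """Straight-line distance: square root of the summed squared differences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    """City-block distance: sum of the absolute differences."""
    return sum(abs(x - y) for x, y in zip(a, b))

p, q = [1.0, 2.0, 3.0], [4.0, 6.0, 3.0]
print(euclidean(p, q))   # 5.0
print(manhattan(p, q))   # 7.0
```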

Read More

Hadoop is an open source framework for writing and running distributed applications that process huge amounts of data (more famously called Big Data). The name Hadoop is not an acronym; it's a made-up name. The project's creator, Doug Cutting, explains how the name came about: "The name my kid gave a stuffed yellow elephant. Short, relatively easy to spell and pronounce, meaningless, and not used elsewhere: those are my naming criteria. Kids are good at generating such. Googol is a kid's term."

It has two components:

– Distributed Storage (uses HDFS – the Hadoop Distributed File System)
Ensures the data is distributed evenly across all the nodes of the Hadoop cluster. There is an option to replicate data across nodes (redundancy) to provide the ability to recover from failures.

– Distributed Computing (uses MR – the MapReduce paradigm)
Once the data is available on the Hadoop cluster, the MR code (typically written in Java or C++) is moved to each of the nodes for computation on the data. MapReduce has two phases: Map and Reduce.
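To illustrate the two phases, here is a minimal word-count sketch in Python written in the style of Hadoop Streaming (mapper and reducer reading stdin and writing stdout); the local sort standing in for Hadoop's shuffle step, and the use of Python rather than Java/C++, are my own assumptions for illustration.

```python
# Word count with explicit Map and Reduce phases.
import sys
from itertools import groupby

def mapper(lines):
    """Map phase: emit (word, 1) for every word in the input split."""
    for line in lines:
        for word in line.strip().split():
            yield word.lower(), 1

def reducer(pairs):
    """Reduce phase: sum the counts for each word (input sorted by key)."""
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    # Locally, sorting simulates the shuffle/sort Hadoop performs between the phases.
    mapped = sorted(mapper(sys.stdin))
    for word, count in reducer(mapped):
        print(f"{word}\t{count}")
```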

One of the early examples of distributed computing is the SETI@home project, where a group of people volunteered the CPU time of their personal computers for research on radio telescope data to find intelligent life outside Earth. However, this differs from Hadoop MR in that data is moved to where the computing takes place in the case of SETI, while code is moved to where the data is in the latter case. Other projects include finding the largest prime numbers, sorting petabytes of data in the shortest time, etc.

Applications of Hadoop MR – Big data

• Weblog analysis
• Fraud detection
• Text mining
• Search engine indexing
• LinkedIn uses it for "Who viewed your profile" and "People you may know" recommendations
• Amazon.com uses it for book recommendations

Hadoop MR Wrapper applications include

• Pig: A data flow language and execution environment for exploring very large datasets. Pig runs on HDFS and MapReduce clusters.
• Hive: A distributed data warehouse. Hive manages data stored in HDFS and provides a query language based on SQL (which the runtime engine translates to MapReduce jobs) for querying the data.
• Mahout: A library of machine learning algorithms implemented on MapReduce.