
Wednesday, February 17, 2010

Hadoop & Business Intelligence

The group at Yahoo! that I came from was using Hadoop for data analytics and data warehousing. We had something like 100,000 web servers across the world, and once we collected data from across all these servers, we dumped it into Hadoop, which became the place where we stored all of the data, instead of traditional network storage.

Our reasoning for doing that was a matter of economics, given the quantity of hardware. Hadoop lets us scalably process that data, clean it up, and normalize it so we could pass it along to the systems that need it.

Hadoop is getting very wide adoption in the data warehousing and business intelligence domains. One of the biggest uses within Yahoo! right now is dealing with all of the log information from servers. Analyzing that information allows for better spam filtering, ad targeting, content targeting, A/B testing for new features, et cetera.

It’s not web-specific. For example, everybody does data warehousing, and we see very strong adoption there.

Separate from that, your example of oil companies is a very good one, as is the financial sector. Right now, we do have a couple of very large financial institutions working with us on these exact problems, taking huge amounts of data from domains like credit card processing and building predictive models for fraud that enable better decisions, for example, about whether to block or allow a given transaction.

In the stock market, Hadoop is being used to do simulations that help predict option pricing and related problems. That’s another very healthy market that we’ve seen growth in.

So Yahoo! is both the biggest contributor to and the biggest adopter of Hadoop, and the company has used Hadoop to solve a wide range of problems, from data analytics and data warehousing (log processing, gene sequence mapping - basically a fuzzy string matching problem) to business intelligence domains (finance, the stock market) ...

Rumor has it that a bank in Singapore invested millions of dollars to build a computation and prediction system from scratch using Haskell - a statically typed, functional programming language - to guarantee scalability and performance.

I wonder why the bank did not take a look at a Distributed File System (DFS) + MapReduce (Hadoop is an open source implementation of both), an approach that scales massively on commodity hardware and is used successfully at the biggest IT firms in the world (Google, Yahoo!, Facebook ... just to name a few) ... or maybe they are just re-implementing DFS+MapReduce themselves :D

Posted via web from Microsoft Innovation Center

What is Hadoop? (from an interview with Amr Awadallah - Cloudera CTO)

First, it’s worth making the important clarifying point that Hadoop is not a database. Hadoop is a data processing system, and in fact, I would even go as far as saying Hadoop is an operating system. The core of an operating system boils down to a file system, the storage of files, and a process scheduling system that runs applications on top of these files.

There are many other components that help with devices, credentials and user access, and so on, but that is the core. Hadoop is exactly the same thing. The core of Hadoop is the Hadoop Distributed File System, which is a file system that runs across many nodes. It links together the file systems on many local nodes to make them into one big file system. Hadoop MapReduce is really the job scheduling system that takes care of scheduling jobs on top of all those nodes.
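As a rough sketch of what that "one big file system" looks like to a program, here is a minimal Java example using Hadoop's FileSystem API (the namenode address and directory path are made-up values for illustration):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListHdfsDirectory {
        public static void main(String[] args) throws Exception {
            // Point the client at the (hypothetical) HDFS namenode.
            Configuration conf = new Configuration();
            conf.set("fs.default.name", "hdfs://namenode.example.com:9000");

            // FileSystem hides the fact that the data is spread across many nodes.
            FileSystem fs = FileSystem.get(conf);
            for (FileStatus status : fs.listStatus(new Path("/logs/2010/02/17"))) {
                System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
            }
        }
    }

The program never has to know which physical machines actually hold the blocks; HDFS presents them as one namespace, and MapReduce schedules work close to where those blocks live.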

That is the key distinction between Hadoop’s approach and that of database systems. Hadoop, at its heart, does not require any structure to your data. You can just upload files from anywhere, like a web server, an RFID device, or a mobile phone, directly into Hadoop.

They could be images, videos, or just a bunch of bits. They don’t have to have a schema with column types and so on, which gives you tremendous agility and flexibility.
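For example, pushing a raw log file into HDFS requires no schema at all. A minimal sketch (the paths are hypothetical), doing the same thing as the hadoop fs -put shell command:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class UploadRawFile {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            // No schema, no column types: to HDFS the file is just bytes.
            fs.copyFromLocalFile(new Path("/var/log/httpd/access.log"),
                                 new Path("/raw/weblogs/access.log"));
        }
    }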

Hadoop has a very nice model that I sometimes refer to as schema on read. Whereas defining your schema as you’re writing the data in limits what you can put in by requiring it to be conformant to the schema that you created, Hadoop allows you to define the schema as you’re reading stuff out.

That gives you a lot of flexibility and agility, since you can add files that have dynamic parts, like JSON, or use new standards coming up like Avro, a very good project coming out of the Hadoop project that’s similar to Protocol Buffers from Google and Thrift from Facebook. Avro attaches a schema to the files as well, but these schemas are semi-structured rather than conforming to a strict relational model.
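A tiny sketch of what schema on read means in practice: the raw bytes are interpreted only at the moment they are read, so nothing had to be declared when the data was written. The tab-delimited field layout below is an assumption for illustration, not a real Yahoo! log format:

    public class WebLogRecord {
        public final String ip;
        public final String url;
        public final long timestamp;

        private WebLogRecord(String ip, String url, long timestamp) {
            this.ip = ip;
            this.url = url;
            this.timestamp = timestamp;
        }

        // The "schema" (field order and types) is applied here, at read time.
        // Writing the data earlier required no declaration at all.
        public static WebLogRecord parse(String rawLine) {
            String[] fields = rawLine.split("\t");
            return new WebLogRecord(fields[0], fields[1], Long.parseLong(fields[2]));
        }
    }

If the log format changes tomorrow, only this reader changes; the files already sitting in HDFS stay exactly as they are.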

That said, it’s also important to point out that structured stuff is a subset of unstructured stuff. The fact that Hadoop at its heart is a file system doesn’t mean that it can’t do database relational stuff. It does actually, in the same way that Windows at its heart is a file system, but you can run SQL Server on top of it to get the relational services, schemas, column types, and so on.

One of the key projects on top of Hadoop is Hive, which actually came out of Facebook. Hive essentially provides a relational database on top of Hadoop that utilizes the underlying file system but has a metastore that keeps the schema of the files.

It knows that a given file is tab-delimited or whatever, it knows the column types for those files, and Hive allows you to write SQL against them. It will look up the schema and then write the MapReduce jobs for you, so that you don’t have to go and learn MapReduce from scratch.
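As a hedged sketch of what that looks like in practice: the table name, file layout, and connection details below are assumptions (they depend on the Hive version and on having a HiveServer running on localhost:10000), but the shape of the workflow is the point. The HiveQL is sent from Java over JDBC:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveQueryExample {
        public static void main(String[] args) throws Exception {
            // Driver class and URL vary by Hive version; this follows the
            // early HiveServer convention.
            Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:hive://localhost:10000/default", "", "");
            Statement stmt = conn.createStatement();

            // The metastore records that the files behind this table are
            // tab-delimited and what the column types are.
            stmt.execute("CREATE EXTERNAL TABLE IF NOT EXISTS weblogs "
                       + "(ip STRING, url STRING, ts BIGINT) "
                       + "ROW FORMAT DELIMITED FIELDS TERMINATED BY '\\t' "
                       + "LOCATION '/raw/weblogs'");

            // Hive compiles this SQL into MapReduce jobs over the underlying files.
            ResultSet rs = stmt.executeQuery(
                    "SELECT url, COUNT(1) AS hits FROM weblogs GROUP BY url");
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
            }
            conn.close();
        }
    }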

Now you have the flexibility of going either way. One approach is to get at the core of the MapReduce framework using Java MapReduce, which we sometimes refer to as being like assembly language for Hadoop. It gives you the most flexibility and performance, but it is fairly complex and difficult to learn.
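To make the "assembly language" comparison concrete, here is a minimal sketch of the classic word-count job written against the 0.20-era org.apache.hadoop.mapreduce API (class names and paths are illustrative):

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // The mapper emits (word, 1) for every word in every input line.
        public static class TokenizerMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                for (String token : value.toString().split("\\s+")) {
                    if (!token.isEmpty()) {
                        word.set(token);
                        context.write(word, ONE);
                    }
                }
            }
        }

        // The reducer sums the counts emitted for each word.
        public static class SumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = new Job(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(SumReducer.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Even this toy job takes a page of boilerplate, which is exactly the kind of complexity the higher-level tools hide.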

Alternately, you can go in with a high level language like Hive. In this case, you can just use SQL, if that’s what you’re used to, to write your job. Hive itself has lots of optimizations. It understands the underlying MapReduce framework, so it can properly map your problem on top of your data.

 

Posted via web from Microsoft Innovation Center