FW: How will Hama BSP differ from Pregel?

Firstly, why did we use HBase?

Until last year, we were researching a distributed matrix/graph computing package based on Map/Reduce.

As you know, Hadoop consists of HDFS, which is designed for commodity servers as a shared-nothing model (also termed a data-partitioning model), and a distributed programming model called Map/Reduce. Map/Reduce is a high-performance parallel data processing engine, to be sure, but it is not well suited to complex numerical/relational processing that requires many iterations or much inter-node communication. So, we used HBase as shared storage (a shared-memory model).

Why BSP instead of Map/Reduce and HBase?

However, there were still problems:

- OS overhead of running shared-storage software (HBase)
- The limitations of HBase's capabilities (especially the size of a column qualifier)
- Growth of code complexity

Therefore, we started to consider a message-passing model, and decided to adopt the BSP (Bulk Synchronous Parallel) model, inspired by Pregel as described on the Google Research Blog.
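To make the BSP model concrete: a computation proceeds in supersteps, and in each superstep every worker does some local computation, sends messages to other workers, and then waits at a barrier before the next superstep begins. The following is only a toy sketch in plain Java (it does not use the Hama API): four workers compute a global sum by passing partial sums around a ring, one superstep at a time.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CyclicBarrier;

// A toy illustration of the BSP model (plain Java, NOT the Hama API):
// each worker alternates local computation, message passing, and a
// barrier synchronization. Here, workers compute a global sum by
// forwarding partial sums around a ring for n-1 supersteps.
public class BspSketch {

    public static long[] run(long[] inputs) throws Exception {
        final int n = inputs.length;
        final CyclicBarrier barrier = new CyclicBarrier(n);
        // one single-slot inbox per worker, used for message passing
        final List<BlockingQueue<Long>> inboxes = new ArrayList<>();
        for (int i = 0; i < n; i++) inboxes.add(new ArrayBlockingQueue<>(1));
        final long[] results = new long[n];

        Thread[] workers = new Thread[n];
        for (int i = 0; i < n; i++) {
            final int id = i;
            workers[i] = new Thread(() -> {
                try {
                    long acc = inputs[id];    // local state
                    long carry = inputs[id];  // value forwarded around the ring
                    for (int step = 0; step < n - 1; step++) {
                        // communication phase: send to the next worker in the ring
                        inboxes.get((id + 1) % n).put(carry);
                        barrier.await();      // barrier synchronization ends the superstep
                        carry = inboxes.get(id).take();
                        acc += carry;         // local computation phase
                    }
                    results[id] = acc;
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        return results;
    }

    public static void main(String[] args) throws Exception {
        // prints [10, 10, 10, 10]: every worker ends up with the global sum
        System.out.println(java.util.Arrays.toString(run(new long[]{1, 2, 3, 4})));
    }
}
```

The barrier is the essence of BSP: no worker starts the next superstep until all messages of the current one have been delivered, which replaces the shared storage we previously needed HBase for.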

What's Pregel?

According to my understanding, Pregel is graph-specific: a large-scale graph computing framework based on the BSP model.

How will Hama BSP differ from Pregel?

Hama BSP is a computing engine based on the BSP model, like Pregel, and it will be compatible with existing HDFS clusters, or with any FileSystem and database in the future. However, we believe that the BSP computing model is not limited to graph problems; it can serve widely in distributed software, as Map/Reduce does. Beyond graphs, there are many other algorithms that run into the same difficulties as graph processing does on Map/Reduce. In fact, the BSP model has also been researched for many years in the field of matrix computation.

Therefore, we're trying to implement a more generalized BSP computing solution. Hama will consist of the BSP computing engine and a small set of examples (e.g., matrix inversion, PageRank, BFS, etc.).
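To show why such algorithms map naturally onto BSP, here is a rough, sequential sketch of PageRank as supersteps (illustration only, not Hama code, on a made-up 4-page graph): in each superstep every vertex "sends" rank/out-degree to its neighbors, a barrier separates the supersteps, and each vertex then updates its rank from the messages it received.

```java
import java.util.Arrays;
import java.util.List;

// A sequential sketch of PageRank expressed as BSP supersteps
// (illustration only, NOT the Hama API). Vertices are 0..n-1;
// graph.get(v) lists v's out-neighbors.
public class PageRankSketch {

    public static double[] pageRank(List<int[]> graph, int supersteps) {
        int n = graph.size();
        double[] rank = new double[n];
        Arrays.fill(rank, 1.0 / n);
        double damping = 0.85;
        for (int step = 0; step < supersteps; step++) {
            // communication phase: each vertex sends rank/outdegree
            // to its out-neighbors
            double[] incoming = new double[n];
            for (int v = 0; v < n; v++) {
                int[] out = graph.get(v);
                for (int u : out) incoming[u] += rank[v] / out.length;
            }
            // (barrier here in a real BSP run) then the local update
            // of the next superstep consumes the received messages
            for (int v = 0; v < n; v++)
                rank[v] = (1 - damping) / n + damping * incoming[v];
        }
        return rank;
    }

    public static void main(String[] args) {
        // toy graph: 0 -> 1, 1 -> 2, 2 -> 0, 3 -> 0
        List<int[]> g = Arrays.asList(
            new int[]{1}, new int[]{2}, new int[]{0}, new int[]{0});
        System.out.println(Arrays.toString(pageRank(g, 30)));
    }
}
```

Each iteration touches only the current ranks and the messages of that superstep, which is exactly the pattern that is awkward to express as chained Map/Reduce jobs but natural in BSP.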

You can locally test your BSP program using the TRUNK version of the Hama project.
Please subscribe to the mailing list, or comment here, if you have any questions, suggestions, or objections about our project.

1 comment:

  1. Anonymous, 18/9/10 10:13

    Like you, we found serious performance bottlenecks in MapReduce. We use a Dataflow architecture to get around those limitations (you can see the "DataRush" product at www.pervasivedatarush.com).

    We are now experimenting with combining DataRush and HDFS/HBase, and so far getting 100x better performance than "native" HBase.

    Good luck with Hama project, tks
