Meeting notes with Prof. Cox on 1/18/2014

This post summarizes my meeting with Prof. Alan Cox on Jan 18th and outlines the next steps for getting the project ready for publication. It covers the following items:

  • Further improvements on Java memory footprint measurements and analysis
  • Quantifying the trade-off between the two approaches to multi-core parallelism that we implemented
  • A possible algorithm for dynamically switching between the two forms of parallelism for optimal multi-core resource utilization on different kinds of MapReduce applications

Further improvements on Java Memory Footprint measurements and analysis

Configure the Hadoop JVMs with both the -Xmx and -Xms options to further reduce unnecessary garbage-collection calls. The minimum heap size could be set to 1/4-1/2 of the maximum heap size.
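For example, in Hadoop 1.x the per-task JVM options can be set through mapred.child.java.opts in mapred-site.xml. The sizes below are illustrative only, with -Xms at half of -Xmx per the guideline above:

```xml
<!-- mapred-site.xml: hypothetical sizes; the right values depend on the cluster -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx2048m -Xms1024m</value>
</property>
```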

Additionally, I could use the jmap tool for a more accurate measurement of the heap size actually utilized.
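As a complement to jmap, heap usage can also be sampled in-process through the standard java.lang.management API. This is a minimal sketch of that idea, not part of HJ-Hadoop itself:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapProbe {
    // Returns the currently used heap size in megabytes, as reported
    // by the JVM's own MemoryMXBean.
    public static long usedHeapMB() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        return heap.getUsed() / (1024 * 1024);
    }

    public static void main(String[] args) {
        // Allocate a buffer so the reading is non-trivial.
        byte[] buffer = new byte[16 * 1024 * 1024];
        buffer[0] = 1;
        System.out.println("Used heap: " + usedHeapMB() + " MB");
    }
}
```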

Quantifying the trade-off between the two approaches to multi-core parallelism that we implemented

I will measure the memory footprint of running KMeans and KNN using (1) the Parallel Mapper, which exploits parallelism within a single map task, and (2) the Parallel JVM, which exploits inter-task parallelism.

I expect approach (2) to have a larger memory footprint than approach (1), but I need to quantify the difference.
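To make the contrast concrete, here is a minimal sketch of approach (1): a single map task fanning its per-record work out over a shared thread pool, so the heap is shared rather than replicated across JVMs. The class name and kernel are hypothetical stand-ins, not HJ-Hadoop's actual API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of intra-task parallelism: one map task, many worker threads.
public class ParallelMapperSketch {
    private final ExecutorService pool;

    public ParallelMapperSketch(int threads) {
        pool = Executors.newFixedThreadPool(threads);
    }

    // Processes all points in parallel and combines the results; the
    // squaring stands in for a compute-heavy kernel such as a KMeans
    // distance computation.
    public long map(List<Integer> points) throws Exception {
        List<Future<Long>> futures = new ArrayList<>();
        for (final int p : points) {
            futures.add(pool.submit(() -> (long) p * p)); // hypothetical kernel
        }
        long sum = 0;
        for (Future<Long> f : futures) sum += f.get();
        return sum;
    }

    public void shutdown() { pool.shutdown(); }

    public static void main(String[] args) throws Exception {
        ParallelMapperSketch m = new ParallelMapperSketch(4);
        System.out.println(m.map(Arrays.asList(1, 2, 3)));
        m.shutdown();
    }
}
```

In approach (2), each extra unit of parallelism is a separate JVM with its own heap, which is why its footprint should be larger.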

 

A possible algorithm for dynamically switching between the two forms of parallelism for optimal multi-core resource utilization on different kinds of MapReduce applications

At the meeting, I discussed with Prof. Cox a possible algorithm for switching between the two approaches to parallelism:

Step 1: Start with a single map task and the parallel mapper. If the application is compute-intensive and there is enough parallel slackness in the map task, the runtime exploits intra-task parallelism by running 1 map task on 8 cores (this approach can utilize the CPU with minimum memory footprint).

Step 2: If the parallel mapper approach can't fully utilize the CPU because of an IO bottleneck in the MapReduce application, the runtime will increase the number of map tasks and decrease the number of threads inside each map task (switching to a hybrid with inter-task parallelism). This way, there is more parallelism in IO and more overlap between computation and communication.

Step 3: If increasing the number of map tasks no longer leads to increased CPU and IO utilization, stop increasing the total number of tasks running within the JVM.

At this point, IO is saturated and the resources are best utilized.
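The three steps above can be sketched as a simple tuning loop. The Config shape, the 0.9 CPU-utilization threshold, and the 8-core count are illustrative assumptions, not measured values or HJ-Hadoop's actual interface:

```java
// Sketch of the switching heuristic from Steps 1-3.
public class ParallelismTuner {
    public static final int CORES = 8; // assumed machine size from Step 1

    public static class Config {
        public final int mapTasks;
        public final int threadsPerTask;
        public Config(int mapTasks, int threadsPerTask) {
            this.mapTasks = mapTasks;
            this.threadsPerTask = threadsPerTask;
        }
    }

    // Step 1: start with one map task using all cores (parallel mapper).
    public static Config initial() {
        return new Config(1, CORES);
    }

    // Steps 2-3: while the CPU is under-utilized and IO is not yet
    // saturated, trade one thread of intra-task parallelism for an extra
    // map task; otherwise keep the current configuration.
    public static Config adjust(Config c, double cpuUtil, boolean ioSaturated) {
        if (cpuUtil < 0.9 && !ioSaturated && c.threadsPerTask > 1) {
            return new Config(c.mapTasks + 1, c.threadsPerTask - 1);
        }
        return c; // CPU busy or IO saturated: resources are best utilized
    }
}
```

The runtime would call adjust periodically with fresh utilization readings until the configuration stops changing.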

I plan to show that this algorithm works for compute-intensive applications such as KMeans and KNN, and for less compute-intensive, more IO-intensive applications such as HashJoin.

This entry was posted in Hadoop Research, HJ-Hadoop Improvements, Java. Bookmark the permalink.
