
Big Data Hackathon: BlueData on Apache Mesos

Here at BlueData, we’ve been tracking the progress of Apache Mesos with great interest. We recently held an internal 24-hour hackathon for BlueData engineering to explore new technologies, and one of the projects we tackled was integrating BlueData’s EPIC software platform with Apache Mesos.

This work was a joint effort with Kartik Mathur and Xiongbing Xou from BlueData’s engineering team. This post summarizes our approach and the initial results. First, let’s step back and review.

What is Apache Mesos?

Apache Mesos is an open source platform for the dynamic, fine-grained sharing of resources (CPU, memory, storage, etc.) in the data center. Distributed computing frameworks such as Hadoop, Spark, MPI, and Elasticsearch, as well as non-distributed services, can share these resources efficiently and elastically by integrating with Mesos. Sharing of data center resources improves their utilization and thus reduces hardware costs.

Below is a diagram (obtained from here) showing the Mesos architecture with two application workloads – Hadoop and MPI:

[Diagram: Mesos architecture]

The Mesos data center operating system consists of a master daemon and a collection of slave daemons, one running on each cluster node. The Mesos master implements the “framework” API and provides resource management and scheduling across the entire data center. Each application must implement a Mesos framework that communicates with the Mesos services, using the Mesos APIs, in order to obtain data center resources.

Each application-specific Mesos framework is composed of two components:

  1. A scheduler that negotiates with the Mesos master to obtain resources.
  2. An executor process, launched by the Mesos slaves, that runs the application’s tasks once those resources have been obtained.

Mesos introduces a distributed two-level scheduling mechanism based on the idea of “resource offers”. Mesos decides which resources to offer to each framework, while the framework decides which resources to accept and how to run computations on them. In the illustration above, two application-specific frameworks (for Hadoop and MPI) are sharing resources via Mesos.
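
To make this two-level mechanism concrete, below is a minimal scheduler sketch using the Mesos Python bindings of that era (mesos.interface and mesos.native). The framework name, resource thresholds, and task command are illustrative placeholders only.

  # Minimal Mesos framework scheduler sketch (Python bindings, Mesos 0.2x era).
  # Per offer, the framework decides whether to launch a task or decline.
  from mesos.interface import Scheduler, mesos_pb2
  from mesos.native import MesosSchedulerDriver

  class ExampleScheduler(Scheduler):
      def resourceOffers(self, driver, offers):
          for offer in offers:
              cpus = sum(r.scalar.value for r in offer.resources if r.name == "cpus")
              mem = sum(r.scalar.value for r in offer.resources if r.name == "mem")
              if cpus < 1 or mem < 512:
                  driver.declineOffer(offer.id)   # let Mesos offer these resources to another framework
                  continue
              task = mesos_pb2.TaskInfo()         # the framework decides what to run on accepted resources
              task.task_id.value = "example-task-%s" % offer.id.value
              task.slave_id.value = offer.slave_id.value
              task.name = "example task"
              task.command.value = "echo hello from a Mesos task"
              for name, amount in (("cpus", 1), ("mem", 512)):
                  res = task.resources.add()
                  res.name = name
                  res.type = mesos_pb2.Value.SCALAR
                  res.scalar.value = amount
              driver.launchTasks(offer.id, [task])

  framework = mesos_pb2.FrameworkInfo()
  framework.user = ""                             # Mesos fills in the current user
  framework.name = "example-framework"
  MesosSchedulerDriver(ExampleScheduler(), framework, "mesos-master:5050").run()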

More details about Apache Mesos and its architecture can be found here.

Why BlueData EPIC on Mesos?

Today, the BlueData EPIC software platform uses Docker containers to simplify and accelerate the deployment of Big Data infrastructure and applications on-premises in a secure multi-tenant environment. BlueData supports a broad range of open source and commercial applications for Big Data analytics – including Hadoop, Spark, and more. And over the past several months we’ve seen Mesos emerge as a potential resource management option for many of our customers.

The integration of BlueData EPIC with Mesos can provide several benefits:

  • The same physical servers can simultaneously host the BlueData EPIC platform and other application frameworks (e.g. other web applications and services). Resources can be moved back and forth between applications running on Mesos and EPIC platform services (which in turn support running various open source and commercial Hadoop distributions, Spark, BI/ETL tools, etc.).
  • The various Big Data applications that run on the EPIC platform do not have to implement custom Mesos frameworks. The EPIC Mesos framework becomes their interface to transparently run on a Mesos cluster and share resources. It can thus be a fast way to bring a diverse set of Big Data workloads (Hadoop, Spark, Kafka, Gemfire, Hadoop-native BI/ETL tools) to Mesos resource management without having to create and use custom Mesos frameworks for each.
  • The Apache Myriad project (currently incubating) has similar goals, albeit focused exclusively on Hadoop YARN clusters.

Hackathon Project: BlueData + Mesos

During our hackathon, we initially explored the option of leveraging the Marathon Mesos framework for running the BlueData EPIC platform. However, since EPIC’s bits are inherently stateful and, at the time of the hackathon, Marathon’s support for stateful applications (via persistent local volumes) was a work in progress, we decided to implement a custom Mesos framework (scheduler and executor) for BlueData EPIC (see diagram below). This approach also gave us finer-grained control over the allocation of resources for the various containerized clusters on the EPIC platform, as well as the ability to enforce custom policies.

The diagram below illustrates the interaction between the BlueData EPIC Mesos framework components and Mesos.

The BlueData EPIC Mesos framework components and workflow are described below.

BlueData EPIC Mesos Scheduler

This scheduler acts as the interface between components of the EPIC platform and Mesos. It automates operational tasks such as installing and setting up the EPIC platform and obtaining resources from Mesos. We achieved this by reserving a chunk of resources on the Mesos slave nodes for the ‘BlueData’ role. The EPIC scheduler then acquires these resources via Mesos offers and instructs the executor to perform the initial installation and setup using the EPIC bin file. The EPIC platform can then create on-demand containerized clusters on the resources allocated to it via Mesos.
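
To sketch this concretely, the snippet below (again using the Mesos Python bindings of that era) shows the general idea: register under the ‘BlueData’ role, accept only offers that include resources reserved for that role, and hand the initial install/setup task to the custom executor described next. The URI, executor command, IDs, and resource thresholds are hypothetical placeholders, not the actual hackathon code.

  # Sketch of the EPIC scheduler concept: consume only 'BlueData'-reserved resources
  # and launch the initial install/setup task through the custom executor.
  from mesos.interface import Scheduler, mesos_pb2
  from mesos.native import MesosSchedulerDriver

  EPIC_BIN_URI = "http://repo.example.com/bluedata-epic.bin"   # hypothetical location of the EPIC bin file

  class EpicScheduler(Scheduler):
      def resourceOffers(self, driver, offers):
          for offer in offers:
              # Consider only resources statically reserved for the 'BlueData' role.
              reserved = [r for r in offer.resources if r.role == "BlueData"]
              cpus = sum(r.scalar.value for r in reserved if r.name == "cpus")
              mem = sum(r.scalar.value for r in reserved if r.name == "mem")
              if cpus < 4 or mem < 8192:
                  driver.declineOffer(offer.id)
                  continue

              executor = mesos_pb2.ExecutorInfo()
              executor.executor_id.value = "epic-executor"
              executor.name = "BlueData EPIC executor"
              executor.command.value = "python epic_executor.py"   # hypothetical executor entry point
              executor.command.uris.add().value = EPIC_BIN_URI     # fetched into the sandbox by Mesos

              task = mesos_pb2.TaskInfo()
              task.task_id.value = "epic-install-%s" % offer.slave_id.value
              task.slave_id.value = offer.slave_id.value
              task.name = "epic-install"
              task.executor.MergeFrom(executor)
              for name, amount in (("cpus", cpus), ("mem", mem)):
                  res = task.resources.add()
                  res.name = name
                  res.type = mesos_pb2.Value.SCALAR
                  res.scalar.value = amount
                  res.role = "BlueData"            # launch against the reserved resources
              driver.launchTasks(offer.id, [task])

  framework = mesos_pb2.FrameworkInfo()
  framework.user = ""
  framework.name = "bluedata-epic-framework"
  framework.role = "BlueData"                      # required in order to be offered 'BlueData'-reserved resources
  MesosSchedulerDriver(EpicScheduler(), framework, "mesos-master:5050").run()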

BlueData EPIC Mesos Executor

The executor takes slave-specific instructions from the EPIC scheduler and executes them as tasks. This includes the installation and setup of the EPIC platform’s controller and worker nodes on those slaves.
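
A corresponding executor sketch follows, with the same caveat: the install command here is a stand-in for the real EPIC controller/worker setup steps.

  # Executor sketch: run the slave-specific install/setup task handed down by the
  # EPIC scheduler and report status back to Mesos. The install command is a placeholder.
  import subprocess
  import threading

  from mesos.interface import Executor, mesos_pb2
  from mesos.native import MesosExecutorDriver

  class EpicExecutor(Executor):
      def launchTask(self, driver, task):
          def run():
              status = mesos_pb2.TaskStatus()
              status.task_id.value = task.task_id.value
              status.state = mesos_pb2.TASK_RUNNING
              driver.sendStatusUpdate(status)

              # Placeholder for the real EPIC controller/worker install and setup steps.
              rc = subprocess.call(["/bin/bash", "bluedata-epic.bin"])

              done = mesos_pb2.TaskStatus()
              done.task_id.value = task.task_id.value
              done.state = mesos_pb2.TASK_FINISHED if rc == 0 else mesos_pb2.TASK_FAILED
              driver.sendStatusUpdate(done)

          # Do the work in a thread so the launchTask callback returns promptly.
          threading.Thread(target=run).start()

  if __name__ == "__main__":
      MesosExecutorDriver(EpicExecutor()).run()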

BlueData EPIC + Mesos Workflow (numbers correspond to the illustration above)

  1. Each Mesos slave advertises to the Mesos master a set of resources statically reserved for the ‘BlueData’ role. Standard Mesos functionality was used to configure these reservations on the slaves.
  2. The Mesos master offers these resources to the BlueData EPIC Mesos scheduler.
  3. The BlueData EPIC Mesos scheduler accepts one of the resource offers and instantiates BlueData platform services (e.g. controller or worker processes) on the granted resources.
  4. The BlueData platform service (e.g. controller or worker) runs on the selected Mesos slave.
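
As a small illustration of step 4, the scheduler can confirm that a controller or worker service is actually up by handling Mesos status updates; a sketch of that callback, with purely hypothetical bookkeeping, might look like this:

  # Sketch of the statusUpdate callback on the scheduler side; 'running_services'
  # is hypothetical bookkeeping, not an actual EPIC data structure.
  from mesos.interface import Scheduler, mesos_pb2

  running_services = {}   # task id -> slave id where that controller/worker is running

  class EpicSchedulerWithTracking(Scheduler):
      def statusUpdate(self, driver, update):
          if update.state == mesos_pb2.TASK_RUNNING:
              running_services[update.task_id.value] = update.slave_id.value
          elif update.state in (mesos_pb2.TASK_FAILED, mesos_pb2.TASK_LOST, mesos_pb2.TASK_KILLED):
              running_services.pop(update.task_id.value, None)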

Installation and Configuration

We configured a clustered setup using CentOS 6.6 hosts and Mesos version 0.26.0, following the steps provided here. After installing Mesos, we used the steps below to configure the system:

  • Added the kernel boot options ‘cgroup_enable=memory swapaccount=1’ to enable memory and swap accounting on these hosts so that Docker could run smoothly.
  • Started the Mesos master and slave daemons on these hosts as appropriate. For the purposes of the hackathon, we defined a ‘BlueData’ role and statically reserved some resources (CPU and memory) for it. The commands to start the master and slaves look like this:

[Screenshot: commands used to start the Mesos master and slave daemons with the static ‘BlueData’ role reservation]

  • Started the BlueData EPIC Mesos scheduler on one of the hosts and provided it with the EPIC bin file. It acquired the ‘BlueData’-reserved resources from Mesos and instructed the BlueData EPIC Mesos executor to install the EPIC controller services on one of the hosts. The command looks like this:

[Screenshot: command used to launch the BlueData EPIC Mesos scheduler with the EPIC bin file]

  • Made a few minor tweaks to the BlueData EPIC installation process so that the EPIC platform limited itself to the resources granted to it by Mesos, rather than following its normal mode of operation and utilizing all the resources on the hosts.
  • Once the EPIC controller services started, we were able to spin up Docker container-based Hadoop and Spark clusters on the resources granted by Mesos.

Next Steps

We’re excited about this initial integration of the BlueData EPIC platform with Apache Mesos. Further work remains to fully integrate various BlueData EPIC features – such as multi-tenancy, network isolation, and differential Quality of Service (QoS) – with Mesos, but we have lots of ideas about how this can be done. We are actively exploring the following areas of integration and improvement:

  • Support storage as a resource. The implementation outlined above only supports sharing of CPU and memory resources. Mesos allows persistent volume creation and handles multiple disks, and the BlueData EPIC platform can leverage these features.
  • Support auto-scaling of the EPIC platform, based on the on-demand creation of containerized clusters.
  • Investigate if the Docker Containerizer in Mesos can be used to create containerized clusters on BlueData EPIC.
  • Determine if the Marathon framework can be leveraged to support high availability for the BlueData EPIC Mesos framework.
  • Investigate the new resource quota feature introduced in the Mesos 0.27.0 release. We may be able to use this feature to support guaranteed resource allocations for BlueData EPIC tenants.

The BlueData EPIC software platform enables on-demand creation of containerized clusters for Big Data workloads in a multi-tenant environment – with enterprise-grade security, isolation, and differential QoS. Mesos maximizes the utilization of cluster resources by sharing them efficiently across diverse workloads. We believe that the integration of these two platforms will provide users with the best of both worlds. Stay tuned!