Today I’m thrilled to announce that BlueData will be joining forces with one of the legendary giants of Silicon Valley: we have signed a definitive agreement to be acquired by Hewlett Packard Enterprise (HPE).
This is a huge milestone for our company. I’m excited about our future as part of HPE and the impact this will have on the overall AI, Machine Learning, and Big Data Analytics market. But before we talk about what’s next, I want to look back at our journey and celebrate what we’ve accomplished.
Start at the Beginning
Flash back to six years ago … it all started with a big idea and a bold vision for the emerging market known then simply as “Big Data.” I’ve shared this story in the past, but I’ll recap it here:
In the summer of 2012, I was on a late-night flight from Boston, headed back home to San Francisco. It had been an exhausting week of meetings and I was passing the time reading “Abundance.” The book spoke, in part, about the tremendous amount of data that we collectively generate. The numbers boggled my mind. It occurred to me that no one was innovating around how to make Big Data more consumable and that this innovation was required to make the promise of Big Data a reality. In that moment, the founding idea for BlueData was born.
I was an executive at VMware at the time, and I convinced Tom Phelan, one of the best engineers that I’ve ever had the opportunity to work with, to join me in co-founding BlueData. Our vision was to create an infrastructure software platform like VMware for data-intensive distributed applications. Both Tom and I saw the opportunity to fundamentally change the data consumption model: enabling a “cloud-like” experience in on-premises deployments for these workloads, while helping enterprises to navigate their path to public cloud and hybrid cloud deployments.
Building the Business
From that point on, we worked together to build a rock-solid team that had the experience and skills to take on the challenge. We took BlueData out of stealth mode in late 2014, launched our product, started working with early customers, and made a big bet on Docker containers as the underlying technology to enable our vision.
Along the way, we were fortunate to find the right investors and partners who believed in our vision. Our funding round in 2015 was another major milestone and it coincided with a strategic collaboration agreement with Intel – an early investor and one of several great partners who helped us in this journey.
At that time, I wrote about the evolution in the market:
Big Data is at an inflection point today. Adoption has moved from experimental projects to mission-critical, enterprise-wide deployments that are delivering new customer insights, competitive advantage, and business innovation. But the complexity of Big Data is holding back adoption – it’s still too time-consuming, expensive, and resource-intensive to initiate and scale these deployments. The time is ripe for a new approach to Big Data infrastructure that can help simplify and streamline these deployments.
I reflect on this because these words still ring true today. Our mission at the time was to make “Big Data” deployments much easier, faster, and more cost-effective. And we’ve executed on that mission ever since.
What’s changed in the past few years is that enterprise priorities have evolved from Big Data deployments (with data frameworks like Hadoop, Spark, and Kafka) to more AI-focused initiatives (with a wide range of Machine Learning, Deep Learning, and Data Science tools). At the same time, hybrid IT and multi-cloud strategies have become the norm. And containers are no longer new in the enterprise; now containers are the standard, and technologies like Kubernetes have established a rapidly maturing open source container ecosystem.
Along the way, we’ve developed several game-changing software innovations:
- The first and only turn-key enterprise-grade solution for running distributed data-intensive frameworks unmodified in containers;
- On-demand provisioning for Cloudera, Hortonworks, MapR, Spark, Kafka, TensorFlow, H2O, and other tools for AI/ML and Big Data;
- Multi-tenancy with enterprise-grade security, including automated integration with AD/LDAP, Kerberos, HDFS TDE, and more;
- Compute/storage separation with our unique DataTap technology, providing the ability to scale compute and storage resources independently;
- Bare-metal performance for containerized AI/ML and Big Data applications, and optimization for both Intel Xeon architecture and NVIDIA GPUs;
- Ultimate flexibility and elasticity, with deployment on any infrastructure – whether on-premises, in a multi-cloud model, or in a hybrid IT architecture; and
- Contributions to the open source Kubernetes community, with our new KubeDirector open source project for deploying and managing complex stateful applications.
Looking back, I’m proud of the phenomenal product and team that we’ve built here at BlueData.
The Customer is King
Even more importantly, we’ve seen customer adoption of our container-based BlueData EPIC™ software platform take off dramatically – with incredible customer momentum and large-scale AI and Big Data deployments in production at many of the most well-respected enterprises in the world.
As an entrepreneur, the odds are against you when you start any new business. There are a lot of sacrifices along the way. And it takes a lot of blood, sweat, and tears to build a great product. But the most rewarding aspect – and the best measure of success – is finding satisfied customers who love your product and use it to help run their own business.
Here at BlueData, we’ve been relentlessly focused on delivering value for our customers – with a high-touch partnership approach to ensure customer success, continuous product innovation, and an unmatched support experience. I’m grateful to our customers for that partnership.
Together we’ve ensured that our product works and performs in the most demanding enterprise environments. Our customers have made BlueData’s software even better, more battle-tested, more secure, easier to implement, and more enterprise-proven. And we’ve helped them to reduce costs, improve agility, and deliver faster time-to-value for their AI/ML and Big Data initiatives.
Onward and Upward
Now back to today’s news. As a co-founder and CEO, I’m always looking for opportunities to accelerate our technology innovation and grow the business even faster – by developing new differentiation, bringing on more new customers, and expanding into new markets. HPE presented us with an incredible opportunity to accomplish those goals, building upon our existing partnership and joint customers.
It’s very satisfying to see the idea and vision for our company become part of a legendary Silicon Valley pioneer like HPE. By combining BlueData with HPE’s strong brand, global reach, broad portfolio including the HPE Apollo Systems, and enterprise relationships, we can double down on our investment in R&D innovation and reinforce our technology differentiation for AI and Big Data Analytics infrastructure.
And with HPE’s world-class service and support teams, we can continue our intense focus on customer satisfaction as we look to nurture and expand our customer community. Together, HPE and BlueData will help our growing roster of customers around the world unlock the enormous potential of data – delivering an “as-a-Service” experience for data-intensive distributed applications and helping them to accelerate their AI-driven digital transformation initiatives.
I’m looking forward to seeing the container-based BlueData EPIC™ software platform become the standard for AI/ML and Big Data deployments. And I’m excited to join forces with HPE on this next phase in the BlueData journey as we continue to execute on our mission.