Next year's Hadoop Summit will be held in Munich on April 5-6, 2017. This is an exceptional opportunity for the Munich community to present itself to the best and brightest of the data community.
With only a few days left, please take this opportunity to submit your abstract now!
Submit Abstract: http://dataworkssummit.com/munich-2017
Deadline: Monday, November 21, 2016.
2017 Agenda: http://dataworkssummit.com/munich-2017/agenda/
The 2017 tracks include:
- Enterprise Adoption
- Data Processing & Warehousing
- Apache Hadoop Core Internals
- Governance & Security
- IoT & Streaming
- Cloud & Operations
- Apache Spark & Data Science
We want to expand the summit to include technologies that are not explicitly part of the Hadoop ecosystem. For instance, the community showcase will feature the following zones:
- Apache Hadoop Zone
- IoT & Streaming Zone
- Cloud & Operations Zone
- Apache Spark & Data Science Zone
The goal is to increase the breadth of technologies we can talk about and to realize the full potential of a data summit.
Future of Data Meetups
Want to present at Meetups?
If you would like to present at a Future of Data Meetup, please don't hesitate to reach out and send me a message.
Want to host a Meetup? Become a Sponsor?
We are also looking for rooms and organizations willing to host one of our Future of Data Meetups or to become a sponsor. Please reach out and let me know.
Livy.io is a proxy service for Apache Spark that allows an existing remote SparkContext to be reused by different users. By sharing the same context, Livy provides an extended multi-tenant experience in which users can share RDDs and YARN cluster resources effectively.
In short, Livy uses an RPC architecture to extend the created SparkContext with an RPC service. Through this service the existing context can be controlled and shared remotely by other users. On top of this, Livy introduces authorization together with enhanced session management.
Analytic applications like Zeppelin can use Livy to offer multi-tenant Spark access in a controlled manner.
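To make the remote-context idea concrete, here is a minimal sketch of a client talking to Livy's REST API from Scala. The endpoint (localhost:8998, Livy's default port), the session kind, and the hard-coded session id are assumptions for brevity; a real client would parse the JSON responses to track the session id and poll statement status.

```scala
import java.io.{BufferedReader, InputStreamReader, OutputStreamWriter}
import java.net.{HttpURLConnection, URL}

object LivyClientSketch {
  // Send a JSON POST request and return the raw response body
  def post(url: String, json: String): String = {
    val conn = new URL(url).openConnection().asInstanceOf[HttpURLConnection]
    conn.setRequestMethod("POST")
    conn.setRequestProperty("Content-Type", "application/json")
    conn.setDoOutput(true)
    val out = new OutputStreamWriter(conn.getOutputStream)
    try out.write(json) finally out.close()
    val in = new BufferedReader(new InputStreamReader(conn.getInputStream))
    try Iterator.continually(in.readLine()).takeWhile(_ != null).mkString("\n")
    finally in.close()
  }

  def main(args: Array[String]): Unit = {
    val livy = "http://localhost:8998" // assumed Livy endpoint (default port)

    // 1. Create an interactive session backed by a remote SparkContext
    println(post(s"$livy/sessions", """{"kind": "spark"}"""))

    // 2. Run a statement in that shared context (session id 0 assumed)
    println(post(s"$livy/sessions/0/statements",
      """{"code": "sc.parallelize(1 to 100).sum()"}"""))
  }
}
```

Because all statements run in the same remote context, a second user posting to the same session sees the RDDs created by the first, which is exactly the sharing behavior described above.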
This post discusses setting up Livy with a secured HDP cluster.
Continue reading “Connecting Livy to a Secured Kerberized HDP Cluster”
With the introduction of ZEPPELIN-548, Zeppelin now supports Apache Shiro-based AD and LDAP authentication. This quick example demonstrates connecting Zeppelin to the Knox Demo LDAP server. Continue reading “Zeppelin Login with Demo LDAP of Knox”
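As a preview, a minimal shiro.ini sketch for this setup; the port 33389, the user DN template, and the realm class follow the Knox Demo LDAP defaults, so verify them against your own installation:

```ini
[main]
# Authenticate Zeppelin users against the Knox Demo LDAP server
ldapRealm = org.apache.shiro.realm.ldap.JndiLdapRealm
ldapRealm.userDnTemplate = uid={0},ou=people,dc=hadoop,dc=apache,dc=org
ldapRealm.contextFactory.url = ldap://localhost:33389
ldapRealm.contextFactory.authenticationMechanism = SIMPLE

[urls]
# Require authentication for all of Zeppelin
/** = authc
```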
Hadoop supports multiple file formats as input for MapReduce workflows, including programs executed with Apache Spark. Defining custom InputFormats is a common practice among Hadoop data engineers and is discussed here based on a publicly available data set; a generic sketch of the underlying pattern follows below.
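Before diving into the MATLAB™ specifics, here is a generic whole-file InputFormat sketch in Scala illustrating the pattern: each file becomes a single record of raw bytes that a format-specific parser can then decode. The class names are made up for this example, and this is not the exact reader developed in the post.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.io.{BytesWritable, IOUtils, NullWritable}
import org.apache.hadoop.mapreduce.{InputSplit, JobContext, RecordReader, TaskAttemptContext}
import org.apache.hadoop.mapreduce.lib.input.{FileInputFormat, FileSplit}

// Delivers each input file as one record: no key, file bytes as the value.
class WholeMatFileInputFormat extends FileInputFormat[NullWritable, BytesWritable] {
  // MAT-files carry internal structure, so never split them across tasks
  override def isSplitable(context: JobContext, file: Path): Boolean = false

  override def createRecordReader(split: InputSplit, context: TaskAttemptContext)
      : RecordReader[NullWritable, BytesWritable] = new WholeMatFileRecordReader
}

class WholeMatFileRecordReader extends RecordReader[NullWritable, BytesWritable] {
  private var fileSplit: FileSplit = _
  private var conf: Configuration = _
  private val value = new BytesWritable()
  private var processed = false

  override def initialize(split: InputSplit, context: TaskAttemptContext): Unit = {
    fileSplit = split.asInstanceOf[FileSplit]
    conf = context.getConfiguration
  }

  // Emits exactly one record containing the complete file contents
  // (assumes a single file fits into memory)
  override def nextKeyValue(): Boolean = {
    if (processed) return false
    val contents = new Array[Byte](fileSplit.getLength.toInt)
    val in = fileSplit.getPath.getFileSystem(conf).open(fileSplit.getPath)
    try IOUtils.readFully(in, contents, 0, contents.length)
    finally IOUtils.closeStream(in)
    value.set(contents, 0, contents.length)
    processed = true
    true
  }

  override def getCurrentKey: NullWritable = NullWritable.get
  override def getCurrentValue: BytesWritable = value
  override def getProgress: Float = if (processed) 1.0f else 0.0f
  override def close(): Unit = ()
}
```

From Spark such a format can be wired in with `sc.newAPIHadoopFile("/data/matlab", classOf[WholeMatFileInputFormat], classOf[NullWritable], classOf[BytesWritable])` (path assumed), after which each value holds one complete .mat file ready for parsing.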
The approach demonstrated in this post does not provide a general MATLAB™ InputFormat for Hadoop. That would require significant effort in finding a general-purpose mapping of MATLAB™'s file format and type system to those of HDFS. Continue reading “Custom MATLAB InputFormat for Apache Spark”
MATLAB™ is a widely used professional tool for numerical processing, used across diverse disciplines such as physics, chemistry, and mathematics. Many public data sets are published in MATLAB™ format. This article gives a brief example of such a data set and of reading it from R. Continue reading “Reading Matlab files with R”
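The R side of the article rests on the CRAN package R.matlab; as a minimal sketch (the file name is a placeholder, not the data set used in the article):

```r
# install.packages("R.matlab")   # CRAN package for reading MAT-files
library(R.matlab)

# readMat() returns a named list with one entry per MATLAB variable
data <- readMat("dataset.mat")   # placeholder path
str(data)                        # inspect the variables stored in the file
```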
Hive joins are executed as distributed jobs by one of several execution engines, for example Tez, Spark, or MapReduce. Even joins of multiple tables can be achieved by a single job. Since its first release, many optimizations have been added to Hive, giving users various options for improving join queries.
Understanding how joins are implemented with MapReduce helps to recognize the different optimization techniques in Hive today. Continue reading “Hive Join Strategies”
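As a preview of one such optimization, here is a hedged HiveQL sketch of the map join, where the small table is loaded into memory so the join completes map-side without a shuffle; the table names are invented for the example:

```sql
-- Let Hive convert eligible joins to map joins automatically
SET hive.auto.convert.join=true;
-- Size threshold (bytes) under which a table counts as "small"
SET hive.mapjoin.smalltable.filesize=25000000;

-- Older explicit form: hint which table to hold in memory on the mappers
SELECT /*+ MAPJOIN(d) */ e.name, d.dept_name
FROM employees e
JOIN departments d ON e.dept_id = d.id;

-- EXPLAIN reveals whether a map join or a common (shuffle) join was chosen
EXPLAIN
SELECT e.name, d.dept_name
FROM employees e
JOIN departments d ON e.dept_id = d.id;
```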
Apache Ambari is rapidly improving its support for secure installations and for managing security in Hadoop. It is already fairly convenient to create Kerberized clusters in a snap with automated procedures or the Ambari wizard.
With the latest release of Ambari, Kerberos setups are baked into blueprint installations, making separate methods like additional API calls unnecessary. In this post I briefly discuss the new option in Ambari to use pure blueprint installs for secure cluster setups, and additionally explain some of the prerequisites for a sandbox-style demo install. Continue reading “Kerberos Ambari Blueprint Installs”
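For orientation, this is roughly what the Kerberos-relevant part of such a blueprint looks like; the names, stack version, and credentials are placeholders, and host groups and configurations are omitted, so treat this as a sketch rather than a complete install payload:

```json
{
  "Blueprints": {
    "blueprint_name": "kerberized-sandbox",
    "stack_name": "HDP",
    "stack_version": "2.5",
    "security": { "type": "KERBEROS" }
  }
}
```

and the matching fragment of the cluster creation template, which supplies the KDC admin credential:

```json
{
  "blueprint": "kerberized-sandbox",
  "default_password": "placeholder",
  "security": { "type": "KERBEROS" },
  "credentials": [
    {
      "alias": "kdc.admin.credential",
      "principal": "admin/admin@EXAMPLE.COM",
      "key": "placeholder",
      "type": "TEMPORARY"
    }
  ]
}
```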