The Ambari server offers a comprehensive REST API that can be used to install a complete HDP cluster or to manage every part of it. In this post we are going to explore how to install a new service. With Ambari it is fairly easy to define custom services for management. Just as YARN is a general-purpose execution engine, Ambari can be seen as a general-purpose management service. For this example we are using the Apache Zeppelin service provided here. More general documentation on how to install a new service into an existing cluster can be found here. Continue reading “Install Apache Zeppelin via REST & Ambari”
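While the full walkthrough is in the post itself, a minimal sketch of the REST calls involved could look like the following. The Ambari host, cluster name, target host and the ZEPPELIN/ZEPPELIN_MASTER identifiers are placeholders (the component name depends on the custom service definition used), and the configuration steps covered in the post are left out.

```python
import requests

AMBARI = "http://ambari-server:8080/api/v1"   # placeholder Ambari host
CLUSTER = "mycluster"                         # placeholder cluster name
AUTH = ("admin", "admin")
HEADERS = {"X-Requested-By": "ambari"}        # header required by the Ambari API

# 1. Register the new service with the cluster
requests.post(f"{AMBARI}/clusters/{CLUSTER}/services/ZEPPELIN",
              auth=AUTH, headers=HEADERS)

# 2. Add the service component defined by the custom service
requests.post(f"{AMBARI}/clusters/{CLUSTER}/services/ZEPPELIN/components/ZEPPELIN_MASTER",
              auth=AUTH, headers=HEADERS)

# 3. Assign the component to a host of the cluster
requests.post(f"{AMBARI}/clusters/{CLUSTER}/hosts/host1.example.com/host_components/ZEPPELIN_MASTER",
              auth=AUTH, headers=HEADERS)

# 4. Trigger the installation by moving the service into the INSTALLED state
requests.put(f"{AMBARI}/clusters/{CLUSTER}/services/ZEPPELIN",
             json={"ServiceInfo": {"state": "INSTALLED"}},
             auth=AUTH, headers=HEADERS)
```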
Upgrade Docker to Master on OSx
Docker is a fast-moving project enjoying a lot of popularity among developers across all industries. With this wide support the Docker ecosystem is evolving almost daily, ranging from Deis as a PaaS platform to cluster management with CoreOS and Kubernetes. Even Microsoft is considering Docker support for the next version of Windows Server. In this post I would like to demonstrate how to upgrade to a master release of Docker running on Mac OS X with Boot2Docker, which comes in handy when trying to keep up with the latest development or to work around a current bug that is already fixed in a newer release. The notes given here will likely also be useful in other environments. Continue reading “Upgrade Docker to Master on OSx”
Installing Ranger with Ambari Blueprints
With the new release of HDP 2.3 comes Ambari 2.1, which among other improvements brings provisioning and management of Apache Ranger. Together with new agents for centralized authorization management, Ranger adds a KMS key store for HDFS encryption. HDP components can be installed and configured in Ambari through blueprints, which are described in JSON notation. Continue reading “Installing Ranger with Ambari Blueprints”
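As a rough idea of what such a blueprint-based install looks like, here is a stripped-down sketch that only registers the Ranger master components; a real blueprint would list all required services plus their configurations (for Ranger in particular the admin, database and policy settings discussed in the post). Host names, cluster and blueprint names are placeholders.

```python
import requests

AMBARI = "http://ambari-server:8080/api/v1"   # placeholder Ambari host
AUTH = ("admin", "admin")
HEADERS = {"X-Requested-By": "ambari"}

# Minimal blueprint with a single host group carrying the Ranger components
blueprint = {
    "Blueprints": {"stack_name": "HDP", "stack_version": "2.3"},
    "host_groups": [{
        "name": "ranger_host",
        "cardinality": "1",
        "components": [
            {"name": "RANGER_ADMIN"},
            {"name": "RANGER_USERSYNC"},
        ],
    }],
}

# Register the blueprint under a chosen name ...
requests.post(f"{AMBARI}/blueprints/ranger-blueprint",
              json=blueprint, auth=AUTH, headers=HEADERS)

# ... and instantiate a cluster from it by mapping host groups to real hosts
cluster = {
    "blueprint": "ranger-blueprint",
    "host_groups": [{"name": "ranger_host",
                     "hosts": [{"fqdn": "host1.example.com"}]}],
}
requests.post(f"{AMBARI}/clusters/rangercluster",
              json=cluster, auth=AUTH, headers=HEADERS)
```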
Services and State with Ambari REST API
The Ambari management tool for Hadoop offers, among other handy tools, a comprehensive REST API for cluster administration. Logically a cluster is divided into hosts, services and service components. While the UI might not always support every needed scenario, the REST API can be used to achieve it, for example moving a master component of a service from one host to another.
In this post we are going to look a little closer at the way the Ambari API can be used to manage Hadoop services. At the end of the post you will find a list of all the currently supported Hadoop services with all the master, slave and client components that can be managed and administered within your HDP stack. This post also contains the possible states and state transitions a component can go through, which can become useful when facing problems like “Host config is in invalid state”. Continue reading “Services and State with Ambari REST API”
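To give a flavour of how states are read and changed, here is a minimal sketch of the two basic calls, assuming placeholder host and cluster names and HDFS as the example service; the full list of services, components and state transitions is in the post.

```python
import requests

AMBARI = "http://ambari-server:8080/api/v1"   # placeholder Ambari host
CLUSTER = "mycluster"                         # placeholder cluster name
AUTH = ("admin", "admin")
HEADERS = {"X-Requested-By": "ambari"}

# Read the current state of a service (e.g. STARTED or INSTALLED)
resp = requests.get(f"{AMBARI}/clusters/{CLUSTER}/services/HDFS",
                    auth=AUTH, headers=HEADERS)
print(resp.json()["ServiceInfo"]["state"])

# Stop the service by transitioning it into the INSTALLED state
body = {
    "RequestInfo": {"context": "Stop HDFS via REST"},
    "Body": {"ServiceInfo": {"state": "INSTALLED"}},
}
requests.put(f"{AMBARI}/clusters/{CLUSTER}/services/HDFS",
             json=body, auth=AUTH, headers=HEADERS)
```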
Spark Streaming with Python
Streaming applications in Spark can be written in Scala, Java and Python, giving developers the possibility to reuse existing code. An important note about Python with Spark in general is that it lags behind the development of the other APIs by several months. For Spark Streaming only basic input sources are supported in Python; sources like Flume and Kafka might not be available yet. For now only text file and text socket inputs are supported (Kafka support is available as of Spark 1.3). A general fileStream is not supported, just textFileStream. Continue reading “Spark Streaming with Python”
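As a minimal illustration of one of those basic sources, a Python word count over a text socket could look roughly like this (host, port and batch interval are arbitrary choices for the sketch):

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

# 10-second micro-batches over a plain text socket source
sc = SparkContext(appName="PythonStreamingWordCount")
ssc = StreamingContext(sc, 10)

lines = ssc.socketTextStream("localhost", 9999)   # e.g. fed by `nc -lk 9999`
counts = (lines.flatMap(lambda line: line.split(" "))
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()

ssc.start()
ssc.awaitTermination()
```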
HiveSink for Flume
With the most recent release of HDP (v2.2.4) Hive Streaming is shipped as a technical preview. It can, for example, be used with Storm to ingest streaming data collected from Kafka, as demonstrated here. But it still has some serious limitations and, in the case of Storm, a major bug. Nevertheless Hive Streaming is likely to become the tool of choice when it comes to streamlining data ingestion into Hadoop, so it is worth exploring already today.
Flume’s upcoming release 1.6 will contain a HiveSink capable of leveraging Hive Streaming for data ingestion. In the following post we will use it as a replacement for the HDFS sink used in a previous post here. Other than replacing the HDFS sink with a HiveSink, none of the previous setup will change, except for the Hive table schema, which needs to be adjusted to meet the requirements that currently exist around Hive Streaming. So let’s get started by looking into these restrictions. Continue reading “HiveSink for Flume”
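To show roughly how the swap looks on the Flume side, here is a sketch of a HiveSink section of an agent configuration; the agent, channel, metastore URI, table and field names are placeholders, and the Hive table itself must meet the Hive Streaming requirements (transactional, bucketed, ORC) discussed in the post.

```
# Hypothetical agent and channel names; only the sink section is shown
agent.sinks = hivesink
agent.sinks.hivesink.type = hive
agent.sinks.hivesink.channel = memchannel
agent.sinks.hivesink.hive.metastore = thrift://metastore-host:9083
agent.sinks.hivesink.hive.database = default
agent.sinks.hivesink.hive.table = events
agent.sinks.hivesink.serializer = DELIMITED
agent.sinks.hivesink.serializer.delimiter = ","
agent.sinks.hivesink.serializer.fieldnames = id,msg
```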
HDFS Spooling Directory with Spark
As Spark natively supports reading from any kind of Hadoop InputFormat, those data sources are also available to form DStreams for Spark Streaming applications. By using a simple HDFS file input format, an HDFS directory can be turned into a spooling directory for data ingestion.
Files newly added to that directory in an atomic way (which is required) are picked up by the running streaming context for processing. The data could, for example, be processed and stored in an external system like HBase or Hive. Continue reading “HDFS Spooling Directory with Spark”
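A bare-bones sketch of such a spooling setup with textFileStream might look like this; the HDFS path and batch interval are placeholders, and the downstream write to HBase or Hive from the post is reduced to a simple per-batch count.

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="HdfsSpoolingDirectory")
ssc = StreamingContext(sc, 30)   # check the directory every 30 seconds

# Any file moved atomically into this directory is picked up as a new batch
lines = ssc.textFileStream("hdfs:///user/spark/spool")   # placeholder path

# The real post processes and stores the data externally; here we only count lines
lines.count().pprint()

ssc.start()
ssc.awaitTermination()
```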
Hadoop File Ingest and Hive
At the beginning of every Hadoop adventure stands the task of ingesting data into HDFS, ideally in a way that lets it be queried for analysis with Hive at any point in time. Chances are high that at the start of a Hadoop project most enterprise data resides inside RDBMS systems. Sqoop is the tool of choice within the Hadoop ecosystem for this kind of data, and it is also quite convenient to use with Hive directly.
As most business is inherently event driven and more and more electronic devices are being used to track these events, ingesting a stream of data into Hadoop is a common demand. In such a stream-processing scenario a tool like Kafka would be used for data ingestion into Hadoop.
None of the methods mentioned above considers the sheer amount of data stored in files today, not to mention the files newly created day by day. While WebHDFS or direct HDFS access are certainly convenient methods for file ingestion, they often require direct access to the cluster or a huge landing zone that also has direct access to HDFS. Continuous data ingestion is not supported either.
For such scenarios Apache Flume is a good option. Flume is capable of dealing with various continuous data sources. Sources can be piped together over several nodes through channels, writing data into various sinks. In this post we look at the possibility of defining a local directory where files can be dropped off, while Flume monitors that directory for new files and sinks them to HDFS. Continue reading “Hadoop File Ingest and Hive”
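The core of such a setup is a spooling-directory source wired through a channel into an HDFS sink. A minimal configuration sketch could look like the following; the agent name, local landing directory and HDFS path are placeholders, and a production setup would rather use a durable file channel and the partitioning and Hive wiring described in the post.

```
# Hypothetical agent name and paths; minimal spooling-directory-to-HDFS pipeline
agent.sources = spool
agent.channels = memchannel
agent.sinks = hdfssink

agent.sources.spool.type = spooldir
agent.sources.spool.spoolDir = /data/landing
agent.sources.spool.channels = memchannel

agent.channels.memchannel.type = memory
agent.channels.memchannel.capacity = 10000

agent.sinks.hdfssink.type = hdfs
agent.sinks.hdfssink.channel = memchannel
agent.sinks.hdfssink.hdfs.path = /user/flume/ingest
agent.sinks.hdfssink.hdfs.fileType = DataStream
agent.sinks.hdfssink.hdfs.writeFormat = Text
```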
Hadoop Credential API
In Hadoop 2.6 a fundamental feature was introduced that did not get much attention but will play an important role moving forward: the Credential API. Looking ahead, the Credential Management Framework (CMF) will be important for the pluggable token authentication framework, column encryption in ORC files, and transparent data encryption. But not only future components benefit from it; applications built for Hadoop can as well. Continue reading “Hadoop Credential API”
Spark Streaming with Kafka & HBase Example
Even a simple example using Spark Streaming doesn’t quite feel complete without the use of Kafka as the message hub. More and more use cases rely on Kafka for message transportation. Taking a simple streaming example (Spark Streaming – A Simple Example, source at GitHub) together with a fictitious word count use case, this post describes the different ways to add Kafka to a Spark Streaming application. Additionally, this post describes how to write results to HBase directly from Spark using the TableOutputFormat. Continue reading “Spark Streaming with Kafka & HBase Example”
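As a taste of the Kafka side of the example, a receiver-based word count in Python (available as of Spark 1.3) could look roughly like this; ZooKeeper quorum, consumer group and topic name are placeholders, and the HBase output via TableOutputFormat is left to the full post.

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

sc = SparkContext(appName="KafkaWordCount")
ssc = StreamingContext(sc, 10)

# Receiver-based Kafka stream: (ZooKeeper quorum, consumer group, {topic: partitions})
kafka_stream = KafkaUtils.createStream(ssc, "zkhost:2181", "wordcount-group", {"words": 1})

counts = (kafka_stream.map(lambda kv: kv[1])          # keep only the message value
                      .flatMap(lambda line: line.split(" "))
                      .map(lambda word: (word, 1))
                      .reduceByKey(lambda a, b: a + b))
counts.pprint()

ssc.start()
ssc.awaitTermination()
```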