Hadoop, where are you heading?


One of my five predictions for 2019 is about Hadoop. In short, I expect that many projects will no longer adopt Hadoop as a full-blown solution. Why is that? One of the most exciting pieces of news in 2018 was the merger between Hortonworks and Cloudera. The two main competitors joining forces? How can this happen? I believe it did not come from a position of strength, nor because the two suddenly started to “love” each other, but rather from economic calculations. The competition is no longer Hortonworks vs. Cloudera (it wasn’t even before the merger); it is Hadoop vs. new solutions. These solutions are highly diversified: Apache Spark is one of its top competitors, but there are also other platforms such as Apache Kafka, NoSQL databases such as MongoDB, and TensorFlow emerging. One could now argue that all of that is included in a Cloudera

read more Hadoop, where are you heading?

Cloud is not the future


Now you probably think: has Mario gone crazy? In this post, I will explain why the cloud is not the future. First, let’s look at the economic facts of the cloud. If we look at the share prices of companies providing cloud services, it is easy to say: those shares are skyrocketing! (Leaving aside recent drops in some of them, which are market dynamics rather than real valuations.) The same is true of overall company performance: the income of companies providing cloud services has increased substantially. Look at the major cloud providers such as AWS, Google, Oracle or Microsoft: they now make a large share of their revenue with cloud services. So, obviously, my initial statement seems to be wrong. Why did I choose this title then? Still crazy? Let’s look at another explanation: it might be all about technology, right? I was recently playing with AWS API Gateway and AWS Lambda.

read more Cloud is not the future

How to: Start and Stop Cloudera on Azure with the Azure CLI


The Azure CLI is my favorite tool to manage Hadoop clusters on Azure. Why? Because I can use the tools I am used to from Linux, now from my Windows PC. In Windows 10, I use the Ubuntu Bash for that, which gives me all the major tools for managing remote Hadoop clusters. One thing I do frequently is starting and stopping Hadoop clusters based on Cloudera. If you are coming from PowerShell, this might be rather painful for you, since you can only start each VM in the cluster sequentially, meaning that a cluster consisting of 10 or more nodes is rather slow to start and might take hours! With the Azure CLI I can easily do this by specifying “--no-wait”, and everything runs in parallel (a short sketch follows below). The only disadvantage is that I won’t get any notification when the cluster is ready. But I solve this with a simple hack: ssh’ing into the cluster (since I

read more How to: Start and Stop Cloudera on Azure with the Azure CLI
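
To illustrate the approach described above, here is a minimal sketch of starting and stopping all VMs of a cluster in parallel with the Azure CLI; the resource group name “cloudera-rg” is an assumption and has to be replaced with your own:

# start all VMs of the (assumed) cluster resource group in parallel
az vm start --ids $(az vm list -g cloudera-rg --query "[].id" -o tsv) --no-wait

# deallocate them again to stop paying for compute
az vm deallocate --ids $(az vm list -g cloudera-rg --query "[].id" -o tsv) --no-wait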

Hadoop Tutorial – Working with the Apache Hue GUI


When working with the main Hadoop services, it is not necessary to work with the console at all times (even though this is the most powerful way of doing so). Most Hadoop distributions also come with a user interface called “Apache Hue”, a web-based interface running on top of a distribution. Apache Hue integrates major Hadoop projects such as Hive, Pig and HCatalog into the UI. The nice thing about Apache Hue is that it makes the management of your Hadoop installation pretty easy with a great web-based UI. The following screenshot shows Apache Hue on the Cloudera distribution.
(Screenshot: Apache Hue)

Hadoop Tutorial – Hadoop Commons


Hadoop Common is one of the easiest things to explain in the Hadoop context – even though it might get complicated when working with it. Hadoop Common is a collection of libraries and tools that are often necessary when working with Hadoop. These libraries and tools are then used by various projects in the Hadoop ecosystem. Samples include:
- a CLI MiniCluster that enables a single-node Hadoop installation for testing purposes
- native libraries for Hadoop
- authentication and superusers
- a Hadoop secure mode
You might not use all of the tools and libraries in Hadoop Common, as some of them are only needed when you work on Hadoop projects.

Hadoop Tutorial – Serialising Data with Apache Avro


Apache Avro is a service in Hadoop that enables data serialization. The main tasks of Avro are:
- Provide complex data structures
- Provide a compact and fast binary data format
- Provide a container to persist data
- Provide RPCs to the data
- Enable the integration with dynamic languages
Avro schemas are defined in JSON and allow several different types:
- Elementary types: Null, Boolean, Int, Long, Float, Double, Bytes and String
- Complex types: Record, Enum, Array, Map, Union and Fixed
The sample below demonstrates an Avro schema:

{"namespace": "person.avro",
 "type": "record",
 "name": "Person",
 "fields": [
   {"name": "name", "type": "string"},
   {"name": "age", "type": ["int", "null"]},
   {"name": "street", "type": ["string", "null"]}
 ]
}

Table 4: an Avro schema
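
To make the schema more tangible, here is a small sketch (not from the original post) of how a Person record could be serialized with the Avro Java API; the file names person.avsc and persons.avro are assumptions:

import java.io.File;
import org.apache.avro.Schema;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;

public class AvroPersonExample {
    public static void main(String[] args) throws Exception {
        // parse the schema shown above (person.avsc is an assumed file name)
        Schema schema = new Schema.Parser().parse(new File("person.avsc"));

        // build a record that follows the schema
        GenericRecord person = new GenericData.Record(schema);
        person.put("name", "Mario");
        person.put("age", 30);
        person.put("street", "Main Street 1");

        // write the record into a compact binary Avro container file
        try (DataFileWriter<GenericRecord> writer =
                 new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema))) {
            writer.create(schema, new File("persons.avro"));
            writer.append(person);
        }
    }
}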

Hadoop Tutorial – Import large amounts of data with Apache Sqoop


Apache Sqoop is in charge of moving large datasets between different storage systems, such as relational databases, and Hadoop. Sqoop supports a large number of connectors, such as JDBC, to work with different data sources, and it makes it easy to import existing data into Hadoop. Sqoop supports the following databases:
- HSQLDB starting with version 1.8
- MySQL starting with version 5.0
- Oracle starting with version 10.2
- PostgreSQL
- Microsoft SQL Server
Sqoop provides several possibilities to import and export data from and to Hadoop. The service also provides several mechanisms to validate data.
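
As a hedged sketch of what such an import can look like (the connection string, user, table name and target directory are assumptions, not from the original post):

# import the assumed "customers" table from MySQL into HDFS with four parallel mappers
sqoop import \
  --connect jdbc:mysql://dbserver:3306/shop \
  --username sqoopuser -P \
  --table customers \
  --target-dir /data/customers \
  --num-mappers 4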

Hadoop Tutorial – Analysing Log Data with Apache Flume


Most IT departments produce a large amount of log data. This is especially the case when server systems are monitored, but it is also necessary for device monitoring. Apache Flume comes into play when this log data needs to be analyzed. Flume is all about data collection and aggregation. It is built on a flexible architecture based on streaming data flows, and the service allows you to extend the data model. Key elements of Flume are the following (a short configuration sketch follows below):
- Event. An event is data that is transported from one place to another.
- Flow. A flow consists of several events that are transported between several places.
- Client. A client is the start of a transport. There are several clients available; a frequently used one is the Log4j appender.
- Agent. An agent is an independent process that provides components to Flume.
- Source. This is an interface implementation that is capable of transporting events. A sample of that is an Avro source.
- Channels.

read more Hadoop Tutorial – Analysing Log Data with Apache Flume
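
To give an idea of how these elements fit together, here is a minimal sketch of a Flume agent configuration; the agent, source, channel and sink names as well as the HDFS path are assumptions:

# an assumed agent "agent1" with one netcat source, one memory channel and one HDFS sink
agent1.sources = src1
agent1.channels = ch1
agent1.sinks = sink1

# source: listen for events on a local port
agent1.sources.src1.type = netcat
agent1.sources.src1.bind = localhost
agent1.sources.src1.port = 44444
agent1.sources.src1.channels = ch1

# channel: buffer events in memory between source and sink
agent1.channels.ch1.type = memory

# sink: write the events to an assumed HDFS path
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = /data/flume/logs
agent1.sinks.sink1.channel = ch1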

Hadoop Tutorial – Data Science with Apache Mahout


Apache Mahout is the service on Hadoop that is in charge of what is often called “data science”. Mahout is all about learning algorithms, pattern recognition and the like. An interesting fact about Mahout is that, under the hood, MapReduce was replaced by Spark. Mahout is in charge of the following tasks:
- Machine learning. Learning from existing data.
- Recommendation mining. This is what we often see on websites. Remember the “You bought X, you might be interested in Y”? This is exactly what Mahout can do for you.
- Cluster data. Mahout can cluster documents and data that have some similarities.
- Classification. Learning from existing classifications.
A Mahout program is written in Java. The next listing shows how the recommendation builder works.

DataModel model = new FileDataModel(new File("/home/var/mydata.xml"));
RecommenderEvaluator eval = new AverageAbsoluteDifferenceRecommenderEvaluator();
RecommenderBuilder builder = new MyRecommenderBuilder();
Double res = eval.evaluate(builder, null, model, 0.9, 1.0);
System.out.println(res);

A Mahout program

Hadoop Tutorial – Graph Data in Hadoop with Giraph and Tez


Both Apache Giraph and Apache Tez are focused on graph processing. Apache Giraph is a very popular tool for graph processing; a famous use case is the social graph at Facebook. Facebook uses Giraph to analyze how one might know a person in order to find out which other persons could be friends. A classic graph-processing problem is the travelling salesman problem, which tries to answer the question of what the shortest route to reach all customers is. Apache Tez is focused on improving performance when working with graphs. It makes development much easier and significantly reduces the number of MapReduce jobs that are executed underneath. Apache Tez greatly increases performance compared with typical MapReduce queries and optimizes resource management. The following figure demonstrates graph processing with and without Tez.
(Figure: MapReduce without Tez vs. with Tez)