
Let’s spend 2 minutes at The Bash Script Corner

A while ago, I was tasked with writing a bash script that would do the following:

  • Read the Postgres username, host and database name values from a Kubernetes ConfigMap;
  • Read the Postgres password from a Kubernetes Secret;
  • Replace the corresponding variables in the application.yml file with those values.

The script takes the target environment name as its single argument:

#!/usr/bin/env bash

echo "Mapping the env variables to the values from Kubernetes"

STAGING_ENV="$1"

# Sanity check: the script requires the target environment as its single argument
if [ $# -eq 0 ]; then
    echo "No argument was provided, however the script requires 1 argument to successfully run"
    exit 1
fi

POSTGRES_CORE="postgres-core"

# Postgres env variables: plain values come from the ConfigMap, the password from the Secret
# (note: base64 -D is the macOS/BSD flag; GNU coreutils uses base64 -d)
export DB_USERNAME=$(kubectl get configmap "$STAGING_ENV-$POSTGRES_CORE" -o jsonpath="{.data.username}")
export DB_PASSWORD=$(kubectl get secret "$STAGING_ENV-$POSTGRES_CORE" -o jsonpath="{.data.password}" | base64 -D)
export DB_HOSTNAME=$(kubectl get configmap "$STAGING_ENV-$POSTGRES_CORE" -o jsonpath="{.data.host}")
export DB_NAME=$(kubectl get configmap "$STAGING_ENV-$POSTGRES_CORE" -o jsonpath="{.data.name}")

export APPLICATION_YML_LOCATION=src/main/resources/application.yml

# Replace the default placeholders in application.yml with the values fetched above
# (sed -i '' is the macOS/BSD in-place syntax)
echo "Start replacing postgres env variables values with sed"
sed -i '' "s/DB_USERNAME:postgres/DB_USERNAME:$DB_USERNAME/g" "$APPLICATION_YML_LOCATION"
sed -i '' "s/DB_PASSWORD:password/DB_PASSWORD:$DB_PASSWORD/g" "$APPLICATION_YML_LOCATION"
sed -i '' "s/DB_HOSTNAME:localhost/DB_HOSTNAME:$DB_HOSTNAME/g" "$APPLICATION_YML_LOCATION"
sed -i '' "s/DB_NAME:core-db/DB_NAME:$DB_NAME/g" "$APPLICATION_YML_LOCATION"
echo "End replacing postgres env variables values with sed"



New from Satellite 2020: GitHub Discussions, Codespaces, securing code in private repositories, and more – The GitHub Blog

Source: New from Satellite 2020: GitHub Discussions, Codespaces, securing code in private repositories, and more – The GitHub Blog

Which Java Microservice Framework Should You Choose in 2020?

Source: Which Java Microservice Framework Should You Choose in 2020?

Installing and Configuring Hadoop on Mac

So, now you want to find out a bit more about Hadoop, an open source framework for storing large datasets in a distributed environment. Before tackling the installation and configuration of Hadoop on your beloved Mac, let’s clarify a few important points that will help you navigate the world of Hadoop.

Hadoop’s Components

There are essentially 4 components that form the core of Apache Hadoop:

  • HDFS, aka the Hadoop Distributed File System; HDFS is the primary data storage system used by Hadoop applications. It employs a NameNode and DataNode architecture, where a single NameNode manages the file system metadata and the DataNodes store the actual data;
  • MapReduce, aka the distributed data processing framework of Apache Hadoop;

    The MapReduce algorithm consists of 2 main stages:

    • Map stage − This is the input stage, where the data, in the form of files or directories, is stored in the Hadoop file system (HDFS) and split into chunks. The mapper is responsible for processing each chunk of data (a minimal word-count sketch in Java follows this list).
    • Reduce stage − This stage is the combination of the Shuffle stage and the Reduce stage. The Reducer’s job is to process the data that comes from the mapper.
      • During a MapReduce job, Hadoop sends the Map and Reduce tasks to the appropriate servers in the cluster.
      • The framework manages all the details of data-passing such as issuing tasks, verifying task completion, and copying data around the cluster between the nodes.
      • Most of the computing takes place on nodes with data on local disks, which reduces the network traffic.
      • After completion of the given tasks, the cluster collects and reduces the data to form an appropriate result, and sends it back to the Hadoop server.
  • Hadoop Common, a set of pre-defined utilities and libraries used by the other modules within the Hadoop ecosystem;
  • YARN (Yet Another Resource Negotiator). YARN is the cluster resource management layer of the Apache Hadoop ecosystem, which schedules jobs and assigns resources. The main idea behind the birth of YARN was to resolve issues such as scalability and resource utilisation within a cluster. YARN has 2 core components: the Scheduler and the Applications Manager. The Scheduler is responsible for allocating resources to the running applications, but it does not perform monitoring or status tracking for the applications.

    Conversely, the Applications Manager is responsible for accepting job submissions.
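
To make the Map and Reduce stages above concrete, here is a minimal word-count sketch in Java, modelled on the canonical Hadoop MapReduce example; the input and output HDFS paths are passed as command-line arguments and are purely illustrative.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map stage: each input line is tokenised and emitted as (word, 1) pairs
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce stage: all counts for the same word are shuffled to one reducer and summed
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}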

When Hadoop is the right choice

  • Processing large datasets;
  • Storing a variety of data formats (see the concept of Data Variety, one of the 3 V’s of Big Data);
  • Parallel data processing (yes, and that is exactly what MapReduce helps you with).

When Hadoop is NOT the right choice

  • When your dataset is not big enough, meaning that you can work perfectly well with RDBMS solutions;
  • For processing data stored in relational databases;
  • For processing real-time as well as graph-based data.

Hadoop Mac Installation

Before installing Hadoop, you should ask yourself what kind of Hadoop cluster, and therefore what kind of installation, you require. To make it easy, here are the 3 most common installation types:

  • Local or Standalone Mode. In this mode, Hadoop is configured to run in a non-distributed manner, as a single Java process on your computer;
  • Pseudo-Distributed Mode (also known as a single-node cluster). This is similar to standalone mode, but all the Hadoop daemons run on a single node. This is what is often called a near-production mode;
  • Fully Distributed Mode. This is the production mode of Hadoop, where multiple nodes run at the same time. In such a setting, data is distributed across several nodes and processing is done on each node.

Setting Up SSH on MacOS

Before installing Hadoop, we need to make sure that SSH is working properly on your machine by running the following:

ssh localhost

If it returns this:

ssh: connect to host localhost port 22: Connection refused

it means that Remote Login is off, which you can verify by running:

sudo systemsetup -getremotelogin
Remote Login: off

In order to enable the remote login, run the following:

sudo systemsetup -setremotelogin on

SSH keys will then need to be generated:

ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Install Hadoop via Homebrew

We are going to install Hadoop via Homebrew, as follows:

brew install hadoop

Homebrew will install Hadoop under:

/usr/local/Cellar/hadoop/3.2.1

Hadoop Configuration on Mac

Configuring Hadoop requires a number of steps.

Edit

hadoop-env.sh 

The file is located at

/usr/local/Cellar/hadoop/3.2.1/libexec/etc/hadoop/hadoop-env.sh

The following line should be changed from

export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"

to

export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true -Djava.security.krb5.realm= -Djava.security.krb5.kdc=" 

Edit

core-site.xml

The file is located at

/usr/local/Cellar/hadoop/3.2.1/libexec/etc/hadoop/core-site.xml

and add the configuration below inside the <configuration> tags:

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
</property>

Edit

mapred-site.xml

The file is located at

/usr/local/Cellar/hadoop/3.2.1/libexec/etc/hadoop/mapred-site.xml

which by default will be blank; add the configuration below:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9010</value>
  </property>
</configuration>

Edit

hdfs-site.xml

The file is located at

/usr/local/Cellar/hadoop/3.2.1/libexec/etc/hadoop/hdfs-site.xml

and add the following:

<configuration>
 <property>
  <name>dfs.replication</name>
  <!-- 1 replica is enough for a pseudo-distributed (single-node) setup -->
  <value>1</value>
 </property>
</configuration>

Before running Hadoop, format HDFS:

$ hdfs namenode -format

To start Hadoop, you can use the following 2 commands:

start-dfs.sh
start-yarn.sh

Both scripts are available at:

/usr/local/Cellar/hadoop/3.2.1/sbin

Kafka and Zookeeper: main concepts

What is Kafka

Apache Kafka is a distributed real-time streaming platform whose primary use cases are those requiring high throughput, reliability, and replication characteristics that are hard to achieve with traditional messaging solutions such as JMS brokers, RabbitMQ, and other AMQP-based systems.

Generally speaking, a Big Data streaming platform offers 3 main capabilities:

  • Publishing and subscribing to streams of records, similar to a message queue or enterprise messaging system (a minimal Java producer sketch follows this list);
  • Storing streams of records in a fault-tolerant durable way;
  • Processing streams of records as they occur.
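
To illustrate the first capability (publishing records), here is a minimal Java producer sketch using the standard Kafka client API; the broker address (localhost:9092) and the topic name (events) are placeholders rather than values from this post.

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SimpleProducer {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    // The producer picks the partition from the record key (or distributes records when no key is given)
    try (Producer<String, String> producer = new KafkaProducer<>(props)) {
      producer.send(new ProducerRecord<>("events", "key-1", "hello kafka"));
    }
  }
}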

Kafka’s Applications and Case Studies

Some of the companies that are using Apache Kafka in their respective use cases are as follows:

  • LinkedIn: Apache Kafka is used at LinkedIn for activity data streaming and operational metrics. This data powers various products such as LinkedIn News Feed and LinkedIn Today.
  • Twitter uses Kafka as part of its Storm-based (now Heron) stream-processing infrastructure. Here is an account of Twitter’s Kafka adoption.
  • Foursquare: Kafka powers online-to-online and online-to-offline messaging at Foursquare. It is used to integrate Foursquare monitoring and production systems with Foursquare- and Hadoop-based offline infrastructures.

Kafka: main concepts

A Kafka cluster primarily has 5 main components:

  • Topic: A topic is a category or feed name to which messages are published by the message producers. In Kafka, topics are partitioned, and each partition is represented by an ordered, immutable sequence of messages. A Kafka cluster maintains a partitioned log for each topic. Each message in a partition is assigned a unique sequential ID called the offset.
  • Broker: A Kafka cluster consists of one or more servers, each of which may run one or more server processes called brokers. Topics are created within the context of broker processes.
  • Zookeeper: It serves as the coordination interface between the Kafka brokers and consumers. From the Hadoop Wiki: “ZooKeeper allows distributed processes to coordinate with each other through a shared hierarchical name space of data registers (we call these registers znodes), much like a file system.”
  • Producers: They publish data to the topics by choosing the appropriate partition within the topic. For load balancing, the allocation of messages to the topic partition can be done in a round-robin fashion or using a custom defined function.
  • Consumers: They are the applications or processes that subscribe to topics and process the feed of published messages (a minimal Java consumer sketch follows this list).
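
To make the consumer concept concrete, here is a minimal Java consumer sketch that subscribes to the same illustrative topic and prints each record as it arrives; the broker address, group id and topic name are again placeholders.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SimpleConsumer {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
    props.put("group.id", "demo-group");              // placeholder consumer group
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
      consumer.subscribe(Collections.singletonList("events")); // illustrative topic
      while (true) {
        // Poll the brokers for new records and process the feed of published messages
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
        for (ConsumerRecord<String, String> record : records) {
          System.out.printf("offset=%d key=%s value=%s%n", record.offset(), record.key(), record.value());
        }
      }
    }
  }
}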

What is Zookeeper

ZooKeeper is a centralised service for maintaining configuration information, naming, and providing distributed synchronisation and group services. In a nutshell, ZooKeeper is a coordination interface that allows communication between the Kafka brokers and consumers. The main difference between ZooKeeper and a normal filesystem lies in the concept of the znode. Every znode is identified by a name and addressed by a path whose components are separated by a slash (/).

  • At the highest level there is a root znode (“/”), under which there can be, for example, 2 logical namespaces such as config and workers.
  • The config namespace is used for centralised configuration management and the workers namespace is used for naming.
  • Under the config namespace, each znode can store up to 1 MB of data. The main purpose of such a structure (also called the ZooKeeper Data Model) is to store synchronised data and describe the metadata of the znode. A minimal Java client sketch follows this list.
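
To make the znode concept concrete, here is a minimal Java client sketch using the ZooKeeper API; the connection string and the /config path are illustrative placeholders, and error handling is kept to a minimum.

import java.nio.charset.StandardCharsets;
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZnodeExample {
  public static void main(String[] args) throws Exception {
    // Wait until the session to the (placeholder) ZooKeeper server is established
    CountDownLatch connected = new CountDownLatch(1);
    ZooKeeper zk = new ZooKeeper("localhost:2181", 3000, event -> {
      if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
        connected.countDown();
      }
    });
    connected.await();

    // Create a persistent znode under an illustrative /config namespace and store a small payload
    zk.create("/config", "app-settings".getBytes(StandardCharsets.UTF_8),
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

    // Read the data back from the znode
    byte[] data = zk.getData("/config", false, null);
    System.out.println(new String(data, StandardCharsets.UTF_8));

    zk.close();
  }
}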

Where to go from here

Lots of resources can be found online; here are just a few to begin your journey with distributed messaging services:

Apache Kafka Home

Apache Kafka Github Repo

Apache Kafka for Beginners

Big Data Messaging with Kafka

Apache Zookeeper HomePage

Apache Zookeeper GitHub Repo

Spring Cloud Zookeeper

How to configure Zookeeper

 

 

Setting up your Deep Learning Environment (Mac)

So, you have embarked on your Deep Learning journey and perhaps you are navigating through the concepts of Gradient Descent, Back-propagation and so forth. After all the theory, you are eager to get your environment ready to do some actual ‘deep learning hard work’ and you have no idea where to start. You are in the right place then. This short tutorial has been put together for Mac users (sorry, Windows aficionados) and will provide you with what you need to get started.

Yes, you need Python!

Sure, you know that Python is the key programming language when it comes to Machine and Deep Learning. Make sure you have our beloved Homebrew:

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Install Python 3 (with this version, pip3 will be automatically installed)

brew install python3

Virtual Environment

In order to keep things clean and contain all your deep learning related dependencies in one place, it is useful to use virtual environments.

pip3 install virtualenv virtualenvwrapper

You will also need to modify your bash profile file:

vim ~/.bash_profile

by adding the following:

# virtualenv and virtualenvwrapper
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python3
source /usr/local/bin/virtualenvwrapper.sh

The next step is to create a virtual environment for your deep learning project:

mkvirtualenv cv -p python3

This will create a virtual environment named cv; to come out of it, you will need to type the command deactivate.

Some Additional Dependencies

You will also need to install cmake to be able to use dlib, a C++ toolkit containing Machine Learning algorithms:

brew install cmake

Additionally, you will need to download X11 to display the image outputs from both dlib and OpenCV.

Let’s install the real stuff

Make sure you are inside your virtual environment by typing the following:

workon cv

Some additional dependencies should be taken care of:

pip install numpy h5py pillow scikit-image

Finally, we can install OpenCV:

pip install opencv-python

Then, we will be installing Dlib, Tensorflow and Keras:

pip install dlib
pip install tensorflow
pip install keras

Keras, in particular, is a user-friendly, beginner-oriented library for Machine Learning and Deep Learning models that runs on top of TensorFlow. Happy Machine Learning modelling 🙂

 

Clearing the Confusion: AI vs Machine Learning vs Deep Learning Differences

This is perhaps the most basic question for beginners when learning about Machine Learning and Deep Learning.

https://towardsdatascience.com/clearing-the-confusion-ai-vs-machine-learning-vs-deep-learning-differences-fce69b21d5eb

Read Parquet Files with SparkSQL

SparkSQL is a Spark module for working with structured data, and it can also be used to read columnar data formats such as Parquet files. Here are a number of useful commands that can be run from the spark-shell:

// Set the context
val sqlContext = new org.apache.spark.sql.SQLContext(sc)

// Read the parquet file from HDFS and print its schema
val df = sqlContext.read.parquet("hdfs://user/myfolder/part-r-00033.gz.parquet")
df.printSchema

// Show the top 10 rows of data from the parquet file, without truncating the columns
df.show(10, false)

// Convert to JSON and print out the content of 1 record
df.toJSON.take(1).foreach(println)

Jenkins Best Practices – Practical Continuous Deployment in the Real World — GoDaddy Open Source HQ

Source: Jenkins Best Practices – Practical Continuous Deployment in the Real World — GoDaddy Open Source HQ

Java Beans and DTOs

DTO (Data Transfer Object)

Data Transfer Object is a pattern whose aim is to transport data between the layers and tiers of a program. A DTO should contain NO business logic:

public class UserDTO {
    String firstName;
    String lastName;
    List<String> groups;

    public String getFirstName() {
        return firstName;
    }
    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }
    public String getLastName() {
        return lastName;
    }
    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public List<String> getGroups() {
        return groups;
    }
    public void setGroups(List<String> groups) {
        this.groups = groups;
    }
}

Java Beans

Java Beans are classes that follow certain conventions, or, even better, Sun/Oracle standards/specifications, as explained here:

https://www.oracle.com/technetwork/java/javase/documentation/spec-136004.html

Essentially, Java Beans adhere to the following:

  • all properties are private (and they are accessed through getters and setters);
  • they have zero-arg constructors (aka default constructors);
  • they implement the Serializable interface.

The main reason why we use Java Beans is encapsulation: the state is kept private and is only accessed through getters and setters.

public class BeanClassExample implements java.io.Serializable {

  private int id;

  //no-arg constructor
  public BeanClassExample() {
  }

  public int getId() {
    return id;
  }

  public void setId(int id) {
    this.id = id;
  }
}

So, yeah, what is the real difference, if any?

In a nutshell, Java Beans follow strict conditions (as discussed above) and contain no behaviour (only state), apart from storage, retrieval, serialization and deserialization. A Java Bean is indeed a specification, while the DTO (Data Transfer Object) is a pattern in its own right. It is more than acceptable to use a Java Bean to implement a DTO pattern.
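
As a small illustration of that last point, the UserDTO shown earlier can be written as a fully fledged Java Bean (private fields, a zero-arg constructor, getters/setters and Serializable) while still serving as a DTO; the class name below is purely illustrative.

import java.io.Serializable;
import java.util.List;

// A DTO implemented as a Java Bean: no business logic, only state plus accessors
public class UserBeanDTO implements Serializable {

    private static final long serialVersionUID = 1L;

    private String firstName;
    private String lastName;
    private List<String> groups;

    // zero-arg constructor required by the Java Bean specification
    public UserBeanDTO() { }

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }

    public List<String> getGroups() { return groups; }
    public void setGroups(List<String> groups) { this.groups = groups; }
}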