HowTo: Setup a directory sensitive prompt in Bash that is updated at runtime

My requirements are:

  1. I want to see the full path when I am not in a sub-directory of my home directory. I want this behaviour when I am not logged in as root.
  2. I want to see the immediate sub-directory if I am in a sub-directory of the home directory. If this sub-directory is more than one level down then I want to see the immediate sub-directory followed by a ‘+’ token to show that I am more than one level down.

To implement this I updated the ‘~/.bashrc’ file and made two changes. First, I updated the PS1 variable, which holds the Bash shell prompt, as follows:

path=`echo $(ls)`;
inside_home=`echo $(pwd|grep -i home)`;
inside_two_sub_directories_of_home=`echo $(pwd | grep -i home | cut -d/ -f5)`;
if [ ! -z $inside_home ]; then
PS1='${debian_chroot:+($debian_chroot)}\u@\h:$(if [ ! -z $inside_two_sub_directories_of_home ];then echo "$(pwd | grep -i home | cut -d/ -f1,2,3$
else
PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
fi

Second, I wrapped the built-in ‘cd’ command in a shell function so that the above code runs each time the directory is changed. If this is not done, PS1 is only set when the user logs in or when they manually run ‘source ~/.bashrc’. To make this second change I added the following line to the ‘~/.bashrc’ file:

function cd() { builtin cd "$@" && source ~/.bashrc; }
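Alternatively, the prompt can be recomputed before every prompt via PROMPT_COMMAND, which avoids re-sourcing ‘~/.bashrc’ on every ‘cd’. The following is only a minimal sketch of the two requirements above, assuming home directories live under /home/<user>:

__set_prompt() {
    local dir
    case "$PWD" in
        "$HOME"/*/*) dir="$(echo "${PWD#$HOME/}" | cut -d/ -f1)+" ;;  # more than one level below home: first sub-directory plus '+'
        "$HOME"/*)   dir="${PWD#$HOME/}" ;;                           # exactly one level below home
        *)           dir='\w' ;;                                      # outside home: let Bash print the full path
    esac
    PS1="${debian_chroot:+($debian_chroot)}\u@\h:${dir}"'\$ '
}
PROMPT_COMMAND=__set_prompt   # runs before every prompt, so no cd() override is needed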


My Guess At: What does a data scientist do?

  1. A statistician is skilled in developing a predictive model by analyzing historic data.
  2. A programmer is skilled in translating such a model into code.
  3. A data scientist is skilled at understanding the needs of a software product and can enhance it through the skills in (1) & (2) to improve its ability to make predictions about each individual user.
  4. An application of this is Facebook, which provides its users with a tailored news feed.

Reference: https://mixpanel.com/blog/2016/03/30/this-is-the-difference-between-statistics-and-data-science/

AWS Certification Brief

There are three AWS certification tracks as follows:

  1. AWS Solutions Architect Associate level certification
  2. AWS Developer Associate level certification
  3. AWS Operations Associate level certification

The first track leads to an AWS Architect Professional level certification. The second and third tracks both lead to the same advanced certification, titled AWS DevOps Professional level certification.

There is a channel on YouTube titled ‘AWS re:Invent’ that hosts a lot of videos that will help you prepare for the certifications above. The website where I purchased one month of unlimited lab access for USD 55 is: https://qwiklabs.com/?locale=en. Their labs are grouped into what they call ‘Quests’. These Quests are designed to prepare you for a given certification. I completed the Quest for AWS Solutions Architect. They provide a public URL as evidence when you complete a Quest – mine is: https://qwiklabs.com/public_profiles/dfc826aa-a227-466a-8409-a5d7077bb642

Note that the Architect track is designed to be broad in scope. An architect is presumed to be an expert who knows the full breadth of available AWS resources and services, and who can use them to ‘architect’ a solution to a given use case.

The Operations Associate is presumed to have depth of knowledge of the tool stack put together by the architect and usually has a deep technical background, for example as a Data Center Engineer. Using this knowledge he/she is expected to do the actual implementation of the architecture proposed by the AWS Architect.

The Developer Associate track, I presume, is meant to introduce automation via scripting and to aid the Operations Associate.

The revolution that is AWS

AWS has brought together the best (data-center provisioning) tool stack in the world of open source and put it, seamlessly, behind what they call a ‘Management Console’. What this means is that experts who know the breadth and depth of these technologies are now redundant.

All anyone needs to know in this area of tech is AWS and how their Management Console user interface operates. Thus an expert who can create a custom solution from scratch will be replaced by someone who can implement the same solution using what is available through the AWS management console.

“This is at least true for public clouds” – A mentor.

init script for Bitnami Moodle – Ubuntu 14.04

Once the Bitnami Moodle installer (version bitnami-moodle-3.1.1-1-linux-x64-installer.run) has installed Moodle, a non-root user is required to launch it. However, since init scripts are launched with root-level privileges, a special procedure is required if you choose to launch Moodle via init.

The following procedure will allow you to launch Moodle at boot time via an init script in Ubuntu 14.04.

  1. sudo nano /etc/init.d/moodle (and copy the contents of the gist below)
  2. sudo chmod +x /etc/init.d/moodle
  3. sudo chown root.root /etc/init.d/moodle
  4. sudo update-rc.d moodle defaults
#!/bin/bash
# Moodle Startup Service script v1.0 by Faraz Haider 19 August 2016
# acts as startup service script for Learning Management System.
# USAGE: start|stop|status
#
case "$1" in
  start)
    echo "Starting Moodle."
    su -c '/opt/moodle/ctlscript.sh start' user_name
    ;;
  stop)
    echo "Stopping Moodle."
    su -c '/opt/moodle/ctlscript.sh stop' user_name
    ;;
  status)
    # Check to see if the process is running
    su -c '/opt/moodle/ctlscript.sh status' user_name
    ;;
  *)
    echo "Moodle Service."
    echo $"Usage: $0 {start|stop|status}"
    exit 1
esac
exit 0

You will have to replace the token user_name in the gist above with the user name of the non-root user with which Bitnami Moodle was installed. Also, note that the location of my Bitnami Moodle installation is ‘/opt/moodle’. You will have to update this path before using the script if you install Bitnami Moodle at a different location.
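Assuming the script was saved as /etc/init.d/moodle as in step 1, it can be exercised immediately without rebooting:

sudo service moodle start    # should print "Starting Moodle." and launch the stack via ctlscript.sh
sudo service moodle status   # delegates to ctlscript.sh status
sudo service moodle stop     # clean shutdown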

Additionally, to start Bitnami Moodle on port 80 run the commands below:

  • sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
  • sudo aptitude install iptables-persistent: this will make the rule in the step above persistent across reboots.

HowTo: Setup Nginx & PHP-FPM Docker containers on an Ubuntu (16.04) VM hosted on Xen Server 6.5

The Ubuntu VM will have two docker containers running on it. These containers will be managed via Xen Center.

The high-level steps to achieve this are:

  1. Install docker plug-in on Xen Server.
  2. Deploy Ubuntu 16.04 from ISO and install additional software on VM to enable Xen Server to manage dockers inside the VM.
  3. Run CLI command on Xen Server instructing it to enable VM for docker management.
  4. Deploy Docker config files on VM filesystem.
  5. Pull Docker images from the official repository.
  6. Launch and Test.

Install docker plug-in on Xen Server

  1. SSH on your XenServer
  2. Download the plugin: wget http://downloadns.citrix.com.edgesuite.net/10343/XenServer-6.5.0-SP1-xscontainer.iso
  3. Install it: xe-install-supplemental-pack XenServer-6.5.0-SP1-xscontainer.iso

Note: The plugin must be installed on every host, even if the hosts are in the same pool.

Deploy Ubuntu 16.04 from ISO and install additional software on VM to enable Xen Server to manage dockers inside the VM

  1. Download Ubuntu server 16.04 ISO from: http://www.ubuntu.com/download/server/thank-you?version=16.04.1&architecture=amd64
  2. Copy the ISO to a Windows server that is accessible to the Xen Server.
  3. Create an SR based on Windows File Sharing.
  4. Provide the mount location of the ISO folder in step (2) in the SR setup wizard in step (3).
  5. Instantiate & install the Ubuntu 16.04 VM on Xen Server using the ISO from step (4).
  6. sudo apt-get update
  7. sudo apt-get install aptitude
  8. sudo aptitude safe-upgrade
  9. sudo aptitude install docker.io openssh-server nmap
  10. sudo usermod -aG docker (user_name), where (user_name) is the user created during installation of Ubuntu; usermod adds this existing user to the docker group (useradd would attempt to create a new user). Execute the command without the parentheses.
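Before moving on it is worth confirming that this user can talk to the Docker daemon without sudo. A quick check (after logging out and back in so the group change takes effect):

newgrp docker                # or simply log out and back in
docker run --rm hello-world  # should pull and run the test image without sudo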

Run CLI command on Xen Server instructing it to enable VM for docker management

  1. Install Xen Tools on the VM.
  2. Right click the VM on the Xen Center and click on ‘Install Xen Tools’.
  3. sudo mount /dev/cdrom /mnt
  4. cd /mnt/Linux
  5. sudo ./install.sh
  6. Restart VM
  7. Go to Xen Center and identify the UUID of the VM.
  8. Go to the Xen Server console and execute the command: xscontainer-prepare-vm -v (VM_UUID) -u (user_name) where VM_UUID is the UUID from step (7) and user_name is the non-root user name of the user on the Ubuntu 16.04 system set up while installing the OS. Execute the command without the parentheses.
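If you prefer the Xen Server console over Xen Center for step (7), the UUID can also be looked up with the xe CLI; the VM name below is a placeholder for whatever you named the VM:

xe vm-list name-label="ubuntu-docker-vm" params=uuid
xscontainer-prepare-vm -v <uuid-from-previous-command> -u <user_name>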

Deploy Docker config files on VM filesystem

  1. mkdir ~/docker-files
  2. cd ~/docker-files
  3. nano default.conf and paste the contents of the gist below. Replace (your_server_name) with the hostname of your server.
  4. nano docker-compose.yml and paste the contents of the gist below
  5. mkdir ./code
server {
    index index.php index.html;
    server_name your_server_name;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /code;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
web:
  image: nginx:latest
  ports:
    - "80:80"
  volumes:
    - ./default.conf:/etc/nginx/conf.d/default.conf
    - ./code:/code
  links:
    - php
php:
  image: php:7-fpm
  volumes:
    - ./code:/code

Pull Docker images from the official repository

  1. cd ~/docker-files
  2. docker-compose up
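Running docker-compose up in the foreground is handy the first time because the logs of both containers are printed to the terminal. Once everything works, the same stack can be started in the background:

cd ~/docker-files
docker-compose up -d   # start web (nginx) and php (php-fpm) detached
docker-compose ps      # confirm both containers are Up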

Launch and Test

  1. Create a PHP file ~/docker-files/code/hello_world.php with contents from the tutorial here: http://php.net/manual/en/tutorial.firstpage.php (a minimal example is shown after this list).
  2. Browse the URL: http://your_server_name/hello_world.php
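If you just want a quick smoke test without opening the tutorial, a minimal hello_world.php can be dropped into the mounted code directory like this:

cat > ~/docker-files/code/hello_world.php <<'EOF'
<?php
// minimal test page served by nginx and executed by the php-fpm container
echo "Hello World";
EOF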

References:

  1. https://xen-orchestra.com/docs/docker_support.html
  2. http://geekyplatypus.com/dockerise-your-php-application-with-nginx-and-php7-fpm/

HowTo: Integrate IDempiere with OpenDJ (LDAP)

Using the steps below I was able to improve the integration between IDempiere and OpenDJ. Specifically, I worked on the LDAP-based login feature of IDempiere, so that a user is able to log in with his LDAP user name and password irrespective of where his account sits in the LDAP directory structure.

When a user attempts to log in to IDempiere, an LDAP search is performed against the integrated OpenDJ instance. Once the user name is found by the search, the supplied password is then verified by binding as the matching entry.
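The same two steps can be reproduced by hand with the standard OpenLDAP command-line clients, which is useful when debugging the integration; the host name, base DN and uid below are placeholders:

# step 1: anonymous subtree search for the user name (mirrors getUid() in the code below)
ldapsearch -x -H ldap://directory.example.com -b "dc=example,dc=com" -s sub "(uid=jjanke)" dn
# step 2: bind as the DN returned above with the supplied password (mirrors testBind() below)
ldapwhoami -x -H ldap://directory.example.com -D "uid=jjanke,ou=people,dc=example,dc=com" -W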

The OpenDJ UID field is mapped to the user-name on the IDempiere Login screen. An LDAP password associated with the UID field must also be pre-entered in IDempiere. However, this password can be a dummy value.

The steps below assume that your development environment for IDempiere is already set-up.

Steps:

  1. Update the LDAP.java file in the source code on the development server with the code at the bottom of this post.
  2. Build a jar file for the org.adempiere.org package and place this jar file in the plugins folder on the production IDempiere server.
  3. Update the java.policy file on the production IDempiere server with the contents at the end of this post.
  4. Remember to give a dummy password to every user you want to hook up to LDAP. Even though the LDAP credentials are what actually gets used, IDempiere (probably due to a bug) still requires a regular password to be set as well.

LDAP.java file contents:

/******************************************************************************
* Product: Adempiere ERP & CRM Smart Business Solution
* Copyright (C) 1999-2006 ComPiere, Inc. All Rights Reserved.
* This program is free software; you can redistribute it and/or modify it
* under the terms version 2 of the GNU General Public License as published
* by the Free Software Foundation. This program is distributed in the hope
* that it will be useful, but WITHOUT ANY WARRANTY; without even the implied
* warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
* See the GNU General Public License for more details.
* You should have received a copy of the GNU General Public License along
* with this program; if not, write to the Free Software Foundation, Inc.,
* 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
* You may reach us at: ComPiere, Inc. – http://www.compiere.org/license.html
* 2620 Augustine Dr. #245, Santa Clara, CA 95054, USA or info@compiere.org
*****************************************************************************/
package org.compiere.db;

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;
import org.compiere.util.CLogger;

/**
* LDAP Management Interface
*
* @author Jorg Janke
* @version $Id: LDAP.java,v 1.2 2006/07/30 00:55:13 jjanke Exp $
* @modified Faraz Haider
*/
public class LDAP {

	private static String getUid(String ldapURL, String user) throws Exception {
		DirContext ctx = null;
		Hashtable<String, String> env = new Hashtable<String, String>();
		env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
		env.put(Context.PROVIDER_URL, ldapURL);
		try {
			ctx = new InitialDirContext(env);
		} catch (NamingException e1) {
			// TODO Auto-generated catch block
			e1.printStackTrace();
		}
		String filter = "(uid=" + user + ")";
		SearchControls ctrl = new SearchControls();
		ctrl.setSearchScope(SearchControls.SUBTREE_SCOPE);
		NamingEnumeration answer = ctx.search("", filter, ctrl);

		String dn;
		if (answer.hasMore()) {
			SearchResult result = (SearchResult) answer.next();
			dn = result.getNameInNamespace();
		} else {
			dn = null;
		}
		answer.close();
		return dn;
	}

	private static boolean testBind(String ldapURL, String dn, String password)
			throws Exception {
		DirContext ctx = null;
		Hashtable<String, String> env = new Hashtable<String, String>();
		env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
		env.put(Context.PROVIDER_URL, ldapURL);
		env.put(Context.SECURITY_AUTHENTICATION, "simple");
		env.put(Context.SECURITY_PRINCIPAL, dn);
		env.put(Context.SECURITY_CREDENTIALS, password);

		try {
			ctx = new InitialDirContext(env);
		} catch (javax.naming.AuthenticationException e) {
			return false;
		}
		return true;
	}

/**
* Validate User
*
* @param ldapURL
*            provider url – e.g. ldap://dc.compiere.org
* @param domain
*            domain name = e.g. compiere.org
* @param userName
*            user name – e.g. jjanke
* @param password
*            password
* @return true if validated with ldap
*/
	public static boolean validate(String ldapURL, String domain,
			String userName, String password) {

		String dn = null;
		try {
			dn = getUid(ldapURL, userName);
		} catch (Exception e) {
			// TODO Auto-generated catch block
			e.printStackTrace();
		}

		if (dn != null) {
			/* Found user - test password */
			try {
				if (testBind(ldapURL, dn, password)) {
					log.info("OK: " + userName);
					return true;
				} else {
					log.severe("Authentication failed for user: '" + userName + "'");
					return false;
				}
			} catch (Exception e) {
				// TODO Auto-generated catch block
				e.printStackTrace();
			}
		}
		//else {
		log.severe("User: '" + userName + "' not found.");
		return false;
		//}
	} // validate

/** Logger */
private static CLogger log = CLogger.getCLogger(LDAP.class);

/**
* Test NT
*
* @throws LoginException
*
*             private static void testNT () throws LoginException { try {
*             System.out.println
*             (“NT system —————————-“); NTSystem ntsystem
*             = new NTSystem (); System.out.println (ntsystem);
*             System.out.println (ntsystem.getDomain ());
*             System.out.println (ntsystem.getDomainSID ());
*             System.out.println (ntsystem.getName ()); System.out.println
*             (ntsystem.getUserSID ()); System.out.println
*             (“NT login —————————-“); NTLoginModule
*             ntlogin = new NTLoginModule (); System.out.println (ntlogin);
*             Map<String,String> map = new HashMap<String,String>();
*             map.put (“debug”, “true”); ntlogin.initialize (null, null,
*             null, map); System.out.println (ntlogin.login ()); } catch
*             (LoginException le) { System.err.println
*             (“Authentication attempt failed” + le); } } // testNT
*
*
*             /** testKerberos
* @throws LoginException
*
*             private static void testKerberos () throws LoginException {
*             System.out.println
*             (“Krb login —————————-“);
*             Map<String,String> map = new HashMap<String,String>(); //
*             map.put(“debug”, “true”); // map.put(“debugNative”, “true”);
*             Krb5LoginModule klogin = new Krb5LoginModule ();
*             System.out.println (klogin); map.put (“principal”,
*             “username@compiere.org”); map.put (“credential”, “pass”);
*             klogin.initialize (null, null, null, map); System.out.println
*             (klogin.login ());
*             /******************************************
*             ***************************** ** No krb5.ini file found in
*             entire system Debug is true storeKey false useTicketCache
*             false useKeyTab false doNotPrompt false ticketCache is null
*             KeyTab is null refreshKrb5Config is false principal is jjanke
*             tryFirstPass is false useFirstPass is false storePass is
*             false clearPass is false [Krb5LoginModule] authentication
*             failed Could not load configuration file c:\winnt\krb5.ini
*             (The system cannot find the file specified)
*             javax.security.auth.login.LoginException: Could not load
*             configuration file c:\winnt\krb5.ini (The system cannot find
*             the file specified)
*
*             } // testKerbos /
**/

/**
* Print Attributes to System.out
*
* @param attrs
*/
	@SuppressWarnings("unused")
	private static void dump(Attributes attrs) {
		if (attrs == null) {
			System.out.println("No attributes");
		} else {
			/* Print each attribute */
			try {
				for (NamingEnumeration<? extends Attribute> ae = attrs.getAll(); ae.hasMore();) {
					Attribute attr = ae.next();
					System.out.println("attribute: " + attr.getID());
					/* print each value */
					for (NamingEnumeration<?> e = attr.getAll(); e.hasMore(); System.out
							.println("    value: " + e.next()))
						;
				}
			} catch (NamingException e) {
				e.printStackTrace();
			}
		}
	} // dump

/**
* Test
*
* @param args
*            ignored
*/
	public static void main(String[] args) {
		try {
			validate("ldap://directory.my.company.pk", "dc=my,dc=company,dc=pk", "faraz",
					"ikeepforgetting");
		} catch (Exception e) {
			e.printStackTrace();
		}
	} // main

} // LDAP

Lines to add in java.policy file:

// This permission is needed to connect to the LDAP server in order to authenticate users.
permission java.net.SocketPermission "IP_Address_Of_LDAP_Server:LDAP_Port", "accept,connect,resolve";
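For context, the permission normally sits inside a grant block in java.policy; the IP address and port below are placeholders (389 is the default LDAP port):

grant {
    // allow the IDempiere JVM to open a socket to the OpenDJ server
    permission java.net.SocketPermission "192.0.2.10:389", "accept,connect,resolve";
};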

Load Balance PHP Application with Apache2.4

Using the setup given below you will be able to load-balance across multiple Apache web servers hosting a PHP session-based application.

A brief conceptual overview: there are two roles involved in setting up the load balancer, the gateway and the worker. The gateway cannot be a worker. The gateway hosts a special configuration in its Apache config file, whereas the worker web servers do not need any extra configuration in theirs. The gateway intercepts requests from the user and redirects them to one of the workers; the response received from the worker is forwarded back to the user. The gateway should not host the web application itself, which must be hosted on the worker web servers.

Before you begin you will have to enable the following Apache modules:

  1. mod_proxy
  2. mod_proxy_http
  3. mod_proxy_balancer
  4. mod_headers
  5. mod_lbmethod_bytraffic
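On Debian/Ubuntu the modules can be enabled with a2enmod; a sketch of the commands (mod_proxy_balancer additionally needs a slotmem provider, hence slotmem_shm, and Apache must be restarted afterwards):

sudo a2enmod proxy proxy_http proxy_balancer slotmem_shm headers lbmethod_bytraffic
sudo systemctl restart apache2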

Once you have done that paste the following config in your Apache config file on the gateway webserver:

ProxyRequests Off
ProxyPreserveHost On

Header add Set-Cookie "PHPSESSIONID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED

<Proxy balancer://my_cluster>
    BalancerMember http://server1.yourdomain.com:80 route=01
    BalancerMember http://server2.yourdomain.com:80 route=02
    Require all granted
    ProxySet lbmethod=bytraffic stickysession=PHPSESSIONID nofailover=On timeout=600
</Proxy>

ProxyPass / balancer://my_cluster/
ProxyPassReverse / balancer://my_cluster/

Note that you will have to configure your DNS to resolve server1.yourdomain.com and server2.yourdomain.com such that they point to the IPs of the two worker webservers. You may instead just give the IPs of the two worker webservers.
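If you do not want to touch DNS at all, two entries in /etc/hosts on the gateway are enough for the BalancerMember names to resolve; the IPs below are placeholders for your worker web servers:

echo "192.0.2.11 server1.yourdomain.com" | sudo tee -a /etc/hosts
echo "192.0.2.12 server2.yourdomain.com" | sudo tee -a /etc/hosts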

And that’s a wrap!

 

Code to create multiple zip archives from a set of files with a size limit per archive – PHP CLI Script

You will need to install the PHP-ZIP package and possibly the libzip package. I ran this code on Ubuntu using PHP CLI.
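On Ubuntu the extension can usually be installed straight from the repositories; note the package name varies slightly with the PHP version (php7.0-zip on older releases):

sudo apt-get install php-zip     # pulls in libzip as a dependency on most releases
php -m | grep -i zip             # confirm the zip extension is loaded for the CLI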

Code has been copied and modified from: http://php.net/manual/en/class.ziparchive.php

<?php
//Start of class
class xZip {

    public function __construct() {}

    public function __destruct() {}

    public function zip($source = NULL, $destination = "./", $max_size_of_1_zip_archive) {
        if (!$destination || trim($destination) == "") {
            $destination = "./";
        }
        $input = array();
        $this->_rglobRead($source, $input);
        //$maxinput = count($input);
        //$splitinto = (($maxinput / $limit) > round($maxinput / $limit, 0)) ? round($maxinput / $limit, 0) + 1 : round($maxinput / $limit, 0);
        //Ensure that a max of $max_size_of_1_zip_archive MB are added to each zip archive
        $total_number_of_images = sizeof($input);
        echo "Total Number of Images:".$total_number_of_images."\n";
        $current_image_array_counter_position = 0;
        $beginning_image_array_index_for_archive = 0;
        $archive_counter = 0;
        $current_archive_size = 0;
        while ($current_image_array_counter_position < $total_number_of_images) {
            $current_image_file_size = filesize($input[$current_image_array_counter_position]);
            $current_image_file_size = ($current_image_file_size / 1024); //to convert from bytes to kilobytes.
            /*echo "beginning_image_array_counter_position value: ".$beginning_image_array_index_for_archive."\n";
            echo "current_image_array_counter_position value: ".$current_image_array_counter_position."\n";
            */
            //sleep(1);
            if (($current_archive_size + $current_image_file_size) > ($max_size_of_1_zip_archive * 1024)) {
                //current archive is full: write it out and start a new one from the current position
                $this->_zip(array_slice($input, $beginning_image_array_index_for_archive, ($current_image_array_counter_position - $beginning_image_array_index_for_archive), true), $archive_counter++, $destination);
                $beginning_image_array_index_for_archive = $current_image_array_counter_position;
                echo "Archive Number: ".($archive_counter)."\n";
                echo "Finished Archive Size: ".$current_archive_size."\n\n";
                $current_archive_size = 0;
            } else {
                $current_archive_size += $current_image_file_size;
                $current_image_array_counter_position++;
                echo "Current Image File Size: ".$current_image_file_size."\n";
                echo "Current Archive Size: ".$current_archive_size."\n";
                echo "Number of Files in Archive: ".($current_image_array_counter_position - $beginning_image_array_index_for_archive)."\n";
            }
        }
        if ($current_archive_size > 0) {
            //write out the last (partially filled) archive
            $this->_zip(array_slice($input, $beginning_image_array_index_for_archive, ($current_image_array_counter_position - $beginning_image_array_index_for_archive), true), $archive_counter++, $destination);
            $beginning_image_array_index_for_archive = $current_image_array_counter_position;
            echo "Archive Number: ".($archive_counter)."\n";
            echo "Finished Archive Size: ".$current_archive_size."\n";
        }
        //echo "size of input array: ".(sizeof($input))."\n";
        //echo "value of begining of array index: ".$beginning_image_array_index_for_archive."\n";
        //echo "value of current of array index: ".$current_image_array_counter_position."\n";
        unset($input);
        return;
    }

    public function unzip($source, $destination) {
        @mkdir($destination, 0777, true);
        foreach ((array) glob($source . "/*.zip") as $key => $value) {
            $zip = new ZipArchive;
            if ($zip->open(str_replace("//", "/", $value)) === true) {
                $zip->extractTo($destination);
                $zip->close();
            }
        }
    }

    private function _zip($array, $part, $destination) {
        $zip = new ZipArchive;
        @mkdir($destination, 0777, true);
        if ($zip->open(str_replace("//", "/", "{$destination}/partz{$part}.zip"), ZipArchive::CREATE) === true) {
            foreach ((array) $array as $key => $value) {
                //store the file in the archive under its base name, without the directory prefix
                $value_of_filename_in_ziparchive = substr($value, strrpos($value, '/') + 1);
                //echo (sizeof($array))." ".$part." ".$value_of_filename_in_ziparchive."\n";
                $zip->addFile($value, $value_of_filename_in_ziparchive);
            }
            $zip->close();
        }
    }

    private function _rglobRead($source, &$array = array()) {
        if (!$source || trim($source) == "") {
            $source = ".";
        }
        //recurse into sub-directories first, then collect the files at this level
        foreach ((array) glob($source . "/*/") as $key => $value) {
            $this->_rglobRead(str_replace("//", "/", $value), $array);
        }
        foreach ((array) glob($source . "*.*") as $key => $value) {
            $array[] = str_replace("//", "/", $value);
        }
    }
}
//End of Class
//Parent process code:
//Defining session specific variables below:
$max_size_of_1_zip_archive = 5; //This value is in MBs.
//Note: PHP does not expand "~", so the paths are built from the HOME environment variable instead.
$input_path_of_image_files = getenv("HOME") . "/source_pictures/";
$output_path_of_zip_archives = getenv("HOME") . "/destination_archives/";
//Invoking class xZip:
$zip = new xZip();
$zip->zip($input_path_of_image_files, $output_path_of_zip_archives, $max_size_of_1_zip_archive);
?>
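Saved as, say, make_archives.php (the file name is arbitrary), the script is run with the PHP CLI and writes partz0.zip, partz1.zip, and so on into the destination directory:

php make_archives.php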