AkbarAhmed.com

Engineering Leadership

Effective meetings are an important element in running a high-performance organization. Meetings provide a high-fidelity and efficient means to quickly communicate, collaborate, and coordinate. However, too often meetings lack the structure necessary to drive the desired outcomes.

Meeting Invite

The following provides a high-level outline of the primary sections to include in meeting invites.

  1. Agenda [required]: Have an agenda for the meeting that defines what topic(s) are to be discussed.
    • Briefing Document [optional]: A 1- to 6-page memo that introduces attendees to the subject matter covered in the Agenda.
  2. Desired Outcomes [required]: Why is this meeting being called and what outcomes define a successful meeting?
  3. Associated Documents [optional]: Links to any documents that may be referenced during the meeting.

The Meeting

The meeting is the main show. If you organized the meeting, then you’ll normally assume the role of moderator to help keep everyone on topic.

Meeting Documentation

Meetings represent a significant investment of people’s time. Therefore, make the best use of this time by generating meeting notes.

  1. Meeting Notes [required]: Either take notes yourself or assign the task to another attendee.
    • Action Items [required]: Write down each Action Item that has been created as a result of the meeting and who is responsible for each Action Item.



This article is a bit dated given that it was written in 2014.

The traditional 3-tier architecture is dead, or at least it’s dying quickly. In a traditional 3-tier web architecture, the tiers were defined as:

  1. Client: HTML, CSS and JavaScript
  2. Server: A server-side framework in Java, Python, Ruby, PHP, Node.js/JavaScript, etc.
  3. Database: A relational database including stored procedures inside the database

Each tier had a specific job to do:
  • Client: render the UI
  • Server: business logic (controller) plus generate updates to the UI (view) based on queries run against the database (model)
  • Database: data access and storage

So what’s changing? Literally, everything. Every layer of the stack is undergoing a massive change that necessitates a change to the architecture.

Client to UI/UX

TL;DR: The client layer has evolved from static HTML to advanced, thick clients built in JavaScript. These new JavaScript apps require a UI/UX API to provide a portion of their functionality. Further, mobile platforms often share the same UI/UX API.

The web client has evolved from HTML, CSS, and JavaScript to JavaScript, CSS, and HTML (where order indicates the importance of the code in delivering a high-quality user experience).

JavaScript-heavy clients have become the norm and are table stakes for today’s modern web apps. The emergence of JavaScript in thick web clients (aka Single Page Applications, or SPAs) has given rise to a growing number of advanced UI JavaScript developers who must deliver increasingly sophisticated functionality in their apps. As a result, modern apps demand more from UI/UX developers, and this has driven the need for UI/UX engineers to have control of their own server-side API.

Node.js has emerged as the go-to solution for UI/UX server-side API development, although other scripting languages such as Python and Ruby remain popular choices. Essentially, a Node.js API (or equivalent) is a thin API layer that represents the Model portion of the older, monolithic server-side frameworks.

Further, mobile development for iOS and Android requires a UI/UX API. Consolidating this new API requirement within the UI/UX team allows all customer-facing application development to move at a faster pace.

The last big driver that necessitates the creation of a UI/UX API is the fact that the UI/UX API calls a multitude of other internal APIs (for various platform services or data services) and/or external APIs. The UI/UX API tier helps to consolidate these various API calls into a single API endpoint that can be called from JavaScript, Objective-C/Swift, or Java.
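
To make this concrete, here is a minimal sketch of such an aggregation endpoint in Node.js (assuming Node 18+ for the built-in fetch; the /api/dashboard route and the internal service URLs are hypothetical and not taken from this article):

// Minimal sketch of a UI/UX API endpoint that fans out to internal services.
// The internal service URLs and the /api/dashboard route are illustrative only.
const http = require('http');

const PROFILE_SERVICE = 'http://profile-service.internal/api/profile'; // hypothetical internal API
const ORDERS_SERVICE = 'http://orders-service.internal/api/orders';    // hypothetical internal API

const server = http.createServer(async (req, res) => {
  if (req.method === 'GET' && req.url === '/api/dashboard') {
    try {
      // Call the internal APIs in parallel, then merge the results into a
      // single response shaped for the web and mobile clients.
      const [profile, orders] = await Promise.all([
        fetch(PROFILE_SERVICE).then((r) => r.json()),
        fetch(ORDERS_SERVICE).then((r) => r.json()),
      ]);
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ profile, orders }));
    } catch (err) {
      res.writeHead(502, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: 'upstream service unavailable' }));
    }
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(3000);

Whether this layer is built with plain Node.js, Express, or an equivalent, the point is the same: the UI/UX team owns a thin server-side API that shapes upstream data into exactly what its JavaScript, Objective-C/Swift, and Java clients need.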

Server to Services

TL;DR: The older server-side MVC monoliths have been broken apart into specialized functions. The Model layer has been pushed down into Data Services, the View layer has been pushed up into the UI/UX team, and the Controller is now an entire Services API layer that provides common functionality used by multiple apps.

The traditional server tier has been broken apart into specialized functions. Traditional server-side frameworks consisted of an MVC (Model, View, Controller) architecture. These older applications were monolithic code bases that did everything from querying the data layer and running business logic to rendering UI components.

As discussed above, the View portion of server-side MVC has been taken over by the new UI/UX API server.

The traditional MVC server-side frameworks have given way to a more specialized business logic layer, which consists of APIs capable of handling various service-oriented functions. The new Services APIs consist of common Platform Services plus reusable app Services APIs.

The Model layer, or server-side data access layer, has been pushed into a new layer known as data services. Data Services is the newly evolved data team. We’ll discuss the data layer more below.

While it may appear that the server tier has been reduced in scope, the reality is that the Services API layer is the core infrastructure team. Neither the UI/UX layer nor the Data Services layer would be able to develop functionality as quickly as they do without the platform and shared services delivered via the Services API tier.
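
As a deliberately tiny illustration of a shared service in this tier (the service name, port, and payload are hypothetical, and a real implementation would sit behind authentication, service discovery, and so on):

// Hypothetical shared "profile" platform service owned by the Services tier.
// Both the UI/UX API and other internal apps could call this endpoint.
const http = require('http');

const server = http.createServer((req, res) => {
  if (req.method === 'GET' && req.url === '/api/profile') {
    // Real business logic would live here; a canned response keeps the sketch small.
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ id: 42, name: 'Example User', plan: 'pro' }));
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(4000);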

Database to Data Services

TL;DR: Data storage and querying are undergoing a revolution.

Much as the client layer has undergone an explosion of capability, the data layer now consists of a myriad of technologies.

Life was easy for the data team when relational databases were the only option. RDBMSs provide a fully integrated data environment, complete with the SQL query language, an integrated query engine, stored procedures, logical abstractions, and physical storage.

However, the modern data layer consists of a multitude of specialized data components that often separate the query language, the query engine, logical abstractions and physical storage.

Let’s use Cassandra as a quick example. In Cassandra, data engineers write queries using CQL. However, to actually run CQL the data engineer must embed it in Java, Python, or another supported language. So the data engineer now requires an execution environment for their query code, and they must give the Services team and the UI/UX team access to the query layer. The obvious solution is for the Data Services team to run their own Data Services API layer, which is exactly what has happened. Contrast this with an RDBMS, where a stored procedure is embedded entirely inside the database and the API is the stored procedure’s function signature.
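
As a sketch of what such a Data Services API can look like (using the DataStax Node.js driver here to stay consistent with the other examples; the keyspace, table, and route are hypothetical):

// Minimal sketch of a Data Services API in front of Cassandra.
// Assumes the DataStax Node.js driver (npm install cassandra-driver);
// the keyspace, table, and column names are illustrative only.
const http = require('http');
const cassandra = require('cassandra-driver');

const client = new cassandra.Client({
  contactPoints: ['127.0.0.1'],
  localDataCenter: 'datacenter1',
  keyspace: 'store',
});

const server = http.createServer(async (req, res) => {
  const match = req.url.match(/^\/data\/orders\/([^\/]+)$/);
  if (req.method === 'GET' && match) {
    try {
      // The CQL is embedded here, inside the Data Services tier; the Services
      // and UI/UX teams only ever see the HTTP endpoint.
      const result = await client.execute(
        'SELECT order_id, total FROM orders WHERE customer_id = ?',
        [match[1]],
        { prepare: true }
      );
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify(result.rows));
    } catch (err) {
      res.writeHead(500, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: 'query failed' }));
    }
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(5000);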

Summary

The traditional 3-tier architecture of client, server, and database is being replaced by new tiers that more closely align with modern applications:
  • UI/UX
  • Services
  • Data Services

The UI/UX layer now contains a full stack of its own, including rich, thick clients written in JavaScript plus its own server-side API.

The Services layer is now more specialized as the view layer has been pushed to UI/UX and the model layer has been pushed to Data Services. This enables the Services layer to focus on what it does best: writing advanced business logic and providing platform services that are common across multiple apps.

Data Services, which was previously confined to relational databases, now runs multiple data storage technologies, just one of which is a relational database. Data Services now runs its own API layer as well.

These changes align well with modern application development and help accelerate development cycles. UI/UX can deliver client functionality faster by leveraging the core infrastructure provided by the Services team, and it owns its own server-side API so it can quickly integrate the data provided by the Data Services team.

Overview

WebM is a free and open video format designed for HTML5. WebM is an open source project sponsored by Google. You can learn more at the WebM website.

Install Miro Video Converter

  1. Open http://www.mirovideoconverter.com.
  2. Click Download. When the download is finished double-click MiroVideoConverter_Setup.msi.
  3. Click Next.
  4. Select Custom Installation.
    1. Uncheck Install the AVG toolbar and set AVG Secure Search as my default search provider.
    2. Uncheck Set AVG Secure Search as my homepage and newly opened tabs.
  5. Click Next.
  6. Click Finish.

Convert mp4 video to webm

  • Open the Miro Video Converter via your Start menu.
  • Click Choose Files… in the Miro UI.
  • Find an mp4 file, or multiple files, on your hard drive. Click Open.
  • Click Format, select Video, then click WebM HD (assuming you want to create an HD video).
  • Click Convert to WebM HD.

Introduction

Debugging a Play Framework 2.0 application with Eclipse is exceptionally easy to set up. Importantly, using the debugger is integral to developing high-quality, complex applications, as it provides an easy way to step into your code.

YouTube Version

I have created a YouTube video that shows the steps below. You can watch the YouTube video at:

How to attach the Eclipse debugger to a Play Framework 2.0 application (YouTube)

Note: Change the playback quality to 720p with a large window for the best display.

Configure Play

Note: Prototyper is the name of a project that I use for prototyping code. Replace Prototyper with the name of the project that you want to debug.

Open a command prompt (Linux) or PowerShell (Windows), then enter the following commands:

cd Prototyper
play clean compile
play debug run

Configure Eclipse

  • Open Eclipse.
  • Select the project (e.g., Prototyper) in the Navigator in the left pane.
  • Select the Run menu, click Debug Configurations…
  • In the Debug Configurations dialog box, double-click on Remote Java Application in the left pane.
  • In the right pane, a new remote Java application configuration will be created for you. Change the Port to 9999.
  • Click Apply.
  • Click Debug.
  • Add a breakpoint in your Java code by pressing Ctrl + Shift + B.
  • Open a web browser to http://localhost:9000 and navigate to the page where the breakpoint will be activated.

I’m a relatively recent convert from Subversion to Git, so getting to know the git equivalent of an svn command is challenging.

Reverting a file in git actually uses the checkout command.

For example, if you want to revert your uncommitted changes for a file named package/File.java, then you would use the following command:

git checkout package/File.java
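
If the path could be mistaken for a branch name, the same command can be written with an explicit separator, and passing a path of . discards the uncommitted changes to every tracked file under the current directory:

git checkout -- package/File.java
git checkout -- .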

The following is a repost of my answer to a question on LinkedIn, but I thought it may prove useful to people evaluating Hadoop distributions.

The following is a substantially oversimplified set of choices (in alphabetical order):

Amazon: Apache Hadoop provided as a web service. It’s a good solution if your data is already collected on Amazon, as it saves you the trouble of uploading gigs and gigs of data.

Apache: Apache Hadoop is the core code base upon which the various distributions are built.

Cloudera: CDH3 is based on Hadoop 1 (the current stable version) and CDH4 is based on Hadoop 2. CDH is based on Apache Hadoop. The only piece that’s not open source (AFAIK) is Cloudera Manager, which allows you to install up to 50 nodes for free before you go to the paid version. Cloudera is an extremely popular solution that runs on a wide variety of operating systems.

Hortonworks: HDP1 is 100% open source and is based on Hadoop 1. HDP is designed to run on RedHat/CentOS/Oracle Linux.

IBM: IBM BigInsights adds the GPFS filesystem to Hadoop, and is a good choice if your company is already an IBM shop and you need to integrate with other IBM solutions. A free version is available as InfoSphere BigInsights Basic Edition. Basic Edition does not include all of the value-add features found in Enterprise Edition (such as GPFS-SNC).

MapR: MapR uses a proprietary file system plus additional changes to Hadoop that address issues with the platform. They have a shared nothing architecture for the NameNode and JobTracker. MapR M3 is available for free, while M5 is a paid version with more features (such as the shared nothing NameNode). People who have used MapR tend to like it.

Once you start to use Hadoop in your day-to-day business operations, you’ll quickly find that uptime is an important consideration. No one wants to explain to the CEO why a report is not delivered. While most of Hadoop’s architecture is designed to work in the face of node failure (such as the DataNodes), other components such as the NameNode must be configured with an HA option.

The following is a quick and dirty list of Hadoop HA options:

  • Cloudera CDH4 (free)
    • Uses shared storage
  • Hortonworks (free)
    • Option 1: Use Linux HA (Uses shared storage)
    • Option 2: Use VMware
  • IBM BigInsights ($$$)
    • GPFS-SNC: Provides a shared nothing HA option
  • MapR M5 ($$$)
    • Shared nothing HA for both NameNode and JobTracker


If you’re brave, you can also apply Facebook’s patches to Apache Hadoop to get an “Avatar” based HA option. This is what FB uses in production.
