AkbarAhmed.com

Engineering Leadership

Coronavirus has forced the world to experiment with Work from Home (WFH) and Remote work at scale. Many people who would otherwise choose to work from an office have been thrust into WFH while simultaneously becoming home-schooling teachers. Due to this forced change, a heated debate has been raging over the advantages of WFH/Remote work vs. working in an office/Face-to-Face (F2F). This debate is not new.

For the first time, WFH is the default. In other words, months ago the assumption was that people worked in an office and met F2F for sales kickoffs/QBRs/offsites/etc., and the question was, “What are the benefits and risks of WFH and remote? Can we do this event remotely, and will it result in reduced output?”. However, as nearly everyone is now WFH, the question is, “What are the benefits of meeting F2F? Is there an efficiency gain from meeting in person, can any productivity gain be quantified, and does it justify the costs and health risks?”.

As the initial wave of the coronavirus pandemic recedes, many managers will be tempted to assume that F2F is better and that CFOs will approve new budget for travel by default. Yet, it’s reasonable to assume that disciplined CFOs will look at the output achieved during the pandemic and ask, “What am I buying with this new travel spend? What tangible benefits will we achieve? Is this worth the additional expense?”.

Can you quantify the efficiency gains, increased output, or other benefits from meeting Face-to-Face (F2F)? Do these benefits justify an increase in spend?

Never before has the assumption that F2F results in higher output been tested at scale. As disciplined managers, we must set aside our assumptions and prejudices about remote and F2F work. Let others engage in religious debates that presume one is right and the other is wrong.

If we are pragmatic then it’s safe to start from a position that there are benefits to each model and neither is ipso facto better than the other.

The pandemic has forced organizations to make complex processes work remotely when previously it was thought that these processes and events required people to work F2F. In technology companies, we have been forced to make sales kickoffs, Quarterly Business Reviews (QBRs), quarterly planning and other offsites work remotely. In other words, the question to ask is, “Did we deliver the same output from our sales kickoff, QBR, quarterly planning and other offsites as we did with our previous in-person events?”. Remove feelings from your thought process and approach the question objectively.

“Did we deliver the same output from our sales kickoff, QBR, quarterly planning and other offsites as we did with our previous in-person events?”

Many companies have successfully executed a remote sales kickoff, QBR and quarterly planning in Q2 2020. So, what are the benefits of increasing travel spend? Will sales growth be higher? Will planning F2F result in better product development? Perhaps most interesting is that many organizations have gone remote with no loss in revenue, efficiency, or output. So why do we spend substantial amounts of money to force hundreds or thousands of employees to travel to a central location?

Two obvious costs of F2F meetings are the direct travel costs and the lost productivity due to travel. Travel costs are a direct, tangible and quantifiable expense. Lost productivity is also relatively easy to measure. For example, a 3-day offsite from Tuesday to Thursday results in lost productivity on Monday and Friday (travel days). Using a back-of-the-napkin calculation, we can determine that an employee who travels to an offsite each quarter is offline for 8 days per year (2 travel days per quarter x 4 quarters). There are 250 to 260 working days per year on average, so those 8 days of travel represent a loss of roughly 3% of the employee’s working days. So, if we’re obtaining the same output from meeting remotely as F2F, but increasing direct travel expenses and losing roughly 3% of each attendee’s working days, then what are we gaining from meeting in person?
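The back-of-the-napkin arithmetic above can be sketched in a few lines of shell; the figures are the assumptions from the text (one offsite per quarter, two travel days each, 260 working days):

```shell
# Back-of-the-napkin cost of quarterly offsite travel
travel_days_per_offsite=2   # Monday and Friday around a Tue-Thu offsite
offsites_per_year=4         # one offsite per quarter
working_days=260            # upper end of the 250-260 range

lost_days=$((travel_days_per_offsite * offsites_per_year))
pct=$(awk -v l="$lost_days" -v w="$working_days" 'BEGIN { printf "%.1f", 100 * l / w }')
echo "${lost_days} travel days/year, ~${pct}% of working days"
```

Using 250 working days instead of 260 nudges the figure slightly above 3%, which is why "roughly 3%" is the honest way to state it.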

So, the question to ask now is, “What are the benefits of meeting F2F? What am I buying with this travel budget? Can we quantify the benefits of face-to-face interaction? Can we deliver a material improvement in output by meeting in person? If there is an increase in output, then is the improvement large enough to justify the expense?”

Perhaps meeting in person is about social interaction and not work. Clearly, we all feel there is a benefit to knowing our co-workers, to socializing, to getting to know each other. This has value, but how much are we willing to pay for the feeling that we work better together, even if there’s no objective evidence to back up that feeling?

What is the value of social interaction at work and how much are we willing to pay for it?

As we move past the first wave of the pandemic, it’s reasonable to assume that many companies will avoid the extreme positions of returning to the prior status quo of all F2F meetings while also avoiding the new status quo of all remote meetings. Perhaps, the ideal balance between the raw efficiency of remote and the human / social needs served by F2F will result in some type of split such as alternating remote and F2F events.

Choosing an operating model is about the overall output of the organization which must also factor in employee satisfaction. Ultimately, each organization must ask itself what are the differences in output between remote and F2F? Which differences can be quantified and which are intangible? And what are the cost implications of remote vs. F2F events? Ceteris paribus, corporations will choose the most efficient means of production to satisfy a market opportunity. Which you choose for your business will have an impact on output, costs, profitability, and employee satisfaction.

The rapid transition to Work From Home (WFH) while simultaneously homeschooling the kids is stress inducing. Throw in a global pandemic and it’s understandable why so many people default to pointing out challenges with the current environment. Despite the challenges, some benefits have emerged that are worth reflecting on.

For many of us, the current environment has refocused our attention toward what is important in life. Many of these rediscovered activities may stick when the pandemic is in our collective rear-view mirror. At a minimum, it’s worth focusing on what’s good in life even in the midst of a pandemic.

Spending Time with Family

Nearly everyone I have spoken with has been spending more time with family during the COVID-19 pandemic. While shelter-in-place orders are challenging, having more time to spend with a significant other and the kids has been positive.

There is a growing consensus that people want to continue to have more quality family time when the pandemic is done. While WFH is tough for those new to it, the lack of a commute has opened up free time to spend with the people who matter most.

Another important consideration is that children are benefiting from increased parental attention. At the same time, children are beginning to expect more time and attention from their parents. Curbing this attention when the pandemic ends may leave a palpable gap and have a net negative impact on children.

Catching up with Friends

Have you called old friends recently? The answer for nearly everyone is an emphatic yes. The pandemic has driven people to touch base with old friends. Who doesn’t enjoy catching up with old friends? Hopefully, we’ll all continue to do so even after the pandemic passes.

Dinner with the Family

A few months ago many of us were too busy to sit down and eat dinner as a family. Today, an increasing number of people have rediscovered the tradition of enjoying dinner as a family. The people I speak with have found joy in connecting with family every night. Dinner time was traditionally a time when everyone would put aside the day’s burdens and reconnect with one another. Nearly everyone views this one as a keeper.

Summary

While the pandemic is definitely a net negative, there are also some positives worth noticing. In general, no one wants to spend time with people who are downers. So, when asked “How are you doing?”, I like to mention some of the positives of the current environment. Connecting with friends and family and spending more time with the kids are all things that have been reinvigorated.

Effective meetings are an important element in running a high-performance organization. Meetings provide a high-fidelity and efficient means to quickly communicate, collaborate, and coordinate. However, too often meetings lack the structure necessary to drive the desired outcomes.

Meeting Invite

The following provides a high-level outline of the primary sections to include in meeting invites.

  1. Agenda [required]: Have an agenda for the meeting that defines what topic(s) are to be discussed.
    • Briefing Document [optional]: A 1-to-6-page memo that introduces attendees to the subject matter covered in the Agenda.
  2. Desired Outcomes [required]: Why is this meeting being called and what outcomes define a successful meeting?
  3. Associated Documents [optional]: Links to any documents that may be referenced during the meeting.
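As a concrete illustration, here is a minimal invite following the outline above (the topics, timings, and links are hypothetical):

```
Subject: Q3 Planning Review (60 min)

Agenda:
  1. Review Q2 results (15 min)
  2. Propose Q3 priorities (30 min)
  3. Open discussion (15 min)

Desired Outcomes:
  - Agreement on the top 3 Q3 priorities
  - An owner assigned to each priority

Associated Documents:
  - Q2 results summary (link)
  - Draft Q3 priority list (link)
```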

The Meeting

The meeting is the main show. If you organized the meeting, then you’ll normally assume the role of moderator to help keep everyone on topic.

Meeting Documentation

Meetings consume a significant investment of people’s time. Therefore, make the best use of this time by generating meeting notes.

  1. Meeting Notes [required]: Either take notes yourself or assign the task to another attendee.
    • Action Items [required]: Write down each Action Item that has been created as a result of the meeting and who is responsible for each Action Item.

 


This article is a bit dated given that it was written in 2014.

The traditional 3-tier architecture is dead, or at least it’s dying quickly. In a traditional 3-tier web architecture the tiers were defined as:

  1. Client: HTML, CSS and JavaScript
  2. Server: A server-side framework in Java, Python, Ruby, PHP, Node.js/JavaScript, etc.
  3. Database: A relational database including stored procedures inside the database

Each tier had a specific job to do:

  • Client: render the UI
  • Server: business logic (controller) plus generate updates to the UI (view) based on queries run against the database (model)
  • Database: data access and storage

So what’s changing? Literally, everything. Every layer of the stack is undergoing a massive change that necessitates a change to the architecture.

Client to UI/UX

TL;DR: The client-layer has evolved from static HTML to advanced, thick clients composed of JavaScript. These new JavaScript apps require a UI/UX API to provide a portion of their functionality. Further, mobile platforms often share the same UI/UX API.

The web client has evolved from HTML, CSS, and JavaScript to JavaScript, CSS, and HTML (where order indicates the importance of the code in delivering a high-quality user experience).

JavaScript heavy clients have become the norm and are table stakes for today’s modern web apps. The emergence of JavaScript in thick web clients (aka Single Page Applications, or SPAs) has given rise to a larger number of advanced UI JavaScript developers who must deliver increasingly advanced functionality in their apps. As a result, modern apps demand more from UI/UX developers and this has driven the need for UI/UX engineers to have control of their own server-side API.

Node.js has emerged as the go-to solution for UI/UX server-side API development, although other scripting languages such as Python and Ruby remain popular choices. Essentially, a Node.js API (or equivalent) is a thin API layer that represents the Model portion of the older, monolithic server-side frameworks.

Further, mobile development for iOS and Android requires a UI/UX API. Consolidating this new API requirement within the UI/UX team allows all customer-facing application development to move at a faster pace.

The last big driver that necessitates the creation of a UI/UX API is the fact that the UI/UX API calls a multitude of other internal APIs (for various platform services or data services) and/or external APIs. The UI/UX API tier helps to consolidate these various API calls into a single API endpoint that can be called from JavaScript, Objective-C/Swift, or Java.

Server to Services

TL;DR: The older server-side MVC monoliths have been broken apart into specialized functions. The Model layer has been pushed down into Data Services, the View layer has been pushed up into the UI/UX team, and the Controller is now an entire Services API layer that provides common functionality used by multiple apps.

The traditional server tier has been broken apart into specialized functions. Traditional server-side frameworks consisted of an MVC architecture (Model, View, Controller). These older applications were monolithic code bases that did everything from querying the data layer and running business logic to rendering UI components.

As discussed above, the View portion of server-side MVC has been taken over by the new UI/UX API server.

The traditional MVC server-side frameworks have given way to a more specialized business logic layer, which consists of APIs capable of handling various service-oriented functions. The new Services APIs consist of common Platform Services plus reusable app Services APIs.

The Model layer, or server-side data access layer, has been pushed into a new layer known as data services. Data Services is the newly evolved data team. We’ll discuss the data layer more below.

While it may appear that the server tier has been reduced in scope, the reality is that the Services API layer is the core infrastructure team. Neither the UI/UX layer nor the Data Services layer would be able to develop functionality as quickly as they do without the platform and shared services delivered via the Services API tier.

Database to Data Services

TL;DR: Data storage and querying are undergoing a revolution.

Much as the client layer has undergone an explosion of capability, the data layer now consists of a myriad of technologies.

Life was easy for the data team when relational databases were the only option. RDBMSs provide a fully integrated data environment, complete with the SQL query language, an integrated query engine, stored procedures, logical abstractions and physical storage.

However, the modern data layer consists of a multitude of specialized data components that often separate the query language, the query engine, logical abstractions and physical storage.

Let’s use Cassandra as a quick example. In Cassandra, data engineers write queries using CQL. However, to actually run CQL the data engineer must embed the CQL in Java, Python or another supported language. So, the data engineer now requires an execution environment for their query code and must give the Services team and the UI/UX team access to the query layer. The obvious solution is for the Data Services team to run their own Data Services API layer, which is exactly what has happened. Contrast this with an RDBMS, where a stored procedure is embedded inside the database and the API is the stored proc’s function signature.

Summary

The traditional 3-tier architecture of client, server and database is being replaced by new tiers that more closely align with modern applications:
– UI/UX
– Services
– Data Services

The UI/UX layer now contains a full stack of its own, including rich, thick clients written in JavaScript plus its own server-side API.

The Services layer is now more specialized as the view layer has been pushed to UI/UX and the model layer has been pushed to Data Services. This enables the Services layer to focus on what it does best, which is write advanced business logic and provide platform services that are common across multiple apps.

Data Services, which was previously confined to relational databases, now runs multiple data storage technologies, just one of which is a relational database. Data Services now runs its own API layer as well.

These changes align well with modern application development and help accelerate development cycles. UI/UX can deliver client functionality faster by leveraging the core infrastructure provided by the Services team, and owns its own server-side API to quickly integrate the data provided by the Data Services team.

Overview

WebM is a free and open video format designed for HTML5. WebM is an open source project sponsored by Google. You can learn more at the WebM website.

Install Miro Video Converter

  1. Open http://www.mirovideoconverter.com.
  2. Click Download. When the download is finished double-click MiroVideoConverter_Setup.msi.
  3. Click Next.
  4. Select Custom Installation.
    1. Uncheck Install the AVG toolbar and set AVG Secure Search as my default search provider.
    2. Uncheck Set AVG Secure Search as my homepage and newly opened tabs.
  5. Click Next.
  6. Click Finish.

Convert MP4 video to WebM

  • Open the Miro Video Converter via your Start menu.
  • Click Choose Files… in the Miro UI.
  • Find an mp4 file, or multiple files, on your hard drive. Click Open.
  • Click format, select Video, click WebM HD (assuming you want to create an HD video).
  • Click Convert to WebM HD.

Introduction

Debugging a Play Framework 2.0 application with Eclipse is exceptionally easy to set up. Importantly, using the debugger is integral to developing high-quality, complex applications, as it provides an easy way to step into your code.

YouTube Version

I have created a YouTube video that shows the steps below. You can watch the YouTube video at:

How to attach the Eclipse debugger to a Play Framework 2.0 application (YouTube)

Note: Change the playback quality to 720p with a large window for the best display.

Configure Play

Note: Prototyper is the name of a project that I use for prototyping code. Replace Prototyper with the name of the project that you want to debug.

Open a command prompt (Linux) or PowerShell (Windows), then enter the following commands:

cd Prototyper
play clean compile   # remove old build artifacts and recompile
play debug run       # "debug" opens a debug listener on port 9999 before "run" starts the app

Configure Eclipse

  • Open Eclipse.
  • Select the project (ex. Prototyper) in Navigator in the left pane.
  • Select the Run menu, click Debug Configurations…
  • In the Debug Configurations dialog box, double-click on Remote Java Application in the left pane.
  • In the right pane, a new remote Java application configuration will be created for you. Change the Port to 9999.
  • Click Apply.
  • Click Debug.
  • Add a breakpoint in your Java code by pressing Ctrl + Shift + B.
  • Open a web browser to http://localhost:9000 and navigate to the page where the breakpoint will be activated.

I’m a relatively recent convert from Subversion to Git, so getting to know the git equivalent of an svn command is challenging.

Reverting a file in git actually uses the checkout command.

For example, if you want to revert your uncommitted changes for a file named package/File.java, then you would use the following command:

git checkout package/File.java
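If you want to see the behavior without risking real work, the following throwaway-repo sketch (file name and contents are hypothetical) demonstrates it end to end:

```shell
# Create a scratch repo, commit a file, dirty it, then revert with checkout
tmp=$(mktemp -d) && cd "$tmp"
git init -q
echo "original contents" > File.java
git add File.java
git -c user.name=demo -c user.email=demo@example.com commit -q -m "initial"

echo "uncommitted edit" > File.java   # working-copy change we want to discard
git checkout -- File.java             # the svn revert equivalent
cat File.java                         # back to the committed contents
```

The `--` separator tells git the argument is a path rather than a branch name, which avoids ambiguity when a file and a branch share a name; newer versions of Git also offer `git restore <path>` for the same purpose.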

The following is a repost of my answer to a question on LinkedIn, but I thought it may prove useful to people evaluating Hadoop distributions.

The following is a substantially oversimplified set of choices (in alphabetical order):

Amazon: Apache Hadoop provided as a web service. Good solution if your data is collected on Amazon…saves you the trouble of uploading gigs and gigs of data.

Apache: Apache Hadoop is the core code based upon which the various distributions are based.

Cloudera: CDH3 is based on Hadoop 1 (the current stable version) and CDH4 is based on Hadoop 2. CDH is based on Apache Hadoop. The only piece that’s not open source (AFAIK) is Cloudera Manager, which allows you to install up to 50 nodes for free before you go to the paid version. Cloudera is an extremely popular solution that runs on a wide variety of operating systems.

Hortonworks: HDP1 is 100% open source and is based on Hadoop 1. HDP is designed to run on RedHat/CentOS/Oracle Linux.

IBM: IBM BigInsights adds the GPFS filesystem to Hadoop, and is a good choice if your company is already an IBM shop…and you need to integrate with other IBM solutions. A free version is available as InfoSphere BigInsights Basic Edition. Basic Edition does not include all of the value-add features found in Enterprise Edition (such as GPFS-SNC).

MapR: MapR uses a proprietary file system plus additional changes to Hadoop that address issues with the platform. They have a shared-nothing architecture for the NameNode and JobTracker. MapR M3 is available for free, while M5 is a paid version with more features (such as the shared-nothing NameNode). People who have used MapR tend to like it.

Once you start to use Hadoop in your day-to-day business operations, you’ll quickly find that uptime is an important consideration. No one wants to explain to the CEO why a report is not delivered. While most of Hadoop’s architecture is designed to work in the face of node failure (such as the DataNodes), other components such as the NameNode must be configured with an HA option.

The following is a quick and dirty list of Hadoop HA options:

  • Cloudera CDH4 (free)
    • Uses shared storage
  • Hortonworks (free)
    • Option 1: Use Linux HA (Uses shared storage)
    • Option 2: Use VMWare
  • IBM BigInsights ($$$)
    • GPFS-SNC: Provides a shared nothing HA option
  • MapR M5 ($$$)
    • Shared nothing HA for both NameNode and JobTracker

 

If you’re brave, you can also apply Facebook’s patches to Apache Hadoop to get an “Avatar” based HA option. This is what FB uses in production.

Introduction

I had configured only YARN in my original post on how to Install Cloudera Hadoop (CDH4) with YARN (MRv2) in Pseudo mode on Ubuntu 12.04 LTS.

Importantly, YARN is not ready for production yet, so we’ll go ahead and install MRv1 to get some production development done.

Stop the YARN Daemons

We first have to stop all daemons associated with YARN only packages.

sudo service hadoop-yarn-resourcemanager stop
sudo service hadoop-yarn-nodemanager stop
sudo service hadoop-mapreduce-historyserver stop

Install the Missing MRv1 Packages

Next, we’ll install the 2 packages that are required for MapReduce v1 but were not part of the MRv2/YARN installation.

sudo apt-get install hadoop-0.20-mapreduce-jobtracker
sudo apt-get install hadoop-0.20-mapreduce-tasktracker

Start the MapReduce v1 Daemons

sudo service hadoop-0.20-mapreduce-jobtracker start
sudo service hadoop-0.20-mapreduce-tasktracker start