Why I Prefer Vim Over Other IDEs

I try not to enforce a choice of Integrated Development Environment (IDE) on my teams. Some developers will give up their religion and home sports team before they drop their preferred IDE. My only ask is that the code builds and is understandable outside of their chosen IDE. I have often found most IDEs to be more of a hindrance than a help, and never one size fits all. As a result, I use multiple IDEs throughout the day. Anything in Java I will do in Eclipse. I will use Visual Studio Code (VS Code) for tools like watsonx Code Assistant (WCA) in Go, Python, or other scripting languages. I like to use WCA for code comment generation. Yet, for the majority of my coding, I use Vim.

Yes, that default editor that comes with most Linux, Unix, and macOS distributions. Why? Because it is there. My teams need to deliver software in our SaaS environments and for customers to use on premises. Since I like to work on the front lines, I want to use the tools that are already there. This is why I don’t use Neovim even though it is superior to Vim.

Now, I do add plugins to Vim for syntax highlighting and autocomplete. So you may be asking: why not just use the Vim plugin for VS Code? It is because I need to stay used to working in a terminal window. As a developer, you should overcome anything hard by doing it often. So while I do use a plethora of IDEs, I choose to use the hardest one often. That way I can be great at using the terminal editor that is nearly everywhere.

Improving Employee Performance Through Insightful Metrics

When I was a first-line manager with only a handful of employees, measuring performance was easy. You can be a “walk-around manager,” constantly seeing their accomplishments and giving them feedback. Now I am a manager of managers and have far more employees reporting to me. One of my challenges has been devoting enough time to each employee to know them properly: to know how well they are doing and to give them feedback. I recently read Decisive: How to Make Better Choices in Life and Work by Chip and Dan Heath. One example from the book was Van Halen.

When the band toured, the technical requirements of their shows were immense. So they added contract riders with heavy consequences. Most notably, their riders specified that a bowl of M&M’s candies was to be placed in their dressing room. Separately, in a different area of the contract, all of the brown M&M’s were to be removed. This sounds absurd, yet it quickly became an indicator that the electrical, structural, security, and safety requirements in the contract had been read and followed.

So how do we apply this to developers, testers, or even managers? I really don’t like performance metrics tied to code written, tests performed, defects found, or projects accomplished. I have found those metrics are too easily gamed and reinforce the wrong behaviors. Luckily, the company I work for has a list of behaviors they want to see in employees. An example is “work across other teams to improve your solution.” These behaviors are not directly tied to the source code produced or the quality of the product. Just like the M&M’s, they can be an indicator that an ask of the business has been missed.

I rank all employees based on these behaviors. I then ask: what are the top employees doing that the bottom are not? The answer to that question gives me new metrics to drive improvements in performance. Yes, this sounds like a moving target, which is why transparency is key. Managers can now have a conversation with lower-performing employees like:

“We have noticed that the top performers in the organization have participated in knowledge shares. What can you share with the greater team?”

The goal of this exercise is to constantly find ways to improve the organization’s performance. It is not meant to find ways to remove low performers. You don’t drown by falling into the water. You drown by staying there.

Rediscovering Git: Embracing Worktrees for Simplicity

When Git first came out I was using ClearCase, CVS, and SVN in my day-to-day development. I thought Git was interesting because the Linux kernel community embraced it and Linus Torvalds created it. Unfortunately, I found it limited and, frankly, not seamless. Later I worked on the first release of Rational Team Concert, which I preferred to Git, which was still limited at the time. Then something interesting happened. Git started to get good. In fact, it was getting really good. So much so that when I later moved to the Watson group in IBM, we moved to Git.

It was just simple and easy. We simplified our development process to make the most of its simplicity. I put my head down. I stopped paying attention to Git. I just focused on developing software with what I knew. During that time Git didn’t stand still. It was improving.

I now feel like I am having a Git renaissance. I am discovering that I can do things in Git that I previously thought required extra tools. For instance, I thought I needed plugins in my IDE to work on multiple branches at the same time. As I have been slowly breaking my reliance on IDEs, I found git worktree.

Git’s worktree command lets you easily work on multiple branches at the same time. It does so by checking each branch out into its own directory. This avoids having to stash changes, and current work is visible in the directory structure.

To get started:

  • Create a project directory.
  • Clone a bare repository: git clone --bare <repo url>
  • Go into the new <repo>.git directory that the clone creates.
  • Now you can add sibling directories, each representing a branch:
    git worktree add --track -b <branch> ../<branch> <remote>/<branch>

Now when you look at the project directory, each branch has its own sub-directory, and you can work on them simultaneously.

Git has come a long way. The more I explore, the more I realize how few tools I need. Using fewer tools means I go faster.

Effective Interview Questions in the Time of ChatGPT

Every year or so I hire new members for my team. As the hiring manager, I normally don’t ask too many technical questions during the interview; that is usually left for a follow-up technical interview. Still, I do want to know two things. Does the candidate show knowledge of what they put on their resume? And how does the candidate handle a technical question they don’t know the answer to? The second is frankly the more interesting question.

Does the candidate have a network of experts they can pull from? Do they know how to engage open source communities? Are they comfortable jumping into unfamiliar code to understand how it works? I want to hear war stories to better understand how they deal with the ambiguity of something new. In the last few years, especially from college hires, I started getting “I would Google it” as an answer. Now, that isn’t a horrible answer, but you can’t Google everything. For instance, internal projects, services, and components will not be answerable via Google. Also, the answers you find might not even be correct.

This year, I have noticed an increasing number of references to using ChatGPT or Microsoft Copilot, often as the first resource reached for when facing a problem, which has left me with mixed feelings. On one hand, these are wonderful tools. I use IBM watsonx Code Assistant (WCA) at work all the time. I encourage my entire team to use WCA, at the very least for generating comments for the code they have written.

Now, it can’t be the only tool in your tool chest. It will not have all the answers, for the same reasons Google will not have all the answers. Also, some tools like Microsoft Copilot are still in litigation over the legality of their product. That is fine when working on a school programming assignment, but it is a significant risk when authoring commercial software.

In closing, the best thing a candidate can do is show a breadth of tools and resources for getting the job done.

From Developer to Manager: My Unexpected Journey

Back in 1999, I took my first steps onto the Clark University campus with the intention of majoring in Computer Science and obtaining an MBA. The great thing about college is that sometimes you decide to go in a different direction in life. I think it was the Space Shuttle Challenger case study that turned me off of business, so I focused on Computer Science as my future. Little did I know life would bring me back to business as a career.

I followed a typical technical path of developer, senior developer, and eventually software architect. I have mixed feelings about that last title, as I was playing the role of technical and team lead more than doing pure head-in-the-clouds modeling. Then something odd happened on the way to the airport.

After finally making my way through Logan Airport security, I looked at my phone to see Slack messages with “congratulations” and “look forward to working under your leadership.” I messaged my manager at the time asking if I had missed anything, and I got the response “he didn’t tell you.” A few phone calls later I learned that my second-line manager had made me a manager.

Training followed, provided by IBM. However, most of my training came from experience afterwards. Wow, did I get experience. I have had to deal with everything from employee love triangles to international espionage. Of course, none of it is anything I can share, which is the hardest part of being a manager. My personal belief is that transparency leads to trust, and as a manager you can’t always be transparent.

I think of myself as a technical manager, as I am still an individual contributor. However, as the number of employees and teams reporting to me has grown, that has become harder. I am still a believer in leading from the front and getting your hands dirty. Using a combination of a technical focal or team lead, setting expectations, and knowing when to write code or jump on a call with a customer helps bring the manager and contributor sides of my life into balance.

I once said that as an architect all I could do was document what should be done; now, as a manager, I have people to make it happen. That is still mostly true, with the exception of matrixed employees. Employees love to tell you about their current projects. As a manager you can take that insight and make business decisions around it. However, when their day-to-day work doesn’t align with your mission and business decisions, your role can feel diminished. Nevertheless, over the years I’ve found that matrixed employees are fine and healthy as long as they are in the same organization and have an aligned mission. I guess Conway’s law is alive and well.

I really do enjoy being a manager. As an agile developer I am a firm believer in the Agile Manifesto’s “Individuals and interactions over processes and tools.” However, I never got to truly focus on the “individuals and interactions” until I became a manager.

Lessons from porting Lucene: Equality

Java and Rust take two different approaches to evaluating equality of objects. Yes, on the surface they are trying to accomplish the same thing, but the differences may result in double the methods. First, let’s get a foundation in equality.

For the purposes of this article there are three properties of equality.

  • Reflexive – An object is “equal to” itself (e.g. x == x).
  • Symmetric – If an object is “equal to” another object, the reverse also holds (e.g. if x == y then y == x).
  • Transitive – If an object is “equal to” a second object and that object is “equal to” a third object, then the first and the third are also “equal to” each other (e.g. if x == y && y == z then x == z).

Now, both Java and Rust distinguish between comparisons that satisfy only the symmetric and transitive properties and comparisons that satisfy all three. This is sometimes discussed as shallow versus deep comparison: shallow (or flat) comparison checks the values’ identity, while deep comparison checks the fields and values. Java’s and Rust’s approaches to deep comparison are essentially the same, so when porting from Java to Rust, the equals() method logic can be ported directly to PartialEq. Shallow comparisons are where they differ. Yes, they both have shallow, memcmp-like, pointer-comparing versions. The first difference is that Java also leverages an overridable hashCode() method to provide a hash-based identity for an object (equal objects must produce equal hash codes). The second is that Rust falls back on the deep comparison (PartialEq) in some scenarios; for instance, a floating-point NaN can never be equal to anything, not even itself.
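
As a quick illustration of the Rust side of that difference, floating-point types implement PartialEq but not Eq, because NaN breaks reflexivity:

fn main() {
    let nan = f64::NAN;
    // NaN is never equal to anything, including itself, so equality on f64
    // is only a partial equivalence relation (PartialEq, but not Eq).
    assert!(nan != nan);
    // Symmetry and transitivity still hold for ordinary values.
    assert_eq!(1.0_f64, 1.0_f64);
}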

So what does that mean for my porting process? Because I want to maintain some level of backwards compatibility, I need to port the hashCode() method even though it is largely useless to Rust, which has its own Hash trait.
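
To make this concrete, here is a minimal sketch (using a made-up Term struct, not actual Lucene code) of how a Java equals()/hashCode() pair maps onto Rust’s PartialEq, Eq, and Hash traits:

use std::hash::{Hash, Hasher};

// Hypothetical struct standing in for a ported Lucene class.
struct Term {
    field: String,
    text: String,
}

// Deep comparison: the direct equivalent of Java's equals().
impl PartialEq for Term {
    fn eq(&self, other: &Self) -> bool {
        self.field == other.field && self.text == other.text
    }
}

// Full equivalence (adds reflexivity); safe here because no field is floating point.
impl Eq for Term {}

// The ported hashCode() logic lives here, so equal terms hash alike.
impl Hash for Term {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.field.hash(state);
        self.text.hash(state);
    }
}

Keeping the same field order as the Java hashCode() makes it easier to verify the two implementations against each other.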

Porting Lucene: Iteration 1 – Decisions about how to start porting…

So it’s time to make a list of files to port. Let’s see how many Java files there are…

find lucene |grep ".java" |wc -l
5502

Hmm…. OK let’s try focusing on Test cases instead…

find lucene |grep ".java" |grep Test |wc -l
1570

A bit better but not great. Let’s focus on the first thing you need to create an index…

find lucene/core/src/test/org/apache/lucene/store |grep ".java" |grep Test |wc -l
25

Looking good… Oh wait… and then there are the dependencies:

package org.apache.lucene.store;

import static org.junit.Assert.*;

import com.carrotsearch.randomizedtesting.RandomizedTest;
import com.carrotsearch.randomizedtesting.Xoroshiro128PlusRandom;
import com.carrotsearch.randomizedtesting.generators.RandomBytes;
import com.carrotsearch.randomizedtesting.generators.RandomNumbers;
import com.carrotsearch.randomizedtesting.generators.RandomPicks;
import com.carrotsearch.randomizedtesting.generators.RandomStrings;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import org.apache.lucene.util.ArrayUtil;
import org.apache.lucene.util.IOUtils.IOConsumer;
import org.junit.Test;

public abstract class BaseDataOutputTestCase<T extends DataOutput> extends RandomizedTest {
  protected abstract T newInstance();

So org.apache.lucene.util is too big to implement at once.

find lucene/core/src/test/org/apache/lucene/util/ |grep ".java" |grep Test |wc -l
     100

So, in theory, implementing the rest of core should result in the util package getting fully implemented. Now, Lucene also has its own test framework.

Test Frameworks

So here is the conundrum. The problem with JNI is that you are living in two domains (the Java VM and the system). It is very tempting to use the Java test cases as the source of truth for the Rust code’s behavior. The key problem is that when something goes wrong, is the problem in the Java code, the JNI glue, or the Rust code? The alternative is to keep two sets of books: port the tests to Rust and run both. This essentially doubles the amount of work, but it should save an incredible amount of time when tests fail. The resulting logic should be:

Rust test fails | JUnit test fails | Next steps
--------------- | ---------------- | ----------
Yes             | Yes              | Focus on changing the code to make the Rust test pass.
Yes             | No               | Update the Rust test to match the logic of the JUnit test.
No              | Yes              | Ensure the logic in the Rust test matches the JUnit test. If it does, focus on the JNI glue code.
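
As a sketch of what the “two sets of books” look like in practice, here is a Rust unit test that mirrors the kind of assertion a JUnit data-output test would make. The write_vint function and the expected bytes are illustrative placeholders, not the actual ported code:

// Lucene-style variable-length integer encoding, shown only as an example.
fn write_vint(buf: &mut Vec<u8>, mut value: u32) {
    while value >= 0x80 {
        buf.push((value as u8 & 0x7F) | 0x80);
        value >>= 7;
    }
    buf.push(value as u8);
}

#[cfg(test)]
mod tests {
    use super::*;

    // Mirrors the assertion the corresponding JUnit test would make on the Java side.
    #[test]
    fn vint_round_trip_matches_expected_bytes() {
        let mut buf = Vec::new();
        write_vint(&mut buf, 300);
        assert_eq!(buf, vec![0xAC, 0x02]);
    }
}

If this test and its JUnit counterpart ever disagree, the table above tells you where to look first.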

Porting Lucene: Iteration 1 – Project Setup

So I thought project setup would be the easiest item to complete. It turns out the due diligence was far greater than I expected. Why? In short, there are tools like Gradle or Cargo that have project best practices built in. However, they just didn’t quite fit what I needed. Let’s look at those use cases.

Little to no setup required

This is something I feel very strongly about. This project should make contributing absolutely frictionless. You should not need a specific IDE or be required to change versions of system libraries.

Don’t introduce yet another language if you don’t have to

Choosing the best tool for a task is important. However there is a certain amount of overhead with each language. Some require additional tooling and may not be widely known. So reusing languages that are used elsewhere in the project is preferable. 

Option #1: Gradle

This should be a no-brainer. Lucene uses Gradle. I’m porting Lucene. I should use Gradle. However, while I am porting Lucene to learn, I want to make this easy for others to adopt, and Gradle brings in either Groovy or Kotlin.

Option #2: Cargo

“Cargo is the Rust package manager. Cargo downloads your Rust package’s dependencies, compiles your packages, makes distributable packages, and uploads them to crates.io, the Rust community’s package registry.”

OK… that I stole from Cargo’s guide. Cargo is going to be needed for building and managing the Rust components. However, we really need a level of orchestration above it.

Option #3: Maven

For the longest time, Maven was the workhorse of most Java development. Unlike Gradle, Maven can be driven using XML exclusively. This then becomes a discussion of project management via configuration versus code. Both have their time and place. As a project becomes increasingly complex, code becomes preferable to configuration.

Option #4: Make

There is something to be said for keeping it simple. Make is typically included in every distribution, and shell scripting provides the capabilities I need. While shell scripting is yet another language, it is already pulled in by Docker. This makes the most sense.

Conclusion

A mixture of Options #2, #3, and #4 seems like the best course. Given the expected complexity of the project, Make seems like the best top-level option, as it has the fewest dependencies. Maven and Cargo can then be used for the Java and Rust sub-components respectively.

Embracing Behavior Driven Development

Many years ago I worked on a project which became Rational Team Concert 1.0. The ability (via OSLC) to link to all of the development assets made life easier; I could easily click from requirements to test results. Today I spend the majority of my days in GitHub, which doesn’t have the same type of linkage. And while linkage made my life easier, it did not mean the assets were in sync, which caused greater overhead. Recently I adopted Behavior Driven Development (BDD) and found myself using it for… everything.

Frankly, it just makes sense to use it for everything from JavaScript applications to infrastructure Ansible playbooks. All of your requirements live in one place with your code, and it encourages better requirements. It sounds too good to be true, and unfortunately it can be a hard sell to others, especially since the main advertised use case for BDD is helping the business owner or requirements author, who doesn’t always have a strong presence on smaller projects.

I recall a few projects where I spent the majority of my time calling myself an architect while converting business requirements into development requirements and test cases. Frankly, it was like playing a game of telephone. In software development, the best way to ensure requirements are met is to have fewer middlemen.

I have learned the hard way that documenting requirements is important, even if you think it is for disposable code. On one hand, it forces you to think about what you are going to write, so you spend less time rewriting your code. On the other hand, projects have a habit of lasting far longer than they should. Your future self will thank you for documenting.

Better requirements

I started my IBM career in the Rational acquisition back in 2003, the home of requirements, governance, testing, and traceability software. I have an entire book on gathering and writing requirements that I quote from more often than I should. Nevertheless, a good project manager, architect, designer, or anyone else in a requirements-gathering role is not always available for projects. So a simple language and framework like Gherkin that anyone can use is far better than nothing.

While I was a teaching assistant for the introduction to computer science class at Clark University, I taught students to outline preconditions and postconditions for each method before writing a line of code. Gherkin is essentially the same thing with Given, When, and Then: “Given” is your precondition, “When” is your action, and “Then” is your postcondition. You write them for each scenario of each feature.

Features

BDD documentation is different from other project-related documentation. It isn’t a substitute for a decisions document or design thinking outputs; those are all point-in-time documents. A BDD feature is a living document that outlines the current expectations for a specific feature of the solution.

Think about how a typical development project is managed. You have an agile story or change request for the solution to implement. Then, over time, you have additional stories or change requests that change that behavior. An archeological dig through documents, development assets, and meeting notes is required to grasp the current behavior.

The basic schema of a feature document is as follows:

Feature: <feature name>
    <Feature Description>

    Background:
        Given <precondition>
        And <precondition>

    Scenario: <scenario name>
        Given <precondition>
        When <action>
        Then <postcondition>

Now, of course, it can get far more complicated, but that is the basic gist. It is human readable and can be used to describe the features of a solution, a component, or a system role.

The Glue

More documentation is all well and good, but it isn’t code. Text only has impact if it can pass or fail code. That is where step code comes in. Depending on which language you are using, step code will look slightly different, but it will look something like this:

@given('text')
function setup_scenario_x(test_context) {
    …
}

Each step is a method with a pattern that matches text in the feature document, an action to perform, and a context variable scoped to the test. Yes, at the end of the day this is essentially a form of unit test, but it provides very different insight.
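
For a more concrete flavor, here is a rough sketch of the same glue written with the Rust cucumber crate (the exact attribute syntax varies by crate version, and the world struct and step text are made up for illustration):

use cucumber::{given, then, when, World};

// Shared, test-scoped state: the equivalent of test_context above.
#[derive(Debug, Default, World)]
struct CounterWorld {
    value: i32,
}

#[given("a counter reset to zero")]
fn reset_counter(world: &mut CounterWorld) {
    world.value = 0;
}

#[when(expr = "I add {int}")]
fn add(world: &mut CounterWorld, amount: i32) {
    world.value += amount;
}

#[then(expr = "the total is {int}")]
fn total_is(world: &mut CounterWorld, expected: i32) {
    assert_eq!(world.value, expected);
}

fn main() {
    // Point the runner at the directory containing the .feature files.
    futures::executor::block_on(CounterWorld::run("tests/features"));
}

Each attribute macro plays the role of the @given decorator above, and the World struct is the test-scoped context.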

End of the day

Until recently, I was a born-again test-driven developer. I would translate my requirements into an architecture decision document, then into component specifications, then into tests, and lastly write my code. Over time this process proved less and less agile; constant change made it inflexible. The majority of my tests were written to ensure my code handled null pointer exceptions and reached 100% coverage. While that is important, what is critical for a minimum viable product is just enough code to meet the business requirements.

For more information about BDD, and for a great framework to get you started, go to the Cucumber project.

Back to basics

It seems that every time I get into a conversation with snugug, he tells me to avoid leveraging frameworks. Now, I still stand by my belief that frameworks are inevitable; however, I thought I would give it a try with a small proof of concept. In fact, I would try to use as few libraries as possible and just use vanilla JavaScript.

Library freedom and curse

Normally I just use whatever libraries the large framework suggests: intern.io for testing Dojo, Protractor for testing AngularJS, and so on. Choosing your own libraries provides an immense amount of freedom on one hand and adds significant overhead on the other. Selecting a library is like selecting a restaurant for lunch next year from today’s Yelp reviews. A thorough evaluation of the library’s capabilities, its community, and its expected enhancements needs to be performed, and alternatives considered. I can’t tell you how much time I lost comparing Mocha to Jasmine.

Even if you don’t leverage any libraries in your application and stick to standards, you are faced with a very ugly truth: not every browser implements standards the same way. Making up for this gap requires polyfills, which brings back the same overhead mentioned above for selecting libraries.

Of course you could roll your own, but frankly something as simple as XMLHttpRequest can be a nightmare. My favorite was finding out that in IE 9 the console object is undefined unless the developer tools are open. Don’t get me started on the hoops you need to jump through to get the PhantomJS browser working.

Nothing more than NPM

Builds start off simple and quickly get very complicated. Gulp works especially well for complicated builds, but it just ends up being more code to manage. The alternative is to use NPM itself as the build tool. It works surprisingly well, but I suspect there is an upper limit to how complicated your build can be, since pre and post hooks only get you so far. That being said, I would suggest leveraging just NPM for build management until you actually need those additional capabilities. I should mention that I found it slower and sometimes wished I had used Broccoli.

Conclusion

Many years ago I was brought in to address a project that was drowning in technical debt. It was 60,000 lines of Perl code. The head developer at the time didn’t trust modules or third-party libraries; he wrote everything himself so he could optimize it. The result was six weeks to resolve a defect, and four months to bring on a new developer.

My first order of business was to draw boxes around the code, look for duplication with modules on CPAN, and replace it. The result was a more manageable 5,000 lines of code. The interesting thing was that performance got better, mainly because even though the libraries were bigger, they used newer, faster features of the language. The lesson I learned from that experience is that you need to size your code for the resources maintaining it.

There is a cost associated with using a framework, library, or micro-library. However, there is also a cost to not using them. Shared code is always bloated, but it gets updated more often with defect fixes and possibly faster techniques. I am not saying you should or should not use frameworks like Angular, React, or Ember. However, you should understand your capabilities as a team and balance them against the end-user experience.

This was a great experiment, and as a result I will only bring in frameworks as needed moving forward.