Applying Behavior Driven Development practices to infrastructure

Earlier this summer I worked with my team to apply Behavior Driven Development practices to the infrastructure we deployed our products to. Prior to this, the DevOps team simply identified toil and implemented solutions. Unfortunately, this meant the reasons behind the changes would get lost over time. Having Gherkin files with your solution means that information does not get lost. Interestingly enough, we were able to reduce the total size of the source code considerably, because some of the user stories were no longer relevant.
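For illustration, a feature file that preserves the reasoning behind an infrastructure change might look like this (the service and steps are hypothetical, not taken from the actual project):

    Feature: Log rotation on application hosts
        # Hypothetical example.
        Application logs once filled a disk and took down production,
        so every application host must rotate its logs daily.

        Scenario: Logs are rotated daily
            Given an application host provisioned by the playbook
            When more than a day passes since the last rotation
            Then the old log file is compressed and a new one is started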

I wrote a tutorial based on this experience that can be found here.

Embracing Behavior Driven Development

http://dilbert.com/strip/2006-02-26

Many years ago I worked on a project which became Rational Team Concert 1.0. The ability (via OSLC) to link to all of the development assets made life easier; I could easily click from requirements to test results. Today I spend the majority of my days in GitHub, which doesn’t have the same type of linkage. While linkage made my life easier, it did not mean the assets were in sync, which caused greater overhead. Recently I adopted BDD (Behavior Driven Development) and found myself using it for…everything.

Frankly, it just makes sense to use it for everything from JavaScript applications to infrastructure Ansible playbooks. All of your requirements live in one place with your code, and the practice encourages better requirements. It sounds too good to be true, and unfortunately it can be a hard sell to others, especially since the main advertised use case for BDD is to help the business owner/requirements author, who doesn’t always have a strong presence on smaller projects.

I recall a few projects where I spent the majority of my time calling myself an architect while converting business requirements into development requirements and test cases. Frankly, it was like playing a game of telephone. In software development, the best way to ensure requirements are met is to have fewer middlemen.

I have learned the hard way that documenting requirements is important, even if you think it is for disposable code. On one hand, it forces you to think about what you are going to write, so you spend less time rewriting your code. On the other hand, projects have a habit of lasting far longer than they should. Your future self will thank you for documenting.

Better requirements

I started my IBM career in the Rational acquisition back in 2003, the home of requirements, governance, testing, and traceability software. I have an entire book on gathering and writing requirements that I quote from more often than I should. Nevertheless, a good project manager, architect, designer, or anyone else in a requirements-gathering role is not always available for projects. So a simple language/framework like Gherkin that anyone can use is far better than nothing.

While I was a teaching assistant for the introduction to computer science class at Clark University, I taught students to outline preconditions and postconditions for each method before writing a line of code. Gherkin is essentially the same thing with Given, When, and Then: “Given” is your precondition, “When” is your method action, and “Then” is your postcondition. You write them for each scenario of each feature.

Features

BDD documentation is different from other project-related documentation. It isn’t a substitute for a decisions document or design thinking outputs; those are point-in-time documents. A BDD feature is a living document which outlines the current expectations for a specific feature of the solution.

Think about how a typical development project is managed. You have an agile story or change request for the solution to implement. Then over time you have additional stories or change requests to change that behavior. An archeological dig through documents, development assets, and meeting notes is required to grasp the current behavior.

The basic schema of a feature document is as follows:

Feature: <feature name>
    <Feature Description>

    Background:
        Given <precondition>
        And <precondition>

    Scenario: <scenario name>
        Given <precondition>
        When <action>
        Then <postcondition>

Now of course it can get far more complicated, but that is the basic gist. It is human readable and can be used to describe solution, component, or system-role features.
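Filled in, a feature for a hypothetical login capability might read like this:

    Feature: User login
        Registered users can sign in with a username and password.

        Background:
            Given a registered user named "alice"

        Scenario: Successful login
            Given the user is on the login page
            When she submits a valid username and password
            Then she is redirected to her dashboard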

The Glue

More documentation is all well and good, but it isn’t code. Text only has impact if it can pass or fail code. That is where step code comes in. Depending on which language you are using, step code will look slightly different, but it will look something like this (Python’s behave style shown here):

@given('text')
def setup_scenario_x(context):
    …

Each step is a method matched to the feature document text, an action to perform, and a context variable scoped to the test. Yes, this is essentially a form of unit test at the end of the day, but it provides very different insight.
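In JavaScript, for example, the glue for a login scenario like the one above might look like the following sketch. It assumes the @cucumber/cucumber package (older releases were published as cucumber), and the step logic is purely illustrative:

const assert = require('assert');
const { Given, When, Then } = require('@cucumber/cucumber');

Given('a registered user named {string}', function (name) {
  // State is stored on `this`, the per-scenario World object.
  this.user = name;
});

Given('the user is on the login page', function () {
  // A real suite would drive a browser here; this sketch just tracks state.
  this.page = 'login';
});

When('she submits a valid username and password', function () {
  // Pretend the application authenticates and redirects.
  if (this.page === 'login' && this.user) {
    this.page = 'dashboard';
  }
});

Then('she is redirected to her dashboard', function () {
  assert.strictEqual(this.page, 'dashboard');
});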

End of the day

Until recently I was a born-again test-driven developer. I would translate my requirements into an architecture decision document, then into component specifications, then into tests, and lastly write my code. Over time this process proved less and less agile; constant change made it inflexible. The majority of my tests were written to ensure my code addressed null pointer exceptions and reached 100% coverage. While important, what is critical for a minimum viable product is just enough code to meet the business requirements.

For more information about BDD and a great framework to get you started, go to the Cucumber project.

Back to basics

It seems that every time I get into a conversation with snugug he tells me to avoid leveraging frameworks. Now, I still stand by my belief that frameworks are inevitable, but I thought I would give it a try with a small proof of concept. In fact, I would try to use as few libraries as possible and just use vanilla JavaScript.

Library freedom and curse

Normally I just use whatever development libraries the large framework suggests. So I use intern.io for testing Dojo, Protractor for testing AngularJS, etc. On one hand this provides an immense amount of freedom; on the other hand it adds significant overhead. Selecting a library is like selecting a restaurant for lunch next year based on today’s Yelp reviews. A thorough evaluation of the library’s capabilities, its community, and expected enhancements needs to be performed, and alternatives considered. I can’t tell you how much time I lost comparing Mocha to Jasmine.

Even if you don’t leverage any libraries in your application and stick to standards, you are faced with a very ugly truth: not every browser implements standards the same way. Making up for this gap requires polyfills, which brings back the same overhead mentioned for selecting libraries.

Of course you could roll your own, but frankly something as simple as XMLHttpRequest can be a nightmare. My favorite was finding out that in IE 9 the console object is undefined unless the developer tools are open. Don’t get me started about the hoops you need to jump through to get the PhantomJS browser working.
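For example, this is the sort of defensive shim that bug forces on you (a minimal sketch; a production polyfill would cover the full console API):

// IE 9 defines window.console only while the developer tools are open,
// so an unguarded console.log() call crashes the page. A no-op shim avoids that.
if (typeof window.console === 'undefined') {
  window.console = {
    log: function () {},
    warn: function () {},
    error: function () {}
  };
}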

Nothing more than NPM

Builds start off simple and quickly get very complicated. Gulp works especially well for complicated builds, but it just ends up being more code to manage. The alternative is to use NPM itself as a build tool. It works surprisingly well, but I’m guessing there is an upper limit to how complicated your build can be, since pre and post hooks can only get you so far. That being said, I would suggest leveraging just NPM for build management until you actually need those additional capabilities. I should mention that I found it slower and sometimes wished I had used Broccoli.
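To illustrate, a stripped-down package.json that leans on npm’s automatic pre/post hooks might look like this; the package name and script bodies are placeholders:

{
  "name": "poc-app",
  "version": "0.1.0",
  "scripts": {
    "prebuild": "rm -rf dist && mkdir dist",
    "build": "cp -r src/. dist/",
    "postbuild": "npm test",
    "test": "node test/run-tests.js"
  }
}

Running npm run build then executes prebuild, build, and postbuild in order, which is the whole pipeline for a small project.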

Conclusion

Many years ago I was brought in to address a project that was drowning in technical debt: 60,000 lines of Perl code. The head developer at the time didn’t trust modules or third-party libraries; he wrote everything himself so he could optimize it. The result was six weeks to resolve a defect and four months to bring on a new developer.

My first order of business was to draw boxes around the code, look for duplication with modules on CPAN, and replace it. The result was a more manageable 5,000 lines of code. The interesting thing was that performance got better, mainly because even though the libraries were bigger, they used newer, faster features of the language. The lesson I learned from that experience is that you need to size your code for the resources maintaining it.

There is a cost associated with using a framework, library, or micro-library. However, there is also a cost to not using them. Shared code is always bloated, but it gets updated more often with defect fixes and possibly faster techniques. I am not saying you should or should not use frameworks like Angular, React, Ember, etc. However, you should understand your capabilities as a team and balance them against the end-user experience.

This was a great experiment, and as a result I will only bring in frameworks as needed moving forward.

Framework fatigue

My personal belief is that coding frameworks are natural and can’t be avoided, especially as projects mature and grow in size. They start as boilerplate code, best practices, and style guides. Then code is refactored into more manageable components and a framework emerges. That framework is used across many different projects and inherits use cases that may not be relevant to your project, which can be seen as bloat. Bloat leads to performance implications, which then leads to considering a different framework or writing your own.

Migrating to a different framework can be extensive, and for front-end developers it happens more often than on the back end. I loathe rewrites and find them to be an anti-pattern. However, there are justified reasons for making them.

There comes a time when the technical debt is so great that it justifies a major change. For frameworks this means it is easier to migrate to a new framework than to address the technical debt accumulated with the existing one. In my experience, the technical debt that drives this is less about performance and more about security or maintainability.

Up to this year I would never have thought on-boarding new developers could drive a framework change. I was in the camp that a basic Computer Science degree was a solid enough foundation. However, the majority of mastering a framework is less about learning the terminology and usage and more about understanding the community. For many of the frameworks today you just can’t buy a book; you need to learn by engaging the community.

Whenever you have or build top talent, they always have one foot out the door. Finding a new developer takes time and requires resources from the team to vet the right candidate. Then they need to up-skill with assistance from the team. Finding and on-boarding a new developer can have a significant impact on the delivery schedule. Aligning some of your technologies/frameworks with the talent in the marketplace reduces this overhead. This is probably the most frustrating driver of change, and it should always be approached with a cost/benefit analysis, not by simply following the framework du jour.

So how do we address framework fatigue? On one hand, you could choose a framework that you believe will be victorious and stick with it as long as you can. You could also manage your own framework and be responsible for everything. However, I am not a betting man, and I don’t have the resources to keep up with every security edge case. I would rather embrace change. So I externalize and break down logic as much as possible. Additional levels of abstraction impact performance, so choose wisely. Lastly, spend some time between retrospectives and planning sessions performing a gap analysis between what you are doing and where the frameworks are going. If you are using Angular 1.x, is the direction of Angular 2 aligned with your interests? Is there another framework with an active community better suited to your needs?
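As a sketch of what I mean by externalizing logic (the module, names, and Angular wiring below are illustrative, not from a real project):

// pricing.js: pure business logic with no framework imports.
// When the framework changes, only the thin adapter below needs rewriting.
function applyDiscount(total, percent) {
  return total - (total * percent) / 100;
}

// Adapter for Angular 1.x (assumes the global `angular` loaded via a script tag);
// this is the only framework-aware code.
angular.module('app').factory('pricing', function () {
  return { applyDiscount: applyDiscount };
});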

Ultimately the story of living with frameworks is about living with change, choice, and constant revalidation of direction and purpose.

Running RTC, RAD, RSA, or IDA on Fedora 19

So let me start off by saying that Fedora 19 is not a supported environment for IBM products. That being said, if you are living on the bleeding edge of technology like me, you have probably noticed that Rational Team Concert, Rational Application Developer, Rational Software Architect, and InfoSphere Data Architect do not work on Fedora 19 or the latest version of Ubuntu. The main reason for this is a defect in Eclipse 4.2.2, which they are built on. The resulting error looks like this:

Unhandled exception
Type=Segmentation error vmState=0x00000000
J9Generic_Signal_Number=00000004 Signal_Number=0000000b Error_Value=00000000 Signal_Code=00000001
Handler1=00007F88ABA54940 Handler2=00007F88AB6F4900 InaccessibleAddress=0000000000000000
RDI=0000000000000000 RSI=00007F88A4A5B5A0 RAX=00007F88A617F2C0 RBX=0000000000000000
RCX=0000003FF306DA00 RDX=0000003DE9421908 R8=0000000000000000 R9=00007F88A492DCD0
R10=00007F88AC604FA0 R11=00007F88AC6050A0 R12=0000000000000004 R13=0000000000000119
R14=00007F88ABBD0D80 R15=00007F882C5C7A2A
RIP=0000003FF306DA11 GS=0000 FS=0000 RSP=00007F88AC6053B0
EFlags=0000000000210206 CS=0033 RBP=00007F88A4A5B5A0 ERR=0000000000000004
TRAPNO=000000000000000E OLDMASK=0000000000000000 CR2=0000000000000000
xmm0 0000000000000000 (f: 0.000000, d: 0.000000e+00)
xmm1 2424242424242424 (f: 606348352.000000, d: 1.385533e-134)
xmm2 0000000000000000 (f: 0.000000, d: 0.000000e+00)
xmm3 0000000000000000 (f: 0.000000, d: 0.000000e+00)
xmm4 0000000000000000 (f: 0.000000, d: 0.000000e+00)
xmm5 0000000000000000 (f: 0.000000, d: 0.000000e+00)
xmm6 00007f88ac605830 (f: 2891995136.000000, d: 6.928035e-310)
xmm7 0000000000000004 (f: 4.000000, d: 1.976263e-323)
xmm8 b1cc29b34c49b08c (f: 1279897728.000000, d: -8.161092e-69)
xmm9 116c0a02d9b0f362 (f: 3652252416.000000, d: 9.468854e-225)
xmm10 405e000000000000 (f: 0.000000, d: 1.200000e+02)
xmm11 403d5ee19101ca50 (f: 2432813568.000000, d: 2.937063e+01)
xmm12 3ee7185e8efb29c4 (f: 2398824960.000000, d: 1.101265e-05)
xmm13 3f6b2f769cf0e200 (f: 2633032192.000000, d: 3.318531e-03)
xmm14 3fe9e3779b97f4a7 (f: 2610427136.000000, d: 8.090170e-01)
xmm15 3fe41b2f769cf0e2 (f: 1989996800.000000, d: 6.283185e-01)
Module=/lib64/libsoup-2.4.so.1
Module_base_address=0000003FF3000000 Symbol=soup_session_feature_detach
Symbol_address=0000003FF306DA00
Target=2_60_20130617_152572 (Linux 3.11.2-201.fc19.x86_64)
CPU=amd64 (8 logical CPUs) (0x3d1fae000 RAM)
----------- Stack Backtrace -----------
soup_session_feature_detach+0x11 (0x0000003FF306DA11 [libsoup-2.4.so.1+0x6da11])
Java_org_eclipse_swt_internal_webkit_WebKitGTK__1soup_1session_1feature_1detach+0x7f (0x00007F882C5C7AA9 [libswt-webkit-gtk-4236.so+0x5aa9])
(0x00007F88ABA6CF21 [libj9vm26.so+0x33f21])
---------------------------------------

The defect has to do with a bug in WebKitGTK. In WebKitGTK 1.10.x, a crash can occur if an attempt is made to show a browser before a size has been set. There is a fix in Eclipse 4.3, but unfortunately the IBM tooling is not built on that yet.

The first alternative to consider is to use XULRunner instead. Unfortunately, only XULRunner 1.9.2 is supported for 64-bit, because JavaXPCOM was removed from XULRunner 2 and later.

Unfortunately, that version of XULRunner does not fully support HTML5, nor does it work reliably under the latest Fedora or Ubuntu in my experience. An alternative is to upgrade the underlying Eclipse platform for those products. This is not an easy task, since the products disable this capability by way of dependencies. The next best thing is to update the components of one jar.

The fix is available in the org.eclipse.swt.gtk.linux jar file in Eclipse 4.3. Here are the steps I took to resolve this issue:

  1. Install RTC, RAD, or another IBM IDE based on Eclipse 4.2.
  2. Download and uncompress Eclipse 4.3 (Kepler).
  3. Open org.eclipse.swt.gtk.linux.x86_64_3.102.1.v20130827-2048.jar from Eclipse 4.3 in an archive utility.
  4. Open org.eclipse.swt.gtk.linux.x86_64_3.100.1.v4236b.jar from the IBM product in an archive utility.
  5. Copy all of the /org/eclipse/swt/browser/WebKit*.class files from the Eclipse 4.3 SWT archive to the IBM product’s archive.
  6. Start the application.
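For reference, the same procedure can be sketched as terminal commands; the install paths below are examples only, so adjust them and the jar file names to match your machine:

# Paths below are illustrative.
cd "$(mktemp -d)"
# Extract the fixed WebKit classes from the Eclipse 4.3 SWT jar...
unzip ~/eclipse43/plugins/org.eclipse.swt.gtk.linux.x86_64_3.102.1.v20130827-2048.jar 'org/eclipse/swt/browser/WebKit*.class'
# ...then overwrite the matching entries inside the IBM product's SWT jar.
zip -r /opt/IBM/SDP/plugins/org.eclipse.swt.gtk.linux.x86_64_3.100.1.v4236b.jar org/eclipse/swt/browser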

Git Versus Subversion Versus Rational Team Concert – Basic Commandline Syntax Reference

It looks like developerWorks (https://www.ibm.com/developerworks) updated their theme, and one of my old posts is no longer usable. So I thought I would repost it in my new blog.

A while back I came across a Nick Boldt article called “Git vs. SVN – Basic Commandline Syntax Reference.” I decided to enhance his table to include syntax for the Rational Team Concert command-line client.

 

| Action | Git Syntax | Subversion Syntax | Rational Team Concert Syntax |
|---|---|---|---|
| Initial checkout from existing repo for a given branch | git clone <url>; cd <module>; git checkout <branch> | svn checkout <url>/<branch> | lscm load -r <url> |
| Update locally checked out files from central repo | git pull | svn update | lscm accept -v |
| List locally changed files/folders | git status | svn stat | lscm status |
| Diff locally changed file | git diff somefile.txt | svn diff somefile.txt | lscm diff file somefile.txt |
| Revert locally changed file* | git checkout somefile.txt | svn revert somefile.txt | lscm undo somefile.txt |
| Revert ALL local changes (except untracked files)* | git reset --hard HEAD | svn revert . -R | lscm load <workspace> -r <url> |
| Add new file | git add file.txt | svn add file.txt | lscm checkin file.txt |
| Add new folder recursively | git add folder | svn add folder | lscm checkin folder |
| Delete file | git rm file.txt | svn rm file.txt | rm file.txt; lscm checkin <parent folder> |
| Delete folder | git rm -r folder (non-recursive by default; use -r to recurse) | svn rm folder (recursive by default; use -N to not recurse) | rm folder; lscm checkin <parent folder> |
| Commit changed file to central repo | git commit -m "<message>" file.txt; git push | svn ci -m "<message>" file.txt | lscm checkin file.txt; lscm changeset comment <changeset> "<message>" |

Legend:

  • <url> – Repository URL
  • <branch> – Branch, stream, or workspace
  • <module> – The component of the repository
  • <workspace> – The Rational Team Concert equivalent of a private stream
  • <changeset> – Alias or UUID of the target change set
  • <message> – Comment text

Bringing Jazz.net into Rational Team Concert

For a while now I have been using a tool that I created called the Jazz Support Handler. I originally created it as part of a thought experiment on how to bring some of the Jazz.net experience into the Rational Team Concert client. The tool automates searching Jazz.net and Google when an error occurs inside of Eclipse, which saves me a lot of copying/pasting and extra browser windows. With the upcoming release of CLM 4.0.3 and the move to Eclipse 4.2.2, I thought I would update the dependencies and make it available to the public.

Now, this tool will only catch errors passed to the ErrorSupportProvider from the Eclipse Workbench API, and it only works while the Eclipse workbench is still active. Lastly, this is a non-supported tool: although I work for IBM, it is supported by neither IBM nor myself. That being said, I do encourage you to comment on this post with any trouble you may have.

Prerequisites

The latest bits are designed to work with the Rational Team Concert 4.0.3 Eclipse client (3.6.2 or 4.2.2 Eclipse clients). If you have RTC 4.0.3 installed in another Eclipse client, then it needs to be Eclipse version 3.6.2 or later. Eclipse supports numerous operating systems, but I have only tested this under Windows XP, Linux (Ubuntu, RHEL 6.2, and Fedora), and Apple OS X.

Installation

  1. Download the Eclipse Update Site archive from this link: JazzSupportHandlerToolUpdateSite
  2. Start your Rational Team Concert 4.0.3 Eclipse client.
  3. Open the Install New Software dialog (Help->Install New Software…).
  4. Select Add.
  5. Select Archive…
  6. Point to the com.ibm.rational.jazz.support.handler.site.zip file.
  7. From the Work with drop-down, select “All Available Sites”.
  8. Select Jazz Support Handler and its child feature.
  9. Select Next…
  10. Select Next…
  11. Accept the license.

Getting Started

After the install has completed, there will be a new “Jazz Support Handler” category in your Eclipse Preferences dialog. This new preferences area is where the Jazz Support Handler can be configured and tested.

There are three basic settings available to the user:

  • Enabling the Jazz Support Error Handler or using the default ErrorSupportProvider. Eclipse only allows one error handler to be enabled at a time.
  • Which data sources do you want to display? Google? Jazz.net? Both? Neither?
  • What do you want the tool to ignore? I am always amazed at how much search engines can learn about their users. I normally add filter words for my projects, which prevents project-specific errors from being googled.

Finally, there is a test button. This is very useful for testing filters, and it also allows you to see the dialog without the need for a workbench error.