Self-Documenting Unit Tests

Naming JUnit methods hasn’t been my strong point. My general obsession with naming conventions outstripped my creativity for specific unit test names during much of my last project. Darker moments birthed monstrosities like AdvertisementTest001(), AdvertisementTest002(), etc. Finding good names for classes, public methods, variables, and database columns is always on my mind; that’s why (a link to) a trusty thesaurus should always be at hand. So my inability to make good, consistent unit test names vexed me.

I had a trick for regression tests, though. I’d use the JIRA ID as the method name: e.g., public void jiraProjectNameIssueNumber(). Anybody could look up the issue and get the entire history when needed, and some particularly infamous issues were immediately recognizable when a too-well-known number showed up in the runner’s failed list. This I liked, and now I have something that I may like for actual unit tests thanks to the January 2015 Software As Craft topic, Russell Gold’s talk on Executable Documentation.

A better title might have been Self-Documenting Unit Tests, since several of us were expecting this to be about generating code from comments, like Cucumber tests or software contracts. Regardless, it’s an excellent, language-independent pattern for writing better unit tests. The core idea is simple: Test a single method given a particular set of conditions, and name the unit test givenCondition_expectResult(). If you practice TDD and you write these tests as stubs first, your path from red light to green light starts with something like this for a findsert-style method:

@Test  // DEFINITELY use annotations with JUnit!
public void givenNotFound_returnNewRecord() { ... }
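As a sketch of where that stub might go, here’s the pattern fleshed out. Everything below is invented for illustration (there’s no real AdvertisementRepository), and plain checks stand in for JUnit assertions so the sketch runs on its own:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical "findsert" repository: find a record by key, or insert
// and return a new one. All names here are invented for illustration.
public class FindsertNamingSketch {
    static class AdvertisementRepository {
        private final Map<String, String> store = new HashMap<>();

        String findOrCreate(String key) {
            return store.computeIfAbsent(key, k -> "new-record-for-" + k);
        }
    }

    // In JUnit these would be @Test methods; the names follow the
    // givenCondition_expectResult pattern from the talk.
    static void givenNotFound_returnNewRecord() {
        AdvertisementRepository repo = new AdvertisementRepository();
        if (!repo.findOrCreate("ad-42").equals("new-record-for-ad-42")) {
            throw new AssertionError("expected a newly created record");
        }
    }

    static void givenExistingRecord_returnThatRecord() {
        AdvertisementRepository repo = new AdvertisementRepository();
        String first = repo.findOrCreate("ad-42");
        String second = repo.findOrCreate("ad-42");
        if (!first.equals(second)) {
            throw new AssertionError("findsert must not create a duplicate");
        }
    }

    public static void main(String[] args) {
        givenNotFound_returnNewRecord();
        givenExistingRecord_returnThatRecord();
        System.out.println("findsert naming sketch passed");
    }
}
```

Notice how the failure messages barely matter: the method name already tells you which condition-and-expectation pair broke.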

The one thing I might add here is a method name prefix depending on how the tests and production classes are structured. In the case of a service method, I’ll probably create separate JUnit classes for each, so the given-expects pattern is fine. For tests that might be a little more integration-y or about a class representing a persistent entity, it might look more like this:

@Test
public void nextSerialId_onInstantiation_defaultsToZero() { ... }

@Test
public void nextSerialId_onClosedStatus_throwsInvalidStateException() { ... }

That second line wraps when I preview it in my current (kind of horrible) WordPress theme; it’s definitely long for the blog page, but it’s not so bad in a code editor. How long is too long? I hesitate to say this given certain people’s (I mean you, MG!) predilection for long method names, but … Java has no defined limit on the length of a method name! Yikes. I’d say too long is not being able to uniquely identify the method name when eye-balling it in the JUnit runner window. I should further qualify that’s on a 1920-pixel-wide monitor with readable font sizes and standard Eclipse frame widths, for those of you who love to rules-lawyer such things into the realm of microfiche.
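A minimal sketch of the entity those methodName_onCondition_expectResult names might describe. The Advertisement class and its rules are hypothetical, and IllegalStateException stands in for the post’s InvalidStateException; plain checks replace JUnit assertions so the sketch is self-contained:

```java
// Hypothetical persistent entity, invented only to illustrate the
// methodName_onCondition_expectResult naming pattern.
public class SerialIdNamingSketch {
    static class Advertisement {
        enum Status { OPEN, CLOSED }

        private Status status = Status.OPEN;
        private long nextSerialId = 0;

        long nextSerialId() {
            if (status == Status.CLOSED) {
                // The post names InvalidStateException; the standard
                // IllegalStateException is the closest stand-in.
                throw new IllegalStateException("advertisement is closed");
            }
            return nextSerialId;
        }

        void close() { status = Status.CLOSED; }
    }

    static void nextSerialId_onInstantiation_defaultsToZero() {
        Advertisement ad = new Advertisement();
        if (ad.nextSerialId() != 0) {
            throw new AssertionError("expected a default of zero");
        }
    }

    static void nextSerialId_onClosedStatus_throwsInvalidStateException() {
        Advertisement ad = new Advertisement();
        ad.close();
        try {
            ad.nextSerialId();
            throw new AssertionError("expected an exception");
        } catch (IllegalStateException expected) {
            // the closed entity refused to hand out a serial id
        }
    }

    public static void main(String[] args) {
        nextSerialId_onInstantiation_defaultsToZero();
        nextSerialId_onClosedStatus_throwsInvalidStateException();
        System.out.println("serial-id naming sketch passed");
    }
}
```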

This approach yields some additional benefits atop normal TDD and unit testing. The presenter showed a few unit tests from well-known, highly-regarded open source projects. Having tests is good, but ambiguous names and single methods that test a bunch of unrelated things make understanding, updating, and expanding on those tests difficult. If the tests get too hard to understand, then they’ll meet @Ignore when inexplicable failures crop up. Contributors won’t expand or update the tests for new functionality if they have to understand too much unrelated code, test and otherwise, to write something useful. Having the developer name the test this way makes the developer think about what this test really does and should contain up front, guiding the test writing process rather than playing catch-up after it’s done, and making it easier for future developers to expand upon it.

There are cases where a single test method may cover more than one method call at a time, like a handful of easy “success” cases. Apply some common sense to avoid an explosion of teeny-tiny test methods, but also pay attention to how many valid separate cases you have. As the presenter said, it can be a clue that production classes which are too large or complex should be broken down.

The underlying principle here is the same as a test commenting style that MG got me into the habit of using: Start with one or more “Given …” condition comments, then a “When …” for what is being tested, and finally a series of “Then …” assertion comments. This is like the comment/pseudocode-to-code style I’ve used for production code since forever but applied to tests. Those given/when/then comments often became the most recent documentation of how code behaved during refactoring because when something depending on it misbehaved, somebody went in there and read those comments. Some of the need for embedding such comments goes away with it already being in the test method name, but how much depends on which side you take in the agile controversy over comments.
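As a sketch of that comment style inside a test body (the Account class here is a hypothetical stand-in, and plain checks replace JUnit assertions so it runs on its own):

```java
// A sketch of the Given/When/Then comment style. Account is invented
// for illustration; in JUnit this would be an @Test method.
public class GivenWhenThenSketch {
    static class Account {
        private int balance;

        Account(int openingBalance) { balance = openingBalance; }
        void withdraw(int amount)   { balance -= amount; }
        int getBalance()            { return balance; }
    }

    static void withdraw_onSufficientBalance_reducesBalance() {
        // Given an account with a starting balance of 100
        Account account = new Account(100);

        // When 40 is withdrawn
        account.withdraw(40);

        // Then the balance drops to 60
        if (account.getBalance() != 60) {
            throw new AssertionError("expected 60, got " + account.getBalance());
        }
    }

    public static void main(String[] args) {
        withdraw_onSufficientBalance_reducesBalance();
        System.out.println("given/when/then sketch passed");
    }
}
```

When the method name already carries the Given and the Then, the inline comments mostly earn their keep by marking where each phase starts in a longer test body.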

Yes, A Controversy Over Comments

The SoC presenter talked about self-documenting unit tests as the “what it does” while self-documenting production code shows the “how it does it”. I like how that dove-tails with TDD, since we (mostly) know the what first and figure out the how later. However, neither answers what can be the most important question when stumbling into old code or somebody else’s code: Why? Deadlines, bugs in APIs, old versions of libraries, or (worst of all) politics sometimes require code to do … questionable things. Those are cases where comments are always, always, ALWAYS a Good Thing(tm).

Some people take a dimmer view of comments overall. They say that even good commenting is wasted effort since it duplicates what the code does and doesn’t have the basic validation code gets by compiling. They say comments get confusing over time as code changes but comments don’t keep up. They say that code should be obvious in how it’s laid out, how it’s broken down into subroutines, and how its elements are named. They say comments should not be necessary. They say it’s a mark of the badly written, the badly structured, or the badly named when comments are necessary. That’s a bold statement.

There is some truth there, but I don’t completely subscribe to it. Developers need to weigh the pros and cons for more than themselves when deciding how much commenting is enough. They also need to weigh project factors like how complex the system is, how long it will be maintained, and by whom. That who is also a consideration; if the whole team cannot play by the same rules, it may not be worth trying to assure regular, high-quality commenting. If there’s only so much effort you can get the whole team to commit to, then getting them to use TDD and self-documenting unit tests may be the 20% effort that will get you to an 80% maintainable system.

Open office spaces? 64 percent say they’re a distraction – Philadelphia Business Journal

Philadelphia Business Journal is a staple of my RSS feed, but today’s follow-up article on open plan offices is the first time I felt compelled to leave a comment. Unfortunately they only allow comments through a Facebook or Yahoo! account, neither of which I want to associate with my professional identity or grant any “rights” to my professional content. Hello, LinkedIn support please!  Here’s the article followed by the comment I’d planned to make:

Open office spaces? 64 percent say they’re a distraction – Philadelphia Business Journal.

The author (Jared Shelly, @PHLBizJShell) takes exception with his readers’ responses, saying this: “Perhaps it all depends on your co-workers and office culture. If you have co-workers that want to dissect last night’s episode of The Voice for 45 minutes while you try to work, an open office design can certainly be a hindrance. But if your co-workers are mature about it, an open office can get people talking and be the springboard for new ideas and creativity.”

It also depends on your profession and your team’s location. If your job is low concentration and high interruption, and if your team is co-located, then open plans might work for you. I am a contract software engineer usually working on global teams; I dread contracts where I must be on site and work in open plans.

Having long periods of interruption-free time to concentrate on code is a necessity. I first saw the negative effects of open plans on “flow” documented in the classic Peopleware (DeMarco and Lister). That book, printed in 1987, explicitly discusses open plans; this is not a new idea, but it is a fad whose reappearance today is more about saving money than boosting productivity.

When I do collaborate, much of that time is spent in teleconferences and screen shares with New York, Chicago, and London. This can be very distracting to those around me who aren’t also on the call/share and who probably aren’t involved in my project at all. Imagine overhearing half of a nerdy, sometimes-heated conversation during an hours-long elevator ride; that’s open plan with programmers practicing team or peer programming.

One thing has changed (for the worse) since Peopleware was published: We’ve become a culture of interruption thanks to mobile phones and mobile social media. My reflex to a ringing phone has always been to silence rather than answer it. Now it’s less and less socially acceptable to respond to a text or email or selfie notification when the time is right rather than right now. I’m sure this is giving the fad of open plan a big boost. The Millennial temper tantrum of no boundaries between the personal and the private is being taken as a desirable end state instead of just teething pains with new technologies.

Goat Simulator: When Bugs Are Features

Just a few days ago, @crosswiredmind pointed out a strange little game he thought might interest me given our shared gaming history–Llamatron, anyone? So now it’s making the rounds in my geek and gaming blogs including this Ars Technica article:

Goat Simulator preview: Goat of the year | Ars Technica.


I may post more about it in other, more appropriate venues along with the unrelated but oddly personally relevant Bear Simulator [Bear Simulator is “like a mini Skyrim but you’re a bear” | Ars Technica], but something the developer said in the Ars article caught my attention as a fellow software developer:

Goat Simulator is currently ripe with glitches, particularly with the titular character’s bendy neck getting stuck in objects, but Ibrisagic pledged not to fix most of the problems in time for the game’s purported launch of April 1st (a date he insisted was no prank).

“I’m only fixing crashes,” he said. “Everything else is totally hilarious, so we’re keeping it.”

In ancient days, some of the craziest developers for the Commodore 64 and Amiga actually depended on flaws in the hardware, APIs, and operating system to push our platform beyond its limits. Of course the risk there is that we’d fix those bugs or correct those anomalies and break their games. In fact when we did our “clean slate” Workbench 2.0, we eventually backed out some purist changes to avoid breaking the especially cool stuff.

The deliberate retention of bugs in Goat Simulator strikes me as a brilliant kind of post-modern self-referential nod that’s possible now that computers and computer games (and bugs in computer software) have become so much a part of our everyday experience.  Like Marilyn’s mole, sometimes it’s the flaws that make mere beauty transcendent.

Related Links: Goat Simulator; Bear Simulator by Farjay Studios — Kickstarter; Llamatron – Wikipedia, the free encyclopedia

Microsoft Corp. names Philadelphia a Showcase City

Philly.com’s Heard in the Hall reports Microsoft Corp. names Philadelphia a Showcase City.  I had heard early in the year that Philadelphia wasn’t even on Microsoft’s original short list, so this may be a sign the Philly tech scene is starting to make itself heard.

It’s hard to say if Microsoft’s CityNext program is going to bring any real benefit soon; the city’s previous collaboration with the Redmond software company only really started hitting its stride recently [School of the Future: 10 years after concept]. The press this generates could be more valuable in the long run if it gets other tech companies interested in the city and helps give the local scene some much-needed confidence and solidarity.

 

Second @HacksHackersPHL Meetup talks Data and Demos

Hacks/Hackers Philly [@hackshackersPHL] held their second meetup on Leap Day, 29 February 2012, at the iconic Philadelphia Inquirer Building.  The Philadelphia chapter of Hacks/Hackers, a grassroots journalism organization examining the intersection of journalism and computing, co-hosted the event with philly.com, the online arm of Philly’s biggest newspaper. It was an interesting look into a world very different than the more familiar setting of Fortune 500 companies doing global projects.  This is computing in the trenches and newsrooms of Philadelphia.

Reporting from The Data and Demos Meetup …

The Inquirer Building
Stormy weather again over The Inquirer Building

Members of the Inquirer staff gave presentations during the first half of the meetup about how collecting, analyzing, and visualizing data works in the context of a large newspaper publisher without a large IT budget.  Some of the larger stories can take 6-12 months and a sizeable team to complete, like a study of violence in schools or the first fact-based comprehensive analysis of the impact of Philadelphia’s real estate tax reassessment. The data end of that often involves long fights for access to information; getting the goods from a “Freedom of Information” request is rarely as simple as asking. Even  agencies dabbling in transparency muddle analysis by changing data formats with the latest fad, and some withdraw from the web altogether after feeling the sting of disinfecting sunlight.  That may be the case with detailed public data on fracking operations in Pennsylvania gone missing without warning or explanation earlier this year.  The data journalism on the Marcellus controversy to that point was shining an unfavorable light on something our new administration in Harrisburg certainly favors without the burdens of regulation, taxation, and transparency.

Having heard from the hacks, the hackers talked about their projects in the second half.  The theme was social activism through social media: Cost of Freedom is a fledgling website to help voters get Photo IDs in states where laws have changed to disenfranchise young, old, and poor voters; Lobbying.ph provides a human-readable experience for exploring the Philadelphia lobbyist data recently made available by the city; WhoPaid is a prototype mobile app using the Shazam approach to identify political ads and who paid for them by capturing audio snippets on mobile phones.  Several of these applications came out of Random Hacks of Kindness, a movement around “technology for social good” that sponsors contests and hackathons.  Seeing my neighbors doing good works like this rekindles the pride I felt when I first called the Birthplace of American Liberty and City of Brotherly Love my home.

Another Case of Cautious Optimism

I’m often critical of Philadelphia’s low-tech standing despite being one of the country’s ten largest cities.  Finding a job in the city for a person like me is remarkably hard, partly due to a bad corporate tax structure and partly from a long history of first-but-no-longer claims to fame.  The meetup itself was amazing in organization, content, and participation.  The Inquirer hosts are facing another potential change of ownership, one in a series of such events that has underfunded and understaffed the newsroom across the board.  Several other attendees commented on how Philly’s tech scene is growing so slowly.  We have interested, capable people with good ideas; what’s the missing ingredient?  The bright side here is these are just the people with the journalistic and technical skills to figure that out.

Welcome to the Trans-PC World

My confidence in EMC Software has been growing over the last year.  Now that Lewis is out, I generally like the things I’m hearing from Gelsinger, Patel, and van Rotterdam.  What still bugs me is their fetish for the term post-PC which implies that the PC has no place in the workplace of tomorrow; I don’t think they mean it that way, but I’m a bit of a semantics geek and would suggest trans-PC instead.  Honestly, the iPhone is really just a personal computer in mobile phone drag.

You Better Work

I've just got one thing to iMessage...

My problem with the idea of a world without PCs is that people still need to get work done.  Tablets and phones make nice consumption devices, but I’m not going to code or pen my first novel on one.  It’s not a matter of CPU speed, storage capacity, or network access; a current-day smart phone is more powerful and connected than a PC ten years ago.  You may remember we got work done back then on those quaintly obsolete big beige boxes and monstrously-heavy cathode-ray tubes.

Two things make the PC the place where work still gets done: screen real estate and rich input devices (i.e., keyboards and precise pointing devices). I’ve posted before on the mistaken assumption that one thing must replace another [Reports of Mouse’s Death Greatly Exaggerated].  There are appropriate contexts for smart phones, tablets, laptops, and desktop PCs.  I often use two computers (1 desktop, 1 laptop) and my iPhone “simultaneously” exactly because it’s a physical way to define contexts for easy monitoring, switching, and sometimes ignoring.  Put in simple terms:  “Right tool for the job” necessitates having more than one tool in your toolbox.

Don’t expect the PC to go away; just expect it to cost more.  People generating serious content and playing serious games will still need real workstations and top-of-the-line gaming rigs.  However, the general demand for such devices will go down since tablets and phones are already seen as good enough for daily tasks like email, chat, and web browsing along with their superiority in portability-related domains like music and eBooks.  Lower PC prices have been as much about demand as technological advances, and demand is shrinking enough that industry players like Dell are trying to abandon the consumer PC market that made them [Dell unveils new servers, says not a PC company | Reuters].  The logical consequence is flattening or rising prices as overall demand shrinks and the remaining demand is focused on higher-end devices.

New User Is Old Hat

Game Changer

The fact is we’ve been in the trans-PC world since 1997 when the PalmPilot debuted.  It wasn’t that people hadn’t tried to make palmtop computers before–we had a few wacky prototypes at Commodore in the early 90s.  The PalmPilot succeeded because it was the first device that didn’t try to be a PC itself; it was something that augmented your PC by sharing data (via cabled synchronization) and being mobile.  Unboxing my first PalmPilot was the moment where I really got the idea of the context a device brings to my data.  With everything living in the cloud, PCs aren’t the personal data overlords they once were.  They are one among many devices available for us to find, use, or create content in the most appropriate context at the moment.

Social (In The Enterprise) Like a Disease

It shouldn’t be a surprise that Social in the Enterprise isn’t an automatic win.  If you wondered if buzzword compliance was the driving force rather than compelling use cases, Virginia’s article in CMS Wire yesterday [Social Business in 2012: Like Having a Party and No One Shows Up] will likely confirm your suspicions.  Social fever has come to the workplace, and the prognosis so far isn’t great.

First, my personal experience with being in the Social driver’s seat. Then, a case where I’ve seen Social in the Enterprise be the cure instead of the disease.

Back on the Chain Gang

My next LinkedIn profile picture

Part of my community service for being between is serving as chair of my condominium association’s communications committee. In that role, I attended a breakfast seminar sponsored by the Greater Philadelphia Condominium Managers Association (GPCMA) on social media awareness in January.  There were three speakers:  a lawyer raining down real-world worst-case scenarios who advocates making new residents sign online anti-defamation agreements; an insurance professional explaining that there haven’t been any real test cases so the industry has no benchmarks for insuring against social-media-related liabilities; a social media expert saying “it’s great, you have to do this” but not really explaining why.  The take-away for most people that day was “protect yourself” instead of “let’s make beautiful music together”.

Having worked in the realms of regulation and compliance, I’m conditioned to be cautious (ok, paranoid).  After that meeting, I’ve been reluctant to pull my building into the 21st Century for fear of demonstrable risks, no clear remedies when something bad happens, and a nebulous value proposition.  Some residents are part of a years-old Yahoo! Group already, and observing the level of discourse (ok, lurking) makes me doubly-reluctant.  Still, I feel the pressure from my “board” to do something because of the condominium Council’s broad, vague mandate to “have a website or something”.  So, here’s my plan so far:

1. Perform an owner/resident survey to assess the interests and capabilities of the community. Making decisions, especially ones that affect my building’s budget, is something I don’t like to do blind.  Also important is generating a participatory feeling: I’ve seen enough good-idea projects flounder because they failed to involve their target communities in the beginning.

2. Get building notifications to people electronically, via their preferred medium, to supplement the current memo-under-the-door method.  Push isn’t social per se, but it’s a step along the way to getting residents to think about the building as a place online as well as in real life.  Unlike a workplace, I have to deal with inconsistent tools and capabilities–we don’t have email addresses for all of our residents, and there are probably some who don’t even have email at all! I have to accommodate 80-year-old retirees who’ve lived in the building since it opened as well as 20-something professionals moving in tomorrow.  This is definitely a frog I need to boil slowly.

3. Think a step ahead of the Council.  My mandate is a curious artifact of politics and years of “we should have a website” not leading to results.  The problem is I don’t think a traditional website makes sense anymore.  Leveraging public, cloud-based solutions when I don’t have an IT staff or dedicated community managers on hand is a more sustainable approach, and this is a case where I have no desire to reinvent the wheel with a local website developer.

I sympathize with executives who feel compelled to “do Social” from above and where expectations below come from the not-obviously-relevant personal realms of Facebook and Twitter.  It’s a 360 mismatch of expectation and reality, but there may be a larger problem with smaller networks: Part of Social’s value equation is Metcalfe’s Law, and a corollary would be that there’s some critical mass of active connections required for a social network to sustain itself.  Chat and dating sites (and now MMOs) need to have plenty of free participants in order to make their services worth it to their smaller base of paying customers.  A pure-paid chat site isn’t going to work, and the subscription-only MMO is becoming a rare thing only feasible for the biggest intellectual properties.

What if most companies aren’t big enough to have a self-sustaining network?  Without dedicated evangelists, there may not be enough regular traffic to keep people interested.  That’s one of my big concerns as chair: My building is large as buildings go with almost 600 units, but it’s microscopic from the Social perspective.
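To put a rough number on that intuition: Metcalfe’s Law values a network by its potential pairwise connections, n × (n − 1) / 2. A quick sketch (the 600-unit figure is from this post; the 10,000-employee comparison is purely illustrative):

```java
// Metcalfe's Law sketch: potential pairwise connections in a network
// of n members is n * (n - 1) / 2. The 600-unit building is from the
// post; the 10,000-employee company is an illustrative comparison.
public class MetcalfeSketch {
    static long potentialConnections(long n) {
        return n * (n - 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println("600-unit building:   " + potentialConnections(600));
        System.out.println("10,000-person company: " + potentialConnections(10_000));
    }
}
```

The raw pair count grows fast, but the post’s point stands: only a small fraction of those pairs will ever be active connections, so a small network may never reach the critical mass that keeps people coming back.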

Hailing Frequencies Open

I always thought of myself as Spock, but...

Virginia references a Harvard Business Review blogger who seems to confuse collaboration with Social; they are different but not mutually exclusive things. Collaboration’s about projects, about people coming together to produce a result. Social doesn’t have a clearly-identifiable objective; e.g., sharing pictures isn’t an objective, it’s an activity.  On the consumption side, it’s more like discovery than development, and it may produce collaboration as a by-product. I recently saw the two dove-tail nicely.

A former client is switching from Lotus Notes to SharePoint.  They were a big Notes shop with a massive, highly-customized installation.  Despite its age, their Notes platform is still the best collaborative environment I’ve seen professionally, and I miss it when working at other clients with more COTS solutions.  SharePoint itself is a very different beast, and their implementation being COTS is generating lots of head scratching after years with such a custom-fitted solution.

These are IT people in Pharma R&D, so it wasn’t surprising when some of the collaboration tool mavens started documenting hacks, work-arounds, and equivalents on their somewhat-informal wiki farm.  This kind of living documentation is a great way to manage the early adoption stage where understanding is low and the delivered functionality changes rapidly.  I love wikis for things like this; it works amazingly well in the world of poorly-documented, frequently-changing MMOs.

Social factored into the equation when people started posting questions and complaints to an internal Twitter-like tool.  Before, most people found those excellent wiki resources by the luck of their acquaintance with a maven or other person in the know.  Now, those “help me” posts would get “follow this link” replies.  I suspect another thing was going on here; the questions probably began shaping what was on the wiki, especially when a new release introduced new bugs or changed functionality.

The mechanism here may be that Social identifies the problem and collaboration solves it, both working much faster than traditional channels. I have to wonder how much the “semi-supported” status and informal, undirected tone of those services contributed to the positive outcome.  It requires a hands-off approach–and trust–from management that I can understand as an information professional.  But, can I embrace it as management in my role as committee chair?  That’s proving to be surprisingly difficult.