Beware of Chrome Desktop Apps With Multiple Accounts

If you use multiple Google accounts and try Google’s new Chrome Desktop Apps, particularly Hangouts, be sure to install them for all of your Google accounts. I realized this when my USB headset “stopped working” with Google Voice last week and I couldn’t fix it. This happens from time to time, usually when Google stealths out big changes to the Hangouts plugin. It usually resolves itself after I reauthorize the plugin to use the camera and microphone. This time, however, neither that nor any of the other regular tricks worked.

The problem took some digging, but it turns out that the Desktop App configuration on one Chrome profile silently overrides the plugin or web configuration on other profiles that don’t have the Desktop App version. The Hangouts device settings for one account (which are only visible when actually in a hangout–how dumb is that?) showed Headset for I/O but were being silently overridden by the other account’s Default I/O settings. Very confusing and frustrating.

To be fair, Google does give some warning in the Hangouts support pages–if you know to look. If you don’t have a different image for each Google account, you might not notice that the instructions for signing out didn’t actually work, and that the real issue is being signed into multiple accounts at once. It’s generally handy to set different profile pictures and themes on each account, including the picture under Gmail / Settings / My Picture.

A Good Idea, But …

One continuing annoyance I have with Google and cloud-based services in general is stealth updates. In the cloud, I have no control over what versions of software I use, and I usually don’t even know when anything has changed. Centralized, automatic updating seems like a good idea–even the holy grail of large IT organizations managing tens of thousands of desktops. The advantages of having all my data in one place but accessible from many devices and locations convinced me to live with the chronic pain of never being completely sure what my applications will look like or how they will behave from day to day.

The other pain associated with the cloud is using web apps instead of desktop apps. So far only Gmail itself gives me a better experience than desktop apps, as recent use reminded me. Google is feeling the limits of web apps and is trying to get a foothold on the desktop, and this bug was symptomatic of that push.

The Chrome browser was their beachhead, and Chrome Profiles were the next logical step in solving multiple account issues on the desktop that they more-or-less already solved in the Web. Chrome’s duality as browser and operating system is really clever; the browser they got everybody to install on their existing computers running Windows or OS X effectively becomes a virtual machine manager for an ecosystem of accounts and applications of their own making.

Unfortunately, as clever as Google is, something as ambitious and complex as stealthing an entire virtual operating system onto everybody’s computers requires as much persistence as cleverness, or more. Google’s fickle attitude towards other ambitious projects makes me wonder if they can find the commitment to make Desktop Apps a survivor like Gmail, or if it’s doomed to the dust bin like Plus and Reader and Wave and many others before it.


Pretty Hate (Java Virtual) Machine

The Terrible Lie of Java has been “Write once, run anywhere”. It was a Sin of omission because it’s really “Write once, run anywhere, install repeatedly”. Platform independence requires lots of disk space and lots of downloading.

Now Oracle only considers “their” versions Sanctified, and JRE/JVM versions before “their” 1.7 are Something I Can Never Have if I’m not willing to create an Oracle account and register like some kind of byte code hex offender. You really should put something on my Ringfinger before asking me such intimate questions just to download something already on millions of computers.

I had to multi-purpose my gaming rig to run SQL Server for a project, and that project also includes working on a Java client running on 1.6. So now I’m Down In It, managing multiple JREs and JVMs on a box meant to remain simple and single-focused. Maybe it’s time to buy a new game rig or a dedicated work machine. The work machine wouldn’t have to be heavy-duty, and the current gaming rig really isn’t that old, but Kinda I Want To buy the latest, hottest box I can. Hottest, literally. My gaming rigs double as space heaters in winter.
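When juggling multiple runtimes like that, the first sanity check is always which JRE a given launcher actually picked up. A minimal sketch (class name mine, nothing project-specific) that asks the running JVM itself via its standard system properties:

```java
// Prints which Java runtime is actually executing this code; handy when
// several JREs/JVMs are installed and PATH or launcher config is in doubt.
public class WhichJava {
    public static void main(String[] args) {
        System.out.println("java.version = " + System.getProperty("java.version"));
        System.out.println("java.home    = " + System.getProperty("java.home"));
        System.out.println("java.vendor  = " + System.getProperty("java.vendor"));
    }
}
```

Compile it once with the oldest compiler you have and it will run under any of the installed JREs, telling you which one the launcher really chose.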

Now that I’ve crossed that line, I’m getting hate boxes and nastygram pop-ups from Windows, Internet Explorer–AND NOW EVEN ORACLE! That’s What I Get for doing my job. I clicked all the ignore boxes I could, but I have a feeling this won’t be The Only Time I have to do it.

Why must managing multiple JREs/JVMs feel like having a nine inch nail hammered into your Head Like A Hole? At least it gives me an idea for a song or two.



Self Documenting Unit Tests

Naming JUnit methods hasn’t been my strong point. My general obsession with naming conventions outstripped my creativity for specific unit test names during much of my last project. Darker moments birthed monstrosities like AdvertisementTest001(), AdvertisementTest002(), etc. Finding good names for classes, public methods, variables, and database columns is always on my mind; that’s why (a link to) a trusty thesaurus should always be at hand. So my inability to make good, consistent unit test names vexed me.

I had a trick for regression tests, though. I’d use the JIRA ID as the method name: e.g., public void jiraProjectNameIssueNumber(). Anybody could look up the issue and get the entire history when needed, and some particularly infamous issues were immediately recognizable when a too-well-known number showed up in the runner’s failed list. This I liked, and now I have something that I may like for actual unit tests thanks to the January 2015 Software As Craft topic, Russell Gold’s talk on Executable Documentation.

A better title might have been Self-Documenting Unit Tests, since several of us were expecting this to be about generating code from comments, like Cucumber tests or software contracts. Regardless, it’s an excellent pattern for making better unit tests, and it’s language-independent. The core idea is simple: test a single method given a particular set of conditions, and name the unit test givenCondition_expectResult(). If you practice TDD and write these tests as stubs first, your path from red light to green light starts with something like this for a findsert-style method:

@Test  // DEFINITELY use annotations with JUnit!
public void givenNotFound_returnNewRecord() { ... }
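As a concrete sketch of where that stub might end up, here’s a toy findsert (find-or-insert) repository and its test. The WidgetRepository class, its backing map, and the record format are all invented for illustration; only the naming pattern comes from the talk:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical repository: findsert returns the existing record for a key,
// or inserts and returns a new one when the key isn't found.
class WidgetRepository {
    private final Map<String, String> store = new HashMap<>();

    String findsert(String key) {
        return store.computeIfAbsent(key, k -> "new-record-for-" + k);
    }
}

public class FindsertSketch {
    // In a real suite this would carry the JUnit @Test annotation.
    static void givenNotFound_returnNewRecord() {
        WidgetRepository repo = new WidgetRepository();
        String record = repo.findsert("unknown-key");
        if (record == null) throw new AssertionError("expected a new record");
    }

    public static void main(String[] args) {
        givenNotFound_returnNewRecord();
        System.out.println("ok");
    }
}
```

The test name alone tells a maintainer the condition (nothing found) and the expected result (a new record) without opening the body.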

The one thing I might add here is a method name prefix depending on how the tests and production classes are structured. In the case of a service method, I’ll probably create separate JUnit classes for each, so the given-expects pattern is fine. For tests that might be a little more integration-y or about a class representing a persistent entity, it might look more like this:

public void nextSerialId_onInstantiation_defaultsToZero() { ... }

public void nextSerialId_onClosedStatus_throwsInvalidStateException() { ... }

That second line wraps when I preview it in my current (kind of horrible) WordPress theme; it’s definitely long for the blog page, but it’s not so bad for a code editor. How long is too long? I hesitate to say this given some people (I mean you, MG!) and their predilection for long method names, but … Java has no defined limit on the length of a method name! Yikes. I’d say too long is not being able to uniquely identify the method name when eye-balling it in the JUnit runner window. I should further qualify that’s on a 1920-wide monitor with readable font sizes and standard Eclipse frame widths for those of you who love to rules-lawyer such things into the realm of microfiche.

This approach yields some additional benefits atop normal TDD and unit testing. The presenter showed a few unit tests from well-known, highly-regarded open source projects. Having tests is good, but ambiguous names and single methods that test a bunch of unrelated things make understanding, updating, and expanding on those tests difficult. If the tests get too hard to understand, then they’ll meet @Ignore when inexplicable failures crop up. Contributors won’t expand or update the tests for new functionality if they have to understand too much unrelated code, test and otherwise, to write something useful. Naming the test this way makes the developer think up front about what the test really does and should contain, guiding the test writing process rather than playing catch-up after it’s done, and making it easier for future developers to expand upon it.

There are cases where a single test method may exercise more than one method call at a time, like a handful of easy “success” cases. Apply some common sense to avoid an explosion of teeny-tiny test methods, but also pay attention to how many valid separate cases you have. As the presenter said, it can be a clue that a production class is too large or complex and should be broken down.

The underlying principle here is the same as a test commenting style that MG got me into the habit of using: Start with one or more “Given …” condition comments, then a “When …” for what is being tested, and finally a series of “Then …” assertion comments. This is like the comment/pseudocode-to-code style I’ve used for production code since forever but applied to tests. Those given/when/then comments often became the most recent documentation of how code behaved during refactoring because when something depending on it misbehaved, somebody went in there and read those comments. Some of the need for embedding such comments goes away with it already being in the test method name, but how much depends on which side you take in the agile controversy over comments.
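A minimal sketch of that comment style, using a toy SerialTicket class invented just for illustration (in a real suite the test method would carry a JUnit @Test annotation):

```java
// Toy class invented for illustration only.
class SerialTicket {
    private int nextSerialId = 0;
    int nextSerialId() { return nextSerialId++; }
}

public class GivenWhenThenSketch {
    // The name carries the what; the comments retell it inside the body.
    static void nextSerialId_onInstantiation_defaultsToZero() {
        // Given a freshly created ticket
        SerialTicket ticket = new SerialTicket();
        // When we ask for the next serial id
        int id = ticket.nextSerialId();
        // Then it starts from zero
        if (id != 0) throw new AssertionError("expected 0, got " + id);
    }

    public static void main(String[] args) {
        nextSerialId_onInstantiation_defaultsToZero();
        System.out.println("ok");
    }
}
```

How much of the Given/When/Then story to keep as comments versus pushing it all into the method name is exactly the judgment call the comment controversy turns on.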

Yes, A Controversy Over Comments

The SoC presenter talked about self-documenting unit tests as the “what it does” where self-documenting production code shows “how it does it”. I like how that dovetails with TDD, since we (mostly) know the what first and figure out the how later. However, neither answers what can be the most important question when stumbling into old code or somebody else’s code: why? Deadlines, bugs in APIs, old versions of libraries, or (worst of all) politics sometimes require code to do … questionable things. Those are cases where comments are always, always, ALWAYS a Good Thing(tm).

Some people take a dimmer view of comments overall. They say that even good commenting is wasted effort since it duplicates what the code does and doesn’t have the basic validation code gets by compiling. They say comments get confusing over time as code changes but comments don’t keep up. They say that code should be obvious in how it’s laid out, how it’s broken down into subroutines, and how its elements are named. They say comments should not be necessary. They say it’s a mark of the badly written, the badly structured, or the badly named when comments are necessary. That’s a bold statement.

There is some truth there, but I don’t completely subscribe to it. Developers need to weigh the pros and cons for more than themselves when deciding how much commenting is enough. They also need to weigh project factors like how complex the system is, how long it will be maintained, and by whom. That who is also a consideration; if the whole team cannot play by the same rules, it may not be worth trying to ensure regular, high-quality commenting. If there’s only so much effort you can get the whole team to commit to, then getting them to use TDD and self-documenting unit tests may be the 20% effort that gets you to an 80% maintainable system.

Open office spaces? 64 percent say they’re a distraction – Philadelphia Business Journal

Philadelphia Business Journal is a staple of my RSS feed, but today’s follow-up article on open plan offices is the first time I felt compelled to leave a comment. Unfortunately they only allow comments through a Facebook or Yahoo! account, neither of which I want to associate with my professional identity or grant any “rights” to my professional content. Hello, LinkedIn support please! Here’s the article followed by the comment I’d planned to make:

Open office spaces? 64 percent say they’re a distraction – Philadelphia Business Journal.

The author (Jared Shelly, @PHLBizJShell) takes exception to his readers’ responses, saying this: “Perhaps it all depends on your co-workers and office culture. If you have co-workers that want to dissect last night’s episode of The Voice for 45 minutes while you try to work, an open office design can certainly be a hindrance. But if your co-workers are mature about it, an open office can get people talking and be the springboard for new ideas and creativity.”

It also depends on your profession and your team’s location. If your job is low concentration and high interruption, and if your team is co-located, then open plans might work for you. I am a contract software engineer usually working on global teams; I dread contracts where I must be on site and work in open plans.

Having long periods of interruption-free time to concentrate on code is a necessity. I first saw the negative effects of open plans on “flow” documented in the classic Peopleware (DeMarco and Lister). That book–printed in 1987–explicitly discusses open plans; this is not a new idea, but a fad whose reappearance today is more about saving money than boosting productivity.

When I do collaborate, much of that time is spent in teleconferences and screen shares with New York, Chicago, and London. This can be very distracting to those around me who aren’t also on the call/share and who probably aren’t involved in my project at all. Imagine overhearing half of a nerdy, sometimes-heated conversation during an hours-long elevator ride; that’s open plan with programmers practicing team or peer programming.

One thing has changed (for the worse) since Peopleware was published: we’ve become a culture of interruption thanks to mobile phones and mobile social media. My reflex to a ringing phone has always been to silence rather than answer it. Now it’s less and less socially acceptable to respond to a text or email or selfie notification when the time is right rather than right now. I’m sure this is giving the open plan fad a big boost. The Millennial temper tantrum of no boundaries between the personal and the public is being taken as a desirable end state instead of just teething pains with new technologies.

Goat Simulator: When Bugs Are Features

Just a few days ago, @crosswiredmind pointed out a strange little game he thought might interest me given our shared gaming history–Llamatron, anyone? So now it’s making the rounds in my geek and gaming blogs including this Ars Technica article:

Goat Simulator preview: Goat of the year | Ars Technica.

I may post more about it in other more-appropriate venues, along with the unrelated but oddly, personally relevant Bear Simulator [Bear Simulator is “like a mini Skyrim but you’re a bear” | Ars Technica], but something the developer said in the Ars article caught my attention as a fellow software developer:

Goat Simulator is currently ripe with glitches, particularly with the titular character’s bendy neck getting stuck in objects, but Ibrisagic pledged not to fix most of the problems in time for the game’s purported launch of April 1st (a date he insisted was no prank).

“I’m only fixing crashes,” he said. “Everything else is totally hilarious, so we’re keeping it.”

In ancient days, some of the craziest developers for the Commodore 64 and Amiga actually depended on flaws in the hardware, APIs, and operating system to push our platform beyond its limits. Of course the risk there is that we’d fix those bugs or correct those anomalies and break their games. In fact when we did our “clean slate” Workbench 2.0, we eventually backed out some purist changes to avoid breaking the especially cool stuff.

The deliberate retention of bugs in Goat Simulator strikes me as a brilliant kind of post-modern self-referential nod that’s possible now that computers and computer games (and bugs in computer software) have become so much a part of our everyday experience.  Like Marilyn’s mole, sometimes it’s the flaws that make mere beauty transcendent.

Related Links: Goat Simulator; Bear Simulator by Farjay Studios — Kickstarter; Llamatron – Wikipedia, the free encyclopedia

Microsoft Corp. names Philadelphia a Showcase City

Microsoft’s Heard in the Hall reports Microsoft Corp. names Philadelphia a Showcase City.  I had heard early in the year that Philadelphia wasn’t even on Microsoft’s original short list, so this may be a sign the Philly tech scene is starting to make itself heard.

It’s hard to say if Microsoft’s CityNext program is going to bring any real benefit soon; the city’s previous collaboration with the Redmond software company only really started hitting its stride recently [School of the Future: 10 years after concept]. The press this generates could be more valuable in the long run if it gets other tech companies interested in the city and helps give the local scene some much-needed confidence and solidarity.


Second @HacksHackersPHL Meetup talks Data and Demos

Hacks/Hackers Philly [@hackshackersPHL] held their second meetup on Leap Day, 29 February 2012, at the iconic Philadelphia Inquirer Building. The Philadelphia chapter of Hacks/Hackers, a grassroots organization examining the intersection of journalism and computing, co-hosted the event with the online arm of Philly’s biggest newspaper. It was an interesting look into a world very different from the more familiar setting of Fortune 500 companies doing global projects. This is computing in the trenches and newsrooms of Philadelphia.

Reporting from The Data and Demos Meetup …

Stormy weather again over The Inquirer Building

Members of the Inquirer staff gave presentations during the first half of the meetup about how collecting, analyzing, and visualizing data works in the context of a large newspaper publisher without a large IT budget. Some of the larger stories can take 6-12 months and a sizeable team to complete, like a study of violence in schools or the first fact-based comprehensive analysis of the impact of Philadelphia’s real estate tax reassessment. The data end of that often involves long fights for access to information; getting the goods from a “Freedom of Information” request is rarely as simple as asking. Even agencies dabbling in transparency muddle analysis by changing data formats with the latest fad, and some withdraw from the web altogether after feeling the sting of disinfecting sunlight. That may be the case with the detailed public data on fracking operations in Pennsylvania that went missing without warning or explanation earlier this year. The data journalism on the Marcellus controversy had, to that point, been shining an unfavorable light on something our new administration in Harrisburg certainly favors without the burdens of regulation, taxation, and transparency.

Having heard from the hacks, the hackers talked about their projects in the second half. The theme was social activism through social media: Cost of Freedom is a fledgling website to help voters get photo IDs in states where laws have changed to disenfranchise young, old, and poor voters; another project provides a human-readable experience for exploring the Philadelphia lobbyist data recently made available by the city; WhoPaid is a prototype mobile app using the Shazam approach to identify political ads and who paid for them by capturing audio snippets on mobile phones. Several of these applications came out of Random Hacks of Kindness, a movement around “technology for social good” that sponsors contests and hackathons. Seeing my neighbors doing good works like this rekindles the pride I felt when I first called the Birthplace of American Liberty and City of Brotherly Love my home.

Another Case of Cautious Optimism

I’m often critical of Philadelphia’s low-tech standing despite its being one of the country’s ten largest cities. Finding a job in the city for a person like me is remarkably hard, partly due to a bad corporate tax structure and partly from a long history of first-but-no-longer claims to fame. The meetup itself was amazing in organization, content, and participation. The Inquirer hosts are facing another potential change of ownership, one in a series of such events that has left the newsroom underfunded and understaffed across the board. Several other attendees commented on how slowly Philly’s tech scene is growing. We have interested, capable people with good ideas; what’s the missing ingredient? The bright side here is that these are just the people with the journalistic and technical skills to figure that out.