Will Google+ bring relevance to social networking?

Google’s latest offering may finally bring relevance to social networking. Google+ Circles let people target content to subsets of their social graphs. No more blurting out your weekend escapades to bosses or making friends’ eyes bleed with war-and-peace posts about dm_folder implementation. Faaabulous!

Official Google Blog: Introducing the Google+ project: Real-life sharing, rethought for the web.

This isn’t just about hiding potentially embarrassing facts from prospective employers; it’s about targeting content to the audiences most likely to find it interesting. Current social media systems like Twitter and Facebook don’t get this. With Twitter in particular, I work around it by maintaining a half-dozen accounts, restricting who I follow and what I post by theme–personal, professional, gaming, etc. Hacks like this make for more work and are prone to mis-posts; they discourage posting the way wading through live-tweeted baseball games or diaper anecdotes discourages reading.

Identity is also an issue: services like Facebook and LinkedIn expose our real-life names to the virtual world, something I feel more acutely because of this eponymous blog and my professionally-oriented Twitter account. No matter how clever and diligent I am, I can’t keep noise out of my professional channel when less savvy friends and relatives forget to use my non-eponymous identity for personal messages. Social search, realtime results, and consolidated logins will make this everybody’s problem in a few years.

Google’s social networking track record isn’t great–witness Orkut, Buzz, and Wave. Google+ doesn’t appear to suffer from the unpolished look and feel that may have hampered those earlier efforts, and features like Circles address some fundamental design flaws in established products. However, the better product doesn’t always win, and Google will have to convince people to leave existing services. That’s a Catch-22 because the value of a network depends on its size, and it’s compounded because members of those networks don’t understand issues of identity, privacy, and relevance.

Call me cynical, but I think the odds are stacked against Google+. How many people realize the value of regular backups before losing everything to a crashed disk or lost laptop? Those same people won’t see why leaving Facebook for Google+ makes sense until a careless post costs them a job or a spouse. Please, Google, prove me wrong.

Reports of Mouse’s Death Greatly Exaggerated

“When I get what I want I never want it again.”
— Courtney Love, “Violet” (Hole, Live Through This)

I’ve been clamoring for something better than the mouse for more than a decade. My ideal interface would unite action with result, eliminating the perceptual disconnect between moving my hand in one place and seeing the result in another. The iPhone was the first thing in all that time to feel like a real breakthrough along those lines, and there’s much that personal computer interfaces can learn from touch on phones and tablets. That doesn’t mean copying them verbatim and proclaiming previous paradigms completely invalid.

The second half of Erick Schonfeld’s TechCrunch article on Windows 8 [Windows 8 Is Gorgeous, But Is It More Than Just A Shell?] claims the mouse is dead. I beg to differ. Touch interfaces like this have their uses, but they also have limitations because they are oriented toward content consumption. We’re not living in a post-mouse era; we’re living in a post-one-size-fits-all era–the Windows Everywhere era is over. Touch interfaces will not obliterate mice and trackpads, for the following reasons.

IMPRECISION: The finger is an imprecise pointing device when pixels matter. Although I don’t have fat fingers, it’s rather difficult to touch a single pixel on a good monitor at 100 pixels per inch–let alone the Retina Display’s 326 PPI. You can select an image that way, but you can’t draw one. It’s not just about raw precision; pointing devices can transform imprecise hand movements into a variety of precision levels on screen. I love mice that let me dial up the resolution for fine work or dial it down when flailing around in a game.

OBSTRUCTION: Fingers block sight of a substantial number of pixels during touch activities, undermining the realtime, respond-where-you-act immediacy that makes touch appealing in the first place. That’s not an issue when tapping an oversized tile to select something, but it’s a big problem for precision movement or tracking.

INEFFICIENCY: Editing text sometimes requires switching between mouse and keyboard, even for a keyboard jockey who has mastered the shortcuts. Current pointing devices live within the same range of motion as the keyboard, so the switch is a small, minimally disruptive gesture. Now imagine reaching from the keyboard to the screen to drag-select or reposition a cursor; the gods of time-and-motion studies will not be pleased. Laptops might fare better than desktops with Windows 8, but touch could also add to the ergonomic train wreck they’ve become.

SMUDGINESS: People touching monitors is a huge pet peeve of mine. I’m a little more smudge tolerant with my iPhone, but I can’t imagine what my monitor would look like after just one working lunch on Windows 8. Just thinking about this makes me want to rush into the bathroom and wash my hands.

Touch technologies have existed for decades; I think the iPhone APIs ushered in this new era, not the hardware. Apple created a toolset that helps developers deal with the strengths and weaknesses of touch while giving users a consistent experience across applications. Mac OS X Lion appears to learn from touch interfaces, not emulate them. Apple realizes that operating systems need to match the devices they run on, a wisdom perhaps only earned by making both software and hardware. Microsoft should think very carefully before repeating its habitual strategic blunder of trying to make a one-size-fits-all Windows.

One good thing about Windows 8 preview

Making a computer behave like a web page on a tablet looks like a step backwards, albeit a pretty one, but there is one good thing in the following Microsoft video on Windows 8. The demonstrator half-drags another application onto the screen at 2:05 and 3:00 to create a resizable region so two apps can run side by side:

(Full article on microsoft.com)

This is a logical evolution of the one thing I like about Windows 7 and use regularly: window snapping. It’s also one step closer to a desktop that works like the Eclipse IDE, with various resizable panels for different elements like the object browser, code editor, and compilation messages. Apple is also taking a tablet-inspired approach in Mac OS X Lion. It’s about time operating system vendors stepped up and did something about the sad state of application window management, but I don’t expect Windows 8 or Mac OS X Lion to bring perfect solutions because of their inspiration: tablets make great content consumption devices, but I need something inspired by a content creation tool as my desktop.

In the meantime, I’m using a little open source app called Shift It to get something like window snapping on the Mac. It uses keyboard shortcuts rather than Windows-style border collision, which the Mac already reserves for its virtual desktop manager, Spaces. Here’s my somewhat-Eclipse-inspired desktop with the Shift It menu exposed:

My Desktop with Shift It
Dock and Twitter on one side, desktop files on the other, and a perfect half-width browser window (with too many tabs, as usual) in the middle. I sized and positioned Chrome by shifting it left and then center with Shift It.

Shift It isn’t a perfect solution. It sometimes leaves a window’s size or position a little off, and the top/bottom options are practically useless on a widescreen display. Most people won’t mind, but my particular pathological need to organize (also expressed by my obsession with The Container Store) demands precision, symmetry, and flexibility. Steve Jobs has a similar affliction, so I’m hoping Lion will be like digital Paxil for my application window management OCD.

When Relevance Attacks

I used to think all ads were spam. That sentiment had its origins in traditional media where some kind of adspace cosmological constant keeps pushing real content further and further apart, filling my field of view with a relevance vacuum of feminine hygiene, SUV, and sorority girl chat line commercials.

However, I’ve been coming around to V.’s way of thinking about ads in new media lately. I don’t mind as much because ads are at least loosely targeted to the topic-specific sites I frequent; sometimes I’m even grateful when I encounter something previously unknown and personally applicable.  That happens more when I’m willing to make Google privy to my electronic life or tell Hulu when ads are relevant. Old media pushes me further away while new media draws me into its less-adflated, more-relevant open arms.

So I came home from a MarkLogic event at The Palomar–ready to blog about some things they get that EMC is just starting to grasp–to find a new comment on an old post about the iPhone Clock app.  It was an ad (no surprise) but it was perfectly relevant to the post and even got me into iTunes to download the app (big surprise).  A few more things like this might even get cynical me to stop cringing whenever I see “please moderate” in my inbox.

The reward for that relevance is an extra plug.  I haven’t tried the app yet since there’s nothing to time at the moment, but I encourage you to take a look:

So, please do check out http://www.elapsedapp.com – you’ll find it to be a significant upgrade from the default Clock app. Oh, one last thing… its FREE!

Lies My Folder Objects Told Me

Pie is having some folder object problems [ Tip: A Documentum Folder’s Existential Crisis « Word of Pie ] and it’s no surprise to me.  I trust folder objects less than I trust an insane homicidal computer; at least I know for certain that GLaDOS is still trying to kill me.

I’ve fantasized for years about replacing dm_sysobject with something lightweight, implementing things like folder location and versioning as interfaces applied to that type as needed. Pie might not have had any hair-pulling to do if dm_folder weren’t dragging along all of dm_sysobject’s baggage. It’s another example of the junk DNA rife throughout Documentum’s API and schema.

Documentum did add lightweight objects a few years ago, but they fell far short of my fantasy. They turned out to be a hack for bulk object creation rather than a fundamental refactoring of the object schema. I wasn’t surprised; implementing my fantasy would be an upgrade/compatibility nightmare in which every single sysobject would have to be folded, spindled, and mutilated. Just the database portion of that upgrade could take days on big docbases, and it could fail in spectacular ways noticeable only long after the fact. Oh well.

It’s a lesson for the creators of new Documentum-like systems. There are obvious scaling problems at the database level if those interface abstractions turn into lots of separate tables and joins, assuming a relational infrastructure underneath. I think that’s still a safe assumption: object databases haven’t gotten much better, and NoSQL databases don’t seem, at first blush, to be a good fit for this problem space. I wonder: did Alfresco learn any of these lessons? Maybe I’ll go take a look under its hood and see.



Perl, I say!

At last week’s Philadelphia Perl Mongers meeting, I asked what’s new and exciting in Perl since I last used it exclusively, circa Perl 5.6.  Walt chimed in with say, a command that prints an expression and adds a new line at the end.  New and exciting?

Other languages have always had separate commands to display strings with or without trailing newlines. Embedding escape codes like \n, or leaning on Perl’s smarts around concatenation and context, works fine, so Perl never needed the separate commands some other languages have, like print and println or write and writeln. “So why now?” I asked.

Some of the semi-glib responses to my follow-up touched on the venerated trait of laziness among Perl programmers and joked about doing more with less (i.e., two fewer characters from print to say). I say semi-glib because both comments hold kernels of truth about Perl that originally drew me to the language and explain my continued frustration with the “newer” languages I now have to call bread and butter.

Perl has always been a programming language for the lazy, proud, and impatient. From personal experience, I’m more likely to use print with a “\n” than without, so that little bit of extra work, spread out over thousands and thousands of print “…” statements, does add up. There is some sense in a command whose default mode includes a linefeed.

Perl has also always been a language about packing–functionally and semantically. That’s earned it a reputation for being hackish or too clever for its own good, but anything taken to an extreme can be bad. My experience supports the perlish idea that less code written is less code to debug or relearn later. Some languages, especially those fond of methods on literal strings instead of operators, provide flexibility at the cost of verbosity and ugliness. The idea of packing more capability into two fewer characters is very, very Perl.

There’s talk of trying to reinvigorate the Perl base and recover some of the mindshare (and subsequent marketshare) that Perl’s lost over the last decade. I don’t think say will convince legions of .Net or Java programmers to switch, but I’ll definitely use it in my next script.

From the Perldocs on say:

  • say LIST
  • say

Just like print, but implicitly appends a newline. say LIST is simply an abbreviation for { local $\ = "\n"; print LIST }.

This keyword is available only when the “say” feature is enabled: see feature.

Beating a Dead dmHorse

Don't google "dead horse" for images.

My response to Pie’s Quality of Documentum Over the Years bears repeating, even if I am beating a dead dmHorse:

I started with version 2, back when I was just a newly-minted UNIX geek. One thing you missed in the transition to 4i was the introduction of the DFC. DMCL had a very UNIX feel: a simple, open API designed to be glued into any programming language. DFC was just Java then, with a COM layer growing over it later. That was also the point where the company became more marketing-driven and started chasing the Internet bubble at the expense of its existing clients.

Both were attempts to capitalize on hot topics of the time, Java and the Web. I never bought that the DFC would make a whole pool of talent available; Documentum’s about the model, not the means. However, the marketroids successfully reframed it. Hiring managers now believe they can take Java people and mold them into Documentum people, and I hear gasps of disbelief when I say Java and Visual Studio aren’t requirements for doing Documentum–a good Java programmer is not necessarily a good Documentum developer.

This Java mentality did increase the number of people with Documentum on their resumes, but the talent didn’t increase as much as the volume. It just diluted (and maybe tainted) the pool, making it harder to find good people in the now-murky waters.

The lack of focus then is what brings us to the lack of quality now. Innovation at the model and server level is rare, and frankly I don’t give Documentum much geek cred anymore because of it. Great ideas like BOF and Aspects are stapled onto an API rather than made an inherent part of the product. Too much work up the stack (and on vertical solutions) has made the product top-heavy and tottery. EMC continues to chase markets (e.g., case management) rather than concentrate on making a solid core product.