The Internet is the computer

With its Premier Edition, Google Apps may make the virtual workplace a reality for larger corporations. Other concerns, like Yahoo and BlueTie, have been offering virtual-corporation email and more for some time. Several online office suites, like Google Apps, and WebOS applications are ready to take it to the next level.

Undoubtedly, some organizations might hesitate to keep essential records on someone else’s hardware. But, realistically, most hosting services have better backup procedures than most corporate houses. Outfits like BlueTie, Yahoo, and Google are definitely leaders in uptime and disaster recovery planning.

I daresay that if all the records in New Orleans had been stored on the Googleplex, we would not have lost so much data to Katrina. And that’s something to think about. We like to fret about terrorist attacks, but year after year, we lose much more to natural disasters. The more data we have backed up on redundant virtual systems, the less data we will lose the next time a flood, quake, or hurricane strikes. Lest we forget, the original Internet was designed to survive a nuclear disaster.

And inch by inch, the Internet is becoming omnipresent. We can already access the Internet from anywhere we can get cell reception. I’ve watched geeks blog from conferences in real time by hooking a laptop up to a cell phone. Right now, that kind of connectivity is an additional fee that most of us don’t want to pay, but that will change.

Online banking is already old hat, for both personal and business use. My business account at HSBC is so secure that I can barely use it myself. My username and password are longer than usual, and I have to get a magic number from an LCD dongle on every login. At one point, I needed my password reset, and I had to present an affidavit to the local branch. (Really, I did!) Though, I suppose a Klingon-secure bank account is not a bad thing :)

I flew to Milwaukee in January to host a training course, and I was pleased to find free wifi connectivity in every airport along the way, including our own itsy-bitsy Rochester International. If I had brought my HSBC dongle, I could have even done some banking!

Today, running your business over the Internet is almost inescapable. If you make payroll deductions, you may even be compelled to deposit withholdings online.

Cell phones too are almost inescapable. It was one thing when single adults did without landlines, but now the nuclear family next door is doing the same. I have friends with a teenage son who don’t have a home phone: just three cell phones. We’d consider doing the same, but we have a Time Warner “all in one account”, so the home phone doesn’t cost much more. It’s not even a real landline. It’s telephony-over-cable-over-Internet. But, hey, the long distance is free!

(Just who is Jack Bauer’s cell phone carrier? I’d love a magical phone that can get reception even in an abandoned sewer tunnel, with a battery that never runs dry. I’m surprised that Verizon hasn’t snapped Kiefer up as a spokesperson!)

The next step in cell phone evolution is likely to be “stationary” lines that can share cell minutes. That way people could still have a phone at home and another on the go, without having two bills to pay. Or, we could just cut to the chase and get on with the subdermal iPhone implants!

When not taking calls on our cell phones, my office orders our W-2s online, and we back up key files to a remote Subversion repository. We’ve been experimenting with sharing Google Applications too. I seem to have a learning disability when it comes to graphics software and spreadsheets. But the Google spreadsheet is easy enough to use, even for a knucklehead like me.

Of course, being an ASF geek, what I like best about Google Apps is the built-in collaboration. You can share a document or spreadsheet with any other Google user, or with the public at large, either read-only or read-write. That Google Apps bakes sharing in is truly joyful, since collaboration is the answer.

I work remotely with clients over the Internet, most often to pair-program. We’ve tried a number of products, including Windows Messenger and RealVNC. Our new best friend is an add-on to Skype, called Unyte. You can make the telephone-over-internet call with Skype, and then bring up application sharing with Unyte. The free version generously allows up to four people to conference into your desktop.

We seem to be experiencing a bit more latency with Unyte, especially when a remote caller tries to take control. But, Unyte is able to connect through my client’s firewall, which has been a problem with Messenger lately. (RealVNC still works, but only over the VPN, and only if the sharing happened from my client’s side.)

I know many corporations still value “seat time”, but using open source practices, it’s easy to track what I’m doing from wherever I sit. All of my work ends up in Subversion, Confluence, or JIRA (are these ever going to be one product?), all of which post to a mailing list as I work. My frequent commits are mainly so that I can roll back to a working version if I break something, but they also document my work for my colleagues, step by step. It’s easy for anyone following the list to see how much I’ve done, and when.

Minute to minute, it’s likely that I’m the best documented worker in the enterprise :)

WebDev Pushmi-Pullyu

As might be expected, the Struts 2 GA announcement had its share of comments on The Server Side last week.

One subtopic was push versus pull. As with many terms, I think we sometimes use “push” and “pull” to mean different things. Sometimes we mean it to contrast component versus action paradigms. Other times, we mean to contrast creating a custom context (or API) for each page that exposes only what that page is supposed to know (push), with creating a global context (or API) that is exposed to every page, so each page can pick and choose whatever it wants (pull).

Another use of the “push/pull” terms is to contrast “merge” templates, like Velocity and FreeMarker, with “scriptlet” templates, like ASP, JSP, and PHP. In this usage, the point is whether it is better to push a prepared context out to the page, or whether the page should use scriptlets to pull values from the platform’s shared context.

One benefit of push is that it is easier to use the technology outside of the environment, since we can create a prepared context independent of the target platform. One benefit of pull is that it’s easier to share values with other application resources, since the context is shared.

Struts 1 tends to muddle this kind of push and pull. ActionForms are push, but we also provide a lot of servlet attributes which pages need to pull from one of the platform’s scopes (request, session, or application). The Velocity support for Struts uses a chained context to provide access to a Velocity context as well as the platform contexts.

Struts 2 creates its own context that includes references to the servlet scopes (as plain-old Maps). In this way, S2 provides the benefits of both push and pull. For testing, it’s easy to create our own action context, and at runtime, we can access the usual servlet resources. Another benefit of wrapping pull-within-push is that we can provide “first class” tag support for JSP, FreeMarker and Velocity.

Personally, I’m a fan of the template approach. The Struts 1 tags mitigated the damage JSP scriptlets were causing back in the day; before JSTL, stock JSPs were an ugly, inelegant mess. (And before Velocity people got involved in JSTL, the JSTL was a mess too.)

If there is a single reason why Struts 1 was so successful, it was because we provided a JSP taglib when everyone else (Barracuda, JPublish, Maverick, Tapestry, Turbine, among others) was focused on templates and other alternative solutions.

Over the years, I’ve consulted with some large concerns that standardized on templates pre-Y2K. The technology worked well, but my clients eventually replaced the templates with Struts and JSPs. Not because JSP was “better”, but because JSP worker drones are easy to hire. As Craig said, project managers tend to choose “mainstream” technologies, regardless. We already have a hammer, so every problem must be a nail.

Ironically, Struts 2 “levels the playing field”, so that “alternative” technologies like Velocity, FreeMarker, and AJAX are on equal footing with “mainstream” technologies like JSP, JSF, and, well, AJAX. :)

View Me First: The Crockford Clips

Sure, “The JavaScript Programming Language” and “Advanced JavaScript” are dull titles, but the video clips are anything but. Speaking with an insider’s perspective, Yahoo! JavaScript Architect Douglas Crockford first explains why “JavaScript is the world’s most misunderstood language” and then steps quickly through the whys, wherefores, and how-tos of the web’s best friend. Crockford invented the notion of using JSON as an Ajax payload, and the videos reflect his innovative and pragmatic view of JavaScript as a strange but powerful programming language.

If you are just learning JavaScript, view these clips first. If you think you know JavaScript, view these clips now, and find out what the language really can do.

The “Crockford Clips” are from three live teaching sessions covering JavaScript, the DOM, and Advanced JavaScript. Viewing all three sets (in that order) is well worth the time, if only for Crockford’s insider asides on the politics-err-process of language and browser development.

Some key takeaways from all three sessions:

  • Learn to like loose typing
  • Learn to love objects as containers
  • Think prototypes, not classes
  • There are six values; everything else is an object
  • Be wary of uncommon arithmetic
  • NaN is a perplexing value (and toxic!)
  • Globals are evil

  • Use a platform library

  • Be correct, be common, be standard
  • The wrong people are writing our standards

  • Globals are still evil

  • Prefer power constructors
  • Work with the grain
  • Native JavaScript patterns create elegant reuse
  • Debuggers work, use one
  • Local variables are cheap, use them
  • Minify and gzip, but do not obfuscate
  • JSON rocks!

Hint: Save yourself some pausing. Download the slides first, and click along on a second monitor or laptop. And before commenting on slide 63, read the other reviews :)
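A couple of those takeaways are easy to see in a few lines of code. This is just my own illustrative sketch, not Crockford’s slides: the `beget` helper is a simplified version of the object-begetting pattern he describes, and all the names here are mine.

```javascript
// NaN is "toxic": it contaminates arithmetic, and it is the only
// JavaScript value that is not equal to itself.
var n = Number("not a number");      // NaN
var selfEqual = (n === n);           // false

// Think prototypes, not classes: objects inherit directly from other
// objects. This helper begets a new object whose prototype is the
// object passed in.
function beget(proto) {
    function F() {}
    F.prototype = proto;
    return new F();
}

var parent = { greet: function () { return "hello"; } };
var child = beget(parent);
child.greet();                       // "hello", found on the prototype chain
```

The `beget` trick is essentially what `Object.create` later standardized: reuse without classes, working with the grain of the language.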

Getting Started with Professional Ajax

Like most developers of a certain age, I’m a latecomer to Ajax. We’ve adopted it whole-heartedly for my current project, so I’ll be passing along my own spin on the learning curve.

In the beginning, I snagged a copy of “Teach Yourself AJAX in 10 Minutes”, which is actually quite good. But the notion that you can do the “chapters” in 10 minutes is a bit far-fetched. (Ditto for the “24-hour” books. Most chapters seem to take me 90 minutes to two hours. Maybe I’m slow.)

Last week, I stumbled on a PDF of the first edition of Professional Ajax. The second edition is hot off the presses, but I haven’t had a chance to pick it up yet.

In the first edition, I was hoping the example application (Chapter 9) would contrast conventional versus event-driven programming, but it seemed to spend more time with the routine guts of the application, rather than the interesting Ajax bits.

Chapter 1 lays a good foundation for Ajax development. Most of what Chapter 2 covers is now handled by frameworks, but it does make one appreciate how much YUI/Dojo and Jayrock do for us, given that with Jayrock we can boil an Ajax call down to

   PhoneBook.rpc.entry_list(entryTableLoad).call(channel);

Chapter 3 describes several patterns, most or all of which will apply to my current project.

  • Predictive Fetch
  • Incremental Field Validation
  • Multi-Stage Download
  • Try Again
  • Submission Throttling
  • Periodic Refresh (“Polling”)
  • Cancel Pending Requests

Another chapter provides a gentle introduction to JSON, though I think we have a handle on that now. Since JSON is the literal notation for JavaScript, I’m finding that it’s also handy for creating test data. We could even create a mock RPC script for database-free testing (see Anvil Issue 16).
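To show what I mean about JSON as test data, here’s a hypothetical sketch: a fixture shaped like the entry list an RPC call might return, and a mock “RPC” function that just hands the fixture to the callback. All the names here are made up for illustration.

```javascript
// A hypothetical phone book fixture, written as a JSON-style literal.
var testEntries = [
    { "key": "e1", "name": "Ada Lovelace", "phone": "555-0100" },
    { "key": "e2", "name": "Alan Turing",  "phone": "555-0101" }
];

// A mock RPC call: same callback shape as the real thing, but no server,
// and no database -- just the fixture.
function mockEntryList(callback) {
    callback(testEntries);
}

var count = 0;
mockEntryList(function (entries) {
    count = entries.length;          // 2, straight from the fixture
});
```

Because the mock takes the same callback the real call does, the page code under test never knows the difference.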

An excerpt from the second edition is online now. It’s a section from Chapter 4, “Ajax Libraries”, that covers the YUI Connection Manager. The second edition adds chapters on Ajax Libraries, Request Management, Maps and Mashups, Ajax Debugging Tools, Ajax Frameworks (including Atlas), and a .NET case study. So, I’ll probably have to get it.

Meanwhile, Douglas Crockford’s “Wrrrld Wide Web” site is a menagerie of groovy links. (Crockford being responsible for the notion of JSON and JSON-RPC.) I’ve only pursued a few, but I’m sure to be back.

JavaScript Lint is a “must-have”. It saved me several minutes of debugging the first time I used it.

So as to not let the tools have all the fun, I ordered a copy of “JavaScript: The Definitive Guide” through the link on Crockford’s site. (My pidgin JS is beginning to show!)

Speaking of tools, I wasn’t thrilled with what HTML Tidy does with the JS script elements. It wraps the script element but doesn’t indent the closing tag, leaving the markup looking like an awkward hanging indent. Since our HTML files are heavy with script includes, I’d rather not use Tidy as a formatter. But it still works well for validation. (As does the HTML Validator for Firefox.)

OTOH, I am thrilled to be back in a place where we can expect the pages to validate with HTML Tidy!

I’m sure Ajax will be another long strange trip, but at least the path is well-traveled. I’m looking forward to sharing the ride.

Make Mine YUI

After two years of bashing our brains against ASP.NET at my day job, we’re diving into Ajax for our next frontend.

We looked at Microsoft’s AJAX.NET, and at Anthem, and GWT, and Dojo, and we settled on the Yahoo User Interface Library. Out of the box, the widgets aren’t as sexy as some others, but the library is well documented, easy enough to use, and Yahoo seems committed to supporting YUI over the long haul. (I haven’t run any benchmarks, but it also seems like it’s the fastest.)

Of course, I’m also liking that Yahoo eats their own dog food … “the YUI you can download here is exactly the same YUI Library used all across Yahoo!”.

Though, it’s not YUI that’s really making the move from ASPX to HTML-and-Ajax possible for us. The “reality check” kudos go to a .NET JSON-RPC library named “Jayrock”. Using Jayrock, we are able to create a paper-thin ASHX handler (“servlet”) that calls our business logic framework (Anvil).

From a coding perspective, the handler is a “plain-old” C# class with some annotations to enable the JSON-RPC voodoo.

[JsonRpcMethod]
public AppEntry entry(string key)
{
    // … arbitrary business logic to obtain a value object (JavaBean)
    return appEntry;
}

On the JavaScript side, Jayrock bundles a utility script that generates the JSON API gluecode for us. In the end, we can just call a method on the handler and pass in our parameters and a callback function. Click. Done. Sweet.

  PhoneBook.rpc.entry(entry_key,onEdit_load).call(av_channel);

If anything does go sour, the handler can throw an exception, which, on the JavaScript side, is automatically routed to a separate error callback. Once there, it’s trivial to display the message, and even the stack trace, if we want it.
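Jayrock’s exact signatures aside, the shape of that callback pair looks something like the sketch below. The handler names and the stand-in objects are hypothetical; only the pattern matters.

```javascript
// Success callback: receives the value object returned by the C# handler.
function onEdit_load(entry) {
    return "Editing " + entry.name;
}

// Error callback: receives the server-side exception, already unmarshalled;
// displaying e.message (and the stack trace, if we want it) is then trivial.
function onEdit_error(e) {
    return "Error: " + e.message;
}

// Simulated outcomes -- no server here; these object literals stand in for
// what the RPC layer would pass to each callback:
var ok = onEdit_load({ name: "Ada" });
var bad = onEdit_error({ message: "key not found" });
```

The nice part is that the page code never wraps the call in try/catch; the routing to the error callback happens in the RPC plumbing.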

Happily, we are already doing all the data conversion and formatting on the business logic layer, so for Ajax, we are good to go.

We did an initial “spike” using Dojo and our canonical example application (a simple PhoneBook). We then did the same thing with YUI. A keen happenstance was that we were able to use Jayrock to switch “channels” behind the scenes, so that the application could connect to the server using Dojo or using YUI, just by changing one line of code.

Along with the clear separation of concerns, I think what we like most about Ajax is that it plays well with others. We can start with YUI as a foundation, but if we want to toss in some Dojo or Ext widgets, we can. No sweat.

I’m thinking this feeling is what JetBrains means by “Develop with pleasure!” :)