
links for 2008-11-03

Categories: Links.


2 Responses

  1. Thanks for the Esquire link–it sums up my feelings very well and with eloquence.

  2. Regarding your answer “because it’s likely to arrive far more quickly”, is that really true? In this case, the science of caching form data in a way that’s recoverable during some future session is nearly as difficult as solving the persistence problem in general. A modicum of structure would be required and, as you know, the majority of form-based HTML pages lack structure.

    With such a lack of structure to these pages, the only approach might be something like a plug-in that’s the equivalent of an autosave (like what Gmail does), but instead, to the local hard drive where it can be retrieved. But I suspect that even that is almost as complicated as the persistence problem.

    For example, go to any page with a form on it (even the one used to post a comment on your blog), fill in part of the form, and then use File ▸ Save to save the page to your local hard drive. Now open that file with File ▸ Open and you’ll notice that whatever you typed into the form is no longer there.
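    The autosave idea above can be sketched in a few lines. This is only an illustration, not a real plug-in: the field list and the store are plain objects standing in for the DOM and the local disk, and the names `snapshotForm`/`recoverForm` are hypothetical. In a browser, the fields would come from something like `document.querySelectorAll('input, textarea, select')` and the store would be `localStorage`.

    ```javascript
    // Periodically capture form state keyed by page URL, so a crash or
    // closed tab doesn't lose the draft. `store` stands in for localStorage.
    function snapshotForm(pageUrl, fields, store) {
      // fields: [{ name, value }, ...] — a real plug-in would collect these
      // from the live DOM instead of taking them as an argument.
      const state = {};
      for (const { name, value } of fields) {
        if (name) state[name] = value;
      }
      store[pageUrl] = JSON.stringify({ savedAt: Date.now(), state });
    }

    // After a crash, look up the last snapshot for this URL, if any.
    function recoverForm(pageUrl, store) {
      const raw = store[pageUrl];
      return raw ? JSON.parse(raw).state : null;
    }

    // Usage: one hypothetical autosave tick, then a recovery.
    const store = {};
    snapshotForm('https://example.com/comment', [
      { name: 'author', value: 'Jane' },
      { name: 'comment', value: 'Half-written thought' },
    ], store);
    const recovered = recoverForm('https://example.com/comment', store);
    // recovered now holds the author and comment values
    ```

    Even this toy version shows why structure matters: it can only key on field `name` attributes, which many hand-rolled forms lack or reuse.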

    State, combined with a lack of structure (even though the form seems structured), is definitely an issue. The more I think it through, the more I realize what a thorny problem it really is.

    Process-per-tab browsers (like Chrome) might be part of the answer: each tab runs in its own shell, and those shells in turn could run some code against the currently loaded page in a way that doesn’t interfere with the HTTP server’s understanding of the page’s state. I’m thinking of “screen scraping” tech that, independently of the web server, maintains its own last-known state for every tab.

    One question if this were working: would ordinary users expect the cached page to be able to inject the recovered information into the “real” pages once they’re available again? You’re a power user. The idea that you might pull the cached page back and copy and paste some information elsewhere so it doesn’t get lost isn’t exactly a great user experience. Most people would get to that recovered page and ask “Now what?” There would be an expectation that the recovered page could inject the data back into the real page (when it’s available), in which case a significant amount of flexible structure and intelligence would have to be built into the solution… an architecture that’s remarkably close to being a persistence mechanism.
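    The re-injection step can be sketched too, and the sketch makes the hard part visible. This is a hypothetical illustration (the name `injectState` and the plain-object field lists are assumptions, not any real API): match recovered values to live fields by name, and report the leftovers, which is exactly where the page’s structure has changed and simple caching stops being enough.

    ```javascript
    // Push recovered values back into the "real" form by matching field
    // names. Returns the names that found no matching field — the case
    // where page structure changed and a smarter mapping would be needed.
    function injectState(recovered, liveFields) {
      const byName = new Map(liveFields.map(f => [f.name, f]));
      const unmatched = [];
      for (const [name, value] of Object.entries(recovered)) {
        const field = byName.get(name);
        if (field) {
          field.value = value;   // in the DOM: element.value = value
        } else {
          unmatched.push(name);  // structure changed; needs intelligence
        }
      }
      return unmatched;
    }

    // Usage: the live page no longer has a 'comment' field.
    const live = [{ name: 'author', value: '' }, { name: 'email', value: '' }];
    const leftovers = injectState({ author: 'Jane', comment: 'draft' }, live);
    // 'author' is filled in; 'comment' comes back in leftovers
    ```

    Everything interesting lives in what to do with `leftovers` — and answering that well is, as the comment says, remarkably close to building a persistence mechanism.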

    David Berlind, November 5, 2008 @ 8:20 am

