RedMonk

Simplicity Finds a Home: lesscode.org


[Photo: "Ryan’s Latest Project: lesscode.org," originally uploaded by sogrady.]

Fresh on the heels of an email I got from a very well known technologist, discussing how the avoidance of unnecessary features (bloat) was one of his highest project priorities, comes Ryan Tomayko’s (a friend of RedMonk if ever there was one) latest effort, lesscode.org. Like SWiK, reviewed yesterday, its ambition seems to be to serve as a destination site for developers with a particular interest. While SWiK is aimed at open source developers, lesscode.org is oriented towards those who make a virtue of simplicity, who seek the least complicated solution to the problems they are presented with. It should come as a surprise to no one that I think lesscode.org is an excellent idea. While I took this too far as a developer [1], I think there’s absolutely an opportunity for lesscode to be the first stop and rallying point for developers who value efficiency over elegance, simplicity over overdesign.

Whether its role should be that of a repository for projects that espouse these values, or merely to point to them SWiK-style, remains to be seen, but is ultimately beside the point. The important news is that the “less code” approach has a venue of its own.

Immediate opportunities, I think, are the creation of a wiki, where a community might begin to accumulate:

  1. Suggestions for the “simplest” application in a variety of categories
  2. Examples or tutorials for developers on how to use less code
  3. Real world case studies and examples of less code

Maybe this is a role that SWiK can help play, who knows? Worth a chat, Alex and Ryan?

In addition to a wiki, it might be interesting to see a Planet Lesscode rollup of simplicity oriented bloggers like Adam Bosworth, Bill de hÓra, or Sam Ruby (as an alternative to them posting on lesscode.org directly).

Lesscode might also represent a good opportunity for some of the larger ISVs out there to demonstrate their affinity for simplicity (something many of them are not typically known for at the moment). Maybe a guest blogger spot? Or possibly featured case studies?

Anyway, that’s all I have time for now; I’ve got to head out to set up some new infrastructure at a colo here (more on that later), but I wanted to express both my support for Ryan’s project and the notion of less code. As I’ve said previously, “All Hail the New Simplicity.”

[1] One of the reasons that I got out of development was because I didn’t care enough about it. Unlike some of my fellow (better) developers who wanted to know how things work and build new applications, I was always hunting for someone who’d done it before. I went beyond less code to “no code if I can help it,” a sign that it was time for me to find another line of work.

Categories: Application Development.

  • http://www-106.ibm.com/developerworks/blogs/dw_blog.jspa?blog=396 Bill Higgins

    I think that there are two distinct issues here that you’ve intertwined in your post: simplicity in the programming model and simplicity of the application.

    Simplicity in the programming model determines how many hoops a developer must jump through to realize some function. This affects time-to-market for new software products as well as extensibility since (as Adam Bosworth likes to point out) overly-complex architectures tend to implode over time.

    Simplicity of the application on the other hand determines how simple or hard an application is for a user to accomplish what he or she intends to do with it. Unneeded extra features may indeed degrade the user experience by complicating the user interface, increasing response times, and wasting system resources.

    It’s easy to agree with the folks who wish to simplify the programming model; after all, who could argue with the desire to deliver required function in less time with less effort?

    The desire to simplify the application is more difficult. Though I strongly agree with this desire in principle, it’s common knowledge that many software customers still do feature comparisons when deciding between competitive software packages, even if there’s a low probability that they’ll end up using many of the features on their checklist. So a team could deliver a useful, adequate application yet end up losing the sale because its customer favors the competitor’s application that includes everything and the kitchen sink.

    Now of course there are cases of minimalism winning in the marketplace; Google in search and iPod in portable music players for instance. What is it that makes these simpler products win, while other products lose in a features arms race? And how can we change the attitude that more features == better product? Are there ways to have it both ways by including lots of features yet avoid the negative side effects mentioned above?

    I have some thoughts on these questions but would like to hear from others first.

  • http://www-106.ibm.com/developerworks/blogs/dw_blog.jspa?blog=396 Bill Higgins

    I'm glad I could entertain you.

  • http://www.redmonk.com/jgovernor james Governor

    ha ha ha. that’s an ibm answer if ever i read one, bill.

  • http://www.redmonk.com/sogrady sogrady

    Bill: "I think that there are two distinct issues here that you've intertwined in your post: simplicity in the programming model and simplicity of the application."

    interesting. my focus here is mostly on the programming model. i fully agree that simplicity on an application basis is a complicated equation, though i would contend that feature bloat is of little benefit to users – RFPs or no RFPs.

    when i'm speaking of applications here, it's not really from a feature/function design basis – merely that i think having illustrative examples of simply built and executed applications is useful.

    but while that was my focus, it does raise the question of whether or not something like lesscode should concern itself further up the design stack, vis-à-vis feature/function questions.

  • http://naeblis.cx/rtomayko/ Ryan Tomayko

    I think they're really the same issue evolving at different
    paces. In both situations, you have:

    1. People
    2. Tools
    3. Time

    Models that enable the people using the tool to directly impact the
    future direction of the tool should result in a definite trend away from
    friction between person and tool. Developers and programming languages
    have such a relationship and I think you see the trend. It's to the
    point that we're consciously striving for reduced-complexity and putting
    up silly websites called "lesscode.org" and whatnot.

    It's interesting that this trend seems to have the greatest acceleration
    in F/OSS languages, which I believe to be a result of a greater ability
    of those people to impact their tools.

    Traditional applications are different only in that the people actually
    using them have much less ability to impact their future direction and
    they haven't had the same amount of applied time/use. Applications that
    are bloated are usually that way because direction is being driven by
    unqualified guesses about how things should work, what users want, or
    the potential future value of a market instead of from experience and
    observation of tool use over time.

    I think lesscode.org is mostly about simplifying development tools and
    processes and trying to show that legitimacy should be based on proven
    capability in the field rather than on who's pushing something or where
    Gartner puts something in a magic quadrant. We hate magic.

  • http://www.redmonk.com/sogrady sogrady

    Bill: interesting. i'll let Ryan speak for himself here, but while i respect the position that both you and Joel seem to share, i find myself increasingly convinced otherwise. specifically, i do have a problem with this notion:

    "There is very little guesswork involved. Even on "confidential" projects, we have thousands of future users within the company from whom we can solicit feedback."

    not because i disbelieve it, of course – you're absolutely right. but mostly b/c if there's one thing i've learned from research focus groups and such marketing survey activities, it's that they are flawed at best. see the links below:
    http://del.icio.us/sogrady/focus-groups
    http://www.redmonk.com/sogrady/archives/000456.ht
    http://www.itconversations.com/shows/detail230.ht

    to sum up, customers are very often completely unable to indicate what they actually want and/or need until after the fact. so despite all the research you might do, i think Ryan's not inaccurate when he says that a lot of product planning is guessing.

    as for the 80/20 rule, Joel gives a good example in word processing, but i'd counter that with a Google vs Yahoo argument. many switched to Google not just because search was what they were actually interested in, but b/c what Yahoo was delivering – fueled no doubt by focus groups – was complete feature overkill. did Google need to include features that customers didn't need? not so much (despite their portal overtures, which have been minor).

    where does the truth lie in all of this? no doubt somewhere in the middle ;)

  • http://www-106.ibm.com/developerworks/blogs/dw_blog.jspa?blog=396 Bill Higgins

    Re: "focus groups": although I stated that we do indeed listen to customers, I know that this approach has flaws, but didn't want to go there, because the post was long enough :-)

    So the longer version of what I said about "listening to customers' wants and needs" is that customers speak in terms of incremental enhancements based on their mental model of the current product. This may or may not be the best solution to the underlying problem they're trying to solve. So if a customer says "we want the product to do such and such" we don't just jump and say "ya got it!"; we probe to understand what is the underlying *problem* that their proposed solution addresses. Once you get past the prejudices imposed by mental models of current technological constraints and identify the underlying problems, you're in a zone where you have a shot at creating a useful, potentially innovative, product.

    Re: Google vs. Yahoo, this was the point I was making in the first entry, and the discussion I'm trying to start. My question remains: for some categories of products (e.g. word processors) lack of features seems to be a negative, whereas in others (e.g. web search) simplicity is a positive. Why is this?

  • http://naeblis.cx/rtomayko/ Ryan Tomayko

    Hi Bill, thanks for cutting through the bullsh*t – I'll try to do the
    same (steve, what's with the filters? does your grandma read this? :)

    Steve hit on a lot of my initial thoughts but there's still a bit
    I'd like to clear up:

    "I'm not sure which software companies you were referring to (please
    feel free to give examples and name names) but at least at IBM (where I
    work) and Microsoft (whom I've studied), we speak with users extensively
    about which features they want in future products. There is very little
    guesswork involved. Even on "confidential" projects, we have thousands
    of future users within the company from whom we can solicit feedback."

    I definitely didn't mean to imply that you guys weren't performing your
    due diligence in seeking out what users want. I was speaking partially
    to the "Gladwell Effect", which I assume is where that itconversations
    link Steve posted is pointing, and partially to the idea that
    traditional applications are at a natural disadvantage to programming
    languages in their ability to be weighed and modified by their
    users. The latter problem creates the former: when users are unable (or
    unwilling, which is probably the bigger problem) to change their tools
    directly, you are forced to rely on other ways of measuring the fitness
    of new features, like Gartner magic quadrants, focus groups, etc.

    That being said, we're really arguing about terminology because I happen
    to agree with most of what you're saying. You said that "simplicity in
    the programming model and simplicity of the application" are different
    issues and I said they're the same issue with different surrounding
    circumstances to make the points that were made but I'll admit that it
    was really a silly nit. I'll accept that they're different issues so
    that we can move this to where I think you'd like to go with it.

    So let me pose this as a question: how is it that programming
    languages – tools that perform what is quite possibly the most complex of
    tasks – are able to retain any simplicity whatsoever and avoid bloat? And
    why can't we apply those techniques to traditional applications?

    One technique is separating the core language from the library, which
    allows features to be chosen selectively based on need. I guess this
    could correspond to the plug-in model of traditional applications, which
    probably isn't being used to its fullest potential. If you look at Lisp
    and many dynamic languages like Python, Ruby, and Smalltalk, you see
    that not only is there a concept of libraries but that it's extremely
    beneficial to allow libraries to extend the core language as much as
    possible so as to maximize the number of features that can be provided
    in that manner (I think the cool kids are calling these DSLs now).
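    The library-extends-the-language point can be sketched in a few lines. This is my own illustration, not from the comment; the `Path` class and its overloaded `/` operator are hypothetical, but they show how an ordinary library in a dynamic language can feel like a small language extension, i.e. an embedded DSL.

```python
class Path:
    """Hypothetical path-building DSL: '/' is redefined to join segments."""

    def __init__(self, *segments):
        self.segments = list(segments)

    def __truediv__(self, segment):
        # 'path / "child"' reads like new syntax, but it's just a method call
        # dispatched by the language's ordinary operator-overloading hooks.
        return Path(*self.segments, str(segment))

    def __str__(self):
        return "/" + "/".join(self.segments)


root = Path()
print(root / "usr" / "local" / "bin")  # -> /usr/local/bin
```

    Nothing in the core language changed; the "feature" lives entirely in a library, which is exactly the selective, need-based loading of features the comment describes.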

    This has me thinking of the web now (I'm straying I know but this is
    interesting) because I believe this same basic technique is responsible
    for much of its success. The web has a simple foundation in HTTP/REST
    architecture that basically acts as a big goddam system for hosting
    plug-ins. It may be that this is one of the reasons so many applications
    are moving to the web as a base platform, becoming smaller, simpler, and
    more composable. This also might get to the root of what turns me off
    about WS-* and much of the proposed direction for SOA and enterprise
    software. The web is simple and minimal by design and the system is
    working. Adding a bunch of new, unproven sophistication to the core
    language shouldn't be required to add value.
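    The "big system for hosting plug-ins" idea can be sketched concretely. This is my illustration, not from the comment, and the apps and `mount` helper are hypothetical: each piece is a tiny, self-contained WSGI app, and the only shared logic is routing, so composition happens at the HTTP layer rather than inside any one application.

```python
def hello_app(environ, start_response):
    # A complete, independent "plug-in": one resource, no shared state.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]


def echo_app(environ, start_response):
    # Another small piece: echoes the request path back to the caller.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [environ.get("PATH_INFO", "/").encode()]


def mount(routes):
    """Compose small apps into one site: routing is the only glue."""
    def dispatcher(environ, start_response):
        path = environ.get("PATH_INFO", "/")
        for prefix, app in routes.items():
            if path.startswith(prefix):
                return app(environ, start_response)
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]
    return dispatcher


site = mount({"/hello": hello_app, "/echo": echo_app})
```

    Each app can be developed, replaced, or removed without touching the others; the dispatcher adds no features of its own, which is the minimal-core, everything-else-is-a-plug-in shape being described.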

    I have to stop now but am definitely interested in continuing the
    conversation.

  • http://naeblis.cx/rtomayko/ Ryan Tomayko

    "My question remains: for some categories of products (e.g. word processors) lack of features seems to be a negative, whereas in others (e.g. web search) simplicity is a positive. Why is this?"

    Hrmm.. I'm going to lean on my previous response and say that, at least in this case, it's not so much the task that dictates whether you need a ton of features as the environment you perform the task in. Web search can be simple because its environment enables many other small and simple pieces to be latched together. Word processors must be complex because they're isolated, so all value must be provided locally.

  • http://www-106.ibm.com/developerworks/blogs/dw_blog.jspa?blog=396 Bill Higgins

    Ryan, thanks for your insight and looking forward to checking out lesscode.org. Now enough with the niceties :-) My experience doesn’t match with your observation:

    “Applications that are bloated are usually that way because direction is being driven by unqualified guesses about how things should work, what users want, or the potential future value of a market instead of from experience and observation of tool use over time.”

    I’m not sure which software companies you were referring to (please feel free to give examples and name names) but at least at IBM (where I work) and Microsoft (whom I’ve studied), we speak with users extensively about which features they want in future products. There is very little guesswork involved. Even on “confidential” projects, we have thousands of future users within the company from whom we can solicit feedback.

    The reason that we still end up with feature bloat, which I’ll define for the purposes of this comment as “extra features that a particular user doesn’t need” is that different users need different sets of features from the same product. As Joel Spolsky pointed out in his “80/20 bloatware blog”:

    “A lot of software developers are seduced by the old ’80/20′ rule. It seems to make a lot of sense: 80% of the people use 20% of the features. So you convince yourself that you only need to implement 20% of the features, and you can still sell 80% as many copies.”

    (Spolsky continued) “Unfortunately, it’s never the same 20%. Everybody uses a different set of features. In the last 10 years I have probably heard of dozens of companies who, determined not to learn from each other, tried to release ‘lite’ word processors that only implement 20% of the features. This story is as old as the PC. Most of the time, what happens is that they give their program to a journalist to review, and the journalist reviews it by writing their review using the new word processor, and then the journalist tries to find the ‘word count’ feature which they need because most journalists have precise word count requirements, and it’s not there, because it’s in the ’80% that nobody uses,’ and the journalist ends up writing a story that attempts to claim simultaneously that lite programs are good, bloat is bad, and I can’t use this damn thing ’cause it won’t count my words. If I had a dollar for every time this has happened I would be very happy.”

    (link to full article: http://www.joelonsoftware.com/articles/fog0000000020.html)

    So I simply accept that products will include features that a particular customer doesn’t need. Now, I’d like to refer you back to my question in my original response where I asked: “Are there ways to have it both ways by including lots of features yet avoid the negative side effects mentioned above?” There *are* ways to provide every feature that every user wants yet avoid a cluttered user experience. Unfortunately, this reply is itself becoming bloated, so I will stop now and write more later.

  • http://www.redmonk.com/jgovernor james Governor

    i think to help consider this problem its also worth looking at a couple of other domains. namely server management and systems management.

    IBM's server business has been successfully pitching a simplicity meme: consolidate to make management more effective.

    systems management meanwhile is interesting because the framework era showed us that trying to offer tools that hide massive complexity can be a false promise. some work needs to be done under the covers.

    consider the IBM System House driving common instrumentation and common logging, common installs and so on. That is an attempt at IT simplification. (autonomic is just marketing blather without rationalization under the covers)

    if we have to create a layer, a bunch of stuff explicitly designed to hide the complexity from users, or developers, isn't it worth asking whether we actually needed all that function in the first place? which functions did we need to support a user service?

    modularity is important.

    take MS and the fact it had no presence in the HPC market. why would it? all you need is a bunch of small kernels, linked together to process chunks of data. thus Linux was ideal. now MS is working to establish a place in the market.

    Which brings us full circle (kind of) – that is, sometimes you really don't want all that additional functionality. it creates management and performance overheads. general purpose has its place. but so does optimization. and "less code" systems are often optimal systems. they aren't built to solve every computing problem, but rather a particular computing problem. solving a specific problem makes these technologies valuable (read Clay Shirky on situated software).

    PHP and all may not have all the bells and whistles, but they are very very useful for "getting things done"

    unless major vendors offer mechanisms such that users can strip out the function they don't want, we're back to square one – monolithic apps with tons of function we spend huge cycles managing.
