• http://twitter.com/kevinmarks Kevin Marks

    The trend over time would be interesting. “check-ins used to add a single conceptual piece” is a cultural practice programmers learn over time, and it was made far easier by changes in source code control techniques. I suspect Fortran and JavaScript being so far to the right reflects a pattern of more novice programmers, who will check in at the end of a project, having copied a lot of code and fiddled around with it locally until it works.

    • http://redmonk.com/dberkholz Donnie Berkholz

      Absolutely agree on trends over time. I took forever writing this up, so I wanted to get something out the door even though there’s always more to do. Definitely want to break this data down in a few different ways, and time as a variable is near the top of the list.

      Unfortunately there are a lot of potentially confounding variables, or variables that get averaged out with the nuances lost, but I’ve gotta work with the data at hand instead of waiting for something perfect to fall out of the sky into my lap. =)

      • http://twitter.com/gstamp gstamp

        I like to see hard data, but I guess the problem with any exercise like this is the confounding variables. I have a hard time accepting that JavaScript is less expressive than Java, for example. There are probably reasons why this occurred; however, those reasons have to be guessed at.

        • http://james-iry.blogspot.com James Iry

          A (probably good) guess: JavaScript is often used by people who are not experienced programmers and who feel more comfortable with copy/paste than with abstraction.

        • Ruben Verborgh

          Above all, this graph says how expressive the programmers working in those languages have been. Only a few people writing JavaScript code are able to deal with its expressiveness. This is why Crockford rightly calls JavaScript “the world’s most misunderstood programming language”.

          • http://redmonk.com/dberkholz Donnie Berkholz

            I think if you look at the bottom whisker (the 10th percentile), you might be able to get a decent feel for what the top users of a given language are doing. JavaScript could be a weird case where you’d need to go further down, like the top 1 percent.

  • http://twitter.com/applebyj John Appleby

    I’m struggling here, Donnie, and I feel this is a set of analytics looking for a sequitur.

    For instance, how do you define the semantics of expressiveness? I’ve programmed in probably 40 different programming languages in my career, of which I’m an expert in none. Those range through many of the different types of language in your list, from procedural (C++) to functional (ML) and high-level (ABAP) to low-level (assembly).

    For my money, expressive would mean easy to code something which is easy to understand, reusable and fast. I’m not sure how any of your metrics help here, because they represent only a subsection of code, and aren’t measuring something I think is meaningful.

    For instance, if I want to code something quickly that is easy to express, then I will often use Perl or Python. Curiously, those languages have a downside: despite making it easy to express a concept, they are difficult to read after the fact if you want them to be efficient.

    If you really want control, then there’s no replacement for C/C++, though I can’t imagine anyone who programs in them describing them as expressive. Every highly efficient program I have written has been in C.

    And describing any functional programming language like ML or Haskell as expressive? If you have a huge brain and can describe things that way, sure. But then no one else can understand it.

    As for C being more expressive than C++, that fails a sanity check. OO brings expressiveness and control.

    And as for FORTRAN, yikes.

    But in short – I think there is probably some interesting information in the data points you have access to, but I’m not sure it’s in the expressiveness of languages. Perhaps instead you should look into why developers commit in this way and in this volume, and what projects they relate to?

    • http://redmonk.com/dberkholz Donnie Berkholz

      John, thanks a lot for reading and commenting! I definitely agree with you that there are some huge caveats to what you can get out of this, and you nailed, in very concrete terms, two of the key ones I mentioned: “It won’t tell you how readable the resulting code is (Hello, lambda functions) or how long it takes to write it (APL anyone?), so it’s not a measure of maintainability or productivity.”

      I struggled to come up with a good term to describe this — expressiveness was the best of a bad set. So what exactly does this metric tell you? It doesn’t tell you much if anything about the writing or the reading, as you so well described, but rather something about the state of the code in the repository, the development practices in use, potentially the level of bugs you’re likely to get (given the correlation between bugs and LOC). I could imagine it being pretty interesting to look at this kind of statistic across developers or organizations to see what you could learn about how they develop. Do they use small, granular commits or large ones? Does it change when the process for pushing code is painful? Does a high LOC/commit suggest issues with too much autogenerated code being committed?

  • Pingback: Expressiveness of languages ranked | Smash Company

  • newz2000

    I know and use several of the languages there and I think that JavaScript might be a bit of an outlier. I wonder if this is because many projects include pre-packaged libraries like jQuery in their source code. Whenever they update a library it makes a huge impact on their delta. Also, many web-focused projects minify their JavaScript, which can wreak havoc on an automated analysis tool.

    While I do think this is an interesting metric, I wonder how valuable expressiveness is in recommending a language. I think that it impacts the learnability, but I’d probably suggest to people to shop their local job market or talk to a recruiter long before using this. Neither Clojure nor Coffeescript will get your foot in the door in my area.

    • http://redmonk.com/dberkholz Donnie Berkholz

      Exactly. That’s the point I made in the post, although it was kind of buried in the middle.

      Expressiveness is just one measure, as you say. I’d love to see if I can find ways to get at data for the barrier to entry, the maintainability, the coding speed to complete similar problems, etc.

  • http://www.facebook.com/mwatson81 Matt Watson

    Also have to think about how much code is auto-generated. In C#, ORM and web service references generate lots of code that would inflate these numbers.

  • http://www.facebook.com/i3enhamin Ben Racine

    Perhaps Rosetta Code would be a better place to look, given that there the exact same problem is being solved in each language? Having casually studied that site, though, it seems to me that your results would match up well.

  • Roland Bouman

    Interesting exercise, with – for me – rather unexpected results.

    It makes me wonder to what extent the chosen metric is indicative of expressivity.

    One thing that wasn’t clear to me is how “the interquartile range (IQR; the distance between the 25th and 75th percentiles)” can be seen as a metric for consistency.

    Another thing that stood out is that the “most expressive” languages by this metric are relatively obscure or at least not widely adopted. It made me wonder whether it’s possible that those languages are used by a relatively small number of programmers who know exactly what they are doing. In that case, this would be more of a measurement of the expressiveness of programmers than of programming languages.

    • http://redmonk.com/dberkholz Donnie Berkholz

      The IQR is a way to look at the width of the core distribution without being overly affected by outliers. The difference between the 10th and 90th percentiles would be another slightly less robust way to do the same thing. This width is essentially a single number to describe how variable the LOC/commit values are for a given language, which should be a view into use of the language across many problem domains and many developers. If the width is small, it should be both a generally applicable language (general to its entire “domain” in the case that it’s a DSL) and a language that’s used fairly well by at least half of its developers.

      I also noticed the higher ranking of relatively unpopular languages. Let’s take a second-tier language like CoffeeScript, for example, which was used by 391 developers across 200 projects in February. That’s relatively small but not exactly a tiny group of super-leet coders. Third-tier languages, on the other hand, are absolutely subject to your point. Vala was used by 87 developers, but there are some that are an order of magnitude lower: REBOL was used by 5 developers, Augeas by 8, eC by 9, etc.
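
      For anyone who wants to reproduce those width measures, here is a minimal sketch, assuming you already have a list of LOC/commit values for one language (e.g. one value per month, as in the underlying data); the sample numbers are made up:

        import numpy as np

        # Hypothetical LOC/commit values for one language (made-up data).
        loc_per_commit = np.array([4, 7, 9, 12, 15, 20, 28, 40, 55, 120, 300])

        median = np.percentile(loc_per_commit, 50)  # the ranking metric in the post
        iqr = np.percentile(loc_per_commit, 75) - np.percentile(loc_per_commit, 25)  # consistency (IQR)
        p90_p10 = np.percentile(loc_per_commit, 90) - np.percentile(loc_per_commit, 10)  # less robust alternative

        print(median, iqr, p90_p10)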

      • Roland Bouman

        Thanks! This clears it up :)

  • http://twitter.com/MarcRGreiner Marc Greiner

    You seem to have found a funny way to randomly sort programming languages.

    • Old Skool is still cool

      Agreed! I wonder, hasn’t the author ever heard of Function Points? People are under the mistaken impression that they’re only for ancient mainframe-style apps, but they’re commonly used in industry to reliably estimate project size. The thing is, Capers Jones and SPR have for years comparatively ranked languages by the number of SLOCs required per average unit of functionality, as an indicator of productivity. This all used to be common knowledge.
      It became very clear that at a given skill level, VB.Net and Java were twice as productive as VB, which was about 20% more productive than C++, which was about 100% more productive than C, which was 3x as productive as MASM.
      Then again, back in the day, real programmers also knew how to write and evaluate their own sorting algorithms and understood the value of metrics like Cyclomatic Complexity.

  • thejambi

    Vala is pretty cool :)

  • http://twitter.com/hyperpape Justin

    I commented on this elsewhere, but I think pairwise comparisons reveal the limitations of this method. If you take very similar languages, they should have similar scores if this method works. But I doubt it: are Scheme and Racket highly different in expressiveness? They shouldn’t be, since Racket is PLT Scheme.

    Another raised eyebrow is the rather significant difference between Python and Ruby. Lots of people have strong feelings towards one of the pair (I’m a Python guy), but it’s generally admitted that they’re very similar. Why do they appear so different in your presentation? Another pair that I know less about is D/C++, but I have the same concerns.

    Another artifact: Emacs Lisp almost certainly has a low LOC per commit ratio because it is heavily used for hacking Emacs, which leads to lots of little helper functions.

    My general thought is: this list jibes with my preconceptions in most cases, but seems very, very noisy.

    • http://redmonk.com/dberkholz Donnie Berkholz

      There’s definitely a fair amount of noise in the metric, but very few major outliers (i.e. ones that are *way* out of place, rather than just a few spots).

      The expressiveness of a language, in practical use, is a convolution of many variables including the language characteristics themselves, the standard library and ecosystem, the “culture” built around the language (is it one that encourages copying of external libraries, for example), etc. And you obviously nailed the fact that DSLs are special cases. I would be very curious what kinds of differences not at the syntax level, but otherwise, might be present in the Racket/Scheme instance. Any thoughts?

      • http://twitter.com/hyperpape Justin

        I’m not familiar enough to appraise Scheme/Racket. I know they’re close, but I’ve never really touched Racket enough to know if there might be a big difference in APIs/libraries that drives different coding patterns.

  • Rob Grainger

    “One proxy for this is how many lines of code change in each commit”

    Can you provide any evidence that your assumption is correct? Sounds like a Straw Man hypothesis to me.

    Surely if a language is sufficiently expressive, code would not need to change so much.

    • http://redmonk.com/dberkholz Donnie Berkholz

      That was basically the hypothesis going in: can we measure things this way? The results seem to bear out that it broadly works. That said, it’s clearly an imperfect, somewhat noisy metric that’s actually measuring a number of factors that combine to form the expressiveness in practice rather than in theory.

  • Pingback: In the News: 2013-03-26 | Klaus' Korner

  • Danny Price

    This reminds me of those quarterly poles that declare C as the most popular programming language based on the number of C-related web searches.

    These poles don’t consider the fact that programming anything non-trivial in C is a lot of work, so it’s only natural that people will hit the web in search of answers, inflating its ‘popularity’.

    Programming languages are tools and you use the right one for the job. You wouldn’t use CoffeeScript for a high-performance rendering engine no matter how expressive it is.

    • http://redmonk.com/dberkholz Donnie Berkholz

      You might be interested in checking out my colleague Steve’s correlation of actual use on GitHub with conversation on Stack Overflow: http://redmonk.com/sogrady/2013/02/28/language-rankings-1-13/

    • http://www.facebook.com/profile.php?id=199704177 Tomalak Geret’kal

      What do either Polish people or long metal sticks have to do with anything here?

      • http://redmonk.com/dberkholz Donnie Berkholz

        Very little, although geographic poles might be a better fit.

  • Thomas Marshall (Tom) Olsen

    Pity you didn’t include APL, invented by Ken Iverson at Harvard, championed by IBM. I used it for 25 years until I retired.

    • http://redmonk.com/dberkholz Donnie Berkholz

      Apparently it isn’t too popular in open source. =)

      • Thomas Marshall (Tom) Olsen

        Perhaps the most successful vendor today is Dyalog, Ltd., but they require a monthly fee, which is not practical for casual users. A friend recommended NARS2000, freeware, but I haven’t tried it yet. IBM still offers an APL2 for $2,120 to run under Windows or Linux.

        • Bob C.

          APL is one of the great languages IBM never marketed! Used it for years on VM/CMS (another great OS that IBM never marketed)!

          • http://borasky-research.net/about-data-journalism-developer-studio-pricing-survey/ M. Edward (Ed) Borasky

            Actually, it was heavily used on Wall Street as APL360; IBM sold quite a few timesharing systems with the magic APL Selectric terminals. Gradually it died out – most of the finance folks use Mathematica, R or Matlab now. There’s still a dialect of APL in use called ‘K’, which I believe is open source.

          • Bob C.

            There is also a slightly earlier variant called J (then came K!). J was developed by Iverson and Hui back in the early 90s and is a synthesis of APL, FL and FP. It is suited more for mathematical and statistical programming. It is also FOSS and under the GPLv3 license. Check out the wiki.

        • Bob C.

          I downloaded and tried NARS2000. It is sort of neat. It helps to have a good size screen (I have a 20″ monitor). It starts off with a clear WS and above it, there is a string of the APL character set plus some that are new to me. Point the cursor at one and click on it. When you hover over it, it also displays a “tool tip” which briefly describes the function and a keyboard (combo) key that can be used. You do get an introductory Copyright message with instructions on how to suppress it.

      • Ben Evans

        http://www.aplusdev.org/ – a GPL language related to APL created by Morgan Stanley

  • http://twitter.com/stevekmcc Steven Kelly

    Correction: Programming Languages Ranked by Size of Commit

  • rodrigo

    NO DELPHI? NO PASCAL?

  • Chad Scherrer

    Haskell is in the top ten by both metrics, but you don’t include it in your list of “best languages by these metrics”.

    • http://redmonk.com/dberkholz Donnie Berkholz

      Thanks! Fixing now. Must’ve had a typo when I entered the sets.

  • http://twitter.com/chrisparnin Chris Parnin

    At least normalize the commits by the size of the project. You may be observing that certain languages are used for different size projects.

    • http://redmonk.com/dberkholz Donnie Berkholz

      Unfortunately I’m rather limited by available data (and time) on improving some of this. If only I were still in academia and had more time to devote!

      What I’ve got is total # of projects, committers, commits, and loc_changed by language on a monthly resolution for about 20 years.

  • Pingback: What does “expressiveness” via LOC per commit measure in practice? – Donnie Berkholz's Story of Data

  • http://twitter.com/akuhn Adrian Kuhn

    “One proxy for [expressiveness of a language] is how many lines of code change in each commit”

    Hmm …

    I see many social and behavioral signals affecting commit size, but not expressiveness of the language.

    There’s a plethora of social signals affecting commit size, in particular across languages. Language communities have different cultures. Some might value small commits, while other language communities might have a habit of only committing every so often. Or of committing small commits to a local feature branch and then merging the feature in one huge commit into the public repository!

    Another example is behavioral signals caused by different best practices across language communities. For example, some language communities are much more invested in ad-hoc code reuse, so large pieces of code are copied from one code base to another, or from the internet.

    Language communities also differ in how they package libraries. In some communities it can be quite common to copy-paste libraries into a project, leading to huge commits.

    Edit behavior is also largely dependent on tools. In languages with refactoring support, we can expect engineers to touch much more code at once because tools enable them to do so without fear, which again leads to larger commits.

    None of the factors above depend on technical aspects of the language; they are all contextual, depending on factors such as culture and tooling.

    • http://redmonk.com/dberkholz Donnie Berkholz

      Yeah, some of that is related to what I coincidentally published in a follow-up post about 20 minutes ago: http://redmonk.com/dberkholz/2013/03/26/what-does-expressiveness-via-loc-per-commit-measure-in-practice/

      Do you think the behavioral differences should reasonably be expected to apply to entire classes of languages, like functional programming? I would expect larger-scale trends across multiple languages to be more resistant to some of the points you mention.

      The tooling/IDE point is a great one, and I heard it from another expert in the field, although their point pertained more to productivity, while you’re making a great argument for differences in committing as well.

      • http://twitter.com/akuhn Adrian Kuhn

        Hmm, even languages that are technically very similar can have quite opposing cultures. Take, for example, Python and Ruby, which both boast the same meta-programming features; while it is not considered pythonic to use them, if you leverage Ruby’s meta-programming you’re a rockstar.

  • http://www.facebook.com/i3enhamin Ben Racine

    Thank you for this Donnie… there’s an awful lot of negativity on this page considering you spent your free time to run this study.

  • Pingback: Some external validation on expressive languages – Donnie Berkholz's Story of Data

  • ChadF

    The interesting thing I noticed was that JavaScript and ActionScript didn’t have closely matching results. After all, aren’t these both essentially ECMAScript with different runtime environments (i.e. classes/functions available) and goals? So syntax-wise they should be about the same and only really differ in how they are used.

  • http://borasky-research.net/about-data-journalism-developer-studio-pricing-survey/ M. Edward (Ed) Borasky

    I don’t understand why CoffeeScript scores so high on expressiveness. I think that’s an artifact of there being so little code written in CoffeeScript; it’s an *extremely* young language. Nor do I understand why there’s such a huge gap between Matlab and Scilab; they’re very similar, as is Octave. Both Octave and Scilab were deliberately designed to be low-cost alternatives to Matlab.

  • http://www.facebook.com/cfotop Chris Fotopoulos

    Where is COBOL ?

  • Aaron Bohannon

    This was absolutely fascinating. FWIW, I would have chosen to describe the metric as “conciseness” rather than expressiveness. Its literal meaning is about the same, but it seems like a less loaded term.

    There are some factors that might be nice to eliminate from the metric. For instance, did you exclude blank lines and comments? Also, if you’re counting lines of code, then line length is a factor. I would even advocate factoring out the length of variable names, and simply counting the number of syntactic tokens in each commit. I don’t want to call that a more “accurate” metric because some people might genuinely care about the space the code takes up on the screen. However, a token-based metric is certain to be less influenced by the conventions of a language culture. It is also a metric that would take a lot more work to measure. :)

    When counting lines of code, it is no surprise that Lisp-like languages made a strong showing, given their minimal syntax. They also have no static type system, which is another very interesting factor to me. Any static type system will require some amount of type annotation, and that will have an impact on conciseness. If it were possible to identify which tokens were purely for the sake of type annotation, then comparing the metrics with and without those tokens included would be very interesting.

    One last thought/critique: I’m not convinced that you measured “consistency” in the right way. I would probably have measured the width of the distributions in a manner that was proportional to the median. That would place JavaScript as one of the most consistent and Prolog as one of the most inconsistent.
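
    The median-proportional consistency measure suggested in the last paragraph might look something like the sketch below; the numbers are invented purely to illustrate the arithmetic, not taken from the study:

      # Hypothetical (median LOC/commit, IQR) pairs per language; invented values.
      stats = {
          "LanguageA": (300.0, 450.0),  # wide in absolute LOC, but only 1.5x its median
          "LanguageB": (15.0, 60.0),    # narrow in absolute LOC, yet 4x its median
      }

      # Relative IQR: distribution width scaled by the median, so consistency is
      # judged proportionally rather than in raw lines of code.
      for lang, (median, iqr) in sorted(stats.items()):
          print(lang, round(iqr / median, 2))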

  • BillStewart2012

    Any attempt at measuring expressiveness is going to be stuck with fuzziness and subjectivity, so thanks for finding some way to at least partially nail the jello to the wall.

    One of the difficulties is that many of these languages are going to select for different kinds of programmers. The people I know who are doing Coffeescript and Haskell tend to be scary-brilliant academic types, so I’d expect their code to be terser than the average Joe Web Designer’s.

    Fortran programmers, on the other hand, are typically not interested in the programming aspects; it’s much more of a domain-specific language for physicists and chemists, who are writing code to model physical or mathematical processes. They don’t get their expressiveness from the language – they get it from collecting a bunch of data and marshalling it to hand to subroutines other people have written decades ago, or by rewriting those subroutines to adapt them for different but similar physical processes. (Though yeah, marshalling input for subroutines is a lot more annoying in Fortran than it is in C or really anything but assembler or maybe Cobol.)

    Ada programmers, when I last dealt with them, tended to work in huge top-down waterfall development processes in unmanageably large teams for the military aircraft market. (That’s probably less true three decades later, especially in open source.) The language is designed to nail down interfaces between modules and force you to do most of the design upfront, so you can then let different teams write their own modules and review them to make sure they conform to the interfaces and maybe even work. There was a mixture of new code from thick paper requirements documents and translation of existing messes of poorly documented Jovial, assembler, and Fortran. It’s an ugly environment.

  • http://twitter.com/nearyd Dave Neary

    Is it possible that the SLOC/commit measure is smaller for domain specific languages like Augeas & Puppet because you can do so little with them?

  • http://twitter.com/Ferentchak Charles Ferentchak

    Did you take the time to remove comments from the files when you were counting commits? Did you include things like YAML or XML config files that were added?

    Even lines of code is a tricky thing to count.

    That being said, I am not sure LOC/commit is a good metric for expressiveness, since different languages may have systematic differences in commit style. For example, when a language is new, tons of people want to check it out and thus create a ton of small “hello world” and three-line web server programs.

    There will be far fewer programs of that simplicity in a language like C. For a more complex project, “one unit of value” would be much more code.

  • http://twitter.com/spion Gorgi Kosev

    You forget that an expressive language can be used in an un-expressive manner. Good developers tend to produce succinct and elegant code using the full range of features of the language. Code of average developers is significantly more verbose. Bad developers sometimes produce mountains of copy-pasta.

    This explains the abnormally huge disparity between CoffeeScript and JavaScript, which, excluding a small amount of syntactic sugar, are essentially the same language. (Another factor adding to this is code generation – there is a lot of generated JavaScript these days.)

    Still, the experiment is an interesting start. The next step would be to take the number of developers into account and to model the probability of a developer using a particular language being above/below average.

  • Pingback: Coastal Africa: an up-and-coming force in software – Donnie Berkholz's Story of Data

  • Sebastian Dietrich

    What such comparisons usually forget is that some programming languages come with frameworks, and some can rely on thousands of open source frameworks. Neither the LOCs nor function points nor the “expressiveness” of the language matter when most of the functionality I need in my application does not need to be coded, but can be found in already available software.

  • Pingback: Links & reads for 2013 Week 13 | Martin's Weekly Curations

  • Pingback: Quantifying the shift toward permissive licensing – Donnie Berkholz's Story of Data

  • Pingback: Roundup for many things

  • http://twitter.com/MagielBruntink Magiel Bruntink

    Hi Donnie,

    Thanks for your article! I have a question about the data collection process: How did you obtain the monthly LOC per commit numbers? Did you divide the total monthly LOC added for a project by the total monthly number of commits? Or are you able to get data on the individual commits and get the LOC added from those?

    I’m asking because, like you, I’m excited by the sheer amount of data offered by Ohloh, and am working on an analytics project as well. I can’t seem to get fine-grained access to commits using the API, however.

    Best regards,
    Magiel Bruntink
    University of Amsterdam

    • http://redmonk.com/dberkholz Donnie Berkholz

      The data I used is aggregated across all projects in each language on a monthly level — it’s just the data behind the graphs at https://www.ohloh.net/languages/compare/ and I got it directly from them.

      But yeah, even via the API you’re stuck at a monthly level (at least for activity_facts and size_facts).

      • http://twitter.com/MagielBruntink Magiel Bruntink

        Hi Donnie, sorry, but I still don’t really understand how you arrived at the metrics. Did you, given a language, divide the total LOC added monthly by the total number of commits monthly? Or did you have data on individual commits and aggregate those? I’m asking mainly because of worries that the LOC/commit metric within a language is not normally distributed, and hence an arithmetic mean would not represent the central tendency very well.

        Magiel
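
        If the metric is indeed the simple division Magiel describes (monthly LOC added over monthly commits, per language), a minimal sketch of that computation might look like this; the CSV layout and column names are assumptions, not Ohloh’s actual fields:

          import csv
          from collections import defaultdict

          # Hypothetical CSV with one row per (language, month); the column names
          # "language", "commits", and "loc_changed" are assumptions, not Ohloh's API fields.
          monthly_loc_per_commit = defaultdict(list)
          with open("ohloh_monthly_by_language.csv") as f:
              for row in csv.DictReader(f):
                  commits = int(row["commits"])
                  if commits > 0:
                      monthly_loc_per_commit[row["language"]].append(
                          int(row["loc_changed"]) / commits)

          # One LOC/commit value per month per language is what the box plots summarize.
          for lang, values in sorted(monthly_loc_per_commit.items()):
              values.sort()
              print(lang, values[len(values) // 2])  # rough median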

  • tjholowaychuk

    These results are horribly incorrect IMO

  • schiffbruechige

    I liked your post, but was a little sad about the ending. First, “the presence of Perl and shell supporting the initial assertion that expressiveness has little to do with readability or maintainability” – what’s this based on? Where is the evidence that Perl/Shell are less readable/maintainable? Then you concluded with Python, also based on… what you prefer.

    It’s a shame that a post all about data ends with “and I prefer X”.

  • Pingback: My First Tangle With the Tower of Babel | Codecraft

  • Tela

    Thanks for the article. I liked it :)

    “One proxy for [expressiveness of a language] is how many lines of code change in each commit”

    Can you tell how the number of changed LOC is calculated? Is it just the number of new lines or is it calculated in some more complex way?

  • Pingback: Are we getting better at designing programming languages? – Donnie Berkholz's Story of Data

  • Guest

    Productivity (immediate and maintenance) is the ultimate metric. Pick a function point or Agile story and see which one gets done faster…assuming you can ever design a controlled experiment in terms of talent and experience.

  • Pingback: What were developers reading on my blog and tweetstream in 2013? – Donnie Berkholz's Story of Data

  • Pingback: A Bright Future for Tornado and Motor | JoshAust.in

  • notchent

    I’ve been using Rebol for more than a decade, and couldn’t agree more. I have never found any other general language or development tool which is more expressive or productive.

  • Pingback: Expressive languages and whiteboard coding | Eat, work, sleep
