For the second episode of Make All I wanted to get up to speed on a language I’ve been hearing a lot about recently: Clojure.
Starting with pretty much zero knowledge on my part, one of my old buddies Dave “KirinDave” Fayram (@KirinDave) walks us through what coding in clojure is like, what it’s good at, and some new functionality on the table for it.
(I haven’t checked this transcription in detail, so if you notice something funny looking, ask before assuming it’s “funny.”)
Michael Coté: Well, hello everybody! This is Make All, Episode Number 2, and as always this is one of your hosts, or I guess the host, Michael Coté, available at peopleoverprocess.com.
And today, I thought it would be fun to talk about this wacky language, which I must admit I don’t really know a tremendous amount about, called Clojure. I was searching around for folks who would be good to talk with about it, and then sure enough, right in the metaphorical backyard, someone that I have known for a while, and know pretty well, is someone who can talk to it. So why don’t you introduce yourself.
Dave Fayram: Okay. I am Dave Fayram, I am KirinDave on Twitter. I have been banging around the Clojure community for a while, kind of quietly watching everything go on there.
Michael Coté: Yeah, I remember on one of my other podcasts, DrunkAndRetired, Charles and I, the guys on there, talked with you a while ago. I don’t know if it was before Clojure or something, but it was kind of in the — the Scala/Erlang days of, like, stateless crap and everything, and you have always been interested in that kind of — I don’t even know how to categorize that space, but that weird space that this stuff is in. Like high-performance systems that have a lot of concurrency issues, essentially.
Dave Fayram: Yeah, I was doing full-time Erlang programming right up until the acquisition of Powerset by Microsoft; I was a full-time Erlang programmer, doing like distributed high-concurrency systems.
Michael Coté: So what’s your background with Clojure?
Dave Fayram: I mean, I know Lisp, and I am sure in the previous podcast that you had I mentioned, at least once, that I love Lisp a lot. So I was curious, once Clojure started getting a bit more mature and its Lisp side got solidified, I wanted to check it out. Because, you know, Java, as a language, has always kind of ticked me off.
But there are so many great libraries for it, you can’t take that away from Java. So the idea of being able to, like a thief in the night sneak in, take all the Java libraries, but use them in a Lispy context, that appealed to me greatly.
Also, I had read a lot about the way that Rich Hickey, the creator of Clojure, was handling concurrency issues in systems that aren’t like Erlang, because Erlang, you can do like literally — like the default number of processes you can run simultaneously starts out at a quarter of a million and goes up from there. But not every system can really model that and it’s kind of a lot of overhead to model a lot of user processes at that level.
So the way that Clojure does it is a little bit more representative of the underlying hardware. It has immutable data, so there are no synchronization issues; everyone reads, and if you write, the Garbage Collector keeps the old data around, because it’s not mutating.
So it ended up being that the thread pool became a very elegant way of handling a lot of stuff; whereas before a thread pool was just an abstraction for managing a fixed amount of resources, suddenly, with immutable data, it pretty much became a solution to a lot of problems.
Michael Coté: I mean, we will get back to the basics of Clojure here pretty soon. But it seems like, when you are dealing with these languages and platforms that deal with concurrency well, essentially what they are trying to do is escape state as much as possible. I mean, basically state is where a lot of your problems come in, when you are doing concurrent and multithreaded things.
Dave Fayram: Yeah, especially mutable state.
Michael Coté: Yeah, exactly. And the thing I am always fascinated by, whether it’s Erlang or Clojure or other stateless things, is eventually you do have to change the state, or so I would hope. So that always seems like the crux of the trick of these platforms: figuring out how to isolate mutability, such that it doesn’t screw up the rest of your processing.
Dave Fayram: Yeah. I mean, so Clojure’s approach is to kind of sidestep the problem. They have a couple of primitives that allow values to change over atomically. And the way it works is that there are these so-called persistent data structures, and it isn’t in the context of, like, writing to disk. In this case, persistent means that two trees — like, let’s say you had a tree that represents a hash table and then you add a value, so you have T0 and T1. The tree T1 will share as much of the structure as possible with T0.
Michael Coté: Right, right.
Dave Fayram: So what that allows you to do is to keep old values around if they are needed, if there is a reference to them that the Garbage Collector can catch, but still immediately and efficiently produce new values.
Because one of the big problems with a lot of languages that try to do immutable data historically is, they require a lot of copying. If you want to have a hash table, and you don’t have this trick of a persistent data structure, the way you do it is you copy a whole hash table, you add the new value, and that’s the new, new value. And the old one kind of has to go away or stick around and be very wasteful.
Michael Coté: You sort of have evolving state, like you are always adding to the state. Anyways, but yeah, that kind of makes sense.
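(The structure-sharing idea is easy to see at the REPL — a minimal sketch, not from the episode: "updating" a Clojure map returns a new value while the old one survives untouched, and internally the two values share structure rather than copying.)

```clojure
;; "Updating" a persistent map yields a new value; the old one is
;; untouched, and the two share structure internally.
(def t0 {:a 1 :b 2})       ; the "T0" tree
(def t1 (assoc t0 :c 3))   ; "T1" shares structure with t0

t0  ;=> {:a 1, :b 2}       ; the old value is still intact
t1  ;=> {:a 1, :b 2, :c 3}
```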
Dave Fayram: So that’s the big trick, and that was, I think, the lion’s share of the work on Clojure, it seems like, from what he was saying. Especially, there is a primitive for software transactional memory using this framework, on top of these data structures, and that’s actually the primary way that most Clojure programmers deal with state: they have several — they are called Refs in the language — which are points where software transactional memory transactions can occur, and those are acted upon to change the state.
But people holding references to the old states — a snapshot, if you will — get those for free, because you are never deleting anything unless the Garbage Collector says, oh, no references to it, we are good.
Michael Coté: So essentially, when it comes to managing the state, what Clojure is doing is, it almost has like a version control system of different states; it sounds like it persists older versions of state that processes might have access to. And I guess the tradeoff there is, you may not be getting the newest state, but you are getting state consistent with your view of the world or something.
Dave Fayram: Right. You will never have bad state. You will never have state that’s halfway written, because that would be catastrophic, right?
Michael Coté: Right. You might have like out-of-date state, but it’s not going to be bad state.
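(A minimal sketch of the Refs idea, not from the episode: a Ref holds an immutable value, `dosync`/`alter` change it transactionally, and `deref`/`@` reads a consistent snapshot.)

```clojure
;; A Ref is a mutable reference to immutable values; changes happen
;; only inside a transaction.
(def playlist (ref [:song-a]))

(dosync
  (alter playlist conj :song-b))  ; transactional update

@playlist  ;=> [:song-a :song-b]  ; readers just deref a snapshot
```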
Dave Fayram: And that’s fine, because in general, it’s very difficult to have what you are currently working on really represent what just happened right now. There is always processing-time lag. So the concerns of a producing thread and a consuming thread can be truly separated — like, a thread filling a queue and a thread processing the queue will always have a disconnect.
Michael Coté: So this is kind of like that school of eventual consistency or something it sounds like, where everyone may not be perfectly consistent at the same time, but eventually they will get to the same good state.
Dave Fayram: They might never actually intersect states. It could always be that the consumer is lagging behind. In fact, unless the data — so imagine that queue example we were talking about. Until the queue is empty, there will always be a difference between what the processing thread and the generating thread see.
Which sounds kind of weird and janky, but when you actually work with it, you realize that it’s a very natural approach, because typically, very few programs actually are trying to eat bits right off the wire, so to speak. Like even network analysis programs have a buffer and they are reading out of the buffer.
Almost everyone implicitly has a model like this. There are, like, routers and switches and things like that, that try to minimize latency to the utmost, and they actually try to do things in real-time, like wire one pin to the other to get the clock signal or something like that. But that’s not — most people never get into that realm.
Michael Coté: Yeah. It sounds like — I mean, another thing that’s sort of vital to the way you go about using Clojure, which is another interesting trend I have been seeing a lot, is there is kind of this return to using queues. Like I don’t really remember when queues sort of fell out of favor, but at some point people didn’t really think about queues and message buses and stuff, and then all of a sudden, everyone is queue crazy.
Like RabbitMQ was just bought by VMware/SpringSource and I hear about queues all the time, and even when you are talking about the way you are consuming the different state, you have got queues operating as a fundamental part of the way you are doing your application, right?
I don’t know. I mean, it’s interesting that — this return to queue-based thinking for application architecture is happening.
Dave Fayram: Yeah, to some extent, we are all coming of age in this world of large-scale software design. I mean, back in like 2003 or 2004, most people didn’t have a huge ton of machines lying around. So you could think very sloppily about how you were going to pass data around, which wasn’t that much. But now, I mean, even small companies can have 20 or 30 machines as a baseline.
Michael Coté: That’s an excellent point! That’s another confluence: you have got a lot more hardware; you are not, like, single-server bound essentially.
Dave Fayram: Yeah. And also, I mean, people are handling huge volumes of data and our hardware has also fundamentally changed. We went from very, very fast, narrow hardware to slower but wider hardware.
So we have been talking about the multithreaded crisis for quite some time now, and I think this will be the decade where we start to really try and solve the problem of modeling concurrency and interprocess and intermachine communication in a consistent way. One that’s rigorous, and that’s efficient, and that people will start to understand intuitively, like they do a queue.
A queue is a very intuitive structure, or a stack. But most people don’t have an intuitive sense for how to manipulate multithreaded programs, in part because, like, the classic pthreads approach you see on Linux, or Windows threads, is such an ad hoc thing. It’s like: well, we have given you access to the primitives the operating system uses, good luck!
Michael Coté: Yeah, yeah, and use all these flags and interrupts and weird things.
Dave Fayram: Yeah.
Michael Coté: I have to admit, I was always one of those developers who really was just poor at thread management and stuff.
Dave Fayram: It was a hard problem, everyone was bad at it. I mean, people who were experts would go back to a program a year later, with a deadlock condition and say, gosh, how could this happen?
Michael Coté: Yeah. It’s just a very unnatural way to think, I guess. That’s the other issue.
So like backing the scope up a bit, so like, yourself included, but what are the types of applications or workloads or whatever, like what’s the type of stuff you are using and you see other people using Clojure for?
Dave Fayram: In my spare time, the first thing I wrote was a chat server, just to kind of try it out. And that was actually surprisingly easy. I think it’s like less than 250 lines, maybe less than 200 lines. It ends up being really easy to synchronize, so it’s good if you have a lot of data inputs and synchronization.
Right now it’s not so good for UI-based stuff. So if you are going to be running a lot of UI, I wouldn’t recommend it there just yet. It will do it. Clojure has a strong connection with the underlying Java. It has a very, very robust way to speak with it.
Gosh! Basically, any kind of — if you are building infrastructure, Clojure is easy. Especially if you are building infrastructure that takes multiple data sources and then consolidates them or processes them, or you are trying to write something Hadoop-ish, a MapReduce-ish kind of thing.
Michael Coté: Right. So if you are dealing with big sets of data that you want to do something with, essentially, it sounds like.
Dave Fayram: That’s its sweet spot.
Michael Coté: Yeah. I mean, it seems like you would get these large sets of data and you want to visit every piece of data — I mean, this is the classic Hadoop, MapReduce thing. You want to visit every piece of data and somehow do something with that data or extract some information from that data. And obviously, you are not going to be very successful doing that just in serial; doing it one after the other. So you have got to do it in parallel. It seems like that’s where these things kind of come into play.
And then I guess, getting to some of the things you were saying, you get to some of those more real-time demands on it, right? Like the users are actually coming and wanting to do something over that data. That’s where you have to start doing transaction management, to use an old term, over the way the data changes and things like that.
Dave Fayram: Right. Or like a web page, let’s just say you have a tree and you want a web page to represent that, or a list, like a list of the last 15 songs the user has listened to.
Michael Coté: Right.
Dave Fayram: The easiest way to do that — render a web page of that, for example — is just to pull off one of these references from the software transactional memory and then start to render it.
Now, because of the way that Clojure’s data structures are architected, that rendering process gets — basically it thinks that it has stopped the world and has a perfectly internally consistent copy. But it doesn’t. Actually, the world is going on, it’s just that the way the data structures are architected, it can give a perfect snapshot. It can give lots —
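(Roughly what that looks like — an illustrative sketch: once you deref, you hold an immutable snapshot, and concurrent writes to the Ref can’t disturb the value already in hand.)

```clojure
(def songs (ref [:song-a :song-b]))

;; Deref once, then "render" from the snapshot; a write to the ref
;; in the meantime does not affect the value we are holding.
(let [snapshot @songs]
  (dosync (alter songs conj :song-c))  ; the world moves on
  snapshot)                            ;=> [:song-a :song-b]

@songs  ;=> [:song-a :song-b :song-c]  ; the ref has the new state
```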
Michael Coté: Oh, yeah, yeah. So like sussing that out some more. So if you had like 500 people and you had a pool of music, and all of them could constantly be editing the one playlist. So you have got a lot of people editing one piece of data, and then at the same time these 500 people and other people are going to want to be — they are going to want to listen to that playlist.
So essentially, whenever they go request that playlist to check it out or whatever, to use that term, it’s a good way of managing people changing that playlist, and then also listening to that playlist.
And then, to complicate it even further, I guess, while everyone is listening to that playlist, someone could go modify it and that will modify the playlist that people are listening to if they have — I don’t know, is that sort of a reasonable thing to expect that you could go out and change the playlist in flight?
Dave Fayram: Yeah, absolutely! I mean, of course there is logic for what happens if someone removes the song I am listening to, right?
Michael Coté: Right, right, right, yeah. I mean, that would be part of the business logic, I think we can call it.
Dave Fayram: Exactly! But all of that is fairly straightforward. In fact, I would even say trivial to do in memory in Clojure. Now, if you are trying to persist that across multiple machines, then of course there is additional work there. Clojure, unlike Erlang, the last language I was working with, does not make machine to machine communication transparent.
Michael Coté: So it’s not good at distributed applications. So I guess —
Dave Fayram: I wouldn’t say it’s not good at it; it’s just not as good as Erlang, which is the undisputed king of writing trivial distributed applications. Nothing is better than Erlang.
Michael Coté: So to extend the playlist metaphor more, let’s say you had CDN systems, or — I always forget — Content Delivery Network stuff, so that, with these 500 people who are spread out over the globe, maybe you have got 20 different servers that each of them are talking to. And so I guess what you are saying is, Clojure on its own isn’t really good at keeping those 20 different servers synchronized, if you will, or kind of working with each other. It would be better if you had the one server that they were all talking to.
Dave Fayram: Well, that’s where you get like — so within a one server scope is where you get the software transactional memory. But there are a lot of great Java libraries for synchronizing state and you can just use those for free in Clojure.
Michael Coté: Yeah, we had all that Enterprise Java stuff and it was supposed to solve all these problems, as I recall.
Dave Fayram: I still like — one library I got my hands on, I was so happy to get was Joda-Time. I am like, oh man, it’s so good to actually have a competent time library, and I just — I thought I was like doing something wrong, like I had a white and black horizontally striped shirt and a mask on and I was like creeping in the house and stealing the code and then running back outside. I felt like a burglar, but it was great to be able to do that.
Michael Coté: Yeah, and is that a Java library or what’s that?
Dave Fayram: Yeah, it’s a Java library.
Michael Coté: You know, you mentioned this earlier and I am curious to hear you speak to this more. So Clojure runs on top of the JVM, and I think it runs on top of .NET as well and on top of the —
Dave Fayram: It’s not a first class entity anymore, it’s kind of a port now.
Michael Coté: Right. But like you are saying, what that means is you have access to everything in the Java world, right? And I mean, like what — like you are saying, you got a good time library, but how does that sort of affect the way you go about writing software with Clojure? I mean, do you use a lot of Java stuff or like what — what does that end up looking like day-to-day?
Dave Fayram: I try to use — if there is a Clojure library, I try to use it. A lot of Clojure libraries are wrappers over Java libraries to give the appearance of laziness, which we haven’t talked about yet, and immutable state.
Increasingly, people are writing libraries with immutable data. So there isn’t too much of an impedance mismatch there. We are actually getting to the point where we can handle these sorts of things.
I think the most difficult thing to work with, when you are talking about working between Clojure and Java, is Clojure doesn’t have a way to represent class annotations and that can sometimes really bite you.
Michael Coté: Yeah, because that’s a huge part of Java nowadays.
Dave Fayram: Right. So you have to just write classes that have those annotations and then you can populate those in Clojure. But you have to put the annotations in; I haven’t found a good way to do annotations yet. Especially, there is this whole new world of more efficient class generation, type generation in Clojure. And there are some pretty clever tricks, like this new defprotocol stuff and the new class extensions, which allow you to extend classes in flight.
What that means is that, for example, I can add methods to the string class.
Michael Coté: Oh, like monkey-patching or something?
Dave Fayram: It’s not like monkey-patching. Monkey-patching is a Ruby thing, where you open up the class, you add methods, and then the entire binary gets that. With this, it’s actually keeping these changes local to the module in which they are created.
Michael Coté: So it’s like scoped to the instance of that class?
Dave Fayram: Exactly. And you can even do, I think, individual instances right now, but it’s kind of — it’s still kind of up in the air exactly how that works, because they are finalizing it right now, they are not done with it yet. So you have to get the unreleased version, the dev version in Clojure to see these features.
Michael Coté: So is that kind of a controversial change? Because whenever you get to things that are, to use a phrase somewhat incorrectly again, monkey-patching, right, the OO purists start to get a little wacky, because the idea that you would have multiple instances — different versions of the same class running around — starts to kind of grate against what you would think classes would be.
Dave Fayram: Well, I mean, any feature is subject to abuse, I suppose. So I am sure that there are people out there that would complain. I personally haven’t seen that much in the way of complaints and I have been looking around. I mean, it’s very nice from the Clojure perspective to be able to — one of the other things you can do is insert existing classes into hierarchies they were never written for. That’s something you can’t do easily in Java; you actually have to reflect and probably have to add a class loader to get it to work out just right.
Michael Coté: So you can like give it a different parent or ancestor class or something?
Dave Fayram: No, you can insert interfaces that it claims to conform to.
Michael Coté: Oh, right, right. Okay, okay.
Dave Fayram: So those interfaces are a little bit higher level, kind of adjunct to the type system, so it’s like kind of a flat type system in Clojure that you can use. So when you are writing generic methods, you can say, I want to handle all streams, and you can add some things from Java into the stream type that you have within your module, so that it will handle those things correctly.
Michael Coté: Okay.
Dave Fayram: It basically allows you to create ad hoc taxonomies within your program, so that you can handle things in a consistent way that the designers may not have anticipated. So the library, like, for example, someone may not have anticipated trying to use both the NIO and the I/O classes in exactly the same way, but you may have code that can, right? There are a lot of similarities in the two APIs, but they kind of descend from different roots in some ways. So you would like to be able to treat them both identically.
Well, now, if your code can do that, then you can represent that taxonomy, you can say, well, these guys descend from a common type that I have made up in this module. And then you can have the code act on both of them in a consistent manner and efficiently.
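(A sketch of that idea using the then-new protocol machinery — the protocol name here is made up for illustration: a protocol acts as a type the original class authors never heard of, and existing Java classes get extended into that ad hoc taxonomy.)

```clojure
;; `Sizeable` is a hypothetical protocol; String and java.util.Collection
;; never declared it, but we can slot both into this made-up taxonomy
;; and then write code that treats them uniformly.
(defprotocol Sizeable
  (size-of [x]))

(extend-protocol Sizeable
  String
  (size-of [s] (.length s))
  java.util.Collection
  (size-of [c] (.size c)))

(size-of "hello")                        ;=> 5
(size-of (java.util.ArrayList. [1 2 3])) ;=> 3
```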
Michael Coté: Okay. Yeah, that’s interesting. Do you find that you do that a lot?
Dave Fayram: I haven’t done any of it at all. It’s very new and I haven’t played with that particular aspect of it.
Michael Coté: That’s right. It’s always funny to see the — I don’t know the fancy word for it, maybe it’s dichotomy or something — the sort of push and pull between, like, the priests of OO and then the users of it, right? Where, on the one side, the whole point of defining all these classes and interfaces is so people at the bottom of the chain can’t muck with it, they can’t change it.
And then yet, with the rise of dynamic languages and all this stuff over the past several years, I think it has become apparent that the actual users of these classes, they want to be able to muck with stuff up in the hierarchy. And that, like you said, any tool can be used for evil or whatever.
But it does seem like, you get a lot more longevity out of your platform if you allow people to go up and modify kind of like the ivory tower, so to speak, go up and do this modification of things up in the class hierarchy.
Dave Fayram: You certainly get a lot of good work from the developers. I mean, I don’t know if you get more longevity, I think that they can probably find examples and counterexamples all over the place.
Michael Coté: Yeah. I mean, I guess I would assume the more goodwill you have, the better chance your platform has of living longer. So it’s definitely not a guarantee, but it’s sort of a — what is it, it’s like a necessary thing, but maybe not — whatever the other thing is that makes it sell.
But yeah, yeah, that’s — and so, looking at the wider Clojure community, what’s the popularity and usage of Clojure that you see out there? I mean, from my perspective, it has sort of just all of a sudden risen up as something that people are really interested in, but how long has that been going on?
Dave Fayram: I just read an article where, like, the number of Scala job requests has increased, and there are more of them right now than there are Clojure job requests. But if you look at it in relative terms — like the interest in Clojure, how many times the Clojure job-request market has grown as opposed to the Scala one — Clojure has grown at a rate of 4000%. A ton of people are interested.
There is a lot of kind of like submarine goodwill towards this, but there are a lot of negatives, like “too many parentheses is stupid” kinds of thoughts out there.
But at the same time, there are a lot of people who are like, you know, that sounds really interesting, and Clojure has just, like, some [?] useful features that allow you to write code that’s very simple and yet very efficient — like laziness. And so a lot of people are getting into that and saying, wow, this is really amazing, I actually can write some great code with this. The parentheses are a little weird, but maybe I can work with this.
Michael Coté: Yeah. It’s always easy to make fun of Lisp, right? I think that’s the number one language to make fun of.
Dave Fayram: It’s up there; that and Perl, I think.
Michael Coté: Yeah, you are right. Perl is definitely the absolute number one. So you mentioned it several times, what is this laziness business?
Dave Fayram: So what — it’s a feature from functional programming which says that, for example, let’s say I wanted a sequence of all numbers from 1 on up. I can’t write a program in C that represents that easily, right? Because if I want to make it an array, I couldn’t write an array [?], unless I did a bunch of management and magic tricks. Maybe I could do it in C++, but it would be very difficult.
But with Clojure, you actually can write [?]; this thing called lazy sequences, and basically what they are is a history of the sequence, the current sequence pointer, and then a function pointing to — given the current value, what is the next thing? And that can all be scoped within a closure — not the language, the programming construct — so there can be state associated with this generation.
So that sounds kind of complicated and abstract, but you could imagine, for example — like, an example is, there is a function in Clojure called iterate, and it takes two arguments: a value and a function to iterate on that value.
So if you say iterate one increment, then it will simply create an infinite list, where the first element is one, the second element is the increment of one, the third element is the incremented result of the previous, and so on and so on.
And so if you compute the first 10,000 elements of that, then you have got a list, you have got 1 to 10,000. But if you only compute the first three, well, you have only got three. You can actually move ahead two and just discard previous elements, right, and so you get a window of, say, 5 out of this infinite sequence.
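(Concretely — a quick sketch: `iterate` builds the infinite sequence, and `take`/`drop` realize only the window you ask for.)

```clojure
;; An infinite lazy sequence of the naturals; nothing is computed
;; until some consumer asks for elements.
(def naturals (iterate inc 1))

(take 5 naturals)           ;=> (1 2 3 4 5)
(take 5 (drop 2 naturals))  ;=> (3 4 5 6 7)  ; a sliding window
```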
Michael Coté: Because it is having — it sounds like — correct me if I am wrong. It sounds like what’s happening is that the function you pass into iterate is figuring out, however it does it, what the next item is, and it can also figure out what that item 5 up is. And so instead of having to have, I guess, all the state bound ahead of time, where you have to define all the state up front, you can come up with whatever the next state is when it’s requested.
Dave Fayram: So effectively someone can say, just give me the nth element of this list, and you get it, and you compute exactly what you need to get there.
Michael Coté: And that’s what’s so interesting, because you don’t compute it until you need it.
Dave Fayram: Exactly, and that’s the secret of laziness: things aren’t computed until they’re needed. So for example, you could write a function that starts at 1 and then finds the next prime number. Now with all these components you have got a sequence of lazily computed prime numbers.
Michael Coté: Right, that’s got to be a good Clojure interview question.
Dave Fayram: It’s not too bad. Another easy one is a sequence that takes a file handle and begins to produce a sequence of lines. So you could treat the lines as if they were a list, but you are only reading the file as you need it.
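(He’s describing what `line-seq` does — a sketch, with a made-up filename: the file is read lazily, line by line, as the sequence is consumed.)

```clojure
(require '[clojure.java.io :as io])

;; "songs.txt" is a hypothetical file; only the three lines we take
;; are actually pulled off disk. doall forces those three before
;; with-open closes the reader.
(with-open [rdr (io/reader "songs.txt")]
  (doall (take 3 (line-seq rdr))))
```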
Michael Coté: Right, right.
Dave Fayram: So that ends up saving you a ton of memory. We talked earlier about how everyone is handling large volumes of data. Well, there is a reason why it can deal with that data without a massive memory footprint, [?].
Michael Coté: Right, right. I mean this is kind of a piss-poor example, but it’s — I mean it’s also kind of good for pagination and things like that, where —
Dave Fayram: Yes, it’s perfect for that.
Michael Coté: Yeah, if you were to compute 10,000 pages of some search results ahead of time, that might end up looking pretty poor, but if you do it in this lazy way, it would definitely be a more resource-constrained way of doing it.
Dave Fayram: It’s not magical, because you have to compute — unless you save your intermediate values, if you say, give me the 10,000th value of my prime number thing, you have to compute 10,000 prime numbers. Then if you ask again, unless you saved that computation, you have to do it again.
Michael Coté: Right.
Dave Fayram: Right? But, it’s also –
Michael Coté: So when you are using this laziness, you need some way, given any point in a sequence, for that function to figure out what the next point is, right? And I assume maybe the point after it. And I guess there are some datasets where that wouldn’t be possible, right? Like you can’t figure out n+1 without the context of n+5 or whatever, but I am sure there are probably more situations where you can’t go from n to n+1 without having to compute the —
Dave Fayram: So that [?] is awesome when you are iterating [?]; it’s awesome when you are dealing with very large datasets and you only need to look at a small sliding window. But there is also a trick in Lisp called memoization, where you take the function and you wrap it in another function that basically has a hash table of calls to results. So by combining those two features you can efficiently compute exactly what you need and not recompute it.
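(Clojure ships this as `memoize` — a minimal sketch: the wrapper keeps a hash table of arguments to results, so repeated calls come back from the cache.)

```clojure
;; slow-square is a stand-in for an expensive computation.
(defn slow-square [n]
  (Thread/sleep 100)
  (* n n))

(def fast-square (memoize slow-square))

(fast-square 12)  ;=> 144, computed (takes ~100ms the first time)
(fast-square 12)  ;=> 144, cached (returns immediately)
```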
Michael Coté: Right, and this kind of highlights another implicit thing that we haven’t really talked about, which I guess is that Clojure is a functional language, where you can pass functions around. Which, if you are used to classic OO — I mean, I remember when I spent a lot of time figuring out what the hell that meant — but if you are used to procedural languages like Java, the idea that you are passing a function around is a little strange. But it is kind of exciting, this idea that you can pass a chunk of operational code around as data, and just kind of pass it around to something that gets executed. And I mean the —
Dave Fayram: That architecture — I mean, that kind of thought of passing functions around is heavily involved in the [?], and it leads to very, very different types of architecture than you are used to with OO languages. A lot of problems that OO has to deal with just go away — for example, the Visitor pattern just basically vanishes.
Michael Cote: Right, right, right, because yeah, I mean a lot of those things are — a lot of object-oriented patterns are about working around not being a functional language, and I imagine conversely there are probably functional language patterns that work around not being an object-oriented sort of language.
Dave Fayram: Yes.
Michael Cote: Yeah. Well, that’s interesting. Even starting from really not knowing anything, I can see why people are kind of fascinated with it.
Dave Fayram: It also has all the features that lots of Lisp people love. It has macros, the strong Lisp-style macros, which let you write mini-languages. And a lot of the language features are themselves written with macros, so you can see how it’s done.
Michael Cote: Right, right. So you can learn from the language itself how to develop for it, which is not always the case. So yeah, I mean is there anything else as far as like a little intro that you think we should go over, what have we left out that’s kind of just like explaining what Clojure is?
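For instance, core forms like `when` really are macros over more primitive ones, and `macroexpand` lets you watch it happen. The `unless*` macro below is purely a made-up example (Clojure already ships `if-not` for this job):

```clojure
;; `when` is itself a macro that expands into `if` + `do`:
(macroexpand '(when ready? (launch) (cheer)))
;; => (if ready? (do (launch) (cheer)))

;; A tiny macro of our own: evaluate the body unless the test is truthy.
(defmacro unless* [test & body]
  `(if ~test nil (do ~@body)))

(unless* false :ran)   ; => :ran
(unless* true  :ran)   ; => nil
```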
Dave Fayram: Yeah, I mean, Clojure is, it’s weird, because it’s a hybrid language. That’s what’s very interesting about it, and that’s also, I think, what makes this call interesting. It is not purely functional; it’s not purely immutable. It does have a lot of features and a lot of things. It has a viewpoint, but that viewpoint is not absolutely enforced. I think Scala is similar in that fashion.
So, that’s what makes Clojure so attractive to a lot of people: you can ease into it. It is not like Haskell, which has a lot of these features and is a really interesting language once you learn it. But Haskell takes the absolute [?]: we will have no state. We’ll have no real state, none. And there is no global state; everything is held in function calls. So, too bad for you. Whereas Clojure takes the more moderate approach; it’s like, I’m a pragmatist, I just want to get this done.
Michael Cote: Right. So like Clojure is not opinionated, it doesn’t try to enforce one way of using it or one way of —
Dave Fayram: Right, Clojure is not belligerent.
Michael Cote: Right and it’s funny, because languages are like that, like there is very — I don’t know if there is a middle between these two, but languages are either like there is one way to do it, or there’s three ways to do it, and that’s the only three or whatever. Or they’re kind of like, hey, do whatever you like. Like here’s a bunch of toys and stuff, and you can fiddle around with what you think is the best way to use this.
Dave Fayram: Yeah, Perl is the [?] example of that, right?
Michael Cote: That’s right.
Dave Fayram: There are amazing things in there, but you’ve got to figure out what to do with them.
Michael Cote: Exactly, exactly. Well, that’s great! Thanks for taking the time to go over this overview with us. I guess the other thing is — well, one last thing. It sounds like, from what you’re saying, that if you were going to develop an application, you’re probably not going to use Clojure on its own necessarily; you might want something else for your UI layer or different layers like that.
So, it seems like — I mean, this happens a lot with the languages and platforms and frameworks I see nowadays. I don’t know how to quantify it, but oftentimes these frameworks want to be used with other frameworks and other technologies. It’s not like the days of Java versus .NET, where it was kind of one or the other; these really want to be used with other things.
Dave Fayram: Well, I mean, I could see using Java or Scala to write your UI and then calling into Clojure code that you’ve written to do your algorithmic work, and vice versa too: you could write a UI and call it from Clojure. What’s nice is that they are all playing on the same field and sharing the same basic runtime stuff, so they can all play nice together. As I said, you can call Java from Clojure; that’s my perspective as someone who would prefer to write Clojure rather than Java.
Michael Cote: Right, right.
Dave Fayram: By the same token, you can call Clojure from Java code. Every namespace can be compiled into a class, and you can also create specialized classes, creating proxy objects to hand back to people. So it ends up being very easy to play nice with neighbors who have different language preferences.
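A small sketch of both directions, using the standard interop forms (`.method` calls, `proxy`; the `Runnable` here stands in for whatever interface the Java side expects):

```clojure
;; Java from Clojure: instance methods and static members are direct calls.
(.toUpperCase "clojure")      ; => "CLOJURE"
(Math/abs -42)                ; => 42

;; Clojure to Java: `proxy` builds an object implementing a Java interface
;; on the fly, ready to hand back to Java code expecting, say, a Runnable.
(def task
  (proxy [Runnable] []
    (run [] (println "hello from Clojure"))))
(.run task)                   ; prints "hello from Clojure"

;; A whole namespace can also be compiled to a named class via :gen-class
;; so plain Java code can import and call it.
```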
Michael Cote: Right, right. Yeah, and I guess that’s a point: you could imagine that getting Perl to behave with Java would be a little difficult, and so forth and so on. But the common runtime makes it easy for the frameworks to play nice with each other, if they just care to pay attention to that. So when it’s easy to have multiple platforms and frameworks and whatnot, then of course you want to take advantage of them, instead of being stuck in one.
Dave Fayram: I guess the Java people have heard about it, because the JVM, like in Java 7, I guess, is strongly considering including things like actual direct support for dynamic languages.
Michael Cote: Yeah. I mean, the Java world has been talking about closures for a long time and stuff like that. I haven’t actually caught up on what’s going into Java 7, but that’s interesting to hear.
Dave Fayram: It seems like, between Scala, Clojure and JRuby, and [?] in the second tier, people have really taken to using [?] languages on the JVM. Yeah, it’s a very — I hadn’t programmed in Java for, gosh, until I touched Clojure, I hadn’t touched it for like six years, right? I was really used to it being slow and painful. So I wrote stuff that goes through like 15,000 strings with a regular expression, and it completed like instantly. I was like, wow! This has gotten faster since I last touched it.
So, I was very pleasantly surprised by the raw performance of it. And having this common platform means a lot of horses can get hitched to the same wagon, or perhaps many wagons, I don’t even know. Many wagons to be hitched to Java’s super-strong [?] horse.
Michael Cote: Yeah, yeah, the Ruby crowd and their friends did a good job of being dismissive of the JVM, and it is interesting to see people kind of rediscovering the niceties of it. I mean, there’s plenty of terrible stuff in the Java world, but to your point, there are also a lot of just large, comfortable couches, metaphorically speaking, which are not always so bad to have around.
But as long as you address those issues, it makes it easy to interoperate with new technologies and new frameworks as they come along. I mean, that was always Java’s shortcoming: it was very iron-fisted about everything being the Java language. Treating it more as a platform, as a runtime, instead of just a language does seem helpful in the long term. It encourages the goodwill that hopefully goes towards longevity.
Dave Fayram: Yeah, yeah.
Michael Cote: Well, great, like I said, I think that’s a good summary. We’ve got a good little introduction there. So, thanks for going over that with us.
Dave Fayram: You’re welcome! It was great to talk about it. I’m really enthusiastic about Clojure; I think it’s a really good thing.
Michael Cote: You’ve got any of those like Twitter things or any stuff like that you want to advertise?
Dave Fayram: Oh, sure! Be one of my friends and follow me on Twitter. I’m KirinDave on Twitter.
Michael Cote: That’s right. There are always plenty of pictures of artfully done lattés and things like that. That’s what I always look for.
Dave Fayram: I like to advertise my liberalness by posting pictures of my coffee.
Michael Cote: Exactly, no, it’s nice. You see what designs are going on there; it’s good stuff.
Dave Fayram: Yeah? Alright, it was great talking with you.
Michael Cote: Yeah, we’ll see everyone next time.
Disclosure: Microsoft, where Dave works, is a client.