The last in the series of five Eclipse Europa screencasts was published this week while I was off in Marfa, TX on vacation. The five are:
- EMF with Ed Merks.
- DTP with John Graham.
- Mylyn (formerly Mylar) with Mik Kersten.
- BIRT with Virgil Dodson.
- Equinox with Jeff McAffer.
This post isn’t so much about the Europa screencasts themselves as it is about how I made them, which has been a frequent question. I go into the minutiae of video and screencast editing, so it may seem terribly nerdy and boring if you’re not interested in that topic.
In general, the notes below apply to straight-up video as well as screencasts.
The Format
First, the format of the videos was a sort of “conversation demo” between me and the respective project representatives (usually leads on the project). The Eclipse person gives a brief overview of the project, mostly what it does, and then proceeds to walk through a handful of demos showing off general and new functionality in the project. Throughout the demo I’d ask questions, summarize things, and otherwise comment on things I thought were interesting or wanted to explore more.
While I’d been shooting for 10-15 minutes for each screencast, they ended up being 20-25. To get that, we’d usually spend anywhere from 60 to 90 minutes recording, meaning I typically edited out quite a lot.
In two instances, for Mylyn and BIRT, I got the chance to shoot actual video that I put at the beginning of each demo as the intro/overview segment.
The Tools
Screencasting with one person in one take is quite straightforward nowadays thanks to Camtasia (Windows), iShowU (OS X), SnapZProX (OS X), and any number of other tools (I’m not sure what’s available for Linux). I haven’t used Camtasia (as I’m not going to pay $300 for Windows software, being primarily on a Mac), but I understand it has some good editing features in it. Last I checked, screencasting grand theorist Jon Udell uses Camtasia, so it must be acceptable (though he says he’s dumb when it comes to screencasts).
Screencasting with multiple people is another story: you have to capture the remote person’s video and audio somehow.
Desktop Sharing
The first challenge, of course, is seeing the remote desktop of the Eclipse narrator. There’s no end of desktop sharing services nowadays: WebEx, GoToMeeting, Microsoft LiveMeeting, and Acrobat Connect. Being on OS X, the first three are always touch and go. The fourth has always worked, not to mention that RedMonk has a year-long trial for Connect.
Connect turned out to be a fine tool for desktop sharing. You have to install a small Connect client application before using it, but none of the participants had a problem doing it. On my end, in OS X land, it worked out perfectly.
So, with Connect, the Eclipse presenter and I would log in to RedMonk’s Connect account, they’d share their desktop or application, I’d see their desktop on my side, and then I’d record it with my screen capturing tool (more on that below).
Recording Client Side
In the case of BIRT, due to scheduling problems, Virgil used Camtasia on his end to record the screencast portion on his own. In truth, the quality of this video was the best. It’d be great if Camtasia had some sort of offering where I could send remote people their recording agent to record on the client side. The problem there would be trusting that they set it up correctly, but that wouldn’t have been an issue with the highly technical people I was talking with.
Video Capture
Once we were hooked up on Acrobat Connect and I could see the Eclipse presenter’s remote screen, the next thing I needed to do was record that remote screen. Ideally, you can select an area of the screen to capture instead of only being able to capture the entire screen. This was especially needed for how I did the Europa screencasts by capturing a remote desktop.
For video, I use iShowU for three reasons: it’s cheap, has all sorts of options I can play with but still has several pre-sets to get you started, and has an interface that seems more “natural” than SnapZProX.
SnapZProX follows what I always think of as the “where the hell is my application window?” model of OS X windowing. I’m sure there’s some official name for it, but the general thing is that the application in question is not quite sure if it’s an application or some sort of pop-up window thing. That’s sort of a petty thing to choose on, but what the hell? iShowU is more simply an application. Though it hides itself if you tell it to when you’re recording, it doesn’t act like one of those ghost applications that I can’t describe. It’s just an app, that’s it.
iShowU also has all sorts of settings for video, which is fun for a nerd like me. But, it comes with several pre-sets out of the box, which is helpful for figuring out the proper settings.
Also, iShowU is cheap at $20 vs. SnapZProX’s $69. As far as I can tell, both do pretty much the same thing, so I’ll go with the cheaper one ;>
There is (hopefully was) one problem with iShowU: it’d crash sometimes after recording about 50-60 minutes of video, losing all of the captured video. The author of iShowU was quick to respond to my support emails, though he didn’t have a fix. It seemed like there was some QuickTime bug. I’ve gotten several updates to both iShowU and QuickTime since then, so perhaps it’s fixed. As a workaround, I’d just save the video every 15-20 minutes, which was sort of weird (and annoying for audio synching) but it wasn’t too shabby.
Audio
Along with the visual for the demo, you of course need the audio. I talked with each presenter over Skype, allowing me to use my long-time podcasting tool-buddy, Audio Hijack Pro. For all the media stuff I do, Audio Hijack Pro is by far one of my favorite tools: it just works, and more importantly, I trust it.
Trusting
As a brief rabbit-hole, let me explain that trust part. Recording computer audio and video still has that feel of being a magical hack to it. As such, in the back of my head, I’m always wondering if it’s going to fail or simply sound terrible. Part of this is that I’m not trained in audio or video, so I have no idea what I’m supposed to do with all those virtual knobs, levels, and jiggling read-outs. Audio Hijack Pro certainly has all that stuff, but it hasn’t really let me fail due to my ignorance.
Now, iShowU has some way to record all the audio on your machine…I think…but it involves using SoundFlower which, to be honest, I can never get to work correctly. That is, I don’t trust SoundFlower; it requires me to be too smart. So, while it might have worked out to use iShowU to record the audio, I ended up using Audio Hijack Pro.
This means the recording yields two sets of files, the video and the audio, which I have to synch up in editing.
Audio Editing
Finally, I’d use Audacity to do micro editing on the audio and The Levelator to level out the sound. In later screencasts, I enabled the feature of Audio Hijack Pro wherein it records my audio and the remote audio into separate tracks (the right and left channels in the audio). This was fantastic as I could load up the final audio into Audacity and silence out any background noise on my end while the presenter was talking, or any unsuccessful interruptions either of us made. I even removed some excessive “right”s and “uh-huh”s on my end.
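(For the curious, the channel trick itself is simple enough to sketch in code. I do all of this in Audacity’s GUI, but here’s a minimal Python illustration of the idea of zeroing out a stretch of one stereo channel; the filenames and time range are made up for illustration.)

```python
# Rough sketch of the channel-silencing idea (I actually do this in Audacity's GUI).
# Assumes a 16-bit stereo WAV; filenames and times are made up for illustration.
import wave
import numpy as np

def silence_channel(in_path, out_path, channel, start_sec, end_sec):
    """Zero out one stereo channel (0 = left/me, 1 = right/presenter)
    between start_sec and end_sec, e.g. to kill background noise on my end."""
    with wave.open(in_path, "rb") as src:
        params = src.getparams()
        frames = src.readframes(params.nframes)

    samples = np.frombuffer(frames, dtype=np.int16).reshape(-1, 2).copy()
    rate = params.framerate
    start, end = int(start_sec * rate), int(end_sec * rate)
    samples[start:end, channel] = 0  # silence that channel over the given span

    with wave.open(out_path, "wb") as dst:
        dst.setparams(params)
        dst.writeframes(samples.tobytes())

# e.g.: silence my (left) channel for 10 seconds while the presenter is talking
silence_channel("europa-raw.wav", "europa-clean.wav", channel=0,
                start_sec=95.0, end_sec=105.0)
```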
While Final Cut Express HD comes with some audio editing software, which looks quite impressive in the demos, I’ve used Audacity for so long with my audio-only podcasts that I feel most comfortable with it.
Once I edited the audio, I’d re-import it into Final Cut Express HD and silence out the original tracks, thus putting in my edited audio as the final soundtrack.
Editing
Once I got the video and audio — as I said, about 60-90 minutes — I loaded them up in Final Cut Express HD for editing. Now, Final Cut Express HD is one of those pieces of software that I get angry at. Getting angry at software is stupid, but it happens. Really, I should be angry at the team who selected the features, or rather, excluded the ones I want.
To be fair, the last two letters in the software’s name should clue you in that it’s not going to be good for screencast editing: “HD” is for TV, not for computer demos. Indeed, that’s the first silly thing in “FCE”: it only edits in and exports to TV formats. That is, you can’t just tell it “I want to edit a 1024×768 video at 15fps.” Oh no, it’s gotta be one of the TV or HD formats and that’s it!
This constraint causes all sorts of problems: first, the videos you import have to be at the proper frame rate, which means I usually have to spend time converting up to 29.97fps from the 15 I record screencasts in. This is no big deal, it’s just time burnt setting up the conversions.
Second, when editing, I have to place the video on an HD-sized canvas, which means most of the editing space is wasted (black) as HD resolutions are much larger than the ideal 1024×768 screencasting size. Again, no big deal, but it just makes FCE look stupid in that it’s crippled. And, of course, the audio is the same way: FCE is picky about the format of the audio it deals with. Doesn’t want to get its dainty hands dirty, I guess.
Third, when I finally export the screencasts, I suffer from the same “wasted space” in the output. This is actually very annoying. Thankfully, I figured out using another tool — VisualHub — to crop videos down. After a bunch of napkin math, and relying on the video being centered, you can tell VisualHub to crop in x number of pixels on either side. Hence, while you might export a video that’s in HD (1280×720, 1920×1080, etc.), you can just slice out all that black space, resulting in a 1024×768 video in the end.
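If you want to check that napkin math yourself, here’s a tiny sketch of it, assuming the 1024×768 content sits dead-center on the HD canvas (the canvas has to be at least 1024×768 to begin with, so the taller 1920×1080 export is the one with room to spare):

```python
# Napkin math for cropping a centered 1024x768 screencast out of an HD export.
# Purely illustrative; in practice these numbers get typed into VisualHub's crop settings.

def crop_margins(canvas_w, canvas_h, content_w=1024, content_h=768):
    """Pixels to trim from each side (left/right) and each edge (top/bottom),
    assuming the content is centered on the larger canvas."""
    if canvas_w < content_w or canvas_h < content_h:
        raise ValueError("canvas is smaller than the content; nothing to crop down to")
    return (canvas_w - content_w) // 2, (canvas_h - content_h) // 2

print(crop_margins(1920, 1080))  # (448, 156): trim 448px per side, 156px top and bottom
```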
Once the video and audio are imported into FCE, the editing is pretty straight-forward. I had to synch up the audio and video, but that became easy once I started giving myself a marker in the audio: I’d start recording audio first, and then as I said “now we’re recording,” I’d click the record button in iShowU.
I’m not sure how you’re supposed to do video editing in FCE, but here’s how I do it for both screencasts and normal video:
- I watch the “raw” video in a “sequence” (just a linear, ordered collection of video and audio), marking what seem like logical sections as “sub-sequences.” These sub-sequences could be things like “introduction,” or “demonstrating round-tripping in EMF,” or even “thanks!” They can last from 10 seconds to 5 minutes. This is the bulk of editing time as I have to watch all of the video and often go back and forth to figure out the right place to cut things up.
- After slicing up the raw video into sub-sequences, I pick the sub-sequences to include in the “highlights” version. At the very least, I typically end up cutting about half, if not much more, of the “raw” content. So, if you wanted to be all clever, you could say that I’m choosing what I don’t want as much, if not more, than what I want.
- Once I’ve got all the sub-sequences I want, I make up a short (3-5 second) title clip.
- Then I slap on the RedMonk logo at the beginning, followed by the title clip, the main body of the video, and then cap it off with the “stock” RedMonk ending clip that lists our URL and Creative Commons text.
Then begins the process of encoding…
More FCE Ranting
FCE actually works well for editing “normal” video. It’s just ill-suited for “computer video,” that is, screencasts. Of course, I could spend hours ranting about all the little things in FCE that piss me off. For example, for some asinine reason, it stops playing video when you switch away from it: it’s greedy for your attention!
That “on the other hand” aside, I have to say that the real insult of Final Cut Express HD is that it’s $300. If it allowed you to edit any type of video and wasn’t just crippled to TV/HD formats, that might be bearable. But, at $300, I expect software to do pretty much everything possible for its type of software. And, really, does the feature of being able to specify any video size really “cost” Apple the hundreds of dollars it takes to upgrade to Final Cut Pro, or, as I like to call it, “Final Cut Does What I Expect a $300 Piece of Software to Do…Pro”?
I don’t mind paying for software, but my tolerance for feature-cutting shenanigans stops at about $20. $300 for feature-flipped, crippled software is just arrogant as crap. But, hey, it’s Apple! Shut up and go buy an iPhone!
Encoding with VisualHub
As mentioned above, the last item in the screencast tool-chain is VisualHub. VisualHub is the video converting/encoding software that you wish QuickTime was once you plunk down that $30 or so for QuickTime Pro. VisualHub will convert between most any format and lets you tweak all sorts of little settings (like cropping and re-sizing). So, I can take the “raw” export from FCE and convert it to an iPod/iPhone friendly video, a scaled down video, a Flash video, whatever bit rates for video and audio I want, or any of the other formats and user-specified sizes VisualHub provides.
It’s really a great piece of software up there with Audio Hijack Pro. This is primarily, I think, because VisualHub is focused first on several use cases rather than being just a bucket of settings you can twiddle with.
And, it’s also quite fast.
Unlike FCE and QuickTime itself, you can actually save your settings so you don’t have to re-configure your video encoding each time. This was yet another thing I naively expected in FCE. I mean, you can set up all sorts of configuration profiles for printers in OS X, but not for the ever more complex and setting-rich process of video encoding?
And at $23.32 (palindrome pricing?), it’s very sanely priced. No FCE jerk-pricing here!
Publishing
As with all RedMonkTV videos, I post the screencasts to our podtech blog and the RedMonk TV blog. Of course, I also posted a pointer on my own blog.
The only hitch here was that, initially, I didn’t make it clear enough that there was a larger version of the video available than the 320×240 or 420×270 video. Those are, of course, too small for those who want to follow along in detail. There was a larger version available as a video download, but it wasn’t immediately obvious that it was the larger version. But, after the first post or two, I simply linked directly to the large version and a larger Flash version that we provided on RedMonkTV.
Thoughts on the Tool-chain
While the end result was a collection of good and useful screencasts, the tool-chain I used left a lot to be desired. The main problem came with the video editing tools available. From my experience over the past few months working on RedMonkTV, it seems like video editing is still a well-fortified software silo. This means that (a.) it’s expensive, and, (b.) it has its own conventions and “UI language,” if you will.
Editing
For people without video editing backgrounds (like me), iMovie has a brilliant interface and is incredibly usable, but Apple has crippled it such that it’s not really usable for my needs: you can only edit one video track, for example, and forget putting videos side-by-side like I do for DrunkAndRetired.com.
I so, so wish that Apple would make a sort of “iMovie Pro” that had the same interface and usability that iMovie does with some of the abilities that Final Cut Express HD has. FCE has fantastic abilities, but its interface is really, really weird. What’s shocking is how un-Apple it is: it’s a classic example of the Winamp UI anti-pattern. Again, I assume this is what the core user-base of “traditional” video editors wants, but for “new video editors” like myself, it’s annoying.
Video editing tool-chains are very much chains of tools rather than one or two pieces of software you use to get the final product. I had to use 8 different tools (not including the FTP program, the blog publishing software, and iTunes to edit ID3 tags) to get the video out. I’m sort of OK with this, being a programmer, but it can get ridiculous after awhile to add yet another application to the tool-chain.
Encoding Formats
The other issue with screencasts, and video in general, is the mass proliferation of video formats. When doing a podcast, there’s one format: MP3. That’s sort of it. (Yes, I know I should want to use OGG.) In video, you’ve got containers and encodings for your video, and then the proprietary pissing matches between Apple, Microsoft, and Adobe to deal with. For producers and consumers of video, it’s all just a royal waste of time. I settled on the podtech formats (which are the ones I would have chosen anyways): Apple and Adobe, MP4 w/H.264 and Flash.
Encoding into those formats takes a long, long time itself. I use the term “encoding” to mean “getting the video into a format that I can publish.” Typically, you’re editing in high quality formats and the “raw” result of that is 10-20 gigs of video, if not more. Distributing that amount of video is, well, not something I want to do. So, I have to encode to compress. Also, I have to publish in 4 different formats for the screencast: the iPod-ready MP4/H.264 video, the larger MP4/H.264 video, the small Flash video, and the larger Flash video. VisualHub makes doing this extremely easy. The only hitch is that it takes anywhere from 30-90 minutes for each format. As you can imagine, each time I screw up that first 60 minutes of encoding (with a typo, usually), I get all upset and then have to re-encode ;>
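(VisualHub drives all of this from its GUI, so there’s nothing to script in my actual workflow. But to give a rough idea of what that batch amounts to, here’s a sketch that drives the open-source ffmpeg encoder from Python to spit out the same four variants. The filenames, sizes, and bitrates are invented for illustration, not VisualHub’s actual settings.)

```python
# A rough command-line analogue of the VisualHub batch (not what I actually use):
# drive ffmpeg from Python to produce the four published variants.
import subprocess

SOURCE = "europa-master.mov"   # the big "raw" export out of FCE

VARIANTS = [
    # (output file,        size,       video bitrate)
    ("europa-ipod.mp4",    "320x240",  "500k"),   # iPod/iPhone-friendly MP4/H.264
    ("europa-large.mp4",   "1024x768", "1500k"),  # larger MP4/H.264 for following along
    ("europa-small.flv",   "320x240",  "400k"),   # small Flash video
    ("europa-large.flv",   "1024x768", "1200k"),  # larger Flash video
]

for out_file, size, bitrate in VARIANTS:
    cmd = ["ffmpeg", "-y", "-i", SOURCE,
           "-s", size, "-b:v", bitrate,
           out_file]
    print("encoding", out_file, "...")
    subprocess.run(cmd, check=True)   # each of these can easily run 30-90 minutes
```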
Storage
Finally, video is the only media I’ve produced where I have to worry about storage all the time. At the moment I have about a terabyte of storage available across my external hard-drives and laptop. I’ve filled about 90% of that up with video, audio, music, and all my other crap. I’ve got another terabyte I haven’t hooked-up yet, but will need to right-quick. I can see why people like Seagate readily shoveled cash into video propagators like podtech. With all the video I produce for DrunkAndRetired.com and RedMonk, I’m always fiending for more storage, even after deleting as much as possible. Storage may be “cheap,” but I need a lot of it.
Overall
Overall, as troublesome as I may make it sound above, the end result is quite nice. Once you dissect any production process, it can seem lengthy and tedious. But, after awhile, you just do that stuff quickly and without thinking. Once I figured out the tool-chain and started using a process (“marking” the audio for synching, chopping into sub-sequences, etc.), it went by as fast as possible. The same was true for audio editing with podcasts, and I consider those a snap, now, compared to video.
Personally, it was a great way to learn about each project (more or less) first-hand in the quickest way possible: by observing, questioning, and then re-observing (while editing) an expert teaching me how to use the tool. Hopefully, the same will be true (minus the re-observing ;>) for viewers.
Postscript: Irony!
You may be thinking, clever reader that you are, “why didn’t you just make a screencast of this?” Indeed! Perhaps one day when I have time I will. But, I’d wager that’d take 3-4 hours to make, whereas typing it up takes a fraction of that.
Disclaimer: Eclipse, Adobe, and Microsoft are clients. Eclipse paid for the Europa screencasts.
Technorati Tags: eclipse, europa, finalcut, screencasts, video
Cote,
Wow, sounds complicated. I am certainly glad we had you to help figure it out. 🙂
Personally, I think the format is great. I think having the demo as a conversation makes it a lot more interesting to listen to.
I am surprised that the technology seems to be so far behind. I can only imagine that this will get easier.
Thanks again for doing them. Hopefully we will do more in the future.
Thanks for posting this. I thought I was the only one wanting to scream bloody murder at FCE-HD for editing screencasts. Like you, I tried iMovie for initial editing but you quickly hit the limits. The new iMovie-08 looked promising but then on export to Quicktime it generated rubbish, taking one random still image from the middle of the piece and stretching it across the entire movie.
My method is slightly different than yours. I record the screencasts in short snippets and with no audio tracks, then make opening and section titles in the editing program, then record voice-over trying to match the narration to the action. Two words of advice for anyone trying it this way: 1) write down what you plan to say and keep the text on a second-monitor window while you're playing back the video — otherwise you get a lot of um's and ah's, and 2) Don't use the voice-over recording facilities of the movie editor software unless you KNOW it's all going to work (it took me a while to extract the audio voice-overs from the death-grip of iMovie-08).
In desperation I switched a couple days ago to FCE-HD. Not too hard to learn the basics (after watching the tutorials and skimming through a couple of how-to books, that is 🙂). I totally agree with you on the interface. It was obviously designed by people who like working with jeweler's screwdrivers in the dark.
Did a quick pass edit in FCE, exported it to QT *with the same dimensions as the screencast* and it came out like crap — all squeezed out and looking like the titles at the end of those Cinemascope Spaghetti Westerns (anyone remember those?). A desperate Google search brought me to your post.
I'll give your method a try — exporting to a larger HD video format and post-trimming the sides. It's good to know there's a way to do it. All I can say is I'm glad I *only* spent $300 on the Express and not $2K on Final Cut Pro. Hopefully for the next release, Apple will clue in to the fact that there are people out there who want to create video not just for TV screens but for online consumption.
Thanks again.
rf: I'm glad the post was helpful 😉 And thanks for your method write-up as well.
I just wanted to say thanks! For a non-video-savvy OS X Eclipse developer wanting to make my tools more accessible, your article is perfect. I do have to say that I'm worried now that what I thought might be a Q&D way to provide folks with a sense for what my tool does might end up being an endeavor in itself.