Writing Dual-sided* JavaScript

Wrap Or Detect
JavaScript on the server, eh?

Initially, the least painful way to write dual-sided JavaScript is to create source files which can be understood both by browsers and by Node.js’ module loader, exporting the module’s API appropriately for the environment it’s being run in.

This is generally done either by detecting which environment you’re running in (checking for things like process, module or module.exports) and explicitly differentiating the export process accordingly, or by providing your own exports object which puts an object exposing your module’s public API in a global variable when you’re running in the browser. Depending on your needs, you might also wrap the entire module in a function.
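To make this concrete, here’s a minimal sketch of the detection approach – the module name and API are made up for illustration:

;(function() {

var greeter = {
  greet: function(name) { return 'Hello, ' + name }
}

if (typeof module != 'undefined' && module.exports) {
  // Node.js - hand the API to the module loader
  module.exports = greeter
}
else {
  // Browser - expose the API as a global variable
  window.greeter = greeter
}

})()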

This is a fine approach for single-script modules, or modules which can be concatenated into one.

Where Am I?
It’s still good, it’s still good – it’s just a little ugly

When your modules start to depend on each other, you’ll find you either need an explicit “which environment am I running in?” test to determine whether to require() your dependencies or pull them in from the global scope, or you need to shim out your own require() function which does so for you.
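Shimming out your own require() might look something like this rough sketch (which assumes each dependency has already exposed itself as a global variable matching the require() string used for it):

// Browser-only require() shim - on the server, the real thing exists
if (typeof require == 'undefined') {
  window.require = function(name) {
    if (!(name in window)) throw new Error('Cannot find module: ' + name)
    return window[name]
  }
}

var isomorph = require('isomorph')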

I hadn’t thought of the latter approach prior to starting to write this, but even had I gone with it, I still would have ended up at…

Too Many Globals
We have to talk…

Then you want to use many of your dual-sided modules in a bigger project, but they’re all exporting themselves to the global scope when you only want them as dependencies. Furthermore, you may have lacked the foresight to ensure every module exports to a global variable in the browser which matches its npm module name… You can sugar what needs to happen next to make it look a bit nicer, but ultimately it comes down to this sort of thing:

;(function(server) {

// Pull each dependency in with require() on the server, or from a
// global variable in the browser
var isomorph = (server ? require('isomorph') : window.isomorph)
  , Concur = (server ? require('Concur') : window.Concur)
  , urlresolve = (server ? require('urlresolve') : window.urlresolve)
  , DOMBuilder = (server ? require('DOMBuilder') : window.DOMBuilder)
  , forms = (server ? require('newforms') : window.forms)

// ...

})(typeof window == 'undefined') // true on the server, where there's no window

This is where I was last week. At this stage, I decided that the thing I’d liked most since starting to write JavaScript for Node.js was its module system, so I resolved to write my dual-sided code as regular Node.js modules and export browser versions with a build script – some of my modules already required one to concatenate large source files and create multiple browser builds with different feature sets.

Build It
and they might come

I already knew about browserify, but it’s a comprehensive solution for exporting any Node.js code to the browser, including all sorts of useful shims to that end. If you’re writing code which is explicitly dual-sided – not even using any ES5 features – you just need a relatively simple export script which wraps your modules, defining module, exports and require() for them to use.

Researching what else was out there, it quickly became clear even from an initial foray that there are tons of these things. I was most taken with Mocha’s approach: a manual build process written to match the specific way Mocha has been written with browser usage in mind, shimming out only what it needs to.

I took one of my projects which had a dependency, wrote up some rules to work against and a target end state, and wrote a simple build script, explicitly listing the files it needed to include and the require() strings they were imported with. These files are dumped into a basic template, with a wrapper for each providing the variables Node.js’ module loader provides on the server, then the entry module is exported to window with a final call to require().
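The exported file ends up looking something along these lines – a simplified sketch of the idea, not the build script’s exact output:

;(function(__global__) {

// Registry mapping require() strings to module objects
var modules = {}

function require(name) {
  return modules[name].exports
}

// Each bundled file is wrapped in a call like this, giving it the
// module, exports and require variables the module loader provides
// on the server
function define(name, fn) {
  var module = modules[name] = {exports: {}}
  fn(module, module.exports, require)
}

define('isomorph/lib/is', function(module, exports, require) {
  // ... contents of node_modules/isomorph/lib/is.js ...
})

define('concur', function(module, exports, require) {
  // ... contents of lib/concur.js ...
})

// A final call to require() exports the entry module to the window
__global__.Concur = require('concur')

})(this)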

After trying it out on another project (and flipping some of the expected config settings to allow for different require() strings being used to import code from the same file), I separated the build logic from the configuration it requires and split it out into a separate module – buildumb. This is a sample script which uses it, from Concur:

var path = require('path')

var buildumb = require('buildumb')

buildumb.build({
  // Module paths are resolved relative to the project root
  root: path.normalize(path.join(__dirname, '..'))
  // Maps each file to be bundled to the require() string which imports it
, modules: {
    'node_modules/isomorph/lib/is.js' : 'isomorph/lib/is'
  , 'node_modules/isomorph/lib/object.js' : 'isomorph/lib/object'
  , 'lib/concur.js' : 'concur'
  }
  // Exposes require('concur') as window.Concur
, exports: {
    'Concur': 'concur'
  }
, output: 'concur.js'
, compress: 'concur.min.js'
, header: buildumb.formatTemplate(path.join(__dirname, 'header.js'),
                                  require('../package.json').version)
})

This particular build tool will fall over if I ever have two modules which need to require different code using the same require() string, and that fits with the original rules I set myself – there are too many clever build tools already out there for me to justify learning how to solve the problems they solve all over again.

Next…

As the projects I’m working on grow in size, my build process will likely start including AMD-wrapped output as soon as I need it on the client side. There are some strong feelings out there about AMD and CommonJS, but as long as you can write code the way you like to read it and export it however it needs to be consumed, and we have the technology to make that process happen automatically, I’m indifferent to module format holy wars.
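For illustration, AMD-wrapped output for the Concur build above might look something like this sketch:

// The file-to-require()-string mapping becomes a define() call with
// dependency ids - the factory function receives the resolved modules
define('concur', ['isomorph/lib/is', 'isomorph/lib/object'],
function(is, object) {
  var Concur = {}
  // ... contents of lib/concur.js, using is and object ...
  return Concur
})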

* hat-tip to Joshua Holbrook for the “dual-sided” term

The Holy Grail of Web Development?

I’m feeling the programming buzz this morning. Not only was Joe Hewitt doing what I’m doing now with DOMBuilder templates-in-code 5 years ago with FireBug’s DomPlate – which I wish I’d known about long before yesterday – but it sounds like he’s looking at some of the same problems I’m looking at now, chasing what has become a JavaScript web development obsession for me whenever my interests shift back towards coding: sharing as much code as possible between the backend and frontend.

The old synapses started firing when he made these tweets last night:

The new way I’m making web apps is you don’t get to author an HTML file. The “page” is just a JS file. Static markup is pointless now.

Then if I need HTML for the Google crawler or something, I run the JS through Node.js to generate the page. Works really well.

In @replies, people are starting to ask the same questions I’ve been asking myself about the best way to set this up:

  • How do you best generate markup on both ends?
  • Do you use DOM? HTML? Both?
  • How do you best hook up events when generating markup on the server side, when the code which will run in the browser already knows what the markup will look like?
  • How do you cleanly handle generating partial changes and rich feedback on the frontend vs. full pages with the same codebase on the backend?
  • How do you cleanly handle talking to the server for persistence vs. just doing it directly when you’re running on the server?

In my own attempt to explore some of these questions with Sacrum, I haven’t yet hooked up history on the frontend, true persistence on either end, or even a proper model layer. But I do have a demo, written in the style of your standard synchronous web framework, which serves you full, working pages from Node.js when you browse with JavaScript turned off, and hijacks links and form submissions to do everything with JavaScript when it’s available.
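The hijacking part boils down to something like this sketch – Sacrum’s actual code differs, and handleClick() and handleSubmit() are hypothetical stand-ins for wherever your routing lives:

// Route link clicks and form submissions through JavaScript when it's
// available - without it, the same code running on the server serves
// full pages instead
document.addEventListener('click', function(e) {
  // (Ignoring clicks on elements nested inside links, for brevity)
  if (e.target.tagName == 'A') {
    e.preventDefault()
    handleClick(e.target.getAttribute('href'))
  }
}, false)

document.addEventListener('submit', function(e) {
  e.preventDefault()
  handleSubmit(e.target.getAttribute('action'), e.target)
}, false)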

I haven’t yet experimented with how you could nicely handle UI components which tightly reflect the current state of a model instance, behaving in the dynamic way you expect of single-page webapps – or even just with progressive enhancement and registering event handlers to that end. Perhaps it’s enough to get the basic fallback for free via Node.js; I just don’t know how far you can push it yet.

Aside: Fear of Programming

My own investigations are going to force me to finally climb the async wall, lest I hit it. As comfortable as I am with async in the browser, I haven’t yet got my hands truly dirty with the different flavour of async Node.js necessitates, which is going to take some time and motivation to get into (or will it? Perhaps it’s much simpler than I’ve personally built it up to be – just a slightly different approach…). I feel like I have a touch of the fear, which is silly in the context of personal projects, but I think it’s rooted in not being able to guarantee that I’ll have the free time, and that when I do, I’ll also have the motivation.

I have similar issues with getting into games programming, so it was inspiring to watch notch livestream some utterly fearless coding recently when he was creating his latest Ludum Dare entry. I’m just trying to internalise some of the things I felt while watching that:

  • It doesn’t have to be right first time.
  • You don’t need to agonise over every design decision up front – sometimes just doing what feels obvious is all you need to get started, and things will roll naturally from there.
  • Get something working now, then decide if it’s any good or make it so.

Even if I can’t get there, I’m glad that I’ll at least be able to follow along with someone of Joe’s calibre to see how far the concept can be pushed.

Sharing QUnit tests between browsers and Node.js

A while back, I wanted to test out my newforms library on Node.js. One of newforms’ ultimate goals (like everything else I seem to want to create these days) is to work in the browser and server-side, to reduce the effort duplicated in displaying, validating and redisplaying forms, and in coercing their values to the appropriate types once valid.

It already had a fairly extensive test suite written against the QUnit API, which I wasn’t keen to have to port to something else in the immediate future.

ok(Node.js + QUnit === node-qunit)

I found a Node.js port of QUnit – the aptly-named node-qunit – but it assumed all the code under test was being stuffed into the global scope. What I needed was parity with what you get by using <script> tags to pull in code in a QUnit HTML test runner.

In the browser, some scripts stuff functions and objects into the global scope, while others expose themselves by adding a single namespace object to it, whereas in Node.js you must explicitly import modules into named variables. After some jiggery-pokery and a pull request, you’re now able to specify how code gets required when running tests with node-qunit from the command line or from a test runner module.

Dependencies and code under test are now specified as an Object which can have path and namespace properties. Node-qunit creates a child process for each test file you specify, requires your specified dependencies/code and stuffs them into the global scope before running the tests – now, if you specify a namespace, modules can be exposed under a given named variable, otherwise the module contents are made available globally as before.

Changes Required

You don’t get this entirely for free.

Files which define things like test helpers, globally scoped for convenience, need to be modified to export what they define in the standard am-I-running-on-the-server? way:

if (typeof module != 'undefined' && module.exports) {
  module.exports = {
    errorEqual: errorEqual
  , cleanErrorEqual: cleanErrorEqual
  }
}

Assuming you’re already writing code which is intended to work in both environments (in the case of newforms, tests are all written against HTML output, but the library can also do DOM output), your QUnit tests just need to either use QUnit.module instead of the global version QUnit adds for convenience, or take back the module variable by force at the start of each test:

module = QUnit.module

Example Usage

The following test runners are equivalent; the tests are written with the following expectations for how dependencies and code under test expose themselves for use:

  • customAsserts.js – adds library-specific assertions to the global scope/module exports.
  • DOMBuilder – exposed through a global DOMBuilder variable.
  • newforms.js – exposed through a global forms variable.

HTML test runner <head>:

<!-- QUnit -->
<script src="lib/qunit.js"></script>
<!-- Custom asserts -->
<script src="customAsserts.js"></script>
<!-- Dependencies -->
<script src="lib/DOMBuilder.js"></script>
<script src="lib/DOMBuilder.html.js"></script>
<!-- Code under test -->
<script src="../newforms.js"></script>
<!-- Test cases -->
<script src="time.js"></script>
<script src="util.js"></script>
<script src="validators.js"></script>
<script src="forms.js"></script>
<script src="formsets.js"></script>
<script src="fields.js"></script>
<script src="errormessages.js"></script>
<script src="widgets.js"></script>
<script src="extra.js"></script>
<script src="regressions.js"></script>

Node.js test runner (ensuring your tests don’t care about your working directory is handy if you want to set up a quick npm test script so your tests can be run by Travis CI):

var path = require('path')
var qunit = require('qunit')

function abspath(p) {
  return path.join(__dirname, p)
}

qunit.options.deps = [
  {path: abspath('customAsserts.js')}
, {path: 'DOMBuilder', namespace: 'DOMBuilder'}
]

qunit.run({
  code: {path: abspath('../newforms.js'), namespace: 'forms'}
, tests: ['time.js', 'util.js', 'validators.js', 'forms.js',
          'formsets.js', 'fields.js', 'errormessages.js',
          'widgets.js', 'extra.js', 'regressions.js'].map(abspath)
})

Or via the command line, the variable name a module is require()-d into can be specified as a prefix to its file or package name, followed by a colon:

qunit -c forms:../newforms.js -d ./customAsserts.js DOMBuilder:DOMBuilder -t ./time.js ./util.js ./validators.js ./forms.js ./formsets.js ./fields.js ./errormessages.js ./widgets.js ./extra.js ./regressions.js

Litany against care

An incantation for the days when you’re feeling 9-to-5-ey:

I must not care.
Care is the mind-killer.
Care is the little-death that brings total demotivation.
I will face my care.
I will permit it to pass over me and through me.
And when it has gone past I will turn the inner eye to see its path.
Where the care has gone there will be nothing.
Because I will have gone home.

(With apologies to Frank Herbert)

Forum Idea: Per-Topic Spoiler Categories & Filters

Reading Game of Thrones topics around the web has given me an idea for a feature to play with in my Django forum app/project, which exists so I can try out passing ideas related to web forums which are interesting enough for me to get around to actually implementing.

In this case, it’s on the topic of spoilers. People can get very passionate about spoilers. You may not have the right to be offended, but you most definitely do have the right to rave at length about un-spoilered discussion on the internet! A very brief survey of what I’ve encountered lately with respect to Game of Thrones:

  • Rllmuk has a single topic on the TV series, with plenty of book spoilers in it, relying on people to signpost their spoilers appropriately as to whether they’re about the books or the TV series, and how much they give away. (It also has annoying click-to-view spoilers).
  • NeoGAF has separate topics for the TV series: one with book spoilers, one without.
  • The Game of Thrones subreddit offers separate, easily identifiable spoiler tagging for the TV series, the books and speculation on the books.

In the Game of Spoilers, you win or you cry

It’s often the case that I’ll pick up on a TV series late, so I’ll usually avoid all discussions of it, or will try to follow discussions without going beyond where I’m at and spoiling anything, which is a dicey game.

The basic idea – which may occasionally lapse into the specifics of spoiler-laden TV series discussion, to save us all from another premature layer of abstraction:

  • Definition of per-topic spoiler categories, which allow:
    • Tagging of time-specific spoilers appropriate to their context, rather than in the post itself (with posting helpers based on the defined categories)
    • Categorisation of other spoilers – e.g. discussion of a related book, various types of speculation
  • Subsets of spoiler categories for the same topic can be made order-aware – e.g. they know that S02E01 comes after S01E05 and that Book 4 comes after Book 1 (see the sketch after this list)
  • Topics are aware of the “current” state of play, for the benefit of those taking part in ongoing discussion:
    • Possible simple implementation – the ability to associate a date with a spoiler category?
    • Default view: show all spoilers up to and including the current episode, with some configurable delay (e.g. 3 days after the associated date)
    • Could be used to make posting spoilers simpler for users – e.g. posting a TV show spoiler after the broadcast date of the last shown episode automatically associates it with the appropriate category.
  • Per-topic user preferences, for the benefit of those coming late to a discussion – indicate which spoiler categories should be displayed by default, and an up-to point for those which are ordered, i.e. the ability to say something like “I’ve read the first two Dexter books and seen up to S02E03”
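Order-awareness needn’t be complicated. A sketch of the idea in JavaScript – names, structure and dates are illustrative only, not from the actual app:

// Hypothetical order-aware spoiler categories for a TV series topic
var categories = [
  {name: 'S01E01', order: 1, date: new Date(2011, 3, 17)}
, {name: 'S01E02', order: 2, date: new Date(2011, 3, 24)}
, {name: 'S01E03', order: 3, date: new Date(2011, 4, 1)}
  // ...
]

// "Show me spoilers up to and including what I've seen so far"
function visibleCategories(seenUpToOrder) {
  var visible = []
  for (var i = 0; i < categories.length; i++) {
    if (categories[i].order <= seenUpToOrder) {
      visible.push(categories[i])
    }
  }
  return visible
}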

DOMBuilder 2.0 – Templating in Code

Work is proceeding well on version 2.0 of my, uh… content creation library, DOMBuilder, which provides a declarative means of creating content on the frontend and backend.

The goal for 2.0 is to implement templating with a distinctly Django-ey flavour – including template inheritance – which will follow the USP of the existing DOM and HTML creation functionality – that is:

  • all content is created in code, with a declarative API.
  • you can interchangeably get DOM Elements or HTML strings back from the same chunk of code (sketched below).
  • code can be shared between frontend and backend (targeting modern web browsers and IE6 and up on the frontend, with Node.js and Akshell for starters on the backend).

(I’m not sure if these are all worthy, or even useful points, but they’re fun to aim for – it’s nice to see how far you can push things with JavaScript).
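As a toy illustration of the second point – emphatically not DOMBuilder’s actual API – the core trick is a single element function which builds different output depending on where it’s running:

// A hypothetical element function which produces an HTML string on
// the server and a real DOM Element in the browser, from the same
// calling code
var onServer = (typeof document == 'undefined')

function el(tagName) {
  var children = Array.prototype.slice.call(arguments, 1)
  if (onServer) {
    return '<' + tagName + '>' + children.join('') + '</' + tagName + '>'
  }
  var element = document.createElement(tagName)
  for (var i = 0; i < children.length; i++) {
    element.appendChild(typeof children[i] == 'string'
                        ? document.createTextNode(children[i])
                        : children[i])
  }
  return element
}

// The same chunk of code yields a DOM Element or an HTML string
var content = el('div', el('h2', 'Hello'), el('p', 'World'))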

Mode Plugins

DOMBuilder is currently listed on Microjs, as it came in at 2.8KB when minified with Google’s lovely Closure Compiler and gzipped, but it has already crept above the (completely arbitrary) 5KB limit before work on the templating mode has even completed.

Personally, I like knowing that I’m not lugging around more than I need to with a library (which is why jQuery is only an optional dependency for DOMBuilder), so I’m currently working towards providing DOM Element creation as the library’s core functionality and making HTML and template generation available as plugins in the mode-plugin branch.