I came into 2013 with my first published Node.js package, onm: a data modeling tool I wrote for myself in 2012 to help me juggle a brain-crushing number of dimensions while trying to build a CAD program inside an HTML5 SPA.
It’s a case of knowing the use case well, so the tool felt natural to me. But to those unfamiliar with the use case, the tool is baffling. So there’s lots of work yet to make it simple, and to explain more clearly why it’s so damn important.
That said, onm worked well for me and supported lengthy and deep multi-month dives on several important fronts in the war:
The major MV* frameworks just don’t feel right to me. Using onm’s in-memory JSON resource locator and model introspection, I adapted (read: ripped the backside off) Facebook’s React framework and bolted its renderer onto onm.
The basic idea is to allow the development of re-usable render functions (effectively data to client browser DOM) that are completely decoupled from one another but bound loosely, via a semantic contract over JSON, to a particular input object signature. This worked pretty well.
The client SPA is effectively a generic glue component plus a library of render functions. Throw in some JSON; the algorithm sifts through the data, binds the requisite render functions, and dispatches them. This solves a lot of problems but isn’t perfect.
The benefit is that a well-written render function can be re-used easily, over and over, in different scenarios and different apps without any hand-integration whatsoever. This is so because the render functions can be unit tested, as can the assembly mechanism, so composing them generally just works by superposition.
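To make the dispatch idea concrete, here is a minimal sketch. The registry, the signature-matching rule (simply "all required keys present"), and all names are hypothetical illustrations of the concept, not onm's actual API:

```javascript
// Illustrative sketch: each render function declares the JSON "signature"
// (set of required keys) it binds to; a generic dispatcher matches
// incoming data against the registered contracts.

const registry = [];

// Register a render function with the input signature it binds to.
function registerRender(requiredKeys, render) {
    registry.push({ requiredKeys, render });
}

// Dispatch: find the first render function whose contract matches the data.
function dispatch(data) {
    for (const { requiredKeys, render } of registry) {
        if (requiredKeys.every((k) => k in data)) {
            return render(data);
        }
    }
    throw new Error("no render function bound to this object signature");
}

// Two decoupled render functions (DOM rendering stubbed out as strings).
registerRender(["firstName", "lastName"],
    (d) => `<span>${d.firstName} ${d.lastName}</span>`);
registerRender(["title"],
    (d) => `<h1>${d.title}</h1>`);

const html = dispatch({ title: "Hello" }); // "<h1>Hello</h1>"
```

Because each render function depends only on its declared signature, the pieces can be tested in isolation and composed freely.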
However, this exercise did expose onm’s soft belly: client data ingress/egress/transformation. I built onm as a resource locator, presuming client code would just reach in and CRUD the raw data. This was a rationalization: I can do that, and you can too if you write a lot of tests and are careful. But that’s not how web developers generally roll.
- resource location by URI everywhere = awesome
- having to write your own data models all the time = not awesome
- not supporting arrays in JSON = not awesome (but really a pain in the ass)
- Everyone wants variants (i.e. addressable, typed, heterogeneous arrays and hash tables) = so noted, but very difficult, and can safely be staged after the base features, I rationalize.
- Developers can’t be trusted with raw JSON off the wire. Ever. Not even you. It’s like running as root all the time. No. So there needs to be an API. And it has to be easy to use and understand. Damn, that’s hard. You guys have tried to read the RDF spec, right? Yeah.
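The first lesson, resource location by URI, can be sketched in a few lines. This is purely illustrative (onm's actual addressing scheme differs; the function and document here are hypothetical):

```javascript
// Illustrative only: resolve a slash-delimited address against an
// in-memory JSON object, instead of letting callers reach directly
// into the raw data structure.
function resolve(doc, uri) {
    return uri.split("/").filter(Boolean).reduce(
        (node, key) => (node == null ? undefined : node[key]),
        doc
    );
}

const doc = { app: { window: { title: "CAD", width: 1280 } } };

const title = resolve(doc, "app/window/title");    // "CAD"
const missing = resolve(doc, "app/window/height"); // undefined
```

An address either resolves or it doesn't; callers never need to know how the backing store is shaped, which is what makes URI-style location "awesome" in practice.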
jsgraph implements an in-memory container abstraction for directed mathematical graph data sets. Vertices in the container are represented by user-assigned unique string identifiers. Edges in the container are represented by pairs of vertex identifier strings. The container supports the attachment of arbitrary application-specific metadata to vertices and edges.
jsgraph’s bundled breadth-first and depth-first visitor algorithms leverage the container API and an external state store (a color table) to effect the desired traversal, firing synchronous callbacks into your code at specific stages of the traversal.
jsgraph is inspired by the design of the Boost C++ Graph Library, which leverages C++ templates to effect a complete separation of concerns between (a) data storage and access (read: you can adapt your own data source as necessary), (b) data semantics (bring your own), and (c) re-usable algorithms that rely on generic protocols for (a) and (b) and thus just work by superposition.
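The container-plus-visitor pattern described above can be sketched as follows. Note this is a conceptual illustration, not jsgraph's actual API; all class and function names here are invented for the example:

```javascript
// Conceptual sketch of a directed graph container with string vertex ids.
class DirectedGraph {
    constructor() { this.out = new Map(); } // vertex id -> successor ids
    addVertex(u) { if (!this.out.has(u)) this.out.set(u, []); }
    addEdge(u, v) { this.addVertex(u); this.addVertex(v); this.out.get(u).push(v); }
}

// Breadth-first traversal driven by an external state store (color table),
// firing synchronous visitor callbacks at specific stages, in the spirit
// of BGL-style visitors.
function breadthFirstTraverse(graph, start, visitor) {
    const color = new Map(); // external state: "white" / "gray" / "black"
    for (const u of graph.out.keys()) color.set(u, "white");
    const queue = [start];
    color.set(start, "gray");
    if (visitor.discoverVertex) visitor.discoverVertex(start);
    while (queue.length) {
        const u = queue.shift();
        for (const v of graph.out.get(u)) {
            if (color.get(v) === "white") {
                color.set(v, "gray");
                if (visitor.discoverVertex) visitor.discoverVertex(v);
                queue.push(v);
            }
        }
        color.set(u, "black");
        if (visitor.finishVertex) visitor.finishVertex(u);
    }
}

const g = new DirectedGraph();
g.addEdge("a", "b");
g.addEdge("a", "c");
g.addEdge("b", "d");

const order = [];
breadthFirstTraverse(g, "a", { discoverVertex: (u) => order.push(u) });
// order is ["a", "b", "c", "d"]
```

Keeping the color table outside the container is what lets the same traversal algorithm run unchanged over any storage that satisfies the container protocol.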
In October 2014 I started back in on onm, working on a set of features intended to make it simple to overlay (i.e. ingress) JSON data into onm. As I worked through the details and assessed onm’s deficiencies, a new plan formed.
Some high-level details of this idea are documented here: https://github.com/Encapsule/jbus
I’ve moved my office into the new HUB in Bellevue, WA and am heads down until I get this onm re-write released.
Then I’m going to try to teach every Node.js developer on earth how to imbue their JSON with semantics, and leverage that information to write less code and build more reliable, generally re-usable components that work together by design (as opposed to by chance, or by virtue of having invested substantial resources in writing tests).
Please follow @Encapsule on Twitter and watch for updates over the coming weeks.