Author Archive

Schema v0.869 Screenshot

I am currently working on something I call the “navigator” – essentially a JavaScript library (compiled from CoffeeScript, actually) that accepts as input a JavaScript “layout” object declaration.

demo: stable demo link • latest development build

The layout is parsed to create a “tree-view” type control. However, this isn’t an ordinary tree-view control. Not even close. The ‘navigator’ library allows each node in the tree to be annotated with additional metadata that conveys to the navigator that the menu item should semantically model a concept. “Concepts” are whatever you can declare and insert into the metadata.

The “concept” I’m interested in at the moment is modeling the structure of the SCDL data object model on top of JSON with enough precision to generate the runtime code I need to edit and manage the data objects in my model at the semantic level of SCDL, not JSON.

Starting with the simplest of JavaScript objects that defines the menu names in a tree-view, I’ve applied the idea of annotating each menu item with additional metadata, fed into a set of object factories, to allow menu items to be “conceptually bound” to nodes in a JSON object described in metadata.

A tree-view with a single root can be thought of as the root object of a JSON deserialization. Similarly, the children of the root can be thought of as sub-objects or sub-arrays. Each menu item is annotated in metadata to set its associated JSON object type. Additionally, information in the metadata is parsed to determine SCDL-level schema (higher-order than JSON).
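To make the idea concrete, here is a minimal sketch of what such an annotated layout declaration might look like. All property names here (label, jsonType, scdlConcept, children) are illustrative guesses, not the actual navigator API:

```javascript
// Hypothetical "navigator layout" declaration: a tree of menu items,
// each annotated with the JSON node type it is conceptually bound to,
// plus a higher-order SCDL-level annotation. Names are illustrative.
var layout = {
    label: "SCDL Catalogue",      // root menu item == root JSON object
    jsonType: "object",           // metadata: bound JSON node type
    scdlConcept: "catalogue",     // SCDL-level (higher-order) annotation
    children: [
        {
            label: "Machines",
            jsonType: "array",    // child menu item == sub-array
            scdlConcept: "machineCollection",
            children: []
        },
        {
            label: "Sockets",
            jsonType: "array",
            scdlConcept: "socketCollection",
            children: []
        }
    ]
};

// A simple object factory could walk the layout and create an empty
// bound data node for each menu item based on its declared JSON type.
function createDataObject(node) {
    return node.jsonType === "array" ? [] : {};
}
```

The point of the metadata is that a generic factory like `createDataObject` never needs to know anything about SCDL; the layout declaration carries all domain knowledge.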

The screenshot below is of build 0.869 with the “Advanced” view selected.

Encapsule Project Schema 0.869 (test build)

It’s not time-efficient to explain further right now, but the level of power the ‘navigator’ gives me is incredible… Navigator allows me to map pretty much any object model to JSON (it doesn’t have to be SCDL). You can declare “navigator layout” objects (JavaScript objects) for different problem domains (i.e. completely unrelated sets of objects that comprise some sort of “domain-specific modeling schema”) and use them in the same application in different instances of navigator. Or, bind a single layout to multiple object models…

v1 Schema will very likely rely heavily on ‘navigator’ as the core of the application.

Upcoming SCDL Talks and Schema Demos

I’ll be giving an extremely short talk and demo of the forthcoming Schema design tool at LinuxFest Northwest 2013 (April 27th and 28th in Bellingham, WA) as part of their “lightning round” presentations. [1]

Tentatively, I’ll be giving a longer demo/talk to the Graph Database Seattle group sometime in May (time and exact location in Seattle hasn’t been set for this meeting yet).

Additionally, I’ll be doing a full demo and giving an extended talk about Soft Circuit Description Language (SCDL) at the monthly meeting of the Seattle Programming Languages (SeaLang) group on June 5th in Redmond, WA.

I’ll post more details as I get them.

Back to code… I’ve got lots of work yet to complete to avoid looking like an idiot :-)

[1] If you live in the Seattle-area LinuxFest should not be missed. The talks are great, the people are interesting and friendly, and of course it’s free.

The Devil Is In The Details

I previously published a number of documents related to the 2000–2004 Encapsule Systems start-up effort, but have just taken the time to export the 2004 Provisional Patent application to PDF, and all the associated disclosure diagrams to JPGs, for easier online viewing.

These resources are posted here: Encapsule Systems 2004 US Provisional Patent Application

Schema v0.81 Window Manager Screenshot

I’ve spent the past week building a window manager on top of the Knockout.js library for the Schema app. Here’s a screenshot:


Schema v0.81 Debug Build (featuring window manager)


If you’re interested in the HTML 5 details, check out my personal blog:

Two Paths: The Encapsule Project Mission Explained

I took a break from coding today and re-wrote the Encapsule Project Homepage. I’ve started to talk about Schema a little online (not just here) and there’s some interest. The organizers of the Seattle Graph Database Meetup Group pinged me this AM and asked if I’m interested in presenting. Absolutely YES!

The revised project homepage explains the software IP re-use mission of the project to provide some context to all the low-level detail that I’ve surfaced here so far.

Here’s a re-post:

Two Paths Diverged

Fundamentally, digital hardware and software systems are manifestations of Turing Machine models (or some other universal computational model – choose your favorite; it’s not central to the argument here).

Similarly, systems comprising digital hardware plus software (e.g. whatever you’re reading this text on) are themselves describable as Turing Machine models. Few would dispute this. Yet despite their common mathematical roots, digital hardware and software engineering are radically different fields of endeavor.

To understand why, consider that hardware systems must be physically realized via a complex, time-consuming, and extremely expensive manufacturing process. Software, by contrast, does not need to be manufactured at all. A bug in a chip or circuit board spells doom for a product and the company that produced it. A bug in a software system, by contrast, isn’t considered that big a deal because it can be fixed and an update issued at “low cost” with little customer impact.

So, of economic necessity, the hardware industry has always had to get it nearly 100% right before going to market, whereas the software industry has traditionally played fast and loose, largely ignoring formal design verification, simulation, and IP re-use strategies. Design it quickly, implement it well enough, ship it, and patch it later is the software mantra. That used to work.

There was a time when, if you found the right small group of people and locked them away for a while, they would build you the next VMS or Windows NT, the world would change, and the money would print itself. Who were these people?

They were computer scientists and engineers with deep knowledge of hardware (of necessity). The decision to forgo formalism in the name of expediency was not taken casually. It was carefully considered, and at the time the decision was correct (see money that prints itself). What was perhaps not clear at the time is that this paradigm choice would pave a road forward that many would follow, not knowing there was an alternate path to the same end.

Software Has Taken the Wrong Path

Even as recently as twenty years ago, the scale, scope, and importance of software systems as they exist today could scarcely have been imagined. Software systems have become so large and complex that very few people (myself included) really fully understand how they work. Sure, there are a few individuals who understand all the pieces and where they fit into the puzzle. However, the days when small groups of people were able to make startling software advances in short time seem to have passed. There’s simply too much complexity, too much legacy, too much code.

Software is imploding under the weight of its own complexity. I think this is so because, as an industry, we have largely chosen to ignore the path taken by the hardware engineering community and have not yet realized that the hardware guys have had it right all along.

The hardware road is harder, requires more up-front investment in tools, methods, verification, re-verification, systemic re-use of IP, models that matter… But, they’re innovating while we fix bugs and argue about the best way to port our old code to new platforms, and make incremental improvements to huge existing codebases.

To my way of thinking, an honest accounting of the cost of producing software must include the cost of constant revision and the manifest waste of having no efficient way to re-use existing software IP. Well-intentioned efforts to make software development more cost effective, like Agile, seem to me like putting the cart out in front of an unbroken horse. Break the horse, then we can argue about cart placement.

The Intrinsic Value of Re-Usable IP

Fundamental to the success of any hardware company is the management and re-use of its existing intellectual property. This is necessary because circuit designs are (a) very, very complicated and (b) must ultimately be realized in a physical manufacturing process subject to the laws of physics, cost and availability of materials, time…

For reasons discussed earlier, hardware details can’t be left to chance. You cannot design an arbitrary piece of hardware and then expect that you can actually manufacture it without knowing that the little pieces, and the system as a whole, are realizable. Chip manufacturers (the ones with fabs) invest enormous sums (hundreds of millions, if not billions, of dollars) annually to ensure that this loop is closed. There are entire teams of individuals whose job it is to ensure that the designers have access to standard cell libraries and models that can actually be fabricated.

And it doesn’t stop there. Given re-usable libraries of hardware IP, hardware designers must additionally consider a myriad of complex interdependent constraints: space, time, power, heat, performance goals… Again, you can’t leave these things to chance when the cost of getting your first physical prototype is measured in 9-digit dollar units and multiple business quarters.

But that’s exactly what the hardware community does every day. And they’re really good at it because they have to be. But this is very expensive. So, every piece of everything is re-used in new designs whenever possible. Software designers try, but it’s ultimately hopeless.

Software tools, methodologies and libraries are simply not evolved enough for this process to be efficient or safe. The unfortunate consequence of this is further fragmentation and duplication of effort. It’s often cheaper and less risky to roll your own solution than to invest the time required to locate and vet a source of re-usable software IP.

So software practitioners are necessarily faced with two bad choices: hunt/gather/adapt vs. roll-your-own. The first is inefficient and risky, the second ultimately wasteful and distracting.

Re-usable IP is where it’s at. Software needs to focus on the re-usable IP problem.

The “Soft Circuit” Idea

The concept of “soft circuits” evolved over many years of thinking about how best to write software libraries that were easy for other people to use effectively without a major investment. I had some small success but the results weren’t satisfying. What I really wanted was a way to build non-trivial re-usable chunks of code that you could just drop into a design and go.

Then one day, while doing real work for my employer, I realized that the thing I hated most about existing software libraries was that it was left to me to discern the overall state model of the library in order to use it. For example: call this function to get an object, call these methods on the object passing in this other object. Then, if this is true, instantiate another object passing in your result object. Blah blah blah…

Why can’t I just have a “socket” like what’s soldered to a circuit board, and go find something that’s pin compatible and drop it in? Why can’t I quickly assess the capabilities of a library by reading its “data sheet” like I can for an IC? Why do _I_ have to wire up the “pins” by hand just to try it out? If I know what my inputs and outputs are, why do I have to spend so much time looking for something I can just use. Lots of hard questions that seemed worth trying to answer.

I built two prototype systems, both of which metaphorically replaced “IC” with software plug-in and worked out from there. The first, a complete fandango, attempted to embed intelligence in plug-ins (written using conventional techniques – in my case C++). The idea was that a plug-in would be given innate knowledge of what it was compatible with and would cooperatively bind itself with other plug-ins at runtime to self-assemble useful higher-order structures. I don’t know what I was thinking. At the point where I could no longer read my own code, I gave up on this approach. It doesn’t scale. And it can seriously damage your brain.

The second prototype, still using software plug-ins to represent “ICs”, removed all intelligence from the plug-ins and instead delegated the responsibility of building higher-order structures to a generic algorithm that used mathematical graph models and declarative interconnection contracts expressed as XML files to do the building. This system worked: you could snap together visual representations of re-usable chunks of code (plug-ins, or collections of plug-ins wired up using XML declarations), push a button, and the system would do the graph splicing, instantiate the plug-ins, late-bind their inputs and outputs, spawn some threads, and actuate data flow through the graph. The result: software LEGOs. Well, sort of.
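The essence of the second prototype (dumb plug-ins, declarative pin contracts, a generic binder) can be sketched in a few lines of JavaScript. This is not the original C++/XML system; plain objects stand in for the XML contract declarations, and every name is illustrative:

```javascript
// Plug-ins carry no wiring intelligence: only declared input/output
// "pins" (like an IC data sheet) and a run function. Names hypothetical.
var producer = {
    name: "wordSource",
    inputs: [],
    outputs: [{ pin: "out", type: "string" }],
    run: function () { return { out: "hello" }; }
};

var consumer = {
    name: "upperCaser",
    inputs: [{ pin: "in", type: "string" }],
    outputs: [{ pin: "out", type: "string" }],
    run: function (inputs) { return { out: inputs.in.toUpperCase() }; }
};

// Generic binder: check that the declared pin types are compatible
// ("pin compatible" in the IC sense), then late-bind the output of one
// plug-in to the input of the next and actuate data flow.
function bindAndRun(source, sink) {
    var srcPin = source.outputs[0], dstPin = sink.inputs[0];
    if (srcPin.type !== dstPin.type) {
        throw new Error("pins are not compatible");
    }
    var produced = source.run({});
    var sinkInputs = {};
    sinkInputs[dstPin.pin] = produced[srcPin.pin];
    return sink.run(sinkInputs);
}

bindAndRun(producer, consumer).out; // -> "HELLO"
```

The key design point, as described above, is that all knowledge of compatibility lives in the declarative contracts, so the binder stays generic and the plug-ins stay dumb.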

The reality was actually not as grand as I had hoped. Nobody beat a path to my door offering large sums of money to continue and complete the work. And, in 2004 there wasn’t a lot of speculation going on in software. Particularly not in software tools. The only VCs who would talk to me were those who had lost their shirts investing in EDA, and they mostly wanted to tell me to give it up, or come back when I had my first $1M sale. I couldn’t do either.

Out of time, and running badly in the red at this point, I burned the prototype code onto a CD and didn’t look at it again until 2012. For a while I even managed to convince myself the entire thing was a bad idea in the first place. But it’s not a bad idea. It’s just an idea bigger than I can manage alone. So what?

Encapsule Project

In 2013 I’ve resolved to take another shot at this. There are several reasons why I think there’s some small chance this might get some traction this time around.

  • Software is even more horked than it was then.
  • Open source has exploded. The maker movement has happened. People are searching for plausible answers and creative freedom.
  • The hardware industry has exploded with innovation and software continues to lag.
  • Not building the whole thing this time. It’s too big.
  • Building only the front-end design tool to allow people to create, and edit models. By itself I think this will be useful if only to further the debate. Maybe more. We’ll see.
  • I’m building for the browser.
  • Everything is open source under a permissive license.

As “Schema” (the browser-based design tool) comes into focus, I’ll be getting into a lot more of the details that are too difficult to convey as plain text. I also plan to do some talks and demos later this spring in the Seattle area. Please follow @Encapsule on Twitter for updates or check out the Encapsule Project Blog for more details.

Why Do We Do This?

I remember when writing software was fun and exciting. Little slices of it still are. But overall, it’s a giant fragmented mess and most of our time and effort is spent devising creative strategies to unhork the completely horked. Don’t get me wrong: I love software. But, I would love it more if we could return to the time when a day spent coding moved us forward instead of largely marching in place.

This is possible, I think, but we need to stop making up new languages, writing books on Agile methodologies, and instead study and learn from the hardware community. They’re doing better work, faster, and having more fun doing it.

We too can kick ass, have fun, and innovate. And, maybe in the process unlock the hardware guys so they too can realize their full potential.

Thank you for your interest in Encapsule Project!

Framing the SCDL Data Model in HTML 5

Today’s Schema build is the best SCDL reference I’ve written to date:

Today marks my fortieth consecutive day of heads-down coding on Schema – a single page HTML 5 application for designing systems in Soft Circuit Description Language (SCDL – pronounced “scuddle” as in to make haste).

About half of this time has been consumed learning tricky details of other people’s amazing HTML 5 libraries (e.g. Knockout.js is really great).

The other half has been devoted to struggling to codify the SCDL object model (serialized to JSON and used to synthesize runtime code).

I’ve strongly resisted the urge to write about SCDL here as I’ve explained the concept countless times to some very smart people with little success to-date. I’m resigned to the fact that nobody is going to “get it” until they can see it. That is, see and edit the graph models that underpin SCDL using browser-based SVG visualizations.

Over the past week I basically re-wrote the entire SCDL data model for Schema because it had just gotten out of control (the entire data and view model lived in a single CoffeeScript file). That single file is now twenty, and I’m glad I did this now, although I didn’t get a lot of sleep this week :-)

Tuesday I reached parity with where I was when I started refactoring, and today the SCDL data model is almost completely framed in CoffeeScript. It’s not yet functional (e.g. there are lots of methods missing) but the overall structure has been captured in classes.

Today’s development build of Schema is interesting. Essentially, it’s a Knockout.js view bound to the SCDL data model, with buttons that allow you to instantiate objects. As changes are made, the SCDL catalogue JSON is updated dynamically.
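The “JSON updates dynamically” behavior is the kind of thing Knockout.js gives you with observables (e.g. `ko.observableArray` plus `ko.toJSON`). Here is a dependency-free sketch of that pattern, so the idea is visible without pulling in the library; all names are illustrative, not Schema’s actual code:

```javascript
// Minimal observable-catalogue sketch: mutate the model, and any
// subscribed view (here, a serialized JSON string) refreshes itself.
// This imitates the Knockout.js observable pattern; names hypothetical.
function ObservableCatalogue() {
    var entities = [];
    var subscribers = [];
    return {
        addEntity: function (entity) {
            entities.push(entity);
            // Notify every subscriber that the model changed.
            subscribers.forEach(function (fn) { fn(entities); });
        },
        subscribe: function (fn) { subscribers.push(fn); },
        toJSON: function () { return JSON.stringify({ entities: entities }); }
    };
}

var catalogue = ObservableCatalogue();
var liveJSON = catalogue.toJSON();          // the "live" JSON view
catalogue.subscribe(function () { liveJSON = catalogue.toJSON(); });

catalogue.addEntity({ type: "machine", name: "m0" });
// liveJSON now reflects the new entity with no explicit re-render call.
```

In the real app, Knockout’s dependency tracking does this bookkeeping automatically, which is one reason the library is such a good fit for a model-driven editor.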

If you’re intrigued by SCDL, check out today’s build.

Note that I’ve cached today’s build on my own website for future reference. Once the SVG visualizations are online, most people will not care about this detail (that’s the whole point after all). But today’s build is perhaps the best SCDL reference I’ve written to-date so I’m sharing with those of you interested in this early work.



Schema Day 32


I’ve been busy. The effort must be completed so that we can all design systems graphically and transform them into executing code. I’ll be really happy when this application is done enough to start sending out URI’s and explaining just what the heck this is all about with diagrams.



Generic Finite State Machine Library in C++

This is a little C++ template library I wrote over Christmas break in 2012 that implements a generic finite state machine, customizable to theoretically any task via static data tables that specialize it for specific input and output vectors and a specific state transition matrix.

The example isn’t completely finished IIRC but that’s okay. It’s done enough to demonstrate the basic concepts that we can write simple little programs that leverage static data tables (e.g. SCDL models) to do whatever we want.

As a side note, while developing this example I found many roads leading me to the strange and wonderful land of boost::proto. Proto can (and actually should) be used to implement my generic FSM. Why? If we convey all the structural and logical details of our design captured in SCDL to the C++ compiler, then the C++ compiler can optimize the ever-living bejesus out of it.

So, for example, in my little library when a “clock” edge (actually modelled as a function call) occurs, my simple algorithm performs several table lookup operations. But in my intended use case these tables are static data. I failed to convey this detail to the compiler, so what it gave me was a generic library that can handle dynamically reconfigurable FSMs (which I don’t need) at the cost of imposing additional overhead in the case of static tables.
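The table-driven scheme described above is language-agnostic, so here is a tiny sketch of it in JavaScript rather than the original C++ templates. The table contents are made up (a two-state toggle machine), purely to show the clock-edge-as-function-call and the transition-table lookup:

```javascript
// Table-driven FSM sketch (the real library is a C++ template library).
// The transition table is the static data: state -> input -> next state.
var transitionTable = {
    off: { press: "on",  noop: "off" },
    on:  { press: "off", noop: "on"  }
};

function makeMachine(table, initialState) {
    var state = initialState;
    return {
        clock: function (input) {        // one "clock" edge == one call
            state = table[state][input]; // the table lookup on each edge
            return state;
        },
        state: function () { return state; }
    };
}

var fsm = makeMachine(transitionTable, "off");
fsm.clock("press"); // -> "on"
fsm.clock("noop");  // -> "on"
```

The overhead complaint in the text is exactly the `table[state][input]` lookup: when the table is known statically, a sufficiently informed compiler could replace the lookups with straight-line branches.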

What we would ideally like to be able to do is transform SCDL “machine” models (that is, the static model data comprising the I/O vectors and state transition matrix) into a native runtime that an assembly language dog could not further optimize by hand. So: no table lookups, just inline, blazing-fast code.

Given the simplicity of this little C++ program, later I may extend it a bit and try installing some variation of it server side for performing simulation runs (e.g. turn it into a webservice basically).

It’s quiet in here :)

I’m reserving this blog for talking about building software with circuit models. But in order to make it possible to efficiently even write about the subject I need pictures. Even better, I need a way to quickly make pictures. Even better, the pictures should be interactive, editable, savable, shareable…

This is the forthcoming “Schema” app for the browser I’ve mentioned briefly here. I’ve been busy: as of this writing the encapsule/schema repository on GitHub is 66 commits young and I’m on an unbroken 25-day commit spree. If you’re interested in my adventures in HTML5 open source development you can read the day-to-day here:

Early Schema SCDL Editor Demo

I’ve deployed a snapshot build of work in progress on Encapsule Schema – a single-page HTML5 application for creating, visualizing, and editing JSON-encoded Soft Circuit Description Language (SCDL) system models.

Demo snapshot build:

Although it’s primarily a demo of single-page, cached HTML5 application development at this point, you can actually explore the SCDL data model a bit by adding and removing SCDL entities from the forms embedded in the app’s main page.

SCDL Catalogue JSON

Schema Editor Produces SCDL in JSON

As you add/remove SCDL entities from a SCDL catalogue, you’ll note that the JSON representation of the SCDL catalogue is updated live.

This is not even pre-alpha at this point. But a good start :-)

If you’re interested in the HTML5 client development that’s going into the Schema editor, you can follow along on my personal blog: