Archive for the ‘General’ Category

Encapsule Project Next Mocha/Chai Test Author(s)

If you look at my GitHub public commit logs you’ll notice that there’s been very little activity here in 2014. This is deceptive; I’ve joined a start-up in Seattle called Azuqua (http://azuqua.com) and am actively working on a major update to the service, architecturally based on the Object Namespace Manager, the Soft Circuit Description Language (the onmd-scdl npm package), and some other for-now secret sauces (hint: graph theory).

The onm npm package is stable both in the browser (via browserify) and on Node.js, and I’m actively pushing bug fixes and small enhancements as I find/need them for production work. If you’re interested in using this package in your app and want more information, drop me an e-mail. I am happy to answer questions and help as I’m able.
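
If you want to kick the tires right away, the module loads like any other npm package. Fair warning: the model/store calls below are a from-memory sketch, not a guaranteed reflection of the current API; check the package docs for the real surface.

    // npm install onm
    var onm = require('onm');  // the same require works in the browser via browserify

    // Hypothetical sketch of constructing a declarative data model and a store.
    var model = new onm.Model({ jsonTag: 'contact' });  // declarative data model spec
    var store = new onm.Store(model);                   // in-memory store bound to the model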

Contributing

I was talking with John-David Dalton (lodash author) after a recent SeattleJS meetup and he advised me to party on Mocha/Chai for my testing needs. So I dove in and think Mocha/Chai is really great. I’ve been writing a ton of tests for new code that derives from onm, but just don’t have the time to circle back and do the same for onm itself.

If you’re interested in learning about the onm lib, I mean really learning it, and are willing to help out writing Mocha/Chai tests, please drop me a line.
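
To make the ask concrete, here’s roughly the shape of test I have in mind. The describe/it/expect scaffolding is standard Mocha/Chai; the onm.Model call is the same hypothetical sketch as above, so don’t take the constructor spec as gospel.

    // test/onm-model.spec.js (run with: mocha test/onm-model.spec.js)
    var expect = require('chai').expect;
    var onm = require('onm');

    describe('onm module', function () {
        it('exports something require-able', function () {
            expect(onm).to.exist;
        });

        // Hypothetical: the constructor name and spec shape are illustrative only.
        describe('data model construction', function () {
            it('builds a model from a declarative spec', function () {
                var model = new onm.Model({ jsonTag: 'example' });
                expect(model).to.be.an('object');
            });
        });
    });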

Upcoming SCDL Talks and Schema Demos

I’ll be giving an extremely short talk and demo of the forthcoming Schema design tool at LinuxFest Northwest 2013 (April 27th and 28th in Bellingham, WA) as part of their “lightning round” presentations. [1]

Tentatively, I’ll be giving a longer demo/talk to the Graph Database Seattle group sometime in May (the time and exact location in Seattle haven’t been set for this meeting yet).

Additionally, I’ll be doing a full demo and giving an extended talk about Soft Circuit Description Language (SCDL) at the monthly meeting of the Seattle Programming Languages (SeaLang) group on June 5th in Redmond, WA.

I’ll post more details as I get them.

Back to code… I’ve got lots of work yet to complete to avoid looking like an idiot :-)

[1] If you live in the Seattle area, LinuxFest should not be missed. The talks are great, the people are interesting and friendly, and of course it’s free.

Two Paths: The Encapsule Project Mission Explained

I took a break from coding today and re-wrote the Encapsule Project Homepage. I’ve started to talk about Schema a little online (not just here) and there’s some interest. The organizers of the Seattle Graph Database Meetup Group pinged me this AM and asked if I’m interested in presenting. Absolutely YES!

The revised project homepage explains the software IP re-use mission of the project to provide some context to all the low-level detail that I’ve surfaced here so far.

Here’s a re-post:

Two Paths Diverged

Fundamentally, digital hardware and software systems are manifestations of Turing Machine models (or some other universal computational model; choose your favorite, it’s not central to the argument here).

Similarly, systems comprising digital hardware plus software (e.g. whatever you’re reading this text on) are themselves describable as Turing Machine models. Few would dispute this. Yet despite their common mathematical roots, digital hardware and software engineering are radically different fields of endeavor.

To understand why, consider that hardware systems must be physically realized via a complex, time-consuming, and extremely expensive manufacturing process. Software, by contrast, is not. A bug in a chip or circuit board spells doom for a product and the company that produced it. A bug in a software system, by contrast, isn’t considered that big a deal because it can be fixed and an update issued at “low cost” with little customer impact.

So, of economic necessity, the hardware industry has always had to get it nearly 100% right before going to market, whereas the software industry has traditionally played fast and loose, largely ignoring formal design verification, simulation, and IP re-use strategies. Design it quickly, implement it well enough, ship it, and patch it later is the software mantra. That used to work.

There was a time when, if you found the right small group of people and locked them away for a while, they would build you the next VMS or Windows NT, the world would change, and the money would print itself. Who were these people?

They were computer scientists and engineers with deep knowledge of hardware (of necessity). The decision to forgo formalism in the name of expediency was not taken casually. It was carefully considered, and at the time the decision was correct (see money that prints itself). What was perhaps not clear at the time is that this paradigm choice would pave a road forward that many would follow, not knowing there was an alternate path to the same end.

Software Has Taken the Wrong Path

Even as recently as twenty years ago, the scale, scope, and importance of software systems as they exist today could scarcely have been imagined. Software systems have become so large and complex that very few people (myself included) really fully understand how they work. Sure, there are a few individuals who understand all the pieces and where they fit into the puzzle. However, the days when small groups of people were able to make startling software advances in short time seem to have passed. There’s simply too much complexity, too much legacy, too much code.

Software is imploding under the weight of its own complexity. I think this is so because, as an industry, we have largely chosen to ignore the path taken by the hardware engineering community and have not yet realized that the hardware guys have had it right all along.

The hardware road is harder, requiring more up-front investment in tools, methods, verification, re-verification, systemic re-use of IP, models that matter… But they’re innovating while we fix bugs, argue about the best way to port our old code to new platforms, and make incremental improvements to huge existing codebases.

To my way of thinking, an honest accounting of the cost of producing software must include the cost of constant revision and the manifest waste of having no efficient way to re-use existing software IP. Well-intentioned efforts to make software development more cost effective, like Agile, seem to me like putting the cart out in front of an unbroken horse. Break the horse, then we can argue about cart placement.

The Intrinsic Value of Re-Usable IP

Fundamental to the success of any hardware company is the management and re-use of its existing intellectual property. This is necessary because circuit designs are (a) very, very complicated and (b) must ultimately be realized in a physical manufacturing process subject to the laws of physics, cost and availability of materials, time…

For reasons discussed earlier, hardware details can’t be left to chance. You cannot design an arbitrary piece of hardware and then expect that you can actually manufacture it without knowing that the little pieces, and the system as a whole, are realizable. Chip manufacturers (the ones with fabs) invest enormous sums (hundreds of millions, if not billions, of dollars) annually to ensure that this loop is closed. There are entire teams of individuals whose job it is to ensure that designers have access to standard cell libraries and models that can actually be fabricated.

And it doesn’t stop there. Given re-usable libraries of hardware IP, hardware designers must additionally consider a myriad of complex interdependent constraints: space, time, power, heat, performance goals… Again, you can’t leave these things to chance when the cost of getting your first physical prototype is measured in 9-digit dollar units and multiple business quarters.

But that’s exactly what the hardware community does every day. And they’re really good at it because they have to be. But this is very expensive. So every piece of everything is re-used in new designs whenever possible. Software designers try, but it’s ultimately hopeless.

Software tools, methodologies and libraries are simply not evolved enough for this process to be efficient or safe. The unfortunate consequence of this is further fragmentation and duplication of effort. It’s often cheaper and less risky to roll your own solution than to invest the time required to locate and vet a source of re-usable software IP.

So software practitioners are necessarily faced with two bad choices: hunt/gather/adapt vs. roll-your-own. The first is inefficient and risky, the second ultimately wasteful and distracting.

Re-usable IP is where it’s at. Software needs to focus on the re-usable IP problem.

The “Soft Circuit” Idea

The concept of “soft circuits” evolved over many years of thinking about how best to write software libraries that were easy for other people to use effectively without a major investment. I had some small success but the results weren’t satisfying. What I really wanted was a way to build non-trivial re-usable chunks of code that you could just drop into a design and go.

Then one day, while doing real work for my employer, I realized that the thing I hated most about existing software libraries was that it was left to me to discern the overall state model of the library in order to use it. For example: call this function to get an object, then call these methods on the object passing in this other object. Then, if this is true, instantiate another object passing in your result object. Blah blah blah…

Why can’t I just have a “socket” like what’s soldered to a circuit board, and go find something that’s pin-compatible and drop it in? Why can’t I quickly assess the capabilities of a library by reading its “data sheet” like I can for an IC? Why do _I_ have to wire up the “pins” by hand just to try it out? If I know what my inputs and outputs are, why do I have to spend so much time looking for something I can just use? Lots of hard questions that seemed worth trying to answer.

I built two prototype systems, both of which metaphorically replaced “IC” with software plug-in and worked out from there. The first, a complete fandango, attempted to embed intelligence in plug-ins (written using conventional techniques – in my case C++). The idea was that a plug-in would be given innate knowledge of what it was compatible with and would cooperatively bind itself to other plug-ins at runtime to self-assemble useful higher-order structures. I don’t know what I was thinking. At the point where I could no longer read my own code, I gave up on this approach. It doesn’t scale. And it can seriously damage your brain.

The second prototype, still using software plug-ins to represent “ICs”, removed all intelligence from the plug-ins and instead delegated the responsibility of building higher-order structures to a generic algorithm that used mathematical graph models and declarative interconnection contracts, expressed as XML files, to do the building. This system worked: you could snap together visual representations of re-usable chunks of code (plug-ins, or collections of plug-ins wired up using XML declarations), push a button, and the system would do the graph splicing, instantiate the plug-ins, late-bind their inputs and outputs, spawn some threads, and actuate data flow through the graph. The result: software LEGOs. Well, sort of.

The reality was actually not as grand as I had hoped. Nobody beat a path to my door offering large sums of money to continue and complete the work. And in 2004 there wasn’t a lot of speculation going on in software, particularly not in software tools. The only VCs who would talk to me were those who had lost their shirts investing in EDA, and they mostly wanted to tell me to give it up, or come back when I had my first $1M sale. I couldn’t do either.

Out of time, and running badly in the red at this point, I burned the prototype code onto a CD and didn’t look at it again until 2012. For a while I even managed to convince myself the entire thing was a bad idea in the first place. But it’s not a bad idea. It’s just an idea bigger than I can manage alone. So what?

Encapsule Project

In 2013 I’ve resolved to take another shot at this. There are several reasons why I think there’s some small chance this might get some traction this time around:

  • Software is even more horked than it was then.
  • Open source has exploded. The maker movement has happened. People are searching for plausible answers and creative freedom.
  • The hardware industry has exploded with innovation and software continues to lag.
  • Not building the whole thing this time. It’s too big.
  • Building only the front-end design tool to allow people to create, and edit models. By itself I think this will be useful if only to further the debate. Maybe more. We’ll see.
  • I’m building for the browser.
  • Everything is open source under a permissive license.

As “Schema” (the browser-based design tool) comes into focus, I’ll be getting into a lot more of the details that are too difficult to convey as plain text. I also plan to do some talks and demos later this spring in the Seattle area. Please follow @Encapsule on Twitter for updates or check out the Encapsule Project Blog for more details.

Why Do We Do This?

I remember when writing software was fun and exciting. Little slices of it still are. But overall, it’s a giant fragmented mess and most of our time and effort is spent devising creative strategies to unhork the completely horked. Don’t get me wrong: I love software. But, I would love it more if we could return to the time when a day spent coding moved us forward instead of largely marching in place.

This is possible, I think, but we need to stop making up new languages and writing books on Agile methodologies, and instead study and learn from the hardware community. They’re doing better work, faster, and having more fun doing it.

We too can kick ass, have fun, and innovate. And, maybe in the process unlock the hardware guys so they too can realize their full potential.

Thank you for your interest in Encapsule Project!

LEGO Workers and Chips

LEGO workers and ICs

Ha. I love this!

Rebooting Encapsule Project for the Web

Repost from: http://www.encapsule.org

Welcome to the Encapsule Project.

The Encapsule Project is an open source effort to apply electrical engineering design and modeling techniques to software development.

Quick update:

I’ve been working to re-imagine the Hyperworx prototype, or at least its core concepts, using modern web technologies and am preparing to launch a new single-page HTML5 application called “Schema” that implements an interactive data visualization and editing environment for working with JSON-encoded Soft Circuit Description Language (SCDL, pronounced “scuddle”) models.

Briefly, “soft circuits” are the software analogue of digital circuit board schematics. Just as a digital circuit board schematic captures the essence of a system design to be realized in hardware, a “soft circuit” schematic captures the essence of a system design to be realized in software. Given this, SCDL is a JSON serialization dialect for exchanging “soft circuit” schematics across the Internet.

So what do you do with a “software schematic” that is machine-readable? There are a lot of possibilities. But first, we need some SCDL to fuddle with…
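
Since nothing’s published yet, here’s a purely hypothetical taste, written as a JavaScript object literal so it’s easy to fuddle with in a REPL. Every property name here is invented for illustration; the real SCDL dialect will differ:

    // Purely hypothetical sketch of a JSON-encoded "soft circuit" model.
    var exampleScdl = {
        scdlVersion: "0.0 (hypothetical)",
        circuit: {
            name: "echo",
            inputs:  [ { pin: "textIn",  type: "string" } ],
            outputs: [ { pin: "textOut", type: "string" } ],
            components: [
                { id: "xform1", implements: "string-transform" }
            ],
            wires: [
                { from: "textIn",        to: "xform1.input" },
                { from: "xform1.output", to: "textOut" }
            ]
        }
    };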

SCDL models are cool:

  • SCDL is based fundamentally on Turing Machine models.
  • SCDL can be used to model any system regardless of its scale.
  • SCDL is not tied to a hardware platform, operating system, or programming language.
  • SCDL is not actually even tied to software: it can be used to model hierarchical systems in any domain (although we’re primarily interested in its application to the software domain here).

SCDL is transformative:

SCDL models are machine-readable abstract system specifications: exactly what’s required to write a program that transforms SCDL into an executable runtime implementing your design, given a binding to a specific target runtime environment.

For example, it would be possible to write a “transformer” that converts SCDL into a C++ program, or creates a single-page HTML5 application that performs the “transformation” on the client by operating directly on the SCDL JSON. Or on the server via Node.js. Or all of the above. Maybe we want to model our entire cloud-based enterprise in SCDL and “transform” the clients, the custom back-end servers that execute app logic on metal, and everything that glues it all together.
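
As a sketch of the idea only (this is not actual Encapsule code), a toy transformer might just walk the hypothetical model above and emit a JavaScript stub per component:

    // Hypothetical sketch: walk a parsed SCDL model and emit JavaScript stubs.
    function transformScdl(scdl) {
        var lines = [];
        scdl.circuit.components.forEach(function (component) {
            // One stub function per component; a real transformer would also
            // bind inputs/outputs per the model's wire declarations.
            lines.push('function ' + component.id + '(input) {');
            lines.push('    // implements: ' + component.implements);
            lines.push('    return input; // stub body');
            lines.push('}');
        });
        return lines.join('\n');
    }

    console.log(transformScdl(exampleScdl)); // using the sketch model above

A real transformer would, of course, honor the wire declarations to late-bind component inputs and outputs: exactly the graph-splicing work the old prototype did in C++.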

The possibilities are endlessly cool. Please stay tuned for updates. The SCDL editor is coming soon!

– Chris


Publication Plan, Dependency Questions, License Model

I spent some time back in late February / March working to revive the prototype code from 2004 (the last time I actually looked at the source code) but stopped short of a total bring-up, thinking the time would be better spent refactoring routines that I now feel are poorly implemented (who wrote this crap?) and porting sections dependent on the Windows OS (see the Welcome post).

The prototype codebase currently has the following major dependencies:

  • The C++ Standard Template Library (STL)
  • The Boost C++ libraries
  • Windows platform APIs (see the Welcome post)

Obviously quite a lot has changed since 2004. Notwithstanding, the investments in STL and Boost in particular seem to have withstood the test of time.

As I’m super busy currently, getting the source code published is going to take some time because I do not want to release the currently busted and out-of-date source as a baseline. It would simply be too confusing and chaotic. What’s likely is that as I tackle the various subsystems of the platform (there are ~six major subsystems), I’ll package them as libraries and release them one at a time. The first will likely be the base parsers that read and deserialize the XML-encoded data flow graphs upon which the entire platform is based.

Source code will eventually appear on GitHub, where I’ve created the Encapsule-Project organization (no repositories currently, so don’t even bother looking).

License model: I intend to publish under the Boost Software License because it is simple, permissive, and non-controversial. I could say more but the explanation would then exceed the number of words in the actual license text.

I would appreciate hearing from anyone willing to offer suggestions about the dependencies listed above, and more broadly from people who have practical ideas for applying this work to the problems they’re facing. Or, if you think the whole effort is a total waste of time, I would like to hear that too :)

Thanks, Chris

Welcome to the Encapsule Project

Started in 2001, the Encapsule Project was at one point a commercial undertaking of my start-up, Encapsule Systems, Inc. My partners and I failed to get the company funded, pulled the plug in 2004, and all went on to do other things.

Until now, little information about this project has been made public. I am now starting to publish details in the hope of generating interest in this work.

I’ve created two pages on this blog to start this process:

Subsequent posts will discuss my future plans for this project. Briefly:

  • Release the entire 170K+ lines of C++ that comprise the prototype codebase as open source.
  • Update portions of the codebase to leverage newly standardized extensions to the STL (TR1).
  • Carefully review and update significant dependencies on the Boost C++ libraries.
  • Port the Windows platform-specific sections of the code to Linux (or possibly just leverage Qt).
  • Port the Windows platform-specific user interface to Qt.
  • Explore options for enabling plug-ins to be written in dynamic languages (e.g. Python).
  • Build a community comprising core contributors, plug-in and model authors, and end users of the platform (this could be people who snap together applications/services using other people’s models, or other projects that wish to embed the core runtime controller).
  • Find corporate sponsorship for this effort?

Thanks, Chris