
Layers


I’d like to drill down on one element of the arguments I made in my last two posts. I alluded to it there, and it’s an idea that’s ever-present in my thinking: the idea of layering, or separation of concerns. I suspect it is the most common reason I get myself into trouble explaining my ideas; while I assume it is understood that I’m talking about layered solutions, I don’t always make that point explicit.

The idea (and it has some very important implications) is that a solution/technology/standard, etc. needs to address a limited, well-defined problem at the proper level of abstraction. I’m a huge fan of the Unix philosophy: “do one thing and do it well,” which captures the idea in its simplest form quite nicely. Tied into the idea of “levels of abstraction,” it is a paradigm/pattern that turns up all over the place in computer science: machine code is abstracted into assembly language, which is abstracted into something like “C,” which is abstracted into Perl/Python/PHP, etc. Each “layer” need only follow the interface provided by its underlying layer and provide a reasonably useful interface for the layer above. It’s a very powerful concept; indeed, the Internet itself is built on layers such as this.
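To make the layering idea a bit more concrete, here is a quick sketch in Python (the names are made up purely for illustration): a low-level layer that knows only about raw bytes, and a layer above it that speaks in documents and never peeks past the interface it is handed.

    class ByteStore:
        """Lowest layer: stores and retrieves raw bytes by key."""
        def __init__(self):
            self._data = {}

        def put(self, key, blob):
            self._data[key] = blob

        def get(self, key):
            return self._data[key]


    class DocumentStore:
        """Higher layer: speaks in documents, delegating storage details to the layer below."""
        def __init__(self, store):
            self._store = store

        def save(self, name, text):
            self._store.put(name, text.encode("utf-8"))

        def load(self, name):
            return self._store.get(name).decode("utf-8")


    docs = DocumentStore(ByteStore())
    docs.save("motto", "do one thing and do it well")
    print(docs.load("motto"))   # -> do one thing and do it well

Each class does one small thing; swap out ByteStore for something that writes to disk or to the network and DocumentStore never notices.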

But this idea of layered solutions is in no way limited to the “plumbing” of the systems we use. I would contend that all of the work we do as programmers, information technologists, and librarians is exactly this: we use the tools and resources we have available (the underlying layer) to provide a useful abstraction for the customers/clients (the layer above) who will use our systems, collections, guidance, etc. Everywhere I look in my work, whether it be code, applications, or interactions with co-workers and faculty, I see this pattern popping up again and again.

I’m actually mixing a few different ideas here — the layers of abstraction pattern and the separation of concerns pattern are related, but not exactly the same thing. No big deal though, since they go together quite well. I’ll add a third idea to the mix, since it’s an important element of my thinking. It’s called Gall’s Law (I actually have it printed out and tacked up next to my desk) and says:

“A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.”

We fall into a trap in academia of trying to solve problems vertically (hmm, the image of a silo pops into mind) rather than horizontally. We see a problem, are smart enough to fashion a solution, and so we solve it. But we run into trouble, because we come upon a related problem and wind up starting the process all over again, fashioning a new solution for this specific problem. Then our users come asking for help solving problems we hadn’t exactly thought of, and the problems compound: we’re thrashing, constantly on the treadmill of solving today’s problem. We typically make two big mistakes. One is not following Gall’s Law (and indeed the Unix philosophy): we have not properly decomposed our problem, and a working solution to a given problem might very well be a set of small solutions nicely tied together. Two, we have not adequately abstracted our problem. We simply do not recognize the problem as one that has already been solved in some other arena because, superficially, they look different (e.g., not recognizing that the problem Amazon S3 tries to solve is remarkably like the problem we try to solve with our institutional repositories).

This also flows into the idea of “services,” which we are simply not seeing enough of in libraries, and I think it is a key piece of our survival. My “Aha!” moment came a number of years back when I discovered Jon Udell’s Library Lookup project. It was the first time I saw the OPAC as simply a service I could interact with, either through its dedicated interface or via a scripted bookmarklet. It didn’t matter to the catalog which I used. It simply filled its role, performed the search, and returned a result. Jon Udell had unlocked the power of REST: a theory, derived from HTTP itself and described by the author of the HTTP spec, that elegantly lays out a set of rules (the REST “constraints”) which, when followed, result in robust, evolvable, scalable systems. I no longer felt bound to one user experience for the web applications I was building. I could begin thinking about an application as a piece of a larger, loosely coupled system. “Do one thing and do it well” was, in this case, indeed freeing.
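To give a rough flavor of what that looks like (the endpoint and parameters below are purely hypothetical, not any particular catalog’s API): a script can ask the catalog the same question a bookmarklet or a web page would, and the catalog neither knows nor cares which client is asking.

    from urllib.parse import urlencode
    from urllib.request import urlopen

    def opac_search(base_url, isbn):
        """Ask the catalog's (hypothetical) search service for records matching an ISBN."""
        query = urlencode({"isbn": isbn})
        with urlopen(base_url + "/search?" + query) as response:
            return response.read()   # same result regardless of which client asked

    # e.g. (hypothetical endpoint):
    # records = opac_search("https://catalog.example.edu", "0596529260")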

But this brought up a whole new slew of problems: how do I know that the piece I build, properly abstracted and decomposed, will be useful to the other pieces of the system? If I’m building all of the pieces, no problem, but that sort of defeats the purpose, no? I want it to be useful to some other piece that someone else has built.

This is where my interest in standards and specifications began in earnest. What had seemed utterly mundane and boring to me previously suddenly seemed really useful and (I hate to admit ;-) ) exciting. People get together, compare notes, argue, sweat over tiny details, and end up defining a contract that allows two separate services to interoperate. Maybe not seamlessly, and perhaps not without the occasional glitch, but ultimately it works and the results can be astonishing (cf. THE WEB).
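Atom (RFC 4287) is a nice, small example of such a contract: because the spec pins down the element names and the namespace, a client written with no knowledge of a given server can still read what that server publishes. A hedged sketch (the feed URL is made up):

    from urllib.request import urlopen
    import xml.etree.ElementTree as ET

    ATOM = "{http://www.w3.org/2005/Atom}"   # namespace fixed by the Atom spec

    def entry_titles(feed_url):
        """Fetch any Atom feed and return the title of each entry."""
        with urlopen(feed_url) as response:
            root = ET.fromstring(response.read())
        return [entry.findtext(ATOM + "title", default="")
                for entry in root.iter(ATOM + "entry")]

    # e.g. (hypothetical feed):
    # print(entry_titles("https://repository.example.edu/collections.atom"))

The client and the server may never have heard of each other; the shared spec is what lets them interoperate.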

Let me be clear about one thing: this work of separating concerns, finding the proper level of abstraction, and creating useful and usable service contracts is a hugely difficult undertaking. And it’s work librarians should be and need to be involved in. No one knows our problems like we do, and while I refuse to believe that our problems are unique, who better than us to recognize that our problem X is exactly the same problem addressed by Y in some totally different realm? No one is going to decompose our problems or hit upon appropriate abstractions for us; that is OUR work to do.

I can offer a bit of empirical evidence to back up my assertion that this is a useful endeavor. Taking Gall’s words to heart, we built the DASe project to be as drop-dead simple as possible, made everything a service, and evolved out from there, with modules and clients that can be quickly implemented and are designed to do one thing well (the good ones stick around, becoming part of the codebase). I’m not here to promote DASe (that’s another topic; my preference would be to see the ideas we have incubated reimplemented in an application that already has a good, solid community: DSpace 2.0, anyone? :-) ), but rather to push the idea that these basic, solid principles upon which the Web itself is based have real value and offer a huge return on investment.

And so, to answer the incisive criticism of my original post: my suggestions are indeed of a technical nature, but that is exactly the layer I am addressing. It’s not vertical solutions I am after (“here is a problem…here’s how you fix it”), but rather horizontal ones: if we get the tooling layer right, we will have the opportunity to build, on top of that layer, more robust solutions, ones that get down into the nitty-gritty social and cultural problems that can seem at times to be quite intractable. My opinion is that they ARE intractable if they are attempted atop an unstable underlying layer. Let’s allow the folks who work daily in the incredibly diverse and challenging social realm to do so without the obstacles presented by tools that are not doing their job effectively.

