Tuesday, August 23, 2005

Web Databases vs. Web Services/API's

It seems like everyone and their mother is talking about Web 2.0, mash-ups, and Web Services lately. On the same day that Mike Weiksner posted this article to my del.icio.us "for" bucket, I was reading a BusinessWeek article called "Mix, Match, and Mutate". Today, News.com published an article entitled "From Web page to Web platform". Perhaps the most lyrical and eloquent rhapsody around this idea appeared in the most recent issue of Wired, in an article entitled "We are the Web". The passage begins:
These are safe bets, but they fail to capture the Web's disruptive trajectory. The real transformation under way is more akin to what Sun's John Gage had in mind in 1988 when he famously said, "The network is the computer." He was talking about the company's vision of the thin-client desktop, but his phrase neatly sums up the destiny of the Web: As the OS for a megacomputer that encompasses the Internet, all its services, all peripheral chips and affiliated devices from scanners to satellites, and the billions of human minds entangled in this global network. This gargantuan Machine already exists in a primitive form. In the coming decade, it will evolve into an integral extension not only of our senses and bodies but our minds.
Later he remarks:
By 2015, desktop operating systems will be largely irrelevant. The Web will be the only OS worth coding for.
This vision is similar to previous pipe dreams, like The Intergalactic Foundation (which I, in my college years and fresh-out-of-college years, happened to have been a big believer in), except it doesn't seem like such a pipe dream anymore. The web has taught us a great deal about what is necessary to make a truly "intergalactic" web platform work, and if we look at the evolution of pipe dream towards realistic vision we see a trend towards increasing simplicity of the model. SOAP was a revelation because it looked like CORBA reincarnated on a more lightweight web substrate. In its first incarnation it did not require any specialized software other than a web server and an XML parser, which are much easier to come by and simpler beasts than CORBA ORB's. Unfortunately, SOAP seems to be following the path of CORBA in a spiral of increasing complexity towards irrelevance. For that reason, REST appears to be the architecture of choice for these emerging "web service" applications.

The common thread through all of these discussions about distributed computing platforms is the notion of API's, and so the RPC (remote procedure call), in some form, remains the central figure in the vision of Web 2.0. But what I think has been absent from these discussions is consideration of a DBMS for the web. For decades now, some sort of DBMS has served as the backbone for the vast majority of "data-driven" applications, which comprise virtually 100% of corporate IT systems and "business apps". The reason is simple: a standard, consistent, elegant data management platform is not a trivial undertaking, and yet it is a requirement for all such applications. For most software developers, developing these applications would be unthinkable without a DBMS, usually an RDBMS.

Databases often serve as an integration point between several applications that share the same data (in fact, this was one of the primary motivations for the development of the first database management systems). Sometimes the quickest way to extend the functionality of an existing application that you've inherited is to go around the code, look at the database, and build a new app directly against that. This is frowned upon but fairly common, in my experience, often because the existing code either doesn't provide an API, per se, or the API is deficient in some way (functionally or non-functionally). Still, the philosophy that one shouldn't access a database directly, and should go through API's instead, persists, and this is still the way many systems are integrated. What are the reasons for this?

Well, one reason is that you want to protect your database from "corruption". There are often complex rules surrounding how records get updated that cannot be fully expressed through the "data integrity" machinery of the DBMS, and so some sort of API call (which might be a stored procedure in the RDBMS) backed by code that enforces these rules is required. Furthermore, the space and shape of update operations is usually pretty well understood and to some degree fixed. The application designers can usually map out the majority of useful write operations and provide API calls, or end-user functionality, to accomplish them. Not so with the reading of the data. Application developers often find that users need to be able to generate "reports" about the data that were not foreseen. There are myriad possible ways that a user might want to filter, sort, count, or see relationships amongst the different data elements, and the chances of predicting ahead of time all of the ones users will want are slim. Hence the robust market for reporting and OLAP software that hits the database directly, as well as the trend of building data warehouses - large uber-databases with data culled and integrated from multiple systems across an enterprise, to which OLAP software is then applied.
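To make the asymmetry concrete, here is a minimal sketch in TypeScript, with entirely hypothetical table and function names: a well-understood write operation wrapped in an API call that enforces the business rules, alongside the kind of ad hoc read query that nobody thought to wrap in an API.

```typescript
// Hypothetical names throughout; a sketch of the asymmetry, not a real system.
interface Db {
  execute(sql: string, params: unknown[]): Promise<void>;
  query<T>(sql: string, params: unknown[]): Promise<T[]>;
}

// Write side: one of a handful of well-understood operations, guarded by rules
// the DBMS's own integrity machinery can't fully express.
async function recordPayment(db: Db, accountId: string, amount: number): Promise<void> {
  if (amount <= 0) throw new Error("payments must be positive");
  // ... more business rules enforced in code here ...
  await db.execute(
    "INSERT INTO payments (account_id, amount, paid_at) VALUES (?, ?, NOW())",
    [accountId, amount]
  );
}

// Read side: the "report" nobody predicted. There is no API call for this;
// the ad hoc query goes straight at the schema.
async function paymentTotalsByRegion(db: Db): Promise<{ region: string; total: number }[]> {
  return db.query(
    `SELECT a.region, SUM(p.amount) AS total
       FROM payments p JOIN accounts a ON a.id = p.account_id
      GROUP BY a.region`,
    []
  );
}
```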

Another reason for the persistence of this API-oriented thinking, I think, is that the notion of the importance of "encapsulation" is still ingrained in our collective software engineering unconscious. We were taught the importance of writing, and writing to, abstract interfaces in our software development, and to treat the implementations of these interfaces as "black boxes" that cannot, and should not, be seen into. It was thought that encapsulation could not only provide greater security, but also prevent users of software libraries from building dependencies on the parts of a library most likely to change (the implementations vs. the more stable interfaces), which would cause the client system to break. While this interface vs. implementation concept has a lot of merit when developing software frameworks, from a practical standpoint its value is negligible in the context of pure read access to data, particularly when the database software and database schema of a production application are the things least likely to change. Even when the schema does change, this usually requires a change to the interfaces representing the data anyway, since there is usually a straight mapping from database schema to those interfaces. The open-source era has also taught us a lot about the relative value of this black-box notion of software components. Contrary to our prior intuition, in a globally networked environment with constant, instant, and open communication, lots of eyes looking deep into software can increase its safety and reliability. Our ability to respond to changes in software components that break the apps we build on top of them is also enhanced.

A Case Study

Recently, I wrote a Greasemonkey script that reinforced my belief in the need for a web database service for Web 2.0 apps. While it was a fairly trivial script that I wrote simply to tinker around, it highlights some of the shortcomings of a purely API-centric approach to these new cross-web applications. Basically what the script does is replace the photos in the slideshows of city guides on the Yahoo travel site with Flickr photos that are tagged with that city's name and have been flagged by the Flickr system as "interesting".

Well, the first problem is that the Flickr API does not give you a way to retrieve interesting photos. They have a search method that allows you to retrieve photos with the tags you specify, but "interestingness" is some special system attribute which is not modeled as a tag. In a situation like this, where the method hard-codes a limited set of ways in which you can query the data, you're pretty much up shit creek if you want to query the data in a way that the developers didn't anticipate. You can ask the Flickr development team to provide it, and hope that they honor your request and implement it within a reasonable timeframe, but your deadline will likely have passed by then. Luckily for me, there's a screen I can scrape to grab the photos I need - an inelegant hack, but it does the job.
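For the curious, the scraping workaround looks roughly like the sketch below. Greasemonkey's GM_xmlhttpRequest is real, but the URL pattern and the regular expression are assumptions made for illustration, not the actual Flickr page structure.

```typescript
// The URL pattern and the regex are assumptions for illustration only.
declare function GM_xmlhttpRequest(details: {
  method: string;
  url: string;
  onload: (response: { responseText: string }) => void;
}): void;

function fetchInterestingPhotoUrls(cityTag: string, onPhotos: (urls: string[]) => void): void {
  GM_xmlhttpRequest({
    method: "GET",
    // The "interestingness by tag" page -- the data the API won't hand over.
    url: "http://www.flickr.com/photos/tags/" + encodeURIComponent(cityTag) + "/interesting/",
    onload: (response) => {
      // Pull photo thumbnail URLs straight out of the returned HTML.
      const urls = response.responseText.match(/http:\/\/static\.flickr\.com\/[^"]+\.jpg/g) || [];
      onPhotos(urls);
    },
  });
}
```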

The second problem I had was that I wanted to filter out any photos tagged as "nude", not wanting to offend the users of my script with the sight of unwanted genitalia when they're exploring possible vacation destinations. There is no exclude-tag option for the search method, and no easy way to do this. I could, if I wanted to, put a loop in my program to repeatedly call the search method (assuming the search method did actually provide an option to specify "interesting" photos), and for each photo in the result page invoke the Flickr service again to find out all of that photo's tags and throw it away if it has a "nude" tag, calling the search method repeatedly until I have the number of photos I need to fill the slideshow. Now, it's unlikely that the search method would need to be invoked more than twice, but I have to code for an indefinite number of iterations of this loop because I can't know for certain, at any time, for any given city, how many nude photos there will be in the results. And two invocations of the search method are already more than I should have to make. Not only is this solution more work to implement, but it has very unfavorable performance characteristics and puts unnecessary load on the server. Instead of making one service call over the network, I have to make (N+1)*X calls, where N is the number of results in each page and X is the number of pages that need to be processed to fill the slideshow. In this case, this requirement turned out not to be worth the effort and performance impact it would have, so I let it go.
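Here is a sketch of what that loop would have looked like, assuming (hypothetically) that the search method could be told to return only "interesting" photos; the callFlickr helper and the response handling are simplified stand-ins for the real REST plumbing. The point is the call count, not the details.

```typescript
// Hypothetical: assumes the search method could return only "interesting"
// photos, which it couldn't. Response parsing is simplified.
declare function callFlickr(method: string, params: Record<string, string>): Promise<any>;

async function interestingPhotosExcluding(tag: string, excludeTag: string, needed: number): Promise<any[]> {
  const kept: any[] = [];
  let page = 1;
  while (kept.length < needed) {
    // 1 call per page of results
    const result = await callFlickr("flickr.photos.search", { tags: tag, page: String(page) });
    const photos: any[] = result.photos.photo;
    if (photos.length === 0) break; // ran out of results
    for (const photo of photos) {
      // +1 call per photo, just to learn its full tag list
      const info = await callFlickr("flickr.photos.getInfo", { photo_id: photo.id });
      const tags: string[] = info.photo.tags.tag.map((t: any) => t.raw);
      if (!tags.includes(excludeTag)) kept.push(photo);
      if (kept.length === needed) break;
    }
    page++;
  }
  return kept; // (N+1)*X round trips to fill one slideshow
}
```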

The third problem I encountered was a consequence of the screen-scraping approach I was forced to take. I wanted to display the title of each photo, just like the default Yahoo slideshow does. The search method of the Flickr API returns the title of each photo in the results, but unfortunately the screen that shows a page of "interesting" photos with a given tag does not. If I want to display the title of each photo in the slideshow, I have the same (N+1)*X problem I have with wanting to filter out nude photos; I'd have to make a separate call to get the title for each photo in the page. This was not such an easy requirement to let go of, so I was forced to pay the performance penalty.

Now, this was a very small script with very limited functionality, but you can see the issues that crop up when you want to build a real-world web app using a purely API-based approach. A set of name-value pairs - which is essentially what the input to a method/REST endpoint is - cannot approximate the power of a full relational/pattern-matching calculus, the kind that a typical database query language like SQL approximates (the usual way around this is to allow one of the name-value pairs to represent a query that gets executed directly against the database, which is nothing more than proxying the DB query interface through the method call). It is also generally much more efficient to look at a diagram of a data model to figure out what query to run against a database than it is to read a functional API spec to figure out how to orchestrate a set of API calls to accomplish what one query could.
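For contrast, if the data were exposed through a query interface, the whole orchestration above would collapse into a single declarative request. The endpoint, vocabulary, and query syntax in the sketch below are entirely hypothetical - an illustration of the idea, not an actual Flickr service.

```typescript
// Entirely hypothetical endpoint and vocabulary; a sketch of the idea only.
const query = `
  SELECT ?photo ?title
  WHERE {
    ?photo ex:taggedWith    "paris" .
    ?photo ex:isInteresting true .
    ?photo ex:title         ?title .
    FILTER NOT EXISTS { ?photo ex:taggedWith "nude" }
  }
  LIMIT 20
`;

// One round trip instead of (N+1)*X, with the filtering and projection
// expressed declaratively and evaluated where the data lives.
async function runQuery(endpoint: string, q: string): Promise<any> {
  const response = await fetch(endpoint + "?query=" + encodeURIComponent(q));
  return response.json();
}
```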

We need a WDBMS (Web Database Management System) or WDBS (Web Database Service)

I say, let's use API's when appropriate (for most write access to data), and give access to DBMS query interfaces when appropriate (which is often the case for read access to rich data repositories). We have a good architecture for Web Services/API's in REST, which is proving itself in real and prominent (press-worthy, at least) apps. Where's our web database architecture, which can complement REST in its simplicity and ability to scale to a global level? Well, as I've expounded on in previous posts, I think RDF is it.

Another point to consider is that as these mash-ups get more sophisticated they will no longer be pure mash-ups. Instead of merely exploiting existing relationships between data in different web sites, they will allow for the creation and storage of new relationships amongst data that is globally distributed across the web. These applications will need to have write access to their own databases, built on DBMS's designed for the web.

Designed for the web, these databases should be available as online services that can be accessed over the web. There should be a consistent serialization defined from an arbitrary dataset to an "on-the-wire" transport format in the lingua franca of the web - XML - which RDF provides, or alternatively into another web format that is simpler and better - JSON. (This requirement could naively have been achieved by storing your data as XML with some sort of XML database technology, but XML has many problems as a data model, not the least of which being that it violates the KISS principle.) Physically, they should look like the web, with a similar topology and the ability to be massively distributed and decentralized, with distributed query mechanisms that can work in a peer-to-peer fashion. As the data substrate underpinning the sophisticated mash-ups of the future, I see them filling in what might be viewed as the currently "negative space" of the web, the gaps between web sites. I can see these kinds of database services really coming into their own serving as data hubs between multiple sites.
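As a purely illustrative example of what such an on-the-wire serialization might look like, here are a few statements rendered as JSON; the property names and URIs are invented for the sketch and don't correspond to any standard format.

```typescript
// Invented property names and URIs; not any standard RDF serialization.
type Triple = { subject: string; predicate: string; object: string | number };

const dataset: Triple[] = [
  { subject: "http://example.org/photo/123", predicate: "http://example.org/terms/taggedWith", object: "paris" },
  { subject: "http://example.org/photo/123", predicate: "http://example.org/terms/title", object: "Eiffel at dusk" },
  { subject: "http://example.org/photo/123", predicate: "http://example.org/terms/interestingness", object: 87 },
];

// One consistent serialization from an arbitrary dataset to the wire.
const onTheWire: string = JSON.stringify(dataset);
```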

As an experiment, I will be putting a stab at such a WDBS online in the near future: a web app that I'm putting together using Kowari's RDF database engine. It will be available for free use by mash-up experimentalists who just have a Mozilla browser with Greasemonkey at their disposal and need some place online to store their data. More news on that coming up ...

Monday, August 22, 2005

The Web Database

there are many who have traced the history of database management systems, in particular the Great Debate between the network model and the relational model - embodied in their key proponents, Charles Bachman and E.F. Codd, respectively - and note that if there is any purely technical factor that contributed to the relational model's triumph over the network model, it would be that the relational model was simpler. not only were network databases more complex to manage from an administrative perspective, but from a user standpoint querying network databases was complex and error-prone, because the developers of the network model were never able to devise a simple declarative query language, relying instead on procedural devices like goto's and cursors and requiring the user to have an intimate low-level knowledge of the physical data structures. some relational purists will argue that the relational model's solid mathematical foundation was the source of its technical superiority, but from a pragmatic perspective its grounding in predicate calculus was only important insofar as it simplified the problems of storing and accessing data.

we see the idea of simplicity appearing over and over again when we analyze the advantages of various successful models and systems over their competitors/predecessors. HTML vs. SGML. REST vs. SOAP. Hibernate over EJB and Spring over J2EE. Extreme Programming's KISS philosophy and the New Jersey approach to design. Capitalism vs. Communism. hell, even Nike is going barefoot these days, and in the world of organized violence the paring down of "barred" holds and the mixing of styles is all the rage. common to all of these frameworks is the greater flexibility and creative freedom to allow human ingenuity its fullest expression. when the prime value of the global network that all of our lives are being woven deeper and deeper into is the aggregation and multiplication of human capital, i think that it's no accident that models which release human capabilities are gaining more and more prominence over those that attempt to control them.

what many people fail to realize about the RDF model of data is that it is a simpler and more general model of data than anything that has come before it. not RDF with schemas and ontologies and all that jazz - that's actually more complex than anything that has come before it. i'm talking about basic RDF. because RDF was designed originally as a data model for the web, one key requirement had to be met: that any data anywhere on the globe, whether it be in relational databases, network databases, flat files, or what have you, could be mapped to it. consequently, what was produced was a kind of lowest common denominator of data models. a key concept here is that of the fundamental, irreducible unit of data as the simplest kind of statement (or, more precisely, in the language of mathematics: a binary relation). even C.J. Date - arguably second only to Codd as an authority on the relational model - acknowledged in a recent comment on "relational binary database design" that there is an argument for the binary relation being an irreducible unit out of which the n-ary relations that relational theory deals with can be composed. in his comment, he describes how a ternary (3-column) relation can be composed by "joining" 2 binary relations. by breaking down data into something a bit more granular to manipulate, we gain a power and flexibility not unlike that envisioned by Bill Joy when he waxes philosophic about nanotechnology and its promise of the ability to create any physical thing by manipulating granular components of matter. indeed, much of the progress in our understanding of matter has been driven by successive discoveries of increasingly more granular, or atomic, units of matter.
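a tiny worked example of Date's point, with invented attribute names: a ternary relation {employee, dept, salary} recovered by joining two binary relations on their shared key.

```typescript
// invented attribute names; a 3-column relation recovered from two binary relations
const worksIn = [                 // binary relation: employee -> dept
  { employee: "alice", dept: "engineering" },
  { employee: "bob", dept: "marketing" },
];
const earns = [                   // binary relation: employee -> salary
  { employee: "alice", salary: 95000 },
  { employee: "bob", salary: 80000 },
];

// "join" the two binary relations on the shared key to compose the ternary relation
const employeeDeptSalary = worksIn.flatMap((w) =>
  earns
    .filter((e) => e.employee === w.employee)
    .map((e) => ({ employee: w.employee, dept: w.dept, salary: e.salary }))
);
// => [{ employee: "alice", dept: "engineering", salary: 95000 }, ...]
```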

"No tuples barred" data kung-fu

there's another aspect of RDF that has practical consequences that make it a good fit for the web: its "self-describing" nature. this aspect of RDF is not just something that was artificially designed in or layered on; it follows quite naturally from its reductionist foundations. since we effectively use the irreducible binary relation as a kind of building block to compose larger types of relations, each irreducible binary relation must have an independent existence apart from the compositional relationships it participates in. it must have a global identifier to be independently recognizable by the system. when the most granular components of even the most complex dynamic aggregations of data are identifiable as individuals with an independent existence, the effect is that the data becomes self-describing. contrast that with the relational model, wherein columns are defined relative to a relation. columns cannot be said to exist independent of some relation of which they are a part.
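a quick sketch of that contrast (the URIs are invented for illustration): every part of an RDF statement is globally identified, so the statement can stand on its own, whereas a relational column only means something relative to its table.

```typescript
// invented URIs; the point is that every part of the statement is globally identified
const statement = {
  subject: "http://example.org/photo/123",            // a globally identified thing
  predicate: "http://example.org/terms/taggedWith",   // a globally identified relation
  object: "paris",
};
// a relational column, by contrast, has no independent identity: "photos.tag"
// means nothing apart from the photos relation it is defined in
```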

when data is self-describing, schema becomes inessential. there are no RDBMS's that I'm aware of that allow data to be created that does not conform to some pre-defined schema. XML, on the other hand, another self-describing data format, does not require a schema to exist before you can create a valid XML document. while schema may be useful for enforcing/confirming some kind of organization of the data, it is not essential to the creation and manipulation of data.

this allows you to have a database that does not require the kind of bureaucratic planning that the database modeling exercise in a large organization can devolve into before being put into action. if it were a relational database, it would be as if there were no conceivable tuple barred from creation. it allows a level of responsiveness and agility in reacting to problems and creating solutions that simply isn't possible with today's RDBMS technology, and with the bureaucracy that has developed in many corporate IT departments around the administration and development of such database systems.

such a system would be much like a database created in Prolog (which almost certainly had an influence on the design of RDF, given its early "knowledge representation" aspirations). in Prolog you can assert any fact, i.e. make any statement that you want, without having the predicates predefined. any kind of higher-order structure or logic that exists among the facts, such as a graph connecting a set of binary relations, is an emergent property of a dataset that can be discovered through inference, but is never explicitly defined anywhere in the system. while some sort of schemata may serve as a guide to a user entering facts and rules in a Prolog database, Prolog is not aware of it, and has no way of enforcing it. this is much the way that the human brain, indeed matter itself, works. while it's possible at higher levels of organization for both the brain and matter to create rigid molds into which things that don't fit are not accepted, they don't fundamentally work this way. by the same token, it is possible to create RDF systems that rigidly enforce RDF schemas and ontologies, but i wouldn't recommend it. the bigger your world gets, the more flexibility you want. as your horizon expands, it becomes increasingly difficult to define a single schema that fits all data, and the web is about as big a data universe as you can get. the simpler model scales better.
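a minimal sketch of this schema-less, Prolog-like behavior, as a toy triple store (not any particular RDF or Prolog engine): any fact can be asserted without its predicate being declared anywhere, and whatever structure exists is discovered afterwards by pattern matching.

```typescript
// a toy triple store, not any particular RDF or Prolog engine
type Fact = [subject: string, predicate: string, object: string];
const facts: Fact[] = [];

// any statement is accepted; no predicate has to be declared ahead of time
function assertFact(fact: Fact): void {
  facts.push(fact);
}

// structure is discovered after the fact, by pattern matching (null = wildcard)
function match(s: string | null, p: string | null, o: string | null): Fact[] {
  return facts.filter(([fs, fp, fo]) =>
    (s === null || fs === s) && (p === null || fp === p) && (o === null || fo === o));
}

assertFact(["alice", "worksOn", "kowari"]);
assertFact(["kowari", "isA", "rdf-database"]);
match(null, "isA", "rdf-database"); // => [["kowari", "isA", "rdf-database"]]
```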

a recent article in HBS Working Knowledge, entitled "How Toyota and Linux Keep Collaboration Simple", describes how "The Toyota and Linux communities illustrate time-tested techniques for collaboration under pressure". the article makes the point that both groups follow a minimalist philosophy of using the simplest, most widely available technologies to enable far-flung groups to collaborate. a minimalist, widely available database technology (i.e. available as a service over HTTP) could enable a kind of real-time programming, letting collaborators across different organizations rapidly create programs to analyze and attack novel problems with unique data patterns. the web database should be like a CVS for data, allowing programmers to work in parallel with different representations of data and to merge those representations, much the way source code version control systems allow different representations of program logic to be worked on in parallel, and merged. like CVS, it should provide a lineage of the changes made to those representations, allowing them to be "rolled back" if necessary and giving coders the confidence to move forward quickly down a path, knowing that it will be easy to backtrack (a sketch of what such an interface might look like follows the quote below). it would be the perfect database technology for agile development, founded on the Jeet Kune Do of data models:
JKD advocates taking techniques from any martial art; the trapping and short-range punches of Wing Chun, the kicks of northern Chinese styles as well as Savate, the footwork found in Western fencing and the techniques of Western boxing, for example. Bruce Lee stated that his concept is not an "adding to" of more and more things on top of each other to form a system, but rather, a winnowing out. The metaphor Lee borrowed from Chan Buddhism was of constantly filling a cup with water, and then emptying it, used for describing Lee's philosophy of "casting off what is useless."
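here is one way such a "CVS for data" interface might look; the method names are entirely hypothetical - a sketch of the shape of the service, not a design.

```typescript
// entirely hypothetical method names; a sketch of the shape of such a service
interface WebDataStore {
  branch(name: string): Promise<void>;                                    // work on a parallel representation of the data
  assertTriple(branch: string, triple: [string, string, string]): Promise<void>;
  merge(from: string, into: string): Promise<void>;                       // reconcile two representations
  log(branch: string): Promise<{ revision: number; summary: string }[]>;  // lineage of changes
  rollback(branch: string, revision: number): Promise<void>;              // backtrack with confidence
}
```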


The best of all worlds

recently i came across this 2003 interview with Don Chamberlin, co-inventor of SQL. nowadays, he spends his time working out a query language for XML and thinking about how to unify structured and unstructured data under one model, and about the integration of heterogeneous data with self-describing data models (the latter is exactly what RDF is a good, simple solution for, and XML isn't). it ends with some interesting quotes from Mr. Chamberlin:

Chamberlin: Well, you know I've thought about it, and I think the world needs a new query language every 25 years. Seriously, it's very gratifying to be able to go through two of these cycles. DB2 will support SQL and XQuery as sort of co-equals, and that's the right approach. It embodies the information integration idea that we are trying to accomplish.

Haderle: And do you think that, given the Internet's predominantly pointer-based navigation, that Charles Bachman [originator of the network database model] is thinking, "I finally won out over relational?"

Chamberlin: Well, there are a lot of hyperlinks in the world, aren't there? I have a talk, "A Brief History of Data," that I often give at universities. And in this talk, I refer to the Web as "Bachman's Revenge."

Haderle: I know that the IMS guys are saying, "I told you so."

so are we ready for a new data model? is the web indeed "Bachman's Revenge", and will the new data model really be a return to something old? in some ways, yes. the web, and RDF, do superficially resemble the hyperspace of Bachman's network data model. the hyperlink is a binary relation between two nodes, and both the network data model and RDF are based conceptually, to some extent, on a graph model of data. this is directly attributable to the binary relation's fundamental role in graph theory. but RDF is also fundamentally different. in Bachman's network model it was "records" that were hyperlinked. these records looked more like the n-ary relations of the relational world (though they were never rigorously and formally defined as such), and so there was a fundamental inconsistency in the network data model. in RDF, all data is modeled as binary relations, and thus all data is "in the graph". all data in an RDF model is at once amenable to the kind of rigorous mathematical analysis and logical inference that the relational model is, and also mappable to a graph (a labeled directed graph, to be more exact). add to that basic structure a self-describing format, and the result is a model of data that achieves an elegance, simplicity, and flexibility that Bachman's model never did, making it a beautiful fit for the web.

in much the same way that the strength of RDF as a universal data model seems to come from it being a simplification and distillation of the essence of other models of data, with more dynamism and flexibility, the early success of Java was driven by it being, in some sense, a distillation of the essence of other popular programming languages and platforms, simpler than any of them - a lowest common denominator that held the promise of portability across all platforms.

Back to the basics ...

so what i'm advocating, in part to help clear up the noise and confusion surrounding this technology, and in part to focus resources where they would reap the most value at this early stage in its evolution, is a focus on a simpler RDF. i'm more interested in an RDF-- than an RDF++. the reason the web took off was that it was so simple to use. anyone could write an HTML page. the hyperlink is the most basic and intuitively graspable data structure one could imagine. RDF, in its basic form, doesn't really do much more than add a label to that link, introduce a new kind of node - the literal - and provide a powerful query language over this network of nodes with labeled links. RDF has yet to "take off". let's wait till that happens and it gains some real traction before we start over-engineering it. let's see how we can cope without schemas and ontologies. let's see if the self-organizing nature of the web will allow us to get away without them. then maybe we'll discover that it's possible to start integrating the world's data on a grand scale.