Archive for October, 2008

Requirements and Complexity: the devil is in the detail

October 15th, 2008

Who would think that I am still hung up on the problem of requirements? From an agile perspective, haven't we "solved" this by agreeing that analysis does not need to be done all upfront? Well, where I work, it still forms the basis of the contracts that my company engages in and, as such, it can affect my daily work. Requirements documents tend to be of dubious value when they promise more than they can deliver: even after significant time and effort is expended on writing requirements, clarification is still needed and change still occurs. And it is because of unforeseen complexities that I often hear the call that the devil is in the detail. The detail occurs in coding work. But being an agile type of guy, I'm cool with that: it's merely a little waste and frustration (if only!). Conversely, let's say for a minute that there was no additional complexity, wastage or change and projects came in on time and on budget. Would we still challenge these requirements approaches? Possibly not. The arguments based on efficiency leave me wondering whether there is more to complexity in requirements than I currently understand.

I had a big rewrite scenario a year ago that illustrated the impossible quest of the requirements document. The work was to replace an existing travel insurance web application and was effectively a porting exercise. We had a 90-page requirements specification based on the old system. In fact, we had the old system running live and in staging. So managing complexity, uncertainty and change couldn't be that hard when you already had working software. As you can imagine, it didn't go quite as planned and the specifications weren't as correct or complete as we expected. It was more "complex". There were the usual suspects: business rules had changed or were poorly constructed in the first place, and teams weren't co-located. There were also people and experience issues: we, the dev team, were criticised for adding complexity/effort with, for instance, continuous integration, a high level of code coverage through unit tests, using a new ORM mapper and integrating into a new CMS application. It seemed to me (even from the outset) that requirements were not the biggest risk of the project, yet this approach dominated the project's process. Deployment, politics, size of team, technology and people were the bigger risks, in roughly that order. Clearly, then, requirements documents are meant to manage such complexity. So, what are some of our assumptions in the idea of a "requirement"? What are the characteristics of complexity?

Luckily, I have had some time lately to read around "complexity science" by two scholars, John Law and Chunglin Kwa, in the field of Science, Technology and Society (STS). They argue for a split between two opposing conceptions of complexity: romantic vs baroque. Let me briefly rehearse my argument: if you have a romantic view of complexity, a requirements document makes sense because the analysis allows you to see the overview of the project; from a baroque perspective, there are real limits to what the document can provide and in fact there are too many blind spots to let it carry the weight that it does. It only sees complexity at a high level of detail and connections. The baroque perspective instead sees complexity within the details of the details, and those details emerge at the time you are working on them. For me, digging into the details is most likely to happen in coding-based analysis work. Yet this is usually seen as subsequent, and secondary, to the requirements documents. Really, these practices should be first-class citizens when we think about analysis work. As such, we should privilege and accordingly prioritise this work. Bear with me as I try to clarify for myself why, using these notions of complexity, we should be critical of requirements documents as the way to do analysis in software making, and spend more time doing coding-based analysis.

Requirements as holistic: romantic complexity

Writing a requirements document is a holistic approach to software making. It wants to address the complexities of the project as a whole and bring them together, often in one document. For example, we might have a functional specification for the software. Breaking the software up into key components, we tend to see use-case type analysis work going on: roles, actors, actions, outcomes. This is further broken up into business rules and cross-referencing activities. It is also holistic because it tends to get people to work together, writing and reviewing it. In this sense, it is a shared, inclusive and collaborative approach between parties (e.g. vendor/developer and client/SME), and working together is better than working alone. This is not only best practice but also common sense where the vendor has no business knowledge. The holistic approach is widely accepted in and outside of the industry and found in books such as Code Complete. I could develop this perspective more, but let me paraphrase Will Meed to start unpicking why this holistic, indeed romantic, view of complexity is problematic. The key problem is what constitutes the 'whole' of the system. What are the elements that interact to form the emergent whole? Can we identify the rule-based parts or elements that we find in components (the models of complexity)? When we think about the complexity of the different components involved in the system, can it really be described as holistic? How should we define the boundaries of the components as a complex system?

The holistic approach assumes that complex relationships can be made explicit. The holistic view represents a paradox in complexity science, according to Kwa. On the one hand, complexity embodies a "romantic" imagination which invites expression of, and gives value to, the non-verbal, the emotional, the artistic. This vision suggests that there is some complexity that cannot be grasped and we need to find ways to express it. Kwa suggests art, poetry and emotions; in software making, the alpha geeks embody that something extraordinary. On the other hand, however, we want to find models of complexity that make complexity explicit by emphasising abstraction, privileging looking up and assuming the possibility of an overview. For example, in my travel insurance scenario the requirements document has sections with use case analysis. The elements are actors, roles and components of the system. We find all the constituent elements – actors, roles – and then map out the relationships between them. Then we find out who does what. Then we take the what and further work out what else happens, say with sequence diagrams. So we move back and forth between higher- and lower-level components trying to flesh out the key relationships. In doing this, the system emerges as a whole. So the assumption in a romantic view is that not only is this new whole qualitatively different from its component parts, but also that the system can only be grasped as a whole. So, if we didn't have the document, under this thinking, we could never really build the system because we would never see the whole. This may sound extreme, but consider the converse when change occurs: the argument is that we should have spent more time analysing the problem upfront to have avoided it – assuming that, at the time, the problem could have been "found".

The use case illustrates a further two ideas of the romantic: homogenisation and abstraction. The former asks questions about how to model and how to make components and their relations comparable. The latter is, as you would expect, a move directly away from the form it is trying to represent. Use cases are an easy candidate. They are certainly an abstraction, in this case of people, roles and actions in the workplace. There has been much effort put into notations. There is a multitude of them: UML, Booch (go and take a look at the listings in Visio). Homogenisation is standardisation at the tooling level. The hope is to be able to create a standard set of processes through tools that can be used throughout and across projects. At its best, it would make projects comparable, and what is learned once could be applied generically. Wouldn't vendors love this if it were easily possible! I'm not just thinking of the usual suspects (eg Microsoft, Borland, Rational Suite) here because it is also happening in the agile field too (eg Rally's suite).

A romantic view then privileges an approach of "looking up" (rather than looking down). It believes that there is always the possibility of an overview. In seeking the whole, it assumes that constituent elements are knowable, representable and can be incorporated where they were previously separate. Interestingly, Law points out that the looking-up metaphor centralises control and is a centring of technology. In our case, the role of the analyst – and particularly the untouchables such as the architect – is privileged. There are many ways in which this is enacted: early, regular and easy access to the client; gatekeeping changes to requirements; specialised notations. Technology is also centred. This is particularly insidious when standards and vendors closely align. You need a suite of applications that are often tightly coupled: you need to correctly "draw" the diagrams to the correct version; you need to be correctly trained.

There is one further issue in the romantic. It has an emphasis on openness. Law points out that in practice this means an increase in the size of the model and in its desire to take in more. He argues that there is always something more "out there" and the goal is to step up to that level of complexity.
The call is to look up and work on a larger scale … The pull is towards the emergent global reality which, it becomes clear, has necessarily to be modelled if the components that interact together to make it up are themselves to be understood.
For Law, and I have to agree, left to its own devices romantic complexity leads to the holism of grand narratives. In this case, requirements, UML, tooling, or whatever the modelling approach, believes it can solve the problem. Luckily for vendors and trainers, there are people out there willing to pay for that too. I think that we need to think about complexity in the way that we do about software. Any piece of software will always have trade-offs – it cannot do everything; we always leave something out. You need not be writing opinionated software to know this. So complexity, like software, has points at which boundaries are drawn and "no" is said. We therefore need a notion of complexity that has boundaries, sets limits and understands that we always exclude. This is what a baroque sensibility offers.

Requirements as emergent: baroque critique of complexity

The alternative to looking up is "looking down". It looks for complexity within the details (or specificity) rather than seeking it in the broader picture. In software, we often hear that the devil is in the detail, and not merely because of tardiness but because complexity lies in the concrete implementations. While requirements-type approaches are an attempt to eliminate this, other approaches, such as agile-type responses, argue for emergent design. This is not a notion of the emergent as something which is there merely to be found, but rather as something created "anew". For example, as people learn more about the domain, the software becomes something beyond what was imagined. Therefore, software making becomes that which is always situated, is always specific, and must, therefore, always be accomplished. It is in the doing and the digging down into the details that we find layers of complexity. And, generally, in software I think we find increasing levels of complexity: those exceptions, edge cases and that which we need to ignore and live with. But in looking down, because complexity is created in looking for complexity, we have some very different assumptions. Looking down involves a "baroque" approach to complexity which has three main characteristics (this is from Kwa, to whom the Law piece is heavily indebted):
First, the historic baroque insists on a strong phenomenological realness, a sensuous materiality. Second, this materiality is not confined to, or locked within, a simple individual but flows out in many directions, blurring the distinction between individual and environment. And third, there is also the baroque inventiveness, the ability to produce lots of novel combinations out of a rather limited set of elements.
What does the baroque mean for complexity in software making? People are central to software: what they know, feel and have a sense of is important (the phenomenological and sensuous). We need to turn to individuals because discovery happens through individuals – or some might say that everything is already within the individuals or team. But, at the same time, we need to collapse the distinction between the individual and the environment. Let's spend a little more time here: everything is connected to and contained within everything else. Here's a simple example, again from my travel insurance application, to try and illustrate that individual developers don't tend to build systems outside of their experience/knowledge. If I'm an ASP.NET 2.0 developer and I don't know about REST architectures, I am more likely to build a server-side system using Webforms or ASP.NET AJAX than I am to use Generic Handlers and client-side libraries like JQuery. Here my environment, Microsoft-supported technologies, forms my world and who I am. So in the baroque sensibility, when I am looking down, I am likely to see complexities and come up with solutions. While they are always bounded decisions, I can at least potentially come up with some new solutions. For example, say I now start reading around Alt.Net: new options become available (eg IoC containers or MVC patterns). Alternatively, I start writing in other languages, which influences how I use my current, say, C# tools. New and innovative ideas and practices from this perspective are not simply a function of the brilliance of, say, the alpha geek, but rather of the intersecting of, and contradictions between, ideas and the necessary responses to these. I say necessary because we do write code and provide software!

There is a fourth characteristic, which I think is possibly the most important part of the baroque: an acceptance of the implicit. This is a simple point about boundaries. The implicit exists because there are limits on what can be made explicit. Law argues that "perhaps some individuals make some things explicit, and others, others". If everyone isn't saying everything, then we need to be tolerant of the implicit and to know that it is not necessary to make everything explicit. It may in fact be enough to "reflect, refract, enact or embody" some knowledge. And finally, we can also learn to know through the implicit. A shift to working software might be an example. We often use (and accept) software with partial knowledge. There is a lot that we don't know (whether or not we need to), but at the same time we might argue that we know enough at that moment. [Does this example work, I ask myself?] For example, I was working on a web application to buy travel insurance. It had a workflow, obviously, and it needed to be more flexible than a linear wizard style. So there were two aspects of complexity that we were trying to manage: the navigation by the user and the business logic of the information/models – the physical and logical layers, say. How do we know that the two reconcile? How do we get the limits of one to map "nicely" onto the other? How do we allow for enough flexibility without going into a generalised piece of code? We chose to use a state pattern to bind these two. For non-developer views, we generated directed-graph diagrams out of the code and stuck the workflow of the site, with its lines, to the wall. This was a tension-ridden process of explanation, re-explanation, demonstration, scenarios and counter-scenarios. As you would imagine, the workflow in all its combinations of state and navigation was complex in its own right.
At times it required people to accept explicit decisions (state pattern usage) that created boundaries for reasons they couldn't fathom (ie they didn't understand the state pattern's constraints), but over time they got to understand what they were and were not likely to be able to do in the workflow.
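To make the state-pattern choice concrete: each workflow state knows which states it can legitimately move to, and the navigation layer asks rather than assumes. This is only a sketch – in JavaScript for illustration, with invented state names, not the actual insurance application's code:

```javascript
// Sketch of a state pattern binding navigation to workflow rules.
// State names and transitions are invented for illustration.
class WorkflowState {
  constructor(name, nextStates) {
    this.name = name;
    this.nextStates = nextStates; // states legally reachable from here
  }
  canMoveTo(target) {
    return this.nextStates.includes(target);
  }
}

class Workflow {
  constructor() {
    this.states = {
      quote:   new WorkflowState('quote',   ['details']),
      details: new WorkflowState('details', ['quote', 'payment']),
      payment: new WorkflowState('payment', ['details', 'confirm']),
      confirm: new WorkflowState('confirm', []),
    };
    this.current = this.states.quote;
  }
  moveTo(name) {
    if (!this.current.canMoveTo(name)) {
      throw new Error('cannot move from ' + this.current.name + ' to ' + name);
    }
    this.current = this.states[name];
    return this.current;
  }
}
```

The pattern's constraint – every legal move is enumerated – is exactly the kind of boundary that non-developers had to accept implicitly; a directed-graph diagram generated from the transition table is what ends up on the wall.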

In practice, I suspect we deal quite well with the implicit. Change control, iterations, prioritisation, backlogs and weighting all point to the uncertainty of the implicit. The real problem, Law points out, is when we deny the presence of the implicit. For me, the requirements document tends toward such a denial. [How?]

Finally, the baroque argues that there is no emergent overview. There can be no overview because what is known is never fully explicit. Law points out that the overview is at best a partially enacted romantic aspiration. Applied to requirements documents, I have to agree. The argument here is to do with size, scale and convergence. Requirements from a romantic perspective tend to see the complexity of the software as very large: there are lots of elements and complex relations between them, but with the right modelling techniques any size of project is possible. We will then arrive at a model of the entire system in the end, and we can zoom in and out on components of the system. For example, UML is seen as scalable: using the right diagrams you can model components to the appropriate level; joined up, they make the whole. To avoid picking on UML, a functional specification is similar. There are sections breaking up the system with a number of cross-referenced business rules. Diagrams, screenshots and indexes are an attempt to bring the system together as a whole. But for the baroque, these are partial. They provide an overview at a particular level and, in fact, that overview level is potentially quite small. It is not large because it combines elements at a component level. Because there is always more detail and complexity, we cannot make an entire whole – that is, there is no convergence; we cannot bring together all the elements to make a whole. To some this may sound circular or even recursive: the whole equals the elements linked, but because there are more elements within elements there are always more links, and without all the elements and links we cannot make a convergent, closed whole. To me, though, the baroque sounds familiar in coding-based work. Current best (object-oriented) practice is to have loosely coupled, highly cohesive code.

So for all those that don't go down the requirements approach (eg XP), too often the accusations are that things are unclear, messy, murky and just plain undisciplined. As Law says, the baroque is very hard to achieve as you are easily treated as confused and unclear. But I want to hold onto this notion of complexity as a found, specific accomplishment rather than a given. My experience (as much as it betrays me) also suggests that realities can be caught associatively and indirectly, at the edges of my perception. I often have the ah-has but can't explain them – so much of OO programming, recursion or even the usage of patterns in coding. There is something in the doing, the watching, the experiencing and the hunches. Rarely, though, does everything go to plan. And that doesn't fit well with the romance between complexity and explicit emergence.


JQuery vs XSLT as REST client for XML resource in the browser

October 2nd, 2008

This work is to try and help think through handling XML results from REST in a browser. If we are going to view the results (in a browser), what are the options that treat the results as a GET resource?

Here are the sample files.

  • Parsing XML files for display in HTML is a no-brainer
  • I’d probably avoid using the browser’s xslt engine though – so xslt through javascript is simplified
  • JQuery is central to the real presentation functionality
  • I’d start with option 2 personally and then add XPath as needed to target information and then if this display got more complex add in XSLT
  • You can get there either through (1) XSLT, (2) JQuery itself or (3) JQuery + XSLT + XPath – you really should be familiar with them all anyway
Here are a couple of main options:
  1. XML + XSLT/XPath -> HTML + JQuery => Display
  2. HTML + JQuery ++XML => Display
  3. HTML + JQuery +XSLT ++XML => Display
  4. Flex (+ HTML) ++XML => Display
I have looked at the first two, and that is what I will try and explain. I suspect the other two options are also good. I do have a concern that using the XSLT parser within a JS plugin is perhaps delegating too much work to JS (maintaining enough functionality will probably become an issue). Beyond that, I was also looking at the hyperlinking of resources (my hunch is that JS or Flex will be the way to load and traverse hyperlinked resources).

I think that the major concerns are:

  • is a separation of concerns
  • keeping a resource as data that can be asked for and can ask for a service to act on it

Separation of concerns:

  1. The XML needs to have a life of its own as a resource
  2. The client needs to be able to act on the resource
  3. The client needs to be in control of all of this; we don't want logic hiding on the server
  4. On the client, I want to keep separate layout from content from skin
  5. For development, I wouldn’t mind some intellisense; I want good debugging help

Some findings:

  • One thing to note is that for both approaches I bind a data source to a control. Here I read in XML, load it into an object and then bind this object to the control, which then appends the table to the DOM. This control can also be bound to an html table: in this case, you put a table into the DOM and then the control transforms it. The latter seemed to me to be a server-side solution rather than a late binding to a resource.
  • I also did a sample XSLT creating a table in HTML from the results (results-as-html-table.xsl). The code was too long for me! I have included that file in the source for reference.

    Option One: XML + XSLT

    This was my initial reaction as the way forward for this solution. Personally, I like XSLT and its declarative nature. But, in practice, I still find it a little slow to programme (I really do forget the syntax and have to relearn it each time – particularly if you start to heavily use namespaces).
  • The xslt on a browser has to have a reference to an xsl in the XML - this really isn’t that good as there is only one view per xml (although if you were working outside of the browser this need not be a problem)
  • XML in many ways does act on itself because it puts an XSLT across it (although limited to one)
  • It was harder to debug the JS in the xslt mode and it generally took longer to work in XSLT
  • The key thing to remember is that the target platform is HTML/JS and XSLT is good for transformations from one structure to another. The problem is that while XHTML is XML, we are really in a programming mode rather than a transformation mode. I think that the part best suited to transformation via XPath is taking the results and combining bits of information together (which in this case would be in a cell).
  • I had to add extra code to late-bind the data into the control as it wasn't designed to be used in this way (you could rewrite the component to reduce code though)
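For reference, the xsl reference in the XML mentioned above is the standard xml-stylesheet processing instruction at the top of the document (the href here is a made-up file name), and it is this one-to-one link that limits you to a single view per xml:

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="results.xsl"?>
<results>
  <!-- the browser applies results.xsl before display -->
</results>
```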
In summary, as the transformations got larger, this might still work well. You will need to write a number of xsl files to allow for good modularisation (see DocBook if you have any doubt on this one). In doing so, you can keep a lot of the javascript/JQuery work invisible to the main transformations. But I find that it really isn't great for debugging, so I write the JS in another context and then import it back into the XSL file.
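That modularisation might look like the following hypothetical layout (file and template names invented), with the imported files holding the markup- and JS-heavy templates so the main transformation stays readable:

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- presentation templates kept in separate, importable files -->
  <xsl:import href="results-table.xsl"/>
  <xsl:import href="scripts.xsl"/>

  <xsl:template match="/results">
    <xsl:call-template name="results-table"/>
  </xsl:template>
</xsl:stylesheet>
```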

Option Two: HTML + JQuery ++XML

Having looked at the XSLT work, I was in the land of JQuery and was finding it hard to get good, flowing code from the HTML/JS perspective. So while what I want to see needs to be parsed declaratively, the way to display and have interactions with it needs to be procedural code.
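Stripped of the jQuery wiring, the procedural core is just a function from parsed rows to markup. A minimal sketch (the function name and column handling are mine, not from the actual plugin):

```javascript
// Build an html table string from already-parsed result rows.
// In the real page, jQuery would do the parsing, e.g. $(xml).find('row'),
// and append this markup to a container element in the DOM.
function rowsToTable(rows, columns) {
  var header = '<tr>' + columns.map(function (c) {
    return '<th>' + c + '</th>';
  }).join('') + '</tr>';
  var body = rows.map(function (row) {
    return '<tr>' + columns.map(function (c) {
      return '<td>' + (row[c] || '') + '</td>';
    }).join('') + '</tr>';
  }).join('');
  return '<table>' + header + body + '</table>';
}
```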

Note: I had to patch the flexigrid component to allow transforming data from an XML to JSON source. It assumes that the resource gotten is the object presented.
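The rough shape of that patch: an adapter from parsed XML records to the { page, total, rows } object that flexigrid binds to. The record shape here is hypothetical; the output format matches Appendix A below:

```javascript
// Adapt parsed XML records into flexigrid's expected JSON shape.
// Each record is assumed to carry an id plus an ordered list of cell values.
function toFlexigridData(page, records) {
  return {
    page: page,
    total: records.length,
    rows: records.map(function (r) {
      return { id: r.id, cell: r.values };
    }),
  };
}
```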

  • The HTML/JS approach using an html container was quicker and cleaner to write. It was easier to use code samples and then customise and extend them.
  • The browser’s debugging tools such as Firebug played a little more nicely in terms of error handling
  • Honestly, this approach took no more than a third of the time
  • JQuery gives me iterators and finders that makes my code not look dissimilar to the declarative XSLT (I would also be able to use XPath if I really wanted)
  • To keep it clean I will tend to use expected modularisation in Javascript (objects/functions)
  • I’m not convinced that the end result is necessarily any cleaner though
  • I will be able to put unit tests around it though
  • the HTML file is itself a resource


Common to both:

  • I only need to load the xml once
  • both use the same libraries
  • both need to be improved to do a DTD (xsd) check?
  • the REST approach requires that the client transforms the XML to a format that can be bound to the control

    Where to?

  • I'd probably stick with the HTML approach
  • I’d keep extending the JQuery plugin approach which allows for nice configuration of views
  • I can use either the each iterators with find in JS or use XPath to work out what data I want to (re)present
  • I would probably add another layer of abstraction between the model object and the data source – at the moment, what is required from the data source and the format of the binding object are combined. A factory would easily create that separation – but that was too much for this example
  • Alternatively, I would extend the column model so that you can use XPath configuration of data rather than need the pre processing callback in the first instance
  • go and look at options 3 and 4 above at some stage (3 is likely to end up as the alternative above)

    Appendix A: Reference Formats of JSON data format for server

    { page: 1, total: 239, rows: [ {id:'ZW',cell:['ZW','ZIMBABWE','Zimbabwe','ZWE','716']}, ... ] }