
Modeling bias

This note looks at the limitations of modeling in software process and design and suggests a complementary approach, which I have found very useful over my years of work in software, yet have not seen articulated in the literature or consciously used by other engineers. This note to a large extent rephrases my older post Code as two texts, but it departs specifically from the role of modeling and metrics in software methodologies.

I submitted it to an online publication, but it was rejected with the following comment: "This article as it stands is not a good fit for InfoQ. It touches on an important topic but the way it is presented is very theoretical/academic and it needs to be more practitioner focused ... Please provide also the 5 key takeaways of the article."

Maybe my article is somewhat dry. However, it may be very hard for a practitioner to apply the conceptual shift from modeling to linguistic analysis without first discussing a few core methodological concepts. In all my experience, the lack of grasp of these concepts is one of the main deficiencies in software engineering and software process. The vast majority of engineers and managers are not taught these concepts and do not pick them up naturally. This has a very negative impact on efficiency and, moreover, mostly remains in a blind spot - a double whammy for the industry.

 

By definition, a model is an abstract representation of a certain aspect of the actual thing. Say, queueing theory models how items move through a system when throughput is limited. The abstract queue can represent cars on a road, a production line, etc. A good model predicts well how a system structured in a certain way will behave, simply because the model reproduces the system in a simpler form.

Models are picked for a reason. Unlike queueing or complexity theory models, the models of the human cardiovascular system are not routinely applied to the software process (apart from, maybe, ergonomics), although they accurately represent human functioning, which is an integral part of software production. You apply a model not just because it is "correct", but because it demonstrably predicts production outcomes.

Say, queueing theory shows how to optimize throughput in queues. One can look at a specific software project as a collection of queues of waiting work items to assess and improve time to market. Time to market is used as a quantitative metric: longer is worse, shorter is better. Reducing waiting time in the project queues will reduce the calculated time to market: the metric, or utility function, of the model can be directly calculated.
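
To make the "direct calculation" concrete, here is a minimal sketch using Little's law from queueing theory (average wait W = items in queue L divided by throughput λ); the single-queue setup and the numbers are, of course, hypothetical:

    #include <iostream>

    // Little's law: L = lambda * W, so the expected waiting time is W = L / lambda.
    int main() {
        double backlog_items  = 30.0; // hypothetical: average work items waiting
        double items_per_week = 5.0;  // hypothetical: team throughput
        std::cout << "Expected time in queue: "
                  << backlog_items / items_per_week << " weeks\n";
        // Halving the backlog directly halves the calculated time to market:
        std::cout << "With half the backlog: "
                  << (backlog_items / 2) / items_per_week << " weeks\n";
    }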

Obviously, in many cases one cannot directly calculate the utility of a model. Say, a psychological model may suggest that higher salaries lead to higher productivity, but such a conclusion would not follow directly from a model-based calculation. It can be established indirectly and then measured in experiments. Say, one may find that, statistically, comfortable chairs improve development speed logarithmically. Although such experimental results are no doubt useful (e.g. Google does this sort of data mining to improve the conditions and productivity of its teams), the advantage of directly calculating the utility function is that it shows an exact mechanism for achieving better outcomes, while the experimental, empirical cause-effect relationships look more like soft recommendations that can be misinterpreted by those who apply them.

The "direct" models are very attractive, since their utility follows from their mechanics. It leads to the temptation of devising "direct" models not to represent the mechanics of the modelled system, but to extract from it a desired utility function. For example, the old, but still wide-spread "mythical man-month" illusion that if the project is planned down to a day, it is sufficient to find sum total of all the planned man-days to tell the project delivery date.

It is worth dwelling a bit on the latter example, since it will later help to question even the "direct" models whose mechanics define how the cause-effect is produced. The source of the mythical man-month illusion is a flavour of the Laplace's Demon fallacy of classical science, which suggests that if we know everything about a system, we can perfectly predict its behaviour; or, in a weaker form: the more we know about a system, the better we can predict its behaviour. However, this is not the case in software development: management or analysts go deeper and deeper into implementation details and request more and more accountability for every hour spent, sowing anxiety and sapping the creativity and productivity of the project team without gaining any predictive power.

The level of detail of project planning looks like a continuum from coarse to infinitely refined. Say, what would be the best level of detail for the initial upfront project planning? Roughly, two common answers correspond to the "traditional" and agile approaches. The "traditional" answer would be based on a time-to-market utility function derived from the faulty assumption that project work can be modeled as the sum total of work on each individual item. The agile approach suggests best practices of managing backlogs at several scales, which works empirically but is justified only indirectly. For example, why is the time horizon for a sprint two to four weeks? Because it has been found empirically that detailed planning for periods longer than a month leads to the development losing focus and digressing from value-driven goals.

Various models from queueing theory, sociology, etc. only partially explain such time scales, but do not give technical criteria for what the right time scale is. Instead, best practices, techniques, and heuristics like organizational patterns or formulae like "go see", "andon cord", etc. are offered. The common justification of the best practices falls into two parts. Firstly, some aspects of those best practices are modeled (e.g. as in queueing theory). Others do not offer easy modeling. For example, pair programming looks like it takes twice the engineers' time. How can we show it is more productive? Pair programming as an intellectual effort, human communication, etc. does not lend itself to a simple model that would behave like pair programming and thereby allow prediction. Instead of a model, we run an experiment on the system itself. Instead of assessing the model's behaviour (we don't have one), we assess the behaviour of the system itself: say, pair programming productivity.

There are several consequences of this:

There are essential things in agile that are hard to "model". Instead, they are taken from and confirmed by experience as best practices, recipes, or patterns coming from books, from reflecting on one's experience, or from common sense. How can their "mechanics" be expressed in more formal terms, so that one does not just soak them in after years of mentoring and practice, but can actually assess an organizational or design practice or generate a new one?

Moreover, empirical models eventually stop at questions like: what is the right level of detail for a component architecture? how much documentation does a product need? how fine-grained should time planning be? The naive approach often falls into the Laplace's Demon trap, e.g. document "everything". The empirical modeling cautiously suggests: "A better way to frame all these issues [level of detail, documentation, etc] is along continuums. Appropriate behavior varies along a continuum for each discipline, and this may evolve iteration by iteration ... This is the view in Scrum: Practices adjust along continuums according to context." [Larman, Vodde]*, p.130

Apart from the empirical iterative approach, can we offer structural criteria or decision mechanisms that would help to identify the right scale in design, planning, etc. on a continuum spanning from small to large or from fine-grained to high-level, without falling into the trap of simplistic models meant to optimize "customer value"? And is a "continuum" the best way to think of these things: are they really continuous functions of size or time?

Software possesses one quality to a much larger degree than most industries: its main instrument is language, and its products are the written word. Software systems do things in the material world: control machines, provide communication, etc. - actions that are performed by executing sentences in artificial languages. The connection between a user story in a natural language and the code as a text in an artificial language, as well as the connection between saying (the code) and doing (its execution), is fundamental for software engineering, but software methodologies rarely (if ever) look into its structure.

Sentences in a programming language could be classified into two large groups: descriptive sentences, which state what something is (definitions of data structures, type and class declarations), and performative sentences, which actually do something when executed (statements, assignments, function calls).

There would not be any other types of expressions in the code. In fact, the definitions of passive data structures may look purely descriptive, but they are inevitably meant to be used in performative expressions somewhere in the code: something will be done with them, otherwise they are useless.

The expressions written in programming languages are performative exactly as John Austin defined performatives (in natural languages) in the 1950s: by saying "I pronounce you husband and wife" the priest actually makes a couple husband and wife. By saying v+=2;, I actually increment the value of v by 2 when my code is executed; and if v is, say, passed as velocity to a vehicle control, my saying actually does make the vehicle go faster.
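
A minimal C++ illustration of the two groups (the Velocity type and the increment are made up for the example):

    // Descriptive: this definition only states what a Velocity is;
    // on its own it does nothing.
    struct Velocity {
        double metres_per_second = 0.0;
    };

    // Performative: executing this statement actually changes the state of the
    // (modelled) world - in Austin's sense, saying it is doing it.
    void accelerate(Velocity& v) {
        v.metres_per_second += 2; // "increase v by 2" *is* incrementing v by 2
    }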

Why does this obvious fact matter? Because computer programs are language artefacts of a special (performative) sort, and therefore linguistic reasoning applies to them. Most engineers, analysts, managers, and customers are used to thinking about a software product as a "model" of some aspect of reality, and about software design itself as a sufficient amount of the right type of modeling (hence "object-oriented models", "relational models", "software ontologies", etc.). However, in terms of Hjelmslev, a key figure in 20th-century linguistics, models are essentially symbolic systems: they substitute matching symbols for reality, where symbols correspond to things, and therefore the "model" behaves similarly to the real thing, which is what gives it its predictive power.

However, computer programs are systems of signs, semiotic systems (see Code as two texts for more). Hjelmslev showed that symbolic and semiotic systems cannot be reduced to each other; therefore "modeling" cannot possibly cover software engineering design and practices sufficiently, since those are semiotic systems. Instead, the latter would benefit from the application of the linguistic apparatus and, specifically for software, from the analysis of the performative structure of the code and of the software process.

Each executable portion of code is two things at the same time, expressive and performative: it is up to the programmer to make sure the code says what it does in a concise manner.

Moreover, the only reason a class, a function, or a utility is written is to use it for something else, to do something with it. Its meaning is in how it can be used. To be sure, it is not just the class interface: it is actually the class usage that represents the meaning of a software artefact. For example, the meaning of the C++ STL containers consists not just in their types and methods like begin(), end(), iterator increment, etc., but in the patterns of their usage: the iteration concepts and so on. It was Wittgenstein who revolutionized the philosophy of language by suggesting that the meaning of words or sentences is their use. General philosophical discussions aside, Wittgenstein's definition of meaning works for programming languages in practice, since only programming artefacts that are used make sense.
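
For illustration, a small sketch of the STL point: the meaning of begin() and end() is not exhausted by their signatures; it lies in the usage patterns they participate in, such as the idiomatic iteration range below:

    #include <numeric>
    #include <vector>

    // begin()/end() are meaningful through the usage pattern they enable:
    // a half-open range that algorithms like std::accumulate iterate over.
    double total(const std::vector<double>& prices) {
        return std::accumulate(prices.begin(), prices.end(), 0.0);
    }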

Therefore, each software artefact has two sides: performative (what it does) and expressive (what it says). What it says needs to be meaningful in terms of its use. Once used, it becomes a performative implementation detail of another series of expressive statements in the code; for example, on a small scale, from low-level comms through database query primitives to sets of complex specialized database queries to the end customer-facing semantics - the use (and therefore the meaning) of each of those steps should not be coupled with the implementation details of its executable side.

(One may think: is this not all just about encapsulation? Encapsulation is only a part of the story: making a hundred-line-long method private is hiding a block of code that lacks the expressive-performative quality. Feature-hiding, often mistaken for encapsulation, is a last resort and typically a quick fix; good code is as exposed as possible. E.g. if a class has to contain a lot of complexity, the latter should not simply be buried in private methods, but decomposed into more classes that are then instantiated as private members of the class. If this is not done religiously, poor unit test coverage for the class is almost guaranteed.)
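
A sketch of the difference, with made-up class names: instead of one hundred-line private method, the complexity is decomposed into small classes, each usable and testable on its own, and instantiated as private members:

    struct Order { double quantity = 0.0; };

    // Decomposed capabilities, each independently testable:
    class RiskLimits {
    public:
        bool allows(const Order& o) const { return o.quantity < 1000.0; }
    };

    class Tariffs {
    public:
        double price(const Order& o) const { return o.quantity * 1.5; }
    };

    // The public method now says what it does, step by step:
    class OrderPricer {
    public:
        double price(const Order& o) const {
            return limits_.allows(o) ? tariffs_.price(o) : 0.0;
        }
    private:
        RiskLimits limits_; // decomposed, not just hidden
        Tariffs    tariffs_;
    };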

This may all be common and obvious. I would like to emphasize that the structure described above does not model any kind of "reality" outside the code, unlike, say, object-oriented design. Instead, it describes the foundational elements of the code itself. Just as object-oriented design articulates cleaner models of some aspects of the "real world", maintaining a clean-cut execution↔meaning↔use structure throughout the code naturally leads to better software (i.e. code that is closer to its own nature). The main thing about this structure is that it represents sign and language qualities and is therefore essentially linguistic, and particularly related to Austin's theory of performative utterances.

One property of performative utterances is that they do not have a truth value, i.e. they are not true or false [Austin]. The correctness of code is strangely less relevant than one might think, given the strong inclination to see software code as a chain of logical conclusions - and in classical logic statements are either true or false. Rather, the "correctness", or better the "consistency", of the code is really assessed by checking whether it says what it really does. Does it say one thing but do something else - meaning that artificial languages seem to have a very human propensity to lie? Or is the code written in such a way that one has to read each line to understand what it does (a flavour of the software Laplace's Demon)? But then how can the code be "wrong" or "false"?
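
A tiny made-up example of such a "lie": nothing in it is logically false; the code is semantically inconsistent because it does more than it says:

    #include <cstddef>
    #include <vector>

    struct User { bool active = false; };

    // The name promises a harmless query...
    std::size_t count_active_users(std::vector<User>& users) {
        std::size_t n = 0;
        for (auto it = users.begin(); it != users.end();) {
            if (it->active) { ++n; ++it; }
            else it = users.erase(it); // ...but it also silently deletes records
        }
        return n;
    }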

Unlike in logic, there is no inherent need for, or even possibility of, being true or false in code. Code may only be semantically consistent: when you use it, it does what you think it does. Hence the relentless semantic decomposition along the execution↔meaning↔use line. One common design problem, which I think comes from perceiving software as something "logical", is that the participants see design considerations as conditions that are either true or false. This problem has been solved (or rather never existed) in Pattern Languages: design considerations are forces that need to be balanced. The solutions are not true or false; instead, the interacting forces are either in balance or they are not. A path to a good design is to articulate use cases as constellations of forces. The resulting solution balances the forces for the given use cases and is therefore meaningful, because its meaning is in its use. ("Language" is often omitted in "Software Pattern Languages". A pattern language allows one to put its patterns together into "usage" sentences, but its underlying semantic layer is the language of forces.)

It is essentially insufficient to assess whether a piece of software is meaningful purely in the logical terms of true/false or correct/incorrect, which unfortunately still happens a lot in the "non-agile" world. The following quotation is an example of a well-meaning proposition: "It is this semiformal structuring that liberates the creativity of people. Rigid formal requirement models can be stifling, and are unusable by most people because they have not been expertly trained in the appropriate modeling technique." [Adolph et al] It is followed by a great text, yet its reasoning is built on empirical evidence and the common sense of software experts, and it misses a chance to drill down towards a firmer conclusion on why formal specifications do not work well: formal specifications are inefficient not because "most people ... have not been expertly trained in the appropriate modeling technique", but because "modeling techniques" are symbolic systems and therefore inevitably become inadequate when applied to a software project, which is an essentially semiotic system.

Another common critique of the "waterfall" goes: "Its great strength is that it is supremely logical ... It has just one great weakness: humans are involved." [Deemer, Benefield] The discussion that follows is great and convincing, but I nonetheless take the quote out of context to demonstrate how the reasoning in that specific sentence, although well-meaning, falls into the same Laplace's Demon trap. Criticizing the "waterfall" for the wrong reasons makes it hard, if not impossible, to fix certain problems of heavyweight methodologies. The "waterfall" is expensive to the point of intractability not just because of irrational humans and the necessity of a more empirical approach. Its costs would remain prohibitive even with the most compliant engineers, and the problems may not disappear with an adaptive approach. It is so expensive exactly because it is "supremely logical" and therefore an attempt to build a symbolic model of software design, which is a semiotic system.

In the same way, software is often seen as meaningful if it brings "value to the customer" - books on agile are permeated with such statements. Whatever does not create "value" is seen as "waste", at best "necessary waste". The problem is that once the value is expressed quantitatively - in terms of money, time to market, or productivity metrics - the meaning in software gets trapped in a symbolic system representing the circulation of that value. That is why customers or the marketing department do not like it when programmers try to do things properly, with crisp semantic-pragmatic relationships in place. It is not because the customers or managers are greedy or impatient, but because the semiotic aspects are in a blind spot of the "value"-driven symbolic system.

The expression "customer value" can be a misnomer, since it has the connotation of measurability. What really is delivered to the customer is a software capability to do something that brings the customer benefits, hopefully beyond the costs of developing that capability. Those benefits not necessarily are immediately measurable; e.g. it could be a system used in public education or fundamental science with no immediately obvious quantitative value. Thus, the delivered capabilities of a software product have the same execution↔meaning↔use structure as any intermediate software artefacts. From this point of view, let us take a closer look at a popular format of user stories: "As a <customer/user role> I want <goal> so that <reason>" (C-Style User Story, see e.g. [Larman, Vodde]**, p.271).

Expressing the customer's goal and reason (rightly) focuses the meaning of the user story on the "customer value". However, the eventual purpose of the user story is to be projected into its technical design and implementation (the user story is useless unless [eventually] implemented). Thus, I will try to rephrase the C-Style format according to the semantic/pragmatic structure of software:

Let us go through one more iteration of reasoning. (As a potentially dry note: while many thinkers of the 20th century, such as Wittgenstein, Austin, Hjelmslev, or even Deleuze, to name a few, have done work on the expressive/performative aspects of semiotic systems, the relatively recent book by Robert Brandom [Brandom] is a condensed study of what Brandom calls meaning-use relationships, which is just another way to represent the execution↔meaning↔use structure. If we have two vocabularies or mini-languages A and B, the meaning-use relationship comes as an answer to the question: "what should I do with vocabulary A to be able to say something in vocabulary B?" Brandom looks into much more generic vocabularies like modal logics, whereas in the applied field the vocabularies are scaled down to much "smaller" languages like Pattern Languages or Wittgenstein's language games, e.g. his classical example of a builders' language that happens to open his Philosophical Investigations.)

Seen this way, the action and meaning of the user story correspond directly to execution and meaning in the execution↔meaning↔use structure, whereas the user specifies:

The rephrasing above matters because it highlights a more basic structure behind the user story. The user story is the point of contact between the customers and the developers. Earlier, we identified the execution↔meaning↔use structure at the core of the software code. Above, we have just seen how the very same structure applies to the customer's side of the user story: the actor executes actions (by utilizing the capabilities of the software product) to express something in one of the (human) languages of the customer's domain. Rigorously defining the user stories with the execution↔meaning↔use structure in mind not only leads to better software requirements, but helps to better analyse the business on the customer's side. Effective user requirements in the form of user stories and the software code actually have the same semiotic structure, execution↔meaning↔use, i.e. we have shown that the semiotic structure is not just an idiosyncrasy of the code, but a feature of the development process and indeed of the functioning of an organization.

Therefore, we can analyze not only the code or software design, but also the organizational processes from the semiotic point of view: in the software team, in the communication with the customer, and in the business analysis on the customer's side, expressing something in a certain organizational domain has its action counterpart that deploys a set of capabilities making it possible. E.g. the complementary side of the ability to say "Portfolio risk is continuously assessed" is the risk specialist's action of updating the trading data and running the necessary computations, which is based on the vocabulary of capabilities of trading data interactions (request, filter, etc.) and computations (e.g. value at risk, greeks, etc.). The ability to say "Plan next sprint" is an action of practical deployment of the vocabulary of the "product owner refining the product backlog", "engineers doing planning poker", etc.
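
A hypothetical code sketch of the same relationship, with all names invented for illustration: vocabulary A (the data-interaction and computation capabilities) is deployed so that the sentence "portfolio risk is continuously assessed" can be said in vocabulary B:

    #include <iostream>
    #include <vector>

    // Vocabulary A: the capabilities.
    using Portfolio = std::vector<double>; // positions, radically simplified
    Portfolio fetch_trading_data() { return {1.0, -2.0, 3.0}; }
    double value_at_risk(const Portfolio& p) { // toy stand-in for a real VaR
        double var = 0.0;
        for (double pos : p) var += pos * pos;
        return var;
    }

    // Vocabulary B: deploying the capabilities *is* saying
    // "portfolio risk is continuously assessed".
    void assess_portfolio_risk() {
        std::cout << "VaR: " << value_at_risk(fetch_trading_data()) << "\n";
    }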

This execution↔meaning↔use may sound trivial in the case of well-known practices. What is the use of looking into the ability to "walk 100m" as the capability-deploying action of "making a step with one's left foot" and "making a step with one's right foot"? Let us look at two typical decision-making scenarios:

I call this continuous shaping of the decision-making, as an alternation between saying and doing, "performative negotiation": carefully structured negotiating in the meeting, in the group, or with the customer, always keeping in mind what we are trying to say and to do. If the speaking becomes fuzzy and the doing aspect is no longer clearly articulated, the performative step (an experiment, a test, a prototype, a missing piece of work that would bring back clarity) needs to be identified. If the action loses the view of the usage scenario, it is time to speak.

Performative negotiation has the now-familiar two-sided semiotic structure of execution vs meaning. It goes beyond software technicalities and really applies to any business interaction (leaving politics, ambitions, and emotional aspects aside for now). It does not impose any specific ways of doing things. Instead, it is a tool to tell whether a proposed solution makes sense or whether a demanded requirement can be translated into action.

(It is very hard to maintain the effort of continuous performative negotiation. It requires a certain endorsement in the organization or team, otherwise people at all levels may see it as intrusive, threatening, or confrontational, similar to the Five-Whys analysis. However, after a short time the key stakeholders start seeing the direct benefits and improvements: the meetings are more organized and efficient; there are fewer empty promises and more material benefits delivered on time, etc. - all this coming at the price of accepting a new form of interaction, which they may well find quirky. My own planning conversations were called "interrogations" by friendly stakeholders, hopefully as a joke. I think a better word would be "interviews", although indeed their degree of persistence is similar to the Five-Whys analysis.)

Seeing not only the software design, but all aspects of the organization as this semiotic structure is closely related to Pattern Languages. Christopher Alexander [Alexander 1977] introduced a Pattern Language as a vocabulary (e.g. in architecture: rooms, windows, latches) and a syntax (how the vocabulary items can be combined into "sentences" and other semantic entities that become meaningful as they solve a given problem). This and other pattern language definitions rephrase the execution↔meaning↔use, or capability deployment, relationship I tried to describe above. However, in software engineering the emphasis has been on the generative aspect of pattern languages: spot a recurring situation, represent it as a set of forces and capabilities, formulate a resolution pattern that balances the forces well, give it a good name in a corresponding pattern language, and reuse it. This is totally the right thing to do (as long as those patterns are not perceived as cookbook recipes).

The emphasis of continuous performative negotiation (and continuous software design) is the same as for patterns, except that its goal is not just resolving recurring situations, but constantly resolving the forces, no matter whether the solution is going to be reusable or not. It addresses the fact that the nature of many problems to be solved in an organization producing value also has the same semiotic structure: some problems are about building organizational capabilities (tools, expertise acquisition, knowledge packaging, etc.) and others are about capability deployment (making products, providing concrete services, etc.). These problem spaces have the same structure of a vocabulary A (capabilities, e.g. investment specialists) that is being deployed to achieve a vocabulary B (e.g. a range of financial advice). A capability is an ability to repeatably accomplish something in various circumstances, and therefore capabilities and their basic use may very well be expressed by boilerplate patterns. On the other hand, deployment may be extremely specific, and therefore the forces may need to be resolved differently every time. Thus, while learning established pattern languages may be a great thing, ultimately it requires continuous performative negotiation as a semantic/pragmatic analysis of the forces:

The (good) books on pattern languages and agile methodologies often consist of multiple volumes with hundreds of pages each: the closer we get to the business of producing final products, the larger the proportion of capability deployment (as opposed to capability building) in the projects, and therefore the more specific the problems that need to be solved, turning the agile textbooks on the subject into colossal nomenclatures of which practices are better and which are not. They tend to constantly refer back to the principles of agile and the need to efficiently produce value; however, I have rarely, if ever, seen an emphasis on the immediate underlying semiotic execution↔meaning↔use structure of most, if not all, of those patterns and practices. If, through performative negotiation, any practice, decision, or solution is distilled as an action of deployment of a capability language A into the target language B, and its value is seen in the crisp meaningful expression in the target language, it gives a concise conceptual tool that allows one to generate practical solutions rather than just refer to models or experiments.

Commonly, one hears from analysts, management, or programmers that until one builds a model to solve a design, architectural, or process problem, the problem has not been definitively addressed yet. On the other hand, agilists may often say that until a real-life experiment is set up, the justification of the proposed solution or improvement is not complete. Both modeling and experiment are yardsticks that certainly should be kept at the back of one's mind and used. However, I have tried to show above that both approaches are essentially limited and leave in a blind spot probably the most essential property of software development: its linguistic nature. Therefore, I would say, one needs to consciously apply meaning-use analysis (execution↔meaning↔use) at every scale and aspect of software development to assess, design, and organize.

It has been common to offer "software metaphors" - "try to think of software development as ..." - to understand its nature through similarities, software "as architecture" (making blueprints, building, etc.) or "as gardening" (grooming, pruning, evolving) being the most well-known. On the one hand, the semiotic approach is not a model. It does not say: "software development is accurately modeled by language structures"; semiotic systems cannot be reduced to symbolic systems, and models are the latter. On the other hand, the semiotic approach is not a metaphor either. It does not say: "try to think of software development as a language activity". It says: making a software product, at all its levels, is a language activity at its core and needs to be analysed, driven, and executed as such.

 

References

[Adolph et al] Steve Adolph, Paul Bramble, Alistair Cockburn, Andy Pols, What Is a Quality Use Case?

[Alexander 1977] Christopher Alexander, A Pattern Language: Towns, Buildings, Construction

[Austin] J. L. Austin, How to Do Things with Words

[Brandom] Robert Brandom, Between Saying and Doing: Towards an Analytic Pragmatism

[Deemer, Benefield] Pete Deemer & Gabrielle Benefield, Scrum Primer; cit. from [Larman, Vodde], p.306

[Larman, Vodde]* Craig Larman, Bas Vodde, Scaling Lean & Agile Development

[Larman, Vodde]** Craig Larman, Bas Vodde, Practices for Scaling Lean & Agile Development


December 2016