A concept is an action. Software code is the best illustration of this: by saying something, it necessarily does something. Any sentence we write in Python or C++ is meant to be executed.
In human speech, J. L. Austin called such sentences performative, an idea he introduced in 1955. For example, by saying "I declare you man and wife", a priest actually makes a couple husband and wife. He performs an action just by uttering the phrase.
Without going into details: many things we say only describe a situation, e.g. "it rains". In contrast, an expression in C or Java is written to perform an action. Data definitions are descriptive, but they would be useless unless they formed "words" in the vocabulary of executable expressions: instead of speaking of a bundle of wheels, an engine, and controls, they let us speak of a car, i.e. do something to that car by way of software expressions.
Code has two meanings: what it says and what it actually does. Code is two texts at the same time: the first expresses what it is, the second instructs the computer to do something. The former text is expressive, the latter performative.
When a programmer writes:
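say, a function whose name promises one thing while its body quietly does another (a hypothetical snippet, all names invented):

```cpp
#include <vector>

// The expressive text promises one thing ("find the maximum");
// the performative text does something else as well.
int find_max(std::vector<int>& values) {
    int max = values.empty() ? 0 : values[0];
    for (int v : values)
        if (v > max) max = v;
    values.clear();  // surprise: the "finder" also destroys its input
    return max;
}
```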
To make things worse, if one writes something:
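like the following (again hypothetical), where the comment, the name, and the behaviour each tell a different story:

```cpp
#include <string>

// Returns true if the user was logged.            <- the comment lies
bool validate(std::string& user) {                 // <- the name lies
    user += " [seen]";                             // quietly mutates the input
    return user.size() > 7;                        // "validation" is a length check
}
```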
These examples are trivial, but they illustrate the common experience that each disconnection of this sort between expression and use sends the code sliding into chaos at a polynomial rate, since the mess does not add up, but multiplies.
"It all is in my code. Don't ask me questions, just read it and you will see how it works." Throwing code over the fence is typical in software teams. Coding rules like "functions should not be longer than 10 (or whatever number of) lines" do not address the problem: the function works, but it does not say what it does. Or rather, it says: "My meaning is exactly the sum of my instructions. If you go through them one by one, you will know what I do."
In 1814, Laplace proposed a thought experiment: if someone knew the exact location and speed of every particle in the universe at a given moment, he would be able to predict the future state of the universe exactly.
Such determinism does not work in physics, due to the laws of thermodynamics and quantum theory. In software engineering, however, an illusion of determinism persists: that once you know every single instruction of a program, you will know its meaning. It is the software flavour of Laplace's daemon.
Many companies go through it: the founder feverishly produces the initial system to get it out the door. The start-up takes off, leaving the founder busy with its supernova expansion and hardly available for questions. And then a bunch of engineers literally spend months or years digging through and reverse-engineering the twists and dark corners of the original code.
On a smaller scale, engineers often stop decomposing their code too early. They split functionality, e.g. into classes, until they can say: fine, even a child would understand it, and then give up. However, if functionality coupling remains, or especially if one still needs to look into implementation details to understand what the code is doing, the design job is not finished.
The reason Laplace's daemon does not work in software engineering is in a way the opposite of physics: it is not entropy, but the complexity of language playing against the simplicity of mathematics.
It is not universal, but common, to hear software engineers say: software development is essentially a mathematical discipline; projects are troubled and the work frustrating because management tries to apply business logic to maths, and because fresh graduates are no longer taught maths properly.
This is true to an extent; indeed, some engineers struggle with basic mathematical concepts. However, expecting that organizing the whole software enterprise like a mathematical theory would bring rigour, efficiency, and quality into the trade is a big mistake. Maths is a big part of software engineering, but not all of it.
Roughly speaking, in Hjelmslev's terms [Hjelmslev, Prolegomena to a Theory of Language], mathematics operates with symbolic systems, whereas programming essentially deals with semiotic ones.
The distinction is based on the fact that the reference of a sign (e.g. of a word) and the meaning of the sign are two different things.
Roughly, in a symbolic system, symbols interact in the same way as what they symbolize (their references). For example, numbers symbolize countable objects (say, apples).
Symbols reference objects like text labels in a zoo: the label says "elephant" and we see an elephant inside the cage. Instead of operating on the actual animals (e.g. counting them), we can use the labels on the cages.
That is how mathematics can be applied so powerfully: if elements in the "real world", e.g. in a physical system, interact in the same way as the elements of a mathematical theory, then we can expect all theorems of that theory to hold in the "real world" as well.
Say, natural numbers and their relationships map one-to-one onto piles of apples, and therefore all the theorems of number theory apply to piles of apples.
Unlike mathematics, which is based on symbols, semiotic systems (e.g. natural language) are based on signs.
A sign is a unity of two radically different things: its expressive part (for example, the way a word sounds) and its meaning (Saussure). The expressive system of a language has essentially different relationships between its parts than its system of meanings.
These relationships cannot be reduced to a symbolic system: there is no direct symbol-referent mapping, since meanings operate in a system of relationships different from expressions: the meaning of words vs the sound of words; the meaning of a novel vs its expressive elements; the meaning of a software library (what it does) vs its API (how it expresses it).
By the way, this is a very common problem with researchers who become programmers: used to symbolic systems from their scientific experience, in their code they do not program, but model. They end up with highly inflexible, closed implementations, either endlessly parametrized or with almost no semantic degrees of freedom, which at the end of the day is the same thing. It is almost a definition of coupling: if you need to set up and fine-tune dozens or hundreds of parameters to make your system work, the system is semantically extremely hard to use. And since the meaning of a software system is its use (see 2.4), "hard to use" means "meaningless": devoid of semantic flexibility and, just like the software version of Laplace's daemon, meaning only one thing: itself.
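The contrast can be sketched roughly as follows (all names hypothetical): a "modelled" component that anticipates every use through parameters, versus a small composable one that leaves the meaning to the caller.

```cpp
#include <functional>
#include <vector>

// The "modelled" variant: a closed system where every new usage means
// yet another knob to anticipate and tune.
struct SimulationConfig {
    double step = 0.1, tolerance = 1e-6, damping = 0.0, cutoff = 1e3;
    bool use_fast_path = false, log_progress = false;
    // ...in real cases, dozens or hundreds more
};

// The composable variant: instead of parameters, semantic degrees of
// freedom -- the caller decides what f means.
double accumulate_over(const std::vector<double>& xs,
                       const std::function<double(double)>& f) {
    double sum = 0.0;
    for (double x : xs) sum += f(x);
    return sum;
}
```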
Take object-oriented design: in software engineering it often was (if not still is) perceived as a sufficient model of the world, creating the illusion that any problem domain could be expressed purely in terms of classes, objects, and their relationships. Due to this mix-up of meaning and reference, OOD was mistaken for a universal language (i.e. a semiotic system), whereas it is a symbolic one.
Essentially, it said: the class elephant or car in Java code corresponds to the elephants in a real zoo or the cars on a real road. Then, to design any software system dealing with elephants or cars, we simply express the relationships between the real things through the relationships between the corresponding classes. In a nutshell, OOD is a symbolic system.
However, the limitation of OOD is that we never deal with just "the car". In a single software system, a car can be a means of transportation, a physical mechanism, an asset, an obstacle, a consumer item, a registered entity, etc., all at once. How do we apply OOD? With one oversized class car? Or with many interrelated classes belonging to various domains? Clearly, the latter.
But here we are: the sign 'car' interacts with other signs differently depending on what 'car' means in each context or problem domain. The system we want to build is semiotic, not symbolic.
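A minimal sketch of what "many interrelated classes" could look like (all names and figures hypothetical): each problem domain defines its own meaning of 'car', and one concrete car participates in several of them at once.

```cpp
// Each domain sees only the relationships it cares about.
struct Vehicle  { virtual double top_speed() const = 0;        virtual ~Vehicle() = default; };
struct Asset    { virtual double book_value() const = 0;       virtual ~Asset() = default; };
struct Obstacle { virtual double clearance_radius() const = 0; virtual ~Obstacle() = default; };

// One concrete car belongs to several semantic domains simultaneously:
struct Car : Vehicle, Asset, Obstacle {
    double top_speed() const override        { return 180.0; }    // km/h
    double book_value() const override       { return 12000.0; }  // EUR
    double clearance_radius() const override { return 2.5; }      // m
};
```

Route planning talks to the car as a Vehicle, accounting as an Asset, collision avoidance as an Obstacle; none of them needs the oversized whole.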
Engineers were forced into OO designs that did not acknowledge the shortcomings of the method. Wherever naive OOD went beyond its limitations (trying to solve "semiotic" problems), it caused damage: over-engineered architectures; brainwashed teams; OO tools, largely abandoned now, that consumed huge resources to develop and sucked millions of dollars from users in license fees (I still think UML was a waste of time). Scores of ugly, coupled software systems were spawned, which either failed or, worse, left a legacy that companies are still trying to unscramble.
Software is text in artificial languages, and software development is collaboration in natural languages. Both are (implicitly or explicitly) based not only on maths or some other sort of one-to-one modelling, but heavily on methods coming from linguistics, and cannot be reduced to mathematics or any other unified system.
Continuing our example: what do we mean by 'car' when we design, say, an automated vehicle? There are multiple domains of its meaning: the car in the context of its mechanics, of fuel consumption, of route planning, of obstacle avoidance, of asset tracking, etc.
We would like to better understand the relationship between those meanings on the one hand, and classes, libraries, and other software artifacts on the other.
To design a software system, we need to be clear about what we mean when the system requirements document says 'car', 'road', etc.
Ludwig Wittgenstein influenced most of modern philosophy and linguistics by arguing that, in any language, the meaning of its words, sentences, etc. is their use, and that they have no meaning beyond that. It turned a great deal of traditional philosophy, logic, and psychology upside down.
Luckily, unlike the humanities in general, in software engineering we can legitimately say: the only important thing about a software system is how it will be used. Beyond that, no one cares.
We can state: for all practical purposes, the meaning of any software system is its use. And therefore Wittgenstein's ideas should be highly relevant to software.
Besides, we figured out earlier that a software system is a semiotic system with two sides, expressive and performative: what a piece of code says and what it does.
But what exactly does the code do?
Consider the following code snippet:
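(a stand-in sketch with hypothetical names, stubbed so that it is self-contained):

```cpp
#include <string>

namespace db {
struct client {
    explicit client(const std::string& host) {
        // opens a socket to `host`, runs the handshake checks
        (void)host;
    }
    std::string query(const std::string& q) {
        // sends the request, polls until the response arrives
        return "rows for: " + q;
    }
};
}

// usage:
// db::client c("db.example.org");
// auto rows = c.query("SELECT name FROM users");
```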
We can say: the client class opens a socket, performs checks, sends the request, polls the response, etc.
Or we can say: the code above creates a database client, sends a request, receives a response, etc.
The latter is what matters to us when we use the code. After all, code is always an instrument. For those who use it, what matters is not what it does internally, but what they can do with it. And this is exactly the meaning of the code in Wittgenstein's terms.
On the one hand, exposing implementation details destroys the code's meaning as its usage. Say, if after construction we had to call client.connect() and then client.bind(), it would give away implementation details that are meaningless in the context of usage.
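A hypothetical sketch of such a leaky API: construction alone does not yield a usable client, so every caller must replay the implementation's internal sequence.

```cpp
#include <stdexcept>
#include <string>

namespace db {
struct client {
    bool connected = false, bound = false;
    void connect() { connected = true; }
    void bind() {
        if (!connected) throw std::logic_error("connect() first");
        bound = true;
    }
    std::string query(const std::string& q) {
        if (!bound) throw std::logic_error("bind() first");
        return "ok: " + q;
    }
};
}
```

The call order connect-then-bind is pure implementation detail, yet it is forced on every usage site.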
However, high-level usage is not always a hallmark of meaningful code, either. If it were, encapsulation would solve all the problems; instead, it has been seriously overused in old-style object-oriented design (think of trivial getters and setters).
It is the same problem as saying that there is only one 'car' class. The actual car, an entity made of metal and plastic, lures us into thinking so. However, 'car' can belong to multiple meaning/use domains (the car as a vehicle, as an obstacle, as an asset, etc.). Software manipulates a 'car' as a language construct, not as a real thing.
The meaning of a class is its use, i.e. the meaning of a class always lies outside the class; thus, there should be as little encapsulation as possible. One could also say: the class should be interpenetrable in as many semantic directions as possible.
Say, encapsulating connectivity, e.g. by using singletons or inversion of control, would be too much encapsulation:
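(a hypothetical sketch; the singleton and the db::client are illustrative stubs):

```cpp
#include <string>

namespace db {
struct connection {
    static connection& instance() {      // one global, hard-wired connection
        static connection c;
        return c;
    }
    std::string host = "prod-db.internal";
};

struct client {
    // no connectivity in the interface at all:
    std::string connected_to() const { return connection::instance().host; }
};
}
```

Every client silently shares the same hard-wired endpoint; a test cannot point one client elsewhere.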
It hides the connectivity aspect of the client, which makes testing or changing the connection parameters very difficult.
The following expression:
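(a hypothetical reconstruction, with minimal stubs so that the fragment is self-contained):

```cpp
#include <utility>

namespace tcp {
struct socket {
    bool open_ = false, no_delay_ = false;
    void open()               { open_ = true; }
    void set_no_delay(bool v) { no_delay_ = v; }
};
}
namespace db {
struct client {
    explicit client(tcp::socket s) : sock(std::move(s)) {}
    tcp::socket sock;
};
}

// usage -- the call site must perform the socket ritual itself:
// tcp::socket s;
// s.open();
// s.set_no_delay(true);
// db::client client(std::move(s));
```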
hints that the client connects over a TCP socket. However, it exposes too many implementation details.
A better way of fully exposing the aspects of the client (connectivity and database interface) would be:
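(a hypothetical sketch; stub definitions are included so it compiles):

```cpp
#include <cstdint>
#include <string>

namespace tcp {
struct endpoint {            // just data: host and port, no behaviour
    std::string host;
    std::uint16_t port;
};
}
namespace db {
struct client {
    explicit client(tcp::endpoint ep) : endpoint_(ep) {}
    tcp::endpoint endpoint_;
    // ...the database interface (query etc.) lives alongside...
};
}

// usage -- both aspects visible, neither spilling machinery:
// db::client client(tcp::endpoint{"db.example.org", 5432});
```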
where tcp::endpoint is preferably just a POD structure, meaning that although the classes semantically bootstrap the data, there is no encapsulation, and db::client may still expose its connectivity aspects, e.g. as
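(a hypothetical continuation of the sketch: the client keeps its connectivity aspect as a public POD member):

```cpp
#include <cstdint>
#include <string>

namespace tcp {
struct endpoint { std::string host; std::uint16_t port; };
}
namespace db {
struct client {
    tcp::endpoint endpoint;  // public: trivially inspected or redirected
    // ...database interface...
};
}

// e.g. a test can redirect a client in one line:
// client.endpoint.host = "test-db.local";
```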
the rule of thumb being: if class members are not interdependent (e.g. the TCP host and port are not interdependent), they should be exposed publicly.
For the user, how a piece of software is used matters more than how it does what it does. The developer has an implementation bias, since he naturally has more vested interest in the implementation details.
It is usually a strong bias, which is further mutually amplified within the team. Due to their insider knowledge of each other's code, engineers tend to blur the usage of classes, libraries, or utilities. In the worst cases, this rapidly leads to highly coupled spaghetti code. Even if the implementations do not encroach on each other, and the encapsulation and upfront modular design still seem to hold, the interfaces and usage semantics quickly become overfitted to particular usages, which makes the system and its parts rigid and arcane.
Despite the advances of agile methodologies, upfront design is still often seen as the answer. However, it takes a long time and is speculative, and therefore too rigid. Nothing new here: the agile world has been offering critique and new ways to design for a decade or two (and yet upfront design dominates in many organizations). More about it in the following chapters. What interests me now, though, is the language aspect of the problem:
Just like coding, design is done in some language: UML, informal diagrams, plain English, etc. What makes a design language better than a programming language? I know what makes it worse: you cannot run a design diagram or a Word document. You cannot test it, since a test is essentially something that may fail.
According to Karl Popper, a theory is scientific only if there is a demonstrable way to falsify it [Popper, The Logic of Scientific Discovery], a well-accepted necessary condition today. The same holds for technology, and specifically for software: if a test fails, the software is "wrong" (it does not work as expected, though of course the expectation may be "wrong" too).
With a design document, you can only defer testing until the software is written. But since the document and the software are written in two different languages, most of the time it is very hard to prove that the software does what the design says. "Hard" here means "expensive".
Why, then, do upfront design? Why not write code straight away? Is the reason that diagrams and documents are easier to use for collaboration? Then why? You can show and use working code; it is much easier material for discussion. Is it that code lacks readability? Or that designers lack the ability to read and use code?
The latter seems to be true all too often: business analysts and domain experts, even those who can actually program rather well, appear averse to touching software. They prefer communicating through documents and boardroom meetings. It is strange, because removing any unnecessary mediation cuts waste. Putting a document between the domain expert and the software engineer looks like a defensive move caused by subjective factors rather than by development efficiency. I would call it 'speculation bias'.
Both speculation and implementation bias violate the expressive/performative duality at the core of software. Speculation bias creates a gaping hole between design and implementation. Implementation bias spills implementation details into the design, making the two overlap. Neither is fatal in itself, just very expensive, and thus needs to be carefully watched to keep a project on track.
From all my experience, having a problem-domain expert and a software engineer or team lead working as a pair is the ideal combination. Introducing this kind of culture requires effort to overcome speculation bias (the client creating a gap due to her aversion to software) and implementation bias (the engineer's lack of feel for the semantic boundary), but the fruits of such pair work come early and in abundance. The reason it works so well seems to be that together such a pair naturally fits the structure of code as two texts: the domain expert represents the expressive aspect, and the programmer is in charge of the performative one. The client tends to shy away from the implementation details, since she is really interested only in the expressive aspects (usage semantics), while the engineer tends to close this gap. The goal of the pair work is to keep expression and implementation in touch, with no gap and no spill-over.
It may be one of the main functions of the team or project lead to see that such pairs are dynamically formed and maintained at all levels, with programmers also serving as domain experts for each other. This also shows that the structure of code as two texts is doubled in an efficient team organization: the productive component of human interaction is also organized into expressive/performative pairs. I will explore this further in the chapter on performative negotiation.
As a specific example on a smaller scale: when I need another engineer to implement a command-line utility (which may later be plugged into a larger system), I first ask him to write an empty executable that has only one option: --help. We review the utility together, making sure that reading the help is sufficient for me to show the author how I would run his utility to achieve my purposes as a user. As we go, we edit the help and add my usage scenarios to it as examples. The code grows self-documenting.
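Such a skeleton could start out as small as this (a hypothetical sketch; the utility name and scenarios are invented). Its only behaviour is to explain its own usage, which is then refined together before any feature is implemented:

```cpp
#include <cstring>
#include <iostream>

// In the real utility, main() simply forwards:
//   int main(int argc, char** argv) { return run(argc, argv); }
int run(int argc, char** argv) {
    if (argc > 1 && std::strcmp(argv[1], "--help") == 0) {
        std::cout <<
            "usage: ingest [--help] [FILE]\n"
            "  Scenarios collected during review (not implemented yet):\n"
            "    ingest data.csv     # load a file into the system\n";
        return 0;
    }
    std::cerr << "nothing implemented yet; try --help\n";
    return 1;
}
```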
In the same manner, when I work on a utility for a user, I also sketch a basic executable that can be run quickly to discuss use cases and usage semantics. If the user is not a programmer, at the beginning of our collaboration I often need to overcome her speculation bias. However, after just a few interactions of this sort, once the user realizes that a slight effort on her side gives her great leverage in getting exactly what she wants, and quickly, she develops a taste for this style of work.
Software is a product, but the problem is that stakeholders want to see products as something having value, and by that merit as something tangible. Tangible in terms of value, at least economic value, means measurable. It becomes all too easy, though, to invert the relation: anything measurable becomes tangible. This is the tangibility fallacy, or bias, and it is easy to fall for because in the material world the measurable does seem to represent value, at least in economic terms: a bigger house, a richer harvest, etc.
The problem is that software is not a material object, but a language phenomenon. Many metrics applied to software production are meaningless. Thank goodness they stopped counting lines of code. However, meticulous time accounting, which seems to translate directly into workforce cost and time to market, is still rampant, and it is here to stay as long as the workforce is valued by the amount paid in salaries, which in software is a flawed justification too, but that requires a separate investigation.
Software does not scale like the material objects the older industries deal with. Material objects are assembled from interacting parts. The parts of a wheelbarrow are fewer and simpler than the parts of a car; thus a wheelbarrow is simpler and cheaper than a car. The parts of languages, including artificial ones, are letters and words, yet you can combine them into books, speeches, or software beyond any limit. That is why it is absurd to assign value to words by the number of letters in them, to books by the number of words, or to code by the number of lines. Metrics indicating a 5 percent productivity improvement are uninteresting and irrelevant to language in general, and by implication to software production. The real productivity leverage in software always has to be manifold, otherwise it is not worth pursuing: thanks to that leverage, creating semantic value is far more profitable than cutting cost. That is also why a good software engineer may easily be an order of magnitude more productive than her or his equally paid colleagues.
A common flavour of the tangibility bias is the activity bias: software productivity is measured by the presence and amount of "activities". It boils down to a strange conviction that if everyone is very busy with the right types of things, "business analysis", "planning", "testing", "meetings-lots-of-meetings", etc., marvellous output will follow. In reality, "activities" are just tools that need to be used for the right purpose, which is to produce meaningful artefacts with a finite amount of effort.
The unit of value in software is something about which we can tell that it works: a functional artefact, a class, library, or utility that does something useful. In software, useful = meaningful. The artefact clearly says what it does (i.e. has well-articulated expressive and performative sides).