First, a form of anti-Turing test for artificial intelligence: if the system behaves like a human in a certain context or situation, it does not matter what else it does - it still possesses a portion of intelligence. (The arbiters are still humans, and it is they who assign the meaning to any of the system's behaviour. Attributing intelligence to the system means attributing to it some hidden quality, some qualification of its inner life, because we find the system's behaviour meaningful.)
A more functional social example: An evil person (one would be amazed how many of them are around) never commits a serious crime or even shows such an inclination, because he would be punished according to the law or social norms. Is this person moral? Do we, most of the time, care whether he is? We care more that at any moment he will do the right thing rather than the wrong one. (The example of Alcoholics Anonymous shows how this works: not universal abstinence, but rejection of the single particular drink here and now.) The morality of one's inner life matters less than one's behaviour in a concrete situation.
The same holds for software engineering (good-enough software): if the system behaves as required in a certain situation, it may not be necessary to worry about all possible situations, or to assume or require a certain inner quality or structural integrity of the system. We don't care whether the system's inside is coherent or "designed" in all its aspects while observing its behaviour under certain conditions. (A well-designed system should not even have an inside; everything in it should remain on the surface. A system with no inside manifests itself, for example, in the ability to achieve good test coverage of it in tractable time.)
One may say: what about life-critical systems or high-reliability consumer products? The situation is the same as with a soldier's morality: his behaviour in a concrete, albeit extreme, situation is more important than the integrity of his inner life as some kind of general inner quality.
Besides, those extreme situations may never come. Make sure that people do not face life-or-death moral choices unless absolutely necessary, and they will not break. Or, for example, make sure that your software system is as stateless as possible, and the rate of failure will drop dramatically.
For example, I have seen companies designing high-reliability, life-critical software. They use heavyweight processes, very stringent quality assurance, automatically traced requirements capture, high-level design tools (UML, etc.), automatically enforced meticulous coding standards and rules, documented reviews of all the code, and so on.
All this is meant to assure that their products possess the inner qualities that would guarantee good behaviour in all situations. It slows their development manyfold, but most importantly, it is only natural that 50-90% of code gets rewritten or abandoned before it reaches anywhere near the end customer, no matter how well you write it and whether or not your bosses recognize it.
Why waste time on painfully fixing coding-rule parser warnings or documenting the code reviews of something that will most likely end up in the trash bin? (Besides, when the meticulous assessment is done on something half of which is dead on arrival, how can we guarantee that the intended structural integrity of the remaining code does not depend on the dead part being alive?) The remaining code has survived exactly because it happened to be more meaningful. But meaning gets produced in use, i.e. in concrete situations. The structure of the code is not an inner quality, but something emerging from and shaped by its use, i.e. by its being meaningful. In this sense, the structure of the code is secondary and need not be assessed.
Maybe one structural quality of the code is crucial, though: its being open to use or test. A crude social analogy: we may know that a particular soldier can be nasty and ill-tempered, but if he is open, we may rely upon him more than upon another one who has never done anything wrong, but keeps things to himself and gives no chance to test him. Code with a supposedly well-documented and assessed internal structure that is not exposed at its surface is usually hard to understand, hard to maintain, and easy to break.
Thus a test that would guarantee some intangible inner quality - intelligence, morality, or robustness - although attractive, may in many situations give way to a more pragmatic approach.
A well-written class, say in C++, is written as a double text. It expresses something at the level of its interface and usage scenarios, and at the same time it does what it says by expressing it (i.e. it is performative). It is performative in several aspects. Of course, it has the interface and the implementation. The interface is the expressive part, the implementation the performative one.
Encapsulation is a way to provide a clear cut between interface and implementation.
The other aspect is this: looking at how the class expresses itself in its interface and usage scenarios, we can make a reasonable guess about the class's guts, even though they are hidden by encapsulation.
Encapsulation as a technical way of limiting access to class members is valid. But once encapsulation becomes a design feature that makes a class opaque, giving it an inside, it creates tension and inconsistency with the class being a double text without any inside or outside: both the performative and the expressive texts of a programming artifact are equally exposed and are actually expressed in the same double text.
One practical approach is to assess the accessibility of the class implementation to unit testing. If a class has opaque guts, its user-level semantics can still be tested via its public interface, and profilers can help check the test coverage of its implementation. But the semantic aspect of the implementation cannot be tested if it is hidden and does not map reasonably onto the public interface. In that case, either the implementation semantics need to be exposed, or the implementation artifacts themselves need to be expressed as testable classes in their own right.
Thus, a better design would use encapsulation for access control only, keeping the implementation logic visible and intuitively clear from the class interface: the class is transparent and permeable rather than having some unexposed inside. Any unexposed inside should provoke suspicion about the quality of the class design.
For now, see: Matter of scale. Staying out of touch
Whenever I spot coupling, staying out of touch, or speculation, it induces in me a gut-wrenching feeling, probably similar to what a computer scientist feels when told to quickly offer a practical solution to an NP-hard problem - because that is what these situations are: exponential, stressful, and intractable, and in the case of the software process, for no good reason most of the time.
Semantic coupling commonly happens when an engineer works in solitude long enough. It is not really about time: once he stops articulating what he is doing, he has been alone long enough. Often, "long enough" is a couple of days or even hours.
Quiet concentration time is central to our profession, but it only exacerbates the peril of overfitting the code to a specific use case, which is almost the definition of coupling. The implementation bias kicks in. To get a second use case or a second pair of eyes, the programmer has to stop doing and talk instead. Frequent and structured face-to-face code reviews are not so much for catching bugs as for fixing the expressive aspect of software, making it more meaningful, i.e. usable and intuitive in more contexts. Thus, stop doing and talk.
On the other hand, a purposeful software discussion should transform a set of practical inputs into a list of concrete actions, which would produce decisive results and pose problems for the next conversation. Thus, stop talking and do: write an artefact to look at during the next conversation.
Brought to the extreme, this process becomes continuous: any action is accompanied with a talk, or rather expressed in it. I call this style of work "performative negotiation": any discussion or negotiation is about the meaning of a concrete productive action in a given context. We articulate any action as we perform it.
Pair programming in eXtreme Programming is exactly an instance of such an extreme. Scrum stand-up meetings are another example: the talk articulates the progress between yesterday's and today's actions.
In other engineering disciplines, they speak about the construction of a bridge or an engine. Unlike such material things, in software engineering, communication is an immediate extension of the code. The programming artefacts comprise our vocabularies: libraries, databases, patterns, etc. In a design discussion, we map those vocabularies into the user stories, which become the expressive part of the next step of implementation. The two sides of performative negotiation correspond to, or rather are extensions of, the two sides of the code.
Continuous performative negotiation in software engineering is synonymous with staying in touch. The touch between two actions or pragmatic steps is articulated in an expressive vocabulary (in Brandom's words, "what needs to be said to be able to do something" [Brandom]); the communication is structured by replacing the forking "if" or fuzzy "maybe" with a decisive practical step: "what needs to be done to be able to say something" [ibid.].
The common critique of performativity is that it takes the human part out of communication. It requires "normalized and governable individuals" [Foucault] for the march of performativity [Marshall], so that they "perform", i.e. produce things, relationships, or qualities valued and desired in a given company, group, or society. Anything beyond that is deemed worthless, since it simply cannot be valued or expressed in the language used by the group (akin to Lyotard's differend [Lyotard 1983]). Therefore, social and technological performativity has been criticized as dehumanizing and oppressive.
However, the texts that brought the concept of performativity to prominence, like Lyotard's Postmodern Condition [Lyotard 1979], seem to mix it with plain productivity, whereas the latter really is just a consequence of the former.
In the context of software development, one of the main forces is the tension between development discipline and guaranteed achievement of goals on the one hand, and having independently thinking, motivated, and diverse talents on the team on the other.
Therefore, the principle of performative negotiation does not simply focus on delivering desired products or producing value. One of its main applications is exactly producing new idioms, vocabularies, and planes of meaning when something cannot be expressed in the existing conceptual framework. This belongs to its negotiative part. A new library may open a new dimension or meaning space for more articulate applications that can be combined and used in different ways; or a new way of communication between team members may make their work and life experience more wholesome.
Discovering or creating new dimensions and articulating new meanings is the true sense of production and of putting things to work, the pride of the software craftsman. Things are put to work neither by action alone (an ugly, coupled library overfit to one specific job usually does not take us far) nor by expression alone (a proclaimed strategic vision of the marketing department or a design diagram is rather ideological and void of content). At the point when action and expression become two sides of one and the same thing, a new meaning starts emerging and new things start happening. In this sense, performative negotiation is always transformative and metaphorical, since it makes us take a leap from one vocabulary or language to another and makes it possible to express and enact things impossible or intractable in the old vocabularies. That is why writing software is often compared with writing poetry [Cockburn]: the main vehicle of poetry is metaphor, and "poiesis" means "making", "creating", "bringing forth", "an action that transforms".
[Brandom] R. Brandom, Between Saying and Doing: Towards an Analytic Pragmatism.
[Cockburn] A. Cockburn, Software Development as Community Poetry Writing.
[Foucault] M. Foucault, Discipline and Punish.
[Lyotard 1979] J.-F. Lyotard, The Postmodern Condition: A Report on Knowledge.
[Lyotard 1983] J.-F. Lyotard, The Differend: Phrases in Dispute.
[Marshall] J. D. Marshall, Performativity: Lyotard and Foucault Through Searle and Austin. Studies in Philosophy and Education, 1999, Volume 18, Issue 5, pp. 309-317.
[Ries] E. Ries, The Lean Startup.
[Yanagi] Yanagi Soetsu, The Unknown Craftsman.