Principle: When designing or naming an artifact, be as generic as possible. If you cannot, be as specific as possible.
Of course, it's good to be generic: less work in the future. However, if a class or library does not generalize well but is still necessary for the system being developed, generalizing it only partway usually creates more trouble than leverage, due to a coupled implementation and sloppy naming; the latter is a recipe for inconsistent semantics, which will immediately start growing around the badly placed artifact.
Also, if the artifact seems to sit mid-way (specific to a certain project, yet possessing useful generic features), it usually means that it is not decoupled properly and could be further decomposed into its generic and specific elements.
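To make the decomposition concrete, here is a minimal sketch under assumed names (a hypothetical report formatter, not from any real codebase). A "mid-way" class that both aligns text columns and knows this project's inventory fields splits naturally into a generic element and a specific one:

```python
# Generic element: renders any rows as an aligned plain-text table.
# It knows nothing about any particular project.
def format_table(rows, headers):
    """Render rows as a plain-text table with left-aligned columns."""
    table = [list(headers)] + [[str(cell) for cell in row] for row in rows]
    widths = [max(len(row[i]) for row in table) for i in range(len(headers))]
    return "\n".join(
        "  ".join(cell.ljust(w) for cell, w in zip(row, widths)) for row in table
    )

# Specific element: everything tied to this (hypothetical) project,
# such as its fixed inventory columns, lives here and only here.
def inventory_report(items):
    """Project-specific wrapper around the generic table formatter."""
    rows = [(item["sku"], item["qty"]) for item in items]
    return format_table(rows, headers=["SKU", "QTY"])

print(inventory_report([{"sku": "A-1", "qty": 3}, {"sku": "B-22", "qty": 10}]))
```

The generic `format_table` can move into a common library, while `inventory_report` stays in the project (per Corollary 1, under its most specific location).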
Corollary 1: If you have to be specific, try to put the artifact not simply in the project folder or namespace, but in an impl or detail namespace and/or directory.
The latter is not done for the sake of encapsulation, though. It is not about hiding things, but about putting the most specific things at the most specific location.
Corollary 2: The usage semantics of a product (interfaces, protocols, command-line options, etc.) should be as generic as possible.
In other words, "generic" is practically synonymous with "semantically sane". A product (a system or application) is the most singular and specific artifact, and according to Corollary 1 its specifics should be moved as much as possible into implementation details. However, to be natural, intuitive, and expressive, its exposed parts should be composed of generic elements.
The principle of two extremes comes from practice; conceptually, however, it is a special case of a clear separation between building capability and deploying capability.
Building capability in a software team can be anything that contributes to the team's future ability to produce useful stuff: developing engineers' skills, improving team morale, ensuring team continuity, addressing single points of failure in knowledge, and so on. In terms of software, it means producing reusable artifacts, common libraries, toolkits that scale across multiple projects, etc.
Building capability is a continuous process; it requires a sense of purpose and constant attention to opportunities for generalization, decoupling, and convenience of use across the codebase.
Building software capability is not strictly necessary. If a company develops a one-off system, it is often a good idea not to keep any software capability or legacy in-house; the software capability is replaced by the financial capability to buy the required software services on demand. However, if a company has a software team of any kind, it is unwise not to build capability. Doing so requires that things be done thoughtfully, but it does not take much extra time in the short run, and in the longer term it gives manifold leverage.
Building software capability is only a means to an end: producing useful stuff. Running a software project is essentially deploying existing software capabilities: putting the team's skills and software tools together in a meaningful way. When a project comes about, some of the required software capabilities may be missing. Identifying and building those capabilities is an essential design activity. It is especially important to pay continuous, acute attention to separating the building of capabilities from their deployment. Firstly, mixing the two almost certainly indicates semantic coupling between the tools and the project goals, which almost invariably ends up in time-consuming designs that are hard to implement, test, maintain, and scale due to their polynomial or exponential complexity (more on it here). Secondly, when new capabilities are built in the course of the project, they contribute to the overall software capabilities of the organization, and in a great way: grown from a real use case, they fill semantic gaps in the team's generic codebase.
Now you can see that in the principle of two extremes the choice lies between building capability (the most generic) and deploying capability (the most specific).
A somewhat more conceptually pure way of looking at the generic-versus-specific problem is the meaning-use relationship approach, originating in Wittgenstein's thought and developed particularly in Brandom's book. I briefly described Brandom's approach in my note on architectureless software design:
Capabilities form a collection of mini-languages that let you express various functional areas. For example, SQL lets you express anything about relational data; a bunch of C++ classes may form a library, say for vector arithmetic; and so on. At the project level, you put those capabilities together to form a new language expressing totally different functional concepts. In all but trivial projects, this usually involves many layers of building new capabilities of "higher" order by deploying capabilities of "lower" order.
Each language has its semantics, i.e. the meaning of using its components. We deploy a language of "lower" order (e.g. SQL statements) to get a language of "higher" order (e.g. database manipulations for warehousing goods). The deployment step is essentially not semantic but pragmatic: by putting SQL statements together, we do something (make those statements execute in a certain way) in order to be able to say something in the new language, which deals not with database records but with the movements of goods.
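The pragmatic step above can be sketched in code. This is a minimal illustration with assumed names (the `Warehouse` class and its one-table schema are hypothetical, not from any real system): callers speak only the "higher" language of goods movements, while the SQL statements are deployed inside.

```python
import sqlite3

class Warehouse:
    """Speaks the language of goods movements; deploys SQL to do so."""

    def __init__(self):
        self._db = sqlite3.connect(":memory:")
        self._db.execute("CREATE TABLE stock (sku TEXT PRIMARY KEY, qty INTEGER)")

    def receive(self, sku, qty):
        """Goods arrived at the dock."""
        self._db.execute(
            "INSERT INTO stock VALUES (?, ?) "
            "ON CONFLICT(sku) DO UPDATE SET qty = qty + excluded.qty",
            (sku, qty),
        )

    def ship(self, sku, qty):
        """Goods left on a truck."""
        self._db.execute("UPDATE stock SET qty = qty - ? WHERE sku = ?", (qty, sku))

    def on_hand(self, sku):
        """How many units of a good are currently in the warehouse."""
        row = self._db.execute(
            "SELECT qty FROM stock WHERE sku = ?", (sku,)
        ).fetchone()
        return row[0] if row else 0

w = Warehouse()
w.receive("PALLET-7", 40)
w.ship("PALLET-7", 15)
print(w.on_hand("PALLET-7"))  # → 25
```

Note that no table, index, or SQL term leaks through the method names: the two semantic layers stay separated, and the class body is exactly the pragmatic step between them.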
When those two languages are not clearly separated, for example when we speak in terms of tables and hash indexes as well as clothing container deliveries and forklifts, semantic coupling arises, whereas the pragmatic step provides a clear separation between the two semantic layers.
One of the beautiful things about the clear distinction between the languages is the enormous reduction in system complexity. Deploying the 26 letters of the Latin alphabet, seen as a mini-language of "lower" order, we produce a language consisting of hundreds of thousands of words. In Mandarin, on the other hand, the graphical symbols have for historical reasons kept their semantic coupling with the words: unlike Latin letters, which are meaningless (i.e. decoupled from the meaning of the words they form), most Chinese characters carry at least a rudimentary meaning of their own, which results in by far the most complex writing system.
Of course, more than one language of "lower" order can be involved in producing a language of "higher" order. Software pattern languages, inspired by Christopher Alexander's architectural pattern languages, are exactly that: the multiple, often heterogeneous forces listed in a pattern's signature form alphabets, which shape the pattern by being deployed. Patterns of various sorts in turn form alphabets for functional components of a system, and so on.
Back to the principle of two extremes: generic artifacts belong to the "lower"-level alphabets, while the most specific artifacts are the results of deploying those alphabets.