Background. I have a system S that I have modeled in its most significant variables and whose internal states I can qualitatively describe, categorize (cf. clustering), and possibly measure. I therefore have KPIs ready on the system I am trying to modulate, and some degree of agency over it. At this point it is a matter of building from scratch, or exploring layer by layer, a graph G that describes my possible interactions on S and how they modulate the state of S itself; in other words, of deciding the next move from an enumerable bag of possible choices.
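To fix ideas, here is a minimal sketch of how the states of S and the graph G of interactions might be represented. The names (State, Action, Graph, possible_moves) are purely illustrative assumptions, not prescribed by anything above; the only point is to make the "enumerable bag of choices" concrete.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    label: str        # qualitative description of an internal state of S
    kpis: tuple = ()  # measured indicators attached to this state

@dataclass(frozen=True)
class Action:
    name: str         # an interaction we can attempt on S
    cost: float       # real cost of actually attempting it

# G: for each known state, the enumerable bag of possible next moves
# and the state of S each move is expected to lead to.
Graph = dict[State, list[tuple[Action, State]]]

def possible_moves(graph: Graph, current: State) -> list[tuple[Action, State]]:
    """The enumerable bag of choices available from the current state of S."""
    return graph.get(current, [])
```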
Whether I am exploring a transparent system or a black box, either way I am now dealing with G. Exploring G has costs: in the abstract, computational constraints of time and memory; in concrete terms, the cost of traversing an arc, that is, the real cost of actually attempting to modulate the system and push forward my knowledge frontier of G, and hence of S. For every practical attempt at mitigating a real problem that I decide to traverse, I must budget the time and the total cost of the array of actions I decide to perform. Theoretical computer science offers two main techniques for searching G, i.e., for traversing it: BFS (breadth-first search) and DFS (depth-first search). In practice, the question is at what point I stop considering the consequences of the consequences of my actions when choosing the first action to take on S.
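As a sketch of what that cutoff means mechanically, the two traversals below (reusing the illustrative Graph/State/Action structures from the previous sketch) both stop expanding G at a chosen depth and charge each traversed arc its cost; the function names and the budget accounting are assumptions, not part of the model above.

```python
from collections import deque

def bfs_frontier(graph, start, max_depth):
    """Breadth-first: expand all consequences level by level up to max_depth."""
    frontier, seen, spent = deque([(start, 0)]), {start}, 0.0
    reached = []
    while frontier:
        state, depth = frontier.popleft()
        reached.append((state, depth))
        if depth == max_depth:
            continue                      # stop considering consequences of consequences here
        for action, nxt in graph.get(state, []):
            spent += action.cost          # traversing an arc has a real cost
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return reached, spent

def dfs_frontier(graph, start, max_depth):
    """Depth-first: follow one chain of consequences to max_depth before backtracking."""
    stack, seen, spent = [(start, 0)], {start}, 0.0
    reached = []
    while stack:
        state, depth = stack.pop()
        reached.append((state, depth))
        if depth == max_depth:
            continue
        for action, nxt in graph.get(state, []):
            spent += action.cost
            if nxt not in seen:
                seen.add(nxt)
                stack.append((nxt, depth + 1))
    return reached, spent
```

The practical difference is the shape of the partial knowledge each leaves behind: BFS surveys all immediate alternatives shallowly, DFS follows one chain of consequences far before backtracking.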
Three new concepts come into play here: heuristics, the constraints V on decision making, and iterative deepening (ID). Stepping out of the abstraction again: in business reality V is the set of constraints that market competition imposes on decision making, i.e., I often have limited time to make meaningful decisions in contexts closer to a black box than to transparency. It is within these practical limits that heuristics must be introduced: the body of empirical, experiential knowledge that lets me say enough is enough, stop pushing the knowledge frontier of G, and choose from the partial bag of possible choices enumerated so far. That is, I accept that my knowledge of S is imperfect, but I note that the choices I now find plausible allow me to modulate S while keeping it within externally imposed hard limits, and I also note that in partially traversing segments of G no known problem has escalated to my attention; all things considered, I have sufficiently good alternatives. I can still invest further resources in optimization via ID: keeping the already computed nodes of G cached and deepening prudently, iteration by iteration, for as long as I have time to do so.
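One possible reading of ID under the constraint V, again only a sketch with assumed names (evaluate stands in for the heuristic judgment on a frontier state, time_budget_s for V, and the cache for the already computed nodes of G): shallower passes are kept and reused, and the loop deepens only while time remains, always returning the best choice found so far.

```python
import time

def iterative_deepening(graph, start, evaluate, time_budget_s, max_depth=16):
    """Anytime deepening: reuse cached results from shallower passes and deepen
    only while the externally imposed time budget V allows it."""
    deadline = time.monotonic() + time_budget_s
    cache = {}                                   # (state, depth) -> value already computed
    best_action, best_value = None, float("-inf")

    def value(state, depth):
        if (state, depth) in cache:
            return cache[(state, depth)]
        children = graph.get(state, [])
        if depth == 0 or not children or time.monotonic() > deadline:
            v = evaluate(state)                  # heuristic: enough is enough, judge the frontier as-is
        else:
            v = max(value(nxt, depth - 1) for _, nxt in children)
        cache[(state, depth)] = v
        return v

    for depth in range(1, max_depth + 1):        # deepen prudently, one layer at a time
        if time.monotonic() > deadline:
            break                                # V: no more time for a meaningful decision
        for action, nxt in graph.get(start, []):
            v = value(nxt, depth - 1)
            if v > best_value:
                best_action, best_value = action, v
    return best_action, best_value
```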
At this point we reiterate the whole process with more information available, derived from reading off the partial results P I obtained. Right here lies the divide between competitiveness and systematic lagging. To elaborate, a metaphor: we are flying visually, without advanced instrumentation. If I set an attitude (in the terms above, a decision on S) and wait just long enough not to be observing mere noise (the heuristics again), I can read the short-term outcome of my actions: am I losing altitude? am I turning? am I yawing? In this setting the assessments are easy, because the space of system states is relatively limited. Generalizing, though: envelope protection works off the derivative before the indicators themselves, without waiting for large deltas. If a nose-high attitude makes me slow down, I correct it long before I stall, because stalling loses altitude fast and I cannot necessarily afford that. As long as I have not yet moved away from the critical points, I cannot afford to wait for macro changes before taking corrective action on S. The more surgically the synthetic indicator I is aligned with my immediate goals, the more I can afford to trust the derivative of I first; in other words, the better I set up an orthogonal system of indicators on S, the less I need to let time take its course, and the systematicity of staying competitive lies precisely in not letting time take its course. P and I spawn a new system S' on which modulation begins again.
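To make "the derivative before the indicators" concrete, a small hypothetical sketch: watch_derivative, slope_limit, and the sample readings are all invented for illustration; the point is only that the correction fires on the trend of I while I itself is still within the hard limits.

```python
def watch_derivative(samples, slope_limit, window=3):
    """Trigger a correction from the trend of indicator I, not from its absolute delta.
    samples: recent readings of I, oldest first, taken at a fixed interval."""
    if len(samples) < window + 1:
        return False                          # not enough data yet: still only noise
    recent = samples[-(window + 1):]
    slopes = [b - a for a, b in zip(recent, recent[1:])]
    avg_slope = sum(slopes) / len(slopes)     # smoothed derivative over the window
    return avg_slope < -slope_limit           # correct long before the "stall", i.e. before I itself looks bad

# Illustrative usage: I is still within hard limits, but its trend already calls for action.
readings = [100.0, 99.6, 99.1, 98.3, 97.2]
if watch_derivative(readings, slope_limit=0.5):
    print("derivative breach: correct the last decision on S now")
```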