Archive

Posts Tagged ‘Control theory’

Please Do Not Disturb

When a control system is humming along, the gap between the desired and current states is so small that the frequency of command issuance by the Decision Maker component is essentially zero; all is well and goal attainment is on track. However, with the universe being as messy as it is, unseen and unpredictable “disturbances” can, and do, enter the system at any point of access to the structure.

[Figure: control system plus disturbances]

If the sensors and/or actuators can’t filter out the disturbances or are malfunctioning themselves, then true control of the production system may be lost. Perceptions and commands get distorted, and the gap between goal attainment and “reality” will be perceived as shorter or longer than it actually is. D’oh! I hate when that happens.
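To make the disturbance problem concrete, here’s a tiny Python toy (BD00’s own sketch, not anything from the figure above; the gain and bias numbers are invented) in which a disturbance sneaks in at the sensor and biases every perception the decision maker receives:

```python
# Minimal sketch: a discrete-time control loop where a disturbance corrupts the
# sensor reading. All names and numbers are illustrative assumptions.

goal = 100.0          # desired state of the production system
state = 0.0           # actual state of the production system
gain = 0.5            # how aggressively the decision maker closes the perceived gap
sensor_bias = 20.0    # disturbance entering at the sensor

for step in range(20):
    perception = state + sensor_bias   # distorted view of reality
    error = goal - perception          # perceived gap (not the real one)
    command = gain * error             # decision maker issues a command
    state += command                   # actuators/producers act on it

print(f"actual state: {state:.1f}, goal: {goal}")   # settles about 20 units short
```

The perceived error happily settles to zero while the real production system sits a full sensor-bias short of the goal; the decision maker thinks all is well when it isn’t.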

Goal Setting

September 13, 2013

Given the generic control system model below, how can we improve the performance of an underachieving production system?

[Figure: intact control system]

One performance improvement idea is to make the goal directly visible to the production system, as opposed to indirectly via the policy/process actions of the actuators.

[Figure: goal visibility]

A meta-improvement on this idea is to allow the production system constituents direct involvement in the goal setting process.

[Figure: goal setting]

It seems to make naive sense, doesn’t it? Well, it does unless the goal is some arbitrarily set financial target (“increase market share by 10%“, “decrease costs by 5%“, “increase profits by 25%“) pulled out of a hat to temporarily anesthetize Wall Street big-wigs.


Unobservable, Uncontrollable

September 11, 2013

Piggybacking on yesterday’s BS post, let’s explore (and make stuff up about) a couple of important control system “ilities“: observability and controllability.

First, let’s look at a fully functional control system:

[Figure: intact control system]

As long as the (commands -> actions -> CQs -> perceptions) loop of “consciousness” remains intact, the system’s decision maker(s) enjoy the luxury of being able to “control the means of production“. Whether this execution of control is effective or ineffective is in the eye of the beholder.

As the figure below illustrates, the capability of decision-makers to control and/or observe the functioning of the production system can be foiled by slicing through its loop of “consciousness” at numerous points in the system.

[Figure: observability and controllability break points]

From the perspective of the production system, simply ignoring or misinterpreting the non-physical actions imposed on it by the actuators will severely degrade the decision maker’s ability to control the system. By withholding or distorting the state of the important “controlled quantities” crucial for effective decision making, the production system can thwart the ability of the decision maker(s) to observe and assess whether the goal is being sought after.

In systems where the functions of the components are performed by human beings, observability and controllability get compromised all the time. The level at which these “ilities” are purposefully degraded is closely related to how fair and just the decision makers are perceived to be in the minds of the system’s sensors, actuators, and (especially the) producers.

Who’s Controlling The Controller?

September 9, 2013

The figure below models a centralized control system in accordance with Bill Powers’ Perceptual Control Theory (PCT).

[Figure: control system with no disturbances]

Given (1) a goal to achieve and (2) the current perceived state of the production system, the decision-making apparatus issues commands it presumes will (in a timely fashion) narrow the gap between the desired goal and the current system state.

But wait! Where does the goal come from, or, in cybernetics lingo, “who’s controlling the controller?” After all, the entity’s perceptions, commands, actions, and controlled quantity signals all have identifiable sources in the model. Why doesn’t the goal have a source? Why is it left dangling in an otherwise complete model?

Abstraction is selective ignorance – Andrew Koenig

Well, as Bill Clinton would say, “it depends“. In the case of an isolated system (if there actually is such a thing), the goal source is the same as the goal target: the decision-maker itself. Ahhhh, such freedom.

[Figure: decision maker as both goal source and goal target]

On the other hand, if our little autonomous control system is embedded within a larger hierarchical control system, then the goal of the parent system decision maker takes precedence over the goal of the child decision maker. In the eyes of its parent, the child decision maker is the parent’s very own virtual production subsystem be-otch.

[Figure: usurped goals]

To the extent that the parent and child decision makers’ goals align, the “real” production system at the bottom of the hierarchy will attempt to achieve the goal set by the parent decision maker. If they are misaligned, then unless the parent interfaces some of its own actuator and sensor resources directly to the real production system, the production system will continue to do the child decision maker’s bidding. The other option the parent system has is to evict its child decision maker subsystem from the premises and take direct control of the production system. D’oh! I hate when that happens.

[Figure: no middleman]

[Figure: sensors and actuators]
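For the hierarchy scenario, here’s a rough Python sketch (BD00’s own toy, not Powers’ or anyone else’s model; every gain and target is invented) of a parent decision maker whose output continually resets the child decision maker’s goal, while the child drives the real production system:

```python
# Two-level cascade: the parent's output becomes the child's reference (goal),
# and the child's output drives the production system. Numbers are made up.

parent_goal = 50.0    # what the parent ultimately wants
production = 0.0      # state of the real production system
child_goal = 0.0      # the child's reference, owned here by the parent

for step in range(60):
    # Parent loop: nudge the child's goal based on what the parent perceives.
    parent_error = parent_goal - production
    child_goal += 0.2 * parent_error

    # Child loop: drive the production system toward whatever goal it was handed.
    child_error = child_goal - production
    production += 0.5 * child_error

print(f"production state: {production:.1f} (parent wanted {parent_goal})")
```

As long as the child faithfully chases whatever goal the parent hands it, the production system ends up where the parent wants it. If the child substitutes its own goal, the parent is left with the direct-wiring or eviction options pictured above.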

Yin And Yang

August 19, 2012

In Bill Livingston’s current incarnation of the D4P, the author distinguishes between two mutually exclusive types of orgs. For convenience of understanding, Bill arbitrarily labels them as Yin (short for “Yinstitution“) and Yang (short for “Yang Gang“):

The critical number “four” in Livingston’s thesis is called “the Starkermann bright line“. It’s based on decades of modeling and simulation using Starkermann’s control-theory-based approach to social unit behavior. According to the results, a group with more than 4 members, when in a “mismatch” situation where Business As Usual (BAU) doesn’t apply to a novel problem that threatens the viability of the institution, is not so “bright” – despite what the patriarchs in the head shed espouse. Yinstitutions, in order to retain their identities, must, as dictated by natural laws (control theory, the 2nd law of thermodynamics, etc.), be structured hierarchically and adopt “infallibility” over “intelligence” as their ideological MoA (Mechanism of Action).

According to Mr. Livingston, there is no such thing as a “mismatch” situation for a group of <= 4 capable members because they are unencumbered by a hierarchical class system. Yang Gangs don’t care about “impeccable identities” and thus, they expend no energy promoting or defending themselves as “infallible“. A Yang Gang’s structure is flat and its MoA is “intelligence rules, infallibility be damned“.

The accrual of intelligence, defined by Ross Ashby as simply “appropriate selection“, requires knowledge-building through modeling and rapid run-break-learn-fix simulation (RBLF). Yinstitutions don’t do RBLF because it requires humility, and the “L” part of the process is forbidden. After all, if one is infallible, there is no need to learn.

Bankrupt Models

August 17, 2012

In his paper, “The Dispute Over Control Theory“, Bill Powers tries to clarify how Perceptual Control Theory (PCT) differs from the two main causal approaches to psychology: stimulus-response and command-response. In order to gain a deeper understanding of PCT, I’m gonna try to reproduce Bill’s argument in this post with my own words and pictures.

The figure below represents a PCT unit of behavioral organization, the Feedback Control System (FCS). An FCS is a closed loop with not one independent input (e.g. stimulus or command), but two. One input, the reference signal, is sourced from the output function of a higher level control unit(s). The second input, an amalgam of environmental disturbances, “invades” the loop from outside the organism.  Both inputs act on the closed loop as a whole and the purpose of the FCS is to continuously act on the environment (via muscular exertion) to maintain the perceptual signal as close to the reference signal as possible. As the reference changes, the behavior changes. As the disturbance changes, the behavior changes. Since action is behavior, the FCS exhibits behavior to control perception; behavior is the control of perception.
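Here’s a bare-bones numerical rendering of that loop (a Python toy of my own, not Powers’ code; the reference, gain, and disturbance values are invented): one input is the reference handed down from above, the other is a disturbance invading from outside, and the loop’s only lever is its own output acting on the environment.

```python
import math

# Toy FCS: the controlled quantity is the sum of the system's own output (its
# behavior) and an outside disturbance; the perception senses that quantity.

output = 0.0
for t in range(200):
    reference = 30.0                           # handed down from a higher level
    disturbance = 20.0 * math.sin(t / 15.0)    # invades from outside the organism
    controlled_quantity = output + disturbance
    perception = controlled_quantity           # what the loop "experiences"
    error = reference - perception
    output += 0.1 * (10.0 * error - output)    # slowed, amplified output function

print(f"perception {perception:.1f} vs reference {reference:.1f}")
```

Even though the disturbance swings by ±20, the loop keeps the perception within a couple of units of the reference. Zero the output and the perception would simply ride the disturbance.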

The figure below depicts models of the stimulus-response and command-response views in terms of the PCT FCS. The foremost feature to notice is that there is no loop in either model – the loop is broken. The second major difference is that neither model has two inputs.

In the Stimulus-Response model, the linear, causal path of action is: Stimulus (a.k.a. Disturbance) -> Organism -> Behavior. In the Command-Response model, the linear, causal path of action is: Command (a.k.a. Reference) -> Organism -> Behavior. Hence, the models can be reduced to these simple (and bankrupt) renderings of a dumb-ass organism totally under the control of “something in the external environment“:

So, you may ask: “How could our best and brightest minds in psychology and sociology have gotten it so wrong for so long, and why don’t they embrace PCT to learn how living systems really tick?” It’s because they erroneously applied Newton’s linear cause-effect approach for the physics of inanimate objects to living beings, and they’ve thoroughly crystallized their UCBs into cement bunkers.

When you push a rock, there is no internal resistance from the rock and Newton’s laws kick into action. When you push a human being, you’ll encounter internal resistance and Newton’s laws don’t apply – control theory applies.

Compounding the difficulty has been a surprising tendency for scientists who are normally careful to know what they are talking about to leap to intuitive conclusions about the properties and capabilities of control systems, without first having become personally acquainted with the existing state of the art. If any criticism is warranted, it is for promulgating statements with an authoritative air without having verified personally that they are justified. – Bill Powers

D’oh! BD00 takes major offense at Bill’s last sentence.

Normal, Slave, Almost Dead, Wimp, Unstable

July 29, 2012

Mr. William T. Powers is the creator (discoverer?) of “Perceptual Control Theory” (PCT). In a nutshell, PCT asserts that “behavior controls perception“. His idea is the exact opposite of the stubborn, entrenched, behaviorist mindset which auto-assumes that “perception controls behavior“.

This (PCT) interpretation of behavior is not like any conventional one. Once understood, it seems to match the phenomena of behavior in an effortless way. Before the match can be seen, however, certain phenomena must be recognized. As is true for all theories, phenomena are shaped by theories as much as theories are shaped by phenomena. – Bill Powers

On the Living Control Systems III web page, you can download software that contains 13 interactive demos of PCT in action:

The other day, I spent several hours experimenting with the “LiveBlock” demo in an attempt to understand PCT more deeply. When the demo is launched, the majority of the window is occupied by a fundamental, building-block feedback control system:

When the “Auto-Disturbance” radio option in the lower left corner is clicked to “on“, a multi-signal time trace below the model springs to life:

As you can see, while operating under stable, steady-state circumstances, the system does what it was designed to do. It purposefully and continuously changes its “observable” output behavior such that its internal (and thus, externally unobservable) perceptual signal tracks its internal reference signal (also externally unobservable) pretty closely – in spite of being continuously disturbed by “something on the outside“. When the external disturbance is turned off, the real-time trace goes flat, as expected. The perceptual signal starts tracking the reference signal dead nutz on the money such that the difference between it and the reference is negligible:

The Sliders

Turning the disturbance signal “on/off” is not the only thing you can experiment with. When enabled via the control panel to the left of the model (not shown in the clip below), six parameter sliders are displayed:

So, let’s move some of those sliders to see how they affect the system’s operation.

The Slave

First, we’ll break the feedback loop by decreasing the “Feedback Gain” setting to zero:

Almost Dead

Next, let’s disable the input to the system by moving the “Input Gain” slider as far to the left as we can:

The Wimp

Next, let’s cripple the system’s output behavior by moving the “Output Gain” slider as far to the left as we can:

Let’s Go Unstable!

Finally, let’s move the “Input Delay” slider to the right to decrease the response time and then move the “Output Time Constant” slider to the left to increase the reaction time:
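To get a feel for why these slider moves produce the “slave”, “almost dead”, “wimp”, and unstable behaviors, here’s a crude Python imitation of the experiments (BD00’s own sketch; the parameter names only loosely mirror the LiveBlock sliders and all the numbers are invented):

```python
import math

# feedback_gain scales how much of the output reaches the environment,
# input_gain scales the sensor, output_gain scales the comparator error,
# and delay adds transport lag to the sensed signal. All values are made up.

def run(feedback_gain=1.0, input_gain=1.0, output_gain=10.0, delay=0, steps=300):
    output = 0.0
    history = [0.0] * delay                     # pending sensed values (lag line)
    for t in range(steps):
        reference = 30.0
        disturbance = 20.0 * math.sin(t / 15.0)
        environment = feedback_gain * output + disturbance
        history.append(input_gain * environment)
        perception = history.pop(0)             # delayed (or immediate) sensed value
        error = reference - perception
        output += 0.1 * (output_gain * error - output)   # slowed output function
    return reference - perception               # final tracking error

print("normal     ", round(run(), 1))                   # small error
print("slave      ", round(run(feedback_gain=0.0), 1))  # output can't touch perception
print("almost dead", round(run(input_gain=0.0), 1))     # error stuck at the full reference
print("wimp       ", round(run(output_gain=0.1), 1))    # too weak to cancel the disturbance
print("unstable   ", round(run(delay=8), 1))            # enough lag makes the loop blow up
```

With the loop intact, the final tracking error is small; break or weaken any piece of it and the error balloons, and enough lag sends it off toward infinity.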

So, what are you? Normal, a slave, almost dead, a wimp, or an unstable wacko (like BD00)?

I’ve always been pretty much a blue-collar type, by training and by preference. – Bill Powers

Extrapolation, Abstraction, Modeling

July 25, 2012

In the beginning of his book, “Behavior: The Control Of Perception“, Bill Powers asserts that there are three ways of formulating a predictive theory of behavior: extrapolation, abstraction, and modeling.

Extrapolation and abstraction are premised on accumulating a collection of observations of behaviors and ferreting out recurring patterns applicable across many contexts and input situations. Modeling goes one level deeper and is based on formulating an organizational structure of the internal mechanisms that cause the observed behaviors.

For 30 years prior to the discovery/development/refinement of control theory (and continuing on today because of entrenched mindsets), psychologists and sociologists formulated theories of behavior based on extrapolation and abstraction. Because the human nervous system and brain were (and still are) unfathomably complex, they didn’t even try to model any underlying mechanisms. They treated organisms as dumb-ass, purposeless, “black box” responders to stimuli.

Bill Powers didn’t accept the superficial approaches and black box conclusions of the social “sciences” crowd. He went deeper and turned opaque-black into transparent-white with the relentless modeling and testing of his control system hypothesis of behavior:

Note that in Bill’s model, there is an internal goal that determines the response to a given “disturbance“. Thus, given the same disturbance at two different points in time, the white box model can generate different responses whereas the black box model would always generate the same response.

For example, the white box model explains anomalies like why, on the 100th test run, a mouse won’t press a button to get a food pellet as it did on the 99 previous runs. In this case, the internal goal may be to “eat until satiated“. When the internal goal is achieved, the externally observed behavior changes because the stimulus is no longer important to the mouse.
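A trivial Python contrast (my own illustration, loosely inspired by the mouse example, not from Powers’ book) makes the difference plain:

```python
# The same stimulus gets the same response from a black box, but a different
# response from a white box once its internal goal ("eat until satiated") is met.

def black_box(stimulus):
    return "press button" if stimulus == "pellet available" else "ignore"

class WhiteBox:
    def __init__(self, satiation_point=5):
        self.pellets_eaten = 0
        self.satiation_point = satiation_point          # the internal goal

    def respond(self, stimulus):
        if stimulus == "pellet available" and self.pellets_eaten < self.satiation_point:
            self.pellets_eaten += 1
            return "press button"
        return "ignore"                                 # goal met: behavior changes

mouse = WhiteBox(satiation_point=5)
for run in range(1, 8):
    print(run, black_box("pellet available"), mouse.respond("pellet available"))
# Runs 1-5: both press the button. Runs 6-7: only the black box keeps pressing.
```

Same stimulus on every run, but once the internal goal is satisfied, the white box’s observable behavior changes while the black box keeps mindlessly pressing.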

Theories based on extrapolation and abstraction are useful for predicting short term actions and trends within a certain probability, but when a physical model of the underlying mechanisms of a phenomenon is discovered, it explains a lot of anomalies unaccounted for by extrapolation/abstraction.

For a taste of Mr. Powers’ control system-based theory of behavior, download and experiment with the software provided here: Living Control Systems III.

Nine Plus Levels

July 11, 2012

In William T. Powers’ classic and ground-breaking book “Behavior: The Control Of Perception“, Mr. Powers derives a theoretical model of the human nervous system as a stacked, nine-level hierarchical control system that collides with the standard behaviorist stimulus-response model of behavior. As the book title conveys, his ultimate heretical conclusion is that behavior controls perception; not vice-versa.

The figure below shows a model of a control system building block. The controller’s job is to minimize the error between a “reference signal” (that originates from “somewhere” outside of the controller) and some feature of the external environment that can be “disturbed” from the status quo by other, unknown forces.

Notice that the comparator is one level removed from physical reality via the transformational input and output functions. An input function converts a physical effect into a simplified neural current representation, and an output function does the opposite. After all, everything we sense and every action we perform is ultimately due to neural currents circulating through us and being interpreted as something important to us.
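Here’s a stripped-down Python rendering of that building block (my own sketch with made-up scale factors, not Powers’ model): an input function turns a physical quantity into a “neural current”, the comparator works entirely in neural-current units, and the output function turns the error back into physical action.

```python
# Input function -> comparator -> output function, with room temperature as an
# arbitrary stand-in for the physical quantity. Scale factors are invented.

def input_function(physical_quantity):
    return 0.1 * physical_quantity       # physical effect -> neural current

def output_function(neural_error):
    return 50.0 * neural_error           # neural current -> physical effect

def comparator(reference, perception):
    return reference - perception        # both are neural currents

room_temp = 10.0                         # physical quantity being controlled
reference = 2.1                          # neural-current reference from a higher level
for _ in range(30):
    perception = input_function(room_temp)
    error = comparator(reference, perception)
    room_temp += 0.05 * output_function(error)   # act on the environment, gradually

print(f"room temperature settles near {room_temp:.1f}")   # 2.1 / 0.1 = 21
```

The comparator never sees “room temperature”; it only sees the neural-current stand-in, which is exactly the “one level removed from physical reality” point above.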

So, what are the nine levels in Mr. Powers’ hierarchy, and what is the controlled quantity modeled by the reference signal at each level? BD00 is glad you asked:

Starting at the bottom level, the controlled variables get more and more abstract as we move upward in the hierarchy. Mr. Powers’ hierarchy ends at 9 levels only because he doesn’t know where to go after “concepts“.

So, who/what provides the “reference signal” at the highest level in the hierarchy? God? What quantity is it intended to control? Self-esteem? Survival? Is there a “top” at all, or does the hierarchy extend on to infinity; driven by evolutionary forces? The ultimate question is “who’s controlling the controller?“.

This post doesn’t come close to doing justice to Mr. Powers’ work. His logical, compelling, and novel derivation of the model from the ground up is fascinating to follow. Of course, I’m a layman, but it’s hard to find any holes/faults in his treatise, especially in the lower, more concrete levels of the hierarchy.

Note: Thanks once again to William L. Livingston for steering me toward William T. Powers. His uncanny ability to discover and integrate the work of obscure, “ignored”, intellectual powerhouses like Mr. Powers into his own work is amazing.
