Thursday, March 10, 2011

Predicting Strategic Style Change

By: Gary J. Salton, Ph.D.
Chief: Research & Development
Professional Communications, Inc.


SUMMARY
This research outlines a study of over 1,500 test-retest surveys spaced up to 12.7 years apart. The study uses a natural design that identified the degree and direction of change in “I Opt” strategic styles and profiles over time.

The study found that a majority of “I Opt” dominant styles remained constant over the long time period covered by the study. Many of the dominant style changes that did occur did not represent major behavioral shifts. Rather, the granular nature of rank order (i.e., ordinal) measurement tended to exaggerate estimates of behavioral change.

It was found that a majority (83%) of the dominant styles that did change followed a predictable pattern. The changes appear to be governed by the principles of social economics and non-optimality. The effects of aging were also identified. Aging effects were statistically significant but of relatively modest consequence.

Finally, the study identified a single strategic style that was most resistant to change. This stability appears to be due to the flexible structure of the input and output strategies being employed.

A video summary is available on YouTube and can be accessed by clicking the icon to the right.


BACKGROUND
Most tools in the field find their roots in psychology (e.g., Myers-Briggs®, DiSC®, 16PF®, FIRO-B®, etc.). Their designers assume they are measuring a “hard wired” behavioral map. They may informally acknowledge that change occurs. But none offers an explicit, testable change mechanism.

“I Opt”® is unique. It is based on information processing. It focuses on the behavior people generate as they attempt to navigate a particular environment. If the environment is stable, their behavior is stable. If the environment changes, they either adapt their behavior or exit to more comfortable terrain.

The choice of which behavior to adopt is guided by social economics. Some behavioral changes are harder than others. If the same goal can be reached by two different behavioral choices—one hard and one easy—people will choose the easy one. This is an obvious but important observation.

The adaptive behavioral choice does not involve optimization. Optimality implies that some best possible end condition is known. This is impossible in ever-changing social situations. “Good enough” is the typical standard. Once a “good enough” result is regularly obtained behavior again stabilizes. Adequacy is a governing principle of style choice.

“I Opt” is indifferent as to whether or not there is a “hard wired” component to human nature. It works whether it is there or not. Any change discovered must be due to factors other than psychology, since “hard wired” behavior is fixed by definition. The fact that environment affects behavioral choice is deemed so obvious as not to require explanation here.


STYLE CHANGE
As applied in organizational research, “style” is a term for a typical pattern of behavior. It is usually assessed using ordinal (i.e., rank order) measures. This form of measurement has serious limitations.

For example, DiSC® allows you to say that a person is more “dominant” than “compliant.” But you cannot say that a person is twice as likely to use one style versus the other. It does not matter if you assign the numeral 2 to “sometimes” and 3 to “often.” You will still be dividing “sometimes” by “often.” The inability to assign a magnitude to a style means that only general, non-specific and somewhat vague assessments can be offered.
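The point is easy to demonstrate. The sketch below is a hypothetical illustration (not actual DiSC or “I Opt” scoring) of why arithmetic on ordinal codes is empty while arithmetic on ratio-scale counts is meaningful:

```python
# Hypothetical illustration (not actual DiSC or "I Opt" scoring):
# arithmetic on ordinal codes vs. ratio-scale measurements.

# Ordinal: numerals assigned to ranked labels. The codes order the
# labels but carry no magnitude, so the ratio below means nothing.
ordinal = {"sometimes": 2, "often": 3}
meaningless = ordinal["sometimes"] / ordinal["often"]  # 0.67 -- of what?

# Ratio scale: counts of actual responses. Zero is real and ratios
# are interpretable: this person chose style A twice as often as B.
responses = {"style_a": 12, "style_b": 6}  # made-up counts
ratio = responses["style_a"] / responses["style_b"]  # 2.0 -- "twice as likely"
print(meaningless, ratio)
```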

The underlying concept of “style” does have practical utility. Practitioners must convey knowledge in a manner that can be understood. Styles offer that vehicle. The problem lies in how style is measured. “I Opt” has overcome this problem by using exact (i.e., ratio scale—like a ruler) rather than rank order calibration. This gives “I Opt” a far broader range than traditional tools.

For example, we can always reduce a time, say 12:05 PM, to “daytime.” The reverse is obviously not true. “Daytime” is not always 12:05 PM. “I Opt” can emulate traditional rank order tools. Traditional tools cannot emulate “I Opt.” This means that “I Opt” can address issues using broad categories where appropriate but is not confined to that level.

Exact measurement also means that the theory underlying “I Opt” can be disproved. This is the essential quality of any scientific theory. Without precise measurement no experiment can be designed that could completely disprove the “hard wired” claims of traditional tools. The inability to disprove them relegates these traditional tools to the realm of speculation. That speculation may be true. No one will ever know for certain. However, even if unproven, traditional tools can be useful. They need not be discarded.

Definitive theory, measurement capabilities and a scientific basis make “I Opt” a unique assessment tool. It is in a class or category by itself. This means that it can be used in conjunction with any of the historically accepted tools. “I Opt” addresses different things in a different way. Since they work on different dimensions, they cannot contradict each other. This means that they can be combined and used together if conditions warrant.


THE SAMPLE
This research tests the theoretical expectations and principles outlined above using evidence-based data. It draws on over 12 years of repeated measurements. Table 1 outlines the general characteristics of the sample used.

Table 1
GENERAL SAMPLE CHARACTERISTICS



The sample is large and diverse. It is a meaningful representation of the universe to which “I Opt” technology applies.


STUDY DESIGN
Organizations have used “I Opt” technology continuously since 1994. The technology was used purposefully in development efforts involving teams, departments, and work groups. It was also deployed in programs involving leadership development, conflict resolution and other similar areas. This purposeful use means it is not contaminated by “experimentation” bias. In other words, the participants did not consider it a “game”, toy or other form of diversion.

During the period of the study people participated in multiple activities using “I Opt.” The time horizon was long enough for significant changes in life circumstances to occur. People got married, had children, were promoted, changed location and so on. Testing and retesting over this long period provides “I Opt” with a rock-solid base on which to test and extend the already strong foundation on which it rests.


RETEST TIMING
The time periods between test and retest were determined by business needs and therefore vary widely. This is an advantage. There is no preordained period over which change is “supposed” to occur. Graphic 1 shows the distribution of retests over time.

Graphic 1
TIME DISTRIBUTION OF RETESTS
The distribution is obviously skewed. The reason is that inclusion in the study requires that the person remain with a firm. The average tenure of males in the private sector dropped from 15.5 years in the 1973-83 era to 11.4 years in the 1996-2006 period (Farber, 2008). Since the economic collapse in 2008 it has undoubtedly dropped still further. Fewer people remaining with an organization over time means that there are fewer people to retest. Hence the skewed distribution.

However, the sample size is large. Retests beyond the 2.7-year average retest period totaled 556 (37% of the sample). This means that long-tenured people are well represented. This reduces the potential bias arising from inadvertently measuring one cohort (e.g., Gen-Xers, baby boomers, etc.). Overall, the retest distribution appears to be a fair representation of what can be expected in typical organizational situations.


POPULATION LEVEL CHANGE
Styles cannot change without affecting the entire behavioral profile of a person. For example, if “dominance” increases there is less time left in which “compliance” can be expressed (to use DiSC terms). Increase any style and something else has to change to accommodate it.

Graphic 2
CHANGES IN THE POPULATION PROFILE
(n =1515)



Graphic 2 measures change in the overall behavioral profile of the sample population. It shows that the whole population (i.e., n = 1515) did not substantially change in the average 2.7 years between test and retest. Individual changes netted out. This is exactly what would be expected and predicted by “I Opt” theory.

A major change in the profile of an overall population would require the information flows or meanings used in a society to change. While there have been “tweaks” (e.g., faster Internet lines, another recession, etc.) the basic social substance has not changed. The government still functions, schools are still teaching and stocks are still being traded. “I Opt” reflects this consistency. This result lends support to the theory underlying “I Opt.”


COMPOSITE LEVEL CHANGE
The sample of 1,515 retests can also be looked at individually in terms of “style” changes. While society may not have changed, the circumstances of many individuals within that society most certainly have. Using the “style” concept, these changes would be most visible in a change in the dominant style—the style with the highest rank order. Looked at in this manner, rank is all that is important. The magnitude of change does not matter. Graphic 3 shows the results for the dominant style of the study participants.

Graphic 3
CHANGES IN DOMINANT STYLE
(n =1515)

The dominant style of most people in the sample did not change. But a significant minority did. Rank order (i.e., ordinal) measurement does not tell us by how much. However, it was enough to change the rank order of strategies employed. The first step in understanding change is to try to figure out the magnitude of the change. In other words, we want to know if the change in behavioral preference is a lot or a little.

Graphic 4 shows how many individual survey retest responses (i.e., “questions”) changed style values among all 1,515 re-testers. This is a measure of actual change. It may not be reflected in a change in dominant style. It could be merely a change in emphasis. For example, a style may have increased or decreased without changing rank order. Graphic 4 shows that an average of 2.5 “I Opt” responses that affected style scores changed between test and retest.

Graphic 4
NUMBER OF RESPONSE CHANGES
IN ALL STYLE CATEGORIES

(n =1515)

It is worth noting that only 7% of the sample had no change at all (left-hand column on Graphic 4). This would suggest that over an average period of 2.7 years most people encounter some kind of change. This reconfirms the ubiquitous nature of change. Change is the one constant of life.

Graphic 5
RESPONSE CHANGES IN DOMINANT STYLE CATEGORY
DOMINANT STYLE CHANGED
(n =651)

Graphic 5 shows the number of retest response changes among those whose dominant style (i.e., rank order position) actually shifted. The column showing “0” response changes merits explanation (left-hand column on Graphic 5). These are the cases where the original score for the dominant style did not change. But peripheral styles did change. Some of these changes were enough to boost another style to the dominant position even though the original dominant style stayed at the same level.

Combining Graphics 4 and 5 tells a story. On average, 2.5 responses on the 24-question survey changed for all members of the sample. Graphic 5 shows the same data only for surveys where the dominant style (rank order) changed. Here the average number of responses that changed was 3.5. In other words, an average difference of one response is enough to flip a style from one category to another.
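The counting involved is straightforward. The sketch below illustrates it, assuming each survey is stored as a list of 24 categorical responses (the storage format and the sample vectors are assumptions, not “I Opt” internals):

```python
# Hypothetical sketch: count how many of the 24 survey responses
# changed between test and retest. The data format is assumed.

def count_changed_responses(test, retest):
    """Number of items answered differently on retest."""
    assert len(test) == len(retest) == 24
    return sum(1 for a, b in zip(test, retest) if a != b)

# Example with made-up response vectors (letters = answer choices):
test   = list("ABCDABCDABCDABCDABCDABCD")
retest = list("ABCDABCDABCAABCDABCDACCD")  # two items answered differently
print(count_changed_responses(test, retest))  # -> 2
```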

The response analysis tells us that people change over time. Most people do not change their dominant style (57%). Those that do change (43%) do not change by much. However, we can get a firmer fix on the actual magnitude of change in the styles by using the exact measurement capabilities of “I Opt.”

Graphic 6 compares the test and retest profiles for the 43% of the sample whose dominant style changed. The change is discernible. But it is not large. It is likely that the cruder rank order measurements of the traditional tools would judge this overall profile to be unchanged. “I Opt,” on the other hand, is able to recognize and measure the small but real changes in survey responses.

Graphic 6
CHANGES IN THE AVERAGE PROFILE
WHERE DOMINANT STYLE CHANGED
(n =651)



This analysis has shown that group level behavior—as measured by “I Opt”—is relatively stable. This is true whether the group includes everyone or just those who changed enough to flip their dominant style. This finding invites explanation.

One possibility is obvious. The social network and technologies of a particular society create unique information flows. These flows create a need for certain levels of each of the “I Opt” styles. Social adjustment mechanisms keep things in balance. Compensation levels for particular jobs can be increased or decreased. Social status of particular activities can rise or fall. These adjustments can cause people whose profiles lie close to the expressed need to shift. The result is an automatic stabilization centering on the needs of the society at any particular point. This is a testable hypothesis. If it is true we should expect different societies to have different global profiles. All we need do is to look.


INDIVIDUAL LEVEL CHANGE
Practitioners involved in global levels of analysis (e.g., culture studies) can make use of the group level assessment immediately. Practitioners whose work is focused on individuals require a deeper level of assessment. For this we have to look at the individual level.

The first thing we need is a baseline. Graphic 7 compares all of the retest surveys (n = 1515) with those of random pairs of people (i.e., the columns in Graphic 7). The columns were constructed by drawing random people from our 60,000+ survey database and calculating the degree to which the profiles of the selected people overlapped. This is the equivalent of the experience a person would have interacting with random people on any particular street.
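The paper does not spell out the overlap formula. One common way to compute overlap between two profiles expressed as percentages is the sum of the element-wise minima; the sketch below uses that assumption with made-up scores:

```python
# Hypothetical overlap metric (assumed, not "I Opt"'s documented
# formula): profiles are percentages over the four styles summing
# to 100; overlap = sum of the element-wise minima.

def profile_overlap(p, q):
    """Percent overlap between two four-style profiles."""
    return sum(min(a, b) for a, b in zip(p, q))

test   = {"RS": 30.0, "RI": 25.0, "HA": 25.0, "LP": 20.0}  # made-up scores
retest = {"RS": 22.0, "RI": 33.0, "HA": 27.0, "LP": 18.0}
print(profile_overlap(test.values(), retest.values()))  # -> 90.0
```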


Graphic 7
DISTRIBUTION OF TEST-RETEST OVERLAP CHANGES
(n = 1,515 retests, n = 100 random pairs)


On average, a random pairing of people produces an overlap of about 40%. The average overlap of retest surveys is 61.1%. The difference is statistically significant (p < .00001). This is a condition “I Opt” theory predicts.
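The specific significance test used is not stated in the paper. A two-sample t-test is one standard choice; the sketch below shows how such a comparison could be run, with illustrative samples rather than the study's data:

```python
# Sketch of a two-sample significance test of the kind that could
# compare retest overlaps to random-pair overlaps. The samples are
# simulated; the paper's actual test statistic is not specified.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
retest_overlaps = rng.normal(61.1, 15, size=1515)  # illustrative samples
random_overlaps = rng.normal(40.0, 15, size=100)

t, p = stats.ttest_ind(retest_overlaps, random_overlaps, equal_var=False)
print(f"t = {t:.2f}, p = {p:.2e}")  # a gap this large yields a tiny p
```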

People tend to live in stable environments. Most people go home to the same house each night, the same kids show up at the dinner table and they take the same route to work the next morning. But some changes do occur. Promotions, new babies and job losses are just some of the dislocations that occurred over the period measured here. “I Opt” theory would recognize both the consistency and change in life circumstances.

Graphic 7 bears out “I Opt” predictions. Adjustments did occur but they did not take people back to point zero (i.e. random pairings). People tended to tweak their existing profiles. Strategies that were useful in the stable portion of their lives tended to be preserved. That portion of their life that did change was met with changes in their style elections. The net effect is that the test-retest overlaps stay at higher levels than would be expected by pure chance. That is exactly what is shown on Graphic 7.


ANALYSIS OF INDIVIDUAL LEVEL CHANGE
The consistency displayed in Graphic 7 requires no explanation. The change element invites it. One potential explanation of change might be simple aging. Age affects biology and biology affects behavior. Graphic 8 shows that age has an effect. The longer the time period between test and retest (i.e., more time for aging to occur), the less the before and after profiles resembled each other. However, the effect is small.

Graphic 8
CHANGE IN RETEST OVERLAP WITH TIME
(n = 1515 retests)

The mathematical notation in Graphic 8 shows about a -0.5% (i.e., half of one percent) change in overlap per year due to time. It is reasonable to see this as the natural effect of adjustment due to normal aging processes. It is not enough to explain the entire change distribution we saw in Graphic 7. However, the R2 of 95% suggests that this source of change is real.
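The regression itself is standard. The sketch below shows the kind of fit behind Graphic 8, using illustrative yearly overlap figures (not the study's data points):

```python
# Sketch: fit overlap (%) against years between test and retest.
# The ~-0.5%/year slope and high R^2 come from a fit of this kind;
# the data points below are illustrative, not the study's.
import numpy as np

years   = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
overlap = np.array([62.2, 61.5, 61.2, 60.6, 60.0, 59.7, 59.1, 58.6])

slope, intercept = np.polyfit(years, overlap, 1)
pred = slope * years + intercept
r2 = 1 - ((overlap - pred) ** 2).sum() / ((overlap - overlap.mean()) ** 2).sum()
print(f"slope = {slope:.2f} %/year, R^2 = {r2:.2f}")
```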

Since the change due to time alone is small it is reasonable to look to changes in the local environment to explain the major portion of the observed change. The sources of environmental change are probably infinite and are not of significance to “I Opt” theory. Any change for any reason that causes a significant deterioration in the success of the current behavioral strategy is a motive (i.e., reason) for change.

However, “I Opt” theory can predict the direction of change. Remember the social economics mentioned in the first part of this article? It says that people will try to minimize the cost of any change. This is best achieved by preserving as much of the present behavior pattern as possible. This can be read directly from the “I Opt” profile by looking at adjacent axes on the graph.

The “I Opt” graphic is constructed so that adjacent axes share one information-processing component (input or output). By moving to an adjacent axis the individual preserves at least one element of the information processing strategy they are using to navigate life. This lowers the cost of the change.

The easiest shift that can be made is to change emphasis. This would happen when a person promotes their secondary style to primary status. They simply begin using a style with which they are already familiar—their current “fallback” option—more intensively. Table 2 tests this hypothesis.

Table 2
SECONDARY STYLE PROMOTED TO PRIMARY
SCORES ARE AVERAGED PERCENT OF TOTAL FOR EACH CATEGORY
(n = 651 retests)



The original secondary style is highlighted in yellow in the original test section of Table 2. The new primary style is highlighted in the retest section (i.e., right side) in bold, enlarged characters.

In a majority of cases the original secondary and final primary styles are in the same position. In 10 of the 12 categories (83%) the change in style behaved exactly as predicted. On average, people just adjusted emphasis. They used their secondary style more. This increased its rank order position. The dominant style (rank order position) changed, but consistency in the behavioral pattern was preserved.

The two cases where this did not happen are boxed in red. In these two cases the original secondary style was almost as strong as the style that ultimately evolved into the primary style on retest. In other words, people were already heavily using the style that ultimately became their primary style. They were just not using it heavily enough to raise the rank order to a secondary status. But there is also another possible reason.

Major environmental dislocations that invalidate both input and output strategies can occur. This would cause a global change in an individual's strategy (both input and output). Major dislocations of this nature are rare. Those that do occur can often be anticipated. New babies give at least 9 months advance warning. Layoffs are often preceded by losses and deteriorating working conditions. Serious illness is usually accompanied by increasing medical interventions. These “flags” may have had a role in moving the percentages in the red boxes closer together. In other words, people may have been preparing for an anticipated change in their environment.

The two boxed HA and LP areas in Table 2 are representative of a class. There were individuals in each of the four categories who made a total transition (input and output change). Table 2 worked on averages. In the two categories that escaped the boxed effect these individuals were too few to move the average—but individuals within those groups did make total transitions.

Table 3 shows the effect of those involved in a total transition of their information processing strategy more clearly. The “Focus Change” column on the right shows which element of the strategy changed. The designation “BOTH” indicates a total transition.

Table 3
SOURCE AND AMOUNT OF CHANGE
(n = 651 retests)

Table 3 looks at that portion of the sample whose dominant style changed. It rank-orders change categories by the percent of surveys falling within each category. A pattern is clearly visible and is governed by the principles of social economics and style adequacy.

A change in the output strategy (action vs. thought) is always at the top of the list. The reason is that this is the simplest and least expensive approach. Just change the output from thought to action or vice versa. It is a one-step strategy.

A change to the input strategy (unpatterned vs. structured) is the next most frequent. This is a two-step process. New kinds of information must be acquired. Then effort must be expended to organize and understand the new information. It is more expensive and therefore less used.

A total transition—a change in both input and output—is the least used. This is a three-step process. New kinds of information must be acquired, it must be organized to be understandable and then it must be acted upon in an unfamiliar way. This is the most expensive strategy and the least used by a substantial margin.

The ordering of the strategy changes also evidences the operation of the adequacy principle of strategic style selection. People were selecting strategies that worked “good enough.” If some form of optimization were operating it is unlikely that the ordering of change would be the same in every set.

The analysis of individual changes dramatically confirms the theory (what causes what and why) underlying “I Opt” technology. The data demonstrates continuity in the transition process (Graphic 7). It is also able to capture the universally acknowledged maturation effect (Graphic 8). The concept of a style change economy was forcefully confirmed by the use of secondary styles as the principal transition vehicle (Table 2). And finally the order of transition (Table 3) evidences the operation of the social economics and style adequacy principles.

Table 4
DIFFERENCES IN TRANSITION FREQUENCY


Table 4 shows the transition frequency of each style; the RI changed least often (35%). Table 5 shows that the difference between the RI change rate and those of the other styles is statistically significant for the HA and LP (highlighted in yellow) and almost so for the RS (highlighted in green).


Table 5
SIGNIFICANCE OF RATE OF CHANGE DIFFERENCES

The reason for the RI’s greater resistance to change is likely to be found in the structure of its information processing strategy. The RI uses unpatterned input. This means that the style can accept and use any form of input. The amount of information obtained may be less than for the structured HA and LP styles but it is still usable without the need to change style orientation. Remember that optimality is not at issue. Adequacy is all that is needed.

The other information-processing element is the RI’s use of thought output (e.g., ideas, options, etc.). This is infinitely flexible. Unpatterned input means that the RI is not bound by the rigor of the HA’s structured thought-based strategy. Inconvenient discrepancies can be ignored and returned to later if further definition or specification is needed. The reduced need for rigor means that a relevant response (i.e., output) can be offered to meet almost any situation. It does not have to be perfect, just “good enough.”

The combination of unpatterned input and thought output means that the RI is the most flexible of the styles. The RI style can more easily emulate any of the other styles—at least for a time. Since most transactions are relatively brief, this capacity allows the RI to “get by” in most situations. The ability to “get by” allows the RI to more easily maintain their approach in the face of environmental change. Hence they have the lowest rate of change. The data does not “prove” this theoretical reasoning but the author is at a loss for any other reasonable causal chain.

An informal confirmation of the logic offered above is found in the many leadership studies that have been done (Salton, various). They all find the dominance of the RI style to be characteristic of people in senior leadership positions. This is no accident.

Leaders typically must guide people who use a variety of styles. The higher the level, the more variety will likely be encountered (e.g., more functions, more people, etc.). The flexibility of the RI is well suited to understanding and contributing to these various postures. This gives the RI an edge in rising to leadership—whether formally or informally gained.

Keep in mind that there is no such thing as optimality in social interactions. Adequacy is all that is needed. Thus it is not necessary that a leader “fully” appreciate the contribution of other styles. It is only necessary that the leader understand “well enough” to provide reasonably correct directional guidance.


RESEARCH IMPLICATIONS
The findings of this research offer insight of immediate value to the practitioner. For example, this paper has identified the kinds of change that will be easy or hard. This can be useful in establishing job progressions that are likely to yield success for both the individual and the organization.

The study has also alerted the practitioner to the fact that dominant styles are “sticky.” Most do not substantially change even over long periods of time. What this means is that trying to change a person's “I Opt” strategy will always be a difficult undertaking. It can be done but it is not cheap. Clients looking for “quick fixes” to fundamental strategic postures are likely to be disappointed. This study provides the practitioner with hard data with which to make this case.

The study has also demonstrated that changing a “style” is easier than changing observable behavior. A style change only requires a change in rank order. The study has shown that, on a global basis, this can happen with a difference of only one response (see Graphics 4 and 5). Even when this happens the change in the entire repertoire of behaviors is small (see Graphic 6). Practitioners should expect to continue to work with people and organizations even after “style” based measurements tell them that the job is done. It probably is not.

This paper further identified the degree of difficulty that can be expected in any change. Table 3 showed that the easiest change is redirecting output. Redirecting input is next. This knowledge can be very useful in areas such as leadership training.

The study also alerts the practitioner to some more subtle factors. Both the leadership edge and relative resistance of the RI to change probably rests on its flexibility. Observed changes may be simply temporary accommodations. Knowing this can cause the practitioner to make their own accommodations in their development initiatives.

“I Opt” represents a quantum leap beyond the capacities of traditional tools. It not only anticipates change but can identify how much actually occurs in the “real world.” It goes on to explain what changes, why it changes, how much it changes and the likelihood that it will change in a particular direction. In other words, “I Opt” theory accurately predicts what is going to happen in the real world as well as explaining what has happened.

The combination of a quantum leap in scope, accuracy of predictive capability and solid, testable reasoning creates a much more powerful tool than previously available. Both practitioners and theoreticians can deploy this tested and validated tool immediately to address issues being confronted in today’s world. The result is likely to benefit both the practitioner/theoretician and the organization to which it is applied.


TRADEMARKS
® IOPT is a registered trademark of Professional Communications Inc.
® MBTI, Myers-Briggs Type Indicator, and Myers-Briggs are registered trademarks of the MBTI Trust, Inc.
® FIRO-B is a registered trademark of CPP, Inc.
® DiSC is a registered trademark of Inscape Publishing, Inc.
® 16PF is a registered trademark of the Institute for Personality and Ability Testing, Inc.


BIBLIOGRAPHY
Farber, Henry F., 2008. Employment Insecurity: The Decline in Worker-Firm Attachment in the United States. Princeton University: CEPS Working Paper No. 171, June 2008, page 6. Retrieved from http://www.princeton.edu/ceps/workingpapers/172farber.pdf on February 2, 2011.

Inscape Publishing (2005). DiSC Validation Research Report. Inscape Publishing, Minneapolis, MN. Retrieved from http://www.discprofile.com/downloads/DISC/ResearchDiSC_ValidationResearchReport.pdf, January 4, 2011.

Salton, Gary (2011), IOpt Style Reliability Stress Test, http://garysalton.blogspot.com.

Salton, Gary (Various):
  • Salton, Gary (November 2010) Sales Management and Performance. http://garysalton.blogspot.com/2010/11/sales-management-and-performance.html
  • Salton, Gary (October 2010) City Management http://garysalton.blogspot.com/2010/10/city-versus-corporate-executive.html
  • Salton, Gary (September 2009). The Nursing Staircase and Managerial Gap http://garysalton.blogspot.com/2009/09/nursing-staircase-and-managerial-gap.html
  • Salton, Gary (September 2008). Hierarchy Influence on Team Leadership. http://garysalton.blogspot.com/2008/09/hierarchy-influence-on-team-leadership.html
  • Salton, Gary (August 2008). Engineering Leadership. http://garysalton.blogspot.com/2008/08/engineering-leadership.html
  • Salton, Gary (June 2008). The Pastor as a Leader. http://garysalton.blogspot.com/2008/06/pastor-as-leader.html
  • Salton, Gary (May 2008). Fitting the Leader to the Matrix http://garysalton.blogspot.com/2008_05_01_archive.html
  • Salton, Gary (October 2007). Leadership, Diversity and the Goldilocks Zone http://garysalton.blogspot.com/2008_01_01_archive.html
  • Salton, Gary (October 2007). How Styles Affect Promotion Potential http://garysalton.blogspot.com/2007_10_01_archive.html
  • Salton, Gary (November 2006). Gender in the Executive Suite http://garysalton.blogspot.com/2006_11_01_archive.html
  • Salton, Gary (October 2006). CEO Insights http://garysalton.blogspot.com/2006_10_01_archive.html

Harvey, R J (1996). Reliability and Validity, in MBTI Applications A.L. Hammer, Editor. Consulting Psychologists Press: Palo Alto, CA. p. 5- 29.

Wednesday, March 09, 2011

"I Opt" Pattern Reliability Stress Test

By: Gary J. Salton, Ph.D.
Chief: Research & Development
Professional Communications, Inc.


SUMMARY
This study reports on the reliability of “I Opt” patterns—predictable behavioral sequences. The study was conducted under “stress” conditions. The results are the “worst case” reliability that might reasonably be expected. The results of the evidence-based research show that “I Opt” is able to provide highly reliable pattern results even under adverse conditions.

A summary of this research is available on YouTube. Click the icon on the right to launch the video summary.


STUDY DESIGN
This study uses the same data as did the companion style research (for more detail see Salton, 2011). That study used a classic test-retest design. People retested using the same survey spaced minutes to days apart. The design was a stress test. People participating were trying to change their profile. This created a bias against style reliability. The result was a “worst case” level of reliability. The methodology outlined there applies equally here.

Style reliability considered only one variable—a strategic style. That research found that “I Opt” outperformed traditional instruments (e.g., Myers-Briggs®, DiSC®, 16PF®, FIRO-B®, etc.) by a wide margin. This study considers the reliability of styles working in unison. In other words, it considers the reliability of sequences of style behavior being applied to a particular life issue.


STRATEGIC PATTERNS
Global style level characterizations—single styles—are the limit of traditional instruments. They can talk about the use of different styles but cannot give a probabilistic estimate of the likelihood of their use. The reason is that they rely on rank order measurement. You cannot do arithmetic on rank order measures. You cannot divide “sometimes” by “often.” It does not matter if you assign numerals to them. If you try to do arithmetic you will still be dividing “sometimes” by “often.”

If you cannot do arithmetic you cannot calculate style probabilities. If you cannot calculate a style probability you cannot combine styles into any meaningful measure of joint probability. Joint probability is relevant when you want to figure out the likelihood that two or more particular styles will be applied to a particular issue. Since most of life is filled with these interdependent elections this represents a serious limit to traditional tools.

For example, you may launch a problem solving effort with a new idea. But you cannot just keep piling on new ideas. You are going to have to shift to analysis, assessment or some form of action sooner or later. If a theory cannot address that shift, it is going to miss most of life. This is not a good position for someone in the business of predicting and guiding human behavior.

“I Opt” is unique. It uses exact measurement. You can divide one “I Opt” score by another and get a meaningful result. This means that the four “I Opt” style axes can be expressed as probabilities. This alone puts “I Opt” in a league of its own. You can now speak with probabilistic certainty in place of vague, general narratives. But that is not the end of it.

The theory (i.e., what causes what and why) that underlies “I Opt” specifies the exact relationship between the styles. What this means is that the quadrants between the style axes have a specific behavioral meaning. Since each quadrant has a specific meaning (given by “I Opt” theory), the probabilities of every behavioral sequence can be calculated and predicted. These are called “strategic patterns.” They are called “strategic patterns” because they define a strategy (i.e., “plan, method, or series of maneuvers”, Random House, 2010) used to address life situations.

Graphic 1 illustrates the concept of a pattern. It is the actual behavioral profile of the average of the 171 re-testers used in this study. It shows that this average person favors a “Conservator” pattern (30.7%). This is the combination of the Hypothetical Analyzer (disciplined assessment) and Logical Processor (rigorous execution) styles. This might be characterized as a “Let's think through our options and then methodically execute the option we choose” strategy.

Graphic 1
"I OPT" RE-TESTER PATTERNS
(n = 171)


Not every situation will yield to a Conservator strategy. When this happens our average person is likely to revert to a Perfector pattern (26.9%). This is a behavior sequence characterized as “Let's come up with some new ideas and then think them through.” If that does not work, the next likely option is the Performer pattern (22.6%). This is a “Let's get it done—right if we can, anyway if we have to” strategy.
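The arithmetic behind these pattern percentages is not published in this paper, but the text tells us each pattern combines two adjacent styles. The sketch below assumes (and it is only an assumption) that a pattern's weight is proportional to the product of its two adjacent style scores, normalized to 100%. The pairings follow the descriptions above (Conservator = HA + LP; Perfector = RI + HA; Performer = RS + LP; Changer = RS + RI, the last inferred):

```python
# Hypothetical sketch of pattern probabilities. "I Opt"'s actual
# formula is not given here; this assumes each pattern's weight is
# proportional to the product of its two adjacent style scores.

styles = {"RS": 24.0, "RI": 22.0, "HA": 28.0, "LP": 26.0}  # made-up scores

# Adjacent style pairs defining the four quadrants (pairings partly
# inferred from the text; treat as assumptions):
patterns = {
    "Conservator": ("HA", "LP"),
    "Perfector":   ("RI", "HA"),
    "Performer":   ("RS", "LP"),
    "Changer":     ("RS", "RI"),
}

raw = {name: styles[a] * styles[b] for name, (a, b) in patterns.items()}
total = sum(raw.values())
for name, weight in raw.items():
    print(f"{name}: {100 * weight / total:.1f}%")  # sums to 100%
```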

The reason we were able to predict the likely outcome for the average re-tester in our sample is that “I Opt” could use arithmetic and had a theory to guide its application. This kind of insight is outside of the capacity of the traditional tools. If you cannot use arithmetic, you cannot calculate the probabilities we used to decipher the likely behavior of our composite re-testers.

This brief background demonstrates that “I Opt” stands as the lone member of a new class of tools. What this means is that there is nothing to which “I Opt” can be reasonably compared. Since there is no point of comparison, the reliability of the pattern measurement must rely on absolute measures. If these measures meet the needs of the issues being addressed “I Opt” pattern reliability can be accepted.


PROFILE RELIABILITY
An “I Opt” profile is a representation of all of the styles and patterns considered simultaneously. It describes the entire behavioral map that a person will use to navigate all of life's situations. Any shift in style strength will redistribute the likelihood of all combinations of style use. In other words, the entire profile will also shift. This means we can measure global changes in behavior by comparing one profile (e.g., test) with another (e.g., retest).

Graphic 2 compares the original test with the retest profile for all 171 retest surveys. It shows a minor net change in the overall profile of the retest group. Some individual test-retest surveys did change. However, there was no overall directional change. Individual changes tended to offset each other. As we will find out later, this is no accident.

Graphic 2
COMPARISON OF ORIGINAL AND RETEST PROFILES
(n = 171)



The “I Opt” profile reliability of groups is stable. Policy decisions involving the prediction of group patterns can be relied upon. “I Opt” technology will provide a firm foundation for large scale initiatives. Proposals concerning mergers, acquisitions, policy initiatives and the like will be grounded on a firm factual base.


INDIVIDUAL PATTERN RELIABILITY
Stable profiles can be composed of offsetting individual changes. This is of little comfort for the practitioner working with individuals and smaller groups. For them individual pattern variability is of prime importance.

Several methods can be used to assess individual variability. One is to focus on the dominant pattern. This method treats a pattern as a category. It ignores the degree of change. The only measure is whether the rank order of the dominant pattern has changed. It does not matter if the dominant pattern exceeds the secondary by 1% or 50%. This method is inexact but is easily understood, and that quality has much merit in field settings.

Graphic 3 shows the stability of strategic patterns (i.e., behavioral sequences) in the stress test sample. A majority of retests did not change dominant patterns. Fully 66% remained stable even under stress conditions.

Graphic 3
DOMINANT PATTERN DISTRIBUTION AMONG 171 RE-TESTERS


It is worth examining just what the “worst case conditions” of the stress test entail. People often retested within minutes of their original test. Many did so with the conscious intention to change the results. They knew the results of the original test and probably remembered their initial responses. This made “engineering” a change easy. Graphic 4 illustrates one such case.

Graphic 4
INDIVIDUAL CHANGE IN DOMINANT PATTERN
RETEST WITHIN 2.3 HOURS
(139 Minutes)


Graphic 4 clearly shows a manipulative result. A profile change of this character requires a person to choose multiple responses that directly contradict their original position. A genuine change in strategic posture does not happen in the 2 hours between test and retest. Other similar results happened in as little as 5 minutes. And this is not all that was confronted in the stress test.

Many retests involved only slight changes. While small, these were enough to flip the dominant style from one category to another. Graphic 5 illustrates a change in dominant pattern resulting from an individual answering 1 statement differently. It was enough to flip this person from a “Perfector” to a “Changer” pattern.

Graphic 5
INDIVIDUAL CHANGE IN DOMINANT PATTERN
RETEST WITHIN ONE DAY


The kind of change illustrated by Graphic 5 probably has no practical consequence in any kind of organizational diagnosis. In total, 16 of the 58 people who changed their dominant pattern still maintained an overall profile overlap of 66% or more between their test and retest. This profile consistency means that their responses just “wiggled.” The relatively minor nature of this high overlap condition is visually illustrated in Graphic 6.

Graphic 6
INDIVIDUAL RE-TESTER EXAMPLE
TEST-RETEST PROFILE OVERLAP OF ~70%


Graphic 7 shows what happens if both the obviously manipulative and the minor changes are discounted from the study. Removing the 23 obviously distorted surveys causes the dominant pattern repeatability rate to jump from 66% to 76%. And there is still stress left among those that remain.


Graphic 7
“I OPT” PATTERN RELIABILITY
WITH 23 UNREPRESENTATIVE SURVEYS REMOVED



Even without a specific comparable to act as a standard, the “I Opt” pattern reliability rate clearly exceeds the generally accepted norms of the field of organizational research. This judgment can be reasonably inferred by comparing pattern reliability to the style reliability found in the companion research (Salton, 2011). This is done in Table 1.


Table 1
STYLE VERSUS PATTERN RELIABILITY
As expected, “I Opt” pattern reliability is somewhat less than “I Opt” style reliability. This is because patterns can be affected by more things. The sum of all patterns must add up to 100%. A change in any style will cause its associated pattern percentage to increase or decrease. This means that all of the remaining patterns must shift if they are to continue to add up to 100%. Since more variables can affect patterns, they are inherently less repeatable than are styles.

Overall, “I Opt” patterns compare very favorably with the reliability measures posted by other generally accepted tools in the field of organizational research. And this is under stressed conditions. In actual practice much higher levels of reliability can be reasonably expected. In sum, “I Opt” pattern reliability meets or exceeds any standard of acceptability.


DIRECTION OF CHANGE
The design of the stress test offers an opportunity to examine the nature of the pattern changes that occurred. Table 2 shows both the patterns that are being moved from (i.e., test) and those that are being moved to (i.e., retest).

Table 2
DIRECTION OF PATTERN CHANGE
(n = 58 re-testers who changed style)

The Conservator pattern stands out at a 40% change rate (far right column). Other patterns each account for roughly 20% of the 58 pattern changes. This suggests that Conservators are the most motivated to attempt to change their reported “I Opt” pattern.

The natural design of the stress test did not allow for interviewing participants. It may be that Conservators were trying to better align themselves with a more socially attractive pattern. Or their inherently skeptical posture may have created more doubt as to the accuracy of the assessment. But whatever the reason, it is clear that they were certainly trying harder.

The “changed to” percent (bottom row) is notable. It shows that there is no overall direction to the change. This is due to the structure of the “I Opt” survey. The survey gives no clue as to the diagnostic consequences of a particular choice. This is evidenced by the random distribution of “changed to” results. In a random situation each pattern has about an equal probability of occurring. That is exactly what the bottom row shows.
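A uniformity check of the kind implied here is simple to run. The sketch below uses hypothetical “changed to” counts for the 58 changers (the actual counts appear only in Table 2):

```python
# Sketch: test whether "changed to" destinations are uniformly
# distributed across the four patterns. The counts are hypothetical;
# the paper reports only that the bottom row looked random.
from scipy import stats

changed_to = [16, 13, 15, 14]  # hypothetical destination counts, n = 58
chi2, p = stats.chisquare(changed_to)  # default expectation: uniform
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # large p -> no directional drift
```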

Pattern recognition is a typical strategy for attempting to engineer survey outcomes. Experience has shown that test takers will initially examine a survey in search of patterns. If highly motivated re-testers with full knowledge of initial results (the LPs in Table 1) cannot find a pattern, it is extremely unlikely that someone taking an initial survey will find one.

A reasonable conclusion to this section is that professionals using “I Opt” technology can trust the initial diagnosis. People are unlikely to be able to “figure out” the survey. The result is that they tend to give an honest assessment of their status. That is what the original Validity Study (Soltysik, 2000) found a decade ago and that is what this study has confirmed.


TIMING EFFECTS
The structure of the survey prevents the respondent from predicting the direction of any change. However an unpredictable change can be caused just by answering differently. To answer differently the initial response has to be remembered. A short time between test and retest improves the odds that the original responses will be remembered.

If people choosing to retest were honestly reflecting a different state, we should expect to find no time-dependent difference in outcomes. If people were attempting to manipulate the results, there should be a significant difference by time. People retaking the test sooner would remember their initial responses better than those more distant from it. Table 3 shows the results of this test.

Table 3
EFFECT OF RETEST TIMING ON RETEST RESULTS


The fact that surveys taken days apart differed significantly from those taken hours apart (p < .05) suggests a concerted effort to change results. The shorter the time between surveys, the more divergent were the before and after profiles. This was not an honest re-evaluative effort. It is evidence of conscious manipulation. The “stress” embedded in the stress test is real. The re-testers were really trying to change the results.

The implications of this finding are material. Even when trying to change the results a majority of the re-testers were unable to do so. This suggests that “I Opt” is able to deliver accurate results even under the most adverse of circumstances. Scholars and professionals can trust an “I Opt” diagnosis. Such trust is well placed.


SUMMARY
This pattern research has confirmed the unique standing of “I Opt.” Being able to probabilistically predict sequences of behavior is alone enough to distinguish it from traditional tools. This judgment is reinforced by the fact that the dominant pattern reliability was able to outperform the less challenging style reliability posted by traditional tools. And it did this under adverse test conditions. Real world experience is likely to be much better.

The analysis of the direction of change among retesters demonstrated the integrity of “I Opt.” The “I Opt” survey offers no clues on the likely result of a particular choice. This means that the only viable strategy available to the test taker is to follow some semblance of their true posture on choices offered. This means that the scholar or professional administering the “I Opt” survey can trust the results.

The study also demonstrated that re-testers were really trying to bias the outcome. The fact that so few of them succeeded is further evidence of the integrity of the “I Opt” instrumentation. Scholars and professionals can rely on “I Opt” diagnosis even in situations where the respondents may be less than fully cooperative.

Overall, both stress tests (i.e., style and pattern) have confirmed and extended the results of the original validity study of over a decade ago. “I Opt” offers an evidence-based opportunity to both broaden and deepen our understanding of behavior in real world situations. With that will come an improvement in the human condition.


TRADEMARKS
® IOPT is a registered trademark of Professional Communications Inc.
® MBTI, Myers-Briggs Type Indicator, and Myers-Briggs are registered trademarks of the MBTI Trust, Inc.
® FIRO-B is a registered trademark of CPP, Inc.
® DiSC is a registered trademark of Inscape Publishing, Inc.
® 16PF is a registered trademark of the Institute for Personality and Ability Testing, Inc.


BIBLIOGRAPHY
Random House Dictionary, Random House, Inc., 2010.
Salton, Gary (2011), “IOpt Style Reliability Stress Test”, http://garysalton.blogspot.com.
Soltysik, Robert (2000), Validation of Organizational Engineering: Instrumentation and Methodology, Amherst: HRD Press.

Tuesday, March 08, 2011

"I Opt" Style Reliability Stress Test

By: Gary J. Salton, Ph.D.
Chief: Research & Development
Professional Communications, Inc.


SUMMARY
A natural experiment offered an opportunity to test “I Opt” reliability. It used a “worst case possible” design. The experiment was biased AGAINST “I Opt” reliability. The outcome was compared to industry standards. The standards accepted were the most favorable reported by those with a vested interest in the traditional tools. The worst possible outcomes of “I Opt” were compared to the best reported reliability results of alternative tools. This created a natural “stress” test.

The study found that the worst “I Opt” results exceeded the best results of alternatives. These results give the practitioner and scholar confidence that “I Opt” is a tool that can be relied upon even in difficult field situations. You can access a video summarizing the research by clicking the icon to the right.


A NATURAL EXPERIMENT
A program to increase the visibility of “I Opt”® technology created a natural experiment. Random people were offered a free Advanced Leader, Career or Emotional Impact Management Report. They could take the “I Opt Survey” on-line without user codes or passwords. The use was anonymous (i.e., fake names were an option). The report was automatically generated and sent to any email address designated.

An accompanying email invited people to use it as they wished. They could retake the survey without penalty. Table 1 outlines common reasons for retest.

Table 1
POTENTIAL REASONS FOR RETEST


The reasons cited in Table 1 involve TRYING to change the original outcome. Most retest protocols seek to eliminate this possibility. They try to ensure that motivations and conditions are constant between test and retest. This creates a bias toward consistency (i.e., reliability). This experiment does just the opposite. It burdens “I Opt” with a bias toward inconsistency (i.e., unreliability).

Thus the structure of the experiment acts as a stress test. It measures “I Opt” reliability under “worst case” conditions. Passing this stress test offers strong evidence of “I Opt” technology's inherent reliability.


A COMPARATIVE FRAMEWORK
Any test requires a standard of judgment. The natural standard would be the results of the reliability studies of comparable tests. The most stringent would be results published by those with a vested interest in the success of these tests. Accepting this standard takes the “stress” of the stress test up another level.

The Center for the Application of Psychological Type (2010) reports that using MBTI® “on retest, people come out with three to four type preferences the same 75-90% of the time.” That means that the best that can be expected under controlled conditions is 90%. In addition, this result applies to only “three to four” of the four preferences that make up a type (e.g., ESTJ, INFP, etc.). That means that an ESTJ could be retested as an ESTP and still qualify as a successful retake—3 of the 4 preferences stayed the same. A practitioner who had to explain to a client why their dominant style changed might view this as less than a “success.”

Consulting Psychologists Press, the publisher of MBTI®, does not cite reliability data on its website. However, a book published by the organization does cite results (Harvey, 1996). About 50% of people tested within nine months remain the same overall type, and 36% remain the same type after more than nine months (Wikipedia, 2011). Averaging the MBTI results gives an overall standard of about 63% (i.e., the average of 75%, 90%, 50% and 36%).

Internet research on DiSC® provided no simply stated evidence on test-retest reliability. However, Inscape Publishing (the publisher of DiSC) does provide a table of correlation coefficients (Inscape Publishing, 2005). That table reports correlation coefficients of between .71 and .89, depending on the time between retests. The timing was ~1 week (n=142), 5-7 months (n=174) and 10-14 months (n=138).

A correlation coefficient measures the strength of association between two sets of scores, not the level of the scores themselves. To make it meaningful for this purpose it has to be converted. Squaring the correlation coefficient (e.g., .89 x .89) does this. The result is called the coefficient of determination, or r2 (“r squared”—see Biddle, p. 14). Applying this to the highest DiSC correlation yields an r2 of 79%.

The same method can be applied to the lowest DiSC correlation reported. The corresponding r2 is 50% (i.e., .71 x .71). Averaging all of the correlation coefficients reported by Inscape Publishing (12 in total) yields an overall correlation coefficient of .763. Squaring that gives a meaningful coefficient of determination of about 58%. Overall, DiSC can be expected to retest differently about 42% of the time (100% - 58%).
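The conversions above are simple arithmetic, reproduced in the sketch below:

```python
# Convert test-retest correlation coefficients to coefficients of
# determination (r^2) and square the average, as done in the text.
highest, lowest, overall = 0.89, 0.71, 0.763  # from Inscape's table

print(f"{highest**2:.0%}")  # ~79%
print(f"{lowest**2:.0%}")   # ~50%
print(f"{overall**2:.0%}")  # ~58% -> ~42% expected retest difference
```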

FIRO-B® is published by CPP, Inc. On their website (CPP, 2009) they report test-retest reliability as “ranging from .71 to .85—for three different samples as reported in the FIRO-B® Technical Guide (Hammer & Schnell, 2000).” Using r2, this translates into an expected test-retest success rate of between 50% and 72%. Averaging these numbers gives an overall retest consistency of 61%.

The publisher of the Sixteen Personality Factor questionnaire or 16PF® (IPAT, Inc.) does not cite reliability statistics on its site. However, Cattell and Mead in The SAGE Handbook of Personality Theory and Assessment do quote statistics. These were taken from the 16PF Fifth Edition Technical Manual, published by the Institute for Personality and Ability Testing—the predecessor of IPAT. They report a test-retest reliability of .8 over two weeks and .7 over a two-month interval. This translates to r2 percentages of 64% and 49%, for an average r2 of about 57%.

There are many more personality tests of this character. However, Table 2 shows a developing pattern.

Table 2
OVERALL TEST-RETEST RELIABILITY ESTIMATES
REPEATABILITY PERCENT

All of the instruments appear to approximate 60% test-retest repeatability. Since these results were published by organizations with a vested interest, it is a very high standard. It is likely that these organizations have published the most favorable rates available.


DATA SOURCE
Participants accessed the free reports via an internet connection. The internet server used in this experiment recorded the timing, score and email address of users. Table 3 outlines the origins of the sample from the server data.

Table 3
SAMPLE CHARACTERISTICS

The diversity of origins suggests that this is a fair sample of the universe of potential “I Opt” users. It is unlikely that there is a selection bias that might contaminate the results (e.g., all college students, all members of a single firm, etc.).


RETEST PROFILE
Participants could choose to rerun a report at any time. The service was entirely automatic. People could retake the survey without any worry about having to defend the retest to an administrator. The responses could be anonymous, giving a further level of comfort. Users were effectively unconstrained.

Graphic 1
RETEST COMPARED TO TOTAL SURVEYS



Graphic 1 shows the usage. A total of 6,298 reports were run. There were 171 retests—a 2.7% retest rate. This is a very low rate given the multiple possible reasons for retest (see Table 1), the ease of access and the penalty-free nature of a retake.

This result confirms the high “I Opt” face validity found in the original validity study (Soltysik, 2000). Face validity is an “unscientific” measure of validity. However, reliability is not a measure of validity. Reliability is a measure of consistency. It is meant to provide assurance that you are not using a “rubber ruler.”

Most people did not choose to retest even though they could do so with ease. This suggests that they found the results consistent with their internal estimates. In other words, the low retest rate can be viewed as evidence of the reliability judgment of the participants as measured by their own internal standards. At a 2.7% retest rate, that implied reliability is very high.

The diversity of data sources (Table 3) indicates that there is little likelihood of an external selection bias (e.g., all college students). However, a question could arise as to whether a particular “I Opt” style is more inclined to take a retest. The answer is no.

Graphic 2 shows that the profiles of re-testers (n=171) are a mirror image of those of the general test takers (n=6,298). Statistical tests confirm that there is no significant difference at the .05 level in any “I Opt” dimension. The motivation for retesting does not reside in the “I Opt” style. Thus the chance of auto-correlation confounding the results is minimized.


Graphic 2
STRATEGIC STYLE DISTRIBUTION
RE-TESTER INITIAL SCORES versus ALL PARTICIPANT SCORES
(n = 171 versus n=6,298)


The sample size is large and diverse and is a fair representation of people likely to use “I Opt.” The number of retests (n=171) is enough to give meaningful insights. The “mirror image” profiles between all testers and re-testers mean results are unlikely to be confounded by this dimension of auto-correlation. Finally, self-selected retesting means that all of the possible motives (Table 1) can operate thus maximizing the “stress” in the stress test. The study rests on a firm foundation.


RETEST TIMING
The time between test and retest is relevant to the stress test. Short time periods maximize the chance of producing an inconsistent result. Over short time periods people are likely to remember their responses to the original survey. If the motive is to change the result (see Table 1), a short retake cycle makes this much easier. Table 4 shows the retest timing of the experiment.

Table 4
TIMING OF RETEST

Fully two-thirds of people retested almost immediately. This is a strong indication that they wanted to explore variation in the results. This reduces the likelihood of consistency (i.e., reliability). This short-cycle retake further increases the “stress” of the stress test.


RETEST STYLE RESULTS
A person’s dominant style is the practitioners’ most important measure. It is the one that the client is likely to see as characterizing their behavior. The last thing a practitioner wants is to argue with a client over a discrepant result.

Other tools (e.g., MBTI, DiSC, 16PF, etc.) do not specify dominant style repeatability rates. Rather, they tend to mix all of the styles (i.e., primary, secondary, peripheral, etc.). This strategy implies that all have equal importance. If the dominant style had fared better it would likely have been celebrated. It was not. The “I Opt” stress test does not avoid dominant style visibility, as is shown in Graphic 3.


Graphic 3
CHANGED VERSUS UNCHANGED
DOMINANT STYLES ON RETEST


In spite of a strong bias against consistency, fully 74% of the “I Opt” retest surveys yielded exactly the same dominant style as obtained in the initial test. This substantially exceeds the implied ~60% repeatability of the other “non-stressed” tools.



Graphic 4
CHANGED VERSUS UNCHANGED
“DETERMINED” vs. REGULAR RE-TESTERS


Graphic 4 shows a deeper examination of the 26% that changed styles. It further improves the outcome. Eighteen of the 45 people who changed dominant style took the survey 3 or more times (for a combined total of 48 surveys). Ultimately, 14 of these 18 “determined” people (77%) finally managed to change their primary style. If these 14 people were removed on the basis of gross distortion the repeatability rate would jump from 74% to 81%.

Table 5
OVERALL TEST-RETEST RELIABILITY ESTIMATES
REPEATABILITY PERCENT

Table 5 shows that whether considered in its raw (74%) or refined (81%) form, “I Opt” clearly passes the stress test. It exceeds the ~60% average repeatability standard. It accomplishes this even with an experimental design heavily biased against it.


SUMMARY
The natural experiment arising from an “I Opt” visibility program provides strong evidence of the inherent reliability of “I Opt” technology. This study confirms and extends the similar findings of the original Validity Study (Soltysik, 2000) of over a decade ago.

The result was that “I Opt” substantially exceeded the average reported reliability of traditional tools used in the field under heavily “stressed” conditions. It is reasonable to judge “I Opt” technology to be the most reliable tool available in the field. If there is an equal or superior, it has yet to make itself visible.


TRADEMARKS
® IOPT is a registered trademark of Professional Communications Inc.
® MBTI, Myers-Briggs Type Indicator, and Myers-Briggs are registered trademarks of the MBTI Trust, Inc.
® FIRO-B is a registered trademark of CPP, Inc.
® DiSC is a registered trademark of Inscape Publishing, Inc.
® 16PF is a registered trademark of the Institute for Personality and Ability Testing, Inc.


BIBLIOGRAPHY
Biddle, Daniel (Publication date not provided). Retrieved from http://www.biddle.com/documents/bcg_comp_chapter2.pdf, January 1, 2011.

Cattell, Heather and Mead, Alan (2000).“The Sixteen Personality Factor Questionnaire (16PF)” in The SAGE Handbook of Personality Theory and Assessment: Personality Theories and Models (Volume 1). Retrieved from http://www.gl.iit.edu/reserves/docs/psy504f.pdf, January 1, 2011.

Center for the Application of Psychological Type, 2010. “The Reliability and Validity of the Myers-Briggs Type Indicator® Instrument” Retrieved from http://www.capt.org/mbti-assessment/reliability-validity.htm, January 1, 2011.

Conn, S.R. and Rieke, M.L. (1994) The 16PF Fifth Edition Technical Manual. Champaign, IL: Institute for Personality and Ability Testing.

CPP (2009). Retrieved from https://www.cpp.com/products/firo-b/firob_info.aspx, January 1, 2011

Harvey, R J (1996). Reliability and Validity, in MBTI Applications A.L. Hammer, Editor. Consulting Psychologists Press: Palo Alto, CA. p. 5- 29.

Inscape Publishing (2005). DiSC Validation Research Report. Inscape Publishing, Minneapolis, MN. Retrieved from http://www.discprofile.com/downloads/DISC/ResearchDiSC_ValidationResearchReport.pdf January 4, 2011.

Schnell, E. R., & Hammer, A. (1993). Introduction to the FIRO-B in organizations. Palo Alto, CA: Consulting Psychologists Press, Inc.

Wikipedia (2011). “Myers-Briggs Type Indicator.” Retrieved from http://en.wikipedia.org/wiki/Myers-Briggs_Type_Indicator#cite_note-39, January 4, 2011.

Soltysik, Robert (2000), Validation of Organizational Engineering: Instrumentation and Methodology, Amherst: HRD Press.