Chief: Research & Development
Professional Communications, Inc.
SUMMARY
This study reports on the reliability of the "I Opt" patterns, which are predictable behavioral sequences. The study was conducted under "stress" conditions, so the results represent the "worst case" reliability that might reasonably be expected. Even under these adverse conditions, the evidence-based research shows that "I Opt" provides highly reliable pattern results.
STUDY DESIGN
This study uses the same data as the companion style research (for more detail see Salton, 2011). That study used a classic test-retest design: people retook the same survey at intervals ranging from minutes to days. The design was a stress test. The participants were trying to change their profiles, which created a bias against style reliability. The result was a "worst case" level of reliability. The methodology outlined there applies equally here.
Style reliability considered only one variable—a strategic style. That research found that “I Opt” outperformed traditional instruments (e.g., Myers-Briggs®, DiSC®, 16PF®, FIRO-B®, etc.) by a wide margin. This study considers the reliability of styles working in unison. In other words, it considers the reliability of sequences of style behavior being applied to a particular life issue.
STRATEGIC PATTERNS
Global style level characterizations—single styles—are the limit of traditional instruments. They can talk about the use of different styles but cannot give a probabilistic estimate of the likelihood of their use. The reason is that they rely on rank order measurement. You cannot do arithmetic on rank order measures. You cannot divide “sometimes” by “often.” It does not matter if you assign numerals to them. If you try to do arithmetic you will still be dividing “sometimes” by “often.”
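To make the measurement point concrete, here is a minimal Python illustration (not drawn from the study itself). It shows why ratios computed from rank-order codes are artifacts of the arbitrary coding, while ratios of exact-measurement scores are stable:

```python
# Illustrative only: arithmetic on rank-order codes is meaningless
# because the numerals are arbitrary. Any order-preserving recoding is
# equally "correct," so the ratio changes with the coding.
ordinal_v1 = {"never": 1, "sometimes": 2, "often": 3}
ordinal_v2 = {"never": 1, "sometimes": 5, "often": 9}  # same order, new codes

print(ordinal_v1["sometimes"] / ordinal_v1["often"])   # 0.667
print(ordinal_v2["sometimes"] / ordinal_v2["often"])   # 0.556 -- the "answer" moved

# Exact (ratio-scale) scores survive rescaling, so division is meaningful.
score_a, score_b = 12.0, 30.0
print(score_a / score_b)                   # 0.4
print((score_a * 2.5) / (score_b * 2.5))   # still 0.4
```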
If you cannot do arithmetic you cannot calculate style probabilities. If you cannot calculate a style probability you cannot combine styles into any meaningful measure of joint probability. Joint probability is relevant when you want to figure out the likelihood that two or more particular styles will be applied to a particular issue. Since most of life is filled with these interdependent choices, this represents a serious limitation of traditional tools.
For example, you may launch a problem solving effort with a new idea. But you cannot just keep piling on new ideas. You are going to have to shift to analysis, assessment or some form of action sooner or later. If a theory cannot address that shift, it is going to miss most of life. This is not a good position for someone in the business of predicting and guiding human behavior.
“I Opt” is unique. It uses exact measurement. You can divide one “I Opt” score by another and get a meaningful result. This means that the four “I Opt” style axes can be expressed as probabilities. This alone puts “I Opt” in a league of its own. You can now speak with probabilistic certainty in place of vague, general narratives. But that is not the end of it.
The theory (i.e., what causes what and why) that underlies “I Opt” specifies the exact relationship between the styles. What this means is that the quadrants between the style axes have a specific behavioral meaning. Since each quadrant has a specific meaning (given by “I Opt” theory), the probabilities of every behavioral sequence can be calculated and predicted. These are called “strategic patterns.” They are called “strategic patterns” because they define a strategy (i.e., “plan, method, or series of maneuvers”, Random House, 2010) used to address life situations.
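As an illustration of how such pattern probabilities can be derived, here is a minimal sketch. It is not the proprietary "I Opt" scoring: it assumes hypothetical raw style scores, normalizes them into probabilities, and takes each pattern as the mean of its two adjacent style probabilities. The Conservator pairing is stated in the text below; the other pairings follow the behavioral descriptions given there.

```python
# A minimal sketch of the pattern idea, NOT the proprietary "I Opt"
# scoring. Raw style scores below are hypothetical.
raw_styles = {
    "RS": 18.0,  # Reactive Stimulator (spontaneous action)
    "LP": 27.0,  # Logical Processor (rigorous execution)
    "HA": 34.0,  # Hypothetical Analyzer (disciplined assessment)
    "RI": 21.0,  # Relational Innovator (new ideas)
}

total = sum(raw_styles.values())
style_prob = {k: v / total for k, v in raw_styles.items()}

# Each pattern combines two adjacent styles (the quadrants between the
# style axes). Conservator = HA + LP is stated in the text; the other
# pairings are inferred from the behavioral descriptions.
patterns = {
    "Conservator": ("HA", "LP"),  # think it through, then execute methodically
    "Perfector":   ("RI", "HA"),  # new ideas, then think them through
    "Performer":   ("RS", "LP"),  # get it done
    "Changer":     ("RI", "RS"),  # new ideas, acted on quickly
}

pattern_prob = {
    name: (style_prob[a] + style_prob[b]) / 2 for name, (a, b) in patterns.items()
}

for name, p in sorted(pattern_prob.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} {p:.1%}")
# Pattern probabilities sum to 100% because each style feeds exactly
# two patterns.
print(f"sum = {sum(pattern_prob.values()):.1%}")
```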
Graphic 1 illustrates the concept of a pattern. It is the actual behavioral profile of the average of the 171 re-testers used in this study. It shows that this average person favors a "Conservator" pattern (30.7%). This is the combination of the Hypothetical Analyzer (disciplined assessment) and Logical Processor (rigorous execution) styles. This might be characterized as a "Let's think through our options and then methodically execute the option we choose" strategy.
Graphic 1
"I OPT" RE-TESTER PATTERNS
(n = 171)
"I OPT" RE-TESTER PATTERNS
(n = 171)
Not every situation will yield to a Conservator strategy. When this happens, our average person is likely to revert to a Perfector pattern (26.9%). This is a behavioral sequence characterized as "Let's come up with some new ideas and then think them through." If that does not work, the next likely option is the Performer pattern (22.6%). This is a "Let's get it done—right if we can, anyway if we have to" strategy.
The reason we were able to predict the likely outcomes for the average re-tester in our sample is that "I Opt" could use arithmetic and had a theory to guide its application. This kind of insight is outside the capacity of traditional tools. If you cannot use arithmetic, you cannot calculate the probabilities we used to decipher the likely behavior of our composite re-tester.
This brief background demonstrates that “I Opt” stands as the lone member of a new class of tools. What this means is that there is nothing to which “I Opt” can be reasonably compared. Since there is no point of comparison, the reliability of the pattern measurement must rely on absolute measures. If these measures meet the needs of the issues being addressed “I Opt” pattern reliability can be accepted.
PROFILE RELIABILITY
An "I Opt" profile is a representation of all of the styles and patterns considered simultaneously. It describes the entire behavioral map that a person will use to navigate all of life's situations. Any shift in style strength will redistribute the likelihood of all combinations of style use. In other words, the entire profile will also shift. This means we can measure global changes in behavior by comparing one profile (e.g., test) with another (e.g., retest).
Graphic 2 compares the original test with the retest profile for all 171 retest surveys. It shows a minor net change in the overall profile of the retest group. Some individual test-retest surveys did change. However, there was no overall directional change. Individual changes tended to offset each other. As we will find out later, this is no accident.
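The distinction between net group change and individual change can be sketched as follows, using hypothetical profile data (the study's raw data are not reproduced here). The group's average profile can stay nearly flat while individual profiles move, because individual shifts offset one another:

```python
# Profiles are hypothetical pattern-percentage vectors in the order
# (Changer, Performer, Conservator, Perfector).
test   = [[25, 25, 30, 20], [20, 30, 25, 25], [30, 20, 25, 25]]
retest = [[20, 30, 30, 20], [25, 25, 25, 25], [30, 20, 25, 25]]

def mean_profile(profiles):
    # Average each pattern percentage across the group.
    return [sum(col) / len(profiles) for col in zip(*profiles)]

net_change = [abs(a - b) for a, b in zip(mean_profile(test), mean_profile(retest))]
indiv_change = [
    sum(abs(a - b) for a, b in zip(t, r)) for t, r in zip(test, retest)
]

print("net change per pattern:", net_change)     # near zero: shifts offset
print("individual total shifts:", indiv_change)  # nonzero for the movers
```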
The “I Opt” profile reliability of groups is stable. Policy decisions involving the prediction of group patterns can be relied upon. “I Opt” technology will provide a firm foundation for large scale initiatives. Proposals concerning mergers, acquisitions, policy initiatives and the like will be grounded on a firm factual base.
INDIVIDUAL PATTERN RELIABILITY
Stable profiles can be composed of offsetting individual changes. This is of little comfort for the practitioner working with individuals and smaller groups. For them individual pattern variability is of prime importance.
Several methods can be used to assess individual variability. One is to focus on the dominant pattern. This method treats a pattern as a category and ignores the degree of change. The only measure is whether the rank order of the dominant pattern has changed. It does not matter whether the dominant pattern exceeds the secondary by 1% or 50%. This method is inexact, but it is easily understood, a quality that has much merit in field settings. A sketch of the calculation appears below.
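A minimal sketch of this categorical method, with hypothetical profiles (the actual survey scoring is not reproduced here):

```python
def dominant(profile: dict) -> str:
    # The dominant pattern is simply the highest-percentage pattern.
    return max(profile, key=profile.get)

# Hypothetical (test, retest) profile pairs.
pairs = [
    ({"Conservator": 31, "Perfector": 27, "Performer": 23, "Changer": 19},
     {"Conservator": 29, "Perfector": 28, "Performer": 24, "Changer": 19}),
    ({"Perfector": 35, "Changer": 30, "Conservator": 20, "Performer": 15},
     {"Changer": 34, "Perfector": 31, "Conservator": 20, "Performer": 15}),
]

# Score only whether the dominant pattern kept its rank; degree of
# change is deliberately ignored.
stable = sum(dominant(t) == dominant(r) for t, r in pairs)
print(f"dominant-pattern stability: {stable / len(pairs):.0%}")  # 50% here
```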
Graphic 3 shows the stability of strategic patterns (i.e., behavioral sequences) in the stress test sample. A majority of retests did not change dominant patterns. Fully 66% remained stable even under stress conditions.
Graphic 3
DOMINANT PATTERN DISTRIBUTION AMONG 171 RE-TESTERS
It is worth examining just what the “worst case conditions” of the stress test entail. People often retested within minutes of their original test. Many did so with the conscious intention to change the results. They knew the results of the original test and probably remembered their initial responses. This made “engineering” a change easy. Graphic 4 illustrates one such case.
Graphic 4
INDIVIDUAL CHANGE IN DOMINANT PATTERN
RETEST WITHIN 2.3 HOURS
(139 Minutes)
Graphic 4 clearly shows a manipulated result. A profile change of this character requires a person to choose multiple responses that directly contradict their original position. This kind of change in strategic posture does not happen naturally within the 2.3 hours between test and retest. Other similar results occurred in as little as 5 minutes. And this is not all that was confronted in the stress test.
Many retests involved only slight changes. While small, these were enough to flip the dominant pattern from one category to another. Graphic 5 illustrates a change in dominant pattern resulting from an individual answering a single statement differently. It was enough to flip this person from a "Perfector" to a "Changer" pattern.
Graphic 5
INDIVIDUAL CHANGE IN DOMINANT PATTERN
RETEST WITHIN ONE DAY
The kind of change illustrated by Graphic 5 probably has no practical consequence in any kind of organizational diagnosis. In total, 16 of the 58 people who changed their dominant pattern still maintained an overall profile overlap of 66% or more between their test and retest. This profile consistency means that their responses just “wiggled.” The relatively minor nature of this high overlap condition is visually illustrated in Graphic 6.
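The profile overlap measure can be sketched as follows. The exact "I Opt" overlap formula is not given in the text; this version assumes overlap is the shared area of two pattern-percentage profiles (the sum of the element-wise minima):

```python
def profile_overlap(p1: dict, p2: dict) -> float:
    # Shared area of two pattern-percentage profiles: for each pattern,
    # count only the percentage both profiles have in common.
    return sum(min(p1[k], p2[k]) for k in p1)

# Hypothetical test/retest profiles that merely "wiggled."
test   = {"Conservator": 31, "Perfector": 27, "Performer": 23, "Changer": 19}
retest = {"Conservator": 27, "Perfector": 30, "Performer": 24, "Changer": 19}

print(f"overlap: {profile_overlap(test, retest):.0f}%")  # 96% -- a "wiggle"
```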
Graphic 7 shows what happens when both the obviously manipulative and the minor changes are discounted from the study. Removing the 23 obviously distorted surveys causes the dominant pattern repeatability rate to jump from 66% to 76%. And there is still stress left among those that remain.
Graphic 7
“I OPT” PATTERN RELIABILITY
WITH 23 UNREPRESENTATIVE SURVEYS REMOVED
Even without a specific comparable to act as a standard, the “I Opt” pattern reliability rate clearly exceeds the generally accepted norms of the field of organizational research. This judgment can be reasonably inferred by comparing pattern reliability to the style reliability found in the companion research (Salton, 2011). This is done in Table 1.
As expected, “I Opt” pattern reliability is somewhat less than “I Opt” style reliability. This is because patterns can be affected by more things. The sum of all patterns must add up to 100%. A change in any style will cause its associated pattern percentage to increase or decrease. This means that all of the remaining patterns must shift if they are to continue to add up to 100%. Since more variables can affect patterns, they are inherently less repeatable than are styles.
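A worked illustration of this 100% constraint, using the same hypothetical pattern formula as in the earlier sketch, shows how raising a single raw style score forces every pattern percentage to move:

```python
def pattern_probs(raw):
    # Hypothetical formula from the earlier sketch: normalize styles,
    # then take each pattern as the mean of its two adjacent styles.
    total = sum(raw.values())
    p = {k: v / total for k, v in raw.items()}
    pairs = {"Conservator": ("HA", "LP"), "Perfector": ("RI", "HA"),
             "Performer": ("RS", "LP"), "Changer": ("RI", "RS")}
    return {name: (p[a] + p[b]) / 2 for name, (a, b) in pairs.items()}

before = pattern_probs({"RS": 18, "LP": 27, "HA": 34, "RI": 21})
after  = pattern_probs({"RS": 28, "LP": 27, "HA": 34, "RI": 21})  # only RS raised

for name in before:
    print(f"{name:12s} {before[name]:.1%} -> {after[name]:.1%}")
# All four percentages shift, yet each set still sums to 100%.
```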
Overall, "I Opt" patterns compare very favorably with the reliability measures posted by other generally accepted tools in the field of organizational research. And this is under stress conditions. In actual practice, much higher levels of reliability can reasonably be expected. In sum, "I Opt" pattern reliability meets or exceeds any standard of acceptability.
DIRECTION OF CHANGE
The design of the stress test offers an opportunity to examine the nature of the pattern changes that occurred. Table 2 shows both the patterns that are being moved from (i.e., test) and those that are being moved to (i.e., retest).
Table 2
DIRECTION OF PATTERN CHANGE
(n = 58 re-testers who changed style)
The Conservator pattern stands out at a 40% change rate (far right column). Other patterns each account for roughly 20% of the 58 pattern changes. This suggests that Conservators are the most motivated to attempt to change their reported “I Opt” pattern.
The design of the stress test did not allow for interviewing participants. It may be that Conservators were trying to align themselves with a more socially attractive pattern. Or their inherently skeptical posture may have created more doubt as to the accuracy of the assessment. Whatever the reason, it is clear that they were trying harder.
The “changed to” percent (bottom row) is notable. It shows that there is no overall direction to the change. This is due to the structure of the “I Opt” survey. The survey gives no clue as to the diagnostic consequences of a particular choice. This is evidenced by the random distribution of “changed to” results. In a random situation each pattern has about an equal probability of occurring. That is exactly what the bottom row shows.
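The "no overall direction" claim could be checked with a goodness-of-fit test against a uniform distribution, sketched below with hypothetical counts (the text reports only percentages):

```python
from scipy.stats import chisquare

# Hypothetical "changed to" counts per destination pattern; they sum to
# the 58 re-testers who changed patterns.
changed_to = [14, 15, 14, 15]

stat, p = chisquare(changed_to)  # default expectation is uniform
print(f"chi-square = {stat:.2f}, p = {p:.3f}")
# A large p-value is consistent with destination patterns being random.
```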
Pattern recognition is a typical strategy for attempting to engineer survey outcomes. Experience has shown that test takers will initially examine a survey in search of patterns. If highly motivated re-testers with full knowledge of their initial results (the LPs in Table 1) cannot find a pattern, it is extremely unlikely that someone taking an initial survey will find one.
A reasonable conclusion to this section is that professionals using “I Opt” technology can trust the initial diagnosis. People are unlikely to be able to “figure out” the survey. The result is that they tend to give an honest assessment of their status. That is what the original Validity Study (Soltysik, 2000) found a decade ago and that is what this study has confirmed.
TIMING EFFECTS
The structure of the survey prevents the respondent from predicting the direction of any change. However, an unpredictable change can be caused just by answering differently. To answer differently, the initial response has to be remembered. A short time between test and retest improves the odds that the original responses will be remembered.
If people choosing to retest were honestly reflecting a different state, we should expect to find no time-dependent difference in outcomes. If people were attempting to manipulate the results, there should be a significant difference by time: people retaking the test sooner would remember their initial responses better than those more distant from it. Table 3 shows the results of this test.
Table 3
EFFECT OF RETEST TIMING ON RETEST RESULTS
The fact that surveys taken days apart differed significantly from those taken hours apart (p < .05) suggests a concerted effort to change results. The shorter the time between surveys, the more divergent were the before and after profiles. This was not an honest re-evaluative effort. It is evidence of conscious manipulation. The "stress" embedded in the stress test is real. The re-testers were really trying to change the results.
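An analysis of this kind can be sketched as follows, under assumptions: each re-tester gets a profile-divergence score (e.g., 100 minus profile overlap), the sample is split by retest delay, and a two-sample t-test is applied. The study's actual test statistic is not specified, and the scores below are hypothetical:

```python
from scipy.stats import ttest_ind

# Hypothetical divergence scores (percentage points of profile change).
hours_apart = [22, 30, 18, 27, 25, 31, 20]  # retested within hours
days_apart  = [10, 14, 9, 12, 15, 11, 8]    # retested days later

stat, p = ttest_ind(hours_apart, days_apart)
print(f"t = {stat:.2f}, p = {p:.4f}")  # p < .05 would match the finding
```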
The implications of this finding are material. Even when trying to change the results a majority of the re-testers were unable to do so. This suggests that “I Opt” is able to deliver accurate results even under the most adverse of circumstances. Scholars and professionals can trust an “I Opt” diagnosis. Such trust is well placed.
SUMMARY
This pattern research has confirmed the unique standing of "I Opt." Being able to probabilistically predict sequences of behavior is alone enough to distinguish it from traditional tools. This judgment is reinforced by the fact that dominant pattern reliability was able to outperform the less challenging style reliability posted by traditional tools. And it did this under adverse test conditions. Real-world experience is likely to be much better.
The analysis of the direction of change among re-testers demonstrated the integrity of "I Opt." The "I Opt" survey offers no clues to the likely result of a particular choice. This means that the only viable strategy available to the test taker is to follow some semblance of their true posture on the choices offered. As a result, the scholar or professional administering the "I Opt" survey can trust the results.
The study also demonstrated that re-testers were really trying to bias the outcome. The fact that so few of them succeeded is further evidence of the integrity of the “I Opt” instrumentation. Scholars and professionals can rely on “I Opt” diagnosis even in situations where the respondents may be less than fully cooperative.
Overall, both stress tests (i.e., style and pattern) have confirmed and extended the results of the original validity study of over a decade ago. "I Opt" offers an evidence-based opportunity to both broaden and deepen our understanding of behavior in real-world situations. With that will come an improvement in the human condition.
TRADEMARKS
® IOPT is a registered trademark of Professional Communications Inc.
® MBTI, Myers-Briggs Type Indicator, and Myers-Briggs are registered trademarks of the MBTI Trust, Inc.
® FIRO-B is a registered trademark of CPP, Inc.
® DiSC is a registered trademark of Inscape Publishing, Inc.
® 16PF is a registered trademark of the Institute for Personality and Ability Testing, Inc.
BIBLIOGRAPHY
Random House Dictionary, Random House, Inc., 2010.
Salton, Gary (2011), "IOpt Style Reliability Stress Test", http://garysalton.blogspot.com.
Soltysik, Robert (2000), Validation of Organizational Engineering: Instrumentation and Methodology, Amherst: HRD Press.