What are the different kinds of innovation talent assessments?
All right, you may say: “I am convinced there is merit in measuring my people’s innovation abilities. How should I go about it?” Well, there are different kinds of innovation talent assessments. They vary by:
- Purpose of the innovation assessment
- Quality of the research behind the assessment (validity, sample size, location, research methods used)
- Practical considerations such as:
- Ease of use
- Years of use/Market validation
- Related products the vendor can offer
Let’s discuss each of these aspects of innovation talent assessments, to help you reach your own conclusions about the right assessment for your organization.
Purpose of the innovation assessment
Some innovation assessments are primarily sold as engagement or team activities. These vendors expressly state that their instrument is not predictive. They may be descriptive or have “concurrent validity.” Other innovation assessments can be used for talent selection, team design and workforce planning because they are based on “predictive validity.” We’ll talk more about validity below.
Quality of the research behind the assessment
Do the makers of the instrument claim predictive validity or concurrent validity?
Validity is the extent to which a tool measures what it’s supposed to measure; it reflects the strength of the conclusions you can draw from the research. There are many different kinds of validity, but you’re likely to run into concurrent validity and predictive validity. “Concurrent validity” refers to the degree to which scores on a measurement correlate with scores on other measurements that have already been established as valid. In a way, concurrent validity looks to the past for validation.
For example, the Big Five personality traits are widely viewed as valid. (The Big Five are openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism.) So an innovation talent assessment based on the Big Five could claim concurrent validity. However, the five dimensions of personality were not developed in order to predict business results.
In fact, controversy exists as to whether the Big Five personality traits are correlated with success in the workplace, let alone innovation outcomes. The correlation coefficients between personality and job performance are generally low, which means the Big Five are not very predictive. “Predictive validity,” by contrast, is based on test scores accurately predicting performance on some other measure in the future. Predictive validity looks to the future: the ability to predict an outcome.
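In practice, predictive validity is often demonstrated by correlating assessment scores with an outcome measured later. The sketch below is illustrative only: the data is invented, and `pearson_r` is a hypothetical helper computing the standard Pearson correlation coefficient from scratch.

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical data: assessment scores at hire, and a validated business
# outcome (e.g. revenue contribution) measured a year later.
scores   = [52, 61, 70, 74, 80, 85, 90, 95]
outcomes = [40, 55, 58, 65, 72, 70, 88, 92]

print(f"predictive correlation r = {pearson_r(scores, outcomes):.2f}")
```

A strong, statistically significant correlation between today's scores and tomorrow's outcomes is the kind of evidence a vendor claiming predictive validity should be able to show.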
What kind of respondents is the instrument based on?
Digging deeper into the technical background of the assessment, you should consider the kinds of people the research studied. For an innovation assessment, it is important to ask:
- Was the research based on entrepreneurs, intrapreneurs or both?
- Were social entrepreneurs included, or only for-profits?
- Were the entrepreneurs business owners like your local florist or small law practice?
- Or were they founders of companies that scaled globally?
Where were the respondents located, and how many were there?
In addition to the type of respondents, it is important to know the location and size of the research sample:
- Was the research conducted in a single country or region, or internationally?
- Single-country findings cannot be generalized internationally.
- Was it with 30 people or 3,000 people?
- Generally, the larger the sample, the better.
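The reason larger samples are better can be made concrete: the margin of error of an estimate shrinks with the square root of the sample size. A minimal sketch, using the standard formula for the 95% margin of error of a proportion (the function name and numbers are mine, for illustration):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# How precision improves as the sample grows from 30 to 3,000 respondents
for n in (30, 300, 3000):
    print(f"n = {n:>5}: margin of error about ±{margin_of_error(0.5, n):.1%}")
```

A finding based on 30 respondents carries a margin of error several times wider than the same finding based on 3,000, which is why sample size matters so much when judging research quality.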
What research methods were used?
Finally, the research methods tell us something about the care the researchers invested in reaching accurate conclusions.
- Was the research based on qualitative methods only (e.g. structured interviews and observation), or did it also use quantitative methods?
- Qualitative methods can be used at the front end to form hypotheses, but large-scale quantitative research is more conclusive.
- Were the inputs self-reported, or did they include the views of colleagues and supervisors?
- 360-degree input can be more accurate, though other kinds of bias can slip in (e.g. rivalry).
- Were business outcomes collected? Were they validated or self-reported?
- An objective measure (such as business outcomes) is necessary for predictive validity.
- Was there just one research wave, or were the researchers able to reproduce their findings in a validation study?
- A single wave can be subject to error, so the ability to reproduce the results adds credibility to the research.
These are just a few of the aspects you should consider to determine the quality of the research underpinning the innovation assessment. But once you narrow the field to two or three potential vendors, how can you compare one innovation assessment to another?
Reliability and internal consistency of innovation assessments
Reliability and internal consistency of the instrument are statistics that let you compare one assessment tool to another. The standards for any employee-related assessments are higher than for descriptive assessments. That is because if the HR assessment is going to be used to inform business decision-making (e.g. hiring, placement, project assignments), real lives and business results can be impacted.
What are good reliability and consistency numbers for innovation assessments?
To nerd out for a moment here, the statistical significance of an instrument’s research findings is reported with what is called a “p-value.” The smaller the p-value, the stronger the statistical significance. A p-value of .001 means there is only a 0.1 percent probability that such a result would have occurred by chance. So a low p-value is desirable.
The consistency or inter-relatedness of the various items within an instrument is measured by the alpha coefficient, or “Cronbach’s alpha.” The higher the α (alpha) coefficient, the more likely it is that the items measure the same underlying concept.
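Cronbach's alpha has a simple closed form: it compares the sum of the individual item variances to the variance of respondents' total scores. A minimal sketch with invented Likert-scale data (the function and data are mine, for illustration):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a scale, given one list of scores per item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    sum_item_var = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - sum_item_var / pvariance(totals))

# Hypothetical 4-item scale answered by 6 respondents (scores of 1-5)
items = [
    [4, 3, 5, 2, 4, 3],
    [4, 2, 5, 3, 4, 3],
    [3, 3, 4, 2, 5, 3],
    [4, 3, 5, 2, 4, 4],
]
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
```

When respondents who score high on one item tend to score high on the others, the total-score variance dwarfs the summed item variances and alpha approaches 1; unrelated items drive it toward 0.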
In HR, the following commonly cited rules of thumb can be used as a guideline for Cronbach’s alpha:
- α ≥ .9: excellent
- .8 ≤ α < .9: good
- .7 ≤ α < .8: acceptable
- α < .7: questionable
Practical considerations for choosing an innovation assessment
So the purpose of the assessment and the quality of the research behind it are key. But a number of other practical considerations should also factor into your final selection of an innovation talent assessment. Let’s look at the practical considerations in choosing an innovation assessment:
- Access
How do respondents access the instrument? Is it a paper assessment? Online? Is it only given by a certified professional, or can it be self-administered? Is it available in multiple languages that suit your workforce?
- Ease of use
Are the instructions and questions in the assessment easy for most respondents to understand?
Are the results easy to understand, or do they require interpretation by a certified professional?
- Actionability
Do the assessment results suggest clear ways the employee and his or her manager can use this information to drive improved innovation results? Do the assessment results translate into action?
- Market validation
The longer an assessment is in the market, the more one can gauge its usefulness. Do customers believe in the data? Do they derive the expected benefits from using it?
- Price
Is the assessment affordable? If only a handful of employees will be assessed, this may be less of a consideration. But if you want to assess hundreds or thousands of employees and candidates, price can be a bigger factor.
- Other related tools and services
Does the assessment company offer related tools and services to help apply the results of the assessment? For example:
- Do they also provide a dashboard with tools to view teams, divisions or other segments of the workforce?
- Do they offer coaching on the correct use and meaning of the assessment?
- Do they offer employee training to develop the innovation skills that are lacking?