
Reliability and validity of various tests: critical thinking

a) VARK

The VARK test has been analyzed and criticized in several publications, for example:

In August 2009, Walter Leite (Research and Evaluation Methodology program, University of Florida), Marilla Svinicki (University of Texas at Austin) and Yuying Shi (also of the University of Florida) conducted a study to examine the validity of VARK scores. They focused mainly on the dimensionality of the learning style inventory associated with VARK. The study found preliminary support for the validity of the VARK scores.

There were, however, concerns about the scoring algorithm and item wording, so the authors concluded that caution should be exercised whenever VARK is employed in research. The study reported the following reliability estimates for the VARK subscale scores:

0.85 for the visual subscale
0.82 for the aural subscale
0.84 for the read/write subscale
0.77 for the kinesthetic subscale
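
The essay does not say how these coefficients were obtained; for internal-consistency figures like these, Cronbach's alpha is the usual statistic. Below is a minimal Python sketch, assuming alpha and using fabricated item scores (the `items` matrix is hypothetical, not VARK data):

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
        k = items.shape[1]                         # number of items in the subscale
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical scores for one subscale: 200 respondents, 4 items.
    rng = np.random.default_rng(0)
    true_score = rng.normal(size=(200, 1))
    items = true_score + rng.normal(scale=0.7, size=(200, 4))
    print(f"alpha = {cronbach_alpha(items):.2f}")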

The most common question concerning the reliability and validity of VARK is the absence of a statistical scoring method. VARK has also been said to ignore other learning methods that are vital to the cognitive development of pupils and students, and to be an incomplete learning style inventory.
Other tests have put the VARK reliability coefficient at 0.8132 for the checklist and 0.7076 for the multi-option variable. Validity tests, on the other hand, show all VARK subscales to be statistically significant at p < 0.05 (two-tailed).
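
As an illustration of what a two-tailed test at p < 0.05 involves, here is a short sketch using scipy.stats.pearsonr on fabricated paired scores (the variables are hypothetical stand-ins, not the original VARK sample):

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)
    # Hypothetical paired scores: a VARK subscale vs. an external criterion.
    subscale = rng.normal(size=100)
    criterion = 0.4 * subscale + rng.normal(scale=0.9, size=100)

    r, p = pearsonr(subscale, criterion)  # p-value is two-tailed by default
    print(f"r = {r:.3f}, p = {p:.4f}, significant at 0.05: {p < 0.05}")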

b) Honey and Mumford

The Honey and Mumford test has long been championed as a viable alternative to Kolb's Learning Style Inventory (LSI). In recent times, however, the reliability and validity of the Honey and Mumford test, as well as its claim to be a worthy replacement for Kolb's LSI, have come under intense scrutiny.

A study conducted at the University of Paisley, Ayr Campus, UK strongly questions the reliability and validity of the Honey and Mumford test. It reports that confirmatory and exploratory factor analysis failed to lend credence to the four learning styles hypothesized by Honey and Mumford. Even after an item analysis and pruning exercise, internal consistency reliability remained unsatisfactorily low by the required standards, and no adequate model could be found to fit the data.
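
To make the factor-analytic check concrete, here is a rough sketch of exploratory factor analysis with scikit-learn; the 20-item response matrix and the four-factor target are assumptions for illustration, not the Paisley study's data:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(2)
    # Hypothetical responses: 300 respondents x 20 items (5 per intended style).
    latent = rng.normal(size=(300, 4))              # four latent learning styles
    loadings = np.kron(np.eye(4), np.ones((1, 5)))  # each style loads on 5 items
    X = latent @ loadings + rng.normal(scale=0.8, size=(300, 20))

    fa = FactorAnalysis(n_components=4, random_state=0).fit(X)
    # If a four-style model holds, each item should load mainly on one factor;
    # the Paisley study reports that no such clean structure emerged.
    print(np.round(fa.components_, 2))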

A structural equation model revealed no consistent relationship between scores on the four learning style scales, and factorial invariance tests provided no support for the generalizability or stability of the model. The study concluded that the Honey and Mumford LSQ is not a credible alternative to the LSI-1985 or the LSI, and that its employment in applied research involving students in higher education is premature.

c) Myers Briggs

In 1991, after reviewing data from MBTI research studies, a National Academy of Sciences committee concluded that only the I-E scale of the Myers Briggs test has sufficient construct validity, in the sense of showing low correlations with instruments meant to evaluate different concepts while showing high correlations with comparable scales of other instruments. Relatively weaker validity was observed for the T-F and S-N scales.

A number of studies have found that between 39% and 76% of respondents fall into a different type when retested weeks or even years later, and several researchers have therefore judged the reliability of the test to be low. On the other hand, the MBTI dichotomies display good split-half reliability.
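
Split-half reliability can be illustrated briefly: the scale's items are split into two halves, the half scores are correlated, and the Spearman-Brown formula steps the correlation up to full length. A minimal sketch with fabricated dichotomous items (not real MBTI data):

    import numpy as np

    rng = np.random.default_rng(3)
    # Hypothetical binary responses on one MBTI-style dichotomy: 250 x 10 items.
    trait = rng.normal(size=(250, 1))
    items = (trait + rng.normal(scale=1.0, size=(250, 10)) > 0).astype(float)

    odd_half = items[:, 0::2].sum(axis=1)   # items 1, 3, 5, ...
    even_half = items[:, 1::2].sum(axis=1)  # items 2, 4, 6, ...
    r_half = np.corrcoef(odd_half, even_half)[0, 1]

    # Spearman-Brown step-up: estimated reliability of the full-length scale.
    r_full = 2 * r_half / (1 + r_half)
    print(f"half-test r = {r_half:.2f}, split-half reliability = {r_full:.2f}")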

d) Belbin Team Role Test

In 1996, Stephen Fisher, W. D. K. Macrosson and Gillian Sharp, all of the University of Strathclyde, wrote a research paper, published by MCB UP Ltd and available on Emerald Online, investigating the internal reliability and validity of the Belbin Team Role Self-Perception Inventory. Two linked tests were conducted, and they revealed the following:

i) The first test showed that the test-retest reliabilities of the Belbin self-perception inventory were unsatisfactory.

ii) The second test examined the correlations with team roles predicted on the basis of 16PF data; with the exception of only one team role, no credible correlations were found.

The research paper and the two linked tests therefore appear to support disregarding the Belbin self-perception inventory data and instead using 16PF data as the suitable way of estimating team role preferences.
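
The test-retest check described above can be sketched in the same style; the scores here are fabricated stand-ins, not Belbin or 16PF data:

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(4)
    # Hypothetical team-role scores at time 1 and at a retest weeks later.
    time1 = rng.normal(loc=50, scale=10, size=120)
    time2 = 0.5 * time1 + rng.normal(loc=25, scale=9.0, size=120)  # noisy retest

    r, p = pearsonr(time1, time2)
    # Test-retest correlations well below ~0.7 are conventionally read as weak.
    print(f"test-retest r = {r:.2f} (p = {p:.4f})")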

Reference List

Dulewicz, V 1995, 'A validation of Belbin's team roles from 16PF and OPQ using bosses' ratings of competence', Journal of Organizational Psychology, vol. 68, pp. 81-99.
Dunn, R, Dunn, K & Price, GE 1984, Learning style inventory, Price Systems, Lawrence, KS.
Fleming, N 2001, Teaching and learning styles: VARK strategies, published by the author, Christchurch.
Honey, P & Mumford, A 2006, The learning styles questionnaire, 80-item version, Peter Honey Publications, Maidenhead.
Howes, RJ 1977, 'Reliability of the Myers-Briggs Type Indicator as a function of mood manipulation', unpublished master's thesis, Mississippi State University.
Jackson, CJ, Baguma, P & Furnham, A in press, 'Predicting grade point average from the hybrid model of learning in personality: consistent findings from Ugandan and Australian students', Educational Psychology, pp. 75-79.
Jackson, CJ 2009, 'Using the hybrid model of learning in personality to predict performance in the workplace', 8th IOP Conference, Conference Proceedings, Manly, Sydney, pp. 75-79.
Myers, IB, McCaulley, MH & Most, R 1985, A guide to the development and use of the Myers-Briggs Type Indicator, Consulting Psychologists Press, Palo Alto.
Sprenger, M 2003, Differentiation through learning styles and memory, Corwin Press, Thousand Oaks.
