‘Ghana Tops Africa In Maths & Science’: Further Revelations!

“Education levels in Ghana continue to trail international standards at all levels.” The above quotation is a remark from the Global Information Technology (GIT) report (2014, p. 42), which supposedly ranked Ghana first in Africa for the quality of its maths and science education.

In view of the increased attention this report has enjoyed and the media discourse surrounding it, I have decided to examine and critique the bases of the rankings further, to elucidate the gross limitations of the claims made therein.

First of all, it appears to me that the media reports have confused the Global Information Technology report (2014) with the Global Competitiveness Report (2013–2014). Nevertheless, I will proceed to discuss the former.

The most formidable criticism of the GIT reports is that they are not based on any actual tests or assessments taken by learners; the authors do not in any way interact with learners in the system. The rankings are generated purely from a SURVEY RESEARCH DESIGN, supplemented by statistics from UNESCO and the World Bank.
The study was designed primarily to assess the current state of ICT readiness around the world along ten pillars, including the Skills Pillar (Pillar 5), which is the subject of contention here.

According to the 2014 report, the Skills Pillar gauges the ABILITY of a society to make effective use of ICTs via basic educational skills. This pillar was captured by four variables, namely: the quality of the educational system, the quality of math and science education, the adult literacy rate, and the secondary education enrollment rate.

Even before I proceed further, you will agree with me that a questionnaire is incapable of (or at best inadequate for) tapping or measuring one’s ability. In fact, it is conceptually, theoretically and methodologically flawed to measure ability or skills using a questionnaire. Where this has been done, results have been interpreted with great caution.

So, at best, ‘perceived ability’ would have properly captured the intent of the Skills Pillar. This is because respondents could only provide what they PERCEIVED (i.e. feel, sense, believe, think, etc.) to be the ability of society to make effective use of ICTs. It is open knowledge that PERCEPTION IS NOT SYNONYMOUS WITH WHAT ACTUALLY HAPPENS. Observation could help us understand what actually happens in a social setting. This point is extremely important in social research like this study, in which beliefs, attitudes, perceptions and feelings are measured.

Despite the damning nature of the findings (which are, in my estimation, defective), it appears the media has highlighted the results of two variables of the fifth pillar (the Skills Pillar), namely “Quality of the educational system” and “Quality of math and science education”. The second variable (quality of math and science education) has enjoyed particularly high currency in our recent public discourse. Thus, the scope of the present discussion revolves around these two variables.

According to the GIT report, which was compiled by the World Economic Forum, the “Quality of the educational system” in Ghana was measured by asking respondents, who I believe were Ghanaians, to rate the following question on a seven-point Likert scale:
“How well does the educational system in your country meet the needs of a competitive economy? [1 = not well at all; 7 = extremely well]”
The average response (on the 1–7 scale) of the Ghanaian respondents who took part in this study was 4.2, indicating an educational system rated only slightly towards “well”; Ghana was thus ranked 46th out of 148.

On this indicator, The Gambia was ranked 29th (mean score = 4.5) out of the 148 countries, with Mauritius 37th (4.3), Zambia 38th (4.3), Zimbabwe 42nd (4.2) and Kenya 44th (4.2), all ahead of Ghana. Effectively, the report suggests that The Gambia, Mauritius, Zambia, Zimbabwe and Kenya all have better educational systems than Ghana. Is that really the case, or did some people merely perceive it to be so? Could we question the bases of this perception? I opine that one’s mood, emotions, economic situation or political orientation at the time of completing a questionnaire could occasion socially desirable responses.

The study failed to highlight this important limitation and explain how it was addressed.

On the second variable, “Quality of math and science education”, respondents were asked to indicate how they felt about the following question on a seven-point Likert scale:

“In your country, how would you assess the quality of math and science education in schools? [1_=_extremely poor—among the worst in the world; 7_=_excellent—among the best in the world]”

Again, the mean score of the responses from the Ghanaian respondents was 4.2, indicating a quality of maths and science education rated only slightly towards the “excellent—among the best in the world” end of the scale.

The final ranking was then produced by compiling these mean scores for all 148 participating countries and arranging them in descending order. The top five countries were Singapore (mean score = 6.3), Finland (6.3), Belgium (6.0), Lebanon (5.8) and Switzerland (5.8); the bottom five were Honduras (2.2), Egypt (2.2), Dominican Republic (2.2), Angola (2.1) and South Africa (2.1).
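As described, the league table appears to be nothing more than a sort of country mean scores. The sketch below illustrates that procedure using a handful of the means quoted in this article; the country subset and the tie-breaking are illustrative only, not the report's official computation.

```python
# Illustrative reconstruction of the GIT ranking procedure:
# compute each country's mean Likert response (here taken directly
# from the figures quoted in the article), then sort from highest
# to lowest mean. This is a sketch, not the official methodology.
mean_scores = {
    "Singapore": 6.3,
    "Finland": 6.3,
    "Belgium": 6.0,
    "Tunisia": 4.7,
    "Cote d'Ivoire": 4.3,
    "Ghana": 4.2,
    "South Africa": 2.1,
}

# Sort countries by mean score, highest first (Python's sort is
# stable, so tied countries keep their original order).
ranking = sorted(mean_scores.items(), key=lambda kv: kv[1], reverse=True)

for rank, (country, score) in enumerate(ranking, start=1):
    print(f"{rank}. {country}: {score}")
```

Even in this toy subset, Ghana falls behind Tunisia and Côte d'Ivoire, which is the point taken up below.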

Ghana ranked 62nd (mean = 4.2) and not 48th as has been reported in the media. Perhaps I am examining the wrong report, or perhaps this is a serious oversight in the media reportage; I choose the latter. Quite interestingly, on this same “Quality of math and science education” construct, Tunisia was ranked 37th (mean = 4.7) and Côte d’Ivoire 60th (mean = 4.3), both ahead of Ghana. So how could Ghana have topped Africa in maths and science education? We must also remember that on such a scale (1–7), 4 is likely to be interpreted as a neutral/middle position (i.e. undecided, don’t know, unsure, etc.).

The implication is that Ghanaian respondents were generally closer to a neutral position than to perceiving the best possible outcomes. That is, their responses leaned towards uncertainty rather than a clearly positive position on the question.

It is my respectful view that the GIT methodology is not sufficiently valid to conclude that a country has excellent (or otherwise) quality of math and science education. By contrast, the reports from the Trends in International Mathematics and Science Study (TIMSS), released every four years, provide a better picture of the state of maths and science education in Ghana: they actually test and interview students and interact with teachers, head teachers and other relevant stakeholders to provide triangulated data for a more robust analysis. In 2003, Ghanaian students in JHS 2 failed to reach the “low” level of achievement in the TIMSS testing, scoring 276 against an international average of 467.

The report ranked Ghana 45th out of the 46 participating countries around the world. In 2007, Ghana scored 309 in TIMSS, lower than the scores of all the other participating African countries, and placed 48th out of 49 participating countries. The 2007 score of 309 was, however, higher than the 2003 score of 276, a 33-point increase. In 2011, Ghana scored 331 to place last of the 43 participating countries, albeit with a slightly better score than the previous results. The results are no different for science education.
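The small but steady improvement across the three TIMSS cycles can be verified with a quick calculation using the maths scores quoted above:

```python
# Ghana's TIMSS maths scores as quoted in the text; 467 is the
# 2003 international average mentioned above.
timss_maths = {2003: 276, 2007: 309, 2011: 331}
INTL_AVG_2003 = 467

years = sorted(timss_maths)
# Gain over the previous four-year cycle.
gains = {cur: timss_maths[cur] - timss_maths[prev]
         for prev, cur in zip(years, years[1:])}

for year, gain in gains.items():
    print(f"{year}: {timss_maths[year]} (+{gain} points on previous cycle)")
```

Every cycle shows a gain (+33, then +22 points), yet each score remains well below the international average, consistent with Ghana's near-bottom placements.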

In conclusion, our media houses must begin to seek the services of statisticians and social researchers when interpreting such rankings. Click here to download the full report: http://tinyurl.com/q5ehsog