What the Paper Says: Critiquing a Journal Article.

Updated: Dec 15, 2021


Authors

Fourati-Jamoussi, Fatma; Niamba, Claude Narcisse

Article Title

An Evaluation of Business Intelligence Tools: A Cluster Analysis of Users’ Perceptions

Article URL

Journal of Publication

Journal of Intelligence Studies in Business, Vol. 6, No. 1, pp. 37-47. Published by Halmstad University, Sweden.

Statement of the issue discussed

The authors identified a gap between knowledge at the BI tool design level and users' eventual perception when utilising those tools. They argue that perception of these tools differs according to the demographic and student/professional status of the user: practising professionals view BI tools as critical to their daily tasks, whereas students see them as too cumbersome.

The problem, according to the authors, is that tool vendors do not consider these differences between the two sets of users during the design and production processes, leading to poor user satisfaction among at least one of the clusters.


The key purpose of the article is to evaluate how students and professionals perceive and utilise Business Intelligence (BI) tools, with the overall aim of helping BI tool designers better monitor the tools' efficiency.


The authors set out to measure and evaluate individual respondents' perspectives of several BI tools by using scientific empirical methods.

Applying a cluster analysis method, the authors used the SPAD data analysis software to segment the respondents into groups, characterising them through demographic profiling and tool-usage clustering. Data were collected through survey questionnaires and afterwards processed using the Statistical Package for the Social Sciences (SPSS 19) software.
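To make the clustering step concrete, here is a minimal sketch of the kind of segmentation described above. This is an illustrative stand-in, not the authors' method: the paper used SPAD, whereas this uses a simple k-means routine in Python, and the respondent data below is entirely hypothetical.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Simple k-means: partition respondent feature vectors into k clusters."""
    rng = random.Random(seed)
    centroids = list(rng.sample(points, k))  # pick k initial centroids
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centroid (squared Euclidean distance)
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        for c, members in enumerate(clusters):
            if members:  # recompute centroid as the mean of its members
                centroids[c] = tuple(sum(xs) / len(xs) for xs in zip(*members))
    return centroids, clusters

# Hypothetical respondents: (years of BI experience, satisfaction score 1-5)
respondents = [(0.5, 2), (1, 2), (0.8, 3), (9, 5), (10, 4), (8, 5)]
centroids, clusters = kmeans(respondents, k=2)
# the two clusters separate the low-experience "students" from the "professionals"
```

On data this cleanly separated, the low-experience, low-satisfaction respondents and the high-experience, high-satisfaction respondents end up in different clusters, which mirrors the student/professional segmentation the authors report.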

Furthermore, the BI tools in question were classified into general and specialised tools and platforms. This was done using the Task-Technology Fit (TTF) and Technology Acceptance Model (TAM) frameworks.


They hypothesize that closing the identified gap will lead to a better user experience that will provide value for the user irrespective of whether they are practising professionals or students.

They also expected that the outcome of their study would help BI tool production companies better monitor the efficiency of the tools once deployed in real-life production settings.

Major conclusions

The paper concludes that the technologies behind the BI tools are not inherently poor but are poorly appreciated because of limited use. The authors suggest that the practising professionals who expressed satisfaction with the tools are happy because they use them daily and have mastered the intricacies involved; this group is said to be aware of the tools' usefulness. The students, on the other hand, were found to be dissatisfied with the various software packages.

The authors further recommended that BI tools vendors should pay attention to the differentiation in user perception as they design and deploy their software. Doing so, they suggest, would accommodate different users even while using the same platform.

They also conclude that organisations should adopt a human-centric approach, applying organisational change strategies so that different users quickly take up BI tools. This, they propose, applies to big and small organisations alike.


Sampling is a good method for collecting valuable, unbiased end-user data from a large pool of potential survey participants. However, a larger sample would have provided better insight through greater representativeness.

The cluster analysis approach used in the project may not have been the best choice in the sense that both groups have very different outlooks towards technology. Perrotta and Williamson (2018) buttress this point by explaining that cluster analysis is best applied when the various groups have similar characteristics.

This is not the case here. Relatedly, the sample considered (134 persons, professionals and students combined) is also too small to support meaningful conclusions.
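One way to put the 134-respondent sample in perspective is a back-of-the-envelope margin-of-error calculation. This is my own illustration, not a figure from the paper, and it assumes simple random sampling and the most conservative proportion estimate (p = 0.5):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a sample proportion under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# Combined sample of 134 respondents, as reported in the paper
moe = margin_of_error(134)
print(f"+/- {moe * 100:.1f} percentage points")  # roughly +/- 8.5 points
```

Roughly an 8.5-point margin for the combined sample, and considerably worse once it is split into the student and professional sub-clusters, which supports the criticism that the sample is too thin for confident per-cluster conclusions.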

Including students was a good way of broadening the range of sources from which data were obtained. However, the student sample was not explored as broadly as the professionals' cluster, which was well spread across various disciplines: student data were collected only from the Engineering discipline at LaSalle Beauvais Institute (Fourati-Jamoussi and Niamba, 2016: 40).

In other words, data saturation, which Moser and Korstjens (2018) describe as continuing 'sampling until new data begins to yield redundant information', was not achieved.

A more robust sampling would have included data from various disciplines, and perhaps different institutions of higher learning. For instance, data collected from STEM disciplines and business-related disciplines would have accounted for more representative data sets sourced from broader fields of knowledge.

Overall, a better approach would have been to design the research in distinct phases, so that the data collected and the eventual outcomes were streamlined in such a way that the researchers would be comparing apples with apples.

As pointed out in their conclusion: “For future research, we will adapt our survey to our student population to evaluate their perception of BI tools as part of their project…” (ibid., 42). This indicates that a more focused study would have ensured that the variables and questions relied upon to fetch the data would have been designed to fit the specific demographic – in this case, students.

This conclusion rests on the assumption that the student and practising-professional demographics think differently and have distinct outlooks on technology.

For example, a 23-year-old student might prefer to use data visualization tools that are visually appealing, like Tableau or Google Data Studio, while the working professional aged 36 and above may be comfortable with SPSS and Excel.

In summary, this research should have been split in two and conducted with different yet very specific demographics in mind, given their distinct views about technology.


The title of the article is clear enough and the abstract gives a good insight into what the study, as well as its outcome, was about. Also, the discussion is quite relevant in the sense that software designers and production companies should be made aware of users' perceptions of their products in a bid to further provide better value to the customer.

The authors were objective in presenting their assumptions, such that these hypotheses did not cloud their research process but were instead subjected to the standards of academic research.


Fourati-Jamoussi, F. and Niamba, C.N., 2016. ‘An evaluation of business intelligence tools: a cluster analysis of users’ perceptions’, Journal of Intelligence Studies in Business, 6(1), pp. 37-47. Available at: [Accessed 18 November 2021]

Moser, A. and Korstjens, I., 2018. ‘Series: Practical guidance to qualitative research. Part 3: Sampling, data collection and analysis’, European Journal of General Practice, 24(1), pp. 9-18. Available at: [Accessed 2 December 2021]

Perrotta, C. and Williamson, B., 2018. 'The social life of Learning Analytics: cluster analysis and the "performance" of algorithmic education', Learning, Media and Technology, 43(1), pp. 3-16. Available at: [Accessed 2 December 2021]
