Output list
Journal article
Published 2026
Journal of Metaverse, 6, 57-70
Gamification plays a pivotal role in enhancing user engagement in the Metaverse, particularly among Generation Z users who value autonomy, immersion, and identity expression. However, current research lacks a cohesive framework tailored to designing gamified social experiences in immersive virtual environments. This study presents a framework-oriented systematic literature review, guided by PRISMA 2020 and SPIDER, to investigate how gamification is applied in the Metaverse and how it aligns with the behavioral needs of Gen Z. From 792 screened studies, seventeen high-quality papers were synthesized to identify core gamification mechanics, including avatars, XR affordances, and identity-driven engagement strategies. Building on these insights, we propose the Affordance-Driven Gamification Framework (ADGF), a conceptual model for designing socially immersive experiences, along with a five-step design process to support its real-world application. Our contributions include a critical synthesis of existing strategies, Gen Z-specific design considerations, and a dual-framework approach to guide researchers and practitioners in developing emotionally engaging and socially dynamic Metaverse experiences.
Journal article
A systematic review of multi-modal large language models on domain-specific applications
Published 2025
Artificial intelligence review, 58, 12, 383
While Large Language Models (LLMs) have shown remarkable proficiency in text-based tasks, they struggle to interact effectively with the more realistic world without perceiving other modalities such as vision and audio. Multi-modal LLMs, which integrate these additional modalities, have become increasingly important across various domains. Despite the significant advancements and potential of multi-modal LLMs, there has been no comprehensive PRISMA-based systematic review that examines their applications across different domains. The objective of this work is to fill this gap by systematically reviewing and synthesising the quantitative research literature on domain-specific applications of multi-modal LLMs. This systematic review follows the PRISMA guidelines to analyse research literature published after 2022, the year OpenAI released ChatGPT (based on GPT-3.5). The literature search was conducted across several online databases, including Nature, Scopus, and Google Scholar. A total of 22 studies were identified, with 11 focusing on the medical domain, 3 on autonomous driving, and 2 on geometric analysis. The remaining studies covered a range of topics, with one each on climate, music, e-commerce, sentiment analysis, human-robot interaction, and construction. This review provides a comprehensive overview of the current state of multi-modal LLMs, highlights their domain-specific applications, and identifies gaps and future research directions.
Journal article
Measuring the digital divide: A modified benefit-of-the-doubt approach
Published 2023
Knowledge-based systems, 261, 110191
In this paper, a modified composite index is developed to measure digital inclusion for a group of cities and regions. The developed model, in contrast to the existing benefit-of-the-doubt (BoD) composite index literature, considers the subindexes as non-compensatory. This new way of modeling results in three important properties: (i) all subindexes are taken into account when assessing the digital inclusion of regions and are not removed (substituted) from the composite index, (ii) in addition to an overall composite index (aggregation of the subindexes), partial indexes (aggregated scores for each subindex) are also provided so that weak performances can be detected more effectively than when only the overall index is measured, and (iii) compared with current BoD models, the developed model has improved discriminatory power. To demonstrate the developed model, we use the Australian digital inclusion index as a real-world example.
Book chapter
Memetic Strategies for Network Design Problems
Published 2021
Frontiers in Nature-Inspired Industrial Optimization, 33-48
In this chapter, memetic strategies are analyzed for the Steiner tree problem in graphs, a classic network design problem. Steiner tree problems can model a wide range of real-life problems, from fault recovery in wireless sensor networks to Web API recommendation systems. The Steiner tree problem can be viewed as a generalization of the minimum spanning tree problem: whereas the minimum spanning tree seeks the minimum-total-weight subset of edges that connects all the nodes, the Steiner tree problem shares the same objective function but need not include all the nodes. Only a subset of nodes, called terminals, must be connected; the remaining nodes are optional. Unlike the minimum spanning tree problem, this problem is NP-complete, which motivates the design of a hybrid metaheuristic as an appropriate solution strategy. We analyze memetic strategies based on the effective integration of different local search procedures into a genetic algorithm for tackling this problem. Computational experiments evaluating the impact of the individual components of the procedure demonstrate that the proposed strategy is both effective and robust.
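To make the objective concrete, here is a hedged toy sketch (an exact brute force over tiny instances, not the chapter's memetic algorithm; function and variable names are illustrative): connect the terminals at minimum total edge weight, optionally routing through non-terminal (Steiner) nodes.

```python
import heapq
import itertools

def mst_weight(nodes, edges):
    """Prim's algorithm on the subgraph induced by `nodes`.
    Returns the total MST weight, or None if the subgraph is disconnected."""
    nodes = set(nodes)
    adj = {v: [] for v in nodes}
    for u, v, w in edges:
        if u in nodes and v in nodes:
            adj[u].append((w, v))
            adj[v].append((w, u))
    start = next(iter(nodes))
    seen, total = {start}, 0
    heap = list(adj[start])
    heapq.heapify(heap)
    while heap and len(seen) < len(nodes):
        w, v = heapq.heappop(heap)
        if v not in seen:
            seen.add(v)
            total += w
            for e in adj[v]:
                heapq.heappush(heap, e)
    return total if len(seen) == len(nodes) else None

def steiner_tree_weight(terminals, optional, edges):
    """Exact Steiner tree weight by enumerating subsets of optional nodes
    and taking the MST of each induced subgraph (feasible only for toy sizes)."""
    best = None
    for k in range(len(optional) + 1):
        for subset in itertools.combinations(optional, k):
            w = mst_weight(set(terminals) | set(subset), edges)
            if w is not None and (best is None or w < best):
                best = w
    return best
```

For example, on a star graph where three terminals each connect to a hub node `s` with weight 1 and to each other with weight 3, routing through `s` gives a Steiner tree of weight 3, beating the terminal-only spanning tree of weight 6; that gap is exactly what makes the optional nodes matter.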
Journal article
Published 2021
European journal of operational research, 295, 1, 394-397
Ghasemi, Ignatius, and Rezaee (2019) (Improving discriminating power in data envelopment models based on deviation variables framework. European Journal of Operational Research 278, 442–447) propose a procedure for ranking efficient units in data envelopment analysis (DEA) based on the deviation variables framework. They claim that their procedure improves the discriminating power of DEA and offers an alternative to the super-efficiency model, which is well known to suffer from infeasibility, and to the cross-efficiency approach, which suffers from the presence of multiple optimal solutions. In this short note, however, we demonstrate that their procedure rests on an inappropriate use of deviation variables, so the resulting ranking approach does not meet their expectations and yields an unreasonable ranking of decision making units (DMUs). We also show that the deviation variables, if interpreted and used correctly, can lead to a cross-inefficiency matrix and approach.
Journal article
Integrated data envelopment analysis: Linear vs. nonlinear model
Published 2018
European journal of operational research, 268, 1, 255-267
This paper develops a relationship between a linear and a nonlinear data envelopment analysis (DEA) model which have previously been developed for the joint measurement of the efficiency and effectiveness of decision making units (DMUs). It is shown that a DMU is overall efficient under the nonlinear model if and only if it is overall efficient under the linear model. We compare these two models and demonstrate that the linear model is an efficient alternative algorithm for the nonlinear model: it is more computationally efficient, it does not carry the potential estimation error of the heuristic search procedure used in the nonlinear model, and it determines global rather than local optimum solutions. Using 11 different data sets from published papers and 1000 simulated data sets, we explore and compare the two models. On the data set most frequently used in the published papers, the nonlinear model with a step size of 0.00001 requires running 1,955,573 linear programs (LPs) to measure the efficiency of 24 DMUs, compared with only 24 LPs for the linear model. Similarly, for a very small data set of only 5 DMUs, the nonlinear model requires 7861 LPs with a step size of 0.0001, whereas the linear model needs just 5 LPs.
Conference paper
Probability-based Scoring for Normality Map in Brain MRI Images from Normal Control Population
Published 2016
Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 27/02/2016–29/02/2016, Rome, Italy
The increasing availability of MRI brain data opens up a research direction for abnormality detection, which is necessary for timely detection of impairment and early diagnosis. The paper proposes scores based on z-score transformation and kernel density estimation (KDE), which rely on a Gaussian assumption and nonparametric modeling respectively, to detect abnormality in MRI brain images. The methodologies are applied to the gray-matter-based score of Voxel-based Morphometry (VBM) and the sparse-based score of Sparse-based Morphometry (SBM). Experiments with threshold-based classification are conducted on well-designed normal control (CN) and Alzheimer's disease (AD) subsets extracted from the MRI data set of the Alzheimer's Disease Neuroimaging Initiative (ADNI). The abnormality percentages of the AD and CN populations are analyzed to validate the robustness of the proposed scores. Further cross-validation with linear discriminant analysis (LDA) and support vector machine (SVM) classification between AD and CN shows significant accuracy rates, revealing the potential of statistical modeling to measure abnormality from a population of normal subjects.
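A hedged sketch of the two scoring ideas (illustrative code, not the paper's exact pipeline): a z-score measures how many standard deviations a test value lies from the normal-control mean under a Gaussian assumption, while a Gaussian-kernel KDE flags values with low density under the control distribution, with no distributional assumption.

```python
import numpy as np

def zscore_abnormality(control_values, test_value):
    """Gaussian assumption: distance from the control mean in std-dev units."""
    mu = control_values.mean()
    sigma = control_values.std(ddof=1)
    return abs(test_value - mu) / sigma

def kde_abnormality(control_values, test_value, bandwidth=0.02):
    """Nonparametric: negative log density of the test value under a
    Gaussian-kernel KDE fitted to the normal-control values."""
    diffs = (test_value - control_values) / bandwidth
    density = np.exp(-0.5 * diffs ** 2).mean() / (bandwidth * np.sqrt(2 * np.pi))
    return -np.log(density + 1e-12)  # small epsilon guards against log(0)
```

A value near the centre of the control distribution receives a low score under both measures; a value in the tail receives a high one, which is what a threshold-based classifier can then exploit.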
Conference paper
Weight-enhanced diversification in stochastic local search for satisfiability
Published 2013
23rd International Joint Conference on Artificial Intelligence (IJCAI 2013), 03/08/2013–09/08/2013, Beijing, China
Intensification and diversification are the key factors that control the performance of stochastic local search for satisfiability (SAT). Recently, Novelty Walk has become a popular method for improving the diversification of the search and has been integrated into many well-known SAT solvers such as TNM and gNovelty+. In this paper, we introduce new heuristics to improve the effectiveness of Novelty Walk in terms of reducing search stagnation. In particular, we use weights (based on statistical information collected during the search) to focus the diversification phase on specific areas of interest. With a given probability, we select the most frequently unsatisfied clause instead of a totally random one as Novelty Walk does. Amongst all the variables appearing in the selected clause, we then select the least flipped variable for the next move. Our experimental results show that the new weight-enhanced diversification method significantly improves the performance of gNovelty+, which then outperforms other local search SAT solvers on a wide range of structured and random satisfiability benchmarks.
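The diversification step described above can be sketched as follows (a minimal illustration with simplified data structures, not the solver's implementation): with probability p, pick the most frequently unsatisfied clause rather than a uniformly random one, then flip that clause's least flipped variable.

```python
import random

def pick_diversification_move(unsat_clauses, clause_weights, flip_counts,
                              p=0.7, rng=random):
    """Return the variable to flip in a weight-guided diversification step.

    unsat_clauses : list of currently unsatisfied clauses (lists of var ids)
    clause_weights: dict clause index -> how often that clause was unsatisfied
    flip_counts   : dict variable id -> how often the variable has been flipped
    """
    if rng.random() < p:
        # weight-enhanced choice: most frequently unsatisfied clause
        idx = max(range(len(unsat_clauses)),
                  key=lambda i: clause_weights.get(i, 0))
    else:
        # plain Novelty-Walk choice: a totally random unsatisfied clause
        idx = rng.randrange(len(unsat_clauses))
    clause = unsat_clauses[idx]
    # within the chosen clause, flip the least flipped variable
    return min(clause, key=lambda v: flip_counts.get(v, 0))
```

Setting p=0 recovers the original random-clause behaviour, so the heuristic is a strict generalisation of the plain walk.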
Conference paper
Trap escape for local search by backtracking and conflict reverse
Published 2013
Scandinavian Conference on Artificial Intelligence, 20/11/2013–22/11/2013, Aalborg, Denmark
This paper presents an efficient trap escape strategy for stochastic local search for satisfiability (SAT). The proposed method enhances local search with an alternative mechanism for escaping local minima and exploring new solution areas. Variables recently selected near local minima are hypothesized to be conflict variables, and a list of these backtracked conflict variables is retrieved at each local minimum. The new strategy selects variables from this backtracked list based on the clause-weight scoring function, using stagnation weights and variable weights as tiebreak criteria; during trap escape phases, the tiebreak favors high stagnation weights and low variable weights. This replaces the conventional method of selecting variables from a randomly chosen unsatisfied clause. The new strategies are examined on verification benchmarks and on application and crafted instances from the 2011 and 2012 SAT Competitions. Our experiments show that the proposed strategy is competitive with state-of-the-art local search SAT solvers.
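The selection rule can be captured in one lexicographic comparison (a hedged sketch with illustrative names, not the solver's code): rank the backtracked conflict variables by the clause-weight score, breaking ties first by high stagnation weight and then by low variable weight.

```python
def select_trap_escape_var(backtracked_vars, score, stagnation_w, variable_w):
    """Pick the best-scoring backtracked conflict variable.
    Ties are broken by preferring a high stagnation weight,
    then a low variable weight (hence the negation)."""
    return max(backtracked_vars,
               key=lambda v: (score[v], stagnation_w[v], -variable_w[v]))
```

Python's tuple comparison makes the tiebreak order explicit: the score dominates, and the weights only matter when scores are equal.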
Conference paper
Trap Avoidance in Local Search Using Pseudo-Conflict Learning
Date presented 07/2012
26th AAAI Conference on Artificial Intelligence, 22/07/2012–26/07/2012, Ontario, Canada
A key challenge in developing efficient local search solvers is to effectively minimise search stagnation (i.e. avoiding traps or local minima). A majority of the state-of-the-art local search solvers perform random and/or Novelty-based walks to overcome search stagnation. Although such strategies are effective in diversifying a search from its current local minimum, they do not actively prevent the search from visiting previously encountered local minima. In this paper, we propose a new preventative strategy to effectively minimise search stagnation using pseudo-conflict learning. We define a pseudo-conflict as a derived path from the search trajectory that leads to a local minimum. We then introduce a new variable selection scheme that penalises variables causing those pseudo-conflicts. Our experimental results show that the new preventative approach significantly improves the performance of local search solvers on a wide range of structured and random benchmarks.
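The preventative idea above can be sketched minimally (illustrative names and a simplified penalty scheme, not the paper's implementation): remember the variables flipped on the recent trajectory into each local minimum, and penalise them during later greedy selection so the search avoids retracing that path.

```python
from collections import deque

class PseudoConflictMemory:
    """Tracks recent flips and penalises variables on paths into local minima."""

    def __init__(self, path_length=4):
        self.trajectory = deque(maxlen=path_length)  # recent flipped variables
        self.counts = {}  # variable id -> pseudo-conflict occurrences

    def record_flip(self, var):
        self.trajectory.append(var)

    def at_local_minimum(self):
        # treat the recent path as a pseudo-conflict: penalise its variables
        for var in self.trajectory:
            self.counts[var] = self.counts.get(var, 0) + 1

    def pick(self, candidates, score, penalty=1.0):
        # greedy score minus the accumulated pseudo-conflict penalty
        return max(candidates,
                   key=lambda v: score[v] - penalty * self.counts.get(v, 0))
```

Unlike a random or Novelty-based walk, which only diversifies after a trap is hit, this memory biases selection away from previously trapping regions before the search re-enters them.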