2020 FORTEI-International Conference on Electrical Engineering (FORTEI-ICEE)

Classification of Learning Styles in Multimedia Learning Using Eye-Tracking and Machine Learning

Generosa Lukhayu Pritalia, Department of Electrical and Information Engineering, Universitas Gadjah Mada, Yogyakarta, Indonesia, generosalukhayu@mail.ugm.ac.id
Sunu Wibirama, Department of Electrical and Information Engineering, Universitas Gadjah Mada, Yogyakarta, Indonesia, sunu@ugm.ac.id
Teguh Bharata Adji, Department of Electrical and Information Engineering, Universitas Gadjah Mada, Yogyakarta, Indonesia, adji@ugm.ac.id
Sri Kusrohmaniah, Department of Psychology, Faculty of Psychology, Universitas Gadjah Mada, Yogyakarta, Indonesia, koes_psi@ugm.ac.id
Abstract—Existing multimedia learning systems still present the same material to every student. Educational theory suggests that learning content should ideally be adaptive, considering each student’s learning style. To make learning more optimal, it is necessary to detect learning styles. Several learning style detection approaches have been implemented. Conventional methods such as student assessment tests and interviews tend to be subjective. Eye-tracking is an objective method, but prior research has used it only as a validation tool for differentiating learning styles. To overcome the aforementioned problems, this study proposes a new approach using machine learning and eye-tracking techniques. The experiment and analysis involved 68 students: 23 male and 45 female participants. In the experiment, participants interacted with learning content while their eye movements were recorded using an eye-tracker sensor. From the experimental results using three classification algorithms — SVM, Naïve Bayes, and Logistic Regression — and SVM-RFE as a feature selection method, the best model was achieved by the Naïve Bayes algorithm with three features selected by the SVM-RFE method. The model yielded 71% accuracy, 60% sensitivity, and 75% specificity. This empirical study demonstrates the potential of machine learning and eye-tracking approaches to automatically classify learning styles. These results can be used as guidelines for developing an adaptive multimedia learning system that considers students’ learning styles.
Keywords—classification, eye-tracking, learning styles, machine learning, multimedia learning
I. INTRODUCTION
Humans make decisions every day. According to dual-process theory, judgment and decision-making are the results of two competing processes [1]. The first cognitive processing route tends to be automatic, holistic, and global, while the second tends to be controlled, analytical, and sequential. Regarding holistic and analytic processing, Felder and Silverman introduced a similar learning style model with global and sequential dimensions [2]. Learning styles are the characteristics, inclinations, and preferences of individuals in processing information [3]. Individual learning environments can affect global and sequential learning styles [4].
Multimedia learning is a developing learning environment based on theories about how people learn and process information [5]. According to Mayer, multimedia is defined as a representation that combines words and visual material [6]. The Cognitive Theory of Multimedia Learning is centered on cognitive models of information processing. Regarding the dual-process model of cognition, Paivio emphasized that pictorial information is processed simultaneously, while verbal information is processed sequentially [7].
At present, multimedia learning systems still present the same material to every student. However, each student has different levels of inspiration, mentality, and reaction in learning, and these differences affect their learning tendencies [8]. A “one size fits all” learning method cannot meet the desires of every student [9]. As multimedia learning continues to develop, students need personalized learning so that they can build their own path of knowledge [8]. Educational theory suggests that learning content should ideally be adaptive, considering each student’s learning style [10].
According to Hasibuan et al. [11], ignoring learning styles can lead to self-demotivation, making the learning process suboptimal. For optimal learning, it is necessary to detect learning styles [12], especially global and sequential learning styles. Detecting learning styles is important because it can improve learning performance and self-motivation [13]. The benefit of detection is that it provides information about global and sequential learning styles early on. Thus, in the future, this information can be used as a guideline for developing learning content that is more adaptive and responsive to users.
Distinguishing between global and sequential learning styles has been studied for quite a long time, and several methodologies have been applied. Conventional methods are commonly applied to determine the learning styles of students: self-reporting, student assessment tests, and interviews are usually used to investigate student activities in multimedia learning [14]. Conventional methods cannot provide direct measurements while cognitive processes are taking place, and the results depend on the ability and experience of the researcher to interpret the data. Conventional methods also rely on participants to provide information about their own learning preferences, so these methods lean toward subjective self-assessment [13].
978-1-7281-9434-9/20/$31.00 ©2020 IEEE

Authorized licensed use limited to: Technische Informationsbibliothek (TIB). Downloaded on April 16, 2024 at 13:26:09 UTC from IEEE Xplore. Restrictions apply.

Sensors can also be implemented in learning style detection. Initially, sensors were only used in laboratory environments because of their high cost and special maintenance requirements. However, recent technology has provided low-cost and highly compatible sensors such as screen-mounted eye-trackers [15]. Eye-tracking is an important source of information needed during the cognitive process [16]. Eye-tracking is an objective technique and can record cognitive processes in real time [17]. Mehigan and Pitt investigated learning styles using biometric technology implemented in mobile game-based learning [18]. Their research used the Felder and Silverman learning style model, one of whose dimensions is the global-sequential dimension. The results showed that students with sequential learning styles had a longer average fixation duration than global students. Results also indicated that eye-tracking metrics can be used to investigate individual gaze and visual attention in multimedia content to understand learning patterns [19]. However, eye movement data are still limited to a validation role in finding significant metrics to distinguish between learning styles.
Eye-tracking applications in other multi-disciplinary studies have been combined with machine learning techniques to facilitate effective data processing and develop smarter systems. For example, Lou’s study used an SVM model to identify reading skills based on eye movements [20]. The SVM algorithm was able to classify high-literacy and low-literacy readers with an accuracy of 80.3%. Furthermore, research conducted by Lagun detected cognitive impairment using eye movement data and classification algorithms — SVM, Naïve Bayes, and Logistic Regression. The SVM algorithm achieved 87% accuracy, 97% sensitivity, and 77% specificity in classifying between normal groups and mild cognitive impairment groups [21].
Apart from these works, classifying global and sequential learning styles in multimedia learning using eye-tracking and machine learning has not been investigated. Thus, in this study, we propose a new approach to classifying global and sequential learning styles based on eye-tracking and machine learning.
II. MATERIALS AND METHODS

A. Apparatuses

The experiment used a personal computer (notebook) with an Intel Core i3 processor, 8 GB RAM, and the Windows 10 Pro 64-bit operating system. To display the stimulus, a 22-inch LED monitor with 1920 × 1080 pixels resolution was used. A GP3 Gazepoint eye-tracker (Gazepoint Research Inc., Canada) with a 60 Hz sampling rate was used to record the participants’ gaze. Open Gaze and Mouse Analyzer (OGAMA) version 5.0 was used to control and monitor the experiment and to store the eye-tracker data. We used Jupyter Notebook, a web-based open-source application, with the Python 3.7 programming language to develop the machine learning models, together with Scikit-learn version 0.21.3, an open-source machine learning library [22].

B. Participants

The experiments and analysis involved 68 students from Universitas Gadjah Mada, Indonesia, who participated voluntarily. There were 23 male and 45 female participants, aged between 17 and 38 years, with undergraduate and postgraduate educational backgrounds. Participants had normal vision or vision corrected for myopia and/or astigmatism.

C. Experimental Setup

Fig. 1 shows the experimental setting of this study. The Gazepoint GP3 eye-tracker was mounted under the 22-inch LED monitor, and the participant was seated 50 cm in front of the monitor. All participants were asked to study the stimulus displayed on the monitor screen.

D. Stimulus

Fig. 2 shows the stimulus design of this study. The multimedia learning design used a combination of static texts and static images (illustrations), arranged vertically with the text and images placed on two sides so that they explained one another. The displayed design balanced verbal and visual learning information. The stimulus design was inspired by the research of Mehigan and Pitt, in which content was displayed for a certain time duration [18].

Fig. 1. Experimental setup.

Fig. 2. Stimulus design and AOI of multimedia learning.
E. Eye-Tracking Measurement Metric and Data Set
Basically, eye movement measurement metrics consist of a series of fixations and saccades. A fixation is a condition in which the eyes remain stationary for a period of time, e.g., when our eyes stare at a static object for 200 ms or more. A saccade is a rapid eye movement between two fixations. A fixation is typically represented by a round object and a saccade by a line connecting fixations [23]. This study used the eye-tracking measurement metrics provided by the OGAMA software, which has a statistical module designed to calculate empirical parameters for analysis purposes [24]. The eye-tracking measurement metrics in this study covered both the entire stimulus area and certain objects defined as Areas of Interest (AOIs). Eye movement metrics such as fixation and saccade were used in this study. A total of 40 eye-tracking metrics were used in the data set for classification.
Dimensionality refers to how many attributes a data set has. In this study, 40 attributes or features were used to classify data from 68 participants. High-dimensional data means a large number of dimensions, which makes computation extremely difficult, a problem known as the ‘curse of dimensionality’. A data set with many attributes or features must therefore be managed properly.
F. Data Preparation
1) Data labeling: In classification, it is necessary to use a labeled data set with input attributes and an appropriate (discrete) output class [25]. If the data set is self-made, it is necessary to label the samples according to the ground truth. The ground truth must be valid and reliable, so a proven measurement tool is required. In this study, the Index of Learning Styles (ILS) questionnaire, which has proven validity and reliability, was used as the ground truth for recognizing learning styles [26].
The Index of Learning Styles (ILS) questionnaire is an online survey created by Richard M. Felder and Barbara A. Soloman, based on the Felder-Silverman model, to assess learning style preferences along four dimensions, namely active-reflective, sensing-intuitive, visual-verbal, and sequential-global. Questionnaire answers from participants were entered and reported through the official ILS website of North Carolina State University [27]. The learning style preferences appeared automatically (in real time) after the input process was completed. The ILS questionnaire consists of 44 questions. Each question provides two answer options, ‘a’ and ‘b’, scored as +1 and -1, respectively. Learning styles are expressed on an interval scale from -11 to +11. This study disregarded the magnitude of the score and labeled negative scores as sequential and positive scores as global.
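As an illustration, the scoring and labeling rule above can be sketched in Python. The answer string below is hypothetical, and the convention that ‘a’ counts toward the positive (global) pole is an assumption; the 44-item questionnaire devotes 11 questions to each of its four dimensions.

```python
def ils_global_sequential(answers):
    """Score one ILS dimension (+1 for 'a', -1 for 'b') and map the
    sign of the total to a class label: negative -> sequential,
    positive -> global, as described in the text."""
    score = sum(+1 if a == "a" else -1 for a in answers)
    return score, ("sequential" if score < 0 else "global")

# Hypothetical answers to the 11 sequential-global questions
score, label = ils_global_sequential(list("abbababbbab"))
```

Note that with 11 answers the total is always odd, so the score can never be exactly zero.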
2) Eliminating outliers: The existence of outliers in data can cause large residuals, large variance, and wider data intervals [28]. We used the boxplot method to identify outliers. This method uses the quartiles and the interquartile range (IQR).
IQR = Q3 − Q1    (1)
The first quartile (Q1) is the median of the smaller half of the data and the third quartile (Q3) is the median of the larger half, while the second quartile (Q2) equals the overall median [29]. Outliers are values less than Q1 − 1.5 × IQR or greater than Q3 + 1.5 × IQR [30].
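The boxplot rule of Eq. (1) can be sketched with the Python standard library; the sample values below are hypothetical fixation counts, not data from the study.

```python
from statistics import quantiles

def iqr_outliers(data):
    """Flag values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR], following
    Eq. (1) and the boxplot rule described above."""
    q1, _, q3 = quantiles(data, n=4, method="inclusive")
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in data if x < low or x > high]

fixation_counts = [34, 36, 37, 38, 40, 41, 43, 44, 120]  # 120 looks suspicious
outliers = iqr_outliers(fixation_counts)
```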
3) Splitting data for training and testing: We split the data for training and testing in a 75% : 25% proportion. Of the 68 samples, 51 were used as training data and 17 as testing data.
4) Class balancing: This study used the Synthetic Minority Oversampling Technique (SMOTE) as a solution for handling imbalanced data by generating synthetic minority samples [31]. We used the SMOTE implementation from Python’s imbalanced-learn (imblearn) oversampling library. The SMOTE method increases the amount of minority class data to match the majority class by generating synthetic data using the K-nearest neighbor concept, with neighbor proximity measured by Euclidean distance. K was left at the library default of 5. Oversampling was applied to the training data, which consisted of 23 samples from the sequential class and 28 from the global class. Since the sequential class had fewer samples than the global class, the sequential class was oversampled, producing balanced classes.
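The interpolation step at the core of SMOTE can be sketched as follows. This is a minimal illustration, not the imblearn implementation, and the minority-class data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def smote_oversample(X_min, n_new, k=5):
    """Minimal SMOTE sketch: each synthetic sample is an interpolation
    between a random minority point and one of its k nearest minority
    neighbors (Euclidean distance)."""
    X_min = np.asarray(X_min, dtype=float)
    # pairwise Euclidean distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)            # exclude each point from its own neighbors
    neighbors = np.argsort(d, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))       # random minority sample
        j = rng.choice(neighbors[i])       # one of its k nearest neighbors
        gap = rng.random()                 # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# Oversample a 23-sample minority class up to 28 samples, as in the training split
X_seq = rng.normal(size=(23, 3))
X_new = smote_oversample(X_seq, n_new=28 - 23)
```

Because each synthetic point lies on a segment between two existing minority points, every coordinate stays within the minority class’s per-feature range.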
5) Data transformation: Data transformation is the process of changing data values into a new range using a particular method so that the data are more efficient to analyze. This study performed data transformation using the quantile normalization method, which transforms features to follow a normal distribution. The cumulative distribution function of each feature is used to map the original values, and the quantile function maps them to the desired output distribution. Feature values that fall below or above the learned range are mapped near the bounds of the output distribution. Quantile normalization smooths the data distribution [32].
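The transformation can be sketched as a rank-based mapping through the inverse normal CDF. The study presumably used a library implementation (e.g., scikit-learn’s QuantileTransformer with a normal output distribution); the standard-library version below is only illustrative, and the input values are hypothetical.

```python
from statistics import NormalDist

def quantile_normalize(values):
    """Sketch of quantile normalization: map each value's empirical
    quantile through the inverse normal CDF, so the transformed
    feature approximately follows N(0, 1). (Ties are not handled.)"""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    out = [0.0] * n
    for rank, i in enumerate(order):
        q = (rank + 0.5) / n              # empirical quantile in (0, 1)
        out[i] = NormalDist().inv_cdf(q)  # map quantile to N(0, 1) value
    return out

fixations = [12, 45, 30, 88, 51, 20, 64]  # hypothetical feature values
normalized = quantile_normalize(fixations)
```

The mapping is monotonic, so the ordering of the samples is preserved while the spacing is reshaped toward a normal distribution.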
G. Feature Selection
The combination of many features gives the data increasingly high dimensionality. High-dimensional data can be managed by selecting the most relevant features and removing features that have less effect on the classification process [33]. The feature selection method used in this study was SVM-Recursive Feature Elimination (SVM-RFE), one of the embedded feature selection methods. SVM-RFE works by iteratively eliminating the features with the smallest influence (weight) in an SVM model [34].
In several case studies related to eye-tracking and machine learning [20], [35], the researchers used the SVM-RFE feature selection method, so this study followed the same approach. SVM-RFE is implemented to avoid overfitting when the number of features reaches the hundreds or even thousands [36].
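The recursive elimination loop can be sketched as follows. To keep the sketch self-contained, an ordinary least-squares fit stands in for the linear SVM’s weight vector (the paper pairs RFE with an SVM); the data are synthetic, with only two truly informative features.

```python
import numpy as np

def rfe_select(X, y, n_keep):
    """Sketch of recursive feature elimination: repeatedly fit a linear
    model on the remaining features and drop the one with the smallest
    absolute weight, until n_keep features remain."""
    remaining = list(range(X.shape[1]))
    while len(remaining) > n_keep:
        w, *_ = np.linalg.lstsq(X[:, remaining], y, rcond=None)
        weakest = int(np.argmin(np.abs(w)))
        del remaining[weakest]            # eliminate the least influential feature
    return remaining

rng = np.random.default_rng(1)
X = rng.normal(size=(51, 6))                                   # 51 training samples, 6 features
y = 2.0 * X[:, 1] - 1.5 * X[:, 4] + 0.1 * rng.normal(size=51)  # only features 1 and 4 matter
selected = rfe_select(X, y, n_keep=2)
```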
H. Classification Algorithms
The algorithms used in this study were Support Vector Machine, Logistic Regression, and Naïve Bayes. Naïve Bayes and Logistic Regression work well for large and high-dimensional data sets [25]. The Support Vector Machine model is robust against overfitting and performs well on very high-dimensional problems [37]. Overfitting occurs when a model performs very well on the training set but cannot generalize to new data [38].
1) Support Vector Machine (SVM): The Support Vector Machine is a model that aims to find the best separating hyperplane in the input space. The SVM model uses combinations of C, Gamma, and kernel as parameters. The C and Gamma values used in this research ranged from 0.001 to 1000, and the applied kernels were linear, polynomial (poly), and radial basis function (RBF).
2) Logistic Regression: Logistic Regression is a linear model for classification. In this model, the probabilities describing the possible outcomes of a single trial are modelled using the logistic function. The Logistic Regression model was tuned over its solver parameter: newton-cg, lbfgs, and liblinear.
3) Naïve Bayes: Naïve Bayes is a supervised learning algorithm based on applying Bayes’ theorem with the ‘naïve’ assumption of conditional independence between every pair of features given the value of the class variable. We set the Naïve Bayes model’s variable smoothing parameter over a range from 1e-9 to 100. Smoothing is applied to avoid zero values in the probability model.
To find the best parameters, we applied grid search cross-validation. The cross-validation method in this grid search was stratified K-fold with 10 splits to reduce bias.
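A grid search enumerates every parameter combination and scores each one with stratified 10-fold cross-validation. The enumeration step can be sketched as follows; the exact value lists below are assumptions based on the ranges stated above.

```python
from itertools import product

# Hypothetical SVM parameter grid over the ranges described in the text
svm_grid = {
    "kernel": ["linear", "poly", "rbf"],
    "C": [0.001, 0.01, 0.1, 1, 10, 100, 1000],
    "gamma": [0.001, 0.01, 0.1, 1, 10, 100, 1000],
}

def expand_grid(grid):
    """Enumerate every parameter combination a grid search would try."""
    keys = list(grid)
    return [dict(zip(keys, combo)) for combo in product(*(grid[k] for k in keys))]

candidates = expand_grid(svm_grid)   # 3 kernels * 7 C values * 7 gamma values
```

Each candidate dictionary would then be fitted and evaluated on each of the 10 stratified folds, and the combination with the best mean score retained.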
I. Evaluation
In this phase, the performance of each model was measured to find out which model provides the best results. Referring to Lou’s research on model evaluation, we used the accuracy, specificity, and sensitivity parameters of the confusion matrix [20]. The confusion matrix contains TG (True Global), TS (True Sequential), FG (False Global), and FS (False Sequential) [39].

Accuracy = (TG + TS) / (TG + FG + TS + FS)    (2)

Sensitivity = TG / (TG + FS)    (3)

Specificity = TS / (TS + FG)    (4)

Accuracy is the ratio of correct global and sequential predictions to all tested samples. Sensitivity is the ratio of correctly predicted global students to all samples who are global students. Specificity is the ratio of correctly predicted sequential students to all samples who are sequential students.
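The three metrics follow directly from the confusion matrix counts, as Eqs. (2)-(4) show. The counts below are hypothetical, chosen only to be consistent with a 17-sample test set; they are not the study’s actual confusion matrix.

```python
def confusion_metrics(tg, ts, fg, fs):
    """Compute Eqs. (2)-(4) from confusion matrix counts, treating
    'global' as the positive class."""
    accuracy = (tg + ts) / (tg + fg + ts + fs)
    sensitivity = tg / (tg + fs)     # correctly predicted global students
    specificity = ts / (ts + fg)     # correctly predicted sequential students
    return accuracy, sensitivity, specificity

# Hypothetical counts for a 17-sample test set (illustration only)
acc, sens, spec = confusion_metrics(tg=3, ts=9, fg=3, fs=2)
```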
III. RESULTS
Fig. 3 shows the classification results of each algorithm. Initially, we tested the Naïve Bayes algorithm. The Naïve Bayes model with its best parameter (smoothing value of 10) and the three features selected by SVM-RFE obtained the highest score, with 71% accuracy, 60% sensitivity, and 75% specificity. Next, we tested the three selected features on the SVM and Logistic Regression algorithms. The SVM model with its best parameters (linear kernel, C of 0.001, Gamma of 0.01) obtained 41% accuracy, 60% sensitivity, and 33% specificity. Meanwhile, the Logistic Regression model with its best parameter (newton-cg solver) gained 65% accuracy, 60% sensitivity, and 67% specificity.
From the results of testing the three classification algorithms — SVM, Naïve Bayes, and Logistic Regression — with the three features selected by the SVM-RFE method, the Naïve Bayes model obtained the best accuracy of 71%, sensitivity of 60%, and specificity of 75%. This model used the three most contributing features, namely Number of fixations at AOI Text1, Saccade duration at AOI Text1, and Average fixation duration at AOI Text2.
Saccade duration at AOI Text1 is the average duration of eye movements from one point to another within the AOI Text1 area. The descriptive statistics in Table I show that global students produced a longer mean saccade duration (248.41 milliseconds) than sequential students (198.06 milliseconds). This occurred because global students made fewer fixations (mean 37.48) than sequential students (mean 43.94). Global students tend to make more transitions because they build relationships and connections from one object to another as a whole, while sequential students learn starting from small, detailed parts [40]. This study supports the finding of Mehigan and Pitt [18], who hypothesized that sequential students show a longer fixation duration on content than global students. As shown by the descriptive statistics in Table I, sequential students showed a longer average fixation duration (mean 176.07 milliseconds) than global students (mean 164.02 milliseconds).
IV. DISCUSSION
The Naïve Bayes model with the three features selected by SVM-RFE produced the highest performance, with 71% accuracy, 60% sensitivity, and 75% specificity. Unfortunately, 5 out of 17 test samples were incorrectly classified, a classification error rate of 29.4%. According to Barua et al. [41], in some cases the oversampling method can produce incorrect artificial samples, and wrong minority instances make classification models difficult to train correctly. The oversampling method can thus potentially introduce noise into the data. Another approach to balancing data classes is hybrid sampling, which combines both sampling techniques: it oversamples the minority class without overfitting the data and undersamples the majority class without removing too many instances. SMOTE + Tomek and SMOTE + Edited Nearest Neighbor (ENN) are examples of hybrid sampling methods that can be considered for future research [42].

Fig. 3. Classification results.

TABLE I. DESCRIPTIVE DATA (MEAN)

Feature (eye-tracking metric)           | Units       | Global students | Sequential students
Number of fixations at AOI Text1        | Count       | 37.48           | 43.94
Saccade duration at AOI Text1           | Millisecond | 248.41          | 198.06
Average fixation duration at AOI Text2  | Millisecond | 164.02          | 176.07

TABLE II. COMPUTATIONAL TIME

Classification      | Average computational time
SVM                 | 6.27 second/task
Naïve Bayes         | 0.18 second/task
Logistic Regression | 0.28 second/task
Despite its limitations, this study has successfully used the objective method of eye-tracking. Eye gaze is an important source of information needed during the cognitive process [16]. The eye-tracking technique is considered more effective than collecting data through interviews and self-reporting, as it is more objective and can record cognitive processes in real time [17]. The Naïve Bayes classification model and the SVM-RFE feature selection method are adequate for predicting global and sequential learning styles. As shown in Table II, Naïve Bayes detected a learning style within 0.18 seconds per task, the fastest compared with the SVM and Logistic Regression models. This computational time implies that the proposed method can be implemented in a real-time manner, although computational time may differ in other experiments depending on the hardware and programming language used for processing data.
This research used static multimedia content. We suggest that future research consider dynamic and complex multimedia learning content such as hypermedia. The effects of age, educational background, and gender have not been studied at this time. In the future, research can be conducted with more specific consideration of these factors.
V. CONCLUSION
In prior research works, identifying global-sequential learning styles using eye-tracking techniques was limited to descriptive statistics that required further interpretation. To address this research gap, this study proposes an eye-tracking technique to measure a person’s visual attention and machine learning methods to automatically classify learning styles. The results imply that machine learning is a promising technique for classifying and predicting students’ learning styles during multimedia learning using only their eye movements. From the experiments with three classification algorithms — SVM, Naïve Bayes, and Logistic Regression — and SVM-RFE as a feature selection method, the best model was achieved by the Naïve Bayes algorithm with three features selected by the SVM-RFE method, namely Number of fixations at AOI Text1, Saccade duration at AOI Text1, and Average fixation duration at AOI Text2. The model yielded 71% accuracy, 60% sensitivity, and 75% specificity. In the future, these results can be used as guidelines to develop methods that optimally classify global and sequential learning styles.
REFERENCES
[1] T. Kvaran, S. Nichols, and A. Sanfey, “The effect of analytic and experiential modes of thought on moral judgment,” in Progress in Brain Research. Elsevier, 2013, vol. 202, pp. 187–196.
[2] R. M. Felder, L. K. Silverman et al., “Learning and teaching styles in engineering education,” Engineering Education, vol. 78, no. 7, pp. 674–681, 1988.
[3] H. Kassim, “The relationship between learning styles, creative thinking performance and multimedia learning materials,” Procedia - Social and Behavioral Sciences, vol. 97, pp. 229–237, 2013.
[4] S. Rayner and E. Cools, Style Differences in Cognition, Learning, and Management: Theory, Research, and Practice. Routledge, 2011, vol. 10.
[5] R. E. Mayer, R. Moreno, M. Boire, and S. Vagge, “Maximizing constructivist learning from multimedia communications by minimizing cognitive load,” Journal of Educational Psychology, vol. 91, no. 4, p. 638, 1999.
[6] R. E. Mayer, The Cambridge Handbook of Multimedia Learning. Cambridge University Press, 2005.
[7] A. Paivio, “Imagery and synchronic thinking,” Canadian Psychological Review/Psychologie canadienne, vol. 16, no. 3, p. 147, 1975.
[8] R. M. Felder and R. Brent, “Understanding student differences,” Journal of Engineering Education, vol. 94, no. 1, pp. 57–72, 2005.
[9] C. Huang, Y. Ji, and R. Duan, “A semantic web-based personalized learning service supported by on-line course resources,” in INC2010: 6th International Conference on Networked Computing. IEEE, 2010, pp. 1–7.
[10] M. P. P. Liyanage, K. L. Gunawardena, and M. Hirakawa, “Using learning styles to enhance learning management systems,” ICTer, vol. 7, no. 2, 2014.
[11] M. S. Hasibuan, L. E. Nugroho, and P. I. Santosa, “Model detecting learning styles with artificial neural network,” Journal of Technology and Science Education, vol. 9, no. 1, pp. 85–95, 2019.
[12] M. Hasibuan and L. Nugroho, “Detecting learning style using hybrid model,” in 2016 IEEE Conference on e-Learning, e-Management and e-Services (IC3e). IEEE, 2016, pp. 107–111.
[13] J. Feldman, A. Monteserin, and A. Amandi, “Automatic detection of learning styles: state of the art,” Artificial Intelligence Review, vol. 44, no. 2, pp. 157–186, 2015.
[14] L. W. Ang, M. C. Masood, and S. H. Abdullah, “Analysing the relationship of sequential and global learning styles on students’ historical thinking and understanding: a case study on form four secondary school students in Malaysia,” Asian Journal of Assessment in Teaching and Learning, vol. 6, pp. 51–58, 2016.
[15] I. Pitt, T. Mehigan, and K. Crowley, “Using biometrics to support affective eLearning for users with special needs,” in International Conference on Computers Helping People with Special Needs. Springer, 2016, pp. 487–490.
[16] P. V. Yulianandra, S. Wibirama, and P. I. Santosa, “Examining the effect of website complexity and task complexity in web-based learning management system,” in 2017 1st International Conference on Informatics and Computational Sciences (ICICoS). IEEE, 2017, pp. 119–124.
[17] A. Bojko, Eye Tracking the User Experience: A Practical Guide to Research. Rosenfeld Media, 2013.
[18] T. J. Mehigan and I. Pitt, “Detecting learning style through biometric technology for mobile GBL,” International Journal of Game-Based Learning (IJGBL), vol. 2, no. 2, pp. 55–74, 2012.
[19] T. Ujbanyi, J. Katona, G. Sziladi, and A. Kovari, “Eye-tracking analysis of computer networks exam question besides different skilled groups,” in 2016 7th IEEE International Conference on Cognitive Infocommunications (CogInfoCom). IEEE, 2016, pp. 000277–000282.
[20] Y. Lou, Y. Liu, J. K. Kaakinen, and X. Li, “Using support vector machines to identify literacy skills: Evidence from eye movements,” Behavior Research Methods, vol. 49, no. 3, pp. 887–895, 2017.
[21] D. Lagun, C. Manzanares, S. M. Zola, E. A. Buffalo, and E. Agichtein, “Detecting cognitive impairment by eye movement analysis using automatic classification algorithms,” Journal of Neuroscience Methods, vol. 201, no. 1, pp. 196–203, 2011.
[22] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, “Scikit-learn: Machine learning in Python,” Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
[23] K. Holmqvist, M. Nyström, R. Andersson, R. Dewhurst, H. Jarodzka, and J. Van de Weijer, Eye Tracking: A Comprehensive Guide to Methods and Measures. OUP Oxford, 2011.
[24] A. Voßkühler, V. Nordmeier, L. Kuchinke, and A. M. Jacobs, “OGAMA (Open Gaze and Mouse Analyzer): open-source software designed to analyze eye and mouse movements in slideshow study designs,” Behavior Research Methods, vol. 40, no. 4, pp. 1150–1162, 2008.
[25] A. C. Müller and S. Guido, Introduction to Machine Learning with Python: A Guide for Data Scientists. O’Reilly Media, Inc., 2016.
[26] R. M. Felder and J. Spurlin, “Applications, reliability and validity of the index of learning styles,” International Journal of Engineering Education, vol. 21, no. 1, pp. 103–112, 2005.
[27] R. M. Felder and B. A. Soloman, “Index of learning styles questionnaire,” https://www.webtools.ncsu.edu/learningstyles/, accessed: 2018-11-16.
[28] E. Widodo, S. Guritno, and S. Haryatmi, “Application of M-estimation for response surface model with data outliers,” in Prosiding Seminar Nasional Statistika, 2013, pp. 537–545.
[29] L. Rade and B. Westergren, Mathematics Handbook for Science and Engineering. Springer Science & Business Media, 2013.
[30] G. Upton and I. Cook, Understanding Statistics. Oxford University Press, 1996.
[31] N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, “SMOTE: synthetic minority over-sampling technique,” Journal of Artificial Intelligence Research, vol. 16, pp. 321–357, 2002.
[32] B. M. Bolstad, R. A. Irizarry, M. Åstrand, and T. P. Speed, “A comparison of normalization methods for high density oligonucleotide array data based on variance and bias,” Bioinformatics, vol. 19, no. 2, pp. 185–193, 2003.
[33] D. Mladenić, “Feature selection for dimensionality reduction,” in International Statistical and Optimization Perspectives Workshop “Subspace, Latent Structure and Feature Selection”. Springer, 2005, pp. 84–102.
[34] I. Guyon, J. Weston, S. Barnhill, and V. Vapnik, “Gene selection for cancer classification using support vector machines,” Machine Learning, vol. 46, no. 1-3, pp. 389–422, 2002.
[35] M. N. Benfatto, G. Ö. Seimyr, J. Ygge, T. Pansell, A. Rydberg, and C. Jacobson, “Screening for dyslexia using eye tracking during reading,” PLoS ONE, vol. 11, no. 12, 2016.
[36] K. Kavitha, G. S. Rajendran, and J. Varsha, “A correlation based SVM-recursive multiple feature elimination classifier for breast cancer disease using microarray,” in 2016 International Conference on Advances in Computing, Communications and Informatics (ICACCI). IEEE, 2016, pp. 2677–2683.
[37] J. D. Kelleher, B. Mac Namee, and A. D’arcy, Fundamentals of Machine Learning for Predictive Data Analytics: Algorithms, Worked Examples, and Case Studies. MIT Press, 2015.
[38] A. Zheng and A. Casari, Feature Engineering for Machine Learning: Principles and Techniques for Data Scientists. O’Reilly Media, Inc., 2018.
[39] I. H. Witten and E. Frank, “Data mining: practical machine learning tools and techniques with Java implementations,” ACM SIGMOD Record, vol. 31, no. 1, pp. 76–77, 2002.
[40] S. Graf, S. Viola et al., “Automatic student modelling for detecting learning style preferences in learning management systems,” in Proc. International Conference on Cognition and Exploratory Learning in Digital Age, 2009, pp. 172–179.
[41] S. Barua, M. M. Islam, X. Yao, and K. Murase, “MWMOTE–majority weighted minority oversampling technique for imbalanced data set learning,” IEEE Transactions on Knowledge and Data Engineering, vol. 26, no. 2, pp. 405–425, 2012.
[42] T. R. Hoens and N. V. Chawla, “Imbalanced datasets: from sampling to classifiers,” in Imbalanced Learning: Foundations, Algorithms, and Applications, 2013, pp. 43–59.