Tuesday, 04 January 2022 08:12

Editorial Board Member Message 2022

A Message from the Editor

It is my pleasure to introduce the first issue of the 19th volume of IAJIT, which includes articles that tackle advances in several information technology areas, mainly: information security, data management, machine learning for medical applications, pattern detection, social media analysis, GPS applications, and software engineering.

As is our custom, we email published articles to IAJIT authors to keep them updated on the journal's most recent work.

I would like to thank the reviewers of IAJIT for their thoughtful comments and efforts towards improving the quality of our published articles.

I wish you all a happy new year and greater accomplishments in the year ahead.

Dr. Yaser Al-Lahham

Editorial Board Member

Sunday, 02 January 2022 11:51

Optimal Image Based Information Hiding with One-dimensional Chaotic Systems and Dynamic Programming

Yinglei Song1, Jia Song2, and Junfeng Qu3

1School of Electronics and Information Science, Jiangsu University of Science and Technology, China

2Department of Electronic and Information Technology, Suzhou Vocational University, China

3Department of Computer Science and Information Technology, Clayton State University, USA

Abstract: Information hiding is a technology aimed at the secure hiding of important information inside digital documents or media. In this paper, a new approach is proposed for the secure hiding of information in grayscale images. The hiding is performed in two stages. In the first stage, the binary bits in the information sequence are shuffled and encoded with a set of integer keys and a system of one-dimensional logistic mappings. In the second stage, the resulting sequence is embedded into the gray values of selected pixels in the given image. A dynamic programming method is utilized to select the pixels that minimize the difference between the cover image and the corresponding stego image. Experiments show that this approach outperforms other information hiding methods by 13.1% in Peak Signal to Noise Ratio (PSNR) on average and reduces the difference between a stego image and its cover image to 0 in some cases.
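
As a minimal sketch of the first stage described above, the snippet below encodes a bit sequence with a one-dimensional logistic map. The parameters (x0 = 0.7, r = 3.99, the 0.5 keystream threshold) are illustrative assumptions, not the authors' exact construction, and the integer-key shuffling step is omitted.

```python
def logistic_keystream(x0: float, r: float, n: int) -> list[int]:
    """Generate n chaotic bits from the logistic map x <- r*x*(1-x)."""
    bits, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

def encode(message_bits: list[int], x0: float = 0.7, r: float = 3.99) -> list[int]:
    """XOR the message with the chaotic keystream (decoding is identical)."""
    key = logistic_keystream(x0, r, len(message_bits))
    return [m ^ k for m, k in zip(message_bits, key)]

assert encode(encode([1, 0, 1, 1])) == [1, 0, 1, 1]  # XOR twice recovers the message
```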

Keywords: Encryption and hiding, minimized hiding effects, improved security, convenient recovery.

Received June 3, 2019; accepted February 15, 2021

https://doi.org/10.34028/iajit/19/1/1

Full text

Sunday, 02 January 2022 11:49

Sørensen-Dice Similarity Indexing based Weighted Iterative Clustering for Big Data Analytics

KalyanaSaravanan Annathurai1 and Tamilarasi Angamuthu2

1Department of Computer Science and Engineering, Kongu Engineering College, India

2Department of Computer Applications, Kongu Engineering College, India

Abstract: Big data refers to collections of very large volumes of data from which similar data points must often be extracted. Clustering is an essential data mining technique for examining such volumes. Several techniques have been developed for handling big datasets, but their accuracy is compromised by high time consumption and space complexity. In order to improve clustering accuracy with less complexity, a Sørensen-Dice Indexing based Weighted Iterative X-means Clustering (SDI-WIXC) technique is introduced. The SDI-WIXC technique groups similar data points with higher clustering accuracy in minimal time. First, data points are collected from the big dataset. Then, using weight values, the dataset is partitioned into 'X' clusters. Next, Weighted Iterated X-means Clustering (WIXC) is applied to cluster the data points based on a similarity measure: the Sørensen-Dice indexing process measures the similarity between each cluster's weight value and the data points, and each data point is assigned to the cluster whose weight value it most resembles. WIXC also improves the cluster assignments through repeated subdivision using a Bayesian probability criterion, which helps group all data points and thereby improves clustering accuracy. Experimental evaluation is carried out on factors such as clustering accuracy, clustering time, and space complexity with respect to the number of data points. The results show that the proposed SDI-WIXC technique obtains high clustering accuracy with minimal time and space complexity.
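
As a minimal sketch of the similarity step the abstract describes, the snippet below computes the Sørensen-Dice index and assigns a point to its most similar cluster. Treating points and cluster weight values as binary feature sets is an illustrative assumption for brevity.

```python
def dice_similarity(a: set, b: set) -> float:
    """Sørensen-Dice index: 2|A ∩ B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

def assign_cluster(point: set, cluster_weights: dict) -> str:
    """Assign the point to the cluster whose weight set is most similar."""
    return max(cluster_weights, key=lambda c: dice_similarity(point, cluster_weights[c]))

weights = {"c1": {"f1", "f2"}, "c2": {"f3", "f4"}}
print(assign_cluster({"f1", "f2", "f4"}, weights))  # -> c1
```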

Keywords: Bayesian probability criterion, big data analytics, sørensen-dice indexing process, weighted iterated x-means clustering.

Received August 3, 2019; accepted May 9, 2020

https://doi.org/10.34028/iajit/19/1/2

Full text

Sunday, 02 January 2022 11:47

A Modified DBSCAN Algorithm for Anomaly Detection in Time-series Data with Seasonality

Praphula Jain, Mani Shankar Bajpai, and Rajendra Pamula

Department of Computer Science and Engineering, Indian Institute of Technology (Indian School of Mines), India

Abstract: Anomaly detection concerns identifying anomalous observations or patterns that deviate from a dataset's expected behaviour. It has significant practical applications in several domains such as public health, finance, Information Technology (IT), security, medicine, energy, and climate studies. The Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm is a density-based clustering algorithm capable of identifying anomalous data. In this paper, a modified DBSCAN algorithm is proposed for anomaly detection in time-series data with seasonality. For experimental evaluation, a monthly temperature dataset was employed, and the analysis sets forth the advantages of the modified DBSCAN over the standard DBSCAN algorithm for seasonal datasets. The results show that standard DBSCAN finds anomalies in a dataset but fails to find local anomalies in seasonal data, whereas the proposed modified DBSCAN finds both the global and the local anomalies. Standard DBSCAN detected 19 anomaly points (2.16%), while the modified approach detected 42 anomaly points (4.79%), i.e., 23 additional anomaly points (2.63% of the dataset). Hence, the proposed modified DBSCAN algorithm outperforms standard DBSCAN in finding local anomalies.
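
As a minimal sketch of the idea behind the modification, the snippet below runs DBSCAN within each seasonal group (here, calendar month) so that local anomalies are caught in addition to a global pass. The eps/min_samples values and the month-wise grouping are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def seasonal_dbscan(values: np.ndarray, months: np.ndarray,
                    eps: float = 1.5, min_samples: int = 5) -> np.ndarray:
    """Boolean anomaly mask: global DBSCAN outliers OR per-season outliers."""
    x = values.reshape(-1, 1)
    global_out = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(x) == -1
    local_out = np.zeros(len(values), dtype=bool)
    for m in np.unique(months):
        idx = np.where(months == m)[0]          # all observations of one season
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(x[idx])
        local_out[idx] = labels == -1           # noise points within the season
    return global_out | local_out
```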

Keywords: Anomaly detection, data mining, DBSCAN, modified DBSCAN, seasonal data, time series.

Received October 18, 2019; accepted January 19, 2021

https://doi.org/10.34028/iajit/19/1/3

Full text

Sunday, 02 January 2022 11:45

Skin Lesion Segmentation in Dermoscopy Imagery

Shelly Garg1 and Balkrishan Jindal2

1Department of Electronics and communication, Punjabi University, India

2Yadavindra College of Engineering, Computer Engineering Section, Punjabi University, India

Abstract: The main purpose of this study is to find an optimal method for segmentation of skin lesion images, skin cancer being among the deadliest forms of cancer. The present research develops a model comprising two stages: first, pre-processing to reduce unwanted artefacts such as hair and uneven illumination, using an enhanced technique based on thresholding and morphological operations to attain higher accuracy; and second, segmentation using k-means clustering optimized with the Firefly Algorithm (FFA). Input sample images are drawn from the online International Skin Imaging Collaboration (ISIC) archive dataset and the dermatology service of Hospital Pedro Hispano (PH2) dataset. The proposed method is measured on sensitivity, specificity, Dice coefficient, Jaccard index, execution time, accuracy, and error rate. From the results, the authors observe that over a large number of cancer images the proposed model achieves an average accuracy of 98.9% on the ISIC dataset and 99.1% on the PH2 dataset, with a minimal average error rate. It also attains Dice coefficient values of 0.993 on ISIC and 0.998 on PH2. The results for the remaining parameters are broadly comparable across datasets. The outcome of this model is therefore highly reassuring.
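
As a minimal sketch of the segmentation stage, the snippet below clusters pixel intensities with k-means (k = 2 as an illustrative choice) and returns a lesion mask. The firefly-optimized initialization the paper proposes is replaced here by sklearn's default k-means++ for brevity.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_lesion(gray_image: np.ndarray, k: int = 2) -> np.ndarray:
    """Cluster pixel intensities and return a binary mask of the darker cluster."""
    pixels = gray_image.reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=k, n_init=10).fit(pixels)
    lesion_cluster = int(np.argmin(km.cluster_centers_))  # lesions are typically darker
    return (km.labels_ == lesion_cluster).reshape(gray_image.shape)
```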

Keywords: Automatic detection, FFA, K-mean, pre-processing, segmentation.

Received October 29, 2019; accepted February 7, 2021

https://doi.org/10.34028/iajit/19/1/4

Full text

Sunday, 02 January 2022 11:44

Heterogeneous Feature Analysis on Twitter Data Set for Identification of Spam Messages

Valliyammai Chinnaiah and Cinu C Kiliroor

Department of Computer Technology, MIT Campus Anna University, India

Abstract: Spam is undesirable content posted on online social networking sites, and spammers are the users who post it. Unwanted messages posted on Twitter may serve several goals, and spam tweets can interfere with the statistics presented by Twitter mining tools and squander users' attention. Since Twitter has attained worldwide popularity, the interest it draws from spammers and malevolent users has also increased. To overcome the spam problem, many researchers have proposed machine learning algorithms for the identification of spam messages. Not only the selection of classifiers but also variegated feature analysis is essential for the identification of irrelevant messages in social networks. The proposed model performs a heterogeneous feature analysis on Twitter data streams to classify unsolicited messages, using binary and continuous feature extraction together with sentiment analysis on social network datasets. The created features are assessed using significance stratagems and the finest features are selected. A classifier model is built on these feature vectors to predict and identify spam messages on Twitter. The experimental results clearly show that the proposed Sentiment Analysis based Binary and Continuous Feature Extraction model with Random Forest (SA-BC-RF) approach classifies spam messages from social networks with an accuracy of 90.72%, outperforming other state-of-the-art methods.
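
As a minimal sketch of the classification setup, the snippet below pairs continuous text features with a Random Forest classifier. The TF-IDF features, toy tweets, and hyperparameters are illustrative assumptions; the paper's binary feature extraction and sentiment analysis steps are omitted for brevity.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

tweets = ["win a free prize now!!!", "meeting moved to 3pm"]
labels = [1, 0]  # 1 = spam, 0 = ham

model = make_pipeline(
    TfidfVectorizer(max_features=1000),                 # continuous text features
    RandomForestClassifier(n_estimators=100, random_state=42),
)
model.fit(tweets, labels)
print(model.predict(["claim your free prize"]))         # classify a new tweet
```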

Keywords: Spam filter, random forest, heterogeneous feature analysis, online social network.

Received November 9, 2019; accepted February 15, 2021

https://doi.org/10.34028/iajit/19/1/5

Full text

Sunday, 02 January 2022 11:42

Energy Heterogeneity Analysis of Heterogeneous Clustering Protocols

Shahzad Hassan and Maria Ahmad

Department of Computer Engineering, Bahria University Islamabad, Pakistan

Abstract: In Wireless Sensor Networks (WSNs) the nodes have restricted battery power, and battery exhaustion depends on various factors. Various clustering protocols have recently been proposed to diminish the energy depletion of nodes and prolong network lifespan by reducing power consumption. However, not every protocol is appropriate for heterogeneous wireless sensor networks, whose efficiency declines as node heterogeneity changes. This paper reviews the cluster head selection criteria of various clustering protocols for heterogeneous wireless sensor networks in terms of node heterogeneity, and compares their performance on several parameters, such as clustering technique, cluster head selection criteria, node lifetime, and energy efficiency, across two-level and three-level heterogeneous protocols: Stable Election Protocol (SEP), Zonal-Stable Election Protocol (ZSEP), Distributed Energy-Efficient Clustering (DEEC), Direct Transmission and Residual Energy based Stable Election Protocol (DTRE-SEP), Developed Distributed Energy-Efficient Clustering (DDEEC), Zone-Based Heterogeneous Clustering Protocol (ZBHCP), Enhanced Distributed Energy-Efficient Clustering (EDEEC), Threshold Distributed Energy-Efficient Clustering (TDEEC), Enhanced Stable Election Protocol (SEP-E), and Threshold Stable Election Protocol (TSEP). The comparison shows that TDEEC performs very effectively against the other two-level and three-level heterogeneous protocols and extends the unstable region significantly. The simulations also show that adding node heterogeneity can significantly increase network lifetime.
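
As a minimal sketch of the cluster-head election idea common to the SEP/DEEC family reviewed above, the snippet below computes the round threshold T(n) with the election probability scaled by a node's residual energy relative to the network average. The exact weighting differs per protocol; the scaling used here is an illustrative DEEC-style assumption.

```python
import random

def ch_threshold(p: float, r: int, residual: float, avg_energy: float) -> float:
    """Epoch threshold T(n) = p_i / (1 - p_i * (r mod round(1/p_i))),
    with p_i weighted by residual energy (heterogeneity-aware)."""
    p_i = p * residual / max(avg_energy, 1e-9)
    return p_i / (1.0 - p_i * (r % round(1.0 / p_i)))

def is_cluster_head(p: float, r: int, residual: float, avg_energy: float) -> bool:
    """A node elects itself cluster head when its random draw falls below T(n)."""
    return random.random() < ch_threshold(p, r, residual, avg_energy)

print(is_cluster_head(p=0.1, r=3, residual=0.8, avg_energy=0.5))
```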

Keywords: Heterogeneous wireless sensor networks, heterogeneous, lifetime, clustering, cluster head, energy, nodes.

Received December 6, 2019; accepted February 1, 2021

https://doi.org/10.34028/iajit/19/1/6

Full text

Sunday, 02 January 2022 11:40

User-Centric Adaptive Password Policies to Combat Password Fatigue

Yaqoob Al-Slais and Wael El-Medany

College of Information Technology, University of Bahrain, Kingdom of Bahrain

Abstract: Today, online users have an average of 25 password-protected accounts, yet use, on average, only 6.5 distinct passwords. The excessive cognitive burden of remembering large numbers of passwords causes password fatigue, so users tend to reuse passwords or recycle password patterns whenever prompted to change their passwords regularly. Researchers have created adaptive password policies to prevent users from creating new passwords similar to previously created ones. However, this approach creates user frustration, as it neglects users' cognitive burden. This paper proposes a novel User-Centric Adaptive Password Policy (UCAPP) Framework for password creation and management that assigns users system-generated passwords based on a cognitive-behavioural agent-based model. The framework comprises a Password Policy Assignment Test (PassPAST), a Cognitive Burden Scale (CBS), a User Profiling Algorithm, and a Password Generator (PassGEN). The framework creates tailor-made password policies that maintain password memorability for users of different cognitive thresholds without sacrificing password strength and entropy. In a preliminary test, the framework created 30-40% stronger passwords for Critical users and random (non-mnemonic) passwords for Typical users, based on each individual's cognitive password threshold.
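
As a minimal sketch of the entropy calculation that underpins the "without sacrificing password strength and entropy" claim, the snippet below applies H = L * log2(N) for a length-L password drawn from an alphabet of size N. The alphabet-size heuristic is an illustrative assumption, not the framework's PassGEN logic.

```python
import math
import string

def password_entropy_bits(password: str) -> float:
    """Estimate entropy as length * log2(character-pool size)."""
    pool = 0
    pool += 26 if any(c in string.ascii_lowercase for c in password) else 0
    pool += 26 if any(c in string.ascii_uppercase for c in password) else 0
    pool += 10 if any(c in string.digits for c in password) else 0
    pool += 32 if any(c in string.punctuation for c in password) else 0
    return len(password) * math.log2(pool) if pool else 0.0

print(round(password_entropy_bits("Tr0ub4dor&3"), 1))  # ~72.1 bits
```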

Keywords: Cognitive burden, cybersecurity, human factors, adaptive password policy, password fatigue.

Received January 1, 2020; accepted February 15, 2021

https://doi.org/10.34028/iajit/19/1/7

Full text

Sunday, 02 January 2022 11:34

The Delay Measurement and Analysis of Unreachable Hosts of Internet

Ali Gezer

Electronic and Telecommunication Technology, Kayseri University, Turkey

Abstract: Delay-related metrics are significant quality-of-service criteria for the performance evaluation of networks. Almost all delay measurement and analysis studies consider the reachable sources of the Internet. However, unreachable sources might also shed light on problems such as worm propagation. In this study, we carry out a delay measurement study of unreachable destinations and analyse the delay dynamics of unreachable nodes. Internet Control Message Protocol Destination Unreachable (ICMP T3) packets are considered for the delay measurement according to their code types, which indicate network unreachability, host unreachability, port unreachability, etc. Measurement results show that unreachable sources exhibit totally different delay behaviour compared to reachable IP hosts. A significant portion of unreachable hosts experience an extra 3 seconds of Round Trip Time (RTT) delay compared to accessible hosts, mostly due to host unreachability. Approximately 79% of destination unreachability stems from host unreachability. Hurst parameter estimation results reveal that the RTTs of unreachable hosts show a lower Hurst degree than those of reachable hosts, approximating random behaviour. Unreachable sources also exhibit totally different distributional characteristics compared to accessible ones, best fitted by a phased bi-exponential distribution.
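
As a minimal sketch of the measurement idea, the snippet below probes a host, measures the RTT, and records the ICMP destination-unreachable (type 3) code when one arrives. It uses Scapy (requires root privileges); the probe construction and timeout are illustrative assumptions, and the destination address is a placeholder.

```python
import time
from scapy.all import IP, ICMP, sr1  # pip install scapy

def probe(dst: str, timeout: float = 5.0):
    """Send one ICMP echo request and classify the response."""
    t0 = time.time()
    reply = sr1(IP(dst=dst) / ICMP(), timeout=timeout, verbose=0)
    rtt = time.time() - t0
    if reply is None:
        return rtt, "no reply"
    if reply.haslayer(ICMP) and reply[ICMP].type == 3:
        # code 0 = network, 1 = host, 3 = port unreachable, ...
        return rtt, f"destination unreachable (code {reply[ICMP].code})"
    return rtt, "echo reply"
```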

Keywords: RTT distributions, ICMP-T3, ICMP, self-similarity, worm propagation, Hurst parameter.

Received February 3, 2020; accepted April 21, 2021

https://doi.org/10.34028/iajit/19/1/8

Full text

Sunday, 02 January 2022 11:33

GPS Receiver Position Augmentation Using Correntropy Kalman Filter in Low Latitude Terrain

Sirish Kumar Pagoti1, Srilatha Indira Dutt Vemuri2, and Ganesh Laveti3

1Department of Electronics and Communication Engineering, Aditya Institute of Technology and Management, India

2Department of Electronics and Communication Engineering, GITAM University, India

3Department of Electronics and Communication Engineering, GVP College of Engineering for Women, India

Abstract: When a Global Positioning System (GPS) receiver is operated in low latitude regions or urban canyons, satellite visibility is further reduced. These constraints lead to many challenges in providing precise GPS position accuracy over the Indian subcontinent. As a result, standalone GPS accuracy does not meet aircraft landing requirements, such as Category I (CAT-I) precision approaches. The required accuracy can, however, be achieved by augmenting GPS. Among the factors that significantly influence receiver position accuracy, the predominant one is the choice of user/receiver position estimation algorithm. In this article, a novel method based on correntropy, designated the Correntropy Kalman Filter (CKF), is proposed for precise GPS applications and GPS Aided Geosynchronous equatorial orbit Augmented Navigation (GAGAN) based aircraft landings over the low latitude Indian subcontinent. Real-world GPS data collected from a dual-frequency GPS receiver located in the southern region of the Indian subcontinent (IISc, Bangalore, Lat/Long: 13.021°N/77.5°E) is used for the performance evaluation of the proposed algorithm. Results prove that the proposed CKF algorithm exhibits significant improvement (up to 34%) in position estimation compared to the traditional Kalman filter.
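
As a minimal sketch of the correntropy criterion that distinguishes a CKF from the standard Kalman filter, the snippet below shows the Gaussian kernel weighting of residuals, which down-weights non-Gaussian outliers in the measurements. The kernel bandwidth sigma is an illustrative assumption.

```python
import numpy as np

def gaussian_kernel(e: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """G_sigma(e) = exp(-e^2 / (2 sigma^2)): the correntropy-induced weight."""
    return np.exp(-(e ** 2) / (2.0 * sigma ** 2))

# Large innovations receive exponentially smaller weight than under the
# quadratic (standard Kalman) cost, which is what improves robustness.
innovations = np.array([0.1, 0.5, 5.0])
print(gaussian_kernel(innovations))  # the outlier at 5.0 is heavily down-weighted
```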

Keywords: Accuracy, correntropy, correntropy kalman filter, global positioning system, kalman filter.

Received February 7, 2020; accepted February 7, 2021

https://doi.org/10.34028/iajit/19/1/9

Full text

Sunday, 02 January 2022 11:31

Designing an Intelligent System to Detect Stress Levels During Driving

Mohammad Karimi1, Zahra Khandaghi2, and Mahsa Shahipour1

1Department of Biomedical Engineering, Islamic Azad University, Iran

2Department of Medical Radiation Engineering, Shahid Beheshti University, Iran

Abstract: In addition to the devastating effects of anxiety and stress on the development and exacerbation of cardiovascular disease, poor stress control increases drivers' risk of accidents. This paper aims to identify driver stress in various driving situations, so that the driver can be warned to control tense conditions while driving. To detect stress while driving, we used the heart signals in the Physionet database. Linear and non-linear features were used to analyse the electrocardiogram (ECG) under various driving situations. The RR-interval (RRI) features alone can identify driver stress in different driving modes relative to rest periods, while the return-map features can additionally distinguish stress levels between different driving situations. The results showed that the driver's stress level can be distinguished between city 1 and highway 1 driving with a p-value of 0.028, and between city 3 and highway 2 driving with a p-value of 0.041. The accuracy of the proposed detection method is 98±2% over 100 iterations. These results indicate the efficiency and enhanced reliability of our proposed method.
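
As a minimal sketch of the linear RRI analysis mentioned above, the snippet below computes two standard heart-rate-variability features, SDNN and RMSSD, from a series of RR intervals. Which exact features the paper pairs with its return-map analysis is not reproduced here; these are illustrative standard measures.

```python
import numpy as np

def hrv_features(rri_ms: np.ndarray) -> dict:
    """Standard linear HRV features from RR intervals in milliseconds."""
    diffs = np.diff(rri_ms)
    return {
        "mean_rr": float(np.mean(rri_ms)),
        "sdnn": float(np.std(rri_ms, ddof=1)),         # overall variability
        "rmssd": float(np.sqrt(np.mean(diffs ** 2))),  # beat-to-beat variability
    }

print(hrv_features(np.array([810.0, 790.0, 805.0, 820.0, 795.0])))
```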

Keywords: Driver's stress level, heart rate signal, return map, linear and non-linear features, statistical analysis.

Received March 20, 2020; accepted November 24, 2020

https://doi.org/10.34028/iajit/19/1/10

Full text

Sunday, 02 January 2022 11:29

Ensemble based on Accuracy and Diversity Weighting for Evolving Data Streams

Yange Sun1, Han Shao1, and Bencai Zhang2

1School of Computer and Information Technology, Xinyang Normal University, China

2School of Computer and Information Technology, Beijing Jiaotong University, China

Abstract: Ensemble classification is an actively researched paradigm that has received much attention due to increasing real-world applications. The crucial issue in ensemble learning is to construct a pool of base classifiers that is both accurate and diverse. In this paper, unlike conventional data-stream oriented ensemble methods, we propose a novel Measure via both Accuracy and Diversity (MAD), rather than either one alone, to supervise ensemble learning. Based on MAD, a novel online ensemble method called Accuracy and Diversity weighted Ensemble (ADE) effectively handles concept drift in data streams. ADE constructs a concept-drift oriented ensemble in three main steps over the current data window: 1) when drift is detected, a new base classifier is built on the current concept; 2) MAD measures the performance of the ensemble members; and 3) the newly built classifier replaces the worst base classifier, unless the new classifier is itself the worst, in which case no replacement occurs. ADE exceeds the best related state-of-the-art algorithm by 2.38% in average classification accuracy. Experimental results show that the proposed method can effectively adapt to different types of drift.
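
As a minimal sketch of weighting ensemble members by both accuracy and diversity, the idea behind MAD, the snippet below uses disagreement with the majority vote as a diversity proxy and a mixing weight alpha = 0.5. Both choices are illustrative assumptions, not the paper's exact formula.

```python
import numpy as np

def member_weights(preds: np.ndarray, y: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """preds: (n_members, n_samples) integer predictions; y: true labels."""
    acc = (preds == y).mean(axis=1)                     # per-member accuracy
    maj = np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, preds)
    div = (preds != maj).mean(axis=1)                   # disagreement with majority
    w = alpha * acc + (1.0 - alpha) * div
    return w / w.sum()                                  # normalized ensemble weights

preds = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [1, 1, 1, 0]])
print(member_weights(preds, np.array([0, 1, 1, 0])))
```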

Keywords: Ensemble learning, concept drift, accuracy, diversity.

Received March 27, 2020; accepted February 21, 2021

https://doi.org/10.34028/iajit/19/1/11

Full text

Sunday, 02 January 2022 11:27

Systematic Literature Review: Causes of Rework in GSD

Shiza Nawaz, Anam Zai, Salma Imtiaz, and Humaira Ashraf

Department of Computer Science and Software Engineering, International Islamic University, Pakistan

Abstract: Global Software Development (GSD) involves multiple sites with different cultures and time zones as well as geographical locations. It is a common software development approach adopted to achieve competitiveness. However, its many challenges can result in misunderstandings and rework. Rework raises the chance of project failure by delaying the project and increasing the estimated budget. The aim of this study is to identify and categorize the causes of rework in order to reduce its frequency in GSD. To identify the empirical literature related to causes of rework, we performed a Systematic Literature Review (SLR) covering the years 2009 to 2020, with 23 studies meeting the final inclusion criteria. The identified causes of rework in GSD are categorized into six major categories: communication, Requirement Management (RM), stakeholder roles, product development/integration issues, documentation issues, and differences among stakeholders. The most frequently reported causes fall under communication and coordination and RM. Moreover, an industrial survey was conducted to validate the identified rework causes and their mitigation practices with practitioners. This study will help practitioners and researchers address the identified causes and thereby reduce the chances of rework.

Keywords: Global software development, communication and coordination issues, requirement management issues, rework.

Received August 21, 2020; accepted April 28, 2021

https://doi.org/10.34028/iajit/19/1/12

Full text

Sunday, 02 January 2022 11:25

Mining Frequent Sequential Rules with An Efficient Parallel Algorithm

Nesma Youssef1,3, Hatem Abdulkader3, and Amira Abdelwahab2,3

1Department of Information System, Sadat Academy for Management Science, Egypt

2Department of Information Systems, King Faisal University, Saudi Arabia

 3Department of Information Systems, Menoufia University, Egypt

Abstract: Sequential rule mining is one of the most common data mining techniques. It aims to find desired rules in large sequence databases, identifying the essential information that helps acquire knowledge from large search spaces and selecting interesting rules from sequence databases. The key challenge is to avoid wasting time, which is particularly difficult in large sequence databases. This paper studies mining rules from two compact representations of sequential patterns, reducing database size without affecting the final result. In addition, a parallel approach utilizing a multi-core processor architecture is executed for mining non-redundant sequential rules, and pruning techniques are applied to enhance the efficiency of rule generation. The proposed algorithm was evaluated by comparing it with another non-redundant sequential rule algorithm, Non-Redundant with Dynamic Bit Vector (NRD-DBV), on four real datasets with different characteristics. Our experiments show the performance of the proposed algorithm in terms of execution time and computational cost. It achieves the highest efficiency, especially for large datasets with low minimum support values, taking approximately half the time consumed by the compared algorithm.
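
As a minimal sketch of how a sequential rule X -> Y is scored, the snippet below computes support (the fraction of sequences where X appears before Y) and confidence. The tiny database is illustrative; the paper's pattern representations, bit vectors, and parallelism are not reproduced here.

```python
def occurs_before(seq: list, x: set, y: set) -> bool:
    """True if every item of x appears in seq before every item of y."""
    last_x = max((i for i, s in enumerate(seq) if s in x), default=-1)
    first_y = min((i for i, s in enumerate(seq) if s in y), default=len(seq))
    return x.issubset(seq) and y.issubset(seq) and last_x < first_y

def rule_metrics(db: list, x: set, y: set):
    """Support and confidence of the sequential rule x -> y."""
    sup_xy = sum(occurs_before(s, x, y) for s in db) / len(db)
    sup_x = sum(x.issubset(s) for s in db) / len(db)
    return sup_xy, (sup_xy / sup_x if sup_x else 0.0)

db = [["a", "b", "c"], ["a", "c"], ["b", "a", "c"]]
print(rule_metrics(db, {"a"}, {"c"}))  # -> (1.0, 1.0)
```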

Keywords: Non-redundant rule, multi-core processors, dynamic bit vector, closed sequential patterns, sequential generator pattern.

Received December 6, 2020; accepted April 28, 2021

https://doi.org/10.34028/iajit/19/1/13

Full text

Sunday, 02 January 2022 11:22

Analyzing the Behavior of Multiple Dimensionality Reduction Algorithms to Obtain Better Accuracy using Benchmark KDD CUP Dataset

Suriya Prakash Jambunathan1, Suguna Ramadass2, and Palanivel Rajan Selva kumaran3

1Faculty of Information and Communication Engineering, Anna University, India

2Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, India

3Department of Electronics and Communication Engineering, M. Kumarasamy College of Engineering, India

Abstract: In the ubiquitously connected world of IT infrastructure, Intrusion Detection Systems (IDS) play a vital role. An IDS is a critical component of the security infrastructure, implemented in hardware or software, that can detect malicious activities in a networked environment. To detect or prevent network attacks, a Network Intrusion Detection (NID) system may be equipped with machine learning algorithms to achieve better accuracy and faster detection speed. Dimensionality reduction algorithms provide an efficient mechanism for analyzing different attacks effectively: they improve feature selection from huge datasets and enhance learning speed, a crucial parameter in the success of network intrusion detection systems for defensive reactions. In this paper, the open source Knowledge Discovery in Databases (KDD CUP) dataset and the 10% KDD CUP dataset are employed for experimentation. These datasets are processed with dimensionality reduction algorithms, namely Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Kernel PCA with different kernels, and classified with the logistic regression algorithm. To further boost accuracy, k-fold cross-validation is applied, and a comparative study of the accuracy results with and without k-fold validation is performed. The empirical study on the KDD CUP data confirms the effectiveness of the proposed scheme: on the 10% KDD CUP dataset, both PCA and LDA attained 98% accuracy without k-fold validation and 99% with it. These findings should help researchers investigating intrusion detection in network traffic and future work on the KDD CUP dataset.
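
As a minimal sketch of the evaluation pipeline the abstract describes, the snippet below chains dimensionality reduction (PCA here; LDA or Kernel PCA slot in the same way) with logistic regression, scored by 10-fold cross-validation. Synthetic data stands in for the KDD CUP features; the component count is an illustrative assumption.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.random.default_rng(0).normal(size=(500, 41))   # 41 features, as in KDD CUP
y = np.random.default_rng(1).integers(0, 2, size=500)

pipe = make_pipeline(StandardScaler(), PCA(n_components=10),
                     LogisticRegression(max_iter=1000))
scores = cross_val_score(pipe, X, y, cv=10)           # 10-fold cross-validation
print(scores.mean())
```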

Keywords: Intrusion attacks, network, features, accuracy.

Received December 14, 2020; accepted August 17, 2021

https://doi.org/10.34028/iajit/19/1/14

Full text

Sunday, 02 January 2022 10:35

Voice Versus Keyboard and Mouse for Text Creation on Arabic User Interfaces

Abstract: Voice User Interfaces (VUIs) are increasingly popular owing to improvements in automatic speech recognition. However, the understanding of user interaction with VUIs, particularly Arabic VUIs, remains limited. Hence, this research compared user performance, learnability, and satisfaction when using voice versus keyboard-and-mouse input modalities for text creation on Arabic user interfaces. A Voice-enabled Email Interface (VEI) and a Traditional Email Interface (TEI) were developed. Forty participants attempted pre-prepared and self-generated message creation tasks using voice on the VEI and the keyboard-and-mouse modality on the TEI. The results showed that participants were faster (by 1.76 to 2.67 minutes) in pre-prepared message creation using voice than using the keyboard and mouse, and also faster (by 1.72 to 2.49 minutes) in self-generated message creation using voice. Although the learning curves were more efficient with the VEI, more participants were satisfied with the TEI. With the VEI, participants reported problems such as misrecognitions and misspellings, but were satisfied with the visibility of possible executable commands and with the overall accuracy of voice recognition.

Keywords: Voice, speech, recognition, input modal, user interface, user performance, Arabic, text entry, keyboard, mouse.

Received March 23, 2020; accepted June 16, 2021

https://doi.org/10.34028/iajit/19/1/15

Full text
