
Automated Classification of Whole-Body SPECT Bone Scan Images with VGG-Based Deep Networks

Qiang Lin

School of Mathematics and Computer Science, Northwest Minzu University, China


Zhengxing Man

School of Mathematics and Computer Science, Northwest Minzu University, China


Yongchun Cao

School of Mathematics and Computer Science, Northwest Minzu University, China


Haijun Wang

Department of Nuclear Medicine, Gansu Provincial Hospital, China


Abstract: Single Photon Emission Computed Tomography (SPECT) imaging can acquire information about areas of concern in a non-invasive manner. To date, however, deep learning based classification of SPECT images has received little study. To examine the ability of convolutional neural networks to classify whole-body SPECT bone scan images, in this work we propose three different two-class classifiers based on the classical Visual Geometry Group (VGG) model. The proposed classifiers automatically identify whether or not a SPECT image includes lesions by classifying the image into categories. Specifically, a pre-processing method is first proposed to convert each SPECT file into an image by balancing differences in detected uptake between SPECT files, normalizing the elements of each file into a fixed interval, and splitting an image into batches. Second, different strategies were introduced into the classical VGG16 model to develop classifiers while minimizing the number of parameters as far as possible. Lastly, a group of clinical whole-body SPECT bone scan files was used to evaluate the developed classifiers. Experimental results show that our classifiers are workable for automated classification of SPECT images, obtaining best values of 0.838, 0.929, 0.966, 0.908 and 0.875 for accuracy, precision, recall, F-1 score and AUC, respectively.
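
A minimal Python (PyTorch) sketch of the two ingredients the abstract describes, under assumed details: min-max normalisation of a raw SPECT array into a fixed interval, and a VGG-style two-class network slimmed down to far fewer parameters than VGG16. The layer sizes, input resolution and the [0, 1] interval are illustrative assumptions, not the authors' configuration.

import numpy as np
import torch
import torch.nn as nn

def normalise_spect(volume: np.ndarray) -> np.ndarray:
    """Scale raw uptake counts into the [0, 1] interval (assumed interval)."""
    lo, hi = volume.min(), volume.max()
    return (volume - lo) / (hi - lo + 1e-8)

class MiniVGGClassifier(nn.Module):
    """VGG-like stack with far fewer channels/parameters than VGG16."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)   # lesion vs. no lesion

    def forward(self, x):                    # x: (batch, 1, H, W) SPECT images
        return self.classifier(self.features(x).flatten(1))

# usage: logits = MiniVGGClassifier()(torch.rand(4, 1, 256, 256))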

Keywords: Image classification, nuclear medicine, SPECT imaging, deep learning, VGG 16.

Received November 1, 2020; accepted January 19, 2022

https://doi.org/10.34028/iajit/20/1/1

Full text


On Satellite Imagery of Land Cover Classification for Agricultural Development

Ali Alzahrani

Department of Computer Engineering, King Faisal University, Saudi Arabia


Al-Amin Bhuiyan

Department of Computer Engineering, King Faisal University, Saudi Arabia


Abstract: The distribution of chronological land cover modifications has become a vibrant concern in contemporary sustainability research. Information delivered by satellite remote sensing imagery plays a momentous role in enumerating and discovering the expected land cover for vegetation. Fuzzy clustering has been found successful in a significant number of optimization problems associated with machine learning due to its fractional membership degrees across several neighbouring constellations. This research establishes a framework for land cover classification for agricultural development. The approach focuses on object-oriented classification and is organized around Fuzzy c-means clustering over segmentation in the CIE L*a*b* colour scheme, which provides analysis of vegetation coverage and enhances land planning for sustainable development. This research investigates the land cover variations of the eastern province of Saudi Arabia over an elongated period from 1984 to 2018 to recognize the possible roles of land cover alterations on farming. Landsat satellite imagery and a Geographical Information System (GIS), in tandem with Google Earth chronological imagery, are employed for land use variation analysis. Experimental results exhibit a reasonable spread in the cultivated zones and reveal that this Colour Segmented Fuzzy Clustering (CSFC) strategy performs better than relevant counterpart approaches in terms of classification accuracy.
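
For illustration only, a rough Python sketch of the CSFC idea as the abstract outlines it: convert an RGB tile to CIE L*a*b* with scikit-image, then run a plain NumPy fuzzy c-means loop over the pixel colours. The cluster count, fuzzifier m and iteration budget are assumptions rather than the paper's settings.

import numpy as np
from skimage.color import rgb2lab   # assumes scikit-image is available

def fuzzy_cmeans(X, c=4, m=2.0, iters=50, seed=0):
    """Plain NumPy fuzzy c-means; X is (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                       # membership columns sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-9
        U = 1.0 / d ** (2 / (m - 1))
        U /= U.sum(axis=0)
    return centers, U

def segment_tile(rgb_tile):
    """Soft land-cover clusters from pixel colours (illustrative only)."""
    lab = rgb2lab(rgb_tile).reshape(-1, 3)   # pixels as L*a*b* triples
    centers, U = fuzzy_cmeans(lab, c=4)
    return U.argmax(axis=0).reshape(rgb_tile.shape[:2])   # hard labels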

Keywords: Satellite imagery, image processing, land cover classification, fuzzy c-means clustering, CIE L*a*b* colour.

Received January 1, 2021; accepted January 19, 2022

https://doi.org/10.34028/iajit/20/1/2

Full text


Latent Fingerprint Recognition using Hybrid Ant Colony Optimization and Cuckoo Search

Richa Jindal

Department of Computer Science and Engineering, IK Gujral Punjab Technical University, India


Sanjay Singla

Department of Computer Science and Engineering, Chandigarh University, Punjab, India


Abstract: Latent fingerprints have been used as prominent evidence for identifying crime suspects for ages. The unavailability of complete minutiae information, the poor quality of impressions, and the overlapping of multiple impressions make latent fingerprint recognition a challenging task. Although the existing contributions in the field are efficient at determining a match, the existing techniques need to be improved, because false identification can put the innocent behind bars. This research work amalgamates the Cuckoo Search (CS) algorithm with Ant Colony Optimization (ACO) for the recognition of latent fingerprints. It reduces the demerits of the individual cuckoo search algorithm, such as the probability of falling into local optima and the inefficient creation of nests at the boundary due to the random walk and Levy flight attributes. The positive feedback mechanism of ant colony optimization makes it easy to combine with other techniques, reducing the risk of local failure and helping to reach the global best solution. Prior to its evaluation on the NIST SD-27 latent fingerprint dataset, the proposed amalgamated technique is tested on benchmark functions of different shapes and physical attributes. Both the benchmark testing and the latent fingerprint evaluation show that the amalgamated technique improves on the individual cuckoo search algorithm. The state-of-the-art comparison indicates that the amalgamated technique outperforms other fingerprint matching techniques.
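
A hedged sketch of two building blocks named in the abstract: a Levy-flight step (Mantegna's algorithm) of the kind cuckoo search uses to propose new nests, and a pheromone-proportional selection in the spirit of ACO. How the paper actually couples the two algorithms, and all parameter values, are assumptions here.

import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=np.random.default_rng()):
    """Mantegna's algorithm for a Levy-distributed step."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def propose_nest(nest, best, alpha=0.01, rng=np.random.default_rng()):
    """Cuckoo-search move: new candidate biased toward the current best nest."""
    return nest + alpha * levy_step(len(nest), rng=rng) * (nest - best)

def aco_pick(candidates, pheromone, rng=np.random.default_rng()):
    """Pheromone-proportional choice used to guide which candidate to refine."""
    p = pheromone / pheromone.sum()
    return candidates[rng.choice(len(candidates), p=p)]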

Keywords: Latent fingerprint, cuckoo search, ant colony optimization, swarm intelligence, biometric system, fingerprint recognition, latent fingerprint recognition, levy flight.

Received February 14, 2021; accepted March 16, 2022

https://doi.org/10.34028/iajit/20/1/3

Full text


Highly Accurate Spam Detection with the Help of Feature Selection and Data Transformation

Hidayet Takci

Computer Engineering Department, Sivas Cumhuriyet University, Turkey


Fatema Nusrat

Computer Science Department, Asian University for Women, Bangladesh


Abstract: The amount of spam is increasing rapidly as the popularity of email grows. This situation has led to the need to filter spam emails. To date, many knowledge-based, learning-based, and clustering-based methods have been developed for filtering spam emails. In this study, machine-learning-based spam detection was targeted, and the C4.5, ID3, RndTree, C-Support Vector Classification (C-SVC), and Naïve Bayes algorithms were used for email spam detection. In addition, feature selection and data transformation methods were used to increase spam detection success. Experiments were performed on the UC Irvine Machine Learning Repository (UCI) spambase dataset, and the results were compared for accuracy, Receiver Operating Characteristic (ROC) analysis, and classification speed. According to the accuracy comparison, the C-SVC algorithm gave the highest accuracy at 93.13%, followed by the RndTree algorithm. According to the ROC analysis, the RndTree algorithm gave the best Area Under Curve (AUC) value of 0.999, while the C4.5 algorithm gave the second-best result. The most successful methods in terms of classification speed were the Naïve Bayes and RndTree algorithms. The experiments showed that feature selection and data transformation methods increased spam detection success. The data transformation that increased classification success the most was binary transformation, and the most effective feature selection method was forward selection.
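
A rough scikit-learn analogue of the winning configuration reported above, shown only as a sketch: binary transformation of the Spambase features, forward feature selection, and a C-SVC classifier chained in one pipeline. The threshold, the number of selected features and the kernel are assumptions, not the study's exact settings.

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Binarizer
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

svc = SVC(kernel="rbf", C=1.0)               # C-SVC classifier
pipe = Pipeline([
    ("binarize", Binarizer(threshold=0.0)),  # word present / absent
    ("forward_select", SequentialFeatureSelector(
        svc, n_features_to_select=20, direction="forward", cv=5)),
    ("svm", svc),
])
# usage (X, y = Spambase features and labels):
# pipe.fit(X_train, y_train); accuracy = pipe.score(X_test, y_test)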

Keywords: Internet security, prediction methods, feature selection, data conversion, spam detection.

Received July 2, 2021; accepted September 28, 2022

https://doi.org/10.34028/iajit/20/1/4

Full text


Heart Disease Classification for Early Diagnosis based on Adaptive Hoeffding Tree Algorithm in IoMT Data

Ersin Elbasi

College of Engineering and Technology, American University of the Middle East, Kuwait


Aymen I. Zreikat

College of Engineering and Technology, American University of the Middle East, Kuwait


Abstract: Heart disease is a rapidly increasing cause of death worldwide. Therefore, scientists around the globe have started studying this issue from different perspectives to ensure early diagnosis and save patients from the severe consequences that lead to death. In this regard, Internet of Medical Things (IoMT) applications and algorithms should be utilized effectively to overcome this problem. The Hoeffding Tree Algorithm (HTA) is a standard decision tree algorithm for handling large data sets. In this paper, an Adaptive Hoeffding Tree (AHT) algorithm is suggested to carry out classification of data sets for early diagnosis of heart disease-related factors, and the results obtained by this algorithm are compared with other Machine Learning (ML) algorithms suggested in the literature. A total of 3000 records is used in the classification; 33% of the data correspond to female patients and the rest to male patients. In the original data set, each patient record includes 76 attributes, but only the 16 most important patient attributes are used for the classification. Data are retrieved from the University of California Irvine (UCI) Machine Learning Repository and were originally collected from the Hungarian Institute of Cardiology, University Hospital Zurich, University Hospital Basel, and the V.A. Medical Center. The results obtained in this study and the comparative results provided show the effectiveness of the AHT algorithm over other ML algorithms: AHT outperforms the other algorithms with 95.67% accuracy for early diagnosis of heart disease.
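
For orientation, a small Python sketch of the statistical test underlying Hoeffding-tree splitting, which the AHT builds on: a leaf splits once the gap between the two best attribute gains exceeds the Hoeffding bound. The delta, value range and sample counts below are illustrative, and the adaptive part of AHT is not reproduced.

import math

def hoeffding_bound(value_range: float, delta: float, n: int) -> float:
    """epsilon = sqrt(R^2 * ln(1/delta) / (2 n))."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

def should_split(gain_best: float, gain_second: float,
                 value_range: float = 1.0, delta: float = 1e-7,
                 n_seen: int = 200) -> bool:
    """Split when the observed gain gap exceeds the Hoeffding bound."""
    return (gain_best - gain_second) > hoeffding_bound(value_range, delta, n_seen)

# e.g., should_split(0.32, 0.25, n_seen=3000) is True, while with n_seen=500
# the bound (~0.127) still exceeds the gap and the leaf keeps accumulating data.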

Keywords: Internet of medical things, machine learning, medical data, random forest, internet of things, diagnosis, AHT.

Received August 7, 2021; accepted September 26, 2022

https://doi.org/10.34028/iajit/20/1/5

Full text


A Rule-Induction Approach for Building Arabic Language Interfaces to Databases

Hanane Bais

LAMIGEP, EMSI Marrakech, Morocco


Mustapha Machkour

Department of Computer Sciences, Ibn Zohr University, Morocco


Abstract: In the field of Natural Language Interfaces for Databases (NLIDB), most of the solutions considered for translating natural language queries into a database query language are based on linguistic operations. Applying these operations makes it possible to translate natural language queries into an unambiguous logical interpretation. However, this task is extremely complex and requires excessive time. Nowadays, emphasis is placed on the use of machine learning approaches to automate the operation of natural language processing systems. Consequently, automating the translation of natural language queries into a logical interpretation is interesting and remains a major challenge in the NLIDB field; it can also directly reduce the complexity of operating an NLIDB. In this study, we focus on applying a new approach to automate the operation of an NLIDB. In this approach, we apply a supervised learning technique to induce rules that transform natural language queries into unambiguous expressions.

Keywords: Rule-induction, databases, machine learning, intelligent interfaces, Arabic language processing.

Received June 7, 2020; accepted October 10, 2021

https://doi.org/10.34028/iajit/20/1/6

Full text


A Genetic Algorithm based Domain Adaptation Framework for Classification of Disaster Topic Text Tweets

Lokabhiram Dwarakanath

Department of Computer System and Technology, Universiti Malaya, Malaysia


Amirrudin Kamsin

Department of Computer System and Technology, Universiti Malaya, Malaysia


Liyana Shuib

Department of Information Systems, Universiti Malaya, Malaysia


Abstract: The ability to post short text and media messages on social media platforms like Twitter, Facebook, etc., plays a huge role in the exchange of information following a mass emergency event such as a hurricane, earthquake or tsunami. Disaster victims, families, and relief operation teams utilize social media to help and support one another. Despite the benefits offered by these communication media, disaster-topic posts (posts that indicate conversations about the disaster event in its aftermath) get lost in the deluge of posts, since there is a surge in the amount of data exchanged following a mass emergency event. This hampers the emergency relief effort, which in turn affects the delivery of useful information to the disaster victims. Research in emergency coordination via social media has received growing interest in recent years, mainly focusing on developing machine learning-based models that can separate disaster-related topic posts from non-disaster-related posts. Of these, supervised machine learning approaches perform well when the source disaster dataset used to train the model and the target disaster dataset are similar. However, in the real world this may not hold, as different disasters have different characteristics, so models developed using supervised machine learning approaches do not perform well on unseen disaster datasets. Therefore, domain adaptation approaches, which address this limitation by learning classifiers from unlabeled target data in addition to labelled source data, represent a promising direction for social media crisis data classification tasks. Existing domain adaptation techniques for the classification of disaster tweets are evaluated on single disaster event dataset pairs; self-training is then performed on the source-target dataset pairs by considering the highly confident instances in subsequent training iterations. This could be improved with better feature engineering. Thus, this research proposes a Genetic Algorithm based Domain Adaptation Framework (GADA) for the classification of disaster tweets. The proposed GADA combines the power of 1) a Hybrid Feature Selection component using the Genetic Algorithm and a Chi-Square Feature Evaluator for feature selection and 2) a Classifier component using Random Forest to separate disaster-related posts from noise on Twitter. The proposed framework addresses the challenge of the lack of labeled data in the target disaster event through the Genetic Algorithm based approach. Experimental results on Twitter datasets corresponding to four disaster domain pairs show that the proposed framework improves the overall performance of previous supervised approaches and significantly reduces training time compared to previous domain adaptation techniques that do not use the Genetic Algorithm (GA) for feature selection.
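
A hedged sketch of the feature-selection idea behind GADA: a tiny genetic algorithm whose fitness mixes chi-square feature scores with a Random Forest cross-validation score over tweet features. The population size, number of generations, mutation rate and fitness weighting are assumptions, not the paper's configuration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import chi2
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y, chi_scores):
    """Mix of classifier accuracy and chi-square relevance (assumed weighting)."""
    if mask.sum() == 0:
        return 0.0
    rf = RandomForestClassifier(n_estimators=50, random_state=0)
    acc = cross_val_score(rf, X[:, mask], y, cv=3).mean()
    return 0.8 * acc + 0.2 * chi_scores[mask].mean()

def ga_select(X, y, pop=12, gens=10, rng=np.random.default_rng(0)):
    chi_scores, _ = chi2(X, y)                       # needs non-negative features
    chi_scores = chi_scores / (chi_scores.max() + 1e-9)
    masks = rng.random((pop, X.shape[1])) < 0.5      # random initial subsets
    for _ in range(gens):
        scores = np.array([fitness(m, X, y, chi_scores) for m in masks])
        parents = masks[np.argsort(scores)[-pop // 2:]]          # selection
        cut = rng.integers(1, X.shape[1])                        # one-point crossover
        children = np.concatenate(
            [np.r_[parents[i, :cut], parents[(i + 1) % len(parents), cut:]][None]
             for i in range(len(parents))])
        children ^= rng.random(children.shape) < 0.02            # bit-flip mutation
        masks = np.vstack([parents, children])
    best = masks[np.argmax([fitness(m, X, y, chi_scores) for m in masks])]
    return best   # boolean mask over the tweet features (e.g., TF-IDF terms)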

Keywords: Crisis informatics, disaster management, machine learning, domain adaptation approaches, social media, genetic algorithm.

Received December 5, 2021; accepted February 20, 2022

https://doi.org/10.34028/iajit/20/1/7

Full text


Framework of Geofence Service using Dummy Location Privacy Preservation in Vehicular Cloud Network

Hani Al-Balasmeh

Computer Science and Engineering Department, Thapar Institute of Engineering and Technology, India


Maninder Singh

Computer Science and Engineering Department, Thapar Institute of Engineering and Technology, India


Raman Singh

School of Computing, Engineering, and Physical Sciences, University of the West of Scotland, United Kingdom


Abstract: With the increasing prevalence of mobile apps, many applications require users to enable the location service on their devices. The geofence service, for example, can be defined as establishing virtual geographical boundaries: enabling this service triggers entry and exit events for the boundary area and notifies the user and trusted third parties. The foremost concern in using geofences is the privacy of the location coordinates shared among different applications. In this paper, a framework called 'TIET-GEO' is proposed that allows users to define the geofence boundary and monitors Global Positioning System (GPS) devices in real time when they enter or exit a specific area. The framework also includes a dummy privacy-preservation algorithm that generates K dummy locations around the real trajectories when the user requests a Point Of Interest (POI) from Location-Based Services (LBS). This article aims to enhance location privacy preservation in the geofence service by generating k dummy locations around the user's location based on the radius of the geofence area. The proposed framework uses token-key authentication to authorize users in the Vehicular Cloud Network (VCN) service by generating secret token keys between the client and the services. The results obtained show the effectiveness of the proposed framework on parameters such as the flexibility and reliability of responses from different sources, such as smart IoT devices and datasets.
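
Purely as an illustration of the k-dummy idea, the Python sketch below scatters k decoy coordinates uniformly inside the geofence radius around the true position; the paper's actual generation rule, and the sample coordinates in the usage comment, are assumptions rather than details taken from the article.

import math, random

EARTH_RADIUS_M = 6_371_000.0

def k_dummy_locations(lat, lon, radius_m, k=10, rng=random.Random(0)):
    """Return k plausible decoy (lat, lon) pairs inside the geofence radius."""
    dummies = []
    for _ in range(k):
        r = radius_m * math.sqrt(rng.random())   # sqrt gives a uniform disc
        theta = 2 * math.pi * rng.random()
        d_lat = (r * math.cos(theta)) / EARTH_RADIUS_M
        d_lon = (r * math.sin(theta)) / (EARTH_RADIUS_M * math.cos(math.radians(lat)))
        dummies.append((lat + math.degrees(d_lat), lon + math.degrees(d_lon)))
    return dummies

# usage: send k_dummy_locations(26.4207, 50.0888, radius_m=500) to the LBS
# alongside (or instead of) the real coordinate, depending on the privacy policy.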

Keywords: Location privacy preservation, geofence service, vehicular cloud, user-authentication.

Received February 10, 2022; accepted April 13, 2022

https://doi.org/10.34028/iajit/20/1/8

Full text


Challenges and Mitigation Strategies for Transition from IPv4 Network to Virtualized Next-Generation IPv6 Network

Zeeshan Ashraf

Department of Computer Science and IT, The University of Chenab, Pakistan


Adnan Sohail

Department of Computing and Technology, IQRA University, Pakistan


Sohaib Latif

Department of Computer Science, Anhui University of Science and Technology, China


Abdul Hameed

Department of Computing and Technology, IQRA University, Pakistan


Muhammad Yousaf

Department of Cyber Security, Riphah International University, Pakistan


Abstract: The rapid proliferation of the Internet has exhausted the Internet Protocol version 4 (IPv4) addresses offered by the Internet Assigned Numbers Authority (IANA). The new version of IP, i.e., IPv6, was launched by the Internet Engineering Task Force (IETF) with new features, such as a simpler packet header, a larger address space, a new anycast addressing type, integrated security, efficient segment routing, and better Quality of Service (QoS). Virtualized network architectures such as Network Function Virtualization (NFV) and Software Defined Networking (SDN) have also been introduced. These new paradigms have entirely changed the way of internetworking and provide many benefits across the application domains that use SDN and NFV. ISPs are trying to move gradually from existing IPv4 physical networks to virtualized next-generation IPv6 networks. The transition from physical IPv4 to software-based IPv6 is very slow because IPv4 addresses are still used by billions of devices around the globe. IPv4 and IPv6 are different in format and behaviour, so direct communication between IPv4 and IPv6 is not possible. Both protocols will co-exist for a long time during the transition despite the incompatibility issues. The core issues between IPv4 and IPv6 are compatibility, interoperability, and security. The transition creates many challenges for ISPs while shifting the network toward a software-based IPv6 network: packet traversal, routing scalability, performance guarantees, and security are the main challenges ISPs face. In this research, we present a qualitative and comprehensive survey: we summarize the challenges of the transition process, recommend appropriate solutions, and give an in-depth analysis of their mitigations on the way towards the next-generation virtual IPv6 network.

Keywords: QoE, SDN, NFV, segment routing, security.

Received November 16, 2019; accepted December 6, 2020

https://doi.org/10.34028/iajit/20/1/9

Full text


A Novel Architecture for Search Engine using Domain Based Web Log Data

Prem Sharma

Computer Science and Engineering, Veer Madho Singh Bhandari Uttarakhand Technical University, India


Divakar Yadav

School of Computer and Information Sciences, Indira Gandhi National Open University, India


Abstract: Search engines, as information retrieval tools, are nowadays the main source of information for users' information needs. For every query, the search engine explores its repository and/or indexer to find relevant documents/URLs for that query. Page ranking algorithms rank the Uniform Resource Locators (URLs) according to their relevancy with respect to the user's query. Analysis shows that many of the queries fired at search engines are duplicates, so there is scope to improve search engine performance by reducing the effort spent on duplicate queries. In this paper, a proxy server is created that stores the search results of user queries in a web log. The proposed proxy server uses this web log to return results faster for duplicate queries fired later. The proposed scheme has been tested and found promising. The proposed architecture was tested on ten duplicate user queries; it returns all relevant web pages for a duplicate user query (if the query is found in the web log at the proxy server) from a particular domain instead of the entire database. It reduces the perceived latency for duplicate queries and also improves precision and accuracy up to 81.8% and 99%, respectively, for all duplicate user queries.
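
A toy sketch of the proxy-side web log described above: duplicate queries are answered from the log, while new queries are forwarded to the search engine and recorded. The class name, the normalisation step and the backend hook are assumptions made for illustration.

from typing import Callable, Dict, List

class QueryLogProxy:
    def __init__(self, search_backend: Callable[[str], List[str]]):
        self.search_backend = search_backend   # the real search engine call
        self.web_log: Dict[str, List[str]] = {}

    @staticmethod
    def _normalise(query: str) -> str:
        return " ".join(query.lower().split())

    def search(self, query: str) -> List[str]:
        key = self._normalise(query)
        if key in self.web_log:                # duplicate query: serve from log
            return self.web_log[key]
        results = self.search_backend(query)   # first occurrence: go upstream
        self.web_log[key] = results
        return results

# usage: proxy = QueryLogProxy(lambda q: ["https://example.org/1"])
#        proxy.search("deep learning"); proxy.search("Deep  Learning")  # served from log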

Keywords: Search engine, information retrieval, web usage mining, content mining.

Received February 14, 2021; accepted March 16, 2022

https://doi.org/10.34028/iajit/20/1/10

Full text


Robust Hearing-Impaired Speaker Recognition from Speech using Deep Learning Networks in Native Language

Jeyalakshmi Chelliah

Department of ECE, K.Ramakrishnan College of Engineering, India


KiranBala Benny

Department of Artificial Intelligence and Data Science, K.Ramakrishnan College of Engineering, India


Revathi Arunachalam

School of EEE, Sastra Deemed to be University, India


Viswanathan Balasubramanian

Department of ECE, K.Ramakrishnan College of Engineering, India


Abstract: Research on speaker recognition has grown recently due to its tremendous applications in security, criminal investigations and other major fields. A speaker is identified by the way they speak, not by the spoken words. Hence the identification of hearing-impaired speakers from their speech is a challenging task, since their speech is highly distorted. In this paper, a new task is introduced: recognizing Hearing-Impaired (HI) speakers using speech as a biometric in the native language Tamil. Though their speech is very hard to recognize even for their parents and teachers, the proposed system accurately identifies them by incorporating enhancement of their speech. Due to the huge variety in their utterances, instead of applying the spectrogram of raw speech, Mel Frequency Cepstral Coefficient features are derived from the speech and applied as a spectrogram to a Convolutional Neural Network (CNN), which is not necessary for ordinary speakers. In the proposed system for recognizing HI speakers, CNN is used as the modelling technique to assess the performance of the system; this deep learning network provides 80% accuracy and the system is less complex. When an Auto Associative Neural Network (AANN) is used as the modelling technique instead, its performance is only 9% accurate, so CNN performs better than AANN for recognizing HI speakers. Hence this system is very useful for biometric systems and other security-related applications for hearing-impaired speakers.
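
A hedged sketch of the kind of front end the abstract describes: an utterance is converted to an MFCC "image" with librosa and passed to a small CNN speaker classifier. The sampling rate, n_mfcc, layer sizes and number of speakers are assumptions, not the authors' architecture.

import librosa
import torch
import torch.nn as nn

def mfcc_image(wav_path: str, n_mfcc: int = 40) -> torch.Tensor:
    """Load an utterance and return a normalised MFCC map as a 1-channel tensor."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, frames)
    mfcc = (mfcc - mfcc.mean()) / (mfcc.std() + 1e-8)
    return torch.tensor(mfcc, dtype=torch.float32).unsqueeze(0)

class SpeakerCNN(nn.Module):
    def __init__(self, n_speakers: int = 20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_speakers))

    def forward(self, x):          # x: (batch, 1, n_mfcc, frames)
        return self.net(x)

# usage: logits = SpeakerCNN()(mfcc_image("utterance.wav").unsqueeze(0))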

Keywords: Speaker recognition, voice impaired, energy, deep learning based convolutional neural network, mel frequency cepstral coefficient, Auto associative neural network, back propagation algorithm.

Received November 26, 2020; accepted December 26, 2021

https://doi.org/10.34028/iajit/20/1/11

Full text


A VANET Collision Warning System with Cloud Aided Pliable Q-Learning and Safety Message Dissemination


Nalina Venkatamune

Department of Information Science and Engineering, BMS College of Engineering, India


Jayarekha PrabhaShankar

Department of Information Science and Engineering, BMS College of Engineering, India


Abstract: Rapid developments in self-driving technology have revived Vehicular Ad hoc Networks (VANETs) and motivated the Intelligent Transportation System (ITS) community to develop novel intelligent solutions that amplify VANET safety and efficiency. Collision warning systems play a significant role in VANETs because they help avoid fatalities in vehicle crashes. Different kinds of collision warning systems have been designed for diverse VANET scenarios. Among them, reinforcement-based machine learning algorithms receive much attention because they dispense with explicit modeling of the environment. However, it is a challenging task to retrieve the Q-learning parameters from the dynamic VANET environment effectively. To handle this issue and make the VANET driving environment safer, this paper proposes a cloud aided pliable Q-Learning based Collision Warning Prediction and Safety message Dissemination (QCP-SD) scheme. The proposed QCP-SD integrates two mechanisms: pliable Q-learning based collision prediction and safety alert message dissemination. First, designing the pliable Q-learning parameters from dynamic VANET factors enhances collision prediction accuracy; furthermore, QCP-SD estimates a novel metric named the Collision Risk Factor (CRF) and minimizes the driving risks due to vehicle crashes. Executing pliable Q-learning only at Road Side Units (RSUs) minimizes the vehicle burden and reduces the design complexity. Second, QCP-SD sends alerts to the vehicles in the risky region through highly efficient next-hop disseminators selected based on a multi-attribute cost value. The performance of QCP-SD is evaluated in Network Simulator 2 (NS-2), and its efficiency is analyzed using performance metrics including duplicate packets, latency, packet loss, packet delivery ratio, secondary collisions, throughput, and overhead.
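
As a minimal illustration of the reinforcement-learning core (not the paper's pliable parameter design), the sketch below shows a tabular Q-learning update and an epsilon-greedy warning decision such as an RSU-side predictor could run; the state/action encoding and all hyperparameters are assumed.

import numpy as np

N_STATES, N_ACTIONS = 64, 2          # e.g., discretised risk levels x {warn, no-warn}
Q = np.zeros((N_STATES, N_ACTIONS))

def q_update(state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One Bellman backup: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (td_target - Q[state, action])

def choose_action(state, epsilon=0.1, rng=np.random.default_rng()):
    """Epsilon-greedy warning decision for the observed traffic state."""
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))
    return int(Q[state].argmax())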

Keywords: VANETs, collision warning system, reinforcement learning, pliable q-learning, multi-attribute cost based disseminator selection, reliable safety routing.

Received January 2, 2021; accepted October 31, 2021

https://doi.org/10.34028/iajit/20/1/12

Full text


A Novel Energy Efficient Harvesting Technique for SDWSN using RF Transmitters with MISO Beamforming

Subaselvi Sundarraj

Department of Electronics and Communication Engineering, Anna University, India


Gunaseelan Konganathan

Department of Electronics and Communication Engineering, Anna University, India


Abstract: Software Defined Wireless Sensor Networks (SDWSNs) emerged to overcome the additional energy consumption of WSNs. Even so, the sensor nodes in an SDWSN suffer from scarce battery resources. Generally, Radio Frequency (RF) transmitters are deployed around the base station in the SDWSN to overcome the high energy consumption problem. To enhance the harvested energy and the coverage of nodes in the network, a new energy harvesting technique using RF transmitters with Multiple Input and Single Output (MISO) beamforming is proposed. In this method, multiple-antenna RF transmitters and single-antenna sensor nodes are deployed. An optimization problem subject to Signal to Noise Ratio (SNR) and energy harvesting constraints is formulated for the hybrid beamforming design to reduce the transmit power in the network. A convex Second Order Cone Programming (SOCP) formulation is derived to obtain the optimal hybrid beamforming design. The beamforming technique steers the beam in the desired direction and places nulls in other directions, which improves energy harvesting. Simulation results show that the proposed technique provides better average harvested energy, average transmit power, average residual energy and throughput than existing RF transmitter based energy harvesting methods.
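
A hedged cvxpy sketch of the SOCP flavour of the problem: minimising transmit power for a single-user MISO link under an SNR target. The paper's joint SNR and energy-harvesting formulation is richer than this; the channel realisation, SNR target and noise power below are made up for illustration.

import numpy as np
import cvxpy as cp

n_tx = 4                                   # transmit antennas at the RF source
rng = np.random.default_rng(0)
h = (rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)) / np.sqrt(2)
gamma, sigma2 = 10.0, 1e-3                 # SNR target (linear) and noise power

w = cp.Variable(n_tx, complex=True)        # beamforming vector
constraints = [
    cp.real(h.conj() @ w) >= np.sqrt(gamma * sigma2),   # SNR as an SOC constraint
    cp.imag(h.conj() @ w) == 0,                         # phase rotation w.l.o.g.
]
prob = cp.Problem(cp.Minimize(cp.norm(w, 2)), constraints)
prob.solve()
print("transmit power:", np.linalg.norm(w.value) ** 2)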

Keywords: SDWSN, RF transmitters, energy harvesting, MISO, beamforming, SOCP, convex optimization.

Received January 22, 2021; accepted April 3, 2022

https://doi.org/10.34028/iajit/20/1/13

Full text


Performance Evaluation of Keyword Extraction Techniques and Stop Word Lists on Speech-To-Text Corpus

Blessed Guda

Department of Electrical and Computer Engineering/AI, Carnegie Mellon University, Africa


Bello Kontagora Nuhu

Department of Computer Engineering, Federal University of Technology, Nigeria

Corresponding author

James Agajo

Department of Computer Engineering, Federal University of Technology, Nigeria


Ibrahim Aliyu

Department of ICT Convergence System Engineering, Chonnam National University, Korea

Corresponding author

Abstract: The dawn of conversational user interfaces, through which humans communicate with computers by voice, has arrived. Therefore, Natural Language Processing (NLP) techniques are required that focus not only on text but also on audio speech. Keyword extraction is a technique for extracting key phrases from a document; these can summarize the document and be used in text classification. Existing keyword extraction techniques have commonly been applied only to text/typed datasets. With the advent of text data from speech recognition engines, which are less accurate than typed texts, the suitability of keyword extraction is questionable. This paper evaluates the suitability of conventional keyword extraction methods on a speech-to-text corpus. A new audio dataset for keyword extraction is collected using the World Wide Web (WWW) corpus. The performance of Rapid Automatic Keyword Extraction (RAKE) and TextRank is evaluated with different stoplists on both the originally typed corpus and the corresponding Speech-To-Text (STT) corpus from the audio. The metrics of precision, recall, and F1 score were considered for the evaluation. From the obtained results, TextRank with the FOX stoplist showed the highest performance on both the text and audio corpora, with F1 scores of 16.59% and 14.22%, respectively. Despite lagging behind the text corpus, the F1 score recorded for the TextRank technique on the audio corpus is significant enough for its adoption in audio conversation without much concern. However, the absence of punctuation in the STT output affected the F1 score of all the techniques.
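
A hedged example of the kind of comparison described, using the rake-nltk package with a custom stoplist and a simple F1 computation; the FOX stoplist object, the sample texts and the gold keyphrases in the usage comment are placeholders, not the paper's data or tooling.

from rake_nltk import Rake

def top_keywords(text: str, stoplist: set, k: int = 10) -> list:
    rake = Rake(stopwords=stoplist)          # the stoplist choice drives the results
    rake.extract_keywords_from_text(text)
    return rake.get_ranked_phrases()[:k]

def f1(predicted: set, gold: set) -> float:
    tp = len(predicted & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(predicted), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# usage (typed_text, stt_text, gold_keyphrases, fox_stoplist are assumed inputs):
# print(f1(set(top_keywords(typed_text, fox_stoplist)), gold_keyphrases))
# print(f1(set(top_keywords(stt_text, fox_stoplist)), gold_keyphrases))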

Keywords: Keyword, natural language processing, RAKE, textrank, stoplist, speech recognition.

Received August 13, 2021; accepted August 31, 2022

https://doi.org/10.34028/iajit/20/1/14

Full text


T-LBERT with Domain Adaptation for Cross-Domain Sentiment Classification

Hongye Cao

School of Software, Northwestern Polytechnical University, China


Qianru Wei

School of Software, Northwestern Polytechnical University, China


Jiangbin Zheng

School of Software, Northwestern Polytechnical University, China


Abstract: Cross-domain sentiment classification transfers knowledge from a source domain to a target domain that lacks supervised information for sentiment classification. Existing cross-domain sentiment classification methods establish connections by extracting domain-invariant features manually. However, these methods adapt poorly when bridging connections across different domains and ignore important sentiment information. Hence, we propose a Topic Lite Bidirectional Encoder Representations from Transformers (T-LBERT) model with domain adaptation to improve the adaptability of cross-domain sentiment classification. It combines the learning content of the source domain and the topic information of the target domain to improve the domain adaptability of the model. Due to the unbalanced distribution of information in the combined data, we apply a two-layer adaptive attention mechanism for classification. A shallow attention layer is applied to weight the important features of the combined data. Inspired by active learning, we propose a deep domain adaptation layer, which actively adjusts model parameters to balance the difference and representativeness between domains. Experimental results on Amazon review datasets demonstrate that the T-LBERT model considerably outperforms other state-of-the-art methods. T-LBERT shows stable classification performance on multiple metrics.
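
For illustration only, a PyTorch sketch of a "shallow attention" layer of the kind the abstract mentions: softmax weights over token features used to pool a combined source/target sequence before classification. The hidden size and the pooling design are assumptions; this is not the T-LBERT architecture itself.

import torch
import torch.nn as nn

class ShallowAttentionPool(nn.Module):
    def __init__(self, hidden: int = 768):
        super().__init__()
        self.score = nn.Linear(hidden, 1)

    def forward(self, token_states: torch.Tensor) -> torch.Tensor:
        # token_states: (batch, seq_len, hidden) from the encoder
        weights = torch.softmax(self.score(token_states), dim=1)  # (B, L, 1)
        return (weights * token_states).sum(dim=1)                # (B, hidden)

# usage: pooled = ShallowAttentionPool()(torch.rand(2, 128, 768))
#        logits = nn.Linear(768, 2)(pooled)   # sentiment head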

Keywords: Cross-domain, sentiment classification, topic model, attention, domain adaption.

Received November 1, 2021; accepted April 4, 2022

https://doi.org/10.34028/iajit/20/1/15

 Full text
