Tuesday, 24 April 2018 05:34

A Hybrid BATCS Algorithm to Generate Optimal Query Plan

Gomathi Ramalingam1 and Sharmila Dhandapani2

1Department of Computer Science and Engineering, Bannari Amman Institute of Technology, India

2Department of Electronics and Instrumentation Engineering, Bannari Amman Institute of Technology, India

Abstract: The enormous day-by-day increase in the number of web pages has driven progress in semantic web data management. The issues in semantic web data management are growing, and further research is needed to handle them. One of the most important issues is query optimization. Semantic web data stored in the form of Resource Description Framework (RDF) data can be queried using the popular query language SPARQL Protocol And RDF Query Language (SPARQL). As the size of the data increases, querying RDF data becomes more complicated. Querying RDF graphs involves multiple join operations, and optimizing those joins is NP-hard. Nature-inspired algorithms have become popular in recent years for handling problems of high complexity. In this research, a hybrid BAT algorithm with Cuckoo Search (BATCS) is proposed to handle the problem of query optimization. The algorithm applies the echolocation behaviour of bats and switches to cuckoo search if the best solution stagnates for a designated number of iterations. Experiments were conducted on benchmark data sets, and the algorithm proves efficient in terms of query execution time.
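The stagnation-triggered hybridization described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, parameter values, and the simplified bat-style and cuckoo-style move rules are all assumptions.

```python
import random

def hybrid_search(evaluate, dim, n_agents=20, max_iter=200, stagnation_limit=15):
    """Stagnation-triggered hybrid: agents drift toward the best solution
    (bat-style echolocation move); if the best fitness has not improved for
    `stagnation_limit` iterations, agents make larger cuckoo-style jumps."""
    agents = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_agents)]
    best = min(agents, key=evaluate)
    best_fit = evaluate(best)
    stagnant = 0
    for _ in range(max_iter):
        for i, a in enumerate(agents):
            if stagnant < stagnation_limit:
                # bat-style move: drift toward the current best, plus small noise
                cand = [x + random.uniform(0, 1) * (b - x) + random.gauss(0, 0.05)
                        for x, b in zip(a, best)]
            else:
                # cuckoo-style move: larger random jump to escape stagnation
                cand = [x + random.gauss(0, 1.5) for x in a]
            if evaluate(cand) < evaluate(a):  # greedy acceptance
                agents[i] = cand
        cur = min(agents, key=evaluate)
        if evaluate(cur) < best_fit:
            best, best_fit, stagnant = cur, evaluate(cur), 0
        else:
            stagnant += 1
    return best, best_fit
```

In a query-optimization setting, `evaluate` would score a candidate join order by its estimated execution cost rather than a numeric test function.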

Keywords: Data management, query optimization, nature inspired algorithms, bat algorithm, cuckoo search algorithm.

Received November 7, 2014; accepted August 3, 2015


Arabic Character Extraction and Recognition

using Traversing Approach

Abdul Khader Saudagar and Habeeb Mohammed

College of Computer and Information Sciences, Al Imam Mohammad Ibn Saud Islamic University, Saudi Arabia

Abstract: The intention behind this research is to present original work undertaken on Arabic character extraction and recognition to attain a higher recognition rate. Numerous techniques for character and text extraction were proposed in earlier decades, but very few of them shed light on the Arabic character set. From the literature survey, it was found that a 100% recognition rate has not been attained by earlier implementations. The proposed technique is novel and is based on traversing the characters in a given text, marking their directions, viz. North-South (NS), East-West (EW), North East-South West (NE-SW) and North West-South East (NW-SE), in an array, and comparing them with the pre-defined codes of every character in the dataset. The experiments were conducted on Arabic news videos and documents taken from the Arabic Printed Text Image (APTI) database, and the results achieved are very promising, with a recognition rate of 98.1%. The proposed algorithm can replace the algorithms used in present Arabic Optical Character Recognition (AOCR) systems.
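The direction-code matching idea can be sketched as below. The per-character codes here are illustrative placeholders, not the paper's actual dataset, and the prefix-based similarity measure is an assumption.

```python
# Each character is described by the sequence of traversal directions
# recorded while walking its strokes (NS, EW, NE-SW, NW-SE).
# These codes are illustrative placeholders, not the paper's dataset.
PREDEFINED_CODES = {
    "alif": ["NS"],         # a single vertical stroke
    "baa":  ["EW", "NS"],   # horizontal body followed by a short vertical mark
}

def classify(traversal):
    """Return the character whose predefined code best matches the traversal."""
    def score(code):
        # longest common prefix, penalized by length mismatch,
        # so an exact-length exact match scores highest
        n = 0
        for a, b in zip(traversal, code):
            if a != b:
                break
            n += 1
        return n - abs(len(traversal) - len(code))
    return max(PREDEFINED_CODES, key=lambda ch: score(PREDEFINED_CODES[ch]))
```

A real system would use a richer distance (e.g., edit distance over direction tokens) and the full character set.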

Keywords: Accuracy, arabic optical character recognition and text extraction.

Received March 14, 2015; accepted August 16, 2015
 

A Novel Approach for Face Recognition Using

Fused GMDH-Based Networks

El-Sayed El-Alfy1, Zubair Baig2, and Radwan Abdel-Aal1

1College of Computer Sciences and Engineering, King Fahd University of Petroleum and Minerals, KSA

2School of Science and Security Research Institute, Edith Cowan University, Australia

Abstract: This paper explores a novel approach for automatic human recognition from multi-view frontal facial images taken at different poses. The proposed computational model is based on the fusion of Group Method of Data Handling (GMDH) neural networks trained on different subsets of facial features and with different complexities. To demonstrate the effectiveness of this approach, performance is evaluated and compared using eigen-decomposition for feature extraction and reduction with a variety of GMDH-based models. The experimental results show that high recognition rates, close to 98%, can be achieved with very low average false acceptance rates, less than 0.12%. Performance is further investigated with different feature set sizes, and it is found that with smaller feature sets (as few as 8 features) the proposed GMDH-based models outperform other classifiers, including those using radial-basis functions and support vector machines. Additionally, the capability of the GMDH algorithm to select the most relevant features during model construction makes it attractive for building much simpler models of polynomial units.

Keywords: Face recognition, abductive machine learning, neural computing, GMDH-based ensemble learning.

Received May 30, 2015; accepted November 29, 2015
 

Fall Motion Detection with Fall Severity Level

Estimation by Mining Kinect 3D Data Stream

Orasa Patsadu1, Bunthit Watanapa1, Piyapat Dajpratham2, and Chakarida Nukoolkit1

1School of Information Technology, King Mongkut’s University of Technology Thonburi, Thailand

2Faculty of Medicine Siriraj Hospital, Mahidol University, Thailand

Abstract: This paper proposes an integrative model of fall motion detection and fall severity level estimation. For fall motion detection, a continuous stream of data representing time-sequential frames of fifteen body joint positions was obtained from Kinect's 3D depth camera. A set of features is then extracted and fed into the designated machine learning model. Compared with existing models that rely on depth image inputs, the proposed scheme resolves background ambiguity of the human body. The experimental results demonstrated that the proposed fall detection method achieved an accuracy of 99.97% with zero false negatives and is more robust than the state-of-the-art approach using depth images. Another key novelty of our approach is a framework, called the Fall Severity Injury Score (FSIS), for determining the severity level of a fall as a surrogate for the seriousness of injury to three selected risk areas of the body: head, hip and knee. The framework is based on two crucial pieces of information from the fall: 1) the velocity of the impact position and 2) the kinetic energy of the fall impact. Our proposed method is beneficial to caregivers, nurses and doctors in giving first aid, diagnosis and treatment, especially in cases where the subject loses consciousness or is unable to respond.
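The two severity cues named above, impact velocity and kinetic energy, can be computed from the tracked joint stream roughly as follows. This is a minimal sketch assuming metric joint coordinates and an effective body-segment mass; it is not the paper's calibrated FSIS formula.

```python
def impact_severity(mass_kg, positions, timestamps):
    """Estimate impact velocity (m/s) and kinetic energy (J) of a tracked
    joint from its last two positions before impact.
    `positions` holds (x, y, z) tuples in metres; `timestamps` in seconds."""
    (x0, y0, z0), (x1, y1, z1) = positions[-2], positions[-1]
    dt = timestamps[-1] - timestamps[-2]
    # finite-difference speed over the final frame interval
    velocity = ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5 / dt
    kinetic_energy = 0.5 * mass_kg * velocity ** 2   # E = 1/2 m v^2
    return velocity, kinetic_energy
```

For example, a 5 kg head segment dropping 1 m in 0.5 s between the last two frames gives a velocity of 2 m/s and 10 J of impact energy; FSIS would map such figures per risk area (head, hip, knee) onto a severity level.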

Keywords: Kinect 3D data stream, fall motion detection, fall severity level estimation, machine learning, smart home system.

Received August 25, 2015; accepted December 27, 2015

Vision-Based Human Activity Recognition

Using LDCRFs

Mahmoud Elmezain1,2 and Ayoub Al-Hamadi3

1Faculty of Science and Computer Engineering, Taibah University, KSA

2Computer Science Division, Faculty of Science, Tanta University, Egypt

3Institute of Information Technology and Communications, Otto-Von-Guericke-University, Germany

Abstract: In this paper, an innovative approach for human activity recognition that relies on affine-invariant shape descriptors and motion flow is proposed. The first phase of this approach employs background modelling using an adaptive Gaussian mixture to distinguish moving foregrounds from their moving cast shadows. The extracted features are then derived from the 3D spatio-temporal action volume, such as elliptic Fourier descriptors, Zernike moments, the mass center and optical flow. Finally, the discriminative model of Latent-Dynamic Conditional Random Fields (LDCRFs) performs the training and testing of actions using the combined features, yielding a robust view-invariant recognition task. Our experiments on the Weizmann action dataset demonstrate that the proposed approach is more robust and efficient against problematic phenomena than previously reported methods, without sacrificing real-time performance in many practical action applications.

Keywords: Action recognition, invariant elliptic Fourier, invariant Zernike moments, latent-dynamic conditional random fields.

Received August 15, 2015; accepted January 11, 2016

 


Bilateral Multi-Issue Negotiation Model for

a Kind of Complex Environment

Jun Hu1, Li Zou1,2, and Ru Xu1,3

1College of Computer Science and Electronic Engineering, Hunan University, China

2The Second Hospital, University of South China, China

3Guangxi Key Laboratory of Trusted Software, Guilin University of Electronic Technology, China

Abstract: There are many uncertain factors in bilateral multi-issue negotiation in complex environments, such as unknown opponents and time constraints. The key to negotiation in a complex environment is the agent's negotiation strategy. We use Gaussian process regression and dynamic risk strategies to predict the opponent's concessions: based on the utility of the opponent's offer and the risk function, we predict the opponent's concessions and then set our agent's concession rate according to the opponent's concession strategy. We run the agent on the Generic Environment for Negotiation with Intelligent multi-purpose Usage Simulation (GENIUS) platform and analyze the experimental results. The results show that applying the dynamic risk strategy in the negotiation model is superior to other risk strategies.

Keywords: Multi-issue negotiation, gaussian process regression, dynamic risk strategy, concession strategy.

Received October 16, 2014; accepted September 9, 2015
  


Energy Consumption Improvement and Cost

Saving by Cloud Broker in Cloud Datacenters

Ahmad Karamolahy1, Abdolah Chalechale2, and Mahmoud Ahmadi2

1Radicloud Software Company Ilam, Iran

2Computer and Information Technology Department, Razi University, Iran

Abstract: Using a single cloud datacenter in a cloud network can have several disadvantages for users, from excess energy consumption to increased user dissatisfaction with the service and the price of provided services. The cloud broker, as an intermediary between users and datacenters, can play a key role in enhancing users' satisfaction and reducing the energy consumption of datacenters located in geographically different areas. In this paper, we propose an algorithm that assigns a datacenter to users by rating the various datacenters. The algorithm has been simulated with CloudSim and results in high levels of user satisfaction, cost-effectiveness and improved energy consumption. We show that this algorithm saves 44% in energy consumption and 7% in cost for users in the sample simulation space.
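A broker-side rating step of the kind described might look like the following sketch. The scoring formula, the weights, and the field names are assumptions for illustration, not the paper's algorithm.

```python
# Broker rates each datacenter from normalized cost, energy, and latency
# figures (assumed already scaled to [0, 1]), then routes the request to
# the best-rated one. Weights are illustrative assumptions.
def rate(dc, w_cost=0.4, w_energy=0.3, w_latency=0.3):
    # lower cost / energy / latency => higher rating
    return (w_cost * (1 - dc["cost"])
            + w_energy * (1 - dc["energy"])
            + w_latency * (1 - dc["latency"]))

def assign(datacenters):
    """Return the name of the highest-rated datacenter."""
    return max(datacenters, key=rate)["name"]
```

For example, a cheap, efficient, nearby datacenter outranks an expensive distant one, so the broker directs the user's workload there.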

Keywords: Cloud network, cloud broker, energy optimizing, cost saving.

Received June 10, 2015; accepted December 9, 2015

  

An Efficient Web Search Engine for Noisy Free Information Retrieval

Pradeep Sahoo1 and Rajagopalan Parthasarthy2

1Department of Computer Science and Engineering, Anna University, India

2Department of Computer Science and Engineering, GKM College of Engineering and Technology, India

Abstract: The vast growth, dynamic nature and low quality of the World Wide Web make it very difficult to retrieve relevant information from the internet during a query search. To resolve this issue, various web mining techniques are used. The biggest challenge in web mining is removing noisy or unwanted information from the webpage, such as banners, video, audio, images and hyperlinks, that is not associated with the user query. To overcome these issues, a novel custom search engine with efficient algorithms is proposed in this paper. The proposed Uniform Resource Locator (URL) pattern extractor algorithm extracts all relevant index pages from the web and ranks the indexes based on the user query. Then, a Noisy Data Cleaner (NDC) algorithm is applied to remove unwanted content from the retrieved web pages. The results show that the proposed URL Pattern Extractor (UPE)+NDC algorithm provides very promising results on different datasets, with high precision and recall rates in comparison with existing algorithms.

Keywords: Web content extraction, relevant information, noise data elimination, noisy data cleaner algorithm, URL pattern extractor algorithm.

Received November 27, 2014; accepted June 1, 2015



Complementary Approaches Built as Web Service for Arabic Handwriting OCR Systems via Amazon Elastic MapReduce (EMR) Model

Hassen Hamdi1, Maher Khemakhem2, and Aisha Zaidan1

1Department of Computer Science, Taibah University, Kingdom of Saudi Arabia

2Department of Computer Science, University of King Abdul-Aziz, Kingdom of Saudi Arabia

Abstract: Arabic Optical Character Recognition (OCR) as a web service represents a major challenge for handwritten document recognition. A variety of approaches, methods, algorithms and techniques have been proposed to build powerful Arabic OCR web services. Unfortunately, these methods have not succeeded for large quantities of Arabic handwritten documents. Intensive experiments and observations revealed that some of the existing approaches and techniques are complementary and can be combined to improve the recognition rate. Designing and implementing these sophisticated complementary approaches and techniques as web services is commonly complex; they require strong computing power to reach an acceptable recognition speed, especially for large quantities of documents. One possible solution to this problem is to benefit from distributed computing architectures such as cloud computing. This paper describes the design and implementation of Arabic Handwriting Recognition as a web service (AHRweb service) based on the complementary K-Nearest Neighbour/Support Vector Machine (K-NN/SVM) approach via the Amazon Elastic MapReduce (EMR) model. The experiments were conducted in a cloud computing environment with a real large-scale handwriting dataset from the Institut Für Nachrichtentechnik/École Nationale d'Ingénieur de Tunis (IFN/ENIT) database. J-Sim (Java Simulator) was used as a tool to generate and analyze statistical results. Experimental results show that the Amazon Elastic MapReduce (EMR) model constitutes a very promising framework for enhancing the performance of large Arabic Handwriting Recognition (AHR) web services.

Keywords: Arabic handwriting, complementary approaches and techniques, K-NN/SVM, web service, amazon elastic mapreduce.

Received April 25, 2015; accepted January 3, 2016



Advanced Architecture for Java Universal Message Passing (AA-JUMP)

Adeel-ur-Rehman1 and Naveed Riaz2

1National Centre for Physics, Pakistan

2School of Electrical Engineering and Computer Science, National University of Science and Technology, Pakistan

Abstract: The Architecture for Java Universal Message Passing (A-JUMP) is a Java-based message passing framework. A-JUMP offers programmers the flexibility to write parallel applications using multiple programming languages. There is also a provision to use various network protocols for message communication. The results for standard benchmarks like ping-pong latency, Embarrassingly Parallel (EP) code execution and Java Grande Forum (JGF) Crypt led us to conclude that for data sizes smaller than 256K bytes, the numbers are comparable with predecessor models such as Message Passing Interface CHameleon version 2 (MPICH2) and Message Passing interface for Java (MPJ) Express. But when the packet size exceeds 256K bytes, the performance of the A-JUMP model is severely hampered. Taking that peculiar behaviour into account, this paper presents a strategy devised to cope with the performance limitation observed in the base A-JUMP implementation, giving birth to an Advanced A-JUMP (AA-JUMP) methodology while keeping the basic workflow of the original model intact. AA-JUMP aims to improve the performance of A-JUMP while preserving traits such as portability, simplicity and scalability, which are key features of today's High Performance Computing (HPC) frameworks. Head-to-head comparisons between the two message passing versions reveal a 40% performance boost, suggesting AA-JUMP is a viable approach for both parallel and distributed computing domains.

Keywords: A-JUMP, java, universal message passing, MPI, distributed computing.

Received February 5, 2015; accepted December 21, 2015

 


Performance Analysis of Security Requirements

Engineering Framework by Measuring the Vulnerabilities

Salini Prabhakaran1 and Kanmani Selvadurai2

1Department of Computer Science and Engineering, Pondicherry Engineering College, India

2Department of Information Technology, Pondicherry Engineering College, India

Abstract: To develop security-critical web applications, specifying security requirements is important, since 75% to 80% of all attacks happen at the web application layer. We adopted security requirements engineering methods to identify security requirements at the early stages of the software development life cycle so as to minimize vulnerabilities in later phases. In this paper, we present an evaluation of the Model Oriented Security Requirements Engineering (MOSRE) framework and the Security Requirements Engineering Framework (SREF) by implementing the security requirements identified through each framework while developing the respective web application. We also developed a web application without using any security requirements engineering method, to demonstrate the importance of the security requirements engineering phase in the software development life cycle. The developed web applications were scanned for vulnerabilities using a web application scanning tool. The evaluation was done in two phases of the software development life cycle: requirements engineering and testing. From the results, we observed that the number of vulnerabilities detected in the web application developed with the MOSRE framework is lower than in the web applications developed with SREF or without any security requirements engineering method. Thus, this study leads requirements engineers to use the MOSRE framework to elicit security requirements efficiently and to trace security requirements from the requirements engineering phase to later phases of the software development life cycle for developing secure web applications.

Keywords: Requirements engineering, security mechanism, security requirements, security requirements engineering, web applications and vulnerabilities.

Received December 15, 2014; accepted April 5, 2015


  


Hybrid Metaheuristic Algorithm for Real Time

Task Assignment Problem in Heterogeneous Multiprocessors

Poongothai Marimuthu, Rajeswari Arumugam, and Jabar Ali

Department of Electronics and Communication Engineering, Coimbatore Institute of Technology, India

Abstract: The assignment of real-time tasks to heterogeneous multiprocessors is very difficult in scenarios that require high performance. The main problem in a heterogeneous multiprocessor system is task assignment to the processors, because the execution time of each task varies from one processor to another. Hence, the problem of finding a task assignment to heterogeneous processors without exceeding processor capacity is in general NP-hard. In order to meet the constraints of real-time systems, a Hybrid Max-Min Ant colony optimization algorithm (H-MMAS) is proposed in this paper. The Max-Min Ant System (MMAS) is extended with a local search heuristic to improve the task assignment solution. The local search maximizes the number of tasks assigned while minimizing energy consumption. The performance of the proposed H-MMAS algorithm is compared with the Modified Binary Particle Swarm Optimization (BPSO), Ant Colony Optimization (ACO) and MMAS algorithms in terms of the average number of tasks assigned, normalized energy consumption, quality of solution and average Central Processing Unit (CPU) time. From the experimental results, the proposed algorithm outperforms MMAS, Modified BPSO and ACO for consistent matrices. For inconsistent matrices, H-MMAS performs better than Modified BPSO and similarly to ACO and MMAS, but with an improvement in normalized energy consumption.

Keywords: Multiprocessors, task assignment, heterogeneous processors, ant colony optimization, real time systems.

Received September 21, 2014; accepted December 21, 2015

 


A Multimedia Web Service Matchmaker

Sid Midouni1,2, Youssef Amghar1, and Azeddine Chikh2

1Université de Lyon, CNRS INSA-Lyon, France

2Département d'informatique, Université Abou Bekr Belkaid-Tlemcen, Algérie

Abstract: The full-service approach for composing Multimedia as a Service (MaaS) in multimedia data retrieval, which we proposed in a previous work, is based on a four-phase process: description, matching, clustering and restitution. In this article, we show how MaaS services are matched to meet user needs. Our matching algorithm consists of two steps: (1) the domain matching step is based on calculating similarity degrees between the domain description of MaaS services and user queries; (2) the multimedia matching step compares the multimedia description of MaaS services with user queries. The multimedia description is defined as a SPARQL Protocol and RDF Query Language (SPARQL) query over a multimedia ontology. An experiment in the medical domain was conducted to evaluate the solution. The results indicate that using both domain and multimedia matching considerably improves the performance of multimedia data retrieval systems.

Keywords: Semantic web services, information retrieval, service description, SAWSDL, service matching.

Received July 27, 2015; accepted September 12, 2015

 



Hidden Markov Random Fields and Particle

Swarm Combination for Brain Image Segmentation

El-Hachemi Guerrout, Ramdane Mahiou, and Samy Ait-Aoudia

Laboratoire des Méthodes de Conception des Systèmes-Ecole Nationale Supérieure en Informatique, Algeria

Abstract: The interpretation of brain images is a crucial task in the practitioner's diagnosis process. Segmentation is one of the key operations for providing decision support to physicians. There are several methods to perform segmentation. We use Hidden Markov Random Fields (HMRF) to model the segmentation problem. This elegant model leads to an optimization problem, and the Particle Swarm Optimization (PSO) method is used to achieve brain magnetic resonance image segmentation. Setting the parameters of the HMRF-PSO method is a task in itself, so we conduct a study of the parameter choices that give a good segmentation. The segmentation quality is evaluated on ground-truth images using the Dice coefficient, also called the Kappa index. The results show the superiority of the HMRF-PSO method compared to methods such as classical Markov Random Fields (MRF) and MRF using variants of Ant Colony Optimization (ACO).
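The Dice coefficient (Kappa index) used for evaluation is a standard overlap measure; a minimal implementation for flat binary masks:

```python
def dice_coefficient(seg, truth):
    """Dice (Kappa index) overlap between two binary masks given as flat
    sequences of 0/1 values: 2|A ∩ B| / (|A| + |B|).
    Returns 1.0 for a perfect match (including two empty masks)."""
    inter = sum(1 for a, b in zip(seg, truth) if a and b)   # |A ∩ B|
    total = sum(map(bool, seg)) + sum(map(bool, truth))     # |A| + |B|
    return 2.0 * inter / total if total else 1.0
```

For 2D or 3D images, the masks are simply flattened before the comparison; a score near 1.0 indicates the segmentation closely matches the ground truth.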

Keywords: Brain image segmentation, hidden markov random field, particle swarm optimization, dice coefficient.

Received June 5, 2015; accepted October 19, 2015

 

Vertical Links Minimized 3D NoC

Topology and Router-Arbiter Design

Nallasamy Viswanathan1, Kuppusamy Paramasivam2, and Kanagasabapathi Somasundaram3

1Department of Electrical and Computer Engineering, Mahendra Engineering College, India

2Department of Electrical and Computer Engineering, Karpagam College of Engineering, India

3Department of Mathematics, Amrita Vishwa Vidyapeetham, India


Abstract: The design of a topology and its router plays a vital role in a 3D Network-on-Chip (3D NoC) architecture. In this paper, we develop a partially vertically connected topology, called the 3D Recursive Network Topology (3D RNT), and study its performance using an analytical model. Delay per Buffer Size (DBS) and Chip Area per Buffer Size (CABS) are the parameters considered for the performance evaluation. Our experimental results show that vertical links are cut down by up to 75% in the 3D RNT compared to the 3D Fully connected Mesh Topology (3D FMT), at the cost of an 8% increase in DBS; in addition, 10% less CABS is observed in the 3D RNT. Further, a Programmable Prefix router-Arbiter (PPA) is designed for the 3D NoC and its performance is analyzed. The results of the experimental analysis indicate that the PPA has less delay and area (gate count) than a Round Robin Arbiter (RRA) with a prefix network.

Keywords: Network topology, vertical links, network calculus, arbiter, latency, chip area.

Received June 26, 2014; accepted July 7, 2015

  

Effective Technology Based Sports Training

System Using Human Pose Model

Kannan Paulraj and Nithya Natesan

Department of Electronics and Communication Engineering, Panimalar Engineering College, India

Abstract: This paper investigates sports dynamics using human pose modelling from video sequences. To implement human pose modelling, a human skeletal model is developed using a thinning algorithm and the feature points of the human body are extracted. The obtained feature points play an important role in analyzing the activities of a sports person. The proposed human pose modelling technique provides technology-based training to a sports person, so that performance can be gradually improved. The paper also aims to improve the computation time and efficiency of the 2D and 3D models.

Keywords: Thinning algorithm, human activity, motion analysis, feature extraction.

Received March 28, 2015; accepted September 9, 2015

 


HierarchicalRank: Webpage Rank Improvement

Using HTML TagLevel Similarity

Dilip Sharma and Deepak Ganeshiya

Department of Computer Engineering and Applications, GLA University Mathura, India

Abstract: In past research, two types of ranking algorithms have been introduced, query-dependent and query-independent, working either online or offline. The PageRank algorithm works offline, independent of the query, while the Hyperlink-Induced Topic Search (HITS) algorithm works online, dependent on the query. One problem with these algorithms is that rank is divided based on the number of inlinks, outlinks and other parameters used in hyperlink analysis, which may be dependent on or independent of webpage content, leading to the problem of topic drift. Previous research focused on solving this problem using the popularity of outlink webpages. In this paper, a novel popularity measure is proposed based on the similarity between the query and the hierarchical text extracted from the source and target webpages, using a Hyper Text Markup Language (HTML) tag-importance parameter. The results of the proposed method are compared with the PageRank algorithm and with Topic Distillation using Query-Dependent Link Connections and Page Characteristics.
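A tag-importance-weighted similarity of the kind described might be sketched as follows. The tag weights and the scoring rule here are illustrative assumptions, not the paper's calibrated parameters.

```python
# Terms matched inside more important HTML tags contribute more to the
# query/page similarity score. Weights are illustrative assumptions.
TAG_WEIGHTS = {"title": 1.0, "h1": 0.8, "h2": 0.6, "p": 0.3, "a": 0.2}

def tag_weighted_similarity(query_terms, tagged_text):
    """`tagged_text` maps an HTML tag name to the text it encloses.
    Each query term scores the weight of the most important tag that
    contains it; the result is normalized by the number of query terms."""
    score = 0.0
    for term in query_terms:
        best = 0.0
        for tag, text in tagged_text.items():
            if term.lower() in text.lower():
                best = max(best, TAG_WEIGHTS.get(tag, 0.1))
        score += best
    return score / len(query_terms) if query_terms else 0.0
```

A page whose title matches the query thus outranks one where the match occurs only in anchor text, which is the intuition behind weighting hierarchical text by tag importance.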

Keywords: Web mining, web graph, hyperlink analysis, connectivity, pagerank, HTML tags.

Received July 21, 2014; accepted October 14, 2014

 


A New Chaos-Based Image

Encryption Algorithm

Ming Xu

Department of Mathematics and Physics, Shijiazhuang Tiedao University, China

Abstract: In this paper, we propose a new image encryption algorithm based on the compound chaotic image encryption algorithm. The original algorithm cannot resist a chosen-plaintext attack and has weak statistical security, but our new algorithm resists the chosen-plaintext attack using a simple improvement. The improvement is novel and transplantable; it can also be used to enhance the ability of other image encryption algorithms to resist differential attacks. The experimental results show that the new algorithm has higher security, while its encryption speed is very nearly the same as the original.
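As a generic illustration of chaos-based image encryption (not the paper's specific compound scheme or its improvement), a keystream can be drawn from a logistic map and XORed with the pixel bytes; the key is the initial condition and control parameter:

```python
def chaotic_keystream(x0, r, n):
    """Generate n bytes by iterating the logistic map x -> r*x*(1-x),
    quantizing each state to a byte. A generic chaos-based keystream."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return out

def chaos_xor(data, x0=0.613, r=3.99):
    """XOR `data` (bytes) with the chaotic keystream; applying the same
    function twice with the same key recovers the plaintext."""
    ks = chaotic_keystream(x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))
```

Schemes of exactly this plaintext-independent-keystream form are what chosen-plaintext attacks break, which motivates improvements (such as the paper's) that make the keystream or permutation depend on the plaintext.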

Keywords: Chaotic, image encryption, chosen-plaintext attack, transplantable.

Received July 7, 2015; accepted January 13, 2016



Overview of Automatic Seed Selection Methods

for Biomedical Images Segmentation

Ahlem Melouah and Soumia Layachi

Department of Informatics, Badji-Mokhtar Annaba University, Algeria

Abstract: In biomedical image processing, image segmentation is a relevant research area due to its widespread usage and application. Seeded region growing is very attractive for semantic image segmentation, as it involves high-level knowledge of image components in the seed point selection procedure. However, the seeded region growing algorithm suffers from the problem of automatic seed point generation. A seed point is the starting point for region growing, and its selection is very important for the success of the segmentation process. This paper presents an extensive survey of work carried out in the area of automatic seed point selection for biomedical image segmentation by the seeded region growing algorithm. The main objective of this study is to provide an overview of the most recent trends in seed point selection for biomedical image segmentation.

Keywords: Automatic seed selection, biomedical image, region growing segmentation, region of interest, region extraction, edge extraction, feature extraction.

Received November 6, 2015; accepted February 21, 2016
 

Intelligent Replication for Distributed Active

Real-Time Databases Systems

Rashed Salem1, Safa'a Saleh2, and Hattem Abdul-Kader1

1Information Systems Department, Menoufia University, Egypt

2Information Systems Department, Taibah University, KSA

Abstract: Recently, the demand for real-time databases has been increasing. Most real-time systems are inherently distributed and need to handle data in a timely fashion. Obtaining data from remote sites may take a long time, making temporal data invalid. This results in a large number of tardy transactions, with catastrophic effects. Replication is one solution to this problem, as it allows transactions to access temporal data locally. This helps transactions meet their timing requirements, which demand predictable resource usage. To improve predictability, the Distributed Active Real-time Database System (DeeDS) prototype was introduced to avoid the delays resulting from disk access, network communication and distributed commit processing. DeeDS advocates an in-memory database, full replication and local transaction commit, but full replication consumes system resources, causing a scalability problem. In this work, we introduce Intelligent Replication In DeeDS (IReIDe), a new replication protocol that supports replication for DeeDS and addresses the scalability problem using an intelligent clustering technique. The results show the ability of IReIDe to reduce consumed system resources and maintain scalability.

Keywords: Replication, real-time, DRTDBS, DeeDS, clustering.

Received February 17, 2015; accepted October 7, 2015
