
Evaluation of Grid Computing Environment Using TOPSIS

Mahmoud Mohammaddoust1, Ali Harounabadi2, and Mohammadali Neizari1

1Department of Computer, Institute for Higher Education Academic Center for Education, Culture and Research, Khouzestan, Iran

2Department of Computer, Islamic Azad University, Iran

Abstract: Grid evaluation approaches usually focus on particular aspects of the grid environment, and there has been little research on a technique able to comprehensively evaluate a grid system in terms of its performance. In this paper, an algorithm is proposed to evaluate the performance of a grid environment based on 4 metrics: reliability, task execution time, resource utilization rate and load balance level. In the proposed algorithm, a new method for evaluating the resource utilization rate is presented. The paper also presents an application of the Technique for Order-Preference by Similarity to Ideal Solution (TOPSIS) to choose the most efficient system based on these 4 metrics. The performance of the algorithm and of TOPSIS is demonstrated through analytical and numerical examples. Simulation then shows that the proposed algorithm estimates the utilization rate with high accuracy. Using the suggested approach, one can choose the most efficient algorithm so that a compromise is established between managers' and users' requests.
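
Since TOPSIS itself is a standard multi-criteria decision method, a minimal sketch of how it could rank candidate grid systems on the paper's 4 metrics is given below. The metric values and weights are illustrative assumptions, not the authors' data.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix:  (n_alternatives, n_criteria) decision matrix
    weights: criteria weights summing to 1
    benefit: True for criteria to maximize, False to minimize
    """
    m = np.asarray(matrix, dtype=float)
    # Vector-normalize each criterion column, then apply weights.
    v = m / np.linalg.norm(m, axis=0) * np.asarray(weights)
    # Ideal best/worst depend on whether the criterion is benefit or cost.
    benefit = np.asarray(benefit)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - anti, axis=1)
    return d_worst / (d_best + d_worst)  # closeness; higher is better

# Illustrative: 3 grid systems scored on reliability, execution time,
# utilization rate, load balance (execution time is a cost criterion).
scores = topsis([[0.95, 120, 0.80, 0.7],
                 [0.90, 100, 0.85, 0.6],
                 [0.99, 150, 0.70, 0.9]],
                weights=[0.3, 0.3, 0.2, 0.2],
                benefit=[True, False, True, True])
print(scores.argmax())  # index of the most efficient system
```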

Keywords: Grid computing evaluation, reliability, task execution time, resource utilization rate, load balance level, TOPSIS.

Received October 4, 2015; accepted March 27, 2016
 

Identifier (ID) based Enhanced Service for Device Communication and Control in Future Networks

Muhammad Khan and DoHyeun Kim

Computer Engineering Department, Jeju National University, South Korea

Abstract: Enabling future smart devices to interact with each other for the provision of intelligent and useful services to users is the focus of research towards the realization of future networks. The conventional static nature of networks is not feasible for future networks, which require scalability and device mobility at their core. The usage of an Identifier (ID) in conjunction with a physical address supports the mobility of devices and the scalability of the overall network. This paper presents an ID-based device communication and control service for future networks. The study is performed using a test bed for indoor environment management, which utilizes the data from indoor and outdoor sensing devices to provide an optimum indoor environment (temperature, humidity, light, etc.) by controlling the indoor actuating devices. The test bed implementation has been modified to execute the proposed ID-based device communication and control scheme and to compare the results with the IP-only implementation of the test bed. The comparison reveals that the ID-based scheme can be as efficient as IP-based routing while providing the added advantages of coping with heterogeneity, scalability and mobility in future networks.
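
The core mechanism, separating a stable device identifier from its current physical locator, can be illustrated with a minimal mapping-table sketch. The registry class and its API below are hypothetical illustrations, not the paper's implementation.

```python
class IDRegistry:
    """Maps stable device IDs to their current network locators, so a
    device keeps its ID while its IP address changes (mobility)."""
    def __init__(self):
        self._table = {}

    def register(self, device_id, locator):
        self._table[device_id] = locator

    def resolve(self, device_id):
        return self._table.get(device_id)

registry = IDRegistry()
registry.register("sensor-42", "192.168.1.10")
registry.register("sensor-42", "10.0.0.7")   # device moved; ID unchanged
print(registry.resolve("sensor-42"))         # -> 10.0.0.7
```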

Keywords: Future networks, identifier, ID, device control, service, efficient.

Received November 21, 2015; accepted August 8, 2016
 

A Low Complexity Face Recognition Scheme Based on Down Sampled Local Binary Patterns

 

Gibran Benitez-Garcia1, Mariko Nakano-Miyatake2, Jesus Olivares-Mercado2, Hector Perez-Meana2, Gabriel Sanchez-Perez2, and Karina Toscano-Medina2

1Department of Mechanical Engineering and Intelligent Systems, University of Electro-Communication, Japan

2Section of Graduate Studies and Research, Instituto Politécnico Nacional, Mexico

Abstract: The accurate description of face images under variable illumination, pose and facial expression conditions is a topic that has attracted the attention of researchers in recent years, resulting in the proposal of several efficient algorithms. Among these algorithms, Local Binary Pattern (LBP)-based schemes appear to be promising approaches, although the computational complexity of LBP-based approaches may limit their implementation in devices with limited computational power. Hence, this paper presents a face recognition algorithm, based on the LBP feature extraction method, with a lower computational complexity than the conventional LBP-based scheme and similar recognition performance. In the proposed scheme, called Decimated Image Window Binary Pattern (DI-WBP), the face image is first down-sampled, and the LBP is then applied to characterize the size-reduced image using non-overlapping blocks of 3x3 pixels. The DI-WBP does not require any dimensionality reduction scheme because the size of the resulting feature matrix is much smaller than the original image size. Finally, the resulting feature vectors are applied to a given classification method to perform the recognition task. Evaluation results using the Aleix-Robert (AR) and Yale face databases demonstrate that the proposed scheme provides a recognition performance similar to those of the conventional LBP-based scheme and other recently proposed approaches, with lower computational complexity.
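
A minimal sketch of the block-wise LBP step: after down-sampling, each non-overlapping 3x3 block yields one 8-bit code by thresholding the block's 8 neighbors against its center, following the standard LBP definition. The down-sampling factor and naive decimation (the paper's keywords mention bicubic interpolation) are assumed simplifications.

```python
import numpy as np

def blockwise_lbp(img, factor=4):
    """Down-sample, then emit one LBP code per non-overlapping 3x3 block."""
    small = img[::factor, ::factor]      # naive decimation stand-in
    h, w = small.shape
    codes = []
    # Fixed clockwise order of the 8 neighbors around the center pixel.
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    for i in range(1, h - 1, 3):         # step 3: non-overlapping blocks
        for j in range(1, w - 1, 3):
            center = small[i, j]
            bits = [int(small[i+di, j+dj] >= center) for di, dj in offsets]
            codes.append(sum(b << k for k, b in enumerate(bits)))
    return np.array(codes)               # compact feature vector

features = blockwise_lbp(np.random.randint(0, 256, (128, 128)))
print(features.shape)
```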

Keywords: Local binary patterns, DI-WBP, face recognition, identity verification, bicubic interpolation.

Received September 5, 2015; accepted May 11, 2016
 

Automated Software Test Optimization using Test Language Processing

Mukesh Mann1, Om Prakash Sangwan2, and Pradeep Tomar3

1,3Department of Computer Science and Engineering, Gautam Buddha University, India

2Department of Computer Science and Engineering, Guru Jambheshwar University of Science and Technology, India

Abstract: The delivery of error-free software has been a major challenge for software practitioners for many years. To deliver error-free software, testers spend 40-50% of the software design life cycle cost on testing, a share that increases further with changing user demands. A large number of test cases may exist for a particular functionality, and some of them may cause the software to fail. This raises a demand to automate the existing approach of manual testing so as to minimize execution effort while maintaining the quality of testing. In this paper, a regression framework based on a keyword-oriented, data-driven approach is proposed for the generation and execution of test cases. The methodology of the developed framework is based on Test Language Processing (TLP), which acts as a comprehensive approach to the design and execution of test cases. The framework is tested on an open-source web application called Vtiger-Customer Relationship Management (CRM) version 5, and is compared against manual testing in terms of test suite execution and optimization. Based on our experiments, it is concluded that (1) test execution time using the TLP-based framework is significantly lower, and (2) a test suite optimization of 83.78% is achieved through the proposed TLP framework.
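
Keyword-oriented data-driven testing can be sketched as a dispatcher that maps step keywords to actions and feeds each step its data. The keywords, stub actions and login steps below are illustrative assumptions, not the paper's TLP vocabulary.

```python
# Minimal keyword-driven test runner: each test step is (keyword, data...),
# and keywords are bound to executable actions (print stubs here).
def open_url(driver, url): print(f"open {url}")
def type_text(driver, field, value): print(f"type {value!r} into {field}")
def click(driver, element): print(f"click {element}")

KEYWORDS = {"open": open_url, "type": type_text, "click": click}

def run_test(driver, steps):
    for keyword, *data in steps:
        KEYWORDS[keyword](driver, *data)   # dispatch keyword to its action

login_test = [
    ("open", "http://localhost/vtigercrm"),
    ("type", "user_name", "admin"),
    ("type", "user_password", "secret"),
    ("click", "Login"),
]
run_test(driver=None, steps=login_test)
```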

Keywords: Test Language Processing (TLP) framework, manual testing, effort reduction, test optimization.

Received June 17, 2015; accepted July 4, 2016

 


Taxonomy of GUM and Usability Prediction Using GUM Multistage Fuzzy Expert System

Deepak Gupta1 and Anil Ahlawat2

1Maharaja Agrasen Institute of Technology, Guru Gobind Singh Indraprastha University, India

2Krishna Institute of Engineering and Technology, Dr. APJ Abdul Kalam Technical University, India

Abstract: The evaluation of software quality is an important aspect of controlling and managing software, enabling improvement of the software process. For such evaluation, many factors have been identified by a number of researchers, and the quality of software depends on many of them. Usability is one of the most significant aspects on which software quality depends. Many researchers have proposed software usability models, each considering a set of usability factors, but these models do not include all usability aspects and are hard to integrate into current software engineering practices. In the real world, implementing any of these models faces many obstacles, as a precise definition and a globally accepted concept of usability are lacking. This paper aims to define the term 'usability' using the Generalized Usability Model (GUM). GUM is proposed with a detailed taxonomy for specifying and identifying the quality components, bringing together the factors, attributes and characteristics defined in various Human Computer Interaction (HCI) and software models. The paper also shows how to predict the usability of a software application using a fuzzy expert system implemented with a multistage fuzzy logic toolbox.
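
As a hedged illustration of multistage fuzzy prediction: the sketch below fuses pairs of [0, 1] usability factors through a tiny zero-order Sugeno-style stage (shoulder memberships, three rules, weighted-average defuzzification), then feeds stage-1 outputs into stage 2. The factor names, memberships and rules are assumptions, not the paper's GUM rule base or toolbox implementation.

```python
def low(x):  return max(0.0, min(1.0, 1.0 - x))   # membership on [0, 1]
def high(x): return max(0.0, min(1.0, x))

def fuzzy_stage(x1, x2):
    """Combine two [0,1] factors into one score via three fuzzy rules."""
    w_good = min(high(x1), high(x2))                              # both high -> 1.0
    w_poor = min(low(x1), low(x2))                                # both low  -> 0.0
    w_mid = max(min(high(x1), low(x2)), min(low(x1), high(x2)))   # mixed     -> 0.5
    num = w_good * 1.0 + w_mid * 0.5 + w_poor * 0.0
    return num / (w_good + w_mid + w_poor + 1e-9)                 # weighted average

# Stage 1 fuses low-level factors; stage 2 fuses stage-1 outputs into usability.
learnability, efficiency, satisfaction, error_tolerance = 0.8, 0.7, 0.9, 0.4
usability = fuzzy_stage(fuzzy_stage(learnability, efficiency),
                        fuzzy_stage(satisfaction, error_tolerance))
print(round(usability, 3))
```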

Keywords: Quality of software, usability, factors, GUM, evaluation, fuzzy logic, soft computing.

Received September 23, 2015; accepted June 29, 2016
 

Preceding Document Clustering by Graph Mining Based Maximal Frequent Termsets Preservation

Syed Shah and Mohammad Amjad

Department of Computer Engineering, Jamia Millia Islamia, India

Abstract: This paper presents an approach to cluster documents. It introduces a novel graph-mining-based algorithm to find frequent termsets present in a document set. The document set is initially mapped onto a bipartite graph. Based on the results of our algorithm, the document set is modified to reduce its dimensionality. Then, the bisecting K-means algorithm is executed over the modified document set to obtain a set of very meaningful clusters. It has been shown that the proposed approach, Clustering preceded by Graph Mining based Maximal Frequent Termsets Preservation (CGFTP), produces better quality clusters than those produced by classical document clustering algorithms. It has also been shown that the produced clusters are easily interpretable. The quality of clusters has been measured in terms of their F-measure.
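
A minimal sketch of the bisecting K-means step (the graph-mining and dimensionality-reduction stages are the paper's own contribution and are not reproduced): repeatedly split the largest cluster in two until the desired number of clusters is reached. Assumes scikit-learn; the random matrix stands in for the reduced document vectors.

```python
import numpy as np
from sklearn.cluster import KMeans

def bisecting_kmeans(X, k, seed=0):
    """Repeatedly 2-means-split the largest cluster until k clusters exist."""
    clusters = [np.arange(len(X))]        # start with one cluster of all docs
    while len(clusters) < k:
        largest = max(range(len(clusters)), key=lambda i: len(clusters[i]))
        idx = clusters.pop(largest)
        labels = KMeans(n_clusters=2, n_init=10,
                        random_state=seed).fit_predict(X[idx])
        clusters += [idx[labels == 0], idx[labels == 1]]
    return clusters

X = np.random.rand(100, 20)               # stand-in for reduced doc vectors
for c in bisecting_kmeans(X, k=4):
    print(len(c))
```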

Keywords: Bipartite graph, graph mining, frequent termsets mining, bisecting K-means.

Received June 18, 2016; accepted June 29, 2017
 

PeSOA: Penguins Search Optimisation Algorithm for Global Optimisation Problems

Youcef Gheraibia1, Abdelouahab Moussaoui2, Peng-Yeng Yin3, Yiannis Papadopoulos1, and Smaine Maazouzi4

1School of Engineering and Computer Science, University of Hull, U.K.

2Department of Computer Science, University of Setif 1, Algeria

3Department of Information Management, National Chi Nan University, Taiwan

4Department of Computer Science, Université 20 Août 1955, Algeria

Abstract: This paper develops the Penguin Search Optimisation Algorithm (PeSOA), a new metaheuristic algorithm inspired by the foraging behaviours of penguins. A population of penguins located in the solution space of the given search and optimisation problem is divided into groups and tasked with finding optimal solutions. The penguins of a group perform simultaneous dives and work as a team to collaboratively feed on fish, the energy content of which corresponds to the fitness of candidate solutions. Fish stocks have higher fitness and concentration near areas of solution optima and thus drive the search. Penguins can migrate to other places if their original habitat lacks food. We identify two forms of penguin communication, intra-group and inter-group, which are useful in designing intensification and diversification strategies. An efficient intensification strategy allows fast convergence to a local optimum, whereas an effective diversification strategy avoids cyclic behaviour around local optima and explores the space of potential solutions more effectively. The proposed PeSOA algorithm has been validated on a well-known set of benchmark functions. Comparative performances with six other nature-inspired metaheuristics show that the PeSOA performs favourably in these tests. A run-time analysis shows that the performance obtained by the PeSOA is very stable at any time of the evolution horizon, making the PeSOA a viable approach for real world applications.
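
As a hedged illustration only: the sketch below shows the generic group-based intensification/diversification loop the abstract describes (groups dive around their local best; the worst-fed group migrates toward the richest region). The update rules and constants are simple placeholders, not the published PeSOA equations.

```python
import numpy as np

def sphere(x):                       # benchmark objective (minimize)
    return np.sum(x**2)

rng = np.random.default_rng(0)
dim, n_groups, per_group = 5, 4, 6
groups = rng.uniform(-5, 5, (n_groups, per_group, dim))

for _ in range(200):
    for g in range(n_groups):
        fitness = np.array([sphere(p) for p in groups[g]])
        local_best = groups[g][fitness.argmin()]
        # Intensification: dive toward the group's best with small noise.
        groups[g] += 0.5 * (local_best - groups[g]) \
                     + rng.normal(0, 0.1, groups[g].shape)
    # Diversification: the worst group migrates near the best group's area.
    bests = [min(sphere(p) for p in grp) for grp in groups]
    worst, best = int(np.argmax(bests)), int(np.argmin(bests))
    groups[worst] = groups[best] + rng.normal(0, 1.0, groups[worst].shape)

best_point = min((p for grp in groups for p in grp), key=sphere)
print(sphere(best_point))            # should be near 0 for the sphere function
```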

Keywords: Population-based approach, complex problems, intensification strategy, diversification strategy, penguins search.

Received October 1, 2015; accepted March 3, 2016 
 

Machine Translation Infrastructure for Turkic Languages (MT-Turk)

Emel Alkım and Yalçın Çebi

Department of Computer Engineering, Dokuz Eylul University, Turkey

Abstract: In this study, MT-Turk, a multilingual, extensible machine translation infrastructure for the grammatically similar Turkic languages, is presented. The MT-Turk infrastructure has multi-word support and is designed using a combined rule-based translation approach that unites the strengths of the interlingual and transfer approaches. This results in ease of extensibility: a newly added Turkic language can be used both as destination and as source language, achieving two-way extensibility. In addition, the infrastructure is strengthened with the ability to learn from previous translations and to use the suggestions of previous users for disambiguation. Finally, the success of MT-Turk for three Turkic languages, Turkish, Kirghiz and Kazan, is evaluated using the BiLingual Evaluation Understudy (BLEU) metric, and it is seen that the suggestion system improved the success by 43.66% on average. Although the lack of linguistic resources affected the success of the system negatively, this study led to the introduction of an extensible infrastructure that can learn from previous translations.
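
For reference, a minimal BLEU computation of the kind used for the evaluation: standard clipped n-gram precision with a brevity penalty. The sentence pair is an illustrative stand-in, not from the MT-Turk test set, and zero counts are crudely smoothed.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i+n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: clipped n-gram precision + brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # smooth zero counts
    bp = 1.0 if len(candidate) > len(reference) else \
         math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

print(bleu("bu bir deneme cümlesi".split(),
           "bu bir test cümlesi".split(), max_n=2))   # -> 0.5
```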

Keywords: Rule-based machine translation, Turkic languages, semi-language-specific interlingua, disambiguation by suggestions.

Received April 21, 2015; accepted November 8, 2016
 

Contrast Enhancement using Completely Overlapped Uniformly Decrementing Sub-Block Histogram Equalization for Less Controlled Illumination Variation

Shree Devi Ganesan1 and Munir Rabbani2

1Department of Computer Applications, B.S. Abdur Rahman Crescent Institute of Science and Technology, India

2Department of Mathematics, B.S. Abdur Rahman Crescent Institute of Science and Technology, India

Abstract: Illumination pre-processing is an inevitable step for a real-time automatic face recognition system in addressing challenges related to lighting variation in face images. This paper proposes a novel framework, Completely Overlapped Uniformly Decrementing Sub-Block Histogram Equalization (COUDSHE), to normalize or pre-process illumination-deficient images. COUDSHE is based on the idea that the efficiency of a pre-processing technique mainly depends on the framework through which the technique is applied to the affected image. The primary goal of this paper is to bring out a new strategy for localizing a Global Histogram Equalization (GHE) technique to help it adapt to the local lighting conditions of the image. The Mean Squared Error (MSE), Histogram Flatness Measure and Absolute Mean Brightness Error (AMBE) are the objective measures used to analyse the efficiency of the technique. Experimental results reveal that COUDSHE performs better than existing techniques on heavy-shadow and half-lit images.
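
The GHE baseline being localized can be sketched in a few lines: standard global histogram equalization remaps gray levels through the normalized cumulative histogram. The sub-block framework itself is the paper's contribution and is not reproduced; the dark test image is a synthetic stand-in.

```python
import numpy as np

def global_hist_eq(img):
    """Standard GHE: remap gray levels through the normalized CDF."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

dark = np.random.randint(0, 80, (64, 64)).astype(np.uint8)  # under-lit image
print(dark.mean(), global_hist_eq(dark).mean())  # brightness spreads out
```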

Keywords: Illumination pre-processing, global histogram equalization, localization, mean squared error, histogram flatness measure, absolute mean brightness error.

Received July 4, 2015; accepted April 17, 2016
 

 


A Dynamic Architecture for Runtime Adaptation of Service-based Applications

Yousef Rastegari and Fereidoon Shams

Faculty of Computer Science and Engineering, Shahid Beheshti University, Iran

Abstract: Service-Based Applications (SBAs) offer flexible functionalities in a wide range of environments. Therefore, they should dynamically adapt to different quality concerns such as security and performance. For example, we may add a particular delivery service for golden customers, provide secure services for specific partners, or change service invocation based on context information. Unlike other adaptation methods, which substitute a faulty service or negotiate service level objectives, we modify the architecture of the SBA, that is, the underlying service structure and the runtime service implementations. In this regard, we propose a reflective architecture that holds business and adaptation knowledge in the Meta level and implements service behaviours in the Base level. The knowledge is modelled in the form of Meta states and Meta transitions. We benefit from the Reflective Visitor pattern to materialize an abstract service in different concrete implementations and manipulate them at runtime. Each service implementation fulfils a specific quality concern, so it is possible to delegate user requests to the appropriate implementation instead of reselecting a new service, which is a time-consuming strategy. We used the JMeter load simulator and a real-world Quality of Service (QoS) dataset to measure the efficiency of the architecture. We also characterized our work in comparison with related studies according to the European Software Services and Systems Network (S-CUBE) adaptation taxonomy.
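
The core idea, one abstract service with several runtime-selectable concrete implementations, each serving a different quality concern, can be sketched as below. The class names and selection rule are illustrative assumptions, not the paper's Meta/Base-level design.

```python
from abc import ABC, abstractmethod

class DeliveryService(ABC):
    """Abstract service: concrete implementations target quality concerns."""
    @abstractmethod
    def deliver(self, order): ...

class FastDelivery(DeliveryService):       # performance concern
    def deliver(self, order): return f"express shipping for {order}"

class SecureDelivery(DeliveryService):     # security concern
    def deliver(self, order): return f"encrypted, signed handling of {order}"

IMPLEMENTATIONS = {"performance": FastDelivery(), "security": SecureDelivery()}

def handle(order, context):
    # Runtime adaptation: delegate to the implementation matching the active
    # quality concern instead of reselecting a whole new service.
    return IMPLEMENTATIONS[context["concern"]].deliver(order)

print(handle("order-17", {"concern": "security"}))
```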

Keywords: Software engineering, service-based application, software adaptation, reflection, quality of service.

Received October 28, 2015; accepted July 4, 2016
 

Toward Proving the Correctness of TCP Protocol Using CTL

Rafat Alshorman

Department of Computer Science, Yarmouk University, Jordan

Abstract: Use of the Internet requires two types of application programs. One runs at the first endpoint of the network connection and requests services; it is called the client. The other, which provides the services, is called the server. The application programs in the client and the server communicate with each other under system rules in order to exchange services. In this research, we model these system rules of communication, called the protocol, using a model checker. The model checker represents the states of the clients, the servers and the system rules (the protocol) as a Finite State Machine (FSM). The correctness conditions of the protocol are encoded as temporal logic formulae in Computational Tree Logic (CTL). The model checker then interprets these temporal formulae over the FSM to check whether the correctness conditions are satisfied. Moreover, the model of the protocol introduced in this paper models the concurrent synchronized clients and servers as iterating infinitely often.
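
A minimal sketch of the underlying idea: a Kripke structure for a toy connection handshake and a fixpoint evaluation of the CTL property EF established ("a connection can eventually be established"). The states and labels are illustrative, not the paper's TCP model.

```python
# Toy Kripke structure: states, transitions, and atomic-proposition labels.
STATES = {"closed", "syn_sent", "established"}
TRANS = {"closed": {"syn_sent"},
         "syn_sent": {"established", "closed"},
         "established": {"closed"}}
LABELS = {"established": {"established"}}

def sat_EF(prop):
    """States satisfying EF prop: least fixpoint of prop union its pre-image."""
    sat = {s for s in STATES if prop in LABELS.get(s, set())}
    changed = True
    while changed:
        newly = {s for s in STATES if TRANS[s] & sat} - sat
        sat |= newly
        changed = bool(newly)
    return sat

print("closed" in sat_EF("established"))  # True: EF established holds initially
```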

Keywords: CTL, model checking, TCP protocols, correctness conditions, Kripke structure.

Received January 28, 2017; accepted March 21, 2017
 

Evolutionary Testing for Timing Analysis of Parallel Embedded Software

Muhammad Waqar Aziz and Syed Abdul Baqi Shah

Science and Technology Unit, Umm Al-Qura University, Kingdom of Saudi Arabia

Abstract: Embedded real-time software must be verified for timing correctness, and knowledge of the Worst-Case Execution Time (WCET) is the building block of such verification. The WCET of embedded software can be estimated using either static analysis or measurement-based analysis. Previously, WCET research assumed sequential code running on single-core platforms; however, as computation steadily moves towards a combination of parallel programming and multicore hardware, WCET analysis research must take this into account. Focusing on measurement-based analysis, the aim of this research is to find the WCET of parallel embedded software by generating test-data using search algorithms. In this paper, the use of a meta-heuristic optimizing search technique, the Genetic Algorithm, is demonstrated to automatically generate such test-data. The search-based optimization yielded the input vectors of the parallel embedded software that cause maximal execution times; these execution times are either the WCET of the parallel embedded software or very close to it. The process was evaluated in terms of its scalability, safety and applicability, and the generated test-data showed improvements over randomly generated data.
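
A minimal sketch of search-based timing-test generation: a GA whose fitness is the measured execution time of the program under test, so evolution drives inputs toward (near-)WCET behavior. The program under test and the GA settings are illustrative assumptions, not the paper's benchmarks or parameters.

```python
import random, time

def program_under_test(xs):          # stand-in: runtime depends on the input
    for x in xs:
        for _ in range(x % 50):      # input-dependent loop
            pass

def fitness(xs):                     # measured execution time = fitness
    t0 = time.perf_counter()
    program_under_test(xs)
    return time.perf_counter() - t0

def ga(pop_size=20, length=8, gens=30):
    pop = [[random.randrange(1000) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)       # longest runtimes first
        elite = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, length)     # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:             # mutation
                child[random.randrange(length)] = random.randrange(1000)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

worst_input = ga()
print(worst_input, fitness(worst_input))          # near-WCET input vector
```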

Keywords: Embedded real-time software, worst-case execution-time analysis, measurement-based analysis, end-to-end testing, genetic algorithm, parallel computing.

Received May 24, 2016; accepted February 3, 2017
 

(m, k)-Firm Constraints and Derived Data Management for the QoS Enhancement in Distributed Real-Time DBMS

Malek Ben Salem1, Emna Bouazizi1,2, Claude Duvallet3, and Rafik Bouaziz1

1Higher Institute of Computer Science and Multimedia, Sfax University, Tunisia

2College of Computer Science and Engineering, Jeddah University, Jeddah, Saudi Arabia

3Normandie Univ, UNIHAVRE, LITIS, 76600 Le Havre, France

Abstract: A Distributed Real-Time DBMS (DRTDBMS) is a collection of Real-Time DataBase Management Systems (RTDBMS) running on sites connected via communication networks for transaction processing. Such a system is characterized by data distribution and unpredictable transactions. In addition, the presence of several sites raises the problem of unbalanced load between the nodes. To enhance the performance of a DRTDBMS while taking these problems into account, Quality of Service (QoS) based approaches are the most appropriate, and a Distributed Feedback Control Scheduling Architecture (DFCSA) has been proposed for managing QoS in this system. In a DRTDBMS, results produced on time with less precision are sometimes preferable to exact results obtained late; some imprecision can be tolerated. To exploit this, we extend the DFCSA with (m, k)-firm constraints, which take imprecise results into account, using three data replication policies; the obtained architecture is called (m, k)-firm-User-DFCS. The second contribution takes real-time derived data into account on the (m, k)-firm-User-DFCS architecture, again using three data replication policies; the obtained architecture is called Derived Data Management (DDM)-(m, k)-firm-User-DFCS. We thus focus on two ways of service optimization in DRTDBMS: (1) space optimization, in which we apply three data replication policies, and (2) QoS optimization, in which we take into account real-time derived data and (m, k)-firm constraints.
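
An (m, k)-firm constraint requires that at least m deadlines are met in any window of k consecutive instances of a task. A minimal sliding-window checker is sketched below; the window values are illustrative, and the monitor class is a hypothetical helper, not the paper's architecture.

```python
from collections import deque

class MKFirmMonitor:
    """Tracks whether at least m of the last k job instances met their deadline."""
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.window = deque(maxlen=k)      # 1 = deadline met, 0 = missed

    def record(self, met):
        self.window.append(1 if met else 0)

    def violated(self):
        # Only meaningful once a full window of k instances has been seen.
        return len(self.window) == self.k and sum(self.window) < self.m

mon = MKFirmMonitor(m=3, k=5)
for met in [True, False, True, True, False, False]:
    mon.record(met)
    print(list(mon.window), "violation" if mon.violated() else "ok")
```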

Keywords: DRTDBMS, QoS management, feedback control scheduling, (m, k)-firm constraints, derived data.

Received June 28, 2015; accepted January 4, 2017
 

An Efficient Batch Authentication Scheme for Smart Grid Using Binary Authentication Tree

Lili Yan, Yan Chang, and Shibin Zhang

College of Information Security Engineering, Chengdu University of Information Technology, China

Abstract: The Smart Grid (SG) is designed to replace the traditional electric power infrastructure, managing electricity demand in a sustainable, reliable and economic manner. Advanced Metering Infrastructure (AMI) is proposed as a critical part of the smart grid. The gateway of the AMI receives and verifies a mass of data from smart meters within a required interval. This paper focuses on the computation overhead of the gateway and proposes a batch authentication scheme based on a binary tree. The proposed scheme enables the gateway to batch-authenticate data: the computation cost to verify all messages requires only n multiplications and 2 pairing operations, where n is the number of smart meters. This significantly reduces the computation cost of the gateway, especially when the number of smart meters in the AMI becomes large. We analyze the security and performance of the proposed scheme in detail to show that it is both secure and efficient for AMI in the smart grid.

Keywords: Smart grid, smart meter, security, authentication.

Received July 22, 2015; accepted January 1, 2017
 

Parallel Optimized Pearson Correlation Condition (PO-PCC) for Robust Cosmetic Makeup Facial Recognition

Kanokmon Rujirakul and Chakchai So-In

Department of Computer Science, Faculty of Science, Khon Kaen University, Thailand

Abstract: Makeup changes, or the application of cosmetics, constitute one of the challenges in improving the recognition precision of human faces, because makeup has a direct impact on facial features such as shape, tone, and texture. Thus, this research investigates the possibility of integrating a statistical model using Pearson Correlation (PC) to enhance facial recognition accuracy. PC is used to determine the relationship between the training and testing images while leveraging its key advantage of fast computation. Considering the relationship of factors other than the features, i.e., changes in shape, size, color, or appearance, leads to robustness against cosmetic changes. To further improve the accuracy and reduce the complexity of the approach, a technique using channel selection and the Optimum Index Factor (OIF), including Histogram Equalization (HE), is also considered. In addition, to enable real-time (online) applications, this research applies parallelism to reduce the computational time in the pre-processing and feature extraction stages, especially for parallel matrix manipulation, without affecting the recognition rate. The performance improvement is confirmed by extensive evaluations using three cosmetic datasets compared to classic facial recognition schemes, namely Principal Component Analysis and Local Binary Patterns (by factors of 6.98 and 1.4, respectively), including their parallel enhancements (i.e., by factors of 31,194.02 and 1,577.88, respectively), while maintaining high recognition precision.
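
The PC matching core is compact: correlate the probe's feature vector with each gallery vector and pick the highest correlation. A minimal nearest-neighbor sketch follows; random vectors stand in for the pre-processed face features.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two flattened feature vectors."""
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def recognize(probe, gallery):
    scores = [pearson(probe, g) for g in gallery]
    return int(np.argmax(scores)), max(scores)

gallery = [np.random.rand(64 * 64) for _ in range(10)]   # enrolled faces
probe = gallery[3] + 0.05 * np.random.rand(64 * 64)      # makeup-like change
print(recognize(probe, gallery))   # expected: index 3 with a high score
```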

Keywords: Cosmetic, facial recognition, makeup, parallel, Pearson correlation.

Received September 1, 2015; accepted September 26, 2016
 

Improving Classification Performance Using Genetic Programming to Evolve String Kernels

Ruba Sultan1, Hashem Tamimi1,2, and Yaqoub Ashhab2

1College of IT and Computer Engineering, Palestine Polytechnic University, Palestine

2Palestine-Korea Biotechnology Center, Palestine Polytechnic University, Palestine

Abstract: The objective of this work is to present a novel evolutionary approach that can create and optimize powerful string kernels using Genetic Programming. The proposed model creates and optimizes a superior kernel, expressed as a combination of string kernels, their parameters and corresponding weights. As a proof of concept, the classification performance of the newly evolved kernel was evaluated against a group of conventional single string kernels on a challenging classification problem from the biology domain, the classification of binder and non-binder peptides to Major Histocompatibility Complex Class II. Using 4794 strings containing 3346 binder and 1448 non-binder peptides, the present approach achieved an Area Under Curve of 0.80, while the 11 tested conventional string kernels had Area Under Curve values ranging from 0.59 to 0.75. This significant improvement of the optimized evolved kernel over all other tested string kernels demonstrates the validity of the approach for enhancing Support Vector Machine classification. The presented approach is not exclusive to biological strings; it can be applied to pattern recognition problems for other types of strings, as well as natural language processing.
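
The building block being combined, a string kernel, can be illustrated with the standard spectrum (k-mer count) kernel, plus a weighted sum of two such kernels of the kind the evolved expression composes. Here the k values and weights are fixed by hand; in the paper they are evolved by Genetic Programming. Assumes scikit-learn; the peptides and labels are illustrative stand-ins.

```python
import numpy as np
from collections import Counter
from sklearn.svm import SVC

def spectrum_kernel(s, t, k):
    """Standard spectrum kernel: dot product of k-mer count vectors."""
    cs = Counter(s[i:i+k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i+k] for i in range(len(t) - k + 1))
    return sum(c * ct[g] for g, c in cs.items())

def combined_gram(strings, ks=(2, 3), weights=(0.4, 0.6)):
    """Weighted sum of spectrum kernels (hand-set here, GP-evolved in the paper).
    A weighted sum of valid kernels is itself a valid (PSD) kernel."""
    n = len(strings)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = sum(w * spectrum_kernel(strings[i], strings[j], k)
                          for k, w in zip(ks, weights))
    return K

peptides = ["ACDEFGHIK", "ACDEFGHIL", "WYVWYVWYV", "WYVWYVACD"]
labels = [1, 1, 0, 0]                      # binder / non-binder stand-ins
K = combined_gram(peptides)
clf = SVC(kernel="precomputed").fit(K, labels)
print(clf.predict(K))
```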

Keywords: Support vector machine, string kernels, genetic programming, pattern recognition.

Received October 31, 2015; accepted June 1, 2016
 

Multi-Level Improvement for a Transcription Generated by Automatic Speech Recognition System for Arabic

Heithem Amich, Mohamed Ben Mohamed, and Mounir Zrigui

LaTICE Laboratory, Monastir Faculty of Sciences, Tunisia

Abstract: In this paper, we propose a novel approach to improving an automatic speech recognition system. The proposed method constructs a search space based on the semantic dependence relations of the output of the recognition system, then applies syntactic and phonetic filters to choose the most probable hypotheses. To achieve this, different techniques are deployed, such as word2vec, Recurrent Neural Network Language Models (RNNLM) and a tagged language model, in addition to a phonetic pruning system. The obtained results show that the proposed approach improves the accuracy of the system, especially for the recognition of mispronounced and irrelevant words.

Keywords: Automatic speech recognition, multi-level improvement, language model, semantic similarity, phonetic pruning.

Received July 12, 2016; accepted March 26, 2017
 

Offline Isolated Arabic Handwriting Character Recognition System Based on SVM

Mustafa Salam1 and Alia Abdul Hassan2

1Computer Engineering Techniques, Imam Ja'afar Al-Sadiq University, Iraq

2Computer Science Department, University of Technology, Iraq

Abstract: This paper proposes a new architecture for an Offline Isolated Arabic Handwriting Character Recognition system based on SVM (OIAHCR). An Arabic handwriting dataset is also proposed for training and testing the system. Although half of the dataset is used for training the Support Vector Machine (SVM) and the other half for testing, the system achieves high performance with little training data. The system achieves a best recognition accuracy of 99.64% based on several feature extraction methods and the SVM classifier. Experimental results show that the linear kernel of SVM converges and is more accurate for recognition than the other SVM kernels.
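
A minimal sketch of the classification stage: feature vectors into a linear-kernel SVM with a 50/50 train/test split, as in the paper's protocol. The random features and labels are placeholders, not the paper's extraction methods or dataset. Assumes scikit-learn.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Placeholder features: in the paper these come from several extraction
# methods applied to isolated Arabic character images.
X = np.random.rand(400, 64)
y = np.random.randint(0, 28, 400)        # 28 Arabic letters

# Half for training, half for testing.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
print(clf.score(X_te, y_te))             # near chance on random placeholders
```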

Keywords: Arabic character, pre-processing, feature extraction, classification.

Received October 5, 2015; accepted February 2, 2017
 

New Class-based Dynamic Scheduling Strategy for Self-Management of Packets at the Internet Routers

Hanaa Mohammed1, Gamal Attiya2, and Samy El-Dolil3

1Department of Electronics and Electrical Communications Engineering, Tanta University, Egypt

2Department of Computer Science and Engineering, Menoufia University, Egypt

3Department of Electronics and Electrical Communications Engineering, Menoufia University, Egypt

Abstract: Recently, the Internet has become the most important environment for many activities, including sending emails, browsing web sites, making phone calls and even holding videoconferences for distance education. The continual growth of internet traffic leads to a serious problem called congestion. Several Active Queue Management (AQM) algorithms have been implemented at internet routers to avoid congestion before it happens, and to resolve it if it does, by actively controlling the average queue length in the routers. However, most of the developed algorithms handle all traffic with the same strategy, although internet traffic, real-time and non-real-time, requires different Quality of Service (QoS). This paper presents a new RED-based algorithm, called Dynamic Queue RED (DQRED), to guarantee the required QoS of different traffic types. In the proposed algorithm, three queues are used in the internet router, one for each traffic type (data, audio and video). Arriving packets are first queued in the corresponding queue, and the queued packets are then scheduled dynamically according to the load (the number of queued packets) of each class. This strategy guarantees QoS for real-time applications as well as fairness of service.
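
The RED mechanism DQRED builds on can be sketched in a few lines: an exponentially weighted moving average of the queue length drives a drop probability that rises linearly between two thresholds. The parameter values are the usual illustrative defaults, not the paper's per-class tuning.

```python
import random

class REDQueue:
    """Classic RED: EWMA of queue length drives probabilistic early drops."""
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.w = max_p, weight
        self.avg, self.queue = 0.0, []

    def enqueue(self, pkt):
        self.avg = (1 - self.w) * self.avg + self.w * len(self.queue)
        if self.avg < self.min_th:
            drop = False
        elif self.avg >= self.max_th:
            drop = True
        else:  # linear ramp of drop probability between the thresholds
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            drop = random.random() < p
        if not drop:
            self.queue.append(pkt)
        return not drop

q = REDQueue()
accepted = sum(q.enqueue(i) for i in range(100))
print(accepted, len(q.queue))
```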

Keywords: Congestion control, AQM, packet queuing, dynamic scheduling, multimedia QoS.

Received December 31, 2015; accepted July 4, 2016
 

Cloud Data Center Design using Delay Tolerant Based Priority Queuing Model

Meera Annamalai1 and Swamynathan Sankaranarayanan2

1Department of Information Technology, Tagore Engineering College, India

2Department of Information Science and Technology, Anna University Chennai, India

Abstract: Infrastructure as a Service (IaaS), which occupies the bottom tier of the cloud pyramid, is a recently developed technology in cloud computing. Organizations can move their applications to a cloud data center without remodelling them. Cloud providers and consumers need to take into account performance factors such as the utilization of computing resources and the availability of resources as determined by scheduling algorithms; an effective scheduling algorithm must strive to maximize these performance factors. Designing a cloud data center that schedules computing resources and monitors their performance is a leading challenge in cloud research. In this paper, we propose a data center design using a delay-tolerant, priority-based queuing model for resource provisioning that pays attention to individual customer attributes. The priority selection process defines how to select the next customer to be served. The system has a priority-based task classifier and allocator that accepts customer requests; based on the rules defined in the rule engine, the task classifier assigns each request to a workload. The priority classifier is modeled as an M/M/S priority queue. A resource monitoring agent provides the resource utilization scenario of the cloud infrastructure, in the form of a dashboard, to the task classifier for further resource optimization.
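
For reference, the standard M/M/S quantities such a model rests on can be computed directly: the Erlang C probability of waiting and the mean queueing delay. The arrival and service rates below are illustrative, not the paper's workload.

```python
from math import factorial

def erlang_c(lam, mu, s):
    """P(wait) and mean wait Wq for an M/M/S queue (requires lam < s*mu)."""
    a = lam / mu                          # offered load in Erlangs
    rho = a / s
    assert rho < 1, "queue is unstable"
    summ = sum(a**k / factorial(k) for k in range(s))
    top = a**s / factorial(s) / (1 - rho)
    p_wait = top / (summ + top)           # Erlang C formula
    wq = p_wait / (s * mu - lam)          # mean time spent in queue
    return p_wait, wq

p, wq = erlang_c(lam=8.0, mu=1.0, s=10)   # 8 requests/s, 10 servers
print(f"P(wait)={p:.3f}, mean wait={wq:.3f}s")
```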

Keywords: Cloud data center, Infrastructure as a Service (IaaS), M/M/S priority queuing model.

Received June 7, 2015; accepted July 28, 2016
 