
Using Static and Dynamic Impact Analysis for Effort Estimation

Nazri Kama, Sufyan Basri, Saiful Adli Ismail, and Roslina Ibrahim

Advanced Informatics School, Universiti Teknologi Malaysia, Malaysia

Abstract: Effort estimation occurs in both the software maintenance and software development phases. Researchers have devised many techniques to estimate change effort before the actual change is implemented, and one of these techniques is impact analysis. A key challenge in estimating change effort during software development is managing the inconsistent states of software artifacts, i.e., partially completed artifacts and artifacts yet to be developed. This paper presents a novel model for estimating change effort during the software development phase through an integration of static and dynamic impact analysis. Three software development case studies were selected to evaluate the effectiveness of the model using the Mean Magnitude of Relative Error (MMRE) and Percentage of Prediction (PRED) metrics. The results indicate that the model has an average relative error (MMRE) of 22% and a prediction accuracy above 75% across all case studies.
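For readers unfamiliar with the two accuracy metrics, the following minimal Python sketch shows how MMRE and PRED(25) are typically computed; the effort values are invented for illustration and are not the paper's data.

```python
# Minimal sketch of the two evaluation metrics named above (illustrative data only).
def mmre(actual, estimated):
    """Mean Magnitude of Relative Error: mean of |actual - estimate| / actual."""
    return sum(abs(a - e) / a for a, e in zip(actual, estimated)) / len(actual)

def pred(actual, estimated, level=0.25):
    """PRED(l): fraction of estimates whose relative error is within `level`."""
    hits = sum(1 for a, e in zip(actual, estimated) if abs(a - e) / a <= level)
    return hits / len(actual)

# Hypothetical change-effort values in person-hours.
actual = [40.0, 12.5, 30.0, 8.0]
estimated = [35.0, 14.0, 26.0, 9.0]
print(f"MMRE = {mmre(actual, estimated):.2f}, PRED(25) = {pred(actual, estimated):.2f}")
```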

Keywords: Software development, change impact analysis, change effort estimation, impact analysis, effort estimation.

Received February 18, 2015; accepted September 26, 2016

Scheduling with Setup Time Matrix for Sequence Dependent Family

Senthilvel Nataraj1, Umamaheshwari Sankareswaran2, Hemamalini Thulasiram3, and Senthiil Varadharajan4

1Department of Computer Science, Anna University, India

2Department of Electronics and Communication Engineering, Coimbatore Institute of Technology, Affiliated to Anna University, India

 3Department of Mathematics, Government Arts and Science College, Bharathiar University, India

4Production Engineering Department, Saint Peter's University, India

Abstract: We consider the problem of scheduling n jobs in k families on a single machine, subject to family setup times, to minimize the overall penalty. This paper proposes three heuristic approaches based on neighbourhood structures using a setup time matrix. These approaches minimize the maximum penalty, which in turn reduces the total penalty. Inserting idle time initially (the ITF approach) or between families performs efficiently on large instances. The computational results demonstrate the efficiency of the algorithm.
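As an illustration of how a sequence-dependent family setup time matrix enters the penalty calculation, here is a hedged Python sketch that evaluates the maximum tardiness of one candidate family sequence; the job data and setup matrix are invented for the example and are not taken from the paper.

```python
# Evaluate one candidate schedule: jobs grouped by family, with a
# sequence-dependent family setup time matrix (illustrative numbers only).
jobs = {  # family -> list of (processing_time, due_date)
    0: [(3, 10), (2, 12)],
    1: [(4, 15)],
    2: [(5, 30), (1, 25)],
}
setup = [  # setup[i][j] = setup time when family j follows family i
    [0, 2, 3],
    [2, 0, 4],
    [3, 4, 0],
]

def max_tardiness(family_sequence, jobs, setup, initial_idle=0):
    t = initial_idle                 # optionally insert idle time up front (ITF idea)
    worst = 0
    prev = family_sequence[0]
    for fam in family_sequence:
        if fam != prev:
            t += setup[prev][fam]    # sequence-dependent family changeover
        for p, d in jobs[fam]:
            t += p
            worst = max(worst, max(0, t - d))   # tardiness penalty of this job
        prev = fam
    return worst

print(max_tardiness([0, 1, 2], jobs, setup))
```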

Keywords: Scheduling, sequence dependent scheduling, heuristic algorithm, idle time insertion.

Received January 1, 2016; accepted June 26, 2016

Feature Selection Method Based On Statistics of Compound Words for Arabic Text Classification

Aisha Adel, Nazlia Omar, Mohammed Albared, and Adel Al-Shabi

Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Malaysia

Abstract: One of the main problems of text classification is the high dimensionality of the feature space. Feature selection methods are normally used to reduce the dimensionality of datasets in order to improve classification performance, to reduce processing time, or both. To improve the performance of text classification, a feature selection algorithm is presented, based on terminology extracted from the statistics of compound words, to reduce the high dimensionality of the feature space. The proposed method is evaluated as a standalone method and in combination with other feature selection methods (two-stage method). The performance of the proposed algorithm is compared to that of six well-known feature selection methods: Information Gain, Chi-Square, Gini Index, Support Vector Machine-based selection, Principal Components Analysis, and Symmetric Uncertainty. A wide range of comparative experiments were conducted on three standard Arabic datasets with three classification algorithms. The experimental results clearly show the superiority of the proposed method both as a standalone method and in a two-stage scenario; the proposed method behaves better than traditional approaches in terms of classification accuracy, with a 6-10% gain in macro-average F1.
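The general shape of a two-stage feature selection pipeline can be sketched as follows; the first stage here is only a crude stand-in for the paper's compound-word statistic (it merely keeps unigrams and bigrams), the documents and labels are placeholders, and scikit-learn's chi-square filter plays the role of the second-stage method.

```python
# Hedged sketch of a two-stage feature selection pipeline (placeholder data and scoring).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

docs = ["machine learning text classification task",
        "feature selection with compound words for arabic text"]   # placeholder docs
labels = [0, 1]                                                     # placeholder labels

# Stage 1: generate unigram and bigram candidates (a stand-in for the paper's
# compound-word terminology extraction).
vec = CountVectorizer(ngram_range=(1, 2), min_df=1)
X = vec.fit_transform(docs)

# Stage 2: rank the surviving features with a standard filter (chi-square here).
selector = SelectKBest(chi2, k=min(1000, X.shape[1]))
X_reduced = selector.fit_transform(X, labels)
print(X_reduced.shape)
```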

Keywords: Feature selection method, compound words, arabic text classification.

Received March 15, 2015; accepted December 27, 2015

Collaborative Detection of Cyber Security Threats in Big Data

Jiange Zhang, Yuanbo Guo, and Yue Chen

State Key Laboratory of Mathematical Engineering and Advanced Computing, Zhengzhou Information Science and Technology Institute, China

Abstract: In the era of big data, protecting the information security of individuals, institutions, and countries is a problem that must be solved to promote the healthy development of the Internet and Internet+. Hence, this paper constructs a collaborative detection system for cyber security threats in big data. Firstly, it describes the log collection model of Flume, the data cache of Kafka, and the data processing of Esper; then it designs one-to-many log collection, consistent data caching, and Complex Event Processing (CEP) using event queries and event pattern matching; finally, it tests the system on the datasets and analyzes the results from six aspects. The results demonstrate that the system is reliable, efficient, and accurate in its detection; moreover, it has the advantages of low cost and flexible operation.
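Esper's event queries are written in EPL and run on the JVM; as a language-neutral illustration of the event-pattern-matching step only (not the paper's actual Flume/Kafka/Esper pipeline), the Python sketch below flags a source IP that produces several failed logins within a short time window.

```python
# Toy complex-event-processing style pattern match (illustrative, not Esper/EPL).
from collections import defaultdict, deque

WINDOW = 60        # seconds
THRESHOLD = 3      # failed logins within the window => alert

recent = defaultdict(deque)   # source IP -> timestamps of recent failures

def on_event(event):
    """event = {'ts': float, 'ip': str, 'type': str}"""
    if event["type"] != "login_failed":
        return None
    q = recent[event["ip"]]
    q.append(event["ts"])
    while q and event["ts"] - q[0] > WINDOW:   # slide the time window
        q.popleft()
    if len(q) >= THRESHOLD:
        return f"ALERT: {event['ip']} had {len(q)} failed logins in {WINDOW}s"
    return None

for e in [{"ts": t, "ip": "10.0.0.5", "type": "login_failed"} for t in (1, 20, 45)]:
    msg = on_event(e)
    if msg:
        print(msg)
```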

Keywords: Big data, cyber security, threat, collaborative detection.

Received July 20, 2016; accepted February 15, 2017

Using the Improved PROMETHEE for Selection of Trustworthy Cloud Database Servers

Jagpreet Sidhu1,2 and Sarbjeet Singh1

1University Institute of Engineering and Technology, Panjab University, India

 2Department of Computer Science and Engineering and Information Technology, Jaypee University of Information Technology, India

Abstract: The adoption of cloud computing transfers control of resources to cloud service providers. This transformation gives rise to a variety of security and privacy issues, which results in a lack of trust of the Cloud Client (CC) in the Cloud Service Provider (CSP). Clients need a sense of trust in the service provider in order to migrate their businesses to the cloud platform. In this paper, an attempt has been made to design a selection technique based on an improved Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE) for choosing trustworthy Cloud Database Servers (CDSs). The selection technique uses a multi-attribute decision making approach and makes use of benchmark parameters to evaluate the selection index of trustworthy CDSs. The selection index assists CCs in choosing trustworthy CSPs. To demonstrate the proposed technique's applicability to a real cloud environment, a case study based evaluation has been performed. The case study was designed and demonstrated using real cloud data collected from Cloud Harmony Reports; this data serves as the dataset for trust evaluation and CDS selection. The results demonstrate the effectiveness of the proposed selection technique in a real cloud environment.
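For orientation, the classical PROMETHEE II ranking (on which the improved method builds) computes pairwise preferences and net outranking flows. The sketch below uses the simple usual-criterion preference function and invented benchmark scores and weights, so it illustrates the standard method rather than the authors' improved variant.

```python
# Plain PROMETHEE II net-flow ranking (usual preference function, toy data).
import numpy as np

# Rows: candidate cloud database servers; columns: benchmark criteria (higher = better).
scores = np.array([
    [0.80, 0.60, 0.90],
    [0.70, 0.85, 0.75],
    [0.95, 0.50, 0.60],
])
weights = np.array([0.5, 0.3, 0.2])   # criterion weights, summing to 1

n = scores.shape[0]
phi = np.zeros(n)
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        # Usual criterion: preference is 1 on each criterion where i beats j, else 0.
        pref_ij = (weights * (scores[i] > scores[j])).sum()
        pref_ji = (weights * (scores[j] > scores[i])).sum()
        phi[i] += (pref_ij - pref_ji) / (n - 1)   # net outranking flow

ranking = np.argsort(-phi)
print("net flows:", phi.round(3), "ranking (best first):", ranking)
```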

Keywords: Cloud computing, cloud database servers, trust model, trustworthiness, multi attribute decision making, PROMETHEE, improved PROMETHEE, selection technique.

Received April 11, 2015; accepted January 13, 2016


Rough Set-Based Reduction of Incomplete Medical Datasets by Reducing the Number of Missing Values

Luai Al Shalabi

Faculty of Computer Studies, Arab Open University, Kuwait

Abstract: This paper proposes a model that, firstly, reduces the dimensionality of noisy medical datasets by minimizing the number of missing values, achieved by splitting the original dataset, and, secondly, produces a high-quality reduct. The original dataset is split into two subsets: the first contains complete records and the second contains imputed records that previously had missing values. The reducts of the two subsets, computed using rough set theory, are merged. The reduct of the merged attributes was constructed and tested using Rule Based and Decomposition Tree classifiers. The Hepdata dataset, in which 59% of the tuples have one or more missing values, is used throughout this article. The proposed algorithm performs effectively and the results are as expected. The dimension of the reduct generated by the Proposed Model (PM) is 10% smaller than that of the Rough Set Model (RSM). The proposed model was also tested against other incomplete medical datasets. Significant and insignificant differences between RSM and PM are shown in Tables 1-5.
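The first step of the model, splitting a dataset into complete records and records with missing values before imputation and reduct computation, can be sketched in a few lines of pandas; the toy data and the mean imputation below are simple stand-ins, and the rough-set reduct itself is only indicated by a comment.

```python
# Hedged sketch of the dataset-splitting step (toy data, pandas only).
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age":   [34, 51, np.nan, 42],
    "bili":  [1.2, np.nan, 0.9, 1.0],
    "class": ["live", "die", "live", "live"],
})

complete   = df.dropna()                        # subset 1: fully observed records
incomplete = df[df.isna().any(axis=1)].copy()   # subset 2: records with missing values

# Impute the incomplete subset (mean imputation as a simple stand-in).
incomplete = incomplete.fillna(df.mean(numeric_only=True))

# A rough-set reduct would now be computed for each subset and the reducts merged;
# that step is not shown here.
merged = pd.concat([complete, incomplete])
print(merged)
```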

Keywords: Data mining, rough set theory, missing values, reduct.

Received October 9, 2015; accepted August 24, 2016

Formal Architecture and Verification of a Smart Flood Monitoring System-of-Systems

Nadeem Akhtar1 and Saima Khan2
1Department of Computer Science and IT, The Islamia University of Bahawalpur, Pakistan
2Department of Computer Science, Faculty of Computer Science and Information Technology, Virtual University of Pakistan, Pakistan

Abstract: In a flood situation, forecasting of necessary information and an effective evacuation plan are vital. The Smart Flood Monitoring System-of-Systems (SoS) is a flood monitoring and rescue system. It collects information from weather forecasts, flood onlookers, and observers; this information is processed and then made available as alerts to clients. The system also maintains continuous communication with the authorities for disaster management, social services, and emergency responders, and synchronizes the support offered by emergency responders with the community's needs. This paper presents the architecture specification and formal verification of the proposed Smart Flood Monitoring SoS. The formal model of this SoS is specified to ensure the correctness properties of safety and liveness.

Keywords: Flood monitoring, system-of-systems, behavioral modeling, formal verification, correctness, safety property.

Received September 15, 2015; accepted June 1, 2016

Parallel Batch Dynamic Single Source Shortest Path Algorithm and Its Implementation on GPU based Machine

Dhirendra Singh and Nilay Khare

Department of Computer Science and Engineering, Maulana Azad National Institute of Technology, India

Abstract: In this fast-changing and uncertain world, computer applications based on real-world data must respond to user requests in the minimum possible time. Single Source Shortest Path (SSSP) calculation is a basic requirement of applications that use graphs portraying real-world data, such as social networks and road networks, to extract useful information. Some of these real-world data change very frequently, so recalculating the shortest paths for all nodes of such a graph after small updates to the graph structure is an expensive process. To minimize the cost of recalculation, shortest path algorithms need to process only the affected part of a graph after any update, and parallel implementation is a frequently used technique to speed up any process. This paper proposes a new parallel batch dynamic SSSP calculation approach and shows its implementation on a hybrid CPU-Graphics Processing Unit (GPU) machine. The proposed algorithm is defined for positive edge-weighted graphs and accepts multiple edge weight updates simultaneously. It uses a parallel modified Bellman-Ford algorithm to recalculate the SSSP for all affected nodes. Nvidia's Tesla C2075 GPU is used to run the parallel implementation. The proposed parallel algorithm shows up to a twenty-fold speed-up compared to the best serial algorithm available in the literature.
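A serial, CPU-only sketch of the batch-dynamic idea, relaxing outward only from the vertices affected by a batch of edge-weight updates instead of recomputing from scratch, is given below. It is restricted to weight decreases for simplicity (increases require first invalidating the affected subtree, which the full algorithm handles) and illustrates the general approach rather than the paper's GPU implementation.

```python
# Serial sketch of the affected-node idea for a batch of edge-weight DECREASES.
from collections import deque

def batch_decrease_sssp(adj, dist, updates):
    """adj: {u: [(v, w), ...]} with positive weights (already updated);
    dist: current shortest distances from the source;
    updates: list of (u, v, new_w) whose new weight is smaller than before."""
    frontier = deque()
    for u, v, w in updates:
        if dist.get(u, float("inf")) + w < dist.get(v, float("inf")):
            dist[v] = dist[u] + w
            frontier.append(v)                 # only v and its successors are affected
    while frontier:                            # Bellman-Ford style relaxations
        u = frontier.popleft()
        for v, w in adj.get(u, []):
            if dist[u] + w < dist.get(v, float("inf")):
                dist[v] = dist[u] + w
                frontier.append(v)
    return dist

# Example: a 4-node graph whose edge (0, 2) just dropped from weight 9 to 2.
adj = {0: [(1, 4), (2, 2)], 1: [(3, 3)], 2: [(3, 1)], 3: []}
dist = {0: 0, 1: 4, 2: 9, 3: 7}                      # distances before the update
print(batch_decrease_sssp(adj, dist, [(0, 2, 2)]))   # {0: 0, 1: 4, 2: 2, 3: 3}
```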

Keywords: Parallel algorithm, graph algorithm, dynamic shortest path algorithm, network algorithm.

Received December 20, 2014; accepted October 30, 2016

A New Approach of Lossy Image Compression Based on Hybrid Image Resizing Techniques

Jau-Ji Shen1, Chun-Hsiu Yeh2,3, and Jinn-Ke Jan2

1Department of Management Information Systems, National Chung Hsing University, Taiwan

2Department of Computer Science and Engineering, National Chung Hsing University, Taiwan

3Department of Information Management Systems, Chung Chou University, Taiwan

Abstract: In this study, we coordinated and employed known image resizing techniques to replace the widely applied image compression techniques defined by the Joint Photographic Experts Group (JPEG). The JPEG approach requires additional information from a quantization table to compress and decompress images. Our proposed scheme requires no additional data storage for compression and decompression and instead of using compression code it uses shrunken images that can be read visually. Experimental results indicate that the proposed method can coordinate typical image resizing techniques effectively to yield enlarged (decompressed) images that are better in quality than JPEG images. Our novel approach to lossy image compression can improve the quality of decompressed images and could replace the use of JPEG compression in current image resizing techniques, thus enabling compression to be performed directly in the spatial domain without the need for complex conversion in the frequency domain.
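The core compress/decompress cycle, shrink the image with one resampling filter and enlarge it back with another, can be reproduced with standard OpenCV resizing. The filters, scale factor, and file name below are ordinary assumptions for illustration, not the hybrid combination tuned in the paper.

```python
# Minimal resize-based lossy "compression" round trip (generic filters, not the
# paper's tuned hybrid scheme; "lena.png" is any test image on disk).
import cv2

img = cv2.imread("lena.png")
h, w = img.shape[:2]
scale = 0.25                                       # store a quarter-size image

small = cv2.resize(img, (int(w * scale), int(h * scale)),
                   interpolation=cv2.INTER_AREA)      # "compression": shrink
restored = cv2.resize(small, (w, h),
                      interpolation=cv2.INTER_CUBIC)  # "decompression": enlarge

mse = ((img.astype(float) - restored.astype(float)) ** 2).mean()
print(f"stored pixels: {small.size}, original: {img.size}, MSE = {mse:.2f}")
```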

Keywords: Differential image, image compression, image rescaling, image resolution improvement.

Received July 14, 2015; accepted October 24, 2016

Information Analysis and 2D Point Extrapolation using Method of Hurwitz-Radon Matrices

Dariusz Jakóbczak

Department of Electronics and Computer Science, Koszalin University of Technology, Poland

Abstract: Information analysis needs suitable methods of curve extrapolation. The proposed Method of Hurwitz-Radon Matrices (MHR) can be used for extrapolation and interpolation of curves in the plane. For example, stock exchange quotations, market prices, or currency exchange rates form a curve. This paper presents a way of anticipating and extrapolating data via the MHR method and of making decisions: to buy or not, to sell or not. The proposed method is based on a family of Hurwitz-Radon (HR) matrices. The matrices are skew-symmetric and have columns composed of orthogonal vectors. The Operator of Hurwitz-Radon (OHR), built from these matrices, is described. Two-dimensional information is represented by a set of curve points. It is shown how to create the orthogonal and discrete OHR and how to use it in a process of data forecasting and extrapolation. The MHR method interpolates and extrapolates the curve point by point without using any closed-form formula or function.

Keywords: Information analysis, decision making, point interpolation, data extrapolation, value anticipation, hurwitz-radon matrices.

Received December 15, 2014; accepted September 19, 2016

An Efficient Mispronunciation Detection System Using Discriminative Acoustic Phonetic Features for Arabic Consonants

Muazzam Maqsood1, Adnan Habib2, and Tabassam Nawaz1

1Department of Software Engineering, University of Engineering and Technology Taxila, Pakistan

2Department of Computer Science, University of Engineering and Technology Taxila, Pakistan

Abstract: Mispronunciation detection is an important component of Computer-Assisted Language Learning (CALL) systems. It helps students learn new languages and focus on their individual pronunciation problems. In this paper, a novel discriminative Acoustic Phonetic Feature (APF) based technique is proposed to detect mispronunciations using an artificial neural network classifier. Using domain knowledge, Arabic consonants are categorized into two groups based on their acoustic similarities: the first group consists of consonants with similar ending sounds and the second group consists of consonants with completely different sounds. In the proposed technique, discriminative acoustic features are required for classifier training; to extract these features, the discriminative parts of the Arabic consonants are identified. As a test case, a dataset was collected from native and non-native speakers, male and female, and children of different ages. This dataset comprises 5600 isolated Arabic consonants. The average accuracy of the system when tested with simple acoustic features is 73.57%, while the use of discriminative acoustic features improves the average accuracy to 82.27%. Some consonant pairs that are acoustically very similar produced poor results and are termed Bad Phonemes. A subjective analysis has also been carried out to verify the effectiveness of the proposed system.

Keywords: Computer assisted language learning systems, mispronunciation detection, acoustic-phonetic features, artificial neural network, confidence measures.

Received April 20, 2016; accepted November 9, 2016

Secure Searchable Image Encryption in Cloud Using Hyper Chaos

Shaheen Ayyub and Praveen Kaushik

Department of Computer Science and Engineering, Maulana Azad National Institute of Technology, India

Abstract: In cloud computing, security is the main issue for many cloud providers and researchers. The cloud acts as a big black box: nothing inside it is visible to the cloud user. This means that when we store our data or images in the cloud, we lose control over them. Data in the provider's hands can raise security and privacy issues in cloud storage, as users lose control over their data. To protect users' private data, it should therefore be stored in encrypted form, and the server should learn nothing about the stored data. Such data may include personal images. In this paper we work on users' personal images, which should be kept secret. The proposed scheme encrypts the images stored in the cloud. Hyper-chaos-based encryption is applied to masked images. Compared with conventional algorithms, chaos-based methods offer more secure and faster encryption. Flicker images are used to create a mask for the original image, and hyper chaos is then applied to encrypt the masked image. Prior methods in this regard are restricted either by the possibility of some attacks or by the key transfer mechanism. One advantage of the proposed algorithm is that the key itself is also encrypted: some values of the generated encrypted key, together with the index, are sent to the server, and the other values are sent to the user. After decrypting the key, the encrypted image can be decrypted. Key encryption is used to enhance the security and privacy of the algorithm. An index is also created for the images before they are stored in the cloud.
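Since the keywords mention the logistic map, here is a generic chaos-keystream sketch, XORing image bytes with a logistic-map sequence. It illustrates the family of techniques only; it is far simpler than the paper's hyper-chaos scheme with flicker-image masking and key encryption, and the initial condition and parameter values are arbitrary.

```python
# Generic logistic-map keystream XOR over image bytes (illustration only;
# NOT the paper's hyper-chaos + flicker-mask scheme and not production crypto).
import numpy as np

def logistic_keystream(n, x0=0.654321, r=3.99):
    """Generate n pseudo-random bytes from the logistic map x <- r*x*(1-x)."""
    ks = np.empty(n, dtype=np.uint8)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        ks[i] = int(x * 256) % 256
    return ks

def xor_image(img_bytes, key=(0.654321, 3.99)):
    ks = logistic_keystream(img_bytes.size, *key)
    return img_bytes ^ ks             # the same call encrypts and decrypts

img = np.random.randint(0, 256, size=16, dtype=np.uint8)   # stand-in image pixels
cipher = xor_image(img)
assert np.array_equal(xor_image(cipher), img)               # XOR is its own inverse
```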

Keywords: Cloud computing, encryption, cloud security, privacy and integrity, hyper chaos, decryption, logistic map.

Received June 27, 2015; accepted June 1, 2016

A Low-Power Self-service Bus Arrival Reminding Algorithm on Smart Phone

Xuefeng Liu, Jingjing Fan, Jianhua Mao, and Fenxiao Ye

School of Communication and Information Engineering, Shanghai University, China

Abstract: In this paper, a low-power self-service bus arrival reminding algorithm on smart phones is proposed and implemented. The algorithm first determines the current position of the bus using the smart phone's Global Positioning System (GPS) module and calculates the straight-line distance from the bus's current position to the destination station, then sets a buffer distance for reminding passengers to get off the bus, estimates the bus's maximum speed, and calculates the minimum time needed to reach the buffer. Based on this time, the frequency of GPS positioning and of the distance calculation between the bus and the destination station is intelligently adjusted. Once the distance to the destination station falls within the buffer distance, the smart phone immediately reminds passengers to get off. The test results show that the algorithm can provide timely, personalized arrival reminding service, efficiently meet the requirements of different passengers, and greatly reduce the power consumption of the smart phone.
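The adaptive polling idea, compute how soon the bus could possibly reach the alert buffer at its maximum speed and sleep until then, can be sketched as follows; the haversine distance and the parameter values (buffer size, maximum speed, minimum poll interval) are ordinary assumptions, not the paper's figures.

```python
# Sketch of the adaptive-polling logic (haversine distance, invented parameters).
import math
import time

BUFFER_M = 500.0        # alert when within 500 m of the destination stop
V_MAX = 20.0            # assumed maximum bus speed, m/s

def haversine_m(lat1, lon1, lat2, lon2):
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def remind_loop(get_fix, dest, notify):
    """get_fix() -> (lat, lon) from GPS; dest = (lat, lon); notify(msg) alerts the user."""
    while True:
        lat, lon = get_fix()                       # one GPS fix
        d = haversine_m(lat, lon, *dest)
        if d <= BUFFER_M:
            notify("Your stop is coming up - get ready to alight")
            return
        # Earliest possible time to reach the buffer at V_MAX => safe sleep interval.
        time.sleep(max(5.0, (d - BUFFER_M) / V_MAX))
```

Sleeping for the worst-case travel time to the buffer is what keeps the GPS duty cycle, and hence the power draw, low when the bus is still far from the destination.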

Keywords: Bus arrival reminding algorithm, power consumption, buffer distance, GPS location.

Received June 21, 2015; accepted July 4, 2016

Optimal Threshold Value Determination for Land Change Detection

Sangram Panigrahi1, Kesari Verma1, and Priyanka Tripathi2

1Department of Computer Applications, National Institute of Technology Raipur, India

2Department of Computer Engineering and Applications, National Institute of Technical Teachers Training and Research, Bhopal, India

Abstract: Recently, data mining techniques have emerged as an important means of detecting land change by detecting sudden and/or gradual changes in vegetation index time series datasets. In this technique, the algorithm takes the vegetation index time series dataset as input and provides a list of change scores as output, with each change score corresponding to a particular location. If the change score of a location is greater than some threshold value, that location is considered changed. In this paper, we propose a two-step process for threshold determination: the first step determines the upper and lower boundaries for the threshold, and the second step finds the optimal point between the upper and lower boundaries for the change detection algorithm. Using this process, we determine the threshold values for both the Recursive Merging (RM) algorithm and the Recursive Search Algorithm (RSA) and present a comparative study of these algorithms for detecting changes in time series data. These techniques are evaluated quantitatively using a synthetic dataset analogous to a vegetation index time series dataset. The quantitative evaluation shows that the RM method performs reasonably well, but the RSA significantly outperforms it in the presence of cyclic data.

Keywords: Data mining, threshold determination, EVI and NDVI time series data, high dimensional data, land change detection, recursive search algorithm, recursive merging algorithm.

Received October 28, 2015; accepted June 26, 2016

An Efficient Algorithm for Extracting Infrequent Itemsets from Weblog

Brijesh Bakariya1 and Ghanshyam Thakur2

1Department of Computer Science and Engineering, I.K. Gujral Punjab Technical University, India

2Department of Computer Applications, Maulana Azad National Institute of Technology, India

Abstract: Weblog data contains unstructured information, which makes extracting frequent patterns from weblog databases a very challenging task. A power set lattice strategy is adopted for handling this kind of problem. In this lattice, the top level contains the full set and the bottom level contains the empty set. Most algorithms follow a bottom-up strategy, i.e., combining smaller sets into larger ones. Efficient lattice traversal techniques are presented which quickly identify all the long frequent itemsets and, if required, their subsets. This strategy is suitable for discovering frequent itemsets but may not be worthwhile for infrequent itemsets. In this paper, we propose the Infrequent Itemset Mining for Weblog (IIMW) algorithm, a top-down, breadth-first, level-wise algorithm for discovering infrequent itemsets. We compared IIMW with Apriori-Rare and Apriori-Inverse and report results for different parameters such as candidate itemsets, frequent itemsets, time, transaction database size, and support threshold.
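To make the top-down, level-wise idea concrete, the toy sketch below walks the itemset lattice from the largest itemsets downwards and reports infrequent ones, pruning anything that is a subset of an itemset already known to be frequent (downward closure). The transactions and support threshold are invented, and none of IIMW's weblog-specific optimizations are shown.

```python
# Toy top-down, level-wise scan of the itemset lattice that reports infrequent
# itemsets (illustrative only; IIMW adds weblog-specific optimizations).
from itertools import combinations

transactions = [{"a", "b", "c"}, {"a", "c"}, {"b", "d"}, {"a", "b", "c", "d"}]
items = sorted(set().union(*transactions))
min_sup = 2

def support(s):
    return sum(1 for t in transactions if s <= t)

frequent, infrequent = [], []
for size in range(len(items), 0, -1):                 # walk the lattice top-down
    for combo in combinations(items, size):
        s = frozenset(combo)
        if any(s <= f for f in frequent):             # subset of a frequent itemset
            continue                                  # => frequent by closure, prune
        (frequent if support(s) >= min_sup else infrequent).append(s)

print([sorted(s) for s in infrequent])
```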

Keywords: Infrequent itemsets, lattice, frequent itemsets, weblog, support threshold.

Received September 6, 2014; accepted March 24, 2016

Case Retrieval Algorithm Using Similarity Measure and Fractional Brain Storm Optimization for Health Informaticians

Poonam Yadav

Department of Computer Science and Engineering, DAV College of Engineering and Technology, Maharshi Dayanand University, India

Abstract: The management and exploitation of health information is a demanding task for health informaticians seeking to provide the highest quality healthcare delivery. Storage, retrieval, and interpretation of healthcare information are important stages in health informatics. Consequently, retrieving similar cases based on the current patient's data can help doctors identify similar patients and their methods of treatment. With this in mind, a hybrid model is developed for the retrieval of similar cases using case-based reasoning. A new measure, the Parametric Enabled-Similarity Measure (PESM), is proposed, along with a new optimization algorithm, Fractional Brain Storm Optimization (FBSO), which modifies the well-known Brain Storm Optimization (BSO) algorithm through the addition of fractional calculus. For experimentation, three different patient datasets from the UCI machine learning repository are used, and performance is compared with an existing method using accuracy and F-measure. The average accuracy and F-measure reached by the proposed method across the three datasets are 89.6% and 88.8%, respectively.

Keywords: Case-based reasoning, case retrieval, optimization, similarity, fractional calculus.

Received April 1, 2015; accepted September 7, 2015

Prediction of Future Vulnerability Discovery in Software Applications using Vulnerability Syntax Tree (PFVD-VST)

Kola Periyasamy1 and Saranya Arirangan2

1Department of Information Technology, Madras Institute of Technology, India

2Department of Information Technology, SRM Institute of Engineering and Technology, India

Abstract: Software applications are the origin from which vulnerabilities spread to systems, networks, and other software applications. A Vulnerability Discovery Model (VDM) helps to uncover the susceptibilities in the problem domain, but protecting software applications from known and unknown vulnerabilities is quite difficult and also requires a large database to store the history of attack information. We propose a vulnerability prediction scheme named Prediction of Future Vulnerability Discovery in Software Applications using Vulnerability Syntax Tree (PFVD-VST), which consists of five steps to address the problem of new vulnerability discovery and prediction. First, classification and clustering are performed based on the software application name, status, phase, category, and attack types. Second, code quality is analyzed with the help of code quality measures such as cyclomatic complexity, function point analysis, coupling, and cloning between objects. Third, a Genetic-based Binary Code Analyzer (GABCA) is used to convert the source code to binary code and evaluate each bit of the binary code. Fourth, a Vulnerability Syntax Tree (VST) is trained with the help of vulnerabilities collected from the National Vulnerability Database (NVD). Finally, a combined Naive Bayes and Decision Tree based prediction algorithm is implemented to predict future vulnerabilities in new software applications. The experimental results show that the prediction rate, recall, and precision have improved significantly.
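The final step combines a Naive Bayes and a Decision Tree classifier. One ordinary way to combine the two, a soft-voting ensemble in scikit-learn, is sketched below with placeholder feature vectors and labels; the paper's exact combination rule may differ.

```python
# One possible Naive Bayes + Decision Tree combination (soft voting, toy data);
# the feature vectors stand in for code-quality metrics and are not the paper's data.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X = np.array([[12, 3.1, 0.4], [45, 7.8, 0.9], [8, 2.0, 0.2], [60, 9.5, 1.1]])
y = np.array([0, 1, 0, 1])     # 1 = vulnerability expected, 0 = not expected

clf = VotingClassifier(
    estimators=[("nb", GaussianNB()), ("dt", DecisionTreeClassifier(max_depth=3))],
    voting="soft",             # average the two classifiers' predicted probabilities
)
clf.fit(X, y)
print(clf.predict([[50, 8.0, 1.0]]))
```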

Keywords: Vulnerability discovery, prediction, classification and clustering, binary code analyzer, code quality metrics, vulnerability syntax tree.

Received October 30, 2014; accepted April 21, 2016

Tunisian Arabic Chat Alphabet Transliteration Using Probabilistic Finite State Transducers

Nadia Karmani, Hsan Soussou, and Adel Alimi
Research Groups on Intelligent Machines, University of Sfax, Tunisia

Abstract: The Internet is playing an ever larger role in Tunisians' lives, especially after the revolution in 2011. Indeed, Tunisian Internet users increasingly use social networks, blogs, etc., and in doing so they favor the Tunisian Arabic chat alphabet, a Latin-scripted form of the Tunisian Arabic language. However, few tools have been developed for processing Tunisian Arabic in this context. In this paper, we propose a Tunisian Arabic chat alphabet-to-Tunisian Arabic transliteration machine based on weighted finite state transducers, using a Tunisian Arabic lexicon, aebWordNet (aeb is the ISO 639-3 code of Tunisian Arabic), and a Tunisian Arabic morphological analyzer. Weighted finite state transducers allow us to follow Tunisian Internet users' transcription behaviour when writing Tunisian Arabic chat alphabet texts, which has no standard format but respects a regular relation. Moreover, the system uses aebWordNet and the Tunisian Arabic morphological analyzer to validate the generated transliterations. Our approach achieves good results compared with existing Arabic chat alphabet-to-Arabic transliteration tools such as EiKtub.

Keywords: Tunisian arabic chat alphabet, tunisian arabic, transliteration, aebWordNet, tunisian arabic morphological analyzer, weighted finite state transducer.

Received August 6, 2015; accepted April 17, 2016

Fast and Robust Copy-Move Forgery Detection Using Wavelet Transforms and SURF

Mohammad Hashmi1 and Avinash Keskar2

1Department of Electronics and Communication Engineering, National Institute of Technology Warangal, India

2Department of Electronics and Communication Engineering, Visvesvaraya National Institute of Technology Nagpur, India

Abstract: Most images today are stored in digital format. With the advent of digital imagery, tampering with images became easy, and the problem has intensified further due to the availability of image tampering software. Moreover, cameras exist with different resolutions and encoding techniques, and detecting forgery in such cases becomes a challenging task. Furthermore, the forged image may be compressed or resized, which further complicates the problem. This article focuses on blind detection of copy-move forgery using a combination of an invariant feature transform and a wavelet transform. The feature transform employed is Speeded Up Robust Features (SURF) and the wavelet transforms employed are the Discrete Wavelet Transform (DWT) and the Dyadic Wavelet Transform (DyWT). A comparison between the performances of the two wavelet transforms is presented. The proposed algorithms differ from previously proposed methods in that they are applied to the whole image rather than after dividing the image into blocks. A comparative study between the proposed algorithm and previous block-based methods is presented. From the results obtained, we conclude that these algorithms perform better than their counterparts in terms of accuracy, computational complexity, and robustness to various attacks.
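The whole-image pipeline, take a wavelet approximation of the image, extract keypoint descriptors, and match the descriptors against themselves to expose duplicated regions, can be sketched with PyWavelets and OpenCV. ORB is used below as a freely available stand-in because SURF requires OpenCV's non-free contrib build, "suspect.png" is a placeholder file name, and the distance thresholds are arbitrary.

```python
# Keypoint-based copy-move check on the DWT approximation band (ORB stands in
# for SURF; thresholds and file name are illustrative assumptions).
import cv2
import numpy as np
import pywt

img = cv2.imread("suspect.png", cv2.IMREAD_GRAYSCALE)
ll, _ = pywt.dwt2(img.astype(float), "haar")         # keep the LL approximation band
ll = cv2.normalize(ll, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

orb = cv2.ORB_create(nfeatures=2000)
kp, desc = orb.detectAndCompute(ll, None)

# Match the image's descriptors against themselves; strong matches between
# keypoints that are far apart hint at a copied-and-pasted region.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(desc, desc, k=2)
suspicious = []
for pair in matches:
    if len(pair) < 2:
        continue
    m, n = pair                                       # m is usually the self-match
    if m.queryIdx != n.trainIdx and n.distance < 40:
        p, q = np.array(kp[m.queryIdx].pt), np.array(kp[n.trainIdx].pt)
        if np.linalg.norm(p - q) > 30:                # ignore near-identical positions
            suspicious.append((p, q))
print(f"{len(suspicious)} suspicious keypoint pairs")
```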

Keywords: Image forgery, SURF, DWT, DyWT, CMF.

Received December 10, 2014; accepted June 12, 2016

Efficient Mapping Algorithm on Mesh-based NoCs in Terms of Cellular Learning Automata

Mohammad Keley1, Ahmad Khademzadeh2, and Mehdi Hosseinzadeh1

1Department of Computer, Islamic Azad University, Iran

2Information and Communication Technology Research Institute, Iran Telecommunication Research Center, Iran


Abstract: Network-on-Chip (NoC) presents an interesting approach to organizing complex communications in many systems. NoC can also be used as an effective solution to existing problems in System-on-Chip (SoC) design, such as scalability and reusability. The most common topology used in NoCs is the mesh. However, mapping applications described by weighted task graphs onto the mesh is known to be an NP-hard problem. This paper presents an effective algorithm called the Boundary Mapping Algorithm (BMA), which decreases the priority of low-weight edges in the task graph to improve performance in NoCs. A low-complexity mapping algorithm cannot produce optimal mapping results for all applications, so adding an optimization phase to mapping algorithms can have a positive impact on their performance. This study therefore presents an optimization phase based on Cellular Learning Automata to achieve this goal. To evaluate the mapping algorithm and the optimization phase, we compared the BMA method with the Integer Linear Programming (ILP), Nmap, CastNet, and Onyx methods on six real applications. The mapping results indicate that the proposed algorithm is useful for some applications, and the optimization phase can benefit both the proposed and other mapping algorithms.
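Mapping quality on a mesh NoC is commonly judged by total communication cost, traffic volume times hop distance. The sketch below evaluates an invented mapping of a small task graph onto a 2x2 mesh; it is the objective such a search would minimize, and stands in for neither BMA nor the learning-automata optimization phase.

```python
# Evaluate a candidate mapping of a weighted task graph onto a 2x2 mesh
# by summing traffic volume x Manhattan hop distance (toy numbers).
edges = [        # (task_u, task_v, communication volume)
    ("a", "b", 100),
    ("a", "c", 40),
    ("b", "d", 70),
]
mapping = {"a": (0, 0), "b": (0, 1), "c": (1, 0), "d": (1, 1)}   # task -> mesh tile

def comm_cost(edges, mapping):
    cost = 0
    for u, v, vol in edges:
        (x1, y1), (x2, y2) = mapping[u], mapping[v]
        hops = abs(x1 - x2) + abs(y1 - y2)   # XY-routing hop count on the mesh
        cost += vol * hops
    return cost

print(comm_cost(edges, mapping))   # lower is better; BMA/CLA would search this space
```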

Keywords: Cellular learning automata, mapping algorithm, network on chip, optimization algorithm, power consumption.

Received May 22, 2014; accepted June 8, 2016