
A Novel Approach to Develop Dynamic Portable Instructional Operating System for Minimal Utilization

Siva Sankar Kanahasabapathy1, Jebarajan Thanganadar2, and Padmasuresh Lekshmikanthan3

1Department of Information Technology, Noorul Islam University, India

2Department of Computer Science and Engineering, Rajalekshmi Engineering College, India

3Department of Electrical and Electronics Engineering, Noorul Islam University, India

Abstract: Most well-known instructional Operating Systems (OSs) are complex, particularly once their companion software is taken into account. Considerable time and effort goes into crafting these systems, and their complexity may introduce maintenance and evolution problems. The purpose of this paper is to develop a mini OS which is open source and Linux based, independent of any hardware simulator and of the underlying platform. It encompasses a simplified kernel that occupies little memory and consumes minimal resources, and it includes a dynamic boot loader which bypasses the BIOS boot priority and assigns itself the highest priority. The OS is designed for low primary-memory usage and minimal CPU utilization, mainly to satisfy the basic requirements of a typical desktop user.

Keywords: OSs, kernel, boot loader, Linux, portable, open source.

Received July 23, 2012; accepted May 19, 2015; published online September 15, 2015

 


Technique for Burning Area Identification Using IHS Transformation and Image Segmentation

 Thumma Kumar1 and Kamireddy Reddy2

1Computer Sciences Corporation, India

2Remote Sensing Applications Area, National Remote Sensing Centre, India

Abstract: In this paper, we design and develop a technique for burnt area identification using Intensity Hue Saturation (IHS) transformation and image segmentation. The proposed technique identifies the burnt area in four steps: IHS transformation, object segmentation, identification of the smoke area using a Feed-Forward Neural Network (FFNN), and discovery of burning areas from the smoke segments. Satellite images collected from NASA are used for the experimental study. The images are first given to the IHS transformation, which converts the RGB image into an intensity-hue-saturation representation suitable for segmentation. After the transformation, object segmentation is performed using the K-means clustering algorithm. Subsequently, the FFNN identifies the smoke area from the segments. Once the smoke segment is identified, the burning area is located through directional analysis. The proposed technique is evaluated using sensitivity, specificity and accuracy. The experimental results show that the proposed technique achieves an overall accuracy 2.6% higher than that of the existing approach.
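As an illustration of the IHS step, a minimal sketch of the textbook RGB-to-IHS conversion is given below; the paper may use a different variant, and the epsilon terms are added here only to avoid division by zero.

    import numpy as np

    def rgb_to_ihs(rgb):
        # rgb: float array in [0, 1] with shape (height, width, 3)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        i = (r + g + b) / 3.0                                  # intensity
        s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + 1e-10)
        num = 0.5 * ((r - g) + (r - b))
        den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-10
        h = np.arccos(np.clip(num / den, -1.0, 1.0))           # hue in radians
        h = np.where(b > g, 2.0 * np.pi - h, h)
        return np.stack([i, h, s], axis=-1)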

 Keywords: Burning, segmentation, K-means, FFNN.

 Received July 3, 2013; accepted March 20, 2014

 


A Measurement of Similarity to Identify Identical Code Clones

Mythili ShanmughaSundaram and Sarala Subramani

Department of Information Technology, Bharathiar University, India

Abstract: Code clones are portions of a program that are completely or partially similar to other portions. In earlier research, code clones were detected using a fingerprinting technique. The major challenge in our work was to group the code clones based on a similarity measure. The proposed system measures similarity based on similarity distance. The defined expression considers two parameters for calculating the similarity measure, namely the similarity distance and the population of the clone. The code clones are thereby clustered and ranked on the basis of their similarity measures. Indexing is used to interactively identify the clones caused by inconsistent changes. As a result of this work, all the identical clusters for the most similar and more similar categories are identified.
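The paper's exact expression is not reproduced here; the sketch below shows one plausible way to combine the two stated parameters, similarity distance and clone population, into a single score used for clustering and ranking (the function name, the weighting and the normalisation constant are illustrative assumptions):

    def clone_similarity(distance, population, alpha=0.7, pop_norm=100.0):
        # Hypothetical measure: smaller distances and larger clone
        # populations both push the score towards 1.
        proximity = 1.0 / (1.0 + distance)          # maps [0, inf) to (0, 1]
        coverage = min(1.0, population / pop_norm)  # saturating population term
        return alpha * proximity + (1.0 - alpha) * coverage

    clones = [{"id": 1, "dist": 0.2, "pop": 40}, {"id": 2, "dist": 1.5, "pop": 80}]
    ranked = sorted(clones, reverse=True,
                    key=lambda c: clone_similarity(c["dist"], c["pop"]))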

Keywords: Clone detection, software clones, fingerprinting, clustering, reuse.

 


A Statistical Framework for Identification of Tunnelled Applications using Machine Learning

 

Ghulam Mujtaba1 and David Parish2

1Department of Electrical Engineering, COMSATS Institute of Information Technology, Pakistan

2School of Electronic and Electrical Engineering, Loughborough University, UK

 

Abstract: This work describes a statistical approach to detecting applications which are running inside application layer tunnels. Application layer tunnels are a significant threat for network abuse and violation of an organisation's acceptable Internet usage policy. In tunnelling, the prohibited application's packets are encapsulated as the payload of an allowed protocol's packets. Identifying tunnelling with conventional methods is difficult, for example in the case of encrypted HTTPS tunnels. Hence, a machine learning based approach is presented in this work in which statistical packet stream features are used to identify the application inside a tunnel. The Packet Size Distribution (PSD), in the form of discrete bins, is an important feature which is shown to be indicative of the respective application. This work combines other features with the PSD bins for better identification of the applications. Tunnelled applications are identifiable using these traffic statistical parameters. A comparison of the detection accuracy of five machine learning algorithms using this feature set is also given.
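To make the feature construction concrete, the sketch below derives PSD histogram bins from a flow's packet sizes and trains a classifier on them; scikit-learn and the bin count are assumptions, since the paper does not name its tooling.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def psd_features(packet_sizes, n_bins=15, max_size=1500):
        # Discrete packet-size distribution: fraction of packets per size bin.
        hist, _ = np.histogram(packet_sizes, bins=n_bins, range=(0, max_size))
        return hist / max(1, len(packet_sizes))

    # One PSD feature vector per flow; labels name the tunnelled application.
    flows = [[64, 64, 1500, 1500, 580], [1500] * 20, [64, 80, 72] * 10]
    labels = ["web", "bulk", "chat"]
    X = np.array([psd_features(f) for f in flows])
    clf = RandomForestClassifier(n_estimators=100).fit(X, labels)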

Keywords: Network security, tunnelled applications, firewalls, HTTP tunnels, HTTPS tunnels.

Received May 22, 2013; accepted May 17, 2015


Utilizing Corpus Statistics for Hindi Word Sense Disambiguation

Satyendr Singh and Tanveer Siddiqui

Department of Electronics and Communication, University of Allahabad, India

 

Abstract: Word Sense Disambiguation (WSD) is the task of computationally assigning the correct sense of a polysemous word in a given context. This paper compares three WSD algorithms for Hindi based on corpus statistics. The first algorithm, called corpus-based Lesk, uses sense definitions and a sense-tagged training corpus to learn weights of Content Words (CWs). These weights are used in the disambiguation process to assign a score to each sense. We experimented with four metrics for computing the weight of matching words: Term Frequency (TF), Inverse Document Frequency (IDF), Term Frequency-Inverse Document Frequency (TF-IDF) and CWs in a fixed window size. The second algorithm uses the conditional probability of words and phrases co-occurring with each sense of an ambiguous word. The third algorithm is based on the classification information model. The first method yields an overall maximum precision of 85.87% using the TF-IDF weighting scheme. The WSD algorithm using word co-occurrence statistics results in an average precision of 68.73%, and the one using the classification information model in an average precision of 76.34%. All three algorithms perform significantly better than the direct overlap method, which achieves an average precision of 47.87%.
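A minimal sketch of the corpus-based Lesk scoring described above, with TF-IDF weights learned from a sense-tagged corpus; the data layout and the absence of smoothing are simplifications.

    import math
    from collections import Counter

    def tf_idf_weights(sense_docs):
        # sense_docs: {sense_id: list of content words from its sense
        # definition and tagged training examples}
        df = Counter(w for words in sense_docs.values() for w in set(words))
        n = len(sense_docs)
        return {s: {w: tf * math.log(n / df[w])
                    for w, tf in Counter(words).items()}
                for s, words in sense_docs.items()}

    def lesk_score(context_words, sense_id, weights):
        # Score a sense by summing the weights of matching context words;
        # the sense with the highest score is chosen.
        return sum(weights[sense_id].get(w, 0.0) for w in context_words)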

Keywords: Supervised Hindi WSD, corpus-based Lesk, TF-IDF, statistical WSD, word co-occurrence, information theory, classification information model.

 Received August 15, 2013; accepted May 6, 2014

 


Translation Rules for English to Hindi Machine Translation System: Homoeopathy Domain

Sanjay Dwivedi and Pramod Sukhadeve

Department of Computer Science, Babasaheb Bhimrao Ambedkar University (a Central University), India

Abstract: A rule-based machine translation system embraces a set of grammar rules which are mandatory for mapping the syntactic representation of a source language onto the target language. The system necessitates good linguistic knowledge to write rules and requires acquaintance with resources such as a corpus and a bilingual dictionary. In this paper, we describe the grammar rules intended for our English to Hindi machine translation system for translating homoeopathic literature, medical reports, prescriptions, etc. The rules follow the transfer-based approach for reordering between the two languages. The paper first discusses our stemmer and its rules; it then presents the Part of Speech (PoS) tagging rules for categorizing each word of a sentence grammatically, together with our homoeopathy corpus in English and Hindi of 20,085 and 20,072 words respectively; finally, it discusses the agreement/translation rules for translating various homoeopathic sentences.
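To illustrate the transfer-based reordering these rules perform, the toy function below moves the verb group of a PoS-tagged English sentence to the end, since English is SVO while Hindi is SOV; the real rule set is of course far richer.

    def svo_to_sov(tagged):
        # tagged: [(word, tag)] for a simple declarative sentence
        verbs = [(w, t) for w, t in tagged if t.startswith("VB")]
        rest = [(w, t) for w, t in tagged if not t.startswith("VB")]
        return rest + verbs

    sentence = [("The", "DT"), ("remedy", "NN"), ("relieves", "VBZ"), ("pain", "NN")]
    print(svo_to_sov(sentence))   # the verb 'relieves' now follows 'pain'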

 

Keywords: Machine translation, stemmer, PoS tagging, grammar rules, homoeopathy, corpus.

 

Received June 14, 2013; accepted March 17, 2014


 

 


Comparative Analysis of Classifier Performance on MR Brain Images

 

Akila Thiyagarajan and UmaMaheswari Pandurangan

Research Scholar, Anna University, India

Info Institute of Engineering, India

Abstract: This paper aims to provide a comparative analysis of classifier performance on MR brain images, particularly for brain tumor detection and classification. The detection of brain tumors relies on Magnetic Resonance Imaging (MRI). Moment-invariant feature extraction is evaluated for categorizing the MRI slices as normal, benign or malignant with a Neural Network (NN) classifier. In our comparative study, we examine the precision rate of the aforementioned classification with extracted features against the classification of brain images with selected features by an Association Rule (AR) based NN classifier. The results are then analyzed with Receiver Operating Characteristic (ROC) curves and compared to determine which method produces the higher accuracy rate in tumor recognition. Our analysis shows that the classifier working on feature extraction followed by the rule-pruning method affords a better accuracy rate.
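The moment-invariant features mentioned above can be computed, for example, with OpenCV's Hu moments; the library choice and the log scaling are assumptions, as the paper does not specify an implementation.

    import cv2
    import numpy as np

    def moment_invariant_features(mri_slice):
        # mri_slice: 2-D grayscale image as a numpy array
        m = cv2.moments(mri_slice.astype(np.float32))
        hu = cv2.HuMoments(m).flatten()      # the seven Hu moment invariants
        # Log-scale the invariants, whose raw magnitudes span many orders.
        return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)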

 Keywords: Binary association rule, brain tumor, feature extraction, MRI, pruning.

 

Received June 17, 2013; accepted January 17, 2014


 

 


Brain Tumor Segmentation in MRI Images Using Integrated Modified PSO-Fuzzy Approach

Krishna Priya Remamany1, Thangaraj Chelliah2, Kesavadas Chandrasekaran3, and Kannan Subramanian4

1Department of Electrical and Computer Engineering, Caledonian College of Engineering, Oman

2Anna University of Technology, India

3Department of Imaging Sciences and Interventional Radiology, SCTIMST, India

4Department of EEE, Kalasalingam University, India

 

Abstract: An image segmentation technique based on maximum fuzzy entropy, applied to Magnetic Resonance (MR) brain images to detect brain tumors, is presented in this paper. The proposed method performs segmentation by adaptive thresholding of the input MR brain images. The MR brain image is partitioned into two fuzzy regions whose Membership Functions (MFs) are the Z-function and the S-function. The optimal parameters of these fuzzy MFs are obtained using a Modified Particle Swarm Optimization (MPSO) algorithm, with maximization of the fuzzy entropy as the objective function. Over a number of examples, the performance is compared with existing entropy-based object segmentation approaches and the superiority of the proposed MPSO method is demonstrated. The experimental results are also compared with the exhaustive search method and the Otsu segmentation technique. The results show that the proposed fuzzy entropy based segmentation method, optimized using MPSO, achieves maximum entropy with proper segmentation of the tumor and minimum computational time.
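For concreteness, here is a sketch of the S-function membership and a fuzzy entropy objective of the kind MPSO maximizes; the exact parameterization in the paper may differ.

    import numpy as np

    def s_function(x, a, b, c):
        # S-shaped membership; the Z-function of the other region is 1 - S.
        y = np.zeros_like(x, dtype=float)
        y[x >= c] = 1.0
        m1 = (x > a) & (x <= b)
        m2 = (x > b) & (x < c)
        y[m1] = (x[m1] - a) ** 2 / ((b - a) * (c - a))
        y[m2] = 1.0 - (x[m2] - c) ** 2 / ((c - b) * (c - a))
        return y

    def fuzzy_entropy(levels, hist, a, b, c):
        # hist: normalized gray-level histogram; two fuzzy regions:
        # object (S) and background (Z = 1 - S).
        mu = s_function(levels, a, b, c)
        p_obj, p_bg = np.sum(hist * mu), np.sum(hist * (1.0 - mu))
        # MPSO searches for the (a, b, c) triple that maximizes this value.
        return -p_obj * np.log(p_obj + 1e-12) - p_bg * np.log(p_bg + 1e-12)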

 

Keywords: Fuzzy entropy, particle swarm optimization, MRI, segmentation.

Received June 9, 2012; accepted April 18, 2013

 


Verifiable Multi Secret Sharing Scheme for 3D Models

Jani Anbarasi1 and Anandha Mala2

1Anna University, India

2Department of Computer Science and Engineering, Easwari Engineering College, India

Abstract: An efficient, computationally secure, verifiable (t, n) multi secret sharing scheme, based on the YCH scheme, is proposed for multiple 3D models. The (t, n) scheme shares the 3D secrets among n participants such that fewer than t shares cannot reveal a secret. The experimental results show that the scheme provides sufficient protection for 3D models. The feasibility and security of the proposed system are demonstrated on various 3D models, and the simulation results show that the secrets are retrieved from the shares without any loss.
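The YCH construction builds on Shamir's polynomial secret sharing; a minimal (t, n) sketch over a prime field is given below, with the multi-secret machinery and the verification step omitted.

    import random

    P = 2**61 - 1    # a prime modulus large enough for the encoded secrets

    def make_shares(secret, t, n):
        # Random polynomial of degree t-1 whose constant term is the secret.
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0; needs at least t distinct shares.
        total = 0
        for j, (xj, yj) in enumerate(shares):
            num = den = 1
            for m, (xm, _) in enumerate(shares):
                if m != j:
                    num = num * -xm % P
                    den = den * (xj - xm) % P
            total = (total + yj * num * pow(den, P - 2, P)) % P
        return total

    shares = make_shares(123456789, t=3, n=5)
    assert reconstruct(shares[:3]) == 123456789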

Keywords: Visual secret sharing, 3D graphics, cryptography.

 Received March 2, 2013; accepted June 9, 2014


 


Preventing Collusion Attack in Android

Iman Kashefi1, Maryam Kassiri2, and Mazleena Salleh1

1Faculty of Computing, Universiti Teknologi Malaysia, Malaysia

2Faculty of Computer Engineering, Islamic Azad University, Iran

Abstract: Globally, the number of smartphone users has risen above a billion, and most users rely on smartphones for their day-to-day activities, so smartphone security has become a great concern. Recently, Android, the most popular smartphone platform, has been targeted by attackers. Many severe attacks on Android are caused by malicious applications which acquire excessive privileges at install time. Moreover, some applications are able to collude in order to increase their privileges by sharing their permissions. This paper proposes a mechanism for preventing this kind of collusion attack on Android by detecting the applications which are able to share their acquired permissions. By applying the proposed mechanism to a set of 290 applications downloaded from the official Android market, Google Play, the number of detected applications which are potentially able to conduct malicious activities increased by 12.90% compared to the existing detection mechanism. The results showed that among the detected applications there were 4 which were able to collude to acquire excessive privileges and which were entirely missed by the existing method.
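A simplified sketch of the pair-wise check such a mechanism performs: two applications that share a Linux user ID can pool their permission sets at runtime, so the union of their permissions is what must be screened (the data layout and the flagging rule are illustrative assumptions):

    def collusion_candidates(apps, dangerous):
        # apps: {name: {"uid": shared_uid_or_None, "perms": set of permissions}}
        flagged = []
        names = sorted(apps)
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                if apps[a]["uid"] and apps[a]["uid"] == apps[b]["uid"]:
                    pooled = (apps[a]["perms"] | apps[b]["perms"]) & dangerous
                    # Flag the pair if pooling grants more dangerous
                    # permissions than either app holds on its own.
                    if len(pooled) > max(len(apps[a]["perms"] & dangerous),
                                         len(apps[b]["perms"] & dangerous)):
                        flagged.append((a, b, pooled))
        return flagged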

 

Keywords: Android security, collusion attacks, colluding applications, over-privileged applications.

 

Received July 19, 2012; accepted September 27, 2012


 

  

Intelligent Risk Analysis Model for Mining Adaptable Reusable Component

Iyapparaja Meenakshisundaram1 and Sureshkumar Sreedharan2

1School of Information Technology and Engineering, Anna University, India

2Vivekanandha College of Technology for Women, India

Abstract: Solutions to today's problems are usually reached more easily by drawing on accumulated experience, and software engineers likewise look for better ways through the development cycle beyond the traditional approach. Software, being embedded in almost every machine, is continually developed with many improvisation techniques while obeying time and cost constraints. Adding to the available simplification methodologies in the development phases, the proposed Intelligent Risk Analysis Model (IRAM) addresses the limitations of developing object-oriented code anew for a software product, showing improvements in the time and budget needed. An object-oriented program comprises individual and exclusive objects with specified functionalities. Recognizing the usage of objects in existing programs eliminates the need for new coding; a component can thus be reused when no better alternative can be designed. The methodology first verifies whether any components match the stated requirements in the database of programs (e.g., C++, Java, Perl and Python). Based on the analysis, a matched component is categorized as an Exact Match (EM), Partial Match (PM) or Rejected Match (RM), which denotes its applicability to the new product. This analysis of correspondence in the reused object depends on a four-parameter tuple, namely Expected Language (EL), Module Description (MD), Argument Description (AD) and Usage Threshold (UT). A component that matches exactly (EM) can be directly incorporated into the new software product; if the component falls into the PM category, it is subjected to additional tests, a Rank (R) is allotted, an Intelligent Report (IR) is prepared, and measures are taken to upgrade it to an EM. An RM component is eliminated from the list of possible outcomes at once.
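The sketch below shows how the four-tuple (EL, MD, AD, UT) might drive the EM/PM/RM decision; the thresholds are illustrative, as the paper defines its own.

    def classify_component(el_match, md_score, ad_score, usage, ut=0.5):
        # el_match: the language matches the Expected Language (bool)
        # md_score, ad_score: Module/Argument Description similarity in [0, 1]
        # usage: observed usage level, compared against the Usage Threshold ut
        if el_match and md_score == 1.0 and ad_score == 1.0 and usage >= ut:
            return "EM"    # exact match: incorporate directly
        if el_match and md_score >= 0.6 and ad_score >= 0.6:
            return "PM"    # partial match: extra tests, Rank, Intelligent Report
        return "RM"        # rejected match: eliminate from the candidates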

 

Keywords: Software engineering, software reusability, object oriented programming, IR, cohesion and coupling, regression test.

Received February 3, 2013; accepted September 9, 2014


 



Multiclass SVM based Spoken Hindi Numerals Recognition

Teena Mittal1 and Rajendra Kumar Sharma2

1Department of Electronics and Communication Engineering, Thapar University, India

2School of Mathematics and Computer Applications, Thapar University, India

 

Abstract: This paper presents recognition of isolated spoken Hindi numerals using a multiclass Support Vector Machine (SVM). Acoustic features in terms of Linear Predictive Coding (LPC) coefficients, Mel-Frequency Cepstral Coefficients (MFCCs), and their combination are considered as inputs to the recognition process and given to the SVM. The classification is performed in two steps: in the first step, a one-versus-all SVM classifier is used to identify the Hindi language; in the second step, ten one-versus-all classifiers are used to recognize the numerals. Linear, polynomial and RBF kernels are used to construct the SVMs. In the first phase, the best kernel strategy was explored for a fixed number of frames of the speech signal, with the highest recognition rate achieved using the linear kernel. Next, the number of frames used to calculate the LPCs and MFCCs was varied and the recognition accuracy calculated. The highest recognition accuracy achieved in this study is 96.8%.
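A minimal sketch of the numeral-recognition stage: MFCC features computed over a fixed number of frames feed ten one-versus-all SVMs; librosa and scikit-learn are assumptions, since the paper does not name its tooling.

    import numpy as np
    import librosa
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.svm import SVC

    def mfcc_features(wav_path, n_mfcc=13, n_frames=40):
        y, sr = librosa.load(wav_path, sr=16000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)[:, :n_frames]
        pad = n_frames - mfcc.shape[1]           # pad short utterances
        return np.pad(mfcc, ((0, 0), (0, pad))).flatten()

    # X: one feature vector per utterance; y: numeral labels 0..9.
    # clf = OneVsRestClassifier(SVC(kernel="linear")).fit(X, y)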

Keywords: LPC, MFCC, Hindi numerals, speech recognition, SVM.

 Received November 9, 2012; accepted March 9, 2014


 



Towards the Construction of a Comprehensive Arabic Lexical Reference System

 Hamza Zidoum, Fatma Al-Rasbi, and Muna Al-Awfi

Department of Computer Science, College of Science, Sultan Qaboos University, Oman

Abstract: Arabic is a Semitic language spoken by millions of people in 20 different countries. However, not much work has been done on online dictionaries or lexical resources for it. WordNet, a lexical database developed by Professor George Miller and his team at Princeton University, came to life 20 years ago and has since proved widely successful and extremely necessary for today's demands; yet it has not been developed to its full extent for Arabic. Accordingly, the motivation for developing an Arabic WordNet (AWN) became strong. This project addresses the nominal part of WordNet, i.e., nouns as a part of speech, as the first step towards the construction of a comprehensive AWN.

Keywords: WordNet, synsets, Arabic processing, lexicon.

 Received March 10, 2012; accepted July 28, 2015


 


CARIM: An Efficient Algorithm for Mining Class-Association Rules with Interestingness Measures

Loan Nguyen1,2, Bay Vo3, and Tzung-Pei Hong4,5

1Division of Knowledge and System Engineering for ICT, Ton Duc Thang University, Vietnam

2Faculty of Information Technology, Ton Duc Thang University, Vietnam

3Faculty of Information Technology, Ho Chi Minh City University of Technology, Vietnam

4Department of CSIE, National University of Kaohsiung, Taiwan

5Department of CSE, National Sun Yat-sen University, Taiwan


Abstract: Classification based on association rules can often achieve higher accuracy than some traditional rule-based methods such as C4.5 and ILA. The right-hand side of a class-association rule is a value of the target (or class) attribute. This study proposes a general algorithm for mining class-association rules based on a variety of interestingness measures. The proposed algorithm uses a tree structure to maintain the related information of itemsets in the nodes, thus speeding up rule generation. It can also be easily extended to integrate several measures together for ranking rules. Experiments are conducted to show the efficiency of the proposed approach under various settings.
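To ground the terminology, a tiny sketch that enumerates candidate class-association rules and scores them with two common interestingness measures, support and confidence; the paper's tree structure and further measures are omitted.

    from itertools import combinations

    def class_association_rules(rows, min_sup=0.2, min_conf=0.6):
        # rows: list of (itemset, class_label); rules have the form
        # itemset -> class, i.e., the class attribute is the right-hand side.
        n = len(rows)
        items = {i for itemset, _ in rows for i in itemset}
        rules = []
        for k in (1, 2):
            for lhs in combinations(sorted(items), k):
                covered = [c for itemset, c in rows if set(lhs) <= set(itemset)]
                for cls in set(covered):
                    sup = covered.count(cls) / n
                    conf = covered.count(cls) / len(covered)
                    if sup >= min_sup and conf >= min_conf:
                        rules.append((lhs, cls, sup, conf))
        return sorted(rules, key=lambda r: r[3], reverse=True)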

Keywords: Accuracy, classification, class-association rule, interestingness measure, integration.

 

Received July 19, 2012; accepted September 27, 2012


 


The Fuzzy Logic Based ECA Rule Processing for XML Databases

Thomson Fredrick1 and Govindaraju Radhamani2

1R&D Centre, Bharathiar University, Coimbatore, India

2Dr.G.R.D College of Science, Coimbatore, India

Abstract: The current needs of e-commerce transactions require the development of XML database systems comparable to relational database systems. Fuzzy concepts are adapted to the field of XML Databases (DBs) in order to deal with ambiguous and uncertain data. Incorporating fuzziness into Event Condition Action (ECA) rules improves the effectiveness of an XML DB, as it provides much flexibility in defining rules for the supported application. This paper presents an architecture that specifies how fuzzy logic based rules are processed in the context of XML database transactions, and proposes an algorithm for implementing fuzzy active-rule-based triggers for XML. The proposed architecture provides new forms of interaction, in support of fuzzy ECA rules, between application programs and the XML database. A motivating example illustrates the use of a fuzzy trigger in a stock market brokering agency. Testing was performed to compare the performance of fuzzy XML triggers and normal XML triggers; the results show that fuzzy ECA rule based triggers provide better output than normal ECA rule based triggers.
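A toy illustration of a fuzzy ECA trigger on a stock-price update event: the condition holds to a degree rather than as a boolean, and the action fires once that degree crosses a threshold (the membership shape and threshold are illustrative):

    def high_price(price, low=80.0, high=120.0):
        # Fuzzy membership of "price is high": 0 below low, 1 above high.
        return min(1.0, max(0.0, (price - low) / (high - low)))

    def on_price_update(stock, price, threshold=0.7):
        degree = high_price(price)     # Condition: evaluated fuzzily
        if degree >= threshold:        # Action: fires above the threshold
            print(f"ALERT {stock}: sell signal, membership {degree:.2f}")

    on_price_update("XYZ", 115.0)      # fires with membership 0.88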

Keywords: XML DB, ECA rules, fuzzy ECA rules, fuzzy XQuery, fuzzy trigger.

Received August 30, 2012; accepted March 20, 2014

Finger Knuckle Print Authentication Using AES and K-Means Algorithm

Muthukumar Arunachalam1 and Kannan Subramanian2

1Department of Electronics and Communication Engineering, Kalasalingam University, India

2Department of Electrical and Electronics Engineering, Kalasalingam University, India

 

Abstract: In general, identification and verification are done with passwords, PINs, etc., which can easily be cracked by hackers and so do not provide effective security. Biometrics is a powerful and unique tool, based on the anatomical and behavioural characteristics of human beings, that can provide higher security than a password. The Finger Knuckle Print (FKP) is a unique biometric anatomical feature of an individual. Biometric systems themselves, however, are subject to a variety of attacks, and combining biometrics with cryptography is the major tool for avoiding them: a bio-crypto system provides authentication as well as confidentiality of the data. This paper presents a biometric key generated from FKP key points using the k-means algorithm, together with a secret hash value generated using a Secure Hash Algorithm (SHA) function and encrypted with the extracted FKP key points by the symmetric Advanced Encryption Standard (AES) algorithm. The FKP key points are extracted using the Scale Invariant Feature Transform (SIFT). The encrypted secret hash value thus secures both the biometric data and the secret value, while the hash function protects the biometric data from malicious tampering and provides error-checking functionality.
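A rough sketch of the key-generation path described above: SIFT keypoint descriptors, k-means over those descriptors, and a SHA-256 digest usable as an AES-256 key. OpenCV, scikit-learn and hashlib stand in for unspecified tooling, and the quantization step glosses over the stability issues a real bio-crypto system must handle.

    import cv2
    import hashlib
    import numpy as np
    from sklearn.cluster import KMeans

    def fkp_key(knuckle_img, k=16):
        # knuckle_img: 8-bit grayscale finger-knuckle-print image
        sift = cv2.SIFT_create()
        _, desc = sift.detectAndCompute(knuckle_img, None)
        centers = KMeans(n_clusters=k, n_init=10).fit(desc).cluster_centers_
        # Quantize the sorted cluster centres into a byte string and hash
        # it down to a 256-bit value, usable as an AES-256 key.
        quantized = np.sort(centers.round(0), axis=0).astype(np.int32)
        return hashlib.sha256(quantized.tobytes()).digest()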

Keywords: Biometric cryptosystems, key point extraction, K-means algorithm, SIFT algorithm, AES, SHA function.

Received August 30, 2012; accepted April 23, 2013

Lightweight Anti-Censorship Online Network for Anonymity and Privacy in Middle Eastern Countries

Tameem Eissa and Gihwan Cho

Division of Electronic and Information Engineering, Chonbuk National University, Republic of Korea

Abstract: The Onion Router (TOR) online anonymity system is a network of volunteer nodes that allows Internet users to be anonymous through consecutive encryption tunnels. Nodes are selected according to estimated bandwidth (bnd) values announced by the nodes themselves; some nodes may announce false values due to a lack of accuracy or malicious intent. Furthermore, a network bottleneck may occur when running TOR in countries with low Internet speed. In this paper, we highlight the censorship challenges that Internet users face when using anti-censorship tools in such countries, and we show that current anti-censorship solutions have limitations when deployed in countries with extensive Internet filtering and low Internet speed. To overcome these limitations, we propose a new online anonymity solution based on TOR in which the network nodes are selected using a trust-based system. Most of the encryption and path-selection computation overhead is shifted to our network nodes. We also provide a new encryption framework in which the nodes with higher bnd and resources are chosen and verified carefully according to specific metrics, and we use atomic encryption between the entry and exit nodes (Ex) without revealing the secret components of either party. We demonstrate that our solution provides anonymous browsing in countries with slow Internet, with fewer bottlenecks.
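The node-selection idea can be sketched as a trust-weighted draw, in which a node's announced bandwidth (bnd) only counts in proportion to how well it has been verified; the scoring is illustrative.

    import random

    def select_node(nodes):
        # nodes: [{"id": ..., "bnd": announced_bandwidth, "trust": 0..1}]
        # Announced bandwidth is discounted by verified trust, so nodes
        # that self-inflate their bnd values carry less weight.
        weights = [n["bnd"] * n["trust"] for n in nodes]
        return random.choices(nodes, weights=weights, k=1)[0]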

Keywords: Anonymity, censorship, TOR, anti-censorship, atomic encryption.

Received August 31, 2012; accepted May 6, 2013



An Effective Approach to Software Cost Estimation Based on Soft Computing Techniques

Marappagounder Shanker1 and Keppanagounder Thanushkodi2

1Research Scholar, Anna University, Chennai, India

2Akshaya College of Engineering and Technology, Coimbatore, India

Abstract: Employing estimation models in software engineering helps in envisaging essential traits of future entities such as software development effort, software reliability and programmer productivity. Of these models, those that support the estimation of software effort have drawn substantial research attention recently. Estimation by analogy is one of the interesting techniques used for estimating software effort, but it is unable to handle categorical data accurately. A novel technique that relies on reasoning by analogy, fuzzy logic and linguistic quantifiers is proposed here for estimating effort, whether the software project is represented by categorical or numerical data. Fuzzy logic based cost estimation models are more suitable when unclear or inaccurate information must be considered, since fuzzy systems attempt to imitate the processes of the brain through a rule base. The proposed method utilizes a fuzzy logic based analogy approach to estimate the cost and the effort. The performance of the proposed scheme is analysed using the Mean Absolute Relative Error (MARE) and the Mean Magnitude of Relative Error (MMRE), and validated against other existing techniques.
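Both validation measures are simple averages of relative estimation errors; a direct sketch (here MARE is computed the same way as MMRE, though some authors normalize it differently):

    def mmre(actual, estimated):
        # Mean Magnitude of Relative Error: mean of |actual - est| / actual.
        return sum(abs(a - e) / a for a, e in zip(actual, estimated)) / len(actual)

    efforts_actual = [120.0, 80.0, 200.0]      # e.g., person-months
    efforts_estimated = [110.0, 95.0, 210.0]
    print(mmre(efforts_actual, efforts_estimated))   # about 0.107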

 Keywords: Cost estimation, effort estimation, analogy, fuzzy logic, MARE, cost constructive model.

Received October 18, 2012; accepted June 30, 2014


 


An Efficient Method for Contrast Enhancement of Real World Hyper Spectral Images

Shyam Lal1 and Rahul Kumar2

1ECE Department, National Institute of Technology Karnataka, India

2ECE Department, Moradabad Institute of Technology, India

 

Abstract: This paper proposes an efficient method for contrast enhancement of real world hyper spectral images. Contrast is an important characteristic by which the quality of an image can be judged as good or poor. The proposed method consists of two stages: in the first stage, the poor-quality image is processed by automatic contrast adjustment in the spatial domain; in the second stage, the output of the first stage is further processed by adaptive filtering in the frequency domain. Simulation and experimental results on a benchmark real world hyper spectral image database demonstrate that the proposed method provides better results than other state-of-the-art contrast enhancement techniques. The proposed method handles both dark and bright real world hyper spectral images by adjusting their contrast appropriately; it is a simple and efficient approach that can be used in applications where images suffer from various contrast problems.
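A minimal sketch of the two-stage idea on one image band: a percentile contrast stretch in the spatial domain, then a high-boost filter in the frequency domain; the stretch limits, boost factor and cut-off are illustrative choices.

    import numpy as np

    def enhance(band, low_pct=2, high_pct=98, boost=1.5, sigma=30.0):
        # Stage 1: automatic contrast adjustment (percentile stretch).
        lo, hi = np.percentile(band, [low_pct, high_pct])
        stretched = np.clip((band - lo) / (hi - lo + 1e-10), 0.0, 1.0)
        # Stage 2: adaptive high-boost filtering in the frequency domain.
        f = np.fft.fftshift(np.fft.fft2(stretched))
        rows, cols = band.shape
        y, x = np.ogrid[:rows, :cols]
        d2 = (y - rows / 2) ** 2 + (x - cols / 2) ** 2
        h = 1.0 + (boost - 1.0) * (1.0 - np.exp(-d2 / (2.0 * sigma ** 2)))
        out = np.real(np.fft.ifft2(np.fft.ifftshift(f * h)))
        return np.clip(out, 0.0, 1.0)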

Keywords: Adaptive contrast enhancement, real world hyper spectral image, image processing, histogram equalization.

Received December 17, 2012; accepted June 19, 2014


 


Adaptability Metric for Adaptation of the Dynamic Changes

 

Subbian Suganthi1 and Rethanaswamy Nadarajan2

1Department of Computer Technology and Applications, Coimbatore Institute of Technology, India

2Department of Applied Mathematics and Computational Sciences, PSG College of Technology, India

Abstract: Adapting to dynamic changes in user needs or in the environment is considered one of the important quality attributes of a system in a pervasive or ubiquitous environment. An aspect-oriented framework that modularizes the dynamic changes using aspects is considered as a solution for creating dynamically adaptable systems. This framework allows the system to reflect dynamic changes on the associated components through aspects without altering the structure of the components. For evaluating the adaptability of this framework, a new adaptability metric is proposed using the principles of coupling. In this work, coupling is defined as the Conceptual coupling Between Aspects and Classes (CBAC), which represents the semantic association, at the architecture level, between the aspects that represent dynamic changes and the components associated with those changes. The adaptability efficiency of the system, that is, its ability to reflect dynamic changes on the components associated with those changes, is measured using the proposed conceptual coupling metric. Based on the measurements, it is concluded that the adaptability efficiency of the system increases as the coupling between aspects and components increases. The proposed CBAC metric is evaluated and demonstrated by measuring the adaptability to dynamic changes in the requirements of various software systems.
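One way to operationalize such a metric is as the ratio of realized aspect-to-class couplings to all possible ones; this formulation is only an illustration, since the paper defines CBAC semantically at the architecture level.

    def cbac(aspects, classes, couplings):
        # couplings: set of (aspect, cls) pairs in which the aspect that
        # models a dynamic change is semantically associated with the class.
        # Higher CBAC implies higher adaptability efficiency.
        possible = len(aspects) * len(classes)
        return len(couplings) / possible if possible else 0.0

    print(cbac({"Logging", "Adaptation"}, {"Sensor", "Display"},
               {("Adaptation", "Sensor"), ("Adaptation", "Display")}))   # 0.5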

Keywords: Software adaptability, modularization, aspect-oriented approach, dynamic changes, adaptability metric, coupling metric.

Received February 11, 2013; accepted May 6, 2013
