• OpenAccess
    • List of Articles: Optimization

      • Open Access Article

        1 - Multimodal Biometric Recognition Using Particle Swarm Optimization-Based Selected Features
        Sara Motamed, Ali Broumandnia, Azam Sadat Nourbakhsh
        Feature selection is one of the key optimization problems in human recognition: it reduces the number of features, removes noise and redundant data from the images, and yields a high recognition rate. This step affects the performance of a human recognition system. This paper presents a multimodal biometric verification system based on two traits, palm and ear, a topic that has emerged as one of the most extensively studied research areas spanning multiple disciplines such as pattern recognition, signal processing, and computer vision. We also present a novel feature selection algorithm based on Particle Swarm Optimization (PSO). PSO is a computational paradigm based on the idea of collaborative behavior, inspired by the social behavior of bird flocking and fish schooling. In this method, we use two feature extraction techniques: the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT). The identification process can be divided into the following phases: capturing the image; pre-processing; extracting and normalizing the palm and ear images; feature extraction; matching and fusion; and finally, a decision based on PSO and GA classifiers. The system was tested on a database of 60 people (240 palm and 180 ear images). Experimental results show that the PSO-based feature selection algorithm generates excellent recognition results with a minimal set of selected features.
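As a rough illustration of the binary-PSO feature-selection idea, the following sketch selects a feature subset on synthetic data. The palm/ear features, the DCT/DWT stage, and the paper's actual fitness function are not reproduced; the toy fitness used here is an assumption.

```python
import math, random

random.seed(0)

def fitness(mask, data, labels):
    """Toy fitness: average class-mean separation over the selected
    features, minus a small penalty per selected feature."""
    idx = [i for i, m in enumerate(mask) if m]
    if not idx:
        return 0.0
    sep = 0.0
    n0, n1 = labels.count(0), labels.count(1)
    for i in idx:
        m0 = sum(x[i] for x, y in zip(data, labels) if y == 0) / n0
        m1 = sum(x[i] for x, y in zip(data, labels) if y == 1) / n1
        sep += abs(m0 - m1)
    return sep / len(idx) - 0.01 * len(idx)

def binary_pso(data, labels, n_particles=10, iters=30):
    """Binary PSO: velocities are real-valued, positions are re-sampled
    as 0/1 through a sigmoid of the velocity."""
    dim = len(data[0])
    pos = [[random.randint(0, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = max(pos, key=lambda m: fitness(m, data, labels))[:]
    for _ in range(iters):
        for p in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[p][d] += (2.0 * r1 * (pbest[p][d] - pos[p][d])
                              + 2.0 * r2 * (gbest[d] - pos[p][d]))
                prob = 1.0 / (1.0 + math.exp(-vel[p][d]))   # sigmoid
                pos[p][d] = 1 if random.random() < prob else 0
            if fitness(pos[p], data, labels) > fitness(pbest[p], data, labels):
                pbest[p] = pos[p][:]
            if fitness(pbest[p], data, labels) > fitness(gbest, data, labels):
                gbest = pbest[p][:]
    return gbest

# Synthetic data: feature 0 separates the classes, features 1-3 are noise.
data = [[1.0, 0.5, 0.5, 0.5]] * 5 + [[0.0, 0.5, 0.5, 0.5]] * 5
labels = [1] * 5 + [0] * 5
best = binary_pso(data, labels)
```

On this toy data the swarm settles on the single discriminative feature, since the per-feature penalty makes larger subsets score worse.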

        2 - A Robust Data Envelopment Analysis Method for Business and IT Alignment of Enterprise Architecture Scenarios
        Mehdi Fasanghari, Mohsen Sadegh Amalnick, Reza Taghipour Anvari, Jafar Razmi
        Information Technology is recognized as a competitive enabler in today’s dynamic business environment. Therefore, the alignment of business and Information Technology processes is critical, and it is strongly emphasized in Information Technology governance frameworks. On the other hand, Enterprise Architectures are deployed to steer organizations toward their objectives while remaining responsive to change. Thus, it is proposed to align business and Information Technology by investigating the suitability of Enterprise Architecture scenarios. In view of this fact, a flexible decision-making method for business and Information Technology alignment analysis is necessary but not sufficient, since subjective analysis is always perturbed by some degree of uncertainty. Therefore, we have developed a new robust Data Envelopment Analysis technique designed for Enterprise Architecture scenario analysis. Several numerical experiments and a sensitivity analysis are designed to show the performance, significance, and flexibility of the proposed method in a real case.

        3 - Optimal Sensor Scheduling Algorithms for Distributed Sensor Networks
        Behrooz Safarinejadian, Abdolah Rahimi
        In this paper, a sensor network is used to estimate the dynamic states of a system. At each time step, one (or more) sensors are available that can send their measured data to a central node, where all of the processing is done. We want to provide an optimal algorithm for scheduling sensor selection at every time step. Our goal is to select the appropriate sensor so as to reduce computations, optimize energy consumption, and enhance the network lifetime. To achieve this goal, we must reduce the estimation error covariance. Three algorithms are used in this work: sliding window, thresholding, and random selection. Moreover, we offer a new algorithm based on circular (round-robin) selection. Finally, a novel algorithm for selecting multiple sensors is proposed. The performance of the proposed algorithms is illustrated with numerical examples.
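The idea of choosing, at each step, the sensor that most reduces the error covariance can be sketched for a scalar state with a one-dimensional Kalman update; the model and noise values below are assumptions, not the paper's, and a greedy policy is compared with the circular (round-robin) one.

```python
# Toy scalar system: x[k+1] = A*x[k] + w,  sensor i measures with variance R[i].
A, Q = 0.9, 0.1            # state transition and process-noise variance
R = [0.5, 1.0, 2.0]        # measurement-noise variance of each sensor

def kalman_variance(p, r):
    """Posterior variance after one predict step and one update with noise r."""
    p_pred = A * A * p + Q
    return p_pred * r / (p_pred + r)

def schedule(policy, steps=20, p0=1.0):
    """Run a sensor-selection policy; returns chosen sensors and final variance."""
    p, chosen = p0, []
    for k in range(steps):
        if policy == "greedy":
            # pick the sensor that minimizes the posterior variance
            i = min(range(len(R)), key=lambda i: kalman_variance(p, R[i]))
        else:                          # "circular": round-robin selection
            i = k % len(R)
        p = kalman_variance(p, R[i])
        chosen.append(i)
    return chosen, p

greedy_choice, p_greedy = schedule("greedy")
circular_choice, p_circ = schedule("circular")
```

For a scalar state the greedy rule always picks the least-noisy sensor, so it lower-bounds the round-robin policy's final variance; with vector states and energy constraints the trade-off becomes the scheduling problem the abstract describes.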

        4 - PSO-Algorithm-Assisted Multiuser Detection for Multiuser and Inter-symbol Interference Suppression in CDMA Communications
        Atefeh Haji Jamali Arani, Paeez Azmi
        The particle swarm optimization (PSO) algorithm has become a widespread heuristic technique in many fields of engineering. In this paper, we apply the PSO algorithm in additive white Gaussian noise (AWGN) and multipath fading channels. In the proposed method, the PSO algorithm is applied to the joint multiuser and inter-symbol interference (ISI) suppression problem in code-division multiple-access (CDMA) systems over a multipath Rayleigh fading channel, and consequently reduces the computational complexity. In the first stage, a conventional detector (CD) is employed to initialize the PSO algorithm. Then, time-varying acceleration coefficients (TVAC) are used in the PSO algorithm. The simulation results indicate that the performance of PSO-based multiuser detection (MUD) with TVAC is promising and outperforms the CD.
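The TVAC schedule mentioned above is typically a linear ramp of the cognitive and social coefficients over the iterations; a minimal sketch on a plain sphere objective follows (the CDMA detector itself is not modeled, and the coefficient range 2.5 to 0.5 is an assumption).

```python
import random
random.seed(1)

def tvac(t, t_max, c_start, c_end):
    """Linear time-varying acceleration coefficient."""
    return c_start + (c_end - c_start) * t / t_max

def pso_tvac(dim=3, n=15, iters=60):
    f = lambda x: sum(v * v for v in x)          # sphere objective
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pos, key=f)[:]
    for t in range(iters):
        c1 = tvac(t, iters, 2.5, 0.5)            # cognitive: high -> low
        c2 = tvac(t, iters, 0.5, 2.5)            # social:    low -> high
        w = 0.7                                  # inertia weight
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
            if f(pbest[i]) < f(gbest):
                gbest = pbest[i][:]
    return gbest, f(gbest)

best, best_val = pso_tvac()
```

Ramping c1 down and c2 up favors exploration early (particles trust their own best) and exploitation late (particles converge on the global best).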

        5 - Analysis and Evaluation of Techniques for Myocardial Infarction Based on Genetic Algorithm and Weight by SVM
        Hojatallah Hamidi, Atefeh Daraei
        Although the death rate from Myocardial Infarction is decreasing in developed countries, it has become the leading cause of death in developing countries. Data mining approaches can be utilized to predict the occurrence of Myocardial Infarction. Because of the side effects of using angioplasty as the main method for diagnosing Myocardial Infarction, presenting a method for diagnosing MI before its occurrence is very important. This study investigates prediction models for Myocardial Infarction by applying a feature selection model based on Weight by SVM and a genetic algorithm. In our proposed method, a hybrid feature selection method is applied to improve the performance of the classification algorithm. In the first stage, features are selected based on their weights, using Weight by Support Vector Machine. In the second stage, the selected features are given to a genetic algorithm for final selection. After selecting appropriate features, eight classification methods, including Sequential Minimal Optimization, REPTree, Multi-layer Perceptron, Random Forest, K-Nearest Neighbors, and Bayesian Network, are applied to predict the occurrence of Myocardial Infarction. The best accuracy among the applied classification algorithms was achieved by Multi-layer Perceptron and Sequential Minimal Optimization.

        6 - Data Aggregation Tree Structure in Wireless Sensor Networks Using Cuckoo Optimization Algorithm
        Elham Mohsenifard, Behnam Talebi
        Wireless sensor networks (WSNs) consist of numerous tiny sensors and can be regarded as a robust tool for collecting and aggregating data in different environments. The energy of these small sensors is supplied by a battery with limited power which cannot be recharged. Certain approaches are therefore needed so that the power of the sensors can be utilized efficiently and optimally. One notable approach for reducing energy consumption in WSNs is to decrease the number of packets transmitted in the network. Using data aggregation, the mass of data which must be transmitted can be remarkably reduced. One related method is the data aggregation tree. However, finding the optimal tree for data aggregation in networks with a single base station is an NP-hard problem. In this paper, using the cuckoo optimization algorithm (COA), a data aggregation tree is proposed which can optimize energy consumption in the network. The proposed method was compared with the genetic algorithm (GA), Power Efficient Data gathering and Aggregation Protocol - Power Aware (PEDAPPA), and the energy efficient spanning tree (EESR). The results of simulations conducted in MATLAB indicate that the proposed method performs better than the GA, PEDAPPA, and EESR algorithms in terms of energy consumption. Consequently, the proposed method is able to enhance network lifetime.

        7 - A Hybrid Cuckoo Search for Direct Blockmodeling
        Saeed NasehiMoghaddam, Mehdi Ghazanfari, Babak Teimourpour
        As a way of simplifying, reducing the size of, and making sense of the structure of a social network, blockmodeling consists of two major, essential components: partitioning actors into equivalence classes, called positions, and clarifying relations between and within positions. Actors can be partitioned into positions in various ways, and the ties between and within positions can be represented by density matrices, image matrices, and reduced graphs. While actor partitioning in classic blockmodeling is performed via several equivalence definitions, such as structural and regular equivalence, generalized blockmodeling uses a local optimization procedure to search for the partition vector that best satisfies a predetermined image matrix. The need for a known, predefined social structure, and the use of a local search procedure to find the best partition vector fitting that predefined image matrix, restrict generalized blockmodeling. In this paper, we formulate the blockmodeling problem and employ a genetic algorithm to search for the partition vector that best fits the original relational data in terms of known indices. In addition, across multiple samples and various settings, such as dichotomous, signed, ordinal, or interval-valued relations, and multiple relations, the quality of the results shows better fitness to the original relational data than solutions reported by researchers in the classic, generalized, and stochastic blockmodeling fields.
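The density- and image-matrix component described above can be sketched as follows: given a candidate partition of actors into positions, block densities are computed and binarized with a cutoff. The cutoff value is an assumption, and the genetic search over partitions is omitted.

```python
def image_matrix(adj, partition, alpha=0.5):
    """adj: 0/1 adjacency matrix; partition[i]: position of actor i.
    Returns (density matrix, binarized image matrix)."""
    k = max(partition) + 1
    ones = [[0] * k for _ in range(k)]          # ties present per block
    size = [[0] * k for _ in range(k)]          # possible ties per block
    n = len(adj)
    for i in range(n):
        for j in range(n):
            if i == j:                          # ignore self-ties
                continue
            ones[partition[i]][partition[j]] += adj[i][j]
            size[partition[i]][partition[j]] += 1
    dens = [[ones[a][b] / size[a][b] if size[a][b] else 0.0
             for b in range(k)] for a in range(k)]
    return dens, [[1 if d >= alpha else 0 for d in row] for row in dens]

# Two positions: actors {0,1} send ties to {2,3} and not among themselves.
adj = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
dens, img = image_matrix(adj, [0, 0, 1, 1])
```

A search procedure (genetic or local) would score candidate partitions by how closely `dens` matches a pure 0/1 block pattern.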

        8 - Hybrid Task Scheduling Method for Cloud Computing by Genetic and PSO Algorithms
        Amin Kamalinia, Ali Ghaffari
        Cloud computing makes it possible for users to use different applications through the internet without having to install them. It is considered a novel technology aimed at handling and providing online services. For enhancing efficiency in cloud computing, appropriate task scheduling techniques are needed. Due to the limitations and heterogeneity of resources, the issue of scheduling is highly complicated. Hence, an appropriate scheduling method can have a significant impact on reducing makespan and enhancing resource efficiency. Inasmuch as task scheduling in cloud computing is an NP-complete problem, traditional heuristic algorithms used in task scheduling do not have the required efficiency in this context. With regard to the shortcomings of the traditional heuristic algorithms used in job scheduling, the majority of researchers have recently focused on hybrid meta-heuristic methods for task scheduling. In this cutting-edge research domain, we used the HEFT (Heterogeneous Earliest Finish Time) algorithm to propose a hybrid meta-heuristic method in which the genetic algorithm (GA) and particle swarm optimization (PSO) are combined. The results of simulation and statistical analysis indicate that the proposed algorithm, when compared with three other heuristic algorithms and a memetic algorithm, optimizes the makespan required for executing tasks.
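A simplified stand-in for the HEFT idea, scheduling independent tasks on heterogeneous machines by earliest finish time, is sketched below; task dependencies and the GA/PSO refinement stage are omitted, and the cost table is invented for illustration.

```python
def eft_schedule(cost):
    """cost[t][m] = runtime of task t on machine m.
    Tasks are ranked by average cost (longest first), then each is placed on
    the machine where it finishes earliest. Returns (assignment, makespan)."""
    n_machines = len(cost[0])
    ready = [0.0] * n_machines                   # next free time per machine
    order = sorted(range(len(cost)),
                   key=lambda t: -sum(cost[t]) / n_machines)
    assign = {}
    for t in order:
        m = min(range(n_machines), key=lambda m: ready[m] + cost[t][m])
        ready[m] += cost[t][m]
        assign[t] = m
    return assign, max(ready)

# Three tasks, two machines; machine 1 is much faster for task 2.
cost = [[3.0, 4.0],
        [2.0, 3.0],
        [4.0, 1.0]]
assign, makespan = eft_schedule(cost)
```

A hybrid scheme of the kind the abstract describes would seed its GA/PSO population with this schedule and then search for permutations and placements that shrink the makespan further.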

        9 - A Two-Stage Multi-Objective Enhancement for Fused Magnetic Resonance Image and Computed Tomography Brain Images
        Leena Chandrashekar A, Sreedevi Asundi
        Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) are the imaging techniques for detection of Glioblastoma. However, a single imaging modality is never adequate to validate the presence of the tumor. Moreover, each imaging technique represents a different characteristic of the brain, so experts have to analyze each of the images independently; this requires more expertise and delays detection and diagnosis. Multimodal image fusion is a process of generating an image of high visual quality by fusing different images. However, it introduces blocking effects, noise, and artifacts in the fused image. Most enhancement techniques deal with contrast enhancement; however, enhancing image quality in terms of edges, entropy, and peak signal-to-noise ratio is also significant. Contrast Limited Adaptive Histogram Equalization (CLAHE) is a widely used enhancement technique. Its major drawback is that it only enhances pixel intensities and also requires selection of operational parameters such as clip limit, block size, and distribution function. Particle Swarm Optimization (PSO) is used here to choose the CLAHE parameters, based on a multi-objective fitness function representing the entropy and edge information of the image. The proposed technique improves the visual quality of Laplacian Pyramid fused MRI and CT images.
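The multi-objective fitness used to score CLAHE parameter candidates can be sketched as a combination of image entropy and an edge measure; the equal weights and the simple gradient-based edge measure below are assumptions, not the paper's exact formulation.

```python
import math

def entropy(img):
    """Shannon entropy of the grey-level histogram."""
    hist, n = {}, 0
    for row in img:
        for v in row:
            hist[v] = hist.get(v, 0) + 1
            n += 1
    return -sum((c / n) * math.log2(c / n) for c in hist.values())

def edge_energy(img):
    """Mean absolute horizontal + vertical gradient (simple edge measure)."""
    h, w = len(img), len(img[0])
    total, cnt = 0, 0
    for i in range(h):
        for j in range(w):
            if j + 1 < w:
                total += abs(img[i][j + 1] - img[i][j]); cnt += 1
            if i + 1 < h:
                total += abs(img[i + 1][j] - img[i][j]); cnt += 1
    return total / cnt

def fitness(img, w_entropy=0.5, w_edge=0.5):
    """Score an enhanced image: higher entropy and stronger edges are better."""
    return w_entropy * entropy(img) + w_edge * edge_energy(img)

flat  = [[100, 100], [100, 100]]                 # no detail at all
sharp = [[0, 255], [255, 0]]                     # strong edges, 2 grey levels
```

A PSO loop would evaluate this fitness on the CLAHE output for each candidate (clip limit, block size, distribution) particle and keep the best-scoring setting.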

        10 - Evaluation of Pattern Recognition Techniques in Response to Cardiac Resynchronization Therapy (CRT)
        Mohammad Nejadeh, Peyman Bayat, Jalal Kheirkhah, Hassan Moladoust
        Cardiac resynchronization therapy (CRT) improves cardiac function in patients with heart failure (HF); the result of this treatment is a decrease in the death rate and improved quality of life for patients. This research aims at predicting CRT response for the prognosis of patients with heart failure under CRT. According to international guidelines, on confirmation of QRS prolongation and a decrease in ejection fraction (EF), the patient is recognized as a candidate for device implantation. However, given the many intervening and effective factors, decision making can be based on more variables. Computer-based decision-making systems, especially machine learning (ML), are considered a promising approach given their significant track record in medical prediction. Collective intelligence approaches, such as the particle swarm optimization (PSO) algorithm, are used for prioritizing the medical decision-making variables. This investigation was done on 209 patients and the data was collected over 12 months. In the HESHMAT CRT center, 17.7% of patients did not respond to treatment. Recognizing the dominant parameters by combining machine recognition and the physician’s viewpoint, and introducing an error back-propagation neural network algorithm to decrease classification error, are the most important achievements of this research. In this research, an analytical set of individual, clinical, and laboratory variables, echocardiography, and electrocardiography (ECG) is related to patients’ response to CRT. Prediction of the response to CRT becomes possible with the support of a set of tools, algorithms, and variables.

        11 - Using Static Information of Programs to Partition the Input Domain in Search-based Test Data Generation
        Atieh Monemi Bidgoli, Hassan Haghighi
        The quality of test data has an important effect on the fault-revealing ability of software testing. Search-based test data generation reformulates testing goals as fitness functions so that test data generation can be automated by meta-heuristic algorithms. Meta-heuristic algorithms search the domain of input variables in order to find input data that cover the targets. The domain of input variables is very large, even for simple programs, and its size has a major influence on the efficiency and effectiveness of all search-based methods. Despite the large volume of work on search-based test data generation, the literature contains few approaches that consider the impact of search-space reduction. In order to partition the input domain, this study defines a relationship between the structure of the program and the input domain. Based on this relationship, we propose a method for partitioning the input domain. Then, to search the partitioned space, we select ant colony optimization, one of the most important and successful meta-heuristic algorithms. To evaluate the performance of the proposed approach against previous work, we selected a number of different benchmark programs. The experimental results show that our approach achieves 14.40% better average coverage than the competing approach.

        12 - Improvement of Firefly Algorithm using Particle Swarm Optimization and Gravitational Search Algorithm
        Mahdi Tourani
        Evolutionary algorithms are among the most powerful algorithms for optimization; the firefly algorithm (FA) is a nature-inspired member of this family. It is an easily implementable, robust, simple, and flexible technique. On the other hand, integrating this algorithm with other algorithms can improve its performance. Particle Swarm Optimization (PSO) and the Gravitational Search Algorithm (GSA) are suitable and effective for integration with FA: some of their mechanisms can help FA search faster and more intelligently. In one version of GSA, selecting the K best particles with the largest masses and examining their effect on the other masses greatly helps in reaching the optimal answer faster and more accurately. Likewise, in PSO the candidate solutions are guided by the local best and global best positions toward the optimal answer. These operators, combined with the firefly algorithm, can improve its search performance. This paper provides models for improving the firefly algorithm using GSA and PSO operators. For this purpose, five scenarios are defined and their models are simulated using MATLAB. Finally, the results show that the performance of the introduced models is better than that of the standard firefly algorithm.
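One hybridization idea from the abstract, a firefly move augmented with a PSO-style global-best attraction term, can be sketched as follows; all coefficient values are assumptions and the objective is a plain sphere function rather than any benchmark from the paper.

```python
import math, random
random.seed(2)

def firefly_gbest(dim=2, n=12, iters=50, beta0=1.0, gamma=1.0, alpha=0.2, c=0.5):
    f = lambda x: sum(v * v for v in x)          # objective to minimize
    pop = [[random.uniform(-4, 4) for _ in range(dim)] for _ in range(n)]
    gbest = min(pop, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f(pop[j]) < f(pop[i]):        # move i toward brighter j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)   # attractiveness
                    for d in range(dim):
                        pop[i][d] += (beta * (pop[j][d] - pop[i][d])
                                      + c * random.random() * (gbest[d] - pop[i][d])
                                      + alpha * (random.random() - 0.5))
            if f(pop[i]) < f(gbest):
                gbest = pop[i][:]
    return gbest, f(gbest)

best, best_val = firefly_gbest()
```

The extra `gbest` term is the PSO ingredient: it gives every firefly a pull toward the best solution found so far, on top of the standard pairwise attraction.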

        13 - An Automatic Thresholding Approach to Gravitation-Based Edge Detection in Grey-Scale Images
        Hamed Agahi, Kimia Rezaei
        This paper presents an optimal auto-thresholding approach for the gravitational edge detection method in grey-scale images. The goal of this approach is to enhance the performance measures of the edge detector in clean and noisy conditions. To this aim, an optimal threshold is found automatically, according to which the proposed method dichotomizes the pixels into edges and non-edges. First, some pre-processing operations are applied to the image. Then, the vector sum of the gravitational forces applied to each pixel by its neighbors is computed according to the universal law of gravitation. Afterwards, the force magnitude is mapped to a new characteristic called the force feature. Following this, the histogram representation of this feature is determined, for which an optimal threshold is to be discovered. Three thresholding techniques are proposed, two of which contain iterative processes. The parameters of the formulation used in these techniques are adjusted by means of the metaheuristic grasshopper optimization algorithm. To evaluate the proposed system, two standard databases were used and multiple qualitative and quantitative measures were utilized. The results confirm that our methodology outperforms some conventional and recent detectors, achieving an average precision of 0.894 on the BSDS500 dataset. Moreover, the outputs have high similarity to the ideal edge maps.

        14 - Reducing Energy Consumption in Sensor-Based Internet of Things Networks Based on Multi-Objective Optimization Algorithms
        Mohammad Sedighimanesh, Hessam Zandhessami, Mahmood Alborzi, Mohammadsadegh Khayyatian
        Energy is an important parameter in establishing various types of communication in the sensor-based IoT. Sensors usually possess low-energy, non-rechargeable batteries, since they are often deployed in places and applications where recharging is impossible. The most important objective of the present study is to minimize the energy consumption of sensors and increase the IoT network's lifetime by applying multi-objective optimization algorithms when selecting cluster heads and routing between cluster heads for transferring data to the base station. In the present article, after distributing the sensor nodes in the network, a type-2 fuzzy algorithm is employed to select the cluster heads, and a genetic algorithm is used to create a tree between the cluster heads and the base station. After the cluster heads are selected, the normal nodes become cluster members and send their data to their cluster head. After the cluster heads collect and aggregate the data, they transfer it to the base station along the path specified by the genetic algorithm. The proposed algorithm was implemented in the MATLAB simulator and compared with the LEACH, MB-CBCCP, and DCABGA protocols; the simulation results indicate the better performance of the proposed algorithm in different environments. Since energy in the sensor-based IoT is limited and batteries cannot be recharged in most applications, the use of multi-objective optimization algorithms in the design and implementation of routing and clustering algorithms has a significant impact on the lifetime of these networks.

        15 - A Threshold-based Brain Tumour Segmentation from MR Images using Multi-Objective Particle Swarm Optimization
        Katkoori Arun Kumar, Ravi Boda
        In single-objective Particle Swarm Optimization (SO-PSO) problems the Pareto optimal solution is unique, as the emphasis is on the decision variable space. This paper introduces a multi-objective optimization technique, Multi-Objective Particle Swarm Optimization (MO-PSO), for image segmentation. MO-PSO extends the principle of optimization by facilitating the simultaneous optimization of several objectives, and it is used in solving various image processing problems such as image segmentation and image enhancement. Here the technique is used to detect tumours of the human brain on MR images. To obtain the threshold, the suggested algorithm uses two fitness (objective) functions: image entropy and image variance. These two objective functions are distinct from each other and are simultaneously optimized to create a sequence of Pareto-optimal solutions. The global best (Gbest) obtained from MO-PSO is treated as the threshold. The MO-PSO technique, tested on various MRI images, demonstrates its efficiency through experimental findings. In terms of the best, worst, mean, median, and standard deviation parameters, the MO-PSO technique is also contrasted with the existing single-objective PSO (SO-PSO) technique. Experimental results show that MO-PSO is 28% better than SO-PSO for the 'best' parameter with reference to the image entropy function and 92% more accurate than SO-PSO with reference to the image variance function.
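The two objectives named above, entropy and variance of a thresholded split, together with a Pareto filter, can be sketched as follows; the swarm itself is omitted and candidate thresholds are simply enumerated, so only the objective/dominance machinery is illustrated.

```python
import math

def objectives(pixels, t):
    """(entropy of the binary split, between-class variance) for threshold t."""
    lo = [p for p in pixels if p < t]
    hi = [p for p in pixels if p >= t]
    n = len(pixels)
    ent = 0.0
    for part in (lo, hi):
        if part:
            q = len(part) / n
            ent -= q * math.log2(q)
    var = 0.0
    if lo and hi:
        mu = sum(pixels) / n
        for part in (lo, hi):
            m = sum(part) / len(part)
            var += len(part) / n * (m - mu) ** 2
    return ent, var

def pareto_front(points):
    """Keep points not dominated (another point >= in both objectives)."""
    front = []
    for p in points:
        if not any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in points):
            front.append(p)
    return front

# A toy bimodal "image": dark background and a bright tumour-like region.
pixels = [10, 12, 11, 200, 210, 205, 198]
scored = {t: objectives(pixels, t) for t in range(0, 256, 16)}
e, v = objectives(pixels, 100)
```

In MO-PSO each particle's threshold is scored on both objectives at once, and the swarm maintains the non-dominated set instead of a single best value.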

        16 - A New High-Capacity Audio Watermarking Based on Wavelet Transform using the Golden Ratio and TLBO Algorithm
        Ali Zeidi Joudaki, Marjan Abdeyazdan, Mohammad Mosleh, Mohammad Kheyrandish
        Digital watermarking is one of the best solutions for preventing copyright infringement, illegal copying, and illegal distribution of digital media, and for data verification. Recently, the protection of digital audio signals has received much attention as a fascinating topic for researchers and scholars. In this paper, we present a new high-capacity, transparent, and robust audio watermarking scheme based on the synergy of the DWT and the advantages of the golden ratio, using the TLBO algorithm. We use the TLBO algorithm to determine the effective frame length and embedding range, and the golden ratio to determine the appropriate embedding locations within each frame. First, the main audio signal is decomposed into several sub-bands using a DWT over a specific frequency range. Since the human auditory system is not sensitive to changes in high-frequency bands, we embed the watermark bits in these sub-bands to increase the transparency and capacity of the scheme. Moreover, to increase resistance to common attacks, we frame the high-frequency band and use the average of the frames as a key value. Our main idea is to embed eight bits simultaneously in each frame of the host signal. Experimental results show that the proposed method is free from significant noticeable distortion (SNR about 29.68 dB) and increases resistance to common signal processing attacks such as high-pass filtering, echo, resampling, MPEG compression (MP3), etc.
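One way a golden-ratio rule can spread embedding positions over a frame is as a low-discrepancy index sequence; the sketch below is an assumption about the location rule only, and the DWT stage, the TLBO search, and the paper's actual embedding formula are not reproduced.

```python
PHI = (1 + 5 ** 0.5) / 2                 # golden ratio, about 1.618

def golden_positions(frame_len, n_bits):
    """n_bits distinct sample indices in [0, frame_len), generated by the
    golden-ratio (low-discrepancy) sequence frac(k * (PHI - 1))."""
    pos, used = [], set()
    k = 1
    while len(pos) < n_bits:
        idx = int((k * (PHI - 1)) % 1.0 * frame_len)
        if idx not in used:
            used.add(idx)
            pos.append(idx)
        k += 1
    return pos

def embed_bits(frame, bits, delta=0.01):
    """Toy quantization-style embedding of one bit per chosen sample."""
    out = frame[:]
    for b, i in zip(bits, golden_positions(len(frame), len(bits))):
        out[i] = round(out[i] / delta) * delta + (delta / 2 if b else 0.0)
    return out

frame = [0.0] * 32
marked = embed_bits(frame, [1, 0, 1, 1, 0, 1, 0, 1])   # 8 bits per frame
```

Because consecutive golden-ratio fractions never cluster, the eight embedded samples are spread nearly uniformly across the frame, which limits localized distortion.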

        17 - Statistical Analysis and Comparison of the Performance of Meta-Heuristic Methods Based on their Powerfulness and Effectiveness
        Mehrdad Rohani, Hassan Farsi, Seyed Hamid Zahiri
        In this paper, the performance of meta-heuristic algorithms is compared using statistical analysis based on new criteria: powerfulness and effectiveness. Due to the large number of meta-heuristic methods reported so far, choosing among them has always been challenging for researchers; in fact, the user does not know which of these methods can solve their complex problem. In this paper, new criteria are proposed in order to compare the performance of several methods from different categories of meta-heuristic methods. Using these criteria, the user is able to choose an effective method for their problem. For this reason, statistical analysis is conducted on each of these methods to clarify their fields of application for users. The powerfulness and effectiveness criteria are defined to compare the performance of the meta-heuristic methods and to provide a suitable basis and suitable quantitative parameters for this purpose. The results of these criteria clearly show the ability of each method on different applications and problems.

        18 - ARASP: An ASIP Processor for Automated Reversible Logic Synthesis
        Zeinab Kalantari, Marzieh Gerami, Mohammad Eshghi
        Reversible logic has emerged as a promising computing paradigm for designing low-power circuits in recent years. The synthesis of reversible circuits is very different from that of non-reversible circuits, and many researchers are studying methods for synthesizing reversible combinational logic. Optimization algorithms are used in some automated reversible logic synthesis techniques. In these methods, the process of finding a circuit for a given function is very time-consuming, so it is better to design a processor which speeds up the synthesis. Application-specific instruction set processors (ASIPs) can combine the advantages of both custom ASIC chips and general DSP chips. In this paper, a new architecture for automated reversible logic synthesis based on an application-specific instruction set processor is presented. The essential purpose of the design is to provide programmability together with the specific instructions necessary for automated reversible synthesis. Our proposed processor, which we refer to as ARASP, is a 16-bit processor with a total of 47 instructions, some of which are dedicated to the automated synthesis of reversible circuits. ARASP is specialized for the automated synthesis of reversible circuits using genetic optimization algorithms. All major components of the design are comprehensively discussed within the processor core. The instruction set is specified fully in Register Transfer Language. Afterwards, VHDL code is used to test the proposed architecture.
      • Open Access Article

        19 - A Hybrid Approach based on PSO and Boosting Technique for Data Modeling in Sensor Networks
        Hadi Shakibian Jalaledin Nasiri
        An efficient data aggregation approach in wireless sensor networks (WSNs) is to abstract the network data into a model, and regression modeling has been addressed in many recent studies in this regard. If the limited resources of the sensor nodes were not a concern, a common regression technique could be employed after transmitting all the network data from the sensor nodes to the fusion center. However, this is neither practical nor efficient. To overcome this issue, several distributed methods have been proposed for WSNs in which the regression problem is formulated as an optimization-based data modeling problem. Although these methods are more energy-efficient than the centralized approach, their latency and prediction accuracy still need improvement. In this paper, a new approach based on the particle swarm optimization (PSO) algorithm is proposed. Assuming a clustered network, the PSO algorithm is first employed asynchronously to learn the network model of each cluster; every cluster model is learned according to the size and data pattern of that cluster. Afterwards, the boosting technique is applied to achieve better accuracy. Experimental results show that the proposed asynchronous distributed PSO brings up to a 48% reduction in energy consumption. Moreover, the boosted model improves the prediction accuracy by about 9% on average.
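The paper's asynchronous, clustered PSO and its boosting stage are not specified in the abstract, but the core step — one cluster learning a regression model by PSO over its own readings — can be sketched with plain PSO. The swarm parameters and the synthetic data below are illustrative assumptions:

```python
# Minimal sketch of one cluster's model-learning step: plain PSO searching
# for the (slope, intercept) of a linear model that minimizes mean squared
# error over the cluster's sensor readings. Swarm settings and data are
# illustrative assumptions, not values from the paper.

import random

random.seed(0)

# Synthetic cluster data: readings following y = 2x + 1.
data = [(x / 10.0, 2.0 * (x / 10.0) + 1.0) for x in range(20)]

def cost(w):
    """Mean squared error of the linear model y = w[0]*x + w[1]."""
    return sum((w[0] * x + w[1] - y) ** 2 for x, y in data) / len(data)

n_particles, dim, iters = 30, 2, 200
w_inertia, c1, c2 = 0.7, 1.5, 1.5   # common PSO settings

pos = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(n_particles)]
vel = [[0.0] * dim for _ in range(n_particles)]
pbest = [p[:] for p in pos]
pbest_cost = [cost(p) for p in pos]
g = min(range(n_particles), key=lambda i: pbest_cost[i])
gbest, gbest_cost = pbest[g][:], pbest_cost[g]
initial_cost = gbest_cost

for _ in range(iters):
    for i in range(n_particles):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w_inertia * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        c = cost(pos[i])
        if c < pbest_cost[i]:
            pbest[i], pbest_cost[i] = pos[i][:], c
            if c < gbest_cost:
                gbest, gbest_cost = pos[i][:], c

print("best cost:", round(gbest_cost, 6))
```

In the paper's setting each cluster runs such a search on its own data, and boosting then fits additional models to the residuals of the first; here only the single-cluster PSO step is shown.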
      • Open Access Article

        20 - Mathematical Modeling of Flow Control Mechanism in Wireless Network-on-Chip
        Fardad Rad Marzieh Gerami
        Network-on-chip (NoC) is an effective interconnection solution for multicore chips. In recent years, wireless interfaces (WIs) have been used in NoCs to reduce the delay and power consumption between long-distance cores. This new communication structure is called a wireless network-on-chip (WiNoC). Compared with wired links, the demand for the shared wireless links leads to congestion in WiNoCs, which increases the average packet latency as well as the overall network latency. An efficient flow control mechanism therefore has a great impact on the efficiency and performance of WiNoCs. In this paper, a flow control mechanism for WiNoCs based on mathematical modeling is investigated. First, the flow control problem is modeled as a utility-based optimization problem constrained by the wireless bandwidth capacity and the flow rates of the processing cores. Next, the initial problem is transformed into an unconstrained dual problem, and the optimal solution of the dual problem is obtained by the gradient projection method. Finally, an iterative algorithm is proposed to control the flow rate of each core in a WiNoC. Simulation results on synthetic traffic patterns show that the proposed algorithm regulates the flow rate of each core with acceptable convergence; hence, the network throughput is significantly improved.
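The dual decomposition the abstract describes follows the standard network-utility-maximization pattern: a link price is updated by projected gradient ascent on the dual, and each source sets its rate from that price. A sketch for a single shared wireless channel follows; the log utility, capacity, and step size are assumptions, not values from the paper:

```python
# Sketch of the dual gradient-projection iteration for one shared wireless
# link: maximize sum(log(x_i)) subject to sum(x_i) <= C. The link price
# lam is updated by projected gradient ascent on the dual problem; each
# core then sets its rate to the utility-maximizing x_i = 1/lam.
# Capacity, step size, and log utility are illustrative assumptions.

n_cores = 4        # cores sharing the wireless link
capacity = 10.0    # shared wireless bandwidth C
step = 0.01        # gradient step size
lam = 1.0          # initial link price (dual variable)

for _ in range(500):
    # Each core's best response to the current price:
    # argmax_x [log(x) - lam * x]  =>  x = 1 / lam.
    rates = [1.0 / lam] * n_cores
    # Price update: gradient ascent on the dual, projected onto lam >= 0.
    lam = max(1e-6, lam + step * (sum(rates) - capacity))

# At the optimum all cores share the capacity equally: x_i = C/n = 2.5.
print([round(r, 3) for r in rates])  # [2.5, 2.5, 2.5, 2.5]
```

With log utilities this fixed point is the proportionally fair allocation; the paper's algorithm applies the same price-update idea per wireless link in the WiNoC.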
      • Open Access Article

        21 - A Novel Elite-Oriented Meta-Heuristic Algorithm: Qashqai Optimization Algorithm (QOA)
        Mehdi Khadem Abbas Toloie Eshlaghy Kiamars Fathi Hafshejani
        Optimization problems are becoming more complicated, and their resource requirements are rising. Real-life optimization problems are often NP-hard and time- or memory-consuming. Nature has always been an excellent model from which humans can extract the best mechanisms and engineering to solve their problems. The concept of optimization is seen in several natural processes, such as species evolution, swarm intelligence, social group behavior, the immune system, mating strategies, reproduction and foraging, and animals' cooperative hunting behavior. This paper proposes a new meta-heuristic algorithm for solving NP-hard nonlinear optimization problems, inspired by the intelligent, social, and collaborative migration behavior that the Qashqai nomads have refined over many years. The design of the algorithm uses population-based features, expert opinions, and other mechanisms to improve its performance in reaching the global optimum. The performance of the algorithm was tested on well-known optimization test functions and on factory facility layout problems. In many cases, the proposed algorithm outperformed other well-known meta-heuristic algorithms in terms of convergence speed and solution quality. The algorithm is named the Qashqai algorithm in honor of the Qashqai nomads, the famous tribes of southwest Iran.
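The abstract says QOA was evaluated on "well-known optimization test functions" without naming them. Two benchmarks standard in the meta-heuristic literature, and therefore plausible (though assumed) choices for such a comparison, can be sketched as:

```python
# Two standard benchmark functions commonly used to compare meta-heuristics.
# Both have their global minimum of 0 at the origin: Sphere is smooth and
# unimodal, Rastrigin is highly multimodal and punishes algorithms that
# converge prematurely. Their use here is an assumption — the abstract
# only says "well-known optimization test functions".

import math

def sphere(x):
    """f(x) = sum(x_i^2); global minimum 0 at x = 0."""
    return sum(v * v for v in x)

def rastrigin(x):
    """f(x) = 10n + sum(x_i^2 - 10 cos(2 pi x_i)); global minimum 0 at x = 0."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

print(sphere([0.0, 0.0]))                      # 0.0
print(round(rastrigin([0.0, 0.0]), 9))         # 0.0
print(rastrigin([1.0, 1.0]) > sphere([1.0, 1.0]))  # True: many local minima
```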
      • Open Access Article

        22 - Proposing an FCM-MCOA Clustering Approach Stacked with Convolutional Neural Networks for Analysis of Customers in Insurance Company
        Motahareh Ghavidel Meisam Yadollahzadeh Tabari Mehdi Golsorkhtabaramiri
        To create a customer-based marketing strategy, it is necessary to analyze customer data properly so that customers can be segmented or their future behavior predicted. Customer datasets in any business are usually high-dimensional, have many instances, and include both supervised and unsupervised data. For this reason, companies today try to satisfy their customers as much as possible, which requires careful consideration of customers from several aspects. Data mining algorithms are among the practical methods businesses use to extract the required knowledge from customers' demographic and behavioral data. This paper presents a hybrid clustering algorithm that combines the Fuzzy C-Means (FCM) method with the Modified Cuckoo Optimization Algorithm (MCOA). Since customer data analysis plays a key role in ensuring a company's profitability, The Insurance Company (TIC) dataset is utilized for the experiments and performance evaluation. We compare the convergence of the proposed FCM-MCOA approach with conventional optimization methods such as the Genetic Algorithm (GA) and Invasive Weed Optimization (IWO). Moreover, we propose a customer classifier based on Convolutional Neural Networks (CNNs). Simulation results reveal that FCM-MCOA converges faster than conventional clustering methods. In addition, the CNN-based classifier achieves an accuracy of more than 98% and converges after only a few iterations, showing fast convergence compared with conventional classifiers such as Decision Tree (DT), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Naive Bayes (NB).
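The FCM half of the proposed hybrid alternates two closed-form updates: fuzzy memberships from point-to-center distances, and centers as membership-weighted means. A minimal sketch on 1-D toy data follows; the MCOA step that the paper couples with FCM is omitted, and the data, initial centers, fuzzifier, and iteration count are assumptions:

```python
# Minimal sketch of the Fuzzy C-Means (FCM) component of the hybrid:
# alternating membership and center updates with fuzzifier m = 2 on
# 1-D toy data. The MCOA search over centers is omitted; all numeric
# choices below are illustrative assumptions.

data = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]   # two well-separated 1-D clusters
centers = [0.0, 6.0]                     # deterministic initial centers
m = 2.0                                  # fuzzifier

for _ in range(50):
    # Membership update: u[i][k] is inversely proportional to
    # d(x_k, c_i)^(2/(m-1)), normalized over the centers.
    u = []
    for c_i in centers:
        row = []
        for x in data:
            d_i = max(abs(x - c_i), 1e-12)   # guard against zero distance
            denom = sum((d_i / max(abs(x - c_j), 1e-12)) ** (2 / (m - 1))
                        for c_j in centers)
            row.append(1.0 / denom)
        u.append(row)
    # Center update: mean of the data weighted by u^m.
    centers = [sum((u[i][k] ** m) * data[k] for k in range(len(data)))
               / sum(u[i][k] ** m for k in range(len(data)))
               for i in range(len(centers))]

print([round(c, 2) for c in sorted(centers)])  # ≈ [1.0, 5.0]
```

In the paper's hybrid, MCOA steers the center positions globally while FCM's updates refine them locally, which is what the reported faster convergence compares against GA- and IWO-driven variants.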