• List of Articles


      • Open Access Article

        1 - Early Detection of Pediatric Heart Disease by Automated Spectral Analysis of Phonocardiogram
        Azra Rasouli Kenari
        Early recognition of heart disease is an important goal in pediatrics. Developing countries have a large population of children living with undiagnosed heart murmurs; as a result of an accompanying skills shortage, most of these children will not get the necessary treatment. Given that heart auscultation remains the dominant method of heart examination in the small health centers of rural areas, and generally in primary healthcare setups, enhancing this technique would aid significantly in the diagnosis of heart disease. The detection of murmurs from phonocardiographic recordings is an interesting problem that has been addressed before using a wide variety of techniques. We designed a system for automatically detecting systolic murmurs due to a variety of conditions. This could provide health care providers in developing countries with tools to screen large numbers of children without the need for expensive equipment or specialist skills. For this purpose, an algorithm was designed and tested to detect heart murmurs in digitally recorded signals. Cardiac auscultatory examinations of 93 children were recorded, digitized, and stored along with corresponding echocardiographic diagnoses, and automated spectral analysis using discrete wavelet transforms was performed. Patients without heart disease and either no murmur or an innocent murmur (n = 40) were compared to patients with a variety of cardiac diagnoses and a pathologic systolic murmur present (n = 53). A specificity of 100% and a sensitivity of 90.57% were achieved using signal processing techniques and a k-nearest-neighbor (k-NN) classifier.
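The pipeline the abstract describes — wavelet-based spectral features fed to a k-NN classifier — can be sketched in miniature. This is not the paper's algorithm: the one-level Haar transform below stands in for the discrete wavelet transform actually used, and all signals, feature values, and labels are illustrative.

```python
def haar_step(signal):
    """One level of the Haar wavelet transform: pairwise averages and differences."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def band_energies(signal, levels=2):
    """Energy of the detail band at each decomposition level (a toy feature vector)."""
    feats = []
    for _ in range(levels):
        signal, detail = haar_step(signal)
        feats.append(sum(d * d for d in detail))
    return feats

def knn_predict(train, query, k=3):
    """Majority vote among the k nearest labeled feature vectors (Euclidean)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda fl: dist(fl[0], query))[:k]
    labels = [lab for _, lab in nearest]
    return max(set(labels), key=labels.count)
```

In use, each recorded heart sound would be reduced to `band_energies(...)` and classified against echocardiographically labeled training examples.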
      • Open Access Article

        2 - Active Steganalysis of Transform Domain Steganography Based on Sparse Component Analysis
        Hamed Modaghegh, Seyed Alireza Seyedin
        This paper presents a new active steganalysis method to break transform domain steganography. Most steganalysis techniques focus on detecting the presence or absence of a secret message in a cover (passive steganalysis), but in some cases we need to extract or estimate the hidden message (active steganalysis). Although estimating the message is important, there has been little research in this area. A new active steganalysis method based on the Sparse Component Analysis (SCA) technique is presented in this work. Here, the sparsity of the cover image and the hidden message is used to extract the hidden message from the stego image. In our method, transform domain steganography is formulated mathematically as a linear combination of sparse sources, and therefore active steganalysis can be posed as an SCA problem. The feasibility of solving this SCA problem is confirmed using linear programming methods. A fast algorithm is then introduced to decrease the computational cost of steganalysis without much loss of accuracy. The accuracy of the new method has been confirmed in experiments on a variety of transform domain steganography schemes. These experiments show that, compared to previous active steganalysis methods, our method not only reduces the error rate but also decreases the computational cost.
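The core idea — modeling the stego signal as a linear combination of sparse sources and recovering the sparsest explanation — can be illustrated with a toy sparse-recovery step. The paper solves this with linear programming; as a lightweight stand-in, the sketch below uses a greedy matching-pursuit iteration, and the dictionary and measurement values are purely illustrative.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(columns, y, n_atoms=2):
    """Greedy sparse recovery of s in y = A s: repeatedly pick the dictionary
    column (assumed unit-norm) most correlated with the residual.
    A stand-in for the LP-based sparse recovery used in SCA steganalysis."""
    residual = list(y)
    coeffs = {}
    for _ in range(n_atoms):
        j = max(range(len(columns)), key=lambda i: abs(dot(columns[i], residual)))
        c = dot(columns[j], residual)
        coeffs[j] = coeffs.get(j, 0.0) + c
        residual = [r - c * a for r, a in zip(residual, columns[j])]
    return coeffs, residual
```

With a sparse hidden message, only a few coefficients come out nonzero; those would correspond to the estimated message components.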
      • Open Access Article

        3 - A Robust Statistical Color Edge Detection for Noisy Images
        Mina Alibeigi, Niloofar Mozafari, Zohre Azimifar, Mahnaz Mahmoodian
        Edge detection is a fundamental tool that plays a significant role in image processing, and the performance of high-level tasks such as image segmentation and object recognition depends on its efficiency. Therefore, edge detection is one of the well-studied areas in image processing and computer vision. However, accurate edge map generation is clearly more difficult when images are corrupted with noise. Moreover, most edge detection methods have parameters that must be set manually. In recent years, different approaches have been used to address these problems. Here we propose a new color edge detector based on a statistical test, which is robust to noise. In addition, the parameters of this method are set automatically based on image content. To show the effectiveness of the proposed method, four state-of-the-art edge detectors are implemented and the results are compared. Experimental results on five of the most well-known edge detection benchmarks show that the proposed method is robust to noise. For lower levels of noise, its performance is comparable to that of the existing approaches, whose performance highly depends on their parameter tuning stage. For higher levels of noise, however, the results significantly highlight the superiority of the proposed method over the existing edge detection methods, both quantitatively and qualitatively.
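The idea of a statistical edge test — declaring an edge where two neighborhoods differ significantly relative to their variance, which gives noise robustness — can be sketched in one dimension. The paper works on color images and sets its threshold automatically from image content; here the data is a 1-D grayscale row, the statistic is a simple Welch-style ratio, and the threshold is fixed, purely for illustration.

```python
def mean_var(xs):
    """Mean and (population) variance of a window."""
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

def edge_points(row, w=3, thresh=4.0):
    """Mark pixel i as an edge when the w-pixel windows to its left and right
    have significantly different means, scaled by their pooled variance."""
    edges = []
    for i in range(w, len(row) - w + 1):
        m1, v1 = mean_var(row[i - w:i])
        m2, v2 = mean_var(row[i:i + w])
        t = (m2 - m1) / ((v1 / w + v2 / w + 1e-9) ** 0.5)
        if abs(t) > thresh:
            edges.append(i)
    return edges
```

Because the mean difference is normalized by the local variance, moderate pixel noise inflates the denominator rather than triggering spurious edges.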
      • Open Access Article

        4 - Ant Colony Scheduling for Network-on-Chip
        Neda Dousttalab, Mohammad Ali Jabraeil Jamali, Behnam Talebi
        The operation scheduling problem in network-on-chip is NP-hard; therefore, effective heuristic methods are needed to provide near-optimal solutions. This paper introduces ant colony scheduling, a simple and effective method to increase allocator matching efficiency and hence network performance, particularly suited to networks with complex topologies and asymmetric traffic patterns. The proposed algorithm has been studied in torus and flattened-butterfly topologies with multiple types of traffic patterns. Evaluation results show that, in many cases, this algorithm reduces network delays and increases chip performance in comparison with other algorithms.
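The general ant-colony mechanism behind such a scheduler — ants build candidate assignments biased by pheromone and cost, pheromone evaporates, and the best solution deposits — can be sketched on a toy task-to-resource problem. This is a generic illustration, not the paper's allocator: the cost matrix, parameters, and pheromone rule are all illustrative choices.

```python
import random

def aco_schedule(costs, ants=20, iters=30, rho=0.5, seed=0):
    """Tiny ant-colony sketch: costs[t][r] is the cost of putting task t on
    resource r. Each ant assigns every task, guided by pheromone tau and
    inverse cost; after each iteration pheromone evaporates (rate rho) and
    the best-so-far assignment deposits pheromone."""
    rng = random.Random(seed)
    n_tasks, n_res = len(costs), len(costs[0])
    tau = [[1.0] * n_res for _ in range(n_tasks)]
    best, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            assign, cost = [], 0.0
            for t in range(n_tasks):
                weights = [tau[t][r] / (1.0 + costs[t][r]) for r in range(n_res)]
                r = rng.choices(range(n_res), weights=weights)[0]
                assign.append(r)
                cost += costs[t][r]
            if cost < best_cost:
                best, best_cost = assign, cost
        for t in range(n_tasks):  # evaporation, then deposit along the best path
            for r in range(n_res):
                tau[t][r] *= (1 - rho)
            tau[t][best[t]] += 1.0 / (1.0 + best_cost)
    return best, best_cost
```

In an on-chip allocator the "tasks" would be pending requests and the "resources" output ports, with pheromone steering the matching toward historically good pairings.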
      • Open Access Article

        5 - A Fast and Accurate Sound Source Localization Method using Optimal Combination of SRP and TDOA Methodologies
        Mohammad Ranjkesh Eskolaki, Reza Hasanzadeh
        This paper presents an automatic sound source localization approach based on a combination of a basic time delay estimation method, Time Difference of Arrival (TDOA), with Steered Response Power (SRP) methods. The TDOA method is fast but vulnerable when locating a sound source over long distances and in reverberant environments, and it is sensitive to noise. The conventional SRP method, on the other hand, is time consuming but accurately finds the sound source location in noisy and reverberant environments. Another SRP-based method, SRP Phase Transform (SRP-PHAT), has also been suggested for better noise robustness and more accurate sound source localization. In this paper, two approaches based on the combination of TDOA and SRP-based methods are proposed for sound source localization. In the first, named Classical TDOA-SRP, the TDOA method is used to find the approximate sound source direction, and SRP-based methods are then used to find the accurate location of the sound source within the Field of View (FOV) obtained through the TDOA method. In the second, named Optimal TDOA-SRP, a new criterion is proposed for finding the effective FOV obtained through the TDOA method, further reducing the computational time of the SRP-based methods and improving noise robustness. Experiments carried out under different conditions confirm the validity of the proposed approaches.
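The TDOA stage at the heart of both approaches reduces to estimating the inter-microphone delay from the peak of a cross-correlation. The minimal time-domain sketch below illustrates just that step; the paper's SRP and SRP-PHAT variants extend it with a steered power search and phase-transform weighting, which are omitted here, and the impulse signals are illustrative.

```python
def estimate_delay(sig_a, sig_b, max_lag):
    """Return the lag (in samples) by which sig_b trails sig_a, found as the
    peak of the time-domain cross-correlation over [-max_lag, max_lag]."""
    def xcorr(lag):
        return sum(sig_a[i] * sig_b[i + lag]
                   for i in range(len(sig_a)) if 0 <= i + lag < len(sig_b))
    return max(range(-max_lag, max_lag + 1), key=xcorr)
```

Given the estimated delay and the microphone spacing, the source direction follows from simple geometry; an SRP method would instead sum such correlations over many candidate source positions.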
      • Open Access Article

        6 - Better Performance of the New Generation of Digital Video Broadcasting-Terrestrial (DVB-T2) Using the Alamouti Scheme with Cyclic Delay Diversity
        Behnam Akbarian, Saeed Ghazi-Maghrebi
        The goal of the future terrestrial digital video broadcasting (DVB-T) standard is to employ diversity and spatial multiplexing in order to achieve the full multiple-input multiple-output (MIMO) channel capacity. The DVB-T2 standard targets an improvement in system throughput of at least 30% over DVB-T. DVB-T2 enhances performance using improved coding methods, modulation techniques, and multiple antenna technologies. After a brief presentation of the antenna diversity technique and its properties, we show that the well-known Alamouti decoding scheme cannot simply be used over frequency-selective channels. In other words, the Alamouti space-frequency coding in DVB-T2 provides additional diversity, but performance degrades in highly frequency-selective channels, because the channel frequency response is not necessarily flat over the entire Alamouti block code. The objective of this work is to present an enhanced Alamouti space-frequency block decoding scheme for MIMO and orthogonal frequency-division multiplexing (OFDM) systems using delay diversity techniques over highly frequency-selective channels. We also investigate the properties of the proposed scheme over different channels. Specifically, we show that the Alamouti scheme with Cyclic Delay Diversity (CDD) performs better over certain channels. We then apply this scheme, as an example, to the DVB-T2 system. Simulation results confirm that the proposed scheme achieves a lower bit error rate (BER), especially at high SNRs, than the standard Alamouti decoder over highly frequency-selective channels such as single frequency networks (SFN). Furthermore, the new scheme allows high reliability and tolerance. Other advantages of the proposed method are its simplicity, flexibility, and standards compatibility with respect to conventional methods.
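The baseline that the paper builds on — Alamouti block coding with linear combining, which works exactly when the channel is flat across the code block — can be sketched for a 2×1 system. This shows only the textbook Alamouti step under the flat-channel assumption the abstract says breaks down; the cyclic-shift part of CDD is not shown, and the symbol and channel values are illustrative.

```python
def alamouti_encode(s1, s2):
    """Alamouti block over two slots: antenna 1 sends (s1, -s2*), antenna 2 (s2, s1*)."""
    return (s1, -s2.conjugate()), (s2, s1.conjugate())

def alamouti_decode(r1, r2, h1, h2):
    """Linear combining for a single receive antenna with flat channels h1, h2:
    r1 = h1*s1 + h2*s2 (slot 1), r2 = -h1*s2* + h2*s1* (slot 2)."""
    s1_hat = h1.conjugate() * r1 + h2 * r2.conjugate()
    s2_hat = h2.conjugate() * r1 - h1 * r2.conjugate()
    gain = abs(h1) ** 2 + abs(h2) ** 2
    return s1_hat / gain, s2_hat / gain
```

When the two subcarriers of an Alamouti pair see different channel responses — the highly frequency-selective SFN case — these combining equations no longer cancel the cross terms, which is the degradation the proposed CDD-based decoder targets.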
      • Open Access Article

        7 - Online Signature Verification: A Robust Approach for Persian Signatures
        Mohammad Esmaeel Yahyatabar, Yasser Baleghi, Mohammad Reza Karami-Mollaei
        In this paper, the specific traits of Persian signatures are applied to signature verification. Efficient features that can discriminate among Persian signatures are investigated in this approach. Persian signatures, in comparison with signatures in other languages, have more curvature and end in a specific style. Usually, Persian signatures have special characteristics in terms of speed, acceleration, and pen pressure during the drawing of curves. An experiment was designed to determine the most robust features of Persian signatures, and its results are then used in the feature extraction stage. To improve verification performance, a combination of shape-based and dynamic extracted features is applied to Persian signature verification. To classify these signatures, a Support Vector Machine (SVM) is applied. The proposed method is examined on two common Persian datasets, a new Persian dataset proposed in this paper (the Noshirvani Dynamic Signature Dataset), and an international dataset (SVC2004). For the three Persian datasets, the EER values are 3, 3.93, and 4.79, while for SVC2004 the EER value is 4.43.
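The dynamic-feature stage the abstract mentions — deriving speed and acceleration from the pen trajectory — can be sketched from raw (x, y, t) samples. This is an illustrative fragment only: pen pressure and the shape-based features the paper combines, as well as the SVM stage, are omitted, and the sample points are made up.

```python
def dynamic_features(points):
    """Per-segment speed and speed change from an online signature given as
    (x, y, t) samples. Returns (speeds, accelerations)."""
    speeds = []
    for (x0, y0, t0), (x1, y1, t1) in zip(points, points[1:]):
        dt = (t1 - t0) or 1e-9  # guard against repeated timestamps
        speeds.append(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt)
    accels = [b - a for a, b in zip(speeds, speeds[1:])]
    return speeds, accels
```

In a full pipeline, statistics of these sequences would be concatenated with shape-based descriptors to form the vector handed to the SVM classifier.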
      • Open Access Article

        8 - Computing Semantic Similarity of Documents Based on Semantic Tensors
        Navid Bahrami, Amir H. Jadidinejad, Mojdeh Nazari
        Exploiting the semantic content of texts, due to its wide range of applications such as finding documents related to a query, document classification, and computing the semantic similarity of documents, has always been an important and challenging issue in Natural Language Processing. In this paper, using the Wikipedia corpus and organizing it in a three-dimensional tensor structure, a novel corpus-based approach for computing the semantic similarity of texts is proposed. For this purpose, the semantic vectors of the words in a document are first obtained from the vector space derived from the words in Wikipedia articles; the semantic vector of the document is then formed from its word vectors. Consequently, the semantic similarity of documents can be measured by comparing their semantic vectors. The vector space of the Wikipedia corpus suffers from the curse of dimensionality because of its high-dimensional vectors: vectors in a high-dimensional space tend to be very similar to each other, making it meaningless to single out the most appropriate semantic vector for a word. Therefore, the proposed approach mitigates the curse of dimensionality by reducing the vector space dimensions through random indexing. Reducing the dimensions in this way also significantly improves the memory consumption of the proposed approach. Handling synonymous and polysemous words becomes feasible in the proposed approach by means of the structured co-occurrence captured through random indexing.
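The random indexing step can be sketched compactly: each context (here, a document standing in for a Wikipedia article) gets a sparse random index vector, a word's semantic vector is the sum of the index vectors of the contexts it occurs in, and document similarity reduces to cosine similarity of the accumulated vectors. The dimension, sparsity, and toy corpus below are illustrative choices, not the paper's settings.

```python
import random

def index_vector(dim, nnz, rng):
    """Sparse ternary random index vector: +1/-1 at nnz random positions."""
    v = [0.0] * dim
    for pos in rng.sample(range(dim), nnz):
        v[pos] = rng.choice([-1.0, 1.0])
    return v

def build_word_vectors(docs, dim=100, nnz=4, seed=0):
    """Random indexing: a word's semantic vector is the sum of the index
    vectors of the documents it appears in."""
    rng = random.Random(seed)
    doc_vecs = [index_vector(dim, nnz, rng) for _ in docs]
    words = {}
    for dv, doc in zip(doc_vecs, docs):
        for w in set(doc.split()):
            acc = words.setdefault(w, [0.0] * dim)
            for i, x in enumerate(dv):
                acc[i] += x
    return words

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = (sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5)
    return dot / (norm or 1e-12)
```

Because the index vectors are fixed-dimension regardless of corpus size, the vector space never grows with the number of Wikipedia articles, which is the memory and dimensionality benefit the abstract describes.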