List of Articles (by Subject): Image Processing
Open Access Article
1 - Accurate Fire Detection System for Various Environments using Gaussian Mixture Model and HSV Space
Khosro Rezaee, Seyed Jalaleddin Mousavirad, Mohammad Rasegh Ghezelbash, Javad Haddadnia
Smart and timely detection of fire can be very useful in coping with this phenomenon and inhibiting it. This paper addresses fire detection by enhancing several image analysis methods: converting the RGB image to HSV, smart selection of the threshold for fire separation, a Gaussian mixture model, and forming a polygon over the enclosed area resulting from edge detection combined with the original image. Accuracy, precision and rapid detection of fire are among the features that distinguish the proposed system from similar fire detection systems such as the Markov model, GM, DBFIR and other algorithms introduced in the literature. The average accuracy (95%) obtained from testing 35,000 frames of different fire environments and the high sensitivity (96%) are quite significant. Not only can this system be regarded as a reliable, suitable alternative to the sensor sets used in residential areas, but its high-speed image processing and accurate detection of fire over wide areas also make it low cost, reliable and appropriate.
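As an illustration of the pipeline this abstract outlines, the sketch below combines HSV color thresholding with a Gaussian-mixture background model, assuming OpenCV; the HSV bounds and MOG2 settings are invented for illustration, not the paper's tuned values.

```python
import cv2
import numpy as np

# GMM background model (MOG2); history length is an assumed setting.
back_sub = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def fire_candidates(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Loose "flame-like" hue/saturation/value range (illustrative assumption).
    color_mask = cv2.inRange(hsv, (0, 80, 180), (35, 255, 255))
    motion_mask = back_sub.apply(frame_bgr)          # GMM foreground
    candidates = cv2.bitwise_and(color_mask, motion_mask)
    # Fill the enclosed area of detected contours, akin to the polygon forming
    # step described in the abstract.
    contours, _ = cv2.findContours(candidates, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    out = np.zeros_like(candidates)
    cv2.drawContours(out, contours, -1, 255, thickness=cv2.FILLED)
    return out
```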
Open Access Article
2 - Low Distance Airplanes Detection and Tracking Visually using Spectral Residual and KLT Composition
Mohammad Anvaripour, Sima Soltanpour
This paper presents a method for detecting and tracking airplanes that can be observed visually at low distances from sensors. Such aircraft are widely used, for instance in military or unmanned aerial vehicle (UAV) applications, because of their ability to hide from radar signals; however, they can be detected and viewed by the human eye. Vision-based methods are low cost and robust against jamming signals, so visual approaches to airplane detection are essential. We therefore propose spectral residual saliency for airplane detection and the KLT algorithm for tracking. This approach is a hybrid of two distinct methods that have been presented by researchers and used widely for detecting or tracking specific objects. To obtain accurate detection, the image intensity is adjusted adaptively. Correctly detected airplanes are obtained by eliminating overly long optical flow trajectories in the image frames. The proposed method is analyzed and evaluated through comparison with state-of-the-art approaches. The experimental results show the power of our approach in detecting multiple airplanes, except when they become too small in the presence of other objects. We test our approach on a useful database presented by other researchers.
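The two building blocks named in this abstract can be sketched as follows, assuming OpenCV and NumPy; the spectral-residual saliency follows the standard Hou-Zhang recipe, and the kernel sizes are illustrative assumptions.

```python
import cv2
import numpy as np

def spectral_residual_saliency(gray):
    f = np.fft.fft2(gray.astype(np.float32))
    log_amp = np.log1p(np.abs(f))          # log amplitude (log1p avoids log 0)
    phase = np.angle(f)
    # Spectral residual = log amplitude minus its local average.
    residual = log_amp - cv2.blur(log_amp, (3, 3))
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = cv2.GaussianBlur(saliency, (9, 9), 2.5)
    return cv2.normalize(saliency, None, 0, 1, cv2.NORM_MINMAX)

def klt_track(prev_gray, next_gray, points):
    # points: float32 array of shape (N, 1, 2), e.g. from goodFeaturesToTrack.
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                   points, None)
    return next_pts[status.ravel() == 1]
```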
Open Access Article
3 - Design of Fall Detection System: A Dynamic Pattern Approach with Fuzzy Logic and Motion Estimation
Khosro Rezaee, Javad Haddadnia
Every year thousands of the elderly suffer serious injuries, such as joint fractures, broken bones and even death, due to falls. Automatic detection of abnormal walking, especially accidents such as falls among the elderly, based on image processing techniques and computer vision can support an efficient system whose implementation in various contexts enables us to monitor people's movements. This paper proposes a new algorithm which, drawing on fuzzy rules for the classification of movements as well as motion estimation, allows rapid processing of the input data. At the testing stage, a large number of video frames from the CASIA and CAVIAR databases, together with samples of falls of the elderly recorded at Sabzevar's Mother Nursing Home, were used. The results show that the mean absolute percentage error (MAPE), root-mean-square deviation (RMSD) and standard deviation of error (SDE) were at acceptable levels. The main shortcoming of other systems is that the elderly need to wear bulky clothing, and if they forget to do so, they cannot declare their situation at the time of a fall. Compared to similar techniques, implementing the proposed system in nursing homes and residential areas allows real-time, intelligent monitoring of people.
Open Access Article
4 - Fast Automatic Face Recognition from Single Image per Person Using GAW-KNN
Hassan Farsi, Mohammad Hasheminejad
Real-time face recognition systems have several limitations, such as in collecting features. One training sample per target means fewer feature extraction techniques are available. To obtain acceptable accuracy, most face recognition algorithms need more than one training sample per target; with a single training sample, recognition accuracy drops dramatically because of head rotation and variation in illumination. In this paper, a new hybrid face recognition method using a single image per person is proposed, which is robust against illumination variations. To achieve robustness against head variations, a rotation detection and compensation stage is added. The method, called Weighted Graphs and PCA (WGPCA), uses the harmony of face components to extract and normalize features, and a genetic algorithm with a training set learns the most useful features and the real-valued weights associated with individual attributes in the features. The k-nearest neighbor algorithm classifies new faces based on their weighted features against the templates of the training set. Each template contains the corrected distances (graphs) of different points on the face components and the result of Principal Component Analysis (PCA) applied to the output of the face detection rectangle. The proposed hybrid algorithm is trained in MATLAB to determine the best features and their associated weights, and is then implemented in the Delphi XE2 programming environment to recognize faces in real time. The main advantage of this algorithm is its capability of recognizing a face from only one picture in real time. The results obtained on the FERET database show the accuracy and effectiveness of the proposed algorithm.
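The classification stage can be illustrated with a minimal weighted k-NN sketch in NumPy; the attribute weights stand in for the genetically learned ones, and all names are illustrative.

```python
import numpy as np

def weighted_knn_predict(train_X, train_y, weights, query, k=3):
    # Per-attribute weights stretch or shrink each feature axis before the
    # Euclidean distance is taken (learned by a genetic algorithm in GAW-KNN).
    diffs = (train_X - query) * weights
    dists = np.sqrt((diffs ** 2).sum(axis=1))
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]      # majority vote among k neighbours
```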
Open Access Article
5 - Cover Selection Steganography Via Run Length Matrix and Human Visual System
Sara Nazari, Mohammad Shahram Moin
A novel approach for steganography cover selection is proposed, based on image texture features and the human visual system. The proposed algorithm employs the run-length matrix to select a set of appropriate images from an image database and creates their stego versions after the embedding process. It then computes the similarity between the original images and their stego versions, using structural similarity as the image quality metric, and selects as the best cover the image with maximum similarity to its stego. Comparisons of the proposed cover selection algorithm with other steganography methods confirm that it increases stego quality. We also evaluated the robustness of our algorithm against steganalysis methods such as wavelet-based and block-based steganalysis; the experimental results show that the proposed approach decreases the risk of message-hiding detection.
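A minimal sketch of the selection criterion, assuming scikit-image's SSIM; the `embed` callable stands in for the run-length-guided embedding, which is not reproduced here.

```python
import numpy as np
from skimage.metrics import structural_similarity

def select_cover(candidates, embed, message):
    """Return the candidate whose stego version is most similar to it."""
    best, best_ssim = None, -1.0
    for cover in candidates:              # candidates: list of uint8 arrays
        stego = embed(cover, message)     # assumed embedding routine
        score = structural_similarity(cover, stego, data_range=255)
        if score > best_ssim:
            best, best_ssim = cover, score
    return best, best_ssim
```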
Open Access Article
6 - Pose-Invariant Eye Gaze Estimation Using Geometrical Features of Iris and Pupil Images
Mohammad Reza Mohammadi, Abolghasem Asadollah Raie
In cases of severe paralysis in which the ability to control body movements is limited to the muscles around the eyes, eye movements or blinks are the only way for the person to communicate. Interfaces that assist in such communication often require special hardware or rely on active infrared illumination. In this paper, we propose a non-intrusive algorithm for eye gaze estimation that works with video input from an inexpensive camera and without special lighting. The main contribution of this paper is a new geometrical model of the eye region that requires the image of only one iris for gaze estimation. The essential parameters for this system are the best-fitted ellipse of the iris and the pupil center. The algorithms used for both iris ellipse fitting and pupil center localization impose no prior assumptions on the head pose. All in all, the achievement of this paper is the robustness of the proposed system to head pose variations. The performance of the method has been evaluated on both synthetic and real images, leading to errors of 2.12 and 3.48 degrees, respectively.
Open Access Article
7 - Digital Video Stabilization System by Adaptive Fuzzy Kalman Filtering
Mohammad Javad Tanakian, Mehdi Rezaei, Farahnaz Mohanna
Digital video stabilization (DVS) allows video sequences to be acquired without disturbing jerkiness by removing unwanted camera movements. A good DVS should remove the unwanted camera movements while maintaining the intentional ones. In this article, we propose a novel DVS algorithm that compensates for camera jitter by applying an adaptive fuzzy filter to the global motion of video frames. The adaptive fuzzy filter is a Kalman filter tuned by a fuzzy system adaptively to the camera motion characteristics. The fuzzy system is also tuned during operation according to the amount of camera jitter, and uses two inputs which are quantitative representations of the unwanted and the intentional camera movements. Since motion estimation is a computation-intensive operation, the global motion of video frames is estimated from the block motion vectors produced by the video encoder during its motion estimation operation. Furthermore, the proposed method utilizes an adaptive criterion for filtering and validation of motion vectors. Experimental results indicate good performance for the proposed algorithm.
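The filtering core can be sketched as a 1-D constant-velocity Kalman filter whose measurement noise is retuned per frame; `fuzzy_R` below is an assumed stand-in for the paper's fuzzy tuning, and all noise values are illustrative.

```python
import numpy as np

def smooth_motion(measurements, fuzzy_R):
    """Smooth one component of accumulated global motion (e.g. x-shift)."""
    x = np.zeros(2)                        # state: [position, velocity]
    P = np.eye(2)
    F = np.array([[1.0, 1.0], [0.0, 1.0]])  # constant-velocity model
    H = np.array([[1.0, 0.0]])
    Q = 0.01 * np.eye(2)                   # process noise (assumed)
    smoothed = []
    for k, z in enumerate(measurements):
        x = F @ x                          # predict
        P = F @ P @ F.T + Q
        R = fuzzy_R(k)                     # fuzzy-tuned measurement noise
        K = P @ H.T / (H @ P @ H.T + R)    # Kalman gain
        x = x + (K * (z - H @ x)).ravel()  # update
        P = (np.eye(2) - K @ H) @ P
        smoothed.append(x[0])              # intentional-motion estimate
    return np.array(smoothed)
```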
Open Access Article
8 - Image Retrieval Using Color-Texture Features Extracted From Gabor-Walsh Wavelet Pyramid
Sajad Mohammadzadeh, Hassan Farsi
Image retrieval is one of the most applicable and extensively used image processing techniques. Feature extraction is one of the most important procedures for interpreting and indexing images in Content-Based Image Retrieval (CBIR) systems. Effective storage, indexing and management of large image collections are critical challenges in computer systems, and many methods have been proposed to address them; however, the rate and speed of image retrieval are still interesting fields of research. In this paper, we propose a new method based on the combination of a Gabor filter, the Walsh transform and a wavelet pyramid (GWWP). The crossover point (CP) of precision and recall is used as the metric to evaluate and compare different methods. The obtained results show that GWWP provides better performance compared to other methods.
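One ingredient, a Gabor filter-bank energy descriptor, can be sketched as below, assuming OpenCV; the Walsh transform and pyramid stages of GWWP are omitted, and all kernel parameters are illustrative.

```python
import cv2
import numpy as np

def gabor_energy(gray, thetas=(0, 45, 90, 135)):
    """Mean/std of Gabor responses at several orientations as a descriptor."""
    feats = []
    for t in thetas:
        kern = cv2.getGaborKernel((31, 31), sigma=4.0, theta=np.deg2rad(t),
                                  lambd=10.0, gamma=0.5)
        resp = cv2.filter2D(gray.astype(np.float32), -1, kern)
        feats += [resp.mean(), resp.std()]
    return np.array(feats)
```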
Open Access Article
9 - Assessment of Performance Improvement in Hyperspectral Image Classification Based on Adaptive Expansion of Training Samples
Maryam Imani
High-dimensional images in remote sensing applications allow us to analyze the surface of the earth in more detail. A relevant problem for supervised classification of hyperspectral images is the limited availability of labeled training samples, since their collection is generally expensive, difficult and time consuming. In this paper, we propose an adaptive method for improving the classification of hyperspectral images by expanding the training set. The presented approach utilizes high-confidence labeled pixels as training samples to re-estimate classifier parameters. Semi-labeled samples are samples whose class labels are determined by a GML classifier. Samples whose discriminant function values are large enough are selected in an adaptive process, considered semi-labeled (pseudo-training) samples, and added to the training samples to train the classifier sequentially. The experimental results show that the proposed method can overcome the limitation of training samples in hyperspectral images and improve classification performance.
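The sample-expansion loop can be sketched as self-training with a Gaussian maximum-likelihood style classifier; scikit-learn's QDA plays that role here as an assumed stand-in, and the confidence threshold is illustrative.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def expand_training(X_lab, y_lab, X_unlab, rounds=3, tau=0.99):
    clf = QuadraticDiscriminantAnalysis()        # GML-style classifier
    for _ in range(rounds):
        clf.fit(X_lab, y_lab)
        proba = clf.predict_proba(X_unlab)
        pick = proba.max(axis=1) >= tau          # high-confidence pixels only
        if not pick.any():
            break
        X_lab = np.vstack([X_lab, X_unlab[pick]])
        y_lab = np.concatenate(
            [y_lab, clf.classes_[proba[pick].argmax(axis=1)]])
        X_unlab = X_unlab[~pick]                 # remove absorbed samples
    return clf
```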
Open Access Article
10 - Low Complexity Median Filter Hardware for Image Impulsive Noise Reduction
Hossein Zamani HosseinAbadi, Shadrokh Samavi, Nader Karimi
Median filters are commonly used to remove impulse noise from images. De-noising is a preliminary step in the online processing of images, so hardware implementation of median filters is of great interest. Hence, many methods, mostly based on sorting the pixels, have been developed to implement median filters. Using vast amounts of hardware resources and lack of speed are the two main disadvantages of these methods. In this paper a filtering method is proposed that reduces the required hardware elements. A modular pipelined median filter unit is first modeled, and the designed module is then used in a parallel structure. Since the image is fed in rows and in a parallel manner, the number of necessary hardware elements is reduced in comparison with other hardware implementations, while the filtering speed is increased. Implementation results show that the proposed method has advantageous speed and efficiency.
Open Access Article
11 - A New Robust Digital Image Watermarking Algorithm Based on LWT-SVD and Fractal Images
Fardin Akhlaghian Tab, Kayvan Ghaderi, Parham Moradi
This paper presents a robust copyright protection scheme based on the Lifting Wavelet Transform (LWT) and Singular Value Decomposition (SVD). We use fractal coding to make a very compact representation of the watermark image; the fractal code is presented as a binary image. In the embedding phase of the watermarking scheme, we first decompose the host image with the 2D-LWT, then apply SVD to the sub-bands of the transformed image, and embed the watermark, the binary image, by modifying the singular values. In the watermark extraction phase, after the reverse steps are applied, the embedded binary image, and consequently the fractal code, are extracted from the watermarked image. The original watermark image is rendered by running the code. To verify the validity of the proposed watermarking scheme, several experiments are carried out and the results are compared with those of other algorithms. Image quality is evaluated with the peak signal-to-noise ratio (PSNR), and the robustness of the proposed algorithm is measured with the NC coefficient. The experimental results indicate that, in addition to high transparency, the proposed scheme is strong enough to resist various signal processing operations, such as average filtering, median filtering, JPEG compression, contrast adjustment, cropping, histogram equalization and rotation.
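The embedding step can be sketched as below, assuming PyWavelets and NumPy; the classical DWT stands in for the paper's lifting implementation, and the strength `alpha` is an invented value.

```python
import numpy as np
import pywt

def embed(host, wm_bits, alpha=10.0):
    """Embed a binary watermark (0/1 array) into the LL sub-band via SVD."""
    LL, (LH, HL, HH) = pywt.dwt2(host.astype(float), 'haar')
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    # Additively modify the leading singular values with the watermark bits
    # (wm_bits must be no longer than S).
    S[:len(wm_bits)] += alpha * wm_bits
    LL_marked = U @ np.diag(S) @ Vt
    return pywt.idwt2((LL_marked, (LH, HL, HH)), 'haar')
```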
Open Access Article
12 - A Study on Clustering for Clustering Based Image De-noising
Hossein Bakhshi Golestani, Mohsen Joneidi, Mostafa Sadeghi
In this paper, the problem of de-noising an image contaminated with additive white Gaussian noise (AWGN) is studied. This subject has been an open problem in signal processing for more than 50 years. In the present paper, we suggest a method based on global clustering of the image's constituent blocks. As the type of clustering plays an important role in clustering-based de-noising methods, we address two questions about the clustering: first, which parts of the data should be considered for clustering? Second, what data clustering method is suitable for de-noising? Clustering is then exploited to learn an over-complete dictionary, and by obtaining a sparse decomposition of the noisy image blocks in terms of the dictionary atoms, the de-noised version is achieved. Experimental results show that our dictionary learning framework outperforms its competitors in terms of de-noising performance and execution time.
Open Access Article
13 - Facial Expression Recognition Using Texture Description of Displacement Image
Hamid Sadeghi, Abolghasem Asadollah Raie, Mohammad Reza Mohammadi
In recent years, facial expression recognition, an interesting problem in computer vision, has been performed by means of static and dynamic methods. Dynamic information plays an important role in recognizing facial expressions; however, using the entire dynamic information in an expression image sequence has a higher computational cost than static methods. To reduce the computational cost, only the neutral and emotional faces can be employed instead of the entire image sequence. In previous research, this idea was used in the DLBPHS method, in which small but important facial displacements were vanished by subtracting the LBP features of the neutral and emotional face images. In this paper, a novel approach is proposed to utilize the two face images. In the proposed method, the face component displacements are highlighted by subtracting the neutral image from the emotional image; then, LBP features are extracted from the difference image. The proposed method is evaluated on standard databases and the results show a significant accuracy improvement over DLBPHS.
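A minimal sketch of the proposed descriptor, assuming scikit-image: subtract the neutral face from the emotional one, then take a uniform-LBP histogram of the difference image. P, R and the rescaling are illustrative choices.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def displacement_lbp(neutral, emotional, P=8, R=1):
    # Difference image highlights face-component displacements.
    diff = emotional.astype(np.int16) - neutral.astype(np.int16)
    # Rescale to uint8 so LBP codes are well defined.
    diff = ((diff - diff.min()) / max(1, np.ptp(diff)) * 255).astype(np.uint8)
    codes = local_binary_pattern(diff, P, R, method='uniform')
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist
```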
Open Access Article
14 - Security Analysis of Scalar Costa Scheme Against Known Message Attack in DCT-Domain Image Watermarking
Reza Samadi, Seyed Alireza Seyedin
This paper proposes an accurate information-theoretic security analysis of the Scalar Costa Scheme (SCS) when SCS is employed in the embedding layer of digital image watermarking. For this purpose, Discrete Cosine Transform (DCT) coefficients are extracted from the cover images, and SCS is used to embed watermark messages into the mid-frequency DCT coefficients. To prevent unauthorized embedding and/or decoding, the SCS codebook is randomized using a pseudorandom dither signal which plays the role of the secret key. A passive attacker applies a Known Message Attack (KMA) to the watermarked messages to practically estimate the secret key. The security level is measured by the residual entropy (equivocation) of the secret key given the attacker's observations. It is seen that the practical security level of SCS depends on the host statistics, which contradicts previous theoretical results, and that the practical analysis leads to residual entropy values different from the previous theoretical equation. It is shown that these differences are mainly due to the existence of uniform regions in images, which cannot be captured by the previous theoretical analysis; another source of the differences is that the previous analysis ignores the dependencies between observations of non-uniform regions. To provide an accurate reformulation, a theoretical equation for the uniform regions and an empirical equation for the non-uniform regions are proposed; combining these yields a new equation for the whole image that considers both the host statistics and the dependencies between observations. Finally, the accuracy of the proposed formulations is examined through exhaustive simulations.
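SCS embedding into a single DCT coefficient can be sketched as dithered, distortion-compensated quantization; the step `delta` and factor `alpha` below are invented for illustration.

```python
import numpy as np

def scs_embed(x, bit, key_dither, delta=8.0, alpha=0.6):
    """Embed one bit into host coefficient x with secret dither key_dither."""
    # Coset offset: half a step for bit 1, plus the secret pseudorandom dither.
    d = key_dither + bit * delta / 2.0
    q = delta * np.round((x - d) / delta) + d   # nearest point of the coset
    return x + alpha * (q - x)                  # distortion compensation
```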
Open Access Article
15 - Effects of Wave Polarization on Microwave Imaging Using Linear Sampling Method
Mehdi Salar Kaleji, Mohammad Zoofaghari, Reza Safian, Zaker Hossein Firouzeh
The Linear Sampling Method (LSM) is a simple and effective method for reconstructing the shape of unknown objects; it is also a fast and robust method for finding an object's location. The method is based on the far-field operator, which relates the far-field radiation to its associated line source in the object. There has been extensive research on different aspects of the method, but from the experimental point of view there has been little work, especially on the effect of polarization on the method's imaging quality. In this paper, we study the effect of polarization on the quality of shape reconstruction of two-dimensional targets. Examples are presented comparing the effect of transverse electric (TE) and transverse magnetic (TM) polarizations on the reconstruction quality of penetrable and non-penetrable objects.
Open Access Article
16 - Detection and Removal of Rain from Video Using Predominant Direction of Gabor Filters
Gelareh Malekshahi, Hossein Ebrahimnezhad
In this paper, we examine the visual effects of rain on the imaging system and present a new method for the detection and removal of rain in video sequences. In the proposed algorithm, a background subtraction technique is used to separate the moving foreground from the background in image sequences whose frames record scenes with moving raindrops. Rain streaks are then detected using the predominant direction of Gabor filters, i.e. the direction containing maximum energy. To achieve this, the rainy image is partitioned into multiple sub-images; all directions of the Gabor filter bank are applied to each sub-image, and the direction that maximizes the energy of the filtered sub-image is selected as the predominant direction of that region. Finally, the rainy pixels diagnosed in each frame are replaced with non-rainy background pixels from other frames. As a result, we reconstruct a new video from which the rain streaks have been removed. Despite certain limitations and the presence of texture variations over time, the proposed method is not sensitive to these changes and operates properly. Simulation results show that the proposed method can detect and locate rain well.
Open Access Article
17 - A Robust Statistical Color Edge Detection for Noisy Images
Mina Alibeigi, Niloofar Mozafari, Zohre Azimifar, Mahnaz Mahmoodian
Edge detection is a fundamental tool that plays a significant role in image processing, and the performance of high-level tasks such as image segmentation and object recognition depends on its efficiency. Therefore, edge detection is one of the well-studied areas in image processing and computer vision. However, accurate edge map generation is clearly more difficult when images are corrupted with noise, and most edge detection methods have parameters that must be set manually. In recent years, different approaches have been used to address these problems. Here we propose a new color edge detector based on a statistical test, which is robust to noise; its parameters are set automatically based on image content. To show the effectiveness of the proposed method, four state-of-the-art edge detectors were implemented and the results compared. Experimental results on five of the most well-known edge detection benchmarks show that the proposed method is robust to noise. For lower noise levels, its performance is comparable to the existing approaches, whose performance depends heavily on their parameter tuning stage; for higher noise levels, however, the results significantly highlight the superiority of the proposed method over existing edge detection methods, both quantitatively and qualitatively.
Open Access Article
18 - Active Steganalysis of Transform Domain Steganography Based on Sparse Component Analysis
Hamed Modaghegh, Seyed Alireza Seyedin
This paper presents a new active steganalysis method to break transform-domain steganography. Most steganalysis techniques focus on detecting the presence or absence of a secret message in a cover (passive steganalysis), but in some cases we need to extract or estimate the hidden message (active steganalysis). Although estimating the message is important, there has been little research in this area. A new active steganalysis method based on Sparse Component Analysis (SCA) is presented in this work. Here, the sparsity of the cover image and the hidden message is used to extract the hidden message from the stego image. In our method, transform-domain steganography is formulated mathematically as a linear combination of sparse sources, so active steganalysis can be posed as an SCA problem; the feasibility of solving the SCA problem is confirmed by linear programming methods. A fast algorithm is then introduced to decrease the computational cost of steganalysis without much loss of accuracy. The accuracy of the new method has been confirmed in experiments on a variety of transform-domain steganography schemes. These experiments show that, compared to previous active steganalysis methods, our method not only reduces the error rate but also decreases the computational cost.
Open Access Article
19 - A Hybrid Object Tracking for Hand Gesture (HOTHG) Approach based on MS-MD and its Application
Amir Hooshang Mazinan, Jalal Hassanian
In this research, a hybrid object tracking approach, namely HOTHG, is realized, with application to hand gesture recognition in American Sign Language (ASL). It is proposed to track and recognize hand gestures effectively by combining mean shift (MS) and motion detection (MD) in an MS/MD-based approach. The results of these two well-known object tracking techniques are investigated synchronously to improve on those obtained from the traditional methods. The MS algorithm tracks an object based on a specified target, which must be provided manually as long as the MD algorithm is not employed. In the proposed approach, the advantages of the two algorithms are used efficiently to upgrade hand tracking performance: in the first step, the MD algorithm is applied to remove regions without motion, and the MS algorithm is then applied for accurate hand tracking. The present approach thus eliminates the weakness of traditional methods organized around the MS algorithm alone. All experiments are carried out on the Boston-104 database, where the hand gesture is tracked better than with previously existing approaches.
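The MS/MD combination can be sketched with OpenCV primitives: a motion mask restricts the frame before mean-shift refines the hand window. The skin histogram and window initialization are assumed inputs, and all settings are illustrative.

```python
import cv2
import numpy as np

back_sub = cv2.createBackgroundSubtractorMOG2()

def track_hand(frame, track_window, skin_hist):
    # MD stage: keep only moving regions.
    motion = back_sub.apply(frame)
    frame_moving = cv2.bitwise_and(frame, frame, mask=motion)
    # MS stage: back-project a (float32) hue histogram of skin, then mean-shift.
    hsv = cv2.cvtColor(frame_moving, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], skin_hist, [0, 180], 1)
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, track_window = cv2.meanShift(back_proj, track_window, crit)
    return track_window
```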
Open Access Article
20 - Fusion Infrared and Visible Images Using Optimal Weights
Mehrnoush Gholampour, Hassan Farsi, Sajad Mohammadzadeh
Image fusion is a process in which different images recorded by several sensors from one scene are combined to provide a final image with higher quality than each individual input image. The fusion is performed by maintaining useful features and reducing or removing useless ones, and its aim has to be clearly specified. In this paper we propose a new method which combines visible and infrared images by a weighted average to provide better image quality. The weighted averaging is performed in the gradient domain, and the weight of each image depends on its useful features. Since these images are recorded at night, the useful features are related to clear scene details. For this reason, object detection is applied to the infrared image and used as its weight map, and the visible image weight is taken as its complement. The averaging is performed on the gradients of the input images, and the final composed image is obtained by the Gauss-Seidel method. The quality of the image produced by the proposed algorithm is compared to images obtained by state-of-the-art algorithms using quantitative and qualitative measures. The obtained results show that the proposed algorithm provides better image quality.
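A rough sketch of the fusion, assuming NumPy: blend the two gradient fields with the weight map, then reconstruct by relaxation on the Poisson equation. It is written here in Jacobi form for brevity; the paper solves the same system with Gauss-Seidel. The weight map `w` (e.g. from IR object detection) is assumed given.

```python
import numpy as np

def fuse(ir, vis, w, iters=500):
    # Weighted average of the two gradient fields.
    gx = w * np.gradient(ir, axis=1) + (1 - w) * np.gradient(vis, axis=1)
    gy = w * np.gradient(ir, axis=0) + (1 - w) * np.gradient(vis, axis=0)
    div = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)  # divergence
    f = (ir + vis) / 2.0                                     # initial guess
    for _ in range(iters):
        # Relaxation step for the Poisson equation laplacian(f) = div.
        f[1:-1, 1:-1] = 0.25 * (f[:-2, 1:-1] + f[2:, 1:-1] +
                                f[1:-1, :-2] + f[1:-1, 2:] - div[1:-1, 1:-1])
    return f
```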
Open Access Article
21 - Simultaneous Methods of Image Registration and Super-Resolution Using Analytical Combinational Jacobian Matrix
Hossein Rezayi, Seyed Alireza Seyedin
In this paper we propose two new simultaneous image registration (IR) and super-resolution (SR) methods using a novel approach to calculate the Jacobian matrix. SR is the process of fusing several low resolution (LR) images to reconstruct a high resolution (HR) image; viewed as an inverse problem, it assumes that three principal operations, warping, blurring and down-sampling, are applied to the desired HR image to produce the existing LR images. Unlike previous methods, we neither calculate the Jacobian matrix numerically nor derive it by treating the three principal operations separately; instead, we develop a new approach that derives the Jacobian matrix analytically from the combined form of the three operations. In this approach, a Gaussian kernel (realistic in a wide range of applications) is considered for blurring, which can be adaptively resized for each LR image. The main proposed method is established by applying these ideas to the joint methods, a class of simultaneous iterative methods in which the incremental values for both the registration parameters and the HR image are obtained by solving one system of equations per iteration. Our second proposed method applies these ideas to the alternating minimization (AM) methods, a class of simultaneous iterative methods in which the incremental values of the registration parameters are obtained after calculating the HR image at each iteration. The results show that our methods are superior to recently proposed methods such as Tian's joint method and Hardie's AM method, while their computational cost is also reduced.
Open Access Article
22 - On-road Vehicle Detection Based on Hierarchical Clustering Using Adaptive Vehicle Localization
Moslem Mohammadi Jenghara, Hossein Ebrahimpour Komleh
Vehicle detection is one of the important tasks in automatic driving and a hard problem that many researchers have focused on. Most commercial vehicle detection systems are based on radar, but such methods have problems, for example with zigzag motions, that image processing techniques can overcome. This paper introduces a method based on hierarchical clustering using low-level image features for on-road vehicle detection, with each vehicle treated as a cluster. In traditional clustering methods the threshold distance for each cluster is fixed, but in this paper an adaptive threshold varies according to the position of each cluster; the threshold measure is computed with a bivariate normal distribution. Sampling and teammate selection for each cluster are performed by a member-based weighted average. For this purpose, unlike other methods that use only horizontal or vertical lines, a full edge detection algorithm is utilized. Corners are important features of video images commonly used in vehicle detection systems; in this paper, Harris features are applied to detect the corners. The LISA data set is used to evaluate the proposed method, and several experiments investigate the performance of the proposed algorithm. Experimental results show good performance compared to other algorithms.
Open Access Article
23 - Quality Assessment Based Coded Apertures for Defocus Deblurring
Mina Masoudifar, Hamid Reza Pourreza
A conventional camera with small pixels may capture images with defocused blurred regions. Blurring, acting as a low-pass filter, attenuates or drops details of the captured image, which makes deblurring an ill-posed problem. Coded aperture photography can decrease the destructive effects of blurring in defocused images; in this case, aperture patterns are designed or evaluated based on how well they reduce these effects. In this paper, a new function is presented for evaluating aperture patterns designed for defocus deblurring. The proposed function consists of a weighted sum of two new criteria defined on the spectral characteristics of an aperture pattern: a pattern whose spectral properties are more similar to a flat all-pass filter is assessed as better. The weights of these criteria are determined by a learning approach, using an aggregate image quality assessment measure that includes an existing perceptual metric and an objective metric. Based on the proposed evaluation function, a genetic algorithm that converges to a near-optimal binary aperture pattern is developed. In consequence, an asymmetric and a semi-symmetric pattern are proposed, and the resulting patterns are compared with the circular aperture and some other patterns in different scenarios.
Open Access Article
24 - Design, Implementation and Evaluation of Multi-terminal Binary Decision Diagram based Binary Fuzzy Relations
Hamid Alavi Toussi, Bahram Sadeghi Bigham
Elimination of redundancies in the memory representation is necessary for fast and efficient analysis of large sets of fuzzy data. In this work, we use MTBDDs (multi-terminal binary decision diagrams) as the underlying data structure to represent fuzzy sets and binary fuzzy relations. This leads to the elimination of redundancies in the representation, fewer computations, and faster analyses. We also extended a BDD package (BuDDy) to support MTBDDs in general and fuzzy sets and relations in particular. The representation and manipulation of MTBDD-based fuzzy sets and binary fuzzy relations are described in this paper, including the design and implementation of different fuzzy operations such as max, min and max-min composition; in particular, an efficient algorithm for computing max-min composition is presented. The effectiveness of our MTBDD-based implementation is shown by applying it to the fuzzy connectedness and image segmentation problem. Compared to a base implementation, which is based on matrices and which we also describe, the running time of the MTBDD-based implementation was faster in our test cases by a factor ranging from 2 to 27, and the memory needed to represent the final results was improved by a factor ranging from 37.9 to 265.5.
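For reference, max-min composition on the dense matrix representation (the paper's matrix baseline, not its MTBDD data structure) is a one-liner in NumPy:

```python
import numpy as np

def max_min_compose(R, S):
    """(R o S)[i, j] = max_k min(R[i, k], S[k, j]) for memberships in [0, 1].

    R has shape (m, n), S has shape (n, p); the result has shape (m, p).
    """
    return np.minimum(R[:, :, None], S[None, :, :]).max(axis=1)
```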
Open Access Article
25 - Unsupervised Segmentation of Retinal Blood Vessels Using the Human Visual System Line Detection Model
Mohsen Zardadi, Nasser Mehrshad, Seyyed Mohammad Razavi
Retinal image assessment has been employed by the medical community for diagnosing vascular and non-vascular pathology. Computer-based analysis of blood vessels in retinal images helps ophthalmologists monitor larger populations for vessel abnormalities, and automatic segmentation of blood vessels from retinal images is the initial step of such computer-based assessment. In this paper, a fast unsupervised method for automatic detection of blood vessels in retinal images is presented. A simple preprocessing technique is introduced to eliminate the optic disc and background noise in the fundus images. First, a newly devised method based on a simple cell model of the human visual system (HVS) enhances the blood vessels in various directions; then, an activity function is defined on the simple cell responses. Next, an adaptive threshold acts as an unsupervised classifier, classifying each pixel as a vessel or non-vessel pixel to obtain a binary vessel image. Lastly, morphological post-processing is applied to eliminate exudates that were detected as blood vessels. The method was tested on two publicly available databases, DRIVE and STARE, which are frequently used for this purpose. The results demonstrate that the performance of the proposed algorithm is comparable with state-of-the-art techniques.
Open Access Article
26 - A new Sparse Coding Approach for Human Face and Action Recognition
Mohsen Nikpoor, Mohammad Reza Karami-Mollaei, Reza Ghaderi
Sparse coding is an unsupervised method which learns a set of over-complete bases to represent data such as images and video. In cases where similar images occur in different classes, sparse coding may assign the images to the same class and degrade classification performance. In this paper, we propose an affine graph regularized sparse coding approach to resolve this problem, adding an affinity constraint to the objective functions of sparse coding and graph regularized sparse coding to improve the recognition rate. Several experiments were performed on well-known face datasets: the first on the ORL dataset for face recognition, and the second on the YALE dataset for facial expression detection, both compared with the basic approaches to evaluate the proposed method. The simulation results show that the proposed method can significantly outperform previous methods in face classification. In addition, the proposed method was applied to the KTH action dataset, and the results show that the proposed sparse coding approach can also be applied to action recognition.
Open Access Article
27 - Efficient Land-cover Segmentation Using Meta Fusion
Morteza Khademi, Hadi Sadoghi Yazdi
Most popular fusion methods have their own limitations; e.g. OWA (ordered weighted averaging) is restricted to a linear model and requires the proportions of the inputs in the fusion to sum to 1. Considering all possible models for fusion, the proposed fusion method involves the confusion of the input data in the fusion process for segmentation; indeed, the limitations in the proposed method are determined adaptively for each input datum separately. Land-cover segmentation using remotely sensed (RS) images is a challenging research subject, because objects of a single land-cover class often appear dissimilar in different RS images. In this paper, multiple co-registered RS images are utilized to segment land cover using FCM (fuzzy c-means). As an appropriate tool to model changes, the fuzzy concept is utilized to fuse and integrate the information of the input images. By categorizing the ground points, it is shown in this paper for the first time that fuzzy numbers are needed, and more suitable than crisp ones, to merge multi-image information for segmentation. Finally, FCM is applied to the fused image pixels (with fuzzy values) to obtain a single segmented image. Mathematical analysis of the proposed cost function and simulation results also show the significant performance of the proposed method in terms of noise-free and fast segmentation.
Open Access Article
28 - Concept Detection in Images Using SVD Features and Multi-Granularity Partitioning and Classification
Kamran Farajzadeh, Esmail Zarezadeh, Jafar Mansouri
New visual, static features, namely the right singular feature vector, the left singular feature vector and the singular value feature vector, are proposed for semantic concept detection in images. These features are derived by applying singular value decomposition (SVD) directly to the raw images. In SVD features, edge, color and texture information is integrated simultaneously and sorted by importance for concept detection. Feature extraction is performed in a multi-granularity partitioning manner, and in contrast to existing systems, classification is carried out separately for each grid partition of each granularity; this separates the effect of classification on partitions with and without the target concept. Since SVD features have high dimensionality, classification is carried out with a k-nearest neighbor (K-NN) algorithm that utilizes a new and stable distance function, the multiplicative distance. Experimental results on the PASCAL VOC and TRECVID datasets show the effectiveness of the proposed SVD features and the multi-granularity partitioning and classification method.
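The feature construction can be sketched directly, assuming NumPy; the truncation length `r` is an illustrative assumption.

```python
import numpy as np

def svd_features(patch, r=8):
    """SVD applied directly to a raw grid partition of the image."""
    U, S, Vt = np.linalg.svd(patch.astype(float), full_matrices=False)
    return np.concatenate([S[:r],          # singular value feature vector
                           U[:r, 0],       # left singular feature vector
                           Vt[0, :r]])     # right singular feature vector
```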
Open Access Article
29 - Improving Image Dynamic Range For An Adaptive Quality Enhancement Using Gamma Correction
Hamid Hassanpour
This paper proposes a new automatic image enhancement method that improves the image dynamic range by modifying the gamma value of pixels in the image. Gamma distortion in an image is due to technical limitations in the imaging device and imposes a nonlinear effect; the severity of the distortion varies depending on the texture and depth of the objects. The proposed method estimates gamma values locally. The image is initially segmented using a pixon-based approach, so that pixels in each segment have similar characteristics in terms of the need for gamma correction. The gamma value for each segment is then estimated by minimizing the homogeneity of the co-occurrence matrix, a feature that represents image details: the minimum value of this feature in a segment corresponds to maximum detail, and image quality improves as more detail is revealed through gamma correction. It is shown that the proposed method performs well in improving image quality, and the subjective and objective image quality assessments performed in this study attest to its superiority over existing image quality enhancement methods.
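The per-segment gamma estimation can be sketched as a grid search minimizing GLCM homogeneity, assuming scikit-image (the functions are spelled greycomatrix/greycoprops on older versions); the candidate grid is illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def estimate_gamma(segment, gammas=np.linspace(0.3, 3.0, 28)):
    """Return the gamma that minimizes co-occurrence homogeneity (uint8 input)."""
    best_g, best_h = 1.0, np.inf
    norm = segment.astype(float) / 255.0
    for g in gammas:
        corrected = (255 * norm ** g).astype(np.uint8)
        glcm = graycomatrix(corrected, [1], [0], levels=256, normed=True)
        h = graycoprops(glcm, 'homogeneity')[0, 0]
        if h < best_h:                     # lower homogeneity = more detail
            best_g, best_h = g, h
    return best_g
```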
Open Access Article
30 - Mitosis Detection in Breast Cancer Histological Images Based on Texture Features Using AdaBoost
Sooshiant Zakariapour, Hamid Jazayeri, Mehdi Ezoji
Counting the mitotic figures present in tissue samples from a patient with cancer plays a crucial role in assessing the patient's survival chances. In clinical practice, mitotic cells are counted manually by pathologists in order to grade the proliferative activity of breast tumors. However, detecting mitoses under a microscope is a laborious, time-consuming task that can benefit from computer-aided diagnosis. In this research we aim to detect mitotic cells present in breast cancer tissue using only texture and pattern features. To classify cells into mitotic and non-mitotic classes, we use an AdaBoost classifier, an ensemble learning method which combines other (weak) classifiers into a strong classifier. Eleven different classifiers were used separately as base learners, and their classification performance was recorded. The proposed ensemble classifier was tested on the standard MITOS-ATYPIA-14 dataset, where a pixel window around each cell's center was extracted as training data. An AdaBoost with logistic regression as its base learner achieved an F1 score of 0.85 using only texture features as input, a significant performance improvement over the status quo. It was also observed that decision trees provide the best recall among the base classifiers, and random forests the best precision.
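The best-performing configuration reported here can be reproduced in outline with scikit-learn; feature extraction is out of scope, and the estimator count is an illustrative assumption.

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression

def train_mitosis_classifier(X_train, y_train):
    # AdaBoost with logistic regression as the weak learner.
    # Note: `estimator` is named `base_estimator` on scikit-learn < 1.2.
    clf = AdaBoostClassifier(estimator=LogisticRegression(max_iter=1000),
                             n_estimators=50)
    return clf.fit(X_train, y_train)
```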
Open Access Article
31 - Automatic Facial Emotion Recognition Method Based on Eye Region Changes
Mina Navraan, Nasrollah Moghaddam Charkari, Muharram Mansoorizadeh
Emotion is expressed via facial muscle movements, speech, body and hand gestures, and various biological signals such as the heartbeat. However, the most natural way that humans display emotion is facial expression, and facial expression recognition has been a great challenge in computer vision for the last two decades. This paper focuses on facial expression to identify seven universal human emotions: anger, disgust, fear, happiness, sadness, surprise, and neutral. Unlike the majority of other approaches, which use the whole face or regions of interest of the face, we restrict our facial emotion recognition (FER) method to analyzing human emotional states based on eye region changes. The reason for using this region is that it is one of the most informative regions for representing facial expression; furthermore, it leads to lower feature dimensionality as well as lower computational complexity. The facial expressions are described by appearance features obtained from texture encoded with Gabor filters, together with geometric features. A Support Vector Machine with RBF and polynomial kernel functions is used for classification of the different types of emotions. The Facial Expressions and Emotion Database (FG-Net), which contains spontaneous emotions, and the Cohn-Kanade (CK) database, with posed emotions, were used in the experiments. The proposed method was trained on the two databases separately and achieved accuracy rates of 96.63% for spontaneous emotion recognition and 96.6% for posed expression recognition, respectively.
Open Access Article
32 - High-Resolution Fringe Pattern Phase Extraction, Placing a Focus on Real-Time 3D Imaging
Amir Hooshang Mazinan, Ali Esmaeili
The idea behind this research is real-time 3D imaging, which is widely applicable in medical science and engineering. Among the most effective non-contact measurement techniques are structured light patterns, projected onto the surface of an object to acquire its 3D depth; the traditional structured light pattern is known as the fringe pattern. In this study, conventional approaches to fringe pattern analysis for 3D imaging, such as the wavelet and Fourier transforms, are investigated. In addition to the frequency estimation algorithm in most of these approaches, an additional unwrapping algorithm is needed to extract the phase coherently. Considering the phase unwrapping problems of fringe algorithms surveyed in the literature, a state-of-the-art approach is proposed here, in which the key characteristics of the conventional algorithms, such as frequency estimation and the Itoh algorithm, are realized synchronously. The simulation results reveal that the proposed approach can extract the image phase of simulated fringe patterns, and correspondingly of realistic patterns, with high quality. Another advantage of this approach is its suitability for real-time application, since a significant part of the operations can be executed in parallel.
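The Itoh unwrapping step the abstract refers to can be sketched in a few lines of NumPy (np.unwrap implements the same 1-D rule):

```python
import numpy as np

def itoh_unwrap(wrapped):
    """1-D Itoh phase unwrapping: re-wrap differences, then integrate."""
    d = np.diff(wrapped)
    d = (d + np.pi) % (2 * np.pi) - np.pi   # wrap differences into [-pi, pi)
    return np.concatenate([[wrapped[0]], wrapped[0] + np.cumsum(d)])
```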
Open Access Article
33 - An Efficient Noise Removal Edge Detection Algorithm Based on Wavelet Transform
Ehsan Ehsaeian
In this paper, we propose an efficient noise-robust edge detection technique based on odd Gaussian derivatives in the wavelet transform domain. First, new basis wavelet functions are introduced and the proposed algorithm is explained. The algorithm consists of two stages: the first idea comes from multiplying responses across derivative scales, and the second is a pruning algorithm which removes fake edges. Our method is applied to binary and natural grayscale images in noise-free and noisy conditions with different noise power densities, and the results are compared with the traditional wavelet edge detection method, both visually and through statistical data in the relevant tables. With a proper selection of the wavelet basis function, an admissible edge response with significantly inhibited noise is obtained without a smoothing technique, and some of the edge detection criteria are improved. The experimental visual and statistical results show that our method is feasibly strong and performs well in edge detection, particularly under high noise contamination. Moreover, to further improve the edge detection criteria, the pruning algorithm is applied as a post-processing stage to the binary and grayscale images. The obtained results verify that the proposed scheme can detect reasonable edge features and dilute the noise effect properly.
Open Access Article
34 - Eye Gaze Detection Based on Learning Automata by Using SURF Descriptor
Hassan Farsi, Reza Nasiripour, Sajad Mohammadzadeh
In the last decade, eye gaze detection has become one of the most important areas in image processing and computer vision. The performance of an eye gaze detection system depends on iris detection and recognition (IR), and iris recognition plays an important role in person identification. The aim of this paper is to achieve a higher recognition rate than learning-automata-based methods. Iris retrieval systems usually consist of several stages applied to the captured eye region: pre-processing, iris detection, normalization, feature extraction and classification. In this paper, a new method without the normalization step is proposed. The Speeded-Up Robust Features (SURF) descriptor is used to extract features from the iris images; the descriptor maps each iris image to a vector with 64 dimensions. For the classification step, a learning automata classifier is applied. The proposed method is tested on three well-known iris databases: UBIRIS, MMU and UPOL. It achieves recognition rates of 100% on the UBIRIS and UPOL databases and 99.86% on the MMU iris database, and its EER values for UBIRIS, UPOL and MMU are 0.00%, 0.00% and 0.008%, respectively. Experimental results show that the proposed learning automata classifier yields minimal classification error and improves precision and computation time.
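For orientation, the snippet below shows how 64-dimensional SURF descriptors are typically extracted. SURF is patented and lives in opencv-contrib (xfeatures2d), so a build with nonfree support is assumed, and the file name is hypothetical; the paper's learning automata classifier is not shown.

```python
# Extracting 64-dimensional SURF descriptors (extended=False keeps 64 dims).
import cv2

img = cv2.imread("iris_sample.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400, extended=False)
keypoints, descriptors = surf.detectAndCompute(img, None)
# Each row of `descriptors` is one 64-dimensional SURF vector.
print(descriptors.shape)  # (num_keypoints, 64)
```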
Open Access Article
35 - Improvement in Accuracy and Speed of Image Semantic Segmentation via Convolution Neural Network Encoder-Decoder
Hanieh Zamanian, Hassan Farsi, Sajad Mohammadzadeh
Recent research on pixel-wise semantic segmentation uses deep neural networks to improve the accuracy and speed of these networks, in order to increase their efficiency in practical applications such as autonomous driving. These approaches use deep architectures to predict pixel labels, but the results are often unsatisfactory, mainly because of max-pooling operators that reduce the resolution of the feature maps. In this paper, we present a convolutional neural network composed of encoder-decoder segments, based on the successful SegNet network. The encoder has a depth of 2; its first part has 5 convolutional layers, each with 64 filters of size 3×3. In the decoder, the dimensions of the decoding filters are matched to the convolutions used at each encoding step, so at each step 64 filters of size 3×3 are used for decoding, with their weights adjusted by network training and adapted to the training data. Owing to its low depth of 2 and its small number of parameters, the proposed network improves both speed and accuracy compared to popular networks such as SegNet and DeepLab. On the CamVid dataset, after a total of 60,000 iterations, we obtain a global accuracy of 91%, which indicates the improved efficiency of the proposed method.
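As a rough sketch of the kind of shallow encoder-decoder the abstract describes, the PyTorch snippet below uses 64-filter 3×3 convolution stages with max-pooling indices reused for unpooling, in the SegNet style; the layer counts, batch normalization, and 12-class head are illustrative assumptions rather than the paper's exact configuration.

```python
# A depth-2 SegNet-style encoder-decoder with pooling-index unpooling.
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    def __init__(self, in_ch=3, num_classes=12):
        super().__init__()
        conv = lambda ci, co: nn.Sequential(
            nn.Conv2d(ci, co, 3, padding=1), nn.BatchNorm2d(co), nn.ReLU())
        self.enc1, self.enc2 = conv(in_ch, 64), conv(64, 64)
        self.pool = nn.MaxPool2d(2, return_indices=True)  # keep indices
        self.unpool = nn.MaxUnpool2d(2)
        self.dec2, self.dec1 = conv(64, 64), conv(64, 64)
        self.head = nn.Conv2d(64, num_classes, 1)

    def forward(self, x):
        x = self.enc1(x); x, i1 = self.pool(x)   # encoder step 1
        x = self.enc2(x); x, i2 = self.pool(x)   # encoder step 2
        x = self.dec2(self.unpool(x, i2))        # decoder mirrors the encoder
        x = self.dec1(self.unpool(x, i1))
        return self.head(x)                      # per-pixel class scores

logits = TinyEncoderDecoder()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 12, 64, 64])
```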
Open Access Article
36 - A Novel Method for Image Encryption Using Modified Logistic Map
Ardalan Ghasemzadeh, Omid R.B. Speily
With the development of the internet and social networks, interest in multimedia data, especially digital images, has increased among scientists. Owing to advantages such as high speed, high security and complexity, chaotic functions have been extensively employed in image encryption. In this paper, a modified logistic map function is proposed, which yields greater scattering in the obtained results. Confusion and diffusion, the two main operations in cryptography, need not be performed in a fixed order: each of the two functions can be applied to the image in any order, provided that the weighted sum of the applied functions does not exceed 10, where confusion has a coefficient of 1 and diffusion a coefficient of 2. To simulate this method, a binary stack is used; the binary stack together with the pseudo-random numbers obtained from the modified chaotic function increases the complexity of the proposed encryption algorithm. The security key length, entropy value, NPCR and UACI values, and correlation coefficient analysis demonstrate the feasibility and validity of the proposed method. Analysis of the results and comparison with other investigated methods clearly verify the high efficiency of the proposed method.
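A minimal sketch of logistic-map-based diffusion follows: a chaotic keystream is generated and XORed with the pixel bytes. This uses the standard logistic map x → r·x·(1−x), not the paper's modified map, whose exact form the abstract does not give; the seed and parameter are illustrative key material.

```python
# Chaotic keystream diffusion: XOR pixel bytes with logistic-map output.
import numpy as np

def logistic_keystream(n: int, x0: float = 0.3141, r: float = 3.9999) -> np.ndarray:
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)            # chaotic iteration
        xs[i] = x
    return (xs * 256).astype(np.uint8)   # map (0, 1) to byte values

def xor_diffuse(img: np.ndarray, key=(0.3141, 3.9999)) -> np.ndarray:
    ks = logistic_keystream(img.size, *key).reshape(img.shape)
    return img ^ ks                      # XOR is its own inverse

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (8, 8), dtype=np.uint8)
enc = xor_diffuse(img)
assert np.array_equal(xor_diffuse(enc), img)  # decryption recovers the image
```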
Open Access Article
37 - Retinal Vessel Extraction Using Dynamic Threshold And Enhancement Image Filter From Retina Fundus
Erwin Erwin, Tomi Kiyatmoko
In the diagnosis of retinal disease, the retinal vessels play an important role in identifying certain conditions. Retinal vessels are important elements with a variety of shapes and sizes, and the pattern of the retinal blood vessels is essential for advanced diagnosis processes in the medical retina field, such as detection, identification and classification. Improving image quality is therefore very important, with a focus on extracting or segmenting the retinal vessels so that better values of parameters such as accuracy, specificity and sensitivity can be obtained. We therefore conducted experiments to develop the extraction of retinal images, obtaining binary images of the retinal vessels using a dynamic threshold and a Butterworth bandpass filter. On the DRIVE database, the method achieves an accuracy of 94.77%, a sensitivity of 54.48% and a specificity of 98.71%.
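The sketch below shows a frequency-domain Butterworth bandpass filter of the kind named above, built as the product of a low-pass and an inverted low-pass response; the cutoff radii and order are illustrative assumptions, not the paper's values.

```python
# Frequency-domain Butterworth bandpass filtering of a grayscale image.
import numpy as np

def butterworth_bandpass(img: np.ndarray, low: float, high: float, order: int = 2) -> np.ndarray:
    rows, cols = img.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D = np.hypot(*np.meshgrid(u, v, indexing="ij"))       # distance from center
    H_lp = 1.0 / (1.0 + (D / high) ** (2 * order))        # low-pass at `high`
    H_hp = 1.0 - 1.0 / (1.0 + (D / low) ** (2 * order))   # high-pass at `low`
    H = H_lp * H_hp                                       # bandpass = product
    F = np.fft.fftshift(np.fft.fft2(img))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))

img = np.random.default_rng(1).random((128, 128))
filtered = butterworth_bandpass(img, low=5, high=40)
print(filtered.shape)  # (128, 128), ready for dynamic thresholding
```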
Open Access Article
38 - SSIM-Based Fuzzy Video Rate Controller for Variable Bit Rate Applications of Scalable HEVC
Farhad Raufmehr, Mehdi Rezaei
Scalable High Efficiency Video Coding (SHVC) is the scalable extension of the latest video coding standard, H.265/HEVC. Rate control algorithms are outside the scope of video coding standards; appropriate rate control algorithms are designed for various applications to meet practical constraints such as bandwidth and buffering. In most scalable video applications, such as video on demand (VoD) and broadcasting, bitstreams with variable bit rates are preferred to bitstreams with constant bit rates. In variable bit rate (VBR) applications, the tolerable delay is relatively high, so we utilize a larger buffer to allow more variation in bitrate and thereby provide smooth, high visual quality in the output video. In this paper, we propose a fuzzy video rate controller appropriate for VBR applications of SHVC. A fuzzy controller is used for each layer of the scalable video to minimize frame-level QP fluctuation while the buffering constraint is obeyed for any number of layers received by a decoder. The proposed rate controller uses the well-known structural similarity index (SSIM) as a quality metric to increase the visual quality of the output video. The proposed rate control algorithm is implemented in the HEVC reference software, and comprehensive experiments are executed to tune the fuzzy controllers and to evaluate the performance of the algorithm. Experimental results show high performance for the proposed algorithm in terms of rate control, visual quality, and rate-distortion performance.
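For reference, the snippet below computes the SSIM quality signal that such a controller consumes, using scikit-image's implementation; the paper itself integrates SSIM inside the HEVC reference software, and the random frames here are stand-ins.

```python
# SSIM between an original frame and its decoded version, as a quality signal.
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)
original = rng.random((64, 64))
decoded = original + 0.05 * rng.standard_normal((64, 64))  # coding distortion
score = ssim(original, decoded, data_range=1.0)
print(f"SSIM = {score:.3f}")  # fed to the fuzzy controller as a quality input
```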
Open Access Article
39 - Body Field: Structured Mean Field with Human Body Skeleton Model and Shifted Gaussian Edge Potentials
Sara Ershadi-Nasab, Shohreh Kasaei, Esmaeil Sanaei, Erfan Noury, Hassan Hafez-kolahi
An efficient method for simultaneous human body part segmentation and pose estimation is introduced. A conditional random field with a fully-connected graphical model is used, where the possible node (image pixel) labels comprise the human body parts and the background. In the human body skeleton model, the spatial dependencies among body parts are encoded in the definition of the pairwise energy functions of the conditional random field. Proper pairwise edge potentials between image pixels are defined according to the presence or absence of human body parts that are near to each other. Various Gaussian kernels in position, color, and histogram-of-oriented-gradients spaces are used to define the pairwise energy terms, and shifted Gaussian kernels are defined between each pair of body parts that are connected according to the human body skeleton model. As shifted Gaussian kernels impose a high computational cost on the inference, an efficient inference process is proposed using a mean field approximation method with high-dimensional shifted Gaussian filtering. Experimental results on the challenging KTH Football, Leeds Sports Pose, HumanEva, and Penn-Fudan datasets show that the proposed method increases the per-pixel accuracy of human body part segmentation and also improves the probability-of-correct-parts metric for human body joint locations.
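A shifted Gaussian edge potential of the kind described above can be written down directly: the kernel peaks when the displacement between two pixels equals an expected offset between two connected body parts. The offset mu and bandwidth sigma below are illustrative, not learned values from the paper.

```python
# Shifted Gaussian pairwise potential between two pixel positions.
import numpy as np

def shifted_gaussian_kernel(p_i, p_j, mu, sigma):
    """k(p_i, p_j) = exp(-||(p_i - p_j) - mu||^2 / (2 sigma^2))."""
    d = np.asarray(p_i, float) - np.asarray(p_j, float) - np.asarray(mu, float)
    return float(np.exp(-(d @ d) / (2.0 * sigma ** 2)))

# Highest affinity when pixel i sits at offset mu from pixel j,
# e.g. an elbow pixel about 40 px below a shoulder pixel.
print(shifted_gaussian_kernel((100, 50), (60, 50), mu=(40, 0), sigma=10.0))   # ~1.0
print(shifted_gaussian_kernel((100, 50), (100, 50), mu=(40, 0), sigma=10.0))  # ~0
```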
Open Access Article
40 - A Two-Stage Multi-Objective Enhancement for Fused Magnetic Resonance Image and Computed Tomography Brain Images
Leena Chandrashekar A Sreedevi Asundi
Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) are the imaging techniques for detection of glioblastoma. However, a single imaging modality is never adequate to validate the presence of the tumor, and each imaging technique represents a different characteristic of the brain, so experts have to analyze each of the images independently. This requires more expertise by doctors and delays the detection and diagnosis time. Multimodal image fusion is a process of generating an image of high visual quality by fusing different images, but it introduces blocking effects, noise and artifacts in the fused image. Most enhancement techniques deal with contrast enhancement; however, enhancing the image quality in terms of edges, entropy and peak signal-to-noise ratio is also significant. Contrast Limited Adaptive Histogram Equalization (CLAHE) is a widely used enhancement technique. Its major drawbacks are that it only enhances the pixel intensities and that it requires the selection of operational parameters such as the clip limit, block size and distribution function. Particle Swarm Optimization (PSO) is an optimization technique used here to choose the CLAHE parameters, based on a multi-objective fitness function representing the entropy and edge information of the image. The proposed technique improves the visual quality of the Laplacian-pyramid-fused MRI and CT images.
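The sketch below shows the CLAHE step with a fitness function of the kind the abstract describes (entropy plus edge strength); a full PSO loop over the clip limit and tile size is replaced by a small grid scan, and the fitness weights are assumptions.

```python
# CLAHE parameter search driven by an entropy-plus-edges fitness function.
import cv2
import numpy as np

def fitness(img_u8: np.ndarray, clip: float, tiles: int) -> float:
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(tiles, tiles))
    out = clahe.apply(img_u8)
    hist = np.bincount(out.ravel(), minlength=256) / out.size
    entropy = -np.sum(hist[hist > 0] * np.log2(hist[hist > 0]))  # information
    edges = np.mean(np.abs(cv2.Laplacian(out, cv2.CV_64F)))      # edge content
    return entropy + 0.1 * edges  # weights would be tuned by the optimizer

img = np.random.default_rng(2).integers(0, 256, (128, 128), dtype=np.uint8)
# A PSO would search this space continuously; here we just scan a few values.
best = max(((c, t, fitness(img, c, t)) for c in (1.0, 2.0, 4.0) for t in (4, 8, 16)),
           key=lambda x: x[2])
print(best)  # (clip limit, tile count, fitness) of the best setting found
```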
Open Access Article
41 - Drone Detection by Neural Network Using GLCM and SURF Features
Tanzia Ahmed, Tanvir Rahman, Bir Ballav Roy, Jia Uddin
This paper presents a vision-based drone detection method. There is a considerable body of research on object detection using different feature extraction methods, but those methods are applied separately. In the proposed model, a hybrid feature extraction method combining SURF and GLCM is used to detect objects with a neural network, a combination that has not been experimented with before. Speeded-Up Robust Features (SURF) is a blob detection algorithm that extracts points of interest from an integral image, converting the image into a 2D feature vector, and it supports fast matching of images. The Gray-Level Co-occurrence Matrix (GLCM) counts the occurrences of pixel pairs in a given spatial relationship and represents the result as an 8×8 matrix of the best possible attributes of the image. In the proposed model, the images are first pre-processed to fit the feature extraction methods; SURF then extracts features from the images into a 2D vector, and GLCM extracts the best possible features from that vector into an 8×8 matrix. Combining SURF and GLCM thus ensures the quality of the training dataset, extracting features quickly (with SURF) while retaining the best points of interest (with GLCM). The extracted pattern features are used to train and test a neural network, with a pattern recognition algorithm as the machine learning tool. In the experimental evaluation, the performance of the proposed model is examined via the cross-entropy of each instance and the percentage error. For the tested drone dataset, the experimental results demonstrate improved performance over state-of-the-art models, exhibiting lower cross-entropy and percentage error.
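For the GLCM stage, the snippet below computes co-occurrence matrices and standard texture properties with scikit-image; the distances, angles, gray-level count, and property list are illustrative assumptions standing in for the paper's exact configuration.

```python
# GLCM texture features: co-occurrence counts of pixel pairs, then summary
# statistics (contrast, homogeneity, energy, correlation) per orientation.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

img = np.random.default_rng(3).integers(0, 8, (64, 64), dtype=np.uint8)
# With 8 gray levels, each (distance, angle) pair yields an 8x8 matrix.
glcm = graycomatrix(img, distances=[1], angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                    levels=8, symmetric=True, normed=True)
features = np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity", "energy", "correlation")])
print(glcm.shape, features.shape)  # (8, 8, 1, 4) and (16,)
```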
Open Access Article
42 - Human Activity Recognition based on Deep Belief Network Classifier and Combination of Local and Global Features
Azar Mahmoodzadeh
During the past decades, recognition of human activities has attracted the attention of numerous researchers due to its outstanding applications, including smart homes, health care and the monitoring of private and public places. Applied to video frames, this paper proposes a hybrid method that combines the features extracted from the images using the scale-invariant feature transform (SIFT), histogram of oriented gradients (HOG) and GIST descriptors, and classifies the activities by means of a deep belief network (DBN). First, in order to avoid ineffective features, a pre-processing stage is performed on each image in the dataset. Then, the mentioned descriptors extract several features from the image. Because of the problems of working with a large number of features, a small and distinctive feature set is produced using the bag-of-words (BoW) technique. Finally, these reduced features are given to a deep belief network in order to recognize the human activities. Comparing the simulation results of the proposed approach with other existing methods on the standard PASCAL VOC Challenge 2010 database with nine different activities demonstrates improved accuracy, precision and recall (reaching 96.39%, 85.77% and 86.72%, respectively) with respect to the compared human activity recognition methods.
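The bag-of-words step described above can be sketched compactly: local descriptors from all images are clustered with k-means, and each image becomes a normalized histogram of cluster assignments. The vocabulary size and the random arrays standing in for SIFT/HOG descriptors are assumptions.

```python
# Bag-of-words encoding: k-means vocabulary + per-image word histograms.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
# Stand-in for SIFT/HOG output: each image yields ~200 descriptors of dim 128.
descriptors_per_image = [rng.random((200, 128)) for _ in range(10)]

kmeans = KMeans(n_clusters=50, n_init=10, random_state=0)
kmeans.fit(np.vstack(descriptors_per_image))       # build the visual vocabulary

def bow_histogram(desc: np.ndarray) -> np.ndarray:
    words = kmeans.predict(desc)
    hist = np.bincount(words, minlength=50).astype(float)
    return hist / hist.sum()                       # normalized 50-bin feature

features = np.stack([bow_histogram(d) for d in descriptors_per_image])
print(features.shape)  # (10, 50) -> compact input for the DBN classifier
```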
Open Access Article
43 - Farsi Font Detection using the Adaptive RKEM-SURF Algorithm
Zahra Hossein-Nejad, Hamed Agahi, Azar Mahmoodzadeh
Farsi font detection is considered the first stage in the Farsi optical character recognition (FOCR) of scanned printed texts. To this aim, this paper proposes an improved version of the speeded-up robust features (SURF) algorithm as the feature detector in the font recognition process. The SURF algorithm suffers from the creation of several redundant features during the detection phase, so the presented version employs the redundant keypoint elimination method (RKEM) to enhance the matching performance of SURF by removing unnecessary keypoints. Although the performance of the RKEM is acceptable in this task, it relies on a fixed, experimentally set threshold value, which has a detrimental impact on the results. In this paper, an Adaptive RKEM is proposed for the SURF algorithm that considers the image type and distortion when adjusting the threshold value. This improved version is then applied to recognize Farsi fonts in texts: the proposed Adaptive RKEM-SURF detects the keypoints, SURF is used as the descriptor for the features, and matching is done using the nearest neighbor distance ratio. The proposed approach is compared with recently published FOCR algorithms to confirm its superiority. The method can be generalized to other languages such as Arabic and English.
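In the spirit of the RKEM step, the sketch below drops keypoints that fall within a distance threshold of an already accepted, stronger keypoint. The adaptive rule for choosing the threshold per image type and distortion is the paper's contribution and is not reproduced here; the fixed threshold below is an assumption.

```python
# Redundant-keypoint elimination: keep the strongest keypoints, dropping any
# later keypoint that lies within `thresh` pixels of an accepted one.
import numpy as np

def eliminate_redundant(points: np.ndarray, responses: np.ndarray, thresh: float) -> np.ndarray:
    """points: (N, 2) keypoint coordinates; responses: (N,) detector strengths."""
    order = np.argsort(-responses)       # strongest keypoints first
    kept = []
    for i in order:
        p = points[i]
        if all(np.hypot(*(p - points[j])) > thresh for j in kept):
            kept.append(i)
    return np.array(kept)

rng = np.random.default_rng(5)
pts = rng.random((100, 2)) * 50
resp = rng.random(100)
print(len(eliminate_redundant(pts, resp, thresh=5.0)), "keypoints kept of 100")
```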
Open Access Article
44 - DeepFake Detection using 3D-Xception Net with Discrete Fourier Transformation
Adeep Biswas, Debayan Bhattacharya, Kakelli Anil Kumar
Videos are a popular way of sharing content on social media and capturing an audience's attention. The artificial manipulation of videos is growing rapidly; it can make videos flashy and interesting, but it can also be misused to spread false information on social media platforms. Deep fakes are a problematic form of video manipulation in which artificial components are added to a video using emerging deep learning techniques. Due to the increasing accuracy of deep fake generation methods, artificially created videos are no longer easily detectable and pose a major threat to social media users. To address this growing problem, we propose a new method for detecting deep fake videos using a 3D Inflated Xception Net with a Discrete Fourier Transformation. Xception Net was originally designed for application on 2D images only; the proposed method is the first attempt to use a 3D Xception Net for categorizing video-based data. An advantage of the proposed method is that it works on the whole video rather than on a subset of frames. The proposed model was tested on the popular Celeb-DF dataset and achieved better accuracy.
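A frequency-domain preprocessing step of the kind suggested by the abstract can be sketched as follows: each frame is mapped to its log-magnitude 2D DFT spectrum before the frames are stacked into a clip tensor for a 3D network. The clip dimensions are illustrative assumptions.

```python
# Per-frame DFT log-magnitude spectra as input to a 3D video classifier.
import numpy as np

def frames_to_spectra(frames: np.ndarray) -> np.ndarray:
    """frames: (T, H, W) grayscale clip -> (T, H, W) log-magnitude spectra."""
    spectra = np.fft.fftshift(np.fft.fft2(frames, axes=(1, 2)), axes=(1, 2))
    return np.log1p(np.abs(spectra))  # compress the dynamic range

clip = np.random.default_rng(6).random((16, 64, 64))  # 16-frame stand-in clip
print(frames_to_spectra(clip).shape)  # (16, 64, 64) -> fed to the 3D CNN
```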
Open Access Article
45 - Diagnosis of Gastric Cancer via Classification of the Tongue Images using Deep Convolutional Networks
Elham Gholami, Seyed Reza Kamel Tabbakh, Maryam Khairabadi
Gastric cancer is the second most common cancer worldwide and is responsible for the death of many people in society. One of the issues regarding this disease is the absence of early and accurate detection: in the medical industry, gastric cancer is diagnosed by conducting numerous tests and imagings, which are costly and time-consuming, so doctors are seeking a cost-effective and time-efficient alternative. One such alternative is Chinese medicine, which diagnoses by observing changes of the tongue; detecting disease from the appearance and color of various sections of the tongue is one of the key components of traditional Chinese medicine. In this study, a method is presented that can localize the tongue surface regardless of the different poses of people in the images. If the localization of the face components, especially the mouth, is done correctly, the components producing the greatest distinction in the dataset can be used, which is favorable in terms of time and space complexity; moreover, given the best estimation, the best features can be extracted relative to those components and the best possible accuracy can be achieved. The extraction of appropriate features is done using deep convolutional neural networks, and the random forest algorithm is used to train the proposed model and evaluate the criteria. Experimental results show that the average classification accuracy reaches approximately 73.78%, which demonstrates the superiority of the proposed method compared to other methods.
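The "CNN features + random forest" pipeline described above can be sketched with a pretrained torchvision backbone as the feature extractor; the choice of ResNet-18 and the random tensors standing in for localized tongue crops are assumptions, not the paper's network.

```python
# Deep features from a pretrained CNN, classified by a random forest.
import torch
import torchvision.models as models
from sklearn.ensemble import RandomForestClassifier

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # expose 512-d penultimate features
backbone.eval()

with torch.no_grad():
    crops = torch.randn(20, 3, 224, 224)   # stand-in for localized tongue crops
    feats = backbone(crops).numpy()        # (20, 512) feature matrix

labels = [0] * 10 + [1] * 10               # stand-in healthy/cancer labels
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(feats, labels)
print(clf.predict(feats[:3]).tolist())
```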
Open Access Article
46 - Proposing Real-time Parking System for Smart Cities using Two Cameras
Phat Nguyen Huu, Loc Hoang Bao
Today, cars are a popular means of transport, and this rapid development has resulted in an increasing demand for private parking. Finding a parking space in urban areas is therefore extremely difficult for drivers. Another serious problem is that parking on the roadway has serious consequences, such as traffic congestion. As a result, various solutions have been proposed for basic functions such as detecting a free space or determining the position of the parking spot to guide the driver. In this paper, we propose a system that not only detects free spaces but also identifies each vehicle based on its license plate. Our proposed system includes two cameras with two independent functions, a Skyeye camera and an LPR camera: the Skyeye module detects and tracks vehicles, while the automatic license plate recognition (ALPR) module detects and identifies license plates. The system therefore not only helps drivers find a suitable parking space but also manages and controls vehicles effectively for street parking; in addition, it can detect offending vehicles parked on the roadway based on their identity. We also collected a dataset correctly distributed for the context in order to increase the system's performance. The accuracy of the proposed system is 99.48%, which shows the feasibility of applying it in real environments.
Open Access Article
47 - An Automatic Thresholding Approach to Gravitation-Based Edge Detection in Grey-Scale Images
Hamed Agahi, Kimia Rezaei
This paper presents an optimal auto-thresholding approach for the gravitational edge detection method in grey-scale images. The goal of this approach is to enhance the performance measures of the edge detector in clean and noisy conditions. To this aim, an optimal threshold is found automatically, according to which the proposed method dichotomizes the pixels into edges and non-edges. First, some pre-processing operations are applied to the image. Then, the vector sum of the gravitational forces applied to each pixel by its neighbors is computed according to the universal law of gravitation. Afterwards, the force magnitude is mapped to a new characteristic called the force feature. Next, the histogram representation of this feature is determined, for which an optimal threshold is to be discovered. Three thresholding techniques are proposed, two of which contain iterative processes; the parameters of the formulations used in these techniques are adjusted by means of the metaheuristic grasshopper optimization algorithm. To evaluate the proposed system, two standard databases were used together with multiple qualitative and quantitative measures. The results confirm that the proposed methodology outperforms some conventional and recent detectors, achieving an average precision of 0.894 on the BSDS500 dataset, with outputs highly similar to the ideal edge maps.
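The force feature admits a direct sketch: each pixel's eight neighbors attract it with a force proportional to their intensity ("mass") divided by the squared distance, and the magnitude of the vector sum is the feature. The 3×3 neighborhood and unit gravitational constant below are simplifying assumptions.

```python
# Gravitational force feature: magnitude of the net pull from 8 neighbors.
import numpy as np

def force_feature(img: np.ndarray) -> np.ndarray:
    img = img.astype(float)
    fx = np.zeros_like(img)
    fy = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == dy == 0:
                continue
            r2 = dx * dx + dy * dy                        # squared distance
            mass = np.roll(img, (-dy, -dx), axis=(0, 1))  # neighbor intensity
            mag = mass / r2                               # |F| ~ m / r^2
            norm = np.sqrt(r2)
            fx += mag * dx / norm                         # unit-direction parts
            fy += mag * dy / norm
    return np.hypot(fx, fy)                               # net force magnitude

img = np.zeros((32, 32)); img[:, 16:] = 255               # vertical step edge
# In flat regions the pulls cancel; the feature peaks on the edge columns.
print(force_feature(img)[5, 14:18].round(1))
```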
Open Access Article
48 - Performance Analysis of Hybrid SOM and AdaBoost Classifiers for Diagnosis of Hypertensive Retinopathy
Wiharto Wiharto, Esti Suryani, Murdoko Susilo
The diagnosis of hypertensive retinopathy (CAD-RH) can be made by observing the tortuosity of the retinal vessels; tortuosity is a feature that can characterize normal and abnormal blood vessels. This study aims to analyze the performance of a CAD-RH system based on feature extraction from the tortuosity of the retinal blood vessels. The study uses a segmentation method based on self-organizing map (SOM) clustering, combined with feature extraction, feature selection, and the ensemble Adaptive Boosting (AdaBoost) classification algorithm. Feature extraction is performed using fractal analysis with the box-counting method, lacunarity with the gliding-box method, and invariant moments. Feature selection is done using the information gain method: all generated features are ranked, and a subset is selected by referring to the gain value. The best system performance is obtained with 2 clusters, the fractal dimension, lacunarity with box sizes 22-29, and invariant moments M1 and M3. Under these conditions the system provides 84% sensitivity, 88% specificity, a positive likelihood ratio (LR+) of 7.0, and an 86% area under the curve (AUC). This model also outperforms a number of ensemble algorithms, such as bagging and random forest. Referring to these results, it can be concluded that this model can be an alternative for CAD-RH, with performance in the good category.
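The box-counting method named above has a standard formulation: a binary vessel map is covered with boxes of decreasing size, and the fractal dimension is the slope of log(count) versus log(1/size). The box sizes in the sketch are illustrative, not the paper's 22-29 range.

```python
# Box-counting fractal dimension of a binary image.
import numpy as np

def box_counting_dimension(binary: np.ndarray, sizes=(2, 4, 8, 16, 32)) -> float:
    counts = []
    for s in sizes:
        h, w = (binary.shape[0] // s) * s, (binary.shape[1] // s) * s
        blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))  # occupied boxes
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square has dimension close to 2.
square = np.ones((128, 128), dtype=bool)
print(round(box_counting_dimension(square), 2))  # 2.0
```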
Open Access Article
49 - Defect Detection using Depth Resolvable Statistical Post Processing in Non-Stationary Thermal Wave Imaging
G.V.P. Chandra Sekhar Yadav, V. S. Ghali, Naik R. Baloji
Defects generated during various phases of manufacturing or transport limit the future applicability and serviceability of materials, and a non-destructive testing modality is required to detect them. Depth-resolvable subsurface anomaly detection in non-stationary thermal wave imaging is vital for a reliable investigation of materials thanks to its fast, remote and non-destructive character. The present work solves the 3-dimensional heat diffusion equation under the stipulated boundary conditions using a Green's-function-based analytical approach for the recently introduced quadratic frequency modulated thermal wave imaging (with a FLIR SC 655A infrared sensor, spectral range 7.5-14 µm, at 25 fps) to explore subsurface details with improved sensitivity and resolution. The temperature response obtained from the heat diffusion equation is used with a random-projection-based statistical post-processing approach: a band of low frequencies (0.01-0.1 Hz) is imposed over a carbon fiber reinforced polymer during experimentation, and orthonormal projection coefficients are extracted to improve defect detection with enhanced depth resolution. The orthonormal projection coefficients are obtained by projecting the orthonormal features of random vectors, extracted using the Gram-Schmidt algorithm, onto the mean-removed dynamic thermal data. The defect detectability of the random-projection post-processing approach is validated by comparing the full width at half maximum (FWHM) and signal-to-noise ratio (SNR) of its processed results with those of conventional approaches. Random projection provides detailed visualization of defects, with 31% detectability even for deep and small defects, in contrast to conventional post-processing modalities; moreover, the sizes of the subsurface anomalies estimated from the FWHM show a maximum error of 0.99% for the random projection approach.
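The random-projection post-processing can be sketched as follows: random vectors are orthonormalized (here via QR factorization, which produces an orthonormal basis as Gram-Schmidt would) and the mean-removed thermal sequence is projected onto them. The dimensions and random data are illustrative, not the experiment's.

```python
# Random projection of mean-removed dynamic thermal data.
import numpy as np

rng = np.random.default_rng(7)
T, H, W = 100, 32, 32                  # frames, height, width (stand-ins)
thermal = rng.random((T, H, W))        # stand-in for captured thermograms

data = thermal.reshape(T, H * W)
data = data - data.mean(axis=0)        # remove the temporal mean per pixel

R = rng.standard_normal((T, 5))        # 5 random vectors in the time domain
Q, _ = np.linalg.qr(R)                 # orthonormalize (Gram-Schmidt equivalent)

coeffs = Q.T @ data                    # orthonormal projection coefficients
images = coeffs.reshape(5, H, W)       # one coefficient image per basis vector
print(images.shape)
```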
Open Access Article
50 - Deep Learning Approach for Cardiac MRI Images
Afshin Sandooghdar, Farzin Yaghmaee
Deep learning (DL) is the most widely used image-analysis approach, especially in medical image processing. Although DL has entered image processing to solve machine learning (ML) problems, identifying the most suitable model based on evaluation over the training epochs is still an open question for scholars in the field. There are many types of function approximators, such as decision trees, Gaussian processes and deep multi-layered neural networks (NNs), whose effectiveness should be evaluated. This study therefore assessed an approach based on DL techniques for a modern medical imaging task, namely magnetic resonance imaging (MRI) segmentation. An experiment with a random sampling approach was conducted, with one hundred patient cases used for training, validation, and testing. The method is based on fully automatic segmentation and disease classification from MRI images. A U-Net structure was used for segmentation of the right ventricular cavity (RVC), left ventricular cavity (LVC) and left ventricular myocardium (LVM), and the information extracted from the segmentation step, in the form of comprehensive handcrafted features reflecting demonstrative clinical strategies, was used to train a random forest classifier and a multilayer perceptron (MLP) to predict the pathological target class. Our study achieves 92% test accuracy for cardiac MRI image segmentation and classification; for the MLP ensemble and the random forest, test accuracy was 91% and 90%, respectively. This study has implications for scholars in the field of medical image processing.
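The two-stage pipeline described above can be sketched end to end: segmentation masks of the cardiac structures are turned into simple handcrafted features, which then feed a random forest and an MLP. The volume-based features and random masks below are stand-ins for the paper's comprehensive clinical features and U-Net outputs.

```python
# Segmentation-derived features classified by a random forest and an MLP.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(8)

def volume_features(mask: np.ndarray) -> np.ndarray:
    """mask: (D, H, W) labels {0: bg, 1: RVC, 2: LVC, 3: LVM} -> volumes."""
    vols = np.array([(mask == k).sum() for k in (1, 2, 3)], dtype=float)
    return np.append(vols, vols[1] / (vols[2] + 1e-8))  # e.g. LVC/LVM ratio

masks = rng.integers(0, 4, (40, 8, 32, 32))   # stand-in U-Net outputs
X = np.stack([volume_features(m) for m in masks])
y = rng.integers(0, 2, 40)                    # stand-in pathology labels

rf = RandomForestClassifier(random_state=0).fit(X, y)
mlp = MLPClassifier(max_iter=500, random_state=0).fit(X, y)
print(rf.score(X, y), mlp.score(X, y))
```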
Open Access Article
51 - Content-based Retrieval of Tiles and Ceramics Images based on Grouping of Images and Minimal Feature Extraction
Simin RajaeeNejad, Farahnaz Mohanna
One of the most important databases in e-commerce is the tile and ceramic database, for which no specific retrieval method has been provided so far. In this paper, a method is proposed for the content-based retrieval of digital images from tile and ceramic databases. First, a database of 520 images is created by photographing different tiles and ceramics on the market from different angles and directions. A query image and the database images are each divided into nine equal sub-images, and all images are grouped based on their sub-images. Next, selected color and texture features are extracted from the sub-images of the database and query images, so that each image has a feature vector. The selected features are the minimum required, in order to reduce the amount of computation and stored information as well as to speed up retrieval. Average precision is calculated as the performance measure. Finally, comparing the query feature vector with the feature vectors of all the database images yields the retrieval results. The proposed method improves accuracy and speed by 16.55% and 23.88%, respectively, compared to the most similar methods.
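The nine-sub-image feature pipeline can be sketched directly: the image is split into a 3×3 grid and a small color/texture descriptor (mean HSV plus a gradient-energy texture cue) is computed per sub-image. The specific features are illustrative, not the paper's exact minimal set.

```python
# 3x3 sub-image color/texture feature vector for content-based retrieval.
import cv2
import numpy as np

def feature_vector(img_bgr: np.ndarray) -> np.ndarray:
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(float)
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(float)
    h, w = gray.shape
    feats = []
    for i in range(3):
        for j in range(3):
            ys = slice(i * h // 3, (i + 1) * h // 3)
            xs = slice(j * w // 3, (j + 1) * w // 3)
            color = hsv[ys, xs].reshape(-1, 3).mean(axis=0)  # mean H, S, V
            gx, gy = np.gradient(gray[ys, xs])
            texture = np.mean(gx ** 2 + gy ** 2)             # gradient energy
            feats.extend([*color, texture])
    return np.asarray(feats)                                 # 9 x 4 = 36 values

img = np.random.default_rng(9).integers(0, 256, (90, 90, 3), dtype=np.uint8)
query = feature_vector(img)
# Retrieval would rank database images by distance to this query vector.
print(query.shape)  # (36,)
```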