• About Journal

     The Journal of Information Systems and Telecommunication (JIST) accepts and publishes papers containing original research and/or development results that represent an effective and novel contribution to knowledge in the area of information systems and telecommunication. Contributions are accepted in the form of Regular papers or Correspondence. Regular papers give a well-rounded treatment of a problem area, whereas Correspondence focuses on a single point within a defined problem area. With the permission of the editorial board, other kinds of papers may be published if they are found to be relevant or of interest to the readers. Responsibility for the content of the papers rests solely with the authors. The Journal is aimed not only at a national community but also at an international audience; for this reason, authors are expected to write in English.

    This journal is published under the scientific support of the Advanced Information Systems (AIS) Research Group and the Digital & Signal Processing Group, ICTRC.

    JIST has adopted an author-pays Open Access (OA) model for authors from abroad, effective from the 1 March 2021 volume; it applies to all new submissions to the journal from that date.

    The Article Processing Charge (APC) for Iranian authors ONLY was updated effective 6 June 2021.

    Call for Papers: Special Issue on Telecommunication and relevant fields, to be published by December 2021.

    For further information on Article Processing Charges (APCs), please visit our APC page or contact us at infojist@gmail.com.


    Latest Published Articles

    • Open Access Article

      1 - Cost Benefit Analysis of Three Non-Identical Machine Model with Priority in Operation and Repair
      Nafeesa Bashir, Raeesa Bashir, JPS Joorel, Tariq Rashid Jan
      Issue 35 , Volume 9 , Summer 2021
      The paper proposes a new real-life model whose main aim is to examine the cost-benefit analysis of a textile industry model subject to different failure and repair strategies. The reliability model comprises three units, i.e. a Spinning machine (S), a Weaving machine (W), and a Colouring and Finishing machine (Cf). The working principle of the model starts with the spinning machine, where unit S is in the operative state while the weaving machine and the colouring and finishing machine are in the idle state. Complete system failure is observed when all three units, S, W, and Cf, are in the down state. A repairperson is always available to carry out repair activities in the system, in which first priority in repair is given to the colouring and finishing machine, followed by the spinning and weaving machines. The proposed model attempts to maximize the reliability of a real-life system. Reliability measures such as mean sojourn time, mean time to system failure, and profit analysis of the system are examined to characterize the performance of the reliability measures. To conclude the study of this model, different stochastic measures are analyzed in the steady state using the regenerative point technique. Tables are prepared for arbitrary parameter values to show the behaviour of some important reliability measures and to check the efficiency of the model under such conditions.

    • Open Access Article

      2 - DeepFake Detection using 3D-Xception Net with Discrete Fourier Transformation
      Adeep Biswas, Debayan Bhattacharya, Anil Kumar Kakelli
      Issue 35 , Volume 9 , Summer 2021
      Videos are a popular way of sharing content on social media and capturing the audience’s attention. Artificial manipulation of videos is growing rapidly to make them flashy and interesting, but such videos can easily be misused to spread false information on social media platforms. DeepFake is a problematic method of video manipulation in which artificial components are added to a video using emerging deep learning techniques. Due to the increasing accuracy of deep fake generation methods, artificially created videos are no longer easily detectable and pose a major threat to social media users. To address this growing problem, we propose a new method for detecting deep fake videos using a 3D Inflated Xception Net with Discrete Fourier Transformation. Xception Net was originally designed for application to 2D images only; the proposed method is the first attempt to use a 3D Xception Net for categorizing video-based data. The advantage of the proposed method is that it works on the whole video rather than a subset of frames during categorization. The proposed model was tested on the popular Celeb-DF dataset and achieved better accuracy.

    • Open Access Article

      3 - An Efficient Method for Handwritten Kannada Digit Recognition based on PCA and SVM Classifier
      Ramesh G, Prasanna G B, Santosh V Bhat, Chandrashekar Naik, Champa H N
      Issue 35 , Volume 9 , Summer 2021
      Handwritten digit recognition is one of the classical problems in image classification, a subfield of computer vision. Handwritten digits occur in abundance, and the problem of recognizing them using computer vision and machine learning techniques has long been a well-studied field, which has undergone exceptional development since the advent of machine learning techniques. Using Support Vector Machine (SVM) and Principal Component Analysis (PCA), a robust and fast method to solve the problem of handwritten digit recognition for the Kannada language is introduced. In this work, the Kannada-MNIST dataset is used for digit recognition to evaluate the performance of SVM and PCA. Efforts were made previously to recognize handwritten digits of other languages with this approach; however, due to the lack of a standard MNIST dataset for Kannada numerals, Kannada handwritten digit recognition lagged behind. With the introduction of the MNIST dataset for Kannada digits, we move towards solving the problem and show how applying PCA for dimensionality reduction before the SVM classifier increases the accuracy of the RBF kernel. 60,000 images are used for training and 10,000 images for testing the model, and an accuracy of 99.02% on validation data and 95.44% on test data is achieved. Performance measures such as precision, recall, and F1-score are also evaluated for the method.
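      The PCA dimensionality-reduction step described above can be sketched as follows. This is an illustrative toy example using random stand-in data of the same 784-dimensional shape as MNIST-style images, not the Kannada-MNIST dataset and not the authors' implementation; the reduced features would then be fed to an SVM classifier.

      ```python
      import numpy as np

      def pca_fit(X, n_components):
          """Fit PCA: centre the data and keep the top principal directions."""
          mean = X.mean(axis=0)
          Xc = X - mean
          # SVD of the centred data; rows of Vt are the principal directions.
          _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
          return mean, Vt[:n_components]

      def pca_transform(X, mean, components):
          """Project data onto the retained principal directions."""
          return (X - mean) @ components.T

      # Random stand-in for 784-dimensional digit images.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 784))
      mean, comps = pca_fit(X, n_components=50)
      Z = pca_transform(X, mean, comps)
      print(Z.shape)  # reduced representation, one 50-dim row per image
      ```

      In the paper's setting the reduced matrix `Z` replaces the raw pixels as input to an RBF-kernel SVM, which is where the reported accuracy gain comes from.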

    • Open Access Article

      4 - Overcoming the Link Prediction Limitation in Sparse Networks using Community Detection
      Mohammad Pouya Salvati, Jamshid Bagherzadeh Mohasefi, Sadegh Sulaimany
      Issue 35 , Volume 9 , Summer 2021
      Link prediction seeks to detect missing links and links that may be established in the future, given the network structure or node features. Numerous methods have been presented for improving the basic unsupervised neighbourhood-based methods of link prediction. A major issue confronted by all these methods is that many of the available networks are sparse, which results in a high volume of computation, longer processing times, larger memory requirements, and poorer results. This research presents a new, distinct method for link prediction based on community detection in large-scale sparse networks. The communities in the network are first identified, and link prediction is then performed within each obtained community using neighbourhood-based methods. Next, link prediction is carried out between the clusters in a specified manner to make maximal use of the network capacity. The community detection algorithms used are Best Partition, Link Community, Infomap, and Girvan-Newman, and the datasets used in the experiments are Email, HEP, REL, Wikivote, Word, and PPI. Three measures are used to evaluate the proposed method: precision, computation time, and AUC. The results obtained over the different datasets demonstrate that unnecessary calculations are avoided and precision is increased, while runtime is also reduced considerably. Moreover, in many cases the Best Partition community detection method yields good results compared to the other community detection algorithms.
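      The within-community step of this approach can be sketched with a simple common-neighbours predictor restricted to each community. The toy graph and two hand-picked communities below are illustrative assumptions, not the paper's datasets or its exact between-cluster procedure.

      ```python
      from itertools import combinations

      def common_neighbour_scores(adj, nodes):
          """Score each absent pair within `nodes` by its number of common neighbours."""
          scores = {}
          for u, v in combinations(sorted(nodes), 2):
              if v not in adj[u]:  # only score links that do not yet exist
                  scores[(u, v)] = len(adj[u] & adj[v])
          return scores

      def predict_within_communities(adj, communities):
          """Run the neighbourhood-based predictor separately inside each community,
          skipping all cross-community pairs and so avoiding most of the O(n^2) work."""
          scores = {}
          for community in communities:
              scores.update(common_neighbour_scores(adj, community))
          return scores

      # Toy sparse graph split into two communities.
      adj = {
          1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3},
          5: {6}, 6: {5, 7}, 7: {6},
      }
      communities = [{1, 2, 3, 4}, {5, 6, 7}]
      print(predict_within_communities(adj, communities))  # {(1, 4): 2, (5, 7): 1}
      ```

      Restricting candidate pairs to each community is what keeps the computation and memory small on sparse networks; the paper then adds a separate pass for inter-cluster links.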

    • Open Access Article

      5 - Diagnosis of Gastric Cancer via Classification of the Tongue Images using Deep Convolutional Networks
      Elham Gholam, Seyed Reza Kamel Tabbakh, Maryam Khairabadi
      Issue 35 , Volume 9 , Summer 2021
      Gastric cancer is the second most common cancer worldwide, responsible for the death of many people. One of the issues regarding this disease is the absence of early and accurate detection. In the medical industry, gastric cancer is diagnosed by conducting numerous tests and imaging procedures, which are costly and time-consuming; therefore, doctors are seeking a cost-effective and time-efficient alternative. One medical solution is Chinese medicine and diagnosis by observing changes of the tongue: detecting disease using the appearance and color of various sections of the tongue is one of the key components of traditional Chinese medicine. In this study, a method is presented that can localize the tongue surface regardless of the different poses of people in the images. In fact, if the localization of face components, especially the mouth, is done correctly, the components leading to the biggest distinction in the dataset can be used, which is favorable in terms of time and space complexity. Also, given the best estimation, the best features can be extracted relative to those components, and the best possible accuracy can be achieved. The extraction of appropriate features in this study is done using deep convolutional neural networks. Finally, we use the random forest algorithm to train the proposed model and evaluate the criteria. Experimental results show that the average classification accuracy reached approximately 73.78%, which demonstrates the superiority of the proposed method compared to other methods.

    • Open Access Article

      6 - Cluster-based Coverage Scheme for Wireless Sensor Networks using Learning Automata
      Ali Ghaffari, Seyyed Keyvan Mousavi
      Issue 35 , Volume 9 , Summer 2021
      Network coverage is one of the most important challenges in wireless sensor networks (WSNs). In a WSN, each sensor node has a sensing area coverage based on its sensing range. In most applications, sensor nodes are randomly deployed in the environment, which causes the density of nodes to become high in some areas and low in others. In this case, some areas are not covered by any sensor node; these areas are called coverage holes. Also, areas with high density lead to redundant overlapping, and as a result the network lifetime decreases. In this paper, a cluster-based scheme for the coverage problem of WSNs using learning automata is proposed. In the proposed scheme, each node creates the action and probability vectors of learning automata for itself and its neighbors, then determines the status of itself and all its neighbors, and finally sends them to the cluster head (CH). Afterward, each CH rewards or penalizes the vectors and sends the results back to the sender for updating purposes. Thereafter, among the received vectors, the CH node selects the best action vector and broadcasts it in the form of a message inside the cluster. Finally, each member changes its status in accordance with the vector included in the message received from the corresponding CH, and the active sensor nodes perform environment monitoring operations. The simulation results show that the proposed scheme improves both the network coverage and the energy consumption.
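      The reward/penalty update applied to the probability vectors can be illustrated with the standard linear reward-penalty (L_R-P) scheme from the learning automata literature. The learning rates and this exact formula are generic textbook choices, not necessarily the ones used in the paper.

      ```python
      def lrp_update(p, chosen, rewarded, a=0.1, b=0.05):
          """Linear reward-penalty update of an action-probability vector.

          p: action probabilities (sum to 1)
          chosen: index of the action that was performed
          rewarded: True if the environment rewarded the action
          a, b: reward and penalty learning rates
          """
          r = len(p)
          q = p[:]
          for i in range(r):
              if rewarded:
                  # move probability mass towards the rewarded action
                  q[i] = p[i] + a * (1 - p[i]) if i == chosen else p[i] * (1 - a)
              else:
                  # move probability mass away from the penalised action
                  q[i] = p[i] * (1 - b) if i == chosen else b / (r - 1) + p[i] * (1 - b)
          return q

      p = [0.5, 0.5]
      p = lrp_update(p, chosen=0, rewarded=True)
      print(p)  # the chosen action's probability grows, the other shrinks
      ```

      Both branches preserve the property that the probabilities sum to 1, so the vector stays a valid distribution after every reward or penalty step.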

    • Open Access Article

      7 - Energy Efficient Cross Layer MAC Protocol for Wireless Sensor Networks in Remote Area Monitoring Applications
      R Rathna, L Mary Gladence, J Sybi Cynthia, V Maria Anu
      Issue 35 , Volume 9 , Summer 2021
      Sensor nodes are typically less mobile, much more limited in capabilities, and more densely deployed than traditional wired networks and mobile ad-hoc networks. General Wireless Sensor Networks (WSNs) are designed with electro-mechanical sensors that communicate data wirelessly. Nowadays the WSN has become ubiquitous: it is used in combination with the Internet of Things, and in many Big Data applications it serves in the lower layer for data collection. It is deployed in combination with several high-end networks, and all the higher-layer networks and application-layer services depend on the low-level WSN at the deployment site. Therefore, to achieve energy efficiency in the overall network, some simplification strategies have to be carried out not only in the Medium Access Control (MAC) layer but also in the network and transport layers. An energy-efficient algorithm for scheduling and clustering is proposed and described in detail. The proposed methodology clusters the nodes using a traditional yet simplified approach of hierarchically sorting the sensor nodes. A few important works on cross-layer protocols for WSNs are reviewed, and an attempt to modify their pattern is also presented in this paper with results, together with a comparison against a few prominent protocols in this domain. The comparison gives a basic idea of which type of scheduling algorithm to use for which type of monitoring application.

    Most Viewed Articles

    • Open Access Article

      1 - Privacy Preserving Big Data Mining: Association Rule Hiding
      Golnar Assadat Afzali, Shahriyar Mohammadi
      Issue 14 , Volume 4 , Spring 2016
      Data repositories contain sensitive information which must be protected from unauthorized access. Existing data mining techniques can be considered a privacy threat to sensitive data. Association rule mining is one of the foremost data mining techniques, which tries to discover relationships between seemingly unrelated data in a database. Association rule hiding is a research area in privacy-preserving data mining (PPDM) which addresses the problem of hiding sensitive rules within the data. Much research has been done in this area, but most of it focuses on reducing the undesired side effects of deleting sensitive association rules in static databases. However, in the age of big data, we are confronted with dynamic databases in which new data may arrive at any time, so most existing techniques are not practical and must be updated to suit these huge databases. In this paper, a data anonymization technique is used for association rule hiding, while parallelization and scalability features are also embedded in the proposed model in order to speed up the big data mining process. In this way, instead of removing some instances of an existing important association rule, generalization is used to anonymize items at the appropriate level, so that, if necessary, important association rules can be updated based on new data arrivals. We conducted experiments using three datasets to evaluate the performance of the proposed model in comparison with Max-Min2 and HSCRIL. Experimental results show that the information loss of the proposed model is less than that of existing research in this area, and that the model can be executed in a parallel manner for a shorter execution time.

    • Open Access Article

      2 - Instance Based Sparse Classifier Fusion for Speaker Verification
      Mohammad Hasheminejad, Hassan Farsi
      Issue 15 , Volume 4 , Summer 2016
      This paper focuses on the problem of ensemble classification for text-independent speaker verification. Ensemble classification is an efficient method for improving the performance of a classification system, as it gains the advantage of a set of expert classifiers. A speaker verification system receives an input utterance and an identity claim, then verifies the claim in terms of a matching score. This score determines the resemblance between the input utterance and the pre-enrolled target speakers. Since there is a variety of information in a speech signal, state-of-the-art speaker verification systems use a set of complementary classifiers to provide a reliable decision about the verification. Such a system receives some scores as input and takes a binary decision: accept or reject the claimed identity. Most recent studies on classifier fusion for speaker verification use a weighted linear combination of the base classifiers, with the corresponding weights estimated using logistic regression. Additional research has been performed on ensemble classification by adding different regularization terms to the logistic regression formulae. However, two points are missing in this type of ensemble classification: the correlation of the base classifiers and the superiority of some base classifiers for each test instance. We address both problems with an instance-based classifier ensemble selection and weight determination method. Our extensive studies on the NIST 2004 speaker recognition evaluation (SRE) corpus in terms of EER, minDCF, and minCLLR show the effectiveness of the proposed method.

    • Open Access Article

      3 - COGNISON: A Novel Dynamic Community Detection Algorithm in Social Network
      Hamideh Sadat Cheraghchi, Ali Zakerolhossieni
      Issue 14 , Volume 4 , Spring 2016
      The problem of community detection has a long tradition in the data mining area and has many challenging facets, especially when it comes to community detection in time-varying contexts. While recent studies argue for the usability of social science disciplines in modern social network analysis, we present a novel dynamic community detection algorithm called COGNISON, inspired mainly by social theories. To be specific, we take inspiration from prototype theory and cognitive consistency theory to recognize the best community for each member, formulating the community detection algorithm by analogy with human disciplines. COGNISON belongs to the representative-based algorithm category and hints at further fortifying the purely mathematical approach to community detection with established social science disciplines. The proposed model is able to determine the proper number of communities with high accuracy in both weighted and binary networks. Comparison with state-of-the-art algorithms proposed for dynamic community discovery on real datasets shows the higher performance of this method in different measures of accuracy, NMI, and entropy for detecting communities over time. Finally, our approach motivates the application of human-inspired models in the dynamic community detection context and suggests the fruitfulness of connecting the community detection field with social science theories.

    • Open Access Article

      4 - Node Classification in Social Network by Distributed Learning Automata
      Ahmad Rahnama Zadeh, Meybodi, Masoud Taheri Kadkhoda
      Issue 18 , Volume 5 , Spring 2017
      The aim of this article is to improve the accuracy of node classification in social networks using Distributed Learning Automata (DLA). In the proposed algorithm, new relations between nodes are created using a local similarity measure; the graph is then partitioned according to the labeled nodes, and a network of Distributed Learning Automata is assigned to each partition. In each partition, the maximal spanning tree is determined using DLA. Finally, nodes are labeled according to the rewards of the DLA. We have tested this algorithm on three real social network datasets, and the results show that the expected accuracy of the presented algorithm is achieved.

    • Open Access Article

      5 - A Bio-Inspired Self-configuring Observer/ Controller for Organic Computing Systems
      Ali Tarihi, Haghighi, Feridon Shams
      Issue 15 , Volume 4 , Summer 2016
      The increase in the complexity of computer systems has led to a vision of systems that can react and adapt to changes. Organic computing is a bio-inspired computing paradigm that applies ideas from nature as solutions to such concerns. This bio-inspiration leads to the emergence of life-like properties, called self-* properties in general, which suit such systems well for pervasive computing. Achieving these properties in organic computing systems is closely related to a proposed general feedback architecture, the observer/controller architecture, which supports the mentioned properties by interacting with the system components and keeping their behavior under control. As one of these properties, self-configuration is desirable in the application of organic computing systems, as it enables adaptation to environmental changes. However, adaptation at the level of the architecture itself has not yet been studied in the literature on organic computing systems, which limits the achievable level of adaptation. In this paper, a self-configuring observer/controller architecture is presented that takes self-configuration to the architecture level. It enables the system to choose the proper architecture from a variety of possible observer/controller variants available for a specific environment. The validity of the proposed architecture is formally demonstrated. We also show the applicability of this architecture through a known case study.

    • Open Access Article

      6 - Publication Venue Recommendation Based on Paper’s Title and Co-authors Network
      Ramin Safa, Seyed Abolghassem Mirroshandel, Soroush Javadi, Mohammad Azizi
      Issue 21 , Volume 6 , Winter 2018
      Information overload has always been a remarkable topic in scientific research, and one of the available approaches in this field is employing recommender systems. With the spread of these systems in various fields, studies show the need for more attention to applying them in scientific applications. Applying recommender systems to the scientific domain, for tasks such as paper recommendation, expert recommendation, citation recommendation, and reviewer recommendation, is a new and developing topic. With the significant growth in the number of scientific events and journals, one of the most important issues is choosing the most suitable venue for publishing a paper, and a tool to accelerate this process is necessary for researchers. Despite the importance of such systems in accelerating the publication process and decreasing possible errors, this problem has been little studied in related work. In this paper, an efficient approach is therefore suggested for recommending related conferences or journals for a researcher’s specific paper. In other words, our system is able to recommend the most suitable venues for publishing a written paper, by means of social network analysis and content-based filtering, according to the researcher’s preferences and the co-authors’ publication history. The results of an evaluation using real-world data show acceptable accuracy in venue recommendations.
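      The content-based side of such a venue recommender can be sketched as a cosine-similarity ranking of bag-of-words venue profiles against the paper title. The venue names and profiles below are hypothetical, and the paper's actual system also uses co-author network analysis, which this sketch omits.

      ```python
      from collections import Counter
      from math import sqrt

      def cosine(a: Counter, b: Counter) -> float:
          """Cosine similarity between two bag-of-words vectors."""
          dot = sum(a[w] * b[w] for w in a)
          na = sqrt(sum(v * v for v in a.values()))
          nb = sqrt(sum(v * v for v in b.values()))
          return dot / (na * nb) if na and nb else 0.0

      def recommend_venues(paper_title, venue_profiles, top_k=2):
          """Rank venues by similarity between the paper title and each venue profile."""
          q = Counter(paper_title.lower().split())
          ranked = sorted(venue_profiles.items(),
                          key=lambda kv: cosine(q, Counter(kv[1].lower().split())),
                          reverse=True)
          return [name for name, _ in ranked[:top_k]]

      # Hypothetical profiles built from titles each venue published previously.
      venues = {
          "VenueA": "deep learning image recognition neural networks",
          "VenueB": "wireless sensor networks energy routing",
          "VenueC": "recommender systems collaborative filtering",
      }
      print(recommend_venues("energy efficient routing in sensor networks", venues, top_k=1))
      ```

      A real system would weight terms (e.g. TF-IDF) and combine this content score with signals from the co-authors' publication history.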

    • Open Access Article

      7 - The Surfer Model with a Hybrid Approach to Ranking the Web Pages
      Javad Paksima
      Issue 15 , Volume 4 , Summer 2016
      Users who seek results pertaining to their queries come first. To meet users’ needs, thousands of webpages must be ranked, which requires an efficient algorithm to place the relevant webpages in the first ranks. In information retrieval, it is highly important to design a ranking algorithm that provides results pertaining to the user’s query, given the great deal of information on the World Wide Web. In this paper, a ranking method is proposed with a hybrid approach that considers both the content and the connections of pages. The proposed model is a smart surfer that passes or hops from the current page to one of the externally linked pages with respect to their content. A probability, obtained using learning automata together with the content of and links to pages, is used to select the webpage to hop to; for a transition to another page, the content of the pages linked to it is used. As the surfer moves across the pages, the PageRank score of each page is recursively calculated. Two standard datasets, TD2003 and TD2004, which are subsets of the LETOR3 dataset, were used to evaluate the proposed method. The results indicated the superior performance of the proposed approach over other methods introduced in this area.
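      The recursive score computation the surfer performs can be illustrated with plain PageRank power iteration. For simplicity, this sketch hops uniformly over out-links rather than biasing the hop by page content and learning automata as the paper does, so it shows the baseline the hybrid model builds on.

      ```python
      def pagerank(links, damping=0.85, iterations=50):
          """Plain PageRank by power iteration.

          links: dict mapping each page to the list of pages it links to.
          """
          pages = list(links)
          n = len(pages)
          rank = {p: 1.0 / n for p in pages}
          for _ in range(iterations):
              new = {p: (1 - damping) / n for p in pages}
              for p, outs in links.items():
                  if outs:
                      # spread this page's rank evenly over its out-links
                      share = damping * rank[p] / len(outs)
                      for q in outs:
                          new[q] += share
                  else:
                      # dangling page: spread its rank over all pages
                      for q in pages:
                          new[q] += damping * rank[p] / n
              rank = new
          return rank

      ranks = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
      print(max(ranks, key=ranks.get))  # "c" collects the most rank
      ```

      The hybrid surfer in the paper replaces the uniform `share` with a content-dependent transition probability learned by automata, while the recursive score accumulation stays the same.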

    • Open Access Article

      8 - Short Time Price Forecasting for Electricity Market Based on Hybrid Fuzzy Wavelet Transform and Bacteria Foraging Algorithm
      Keyvan Borna, Sepideh Palizdar
      Issue 16 , Volume 4 , Autumn 2016
      Predicting the price of electricity is very important because electricity cannot be stored. To this end, parallel methods and adaptive regression have been used in the past, but because of their dependence on the ambient temperature, they did not yield good results. In this study, linear prediction methods, neural networks, and fuzzy logic have been studied and emulated, and an optimized fuzzy-wavelet prediction method is proposed to predict the price of electricity. In this method, to obtain a better prediction, the membership functions of the fuzzy regression together with the type of wavelet transform filter have been optimized using the E. coli Bacterial Foraging Optimization Algorithm. To better compare this optimized method with other prediction methods, including conventional linear prediction and neural network methods, all were analyzed on the same electricity price data. Our fuzzy-wavelet method yields a more desirable solution than previous methods: by choosing a suitable filter and a multiresolution processing method, the maximum error improved by 13.6% and the mean squared error improved by about 17.9%. In comparison with the fuzzy prediction method, our proposed method has a higher computational cost due to the use of the wavelet transform as well as the double use of fuzzy prediction. Due to the large number of layers and neurons used in it, the neural network method has a much higher computational cost than our fuzzy-wavelet method.

    • Open Access Article

      9 - DBCACF: A Multidimensional Method for Tourist Recommendation Based on Users’ Demographic, Context and Feedback
      Maral Kolahkaj, Ali Harounabadi, Alireza Nikravan Shalmani, Rahim Chinipardaz
      Issue 24 , Volume 6 , Autumn 2018
      With the advent of some Web 2.0 applications such as social networks, which allow users to share media, many opportunities have been provided for tourists to recognize and visit attractive and unfamiliar Areas-of-Interest (AOIs). However, finding appropriate areas based on a user’s preferences is very difficult due to issues such as the huge number of tourist areas and the limited visiting time. In addition, the available methods have so far failed to provide accurate tourist recommendations based on geo-tagged media because of problems such as data sparsity, the cold-start problem, treating two users with different habits as the same (symmetric similarity), and ignoring the user’s personal and context information. Therefore, in this paper, a method called Demographic-Based Context-Aware Collaborative Filtering (DBCACF) is proposed to investigate the mentioned problems and to develop the Collaborative Filtering (CF) method so that it provides personalized tourist recommendations without users’ explicit requests. DBCACF considers demographic and contextual information in combination with the users' historical visits to overcome the limitations of CF methods in dealing with multi-dimensional data. In addition, a new asymmetric similarity measure is proposed in order to overcome the limitations of symmetric similarity methods. The experimental results on a Flickr dataset indicated that the use of demographic and contextual information and the addition of the proposed asymmetric scheme to the similarity measure significantly improve the obtained results compared to other methods that use only user-item ratings and symmetric measures.
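      The idea behind an asymmetric similarity measure can be illustrated with a simple asymmetric overlap of visit histories. This generic measure is a stand-in for illustration, not DBCACF's actual formula, and the place names are invented.

      ```python
      def asymmetric_similarity(visits_u, visits_v):
          """Asymmetric overlap: the fraction of u's visit history shared with v.

          Unlike symmetric measures (e.g. Jaccard), sim(u, v) != sim(v, u)
          whenever the two users have visited different numbers of areas,
          so a casual user and a heavy traveller are no longer treated alike.
          """
          shared = visits_u & visits_v
          return len(shared) / len(visits_u) if visits_u else 0.0

      u = {"museum", "beach", "park"}
      v = {"museum", "beach", "park", "zoo", "mall", "castle"}
      print(asymmetric_similarity(u, v))  # 1.0: all of u's visits are shared
      print(asymmetric_similarity(v, u))  # 0.5: only half of v's visits are shared
      ```

      A symmetric measure would assign the same score in both directions, hiding the fact that u's entire history lies inside v's, which is exactly the distinction the asymmetric scheme preserves.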

    • Open Access Article

      10 - Promote Mobile Banking Services by using National Smart Card Capabilities and NFC Technology
      Reza Vahedi Sayed Esmaeail Najafi Farhad Hosseinzadeh Lotfi
      Issue 15 , Volume 4 , Summer 2016
      With a mobile banking system, customers can install an application on their phone and perform a limited set of banking operations, such as checking the account balance, transferring funds, and paying bills, at any hour of the day without visiting a branch. The second password of the bank card is the only security facility provided for mobile banking and financial transactions; this alone cannot provide reasonable security, and for greater protection, and to prevent theft and misuse of citizens' bank accounts, banks limit the services offered. Using NFC (Near Field Communication) technology, the identity and biometric information and the key pair stored on the national smart card chip can be exchanged with the mobile phone and the mobile banking system, enabling identification, authentication, and the digital signing of documents, and thus enhancing the security of mobile banking services. This research applies library studies, the opinions of experts in information technology and electronic banking, and the DEMATEL analysis method, with the aim of investigating the possibility of promoting mobile banking services by using national smart card capabilities and NFC technology to overcome the obstacles and risks mentioned above. The results confirm the research hypothesis and show that the proposed solutions can be implemented in the Iranian banking system.
    Upcoming Articles

    • Open Access Article

      1 - Evaluating the Cultural Anthropology of Artefacts of Computer Mediated Communication: A Case of Law Enforcement Agencies
      Chukwunonso Henry Nwokoye Njideka N. Mbeledogu Chikwe Umeugoji
      The renowned cultural model orientations proposed by Hall and Hofstede have been the subject of criticism, due to the weak, inflexible, and old-fashioned nature of some designs derived from them, as well as the ever-changing, formless, and undefined nature of culture and globalization. These criticisms have led to better clarifications for assessing the cultural anthropology of websites. Based on these later clarifications and other additions, we evaluate the cultural heuristics of websites owned by agencies of the Nigerian government. This is particularly necessary because older models did not include Africa in their analyses. Specifically, we employed an online survey, distributing questionnaires to groups of experts drawn from the various regions of Nigeria. The experts used methods such as manual inspection and automated tools to reach their conclusions. The results were then assembled and, by simple majority, we decided whether each design parameter is high or low context. Findings show that website developers tend to favor low-context styles when choosing design parameters. The paper attempts to situate Africa in Hall's continuum; therein, Nigeria (Africa) may fall between French Canadian and Scandinavian, or between Latin and Scandinavian, for the left-hand and right-hand side diagrams respectively. In future work, we will study the cultural anthropology of African websites using the design parameters proposed by Alexander et al.

    • Open Access Article

      2 - Digital Transformation Model, Based on Grounded Theory
      Abbas Khamseh Mohammad Ali Mirfallah Lialestani Reza Radfar
      Given the emergence of digital transformation from Industry 4.0 and the rapid spread of technological innovations, as well as their impact as a strong driving force in new businesses, the dimensions of this core factor should be identified as quickly as possible, providing a comprehensive overview of all aspects of the model. The purpose of this article is to provide insight into the state of the art of digital transformation in recent years and to suggest directions for future research. The analysis maps the subject literature into categories so that, with the help of a number of experts, evolutionary trends can be identified and researched further. With this deeper understanding of the subject, we have attempted to identify existing gaps. The findings suggest that organizations of all sizes must adapt their business strategy to the realities of digital transformation. This will largely lead to changed business processes and to managing operations in a new, more intelligent, tool-based way. Organizations will evolve not just on their own but across the whole value chain, and this will clearly change the way they produce and deliver value. Researchers also face challenges: because of the potential of technological innovations, all previous research appears to have identified and mapped only part of the opportunities and challenges of digital transformation.

    • Open Access Article

      3 - Predicting Student Performance for Early Intervention using Classification Algorithms in Machine Learning
      Kalaivani K Ulagapriya K Saritha A Ashutosh Kumar
      The Predicting Student's Performance system aims to find students who may require early intervention before they fail to graduate. It is intended for teaching faculty members to analyze student performance and results. It stores student details in a database and uses a machine learning model, together with Python data analysis tools such as Pandas and data visualization tools such as Seaborn, to analyze the overall performance of the class. The proposed system predicts student performance using machine learning algorithms and data mining techniques. The data mining technique used here is classification, which classifies students based on their attributes. The front end of the application is built with the React JS library and data visualization charts, connected to a backend where all student records are stored in MongoDB; the machine learning model is trained and deployed through Flask. In this process, the machine learning algorithm is trained on a dataset to create a model and predict outputs from that model. Three types of data are used in machine learning: continuous, categorical, and binary. In this study, a brief description and comparative analysis of various classification techniques is given using a student performance dataset. Six machine learning classification algorithms are compared: Logistic Regression, Decision Tree, K-Nearest Neighbors, Naïve Bayes, Support Vector Machine, and Random Forest. The results of the Naïve Bayes classifier are higher than those of the other techniques on metrics such as precision, recall, and F1 score, with values of 0.93, 0.92, and 0.92 respectively.
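The reported precision, recall, and F1 values can be computed mechanically from predicted and true labels; a minimal pure-Python sketch (the toy labels below are illustrative, not the study's data):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Binary precision, recall and F1 score from paired label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: 1 = "needs intervention", 0 = "on track"
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]
p, r, f = precision_recall_f1(y_true, y_pred)  # each 0.8 here
```

In practice the study's scikit-learn-style classifiers would supply `y_pred`; the metric definitions are the standard ones and do not depend on which classifier produced the predictions.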

    • Open Access Article

      4 - Proposing Real-time Parking Systems for Smart Cities using Multiview Cameras
      Phat Nguyen Huu Loc Bao Hoang
      Today, cars have become a popular means of transport. This rapid development has resulted in an increasing demand for private parking, and finding a parking space in urban areas is therefore extremely difficult for drivers. Another serious problem is that parking on the roadway has serious consequences such as traffic congestion. As a result, various solutions have been proposed for basic functions such as detecting a free space or determining the position of a parking spot to guide the driver. In this paper, we propose a system that not only detects free spaces but also identifies each vehicle by its license plate. The proposed system includes two cameras with two independent functions, a Skyeye camera and an LPR camera. The Skyeye module detects and tracks vehicles, while the automatic license plate recognition (ALPR) module detects and identifies license plates. The system thus not only helps drivers find a suitable parking space but also manages and controls vehicles effectively for street parking. Moreover, it can detect offending vehicles parked on the roadway based on their identity. We also collected a dataset that correctly reflects the target context in order to increase the system's performance. The accuracy of the proposed system is 99.48%, which shows the feasibility of applying it in real environments.

    • Open Access Article

      5 - An Automatic Thresholding Approach to Gravitation-Based Edge Detection in Grey-Scale Images
      Hamed Agahi Kimia Rezaei
      This paper presents an optimal auto-thresholding approach for the gravitational edge detection method in grey-scale images. The goal of this approach is to enhance the performance of the edge detector in both clean and noisy conditions. To this end, an optimal threshold is found automatically, according to which the proposed method dichotomizes pixels into edges and non-edges. First, some pre-processing operations are applied to the image. Then, the vector sum of the gravitational forces applied to each pixel by its neighbours is computed according to the universal law of gravitation. The force magnitude is then mapped to a new characteristic called the force feature. The histogram of this feature is determined, over which an optimal threshold is sought. Three thresholding techniques are proposed, two of which contain iterative processes. The parameters of the formulation used in these techniques are adjusted by means of the metaheuristic grasshopper optimization algorithm. To evaluate the proposed system, two standard databases and multiple qualitative and quantitative measures were used. The results confirm that our method outperforms some conventional and recent detectors, and that its outputs are highly similar to the ideal edge maps.
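The gravitational analogy can be sketched in a few lines: grey levels act as masses, each neighbour pulls the centre pixel with force G·m1·m2/r², and the magnitude of the vector sum is the force feature. This is a simplified sketch over an 8-neighbourhood with G = 1, not the paper's exact formulation, and the toy images are illustrative:

```python
import math

def gravitational_force_magnitude(img, x, y, G=1.0):
    """Magnitude of the net 'gravitational' pull on pixel (x, y) from its
    8 neighbours, treating grey levels as masses."""
    fx = fy = 0.0
    m_p = img[y][x]
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            nx, ny = x + dx, y + dy
            if 0 <= ny < len(img) and 0 <= nx < len(img[0]):
                r2 = dx * dx + dy * dy
                f = G * m_p * img[ny][nx] / r2   # |F| = G * m1 * m2 / r^2
                r = math.sqrt(r2)
                fx += f * dx / r                 # project onto the unit vector
                fy += f * dy / r                 # pointing towards the neighbour
    return math.hypot(fx, fy)

# Uniform regions pull symmetrically (near-zero net force); edges do not.
flat = [[100] * 3 for _ in range(3)]
step = [[50, 50, 255], [50, 50, 255], [50, 50, 255]]
```

Inside a flat region the pulls cancel, so the force feature is near zero; across an intensity step the pulls are unbalanced and the feature is large, which is exactly what thresholding the feature histogram exploits.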

    • Open Access Article

      6 - Word sense induction in Persian and English: A comparative study
      Masood Ghayoomi
      Words in a natural language have forms and meanings, and there is not always a one-to-one match between them. This property of language causes words to have more than one meaning and challenges a processing system to determine the meaning of a word in a sentence. Using a lexical resource such as an electronic dictionary or a lexical database might help, but because such resources are developed manually, they become outdated as time passes and the language changes. These drawbacks are strong motivations to use unsupervised machine learning approaches to induce word senses from natural data, for which clustering can be utilized. In this paper, we study the performance of a word sense induction model along three variables: (a) the target language: we run the induction process on Persian and English data; (b) the type of clustering algorithm: both parametric and non-parametric clustering algorithms are used to induce senses; (c) the context of the target words used to build the vectors for clustering: vectors are created either from the whole sentence in which the target word occurs, or from a limited window of words surrounding the target word. We evaluate the clustering performance externally, introducing a normalized evaluation metric to compare the models. The experimental results for both Persian and English show that the window-based partitioning K-means algorithm obtains the best performance on both datasets.
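The window-based setting contrasted with whole-sentence vectors above can be sketched as follows; the function name, the ±2 window size, and the example sentence are illustrative assumptions, and in the real pipeline such bag-of-words vectors would feed the K-means clustering:

```python
from collections import Counter

def window_vector(tokens, index, window=2):
    """Bag-of-words context vector from a +/-window span around
    tokens[index], excluding the target word itself."""
    lo, hi = max(0, index - window), min(len(tokens), index + window + 1)
    return Counter(t for i, t in enumerate(tokens[lo:hi], lo) if i != index)

sentence = "deposit the money in the bank before noon".split()
vec = window_vector(sentence, sentence.index("bank"))
# vec counts only "in", "the", "before", "noon" -- the limited context --
# whereas a sentence-based vector would also count "deposit" and "money".
```

Occurrences of an ambiguous target whose window vectors are similar end up in the same cluster, i.e. the same induced sense.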

    • Open Access Article

      7 - A Threshold-based Brain MRI Segmentation using Multi-Objective Particle Swarm Optimization
      Arun Kumar Ravi Boda
      A multi-objective optimization technique called Multi-Objective Particle Swarm Optimization (MO-PSO) is introduced in this paper for image segmentation and is used to detect tumours of the human brain in MR images. To find the threshold, the proposed algorithm uses two fitness (objective) functions, image entropy and image variance. These two objective functions are distinct from each other and are optimized simultaneously to create a set of Pareto-optimal solutions. The MO-PSO technique, tested on various MRI images, demonstrates its efficiency through experimental findings. In terms of the best, worst, mean, median, and standard deviation parameters, MO-PSO is also compared with the existing single-objective PSO (SO-PSO) technique. Experimental results show that MO-PSO improves on SO-PSO by 28% for the 'best' parameter with respect to the image entropy function, and by 92% with respect to the image variance function.
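The two objective functions named above are standard quantities; a minimal sketch of how they would be evaluated over a list of grey levels (the objectives only, with illustrative pixel values; the PSO search itself and the paper's exact segmentation-dependent formulation are omitted):

```python
import math
from collections import Counter

def entropy_and_variance(pixels):
    """Shannon entropy of the intensity histogram (bits) and the
    intensity variance of a grey-level list."""
    n = len(pixels)
    hist = Counter(pixels)
    entropy = -sum((c / n) * math.log2(c / n) for c in hist.values())
    mean = sum(pixels) / n
    variance = sum((p - mean) ** 2 for p in pixels) / n
    return entropy, variance

# In MO-PSO, each particle encodes a candidate threshold; both objectives
# are evaluated on the resulting segmentation, and the non-dominated
# (Pareto-optimal) particles are retained.
e, v = entropy_and_variance([0, 0, 255, 255])  # e = 1.0 bit, v = 16256.25
```

Because entropy and variance generally trade off against each other, no single threshold maximizes both, which is why the algorithm returns a Pareto set rather than one solution.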
  • Affiliated to
    Iranian Academic Center for Education, Culture and Research
    Editor in Chief
    Masood Shafiei (Amir Kabir)
    Internal Manager
    Editorial Board
    Ali Mohammad Djafari (CNRS, France), Rahim Saeidi (Aalto University, Finland), Abdolali Abdipour (Amirkabir University of Technology), Mahmoud Naghibzadeh (Ferdowsi University of Mashhad), Zabih Ghasemlooy (University of Northumbria), Mahmoud Moghavemi (University of Malaya), Aliakbar Jalali (Iran University of Science and Technology), Ramazan Ali Sadeghzadeh (ACECR), Hamidreza Sadegh Mohammadi (ACECR), Saeed Ghazimaghrebi (ACECR), Shaban Elahi (Tarbiat Modares University), Alireza Montazemi (McMaster University), Shohreh Kasaei (Sharif University of Technology), Mehrnoush Shamsfard (Shahid Beheshti University)
    ISSN: 2322-1437
    eISSN: 2345-2773
    Email
    infojist@gmail.com
    Address
    No.5, Saeedi Alley, Kalej Intersection., Enghelab Ave., Tehran, Iran.
    Phone
    +98 21 88930150

    Statistics

    Number of Issues 9
    Count of Volumes 34
    Printed Articles 251
    Number of Authors 2004
    Article Views 543472
    Article Downloads 104647
    Number of Submitted Articles 1109
    Number of Rejected Articles 654
    Number of Accepted Articles 270
    Admission Time (Days) 163
    Reviewer Count 718