Proceedings

The registration fee is 250 USD. A hard copy of the proceedings will be distributed during the conference. The soft copy will be available in the AIRCC Digital Library.

Accepted Papers
  • Replication Strategy Based on Data Relationship in Grid Computing
    Yuhanis Yusof, Universiti Utara Malaysia, Malaysia
    ABSTRACT
    This study discusses the utilization of three types of relationships in performing data replication. As grid computing offers the ability to share huge amounts of resources, resource availability is an important issue to address. The undertaken approach combines the viewpoints of the user, the system and the grid itself in ensuring resource availability. The realization of the proposed strategy is demonstrated via OptorSim, and evaluation is made based on execution time, storage usage, network bandwidth and computing element usage. Results suggest that the proposed strategy produces a better outcome than an existing method even when various job workloads are introduced.
  • Automated Policy Compliance and Change Detection Managed Service in Data Networks
    Saeed Agbariah, George Mason University, USA
    ABSTRACT
    As networks continue to grow in size, speed and complexity, as well as in the diversification of their services, they require many ad-hoc configuration changes. Such changes may lead to potential configuration errors, policy violations, inefficiencies and vulnerable states. Even the best administrators can make mistakes, and the cost of missing a key configuration or accidentally skipping an asset could be catastrophic. Manual, labor-intensive network auditing, or more recently products that use dedicated configuration-compliance scanning appliances to verify individual system configurations, can lead to the discovery of configuration errors and policy violations; however, the discovery is seldom in real time, which may lead to potential system outages, service disruptions and security risks.
  • A Novel Approach to Smoothing on 3D Structured Adaptive Mesh of the Kinect-Based Models
    Erdal Ozbay and Ahmet Cınar, Fırat University, Turkey
    ABSTRACT
    This study suggests 3-dimensional modelling of steady-state real-world objects from multiple point cloud depth scans taken with a sensing camera, together with the application of a smoothing algorithm. A polygon structure, constituted by the point cloud coordinates (x, y, z) corresponding to the position of the 3D model in space and obtained by connecting nodal points through triangulation, is utilized for the demonstration of 3D models. Gaussian smoothing and the methods developed here are applied to the mesh formed by merging these polygons, and a new mesh simplification and augmentation algorithm is suggested for the 3D modelling. The merged polygon mesh can then be demonstrated in a more compact, smooth and fluent way. This study shows that the applied triangulation and smoothing methods for 3D modelling produce fast and robust mesh structures compared to existing methods, and that no remeshing is necessary for refinement and reduction.
  • Extraction of Features for Predicting Patterns of Heart Disease
    Iqra Basharat, Mamuna Fatima, Ali Raza Anjum and Shoab Ahmed Khan, National University of Sciences & Technology, Pakistan
    ABSTRACT
    There is a huge amount of 'knowledge-enriched data' in hospitals, which needs to be processed in order to extract useful information from it. That knowledge-enriched data is very useful in making valuable medical decisions. However, there is a lack of effective analysis tools to discover hidden relationships in data. The objective of this research is to analyze heart patients' data and extract useful information that helps doctors make wise decisions. We have a huge quantity of historical unstructured data on patients in the form of their medical reports, along with unstructured doctors' remarks. In this research, the K-means clustering technique is used to extract features for predicting patterns of heart disease. Using patients' medical profiles such as age, sex, ECG, LVEF, EVS, blood pressure and previous history, significant features (e.g., male patients above 60 years with high blood pressure and hypertension having TVCAD) are extracted. Based on these extracted patterns, medical practitioners can make intelligent verdicts. The results of this study could be very constructive for medical researchers and can help medical teams and doctors suggest the best diagnosis for a disease.
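The K-means step described above can be sketched in a few lines of plain Python. This is a minimal illustrative sketch, not the authors' implementation; the (age, systolic blood pressure) values are fabricated for the example.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain-Python k-means: assign points to nearest centroid, recompute."""
    random.seed(seed)
    centroids = [list(c) for c in random.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        for c in range(k):
            if clusters[c]:  # keep old centroid if cluster went empty
                centroids[c] = [sum(xs) / len(xs) for xs in zip(*clusters[c])]
    return centroids, clusters

# Hypothetical (age, systolic blood pressure) patient profiles
patients = [(62, 160), (65, 155), (70, 165), (35, 120), (40, 118), (30, 115)]
centroids, clusters = kmeans(patients, k=2)
```

In practice one would use a library implementation and the full feature set (sex, ECG, LVEF, etc.) rather than two toy features.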
  • Theory and Practice of Wavelets in Signal Processing
    Jalal Karam, Nazarbayev University, Kazakhstan
    ABSTRACT
    The methods of Fourier, Laplace and Wavelet Transforms provide transfer functions and relationships between the input and output signals in linear time-invariant systems. This paper shows the equivalence among these three methods and in each case presents an application of the appropriate transform (Fourier, Laplace or Wavelet) to the convolution theorem. In addition, it is shown that the same holds for a direct integration method. The biorthogonal wavelets Bior3.5 and Bior3.9 are examined and the zero distributions of their associated filter polynomials are located. This paper also presents the significance of utilizing wavelets as effective tools in processing speech signals for common multimedia applications in general, and for recognition and compression in particular. Theoretically and practically, wavelets have proved to be effective and competitive. The practical use of the Continuous Wavelet Transform (CWT) in the processing and analysis of speech is then presented, along with explanations of how the human ear can be thought of as a natural wavelet transformer of speech. This generates a variety of approaches for applying the CWT to many paradigms for analysing speech, sound and music. For perception, the flexibility of implementation of this transform allows the construction of numerous scales, and we include two of them. Results for speech recognition and speech compression are then included.
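The convolution theorem the abstract refers to can be checked numerically: the DFT of a circular convolution equals the pointwise product of the DFTs. A small self-contained verification on fabricated signals:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real or complex sequence."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def circular_convolve(x, y):
    """Circular (periodic) convolution of two equal-length sequences."""
    N = len(x)
    return [sum(x[m] * y[(n - m) % N] for m in range(N)) for n in range(N)]

x = [1.0, 2.0, 0.0, -1.0]
y = [0.5, 1.0, -0.5, 0.25]

lhs = dft(circular_convolve(x, y))            # transform of the convolution
rhs = [a * b for a, b in zip(dft(x), dft(y))]  # product of the transforms
assert all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs))
```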
  • Learning of Robot Navigation Tasks by Probabilistic Neural Network
    Mücella OZBAY KARAKUS and Orhan ER, Bozok University, Turkey
    ABSTRACT
    This paper reports the results of artificial neural networks for robot navigation tasks. Machine learning methods have proven their usability in many complex problems concerning mobile robot control. In particular, we deal with the well-known strategy of navigating by "wall-following". In this study, a probabilistic neural network (PNN) structure was used for robot navigation tasks. The PNN result was compared with the results of the Logistic Perceptron, Multilayer Perceptron, Mixture of Experts and Elman neural networks, and with previous studies focusing on robot navigation tasks using the same dataset. It was observed that the PNN gives the best classification accuracy, 99.635%, on the same dataset.
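A PNN is essentially a Parzen-window classifier: each training sample contributes a Gaussian kernel to its class's density estimate, and the densest class wins. A toy sketch in plain Python; the 2-D features and action labels are invented, standing in for the wall-following sensor readings:

```python
import math

def pnn_predict(x, train, sigma=0.5):
    """Parzen-window (PNN-style) classifier.

    train: list of (feature_tuple, label); sigma is the kernel width.
    """
    scores = {}
    for feats, label in train:
        d2 = sum((a - b) ** 2 for a, b in zip(x, feats))
        scores[label] = scores.get(label, 0.0) + math.exp(-d2 / (2 * sigma ** 2))
    return max(scores, key=scores.get)

# Hypothetical (sensor1, sensor2) readings mapped to navigation actions
train = [((0.2, 0.1), "turn"), ((0.3, 0.2), "turn"),
         ((1.5, 1.4), "straight"), ((1.6, 1.5), "straight")]

assert pnn_predict((0.25, 0.15), train) == "turn"
assert pnn_predict((1.55, 1.45), train) == "straight"
```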
  • Simulation Optimization of Facility Layout Design Problem With Ambiguous Safety and Ergonomics Factors
    Ali Azadeh and Bita Moradi, Tehran University, Iran
    ABSTRACT
    This paper presents an integrated fuzzy simulation-fuzzy data envelopment analysis-fuzzy analytic hierarchy process algorithm for the optimization of a flow shop facility layout design problem with safety and ergonomics factors. First, safety and ergonomics factors are retrieved from a standard questionnaire. Then, feasible layout alternatives are generated by a software package. Third, FAHP is used for weighting non-crisp maintainability, accessibility and flexibility (or qualitative) indicators. Fuzzy simulation is subsequently used to incorporate the ambiguity associated with processing times in the flow shop with uncertain inputs. Finally, FDEA is used for finding the optimum layout alternatives with respect to ergonomics, safety, operational, qualitative and dependent indicators. The unique features of this study include the ability to deal with multiple non-crisp inputs and outputs, including ergonomics and safety factors. It also uses fuzzy mathematical programming to find optimum layout alternatives by considering safety and ergonomics factors as well as other standard indicators.
  • GPU-Based Image Segmentation Using Level Set Method With Scaling Approach
    Zafer GULER and Ahmet Çınar, Firat University, Turkey
    ABSTRACT
    In recent years, with the development of graphics processors, graphics cards have been widely used to perform general-purpose calculations. Especially since the release of the CUDA C programming language in 2007, most researchers have used CUDA C for processes that need high-performance computing.
    In this paper, a scaling approach for image segmentation using level sets is carried out with GPU programming techniques. Level-set approaches are mainly based on the solution of partial differential equations; the proposed method does not require solving a partial differential equation. Instead, a scaling approach that uses basic geometric transformations is applied, which reduces the required computational cost. Using CUDA programming on the GPU offers advantages over classic programming in execution time and performance, so results are obtained faster, and the use of the GPU enables real-time processing. The application developed in this study is used to find tumors in MRI brain images.
  • Research on Real-Time Defect Detection Methods
    Ying Hu1 and Huiqiang Lyu2, 1Hangzhou Dianzi University and 2Zhejiang University of Technology, China
    ABSTRACT
    With the rapid development of the printing and dyeing industry, defect detection has become a popular subject among researchers. This project focuses on an online defect detection method that can be applied to simple and inexpensive devices, so as to reduce the cost of detection. A hybrid method is proposed, consisting of a feature-point-based template matching technique using the SURF algorithm, the k-means algorithm, a contour extraction technique and some other approaches used for judging defects. None of these methods alone is sufficiently efficient. In this paper, we propose a new method combining the known methods, and our experimental results show that the proposed method has good performance and meets industrial requirements for real-time operation. In addition, our results can be used to adjust or fix printing and dyeing equipment, which makes this study more valuable.
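The paper's template matching uses SURF keypoints; the sketch below illustrates the underlying matching idea with plain normalized cross-correlation on a fabricated 1-D intensity profile instead (a defect-free template slid over a scan line):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length windows."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def best_match(signal, template):
    """Return the offset where the template correlates best with the signal."""
    w = len(template)
    scores = [ncc(signal[i:i + w], template) for i in range(len(signal) - w + 1)]
    return max(range(len(scores)), key=lambda i: scores[i])

signal = [10, 10, 50, 80, 50, 10, 10, 10]   # made-up scan-line intensities
template = [50, 80, 50]                     # made-up defect-free pattern
assert best_match(signal, template) == 2
```

Regions where no offset correlates well with the template are candidate defects; SURF-based matching adds robustness to rotation and scale that plain correlation lacks.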
  • Research on Competitiveness Evaluation of Information Industry in China
    Kailiang Wang, Jie Zhong, Yanyue Zheng, Chongqing University of Posts and Telecommunications, China
    ABSTRACT
    The information industry is playing an increasingly important role in transforming traditional technologies and promoting industrial upgrading as a representative of the new economy. This article starts from the development data of the information industry from 2008 to 2010 in 31 provinces. It selects 15 relevant basic indicators, uses the factor analysis method, and evaluates the development level of the information industry in all provinces in terms of the industry's development potential and productivity. It also compares the development level of the information industry in the eastern, western and central regions. On this basis, we derive the general development level of the information industry and provide tables that offer a theoretical basis for information industry development.
  • WLAN-Based Real Time Vehicle Locating Method
    Zhao Jianling, Beihang University, China
    ABSTRACT
    With the development of the Internet of Things and the Smart City, Location Based Service (LBS) applications are very extensive. This paper presents a wireless local area network (WLAN)-based real-time locating method for indoor vehicle localization. The precise stop position of the vehicle is available on the mobile phone; this information is valuable when the user wants to return to the vehicle. When continuing the journey from the parking garage, the user can also directly find a route to leave the garage by the right exit. The method presented in this paper realizes continuous, seamless and accurate indoor vehicle positioning.
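The abstract does not spell out the positioning algorithm; a common WLAN baseline is RSSI fingerprinting, matching measured signal strengths against a calibrated database. A sketch with invented access-point readings and parking-spot labels:

```python
def locate(measured, fingerprints):
    """Nearest-fingerprint WLAN positioning.

    measured: {AP name: RSSI in dBm} from the phone.
    fingerprints: {spot label: {AP name: RSSI}} calibrated offline.
    Missing APs are treated as a weak -100 dBm reading.
    """
    def dist(fp):
        return sum((measured.get(ap, -100) - rssi) ** 2 for ap, rssi in fp.items())
    return min(fingerprints, key=lambda spot: dist(fingerprints[spot]))

# Hypothetical calibration database for two parking spots
fingerprints = {
    "B2-17": {"AP1": -40, "AP2": -70},
    "B2-18": {"AP1": -65, "AP2": -45},
}
assert locate({"AP1": -42, "AP2": -68}, fingerprints) == "B2-17"
```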
  • Design of V-band Substrate Integrated Waveguide Power Divider, Circulator and Coupler
    Bouchra Rahali and Mohammed Feham, University of Tlemcen, Algeria
    ABSTRACT
    Recently there has been growing interest in a new technology, the substrate integrated waveguide (SIW), which has been applied successfully to the design of planar compact components for microwave and millimeter-wave applications. In this study, a V-band substrate integrated waveguide power divider, circulator and coupler are designed and optimized with the Ansoft HFSS code. Through this modeling, design considerations and results are discussed and presented. Attractive features, including compact size and planar form, make these devices easy to integrate into planar circuits.
  • Real Time Drowsy Driver Detection Using Haarcascade Samples
    Suryaprasad J, Sandesh D, Saraswathi V, Swathi D and Manjunath S, PES Institute of Technology South Campus, India
    ABSTRACT
    With the growth in population, the occurrence of automobile accidents has also increased. A detailed analysis shows that around half a million accidents occur per year in India alone. Further, around 60% of these accidents are caused by driver fatigue. Driver fatigue affects driving ability in the following three areas: a) it impairs coordination, b) it causes longer reaction times, and c) it impairs judgment. Through this paper, we provide a real-time monitoring system using image processing and face/eye detection techniques. Further, to ensure real-time computation, Haar-cascade samples are used to differentiate between an eye blink and drowsiness/fatigue.
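Once the Haar-cascade detector reports the eye state per frame, separating a blink from drowsiness reduces to counting consecutive eyes-closed frames: a blink lasts a few frames, drowsiness persists. The 15-frame threshold below is illustrative, not taken from the paper:

```python
def classify_frames(eyes_closed, drowsy_after=15):
    """Label each closed-eyes run as a blink or a drowsiness alert.

    eyes_closed: per-frame flags (1 = eyes closed), e.g. from a Haar cascade.
    """
    closed_run, events = 0, []
    for closed in eyes_closed:
        if closed:
            closed_run += 1
            if closed_run == drowsy_after:   # run just crossed the threshold
                events.append("DROWSY_ALERT")
        else:
            if 0 < closed_run < drowsy_after:
                events.append("blink")       # short run ended: a normal blink
            closed_run = 0
    return events

# 3 closed frames (blink), then 20 closed frames (drowsiness)
frames = [0] * 10 + [1] * 3 + [0] * 10 + [1] * 20 + [0] * 5
assert classify_frames(frames) == ["blink", "DROWSY_ALERT"]
```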
  • Trusting Identity Based Authentication on Hybrid Cloud Computing
    Hazem A Elbaz, Mohammed H Abd-Elaziz and Taymoor M Nazmy, Ain Shams University, Egypt
    ABSTRACT
    Nowadays users outsource their data to cloud providers, and the main research challenges in cloud computing are access control and data security. Security problems arise from the different kinds of cloud services that companies provide to internet users. Currently the majority of cloud computing systems provide a digital identity for users to access their services, which brings some inconvenience for a hybrid cloud that includes private and/or public clouds. Recently, identity based cryptography and hierarchical identity based cryptography have been proposed to address internet application threats. This paper is based on using identity management on a hybrid cloud: each user and server has its own trusted unique identity, so unique key distribution and mutual authentication can be greatly simplified.
  • Novel Approach for the Memory of the Computer
    W. A. C. Weerakoon, A. S. Karunananda and N. G. J. Dias, University of Moratuwa, Sri Lanka.
    ABSTRACT
    Computers with the Von Neumann architecture improve their processing power with the support of memory. This architecture has been improved by introducing different types of memories. Our research is inspired by the fact that humans are able to improve their memories through continuous processing in the mind. It is evident that we can start with a smaller memory to drive the processing, which in turn improves both the memory and the processing. This is analogous to how a person uses short notes to do processing on a larger knowledge base. We postulate that the processor uses the said smaller memory to access the bigger knowledge base, without directly accessing the knowledge base as in present computations. Thus we are researching the development of this small memory, called the tactics memory, as a novel memory for the Von Neumann architecture. In doing so, we have exploited the Buddhist theory of mind, which presents everything as a phenomenon that occurs when the related conditions are met. We have developed an algebra for modeling the tactics memory. The tactics memory can be introduced as either a software or a hardware solution for computations.
  • Modelling Organisational Policies in a Disaster Management Context
    Khaled Gaaloul1 and Henderik A. Proper1,2, 1Centre de Recherche Public Henri Tudor, Luxembourg and 2Radboud University Nijmegen, The Netherlands
    ABSTRACT
    Disaster management can be defined as the organisation and management of resources and responsibilities for dealing with all humanitarian aspects of emergencies. This paper is about organisational policies when assigning responsibilities during a flood scenario. Specifically, we focus on dynamic access control policies supporting delegation. Delegation is a dynamic behaviour involving a user passing his access control authorisations to other users within organisations. This defines one aspect of collaboration at the organisational level.
  • SBML for Optimizing Decision Support's Tools
    Dalila Hamami and Baghdad Atmani, Mostaganem University, Algeria
    ABSTRACT
    Many theoretical works and tools in the epidemiological field reflect the continually increasing emphasis on decision-making tools by both public health authorities and the scientific community.
    Indeed, in the epidemiological field, modeling tools are proving to be a very important aid to decision making. However, the variety and large volume of data and the nature of epidemics lead us to seek solutions that relieve the heavy burden imposed on both experts and developers.
    In this paper, we present a new approach: the translation of an epidemic model realized in Bio-PEPA into a narrative language using the basics of the SBML language. Our goal is to allow, on the one hand, epidemiologists to verify and validate the model and, on the other hand, developers to optimize the model in order to achieve a better decision-making model. We also present some preliminary results and some suggestions to improve the simulated model.
  • A Hybrid DPCM-DCT and RLE Coding for Satellite Image Compression
    Khaled SAHNOUN, Noureddine BENABADJI, University of Sciences and Technology of Oran, Algeria
    ABSTRACT
    There are many ways to encode, represent, and compress satellite images. Today, with huge technological advances, algorithms perform many calculations to compress and decompress a satellite image. The future of compression lies in mathematical algorithms, and progress in mathematical research will undoubtedly lead to advances in image and file compression. In this paper, we propose a hybrid scheme for satellite image compression combining DPCM predictive coding, the discrete cosine transform (DCT) and run-length encoding (RLE).
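Two of the stages named in the title can be sketched directly (the DCT stage is omitted for brevity): DPCM stores each sample's difference from its predecessor, and RLE then collapses the resulting runs of repeated values.

```python
def dpcm_encode(samples):
    """DPCM: replace each sample with its difference from the previous one."""
    prev, out = 0, []
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def rle_encode(values):
    """Run-length encoding: collapse runs into (value, count) pairs."""
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1] = (v, out[-1][1] + 1)
        else:
            out.append((v, 1))
    return out

# A made-up row of smoothly varying pixel intensities
row = [100, 100, 100, 102, 104, 106, 106, 106]
residuals = dpcm_encode(row)   # [100, 0, 0, 2, 2, 2, 0, 0]
encoded = rle_encode(residuals)
assert encoded == [(100, 1), (0, 2), (2, 3), (0, 2)]
```

Note how prediction turns a slowly varying row into small, repetitive residuals, which is exactly what makes the subsequent RLE stage effective.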
  • On-Board Satellite Image Compression Using the Fourier Transform and Huffman Coding
    Khaled SAHNOUN, Noureddine BENABADJI, University of Sciences and Technology of Oran, Algeria
    ABSTRACT
    The need to transmit or store satellite images is growing rapidly with the development of modern communications and new imaging systems. The goal of compression is to facilitate the storage and transmission of large images with high compression ratios and minimum distortion. In this work, we present a new coding scheme for satellite images. First, the image is loaded and a fast Fourier transform (FFT) is applied. The result of the FFT then undergoes scalar quantization (SQ), and the quantized results are encoded using entropy coding. This approach has been tested on a satellite image and the Lena picture. After decompression, the images were reconstructed faithfully, and the memory space required for storage was reduced by more than 80%.
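The entropy-coding stage can be illustrated with a minimal Huffman coder over quantized coefficient symbols. This is a generic sketch, not the authors' implementation; the symbol stream is fabricated.

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table {symbol: bitstring} from a symbol stream."""
    # Heap entries are (weight, tiebreaker, partial code table).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)   # two lightest subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, i, merged))
        i += 1
    return heap[0][2]

# Made-up quantized FFT coefficients: zeros dominate after quantization
quantized = [0, 0, 0, 0, 1, 1, 2, 3]
codes = huffman_codes(quantized)
# The most frequent symbol receives the shortest code
assert len(codes[0]) <= min(len(codes[s]) for s in (1, 2, 3))
```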
  • An Empirical Study on Rural Residents’ Willingness to Pay for the ICT in China
    YUAN Chunhui, GONG Zhenwei, WAN Yan, CHEN Wenjing, Beijing University of Posts and Telecommunications, China
    ABSTRACT
    The main purpose of this paper is to explain the low efficiency of financial input in improving rural ICT development in China from the point of view of willingness to pay (WTP). We established a model of WTP for ICT resources based on consumer choice theory, and studied rural residents' WTP in poor areas by estimating Logit and Tobit models on data from field visits in Guyuan county of Hebei province, China. Results show that the WTP of rural residents is 443 CNY, which is low compared with the cost of ICT expenses; the probability of WTP is related not only to age, education and income level, but also to ICT training, internet use and the information literacy of rural residents, among which participation in ICT training has a significantly positive impact on WTP. The findings, based on an empirical study in rural areas, will be of value to operators and regulators in improving rural ICT development in China.
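A Logit model like the one estimated above can be fitted by maximizing the log-likelihood. This toy sketch uses stochastic gradient ascent on fabricated income/willingness data; the paper's actual covariates and estimates differ.

```python
import math

def fit_logit(xs, ys, lr=0.1, epochs=2000):
    """Fit P(y=1|x) = 1/(1+exp(-(w*x+b))) by stochastic gradient ascent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(w * x + b)))
            w += lr * (y - p) * x   # gradient of the log-likelihood in w
            b += lr * (y - p)       # gradient in b
    return w, b

# Hypothetical data: household income (thousand CNY) vs. willingness to pay
incomes = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5]
willing = [0, 0, 0, 0, 1, 1, 1, 1]
w, b = fit_logit(incomes, willing)
assert w > 0   # willingness to pay rises with income in this toy data
```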
  • Signal Processing Method for the High-Speed Train Environment Based on IEEE 802.15.4
    Kanghoon Kim and Younglok Kim, Sogang University, Korea
    ABSTRACT
    A monitoring system for high speed trains is under study. Commercial monitoring systems are attached to the railroad and diagnose faults from vibration sensor data, but they cannot detect train defects reliably: such a system only receives vibration data when a train runs over the instrumented section of track. We therefore study how to install the sensor system on the train itself, so that it can work at all times. Wired communication would be the ideal way to transmit the vibration data, but we expect it to interfere with the mechanical system, so this study concerns a wireless communication system that transmits the vibration data reliably. The railroad environment is very tough for wireless communication because of power loss, multipath, fast fading and shielding, which cause signal distortion that degrades BER performance. To equalize the distortion, baseband signal processing is highly necessary: without equalization, the BER is about 10^-2; with the proposed method, the bit error performance is improved to 10^-3 on a fast fading channel.
  • A Simulation-Based Performance Comparison of MANETs CDS creation algorithms using Ideal MAC and IEEE 802.11 MAC
    Khalid A. Almahorg, Mohamed S. Elbouni, Elmahdi M. Abousetta and Ahmed Arara, University of Tripoli, Libya
    ABSTRACT
    Mobile ad hoc networks (MANETs) are gaining increased interest due to their wide range of potential applications in the civilian and military sectors. Self-control, self-organization, topology dynamism and the bandwidth limitation of the wireless communication channel make the implementation of MANETs a challenging task. The Connected Dominating Set (CDS) has been proposed to facilitate MANET realization. Minimizing the CDS size has several advantages; however, this minimization is an NP-complete problem, so approximation algorithms are used to tackle it. The fastest CDS creation algorithm is the Wu and Li algorithm; however, it generates relatively high signaling overhead. Utilizing the location information of network members reduces the signaling overhead of the Wu and Li algorithm. In this paper, we compare the performance of the Wu and Li algorithm with its location-information-based version under two types of Medium Access Control protocols and several network sizes. The MAC protocols used are a virtual ideal MAC protocol and the IEEE 802.11 MAC protocol. The use of a virtual ideal MAC enables us to investigate how the real-world performance of these algorithms deviates from their ideal-conditions counterpart. The simulator used in this research is the ns-2 network simulator.
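The marking process at the core of the Wu and Li algorithm is simple to state: a node joins the CDS if it has two neighbours that are not directly connected to each other (the algorithm's later pruning rules are omitted in this sketch):

```python
def wu_li_cds(adj):
    """Wu and Li marking rule on an undirected graph.

    adj: {node: set of neighbours}. A node is marked (joins the CDS)
    if any pair of its neighbours is not directly connected.
    """
    cds = set()
    for v, nbrs in adj.items():
        nbrs = list(nbrs)
        if any(nbrs[j] not in adj[nbrs[i]]
               for i in range(len(nbrs)) for j in range(i + 1, len(nbrs))):
            cds.add(v)
    return cds

# A 4-node chain a-b-c-d: only the interior nodes are marked
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
assert wu_li_cds(adj) == {"b", "c"}
```

In the actual protocol each node evaluates this rule locally from two-hop neighbourhood information, which is where the signaling overhead compared in the paper comes from.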
  • Image Denoising Using Spatial Domain Filters: A Quantitative Study
    Anmol Sharma and Jagroop Singh, DAV Institute of Engineering and Technology, India
    ABSTRACT
    Image denoising is the first preprocessing step in image processing. In image denoising, an image is processed using certain restoration techniques to remove induced noise, which may creep into the image during acquisition, transmission or compression. Examples of noise in an image are Additive White Gaussian Noise (AWGN), impulse noise, etc. The goal of restoration techniques is to obtain an image that is as close to the original input image as possible. In this paper, objective evaluation methods are used to judge the efficiency of different types of spatial domain filters applied to different noise models, with a quantitative approach. The performance of each filter is compared as the filters are applied to images affected by a wide variety of noise models. Conclusions are drawn at the end, based on the experimental data obtained, about which filter is best suited for each noise model individually induced in an image.
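One representative spatial-domain filter from such comparisons is the 3x3 median filter, the usual choice against impulse (salt-and-pepper) noise. A plain-Python sketch on a tiny grayscale patch (border pixels are left untouched for simplicity):

```python
def median_filter(img):
    """Apply a 3x3 median filter to the interior pixels of a 2-D image."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]   # median of the 9 window values
    return out

img = [[10, 10, 10],
       [10, 255, 10],   # an impulse ("salt") pixel
       [10, 10, 10]]
assert median_filter(img)[1][1] == 10   # the impulse is removed
```

A mean filter on the same patch would smear the impulse into its neighbours rather than remove it, which is why median filtering typically scores better on impulse-noise models in quantitative comparisons.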
  • Real-Time Realistic Illumination and Rendering of Cumulus Clouds
    Sassi Abdessamed, Djedi Noureddine and Sassi Amina, Mohamed Khider University, Algeria
    ABSTRACT
    Realistic simulation of natural phenomena such as clouds is one of the most challenging problems facing computer graphics. The complexity of cloud formation, dynamics and light interaction makes real time cloud rendering a difficult task. In addition, traditional computer graphics methods involve high amounts of memory and computing resources, which currently limits their realism and speed.
    We propose an efficient and computationally inexpensive phenomenological approach for modelling and rendering cumulus clouds, by drawing on several approaches that we combine and extend.
    This paper focuses on the modelling of the cloud's shape, rendering and the sky model, but it does not deal with the animation of the cloud.
  • Improving Supervised Classification of Activities of Daily Living Using a New Cost-Sensitive Criterion for C-SVM
    M'hamed Bilal Abidine and Belkacem Fergani, USTHB, Algiers
    ABSTRACT
    The growing population of elders in society calls for a new approach to care giving. By inferring what activities the elderly are performing in their houses, it is possible to determine their physical and cognitive capabilities. In this paper we show the potential of important discriminative classifiers, namely Soft-Support Vector Machines (C-SVM), Conditional Random Fields (CRF) and k-Nearest Neighbors (k-NN), for recognizing activities from sensor patterns in a smart home environment. We also address the class imbalance problem in the activity recognition field, which has been known to hinder the learning performance of classifiers. Cost sensitive learning is attractive under most imbalanced circumstances, but it is difficult to determine precise misclassification costs in practice. We introduce a new criterion for selecting the suitable cost parameter C of the C-SVM method. Through our evaluation on four real-world imbalanced activity datasets, we demonstrate that C-SVM based on our proposed criterion outperforms the state-of-the-art discriminative methods in activity recognition.
  • About Some Methods of Motion Detection and Estimation: A Review
    AMARA Kahina1, DJEKOUNE A.Oualid2, BELHOCINE Mahmoud2 and ACHOUR Nouara1, 1University of Science and Technology Houari Boumediene, Algeria and 2CDTA, Algeria
    ABSTRACT
    Video surveillance systems have seen significant growth because of increased insecurity in recent years. In order to reduce threats such as assaults, many cameras have been installed in public squares. The manual monitoring of these screens is tedious because of the large amount of information, so it is very interesting to automate this process with image processing systems able to extract useful information from video sequences and interpret it. One of the most important tasks is motion detection and estimation. This article aims to provide the state of the art of the different techniques of motion detection, estimation and movement-based segmentation. Many studies have been conducted on the subject and the literature in this area is very abundant; we are not trying to list all the existing methods. The idea is to give an overview of the most commonly used methods and to distinguish the different types and approaches.
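The simplest detection technique such surveys cover is frame differencing: flag pixels whose intensity changes more than a threshold between consecutive frames. A minimal sketch on fabricated 2-D frames:

```python
def motion_mask(prev, curr, threshold=25):
    """Binary motion mask: 1 where intensity changed more than the threshold."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

prev = [[10, 10, 10],
        [10, 10, 10]]
curr = [[10, 10, 10],
        [10, 200, 10]]   # an object entered the lower-middle pixel
assert motion_mask(prev, curr) == [[0, 0, 0], [0, 1, 0]]
```

Background-subtraction and optical-flow methods reviewed in such surveys refine this idea by modelling the static scene and estimating per-pixel motion vectors, respectively.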
  • DataQuest – Ontology Driven Information Extraction Framework
    Qurat ul Ain and Amna Basharat, National University of Computer and Emerging Sciences Islamabad, Pakistan
    ABSTRACT
    In recent years, a number of applications have been introduced to automate religious learning using emerging technologies and ultimately to ease the retrieval of knowledge from religious literature. In this project we propose DataQuest, an efficient framework for modeling and retrieving knowledge from distributed knowledge sources, primarily scholarly texts related to the Holy Qur'an, with the use of Semantic Web, Information Extraction and Natural Language Processing techniques. The documents are annotated using the domain ontology, and a semantic-based intelligent search engine then lets the user query this filtered and concise knowledge.
  • Introducing the Concept of Information Pixels and the SIPA (Storing Information Pixels Addresses) Method as an Efficient Model for Document Storage
    Mohammad A. ALGhalayini, King Saud University, Saudi Arabia
    ABSTRACT
    Today, many institutions and organizations are facing a serious problem due to the tremendously increasing size of documents, and this problem further triggers storage and retrieval problems due to continuously growing space and efficiency requirements. The problem becomes more complex with time and with the increase in the size and number of documents in an organization; therefore, there is a growing demand to address it. This demand and challenge can be met by developing a technique for specialized document imaging people to use when document images need to be stored. Thus, we need special and efficient storage techniques for this type of information storage (IS) system.
    In this paper, we present an efficient storage technique for electronic documents. The proposed technique uses the Information Pixels concept to make storage more efficient for certain image formats. In addition, we show how the Storing Information Pixels Addresses (SIPA) method is an efficient method for document storage and, as a result, makes document image storage relatively efficient for most image formats.
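One plausible reading of the SIPA idea (the paper defines the exact scheme) is to store only the addresses of the information-bearing pixels of a binarized document image, and to rebuild the bitmap from those addresses. A heavily hedged sketch of that reading:

```python
def sipa_store(img, background=0):
    """Store the (row, col) addresses of non-background pixels."""
    return [(y, x) for y, row in enumerate(img)
            for x, v in enumerate(row) if v != background]

def sipa_reconstruct(addresses, height, width, ink=1, background=0):
    """Rebuild the binary image from the stored pixel addresses."""
    img = [[background] * width for _ in range(height)]
    for y, x in addresses:
        img[y][x] = ink
    return img

img = [[0, 1, 0],
       [1, 1, 0]]
addrs = sipa_store(img)                  # [(0, 1), (1, 0), (1, 1)]
assert sipa_reconstruct(addrs, 2, 3) == img
```

For sparse document images (mostly white background with a little ink), the address list can be far smaller than the full bitmap, which is the efficiency argument the abstract makes.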
  • Introducing the Concept of Back-Inking as an Efficient Model for Document Retrieval (Image Reconstruction)
    Mohammad A. ALGhalayini, King Saud University, Saudi Arabia
    ABSTRACT
    Today, many institutions and organizations are facing a serious problem due to the tremendously increasing size of documents, and this problem further triggers storage and retrieval problems due to continuously growing space and efficiency requirements. This increase in the size and number of documents is becoming a complex problem in most offices; therefore, there is a demand to address this challenging problem. It can be met by developing a technique for specialized document imaging people to use when document images need to be stored. Thus, there is a need for an efficient retrieval technique for this type of information retrieval (IR) system.
    In this paper, we present an efficient retrieval technique for electronic documents. The Back-Inking concept is proposed as an efficient technique for certain image formats. This approach is a continuation of the SIPA [32] approach, which was presented in an earlier paper as an efficient method for document storage, and as a result it makes image retrieval relatively efficient.
  • The Relationship Between the Use of Blackberry and Students' Demand Fulfillment and Personality
    Chairiawaty, Yenni Yuniaty, Anne Maryani and Nurahmawaty, Bandung Islamic University, Indonesia
    ABSTRACT
    Communication technology, notably the Blackberry, provides a medium that facilitates mediated interpersonal communication because of its interactive ability, and this creates considerable convenience. In interpersonal communication across a distance, the interactivity of convergent media goes beyond the potential of mere feedback, since a person accessing a convergent medium directly responds to the message conveyed. The Blackberry, as a product of advanced technological development, has been spreading very fast.

    Based on this background and phenomenon, this research studied "The Relationship between the Use of Blackberry and the Demand Fulfillment and Personality of Junior High School Students in Bandung." The research aimed to find out the correlation between the intensity of Blackberry use and: (1) the students' cognitive and affective fulfillment; (2) their tension release; (3) their personal integration; (4) their social integration; (5) their confidence; (6) their tolerance; (7) their demand fulfillment as a whole; and (8) their personality as a whole.

    The research used a quantitative approach with the explanatory survey method. The theories used were Cognitive Psychology, Technological Determinism, and Uses and Gratifications. The population of the research was junior high school students; using a random sampling technique, 5 schools and 200 students were taken as the sample. The data were gathered through questionnaires and analyzed using statistical correlation tests. The results of the research are shown in the form of bar charts.
  • Scaling Based Active Contour Model in Grayscale Images
    Ahmet Cınar and Zafer Güler, Firat University, Turkey
    ABSTRACT
    Structures that can detect shapes in images in a flexible manner are widely used today. The active contour model is an important example of such a structure. This model aims to surround the shape with points and to make these points fit onto the shape. However, since the active contour model involves energy minimization, it requires a long running time. The scaling-based model developed in the present study offers a solution to the speed problem of the active contour model. Furthermore, this method allows working on the shape both outwardly (enlargement) and inwardly (reduction). Using the proposed model, the conventional active contour model and an improved active contour model (Gradient Vector Flow, GVF) were compared in terms of speed, and the results are presented.
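The basic geometric transformation behind such a scaling approach can be sketched as scaling the contour points about their centroid, stepping outward (enlargement) or inward (reduction) instead of minimising an energy functional. The scale factors below are illustrative, not from the paper:

```python
def scale_contour(points, factor):
    """Scale contour points about their centroid.

    factor > 1 grows the contour outward; factor < 1 shrinks it inward.
    """
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return [(cx + (x - cx) * factor, cy + (y - cy) * factor)
            for x, y in points]

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
shrunk = scale_contour(square, 0.5)   # one inward step (reduction)
grown = scale_contour(square, 1.5)    # one outward step (enlargement)
assert shrunk == [(0.5, 0.5), (1.5, 0.5), (1.5, 1.5), (0.5, 1.5)]
```

In a full segmentation loop, each point would stop scaling once it reaches an image edge, replacing the iterative energy minimisation of the conventional snake.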