Accepted Papers

  • Web Accessibility Evaluation of e-Government website in Saudi Arabia
    Ahlam J. Al-Khiebari and Khalid A. Alnafjan, King Saud University, Saudi Arabia
    This paper presents the results of a new accessibility study carried out in 2013 on a sample of Saudi e-government websites. The main objective of this study is to investigate the accessibility of Saudi e-government websites against the Web Content Accessibility Guidelines 2.0 (WCAG 2.0), conformance level A, using automatic evaluation tools. It aims to address the accessibility problems experienced by people with disabilities in Saudi Arabia when using e-government websites. It also aims to assess the degree to which web accessibility is maintained over time in Saudi Arabia, by comparing the evaluation results with an earlier study and analysing the progress in web accessibility. The analysis reveals that none of the sampled Saudi e-government websites passed the lowest conformance level of WCAG 2.0, and no progress in web accessibility was observed relative to the sample evaluated in the 2010 study.
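    A toy illustration (not the evaluation tools used in the study) of the kind of check an automatic WCAG 2.0 level-A evaluator performs, here success criterion 1.1.1 (text alternatives for images), using only Python's standard library:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flags <img> tags lacking a non-empty alt attribute (WCAG 2.0 SC 1.1.1, level A)."""
    def __init__(self):
        super().__init__()
        self.violations = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img" and not dict(attrs).get("alt"):
            self.violations += 1

page = '<html><body><img src="logo.png"><img src="map.png" alt="Site map"></body></html>'
checker = AltTextChecker()
checker.feed(page)
print(checker.violations)  # 1: the logo image has no alt text
```

    Real evaluators cover many more success criteria; this sketch only shows why such level-A checks are automatable.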
  • Efficient Damage Assessment and Recovery Using Fast Mapping
    Ramzi A. Haraty and Hussein Mohsen, Lebanese American University, Lebanon
    As electronic attacks increase with the advancement of World Wide Web applications, protecting information becomes of paramount importance. Clustering and sub-clustering of log files have been introduced to save time and avoid searching the entire log when a malicious transaction is detected and recovery is needed. In this paper, we introduce enhanced damage assessment and recovery algorithms with auxiliary data structures and fast mapping for efficient retrieval of information.
  • Access Privacy in Control Access to Outsourced Data
    Aisha Urooj Khan and Zain Hameed, LUMS Lahore, Pakistan
    This research work proposes techniques to improve selective secure access to outsourced data. The article addresses the threat of information leakage from outsourced data posed by honest-but-curious and untrusted service providers. We address the specific area of access privacy, in which the service provider must not be able to infer or reveal the access patterns of users' queries over outsourced data. We present a novel approach to this problem by introducing k-anonymity at both the user and data levels: k anonymous users at the user level and k anonymous fake tuples at the data level, combined with generalization and suppression. The study shows that the proposed solution effectively increases access privacy and therefore the overall security of outsourced data.
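    A minimal sketch of the data-level idea, under hypothetical names (`k_anonymize` and `make_fake` are illustrative, not the authors' implementation): each quasi-identifier group, formed after generalization, is padded with fake tuples until it holds at least k records, so real accesses cannot be singled out.

```python
from collections import defaultdict

def k_anonymize(rows, quasi_id, k, make_fake):
    """Pad every quasi-identifier group with fake tuples until it has >= k rows."""
    groups = defaultdict(list)
    for row in rows:
        groups[quasi_id(row)].append(row)
    out = []
    for key, members in groups.items():
        while len(members) < k:            # inject indistinguishable fake tuples
            members.append(make_fake(key))
        out.extend(members)
    return out

# toy example: the quasi-identifier is the generalized age bracket
rows = [{"age": 23}, {"age": 27}]
bracket = lambda r: r["age"] // 10 * 10        # generalization: 23 -> 20, 27 -> 20
anonymized = k_anonymize(rows, bracket, 3, lambda key: {"age": key, "fake": True})
print(len(anonymized))  # 3: one fake tuple was added to reach k
```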
  • Authentication using graphical passwords
    Shubhangi Agarwal and Himani Rallapalli, VIT University, India
    Access to systems is most often based on alphanumeric passwords. However, users have difficulty remembering a password that is long and random-appearing, so instead they create short, simple, and insecure passwords. Graphical passwords have been designed to make passwords more memorable and easier for people to use and, therefore, more secure. With a graphical password, users click on images rather than type alphanumeric characters. We have designed a two-level authentication system in which the user creates a graphical password, and a code is generated and sent to a handheld device for authentication; the user must clear both levels. Our approach can be leveraged by many organizations, as linking a graphical approach with a text password makes it more secure.
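    A minimal sketch of such a two-level flow under stated assumptions (a 4x4 click grid, an HMAC-hashed click sequence, and a 6-digit one-time code standing in for the message sent to the handheld device; all names are illustrative, not the authors' design):

```python
import hashlib
import hmac
import secrets

GRID = 4  # the image is divided into a 4x4 grid of clickable cells

def region(x, y, w=400, h=400):
    """Map a pixel click to its grid cell, tolerating small click inaccuracies."""
    return (x * GRID // w, y * GRID // h)

def hash_clicks(clicks, salt):
    """Store only a salted hash of the click sequence, never the raw clicks."""
    data = ",".join(f"{r}:{c}" for r, c in clicks).encode()
    return hmac.new(salt, data, hashlib.sha256).hexdigest()

# enrolment: the user picks three points on the image
salt = secrets.token_bytes(16)
enrolled = hash_clicks([region(50, 50), region(210, 90), region(390, 390)], salt)

# level 1: graphical password (slightly different pixels, same grid cells)
attempt = hash_clicks([region(60, 40), region(220, 80), region(380, 395)], salt)
level1_ok = hmac.compare_digest(enrolled, attempt)

# level 2: one-time code sent to the handheld device
otp = f"{secrets.randbelow(10**6):06d}"
typed = otp                                   # stand-in for the user typing the code
level2_ok = hmac.compare_digest(typed, otp)

print(level1_ok and level2_ok)  # True only when both levels are cleared
```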
  • Application of Enhanced Clustering Technique using Similarity Measure for Market Segmentation
    M M Kodabagi, Savita S Hanji and Sanjay V Hanji, Basaveshwar Engineering College, Bagalkot, India
    Segmentation is one of the most important strategic tools used by the marketer. Segmentation strategy is based on the concept that no firm can satisfy all needs of one customer or one need of all customers. Customers are too numerous and diverse in their buying requirements, so marketers and companies cannot cater to the requirements of all customers, especially in a broad market such as two-wheelers. Cluster analysis is a class of techniques used to identify groups of customers with similar behaviors, given a large database of customer data containing their properties and past buying records. Clustering is an unsupervised learning method in which a set of data points is separated into uniform groups. K-means is one of the most widely used clustering techniques. The main drawback of the original k-means clustering algorithm is dead centers: centers that have no associated data points. Moreover, the original k-means algorithm with Euclidean distance treats all features equally and does not accurately reflect the similarity among data points. In this paper, an attempt has been made to apply an enhanced clustering algorithm which uses a similarity measure for clustering (segmentation) of two-wheeler market data. The enhanced clustering algorithm works in two phases: seed point selection and clustering. The method adopts a new strategy to cluster data points more efficiently and accurately, and also avoids dead centers. The enhanced clustering algorithm is found to be efficient in meaningful segmentation of two-wheeler market data, and the results of market segmentation are discussed.
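    The paper's exact seed-point selection and similarity measure are not given in the abstract; as a sketch of the two-phase idea, the following uses farthest-point seeding for phase one and re-seeds any center that loses all its points, so no dead centers survive:

```python
import numpy as np

def seed_points(X, k):
    """Phase 1 (sketch): farthest-point seeding spreads the initial centers out."""
    centers = [X[0]]
    for _ in range(1, k):
        dist = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(dist)])
    return np.array(centers)

def cluster(X, k, iters=50):
    """Phase 2: Lloyd updates, re-seeding any dead center at the worst-fit point."""
    C = seed_points(X, k)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members) == 0:   # dead center: move it to the point farthest
                C[j] = X[np.argmax(((X - C[labels]) ** 2).sum(-1))]  # from its center
            else:
                C[j] = members.mean(axis=0)
    return C, labels

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0], [9.0, 0.0]])
C, labels = cluster(X, 3)
counts = np.bincount(labels, minlength=3)
print(counts.min() > 0)  # True: every cluster keeps at least one point
```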
  • Reversible Fragile Database Watermarking Algorithm Based on Multilevel Histogram Modification
    Amal Hamdy, Mohamed Hashem, Amal El-Shershaby and Sawsan Shouman, Ain Shams University, Egypt
    Fragile watermarking is commonly used for content authentication and tamper detection in relational databases. For some critical applications, such as medical systems, the fragile watermarking system should be based on a reversible data hiding scheme; reversibility is the ability to regenerate the original relation from the watermarked relation. This paper proposes a reversible and blind fragile watermarking technique to detect and localize database tampering using a multilevel histogram modification mechanism. Because the proposed scheme uses more peak points for hiding secret bits, the hiding capacity is enhanced compared with conventional methods based on one- or two-level histogram modification. Furthermore, the proposed scheme can characterize the modifications to quantify the nature of tampering attacks by evaluating local characteristics of the database relation, such as the frequency distribution of bits. The experimental results demonstrate that tampered groups are correctly detected and non-tampered data is recovered with high quality.
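    The multilevel scheme itself is not spelled out in the abstract; the classic single-level histogram-shifting step it builds on (with more peak points, capacity grows) can be sketched on a toy integer attribute column as follows (all names are illustrative):

```python
from collections import Counter

def embed(values, bits):
    """One-level histogram shifting: shift values above the peak by 1 to make room,
    then hide one bit per peak occurrence (0 keeps the peak, 1 bumps it to peak+1)."""
    peak = Counter(values).most_common(1)[0][0]
    it = iter(bits)
    out = []
    for v in values:
        if v > peak:
            out.append(v + 1)
        elif v == peak:
            out.append(v + next(it, 0))
        else:
            out.append(v)
    return out, peak

def extract(marked, peak):
    """Recover the hidden bits and restore the original values exactly."""
    bits, restored = [], []
    for v in marked:
        if v == peak:
            bits.append(0); restored.append(v)
        elif v == peak + 1:
            bits.append(1); restored.append(peak)
        elif v > peak + 1:
            restored.append(v - 1)
        else:
            restored.append(v)
    return bits, restored

vals = [3, 5, 5, 5, 7, 2, 5]
marked, peak = embed(vals, [1, 0, 1, 1])
bits, restored = extract(marked, peak)
print(restored == vals, bits)  # True [1, 0, 1, 1]: fully reversible
```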
  • The complexity of Probabilistic Inference in Multi-Dimensional Bayesian Classifiers
    Guangdi Li, Rega Institute for Medical Research, Belgium
    Multi-dimensional Bayesian network classifiers (MBCs) have recently been shown to perform efficient classification. In this study, we evaluate the computational complexity of exact inference, MAP (maximum a posteriori) and MPE (most probable explanation) in MBCs. Even when MBCs have simple graphical structures under strong constraints, we find that computing exact inference is NP-complete, while computing MAP and MPE is NP-hard.
  • A Survey on Sentiment Analysis and Opinion Mining: A need for an Organization and Requirement of a customer
    Ravendra Jandail, Pradeep Sharma and Chetan Agrawal, Galgotias University, India
    Sentiment analysis and opinion mining is the computational study of user opinions, analyzing the social, psychological, philosophical and behavioral perceptions of an individual or a group of people about a product, policy, service or specific situation using machine learning techniques. Machine learning for text analysis has always been technically challenging, as its main goal is to make computers able to learn and automatically generate emotions like a human, which is very useful in real-life scenarios. After the boom in Web 2.0 technology, this field became a most interesting area for researchers, because social media has grown into the fastest medium for the availability of opinions. There are many commercial tools available in the market, and many researchers have proposed solutions for opinion extraction, but problems of text classification and sentiment extraction remain in opinion mining. These problems arise from the different behaviors, manners and textual habits of users. A sentence can be positive for one person but have a negative impact on another, so it is a problem for a machine to determine its emotion. A negative sentence can be written in a positive manner, like "What a great camera! It consumes more battery power." This sentence expresses a negative opinion about the camera, yet it contains only positive keywords. There are four predominant problems: subjectivity classification, word sentiment classification, document sentiment classification and opinion extraction. Data mining algorithms are easy to implement but yield poor accuracy, while machine learning techniques provide better accuracy but require a lot of training time; hence there is a need for a hybrid technique that combines the advantages of both. This survey covers various data mining algorithms and machine learning techniques, with a brief comparative analysis of these algorithms. We have followed a systematic literature review process to conduct this survey and also discuss the future aspects of sentiment analysis and opinion mining.
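    The camera example above can be reproduced with a naive keyword counter (the word lists below are hypothetical, for illustration only), showing why pure lexicon matching mislabels such sentences:

```python
POSITIVE = {"great", "good", "excellent"}
NEGATIVE = {"bad", "poor", "terrible"}

def keyword_score(text):
    """Naive lexicon approach: positive keyword count minus negative keyword count."""
    words = [w.strip("!.,").lower() for w in text.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

review = "What a great camera! It consumes more battery power."
print(keyword_score(review))  # 1: wrongly classified as positive
```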
  • Designing A Disease Outbreak Notification System In Saudi Arabia
    Farag Azzedin, Salahadin Mohammed, Jaweed Yazdani and Mustafa Ghaleb, King Fahd University of Petroleum & Minerals, Saudi Arabia
    This paper describes the design and development of a Disease Outbreak Notification System (DONS) in Saudi Arabia. The main function of DONS is to warn of potential outbreaks. A prototype of DONS was implemented in a hybrid cloud environment as an online, real-time disease outbreak notification system. The system notifies experts of potential disease outbreaks of both pre-listed and entirely unknown diseases, and only accepts cases from pre-registered sources. It is also designed to share information about disease outbreaks with international systems. As soon as the system detects a potential disease outbreak, it notifies stakeholders and experts, and it takes feedback from experts to improve its disease detection capabilities and adapt to new situations.
  • Velocity Prediction Model of Servo Motor System Using Adaptive Fuzzy Petri Nets Reasoning System
    Raed Hamed
    A graphical fuzzy model that uses a rule-based system to effectively process uncertain variables is built, and modules that represent distinct types of fuzzy rules are created and defined. An AFPN module for velocity estimation is constructed according to the structure, relations, rules, certainty factors and weights of the adaptive fuzzy model for a servo motor system. The servo motor system is analysed, and definitions are prepared for the AFPN model and the input data. An AFPN model is created and trained with input data on the weights w(pi). Input places represent the membership function values; the initial values of the input places (position error and velocity error) are entered, and the output place represents the velocity estimation. The system can perform fuzzy reasoning automatically to evaluate the degree of truth of a proposition. The presented study demonstrates that the proposed model achieves the purpose of reasoning about and computing the velocity estimation value. An AFPN structure is used rather than the FPN formalism to improve the efficiency of fuzzy reasoning. The effectiveness of the proposed method is verified by both model simulations and experimental results.
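    A minimal sketch of fuzzy-Petri-net style reasoning of the sort described (the rule weights, certainty factors and membership degrees below are invented toy values, not the trained AFPN of the paper):

```python
def fire(rules, degrees):
    """Each rule (transition) aggregates the truth degrees of its input places with
    its weights, scales by a certainty factor, and the strongest rule wins."""
    out = 0.0
    for inputs, weights, cf in rules:
        strength = sum(w * degrees[p] for p, w in zip(inputs, weights))
        out = max(out, strength * cf)
    return out

# input places: membership degrees for position error and velocity error
degrees = {"pos_err_small": 0.8, "vel_err_small": 0.6, "vel_err_large": 0.4}
rules = [
    (("pos_err_small", "vel_err_small"), (0.5, 0.5), 0.9),
    (("pos_err_small", "vel_err_large"), (0.4, 0.6), 0.7),
]
print(round(fire(rules, degrees), 3))  # 0.63: degree of truth of the output place
```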
  • Recognition and Ranking Critical Success Factors of Business Intelligence in Hospitals -Case Study: Hasheminejad Hospital
    Raed Hamed
    Background and Aim: Business intelligence is propounded in organizations not as a tool or a product but as a new approach for making tough business decisions as quickly as possible. Hospital managers often need business intelligence in their fiscal, operational, and clinical reports and indices. Recognition of critical success factors (CSFs) is necessary for every organization or project, yet there is no validated set of CSFs for implementing business intelligence. The main goal of recognizing and ranking CSFs for implementing a business intelligence system in hospitals is to increase the success rate of business intelligence applications in the health and treatment sector. Materials and Methods: This paper is an applied, descriptive-analytical study in which questionnaires are used to gather data, analyzed with SPSS and LISREL. Its statistical population comprises the managers and personnel of Hasheminejad Hospital, with cases selected by the Cochran formula. Results: The findings show that organizational, process, and technological factors all equally affect the implementation of business intelligence, based on the Yeoh and Koronios approach upon which the assumptions rest. The proposed model for CSFs of business intelligence in hospitals includes declaring the perspective, goals and strategies; development of human and financial resources; clarification of organizational culture; documentation and process maturity; management support; etc. Conclusion: Business intelligence implementation is affected by different components. Hasheminejad Hospital, as a leader in providing quality health care, has partially succeeded in leveraging the benefits of its BI system in passing through the information revolution, but developing this system toward an intelligent hospital remains a high priority; thus it cannot be said that the hospital-wide BI system is yet fully favorable. In this regard, it can be concluded that Hasheminejad Hospital requires a practical model for the development of business intelligence systems.
  • A System Model to Detect Left-turn Forbidden Intersection Automatically Based on Taxi Trajectories
    Zhixin Song, Tongyu Zhu and Shuai Liu, Beihang University, Beijing, China
    The traffic measure of imposing a left-turn forbidden restriction can effectively reduce urban congestion and, at the same time, reduce the frequency of unexpected events at intersections. However, because travellers often cannot accurately obtain restriction information about left-turn forbidden intersections in advance, this results in travel-time delays and even driving violations. A few years ago, road traffic information was collected mainly by induction coils, but that approach could not detect left-turn forbidden intersections automatically. In recent years, Floating Car Data (FCD) has become a new way to collect road traffic information and is widely applied owing to its low cost and wide coverage. In the field of intelligent transportation systems (ITS), floating-car trajectories can be obtained quickly and easily by constantly collecting location information from floating cars equipped with GPS devices. In this paper, we mine a large number of real-world taxi trajectories and establish a system model to detect left-turn forbidden intersections automatically. Experiments show that the correct rate of the system model is satisfactory and that it can effectively detect left-turn forbidden intersections automatically.
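    A minimal sketch of the trajectory-mining idea under our own assumptions (a left turn is a heading change above 45 degrees; an intersection is flagged only after enough observed passes; thresholds and names are illustrative, not the paper's model):

```python
import math

def heading(p, q):
    """Bearing in degrees from point p to point q (x = east, y = north)."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

def turn_angle(before, at, after):
    """Signed heading change through the intersection; positive means a left turn."""
    d = heading(at, after) - heading(before, at)
    return (d + 180) % 360 - 180

def left_turn_forbidden(passes, min_passes=100, max_left_ratio=0.01):
    """If almost no taxi among many observed passes turns left here, flag it."""
    lefts = sum(1 for b, a, f in passes if turn_angle(b, a, f) > 45)
    return len(passes) >= min_passes and lefts / len(passes) <= max_left_ratio

# 120 straight passes and 2 right turns through a toy intersection at (0, 0)
passes = [((-1, 0), (0, 0), (1, 0))] * 120 + [((-1, 0), (0, 0), (0, -1))] * 2
print(left_turn_forbidden(passes))  # True: no left turns ever observed
```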
  • Web-Based Arabic System for Extracting Information of Future Events: an Overview
    Meshrif Alruily1 and Mohammad Alghamdi2, 1Aljouf University, Saudi Arabia and 2Umm AlQura University, Saudi Arabia
    Arabic is a very widely spoken language, but very few mining tools have been developed to exploit the data that lies within bodies of Arabic text. Thus, this paper presents a proposal for developing a web-based system that should be able to regularly collect news reports from Arabic newspaper websites and then extract information about future events, e.g. event type, date and location. The system should also store the extracted event information in an online database to enable users to access it.
  • Least Support Orthogonal Matching Pursuit (LS-OMP) Recovery Method
    Israa Sh. Tawfic and Sema Koc Kayhan, Gaziantep University, Turkey
    In this paper, we propose the Least Support Orthogonal Matching Pursuit (LS-OMP) algorithm to improve the performance of the Orthogonal Matching Pursuit (OMP) algorithm. LS-OMP adaptively chooses the optimum least part of the support at each iteration. This modification reduces the computational complexity significantly and performs better than the OMP algorithm. The new algorithm has two important characteristics: first, it has lower computational complexity than the ordinary OMP method; second, its reconstruction accuracy is better than that of other methods. While LS-OMP offers theoretical guarantees comparable to the best optimization-based approaches, simulation results show that it outperforms many algorithms, especially for compressible signals.
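    The LS-OMP modification itself is not detailed in the abstract; for reference, the ordinary OMP baseline it improves on can be sketched as follows (the random test problem is our own):

```python
import numpy as np

def omp(A, y, sparsity):
    """Plain OMP: greedily add the column most correlated with the residual,
    then re-fit the coefficients by least squares on the selected support."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
A /= np.linalg.norm(A, axis=0)                 # unit-norm dictionary columns
x_true = np.zeros(100)
x_true[[7, 42, 90]] = [1.5, -2.0, 0.8]
x_hat = omp(A, A @ x_true, sparsity=3)
print(np.allclose(x_hat, x_true))              # exact recovery in the noiseless case
```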