Announcements
  • Volume-5 & Issue-4 Published (Acceptance Ratio = 47.36%) http://www.ijmcs.info/current_issue
  • Volume-5 & Issue-3 Published (Acceptance Ratio = 38.88%) http://www.ijmcs.info/current_issue
  • Volume-5 & Issue-2 Published (Acceptance Ratio = 25%) http://www.ijmcs.info/current_issue
  • Submit your paper (last date: 20th June 2017) http://www.ijmcs.info/submit_your_paper
  • Submit your paper free of cost http://www.ijmcs.info/publication_fee...
  • Volume-5 & Issue-1 Published (Acceptance Ratio = 28.94%) http://www.ijmcs.info/current_issue
  • Volume-4 & Issue-5 Published (Acceptance Ratio = 41.42%) http://www.ijmcs.info/current_issue
  • Volume-4 & Issue-4 Published (Acceptance Ratio = 54.02%) http://www.ijmcs.info/current_issue
  • Volume-4 & Issue-3 Published (Acceptance Ratio = 48.05%) http://www.ijmcs.info/current_issue
  • Volume-4 & Issue-2 Published (Acceptance Ratio = 44%) http://www.ijmcs.info/current_issue
  • Volume-4 & Issue-1 Published (Acceptance Ratio = 50%) http://www.ijmcs.info/current_issue
  • Volume-3 & Issue-2 Published (Acceptance Ratio = 34%) http://www.ijmcs.info/current_issue
  • Submit your paper (last date: 10th October 2015) http://www.ijmcs.info/submit_your_paper
  • Volume-2 & Issue-6 Published (Acceptance Ratio = 69.56%) http://www.ijmcs.info/current_issue
  • Volume-2 & Issue-5 Published (Acceptance Ratio = 45.94%) http://www.ijmcs.info/current_issue (October)
  • Upcoming Issue: Volume-2 & Issue-6 http://www.ijmcs.info/next_issue (15 August)
  • Volume-2 & Issue-4 Published (Acceptance Ratio = 25%) http://www.ijmcs.info/current_issue

  • National Conference on Advances in Modern Computing and Application Trends, organised by Acharya Institute of Technology, Bangalore, India
  • For more detail: http://www.acharya.ac.in/Techman_2014.pdf

    Volume-4 Issue-3 (June 2016)
    Title: Resource-Aware Scheduling through Improved Speculative Task Execution
    Authors: Anuj Sachan and S.B.Deshmukh
    Abstract: Task scheduling is one of the most important aspects of MapReduce. To improve job completion time and cluster throughput, MapReduce uses speculative execution: when a machine (a so-called straggler) takes an unusually long time to complete a task, it delays job completion and degrades cluster throughput significantly. Many research efforts, such as LATE (Longest Approximate Time to End), resource utilization and data placement in Hadoop clusters, have been undertaken to increase MapReduce performance, but some are inappropriate and some do not handle situations such as data skew, asynchronously starting tasks, improper configuration of phase percentages and scarce resources. These problems are handled via smart speculative execution, wherein a slow task is backed up on an alternative machine with the hope that the backup can finish earlier. In the proposed system, a new resource-aware scheduling scheme with an improved speculative execution strategy is presented, which improves speculative execution significantly by decreasing job completion time and also improves cluster throughput by assigning task slots in the order in which jobs are assigned. This resource-aware scheduling technique aims at improving resource utilization across machines while observing completion-time goals.
    Click Here to Download Pdf
    Title: Image Mining Using Lipomatous Ependymoma on Weighted Image Find Brain Tumor (Allin One)
    Authors: Prof P.Senthil
    Abstract: Image segmentation refers to the process of partitioning an image into mutually exclusive regions. It can be considered the most essential and crucial process for facilitating the delineation, characterization, and visualization of regions of interest in any medical image. Despite intensive research, segmentation remains a challenging problem due to diverse image content, cluttered objects, occlusion, image noise, non-uniform object texture, and other factors. Many algorithms and techniques are available for image segmentation, but there is still a need for an efficient, fast technique for medical image segmentation. Among the classifiers evaluated, SVM scored highest on relevance and time & accuracy (Relevance: 99.5%, Testing: 90.6%), Decision Tree on sensitivity (Accuracy: 99.0%, Testing: 91.0%), Rough Sets on specificity (Relevance & Time: 92.0%, Testing: 90.2%) and EM on time and accuracy (99.9%, Testing: 90.5%). This paper presents an efficient image segmentation approach using an EM-algorithm image mining classification technique integrated with a Fuzzy EM algorithm, followed by thresholding and level-set segmentation stages to provide accurate brain tumor detection. The proposed technique gains the benefit of EM-algorithm classification in terms of minimal computation time, and the advantage of Fuzzy EM in terms of accuracy. The performance of the proposed approach was evaluated by comparing it with some state-of-the-art segmentation algorithms in terms of accuracy, processing time, and overall performance. Accuracy was evaluated by comparing the results with the ground truth of each processed image. The experimental results demonstrate the effectiveness of the proposed approach in dealing with a large number of segmentation problems by improving segmentation quality and accuracy in minimal execution time.
    Click Here to Download Pdf
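The thresholding stage the abstract mentions can be sketched in a few lines. This is an illustrative sketch only (not taken from the paper): a simple global threshold that partitions a tiny, made-up grayscale image into a binary mask.

```python
# Hypothetical sketch of global thresholding, the kind of step the abstract
# describes after EM-based classification. The image values are invented.
def threshold_segment(image, threshold):
    """Partition a grayscale image (2D list) into foreground (1) and background (0)."""
    return [[1 if pixel >= threshold else 0 for pixel in row] for row in image]

image = [
    [12,  40, 200],
    [35, 210, 220],
    [10,  15,  30],
]
mask = threshold_segment(image, 128)
# mask → [[0, 0, 1], [0, 1, 1], [0, 0, 0]]
```

A real pipeline would pick the threshold from the EM cluster parameters rather than hard-coding it.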
    Title: Performance Based Evaluation of Various Machine Learning Classification Techniques for Chronic Kidney Disease Diagnosis
    Authors: Sahil Sharma, Vinod Sharma and Atul Sharma
    Abstract: The areas in which Artificial Intelligence (AI) and related fields find application are increasing day by day; moving beyond the core areas of computer science, they are finding applications in various other domains. In recent times Machine Learning, a sub-domain of AI, has been widely used to assist medical experts and doctors in the prediction, diagnosis and prognosis of various diseases and other medical disorders. In this manuscript the authors applied various machine learning algorithms to a problem in the domain of medical diagnosis and analyzed their efficiency in predicting the results. The problem selected for the study is the diagnosis of Chronic Kidney Disease. The dataset used consists of 400 instances and 24 attributes. The authors evaluated 12 classification techniques by applying them to the Chronic Kidney Disease data. To calculate efficiency, the predictions of the candidate methods were compared with the actual medical results of the subjects. The metrics used for performance evaluation are predictive accuracy, precision, sensitivity and specificity. The results indicate that the decision tree performed best, with an accuracy of nearly 98.6%, sensitivity of 0.9720, precision of 1 and specificity of 1.
    Click Here to Download Pdf
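The four evaluation metrics named in the abstract all derive from a binary confusion matrix. The sketch below is illustrative only; the counts are invented, not the paper's data.

```python
# Hypothetical sketch of the four metrics the abstract reports, computed from
# binary confusion-matrix counts (tp, tn, fp, fn). The counts are made up.
def metrics(tp, tn, fp, fn):
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    precision   = tp / (tp + fp)   # fraction of positive predictions that are correct
    sensitivity = tp / (tp + fn)   # true-positive rate (recall)
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, precision, sensitivity, specificity

acc, prec, sens, spec = metrics(tp=243, tn=150, fp=0, fn=7)
# → acc 0.9825, prec 1.0, sens 0.972, spec 1.0
```

With zero false positives, precision and specificity both reach 1, matching the pattern of results the abstract describes.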
    Title: Link Failure in AODV Protocol – A Review
    Authors: Kiratjit Kaur and Ms. Navjot Kaur
    Abstract: A MANET (Mobile Ad hoc Network) is a group of mobile nodes forming a self-establishing network with the aim of communicating in the absence of any central controller. There are various issues in MANETs, such as battery consumption, node mobility, faults and link failure. In this paper, the link failure problem in AODV is discussed.
    Click Here to Download Pdf
    Title: Stock Market Index Forecasting of Nifty 50 Using Machine Learning Techniques with ANN Approach
    Authors: Gourav Kumar and Vinod Sharma
    Abstract: Stock market index forecasting is one of the most challenging tasks for anyone who wants to invest in the stock market. The challenge arises from the uncertainty and volatility of stock prices in the market. Due to advancements in technology and the globalization of business and share markets, it is important to forecast stock prices more quickly and accurately. Machine learning techniques are widely accepted due to their capability of identifying stock trends from massive amounts of data that capture the underlying stock price movement. Artificial Neural Network (ANN) algorithms are commonly implemented and play an important role in decision making for stock market index prediction. A Multi-Layer Perceptron (MLP) architecture with the back-propagation algorithm has the capability to predict with greater accuracy. This paper presents an ANN-based approach to forecast the Nifty 50 index. A feed-forward neural network using the multiple back-propagation algorithm has been used to forecast the next day's OHLC data. The model uses a pre-processed dataset of Open price (O), High price (H), Low price (L), Close price (C), Volume Traded (V) and Turnover (T) for a period of 10 years, from 03 April 2006 to 16 May 2016. The root mean square error (RMSE) is chosen as the indicator of network performance. The model has been tested to an average accuracy of 99.2152% with an RMSE of 0.0079. In this research the Multiple Back-Propagation (MBP version 2.2.4) software has been used to predict future stock prices, and its performance statistics have been evaluated. This would help investors take better business decisions, such as buying or selling a stock.
    Click Here to Download Pdf
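The RMSE indicator the abstract uses to score forecasts is straightforward to compute. A minimal sketch, with invented price series (not the paper's data):

```python
import math

# Minimal sketch of the RMSE performance indicator named in the abstract.
# The "actual" and "predicted" price series below are invented for illustration.
def rmse(actual, predicted):
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

actual    = [8125.0, 8160.5, 8190.25]
predicted = [8120.0, 8165.5, 8185.25]
print(rmse(actual, predicted))  # → 5.0 (every forecast is off by 5 points)
```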
    Title: Enhancement of Multi Machine Stability using Fault Current Limiter and Thyristor Controlled Braking Resistor
    Authors: Harvinderpal Singh and Prince Jindal
    Abstract: This paper presents the enhancement of the transient stability of a multi-machine power system using a fault current limiter (FCL) and a thyristor controlled braking resistor (TCBR). The FCL limits the fault current by inserting an impedance, and the TCBR is used for fast control of generator disturbance. The effectiveness of both devices has been demonstrated by considering a three-phase-to-ground fault. The simulation results show the enhancement in power system transient stability when both devices are used simultaneously.
    Click Here to Download Pdf
    Title: Uses a Statistical and Poisson Distribution Analysis Method to Derive the Probability Measures from the Log File
    Authors: J. Malarvizhi and Dr. S. Thabasu Kannan
    Abstract: Due to the increasing number of engineering colleges in Tamilnadu, the level of competition for admissions has also increased. Only by implementing dynamic strategies can academic institutions meet this competition. One survey clearly states that in more than 75% of engineering colleges, enrolment is less than 30% of the sanctioned intake, so survival is a problem for these institutions. Another survey shows that every year 10% of engineering colleges wind up their affiliation and approval due to lack of admissions, and 5% decide to sell due to lack of strength. With strong effort and a dynamic strategy, an institution can still hold its place. A candidate takes admission in an institution only when it matches his or her preference exactly; otherwise the candidate moves on to the next alternative in the preference list. This paper stresses the factors that help identify patterns for attracting enough students to reach at least the break-even point. In addition, the World Wide Web plays an important role in storing, sharing and distributing information about academic institutions. A social survey states that more than 65% of admissions are gained through effective web pages. The exponential growth of the World Wide Web has provided an excellent opportunity to study potential students and their behavior using web access logs. If an institution's web domain clearly contains the information required by potential students, it can attract them and gain more admissions, even beyond its own jurisdiction.
    Some of the expectations of potential students while accessing the web site for admission are:
    • getting the required information with a minimum number of clicks on the web pages;
    • no web traffic congestion while accessing and navigating the college web site;
    • search queries answered within a short response time by applying search engine optimization and search engine spiders;
    • support for the fastest and latest browsers and operating systems;
    • few web server errors while navigating the college web site.
    For attracting counseling-category and other-state students, the web page plays an important role. Web usage mining is the application of data mining techniques to very large data repositories to extract usage patterns. In general, every web server keeps a record of all transactions and acts as a bridge between potential students and institutions. The record contains full details of every user click on the web documents of the site; these records need to be scrutinized and interpreted to gather knowledge about the preferences of potential students and their parents in accessing web pages. In recent years several methods have been proposed for mining web log data. The main intention of this paper is to use the statistical method of Poisson distribution analysis to find the higher-probability session sequences and to compare the efficiency of our developed algorithm with the Poisson value. The study of large volumes of clickstream data demands data mining methods; mining web server records includes determining frequently occurring access sequences. The Poisson distribution gives the probability of a specific number of events when the average rate of occurrence is known.
    Here the Poisson probability is compared with the efficiency of our developed algorithm. For larger numbers of transactions, the developed algorithm performs better than the Poisson value, because it treats the confidence level as dependent rather than independent: the Poisson distribution models the probability that a particular page is visited as an independent event, whereas the developed algorithm's result is dependent.
    Click Here to Download Pdf
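The Poisson probability the abstract applies to page-visit counts follows the standard formula P(k; λ) = λᵏ e^(−λ) / k!. A minimal sketch with invented numbers (the paper's own rates are not given here):

```python
import math

# Sketch of the Poisson probability used above: P(k; lam) = lam**k * e**(-lam) / k!,
# where lam is the average rate (e.g. mean hits per session) and k the observed
# count. The rate 4.0 below is an invented example value.
def poisson_pmf(k, lam):
    return (lam ** k) * math.exp(-lam) / math.factorial(k)

# Probability that a page visited on average 4 times per session is visited exactly twice:
p = poisson_pmf(2, 4.0)   # ≈ 0.1465
```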
    Title: Improvement to Adaptive E-learning Using Augmented Reality
    Authors: B. Kowsalya and Dr. M. Maria Dominic
    Abstract: Learning is the process of acquiring knowledge, behaviors, skills and values from information. Learners are classified into different types based on how each learner takes in information in a unique way, and the various kinds of learners have different learning models based on Felder-Silverman. Augmented Reality (AR) technology provides learning content adapted to the individual learner, supplies sufficient learning content for every learner, and makes learning more comfortable for each and every learner. The main objective of this research is to provide the best learning content for the learner using Augmented Reality technology.
    Click Here to Download Pdf
    Title: Use of Linked Data in Adaptive E-Learning Management System
    Authors: J. Agalya and Dr. M. Maria Dominic
    Abstract: Every learner is unique and has his own thoughts, character and even his own impressions. A learning style is a set of ideas that explains the variability in individual patterns of receiving information. In order to identify the learner's learning style, the Felder-Silverman learning style model is used in this research; this is done by means of a questionnaire. Accordingly, learning resources are provided to the learner. This paper proposes an architecture that is unique in its use of linked data and a clustering algorithm.
    Click Here to Download Pdf
    Title: Adaptation in E-Learning Management System Using Possibility Theory
    Authors: P. Sivaranjani, Dr. M. Maria Dominic and Dr. Britto Anthony Xavier
    Abstract: Learning is the process of actively creating knowledge: the gathering of experiences and the consequent development of a new understanding of the world around us. With the rapid growth of computer and internet technologies, e-learning has become a major trend in the computer teaching and learning field. The aim of adaptive e-learning is to give students the suitable content at the right time, meaning that the system is able to keep track of usage and knowledge level and arrange content automatically for each student for the best learning result. This paper provides an overview of the use of possibility theory in adaptive e-learning. To get a more specific estimation of learner ability, Maximum Possibility Estimation (MPE) is applied to approximate the learner's ability based on the explicit learning path and resources.
    Click Here to Download Pdf
    Title: Furtherance to Adaptive E-learning Using Dynamic Programming
    Authors: J. Sivasankari, Dr. M. Maria Dominic and Mr. A. George Louis Raja
    Abstract: Different learners have different learning styles based on their preferences, knowledge, motivation and other factors. Learning styles can be generalized to visual, auditory and kinesthetic; Felder-Silverman has categorized them in a more elaborate manner as visual, verbal, inductive, intuitive, deductive, active, reflective, sensory and global. This research focuses on providing an adaptive learning path and personalizing the learning resources to the style of the learner. To achieve this objective, a dynamic programming representation is used.
    Click Here to Download Pdf
    Title: Controlling the Boiler Temperature of Tea Leaves by using Android Application and Arduino UNO
    Authors: Boopathy S and N.Ramkumar
    Abstract: This paper presents an automated tea boiler system based on an Arduino UNO programmable controller, which controls the temperature of the chamber in the different stages of drying. Several techniques have been used for tea drying systems according to the tea genre. The batch tea dryer is designed with 6 to 8 trays. The temperature above the trays is controlled between 50°C and 100°C. The moisture content of the tea leaves declines from around 68% to approximately under 3%, while the temperature of the leaves increases from a little less than 30°C to 80°C. A microcontroller is deployed as the main processor to process data received from sensors and to provide control signals. The temperature is monitored in a Bluetooth terminal on an Android phone; an HC-05 module sends and receives the data to and from the phone.
    Click Here to Download Pdf
    Title: PPAKBS: Privacy-Preserving Public Auditing for Key generation Based Cloud Storage using Batch Signature
    Authors: Mr. R. Nallakumar and N Keerthika
    Abstract: To protect private data in cloud storage against fraud, adding fault tolerance to cloud storage together with data integrity checking and failure repair becomes critical. In this paper, we propose a novel public auditing framework for key-generation-code-based cloud storage. To solve the auditing difficulties of failed authenticators in the absence of data owners, we introduce a batch signature process, used to regenerate the authenticators, into the conventional public auditing framework. Moreover, the proposed system designs a novel publicly verifiable authenticator, which is generated by a combination of private keys and can be regenerated using partial keys. Extensive security analysis shows that the proposed method is provably secure under the random batch model, and experimental evaluation indicates that our scheme is highly efficient and can be feasibly integrated into key-generation-code-based cloud storage.
    Click Here to Download Pdf
    Title: An Efficient Verifiable Dynamic Multi-copy Data Possession in Cloud Computing Systems
    Authors: Mr. R. Nallakumar and S Hemalatha
    Abstract: At present, many individuals and organizations outsource their data to remote cloud service providers (CSPs), seeking to reduce the maintenance cost and the burden of large local data storage. The CSP offers paid storage space on its infrastructure to store customers' data. Replicating data on several servers across many data centers achieves a higher level of scalability, availability, and durability; the more copies the CSP is asked to store, the more the customers are charged. In this paper we propose the Optimal Multi-copy Dynamic Data Possession (MDAP) security model, a provable data possession scheme that achieves two main goals: (i) it prevents the CSP from cheating by storing fewer copies, and (ii) it supports dynamic behaviour of data copies over cloud servers via operations such as block modification, insertion, deletion, and append. The security of the proposed scheme against colluding servers is also proved.
    Click Here to Download Pdf
    Title: A Survey Paper on Sentiment Analysis of Social Media & Applications
    Authors: Yuvrajsing Chilhare and Prof. D.D.Londhe
    Abstract: Social networks are increasingly popular on the internet. In today's competitive business environment, it is necessary to collect, monitor and analyze user data about one's own company and one's competitors. Nowadays sentiment analysis is used for business intelligence, recommendations, information retrieval, etc. Big organizations know this, so they use social media as a marketing tool reaching close to 800 million daily active users.
    Click Here to Download Pdf
    Title: A Social Network Aided Video on Demand for Peer To Peer Network
    Authors: Ms. Virangana Waghchaure and Prof.Viresh Chapte
    Abstract: The Internet nowadays is often used as a medium to share large multimedia content. This sharing is carried out, a number of times, through the peer-to-peer (P2P) sharing architecture instead of the traditional server-client model. The scarcity of network addresses on the web has led to the emergence of private and global networks. Because the identity of peers in a private network remains hidden behind their global endpoint, P2P applications cannot run between two peers in separate private networks. We have proposed a hierarchical P2P network of private and global networks, in which the lower tier is formed by the peers in each private network, while the upper tier is formed by the global endpoints (called proxies) of each of these private networks. We have designed a system to live-stream videos peer-to-peer, with a centralized server to manage delivery if the peer content delivery fails. The system aims at reducing load on the server through peer-to-peer applications.
    Click Here to Download Pdf
    Title: Analysing HEED Clustering Algorithm For Performance Improvements
    Authors: Nikita Gandotra and Jasbir Singh
    Abstract: Wireless sensor networks are extensively used when there is a need for monitoring in difficult terrain. Due to the restricted power supply, clustering algorithms are used to group neighbours in a way that imitates the behaviour of the actual network. Hybrid Energy Efficient Distributed clustering (HEED) was proposed to address the limited power supply and the network lifetime; HEED selects cluster heads periodically according to their residual energy and node degree. In this paper a few improvements to the original HEED are suggested. A new algorithm based on these improvements is proposed and analysed with both homogeneous and heterogeneous node batteries. The proposed model improves the average energy of each node and the network lifetime.
    Click Here to Download Pdf
    Title: Proficiency Analysis of AODV, DSR and DSDV, Adhoc Routing Techniques for Internet of Things
    Authors: Tausifa Jan Saleem and Prof. LalitSen Sharma
    Abstract: With massive technological advancements and the growing popularity of digital assistance in everyday life, the world is witnessing the formation of the Internet of Things (IoT), where real-world entities augmented with computing devices, sensors, and actuators are connected to the Internet, enabling them to share their generated data through the Web. By mashing up these smart things with the services and data available on the Web, novel IoT applications can be created. In these applications, a routing service is required to enable efficient exchange of information among smart things. In this paper three routing techniques, AODV, DSR and DSDV, have been studied and their proficiency analyzed using the network simulator NS2 on three performance metrics: end-to-end delay, packet delivery ratio and average energy consumption.
    Click Here to Download Pdf
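Two of the metrics named in the abstract can be computed directly from a per-packet trace. The sketch below is illustrative only: the trace format (send/receive timestamps in seconds, with `None` marking a lost packet) and the values are invented, not NS2 output.

```python
# Hypothetical sketch of two metrics from the abstract, computed over an
# invented per-packet trace: (tx_time, rx_time) pairs, rx_time=None if lost.
def pdr_and_delay(trace):
    sent = len(trace)
    delivered = [(tx, rx) for tx, rx in trace if rx is not None]
    pdr = len(delivered) / sent                                        # Packet Delivery Ratio
    avg_delay = sum(rx - tx for tx, rx in delivered) / len(delivered)  # avg End-to-End Delay
    return pdr, avg_delay

trace = [(0.00, 0.05), (0.10, 0.16), (0.20, None), (0.30, 0.37)]
pdr, delay = pdr_and_delay(trace)   # pdr = 0.75, delay ≈ 0.06 s
```

In practice these values would be extracted from an NS2 trace file rather than a hand-written list.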
    Title: A Study of various data mining techniques for diabetic prognosis
    Authors: Sabreena Jan and Vinod Sharma
    Abstract: Diabetes has affected over 415 million people worldwide and, according to the World Atlas of 2015, by 2040 this number is expected to rise to over 642 million. India has become the second largest country in the world in this respect, with 69.2 million people suffering from diabetes. Diabetes has emerged as one of the deadliest diseases in the world and is a serious public health challenge. Data mining has emerged as an important field in the diagnosis of this disease; the use of data mining techniques on medical data has brought about important, valuable and effective results which can enhance medical knowledge and support decision making. This paper focuses on the analysis of diabetes data by various data mining techniques: Naïve Bayes, J48, Multilayer Perceptron and K-star.
    Click Here to Download Pdf
    Title: Mobility Models On Routing Protocols in MANET for Systems Performance Evaluation: A Survey and Analysis
    Authors: Ashish Gupta
    Abstract: A Mobile Ad-Hoc Network (MANET) is a collection of wireless mobile hosts forming a temporary network without the aid of any stand-alone infrastructure or centralized administration. Most of the proposed MANET protocols do not address security issues. MANET routing protocols are used to find specific routes between source and destination. This paper presents a logical study of four routing protocols (FSR, DYMO, STAR and LAR1) and performance comparisons between them on the basis of performance metrics (throughput, packet delivery ratio, and end-to-end delay measured after simulation of the network) with the help of the QualNet simulator. In this study we have considered three mobility scenarios: the Random Waypoint, Group Mobility and Freeway mobility models, selected to represent the possibility of practical application in the future.
    Click Here to Download Pdf
    Title: A study on SQL Injection Attack and its Prevention Measures at Database Management Level
    Authors: Nikita Gupta and Lalit Sen Sharma
    Abstract: In a multi-tier design, the attacker takes advantage of the design and maliciously inserts commands which may lead to security breaches such as unauthorized access to application resources, escalation of privileges, and modification of sensitive data. Moreover, these malicious commands are not fully detected by firewalls and endpoint defenses. Hackers exploit these vulnerabilities to carry out attacks and compromise security. The SQL Injection Attack (SQLIA) is a kind of attack performed by exploiting such vulnerabilities.
    Click Here to Download Pdf
    Title: Browser compatibility issues in implementing Content Security Policy to prevent Cross Site Scripting attacks
    Authors: Haneet Kour and Lalit Sen Sharma
    Abstract: Web applications are developed for, and accessed by, millions of users for various services. These applications are built using various technologies like HTML, JavaScript, AJAX, XML etc., but design-level vulnerabilities in these technologies lead to security breaches resulting in theft of users' credentials. Thus the security of these applications is becoming an important concern in ensuring user authentication and privacy. The cross-site scripting (XSS) attack is an exploitation of such vulnerabilities in web applications, and XSS remains a big problem despite the bulk of solutions provided so far. Content Security Policy (CSP) is one approach to preventing this code injection. This paper studies the browser compatibility issues in deploying CSP to mitigate XSS vulnerabilities and also discusses how to resolve this incompatibility.
    Click Here to Download Pdf
    Title: ANN based Hierarchical Self-Organization to support Map-Reduce in Big Data Analytics
    Authors: Gazala Tidagundi and Vijaya Kumar B.P.
    Abstract: Organizing and managing the data generated from different sources in any sector has become a great challenge in today's world, and the Internet of Things (IoT) has become an added bottleneck in big data analytics. With the boom in technology and the huge growth in data, clustering has become a crucial part of identifying similarities, based on many parameters in a given data set, that influence decision making. To reinforce this clustering, we proposed in a previous paper a machine learning method, Self-Organized Mapping (SOM), to address data redundancy and grouping. Solving such a problem involves tackling issues such as clustering, visualization, abstraction and de-duplication. The present work uses an Artificial Neural Network (ANN) for self-organized mapping, which has unique features such as construction of maps and self-organization to form different clusters dynamically, supporting the volume, variety and variance of big data. Building on the previous paper's technique for classifying and clustering data with SOM to support Hadoop, a hierarchical self-organized map is adopted together with distributed SOM. With hierarchical SOM, modularity can be achieved: complex objects are divided into simpler objects based on specific functionality and properties, which helps tackle the problems of de-duplication, visualization and abstraction at various points. MapReduce has always been used in combination with Hadoop, but in this implementation MapReduce is used external to the Hadoop job; using MapReduce externally is an advantage, as the Logical Block Address (LBA) and hash tag are obtained easily. From the results we found that this implementation increases speed, structures the data, and removes redundancy with improved efficiency.
    Click Here to Download Pdf
    Title: A Study of News Recommendation System
    Authors: Ms. Surbhi Ambulkar
    Abstract: Recommender systems have found significant utility in daily routine life. Newspapers are essential for learning about recent events and for general awareness, and various solutions are being developed to convert the paper news system into digital news, which has become very popular. Nowadays, users want clearer and more abstract news tailored to their choice. The proposed solution will help fulfil this desire and give the best results on demand. The proposed system will not only rank news content on the basis of user preference or popularity but also refine articles on the basis of priority and impact, helping to surface popular and effective news content according to the user's desire. This project work proposes a unique solution to derive the most relevant news for a given user input and to recommend further connected news. The project will implement association rules and popularity algorithms to refine the most optimal results; Java technology will be used to develop and evaluate it.
    Click Here to Download Pdf
    Title: Estimating the Similarity of the Objects Using Feature Vectors
    Authors: Hempriya Bali and Pawanesh Abrol
    Abstract: Understanding the similarity between two given objects is an essential part of many applications, such as content-based image retrieval, shape-based image retrieval, text classification and clustering. A similarity measure can thus help identify the object images most similar for the problem at hand. Some similarity measures depend on the features of the object image, the orientation of the image, or the intensity of light it is subjected to, which imposes certain limitations. A lot of research is ongoing to evaluate the similarity between two given object images, and many techniques proposed by different researchers calculate variations on different parameters. However, different techniques use different bases and methodologies for estimating similarity, and it is essential to understand the methodologies of the different distance-measuring techniques in order to apply them to various applications. Some methodologies use pixel-based estimation, some use feature-based estimation, and some require essential pre-processing of the images. The main aim of this manuscript is to examine and compare the different similarity-measuring techniques. In our work, it was found that Euclidean and Manhattan distance work on the relative position of pixels, Jaccard distance measures similarity between objects that are exactly similar in size and in a binary or gray format, and Mahalanobis distance is feature based. The analysis of results showed that Euclidean, Manhattan and Jaccard distance are easy to compute but are variant to object rotation, whereas Mahalanobis distance is invariant to object rotation but requires high computation time.
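    The four distance measures compared in the study can be written compactly. This is a generic sketch (binary feature vectors are assumed for Jaccard; the Mahalanobis covariance is passed in by the caller):

```python
import numpy as np

def euclidean(u, v):
    return float(np.sqrt(np.sum((u - v) ** 2)))

def manhattan(u, v):
    return float(np.sum(np.abs(u - v)))

def jaccard(u, v):
    """Jaccard distance for binary feature vectors: 1 - |A∩B| / |A∪B|."""
    inter = np.sum((u == 1) & (v == 1))
    union = np.sum((u == 1) | (v == 1))
    return 1.0 - inter / union if union else 0.0

def mahalanobis(u, v, cov):
    """Euclidean distance after whitening by the feature covariance,
    which makes the measure scale- and correlation-aware."""
    d = u - v
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))
```

    With an identity covariance, Mahalanobis distance reduces to Euclidean distance, which is a quick sanity check of the relationship between the two.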
    Click Here to Download Pdf
    Title: Cityzen: An Application to Address Civic Issues
    Authors: Vibha M J, Sanjana S, Rohit Vibhu C, Senthan Reddy T, Srikantaiah K C and Venugopal K R
    Abstract: Today we face many civic issues, especially in populated cities, leading to an increasing number of complaint registrations. A transparent and approachable mechanism is required through which the people of a locality can register complaints, check whether they have been rectified, and support existing complaints. This application provides a way for people facing problems due to civic issues like garbage dumps, sewage leaks and potholes to register a complaint using images that are automatically geo-tagged. The relevant government agency can then inform the public (through an image or message) once the issue is rectified.
    Click Here to Download Pdf
    Title: Automatic Text Extraction and Character Segmentation using Maximally Stable Extremal Regions
    Authors: Nitigya Sambyal and Pawanesh Abrol
    Abstract: Text detection and segmentation is an important prerequisite for many content-based image analysis tasks. The paper proposes a novel text extraction and character segmentation algorithm using Maximally Stable Extremal Regions as basic letter candidates. These regions are then subjected to thresholding, and thereafter various connected components are determined to identify separate characters. The algorithm is tested on a set of JPEG, PNG and BMP images over four different character sets: English, Russian, Hindi and Urdu. The algorithm gives good results for the English and Russian character sets; however, character segmentation in Urdu and Hindi is less accurate. The algorithm is simple and efficient, involves no training overhead, and gives good results even for low-quality images. The paper also discusses various challenges in text extraction and segmentation for multilingual inputs.
    Click Here to Download Pdf
    Title: Accuracy of Point Cloud Estimation for Tiny Objects
    Authors: Ashima Sharma, Pawanesh Abrol and Parveen K. Lehana
    Abstract: Reconstruction of 3D models from 2D images is one of the key problems in computer vision and has gained considerable attention in the past few years. Attempts have been made to simplify reconstruction techniques and minimize the effort required for camera calibration. Among the types of 3D reconstruction techniques, projective reconstruction requires the least calibration effort. The objective of this paper is to generate 3D point clouds using a projective reconstruction approach and to analyze which image pairs generate accurate results. The image set for this research study consists of multiple views of ten objects at different angles. An algorithm is presented for this point cloud generation. In this study, the SURF feature detector and descriptor are used for feature detection, and the least median of squares (LMedS) method is used to estimate the fundamental matrix and remove outliers. Although the algorithm is based on the existing mathematical model of 3D reconstruction, the combination of SURF for feature detection and LMedS for calculating the fundamental matrix has not yet been explored within a single algorithm. The algorithm uses projective reconstruction with a minimum level of complexity. Furthermore, the analysis of the results shows that the 3D points provide satisfactory information about the shape of the object; however, its size and relative position are not preserved during reconstruction.
    Click Here to Download Pdf
    Title: Authenticated Node Deployment and Data Transmission By Authorized Node through Multiple Hops in Wireless Sensor Network
    Authors: Pooja Gupta, ShashiBhushan and Sachin Majithia
    Abstract: A wireless sensor network is composed of small nodes scattered over a large area for sensing, evaluation and collaboration. Sensors collect information from their surroundings and pass the relevant information to a main station, usually called the base station, for analysis of the conditions. They are deployed in very hostile environments, so many issues and challenges arise in this area. Security and authentication are the main concerns for wireless sensor networks, as they are prone to several attacks. Our main emphasis is on the security of the network; for this we propose a technique that allows only authenticated nodes to join the network. In this way we restrict the entry of malicious (black) nodes, which protects the network from many attacks. We also address secure data transmission from source to destination within the network using public and private key generation and an encryption procedure.
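    The authenticate-before-join flow can be illustrated with a simplified sketch. Note the paper uses public/private key generation, whereas this sketch substitutes a symmetric pre-shared key (the `NETWORK_KEY` value is hypothetical) purely to show the admission check a base station might perform:

```python
import hashlib
import hmac
import os

# Hypothetical pre-shared secret provisioned into nodes before deployment.
NETWORK_KEY = b"pre-shared-deployment-key"

def join_request(node_id: bytes):
    """A joining node proves knowledge of the pre-shared key by tagging a
    fresh nonce with an HMAC; the base station verifies before admitting."""
    nonce = os.urandom(16)
    tag = hmac.new(NETWORK_KEY, node_id + nonce, hashlib.sha256).digest()
    return node_id, nonce, tag

def admit(node_id: bytes, nonce: bytes, tag: bytes) -> bool:
    """Base-station side: recompute the tag and compare in constant time."""
    expected = hmac.new(NETWORK_KEY, node_id + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)
```

    A node without the key cannot forge a valid tag, so a black node's join request is rejected; the paper's public-key variant additionally supports per-node identities and encrypted forwarding.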
    Click Here to Download Pdf
    Title: Benefits and Impact of Big Data in future E-Learning Industry
    Authors: Dr.V V Narendra Kumar and Dr. K.Kondaiah
    Abstract: E-learning is changing, and we will witness new models, technologies and designs emerge in the future. Perhaps the “e” in e-learning should be dropped and replaced with “Big D”, giving big e-learning, or perhaps “Big D-Learning”. E-learning today is gaining a lot of importance in the educational and corporate sectors. Future e-learning depends on emerging technologies and improved instructional design models, and big data has a prominent role in that future.
    Click Here to Download Pdf
    Title: “BIG DATA” Analysis Using Hadoop & MongoDB
    Authors: Praveen Garg and Ms. Monika Sharma
    Abstract: Big data is a term for data sets with large and complex structure, for which traditional data processing applications are inadequate. Big data includes several types of data: semi-structured, unstructured and structured. Hadoop is used to process semi-structured and unstructured big data by implementing the MapReduce paradigm to locate all relevant data, while NoSQL stores such as MongoDB and TerraStore process structured big data. This paper discusses the analysis of big data using Hadoop and MongoDB.
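    The MapReduce paradigm mentioned above can be sketched as plain functions, outside any Hadoop installation. The word-count workload is a generic illustration, not the paper's analysis:

```python
from collections import defaultdict

def map_phase(doc_id, text):
    """Mapper: emit a (word, 1) pair for every word in the document."""
    return [(word.lower(), 1) for word in text.split()]

def shuffle(mapped):
    """Shuffle: group all emitted values by key, as the framework would."""
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reducer: combine each key's list of values into a single result."""
    return {key: sum(values) for key, values in groups.items()}

def mapreduce_wordcount(docs):
    mapped = [pair for i, text in enumerate(docs) for pair in map_phase(i, text)]
    return reduce_phase(shuffle(mapped))
```

    In a real Hadoop job the same three stages run distributed over HDFS blocks; the mapper and reducer contracts are exactly the ones shown.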
    Click Here to Download Pdf
    Title: Improving Image Retrieval Using Mahalanobis Based Clustering Technique
    Authors: Sarika Mahajan, Pawanesh Abrol and Parveen K. Lehana
    Abstract: Image retrieval is a well-known field of research in information management. In such a system, images are retrieved based on their features: images are represented by feature vectors, and the similarity between two images is determined by the distance between their feature vectors. There are different types of distance functions, such as city block distance, Euclidean distance, weighted Euclidean distance and Mahalanobis distance. In this paper the Mahalanobis distance measure is used, which is based on the correlation between variables, by which different patterns are identified and analyzed. A multimedia database may contain thousands of images, and searching for images in a high-dimensional database may be time consuming. Many conventional image retrieval methods retrieve images from a database by considering the similarity between the query image and the stored images, but they need further improvement for better results. In this paper, efficient retrieval of images using Mahalanobis-based clustering is presented. The clustering-based technique that adopts the Mahalanobis distance measure increases accuracy and reduces retrieval time.
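    The idea of speeding up retrieval by searching only the Mahalanobis-nearest cluster can be sketched as follows. The cluster centroids are assumed to be given (e.g. from a prior clustering step), and the covariance regularization is an assumption of the sketch, not the paper's exact pipeline:

```python
import numpy as np

def mahal(u, v, inv_cov):
    """Mahalanobis distance between feature vectors u and v."""
    d = u - v
    return float(np.sqrt(d @ inv_cov @ d))

def retrieve(db, query, centroids, k=3):
    """Search only the cluster whose centroid is Mahalanobis-closest to the
    query, instead of scanning the whole feature database."""
    # Regularize the covariance slightly so the inverse always exists.
    inv_cov = np.linalg.inv(np.cov(db.T) + 1e-6 * np.eye(db.shape[1]))
    nearest = min(range(len(centroids)),
                  key=lambda c: mahal(query, centroids[c], inv_cov))
    members = [i for i in range(len(db))
               if min(range(len(centroids)),
                      key=lambda c: mahal(db[i], centroids[c], inv_cov)) == nearest]
    # Rank the chosen cluster's members by distance to the query.
    members.sort(key=lambda i: mahal(db[i], query, inv_cov))
    return members[:k]
```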
    Click Here to Download Pdf
    Title: Data Security in Cloud Computing Using Various Encryption Techniques
    Authors: Shikha Rani and Shanky Rani
    Abstract: Cloud computing is a mechanism that provides on-demand self-service access to computing resources over the internet. It is a shared pool of information and resources that makes up a cloud. To manage this huge amount of data, it is highly recommended to ensure its security and to control access to it. Due to advancements in technology, security has become one of the major areas of concern in the cloud environment: the higher the level of security, the higher the level of effectiveness achieved. In this paper, the emphasis is on presenting various encryption techniques and an effective security solution, and also on reducing cloud storage overhead.
    Click Here to Download Pdf
    Title: Performance Analysis of Jellyfish Reorder Attack on Zone routing protocol in MANET
    Authors: Pardeep Singh Tiwana and Nafiza Mann
    Abstract: MANETs have attracted the interest of researchers in recent years. A MANET is a wireless network in which nodes move arbitrarily. The Zone Routing Protocol is a kind of hybrid routing protocol. Numerous attackers target MANETs because of their nature. One such attack is the jellyfish reorder attack, which reorders packets while forwarding them from the source to the destination node. This paper compares the Zone Routing Protocol under the jellyfish reorder attack against the protocol without the attack, for an increasing number of nodes. The performance metrics are data packets sent, data packets received and packet delivery ratio. To examine the efficiency of this attack, a detailed simulation analysis is performed in a network simulator with and without the jellyfish reorder attack.
    Click Here to Download Pdf
    Title: Parametric analysis of AODV, DSDV and ZRP protocols for video streaming over WiMAX
    Authors: Gurmandeep Kaur and Navneet Kaur
    Abstract: Worldwide Interoperability for Microwave Access (WiMAX) is an extremely well-known type of broadband wireless technology, with prominent features such as non-line-of-sight transmission, flexible deployment, scalable configuration, excellent QoS and high security. It delivers high speeds over greater distances, thus meeting diverse client requirements for data, voice and video transmission. This paper presents a comparative assessment of a current reactive, proactive and hybrid protocol on the performance of the User Datagram Protocol (UDP) for video streaming in WiMAX. Results are determined considering diverse parameters, such as throughput, packet delay and packet loss. The simulator used for the analysis is ns2.
    Click Here to Download Pdf
    Title: Introducing a New Technique for Making a Secure Steganosystem
    Authors: Gopesh Sardana and Nidhi Sharma
    Abstract: One of the popular approaches to data security is steganography: the art of transferring hidden data or undisclosed messages over a public channel so that a third party cannot detect the presence of the secret messages. Steganography has been practiced for thousands of years, but in the last two decades it has been applied to digital media. Since steganography is becoming an increasingly well-known technique, we need to make it more secure as well, i.e. even if an intruder sniffs the traffic, our information remains untraceable. Steganography applies to various file types, e.g. image, audio, video and executable files. Video steganography is impressive but hard to implement; it can hide a rather large amount of data in a small volume and hence is more secure, as the size of the resulting file raises no suspicion about hidden information. In this paper, we design a new steganography technique that is impressive in terms of hiding and secure transmission of information. This technique does not involve stream substitution, which makes it difficult to implement.
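    For context, classic least-significant-bit (LSB) embedding, one of the simplest steganographic schemes, looks like this. It is not the new technique the paper proposes (which the abstract does not detail); it only illustrates the hide-and-recover pattern on raw cover bytes:

```python
def embed(cover: bytes, message: bytes) -> bytes:
    """Hide `message` in the least-significant bits of `cover`
    (one message bit per cover byte, MSB of each message byte first)."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for message")
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit   # overwrite only the LSB
    return bytes(stego)

def extract(stego: bytes, n_bytes: int) -> bytes:
    """Recover n_bytes of hidden message from the LSBs of the stego data."""
    bits = [b & 1 for b in stego[:n_bytes * 8]]
    out = bytearray()
    for i in range(n_bytes):
        byte = 0
        for bit in bits[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)
```

    Because only the lowest bit of each cover byte changes, the stego file keeps its original size, which is the "no suspicion" property the abstract refers to.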
    Click Here to Download Pdf
    Title: A Performance Improvement Technique with Feature Extraction Using Adaptive Boost
    Authors: Karishma Manchanda and Heena Wadhwa
    Abstract: Software defect prediction is one of the crucial activities in software engineering. A software defect is a condition in a software product that does not meet end-user expectations; it occurs when an error in the code causes a program malfunction or produces incorrect results. A number of learning approaches have been used in software performance analysis, and analysis of the various factors that influence the predictive performance of the system is an essential step. The paper presents a hybrid Adaptive Boost with SVM (AdaBoostSVM) approach, implemented as a classifier model for component learning and to improve the overall performance of the system. Kernel Principal Component Analysis (KPCA) and Correlation-based Feature Selection (CFS) methods were applied to the fault prediction dataset PC1 from the PROMISE dataset repository. The defect prediction performance of the model was analysed using various metrics, such as accuracy, recall, precision and F-measure, with the help of MATLAB. A comparative analysis of our proposed model with existing models demonstrates that Adaptive Boost with SVM (AdaBoostSVM) is the better approach.
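    The boosting loop behind AdaBoostSVM can be sketched with decision stumps standing in for the SVM base learners; the stump substitution is an assumption made purely to keep the sketch short and self-contained, while the weight-update and weighted-vote logic is standard AdaBoost:

```python
import numpy as np

def train_adaboost(X, y, rounds=10):
    """AdaBoost with one-feature threshold stumps as base learners.
    Labels y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                 # per-sample weights
    ensemble = []
    for _ in range(rounds):
        best = None
        # Exhaustively pick the stump with the lowest weighted error.
        for f in range(X.shape[1]):
            for t in np.unique(X[:, f]):
                for sign in (1, -1):
                    pred = np.where(X[:, f] > t, sign, -sign)
                    err = float(np.sum(w[pred != y]))
                    if best is None or err < best[0]:
                        best = (err, f, t, sign)
        err, f, t, sign = best
        err = max(err, 1e-10)               # avoid log(0) on perfect stumps
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(X[:, f] > t, sign, -sign)
        w = w * np.exp(-alpha * y * pred)   # up-weight misclassified samples
        w = w / w.sum()
        ensemble.append((alpha, f, t, sign))
    return ensemble

def predict(ensemble, X):
    """Weighted vote of all base learners."""
    score = sum(alpha * np.where(X[:, f] > t, sign, -sign)
                for alpha, f, t, sign in ensemble)
    return np.where(score >= 0, 1, -1)
```

    In the paper's variant each round fits an SVM on the reweighted samples instead of a stump; the alpha weighting and final vote are unchanged.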
    Click Here to Download Pdf
    Title: Implementation of Big Data on Hadoop over Single & Distributed Node
    Authors: Ketan Singh Tanwar and Akhil Kaushik
    Abstract: Big data consists of datasets too large to be handled by traditional database systems. Knowledge discovery and decision making from such rapidly growing, voluminous data is a challenging task in terms of data organization and processing; this emerging trend, known as Big Data Computing, is a new standard that combines large-scale compute, new data-intensive techniques and mathematical models to build data analytics. Apache Hadoop sits at the intersection of data storage and information understanding. Hadoop can be configured on a single node as well as on distributed nodes, and this paper covers all the steps for configuring Hadoop 1.2.1.
    Click Here to Download Pdf
    Title: Segmentation of Brain Tumor from MRI- A Survey
    Authors: Anjali Gupta and Gunjan Pahuja
    Abstract: The segmentation of the internal structure of the brain plays a crucial role in neuroimaging analysis. The major purpose of brain tumor segmentation is to categorize the different tumor tissues, such as active cells, necrotic core and edema, from the normal brain tissues of Gray Matter (GM), White Matter (WM) and Cerebrospinal Fluid (CSF). MR (Magnetic Resonance) images are used to image the soft tissues that connect, support or surround other structures and organs of the human body. The objective of this review paper is to present a complete overview of MRI brain tumor segmentation. Various segmentation techniques are discussed, and it is found that a hybrid clustering methodology gives better results, with minimal computation time and higher accuracy.
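    A basic clustering step of the kind the surveyed hybrid methods build on can be sketched with k-means on pixel intensities. The deterministic quantile initialization is an illustrative choice for the sketch, not a method from the survey:

```python
import numpy as np

def kmeans_segment(image, k=3, iters=20):
    """Cluster pixel intensities into k classes (e.g. tissue types),
    the basic step clustering-based MRI segmentation builds on."""
    pixels = image.reshape(-1, 1).astype(float)
    # Deterministic init: spread centers across the intensity range.
    centers = np.quantile(pixels, np.linspace(0, 1, k)).reshape(k, 1)
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute centers.
        labels = np.argmin(np.abs(pixels - centers.T), axis=1)
        for c in range(k):
            if np.any(labels == c):          # leave empty clusters unchanged
                centers[c] = pixels[labels == c].mean()
    return labels.reshape(image.shape), centers.ravel()
```

    Real pipelines work on full MR volumes with spatial regularization; this shows only the intensity-clustering core.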
    Click Here to Download Pdf
