
ABSTRACT: Users are increasingly pursuing complex task-oriented goals on the web, such as making travel arrangements, managing finances, or planning purchases. To this end, they usually break down the tasks into a few codependent steps and issue multiple queries around these steps repeatedly over long periods of time. To better support users in their long-term information quests on the web, search engines keep track of their queries and clicks while searching online. In this paper, we study the problem of organizing a user's historical queries into groups in a dynamic and automated fashion. Automatically identifying query groups is helpful for a number of different search engine components and applications, such as query suggestions, result ranking, query alterations, sessionization, and collaborative search. In our approach, we go beyond approaches that rely on textual similarity or time thresholds, and we propose a more robust approach that leverages search query logs. We experimentally study the performance of different techniques and showcase their potential, especially when combined together.

I. Introduction

As the size and richness of information on the web grows, so does the variety and the complexity of tasks that users try to accomplish online. Users are no longer content with issuing simple navigational queries.

Various studies on query logs (e.g., Yahoo's and AltaVista's) reveal that only about 20 percent of queries are navigational. The rest are informational or transactional in nature. This is because users now pursue much broader informational and task-oriented goals, such as arranging future travel, managing their finances, or planning their purchase decisions. However, the primary means of accessing information online is still through keyword queries to a search engine. A complex task such as travel arrangement has to be broken down into a number of codependent steps over a period of time. For instance, a user may first search on possible destinations, timeline, events, etc.

After deciding when and where to go, the user may then search for the most suitable arrangements for air tickets, rental cars, lodging, meals, etc. Each step requires one or more queries, and each query results in one or more clicks on relevant pages. One important step toward enabling services and features that can help users during their complex search quests online is the capability to identify and group related queries together. Recently, some of the major search engines have introduced a new "Search History" feature, which allows users to track their online searches by recording their queries and clicks.


Consider, for example, a portion of a user's history as shown by the Bing search engine in February 2010. This history includes a sequence of four queries displayed in reverse chronological order together with their corresponding clicks. In addition to viewing their search history, users can manipulate it by manually editing and organizing related queries and clicks into groups, or by sharing them with their friends. While these features are helpful, the manual effort involved can be disruptive and will become untenable as the search history grows longer over time. In fact, identifying groups of related queries has applications beyond helping users make sense of and keep track of the queries and clicks in their search history. First and foremost, query grouping allows the search engine to better understand a user's session and potentially tailor that user's search experience according to her needs.

Once query groups have been identified, search engines can build a good representation of the search context behind the current query using the queries and clicks in the corresponding query group. This will help improve the quality of key search engine components such as query suggestions, result ranking, query alterations, sessionization, and collaborative search. For example, if a search engine knows that the current query "financial statement" belongs to a {"bank of america," "financial statement"} query group, it can boost the rank of the page that explains how to get a Bank of America statement instead of the Wikipedia article on "financial statement," or pages about financial statements from other banks. Query grouping can also assist other users by promoting task-level collaborative search. For instance, given a set of query groups created by expert users, we can select the ones that are highly relevant to the current user's query activity and recommend them to her.
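As a rough sketch of the result-ranking use just described, the following Python fragment (our own illustration, not the paper's method; the snippets, base scores, and boost weight are hypothetical) boosts results whose text overlaps the terms of the current query group:

# Sketch: boost results whose text overlaps the current query group's terms.
# Snippets, base scores, and the boost weight are hypothetical.
def rerank(results, query_group, boost=0.5):
    context = set()
    for query in query_group:
        context.update(query.lower().split())  # pool terms from the whole group
    def score(result):
        overlap = len(set(result["snippet"].lower().split()) & context)
        return result["base_score"] + boost * overlap
    return sorted(results, key=score, reverse=True)

group = ["bank of america", "financial statement"]
results = [
    {"snippet": "Financial statement - Wikipedia", "base_score": 1.0},
    {"snippet": "How to get a Bank of America statement", "base_score": 0.9},
]
print(rerank(results, group)[0]["snippet"])  # the Bank of America page wins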

Explicit collaborative search can also be performed by allowing users in a trusted community to find, share, and merge relevant query groups to perform larger, long-term tasks on the web.

II. Related Work

Fig 1: Web Mining Structure

Web content mining [3] deals with the discovery of useful information from the unstructured, semi-structured, or structured contents of web documents. Unstructured documents comprise text, images, audio, and video; semi-structured data includes HTML documents; lists and tables represent structured documents.

The main aim of web content mining is to act as a tool for retrieving information easily and quickly. Web content mining works by organizing a group of documents into related categories, which helps web search engines extract information more quickly and efficiently. Web structure mining [6], [7] mines information by utilizing the link structure of web documents.

It works at the inter-document level and discovers the hyperlink structure. It helps describe the similarities and relationships between sites. Web usage mining [3] is a data mining technique that mines information by analyzing the log files that contain the user access patterns. Web usage mining mines the secondary data that is present in log files and derived from the interactions of users with the web. Web usage mining techniques are applied to the data present in web server logs, browser logs, cookies, user profiles, bookmarks, mouse clicks, etc. This information is often gathered automatically by the web server through its access log.

2.1 Web Usage Mining

Web usage mining concentrates on techniques that can predict the navigational pattern of the user while the user interacts with the web.

It is mainly divided into two categories: general access pattern tracking and customized usage tracking. In general access pattern tracking, information is discovered using the history of web pages visited by users, while in customized usage tracking, mining is targeted at a specific user. There are mainly four types of data sources at which usage data is recorded: client level collection, browser level collection, server level collection, and proxy level collection.

Client Level Collection: At this level, data is gathered by means of JavaScript or Java applets. This data shows the behavior of a single user on a single site. Client-side data collection requires user participation to enable JavaScript or Java applets. The advantage of data collection at the client side is that it can capture all clicks, including presses of the back or reload button [2].

Browser Level Collection: The second method of data collection is by modifying the browser.

It shows the behavior of a single user over multiple sites. The data collection capabilities are enhanced by modifying the source code of an existing browser. Modified browsers provide much more versatile data, as they consider the behavior of a single user across multiple sites [2].

Server Level Collection: A web server log [5] stores the behavior of multiple users over a single site.

These log files can be stored in the common log format or the extended log format. Server logs are not able to record cached page views. Another technique for usage data collection at the server level is TCP/IP packet sniffing. Packet sniffers work by monitoring the network and retrieving usage data directly.

Proxy Level Collection: Proxy servers are used by internet service providers to provide World Wide Web access to customers. These servers store the behavior of multiple users at multiple sites. They function like cache servers and are able to produce cached page views. By predicting the usage pattern of the visitor, web usage mining improves the quality of e-commerce services, personalizes the web [1], or enhances the performance of the web structure and web server.

Server data are data collected from web servers; they include log files, cookies, and explicit user input. Servers contain different types of logs, which are considered to be the main data resource for web usage mining.

Problem Definition

There are rich variants of browsing behaviour analysis techniques available, but most of them suffer from the following issues:
1. Web server access log based techniques capture only partial user behaviour; the log management scheme therefore needs improvement.
2. More than one page is navigated at different times, so establishing the correlation between each user event and its corresponding web page is complex for an algorithm to learn.
3. Huge data volumes demand large time and space complexity.
4. The predictive methodology is inaccurate due to the small number of features available in the user navigation pattern.

Limitations of Existing System:
1. Accuracy of the system is quite low.
2. Time consumption increases with the dataset size.

Proposed Architecture

The framework consists of three levels.

Level 1: In this level, the basic features are generated from the web logs on the servers where the proposed system resides, and they are used to form web log records for a well-defined time period.

Logs are monitored and analysed to reduce malicious activity, focusing only on relevant users and sessions and providing the best protection for the targeted sessions. This also enables our detector to provide protection that best fits the targeted users, because the legitimate user profiles used by the detectors are developed from a smaller number of logs.

Level 2: In this step, the analysis is applied: the user profile generation module extracts the correlation between two separate features within each individual log. The distinct features come from Level 1, or from the "feature normalization module," in this step. All the extracted correlations are stored and then used in place of the original logs to represent the web logs. This representation differentiates between legitimate and illegitimate log data.

Level 3: The anomaly session identification mechanism is adopted for decision making. The normal user profile generation module generates profiles for the various types of web logs, and the generated normal profiles are stored in a database. The "Tested Profile Generation" module is used in the test phase to build profiles for the individual observed web logs.

Finally, the tested profiles are handed over to the "session identification" module, which compares each tested profile with the stored normal profiles; this requires expertise in the targeted detection algorithm and is a manual task. The Normal Profile Generation module generates profiles for the various types of legal log records, and the normal profiles generated are stored in the database. The Tested Profile Generation module is used in the test phase to build profiles for each observed log record. Next, the tested profiles are passed to the session identification component, which compares each tested profile with the stored normal profiles. A threshold-based classifier is employed in the session identification module to classify the logs [8].

A. Data Cleaning

Input: log_table
Output: refine_log_table

Begin
1. Read the records in log_table.
2. For each record in log_table:
3.    Read the fields (status code).
4.    If status code = 200, then get all fields.
5.       If the suffix of URL_link is in {*.gif, *.jpg, *.css, *.ico}, then
6.          remove the record (it requests an embedded resource, not a page).
7.       Else save the fields in the new table.
         End if
      Else
8.       skip to the next record.
      End if
End
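A minimal runnable sketch of this cleaning step, assuming common-log-format records and a hypothetical access.log path, might look as follows in Python:

# Minimal data-cleaning sketch for a common-log-format file.
# The log path and field positions are illustrative assumptions.
SKIP_SUFFIXES = (".gif", ".jpg", ".css", ".ico")

def clean_log(path):
    refined = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 9:
                continue  # malformed record
            url, status = fields[6], fields[8]
            if status != "200":
                continue  # keep only successful requests
            if url.lower().endswith(SKIP_SUFFIXES):
                continue  # drop image/stylesheet requests
            refined.append(fields)
    return refined

records = clean_log("access.log")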

B. Detection Mechanism

In this section, we present a threshold-based anomaly detector whose normal profiles are produced using purely legal web log records and are used for comparison against newly arriving, investigated log records. The difference between an individual normal profile and a freshly arriving log record is examined by the proposed detector. If the variation is larger than a pre-determined threshold, the log record is marked as a malicious session; otherwise it is marked as a legal session.
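A minimal sketch of such a threshold test, assuming the normal profile stores the mean mu and standard deviation sigma of a distance measure over legitimate training logs (the names, values, and default alpha are our own illustrative choices):

# Threshold-based anomaly test: a record is malicious when its
# distance from the normal profile exceeds mu + alpha * sigma.
def is_malicious(distance, mu, sigma, alpha=2.0):
    return distance > mu + alpha * sigma

# Example: a profile with mu=0.40, sigma=0.10 and alpha=2
# flags any record whose distance exceeds 0.60.
print(is_malicious(0.75, mu=0.40, sigma=0.10))  # True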

C. Algorithm for User Profile Generation

In this algorithm, the user normal profile is built through density estimation between the individual legitimate training web logs and the expectation of the legitimate training web logs.

Step 1: Input the web logs.
Step 2: Extract the original features of each individual log.
Step 3: Apply the user-profile concept to extract the geometrical correlation between the jth and kth features in the vector xi.
Step 4: User normal profile generation:
   i. Generate the triangle area map of each log.
   ii. Generate the covariance matrix.
   iii. Calculate the features between the legitimate records' values and the input records' values.
   iv. Calculate the mean.
   v. Calculate the standard deviation.
   vi. Return the profile.
Step 5: Session identification:
   i. Input: observed logs, normal profile, and alpha.
   ii. Generate the values for the input logs.
   iii. Calculate the value between the normal profile and the input logs.
   iv. If value < threshold, detect a normal session; else detect a malicious session.

In the training phase, we employ only the normal records.
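A minimal Python sketch of Steps 4 and 5 under simplifying assumptions of our own (each log is a numeric feature vector, the triangle area between feature pairs is approximated from their outer product, and the profile keeps the mean and standard deviation of the distance to the expected map):

import numpy as np

def triangle_area_map(x):
    # Pairwise "triangle areas" between features j and k (illustrative formula).
    return np.abs(np.outer(x, x)) / 2.0

def build_profile(training_logs):
    # Step 4: expected map, then mean/std of distances over legitimate logs.
    maps = np.array([triangle_area_map(x) for x in training_logs])
    expected = maps.mean(axis=0)
    dists = np.array([np.linalg.norm(m - expected) for m in maps])
    return {"expected": expected, "mu": dists.mean(), "sigma": dists.std()}

def identify_session(x, profile, alpha=2.0):
    # Step 5: compare the observed log's distance against mu + alpha * sigma.
    d = np.linalg.norm(triangle_area_map(x) - profile["expected"])
    return "normal" if d < profile["mu"] + alpha * profile["sigma"] else "malicious"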

Normal profiles are built with respect to the various types of appropriate logs using the algorithm described above. Clearly, the normal profiles and the threshold have a direct effect on the performance of the threshold-based detector. A poor-quality normal profile leads to a mistaken characterization of correct logs.

D. Algorithm for Session Identification

This algorithm is used for classification.

Step 1: The task is to classify new features as they arrive, i.e., to decide to which class label they belong, based on the currently existing log records.

Step 2: Formulate the prior probability, so that we are ready to classify a new record.
Step 3: Calculate the number of points in the record belonging to each log record.
Step 4: The final classification is produced by combining both pieces of information, i.e., the prior and the likelihood, to form a posterior probability.
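The steps above read like a Bayes-rule classifier; the following is a minimal sketch under that assumption (the priors and likelihood function are illustrative placeholders, not the paper's exact model):

# Bayes-rule sketch: posterior is proportional to prior * likelihood;
# the class with the largest posterior wins.
def classify(features, priors, likelihood):
    posteriors = {c: priors[c] * likelihood(features, c) for c in priors}
    return max(posteriors, key=posteriors.get)

# Toy likelihood: fraction of a record's features previously seen per class.
SEEN = {"normal": {"GET", "200"}, "malicious": {"GET", "404"}}
def likelihood(features, label):
    return max(len(features & SEEN[label]) / max(len(features), 1), 1e-9)

print(classify({"GET", "200"}, {"normal": 0.9, "malicious": 0.1}, likelihood))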

E. Mathematical Modeling

Let S be the system that we use for session identification. The proposed detection system is equipped with capabilities for accurate characterization of log behaviours and for detection of known and unknown attacks, respectively.

Input: an arbitrary dataset X = {x1, x2, ..., xn}.
Output: DP (detected sessions), DP = {N, M}, where N is the set of normal sessions and M is the set of malicious sessions.
Process: S = {D, MVC, NP, AD, DP}, where S = system,

D = dataset, MVC = multivariate correlation analysis, NP = normal profile generation, AD = session identification, and DP = detected sessions.

Experiment Evaluation and Analysis

Evaluation of session identification is done using a web logs dataset. The user normal profile is built using the same dataset. The threshold range is generated using µ + α·σ and µ − α·σ; for the normal distribution, the value of α ranges from 1 to 3. The detection rate and false positive rate are evaluated for different values of α.
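For instance, with illustrative values µ = 0.40 and σ = 0.10, choosing α = 2 yields the acceptance range [µ − 2σ, µ + 2σ] = [0.20, 0.60]; a tested profile whose value falls outside this range is flagged as a malicious session.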

Fig.: False positive rate vs. detection rate.

Advantages of Proposed System:
1. Accuracy is high.
2. Time consumption is very low compared to previous systems.
3. Classification accuracy is better than that of previous systems.

Disadvantages of Proposed System:
1. Does not consider a real-time dataset.
2. Processing speed depends on the machine configuration.

Future Scope:
1. Can be implemented with other algorithms to check accuracy.
2. A hybrid approach can also be implemented to improve accuracy.
3. To be implemented using a real-world dataset.

III. Conclusion

Web usage mining is indeed one of the emerging areas of research and an important sub-domain of data mining and its techniques. In order to take full advantage of web usage mining and all its techniques, it is important to carry out the preprocessing stage efficiently and effectively. This paper addresses the areas of preprocessing, including data cleansing, session identification, and user identification. Once the preprocessing stage is well performed, we apply the data mining technique of classification. Web log mining is one of the recent areas of research in data mining.

Web usage mining becomes an important aspect in today's era because the quantity of data is continuously increasing. The above results show that the detection rate of session identification is far better than that of previous systems, and the false positive rate is very low. As the false positive rate changes, there is a certain deflection in the detection rate as well. Thus, we show that our system performs better on the given dataset and also on a real-time dataset generated with the Wireshark software tool. We deal with the web server logs, which maintain the history of page requests,

for applications of web usage mining such as business intelligence, e-commerce, e-learning, personalization, etc.

References

[1] J. Teevan, E. Adar, R. Jones, and M.A.S. Potts, "Information Re-Retrieval: Repeat Queries in Yahoo's Logs," Proc. 30th Ann. Int'l ACM SIGIR Conf. Research and Development in Information Retrieval (SIGIR '07), pp. 151-158, 2007.
[2] A. Broder, "A Taxonomy of Web Search," SIGIR Forum, vol. 36, no. 2, pp. 3-10, 2002.
[3] A. Spink, M. Park, B.J. Jansen, and J. Pedersen, "Multitasking during Web Search Sessions," Information Processing and Management, vol. 42, no. 1, pp. 264-275, 2006.
[4] R. Jones and K.L. Klinkner, "Beyond the Session Timeout: Automatic Hierarchical Segmentation of Search Topics in Query Logs," Proc. 17th ACM Conf. Information and Knowledge Management (CIKM), 2008.
[5] P. Boldi, F. Bonchi, C. Castillo, D. Donato, A. Gionis, and S. Vigna, "The Query-Flow Graph: Model and Applications," Proc. 17th ACM Conf. Information and Knowledge Management (CIKM), 2008.
[6] D. Beeferman and A. Berger, "Agglomerative Clustering of a Search Engine Query Log," Proc. Sixth ACM SIGKDD Int'l Conf. Knowledge Discovery and Data Mining (KDD), 2000.
[7] R. Baeza-Yates and A. Tiberi, "Extracting Semantic Relations from Query Logs," Proc. 13th ACM SIGKDD Int'l Conf. Knowledge Discovery and Data Mining (KDD), 2007.
[8] J. Han and M. Kamber, Data Mining: Concepts and Techniques. Morgan Kaufmann, 2000.
[9] W. Barbakh and C. Fyfe, "Online Clustering Algorithms," Int'l J. Neural Systems, vol. 18, no. 3, pp. 185-194, 2008.
[10] Lecture Notes in Data Mining, M. Berry and M. Browne, eds., World Scientific Publishing Company, 2006.
[11] V.I. Levenshtein, "Binary Codes Capable of Correcting Deletions, Insertions and Reversals," Soviet Physics Doklady, vol. 10, pp. 707-710, 1966.
[12] M. Sahami and T.D. Heilman, "A Web-based Kernel Function for Measuring the Similarity of Short Text Snippets," Proc. 15th Int'l Conf. World Wide Web (WWW '06), pp. 377-386, 2006.
[13] J.-R. Wen, J.-Y. Nie, and H.-J. Zhang, "Query Clustering Using User Logs," ACM Trans. Information Systems, vol. 20, no. 1, pp. 59-81, 2002.
[14] A. Fuxman, P. Tsaparas, K. Achan, and R. Agrawal, "Using the Wisdom of the Crowds for Keyword Generation," Proc. 17th Int'l Conf. World Wide Web (WWW '08), 2008.
[15] K. Avrachenkov, N. Litvak, D. Nemirovsky, and N. Osipova, "Monte Carlo Methods in PageRank Computation: When One Iteration Is Sufficient," SIAM J. Numerical Analysis, vol. 45, no. 2, pp. 890-904, 2007.