Data Dimensions by Rank Cover Tree Technique

V. V. S. Sasank, Dr. T. Srinivas Rao, Dr. M. Srivenkatesh

Abstract


The need for procedures, methodologies, and algorithms for searching data has grown with advances in computer science and the ever-increasing volume of data, which raises both time and computational complexity. Many techniques have recently been proposed to address these problems; among them, nearest neighbor search is one of the most effective and has attracted many researchers. Different techniques are used for nearest neighbor search, and, besides resolving several complexity issues, their variety has made them suitable for diverse applications such as pattern recognition, multimedia search, information retrieval, databases, data mining, and computational geometry, to name a few. This classification consists of seven groups: Weighted, Reduction, Additive, Reverse, Continuous, Principal Axis, and Other methods, which are studied, evaluated, and compared in this paper. The complexity of the underlying structures, techniques, and algorithms is discussed as well. A search engine is a program that finds the particular pages that give relevant results for a user query; many search engines are available, such as Google, Yahoo, Bing, and Ask. This paper presents a new rank-based framework using a Page Rank Reviser algorithm together with data mining techniques, and also compares the results of the K-Nearest Neighbor (K-NN) algorithm and the revised page rank algorithm.
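To make the K-NN method discussed above concrete, the following is a minimal, illustrative sketch of brute-force k-nearest-neighbor classification. It is only a baseline for comparison: the rank cover tree technique in the title is designed to accelerate exactly this kind of search, and the data and function names here are hypothetical, not from the paper.

```python
import math

def knn(points, labels, query, k=3):
    """Brute-force K-NN classification (illustrative baseline only;
    a rank cover tree would avoid scanning every point)."""
    # Order training point indices by Euclidean distance to the query.
    order = sorted(range(len(points)),
                   key=lambda i: math.dist(points[i], query))
    # Majority vote among the labels of the k closest points.
    votes = [labels[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

# Tiny example: two clusters on a line.
pts = [(0.0,), (0.1,), (0.2,), (5.0,), (5.1,)]
lbl = ["a", "a", "a", "b", "b"]
print(knn(pts, lbl, (0.15,), k=3))  # -> a
```

The brute-force scan costs O(n) distance computations per query; tree-based indexes such as cover trees reduce this dependence, which is the motivation for the technique studied here.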


Keywords


Nearest neighbor search, query processing, dimensionality, rank-based search.
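The rank-based framework in the abstract builds on web page ranking. The paper's own Page Rank Reviser algorithm is not detailed here, so the sketch below shows only the classic PageRank power iteration it presumably revises; the graph, function, and parameter names are assumptions for illustration.

```python
def pagerank(links, damping=0.85, iters=50):
    """Classic PageRank power iteration (illustrative baseline;
    not the paper's Page Rank Reviser algorithm)."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {}
        for p in pages:
            # Each page q sends rank[q] / outdegree(q) to every page it links to.
            incoming = sum(rank[q] / len(links[q])
                           for q in pages if p in links[q])
            new[p] = (1 - damping) / len(pages) + damping * incoming
        rank = new
    return rank

# Tiny web graph: A and B link to each other, C links only to A.
web = {"A": ["B"], "B": ["A"], "C": ["A"]}
scores = pagerank(web)
print(max(scores, key=scores.get))  # -> A
```

Because every page here has at least one outgoing link, the ranks form a probability distribution (they sum to 1); a revised algorithm of the kind the paper describes would typically adjust how these scores are combined with query relevance.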





Copyright © 2012, All rights reserved. | ijmca.org

International Journal of Mechanical Engineering and Computer Applications is licensed under a Creative Commons Attribution 3.0 Unported License. Permissions beyond the scope of this license may be available at www.ijmca.org.