Johnson-Lindenstrauss lemma

The Johnson-Lindenstrauss lemma asserts that a set of n points in any high-dimensional Euclidean space can be mapped down into an O(log n / ε²)-dimensional Euclidean space such that the distance between any two points changes by only a factor of (1 ± ε), for any 0 < ε < 1.

Introduction

Johnson and Lindenstrauss {cite} proved a fundamental mathematical result: any n-point set in any Euclidean space can be embedded in O(log n / ε²) dimensions without distorting the distances between any pair of points by more than a factor of (1 ± ε), for any 0 < ε < 1. The original proof of Johnson and Lindenstrauss was much simplified by Frankl and Maehara {cite}, using geometric insights and refined approximation techniques.

Proof

Suppose we have a set P of n d-dimensional points and we map them down to k = C log n / ε² dimensions, for an appropriate constant C. Define f: R^d → R^k as the linear map, that is, if u, v ∈ R^d, then f(u) − f(v) = f(u − v). For example, f could be given by a k × d matrix A, with f(x) = Ax.

The general proof framework

All known proofs of the Johnson-Lindenstrauss lemma proceed according to the following scheme: For given n and ε, and an appropriate k = O(log n / ε²), one defines a suitable probability distribution F on the set of all linear maps R^d → R^k. Then one proves the following statement:

Statement: If T: R^d → R^k is a random linear map drawn from the distribution F, then for every vector x ∈ R^d we have

  (1 − ε) ||x|| ≤ ||T(x)|| ≤ (1 + ε) ||x||

with probability at least 1 − 1/n².

Having established this statement for the considered distribution F, the JL result follows easily: We choose T at random according to F. Then for every pair u, v ∈ P, using the linearity of T and the above Statement with x = u − v, we get that T fails to satisfy

  (1 − ε) ||u − v|| ≤ ||T(u) − T(v)|| ≤ (1 + ε) ||u − v||

with probability at most 1/n². Consequently, the probability that any of the (n choose 2) pairwise distances is distorted by T by more than a factor of (1 ± ε) is at most (n choose 2) · 1/n² < 1/2. Therefore, a random T works with probability at least 1/2.
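The union-bound argument above is easy to check numerically. The small experiment below (with an assumed constant 8 in the choice of k, for a comfortable margin) draws a random Gaussian T and counts how many of the (n choose 2) pairwise distances land outside the (1 ± ε) band:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, eps = 30, 200, 0.5
k = int(np.ceil(8 * np.log(n) / eps**2))  # assumed constant 8, not optimal
X = rng.normal(size=(n, d))               # the point set P
T = rng.normal(scale=1.0 / np.sqrt(k), size=(k, d))
Y = X @ T.T                               # images T(u) for u in P

distorted = 0
for i in range(n):
    for j in range(i + 1, n):
        orig = np.linalg.norm(X[i] - X[j])
        proj = np.linalg.norm(Y[i] - Y[j])
        if not (1 - eps) * orig <= proj <= (1 + eps) * orig:
            distorted += 1
print(distorted)  # typically 0 with these parameters
```

With this generous choice of k the per-pair failure probability is far below 1/n², so all 435 pairwise distances are preserved.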

References

  • S. Dasgupta and A. Gupta. An elementary proof of the Johnson-Lindenstrauss lemma. Tech. Rep. TR-99-06, Intl. Comput. Sci. Inst., Berkeley, CA, 1999.
  • P. Frankl and H. Maehara. The Johnson-Lindenstrauss lemma and the sphericity of some graphs. Journal of Combinatorial Theory, Series B, 44(3):355–362, 1988.
  • W. Johnson and J. Lindenstrauss. Extensions of Lipschitz maps into a Hilbert space. Contemporary Mathematics, 26:189–206, 1984.

Fast Monte-Carlo algorithms for matrix multiplication

Given an m × n matrix A and an n × p matrix B, we present 2 simple and intuitive algorithms to compute an approximation P to the product AB, with provable bounds for the norm of the "error matrix" P − AB. Both algorithms run in O(msp) time. In both algorithms, we randomly pick s columns of A to form an m × s matrix S and the corresponding rows of B to form an s × p matrix R. After scaling the columns of S and the rows of R, we multiply them together to obtain our approximation P. The choice of the probability distribution we use for picking the columns of A and the scaling are the crucial features which enable us to give fairly elementary proofs of the error bounds. Our first algorithm can be implemented without storing the matrices A and B in Random Access Memory, provided we can make two passes through the matrices (stored in external memory). The second algorithm has a smaller bound on the 2-norm of the error matrix, but requires storage of A and B in RAM. We also present a fast algorithm that "describes" P as a sum of rank-one matrices if B = Aᵀ.
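The column/row-sampling idea in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' exact algorithm: it samples s column/row pairs with probability proportional to the product of their norms (a natural guess at the "crucial" distribution) and rescales so that the estimator satisfies E[SR] = AB:

```python
import numpy as np

def approx_matmul(A, B, s, seed=None):
    """Approximate A @ B by sampling s scaled column/row outer products."""
    rng = np.random.default_rng(seed)
    # Pick column i of A (and the matching row i of B) with probability
    # proportional to ||A[:, i]|| * ||B[i, :]||.
    p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = p / p.sum()
    idx = rng.choice(A.shape[1], size=s, p=p)
    scale = 1.0 / np.sqrt(s * p[idx])
    S = A[:, idx] * scale              # m x s: scaled sampled columns of A
    R = B[idx, :] * scale[:, None]     # s x p: scaled corresponding rows of B
    return S @ R                       # unbiased estimate of A @ B
```

Each sampled outer product A[:, i] B[i, :] / (s p_i) has expectation AB / s, so the sum over s samples is an unbiased estimate whose error shrinks as s grows.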
