We give two different and simple constructions for dimensionality reduction
in $\ell_2$ via linear mappings that are sparse: only an
$O(\varepsilon)$-fraction of entries in each column of our embedding matrices
are non-zero to achieve distortion $1+\varepsilon$ with high probability. We
study the setting in which a sparse Johnson-Lindenstrauss transform preserves
the norm of every $x$ in a set $T$, and derive a new parameter of geometric
complexity that bounds the required dimension $m$ and sparsity $s$. This
result is a sparse analog of Gordon's theorem, and the method applies to
classical and model-based compressed sensing, manifold learning, and
constrained least squares problems.
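As a concrete illustration, the sparsity pattern described above can be sketched as follows. This is a minimal sketch of a generic sparse Johnson-Lindenstrauss embedding, not the paper's exact construction: every column of the $m \times n$ matrix has exactly $s$ non-zero entries (so a fraction $s/m$ of each column), each a random sign scaled by $1/\sqrt{s}$. The names `sparse_jl_matrix` and the parameter values are illustrative assumptions.

```python
# A minimal sketch (assumed construction, not the paper's exact one):
# an m x n embedding matrix with s nonzeros per column, each a
# Rademacher sign scaled by 1/sqrt(s) so that E||Ax||^2 = ||x||^2.
import numpy as np

def sparse_jl_matrix(m, n, s, rng):
    """Return an m x n matrix with exactly s nonzeros per column."""
    A = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)   # s distinct rows
        signs = rng.choice([-1.0, 1.0], size=s)       # random signs
        A[rows, j] = signs / np.sqrt(s)
    return A

rng = np.random.default_rng(0)
m, n, s = 256, 1000, 16           # s/m is the per-column nonzero fraction
A = sparse_jl_matrix(m, n, s, rng)

x = rng.standard_normal(n)
x /= np.linalg.norm(x)            # a point on the unit sphere
print(abs(np.linalg.norm(A @ x) - 1.0))  # distortion of this one point
```

Because each column has only $s$ non-zero entries, computing $Ax$ costs $O(s\,\|x\|_0)$ operations rather than $O(m\,\|x\|_0)$, which is the motivation for sparsity in the first place.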