A linear-time alternative for Dimensionality Reduction and fast visualisation (medium.com)
romanfll 2 hours ago [-]
Author here. I built this because I needed to run dimensionality reduction entirely in the browser (client-side) for an interactive tool. The standard options (UMAP, t-SNE) were either too heavy for JS/WASM or required a GPU backend to run at acceptable speeds for interactive use.

This approach ("Sine Landmark Reduction") uses linearised trilateration—similar to GPS positioning—against a synthetic "sine skeleton" of landmarks.

The main trade-offs:

It is O(N) and deterministic (it solves a linear system Ax=b rather than running iterative gradient descent; a small sketch is at the end of this comment).

It forces the topology onto a loop structure, so it is less accurate than UMAP for complex manifolds (like Swiss Rolls), but it guarantees a clean layout for user interfaces.

It can project ~9k points (50 dims) to 3D in about 2 seconds on a laptop CPU. Python implementation and math details are in the post. Happy to answer questions!
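For anyone curious what the Ax=b step looks like, here is a minimal numpy sketch of linearised trilateration (not the code from the post; landmark construction is omitted and the toy landmarks are made up):

    import numpy as np

    def trilaterate(dists, landmarks_3d):
        # Linearise ||x - L_i||^2 = d_i^2 by subtracting the equation for
        # landmark 0; the quadratic term cancels, leaving a linear system A x = b.
        L0, Li = landmarks_3d[0], landmarks_3d[1:]
        A = 2.0 * (L0 - Li)                                   # (m-1, 3)
        b = (dists[1:] ** 2 - dists[0] ** 2
             - (Li ** 2).sum(axis=1) + (L0 ** 2).sum())       # (m-1,)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)             # one least-squares solve
        return x

    # toy check with made-up landmarks (not the sine skeleton)
    rng = np.random.default_rng(0)
    landmarks = rng.normal(size=(8, 3))
    point = rng.normal(size=3)
    dists = np.linalg.norm(landmarks - point, axis=1)
    print(trilaterate(dists, landmarks))   # recovers `point` up to numerics

Each point is embedded independently from its distances to the landmarks, which is where the O(N) cost and the determinism come from.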

threeducks 8 minutes ago [-]
Without looking at the code, O(N * k) with N = 9000 points and k = 50 dimensions should take in the order of milliseconds, not seconds. Did you profile your code to see whether there is perhaps something that takes an unexpected amount of time?
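(A rough way to sanity-check the order of magnitude; the numbers below are assumptions, e.g. 100 landmarks, and this times only the dominant point-to-landmark distance step, not the full pipeline.)

    import time
    import numpy as np

    N, k, m = 9_000, 50, 100                       # points, input dims, assumed landmark count
    X = np.random.rand(N, k).astype(np.float32)    # data
    L = np.random.rand(m, k).astype(np.float32)    # landmarks

    t0 = time.perf_counter()
    # all point-to-landmark squared distances via ||x||^2 + ||l||^2 - 2 x.l
    D2 = (X ** 2).sum(1)[:, None] + (L ** 2).sum(1)[None, :] - 2 * X @ L.T
    print(D2.shape, f"{(time.perf_counter() - t0) * 1000:.1f} ms")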
lmeyerov 36 minutes ago [-]
Fwiw, we are heavy UMAP users (pygraphistry), and find CPU UMAP fine for interactive use up to about 30K rows and GPU up to 100K rows; beyond that we generally switch to a trained mode. Our use case is often highly visual - seeing correlations and linking similar entities into explorable, interactive network diagrams. For headless runs, like daily anomaly detection, we do this at much larger scales.

We see a lot of wide social, log, and cyber data where this works, anywhere from 5-200 dim. Our bio users are trickier, as we can have 1K+ dimensions pretty fast. We find success there too, and mostly get into preconditioning tricks for those.

At the same time, I'm increasingly thinking of learning neural embeddings in general for these instead of traditional clustering algorithms. As scales go up, the performance argument here goes up too.

aoeusnth1 2 hours ago [-]
This is really cool! Are you considering publishing a paper on it? This seems conceptually similar to landmark MDS / Isomap, except using PCA on the landmark matrix instead of MDS. (https://cannoodt.dev/2019/11/lmds-landmark-multi-dimensional...)
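(For reference, the Landmark MDS recipe from that link boils down to classical MDS on the landmark-to-landmark distances plus a linear out-of-sample step. A rough numpy sketch of standard LMDS, not the post's method, assuming the top d eigenvalues are positive:)

    import numpy as np

    def landmark_mds(D_ll_sq, D_pl_sq, d=3):
        # D_ll_sq: (m, m) squared distances among the m landmarks
        # D_pl_sq: (n, m) squared distances from every point to each landmark
        m = D_ll_sq.shape[0]
        # classical MDS on the landmarks: double-centre, then eigendecompose
        J = np.eye(m) - np.ones((m, m)) / m
        B = -0.5 * J @ D_ll_sq @ J
        vals, vecs = np.linalg.eigh(B)
        idx = np.argsort(vals)[::-1][:d]
        lam, V = vals[idx], vecs[:, idx]        # assumes these d eigenvalues are > 0
        L = V * np.sqrt(lam)                    # (m, d) landmark coordinates
        # distance-based triangulation of the remaining points (de Silva & Tenenbaum)
        Lsharp = V / np.sqrt(lam)
        mu = D_ll_sq.mean(axis=0)               # mean squared distance per landmark
        Y = 0.5 * (mu - D_pl_sq) @ Lsharp       # (n, d) embedded points
        return L, Y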
romanfll 42 minutes ago [-]
Thanks! You nailed the intuition. Yes, it shares DNA with Landmark MDS, but we needed something strictly deterministic for the UI. Re publishing: we don't have a paper planned for this specific visualisation technique yet; I just wanted to open-source it because it solved a major bottleneck for our dashboard. However, our main research focus at Thingbook is DriftMind (a cold-start streaming forecaster and anomaly detector, preprint here: https://www.researchgate.net/publication/398142288_DriftMind...). That paper is currently under peer review, and it shares the same 'efficiency-first' philosophy as this visualisation tool.
benob 3 minutes ago [-]
Is there a pip installable version?
jmpeax 1 hour ago [-]
> They typically need to compare many or all points to each other, leading to O(N²) complexity.

UMAP is not O(n^2); it is O(n log n).

romanfll 48 minutes ago [-]
Thanks for your comment! You are right, approximate nearest-neighbour implementations bring UMAP down to roughly O(N log N); I should have been more precise in the post. The main point is that even O(N log N) can be too much if you run this in a browser. Thanks for clarifying!
memming 2 hours ago [-]
first subsample a fixed number of random landmark points from data, then...
romanfll 36 minutes ago [-]
Thanks for your comment. You are spot on, that is effectively the standard Nyström/Landmark MDS approach.

The technique actually supports both modes in the implementation (synthetic skeleton or random subsampling). However, for this browser visualisation, we default to the synthetic sine skeleton for two reasons:

1. Determinism: Random landmarks produce a different layout every time you compute the projection. For a user interface, we needed the layout to be identical every time the user loads the data, without having to cache a random seed.

2. Topology forcing: By using a fixed sine/loop skeleton, we implicitly 'unroll' the high-dimensional data onto a clean reduced structure. We found this easier for users to navigate visually than the unpredictable geometry that comes from a random subset (a rough sketch of both modes follows below).
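(To make the two modes concrete: the sine-skeleton construction below is a guess at the idea, not the post's exact code; the random-subsampling mode is the standard one.)

    import numpy as np

    def sine_skeleton(X, m=64):
        # A deterministic loop of m landmarks: each data dimension is traced by a
        # phase-shifted sine, then rescaled to the data's per-dimension range.
        k = X.shape[1]
        t = np.linspace(0, 2 * np.pi, m, endpoint=False)   # loop parameter
        phases = np.linspace(0, np.pi, k)                   # one phase per dimension
        S = np.sin(t[:, None] + phases[None, :])            # (m, k) in [-1, 1]
        lo, hi = X.min(axis=0), X.max(axis=0)
        return lo + (S + 1) / 2 * (hi - lo)

    def random_landmarks(X, m=64, seed=None):
        # The alternative mode: subsample m data points as landmarks (seed-dependent).
        rng = np.random.default_rng(seed)
        return X[rng.choice(len(X), size=m, replace=False)]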
