AI Glossary

Definitions, intuitions, sources. Fast search. EN / IT.

Term Map
How this map is built

We build one embedding vector per English term using the term name, definition, key intuition, and (if present) use cases. Think of an embedding as semantic coordinates: terms with similar meaning end up in nearby regions.
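The text that feeds the embedding model can be sketched as a simple concatenation of the fields above. This is a minimal sketch; the field names (`name`, `definition`, `intuition`, `use_cases`) are a hypothetical schema, not the project's actual data model.

```python
def term_text(term: dict) -> str:
    """Concatenate the fields that feed one term's embedding.

    Field names are a hypothetical schema for illustration:
    use_cases is optional and joined into a single line when present.
    """
    parts = [term["name"], term["definition"], term["intuition"]]
    if term.get("use_cases"):
        parts.append(" ".join(term["use_cases"]))
    return "\n".join(parts)
```

The combined string would then be passed to a sentence-embedding model to obtain one vector per term.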

Similarity is computed with angular distance: d(i,j) = acos(cosine(i,j)) / pi. Intuition: cosine measures how aligned two vectors are in direction (1 = same direction, 0 = unrelated, -1 = opposite). Taking acos turns that alignment into an angle, then dividing by pi rescales it to 0-1. Smaller distance means stronger semantic similarity.
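The distance above can be written directly from the formula. A minimal sketch with NumPy; the clipping step is a standard guard against floating-point drift pushing the cosine slightly outside [-1, 1]:

```python
import numpy as np

def angular_distance(u, v) -> float:
    """d(u, v) = acos(cosine(u, v)) / pi, rescaled to the range [0, 1]."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    cos = np.clip(cos, -1.0, 1.0)  # guard against floating-point drift
    return float(np.arccos(cos) / np.pi)
```

Identical directions give 0, orthogonal vectors give 0.5, and opposite directions give 1.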

For each term, we rank all other terms by this distance and keep the top 3 nearest neighbors. The lines in the map are exactly those neighbor links.
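The ranking step can be sketched as a pairwise distance matrix followed by a per-row sort. This is an illustrative vectorized version, not the project's actual implementation; the diagonal is set to infinity so a term is never its own neighbor.

```python
import numpy as np

def top_k_neighbors(embeddings: np.ndarray, k: int = 3) -> np.ndarray:
    """For each row, return the indices of the k nearest rows by angular distance."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    cos = np.clip(normed @ normed.T, -1.0, 1.0)   # pairwise cosine similarities
    dist = np.arccos(cos) / np.pi                  # angular distance in [0, 1]
    np.fill_diagonal(dist, np.inf)                 # exclude self-matches
    return np.argsort(dist, axis=1)[:, :k]
```

Each row of the result is one term's neighbor list; drawing an edge to each of those indices reproduces the map's links.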

Node positions are then projected to 2D with multidimensional scaling (MDS) for visualization: this gives an interpretable map, but the neighbor links are computed in the full embedding space, not from 2D screen distance.
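The projection step could look like the following sketch, using scikit-learn's metric MDS on a precomputed distance matrix (an assumption about tooling, not necessarily what this project uses):

```python
import numpy as np
from sklearn.manifold import MDS

def project_2d(dist: np.ndarray) -> np.ndarray:
    """Project a precomputed pairwise distance matrix to 2D coordinates."""
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(dist)
```

MDS places points so that 2D distances approximate the given distances as closely as possible, which is why nearby nodes on screen are usually, but not always, true embedding-space neighbors.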

Start here

Know the words. Save the tokens.

A compact AI terminology glossary with definitions, intuition, and sources. Use it to stay precise without burning tokens on basics.

What you get

Fast search, intuition, sources, difficulty, and related terms.

Contribute

Want to contribute to the project? You can improve, translate, or edit existing definitions, or add new ones, in the GitHub repository. If you prefer to support the project financially, you can donate via PayPal.