
I’m excited to share my newest blog post, "Don't use cosine similarity carelessly"

p.migdal.pl/blog/2025/01/dont-

We often rely on cosine similarity to compare embeddings—it's like “duct tape” for vector comparisons. But just like duct tape, it can quietly mask deeper problems. Sometimes, embeddings pick up a “wrong kind” of similarity, matching questions to questions instead of questions to answers or getting thrown off by formatting quirks and typos rather than the text's real meaning.

In my post, I discuss what can go wrong with off-the-shelf cosine similarity and share practical alternatives. If you’ve ever wondered why your retrieval system returns oddly matched items or how to refine your embeddings for more meaningful results, this is for you!
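The "questions match questions" failure mode described above is easy to illustrate with a toy sketch. The vectors below are made up for illustration (they are not from any real embedding model); the point is only that when one axis encodes "question-ness", it can dominate the topical axes:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of L2 norms.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 3-d embeddings; the first axis stands in for "question-ness".
question       = [0.9, 0.1, 0.2]  # "What is the capital of France?"
other_question = [0.8, 0.2, 0.1]  # "What is the capital of Spain?"
answer         = [0.1, 0.9, 0.3]  # "Paris is the capital of France."

# The unrelated question scores higher than the answer we actually wanted.
print(cosine_similarity(question, other_question))  # high (~0.99)
print(cosine_similarity(question, answer))          # much lower (~0.27)
```

This is exactly the "wrong kind of similarity": the retrieval looks plausible numerically while matching on form rather than meaning.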
I want to thank Max Salamonowicz and Grzegorz Kossakowski for their feedback after my flash talk at the Warsaw AI Breakfast, Rafał Małanij for inviting me to speak at the Python Summit, and everyone for the curious questions at the conference and on LinkedIn.


🆕 Encoder-only model that's a direct drop-in replacement for existing BERT models
- First major upgrade to BERT-style models in six years
- Significantly reduced processing costs for large-scale applications
- Enables longer document processing without chunking
- Better performance in retrieval tasks
- Suitable for consumer-grade GPU deployment
#llm #ai #embedding
huggingface.co/blog/modernbert


The current revelation that LLMs can’t reason is drawing a lot of shade and accusations of fraud, but it’s not entirely true.

An LLM could reason if you gave it a corpus of sentences (in whichever languages) that explicitly and unambiguously described a whole big bag of causal relationships and outcomes, things that happen because other things happen, with general structures like that described clearly, formally, and without any possibility of confusion.

The embeddings which result from such a corpus could well work as a reference source of logic, cause, common sense, or reason about lots of things. The next step would be to make those embeddings generalisable, so that the common sense of the way life is can be applied widely (again using vector comparison). So yes, it is possible to apply reason to an LLM. The main thing is that there probably isn’t an emphasis on that kind of descriptive, even prescriptive, literature among the source training material in the first place. There’ll be a lot, there’ll be some, but I don’t think it was emphasised.

By introducing it at the RAG level, and then letting the embeddings migrate back into future models, I believe it could be possible to emulate a lot of common sense about the world and the way things are, purely through description of it. After all, the embeddings produced from such a (very massive) block of description are, as vectors, only numbers, which is what LLMs are really operating on: just vectors, not words, not tokens, just numbers.
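A minimal sketch of what "introducing it at the RAG level" could look like. Everything here is hypothetical: the statements and their vectors are made up stand-ins for embeddings of a causal-description corpus, and retrieval is plain cosine similarity over those vectors:

```python
import numpy as np

# Hypothetical embeddings of explicit cause-and-effect statements.
# In a real system these would come from an embedding model run over
# the descriptive corpus the post imagines.
corpus = {
    "rain makes surfaces wet":     [0.9, 0.1, 0.0],
    "dropping a glass breaks it":  [0.1, 0.9, 0.1],
    "heating water makes it boil": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, corpus, k=1):
    # Rank statements by cosine similarity to the query vector;
    # the comparison really is "just numbers", as the post says.
    q = np.asarray(query_vec, dtype=float)
    q = q / np.linalg.norm(q)
    scored = []
    for text, vec in corpus.items():
        v = np.asarray(vec, dtype=float)
        scored.append((float(np.dot(q, v / np.linalg.norm(v))), text))
    scored.sort(reverse=True)
    return [text for _, text in scored[:k]]

query = [0.85, 0.15, 0.05]  # e.g. an embedding of "why is the pavement wet?"
print(retrieve(query, corpus))  # → ['rain makes surfaces wet']
```

The retrieved statement would then be prepended to the LLM's context, which is the standard RAG pattern the post is building on.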

Consequently, my dreams of building common sense by learning about the real world through sensors and actuators can probably be supplanted by a rigorous and hefty project of simply describing it instead of actually doing it. The thing to watch would be the description itself: it would have to be as detailed, accurate, and wide-ranging as the experiential model would be, and this might be where the difficulty lies. People describing common sense about the world will tend to abbreviate, generalise prematurely, miss things out, misunderstand, and above all, they'll assume a lot.
#AI #LLM #reasoning #CommonSense #vector #embedding

Distributed llama.cpp inference over RPC

Greetings, Habr readers! The idea for this article had been rattling around in my head for a long time. One of my hobbies involves distributed computing, and another involves neural networks, so I had long wanted to run LLM inference across several computers, with all of them working on the same model in parallel.

After some googling I learned that the LocalAI project has supported this for quite a while. Without much deliberation I deployed it on several machines, did all the necessary configuration to link the instances into a single system, and was, to put it mildly, disappointed: the solution turned out to be "fatally insufficient". The Docker image was built suboptimally, huge in size and amd64-only; a web interface you couldn't disable shipped with the project; the choice of models was meagre; some of the available LLMs didn't work in RPC mode; all the embedding models also refused to start in that mode; and so on.

After tinkering a bit more, I dug into the sources and found a mention of the llama.cpp project, and then a call to the rpc-server binary. And so I landed on the llama.cpp/examples/rpc page, and away we went...

habr.com/ru/articles/843372/
