Wow that was a loud
@rysiek Does that mean we get to find out what the tech is actually good for now? :-D
@whvholst @HerraBRE @rysiek Not surprising that LLMs are of help for translation and some kinds of analysis, because that's where the technology originated and where it has been in use for several years now.
But incorporating it into all kinds of software, offering to reply to emails for you or write code, that's another matter.
@whvholst @HerraBRE @rysiek BTW, I am translating the upcoming release of Nextcloud, and now there will be an "assistant", fortunately optional, that helps people summarize messages and suggests possible replies - is this really a benefit?
If messages are so convoluted that you need to fire up the world just to extract some meaning from them, isn't it more straightforward to ask people to write more concise text?
@sv1 @whvholst @HerraBRE @rysiek
My other half turned it on for pictures, and honestly it's a bit crap. The classification is so-so, there doesn't appear to be any way to train it further, and I don't really see the use case for having a list of all my pictures that contain chairs, even if it could identify them properly.
@econads I am using a tiny model on my phone to search through photos:
https://f-droid.org/packages/com.slavabarkov.tidy/
Works pretty well. Being able to identify photos with chairs is already a boon when you're looking for that one photo of a chair.
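For anyone curious how that kind of on-device photo search usually works: the model embeds every photo and the text query into the same vector space, then ranks photos by similarity. Below is a minimal sketch of that idea in Python using a small CLIP model via sentence-transformers - the model name and library here are my assumptions for illustration, not necessarily what the tidy app actually uses.

```python
# Sketch of embedding-based photo search (CLIP-style text-to-image retrieval).
# Assumptions: sentence-transformers with the "clip-ViT-B-32" model; a real
# on-device app would likely use a smaller, quantized model and cache embeddings.
from pathlib import Path

from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")

# Embed every photo once (in practice these embeddings are cached on disk).
photo_paths = sorted(Path("photos").glob("*.jpg"))
photo_embeddings = model.encode([Image.open(p) for p in photo_paths],
                                convert_to_tensor=True)

def search(query: str, top_k: int = 5):
    """Return the photos whose embeddings are closest to the text query."""
    query_embedding = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, photo_embeddings, top_k=top_k)[0]
    return [(photo_paths[h["corpus_id"]], h["score"]) for h in hits]

# e.g. finding that one photo of a chair:
for path, score in search("a photo of a chair"):
    print(f"{score:.3f}  {path}")
```

The nice part is that nothing leaves the device: the model runs locally and the "index" is just a table of vectors next to your photos.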
@econads @rysiek @whvholst @HerraBRE A good comparison would be the Digikam photo-management software I've been using a lot to manage my photo collection: for years now, several different 'neural networks' have been available for things like facial recognition, aesthetic scoring and automatic tagging - all done locally, meaning that they are trained locally on your data and nothing is ever sent anywhere else.
By using those 'new' LLMs, you are training them for somebody else.