Good news: an AI chatbot added to a mushroom-foraging Facebook group immediately advised readers to "sauté in butter" a potentially dangerous mushroom.
@josephcox @rysiek
The only good use of something like that is the do-not-eat bot on r/whatisthisplant, I think it was, which posted a warning every time someone used the word "eat", "eaten", "edible", etc.
@econads @josephcox for that kind of use case you really don't need an LLM at all.
@rysiek @josephcox
No, and it surely wasn't one. It was a bot, though. Actually, for 90% of chatbots you don't need an LLM either. At least in my experience, all the "help" I get from customer-service "AI" is just the help pages regurgitated back at me anyway.