#explainableai


Huge congratulations to Paula Feliu Criado for the successful defense of her Bachelor's thesis within the BIG-5 Project 🎓

Paula’s work developed a multitask AI pipeline to analyze how we express our connection to nature on social media. Going beyond simple classification, she used Explainable AI to understand how the model "sees" images, ensuring its insights are transparent and trustworthy.

👉 Read more about her work on our blog: bit.ly/3Uotk4S

AI: Explainable Enough

“They look really juicy,” she said. I was sitting in a small room with a faint chemical smell, doing one of my first customer interviews. There is a sweet spot between going too deep and asserting a position: good AI has to be just explainable enough to satisfy the user without overwhelming them with information. Luckily, I wasn’t new to the problem.

Nuthatcher atop Persimmons (ca. 1910) by Ohara Koson. Original from The Clark Art Institute. Digitally enhanced by rawpixel.

Coming from a microscopy and bio background with a strong inclination towards image analysis, I had picked up deep learning as a way to be lazy in the lab. Why bother figuring out features of interest when you can have a computer do it for you? That was my angle. The issue was that in 2015 no biologist would accept any kind of deep learning analysis, and definitely not if you couldn’t explain the details.

What the domain expert user doesn’t want:
– An explanation of how a convolutional neural network works. Confidence scores, loss, and AUC are all meaningless to a biologist, and to a doctor too.

What the domain expert desires:
– Help at the lowest level of detail that they care about.
– An AI that identifies features A, B, and C, and tells them that when A, B, and C appear together, it is likely to be disease X.

Most users don’t care how deep learning really works. So if you start giving them details like the IoU score of an object detection bounding box, or whether you used YOLO or R-CNN, their eyes will glaze over and you will never get a customer. Draw a bounding box, heat map, or outline with the predicted label, and stop there. It’s also bad to go to the other extreme: if the AI just states the diagnosis for the whole image, it might be right, but the user does not get to participate in the process. Not to mention the regulatory risk goes way up.
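To make that concrete, here is a minimal sketch of an “explainable enough” presentation layer, assuming a generic detector that returns boxes, labels, and internal scores. The Detection type, the threshold, and the example values are all made up for illustration; the point is that internals are used for filtering but never shown to the user.

```python
# A minimal sketch of "explainable enough" presentation, assuming a generic
# detector that returns boxes, labels, and internal scores. The Detection type,
# the threshold, and the example values are made up for illustration.
from dataclasses import dataclass

@dataclass
class Detection:
    box: tuple          # (x_min, y_min, x_max, y_max) in pixels
    label: str          # e.g. "lesion"
    confidence: float   # internal score: useful for filtering, noise for the user

def present(detections, threshold=0.5):
    """Show only what the user cares about: where it is and what it is.
    Confidence, IoU, and model architecture stay behind the curtain."""
    return [{"outline": d.box, "label": d.label}
            for d in detections if d.confidence >= threshold]

# The user sees one labelled outline, nothing about YOLO vs. R-CNN.
demo = [Detection((10, 20, 80, 90), "lesion", 0.91),
        Detection((120, 40, 200, 130), "artifact", 0.32)]
print(present(demo))
```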

This applies beyond images; consider LLMs. No one with any expertise likes a black box. Why do today’s LLMs generate code instead of directly doing the thing the programmer asks them to do? Because the programmer wants to ensure that the code “works”, and they have the expertise to figure out if and when it goes wrong. It’s the same reason that vibe coding is great for prototyping but not for production, and why frequent readers can spot AI patterns, ahem, easily. So, in a Betty Crocker cake mix kind of way, let the user add the egg.

Building explainable-enough AI takes immense effort. It is actually easier to train AI to diagnose the whole image, or to dump every detail. Generating high-quality data at that just-right level is difficult and expensive. Do it right, however, and the effort pays off: the outcome is an AI-human causal prediction machine, where the causes, i.e. the mid-level features, inform the user and build confidence towards the final outcome. The deep learning part is still a black box, but the user doesn’t mind because you aid their thinking.
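As a rough illustration of that “causes inform the outcome” structure, here is a toy two-stage sketch: a black-box model proposes mid-level features, and a transparent rule table maps feature combinations to a suggested diagnosis the user can check. The feature names, the rules, and the detect_features() stub are all hypothetical.

```python
# A toy two-stage sketch of "the causes inform the outcome": a black-box model
# proposes mid-level features, and a transparent rule table maps feature
# combinations to a suggested diagnosis. Feature names, rules, and the
# detect_features() stub are all hypothetical.

RULES = {
    frozenset({"A", "B", "C"}): "disease X",
    frozenset({"A", "D"}): "disease Y",
}

def detect_features(image):
    # Stand-in for the trained network; it returns the mid-level features
    # it believes are present in the image.
    return {"A", "B", "C"}

def suggest(image):
    features = detect_features(image)
    for required, diagnosis in RULES.items():
        if required <= features:
            # Surface the evidence, not the internals: the user sees which
            # features were found and why they point to this suggestion.
            return {"evidence": sorted(required), "suggestion": diagnosis}
    return {"evidence": sorted(features), "suggestion": "no rule matched; defer to the expert"}

print(suggest(image=None))  # {'evidence': ['A', 'B', 'C'], 'suggestion': 'disease X'}
```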

I’m excited by some new developments like REX, which sort of retrofit causality onto standard deep learning models. With improvements in performance, user preferences for detail may change, but I suspect the need for AI to be explainable enough will remain. Perhaps we will even have custom labels like ‘juicy’.

Continued thread

The core idea is that answers and explanations only extract information from the output of a reasoning process, which does not need to be human-readable. To improve faithfulness, explanations do not depend on answers, and vice versa.

#AI #genAI #LLM
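A minimal sketch of the decoupling described in that thread, under the assumption that both the answer and the explanation are read independently off the same intermediate reasoning state; all names and values here are illustrative rather than taken from any specific system.

```python
# A minimal sketch of the decoupling above: answer and explanation are each
# extracted from the same intermediate reasoning state, and neither reads the
# other. All names and values here are illustrative.

def reason(question: str) -> dict:
    # Opaque intermediate state; it does not need to be human-readable.
    return {"question": question, "trace": [0.12, -0.77, 0.43]}

def extract_answer(state: dict) -> str:
    # Reads only the reasoning state, never the explanation.
    return "42"

def extract_explanation(state: dict) -> str:
    # Reads only the reasoning state, never the answer, so the explanation
    # cannot become a post-hoc rationalisation of the answer.
    return f"Based on the trace computed for: {state['question']}"

state = reason("What is the answer?")
print(extract_answer(state), "|", extract_explanation(state))
```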

Everything is in flow! Starting today, experts from science and industry are discussing AI, sustainability, data flows and, of course, material flows at the #Zukunftsforum Kunststoffkreislauf (Future Forum on the Plastics Cycle).

Topics included #OpenSource and the psychology behind incorrectly disposed items (such as batteries in the yellow recycling bag), the use of #ExplainableAI, memorably explained with the example of Clever Hans the horse, and much more.

It continues this afternoon and tomorrow!

#KI #KIHub #KuRT

CRISIS IN MACHINE LEARNING - Semantics to the Rescue
Frank van Harmelen starts his keynote at ISWS 2025 with this headline from “The AI Times”.
So, what is this crisis about? It comes down to the following (still) unsolved problems in AI research:
- Learning from small data
- Explainable AI
- Updating
- Learning by explaining

#isws2025 #llms #AI

🔍 The 1st XAI+KG Workshop is now underway at #ESWC2025!
📍 Room 7 – Nautilus, Floor 0 (First Half of the Day)

XAI+KG 2025 explores how Knowledge Graphs can enhance the interpretability and transparency of AI models — especially deep learning systems — and how Explainable AI (XAI) techniques can, in turn, improve the construction and refinement of Knowledge Graphs. 🤝🧠

Join us for thought-provoking discussions at the intersection of explainability and semantics.

Delve into the darker realms of artificial intelligence with this reflective exploration of AI bias, toxic data practices, and ethical dilemmas. Discover the challenges and opportunities facing IT leaders as they navigate the complexities of AI technology. #ArtificialIntelligence #AIethics #DataEthics #TechnologyEthics #ExplainableAI #ChatGPT #EthicalAI #Regulation #AGI #SanjayMohindroo
medium.com/@sanjay.mohindroo66

Medium · The Dark Side of AI: Navigating Ethical Waters in a Digital Era. By Sanjay K Mohindroo

"Finally, AI can fact-check itself. One large language model-based chatbot can now trace its outputs to the exact original data sources that informed them.

Developed by the Allen Institute for Artificial Intelligence (Ai2), OLMoTrace, a new feature in the Ai2 Playground, pinpoints data sources behind text responses from any model in the OLMo (Open Language Model) project.

OLMoTrace identifies the exact pre-training document behind a response — including full, direct quote matches. It also provides source links. To do so, the underlying technology uses a process called “exact-match search” or “string matching.”

“We introduced OLMoTrace to help people understand why LLMs say the things they do from the lens of their training data,” Jiacheng Liu, a University of Washington Ph.D. candidate and Ai2 researcher, told The New Stack.

“By showing that a lot of things generated by LLMs are traceable back to their training data, we are opening up the black boxes of how LLMs work, increasing transparency and our trust in them,” he added.

To date, no other chatbot on the market provides the ability to trace a model’s response back to specific sources used within its training data. This makes the news a big stride for AI visibility and transparency."
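To picture what “exact-match search” could look like at its simplest, here is a toy sketch that matches verbatim spans of a response against a tiny corpus. The corpus, the minimum span length, and the matching strategy are assumptions for illustration and say nothing about how OLMoTrace is actually implemented.

```python
# Toy illustration of tracing a response back to training text via exact string
# matching. The corpus, minimum span length, and matching strategy are all
# assumptions for illustration; this is not Ai2's OLMoTrace implementation.

corpus = {
    "doc-001": "The quick brown fox jumps over the lazy dog.",
    "doc-002": "Language models are trained on large text corpora.",
}

def trace(response: str, corpus: dict, min_len: int = 15):
    """Find long verbatim spans of the response that appear in any document."""
    matches = []
    start = 0
    while start + min_len <= len(response):
        best = None
        # Try the longest span at this position first.
        for end in range(len(response), start + min_len - 1, -1):
            span = response[start:end]
            source = next((doc_id for doc_id, text in corpus.items() if span in text), None)
            if source is not None:
                best = {"span": span, "source": source}
                break
        if best is not None:
            matches.append(best)
            start += len(best["span"])  # skip past the matched span
        else:
            start += 1
    return matches

print(trace("models are trained on large text corpora", corpus))
```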

thenewstack.io/llms-can-now-tr

The New Stack · Breakthrough: LLM Traces Outputs to Specific Training Data. Ai2’s OLMoTrace uses string matching to reveal the exact sources behind chatbot responses.