
#blackbox


🎬 On Saturday, 14 June 2025, at 8:00 pm, "The Navigator" (USA 1924) will be shown with live music at the Filmmuseum Düsseldorf. Rolf Springer (saxophone) and Peter Thoms (percussion) accompany the film. More at stummfilm-magazin.de/termintip
#stummfilmmagazin #stummfilm #silentfilm #silentmovie #kino #film #kinoprogramm #filmerbe #düsseldorf #filmmuseum #stummfilmkonzert #busterkeaton #blackbox

Beyond the Black Box: Interpretability of LLMs in Finance

arxiv.org/abs/2505.24650

arXiv.org · Beyond the Black Box: Interpretability of LLMs in Finance
Large Language Models (LLMs) exhibit remarkable capabilities across a spectrum of tasks in financial services, including report generation, chatbots, sentiment analysis, regulatory compliance, investment advisory, financial knowledge retrieval, and summarization. However, their intrinsic complexity and lack of transparency pose significant challenges, especially in the highly regulated financial sector, where interpretability, fairness, and accountability are critical. As far as we are aware, this paper presents the first application in the finance domain of understanding and utilizing the inner workings of LLMs through mechanistic interpretability, addressing the pressing need for transparency and control in AI systems. Mechanistic interpretability is the most intuitive and transparent way to understand LLM behavior by reverse-engineering their internal workings. By dissecting the activations and circuits within these models, it provides insights into how specific features or components influence predictions - making it possible not only to observe but also to modify model behavior. In this paper, we explore the theoretical aspects of mechanistic interpretability and demonstrate its practical relevance through a range of financial use cases and experiments, including applications in trading strategies, sentiment analysis, bias, and hallucination detection. While not yet widely adopted, mechanistic interpretability is expected to become increasingly vital as adoption of LLMs increases. Advanced interpretability tools can ensure AI systems remain ethical, transparent, and aligned with evolving financial regulations. In this paper, we have put special emphasis on how these techniques can help unlock interpretability requirements for regulatory and compliance purposes - addressing both current needs and anticipating future expectations from financial regulators globally.
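The abstract's core claim is that dissecting activations lets you not only observe but also modify model behavior. As a rough, hedged illustration of that idea (not the paper's actual method, models, or data), here is a minimal activation-steering sketch in Python using GPT-2 via HuggingFace transformers; the layer index, prompts, and scaling factor are arbitrary assumptions.

```python
# Minimal sketch of activation steering, one flavour of mechanistic interpretability.
# GPT-2, the chosen layer, the prompts and the scale alpha are illustrative choices.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
LAYER = 6  # arbitrary middle block

def residual_at(prompt: str) -> torch.Tensor:
    """Mean residual-stream activation after block LAYER for a prompt."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # hidden_states[0] is the embedding output; hidden_states[i + 1] is block i's output
    return out.hidden_states[LAYER + 1].mean(dim=1).squeeze(0)

# A crude "sentiment" direction: difference of activations on contrasting prompts.
direction = residual_at("The market outlook is excellent") - residual_at("The market outlook is terrible")
direction = direction / direction.norm()

def steer(module, inputs, output, alpha=8.0):
    """Forward hook: shift the block's output along the chosen direction."""
    hidden = output[0] + alpha * direction
    return (hidden,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(steer)
ids = tok("The quarterly report shows that revenue", return_tensors="pt")
with torch.no_grad():
    print(tok.decode(model.generate(**ids, max_new_tokens=20, do_sample=False)[0]))
handle.remove()
```

The contrast-of-prompts direction is only one cheap way to obtain a candidate feature; the finance use cases the paper describes would call for task-specific probes and datasets.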


A #Blackbox called #Polizei

We know that we know nothing: study highlights research gaps

By Frederik #Eikmanns

#Racist checks, excessive #violence, #demeaning remarks: as a new study by the Antidiskriminierungsstelle des Bundes (Federal Anti-Discrimination Agency) shows, the risk of being discriminated against by the police is #structurally built in
taz.de/!6086111


Did you already know what "black box" means when it comes to algorithms?

#BlackBox means that even the developers often do not know exactly how the #algorithm arrives at its decisions. That becomes especially problematic in cases of #discrimination or unfair decisions.

What if you are discriminated against, and nobody can tell you whether it really happened? We help you get better at recognizing #AlgorithmicDiscrimination. Soon we will launch a tool you can use to report your suspicion!
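As a tiny, hypothetical illustration of what that opacity looks like in code (a generic scikit-learn model, not the tool announced above):

```python
# The model answers, but offers no per-decision rationale out of the box:
# that gap is what "black box" refers to here. Purely illustrative example.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

applicant = X[:1]                      # one concrete case
print(model.predict(applicant))        # a yes/no decision comes out...
# ...but nothing here explains *why* this particular case was decided this way;
# answering that requires extra interpretability tooling on top of the model.
```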

It dawned on me that many conflicts in UI/UX design philosophy boil down to Black Box vs. White Box worldviews, and to an increasing focus/bias towards opaqueness...

On the one hand, there's the long-elusive dream/goal of UI designers for their technological artifacts/products/services (esp. AI™-powered ones) to blend perfectly into their physical and human environment, to be as autonomous and intuitive to use as possible, to have as few controls/interfaces as possible (minimalist brand aesthetics often explicitly demand it), all while offering perfectly suited outcomes/actions from only minimal direct input, yet always with perfect prediction/predictability: DWIM (Do What I Mean) magic!

This approach mostly comes with a large set of brushed-under-the-rug costs: patronizing/dehumanizing the people meant to interact with the artifact, doubting/denying their intelligence, outright removal/limitation of user controls (usually misrepresented/celebrated as "simplicity"), reliance on intense telemetry/surveillance/tracking, enforced 24/7 connectivity, increased energy usage, and all the resulting skewed incentives for monetization that actually have nothing to do with the artifact's original purpose...

In contrast, the White/Clear Box approach offers full transparency into (and control over) the system's inner workings. Because of this it only works (well) for smaller, human-scale domains/contexts; given the out-of-bounds complexity of our surrounding contemporary tech stack, these days it very quickly just means: welcome to Configuration Hell (Dante says "ciao!")...

(So in the end: Choose your own hellscape... :)

#NoteToSelf #UI #UX

Practical Black-Box Attacks against Machine Learning

arxiv.org/abs/1602.02697

Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, Ananthram Swami

arXiv.org · Practical Black-Box Attacks against Machine Learning
Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists in training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24% of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes. They yield adversarial examples misclassified by Amazon and Google at rates of 96.19% and 88.94%. We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
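The strategy described in the abstract (train a local substitute on oracle-labeled queries, craft adversarial examples against the substitute, check that they transfer) can be sketched in a few lines of Python. This is a deliberately scaled-down, hypothetical toy built on scikit-learn models, not the remote DNN APIs the paper actually attacks; the dataset, models, and the FGSM-style step are illustrative stand-ins.

```python
# Toy sketch of a substitute-model black-box attack (assumptions noted above).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# 1) The target "oracle": we pretend its weights are hidden and only query its labels.
target = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(X, y)
oracle = lambda Q: target.predict(Q)

# 2) Train a local substitute on synthetic queries labeled by the oracle.
seed = X[rng.choice(len(X), size=200, replace=False)]
queries = np.vstack([seed + rng.normal(scale=0.3, size=seed.shape) for _ in range(5)])
substitute = LogisticRegression(max_iter=1000).fit(queries, oracle(queries))

# 3) Craft FGSM-style adversarial examples using the substitute's (known) gradient.
#    For a linear substitute, the input-gradient direction is just sign(w).
eps = 0.5
w_sign = np.sign(substitute.coef_[0])
clean = X[:200]
labels = oracle(clean)                                       # target's labels on clean inputs
perturb = np.where(labels[:, None] == 1, -w_sign, w_sign)    # push toward the other class
adversarial = clean + eps * perturb

# 4) Check how often the perturbations crafted on the substitute transfer to the target.
transfer_rate = np.mean(oracle(adversarial) != labels)
print(f"fraction of crafted examples that flip the target's label: {transfer_rate:.2f}")
```

Note that the only access to `target` used in the attack is the `oracle` label query, which is exactly the black-box threat model the abstract describes.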