
#provenance


How can a collaborative approach that combines human expertise with #machineintelligence help us tackle complex problems across various application domains? We are very much looking forward to hearing Kai Xu from the University of Nottingham discuss this question in his #sfbtrr161 talk this Thursday. More information: sfbtrr161.de/events_sfbtrr161/ #AI #ArtificialIntelligence #DataVisualization #Provenance #HumanAICollaboration #Visualization

Getty: Getty Transforms Art Provenance Data to Support 21st Century Research. “First launched in the 1980s, the Getty Provenance Index (GPI) has evolved into an unparalleled resource for tracing the ownership history of artworks, serving as a cornerstone for research on provenance, collecting, and art markets. Now, after nearly a decade of redevelopment, Getty has reimagined this essential […]

https://rbfirehose.com/2025/05/04/getty-getty-transforms-art-provenance-data-to-support-21st-century-research/


Events! April 8, 2025, online: "Wikidata for Provenance Research" with Ruth von dem Bussche, Laurel Zuckerman, and Achim Raschka; panel moderated by Meike Hopp.
What provenance data is available in #wikidata? How can it be retrieved and used in provenance projects? Get to know people who are involved in wiki projects that create and use data for provenance research.

Starts 7 pm French time
Starts 1 pm NY time
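
The kind of retrieval the event asks about can be sketched against the Wikidata Query Service. The endpoint and the IDs used below (Q3305213 = painting, P127 = "owned by") are real Wikidata identifiers, but the query itself is only an illustration of how one might pull ownership statements for provenance research; no request is actually sent here.

```python
from urllib.parse import urlencode

# Illustrative SPARQL: paintings together with their "owned by" statements.
QUERY = """
SELECT ?item ?itemLabel ?owner ?ownerLabel WHERE {
  ?item wdt:P31 wd:Q3305213 ;   # instance of: painting
        wdt:P127 ?owner .       # owned by
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""

def sparql_url(query: str) -> str:
    """Build a GET URL for the Wikidata Query Service (no request is made)."""
    return "https://query.wikidata.org/sparql?" + urlencode(
        {"query": query, "format": "json"}
    )

print(sparql_url(QUERY)[:60])
```

Fetching that URL (for example with `urllib.request` or `requests`) returns the result bindings as JSON.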
www.arbeitskreis-provenienzforschung.org (Kuwiki AG – Wikipedia Arbeitsgemeinschaft Kunstwissenschaften – Arbeitskreis Provenienzforschung)

Calligrapher friends, check this out! A 1477 manuscript that never got its initial letters, which a Harvard curator in 1955 then commissioned for completion by Irene Wellington!!!

(Someone made an interesting comment on the post, but guess what? Nothing was destroyed. In fact, a whole new layer of history was added. Is it current best practice to amend manuscripts in ink? No. But how blessed we are to have this example to consider!)

glammr.us/@overholt/1139824183

[Image: slide of the manuscript with a large initial B and a colophon by the modern calligrapher.]
glammr.us · John Overholt (@overholt@glammr.us): "I was at a great lecture this evening on a subject I'm predisposed to find interesting (Houghton history). I was introduced to this 1477 manuscript that never got the initial letters it was designed to have, so curator/collector Philip Hofer, one of Houghton's founding fathers, commissioned an eminent calligrapher TO DRAW SOME IN. IN 1955. https://id.lib.harvard.edu/alma/990097581700203941/catalog"

wikidata + mediawiki = wikidata + provenance == wikiprov


by @beet_keeper

Today I want to showcase a Wikidata proof of concept that I developed as part of my work integrating Siegfried and Wikidata.

That work is wikiprov, a utility to augment Wikidata results in JSON with the Wikidata revision history.

For siegfried, it means that we can show the source of the results returned by an identification without going directly back to Wikidata, which might mean more exposure for individuals contributing to Wikidata. We also provide access to a standard permalink where the records contributing to a format identification are fixed at their last edit. Because Wikidata is more mutable than a resource like PRONOM, this gives us the best chance of understanding differences in results when comparing siegfried+Wikidata results side by side.
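
A minimal sketch of the idea (not wikiprov's actual code or output schema): given a result record derived from a Wikidata item and that item's revision history, attach the history and a permalink pinned to the last edit. The field names and the item ID are illustrative; the permalink form (`index.php?title=…&oldid=…`) is the standard MediaWiki one.

```python
def augment_with_provenance(record: dict, qid: str, revisions: list) -> dict:
    """Attach Wikidata revision history and a fixed permalink to a result record.

    `revisions` is a list of {"revid": int, "timestamp": str, "user": str}
    dicts, newest first, such as the MediaWiki API returns for
    action=query&prop=revisions. Field names here are illustrative.
    """
    latest = revisions[0]["revid"]
    augmented = dict(record)
    augmented["wikidata"] = {
        "item": qid,
        # The permalink pins the record to its last edit, so results stay
        # comparable even after the live Wikidata item changes.
        "permalink": f"https://www.wikidata.org/w/index.php?title={qid}&oldid={latest}",
        "revision-history": revisions,
    }
    return augmented

result = augment_with_provenance(
    {"format": "Portable Network Graphics"},
    "Q12345",  # hypothetical item ID, for illustration only
    [{"revid": 1234567, "timestamp": "2023-01-01T00:00:00Z", "user": "ExampleUser"}],
)
print(result["wikidata"]["permalink"])
```

Keeping the history alongside the result also surfaces the editors whose contributions fed the identification.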

I am interested to hear your thoughts on the results of this work. Let's go into more detail below.

Continue reading “wikidata + mediawiki = wikidata + provenance == wikiprov”

A very interesting series of interview #videos from the #Musée d'histoire naturelle de #Neuchâtel (#Suisse / Switzerland)

youtube.com/@museumdhistoirena

They speak about #décentrement (decentering), #provenance, #collecte (collecting), etc., in connection with a broad reflective programme the Museum has undertaken.

Also connected to their current exhibition "Nommer les Natures – Histoire naturelle et héritage #colonial" (Naming Natures – Natural History and Colonial Heritage)

museum-neuchatel.ch/exposition


#Scientific papers more and more often include #replication packages with valuable #datasets that are then frequently used to train #AI and #MachineLearning models.

But with datasets not properly #documented with information about their #provenance, #biases, and other #social concerns, this is risky: the #ML models will be used in environments for which the data was not representative, yielding potentially wrong conclusions.

In this work, we have analyzed the datasets in two top dataset journals to study their #documentation #practices and propose a few recommendations to improve the current situation.

Paper accepted in Nature's Scientific Data journal

Pre-print arxiv.org/pdf/2401.10304

Replied to DW Innovation

@dw_innovation
This is an interesting approach that I've also been thinking about. I wonder whether it's built on W3C PROV as an official and open standard for #provenance. Seeing MS and Adobe among the participants, I fear proprietary tech in this.
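
For reference, the W3C PROV data model can be serialized as PROV-JSON. A minimal sketch for a hypothetical edited news photo follows; all the `ex:` names are made up for illustration, while the top-level keys and `prov:` role names follow the PROV-JSON serialization.

```python
import json

# Minimal PROV-JSON document: an edited photo derived from an original,
# generated by an editing activity, attributed to a photo desk.
prov_doc = {
    "prefix": {"ex": "http://example.org/"},
    "entity": {
        "ex:photo-edited": {},
        "ex:photo-original": {},
    },
    "activity": {
        "ex:editing": {},
    },
    "agent": {
        "ex:photo-desk": {},
    },
    "wasGeneratedBy": {
        "_:g1": {"prov:entity": "ex:photo-edited", "prov:activity": "ex:editing"},
    },
    "wasDerivedFrom": {
        "_:d1": {
            "prov:generatedEntity": "ex:photo-edited",
            "prov:usedEntity": "ex:photo-original",
        },
    },
    "wasAttributedTo": {
        "_:a1": {"prov:entity": "ex:photo-edited", "prov:agent": "ex:photo-desk"},
    },
}

print(json.dumps(prov_doc, indent=2)[:80])
```

A consumer could verify such a chain without any proprietary tooling, which is exactly the appeal of an open standard here.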

I hope it gains traction to help (re-)establish trust in public media pieces, detect fakes, and uncover AI crap.