#dataquality


Beyond the Dataset

In the recent season of Clarkson’s Farm, Jeremy Clarkson goes to great lengths to buy the right pub. As with any sensible buyer, the team does a thorough tear-down followed by a big build-up before the place opens for business. They survey how the place is built, located, and accessed. In the refresh they make sure each part of the pub is built with purpose, even the tractor on the ceiling. The art is in answering one question: how was this place put together?

A data scientist should be equally fussy. Until we trace how every number was collected, corrected, and cleaned (who measured it, what tool warped it, what assumptions skewed it), we can’t trust the next step in our business to flourish.

Old Sound (1925), painting in high resolution by Paul Klee. Original from the Kunstmuseum Basel. Digitally enhanced by rawpixel.

Two load-bearing pillars

While there are many flavors of data science, I’m concerned here with the analysis done in scientific spheres and startups. In this world, the structure is held up by two pillars:

  1. How we measure — the trip from reality to raw numbers. Feature extraction.
  2. How we compare — the rules that let those numbers answer a question. Statistics and causality.

Both pillars come down to a deep understanding of the data-generating process, each from a different angle. A crack in either and whatever sits on top crumbles: plots, significance tests, and AI predictions mean nothing.

How we measure

A misaligned microscope is the digital equivalent of crooked lumber. No amount of massage can birth a photon that never hit the sensor. In fluorescence imaging, the point-spread function tells you how a pin-point of light smears across neighboring pixels; noise reminds you that light arrives, and is recorded, with some inherent randomness. Misjudge either and the cell you call “twice as bright” may be a mirage.
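To make the pillar concrete, here is a minimal sketch of that forward model, assuming a Gaussian approximation of the PSF and Poisson-dominated shot noise; the kernel width and photon counts are illustrative, not taken from any real instrument:

```python
# Toy forward model of fluorescence imaging:
# true scene -> PSF blur -> photon (Poisson) shot noise.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Ground truth: two point sources, one twice as bright as the other.
scene = np.zeros((64, 64))
scene[20, 20] = 200.0   # photons
scene[40, 42] = 400.0   # "twice as bright"

# PSF: approximated here as a Gaussian blur (sigma in pixels is an assumption).
blurred = gaussian_filter(scene, sigma=2.0)

# Shot noise: photon arrival is Poisson-distributed.
image = rng.poisson(blurred)

# The recorded ratio between the two spots is no longer exactly 2:1.
peak_a = image[18:23, 18:23].sum()
peak_b = image[38:43, 40:45].sum()
print(f"recorded brightness ratio: {peak_b / peak_a:.2f} (truth: 2.00)")
```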

In this data-generating process, the instrument’s nuances control what you see. Understanding them lets us judge which kinds of post-processing are right and which may destroy or invent data. For simpler analyses, post-processing can stop at cleaner raw data. For developing AI models, the process extends to labeling and analyzing data distributions. Andrew Ng’s data-centric AI approach insists that tightening labels, fixing sensor drift, and writing clear provenance notes often beat fancier models.
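In that spirit, one cheap data-centric check is to hunt for label disagreement before touching the model. A sketch, assuming each example was labeled by several annotators; the field names and threshold are hypothetical:

```python
# Flag examples whose annotators disagree: prime candidates for
# relabeling before any model tuning. A sketch, not a library API.
from collections import Counter

labeled = [
    {"id": "img_001", "labels": ["cat", "cat", "cat"]},
    {"id": "img_002", "labels": ["cat", "dog", "cat"]},
    {"id": "img_003", "labels": ["dog", "cat", "cat"]},
]

def needs_review(example, min_agreement=1.0):
    # Share of annotators voting for the most common label.
    counts = Counter(example["labels"])
    top_share = counts.most_common(1)[0][1] / len(example["labels"])
    return top_share < min_agreement

for ex in labeled:
    if needs_review(ex):
        print(ex["id"], "-> send back for relabeling")
```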

How we compare

Now suppose Clarkson were to test a new fertilizer, fresh goat pellets, only on sunny plots. Any bumper harvest that follows says more about sunshine than about the pellets. Sound comparisons begin long before data arrive. A deep understanding of the science behind the experiment is critical before conducting any statistics. Botched randomization, missing controls, and lurking confounders eat away at the foundation of the statistics.
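A small simulation makes the trap visible. The effect sizes below are invented purely for illustration:

```python
# Confounded vs. randomized fertilizer trial (illustrative numbers).
import numpy as np

rng = np.random.default_rng(1)
n = 1000
sunny = rng.random(n) < 0.5

# Ground truth: sunshine helps the harvest, the pellets do nothing.
def harvest(treated, sunny):
    return 5.0 + 3.0 * sunny + 0.0 * treated + rng.normal(0, 1, size=sunny.shape)

# Confounded design: pellets applied only on sunny plots.
treated_conf = sunny
yield_conf = harvest(treated_conf, sunny)
naive_effect = yield_conf[treated_conf].mean() - yield_conf[~treated_conf].mean()

# Randomized design: pellets assigned by coin flip.
treated_rand = rng.random(n) < 0.5
yield_rand = harvest(treated_rand, sunny)
rand_effect = yield_rand[treated_rand].mean() - yield_rand[~treated_rand].mean()

print(f"confounded estimate: {naive_effect:+.2f}")   # ~ +3: pure sunshine
print(f"randomized estimate: {rand_effect:+.2f}")    # ~ 0: the truth
```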

This information is not in the data. Only understanding how the experiment was designed, and which events preclude others, enables us to build a model of the world of the experiment. Taking this lightly carries large risks for startups with limited budgets and smaller experiments: a false positive leads to wasted resources, while a false negative presents opportunity costs.

The stakes climb quickly. Early in the COVID-19 pandemic, some regions bragged of lower death rates. Age, testing access, and hospital load varied wildly, yet headlines crowned local policies as miracle cures. When later studies re-leveled the footing, the miracles vanished. 
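The usual mechanism is Simpson’s paradox: a crude rate can favor one region even when every age group tells the opposite story. A toy example with invented numbers:

```python
# Region B looks better on the crude rate, yet is worse in every
# age group; its population is simply younger. Numbers are invented.
groups = {
    #            region A            region B
    # age      (deaths, cases)    (deaths, cases)
    "young": ((10,  1000), (45,  4000)),
    "old":   ((180, 2000), (60,   500)),
}

for region, idx in (("A", 0), ("B", 1)):
    deaths = sum(g[idx][0] for g in groups.values())
    cases = sum(g[idx][1] for g in groups.values())
    print(f"region {region} crude rate: {deaths / cases:.1%}")
    # A: 6.3%, B: 2.3% -- B looks like the miracle.

for age, (a, b) in groups.items():
    # Within each age group, B is actually worse.
    print(f"{age}: A {a[0] / a[1]:.1%} vs B {b[0] / b[1]:.1%}")
```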

Why the pillars get skipped

Speed, habit, and misplaced trust. Leo Breiman warned in 2001 that many analysts chase algorithmic accuracy and skip the question of how the data were generated, what he called the “two cultures.” Today’s tooling tempts us even more: auto-charts, one-click models, pretrained everything. They save time, until they cost us the answer.

The other issue is the lack of a culture that communicates in a shared language. Only in academic training is it possible to train a single person to understand the science, the instrumentation, and the statistics deeply enough for their research to be taken seriously; even then we prefer peer review. There is no such scope in startups. Tasks and expertise must be split. It falls to the data scientist to ensure clarity and to collect information horizontally. It is the job of leadership to enable this or accept dumb risks.

Opening day

Clarkson’s pub opening was a monumental task with a thousand details tracked and tackled by an army of experts. Follow the journey from phenomenon to file, guard the twin pillars of measure and compare, and reinforce them with careful curation and an open culture. Do that, and your analysis leaves room for the most important thing: inquiry.

🛠️ @corinnaberg and Ksenia Stanicka have developed and published a lesson on #Metadaten (metadata) and metadata standards as part of the Data Carpentries format.

🔗 News: hermes-hub.de/aktuelles/news/r
🔗 Directly to the lesson: hermes-hub.de/lernen/datacarpe

🎯 Interested in a hands-on introduction?

📍 Historikertag 2025 Bonn, Praxislabor
📅 16 September 2025, 14:00–15:40
🌐 digigw.hypotheses.org/6357

AI adoption matures, but big challenges remain

68% of companies now run custom AI in production, with 81% spending $1M+ annually. But issues like poor data, tough training, and project delays still slow progress. As AI goes mainstream, control and trust are the next big frontiers.

#ArtificialIntelligence #AIDeployment #EnterpriseAI #DataQuality #MachineLearning #GenerativeAI

artificialintelligence-news.co

AI News · AI adoption matures but deployment hurdles remain: AI has moved beyond experimentation to become a core part of business operations, but deployment challenges persist.

#dataquality #Surveydata #digitalbehavioraldata #linkeddatasources
Official launch of the #KODAQS #Toolbox in July 2025

The KODAQS Toolbox is a new, open platform for assessing and improving data quality in the social sciences. It supports researchers in systematically reflecting on the quality of their data - along three central data types: Survey data, digital behavioral data (e.g. app or sensor data) and linked data sources (e.g. register and geospatial data).
kodaqs-toolbox.gesis.org/

Quality Assurance in SAP Data Migrations

The SAP migration run is usually repeated several times to improve data quality and eliminate errors. An SAP system copy is typically created before the data migration so that the system can be reset to this state at any time. This allows iterative improvement processes in which data migrations can be repeated multiple times. Check out the core magazine to learn more:

s4-experts.com/2024/01/16/sap-

Garbage in, garbage out – even Agentic AI can’t save you from yourself.

Artificial intelligence is only as brilliant as the data it’s spoon-fed – and spoiler alert: your data is often trash.
Whether it’s traditional machine learning, generative models, or your shiny new agentic systems, the pattern remains insultingly consistent:
• Bad data? Expect bad decisions.
• Incomplete data? Enjoy half-baked ideas.
• Outdated data? Say hello to irrelevant nonsense.

I often talk about what AI can or tragically still can’t do.
But here’s the real twist: the problem isn’t the system. It’s you. Or more specifically, the glorious mess you call your “data foundation.”

You don’t have a lack of innovation.
You have a lack of clean data structures, maintained knowledge bases, and basic contextual awareness.
And then you expect the AI to magically fill gaps that should never have existed in the first place.

#GESISGuides #DBD #DataQuality
Three new GESIS Guides to Digital Behavioral Data are out now - get helpful information on data quality:

* Bleier, A.: What is Computational Reproducibility?

* Fröhling, L., Birkenmaier, L., Lux, V., & Daikeler, J.: How to Find and Explore Data Quality Frameworks for Digital Behavioral Data

* Lux, V., & Wieland, M.: How to Set up and Monitor App-based Data Collections

Check out the whole collection of our Guides to DBD:
gesis.org/en/gesis-guides/gesi

Do you have questions about OpenRefine & need support with your projects? Then come to our regular OpenRefine office hours!

🗓 When?
Thu 22.05., 15:00–16:00
📍 Where?
Online

Use the opportunity to get your questions answered, pick up tips, or work on your data projects together.
All info & link: sammlungen.io/termine/openrefi
#SODaZentrum #OpenRefine #Dataquality #DataLiteracy

A Comprehensive Framework For Evaluating The Quality Of Street View Imagery
--
doi.org/10.1016/j.jag.2022.103 <-- shared paper
--
“HIGHLIGHTS
• [They] propose the first comprehensive quality framework for street view imagery.
• Framework comprises 48 quality elements and may be applied to other image datasets.
• [They] implement partial evaluation for data in 9 cities, exposing varying quality.
• The implementation is released open-source and can be applied to other locations.
• [They] provide an overdue definition of street view imagery..."
#GIS #spatial #mapping #streetlevelimagery #Crowdsourcing #QualityAssessmentFramework #Heterogeneity #imagery #dataquality #metrics #QA #urban #cities #remotesensing #spatialanalysis #StreetView #Google #Mapillary #KartaView #commercial #crowsourced #opendata #consistency #standards #specifications #metadata #accuracy #precision #spatiotemporal #terrestrial #assessment

What breaks if I change this column?

Read our technical deep-dive into how Recce constructs column-level lineage from #dbt models

- How we track column origins and transformations using SQLGlot

- How we classify columns as pass-through, renamed, derived, or source (a toy sketch of this idea follows after this list)

- How we handle tricky edge cases like SELECT *, name collisions, and macro expansion
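As a toy sketch of the classification idea, here is what it might look like using sqlglot directly on a single SELECT. This is not Recce's actual implementation; the function and its rules are simplified for illustration:

```python
# Classify the output columns of a single SELECT as pass-through,
# renamed, or derived. Simplified sketch, not Recce's real code.
import sqlglot
from sqlglot import exp

def classify_columns(sql: str) -> dict:
    select = sqlglot.parse_one(sql)
    kinds = {}
    for projection in select.expressions:
        if isinstance(projection, exp.Column):
            # SELECT a -> output name equals the source column name
            kinds[projection.alias_or_name] = "pass-through"
        elif isinstance(projection, exp.Alias) and isinstance(projection.this, exp.Column):
            # SELECT a AS b -> same column under a new name
            kinds[projection.alias_or_name] = "renamed"
        else:
            # SELECT price * qty AS revenue -> computed from other columns
            kinds[projection.alias_or_name] = "derived"
    return kinds

print(classify_columns(
    "SELECT id, amount AS total, price * qty AS revenue FROM orders"
))
# {'id': 'pass-through', 'total': 'renamed', 'revenue': 'derived'}
```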

Read more:
datarecce.io/blog/column-level

Recce · How Recce Performs Column-Level Lineage: Our Approach to SQL Transformations. A technical deep dive into how Recce constructs column-level lineage using SQLGlot. We break down scope traversal, AST analysis, transformation classification, and the challenges involved in building reliable lineage across complex SQL models.

Phew! Been a fun week or so for Wimsey (my data testing project), finished building out:

- Handy "validate or test" function (test or build a set of tests from the data)
- New tests for strings (matches regex, maximum/minimum length, category should be in, etc)
- Functionality for arbitrary narwhals expressions

Plus every time I blink Narwhals gets even better, so Pyspark and DuckDB are supported without me doing anything!

github.com/benrutter/wimsey

GitHub · benrutter/wimsey: Easy and flexible data contracts