Duck scientists by Tom Gauld:
Historian duck: "When did they start to give us bread?"😜
Really appreciate the new energy monitor that ZEIT online has set up:
They still maintain the Covid monitor they set up earlier but now place this one more prominently on their main page. Flatten the curve, scaled in °C!
Why did nobody tell me about Consent-O-Matic? It's a browser extension that automatically clicks on GDPR consent dialogs to indicate that you do not consent to your data being shared. Available for all major browsers.
Interesting story regarding how Google engineers might have been tricked into believing that the language model they've built is sentient:
Having read the transcripts of conversations they had with the system it is still hard to not find the conversations at least meaningful and in parts even inspiring.
Promising ongoing effort at DLR (German Aerospace Center) to generate a new global map of water surface areas and establish a better reference for flood mapping:
They use not only optical satellite data (Sentinel-2) but also radar data (Sentinel-1) to fill the gaps in areas that are particularly cloudy.
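The gap-filling idea can be sketched roughly like this (a toy illustration only, not the actual DLR method; all arrays and the cloud threshold below are made up):

```python
import numpy as np

# Toy sketch: combine a cloud-affected optical water mask (Sentinel-2-like)
# with a radar water mask (Sentinel-1-like). Where clouds block the optical
# view, fall back to the radar classification.

rng = np.random.default_rng(0)

optical_water = rng.random((4, 4)) > 0.5   # hypothetical optical water mask
radar_water   = rng.random((4, 4)) > 0.5   # hypothetical radar water mask
cloud_mask    = rng.random((4, 4)) > 0.7   # True where clouds block the view

# Prefer the optical classification; fill cloudy pixels from radar.
combined = np.where(cloud_mask, radar_water, optical_water)
```

The same pattern scales to real rasters: the optical product is usually the more reliable classifier, and radar fills in only where the optical observation is unusable.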
Curious to see how the final map will compare with continental products such as the HRL Water and Wetness 2018:
So much for meetings just before lunch time (or closing time... or on Friday after 11h)😜
Bicycling in the rain this morning 🚴🌧️, I remembered an argument with a friend some years ago about whether it's better (i.e. you get less wet) to run or to walk through the rain (we might not have been completely sober).
At the time I thought he was foolish to even consider this an interesting question and was sure: The longer you stay in the rain, the wetter you get! The faster you move, the sooner you are out of the rain! Easy, right?
Turns out the whole thing is much more complex... if, for example, the wind/rain comes from behind, there can be an optimal speed: https://abel.math.harvard.edu/archive/21a_fall_12/exhibits/rain/bocci.pdf
That said, as long as it's warm, cycling in the rain can actually be quite enjoyable for a long time... so maybe it's a problem better approached through psychology than through physics 😜.
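For the curious, the tailwind effect can be reproduced with a toy model in the spirit of the linked paper (a box-shaped pedestrian; all areas, speeds, and distances below are made-up numbers, not taken from the paper):

```python
import numpy as np

# Toy "walk or run in the rain" model: a person is a box with top area
# a_top and front area a_front, rain falls at v_rain with a horizontal
# tailwind w, and the person covers distance d at speed v.

def wetness(v, d=100.0, a_top=0.1, a_front=0.6, v_rain=5.0, w=3.0, rho=1.0):
    """Water collected while covering distance d at speed v (arbitrary units)."""
    time = d / v
    from_above = rho * a_top * v_rain * time        # rain hitting the top
    from_front = rho * a_front * abs(v - w) * time  # rain hitting front/back
    return from_above + from_front

speeds = np.linspace(0.5, 10.0, 200)
best = speeds[np.argmin([wetness(v) for v in speeds])]
# With a strong enough tailwind, the minimum sits near v = w rather than
# "as fast as possible": matching the wind means no rain hits you horizontally.
```

With these numbers the optimum is at the wind speed itself; with little or no tailwind the from-above term dominates and "run as fast as you can" wins after all.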
Good night everyone!
All Cryptocurrency Should “Die in a Fire” 🔥
I'm no particular fan of Microsoft, and especially not of Facebook, but the machine learning models, datasets and tools they've put together based on OpenStreetMap are quite something:
Here is the latest edition of Microsoft's building footprint detections, with nearly global coverage containing 776,712,641 footprints:
They have apparently ingested all of it into Facebook's version of the OSM iD editor (called Rapid), where AI-based road detections from Facebook and the original OSM data are accessible as well:
Looking at a few examples it's not perfect (maps never are) but the quality is surprisingly good considering the difficulty of the problem.
Below is an example from the small town of Basoko in the Democratic Republic of the Congo, with the features added by Microsoft's and Facebook's AIs in magenta.
The largest quake ever detected on another planet (magnitude 5):
Great article in the Guardian:
Copenhagenize your city: the case for urban cycling in 12 graphs
“To choose among the possible Sgr A* images, additional information, assumptions, or constraints must be included when solving the inverse problem. We broadly categorize imaging algorithms into three methodologies: CLEAN, RML, and Bayesian posterior sampling.”
… and, from what I understand, they tune these methods on synthetic data, pick the parameters for which the output images correlate best with the synthetic data, and apply them to the real data. In the end they group/cluster the still numerous plausible images from each pipeline and average the images within each cluster… and then average the resulting images again to produce the one that is now making its tour around the world.
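The grouping-and-averaging step can be sketched with a tiny stand-in (this is illustrative only, not the EHT code; the fake "reconstructions", the k-means with k=2, and all sizes are my own assumptions):

```python
import numpy as np

# Sketch: many plausible reconstructions are clustered by similarity,
# averaged within each cluster, and the cluster means are averaged again
# into a single representative image.

rng = np.random.default_rng(1)

# Hypothetical stand-in data: 20 small "reconstructions" drawn around
# two distinct morphologies.
base_a = rng.random((8, 8))
base_b = rng.random((8, 8))
images = [base_a + 0.05 * rng.standard_normal((8, 8)) for _ in range(10)] \
       + [base_b + 0.05 * rng.standard_normal((8, 8)) for _ in range(10)]

def nearest_cluster(img, centers):
    return int(np.argmin([np.linalg.norm(img - c) for c in centers]))

# Tiny k-means (k=2) on the images.
centers = [images[0], images[-1]]
for _ in range(10):
    labels = [nearest_cluster(img, centers) for img in images]
    centers = [np.mean([im for im, l in zip(images, labels) if l == k], axis=0)
               for k in range(2)]

# Average within each cluster, then average the cluster means.
final_image = np.mean(centers, axis=0)
```

The point of the two-stage averaging is that no single cluster (morphology) dominates the final picture just because its pipeline happened to emit more candidate images.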
And this is only one out of many papers describing the huge effort that went into this endeavor...amazing!
One major problem is that the matter around Sagittarius A* (i.e. the black hole) rotates very fast, so that the source looks slightly different in each observation. That's why, for now, they selected only a two-day subset of the data, namely:
“less variable 2017 April 7 data as the primary data set for static image reconstruction with the April 6 observations as a secondary validation data set”
They then apply a quite complex pre-processing chain including data reduction, cross-station calibration, calibration on known celestial objects, calibration for scattering in space and for changes over time, and so on.
They then continue to explain that “Recovering an image of Sgr A* from interferometric measurements amounts to solving an inverse problem.”
I understand this as “we need to find out how the black hole might look given the observations”
They seem to deploy four different pipelines that have been designed to solve such a problem ...
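A much-simplified picture of what "solving an inverse problem" means here (a toy linear system with ridge regularization, nothing like the real CLEAN/RML/Bayesian machinery; all sizes and the noise level are made up):

```python
import numpy as np

# Toy inverse problem: an unknown image x is observed through an
# incomplete linear measurement operator F, and we recover a
# regularized estimate of x from noisy measurements.

rng = np.random.default_rng(2)

n = 64                                   # unknowns ("pixels")
m = 40                                   # fewer measurements than unknowns
x_true = rng.random(n)

F = rng.standard_normal((m, n))          # hypothetical measurement operator
v = F @ x_true + 0.01 * rng.standard_normal(m)   # noisy "visibilities"

# With m < n the system is underdetermined, so plain least squares has no
# unique answer -- this is exactly why additional assumptions/regularizers
# are needed. A simple ridge (Tikhonov) term picks one solution:
#   x_hat = argmin ||F x - v||^2 + lam * ||x||^2
lam = 0.1
x_hat = np.linalg.solve(F.T @ F + lam * np.eye(n), F.T @ v)
```

Different regularizers (or priors, in the Bayesian view) pick different plausible images from the same data, which is why the paper stresses comparing and validating several methodologies.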
Having seen the news about the new image of the black hole at the center of our galaxy (or rather of the hot gases around it), I couldn't help taking a closer look at how these images are produced.
There are many papers describing different aspects of the entire endeavor, but this one seems to capture most of the image processing pipeline: https://iopscience.iop.org/article/10.3847/2041-8213/ac6429
They start out with explaining briefly that the Event Horizon telescope “observed Sgr A* with eight stations at six geographic sites on 2017 April 5, 6, 7, 10, and 11.“
In this article (https://www.theverge.com/2022/5/12/23042995/black-hole-image-event-horizon-telescope-sagittarius-a-milky-way) one of the scientists involved explains that they produced 3.5 TB of data, which was apparently too much for internet transmission, so shipping hard disks was the better choice for collecting the data for further analysis in Westford, Massachusetts and Bonn, Germany.