#predictivealgorithms

ResearchBuzz: Firehose
The Register: UK officials insist ‘murder prediction tool’ algorithms purely abstract. “The UK’s justice department has confirmed it is working on developing algorithms to predict which criminals will later become murderers. It was internally referred to as the Homicide Prediction Project, and was first discovered via Freedom of Information (FOI) requests filed by civil liberties group […]”

https://rbfirehose.com/2025/04/11/the-register-uk-officials-insist-murder-prediction-tool-algorithms-purely-abstract/

Miguel Afonso Caetano
"Increasingly, algorithmic predictions are used to make decisions about credit, insurance, sentencing, education, and employment. We contend that algorithmic predictions are being used “with too much confidence, and not enough accountability. Ironically, future forecasting is occurring with far too little foresight.”

We contend that algorithmic predictions “shift control over people’s future, taking it away from individuals and giving the power to entities to dictate what people’s future will be.” Algorithmic predictions do not work like a crystal ball, looking to the future. Instead, they look to the past. They analyze patterns in past data and assume that these patterns will persist into the future. Instead of predicting the future, algorithmic predictions fossilize the past. We argue: “Algorithmic predictions not only forecast the future; they also create it.”"

https://teachprivacy.com/the-tyranny-of-algorithms/

#Algorithms #PredictiveAI #PredictiveAlgorithms #AlgorithmicBias

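The "fossilize the past" point is mechanical, not metaphorical, and a few lines of code make it concrete. Here is a minimal sketch; the lending scenario, feature names, and data are invented for illustration and come from no real system. A model fit on historical decisions can only reproduce the patterns in those decisions, so whatever shaped the past data is carried forward as a "prediction".

```python
# Minimal sketch: a predictive model "fossilizes" its training data.
# All data is fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical records: [years_employed, neighborhood_code].
# Past approvals happened to favor neighborhood 0, for whatever reason.
X_past = np.array([[5, 0], [7, 0], [6, 0], [5, 1], [7, 1], [6, 1]])
y_past = np.array([1, 1, 1, 0, 0, 0])  # 1 = approved, 0 = denied

model = LogisticRegression().fit(X_past, y_past)

# Two applicants identical except for neighborhood:
print(model.predict(np.array([[6, 0], [6, 1]])))  # [1 0] -> the past persists
```
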
Miguel Afonso Caetano
"A Home Office artificial intelligence tool which proposes enforcement action against adult and child migrants could make it too easy for officials to rubberstamp automated life-changing decisions, campaigners have said.

As new details of the AI-powered immigration enforcement system emerged, critics called it a “robo-caseworker” that could “encode injustices” because an algorithm is involved in shaping decisions, including returning people to their home countries.

The government describes it as a “rules-based” rather than AI system, as it does not involve machine-learning from data, and insists it delivers efficiencies by prioritising work and that a human remains responsible for each decision. The system is being used amid a rising caseload of asylum seekers who are subject to removal action, currently about 41,000 people.

Migrant rights campaigners called for the Home Office to withdraw the system, claiming it was “technology being used to make cruelty and harm more efficient”."

#UK #AI #PredictiveAI #Algorithms #PredictiveAlgorithms #Immigration #AsylumSeekers

https://www.theguardian.com/uk-news/2024/nov/11/ai-tool-could-influence-home-office-immigration-decisions-critics-say

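The government's "rules-based" framing is doing real work in that passage: it means deterministic if/then logic rather than a model learned from data, which is why the department argues no machine learning is involved. The sketch below shows what that distinction can look like in practice; the rule names, thresholds, and weights are invented, since the article does not describe the actual system's logic.

```python
# Hypothetical rules-based triage (no machine learning): fixed if/then
# rules order a caseload for human review. Rules and weights invented.
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    months_since_refusal: int
    has_pending_appeal: bool

def priority_score(case: Case) -> int:
    score = 0
    if case.months_since_refusal > 12:  # fixed rule, not learned from data
        score += 2
    if not case.has_pending_appeal:
        score += 1
    return score

def triage(cases: list[Case]) -> list[Case]:
    # The system only orders the work; per the Home Office, a human
    # remains responsible for each individual decision.
    return sorted(cases, key=priority_score, reverse=True)

print([c.case_id for c in triage([Case("A", 14, False), Case("B", 3, True)])])
```

The campaigners' worry fits here too: even a system that only prioritises cases and proposes actions can make rubberstamping the path of least resistance.
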
Miguel Afonso Caetano
"Predictive algorithms are used in many life-or-death situations. In the paper Against Predictive Optimization, we argued that the use of predictive logic for making decisions about people has recurring, inherent flaws, and should be rejected in many cases.

A wrenching case study comes from the UK’s liver allocation algorithm, which appears to discriminate by age, with some younger patients seemingly unable to receive a transplant, no matter how ill. What went wrong here? Can it be fixed? Or should health systems avoid using algorithms for liver transplant matching?"

https://www.aisnakeoil.com/p/does-the-uks-liver-transplant-matching

#Algorithms #PredictiveAlgorithms #AgeDiscrimination #UK

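One widely discussed failure mode fits the symptoms described here: when matching maximises predicted survival benefit over a fixed horizon, most of a young patient's true benefit falls outside the window. The arithmetic below uses invented numbers purely to show that truncation effect; it is not the NHS's actual scoring code.

```python
# Invented numbers: how a fixed prediction horizon can disadvantage
# young patients in benefit-based organ matching.
HORIZON_YEARS = 5  # benefit is only measured inside this window

def benefit_score(years_with_tx: float, years_without_tx: float) -> float:
    """Predicted extra years of life, truncated at the horizon."""
    return min(years_with_tx, HORIZON_YEARS) - min(years_without_tx, HORIZON_YEARS)

# Older patient: 4 years with a transplant vs 1 year without.
print(benefit_score(4.0, 1.0))    # 3.0 -> ranked high

# Young patient: 40 years with a transplant vs 4.5 without. Nearly all
# of the real gain lies beyond the horizon, so the score barely moves.
print(benefit_score(40.0, 4.5))   # 0.5 -> ranked low
```
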
Miguel Afonso Caetano
#AI #PredictiveAlgorithms #PredictiveOptimization #AIEthics: "In predictive optimisation systems, machine learning is used to predict future outcomes of interest about individuals, and these predictions are used to make decisions about them. Despite being based on pseudoscience (on the belief that the future of the individual is already written and, therefore, readable), not working and unfixably harmful, predictive optimisation systems are still used by private companies and by governments. As they are based on the assimilation of people to things, predictive optimisation systems have inherent political properties that cannot be altered by any technical design choice: the initial choice about whether or not to adopt them is therefore decisive, as Langdon Winner wrote about inherently political technologies.

The adoption of predictive optimisation systems is incompatible with liberalism and the rule of law because it results in people not being recognised as self-determining subjects, not being equal before the law, not being able to predict which law will be applied to them, all being under surveillance as 'suspects', and being able or unable to exercise their rights in ways that depend not on their status as citizens, but on their contingent economic, social, emotional, health or religious status. Under the rule of law, these systems should simply be banned.

Requiring only a risk impact assessment – as in the European Artificial Intelligence Act – is like being satisfied with asking whether a despot is benevolent or malevolent: freedom, understood as the absence of domination, is lost whatever the answer. Under the AI Act's harm approach to fundamental rights impact assessments (perhaps a result of the "lobbying ghost in the machine of regulation"), fundamental rights can be violated with impunity as long as there is no foreseeable harm."

https://zenodo.org/records/10866778

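The first sentence of that quote defines predictive optimisation precisely enough to sketch. Below is a schematic toy version, where the features, training data, and 0.5 threshold are all placeholders: a model estimates a future outcome for an individual, and the "decision about them" is a mechanical threshold on that estimate, which is the treatment of people as things that the author argues no design tweak can remove.

```python
# Schematic predictive optimisation, per the quoted definition:
# predict a future outcome about an individual, then decide about
# them by thresholding the prediction. All data is a placeholder.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                          # past individuals
y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)   # past outcomes

model = LogisticRegression().fit(X, y)

def decide(features, threshold=0.5):
    p = model.predict_proba([features])[0, 1]   # P(predicted future outcome)
    return "deny" if p >= threshold else "grant"  # decision is mechanical

print(decide([1.2, 0.0, -0.3]))
```
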
dcanalitica
Algorithmic constructions of risk: Anticipating uncertain futures in child protection services
#PredictiveAlgorithms #DataInfrastructures

https://journals.sagepub.com/doi/10.1177/20539517231186120

Ricardo Harvin
Catch #ClassOf09 if you can for a good preview, imo, of some of the very real and highly probable #societal problems that #LLMs, #ML, and #AI #algorithms will definitely cause or make exponentially worse, which #BigTech is actively claiming AI will improve or solve.

#Power never yields power, and whatever power we grant to #technology will eventually and deliberately be used against us by that technology.

#LargeLanguageModels #MachineLearning #ArtificialIntelligence #PredictiveAlgorithms

Bongolian
Bina Venkataraman: Technology of the future shouldn’t trap people in the past

"But would you think it fair to be denied life insurance based on your Zip code, online shopping behavior or social media posts? Or to pay a higher rate on a student loan because you majored in history rather than science? What if you were passed over for a job interview or an apartment because of where you grew up? How would you feel about an insurance company using the data from your Fitbit or Apple Watch to figure out how much you should pay for your health-care plan?"

…

"With Congress thus far failing to pass an algorithmic accountability law, some state and local leaders are now stepping up to fill the void. Draft regulations issued last month by Colorado’s insurance commissioner, as well as recently proposed reforms in D.C. and California, point to what policymakers might do to bring us a future where algorithms better serve the public good."

#predictivealgorithms
https://www.washingtonpost.com/opinions/2023/03/15/algorithms-bias-regulations

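The Zip-code example is the classic case of proxy discrimination: a model that is never shown a protected attribute can still price along it, because Zip code is correlated with that attribute in the historical data. A toy demonstration with fabricated numbers:

```python
# Toy proxy discrimination (all data fabricated): the protected
# attribute is withheld, but Zip code correlates with it, so the
# learned prices still split along the protected line.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 1000
protected = rng.integers(0, 2, n)                         # never given to the model
zip_code = (protected ^ (rng.random(n) < 0.1)).astype(float)  # ~90% correlated
premium = 100 + 40 * protected + rng.normal(0, 5, n)      # biased historical prices

model = LinearRegression().fit(zip_code.reshape(-1, 1), premium)
print(model.predict([[0.0], [1.0]]))  # roughly [104, 136]: the gap survives
```
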
Glyn Moody
German Constitutional Court strikes down #PredictiveAlgorithms for policing - https://www.euractiv.com/section/artificial-intelligence/news/german-constitutional-court-strikes-down-predictive-algorithms-for-policing/ - now make it EU-wide...