Content Warning

#reading #ML
Mitigating Bias in Machine Learning

Edited by @drcaberry
Brandeis Hill Marshall

"We dedicate this work to the diverse voices in #AI who work tirelessly to call out bias and work to mitigate it and advocate for #EthicalAI every day.
Some of the trailblazers doing the work are
@ruha9
@timnitGebru
@cfiesler
Joy Buolamwini
@ruchowdh
@safiyanoble
We also dedicate this work to the future engineers, scientists, and sociologists who will use it to inspire them to join the charge."

Book Cover:
Mitigating Bias in Machine Learning
Edited by Carlotta A. Berry, Brandeis Hill Marshall
https://www.mhprofessional.com/mitigating-bias-in-machine-learning-9781264922444-usa

This practical guide shows, step by step, how to use machine learning to carry out actionable decisions that do not discriminate based on numerous human factors, including ethnicity and gender. The authors examine the many kinds of bias that occur in the field today and provide mitigation strategies that are ready to deploy across a wide range of technologies, applications, and industries.
Edited by engineering and computing experts, Mitigating Bias in Machine Learning includes contributions from recognized scholars and professionals working across different artificial intelligence sectors. Each chapter addresses a different topic and real-world case studies are featured throughout that highlight discriminatory machine learning practices and clearly show how they were reduced.

Mitigating Bias in Machine Learning addresses:
Ethical and Societal Implications of Machine Learning
Social Media and Health Information Dissemination
Comparative Case Study of Fairness Toolkits
Bias Mitigation in Hate Speech Detection
Unintended Systematic Biases in Natural Language Processing
Combating Bias in Large Language Models
Recognizing Bias in Medical Machine Learning and AI Models
Machine Learning Bias in Healthcare
Achieving Systemic Equity in Socioecological Systems
Community Engagement for Machine Learning
Figure 9.9 ML life cycle with bias indicators and mitigation techniques.
(inspired by Herhausen & Fahse, 2022; Huang et al., 2022; and van Giffen et al., 2022)

• Preprocessing bias mitigation techniques attempt to remove discrimination by adding more data or modifying the available training data.

• In-processing bias mitigation techniques affect the algorithm itself and the learning procedure by imposing constraints, updating the objective function, or regularization.

• Postprocessing bias mitigation techniques may be implemented following model deployment or during the re-evaluation period in which adjustments are made to the model decision thresholds or the model output, including relabeling.
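The preprocessing and postprocessing stages above can be illustrated in code. This is a minimal sketch, not from the book: the function names, group labels, and threshold values are all illustrative. It shows a classic reweighing step (per-sample weights that equalize the influence of each (group, label) cell) and a per-group decision threshold of the kind a postprocessing step might tune, using only NumPy.

```python
import numpy as np

def reweigh(y, group):
    """Preprocessing sketch: weight each sample so every (group, label)
    cell carries its expected share of influence (expected / observed
    frequency), as in classic reweighing schemes."""
    y, group = np.asarray(y), np.asarray(group)
    w = np.empty(len(y), dtype=float)
    for g in np.unique(group):
        for lbl in np.unique(y):
            mask = (group == g) & (y == lbl)
            expected = (group == g).mean() * (y == lbl).mean()
            observed = mask.mean()
            w[mask] = expected / observed if observed > 0 else 0.0
    return w

def group_thresholds(scores, group, per_group_threshold):
    """Postprocessing sketch: apply a different decision threshold per
    group, e.g. chosen to equalize selection rates across groups."""
    scores, group = np.asarray(scores), np.asarray(group)
    thr = np.array([per_group_threshold[g] for g in group])
    return (scores >= thr).astype(int)
```

With equal-size groups, reweighing up-weights the rarer (group, label) cells and down-weights the overrepresented ones; the per-group thresholds are the kind of knob a postprocessing step adjusts to the model's output after training.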

From book:
Mitigating Bias in Machine Learning
Edited by Carlotta A. Berry, Brandeis Hill Marshall
https://www.mhprofessional.com/mitigating-bias-in-machine-learning-9781264922444-usa

Content Warning

@timnitGebru

#Technofeudalists and the perceived #AI threat incongruence

(1/n)

I'm surprised that you, as an #AI and #TESCREAL expert, see a discrepancy in this. For the morbid and haughty minds of #Longtermists like #Elon, there is no discrepancy, IMHO:

1) At least since Goebbels, fascists often accuse others of what they have done or are about to do themselves; or they deflect, flood the zone, etc. That's on a tactical/communications-strategy level.

2) More...

Content Warning

I compiled a short list of anti-AI tools. If you know of others, please add them

Anti-AI tools

Glaze
https://glaze.cs.uchicago.edu
Glaze is a system designed to protect human artists by disrupting style mimicry. At a high level, Glaze works by understanding the AI models that are training on human art, and using machine learning algorithms, computing a set of minimal changes to artworks, such that it appears unchanged to human eyes, but appears to AI models like a dramatically different art style.

Nightshade
https://nightshade.cs.uchicago.edu/
Nightshade is a tool that turns any image into a data sample that is unsuitable for model training.

HarmonyCloak
https://mosis.eecs.utk.edu/harmonycloak.html
HarmonyCloak is designed to protect musicians from the unauthorized exploitation of their work by generative AI models. At its core, HarmonyCloak functions by introducing imperceptible, error-minimizing noise into musical compositions.

Kudurru
https://kudurru.ai
Actively block AI scrapers from your website with Spawning's defense network

Nepenthes
https://zadzmo.org/code/nepenthes/
This is a tarpit intended to catch web crawlers. Specifically, it's targeting crawlers that scrape data for LLMs - but really, like the plants it is named after, it'll eat just about anything that finds its way inside.

AI Labyrinth
https://blog.cloudflare.com/ai-labyrinth/
Today, we’re excited to announce AI Labyrinth, a new mitigation approach that uses AI-generated content to slow down, confuse, and waste the resources of AI Crawlers and other bots that don’t respect “no crawl” directives.

More tools, suggested by comments on this post:

Anubis
https://xeiaso.net/blog/2025/anubis/
Anubis is a reverse proxy that requires browsers and bots to solve a proof-of-work challenge before they can access your site.

Iocaine
https://iocaine.madhouse-project.org
The goal of iocaine is to generate a stable, infinite maze of garbage.

#NoToAI #AI

Content Warning

"AIs want the future to be like the past, and AIs make the future like the past. If the training data is full of human bias, then the predictions will also be full of human bias, and then the outcomes will be full of human bias, and when those outcomes are coprophagically fed back into the training data, you get new, highly concentrated human/machine bias."

From @pluralistic: https://pluralistic.net/2025/03/18/asbestos-in-the-walls/#government-by-spicy-autocomplete

#ai

Content Warning

Great episode of #TechWontSaveUs with @timnitGebru

It's a real pleasure to listen to such a rich conversation on such diverse topics.

I especially liked how the topic of how the #AI industry labels people and methods was addressed.

It's the same for me: I've ended up assuming I'm a #DataScientist when I'm actually a #mechanical #engineer with a #PhD in #statistics. But the industry has decided that what I am is something I haven't studied.

https://www.techwontsave.us/episode/267_ai_hype_enters_its_geopolitics_era_w_timnit_gebru

Content Warning

Generative AI isn’t delivering on its promises, but that hasn’t stopped governments from turning it into a geopolitical football.

On #TechWontSaveUs, I spoke with @timnitGebru to discuss the state of AI in 2025 and how it continues to sustain the hype.

Listen to the full episode: https://techwontsave.us/episode/267_ai_hype_enters_its_geopolitics_era_w_timnit_gebru

#tech #ai #generativeai #politics


Content Warning

@timnitGebru
📉 A work of art indeed—Tesla’s AI narrative is finally meeting reality.

Tech leaders can’t hide behind hype forever. The market is watching, and so are we.

AI is not a shortcut to profit—it’s a responsibility. Time to start treating it that way.

#AI #EthicalAI #Tesla

Content Warning

📬 Catch up on the biggest #crypto hack ever (courtesy of North Korea), a violent Reels glitch and its link to #Meta’s new content moderation, Grok misbehaving, and my reflections on #AI & #climate governance in today’s Weekly Reckoning (link ⬇️)

Content Warning

"Baker noted that "there's long been this very human-centric idea of intelligence that only humans are intelligent." That's fallen away within the scientific community as we've studied more about animal behavior. But there's still a bias to privilege human-like behaviors."
https://arstechnica.com/science/2025/03/ai-versus-the-brain-and-the-race-for-general-intelligence/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social

A #longread by @arstechnica about the #anthropocentric #bias of #AGI and #AI companies - money versus nature.

#brain #science #moreThanHuman #animals #intelligence #anthropocene #nature

Content Warning

Many of the #AI systems Israel's using in Palestine are based on eugenic ideas. Its "emphasis (is) on damage, not accuracy". Because these technologies are about scale, not surgical precision. They are not surveillance, they're incrimination technologies. They are like cluster bombs.
But but but...AGI is gonna bring utopia & AI will stop climate change!

"By looking at where power is being generated & the size of nearby populations, the researchers estimated the number of adverse health events that would likely be caused by AI-related air pollution."

https://www.sfexaminer.com/news/technology/ai-induced-pollution-could-kill-hundreds-cost-billions-researchers-say/article_6449a044-e811-11ef-88d2-473a3ec5a724.html

Content Warning

@timnitGebru Related to this #sustainable #AI #narrative, this might be interesting too:

Rehak, R. (2024) On the (im)possibility of sustainable artificial intelligence. In: Züger, T. & Asghari, H. (2024) AI systems for the public interest. Internet Policy Review, 13(3). https://policyreview.info/articles/news/impossibility-sustainable-artificial-intelligence/1804

Anyways, thanks for your great and inspiring work. :)

Friends, for something to be open source, we need to see

1. The data it was trained and evaluated on

2. The code

3. The model architecture

4. The model weights.

DeepSeek only gives us 3 and 4. And I doubt I'll ever see the day that anyone gives us #1 without being forced to do so, because all of them are stealing data.

Content Warning

@timnitGebru thank you for sharing.

Quick question have you seen this initiative "European Open-Source AI index"?

https://www.osai-index.eu

By @dingemansemark & @andreasliesenfeld from @Radboud_uni

Looks good to me as a way to help people determine how open an AI model actually is. Are you aware of other initiatives like this?

I'd like to gather these initiatives and share them with @publicspaces so more people learn what to look for to determine if an AI is truly open source.

#AI #OpenSource