#reading #ML
Mitigating Bias in Machine Learning

Edited by @drcaberry
Brandeis Hill Marshall

"We dedicate this work to the diverse voices in #AI who tirelessly call out bias, work to mitigate it, and advocate for #EthicalAI every day.
Some of the trailblazers doing the work are
@ruha9
@timnitGebru
@cfiesler
Joy Buolamwini
@ruchowdh
@safiyanoble
We also dedicate this work to the future engineers, scientists, and sociologists who will be inspired by it to join the charge."

Book Cover:
Mitigating Bias in Machine Learning
Edited by Carlotta A. Berry, Brandeis Hill Marshall
https://www.mhprofessional.com/mitigating-bias-in-machine-learning-9781264922444-usa

This practical guide shows, step by step, how to use machine learning to carry out actionable decisions that do not discriminate based on numerous human factors, including ethnicity and gender. The authors examine the many kinds of bias that occur in the field today and provide mitigation strategies that are ready to deploy across a wide range of technologies, applications, and industries.
Edited by engineering and computing experts, Mitigating Bias in Machine Learning includes contributions from recognized scholars and professionals working across different artificial intelligence sectors. Each chapter addresses a different topic, and real-world case studies featured throughout highlight discriminatory machine learning practices and clearly show how they were reduced.

Mitigating Bias in Machine Learning addresses:
Ethical and Societal Implications of Machine Learning
Social Media and Health Information Dissemination
Comparative Case Study of Fairness Toolkits
Bias Mitigation in Hate Speech Detection
Unintended Systematic Biases in Natural Language Processing
Combating Bias in Large Language Models
Recognizing Bias in Medical Machine Learning and AI Models
Machine Learning Bias in Healthcare
Achieving Systemic Equity in Socioecological Systems
Community Engagement for Machine Learning
Figure 9.9 ML life cycle with bias indicators and mitigation techniques.
(inspired by Herhausen & Fahse, 2022; Huang et al., 2022; and van Giffen et al., 2022)

• Preprocessing bias mitigation techniques attempt to remove discrimination by adding more data or modifying the available training data.

• In-processing bias mitigation techniques affect the algorithm itself and the learning procedure by imposing constraints, updating the objective function, or applying regularization.

• Postprocessing bias mitigation techniques may be applied after model deployment or during a re-evaluation period, when adjustments are made to the model's decision thresholds or its outputs, including relabeling.
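The postprocessing idea in the last bullet can be illustrated with a small sketch. Everything below is hypothetical and for illustration only: the synthetic scores, the binary group attribute, the target approval rate, and the `group_thresholds` helper are invented here, not taken from the book. The sketch picks a per-group decision threshold so each group ends up with the same positive (approval) rate, one simple way of adjusting decision thresholds after training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model scores and a binary group attribute (0 or 1).
scores = rng.random(1000)
group = rng.integers(0, 2, size=1000)
# Simulate a biased scoring model: group 1's scores skew lower.
scores[group == 1] *= 0.8

def group_thresholds(scores, group, target_rate=0.3):
    """Choose a per-group decision threshold so that each group's
    positive (approval) rate matches the target rate."""
    thresholds = {}
    for g in np.unique(group):
        g_scores = scores[group == g]
        # The (1 - target_rate) quantile approves ~target_rate of the group.
        thresholds[g] = np.quantile(g_scores, 1 - target_rate)
    return thresholds

thresholds = group_thresholds(scores, group)
decisions = np.array([scores[i] >= thresholds[g]
                      for i, g in enumerate(group)])

for g in (0, 1):
    rate = decisions[group == g].mean()
    print(f"group {g}: threshold={thresholds[g]:.3f}, positive rate={rate:.2f}")
```

A single global threshold on these scores would approve group 1 far less often; the per-group thresholds equalize the approval rates without retraining the model, which is exactly what makes postprocessing attractive when the model itself cannot be changed.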

From book:
Mitigating Bias in Machine Learning
Edited by Carlotta A. Berry, Brandeis Hill Marshall
https://www.mhprofessional.com/mitigating-bias-in-machine-learning-9781264922444-usa
Fostering a Federated AI Commons ecosystem

A policy briefing by Joana Varon, @schock and @timnitGebru

https://codingrights.org/docs/Federated_AI_Commons_ecosystem_T20Policybriefing.pdf

> This policy paper provides actionable recommendations for the G20 to foster decentralized AI development. We urge support for an alternative AI ecosystem characterized by community and public control of consensual data; decentralized, local, and federated development of small, task-specific AI models; worker cooperatives for appropriately ...

#ai #commons #policy

[Algorithmique 4/6] Can AI just algorithmize the climate? A podcast hosted by @mathildesaliou with @clementmarquet and Anne-Laure Ligozat

https://next.ink/podcast/algorithmique-4-6-ia-qua-algorithmiser-le-climat/

Transcript: https://www.librealire.org/ia-qu-a-algorithmiser-le-climat

#ai #environment

It's really effing obvious LLMs are a con trick:

If LLMs were actually intelligent, they would be able to just learn from each other and would get better all the time. But what actually happens when LLMs only learn from each other is that their models collapse and they start spouting gibberish.

LLMs depend entirely on copying what humans write because they have no ability to create anything themselves. That's why they collapse when you remove their access to humans.

There is no intelligence in LLMs, it's just repackaging what humans have written without their permission. It's stolen human labour.

#LLM #LLMs #AI #AIs #ChatGPT

Join the conversation at #NISOPlus25, where we'll address #ResearchIntegrity, #OpenAccess, #AI, & more! Our program features speakers @timnitGebru (our 2025 Miles Conrad awardee) & ALA president Cindy Hohl. Register by January 10 for early bird rates: https://niso.plus/niso-plus-2025-baltimore/
#ScholComm

I was just talking to a colleague about the AI bubble. These companies are in so deep they can't tell the truth. They are all lying about the efficacy, the costs to consumers, and, most importantly, how and when this tech works or doesn't.

Is there enough money on the line to kill over?

There's likely a trillion bucks of valuations across the industry: billions in sunk costs, billions in C-suite remuneration, billions in VC management costs.

RIP Suchir

https://www.mercurynews.com/2024/12/13/openai-whistleblower-found-dead-in-san-francisco-apartment/

#OpenAI #AI #VC #SuchirBalaji

This whole situation is about as believable as a Russian dissident or oligarch falling out of a mid-rise window in Moscow. Like the Boeing whistleblower case, I don't think it much matters whether "foul play" was involved, because this whole scenario is foul af.

This young man was very brave and righteous to blow the whistle, and was undoubtedly under immense social and professional pressure not to. He was fucking up A LOT of people's grifty gravy train.

We should read his words

https://suchir.net/fair_use.html

#AI
