About This Webinar

Explore the mechanics and societal implications of artificial intelligence! We begin by level-setting: AI is a "probability machine" that predicts outcomes based on historical data rather than true understanding. We then illustrate how bias is often baked in during the training phase, where datasets - ranging from historical maps and textbooks to medical imaging - unintentionally reflect human prejudices and errors. The webinar covers diverse examples of AI bias, such as the reinforcement of gender stereotypes in image generation and systemic inequities in judicial risk assessments.

Then we address intentional bias and the "algorithmic treadmill," showing how digital platforms can marginalize specific cultural and linguistic groups. We highlight the rise of "AI slop" - low-quality, purpose-free content that can damage a professional's reputation if used without scrutiny. To combat these issues, we advocate for a "human-led approach," emphasizing critical thinking and the use of specialized tools like Latimer or NotebookLM to control data sources. We conclude with a call for skepticism and proactive mitigation, urging users to "co-create, don't abdicate" with AI.

What We Covered

AI as a Probability Machine: AI models function by analyzing massive datasets to predict the most likely next word or element based on patterns they have "read". Because they rely on historical data, if that data contains human prejudices or inaccuracies, the AI will naturally mirror and perpetuate those biases in its responses.
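To make the "probability machine" idea concrete, here is a minimal sketch of a toy next-word predictor. It is our own illustration, not material from the webinar: the tiny hand-made corpus, the skewed nurse/engineer sentences, and the predict_next helper are all assumptions chosen to show how a counting model inherits whatever associations its data contains.

```python
from collections import Counter, defaultdict

# Toy "historical data" with a skewed association baked in:
# "nurse" is always followed by "she", "engineer" by "he".
corpus = (
    "the nurse said she was tired . "
    "the nurse said she was busy . "
    "the engineer said he was tired . "
    "the engineer said he was busy ."
).split()

# Count, for every two-word context, how often each next word follows it.
follows = defaultdict(Counter)
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    follows[(w1, w2)][w3] += 1

def predict_next(context):
    """Return the most probable next word for a two-word context."""
    counts = follows[context]
    word, n = counts.most_common(1)[0]
    return word, n / sum(counts.values())

# The model has no notion of gender or jobs; it simply mirrors its data.
print(predict_next(("nurse", "said")))     # ('she', 1.0)
print(predict_next(("engineer", "said")))  # ('he', 1.0)
```

Real models operate at vastly larger scale with learned weights rather than raw counts, but the principle is the same: the output distribution is inherited from the training text, stereotypes included.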

Unintentional vs. Intentional Bias: Bias can occur accidentally through skewed training sets - such as medical AI identifying tumors based on the presence of a ruler in the image rather than the pathology itself. It can also be intentional, where models are programmed to filter information through specific perspectives or overcorrect for diversity, sometimes leading to historical inaccuracies.
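The ruler example is a case of what researchers call shortcut learning. The sketch below is our own illustration on synthetic data (not the webinar's material, and not a real medical dataset): a classifier latches onto a "ruler present" feature that perfectly tracks the label in training but is meaningless at deployment.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Training set: label 1 = malignant. The genuine pathology signal is noisy,
# but clinicians photographed every malignant lesion with a ruler, so the
# "ruler present" feature tracks the label perfectly -- a spurious shortcut.
y_train = rng.integers(0, 2, n)
pathology = y_train + rng.normal(0, 1.5, n)   # weak, honest signal
ruler = y_train.astype(float)                 # perfect, accidental signal
X_train = np.column_stack([pathology, ruler])

model = LogisticRegression().fit(X_train, y_train)

# Deployment: rulers now appear at random, independent of the diagnosis.
y_test = rng.integers(0, 2, n)
pathology_t = y_test + rng.normal(0, 1.5, n)
ruler_t = rng.integers(0, 2, n).astype(float)
X_test = np.column_stack([pathology_t, ruler_t])

print("train accuracy:", model.score(X_train, y_train))  # near-perfect
print("test accuracy: ", model.score(X_test, y_test))    # falls sharply
print("weights (pathology, ruler):", model.coef_[0])     # ruler dominates
```

Auditing the learned weights, or testing on data where the artifact is absent, is one way to expose such a shortcut before it causes harm.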

The Algorithmic Treadmill: Social media algorithms can create a "flywheel effect" that marginalizes certain cultures. For example, linguistic bias can drive the appropriation and eventual erasure of African-American English (AAE), which standard spellcheckers flag as "incorrect" and mainstream internet trends dilute.

Mitigation through Human Oversight: Users are encouraged to "co-create, don't abdicate," meaning humans should always have the final say and review AI output for "slop" or errors. Practical steps include using inclusive models like Latimer, providing feedback via thumbs-down buttons to train models against bias, and using tools like NotebookLM to control and audit the specific data sources the AI uses.

Session Resources