Building Ethics and Diversity in AI
Bias in machine learning is a significant concern as the technology becomes increasingly ubiquitous across industries. Some types of bias can be attributed to limits in design and tooling; however, bias in the training data itself is a more general phenomenon. Skewed training data propagates into discriminatory AI models that amplify human prejudices. Building a data labeling framework that uses a diverse set of crowd workers to collect and label the data can help reduce bias. Additionally, when you tap into a global crowd workforce, you need to optimize the quality and speed of the labeling tasks while at the same time following ethical pricing practices so the crowd workforce is paid fair wages. This is a tough nut to crack. In this talk, we present some of the frameworks and approaches to minimize bias and maintain a thriving community of highly engaged crowd workers. We will talk about:
- A bias minimizer framework that routes data labeling tasks to the right crowd workers and maintains a healthy worker distribution for a given task (a brief sketch of this routing idea follows the abstract).
- An approach to ensure a fair wage for the crowd, taking location, skill sets, and task complexity into account.
- Ways to increase crowd performance and engagement with smart targeting of labeling tasks to the crowd workers best suited for the job.
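To make the routing idea in the first bullet concrete, the minimal sketch below assigns a task to a qualified worker while keeping the distribution of assignments across worker groups close to a target. The Worker structure, the single region attribute, the target shares, and the route_task helper are illustrative assumptions, not the presenters' actual framework.

```python
# Minimal sketch: route a labeling task to a qualified worker while keeping
# the per-region assignment distribution close to a target (assumed design,
# for illustration only).
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Worker:
    worker_id: str
    region: str                              # attribute used for balancing
    skills: set = field(default_factory=set)

def route_task(task_skills, workers, assigned_counts, target_share):
    """Pick a qualified worker whose assignment keeps the per-region
    distribution of completed assignments closest to the target shares."""
    qualified = [w for w in workers if task_skills <= w.skills]
    if not qualified:
        return None
    total = sum(assigned_counts.values()) + 1

    def imbalance(worker):
        # Sum of absolute deviations from the target shares if this
        # worker were assigned the task.
        counts = Counter(assigned_counts)
        counts[worker.region] += 1
        return sum(abs(counts[r] / total - share)
                   for r, share in target_share.items())

    return min(qualified, key=imbalance)

# Example usage with hypothetical workers and an even target across regions.
workers = [
    Worker("w1", "NA", {"image", "en"}),
    Worker("w2", "APAC", {"image", "en"}),
    Worker("w3", "EMEA", {"image"}),
]
target = {"NA": 1 / 3, "APAC": 1 / 3, "EMEA": 1 / 3}
assigned = Counter({"NA": 5, "APAC": 1, "EMEA": 3})
chosen = route_task({"image"}, workers, assigned, target)
print(chosen.worker_id)  # "w2": routes toward the under-represented region
```

The same greedy structure could be extended with quality and fair-wage terms (for example, weighting candidates by skill match and a location- and complexity-adjusted pay rate), but those weights are not specified in the abstract.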
Presentation Type: On-Demand Session (Recorded)
Date / Time: On demand, ET (US)
Presented by: Appen collects and labels images, text, speech, audio, video, and other data used to build and continuously improve the world’s most innovative artificial intelligence systems.