Before reading this article, I hadn’t thought much about the stereotypes and biases that arise when labeling AI datasets. Without careful manual labeling and tighter restrictions on these datasets, inappropriate or misconstrued categorizations can easily slip in. I definitely agree with the author about the need for more engagement and ethical guidelines, especially as students start using these resources.
Large corporations, tech companies, and dataset creators all hold the power to label images, and crowdsourced workers are another powerful force. Increasing public involvement comes with pros and cons. A pro is that these conversations would include a diverse range of stakeholders, including ethicists, sociologists, affected communities, and policymakers. A con is that crowdsourced workers are part of why we have these issues in the first place, since they can ignore or misapply the labeling guidelines.
The use of AI in areas such as law enforcement and judicial systems, where AI tools help identify, classify, or predict human behaviors, poses significant legal and ethical challenges. Incorrect labels can lead to wrongful accusations or biased law enforcement practices, especially in situations involving racial and cultural stereotypes.
In-class practice code:
For this week’s homework, I wanted to play with using different emojis that would align with the emotions I trained in Teachable Machine. I decided to just go with sad and happy. At first, my images weren’t distinct enough for Teachable Machine to tell the two classes apart, so I added over 300 images for each emotion, including some with thumbs up and thumbs down.
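The post doesn’t show the model-loading code, so here is a minimal sketch of how a Teachable Machine image model can be pulled into p5.js with ml5.js and used to classify the webcam feed. The model URL and the class names “Happy” and “Sad” are placeholders for whatever the trained model actually exports.

```javascript
// Minimal sketch: classify the webcam with a Teachable Machine model via ml5.js.
// The model URL is a placeholder — swap in the shareable link from your own model.
let classifier;
let video;
let label = "";

const modelURL = "https://teachablemachine.withgoogle.com/models/YOUR_MODEL_ID/";

function preload() {
  classifier = ml5.imageClassifier(modelURL + "model.json");
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  classifyVideo();
}

function classifyVideo() {
  // Ask the model for a prediction, then loop again inside the callback
  classifier.classify(video, gotResult);
}

function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  label = results[0].label; // e.g. "Happy" or "Sad", depending on the class names
  classifyVideo();
}

function draw() {
  image(video, 0, 0, width, height);
  textSize(32);
  fill(0);
  text(label, 10, 40);
}
```

With 300+ images per class, the predictions coming back through `gotResult` should flip between the two labels much less erratically.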
In the first iteration, I used for loops to scatter sad or happy emojis all over my screen (a rough sketch of that idea follows the links below):
Link to video demo^
Link to p5 sketch: https://editor.p5js.org/cp3636/sketches/ewCMbyUSK
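The scatter code itself isn’t reproduced in the post, but based on the description, a version of it might look something like this. The emoji count per frame and the “Happy”/“Sad” label names are assumptions; `label` would be updated by the classifier callback shown earlier.

```javascript
// Assumed label variable, updated elsewhere by the ml5 classifier callback
let label = "Happy";

function setup() {
  createCanvas(640, 480);
}

function draw() {
  background(255);
  textSize(40);
  // Scatter 50 emojis at random positions every frame,
  // picking the emoji based on the current classification
  for (let i = 0; i < 50; i++) {
    const emoji = label === "Happy" ? "😊" : "😢";
    text(emoji, random(width), random(height));
  }
}
```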
I decided I wanted to add scale() from the p5 reference because, honestly, the emojis popping around gave me a headache. So instead, I drew a single emoji in a fixed position.
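Here is a rough idea of what that calmer version could look like, using push()/translate()/scale() to draw one emoji at the center of the canvas. Tying the scale factor to the classifier’s confidence is my assumption, since the post only says a single emoji was drawn in a fixed position.

```javascript
// Assumed variables, updated elsewhere by the ml5 classifier callback
let label = "Happy";
let confidence = 0.9;

function setup() {
  createCanvas(640, 480);
}

function draw() {
  background(255);
  const emoji = label === "Happy" ? "😊" : "😢";

  push();
  // Draw a single emoji at a fixed position (the center of the canvas)
  // and scale it up as the model gets more confident in its prediction
  translate(width / 2, height / 2);
  scale(1 + confidence);
  textAlign(CENTER, CENTER);
  textSize(48);
  text(emoji, 0, 0);
  pop();
}
```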