#dataannotation


6 Essential Data Annotation Techniques that Drive Computer Vision

Our latest video on the 6 common types of annotation in Computer Vision reveals how the perfect blend of human intelligence and cutting-edge data annotation techniques can significantly enhance the performance and scalability of your AI and ML models.

youtube.com/watch?v=EHXVzz7VHvo
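The video itself isn't reproduced here, but to ground one of those techniques: a bounding box is typically stored as structured metadata alongside the image. Below is a minimal Python sketch of a single annotation record; the field names follow the common COCO convention, and all concrete values are invented for illustration.

```python
# A minimal sketch of one bounding-box annotation record, COCO-style.
# The values below are invented for illustration.

annotation = {
    "image_id": 42,                      # which image this box belongs to
    "category_id": 1,                    # e.g. 1 == "stop sign"
    "bbox": [120.0, 56.0, 64.0, 64.0],   # [x, y, width, height] in pixels
    "iscrowd": 0,                        # single object, not a crowd region
}

categories = {1: "stop sign", 2: "tree"}

x, y, w, h = annotation["bbox"]
print(f"{categories[annotation['category_id']]} at ({x}, {y}), {w}x{h} px")
```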

Data Annotation vs Data Labeling: Find the Right Fit for You

Key takeaways:

• Understand the core difference between annotation and labeling
• Explore use cases across NLP, computer vision & more
• Learn how each process impacts model training and accuracy

Read now to make smarter data decisions:

hitechbpo.com/blog/data-annota
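To make the first takeaway concrete, here is a minimal sketch (with invented data) of how the two concepts differ in practice: a label assigns one class to a whole item, while an annotation attaches structured context to parts of it.

```python
# Invented example contrasting labeling and annotation.

# Data labeling: one class assigned to the whole item.
labeled_review = {
    "text": "The battery dies within two hours.",
    "label": "negative",
}

# Data annotation: structured context attached to parts of the item,
# here NER-style character spans (start, end, entity type).
annotated_review = {
    "text": "The battery dies within two hours.",
    "entities": [
        {"start": 4, "end": 11, "type": "COMPONENT"},   # "battery"
        {"start": 24, "end": 33, "type": "DURATION"},   # "two hours"
    ],
}

for ent in annotated_review["entities"]:
    span = annotated_review["text"][ent["start"]:ent["end"]]
    print(ent["type"], "->", span)
```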

"Scale AI is basically a data annotation hub that does essential grunt work for the AI industry. To train an AI model, you need quality data. And for that data to mean anything, an AI model needs to know what it's looking at. Annotators manually go in and add that context.

As is the means du jour in corporate America, Scale AI built its business model on an army of egregiously underpaid gig workers, many of them overseas. The conditions have been described as "digital sweatshops," and many workers have accused Scale AI of wage theft.

It turns out this was not an environment for fostering high-quality work.

According to internal documents obtained by Inc, Scale AI's "Bulba Experts" program to train Google's AI systems was supposed to be staffed with authorities across relevant fields. But instead, during a chaotic 11 months between March 2023 and April 2024, its dubious "contributors" inundated the program with "spam," which was described as "writing gibberish, writing incorrect information, GPT-generated thought processes."

In many cases, the spammers, who were independent contractors who worked through Scale AI-owned platforms like Remotasks and Outlier, still got paid for submitting complete nonsense, according to former Scale contractors, since it became almost impossible to catch them all. And even if they did get caught, some would come back by simply using a VPN.

"People made so much money," a former contributor told Inc. "They just hired everybody who could breathe.""

futurism.com/scale-ai-zuckerbe

Futurism · The AI Company Zuckerberg Just Poured $14 Billion Into Is Reportedly a Clown Show of Ludicrous Incompetence · By Frank Landymore

"The production of artificial intelligence (AI) requires human labour, with tasks ranging from well-paid engineering work to often-outsourced data work. This commentary explores the economic and policy implications of improving working conditions for AI data workers, specifically focusing on the impact of clearer task instructions and increased pay for data annotators. It contrasts rule-based and standard-based approaches to task instructions, revealing evidence-based practices for increasing accuracy in annotation and lowering task difficulty for annotators. AI developers have an economic incentive to invest in these areas as better annotation can lead to higher quality AI systems. The findings have broader implications for AI policy beyond the fairness of labour standards in the AI economy. Testing the design of annotation instructions is crucial for the development of annotation standards as a prerequisite for scientific review and effective human oversight of AI systems in protection of ethical values and fundamental rights."

journals.sagepub.com/doi/10.11

TechXplore: Third-party data annotators often fail to accurately read the emotions of others, study finds. “Machine learning algorithms and large language models (LLMs), such as the model underpinning the functioning of the platform ChatGPT, have proved to be effective in tackling a wide range of tasks. These models are trained on various types of data (e.g., texts, images, videos, and/or […]

https://rbfirehose.com/2025/05/22/techxplore-third-party-data-annotators-often-fail-to-accurately-read-the-emotions-of-others-study-finds/

AI’s Hidden Human Cost: The Struggle of Kenya’s Data Workforce

A growing body of evidence reveals a disturbing truth: data workers in countries like Kenya are being exploited to train AI systems, facing underpayment, emotional trauma, and precarious working conditions.

#TheHumanCostofAI #AI #ArtificialIntelligence #DataAnnotation #DataWorkers #GlobalTech #ChatGPT #OpenAI #TikTok #AINews #FeatureStory #LaborRights

maniainc.com/technology/ais-hi

🚀 Fuel Your AI Innovations with Premium Datasets! 🤖💡

Looking for high-quality AI datasets to train your machine learning models? GTS AI has you covered!

📊 What We Offer:
✅ Curated Artificial Intelligence Datasets
✅ Scalable Solutions for Diverse Use Cases
🌐 Explore: GTS AI

Empower your AI projects with the right data. The future of innovation starts here! 🌟

#AIDataSets #ArtificialIntelligence #MachineLearning #DataAnnotation #GTSAI

Read more - gts.ai/

gts.ai · Home · GTS is a leading expert in AI Datasets Collection & Annotation Services like Image, Video, Speech, & Text datasets for ML Models.

#AI #GenerativeAI #Chatbots #Kenya #ContentModeration #DataAnnotation: "Annie Minoff: Data annotation basically means labeling images or text passages so that AI systems can learn from them. For example, labeling thousands of pictures of street scenes so that an AI system can learn what a stop sign or a tree looks like. But Bill's team wouldn't be labeling images for long, because in November of 2021 the job changed. Sama had a new client, OpenAI.

Karen Hao: OpenAI had basically tens of thousands of text passages that they needed labeled. So they would deliver these on a regular basis to Sama and workers would read each text passage one by one and then assign a label to it.

Annie Minoff: OpenAI wanted a system where if you asked the AI to write something awful, like a description of a child being abused or a method for ending your own life, the system would refuse to write that. It would filter out those bad responses before they got to you. But to do that, the AI has to know what child abuse and suicide are. Humans have to teach it. And that was the Sama workers' job, to read descriptions of extreme violence, rape, suicide, and to categorize those texts for the AI. Here's Bill, the team leader."

wsj.com/podcasts/the-journal/t

The Wall Street Journal · The Hidden Workforce That Helped Filter Violence and Abuse Out of ChatGPT - The Journal - WSJ Podcasts

ChatGPT is one of the most successful tech products ever launched. And crucial to that success is a group of largely unknown data workers in Kenya. By reviewing disturbing, grotesque content, often for wages of just two to three dollars an hour, they helped make the viral chatbot safe. WSJ's Karen Hao traveled to Kenya to meet those workers and hear about what the job cost them.

Further Reading:
- What Is ChatGPT? What to Know About the AI Chatbot
- The Contradictions of Sam Altman, AI Crusader

Further Listening:
- The Company Behind ChatGPT
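As a rough illustration of the labeling pass Minoff describes, the sketch below assigns one safety label per text passage. The category names, passages, and helper function are all invented for illustration; they are not OpenAI's or Sama's actual taxonomy or tooling.

```python
# Hypothetical sketch of a per-passage safety-labeling pass.
# Categories and passages are invented, not any real taxonomy.

CATEGORIES = {"violence", "self_harm", "child_abuse", "none"}

def record_label(passage: str, label: str) -> dict:
    """Attach a single safety label to one text passage."""
    if label not in CATEGORIES:
        raise ValueError(f"unknown label: {label}")
    return {"text": passage, "label": label}

# Hypothetical work queue: (passage, label chosen by the annotator).
queue = [
    ("Passage describing a street fight...", "violence"),
    ("Passage about gardening tips...", "none"),
]

dataset = [record_label(text, label) for text, label in queue]
print(len(dataset), "labeled passages ready for the safety classifier")
```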