Google has laid off more than 200 contractors who worked on refining its artificial intelligence systems, including Gemini and AI Overviews. The job cuts, which occurred in multiple rounds last month, affected workers employed via GlobalLogic and its subcontractors — many of whom were responsible for tuning AI responses and improving chatbot quality.
The affected workers, known as “super raters,” played a crucial role in shaping AI-generated content by evaluating and correcting responses, improving tone, and aligning outputs with user expectations. Several held advanced degrees and worked across natural language, search summarisation, and conversational AI tools.
Project ramp-down or quiet restructuring?
According to reports, many contractors were notified abruptly, with access revoked without warning. Some were told the cuts were due to a “ramp-down” in AI projects, though no specific project terminations were disclosed.
The layoffs come amid Google’s broader push to scale AI investments and streamline its contractor model — a recurring theme in the tech industry as companies expand AI development while reassessing cost structures and team compositions.
A spokesperson for Google clarified that the affected individuals were not direct employees of Alphabet, and that employment conditions are managed by contracting firms. However, the move has reignited concerns about labour protections, especially for workers engaged in critical functions without job stability or visibility into strategic changes.
Tensions over unionisation and working conditions
Some workers alleged that the layoffs may have been linked to informal unionisation discussions and past complaints about low wages. Given their role in training large language models and refining AI interactions, the dismissals have raised red flags among labour rights groups, which argue that these roles are essential yet undervalued.
Contractors reported working across Google’s flagship AI tools, including its summarisation feature AI Overviews. Their role involved validating AI outputs for accuracy, tone, and usability — functions that directly impact product quality and user experience.
While Google has not officially linked the cuts to budget constraints or strategy shifts, the timing coincides with rising internal and external pressure on Big Tech companies to improve AI reliability while keeping costs in check.
As generative AI development continues to accelerate, the treatment of behind-the-scenes AI training teams — especially those operating under contract — may become a key issue for tech labour policy and transparency in model development.
