June 23, 2021

On World Day for Safety and Health at Work, Image Analyzer urges better protection of content moderators

Occupational safety and health should include management of online harms

Gloucester, April 27th, 2021: To mark World Day for Safety and Health at Work, visual threat moderation software company and Online Safety Tech Industry Association (OSTIA) member Image Analyzer is calling on digital platform operators to consider how they can better protect employees from online harms.

The International Labour Organization (ILO) observes World Day for Safety and Health at Work on April 28th.

The ILO will host a virtual seminar tomorrow, April 28th, at which occupational safety and health representatives from the ILO and the World Health Organization, the US Secretary of Labor, ministers for employment, workplace safety and health from Singapore, Turkey, Nigeria, Madagascar, Belgium and Portugal, and the International Trade Union Confederation (ITUC) will discuss the importance of anticipating and responding to crises that affect workers’ health and safety, in light of the pandemic.

Commenting on the looming mental health crisis facing digital platform workers, Cris Pikes, CEO of Image Analyzer said, “The ILO recognizes that the shift to working from home has introduced new psychosocial risks for employees. We also know that the increase in global digital platform use has increased the burden on content moderators who strive to remove the most disturbing images and videos uploaded by users.”

In March, it was reported that thirty European content moderators are suing Facebook and its recruitment agencies CPL Solutions, Accenture, Majorel and CCC for exposing them to toxic visual content that has harmed their mental health. The plaintiffs claim that they took on community moderation roles with inadequate training and without access to psychiatric support, and that their working conditions caused severe mental trauma that left some moderators feeling suicidal. Employee advocacy group Foxglove has compared social media content moderation to an unsafe factory floor, where workers are recklessly exposed to an injurious working environment.

The ILO Flagship Report, ‘World Employment and Social Outlook: the role of digital labour platforms in transforming the world of work’, published in February 2021, drew on research from 12,000 workers around the world and examined the working conditions of digital platform workers in the taxi, food delivery, microtask, and content moderation sectors. The report states: “Regulatory responses from many countries have started to address some of the issues related to working conditions on digital labour platforms. Countries have taken various approaches to extending labour protections to platform workers.”

The ILO found that there is a growing demand for data labelling and content moderation to enable organizations to meet their corporate social responsibility requirements. Page 121 of the report states, “Some of the companies offering IT-enabled services, such as Accenture, Genpact and Cognizant, have diversified and entered into the content moderation business, hiring university graduates to perform these tasks (Mendonca and Christopher 2018).”

“A number of ‘big tech’ companies, such as Facebook, Google and Microsoft, have also started outsourcing content review and moderation, data annotation, image tagging, object labelling and other tasks to BPO companies. Some new BPO companies, such as FS and CO, India, stated in the ILO interviews that content moderation not only provides a business opportunity but also allows them to perform a very important task for society as they ‘act as a firewall or gatekeeper or a watchdog for the internet’.”

Pikes continues, “In a single minute, 147,000 images are posted to Facebook, 500 hours of video are uploaded to YouTube and 347,222 stories are posted on Instagram. A small percentage of these images are horrific. Digital platforms rely on an army of moderators to view, assess and remove harmful content to keep our online spaces safe. As occupational safety and health leaders gather for World Day for Safety and Health at Work 2021, we call on all digital organizations to consider the psychological injuries suffered by human moderators and how they can be better protected.”


About Image Analyzer

Image Analyzer provides artificial intelligence-based content moderation technology for image, video and streaming media, including live-streamed footage uploaded by users. Its technology helps organizations minimize their corporate legal risk exposure caused by employees or users abusing their digital platform access to share harmful visual material. Image Analyzer’s technology has been designed to identify visual risks in milliseconds, including illegal content, and images and videos that are deemed harmful to users, especially children and vulnerable adults.


Founded in 2005, Image Analyzer holds various patents across multiple countries under the Patent Cooperation Treaty. Its worldwide customers typically include large technology and cybersecurity vendors, digital platform providers, digital forensic solution vendors, online community operators, and education technology providers, which integrate its AI technology into their own solutions.


For further information please visit: https://www.image-analyzer.com


References:

International Labour Organization, World Day for Safety and Health at Work virtual seminar, April 28th, 2021, https://www.ilo.org/global/topics/safety-and-health-at-work/events-training/WCMS_780935/lang--en/index.htm

International Labour Organization, Flagship Report, ‘World Employment and Social Outlook: the role of digital labour platforms in transforming the world of work’, February 23rd, 2021, https://www.ilo.org/wcmsp5/groups/public/---dgreports/---dcomm/---publ/documents/publication/wcms_771749.pdf

Statista, ‘New user-generated content uploaded by users per minute – as of August 2020’, https://www.statista.com/statistics/195140/new-user-generated-content-uploaded-by-users-per-minute/

Foxglove, ‘Open letter from content moderators re: the pandemic,’ November 18th, 2020, https://www.foxglove.org.uk/news/open-letter-from-content-moderators-re-pandemic

Daily Mail, ‘Dozens of moderators sue Facebook for severe mental trauma after being exposed to violent images at work,’ February 28th, 2021, https://www.dailymail.co.uk/news/article-9308337/Dozens-moderators-sue-Facebook-severe-mental-trauma-exposure-violent-images-work.html

Independent.ie, ‘Facebook moderators sue over trauma of vetting graphic images,’ December 6th, 2020, https://www.independent.ie/irish-news/news/facebook-moderators-sue-over-trauma-of-vetting-graphic-images-39830414.html

The Irish Times, ‘Facebook: US settlement does not apply to Irish cases,’ May 13th, 2020, https://www.irishtimes.com/business/technology/facebook-us-settlement-for-moderators-does-not-apply-to-irish-cases-1.4252611

Daily Telegraph, ‘Moderators sue Facebook for unrelenting exposure to disturbing content,’ December 5th, 2019, https://www.telegraph.co.uk/technology/2019/12/05/moderators-sue-facebook-unrelenting-exposure-disturbing-content/

The Guardian, ‘Ex-Facebook worker claims disturbing content led to PTSD,’ December 4th, 2019, https://www.theguardian.com/technology/2019/dec/04/ex-facebook-worker-claims-disturbing-content-led-to-ptsd