Image Analyzer wins Computing Magazine AI & Machine Learning Awards
-automated content moderation pioneer recognized for Best Emerging Technology in AI-
Visual content moderation software company Image Analyzer has announced that it has won the Best Emerging Technology in AI category of the Computing AI and Machine Learning Awards.
Judged by a 12-member panel of CIOs, IT professionals from the public and private sectors and academia, and Computing Magazine journalists, the awards identify the leading companies, projects, and professionals in the AI sector.
To maintain a positive online experience for all users, reduce their legal risk exposure, and protect their brand reputation and revenue, organizations are under increasing pressure to moderate the visual content that users upload to their digital platforms. Impending changes to UK and EU online safety laws will legally oblige platform operators to swiftly remove illegal or harmful content posted to their websites, or risk large fines. Companies that fail to comply with the new laws could ultimately have access to their services suspended in the UK or European countries in which their users reside.
Image Analyzer’s AI-powered visual risk moderation technology helps organizations to automatically remove more than 90% of manifestly illegal and harmful images, videos, and live-streamed footage, so that toxic content never reaches their websites or moderation queues.
Cris Pikes, CEO and founder of Image Analyzer commented, “We are delighted to have won the Computing Award for Best Emerging Technology in AI. Online organizations are tackling a huge number of images and videos uploaded by more and more users. Human moderators can no longer cope with the sheer volume and the impending legislation is only adding to the pressure. Our technology was specifically developed to help digital platform providers to make their online communities and working environments safer. Automated content moderation allows organizations to scale their efforts and demonstrate to the relevant authorities that they have put systems and processes in place to protect their users and employees from illegal and harmful content posted to their sites.”
Image Analyzer was selected as the winner of Computing’s AI and Machine Learning Awards from a shortlist of six companies. Explaining their selection, the judges described Image Analyzer Visual Intelligence System (IAVIS) as, “A great use of AI to resolve a problem that affects all sectors and all organizations.”
Content moderation has traditionally been undertaken by human moderators, who manually review questionable content uploaded to their platforms. Manual review of toxic content risks creating an unsafe working environment, where harrowing images and videos harm human moderators’ mental health and huge backlogs of material cause employee stress and burnout. IAVIS helps organizations to combat these workplace and online harms by automatically categorizing and filtering out high-risk-scoring images, videos, and live-streamed footage, leaving only the more nuanced content for human review. By applying advanced AI computer vision technology trained to identify specific visual threats, the solution gives each piece of content a risk probability score, speeds the review of posts, and reduces the moderation queue by 90% or more. The technology is designed to continually improve the accuracy of its core visual threat categories, with simple displays that allow moderators to easily interpret threat category labels and probability scores. It scales to moderate increasing volumes of visual content without impacting performance or user experience.
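To illustrate the triage model described above, the following minimal sketch is hypothetical: the function, threshold values, and category labels are assumptions for illustration only, not Image Analyzer’s actual API. It shows how a risk probability score can be used to block manifestly harmful content automatically, allow low-risk content, and route only the nuanced middle band to human moderators.

# Hypothetical sketch of score-based content triage (illustrative only).
# In practice the risk score would come from a computer-vision model such as IAVIS;
# the thresholds and category names here are assumptions for the example.
from dataclasses import dataclass

BLOCK_THRESHOLD = 0.90   # manifestly harmful: removed automatically
REVIEW_THRESHOLD = 0.40  # nuanced: sent to a human moderator

@dataclass
class ModerationDecision:
    content_id: str
    category: str      # e.g. "violence" or "nudity" (illustrative labels)
    risk_score: float  # probability-style score between 0.0 and 1.0
    action: str        # "block", "human_review", or "allow"

def triage(content_id: str, category: str, risk_score: float) -> ModerationDecision:
    """Route one piece of visual content based on its risk probability score."""
    if risk_score >= BLOCK_THRESHOLD:
        action = "block"          # never reaches the site or the moderation queue
    elif risk_score >= REVIEW_THRESHOLD:
        action = "human_review"   # only these items reach the moderators' queue
    else:
        action = "allow"
    return ModerationDecision(content_id, category, risk_score, action)

# Example: most high-scoring items are filtered out before any human sees them.
for cid, cat, score in [("img-001", "violence", 0.97),
                        ("img-002", "nudity", 0.55),
                        ("img-003", "weapons", 0.05)]:
    print(triage(cid, cat, score))

In such a scheme, only the middle band of scores ever reaches the moderation queue, which is how an automated first pass can cut the queue by 90% or more while leaving nuanced judgments to people.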
Image Analyzer holds US and European patents for its AI-powered content moderation technology, IAVIS, which identifies visual risks in milliseconds, with near zero false positives.
Organizations use IAVIS to protect online community members from being harmed by visual content that contravenes existing and impending laws. It minimizes corporate legal risk exposure, aids digital forensics investigations, and helps safeguard children and educational communities. In HR applications, IAVIS reduces vicarious liability exposure by blocking content that is not safe for work, identifying high-risk users, and providing visibility of misuse.
About Image Analyzer
Image Analyzer provides artificial intelligence-based content moderation technology for image, video and streaming media, including live-streamed footage uploaded by users. Its technology helps organizations minimize their corporate legal risk exposure caused by employees or users abusing their digital platform access to share harmful visual material. Image Analyzer’s technology has been designed to identify visual risks in milliseconds, including illegal content, and images and videos that are deemed harmful to users, especially children and vulnerable adults.
Founded in 2005, Image Analyzer holds various patents across multiple countries under the Patent Cooperation Treaty. Its worldwide customers typically include large technology and cybersecurity vendors, digital platform providers, digital forensic solution vendors, online community operators, and education technology providers that integrate its AI technology into their own solutions.
For further information please visit: https://www.image-analyzer.com
References:
Computing AI and Machine Learning Awards Winners 2021: AI & Machine Learning Awards 2021 (ceros.com)
Computing AI and Machine Learning Awards Judges: https://event.computing.co.uk/aimachinelearningexcellenceawards2021/en/page/judges
Computing AI and Machine Learning Awards 2021 Finalists: https://event.computing.co.uk/aimachinelearningexcellenceawards2021/en/page/2021-finalists
Gov.UK, ‘Draft Online Safety Bill’, 12th May 2021, https://www.gov.uk/government/publications/draft-online-safety-bill
European Commission, ‘The Digital Services Act package’, Shaping Europe’s digital future (europa.eu)
The Hill, ‘GOP lawmaker introduces bill targeting tech liability protections’ (Section 230), 18th March 2021, https://thehill.com/policy/technology/543862-gop-lawmaker-introduces-bill-targeting-tech-liability-protections
Tech.co, ‘Section 230 Explained’, https://tech.co/news/section-230-explained