Socially Responsible AI for Online Safety

Rewire is a startup creating socially responsible AI to keep online communities safe.
Our customisable AI solutions protect online spaces by finding and stopping toxic content.


Leveraging AI to Fight Online Hate

We have developed state-of-the-art AI solutions for detecting online hate, which are available via our easy-to-use API. We use a proprietary training process to make our AI accurate, robust and fair.

Our API can provide real-time assessments for millions of pieces of content per day. We will adapt our server infrastructure to best fit your deployment needs.

We will customise our AI to reflect your priorities and community values. We provide guidance on different types of toxic content, and we will work with you to maximise the value of the AI for your use case.

Our API can be called with just a few lines of code from any application. We provide comprehensive API documentation and example code, and we will support your integration.
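As an illustration, an integration might look like the sketch below. The endpoint URL, header names, and request fields are hypothetical placeholders, not the actual Rewire API; consult the API documentation for the real interface.

```python
# Hypothetical sketch of submitting one piece of content to a
# content-moderation API. The URL, auth scheme, and payload shape
# are assumptions for illustration only.
import json
import urllib.request

API_URL = "https://api.example.com/v1/classify"  # placeholder endpoint


def build_request(text: str, api_key: str) -> urllib.request.Request:
    """Build a POST request that submits one piece of content for scoring."""
    payload = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        },
        method="POST",
    )


# Sending the request (urllib.request.urlopen(req)) would return a JSON
# assessment in a real deployment; here we only construct the request.
req = build_request("example comment", "YOUR_API_KEY")
```

The same pattern works from any language with an HTTP client, which is what keeps the integration to a few lines of code.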


Experts in AI and Online Safety

Bertie Vidgen
(CEO / Co-Founder)

Online safety expert. Six years of experience in online harms. PhD from the University of Oxford. Research Fellow at The Alan Turing Institute.

Paul Röttger
(CTO / Co-Founder)

AI expert. Three years of experience in natural language processing. Completing a PhD on AI for hate speech detection at the University of Oxford.

Douwe Kiela

Industry scientist. Ten years of experience in machine learning. PhD from the University of Cambridge. Research scientist at Facebook AI Research.

Media Coverage of our Research

Our team regularly contributes to discussions about AI and online safety on leading news and media platforms. Visit our newsroom to stay up to date with the latest conversations.


Research Publications

Our work has been published at top academic conferences in NLP and computer science, and we have directly informed UK policy and regulation on online safety.

Vidgen et al. (2021): “Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection”. Published at ACL 2021.

Röttger et al. (2021): “HateCheck: Functional Tests for Hate Speech Detection Models”. Published at ACL 2021.


Let’s Work Together

Want to learn more about Rewire? Want to schedule a demo? Contact us directly or via the form below. We look forward to hearing from you!