Online Safety & Content Policy

This Online Safety & Content Policy sets out how BentVox keeps its users safe, how we tackle illegal content, and how we protect children. It is published in support of our compliance with the UK Online Safety Act 2023 and the Codes of Practice issued by Ofcom, the UK’s independent online safety regulator.

This Policy forms part of our Terms of Use and should be read alongside our Privacy Policy. By using BentVox, you agree to be bound by this Policy.

Last updated: May 2026.

If you need to report illegal or harmful content, please email safety@bentvox.com. If you believe you have encountered child sexual abuse material (CSAM), you can also report it directly and confidentially to the Internet Watch Foundation at www.iwf.org.uk/report. In an emergency, or if a child is in immediate danger, contact your local police (in the UK, dial 999).

About BentVox & this Policy

BentVox is a Safe-For-Work (SFW) user-to-user platform that allows Creators to share content with, communicate with, and receive support (including donations, subscriptions and Commissions) from their Supporters. BentVox includes private messaging functionality.

BentVox is not designed for, marketed to, or intended for use by children. All users must be at least 18 years of age. BentVox does not host pornographic content or any other content that is “primary priority content” harmful to children within the meaning of the UK Online Safety Act 2023.

Even though BentVox is not directed at children and operates an 18+ policy, we recognise that we have legal duties under the UK Online Safety Act 2023 in respect of illegal content and that we must take proportionate, effective steps to keep all our users safe. This Policy explains how we do that.

Our Commitment to Child Safety

Zero tolerance. BentVox has zero tolerance for child sexual abuse material (CSAM), child sexual exploitation and abuse (CSEA), grooming, and any content or behaviour that sexualises, endangers or exploits children. This applies whether the child is real, depicted, illustrated, computer-generated, AI-generated, “deepfaked”, or otherwise.

What we do. To keep children safe and to comply with our legal duties, BentVox:

  • Operates a strict 18+ minimum age requirement, applied to both Creator and Supporter accounts;
  • Is implementing age-assurance measures, in line with Ofcom’s guidance, to make it harder for under-18s to access the service;
  • Does not allow pornographic content or other “primary priority content” harmful to children on the platform under any circumstances;
  • Operates a content moderation function that reviews and assesses suspected illegal or terms-breaching content;
  • Is implementing proactive technology (including perceptual hash-matching against the Internet Watch Foundation’s and other recognised CSAM hash databases, URL detection, and keyword filtering) to detect and remove known CSAM and prevent its sharing;
  • Provides easy-to-use reporting tools so that users and members of the public can flag illegal or harmful content;
  • Reports suspected CSAM and CSEA content to the National Crime Agency (NCA), the Internet Watch Foundation (IWF), the National Center for Missing & Exploited Children (NCMEC) and other competent law enforcement and reporting bodies, in accordance with our legal obligations;
  • Cooperates fully with law enforcement, Ofcom and other competent authorities, including by preserving content and disclosing user information where lawfully required;
  • Permanently bans any user who creates, uploads, shares, requests, solicits, distributes, or facilitates the distribution of CSAM or CSEA content, or who engages in grooming behaviour, and reports them to the relevant authorities.

If you become aware of CSAM, CSEA or grooming, please report it to us immediately at safety@bentvox.com, and to the Internet Watch Foundation at www.iwf.org.uk/report. If a child is in immediate danger, contact the police on 999 (UK) or your local emergency number.

Illegal Content We Prohibit

BentVox prohibits all forms of illegal content. In particular, in line with the UK Online Safety Act 2023, we prohibit the following categories of “priority illegal content”:

  • Child sexual exploitation and abuse (CSEA) and CSAM — including grooming, sexual communications with a child, and the creation, sharing, possession, distribution or solicitation of any sexualised material involving (or appearing to involve) a person under 18, in any form.
  • Terrorism content — encouraging, supporting, glorifying or facilitating acts of terrorism, or disseminating terrorist publications.
  • Hate offences — content that stirs up hatred on the basis of race, religion or sexual orientation; we also prohibit content that stirs up hatred on the basis of disability, gender identity or other protected characteristics.
  • Harassment, stalking, threats and abuse — including malicious communications targeting individuals.
  • Controlling or coercive behaviour in the context of intimate or family relationships.
  • Intimate image abuse — sharing or threatening to share intimate images without the consent of the person depicted (often called “revenge porn”).
  • Cyberflashing — sending unsolicited images of genitals or other unsolicited sexual images.
  • False communications — sending a message that the sender knows to be false, intending to cause non-trivial psychological or physical harm to a likely audience.
  • Threatening communications — including threats to kill, threats of serious harm, and threats of rape or sexual assault.
  • Encouraging or assisting suicide or serious self-harm.
  • Drugs and psychoactive substances offences — including offering to supply controlled drugs.
  • Firearms, knives and other weapons offences — including the unlawful sale or offer of weapons.
  • Assisting unlawful immigration, human trafficking and modern slavery.
  • Sexual exploitation of adults — including paying for the sexual services of a person subjected to force, and controlling prostitution for gain.
  • Fraud and financial services offences — including fraud by false representation, fraud by failing to disclose information, and unauthorised financial services activity.
  • Proceeds of crime offences — including money laundering.
  • Foreign interference — including conduct designed to interfere with UK democratic processes or institutions on behalf of a foreign power.

In addition to this list of legally defined priority offences, BentVox also prohibits the categories of content set out in the “Acceptable Use” sections of our Terms of Use, including all adult or sexually explicit content, gory or excessively violent content, fraudulent or deceptive content, and content that promotes eating disorders or self-harm.

Monitoring, Detection & Proactive Technology

To detect, prevent and remove illegal content and breaches of our Terms of Use, BentVox uses, or is in the process of implementing, the following safety measures:

  • Content moderation function. A dedicated function reviews and assesses suspected illegal or terms-breaching content, and is empowered to remove content swiftly when it is identified.
  • Hash-matching for CSAM. We are deploying perceptual hash-matching technology that compares image hashes against databases of known CSAM maintained by organisations such as the Internet Watch Foundation (IWF), the National Center for Missing & Exploited Children (NCMEC) and the Canadian Centre for Child Protection (C3P). Matches are blocked, removed and reported.
  • URL detection. We use, or are deploying, URL detection to identify and block links to known sources of CSAM and other illegal content.
  • Keyword filtering and pattern detection. We use, or are deploying, automated tools to identify keywords, phrases and patterns associated with grooming, CSEA, terrorism and other priority illegal content, and to flag them for human review.
  • AI-assisted moderation. We use, or are deploying, machine-learning tools that assist human moderators in identifying content that may breach our policies, as one input into the moderation process.
  • User reporting. We provide easy-to-find reporting tools to allow users and members of the public to flag suspected illegal or harmful content.
  • Account-level enforcement. Accounts that repeatedly breach our policies, or that engage in serious illegal conduct, are suspended, restricted or permanently banned. Accounts associated with CSAM or CSEA are permanently banned on first offence and reported to law enforcement.

Application to private messages. Where lawful and proportionate, BentVox may apply proactive detection technology to content transmitted via the platform’s private messaging features, including images and links, for the purpose of detecting CSAM, CSEA and other illegal content. We do this in a manner designed to respect users’ privacy and to minimise the impact on legitimate communications. We do not read or monitor the substance of users’ private text messages other than where strictly necessary in connection with a specific report, investigation, safety-related automated detection signal, or legal obligation. Our handling of personal data is described in our Privacy Policy.

End-to-end encryption. BentVox private messaging is not, at present, end-to-end encrypted. If this changes in the future, we will update this Policy and our Privacy Policy accordingly.

Reporting Content to BentVox

If you encounter content on BentVox that you believe is illegal, breaches our Terms of Use, or otherwise causes you concern, please report it to us. You can:

  • Use the in-product reporting tools where available (look for the “Report” option associated with the content or user concerned);
  • Email us at safety@bentvox.com;
  • For suspected CSAM, also report directly to the Internet Watch Foundation at www.iwf.org.uk/report;
  • For suspected terrorism content, also report via the UK government’s Action Counters Terrorism service at www.gov.uk/report-terrorism;
  • If a person is in immediate danger, or a crime is in progress, contact the police on 999 (UK) or your local emergency number.

What to include in a report. To help us investigate, please include where possible: a link to (or screenshot of) the content concerned, the username or profile concerned, a brief description of why you are reporting it, and any other context you think is relevant. You do not need to include legal analysis; just tell us what you saw.

What happens next. We aim to acknowledge reports promptly and to act swiftly. Depending on what we find, we may: remove the content, suspend or ban the account, refer the matter to law enforcement, preserve relevant evidence, and/or take other appropriate steps. Where the law permits, we will let you know what action we have taken in response to your report.

Complaints

If you are unhappy with the way we have handled a report, with action we have taken against your account or your content, or with any other aspect of how we have applied this Policy or our Terms of Use, you can submit a complaint to safety@bentvox.com.

What we will do. We will acknowledge your complaint, review it, and respond to you with our decision and the reasons for it within a reasonable period. Where appropriate, we will reverse or modify our earlier action.

Categories of complaint we accept. We accept complaints in particular about:

  • Content on BentVox that you believe is illegal or harmful;
  • Failure to take down content that you have reported as illegal or breaching our terms;
  • Removal of your content where you believe it should not have been removed;
  • Suspension, restriction or termination of your account where you believe it was not justified;
  • Any other failure by BentVox to comply with its duties under the UK Online Safety Act 2023.

Ofcom. Ofcom is the independent regulator for online safety in the UK. Ofcom does not, as a general rule, handle individual user complaints about specific items of content, but it does oversee how regulated services such as BentVox comply with their duties. You can find out more at www.ofcom.org.uk/online-safety.

Cooperation with Law Enforcement & Regulators

BentVox cooperates with lawful requests from Ofcom, UK police forces, the National Crime Agency, His Majesty’s Revenue and Customs, and other competent UK and international authorities. This may include responding to information requests, preserving content and account data, disclosing user identity and content where lawfully required, and providing records of our risk assessments and safety measures.

Where we identify suspected CSAM, CSEA or other serious illegal content, we will, in accordance with our legal obligations, report this content to the appropriate authorities, which may include the National Crime Agency (NCA), the Internet Watch Foundation (IWF), and the National Center for Missing & Exploited Children (NCMEC).

Risk Assessment & Governance

BentVox carries out and maintains a written illegal content risk assessment in respect of its user-to-user services, in accordance with section 9 of the UK Online Safety Act 2023. This risk assessment covers each of the categories of priority illegal content identified by Ofcom, as well as other illegal content relevant to our service. We keep this risk assessment up to date and review it at least annually, and whenever there is a significant change to the design or operation of our service or to Ofcom’s risk profiles.

BentVox has designated a senior person accountable to its most senior governance body for compliance with its illegal content, reporting and complaints duties under the Online Safety Act 2023. You can contact this person at safety@bentvox.com.

Privacy

Our use of proactive detection technology, content moderation and reporting necessarily involves processing personal data. We do so in accordance with the UK General Data Protection Regulation, the Data Protection Act 2018, and applicable Swiss and EU data protection law. We only collect, use and retain personal data to the extent necessary for the purposes set out in this Policy and in our Privacy Policy. For full details of the personal data we process, the legal bases on which we rely, how long we keep it, and your rights, please read our Privacy Policy.

Changes to this Policy

We may update this Policy from time to time to reflect changes in our service, in the law (including the UK Online Safety Act 2023 and Ofcom’s Codes of Practice), and in best practice. When we make material changes, we will update the “Last updated” date at the top of this page and, where appropriate, notify users via the platform or by email.

Contact

For online safety matters, including reports and complaints: safety@bentvox.com.

For general enquiries: info@bentvox.com.

For suspected CSAM, please also report directly to the Internet Watch Foundation: www.iwf.org.uk/report.

In an emergency, or if a child is in immediate danger, contact the police on 999 (UK) or your local emergency number.