The report, conducted by the industry regulator Ofcom, reveals that this "harm" has occurred in a number of different ways, including bullying and harassment, as well as cyber attacks.
Reflecting on the outcome of the research, Ofcom boss Sharon White warned: "While the regulation of online content has evolved in recent years, there are significant disparities in whether and how it is regulated."
The publication of the research comes shortly after Google released a new AI tool to help identify child sex abuse images online.
The technology industry has recently come under fire for not doing more to tackle child sex abuse, and Google has responded to the criticism by releasing a free artificial intelligence tool designed to identify abuse images on the web.
In a company blog post, engineering lead Nikola Todorovic and product manager Abhi Chaudhuri wrote: "Quick identification of new images means that children who are being sexually abused today are much more likely to be identified and protected from further abuse.
"We're making this available for free to NGOs and industry partners via our Content Safety API, a toolkit to increase the capacity to review content in a way that requires fewer people to be exposed to it."