
TikTok has revealed that it removed more than 580,000 videos in Kenya between July and September 2025 as part of its ongoing efforts to enforce Community Guidelines and improve user safety on the platform. According to the company’s latest transparency report released on Tuesday, 99.7% of the videos were detected and taken down through automated moderation systems before users had a chance to report them. This highlights the platform’s increasing reliance on artificial intelligence and machine-learning tools to identify harmful or non-compliant content.
Majority of Content Removed Within 24 Hours
The report shows that 94.6% of the violating videos were removed within 24 hours of being uploaded. TikTok said this rapid response reflects its proactive approach to content moderation and its commitment to maintaining a safe digital environment for users. In addition to video removals, the platform disclosed that about 90,000 live streams originating from Kenya were terminated during the same quarter for breaching content policies. These terminations accounted for roughly 1% of all live broadcasts in the country over that period.
Global Enforcement Figures
On a global scale, TikTok removed over 204 million videos between July and September 2025, representing approximately 0.7% of total uploads worldwide. Key global enforcement actions included:
- 91% of violating videos removed via automated technology
- More than 118 million fake accounts deleted
- Over 22 million accounts removed on suspicion of belonging to users under the age of 13
Human Moderation Still Key
While automation plays a central role, TikTok emphasized that human oversight remains critical. The company works with thousands of trust and safety professionals who:
- Review appeals
- Consult independent experts
- Respond to emerging or fast-moving risks
- Refine moderation policies
TikTok noted that combining advanced technology with human expertise allows for faster and more consistent enforcement of platform rules, particularly around harmful content such as misinformation and hate speech.
Teen Well-Being Initiatives
In November last year, the platform also launched a dedicated Time and Well-being Hub along with four new Well-being Missions. These tools are designed to promote healthier screen habits and encourage more mindful platform use, especially among teenagers.
