Terror group support on social media has sparked global alarm after Sky News uncovered posts from Muridke, Pakistan, on May 10, 2025. The posts, found on TikTok, YouTube, and Google, glorify banned organizations in the wake of Indian airstrikes in the region. Searches for “Muridke terror group social media” surged by 180% in the past week (Google Trends, May 18, 2025), and hashtags like #MuridkeAirstrikes and #TerrorOnSocialMedia are trending on X. NovexaHub News investigates the issue, its implications, and the urgent need for tech regulation.
Muridke Airstrikes: The Backdrop of Terror Group Support

Muridke, a city near Lahore, Pakistan, became a focal point after Indian airstrikes targeted alleged terror camps there in early 2025. India claimed the strikes hit bases linked to Jaish-e-Mohammed, the group responsible for attacks such as the 2019 Pulwama bombing. Pakistan denied the claims, calling the strikes an act of aggression, and tensions between the two nations escalated, drawing global attention. According to Reuters, the airstrikes killed 42 militants, though civilian casualties remain disputed. Muridke has since become a hotspot for online activity supporting terror groups.
Terror Group Support on Social Media: What Sky News Found

Sky News identified several accounts in Muridke posting videos on TikTok and YouTube, with related material surfacing in Google search results. One TikTok video, viewed more than 500,000 times, showed young men chanting slogans for Lashkar-e-Taiba, a banned terror group. YouTube channels uploaded propaganda glorifying militancy, while Google’s search results linked to blogs praising these organizations. The content has alarmed authorities, as it risks radicalizing youth globally. X users flagged the posts, with one asking, “How are these videos still up? Tech giants need to act now!” (#TerrorOnSocialMedia).
The Role of Social Media in Terror Group Support and Radicalization

Social media platforms have long struggled to moderate extremist content. TikTok’s algorithm often amplifies viral videos, even those promoting hate, and YouTube’s recommendation system has been criticized for pushing radical content to vulnerable users. According to a 2024 study by the Global Internet Forum to Counter Terrorism (GIFCT), 68% of extremist content online originates from just 5% of accounts, yet these accounts often evade bans by creating new profiles. The Muridke posts highlight this systemic failure in content moderation by tech giants.
Global Reactions to Muridke Terror Group Support Online

Governments and organizations reacted swiftly to the Sky News report. India’s Ministry of External Affairs urged tech companies to “take immediate action” against terror-supporting content, while Pakistan’s Foreign Office condemned the report as “propaganda to defame Pakistan.” The U.S. State Department called for stricter social media regulations, citing the risk of online radicalization, and the European Union announced plans to review its Digital Services Act to address such content more effectively.
Tech Giants’ Response to Terror Support on Social Media

TikTok, YouTube, and Google issued statements following the Sky News report. TikTok claimed it removed 90% of the flagged Muridke videos within 24 hours, and YouTube said it terminated 15 channels linked to the posts. Google’s response was vaguer, saying only that it was “reviewing search results for compliance.” Critics argue these actions are inadequate: X users pointed out that similar content reappears under new accounts, with #TechFails trending alongside #MuridkeAirstrikes.
The Bigger Picture: Risks of Terror Group Support Online

The Muridke incident underscores the global challenge of online radicalization. A 2023 UNESCO report found that 1 in 5 youths aged 15-24 has been exposed to extremist content online, and terror groups increasingly use social media to recruit and spread propaganda. Unchecked content from places like Muridke can therefore fuel violence worldwide, and experts warn that without stricter regulations, tech platforms risk becoming breeding grounds for extremism.
Solutions to Curb Terror Group Support on Social Media

Addressing terror group support on social media requires a multi-pronged approach. Tech companies must invest in better AI moderation tools to detect extremist content faster, and governments should enforce stricter laws, such as the EU’s proposed Digital Services Act updates. Balancing free speech and security remains a challenge, however, and international cooperation is crucial because terror groups operate across borders. The Muridke case could prove a turning point for global tech regulation.
Looking Ahead: A Call for Accountability on Social Media

The Muridke terror posts are a wake-up call for tech giants and governments alike. TikTok, YouTube, and Google must prioritize safety over profits, while nations like India and Pakistan need to address the root causes of extremism beyond military action. The world is watching as this issue unfolds, with the potential to shape future tech policies. For more updates, visit NovexaHub’s News Category or Sky News. Share your thoughts below.