The spread of a video across the internet that was apparently recorded by a shooter who killed 50 people at two mosques in Christchurch, New Zealand, has reignited debate over how tech companies moderate their platforms, and whether they have done enough to crack down on white supremacist content online.

Critics of the companies, led by U.K. politicians, say that Facebook and YouTube have not done enough to address white supremacist groups on their platforms, pointing to the sites' successful efforts to control Islamic extremist content as proof that the problem is well within the companies' power to solve.

Those calls have been countered by warnings from some in the tech industry who say that pushing tech companies to further regulate extremism will not fix the deeper problems of online radicalization.