
Navigating Digital Governance: Meta’s APAC Policy Head Speaks at NCCU

Date: 2025-11-28 | Department: International Master's Program in International Communication Studies
【Article by College of Communication】

November 27, 2025 — The College of Communication hosted Meg Chang, Head of Content Regulation Policy for the Asia-Pacific region at Meta, for a guest talk on the rapidly evolving landscape of global content governance. Speaking to faculty and students, Chang offered a clear and nuanced view of how platforms navigate increasing regulatory pressures, complex cross-border responsibilities, and the growing threat of misinformation.

Chang noted that online freedom has sharply declined worldwide, citing trends documented in Freedom on the Net 2024. Regions that once saw expanding openness have shifted toward greater censorship, platform blocking, and more restrictive regulatory environments. Examples span countries across Asia—including China, India, Sri Lanka, Pakistan, Bangladesh, Indonesia, and Nepal—where governments have imposed new controls on access and expression. Australia’s recent mandate barring users under 16 from social media further illustrates the global tightening of digital oversight. According to Chang, these changes have contributed to a pronounced “chilling effect,” in which users increasingly self-censor for fear of penalties.

Turning to governance, Chang emphasized that content moderation is defined by its trade-offs. Digital platforms must reconcile differing legal systems, cultural norms, and expectations regarding freedom of expression and harm reduction. A single post may be subject to varying national regulations, forcing companies to make judgment calls about jurisdiction, severity, and proportionality. She highlighted the difficulties that arise in cases such as non-consensual intimate imagery or cross-border takedown requests, which reveal the tension between user safety, privacy, and regulatory authority.

Chang also addressed the limits of artificial intelligence in moderating harmful content, explaining that AI often struggles with context-dependent material. She referenced an incident involving a racist portrayal of former U.S. President Barack Obama that initially evaded automated detection because the system could not determine whether the image was a Halloween costume, satire, or a hateful attack. Only after user reports triggered human review was the content demoted. The example underscores why user participation remains essential, even as AI forms the backbone of proactive detection efforts. She added that actors who seek to evade detection regularly adapt by using coded language, emojis, or rapidly shifting formats, challenging both AI systems and human reviewers.

On misinformation, Chang stressed that public awareness and risk perception are the strongest defenses, surpassing any single regulatory or platform-based intervention. Misinformation campaigns succeed only when users believe and circulate false content, underscoring the centrality of digital literacy and critical evaluation in building societal resilience. She argued that empowering individuals is crucial in an environment where manipulative tactics continue to evolve.

Students engaged with these themes through discussions on AI-generated content, online scams, youth protection laws, digital literacy, and fake news. Their analyses echoed Chang’s assertion that effective governance requires collaboration among platforms, governments, and users themselves.

The talk was delivered as part of the Computer-Mediated Communication course, an English-Medium Instruction (EMI) class at NCCU that explores how digital technologies shape communication practices, policy debates, and public understanding of online risks.