During my time at Meta Platforms, I worked on projects that sat at the intersection of content integrity, platform safety, and algorithmic trust. My role focused on reviewing, labeling, and authenticating digital content and creator accounts to help improve how the platform identifies credible voices, reduces manipulation, and protects users from misleading or harmful material.
This work required a combination of editorial judgment, investigative analysis, and careful adherence to evolving platform guidelines. Across multiple high-priority sprints, I collaborated with cross-functional teams reviewing creator authenticity, identifying impersonation networks, and improving the signals algorithms use to detect spam, misleading narratives, and manipulated engagement. The work was both analytical and editorial: each review helped train the systems that determine what content is surfaced, trusted, or demoted for millions of users globally.
What We Did
Across a six-month period, our team participated in a series of operational and analytical sprints designed to strengthen platform integrity across both Facebook and Instagram. These initiatives focused on creator authentication, impersonation detection, and originality signals that help differentiate authentic reporting and creative work from deceptive or low-quality content.
Many of these projects were tied to high-impact internal programs, including Project Red and Project Green, which aimed to improve how the platform evaluates original content and identifies manipulation patterns. The work involved reviewing large datasets of creator accounts and digital content, labeling authenticity signals, and providing structured feedback that helped refine the models responsible for ranking and trust evaluation.
The objective was simple but critical: ensure that authentic creators, journalists, and organizations are recognized by the system while reducing the visibility of impersonators, bait content, and coordinated inauthentic behavior.
What I Did
Executed multiple platform integrity sprints
Over a six-month period, I contributed to a series of authentication and integrity review sprints designed to evaluate content credibility and creator authenticity across the platform. These included:
- Torso Non Plenty Authentication Sprints conducted over several months
- Project Red Instagram-specific review initiatives
- VIM V4 and USCA VIM review cycles
- Aggregator strategy internal reviews
- Cross-functional impersonation reviews with the CI and S team
Each sprint required high-volume evaluation of accounts and content signals, including impersonation patterns, misleading narratives, originality indicators, and engagement manipulation.
Authenticated creators and identified impersonation networks
A central component of my work involved verifying creator authenticity and identifying accounts impersonating public figures, brands, or media organizations. This process included evaluating signals such as content patterns, narrative consistency, audience manipulation tactics, and alignment with known authentic sources.
Through these reviews, we uncovered cases where accounts previously assumed to be authentic were in fact impersonators. These findings were documented and escalated to cross-functional partners responsible for enforcement and policy updates.
Produced platform insight reports
Beyond sprint participation, I contributed written analysis that helped inform internal strategy discussions.
One major report I authored was part of the Project Red and Project Green originality insights initiative. My contribution focused specifically on the Project Green section, where I examined how platform guidelines could better recognize original journalism and authoritative news organizations.
The report proposed clearer labeling signals and expanded definitions that would allow the system to distinguish between original reporting and repurposed or aggregated content. This work contributed to broader conversations about protecting high quality journalism within algorithmic distribution systems.
I also authored an impersonation insights report within the CI and S review process. In this analysis, I examined a dataset of accounts that partner teams had pre-classified as authentic. Through detailed review, I identified cases where those assumptions did not hold up under scrutiny, documenting evidence that several accounts were in fact impersonators.
Maintained high output and sprint execution
During the August authentication sprint, I independently completed 100 review jobs ahead of the deadline, demonstrating both speed and accuracy in a high-volume review environment.
Established strong documentation practices
My internal documentation and sprint notes were later used by my manager as a best-practice example for sprint note-taking and reporting. These notes were shared during internal training discussions involving the Project Green and Project Red teams, helping set a standard for clear analytical documentation across the project.
Why This Work Matters
Content integrity work often happens behind the scenes, but it plays a crucial role in shaping the information ecosystems billions of people rely on. The decisions made during review processes like these help refine the signals that determine whether content is trusted, amplified, limited, or removed.
By contributing editorial judgment, investigative thinking, and structured analysis to these projects, I helped support the broader mission of ensuring that authentic creators and credible reporting are recognized while deceptive behaviors are detected earlier and more accurately.
For me, this work was a natural extension of my background in journalism and media research. The same instincts used in reporting and verification translated directly into platform integrity work where accuracy, context, and skepticism remain essential tools.