
This project was conducted in collaboration with the NYU Center for Cybersecurity and the intelligence research firm Graphika for a United States Department of Defense initiative under DARPA’s Semantic Forensics program.
At its core, the project examined one of the most urgent challenges in modern information warfare: how manipulated and AI-generated media are weaponized in geopolitical conflict.
I was brought on as an analyst to annotate in-the-wild examples of manipulated media used in disinformation campaigns. But the work extended far beyond labeling content. It required identifying patterns, interrogating narratives, and understanding how foreign actors deploy digital deception to destabilize trust.
Much of the material we analyzed centered on wartime propaganda, geopolitical conflict zones, and coordinated influence campaigns. The work was intellectually rigorous and emotionally demanding. Each annotation required independent research, contextual awareness, and evidentiary reasoning. Assumptions were never enough; every conclusion had to be defensible.
Working alongside NYU researchers and Graphika analysts, my contributions directly supported the development of detection tools designed to identify, attribute, and characterize manipulated media. The analytical layer we provided informed the technical systems being built to safeguard digital ecosystems.
What distinguished this experience for me was the convergence of journalism and artificial intelligence. My background in investigative reporting and fact-checking positioned me to interpret narrative manipulation with nuance. I approached each case as both a researcher and a storyteller, asking not only what was altered but why it was altered and who it was meant to influence.
Weekly review sessions with senior analysts were not passive check-ins. They were intellectual stress tests. I helped raise strategic questions, challenged interpretations when evidence required it, and refined analytical outputs based on constructive critique. The process sharpened both rigor and humility.
The work was sensitive. It required discretion, objectivity, and composure. It also reinforced something I believe deeply: that defending information integrity is not only a technical challenge but a human one.
What I achieved:
- Produced high-volume, high-accuracy analytical outputs that exceeded internal productivity benchmarks
- Earned additional campaign assignments through sustained analytical capacity
- Strengthened research integrity by challenging assumptions and providing evidence-backed counterpoints
- Contributed annotated datasets that informed AI model development for manipulated media detection
- Applied geopolitical expertise to interpret narrative intent within disinformation campaigns
- Demonstrated resilience and composure while analyzing conflict-driven propaganda content
- Supported a mission-critical national cybersecurity initiative focused on protecting public trust
This project marked a defining shift in my career. I was no longer solely reporting on disinformation from the outside; I was helping build systems to detect and defend against it. It solidified my position at the intersection of journalism, cybersecurity, and AI research.