Tech & Software
Google Labs
With a safety-first, community-centered approach, the goal was to keep Google Labs' Discord open for fast AI experimentation while maintaining clear standards, consistent enforcement, and a healthy signal-to-noise ratio.
Challenge
Google Labs' Discord moves fast, and the content is often cutting-edge and messy by nature: lots of experimentation, lots of strong opinions, and plenty of AI-generated materials that can drift into spam, harassment, impersonation, doxxing, or other policy-breaking territory. The job was keeping the space open for curiosity and honest Labs-centric discussion without letting it slide into chaos, low-quality noise, or safety risk.
Results
As a contract moderator for the Google Labs community, I helped keep the Discord stable, readable, and safe at scale as it grew from tens of thousands to over 350,000 members. I enforced community guidelines consistently, handled user reports, and made quick, well-documented calls on edge cases. That included identifying and removing spam and scam attempts, managing disruptive behavior, and addressing high-risk issues like targeted harassment, privacy violations, and harmful synthetic media.
About Levellr
Levellr is a community intelligence and management platform built for brands and teams that run their communities on Discord. Founded in 2021, Levellr's mission is to help companies foster authentic, thriving spaces by turning fast-moving community conversations into actionable insights, from sentiment and trends to engagement and measurable ROI.
While most of my client reviews are NDA-protected (because, you know, top-secret agency white label stuff), I managed to sneak in a few favorites from my previous partners.