April 30, 2026 · Valentín Stancu
10 languages with cultural anchoring: the Daily Challenge problem
Why translating the same Daily Challenge into 10 languages doesn't work, and how Mentium generates 10 distinct daily challenges, culturally anchored to each locale.
Mentium supports 10 languages: Spanish, English, French, Romanian, Brazilian Portuguese, German, Italian, Turkish, Polish and Dutch. But there’s a design decision people don’t expect the first time they discover it: the Daily Challenge is not the same challenge translated 10 times. They’re 10 different challenges.
Here’s why and how.
Why translation isn’t enough
Imagine the following Daily Challenge generated in Spanish:
“Who won Spain’s Copa del Rey football trophy in 2024?”
Translated into Turkish, it’s grammatically correct. But a player in Istanbul shrugs: zero cultural context, zero relevance. The Daily Challenge is meant to be a shared daily puzzle, yet if it only resonates with Spanish-speaking audiences it isn’t global; it’s a Spanish challenge, translated.
The usual workaround for international trivia games: make Daily Challenges hyper-generic (“What’s the capital of France?”). Works everywhere but bores everywhere too.
Mentium tries something different.
The idea: anchor each locale culturally
Each locale has a calibrated LOCALE_FOCUS; Spanish and English each get two variants, which is how 10 languages map to 12 locales:
- es-ES: Spain + LatAm. Mix of Iberian + Spanish-speaking cultures.
- es-MX: stronger emphasis on Mexico.
- en-US: USA + modern Anglo-American pop culture.
- en-GB: UK + Commonwealth + British history.
- fr-FR: France + Francophonie + art/cuisine.
- ro-RO: Romania + Balkans + Central European history.
- pt-BR: Brazil — music, sport, Brazilian geography.
- de-DE: Germany + Austria + Switzerland.
- it-IT: Italy + Renaissance art + cuisine.
- tr-TR: Türkiye + Azerbaijan + Ottoman history.
- pl-PL: Poland + Central European history.
- nl-NL: Netherlands + Belgium (Flanders).
Every day, at 03:00 UTC, a 20-question set is generated for each locale. The AI takes the LOCALE_FOCUS into account and produces questions an average player of that language will recognize.
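As a rough sketch, the per-locale config can be thought of as a plain map. The interface and field names below are illustrative, not Mentium’s actual schema:

```typescript
// Illustrative shape of a LOCALE_FOCUS config (not the real schema).
interface LocaleFocus {
  regions: string[]; // cultural regions to draw questions from
  themes: string[];  // recurring themes weighted for this locale
}

const LOCALE_FOCUS: Record<string, LocaleFocus> = {
  "es-ES": { regions: ["Spain", "Latin America"], themes: ["Iberian culture"] },
  "tr-TR": { regions: ["Türkiye", "Azerbaijan"], themes: ["Ottoman history"] },
  "nl-NL": { regions: ["Netherlands", "Flanders"], themes: ["Dutch culture"] },
  // ...the remaining locales follow the same shape
};

// The daily job reads this map when composing each locale's prompt.
function focusFor(locale: string): LocaleFocus | undefined {
  return LOCALE_FOCUS[locale];
}
```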
The process, simplified
- Cron job at 03:00 UTC in Firebase Cloud Functions.
- For each active locale, a model prompt is composed with:
  - The LOCALE_FOCUS + 3-5 past questions that worked well for that locale.
  - Constraints: 20 questions, 4 options each, mixed difficulty, no contentious topics (partisan politics, religion, etc.).
  - Expected category distribution (e.g. 20% sport, 20% history, 15% geography, 15% pop culture, 10% science, 10% mythology, 10% misc).
- The AI generates the set. It passes through automatic validation:
  - Each question has exactly 1 marked correct answer.
  - Distractors are plausible (not trivial).
  - No duplicates with sets from the last 30 days.
- If it passes, it’s published to `/daily-challenge/{locale}/{date}`.
- When a player opens the Daily Challenge, their app fetches the set for their locale + date.
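The validation step can be sketched roughly like this. The types and the text-normalization rule for the 30-day duplicate check are simplifications I’m assuming here, not the exact production code:

```typescript
// Sketch of the automatic validation gate (simplified assumptions).
interface Question {
  text: string;
  options: string[];
  correct: number[]; // indices of options marked correct
}

// Normalize question text for duplicate detection across days.
function normalize(text: string): string {
  return text.toLowerCase().replace(/[^\p{L}\p{N}]+/gu, " ").trim();
}

// Returns a list of validation errors; an empty list means publishable.
function validateSet(questions: Question[], recentTexts: Set<string>): string[] {
  const errors: string[] = [];
  if (questions.length !== 20) errors.push("set must contain exactly 20 questions");
  for (const q of questions) {
    if (q.options.length !== 4) errors.push(`"${q.text}": needs 4 options`);
    if (q.correct.length !== 1) errors.push(`"${q.text}": needs exactly 1 correct answer`);
    if (recentTexts.has(normalize(q.text)))
      errors.push(`"${q.text}": duplicate of a question from the last 30 days`);
  }
  return errors;
}
```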
Real examples (same categories, different countries)
Category: Music (one specific day)
| Locale | Daily Challenge question |
|---|---|
| es-ES | “Which Spanish group won Eurovision in 1968?” |
| pt-BR | “Which bossa nova did Vinicius de Moraes make international?” |
| tr-TR | “Who composed the Atatürk March?” |
| nl-NL | “Which Dutch musician invented shoegaze before it had that name?” |
Four “music” questions on the same day. Same structure, four different experiences. Each player feels the challenge was made for them.
What stays global
Some axes are kept identical for everyone:
- Number of questions (20).
- Scoring system (≥300 = 50 coins, ≥200 = 30, etc.).
- Global daily leaderboard (all players compete in the same daily ranking, even though questions differ — score is what’s compared).
- Reset time (00:00 UTC).
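The coin payout fits in a tiny function. Only the two tiers stated above (≥300 → 50, ≥200 → 30) are encoded; the remaining tiers are elided in this post, so everything below 200 falls through to 0 in this sketch:

```typescript
// Sketch of the shared scoring tiers. Lower tiers are not stated in the
// post, so scores below 200 return 0 here (assumption for illustration).
function coinsForScore(score: number): number {
  if (score >= 300) return 50;
  if (score >= 200) return 30;
  return 0;
}
```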
The technical challenge
The hard part isn’t really generating the questions (Gemini does that well with a good prompt). The real challenges:
- **Fact validation.** “Which group won Eurovision in 1968?” has a verifiable answer. If the AI fabricates it, reputational damage. Solution: pre-validation with semantic search against Wikipedia + low-confidence flagging.
- **Believable distractors.** If the correct answer is “Massiel” and the distractors are “Albert Einstein” / “Garfield the cat” / “my aunt”, the question is a joke. Distractors must be plausible (other Spanish artists from the era, for example).
- **Cultural balance within a locale.** In es-ES, not everything can be about Spain: there’s LatAm. To avoid bias, the prompt includes a suggested sub-regional distribution.
- **Time-sensitive edge cases.** “What was the most-discussed event in Poland this month?” requires fresh data the AI doesn’t have. Solution: the LOCALE_FOCUS only covers timeless topics (history, culture, geography); current-affairs questions are discarded.
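The low-confidence flagging boils down to comparing an embedding of the generated answer against an embedding of the retrieved Wikipedia passage and escalating anything below a threshold. A toy sketch: the vectors here stand in for real embedding-model output, and the threshold value is an assumption, not the one used in production:

```typescript
// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Assumed threshold for illustration; a real one would be tuned empirically.
const CONFIDENCE_THRESHOLD = 0.8;

// Flag a question for human review when its answer embedding doesn't
// line up with the embedding of the source passage it was checked against.
function needsHumanReview(answerVec: number[], sourceVec: number[]): boolean {
  return cosine(answerVec, sourceVec) < CONFIDENCE_THRESHOLD;
}
```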
What didn’t work (I tried before)
- Translation + local-name substitution. “Who won the league in 2024” + substitute “league” with the local league. Felt forced and sometimes outright wrong.
- A single global question with a cultural explanation per locale. The AI added clarifying context in each language. It failed: the question remained culturally neutral, just translated, and the extra explanations made it feel heavy.
The current system has been in production for ~4 months and reported satisfaction in the beta is high. Players in Poland and the Netherlands (the two most recent languages) are especially enthusiastic because, for the first time, an international trivia game “really speaks their language”.
Mentium 1.0 “Curie” is now in early access on Google Play. If you’re interested in trying the Daily Challenge in your language (any of the 10 available), download it for free and let us know how it goes at hello@kingislandstudio.com.
— Valentín
devlog i18n culture daily-challenge