
The hardest test we ever made... was a simple Nintendo DS game

Sometimes there's more than meets the eye. Especially when dealing with a simple, little videogame.

Most collaborations start with a translation test, as it's the most efficient way to turn this chaotic, fragmented sector of ours into a simple answer: is this translator any good?

But what makes a great videogame translator? 'Creativity!' would shout Square-Enix, before telling you to write a piece of fan-fiction about their characters. 'Specialization!' replies Electronic Arts, making you pick between half a dozen genre-specific tests. 'A bit of everything' shrug most translation agencies, handing you a test that is part system terminology, part jargon and part funny pirate songs.

Needless to say, I've had the chance to take and review countless tests like these. So many, in fact, that it's how I met GLOC member #2, Matteo Scarabelli: I was reviewing his test for a Japanese agency.

When I had to create my own test, I looked for three skills in particular: self-management, attention to detail and localization.

Self-management

The test came with a set of 4 instructions:

You can find our test in attachment: 300 words to be translated "in one sitting". When you have some spare time:

  • Write us again
  • We will mail you the password
  • Give us an estimate of the expected delivery time
  • Start translating right away and deliver as soon as you're done.

A nice little process that told us a surprising amount of things.

Some candidates never read the instructions ('Hey! When will you send me the text?'), didn't understand them well ('Here is the translation. Sorry for the wait, I was away for one week') or simply refused to follow them ('I will not give you an estimate, I'll just deliver when it's done').

Needless to say, instructions in real projects are way more complex and confusing. If a candidate can't (or won't) follow 4 steps even during a test, it's a worrying sign.

This also gave us insight into how the translator would (and could) manage their time.

First of all, we would see their real availability, from 'send the password right away', to 'please mail me the password tomorrow at 9 AM', to 'maybe next week'.

And then, it showed us how effective they were at managing their time. As we mentioned, there was no set limit: they had to give us their own estimate.

This created an interesting set of profiles:

  • those who asked for full days (not so impressive unless the quality was stellar)
  • those who underestimated the time needed and then made mistakes in the rush (not impressive)
  • those who underestimated the time needed and then asked for a delay in order to deliver properly (not great, but excusable)
  • the very few who nailed a perfect translation within a fair deadline they picked themselves

Attention to detail

Based on my experience, the best text for taxing a videogame translator in the shortest possible space was... Pawly Pets: My Vet Practice, an edutainment game for the Nintendo DS aimed at teenage girls.

No, really

It sounds like a joke but let's review everything it tested.

First of all, professionalism: this is an industry of nerds and we all like to show the most exciting and glamorous face of our job.

As much as we like to say that we write the funniest puns for the Joker, the deadliest one-liners for Ryu Hayabusa and the most epic legends for Dragon Quest, more often than not we patiently grind through text that is just as complex but unlikely to ever turn a single head. Like Pawly Pets.

Then research ability. It's very unlikely that any candidate had prior knowledge of the topic. Could they find precise, technical terms on the spot? And how well could they estimate the time needed to do so?

Then writing ability, which had to balance three diverging needs: staying faithful to veterinary terminology (avoiding calques and mistranslations), keeping a friendly tone for the intended audience (young girls!), and staying within the strict +20% length limit.
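Incidentally, a length cap like that is easy to police mechanically before delivery. Here is a minimal sketch of such a check, assuming a tab-separated file with the source string in the first column and the translation in the second; the file name, column layout and threshold are illustrative, not part of the original test:

```python
# Minimal sketch: flag translated strings that break a +20% length limit.
# Assumes a tab-separated file: source string in column 1, translation in column 2.
import csv
import sys

MAX_RATIO = 1.20  # the target may be at most 20% longer than the source


def check_lengths(path):
    with open(path, newline="", encoding="utf-8") as handle:
        for row_number, row in enumerate(csv.reader(handle, delimiter="\t"), start=1):
            if len(row) < 2:
                continue  # skip empty or malformed rows
            source, target = row[0], row[1]
            limit = int(len(source) * MAX_RATIO)
            if source and len(target) > limit:
                print(f"Row {row_number}: {len(target)} chars, limit {limit}: {source[:40]!r}")


if __name__ == "__main__":
    check_lengths(sys.argv[1] if len(sys.argv) > 1 else "strings.tsv")
```

A script like this only catches raw character counts, of course; the actual test still had to be judged by eye.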

Localization

After a few years of using the same test, I learned where to look for the candidates I preferred. My favorite parts were these... because there is no proper way to translate them into Italian:

Word | Definition
pinna | A feather, wing, or fin; the auricle of the ear.
calculus | Tartar on the teeth.
tartar | A hard yellowish deposit on the teeth.
grub | The thick worm-like larva of some beetles, flies and other insects.
warble | A thick worm-like larva of some beetles, flies and other insects.

The curveball with 'pinna' is that the definition isn't really describing a concept, but an English word. No matter what translation you use, it will never carry those exact meanings. Did the translator notice? How did they reword it? Did it still fit the context?

Calculus/tartar and grub/warble play on a similar 'meta-linguistic' level, since they are pairs of synonyms that don't exist in Italian: stick to the correct meaning and there's only one possible term for both. Again, did the translator notice, even though the pairs are spread across the list? How did they react? By asking to remove one of the lines? Or by tweaking it so that it could stay? If so, how far did they depart from the source?

I really like this element because it shows whether the translator saw past the spreadsheet to the game, the player/reader and their interactions. At least for me, that's a core skill for a good games translator.

And extra credit to those who realized that the list itself is in alphabetical order and added a note saying that it should be reordered in the translation!
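For what it's worth, that reordering is trivial to automate once the glossary is translated. A minimal sketch, assuming the entries live in simple (English headword, Italian headword, definition) tuples; the Italian renderings below are my own placeholders, not the ones used in the test:

```python
# Minimal sketch: re-sort a translated glossary by the target-language headword.
# The Italian renderings are illustrative placeholders.
import locale

glossary = [
    ("pinna", "padiglione auricolare", "The auricle of the ear."),
    ("calculus", "tartaro", "Tartar on the teeth."),
    ("grub", "larva", "The thick worm-like larva of some beetles and flies."),
]

# Use Italian collation if available so accented headwords sort correctly;
# fall back to plain Unicode ordering otherwise.
try:
    locale.setlocale(locale.LC_COLLATE, "it_IT.UTF-8")
    sort_key = lambda entry: locale.strxfrm(entry[1])
except locale.Error:
    sort_key = lambda entry: entry[1]

for english, italian, definition in sorted(glossary, key=sort_key):
    print(f"{italian} ({english}): {definition}")
```

Spotting that the order matters at all, though, is the part no script can do for you.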

What about dialogues?

A fair question (raised by TBAC on Twitter) would be why we didn't focus on dialogue like most translation tests do.

It's down to project needs: narrative-heavy titles like JRPGs and visual novels aren't often translated from English into Italian, so you are much more likely to work on strings with interactive, "gameplay" elements instead of pure, linear dialogues.

And while we all absorb some sense of dialogue from movies, you need hands-on software experience to really grasp the localization issues above. Since that was the real deal-breaker for us, and since we wanted to keep the test as short as possible, it seemed fair to focus entirely on that side.

Why we stopped using it

We stopped using this test quite some time ago.

Not through any fault of its own. I think it packs a lot of interesting checks into a very efficient format, and it probably highlights some of the surprising complexities of this job (when I showed it during a conference at Gengo.com, it made their CEO slump in his chair).

So why did we stop? The first reason is that networking is more efficient for us: instead of vetting dozens of translators in search of a good one, we just ping a colleague who already has a strong reputation.

The second is that tests are... artificial. Everything strives to be perfect: the working conditions, the candidate and even yourself.
But real projects are less like a factory, with its string of perfect tasks carried out by robots, and more like a jazz improv, where notes get traded and adapted between players.

So, now that we have the margin and experience for it, we think the best way to test that harmony is to do a little project together and see how we jam.

If anyone is curious, here is the full test.