Localizing big & agile software: Kaspersky case study


Leading the localization efforts of a large software company is challenging in itself. What happens when this company switches to Agile practices? Well, the challenges get even more challenging, and you have to implement drastic changes, fast. In this case study, Ekaterina Galitskaya and Darya Egorushkina from Kaspersky’s documentation and localization team dive into their journey of upping their processes’ capacities and efficiency with Smartcat.

Further text by Ekaterina & Darya

Our team is responsible for writing and localizing both UI texts and help center articles for the company’s mobile security apps. Below, we’ll tell you how we started localizing our Mobile Security apps in a more reliable, agile, and automated fashion. We’ll start with the pains that made change necessary in the first place, then walk you through the challenges we faced and the solutions we came up with. We hope this article will be useful to any mid-to-large software company facing the challenge of implementing Agile not only in development, but in all related processes as well.

Pains

Like many other companies, Kaspersky at some point switched to Agile development practices. This naturally led to much shorter release cycles. Where we previously rolled out new app versions every few months, we now shipped a release every two weeks. Granted, there were now fewer strings in each new release, but that didn’t help much: we still had to run those few strings through our whole localization and linguistic testing process, while facing much tighter deadlines.

There is also a common misconception that mobile apps involve just a small amount of text. We wish! In our case, for example, we had ~25,000 words per app on average in UI texts alone, multiplied by ~10 apps, and by ~20 target languages for each app. All that with new UI and documentation texts arriving each week.

As a result, localization essentially became the bottleneck in the whole release rollout process. And if previously product managers didn’t even know the localization team members by name — why would they, when all the translations appeared “magically all by themselves”? — now they were aware of all the issues involved at a much deeper level than they ever wished to.

At Kaspersky, the localization process generally consists of two stages: translation and linguistic testing.

The general problem at the translation stage was that there was too much manual work, due to both the process used and the CAT tool limitations. Specifically:

  • As multibranch pipelines were not supported, we had to manually create the deltas for translation and later push them back to the branches.

  • It was impossible to ensure consistency across apps and languages.

  • We could not do additional requested translations in parallel, e.g., if the source texts were changed in the process. Instead, we had to wait for the basic translation package to be ready, and only then proceed with the additional ones.

  • Build failures caused by errors in “non-translatables”, unescaped apostrophes, and other human errors were becoming increasingly common (a concrete illustration follows below).
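
To give a concrete idea of the kind of error we mean, below is a minimal, purely illustrative Python sketch of a pre-build check for unescaped apostrophes and broken placeholders in Android string resources. It is not a tool we actually ran (as you will see later, Smartcat’s placeholders and QA checks now catch this for us), and the file layout and placeholder pattern are assumptions.

```python
# Illustrative only: a pre-build sanity check for localized Android string resources.
# It flags the two human errors that most often failed our builds: unescaped
# apostrophes and placeholders that did not survive translation intact.
import re
import sys
import xml.etree.ElementTree as ET
from pathlib import Path

PLACEHOLDER = re.compile(r"%(?:\d+\$)?[sd]")  # assumed convention: %s, %d, %1$s, ...

def load_strings(path: Path) -> dict[str, str]:
    """Return {string_id: text} for a single strings.xml file."""
    root = ET.parse(path).getroot()
    return {e.get("name"): (e.text or "") for e in root.iter("string")}

def check(source_file: Path, translated_file: Path) -> list[str]:
    source, translated = load_strings(source_file), load_strings(translated_file)
    problems = []
    for string_id, text in translated.items():
        # An apostrophe must be escaped (\') or the whole value wrapped in quotes,
        # otherwise resource compilation fails and so does the build.
        if "'" in text and not text.startswith('"') and "\\'" not in text:
            problems.append(f"{translated_file}: {string_id}: unescaped apostrophe")
        # Placeholders must match the source string exactly.
        if string_id in source:
            if sorted(PLACEHOLDER.findall(source[string_id])) != sorted(PLACEHOLDER.findall(text)):
                problems.append(f"{translated_file}: {string_id}: placeholder mismatch")
    return problems

if __name__ == "__main__":
    res = Path("app/src/main/res")  # assumed project layout
    issues = []
    for localized in sorted(res.glob("values-*/strings.xml")):
        issues += check(res / "values" / "strings.xml", localized)
    print("\n".join(issues) if issues else "No localization issues found")
    sys.exit(1 if issues else 0)
```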

As for the linguistic testing stage, it could take up to two weeks, compared to the three to five days it took to actually translate. “What in the world is linguistic testing?”, we hear you asking.

The main purpose of linguistic testing is to check the whole translation in context. We do have a solid team of translators who know our terminology well. But when you translate text without seeing what surrounds it, or even knowing whether it is a button or a heading, things can go south quickly.

So linguistic testing involves manually checking all the resulting app screens, usually via screenshots. It helps identify issues such as:

  • Text being too long for the screen element. Sometimes this has legal implications, e.g., if the cut-off text includes disclaimers or financial info,

  • Text left untranslated, either by the translator making a mistake or because it was hardcoded instead of externalized as a string,

  • Text translated in the wrong context, e.g., when a button label such as “Download” is translated as an imperative instead of an infinitive.

Just the screenshotting part alone took an exorbitant amount of time. For example, if a new feature involved 40 UI screens and there were 20 target languages, that meant 800 screenshots, which could take up to 70 hours of manual, mechanical drudgery.

All in all, this was something you could live with when you had a new release every three months. But with biweekly releases, this started to take its toll on the localization team. It had to be fixed, and fixed fast.

We had two options:

1. To hire less experienced workers and reduce the amount of localization work — both naturally leading to a drop in quality, OR
2. To automate.

We opted for the latter.

Why Smartcat

When picking the CAT/TMS solution, our top priorities were:

  1. Fewer internal sign-offs — approving budgets, generating serial keys, and all that jazz,

  2. Ready-to-use basic features — so we could start using it right away without waiting for more features to be developed,

  3. Lightweight server requirements — again, to avoid lengthy approvals,

  4. Affordable, preferably free, entry to the service,

  5. Adequate support on the service side to not have to hire an in-house developer,

  6. Security requirements — we connect to it, and not the other way around,

  7. Multibranch support — to translate several features in parallel,

  8. Additional translations possible in parallel with the original batch.

When we compiled a shortlist of options, we ended up with just two names: Smartcat and Zing, a continuous localization server from the creators of Evernote.

We liked Zing for its customizability, free installation pack, and private access — we could host it within our own organization. On the downside, the installation process was far from easy, so onboarding all our translators and staff would have made the time cost of running the service too high.

So Smartcat it was. As we are not allowed to connect CAT tools directly to our internal VCS, we opted for a Smartcat–Serge bundle. (Serge is an open-source tool that syncs strings between version control and translation management systems. It identifies strings in files of various formats and converts them to the industry-standard PO format, which it then feeds to Smartcat. We can install it right on our own servers, so none of our classified info finds its way outside.)
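
Serge handles this extraction and conversion for us out of the box, so the sketch below is purely illustrative: a rough Python approximation of the kind of transformation it performs, turning an Android strings.xml from a checked-out branch into gettext PO entries, with string IDs and screenshot links exposed to translators as comments. It is not Serge’s actual code or configuration, and the paths and the screenshot URL are made-up examples.

```python
# Illustration only: the kind of "resource file -> PO" conversion that Serge automates.
# Not Serge's real code; the paths and the screenshot link are hypothetical.
import xml.etree.ElementTree as ET
from pathlib import Path

def strings_xml_to_po(xml_path: Path, po_path: Path) -> None:
    """Convert Android <string> resources into gettext PO entries."""
    root = ET.parse(xml_path).getroot()
    entries = []
    for element in root.iter("string"):
        if element.get("translatable") == "false":
            continue  # non-translatables never reach the translators
        string_id = element.get("name")
        text = (element.text or "").replace('"', '\\"')
        entries.append(
            # "#." is an extracted comment: the place for string IDs, developer
            # notes, and reference screenshot links (the URL below is made up).
            f"#. screenshot: https://screenshots.example.internal/{string_id}.png\n"
            f'msgctxt "{string_id}"\n'
            f'msgid "{text}"\n'
            f'msgstr ""\n'
        )
    po_path.parent.mkdir(parents=True, exist_ok=True)
    po_path.write_text("\n".join(entries), encoding="utf-8")

if __name__ == "__main__":
    strings_xml_to_po(
        Path("app/src/main/res/values/strings.xml"),  # assumed source location
        Path("po/app.pot"),                           # the file a TMS would ingest
    )
```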

Here’s what we liked most about the resulting solution:

  • It supports all our requirements: multibranch pipelines, additional translations, security, etc.

  • We get updates on the fly, without the need to download or install anything,

  • We can create our own parsing schemas for strings thanks to the Smartcat–Serge bundle,

  • We can talk to translators working on our documents without leaving the platform,

  • We can find freelancers right on the platform’s marketplace, if we ever have the need to increase production,

  • We can pay for all languages and projects with just one invoice,

  • We love the support we get — Smartcat’s team both helped us get our workflow up and running and prioritized some of the features that were critical for us,

  • The service is virtually free — we ultimately opted for a subscription because of the project-wide text search feature, but this move was optional.

Some of the challenges we faced were:

  • Initially, we could not search for text inside all documents of a project — not an issue anymore, as Smartcat has since implemented that feature,

  • Freelancers sometimes miss or ignore notifications that a project document was updated, so we have to manually send them reminders via the built-in chat,

  • The project manager has to manually initiate invitations to translators — but we hear this step will be automated soon.

Considering our experience with Smartcat so far, we are hopeful that their team is already working on addressing these issues.

Before & After

To put things into perspective, here’s a comparison of what we had and what we have now, both process- and number-wise.

Process

Before

Before the changes, we had to take close to 30 steps across the translation and linguistic testing stages:

Translation:

  1. Grab texts from different branches in the repo — manually,

  2. Create a delta for translation — manually,

  3. Build packages for translation,

  4. Upload them on an FTP server,

  5. Write a boatload of emails to agencies, freelancers, or local offices,

  6. Take the translation from the FTP server once ready,

  7. Load it into the CAT tool and make sure everything looks okay,

  8. Upload the translated strings to the repo, trying not to mix up the branches — manually,

  9. Run a build, fix errors, complete the build,

  10. Request additional translations — essentially repeating the same process again.

Linguistic testing:

  1. Start the build and wait for it to complete,

  2. Restart the build if it failed due to localization errors,

  3. Configure a special testing environment, if there is no debug menu,

  4. Take all relevant screenshots for 20+ languages,

  5. Find out, together with the QA team, how to get the still missing screenshots,

  6. Create and name screenshot packages,

  7. Upload them to the FTP server,

  8. Assign tasks to translation agencies to check the translations,

  9. Answer the agencies’ questions,

  10. Accept the tasks and make the changes,

  11. Do the build — which sometimes takes a long time,

  12. Redo the build if there were errors,

  13. Take screenshots for regression testing,

  14. Again, upload screenshots and assign tasks to the translation agencies,

  15. Again, discuss everything with the agencies,

  16. Again, another round of regression testing if there were translation changes.

After

Now we have just nine steps across all stages:

  1. The copywriter commits new strings in Git. Serge automatically feeds the strings to Smartcat,

  2. The localization project manager assigns translators,

  3. The translators translate in context — with screenshots and comments at their fingertips,

  4. The localization project manager reviews and confirms the translation, which then automatically makes its way back to Git,

  5. The localization team runs the feature screenshotting bot for localized texts,

  6. The localization team uploads the localized screenshots to the FTP server and sends them to the linguists,

  7. The linguists check and fix the translations if needed while looking at the localized screenshots,

  8. The changes automatically make their way to Git,

  9. The localization team closes the pull request.

That’s it — with this three-fold reduction in complexity we really feel the difference compared to what we used to have!

Numbers

All numbers are per one release — every two weeks — and per one app.

| Step | Hours before | Hours after |
| --- | --- | --- |
| Collect strings from all branches | 1 | - |
| Create a delta containing new or updated strings only & upload them to the CAT tool for 20+ languages | 4 | 0.25 |
| Create translation packages for 20+ languages | 0.5 | - |
| Upload translation packages to the FTP server for 20+ languages | 0.5 | - |
| Communicate with agencies/translators to confirm that they can take the job, for 20+ languages | 2–3 | - |
| Assign jobs to agencies/translators right on the platform | - | 0.25 |
| Answer translators’ questions | 2–4 | 0.5 |
| Review & confirm translations | 1 | 0.25 |
| Run a build | Up to 8 | 0.25 |
| Additional translations | 8 | 0.25 |
| Obtain screenshots | 16–32 | 8 (with the auto-screenshotting tool) |
| Upload screenshots to the FTP server | 8 | 1 |
| Communicate with agencies/translators & obtain fixed translations | 8 | 1 |
| Update the resource files | 8 | 2 |
| Write the changes to Git | 8 | 0.25 |
| Total time per release per app | 84 hours | 14 hours (SIX times less!) |

Bonuses

Additional benefits — some of which we did not anticipate — include:

  • More reliable builds: Thanks to placeholders, we no longer have to worry about non-translatable text getting translated or apostrophes not being escaped, and so on.

  • Smartcat identified some older bugs thanks to its critical error settings.

  • Not wasting other people’s time and resources: We do not need to take testing devices from the QA team or use up the dev team’s time taking screenshots.

  • Making screenshots available to translators, who can open and view them right from the editor, improved translation quality big time.

We could go on, and we’re sure with time we’ll find other ways to improve both the efficiency and quality of our localization processes. Most importantly, localization is no longer a bottleneck in the release cycle. We believe obtaining these results in such a short time frame was a feat both for our team and the Smartcat platform.


Appendix. Tips and ideas

Here are some concrete steps we took once we implemented Smartcat. We’re sharing them as a “cheat sheet” for other companies and teams that want to follow in our footsteps. Not all of them are easy to do, but most will help make the localization process smoother and less error-prone.

Integration:

  • Test the Git–Serge–Smartcat integration to make sure all strings make their way to Smartcat projects and back. You don’t want to run into surprises at the production stage.

  • Agree on branch naming with software engineers. This way you will be able to set up a bot that will look for the specific branches that need to be localized — saving both you and the devs hours of communication time.

  • Customize Serge’s default parsers if needed. For example, we made string IDs, comments, and links to reference screenshots visible to translators.

  • Create a cron job to find localization branches according to the name mask agreed above (see the sketch after this list).

  • Consider UI testing and feature screenshotting using the Kaspresso framework. For example, our devs put a link to a screenshot for each string they use. When the file makes its way to Smartcat, the screenshot link automatically goes in the Comments tab. You can read more about Kaspresso and why you might want to use it here.
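
To make the branch-finding step more concrete, here is a minimal Python sketch of what such a cron job could run (a shell one-liner would work just as well): it lists remote branches and keeps the ones matching the agreed name mask. The repository path and the “loc/” prefix are assumptions; substitute whatever convention you agree on with your engineers.

```python
# Illustrative cron-job body: find remote branches that follow the agreed
# localization name mask so the pipeline knows which ones to pick up.
import fnmatch
import subprocess

REPO = "/srv/repos/mobile-app"  # assumed local clone that the job keeps fetched
BRANCH_MASK = "origin/loc/*"    # assumed naming convention agreed with the devs

def localization_branches() -> list[str]:
    """Return the remote branches matching the localization name mask."""
    output = subprocess.run(
        ["git", "-C", REPO, "branch", "-r", "--format=%(refname:short)"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [b.strip() for b in output.splitlines() if fnmatch.fnmatch(b.strip(), BRANCH_MASK)]

if __name__ == "__main__":
    for branch in localization_branches():
        # In the real pipeline, this is where Serge would be pointed at the branch;
        # here we simply report what was found.
        print(f"Needs localization: {branch}")
```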

Localization & linguistic testing:

  • If you have glossaries in place, upload them to Smartcat to ensure consistency across your localizations.

  • Add your in-house linguists so they can explore the platform and learn the ropes before they get actual jobs from you.

  • Find and select freelancers, and onboard them on your company’s processes, making sure they know how to use screenshots, comments, glossaries, etc.

  • Where necessary, find translation agencies for additional localization or testing needs.

Hope these were helpful — let us know if you have some of your own!
