By Matthew Jackowski

Best Practice for Naming String Identifiers

One of the first steps in the internationalization process is also one of the most critical: establishing a naming convention for string identifiers. Identifiers (commonly known as keys) act as placeholders for translated text, providing context for developers and translators. No matter how many languages your project supports, your string identifiers are one of the few components that remain consistent throughout the internationalization process.

Establishing a set of best practices at the start of the internationalization process makes it easier to add new strings and new languages as your project grows. It also serves to help identify localized strings in code and maintain consistency when strings are added, modified, or removed.

This post looks to establish a set of best practices for naming string identifiers. While the exact method will vary based on your localization framework, these practices are general enough to apply to any internationalization project.

1. Use Namespaces

Namespaces are used in software to group related objects, methods, and identifiers. In localization, namespaces perform a similar function while providing additional information about the localized string. For example, a website built using the Model-View-Controller pattern could use namespaces to specify where in the application a particular string appears: a login button on a home page could use the key “user.login_form.login,” where “user.login_form” defines the location of the string and “login” identifies the actual control. Namespaces can be as simple or as verbose as you like, as long as they identify where the string occurs in your project.
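In a JSON-style resource file, a namespaced scheme might look like the following sketch (the keys and the t() helper are illustrative, not any particular framework's API):

```javascript
// Namespaced keys: "user.login_form" locates the string in the app,
// and the final segment names the actual control.
const translations = {
  'user.login_form.login': 'Log in',
  'user.login_form.forgot_password': 'Forgot your password?',
  'user.profile.save': 'Save changes',
};

// Minimal lookup helper; real i18n libraries offer much more.
function t(key) {
  return translations[key] || key; // fall back to the key itself
}

console.log(t('user.login_form.login')); // "Log in"
```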

2. Be Descriptive

A descriptive identifier accurately reflects the contents of the underlying string, making it easier for developers to recognize the purpose of the string in code. Consider a user login button with two possible identifiers: “user_login_submit” or simply “submit.” While both represent the same idea, the first conveys more information about the purpose of the string without being significantly longer.

3. Be Unique

Each identifier in your project is a one-to-one mapping to a string. As a result, using the same identifier for multiple strings can lead to unexpected issues such as translations repeating or appearing in the wrong locations. One possible exception to this rule is when using the same translation in two different locations, although this can be better handled by using Translation Memory to autofill text that’s already been translated.

There are two approaches to generating identifiers: creating a human-readable ID based on the original string, or creating a computer-generated identifier using a hashing algorithm. Take our user login example from the previous section. The ID “user_login_submit” is effective because it reflects the contents of the string. However, another developer working on another login component could accidentally use the same identifier for a completely different element. Human-readable identifiers are easier to recognize in code, but maintaining uniqueness becomes more difficult as projects get larger.

Hashing algorithms, on the other hand, create computer-generated identifiers that are significantly less likely to collide with one another. Hashed identifiers are often generated by combining multiple attributes about the source string, such as the string itself and a description of its context in the program. This way, it’s only possible for two strings to share the same hash if their source strings and contexts are exactly the same.

4. Carefully Consider Using the Source String

Some localization frameworks recommend using the untranslated string as the string identifier. For instance, gettext will let you use “Hello world!” as both the source string and as the identifier. While this approach may seem simpler, not all localization frameworks fully support its use. For instance, .resx files don’t allow spaces when naming string identifiers.

Source strings also limit your ability to modify your translations. What if the original text changes? Not only do you have to update your other translations, but you also have to change each instance of the identifier throughout your project. Also consider that not all languages use the same word for multiple contexts. For instance, the English word “run” could refer to running a marathon just as much as it could refer to executing code. In Spanish, however, you would have to differentiate between “correr” and “ejecutar.” By using “run” as a common identifier, you’ve limited yourself to a single option, even for languages that may use different words depending on context. In this case, you would either need to change the source string to create two different translations or risk using the wrong translation.

It is possible to use source strings successfully in a localization project. For frameworks where it’s the default option, such as gettext, the original string often acts as a fallback if the translated version can’t be found. It’s somewhat more readable than creating a custom identifier, and it removes a layer between the original text and translated text. Check your internationalization framework for the recommended approach to string identifiers.

5. Stick to a Single Language

If it’s supported by your localization framework, your identifiers should be in the same language as your source language. For instance, using Cyrillic characters for your identifiers when your code is written in English is asking for trouble. Sticking to your default language keeps your identifiers readable while eliminating inconsistencies in your code. This practice also applies to characters that might be interpreted by your code as operators, such as single quotes, double quotes, or escape characters.

Building the Foundation for Localization

As one of the first steps in internationalization, creating a naming convention for your identifiers can have a bigger impact on the success of your localization project than expected. A successful naming convention can streamline the localization process, but a poor convention can just as easily hinder it. Taking the time to establish a solid convention today will help prevent hurdles further along in the internationalization process.

Have other questions about localization? Schedule a demo with one of our team members and see how Transifex can help streamline your personal workflow!

Using Grunt and Gulp with Transifex

TL;DR: Both Grunt and Gulp can be used to automate the download of translation files and quickly enable agile localization for your organization. In this post, we’ll cover a few detailed sample configurations and give code examples of how Grunt and Gulp can be used.

What are Grunt and Gulp?

Grunt and Gulp are free, open-source projects typically used to automate the building and deployment of JavaScript applications. They can also be used to help automate translation tasks for localization. These localization processes can be used for JavaScript projects, or even projects using a different technology stack altogether. The strength of these tools primarily comes from how easy it is to get them up and running.

Before diving into Grunt and Gulp, there are a few prerequisites we’ll need to set up: the Node.js runtime and the npm package manager. If you aren’t familiar with these tools, you can find the Getting Started guide for Grunt here and for Gulp here.

After Setting Up Grunt or Gulp

Once you have Grunt or Gulp installed and set up with a default configuration, test that you are able to run it without any tasks configured (see screenshot below). This is an important step: as you set up your localization process, there will be a number of configuration options, and by testing Grunt or Gulp first, you will know that any issues lie in the localization configuration and not in the initial project installation.


Next, you’ll need to identify your source translation files. These could be in a number of different formats depending on your project. Often they will be JSON files, but since Grunt and Gulp are merely handling the “grunt work” of building your project, this can easily be extended to other file types such as XML, PO, and HTML formats.

After you have established the source files that need to be translated, you’ll need to decide where to put the resulting translation files from the localization process. Often these files are located under a subdirectory called “translations.” The important part of this step is to decide on a naming convention that includes the locale designation. A locale code usually combines a two-letter language code with an optional region code: en indicates the English language generally, while en_US or en_GB indicates the regional use of that language. Your translation files will need to carry this locale designation either in the filename (i.e. index_de.html) or in the directory path (i.e. /translations/de/index.html).
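The two layout conventions above can be sketched as small path helpers (the function names are illustrative only):

```javascript
// Locale in the filename: index_de.html
function localeInFilename(basename, ext, locale) {
  return basename + '_' + locale + '.' + ext;
}

// Locale in the directory: translations/de/index.html
function localeInDirectory(dir, locale, filename) {
  return dir + '/' + locale + '/' + filename;
}

console.log(localeInFilename('index', 'html', 'de'));               // "index_de.html"
console.log(localeInDirectory('translations', 'de', 'index.html')); // "translations/de/index.html"
```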

Next, we need to decide how our translators are going to receive the source files and then send back the translated files. The easiest way to do this is to use a Translation Management System like Transifex to help manage the linguistic portion of the localization process. Transifex provides plugins to both Grunt and Gulp which make configuration much easier. You can also use other approaches such as file transfer using sFTP or scp. In this post, we will focus on using Transifex for our translation management.

For both Grunt and Gulp, Transifex has a number of community-created plugins. It’s important that you set up a project, upload the initial resource, and assign translation languages and teams in Transifex before continuing with the configuration of Grunt or Gulp. If you don’t have a fully set-up project in Transifex, be prepared to work through some minor “gotchas.”

Using Grunt-Transifex

When you first use the grunt-transifex package, you’ll be prompted to enter your credentials, which are saved to a “.transifexrc” file in your home directory. The rest of the configuration occurs inside the Gruntfile.js. Below is an example of this setup for a test project.

// Project configuration.
module.exports = function (grunt) {
  grunt.initConfig({
    transifex: {
      "grunt-test": {
        options: {
          targetDir: "./translations",
          resources: ["samplejson"],
          languages: ["fr"],
          filename: "sample-_lang_.json"
        }
      }
    }
  });
};


Our project directory structure looks like this:

├── Gruntfile.js
├── package.json
├── sample.json
└── translations
    └── sample-fr.json

Don’t forget to load the grunt-transifex task at the end of Gruntfile.js!

Now, our example task can be run like this:

$ grunt transifex
Running "transifex:grunt-test" (transifex) task
>> Successfully downloaded samplejson | fr strings into translations/sample-fr.json

Done, without errors.

And our Transifex project looks like this:


Using Gulp-Transifex

For the ‘gulp-transifex’ package, we will add our credentials to a “config.json” file stored outside of the project directory. The rest of the configuration occurs inside the Gulpfile.js. Below is an example of this setup for a test project.

var gulp = require('gulp');
var config = require('../config.json');
var env = config.env;
var options = {
    user: config.transifex[env].user,
    password: config.transifex[env].password,
    project: 'gulp-test',
    local_path: './translations/'
};

var transifex = require('gulp-transifex').createClient(options);

// Push the source file to Transifex
gulp.task('upstream', function () {
    return gulp.src('./sample.json')
        .pipe(transifex.pushResource());
});

// Pull translated files from Transifex
gulp.task('downstream', function () {
    return gulp.src('./sample.json')
        .pipe(transifex.pullResource());
});

Our project directory structure looks like this:

├── gulpfile.js
├── package.json
├── sample.json
└── translations
    └── fr
        └── sample.json

Now our example tasks can be run like this:

$ gulp upstream
[22:24:01] Using gulpfile ~/wip/node-v4.2.1/gulpjs/gulpfile.js
[22:24:01] Starting 'upstream'...
[22:24:01] updating: sample.json
[22:24:02] no changes done
[22:24:02] ✔ sample.json Uploaded successful
[22:24:02] Finished 'upstream' after 1.74 s

$ gulp downstream
[22:24:35] Using gulpfile ~/wip/node-v4.2.1/gulpjs/gulpfile.js
[22:24:35] Starting 'downstream'...
[22:24:36] Downloading file: sample.json
[22:24:36] ✔ fr sample.json Downloaded:
[22:24:36] File saved
[22:24:36] Finished 'downstream' after 1.46 s

And our Transifex project looks like this:


Now, you’ll be able to automate the download of translation files.

Thanks to Marcos Hernández and tonyskn for their contributions to these projects.

Localization Process for JavaScript Web Applications

Last week, Transifex software engineer Matt Jackowski (@mjjacko) spoke at Bitmatica’s Code and Cookies event in San Francisco, sharing his insights about JavaScript development. For readers who couldn’t make the event, Matt recaps the localization story he shared at Code and Cookies!

“Localization can often be a scary process, especially if you haven’t done it before. In this post, I’ll walk through a basic agile process that can be used for any JavaScript web application.”

An Agile process for Javascript development

When we are building JavaScript applications, we move from ideation to implementation very quickly. Therefore, we need an agile localization approach that can keep up.


We often skip this step and go directly to coding. But it’s important to pause a minute and consider the decisions we must make before building our app.

  1. Internationalization Support – We’ll want to leverage existing i18n functionality in our framework, or pick a library that supports multilingual functionality. Unfortunately, JavaScript’s i18n support is not as mature as that of most other languages. Depending on the structure of your application, it might be difficult to find a good supporting i18n library. ReactJS and AngularJS currently have the best support; however, this support comes from external libraries, so some integration will be needed.
  2. Linguistic Guides – During planning, it’s a great time to define style guides for your translated copy. A typical translation style guide contains the application’s standards and expectations that must be followed when writing copy.


1: Node.js i18n libraries allow you to process formatting on the server side. This is great from a performance perspective, but can be tricky with client-side views.

2: Build tools can be used in place of a more formal i18n approach. However, this approach primarily works well for static content.


There are two key parts to the build step:

  1. Tagging the code – For most frameworks, we can use our templating language to designate the text that needs translation. Similarly, dates and numbers should be passed through an internationalization formatter before being displayed in the view.

  2. Translation – As soon as we start tagging strings, we also want to start sending them to our translators. Waiting until the entire site is complete and THEN doing translation isn’t very efficient and certainly isn’t agile.
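As a sketch of the tagging step, the built-in Intl APIs handle date and number formatting, while a hypothetical t() helper stands in for your framework's translation call (names and locale here are illustrative):

```javascript
// Tagging text and formatting dates/numbers for a locale.
const translations = { 'cart.checkout': 'Proceed to checkout' };
const t = (key) => translations[key] || key; // stand-in for a real i18n call

const locale = 'de-DE';
const price = new Intl.NumberFormat(locale, {
  style: 'currency', currency: 'EUR',
}).format(1234.5);
const when = new Intl.DateTimeFormat(locale, {
  dateStyle: 'long',
}).format(new Date(Date.UTC(2024, 0, 15)));

console.log(t('cart.checkout')); // "Proceed to checkout"
console.log(price); // e.g. "1.234,50 €" (exact output depends on runtime ICU data)
console.log(when);
```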


Now we want to bring the code and the translations together into a working application.
Reliable automation is the key in this step.

We can accomplish much of this automation with Grunt, and if we integrate with a translation management tool (like Transifex), that setup allows us to run our application in other languages before translation is fully complete.



The last step in this process is our final quality check. Here we can run any time-consuming acceptance tests, keeping in mind that we’ll likely need to run these multiple times, once for each language. On the linguistic side, it’s recommended that a quality check be performed by professional translators who really understand our application.

Next Steps

Here are some additional resources to help you get started:

Matt shared some great tips for companies interested in taking their web apps to a global audience. For more information about localizing JavaScript web apps, don’t hesitate to visit our website or request a personalized demo with one of our team members today!

5 L10n Problems That Will Make You Go Crazy

Localization is never easy, but there are some situations that will turn a difficult process into a complete nightmare. After resurfacing some repressed memories, we found five surprisingly common issues that drive developers and localizers up the wall. In this post, we’ll show you how to approach and resolve these issues before they make you loopy.

1. The Software Localization Paradox

There’s a counterintuitive pattern in software localization: the people who are best suited to localizing also tend to be the least interested in using localized software. The problem stems from the widespread use of English in computing combined with a lack of good translations. If a user doesn’t know English, they have to memorize the English interface, work with a poorly translated interface, or not use the software at all. On the other hand, if the user does know English, he or she will likely prefer the English interface over the localized version.

The result is an endless cycle of dependency. The software needs users to translate from English to their native language, but native speakers who know English tend to prefer the English software. The lack of users willing to localize software in their native language results in fewer users using the localized software, which results in fewer users willing to localize, and so on.

What’s the Solution?

Some organizations have the luxury of hiring dedicated professional translators who have a fluent understanding of both the source language and the target language. If there’s a large enough demand for the software in a particular region, a company can easily recover the money spent on a localization team through sales.

On the other hand, some organizations source their translations directly from their users. With Transifex, this has become a much more viable alternative. Crowdsourcing translations is a way for companies to expand their market reach while simultaneously connecting with users. Some of the web’s most popular projects including Discourse, Reddit, Coursera, and Eventbrite rely on crowdsourced translations.

2. The Turkish Problem

There’s one country that always seems to come up when discussing localization: Turkey. Turkish localization is so notorious that it even has its own battery of tests known as the Turkey Test. The problem we’re looking at specifically is how text is converted between English and Turkish. Take a look at the following Java code:

String inputLower = "input";
String inputUpper = "INPUT";

String inputLowerToUpper = inputLower.toUpperCase();

if (inputUpper.equals(inputLowerToUpper)) {
    System.out.println(inputLowerToUpper + " is equal to " + inputUpper);
} else {
    System.out.println(inputLowerToUpper + " is not equal to " + inputUpper);
}
Seems simple enough, right? We convert a string from lower-case to upper-case and compare it against its upper-case equivalent. If they’re the same, then we say they’re equal. If not, then they’re not equal.

INPUT is equal to INPUT

At least, it is as long as you’re not Turkish. It turns out the Turkish alphabet has two extra representations for the letter i: the dotless lowercase ı and the dotted uppercase İ.

Turkey Localization Test
Credit: i18n Guy

When the program goes to convert inputLower to its upper-case Turkish equivalent, it uses the dotted İ instead of the dotless I, and the comparison fails:

Locale turkishLocale = new Locale("tr", "TR");
String inputLower = "input";
String inputLowerToUpper = inputLower.toUpperCase(turkishLocale);

İNPUT is not equal to INPUT

What’s the Solution?

We need a way to compare the two strings in a non-linguistic way. Using the Java program from above, we perform what’s known as an invariant culture comparison by converting the lower-case string to its upper-case equivalent using the en_US locale. That way, we’re comparing two strings formatted in the same locale:

String inputLowerToUpper = inputLower.toUpperCase(Locale.US);

You can perform a similar test using an ordinal comparison, which compares each character in the string based on its underlying bytes. Ordinal string comparisons are generally preferable to invariant culture comparisons, although both will work in most situations.
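The same behavior is easy to reproduce in JavaScript with toLocaleUpperCase, assuming your runtime ships full ICU locale data (the default in modern Node.js):

```javascript
// Locale-sensitive uppercasing: the Turkish locale maps "i" to dotted "İ".
const word = 'input';

const defaultUpper = word.toUpperCase();           // "INPUT"
const turkishUpper = word.toLocaleUpperCase('tr'); // "İNPUT"

console.log(defaultUpper === 'INPUT');  // true
console.log(turkishUpper === 'INPUT');  // false: İ is a different character
```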

3. Time is of the Essence

If you’ve ever tried to implement time zones in software, you know how difficult the process can be. The world has roughly 40 time zones, far more than the 24 you might expect. Some time zones are offset by a fraction of an hour rather than a full hour. Some areas of the world, such as the Gaza Strip and West Bank, even have multiple time zones in the same geographic region.

Not only that, but time zones change: In 2007, Venezuela set its clocks back by half an hour, while in 2011 Samoa lost an entire day by jumping to the other side of the International Date Line. North Korea created its own new time zone on August 15th, over 100 years after both North and South Korea set their clocks nine hours ahead of GMT. And if you think that’s hard to keep track of, just wait until you have to account for Daylight Saving Time!

Tom Scott provides a much more vivid explanation of why timezones are such a problem for developers. In short, time zones are complicated, dynamic, and hard to track without dedicated resources.

What’s the Solution?

The solution is to use an existing library that already tackles the problem of time zones. Google recently released CCTZ, an open-source library for C++. CCTZ represents time in two ways: absolute time, which represents a specific and universal point in time similar to a timestamp; and civil time, which represents absolute time in a specific region. Civil time accounts for time zones, but absolute time is the same for everyone. You can convert from civil time to absolute time and vice versa, as long as you know the time zone that the civil time is based on.

Similar libraries also exist for other languages. Java contains the built-in java.util.TimeZone and java.util.Calendar classes, Python has pytz, and PHP has the built-in DateTime class. If you need to compare dates from two different time zones, a good method is to base the comparison on a single standard such as UTC. Then, use the library to format and display the final result.
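In JavaScript, the same idea looks like this: store and compare on the UTC timeline, and bring in a time zone only when formatting for display (the locale and zone choices here are illustrative):

```javascript
// Store instants in UTC; only convert to a time zone at display time.
const instant = new Date('2024-01-15T14:00:00Z'); // a UTC timestamp

// The same instant rendered for two zones (display only; exact output
// depends on the runtime's time zone data):
const inNewYork = new Intl.DateTimeFormat('en-US', {
  timeZone: 'America/New_York', dateStyle: 'medium', timeStyle: 'short',
}).format(instant);
const inTokyo = new Intl.DateTimeFormat('en-US', {
  timeZone: 'Asia/Tokyo', dateStyle: 'medium', timeStyle: 'short',
}).format(instant);

// Comparison happens on the underlying UTC milliseconds, not the strings:
const later = new Date('2024-01-15T15:30:00Z');
console.log(later.getTime() > instant.getTime()); // true
```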

Before implementing any kind of time tracking, refer to your language’s documentation for details on handling time zones. Most importantly, don’t try to implement time zone tracking yourself, because, as Tom Scott put it, “that way lies madness.”

4. User Input or: How I Learned to Stop Worrying and Love Unicode

Much of the localization process focuses on output. However, if your application supports direct user input, you have to look at the other side of the coin. How well does your application handle input from users in Russia, China, Egypt, or India? Can it read characters, dates, and numbers from different locales, or from right-to-left languages? If it needs to send that data to an external service, such as a database, how well does that service handle the locale?

What’s the Solution?

Unicode is the current standard for language support in software. The most common character encoding is UTF-8, which is used by over 85% of websites as of this post. If your application supports UTF-8, then it already has the ability to interpret characters from almost any written language. For programming languages that don’t natively support UTF-8, look into functions or modules that allow you to change the default character set.

Other forms of input will need to be handled on a case-by-case basis. Numbers (including phone numbers), addresses, currencies, measurements, dates, and times all need to be adapted to the user’s locale. For many of these (dates, times, and numbers), you can store user input in a standardized format. Otherwise, you may need to instruct your users on how to format their input, or provide an alternative UI for unique cases.
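A sketch of this in JavaScript, with an illustrative parser for one specific locale format (a real application would use a locale-aware parsing library):

```javascript
// Store locale-sensitive input in a neutral, standardized form.

// Dates: keep ISO 8601 / UTC internally, format per locale on output.
const stored = new Date(Date.UTC(2024, 4, 9)).toISOString().slice(0, 10);
console.log(stored); // "2024-05-09"

// Numbers: normalize a German-formatted amount ("1.234,56") to a plain float.
// (Hypothetical helper for one format only; not a general-purpose parser.)
function parseGermanNumber(text) {
  return parseFloat(text.replace(/\./g, '').replace(',', '.'));
}
console.log(parseGermanNumber('1.234,56')); // 1234.56
```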

5. Why “Your” And “Your” Aren’t Always the Same

Imagine you’re developing an online storefront. You’ve just finished internationalizing and you’re ready to start adding languages. As part of your interface, you have a line of text that shows the number of items in the user’s shopping cart. For example, if a user has 5 items in the cart, the store shows “You have 5 items in your shopping cart.” You might have broken the rules a bit by using string concatenation, but you only did it to make your store more user-friendly. No harm no foul, right?

All of a sudden you get a call from your Spanish localizers. They want to know if they should use “you” in the formal usted or in the informal tú, and in either case, whether “you” refers to one user or multiple users. You tell them it’s for a single user, and since you don’t want to tread on any toes, you tell them to use the formal usted. Shortly after, you get a call from your Arabic localizers, who need to know if “you” refers to a man, a woman, two men or two women, or multiple men or multiple women.

Say you manage to account for all of these cases in all of your supported languages. You suddenly get a call from your Russian localizers, who are having trouble working around the limited support for plural numbers. They need the application to support three plural forms: one for numbers ending in 1, one for numbers ending in 2 through 4, and one for numbers ending in 5 through 9 (plus 0). Shortly after, you get another call from your Arabic localizers, and all heck breaks loose.

What’s the Solution?

The source of most grammatical conflicts is simple string concatenation. The same sentence that seemed so simple in English has dozens of conjugations, gender agreements, and plural forms in other languages. When the contents of a string are dependent on the contents of a variable, there’s no way of knowing how to format the string without creating dozens of use cases for each language. The key is to remove this dependency by removing the variable from the string entirely.

Using our shopping cart example, you can reserve the conversational translation for English while using a simplified version for other languages. For instance, an English user would see “You have 5 items in your shopping cart,” whereas a Spanish user would see the Spanish translation of “Number of items in the shopping cart: 5.” It’s not as user-friendly as the English translation, but it’s much easier on your localization team.

As an alternative, many localization frameworks provide support for complex grammatical rules. The Unicode Common Locale Data Repository (CLDR) supplies locale-specific formatting, parsing, and name translation data, along with countless other resources.
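JavaScript exposes CLDR plural categories through Intl.PluralRules, which returns exactly the Russian forms the localizers asked for (assuming full ICU data in your runtime):

```javascript
// CLDR plural categories: Russian distinguishes "one", "few", and "many".
const ru = new Intl.PluralRules('ru');

console.log(ru.select(1)); // "one"  (numbers ending in 1, except 11)
console.log(ru.select(3)); // "few"  (ending in 2-4, except 12-14)
console.log(ru.select(5)); // "many" (ending in 5-9 and 0, plus 11-14)

// English, by contrast, needs only two forms:
const en = new Intl.PluralRules('en');
console.log(en.select(1)); // "one"
console.log(en.select(5)); // "other"
```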

Have you come across any localization issues that have made your head spin? Let us know in the comments!

Integrating Transifex with Bamboo (Part 2)

Earlier this week, we shared a post about integrating Transifex with Bitbucket. In addition to Bitbucket, Atlassian offers a continuous integration server called Bamboo. Bamboo lets you automatically set up, build, and deploy projects. Integrating Transifex with Bamboo lets you update your project’s localization files in an automatic and seamless way.

The Bamboo workflow is split into five components:

  • Tasks are the individual work units in a Bamboo project. A task is any discrete action such as checking out a project from source, executing a script, or, in our case, calling an external program such as the Transifex Client.
  • Jobs are used to control multiple tasks. They specify task execution order, enforce requirements, and collect artifacts created by tasks.
  • Stages represent steps within a build process. As an example, you might use three separate stages for compiling, testing, and deploying code.
  • Plans organize work units, builds, and tests. They also allow you to link to code repositories, generate notifications, configure permissions, and specify variables.
  • Projects organize multiple plans under a single logical unit.

For more information on Bamboo’s structure, click here to go to the Atlassian documentation site.

Installing Bamboo

Bamboo can be installed on a server or hosted on a cloud instance. This article assumes Bamboo is being installed on a server.

Navigate to Bamboo’s download page and download the executable for your server operating system. From there, navigate to Atlassian’s documentation site to find more information on installing Bamboo for your particular OS. When the installation is finished, access Bamboo by opening a web browser and navigating to http://<server IP address>:8085. You may need to open the port in your server’s firewall.

Bamboo requires a valid license before it can start. If you haven’t already, generate a trial license by logging into your Atlassian account and requesting an evaluation license. After Bamboo verifies the license, it will ask you to create a new administrator account for the Bamboo agent. Once the account setup is complete, you’ll be greeted by the Bamboo home screen.

Creating a New Project

To create a new project, click the Create button at the top of the screen, then click “Create a new plan.” This brings you to the plan configuration screen where you can enter details about the new project. We’ll create a new project for our Node.js app:

Make New Project in Bamboo


Click “Configure plan” to create the new project. Along with the new project and plan, Bamboo creates a default stage and a default job. Since we specified a Bitbucket repository, Bamboo automatically creates a task to retrieve the source code from the repository. Next, we’ll add tasks that synchronize the source code with Transifex using the Transifex Client.

Adding a Task Command to Bamboo

To run the Transifex client during the build process, we need to add a new task to our default job. Bamboo supports a variety of tasks for generating logs, deploying to cloud services, or even executing Maven or Ant tasks. In this case, we’ll use the Command task to run a command that calls the Transifex Client.

Before we do this, we need to register the Transifex Client as an executable. Navigate to the Bamboo administration page by clicking the gear icon in the top-right corner of the screen, then click “Overview”. Under the “Build Resources” section on the left-hand navigation menu, click “Server capabilities.” This will show you the actions available to the Bamboo server, including the executables available to your build plan.

Scroll down until you see the “Add capability” section. Under “Capability type,” select “Executable,” then select “Command” for the type. Enter an identifier for the command, followed by the path to the executable that you want to run (in Ubuntu Linux, the Transifex Client executable is found at /usr/local/bin/tx). Click the “Add” button to register the new executable with Bamboo:

Adding Transifex Client to Bamboo

Navigate back to your project by clicking “Build” at the top of the screen, then “All build plans.” Edit the project by clicking the pencil icon on the right-hand side of the screen, across from the project name. Under “Plan Configuration”, click on the default job. Switch to the “Tasks” tab, then click the “Add task” button. Bamboo prompts you for the type of task to add:

Add a New Task

Click the “Command” task and select the “Transifex Client” executable we defined earlier. In this task, we’ll push the latest source files to Transifex. Under arguments, type “push -s.” Add a short description, then click Save. Repeat this process to create a second command that pulls the latest translations from Transifex using the argument “pull -a.”

Note that you may need to specify the “Working Sub Directory” field before the command will successfully execute. The working sub directory tells Bamboo to run the command in a folder relative to the project’s root folder. If your Transifex configuration is stored somewhere other than in the project’s root directory, you’ll need to specify the directory here. The best way to determine this is to run the Transifex Client in your project, note which subfolder you ran the command in, then enter that subfolder as the working sub directory.

How to Create a New Command

Next, we’ll run the plan and generate a new build.

Run the Plan

To generate a new build, click the “Run” button in the top-right corner of the screen. Your build will begin in the background, and the results will be displayed in the Build Dashboard.

Build Dashboard on Bamboo

Click into the build to see the results. If it was successful, you should be able to see output from the Transifex Client in the Logs tab.

Client Output in Tx

You can use this same process to integrate Transifex into an existing Bamboo project. Once the Transifex Client is registered as an executable, add two tasks to your project that call the tx push and tx pull commands. Make sure you do this early enough in your build process so that you can reliably test and package the localization files with the rest of your project.

For more information on integrating Transifex into your projects, visit the Integrate Transifex documentation page.

Integrating Transifex and Bitbucket (Part 1)

Bitbucket is a service that lets you host and share code with other users. Bitbucket supports version control using Git and Mercurial. In this post, we’ll show you how to synchronize changes between your projects hosted in Bitbucket and your localization projects in Transifex. We’ll also post a blog on how to integrate Transifex with Bamboo, a popular continuous integration service, so don’t forget to check out our blog later this week!

What is Version Control?

If you’re unfamiliar with version control, we recommend reading our previous post on version controlled translations with Git. Version control systems (VCS) track changes to files across multiple users. Most version control systems are based around three core concepts:

  • Repositories, or directories that hold code files.
  • Commits, or changes to source code that are applied to a repository.
  • Branches, or deviations in the code base.

Version control helps developers coordinate code changes while reducing the chances of conflicts or data loss. While there are multiple version control systems available, the examples in this article use Git simply due to its popularity.
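The three concepts map directly onto Git commands. Here’s a minimal, local-only sketch (the repository name and identity are made up for illustration):

```shell
# repository: create one in a new directory
git init i18n-demo && cd i18n-demo
git config user.email "dev@example.com"   # local identity so commits succeed
git config user.name "Dev"

# commit: record a change to the code base
echo '{}' > en.json
git add en.json
git commit -m "Add source locale"

# branch: start a deviation from the main line of development
git branch feature/es-locale
git branch --list
```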

Getting Started with Bitbucket

To start, we’ll create a new repository in Bitbucket via the web UI. After logging into your account (new accounts are free), click on the “Create” dropdown button, then click “Create Repository.”

Enter the details of your repository. You can modify the visibility settings of your project and allow other users to “fork” your project, which lets them create and work on a copy of your repository. You can also add bug tracking, add a Wiki, or select a base language for the codebase. When you’re ready, click “Create repository.”

How To Make New Repository in BitBucket

By default, your new repository will be empty. You can use a version control system to push project files to Bitbucket, or you can import an existing repository from another code hosting service. In this example, we’ll use an existing Git repository for a NodeJS project stored locally on a Linux desktop. We’ll change the working directory to the project folder, use git remote add to add the remote repository, then push our project to the remote repository.

$ cd /home/transifex/projects/i18n
$ git remote add origin https://bitbucket.org/&lt;username&gt;/&lt;repository&gt;.git
$ git push -u origin --all

Once the command completes, you should be able to browse your Git repository through the Bitbucket website.

Syncing Changes with Bitbucket

As you make changes to your local code files, you’ll need to update the Bitbucket repository. Imagine we have a file named “i18n.js,” which contains a list of all the locales used in the project. We decide to change a locale, so we update i18n.js. With Git, you can view changes between the repository’s current state and the last commit using the command git status:

$ git status
On branch master
Your branch is up-to-date with 'origin/master'.

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

	modified:   i18n.js

no changes added to commit (use "git add" and/or "git commit -a")

Git uses a staging area to temporarily store files before committing them, allowing you to customize a commit by including or excluding certain changes. We’ll add i18n.js to the staging area, then create a new commit:

$ git add i18n.js
$ git commit -m "Added new locale: es_ES"
[master 6a9f8d5] Added new locale es_ES
 1 file changed, 1 insertion(+), 1 deletion(-)

To update the Bitbucket repository, use the git push command. origin specifies the name of the remote destination, while master specifies the name of the local branch being pushed. You may be prompted for your Bitbucket account password:

$ git push origin master
Password for '': 
Counting objects: 5, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 298 bytes | 0 bytes/s, done.
Total 3 (delta 2), reused 0 (delta 0)
   45cff79..6a9f8d5  master -> master

If you need to pull changes from a remote repository into a local repository, for instance, to incorporate changes from another developer, use the git pull command:

$ git pull
remote: Counting objects: 3, done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 3 (delta 2), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
   6a9f8d5..776aa6d  master     -> origin/master
Updating 6a9f8d5..776aa6d
 i18n-node-http.js | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

You can use git log to review the latest changes:

$ git log
commit 776aa6d5375037f009f4d4dc8acbe06bd228c214
Author: Other Developer 
Date:   Mon Aug 24 16:41:40 2015 -0400

    Changed default locale to es_ES

commit 6a9f8d57dab0db5fb50d4be1b863307bd10c9c0c
Author: bitbucket
Date:   Mon Aug 24 16:25:08 2015 -0400

    Added new locale es_ES

Syncing with the Transifex Client

You can use the Transifex Client to push source changes to Transifex. The benefit of this approach is that it ensures everyone has access to the latest translations. However, it can make it harder to control who updates the translations and when. Features that are still in development can change entirely, and having your localizers work on text that might not appear in the final product would be a waste of time and money. Developers also need to remember to update localization files when committing their code changes. Enforcing a push policy ensures everyone knows exactly when to sync their changes to the Transifex project.

To update your Transifex project from a local Git repository, first make sure your repository is up to date with the remote Bitbucket repository by running git pull. Install the Transifex Client if it’s not already installed. The Transifex Client is modeled on the Git client and uses a similar command structure. For instance, use the tx init command inside your project’s root folder to create a new Transifex configuration:

$ tx init
Creating .tx folder...
Transifex instance [https://www.transifex.com]: 
Creating skeleton...
Creating config file...

You should now have a .tx folder inside of your project. Inside of this folder is a configuration file, which contains the information used to identify the project on a Transifex server. For instance, this Node.js app has the project name “i18n” and stores its localization resources in the “locales” directory as standard JSON files.

[main]
host = https://www.transifex.com

[i18n.enjson]
file_filter = locales/<lang>.json
source_file = locales/en.json
source_lang = en

You can add the .tx folder to your Git repository by using git add. When other developers pull your changes, they can use the same configuration file to connect their Transifex clients to the Transifex project.
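For example, assuming the generated `.tx` folder sits in your repository root, sharing the configuration is just another commit (a sketch):

```shell
# version the Transifex configuration alongside the code
git add .tx/config
git commit -m "Add Transifex client configuration"
```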

When you’re ready to push your updated files to Transifex, use the tx push command. The -s flag pushes source files, while the -t flag pushes translation files:

$ tx push -st
Pushing translations for resource i18n.enjson:
Pushing source file (locales/en.json)
Pushing 'de' translations (file: locales/de.json)
Pushing 'es' translations (file: locales/es.json)

To pull changes into your local project folder, use tx pull:

$ tx pull -a
New translations found for the following languages: de, es
Pulling new translations for resource i18n.enjson (source: locales/en.json)
 -> de: locales/de.json
 -> es: locales/es.json

From here, simply stage, commit, then push the updated localization files to your Bitbucket repository.
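Assuming the pulled files landed in `locales/` as in the earlier examples, that final step looks like this (remote and branch names as used above):

```shell
# commit the translations pulled from Transifex and publish them to Bitbucket
git add locales/
git commit -m "Update translations from Transifex"
git push origin master
```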

Again, don’t forget to check in for part 2 of this post, Integrating Transifex with Bamboo! And until then, check out the Transifex localization website for more information about localizing digital content!

Why Is Localization So Dang Hard?

This post initially appeared on Medium.

Why Is Localization So Dang Hard

With so many software frameworks and development environments supporting the ability to internationalize software, why is localization still so difficult?

I18n, L10n, translation — what does it all mean?

Technically, the most direct way to build a global application is to 1) somehow figure out which locale the user wants and then 2) give the user a user interface specific to the locale requested. Prior to the Internet, software developers often built separate applications for every locale. Localization files had to be distributed with the application (usually on separate floppy disks, yes, we are going back that far), and the user had to pick the right floppy disk per language; this process was fairly awful. With the advent of the Internet and the proliferation of computer access globally, it has become common and easier to support multiple languages in the same app.

The problem that arose from relying on the user to pick application versions based on language was partially solved by operating systems. The OS software developers built in the capability for the user to pick their locale during configuration. This advance hid the problem from most users, whose operating system was set up for them. While this was amazing progress for the user, for the software developer building the user interface, these changes did not go far enough.

Standards? Where did they fall short?

When learning about globalization, you can find a plethora of documentation on globalization standards. However, when it comes to actually implementing translations in a product or website, there is little guidance. The good news here is that the mechanism to display different languages in a software product, or what is commonly referred to as internationalization (i18n), is a well understood software engineering problem. As a result, most development environments or web frameworks support i18n. Unfortunately, there is a downside.

Software developers tend to be a fairly disagreeable bunch, and so they disagreed about the *right* way to support i18n. It is in this lack of “universality” where the standards fall short. Each programming language implements a slightly different form of i18n with a slightly different approach. Some languages avoid this altogether and leave it up to frameworks and libraries to solve.

File formats had to become the standard

In the absence of clear guidelines, the software development community has had to find a way to manage translation assets. For this reason, they turned to file formats to specify the integration method. In some cases, the programming language simply adopted a well-known file format as a “method” of integration. Oh, you are using a PHP framework? Well, then you must be using PO files for managing your translations. However, there are a couple of key issues with a file based approach.
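For reference, a PO file is just a list of source strings paired with their translations; a minimal fragment looks like this (made-up strings for illustration):

```po
msgid "Log in"
msgstr "Iniciar sesión"

msgid "Welcome back, %s!"
msgstr "¡Bienvenido de nuevo, %s!"
```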

  1. Version management is a nightmare. Developers often make multiple copies of translation files when building applications. This can lead to significant confusion around which set of files are the most current from the translators. Or even worse… software development projects sometimes have last minute text changes. Those changes often result in generating even more translation files.
  2. Process agility is sacrificed. In a file-based approach, the translation file needs to be completed with all translations and generally blocks the development process. On large software projects, having to wait for translators to finish can slow even the nimblest development teams. Evidence for this can be seen in the fact that many software startups bypass any localization efforts completely in an effort to keep their development velocity high.
  3. We forgot DRY! With the file-based approach, translation management tends to organize the translation files around a particular project, product, or website. After a few iterations, translators are translating the exact same text copy again and again. If there is no process in place to limit this effect, it can spiral out of control in time and cost just the same way that real code does when we neglect DRY principles.

Looking for a better way

It was in this environment that Dimitris Glezos found himself when working with the Fedora Linux project in 2007. Back then, translation projects had grown so large and unmanageable that Red Hat developers were desperate for help. Dimitris came up with the idea for Transifex.

“The idea is that Transifex will act as a proxy/mediator for translation commits.”

Fast forward to 2015: Transifex is part of the cloud technology landscape, but has this completely solved the problem? We’ve made great progress, but there is always more to be done.

This approach does gain some ground on versioning and agility. However, it has also introduced some new issues. Clearly, using the cloud to manage our translation files is just one step in solving this problem. Dimitris’ idea of needing a proxy/mediator between translators and developers still persists even today. Transifex’s developer-centric approach aims to ease management, storage, collaboration, and real-time monitoring so that companies can launch products and content in multiple languages without slowing down development speed, thus solving these translation issues.

Taking a leap forward

Part of the problem with globalization is that, generally speaking, we’ve been going about it the wrong way. We’ve been focusing on translation management as an engineering problem and have been building developer-first solutions. But in order to take a leap forward, we need to solve the potentially harder, more people-focused issue and make translation efforts truly seamless for the individual. Here are three key aspects of doing this:

  1. Using software tools as an enabler. Software tools should enable us to build on a global level — they shouldn’t be used to define boundaries. There will always be cases in whatever approach we take where issues arise. Our tools should be capable of helping guide us past those issues, and smart enough that, once solved, they don’t come up again.
  2. Appropriate context for everyone based on their role. Here are some examples. People who are performing the translator role need to see the text copy in the context of the website or application, not in some localization file format. Translation project managers need to see a dashboard of timelines and cost so they can have the appropriate context for their role. And finally, developers should not need to spend time digging through the UI for translatable strings…this should happen seamlessly as part of their build process.
  3. Keeping translations cycles quick. Agile methods have transformed our approach to developing software. No longer do we spend months in dingy poorly lit rooms building an application before validating it with product experts or even users. Translation projects can benefit the same way. By allowing for shorter cycles, and more transparency not only will timelines be reduced, but overall quality will likely improve as well. This approach enables us to fit to a process rather than force fitting the process to us.

The world is just a big community

With the growth of the Internet, especially in countries outside of the US and Europe, we are quickly finding that the world as a whole is a community unto itself. Even though it is not practical for the entire world to agree on a single language, it *is* practical to expect our software and processes to make this as transparent as possible. When we are building new software products or websites we shouldn’t even need to *choose* for whom to make our product available. It should simply be available to all.

Find out additional information

The future is not as far as you might think!

  1. See how Transifex Cloud APIs help streamline file-based translation approaches
  2. See how Transifex Live is helping to create translation context for websites
  3. See how Transifex Command Line Client is automating software build processes
  4. See Dimitris’ letter to the Fedora community

An Opinionated Guide to WordPress Plugin Development

After creating our own WordPress multilingual plugin, we wanted to share some information about how we developed it. This guide runs you through the basics of setting up a PHP development environment for plugin development. Many steps in this guide are intended for Mac OS X systems running WordPress on a virtual Ubuntu server. If you are attempting to follow this on a different platform, the steps might differ.

What you will need

For starters, let’s download a bunch of things:

  • VirtualBox (or another hypervisor)
  • The Bitnami WordPress virtual appliance image
  • The NetBeans IDE
  • Firefox with the FireFTP plugin

Getting WordPress running on your local Virtual Machine

The first task is to get WordPress up and running locally from the image. Following the steps from Bitnami, you should be able to easily import the image. One key item to note at this point…make sure you are connected to a network with internet access. The VM will set itself up with ‘Bridged’ networking initially, so it needs an external DNS. I’ll address working offline later.

Also reference the following FAQ from Bitnami.

The cloud setup will be slightly different, so be sure you are on the ‘Virtual Appliance’ FAQ.

WordPress dev

If you chose VirtualBox

After you have installed and launched VirtualBox, go to File > Import Appliance, and navigate to the folder of your extracted VM image and import it.

In the next window, allocate at least 2048 MB of RAM.

Next up: starting the VM!

If you get the “WRITE SAME failed. Manually zeroing” error, just wait. It will eventually boot to the OS.

Note: If the default login credentials don’t work, try bitnami for the username and password. Also, if prompted to change the password, change it to whatever you like.

Set up a repository and Git Client

If you don’t have any existing plugin code, create a new project on GitHub. Then generate your plugin boilerplate here:

The boilerplate generator will create two directories: one for your assets, which will be displayed on the WordPress plugin page, and another for your plugin. I recommend creating two separate projects for these directories. Right now, we are only focused on the plugin code.

Getting the IDE setup, and sFTP access

At this point, go ahead and install NetBeans; you’ll also need to set up Firefox with the FireFTP plugin.
We are SO close to being able to do something productive!

Just a few more quick setup tasks:

  • Shell into the VM as ‘bitnami’ user (you can use console or set up SSH).
    • Enable SSH in the VM
    • The VM’s IP address can be found using ifconfig (the inet address)
  • Make a symbolic link to the WordPress plugin directory as it’s somewhat hidden in Bitnami’s special paths.
    ln -s /opt/bitnami/apps/wordpress/htdocs/wp-content/plugins/ plugins
  • Now, set up your sFTP connection. I recommend just using the IP address displayed on the VM console screen, and the ‘bitnami’ user. (Note: You can create separate users for security purposes…but it’s easy to lose track if you have many VMs)
  • While still inside the FireFTP console, navigate on the left pane to your boilerplate plugin and copy the plugin directory over to the VM.
  • Finally! You can now log in to WordPress and activate your plugin! The default WordPress login info is ‘user’ / ‘bitnami’

Advanced topics

Getting unit tests setup

You’ll need to set up PHPUnit and WP-CLI on the VM.

To install PHPUnit:

wget https://phar.phpunit.de/phpunit.phar
chmod +x phpunit.phar
sudo mv phpunit.phar /usr/local/bin/phpunit

For additional info:

To install WP-CLI (similar to PHPUnit):

curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
chmod +x wp-cli.phar
sudo mv wp-cli.phar /usr/local/bin/wp

For additional info:

If you have existing tests…simply go to your plugin directory and run the WP-CLI initialization script:

sudo ./bin/install-wp-tests.sh wordpress_test <mysql root user> <mysql root user password> localhost

Now, if everything is good and the stars aligned for you…simply run:

phpunit
If you are working on a project that still needs its tests set up, there are a few more steps which I’m not going to cover here. Instead, please refer to the WP-CLI docs.

Working offline

When you are not connected to a network, this setup doesn’t work, since the WordPress Virtual Machine can’t get an IP address to support a local connection. In this case, you will need to set up ‘host-only networking’. For VirtualBox, this is fairly straightforward…although you do need to take some extra configuration steps.

Unfortunately, the VirtualBox documentation isn’t completely clear. I’ll break it down quickly here:

  • Create the host-only loopback device. The management console will default to ‘vboxnet0’, which is fine. Be sure to turn the DHCP server on; this will prevent you from having to change networking on the guest.
  • Now you will be able to flip from bridged to host-only in your Virtual Machine’s settings panel.
  • Be sure to reboot the VM so that the virtual hardware settings are updated; you will then see a new IP address on the virtual machine’s console.

If you found this guide useful, please check out our WordPress plugin that helps you to quickly translate your website or blog: