
Set Up a Data-Secure Translation Backend

To set up a Data-Secure translation backend, you need to install the Next Generation Translator Backend on your own servers or in the cloud environment of your choice. Details on how to do this can be found below.

Alternatively, feel free to reach out to the TNG Atlassian Consulting Team. We will gladly:

  • set up a data-secure backend for you

  • set up a data-secure backend on an AWS trial account to allow you to test the functionality

Once we have set up your trial account, we will send you an activation link for your username, and you can then choose a password for the account. To connect to the trial backend, select AWS Cognito from the Authentication Type drop-down when configuring the Next Generation Translator. We will send you the Translation URL, Cognito region, and Cognito client ID needed to complete the setup.

How to Set Up Your Own Data-Secure Backend

System requirements:

  • 25 GB of disk space

  • 16 GB RAM

  • A Linux operating system

Additionally, if you want the neural network to run on your GPU:

  • An NVIDIA GPU with at least 12 GB of RAM that can be used with CUDA 11 (Tesla K80, Tesla V100, or similar)

If you want the backend to utilize the system’s GPU, make sure you have the latest NVIDIA drivers as well as CUDA 11 and cuDNN 8 installed (for a quick installation guide on Ubuntu 20.04, visit this page).
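As a quick sanity check (a hedged sketch: `nvidia-smi` and `nvcc` are only present once the NVIDIA driver and the CUDA toolkit are installed), you can verify the GPU prerequisites like this:

```shell
# Check whether the NVIDIA driver and CUDA toolkit are visible on this machine.
if command -v nvidia-smi >/dev/null 2>&1; then
  GPU_AVAILABLE=yes
  nvidia-smi                  # shows driver version and GPU memory
  if command -v nvcc >/dev/null 2>&1; then
    nvcc --version            # shows the installed CUDA toolkit version
  fi
else
  GPU_AVAILABLE=no
  echo "No NVIDIA driver found - the backend will fall back to the CPU."
fi
```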

You need a server that is reachable from your Confluence server via a DNS lookup or a direct IP address. Please be aware that by default the backend service is not password protected, so you may want to avoid exposing it publicly. If you want to use SSL/TLS transport encryption or other security measures for the communication between the Confluence plugin and your backend, you will have to configure this yourself; a common solution is a reverse proxy in front of the backend. If you need help with such a setup, feel free to contact us and we will help you with it.
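As one example of such a reverse-proxy setup (a sketch only; the server name, certificate paths, and password file are placeholders, not values from this project), an nginx server terminating TLS and adding basic-auth password protection in front of the backend could look like this:

```nginx
server {
    listen 443 ssl;
    server_name translator.example.com;                   # placeholder hostname

    ssl_certificate     /etc/ssl/certs/translator.crt;    # placeholder paths
    ssl_certificate_key /etc/ssl/private/translator.key;

    # Optional: password-protect the otherwise open backend.
    auth_basic           "Translator backend";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location / {
        proxy_pass http://127.0.0.1:80;                   # the backend's default HTTP port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```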

Prerequisites for running the backend:

  • You need a Redis instance that can be reached from the backend server.

  • You need the Python 3.8 and pip binaries available on the server (you can also use a virtual environment).
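To confirm these prerequisites are in place, a quick check might look like this (hedged: `python3` is used here as a fallback for the check only if no dedicated `python3.8` binary exists):

```shell
# Locate the interpreter the backend will run under.
if command -v python3.8 >/dev/null 2>&1; then
  PY=python3.8
else
  # Fallback for this check only; the backend is documented against Python 3.8.
  PY=python3
fi
"$PY" --version
"$PY" -m pip --version || echo "pip is not installed for $PY"
```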

To use the backend service, first clone the open-source backend repository.

Once cloned, switch to the newly created directory and install the required libraries with the following command:

pip install -r requirements.txt

Next, set the environment variable REDIS_HOST to the domain name or IP address of your Redis installation. Be sure to also include the port number (the default is 6379).
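For example (the hostname here is a placeholder for your own Redis server):

```shell
# Point the backend at the Redis instance, including the port.
export REDIS_HOST=redis.example.internal:6379
```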


If you have no running Redis instance, you can also start a local one using the script provided with the backend and point the environment variable at it:

export REDIS_HOST=localhost:6379

Please be aware that this script is not a permanent solution but is intended for trial purposes only (e.g. to test out the service). For production use, make sure to utilize a production-ready Redis installation.

Once REDIS_HOST points to a running Redis instance, you can start the translation backend server.

During the first startup (and only if the models have not yet been downloaded), the backend will download the machine learning models (~25 GB of data) and then load the model into memory. After that, you will be able to start translating. The service will be available on port 80 (the default HTTP port).
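To verify the service is up once startup has finished, you can probe the HTTP port (a hedged check: that the root path answers successfully is an assumption, and `localhost` stands in for your backend host):

```shell
# Probe the backend's HTTP port; -f fails on HTTP errors, --max-time bounds the wait.
if curl -fsS --max-time 5 "http://localhost:80/" >/dev/null 2>&1; then
  BACKEND_UP=yes
  echo "Backend is answering on port 80."
else
  BACKEND_UP=no
  echo "Backend not reachable yet - check the server logs."
fi
```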

You can find instructions on how to run your service as a daemon (using systemd) here.
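For reference, a minimal systemd unit could look like the following sketch (the paths, user, and start command are placeholders, not this project's documented values):

```ini
# /etc/systemd/system/translator-backend.service (placeholder paths and command)
[Unit]
Description=Next Generation Translator backend
After=network.target

[Service]
User=translator
WorkingDirectory=/opt/translator-backend
Environment=REDIS_HOST=localhost:6379
# Replace with the backend's actual start command.
ExecStart=/opt/translator-backend/start.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After placing the file, you would enable the unit with `systemctl enable --now translator-backend`.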

Using a Docker container

The backend can also be deployed as a Docker container. The repository includes instructions on how to build and run the container locally. If the Docker host is configured correctly, the container will also utilize the GPU.
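A local build-and-run might look like this sketch (hedged: it assumes a Dockerfile in the repository root, and the image name and flags are illustrative, not the project's documented ones):

```shell
# Build and run the backend image (only attempted when Docker and a Dockerfile are present).
if command -v docker >/dev/null 2>&1 && [ -f Dockerfile ]; then
  DOCKER_READY=yes
  docker build -t translator-backend .
  # --gpus all exposes the host GPU (requires the NVIDIA container toolkit);
  # drop it on CPU-only hosts.
  docker run --rm -d -p 80:80 -e REDIS_HOST=redis-host:6379 --gpus all translator-backend
else
  DOCKER_READY=no
  echo "Run this from the cloned repository on a host with Docker installed."
fi
```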

If you want to deploy the resulting container to your AWS account using ECS, for example, we can help you with that as well. Please contact us.
