d10n - Content Translation CLI Tool

d10n is a CLI tool for translating the text content of files or directories using large language models through an OpenAI-compatible API.

Configuration

Create a configuration file at ~/.config/d10n.yaml with the following format:

api_base: "https://api.openai.com"  # OpenAI-compatible API base URL
api_key: "your-api-key"             # API key for the service
model: "gpt-4o"                     # Model to use for translation
system_prompt: "Custom prompt"      # Optional: Custom system prompt

# Concurrency settings
concurrency: 3                      # Number of concurrent translation tasks

# Chunked translation settings
chunk:
  enabled: false                    # Whether to enable chunked translation
  size: 10240                       # Size of each chunk in tokens
  prompt: "Please continue translation"  # Prompt to use for continuing translation
  context: 2                        # Number of chunks to include as context
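
The configuration maps naturally onto a small Go struct. As a rough sketch of how it could be loaded (the package, type, and field names here are illustrative, not d10n's actual source):

package config

import (
    "os"
    "path/filepath"

    "gopkg.in/yaml.v3"
)

// Chunk mirrors the chunk: block of d10n.yaml.
type Chunk struct {
    Enabled bool   `yaml:"enabled"`
    Size    int    `yaml:"size"`
    Prompt  string `yaml:"prompt"`
    Context int    `yaml:"context"`
}

// Config mirrors the top-level keys of ~/.config/d10n.yaml.
type Config struct {
    APIBase      string `yaml:"api_base"`
    APIKey       string `yaml:"api_key"`
    Model        string `yaml:"model"`
    SystemPrompt string `yaml:"system_prompt"`
    Concurrency  int    `yaml:"concurrency"`
    Chunk        Chunk  `yaml:"chunk"`
}

// Load reads and parses ~/.config/d10n.yaml.
func Load() (*Config, error) {
    home, err := os.UserHomeDir()
    if err != nil {
        return nil, err
    }
    data, err := os.ReadFile(filepath.Join(home, ".config", "d10n.yaml"))
    if err != nil {
        return nil, err
    }
    cfg := &Config{Concurrency: 3} // default matches the -concurrency flag
    if err := yaml.Unmarshal(data, cfg); err != nil {
        return nil, err
    }
    return cfg, nil
}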

You can use the following variables in your system prompt:

  • $SOURCE_LANG: Will be replaced with the source language code
  • $TARGET_LANG: Will be replaced with the target language code
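
Putting the placeholders together with the API settings above, a single translation request to an OpenAI-compatible endpoint might look roughly like the sketch below. This is an illustration only: the /v1/chat/completions path, the translator package, and the function names are assumptions, not d10n's actual implementation.

package translator

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
    "strings"
)

// renderSystemPrompt fills in the $SOURCE_LANG and $TARGET_LANG placeholders
// before the prompt is sent to the model.
func renderSystemPrompt(prompt, sourceLang, targetLang string) string {
    return strings.NewReplacer(
        "$SOURCE_LANG", sourceLang,
        "$TARGET_LANG", targetLang,
    ).Replace(prompt)
}

// translate sends one chat-completion request to an OpenAI-compatible API
// (assuming the usual /v1/chat/completions path) and returns the model output.
func translate(apiBase, apiKey, model, systemPrompt, text string) (string, error) {
    body, err := json.Marshal(map[string]any{
        "model": model,
        "messages": []map[string]string{
            {"role": "system", "content": systemPrompt},
            {"role": "user", "content": text},
        },
    })
    if err != nil {
        return "", err
    }

    req, err := http.NewRequest(http.MethodPost,
        strings.TrimRight(apiBase, "/")+"/v1/chat/completions", bytes.NewReader(body))
    if err != nil {
        return "", err
    }
    req.Header.Set("Authorization", "Bearer "+apiKey)
    req.Header.Set("Content-Type", "application/json")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()

    // Decode only the fields we need from the standard chat-completion response.
    var out struct {
        Choices []struct {
            Message struct {
                Content string `json:"content"`
            } `json:"message"`
        } `json:"choices"`
    }
    if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
        return "", err
    }
    if len(out.Choices) == 0 {
        return "", fmt.Errorf("empty response from %s", model)
    }
    return out.Choices[0].Message.Content, nil
}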

Usage

d10n <source_path> [options]

Options

  • -target <path>: Target path for translated content (default: <source_path>_l10n[.extension])
  • -source-lang <code>: Source language code (optional)
  • -target-lang <code>: Target language code (required)
  • -model <model>: Model to use for translation (overrides config)
  • -api-key <key>: API key (overrides config)
  • -api-base <url>: API base URL (overrides config)
  • -system-prompt <prompt>: System prompt (overrides config)
  • -format <ext>: File format to process (e.g., md, txt)
  • -concurrency <num>: Number of concurrent translation tasks (default: 3)
  • -chunk: Enable chunked translation for large documents
  • -chunk-size <tokens>: Size of each chunk in tokens (default: 10240)
  • -chunk-prompt <prompt>: Prompt for continuing translation (default: "Please continue translation")
  • -chunk-context <num>: Number of chunks to include as context (default: 2)
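
To make the chunk options concrete: the idea is to split a large document into pieces of roughly -chunk-size tokens, translate them in order, and send the last -chunk-context translated pieces back with each request alongside -chunk-prompt, so the model keeps terminology and style consistent across pieces. The loop below is a simplified sketch of that idea, not d10n's actual code: tokens are approximated as whitespace-separated words, and translateChunk stands in for the real API call.

package translator

import "strings"

// translateChunked splits text into pieces of at most chunkSize "tokens",
// translates them in order, and passes the last contextN results back to the
// model together with chunkPrompt (e.g. "Please continue translation").
func translateChunked(text, chunkPrompt string, chunkSize, contextN int,
    translateChunk func(context []string, prompt, chunk string) (string, error),
) (string, error) {
    if chunkSize <= 0 {
        chunkSize = 10240 // default from the -chunk-size flag
    }
    words := strings.Fields(text)

    // Split the document into chunks of at most chunkSize words.
    var chunks []string
    for start := 0; start < len(words); start += chunkSize {
        end := start + chunkSize
        if end > len(words) {
            end = len(words)
        }
        chunks = append(chunks, strings.Join(words[start:end], " "))
    }

    var translated []string
    for _, chunk := range chunks {
        // Include up to contextN previously translated chunks as context.
        ctxStart := len(translated) - contextN
        if ctxStart < 0 {
            ctxStart = 0
        }
        out, err := translateChunk(translated[ctxStart:], chunkPrompt, chunk)
        if err != nil {
            return "", err
        }
        translated = append(translated, out)
    }
    // Join the translated pieces; a real implementation would preserve the
    // original formatting rather than flattening it to words.
    return strings.Join(translated, "\n"), nil
}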

Examples

Translate a single file from English to Spanish:

d10n document.md -target-lang es

Translate a directory of Markdown files to Chinese:

d10n ./documents -target-lang zh -format md

Specify source language explicitly:

d10n article.txt -source-lang fr -target-lang en

Use chunked translation for large documents:

d10n large-document.md -source-lang en -target-lang es -chunk

Adjust concurrency for processing multiple files:

d10n ./documents -target-lang ja -format md -concurrency 5

Fine-tune chunked translation parameters:

d10n huge-document.md -source-lang en -target-lang zh -chunk -chunk-size 8192 -chunk-context 3
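
Internally, the directory examples above boil down to two steps: collect the files matching -format, then run at most -concurrency translation tasks at once. Here is a minimal sketch of that structure, assuming a hypothetical translateFile helper; it is not d10n's actual code.

package translator

import (
    "fmt"
    "io/fs"
    "path/filepath"
    "strings"

    "golang.org/x/sync/errgroup"
)

// collectFiles walks a directory and returns files matching the -format
// extension (e.g. "md"). An empty format matches every regular file.
func collectFiles(root, format string) ([]string, error) {
    var files []string
    err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
        if err != nil || d.IsDir() {
            return err
        }
        if format == "" || strings.TrimPrefix(filepath.Ext(path), ".") == format {
            files = append(files, path)
        }
        return nil
    })
    return files, err
}

// translateAll runs at most `concurrency` translation tasks at a time.
// translateFile stands in for the real per-file translation step.
func translateAll(files []string, concurrency int, translateFile func(path string) error) error {
    var g errgroup.Group
    g.SetLimit(concurrency) // e.g. 3 by default, or whatever -concurrency says

    for _, f := range files {
        f := f // capture the loop variable for the goroutine
        g.Go(func() error {
            if err := translateFile(f); err != nil {
                return fmt.Errorf("%s: %w", f, err)
            }
            return nil
        })
    }
    return g.Wait() // returns the first error, after all tasks have finished
}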

Building from Source

go build -o d10n

Then move the binary to a directory on your PATH, for example /usr/local/bin or ~/bin.