Since the OCR Translation software thread has a marked solution, I’m making a new thread to avoid necrobumping…
I’m wondering whether there’s a better method to get faster translations from the camera, like what Word Lens (acquired by Google and now part of Google Translate) did/does. I’m thinking about scripting this, though I’m not sure there’s easy access to the camera via the CLI. If someone is savvy, they could look into integrating the Firefox Translations feature that was added in Firefox 118; that way, translations would occur offline. I haven’t found where the Firefox Translations repos are, though this list seems to have them (Mozilla · GitHub).
Frog might be able to get the text from the command line with the -e option:
❯ flatpak run com.github.tenderowl.frog --help
Usage:
  python3 [OPTION…]

Help Options:
  -h, --help                   Show help options
  --help-all                   Show all help options
  --help-gapplication          Show GApplication options

Application Options:
  -e, --extract_to_clipboard   Extract directly into the clipboard
and Dialect can be run from the command line:
❯ flatpak run app.drey.Dialect --help
Usage:
  dialect [OPTION…]

Help Options:
  -h, --help                   Show help options
  --help-all                   Show all help options
  --help-gapplication          Show GApplication options

Application Options:
  -t, --text                   Text to translate
  -s, --src                    Source lang code
  -d, --dest                   Destination lang code
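Putting the two together, a rough sketch of how one might chain them: Frog drops the recognized text on the clipboard, and Dialect takes text via -t. This is untested; translate_capture is a hypothetical helper name, and wl-paste (from wl-clipboard, i.e. Wayland) is an assumption here, on X11 you'd read the clipboard with xclip -o -selection clipboard instead.

```shell
#!/usr/bin/env bash
# Hypothetical glue function: OCR with Frog, then translate with Dialect.
# Assumes a Wayland session with wl-clipboard installed (wl-paste).
translate_capture() {
    local src="${1:-en}" dest="${2:-es}"

    # Frog opens its capture UI and, with -e, puts the recognized
    # text directly on the clipboard (per its --help output above).
    flatpak run com.github.tenderowl.frog --extract_to_clipboard

    # Read the clipboard and hand the text to Dialect.
    local text
    text="$(wl-paste)"
    flatpak run app.drey.Dialect --text "$text" --src "$src" --dest "$dest"
}
```

Then something like `translate_capture en de` would capture, OCR, and open Dialect with the text pre-filled. It wouldn't be live like Word Lens, but it might be a workable one-keybinding flow.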