As far as I'm concerned, Andrew Conner's example reveals a shortcoming of the translator far more basic than sexist bias or lack of nuance. The referent of every pronoun should be entirely unambiguous thanks to the presence of a unique antecedent ("John", in the first sentence). Even if Google somehow believed that "John" is a female name, the pronouns would all come out the same. My impression is that the previous, logical (as opposed to statistical) wave of AI would have gotten this one right. Apparently, whatever Google is doing cannot connect words that are more than two sentences apart. If this is a general limitation, then we can sleep easy, at least those of us with tech jobs beyond data entry...
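To make the failure concrete, here's a minimal sketch of the sentence-window problem using the open-source MarianMT models from Hugging Face. This is an assumption for illustration: we don't know what Google Translate runs internally, and the model name and example text are mine. The point is that translating sentence by sentence means the model literally never sees "John" when it renders "he":

```python
# Sketch: why sentence-level MT loses pronoun antecedents.
# Assumes the Helsinki-NLP MarianMT checkpoints; any English -> gendered-
# language pair would illustrate the same thing.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

paragraph = "John is a nurse. He works long shifts. He is admired by the patients."

def translate(text: str) -> str:
    batch = tokenizer([text], return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return tokenizer.decode(generated[0], skip_special_tokens=True)

# Sentence by sentence: each "He" is translated with no antecedent in view,
# so the model falls back on corpus statistics to pick a gender.
for sentence in paragraph.split(". "):
    print(translate(sentence))

# Whole paragraph at once: "John" is at least inside the model's input
# window, though nothing guarantees the model actually uses it.
print(translate(paragraph))
```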
> Right now, MT is only doing step 1 — the wide reading of an entire corpus. There is currently no technical mechanism for doing the more focused reading in step 2.
"Fine-tuning" models on a smaller, more focused dataset definitely exists in other areas of natural-language processing. I wonder why we haven't seen it applied to translation?
I'd like to include some discussion of "fine tuning" in a followup. Right now, I have this link: https://ruder.io/recent-advances-lm-fine-tuning/
Do you (or anyone else) have good links to fine-tuning papers?
A quick Google Scholar search turns up this paper on fine-tuning language models for classification (ULMFiT): https://arxiv.org/abs/1801.06146 . From my understanding, generation is similar enough, especially in training. AI Dungeon pretty much does this with GPT-2 and GPT-3. I bet their founder Nick Walton (https://twitter.com/nickwalton00) would know a lot, since he does this for a living.
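For what it's worth, the mechanics carry over to translation almost unchanged. Below is a hedged sketch of the "focused reading" step: fine-tuning a pretrained MT model on a small domain-specific parallel corpus with the Hugging Face Trainer API. The model name, the file `domain_corpus.jsonl`, and the hyperparameters are all illustrative assumptions, not anything from the ULMFiT paper:

```python
# Sketch: fine-tune a pretrained translation model on a small parallel corpus.
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_name = "Helsinki-NLP/opus-mt-en-fr"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical JSON-lines file, one {"en": "...", "fr": "..."} pair per line.
dataset = load_dataset("json", data_files="domain_corpus.jsonl")["train"]

def preprocess(batch):
    inputs = tokenizer(batch["en"], truncation=True, max_length=128)
    labels = tokenizer(text_target=batch["fr"], truncation=True, max_length=128)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = dataset.map(preprocess, batched=True, remove_columns=dataset.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="finetuned-mt",
    per_device_train_batch_size=16,
    learning_rate=2e-5,  # small LR: nudge the model, don't erase the wide reading
    num_train_epochs=3,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

The low learning rate is the whole trick: you want the model to absorb the domain's vocabulary and style without forgetting what it learned from the wide corpus.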
Probably because MT and LLMs are mostly used for novelty or research purposes, or because no one has a use case serious enough to warrant spending the time to fine-tune. It also depends on the cost of human translators. Publishers are a candidate, but maybe they're not tech-savvy enough to hire an engineer (or a team of engineers) to do it, and/or the cost of a translator versus that investment and its returns isn't high enough yet. And, for good reason, few trust MT enough to be willing to replace a human translator with it.