History of machine translation: page 2  

The first generation of machine translation systems was based on algorithms of sequential word-for-word, phrase-by-phrase translation. The potential of such systems was determined by the available dictionary size, which in turn depended directly on computer storage capacity. A text was translated sentence by sentence; semantic links between sentences were not taken into account. Such systems are called direct translation systems. In time they were replaced by systems of the next generation, in which translation from one language to another was performed at the level of syntactic structures. Their translation algorithms used a set of operations that built the syntactic structure of the input sentence according to the grammar rules of its language (much as schoolchildren are taught to parse sentences at school), then converted it into the syntactic structure of the output sentence and synthesized the output sentence by inserting the appropriate words from the dictionary. Such systems are called T-systems (T for "transfer", i.e. conversion).
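To make the contrast concrete, here is a minimal Python sketch of the two generations just described. The dictionary and the single reordering rule are invented for illustration; they stand in for the much larger word lists and grammar rules that real systems held in memory.

# Toy English -> French dictionary; a real system's coverage was
# limited by how many such entries fit in computer storage.
DICTIONARY = {
    "the": "le", "red": "rouge", "ball": "ballon",
}

def direct_translate(sentence: str) -> str:
    """First generation: substitute each word independently.
    Word order is ignored, so 'the red ball' comes out as
    'le rouge ballon' instead of the correct 'le ballon rouge'."""
    return " ".join(DICTIONARY.get(w, w) for w in sentence.split())

def transfer_translate(sentence: str) -> str:
    """T-system sketch: analyze the input structure, convert it to
    the output structure, then synthesize from the dictionary."""
    # Analysis: assume the toy structure determiner-adjective-noun.
    det, adj, noun = sentence.split()
    # Transfer: reorder to the French structure determiner-noun-adjective.
    reordered = [det, noun, adj]
    # Synthesis: insert the target-language words from the dictionary.
    return " ".join(DICTIONARY.get(w, w) for w in reordered)

print(direct_translate("the red ball"))    # le rouge ballon (wrong order)
print(transfer_translate("the red ball"))  # le ballon rouge

The direct translator preserves the English word order, while the transfer sketch first converts the syntactic structure and only then looks up the words, exactly the division of labor described above.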

The most advanced approach to machine translation is considered to be the one based on obtaining, by means of semantic analysis, a semantic representation of the input sentence that is independent of any particular language. The output sentence is then synthesized from this semantic representation. Such systems are called I-systems (I for "interlingua"). It is believed that the next generation of machine translation systems will be I-systems.
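The same toy setting can illustrate the I-system idea: the input sentence is analyzed into a language-neutral meaning representation, and each output language needs only its own synthesizer rather than transfer rules for every language pair. All names and vocabulary below are invented for illustration.

def analyze_english(sentence: str) -> dict:
    """Toy semantic analysis for sentences of the form 'X sees Y':
    the result contains no English verb, only the language-neutral
    predicate SEE with its agent and patient."""
    subj, _verb, obj = sentence.split()
    return {"predicate": "SEE", "agent": subj, "patient": obj}

def synthesize_german(meaning: dict) -> str:
    """Toy synthesis: realize the same meaning in German."""
    verbs = {"SEE": "sieht"}
    return f'{meaning["agent"]} {verbs[meaning["predicate"]]} {meaning["patient"]}'

def synthesize_french(meaning: dict) -> str:
    """Adding a language costs one synthesizer, not one rule set per
    language pair, which is the main appeal of the interlingua design."""
    verbs = {"SEE": "voit"}
    return f'{meaning["agent"]} {verbs[meaning["predicate"]]} {meaning["patient"]}'

meaning = analyze_english("Anna sees Boris")
print(synthesize_german(meaning))   # Anna sieht Boris
print(synthesize_french(meaning))   # Anna voit Boris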

As a prominent scientist who saw the problem in all its complexity, A.A. Lyapunov promoted, from the very beginning of machine translation research, the idea of extracting the meaning of a translated text and rendering it in another language. However, such a formulation of the machine translation problem was premature at that time. Moreover, it has not been solved in the general case even today, despite the efforts of the International Federation for Information Processing (IFIP), the global community of scientists in the field of information processing. Nevertheless, many particular results related to the semantic analysis of texts have been obtained and published in the IFIP proceedings.

The first experience of creating machine translation programs showed that these problems had to be solved gradually, step by step. There were too many difficulties and uncertainties: how to formalize texts and build algorithms for working with them, which dictionaries to load into the machine, which linguistic regularities to use in machine translation, and what those regularities actually are.

It turned out that traditional linguistics had neither the factual material nor the ideas and concepts needed to build machine translation systems that would use the meaning of the translated text.

Traditional linguistics could not offer basic ideas not only in semantics but even in syntax. At that time no language had an inventory of its syntactic structures; rules of compatibility and interchangeability of structures had not been studied; and principles for constructing larger syntactic units from smaller ones had not been worked out. In fact, the traditional linguistics of the 1950s could not answer a single question posed by the construction of machine translation systems.
