OBJECTIVES
As someone familiar with machine translation papers of the 20th and 21st centuries, the author has often raised with his superiors the viability of strategy formation around the analysis of these topics, and at times has struggled to understand the reasoning behind certain strategic decisions imposed on that analysis.
By undertaking this project paper, the author intends to gain better insight into how the machine translation papers of the 20th and 21st centuries were conceived, formulated and then disseminated. The author also hopes to reach an in-depth understanding of how these papers enable companies, organizations and individuals to compete effectively and profitably in an era of internationalization where competition is extremely intense.
To reinforce the learning objectives, two focal issues were examined: innovation and diversity. Innovation was discussed with regard to the machine translation papers of the 20th and 21st centuries, which are renowned for their capacity for constant innovation. Diversity came under strategic thinking and formation, as the author considered the diverse cultural, political, economic, social, technological, governmental and legal contexts in order to better understand the issues being discussed.
ABSTRACT
This essay used various machine translation papers of the 20th and 21st centuries as models for reviewing their present implications and how they dealt with critical situations. From this analysis, key trends in these papers were identified, and both how they worked and their effectiveness in dealing with critical situations were ascertained. The paper then assessed the papers' suitability to critical situations, and in doing so also determined their internal capabilities in relation to the strategies followed by most organizations today. An overall analysis of the performance and effectiveness of the papers was conducted in order to compare their implications with those of others, and gaps in the papers were then identified.
Finally, several strategic options for improving the significance of these papers as effective means in critical situations were recommended and evaluated in terms of their appropriateness to the issues reviewed, the feasibility of carrying out the options, and their acceptability to key stakeholders and decision makers. Several implementation issues related to managing strategic change were also addressed.
Paper # 1
“The dominant framework until the late 1980s was what is now known as ‘rule-based’ machine translation (RBMT). Since then, research has been dominated by corpus-based approaches, among which the primary distinction is made between, on the one hand, statistical machine translation (SMT), based primarily on word frequency and word combinations, and on the other hand, example-based machine translation (EBMT), based on the extraction and combination of phrases (or other short parts of texts).” 1
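To make the statistical idea concrete, the following is a minimal sketch of how word frequency in a parallel corpus can suggest translations. The corpus, words and counts are purely hypothetical illustrations, not an actual SMT implementation:

```python
from collections import Counter

# Hypothetical toy French-English parallel corpus.
parallel_corpus = [
    ("la maison", "the house"),
    ("la voiture", "the car"),
    ("la maison bleue", "the blue house"),
]

def cooccurrence_counts(source_word):
    """Count target-language words in sentence pairs containing source_word."""
    counts = Counter()
    for src, tgt in parallel_corpus:
        if source_word in src.split():
            counts.update(tgt.split())
    return counts

print(cooccurrence_counts("maison"))
# Counter({'the': 2, 'house': 2, 'blue': 1}) -- a real SMT system would
# refine such raw counts into alignment probabilities (e.g. IBM Model 1).
```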
Paper # 2
“The field of machine translation (MT) was the pioneer research area in computational linguistics during the 1950s and 1960s. When it began, the assumed goal was the automatic translation of all kinds of documents at a quality equaling that of the best human translators. It became apparent very soon that this goal was impossible in the foreseeable future. Human revision of MT output was essential if the results were to be published in any form. At the same time, however, it was found that for many purposes the crude (unedited) MT output could be useful to those who wanted to get a general idea of the content of a text in an unknown language as quickly as possible.” 2
Paper # 3
“The MT Summit series of conferences began nearly fifteen years ago, in 1987 at Hakone, Japan. Much has changed in the field of MT since then. Many of the methods, systems and techniques that are familiar to us today have emerged in the last fifteen years. For example, in the late 1980s there were no example-based MT systems, no statistics-based methods, there were no translation memories, there was no text alignment, there was no localization industry, there were scarcely any MT systems for personal computers; and, above all, there was no translation on the Internet, and the World Wide Web was just a gleam in the eyes of its creators. Systems were used only by large organizations, governmental bodies and a few multinationals.” 3
Paper # 4
“Ever since the idea of using computers to translate natural languages was first proposed in the 1940s and since the first investigations were begun in the 1950s, translators have watched developments either in scorn or in trepidation. Either they have dismissed the very notion that anyone could even believe that translation could be mechanized, or (at the other extreme) they have feared that their profession would be taken over entirely by machines.” 4
Paper # 5
“In those early days, and for many years afterwards, computers were quite different from those we are familiar with today. They were large very expensive machines housed in large rooms with reinforced flooring and ventilation systems to reduce excess heat. They required a small army of maintenance engineers and a dedicated staff of operators and programmers. Most of the work was mathematical in nature, either directly for military institutions or for university departments of physics and applied mathematics with strong links to the armed forces. It was perhaps natural in these circumstances that much of the earliest work on machine translation was supported by military or intelligence funds directly or indirectly, and was intended for use by such organizations – hence the emphasis in the United States on Russian-to-English translation, and in the Soviet Union on English-to-Russian translation.” 5
Paper # 6
“Since the middle of the 1990s there has been a rapid increase in the number and variety of translation systems available, in the form of stand-alone software for ‘automatic’ translation, computer-aided translation systems for large corporations, translator workbenches, translation memory systems, on-line systems provided on the Internet (some of them free), and there will no doubt be more in the future. For the general public, computer software for translation is a quite new product; they are unaware of the advantages, limitations and methods of using such systems. They are furthermore familiar with rapid improvements of computer technology and software, and will therefore be expecting similar rapid improvements in the quality of translation software.” 6
Paper # 7
“As in other areas of NLP, three types of evaluation are recognized: adequacy evaluation to determine the fitness of MT systems within a specified operational context; diagnostic evaluation to identify limitations, errors and deficiencies, which may be corrected or improved (by the research team or by the developers); and performance evaluation to assess stages of system development or different technical implementations. Adequacy evaluation is typically performed by potential users and/or purchasers of systems (individuals, companies, or agencies); diagnostic evaluation is the concern mainly of researchers and developers; and performance evaluation may be undertaken by either researchers/developers or by potential users. In the case of production systems there are also assessments of marketability undertaken by or for MT system vendors.” 7
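As a rough illustration of the performance-evaluation category, the sketch below computes a simple word-overlap score between MT output and a human reference. This is a deliberately simplified stand-in for real metrics such as BLEU, and all example strings are hypothetical:

```python
def overlap_precision(hypothesis: str, reference: str) -> float:
    """Fraction of hypothesis words that also appear in the reference."""
    hyp = hypothesis.lower().split()
    ref = set(reference.lower().split())
    if not hyp:
        return 0.0
    return sum(word in ref for word in hyp) / len(hyp)

# Three of the four output words match the reference.
print(overlap_precision("the house is blue", "the blue house"))  # 0.75
```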
Paper # 8
“When machine translation was in its infancy, in the early 1950s, research was necessarily modest in its aims. It was constrained by the limitations of hardware, in particular inadequate memories and slow access to storage, and the unavailability of high-level programming languages. Even more crucially it could look to no assistance from the language experts. Syntax was a relatively neglected area of linguistic study and semantics was virtually ignored in the United States thanks to the behaviorist inclinations of the leading scholars. It was therefore not surprising that the first MT researchers turned initially to crude dictionary based approaches, i.e. predominantly word-for-word translation, and to the application of statistical methods. Warren Weaver himself, in the 1949 memorandum which effectively launched MT research, had advocated statistical methods alongside cryptography, which was soon recognized as being irrelevant, and more futuristically the investigation of universal interlinguas.” 8
Paper # 9
“The mechanization of translation has been one of humanity’s oldest dreams. In the twentieth century it has become a reality, in the form of computer programs capable of translating a wide variety of texts from one natural language into another. But, as ever, reality is not perfect. There are no ‘translating machines’ which, at the touch of a few buttons, can take any text in any language and produce a perfect translation in any other language without human intervention or assistance. That is an ideal for the distant future, if it is even achievable in principle, which many doubt.” 9
Paper # 10
“My intention in this paper is to provide some explanation for the difficulties encountered by present computer systems which attempt to produce partial or complete translations of texts from one natural language into another. The emphasis will be on what can or cannot be achieved automatically at present. I shall not be concerned with the relative merits of different approaches to translation problems, for example, whether systems which switch between languages through some kind of inter-lingual representation are better than those which do not, or whether systems which employ methods from artificial intelligence are better than those which use more familiar methods of computational linguistics, and I shall say virtually nothing about what developments may bring improvements in the future. Furthermore, I shall not be describing any particular system of whatever kind, past or present, or any methods of analysis or processing, or how dictionaries may be structured and compiled, whether monolingual or bilingual.” 10
Paper # 11
“Research on machine translation began in the 1950s and has largely remained to this day an activity which combines an intellectual challenge, a worthy motive and an eminently practical objective. The challenge is to produce translations as good as those made by human translators. The motive is the removal of language barriers which hinder scientific communication and international understanding. The practical objective is the development of economically viable systems to satisfy a growing demand for translations which cannot be met by traditional means. However, the realization has been disappointing in many respects; and, although recently optimism has been growing, operational machine translation systems are still far from satisfactory.” 11
Paper # 12
“Why should we be interested in using computers for translation at all? The first and probably most important reason is that there is just too much that needs to be translated, and that human translators cannot cope. A second reason is that on the whole technical materials are too boring for human translators, they do not like translating them, and so they look for help from computers. Thirdly, as far as large corporations are concerned, there is the major requirement that terminology is used consistently; they want terms to be translated in the same way every time. Computers are consistent, but human translators tend to seek variety; they do not like to repeat the same translation and this is no good for technical translation.” 12
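The terminology-consistency point lends itself to a small sketch: a fixed corporate glossary applied mechanically, so that each term is rendered the same way every time. The glossary entries below are hypothetical:

```python
# Hypothetical approved English-French terminology glossary.
GLOSSARY = {
    "drive shaft": "arbre de transmission",
    "gearbox": "boîte de vitesses",
}

def apply_glossary(text: str) -> str:
    """Replace every glossary term with its single approved translation."""
    for term, translation in GLOSSARY.items():
        text = text.replace(term, translation)
    return text

print(apply_glossary("check the gearbox and the drive shaft"))
# check the boîte de vitesses and the arbre de transmission
```

Unlike a human translator seeking stylistic variety, the lookup can only ever produce one rendering per term, which is exactly the consistency large corporations require.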
Paper # 13
“When giving any general overview of the development and use of machine translation (MT) systems and translation tools, it is important to distinguish four basic types of translation demand. The first, and traditional one, is the demand for translations of a quality normally expected from human translators, i.e. translations of publishable quality – whether actually printed and sold, or whether distributed internally within a company or organization. The second basic demand is for translations at a somewhat lower level of quality (and particularly in style), which are intended for users who want to find out the essential content of a particular document – and generally, as quickly as possible.” 13
Paper # 14
“The traditional use of MT is the production of translations of technical documentation, e.g. for multinational companies. The system produces ‘raw’ output of variable quality which has then to be revised (post-edited) by translators. Post-editing can be expensive, and a successful cost-effective option is the pre-editing of input texts (typically with a controlled ‘regularized’ language) to minimize incorrect MT output and reduce editing processes. An important development of this usage, now expanding rapidly (with millions of translated pages every year), is the integration of translation with technical authoring, printing and publishing processes. A more recent development is the translation of Web pages and of electronic mail, and there is great and increasing usage of MT services (often free), such as the well-known ‘Babelfish’ on AltaVista. At the same time, the Internet is providing the means for more rapid delivery of quality translations to individuals and to small companies, and a number of MT system vendors now offer translation services, usually ‘adding value’ by human post-editing.” 14
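A minimal sketch of the pre-editing idea mentioned above, assuming a hypothetical controlled-language rule set (an approved word list and a sentence-length limit); real controlled languages are far more elaborate:

```python
# Hypothetical controlled-language vocabulary and length limit.
APPROVED = {"press", "the", "start", "button", "to", "begin", "installation"}
MAX_WORDS = 20

def precheck(sentence: str) -> list[str]:
    """Return controlled-language violations found in one input sentence."""
    words = sentence.lower().rstrip(".").split()
    issues = []
    if len(words) > MAX_WORDS:
        issues.append(f"sentence too long ({len(words)} words)")
    issues += [f"unapproved word: {w}" for w in words if w not in APPROVED]
    return issues

print(precheck("Press the start button to commence installation."))
# ['unapproved word: commence'] -- the author would pre-edit this to 'begin',
# giving the MT system regular input and reducing post-editing effort.
```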
Paper # 15
“The workshops held by the European Association for Machine Translation (EAMT) in the past three years have been devoted to exploring the practicalities of using machine translation software and other computer-based translation tools in organizations. In these environments, translation has to be seen by management as a positive enhancement to the promotion and sales of company products. For those involved in the provision of company translations the cost-effective exploitation of the most appropriate translation software and the careful design of operational workflows are of crucial importance.” 15
Paper # 16
“From the beginning, researchers concentrated almost exclusively on the translation of scientific and technical documents, where the difficulties of cultural differences and variable contexts are less acute than in the more ‘culture-bound’ translation of literature, legal texts, and many areas of sociology. In science and technology, the demand for translation has almost always exceeded the capacity of the translation profession, and these demands are growing rapidly. In addition, the coming of the Internet has created a demand for immediate online translations, which human translators cannot possibly meet.” 16
Paper # 17
“A question frequently asked by those new to the field is whether machine translation (MT) has improved – in the last five years, or in the last ten years, or in the last twenty, etc. In many respects, the answer is quite easy. It is obvious that programs for automatic translation run much faster than in the past because computers are faster. Equally obvious is that text input is much easier than in the days of punched cards and paper tape, and that the output is much easier to read than in the days of almost illegible computer printouts all in upper case. MT systems are also becoming cheaper by the year, even by the month.” 17
Paper # 18
“It is possible to trace ideas about mechanizing translation processes back to the seventeenth century, but realistic possibilities came only in the 20th century. In the mid 1930s, a French-Armenian Georges Artsrouni and a Russian Petr Troyanskii applied for patents for ‘translating machines’. Of the two, Troyanskii’s was the more significant, proposing not only a method for an automatic bilingual dictionary, but also a scheme for coding interlingual grammatical roles (based on Esperanto) and an outline of how analysis and synthesis might work. However, Troyanskii’s ideas were not known about until the end of the 1950s. Before then, the computer had been born.” 18
Paper # 19
“This paper traces the history of efforts to develop computer programs (software) for the translation of natural languages, commonly and traditionally called ‘machine translation’ (MT), or, in non-English-speaking countries, ‘automatic translation’ (traduction automatique, avtomaticheskij perevod). Translators (particularly translators of literature) have generally regarded the task as misguided if not impossible. From the beginning, however, MT researchers have concentrated almost exclusively on the translation of scientific and technical documents, where the difficulties of cultural differences and variable contexts are less acute, but where (in particular) the demand for translation has almost always exceeded the capacity of the translation profession.” 19
Paper # 20
“The translation of natural languages by machine, first dreamt of in the seventeenth century, has become a reality in the late twentieth. Computer programs are producing translations – not perfect translations, for that is an ideal to which no human translator can aspire; nor translations of literary texts, for the subtleties and nuances of poetry are beyond computational analysis; but translations of technical manuals, scientific documents, commercial prospectuses, administrative memoranda, medical reports. Machine translation is not primarily an area of abstract intellectual inquiry but the application of computer and language sciences to the development of systems answering practical needs.” 20
SYNTHESIS
Structurally, the introductions of the 20 papers can be divided into two distinct parts: the first introduces the history of machine translation in its early stages of development and, just as importantly, the growing public interest in discovering its importance. The second portrays the social implications of machine translation in the 20th and 21st centuries.
The introductions of the 20 papers emphasize the role of information technology in 21st-century machine translation, where the equipment and techniques used to manage and process relevant information are critical to the success of the translation process. They address the importance of data and information in helping the management of, for example, a publications company analyze its present status in order to cut costs, increase profits, spot market trends faster and communicate more effectively with customers. To achieve these goals, however, the data and information need to be relevant, accurate, complete and timely.
Some of the authors also agree that in the 21st century, computer networks play an integral part in the proper functioning of machine translation. A computer network is a group of two or more computer systems linked by communication channels for the purpose of sharing data and information during the machine translation process. Efficient networking makes possible the effective sharing of information, and even of expensive hardware, throughout that process, which in turn improves a company's overall operating efficiency and increases productivity.
For instance, Blekhman contends that a human expert's first attempt at machine translation is unlikely to be very successful, partly because the expert generally finds it very difficult to express the knowledge and rules needed to decode the information. Much of that knowledge is almost subconscious, or appears so obvious that most experts do not even bother mentioning it. Knowledge acquisition for machine translation in the 21st century is therefore a large area of research, with a wide variety of techniques still to be developed. Blekhman concludes that companies and organizations should first develop an initial machine translation prototype based on the relevant information extracted from the human expert, and then refine it based on feedback from both the expert and the potential users of the system.
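A minimal sketch of this prototype-then-refine cycle, assuming a toy word-for-word rule set elicited from a hypothetical expert; the point is that feedback becomes new rules rather than new code:

```python
# Initial expert-elicited lexicon for a toy French-English prototype.
rules = {"chat": "cat", "noir": "black"}

def translate(sentence: str) -> str:
    """Word-for-word lookup; unknown words are flagged for the expert."""
    return " ".join(rules.get(word, f"<?{word}?>") for word in sentence.split())

print(translate("chat noir dort"))  # cat black <?dort?>

# Feedback round: the expert supplies the missing knowledge, and the
# prototype improves without any change to the translation code itself.
rules["dort"] = "sleeps"
print(translate("chat noir dort"))  # cat black sleeps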
The introduction of Nagao's paper, by contrast, typifies the machine translation perspective of the 20th century. Nagao contends that for many years data processing was the domain of machine translation, especially since information systems could guarantee that any changes to the database would be completed. This worked well for most publication companies at the time, which could even run entirely on top of a single client-server database.
However, Nagao notes that during the 20th century this model also became more difficult to maintain. As the number of transactions grew in response to various machine translation services, a single database proved very inefficient. Moreover, most machine translation systems of that era consisted of a whole suite of programs operating together, rather than a strict client-server model in which a single server handled the translation process. Nevertheless, when successfully implemented, this model significantly bolstered the operations of most organizations during the 20th century.
To refine such a machine translation system, it must be built so that it can easily be inspected and modified. The system should be able to explain its reasoning and to answer questions about its solution process, and updating it should not require rewriting large amounts of code; it should involve simply adding or deleting localized chunks of knowledge.
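A minimal sketch of what such inspectability might look like, with hypothetical rule names: each output word is traced back to the knowledge chunk that produced it, and chunks can be added or deleted without touching the translation code:

```python
# Hypothetical knowledge base: word -> (translation, rule that supplied it).
knowledge = {
    "haus": ("house", "lexicon rule #12"),
    "rot": ("red", "lexicon rule #7"),
}

def translate_with_trace(sentence: str):
    """Translate word by word, recording which knowledge chunk fired."""
    output, trace = [], []
    for word in sentence.split():
        target, rule = knowledge.get(word, (word, "no rule fired"))
        output.append(target)
        trace.append(f"{word} -> {target} ({rule})")
    return " ".join(output), trace

text, explanation = translate_with_trace("haus rot")
print(text)         # house red
print(explanation)  # ['haus -> house (lexicon rule #12)', 'rot -> red (lexicon rule #7)']

# Updating means editing a localized chunk, never rewriting the program.
del knowledge["rot"]
```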
The overall theme of the 20 introductions tends to be socially conservative, and their technical aspects are similarly conservative in nature. The real meaning of the introductions can be grasped through the immediate reflection of the seemingly absurd into the "real", which may or may not be investigated further in the rest of each paper. A reader may try to trace such ephemeral sequences, partly in an attempt to reassemble the introductions in his or her own mind from their disoriented components; this kind of reading is very common nowadays.
The introductions of the 20 papers include automatist outputs and interpretations of machine translation. They also reflect some of the authors' lack of appreciation for the literal terms applied to machine translation, emphasizing its technical representations instead. Nor do the papers focus on technical representations alone; they also attend to the symbolism present in the unclear processes within machine translation.
Because the introductions of the 20 papers seem to systematize our ideas about machine translation and the processes it involves, I find some of them extremely difficult to comprehend. This reading is, however, hypothetical. We must assume that when the authors wrote their introductions, they sought to free their collective states of mind by tapping the imaginative capabilities of their "unconscious minds". For these authors, this principle of transforming everyday situations into ones that integrate machine translation concepts with the unconscious capabilities of the mind manifested itself as a means of releasing personal, cultural, political and social revolution, sometimes described as a complete evolution of life in terms of independence and emotion. At various points I found myself agreeing with the belief that machine translation can enable rapid economic and social development.
Because of the introductions' technical quality, their definitions and messages can be taken in quickly, yet they resist speedy interpretation and remain ambiguous. They speak to our subconscious and are expressions of it. The introductions of the 20 papers possess this dialectic of clarity and mystery. In writing them, the authors took up and incorporated a wide variety of influences with respect to machine translation, combining contradictory tendencies and melding them into their own personal styles, in which intellect and feeling, consciousness and the subconscious, objective form and subjective intuition form a single unity.
More distinctively, the introductions of the 20 papers represent an art that reflects strong feelings and emotions. They may not be especially appealing to the reader's eye, but they have the capability to touch the reader through the portrayal of intense emotions and ideas highly reflective of machine translation.
Credit: ivythesis.typepad.com