
Notice from the School of Foreign Languages on Holding the Yancheng Institute of Technology Selection Round of the 31st "Han Suyin International Translation" Contest

Published: 2019-05-15

In order to further improve students' English proficiency, help them master basic translation skills, and meet the practical needs of their future work, the School of Foreign Languages of Yancheng Institute of Technology will hold the Yancheng Institute of Technology selection round of the 31st "Han Suyin International Translation" Contest from May 13 to May 23, open to all enrolled students of Yancheng Institute of Technology (contest texts attached below).

In each direction, Chinese-to-English and English-to-Chinese, the contest will award one first prize, three second prizes, and six third prizes, ten winners in all, plus a number of honorable mentions.

I. Contest Period

May 13 to May 23.

II. Entry Requirements

1. Translate the contest texts in the attachment (this year's contest offers two categories, English-to-Chinese and Chinese-to-English; entrants may choose either category or enter both).

2. Entries must be printed on A4 paper, with Chinese text in SimSun (Songti) and English text in Times New Roman, in small-four (12 pt) type throughout, with 1.5 line spacing. The page header should read "XXX (name), English-to-Chinese" or "XXX (name), Chinese-to-English", together with the entrant's school, class, and telephone number.

3. Entrants must keep their translations to themselves until the submission deadline and must not publish them in books, newspapers, periodicals, on the internet, or in any other medium; violators will be disqualified and will bear all consequences arising therefrom.

III. Submission

All entrants should deliver their printed translations to the office of the International Cooperation Foreign Language Department, School of Foreign Languages, Room A410, Runmei Building. Submission contacts: teacher Li (13032599310) and teacher Xu (13505102853).

Attached contest source texts:

The 31st Han Suyin International Translation Contest

Chinese-to-English Contest Source Text

Lu Xun once told a joke: at noon on a scorching hot day, a farm woman toiling at her work suddenly sighed, "The Empress must be living in unimaginable comfort. At this very hour she'd be taking her midday nap, and when she woke she'd call out, 'Eunuch, bring me a dried persimmon!'" This is what is meant by "poverty limits the imagination": a careworn mountain-village farm woman who had never seen the wider world could rack her brains and still picture the Empress's life of luxury reaching no higher than this.

I forget which comedy sketch it was, but an actor playing a poor man, his face full of envy, daydreamed aloud about the life of the rich: "Someday when I've got money, I'll have soybean milk and fried dough sticks every single day, dipping them in white sugar when I please and brown sugar when I please. And I'll buy two bowls of soybean milk: drink one and pour the other away!" Poverty and low station limited his imagination; he had no way of knowing that the lives of the rich are far more splendid and extravagant than that.

In fact, it is not only poverty that limits the imagination; great wealth limits it just as surely. Jia Baoyu, raised from childhood in silks and delicacies, as if in a jar of honey, had no idea how the poor lived. When he went to visit the ailing Qingwen at her home and saw a room of ragged, filthy squalor, with bare walls and not even a place to sit, he was at once shocked and given an eye-opening lesson. This is exactly what Lu Xun meant by his famous remark: "How could a kerosene magnate ever know the bitterness endured by the old woman picking cinders in Beijing?"

Standing high above others also limits the imagination. The classic example is, of course, Emperor Hui of Jin and his "Why don't they eat meat porridge?" Born deep within the palace and steeped all his life in wealth and splendor, it is no surprise that he could ask a question absurd enough to be remembered for a thousand years; admittedly, he was also somewhat feeble-minded. But even an emperor as shrewd and learned as Qianlong, having long lived in the depths of the palace, knew little of the life of the common people and was entirely in the dark about the world overseas. He too was trapped by a poverty of imagination: not only was he repeatedly deceived by his ministers without realizing it, he also could not believe the enormous scientific and economic changes taking place abroad, and so missed a golden opportunity to bring his country into step with the world.

The reason is simple: existence determines consciousness. The conditions under which one comes to know the objective world determine the depth, height, and breadth of one's thinking, and since everyone is confined to a certain space and limited by certain conditions, the ground from which imagination springs differs from person to person, and so does its reach. Cao Xueqin could write A Dream of Red Mansions because he had once lived that life of great wealth and knew what "oil poured on a blazing fire," what extravagance and dissipation, really meant; add to that his rich imagination and creative genius, and this masterpiece burst forth to shine across the ages. Such material, however, was beyond the command of Pu Songling, Wu Cheng'en, or Shi Nai'an; this was not a matter of talent, but of lacking the necessary foundation for imagination.

There is only one way to break out of the vicious circle of limited imagination, and it comes down to a single phrase: true knowledge comes from practice. The widely traveled and well-informed will surely have far richer imaginations than the ignorant and isolated; those who keep their feet on the ground will surely have far richer imaginations than those who shut themselves away; the well-read will surely have far richer imaginations than the unlearned. Therefore, if we want our imagination to be less constrained, the only way is to read ten thousand books and travel ten thousand miles.

By comparison, when the imagination of the poor or the rich is constrained, that is their private affair; it may affect their quality of life or the speed at which they grow wealthy. But when the imagination of officials and leaders is constrained, it affects the prospects and happiness of the people of an entire department or region: mistaken decisions and blind commands, impulsive self-assurance, conservatism and complacency all have constrained imagination among their causes. The remedy is to escape the mountains of paperwork and seas of meetings, to go down and walk around more, to go deep among the grassroots, to listen to the voices of the masses, and to learn of the people's hardships. One's thinking will then naturally broaden and one's imagination grow richer, leading in turn to sound planning, scientific decision-making, effective measures, and solid achievements.

What limits our imagination? It does us much good to give this question more thought.

English-to-Chinese Contest Source Text

Outing A.I.: Beyond the Turing Test

The idea of measuring A.I. by its ability to "pass" as a human, dramatized in countless sci-fi films, is actually as old as modern A.I. research itself. It is traceable at least to 1950, when the British mathematician Alan Turing published "Computing Machinery and Intelligence," a paper in which he described what we now call the "Turing Test" and which he referred to as the "imitation game." There are different versions of the test, all of which are revealing as to why our approach to the culture and ethics of A.I. is what it is, for good and bad. In the most familiar version, a human interrogator asks questions of two hidden contestants, one a human and the other a computer. Turing suggests that if the interrogator usually cannot tell which is which, and if the computer can successfully pass as human, then can we not conclude, for practical purposes, that the computer is "intelligent"?

More people “know” Turing’s foundational text than have actually read it. This is unfortunate because the text is marvelous, strange and surprising. Turing introduces his test as a variation on a popular parlor game in which two hidden contestants, a woman (player A) and a man (player B) try to convince a third that he or she is a woman by their written responses to leading questions. To win, one of the players must convincingly be who they really are, whereas the other must try to pass as another gender. Turing describes his own variation as one where “a computer takes the place of player A,” and so a literal reading would suggest that in his version the computer is not just pretending to be a human, but pretending to be a woman. It must pass as a she.

Passing as a person comes down to what others see and interpret. Because everyone else is already willing to read others according to conventional cues (of race, sex, gender, species, etc.) the complicity between whoever (or whatever) is passing and those among which he or she or it performs is what allows passing to succeed. Whether or not an A.I. is trying to pass as a human or is merely in drag as a human is another matter. Is the ruse all just a game or, as for some people who are compelled to pass in their daily lives, an essential camouflage? Either way, “passing” may say more about the audience than about the performers.

That we would wish to define the very existence of A.I. in relation to its ability to mimic (how humans think that humans think) will be looked back upon as a weird sort of speciesism. The legacy of that conceit helped to steer some older A.I. research down disappointingly fruitless paths, hoping to recreate human minds from available parts. It just doesn't work that way. Contemporary A.I. research suggests instead that the threshold by which any particular arrangement of matter can be said to be "intelligent" doesn't have much to do with how it reflects humanness back at us. As Stuart Russell and Peter Norvig (now director of research at Google) suggest in their essential A.I. textbook, biomorphic imitation is not how we design complex technology. Airplanes don't fly like birds fly, and we certainly don't try to trick birds into thinking that airplanes are birds in order to test whether those planes "really" are flying machines. Why do it for A.I. then? Today's serious A.I. research does not focus on the Turing Test as an objective criterion of success, and yet in our popular culture of A.I., the test's anthropocentrism holds such durable conceptual importance. Like the animals who talk like teenagers in a Disney movie, other minds are conceivable mostly by way of puerile ventriloquism.

Where is the real injury in this? If we want everyday A.I. to be congenial in a humane sort of way, so what? The answer is that we have much to gain from a more sincere and disenchanted relationship to synthetic intelligences, and much to lose by keeping illusions on life support. Some philosophers write about the possible ethical “rights” of A.I. as sentient entities, but that’s not my point here. Rather, the truer perspective is also the better one for us as thinking technical creatures.

Musk, Gates and Hawking made headlines by speaking to the dangers that A.I. may pose. Their points are important, but I fear were largely misunderstood by many readers. Relying on efforts to program A.I. not to “harm humans” (inspired by Isaac Asimov’s “three laws” of robotics from 1942) makes sense only when an A.I. knows what humans are and what harming them might mean. There are many ways that an A.I. might harm us that have nothing to do with its malevolence toward us, and chief among these is exactly following our well-meaning instructions to an idiotic and catastrophic extreme. Instead of mechanical failure or a transgression of moral code, the A.I. may pose an existential risk because it is both powerfully intelligent and disinterested in humans. To the extent that we recognize A.I. by its anthropomorphic qualities, or presume its preoccupation with us, we are vulnerable to those eventualities.

Whether or not “hard A.I.” ever appears, the harm is also in the loss of all that we prevent ourselves from discovering and understanding when we insist on protecting beliefs we know to be false. In the 1950 essay, Turing offers several rebuttals to his speculative A.I., including a striking comparison with earlier objections to Copernican astronomy. Copernican traumas that abolish the false centrality and absolute specialness of human thought and species-being are priceless accomplishments. They allow for human culture based on how the world actually is more than on how it appears to us from our limited vantage point. Turing referred to these as “theological objections,” but one could argue that the anthropomorphic precondition for A.I. is a “pre-Copernican” attitude as well, however secular it may appear. The advent of robust inhuman A.I. may let us achieve another disenchantment, one that should enable a more reality-based understanding of ourselves, our situation, and a fuller and more complex understanding of what “intelligence” is and is not. From there we can hopefully make our world with a greater confidence that our models are good approximations of what’s out there.

School of Foreign Languages

May 13, 2019

Address: No. 1 Hope Avenue Middle Road, Yancheng, Jiangsu Province  Postal code: 224051  School of Foreign Languages, Yancheng Institute of Technology  © 2008-2016