
A Look at Ten Major Global AI Governance Events: Will AI at the Crossroads Spin Out of Control?
2020-01-11 17:01 | Source: www.miss9.cn


AI has significantly improved the productivity of society and made everyday life more comfortable and convenient. At the same time, its arrival challenges human society on many fronts, from ethics and morality to privacy and security. At the start of the new year, we look back at ten of the most representative AI ethics events worldwide from the past two years. Our view is that while technology should be allowed to advance at a measured pace, an appropriate balance must also be struck among privacy, security and convenience.

"Artificial intelligence may be the end of the human race." The world-famous theoretical physicist Stephen Hawking remained deeply wary of AI technology throughout his life.

Over the past decade, advances in algorithms, computing power and communications technology have given artificial intelligence an unprecedented window of opportunity. Moving from theory to practice and from the laboratory to industrial deployment, AI has set off a worldwide contest spanning industry, academia and research.

AI technology has markedly improved the efficiency of social production while making daily life more comfortable and convenient. But just as Hawking feared, its emergence challenges human society on every front, from ethics and morality to privacy and security.

In the past year or two in particular, as AI has been deployed at industrial scale, conflicts between people and artificial intelligence for which there is no precedent have begun to surface. Worry and skepticism have grown louder, and working out a governance mechanism for AI that is predictable, constrainable and oriented toward good has become the defining question of the early AI era.

Megvii, for example, a technology company focused on AI and one of the pioneers of AI industrialization, has recognized the importance of confronting the ethical issues raised by AI technology head-on, and argues that AI companies should treat governance as a matter of the highest priority.

In July 2019 Megvii published its principles for the application of artificial intelligence, and later that year it formally established an AI Governance Research Institute. Its hope is that all sectors will pay rational attention to AI incidents and study the problems behind them in depth, because only constructive discussion across society can ultimately turn "AI for good" into concrete action. Megvii co-founder and CEO Yin Qi summed up the company's commitment to AI governance in four phrases: "rational attention, in-depth research, constructive discussion, and persistent action."

At the start of 2020, the AI Governance Research Institute begins by reviewing ten of the world's most representative AI governance events, hoping to work with all parties to find solutions to the deeper problems these events reveal.

Remember the portrait titled Edmond de Belamy? Auctioned by Christie's in New York, it ultimately sold for 432,000 dollars, outstripping Picasso works sold at Christie's at the same time. The rising star behind it was not a person but an artificial intelligence "painter."

Today's AI seems to master every art: besides news writing, image generation, and video and music creation, it can perform as a singer and swap celebrities' faces. It can also assist researchers in everything from astronomical exploration to product development.

Yet while bringing efficiency and enjoyment to human production and life, these clever AI systems have also created thorny problems, challenging the bottom line of human ethics and straining legal boundaries drawn in the pre-AI era.

A team of international patent attorneys led by Professor Ryan Abbott of the University of Surrey in the UK filed patent applications for devices invented by the AI system DABUS they were using. The inventor field on the applications named DABUS (the creator of the DABUS system is Stephen Thaler, CEO of Imagination Engines).

According to its description, DABUS is a connectionist AI model. The system consists of two neural networks: the first, once trained, perturbs its neuron connection weights to generate responses and thereby produce new ideas; the second acts as a critic, monitoring and comparing those ideas against existing knowledge to pick out the innovative ones and feeding the result back to the first network so that it generates the most creative ideas.
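The article gives only a high-level description of DABUS, so the following is a minimal, illustrative sketch in Python of the generate-and-critique loop it outlines; the network shapes, the weight-noise perturbation and the distance-based novelty score are assumptions for illustration, not DABUS's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Generator": a small trained projection whose connection weights are perturbed
# with noise to produce candidate "idea" vectors.
W = rng.normal(size=(16, 8))          # stand-in for trained connection weights
seed = rng.normal(size=16)            # stand-in for the network's input state

def generate_idea(noise_scale=0.1):
    perturbed = W + rng.normal(scale=noise_scale, size=W.shape)
    return np.tanh(seed @ perturbed)  # candidate idea vector

# "Critic": scores a candidate by how far it sits from already-known ideas,
# i.e. a crude novelty measure against existing knowledge.
known_ideas = rng.normal(size=(100, 8))

def critic_score(idea):
    return float(np.min(np.linalg.norm(known_ideas - idea, axis=1)))

# Feedback loop: keep the most "creative" (most novel) candidate.
best = max((generate_idea() for _ in range(1000)), key=critic_score)
print("most novel candidate:", np.round(best, 2))
```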

It was under this mechanism that DABUS "came up with" several intriguing inventions, such as a new type of container for holding drinks and a signalling device to help search-and-rescue teams locate their targets.

Abbott's team filed patent applications in the UK, the US and the EU; the UK and the EU both turned them down. The UK Intellectual Property Office said it would not recognize the AI system DABUS as a qualifying inventor because the machine is "not a person," and therefore would not accept the applications. The European Patent Office likewise rejected the DABUS applications, saying they failed its requirement that the inventor designated in a patent application must be a human being, not a machine.

Abbott, however, does not accept the European Patent Office's reasoning. He draws an analogy with students: when a trained student applies for a patent, the patent goes to the student rather than to the person who trained them; by the same logic, a machine that has been trained should also qualify as an inventor. In fact, last year the United States granted a patent on a DABUS invention.

Abbott's explanation is that although US law also stresses that an applicant must be an "individual," the provision was intended to prevent corporations from claiming inventorship and becoming patent holders; it was never drafted with careful consideration of AI or autonomous thinking machines as inventors.

Notably, as artificial intelligence continues to reach into areas of human intellectual creation such as content creation and invention, increasingly playing the role of "creator" or "inventor," the international community has begun working on how AI affects intellectual property regimes covering copyright, patents, trademarks and trade secrets.

On the patent side, for instance, the questions include how to define AI inventions (both inventions made using AI and inventions developed by AI), how to assess a natural person's contribution to an AI invention, and whether AI-related patents require new forms of intellectual property protection, such as protection for data.

By comparison, the EU's measures on AI tend to be stricter. Some expert groups have even argued for hard limits on AI, including a ban on granting AI systems or robots legal personality.

On the evening of August 30, 2019, an AI face-swapping app flooded Chinese social media; with just one frontal photo, a user could replace a character's face in a video with their own. So many users poured in that the app's servers burned through more than 2 million yuan in operating costs in a single night.

The change in the online world began earlier, though. In early 2018 an anonymous Reddit user going by "Deepfake" used a home computer and open-source AI tools to put together an AI "face-swapping" technique that could replace one person's face with another's in any video.

Within a few weeks the internet was awash with crude pornography bearing celebrities' faces. Reddit quickly banned Deepfake, but it was too late: the technology had already taken root online.

Humanity's fascination with changing faces is ancient. From the face-changing of Sichuan opera to the "Painted Skin" tale in Strange Stories from a Chinese Studio, the enduring appeal of these classics may lie in the fantasy they offer of switching identities and escaping the constraints of reality.

Computer-based face-altering techniques have existed for a long time, but AI has made face-swapping truly accessible for the first time, driving both the technical barrier and the barrier to distribution down to almost nothing. In particular, packaging it as a mobile app, as the face-swapping software did, "is probably a first," some industry insiders said.

Because many frequently used apps register and log in users with a phone number plus a facial image, Chinese users worried that the face-swapping software could be exploited by criminals, for example to synthesize faces that pass face-based payment, or to impersonate family and friends on WeChat video calls without being detected, enabling fraud or even more serious crimes.

Beyond that, the app's handling of users' portrait rights was itself full of controversy and even traps: how should users' use of other people's likenesses be regulated, and how can users' own portrait rights be protected from infringement and abuse?

The face-swapping app has since been blocked by WeChat in China over "security risks." Yet according to media reports, code with Deepfake-style functionality has appeared in the Android version of a well-known short-video app's overseas edition. Although it bars minors from the feature, only lets you swap in your own face and stops users from uploading their own source videos, the move suggests that the short-video app's parent company is still willing to embrace the controversial technology.

There are also technical countermeasures. Existing face-swapping methods all have weaknesses; for example, face-swapped video based on generative adversarial networks usually cannot be produced in real time, so detection can be strengthened by requiring specified interactions from the subject in real time.
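As a sketch of how such real-time interaction checks might work: issue a random challenge and require both a correct action and a response fast enough that an offline face-swapping pipeline could not keep up. The challenge list, the latency budget and the two callables below are hypothetical placeholders for illustration, not any product's actual API.

```python
import secrets
import time

CHALLENGES = ["blink twice", "turn head left", "say the number seven"]
MAX_LATENCY_S = 1.5   # assumed budget; a non-real-time face-swap pipeline cannot respond this fast

def run_liveness_check(capture_response, classify_action):
    """Issue a random prompt and require a correctly performed, low-latency response.

    `capture_response` and `classify_action` are hypothetical callables supplied by
    the caller: one records a short video clip, the other labels the action in it.
    """
    challenge = secrets.choice(CHALLENGES)
    issued_at = time.monotonic()
    clip = capture_response(challenge)            # e.g. record a couple of seconds of video
    latency = time.monotonic() - issued_at
    action_ok = classify_action(clip) == challenge
    return action_ok and latency <= MAX_LATENCY_S
```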

On the legal side, the second review draft of the personality rights section of China's Civil Code, released on April 20, 2019, already included provisions regulating AI face-swapping. Beyond that, the drafting and promotion of AI technical standards should be strengthened; by setting norms for deep learning networks, AI chips and other technologies, supervision and guidance can be reinforced at the source.

As for experience abroad, Germany set up a Data Ethics Commission in 2018 to draw up ethical standards and concrete guidelines for the federal government's digital society. Last October the commission published its recommendations on data and algorithms, the core of which is a five-level risk classification for digital service companies' use of data, with different regulatory measures applied to companies in different risk categories.

Companies themselves should also recognize that if people abuse what you have built, your business will suffer, so it is best to try to prevent that from happening in the first place.

The US firm Narrative Science predicts that within 15 years more than 90 percent of news copy will be written by artificial intelligence. But what if it is also good at writing fake news?

On February 15, 2019, the AI research lab OpenAI unveiled GPT-2, a piece of software that can write convincing fake news from nothing more than a short piece of seed text.

OpenAI demonstrated how the software writes a story. Researchers fed it the following prompt: "A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown." From that, the software produced a seven-paragraph news story, complete with quotes from government officials, except that all of the information was fabricated.
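OpenAI's original demo tooling is not reproduced in the article, but the same kind of prompt continuation can be tried with the publicly released GPT-2 weights. Below is a minimal sketch using the Hugging Face transformers pipeline; the model name "gpt2" (the small public checkpoint) and the sampling settings are illustrative assumptions, not the configuration OpenAI used.

```python
# pip install transformers torch
from transformers import pipeline

# Load the publicly released (small) GPT-2 weights as a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

prompt = ("A train carriage containing controlled nuclear materials was stolen "
          "in Cincinnati today. Its whereabouts are unknown.")

# Sample a continuation; the model invents plausible-sounding but fabricated details.
result = generator(prompt, max_length=200, do_sample=True, top_k=50, num_return_sequences=1)
print(result[0]["generated_text"])
```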

As a technical breakthrough, it is exciting. "The reason GPT-2 is so exciting is that predicting text has been seen as something of an 'uber-task' for computers, a challenge which, if cracked, would open the floodgates of intelligence," Ani Kembhavi, a researcher at the Allen Institute for AI, told The Verge.

But if GPT-2 can be used to write fake news, it could in principle also be used to produce hate speech and violent rhetoric, as well as spam, fake social media posts and the like. And because text generated by GPT-2 is not copied and pasted but produced on the fly, such harmful content cannot easily be tracked and cleaned up.

OpenAI responded on two fronts. It stressed that the tool is meant to serve policymakers, journalists, writers, artists and similar groups, who would test what GPT-2 could help them write. It also acknowledged that such a powerful tool could be dangerous, and therefore released only a smaller model with more limited capability.

Some researchers point out that when humans publish fake news they usually do so with a specific purpose, whereas a language model generates text with no purpose at all. Models like GPT-2 are built to produce text that looks more realistic, coherent and on-topic; in practice, using them to churn out fake news at scale is not as simple as it sounds.

These researchers also found that the model is good at many interesting kinds of generation, but what it is worst at is precisely what people feared most: producing disinformation and other harmful content.

Many engineers prefer to look to technology itself for countermeasures. Grover, for example, is built on the idea that the best way to detect AI-generated fake news is to build an AI model that can itself write fake news.
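Grover's own model and training data are not covered here, but the underlying principle, using a generative model to spot generated text, can be illustrated with a simpler related heuristic: passages sampled from a language model tend to look unusually predictable (low perplexity) to a similar model. The sketch below uses GPT-2 via the transformers library; the perplexity threshold is an arbitrary assumption, and this is only a rough screening signal rather than Grover's actual method.

```python
# pip install transformers torch
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower means more 'machine-predictable'."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return float(torch.exp(out.loss))

SUSPICION_THRESHOLD = 20.0   # arbitrary cut-off, for illustration only

def looks_machine_generated(text: str) -> bool:
    return perplexity(text) < SUSPICION_THRESHOLD
```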

Beyond swapping your own face, the thing you are most likely to encounter these days, wherever you go, is a camera with face recognition: riding the subway, entering residential compounds and campuses, going into parks, even taking toilet paper in a public restroom now involves "scanning your face" to prevent theft. Some experts say the average Chinese person faces more than 500 cameras a day. But the rapid spread of cameras and face recognition has also produced a stream of controversial stories, from leaks of facial data and black-market trading to China's first face recognition lawsuit.

On April 27, 2019, Guo Bing, a distinguished associate professor at Zhejiang Sci-Tech University, bought an annual pass to Hangzhou Safari Park (Hangzhou Wildlife World) for a fee of 1,360 yuan. The contract promised that during the card's one-year validity period, the holder could enter the park by verifying both the annual card and a fingerprint, with unlimited visits for that year.

On October 17 of the same year, the park notified Guo by text message that "the annual pass system has been upgraded to face recognition entry; the original fingerprint entry has been cancelled, and users who have not registered for face recognition will not be able to enter the park." When Guo went in person to verify this, staff confirmed the message and told him explicitly that without registering for face recognition he could neither enter the park nor go through the procedures for cancelling the card and getting a refund.

Guo argued that face recognition under the upgraded pass system would collect his facial features and other personal biometric information, which counts as sensitive personal data; once leaked, unlawfully disclosed or abused, it could easily endanger the personal and property safety of consumers, himself included.

After negotiations failed, Guo filed suit with the Fuyang District People's Court in Hangzhou on October 28, 2019, and the court has formally accepted the case.

Face recognition involves collecting biometric data of great importance to the individual. If even asking for a phone number, a name or a home address requires a citizen's consent, how much more so for facial data? Before collecting it, an organization or institution must demonstrate that doing so is both lawful and necessary.

No one denies, of course, that face recognition has worked wonders. Netizens have tallied that since 2018, police relying on face recognition have caught dozens of fugitives at the nationwide concert tour of "god of songs" Jacky Cheung, and the technology has also helped people find missing or lost relatives. Yet one basic reality cannot be ignored: the explosion of application scenarios has not been matched by institutional and technical safeguards for how the data is collected, stored and used.

"At present the protection of people's biometric features, such as the iris, the face and fingerprints, is not written into existing law. Who bears responsibility for protecting the information, where that responsibility ends, and how the information may be used, processed and destroyed are nowhere specified," Zhu Wei, deputy director of the Communication Law Research Center at China University of Political Science and Law, told China Newsweek. While the law lags behind, he argued, business ethics must carry binding force: "As the source of the technology and its beneficiaries, face recognition companies must give users the right to know the risks."

Microsoft has already deleted MS Celeb, its largest face recognition database. According to the Financial Times, many of the people whose images were collected in the database never authorized it; MS Celeb scraped and indexed the images under Creative Commons licences, which drew objections from some quarters.

In China, more and more face recognition practitioners acknowledge that the privacy and security problems the technology raises can no longer be ignored, and that an appropriate balance must be found among privacy, security and convenience, allowing the technology to advance at a measured pace while protecting citizens' privacy.

Bringing advanced technology into the classroom is among the most contested applications of all. In November 2019, video of primary school pupils in Zhejiang wearing monitoring headbands drew widespread attention and controversy.

In the video, the children wear headbands billed as "brain-computer interfaces," which are claimed to record how focused they are in class and to generate data and scores that are sent to teachers and parents.

The headband's developer responded in a statement that brain-computer interface technology is an emerging field and not easily understood. The "score" mentioned in the reports is the class's average attention level, not the per-student attention figure that netizens assumed, and the attention reports compiled from the data have no function for sending them to parents. The headband also does not need to be worn continuously; one training session a week is enough.

Beyond brain-computer interfaces, security vendors are also keenly interested in this setting. In theory, a face recognition system could scan students' faces with a camera at regular intervals, capturing and analysing their posture and facial expressions to judge whether they are paying attention. Compared with the familiar "face scan" identity checks used for access control, teaching makes far higher demands and is seen as requiring "a deep understanding of people."

Professor Kate Crawford, co-founder of AI Now, once told the BBC: "Emotion recognition technology claims it can read our inner emotional state by interpreting our micro-expressions, our tone of voice and even the way we walk. It is being used widely across society, from finding the perfect employee in job interviews, to assessing patients' pain in hospitals, to tracking which students are paying attention in class."

France's data protection authority CNIL has declared that using face recognition in schools violates the principles of the GDPR and is therefore unlawful. In China, by contrast, some argue that a classroom is a public place and so no privacy is being invaded, while others insist that face recognition necessarily captures students' likenesses and therefore infringes their privacy. Even setting privacy aside, using face recognition to monitor students' state in class touches on the fundamental values of education.

Where exactly is the boundary for face recognition on campus? The Ministry of Education is drafting management rules for face recognition technology. In an earlier reply to media questions, officials said that bringing face recognition into schools raises both data security and personal privacy issues; students' personal information must be treated with great caution, data that need not be collected should not be collected, and as little as possible should be collected, especially where personal biometric information is involved.

Some media reports note, for instance, that despite doubts about such products' scientific validity, police in the United States and the United Kingdom are indeed using the eye-detection software Converus, which examines eye movements and changes in pupil size to flag potential deception.

Oxygen Forensics, which sells data extraction tools to agencies including the FBI, Interpol and London's police, likewise said in July this year that it had added emotion recognition tools to its products so that it could "analyse video and images captured by drones and identify known terrorists."

On September 13, 2019, the California State Legislature passed a three-year bill banning state and local law enforcement agencies from using facial recognition on body-worn cameras. If signed by Governor Gavin Newsom, the bill would take effect as law on January 1, 2020.

If it takes effect, the bill will make California the largest US state to ban the technology's use; several states, including Oregon and New Hampshire, already have similar bans.

Europe is even more cautious about face recognition. In May 2018 the European Union's General Data Protection Regulation (GDPR) came into force. Internet companies that unlawfully collect personal information (including fingerprints, facial data, retina scans and online location data) or fail to keep data secure can be fined up to 20 million euros or 4 percent of global turnover, which is why the regulation has been called the strictest in history.

Although the GDPR is not a perfect piece of legislation, it has had an impact internationally, and it has put the US government in an awkward position, since American companies may end up being regulated by other countries. American AI experts say this is the critical moment to start writing and implementing such rules, and that it will be one of the most important tasks of the next five years: the federal government needs to decide how artificial intelligence and its associated technologies should be regulated.

In 2017 a Stanford University study published in the Journal of Personality and Social Psychology sparked broad controversy. Trained on more than 35,000 profile photos of men and women from a US dating website, it used a deep neural network to extract features from the images and attempted to detect a person's sexual orientation from a facial photo.

On the task of "identifying sexual orientation," human judges performed worse than the algorithm, with an accuracy of 61 percent for men and 54 percent for women. When the software judged five photos per person, its accuracy was markedly different: 91 percent for men and 83 percent for women.

At heart, such an algorithm still invites abuse of people's portrait rights and data privacy, with potentially grave consequences. "If we start judging people as good or bad by their appearance, the results will be catastrophic," said Nick Rule, a professor of psychology at the University of Toronto.

Sexual orientation is private. If AI can forcibly infer it from photos, that is neither lawful nor humane. Once such technology spreads, spouses might use it to check whether they are being deceived and teenagers might use it to identify their peers, while the controversies it would trigger if aimed at particular groups are harder still to imagine. "If we start judging people as good or bad by their appearance, the results will be catastrophic."

At present, for the design and application of AI products, the Institute of Electrical and Electronics Engineers (IEEE) has put forward ethical standards of human rights, well-being, accountability and transparency in its Ethically Aligned Design guidelines, and more than a hundred technology giants worldwide, including Amazon, Microsoft, Google and Apple, have founded the non-profit Partnership on AI, which proposes an ethical framework built on fairness, avoidance of harm, openness and transparency, and accountability.

On the other hand, as some scholars have suggested, from the perspective of how these applications spread, it is necessary to introduce moral and legal rules for the use of AI and to build AI platforms that can be monitored, placing account-level controls on all users of unethical AI products. This would press technology companies to adjust and recalibrate their own research and development, so that AI ethics gradually becomes a foundation of public trust.

