MAY 2023

Generating Harm: Generative AI's Impact & Paths Forward

CONTRIBUTIONS BY
Grant Fergusson, Caitriona Fitzgerald, Chris Frascella, Megan Iorio, Tom McBrien, Calli Schroeder, Ben Winters, Enid Zhou

EDITED BY
Grant Fergusson, Calli Schroeder, Ben Winters, and Enid Zhou

Thank you to Sarah Myers West and Katharina Kopp for your generous comments on an earlier draft of the paper.

Notes on this Paper: This is version 1 of this paper and is reflective of documented and anticipated harms of Generative AI as of May 15, 2023. Due to the fast-changing pace of development, use, and harms of Generative AI, we want to acknowledge that this is an inherently dynamic paper, subject to changes in the future.

Throughout this paper, we use a standard format to explain the typology of harms that generative AI can produce. Each section first explains relevant background information and potential risks imposed by generative AI, then highlights specific harms and interventions that scholars and regulators have pursued to remedy each harm. This paper draws on two taxonomies of AI harms to guide our analysis:

1. Danielle Citron's and Daniel Solove's Typology of Privacy Harms, comprising physical, economic, reputational, psychological, autonomy, discrimination, and relationship harms;1 and
2. Joy Buolamwini's Taxonomy of Algorithmic Harms, comprising loss of opportunity, economic loss, and social stigmatization, including loss of liberty, increased surveillance, stereotype reinforcement, and other dignitary harms.2

These taxonomies do not necessarily cover all potential AI harms, and our use of these taxonomies is meant to help readers visualize and contextualize AI harms without limiting the types and variety of AI harms that readers consider.

Table of Contents
Introduction
Turbocharging Information Manipulation
Harassment, Impersonation, and Extortion
Spotlight: Section 230
Profits Over Privacy: Increased Opaque Data Collection
Increasing Data Security Risk
Confronting Creativity: Impact on Intellectual Property Rights
Exacerbating Effects of Climate Change
Labor Manipulation, Theft, and Displacement
Spotlight: Discrimination
The Potential Application of Products Liability Law
Exacerbating Market Power and Concentration
Recommendations
Appendix of Harms
References

EPIC | Generating Harm: Generative AI's Impact and Paths Forward

Introduction

OpenAI's decision to release ChatGPT, a chatbot built on the Large Language Model GPT-3, last November thrust AI tools to the forefront of public consciousness. In the last six months, new AI tools used to generate text, images, video, and audio based on user prompts have exploded in popularity. Suddenly, phrases like "Stable Diffusion," "Hallucinations," and "Value Alignment" were everywhere. Each day, new stories about the different capabilities of generative AI, and their potential for harm, emerged without any clear indication of what would come next or what impacts these tools would have.

While generative AI may be new, its harms are not. AI scholars have been warning us of the problems that large AI models can cause for years.3 These old problems are exacerbated by the industry's shift in goals from research and transparency to profit, opacity, and concentration of power. The widespread availability and hype of these tools have led to increased harm both individually and on a massive scale. AI replicates racial, gender, and disability discrimination, and these harms are woven inextricably through every issue highlighted in this report. OpenAI's and other companies' decisions to rapidly integrate generative AI technology into consumer-facing products and services have undermined longstanding efforts to make AI development transparent and accountable, leaving many regulators scrambling to prepare for the repercussions. And it is clear that generative AI systems can significantly amplify risks to individual privacy as well as to democracy and cybersecurity generally. In the words of the OpenAI CEO, who indeed had the power not to accelerate the release of this technology: "I'm especially concerned that these models could be used for widespread misinformation and offensive cyberattacks."

This rapid deployment of generative AI systems without adequate safeguards is clear evidence that self-regulation has failed. Hundreds of entities, from corporations to media and government entities, are developing and looking to rapidly integrate these untested AI tools into a wide range of systems. And this rapid rollout will have disastrous results without necessary fairness, accountability, and transparency protections built in from the beginning. We are at a critical juncture as policymakers and industry around the globe are focusing on the substantial risks and opportunities posed by AI. There is an opportunity to make this technology work for people. Companies should be required to show their work, make it clear when AI is in use, and offer informed consent through