The following covers recent applications of AI in the safety and security domain:
19. AI is already delivering major advances and efficiencies in many areas. AI quietly automates aspects of our everyday activities, from systems that monitor traffic to make our commutes smoother[17] to those that detect fraud in our bank accounts.[18] AI has revolutionised large-scale safety-critical practices in industry, like controlling the process of nuclear fusion.[19] And it has also been used to accelerate scientific advancements, such as the discovery of new medicines[20] or the technologies we need to tackle climate change.[21]

20. But this is just the beginning. AI can be used in a huge variety of settings and has the extraordinary potential to transform our society and economy.[22] It could have as much impact as electricity or the internet, and has been identified as one of five critical technologies in the UK Science and Technology Framework.[23] As AI becomes more powerful, and as innovators explore new ways to use it, we will see more applications of AI emerge. As a result, AI has a huge potential to drive growth[24] and create jobs.[25] It will support people to carry out their existing jobs, by helping to improve workforce efficiency and workplace safety.[26] To remain world leaders in AI, attract global talent and create high-skilled jobs in the UK, we must create a regulatory environment where such innovation can thrive.

21. Technological advances like large language models (LLMs) are an indication of the transformative developments yet to come.[27] LLMs provide substantial opportunities to transform the economy and society. For example, LLMs can automate the process of writing code and …

Footnotes:
[17] Transport apps, like Google Maps and CityMapper, use AI.
[18] Artificial Intelligence in Banking Industry: A Review on Fraud Detection, Credit Management, and Document Processing, ResearchBerg Review of Science and Technology, 2018.
[19] Accelerating fusion science through learned plasma control, DeepMind, 2022; Magnetic control of tokamak plasmas through deep reinforcement learning, Degrave et al., 2022.
[20] …
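As a concrete illustration of the bank-fraud detection mentioned in paragraph 19, the sketch below fits an off-the-shelf anomaly detector to simulated transaction data and flags an outlier. The feature names, numbers, and contamination rate are hypothetical assumptions chosen for illustration, not a description of any deployed banking system.

```python
# Illustrative sketch only: an unsupervised anomaly detector of the kind
# used for transaction-fraud screening. All data here is simulated.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulated features per transaction: [amount_gbp, seconds_since_last_txn]
normal = rng.normal(loc=[40.0, 3_600.0], scale=[15.0, 900.0], size=(1_000, 2))
suspect = np.array([[4_750.0, 12.0]])  # large amount, rapid-fire timing

# Fit on historical activity, then score new transactions.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)

print(detector.predict(suspect))  # -1 flags an anomaly, 1 means inlier
```

In practice such systems combine many more signals and feed a human review queue rather than blocking transactions outright; the point of the sketch is only the shape of the approach.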
Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public.

Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems' threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety.

Protect against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening. Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.

Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic, and set an example for the private sector and governments around the world.
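The last measure, authenticating official content, amounts to attaching a verifiable tag to a message so a recipient can check its origin. The sketch below illustrates the idea with a shared-secret HMAC; the key, message, and function names are assumptions made for this example, and a real provenance scheme would use public-key signatures and standardised metadata rather than this toy approach.

```python
# Minimal sketch of content authentication: an agency signs a message and
# a recipient verifies it. Hypothetical key and message; real schemes use
# asymmetric signatures and standard provenance metadata.
import hashlib
import hmac

SECRET_KEY = b"hypothetical-agency-signing-key"  # placeholder, not a real key

def sign(message: bytes) -> str:
    """Produce a hex tag binding the message to the signing key."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Constant-time check that the tag matches the message."""
    return hmac.compare_digest(sign(message), tag)

notice = b"Official notice: benefits portal maintenance this weekend."
tag = sign(notice)

assert verify(notice, tag)                   # authentic content passes
assert not verify(b"Tampered notice.", tag)  # altered content fails
```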
22. The concept of AI is not new, but recent advances in data generation and processing have changed the field and the technology it produces. For example, while recent developments in the capabilities of generative AI models have created exciting opportunities, they have also sparked new debates about potential AI risks.[39] As AI research and development continues at pace and scale, we expect to see even greater impact and public awareness of AI risks.[40]

23. We know that not all AI risks arise from the deliberate action of bad actors. Some AI risks can emerge as an unintended consequence or from a lack of appropriate controls to ensure responsible AI use.[41]

24. We have made an initial assessment of AI-specific risks and their potential to cause harm, with reference in our analysis to the values that they threaten if left unaddressed. These values include safety, security, fairness, privacy and agency, human rights, societal well-being and prosperity.

25. Our assessment of cross-cutting AI risk identified a range of high-level risks that our framework will seek to prioritise and mitigate with proportionate interventions. For example, safety risks include physical damage to humans and property, as well as damage to mental health.[42]

Footnotes:
[38] Intelligent security tools, National Cyber Security Centre, 2019.
[39] What is generative AI, and why is it suddenly everywhere?, Vox, 2023.
[40] See, for example, The Benefits and Harms of Algorithms, The Digital Regulation Cooperation Forum, 2022; Harms of AI, Acemoglu, 2021.
[41] AI Accidents: An Emerging Threat, Center for Security and Emerging Technology, 2021.
[42] AI for radiographic COVID-19 detection selects shortcuts over signal, DeGrave, Janizek and Lee, 2021; Pathways: How digital design puts children at risk, 5Rights Foundation, 2021.
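To make the structure of the cross-cutting assessment in paragraphs 24 and 25 concrete, the sketch below organises hypothetical risk entries against the values the white paper names. The entries, fields, and mitigations are illustrative assumptions, not the white paper's actual analysis.

```python
# Illustrative sketch of a cross-cutting risk register keyed to the values
# named in paragraph 24. All entries and fields are hypothetical.
from dataclasses import dataclass, field

VALUES = {"safety", "security", "fairness", "privacy and agency",
          "human rights", "societal well-being", "prosperity"}

@dataclass
class RiskEntry:
    description: str
    threatened_values: set[str]
    mitigations: list[str] = field(default_factory=list)

    def __post_init__(self) -> None:
        # Keep every entry tied to the framework's named values.
        unknown = self.threatened_values - VALUES
        if unknown:
            raise ValueError(f"unrecognised values: {unknown}")

register = [
    RiskEntry(
        description="Physical damage to humans and property from unsafe "
                    "automated control systems",
        threatened_values={"safety"},
        mitigations=["pre-deployment testing", "human oversight"],
    ),
    RiskEntry(
        description="Harm to mental health from manipulative service design",
        threatened_values={"safety", "societal well-being"},
        mitigations=["design codes for services used by children"],
    ),
]

for entry in register:
    print(f"- {entry.description} -> {sorted(entry.threatened_values)}")
```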