The AI Act is on its way, formally as well.
In the meantime, various guidelines can be found online.
These include:
- ACN adopts the international guidelines on the security of Artificial Intelligence (ACN)
- Guidelines for secure AI system development (ACN)
- 1710505409-final-version_declaration.pdf (G7)
- Call for views on the Cyber Security of AI (GOV.UK)
Gen AI analysis of the G7 document:
The most repeated values in the document are:
- Safe, secure, and trustworthy: This phrase is used repeatedly throughout the document, particularly when discussing the development and deployment of AI and other digital technologies.
- Sustainable: The document emphasizes the importance of sustainable development and the role of digital technologies in achieving the Sustainable Development Goals (SDGs).
- Inclusive: The document stresses the need for inclusive digital transformation, ensuring that everyone benefits from the opportunities presented by digital technologies.
- Resilient: The document highlights the importance of resilient digital infrastructure and supply chains, particularly in the context of global challenges and crises.
- Open: The document emphasizes the need for an open and interoperable internet, as well as open collaboration and knowledge sharing among countries.
- Global: The document stresses the importance of global cooperation and collaboration in addressing digital challenges and opportunities.
Gen AI analysis of the ACN document:
The document is a set of guidelines for secure AI system development, published by the UK National Cyber Security Centre (NCSC), the US Cybersecurity and Infrastructure Security Agency (CISA), and several international partners.
The document covers four key areas:
- Secure Design: This section focuses on understanding risks, threat modeling, and making secure design choices for the AI system.
- Secure Development: This section emphasizes secure supply chain practices, proper documentation, and managing technical debt.
- Secure Deployment: This section covers securing infrastructure, protecting models from compromise, developing incident management procedures, and responsible release of the AI system.
- Secure Operation and Maintenance: This section focuses on monitoring the system's behavior and inputs, managing updates, and sharing information about security vulnerabilities.
The document also emphasizes the importance of "secure by design" principles, which prioritize taking ownership of security outcomes, embracing transparency and accountability, and building organizational structures that prioritize security.
The document is intended for providers of AI systems, but it is also recommended for other stakeholders, such as data scientists, developers, managers, and decision-makers.
The document does not explicitly list its most repeated values, but it consistently emphasizes:
- Security: The document repeatedly stresses the importance of security in all stages of the AI system development lifecycle.
- Transparency: The document encourages transparency in all aspects of the AI system, including documentation, data sources, and potential failure modes.
- Accountability: The document emphasizes the importance of accountability for the security of AI systems.
These three values, security, transparency, and accountability, are consistently emphasized throughout the document.
---
The document "Guidelines for secure AI system development" provides a comprehensive framework for developing, deploying, and maintaining secure AI systems. Key phrases and concepts highlighted in the document include:
- Secure by design: Emphasizes building security into AI systems from the outset, rather than as an afterthought.
- AI security is different: Acknowledges the unique vulnerabilities of AI systems beyond traditional cybersecurity threats.
- Adversarial machine learning (AML): Describes the exploitation of vulnerabilities in AI systems to induce unintended behaviors.
- Supply chain security: Highlights the importance of securing the entire chain of components, data, and tools used in AI development.
- Technical debt: Recognizes the need to manage technical compromises made for short-term gains, which can impact security in the long run.
- Responsible release: Stresses the importance of thorough security evaluations and transparency about limitations before releasing AI systems.
- Secure design: Includes threat modeling, risk assessment, and secure design choices for models, data, and infrastructure.
- Secure development: Covers supply chain security, asset management, documentation, and technical debt management.
- Secure deployment: Focuses on securing infrastructure, protecting models and data, incident management, and responsible release practices.
- Secure operation and maintenance: Emphasizes monitoring system behavior and inputs, managing updates, and sharing information about security vulnerabilities.
- Model selection: Weighing security considerations against functionality, performance, interpretability, and data requirements.
- Data protection: Protecting sensitive data used for training and inference, as well as user feedback.
- Model hardening: Employing techniques like adversarial training and regularisation to improve model robustness.
- Threat modeling: Identifying potential attackers, their motivations, and the potential impact of successful attacks.
- Incident response: Developing plans to handle security incidents, including detection, containment, and remediation.
- Transparency and accountability: Providing clear documentation and information to users about system limitations and potential risks.
The document emphasizes the need for a proactive and holistic approach to AI security, involving all stakeholders throughout the system's lifecycle. It provides practical guidelines and considerations for organizations to build secure and trustworthy AI systems.
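The notions of adversarial machine learning and model hardening mentioned above can be made concrete with a small sketch. The toy example below (the linear model, the FGSM attack, and every parameter are illustrative assumptions, not anything prescribed by the guidelines) shows an attacker perturbing inputs to degrade a classifier, and adversarial training as one hardening technique:

```python
import numpy as np

# Illustrative sketch only: adversarial machine learning (AML) and
# "model hardening" on a toy logistic-regression classifier. All names
# and parameters are assumptions for illustration.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_gradient(X, y, w, b):
    # Gradient of the logistic loss with respect to the inputs.
    return (sigmoid(X @ w + b) - y)[:, None] * w[None, :]

def fgsm(X, y, w, b, eps=1.0):
    # Fast Gradient Sign Method: step each input in the direction
    # that increases the model's loss.
    return X + eps * np.sign(input_gradient(X, y, w, b))

def train(X, y, epochs=200, lr=0.5, adversarial=False):
    # Logistic regression by gradient descent; optionally trains on
    # FGSM-perturbed inputs (adversarial training, a hardening technique).
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        Xt = fgsm(X, y, w, b) if adversarial else X
        p = sigmoid(Xt @ w + b)
        w -= lr * Xt.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(X, y, w, b):
    return np.mean((sigmoid(X @ w + b) > 0.5) == y)

# Two well-separated Gaussian blobs as a toy dataset.
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])

w, b = train(X, y)                        # standard training
w_h, b_h = train(X, y, adversarial=True)  # hardened training

X_adv = fgsm(X, y, w, b)  # attack crafted against the standard model
print("clean accuracy:          ", accuracy(X, y, w, b))
print("accuracy under attack:   ", accuracy(X_adv, y, w, b))
print("hardened model, attacked:", accuracy(fgsm(X, y, w_h, b_h), y, w_h, b_h))
```

In a real system, hardening would rely on dedicated robustness tooling and evaluation rather than a hand-rolled linear example; the point here is only the shape of the attack-and-defence loop the guidelines describe.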