Explaining privacy and cybersecurity in companies
with simple, practical explanations, AI assisted
Observatory curated by Dr. V. Spataro



dizionario 2025-04-18 · Doc 99602

South Korea’s New AI Framework Act: A Balancing Act Between Innovation and Regulation

abstract:



Document annotated on 18.04.2025. Source: fpf.org
Link: https://fpf.org/blog/south-koreas-new-ai-framework




analysis:

The analysis is reserved for subscribers. Follow the Observatory newsletter or the Podcast (free 30-day subscription).





index:

Index

  • Framework Act on Artificial Intelligence
  • Act
  • AI
  • AI systems
  • AI technology
  • Generative AI
  • AI Development Business Operators
  • AI Utilization Business Operators
  • AI Business Operators



text:

Estimated reading time: 9 min


April 18, 2025


On 21 January 2025, South Korea became the first jurisdiction in the Asia-Pacific (APAC) region to adopt comprehensive artificial intelligence (AI) legislation. Taking effect on 22 January 2026, the Framework Act on Artificial Intelligence Development and Establishment of a Foundation for Trustworthiness (AI Framework Act or simply, Act) introduces specific obligations for “high-impact” AI systems in critical sectors, including healthcare, energy, and public services, and mandatory labeling requirements for certain applications of generative AI. The Act also includes substantial public support for private sector AI development and innovation through its support for AI data centers, as well as projects that create and provide access to training data, and encouragement of technological standardization to support SMEs and start-ups in fostering AI innovation. 

In the broader context of public policies in South Korea that are designed to allow the advancement of AI, the Act is notable for its layered, transparency-focused approach to regulation, moderate enforcement approach compared to the EU AI Act, and significant public support intended to foster AI innovation and development. We cover these in Parts 2 to 4 below. 

Key features of the law include:

  • Broad extraterritorial reach, applying to AI activities impacting South Korea’s domestic market or users;
  • Government support for AI development through infrastructure (AI data centers) and learning resources;
  • Focused oversight of “high-impact” AI systems in critical sectors like healthcare, energy, and public services; providers of most AI systems (all those that are not high-impact) are not regulated. The Act provides express carve-outs for AI used in security or national defense;
  • Transparency obligations for providers of generative AI products and services, including mandatory labeling of AI-generated content, and
  • A moderate enforcement approach with administrative fines up to KRW 30 million (approximately USD 21,000).

In Part 5, we provide a comparison with the European Union (EU)’s AI Act (EU AI Act). We note that while the AI Framework Act shares some common elements with the EU AI Act, including tiered classification and transparency mandates, South Korea’s regulatory approach differs in its simplified risk categorization, including the absence of prohibited AI practices, comparatively lower financial penalties, and the establishment of initiatives and government bodies aimed at promoting the development and use of AI technologies. The intent of this comparison is to assist practitioners in understanding and analyzing key commonalities and differences between both laws.

Finally, Part 6 of this article places the Act within South Korea’s broader AI innovation strategy and discusses the challenges of regulatory alignment between the Ministry of Science and ICT (MSIT) and South Korea’s data protection authority, the Personal Information Protection Commission (PIPC), in South Korea’s evolving AI governance landscape.

On 26 December 2024, South Korea’s National Assembly passed the Framework Act on Artificial Intelligence Development and Establishment of a Foundation for Trustworthiness (AI Framework Act or Act). 

The AI Framework Act was officially promulgated on 21 January 2025 and will take effect on 22 January 2026, following a one-year transition period to prepare for compliance. During this period, MSIT will assist with the issuance of Presidential Decrees and other sub-regulations and guidelines to clarify implementation details.

South Korea was the first country in the Asia-Pacific region to introduce a comprehensive AI bill in 2021: the Bill on Fostering Artificial Intelligence and Creating a Foundation of Trust. However, the legislative process faced significant hurdles, including political uncertainty surrounding the April 2024 general elections, raising concerns that the bill could be scrapped entirely.

However, by November 2024, South Korea’s AI policy landscape had grown increasingly complex, with 20 separate AI governance bills introduced since the National Assembly began its new term in June 2024, each independently proposed by different members. In November 2024, the Information and Communication Broadcasting Bill Review Subcommittee conducted a comprehensive review of these AI-related bills and consolidated them into a single framework, leading to the passage of the AI Framework Act.

At its core, the AI Framework Act adopts a risk-based approach to AI regulation. In particular, it introduces specific obligations for high-impact AI systems and generative AI applications. The AI Framework Act also has extraterritorial reach: it applies to AI activities that impact South Korea’s domestic market or users.

This blog post examines the key provisions of the Act, including its scope, regulatory requirements, and implications for organizations developing or deploying AI systems.

2.1 Definitions lay the foundation for how different AI systems will be regulated under the Act

Article 2 of the Act provides three AI-related definitions. 

  • First, AI is defined as “an electronic implementation of human intellectual abilities such as learning, reasoning, perception, judgment and language comprehension.” 
  • Second, AI systems are defined as “an artificial intelligence-based system that infers results such as predictions, recommendations and decisions that affect real and virtual environments for a given goal with various levels of autonomy and adaptability.” 
  • Third, AI technology is defined as “hardware, software technology, or utilization technology necessary to implement artificial intelligence.” 

At the core of the Act’s layered approach is its definition of “high-impact AI” (which is subject to more stringent requirements). “High-impact AI” refers to AI systems “that may have a significant impact on or pose a risk to human life, physical safety, and basic rights,” and is utilized in critical sectors identified under the AI Framework Act, including energy, healthcare, nuclear operations, biometric data analysis, public decision-making, education, or other areas that have a significant impact on the safety of human life and body and the protection of basic rights as prescribed by Presidential Decree.

The Act also introduces specific provisions for “generative AI.” The Act defines generative AI as AI systems that create text, sounds, images, videos, or other outputs by imitating the structure and characteristics of the input data. 

The Act also defines an “AI Business Operator” as corporations, organizations, government agencies, or individuals conducting business related to the AI industry. The Act subdivides AI Business Operators into two sub-categories (which effectively reflect a developer-deployer distinction): 

  • “AI Development Business Operators” that develop and provide AI systems, and 
  • “AI Utilization Business Operators” that offer products or services using AI developed by AI Development Business Operators.

Currently, as will be covered in more detail below, the obligations under the Act apply to both categories of AI Business Operators, regardless of their specific roles in the AI lifecycle. For example, transparency-related obligations apply to all AI Business Operators, regardless of whether they are involved in the development and/or deployment phases of AI systems. It remains to be seen if forthcoming Presidential Decrees to implement the Act will introduce more differentiated obligations for each type of entity.

While the Act expressly excludes AI used solely for national defense and security from its scope, the Act applies to both government agencies and public bodies when they are involved in the development, provision, or use of AI technology in a business-related context. More broadly, the Act also assigns the government a significant role in shaping AI policy, providing support, and overseeing the development and use of AI.

2.2. The AI Framework Act has broad extraterritorial reach 

Under Article 4(1), the Act applies not only to acts conducted within South Korea but also to those conducted abroad that impact South Korea’s domestic market, or users in South Korea. This means that foreign companies providing AI systems or services to users in South Korea will be subject to the Act’s requirements, even if they lack a physical presence in the country. 

However, Article 4(2) of the Act introduces a notable exemption for AI systems developed and deployed exclusively for national defense or security purposes. These systems, which will be designated by Presidential Decree, fall outside the Act’s regulatory framework.

For global organizations, the Act’s jurisdictional scope raises key compliance considerations. Companies will likely need to assess whether their AI activities fall under South Korea’s regulatory reach, particularly if they:

  • Offer AI-powered services to South Korean users;
  • Process data or make algorithmic decisions affecting South Korean businesses or individuals; or
  • Indirectly impact the Korean market through AI-driven analytics or decision-making.

This last criterion appears to be a novel policy proposition and differentiates the AI Framework Act from the EU AI Act, potentially making it broader in reach. This is because it does not seem necessary for an AI system to be placed on the South Korean market for the condition to be triggered, but simply for the AI-related activity of a covered entity to “indirectly impact” the South Korean market. 

2.3. The Act establishes a multi-layered approach to AI safety and trustworthiness requirements

(i) The Act emphasizes oversight of high-impact AI but does not prohibit particular AI uses 

For most AI Business Operators, compliance obligations under the AI Framework Act are minimal. There are, however, noteworthy obligations – relating to transparency, safety, risk management and accountability – that apply to AI Business Operators deploying high-impact AI systems. 

Under Article 33, AI Business Operators providing AI products and services must “review in advance” (this presumably means before the relevant product or service is released into a live environment or goes to market) whether their AI system is considered “high-impact AI.” Businesses may request confirmation from the MSIT on whether their AI system is to be considered “high-impact AI.”
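To make this advance-review step concrete, the minimal sketch below shows how a compliance team might run a first-pass triage over an inventory of AI systems, flagging those deployed in the sectors the Act identifies for high-impact AI (energy, healthcare, nuclear operations, biometric analysis, public decision-making, education) for a formal assessment or an MSIT confirmation request. This is an illustrative assumption, not a test defined by the Act or by the source article: the class names, the attribute capturing effects on life, safety, or basic rights, and the sector strings are all hypothetical, and Presidential Decrees may add further sectors.

# Hypothetical first-pass triage for the Article 33 "review in advance" step.
# The sector list mirrors the sectors summarized above; everything else
# (names, structure, the boolean criterion) is an illustrative assumption.

from dataclasses import dataclass

HIGH_IMPACT_SECTORS = {
    "energy",
    "healthcare",
    "nuclear operations",
    "biometric analysis",
    "public decision-making",
    "education",
}

@dataclass
class AISystemProfile:
    name: str
    sector: str                           # primary sector of deployment
    affects_life_safety_or_rights: bool   # outcome of a legal/compliance review

def needs_high_impact_assessment(profile: AISystemProfile) -> bool:
    """Flag systems that should go to a formal high-impact assessment
    (and, if doubt remains, a confirmation request to the MSIT)."""
    return (profile.sector.lower() in HIGH_IMPACT_SECTORS
            and profile.affects_life_safety_or_rights)

if __name__ == "__main__":
    inventory = [
        AISystemProfile("diagnostic-support", "healthcare", True),
        AISystemProfile("marketing-copy-generator", "advertising", False),
    ]
    for system in inventory:
        verdict = "escalate" if needs_high_impact_assessment(system) else "no escalation"
        print(f"{system.name}: {verdict}")

A triage like this only narrows the question; whether a flagged system is in fact “high-impact AI” remains a legal determination under the Act and any implementing Presidential Decrees.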

Under Article 34, organizations that offer high-impact AI, or products or services using high-impact AI, must meet much stricter requirements, including:

1. Establishing and operating a risk management plan.

2. Establishing and operating a plan to provide explanation for AI-generated results within technical limits, including key decision criteria and an overview of training data.

3. Establishing and operating “user protection measures.”

4. Ensuring human oversight and supervision of high-impact AI.

5. Preserving and storing documents that demonstrate measures taken to ensure AI safety and reliability.

6. Following any additional requirements imposed by the National AI Committee (established under the Act) to enhance AI safety and reliability.

Foreign AI Business Operators that meet criteria to be set by Presidential Decree must designate a domestic representative in South Korea, whose responsibilities include:

  • Submitting safety measure implementation results;
  • Managing high-impact AI confirmation processes; and
  • Supporting the implementation of safety and trustworthiness measures.

The MSIT oversees compliance with obligations that include:

  • Notification and labeling requirements for generative AI outputs;
  • Implementation of safety measures and submission of compliance results for AI systems exceeding computational thresholds set by Presidential Decree; and
  • Adherence to safety and reliability standards for high-impact AI systems.

Administrative fines of up to KRW 30 million may be imposed for:

  • Failure to comply with corrective or cease-and-desist orders issued by the MSIT;
  • Non-fulfillment of notification obligations related to high-impact AI or generative AI systems; and
  • Failure to designate a required domestic representative, as mandated for certain foreign AI providers operating in South Korea.

On the support side, the Act directs the government to:

  • Support the production, collection, management, distribution, and utilization of AI training data;
  • Select and fund projects that generate and provide training data; and
  • Establish an integrated system for managing and providing AI training data to the private sector.

Forthcoming Presidential Decrees are expected to specify:

  • Thresholds for computational capacity, which determine when AI systems face additional obligations;
  • Revenue and user criteria that trigger domestic representative requirements for foreign AI Business Operators; and
  • Detailed criteria for identifying high-impact AI systems, ensuring consistent risk-based regulation.

Finally, on the data protection side, personal data may be used for AI development where:

  • The nature of the data is such that anonymizing or pseudonymizing it would make it difficult to use in AI development;
  • Appropriate technical, administrative, and physical safeguards are implemented;
  • The purpose of AI development aligns with objectives such as promoting public interest, protecting individuals or third parties, or fostering AI innovation;
  • There is minimal risk of harm to data subjects or third parties; and
  • The PIPC has confirmed that each of the above requirements has been met (note that the PIPC may also attach further conditions, if necessary).


Link: https://fpf.org/blog/south-koreas-new-ai-framework

Text dated 2025-04-18. Source: fpf.org












Note: the dictionary is updated frequently with corrections and case law.