South Korea has enacted the worldʼs first law on the safe use of artificial intelligence. It sets rules for AI companies and developers aimed at preventing the spread of deepfakes, disinformation, and other AI-driven harms.
The South Korean news agency Yonhap reports.
The new law introduces the concept of “high-risk AI” — models that can significantly affect usersʼ daily lives or safety, for example in job applications, loan evaluations, and medical advice.
Companies must inform users that their services are based on AI and ensure the safety of their applications. All AI-generated content must carry watermarks indicating its artificial origin.
The law also imposes requirements on international companies. If a service is used daily by more than 1 million Koreans and the company has revenue exceeding $6.8 million or annual profit exceeding $681 million, it must open a representative office in South Korea.
Companies such as OpenAI and Google fall under this requirement. Violations of the law are punishable by fines of up to $20,000, but the government has introduced a one-year grace period for businesses to adapt.
Beyond the safety obligations, the law provides for measures to develop the AI industry: within three years, South Koreaʼs Minister of Science must present a policy plan for the field to support innovation and technological leadership.
- On January 13, Microsoft announced a five-phase plan called “Community-Driven AI Infrastructure”, intended to address community opposition to the construction of AI data centers.
- The plan includes increased spending to keep other customersʼ electricity bills from rising, minimizing water consumption, training workers and creating jobs, and contributing to the local tax base at construction sites.