North Korean hackers used AI to target South Korean military: report
SEOUL – North Korea’s Kimsuky hacking group used images generated with artificial intelligence in a recent phishing campaign against South Korean military agencies, a local cybersecurity firm said in a report Monday.
According to the report by Genians, the group sent malware-laced emails to undisclosed military agencies, asking them to review sample ID card designs for civilian employees of the military.
The attached identification card images were fabricated with AI tools, and Genians assessed that they had been produced with ChatGPT.
The firm said the attackers appeared to have circumvented restrictions on commercial AI services such as ChatGPT, which typically refuse to generate ID cards, by framing their requests as mock-up designs for legitimate use.
“They probably persuaded the AI models by saying they were producing sample designs, not replicating actual military ID cards,” the report noted.
The emails also used spoofed domains such as “.mli.kr” that closely mimicked South Korean defense websites ending in “.mil.kr.”
The case adds to growing concerns over Pyongyang’s use of AI in cyber operations.
In a separate report released in August, US-based AI firm Anthropic — developer of the Claude model — said North Korean hackers used generative AI to create fake online identities for job applications at overseas IT companies.
The report said North Korean agents have relied on AI not only to compensate for poor programming skills and limited English proficiency during interviews, but also to complete work tasks and conduct operations after being hired.
Anthropic added that Kimsuky has recently ramped up phishing attacks with AI-themed lures, including emails that appeared to be from AI-powered email management services.
“While AI services offer convenience in the workplace, they also carry the risk of being exploited for cyber operations with potential national security consequences,” Anthropic said.
“There is a growing need for safeguards across recruitment, daily operations, and IT systems to prevent AI misuse.”
Phnom Penh Post/ANN/The Korea Herald