China is considering a raft of new controls for training AI on chat log data. Here's what it means.
China is proposing new rules that require user consent before AI companies can use chat logs to train their models. (d3sign/Getty Images)

- China is moving to tighten rules on using chat logs to train AI.
- The draft rules state that users must consent to sharing their conversation data for model training.
- Analysts said the move aligns with Beijing's emphasis on user safety and the collective public interest.

China is moving to tighten one of the most common ways AI systems improve: learning directly from conversations with users.

The Cyberspace Administration of China said in a statement published Saturday that it has drafted measures that would restrict how AI platforms can collect and use chat logs for model training.

The proposal aims to ensure that "human-like" interactive AI services, including chatbots and virtual companions, are safe and secure for users, the statement said. China encourages innovation in "human-like" interactive AI but will pair that with governance and prudent, tiered supervision to "prevent abuse and loss of control," it added.

Under the draft rules, platforms would need to inform users when they are interacting with AI and provide options to access or delete their chat history. Using conversation data for model training, or sharing it with third parties, would require explicit user consent, the agency said.

For minors, providers would need additional consent from a guardian before sharing their conversation data with third parties. Guardians would also have the right to request deletion of a minor's chat history.

The draft measures are open for public consultation, with feedback due in late January.

Analysts say China is balancing AI safety with continued development

If finalized, the rules could slow the pace at which AI chatbots improve, Lian Jye Su, the chief analyst at Omdia, a technology research and consulting firm, told Business Insider.

Restricting access to chat logs may "limit the human-feedback mechanisms in reinforcement learning, which has been critical to the rise of engaging and accurate conversational AI," Su said. That said, China's AI ecosystem is "robust," and the country has access to massive public and proprietary datasets, he added.

Su said the move aligns with Beijing's broader emphasis on national security and the collective public interest. Tightening controls over chat logs signals that certain user conversations are too sensitive to be treated as free training data, he added.

Wei Sun, the principal analyst for AI at Counterpoint Research, told Business Insider that the provisions "function less as brakes and more as directional signals."

"The emphasis is on protecting users and preventing opaque data practices, rather than constraining innovation," she said.

Sun said the draft encourages providers, once safety and reliability are proven, to expand the use of human-like AI across more application areas, including cultural dissemination and companionship for older adults.

"In the context of a rapidly aging population, they can be read as an explicit policy nudge to accelerate the development of human-like AI interactions in a regulated, socially constructive, and scalable manner," she added.
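In engineering terms, the consent requirement amounts to a gate in the training pipeline: conversations that lack an explicit opt-in, or guardian sign-off where one is required, never reach the training set. Below is a minimal Python sketch of what such a gate could look like. The ChatRecord fields and the eligibility logic are illustrative assumptions about how a platform might encode the draft's requirements, not terms taken from the rules themselves.

```python
from dataclasses import dataclass


@dataclass
class ChatRecord:
    """One logged conversation, annotated with hypothetical consent flags."""
    user_id: str
    messages: list[str]
    consented_to_training: bool  # explicit user opt-in for model training
    is_minor: bool               # whether the user is a minor
    guardian_consented: bool     # additional guardian consent, where required


def eligible_for_training(record: ChatRecord) -> bool:
    """Apply the consent gate: no explicit opt-in, no training data."""
    if not record.consented_to_training:
        return False
    # Assumed reading of the draft: minors need guardian sign-off on top
    # of their own consent before their conversations can be reused.
    if record.is_minor and not record.guardian_consented:
        return False
    return True


def build_training_set(records: list[ChatRecord]) -> list[ChatRecord]:
    """Filter raw logs down to the conversations that may be used."""
    return [r for r in records if eligible_for_training(r)]
```

Under this framing, a user's deletion request reduces to removing their records from storage and from any future run of the filter, which is one reason analysts see the rules as directional rather than prohibitive.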
AI chat log safety

China's new draft rules on chat log data come as concerns grow over how AI companies handle deeply personal user conversations.

Business Insider reported in August that contract workers employed by Meta and other tech giants can read user conversations with chatbots as part of the process of evaluating AI responses.

Several contractors told Business Insider that the material they reviewed contained sensitive details that could be used to identify individual users. Many of the conversations were highly personal, including exchanges that resembled therapy sessions, private chats with close friends, or intimate conversations with romantic partners.

Meta's AI terms of service state that the company "may review" user chat logs with its AI products, either through automated systems or human reviewers. A Meta spokesperson told Business Insider that the company has "strict policies" governing who can access personal data.

"While we work with contractors to help improve training data quality, we intentionally limit what personal information they see, and we have processes and guardrails in place instructing them how to handle any such information they may encounter," the spokesperson said in the August report.

A Google engineer in AI security told Business Insider earlier this month that there are certain things he would never share with chatbots.

"AI models use data to generate helpful responses, and we users need to protect our private information so that harmful entities, like cybercriminals and data brokers, can't access it," he said.
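On the platform side, one common way to "limit what personal information" reviewers see is to redact obvious identifiers from a conversation before it is queued for human review. The sketch below illustrates that general idea in Python; the regex patterns are deliberately simplified assumptions, and nothing here reflects how Meta or Google actually implement their guardrails.

```python
import re

# Simplified, assumed patterns for two common identifier types.
# Production systems typically rely on dedicated PII-detection tooling,
# not a handful of regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
}


def redact_for_review(message: str) -> str:
    """Replace recognizable identifiers with placeholder tokens so a
    reviewer can rate response quality without seeing who wrote it."""
    for label, pattern in PII_PATTERNS.items():
        message = pattern.sub(f"[{label}]", message)
    return message


print(redact_for_review("Reach me at jane.doe@example.com or +1 555 010 9999."))
# Prints: Reach me at [EMAIL] or [PHONE].
```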