How to stop LinkedIn from training AI on your data
Ars Technica
April 06, 2025
Summary
LinkedIn limits opt-outs to future training, warns AI models may spout personal data.
LinkedIn admitted Wednesday that it has been training its own AI on many users' data without seeking consent. There is no way for users to opt out of training that has already occurred; LinkedIn limits opt-outs to future AI training only.
In a blog detailing updates coming on November 20, LinkedIn general counsel Blake Lawit confirmed that LinkedIn's user agreement and privacy policy will be changed to better explain how users' personal data powers AI on the platform.
Under the new privacy policy, LinkedIn now informs users that "we may use your personal data... [to] develop and train artificial intelligence (AI) models, develop, provide, and personalize our Services, and gain insights with the help of AI, automated systems, and inferences, so that our Services can be more relevant and useful to you and others."
An FAQ explained that the personal data could be collected any time a user interacts with generative AI or other AI features, as well as when a user composes a post, changes their preferences, provides feedback to LinkedIn, or uses the platform for any amount of time.
That data is then stored until the user deletes the AI-generated content. LinkedIn recommends that users who want to delete, or request deletion of, data collected about past LinkedIn activity use its data access tool.
LinkedIn's AI models powering generative AI features "may be trained by LinkedIn or another provider," such as Microsoft, which provides some AI models through its Azure OpenAI service, the FAQ said.
A potentially major privacy risk, LinkedIn's FAQ noted, is that users who "provide personal data as an input to a generative AI powered feature" could end up seeing their "personal data being provided as an output."