AI And Privacy: The Ordinary Ways People Overshare
- Mar 3
- 4 min read

Privacy risks used to feel technical, like hackers breaking into databases or phishing emails tricking you into clicking. Now the bigger risk is simpler: people treat chatbots like safe, private spaces and tell them things they would not tell anyone else, without realizing those conversations might be stored, reviewed, or used in ways they never anticipated.
A Stanford study published in 2025 found that most people using AI chatbots share sensitive information without understanding how that data is handled. The study analyzed privacy policies from six major AI developers (Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI) and found that all six collect user conversations for model training by default (Stanford HAI, 2025). That means the prompt you type, the file you upload, and the conversation you have with the bot can all become part of the data used to improve the model unless you actively opt out, and even then, opting out is not always available or clear.
The study's lead author, Jennifer King, a Privacy and Data Policy Fellow at Stanford HAI, said users should absolutely worry about their privacy if they share sensitive information with chatbots like ChatGPT or Gemini, because it may be collected and used for training, even if it sits in a separate file uploaded during the conversation (Stanford HAI, 2025). The problem is not that people are deliberately careless; it is that chatbots feel conversational, and conversational feels private. People paste in medical details, financial information, work documents, and personal stories because the bot seems to be there to help, and the conversational design encourages disclosure in ways other tools do not.
Hundreds of millions of people now interact with AI chatbots regularly, and the volume of personal information flowing into these systems is enormous. Research has shown that users routinely share health data, identity details, and confidential work information because the design of the tool makes it feel safe. One study highlighted how AI chatbots mimic human communication and interpersonal relationships in ways that can prompt users, especially children and teens, to trust and form relationships with bots as if they were friends or confidants (FTC, 2025). That trust is being exploited, not necessarily maliciously, but structurally, because the business model of most AI systems depends on data to improve performance.
The Federal Trade Commission launched an inquiry in September 2025 into AI-powered chatbots acting as companions, seeking information on how companies measure, test, and monitor potentially negative impacts of the technology, especially on children and teens (FTC, 2025). The inquiry is focused on understanding what steps companies have taken to limit use by minors, mitigate harms, and inform users and parents of the risks, including how personal information from conversations with chatbots is used or shared. The FTC has made clear that it considers data collection through chatbots a consumer protection issue, and it has warned companies not to manipulate consumers based on the relationships they form with avatars or bots (Fenwick, 2024).
Data retention is part of the problem. For most free and Plus-tier users of services like ChatGPT, conversations are stored indefinitely unless the user manually deletes them, and even then, deletion can take up to 30 days, with no guarantee that data subject to legal holds will be removed. OpenAI changed its policy in 2024 to remove the ability for free and Plus users to disable chat history entirely, meaning all prompts and interactions were retained by default unless actively deleted. Enterprise and Team users kept the option to opt out; individual consumers did not.
People are pasting work emails into chatbots to "make them sound better," which can include client names, project details, and proprietary information. They are uploading resumes, tax documents, and medical records because the tool offers to summarize or organize them. They are asking for advice on personal problems and including identifying details that could be linked back to them if the data leaks or gets reused. A recent trend has users uploading personal photos to generative AI tools to create custom images, which normalizes sending highly personal visual data to systems that may store and process it in ways users do not fully understand.
Stanford researchers recommend three interventions that policymakers should consider. First, require affirmative opt-in for model training, meaning users should have to choose to allow their data to be used rather than having to figure out how to opt out. Second, make privacy policies transparent and understandable so people know what they are agreeing to. Third, pass stronger privacy laws that specifically address children's data and prevent sensitive information from being pulled into training datasets by default (Stanford HAI, 2025). The researchers also recommend that companies proactively filter personal information from chat inputs and remove entire conversations that contain sensitive content like health issues, rather than relying on output suppression techniques that can fail.
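To make that last recommendation concrete, here is a minimal Python sketch of what input-side filtering could look like: scan each conversation before it enters a training set, and drop the whole conversation if any message matches a sensitive pattern. The pattern list and function names are illustrative assumptions, not any company's actual pipeline; a production system would use trained PII and health-data classifiers rather than a handful of regexes.

```python
import re

# Illustrative patterns only: a real pipeline would use trained PII/PHI
# classifiers, not a short regex list. These three are assumptions for the sketch.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                        # US Social Security number
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),                       # likely payment card number
    re.compile(r"\b(diagnos|prescrib|chemotherap)\w*\b", re.I),  # health-related terms
]

def conversation_is_sensitive(messages: list[str]) -> bool:
    """True if any message in the conversation matches any sensitive pattern."""
    return any(p.search(m) for m in messages for p in SENSITIVE_PATTERNS)

def filter_training_set(conversations: list[list[str]]) -> list[list[str]]:
    """Input-side filtering: drop the entire conversation when it contains
    sensitive content, instead of masking model outputs after training."""
    return [c for c in conversations if not conversation_is_sensitive(c)]
```

The design choice worth noticing is removal at the conversation level: once sensitive data has been trained into model weights, output suppression can only try to hide it, while input filtering keeps it out in the first place.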
For individuals, the new privacy skill is pausing before you paste. The habit to build is treating the prompt box like a semi-public space, not a private conversation. A simple rule: do not paste anything you would not forward to a stranger. That means keeping out medical details, financial account information, Social Security numbers, passwords, addresses, and anything that contains someone else's personal data. It also means being cautious with work documents, especially if they contain client information, internal strategy, or anything covered by confidentiality agreements.
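The same idea can run on your own machine as a speed bump before anything reaches a prompt box. The sketch below is a hypothetical personal pre-paste check, not a guarantee: the pattern list is an assumption you would extend for your own data, and plenty of sensitive text will slip past simple regexes.

```python
import re

# Hypothetical pattern list for a personal pre-paste check; extend for your own data.
PRE_PASTE_CHECKS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone number": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "possible card or account number": re.compile(r"\b\d{12,19}\b"),
}

def pre_paste_warnings(text: str) -> list[str]:
    """Return a label for each identifier type found in text about to be pasted."""
    return [label for label, pattern in PRE_PASTE_CHECKS.items() if pattern.search(text)]

if __name__ == "__main__":
    draft = "Hi, my SSN is 123-45-6789 and you can reach me at jane@example.com."
    for warning in pre_paste_warnings(draft):
        print(f"Warning: looks like it contains a {warning}. Pause before you paste.")
```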
The conversational design of chatbots is working exactly as intended, which is to make people comfortable enough to share. The problem is that comfort does not match the reality of how the data flows, who has access to it, and how long it stays in the system. Privacy is no longer just about protecting your accounts from hackers. It is about recognizing that every casual prompt is a disclosure, and most people are making those disclosures without thinking twice.


