In today’s digital landscape, users often seek assistance from AI chatbots to answer questions, generate content, and gather information efficiently. However, the convenience of these tools tends to come at a price: data privacy. A recent study from VPN and security service Surfshark has revealed how much user data various AI chatbots collect and which ones pose the greatest risks.
Surfshark’s report examined ten popular AI chatbots, including ChatGPT, Google Gemini, and others, analyzing their privacy policies to determine the types of data harvested from users. The focus was on 35 different data types, ranging from contact information to sensitive details like health and financial data.
### Data Collection in AI Chatbots
The findings were alarming. All examined AI apps collected some form of user data, with an average of 13 out of 35 data types per app. Approximately 45% of these apps track users’ geographic locations, while around 30% link user data to third-party advertising networks.
So, who are the worst offenders? According to Surfshark, Meta AI sits at the top of the list, collecting a staggering 32 of the 35 data types — more than 90% of those examined. This includes financial information and sensitive data, the latter encompassing details about users’ racial and ethnic backgrounds, sexual orientation, and even political opinions.
### Major Players and Their Data Practices
Following Meta AI, Google Gemini also raised concerns by collecting 22 data types, including precise location and contact information. Other prominent names — Poe, Claude, and Microsoft Copilot — round out the top five, each gathering between 12 and 14 data types, among them user device IDs, which can facilitate third-party data sales.
DeepSeek, a Chinese AI model, generated particular concern over its data handling practices. While it collected 11 data types, including chat history, there are fears about potential censorship and the vulnerability of user data stored on servers in China.
Conversely, OpenAI’s ChatGPT collected 10 data types but stood out positively by not employing third-party advertising. Users also have privacy options, such as deleting past conversations or requesting that their data not be used for training.
### A Common Practice
The collection of user data is commonplace not only among AI chatbots but across mobile apps and platforms generally. While many users accept this as the cost of free or low-cost services, understanding what data is being collected and how it may be used is crucial for privacy-conscious individuals.
Fortunately, users can take proactive steps to manage their data. Reviewing the privacy policies of your chosen AI service can help identify what types of data are being collected. Most services provide settings that allow you to limit data sharing.
### Final Thoughts
In light of these findings, it is clear that user data collection remains a fundamental aspect of how many AI chatbots operate. While the technology behind these tools is undeniably advantageous, it’s essential to remain vigilant. Many platforms do offer features that allow users to safeguard their data, but being informed is the first step in navigating the complex landscape of AI privacy.
As more people embrace AI chatbots for their everyday needs, prioritizing privacy has never been more critical. The Surfshark report serves as a timely reminder that while AI can enhance our lives, we must also be cautious about the data exchanged in the process. Understanding the implications of data collection is vital for both individual privacy and overall digital safety.