In a development that has left many Slack users unsettled, an obscure privacy policy page has surfaced, revealing that the popular workplace communication platform trains its machine learning (ML) and artificial intelligence (AI) systems on customer conversations, including direct messages (DMs). The revelation has sparked a significant backlash as users grapple with the implications for their privacy and data security.
Slack’s Data Utilisation Policy
According to the privacy page, Slack, owned by Salesforce, systematically analyses “Customer Data,” which includes messages, content, and files submitted to the platform, along with other information such as usage data. This policy is not restricted to customers who have explicitly opted into Slack’s AI add-ons; it applies by default across all Slack workspaces.
Slack maintains that these analyses are performed to enhance user experience, offering improvements in search functionality, autocomplete suggestions, channel recommendations, and even emoji suggestions. The company assures users that their data will not “leak across workspaces” and that technical controls are in place to prevent unauthorised access.
Despite these assurances, the potential for misuse or unintended consequences looms large. Users are particularly concerned about the possibility of sensitive information being inadvertently exposed or misused, given the extensive access Slack has to their conversations.
The Promised Benefits
Slack posits that feeding conversations into a large language model will significantly enhance the platform’s utility. The company highlights several benefits, including more relevant search results, better autocomplete suggestions, and more accurate channel recommendations. However, the enhancement that has drawn the most attention—and ridicule—is the improvement of emoji suggestions.
A statement from Slack outlines how the platform might suggest emoji reactions based on the content and sentiment of messages, as well as historical and contextual usage patterns. For instance, if the 🎉 emoji is commonly used in celebratory messages within a particular channel, Slack would suggest this emoji for similar future messages.
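Slack has not disclosed how its suggestion models actually work, but the behaviour described above resembles a simple frequency-based heuristic. The following Python sketch is purely illustrative of that idea; the class and method names (EmojiSuggester, record_reaction, suggest) are hypothetical and bear no relation to Slack’s internal systems or API.

    from collections import Counter, defaultdict

    # Illustrative only: a naive per-channel, per-word frequency model,
    # not Slack's actual (undisclosed) implementation.
    class EmojiSuggester:
        def __init__(self):
            # Counts of which emoji have followed which word in each channel.
            self.counts = defaultdict(Counter)

        def record_reaction(self, channel: str, message: str, emoji: str) -> None:
            """Record that `emoji` was used on `message` in `channel`."""
            for word in message.lower().split():
                self.counts[(channel, word)][emoji] += 1

        def suggest(self, channel: str, message: str, k: int = 3) -> list[str]:
            """Return the k emoji most often paired with this message's words."""
            tally = Counter()
            for word in message.lower().split():
                tally.update(self.counts[(channel, word)])
            return [emoji for emoji, _ in tally.most_common(k)]

    # After observing celebratory messages in one channel, similar future
    # messages surface the 🎉 emoji first.
    s = EmojiSuggester()
    s.record_reaction("#launches", "We shipped the release", "🎉")
    s.record_reaction("#launches", "Great job on the release", "🎉")
    print(s.suggest("#launches", "Another release shipped"))  # ['🎉']

Even a toy model like this makes the privacy trade-off concrete: to learn which emoji suit which messages, the system must read and retain statistics about the messages themselves.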
Critics argue that such improvements, particularly in emoji suggestions, do not justify the privacy trade-off. While better search functionality might be seen as a practical enhancement, the need for AI-driven emoji recommendations is viewed by many as trivial and disproportionate to the potential risks.
Opting Out: A Daunting Task
For users and organisations uncomfortable with Slack’s data usage policy, the opt-out process is cumbersome. Individual users cannot opt out directly; the request must come from a Slack administrator, typically someone in the company’s IT department. Nor is there a straightforward toggle in the settings: administrators must instead email Slack’s Customer Experience team to request an opt-out.
This convoluted process has been criticised as a “dark pattern,” designed to discourage opt-outs by making them inconvenient. Many hope that, in light of the backlash, Slack will streamline the process to better respect user preferences and privacy concerns.
The Reality of Slack DMs
The controversy serves as a stark reminder that Slack DMs are not as private as users might hope. While Slack’s data usage policy is aimed at improving the platform, it also means that every message sent could be analysed and utilised to train AI systems. This extends to private conversations, which can also be accessed by company administrators.
For employees, this underscores the importance of being cautious about what they share on Slack. Sensitive discussions or criticisms of company leadership might be better suited to more secure communication platforms not under the company’s control, such as Signal.
Moving Forward
The outcry over Slack’s data usage practices highlights the growing tension between technological advancement and privacy rights. As AI continues to evolve, the need for robust privacy protections and transparent data usage policies becomes increasingly critical. Companies like Slack must navigate these challenges carefully, balancing the benefits of AI with the imperative to protect user privacy.
It remains to be seen whether Slack will respond to the backlash by amending its policies or simplifying the opt-out process. Users and organisations will be watching closely for any changes, advocating for their right to privacy in an increasingly interconnected digital landscape.
The debate over Slack’s use of private conversations to train AI systems is far from over. As users demand greater transparency and control over their data, the outcome of this controversy could set important precedents for data privacy in the digital age. In the meantime, users are urged to remain vigilant about their communications on workplace platforms and to advocate for stronger privacy safeguards.