
Picture it: At 11 a.m. on a Thursday, you get a personalized Slack notification prompting you to connect with a colleague you haven’t seen in a while. Then, at a midday team meeting on Zoom, you are alerted about who is speaking up less, so you can invite them to contribute. Later that day, while you are writing, an AI-powered plugin prompts you to use “chairperson” instead of “chairman.” The next day, in preparation for a quarterly check-in with a supervisee, you look at a dashboard showing how people on your team are doing: data from pulse surveys and “listening tools” like text analysis, video, and always-on surveys suggest that your team feels highly connected to you and to each other through one-on-ones, but may be feeling burnt out.

Welcome to a new era of workplace digital surveillance and AI. Are you ready to belong?

When so many in-person offices went remote during the pandemic—with meetings and communications abruptly facilitated digitally—it became newly possible to collect, analyze, and leverage incredible amounts of workplace data. And with this employee data has come a major uptick in new digital tools to inform employee engagement and performance management. At the same time, organizations have been responding to new and louder calls for diversity, equity, inclusion, and belonging (DEIB) at work: Persistent disparities in who is represented within organizations, and particularly in leadership roles, continue to reinforce and illustrate longstanding systemic inequities in society and organizations along lines of race, gender, sexual orientation, socioeconomic status, and more. Unsurprisingly, then, tech companies have begun exploring the role that technology and these newly available data troves could play in measuring and/or enhancing organizational DEIB efforts, surveilling employees in order to enhance belonging.

Belonging goes further than inclusion: It is about feeling meaningfully connected to and part of the organization. And the importance of belonging cannot be denied. In the past, survival literally depended on building connections with others to overcome threats and stresses, and humans thus have an evolutionary need to belong. In the last few years, isolation and lack of belonging have fueled a growing mental health crisis, while the lack of belonging has been identified as a key driver behind the “great resignation.”


Are AI-powered tools for workplace surveillance the answer? What are these tools, and what opportunities do they provide? What unintended consequences might their use bring? To what extent are these tools “for good” also legitimizing employee over-surveillance? And can we ensure that such DEIB tools actually advance equitable and just outcomes?

The Growing Landscape of AI Workplace Belonging Tools

Workplace digital surveillance to monitor employee productivity has long been ubiquitous for warehouse and logistics workers, such as UPS drivers, but employee engagement and productivity tools are now expanding rapidly among knowledge workers. The New York Times found, for example, that eight of the 10 largest private US employers track the productivity of individual workers. Some of these tools are building in features related to advancing internal DEIB, while new tools focus explicitly on DEIB goals.

Our analysis of workplace technological tools—especially those using AI—focused on those with stated goals around “belonging” (given its centrality to advancing equity in the workplace). The 34 tools we mapped vary in size and scope, but all have stated goals linked to belonging and are currently reaching employees and workplaces across the globe. Their customers span a variety of industries, from startups with fewer than 1,000 employees (such as Axios) to companies with 5,000-10,000 employees globally (such as Spotify, Twilio, and Virgin Atlantic), as well as large corporations like Microsoft, Unilever, and Vodafone, which have over 100,000 employees.

Three types of tools emerge:

  • Data analytics tools that seek to measure or assess belonging (32.3 percent)
  • Behavior-change tools that seek to enhance belonging (26.5 percent)
  • Tools that combine both (41.2 percent)

Data analytics tools that measure or assess belonging collect real-time information for organizations to understand whom employees are connected to and communicating with, their levels of inclusion, how engaged they are, and how they are feeling. They do this through a range of methods: providing surveys and assessing responses, capturing regular pulse checks, and/or tracking meeting data. More technically complex services include tracking and analyzing communication metadata (ranging from internal emails and messages to external reviews on sites like Glassdoor), using sentiment analysis to assess emotions in qualitative survey data, and mapping employee networks to assess who is connected to whom. While only some of these tools currently use AI, many continue to explore ways to integrate AI into their solutions.
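To make the mechanics concrete, here is a minimal sketch in Python of the kind of lexicon-based sentiment scoring such a tool might run over open-ended survey responses. The word lists, scoring rule, and example responses are our own illustrative assumptions, not any vendor’s actual method.

```python
# Minimal sketch of lexicon-based sentiment scoring over open-ended
# survey responses. The word lists and scoring rule are illustrative
# assumptions, not any vendor's actual method.

POSITIVE = {"connected", "supported", "valued", "welcome", "heard"}
NEGATIVE = {"isolated", "ignored", "excluded", "burnt", "overwhelmed"}

def sentiment_score(comment: str) -> float:
    """Return a score in [-1, 1]: positive minus negative word share."""
    words = comment.lower().split()
    if not words:
        return 0.0
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    return (pos - neg) / len(words)

responses = [
    "I feel connected and supported by my team.",
    "Lately I have felt isolated and a bit burnt out.",
]
for r in responses:
    print(f"{sentiment_score(r):+.2f}  {r}")
```

Commercial platforms use far more sophisticated models, but the basic move is the same: reduce employees’ own words to a numeric signal that a dashboard can aggregate.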

Tools that seek to enhance belonging in organizations encourage behavior change, often by using digital “nudges.” “Nudge theory” is a behavioral economics concept by which positive reinforcement and indirect suggestions can influence people’s actions and thinking. These nudges—sent over email, text message, Slack, and more—can be customized and context-based. A majority of digital nudging tools leverage machine learning to personalize nudges based on individual communications, meeting information, and other internal data. These nudges can deliver tips on DEIB and wellbeing topics, prompt inclusive interpersonal workplace behavior and learning around DEIB, and encourage inclusive language and work practices specific to certain roles or functions. Besides nudges, some tools also provide a platform for employees and managers to share recognition, praise, and other forms of positive reinforcement for their work.
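As a concrete illustration, the sketch below shows how a simple rule-based nudge might be triggered from meeting data, in the spirit of the Zoom scenario that opened this article. The data fields and the 10 percent threshold are hypothetical; commercial tools replace fixed rules like this with machine-learned personalization.

```python
# A minimal sketch of a rule-based "nudge" trigger, assuming access to
# per-meeting speaking-time data. The field names and 10% threshold are
# hypothetical; real products personalize with machine learning.

from dataclasses import dataclass

@dataclass
class MeetingStats:
    attendee: str
    speaking_share: float  # fraction of total talk time, 0.0-1.0

def inclusion_nudges(stats: list[MeetingStats],
                     threshold: float = 0.10) -> list[str]:
    """Suggest inviting quiet attendees to contribute."""
    return [
        f"Consider inviting {s.attendee} to share their perspective."
        for s in stats
        if s.speaking_share < threshold
    ]

meeting = [
    MeetingStats("Ana", 0.45),
    MeetingStats("Ben", 0.40),
    MeetingStats("Chio", 0.05),
]
for nudge in inclusion_nudges(meeting):
    print(nudge)
```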

While Promising, Concerns Loom

These technologies have the potential to help organizations better understand and advance DEIB efforts, while also making those efforts more efficient, cost-effective, and scalable. However, there are also critical concerns about tools that leverage personal data to draw insights and drive personalized behavior change.

  1. Data privacy | The mapped tools demonstrate a range of data privacy approaches. Not all tools allow employees to determine what data is being collected, and there is significant variation with regard to whether employees’ personal data and insights from their data are sufficiently protected. Cultivate is an example where individual users must opt in to give the platform access to all the types of data it collects (i.e., the contents of their chats, emails, and calendars). Medallia also allows employees to opt into sharing certain data such as transcripts of their calls. However, the platform also automatically collects signals from calendar and email metadata without employees having the opportunity to opt out. In many cases, employees do not even know what data is being collected about them, much less have the opportunity to choose. And even with safeguards like the anonymization of information in place, personal data like email contents could be accessed by managers or bad actors.
  2. Transparency | How informed are employees about how their data is used? Tools that deliver nudges to encourage behavior change have different levels of transparency in terms of how the nudges are developed. For instance, Microsoft VIVA provides individual employees with access to information about where the data underlying their nudges comes from. Humu takes a similar approach, using hyperlinks for each nudge with information on what data points informed the nudge and why the employee received it. However, a majority of tools delivering nudges do not provide employees with this information, and while various tools layer in demographic and HR data to derive more holistic insights, it is unclear whether employees know that their demographic or HR data is being used in this way.
  3. Bias | Bias can come into play at various stages within AI tools. In particular, AI systems make decisions based on the data that they are trained on, but this data may have bias built into it. For example, we know that women’s networks in organizations are less powerful than men’s, and research shows that women often end up networking with peers or lower-level employees and may miss out on networking opportunities due to caretaking responsibilities. Tools that build connections stemming from existing networks may reinforce these inequities and perpetuate gender networking gaps. On this issue, Microsoft VIVA nudges employees to connect with each other on the basis of data such as who is offering positive reinforcement and recognition to whom, which may inadvertently reinforce existing networks. Other tools strive to diversify the networks being developed within organizations. For instance, Donut, a Slack app, randomizes connections with people across departments, geographies, and leadership levels and can also try to introduce people who otherwise would not interact (see the sketch after this list).
  4. Incoherence | Tools often lack a clear, evidence-based understanding of what “belonging” means and which variables should serve as signals for it. For instance, one of the variables Medallia considers when assessing “belonging” is whether employees take time off as soon as they earn it, rather than saving it up. Beyond this variable’s tenuous connection to belonging, parents may take time off differently, with potential consequences for parents and particularly for mothers, who tend to do the majority of caretaking work. Relatedly, since Cultivate parses communications metadata, it gauges whether managers promote psychological safety by tracking how often they “express doubt, request feedback, and share opinions.” But while these variables track leadership behaviors linked to promoting psychological safety, they do not necessarily capture whether employees actually feel psychologically safe.
  5. Slippery Slope | Since tools like these already represent a large and rapidly growing market, more innovations are around the corner, including AI tools to monitor and detect the cognitive and emotional states of remote workers. Take the new virtual school software being developed by Intel and Classroom Technologies, which can be layered on Zoom and promises to detect whether students are bored, distracted, or confused by assessing their facial expressions and interactions. Similar types of tech are being tested and deployed in the virtual workplace through video and digital communication platforms. While intentions for the development of such tools appear positive, capturing and assessing emotions and facial expressions is rife with controversy and not based on sound scientific evidence.
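To illustrate the design difference raised in item 3, here is a minimal sketch of randomized cross-department pairing in the spirit of Donut. The pairing logic, names, and departments are our own illustration, not Donut’s actual algorithm: it shuffles everyone and prefers a partner from a different department, rather than recommending connections from existing network data.

```python
# A minimal sketch of Donut-style randomized pairing (item 3 above),
# assuming only a name and department per employee. The logic is our
# illustration, not Donut's actual algorithm: shuffle everyone, then
# prefer partners from a different department.

import random

employees = [
    ("Ana", "Engineering"), ("Ben", "Sales"), ("Chio", "Design"),
    ("Dev", "Engineering"), ("Eli", "Marketing"), ("Fay", "Sales"),
]

def random_cross_department_pairs(people, seed=None):
    """Shuffle, then greedily pair each person with someone from
    another department when possible."""
    pool = list(people)
    random.Random(seed).shuffle(pool)
    pairs = []
    while len(pool) >= 2:
        person = pool.pop()
        # Prefer a partner from a different department; else take anyone.
        partner_idx = next(
            (i for i, p in enumerate(pool) if p[1] != person[1]), 0
        )
        pairs.append((person, pool.pop(partner_idx)))
    return pairs

for a, b in random_cross_department_pairs(employees, seed=7):
    print(f"{a[0]} ({a[1]}) <-> {b[0]} ({b[1]})")
```

Because the pairing ignores who already talks to whom, it cannot amplify existing network patterns the way recommendation-driven matching can.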

At a higher level, we are concerned about over-surveillance conducted in the name of DEIB. While these tools collect data with the positive aim of advancing DEIB, they still act as surveillance tools in personal spaces. Even when developed for purposes of “good,” surveillance can be an invasion of privacy and ultimately fuel workplace control. Moreover, surveillance has long disproportionately targeted marginalized communities, particularly Black and Brown communities in the United States, perversely enabling more precise discrimination.

To be clear, not every tool we mapped falls prey to these concerns. Everyday Inclusion, for example, provides employees with uncustomized, science-based “inclusion nudges,” while Donut simply randomizes employee connections; tools like these do not raise the concerns we outline here. It is when tools start to leverage personal data to draw insights and drive personalized behavior change that we urge leaders to consider the potential pitfalls alongside the potential benefits.

What Can Social Change Leaders Do?

Social change leaders must be attentive to the types of technologies they are using, supporting, investing in, or funding in the name of DEIB. While tools to advance belonging can be helpful, they must be developed and managed with extreme consideration and caution if they are to result in more just and equitable outcomes. Social change leaders must ask:

  1. What power dynamics and biases might the tool be inadvertently reinforcing? How might the tool be perpetuating inequities in terms of who is connected and networked with whom? How might the tool channel praise and recognition to certain employees and not others? How are the tool and its development team considering how certain employees are seen and heard within the organization, and working to ensure that all employees have equal opportunities to be seen and heard?
  2. Have we done our due diligence to ensure equitable outcomes for all employees?
  3. Is the tool built using sound scientific evidence that is applicable across various identities, communities, and cultures? Or is it making assumptions that could have unintended consequences?
  4. Is there a diverse team behind the development and management of this tool (across different demographics and disciplines)? Is the team equipped to proactively consider how people may use and experience these tools differently?
  5. Are there robust privacy measures built in? Have we considered how managers or bad actors may use the tools in ways that could perpetuate bias and discrimination (on purpose or not)?
  6. Is the collection and use of personal data transparent to employees? Are they able to easily opt into or out of data collection?

It’s easy to believe that technology can solve intractable issues like lack of belonging and inequality across different identities at work. However, we must be careful regarding the promises of technology and AI. Tools like these can indeed be helpful, but as social change leaders we must demand more and ask critical questions to better understand the potential implications of such tools and how power is replicated within and through them. We can also support innovations and teams that center justice as a core value and priority, from design through management.

Ultimately, increasing surveillance and AI in the name of DEIB is a dangerous game. Thoughtful, curious, and intentional social change leadership and investment are required to advance and push for tools that can truly create more just and equitable workplace environments. But in some cases, the main question is: Should this tool be developed at all?

