More unusually, Headroom’s software uses emotion recognition to take the temperature of the room periodically, and to gauge how much attention participants are paying to whoever’s speaking. Those metrics are displayed in an on-screen window, designed mostly to give the speaker the kind of real-time feedback that can disappear in a virtual setting. “If five minutes ago everyone was super into what I’m saying and now they’re not, maybe I should think about shutting up,” says Green.
Emotion recognition is still a nascent field of AI. “The goal is to basically try to map the facial expressions as captured by facial landmarks: the rise of the eyebrow, the shape of the mouth, the opening of the pupils,” says Rabinovich. Each of these facial movements can be represented as data, which in theory can then be translated into an emotion: happy, sad, bored, confused. In practice, the process is rarely so straightforward. Emotion recognition software has a history of mislabeling people of color; one program, used by airport security, overestimated how often Black men showed negative emotions, like “anger.” Affective computing also fails to take cultural cues into account, like whether someone is averting their eyes out of respect, shame, or shyness.
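The pipeline Rabinovich describes, landmarks reduced to numbers and numbers mapped to labels, can be sketched in deliberately simplified form. Everything below is illustrative: the feature names, thresholds, and rule-based mapping are assumptions for the sake of the example, not Headroom’s method (real systems use trained classifiers, with all the bias problems described above).

```python
# A toy sketch of landmark-based emotion labeling. Features are assumed to be
# normalized values derived from facial landmarks; the thresholds are invented.

def classify_expression(eyebrow_raise: float, mouth_curve: float,
                        eye_openness: float) -> str:
    """Map landmark-derived features to a coarse emotion label.

    eyebrow_raise: 0.0-1.0, how far the brows sit above their resting position
    mouth_curve:   positive for upturned corners, negative for downturned
    eye_openness:  fraction of typical eyelid aperture
    """
    if eye_openness < 0.3:
        return "bored"      # nearly closed eyes read as low engagement
    if mouth_curve > 0.4:
        return "happy"      # strongly upturned mouth
    if mouth_curve < -0.3:
        return "sad"
    if eyebrow_raise > 0.6:
        return "confused"   # raised brows without a smile
    return "neutral"

print(classify_expression(eyebrow_raise=0.2, mouth_curve=0.6, eye_openness=0.8))
```

Note how brittle even this toy version is: a reader whose resting features differ from the assumed baseline (hooded eyes, say) gets systematically mislabeled, which is exactly the failure mode Xiang describes below.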
For Headroom’s purposes, Rabinovich argues that these inaccuracies aren’t as important. “We care less if you’re happy or super happy, so long that we’re able to tell if you’re involved,” says Rabinovich. But Alice Xiang, the head of fairness, transparency, and accountability research at the Partnership on AI, says even basic facial recognition still has problems—like failing to detect when Asian individuals have their eyes open—because these systems are often trained on white faces. “If you have smaller eyes, or hooded eyes, it might be the case that the facial recognition concludes you are constantly looking down or closing your eyes, when you’re not,” says Xiang. These sorts of disparities can have real-world consequences as facial recognition software gains more widespread use in the workplace. Headroom is not the first to bring such software into the office. HireVue, a recruiting technology firm, recently introduced emotion recognition software that scores a job candidate’s “employability” based on factors like facial movements and speaking voice.
Constance Hadley, a researcher at Boston University’s Questrom School of Business, says that gathering data on people’s behavior during meetings can reveal what is and isn’t working within that setup, which could be useful for employers and employees alike. But when people know their behavior is being monitored, it can change how they act in unintended ways. “If the monitoring is used to understand patterns as they exist, that’s great,” says Hadley. “But if it’s used to incentivize certain types of behavior, then it can end up triggering dysfunctional behavior.” In Hadley’s classes, when students know that 25 percent of their grade is participation, they raise their hands more often, but they don’t necessarily say more interesting things. When Green and Rabinovich demonstrated their software to me, I found myself raising my eyebrows, widening my eyes, and grinning maniacally to change my levels of perceived emotion.
In Hadley’s estimation, when meetings are conducted is just as important as how. Poorly scheduled meetings can rob workers of the time to do their own tasks, and a deluge of meetings can make people feel like they’re wasting time while drowning in work. Naturally, there are software solutions to this, too. Clockwise, an AI time management platform launched in 2019, uses an algorithm to optimize the timing of meetings. “Time has become a shared asset inside a company, not a personal asset,” says Matt Martin, the founder of Clockwise. “People are balancing all these different threads of communication, the velocity has gone up, the demands of collaboration are more intense. And yet, the core of all of that, there’s not a tool for anyone to express, ‘This is the time I need to actually get my work done. Do not distract me!’”
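One way to picture what an algorithm like this optimizes for is a greedy heuristic: place a new meeting so that the longest uninterrupted stretch of free time left in the day stays as long as possible. This is a hypothetical toy, not Clockwise’s actual algorithm; the working day bounds and the hour-based model are assumptions made for the sketch.

```python
# Toy meeting scheduler: choose a start time that preserves the longest
# contiguous block of focus time. Hours are modeled as plain numbers.

def best_meeting_start(busy, meeting_len, day_start=9, day_end=17):
    """Return the start hour that leaves the longest free block intact.

    busy: list of (start, end) hour tuples for existing meetings.
    """
    def free_blocks(intervals):
        blocks, cursor = [], day_start
        for s, e in sorted(intervals):
            if s > cursor:
                blocks.append((cursor, s))
            cursor = max(cursor, e)
        if cursor < day_end:
            blocks.append((cursor, day_end))
        return blocks

    best, best_focus = None, -1
    for s, e in free_blocks(busy):
        # Only try anchoring the meeting at either edge of a free block,
        # so the middle of the block survives as focus time.
        for start in (s, e - meeting_len):
            if start < s or start + meeting_len > e:
                continue
            candidate = busy + [(start, start + meeting_len)]
            focus = max((b - a for a, b in free_blocks(candidate)), default=0)
            if focus > best_focus:
                best, best_focus = start, focus
    return best

# With one existing meeting at 10-11, a one-hour meeting lands at 9,
# preserving the 11-17 stretch as uninterrupted focus time.
print(best_meeting_start([(10, 11)], 1))
```

The design choice mirrors Martin’s point: the scarce resource being protected isn’t any particular hour, it’s the contiguous block in which real work happens.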