We just got our best-ever look at the inside of Mars

NASA’s InSight robotic lander has just given us our first look deep inside a planet other than Earth. 

More than two years after it landed, the seismic data InSight has collected has given researchers hints about how Mars formed, how it has evolved over 4.6 billion years, and how it differs from Earth. A set of three new studies, published in Science this week, suggests that Mars has a thinner crust than expected, as well as a molten core that is bigger than we thought.

In the early days of the solar system, Mars and Earth were pretty much alike, each with a blanket of ocean covering the surface. But over the following 4 billion years, Earth became temperate and perfect for life, while Mars lost its atmosphere and water and became the barren wasteland we know today. Finding out more about what Mars is like inside might help us work out why the two planets had such very different fates. 

“By going from [a] cartoon understanding of what the inside of Mars looks like to putting real numbers on it,” said Mark Panning, project scientist for the InSight mission, during a NASA press conference, “we are able to really expand the family tree of understanding how these rocky planets form and how they’re similar and how they’re different.” 

Since InSight landed on Mars in 2018, its seismometer, which sits on the surface of the planet, has picked up more than a thousand distinct quakes. Most are so small they would be unnoticeable to someone standing on Mars’s surface. But a few were big enough to help the team get the first true glimpse of what’s happening underneath. 


Marsquakes create seismic waves that the seismometer detects. Researchers created a 3D map of Mars using data from two different kinds of seismic waves: shear and pressure waves. Shear waves, which can only pass through solids, are reflected off the planet’s surface.  

Pressure waves are faster and can pass through solids, liquids, and gases. Measuring the difference in the two waves’ arrival times allowed the researchers to locate quakes and gave clues about the interior’s composition.
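
To make that arrival-time arithmetic concrete, here is a minimal sketch of the classic S-minus-P calculation. The wave speeds are illustrative placeholders, not values from the InSight studies, and the math assumes uniform average speeds along the whole path.

```python
# Minimal sketch: estimating the distance to a marsquake from the lag
# between pressure (P) and shear (S) wave arrivals. The speeds below
# are illustrative placeholders, not figures from the InSight studies.

P_SPEED_KM_S = 7.5   # assumed average P-wave speed
S_SPEED_KM_S = 4.2   # assumed average S-wave speed (always slower than P)

def distance_to_quake(sp_lag_seconds: float) -> float:
    """Distance implied by the S-minus-P arrival lag, in kilometers.

    Both waves start together at the quake; the lag grows linearly
    with distance because the S wave falls steadily behind the P wave.
    """
    lag_per_km = 1.0 / S_SPEED_KM_S - 1.0 / P_SPEED_KM_S  # seconds of lag per km
    return sp_lag_seconds / lag_per_km

# Example: an S wave arriving 180 seconds after the P wave puts the
# quake roughly 1,700 km away under these assumed speeds.
print(f"{distance_to_quake(180.0):.0f} km")
```

Real marsquake inversions use many seismic phases and layered velocity models, but the underlying lag-to-distance idea is the same.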

One team, led by Simon Stähler, a seismologist at ETH Zurich, used data generated by 11 bigger quakes to study the planet’s core. From the way the seismic waves reflected off the core, they concluded that it’s made of liquid nickel-iron, and that it’s far larger than previously estimated (between 2,230 and 2,320 miles wide) and probably less dense.

Another team, led by Amir Khan, a scientist at the Institute of Geophysics at ETH Zurich and at the Physics Institute at the University of Zurich, looked at the Martian mantle, the layer that sits between the crust and the core. They used the data to determine that Mars’s lithosphere—while similar in chemical composition to Earth’s—lacks tectonic plates. It is also thicker than Earth’s by about 56 miles.  

This extra thickness was most likely “the result of early magma ocean crystallization and solidification,” meaning that Mars may have been quickly frozen at a key point in its formative years, the team suggests. 

A third team, led by Brigitte Knapmeyer-Endrun, a planetary seismologist at the University of Cologne, analyzed the Martian crust, the layer of rock at the surface. They found that while the crust is likely very deep, it’s also thinner than the team expected.

“That’s intriguing because it points to differences in the interior of the Earth and Mars, and maybe they are not made of exactly the same stuff, so they were not built from exactly the same building blocks,” says Knapmeyer-Endrun.  

The InSight mission will come to an end next year, once its solar cells can no longer produce enough power, but in the meantime, even more of Mars’s inner secrets may be unveiled.

“Regarding seismology and InSight, there are also still many open questions for the extended mission,” says Knapmeyer-Endrun. 

Is the UK’s pingdemic good or bad? Yes.

Oscar Maung-Haley, 24, was working a part-time job in a bar in Manchester, England, when his phone pinged. It was the UK’s NHS Test and Trace app letting him know he’d potentially been exposed to covid-19 and needed to self-isolate. The news immediately caused problems. “It was a mad dash around the venue to show my manager and say I had to go,” he says.

The alert he got was one of hundreds of thousands being sent out every week as the UK battles its latest wave of covid, which means more and more people face the same logistical, emotional, and financial challenges. An estimated one in five app users have resorted to deleting it altogether—after all, you can’t get a notification if you don’t have the app on your phone. The phenomenon is being dubbed a “pingdemic” on social media, blamed for everything from gas shortages to bare store shelves.

The ping deluge reflects the collision of several developments. The delta variant, which appears to spread far more easily than earlier variants, has swept across the UK. At the same time, record numbers of Britons have downloaded the NHS app. Meanwhile, the UK has dropped many of its lockdown restrictions, so more people are coming into more frequent contact than before. More infections, more users, more contact: more pings.

But that’s exactly how it’s supposed to work, says Imogen Parker, policy director for the Ada Lovelace Institute, which studies AI and data policies. In fact, even with so many notifications being sent, there are still many infections that the system is not catching. 

“More than 600,000 people have been told to isolate by the NHS covid-19 app across the week of July 8 in England and Wales,” she says, “but that’s only a little more than double the number of new positive cases in the same period. While we had concerns about the justification for the contact tracing app, criticizing it for the ‘pingdemic’ is misplaced: the app is essentially working as it always has been.”

Christophe Fraser, an epidemiologist at the University of Oxford’s Big Data Institute who has done the most prominent studies on the effectiveness of the app, says that while it is functioning as designed, there’s another problem: a significant breakdown in the social contract. “People can see, on TV, there are raves and nightclubs going on. Why am I being told to stay home? Which is a fair point, to be honest,” he says.

It’s this lack of clear, fair rules, he says, that is leading to widespread frustration as people are told to self-isolate. As we’ve seen throughout the pandemic, public health technology is deeply intertwined with everything around it—the way it’s marketed, the way it’s talked about in the media, the way it’s discussed by your physician, the way it’s supported (or not) by lawmakers. 

“People do want to do the right thing,” Fraser says. “They need to be met halfway.”

How we got here

Exposure notification apps are a digital public health tactic pioneered during the pandemic—and they’ve already weathered a lot of criticism from those who say that they didn’t get enough use. Dozens of countries built apps to alert users to covid exposure, sharing code and using a framework developed jointly by Google and Apple. But amid criticism over privacy worries and tech glitches, detractors charged that the apps had launched too late in the pandemic—at a time when case numbers were too high for tech to turn back the tide.

So shouldn’t this moment in the UK—when technical glitches have been ironed out, when adoption is high, and with a new wave spiking—be the right time for its app to make a real difference? 

Not if people don’t voluntarily follow the instructions to isolate, says Jenny Wanger, who leads covid-related tech initiatives for Linux Foundation Public Health. 

Eighteen months into the pandemic, “the tech is not usually a challenge,” she says. “The science is not as much of a challenge … we know, at this point, how covid transmission works. The challenge comes around the behavior. The hardest parts of the system are the parts where you need to convince people to do something—of course, based on best practices.”

Oxford’s Fraser says that he thinks about it in terms of incentives. For the average person, he says, the incentives for adhering to the rules of contact tracing—digital or otherwise—don’t always add up. 

If the result of using the app is that “you end up being quarantined but your neighbor who hasn’t installed the app doesn’t get quarantined,” he says, “that doesn’t necessarily feel fair, right?”

To make matters even more complicated, the UK has announced that it’s about to change its rules. In mid-August, people who have received two doses of a vaccine will no longer need to self-isolate because of covid exposure; they’ll only need to do so if they test positive. About half of the country’s adult population is fully vaccinated.

That could be a moment to bring incentives more in line with what people would be willing to do, he says. “Maybe people should be offered tests so that they can keep going to work and get on with life, rather than be isolated for a number of days.”

In the meantime, though, a handful of corporate leaders—the head of a budget airline, for example—have encouraged employees to delete the app to avoid the pings. Even the two most powerful politicians in the country, Prime Minister Boris Johnson and Chancellor Rishi Sunak, tried to skirt the requirement to isolate after being pinged (saying they were taking part in a trial of alternative measures) before public outcry forced them into quarantine.

When protection creates confusion

The mixed messages are compounded by the app’s privacy-protecting functions. Users aren’t told who among their contacts may have infected them—and they’re not told where any interactions happened. But that isn’t an accident: the apps were designed that way to safeguard people’s information.

“In epidemiology, surveillance is a noble thing,” says Fraser. “In digital tech, it’s a darker thing. I think the privacy-preserving protocol got the balance right. It’s incumbent on science and epidemiology to get information to people while preserving that privacy.”

Be that as it may, those privacy protections are now creating even more confusion.

Alistair Scott, 38, lives with his fiancée in North London. The couple did everything together during lockdown—yet Scott recently got a notification telling him he needed to isolate, while his partner did not. “It immediately became this game of ‘Why did I get pinged and you didn’t?’” he says. 

What’s next

Experts say that there are a few ways forward. One could be to tweak the algorithm: the app could incorporate new science about the length of covid exposure that might merit a ping even if you’re vaccinated. 

“Emerging evidence looks like full vaccination should decrease the risk that someone transmits the virus by around half,” says Parker of the Ada Lovelace Institute. “That could have a sizeable impact on alerts if it was built into the model.”

That means alerts could become less frequent for vaccinated people.
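
The app’s actual scoring model isn’t spelled out here, but a toy sketch shows the shape of such a tweak: weight each exposure by duration and proximity, scale it down by an assumed transmission-risk factor when the index case is fully vaccinated, and ping only above a threshold. Every name and number below is hypothetical.

```python
# Toy sketch of a vaccination-aware exposure score. This is NOT the NHS
# app's real model; the weights, factor, and threshold are all made up.

PING_THRESHOLD = 100.0     # hypothetical score above which the app pings
VACCINATED_FACTOR = 0.5    # assumed ~50% cut in transmission risk

def exposure_score(minutes: float, proximity_weight: float,
                   index_case_vaccinated: bool) -> float:
    """Longer, closer contact scores higher; a vaccinated index case scales it down."""
    score = minutes * proximity_weight
    if index_case_vaccinated:
        score *= VACCINATED_FACTOR
    return score

# A 30-minute close contact (weight 5.0): 150.0 pings, 75.0 does not.
for vaccinated in (False, True):
    score = exposure_score(30, 5.0, vaccinated)
    print(f"vaccinated={vaccinated}: score={score}, ping={score >= PING_THRESHOLD}")
```

Under these made-up numbers, the same 30-minute contact triggers a ping only when the index case is unvaccinated, which is exactly the effect Parker describes: building vaccination into the model would make alerts less frequent.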

On the other hand, Wanger says that NHS leaders could adjust settings to be more sensitive, to reflect the increased transmission risk of variants like delta. There’s no indication that such changes have been made yet.

Either way, she says, what’s important is that the app keep doing its job.

“As a public health authority, when you’re looking at cases rising dramatically within your country, and you’re trying to pursue economic goals by lifting lockdown restrictions—it’s a really hard position to be in,” Wanger says. “You want to nudge people to do behavior changes, but you’ve got this whole psychology aspect to it. If people get notification fatigue, they are not going to change their behavior.”

Meanwhile, people are still being pinged, still feeling confused—and still hearing mixed messages.

Charlotte Wilson, 39, and her husband both downloaded the app onto their phones almost as soon as it was available. But there’s been a split in the household, especially since lawmakers were seen apparently trying to avoid the rules. Faced with the prospect of being told to self-isolate, Wilson said she would follow the advice, while her partner felt differently and deleted the app completely. 

“My husband thought [over the weekend], ‘You know what? This is ridiculous,’” she says. The impending change in self-isolation protocol made it seem especially fruitless.

Still, she understands his view, even if she’s personally keeping the app on her phone. 

“I don’t really know what the answer is as far as society’s concerned,” she says. “We’re just riddled with covid.”

This story is part of the Pandemic Technology Project, supported by The Rockefeller Foundation.

DeepMind says it will release the structure of every protein known to science

Back in December 2020, DeepMind took the world of biology by surprise when it solved a 50-year grand challenge with AlphaFold, an AI tool that predicts the structure of proteins. Last week the London-based company published full details of that tool and released its source code.

Now the firm has announced that it has used its AI to predict the shapes of nearly every protein in the human body, as well as the shapes of hundreds of thousands of other proteins found in 20 of the most widely studied organisms, including yeast, fruit flies, and mice. The breakthrough could allow biologists from around the world to understand diseases better and develop new drugs. 

So far the trove consists of 350,000 newly predicted protein structures. DeepMind says it will predict and release the structures for more than 100 million more in the next few months—more or less all proteins known to science. 

“Protein folding is a problem I’ve had my eye on for more than 20 years,” says DeepMind cofounder and CEO Demis Hassabis. “It’s been a huge project for us. I would say this is the biggest thing we’ve done so far. And it’s the most exciting in a way, because it should have the biggest impact in the world outside of AI.”

Proteins are made of long ribbons of amino acids, which twist themselves up into complicated knots. Knowing the shape of a protein’s knot can reveal what that protein does, which is crucial for understanding how diseases work and developing new drugs—or identifying organisms that can help tackle pollution and climate change. Figuring out a protein’s shape takes weeks or months in the lab. AlphaFold can predict shapes to the nearest atom in a day or two.

The new database should make life even easier for biologists. AlphaFold might be available for researchers to use, but not everyone will want to run the software themselves. “It’s much easier to go and grab a structure from the database than it is running it on your own computer,” says David Baker of the Institute for Protein Design at the University of Washington, whose lab has built its own tool for predicting protein structure, called RoseTTAFold, based on AlphaFold’s approach.
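
As a rough illustration, grabbing a single predicted structure from the public database takes only a few lines of Python. The URL pattern below follows the file-naming scheme the database used at launch and may change over time, so treat it as an assumption.

```python
# Sketch: fetching one predicted structure from the public AlphaFold
# database by UniProt accession. The URL pattern is an assumption based
# on the scheme used at the database's launch.
import urllib.request

UNIPROT_ID = "P69905"  # human hemoglobin subunit alpha, as an example

url = f"https://alphafold.ebi.ac.uk/files/AF-{UNIPROT_ID}-F1-model_v1.pdb"
with urllib.request.urlopen(url) as response:
    pdb_text = response.read().decode()

# PDB files are plain text: ATOM records carry per-atom coordinates, and
# AlphaFold reuses the B-factor column for its per-residue confidence.
atom_lines = [line for line in pdb_text.splitlines() if line.startswith("ATOM")]
print(f"Downloaded {len(atom_lines)} atom records for {UNIPROT_ID}")
```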

In the last few months Baker’s team has been working with biologists who were previously stuck trying to figure out the shape of proteins they were studying. “There’s a lot of pretty cool biological research that’s been really sped up,” he says. A public database containing hundreds of thousands of ready-made protein shapes should be an even bigger accelerator.  

“It looks astonishingly impressive,” says Tom Ellis, a synthetic biologist at Imperial College London studying the yeast genome, who is excited to try the database. But he cautions that most of the predicted shapes have not yet been verified in the lab.  

Atomic precision

In the new version of AlphaFold, predictions come with a confidence score that the tool uses to flag how close it thinks each predicted shape is to the real thing. Using this measure, DeepMind found that AlphaFold predicted shapes for 36% of human proteins with an accuracy down to the level of individual atoms. This is good enough for drug development, says Hassabis.

Previously, after decades of work, the structures of only 17% of the proteins in the human body had been identified in the lab. If AlphaFold’s predictions are as accurate as DeepMind says, the tool has more than doubled that number in just a few weeks.

Even predictions that are not fully accurate at the atomic level are still useful. For more than half of the proteins in the human body, AlphaFold has predicted a shape that should be good enough for researchers to figure out the protein’s function. The rest of AlphaFold’s current predictions are either incorrect, or are for the third of proteins in the human body that don’t have a structure at all until they bind with others. “They’re floppy,” says Hassabis.

“The fact that it can be applied at this level of quality is an impressive thing,” says Mohammed AlQuraishi, a systems biologist at Columbia University who has developed his own software for predicting protein structure. He also points out that having structures for most of the proteins in an organism will make it possible to study how these proteins work as a system, not just in isolation. “That’s what I think is most exciting,” he says.

DeepMind is releasing its tools and predictions for free and will not say whether it has plans to make money from them in the future. It is not ruling out the possibility, however. To set up and run the database, DeepMind is partnering with the European Molecular Biology Laboratory, an international research institution that already hosts a large database of protein information.

For now, AlQuraishi can’t wait to see what researchers do with the new data. “It’s pretty spectacular,” he says. “I don’t think any of us thought we would be here this quickly. It’s mind-boggling.”

An albino opossum proves CRISPR works for marsupials, too

Mice: check. Lizards: check. Squid: check. Marsupials … check.

CRISPR has been used to modify the genes of tomatoes, humans, and just about everything in between. Because of their unique reproductive biology and their relative rarity in laboratory settings, though, marsupials had eluded the CRISPR rush—until now.

A team of researchers at Japan’s Riken Institute, a national research facility, has used the technology to edit the genes of a South American species of opossum. The results were described in a new study out today in Current Biology. The ability to tweak marsupial genomes could help biologists learn more about the animals and use them to study immune responses, developmental biology, and even diseases like melanoma.

“I’m very excited to see this paper. It’s an accomplishment that I didn’t think would perhaps happen in my lifetime,” says John VandeBerg, a geneticist at the University of Texas Rio Grande Valley, who was not involved in the study.

The difficulties of genetically modifying marsupials had less to do with CRISPR than with the intricacies of marsupial reproductive biology, says Hiroshi Kiyonari, the lead author of the new study.

While kangaroos and koalas are better known, researchers who study marsupials often use opossums in lab experiments, since they’re smaller and easier to care for. Gray short-tailed opossums, the species used in the study, are related to the white-faced North American opossums, but they’re smaller and don’t have a pouch.

The researchers at Riken used CRISPR to delete, or knock out, a gene that codes for pigment production. Targeting this gene meant that if the experiments worked, the results would be obvious at a glance: the opossums would be albino if both copies of the gene were knocked out, and mottled, or mosaic, if a single copy was deleted.

The resulting litter included one albino opossum and one mosaic opossum. The researchers also bred the two, which resulted in a litter of fully albino opossums, showing that the coloring was an inherited genetic trait.

The researchers had to navigate a few hurdles to edit the opossum genome. First, they had to work out the timing of hormone injections to get the animals ready for pregnancy. The other challenge was that marsupial eggs develop a thick layer around them, called a mucoid shell, soon after fertilization. This makes it harder to inject the CRISPR treatment into the cells. In their first attempts, needles either would not penetrate the cells or would damage them so the embryos couldn’t survive, Kiyonari says.

The researchers realized that it would be a lot easier to do the injection at an earlier stage, before the coating around the egg got too tough. By changing when the lights turned off in the labs, researchers got the opossums to mate later in the evening so that the eggs would be ready to work with in the morning, about a day and a half later.

The researchers then used a tool called a piezoelectric drill, which uses electric charge to more easily penetrate the membrane. This helped them inject the cells without damaging them.

“I think it’s an incredible result,” says Richard Behringer, a geneticist at the University of Texas. “They’ve shown it can be done. Now it’s time to do the biology,” he adds. 

Opossums have been used as laboratory animals since the 1970s, and researchers have attempted to edit their genes for at least 25 years, says VandeBerg, who started trying to create the first laboratory opossum colony in 1978. They were also the first marsupial to have their genome fully sequenced, in 2007.

Comparative biologists hope the ability to genetically modify opossums will help them learn more about some of the unique aspects of marsupial biology that have yet to be decoded. “We find genes in marsupial genomes that we don’t have, so that creates a bit of a mystery as to what they’re doing,” says Rob Miller, an immunologist at the University of New Mexico, who uses opossums in his research.

Most vertebrates have two types of T cells, one of the components of the immune system (lizards have only one type). But marsupials, including opossums, have a third type, and researchers aren’t sure what these cells do or how they work. Being able to remove the cells and see what happens, or knock out other parts of the immune system, might help them figure out what this mystery cell is doing, Miller says.

Opossums are also used as models for some human diseases. They’re among the few mammals that get melanoma (a skin cancer) like humans.

Another interesting characteristic of opossums is that they are born after only 14 days, as barely more than balls of cells with forearms to help them crawl onto their mother’s chest. These little jelly beans then develop their eyes, back limbs, and a decent chunk of their immune system after they’re already out in the world.

Since so much of their development happens after birth, studying and manipulating their growth could be much easier than doing similar work in other laboratory animals like mice. Kiyonari says his team is looking for other ways to tweak opossum genes to study the animals’ organ development. 

Miller and other researchers are hopeful that gene-edited opossums will help them make new discoveries about biology and about ourselves. “Sometimes comparative biology reveals what’s really important,” he says. “Things that we have in common must be fundamental, and things that are different are interesting.”

Disability rights advocates are worried about discrimination in AI hiring tools

Your ability to land your next job could depend on how well you play one of the AI-powered games that companies like AstraZeneca and Postmates are increasingly using in the hiring process.

Some companies that create these games, like Pymetrics and Arctic Shores, claim that they limit bias in hiring. But AI hiring games can be especially difficult to navigate for job seekers with disabilities.

In the latest episode of MIT Technology Review’s podcast “In Machines We Trust,” we explore how AI-powered hiring games and other tools may exclude people with disabilities. And while many people in the US are looking to the federal commission responsible for employment discrimination to regulate these technologies, the agency has yet to act.

To get a closer look, we asked Henry Claypool, a disability policy analyst, to play one of Pymetrics’s games. Pymetrics measures nine skills, including attention, generosity, and risk tolerance, that CEO and cofounder Frida Polli says relate to job success.

When it works with a company looking to hire new people, Pymetrics first asks the company to identify people who are already succeeding at the job it’s trying to fill and has them play its games. Then, to identify the skills most specific to the successful employees, it compares their game data with data from a random sample of players.
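
Pymetrics’s actual model is proprietary; as a loose sketch of the general approach described above, one could train a simple classifier to separate the game-derived features of top performers from a baseline sample, then read off which traits distinguish them. All data and feature names below are hypothetical.

```python
# Illustrative sketch only: this is NOT Pymetrics's actual model.
# It mimics the described approach: compare game-play features of
# successful employees against a baseline sample of players, and
# surface the traits that best separate the two groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
FEATURES = ["attention", "generosity", "risk_tolerance"]  # 3 of the 9 traits

# Hypothetical game-derived scores: rows are players, columns are traits.
top_performers = rng.normal(loc=[0.7, 0.5, 0.6], scale=0.1, size=(50, 3))
baseline_sample = rng.normal(loc=[0.5, 0.5, 0.5], scale=0.1, size=(500, 3))

X = np.vstack([top_performers, baseline_sample])
y = np.array([1] * len(top_performers) + [0] * len(baseline_sample))

model = LogisticRegression().fit(X, y)

# Large-magnitude coefficients mark the traits that distinguish the
# successful group; new applicants are then scored against the model.
for name, coef in zip(FEATURES, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
print("applicant match score:", model.predict_proba([[0.66, 0.5, 0.58]])[0, 1])
```

In this toy version, an applicant’s predicted probability of belonging to the “successful” group plays the role of a match score; the real system would also need the fairness auditing the company describes.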

When he signed on, the game prompted Claypool to choose between a modified version—designed for those with color blindness, ADHD, or dyslexia—and an unmodified version. This question poses a dilemma for applicants with disabilities, he says.

“The fear is that if I click one of these, I’ll disclose something that will disqualify me for the job, and if I don’t click on—say—dyslexia or whatever it is that makes it difficult for me to read letters and process that information quickly, then I’ll be at a disadvantage,” Claypool says. “I’m going to fail either way.”

Polli says Pymetrics does not tell employers which applicants requested in-game accommodations during the hiring process, which should help prevent employers from discriminating against people with certain disabilities. She added that in response to our reporting, the company will make this information clearer so applicants know that their need for an in-game accommodation is private and confidential.

The Americans with Disabilities Act requires employers to provide reasonable accommodations to people with disabilities. And if a company’s hiring assessments exclude people with disabilities, then it must prove that those assessments are necessary to the job.

For employers, using games such as those produced by Arctic Shores may seem more objective. Unlike traditional psychometric testing, Arctic Shores’s algorithm evaluates candidates on the basis of their choices throughout the game. However, candidates often don’t know what the game is measuring or what to expect as they play. For applicants with disabilities, this makes it hard to know whether they should ask for an accommodation.

Safe Hammad, CTO and cofounder of Arctic Shores, says his team is focused on making its assessments accessible to as many people as possible. People with color blindness and hearing disabilities can use the company’s software without special accommodations, he says, but employers should not use such requests to screen out candidates.

The use of these tools can sometimes exclude people in ways that may not be obvious to a potential employer, though. Patti Sanchez is an employment specialist at the MacDonald Training Center in Florida who works with job seekers who are deaf or hard of hearing. About two years ago, one of her clients applied for a job at Amazon that required a video interview through HireVue.

Sanchez, who is also deaf, attempted to call and request assistance from the company, but couldn’t get through. Instead, she brought her client and a sign language interpreter to the hiring site and persuaded representatives there to interview him in person. Amazon hired her client, but Sanchez says issues like these are common when navigating automated systems. (Amazon did not respond to a request for comment.)

Making hiring technology accessible means ensuring both that a candidate can use the technology and that the skills it measures don’t unfairly exclude candidates with disabilities, says Alexandra Givens, the CEO of the Center for Democracy and Technology, an organization focused on civil rights in the digital age.

AI-powered hiring tools often fail to include people with disabilities when generating their training data, she says. Such people have long been excluded from the workforce, so algorithms modeled after a company’s previous hires won’t reflect their potential.

Even if the models could account for outliers, the way a disability presents itself varies widely from person to person. Two people with autism, for example, could have very different strengths and challenges.

“As we automate these systems, and employers push to what’s fastest and most efficient, they’re losing the chance for people to actually show their qualifications and their ability to do the job,” Givens says. “And that is a huge loss.”

A hands-off approach

Government regulators are finding it difficult to monitor AI hiring tools. In December 2020, 11 senators wrote a letter to the US Equal Employment Opportunity Commission expressing concerns about the use of hiring technologies after the covid-19 pandemic. The letter inquired about the agency’s authority to investigate whether these tools discriminate, particularly against those with disabilities.

The EEOC responded with a letter in January that was leaked to MIT Technology Review. In the letter, the commission indicated that it cannot investigate AI hiring tools without a specific claim of discrimination. The letter also outlined concerns about the industry’s hesitance to share data and said that variation between different companies’ software would prevent the EEOC from instituting any broad policies.

“I was surprised and disappointed when I saw the response,” says Roland Behm, a lawyer and advocate for people with behavioral health issues. “The whole tenor of that letter made the EEOC seem like more of a passive bystander than an enforcement agency.”

The agency typically starts an investigation once an individual files a claim of discrimination. With AI hiring technology, though, most candidates don’t know why they were rejected for the job. “I believe a reason that we haven’t seen more enforcement action or private litigation in this area is due to the fact that candidates don’t know that they’re being graded or assessed by a computer,” says Keith Sonderling, an EEOC commissioner.

Sonderling says he believes that artificial intelligence will improve the hiring process, and he hopes the agency will issue guidance for employers on how best to implement it. He says he welcomes oversight from Congress.

However, Aaron Rieke, managing director of Upturn, a nonprofit dedicated to civil rights and technology, expressed disappointment in the EEOC’s response: “I actually would hope that in the years ahead, the EEOC could be a little bit more aggressive and creative in thinking about how to use that authority.”

Pauline Kim, a law professor at Washington University in St. Louis, whose research focuses on algorithmic hiring tools, says the EEOC could be more proactive in gathering research and updating guidelines to help employers and AI companies comply with the law.

Behm adds that the EEOC could pursue other avenues of enforcement, including a commissioner’s charge, which allows commissioners to initiate an investigation into suspected discrimination instead of requiring an individual claim (Sonderling says he is considering making such a charge). He also suggests that the EEOC consult with advocacy groups to develop guidelines for AI companies hoping to better represent people with disabilities in their algorithmic models.

It’s unlikely that AI companies and employers are screening out people with disabilities on purpose, Behm says. But they “haven’t spent the time and effort necessary to understand the systems that are making what for many people are life-changing decisions: Am I going to be hired or not? Can I support my family or not?”

Review: Why Facebook can never fix itself

The Facebook engineer was itching to know why his date hadn’t responded to his messages. Perhaps there was a simple explanation—maybe she was sick or on vacation.

So at 10 p.m. one night in the company’s Menlo Park headquarters, he brought up her Facebook profile on the company’s internal systems and began looking at her personal data. Her politics, her lifestyle, her interests—even her real-time location.

The engineer would be fired for his behavior, along with 51 other employees who had abused their access to company data, a privilege then available to everyone who worked at Facebook, regardless of job function or seniority. The vast majority of the 51 were just like him: men looking up information about women they were interested in.

In September 2015, after Alex Stamos, the new chief security officer, brought the issue to Mark Zuckerberg’s attention, the CEO ordered a system overhaul to restrict employee access to user data. It was a rare victory for Stamos, one in which he convinced Zuckerberg that Facebook’s design was to blame, rather than individual behavior.

So begins An Ugly Truth, a new book about Facebook written by veteran New York Times reporters Sheera Frenkel and Cecilia Kang. With Frenkel’s expertise in cybersecurity, Kang’s expertise in technology and regulatory policy, and their deep well of sources, the duo provide a compelling account of Facebook during the years spanning the 2016 and 2020 elections.

Stamos’s luck would not last. The issues rooted in Facebook’s business model only escalated in the years that followed, and as Stamos unearthed more egregious problems, including Russian interference in US elections, he was pushed out for making Zuckerberg and Sheryl Sandberg face inconvenient truths. Once he left, the leadership continued to refuse to address a whole host of profoundly disturbing problems, including the Cambridge Analytica scandal, the genocide in Myanmar, and rampant covid misinformation.

The authors, Cecilia Kang and Sheera Frenkel

Frenkel and Kang argue that Facebook’s problems today are not the product of a company that lost its way. Instead they are part of its very design, built atop Zuckerberg’s narrow worldview, the careless privacy culture he cultivated, and the staggering ambitions he chased with Sandberg.

When the company was still small, perhaps such a lack of foresight and imagination could be excused. But since then, Zuckerberg’s and Sandberg’s decisions have shown that growth and revenue trump everything else.

In a chapter titled “Company Over Country,” for example, the authors chronicle how the leadership tried to hide the extent of Russian election interference on the platform from the US intelligence community, Congress, and the American public. Leaders quashed the Facebook security team’s multiple attempts to publish details of what it had found, and cherry-picked the data to downplay the severity and partisan nature of the problem. When Stamos proposed a reorganization of the company to prevent a repeat of the issue, other leaders dismissed the idea as “alarmist” and focused their resources on controlling the public narrative and keeping regulators at bay.

In 2014, a similar pattern began to play out in Facebook’s response to the escalating violence in Myanmar, detailed in the chapter “Think Before You Share.” A year prior, Myanmar-based activists had already begun to warn the company about the concerning levels of hate speech and misinformation on the platform being directed at the country’s Rohingya Muslim minority. But driven by Zuckerberg’s desire to expand globally, Facebook didn’t take the warnings seriously.

When riots erupted in the country, the company’s response further underscored its priorities. It remained silent in the face of two deaths and fourteen people injured, but jumped in the moment the Burmese government cut off Facebook access for the country. Leadership then continued to delay investments and platform changes that could have kept the violence from getting worse, because they risked reducing user engagement. By 2017, ethnic tensions had devolved into a full-blown genocide, which the UN later found had been “substantively contributed to” by Facebook, and which resulted in the killing of more than 24,000 Rohingya Muslims.

This is what Frenkel and Kang call Facebook’s “ugly truth”: its “irreconcilable dichotomy” of wanting to connect people to advance society while also enriching its bottom line. Chapter after chapter makes abundantly clear that it isn’t possible to satisfy both, and that Facebook has time and again chosen the latter at the expense of the former.

The book is as much a feat of storytelling as it is reporting. Whether you have followed Facebook’s scandals closely as I have, or only heard bits and pieces at a distance, Frenkel and Kang weave it together in a way that leaves something for everyone. The detailed anecdotes take readers behind the scenes into Zuckerberg’s conference room known as “Aquarium,” where key decisions shaped the course of the company. The pacing of each chapter guarantees fresh revelations with every turn of the page.

While I recognized each of the events the authors referenced, the degree to which the company sought to protect itself at the cost of others was still worse than I had previously known. Meanwhile, my partner, who read the book alongside me and falls squarely into the second category of reader, repeatedly looked up, stunned by what he had learned.

The authors keep their own analysis light, preferring to let the facts speak for themselves. In this spirit, at the end of their account they refrain from drawing any hard conclusions about what to do with Facebook, or where this leaves us. “Even if the company undergoes a radical transformation in the coming year,” they write, “that change is unlikely to come from within.” But between the lines, the message is loud and clear: Facebook will never fix itself.

Podcast: Playing the job market

Increasingly, job seekers need to pass a series of tests in the form of artificial-intelligence games just to be seen by a hiring manager. In this third of a four-part miniseries on AI and hiring, we speak to someone who helped create these tests, and we ask who might get left behind in the process and why there isn’t more policy in place. We also try out some of these tools ourselves.

We Meet:

  • Matthew Neale, Vice President of Assessment Products, Criteria Corp. 
  • Frida Polli, CEO, Pymetrics 
  • Henry Claypool, Consultant and former member, Obama administration Commission on Long-Term Care
  • Safe Hammad, CTO, Arctic Shores  
  • Alexandra Reeve Givens, President and CEO, Center for Democracy and Technology
  • Nathaniel Glasser, Employment Lawyer, Epstein Becker Green
  • Keith Sonderling, Commissioner, Equal Employment Opportunity Commission (EEOC)

We Talked To: 

  • Aaron Rieke, Managing Director, Upturn
  • Adam Forman, Employment Lawyer, Epstein Becker Green
  • Brian Kropp, Vice President Research, Gartner
  • Josh Bersin, Research Analyst
  • Jonathan Kestenbaum, Co-Founder and Managing Director, Talent Tech Labs
  • Frank Pasquale, Professor, Brooklyn Law School
  • Patricia (Patti) Sanchez, Employment Manager, MacDonald Training Center 
  • Matthew Neale, Vice President of Assessment Products, Criteria Corp. 
  • Frida Polli, CEO, Pymetrics 
  • Henry Claypool, Consultant and former member, Obama administration Commission on Long-Term Care
  • Safe Hammad, CTO, Arctic Shores  
  • Alexandra Reeve Givens, President and CEO, Center for Democracy and Technology
  • Nathaniel Glasser, Employment Lawyer, Epstein Becker Green
  • Keith Sonderling, Commissioner, Equal Employment Opportunity Commission (EEOC)

Sounds From:

  • Science 4-Hire, podcast
  • Matthew Kirkwold’s cover of XTC’s “Complicated Game,” https://www.youtube.com/watch?v=tumM_6YYeXs

Credits:

This miniseries on hiring was reported by Hilke Schellmann and produced by Jennifer Strong, Emma Cillekens, Anthony Green, and Karen Hao. We’re edited by Michael Reilly.

Transcript

[TR ID]

Jennifer: Often in life … you have to “play the metaphorical game”… to get the win you might be chasing.

(sounds from Matthew Kirkwold’s cover of XTC’s “Complicated Game”: “And it’s always been the same.. It’s just a complicated game.. Gh – ah.. Game..”)

Jennifer: But what if that game… was literal?

And what if winning at it could mean the difference between landing a job you’ve been dreaming of… or not.

Increasingly job seekers need to pass a series of “tests” in the form of artificial-intelligence games… just to be seen by a hiring manager.

Anonymous job seeker: For me, being a military veteran being able to take tests and quizzes or being under pressure is nothing for me, but I don’t know why the cognitive tests gave me anxiety, but I think it’s because I knew that it had nothing to do with software engineering that’s what really got me.

Jennifer: We met this job seeker in the first episode of this series … 

She asked us to call her Sally because she’s criticizing the hiring methods of potential employers and she’s concerned about publishing her real name.

She has a graduate degree in information from Rutgers University in New Jersey, with specialties in data science and interaction design.

And Sally fails to see how solving a timed puzzle… or playing video games like Tetris… has any real bearing on her potential to succeed in her field.

Anonymous job seeker: And I’m just like, what? I don’t understand. This is not relevant. So companies want to do diversity and inclusion, but you’re not doing diversity and inclusion when it comes to thinking, not everyone thinks the same. So how are you inputting that diversity and inclusion when you’re only selecting the people that can figure out a puzzle within 60 seconds. 

Jennifer: She says she’s tried everything to succeed at games like the ones from Cognify she described… but without success. 

She was rejected from multiple jobs she applied to that required these games.

Anonymous job seeker: I took their practice exams. I was practicing stuff on YouTube. I was using other peers and we were competing against each other. So I was like, all right, it’s not me because I studied for this. And I still did not quote unquote pass so…

Jennifer: I’m Jennifer Strong and in this third episode of our series on AI and hiring… we look at the role of games in the hiring process…

We meet some of the lead creators and distributors of these tools… and we share with them some feedback on their products from people like Sally.

Matthew Neale: The disconnect I think for this candidate was between what the assessment was getting the candidate to do and, and what was required or the perceptions about what was required on the job.

Jennifer: Matthew Neale helped create the Cognify tests she’s talking about.

Matthew Neale: I think the intention behind Cognify is to look at people’s ability to learn, to process information, to solve problems. You know, I would say, I suppose that these kinds of skills are relevant in, in software design, particularly in, in software design where you’re going to be presented with complex difficult or unusual problems. And that’s the connection that I would draw between the assessment and the role.

Jennifer: So we tested some of these tools ourselves…

And we ask who might get left behind in the process… Plus, we find out why there isn’t more policy in place … and speak with one of the leading US regulators. 

Keith Sonderling: There has been no guidelines. There’s been nothing specific to the use of artificial intelligence, whether it is resume screening, whether it’s targeting job ads or facial recognition or voice recognition, there has been no new guidelines from the EEOC since the technology has been created.

[SHOW ID]

Frida Polli: So I’m Frida Polli. I’m a former academic scientist. I spent 10 years at Harvard and MIT and I am the current CEO of a company called Pymetrics.

Jennifer: It’s an AI-games company that uses behavioral science and machine learning to help decide whether people are the right fit for a given job.   

Frida Polli: I was a scientist who really loved the research I was doing. I was, at some point, frustrated by the fact that it wasn’t there wasn’t a lot of applications, real-world applications. So I went to business school looking for a problem, essentially, that our science could help solve. 

Jennifer: When I spoke to her earlier this year she called this her fundamental “aha” moment in the path to creating her company.

Frida Polli: Essentially, people were trying to glean cognitive, social, and emotional aptitudes, or what we call soft skills, from a person’s résumé, which didn’t seem like the optimal thing to do. If you’re trying to understand somebody more holistically, you can use newer behavioral science tools to do that. So ultimately just had this light bulb go off, thinking, okay, we know how to measure soft skills. We know how to measure the things that recruiters and candidates are looking to understand about themselves in a much more scientific objective way. We don’t have to tea-leaf-read off a résumé.

Jennifer: The reason companies score job seekers is because they get way too many applications for open roles. 

Frida Polli: You know if we could all wave our magic wand and not have to score people and magically distribute opportunity. I mean, my God, I’m all in all in. Right? And what we can do, I think, is make these systems as fair and predictive as possible, which was always kind of the goal. 

Jennifer: She says Pymetrics does this using cognitive research… and they don’t measure hard skills…like whether someone can code… or use a spreadsheet.

Frida Polli: The fundamental premise is that we all sort of have certain predispositions and they’ll lead us to be more versus less successful. There’s been a lot of research showing that, you know, different cognitive, social and emotional, or personality attributes do make people particularly well suited for, you know, role A and less well suited for role B. I mean, that research has, you know, predates Pymetrics and all we’ve done is essentially make the measurement of those things less reliant on self-report questionnaires and more reliant on actually measuring your behavior. 

Jennifer: These games measure nine specific soft skills including attention and risk preference… which she says are important in certain jobs. 

Frida Polli: It’s not super deterministic. It can change over time. But it’s a broad brush stroke of like, Hey, you know, if you tend to be like, let’s take me for a second, I tend to be somewhat impulsive, right. That would make me well disposed for certain roles, but potentially not others. So I guess what I would say is that both hard and soft skills are important for success in any particular role and the particular mix… it really depends on the role at hand, right?

Jennifer: Basically it works like this: employees who’ve been successful in a particular job a company is hiring for are asked to play these games. That data gets compared against people already in a Pymetrics database… The idea is to build a model that identifies and ranks the skills unique to this group of employees… and to remove bias.

Jennifer: All of that gets compared against incoming job applicants… And it’s used by large global companies including Kraft Heinz and AstraZeneca.

Another big player in this field is a company called Arctic Shores. Their games are used by the financial industry… and by large companies mainly in Europe.  

Safe Hammad: The way we recruit was and in many cases is broken. 

Jennifer: Safe Hammad is a cofounder and CTO of Arctic Shores.

Safe Hammad: But companies are recognizing that actually they can do better. They can do better on the predictability front to improve the bottom line for the companies. And also they can do better on the bias front as well. It’s a win-win situation: by removing the bias, you get more suitable people, the right people, in your company. I mean, what’s not to like? 

Jennifer: The same as Pymetrics, Arctic Shores teases out personality traits via AI-based “video games.” 

Safe Hammad:  So, the way we measure something like sociability isn’t “Oh, you’re in a room and you want to go and talk to someone,” or, you know, actually, you really wouldn’t realize, you know, there’s a few tasks where we ask you to choose left, choose right, and press you a little bit. And we come out with a measure of sociability. I mean, for me, it’s magic. I mean, I understand the science a little bit underneath. I certainly understand the mathematics, but  it’s like magic. We actually don’t put you in a scenario. That’s anything to do with sociability. And yet, if you look at the stats, the measurements are great. 

Jennifer: He says the company’s tools are better than traditional testing methods, because the games can pull out traits of job applicants that might otherwise be hard to figure out. Like whether they’re innovative thinkers… something most candidates would probably just answer with yes if they were asked in a job interview. 

Safe Hammad: When you ask questions, they can be faked. When I ask a question about, you know, how would you react if you’re in this position? You’re not thinking, oh, how would I react? You’re thinking, oh, what does the person asking me want me to say, that’s going to give me the best chance of getting that job. So without asking questions, by not asking questions, it’s a lot harder to fake and it’s a lot less subjective.

Jennifer: And to him, more data equals more objective hiring decisions. 

Safe Hammad: It’s about seeing more in people before you bring in some of this information which can lead to bias, so long as in the first stage of the process, you’re blind to their name, the school they went to, what degree they got, and you just look at the psychometrics, what is their potential for the role? That’s the first thing we need to answer.

Jennifer: He says games give everyone a fair chance to succeed regardless of their background. Both Pymetrics and Arctic Shores say their tools are built on well-established research and testing.

Safe Hammad: Now, these interactive tasks are very carefully crafted based on the scientific literature based on a lot of research and a lot of sweat that’s gone into making these, we actually can capture thousands of data points and, and a lot of those are very finely nuanced. And by using all that data, we’re able to really try and hone in on some of the behaviors that will match you to those roles. 

Jennifer: And he says explainability of results is key in building trust in these new technologies. 

Safe Hammad: So we do use AI, we do use machine learning to try and inform us to help us build that model. But the final model itself is more akin to what you would find a traditional psychometrics. It means that when it comes to our results, we can actually give you the results. We can tell you where you lie on the low to medium to high scale for creativity, for resilience, for learning agility. And we will stand by that.

Jennifer: And he says his company is also closely monitoring if the use of AI games — leads to better hiring decisions. 

Safe Hammad: We’ll always be looking at the results, you know, has the outcome actually reflected what we said would happen. Are you getting better hires? Are they actually fulfilling your requirements? And, and better doesn’t necessarily mean, Hey, I’m more productive. Better can mean that they’re more likely to stay in the role for a year.  

Jennifer: But not everyone is feeling so optimistic. Hilke Schellmann is a professor of journalism at NYU who covers the use of AI in hiring… She’s been playing a whole lot of these games… as well as talking to some critics. She’s here to help put it all into context.

Hilke: Yeah Jen.. AI-based video games are a recent phenomenon of the last decade or so. It’s a way for job applicants to get tested for a job in a more “fun” way…(well…  that’s at least how vendors and employers pitch it), since they are playing “video games” instead of answering lots of questions in a personality test for example. 

Jennifer: So, what types of games are you playing? 

Hilke: So… I played a lot of these games. For one company, I had to essentially play a game of Tetris and put different blocks in the right order. I also had to solve basic mathematical problems and test my language skills by finding grammar and spelling mistakes in an email.

Jennifer: But these aren’t like the ones you might play at home. There’s no sound in these games… and they look more like games from the 1980’s or early 90’s… and something that surprised me was all the biggest companies in this space seem to really be into games about putting air in balloons… What’s that about?

Hilke: Well.. The balloon game apparently measures your appetite for risk. When I played the game… I figured out pretty early on that yellow and red balloons pop after fewer pumps than the blue balloons, so I was able to push my luck with blue balloons and bank more money. But while I was playing I was also wondering if this game really measures my risk taking preferences in real life or if this only measures my appetite for risk in a video game. 

Jennifer: Yeah, I could be a real daredevil when playing a game, but totally risk averse in real life. 

Hilke: Exactly…  And..  that’s one concern about AI-games. Another one is whether these games are actually relevant to a given job. 

Jennifer: Ok, so help me understand then why companies are interested in using these games in the first place… 

Hilke: So, Jen… from what I’ve learned, these AI-games are most often used for entry-level positions. So they are really popular with companies that hire recent college graduates. At that point, most job applicants don’t have a ton of work experience under their belts, and personality traits start to play a larger role in finding the right people for the job. And oftentimes, traits like agility or learning capability become more important for employers.

Jennifer: Right… And companies are more likely to need to change up the way they do business now… so it means some skills wind up with a shorter shelf life. 

Hilke: Yeah.. so in the past it may have been enough to hire a software developer with python skills, because that’s what a company needed for years to come. But these days, who knows how long a specific programming language is relevant in the workplace. Companies want to hire people who can re-train themselves and are not put off by change. And… these AI-games are supposed to help them find the right people. 

Jennifer: So, that’s the sales pitch from the vendors. But Walmart.. (one of the biggest employers in the U-S)… they shared some of their findings with these technologies in a recent episode of an industry podcast called Science 4-Hire. 

David Futrell: There’s no doubt that what we run is the biggest selection and assessment machine that’s ever existed on the planet. So, I mean, we test, everyday, between ten and fifteen thousand people and that’s just entry level hires for the stores. 

Jennifer: That’s David Futrell. He is the senior director of organizational performance at Walmart. 

David Futrell: When this machine learning idea first came out, I was very excited by it because, you know, it seemed to me like it would solve all of the problems that we had with prediction. And so we really got into it and did a lot of work with trying to build predictors using these machine based algorithms. And they work. But whenever it’s all said and done, they don’t really work any better than, you know, doing it the old fashioned way. 

Jennifer: And he told the host of that podcast that Walmart acquired a company that was using a pure games based approach…

David Futrell: And uh we found that it just didn’t work well at all. I won’t mention the company, but it’s not the big company that it’s in this space. But they were purported to measure some underlying aspects of personality, like your willingness to take risks and so on.

Jennifer: And a concern with these and other AI hiring tools (one that goes beyond whether they work better than what they’re replacing)… is whether they work equally well on different groups of people…

 [sound of mouse clicking] 

including those with disabilities. 

Henry Claypool: I’m just logging in now.

Jennifer: Henry Claypool is a disability policy analyst… and we asked him to play some of these games with us. 

Henry Claypool: Okay, here we go…complete games.

Jennifer: He’s looking at one of the opening screens of a Pymetrics game. It asks players to select whether they want to play a version that’s modified for color blindness, ADHD, or dyslexia… or if they’d rather play a non-modified version.

Henry Claypool: Seems like that alone would be a legal challenge here.

Jennifer: He thinks it might even violate the Americans with Disabilities Act…. or A-D-A.

Henry Claypool: This is a pre-employment disclosure of disability, which could be used to discriminate against you. And so you’re putting the applicant on the horns of a dilemma, right — do I choose to disclose and seek an accommodation or do I just push through? The thing is you’re not allowed to ask an applicant about their disability before you make a job offer.

Jennifer: Claypool himself has a spinal cord injury from a skiing accident during his college years… It left him without the use of his legs and an arm.

But this hasn’t held back his career… He worked in the Obama administration and he helps companies with their disability policies. 

Henry Claypool: The fear is that if I click one of these, I’ll disclose something that will disqualify me for the job, and if I don’t click on say dyslexia or whatever it is that I’ll be at a disadvantage to other people that read and process information more quickly. Therefore I’m going to fail either way or either way now my anxiety is heightened because I know I’m probably at a disadvantage.

Jennifer: In other words, he’s afraid if he discloses a disability this soon in the process… it might prevent him from getting an interview.

Henry Claypool: Ooops… am I… oh I’m pumping by the mouse.

Jennifer: Pymetrics’ suite of games starts with one where people get money to pump up balloons… and they have to bank it before a balloon pops.  

Henry Claypool: Okay. Now carpal tunnel is setting in..

Jennifer: A few minutes into the game it’s getting harder for him. 

Henry Claypool:  I really hate that game. I just, I don’t see any logic in there at all. Knowing that I’m being tested by something that doesn’t want me to understand what it’s testing for makes me try to think through what it’s anticipating.  

Jennifer: In other words, he has a dialogue going in his head trying to figure out what the system might want from him. And that distracts him from playing… so much that he’s afraid he might not be doing well.

And the idea that he and his peers have to play these games to get a job… doesn’t sit right with him. 

He believes it’ll be harder for those with disabilities to get hired if that personal interaction early on in the process is taken away. 

Henry Claypool: It’s really, it’s too bad that we’ve lost that human touch. And is there a way to use the merits of these analytic tools without leaving people feeling so vulnerable? And I feel almost scared and a little bit violated, right? That I’ve been probed in ways that I don’t really understand. And that feels pretty bad.

[music transition]

Alexandra Givens: When you think about the important role of access to employment, right? This is the gateway to opportunity for so many people. It’s a huge part, not only of economic stability, but also personal identity for people.

Jennifer: Alexandra Givens is the CEO of the Center for Democracy and Technology.

Alexandra Givens: And the risk that new tools are being deployed that are throwing up artificial barriers in a space that already has challenges with access is really troubling.

Jennifer: She studies the potential impacts of hiring algorithms on people with disabilities. 

Alexandra Givens: When you’re doing algorithmic analysis, you’re looking for trends and you’re looking for kind of the statistical majority. And by definition, people with disabilities are outliers. So what do you do when an entire system is set up to not account for statistical outliers and not only not to account for them, but to actually end up intentionally excluding them because they don’t look like the statistical median that you’re gathering around. 

Jennifer: She’s the daughter of the late Christopher Reeve, best known for his film roles as Superman until a horseback riding accident left him paralyzed from the neck down.

About 1 in 5 people in the U-S will experience disability at some point in their lives… and like Claypool she believes this type of hiring may exclude them.

Alexandra Givens: You hear people saying, well, this is actually the move toward future equality, right? HR run by humans is inherently flawed. People are going to judge you based on your hairstyle or your skin color, or whether you look like they’re friends or not their friends. And so let’s move to gamified tests, which actually don’t ask what someone’s resume is or doesn’t mean that they have to make good conversation in an interview. We want to see this optimistic story around the use of AI and employees are buying into that without realizing all of the ways in which these tools really can actually entrench discrimination and in a way even worse than human decision-making because they’re doing it at scale and they’re doing it in a way that’s harder to detect than individualized human bias because it’s hidden behind the decision-making of the machine.

Jennifer: We shared a specific hiring test with Givens… where you have to hit the spacebar as fast as you can. She says this game might screen out people with motor impairments — maybe even people who are older.

Alexandra Givens: They’re trying to use this as a proxy, but is that proxy actually a fair predictor of the skills required for the job? And I would say here, the answer is no for a certain percentage of the population. And indeed the way in which they’re choosing to test this is affirmatively going to screen out a bunch of people across the population in a way that’s deeply unfair.

Jennifer: And since job applicants don’t know what’s in these AI-games before they take a test… how do they know if they need to ask for an accommodation?

Also, she says people with disabilities might not want to ask for one anyway… if they’re afraid that could land their application in Pile B… and an employer may never look at Pile B.

Alexandra Givens: This isn’t just about discrimination against disabled people as a protected class. This is actually a question about the functioning of our society. And I think that’s pulling back that I think is one of the big systemic questions we need to raise here. Increasingly as we automate these systems and employers push to what’s most fastest, and most efficient, they’re losing the chance for people to actually show their qualifications and their ability to do the job and the context that they bring when they tell that story. And that is a huge loss. It’s a moral failing. I think it has legal ramifications, but that’s what we need to be scared about when we think about entrenching inequality in the workforce.

[Music transition]

Jennifer: After the break… a regulator in Washington explains why his agency hasn’t issued any guidance on these tools.

But first… I’d like to invite you along for EmTech MIT in September. It’s Tech Review’s annual flagship conference and I’ll be there with the rest of the newsroom to help unpack the most relevant issues and technologies of our time. You can learn more at EmTech M-I-T dot-com.

We’ll be right back.

[MIDROLL]

Jennifer: I’m back with our reporting partner on this series, Hilke Schellmann… and as we just heard, access and fairness of these hiring tools for people with disabilities is a big concern…And…Hilke, did you expect to find this when you set out to do your research? 

Hilke: Actually – that surprised me. I kinda expected to see a bias against women and people of color.. because we’ve seen that time and time again.. And it’s widely acknowledged that there’s a failing there… But I didn’t expect that people with disabilities would be at risk too. And all this made me ask another question. Are the algorithms in these games really making fair and unbiased decisions for all candidates?

Jennifer: And… so.. are the decisions fair? 

Hilke: Well, actually no. Early on in my research.. I spoke to employment lawyers who deal with a lot of companies planning on doing business with AI-hiring vendors. They told me that they’re no strangers to problems with these algorithms… and they shared with me something they haven’t shared publicly before.

Nathaniel Glasser: I think the underlying question was do these tools work? And I think the answer is, um, in some circumstances they do. And in some circumstances they don’t, a lot of it is, it is both vendor slash tool dependent and, also employer dependent and, and how they’re being put to work. And then practically, what’s the tool doing? And to the extent that we see problems and more specifically an adverse impact on a particular group, what are the solutions for addressing those issues? 

Jennifer: Nathaniel Glasser is an employment lawyer in Washington, DC.

Nathaniel Glasser: So monitor, monitor, monitor, and if we see something wrong, let’s make sure that we have a plan of attack to address that. And that might be changing the algorithm in some sense, changing the inputs or if it doesn’t work, just making that decision to say, actually this tool is not right for us. It’s unfortunate that you know, we spent a little bit of money on it, but in the long run, it’s going to cause more problems than it’s worth. And so let’s cut ties now and move forward. And I’ve been involved in that situation before. 

Jennifer: And he recalls a specific incident involving a startup vendor of AI-games. 

Nathaniel Glasser: And unfortunately after multiple rounds in beta prior to going live, the tool demonstrated adverse impact against the female applicants and no matter the tweaks to the inputs and the traits and, and, and the algorithm itself, they couldn’t get confident that it wouldn’t continue to create this adverse impact. And they ultimately had to part ways and they went out to the market and they found something else that worked for them. Now that initial vendor, that was a startup five years ago, has continued to learn and grow and do quite well in the market. And, and I’m very confident, you know, that they learned from their mistakes and in working with other companies have figured it out.

Hilke: So, unfortunately the two lawyers signed a non-disclosure agreement and we don’t know which companies he’s talking about. 

Jennifer: We only know that the company is still out there… and grew from a startup into an established player. 

Hilke: And that could indicate that they fixed their algorithm or it could mean that no one’s looking…

Jennifer: And that’s something that comes up again and again. There’s no process that decides which AI-hiring tools are fair game… and anyone can bring any tool to market.

Hilke: Yeah… and the Equal Employment Opportunity Commission is the regulator of hiring and employment in the United States… But they’ve been super quiet. So that’s probably why we’re now seeing individual states and cities starting to try to regulate the industry, but everyone is still kind of waiting for the commission to step in.

Jennifer: So we reached out to the E-E-O-C and connected with Keith Sonderling… He’s one of the commissioners who leads the agency. 

Keith Sonderling: Well, since the 1960s, when the civil rights laws were enacted, our mission has been the same and that’s to make sure that everyone has an equal opportunity in the workplace… to enter the workplace and to succeed in the workplace. 

Jennifer: Women, immigrants, people of color, and others have often had fewer workplace opportunities because of human bias… and despite its challenges, he believes AI has the potential to make some decisions more fair. 

Keith Sonderling: So, I personally believe in the benefits of artificial intelligence in hiring. I believe that this is a technology that can fundamentally change how both employees and employers view their working relationship from everything of getting the right candidates to apply, to finding the actual right candidates to then when you’re working to making sure you’re in the job that is best suited for your skills or learn about other jobs that may, you may be even better at that you didn’t even know that a computer will help you understand. So there’s unlimited benefits here. Also, it can help diversify the workforce. So I think it is an excellent way to eliminate bias in recruiting and promotion, but also more importantly, it’s going to help employers find the right candidates who will have high level job satisfaction. And for the employees too, they will find the jobs that are right for them. So essentially it’s a wonderful matchmaking service.

Jennifer: But he’s also aware of the risks. He believes bad actors could exclude people like older workers… by doing things like programming a resume parser to reject resumes from people with a certain amount of experience. 

And he says the tools themselves could also discriminate…. unintentionally. 

Keith Sonderling: For instance, if an employer wants to use AI to screen through 500,000 resumes of workers to find people who live nearby. So they’re not late to work, say it’s a transportation company and the buses need to leave on time. So I’m only going to pick people who live in one zip code over from my terminal. And you know, that may exclude a whole protected class of people based on the demographics of that zip code. So the law will say that that person who uses AI for, intentionally, for wrong versus an employer who uses it for the right reasons, but winds up violating the law because they have that disparate impact based on those zip codes are equally held liable. So there’s a lot of potential liability for using AI unchecked.

Jennifer: Unintentional discrimination is called disparate impact… and it’s a key thing to watch in this new age of algorithmic decision making.
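A common yardstick for spotting it is the “four-fifths rule” from the EEOC’s Uniform Guidelines: if one group’s selection rate falls below 80 percent of the highest group’s rate, the test is flagged for adverse impact. Here is a minimal sketch of that check; the pass counts are hypothetical.

```python
# Minimal sketch of the EEOC "four-fifths rule" check for adverse impact.
# The pass counts below are hypothetical, not from any real assessment.

def adverse_impact_ratio(rates: dict) -> float:
    """Ratio of the lowest group's selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

applicants = {"men": (200, 400), "women": (120, 400)}  # (passed, applied)
rates = {group: passed / applied
         for group, (passed, applied) in applicants.items()}

print(rates)                                  # {'men': 0.5, 'women': 0.3}
print(round(adverse_impact_ratio(rates), 2))  # 0.6, below the 0.8 threshold
```

In this example women pass at 30 percent versus 50 percent for men, a ratio of 0.6, well under the 0.8 threshold that triggers scrutiny.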

But… with these systems… how do you know for sure you’re being assessed differently? Most of the laws and guidelines that steer the agency were established more than 40 years ago… back when it was much easier for employees to know when and how they were being evaluated.

Keith Sonderling: Well, that could be potentially the first issue of using AI in the hiring process is that the employee may not even know they’re being subject to tests. They may not even know a program is monitoring their facial expressions as part of the interview. So that is a very difficult type of discrimination to find if you don’t even know you’re being discriminated against, how could you possibly bring a claim for discrimination?

Jennifer: Sonderling says that employers should also think long and hard about using AI tools that are built on the data of their current workforce. 

Keith Sonderling: Is it going to have a disparate impact on different protected classes? And that is the number one thing employers using AI should be looking out for is the ideal employee I’m looking for? Is that just based on my existing workforce, which may be of a certain race, gender, national origin. And am I telling the computer that’s only who I’m looking for? And then when you get 50 resumes and they’re all identical to your workforce, there’s going to be some significant problems there because essentially the data you have fed that algorithm is only looking towards your existing workforce. And that is not going to create a diverse workforce with potentially workers from all different categories who can actually perform the jobs.

Jennifer: Experts we’ve talked to over the course of this reporting believe there’s enough evidence that some of these tools do not work as advertised and potentially harm women, people of color, people with disabilities and other protected groups… and they’ve criticized the agency’s lack of action and guidance. 

The last hearing it held on big data was in 2016… and a whole lot has changed with this technology since then.

And so we asked the commissioner about that.

Keith Sonderling: There has been no guidelines. There’s been nothing specific to the use of artificial intelligence, whether it is resume screening, whether it’s targeting job ads or facial recognition or voice recognition, there has been no new guidelines from the EEOC since the technology has been created.

Jennifer: And we wanted to understand how that fits with the agency’s mission…

Keith Sonderling: Well, my personal belief is that the EEOC is more than just an enforcement agency. Yes, we are a civil law enforcement agency. That’s required by law to bring investigations and to bring federal lawsuits. But part of our mission is to educate employees and employers. And this is an area where I think the EEOC should take the lead within the federal government.

Jennifer: What might be surprising here is that the question of whether these tools work as advertised and pick the best people isn’t the agency’s concern…

Keith Sonderling: Companies have been using similar assessment tests for a very long time and whether or not those tests are actually accurate and predict success of an employee, you know, that is beyond the scope of our job here at the EEOC. The only thing that the EEOC is concerned with when these tests are being instituted is, is it discriminating against a protected class? That is our purpose. That is our responsibility. And whether or not the tools actually work, and whether or not a computer can figure out is this employee in this position, in this location going to be an absolute superstar versus, you know, this employee in this location who should be doing these tasks, is that going to make them happy and going to make them productive for a company, that’s beyond the scope of federal EEO law.

Jennifer: But.. If an AI tool passes a disproportionate number of men vs women, the agency can start investigating. And then, that question of whether the tool works or not, may become an important part of the investigation. 

Keith Sonderling: It becomes very relevant when the results of the test have a disparate impact on a certain protected characteristic. So say if a test, a cognitive test, for some reason, excludes females, as an example, you know, then the employer would have to show if they want to move forward with that test and validate that test, they would then need to show that there is a business need and is job related that the tests we’re giving is excluding females. And, you know, that is a very difficult burden for employers to prove. And it can be very costly as well.  

Jennifer: He’s contemplating something called a commissioner’s charge, a move that would allow him to force the agency to start an investigation… and he’s asking the public for help.

Keith Sonderling: So if an individual commissioner believes that discrimination is occurring, any areas of the laws we enforce, whether it’s disability, discrimination, sex discrimination, or here, AI discrimination, we can file a charge against the company ourselves and initiate an investigation. Now to do that, we need very credible evidence, and we need people to let us know this is happening, whether it’s a competitor in an industry, or whether it’s an individual employee who is afraid to come forward in their own name, but may be willing to allow a commissioner to go, or many commissioner charges have begun historically off watching the news, reading a newspaper. So there’s a lot of ways that the EEOC can get involved here. And that’s something I’m very interested in doing. 

Jennifer: In the meantime, individual states and cities are starting to try to regulate the use of AI in hiring on their own… but a patchwork of laws that differ state by state can make it a whole lot harder for everyone to navigate an emerging field.

[music]

Jennifer: Next episode… what happens when AI interviews AI? We wrap up this series with a look at how people are gaming these systems… From advice on what a successful resume might look like…to classes on YouTube about how to up your odds of getting through A-I gatekeepers. 

Narrator: My aim today is to help you get familiar and comfortable with this gamified assessment. In this game you’re presented with a number of balloons individually that you’re required to pump. Try the balloon game now.

[CREDITS]

Jennifer: This miniseries on hiring was reported by Hilke Schellmann and produced by me, Emma Cillekens, Anthony Green and Karen Hao. We’re edited by Michael Reilly.

Thanks for listening… I’m Jennifer Strong.

Blue Origin takes its first passengers to space

This time, there was a blastoff.  

Blue Origin founder Jeff Bezos and three other civilians watched the sky turn from blue to black this morning as the company’s reusable rocket and capsule system New Shepard passed the Kármán line, the boundary between Earth’s atmosphere and outer space.  

Around 9:25 a.m. US Eastern time, Bezos and his fellow passengers landed safely, successfully completing the company’s first crewed suborbital flight—a major step in Blue Origin’s efforts to provide commercial space flights to paying customers.  

Compared with the launch earlier this month of Virgin Galactic’s SpaceShipTwo, a type of spaceplane that carried founder Richard Branson to space, Bezos’s trip was more reminiscent of a NASA mission, with a vertical takeoff, parachutes, and a soft landing. 

Ramon Lugo III, an aerospace engineer and director of the Florida Space Institute, says that although this is the second crewed launch carrying people who aren’t astronauts in the classical sense, Blue Origin’s mission represents a bigger opportunity for commercial space tourism.

The main difference was how the two missions got to space. Virgin Galactic’s took about an hour and involved an aircraft that carried the spaceplane with the crew to a specific altitude before releasing it. The spaceplane then ignited its rocket engines to travel even higher before gliding back to Earth.  

“If you look at Branson’s spacecraft, he’s really creating a transportation system that is very much like a commercial airline. You’re going to take off at an airport and you’re going to land at an airport,” says Lugo.

Bezos’s is what most aerospace engineers would call a more traditional take on crewed spacecraft, Lugo says. Blue Origin’s entire launch and reentry took about 10 minutes. The crew launched from within a capsule attached to the nose of a rocket, which detached and returned to Earth as the crew capsule continued into space, reaching a maximum height of 351,210 feet before beginning its fall back to Earth and then deploying parachutes to land.

Regardless of their differences, experts say, both flights represent major milestones in the future of spaceflight.  

“These vehicles are reimagining travel just as the pioneers of early airplanes did,” says Elaine Petro, a professor of mechanical and aerospace engineering at Cornell.  

Beyond getting humans closer to orbit, Petro says, both Virgin Galactic and Blue Origin could advance new approaches to faster cross-continental travel, since both vehicles can reach speeds four to five times those of a regular airplane.  

Petro is encouraged by the pace of progress she’s seen in the industry. “Ten years ago, the Obama administration was pushing for the expansion of the commercial launch vehicle industry. Now two public space travel platforms have flown crews in the last week, and SpaceX is contracted to ferry astronauts to the moon,” she says. 

And what’s next for Blue Origin? Although commercial space tourism is just getting started, Bezos hopes that launching more flights could bring down the cost so that in the next few decades, everyone can have a chance to experience the beauty of life above Earth. 

How Zello keeps people connected during South Africa’s unrest

On June 29, former South African president Jacob Zuma was sentenced to 15 months in prison for corruption during his presidency. Zuma—the first ethnic Zulu to hold the country’s highest office—has a loyal following. He also has many detractors, who blame his administration’s corruption for a stagnant economy and weakened democracy.

Zuma didn’t turn himself in until July 7, saying he was innocent and that jail could kill him at 79 years old. Within hours, protests and widespread looting, particularly in his home city of Durban, were reported as supporters stationed themselves around his compound and challenged police. That violence has led to at least 215 deaths and more than 2,500 arrests.

For South Africans like Amith Gosai, keeping track of what was happening on the ground was hard. His WhatsApp chats were flooded and confusing. Then he saw a note on his community WhatsApp group urging neighbors to join a sort of neighborhood watch channel on Zello, a “walkie-talkie” app that is fast becoming a tool for protest communication. 

“This helped us tremendously to create awareness around the community as well as to quell fears,” Gosai told me via Twitter DM. 

Gosai, who is also from Durban, was among 180,000 people who downloaded Zello in the wake of Zuma’s arrest. Users subscribe to channels to talk to each other, sending live audio files that are accessible to anyone listening in on the channel.

Zello was originally designed to help people communicate and organize after natural disasters. With Wi-Fi or a data connection, people can use it to broadcast their location, share tips, and communicate with rescuers or survivors in the aftermath of a hurricane, flood, or other emergency. In the US, Zello found traction in 2017’s Hurricane Harvey rescue efforts. The app is also used by taxi drivers, ambulance workers, and delivery personnel who want to send hands-free voice messages, says Raphael Varieras, Zello’s vice president of operations. Because Zello is a voice-first platform, it’s faster than typing and requires no literacy skills.

But recent events suggest that Zello is increasingly being used to connect people in areas of unrest as well. Within hours of the most recent Israeli-Palestinian conflict, downloads skyrocketed to 100 times their usual rate, for example. And Cuba also saw a spike in downloads amid protests over shortages of food and medicine. Unsurprisingly, this development has prompted some countries to ban the app, including China, Venezuela, and Syria.

Without a formal emergency response system like the US’s 911, South Africans have been increasingly turning to Zello to coordinate ad hoc ambulances and neighborhood patrols. One channel, South Africa Community Action Network, boasts 11,600 paying members who donate for emergency services like ambulances, along with more than 33,000 non-paying members, according to a blog post on the site.

One Twitter user in South Africa I spoke to (who requested anonymity in light of the current dangerous situation) said that some people were using Zello to figure out which houses and storefronts were ripe for looting, while others were tuning in to gauge whether they should flee or stay where they are. 

Another user, Javhar Singh, said via Twitter DM that he was using it as “live communication among community members to notify us about the whereabouts of looters,” adding: “It is way faster than the news.”

Crucially in such a tense situation, Zello is anonymous. “People don’t have access to your personal number like in WhatsApp,” says Gosai. 

The speed, anonymity, and intimacy created by voice make Zello feel urgent. But those same characteristics could breed misinformation, which Zello does not currently monitor—anyone can use the app at any time to say whatever they want. In fact, Zello was used in planning and carrying out the January 6 insurrection at the US Capitol.

Zello’s popularity in South Africa also proves that online audio isn’t just a 2020 trend. Audio chat rooms on Clubhouse and Discord are built on the idea that people want to talk about common interests, and Facebook and Twitter are actively testing live audio on their platforms. Zello’s general audience, however, isn’t sticking around long enough to get to know people: they’re looking for news, fast and unfiltered.

“There’s a long history of Zello as a go-to app in times of crisis,” says Zello’s CEO, Bill Moore. In South Africa and elsewhere, that increasingly means more than just natural disasters.

Cities are scrambling to prevent flooding

US cities are working to shore up their flood defenses in the face of climate change, building and upgrading pumps, storm drains, and other infrastructure.

In many cases, their existing systems are aging and built for the climate of the past. And even upgrades can do only so much to mitigate the intense flooding that’s becoming more common, leaving cities to come up with other solutions.

Floods have hit New York and Flagstaff, Arizona, in recent weeks. In Germany and Belgium, they have swept away whole towns and left over 1,000 people missing.

Rainfall inundated Detroit during a recent June storm, flooding streets and houses and overwhelming the local stormwater systems. The city received over 23,000 reports of damage, and local news reported gutted basements and cars swept away in water.

“We’ve never experienced anything like this,” said Sue McCormick, the CEO of the Great Lakes Water Authority, in a press conference after the storm. The water authority runs wastewater services for Detroit and the surrounding area.

Urban centers are more prone to flooding than other areas because streets, parking lots, and buildings are impervious, meaning water can’t seep into the ground the way it would in a forest or grassland. Instead, it flows.

Detroit, like many older cities, deals with flowing stormwater by combining it with sewage. This blend is then pumped to treatment plants. During the recent storm, electrical outages and mechanical issues knocked out four of 12 pumps in two major pump stations.

The agency has spent $10 million over the past several years upgrading just these two pump stations, and hundreds of millions more on other improvements. But fully modernizing the sewer system would require building a separate stormwater network at a cost of over $17 billion.

Stormwater infrastructure around the country is aging, and many governments have resorted to Band-Aid solutions instead of building more resilient systems, says Mikhail Chester, an infrastructure and policy researcher at Arizona State University. And mechanical and electrical systems are bound to fail occasionally during major storms, Chester adds.

However, even if the pump stations had worked perfectly, they might not have prevented disastrous flooding.

Outdated models

Detroit’s pumping stations, like a lot of stormwater infrastructure, were designed to keep up with a 10-year storm, meaning an amount of rainfall within an hour that has roughly a one in 10 chance of happening in any given year. A 10-year storm in the Detroit area would amount to about 1.7 inches of rainfall in an hour, according to National Weather Service data.

During the June storm, parts of Detroit saw intense levels of rainfall that would be more characteristic of a 1,000-year storm (over 3.7 inches of rain within an hour), far beyond the capacity of the pumping stations, according to the water authority.
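The arithmetic behind these return periods compounds quickly over an infrastructure lifetime. Here is a minimal sketch, assuming, as standard design tables do, that each year’s odds are independent and unchanging:

```python
# Chance of at least one "N-year storm" over a planning horizon, assuming
# each year is independent with probability 1/N (the stationary assumption
# that climate change is eroding).

def prob_at_least_one(return_period_years: int, horizon_years: int) -> float:
    annual_p = 1 / return_period_years
    return 1 - (1 - annual_p) ** horizon_years

print(round(prob_at_least_one(10, 30), 2))    # ~0.96: near-certain in 30 years
print(round(prob_at_least_one(1000, 30), 2))  # ~0.03: rare, but not negligible
```

Over a 30-year horizon, in other words, a 10-year storm is close to a sure thing, and even a 1,000-year storm has about a 3 percent chance of turning up.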

But rainfall predictions are based on historical data that might not represent the true odds of major storms, according to Anne Jefferson, a hydrologist at Kent State University. Storms that supposedly have a one in 10 chance of happening in any given year are likely happening more often now because of climate change. And she says few agencies are taking climate change into account in their infrastructure designs.

“We’re locking ourselves into a past climate,” Jefferson says.

Governments hoping to account for climate change when designing infrastructure face uncertainty—should they plan for the best-case emissions scenarios or the worst? And how exactly emissions will affect rainfall is difficult to predict.

Planning for bigger storms is an admirable goal, but it’s also costly. Bigger pumps and pipes are more expensive to build and harder to install, says Chester. And price increases aren’t linear, he adds—a pump or pipe with double the capacity will be more than double the price in most cases.

Fast forward

Coastal cities face even more dire climate threats, and some are investing aggressively to stave them off. Tampa, Florida, spent $27 million upgrading pump stations and other infrastructure after major floods in 2015 and 2016, according to the Tampa Bay Times. Some of the upgrades appear to be working—this year at least, the city avoided floods during major storms like Hurricane Elsa.

However, the rising seas along Tampa’s shoreline may soon cover up the pumps’ outlets. If sea levels reach the spot where water is supposed to exit storm pipes, the system won’t be able to remove water from the city.

Some cities are looking to install other features, like storm ponds and rain gardens, to help manage urban flooding. Grassy areas like rain gardens can reduce the volume and speed of excess water, Jefferson says. If enough of these facilities are built in the right places, they can help prevent smaller floods, she adds, but like other stormwater infrastructure, they’re usually not designed to stop flooding during larger storms.

For the most extreme events, there’s not much to do except get out of the way, Jefferson says. Instead of building ever-larger flood-control measures, governments could purchase flood-prone land and either keep it vacant or find appropriate uses for it. Chester points to the Netherlands, where local governments created the Room for the River initiative to increase buffer zones around rivers and change the way flood-prone areas are used. Now farms are sited there instead of homes, and the government compensates farmers if their crops are destroyed by flooding.

While cities can build or upgrade pipes, pumps, and rain gardens, climate change is quickly upending normal conditions, challenging infrastructure that’s built to last decades.

“Now we’ve entered into this new paradigm where the environment is changing fast, and our infrastructure is not designed to change quickly,” Chester says. “Those two things are at odds with each other.”