The Chinese rocket has safely crash-landed in the ocean

Update 5/9, 12:25 a.m. ET: The US Space Force confirmed the booster landed in the Indian Ocean just north of Maldives late Saturday evening.

Last week, China successfully launched Tianhe-1, the first part of its new space station, to be completed before the end of 2022. A week later, the mission is still making huge waves—and not in a good way. The core booster from the Long March 5B rocket that launched Tianhe-1 ended up in an uncontrolled orbit around Earth. It is expected to fall back to Earth this weekend, with current estimates (as of Friday afternoon) suggesting it will begin reentry between 2:13 p.m. Eastern Time Saturday and 8:13 a.m. Sunday. That’s such an enormous window that no one has any idea where it will land.

At 21 metric tons and 10 stories tall, the booster, CZ-5B, is huge. Sure, it might burn up completely in the atmosphere when it reenters, but that’s pretty unlikely given its size. More than likely, huge melted fragments will survive reentry and hit the ground. Reports over the last week have put many agencies around the world on alert, including those in Russia and the US.

The probability that parts of the booster could hit populated land is admittedly quite low—it’s much more likely to land in the ocean somewhere. But that probability is not zero. Case in point: the CZ-5B booster’s debut mission on May 5, 2020. The same problem arose then: the core booster ended up in an uncontrolled orbit before eventually reentering Earth’s atmosphere. Debris landed in villages across Ivory Coast. It was enough to elicit a notable rebuke from the NASA administrator at the time, Jim Bridenstine.

The same story is playing out this time, and we’re playing the same waiting game because of how difficult it is to predict when and where this thing will reenter. The first reason is the booster’s speed: it’s currently traveling at nearly 30,000 kilometers per hour, orbiting the planet about once every 90 minutes. The second reason has to do with the amount of drag the booster is experiencing. Although technically it’s in space, the booster is still interacting with the upper edges of the planet’s atmosphere.

That drag varies from day to day with changes in upper-atmosphere weather, solar activity, and other phenomena. In addition, the booster isn’t just zipping around smoothly and punching through the atmosphere cleanly—it’s tumbling, which creates even more unpredictable drag. 

Given those factors, we can establish a window for when and where we think the booster will reenter Earth’s atmosphere. But a change of even a couple of minutes can put its location thousands of miles away. “It can be difficult to model precisely, meaning we are left with some serious uncertainties when it comes to the space object’s reentry time,” says Thomas G. Roberts, an adjunct fellow at the CSIS Aerospace Security Project. 
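
To see why the timing uncertainty translates into such a huge footprint, here is a back-of-the-envelope sketch using only the figures quoted above (illustrative arithmetic, not the trackers’ actual models):

```python
# Rough arithmetic: how reentry-timing uncertainty maps to ground-track distance
# for a booster moving at roughly the orbital speed quoted in the article.
speed_km_per_hr = 30_000
speed_km_per_min = speed_km_per_hr / 60      # ~500 km of ground track per minute

for timing_error_min in (1, 2, 10, 60):
    distance_km = timing_error_min * speed_km_per_min
    print(f"{timing_error_min:>3} min of timing error ≈ {distance_km:,.0f} km downrange")

# A few minutes of slack already shifts the predicted footprint by
# hundreds to thousands of kilometers, and an hours-long window
# sweeps across much of the planet.
```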

This also depends on how well the structure of the booster holds up to heating caused by friction with the atmosphere. Some materials might hold up better than others, but drag will increase as the structure breaks up and melts. The flimsier the structure, the more it will break up, and the more drag will be produced, causing it to fall out of orbit more quickly. Some parts may hit the ground earlier or later than others.

By the morning of reentry, the estimate of when it will land should have narrowed to just a few hours. Several different groups around the world are tracking the booster, but most experts are following data provided by the US Space Force through its Space Track website. Jonathan McDowell, an astrophysicist at the Harvard-Smithsonian Center for Astrophysics, hopes that by then the timing window will have shrunk to just a couple of hours, during which the booster will orbit Earth maybe two more times. By then we should have a sharper sense of the route those orbits are taking and what regions of the Earth may be at risk from a shower of debris.

The Space Force’s missile early warning systems will already be tracking the infrared flare from the disintegrating rocket when reentry starts, so it will know where the debris is headed. Civilians won’t know for a while, of course, because that data is sensitive—it will take a few hours to work through the bureaucracy before an update is made to the Space Track site. If the remnants of the booster have landed in a populated area, we might already know thanks to reports on social media.

In the 1970s, such uncontrolled reentries were a common hazard after missions. “Then people started to feel it wasn’t appropriate to have large chunks of metal falling out of the sky,” says McDowell. NASA’s 77-ton Skylab space station was something of a wake-up call—its widely watched uncontrolled deorbit in 1979 scattered large debris over Western Australia. No one was hurt and there was no property damage, but the world was eager to avoid any similar risks from large spacecraft uncontrollably reentering the atmosphere (not a problem with smaller boosters, which just burn up safely).

As a result, after the core booster gets into orbit and separates from the secondary boosters and payload, many launch providers quickly do a deorbit burn that brings it back into the atmosphere and sets it on a controlled crash course for the ocean, eliminating the risk it would pose if left in space. This can be accomplished with either a restartable engine or an added second engine designed for deorbit burns specifically. The remnants of these boosters are sent to a remote part of the ocean, such as the South Pacific Ocean Uninhabited Area, where other massive spacecraft like Russia’s former Mir space station have been dumped.

Another approach, which was used during space shuttle missions and is currently used by large boosters like Europe’s Ariane 5, is to avoid putting the core stage in orbit entirely and simply switch it off a few seconds early while it’s still in Earth’s atmosphere. Smaller engines then fire to take the payload the short extra distance to space, while the core booster is dumped in the ocean.

None of these options are cheap, and they create some new risks (more engines mean more points of failure), but “it’s what everyone does, since they don’t want to create this type of debris risk,” says McDowell. “It’s been standard practice around the world to avoid leaving these boosters in orbit. The Chinese are an outlier of this.”

Why? “Space safety is just not China’s priority,” says Roberts. “With years of space launch operations under its belt, China is capable of avoiding this weekend’s outcome, but chose not to.” 

The past few years have seen a number of rocket bodies from Chinese launches that have been allowed to fall back to land, destroying buildings in villages and exposing people to toxic chemicals. “It’s no wonder that they would be willing to roll the dice on an uncontrolled atmospheric reentry, where the threat to populated areas pales in comparison,” says Roberts. “I find this behavior totally unacceptable, but not surprising.”

McDowell also points to what happened during the space shuttle Columbia disaster, when damage to the wing caused the spacecraft’s entry to become unstable and break apart. Nearly 38,500 kilograms of debris landed in Texas and Louisiana. Large chunks of the main engine ended up in a swamp—had it broken up a couple of minutes earlier, those parts could have hit a major city, slamming into skyscrapers in, say, Dallas. “I think people don’t appreciate how lucky we were that there weren’t casualties on the ground,” says McDowell. “We’ve been in these risky situations before and been lucky.” 

But you can’t always count on luck. The CZ-5B variant of the Long March 5B is slated for two more launches in 2022 to help build out the rest of the Chinese space station. There’s no indication yet whether China plans to change its blueprint for those missions. Perhaps that will depend on what happens this weekend.

Universal basic income is here—it just looks different from what you expected

Several years ago, when Elizabeth Softky first heard of the concept of universal basic income, she had her doubts. She was a public school teacher at the time, and she knew how hard it was to convince people to support even modest financial benefits, like pay raises for her coworkers. “Giving people money? I couldn’t wrap my head around it,” she says. “You can’t just give people money.” 

But that was before she was diagnosed with colon cancer, before aggressive chemotherapy left her unable to work and unable to pay the rent, before she was evicted from her home in Redwood City, California, and before she moved into an area homeless shelter. It was also before she got the call saying she was being accepted into a program offering six monthly payments of $500 to 15 people experiencing homelessness. 

It was December 2020, and she was being invited into a pilot program, run by the non-profit Miracle Messages, providing guaranteed income—a direct cash transfer with no strings attached. For Softky, it was a lifeline. “For the first time in a long time, I felt like I could … take a deep breath, start saving, and see myself in the future,” she says. 

The idea of “just giving people money” has been in and out of the news since becoming a favored cause for many high-profile Silicon Valley entrepreneurs, including Twitter’s Jack Dorsey, Facebook cofounders Mark Zuckerberg and (separately) Chris Hughes, and Singularity University’s Peter Diamandis. They proposed a universal basic income as a solution to the job losses and social conflict that would be wrought by automation and artificial intelligence—the very technologies their own companies create. 

But while prominent names in technology are still involved today, especially when it comes to funding projects, the conversation has changed. Its center of gravity has shifted away from “universal basic income” aimed at counterbalancing the automation of work and toward “guaranteed income” aimed at addressing economic and racial injustices. 

How guaranteed income came to be

First proposed by philosophers in the 16th century, the idea of an income delivered directly by the state has been seen in many quarters as a balm for all kinds of social ills. Progressives argue that a guaranteed minimum income has the potential to lift communities out of poverty. Some conservatives and libertarians, meanwhile, see universal basic income as a cost-effective alternative to existing social welfare systems. 

In the United States, proponents of guaranteed income as a matter of economic justice have included the Black Panthers and Martin Luther King Jr., while the libertarian economist Milton Friedman advocated it as a form of negative income tax. Even President Richard Nixon proposed providing cash directly to families, without conditions. His plan—produced after 1,000 economists urged it in an open letter—twice passed the House, but got rejected by the Senate.

Tech-sector proponents of UBI tend to be driven by the libertarian model. It aligns both with their core beliefs about the future and with their primary theory of change. While it is not a technological solution per se … it also kind of is. It’s the ultimate hack to get around the complexities of creating equitable social welfare policies. 

Elizabeth Softky says she didn’t like the idea of guaranteed income “because I was a good American.”

It’s very much “in keeping with modern Silicon Valley’s excitement for alternative policy experiments and ideas,” says Margaret O’Mara, a professor at the University of Washington who has written extensively on the history of the tech industry. “Like, ‘Okay, the regular systems and institutions aren’t working, and here’s this one cool trick.’” 

When the concept of UBI began taking hold in Silicon Valley, many proponents looked outside the US for case studies. In 2017, Finland launched a two-year plan giving monthly payments to 2,000 unemployed citizens. In Canada, the government of Ontario announced a three-year program that was cut short when a more conservative party took control of the government. There have also been pilots in Iran, Spain, the Netherlands, and Germany. 

But the United States has precedents as well. When Nixon was considering his own guaranteed income plan, studies were carried out in cities including Denver and Seattle. Since 1982, the Alaska Permanent Fund has given out a share of the state’s oil revenues to every adult resident (an average of $1,100 each year). A number of Native American tribes pay a share of casino revenues to every registered member. These American systems have shown almost no impact on the rate of employment—people don’t quit their jobs, one of the common concerns voiced by critics—but have been linked to improved outcomes in education and mental health, and to reductions in crime.

Even so, there’s something that has felt inherently un-American about UBI. That’s why Softky objected when she first heard it discussed on the radio—“Because I was a good American,” she explains. (The implication being that a good American wouldn’t take handouts.) 

Former presidential hopeful Andrew Yang understood this cognitive barrier of “Americanness” when he proposed UBI as the centerpiece of his 2020 campaign for the Democratic nomination. He knew that the name he gave his plan to mail monthly $1,000 checks to every American would be crucial to getting a positive reception, so he workshopped multiple options before landing on “freedom dividend.” 

After all, capitalism has become synonymous with the American dream, and what’s more capitalistic than a dividend? And freedom … well, that part speaks for itself. 

Getting a fair shot 

By the time Yang launched himself onto the presidential debate stage, a number of basic income pilot projects in American cities were starting to generate data. 

One was the Magnolia Mother’s Trust (MMT), a guaranteed income pilot project in Jackson, Mississippi, that specifically targeted low-income Black mothers. In December 2018, its first cohort of 20 mothers received their first $1,000, and they would receive the same sum every month for a year (they were also given savings accounts for their children). For many, the $12,000 effectively doubled their annual income. The program has since added two more cohorts of 110 women each. 

Aisha Nyandoro: “We have more than enough data now to prove that cash works.”

The focus on Black mothers was intentional, says Aisha Nyandoro of Springboard to Opportunity, the nonprofit behind MMT: “When we look at poverty in this country and who has been harmed the most,” she says, “it’s Black women.” The group also chose to start savings accounts for the children to address the fact that poverty in the US is often generational.

“So how do we go about ensuring that we are supporting well that population that has been marginalized?” Nyandoro asks.

While the analysis is not complete, early results are promising. Compared with a control group, the pilot participants were 40% less likely to incur debt for emergency expenses and 27% more likely to visit a doctor. On average, they were able to set aside $150 each month for food and household expenses. 

But for Nyandoro, these measurable “capitalistic outcomes” were only part of the story. They were important, but so were the dignity and agency that it returned to recipients. “For so many of the families that we work with,” she says, “they have not had someone to say to them, ‘You don’t have to prove that you deserve this. You simply deserve it because you are.’”

In other words, guaranteed income wasn’t about handouts, but about giving everyone—starting with the most marginalized individuals—a chance at a fair shot. 

The power of narrative

Giving everybody a fair shot was also the mission of Michael Tubbs, then the newly elected mayor of Stockton, California, when he launched his city’s guaranteed income experiment in February 2019 and became the face of the renewed movement.

The Stockton Economic Empowerment Demonstration, or SEED, gave 125 randomly selected residents $500 a month for 18 months. It garnered plenty of attention—Tubbs and his efforts were even profiled in an HBO documentary—and drew funding from Chris Hughes’s nonprofit, the Economic Security Project. Results were encouraging. Most of the money went toward fulfilling basic needs. Food made up the largest spending category (37%), whereas just 1% was spent on alcohol or tobacco (an outcome that opponents had worried about). Meanwhile, rather than dropping out of the workforce, participants found jobs at twice the rate of a control group.

Buoyed by this success, Tubbs started an organization, Mayors for Guaranteed Income, to expand his city’s pilot. To date, 42 mayors across America have signed on, and additional projects are now being run in towns and cities from Hudson, New York, and Gary, Indiana, to Compton, California. 

Since the results of SEED’s first year were released in March, Tubbs has often been asked what he learned from it. “I am tempted to say ‘Nothing,’” he told me in late March.

He means the pilot didn’t tell him anything that wasn’t already obvious to him: he knew from personal experience that many stereotypes about poor people (especially poor Black people) are not, as he put it, “rooted in reality.” 

Tubbs was born in Stockton to a teenage mother and an incarcerated father. He attended Stanford on a need-based scholarship, and returned home after graduation. Soon he was elected to City Council, before becoming mayor when he was just 26. 

Tubbs didn’t need the data to know he could trust people to make rational financial decisions, but the experience did help him “learn the power of narrative.” 

He recognized that “sometimes ideology, sometimes racism,” colors people’s perceptions. Part of his job as mayor became to “illustrate what’s real and what’s not,” he says. He saw the chance to “illustrate what’s actually backed by data and what’s backed by bias.” 

The need to change narratives through research and evidence was also apparent to Nyandoro, of Magnolia Mother’s Trust. A few days before the third cohort began receiving money, I asked her what research questions she hoped this new cycle would answer.

“We have more than enough data now to prove that cash works,” she told me. Now her question was not how cash would affect low-income individuals but, rather, “What is the data or talking points that we need to get to the policymakers … to move their hearts?” What evidence could be sufficient to make guaranteed income a federal-level policy? 

As it turned out, what made the difference wasn’t more research but a global pandemic. 

The pandemic effect

When stay-at-home orders closed many businesses—and destroyed jobs, especially for already vulnerable low-income workers—the chasm of American inequality became harder to ignore. Food lines stretched for miles. Millions of Americans faced eviction. Students without internet access at home resorted to sitting in public parking lots to hook into Wi-Fi so they could attend classes online. 

This was all worse for people of color. By February 2021, Black and Hispanic women, who make up only a third of the female labor force, accounted for nearly half of women’s pandemic job losses. Black men, meanwhile, were unemployed at almost double the rate of other ethnic groups, according to Census data analyzed by the Pew Research Center. 

All this also changed the conversation about the costs of guaranteed income programs. When the comparison was between basic income and the status quo, they’d been seen as too expensive to be realistic. But in the face of the recession caused by the pandemic, relief packages were suddenly seen as necessary to jump-start the American economy or, at the very least, avoid what Federal Reserve chairman Jerome Powell called a “downward spiral” with “tragic” outcomes.

“Covid-19 really illustrated all the things that those of us who actually work with, and work for, and are in relationship with, folks who are economically insecure know,” says Tubbs. That is, poverty was not an issue of “the people. It’s with the systems. It’s with the policies.”

Stimulus payments and increased unemployment benefits—that is, direct cash transfers to Americans with no conditions attached—passed with huge public support. And earlier this year, an expanded Child Tax Credit (CTC) was introduced that provides up to $3,600 per child, paid in monthly installments, to most American families. 

This new benefit, which is set to last for a year, is available even to families that don’t make enough money to pay income tax; they had been left out of previous versions of the tax credit. And by sending monthly payments of up to $300 per child, rather than a single rebate at the end of the year, it gives families a better chance to plan and budget. It is expected to cut child poverty in half. 

Washington might not have used the language of guaranteed income, but these programs fit the definition.

The CTC is “a game changer,” says Natalie Foster, a cofounder of the Economic Security Project, which funded many of the guaranteed income pilots, including both SEED and Mayors for Guaranteed Income. It “overturns decades of punitive welfare policies in America,” she says, and sets the stage for more permanent policies. 

Whereas her organization originally thought it might take a decade of data from city-based pilot programs to “inform federal policymaking,” the CTC means that guaranteed income has, at least temporarily, arrived. 

The stimulus bills and CTC also make Tubbs “more bullish now than ever” that guaranteed income could soon become a permanent fixture of federal policy. 

“We live in a time of pandemics,” he says. “It’s not just covid-19. It’s an earthquake next month. It’s wildfires. All these things are happening all the time—not even mentioning automation. We have to have the ability for our folks to build economic resilience.”

The responsibility for poverty is “with the policies,” says Michael Tubbs, the former mayor of Stockton, California.

But even if the rhetoric has shifted away from the technocratic concept of UBI, Silicon Valley’s interest in universality hasn’t gone away. Last April, Jack Dorsey announced a new philanthropic initiative, Start Small LLC, to give away $1 billion. 

The donations would focus initially on covid-19 relief and then, after the pandemic, shift to universal basic income and girls’ education, he said. Putting money toward these causes, Dorsey explained, represented “the best long-term solutions to the existential problems facing the world.” 

Despite its announced focus on universal basic income, Start Small has become one of the largest funders of guaranteed income. It donated $18 million to Mayors for Guaranteed Income, $15 million to the Open Research Lab (previously known as the Y Combinator basic income experiment), $7 million to Humanity Forward, Andrew Yang’s foundation, and most recently $3.5 million to establish a Cash Transfer Lab at New York University to conduct more research on the issue. 

Yang, now running for mayor of New York City, has also shifted away from his focus on universality. Rather than sending $1,000 checks every month to everyone, he now advocates for a guaranteed minimum income of $2,000 per year for New Yorkers living in extreme poverty. 

Tubbs claims some credit for these shifts. He recalls a conversation with Dorsey in which he told the billionaire, “It’s gonna take time to get to universality, but it’s urgent that we do guaranteed income… So look, we’re not going to … test a UBI. We can test the income guarantee. Let’s start there.”

If his donations are any indication, Dorsey took Tubbs’s words to heart. What’s still unclear, however, is whether he and other tech leaders see guaranteed income as a stepping-stone to UBI or as an end in itself. (Neither Dorsey nor Start Small staff responded to requests for an interview.)

Scott Santens, one of the earliest “basic income bros,” believes that the tech sector’s initial interest in UBI as a fix for job loss is still relevant. The pandemic has led to an increase in sales of automation and robots, he says, pointing to reports that inquiries about Amazon’s call center tech have increased, as have purchases of warehouse robots to replace warehouse workers. 

Meanwhile, Sam Altman, who helped kick off Y Combinator’s UBI experiment before leaving to head the artificial-intelligence startup OpenAI, wrote a recent manifesto about the situation. In it, he urged that we remain focused on the bigger picture: even if the pandemic has caused a short-term shock, it is technology—specifically, artificial intelligence—that will have the greatest impact on employment over time. 

Altman called for a UBI to be funded by a 2.5% tax on businesses. “The best way to improve capitalism is to enable everyone to benefit from it directly as an equity owner,” he wrote.

But would “everyone” include people of color, who are already being harmed at disproportionate levels by AI’s biases? And could a dividend paid out from the spoils of artificial intelligence make up for that harm? Altman’s manifesto notably leaves out any mention of race. 

When reached for comment, he sent a statement through an OpenAI representative saying, “We must build AI in a way that doesn’t cause more harm to traditionally marginalized communities. In addition to building the technology in an equitable and just way, we must also find a way to share the benefits broadly. These are independently important issues.” 

He did not respond to specific requests for comments on how AI was already harming Black communities, and how Black men are already being erroneously charged with crimes on the basis of faulty facial recognition. 

Margaret O’Mara, the technology historian, notes that for technologists, one thing hasn’t changed during the pandemic: the assumption that technological progress is inevitable—and positive. That promotes an attitude of “Let’s figure out how to adjust society around it rather than saying, well, maybe we should try to prevent the displacement in the first place,” she says.

Tubbs, who recently cohosted a Clubhouse session with Altman, has a more generous—and straightforward—view of Silicon Valley’s role in the movement. 

“I’m happy that they [technologists] are part of the conversation,” he says, because “a lot of revenue will come from them, or come from the products they make.”

At the end of the day, after all, it’s largely tech money that allowed him to put an extra $500 in the hands of his pilot participants every month. “Once that money is given,” he says, what happens next is “up to the person who has the money.” 

But what if the harms caused by the tech sector are the reason recipients needed tech largesse in the first place? 

When Elizabeth Softky became homeless in 2018, she wasn’t alone; Redwood City’s gentrification at the hands of tech companies and workers was in full swing. Economic forces far beyond her control have shaped her personal ups and downs.

It’s “hyper-capitalism,” Softky says.

She was grateful, of course, for her six months of guaranteed income—but also aware of the broader challenges that a short-term program run by a small nonprofit could not solve. Softky says she hopes the organization will expand both the amount of money it’s giving out and the duration of the program. But far better would be for the government to do the same.

AI consumes a lot of energy. Hackers could make it consume more.

The news: A new type of attack could increase the energy consumption of AI systems.  In the same way a denial-of-service attack on the internet seeks to clog up a network and make it unusable, the new attack forces a deep neural network to tie up more computational resources than necessary and slow down its “thinking” process. 

The target: In recent years, growing concern over the costly energy consumption of large AI models has led researchers to design more efficient neural networks. One category, known as input-adaptive multi-exit architectures, works by splitting up tasks according to how hard they are to solve. It then spends the minimum amount of computational resources needed to solve each.

Say you have a picture of a lion looking straight at the camera with perfect lighting and a picture of a lion crouching in a complex landscape, partly hidden from view. A traditional neural network would pass both photos through all of its layers and spend the same amount of computation to label each. But an input-adaptive multi-exit neural network might pass the first photo through just one layer before reaching the necessary threshold of confidence to call it what it is. This not only shrinks the model’s carbon footprint—it also improves its speed and allows it to be deployed on small devices like smartphones and smart speakers.
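
To make the early-exit idea concrete, here is a minimal, illustrative sketch in PyTorch—not the architecture from the paper; the layer sizes, confidence threshold, and class count are invented for the example:

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Toy input-adaptive multi-exit classifier: every block has its own
    classification head, and inference stops at the first confident exit."""
    def __init__(self, in_dim=512, hidden=256, num_classes=10, threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU()),
            nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU()),
            nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU()),
        ])
        self.exits = nn.ModuleList([nn.Linear(hidden, num_classes) for _ in self.blocks])
        self.threshold = threshold

    @torch.no_grad()
    def forward(self, x):
        for depth, (block, exit_head) in enumerate(zip(self.blocks, self.exits), start=1):
            x = block(x)
            probs = exit_head(x).softmax(dim=-1)
            confidence, label = probs.max(dim=-1)
            if confidence.item() >= self.threshold:  # confident enough: exit early
                return label, depth                  # "easy" inputs use fewer layers
        return label, depth                          # "hard" inputs use every layer

model = EarlyExitNet()
easy_input = torch.randn(1, 512)       # stands in for the well-lit lion photo
label, layers_used = model(easy_input)
print(f"predicted class {label.item()} after {layers_used} block(s)")
```

The energy saving comes entirely from that early return: the less confident the intermediate exits are, the more blocks an input is pushed through.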

The attack: But with this kind of neural network, changing the input—such as the image it’s fed—changes how much computation it needs to solve it. This opens up a vulnerability that hackers could exploit, as researchers from the Maryland Cybersecurity Center outline in a new paper being presented at the International Conference on Learning Representations this week. By adding small amounts of noise to a network’s inputs, they made it perceive the inputs as more difficult and jack up its computation. 

When they assumed the attacker had full information about the neural network, they were able to max out its energy draw. When they assumed the attacker had limited to no information, they were still able to slow down the network’s processing and increase energy usage by 20% to 80%. The reason, as the researchers found, is that the attacks transfer well across different types of neural networks. Designing an attack for one image classification system is enough to disrupt many, says Yiğitcan Kaya, a PhD student and paper coauthor.
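
Continuing the toy model above, here is a rough sketch of what such a slowdown attack could look like in the white-box case—gradient steps on a small, bounded perturbation that keeps every exit unconfident. It is only an illustration of the general idea, not the authors’ code, and the budget, step size, and loss are invented:

```python
import torch

def all_exit_confidences(model, x):
    """Confidence at every exit of the toy EarlyExitNet above, with gradients."""
    confs = []
    for block, exit_head in zip(model.blocks, model.exits):
        x = block(x)
        confs.append(exit_head(x).softmax(dim=-1).max(dim=-1).values)
    return torch.stack(confs)

def slowdown_attack(model, x, epsilon=0.05, steps=50, lr=0.01):
    """Find a small perturbation that keeps every exit below its confidence
    threshold, forcing the network to run all of its blocks."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = all_exit_confidences(model, x + delta).sum()  # push confidences down
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)   # keep the added noise small
            delta.grad.zero_()
    return (x + delta).detach()

noisy_input = slowdown_attack(model, easy_input)
_, layers_used = model(noisy_input)
print(f"after the attack: {layers_used} block(s) used")
```

In the limited-information setting described above, an attacker would instead craft the noise against a surrogate network and rely on the transferability the researchers observed.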

The caveat: This kind of attack is still somewhat theoretical. Input-adaptive architectures aren’t yet commonly used in real-world applications. But the researchers believe this will change quickly because of pressure within the industry to deploy lighter-weight neural networks, such as for smart-home and other IoT devices. Tudor Dumitraş, the professor who advised the research, says more work is needed to understand the extent to which this kind of threat could cause damage. But, he adds, this paper is a first step to raising awareness: “What’s important to me is to bring to people’s attention the fact that this is a new threat model, and these kinds of attacks can be done.”

Why mixing vaccines could help boost immunity

A dozen covid-19 vaccines are now being used around the world. Most require two doses, and health officials have warned against mixing and matching: the vaccines, they argue, should be administered the way they were tested in trials. But after emerging concerns about the very rare risk of blood clots linked to the Oxford-AstraZeneca vaccine, that advice may soon change.

Guidance on this issue varies from country to country. Germany and France, for example, have advised younger citizens who received the first shot to switch vaccines for their second dose. Canada, where millions of people have received their first dose of Oxford-AstraZeneca, is still deciding how to proceed. 

David Masopust, an immunologist at the University of Minnesota Medical School, points out that most of the vaccines target the same protein. So switching vaccines should work, at least in theory. 

We should soon have a better idea. A handful of trials are now under way to test the power of vaccine combinations, with the first results due later this month. If these mixed regimens prove safe and effective, countries will be able to keep the vaccine rollout moving even if supplies of one vaccine dwindle because of manufacturing delays, unforeseen shortages, or safety concerns. 

But there’s another, more exciting prospect that could be a vital part of our strategy in the future: mixing vaccines might lead to broader immunity and hamper the virus’s attempts to evade our immune systems. Eventually, a mix-and-match approach might be the best way to protect ourselves.

Mixing on trial 

The covid-19 vaccines currently in use protect against the virus in slightly different ways. Most target the coronavirus’s spike protein, which it uses to gain entry to our cells. But some deliver the instructions for making the protein in the form of messenger RNA (Pfizer, Moderna). Some deliver the spike protein itself (Novavax). Some use another harmless virus to ferry in the instructions for making it, like a Trojan horse (Johnson & Johnson, Oxford-AstraZeneca, Sputnik V). Some offer up whole inactivated virus (Sinopharm, Sinovac). 

In a study published in March, researchers from the National Institutes for Food and Drug Control in China tested combinations of four different covid-19 vaccines in mice, and found that some did improve immune response. When they first gave the rodents a vaccine that relies on a harmless cold virus to smuggle in the instructions and then a second dose of a different type of vaccine, they saw higher antibody levels and a better T-cell response. But when they reversed the order, giving the viral vaccine second, they did not see an improvement. 

Why combining shots might improve efficacy is a bit of a mystery, says Shan Lu, a physician and vaccine researcher at the University of Massachusetts Medical School who pioneered this mixing strategy. “The mechanism we can explain partially, but we don’t fully understand.” Different vaccines present the same information in slightly different ways. Those differences might awaken different parts of the immune system or sharpen the immune response. This strategy might also make immunity last longer.  

Whether those results translate to humans remains to be seen. Researchers at Oxford University have launched a human trial to test just how mixing might work. The study, called Com-CoV, offers participants a first shot of Pfizer or Oxford-AstraZeneca. For their second dose, they will either get the same vaccine or a shot of Moderna or Novavax. The first results should be available in the coming weeks. 

Other studies are under way as well. In Spain, where Oxford-AstraZeneca is now being given only to people over 60, researchers plan to recruit 600 people to test whether a first dose of the shot can be paired with a second dose from Pfizer. According to reporting in El País, about a million people received the first dose of the vaccine but aren’t old enough to receive the second dose. Health officials are waiting for the results of this study before issuing recommendations for this group, but it’s not clear whether any participants have yet been recruited. 

Late last year Oxford-AstraZeneca announced that it would partner with Russia’s Gamaleya Institute, which developed the Sputnik V vaccine, to test how the two shots work in combination. The trial was supposed to launch in March and provide interim results in May, but it’s not clear whether it has actually begun. And Chinese officials have hinted that they’ll explore mixing vaccines to boost the efficacy of their shots. 

The biggest gains might come from mixing vaccines that have lower efficacies. The mRNA vaccines from Pfizer and Moderna provide excellent protection. “I don’t think there’s reason to mess with that,” says Donna Farber, an immunologist at Columbia University. But mixing might improve protection for some of the vaccines that have reported lower levels of protection, like Oxford-AstraZeneca and Johnson & Johnson, as well as some of the Chinese vaccines. Many of these vaccines work quite well, but mixing might help them work even better. 

Johnson & Johnson, Sputnik V, Oxford-AstraZeneca, and China’s CanSino all contain adenoviruses, a class of viruses that includes cold viruses. The manufacturers tweak these viruses to ferry DNA blueprints for the coronavirus spike protein into cells. With these vaccines, the body develops an immune response to the spike, but also to the adenovirus carrying the spike. That poses a risk: a second shot might prompt an immune response against the adenovirus and make the booster less effective. 

To get around this issue, Johnson & Johnson and CanSino offer only one dose. Sputnik V requires two doses, but the first and second incorporate different adenoviruses. The two-dose Oxford-AstraZeneca shot relies on a chimpanzee adenovirus. That allows the vaccine to avoid any preexisting immunity—the virus doesn’t typically infect humans. And perhaps because the first dose is relatively low, there doesn’t seem to be a problem offering a second shot.

In fact, some researchers speculate that may be why one Oxford-AstraZeneca trial, which mistakenly offered participants a lower first dose, showed better efficacy. The body does not generate a strong immune response against the adenovirus, but still generates an immune response against the spike, Lu says. But he cautions that a third booster shot might not work as well.

That could pose a problem. With an increasing number of variants, “we may get in a situation where we’re going to need a yearly booster shot,” Masopust says. That’s easy to do with the Pfizer and Moderna vaccines, but the vaccines that rely on adenoviruses may run up against the body’s pre-existing immunity. 

More mix-and-match

Combining vaccines that are already in use is just one way to mix and match. Another option is to mix up the vaccine targets. 

With the surge in new variants, some experts fear that the virus may eventually be able to evade the body’s antibody response by changing its spike protein, the target of most existing vaccines. Luckily the immune system has another line of defense: T cells. 

After vaccination, your immune system generates antibodies that can bind to particular portions of the spike protein. If you come in contact with the virus, these antibodies will bind to the spike and only the spike. “T cells see the world differently,” Masopust says. They can recognize protein fragments from inside the virus too, and more of them. A vaccine that contains the spike and another protein might broaden the vaccine’s coverage and decrease the probability of escape. T cells don’t block infection, but they can help clear the virus. 

And a strong T-cell response is much harder to evade. Many of the proteins that T-cells recognize don’t mutate as quickly as the spike protein. And T-cells in one person might recognize different protein fragments than T-cells in another. So even if the virus slips past T-cells in one individual, it’s unlikely to evade the immune response at the population level. “If you have broad T-cell immunity you’re much less vulnerable to viral mutations,” Masopust says. 

Adding another vaccine target to boost the T-cell response is “an interesting idea,” says Marc Jenkins, director of the Center for Immunology at the University of Minnesota Medical School. Nucleoprotein, which is found inside the virus, could be a good candidate. Eliciting an immune response against both nucleoprotein and the spike could boost the number of T cells and antibodies, he says. “And more is better when it comes to wiping out the virus.” 

Farber envisions another kind of mixing that might provide benefits: pairing an injectable vaccine with a vaccine delivered into the nose. Putting the second dose in the nose would bring the immune response into the lungs, priming T cells that live there. These tissue-resident T cells provide protection against severe lung disease. So offering this type of mixed vaccine to older adults, who are more susceptible to developing lung problems like pneumonia if they do become infected, might be a worthwhile strategy, she says. 

Despite evidence that mixing vaccines can boost immunity, the idea hasn’t really caught on—yet. Vaccine development is expensive. Companies don’t necessarily have an incentive to develop two different vaccines if one will do the trick, Lu says. Nor are they likely to partner with another drug company to create this kind of combination approach. But the pandemic has changed the vaccine development landscape, and the idea may be gaining traction. “It’s a very ripe time,” Farber says. 

How China turned a prize-winning iPhone hack against the Uyghurs

In March 2017, a group of hackers from China arrived in Vancouver with one goal: Find hidden weak spots inside the world’s most popular technologies. 

Google’s Chrome browser, Microsoft’s Windows operating system, and Apple’s iPhones were all in the crosshairs. But no one was breaking the law. These were just some of the people taking part in Pwn2Own, one of the world’s most prestigious hacking competitions.

It was the 10th anniversary for Pwn2Own, a contest that draws elite hackers from around the globe with the lure of big cash prizes if they manage to exploit previously undiscovered software vulnerabilities, known as “zero-days.” Once a flaw is found, the details are handed over to the companies involved, giving them time to fix it. The hacker, meanwhile, walks away with a financial reward and eternal bragging rights.

For years, Chinese hackers were the most dominant forces at events like Pwn2Own, earning millions of dollars in prizes and establishing themselves among the elite. But in 2017, that all stopped. 

In an unexpected statement, the billionaire founder and CEO of the Chinese cybersecurity giant Qihoo 360—one of the most important technology firms in China—publicly criticized Chinese citizens who went overseas to take part in hacking competitions. In an interview with the Chinese news site Sina, Zhou Hongyi said that performing well in such events represented merely an “imaginary” success. Zhou warned that once Chinese hackers show off vulnerabilities at overseas competitions, they can “no longer be used.” Instead, he argued, the hackers and their knowledge should “stay in China” so that they could recognize the true importance and “strategic value” of the software vulnerabilities. 

Beijing agreed. Soon, the Chinese government banned cybersecurity researchers from attending overseas hacking competitions. Just months later, a new competition popped up inside China to take the place of the international contests. The Tianfu Cup, as it was called, offered prizes that added up to over a million dollars. 

The inaugural event was held in November 2018. The $200,000 top prize went to Qihoo 360 researcher Qixun Zhao, who showed off a remarkable chain of exploits that allowed him to easily and reliably take control of even the newest and most up-to-date iPhones. From a starting point within the Safari web browser, he found a weakness in the core of the iPhone’s operating system, its kernel. The result? A remote attacker could take over any iPhone that visited a web page containing Qixun’s malicious code. It’s the kind of hack that can potentially be sold for millions of dollars on the open market to give criminals or governments the ability to spy on large numbers of people. Qixun named it “Chaos.”

Two months later, in January 2019, Apple issued an update that fixed the flaw. There was little fanfare—just a quick note of thanks to those who discovered it.

But in August of that year, Google published an extraordinary analysis into a hacking campaign it said was “exploiting iPhones en masse.” Researchers dissected five distinct exploit chains they’d spotted “in the wild.” These included the exploit that won Qixun the top prize at Tianfu, which they said had also been discovered by an unnamed “attacker.” 

The Google researchers pointed out similarities between the attacks they caught being used in the real world and Chaos. What their deep dive omitted, however, were the identities of the victims and the attackers: Uyghur Muslims and the Chinese government.

A campaign of oppression

For the past seven years, China has committed human rights abuses against the Uyghur people and other minority groups in the western region of Xinjiang. Well-documented aspects of the campaign include detention camps, systematic compulsory sterilization, organized torture and rape, forced labor, and an unparalleled surveillance effort. Officials in Beijing argue that China is acting to fight “terrorism and extremism,” but the United States, among other countries, has called the actions genocide. The abuses add up to an unprecedented high-tech campaign of oppression that dominates Uyghur lives, relying in part on targeted hacking campaigns.

China’s hacking of Uyghurs is so aggressive that it is effectively global, extending far beyond the country’s own borders. It targets journalists, dissidents, and anyone who raises Beijing’s suspicions of insufficient loyalty. 

Shortly after Google’s researchers noted the attacks, media reports connected the dots: the targets of the campaign that used the Chaos exploit were the Uyghur people, and the hackers were linked to the Chinese government. Apple published a rare blog post that confirmed the attack had taken place over two months: that is, the period beginning immediately after Qixun won the Tianfu Cup and stretching until Apple issued the fix.  

MIT Technology Review has learned that United States government surveillance independently spotted the Chaos exploit being used against Uyghurs, and informed Apple. (Both Apple and Google declined to comment on this story.)

The Americans concluded that the Chinese essentially followed the “strategic value” plan laid out by Qihoo’s Zhou Hongyi; that the Tianfu Cup had generated an important hack; and that the exploit had been quickly handed over to Chinese intelligence, which then used it to spy on Uyghurs. 

The US collected the full details of the exploit used to hack the Uyghurs, and it matched Tianfu’s Chaos hack, MIT Technology Review has learned. (Google’s in-depth examination later noted how structurally similar the exploits are.) The US quietly informed Apple, which had already been tracking the attack on its own and reached the same conclusion: the Tianfu hack and the Uyghur hack were one and the same. The company prioritized a difficult fix.

Qihoo 360 and Tianfu Cup did not respond to multiple requests for comment. When we contacted Qixun Zhao via Twitter, he strongly denied involvement, although he also said he couldn’t remember who came into possession of the exploit code. At first, he suggested the exploit wielded against Uyghurs was probably used “after the patch release.” On the contrary, both Google and Apple have extensively documented how this exploit was used before January 2019. He also pointed out that his “Chaos” exploit included code from other hackers. In fact, within Apple and US intelligence, the conclusion has long been that these exploits are not merely similar—they are the same. Although Qixun wrote the exploit, there is nothing to suggest he was personally involved in what happened to it after the Tianfu event. (Chinese law requires citizens and organizations to provide support and assistance to the country’s intelligence agencies whenever asked.)

By the time the vulnerabilities were closed, Tianfu had achieved its goal.

“The original decision to not allow the hackers to go abroad to competitions seems to be motivated by a desire to keep discovered vulnerabilities inside of China,” says Adam Segal, an expert on Chinese cybersecurity policy at the Council on Foreign Relations. It also cut top Chinese hackers off from other income sources, “so they are forced into a closer connection with the state and established companies,” he says.

The incident is stark. One of China’s elite hacked an iPhone, and won public acclaim and a large amount of money for doing so. Virtually overnight, Chinese intelligence used it as a weapon against a besieged minority ethnic group, striking before Apple could fix the problem. It was a brazen act performed in broad daylight and with the knowledge that there would be no consequences to speak of.

Concerning links

Today, the Tianfu Cup is heading into its third year, and it’s sponsored by some of China’s biggest tech companies: Alibaba, Baidu, and Qihoo 360 are among the organizers. But American officials and security experts are increasingly concerned about the links between those involved in the competition and the Chinese military.

Qihoo, which is valued at over $9 billion, was one of dozens of Chinese companies added to a trade blacklist by the United States in 2020 after a US Department of Commerce assessment that the company might support Chinese military activity.

Others involved in the event have also raised alarms in Washington. The Beijing company Topsec, which helps organize Tianfu, allegedly provides hacking training, services, and recruitment for the government and has employed nationalist hackers, according to US officials.

The company is linked to cyber-espionage campaigns including the 2015 hack of the US insurance giant Anthem, a connection that was accidentally exposed when hackers used the same server to try to break into a US military contractor and to host a Chinese university hacking competition. 

Other organizers and sponsors include NSFocus, which grew directly out of the earliest Chinese nationalist hacker movement called the Green Army, and Venus Tech, a prolific Chinese military contractor that has been linked to offensive hacking. 

One other Tianfu organizer, the state-owned China Electronics Technology Group, has a surveillance subsidiary called Hikvision, which provides “Uyghur analytics” and facial recognition tools to the Chinese government. It was added to a US trade blacklist in 2019.

US experts say the links between the event and Chinese intelligence are clear.

“I think it is not only a venue for China to get zero-days but it’s also a big recruiting venue,” says Scott Henderson, an analyst on the cyber espionage team at FireEye, a major security company based in California.

Tianfu’s links to Uyghur surveillance and genocide show that getting early access to bugs can be a powerful weapon. In fact, the “reckless” hacking spree that Chinese groups launched against Microsoft Exchange in early 2021 bears some striking similarities. 

In that case, a Taiwanese researcher uncovered the security flaws and passed them to Microsoft, which then privately shared them with security partners. But before a fix could be released, Chinese hacking groups started exploiting the flaw all around the world. Microsoft, which was forced to rush out a fix two weeks earlier than planned, is investigating the potential that the bug was leaked.

These bugs are incredibly valuable, not just in financial terms, but in their capacity to create an open window for espionage and oppression. 

Google researcher Ian Beer said as much in the original report detailing the exploit chain. “I shan’t get into a discussion of whether these exploits cost $1 million, $2 million, or $20 million,” he wrote. “I will instead suggest that all of those price tags seem low for the capability to target and monitor the private activities of entire populations in real time.”

How to stop AI from recognizing your face in selfies

Uploading personal photos to the internet can feel like letting go. Who else will have access to them, what will they do with them—and which machine-learning algorithms will they help train?

The company Clearview has already supplied US law enforcement agencies with a facial recognition tool trained on photos of millions of people scraped from the public web. But that was likely just the start. Anyone with basic coding skills can now develop facial recognition software, meaning there is more potential than ever to abuse the tech in everything from sexual harassment and racial discrimination to political oppression and religious persecution.

A number of AI researchers are pushing back and developing ways to make sure AIs can’t learn from personal data. Two of the latest are being presented this week at ICLR, a leading AI conference.

“I don’t like people taking things from me that they’re not supposed to have,” says Emily Wenger at the University of Chicago, who developed one of the first tools to do this, called Fawkes, with her colleagues last summer: “I guess a lot of us had a similar idea at the same time.”

Data poisoning isn’t new. Actions like deleting data that companies have on you, or deliberately polluting data sets with fake examples, can make it harder for companies to train accurate machine-learning models. But these efforts typically require collective action, with hundreds or thousands of people participating, to make an impact. The difference with these new techniques is that they work on a single person’s photos.

“This technology can be used as a key by an individual to lock their data,” says Daniel Ma at Deakin University in Australia. “It’s a new frontline defense for protecting people’s digital rights in the age of AI.”

Hiding in plain sight

Most of the tools, including Fawkes, take the same basic approach. They make tiny changes to an image that are hard to spot with a human eye but throw off an AI, causing it to misidentify who or what it sees in a photo. This technique is very close to a kind of adversarial attack, where small alterations to input data can force deep-learning models to make big mistakes.

Give Fawkes a bunch of selfies and it will add pixel-level perturbations to the images that stop state-of-the-art facial recognition systems from identifying who is in the photos. Unlike previous ways of doing this, such as wearing AI-spoofing face paint, it leaves the images apparently unchanged to humans.
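
Conceptually, the cloaking amounts to a small, bounded adversarial perturbation on the photo. The sketch below is only an illustration of that general idea, not Fawkes itself: `embed` stands in for any differentiable face-embedding model, and the pixel budget, step size, and decoy-identity target are invented for the example:

```python
import torch
import torch.nn.functional as F

def cloak(image, embed, decoy_embedding, epsilon=0.03, steps=100, lr=0.005):
    """Illustrative Fawkes-style cloaking: nudge pixels (within a tiny budget)
    so the image's embedding drifts toward a decoy identity, so that models
    trained on the cloaked photo learn the wrong representation of the face.

    image:           tensor of pixel values in [0, 1], shape (3, H, W)
    embed:           a differentiable face-embedding model (assumed available)
    decoy_embedding: embedding vector of a different person
    """
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        emb = embed((image + delta).unsqueeze(0))
        # Minimizing the distance to the decoy pulls the embedding away from
        # the true face while the pixel changes stay imperceptibly small.
        loss = 1 - F.cosine_similarity(emb, decoy_embedding.unsqueeze(0)).mean()
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)  # keep changes invisible to humans
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```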

Wenger and her colleagues tested their tool against several widely used commercial facial recognition systems, including Amazon’s AWS Rekognition, Microsoft Azure, and Face++, developed by the Chinese company Megvii Technology. In a small experiment with a data set of 50 images, Fawkes was 100% effective against all of them, preventing models trained on tweaked images of people from later recognizing those people in fresh images. The doctored training images had stopped the tools from forming an accurate representation of those people’s faces.

Fawkes has already been downloaded nearly half a million times from the project website. One user has also built an online version, making it even easier for people to use (though Wenger won’t vouch for third parties using the code, warning: “You don’t know what’s happening to your data while that person is processing it”). There’s not yet a phone app, but there’s nothing stopping somebody from making one, says Wenger.

Fawkes may keep a new facial recognition system from recognizing you—the next Clearview, say. But it won’t sabotage existing systems that have been trained on your unprotected images already. The tech is improving all the time, however. Wenger thinks that a tool developed by Valeriia Cherepanova and her colleagues at the University of Maryland, one of the teams at ICLR this week, might address this issue. 

Called LowKey, the tool expands on Fawkes by applying perturbations to images based on a stronger kind of adversarial attack, which also fools pretrained commercial models. Like Fawkes, LowKey is also available online.

Ma and his colleagues have added an even bigger twist. Their approach, which turns images into what they call unlearnable examples, effectively makes an AI ignore your selfies entirely. “I think it’s great,” says Wenger. “Fawkes trains a model to learn something wrong about you, and this tool trains a model to learn nothing about you.”

Images of me scraped from the web (top) are turned into unlearnable examples (bottom) that a facial recognition system will ignore. (Credit to Daniel Ma, Sarah Monazam Erfani and colleagues) 

Unlike Fawkes and its followers, unlearnable examples are not based on adversarial attacks. Instead of introducing changes to an image that force an AI to make a mistake, Ma’s team adds tiny changes that trick an AI into ignoring it during training. When presented with the image later, its evaluation of what’s in it will be no better than a random guess.
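
One published recipe for this kind of training-time noise—a simplified reading of the “unlearnable examples” idea, with invented hyperparameters—flips the usual adversarial objective: instead of noise that maximizes a model’s error, it searches for noise that minimizes the training loss, so the perturbed photos look as if there is nothing left to learn from them:

```python
import torch
import torch.nn.functional as F

def error_minimizing_noise(model, images, labels, epsilon=8/255, steps=20, lr=1/255):
    """Sketch of error-minimizing ("unlearnable") noise: perturb each image so
    the model's training loss on it is already near zero, leaving the trainer
    nothing useful to learn from that photo. (Simplified: the published recipe
    alternates a step like this with updates to the model itself.)"""
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(images + delta), labels)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()  # descend the loss (opposite of an attack)
            delta.clamp_(-epsilon, epsilon)  # keep the noise imperceptible
            delta.grad.zero_()
    return (images + delta).detach()
```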

Unlearnable examples may prove more effective than adversarial attacks, since they cannot be trained against. The more adversarial examples an AI sees, the better it gets at recognizing them. But because Ma and his colleagues stop an AI from training on images in the first place, they claim this won’t happen with unlearnable examples.

Wenger is resigned to an ongoing battle, however. Her team recently noticed that Microsoft Azure’s facial recognition service was no longer spoofed by some of their images. “It suddenly somehow became robust to cloaked images that we had generated,” she says. “We don’t know what happened.”

Microsoft may have changed its algorithm, or the AI may simply have seen so many images from people using Fawkes that it learned to recognize them. Either way, Wenger’s team released an update to their tool last week that works against Azure again. “This is another cat-and-mouse arms race,” she says.

For Wenger, this is the story of the internet. “Companies like Clearview are capitalizing on what they perceive to be freely available data and using it to do whatever they want,” she says.

Regulation might help in the long run, but that won’t stop companies from exploiting loopholes. “There’s always going to be a disconnect between what is legally acceptable and what people actually want,” she says. “Tools like Fawkes fill that gap.”

“Let’s give people some power that they didn’t have before,” she says. 

Why upholding Trump’s Facebook ban won’t break the cycle

The night before the Facebook Oversight Board decided to stand by the company’s decision to ban him from its platforms, former president Donald Trump announced—via an exclusive on Fox News—that he’d created a website. Called From the Desk of Donald Trump, it looked like a social media site but was really just a feed of his statements. Each statement had a “Like” button, and buttons for sharing links to that post on Facebook and Twitter, where Trump remains permanently banned. 

The site launched in anticipation of the oversight board’s announcement on Wednesday that it was upholding Facebook’s ban on Trump.

The decision was awaited by many as urgently as a Supreme Court ruling. And yet the board—which characterizes itself as independent but was created by and funded by Facebook—simply relies on Facebook’s word that it will abide by its decisions (its recommendations, meanwhile, are nonbinding). The decision was not nearly as final as some seemed to expect, either. While upholding the initial ban, the board essentially said Facebook needed to decide for itself what to do with Trump’s account in the long term, instead of punting it to somebody else.  

“It was not appropriate for Facebook to impose the indeterminate and standardless penalty of indefinite suspension,” the decision reads. Facebook needs to review the matter itself, the board wrote, and “determine and justify a proportionate response that is consistent with the rules that are applied to other users of its platform.” The board set a deadline of six months from now, at which point we will no doubt have another news cycle about Trump’s presence on social media. 

For years, Trump was at the center of an attention loop that was both extremely consequential and meaningless; a head of state was using his personal Twitter account to amplify extremist content, manipulate public attention, retweet dumb memes, promote dangerous conspiracy theories, and speak directly to followers, who in the end were willing to storm the Capitol to try to overturn an election they falsely believed was stolen.

For years, companies like Facebook and Twitter refrained from interfering in Trump’s social media posts, claiming their “newsworthiness” should keep him protected even when he broke platform rules on abuse or disinformation. That began to change during the covid pandemic, as Trump used his platform to repeatedly spread misinformation about both voting and the virus. Over the summer, Twitter began to append “fact checks” to Trump’s rule-breaking tweets, which so infuriated the president that he threatened to abolish Section 230, the rule that shields many internet companies from liability for what users do on their services. 

But even if Trump stays off the major social media platforms forever, the cycle has been established. Trump will continue to issue statements, and they will be shared by his supporters and covered by the media whether or not he is on social media. And the networked attention cycle that revolved around him for so long will continue without him, as will the underlying structures that make Trump’s influential presence on social media possible.

It’s the “worst-case scenario for Facebook, who put this thing together.”

Joan Donovan, Harvard Shorenstein Center on Media, Politics, and Public Policy

Banning Trump from Facebook permanently would keep him on the sidelines of these networks. But focusing so much attention on the platform decisions themselves is extremely misguided, says Whitney Phillips, an assistant professor at Syracuse University who studies media literacy and disinformation. Trump’s social media success comes partly from the platforms but partly from “economic, political, and social undercurrents” that incentivized Trump and will continue to promote the next Trumps to come.  

“Trump’s accounts are exhausting because they are taking attention away from the deeper stuff we should have dealt with yesterday,” Phillips says. The oversight board’s decision was hyped as a major referendum on how Facebook balances free speech and safety; instead, it was a non-decision that changes little about why we ended up here in the first place.

The creation of the board itself “was essentially a media op PR campaign,” argues Joan Donovan, research director at the Harvard Kennedy School’s Shorenstein Center on Media, Politics, and Public Policy. The board’s approach means that Facebook has been tasked with deciding for itself how to apply its own policies, which is essentially the “worst-case scenario for Facebook, who put this thing together,” she says. “They had one job.”  

“When it comes to Facebook, you have to remember that Facebook isn’t just a place where people post messages,” says Donovan. “It effectively gives you the capacity to have your own television station,” along with a network of related pages and accounts that can quickly amplify content to an audience of millions. Facebook is an organizing tool and broadcast network in one, and its power in that capacity is routinely used for good and for bad. 

Conservative content does extremely well on Facebook, despite longtime claims by many right-wing personalities that their voices are unfairly suppressed there (the claim of censorship itself generates a vast amount of viral right-wing content whenever it comes up). The top-performing link post on Facebook the day before the ruling was from Ben Shapiro, the right-wing commentator, who promoted a Daily Wire article about a city “revolt” against critical race theory, according to CrowdTangle. Shapiro had three of the top 10 posts of the day.

Trump is permanently banned from Twitter for inciting the mob of right-wing extremists who stormed the Capitol building in his name on January 6. YouTube said in March that it will reinstate his channel when the risk of violence lessens.  

 Meanwhile, in six months, there will be another day when Facebook issues a decision about Trump’s account, and we will all be talking about this again. 

“It’s really hard to keep showing up and thinking and analyzing and being thoughtful when it feels like you are shooting that light into a black hole,” Phillips says. “People need to be paying attention, and it all matters so much. If you exhaust people to the point of burnout, then that’s going to benefit Facebook.” 

We reviewed three at-home covid tests. The results were mixed.

Over-the-counter home tests for covid-19 are finally here. MIT Technology Review obtained kits sold by three companies and tried them out.

After buying tests from CVS and online, I tested myself several times and ended up learning an important lesson: while some people worry that home tests could miss covid cases, the bigger problem may be just the opposite. These tests have “false positive” rates of around 2%, which means that if you keep using them, you’ll eventually test positive, even though you don’t have covid-19.

That happened to me. I tested negative several times, but the fourth time the result came up “POSITIVE FOR COVID-19.” I knew that was probably wrong—I’m a dedicated quarantiner who rarely goes anywhere. But I was sufficiently alarmed to follow the directions and scurry to a hospital for a gold-standard laboratory test, wasting my time and that of the friendly nurse who swabbed deep into my nasal cavity. That result was negative.

Some experts have argued that cheap, fast tests could be used to screen the whole population every week. But what I learned is that this type of mass screening could be as much of a public nuisance as a pandemic-buster. In fact, if you tested everyone in the US tomorrow with over-the-counter tests, the large majority of positive results—maybe nine out of 10—would be false alarms.

After trying them, I do think there is an important role for consumer tests. Overall, I found they’re easy to use, cheaper than existing mail-in tests, and more convenient than waiting at a testing site. If you have symptoms, or fear you’ve been exposed, having a test handy could help. As a screening tool for schools or businesses, they could also work, so long as there’s a backup plan to confirm positives.

Accuracy issues

The issue with home tests is accuracy, which is between 85% and 95% for detecting covid. That is, they catch about nine of every 10 infections, a metric called the test’s “sensitivity.” Some people have said that any missed cases are a worry, since a person with a false negative could go out and infect someone else. But if the alternative is no test at all, then none of those infections would be caught.

The tricky part of unrestricted testing, I learned, comes instead from the concept of “specificity,” or the rate at which a test correctly identifies negatives. For the home tests I tried, that figure is about 98%, with a corresponding 2% rate of false positives. What I didn’t realize—and what your everyday CVS shopper won’t either—is that there are two ways that less-than-perfect specificity can get amplified into a bigger problem.

The first way is through repeat testing, the kind I did. By the time my review of the home tests was complete, I’d tested five times in two days, accumulating 1 in 10 odds of being told I had covid when I didn’t (a 2% chance of a false positive each time, compounded over five tests). The second source of trouble I didn’t anticipate is what is known as “pretest probability.” As I said, I don’t socialize, so my probability of actually having covid in the first place was very low, maybe even zero. What this meant is that my chance of a correct positive when I took the test was also essentially zero, while my false positive chance remained 2% like everyone else’s. The way I was using the test, any positive result was nearly certain to be wrong.

Now consider this same phenomenon—a higher chance of false positives than of real ones—applying to a large group, or even a whole country. In the US, covid rates are falling. This lower background rate means if home tests were used by everyone in the country tomorrow, there could be five to 15 wrong positives for every right one.
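
To make that arithmetic concrete, here is a back-of-the-envelope calculation using the sensitivity and specificity figures above. The 0.3% prevalence is an assumption chosen for illustration, not a reported number.

```python
# Rough math on home-test false positives.
specificity = 0.98   # correctly flags 98% of people who don't have covid
sensitivity = 0.90   # catches roughly nine of every 10 real infections

# 1) Repeat testing: chance that someone without covid sees at least one
#    false positive across five tests.
p_false_alarm = 1 - specificity ** 5
print(f"False alarm after 5 tests: {p_false_alarm:.1%}")   # ~9.6%, roughly 1 in 10

# 2) Mass screening at a low background rate (0.3% assumed here):
#    false positives swamp the true ones.
prevalence = 0.003
true_positives  = prevalence * sensitivity
false_positives = (1 - prevalence) * (1 - specificity)
print(f"Wrong positives per right one: {false_positives / true_positives:.1f}")   # ~7
```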

As a result, I don’t think home tests are as useful as some have hoped. If used at scale to screen for covid, they could send millions of anxious people in search of lab tests and medical care they don’t need.

Still relevant?

As the covid-19 pandemic spread around the globe last year, economists and scientists called for massive expansion of testing and contact tracing in the US, to find and isolate infected people. But the number of daily tests in the US has never much exceeded 2 million, according to the Covid Tracking Project, and most of those were done in labs or on special instruments.

Home tests will now be manufactured in the tens of millions, say their makers, but some experts aren’t sure how much they will matter at this point. “The real value of these tests was six months ago,” says Amitabh Chandra, a professor at Harvard University’s Kennedy School. “I think that the move to over-the-counter is great, but it has limited value in a world where vaccines become more widely available.” Vaccination credentials could be more important for travel and dining than test results are.

Companies selling the tests say they are still a relevant strategy for getting back to normal, especially given that kids aren’t getting vaccinated yet. For employers who want to keep an office or factory open, they say, self-directed consumer tests might be a good option. A spokesperson for Abbott told me that they might also help people “start thinking about coordinating more covid-conscious bridal showers, baby showers, or birthday parties.”

The UK government started giving away covid antigen tests for free, by mail and on street corners, on April 9, saying it wants people “to get in the habit” of testing themselves twice a week as social distancing restrictions are eased. Along with vaccines, free tests are part of that nation’s plan to quash the virus. Later, though, a leaked government memo said health officials were privately worried about a tsunami of false positives.

In the US, there’s still no national campaign around home tests or subsidy for them, and as an out-of-pocket expense, they are still too expensive for most people to use with any frequency. That may be for the best, given my experience.

Types of tests

The three tests we tried included two antigen tests, BinaxNow from Abbott Laboratories and a kit from Ellume, as well as one molecular test, called Lucira. In general, molecular tests, which detect the genes of the coronavirus, are more reliable than antigen tests, which sense the presence of the virus’s outer shell.

Everything you need is in one box, except in the case of the Ellume test, which must be paired with an app. Overall, the Lucira test had the best combination of advertised accuracy and simplicity, but it was also the most expensive at $55.

We didn’t try Quidel QuickVue, another antigen test, or a molecular test from Cue Health. Those tests, while authorized for home use, are not being sold directly to the public yet.

After trying all the tests, I am not planning to invest in using them regularly. I work from home and don’t socialize, so I don’t really need to. Instead, I plan to keep at least one test in my cupboard so that if I do feel sick, or lose my sense of smell, I will be able to quickly find out whether it’s covid-19. The ability to test at home might become more important next winter when cold and flu season returns.

Shipping Abbott tests. (Credit: Abbott Labs)

BinaxNow by Abbott

Time required: about 20 minutes
Price: $23.99 for two
Availability: At some CVS stores starting in April. Abbott says it is making tens of millions of BinaxNow tests per month.
Accuracy: 84.6% for detecting covid-19 infections, 98.5% for correctly identifying covid-19 negatives

This is the at-home version of the fast, 15-minute test the White House was using last year to screen staff and visitors. It’s an antigen test, meaning that it examines a sample from a nasal swab to detect a protein in the shell of the virus. It went on sale in the US last week, and I was able to buy a two-test kit at CVS for $23.99 plus tax.

The technology used is called a “lateral flow immunoassay.” In simple terms, that means it works like a pregnancy test. It’s basically a paper card with a test strip. As the sample flows through it, it hits antibodies that stick to the virus protein and then to a colored marker. If the virus is present, a pink bar appears on the strip.

I found the test fairly easy to perform. You use an eye dropper to dispense six drops of chemical into a small hole in the card; then you insert a swab after you’ve run it around in both nostrils. Rotate the swab counterclockwise, fold the card to bring the test strip in contact with the swab, and that’s it. Fifteen minutes later, a positive result will show up as a faint pink line.

The drawback of the test is that there’s room for two different kinds of user error. It’s hard to see the drops come out of the dropper, and using too few could cause a false negative. So could swabbing your nose incorrectly. Unlike the other tests, this one can’t tell if you’ve made a mistake.

And besides the prospect of user error, the test itself has issues with accuracy. BinaxNow is the cheapest test out there, but it’s also the most likely to be wrong, missing about one in seven real infections. Abbott cautions that results “should be treated as presumptive” and “do not rule out SARS-CoV-2.”

But a buyer won’t find the accuracy rate without digging into the fine print. The company also buries a crucial requirement imposed by regulators: to compensate for the lower accuracy, you are supposed to use both tests in the kit, at least 36 hours apart. I doubt a casual buyer will realize that. The two-test requirement is barely mentioned in the instructions.

Lucira Check-It

Time required: about 40 minutes
Price: $55
Availability: Can be purchased online at lucirahealth.com
Accuracy: 94% for positives, 98% for negatives

Of all the kits I used, Lucira was far and away my favorite. This is a laboratory-type test, with techniques similar to those used by professional labs, and you feel a little bit like a scientist using it.

Since it’s not in stores yet, the Lucira test needs to be ordered online, and I would suggest doing so well before you need it. The first test I purchased took five days to arrive, leaving me anxious about its whereabouts. The company says you can track its packages, but I wasn’t able to access any tracking data until after my kit arrived. I ordered a second test, this time paying $20 for express shipping, and I still couldn’t find the tracking information.

At $55, this is the most expensive test we reviewed, so it’s not something you’ll use too often. Still, it’s about half the cost of the mail-away swab tests from companies like Vault Health—previously my go-to option for avoiding hospitals and crowded testing facilities, as when I needed to test my kid last July so she could go to sleep-away camp. Those mail-in tests give an answer within 48 hours. With Lucira, you’ll get your answer in under an hour.

The test kit includes a swab, a tube of purple chemicals, and a small battery-operated base station. It works with a technology called LAMP, a molecular method that makes copies of a coronavirus gene until the amount is large enough to detect. That means it’s nearly equivalent to PCR, the gold-standard test used by labs. Unlike PCR, a test using LAMP doesn’t need rapid heating and cooling, so it can be run at home.

After swabbing your nose, you stir the swab in the tube and then click it into place in the base station. After half an hour, one of two LED lights turns on, saying either “Positive” or “Negative.” I found the Lucira test’s readout the easiest to understand.

Ellume Home Covid Test

Time required: about 45 minutes
Price: $38.99
Availability: Available online at CVS.com. The company says it is shipping 100,000 tests a day to the US from Australia and will be manufacturing 500,000 tests a day in the US by the end of the year.
Accuracy: 95% for positives, 97% for negatives

Home tests still aren’t easy to find, and I couldn’t find a pharmacy that stocked Ellume, a test marketed by an Australian company of the same name. But the company had previously sent me a sample kit, which I used in this review. As of this week, the Ellume test can also be purchased through the website of CVS.

Of all the tests I tried, Ellume’s had the most components—five, versus three for the others. That tally included an app that you have to download onto your phone. Including resetting your Apple ID if you forget it, as I always do, and answering the app’s questions, including your name, address, and phone number, plus a break to get a cup of coffee, this test took longer to carry out. Budget an hour if you decide to read the app’s privacy policy and terms and conditions.

Like the Abbott test, Ellume’s is an antigen test. But it is a more sophisticated one, with embedded optics and electronics that read a fluorescent result. In addition to looking for the virus, it also detects a common human protein, so if you didn’t swab your nose correctly, the test will know.

Thanks to these bells and whistles, and a special swab, Ellume has a higher accuracy rate for spotting covid than other antigen tests, missing only one in 20 infections, according to the company. The drawback is that it is 50% more likely than other tests to falsely inform you that you are positive for covid-19 when you are not. Indeed, my false positive result occurred while using this test.

Because it uses a phone app, you’ll need an internet connection to use Ellume, which involves communication between your phone and the kit via Bluetooth. An advantage of the app is that it provides good directions and an electronic receipt for your test—the kind you can show to a school or employer. The others I tried didn’t have a paper trail, so there’s no proof you took the test. But that receipt comes with a privacy cost. Of the three tests I tried, Ellume’s was the only one that isn’t entirely private. The app warns that it will share “certain information with public health authorities.” That information turns out to include your birthday, your zip code, and your test result. The company says the data helps health agencies track the pandemic and report infection levels.

The internet is excluding Asian-Americans who don’t speak English

Jennifer Xiong spent her summer helping Hmong people in California register to vote in the US presidential election. The Hmong are an ethnic group that come from the mountains of China, Vietnam, Laos, and Thailand but don’t have a country of their own, and Xiong was a volunteer organizer at Hmong Innovating Politics, or HIP, in Fresno. There are around 300,000 Hmong people in the US, and she spent hours phone-banking and working on ads to run on Hmong radio and TV channels. It was inspiring work. “This was an entirely new thing for me to see,” she says. “Young, progressive, primarily women doing this work in our community was just so rare, and I knew it was going to be a huge feat.” And by all accounts it was. Asian-American turnout in the 2020 election in general was extraordinary, and observers say turnout among Hmong citizens was the highest they can remember. 

But Xiong says it was also incredibly disheartening. 

While Hmong people have long ties to the US—many were encouraged to migrate across the Pacific after being recruited to support the United States during the Vietnam War—they are often left out of mainstream political discourse. One example? On the website of Fresno’s county clerk, the government landing page for voter registration has an option to translate the entire page into Hmong—but, Xiong says, much of the information is mistranslated. 

And it starts right at the beginning. Instead of the Hmong word for “hello” or “welcome,” she says, there is “something else that said, like, ‘your honor’ or ‘the queen’ or ‘the king’ instead.”

Seeing something so simple done incorrectly was frustrating and off-putting. “Not only was it just probably churned through Google Translate, it wasn’t even peer edited and reviewed to ensure that there was fluency and coherence,” she says.

Xiong says this kind of carelessness is common online—and it’s one reason she and others in the Hmong community can feel excluded from politics.

They aren’t the only ones with the sense that the digital world wasn’t built for them. The web itself is built on an English-first architecture, and most of the big social media platforms that host public discourse in the United States put English first too. 

And as technologies become proxies for civic spaces in the United States, the primacy of English has been magnified. For Asian-Americans, the move to digital means that access to democratic institutions—everything from voting registration to local news—is impeded by linguistic barriers. 

It’s an issue in health care as well. During the pandemic, when Black, Hispanic, and Native patients have been two to three times more likely to be hospitalized or die than white patients, these barriers add another burden: Brigham and Women’s Hospital in Boston found that non-English-speaking patients were 35% more likely to die of covid than those who spoke English. Translation problems are not the only issue. Xiong says that when Hmong users were trying to make vaccine appointments, they were asked for their zodiac sign as a security question—despite the fact that many in this community are unfamiliar with Western astrology.

In normal times, overcoming these challenges would be complicated enough, since Asian-Americans are the most linguistically diverse ethnic group in America. But after a year that has seen a dramatic increase in real-world and online attacks on Asian-Americans, the situation has become urgent in a different way.

“They don’t catch misinformation”

Christine Chen, executive director of APIAVote, a nonprofit that promotes civic engagement among Asian people and Pacific Islanders, says that political life has always been “exclusionary” for Asian people in the US, but “with digital spaces, it’s even more challenging. It’s so much easier to be siloed.” 

Big platforms like Facebook, Twitter, and YouTube are popular among Asian-Americans, as are messaging apps like WeChat, WhatsApp, and Line. Which communication channels people use often depends on their ethnicity. During the election campaign, Chen focused on building a volunteer network that could move in and out of those siloes to achieve maximum impact. At the time, disinformation targeting Asian-Americans ran rampant in WeChat groups and on Facebook and Twitter, where content moderation is less effective in non-English languages. 

APIAVote volunteers would join different groups on the various platforms to monitor for disinformation while encouraging members to vote. Volunteers found that Vietnamese-Americans, for example, were being targeted with claims that Joe Biden was a socialist, preying on their fears of communism—similar to political messages pushed at Cuban-Americans.

Chen says that while content moderation policies from Facebook, Twitter, and others succeeded in filtering out some of the most obvious English-language disinformation, the system often misses such content when it’s in other languages. That work instead had to be done by volunteers like her team, who looked for disinformation and were trained to defuse it and minimize its spread. “Those mechanisms meant to catch certain words and stuff don’t necessarily catch that dis- and misinformation when it’s in a different language,” she says.

Google’s translation services and technologies such as Translatotron and real-time translation headphones use artificial intelligence to convert between languages. But Xiong finds these tools inadequate for Hmong, a deeply complex language where context is incredibly important. “I think we’ve become really complacent and dependent on advanced systems like Google,” she says. “They claim to be ‘language accessible,’ and then I read it and it says something totally different.” 

(A Google spokesperson admitted that smaller languages “pose a more difficult translation task” but said that the company has “invested in research that particularly benefits low-resource language translations,” using machine learning and community feedback.)

All the way down

The challenges of language online go beyond the US—and down, quite literally, to the underlying code. Yudhanjaya Wijeratne is a researcher and data scientist at the Sri Lankan think tank LIRNEasia. In 2018, he started tracking bot networks whose activity on social media encouraged violence against Muslims: in February and March of that year, a string of riots by Sinhalese Buddhists targeted Muslims and mosques in the cities of Ampara and Kandy. His team documented “the hunting logic” of the bots, catalogued hundreds of thousands of Sinhalese social media posts, and took the findings to Twitter and Facebook. “They’d say all sorts of nice and well-meaning things–basically canned statements,” he says. (In a statement, Twitter says it uses human review and automated systems to “apply our rules impartially for all people in the service, regardless of background, ideology, or placement on the political spectrum.”)

When contacted by MIT Technology Review, a Facebook spokesperson said the company commissioned an independent human rights assessment of the platform’s role in the violence in Sri Lanka, which was published in May 2020, and made changes in the wake of the attacks, including hiring dozens of Sinhala and Tamil-speaking content moderators. “We deployed proactive hate speech detection technology in Sinhala to help us more quickly and effectively identify potentially violating content,” they said.

“What I can do with three lines of code in Python in English literally took me two years of looking at 28 million words of Sinhala”

Yudhanjaya Wijeratne, LIRNEasia

When the bot behavior continued, Wijeratne grew skeptical of the platitudes. He decided to look at the code libraries and software tools the companies were using, and found that the mechanisms to monitor hate speech in most non-English languages had not yet been built. 

“Much of the research, in fact, for a lot of languages like ours has simply not been done yet,” Wijeratne says. “What I can do with three lines of code in Python in English literally took me two years of looking at 28 million words of Sinhala to build the core corpuses, to build the core tools, and then get things up to that level where I could potentially do that level of text analysis.”
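
For a sense of the gap Wijeratne is describing, the “three lines of code” that English speakers can lean on look roughly like the snippet below, which uses NLTK’s off-the-shelf English sentiment analyzer as an illustrative stand-in (it is not his Watchdog code). The point is that the pretrained resource already exists for English; for Sinhala, the equivalent corpora and tools had to be built by hand over years.

```python
# English comes batteries-included: a pretrained lexicon is one download away.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")                       # off-the-shelf English resource
analyzer = SentimentIntensityAnalyzer()
print(analyzer.polarity_scores("This post is hateful and threatening."))
```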

After suicide bombers targeted churches in Colombo, the Sri Lankan capital, in April 2019, Wijeratne built a tool to analyze hate speech and misinformation in Sinhala and Tamil. The system, called Watchdog, is a free mobile application that aggregates news and attaches warnings to false stories. The warnings come from volunteers who are trained in fact-checking. 

Wijeratne stresses that this work goes far beyond translation. 

“Many of the algorithms that we take for granted that are often cited in research, in particular in natural-language processing, show excellent results for English,” he says. “And yet many identical algorithms, even used on languages that are only a few degrees of difference apart—whether they’re West Germanic or from the Romance tree of languages—may return completely different results.”

Natural-language processing is the basis of automated content moderation systems. Wijeratne published a paper in 2019 that examined the discrepancies between their accuracy in different languages. He argues that the more computational resources that exist for a language, like data sets and web pages, the better the algorithms can work. Languages from poorer countries or communities are disadvantaged.

“If you’re building, say, the Empire State Building for English, you have the blueprints. You have the materials,” he says. “You have everything on hand and all you have to do is put this stuff together. For every other language, you don’t have the blueprints.

“You have no idea where the concrete is going to come from. You don’t have steel and you don’t have the workers, either. So you’re going to be sitting there tapping away one brick at a time and hoping that maybe your grandson or your granddaughter might complete the project.”

Deep-seated issues

The movement to provide those blueprints is known as language justice, and it is not new. The American Bar Association describes language justice as a “framework” that preserves people’s rights “to communicate, understand, and be understood in the language in which they prefer and feel most articulate and powerful.” 

The path to language justice is tenuous. Technology companies and government service providers would have to make it a much higher priority and invest many more resources into its realization. And, Wijeratne points out, racism, hate speech, and exclusion targeting Asian people, especially in the United States, existed long before the internet. Even if language justice could be achieved, it’s not going to fix these deep-seated issues.

But for Xiong, language justice is an important goal that she believes is crucial for the Hmong community. 

After the election, Xiong took on a new role with her organization, seeking to connect California’s Hmong community with public services such as the Census Bureau, the county clerk, and vaccine registration. Her main objective is to “meet the community where they are,” whether that’s on Hmong radio or in English via Facebook live, and then amplify the perspective of Hmong people to the broader public. But every day she has to face the imbalances in technology that shut people out of the conversation—and block them from access to resources. 

Equality would mean “operating in a world where interpretation and translation is just the norm,” she says. “We don’t ask whether there’s enough budgeting for it, we don’t question if it’s important or it’s valuable, because we prioritize it when it comes to the legislative table and public spaces.”

Correction: The world wide web was invented in Switzerland. The article mistakenly stated that it was invented in the US. The reference has been removed.