Could covid lead to a lifetime of autoimmune disease?

When Aaron Ring began testing blood samples collected from covid-19 patients who had come through Yale–New Haven Hospital last March and April, he expected to see a type of immune protein known as an autoantibody in at least some of them. These are antibodies that have gone rogue and started attacking the body’s own tissue; they’re known to show up after some severe infections.

Researchers at New York City’s Rockefeller University had already found that some patients with bad cases of covid had copies of these potentially dangerous antibodies circulating in the bloodstream. These preexisting autoantibodies, likely created by previous infections, were still lurking around and appeared to be mistakenly attacking other immune cells. This helped explain why some people were getting so sick from covid-19.

Still, what Ring, a shaggy-haired cancer immunologist at Yale University, detected in his blood samples last fall so spooked him that he pulled his nine-month-old daughter out of day care and put his family back on lockdown.

The Rockefeller researchers had identified a single type of antibody primed to attack other immune cells. But Ring, using a novel detection method he had invented, found a vast array of autoantibodies ready to attack scores of other human proteins, including ones found in the body’s vital organs and bloodstream. The levels, variety, and ubiquity of the autoantibodies he found in some patients shocked him; it looked like what doctors might see in people with chronic autoimmune diseases that often lead to a lifetime of pain and damage to organs including the brain.

Aaron Ring, an immunologist at Yale, has found a wide array of autoantibodies ready to attack the body’s organs.

“What rocked my world was seeing covid patients with levels of autoreactivity commensurate with an autoimmune disease like lupus,” he says.

Ring’s autoantibody tests showed that in some patients—even some with mild cases of covid—the rogue antibodies were marking blood cells for attack. Others were on the hunt for proteins associated with the heart and liver. Some patients appeared to have autoantibodies primed to attack the central nervous system and the brain. This was far more ominous than anything identified by the Rockefeller scientists. Ring’s findings seemed to suggest a potentially systemic problem; these patients seemed to be cranking out multiple varieties of new autoantibodies in response to covid, until the body appeared to be at war with itself.

What scared Ring the most was that autoantibodies have the potential to last a lifetime. This raised a series of chilling questions: What are the long-term consequences for these patients if these powerful cellular assassins outlive the infection? How much destruction could they cause? And for how long?

What Ring detected in his blood samples last fall so spooked him that he pulled his nine-month-old daughter out of day care and put his family back on lockdown.

Even as hope is building that vaccines will provide a way to halt covid’s relentless spread, another public health crisis is looming: the mysterious and persistent chronic condition afflicting some survivors, often referred to as “long covid.” Roughly 10% of covid survivors, many of whom had only mild initial symptoms, can’t seem to kick it.

These long-haulers often suffer from extreme fatigue, shortness of breath, “brain fog,” sleep disorders, fevers, gastrointestinal symptoms, anxiety, depression, and a wide array of other symptoms. Policymakers, doctors, and scientists around the globe warn that countless millions of otherwise healthy young adults could face decades of debilitating issues.

The causes of long-haul covid are still mysterious. But autoimmunity now tops the list of possibilities. And Ring believes that among the likeliest culprits, at least in some patients, are the armies of runaway autoantibodies.

A system gone haywire

It did not take long for doctors on the front lines of the covid pandemic to recognize that the biggest threat to many of their patients was not the virus itself, but the body’s response to it.

In Wuhan, China, some clinicians noted that the blood of many of their sickest patients was flooded with immune proteins known as cytokines, cellular SOS signals capable of triggering cell death or a phenomenon known as a cytokine storm, in which the immune system begins attacking the body’s own tissue. Cytokine storms were thought to represent a kind of risky, doomsday immune response—akin to calling in an air strike on your own position while badly outnumbered in the middle of a firefight.

Though this was something doctors had seen in other conditions, it quickly became apparent that the cytokine storms produced by covid-19 had unusual destructive power.

Early on in the pandemic, Jean-Laurent Casanova, an immunologist and geneticist at Rockefeller University, decided to take a closer look. In 2015, Casanova had demonstrated that many people who contracted severe cases of influenza carried genetic mutations blocking their ability to produce an important signaling protein, called type I interferon (IFN-I), which enables patients to mount an effective early immune response. Interferon got its name, Casanova says, because it “interferes” with viral replication by informing neighboring cells “that there’s a virus around, and that they should close the windows and lock the door.”

Jean-Laurent Casanova, an immunologist and geneticist at Rockefeller University, first spotted autoantibodies lurking about in the blood of patients with bad cases of covid.

When Casanova looked at patients with severe covid, he found that indeed, a small but significant number of those suffering from critical pneumonia also carried these inborn errors—genetic typos that prevented them from producing interferon. But he also found something else intriguing: an additional 10% of covid patients with pneumonia were suffering from interferon deficiencies because the signaling agent was being attacked and neutralized by autoantibodies.

These autoantibodies, he concluded, had likely been circulating in the patients’ bloodstream before they contracted covid. However, in response to the covid infection, these lingering autoantibodies had been produced in massive numbers, attacking the crucial early warning signal before it could sound the alarm. By the time the immune system finally kicked into gear, it was so far behind the 8-ball that it resorted to its last-ditch option: a dangerous cytokine storm.

“The autoantibodies already exist—their creation is not triggered by the virus,” Casanova explains. But once a person is infected, they seem to multiply in large numbers, causing catastrophic pulmonary and systemic inflammation.

Casanova’s findings, published in September in Science, suggested that many critical covid patients could be saved with widely available existing drugs—types of synthetic interferon that could evade the autoantibodies and kick the immune system into gear early enough to avoid a cytokine storm.

But the results also hinted at something that fed Ring’s anxiety: the ability of the autoantibodies, once created and allowed to circulate, to stick around and pose an ongoing threat. There was something else that worried Ring too. While Casanova attributed the rogue antibodies to the legacy of a previous infection, Ring’s data suggested that new ones can somehow be created by covid itself.   

Ring quickly confirmed Casanova’s results in some of his own patients. But that was just the start, since his own detection technique, created as a tool in cancer immunology, could test for the presence of antibodies directed against any of 2,688 human proteins.

Ring found antibodies targeting 30 other important signaling agents besides interferon, some of which play an essential role in directing other immune cells where to attack. There were also antibodies against a number of organ- and tissue-specific proteins—some of which seemed to account for certain symptoms of covid. Ominously, unlike Casanova’s autoantibodies, many of Ring’s appeared to be brand new.

On his computer, Ring can pull up several graphs displaying the levels of 15 different autoantibodies found in several patients as their infections progressed. Just as Casanova described, antibodies against interferon are clearly visible in the blood when patients were first tested at the hospital. Those numbers stay high as the infection progresses. But Ring found the trajectory to be quite different for the other autoantibodies.

In the initial samples, all the autoantibodies except those against interferon are undetectable in the blood. Those other antibodies first appear in subsequent blood samples and continue to rise as the infection persists. It seemed to confirm Ring’s worst fears: that those autoantibodies were created by covid itself.

“These are very clearly newly acquired—no question about it,” he explains, pointing to one line of rising autoantibodies. “They came up during the course of infection. The infection triggered autoimmunity.”

In most of those patients, the autoantibodies returned to undetectable levels in subsequent blood samples. But in some, the autoantibodies remained high at the point of last testing—in some cases more than two months after infection. Some of those patients developed long covid.

“We have been, publicly and in the paper, pretty cautious about the interpretation of our results,” he says. “But this does have implications for post-covid syndrome, because autoantibodies can plausibly persist well after the virus has been dealt with.”

An all-out attack

Why do these new autoantibodies appear? Some enticing clues have emerged. In October, a team of researchers led by Ignacio Sanz, an expert on lupus at Emory University, documented a phenomenon in the immune system of many severe covid patients that is often seen during lupus flare-ups.

It occurs in the specialized immune cells known as B cells, which produce antibodies. In order to quickly scale up production of the B cells needed to combat the covid virus, Sanz explains, the immune systems of some patients seem to take a dangerous shortcut in the biological process that usually determines which antibodies the body generates to fight off a specific infection.

Normally when an invading virus triggers an immune response, B cells form into self-contained structures in the follicles of the lymph nodes, where they multiply rapidly, mutate, and swell into an immune army of billions, each one bearing a copy of its signature antibody protein on its surface. Almost as soon as this happens, however, the cells launch into a deadly game of molecular-level musical chairs, competing to bind with a small number of viral fragments to see which one is best suited to attack it. The losing cells immediately begin to die off by the millions. In the end, only the B cells with the antibody that forms the strongest bond to the invading virus survive to be released into the bloodstream.

It’s a good thing the rest don’t, Sanz explains, because as many as 30% of the antibodies produced in the race to fight off an invading virus will target parts of the body the system is designed to protect.

When Sanz looked at the blood of patients with severe covid, he found that many did quickly create antibodies to fight the virus. But most of these antibodies were produced by rapidly multiplying B cells generated outside the normal weeding-out process. Sanz had seen this phenomenon before in lupus, and many believed it to be a hallmark of immune dysfunction.

Eline Luning Prak, a professor at the Hospital of the University of Pennsylvania, says she is not surprised. Luning Prak, an expert on autoimmune diseases, notes that when the body is in crisis, the usual controls may be relaxed. “This is what I call an all-hands-on-deck-style immune response,” she says. “When you’re dying from an overwhelming viral infection, the immune system at this point says, ‘I don’t care—just give me anything.’”

Still a mystery

In March, James Heath, president of the Institute for Systems Biology in Seattle, worked with a long list of eminent immunologists to publish what he believes to be the first scientific paper characterizing the immune system of patients two to three months after becoming infected. Heath and his colleagues found that people who survived took one of four different pathways. Two groups of patients experienced full recoveries—one group from severe acute covid, and a second from the disease’s milder form. And two other groups—some of whom had severe acute covid and some of whose initial symptoms were mild—continued to experience massive immune activation.

The vast majority of patients Heath studied have yet to make a full recovery. Only a third, he says, “are feeling and looking, from immunology metrics, like they’re recovered.”

But what exactly is causing this continued immune reaction—whether it’s autoimmune disease and autoantibodies or something else—is “the million-dollar question.” To Heath, the persistent presence of self-attacking antibodies, like those found by Ring and others, seems like a leading hypothesis. He believes, though, that the chronic symptoms could also be caused by undetectable remnants of the virus that keep the immune system in a state of low-level activation.

In the end, Heath thinks that what we call long covid may well turn out to be more than one disorder caused by the initial infection. “For sure, your immune system is activating against something,” he says. “And whether it’s activating itself or not, which is the difference between autoimmune and something else, is an open question. It’s probably different in different people.”

Luning Prak agrees that the cause of long covid may well be different in different patients.

“What could be causing long covid? Well, one possibility is you have viral injury and you have residual damage from that,” she says. “Another possibility is that you have autoimmunity.” She adds, “A third possibility is some type of chronic infection; they just don’t completely clear the virus and it allows the virus to kind of chronically set up shop somehow. That’s a really scary and creepy idea for which we have very little evidence.” And, she says, all three might turn out to be true.

Why risk it?

Though the culprit (or culprits) behind long covid remains a mystery, the work being done by Ring, Heath, Luning Prak, and others may soon give us a far better idea of what is happening. Ring notes, for example, that a growing number of reports from long-haulers suggest that in some cases, the vaccine seems to be curing them.

Akiko Iwasaki, a Yale immunologist, speculates that long covid may be caused by the presence of viral remnants.

Ring’s colleague Akiko Iwasaki, a Yale immunologist and a coauthor on his autoantibody paper, speculates that if long covid is caused by the presence of viral remnants, the vaccine might help clear them out by inducing more virus-specific antibodies. And if the cause is autoantibodies, she says, the specificity of the vaccine—which is engineered to train the immune system to target the covid virus—might be mobilizing a response with such urgency and force that other aspects of the system are stepping in to inhibit the autoantibodies.

All this remains scientific speculation. But Ring hopes he and his collaborators will soon get some answers. They are in the process of collecting blood samples from long covid patients from clinics around the country, looking for telltale signs of autoantibodies and other indications of immune dysfunction.

In the meantime, Ring isn’t taking any chances with his daughter.

“The fact that we had seen autoantibodies come up in so many covid patients really made me think, ‘Yeah, we’re not going to roll the dice with baby Sara,’” he says. “So, I mean, we put our money where our mouths are. Like I said, we are still paying for a day-care slot that we don’t use because we just don’t want to risk it. I mean, I don’t want to seem like Chicken Little here. But having seen the cases where things go badly, I’m just like, ‘Yeah, no, we want zero chance of that.’”

Stop talking about AI ethics. It’s time to talk about power.

At the turn of the 20th century, a German horse took Europe by storm. Clever Hans, as he was known, could seemingly perform all sorts of tricks previously limited to humans. He could add and subtract numbers, tell time and read a calendar, even spell out words and sentences—all by stamping out the answer with a hoof. “A” was one tap; “B” was two; 2+3 was five. He was an international sensation—and proof, many believed, that animals could be taught to reason as well as humans.

The problem was Clever Hans wasn’t really doing any of these things. As investigators later discovered, the horse had learned to provide the right answer by observing changes in his questioners’ posture, breathing, and facial expressions. If the questioner stood too far away, Hans would lose his abilities. His intelligence was only an illusion.

This story is often used as a cautionary tale for AI researchers evaluating the capabilities of their algorithms: a system isn’t always as intelligent as it seems, so take care to measure it properly.


But in her new book, Atlas of AI, leading AI scholar Kate Crawford flips this moral on its head. The problem, she writes, was with the way people defined Hans’s achievements: “Hans was already performing remarkable feats of interspecies communication, public performance, and considerable patience, yet these were not recognized as intelligence.”

So begins Crawford’s exploration into the history of artificial intelligence and its impact on our physical world. Each chapter seeks to stretch our understanding of the technology by unveiling how narrowly we’ve viewed and defined it.

Crawford does this by bringing us on a global journey, from the mines where the rare earth elements used in computer manufacturing are extracted to the Amazon fulfillment centers where human bodies have been mechanized in the company’s relentless pursuit of growth and profit. In chapter one, she recounts driving a van from the heart of Silicon Valley to a tiny mining community in Nevada’s Clayton Valley. There she investigates the destructive environmental practices required to obtain the lithium that powers the world’s computers. It’s a forceful illustration of how close these two places are in physical space yet how vastly far apart they are in wealth.

By grounding her analysis in such physical investigations, Crawford disposes of the euphemistic framing that artificial intelligence is simply efficient software running in “the cloud.” Her close-up, vivid descriptions of the earth and labor AI is built on, and the deeply problematic histories behind it, make it impossible to continue speaking about the technology purely in the abstract.

In chapter four, for example, Crawford takes us on another trip—this one through time rather than space. To explain the history of the field’s obsession with classification, she visits the Penn Museum in Philadelphia, where she stares at rows and rows of human skulls.

The skulls were collected by Samuel Morton, a 19th-century American craniologist, who believed it was possible to “objectively” divide them by their physical measurements into the five “races” of the world: African, Native American, Caucasian, Malay, and Mongolian. Crawford draws parallels between Morton’s work and the modern AI systems that continue to classify the world into fixed categories.

These classifications are far from objective, she argues. They impose a social order, naturalize hierarchies, and magnify inequalities. Seen through this lens, AI can no longer be considered an objective or neutral technology.

In her 20-year career, Crawford has contended with the real-world consequences of large-scale data systems, machine learning, and artificial intelligence. In 2017, with Meredith Whittaker, she cofounded the research institute AI Now as the first organization dedicated to studying the social implications of these technologies. She is also now a professor at USC Annenberg, in Los Angeles, and the inaugural visiting chair in AI and justice at the École Normale Supérieure in Paris, as well as a senior principal researcher at Microsoft Research.

Five years ago, Crawford says, she was still working to introduce the mere idea that data and AI were not neutral. Now the conversation has evolved, and AI ethics has blossomed into its own field. She hopes her book will help it mature even further.

I sat down with Crawford to talk about her book.

The following has been edited for length and clarity.

Why did you choose to do this book project, and what does it mean to you?

Crawford: So many of the books that have been written about artificial intelligence really just talk about very narrow technical achievements. And sometimes they write about the great men of AI, but that’s really all we’ve had in terms of really contending with what artificial intelligence is.

I think it’s produced this very skewed understanding of artificial intelligence as purely technical systems that are somehow objective and neutral, and—as Stuart Russell and Peter Norvig say in their textbook—as intelligent agents that make the best decision of any possible action.

I wanted to do something very different: to really understand how artificial intelligence is made in the broadest sense. This means looking at the natural resources that drive it, the energy that it consumes, the hidden labor all along the supply chain, and the vast amounts of data that are extracted from every platform and device that we use every day.

In doing that, I wanted to really open up this understanding of AI as neither artificial nor intelligent. It’s the opposite of artificial. It comes from the most material parts of the Earth’s crust and from human bodies laboring, and from all of the artifacts that we produce and say and photograph every day. Neither is it intelligent. I think there’s this great original sin in the field, where people assumed that computers are somehow like human brains and if we just train them like children, they will slowly grow into these supernatural beings.

That’s something that I think is really problematic—that we’ve bought this idea of intelligence when in actual fact, we’re just looking at forms of statistical analysis at scale that have as many problems as the data they’re given.

Was it immediately obvious to you that this is how people should be thinking about AI? Or was it a journey?

It’s absolutely been a journey. I’d say one of the turning points for me was back in 2016, when I started a project called “Anatomy of an AI system” with Vladan Joler. We met at a conference specifically about voice-enabled AI, and we were trying to effectively draw what it takes to make an Amazon Echo work. What are the components? How does it extract data? What are the layers in the data pipeline?

We realized, well—actually, to understand that, you have to understand where the components come from. Where did the chips get produced? Where are the mines? Where does it get smelted? Where are the logistical and supply chain paths?

Finally, how do we trace the end of life of these devices? How do we look at where the e-waste tips are located in places like Malaysia and Ghana and Pakistan? What we ended up with was this very time-consuming two-year research project to really trace those material supply chains from cradle to grave.

When you start looking at AI systems on that bigger scale, and on that longer time horizon, you shift away from these very narrow accounts of “AI fairness” and “ethics” to saying: these are systems that produce profound and lasting geomorphic changes to our planet, as well as increase the forms of labor inequality that we already have in the world.

So that made me realize that I had to shift from an analysis of just one device, the Amazon Echo, to applying this sort of analytic to the entire industry. That to me was the big task, and that’s why Atlas of AI took five years to write. There’s such a need to actually see what these systems really cost us, because we so rarely do the work of actually understanding their true planetary implications.

The other thing I would say that’s been a real inspiration is the growing field of scholars who are asking these bigger questions around labor, data, and inequality. Here I’m thinking of Ruha Benjamin, Safiya Noble, Mar Hicks, Julie Cohen, Meredith Broussard, Simone Brown—the list goes on. I see this as a contribution to that body of knowledge by bringing in perspectives that connect the environment, labor rights, and data protection.

You travel a lot throughout the book. Almost every chapter starts with you actually looking around at your surroundings. Why was this important to you?

It was a very conscious choice to ground an analysis of AI in specific places, to move away from these abstract “nowheres” of algorithmic space, where so many of the debates around machine learning happen. And hopefully it highlights the fact that when we don’t do that, when we just talk about these “nowhere spaces” of algorithmic objectivity, that is also a political choice, and it has ramifications.

In terms of threading the locations together, this is really why I started thinking about this metaphor of an atlas, because atlases are unusual books. They’re books that you can open up and look at the scale of an entire continent, or you can zoom in and look at a mountain range or a city. They give you these shifts in perspective and shifts in scale.

There’s this lovely line that I use in the book from the physicist Ursula Franklin. She writes about how maps join together the known and the unknown in these methods of collective insight. So for me, it was really drawing on the knowledge that I had, but also thinking about the actual locations where AI is being constructed very literally from rocks and sand and oil.

What kind of feedback has the book received?

One of the things that I’ve been surprised by in the early responses is that people really feel like this kind of perspective was overdue. There’s a moment of recognition that we need to have a different sort of conversation than the ones that we’ve been having over the last few years.

We’ve spent far too much time focusing on narrow tech fixes for AI systems and always centering technical responses and technical answers. Now we have to contend with the environmental footprint of the systems. We have to contend with the very real forms of labor exploitation that have been happening in the construction of these systems.

And we also are now starting to see the toxic legacy of what happens when you just rip out as much data off the internet as you can, and just call it ground truth. That kind of problematic framing of the world has produced so many harms, and as always, those harms have been felt most of all by communities who were already marginalized and not experiencing the benefits of those systems.

What do you hope people will start to do differently?

I hope it’s going to be a lot harder to have these cul-de-sac conversations where terms like “ethics” and “AI for good” have been so completely denatured of any actual meaning. I hope it pulls aside the curtain and says, let’s actually look at who’s running the levers of these systems. That means shifting away from just focusing on things like ethical principles to talking about power.

How do we move away from this ethics framing?

If there’s been a real trap in the tech sector for the last decade, it’s that the theory of change has always centered engineering. It’s always been, “If there’s a problem, there’s a tech fix for it.” And only recently are we starting to see that broaden out to “Oh, well, if there’s a problem, then regulation can fix it. Policymakers have a role.”

But I think we need to broaden that out even further. We have to say also: Where are the civil society groups, where are the activists, where are the advocates who are addressing issues of climate justice, labor rights, data protection? How do we include them in these discussions? How do we include affected communities?

In other words, how do we make this a far deeper democratic conversation around how these systems are already influencing the lives of billions of people in primarily unaccountable ways that live outside of regulation and democratic oversight?

In that sense, this book is trying to de-center tech and starting to ask bigger questions around: What sort of world do we want to live in?

What sort of world do you want to live in? What kind of future do you dream of?

I want to see the groups that have been doing the really hard work of addressing questions like climate justice and labor rights draw together, and realize that these previously quite separate fronts for social change and racial justice have really shared concerns and a shared ground on which to coordinate and to organize.

Because we’re looking at a really short time horizon here. We’re dealing with a planet that’s already under severe strain. We’re looking at a profound concentration of power into extraordinarily few hands. You’d really have to go back to the early days of the railways to see another industry that is so concentrated, and now you could even say that tech has overtaken that.

So we have to contend with ways in which we can pluralize our societies and have greater forms of democratic accountability. And that is a collective-action problem. It’s not an individual-choice problem. It’s not like we choose the more ethical tech brand off the shelf. It’s that we have to find ways to work together on these planetary-scale challenges.

NASA’s Perseverance rover has produced pure oxygen on Mars

NASA’s Perseverance rover has successfully generated breathable oxygen on Mars. The demonstration, carried out by the rover’s MOXIE instrument on April 20, could lay the groundwork for helping future astronauts establish a sustainable colony on the planet.

What’s MOXIE and how does it work? Short for Mars Oxygen In-Situ Resource Utilization Experiment, it’s a toaster-size device that can convert carbon dioxide into oxygen through a process called solid oxide electrolysis. Martian air is drawn into the device through a filter, and a mechanical pump compresses it to Earth-like pressure, forwarding the carbon dioxide to the electrolysis system. The carbon dioxide is heated to about 800 °C (MOXIE itself can withstand this temperature thanks to its heat-resistant nickel alloy parts and a lightweight aerogel that traps heat inside). The gas separates into oxygen ions and carbon monoxide, and MOXIE channels the oxygen ions into a separate chamber, where they recombine into breathable oxygen. The carbon monoxide is released, and we’re left with oxygen.

How’d the test go? It generated about 5.4 grams of oxygen in one hour—which is roughly 10 minutes of breathable air for a human being. Preliminary findings suggest the resulting gas is nearly 100% pure oxygen. MOXIE was designed to produce about 12 grams of oxygen every hour (roughly the same as what a large tree on Earth produces).

What’s the big deal? Future astronauts will need oxygen to breathe and live, but oxygen is also a critical rocket fuel component. A single rocket launch off the surface of Mars carrying four astronauts might require about 25 metric tons of oxygen. The Martian atmosphere is 95-96% carbon dioxide, so there’s a plentiful potential source for this oxygen—we just need the proper technology to generate it. MOXIE is far from capable of fulfilling those needs, but it will lay the groundwork for larger conversion instruments. 
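The figures above are easy to sanity-check. Here is a minimal back-of-envelope sketch; it assumes a resting human consumes roughly 0.84 kg of oxygen per day, a commonly cited life-support planning figure that is not stated in the article:

```python
# Back-of-envelope check of MOXIE's numbers from the article.

MOXIE_TEST_OUTPUT_G = 5.4        # grams of O2 produced in the first one-hour test
MOXIE_DESIGN_RATE_G_PER_HR = 12  # design target, grams of O2 per hour
LAUNCH_O2_TONNES = 25            # rough O2 needed to launch four astronauts off Mars

# Assumption: a resting human consumes about 0.84 kg (840 g) of O2 per day.
HUMAN_O2_G_PER_MIN = 840 / (24 * 60)

breathing_minutes = MOXIE_TEST_OUTPUT_G / HUMAN_O2_G_PER_MIN
print(f"5.4 g of O2 ≈ {breathing_minutes:.0f} minutes of breathing")
# ≈ 9 minutes, in line with the article's "roughly 10 minutes"

hours_for_launch = LAUNCH_O2_TONNES * 1_000_000 / MOXIE_DESIGN_RATE_G_PER_HR
print(f"25 t of O2 at design rate ≈ {hours_for_launch / 8760:.0f} years of running")
# ≈ 238 years — which is why the article says MOXIE is far from
# capable of fulfilling those needs itself
```

The second calculation makes the scale gap concrete: a device two million times more productive than MOXIE, or years of continuous operation by a much larger plant, would be needed before launch-quantity oxygen is practical.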

What’s next? There will be at least nine more tests over the next two years. The first round of tests, which MOXIE is currently running, is meant to validate that the device really works. The second phase will run the process in different kinds of atmospheric conditions and during different Martian times of day and seasons. And the third will attempt to push MOXIE to its limits.

Perseverance, meanwhile, is continuing to do exciting work. The Ingenuity helicopter had its second flight on Thursday and is set to fly at least three more times. The rover will then head off to begin its search for signs of ancient life and collect samples to store for eventual return to Earth.

The US has pledged to halve its carbon emissions by 2030

The news: The US will pledge at a summit of 40 global leaders today to halve its carbon emissions from 2005 levels by 2030. This far exceeds an Obama-era pledge, made in 2014, to get emissions 26-28% below 2005 levels by 2025. The hope is that the commitment will encourage India, China, and other major emitters to sign up to similar targets before the 2021 United Nations Climate Change Conference, set to be held in Glasgow, UK, in November. “The United States is not waiting, the costs of delay are too great, and our nation is resolved to act now,” the White House said in a statement.

The big picture: The world has already warmed by 1.2 °C since preindustrial times, and it’s edging ever closer to the 1.5 °C threshold that the Paris agreement aimed to avoid. Climate scientists have been warning for years that a significant amount of climate damage is already baked in thanks to past emissions, but there is still a short window to avoid catastrophic global warming.

Is Biden’s pledge feasible? For now, there’s no specific road map to reaching this new target, though the White House is expected to release sector-by-sector recommendations later this year. To meet it, the US will have to radically overhaul its economy and drastically cut its use of oil, gas, and coal. Specifically, President Biden will need to push through an ambitious $2.3 trillion package to tackle emissions in high-polluting sectors, such as cars and power plants, and to accelerate innovation in clean energy and climate technology.

The reactions: Michael E. Mann, director of the Earth System Science Center at Pennsylvania State University, said: “This is the boldest action we’ve seen on climate by any US president, more so than even Obama. Of course, the times now call for much bolder action, especially after four years of lost opportunity under Trump. These commitments will help get us on the path toward limiting warming below catastrophic levels (1.5 °C or more), but there is a limit to what the executive branch alone can do. We also need climate legislation, and Biden will have to use every diplomatic and procedural tool available to get a climate bill or set of bills past a divided Senate.”

Nat Keohane, head of the Environmental Defense Fund, an influential US NGO, tweeted that the new US target “meets the moment and the urgency that the climate crisis demands. It aligns with the science, pushes global ambition & accelerates the shift to a stronger, clean economy.” 

“After years of US federal inaction to address its role in the climate crisis, today the Biden administration has presented all of us with significant reason for hope,” says Rachel Cleetus, policy director and lead economist in the Climate and Energy Program at the Union of Concerned Scientists.

However, some environmental groups say that the target still is not ambitious enough. Evan Weber, cofounder of the youth-led Sunrise Movement, said: “If the US does not achieve much, much more by the end of this decade, it will be a death sentence for our generation and the billions of people at the front lines of the climate crisis.”

What are other countries doing? Earlier this week, the UK pledged to reduce emissions 78% from 1990 levels by 2035. The EU has also promised to cut its emissions 55% from 1990 levels by 2030, while Japan announced today that it will cut its emissions 46% from 2013 levels by 2030.

Asian-Americans are using Instagram to help protect their communities

One February afternoon, a 50-year-old Asian woman was waiting in line at a bakery in Queens, New York, when a man threw a box of spoons at her and then shoved her so violently she required 10 stitches in her head. In a surveillance video, a crowd watches as the man attacks the woman, doing nothing as he hits her and then walks away. 

“When I saw that, I thought, ‘That could be my mother. That could be my grandmother. That could be someone I knew,’” says Teresa Ting, a resident of Flushing, the neighborhood in which the attack occurred. “It hit close to home.”

The assault in Queens was one of a series of attacks on vulnerable or elderly Asian-Americans that have been captured in viral videos over the past few months. The March mass shooting of eight people in Atlanta, six of whom were Asian-Americans, was a breaking point.

In her shock, Ting turned to Instagram Stories, the app’s ephemeral collections of videos or photos. She suggested that a bunch of neighborhood activists meet on Flushing’s Main Street in groups of four to keep an eye out for disturbances or violence. Within days she’d gathered a group of 100 volunteers trained in peaceful bystander intervention, which offers strategies for de-escalating violent situations, to patrol Main Street in groups for three hours each Saturday and Sunday and watch out for possible hate crimes.

“I started with an Instagram story and sharing my frustration on how I wanted to provide an extra set of eyes and ears, and here we are,” Ting says. 

Traditionally, Instagram activism has fallen into one of two categories: shows of solidarity (a black square post for Black Lives Matter, a black-and-white selfie for feminism) or fundraising (links to donation platforms like GoFundMe, or handles for payment apps like Venmo). But Asian-Americans like Ting, feeling helpless and distrustful of the police, are now using Instagram and other platforms to protect themselves and their communities more directly.

After the shootings in Atlanta, Kenji Jones, a digital marketer, used his Instagram account for a campaign to distribute pepper spray canisters to mostly elderly Asian-Americans in New York. Within three days, Jones had raised $18,000—enough for nearly 3,000 canisters.

Like Jones, Carolyn Kang is distributing a protective device: safety alarms that hook onto keychains and emit an ear-piercing 140-decibel siren when activated. Kang’s activism was born of a terrifying experience: a man lunged toward her on the subway and screamed, “Chinese people are ruining this fucking country!” Kang was uninjured but shaken.

“I felt completely helpless, so I wanted to do something that could tangibly help my community,” she says. “So many of us feel scared walking down the street at night, or taking the subway alone.”

Some Asian-American activists say their move to Instagram campaigns is driven by a distrust of authorities and a lack of confidence that they will respond effectively to hate crimes. Victims are often (but not always) older immigrants who may not be very familiar with the legal system or are uncomfortable bringing attention to their situation. This has caused hate crimes to be vastly underreported. “We don’t know how the government will help combat racism, so we take it upon ourselves to take care of each other,” says Kang. “And that starts with protection.”

Groups like Ting’s Main Street Patrol have cropped up in Asian neighborhoods across the US. These patrols bring together groups of volunteers who communicate via the walkie-talkie app Zello, which lets users talk to each other in real time without a phone number, and draw on bystander intervention training to defuse potentially violent interactions.

“We’re really paying attention to what people are doing,” says Farrah Zhao, a volunteer with Main Street Patrol. “We don’t want to be reactive … the NYPD [New York Police Department] should be more proactive with how they respond to these hate crimes. The fact that I am on patrol or working on Zello every weekend—it makes me feel sad.”

Early in the pandemic, Carrd and Instagram slideshows were popular as a way to educate the wider public about racial justice and ways to help. Slideshows aren’t going away, but they’re evolving. Esther Lim leads the Instagram account @hatecrimebook, a project distributing pocket-sized booklets in eight Asian languages plus English, advising folks on how to report a hate crime.

“Instagram was my only platform that I shared my booklets on when I first created it last year,” Lim told me via email. The books were often lengthy, so Lim worked with volunteers to distill them into essential information and then used an e-booklet platform, Flipsnack, to turn online PDFs into compact physical pamphlets. Around 34,500 of her booklets have now been distributed across Los Angeles, San Francisco, and New York.

“Obviously sharing digital content is more cost effective, but booklet printing is still so essential for people who do not have access to the internet or don’t know where to find resources like this,” Lim says. “Most of these groups are non-English-speaking elders, so it’s my duty to look out for them.”

While some fundraising projects use tried-and-true approaches (Kang is using GoFundMe, and Jones has his PayPal, Cash App, and Venmo handles displayed prominently on his stories), others are getting creative: @cafemaddycab, an account started by Madeleine Park that raises cab fare for Asian-Americans who might feel unsafe riding the subway, uses WeChat links through Instagram and a publicly available Google poster in Chinese to reach out to non-English speakers.

Though media attention to anti-Asian hate crimes has subsided, activists still see plenty of concerns to address. Last weekend, Jones ran out of pepper spray canisters; he had to turn so many people away that he, Lim, and another Instagram activist decided to join forces to host another event in mid-May where they’ll distribute more pepper spray and booklets, offer health screenings, and organize a voter registration drive. Activists say there’s a long road ahead of the Asian-American community, and the organizing engendered by the emergency needs to continue.

“We are living in fear every day,” says Lim. “If an attack happens to one of us, the whole community is affected.”

This spit test promises to tell couples their risk of passing on common diseases

A new startup called Orchid is offering the chance for couples planning a pregnancy to learn their odds of passing on risks for common conditions like Alzheimer’s, heart disease, type 1 and 2 diabetes, schizophrenia, and certain cancers to their future child.

Existing pre-conception tests, which are widely available, can tell parents whether their children could have certain inherited disorders that are caused by mutations in a single gene. But such single-gene disorders, which include cystic fibrosis, sickle-cell disease, and Tay-Sachs, are relatively rare.

Orchid’s test, by contrast, looks at far more common diseases that are influenced by some combination of multiple genes, often numbering in the hundreds. Couples take the test at home by spitting into a tube and mailing it in. The company sequences the genomes of each parent and uses data sets of people with and without these diseases to calculate their risks. The result is known as a polygenic risk score.
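At its core, a polygenic risk score is just a weighted sum over many genetic variants. The sketch below is illustrative only: the variant IDs and weights are invented, and Orchid’s actual models are not public.

```python
# Toy polygenic risk score: a weighted sum of risk-allele counts.
# The variant IDs and effect weights are invented for illustration;
# real scores use hundreds to millions of variants, with weights
# estimated from large case/control studies.

effect_weights = {
    "rs0001": 0.12,   # hypothetical per-allele weight (log-odds scale)
    "rs0002": -0.05,
    "rs0003": 0.30,
}

def polygenic_score(genotype):
    """genotype maps variant ID -> number of risk alleles carried (0, 1, or 2)."""
    return sum(w * genotype.get(rsid, 0) for rsid, w in effect_weights.items())

person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(polygenic_score(person))   # 0.12*2 - 0.05*1 + 0.30*0 ≈ 0.19
```

The raw score is then compared against the score distribution in a reference population to turn it into a percentile or a relative risk.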

Later this year, the startup also plans to begin offering embryo testing, which will involve extracting a few cells from embryos created by in vitro fertilization, sequencing their DNA, and generating similar risk reports. Couples undergoing IVF can already get their embryos tested for chromosomal abnormalities and single-gene disorders, but the Orchid test would greatly expand the list.

The company launched this month with backing from Stanford scientists and $4.5 million in funding from investors, including from Anne Wojcicki, the CEO of 23andMe, which supplies consumer genetic tests.

“Having children is the most consequential choice most of us make, yet parents go into pregnancy with zero visibility into how genetic risks could impact their future child,” Noor Siddiqui, Orchid’s founder and CEO, said in a press release.

Siddiqui, a Stanford graduate and computer scientist, sees it working this way: A couple learns that they’re at high risk of having a baby with diabetes. They can then use that information to mitigate their child’s risk. That could mean adopting a low-sugar diet or getting regular health screenings. Another option would be to pursue IVF and use Orchid’s test to select the embryo with the lowest risk of diabetes.

The testing could be an attractive prospect, especially for couples with a family history of diabetes, schizophrenia, or one of the other conditions that Orchid looks for. But the genetics behind many of these conditions are complex and still poorly understood.

For that reason, many experts think polygenic risk scores aren’t yet ready for prime time, and they worry that companies like Orchid may be overselling the technology to anxious couples.

Beyond single genes

The introduction of polygenic consumer tests seems all but inevitable: risk scores for schizophrenia and other complex conditions are getting better thanks to an explosion in the amount of DNA data available. Using vast genetic databases covering hundreds of thousands of people, researchers are developing algorithms to estimate a person’s risk for diabetes, depression, obesity, and certain cancers.

Another startup, Genomic Prediction, began offering polygenic risk reports in 2019, testing embryos for couples undergoing IVF. The company provides risk reports for some of the same multi-gene conditions as Orchid.

Amit V. Khera, a cardiologist at Massachusetts General Hospital and the Broad Institute who’s developed polygenic risk scores for heart disease and other conditions, says these scores could help adults mitigate their own risks by doing things like changing their diet or exercising more. But he doesn’t think the scores are ready to be deployed for preconception and embryo screening without further consideration.

For one thing, Khera says, there’s only so much risk you can eliminate when choosing among embryos that come from the same parents.

“For any two parents, the difference in risk between embryos is not going to be that big,” he says. “If my score is 0 and my wife’s score is 1, on average my kid’s score is going to be around 0.5. You might be able to find a 0.4 or 0.6 embryo, but they’re not going to be that different.”
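Khera’s point can be illustrated with a toy simulation (all variant weights here are invented; this models no real trait): each embryo inherits a random allele from each parent at every site, so embryo scores scatter only modestly around the parents’ midpoint.

```python
import random

# Toy simulation of Khera's point: embryos from the same two parents
# draw one random allele from each parent at every variant site, so
# their scores cluster around the parents' midpoint score.
random.seed(0)
N = 1000
weights = [random.gauss(0, 0.03) for _ in range(N)]  # hypothetical effect sizes

def random_parent():
    # two alleles (0 = non-risk, 1 = risk) at each of N sites
    return [(random.randint(0, 1), random.randint(0, 1)) for _ in range(N)]

def score(genome):
    return sum(w * (a + b) for w, (a, b) in zip(weights, genome))

def embryo(mom, dad):
    # one randomly transmitted allele from each parent per site
    return [(random.choice(m), random.choice(d)) for m, d in zip(mom, dad)]

mom, dad = random_parent(), random_parent()
midparent = (score(mom) + score(dad)) / 2
embryo_scores = sorted(score(embryo(mom, dad)) for _ in range(10))
print(f"midparent score: {midparent:.2f}")
print(f"ten embryos span: {embryo_scores[0]:.2f} to {embryo_scores[-1]:.2f}")
```

Run repeatedly, the embryos’ spread stays a fraction of the score differences seen between unrelated people—the ceiling on embryo selection that Khera describes.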

Plus, Khera says, there are lots of genetic variants that researchers just don’t understand yet. His group has found variants that seem to protect against heart attacks but increase the risk of diabetes. In other words, there are genetic trade-offs.

Orchid’s test could also lure parents into a false sense of security that their future children won’t develop a particular disease. For instance, Patrick Sullivan, director of the Center for Psychiatric Genomics at the University of North Carolina, Chapel Hill, says that while genetics play a role in schizophrenia, the disease is often not inherited.

“The highest risk factors we get from schizophrenia are generally de novo variants, meaning neither parent has them,” he says. “This is a mutation that develops in the making of the child. It’s a random event.” These de novo mutations wouldn’t show up on a couple’s risk report generated by Orchid. They would on an embryo report, but that would require couples undergoing IVF and embryo screening.

Another limitation of current polygenic risk scores is that the data sets they rely on include mostly people of European ancestry. Historically, genetic studies haven’t included people of diverse backgrounds.

“You’re going to lose accuracy when you take those scores and try to use them on other groups,” says Genevieve Wojcik, a genetic epidemiologist at Johns Hopkins.

Picking your best embryo

As polygenic scores get more accurate, embryo selection may offer a chance to reduce the prevalence of certain common diseases. But there is a more controversial prospect.

The same techniques geneticists use to predict these diseases can also be used to predict characteristics like intelligence or weight in adulthood. For now, Orchid is focused on providing disease risk reports to parents, but Genomic Prediction of New Jersey already screens embryos for “intellectual disability.”

Gabriel Lázaro-Muñoz, a bioethicist and lawyer at Baylor College of Medicine who has studied polygenic risk scores, says the ability to screen and select embryos for a wide range of traits veers into eugenics territory. “We have to have a serious conversation about how to use this technology in our society,” he says.

Negative attitudes about mental illness are already pervasive, and polygenic risk tests could further stigmatize these conditions. The idea that it’s possible to choose whether or not to reduce a future child’s risk of such conditions puts a lot of pressure on parents, he says. Beyond the issue of mental illness, should parents be able to choose their “smartest” embryo?

And even if couples wanted to pursue polygenic screening, the expense could be prohibitive. Orchid hasn’t publicly released the cost of its tests, but one source told MIT Technology Review that it charges $1,100 for its couple report. (Orchid did not respond to multiple interview attempts.) While the company is offering a financial assistance program to couples who can’t afford it, there’s still the price of IVF to consider. One IVF cycle costs $12,000 to $17,000, and getting pregnant often takes several cycles.

“This is reproduction for rich people,” says Laura Hercher, a genetic counselor and director of research in human genetics at Sarah Lawrence College. “What they seem to be saying is, everyone who can afford it should do IVF.”

Indeed, in a podcast interview Siddiqui suggested that more couples should use IVF to choose their healthiest embryos.

Hercher and others wonder whether that’s the best use of polygenic risk scores. “Are we comfortable with saying ‘Let the market decide what things we want to test embryos for’?” Hercher asks. “Or is it time to step in and say ‘Are all uses of this justified?’”

Saving a life

The market for this technology is driven by demand from parents, and for some, knowing the genetic risks their child faces could be a godsend.

Laura Pogliano says having a test like Orchid’s could have helped her better support her son Zac, who was initially diagnosed with obsessive-compulsive disorder as a teenager in 2009. As his symptoms grew worse, doctors eventually determined that he had schizophrenia. Zac died from heart failure in 2015, at age 23. (An estimated 50% of sudden deaths in schizophrenia are from cardiovascular causes.)

Pogliano says if she had known about her son’s risk before he was born, she would have been able to look out for early signs and get him treatment sooner. Schizophrenia symptoms—hallucinations, delusions, confused speech, and disorganized thinking—generally start to appear in a person’s 20s, but changes in the brain can begin several years earlier.

She says Zac’s illness blindsided her family: “With schizophrenia, you think you have a healthy child, but you never actually did. The brain has been prepping for this disease for years.”

Pogliano says she would have raised her son differently if she’d known he was at high risk. She would have been more vigilant about his use of alcohol and marijuana, which can alter the nervous system and trigger psychosis in people with schizophrenia.

She hopes screening for schizophrenia will be routine someday. It’s different from guessing the risk of conditions like heart disease, breast cancer, or Alzheimer’s, she says: those diseases emerge much later in life, but parents have an opportunity to make a real difference in their kids’ lives if they know their schizophrenia risk.

“Designer babies isn’t the point,” she says. “All parents want is a path to health for their children.”

This has just become a big week for AI regulation

It’s a bumper week for government pushback on the misuse of artificial intelligence.

Today the EU released its long-awaited set of AI regulations, an early draft of which leaked last week. The regulations are wide-ranging, with restrictions on mass surveillance and on the use of AI to manipulate people.

But a statement of intent from the US Federal Trade Commission, outlined in a short blog post by staff lawyer Elisa Jillson on April 19, may have more teeth in the immediate future. According to the post, the FTC plans to go after companies using and selling biased algorithms.

A number of companies will be running scared right now, says Ryan Calo, a professor at the University of Washington, who works on technology and law. “It’s not really just this one blog post,” he says. “This one blog post is a very stark example of what looks to be a sea change.”

The EU is known for its hard line against Big Tech, but the FTC has taken a softer approach, at least in recent years. The agency is meant to police unfair and dishonest trade practices. Its remit is narrow—it does not have jurisdiction over government agencies, banks, or nonprofits. But it can step in when companies misrepresent the capabilities of a product they are selling, which means firms that claim their facial recognition systems, predictive policing algorithms, or healthcare tools are not biased may now be in the line of fire. “Where they do have power, they have enormous power,” says Calo.

Taking action

The FTC has not always been willing to wield that power. Following criticism in the 1980s and ’90s that it was being too aggressive, it backed off and picked fewer fights, especially against technology companies. This looks to be changing.

In the blog post, the FTC warns vendors that claims about AI must be “truthful, non-deceptive, and backed up by evidence.”

“For example, let’s say an AI developer tells clients that its product will provide ‘100% unbiased hiring decisions,’ but the algorithm was built with data that lacked racial or gender diversity. The result may be deception, discrimination—and an FTC law enforcement action.”

The FTC action has bipartisan support in the Senate, where commissioners were asked yesterday what more they could be doing and what they needed to do it. “There’s wind behind the sails,” says Calo.

Meanwhile, though they draw a clear line in the sand, the EU’s AI regulations are guidelines only. As with the GDPR rules introduced in 2018, it will be up to individual EU member states to decide how to implement them. Some of the language is also vague and open to interpretation. Take one provision against “subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour” in a way that could cause psychological harm. Does that apply to social media news feeds and targeted advertising? “We can expect many lobbyists to attempt to explicitly exclude advertising or recommender systems,” says Michael Veale, a faculty member at University College London who studies law and technology.

It will take years of legal challenges in the courts to thrash out the details and definitions. “That will only be after an extremely long process of investigation, complaint, fine, appeal, counter-appeal, and referral to the European Court of Justice,” says Veale. “At which point the cycle will start again.” But the FTC, despite its narrow remit, has the autonomy to act now.

One big limitation common to both the FTC and European Commission is the inability to rein in governments’ use of harmful AI tech. The EU’s regulations include carve-outs for state use of surveillance, for example. And the FTC is only authorized to go after companies. It could intervene by stopping private vendors from selling biased software to law enforcement agencies. But implementing this will be hard, given the secrecy around such sales and the lack of rules about what government agencies have to declare when procuring technology.

Yet this week’s announcements reflect an enormous worldwide shift toward serious regulation of AI, a technology that has been developed and deployed with little oversight so far. Ethics watchdogs have been calling for restrictions on unfair and harmful AI practices for years.

The EU sees its regulations bringing AI under existing protections for human liberties. “Artificial intelligence must serve people, and therefore artificial intelligence must always comply with people’s rights,” said Ursula von der Leyen, president of the European Commission, in a speech ahead of the release.

Regulation will also help AI with its image problem. As von der Leyen also said: “We want to encourage our citizens to feel confident to use it.” 

How a tiny media company is helping people get vaccinated

More than 132 million people in the US have received at least one dose of a covid-19 vaccine, and as of this week, all Americans over 16 are eligible.

But while the US has vaccinated more people than any other country in the world, vulnerable people are still falling through the cracks. Those most affected include people who don’t speak English, people who aren’t internet-savvy, and shift workers who don’t have the time or computer access to book their own slots. In many places, community leaders, volunteers, and even news outlets have stepped in to help.

One of those groups is Epicenter-NYC, a media company that was founded during the pandemic to help neighbors navigate covid-19. Based in the Queens neighborhood of Jackson Heights, which was particularly hard hit by the virus, the organization publishes a newsletter on education, business, and other local news. 

S. Mitra Kalita, publisher of Epicenter-NYC

But Epicenter-NYC has gone further and actually booked more than 4,600 vaccine appointments for people in New York and beyond. People who want to get vaccinated can contact the organization—either through an intake form, a hotline, a text, or an email—for help setting up an appointment.

Throughout the vaccine rollout, the group has also been documenting and sharing what it has learned about the process with a large audience of newsletter readers. 

We spoke with S. Mitra Kalita, the publisher of Epicenter-NYC, who was previously a senior vice president at CNN Digital and is also the cofounder and CEO of URL Media, a network for news outlets covering communities of color. 

This interview has been condensed and edited for clarity.

Q: How did you start setting people up with vaccine appointments? 

A: It began with two areas of outreach. First, when I had to register my own parents for a vaccine and found the process to be pretty confusing, I immediately wondered how well elderly residents, their friends and neighbors, manage this process. I just started messaging them.

The second was when a restaurant [from our small business spotlight program] reached out and said, “Do you guys know how to get vaccines for our restaurant workers?” Because I had been navigating some of this for the elderly, I started to help the restaurant workers. There started to be a similar network effect. One of the workers at this restaurant has a boyfriend who is a taxi driver; when I helped her, she asked if I could help her boyfriend; then the boyfriend texted me with some of his friends; and it kept spreading in that way. 

Q: How is Epicenter-NYC filling gaps in vaccine distribution right now? What is your process like, and who are you helping?

A: We’ve had between 200 and 250 people reach out to volunteer. The outreach efforts range from putting up fliers, doing translations, and calling people to literally booking the appointments. 

I don’t care if you’re a Bangladeshi taxi driver in Queens and your cousin is in New Jersey. We’re going to help both of you. A woman on the Upper East Side who’s 102 years old who is homebound and needs a visit is absolutely going to get Epicenter’s help. 

What we’re doing now is continuing the route of connecting people to each other and opportunities. There’s a lot of matchmaking going on. We can sort through a list of about 7,500 to 8,000 people who said they need help, and then find places in proximity. We’ve become this wonderful marriage—a centralized operation that also embraces decentralized solutions.

Q: We know that vaccination rates lag in many communities that were hit the hardest. Why is that? What issues and barriers are people experiencing? 

A: Just before the latest Johnson & Johnson pause announcement, I said, “We’re at a point where everybody remaining is a special case.”

I think we’ve leapfrogged to vaccine hesitancy without solving for vaccine access. We don’t see a lot of hesitancy, but we do see a lot of concerns over some issues. Number one would be scheduling. We’re dealing with populations that are working two, maybe three jobs, and when they say “I have this window on Sunday at 3 p.m. until maybe 6 p.m., when my next shift starts,” they really mean that’s the only window.

Q: People have been asked to prove who they are, where they work, and where they live in order to qualify for a vaccine. This was especially true when eligibility was more limited. How did you help people face barriers around getting the documents they needed? 

A: New York State has been explicit in saying you can still get a vaccine even if you are undocumented. But that messaging doesn’t really match the on-the-ground reality. 

For decades New York has had a restaurant industry built and thriving on the backs of undocumented labor. Getting a letter from an employer or showing a pay stub to prove employment isn’t always possible for undocumented workers. So we created public resources for documentation, including a sample letter you can show your employer and have them sign.

Q: Are there other challenges?

A: Proof of residency in New York City. The homeless population through the pandemic has not only exploded, but it’s been redefined. We hear from people who are moving couch to couch or are crashing with friends or with a cousin. We had someone who was showering in a gym, and the gym offered to write the letter on their behalf. 

Inevitably, the question I get is “Is this the role of a journalism organization?” The essence of what we are describing is [a method] for these people to prove that they are human. In some ways, there is no greater purpose of our journalism.

Q: You recently wrote about the need to adjust vaccination schedules as Ramadan approaches, because Muslim New Yorkers had some concerns around getting their shot during the holiday. Do you think governments are approaching the vaccine rollout with this level of granularity and consideration?

A: This is a question of: Do governments see people? Do they see communities? We love living in New York because it’s a global city. There is an awareness of other cultures and other situations. 

It’s one thing to know that Ramadan exists. It’s another for you to say “I need to accommodate this population because it’s the difference between life and death for my mother or my aunt.” 

Our system has allowed Epicenter to spot trends very early. Long before the massacre in Atlanta, our Chinese-language team was flagging to me that Asian seniors were very afraid and they didn’t want to go without another person, for example. And they wanted to go somewhere where there would be translation. 

When you can make government delivery of a service on the terms of not just government to governed, but actually human to human with something in common, it’s just so much greater. 

Q: What are the lessons that can be carried on beyond the pandemic? 

A: Maybe never again will we have this opportunity to interface with the public as we are right now with vaccines. How does that change the delivery of other services?

Some of our volunteers have asked would we like to do a summer tutoring program, because children might be ill-equipped to start school in September. Do we need to share cover letters to apply for jobs, or catalogue the tips and tricks that many of us take for granted? How do you take this moment, learn, and then react accordingly?

I will most definitely continue Epicenter, as long as there’s readers, community, and it’s sustainable. 

This story is part of the Pandemic Technology Project, supported by the Rockefeller Foundation.

NASA has just flown a helicopter on Mars for the first time

The news: NASA has flown an aircraft on another planet for the first time. On Monday, April 19, Ingenuity, a 1.8-kilogram drone helicopter, took off from the surface of Mars, flew up about three meters, then swiveled and hovered for 40 seconds. The historic moment was livestreamed on YouTube, and Ingenuity captured the photo above with one of its two cameras. “We can now say that human beings have flown a rotorcraft on another planet,” said MiMi Aung, the Ingenuity Mars Helicopter project manager at NASA’s Jet Propulsion Laboratory, at a press conference. “We, together, flew at Mars, and we, together, now have our Wright brothers moment,” she added, referring to the first powered airplane flight on Earth in 1903.

In fact, Ingenuity carries a tribute to that famous flight: a postage-stamp-size piece of material from the Wright brothers’ plane tucked beneath its solar panel. (The Apollo 11 crew also carried a splinter of wood from that plane, known as the Wright Flyer, to the moon in 1969.)

The details: The flight was a significant technical challenge, thanks to Mars’s bone-chilling temperatures (nights can drop down to -130 °F/-90 °C) and its incredibly thin atmosphere—just 1% the density of Earth’s. That meant Ingenuity had to be light, with rotor blades that were bigger and faster than would be needed to achieve liftoff on Earth (although the gravity on Mars, which is only about one-third of Earth’s, worked in its favor). The flight had originally been scheduled to take place on April 11 but was delayed by software issues. 

Why it’s significant: Beyond being a major milestone for Mars exploration, the flight will also pave the way for engineers to think about new ways to explore other planets. Future drone helicopters could help rovers or even astronauts by scoping out locations, exploring inaccessible areas, and capturing images. Ingenuity will also help inform the design of Dragonfly, a car-size drone that NASA is planning to send to Saturn’s moon Titan in 2027. 

What’s next: In the next few weeks, Ingenuity will conduct four more flights, each lasting up to 90 seconds. Each one is designed to further push the limits of Ingenuity’s capabilities. Ingenuity is only designed to last for 30 Martian days, and is expected to stop functioning around May 4. Its final resting place will be in the Jezero Crater as NASA moves on to the main focus of its mission: getting the Perseverance rover to study Mars for evidence of life.

Inside the rise of police department real-time crime centers

At a conference in New Orleans in 2007, Jon Greiner, then the chief of police in Ogden, Utah, heard a presentation by the New York City Police Department about a sophisticated new data hub called a “real time crime center.” Reams of information rendered in red and green splotches, dotted lines, and tiny yellow icons appeared as overlays on an interactive map of New York City: Murders. Shootings. Road closures. You could see the routes of planes landing at LaGuardia and the schedules of container ships arriving at the mouth of the Hudson River. 

In the early 1990s, the NYPD had pioneered a system called CompStat, since widely adopted by large police departments around the country, that aimed to discern patterns in crime data. With the real time crime center, the idea was to go a step further: What if dispatchers could use the department’s vast trove of data to inform the police response to incidents as they occurred?

Back in Ogden, population 82,702, the main problem on Greiner’s mind was a stubbornly high rate of vehicle burglaries. As it was, the department’s lone crime analyst was left to look for patterns by plotting addresses on paper maps, or by manually calculating the average time between similar crimes in a given area. The city had recently purchased license-plate readers with money from a federal grant, but it had no way to integrate the resulting archive of images with the rest of the department’s investigations. It was obvious that much more could be made of the data on hand.

“I’m not New York City,” Greiner thought, “but I could scale this down with the right software.” Greiner called a former colleague who’d gone to work for Esri, a large mapping company, and asked what kinds of disparate information he might put on a map. The answer, it turned out, was anything you could put in a spreadsheet: the address history of people on parole—sorting for those with past drug, burglary, or weapons convictions—or the respective locations of car thefts and car recoveries, to see if joyrides tended to end near the joyrider’s home. You could watch police cars and fire trucks move around the city, or plot cell-phone records over time to look back at a suspect’s whereabouts during the hours before and after a crime. 

Eric Young, a 28-year veteran of the department, became Ogden’s chief of police in January.

In 2021, it might be simpler to ask what can’t be mapped. Just as Google and social media have enabled each of us to reach into the figurative diaries and desk drawers of anyone we might be curious about, law enforcement agencies today have access to powerful new engines of data processing and association. Ogden is hardly the tip of the spear: police agencies in major cities are already using facial recognition to identify suspects—sometimes falsely—and deploying predictive policing to define patrol routes. 

“That’s not happening here,” Ogden’s current police chief, Eric Young, told me. “We don’t have any kind of machine intelligence.” 

The city council rebuffed Greiner’s first funding request for a real time crime center, in 2007. But the mayor gave his blessing to pursue the project within the existing police budget. Greiner approached Esri and flew down to the company’s headquarters in Redlands, California. He “started up a little friendship” with Esri’s billionaire cofounder, Jack Dangermond, and spoke at the company’s convention, floating a plan to fly a 30-foot camera-equipped blimp over Ogden to monitor emergencies as they developed. (“I got beat up by Jay Leno for that,” Greiner said. The blimp never launched.) Since Ogden already had a subscription to Esri’s flagship product, ArcGIS, which it used for planning and public works, the company offered to build a free test site for a real time crime center (RTCC).

Around the country, the expansion of police technology has followed a similar pattern, driven more by conversations between police agencies and their vendors than between police and the public they serve. The Electronic Frontier Foundation, an advocacy group that tracks the spread of surveillance technology among local law enforcement agencies, currently counts 85 RTCCs in cities as small as Westwego, Louisiana, whose population has yet to crack 10,000. I traveled to Ogden to find answers to a question Greiner phrased this way: “What are we gonna do with this new tool that gets really close to your constitutional rights?” And as federal and state laws take their time to catch up to the wares on offer at conventions like Esri’s, who gets to decide how close is too close?

Ogden grew up in the late 19th century as the junction nearest the spot where the two halves of the transcontinental railroad were finally stitched together in 1869. Marketed at the time as the “crossroads of the West,” it sits at the seam between two of the region’s defining natural features. On one side, the Wasatch Mountains form the westernmost edge of the Rockies; on the other, the Great Basin extends outward from the shores of the Great Salt Lake. Ogden’s mayor, Mike Caldwell, likes to say the railroad made Ogden “rich at the right time.” But the railroad also brought an unsavory reputation it is still trying to overcome. Local legend has it that Al Capone stepped off a train in the 1920s, did a lap around 25th Street, and declared Ogden too wild a town for him to stay. By the time Jon Greiner took over as police chief in 1995, the main challenges on 25th Street were panhandling and public drunkenness. Still, the city’s leadership sees the real time crime center as a linchpin of efforts to revitalize its downtown.

What’s much harder to evaluate is how the use of surveillance tools affects the relationship between officers and the residents they encounter in their daily rounds.

The RTCC occupies a dim triangular office on the second floor of the city’s public safety building. Much of the light comes from twin monitors on each of six desks that wind their way along the wall, augmented by two rows of wall-mounted displays overhead. There’s a cell-phone extraction machine in the back corner, and several drones stacked in hard cases. 

A team of seven analysts works in staggered shifts, monitoring police-radio traffic and working “requests for information” from detectives and patrol officers. Their supervisor, David Weloth, is a laid-back former detective with a neatly trimmed beard and a silver crew cut. Weloth retired from the Ogden City Police Department (OPD) in 2005, but he came back less than a year later to work as a crime analyst and has stayed on ever since.

When I arrived for a visit in February, OPD detective Heather West was scrolling through a queue of hundreds of photos captured by a new license-plate-reading system called Flock Safety, looking for a distinctive pickup truck—gray with a red camper shell—thought to have been used in a theft. The previous week, Weloth explained, Flock had helped the department recover five stolen vehicles in three days. Since they got it in December 2020, they’d queried the system more than 800 times. On searches without a plate number, though, looking for a particular kind or color of car, the algorithm had a tendency to veer off course. “For some reason, it likes red Mazda 3s,” West said, still looking at her screen.

Weloth introduced the team as Fox News played silently on a TV in the corner. West holds one of two OPD detective positions on the team, which also includes a sheriff’s deputy from surrounding Weber County and four civilian analysts with backgrounds in federal law enforcement. A former US Treasury officer was going through a statewide register of pawned goods, looking for matches with property reported stolen in Ogden.

Weloth had one of the analysts cue up a video from a recent homicide investigation, in which cell-phone records obtained by subpoena helped disprove key parts of a suspect’s story about his whereabouts on the night his girlfriend was murdered. Footage from a city-owned surveillance camera at Ogden’s water treatment plant allowed Weloth’s team to “put him where the phone said he was,” tightening the case for the prosecution. 

This was one of a few greatest hits that came up repeatedly in discussions about how Ogden uses the technology in its real time crime center. In another, in 2018, analysts tapped into a network of city-owned cameras to locate a kidnapping suspect after the woman he’d held managed to flag down an officer and provide a physical description. When officers arrived on scene, the man shot at them; police returned fire and killed him.

If there’s any good reason to deploy invasive technology, surely solving a murder and stopping a violent crime both qualify. What’s much harder to evaluate is how the use of surveillance tools affects the relationship between officers and the residents they encounter in their daily rounds, or how they change the collective understanding of the purpose of policing.

Dave Weloth, a retired police detective, directs the Ogden Police Area Tactical Analysis Center (formerly known as the Real Time Crime Center).

Take car theft. Recovering stolen cars has been an early success of the city’s network of license-plate readers. As Greiner recalled, thefts increase in the winter, “because people warm up their cars in the driveway, then go back inside and leave their keys in the ignition.” Today, Weloth told me, “running and unattendeds” still account for about a third of car thefts in the city. This includes an incident last November when a young mother left her 10-month-old in the back seat of her running car, which was stolen. Both the mayor and the chief of police told me the license-plate reader had been instrumental in finding the kid within two hours. But they didn’t mention that two women had found the baby crying on a front porch some miles away—and that the automatic reader had only helped them recover the car.

The police department maintains a web page advising residents on “10 Ways to reduce your vehicle from being stolen” and periodically sends community policing officers out to relay the message. Would a more robust public education program be a better way to reduce car theft than an intrusive citywide license-plate surveillance system? That’s not a question anyone at OPD appears to be asking.

When the RTCC launched, Weloth explained, his goal was to “close the gap between raw data and something that’s actionable.” To do that, he first had to figure out “What have we already paid for?” More than 100 city-owned surveillance cameras, installed by Ogden’s public works department after 9/11, were trained on sites like the parking lot of the fleet and facilities building, or the door to the city’s computer server room. In some places, the cameras could be controlled remotely. Analysts could review footage and pan, tilt, or zoom those cameras in accordance with requests from dispatch or officers in the field. 

This is what had allowed Joshua Terry, who does much of the real time crime center’s mapping work, to follow along during the 2018 kidnapping call, zeroing in on a dark figure on the sidewalk in a Dallas Cowboys jacket seconds before he darted out of view. “That’s the reason we have it on,” Terry told me, playing back the footage of the incident on one of the big screens. The goal is not, he says, to constantly surveil everyone but to use what tools the analysts can to aid active investigations. “We couldn’t care less what people are doing,” he says, even though “people think we sit here watching these cameras.” 

“I’d be bored to death,” a colleague said with a chuckle. 

Besides, Weloth pointed out, the system had accountability: “I can tell exactly who moved what camera, where, when.” 

When the state chapter of the American Civil Liberties Union called a city council member with concerns about the possible use of facial recognition, Weloth explained, he offered a tour of the RTCC. “We’re very cautious about stuff that’s not supported by law,” he said. “One mistake and we’re gonna pay the price.” 

The challenge is that for much of police surveillance technology, the most relevant law is the Fourth Amendment prohibition on “unreasonable searches” of people’s “persons, houses, papers, and effects.” The court system has yet to figure out how this applies to modern surveillance systems. As Justice Sonia Sotomayor wrote in a 2012 Supreme Court opinion, “Awareness that the Government may be watching chills associational and expressive freedoms. And the Government’s unrestrained power to assemble data that reveal private aspects of identity is susceptible to abuse.” 

Utah is one of 16 states with statutes that explicitly address automated license-plate readers; the OPD’s policy calls for two supervisors to sign off before querying a plate number against the database, and plate information can’t be stored for longer than nine months; it’s usually deleted within 30 days. Still, there’s no federal or state law that specifically regulates government use of surveillance cameras, and none of the department’s audits are published.

Sotomayor’s 2012 opinion was nonbinding (but widely cited), and it served mostly to point out that important issues haven’t been addressed in law. As Weloth had said when I first called to plan my visit, “We regulate ourselves extremely well.”

One afternoon, I accompanied Heather West, the detective who’d been perusing gray pickups in the license-plate database, and Josh Terry, the analyst who’d spotted the kidnapper with the Cowboys jacket, to fly a drone over a park abutting a city-owned golf course on the edge of town. West was at the controls; Terry followed the drone’s path in the sky and maintained “situational awareness” for the crew; another detective focused on the iPad showing what the drone was seeing, as opposed to where and how it was flying. 

Of all the gadgets under the hood at the real time crime center, drones may well be the most tightly regulated, subject to safety (but not privacy) regulations and review by the Federal Aviation Administration. In Ogden, neighbor to a large Air Force base, these rules are compounded by flight restrictions covering most of the city. The police department had to obtain waivers to get its drones off the ground; it took two years to develop policies and get the necessary approvals to start making flights. 

Joshua Terry, an analyst who does much of the real time crime center’s mapping work, with a drone.

The police department purchased its drones with a mind to managing large public events or complex incidents like hostage situations. But, as Dave Weloth soon found, “the more we use our drones, the more use cases we find.” At the real time crime center, Terry, who has a master’s in geographic information technology, had given me a tour of the city with images gathered on recent drone flights, clicking through to cloud-shaped splotches, assembled from the drone’s composite photographs, that dotted the map of Ogden. 

Above 21st Street and Washington, he zoomed in on the site of a fatal crash caused by a motorcycle running a red light. A bloody sheet covered the driver’s body, legs splayed on the pavement, surrounded by a ring of fire trucks. Within minutes, the drone’s cameras had scanned the scene and created a 3D model accurate to a centimeter, replacing the complex choreography of place markers and fixed cameras on the ground that sometimes leave major intersections closed for hours after a deadly collision.

No one seemed to give much thought to the fact that quietly, people who were homeless had become the sight most frequently captured by the police department’s drone program.

When the region was hit by a powerful windstorm last September, Terry flew a drone over massive piles of downed trees and brush collected by the city. When county officials saw the resulting volumetric analysis—12,938 cubic yards—that would be submitted as part of a claim to the Federal Emergency Management Agency, they asked the police department to perform the same service for two neighboring towns. Ogden drones have also been used to pinpoint hot spots after wildland fires, locate missing persons, and fly “overwatch” for SWAT team raids.

This flight was more routine. When I pulled into the parking lot, two officers from Ogden’s community policing unit looked on as West steered the craft over a dense stand of Gambel oak and then hovered over a triangular log fort on a hillside a couple of hundred yards away. Though they’d never encountered people on drone sweeps through the area, trash and makeshift structures were commonplace. Once the RTCC pinpointed the location of any encampments, the community service officers would go in on foot to get a closer look. “We get a lot of positive feedback from runners, hikers,” one officer explained. After one recent visit to a camp near a pond on 21st Street, he and the county social service workers who accompanied him found housing for two people they’d met there. When clearing camps, police also “try and connect [people] with services they need,” Weloth said. The department recently hired a full-time homeless outreach coordinator to help. “We can’t police ourselves out of this problem,” he said, comparing the department’s efforts to keep new camps from springing up to “pushing water uphill.”

Still, no one seemed to give much thought to the fact that quietly, people who were homeless had become the sight most frequently captured by the police department’s drone program. Of the 137 non-training flights made since May 2019, nearly half—62—were flyovers of homeless encampments, with regular flights over a parkway on the Ogden River and in woods by the railroad, whose owner, Union Pacific, employs its own private security as well. It was easy to see the appeal: if, instead of spending hours clambering through the woods, you could find people in minutes by looking down from on high, why not? 

“We’ve had a lot of homicides come out of those illegal encampments,” Ogden’s mayor, Mike Caldwell, told me. Chief Young cited two incidents to support Caldwell’s claim. The first was the 2018 murder of a homeless man, whose killer told police he considered homeless people a “problem.” The second was a fatal stabbing in an encampment near the railroad tracks, just outside city limits; the suspect arrested in the case was homeless himself. Both incidents were tragic examples of the well-documented vulnerability to violence of people without shelter. But does it follow that drones would be an effective deterrent? 

The idea that police were flying over the city’s open spaces to investigate homicides is also hard to square with the contention that the flights were part of the city’s homeless outreach. Aren’t those different activities, or shouldn’t they be? Either way, Caldwell said, “if it wasn’t the drone, it would be officers climbing over deadfall and going into those places. That keeps our officers safe, and gives us more bandwidth.”

One important function of resource constraints, though—bandwidth, in the mayor’s equation—is that they force governments, and citizens, to consider priorities. One Friday afternoon, I met Doug Young, a 49-year-old who has lived outdoors in Ogden on and off for the last 12 years. He wore a gray poncho and a cowboy hat with a pin in the shape of a cow’s skull. Young said he often saw drones overhead when he camped behind a local Walmart, and he had learned to distinguish police drones by the whirr of their motors. “If it stops violent crime, cool. If it’s for some petty bullshit, leave us the fuck alone,” he said. 

To Mayor Caldwell, this wasn’t a meaningful distinction. Asked whether there were some complaints or alleged crimes that weren’t serious enough to justify use of the RTCC’s most invasive technologies, he said, “I think we should use all the tools … The average everyday person wouldn’t even know that these tools are out there or that anything is being monitored.”

For Betty Sawyer, president of the Ogden chapter of the NAACP, that’s precisely the problem. Sawyer told me she wasn’t aware the city had license-plate readers and remotely monitored surveillance cameras until I called her for an interview. When she asked the department for more information, Chief Young shared a presentation he’d made before the City Council in December—one week before the new license-plate readers were deployed. “How many people are listening to weekly city council meetings?” she asked. “If no one’s talking about it but it’s here—how, why, what’s the reason for it? Is that the best use of our dollar when we’re down officers? These are things that should be put up front, not after the fact.” 

Betty Sawyer, president of the Ogden NAACP, says the department should do more to engage city residents in conversations about new police technologies.

Last summer, as protests flared across the country in response to the police killing of George Floyd in Minneapolis, Sawyer spearheaded a group that held a series of meetings with the mayor and police chief. It was an effort to improve police–community relations in a city where not one of the department’s 126 sworn officers is Black, and where less than 10% of the force is Hispanic, though Hispanic residents make up more than 30% of Ogden’s population. “Our whole goal is: How do we build in transparency so we can dispel the myths and speak to the truth of what you are doing?” she said. 

One risk for the police department is that the RTCC’s usefulness is, at least for some of the city, ultimately overshadowed by mistrust over cops’ ability to use their new powers with restraint. As Malik Dayo, who organized several Black Lives Matter protests in Ogden last summer, told me, “I can leave my house, drive to the store, and come back, and if [police] wanted to, they can figure out what time I left, what time I came back, and if I made any stops along the way.” Some cities have preempted similar objections with an avalanche of public data: in Southern California, the city of Chula Vista publishes routes and accompanying case numbers for every drone flight its police department conducts. Weloth assured me the checks and balances on Ogden’s license-plate readers would prevent the scenario Dayo described. Dayo was unmoved. “I think it’s gonna be abused,” he said. “I really do.”

The city’s leadership sees the real time crime center as a linchpin of efforts to revitalize downtown.

Police tend to view all the tools at their disposal as part of the same basic continuum—drones and bicycles alike helping “to protect and serve.” After a few days in Ogden, though, I couldn’t help but think that the RTCC’s tools were also functioning as a kind of digital armor for a particular worldview. Was the department’s reliance on technology allowing it to do more with less, or was it letting the city ignore the complexities of its most urgent social problems?

Last August, a covid-19 outbreak at the Lantern House, Ogden’s largest homeless shelter, infected at least 48 residents and killed two. Confirmed cases were quarantined in a separate wing of the shelter, but people soon began to set up tents on the sidewalk outside, where 33rd Street dead-ended by the railroad tracks.

Among them was a man who asked me to use only his first name, Ryan, and said he no longer felt safe sleeping on closely spaced bunks: “You’re within four feet of five people.” Outside, people had to move their stuff twice a week for workers to clear trash, and sometimes human waste, from the area—there were no dumpsters, and no porta-potties—but it felt safer than being indoors. “We were staying so close together it was a health risk,” he said.

The police department set up a trailer with surveillance cameras atop a high pole to record what happened in the new camp. Through the fall, as the group living outside the shelter swelled to some 60 people in about 30 tents, the cameras captured several incidents of violence. A car window was smashed. Someone punched a pizza delivery driver in the face. 

On December 10, a Thursday, a team including police, firefighters, and county social workers cleared the encampment once and for all. “Up to this point, Ogden city has taken a moderated approach during the pandemic. However, the situation has now become untenable,” a city press release read, identifying the encampment as a source of crime and a drain on city resources. 

“Given the potential for the spread of COVID-19 and other communicable diseases often found in camps like these, risks from camp members spread throughout the city.” This was not the approach advocated by the Centers for Disease Control and Prevention, which recommends that local governments “allow people who are living unsheltered or in encampments to remain where they are,” emphasizing that dispersing encampments increases the potential for disease spread.

According to a report in the local paper, 10 people accepted the city’s offer to go sleep inside the Lantern House, and the rest dispersed. If they found themselves setting up tents along the Ogden River, they’d be spotted soon enough by one of the police department’s drones. 

Paige Berhow, who retired as assistant police chief in the Ogden suburb of Riverdale and now lives in the city, became an officer in the early 1980s, when her on-duty equipment consisted of little more than a uniform and a revolver. Then came tasers and bulletproof vests and computer dashboards in every patrol car. “With every layer of stuff, that’s another layer of detachment from the public, too,” she told me. As Berhow pointed out, much of the expanding footprint of technology in police departments has come in the name of officer safety, though on-duty officer deaths have declined dramatically over the last several decades.

David Weloth hesitated when I asked what would change, 10 years into Ogden’s experiment, if the police department suddenly had to do without the RTCC, since renamed the Area Tactical Analysis Center. “We would have a very difficult time,” he said. “There’s no crime reduction strategy that happens without ATAC.” 

“There’s no crime reduction strategy that happens without ATAC.”

David Weloth

ATAC’s role in the police department’s relationship with the city has steadily expanded over time. The number of “requests for information” completed by the group was up by over 20% last year. The police department now has a say in the city’s master plan for surveillance cameras; the popularity of Amazon Ring’s camera-equipped doorbells, meanwhile, has given analysts a new trove of data to peruse. 

But Ogden releases very little data to shed light on ATAC’s role, beyond confirmation that it’s still growing. In the fall of 2019, when the city launched an expanded network of surveillance cameras that ATAC could monitor remotely, employees accessed them only a handful of times each month. They soon found reasons to peer through the cameras daily. From November 23, 2020, to February 23, 2021 (the most recent three months for which the city provided data), ATAC processed over 27,000 queries, or about 300 each day.

Suresh Venkatasubramanian, a computer scientist at the University of Utah who studies the social implications of algorithmic decision-making, worries that police departments have embraced novel tools without the resources or the expertise to properly evaluate their influence. How might the distribution of surveillance cameras, for instance, affect the department’s understanding of the distribution of crime? How could software like that sold by Palantir (a data analytics firm with roots in the intelligence community) amplify existing biases and distortions in the criminal justice system? “A lot of government agencies who are getting solicited by vendors would like … to scrutinize them properly, but they don’t know how,” he told me. “The idea coming from vendors is that more data is always better. That’s really not the case.”

To their credit, the analysts working at ATAC made good on Weloth’s pledge of openness. They were candid, and willing to explore potential pitfalls in their work. Terry, who did much of the mapping work at ATAC, had spent four years as a contractor with the National Geospatial-Intelligence Agency working on American drone strikes. He told the story of a fellow image analyst who misidentified what he thought was a group of men making IEDs under cover of darkness. On the strength of that analysis, Terry says, “they blew up kids carrying firewood.” When Terry came to Ogden, he was surprised to see that local police departments had access to tools as powerful as Palantir’s. Another analyst swiveled in his chair and chimed in. “The technology is getting better and the cost is coming down,” he said. “At some point will we get access to technology we regret having? Probably.”  

Rowan Moore Gerety is a writer in Phoenix, Arizona.