ChatGPT is about to revolutionize the economy. We need to decide what that looks like.

Whether it’s based on hallucinatory beliefs or not, an artificial-intelligence gold rush has started over the last several months to mine the anticipated business opportunities from generative AI models like ChatGPT. App developers, venture-backed startups, and some of the world’s largest corporations are all scrambling to make sense of the sensational text-generating bot released by OpenAI last November.

You can practically hear the shrieks from corner offices around the world: “What is our ChatGPT play? How do we make money off this?”

But while companies and executives see a clear chance to cash in, the likely impact of the technology on workers and the economy as a whole is far less obvious. Despite their limitations—chief among them their propensity for making stuff up—ChatGPT and other recently released generative AI models hold the promise of automating all sorts of tasks that were previously thought to be solely in the realm of human creativity and reasoning, from writing to creating graphics to summarizing and analyzing data. That has left economists unsure how jobs and overall productivity might be affected.

For all the amazing advances in AI and other digital tools over the last decade, their record in improving prosperity and spurring widespread economic growth is discouraging. Although a few investors and entrepreneurs have become very rich, most people haven’t benefited. Some have even been automated out of their jobs. 

Productivity growth, which is how countries become richer and more prosperous, has been dismal since around 2005 in the US and in most advanced economies (the UK is a particular basket case). The fact that the economic pie is not growing much has led to stagnant wages for many people. 

What productivity growth there has been in that time is largely confined to a few sectors, such as information services, and in the US to a few cities—think San Jose, San Francisco, Seattle, and Boston. 

Will ChatGPT make the already troubling income and wealth inequality in the US and many other countries even worse? Or could it help? Could it in fact provide a much-needed boost to productivity?

ChatGPT, with its human-like writing abilities, and OpenAI’s other recent release, DALL-E 2, which generates images on demand, are built on large models trained on huge amounts of data. The same is true of rivals such as Claude from Anthropic and Bard from Google. These so-called foundation models, such as OpenAI’s GPT-3.5, which ChatGPT is based on, and Google’s competing language model LaMDA, which powers Bard, have evolved rapidly in recent years.

They keep getting more powerful: they’re trained on ever more data, and the number of parameters—the variables in the models that get tweaked—is rising dramatically. Earlier this month, OpenAI released its newest version, GPT-4. While OpenAI won’t say exactly how much bigger it is, one can guess; GPT-3, with some 175 billion parameters, was about 100 times larger than GPT-2.
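The rough arithmetic behind that guess, using GPT-2's publicly reported size of about 1.5 billion parameters for its largest variant, works out like this:

```python
# Publicly reported parameter counts for the two models.
GPT2_PARAMS = 1.5e9   # largest GPT-2 variant (2019)
GPT3_PARAMS = 175e9   # GPT-3 (2020)

# GPT-3 is roughly two orders of magnitude larger than GPT-2:
# 175e9 / 1.5e9 is about 117, hence "about 100 times larger."
scale_factor = GPT3_PARAMS / GPT2_PARAMS
print(round(scale_factor))
```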

But it was the release of ChatGPT late last year that changed everything for many users. It’s incredibly easy to use and compelling in its ability to rapidly create human-like text, including recipes, workout plans, and—perhaps most surprising—computer code. For many non-experts, including a growing number of entrepreneurs and businesspeople, the user-friendly chat model—less abstract and more practical than the impressive but often esoteric advances that have been brewing in academia and a handful of high-tech companies over the last few years—is clear evidence that the AI revolution has real potential.

Venture capitalists and other investors are pouring billions into companies based on generative AI, and the list of apps and services driven by large language models is growing longer every day.

Among the big players, Microsoft has invested a reported $10 billion in OpenAI, the maker of ChatGPT, hoping the technology will bring new life to its long-struggling Bing search engine and fresh capabilities to its Office products. In early March, Salesforce said it would introduce a ChatGPT app in its popular Slack product; at the same time, it announced a $250 million fund to invest in generative AI startups. The list goes on, from Coca-Cola to GM. Everyone has a ChatGPT play.

Meanwhile, Google announced it is going to use its new generative AI tools in Gmail, Docs, and some of its other widely used products. 

Still, there are no obvious killer apps yet. And as businesses scramble for ways to use the technology, economists say a rare window has opened for rethinking how to get the most benefits from the new generation of AI. 

“We’re talking in such a moment because you can touch this technology. Now you can play with it without needing any coding skills. A lot of people can start imagining how this impacts their workflow, their job prospects,” says Katya Klinova, the head of research on AI, labor, and the economy at the Partnership on AI in San Francisco. 

“The question is who is going to benefit? And who will be left behind?” says Klinova, who is working on a report outlining the potential job impacts of generative AI and providing recommendations for using it to increase shared prosperity.

The optimistic view: it will prove to be a powerful tool for many workers, improving their capabilities and expertise, while providing a boost to the overall economy. The pessimistic one: companies will simply use it to destroy what once looked like automation-proof jobs, well-paying ones that require creative skills and logical reasoning; a few high-tech companies and tech elites will get even richer, but it will do little for overall economic growth.

Helping the least skilled

The question of ChatGPT’s impact on the workplace isn’t just a theoretical one. 

In the most recent analysis, OpenAI’s Tyna Eloundou, Sam Manning, and Pamela Mishkin, with the University of Pennsylvania’s Daniel Rock, found that large language models such as GPT could have some effect on 80% of the US workforce. They further estimated that the AI models, including GPT-4 and other anticipated software tools, would heavily affect 19% of jobs, with at least 50% of the tasks in those jobs “exposed.” In contrast to what we saw in earlier waves of automation, higher-income jobs would be most affected, they suggest. Some of the people whose jobs are most vulnerable: writers, web and digital designers, financial quantitative analysts, and—just in case you were thinking of a career change—blockchain engineers.

“There is no question that [generative AI] is going to be used—it’s not just a novelty,” says David Autor, an MIT labor economist and a leading expert on the impact of technology on jobs. “Law firms are already using it, and that’s just one example. It opens up a range of tasks that can be automated.” 

[Photo: David Autor in his office. Credit: Peter Tenzer/MIT]

Autor has spent years documenting how advanced digital technologies have destroyed many manufacturing and routine clerical jobs that once paid well. But he says ChatGPT and other examples of generative AI have changed the calculation.

Previously, AI had automated some office work, but only the rote, step-by-step tasks that could be coded for a machine. Now it can perform tasks that we have viewed as creative, such as writing and producing graphics. “It’s pretty apparent to anyone who’s paying attention that generative AI opens the door to computerization of a lot of kinds of tasks that we think of as not easily automated,” he says.

The worry is not so much that ChatGPT will lead to large-scale unemployment—as Autor points out, there are plenty of jobs in the US—but that companies will replace relatively well-paying white-collar jobs with this new form of automation, sending those workers off to lower-paying service employment while the few who are best able to exploit the new technology reap all the benefits. 

In this scenario, tech-savvy workers and companies could quickly take up the AI tools, becoming so much more productive that they dominate their workplaces and their sectors. Those with fewer skills and little technical acumen to begin with would be left further behind. 

But Autor also sees a more positive possible outcome: generative AI could help a wide swath of people gain the skills to compete with those who have more education and expertise.

One of the first rigorous studies done on the productivity impact of ChatGPT suggests that such an outcome might be possible. 

Two MIT economics graduate students, Shakked Noy and Whitney Zhang, ran an experiment involving hundreds of college-educated professionals working in areas like marketing and HR; they asked half to use ChatGPT in their daily tasks and the others not to. ChatGPT raised overall productivity (not too surprisingly), but here’s the really interesting result: the AI tool helped the least skilled and accomplished workers the most, decreasing the performance gap between employees. In other words, the poor writers got much better; the good writers simply got a little faster.
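To see what "decreasing the performance gap" means in practice, here is a minimal sketch with entirely hypothetical quality scores (these are illustrative numbers, not data from the Noy and Zhang experiment): if the tool lifts weaker writers more than stronger ones, the spread between the groups shrinks even as both improve.

```python
from statistics import mean

# Hypothetical writing-quality scores (0-10 scale), for illustration only.
low_skill_before,  low_skill_after  = [3.5, 4.0, 4.5], [6.5, 7.0, 7.0]
high_skill_before, high_skill_after = [7.5, 8.0, 8.5], [8.0, 8.5, 9.0]

# The gap is the difference between the group averages.
gap_before = mean(high_skill_before) - mean(low_skill_before)  # 4.0
gap_after  = mean(high_skill_after)  - mean(low_skill_after)   # about 1.7

# Both groups improve, but the gap between them narrows.
assert gap_after < gap_before
print(f"gap before: {gap_before:.2f}, gap after: {gap_after:.2f}")
```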

The preliminary findings suggest that ChatGPT and other generative AIs could, in the jargon of economists, “upskill” people who are having trouble finding work. There are lots of experienced workers “lying fallow” after being displaced from office and manufacturing jobs over the last few decades, Autor says. If generative AI can be used as a practical tool to broaden their expertise and provide them with the specialized skills required in areas such as health care or teaching, where there are plenty of jobs, it could revitalize our workforce.

Determining which scenario wins out will require a more deliberate effort to think about how we want to exploit the technology. 

“I don’t think we should take it as the technology is loose on the world and we must adapt to it. Because it’s in the process of being created, it can be used and developed in a variety of ways,” says Autor. “It’s hard to overstate the importance of designing what it’s there for.”

Simply put, we are at a juncture where either less-skilled workers will increasingly be able to take on what is now thought of as knowledge work, or the most talented knowledge workers will radically scale up their existing advantages over everyone else. Which outcome we get depends largely on how employers implement tools like ChatGPT. But the more hopeful option is well within our reach.  

Beyond human-like

There are some reasons to be pessimistic, however. Last spring, in “The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence,” the Stanford economist Erik Brynjolfsson warned that AI creators were too obsessed with mimicking human intelligence rather than finding ways to use the technology to allow people to do new tasks and extend their capabilities.

The pursuit of human-like capabilities, Brynjolfsson argued, has led to technologies that simply replace people with machines, driving down wages and exacerbating inequality of wealth and income. It is, he wrote, “the single biggest explanation” for the rising concentration of wealth.

[Photo: Erik Brynjolfsson. Credit: Neilson Barnard/Getty Images]

A year later, he says ChatGPT, with its human-sounding outputs, “is like the poster child for what I warned about”: it has “turbocharged” the discussion around how the new technologies can be used to give people new abilities rather than simply replacing them.

Despite his worries that AI developers will continue to blindly outdo each other in mimicking human-like capabilities in their creations, Brynjolfsson, the director of the Stanford Digital Economy Lab, is generally a techno-optimist when it comes to artificial intelligence. Two years ago, he predicted a productivity boom from AI and other digital technologies, and these days he’s bullish on the impact of the new AI models.

Much of Brynjolfsson’s optimism comes from the conviction that businesses could greatly benefit from using generative AI such as ChatGPT to expand their offerings and improve the productivity of their workforce. “It’s a great creativity tool. It’s great at helping you to do novel things. It’s not simply doing the same thing cheaper,” says Brynjolfsson. As long as companies and developers can “stay away from the mentality of thinking that humans aren’t needed,” he says, “it’s going to be very important.” 

Within a decade, he predicts, generative AI could add trillions of dollars in economic growth in the US. “A majority of our economy is basically knowledge workers and information workers,” he says. “And it’s hard to think of any type of information workers that won’t be at least partly affected.”

When that productivity boost will come—if it does—is an economic guessing game. Maybe we just need to be patient.

In 1987, Robert Solow, the MIT economist who won the Nobel Prize that year for explaining how innovation drives economic growth, famously said, “You can see the computer age everywhere except in the productivity statistics.” It wasn’t until later, in the mid and late 1990s, that the impacts—particularly from advances in semiconductors—began showing up in the productivity data as businesses found ways to take advantage of ever cheaper computational power and related advances in software.  

Could the same thing happen with AI? Avi Goldfarb, an economist at the University of Toronto, says it depends on whether we can figure out how to use the latest technology to transform businesses as we did in the earlier computer age.

So far, he says, companies have just been dropping in AI to do tasks a little bit better: “It’ll increase efficiency—it might incrementally increase productivity—but ultimately, the net benefits are going to be small. Because all you’re doing is the same thing a little bit better.” But, he says, “the technology doesn’t just allow us to do what we’ve always done a little bit better or a little bit cheaper. It might allow us to create new processes to create value to customers.”

The verdict on when—even if—that will happen with generative AI remains uncertain. “Once we figure out what good writing at scale allows industries to do differently, or—in the context of DALL-E—what graphic design at scale allows us to do differently, that’s when we’re going to experience the big productivity boost,” Goldfarb says. “But if that is next week or next year or 10 years from now, I have no idea.”

Power struggle

When Anton Korinek, an economist at the University of Virginia and a fellow at the Brookings Institution, got access to the new generation of large language models such as ChatGPT, he did what a lot of us did: he began playing around with them to see how they might help his work. He carefully documented their performance in a paper in February, noting how well they handled 25 “use cases,” from brainstorming and editing text (very useful) to coding (pretty good with some help) to doing math (not great).

ChatGPT did explain one of the most fundamental principles in economics incorrectly, says Korinek: “It screwed up really badly.” But the mistake, easily spotted, was quickly forgiven in light of the benefits. “I can tell you that it makes me, as a cognitive worker, more productive,” he says. “Hands down, no question for me that I’m more productive when I use a language model.” 

When GPT-4 came out, he tested its performance on the same 25 questions that he documented in February, and it performed far better. There were fewer instances of making stuff up; it also did much better on the math assignments, says Korinek.

Since ChatGPT and other AI bots automate cognitive work, as opposed to physical tasks that require investments in equipment and infrastructure, a boost to economic productivity could happen far more quickly than in past technological revolutions, says Korinek. “I think we may see a greater boost to productivity by the end of the year—certainly by 2024,” he says. 

What’s more, he says, in the longer term, the way the AI models can make researchers like himself more productive has the potential to drive technological progress. 

That potential of large language models is already turning up in research in the physical sciences. Berend Smit, who runs a chemical engineering lab at EPFL in Lausanne, Switzerland, is an expert on using machine learning to discover new materials. Last year, after one of his graduate students, Kevin Maik Jablonka, showed some interesting results using GPT-3, Smit asked him to demonstrate that GPT-3 is, in fact, useless for the kinds of sophisticated machine-learning studies his group does to predict the properties of compounds.

“He failed completely,” jokes Smit.

It turns out that after being fine-tuned for a few minutes with a few relevant examples, the model performs as well as advanced machine-learning tools specially developed for chemistry in answering basic questions about things like the solubility of a compound or its reactivity. Simply give it the name of a compound, and it can predict various properties based on the structure.

As in other areas of work, large language models could help expand the expertise and capabilities of non-experts—in this case, chemists with little knowledge of complex machine-learning tools. Because it’s as simple as a literature search, Jablonka says, “it could bring machine learning to the masses of chemists.”

These impressive—and surprising—results are just a tantalizing hint of how powerful the new forms of AI could be across a wide swath of creative work, including scientific discovery, and how shockingly easy they are to use. But this also points to some fundamental questions.

As the potential impact of generative AI on the economy and jobs becomes more imminent, who will define the vision for how these tools should be designed and deployed? Who will control the future of this amazing technology?

[Photo: Diane Coyle. Credit: David Levenson/Getty Images]

Diane Coyle, an economist at Cambridge University in the UK, says one concern is the potential for large language models to be dominated by the same big companies that rule much of the digital world. Google and Meta are offering their own large language models alongside OpenAI, she points out, and the large computational costs required to run the software create a barrier to entry for anyone looking to compete.

The worry is that these companies have similar “advertising-driven business models,” Coyle says. “So obviously you get a certain uniformity of thought, if you don’t have different kinds of people with different kinds of incentives.”

Coyle acknowledges that there are no easy fixes, but she says one possibility is a publicly funded international research organization for generative AI, modeled after CERN, the Geneva-based intergovernmental European nuclear research body where the World Wide Web was created in 1989. It would be equipped with the huge computing power needed to run the models and the scientific expertise to further develop the technology. 

Such an effort outside of Big Tech, says Coyle, would “bring some diversity to the incentives that the creators of the models face when they’re producing them.” 

While it remains uncertain which public policies would help make sure that large language models best serve the public interest, says Coyle, it’s becoming clear that the choices about how we use the technology can’t be left to a few dominant companies and the market alone.  

History provides us with plenty of examples of how important government-funded research can be in developing technologies that bring about widespread prosperity. Long before the invention of the web at CERN, another publicly funded effort in the late 1960s gave rise to the internet, when the US Department of Defense supported ARPANET, which pioneered ways for multiple computers to communicate with each other.  

In Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, the MIT economists Daron Acemoglu and Simon Johnson provide a compelling walk through the history of technological progress and its mixed record in creating widespread prosperity. Their point is that it’s critical to deliberately steer technological advances in ways that provide broad benefits and don’t just make the elite richer.

[Photo: Simon Johnson (left) and Daron Acemoglu. Credit: Stephen Jaffe/IMF via Getty Images; Jarod Charney/MIT]

From the decades after World War II until the early 1970s, the US economy was marked by rapid technological changes; wages for most workers rose while income inequality dropped sharply. The reason, Acemoglu and Johnson say, is that technological advances were used to create new tasks and jobs, while social and political pressures helped ensure that workers shared the benefits more equally with their employers than they do now. 

In contrast, they write, the more recent rapid adoption of manufacturing robots in “the industrial heartland of the American economy in the Midwest” over the last few decades simply destroyed jobs and led to a “prolonged regional decline.”  

The book, which comes out in May, is particularly relevant for understanding what today’s rapid progress in AI could bring and how decisions about the best way to use the breakthroughs will affect us all going forward. In a recent interview, Acemoglu said they were writing the book when GPT-3 was first released. And, he adds half-jokingly, “we foresaw ChatGPT.”

Acemoglu maintains that the creators of AI “are going in the wrong direction.” The entire architecture behind the AI “is in the automation mode,” he says. “But there is nothing inherent about generative AI or AI in general that should push us in this direction. It’s the business models and the vision of the people in OpenAI and Microsoft and the venture capital community.”

If you believe we can steer a technology’s trajectory, then an obvious question is: Who is “we”? And this is where Acemoglu and Johnson are most provocative. They write: “Society and its powerful gatekeepers need to stop being mesmerized by tech billionaires and their agenda … One does not need to be an AI expert to have a say about the direction of progress and the future of our society forged by these technologies.”

The creators of ChatGPT and the businesspeople involved in bringing it to market, notably OpenAI’s CEO, Sam Altman, deserve much credit for offering the new AI sensation to the public. Its potential is vast. But that doesn’t mean we must accept their vision and aspirations for where we want the technology to go and how it should be used.

According to their narrative, the end goal is artificial general intelligence, which, if all goes well, will lead to great economic wealth and abundance. Altman, for one, has promoted the vision at great length recently, providing further justification for his longtime advocacy of a universal basic income (UBI) to feed the non-technocrats among us. For some, it sounds tempting. No work and free money! Sweet!

It’s the assumptions underlying the narrative that are most troubling—namely, that AI is headed on an inevitable job-destroying path and most of us are just along for the (free?) ride. This view barely acknowledges the possibility that generative AI could lead to a creativity and productivity boom for workers far beyond the tech-savvy elites by helping to unlock their talents and brains. There is little discussion of the idea of using the technology to produce widespread prosperity by expanding human capabilities and expertise throughout the working population.

As Acemoglu and Johnson write: “We are heading toward greater inequality not inevitably but because of faulty choices about who has power in society and the direction of technology … In fact, UBI fully buys into the vision of the business and tech elite that they are the enlightened, talented people who should generously finance the rest.”

Acemoglu and Johnson write of various tools for achieving “a more balanced technology portfolio,” from tax reforms and other government policies that might encourage the creation of more worker-friendly AI to reforms that might wean computer science research and business schools off Big Tech’s funding.

But, the economists acknowledge, such reforms are “a tall order,” and a social push to redirect technological change is “not just around the corner.” 

The good news is that, in fact, we can decide how we choose to use ChatGPT and other large language models. As countless apps based on the technology are rushed to market, businesses and individual users will have a chance to choose how they want to exploit it; companies can decide to use ChatGPT to give workers more abilities—or to simply cut jobs and trim costs.

Another positive development: there is at least some momentum behind open-source projects in generative AI, which could break Big Tech’s grip on the models. Notably, last year more than a thousand international researchers collaborated on a large language model called Bloom that can create text in languages such as French, Spanish, and Arabic. And if Coyle and others are right, increased public funding for AI research could help change the course of future breakthroughs. 

Stanford’s Brynjolfsson refuses to say he’s optimistic about how it will play out. Still, his enthusiasm for the technology these days is clear. “We can have one of the best decades ever if we use the technology in the right direction,” he says. “But it’s not inevitable.”

The Download: covid’s origin drama, and TikTok’s uncertain future

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Newly revealed coronavirus data has reignited a debate over the virus’s origins

This week, we’ve seen the resurgence of a debate that has been swirling since the start of the pandemic—where did the virus that causes covid-19 come from?

For the most part, scientists have maintained that the virus probably jumped from an animal to a human at the Huanan Seafood Market in Wuhan at some point in late 2019. But some claim that the virus leaped from humans to animals, rather than the other way around. And many continue to claim that the virus somehow leaked from a nearby laboratory that was studying coronaviruses in bats.  

Data collected in 2020—and kept from public view since then—potentially adds weight to the animal theory. It highlights a potential suspect: the raccoon dog. But exactly how much weight it adds depends on who you ask. Read the full story.

—Jessica Hamzelou

This story is from The Checkup, Jessica’s weekly biotech newsletter. Sign up to receive it in your inbox every Thursday.

Read more of MIT Technology Review’s covid reporting:

+ Our senior biotech editor Antonio Regalado investigated the origins of the coronavirus behind covid-19 in his five-part podcast series Curious Coincidence.

+ Meet the scientist at the center of the covid lab leak controversy. Shi Zhengli has spent years at the Wuhan Institute of Virology researching coronaviruses that live in bats. Her work has come under fire as the world tries to understand where covid-19 came from. Read the full story.

+ This scientist now believes covid started in Wuhan’s wet market. Here’s why. Michael Worobey of the University of Arizona believes that a spillover of the virus from animals at the Huanan Seafood Market was almost certainly behind the origin of the pandemic. Read the full story.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 TikTok’s future in the US is hanging in the balance
Banning it is a colossal challenge, and officials still lack the legal authority to do so. (WP $)
+ TikTok CEO Shou Zi Chew was grilled by a congressional committee. (FT $)
+ He told lawmakers the company would earn their trust. (WSJ $)
+ Meanwhile, TikTok paid for influencers to travel to DC to lobby its cause. (Wired $)

2 A crypto fugitive has been arrested in Montenegro
Do Kwon has been on the run since TerraUSD stablecoin collapsed last year. (WSJ $)
+ Want to mine Bitcoin? Get yourself to Texas. (Reuters)
+ What’s next for crypto. (MIT Technology Review)

3 Twitter’s getting rid of its legacy blue checks
On the entirely serious date of April 1. (The Verge)
+ The platform’s still an unattractive prospect for advertisers. (Vox)

4 Chatbots are having tough conversations for us
ChatGPT is adept at writing scripts for sensitive talks with kids and colleagues. (NYT $)
+ OpenAI has given ChatGPT access to the web’s live data. (The Verge)
+ How Character.AI became a billion-dollar unicorn. (WSJ $)
+ The inside story of how ChatGPT was built from the people who made it. (MIT Technology Review)

5 Jack Dorsey’s Block has been accused of fraudulent transactions
The payments company has denied the claims, which also allege it inflated its user numbers. (FT $)
+ Dorsey doesn’t have a track record of caring about this kind of thing. (The Information $)

6 Homeowners associations are secretly installing surveillance systems
The system tracks license plates and follows residents’ movements. (The Intercept)

7 Inside the tricky ethics of using DNA to solve crimes
A new database could help to protect users’ privacy. (Wired $)
+ The citizen scientist who finds killers from her couch. (MIT Technology Review)

8 There are plenty of reasons to be optimistic about the climate
Healthier, more sustainable diets are a good place to start. (Scientific American)
+ Taking stock of our climate past, present, and future. (MIT Technology Review)

9 TikTok keeps hectoring us
It seems we just can’t get enough of being aggressively told what to do. (Vox)

10 Don’t get scammed by a deepfake
CallerID can’t be trusted to protect you from rogue AI calls. (Gizmodo)

Quote of the day

“Wait, I need content.”

—TikTok fashion creator Kristine Thompson, who refused to miss a content opportunity during a trip to the US Capitol to lobby against a potential TikTok ban, as she told the New York Times.

The big story

This sci-fi blockchain game could help create a metaverse that no one owns

November 2022

Dark Forest is a vast universe, and most of it is shrouded in darkness. Your mission, should you choose to accept it, is to venture into the unknown, avoid being destroyed by opposing players who may be lurking in the dark, and build an empire of the planets you discover and can make your own.

But while the video game looks and plays much like other online strategy games, it doesn’t rely on the centralized servers that run other popular titles. And it may point to something even more profound: the possibility of a metaverse that isn’t owned by a big tech company. Read the full story.

—Mike Orcutt

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ If underwater terrors are your thing, Joe Romiero takes some seriously impressive shark pictures and videos.
+ Try as it might, Ted Lasso’s British dialogue falls wide of the mark.
+ Let’s have a good old snoop around some celebrities’ bedrooms.
+ Why we can’t get enough of those fancy candles.
+ Interviewing animals with a tiny microphone, it doesn’t get much better than that.

Newly revealed coronavirus data has reignited a debate over the virus’s origins

This article is from The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, sign up here.

This week, coronavirus has been back in the news in a big way. We’ve seen the resurgence of a debate that has been swirling since the start of the pandemic—where did the virus that causes covid-19 come from?

For the most part, scientists have maintained that the virus probably jumped from an animal to a human at the Huanan Seafood Market in Wuhan at some point in late 2019. But some claim that the virus leaped from humans to animals, rather than the other way around. And many continue to claim that the virus somehow leaked from a nearby laboratory that was studying coronaviruses in bats.

Data collected in 2020—and kept from public view since then—potentially adds weight to the animal theory. It highlights a potential suspect: the raccoon dog. But exactly how much weight it adds depends on who you ask. New analyses of the data have only reignited the debate, and stirred up some serious drama.

The current ruckus starts with a study shared by Chinese scientists back in February 2022. In a preprint (a scientific paper that has not yet been peer-reviewed or published in a journal), George Gao of the Chinese Center for Disease Control and Prevention (CCDC) and his colleagues described how they collected and analyzed 1,380 samples from the Huanan Seafood Market.

These samples were collected between January and March 2020, just after the market was closed. At the time, the team wrote that they found coronavirus only in samples that also contained genetic material from people.

There were a lot of animals on sale at this market, which sold more than just seafood. The Gao paper features a long list, including chickens, ducks, geese, pheasants, doves, deer, badgers, rabbits, bamboo rats, porcupines, hedgehogs, crocodiles, snakes, and salamanders. And that list is not exhaustive—there are reports of other animals being traded there, including raccoon dogs. We’ll come back to them later.

But Gao and his colleagues reported that they didn’t find the coronavirus in any of the 18 species of animal they looked at. They suggested that it was humans who most likely brought the virus to the market, which ended up being the first known epicenter of the outbreak.

Fast-forward to March 2023. On March 4, Florence Débarre, an evolutionary biologist at Sorbonne University in Paris, spotted some data that had been uploaded to GISAID, a website that allows researchers to share genetic data to help them study and track viruses that cause infectious diseases. The data appeared to have been uploaded in June 2022. It seemed to have been collected by Gao and his colleagues for their February 2022 study, although it had not been included in the actual paper.

When Débarre and her colleagues analyzed this data, they found evidence that some of the samples Gao’s team collected that were positive for the coronavirus had been collected from areas that housed a range of animals, including raccoon dogs. Their findings were covered in a report by The Atlantic. Since then, Débarre and her colleagues have posted a report detailing their findings on the scientific repository Zenodo.

“This finding was a really big deal, not because it proves the presence of an infected animal (it doesn’t). But it does put animals—raccoon dogs and other susceptible species—into the exact location at the market with the virus. And not with humans,” Angela Rasmussen, a virologist at the University of Saskatchewan in Canada and a coauthor of the report, tweeted on March 21.

Raccoon dogs are of special interest because we now know that they are at risk of being infected with the virus and spreading it. But the data doesn’t confirm that raccoon dogs in the market had the virus. Even if they did, it doesn’t mean that they were the animals responsible for passing the virus to humans. So what does it mean?

If you ask a proponent of the lab leak theory, it means nothing. There is no new conclusive evidence that the virus jumped to humans at the Huanan Seafood Market, or that raccoon dogs were involved.

But if you ask one of the many scientists who believe that this marketplace jump from animals is the most likely origin of the coronavirus outbreak in people, they might tell you that this strengthens their case. For them, it’s another nail in the coffin for the lab leak theory, because it offers yet more compelling evidence that susceptible animals were exposed to the virus, at the very least.

There’s more drama to this story. Débarre and her colleagues say they told Gao’s team their findings on March 10. The next day, Gao’s team’s data disappeared from GISAID, and Débarre’s team took their findings to the World Health Organization. The WHO convened two meetings to discuss both teams’ results with the Scientific Advisory Group for the Origins of Novel Pathogens (SAGO).

“Although this does not provide conclusive evidence as to the intermediate host or origins of the virus, the data provide further evidence of the presence of susceptible animals at the market that may have been a source of human infections,” SAGO said in a statement on March 18.

But many are concerned that researchers in China have been hiding their data. The preprint shared in 2022 made no mention of raccoon dogs, but the data posted on GISAID, as well as photographic evidence, suggests that these animals were present at the market before it was closed. Gao’s team’s data “could have—and should have—been shared three years ago,” WHO director general Tedros Adhanom Ghebreyesus said at a media briefing on March 17. “We continue to call on China to be transparent in sharing data, and to conduct the necessary investigations and share the results.”

Débarre’s team are among many scientists publicly urging the CCDC to share all their data. Given that the samples were collected at the start of 2020, “an unreasonable amount of time” has passed already, Débarre and her colleagues write. Gao and his colleagues are apparently working on a paper that will be submitted for publication in a Nature journal. So perhaps we’ll learn more then …

In the meantime, there’s yet more drama! On March 21, Débarre tweeted that she’d had her access to GISAID revoked. This is probably because she and her colleagues shared their own analysis of the Chinese team’s results. According to a statement released by GISAID that same day, the Chinese researchers were preparing their own paper based on that data (the Nature one, presumably). Any other scientists using that data for their own publication would essentially be unfairly “scooping” the Chinese team. Débarre’s access was restored the following day, and she has asked for an apology from “people who questioned our integrity.”

“This isn’t about ‘scooping.’ It’s about the world’s right to know how the pandemic that has profoundly disrupted all our lives began,” Rasmussen tweeted.

The debate over the origins of the virus behind covid-19 continues to rage. US federal agencies can’t agree on where they stand. And while the majority of scientists support the animal theory, many are open to the idea the virus escaped from a lab.

My money is on an animal jump. Not only is keeping animals caged and in close contact inhumane, but it provides the perfect environment for the spread of disease. Trapping wild animals and encroaching on their habitats is known to pose the risk that a disease will jump between species. Even if the coronavirus outbreak did have some other origin, I hope we won’t lose sight of the importance of maintaining wildlife habitats and banning the trade of wild animals.

You can read (and listen to!) more from Tech Review’s archive:

My colleague Antonio Regalado investigated the origins of the coronavirus behind covid-19 in his brilliant five-part podcast series “Curious Coincidence.”

Last year, Jane Qiu spoke to Shi Zhengli of the Wuhan Institute of Virology. Shi, sometimes nicknamed “China’s bat woman,” has long been at the center of the controversy over the lab leak theory.

Michael Worobey of the University of Arizona, who performed the recent analysis of the CCDC data with Débarre, signed a letter asking for more investigation into the lab leak theory in May 2021. He now believes that a spillover of the virus from animals at the Huanan Seafood Market was almost certainly behind the origin of the pandemic, as Qiu reported in 2021.

Antonio had the inside scoop on how Pfizer developed Paxlovid, an antiviral drug that was found to reduce the chance of a serious case of covid by 89%.

Since then, others have explored whether anti-aging drugs might also help us treat covid, as I reported last year.

From around the web

Hospitals are performing drug tests on pregnant people without their consent. The results have caused some to miss out on epidurals or important skin-to-skin bonding with their newborns. (New York Magazine)

Can brain stimulation help treat endometriosis pain? Maybe. The findings of a small, placebo-controlled trial suggest that transcranial direct current stimulation (tDCS) can lower the perception of pain in people with the disorder. (Pain Medicine)

Weight-loss injections have taken over the internet. But if all your information is coming from influencers, the dangers might not be apparent. (MIT Technology Review)

When 47-year-old Marlene Schultz began to lose her hearing, she refused to accept her doctor’s suggestion that the cause was loud music and embarked on a quest for a correct diagnosis. (The Washington Post)

What does a memory look like? Some researchers reckon that memories could be stored in nucleic acid, read out as a molecular code. (Neurobiology of Learning and Memory)

The Download: the battle for satellite internet, and detecting biased AI

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Amazon is about to go head to head with SpaceX in a battle for satellite internet dominance 

What’s coming: Elon Musk and Jeff Bezos are about to lock horns once again. Last month, the US Federal Communications Commission approved the final aspects of Project Kuiper, Amazon’s effort to deliver high-speed internet access from space. In May, the company will test its satellites in an effort to take on SpaceX’s own venture, Starlink, and tap into a potentially very lucrative market.

The catch: The key difference is that Starlink is operational, and has been for years, whereas Amazon doesn’t plan to start offering Kuiper as a service until 2024, giving SpaceX a considerable head start. Also, none of the rockets Amazon has bought a ride on has yet made it to space. Read the full story.

—Jonathan O’Callaghan

These new tools let you see for yourself how biased AI image models are

The news: A set of new interactive online tools allow people to examine biases in three popular AI image-generating models: DALL-E 2 and the two recent versions of Stable Diffusion. The tools, built by researchers at AI startup Hugging Face and Leipzig University, are detailed in a non-peer-reviewed paper.

Why it matters: It’s well-known that AI image-generating models tend to amplify harmful biases and stereotypes. For example, the researchers found that DALL-E 2 generated white men 97% of the time when given prompts like “CEO” or “director.” Now, people don’t just have to take the experts at their word: they can use these tools to see the problem for themselves. Read the full story.

—Melissa Heikkilä

Taking stock of our climate past, present, and future

Earlier this week, the UN Intergovernmental Panel on Climate Change (IPCC) published a major climate report digging deep into the state of climate change research. 

The IPCC works in seven-year cycles, give or take. Each cycle, the group looks at all the published literature on climate change and puts together a handful of reports on different topics, leading up to a synthesis report that sums it all up. This week’s release was one of those synthesis reports.

Because these reports are a sort of summary of existing research, our climate reporter Casey Crownhart has been taking a look at where we’ve come from, where we are, and where we’re going on climate change. What she found was surprisingly heartening. Read the full story.

—Casey Crownhart

This story is from The Spark, Casey’s weekly newsletter giving you the inside track on all things climate. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 How ChatGPT stole Alexa’s thunder  
The once-ubiquitous voice assistant’s capabilities pale in comparison to language model AIs. (The Information $)
+ Conservatives are building political chatbots to counter ‘woke AI.’ (NYT $)
+ Why the businesses banning ChatGPT could actually benefit from using it. (WSJ $)
+ Google’s Bard isn’t as exciting as its fancier rivals. (Vox)

+ Google and Microsoft’s chatbots are already citing each other in a misinformation nightmare. (The Verge)
+ The inside story of how ChatGPT was built from the people who made it. (MIT Technology Review)

2 TikTok stars are protesting the app’s potential ban
They’ve united in Washington ahead of the firm’s Congress hearing today. (WSJ $)
+ The company’s CEO is facing a tough few hours. (TechCrunch)

3 Celebrities have been charged over crypto endorsements
The SEC claims they illegally touted the currencies to fans online. (The Guardian)
+ It’s also warned exchange Coinbase that it may have violated US law. (CNBC)

4 Chipmakers are joining forces to fight the US ‘forever chemicals’ crackdown
Controversial chemicals are key elements in the chip manufacturing process. (FT $)
+ These simple design rules could turn the chip industry on its head. (MIT Technology Review)

5 What it’ll take to make fusion power viable
A handful of optimistic firms are confident their stations will be functional by the early 2030s. (Economist $)
+ What you really need to know about that fusion news. (MIT Technology Review)

6 Crypto’s climate emissions are still appalling
The industry may be down, but its carbon footprint is still crazily high. (The Atlantic $)
+ Ethereum moved to proof of stake. Why can’t Bitcoin? (MIT Technology Review)

7 The secret threat lurking within photo cropping tools
A bug is revealing people’s location data, even after they’d deliberately removed it. (Wired $)

8 Inside China’s aspirational ‘little red book’ app 🛍️
Xiaohongshu sells its users a glossy lifestyle that millions covet. (Rest of World)

9 Blockbuster is back, maybe 📼
Its website has mysteriously reactivated, a decade after the company shut down. (WP $)

10 What it’s like to be dumped by a chatbot
People are mourning the loss of their AI partners. (Bloomberg $)
+ Would you let ChatGPT write your wedding vows? These people would. (Vice)

Quote of the day

“A lot of this is a game of chicken.”

—James A. Lewis, who runs the cyberthreats program at the Center for Strategic and International Studies, tells the New York Times he doesn’t believe the US will actually ban TikTok. 

The big story

We used to get excited about technology. What happened?

October 2022

As a philosopher who studies AI and data, Shannon Vallor finds her Twitter feed is always filled with the latest tech news. Increasingly, she’s realized that the constant stream of information, detailing everything from Mark Zuckerberg’s dead-eyed metaverse cartoon avatar to Amazon’s Ring Nation surveillance reality show, is no longer inspiring joy, but a sense of resignation.

Joy is missing from our lives, and from our technology. Its absence is feeding a growing unease being voiced by many who work in tech or study it. Fixing it depends on understanding how and why the priorities in our tech ecosystem have changed, triggering a sea change in the entire model for innovation and the incentives that drive it. Read the full story.

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Pride and Prejudice reenacted with pinecones? Absolutely.
+ Wow, Labradors are no longer the USA’s favorite dog—but who’s the replacement?
+ Virginia Woolf’s take on Sex and the City courtesy of ChatGPT is… quite something.
+ This cat really, really wanted to play in the orchestra.
+ Stone the crows: why rock is such a popular medium for artists these days.

Taking stock of our climate past, present, and future

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

New Year’s Eve is my favorite holiday. It’s a time to celebrate, reflect, and look forward to what’s next. Setting goals, drinking champagne—what’s not to like? 

Before you say anything, I do know that it is, in fact, nearly April. But this week has the distinct feeling of a sort of climate change New Year’s to me. Not only is it the spring equinox this week, which is celebrated as the new year in some cultures (Happy Nowruz!), but we also saw a big UN climate report drop on Monday, which has me in a very contemplative mood.

The report comes from the UN Intergovernmental Panel on Climate Change (IPCC), a group of scientists that releases reports about the state of climate change research. 

The IPCC works in seven-year cycles, give or take. Each cycle, the group looks at all the published literature on climate change and puts together a handful of reports on different topics, leading up to a synthesis report that sums it all up. This week’s release was one of those synthesis reports. It follows one from 2014, and we should see another one around 2030. 

Because these reports are a sort of summary of existing research, I’ve been thinking about this moment as a time to reflect. So for the newsletter this week, I thought we could get in the new year’s spirit and take a look at where we’ve come from, where we are, and where we’re going on climate change. 

Climate past: 2014

Let’s start in 2014. The concentration of carbon dioxide in the atmosphere was just under 400 parts per million. The song “Happy” by Pharrell Williams was driving me slowly insane. And in November, the IPCC released its fifth synthesis report. 

Some bits of the 2014 IPCC synthesis report feel familiar. Its authors clearly laid out the case that human activity was causing climate change, adaptation wasn’t going to cut it, and the world would need to take action to limit greenhouse-gas emissions. I saw all those same lines in this year’s report. 

But there are also striking differences.  

First, we were in a different place politically. World leaders hadn’t yet signed the Paris agreement, the landmark treaty that set a goal to limit global warming to 2 °C (3.6 °F) above preindustrial levels, with a target of 1.5 °C (2.7 °F). The 2014 assessment report laid the groundwork for that agreement. 

Technology has also changed dramatically. The 2014 report put renewable energy on the table as a potential solution to replace fossil fuels and slow climate change. But renewables had yet to make a significant difference in emissions, partially because they were still so expensive (per watt, solar power was about five times more expensive than it is today!). 

Looking back, it’s frustrating just how clear the warnings were on climate change a decade ago. But it’s also a little bit heartening to see just how far we’ve come with awareness, political momentum, and technology. 

Climate present: 2023

Fast-forward nine years, or seven Taylor Swift albums. The year is 2023, carbon dioxide concentrations averaged 419 parts per million last year, and global temperatures are about 1.1 °C (2 °F) higher than they were before 1900. In March, the IPCC released its sixth synthesis report. 

Climate change has broken into the public conversation, with both supercharged disasters and momentous climate action to talk about. A movie about climate change was nominated for a 2022 Oscar. Nearly half the voters in the last US presidential election said climate change was very important to their vote, and 93% of Europeans believe that climate change is a serious problem. 

The US, the world’s leader in total historical emissions, passed landmark climate legislation, the largest in history. But emissions are still ticking up, hitting a new record high in 2022.

The 2023 IPCC synthesis report is more dire than its 2014 predecessor. Higher risks from climate change are now projected to come at lower levels of global warming. And it’s even more clear how crucial it is to act quickly. 

I spoke with one of the authors of the IPCC report, climate scientist Detlef Van Vuuren. One clear difference between the fifth and sixth reports is the urgency of this moment: “It’s crunch time, now,” he told me. 

The good news is there are a lot of solutions available right now. It’s possible for us to set ourselves up for success by 2030, when we could be well on our way to reaching our climate goals. The IPCC has handed out a climate to-do list that we need to get going on. For more on what’s on that list, check out my story from Monday. 

Climate future: 2030

By the time the next synthesis report comes out, around 2030, NASA may well have put humans on the moon again.

It will be clear by that time whether or not limiting global warming to 1.5 °C is still on the table. Right now, we’ve got just under a decade left of emissions-as-usual before we’ve sailed past that goal. 

Here’s what the world may need to look like in 2030 if we’re going to reach net-zero emissions by 2050 (which is what we’d need to do to hit the 1.5 °C target), according to a few of the International Energy Agency’s projections: 

That’s a lot of transformation, but then again, energy analysts have consistently underestimated the contributions renewables would be making. So who knows what 2030 might bring?

I’m always cautiously optimistic going into each new year, and that’s how I feel now too. The stakes are high, and there’s plenty to be worried about on climate change. But I look around and see a lot of potential progress ahead. 

Keeping up with climate

A city in Germany wants to store energy in aquifers underground. If the system works, it could help replace a coal power plant. (Bloomberg)

California could pass strict pollution rules for trucks. The regulations would jump-start electric trucking across the country. (Washington Post)

→ Here’s why the grid is ready for fleets of electric trucks. (MIT Technology Review)

We need the right kind of climate optimism: the kind that spurs action. (Vox)

Climate change is the star of the new show Extrapolations. (LA Times) Some argue it doesn’t do the topic justice, though. (Washington Post)

Tesla announced it would stop using rare-earth metals for magnets in its motors. Experts are skeptical. (IEEE Spectrum)

Going on an EV road trip has gotten easier in recent years, but there are still some speed bumps. (E&E News)

Heat pumps are commonplace in Japan and some other countries in Asia. Their success could be a blueprint for efficient heating and cooling in the rest of the world. (Canary Media)

→ Here’s how a heat pump really works. (MIT Technology Review)

Tractors that run on cow manure could help farmers get around while cutting methane emissions. (Bloomberg)

Lithium prices are falling, making EVs that use the metal in their batteries cheaper. But rising demand could turn things back around soon. (New York Times)

New Mexico is putting up a fight against a proposed storage facility for nuclear waste in the state. (Associated Press)

Amazon is about to go head to head with SpaceX in a battle for satellite internet dominance 

Elon Musk and Jeff Bezos are about to lock horns once again. Last month, the US Federal Communications Commission approved the final aspects of Project Kuiper, Amazon’s effort to deliver high-speed internet access from space. In May, the company will launch test versions of the Kuiper communications satellites in an attempt to take on SpaceX’s own venture, Starlink, and tap into a market of perhaps hundreds of millions of prospective internet users.

Other companies are hoping to do the same, and a few are already doing so, but Starlink and Amazon are the major players. “It is really a head-to-head rivalry,” says Tim Farrar, a satellite expert from the firm TMF Associates in the US. 

The rocket that will launch Amazon’s first two Kuiper satellites—the United Launch Alliance’s new Vulcan Centaur rocket—has been assembled at Cape Canaveral in Florida. Its inaugural launch is set to fly two prototype Kuiper satellites, called KuiperSat-1 and KuiperSat-2, as early as May 4. Ultimately, Amazon plans to launch a total of 3,236 full Kuiper satellites by 2029. The first of that fleet could launch in early 2024.

“They have ambitions to be disruptive across the technology sector,” says Farrar. “It’s hardly surprising that they’ve jumped in here.”

In the past few years, companies have been trying to expand access to the internet via satellite, both as commercial ventures and to supply internet to those in remote locations without otherwise easy access. Starlink, the mega-constellation of more than 3,500 satellites built by Musk’s SpaceX, is the biggest of these ventures. 

Amazon announced Project Kuiper in 2019, the same year Starlink began launching, leading Musk to tweet that Bezos, then the company’s CEO, was a “copycat.” Others are in development too, such as the UK-based OneWeb, which currently has more than 500 satellites. But Farrar says the key competition is between SpaceX and Amazon.

To take on SpaceX, last year Amazon revealed it had essentially bought all the spare rocket launch capacity in the world (although with little effect on its rival, because SpaceX launches satellites on its own rockets). Thanks to Amazon’s multibillion-dollar deals with United Launch Alliance, Bezos’s Blue Origin in the US, and Arianespace in Europe, Project Kuiper satellites are expected to fly on 92 different launches over the next five years.

The rapid launch cadence is important. Under its license with the FCC, Amazon has until July 2026 to launch half its constellation. “We are on track to meet that deadline,” an Amazon spokesperson said. Last month, the FCC gave Amazon the full green light to begin launching its satellites after the company finalized details of its plan to address concerns about its potential to increase space junk.

But there is a catch: none of the rockets Amazon has bought a ride on has yet made it to space (in fact, one launch vehicle Amazon had initially planned to use exploded in January). “Those rockets are largely behind schedule,” says Farrar.

The satellites are meant to orbit at an altitude of about 600 kilometers and cover latitudes from Canada to Argentina, reaching “95% of the world’s population,” the Amazon spokesperson said. “Our constellation will serve individual households, as well as businesses, schools, hospitals, government agencies, and other organizations operating in locations without reliable broadband.” 

Amazon has applied to the FCC to increase its constellation to 7,774 satellites, which would allow it to cover regions further north and south, including Alaska, as Starlink does.

There are riches to be had: SpaceX currently charges $110 a month to access Starlink, with an up-front cost of $599 for an antenna to connect to the satellites. According to a letter to shareholders last year, Amazon is spending “over $10 billion” to develop Kuiper, with more than 1,000 employees working on the project. Andy Jassy, Amazon’s current CEO, has said that Kuiper has a chance of becoming a “fourth pillar” for the company, alongside its retail marketplace, Amazon Prime, and its widely used cloud computing service, Amazon Web Services.

“Amazon’s business model relies on people having internet connectivity,” says Shagun Sachdeva, an industry expert at the space investment firm Kosmic Apple in France. “It makes a lot of sense for them to have this constellation to provide connectivity.”

Amazon is not yet disclosing the pricing of its service but has previously said a goal is to “bridge the digital divide” by bringing fast and affordable broadband to “underserved communities,” an ambition Starlink has also professed. But whether costs will ever get low enough for that to be achievable remains to be seen. “Costs will come down, but to what extent is really the question,” says Sachdeva. On March 14, the company revealed it was producing its own antennas at a cost of $400 for a standard antenna, although a retail cost has not yet been revealed.

Amazon has said it can offer speeds of up to one gigabit per second, and bandwidth of one terabit per second. Those are similar to Starlink’s numbers, and the two services seem fairly similar overall. The key difference is that Starlink is operational, and has been for years, whereas Amazon does not plan to start offering Kuiper as a service until the latter half of 2024, giving SpaceX a considerable head start to attract users and secure contracts.

The astronomy problem

There remain concerns, too, about space junk and the impact on ground-based astronomy. Before 2019 there were only about 3,000 active satellites in space. SpaceX and Amazon by themselves could increase that number to 20,000 by the end of this decade. Tracking large numbers of moving objects in orbit—and making sure they don’t collide with one another—is a headache.

“I’m not satisfied that we can safely sustain [even] one of these systems in orbit,” says Hugh Lewis, a space debris expert at the University of Southampton in the UK, who has tracked thousands of close calls between Starlink, OneWeb, and other satellites. “They’re continually rolling the dice. At some point, in spite of all their best efforts, I think there will be a collision.”

Amazon’s spokesperson said the company had “designed our system and operational parameters with space safety in mind.” When satellites finish their mission, the spokesperson added, they will be removed from orbit within one year using onboard thrusters, and in the case of satellite failure, atmospheric drag will “help ensure any remaining satellites will deorbit naturally.”

Amazon has not revealed the size of its satellites, but—like Starlink’s—they might reflect enough sunlight to pose a problem to astronomers and even change the appearance of the night sky. Attempts to lessen the impact satellites have on astronomy have been moderately successful at best, with the satellites appearing particularly bright at twilight. Telescope observations of the universe are already affected by bright satellite streaks, and the problem is likely to worsen in the future.  

Amazon has said it is working with astronomers on the issue. “Reflectivity is a key consideration in our design and development process,” the company spokesperson said. “We’ve already made a number of design and operational decisions that will help reduce our impact on astronomical observations.”

If the problem cannot fully be solved, however, some aspects of astronomy will become much more difficult or even impossible. “Starlink has not managed to make their satellites nearly as faint as they promised,” says Samantha Lawler, an astronomer at the University of Regina in Canada. “I’m quite worried what the sky will look like with yet another company launching thousands of potentially bright satellites.”

With the capacity to build up to four satellites per day, Amazon plans to progress rapidly. After its first two test satellites have launched, the rest could come thick and fast. Can the company take on Musk? “That’s the big question,” says Farrar. “They have to move quickly.”

This story was updated on 23 March to clarify the figure of $400 is the cost to build a standard Kuiper antenna and to correct a typo regarding Project Kuiper’s bandwidth.