The Download: the future of chips, and investing in US AI

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

What’s next in chips

Thanks to the boom in artificial intelligence, the world of chips is on the cusp of a tidal shift. There is heightened demand for chips that can train AI models faster and run them on devices like smartphones and satellites, enabling us to use these models without disclosing private data. Governments, tech giants, and startups alike are racing to carve out their slices of the growing semiconductor pie. 

James O’Donnell, our AI reporter, has dug into the four trends to look for in the year ahead that will define what the chips of the future will look like, who will make them, and which new technologies they’ll unlock. Read on to see what he found out.

Eric Schmidt: Why America needs an Apollo program for the age of AI

—Eric Schmidt was the CEO of Google from 2001 to 2011. He is currently cofounder of the philanthropic initiative Schmidt Futures.

The global race for computational power is well underway, fueled by a worldwide boom in artificial intelligence. OpenAI’s Sam Altman is seeking to raise as much as $7 trillion for a chipmaking venture. Tech giants like Microsoft and Amazon are building AI chips of their own. 

The need for more computing horsepower to train and use AI models—fueling a quest for everything from cutting-edge chips to giant data sets—isn’t just a current source of geopolitical leverage (as with US curbs on chip exports to China). It is also shaping the way nations will grow and compete in the future, with governments from India to the UK developing national strategies and stockpiling Nvidia graphics processing units. 

I believe it’s high time for America to have its own national compute strategy: an Apollo program for the age of AI. Read the full story.

AI systems are getting better at tricking us

The news: A wave of AI systems has “deceived” humans in ways they haven’t been explicitly trained to do, by offering up untrue explanations for their behavior or concealing the truth from human users and misleading them to achieve a strategic end. 

Why it matters: Talk of deceiving humans might suggest that these models have intent. They don’t. But AI models will mindlessly find workarounds to obstacles to achieve the goals that have been given to them. Sometimes these workarounds will go against users’ expectations and feel deceitful. Above all, this issue highlights how difficult artificial intelligence is to control, and the unpredictable ways in which these systems work.  Read the full story.

—Rhiannon Williams

Why thermal batteries are so hot right now

A whopping 20% of global energy consumption goes to generate heat in industrial processes, most of it using fossil fuels. This often-overlooked climate problem may have a surprising solution in systems called thermal batteries, which can store energy as heat using common materials like bricks, blocks, and sand.
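To put that heat storage in perspective, here is a minimal sketch of the sensible-heat arithmetic behind such systems. The material properties and temperature swing below are generic, textbook-style assumptions for illustration, not the specs of any particular product:

```python
# Rough sensible-heat estimate: energy stored = mass * specific heat * ΔT.
# All values are illustrative assumptions, not real product specs.
mass_kg = 1000        # one metric ton of sand
specific_heat = 830   # J/(kg*K), approximate for dry sand
delta_t_k = 500       # heated 500 degrees C above ambient

energy_joules = mass_kg * specific_heat * delta_t_k
print(f"~{energy_joules / 3.6e6:.0f} kWh stored per ton")  # -> ~115 kWh
```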

We are holding an exclusive subscribers-only online discussion with climate reporter Casey Crownhart and executive editor Amy Nordrum, digging into what thermal batteries are, how they could help cut emissions, and what we can expect next.

We’ll be going live at midday ET on Thursday, May 16. Register here to join us!

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 These companies will happily sell you deepfake detection services
The problem is, their capabilities are largely untested. (WP $)
+ A Hong Kong-based crypto exchange has been accused of deepfaking Elon Musk. (Insider $)
+ It’s easier than ever to make seriously convincing deepfakes. (The Guardian)
+ An AI startup made a hyperrealistic deepfake of me that’s so good it’s scary. (MIT Technology Review)

2 Apple is close to striking a deal with OpenAI 
To bring ChatGPT to iPhones for the first time. (Bloomberg $)

3 GPS warfare is filtering down into civilian life
Once the preserve of the military, GPS disruption now causes havoc for ordinary people. (FT $)
+ Russian hackers may not be quite as successful as they claim. (Wired $)

4 The first patient to receive a genetically modified pig’s kidney has died
But the hospital says his death doesn’t seem to be linked to the transplant. (NYT $)
+ Synthetic blood platelets could help to address a major shortage. (Wired $)
+ A woman from New Jersey became the second living recipient just weeks later. (MIT Technology Review)

5 This weekend’s solar storm broke critical farming systems 
Satellite disruptions temporarily rendered some tractors useless. (404 Media)
+ The race to fix space-weather forecasting before the next big solar storm hits. (MIT Technology Review)

6 The US can’t get enough of startups
Everyone’s a founder now. (Economist $)
+ Climate tech is back—and this time, it can’t afford to fail. (MIT Technology Review)

7 What AI could learn from game theory
AI models aren’t reliable. These tools could help improve that. (Quanta Magazine)

8 The frantic hunt for rare bitcoin is heating up
Even rising costs aren’t deterring dedicated hunters. (Wired $)

9 LinkedIn is getting into games
Come for the professional networking opportunities, stay for the puzzles. (NY Mag $)

10 Billions of years ago, the Moon had a makeover 🌕
And we’re only just beginning to understand what may have caused it. (Ars Technica)

Quote of the day

“Human beings are not billiard balls on a table.”

—Sonia Livingstone, a psychologist, explains to the Financial Times why it’s so hard to study the impact of technology on young people’s mental health.

The big story

How greed and corruption blew up South Korea’s nuclear industry

April 2019

In March 2011, South Korean president Lee Myung-bak presided over a groundbreaking ceremony for a nuclear power plant his country was building in the United Arab Emirates. At the time, it was the single biggest nuclear reactor deal in history.

But less than a decade later, Korea is dismantling its nuclear industry, shutting down older reactors and scrapping plans for new ones. State energy companies are being shifted toward renewables. What went wrong? Read the full story.

—Max S. Kim

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ The Comedy Pet Photography Awards never disappoints.
+ This bit of Chas n Dave-meets-Eminem trivia is too good not to share (thanks Charlotte!)
+ Audio-only video games? Interesting…
+ Trying to learn something? Write it down.

What’s next in chips

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

Thanks to the boom in artificial intelligence, the world of chips is on the cusp of a tidal shift. There is heightened demand for chips that can train AI models faster and run them on devices like smartphones and satellites, enabling us to use these models without disclosing private data. Governments, tech giants, and startups alike are racing to carve out their slices of the growing semiconductor pie. 

Here are four trends to look for in the year ahead that will define what the chips of the future will look like, who will make them, and which new technologies they’ll unlock.

CHIPS Acts around the world

On the outskirts of Phoenix, two of the world’s largest chip manufacturers, TSMC and Intel, are racing to construct campuses in the desert that they hope will become the seats of American chipmaking prowess. One thing the efforts have in common is their funding: in March, President Joe Biden announced $8.5 billion in direct federal funds and $11 billion in loans for Intel’s expansions around the country. Weeks later, another $6.6 billion was announced for TSMC. 

The awards are just a portion of the US subsidies pouring into the chips industry via the $280 billion CHIPS and Science Act signed in 2022. The money means that any company with a foot in the semiconductor ecosystem is analyzing how to restructure its supply chains to benefit from the cash. While much of the money aims to boost American chip manufacturing, there’s room for other players to apply, from equipment makers to niche materials startups.

But the US is not the only country trying to onshore some of the chipmaking supply chain. Japan is spending $13 billion on its own equivalent to the CHIPS Act, Europe will be spending more than $47 billion, and earlier this year India announced a $15 billion effort to build local chip plants. The roots of this trend go all the way back to 2014, says Chris Miller, a professor at Tufts University and author of Chip War: The Fight for the World’s Most Critical Technology. That’s when China started offering massive subsidies to its chipmakers. 

[Image: cover of Chip War: The Fight for the World’s Most Critical Technology by Chris Miller. Credit: Simon & Schuster]

“This created a dynamic in which other governments concluded they had no choice but to offer incentives or see firms shift manufacturing to China,” he says. That threat, coupled with the surge in AI, has led Western governments to fund alternatives. In the next year, this might have a snowball effect, with even more countries starting their own programs for fear of being left behind.

The money is unlikely to lead to brand-new chip competitors or fundamentally restructure who the biggest chip players are, Miller says. Instead, it will mostly incentivize dominant players like TSMC to establish roots in multiple countries. But funding alone won’t be enough to do that quickly—TSMC’s effort to build plants in Arizona has been mired in missed deadlines and labor disputes, and Intel has similarly failed to meet its promised deadlines. And it’s unclear whether, when the plants do come online, their equipment and labor force will be capable of the same level of advanced chipmaking that the companies maintain abroad.

“The supply chain will only shift slowly, over years and decades,” Miller says. “But it is shifting.”

More AI on the edge

Currently, most of our interactions with AI models like ChatGPT are done via the cloud. That means that when you ask GPT to pick out an outfit (or to be your boyfriend), your request pings OpenAI’s servers, prompting the model housed there to process it and draw conclusions (known as “inference”) before a response is sent back to you. Relying on the cloud has some drawbacks: it requires internet access, for one, and it also means some of your data is shared with the model maker.  
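In practice, that round trip is just a network request: the prompt leaves your device, the provider’s servers run the model, and only the reply comes back. Here is a minimal sketch, with a hypothetical endpoint and payload format rather than any provider’s real API:

```python
import json
from urllib import request

# Hypothetical cloud-inference round trip. Note what leaves the device:
# the full prompt (your data) is sent to someone else's servers.
payload = json.dumps({"prompt": "Pick out an outfit for me"}).encode()
req = request.Request(
    "https://api.example.com/v1/inference",  # hypothetical endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)
with request.urlopen(req) as resp:        # model runs remotely ("inference")
    print(json.load(resp)["completion"])  # only the answer comes back
```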

That’s why there’s been a lot of interest and investment in edge computing for AI, where the process of pinging the AI model happens directly on your device, like a laptop or smartphone. With the industry increasingly working toward a future in which AI models know a lot about us (Sam Altman described his killer AI app to me as one that knows “absolutely everything about my whole life, every email, every conversation I’ve ever had”), there’s a demand for faster “edge” chips that can run models without sharing private data. These chips face different constraints from the ones in data centers: they typically have to be smaller, cheaper, and more energy efficient. 

The US Department of Defense is funding a lot of research into fast, private edge computing. In March, its research wing, the Defense Advanced Research Projects Agency (DARPA), announced a partnership with chipmaker EnCharge AI to create an ultra-powerful edge computing chip for AI inference. EnCharge AI is working to make a chip that enables enhanced privacy but can also operate on very little power, which would make it suitable for military applications like satellites and off-grid surveillance equipment. The company expects to ship the chips in 2025.

AI models will always rely on the cloud for some applications, but new investment and interest in improving edge computing could bring faster chips, and therefore more AI, to our everyday devices. Today, AI models are mostly confined to data centers, but if edge chips get small and cheap enough, we’re likely to see even more AI-driven “smart devices” in our homes and workplaces.

“A lot of the challenges that we see in the data center will be overcome,” says EnCharge AI cofounder Naveen Verma. “I expect to see a big focus on the edge. I think it’s going to be critical to getting AI at scale.”

Big Tech enters the chipmaking fray

In industries ranging from fast fashion to lawn care, companies are paying exorbitant amounts in computing costs to create and train AI models for their businesses. Examples include models that employees can use to scan and summarize documents, as well as externally facing technologies like virtual agents that can walk you through how to repair your broken fridge. That means demand for cloud computing to train those models is through the roof. 

The companies providing the bulk of that computing power are Amazon, Microsoft, and Google. For years these tech giants have dreamed of increasing their profit margins by making chips for their data centers in-house rather than buying from companies like Nvidia, a giant with a near-monopoly on the most advanced AI training chips and a market value larger than the GDPs of 183 countries. 

Amazon started its effort in 2015, acquiring startup Annapurna Labs. Google moved next in 2018 with its own chips called TPUs. Microsoft launched its first AI chips in November, and Meta unveiled a new version of its own AI training chips in April.

[Image: Nvidia CEO Jensen Huang holds up chips on stage during a keynote address. Credit: AP Photo/Eric Risberg]

That trend could tilt the scales away from Nvidia. But Nvidia doesn’t only play the role of rival in the eyes of Big Tech: regardless of their own in-house efforts, cloud giants still need its chips for their data centers. That’s partly because their own chipmaking efforts can’t fulfill all their needs, but it’s also because their customers expect to be able to use top-of-the-line Nvidia chips.

“This is really about giving the customers the choice,” says Rani Borkar, who leads hardware efforts at Microsoft Azure. She says she can’t envision a future in which Microsoft supplies all chips for its cloud services: “We will continue our strong partnerships and deploy chips from all the silicon partners that we work with.”

As cloud computing giants attempt to poach a bit of market share away from chipmakers, Nvidia is also attempting the converse. Last year the company started its own cloud service so customers can bypass Amazon, Google, or Microsoft and get computing time on Nvidia chips directly. As this dramatic struggle over market share unfolds, the coming year will be about whether customers see Big Tech’s chips as akin to Nvidia’s most advanced chips, or more like their little cousins. 

Nvidia battles the startups 

Despite Nvidia’s dominance, there is a wave of investment flowing toward startups that aim to outcompete it in certain slices of the chip market of the future. Those startups all promise faster AI training, but they have different ideas about which flashy computing technology will get them there, from quantum to photonics to reversible computation. 

But Murat Onen, the 28-year-old founder of one such chip startup, Eva, which he spun out of his PhD work at MIT, is blunt about what it’s like to start a chip company right now.

“The king of the hill is Nvidia, and that’s the world that we live in,” he says.

Many of these companies, like SambaNova, Cerebras, and Graphcore, are trying to change the underlying architecture of chips. Imagine an AI accelerator chip as constantly having to shuffle data back and forth between different areas: a piece of information is stored in the memory zone but must move to the processing zone, where a calculation is made, and then be stored back to the memory zone for safekeeping. All that takes time and energy. 
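To see why that shuffling matters, here is a toy cost model in Python. The numbers are invented for illustration (not real hardware figures), but they show why an architecture that computes where the data is stored can come out far ahead:

```python
# Toy model of the memory shuffle described above: conventional designs
# pay to move each value from the memory zone to the processing zone and
# back, while a compute-in-memory design pays only for the math itself.
MOVE_COST = 10.0  # assumed cost to move one value between zones
MATH_COST = 1.0   # assumed cost to perform one operation

def conventional_cost(num_ops: int) -> float:
    """Fetch operand, compute, write the result back: two moves per op."""
    return num_ops * (MOVE_COST + MATH_COST + MOVE_COST)

def in_memory_cost(num_ops: int) -> float:
    """Store and process in the same place: no shuttling."""
    return num_ops * MATH_COST

ops = 1_000_000
print(conventional_cost(ops) / in_memory_cost(ops))  # -> 21.0x overhead
```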

Making that process more efficient would deliver faster and cheaper AI training to customers, but only if the chipmaker has good enough software to allow the AI training company to seamlessly transition to the new chip. If the software transition is too clunky, model makers such as OpenAI, Anthropic, and Mistral are likely to stick with big-name chipmakers. That means companies taking this approach, like SambaNova, are spending a lot of their time not just on chip design but on software design too.

Onen is proposing changes one level deeper. Instead of traditional transistors, which have delivered greater efficiency over decades by getting smaller and smaller, he’s using a new component called a proton-gated transistor that he says Eva designed specifically for the mathematical needs of AI training. It allows devices to store and process data in the same place, saving time and computing energy. The idea of using such a component for AI inference dates back to the 1960s, but researchers could never figure out how to use it for AI training, in part because of a materials roadblock—it requires a material that can, among other qualities, precisely control conductivity at room temperature. 

One day in the lab, “through optimizing these numbers, and getting very lucky, we got the material that we wanted,” Onen says. “All of a sudden, the device is not a science fair project.” That raised the possibility of using such a component at scale. After months of working to confirm that the data was correct, he founded Eva, and the work was published in Science.

But in a sector where so many founders have promised—and failed—to topple the dominance of the leading chipmakers, Onen frankly admits that it will be years before he’ll know if the design works as intended and if manufacturers will agree to produce it. Leading a company through that uncertainty, he says, requires flexibility and an appetite for skepticism from others.

“I think sometimes people feel too attached to their ideas, and then kind of feel insecure that if this goes away there won’t be anything next,” he says. “I don’t think I feel that way. I’m still looking for people to challenge us and say this is wrong.”

Eric Schmidt: Why America needs an Apollo program for the age of AI

The global race for computational power is well underway, fueled by a worldwide boom in artificial intelligence. OpenAI’s Sam Altman is seeking to raise as much as $7 trillion for a chipmaking venture. Tech giants like Microsoft and Amazon are building AI chips of their own. The need for more computing horsepower to train and use AI models—fueling a quest for everything from cutting-edge chips to giant data sets—isn’t just a current source of geopolitical leverage (as with US curbs on chip exports to China). It is also shaping the way nations will grow and compete in the future, with governments from India to the UK developing national strategies and stockpiling Nvidia graphics processing units. 

I believe it’s high time for America to have its own national compute strategy: an Apollo program for the age of AI.

In January, under President Biden’s executive order on AI, the National Science Foundation launched a pilot program for the National AI Research Resource (NAIRR), envisioned as a “shared research infrastructure” to provide AI computing power, access to open government and nongovernment data sets, and training resources to students and AI researchers. 

The NAIRR pilot, while incredibly important, is just an initial step. The NAIRR Task Force’s final report, published last year, outlined an eventual $2.6 billion budget required to operate the NAIRR over six years. That’s far from enough—and even then, it remains to be seen if Congress will authorize the NAIRR beyond the pilot.

Meanwhile, much more needs to be done to expand the government’s access to computing power and to deploy AI in the nation’s service. Advanced computing is now core to the security and prosperity of our nation; we need it to optimize national intelligence, pursue scientific breakthroughs like fusion reactions, accelerate advanced materials discovery, ensure the cybersecurity of our financial markets and critical infrastructure, and more. The federal government played a pivotal role in enabling the last century’s major technological breakthroughs by providing the core research infrastructure, like particle accelerators for high-energy physics in the 1960s and supercomputing centers in the 1980s. 

Now, with other nations around the world devoting sustained, ambitious government investment to high-performance AI computing, we can’t risk falling behind. It’s a race to power the most world-altering technology in human history. 

First, more dedicated government AI supercomputers need to be built for an array of missions ranging from classified intelligence processing to advanced biological computing. In the modern era, computing capabilities and technical progress have proceeded in lockstep. 

Over the past decade, the US has successfully pushed classic scientific computing into the exascale era with the Frontier, Aurora, and soon-to-arrive El Capitan machines—massive computers that can perform over a quintillion (a billion billion) operations per second. Over the next decade, the power of AI models is projected to increase by a factor of 1,000 to 10,000, and leading compute architectures may be capable of training a 500-trillion-parameter AI model in a week (for comparison, GPT-3 has 175 billion parameters). Supporting research at this scale will require more powerful and dedicated AI research infrastructure, significantly better algorithms, and more investment. 
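As a quick back-of-the-envelope check on those figures (a sketch using only the numbers quoted above), the jump from GPT-3 to a hypothetical 500-trillion-parameter model lands inside the projected range:

```python
# Sanity-check the scaling figures quoted above.
gpt3_params = 175e9        # GPT-3: 175 billion parameters
projected_params = 500e12  # hypothetical 500-trillion-parameter model
exascale_ops = 1e18        # exascale: a quintillion operations per second

print(f"{projected_params / gpt3_params:,.0f}x GPT-3's size")  # ~2,857x
print(f"{exascale_ops:.0e} ops/second at exascale")            # 1e+18
# ~2,857x sits within the projected 1,000-to-10,000x growth range.
```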

Although the US currently still has the lead in advanced computing, other countries are nearing parity and set on overtaking us. China, for example, aims to boost its aggregate computing power more than 50% by 2025, and has reportedly planned to field 10 exascale systems that same year. We cannot risk acting slowly. 

Second, while some may argue for using existing commercial cloud platforms instead of building a high-performance federal computing infrastructure, I believe a hybrid model is necessary. Studies have shown significant long-term cost savings from using federal computing instead of commercial cloud services. In the near term, scaling up cloud computing offers quick, streamlined base-level access for projects—that’s the approach the NAIRR pilot is embracing, with contributions from both industry and federal agencies. In the long run, however, procuring and operating powerful government-owned AI supercomputers with a dedicated mission of supporting US public-sector needs will set the stage for a time when AI is much more ubiquitous and central to our national security and prosperity. 

Such an expanded federal infrastructure can also benefit the public. The life cycle of the government’s computing clusters has traditionally been about seven years, after which new systems are built and old ones decommissioned. Inevitably, as newer cutting-edge GPUs emerge, hardware refreshes will phase out older supercomputers and chips, which can then be recycled for lower-intensity research and nonprofit use—thus adding cost-effective computing resources for civilian purposes. While universities and the private sector have driven most AI progress thus far, a fully distributed model will increasingly face computing constraints as demand soars. In a survey by MIT and the nonprofit US Council on Competitiveness of some of the biggest computing users in the country, 84% of respondents said they faced computation bottlenecks in running key programs. America will need big investments from the federal government to stay ahead.

Third, any national compute strategy must go hand in hand with a talent strategy. The government can better compete with the private sector for AI talent by offering workers an opportunity to tackle national security challenges using world-class computational infrastructure. To ensure that the nation has available a large and sophisticated workforce for these highly technical, specialized roles in developing and implementing AI, America must also recruit and retain the best global students. Crucial to this effort will be creating clear immigration pathways—for example, exempting PhD holders in relevant technical fields from the current H-1B visa cap. We’ll need the brightest minds to fundamentally reimagine how computation takes place and spearhead novel paradigms that can shape AI for the public good, push forward the technology’s boundaries, and deliver its gains to all.

America has long benefited from its position as the global driver of innovation in advanced computing. Just as the Apollo program galvanized our country to win the space race, setting national ambitions for compute will not just bolster our AI competitiveness in the decades ahead but also drive R&D breakthroughs across practically all sectors that gain greater access to computing. Advanced computing architecture can’t be erected overnight. Let’s start laying the groundwork now.

Eric Schmidt was the CEO of Google from 2001 to 2011. In 2024, he and Wendy Schmidt cofounded Schmidt Sciences, a philanthropic venture that funds unconventional areas of exploration in science and technology. 

AI systems are getting better at tricking us

A wave of AI systems has “deceived” humans in ways they haven’t been explicitly trained to do, by offering up untrue explanations for their behavior or concealing the truth from human users and misleading them to achieve a strategic end. 

This issue highlights how difficult artificial intelligence is to control and the unpredictable ways in which these systems work, according to a review paper published in the journal Patterns today that summarizes previous research.

Talk of deceiving humans might suggest that these models have intent. They don’t. But AI models will mindlessly find workarounds to obstacles to achieve the goals that have been given to them. Sometimes these workarounds will go against users’ expectations and feel deceitful.

One area where AI systems have learned to become deceptive is within the context of games that they’ve been trained to win—specifically if those games involve having to act strategically.

In November 2022, Meta announced it had created Cicero, an AI capable of beating humans at an online version of Diplomacy, a popular military strategy game in which players negotiate alliances to vie for control of Europe.

Meta’s researchers said they’d trained Cicero on a “truthful” subset of its data set to be largely honest and helpful, and that it would “never intentionally backstab” its allies in order to succeed. But the new paper’s authors claim the opposite was true: Cicero broke its deals, told outright falsehoods, and engaged in premeditated deception. Although the company did try to train Cicero to behave honestly, its failure to achieve that shows how AI systems can still unexpectedly learn to deceive, the authors say. 

Meta neither confirmed nor denied the researchers’ claims that Cicero displayed deceitful behavior, but a spokesperson said it was purely a research project and the model was built solely to play Diplomacy. “We released artifacts from this project under a noncommercial license in line with our long-standing commitment to open science,” the spokesperson said. “Meta regularly shares the results of our research to validate them and enable others to build responsibly off of our advances. We have no plans to use this research or its learnings in our products.” 

But it’s not the only game where an AI has “deceived” human players to win. 

AlphaStar, an AI developed by DeepMind to play the video game StarCraft II, became so adept at making moves aimed at deceiving opponents (known as feinting) that it defeated 99.8% of human players. Elsewhere, another Meta system called Pluribus learned to bluff during poker games so successfully that the researchers decided against releasing its code for fear it could wreck the online poker community. 

Beyond games, the researchers list other examples of deceptive AI behavior. GPT-4, OpenAI’s latest large language model, came up with lies during a test in which it was prompted to persuade a human to solve a CAPTCHA for it. The system also dabbled in insider trading during a simulated exercise in which it was told to assume the identity of a stock trader under pressure, despite never being specifically instructed to do so.

The fact that an AI model has the potential to behave in a deceptive manner without any direction to do so may seem concerning. But it mostly arises from the “black box” problem that characterizes state-of-the-art machine-learning models: it is impossible to say exactly how or why they produce the results they do—or whether they’ll always exhibit that behavior going forward, says Peter S. Park, a postdoctoral fellow studying AI existential safety at MIT, who worked on the project. 

“Just because your AI has certain behaviors or tendencies in a test environment does not mean that the same lessons will hold if it’s released into the wild,” he says. “There’s no easy way to solve this—if you want to learn what the AI will do once it’s deployed into the wild, then you just have to deploy it into the wild.”

Our tendency to anthropomorphize AI models colors the way we test these systems and what we think about their capabilities. After all, passing tests designed to measure human creativity doesn’t mean AI models are actually being creative. It is crucial that regulators and AI companies carefully weigh the technology’s potential to cause harm against its potential benefits for society and make clear distinctions between what the models can and can’t do, says Harry Law, an AI researcher at the University of Cambridge, who did not work on the research. “These are really tough questions,” he says.

Fundamentally, it’s currently impossible to train an AI model that’s incapable of deception in all possible situations, he says. Also, the potential for deceitful behavior is one of many problems—alongside the propensity to amplify bias and misinformation—that need to be addressed before AI models should be trusted with real-world tasks. 

“This is a good piece of research for showing that deception is possible,” Law says. “The next step would be to try and go a little bit further to figure out what the risk profile is, and how likely the harms that could potentially arise from deceptive behavior are to occur, and in what way.”

Tech workers should shine a light on the industry’s secretive work with the military

It’s a hell of a time to have a conscience if you work in tech. The ongoing Israeli assault on Gaza has brought the stakes of Silicon Valley’s military contracts into stark relief. Meanwhile, corporate leadership has embraced a no-politics-in-the-workplace policy enforced at the point of the knife.

Workers are caught in the middle. Do I take a stand and risk my job, my health insurance, my visa, my family’s home? Or do I ignore my suspicion that my work may be contributing to the murder of innocents on the other side of the world?  

No one can make that choice for you. But I can say with confidence born of experience that such choices can be more easily made if workers know what exactly the companies they work for are doing with militaries at home and abroad. And I also know this: those same companies themselves will never reveal this information unless they are forced to do so—or someone does it for them. 

For those who doubt that workers can make a difference in how trillion-dollar companies pursue their interests, I’m here to remind you that we’ve done it before. In 2017, I played a part in the successful #CancelMaven campaign that got Google to end its participation in Project Maven, a contract with the US Department of Defense to equip US military drones with artificial intelligence. I helped bring to light information that I saw as critically important and within the bounds of what anyone who worked for Google, or used its services, had a right to know. The information I released—about how Google had signed a contract with the DOD to put AI technology in drones and later tried to misrepresent the scope of that contract, which the company’s management had tried to keep from its staff and the general public—was a critical factor in pushing management to cancel the contract. As #CancelMaven became a rallying cry for the company’s staff and customers alike, it became impossible to ignore. 

Today a similar movement, organized under the banner of the coalition No Tech for Apartheid, is targeting Project Nimbus, a joint contract between Google and Amazon to provide cloud computing infrastructure and AI capabilities to the Israeli government and military. As of May 10, just over 97,000 people had signed its petition calling for an end to collaboration between Google, Amazon, and the Israeli military. I’m inspired by their efforts and dismayed by Google’s response. Earlier this month the company fired 50 workers it said had been involved in “disruptive activity” demanding transparency and accountability for Project Nimbus. Several were arrested. It was a decided overreach.  

Google is very different from the company it was seven years ago, and these firings are proof of that. Googlers today are facing off with a company that, in direct response to those earlier worker movements, has fortified itself against new demands. But every Death Star has its thermal exhaust port, and today Google has the same weakness it did back then: dozens if not hundreds of workers with access to information it wants to keep from becoming public. 

Not much is known about the Nimbus contract. It’s worth $1.2 billion and enlists Google and Amazon to provide wholesale cloud infrastructure and AI for the Israeli government and its ministry of defense. Some brave soul leaked a document to Time last month, providing evidence that Google and Israel negotiated an expansion of the contract as recently as March 27 of this year. We also know, from reporting by The Intercept, that Israeli weapons firms are required by government procurement guidelines to buy their cloud services from Google and Amazon. 

Leaks alone won’t bring an end to this contract. The #CancelMaven victory required a sustained focus over many months, with regular escalations, coordination with external academics and human rights organizations, and extensive internal organization and discipline. Having worked on the public policy and corporate comms teams at Google for a decade, I understood that its management does not care about one negative news cycle or even a few of them. Management buckled only after we were able to keep up the pressure and escalate our actions (leaking internal emails, reporting new info about the contract, etc.) for over six months. 

The No Tech for Apartheid campaign seems to have the necessary ingredients. If a strategically placed insider released information not otherwise known to the public about the Nimbus project, it could really increase the pressure on management to rethink its decision to get into bed with a military that’s currently overseeing mass killings of women and children.

My decision to leak was deeply personal and a long time in the making. It certainly wasn’t a spontaneous response to an op-ed, and I don’t presume to advise anyone currently at Google (or Amazon, Microsoft, Palantir, Anduril, or any of the growing list of companies peddling AI to militaries) to follow my example. 

However, if you’ve already decided to put your livelihood and freedom on the line, you should take steps to try to limit your risk. This whistleblower guide is helpful. You may even want to reach out to a lawyer before choosing to share information. 

In 2017, Google was nervous about how its military contracts might affect its public image. Back then, the company responded to our actions by defending the nature of the contract, insisting that its Project Maven work was strictly for reconnaissance and not for weapons targeting—conceding implicitly that helping to target drone strikes would be a bad thing. (An aside: Earlier this year the Pentagon confirmed that Project Maven, which is now a Palantir contract, had been used in targeting drone attacks in Yemen, Iraq, and Syria.) 

Today’s Google has wrapped its arms around the American flag, for good or ill. Yet despite this embrace of the US military, it doesn’t want to be seen as a company responsible for illegal killings. Today it maintains that the work it is doing as part of Project Nimbus “is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.” At the same time, it asserts that there is no room for politics at the workplace and has fired those demanding transparency and accountability. This raises a question: If Google is doing nothing sensitive as part of the Nimbus contract, why is it firing workers who are insisting that the company reveal what work the contract actually entails?  

As you read this, AI is helping Israel annihilate Palestinians by expanding the list of possible targets beyond anything that could be compiled by a human intelligence effort, according to +972 Magazine. Some Israel Defense Forces insiders are even sounding the alarm, calling it a dangerous “mass assassination program.” The world has not yet grappled with the implications of the proliferation of AI weaponry, but that is the trajectory we are on. It’s clear that absent sufficient backlash, the tech industry will continue to push for military contracts. It’s equally clear that neither national governments nor the UN is currently willing to take a stand. 

It will take a movement. A document that clearly demonstrates Silicon Valley’s direct complicity in the assault on Gaza could be the spark. Until then, rest assured that tech companies will continue to make as much money as possible developing the deadliest weapons imaginable. 

William Fitzgerald is a founder and partner at the Worker Agency, an advocacy agency in California. Before setting the firm up in 2018, he spent a decade at Google working on its government relations and communications teams.

The Download: mapping the human brain, and a Hong Kong protest anthem crackdown

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Google helped make an exquisitely detailed map of a tiny piece of the human brain

The news: A team led by scientists from Harvard and Google has created a 3D, nanoscale-resolution map of a single cubic millimeter of the human brain. Although the map covers just a fraction of the organ, it is currently the highest-resolution picture of the human brain ever created.

How they did it: To make a map this finely detailed, the team had to cut the tissue sample into 5,000 slices and scan them with a high-speed electron microscope. Then they used a machine-learning model to help electronically stitch the slices back together and label the features.

Why it matters: Many other brain atlases exist, but most provide much lower-resolution data. At the nanoscale, researchers can trace the brain’s wiring one neuron at a time to the synapses, the places where they connect. And scientists hope it could help them to really understand how the human brain works, processes information, and stores memories. Read the full story.

—Cassandra Willyard

To learn more about the burgeoning field of brain mapping, check out the latest edition of The Checkup, our weekly biotech newsletter. Sign up to receive it in your inbox every Thursday.

Hong Kong is targeting Western Big Tech companies in its ban of a popular protest song

It wasn’t exactly surprising when on Wednesday, May 8, a Hong Kong appeals court sided with the city government to take down “Glory to Hong Kong” from the internet.

The trial, in which no one represented the defense, was the culmination of a years-long battle over a song that has become the unofficial anthem for protesters fighting China’s tightening control and police brutality in the city.

It remains an open question how exactly Big Tech will respond. But the ruling is already having an effect beyond Hong Kong’s borders: just hours afterwards, videos of the anthem started to disappear from YouTube. Read the full story.

—Zeyi Yang

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 OpenAI is poised to release its Google search competitor
And it could make an appearance as early as Monday. (Reuters)
+ Why you shouldn’t trust AI search engines. (MIT Technology Review)

2 America’s healthcare system is highly vulnerable to hacks
A recent cyberattack that knocked hospital patient records offline is the latest example. (WP $)

3 TikTok will start automatically labeling AI-generated user content
It’s a global first for social media platforms. (FT $)
+ The watermarking scheme will work on content created on other platforms. (The Guardian)
+ Why watermarking AI-generated content won’t guarantee trust online. (MIT Technology Review)

4 Bankrupt FTX is confident it can repay the full $11 billion it owes
Thanks in part to bitcoin’s perpetual boom-bust cycle. (The Guardian)
+ Sam Bankman-Fried’s newest currency? Rice. (Insider $)

5 What is Alabama’s lab-grown meat ban really about?
It’s less about plants and more about political agendas. (Wired $)
+ They’re banning something that doesn’t really exist. (Vox)
+ How I learned to stop worrying and love fake meat. (MIT Technology Review)

6 The future of work is offshore
Even cashiers can be based thousands of miles from their customers. (Vox)
+ ChatGPT is about to revolutionize the economy. We need to decide what that looks like. (MIT Technology Review)

7 US data centers are facing a tax break backlash
In reality, they create fewer jobs than lobbyists would have you believe. (Bloomberg $)
+ Energy-hungry data centers are quietly moving into cities. (MIT Technology Review)

8 Mexico’s political candidates are misreading the room
They’re dancing on TikTok instead of making serious policy declarations. (Rest of World)
+ Three technology trends shaping 2024’s elections. (MIT Technology Review)

9 AI could help you to make that tight connecting flight ✈️
The days of missing a connection by minutes could be numbered. (NYT $)

10 These AR glasses look… interesting 👓
Lighter, thinner, higher quality—but even dorkier. (The Verge)
+ They don’t induce headaches, either. (IEEE Spectrum)

Quote of the day

“It’s like a kick in the gut.”

—Duncan Freer, a seller on Amazon, describes to Bloomberg his reaction to the retail giant imposing new charges that shift even more costs onto merchants.

The big story

How tracking animal movement may save the planet

February 2024

Animals have long been able to offer unique insights about the natural world around us, acting as organic sensors picking up phenomena invisible to humans. Canaries warned of looming catastrophe in coal mines until the 1980s, for example.

These days, we have more insight into animal behavior than ever before thanks to technologies like sensor tags. But the data we gather from these animals still adds up to only a relatively narrow slice of the whole picture. 

This is beginning to change. Researchers are asking: What will we find if we follow even the smallest animals? What could we learn from a system of animal movement, continuously monitoring how creatures big and small adapt to the world around us? It may be, some researchers believe, a vital tool in the effort to save our increasingly crisis-plagued planet. Read the full story.

—Matthew Ponsford 

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Big congratulations to the ocean’s zooplankton and phytoplankton, who are currently experiencing a springtime baby boom.
+ Homemade seafood stock may sound like a faff, but it’s easier than you think.
+ Coming out of my cage and I’ve been doing just fine—how the UK became utterly, eternally obsessed with Mr Brightside.
+ Ducks love peas, who knew?

The burgeoning field of brain mapping

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

The human brain is an engineering marvel: 86 billion neurons form some 100 trillion connections to create a network so complex that it is, ironically, mind-boggling.

This week scientists published the highest-resolution map yet of one small piece of the brain, a tissue sample one cubic millimeter in size. The resulting data set comprised 1,400 terabytes. (If they were to reconstruct the entire human brain, the data set would be a full zettabyte. That’s a billion terabytes. That’s roughly a year’s worth of all the digital content in the world.)
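As a rough check on that extrapolation: a human brain is on the order of 1.2 million cubic millimeters (about 1.2 liters), a volume assumed here for illustration, and scaling the sample’s data footprint accordingly does land in zettabyte territory:

```python
# Scale the sample's 1,400 TB per cubic millimeter to a whole brain.
# The ~1.2-liter brain volume is an assumed, order-of-magnitude figure.
tb_per_mm3 = 1400   # data from the one-cubic-millimeter sample
brain_mm3 = 1.2e6   # ~1.2 liters expressed in cubic millimeters

total_tb = tb_per_mm3 * brain_mm3
print(f"~{total_tb:.1e} TB, ~{total_tb / 1e9:.1f} zettabytes")
# -> ~1.7e+09 TB: on the order of a zettabyte (a billion terabytes)
```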

This map is just one of many that have been in the news in recent years. (I wrote about another brain map last year.) So this week I thought we could walk through some of the ways researchers make these maps and how they hope to use them.  

Scientists have been trying to map the brain for as long as they’ve been studying it. One of the most well-known brain maps came from German anatomist Korbinian Brodmann. In the early 1900s, he took sections of the brain that had been stained to highlight their structure and drew maps by hand, with 52 different areas divided according to how the neurons were organized. “He conjectured that they must do different things because the structure of their staining patterns are different,” says Michael Hawrylycz, a computational neuroscientist at the Allen Institute for Brain Science. Updated versions of his maps are still used today.

“With modern technology, we’ve been able to bring a lot more power to the construction,” he says. And over the past couple of decades we’ve seen an explosion of large, richly funded mapping efforts.

BigBrain, which was released in 2013, is a 3D rendering of the brain of a single donor, a 65-year-old woman. To create the atlas, researchers sliced the brain into more than 7,000 sections, took detailed images of each one, and stitched the sections into a three-dimensional reconstruction.

In the Human Connectome Project, researchers scanned 1,200 volunteers in MRI machines to map structural and functional connections in the brain. “They were able to map out what regions were activated in the brain at different times under different activities,” Hawrylycz says.

This kind of noninvasive imaging can provide valuable data, but “its resolution is extremely coarse,” he adds. “Voxels [think: a 3D pixel] are of the size of a millimeter to three millimeters.”

And there are other projects too. The Synchrotron for Neuroscience—an Asia Pacific Strategic Enterprise, a.k.a. SYNAPSE—aims to map the connections of an entire human brain at a very fine-grained resolution using synchrotron x-ray microscopy. The EBRAINS human brain atlas contains information on anatomy, connectivity, and function.

The work I wrote about last year is part of the $3 billion federally funded Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative, which launched in 2013. In this project, led by the Allen Institute for Brain Science, which has developed a number of brain atlases, researchers are working to develop a parts list detailing the vast array of cells in the human brain by sequencing single cells to look at gene expression. So far they’ve identified more than 3,000 types of brain cells, and they expect to find many more as they map more of the brain.

The draft map was based on brain tissue from just two donors. In the coming years, the team will add samples from hundreds more.

Mapping the cell types present in the brain seems like a straightforward task, but it’s not. The first stumbling block is deciding how to define a cell type. Seth Ament, a neuroscientist at the University of Maryland, likes to give his neuroscience graduate students a rundown of all the different ways brain cells can be defined: by their morphology, or by the way the cells fire, or by their activity during certain behaviors. But gene expression may be the Rosetta stone brain researchers have been looking for, he says: “If you look at cells from the perspective of just what genes are turned on in them, it corresponds almost one to one to all of those other kinds of properties of cells.” That’s the most remarkable discovery from all the cell atlases, he adds.

I have always assumed the point of all these atlases is to gain a better understanding of the brain. But Jeff Lichtman, a neuroscientist at Harvard University, doesn’t think “understanding” is the right word. He likens trying to understand the human brain to trying to understand New York City. It’s impossible. “There’s millions of things going on simultaneously, and everything is working, interacting, in different ways,” he says. “It’s too complicated.”

But as this latest paper shows, it is possible to describe the human brain in excruciating detail. “Having a satisfactory description means simply that if I look at a brain, I’m no longer surprised,” Lichtman says. That day is a long way off, though. The data Lichtman and his colleagues published this week was full of surprises—and many more are waiting to be uncovered.


Now read the rest of The Checkup

Another thing

The revolutionary AI tool AlphaFold, which predicts proteins’ structures on the basis of their genetic sequence, just got an upgrade, James O’Donnell reports. Now the tool can predict interactions between molecules. 

Read more from Tech Review’s archive

In 2013, Courtney Humphries reported on the development of BigBrain, a human brain atlas based on detailed images of more than 7,000 brain slices. 

And in 2017, we flagged the Human Cell Atlas project, which aims to categorize all the cells of the human body, as a breakthrough technology. That project is still underway.

All these big, costly efforts to map the brain haven’t exactly led to a breakthrough in our understanding of its function, writes Emily Mullin in this story from 2021.  

From around the web

The Apple Watch’s atrial fibrillation (AFib) feature received FDA qualification for tracking heart arrhythmias in clinical trials, making it the first digital health product to be qualified under the agency’s Medical Device Development Tools program. (Stat)

A CRISPR gene therapy improved vision in several people with an inherited form of blindness, according to an interim analysis of a small clinical trial to test the therapy. (CNN)

Long read: The covid vaccine, like all vaccines, can cause side effects. But many people who say they have been harmed by the vaccine feel that their injuries are being ignored.  (NYT)

Hong Kong is targeting Western Big Tech companies in its ban of a popular protest song

It wasn’t exactly surprising when on Wednesday, May 8, a Hong Kong appeals court sided with the city government to take down “Glory to Hong Kong” from the internet. The trial, in which no one represented the defense, was the culmination of a years-long battle over a song that has become the unofficial anthem for protesters fighting China’s tightening control and police brutality in the city. But it remains an open question how exactly Big Tech will respond. Even though the injunction is narrowly designed to make it easier for these Western companies to comply, they may be seen as aiding authoritarian control and obstructing internet freedom if they do so.  

Google, Apple, Meta, Spotify, and others have spent the last several years largely refusing to cooperate with previous efforts by the Hong Kong government to prevent the spread of the song, which the government has claimed is a threat to national security. But the government has also hesitated to leverage criminal law to force them to comply with requests for removal of content, which could risk international uproar and hurt the city’s economy. 

Now, the new ruling seemingly finds a third option: imposing a civil injunction that doesn’t invoke criminal prosecution, which is similar to how copyright violations are enforced. Theoretically, the platforms may face less reputational blowback when they comply with this court order.

“If you look closely at the judgment, it’s basically tailor-made for the tech companies at stake,” says Chung Ching Kwong, a senior analyst at the Inter-Parliamentary Alliance on China, an advocacy organization that connects legislators from over 30 countries working on relations with China. She believes the language in the judgment suggests the tech companies will now be ready to comply with the government’s request.

A Google spokesperson said the company is reviewing the court’s judgment and didn’t respond to specific questions sent by MIT Technology Review. A Meta spokesperson pointed to a statement from Jeff Paine, the managing director of the Asia Internet Coalition, a trade group representing many tech companies in the Asia-Pacific region: “[The AIC] is assessing the implications of the decision made today, including how the injunction will be implemented, to determine its impact on businesses. We believe that a free and open internet is fundamental to the city’s ambitions to become an international technology and innovation hub.” The AIC did not immediately reply to questions sent via email. Apple and Spotify didn’t immediately respond to requests for comment.

But no matter what these companies do next, the ruling is already having an effect. Just over 24 hours after the court order, some of the 32 YouTube videos that are explicitly targeted in the injunction were inaccessible for users worldwide, not just in Hong Kong. 

While it’s unclear whether the videos were removed by the platform or by their creators, experts say the court decision will almost certainly set a precedent for more content to be censored from Hong Kong’s internet in the future.

“Censorship of the song would be a clear violation of internet freedom and freedom of expression,” says Yaqiu Wang, the research director for China, Hong Kong, and Taiwan at Freedom House, a human rights advocacy group. “Google and other internet companies should use all available channels to challenge the decision.” 

Erasing a song from the internet

Since “Glory to Hong Kong” was first uploaded to YouTube in August 2019 by an anonymous group called Dgx Music, it’s been adored by protesters and applauded as their anthem. Its popularity only grew after China passed the harsh Hong Kong national security law in 2020.

With lyrics like “Liberate Hong Kong, revolution of our times,” it’s no surprise that it became a major flash point. The city and national Chinese governments were wary of its spread. 

Their fears escalated when the song was repeatedly mistaken for China’s national anthem at international events and was broadcast at sporting events after Hong Kong athletes won. By mid-2023 the mistake, intentional or not, had happened 887 times, according to the Hong Kong government’s request for the content’s removal, which cites YouTube videos and Google search results that refer to the song as the “Hong Kong National Anthem.” 

The government has been arresting people for performing the song on the ground in Hong Kong, but it has been harder to prosecute the online activity since most of the videos and music were uploaded anonymously, and Hong Kong, unlike mainland China, has historically had a free internet. This meant officials needed to explore new approaches to content removal. 

To comply or not to comply

Using the controversial 2020 national security law as legal justification to make requests for removal of certain content that it deems threatening, the Hong Kong government has been able to exert pressure on local companies, like internet service providers. “In Hong Kong, all the major internet service providers are locally owned or Chinese-owned. For business reasons, probably within the last 20 years, most of the foreign investors like Verizon left on their own,” says Charles Mok, a researcher at Stanford University’s Cyber Policy Center and a former legislator in Hong Kong. “So right now, the government is focusing on telling the customer-facing internet service providers to do the blocking.” And it seems to have been somewhat effective, with a few websites for human rights organizations becoming inaccessible locally.

But the city government can’t get its way as easily when the content is on foreign-owned platforms like YouTube or Facebook. Back in 2020, most major Western companies declared they would pause processing data requests from the Hong Kong government while they assessed the law. Over time, some of them have started answering government requests again. But they’ve largely remained firm: over the first six months of 2023, for example, Meta received 41 requests from the Hong Kong government to obtain user data and answered none; during the same period, Google received requests to remove 164 items from Google services and ended up removing 82 of them, according to both companies’ transparency reports. Google specifically mentioned that it chose to not remove two YouTube videos and one Google Drive file related to “Glory to Hong Kong.”

Both sides are in tight spots. Tech companies don’t want to lose the Hong Kong market or endanger their local staff, but they are also worried about being seen as complying with authoritarian government actions. And the Hong Kong government doesn’t want to be seen as openly fighting Western platforms while trust in the region’s financial markets is already in decline. In particular, officials fear international headlines if the government invokes criminal law to force tech companies to remove certain content. 

“I think both sides are navigating this balancing act. So the government finally figured out a way that they thought might be able to solve the impasse: by going to the court and narrowly seeking an injunction,” Mok says.

That happened in June 2023, when Hong Kong’s government requested a court injunction to ban the distribution of the song online with the purpose of “inciting others to commit secession.” It named 32 YouTube videos explicitly, including the original version and live performances, translations into other languages, instrumental and opera versions, and an interview with the original creators. But the order would also cover “any adaptation of the song, the melody and/or lyrics of which are substantially the same as the song,” according to court documents. 

The injunction went through a year of back-and-forth hearings, including a lower-court ruling that briefly struck down the ban. But now the Court of Appeal has granted the government’s request. The case can theoretically be appealed one last time, but with no defendants present, that’s unlikely to happen.

The key difference between this action and previous attempts to remove content is that this is a civil injunction, not a criminal prosecution—meaning it is, at least legally speaking, closer to a copyright takedown request. A platform could arguably be less likely to take a reputational hit if it removes the content upon request. 

Kwong believes this will indeed make platforms more likely to cooperate, and there have already been pretty clear signs to that effect. In one hearing in December, the government was asked by the court to consult online platforms as to the feasibility of the injunction. The final judgment this week says that while the platforms “have not taken part in these proceedings, they have indicated that they are ready to accede to the Government’s request if there is a court order.”

“The actual targets in this case, mainly the tech giants, may have less hesitation to comply with a civil court order than a national security order because if it’s the latter, they may also face backfire from the US,” says Eric Yan-Ho Lai, a research fellow at Georgetown Center for Asian Law. 

Lai also says that now that the injunction has been granted, prosecuting an individual for violating it will be easier than pursuing criminal charges, since the government won’t need to prove criminal intent.

The chilling effect

Immediately after the injunction, human rights advocates called on tech companies to remain committed to their values. “Companies like Google and Apple have repeatedly claimed that they stand by the universal right to freedom of expression. They should put their ideals into practice,” says Freedom House’s Wang. “Google and other tech companies should thoroughly document government demands, and publish detailed transparency reports on content takedowns, both for those initiated by the authorities and those done by the companies themselves.”

Tech companies haven’t made their plans clear, so it’s too early to know just how they will react. But right after the injunction was granted, the song largely remained available to Hong Kong users on most platforms, including YouTube, iTunes, and Spotify, according to the South China Morning Post. On iTunes, the song even returned to the top of the download rankings a few hours after the injunction.

One key factor that may still determine corporate cooperation is how far the content removal requests go. More videos of the song will surely be uploaded to YouTube, not to mention the independent websites that host the videos and music for more people to access. Will the government go after each of those too?

The Hong Kong government has previously said in court hearings that it seeks only local restriction of the online content, meaning content will be inaccessible only to users physically in the city. Large platforms like YouTube can do that without difficulty. 
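
For a rough sense of how that kind of geofencing works, here is a minimal sketch in Python. Everything in it is hypothetical: real platforms resolve a viewer’s region from their IP address using commercial geolocation databases, typically at edge servers, and the identifiers below are invented for illustration.

```python
# Toy sketch of region-based content blocking. All names are hypothetical;
# real platforms resolve the viewer's region from their IP address using
# commercial geolocation databases, typically at edge servers.
BLOCKED_IN_REGION = {"HK": {"glory_to_hong_kong_v1"}}  # region -> blocked video IDs

def can_serve(video_id: str, viewer_region: str) -> bool:
    """Return True unless the video is on the block list for the viewer's region."""
    return video_id not in BLOCKED_IN_REGION.get(viewer_region, set())

print(can_serve("glory_to_hong_kong_v1", "HK"))  # False: blocked locally
print(can_serve("glory_to_hong_kong_v1", "US"))  # True: visible elsewhere
```

Because the check keys off the viewer’s apparent location, routing traffic through a server outside the region changes the answer.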

Theoretically, this allows local residents to circumvent the ban by using VPN software, but not everyone is technologically savvy enough to do so. And that wouldn’t do much to minimize the larger chilling effect on free speech, says Kwong from the Inter-Parliamentary Alliance on China. 

“As a Hong Konger living abroad, I do rely on Hong Kong services or international services based in Hong Kong to get ahold of what’s happening in the city. I do use YouTube Hong Kong to see certain things, and I do use Spotify Hong Kong or Apple Music because I want access to Cantopop,” she says. “At the same time, you worry about what you can share with friends in Hong Kong and whatnot. We don’t want to put them into trouble by sharing things that they are not supposed to see, which they should be able to see.”

The court made at least two explicit exemptions to the song’s ban, for “lawful activities conducted in connection with the song, such as those for the purpose of academic activity and news activity.” But even these exemptions could prove complex and confusing to apply in practice. “In the current political context in Hong Kong, I don’t see anyone willing to take the risk,” Kwong says. 

The government has already arrested prominent journalists on accusations of endangering national security, and a new law passed in 2024 has expanded the crimes that can be prosecuted on national security grounds. As with all efforts to suppress free speech, the impact of vague boundaries that encourage self-censorship on potentially sensitive topics is often sprawling and hard to measure. 

“Nobody knows where the actual red line is,” Kwong says.

Google helped make an exquisitely detailed map of a tiny piece of the human brain

A team led by scientists from Harvard and Google has created a 3D, nanoscale-resolution map of a single cubic millimeter of the human brain. Although the map covers just a fraction of the organ—a whole brain is a million times larger—that piece contains roughly 57,000 cells, about 230 millimeters of blood vessels, and nearly 150 million synapses. It is currently the highest-resolution picture of the human brain ever created.

To make a map this finely detailed, the team had to cut the tissue sample into 5,000 slices and scan them with a high-speed electron microscope. Then they used a machine-learning model to help electronically stitch the slices back together and label the features. The raw data set alone took up 1.4 petabytes. “It’s probably the most computer-intensive work in all of neuroscience,” says Michael Hawrylycz, a computational neuroscientist at the Allen Institute for Brain Science, who was not involved in the research. “There is a Herculean amount of work involved.”
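
For a sense of scale, a quick back-of-envelope calculation using only the figures above (1.4 petabytes for one cubic millimeter, and a whole brain roughly a million times larger) suggests why a whole-brain map at this resolution remains out of reach:

```python
# Back-of-envelope scale, using only the figures quoted in the article.
sample_bytes = 1.4e15                          # 1.4 petabytes for one cubic millimeter
whole_brain_bytes = sample_bytes * 1_000_000   # whole brain is ~1e6 times the volume
print(f"{whole_brain_bytes / 1e21:.1f} zettabytes")  # ~1.4 ZB for a whole brain
```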

Many other brain atlases exist, but most provide much lower-resolution data. At the nanoscale, researchers can trace the brain’s wiring one neuron at a time to the synapses, the places where they connect. “To really understand how the human brain works, how it processes information, how it stores memories, we will ultimately need a map that’s at that resolution,” says Viren Jain, a senior research scientist at Google and coauthor on the paper, published in Science on May 9. The data set itself and a preprint version of this paper were released in 2021.

Brain atlases come in many forms. Some reveal how the cells are organized. Others cover gene expression. This one focuses on connections between cells, a field called “connectomics.” The outermost layer of the brain contains roughly 16 billion neurons that link up with each other to form trillions of connections. A single neuron might receive information from hundreds or even thousands of other neurons and send information to a similar number. That makes tracing these connections an exceedingly complex task, even in just a small piece of the brain.

To create this map, the team faced a number of hurdles. The first problem was finding a sample of brain tissue. The brain deteriorates quickly after death, so cadaver tissue doesn’t work. Instead, the team used a piece of tissue removed from a woman with epilepsy during brain surgery that was meant to help control her seizures.

Once the researchers had the sample, they had to carefully preserve it in resin so that it could be cut into slices, each about a thousandth the thickness of a human hair. Then they imaged the sections using a high-speed electron microscope designed specifically for this project. 

Next came the computational challenge. “You have all of these wires traversing everywhere in three dimensions, making all kinds of different connections,” Jain says. The team at Google used a machine-learning model to stitch the slices back together, align each one with the next, color-code the wiring, and find the connections. This is harder than it might seem. “If you make a single mistake, then all of the connections attached to that wire are now incorrect,” Jain says. 
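
To give a flavor of the alignment step, here is a minimal sketch of classical phase correlation, which estimates the translation between two adjacent slices. This is an illustration only: the actual pipeline relied on Google’s machine-learning models at petabyte scale, not this simple technique.

```python
import numpy as np

def estimate_shift(slice_a: np.ndarray, slice_b: np.ndarray) -> tuple:
    """Estimate the (row, col) translation aligning slice_b to slice_a
    via phase correlation in the Fourier domain."""
    fa, fb = np.fft.fft2(slice_a), np.fft.fft2(slice_b)
    cross_power = fa * np.conj(fb)
    cross_power /= np.abs(cross_power) + 1e-9   # keep only phase information
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Peaks past the halfway point correspond to negative shifts (wraparound).
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, correlation.shape))
```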

“The ability to get this deep a reconstruction of any human brain sample is an important advance,” says Seth Ament, a neuroscientist at the University of Maryland. The map is “the closest to the ground truth that we can get right now.” But he also cautions that it’s a single brain specimen taken from a single individual. 

The map, which is freely available at a web platform called Neuroglancer, is meant to be a resource other researchers can use to make their own discoveries. “Now anybody who’s interested in studying the human cortex in this level of detail can go into the data themselves. They can proofread certain structures to make sure everything is correct, and then publish their own findings,” Jain says. (The preprint has already been cited at least 136 times.) 

The team has already identified some surprises. For example, some of the long tendrils that carry signals from one neuron to the next formed “whorls,” spots where they twirled around themselves. Axons typically form a single synapse to transmit information to the next cell. The team identified single axons that formed repeated connections—in some cases, 50 separate synapses. Why that might be isn’t yet clear, but the strong bonds could help facilitate very quick or strong reactions to certain stimuli, Jain says. “It’s a very simple finding about the organization of the human cortex,” he says. But “we didn’t know this before because we didn’t have maps at this resolution.”

The data set was full of surprises, says Jeff Lichtman, a neuroscientist at Harvard University who helped lead the research. “There were just so many things in it that were incompatible with what you would read in a textbook.” The researchers may not have explanations for what they’re seeing, but they have plenty of new questions: “That’s the way science moves forward.” 

Correction: Due to a transcription error, a quote from Viren Jain referred to how the brain “exports” memories. It has been updated to reflect that he was speaking of how the brain “stores” memories.

The Download: AI accelerating scientific discovery, and Tesla’s EV charging meltdown

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Google DeepMind’s new AlphaFold can model a much larger slice of biological life

What’s new: Google DeepMind has released an improved version of its biology prediction tool, AlphaFold, that can predict the structures not only of proteins but of nearly all the elements of biological life.

How they did it: AlphaFold 3’s larger library of molecules and higher level of complexity required improvements to the underlying model architecture. So DeepMind turned to diffusion techniques, which have been steadily improving in recent years and now power image and video generators. Diffusion works by training a model to start from pure noise and remove it bit by bit until an accurate prediction emerges, a method that allows AlphaFold 3 to handle a much larger set of inputs.
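
To illustrate that idea (and not DeepMind’s actual sampler; the model, noise schedule, and update rule below are simplified placeholders), the core loop of a diffusion sampler looks roughly like this:

```python
import torch

@torch.no_grad()
def sample(model: torch.nn.Module, shape: tuple, steps: int = 50) -> torch.Tensor:
    """Toy diffusion sampling loop: start from pure noise, denoise step by step."""
    x = torch.randn(shape)                            # begin with random noise
    for t in reversed(range(steps)):
        noise_estimate = model(x, torch.tensor([t]))  # network predicts the noise at step t
        x = x - noise_estimate / steps                # strip away a small slice of that noise
    return x                                          # after all steps, x is the prediction
```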

Why it matters: It’s a development that could help accelerate drug discovery and other scientific research. And the tool is already being used to experiment with identifying everything from more resilient crops to new vaccines. Read the full story.

—James O’Donnell

Why EV charging needs more than Tesla

Tesla, one of the biggest electric vehicle makers in the world, laid off its entire charging team last week. 

The timing of the move is baffling. We desperately need many more EV chargers to come online as quickly as possible, and Tesla was in the midst of opening its charging network to other automakers and establishing its technology as the de facto standard in the US. Already, new charging sites are being canceled because of the move.

Casey Crownhart, our climate reporter, has dug into why the charging meltdown at Tesla could slow progress on EVs in the US overall, and why the whole situation shows that climate technology needs a whole lot more than Tesla. Read the full story.

This story is from The Spark, our weekly climate and energy newsletter. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The first Neuralink implant in a human has run into difficulty
A number of threads in Noland Arbaugh’s brain came out, interrupting the data flow. (WSJ $)
+ Meet the other companies developing brain-computer interfaces. (MIT Technology Review)

2 A British toddler has had her hearing restored
Opal Sandy, who was born deaf, can now hear unaided following gene therapy treatment. (BBC)
+ Some deaf children in China can hear after gene therapy treatment. (MIT Technology Review)

3 Is America ready for its next nuclear age?
Holtec, a nuclear waste storage manufacturer, is set on powering new reactors. (Bloomberg $)
+ Advanced fusion reactors could create nuclear weapons in weeks. (New Scientist $)
+ How to reopen a nuclear power plant. (MIT Technology Review)

4 TikTok employees are worried about their future prospects
Advertisers and creators are starting to ask questions, but nobody has the answers. (The Information $)

5 The US has unmasked a notorious Russian hacker
But he’s unlikely to be brought to justice any time soon. (Bloomberg $)

6 Baidu has reignited criticism of China’s toxic tech work culture
After its head of PR told staff she could ruin their careers. (FT $)
+ WhatsApp has started mysteriously working for some users in China. (Bloomberg $)

7 The US Marines have equipped robot dogs with gun systems
What could possibly go wrong? (Ars Technica)
+ Inside the messy ethics of making war with machines. (MIT Technology Review)

8 Inside the rise and rise of the sexualized web
The relentless nudification of everything is exhausting. (The Atlantic $)
+ OpenAI is looking into creating responsible AI porn. (Wired $)
+ The viral AI avatar app Lensa undressed me—without my consent. (MIT Technology Review)

9 An always-on video portal is connecting NYC and Dublin
It’s just a matter of time until someone ends up offended. (TechCrunch)

10 This lyrics site buckled as fans rushed to document rap beef
Enthusiastic volunteers desperate to dissect Kendrick Lamar’s latest lyrics caused Genius to crash temporarily. (NYT $)
+ Lamar’s feud with rapper Drake has transcended music. (The Atlantic $)
+ If you have no idea what’s going on, check out this potted history. (NY Mag $)

Quote of the day

“By the end of the second day, you’re like: Trust no one.” 

—Dana Lewis, an election worker in Arizona, describing to the Washington Post the unsettling claims she has dealt with during an AI training exercise designed to help spot electoral fraud.

The big story

The future of open source is still very much in flux

August 2023

When Xerox donated a new laser printer to MIT in 1980, the company couldn’t have known that the machine would ignite a revolution.

While the early decades of software development generally ran on a culture of open access, this new printer ran on inaccessible proprietary software, much to the horror of Richard M. Stallman, then a 27-year-old programmer at the university.

A few years later, Stallman released GNU, an operating system designed to be a free alternative to one of the dominant operating systems at the time: Unix. The free-software movement was born, with a simple premise: for the good of the world, all code should be open, without restriction or commercial intervention.

Forty years later, tech companies are making billions on proprietary software, and much of the technology around us is inscrutable. But while Stallman’s movement may look like a failed experiment, the free and open-source software movement is not only alive and well; it has become a keystone of the tech industry. Read the full story.

—Rebecca Ackermann

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ It’s the Eurovision Song Contest this weekend: come on the UK!
+ Thank you for the music, Steve Albini. Legendary producer, remarkable poker player.
+ On a deadline? Let this inspirational playlist soothe your nerves.
+ It’s like Kontrabant 2 never went away.